GenAI, from your local machine to AWS with LangChain.js
Empowering JavaScript/TypeScript Developers with GenAI
Published Aug 9, 2024
Last Modified Aug 13, 2024
As Generative AI (GenAI) continues to revolutionize application development, it's crucial to recognize that this technology isn't exclusive to a specific programming language. TypeScript and JavaScript developers can also harness the power of GenAI to create innovative solutions. This article introduces a sample application that serves as an excellent starting point for TypeScript/JavaScript developers looking to experiment with advanced GenAI features.
Our sample application, DeepF1, is a Serverless GenAI chat application that demonstrates Retrieval-Augmented Generation (RAG) and Agentic AI capabilities. Built for a fictitious Formula 1 company, DeepF1 allows race engineers to gain insights about driver performance metrics and Formula 1 Championship data.
Key components of the application include:
- A web app built with Vite + React and hosted on AWS Amplify
- A backend leveraging Amazon Bedrock features and AWS Lambda
- A vector database using Amazon OpenSearch Serverless
- Document storage using Amazon S3
Notable features:
- Serverless architecture for scalability and cost-effectiveness
- RAG + Agentic approach for accurate and relevant responses
- Local development support using Ollama
To use this sample, you need a few tools installed locally: Git, Node.js (with npm), and Ollama. An AWS account with configured credentials is only required for the Cloud deployment covered later.
One of the strengths of this sample is the ability to experiment locally without incurring Cloud costs. Here's how to get started:
- Open your terminal and clone the DeepF1 sample repository:
git clone https://github.com/ajohn-wick/deepf1-ai-engineer
- Go to the `langchain-poc` folder and `npm install` dependencies
- Download the necessary models on your machine using Ollama:
- Go to the `src` subfolder and run the local scripts to test basic functionality using LangChain.js:
LangChain.js is a JavaScript framework that provides high-level APIs to interact with AI models and providers, as well as many built-in tools that ease the development of complex AI applications.
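As an illustration, a minimal version of such a script could look like the following (the `@langchain/ollama` package, the prompt, and the `llama3` model name are assumptions; the repository code may differ):

```javascript
// Sketch of a local model invocation with LangChain.js and Ollama
import { ChatOllama } from "@langchain/ollama";

// Assumes Ollama is running locally and `llama3` has been pulled
const model = new ChatOllama({
  baseUrl: "http://localhost:11434", // default Ollama endpoint
  model: "llama3",
});

const response = await model.invoke(
  "Which constructor won the 2021 Formula 1 Championship?"
);
console.log(response.content);
```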
Run the code with the command `node 01_local_invoke_model.js`. The response from the model will be displayed in your console!

If you want to use the RAG technique to provide domain-specific knowledge (Formula 1 in our case) to the model, update your program with the following code:
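As a sketch of what such a RAG program can look like (the file name, chunk sizes, in-memory vector store, and `nomic-embed-text` embedding model are all assumptions; see the repository for the actual code):

```javascript
// Sketch: RAG over local CSV files with LangChain.js and Ollama
import { ChatOllama, OllamaEmbeddings } from "@langchain/ollama";
import { CSVLoader } from "@langchain/community/document_loaders/fs/csv";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

// 1. Load a CSV document (hypothetical file name) and split it into chunks
const docs = await new CSVLoader("../data/driver_metrics.csv").load();
const chunks = await new RecursiveCharacterTextSplitter({
  chunkSize: 500,
  chunkOverlap: 50,
}).splitDocuments(docs);

// 2. Convert the chunks to vectors in an in-memory store
// (assumes `ollama pull nomic-embed-text` was run beforehand)
const vectorStore = await MemoryVectorStore.fromDocuments(
  chunks,
  new OllamaEmbeddings({ model: "nomic-embed-text" })
);

// 3. Multi-step chain: vector search -> prompt -> model -> plain text
const prompt = ChatPromptTemplate.fromTemplate(
  "Answer using only this context:\n{context}\n\nQuestion: {question}"
);
const model = new ChatOllama({ model: "llama3" });
const chain = prompt.pipe(model).pipe(new StringOutputParser());

const question = "Which driver had the best average lap time?";
const bestResults = await vectorStore.similaritySearch(question, 4);
const answer = await chain.invoke({
  context: bestResults.map((d) => d.pageContent).join("\n"),
  question,
});
console.log(answer);
```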
This code will load CSV documents containing driver performance metrics and Formula 1 Championship data, split them into smaller chunks, convert the chunks to vectors, and then use them in a multi-step workflow (chain) to perform a vector search and generate a response from the best results.
This code shows how LangChain.js can help you build more advanced AI scenarios in a few lines of code.
That's great, but how do I combine LangChain.js and AWS?
Thankfully, there is a `@langchain/aws` package providing Bedrock interfaces that let us reuse the same JavaScript code we produced. Here are the minor changes we have to implement:

- Ollama => Foundation Model available within Amazon Bedrock
- Local RAG => Amazon Bedrock Knowledge Base
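With those two changes, the Bedrock variant of our program could look roughly like this (the model ID, region, and Knowledge Base ID are placeholders, not values from the sample):

```javascript
// Sketch: same logic, but backed by Amazon Bedrock instead of Ollama
import { ChatBedrockConverse, AmazonKnowledgeBaseRetriever } from "@langchain/aws";

// Ollama => a Foundation Model available within Amazon Bedrock
const model = new ChatBedrockConverse({
  model: "meta.llama3-70b-instruct-v1:0", // placeholder model ID
  region: "us-east-1",                    // placeholder region
});

// Local RAG => an Amazon Bedrock Knowledge Base
const retriever = new AmazonKnowledgeBaseRetriever({
  knowledgeBaseId: "XXXXXXXXXX", // placeholder, created by the Backend deployment
  topK: 4,
  region: "us-east-1",
});

const question = "Which driver had the best average lap time?";
const docs = await retriever.invoke(question);
const context = docs.map((d) => d.pageContent).join("\n");

const response = await model.invoke(
  `Answer using only this context:\n${context}\n\nQuestion: ${question}`
);
console.log(response.content);
```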
To run that code successfully, you will first have to deploy the Backend, as it relies on an Amazon Bedrock Knowledge Base instead of local data.
By starting locally, developers can quickly iterate and experiment with GenAI concepts before moving to a Cloud deployment.
When ready to scale, the sample provides a straightforward path to AWS.
Make sure to use one of the AWS regions where all the AWS services used by this sample are available.
To deploy our AWS Backend resources, we are relying on the AWS Cloud Development Kit (CDK):
- If not already done, open your terminal and clone the DeepF1 sample repository:
git clone https://github.com/ajohn-wick/deepf1-ai-engineer
- Go to the `infra` folder, where our CDK project is stored, and `npm install` dependencies
- Execute the following bash script:

./cdk-deploy-to.sh [AWS_ACCOUNT_ID] [AWS_REGION] [AWS_PROFILE_NAME](optional)
This will provision all the AWS backend resources (Amazon Bedrock, Amazon S3, AWS Lambda) and build the search index (Amazon OpenSearch Serverless) from the files found in the `./data` subfolder.

If you are not familiar with the Infrastructure as Code concept, do not hesitate to dig into the `/infra/lib/genai-stack.ts` file. This is where we define our Cloud application resources using a familiar programming language (TypeScript in our case). As an example, this file contains the definition of our AWS Lambda function, along with our LangChain.js code (developed during the Getting Started with Local Development section), used to interact with the Amazon Bedrock Knowledge Base.
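To give an idea of what such a stack looks like, here is a heavily simplified sketch (construct names, paths, and permissions are illustrative, not the sample's actual definitions):

```typescript
// Sketch of a CDK stack similar in spirit to /infra/lib/genai-stack.ts
import * as cdk from "aws-cdk-lib";
import { Construct } from "constructs";
import * as lambda from "aws-cdk-lib/aws-lambda";
import { NodejsFunction } from "aws-cdk-lib/aws-lambda-nodejs";
import * as s3 from "aws-cdk-lib/aws-s3";
import * as iam from "aws-cdk-lib/aws-iam";

export class GenAIStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // S3 bucket holding the documents ingested by the Knowledge Base
    const dataBucket = new s3.Bucket(this, "DeepF1DataBucket");

    // Lambda function bundling our LangChain.js code
    const agentFn = new NodejsFunction(this, "DeepF1AgentFunction", {
      entry: "src/actions-group/agent-deepf1.ts", // illustrative path
      runtime: lambda.Runtime.NODEJS_20_X,
      timeout: cdk.Duration.seconds(30),
    });

    // Let the function read the data and call Amazon Bedrock
    dataBucket.grantRead(agentFn);
    agentFn.addToRolePolicy(
      new iam.PolicyStatement({
        actions: ["bedrock:InvokeModel", "bedrock:Retrieve"],
        resources: ["*"], // scope this down in a real deployment
      })
    );
  }
}
```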
Here is the code of our Lambda function (cf. `infra/src/actions-group/agent-deepf1.ts`):
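The actual handler follows the Bedrock Agents action-group contract, so the listing below is only a simplified sketch of the LangChain.js portion (the event shape, environment variables, and model ID are assumptions):

```typescript
// Sketch: Lambda handler querying the Amazon Bedrock Knowledge Base
import { ChatBedrockConverse, AmazonKnowledgeBaseRetriever } from "@langchain/aws";
import type { Handler } from "aws-lambda";

const retriever = new AmazonKnowledgeBaseRetriever({
  knowledgeBaseId: process.env.KNOWLEDGE_BASE_ID!, // assumed env variable
  topK: 4,
  region: process.env.AWS_REGION,
});

const model = new ChatBedrockConverse({
  model: process.env.MODEL_ID ?? "meta.llama3-70b-instruct-v1:0", // placeholder
});

export const handler: Handler = async (event) => {
  const question: string = event.question; // assumed event shape
  const docs = await retriever.invoke(question);
  const context = docs.map((d) => d.pageContent).join("\n");

  const answer = await model.invoke(
    `Answer using only this context:\n${context}\n\nQuestion: ${question}`
  );
  return { answer: answer.content };
};
```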
Once the Backend is successfully deployed, go to the `webapp` folder in your terminal and perform the following actions:

- `npm install` dependencies
- Open the `src/App.tsx` file and replace the `bedrock` section with the following one:
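The exact shape of that section lives in the repository; as an illustration, it could look something like this (the property names are assumptions):

```typescript
// Illustrative `bedrock` section in src/App.tsx
const bedrock = {
  agentId: "[AMAZON BEDROCK AGENT ID]",
  agentAliasId: "[AMAZON BEDROCK AGENT ALIAS ID]",
  region: "us-east-1", // the region you deployed the Backend to
};
```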
Make sure to replace `[AMAZON BEDROCK AGENT ID]` and `[AMAZON BEDROCK AGENT ALIAS ID]` with the Outputs values displayed in your terminal when you deployed the Backend.

If this is the first time you are using the AWS Amplify CLI, initialize it on your local machine (via the `amplify configure` command) in order to be able to deploy AWS Amplify applications.

- Initialize and set up authentication for the Amplify app:
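For this step, the usual Amplify CLI sequence looks like the following (a sketch; the repository README remains the authoritative reference):

```shell
# Initialize the Amplify project in the webapp folder
amplify init

# Add an authentication backend (Amazon Cognito under the hood)
amplify add auth

# Provision the resources in your AWS account
amplify push
```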
- Lastly, you can experiment with your Frontend locally by running the `npm run dev` command, or you can add hosting and deploy it to AWS:

Once the app is built and its artifacts published, you can use the public URL displayed in your terminal to open the DeepF1 Race AI Engineer web app and start chatting with the Foundation Model you specified (llama3 by default), which will leverage the Agentic approach when required.
This DeepF1 sample application demonstrates that TypeScript/JavaScript developers are well-positioned to create sophisticated GenAI applications. By providing a path from local development to AWS deployment, it offers a practical starting point for experimentation and innovation in the GenAI space.
We encourage you to explore this sample, adapt it to your needs, and share your experiences with the community. The world of GenAI is open to TypeScript/JavaScript developers, and the possibilities are endless.
P.S.: If you have any issues when running or deploying the sample, please have a look at the detailed instructions available in the DeepF1 sample README.md.