Create a RAG CV Chatbot using Bedrock and Knowledge Bases

A quick tutorial to create a RAG based CV chatbot, using CDK, in AWS Bedrock

Published Jul 18, 2024
AWS Bedrock is an incredibly powerful and intuitive way to access, test, customise and integrate Generative AI with your own applications. As an emerging technology, new techniques, services and approaches appear constantly, and while there are plenty of demos and POC builds out there, I've struggled to find many that include IaC, perhaps because services like CDK are yet to fully catch up. In this walkthrough we'll create the infrastructure for a simple chatbot that you could include in your portfolio, allowing a potential client or employer to interact with your CV using natural language prompts. We'll create a Knowledge Base, add a data source, and build an API to interact with that Knowledge Base using a Foundational Model. And we'll do it all inside CDK so that it's reproducible, maintainable and extensible.
Create a CDK Project
 
To create this CDK project you will need:
An AWS Account
TypeScript 3.8 or later
npm -g install typescript
The AWS CDK CLI
npm install -g aws-cdk
NB: Like many, I prefer to use ESBuild to bundle my Lambdas, and have done so for this project. However, Docker is required by one of the external modules, and CDK will not be able to synth or deploy without it.
You will also need to set up your credentials and bootstrap your AWS account; you can find more info here
With these installed we can create the project in a new folder:
mkdir cv-bot && cd cv-bot
cdk init app --language typescript
We're using TypeScript for this project, but you can also use other languages in CDK, such as Python, C#, Go and more. The CDK documentation provides more information about each.
We'll also need a few extra modules adding:
npm i @aws-sdk/client-bedrock-agent-runtime - needed for our Lambda
npm i @cdklabs/generative-ai-cdk-constructs - provides L2 Constructs for Bedrock, more on this later
npm i -D esbuild - allows bundling of Lambdas using ESBuild instead of Docker. This is a matter of preference and can be left out.
npm i -D @types/aws-lambda - TypeScript types for our Lambda
Now we have our initial setup complete and we can start building!
Creating a Knowledge Base
Knowledge Bases for Amazon Bedrock allow us to simplify the ingestion and use of data for Retrieval Augmented Generation, or RAG. This means that we can ask an LLM questions that relate to the data provided, and the LLM will be able to use it to create responses, citing the information to back them up. For a more detailed explanation of RAG see here.
Inside our CDK project, open lib/cv-bot-stack.ts. Add an import for s3 from aws-cdk-lib at the top of the file, and then create an S3 bucket inside the class constructor. This is the bucket where we will store our data (in this project it will be a simple CV .doc file, but many different file types can be used).
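A minimal sketch of what that might look like (the construct id and removal settings are my own choices for a demo stack, not requirements):

import * as cdk from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';
import { Construct } from 'constructs';

export class CvBotStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Bucket that will hold the CV document(s) feeding the Knowledge Base
    const cvBucket = new s3.Bucket(this, 'CvBucket', {
      blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
      removalPolicy: cdk.RemovalPolicy.DESTROY, // fine for a demo, use RETAIN for anything real
      autoDeleteObjects: true,
    });
  }
}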
Next we're going to create the Knowledge Base. As mentioned before, CDK doesn't have L2 Constructs for Bedrock yet, so we will use this AWS Labs repo, which has Constructs for Knowledge Bases and Bedrock Agents, among others.
A Knowledge Base relies on a Vector Database to store your processed data, so that the LLM can semantically search it to answer the questions it is asked. By default the db uses OpenSearch Serverless (which despite being called Serverless, does not scale down to zero), but you can specify another Vector DB (CDK currently supports Amazon Aurora PostgreSQL and Pinecone) if you prefer. Bear in mind that these other Database types currently need to be created beforehand and then referenced, and may need additional resources like Secrets Manager to be used for storing access information. We'll use the default option here.
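At the time of writing, the Knowledge Base construct from that package looked roughly like this (the instruction text is just an example of mine, and the construct API is still evolving, so check the repo's README for the current shape):

import { bedrock } from '@cdklabs/generative-ai-cdk-constructs';

// Knowledge Base backed by the default OpenSearch Serverless vector store
const knowledgeBase = new bedrock.KnowledgeBase(this, 'CvKnowledgeBase', {
  embeddingsModel: bedrock.BedrockFoundationModel.TITAN_EMBED_TEXT_V1,
  instruction:
    "Use this knowledge base to answer questions about the candidate's CV. " +
    'Answer honestly and refer to the relevant experience where you can.',
});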
Also note that we have specified the Embedding Model. This is the model used to turn the chunked data into the embeddings stored in the Vector DB. It is NOT the LLM you will be using to interrogate the data. You can see the available Embedding Models in the image below:
[Image: the list of available Embedding Models in the Bedrock console]
Finally, I've added an instruction prompt for any LLM querying the Knowledge Base. You can use this one or play with creating your own.
Add the Data source to the Knowledge Base
We have an S3 bucket to hold our data, and a Knowledge Base to create the Embeddings. Now we need to link them together.
We're once again using the AWS Labs Constructs to create our Data source. Notice how we pass in the bucket and the knowledge base. The chunking strategy and max tokens are set at the defaults. So is the overlap percentage, which is the amount of overlap between chunks of data. Finally the inclusion prefixes filter out which files inside the bucket will be added to the Data source, in this case any file starting with 'CV' will be included.
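As a sketch, assuming the cvBucket and knowledgeBase variables from the earlier snippets (the property names and values follow the example in the AWS Labs README at the time of writing):

// Link the S3 bucket to the Knowledge Base as a data source
new bedrock.S3DataSource(this, 'CvDataSource', {
  bucket: cvBucket,
  knowledgeBase: knowledgeBase,
  dataSourceName: 'cv',
  chunkingStrategy: bedrock.ChunkingStrategy.DEFAULT,
  maxTokens: 500,            // chunk size in tokens
  overlapPercentage: 20,     // overlap between chunks
  inclusionPrefixes: ['CV'], // only ingest objects whose key starts with 'CV'
});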
The Retrieve and Generate Lambda
We're going to use the RetrieveAndGenerate API for Bedrock to interact with the LLM and our data. My personal preference would be to do this in a Step Function, as there is less operational overhead - but CDK does not have a Construct for the task yet. So instead we will create a Lambda to do it for us.
The Lambda itself is very simple:
We are using the AWS SDK, specifically the bedrock-agent-runtime APIs, so we import the relevant parts we need, and our type for the handler function. We'll bring in our Knowledge Base Id and Foundational Model ARN as environment variables, which we'll pass through from CDK later on. Our prompt will be passed in as a string, which we'll need to parse from the body of the request. Then we initialise our Bedrock Agent Runtime client outside of the handler function, in accordance with best practices and so it can be re-used if our execution environment gets re-used.
Inside of our handler we parse the event body, construct our input command using the Knowledge Base Id and Model Arn, and await a response from the client. Pretty simple right?
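Here's a minimal sketch of such a handler, assuming the prompt arrives in the request body as a JSON field called prompt (that field name, like the response shape, is my own choice rather than anything prescribed):

import {
  BedrockAgentRuntimeClient,
  RetrieveAndGenerateCommand,
} from '@aws-sdk/client-bedrock-agent-runtime';
import { APIGatewayProxyHandler } from 'aws-lambda';

// Initialise the client outside the handler so it can be re-used across invocations
const client = new BedrockAgentRuntimeClient({});

const { KNOWLEDGE_BASE_ID, MODEL_ARN } = process.env;

export const handler: APIGatewayProxyHandler = async (event) => {
  const { prompt } = JSON.parse(event.body ?? '{}');

  // Ask Bedrock to retrieve from the Knowledge Base and generate an answer with the model
  const command = new RetrieveAndGenerateCommand({
    input: { text: prompt },
    retrieveAndGenerateConfiguration: {
      type: 'KNOWLEDGE_BASE',
      knowledgeBaseConfiguration: {
        knowledgeBaseId: KNOWLEDGE_BASE_ID,
        modelArn: MODEL_ARN,
      },
    },
  });

  const response = await client.send(command);

  return {
    statusCode: 200,
    body: JSON.stringify({ answer: response.output?.text }),
  };
};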
Let's put our Lambda into the CDK stack:
I'm using the aws-lambda-nodejs module as we are using Node. I've minified the bundle, which keeps the file small but makes live adjustments in the console much more difficult (when I'm experimenting I often set it to false so I can make quick changes). We pass in the environment variables too - notice how I've created a const for the modelArn, as we use it more than once - you can find the model ARNs by clicking on a model on the Base Models page in Bedrock.
[Image: the Bedrock model info page, showing where to find the Model ARN]
I've also increased the timeout of the lambda to 2 minutes, which should be enough for the requests we are making, but can be tweaked quickly if needed.
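Assuming the handler above lives at lambda/retrieve-and-generate.ts (the path, ids and the example model ARN are my own; copy the real ARN from the console), the stack code is along these lines:

import * as lambdaNodejs from 'aws-cdk-lib/aws-lambda-nodejs';

// ARN of the Foundational Model that will generate answers - used here and in the IAM policy below
const modelArn = 'arn:aws:bedrock:eu-west-2::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0';

const retrieveAndGenerateFn = new lambdaNodejs.NodejsFunction(this, 'RetrieveAndGenerateFn', {
  entry: 'lambda/retrieve-and-generate.ts',
  handler: 'handler',
  timeout: cdk.Duration.minutes(2),
  bundling: {
    minify: true, // set to false while experimenting to make console edits easier
  },
  environment: {
    KNOWLEDGE_BASE_ID: knowledgeBase.knowledgeBaseId,
    MODEL_ARN: modelArn,
  },
});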
Next we need to add the IAM permissions our Lambda will require to interact with Bedrock. Using the principle of least privilege, I am only giving it exactly what it needs for the exact resources it is interacting with.
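A sketch of those permissions, assuming the variables from the snippets above (the exact set of actions can vary with how you call Bedrock, so adjust if you see an AccessDenied error in the Lambda logs):

import * as iam from 'aws-cdk-lib/aws-iam';

// Allow the Lambda to query this specific Knowledge Base...
retrieveAndGenerateFn.addToRolePolicy(new iam.PolicyStatement({
  actions: ['bedrock:Retrieve', 'bedrock:RetrieveAndGenerate'],
  resources: [knowledgeBase.knowledgeBaseArn],
}));

// ...and to invoke the specific Foundational Model used to generate the answers
retrieveAndGenerateFn.addToRolePolicy(new iam.PolicyStatement({
  actions: ['bedrock:InvokeModel'],
  resources: [modelArn],
}));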
We're nearly there, we just need an API Gateway so we can call the Lambda.
Adding an API Gateway
Using the LambdaRestApi module makes the most sense, and through the beauty of CDK we only need these few lines to stand up an API with a single endpoint, and an API key to keep it relatively secure. I've added some outputs to see some info on the resources we've created, but it's not strictly necessary.
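Roughly, assuming the Lambda variable from earlier (the resource names are my own):

import * as apigateway from 'aws-cdk-lib/aws-apigateway';

// A proxy REST API that forwards every request to the Lambda, protected by an API key
const api = new apigateway.LambdaRestApi(this, 'CvBotApi', {
  handler: retrieveAndGenerateFn,
  proxy: true,
  defaultMethodOptions: { apiKeyRequired: true },
});

// The key must be attached to a usage plan and stage before API Gateway will accept it
const apiKey = api.addApiKey('CvBotApiKey');
const plan = api.addUsagePlan('CvBotUsagePlan', {
  apiStages: [{ api, stage: api.deploymentStage }],
});
plan.addApiKey(apiKey);

// Outputs so we can find the endpoint and key after deployment
new cdk.CfnOutput(this, 'ApiUrl', { value: api.url });
new cdk.CfnOutput(this, 'ApiKeyId', { value: apiKey.keyId });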
That's our code done, now let's deploy to the cloud!
Deploying our code
If you want to check your CDK is ready to go before you deploy, go to your terminal and run:
cdk synth
Once you are happy it's all working, deploy with:
cdk deploy --profile {YOUR_PROFILE_NAME}
replacing {YOUR_PROFILE_NAME} with the name of your own credentials profile.
You will be asked to approve the permissions changes, and then the IaC will be deployed to the cloud. Once complete, log in to your AWS console and navigate to the Bedrock page. Under Knowledge Bases you should see your deployed Knowledge Base - click on it to see that your Data source has been set up too. Navigate to S3 and upload your CV to the created bucket (make sure the file name starts with the prefix we defined in the Stack, eg 'CVxxxx.doc'), then navigate back to the Knowledge Base page and sync your Data source.
[Image: the Knowledge Base page in the Bedrock console, showing the Data source and the Sync button]
In later tutorials I'll show how to automate this process with another Lambda.
At this point you can select a Model and start testing your Knowledge Base directly in the console, great news if you want to try out different models and see which is best.
[Image: the Knowledge Base test chat, letting you interrogate your data directly in the Bedrock console]
We can also navigate to API Gateway and test our API:
[Image: testing the API from the API Gateway console]
If you want to test using Postman or ThunderClient, remember to grab the API key and add it as the X-API-Key header to your request.
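For example, sticking with the prompt field assumed in the Lambda sketch above, a quick test from Node 18+ might look like this (substitute your own endpoint URL and key value):

// Quick manual test of the deployed endpoint
const API_URL = 'https://your-api-id.execute-api.eu-west-2.amazonaws.com/prod/';
const API_KEY = 'your-api-key-value';

async function ask(prompt: string) {
  const res = await fetch(API_URL, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': API_KEY,
    },
    body: JSON.stringify({ prompt }),
  });
  console.log(await res.json());
}

ask('What experience does this candidate have with AWS?').catch(console.error);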
And there we have it! We've created a Bedrock Knowledge Base which we can query using a Foundational Model, from our own API. You can now build a simple Chat bot front end and use it on your portfolio website, or adapt it to your own use case.
 
