
Build a Generative AI powered Serverless chat application in Go

Using DynamoDB, langchaingo, AWS Lambda Web adapter and Amazon Bedrock

Abhishek Gupta
Amazon Employee
Published Jan 18, 2024
Last Modified Apr 13, 2024
In a previous blog, I demonstrated how to use Redis (ElastiCache Serverless as an example) as a chat history backend for a Streamlit app using LangChain. It was deployed to EKS and also made use of EKS Pod Identity to manage the application Pod's permissions for invoking Amazon Bedrock.
The use case here is a similar one - a chat application. I will switch back to implementing things in Go using langchaingo (I used Python for the previous one) and continue to use Amazon Bedrock.
But there are a few unique things you can explore in this blog post. The Serverless chat application is deployed as an AWS Lambda function along with a Function URL. It uses DynamoDB as the chat history store (aka Memory) for each conversation - I extended langchaingo to include this feature. Thanks to the AWS Lambda Web Adapter, the application is built as a (good old) REST/HTTP API using a familiar library (in this case, Gin). And the other nice add-on was being able to combine the Lambda Web Adapter's streaming response feature with the Amazon Bedrock streaming inference API.
High level overview (architecture diagram)
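
Before getting into deployment, here is a minimal sketch of what the application looks like from the Lambda Web Adapter's point of view: just a regular Gin HTTP server with a route that streams the response. The route, form field name, and static file path below are illustrative assumptions, not the exact code from the repo.

package main

import (
	"fmt"
	"io"

	"github.com/gin-gonic/gin"
)

func main() {
	r := gin.Default()

	// Serve the chat UI from the root path (the HTML file location is assumed).
	r.StaticFile("/", "./index.html")

	// POST /chat accepts the user's message and streams the reply back.
	// Behind the Lambda Web Adapter this is just a plain HTTP server; the
	// adapter translates Lambda (Function URL) events into HTTP requests.
	r.POST("/chat", func(c *gin.Context) {
		message := c.PostForm("message")

		// In the real app, this is where the langchaingo conversation chain
		// (with DynamoDB-backed memory) is invoked with a streaming callback.
		// Echoing the message keeps this sketch self-contained and runnable.
		c.Stream(func(w io.Writer) bool {
			fmt.Fprint(w, message)
			return false // nothing more to write for this request
		})
	})

	// The Lambda Web Adapter forwards traffic to the port the app listens on.
	if err := r.Run(":8080"); err != nil {
		panic(err)
	}
}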

Deploy using SAM CLI (Serverless Application Model)

Make sure you have the Amazon Bedrock prerequisites taken care of and the SAM CLI installed.
git clone https://github.com/build-on-aws/chatbot-bedrock-dynamodb-lambda-langchain
cd chatbot-bedrock-dynamodb-lambda-langchain
Run the following commands to build the function and deploy the entire app infrastructure (including the Lambda Function, DynamoDB, etc.)
sam build
sam deploy -g
Once deployed, you should see the Lambda Function URL in your terminal. Open it in a web browser and start conversing with the chatbot!
Chatbot in action
Inspect the DynamoDB table to verify that the conversations are being stored (each conversation will end up being a new item in the table with a unique chat_id):
aws dynamodb scan --table-name langchain_chat_history
The Scan operation is used here for demonstration purposes only; using Scan in production is not recommended.
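
If you only want to look at a specific conversation, you can fetch it by its chat_id instead of scanning the whole table. Here is a small sketch using the AWS SDK for Go v2 - it assumes chat_id is the table's partition key, and "chat-1234" is just a placeholder value:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	client := dynamodb.NewFromConfig(cfg)

	// Fetch a single conversation item by its chat_id.
	out, err := client.GetItem(context.Background(), &dynamodb.GetItemInput{
		TableName: aws.String("langchain_chat_history"),
		Key: map[string]types.AttributeValue{
			"chat_id": &types.AttributeValueMemberS{Value: "chat-1234"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.Item)
}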
To delete the app and the rest of the infrastructure, simply use:
sam delete

Quick peek at the good stuff...

Here is a sneak peek of the implementation (refer to the complete code here):
// Invoke the chain with the user's message. The streaming callback receives
// response chunks from Bedrock as they arrive and writes each one straight
// to the HTTP response using Gin's streaming API.
_, err = chains.Call(c.Request.Context(), chain, map[string]any{"human_input": message},
	chains.WithMaxTokens(8191),
	chains.WithStreamingFunc(func(ctx context.Context, chunk []byte) error {
		c.Stream(func(w io.Writer) bool {
			fmt.Fprint(w, string(chunk)) // flush this chunk to the client
			return false                 // done with this chunk; wait for the next callback
		})
		return nil
	}))
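
For context, here is roughly how a chain like the one being Call-ed above can be assembled with langchaingo. The DynamoDB-backed chat message history is the extension mentioned earlier, so it is passed in via the generic schema.ChatMessageHistory interface rather than a concrete constructor; the actual app also uses its own prompt and input key (human_input), whereas this sketch sticks to the stock Conversation chain for brevity. Exact names may vary with your langchaingo version.

import (
	"github.com/tmc/langchaingo/chains"
	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/memory"
	"github.com/tmc/langchaingo/schema"
)

// buildChain wires a per-conversation memory (backed by the DynamoDB chat
// history store) into a conversation chain around the Bedrock LLM.
func buildChain(llm llms.Model, history schema.ChatMessageHistory) chains.Chain {
	conversationMemory := memory.NewConversationBuffer(
		memory.WithChatHistory(history), // messages persisted per chat_id in DynamoDB
	)
	return chains.NewConversation(llm, conversationMemory)
}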

Closing thoughts

I really like the extensibility of LangChain. While I understand that langchaingo may not be as popular as the original Python version (I hope it gets there in due time 🤞), it's nice to be able to use it as a foundation and build extensions as required. Previously, I had written about how to use the AWS Lambda Go API Proxy to run existing Go applications on AWS Lambda. The AWS Lambda Web Adapter offers similar functionality, but it has lots of other benefits, including response streaming and the fact that it is language agnostic.
Oh, and one more thing - I also tried a different approach to building this solution using the API Gateway WebSocket API. Let me know if you're interested, and I would be happy to write it up!
If you want to explore how to use Go for Generative AI solutions, you can read up on some of my earlier blogs.
Happy building!
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
