Running LangChain.js Applications on AWS Lambda

Learn how to run LangChain.js apps powered by Amazon Bedrock on AWS Lambda using function URLs and response streaming.

João Galego
Amazon Employee
Published May 29, 2024

Overview

“It is a mistake to think you can solve any major problems just with potatoes.” 🥔
― Douglas Adams, Life, the Universe and Everything
Today, I'd like to show you a simple way to run LangChain.js applications on AWS Lambda using function URLs and response streaming.
If you've been following my articles so far, you probably know that I'm a big fan of the LangChain ecosystem
and that I have a soft spot for putting things inside Lambda functions, so this one won't come as a surprise.
As a model backend, I'll be using Amazon Bedrock, but feel free to swap in other chat models.
Ready, set... go! 💥🚀

Prerequisites ✅

Before we get started, take some time to perform the following prerequisite actions:
  1. Make sure these tools are installed and properly configured: the AWS CLI, the AWS SAM CLI, Node.js, and Docker (needed for local testing)
  2. Request model access via Amazon Bedrock
💡 For more information on how to enable model access, please refer to the Amazon Bedrock User Guide (Set up > Model access)

Demo ✨

👨‍💻 All code and documentation for this post is available on GitHub.
Let's start by cloning the project repository (the URL placeholder below stands in for the GitHub link above):
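```bash
# Use the repository URL from the GitHub link above
git clone <repository-url>
cd <repository-directory>
```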
As you can see from the tree structure below, this demo uses AWS Serverless Application Model (SAM) to build and deploy the application.
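Here's a representative sketch of the layout, showing only the pieces mentioned in this post:

```
.
├── README.md
├── template.yaml      # SAM template (function, function URL, permissions)
└── lambdachain/
    ├── index.mjs      # Lambda handler
    └── package.json   # LangChain.js dependencies
```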
☕ If AWS SAM is not your cup of tea, please submit a pull request and feel free to refactor the project to use other deployment tools.
If you're here for the code, the actual app lives inside the lambdachain folder. The main point of interest is index.mjs, which contains the handler function.
LangChain.js offers a BedrockChat class with built-in streaming capabilities that makes things a lot easier for a JS novice like myself.
The details on how to handle response streaming are well-covered by the AWS Lambda Developer Guide, see Configuring a Lambda function to stream responses.
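To give you an idea, here's a minimal sketch of such a handler. This is illustrative rather than the repo's exact code: the default model ID and the `{ "prompt": ... }` request shape are assumptions on my part.

```javascript
// index.mjs (sketch): stream Bedrock chat completions through a function URL
import { BedrockChat } from "@langchain/community/chat_models/bedrock";

// awslambda.streamifyResponse is a global provided by the Lambda Node.js runtime
export const handler = awslambda.streamifyResponse(
  async (event, responseStream, _context) => {
    const model = new BedrockChat({
      // MODEL_ID is read from the environment (see below); Claude 3 Sonnet assumed as the default
      model: process.env.MODEL_ID ?? "anthropic.claude-3-sonnet-20240229-v1:0",
      region: process.env.AWS_REGION,
      streaming: true,
    });

    // Assumed request shape: { "prompt": "..." }
    const { prompt } = JSON.parse(event.body ?? "{}");

    // Forward each chunk to the caller as soon as the model produces it
    const stream = await model.stream(prompt);
    for await (const chunk of stream) {
      responseStream.write(JSON.stringify(chunk) + "\n");
    }
    responseStream.end();
  }
);
```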
Next, let's set up the AWS credentials that will be used to build and deploy the application.
💡 For more information on how to do this, please refer to the AWS Boto3 documentation (Developer Guide > Credentials).
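One common approach is to export them as environment variables; a named profile set up with aws configure works just as well:

```bash
# Replace with your own credentials and preferred region
export AWS_ACCESS_KEY_ID="<your-access-key-id>"
export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"
export AWS_DEFAULT_REGION="us-east-1"
```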
You can use the sam local invoke command to test the application locally; just keep in mind that response streaming is not supported (yet!).
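For example (the logical function ID and event file below are placeholders; the real ID comes from the project's template.yaml):

```bash
# Build once, then invoke locally with a sample event (requires Docker)
sam build
sam local invoke LambdaChainFunction --event event.json
```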
When you're ready, feel free to build and deploy the application:
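```bash
sam build
sam deploy --guided   # the first run prompts for stack name, region, etc.
```

If the template defines the function URL as a stack output, it's printed at the end of the deploy.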
❗ Don't forget to note down the function URL
By default, LambdaChain will use Claude 3 Sonnet. You can add a MODEL_ID environment variable to the Lambda function to change the target model.
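For example, with the AWS CLI (the function name is a placeholder):

```bash
# Point LambdaChain at a different Bedrock model, e.g. Claude 3 Haiku
aws lambda update-function-configuration \
  --function-name <your-function-name> \
  --environment "Variables={MODEL_ID=anthropic.claude-3-haiku-20240307-v1:0}"
```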
Finally, let's take it for a spin, either with sam remote invoke or with plain cURL.
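Here's the cURL version; FUNCTION_URL is a placeholder, and the payload matches the handler sketch above rather than necessarily the repo's exact contract:

```bash
# Function URL noted down after deployment (placeholder)
FUNCTION_URL="https://<url-id>.lambda-url.<region>.on.aws/"

# --no-buffer prints chunks as they stream in
curl --no-buffer -X POST "$FUNCTION_URL" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "What is the answer to life, the universe and everything?"}'
```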
☝️ Pro Tip: Pipe the response through jq -rj .kwargs.content for cleaner output
Here's the model output (though you may not like the answer):
So long, and thanks for all the fish! 🐬
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
