
Running LangChain.js Applications on AWS Lambda

Learn how to run LangChain.js apps powered by Amazon Bedrock on AWS Lambda using function URLs and response streaming.

João Galego
Amazon Employee
Published May 29, 2024

Overview

“It is a mistake to think you can solve any major problems just with potatoes.” 🥔
― Douglas Adams, Life, the Universe and Everything
Today, I'd like to show you a simple way to run LangChain.js applications on AWS Lambda using function URLs and response streaming.
If you've been following my articles so far, you probably know that I'm a big fan of the LangChain ecosystem and that I have a soft spot for putting things inside Lambda functions, so this one won't come as a surprise.
As a model backend, I'll be using Amazon Bedrock, but feel free to swap in other chat models.
Ready, set... go! 💥🚀

Prerequisites ✅

Before we get started, take some time to perform the following prerequisite actions:
  1. Make sure these tools are installed and properly configured: the AWS CLI, the AWS SAM CLI, Docker, and jq
  2. Request model access via Amazon Bedrock
💡 For more information on how to enable model access, please refer to the Amazon Bedrock User Guide (Set up > Model access)
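As a quick sanity check, you can list the foundation models visible in your region with the AWS CLI (a hedged example; the actual access grant happens in the Bedrock console):
# List Anthropic model IDs available in the current region
aws bedrock list-foundation-models --by-provider anthropic \
    --query 'modelSummaries[].modelId' --output json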

Demo ✨

👨‍💻 All code and documentation for this post is available on GitHub.
Let's start by cloning the project repository:
git clone https://github.com/JGalego/LambdaChain
cd LambdaChain
As you can see from the tree structure below, this demo uses AWS Serverless Application Model (SAM) to build and deploy the application.
.
├── README.md
├── lambdachain
│   ├── Dockerfile
│   ├── index.mjs
│   └── package.json
├── lambdachain.png
├── samconfig.toml
└── template.yaml
☕ If AWS SAM is not your cup of tea, please submit a pull request and feel free to refactor the project to use other deployment tools.
If you're here for the code, the actual app lives inside the lambdachain folder. The main point of interest is index.mjs which contains the handler function.
import util from 'util';
import stream from 'stream';

import { BedrockChat } from "@langchain/community/chat_models/bedrock";
import { HumanMessage } from "@langchain/core/messages";

// Promisified version of stream.pipeline
const pipeline = util.promisify(stream.pipeline);

// Bedrock chat model (model ID, region and temperature are configurable via environment variables)
const model = new BedrockChat({
  model: process.env.MODEL_ID || "anthropic.claude-3-sonnet-20240229-v1:0",
  region: process.env.AWS_REGION || process.env.AWS_DEFAULT_REGION,
  modelKwargs: {
    temperature: parseFloat(process.env.TEMPERATURE) || 0.0
  }
});

// Pipe the model's completion stream straight into the Lambda response stream
export const handler = awslambda.streamifyResponse(async (event, responseStream, _context) => {
  const completionStream = await model.stream([
    new HumanMessage({ content: JSON.parse(event.body).message })
  ]);
  await pipeline(completionStream, responseStream);
});
LangChain.js offers a BedrockChat class with built-in streaming support, which makes things a lot easier for a JS novice like myself. The details of response streaming are well covered in the AWS Lambda Developer Guide; see Configuring a Lambda function to stream responses.
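If you'd like to watch BedrockChat stream outside of Lambda first, here's a minimal sketch (a hypothetical local script, not part of the repo; it assumes credentials and region are set in your environment):
// local-stream-test.mjs (hypothetical)
import { BedrockChat } from "@langchain/community/chat_models/bedrock";
import { HumanMessage } from "@langchain/core/messages";

const model = new BedrockChat({
  model: "anthropic.claude-3-sonnet-20240229-v1:0",
  region: process.env.AWS_DEFAULT_REGION
});

// .stream() returns an async iterable of message chunks
for await (const chunk of model.stream([new HumanMessage({ content: "Hello!" })])) {
  process.stdout.write(chunk.content);
}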
Next, let's set up the AWS credentials that will be used to build and deploy the application:
💡 For more information on how to do this, please refer to the AWS Boto3 documentation (Developer Guide > Credentials).
# Option 1: (recommended) AWS CLI
aws configure

# Option 2: environment variables
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_DEFAULT_REGION=...
You can use the sam local invoke command to test the application locally; just keep in mind that response streaming is not supported (yet!).
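As a sketch, a local test might look like this (the function's logical ID below is an assumption; check template.yaml for the real one):
# Create a test event that mimics what the function URL sends
echo '{"body": "{\"message\": \"Hello!\"}"}' > event.json

# Invoke the function locally (no response streaming here)
sam local invoke LambdaChainFunction --event event.json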
When you're ready, feel free to build and deploy the application:
# 🏗️ Build
sam build --use-container

# 🚀 Deploy
sam deploy --guided
❗ Don't forget to note down the function URL:
export FUNCTION_URL=`sam list stack-outputs --stack-name lambdachain --output json | jq -r '.[] | select(.OutputKey == "LambdaChainFunctionUrl") | .OutputValue'`
By default, LambdaChain will use Claude 3 Sonnet. You can add a MODEL_ID environment variable to the Lambda function to change the target model.
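For example, here's one way to do it with the AWS CLI (a hedged sketch; replace the function name placeholder with the one from your deployed stack):
# Point LambdaChain at a different Bedrock model
# ⚠️ This call replaces ALL existing environment variables on the function
aws lambda update-function-configuration \
    --function-name <your-function-name> \
    --environment "Variables={MODEL_ID=anthropic.claude-3-haiku-20240307-v1:0}"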
Finally, let's take it for a spin:
Using SAM:
sam remote invoke --stack-name lambdachain \
--event '{"body": "{\"message\": \"What is the answer to life, the Universe and everything?\"}"}'
Using cURL:
curl --no-buffer \
--silent \
--aws-sigv4 "aws:amz:$AWS_DEFAULT_REGION:lambda" \
--user "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" \
-H "x-amz-security-token: $AWS_SESSION_TOKEN" \
-H "content-type: application/json" \
-d '{"message": "What is the answer to life, the Universe and everything?"}' \
$FUNCTION_URL
☝️ Pro Tip: Pipe the output through jq -rj '.kwargs.content' for cleaner output.
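For example, the cURL call above becomes:
curl --no-buffer \
    --silent \
    --aws-sigv4 "aws:amz:$AWS_DEFAULT_REGION:lambda" \
    --user "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" \
    -H "x-amz-security-token: $AWS_SESSION_TOKEN" \
    -H "content-type: application/json" \
    -d '{"message": "What is the answer to life, the Universe and everything?"}' \
    $FUNCTION_URL | jq -rj '.kwargs.content'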
Here's the model output (though you may not like the answer):
The answer is 42.

This is a reference to the famous joke from The Hitchhiker's Guide to the Galaxy by Douglas Adams.

In the story, scientists build an incredibly powerful computer called Deep Thought to calculate the
Answer to the Ultimate Question of Life, the Universe, and Everything.

After 7.5 million years of computing, Deep Thought provides the answer: 42.

Of course, 42 is not really the meaningful answer everyone was hoping for.

It's simply an absurd joke playing on the deep philosophical question by giving an unhelpful numerical answer.

The point is that the question itself is too vague and impossible to definitively answer in such a simplistic way.

So in pop culture, "42" has become a tongue-in-cheek way to provide a humorous non-answer answer to
the mysteries of existence and the universe.

It's an iconic bit of silliness from the brilliant comedic mind of Douglas Adams.
So long, and thanks for all the fish! 🐬
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
