Building LangChain applications with Amazon Bedrock and Go - An introduction
How to extend the LangChain Go package to include support for Amazon Bedrock.
This post covers how to extend langchaingo to use foundation models from Amazon Bedrock. The code is available in this GitHub repository.
LangChain's strength is its extensible architecture - the same applies to the langchaingo library as well. It supports components/modules, each with interface(s) and multiple implementations. Some of these include:

- Models - These are the building blocks that allow LangChain apps to work with multiple language models (such as ones from Amazon Bedrock, OpenAI, etc.).
- Chains - These can be used to create a sequence of calls that combine multiple models and prompts.
- Vector databases - They can store unstructured data in the form of vector embeddings. At query time, the unstructured query is embedded, and semantic/vector search is performed to retrieve the embedding vectors that are 'most similar' to the embedded query.
- Memory - This module allows you to persist state between chain or agent calls. By default, chains are stateless, meaning they process each incoming request independently (the same goes for LLMs).
langchaingo provides implementations for many large language models, and the same extensibility applies here. To add support for Amazon Bedrock, the new implementation satisfies the langchaingo LLM and LanguageModel interfaces, so it implements the Call, Generate, GeneratePrompt and GetNumTokens functions.
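As a rough, illustrative sketch (not the code from the linked repository), such a plugin could be laid out along these lines. The struct name, fields, and package layout are assumptions, and the interface signatures reflect the langchaingo version this post refers to (the library's interfaces have evolved since):

```go
package claude

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/service/bedrockruntime"
	"github.com/tmc/langchaingo/llms"
)

// LLM is an illustrative Amazon Bedrock backed langchaingo model.
// The struct name and fields are assumptions; see the linked repository
// for the actual implementation.
type LLM struct {
	region string
	brc    *bedrockruntime.Client // Amazon Bedrock Runtime client (AWS SDK for Go v2)
}

// Call handles a single prompt by delegating to Generate.
func (l *LLM) Call(ctx context.Context, prompt string, options ...llms.CallOption) (string, error) {
	r, err := l.Generate(ctx, []string{prompt}, options...)
	if err != nil {
		return "", err
	}
	return r[0].Text, nil
}

// Generate does the heavy lifting: prepare the JSON payload, invoke
// Amazon Bedrock, and convert the response - the steps are described
// in the list below (with sketches after it).
func (l *LLM) Generate(ctx context.Context, prompts []string, options ...llms.CallOption) ([]*llms.Generation, error) {
	// see the sketches after the list for payload, invocation and
	// response handling details
	return nil, nil
}

// GeneratePrompt and GetNumTokens are also needed to satisfy the
// LanguageModel interface; they typically delegate to helpers in the
// llms package.
```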
- The first step is to prepare the JSON payload to be sent to Amazon Bedrock. This contains the prompt/input along with other configuration parameters.
- Next, Amazon Bedrock is invoked with the payload and config parameters. Both synchronous and streaming invocation modes are supported (sketches of both follow this list). The streaming/async mode is handled by the ProcessStreamingOutput function and will be demonstrated in an example below; you can refer to the details in the Using the Streaming API section in this blog post.
- Once the request is processed successfully, the JSON response from Amazon Bedrock is converted (unmarshaled) into a Response struct and a slice of Generation instances, as required by the Generate function signature.
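To make these steps concrete, here is a hedged sketch of the synchronous flow: prepare the Claude payload, invoke the model, and unmarshal the result into langchaingo Generation values. The Request and Response fields follow the Anthropic Claude text-completion JSON format on Amazon Bedrock; the function name, model ID, and defaults are illustrative rather than the repository's actual code:

```go
package claude

import (
	"context"
	"encoding/json"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/bedrockruntime"
	"github.com/tmc/langchaingo/llms"
)

// Request mirrors the JSON payload expected by the Anthropic Claude
// (text completion) model on Amazon Bedrock.
type Request struct {
	Prompt            string   `json:"prompt"`
	MaxTokensToSample int      `json:"max_tokens_to_sample"`
	Temperature       float64  `json:"temperature,omitempty"`
	TopP              float64  `json:"top_p,omitempty"`
	StopSequences     []string `json:"stop_sequences,omitempty"`
}

// Response mirrors the JSON document returned by the model.
type Response struct {
	Completion string `json:"completion"`
}

// generate sketches the synchronous path used by Generate.
func generate(ctx context.Context, brc *bedrockruntime.Client, prompt string) ([]*llms.Generation, error) {
	// 1. prepare the JSON payload (prompt plus config parameters)
	payload, err := json.Marshal(Request{
		// Claude expects the Human/Assistant framing around the prompt
		Prompt:            fmt.Sprintf("\n\nHuman:%s\n\nAssistant:", prompt),
		MaxTokensToSample: 2048,
	})
	if err != nil {
		return nil, err
	}

	// 2. invoke Amazon Bedrock synchronously
	out, err := brc.InvokeModel(ctx, &bedrockruntime.InvokeModelInput{
		ModelId:     aws.String("anthropic.claude-v2"),
		ContentType: aws.String("application/json"),
		Accept:      aws.String("application/json"),
		Body:        payload,
	})
	if err != nil {
		return nil, err
	}

	// 3. unmarshal the response and wrap it as langchaingo Generations
	var resp Response
	if err := json.Unmarshal(out.Body, &resp); err != nil {
		return nil, err
	}
	return []*llms.Generation{{Text: resp.Completion}}, nil
}
```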
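The streaming path might be processed along these lines. The ProcessStreamingOutput name comes from this post, but the signature, the callback type, and the accumulation logic are assumptions about how the Bedrock Runtime response stream could be consumed (the output value would come from a prior InvokeModelWithResponseStream call):

```go
package claude

import (
	"context"
	"encoding/json"

	"github.com/aws/aws-sdk-go-v2/service/bedrockruntime"
	"github.com/aws/aws-sdk-go-v2/service/bedrockruntime/types"
)

// StreamingOutputHandler is invoked for each chunk of the streamed
// response (the callback shape is an assumption for this sketch).
type StreamingOutputHandler func(ctx context.Context, part []byte) error

// ProcessStreamingOutput reads events from the Bedrock Runtime response
// stream, forwards each chunk to the handler, and accumulates the full
// completion (Response is the struct from the previous sketch).
func ProcessStreamingOutput(ctx context.Context, output *bedrockruntime.InvokeModelWithResponseStreamOutput, handler StreamingOutputHandler) (Response, error) {
	var resp Response

	for event := range output.GetStream().Events() {
		switch v := event.(type) {
		case *types.ResponseStreamMemberChunk:
			// each chunk is a JSON document with the same shape as Response
			var partial Response
			if err := json.Unmarshal(v.Value.Bytes, &partial); err != nil {
				return resp, err
			}
			if err := handler(ctx, v.Value.Bytes); err != nil {
				return resp, err
			}
			resp.Completion += partial.Completion
		}
	}
	return resp, output.GetStream().Close()
}
```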
Once Amazon Bedrock support for langchaingo has been implemented, using it is as easy as creating a new instance with claude.New(<supported AWS region>) and using the Call (or Generate) function. You can refer to the code here.
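Usage might look roughly like the following. The import path is a placeholder for the claude package in the linked repository, and the region and prompt are just examples:

```go
package main

import (
	"context"
	"fmt"
	"log"

	// placeholder import path: use the claude package from the
	// repository linked in this post
	claude "example.com/bedrock-langchaingo/claude"
)

func main() {
	// create the Bedrock-backed LLM for a supported AWS region
	llm, err := claude.New("us-east-1")
	if err != nil {
		log.Fatal(err)
	}

	// single-prompt invocation via Call
	resp, err := llm.Call(context.Background(), "Explain generative AI in one paragraph")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp)
}
```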
LangChain is a powerful and extensible library that allows us to plug in external components as per requirements. This blog demonstrated how to extend langchaingo to work with the Anthropic Claude model available in Amazon Bedrock. You can use the same approach to implement support for other Amazon Bedrock models such as Amazon Titan.

The examples here used the model directly via the Call function. In future blog posts, I will cover how to use these models as part of chains for implementing functionality like a chatbot or QA assistant.

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.