Building LangChain applications with Amazon Bedrock and Go - An introduction
How to extend the LangChain Go package to include support for Amazon Bedrock.
This post shows how to extend the langchaingo library to use foundation models from Amazon Bedrock. The code is available in this GitHub repository.
LangChain's strength is its extensible architecture, and the same applies to the langchaingo library. It supports components/modules, each with interface(s) and multiple implementations. Some of these include:

- Models - These are the building blocks that allow LangChain apps to work with multiple language models (such as ones from Amazon Bedrock, OpenAI, etc.).
- Chains - These can be used to create a sequence of calls that combine multiple models and prompts.
- Vector databases - They can store unstructured data in the form of vector embeddings. At query time, the unstructured query is embedded, and a semantic/vector search is performed to retrieve the embedding vectors that are 'most similar' to the embedded query.
- Memory - This module allows you to persist state between chain or agent calls. By default, chains are stateless, meaning they process each incoming request independently (the same goes for LLMs).
langchaingo provides implementations for many large language models, and the same approach applies here. The Amazon Bedrock implementation satisfies the langchaingo LLM and LanguageModel interfaces, so it implements the Call, Generate, GeneratePrompt, and GetNumTokens functions.
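For reference, those interfaces looked roughly like this at the time of writing (a paraphrased sketch, not verbatim; check the langchaingo version you use, since the signatures have changed across releases):

```go
// Paraphrased sketch of the langchaingo llms interfaces (not verbatim).
type LLM interface {
	Call(ctx context.Context, prompt string, options ...CallOption) (string, error)
	Generate(ctx context.Context, prompts []string, options ...CallOption) ([]*Generation, error)
}

type LanguageModel interface {
	GeneratePrompt(ctx context.Context, promptValues []schema.PromptValue, options ...CallOption) (LLMResult, error)
	GetNumTokens(text string) int
}
```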
- The first step is to prepare the JSON payload to be sent to Amazon Bedrock. This contains the prompt/input along with other configuration parameters.
```go
//...
payload := Request{
	MaxTokensToSample: opts.MaxTokens,
	Temperature:       opts.Temperature,
	TopK:              opts.TopK,
	TopP:              opts.TopP,
	StopSequences:     opts.StopWords,
}

if o.useHumanAssistantPrompt {
	// wrap the raw prompt in Claude's Human/Assistant format
	payload.Prompt = fmt.Sprintf(claudePromptFormat, prompts[0])
} else {
	// assumed: pass the prompt through unchanged (the else branch was
	// empty in the original excerpt)
	payload.Prompt = prompts[0]
}

payloadBytes, err := json.Marshal(payload)
if err != nil {
	return nil, err
}
```
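claudePromptFormat is not shown in this excerpt. Anthropic's classic text-completion API expects the prompt to be framed as Human/Assistant turns, so the constant presumably looks something like this:

```go
// Assumed definition: Claude's text-completion prompt framing.
const claudePromptFormat = "\n\nHuman: %s\n\nAssistant:"
```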
```go
type Request struct {
	Prompt            string   `json:"prompt"`
	MaxTokensToSample int      `json:"max_tokens_to_sample"`
	Temperature       float64  `json:"temperature,omitempty"`
	TopP              float64  `json:"top_p,omitempty"`
	TopK              int      `json:"top_k,omitempty"`
	StopSequences     []string `json:"stop_sequences,omitempty"`
}
```
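As an illustration, with a hypothetical prompt and only MaxTokensToSample and Temperature set, the marshaled payload would look like this (the omitempty tags drop the unset fields):

```json
{
  "prompt": "\n\nHuman: Write a haiku about Go\n\nAssistant:",
  "max_tokens_to_sample": 2048,
  "temperature": 0.5
}
```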
- Next, Amazon Bedrock is invoked with the payload and config parameters. Both synchronous and streaming invocation modes are supported; the streaming/async mode is demonstrated in an example below.
```go
//...
if opts.StreamingFunc != nil {
	resp, err = o.invokeAsyncAndGetResponse(payloadBytes, opts.StreamingFunc)
	if err != nil {
		return nil, err
	}
} else {
	resp, err = o.invokeAndGetResponse(payloadBytes)
	if err != nil {
		return nil, err
	}
}
```
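invokeAndGetResponse is not shown in the excerpt. A minimal sketch of the synchronous path, assuming it simply calls the Bedrock InvokeModel API and unmarshals the JSON response body, might look like this:

```go
// Minimal sketch (assumed, not the verbatim implementation): invoke the
// model synchronously and unmarshal the JSON response body.
func (o *LLM) invokeAndGetResponse(payloadBytes []byte) (Response, error) {
	output, err := o.brc.InvokeModel(context.Background(), &bedrockruntime.InvokeModelInput{
		Body:        payloadBytes,
		ModelId:     aws.String(o.modelID),
		ContentType: aws.String("application/json"),
	})
	if err != nil {
		return Response{}, err
	}

	var resp Response
	if err := json.Unmarshal(output.Body, &resp); err != nil {
		return Response{}, err
	}
	return resp, nil
}
```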
In the streaming invocation mode, the response is handled by the ProcessStreamingOutput function. You can refer to the details in the Using the Streaming API section of this blog post.
```go
//...
func (o *LLM) invokeAsyncAndGetResponse(payloadBytes []byte, handler func(ctx context.Context, chunk []byte) error) (Response, error) {
	output, err := o.brc.InvokeModelWithResponseStream(context.Background(), &bedrockruntime.InvokeModelWithResponseStreamInput{
		Body:        payloadBytes,
		ModelId:     aws.String(o.modelID),
		ContentType: aws.String("application/json"),
	})
	if err != nil {
		return Response{}, err
	}

	resp, err := ProcessStreamingOutput(output, handler)
	if err != nil {
		return Response{}, err
	}

	return resp, nil
}

func ProcessStreamingOutput(output *bedrockruntime.InvokeModelWithResponseStreamOutput, handler func(ctx context.Context, chunk []byte) error) (Response, error) {
	var combinedResult string
	resp := Response{}

	for event := range output.GetStream().Events() {
		switch v := event.(type) {
		case *types.ResponseStreamMemberChunk:
			// decode each streamed chunk into its own Response
			var chunkResp Response
			if err := json.NewDecoder(bytes.NewReader(v.Value.Bytes)).Decode(&chunkResp); err != nil {
				return resp, err
			}
			// hand the partial completion to the caller's callback
			if err := handler(context.Background(), []byte(chunkResp.Completion)); err != nil {
				return resp, err
			}
			combinedResult += chunkResp.Completion
		case *types.UnknownUnionMember:
			fmt.Println("unknown tag:", v.Tag)
		default:
			fmt.Println("union is nil or unknown type")
		}
	}

	resp.Completion = combinedResult
	return resp, nil
}
```
- Once the request is processed successfully, the JSON response from Amazon Bedrock is converted (unmarshaled) back into a Response struct, and a slice of Generation instances is returned, as required by the Generate function signature.
```go
//...
generations := []*llms.Generation{
	{Text: resp.Completion},
}
```
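Call, in turn, can simply delegate to Generate with a single prompt and return the first generation's text. A minimal sketch of that pattern (assumed here, not copied from the repository):

```go
// Minimal sketch: Call wraps Generate for the single-prompt case.
func (o *LLM) Call(ctx context.Context, prompt string, options ...llms.CallOption) (string, error) {
	r, err := o.Generate(ctx, []string{prompt}, options...)
	if err != nil {
		return "", err
	}
	return r[0].Text, nil
}
```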
Now that the langchaingo LLM has been implemented, using it is as easy as creating a new instance with claude.New(<supported AWS region>) and using the Call (or Generate) function.
```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/build-on-aws/langchaingo-amazon-bedrock-llm/claude"
	"github.com/tmc/langchaingo/llms"
)

func main() {
	llm, err := claude.New("us-east-1")
	if err != nil {
		log.Fatal(err)
	}

	input := "Write a program to compute factorial in Go:"
	opt := llms.WithMaxTokens(2048)

	output, err := llm.Call(context.Background(), input, opt)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(output)
}
```
```shell
git clone https://github.com/build-on-aws/langchaingo-amazon-bedrock-llm
cd langchaingo-amazon-bedrock-llm/examples
```
```shell
go run main.go
```
For streaming output, pass a callback using the llms.WithStreamingFunc option. You can refer to the code here.
```go
//...
_, err = llm.Call(context.Background(), input,
	llms.WithMaxTokens(2048),
	llms.WithTemperature(0.5),
	llms.WithTopK(250),
	llms.WithStreamingFunc(func(ctx context.Context, chunk []byte) error {
		fmt.Print(string(chunk))
		return nil
	}))
```
```shell
go run streaming/main.go
```
LangChain is a powerful and extensible library that allows us to plug in external components as per requirements. This blog demonstrated how to extend langchaingo to work with the Anthropic Claude model available in Amazon Bedrock. You can use the same approach to implement support for other Amazon Bedrock models, such as Amazon Titan.

The examples in this post invoked the model directly with the Call function. In future blog posts, I will cover how to use these models as part of chains for implementing functionality like a chatbot or QA assistant.

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.