
Generative AI Apps With Amazon Bedrock: Getting Started for Go Developers
An introductory guide to using the AWS Go SDK and Amazon Bedrock Foundation Models (FMs) for tasks such as content generation, building chat applications, handling streaming data, and more. It covers:
- Amazon Bedrock Go APIs and how to use them for tasks such as content generation
- How to build a simple chat application and handle streaming output from Amazon Bedrock Foundation Models
- Code walkthrough of the examples
The code examples are available in this GitHub repository.

Before you begin, make sure to:
- Grant programmatic access using an IAM user/role.
- Grant the below permission(s) to the IAM identity you are using:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "bedrock:*",
      "Resource": "*"
    }
  ]
}
cfg, err := config.LoadDefaultConfig(context.Background(), config.WithRegion(region))
Once you create the aws.Config instance using config.LoadDefaultConfig, the AWS Go SDK uses its default credential chain to find AWS credentials. You can read up on the details here, but in my case, I already have a credentials file in <USER_HOME>/.aws which is detected and picked up by the SDK.

The Amazon Bedrock support in the AWS Go SDK is split across two packages/clients:
- The first one, bedrock.Client, can be used for control plane-like operations such as getting information about base foundation models or custom models, creating a fine-tuning job to customize a base model, etc.
- The bedrockruntime.Client in the bedrockruntime package is used to run inference on the Foundation models (this is the interesting part!).
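Both clients are created from the same aws.Config; for example:

// client for control plane operations - model information, customization jobs, etc.
bc := bedrock.NewFromConfig(cfg)

// client for running inference on foundation models
brc := bedrockruntime.NewFromConfig(cfg)

The first example uses the control plane client to list the available foundation models: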
region := os.Getenv("AWS_REGION")
if region == "" {
	region = defaultRegion
}

cfg, err := config.LoadDefaultConfig(context.Background(), config.WithRegion(region))

bc := bedrock.NewFromConfig(cfg)

fms, err := bc.ListFoundationModels(context.Background(), &bedrock.ListFoundationModelsInput{
	//ByProvider: aws.String("Amazon"),
	//ByOutputModality: types.ModelModalityText,
})

for _, fm := range fms.ModelSummaries {
	info := fmt.Sprintf("Name: %s | Provider: %s | Id: %s", *fm.ModelName, *fm.ProviderName, *fm.ModelId)
	fmt.Println(info)
}
To try it out, first clone the GitHub repository and get the dependencies:

git clone https://github.com/build-on-aws/amazon-bedrock-go-sdk-examples
cd amazon-bedrock-go-sdk-examples
go mod tidy
go run bedrock-basic/main.go
Note that you can also filter by provider, modality (input/output), and so on by specifying it in ListFoundationModelsInput.
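For instance, to list only Amazon-provided models that produce text output, you could set the (commented-out) fields from the snippet above:

// filter foundation models by provider and output modality
fms, err := bc.ListFoundationModels(context.Background(), &bedrock.ListFoundationModelsInput{
	ByProvider:       aws.String("Amazon"),
	ByOutputModality: types.ModelModalityText,
})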
Let's move on to content generation. We will use the following prompt:

<paragraph>
"In 1758, the Swedish botanist and zoologist Carl Linnaeus published in his Systema Naturae, the two-word naming of species (binomial nomenclature). Canis is the Latin word meaning "dog", and under this genus, he listed the domestic dog, the wolf, and the golden jackal."
</paragraph>
Please rewrite the above paragraph to make it understandable to a 5th grader.
Please output your rewrite in <rewrite></rewrite> tags.
To run the example:

go run claude-content-generation/main.go
<rewrite>
Carl Linnaeus was a scientist from Sweden who studied plants and animals. In 1758, he published a book called Systema Naturae where he gave all species two word names. For example, he called dogs Canis familiaris. Canis is the Latin word for dog. Under the name Canis, Linnaeus listed the pet dog, the wolf, and the golden jackal. So he used the first word Canis to group together closely related animals like dogs, wolves and jackals. This way of naming species with two words is called binomial nomenclature and is still used by scientists today.
</rewrite>
Here is the relevant part of the code:

//...
brc := bedrockruntime.NewFromConfig(cfg)

payload := Request{
	Prompt:            fmt.Sprintf(claudePromptFormat, prompt),
	MaxTokensToSample: 2048,
	Temperature:       0.5,
	TopK:              250,
	TopP:              1,
}

payloadBytes, err := json.Marshal(payload)

output, err := brc.InvokeModel(context.Background(), &bedrockruntime.InvokeModelInput{
	Body:        payloadBytes,
	ModelId:     aws.String(claudeV2ModelID),
	ContentType: aws.String("application/json"),
})

var resp Response

err = json.Unmarshal(output.Body, &resp)
//.....
The payload needs to be JSON formatted, and its details are well documented here - Inference parameters for foundation models. You pass in the ModelId in the call, which you can get from the list of Base model IDs. The JSON response is then converted to a Response struct.
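For context, here is a rough sketch of what the Request and Response types (and the prompt format) could look like; the exact definitions live in the sample repository, and the field names below follow the documented Anthropic Claude payload:

// Claude expects prompts in the "\n\nHuman: ... \n\nAssistant:" format
const claudePromptFormat = "\n\nHuman: %s\n\nAssistant:"

type Request struct {
	Prompt            string   `json:"prompt"`
	MaxTokensToSample int      `json:"max_tokens_to_sample"`
	Temperature       float64  `json:"temperature,omitempty"`
	TopP              float64  `json:"top_p,omitempty"`
	TopK              int      `json:"top_k,omitempty"`
	StopSequences     []string `json:"stop_sequences,omitempty"`
}

type Response struct {
	Completion string `json:"completion"`
}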
The next example uses Claude for information extraction, with the following prompt:

<directory>
Phone directory:
John Latrabe, 800-232-1995, john909709@geemail.com
Josie Lana, 800-759-2905, josie@josielananier.com
Keven Stevens, 800-980-7000, drkevin22@geemail.com
Phone directory will be kept up to date by the HR manager.
</directory>

Please output the email addresses within the directory, one per line, in the order in which they appear within the text. If there are no email addresses in the text, output "N/A".

To run the example:
go run claude-information-extraction/main.go
Let's move on to building a chat application. Since it's a simple implementation, the state (conversation history) is maintained in-memory.
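One way to do this (a minimal sketch; the repository may structure it differently) is to keep appending each exchange to a running history string and send the entire history as the prompt for every new turn:

// chatHistory holds the running conversation in memory
// (hypothetical helper, not necessarily the repo's exact approach)
var chatHistory string

// buildPrompt appends the user's message and returns the full
// Claude-formatted prompt, including all prior turns
func buildPrompt(userMsg string) string {
	chatHistory += fmt.Sprintf("\n\nHuman:%s", userMsg)
	return chatHistory + "\n\nAssistant:"
}

// recordReply appends the model's reply so the next turn carries full context
func recordReply(reply string) {
	chatHistory += reply
}

To run the application: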
go run claude-chat/main.go
If you want to log the messages being exchanged with the LLM, run the program in verbose mode:

go run claude-chat/main.go --verbose
Next, let's look at the streaming example. To run it:

go run streaming-claude-basic/main.go
You should see the output being written to the console as the parts are being generated by Amazon Bedrock.
For this, we use the InvokeModelWithResponseStream API, which returns a bedrockruntime.InvokeModelWithResponseStreamOutput instance.
//...
brc := bedrockruntime.NewFromConfig(cfg)

payload := Request{
	Prompt:            fmt.Sprintf(claudePromptFormat, prompt),
	MaxTokensToSample: 2048,
	Temperature:       0.5,
	TopK:              250,
	TopP:              1,
}

payloadBytes, err := json.Marshal(payload)

output, err := brc.InvokeModelWithResponseStream(context.Background(), &bedrockruntime.InvokeModelWithResponseStreamInput{
	Body:        payloadBytes,
	ModelId:     aws.String(claudeV2ModelID),
	ContentType: aws.String("application/json"),
})

//....
Notice how this differs from the InvokeModel API. Since the InvokeModelWithResponseStreamOutput instance does not have the complete response (yet), we cannot (or should not) simply return it to the caller. Instead, we opt to process this output bit by bit with the processStreamingOutput function. It accepts a handler of the following type:

type StreamingOutputHandler func(ctx context.Context, part []byte) error

This is a custom type I defined to provide a way for the calling application to specify how to handle the output chunks - in this case, we simply print to the console (standard out).
//...
_, err = processStreamingOutput(output, func(ctx context.Context, part []byte) error {
	fmt.Print(string(part))
	return nil
})
//...
Here is a look at what the processStreamingOutput function does (some parts of the code are omitted for brevity). InvokeModelWithResponseStreamOutput provides us access to a channel of events (of type types.ResponseStream) which contains the event payload. This is nothing but a JSON formatted string with the partially generated response from the LLM; we convert it into a Response struct. We then invoke the handler function (it prints the partial response to the console) and keep building the complete response by appending the partial bits. The complete response is finally returned from the function.
func processStreamingOutput(output *bedrockruntime.InvokeModelWithResponseStreamOutput, handler StreamingOutputHandler) (Response, error) {

	var combinedResult string
	resp := Response{}

	for event := range output.GetStream().Events() {
		switch v := event.(type) {
		case *types.ResponseStreamMemberChunk:
			var resp Response
			err := json.NewDecoder(bytes.NewReader(v.Value.Bytes)).Decode(&resp)
			if err != nil {
				return resp, err
			}

			handler(context.Background(), []byte(resp.Completion))
			combinedResult += resp.Completion

		//....
		}
	}

	resp.Completion = combinedResult

	return resp, nil
}
For the chat application with streaming, we use the same InvokeModelWithResponseStream API and handle the responses as per the previous example. To run it:
go run claude-chat-streaming/main.go
So far, we have used the Anthropic Claude v2 model. You can also try the Cohere model example for text generation. To run it:

go run cohere-text-generation/main.go
Next up is image generation with the Stable Diffusion XL model. To run the example:

go run stablediffusion-image-gen/main.go "<your prompt>"
For example:
go run stablediffusion-image-gen/main.go "Sri lanka tea plantation"
go run stablediffusion-image-gen/main.go "rocket ship launching from forest with flower garden under a blue sky, masterful, ghibli"
You should see an output JPG file generated.
The InvokeModel call result is converted to a Response struct, which is further deconstructed to extract the base64-encoded image ([]byte), decode it using encoding/base64, and write the final []byte to an output file (named output-<timestamp>.jpg).
//...
brc := bedrockruntime.NewFromConfig(cfg)

prompt := os.Args[1]

payload := Request{
	TextPrompts: []TextPrompt{{Text: prompt}},
	CfgScale:    10,
	Seed:        0,
	Steps:       50,
}

payloadBytes, err := json.Marshal(payload)

output, err := brc.InvokeModel(context.Background(), &bedrockruntime.InvokeModelInput{
	Body:        payloadBytes,
	ModelId:     aws.String(stableDiffusionXLModelID),
	ContentType: aws.String("application/json"),
})

var resp Response

err = json.Unmarshal(output.Body, &resp)

decoded, err := resp.Artifacts[0].DecodeImage()

outputFile := fmt.Sprintf("output-%d.jpg", time.Now().Unix())

err = os.WriteFile(outputFile, decoded, 0644)
//...
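For reference, here is a minimal sketch of what the Request, Response, and DecodeImage types/helpers could look like; the JSON field names follow the Stability AI payload format documented for Bedrock, but the sample repository's exact code may differ:

type Request struct {
	TextPrompts []TextPrompt `json:"text_prompts"`
	CfgScale    float64      `json:"cfg_scale"`
	Seed        int          `json:"seed"`
	Steps       int          `json:"steps"`
}

type TextPrompt struct {
	Text string `json:"text"`
}

type Response struct {
	Result    string     `json:"result"`
	Artifacts []Artifact `json:"artifacts"`
}

type Artifact struct {
	Base64       string `json:"base64"`
	FinishReason string `json:"finishReason"`
}

// DecodeImage converts the base64-encoded payload into raw image bytes
// (uses encoding/base64 from the standard library)
func (a *Artifact) DecodeImage() ([]byte, error) {
	return base64.StdEncoding.DecodeString(a.Base64)
}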
Note the request parameters (CfgScale, Seed, and Steps); their values depend on your use case. For instance, CfgScale determines how much the final image portrays the prompt: use a lower number to increase randomness in the generation. Refer to the Amazon Bedrock Inference Parameters documentation for details.

The final example uses the Titan Embeddings G1 - Text model for text embeddings. It supports text retrieval, semantic similarity, and clustering. The maximum input text is 8K tokens and the maximum output vector length is 1536.
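The invocation itself follows the same InvokeModel pattern as the earlier examples. Here is a minimal sketch, assuming the documented Titan Embeddings payload format (inputText in the request, embedding in the response) and the amazon.titan-embed-text-v1 model ID:

//...
brc := bedrockruntime.NewFromConfig(cfg)

// request/response shapes assumed from the documented Titan payload format
payload := struct {
	InputText string `json:"inputText"`
}{InputText: os.Args[1]}

payloadBytes, err := json.Marshal(payload)

output, err := brc.InvokeModel(context.Background(), &bedrockruntime.InvokeModelInput{
	Body:        payloadBytes,
	ModelId:     aws.String("amazon.titan-embed-text-v1"),
	ContentType: aws.String("application/json"),
})

var resp struct {
	Embedding []float64 `json:"embedding"`
}

err = json.Unmarshal(output.Body, &resp)
fmt.Println(resp.Embedding)
//...

To try it out: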
go run titan-text-embedding/main.go "<your input>"
For example:
go run titan-text-embedding/main.go "cat"
go run titan-text-embedding/main.go "dog"
go run titan-text-embedding/main.go "trex"
The output you see is simply a slice of float64s 🤷🏽.

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.