Getting started with the Amazon Bedrock Converse API
Learn the basics of using the Amazon Bedrock Converse API with large language models on Amazon Bedrock.
- Set up the Boto3 AWS SDK and Python: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html
- Select an AWS region that supports the Anthropic Claude 3 Sonnet model. I'm using us-west-2 (Oregon). You can check the documentation for model support by region, or query it programmatically as shown in the sketch after this list.
- Configure Amazon Bedrock model access for your account and region. Example here: https://catalog.workshops.aws/building-with-amazon-bedrock/en-US/prerequisites/bedrock-setup
- If you don’t have your own local integrated development environment, you can try AWS Cloud9. Setup instructions here: https://catalog.workshops.aws/building-with-amazon-bedrock/en-US/prerequisites/cloud9-setup
- Large language models are non-deterministic. You should expect different results than those shown in this article.
- If you run this code from your own AWS account, you will be charged for the tokens consumed.
- I generally subscribe to a “Minimum Viable Prompt” philosophy. You may need to write more detailed prompts for your use case.
- Not every model supports all of the capabilities of the Converse API, so it’s important to review the supported model features in the official documentation.
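Before running the examples, it can help to confirm that your credentials work and that your chosen region offers the model. Here's a minimal sketch, assuming the default AWS profile and the us-west-2 region, that lists the Anthropic models available in the region. (Note that listing models doesn't confirm you've been granted model access; that's configured separately, as described above.)

import boto3

# Use the 'bedrock' control-plane client (not 'bedrock-runtime') to list models.
bedrock_control = boto3.client(service_name='bedrock', region_name='us-west-2')

response = bedrock_control.list_foundation_models()

# Print the Anthropic model IDs offered in this region.
for model in response['modelSummaries']:
    if model['providerName'] == 'Anthropic':
        print(model['modelId'])

If anthropic.claude-3-sonnet-20240229-v1:0 appears in the output, your credentials and region should be ready for the examples below.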
We start the conversation by creating a message list and adding a `text` content block where we ask the model "How are you today?". We set a `maxTokens` value to limit the length of the response, and we also set the `temperature` to zero to minimize the variability of responses.
import boto3, json

# Create a Bedrock Runtime client from the default session.
session = boto3.Session()
bedrock = session.client(service_name='bedrock-runtime')

# Start the conversation with a single user message.
message_list = []

initial_message = {
    "role": "user",
    "content": [
        { "text": "How are you today?" }
    ],
}

message_list.append(initial_message)

# Send the conversation to the model via the Converse API.
response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=message_list,
    inferenceConfig={
        "maxTokens": 2000,
        "temperature": 0
    },
)

response_message = response['output']['message']
print(json.dumps(response_message, indent=4))
{
    "role": "assistant",
    "content": [
        {
            "text": "I'm doing well, thanks for asking! I'm an AI assistant created by Anthropic to be helpful, harmless, and honest."
        }
    ]
}
# Add the model's response to the conversation history, then display the full list.
message_list.append(response_message)
print(json.dumps(message_list, indent=4))
[
    {
        "role": "user",
        "content": [
            {
                "text": "How are you today?"
            }
        ]
    },
    {
        "role": "assistant",
        "content": [
            {
                "text": "I'm doing well, thanks for asking! I'm an AI assistant created by Anthropic to be helpful, harmless, and honest."
            }
        ]
    }
]
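Because each call to converse is stateless, this append-call-append pattern repeats for every turn of the conversation. If you find yourself repeating it, you could wrap it in a small helper function. The chat function below is just a hypothetical convenience wrapper, reusing the bedrock client from the example above:

def chat(bedrock, message_list, user_text,
         model_id="anthropic.claude-3-sonnet-20240229-v1:0"):
    # Append the user's message to the running conversation.
    message_list.append({
        "role": "user",
        "content": [{ "text": user_text }],
    })
    response = bedrock.converse(
        modelId=model_id,
        messages=message_list,
        inferenceConfig={ "maxTokens": 2000, "temperature": 0 },
    )
    # Append the assistant's reply so the next turn has full context.
    response_message = response['output']['message']
    message_list.append(response_message)
    return response_message['content'][0]['text']

# Example usage:
# print(chat(bedrock, message_list, "What day is it today?"))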

Next, let's include an image in the conversation and ask the model to describe it.
# Read the image file as raw bytes.
with open("image.webp", "rb") as image_file:
    image_bytes = image_file.read()

# Build a user message that mixes text and image content blocks.
image_message = {
    "role": "user",
    "content": [
        { "text": "Image 1:" },
        {
            "image": {
                "format": "webp",
                "source": {
                    "bytes": image_bytes  # no base64 encoding required!
                }
            }
        },
        { "text": "Please describe the image." }
    ],
}

message_list.append(image_message)

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=message_list,
    inferenceConfig={
        "maxTokens": 2000,
        "temperature": 0
    },
)

response_message = response['output']['message']
print(json.dumps(response_message, indent=4))

# Keep the conversation history up to date.
message_list.append(response_message)
{
    "role": "assistant",
    "content": [
        {
            "text": "The image shows a miniature model of a house, likely a decorative ornament or toy. The house has a blue exterior with white window frames and a red tiled roof. It appears to be made of ceramic or a similar material. The miniature house is placed on a surface with some greenery and yellow flowers surrounding it, creating a whimsical and natural setting. The background is slightly blurred, allowing the small house model to be the focal point of the image."
        }
    ]
}
Now let's ask the model to summarize the conversation so far, this time passing a system prompt to set the style of the response.
summary_message = {
    "role": "user",
    "content": [
        { "text": "Can you please summarize our conversation so far?" }
    ],
}

message_list.append(summary_message)

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=message_list,
    # The system parameter applies instructions across the whole conversation.
    system=[
        { "text": "Please respond to all requests in the style of a pirate." }
    ],
    inferenceConfig={
        "maxTokens": 2000,
        "temperature": 0
    },
)

response_message = response['output']['message']
print(json.dumps(response_message, indent=4))

message_list.append(response_message)
{
    "role": "assistant",
    "content": [
        {
            "text": "Arr, matey! Let me spin ye a tale of our conversatin' thus far. Ye greeted me shipshape, askin' how I was farin' on this fine day. I replied that I be doin' well as yer trusty AI pirate mate, ready to lend a hand. Then ye showed me a pretty little image of a wee house ornament, all blue an' red with windows an' surrounded by greenery. I described to ye what I spotted in that thar image, not lettin' any details go unnoticed by me eagle eyes. Now ye be askin' ol' Claude to summarize our whole parley up to this point. I aimed to give ye a full account, regaled in true pirate style, of how our voyage has gone so far. Arrr, how'd I do wit' that summary, matey?"
        }
    ]
}
The `stopReason` property tells us why the model completed the message. This can be useful for your application logic, error handling, or troubleshooting. The `usage` property includes details about the input and output tokens, which can help you understand the charges for your API call.
print("Stop Reason:", response['stopReason'])
print("Usage:", json.dumps(response['usage'], indent=4))
Stop Reason: end_turn
Usage: {
    "inputTokens": 629,
    "outputTokens": 154,
    "totalTokens": 783
}
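If you're making multiple calls in a conversation, you might also accumulate usage across calls to estimate your total token consumption. The total_usage dictionary below is just an illustration, not part of the API:

total_usage = {"inputTokens": 0, "outputTokens": 0, "totalTokens": 0}

# After each call to converse(), add the reported token counts to the running totals.
for key in total_usage:
    total_usage[key] += response['usage'][key]

print("Cumulative usage:", json.dumps(total_usage, indent=4))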
Other stop reasons include exceeding the configured maximum token count (`max_tokens`), requesting a tool (`tool_use`), or triggering a content filter (`content_filtered`). Review the official documentation for the full list of stop reasons.
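For example, your application logic might branch on the stop reason. A minimal sketch:

stop_reason = response['stopReason']

if stop_reason == 'end_turn':
    # The model finished its response normally.
    pass
elif stop_reason == 'max_tokens':
    # The response was cut off; consider raising the maxTokens value.
    print("Response truncated at the maxTokens limit.")
elif stop_reason == 'content_filtered':
    # The response was filtered; handle according to your use case.
    print("Response was modified or blocked by a content filter.")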
To learn more:

- Read Dennis Traub's article with JavaScript examples: A developer's guide to Bedrock's new Converse API.
- And this article with Java examples: A Java developer's guide to Bedrock's new Converse API.
- Not every model supports every capability of the Converse API, so it’s important to review the supported model features in the official documentation.
- You can find more generative AI hands-on activities at the Building with Amazon Bedrock workshop guide.
Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.