Getting started with the Amazon Bedrock Converse API

Learn the basics of using the Amazon Bedrock Converse API with large language models on Amazon Bedrock.

Jason Stehle
Amazon Employee
Published Jun 9, 2024

Introduction

This article is part 1 of a series on tool use with Amazon Bedrock. Before we can dive into tool use, I’d like to provide a quick tutorial on the Amazon Bedrock Converse API.
The Amazon Bedrock Converse API provides a consistent way to access large language models (LLMs) using Amazon Bedrock. It supports turn-based messages between the user and the generative AI model. It also provides a consistent format for tool definitions for the models that support tool use (aka "function calling").
Why is the Converse API so important? Previously, with the InvokeModel API, you needed to use different JSON request and response structures for each model provider. The Converse API allows us to use a single format for requests and responses across all large language models on Amazon Bedrock.
Note that as of this article's writing, the Converse API only supports text generation models. Embeddings models and image generation models still require InvokeModel. You can find the list of Converse API supported models and features in the official documentation.

Setting up your development environment and AWS account

You’ll want to have the latest AWS SDK and Amazon Bedrock model access configured before proceeding.


Disclaimers

  • Large language models are non-deterministic. You should expect different results than those shown in this article.
  • If you run this code from your own AWS account, you will be charged for the tokens consumed.
  • I generally subscribe to a “Minimum Viable Prompt” philosophy. You may need to write more detailed prompts for your use case.
  • Not every model supports all of the capabilities of the Converse API, so it’s important to review the supported model features in the official documentation.

Code walkthrough: using the Amazon Bedrock Converse API

Let’s start by writing a Python script that you can run from the command line. I’ll demonstrate basic text messages, image-based messages, system prompts, and response metadata.

A basic call to the Converse API

Let’s start by defining a simple message and adding it to an empty list of messages. We’re creating a message from the “user” role. Within that message, we can include a list of content blocks. In this example, we have a single text content block where we ask the model "How are you today?".
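Here’s a minimal sketch of what that looks like. The dictionary structure follows the Converse API’s message format:

```python
# A single message from the "user" role, containing one text content block.
message = {
    "role": "user",
    "content": [{"text": "How are you today?"}],
}

# The running list of messages in the conversation.
message_list = [message]
```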
We’re now ready to pass that message to Amazon Bedrock. We specify Anthropic’s Claude 3 Sonnet as the target model. We can limit the number of tokens in the model’s response by setting the maxTokens value. We also set the temperature to zero to minimize the variability of responses.
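A minimal sketch of that call, assuming your boto3 credentials are already configured and that us-east-1 is a Region where you have Claude 3 Sonnet access (the maxTokens value of 2000 is just an illustrative cap):

```python
import boto3

# Create an Amazon Bedrock Runtime client.
# (The Region is an assumption; use one where you have model access.)
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=message_list,
    inferenceConfig={
        "maxTokens": 2000,   # limit the length of the response
        "temperature": 0.0,  # minimize the variability of responses
    },
)

# The reply comes back as a message from the "assistant" role.
response_message = response["output"]["message"]
print(response_message["content"][0]["text"])
```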
Running this will print a short, friendly reply from the model.

Alternating user and assistant messages

You can use the Converse API to send a list of previous messages along with a new message to the LLM to continue the conversation. You must alternate between messages from the “user” and “assistant” roles. The last message in the list should be from the “user” role, so that the LLM can respond to it.
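Continuing the sketch from above, we append the assistant’s reply to our list and print the conversation:

```python
# Add the assistant's reply to the conversation history.
message_list.append(response_message)

# Print each message's role and first text block.
for message in message_list:
    print(f"{message['role']}: {message['content'][0]['text']}")
```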
This will display our conversation so far: the original question from the user, followed by the assistant’s reply.
We'll need to add another "user" message to the list before we can send a request to the model.
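For example (the follow-up question itself is just an illustration):

```python
# Append a follow-up question from the "user" role.
message_list.append({
    "role": "user",
    "content": [{"text": "What kinds of questions can I ask you?"}],
})

# Send the full conversation back to the model, then record its reply
# so the "user" and "assistant" roles keep alternating.
response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=message_list,
    inferenceConfig={"maxTokens": 2000, "temperature": 0.0},
)
message_list.append(response["output"]["message"])
```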

Including an image in a message

NOTE: The code below requires a local WebP-format file named “image.webp”. You can download this image and save it to your code folder, or alter the code to use an image and format of your choosing. See the Converse API ImageBlock documentation for the list of supported image types, and Anthropic's vision documentation for image size constraints.
[Image: a miniature house placed outside]
Now we’ll load a local image file and add its bytes to an image content block. We follow Anthropic’s vision prompting tips and preface our image with the label “Image 1:”, then follow the image with our request.
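A sketch of that request, using the Converse API’s image content block (the wording of the request text is an assumption):

```python
# Load the raw bytes of the local WebP image.
with open("image.webp", "rb") as f:
    image_bytes = f.read()

# Label the image, include its bytes, then make our request.
image_message = {
    "role": "user",
    "content": [
        {"text": "Image 1:"},
        {"image": {"format": "webp", "source": {"bytes": image_bytes}}},
        {"text": "Please describe the image."},
    ],
}
message_list.append(image_message)

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=message_list,
    inferenceConfig={"maxTokens": 2000, "temperature": 0.0},
)
message_list.append(response["output"]["message"])
print(response["output"]["message"]["content"][0]["text"])
```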
This will generate a detailed description of the image.
Note: we’re not printing the message with the image bytes, because it would be really, really long and really, really not interesting.
You can learn more about Claude 3’s vision capabilities here: https://docs.anthropic.com/en/docs/vision

Setting a system prompt

You can set a system prompt to communicate basic instructions for the large language model outside of the normal conversation. System prompts are generally used by the developer to define the tone and constraints for the conversation. In this case, we’re instructing Claude to act like a pirate.
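A sketch of that call, passing the system prompt through the converse method’s system parameter (the exact prompt wording is an assumption; the summary request matches the pirate reply shown below):

```python
# System prompts are a separate list of content blocks,
# passed alongside (not inside) the message list.
system_prompts = [
    {"text": "You are a helpful assistant. Always answer in the voice of a pirate."}
]

# Ask the model to summarize the conversation, in character.
message_list.append({
    "role": "user",
    "content": [{"text": "Please summarize our conversation so far."}],
})

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=message_list,
    system=system_prompts,
    inferenceConfig={"maxTokens": 2000, "temperature": 0.0},
)
message_list.append(response["output"]["message"])
print(response["output"]["message"]["content"][0]["text"])
```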
This will generate a response similar to the following piratized summary of the conversation so far:
Arr, matey! Let me spin ye a tale of our conversatin' thus far. Ye greeted me shipshape, askin' how I was farin' on this fine day. I replied that I be doin' well as yer trusty AI pirate mate, ready to lend a hand. Then ye showed me a pretty little image of a wee house ornament, all blue an' red with windows an' surrounded by greenery. I described to ye what I spotted in that thar image, not lettin' any details go unnoticed by me eagle eyes. Now ye be askin' ol' Claude to summarize our whole parley up to this point. I aimed to give ye a full account, regaled in true pirate style, of how our voyage has gone so far. Arrr, how'd I do wit' that summary, matey?
You can learn more about system prompts here: https://docs.anthropic.com/en/docs/system-prompts

Getting response metadata and token counts

The Converse method also returns metadata about the API call.
The stopReason property tells us why the model completed the message. This can be useful for your application logic, error handling, or troubleshooting.
The usage property includes details about the input and output tokens. This can help you understand the charges for your API call.
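Continuing the sketch, both properties are available directly on the converse response:

```python
# Why the model stopped generating (for example, "end_turn" or "max_tokens").
print("Stop reason:", response["stopReason"])

# Token counts for this call: input, output, and total.
usage = response["usage"]
print("Input tokens: ", usage["inputTokens"])
print("Output tokens:", usage["outputTokens"])
print("Total tokens: ", usage["totalTokens"])
```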
This will print the stop reason and token counts for our most recent API call.
In this case, Claude stopped because it had nothing left to say for now (end_turn). Other stop reasons include hitting the response token limit (max_tokens), requesting a tool (tool_use), or triggering a content filter (content_filtered). Review the official documentation for the full list of stop reasons.
Keep in mind that the displayed usage numbers are only for the last API call we made. You can use these token counts to determine the cost of the API call. You can learn more about token-based pricing on the Amazon Bedrock website.

Conclusion

Now that you've seen the basics of how the Converse API works, let's move on to the next article in the series where we dive into tool use!

Learn more

Continue reading articles in this series about tool use / function calling.

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
