Intro to Tool Use with the Amazon Bedrock Converse API

Learn the basics of using the Amazon Bedrock Converse API to perform tool use / function calling with LLMs on Amazon Bedrock.

Jason Stehle
Amazon Employee
Published Jun 9, 2024

Introduction

This article is part 2 of a series on tool use with Amazon Bedrock. In part 1, I provided a quick tutorial on the Amazon Bedrock Converse API. In this article, I will walk through a simple tool use example to illustrate how it works. Later articles will dive into more advanced use cases including JSON generation and agent loops.
The Converse API provides a consistent way to access large language models (LLMs) on Amazon Bedrock. It supports turn-based messages between the user and the generative AI model. It also provides a consistent format for tool definitions for the models that support tool use (aka "function calling").
Tool use is a capability that allows a large language model to tell the calling application to invoke a function with parameters supplied by the model. The available functions and supported parameters are passed to the model along with a prompt. It's important to note that the large language model does not call a function itself - it just returns JSON and lets the calling application do the rest.
Why is native tool use so important? Because we now get built-in support for turning free-form content into automation-friendly and analytics-friendly structured data. While advanced prompt engineers had some success manually building tool use applications with existing large language models, those approaches were often brittle, XML-based, or prone to producing invalid JSON. I believe native support makes tool use much more accessible and feasible for the rest of us.
Tool Use with the Amazon Bedrock Converse API follows these steps:
  1. The calling application passes (A) tool definitions and (B) a triggering message to the large language model.
  2. If the request matches a tool definition, the model generates a tool use request, including the parameters to pass to the tool.
  3. The calling application extracts the parameters from the model’s tool use request and passes them to the corresponding local function for the tool.
  4. The calling application can then either use the tool result directly, or pass the tool result back to the model to get a follow-on response.
  5. The model either returns a final response, or requests another tool.

Setting up your development environment and AWS account

You’ll want to have the latest AWS SDK and Amazon Bedrock model access configured before proceeding.


Disclaimers

  • Large language models are non-deterministic. You should expect different results than those shown in this article.
  • If you run this code from your own AWS account, you will be charged for the tokens consumed.
  • I generally subscribe to a “Minimum Viable Prompt” philosophy. You may need to write more detailed prompts for your use case.
  • Not every model supports all of the capabilities of the Converse API, so it’s important to review the supported model features in the official documentation.

Code walkthrough: using the Amazon Bedrock Converse API

Let’s start by writing a Python script that you can run from the command line. I’ll demonstrate basic tool definition, passing generated parameters to a function, returning a tool result to the model, and error handling.

Defining a tool and sending a message that will make Claude ask for tool use

Let’s start by defining a cosine tool using the Converse API tool definition format. We'll dive deeper into this format in a later article in this series.
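Here’s a minimal sketch of such a definition (the description text and schema wording are just one reasonable choice):

    tool_list = [
        {
            "toolSpec": {
                "name": "cosine",
                "description": "Calculate the cosine of x.",
                "inputSchema": {
                    "json": {
                        "type": "object",
                        "properties": {
                            "x": {
                                "type": "number",
                                "description": "The number to pass to the cosine function."
                            }
                        },
                        "required": ["x"]
                    }
                }
            }
        }
    ]
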
We’ll then create a simple message to trigger the tool use request and add it to an empty list of messages. We’re creating a message from the “user” role. Within that message, we can include a list of content blocks. In this example, we have a single text content block asking the model, "What is the cosine of 7?"
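In code, that looks something like this:

    message_list = []

    initial_message = {
        "role": "user",
        "content": [
            {"text": "What is the cosine of 7?"}
        ]
    }

    message_list.append(initial_message)
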
We’re now ready to pass the tool definition and message to Amazon Bedrock. We specify Anthropic’s Claude 3 Sonnet as the target model. We can limit the number of tokens in the model’s response by setting the maxTokens value. We also set the temperature to zero to minimize the variability of responses.
Note that we set a system message here so that Claude won’t attempt to do any math itself. The current generation of large language models cannot reliably do math.
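Putting it together with boto3 (the system prompt wording below is just one way to steer Claude away from doing the math itself):

    import json
    import boto3

    session = boto3.Session()
    bedrock = session.client(service_name="bedrock-runtime")

    response = bedrock.converse(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        messages=message_list,
        system=[{"text": "You must only do math by using the provided tools."}],
        # Cap the response length; temperature 0 minimizes response variability.
        inferenceConfig={"maxTokens": 2000, "temperature": 0},
        toolConfig={"tools": tool_list}
    )

    response_message = response["output"]["message"]
    print(json.dumps(response_message, indent=4))
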
This will generate a response similar to the following:
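(Illustrative; the exact text and toolUseId will vary from run to run.)

    {
        "role": "assistant",
        "content": [
            {
                "text": "Okay, let's calculate the cosine of 7 using the cosine tool:"
            },
            {
                "toolUse": {
                    "toolUseId": "tooluse_EXAMPLE_ID",
                    "name": "cosine",
                    "input": {
                        "x": 7
                    }
                }
            }
        ]
    }
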
There are a few things to note here:
  1. In this case, Claude also generated some text prefacing its tool use request. Claude will only do this some of the time. Sometimes it just generates a tool use request with no text accompanying it.
  2. The toolUse block includes a toolUseId. You can use the toolUseId to help Claude connect the initial tool request with a corresponding tool result you send back to Claude for additional processing.
  3. The toolUse block includes the tool name to invoke, in this case cosine.
  4. The input property contains the JSON structure of arguments to pass to the tool. You can also use this JSON directly (we’ll cover this more in-depth in a later article). In this case, Claude is asking the calling application to pass the cosine function an argument x with value 7.

Calling a function based on the toolUse content block

We’ll now loop through the response message’s content blocks. We’ll use the cosine tool if requested, and print any text content blocks from the LLM’s message.
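A sketch of that loop, using Python’s math.cos as the tool’s local implementation:

    import math

    for content_block in response_message["content"]:
        if "toolUse" in content_block:
            tool_use_block = content_block["toolUse"]
            if tool_use_block["name"] == "cosine":
                # Call the local function with the argument Claude supplied.
                tool_result_value = math.cos(tool_use_block["input"]["x"])
                print(f"Tool result: {tool_result_value}")
        elif "text" in content_block:
            # Print any commentary Claude generated alongside the tool use request.
            print(content_block["text"])
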
This will generate a response similar to the following:
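(The first line is illustrative and may not appear at all; the computed value itself is deterministic.)

    Okay, let's calculate the cosine of 7 using the cosine tool:
    Tool result: 0.7539022543433046
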
The above pattern might be totally adequate for your use case. If you don't need to pass the tool result back to Claude, then you can just have your application proceed with the direct tool call result. In the next section, I'll show you how to send a follow-up request to Claude to get a final response.

Passing the tool result back to Claude

Now we’ll loop through the content blocks from the response message, and check for a tool use request. If there’s a tool use request, we’ll call the named tool and pass it the input parameters provided by Claude. We’ll then build a message with a toolResult content block to send back to Claude for a final response.
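Something like the following works; note that the assistant’s tool use message must be appended to the conversation history before the user message carrying the matching toolResult:

    follow_up_content_blocks = []

    for content_block in response_message["content"]:
        if "toolUse" in content_block:
            tool_use_block = content_block["toolUse"]
            if tool_use_block["name"] == "cosine":
                tool_result_value = math.cos(tool_use_block["input"]["x"])
                # The toolUseId ties this result back to Claude's original request.
                follow_up_content_blocks.append({
                    "toolResult": {
                        "toolUseId": tool_use_block["toolUseId"],
                        "content": [
                            {"json": {"result": tool_result_value}}
                        ]
                    }
                })

    if follow_up_content_blocks:
        # Append the assistant's tool use request, then the tool result.
        message_list.append(response_message)
        message_list.append({
            "role": "user",
            "content": follow_up_content_blocks
        })

        response = bedrock.converse(
            modelId="anthropic.claude-3-sonnet-20240229-v1:0",
            messages=message_list,
            system=[{"text": "You must only do math by using the provided tools."}],
            inferenceConfig={"maxTokens": 2000, "temperature": 0},
            toolConfig={"tools": tool_list}
        )

        print(json.dumps(response["output"]["message"], indent=4))
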
This will generate a response similar to the following:
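(Illustrative; Claude’s exact wording will vary.)

    {
        "role": "assistant",
        "content": [
            {
                "text": "The cosine of 7 is approximately 0.7539."
            }
        ]
    }
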
Great! Everything has worked so far. But what happens when tool use fails?

Error handling - letting Claude know that tool use failed

Now we’re going to take a step back and manufacture an error to send back to the LLM. We set the status attribute to error so that Claude can decide what to do next.
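A sketch of that (the error text is illustrative; what matters is the status field on the toolResult):

    # Rewind to just after Claude's original tool use request, then manufacture
    # a failed tool call by sending back a toolResult with status "error".
    # tool_use_block is the toolUse block extracted from the earlier response.
    error_message = {
        "role": "user",
        "content": [
            {
                "toolResult": {
                    "toolUseId": tool_use_block["toolUseId"],
                    "content": [
                        {"text": "The cosine tool failed with an internal error."}
                    ],
                    "status": "error"
                }
            }
        ]
    }

    message_list.append(response_message)  # the original tool use request
    message_list.append(error_message)

    response = bedrock.converse(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        messages=message_list,
        system=[{"text": "You must only do math by using the provided tools."}],
        inferenceConfig={"maxTokens": 2000, "temperature": 0},
        toolConfig={"tools": tool_list}
    )

    print(json.dumps(response["output"]["message"], indent=4))
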
This will generate a response similar to the following:
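(Illustrative; Claude’s exact wording will vary.)

    {
        "role": "assistant",
        "content": [
            {
                "text": "I apologize, but it looks like the cosine tool encountered an error. Since I can't do the calculation myself, I'm unable to determine the cosine of 7."
            }
        ]
    }
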
So in this case, Claude is out of options and has to give up on the tool request.

Conclusion

While this was a trivial example, hopefully it gives you a basic sense of how tool use works. In later articles I'll show you more advanced examples of orchestrating multiple tools and generating more complex JSON responses.

Learn more

Continue reading articles in this series about tool use / function calling.

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
