
Build a Tool Use-Based Agent Loop with Amazon Bedrock

Learn how to build a simple agentic loop using the Converse API’s function calling capabilities with large language models on Amazon Bedrock.

Jason Stehle
Amazon Employee
Published Jun 9, 2024

Introduction

This article is part of a series on tool use with Amazon Bedrock. In part 1, I provided a quick tutorial on the Amazon Bedrock Converse API. In part 2, I introduced how to use tools with the Converse API. In this article, I’ll walk through building a simple agent loop to orchestrate several tools to complete a user request. I'll even throw the model a curveball to demonstrate error handling and planning. This is meant to be a trivial example, but should hopefully illustrate how agent loops could be applied to more significant use cases.
The Converse API provides a consistent way to access large language models (LLMs) on Amazon Bedrock. It supports turn-based messages between the user and the generative AI model. It also provides a consistent format for tool definitions for the models that support tool use (aka “function calling”).
Tool use is a technique that allows a large language model to tell the calling application to invoke a function with parameters supplied by the model. The available functions and supported parameters are passed to the model along with a prompt. It's important to note that the large language model does not call a function itself - it just returns JSON and lets the calling application do the rest.
An agent loop is a generative AI design pattern that allows an LLM to solve a multi-step problem by iterating over a series of interactions, calling functions and interpreting their results, until the ultimate goal is achieved.
Why are agent capabilities important? They allow us to use generative AI to help solve more complex problems and perform more advanced tasks. I believe that tool use and agent patterns add a much richer degree of usefulness to LLMs beyond basic text processing capabilities.
The tool use-based agent loop featured in this article follows these steps:
  1. The calling application passes (A) tool definitions and (B) a triggering message to the large language model.
  2. The model generates a tool use request, including the parameters to pass to the tool.
  3. The calling application extracts the parameters from the model’s tool use request and passes them to the corresponding local function to get some sort of result (that local function could then call an external service if necessary).
  4. The calling application passes the tool result back to the model to get a follow-on response.
  5. The model either returns a final response or requests another tool (return to step 3 above).
  6. If too many loops occur, then the process ends without resolution.

Setting up your development environment and AWS account

You’ll want to have the latest AWS SDK and Amazon Bedrock model access configured before proceeding.

Disclaimers

  • Large language models are non-deterministic. You should expect different results than those shown in this article.
  • If you run this code from your own AWS account, you will be charged for the tokens consumed.
  • I generally subscribe to a “Minimum Viable Prompt” philosophy. You may need to write more detailed prompts for your use case.
  • Not every model supports all of the capabilities of the Converse API, so it’s important to review the supported model features in the official documentation.

Code walkthrough: using multiple tools within an agent loop

Let’s start by writing a Python script that you can run from the command line. I’ll demonstrate defining tools, function-calling, error handling, and running a loop until a resolution or max loop limit is reached.

Define dependencies and a tool error class

We’re using a custom ToolError class to handle some of the potential things that can go wrong with tool use.
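Here’s a minimal sketch of what this step might look like (the specific imports are assumptions based on what the rest of the walkthrough needs):

```python
import json   # used later to pretty-print the message history
import math   # provides the sin, cos, and tan implementations behind our tools
import boto3  # AWS SDK for Python, used to call the Bedrock Runtime


class ToolError(Exception):
    """Raised when a requested tool is unavailable or can't produce a result."""
    pass
```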

Define a function to call Amazon Bedrock and return the response

We’re going to call Anthropic Claude 3 Sonnet using the converse method. We pass it a list of messages and a list of tools. We also set an output token limit and set the temperature to 0 to reduce the variability between calls. (During development and testing, it can be preferable to set the temperature higher for more variability in responses.)
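A sketch of that function might look like the following (the helper name call_bedrock is just a placeholder; the model ID is the standard Bedrock identifier for Claude 3 Sonnet):

```python
session = boto3.Session()
bedrock = session.client(service_name="bedrock-runtime")


def call_bedrock(message_list, tool_list):
    """Send the conversation so far, plus the tool definitions, to Claude 3 Sonnet."""
    response = bedrock.converse(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        messages=message_list,
        inferenceConfig={
            "maxTokens": 2000,   # cap the length of the response
            "temperature": 0,    # reduce variability between calls
        },
        toolConfig={"tools": tool_list},
    )
    return response
```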

Add a function to handle tool use method calls

We’ll implement this function as a simple series of if/elif statements to call basic math functions. Note that we're deliberately skipping the tangent tool so something interesting can happen!
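One way to sketch this is shown below (the dispatcher name get_tool_result is a placeholder, and here the tangent branch simply raises the ToolError rather than being omitted; either way the model gets an error back):

```python
def get_tool_result(tool_use_block):
    """Dispatch a toolUse request from the model to the matching local function."""
    tool_name = tool_use_block["name"]
    tool_input = tool_use_block["input"]

    print(f"Using tool: {tool_name}")  # so we can watch the loop work from the command line

    if tool_name == "cosine":
        return math.cos(tool_input["x"])
    elif tool_name == "sine":
        return math.sin(tool_input["x"])
    elif tool_name == "tangent":
        # Deliberately broken, so the model has to find another way to get a tangent.
        raise ToolError("The tangent tool is currently unavailable.")
    elif tool_name == "divide_numbers":
        return tool_input["x"] / tool_input["y"]
    else:
        raise ToolError(f"Unknown tool: {tool_name}")
```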

Add a function to handle LLM responses and determine if a follow-up tool call is needed

The LLM may return a combination of text and tool use content blocks in its response. We’ll look for toolUse content blocks, attempt to run the requested tools, and return a message with a toolResult block if a tool was used.
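A sketch of that handler (the function name handle_response is a placeholder; the toolResult block structure follows the Converse API message format):

```python
def handle_response(response_message):
    """Look for toolUse blocks in the model's message and build a toolResult message if any ran."""
    follow_up_content_blocks = []

    for content_block in response_message["content"]:
        if "toolUse" in content_block:
            tool_use_block = content_block["toolUse"]
            try:
                tool_result_value = get_tool_result(tool_use_block)
                follow_up_content_blocks.append({
                    "toolResult": {
                        "toolUseId": tool_use_block["toolUseId"],
                        "content": [{"json": {"result": tool_result_value}}],
                    }
                })
            except ToolError as e:
                follow_up_content_blocks.append({
                    "toolResult": {
                        "toolUseId": tool_use_block["toolUseId"],
                        "content": [{"text": repr(e)}],
                        "status": "error",
                    }
                })

    if follow_up_content_blocks:
        # A tool was used, so send the results back to the model as a user message.
        return {"role": "user", "content": follow_up_content_blocks}

    return None
```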

Add a function to run the request/response loop

This function will run a request/response loop until either the LLM stops requesting tool use or the maximum number of loops has been reached.
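Roughly like this (run_loop and MAX_LOOPS are placeholder names, and the loop limit itself is arbitrary):

```python
MAX_LOOPS = 6  # safety limit so a confused model can't loop forever


def run_loop(prompt, tool_list):
    """Alternate between the model and local tools until no more tools are requested."""
    loop_count = 0
    message_list = [
        {"role": "user", "content": [{"text": prompt}]}
    ]

    while True:
        response = call_bedrock(message_list, tool_list)
        response_message = response["output"]["message"]
        message_list.append(response_message)

        loop_count += 1
        if loop_count >= MAX_LOOPS:
            print(f"Hit the loop limit ({MAX_LOOPS}), exiting without a resolution.")
            break

        follow_up_message = handle_response(response_message)
        if follow_up_message is None:
            # No toolUse blocks in the response, so the model has given its final answer.
            break

        message_list.append(follow_up_message)

    return message_list
```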

Define the tools

We’re defining four tools: three basic trigonometry functions (sine, cosine, and tangent) and a division function. We’ll dive deeper into the tool definition format in a later article in this series.
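For reference, here’s the general shape the Converse API expects; two of the four definitions are sketched in full, and the descriptions are placeholders:

```python
tools = [
    {
        "toolSpec": {
            "name": "cosine",
            "description": "Calculate the cosine of x.",
            "inputSchema": {
                "json": {
                    "type": "object",
                    "properties": {
                        "x": {"type": "number", "description": "The value to pass to the function."}
                    },
                    "required": ["x"],
                }
            },
        }
    },
    {
        "toolSpec": {
            "name": "divide_numbers",
            "description": "Divide x by y.",
            "inputSchema": {
                "json": {
                    "type": "object",
                    "properties": {
                        "x": {"type": "number", "description": "The numerator."},
                        "y": {"type": "number", "description": "The denominator."},
                    },
                    "required": ["x", "y"],
                }
            },
        }
    },
    # The "sine" and "tangent" toolSpecs follow the same single-parameter shape as "cosine".
]
```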

Pass a prompt to start the loop

We’re asking Anthropic Claude to calculate the tangent of 7, then printing the messages that were sent back and forth to get the answer.
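For example (the exact prompt wording here is my phrasing, not necessarily the original):

```python
messages = run_loop("What is the tangent of 7?", tools)

for message in messages:
    print(json.dumps(message, indent=4))
```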
Now we’re ready to run the code and review the results. Note that you may observe a different but hopefully similar sequence of events.

Output

While the loop is running, the script prints the tools being called. But it’s not just calling the tangent tool, it’s using a bunch of other tools as well! Let’s investigate further.
Here’s our initial message. Let’s do some trig:
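It looks something like this, assuming the prompt from the kickoff code above:

```json
{
    "role": "user",
    "content": [
        {
            "text": "What is the tangent of 7?"
        }
    ]
}
```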
Claude knows it can use the provided tangent tool, so it requests the tool:
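The toolUse block in the assistant message looks something like this (any accompanying text block is omitted, and the toolUseId is a placeholder for the model-generated ID):

```json
{
    "role": "assistant",
    "content": [
        {
            "toolUse": {
                "toolUseId": "tooluse_EXAMPLE_ID",
                "name": "tangent",
                "input": {
                    "x": 7
                }
            }
        }
    ]
}
```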
Uh-oh, the tangent tool doesn't work properly! Whatever shall we do? Let's tell Claude the bad news:
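The bad news goes back as a toolResult block with an error status, along these lines (the error text matches the sketch of the tool dispatcher above):

```json
{
    "role": "user",
    "content": [
        {
            "toolResult": {
                "toolUseId": "tooluse_EXAMPLE_ID",
                "content": [
                    {
                        "text": "ToolError('The tangent tool is currently unavailable.')"
                    }
                ],
                "status": "error"
            }
        }
    ]
}
```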
But what’s this? Claude knows trigonometric identities!?! Here comes plan B! Claude starts by asking for the sine tool:
The sine tool is working properly, so we send the tool result back to Claude:
Claude, happily in possession of the sine of 7, now requests the cosine of 7:
A quick trip to cosine-land, then we send the result back to Claude:
And now in a stunning climactic moment, Claude asks for the divide_numbers tool:
Let’s send this sweet, sweet division result back to Claude:
And the grand finale. Tangent accomplished!
(Roll credits.)

Conclusion

While this specific example wasn't a great use of tokens, hopefully it illustrated how tools can be combined and orchestrated to solve a multi-step process. I hope you learned something and maybe got some ideas about how you could design agents based on tool use. We'll look at more advanced tool definitions in a later article.

Learn more

Continue reading articles in this series about Tool Use / Function Calling.

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
