
Building Custom LangChain Agents and Tools with Amazon Bedrock

Learn to build custom prompts and tools for LangChain agents.

Banjo Obayomi
Amazon Employee
Published Nov 1, 2023
Last Modified Mar 13, 2024
You've already dipped your toes into generative AI, explored large language models (LLMs), and experimented with prompt engineering. Now you're ready for the next challenge: building an "agent" that gives your LLM a set of tools, much like a calculator helps us humans solve math problems.
While LangChain is great for building agents, creating custom prompts and tools can get a bit complex.
In this hands-on guide, let's get straight to it. I'll walk you through refining Agent AWS, our AWS Solutions Architect agent. You'll see how to design custom prompts and tools and plug this agent into a Streamlit chatbot. By the end, you'll have an agent capable of querying AWS documentation and deploying AWS Lambda functions, all backed by Amazon Bedrock. Ready to step up your agent game? Let's dive in.
The full code of the agent can be viewed here.
Agent AWS

Prerequisites

Before we dive into building Agent AWS, we need to set the stage. If you haven't had a chance to play with Amazon Bedrock, I encourage you to go through my quick start guide before building this agent.
Once ready, we can begin by cloning the repo and installing the libraries.
Our agent also needs an AWS IAM role and an Amazon S3 bucket it can use:
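Here's a rough sketch of that setup. The repo URL is linked above, and the environment variable names below are placeholders I'm using for illustration, so match them to whatever the repo's README expects:

```bash
# Clone the project and install its dependencies
# (substitute the repo URL linked above)
git clone <repo-url>
cd <repo-directory>
pip install -r requirements.txt

# Placeholder names: point these at an IAM role the Lambda functions
# can execute with, and an S3 bucket for deployment artifacts
export LAMBDA_ROLE=arn:aws:iam::<account-id>:role/<lambda-execution-role>
export S3_BUCKET=<your-deployment-bucket>
```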
Now that we've got Amazon Bedrock and all the essential Python libraries in place, we're geared up for the real fun. Next up, we'll build the tools that our agent will use.

Tools

Tools aren't just utilities; they're extensions of our agent. They give the LLM a mechanism to pass structured input to code that performs real actions. For Agent AWS, we're building two specialized tools: one to dig into the AWS Well-Architected Framework and another to deploy Lambda functions. Let's get building.

Querying the AWS Well-Architected Framework

Our first tool dives deep into the AWS Well-Architected Framework using a method known as Retrieval Augmented Generation (RAG). Let's break down how it works.
RAG allows us to fetch documents that are relevant to a user's query. Initially, we download the text and use the Amazon Titan embedding model to convert this text into vectors.
These vectors are then stored in a vector database, making them searchable. Here is the code for the full ingestion pipeline.
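That linked pipeline is the source of truth; here's a condensed sketch of the idea, assuming the classic LangChain (0.0.x) API, FAISS as the vector store, and a stand-in URL for the framework pages:

```python
from langchain.document_loaders import WebBaseLoader
from langchain.embeddings import BedrockEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS

# Stand-in: the real pipeline ingests the full set of Well-Architected pages
urls = ["https://docs.aws.amazon.com/wellarchitected/latest/framework/welcome.html"]

# Download the text, then split it into chunks small enough to embed
docs = WebBaseLoader(urls).load()
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)

# Convert each chunk into a vector with Amazon Titan and persist the index
embeddings = BedrockEmbeddings(model_id="amazon.titan-embed-text-v1")
FAISS.from_documents(chunks, embeddings).save_local("local_index")
```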
When a user poses a query, we transform the text into a vector using Amazon Titan. This enables us to search our vector database for documents closely matching the query.
You can view the full Python code for this tool in the repo.
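Here's a condensed sketch, reusing the FAISS index and Titan embeddings from the ingestion step. Note that the docstring doubles as the description the agent reads when deciding which tool to call:

```python
from langchain.embeddings import BedrockEmbeddings
from langchain.tools import tool
from langchain.vectorstores import FAISS

@tool
def well_arch_tool(query: str) -> dict:
    """Returns text from the AWS Well-Architected Framework related to the query."""
    embeddings = BedrockEmbeddings(model_id="amazon.titan-embed-text-v1")
    # Embed the user's query and pull back the closest document chunks
    vectorstore = FAISS.load_local("local_index", embeddings)
    docs = vectorstore.similarity_search(query)
    return {"docs": docs}
```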
You've now created a tool that empowers your agent to sift through AWS documentation like a pro. Ready to add another tool to your agent's toolkit? Up next, we're tackling Lambda function deployment.

Creating and Deploying Lambda Functions

This tool leverages boto3, the AWS SDK for Python, to take structured input from your agent and turn it into a deployed Lambda function. Say a user wants a Lambda function that generates a random number between 1 and 3000. Your agent will pass the necessary code, function name, and description to this tool.
We use the boto3 library to do the heavy lifting of deploying the function to your AWS account. While some parameters like the IAM role and S3 bucket are hard-coded, the tool is designed to be flexible where it counts.
It's crucial to strike a balance between what the LLM controls and what it doesn't. Let the LLM focus on its strengths, like generating code, while the tool handles the deployment details.
For example, this tool includes helper functions for creating a Python deployment zip, uploading it to S3, and finally deploying the Lambda function. Here is the full code for this tool.
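As a condensed sketch: classic ReAct agents pass a single string to each tool, so this version assumes a simple pipe-delimited input convention, and it hard-codes placeholder role and bucket names the way the real tool does:

```python
import io
import zipfile

import boto3
from langchain.tools import tool

ROLE_ARN = "arn:aws:iam::<account-id>:role/<lambda-execution-role>"  # placeholder
BUCKET = "<your-deployment-bucket>"  # placeholder

def create_deployment_zip(code: str) -> bytes:
    """Package the generated code as lambda_function.py inside an in-memory zip."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("lambda_function.py", code)
    return buf.getvalue()

@tool
def create_lambda_function(agent_input: str) -> str:
    """Deploys a Lambda function. Input: <function_name>|<description>|<python_code>"""
    function_name, description, code = agent_input.split("|", 2)

    # Upload the deployment package to S3, then create the function from it
    key = f"lambda/{function_name}.zip"
    boto3.client("s3").put_object(Bucket=BUCKET, Key=key, Body=create_deployment_zip(code))
    boto3.client("lambda").create_function(
        FunctionName=function_name,
        Runtime="python3.11",
        Role=ROLE_ARN,
        Handler="lambda_function.lambda_handler",
        Code={"S3Bucket": BUCKET, "S3Key": key},
        Description=description,
    )
    return f"Deployed {function_name}."
```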
You've now equipped your agent with the capability to deploy Lambda functions. With these two tools, your agent is not just smart, but actionable. What's next? Let's bring all these pieces together and build the agent itself.

Building the LangChain Agent

With our toolbox in place, let's pivot to constructing the core of our application—the LangChain agent. The first step in this journey? Crafting a prompt that guides the agent's behavior.
LangChain provides APIs for creating ReAct agents, which come with predefined prompts. These prompts structure how the agent reasons about user input and decides which tool to use. The magic lies in augmenting this prompt with a prefix and a suffix.
The more specific you are, the better your agent performs. For instance, I explicitly outline how I want my Lambda functions structured, from the file names to the return types. You can view the prefix here.
The suffix is not just an afterthought; it's a way to guide the model's behavior further. In my case, I remind the model to speak like an AWS Certified Solutions Architect and instruct users on how to invoke the Lambda functions it creates. You can view the suffix here.
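Mechanically, the prefix and suffix get stitched into the agent's prompt like so. The strings below are illustrative stand-ins, not the full prompt from the repo:

```python
from langchain.agents import ZeroShotAgent

# Illustrative stand-ins for the linked prefix and suffix
prefix = """You are an AWS Certified Solutions Architect with access to the tools below.
When writing Lambda functions, put all code in lambda_function.py."""

suffix = """Remember to answer as an AWS Certified Solutions Architect and tell the
user how to invoke any Lambda function you create.

Previous conversation:
{chat_history}

Question: {input}
{agent_scratchpad}"""

tools = [well_arch_tool, create_lambda_function]

prompt = ZeroShotAgent.create_prompt(
    tools,
    prefix=prefix,
    suffix=suffix,
    input_variables=["input", "chat_history", "agent_scratchpad"],
)
```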
Ready to see how it all comes together? Next we initialize the agent, complete with tools, memory, and our custom prompt.
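A minimal sketch of that wiring, assuming the classic ZeroShotAgent/AgentExecutor API and Anthropic's Claude on Bedrock as the underlying model:

```python
from langchain.agents import AgentExecutor, ZeroShotAgent
from langchain.chains import LLMChain
from langchain.llms import Bedrock
from langchain.memory import ConversationBufferMemory

# Claude on Amazon Bedrock drives the agent's reasoning
llm = Bedrock(model_id="anthropic.claude-v2")

# The memory key must match the {chat_history} variable in the prompt
memory = ConversationBufferMemory(memory_key="chat_history")

llm_chain = LLMChain(llm=llm, prompt=prompt)
agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools)
agent_executor = AgentExecutor.from_agent_and_tools(
    agent=agent, tools=tools, memory=memory, verbose=True
)
```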
To get a feel for your newly created agent, run python test_agent.py from your terminal.

Agent Example Run

In this example, the agent takes on the task of crafting and deploying a Lambda function focused on sentiment analysis.
Agent AWS creates Lambda function
Since our agent is an AWS Certified Solutions Architect, it knows how to use Amazon Comprehend for sentiment analysis. Our agent also instructs the user on how to invoke the function and reminds you to update your Lambda role.
Agent AWS tells customer how to invoke Lambda function
You've successfully built and tested an intelligent agent with a well-defined prompt and an arsenal of tools. What's the next step? Bringing this agent to life through a chatbot interface, courtesy of Streamlit.

Creating an Agent ChatBot with Streamlit

With a fully functional agent at our disposal, let's take the user experience up a notch by embedding it into an interactive chatbot. Streamlit makes this effortless in pure Python, especially with its native support for LangChain integration.
First things first, we don't want to initialize our agent every time the app runs. Streamlit's @st.cache_resource decorator comes to the rescue, letting us cache the agent for faster interactions.
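For example (setup_agent is a hypothetical name for whatever function builds the AgentExecutor above):

```python
import streamlit as st

@st.cache_resource
def load_agent():
    # Build the LLM, tools, memory, and prompt once; reuse them across reruns
    return setup_agent()  # hypothetical factory wrapping the agent code above

agent_executor = load_agent()
```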
Now, let's build the chat interface itself. Streamlit provides a straightforward way to send messages to our agent. To make it even more interactive, we can use LangChain's StreamlitCallbackHandler to visualize how the agent picks its tools based on user queries.
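A condensed sketch of that chat loop, under the same assumptions as above:

```python
import streamlit as st
from langchain.callbacks import StreamlitCallbackHandler

st.title("Agent AWS")

if user_input := st.chat_input("Ask your Solutions Architect agent..."):
    st.chat_message("user").write(user_input)
    with st.chat_message("assistant"):
        # Render the agent's tool choices and reasoning live in the UI
        st_callback = StreamlitCallbackHandler(st.container())
        response = agent_executor.run(user_input, callbacks=[st_callback])
        st.write(response)
```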
If you'd like to see how all these pieces fit together, here's the full code for the chatbot.
In 50 lines of code, we’ve built a Streamlit-powered chatbot that not only talks but also acts through our LangChain agent.

Conclusion

We embarked on a quest to build a custom agent, and what a journey it's been! From setting up Amazon Bedrock, crafting specialized tools, to building the LangChain agent and finally embedding it in a Streamlit chatbot, we've created an end-to-end solution that's both intelligent and user-friendly.
To summarize, we've:
  • Initialized Amazon Bedrock for our foundation models
  • Developed tools for querying the AWS Well-Architected Framework and deploying Lambda functions
  • Created a LangChain agent with a well-defined prompt and integrated it with our tools
  • Designed a Streamlit chatbot that brings our agent to life
You're now equipped to build your own customized agents, powered by Amazon Bedrock and enhanced with LangChain and Streamlit. The building blocks are all here; it's up to you to assemble them into your own innovative solutions.
So, what will you build next?

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
