How to automate user requests using Agents with large language models (LLMs) and Action Groups

This is not just about creating an informational chatbot, but about fulfilling user requests by identifying the correct action and processing it. In this article, we will learn about request fulfillment using Amazon Bedrock Agents and Action Groups.

Jose Yapur
Amazon Employee
Published Sep 27, 2024

Authors: Ana Cunha (ana.dev) & Jose Yapur (develozombie)
Have you heard of "happy problems"? When you have a successful product and a thriving community, the number of requests can easily increase exponentially. In moments like this, the only way to maintain customer service at its best is by implementing some form of automation. There are various options available when deciding on the correct approach to improve customer experience.
If you have too many customers or a small support team, you may be experiencing long customer waiting times in support interactions or poor customer satisfaction. In these cases, you need to hire more or better-qualified support representatives, or the situation will worsen. To this day, in the words of Jeff Barr, the AWS Community in LATAM is one of the most thriving and energized communities in the world. This creates many "happy problems" where we have to respond to and attend to hundreds of requests to support all the activities organized by the AWS User Groups in our region. Last year, Ana and I created NellyBot for exactly that reason.
Nelly, our Community Program Manager, has to work with thousands of people, replying to their questions and providing resources like AWS Credits or pizzas for their events. So, we thought of using Amazon Lex to create a self-service bot designed to simplify that process. Things with purpose-made bots are not so simple. People don't like having a structured conversation with a virtual agent, and if they make a mistake with an option, they have to start again. This causes a lot of frustration and ends up transferring upset customers to traditional channels. This is where Generative AI aims to help with natural, contextual, and non-deterministic conversational flow.
There are many options available to make it work, but I want to focus on Amazon Bedrock Agents. Right now, it's the simplest way to work with custom knowledge, fulfillment, and customer support using Generative AI in AWS. Amazon Bedrock is an exciting platform that offers access to a variety of large language models (LLMs) through a unified API. It's like having a bunch of super-smart AI assistants at your fingertips, ready to help with all sorts of tasks. The platform includes models from different providers, each with their own unique capabilities and strengths. It's kind of like having a team of geeky sidekicks, each with their own special powers!
One of the standout LLMs available through Amazon Bedrock is the one developed by Anthropic. This model has been trained using a special technique called "constitutional AI," which helps ensure that it behaves in an ethical and reliable manner. It's like having a wise and trustworthy advisor who always has your best interests in mind. Anthropic's model is particularly good at understanding context and nuance, making it great for tasks that require a deep understanding of language and meaning. Right now, Anthropic's LLM is arguably the best choice among the models available on Amazon Bedrock. It strikes a great balance between performance and safety, which is super important when you're dealing with powerful AI systems. Plus, Anthropic has a strong track record of developing cutting-edge AI technologies, so you know you're getting the good stuff. It's like having a geeky friend who always has the latest and greatest gadgets and is always eager to share their knowledge with you.
When it comes to working with Amazon Bedrock Agents, prompts play a crucial role in guiding the AI models to generate the desired outputs. Crafting effective prompts is an art known as prompt engineering, and it's essential for developers to master this skill to get the most out of these powerful tools. Prompt engineering involves designing prompts that are clear, concise, and specific. The goal is to provide the AI model with enough context and instructions to generate responses that align with your intentions.
For example, if you want an Amazon Bedrock Agent to write a product description, your prompt should include details about the product, its key features, and the target audience. To illustrate the importance of prompt engineering, let's consider a few examples. Suppose you want an Amazon Bedrock Agent to generate a tagline for a new fitness app. A poorly crafted prompt like "Write a tagline for a fitness app" might result in generic or irrelevant responses. On the other hand, a well-engineered prompt such as "Create an engaging and motivating tagline for a fitness app called FitLife that helps users track their workouts and reach their fitness goals" provides more context and direction, increasing the likelihood of generating a suitable tagline.
When it comes to best practices for prompt engineering with Amazon Bedrock Agents, there are a few key things to keep in mind. First, be as specific as possible in your prompts. Provide relevant details, constraints, and examples to guide the AI model towards the desired output. Second, experiment with different prompt variations to find what works best for your specific use case. Don't be afraid to iterate and refine your prompts based on the generated responses. Finally, consider using techniques like few-shot learning, where you provide a few examples of the desired output format in your prompt to help the AI model understand the expected structure and style.

As a developer working with Amazon Bedrock Agents, mastering the art of prompt engineering can greatly enhance the quality and effectiveness of your AI-powered applications. By crafting clear, specific, and well-structured prompts, you can unlock the full potential of these advanced language models and create engaging, relevant, and valuable experiences for your users. So let’s give it a try!
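The few-shot technique described above can be sketched in code. This is a minimal illustration of prompt assembly only, with no Bedrock API call; the helper name, the example apps (CalmNest, CoinWise), and their taglines are made up for the demonstration, while FitLife comes from the example earlier in the article.

```python
# Minimal sketch of few-shot prompt assembly (no AWS calls; the
# example apps and taglines below are illustrative placeholders).

def build_few_shot_prompt(examples, task):
    """Combine example input/output pairs with the actual task so the
    model can infer the expected structure and style."""
    lines = ["Here are examples of the task:"]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
    lines.append("Now complete this one:")
    lines.append(f"Input: {task}")
    lines.append("Output:")
    return "\n".join(lines)

examples = [
    ("Tagline for a meditation app called CalmNest",
     "Find your quiet place, anywhere."),
    ("Tagline for a budgeting app called CoinWise",
     "Every cent, under control."),
]
prompt = build_few_shot_prompt(
    examples,
    "Tagline for a fitness app called FitLife that helps users "
    "track their workouts and reach their fitness goals",
)
print(prompt)
```

The resulting string can be sent as the user message to any Bedrock model; the two worked examples give the model the format to imitate.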
In this walkthrough, you will:
  1. Set up an Amazon Bedrock Agent.
  2. Connect the agent to Action Groups to automate tasks.
  3. Test the agent with example prompts.

Step 1: Set Up an Amazon Bedrock Agent

Log into AWS Console:
Navigate to the Amazon Bedrock service.
Create an Amazon Bedrock Agent:
  • Go to the Agents section within the Amazon Bedrock console.
  • Click Create Agent.
  • Provide a name, e.g., RequestAutomationAgent, and select the Anthropic model (or any available model of your choice).
  • Add initial instructions for the agent on how to handle user requests. For example:
This agent helps users with support requests such as generating AWS credits, ordering event resources, or answering event-related questions. It will invoke specific actions to fulfill the request automatically.
Configure LLM Model:
  • Choose an LLM such as Anthropic’s Claude or another available model.
  • Specify the model’s capabilities, including input/output limits, to optimize responses. This helps in situations where understanding context and nuance is essential.
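The console steps above can also be done programmatically with the `bedrock-agent` API. This is a hedged sketch, not a definitive recipe: the model ID, the IAM role ARN, and the account number are placeholders you must replace, and the actual `create_agent` call is shown commented out since it requires AWS credentials and a role that trusts the Bedrock service.

```python
# Sketch: creating the agent programmatically instead of in the console.
# The model ID and role ARN below are placeholders -- substitute your own.

def agent_request(name, model_id, instruction, role_arn):
    """Build the parameter dict for bedrock-agent's create_agent call."""
    return {
        "agentName": name,
        "foundationModel": model_id,
        "instruction": instruction,
        "agentResourceRoleArn": role_arn,
    }

params = agent_request(
    "RequestAutomationAgent",
    "anthropic.claude-3-sonnet-20240229-v1:0",  # assumed model ID
    "This agent helps users with support requests such as generating "
    "AWS credits, ordering event resources, or answering event-related "
    "questions. It will invoke specific actions to fulfill the request "
    "automatically.",
    "arn:aws:iam::123456789012:role/BedrockAgentRole",  # placeholder ARN
)

# With AWS credentials configured, the actual call would be:
# import boto3
# bedrock_agent = boto3.client("bedrock-agent")
# response = bedrock_agent.create_agent(**params)
```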

Step 2: Define Action Groups for Request Fulfillment

Create Actions in Amazon Bedrock:
  • Actions allow the agent to execute workflows such as sending AWS credits, generating reports, or fulfilling resource requests.
  • Go to the Action Groups tab in the Bedrock console.
  • Define your Action Group, and create actions like:
    • Issue AWS Credits: Triggers an API call to generate and send AWS credits.
    • Order Pizza: Places an order via a connected API or triggers a Lambda function.
    • Generate Event Report: Automatically compiles and sends a report on a specified event.
Connect Action Groups to the Agent:
  • Return to your agent configuration and attach the Action Group created earlier.
  • This enables the agent to invoke the appropriate actions based on the user's request.
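Attaching the Action Group can likewise be scripted with the `bedrock-agent` API's `create_agent_action_group` call. The sketch below is an assumption-laden outline: the agent ID, Lambda ARN, and function names (`issue_aws_credits`, `order_pizza`, `generate_event_report`) are placeholders matching the actions described above, and the AWS call itself is commented out.

```python
# Sketch: attaching an Action Group via the bedrock-agent API instead of
# the console. The agent ID and Lambda ARN are placeholders.

def action_group_request(agent_id, lambda_arn):
    """Build parameters for create_agent_action_group, declaring the
    actions the agent may invoke and the Lambda that executes them."""
    return {
        "agentId": agent_id,
        "agentVersion": "DRAFT",
        "actionGroupName": "RequestFulfillment",
        "actionGroupExecutor": {"lambda": lambda_arn},
        "functionSchema": {
            "functions": [
                {"name": "issue_aws_credits",
                 "description": "Generate and send AWS credits"},
                {"name": "order_pizza",
                 "description": "Place a pizza order for an event"},
                {"name": "generate_event_report",
                 "description": "Compile and send an event report"},
            ]
        },
    }

params = action_group_request(
    "AGENT1234",  # placeholder agent ID
    "arn:aws:lambda:us-east-1:123456789012:function:fulfillment",
)

# With AWS credentials configured:
# import boto3
# boto3.client("bedrock-agent").create_agent_action_group(**params)
```

The function descriptions matter: the agent uses them to decide which action matches a given user request.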
Test Actions via Lambda (Optional):
  • For more complex actions (e.g., ordering items or sending emails), you can create a Lambda function.
  • Write a Lambda function that handles external API calls or AWS resource actions and add it to the Action Group.
  • Example Lambda function:
def lambda_handler(event, context):
    # Simplified event shape for illustration; the real Bedrock Agents
    # payload includes fields such as actionGroup and parameters.
    action = event['action']

    if action == 'issue_aws_credits':
        return {"status": "AWS credits issued successfully"}
    elif action == 'order_pizza':
        return {"status": "Pizza order placed successfully"}
    else:
        return {"status": "Action not recognized"}
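Before wiring the function into the Action Group, you can smoke-test it locally with sample events. The snippet below re-declares the simplified handler so it is self-contained, and assumes the same simplified event shape (a plain `action` key), not the richer payload Bedrock Agents actually sends to a Lambda executor.

```python
# Local smoke test for the example handler, using the simplified
# {'action': ...} event shape (the real Bedrock Agents payload carries
# more fields, e.g. actionGroup and parameters).

def lambda_handler(event, context):
    action = event['action']
    if action == 'issue_aws_credits':
        return {"status": "AWS credits issued successfully"}
    elif action == 'order_pizza':
        return {"status": "Pizza order placed successfully"}
    else:
        return {"status": "Action not recognized"}

# Exercise each branch, including an unrecognized action.
for action in ('issue_aws_credits', 'order_pizza', 'send_stickers'):
    result = lambda_handler({'action': action}, None)
    print(action, '->', result['status'])
```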

Step 3: Test Your Amazon Bedrock Agent with a Prompt

Testing in the Amazon Bedrock Console:
  • In the Agents section, click on your newly created agent.
  • Enter a test prompt, e.g.:
"I need AWS credits for my event."
  • The agent will analyze the request, understand the need for AWS credits, and invoke the Issue AWS Credits action. If the action is linked to a Lambda function, it will trigger the function.
Test with Action Handling:
  • Try another test prompt that requires a different action, such as:
"Order pizza for the next community meetup."
  • The agent should identify the task as a pizza order and execute the relevant action from the Action Group.
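Besides the console, you can test the agent from code with the `bedrock-agent-runtime` client's `invoke_agent` operation, which streams the reply back as chunk events. The sketch below uses placeholder agent and alias IDs, keeps the AWS call commented out, and includes a small helper (`collect_reply`, a name chosen here for illustration) that can be demonstrated offline with a fake stream.

```python
# Sketch: invoking the agent programmatically. invoke_agent streams its
# answer as chunk events; collect_reply joins them into one string.

def collect_reply(completion_events):
    """Join the text chunks from an invoke_agent completion stream."""
    parts = []
    for event in completion_events:
        chunk = event.get("chunk")
        if chunk:
            parts.append(chunk["bytes"].decode("utf-8"))
    return "".join(parts)

# With AWS credentials configured, the actual call would look like:
# import boto3, uuid
# runtime = boto3.client("bedrock-agent-runtime")
# response = runtime.invoke_agent(
#     agentId="AGENT1234",       # placeholder
#     agentAliasId="ALIAS5678",  # placeholder
#     sessionId=str(uuid.uuid4()),
#     inputText="I need AWS credits for my event.",
# )
# print(collect_reply(response["completion"]))

# Offline demonstration with a fake stream:
fake_stream = [{"chunk": {"bytes": b"Your AWS credits "}},
               {"chunk": {"bytes": b"have been issued."}}]
print(collect_reply(fake_stream))  # prints: Your AWS credits have been issued.
```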
Refining Agent Responses:
  • If the agent does not understand or mishandles the request, you can refine its capabilities by updating prompts and tuning the model configuration.
  • You can also enhance responses by improving the knowledge base or adding additional data sources, such as documents stored in an S3 bucket.
Error Handling:
  • If a user request is unclear, the agent can be programmed to ask follow-up questions, like:
"Could you specify how many pizzas you need?"

As our community and user base continue to grow, we’ve faced what we like to call “happy problems”: more success, more engagement, and, inevitably, more requests. Managing these demands manually can quickly become overwhelming, but that’s where the power of Amazon Bedrock Agents comes in. By leveraging large language models and Action Groups, we’ve found a way to not just respond to user queries, but to actually fulfill their requests automatically, without the frustration of rigid, pre-defined conversations.
The lessons we learned from building NellyBot made it clear that users want more than just a chatbot—they want a system that can truly understand their needs and take action, whether it’s ordering pizzas for a meetup or issuing AWS credits for an event. The flexibility and intelligence that Amazon Bedrock offers have allowed us to create a solution that feels seamless and human-like, addressing user needs in real-time while freeing us up to focus on more meaningful tasks.
For those of you who, like us, are grappling with managing community demands or automating routine workflows, I hope this guide gives you a clear path to building your own intelligent agents. It’s a game-changer when you can rely on technology to handle not just the conversations but the real work that follows. And with Amazon Bedrock, the power of generative AI is at your fingertips—ready to help you tackle your own "happy problems."
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
