
Streamlining Workflow Orchestration with Amazon Bedrock Agent Chaining: A Digital Insurance Agent Example

Enterprise systems run on APIs, but orchestrating them and managing the sequence in which those APIs are called is tedious. This article shows how chaining domain-specific Bedrock Agents can simplify workflow orchestration across a system of enterprise APIs. The use case is a digital insurance assistant powered by chained Bedrock Agents that simplifies the hand-offs between APIs which belong to different domains but work in tandem to get the job done.

Piyali Kamra
Amazon Employee
Published May 6, 2024

How does chaining Bedrock Agents help?

Chaining Bedrock Agents offers a powerful solution by centralizing flow control and orchestrating domain-specific API calls seamlessly. Leveraging NLP instructions and OpenAPI specs, Bedrock Agents dynamically manage API sequences, minimizing dependency management complexities. Additionally, they enable conversational context management in real-time scenarios, utilizing session IDs and, if necessary, backend databases like DynamoDB for extended context storage. By using prompt instructions and API descriptions, Bedrock Agents collect essential information from API schemas to solve specific problems efficiently. This approach not only enhances agility and flexibility but also demonstrates the value of chaining Bedrock Agents in simplifying complex workflows and solving larger problems effectively. Let us see a use case below where we will leverage Bedrock Agent Chaining and learn along the way!
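To make the context handling concrete, here is a minimal sketch (not code from this project) of how a client could invoke a Bedrock Agent with boto3 and reuse a session ID so that follow-up questions keep their conversational context; the agent and alias IDs are placeholders:

```python
import boto3
import uuid

# Runtime client used to converse with an already-deployed Bedrock Agent
agent_runtime = boto3.client("bedrock-agent-runtime")

# Reusing the same sessionId across calls lets the agent keep conversational context
session_id = str(uuid.uuid4())

def ask_agent(prompt: str) -> str:
    """Send one user turn to the agent and collect the streamed completion."""
    response = agent_runtime.invoke_agent(
        agentId="AGENT_ID",          # placeholder: ID of the orchestrator agent
        agentAliasId="AGENT_ALIAS",  # placeholder: alias created for the agent
        sessionId=session_id,
        inputText=prompt,
    )
    # The completion comes back as an event stream of chunks
    return "".join(
        event["chunk"]["bytes"].decode("utf-8")
        for event in response["completion"]
        if "chunk" in event
    )

print(ask_agent("I want to file a new claim for my car"))
print(ask_agent("My policy number is POL-12345"))  # follow-up reuses the same session
```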

Use case:

In the example below, we develop a workflow for an insurance digital assistant focused on streamlining tasks like filing claims, assessing damages, and handling policy inquiries. We simulate API sequencing dependencies, such as conducting fraud checks before claim creation and analyzing uploaded images for damage assessment if the user chooses to provide images. The orchestration dynamically adapts to user scenarios, guided by the natural language prompts of domain-specific Bedrock Agents such as the Insurance Orchestrator Bedrock Agent, the Policy Bedrock Agent, and the Damage Analysis Notification Bedrock Agent. The OpenAPI specs of the underlying enterprise APIs and the natural language prompts of the Bedrock Agents play a crucial role in directing these agents and ensuring that the API sequencing aligns with dynamic user scenarios, such as claims failing fraud checks or users opting in or out of image uploads. This flexible approach, made possible by chaining domain-specific Bedrock Agents, enables efficient workflow management tailored to diverse user scenarios.

Overall Architecture:

Overall architecture chaining Bedrock Agents

How does a Bedrock Agent decide which tools to access?

Overview of Bedrock Agent Building Blocks
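As a simplified sketch of these building blocks (the API path and fields below are illustrative, not the actual schema from this repository), an action group Lambda receives the API path and parameters that the agent resolved from the OpenAPI spec and returns a structured response the agent can reason over:

```python
import json

def lambda_handler(event, context):
    """Simplified action group handler: the agent passes along the API path and
    parameters it resolved from the OpenAPI schema for this action group."""
    api_path = event.get("apiPath")
    http_method = event.get("httpMethod")
    # Parameters arrive as a list of {name, type, value} dictionaries
    params = {p["name"]: p["value"] for p in event.get("parameters", [])}

    if api_path == "/claims" and http_method == "POST":
        # Illustrative business logic for a "create claim" operation
        result = {"claimId": "CLM-001", "policyNumber": params.get("policyNumber")}
    else:
        result = {"message": f"No handler for {http_method} {api_path}"}

    # Return the payload in the structure Bedrock Agents expect back
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event.get("actionGroup"),
            "apiPath": api_path,
            "httpMethod": http_method,
            "httpStatusCode": 200,
            "responseBody": {"application/json": {"body": json.dumps(result)}},
        },
    }
```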

Deploy the solution

This project is built using the AWS Cloud Development Kit (CDK) on AWS Cloud9 IDE. See CDK setup on Cloud9 for additional details and prerequisites.
  1. Clone the bedrock agent chaining repository in your Cloud9 IDE.
  2. Enter the code sample backend directory as follows: cd workflow-orchestration-bedrock-agent-chaining/
  3. Install packages using npm install
  4. Bootstrap the AWS CDK resources in your AWS account: cdk bootstrap aws://ACCOUNT_ID/REGION
  5. Enable access to Amazon Bedrock models.
You must explicitly enable access to models before they can be used with the Amazon Bedrock service. Follow these steps in the Amazon Bedrock User Guide to enable access to the models (Anthropic Claude, Cohere Embed English).
  6. Deploy the sample in your account: cdk deploy --all
The command above will deploy the stack in your AWS account. To protect against unintended changes that affect your security posture, the AWS CDK Toolkit prompts you to approve security-related changes before deploying them. You will need to answer yes to get the stack deployed.
Note: The IAM role creation in this example is for illustration only. Always provision IAM roles with the least required privileges.
Once the above stack is deployed, you will see the various stack outputs.
Let's navigate to the Bedrock Agents console in our region and find our new agents.
Here are the three Bedrock Agents deployed in our AWS account:
Insurance Orchestrator, Damage Analysis Notification and Policy Bedrock Agents

1. Test the claims creation, damage detection and notification workflows.

The first part of the deployed solution mimics the filing of a new insurance claim, fraud detection, damage analysis of uploaded images, and the subsequent notification to claims adjusters. This is a smaller version of task automation to fulfill a particular business problem, achieved by chaining Bedrock Agents that each perform a set of specific tasks (for example, creating claims, detecting fraud, analyzing the uploaded images for damage, and finally sending the damage analysis notification to the claims adjusters). When chained together, these Bedrock Agents work in harmony to solve the bigger function of insurance claims handling.
Now, let's understand the deployed components. To modify the Insurance Orchestrator Bedrock Agent, navigate to the Bedrock Agents Console. Adjust settings in the Advanced Prompts Section, then click Prepare.
Advanced Prompt Setup for Insurance Orchestrator Bedrock Agent
In the above illustration, we insert the following lines at the beginning of the existing Orchestration Prompt in the Prompt Template Editor for the Insurance Orchestrator Bedrock Agent:
Here is what the Insurance Orchestrator Bedrock Agent orchestration prompt looks like:
Insurance Orchestrator Bedrock Agent Advanced Prompt
Let's zoom into the architecture of the claims creation workflow, where the Insurance Orchestrator Bedrock Agent and the Damage Analysis Notification Bedrock Agent work together to file new claims, assess damages, and send a summary of the damages to the internal claims adjusters for human oversight.
Workflow with Insurance Orchestrator Bedrock Agent and Damage Analysis Notification Agent
In the above illustration, the Insurance Orchestrator agent mimics fraud detection and claims creation, and orchestrates handing off responsibility to other task-specific Bedrock Agents. The Damage Analysis Notification agent is responsible for a preliminary analysis of the images uploaded for a damage claim. In a nutshell, this Bedrock Agent invokes a Lambda function that internally invokes Anthropic's Claude Sonnet Large Language Model (LLM) to do a preliminary analysis of the images and sends the damage summary generated by the LLM to an SQS queue that is monitored by the claims adjusters. The NLP instruction prompts for the Bedrock Agents, combined with the OpenAPI specifications for each Action Group, guide the Bedrock Agent in its decision-making process, determining which Action Group to invoke, the sequence of invocation, and the required parameters for calling specific APIs.
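A minimal sketch of that pattern could look like the following; the queue URL, bucket handling, and model ID are illustrative assumptions rather than the exact values used by the deployed Lambda:

```python
import base64
import json
import boto3

s3 = boto3.client("s3")
bedrock = boto3.client("bedrock-runtime")
sqs = boto3.client("sqs")

# Placeholder queue URL for the claims adjusters' queue
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/claims-adjuster-queue"

def analyze_damage_and_notify(bucket: str, key: str, claim_id: str) -> str:
    """Download the uploaded image, ask Claude Sonnet for a damage summary,
    and forward that summary to the claims adjusters' SQS queue."""
    image_bytes = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image", "source": {
                    "type": "base64",
                    "media_type": "image/jpeg",
                    "data": base64.b64encode(image_bytes).decode("utf-8"),
                }},
                {"type": "text", "text": "Describe the vehicle damage visible in this photo."},
            ],
        }],
    }
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        body=json.dumps(body),
    )
    summary = json.loads(response["body"].read())["content"][0]["text"]

    # Notify the claims adjusters for human review
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"claimId": claim_id, "damageSummary": summary}),
    )
    return summary
```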
In the Amazon Bedrock console, navigate to the deployed Damage Analysis Notification Bedrock Agent in the Agent builder. Examine the deployed agent, focusing on the Instructions for Agent section and other highlighted sections for further insights:
Damage Analysis Notification Bedrock Agent

How to use the User Interface to invoke the claims processing workflow:

Check the outputs of the deployed CDK stack, or go to the CloudFormation console, and find the value of InsureAssistUIALBDnsName.
ALB for UI
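If you prefer the SDK over the console, a quick sketch to read that output programmatically is shown below; the stack name is a placeholder, so use the name printed by cdk deploy:

```python
import boto3

cfn = boto3.client("cloudformation")

# Replace with the actual stack name printed by `cdk deploy`
stack = cfn.describe_stacks(StackName="InsureAssistUIStack")["Stacks"][0]

# Find the ALB DNS name among the stack outputs
alb_dns = next(
    output["OutputValue"]
    for output in stack.get("Outputs", [])
    if output["OutputKey"] == "InsureAssistUIALBDnsName"
)
print(f"http://{alb_dns}")
```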
Open the URL in the browser as http://<InsureAssistUIALBDnsName>
Let's ask the following questions as shown in the prompts below:
Start filing a new claim
Workflow showing conversation to start filing a new claim
Workflow showing continued conversation for filing a new claim
Upload photos showing damage for a claim
Choose an image to upload
Final response after the images are analyzed and notification sent to claims adjusters
In the Amazon SQS console, check out the SQS queue that has been created by the CDK and view the message showing the damage analysis of the image performed by the LLM.
SQS Message for Claims Adjuster
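You can also pull that message programmatically instead of using the console; here is a minimal sketch, with the queue URL as a placeholder:

```python
import boto3

sqs = boto3.client("sqs")
# Placeholder: use the queue URL from the deployed stack
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/claims-adjuster-queue"

# Fetch one message containing the LLM-generated damage summary
messages = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=1,
    WaitTimeSeconds=5,
).get("Messages", [])

for message in messages:
    print(message["Body"])
```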

2. Test the Policy Information workflow

Here is the architecture of just the Policy Information Agent that has been built out by the CDK
Policy workflow
The Policy Bedrock Agent is responsible for doing a lookup against the insurance policy documents in the Bedrock Knowledge Base. In a nutshell, this Bedrock Agent invokes a Lambda function that internally queries the Bedrock Knowledge Base to find answers to policy-related questions.
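As a rough sketch of what such a Lambda could do (the knowledge base ID and model ARN below are placeholders), the Knowledge Base can be queried with the retrieve_and_generate API:

```python
import boto3

kb_runtime = boto3.client("bedrock-agent-runtime")

def answer_policy_question(question: str) -> str:
    """Query the Bedrock Knowledge Base and let the model compose an answer."""
    response = kb_runtime.retrieve_and_generate(
        input={"text": question},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": "KB_ID_PLACEHOLDER",
                "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
            },
        },
    )
    return response["output"]["text"]

print(answer_policy_question("Is windshield damage covered under my comprehensive policy?"))
```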
In the Amazon Bedrock console, navigate to the deployed Policy Bedrock Agent in the Agent builder. To modify the Policy Bedrock Agent, adjust settings in the Advanced Prompts section as shown in the screenshot:
Policy Bedrock Agent
In the above illustration, we insert the following lines at the beginning of the existing Orchestration Prompt in the Prompt Template Editor for the Policy Bedrock Agent:
Additionally, in the Additional Settings section of the Agent Builder for the Policy Bedrock Agent, we check the Enabled radio button to allow the Bedrock Agent to ask clarifying questions.
Policy Agent Advanced Settings
You can experiment with these features and advanced prompts to see how the agents behave for various workflows. Finally, click Prepare in the Agent Builder for the Policy Bedrock Agent.

Setting up the Policy Documents and Metadata in the data source for the deployed Knowledge Base:

In the Amazon Bedrock console, navigate to the deployed Knowledge Base and find the S3 bucket that is configured as its data source.
Upload a few policy documents and their metadata files to the S3 bucket as follows:
Here is what a sample metadata.json file looks like:
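The attribute names below are illustrative; the file follows the metadataAttributes format that Knowledge Bases for Amazon Bedrock expect, and it sits next to the document it describes, for example auto-policy.pdf.metadata.json:

```json
{
  "metadataAttributes": {
    "policyType": "auto",
    "policyYear": "2024",
    "region": "us-east"
  }
}
```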
After the documents are uploaded to S3, navigate to the deployed Knowledge Base, select the data source, and click Sync. To understand more about how metadata support in Knowledge Bases for Amazon Bedrock helps in getting accurate results, check out this link.
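If you would rather script this step, here is a hedged sketch using boto3 (the bucket name, knowledge base ID, and data source ID are placeholders) that uploads a document with its metadata file and then triggers the sync as an ingestion job:

```python
import boto3

s3 = boto3.client("s3")
bedrock_agent = boto3.client("bedrock-agent")

BUCKET = "policy-documents-bucket"    # placeholder: the KB data source bucket
KB_ID = "KB_ID_PLACEHOLDER"           # placeholder: deployed Knowledge Base ID
DATA_SOURCE_ID = "DS_ID_PLACEHOLDER"  # placeholder: the S3 data source ID

# Upload the policy document and its metadata file side by side
s3.upload_file("auto-policy.pdf", BUCKET, "auto-policy.pdf")
s3.upload_file("auto-policy.pdf.metadata.json", BUCKET, "auto-policy.pdf.metadata.json")

# Equivalent to clicking "Sync" on the data source in the console
job = bedrock_agent.start_ingestion_job(
    knowledgeBaseId=KB_ID,
    dataSourceId=DATA_SOURCE_ID,
)
print(job["ingestionJob"]["status"])
```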
Now, we can go back to the UI and start asking questions related to the policy documents as shown below:
Policy questions
This example demonstrates the power of chaining Bedrock Agents, offering a fresh perspective on integrating back-office automation workflows and enterprise APIs. The benefits are manifold: as new enterprise APIs emerge, dependencies in existing ones can be minimized, reducing coupling. Moreover, Bedrock Agents can maintain conversational context, enabling follow-up queries to leverage conversation history. For extended contextual memory, a more persistent backend implementation can be considered.
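For instance, a lightweight sketch of such a persistent memory layer (the table name and attributes are assumptions, not part of the deployed stack) could store each conversation turn keyed by session ID in DynamoDB:

```python
import time
import boto3
from boto3.dynamodb.conditions import Key

# Hypothetical table with partition key "sessionId" and sort key "timestamp"
table = boto3.resource("dynamodb").Table("conversation-context")

def save_turn(session_id: str, user_text: str, agent_text: str) -> None:
    """Persist one conversation turn so context can outlive the agent session."""
    table.put_item(Item={
        "sessionId": session_id,
        "timestamp": int(time.time() * 1000),
        "userText": user_text,
        "agentText": agent_text,
    })

def load_history(session_id: str) -> list:
    """Fetch all prior turns for a session, oldest first."""
    response = table.query(
        KeyConditionExpression=Key("sessionId").eq(session_id),
        ScanIndexForward=True,
    )
    return response["Items"]
```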

Clean up

Do not forget to delete the stack to avoid unexpected charges.
First, make sure to remove all data from the Amazon Simple Storage Service (Amazon S3) bucket, then run:
cdk destroy
Finally, delete the associated logs created by the different services in Amazon CloudWatch Logs.

Additional Links to study:

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
