Streamlining Workflow Orchestration with Amazon Bedrock Agent Chaining: A Digital Insurance Agent Example


Enterprise systems run on APIs, but orchestrating them and managing the sequence in which they must be called is tedious. This article shows how chaining domain-specific Bedrock Agents can simplify workflow orchestration across a system of enterprise APIs. The use case is a digital insurance assistant, powered by chained Bedrock Agents, that simplifies the hand-offs between APIs belonging to different domains but working in tandem with each other to get the job done.

Piyali Kamra
Amazon Employee
Published May 6, 2024

How does chaining Bedrock Agents help?

Chaining Bedrock Agents offers a powerful solution by centralizing flow control and orchestrating domain-specific API calls seamlessly. Leveraging NLP instructions and OpenAPI specs, Bedrock Agents dynamically manage API sequences, minimizing dependency management complexities. Additionally, they enable conversational context management in real-time scenarios, utilizing session IDs and, if necessary, backend databases like DynamoDB for extended context storage. By using prompt instructions and API descriptions, Bedrock Agents collect essential information from API schemas to solve specific problems efficiently. This approach not only enhances agility and flexibility but also demonstrates the value of chaining Bedrock Agents in simplifying complex workflows and solving larger problems effectively. Let us see a use case below where we will leverage Bedrock Agent Chaining and learn along the way!
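Concretely, conversational context is preserved by reusing the same session ID on every call to the agent runtime. Here is a minimal boto3 sketch of that pattern; the agent and alias IDs would come from your deployment, and error handling is omitted:

```python
import uuid

def new_session_id():
    # One session per end-user conversation; reuse it across turns so the
    # agent can resolve follow-up questions against earlier context.
    return str(uuid.uuid4())

def invoke_agent(agent_id, agent_alias_id, session_id, user_input):
    """Invoke a Bedrock Agent, passing the same session_id on every turn."""
    import boto3  # assumed available in the runtime environment
    client = boto3.client("bedrock-agent-runtime")
    response = client.invoke_agent(
        agentId=agent_id,
        agentAliasId=agent_alias_id,
        sessionId=session_id,  # same id across turns keeps conversational context
        inputText=user_input,
    )
    # The response is an event stream; concatenate the returned text chunks.
    return "".join(
        event["chunk"]["bytes"].decode("utf-8")
        for event in response["completion"]
        if "chunk" in event
    )
```

For longer-lived memory than a single session, the summary at the end of a session can be persisted to a backend store such as DynamoDB, as noted above.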

Use case:

In the example below, we develop a workflow for an insurance digital assistant focused on streamlining tasks like filing claims, assessing damages, and handling policy inquiries. We simulate API sequencing dependencies, such as conducting fraud checks before claim creation and analyzing uploaded images for damage assessment if the user chooses to provide images. The orchestration dynamically adapts to user scenarios, guided by the natural-language prompts of domain-specific Bedrock Agents like the Insurance Orchestrator Bedrock Agent, the Policy Bedrock Agent, and the Damage Analysis Notification Bedrock Agent. The OpenAPI specs of the underlying enterprise APIs and the natural-language prompts of the Bedrock Agents play a crucial role in directing these agents and ensuring that the API sequencing aligns with dynamic user scenarios, like claims failing fraud checks or users opting in or out of image uploads. This flexible approach, made possible by chaining domain-specific Bedrock Agents, enables efficient workflow management tailored to diverse user scenarios.

Overall Architecture:

Overall architecture chaining Bedrock Agents

How does a Bedrock Agent decide which tools to access?

Overview of Bedrock Agent Building Blocks

Deploy the solution

This project is built using the AWS Cloud Development Kit (CDK) on AWS Cloud9 IDE. See CDK setup on Cloud9 for additional details and prerequisites.
  1. Clone the bedrock agent chaining repository in your Cloud9 IDE.
  2. Enter the code sample backend directory as follows: cd workflow-orchestration-bedrock-agent-chaining/
  3. Install packages using npm install
  4. Bootstrap AWS CDK resources on the AWS account: cdk bootstrap aws://ACCOUNT_ID/REGION
  5. Enable Access to Amazon Bedrock Models
You must explicitly enable access to models before they can be used with the Amazon Bedrock service. Follow these steps in the Amazon Bedrock User Guide to enable access to the models (Anthropic Claude, Cohere Embed English).
  6. Deploy the sample in your account: cdk deploy --all
The command above will deploy one stack in your AWS account. To protect against unintended changes that affect your security posture, the AWS CDK Toolkit prompts you to approve security-related changes before deploying them. Answer yes to get the stack deployed.
Note: The IAM role creation in this example is for illustration only. Always provision IAM roles with the least required privileges.
Once the above stack is deployed, you will see the following outputs:
WorkflowOrchestrationBedrockAgentChainingStack.DamageAnalysisAndNotificationAgentId = <Identifier for Bedrock Agent responsible for damage image analysis, sending notifications>
WorkflowOrchestrationBedrockAgentChainingStack.DamageAnalysisLambdaFunction = <Lambda function invoking Anthropic Claude Sonnet to perform Image analysis>
WorkflowOrchestrationBedrockAgentChainingStack.DamageNotificationLambdaFunction = <Lambda function to send SQS messages>
WorkflowOrchestrationBedrockAgentChainingStack.ImageBucketName = <S3 bucket name where the images uploaded by the end user are stored>
WorkflowOrchestrationBedrockAgentChainingStack.InsuranceOrchestratorAgentId = <Identifier for Bedrock Agent that acts as the main orchestrator>
WorkflowOrchestrationBedrockAgentChainingStack.InsureAssistApiAlbDnsName = <DNS Name of the ALB that invokes the lambda function that invokes the main orchestrator Bedrock Agent>
WorkflowOrchestrationBedrockAgentChainingStack.InsureAssistUIAlbDnsName = <DNS Name of the ALB fronting the user interface>
WorkflowOrchestrationBedrockAgentChainingStack.InvokePolicyAgentLambdafunction = <Lambda function that invokes the Policy Bedrock Agent>
WorkflowOrchestrationBedrockAgentChainingStack.KnowledgeBaseId = <Knowledge base identifier for the insurance policy documents>
WorkflowOrchestrationBedrockAgentChainingStack.PolicyBedrockAgentId = <Identifier for Bedrock Agent that answers insurance policy related questions>
WorkflowOrchestrationBedrockAgentChainingStack.PolicyDocumentsBucketName = <S3 bucket name where policy related documents and metadata will be uploaded>
WorkflowOrchestrationBedrockAgentChainingStack.PolicyRetrievalFromKBLambdaFunction = <Lambda function that will invoke the policy knowledge base>
Let's navigate to the Bedrock Agents console in our region and find our new agents.
Here are the 3 Bedrock Agents that are deployed in our AWS account:
Insurance Orchestrator, Damage Analysis Notification and Policy Bedrock Agents

1. Test the claims creation, damage detection and notification workflows.

The first part of the deployed solution mimics filing a new insurance claim, fraud detection, damage analysis of uploaded images, and subsequent notification to the claims adjusters. This is a smaller version of task automation that fulfills a particular business problem by chaining Bedrock Agents, each performing a set of specific tasks (for example, creating claims, detecting fraud, analyzing the uploaded images for damages, and finally sending a damage analysis notification to the claims adjusters). When chained together, these Bedrock Agents work in harmony to solve the bigger function of insurance claims handling.
Now, let's understand the deployed components. To modify the Insurance Orchestrator Bedrock Agent, navigate to the Bedrock Agents Console. Adjust settings in the Advanced Prompts Section, then click Prepare.
Advanced Prompt Setup for Insurance Orchestrator Bedrock Agent
In the above illustration, we insert the following lines at the beginning of the existing Orchestration Prompt in the Prompt Template Editor for the Insurance Orchestrator Bedrock Agent:
$instructions$
You are a helpful virtual assistant whose goal is to provide courteous and human-like responses while helping customers file insurance claims, detect fraud before filing claims, assess damages, and to answer questions related to the customer’s insurance policy.
Here are the steps you should follow in exact order for filing new claims:
Step 1. Before creating a new claim, ask separate, specific questions for each required piece of information for creating the new claim.
Step 2. After Step 1, check for fraud before proceeding with creating the claim. Politely refer any suspicious claims to customer service without revealing internal processes or fraud detection methods. Avoid processing fraudulent claims.
Step 3. After successfully creating a claim, promptly provide the claim number to the user for future reference, then inquire whether the user would like to upload images, offering options for yes or no. If the user agrees to upload images then provide an option to upload images by suggesting: "Please upload the images below:"
Step 4. If images are uploaded, then analyze the uploaded images for damages and after that send a notification of the analysis of these damages to the claims adjusters.
Step 5. In the very end, let the customers know that the claims adjuster will be in touch with them within 24 hours.
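Each of these steps ultimately maps to an Action Group API on one of the agents. As a rough sketch, an Action Group Lambda receives the `apiPath` the agent chose, plus any parameters it collected, and must reply in the Bedrock Agents response format. The `/fraudCheck` and `/createClaim` paths and the returned values here are illustrative assumptions, not the repository's actual OpenAPI spec:

```python
import json

def handler(event, context):
    """Action Group Lambda sketch: Bedrock Agents pass the chosen apiPath
    and the parameters the agent collected from the conversation."""
    api_path = event["apiPath"]
    params = {p["name"]: p["value"] for p in event.get("parameters", [])}

    if api_path == "/fraudCheck":            # hypothetical path
        body = {"fraudulent": False}
    elif api_path == "/createClaim":         # hypothetical path
        body = {"claimNumber": "CLM-001",
                "policyNumber": params.get("policyNumber")}
    else:
        body = {"error": f"unknown path {api_path}"}

    # Response envelope expected by Bedrock Agents action groups.
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event["actionGroup"],
            "apiPath": api_path,
            "httpMethod": event["httpMethod"],
            "httpStatusCode": 200,
            "responseBody": {"application/json": {"body": json.dumps(body)}},
        },
    }
```

The agent reads the JSON body of this response to decide its next step, which is how the fraud-check result can gate claim creation.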
This is how the Insurance Orchestrator Bedrock Agent Orchestration Prompt looks:
Insurance Orchestrator Bedrock Agent Advanced Prompt
Let's zoom into the architecture of the claims creation workflow, where the Insurance Orchestrator Bedrock Agent and the Damage Analysis Notification Bedrock Agent work together to file new claims, assess damages, and send a summary of the damages to the internal claims adjusters for human oversight.
Workflow with Insurance Orchestrator Bedrock Agent and Damage Analysis Notification Agent
In the above illustration, the Insurance Orchestrator Agent mimics fraud detection and claims creation, and orchestrates handing off responsibility to other task-specific Bedrock Agents. The Damage Analysis Notification Agent is responsible for a preliminary analysis of the images uploaded for a damage. In a nutshell, this Bedrock Agent invokes a Lambda function that internally invokes Anthropic's Claude Sonnet Large Language Model (LLM) to analyze the images and sends the summary of the damage generated by the LLM to an SQS queue that is checked by the claims adjusters. The NLP instruction prompts for the Bedrock Agents, combined with the OpenAPI specifications for each Action Group, guide the Bedrock Agent in its decision-making process, determining which Action Group to invoke, the sequence of invocation, and the required parameters for calling specific APIs.
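A sketch of what such a Lambda function might look like, assuming hypothetical event fields (`bucket`, `key`, `queue_url`) and the Claude 3 Sonnet model ID on Bedrock; the actual function deployed by the CDK stack may differ:

```python
import base64
import json

# Assumption: the Claude 3 Sonnet model ID; verify against your region.
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"

def build_claude_image_request(image_bytes, media_type="image/jpeg"):
    """Build a Messages API body asking Claude to describe the damage."""
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image", "source": {
                    "type": "base64",
                    "media_type": media_type,
                    "data": base64.b64encode(image_bytes).decode("utf-8"),
                }},
                {"type": "text",
                 "text": "Describe the vehicle damage visible in this image "
                         "for an insurance claims adjuster."},
            ],
        }],
    }

def handler(event, context):
    """Lambda sketch: fetch the uploaded image, analyze it, notify adjusters."""
    import boto3  # available in the Lambda runtime
    s3 = boto3.client("s3")
    obj = s3.get_object(Bucket=event["bucket"], Key=event["key"])
    body = build_claude_image_request(obj["Body"].read())
    bedrock = boto3.client("bedrock-runtime")
    result = bedrock.invoke_model(modelId=MODEL_ID, body=json.dumps(body))
    summary = json.loads(result["body"].read())["content"][0]["text"]
    # Send the LLM-generated damage summary to the adjusters' queue.
    boto3.client("sqs").send_message(QueueUrl=event["queue_url"],
                                     MessageBody=summary)
    return {"damageSummary": summary}
```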
In the Amazon Bedrock console, navigate to the deployed Damage Analysis Notification Bedrock Agent in the Agent builder. Examine the deployed agent, focusing on the Instructions for Agent section and the other highlighted sections for further insights:
Damage analysis Notification Bedrock Agent

How to use the User Interface to invoke the claims processing workflow:

Check the output of the deployed CDK stack, or go to the CloudFormation console, and find the value of InsureAssistUIAlbDnsName.
ALB for UI
Invoke the URL in the browser as http://<InsureAssistUIAlbDnsName>
Let's ask the following questions as shown in the prompts below:
Start filing a new claim
Workflow showing conversation to start filing a new claim
Workflow showing continued conversation for filing a new claim
Upload photos showing damage for a claim
Choose an image to upload
Final response after the images are analyzed and notification sent to claims adjusters
In the Amazon SQS console, find the queue that has been created by our CDK stack and check the message that shows the damage analysis of the image performed by our LLM.
SQS Message for Claims Adjuster

2. Test the Policy Information workflow

Here is the architecture of just the Policy Information Agent that has been built out by the CDK:
Policy workflow
The Policy Bedrock Agent is responsible for doing a lookup against the insurance policy documents in the Bedrock Knowledge Base. In a nutshell, this Bedrock Agent invokes a Lambda function that internally queries the Bedrock Knowledge Base to find answers to policy-related questions.
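A minimal sketch of such a Lambda function using the Knowledge Base retrieve-and-generate API; the event fields and handler shape here are assumptions, not the repository's exact code:

```python
def build_rag_config(knowledge_base_id, model_arn):
    """Configuration for a retrieve-and-generate call against the policy KB."""
    return {
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": knowledge_base_id,
            "modelArn": model_arn,
        },
    }

def handler(event, context):
    """Lambda sketch: answer a policy question from the Knowledge Base."""
    import boto3  # available in the Lambda runtime
    client = boto3.client("bedrock-agent-runtime")
    response = client.retrieve_and_generate(
        input={"text": event["question"]},
        retrieveAndGenerateConfiguration=build_rag_config(
            event["knowledgeBaseId"], event["modelArn"]),
    )
    # The generated answer is grounded in the retrieved policy documents.
    return {"answer": response["output"]["text"]}
```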
In the Amazon Bedrock console, navigate to the deployed Policy Bedrock Agent in the Agent builder. To modify the Policy Bedrock Agent, adjust settings in the Advanced Prompts Section as shown in the screen shot :
Policy Bedrock Agent
In the above illustration, we insert the following lines at the beginning of the existing Orchestration Prompt in the Prompt Template Editor for the Policy Bedrock Agent:
You are a knowledgeable and helpful virtual assistant for insurance policy questions.
When responding, you MUST follow these guidelines:
No 1. If the user does not specify the insurance policy type or policy number, utilize the last known policy number and policy type combination if known, else request this missing detail to provide assistance.
No 2. Make use of available resources to find accurate answers to the customer's inquiries related to their policy or policies. Ask clarifying questions if needed.
No 3. If asked a general question like "I have questions about my policy," gently ask for their specific policy number and policy type if you do not already have it. Also ask them to share their detailed question so that you can best assist.
No 4. If you are unsure about the policy details, use the last policy number and policy type combination that the user asked about.
No 5. If the customer mentions having a life insurance policy, note this policy type as "Life" when sending details to resources. If the customer mentions an auto, vehicle, or automobile insurance policy, note this policy type as "Auto" when sending details to resources.
No 6. Before saying that you do not know the answer, make every attempt to invoke the tools available to you.
Additionally, in the Additional Settings section of the Agent Builder for the Policy Bedrock agent we check the Enabled radio button to allow the Bedrock Agent to ask clarifying questions.
Policy Agent Advanced Settings
You can experiment with these features and advanced prompts to see how the agents behave for various workflows. Finally click on Prepare in the Agent Builder for the Policy Bedrock agent.

Setting up the Policy Documents and Metadata in the data source for the deployed Knowledge Base:

In the Amazon Bedrock console, navigate to the deployed Knowledge Base and navigate to the S3 bucket that is mentioned as the data source for the Knowledge base.
Upload a few policy documents and their metadata files to the S3 bucket. Here is how a sample metadata.json file looks:
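Knowledge Bases for Amazon Bedrock pair each document with a sidecar file named `<document>.metadata.json` whose attributes live under a `metadataAttributes` key. The sketch below prints such a file; the attribute names (`policyNumber`, `policyType`) are illustrative assumptions, not the repository's exact schema:

```python
import json

def metadata_file_name(document_name):
    """Bedrock Knowledge Bases expect '<document>.metadata.json' next to the document."""
    return document_name + ".metadata.json"

def build_metadata(attributes):
    """Metadata files wrap their attributes under a 'metadataAttributes' key."""
    return {"metadataAttributes": attributes}

# Hypothetical attributes for an auto policy document.
sample = build_metadata({"policyNumber": "AUTO-12345", "policyType": "Auto"})
print(metadata_file_name("auto-policy.pdf"))
print(json.dumps(sample, indent=2))
```

These attributes can then be used to filter retrievals, which is how metadata support improves answer accuracy.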
After the documents are uploaded to S3, navigate to the deployed Knowledge Base, select the data source and click on Sync. To understand more about how metadata support in Knowledge Bases on Bedrock helps in getting accurate results, check out this link.
Now, we can go back to the UI and start asking questions related to the policy documents as shown below:
Policy questions
This example demonstrates the power of chaining Bedrock Agents, offering a fresh perspective on integrating back-office automation workflows and enterprise APIs. The benefits are manifold: as new enterprise APIs emerge, dependencies in existing ones can be minimized, reducing coupling. Moreover, Bedrock Agents can maintain conversational context, enabling follow-up queries to leverage conversation history. For extended contextual memory, a more persistent backend implementation can be considered.

Clean up

Do not forget to delete the stack to avoid unexpected charges.
First, make sure to remove all data from the Amazon Simple Storage Service (Amazon S3) buckets.
cdk destroy
Finally, delete the associated logs created by the different services in Amazon CloudWatch Logs.

Additional Links to study:

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
