How to use Apply Guardrail to protect PII information with the AWS Bedrock Converse API, Lambda and Python! - Anthropic Haiku

Use Amazon Bedrock Apply Guardrails while creating generative AI applications with the Converse API! Apply these guardrails to ensure your GenAI app is a responsible AI app!

Published Jul 13, 2024
In this article, I am going to demonstrate a revision of a previously published workshop on how to build a serverless GenAI solution that creates a call center transcript summary via a REST API, Lambda, and the AWS Bedrock Converse API, while protecting sensitive information such as PII using a guardrail policy.
On July 10, 2024, during the AWS New York Summit, AWS announced the introduction of the Apply Guardrail feature for its Generative AI services. Amazon Bedrock, already a standout service in the AWS lineup, gains further flexibility with this new feature, allowing developers to decouple the guardrail from Large Language Model (LLM) invocation using Bedrock.
In June 2024, AWS added support for guardrails via the Converse API. However, a challenge I encountered was that guardrail policies were applied to both the input and the response. With additional code to guard content I was able to accomplish the desired result, but it took extra lines of code to meet the use case requirements.
Here's the link to my previous article on how to apply guardrails using the Converse API to protect Personally Identifiable Information (PII).
The newly announced Apply Guardrail function empowers developers with more control over how best to implement guardrails when invoking the Bedrock API. This feature ensures that, based on the guardrail policy, if an input is blocked, developers can return a response before ever calling the LLM. This approach not only enhances security but also improves efficiency by avoiding unnecessary calls to the foundation model.
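As a sketch of this flow, the input check before model invocation might look like the following. The guardrail ID/version are placeholders, and the boto3 calls follow the bedrock-runtime ApplyGuardrail API; the helper names are my own, not from the original code.

```python
def guardrail_blocked(apply_guardrail_response: dict) -> bool:
    """Return True when the guardrail intervened on the content."""
    return apply_guardrail_response.get("action") == "GUARDRAIL_INTERVENED"


def blocked_message(apply_guardrail_response: dict, default: str = "") -> str:
    """Extract the guardrail's configured message from the outputs list."""
    outputs = apply_guardrail_response.get("outputs", [])
    return outputs[0]["text"] if outputs else default


def check_input(prompt: str, guardrail_id: str, guardrail_version: str) -> dict:
    """Apply the guardrail to the user input BEFORE calling the model."""
    import boto3  # imported here so the pure helpers above stay testable offline

    client = boto3.client("bedrock-runtime")
    return client.apply_guardrail(
        guardrailIdentifier=guardrail_id,     # placeholder values
        guardrailVersion=guardrail_version,
        source="INPUT",                       # "OUTPUT" screens a model response instead
        content=[{"text": {"text": prompt}}],
    )
```

If `guardrail_blocked()` is true for the input, the function can return `blocked_message()` to the caller immediately, skipping the LLM call entirely.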
This enhancement marks a significant step forward in the customization and control developers have over their AI applications, ensuring safer and more efficient interactions with generative AI models.
In this article, I have updated my previously published article and code to demonstrate how the Apply Guardrail feature can be used to protect PII information in a transcript summary generated using Amazon Bedrock, Lambda, and an API.
Examples of PII (Personally Identifiable Information): SSN, account number, phone number, email, address, etc.
Let's revisit the guardrail policies supported by AWS.
Guardrail Policies
The Amazon Bedrock Guardrail feature allows you to configure various filters, providing responsible boundaries for the responses generated by your AI solution. These guardrails help ensure that the outputs are appropriate and align with your requirements and standards.
Content Filters
Content filters are available across six categories:
  • Hate
  • Insults
  • Sexual
  • Violence
  • Misconduct
  • Prompt Attack
Filters can be set to None, Low, Medium, High.
Denied Topics
You can specify topics that the API should not respond to!
Word Filter
You can specify words that you want the filter to act on before providing a response!
Sensitive Information Filter
Use this filter to either block or mask Personally Identifiable Information.
Amazon Bedrock also provides a way to configure the message returned to the user when the input or the response violates the configured guardrail policies. For example, if the sensitive information filter is configured to block requests containing an account number, you can provide a customized response letting the user know that the request cannot be processed because it contains a forbidden data element.
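As a minimal sketch of such a policy, the kwargs below target the `create_guardrail` call on the Bedrock control-plane client; the guardrail name, messages, and the exact mix of PII entity types are illustrative, not the article's actual configuration.

```python
def build_pii_guardrail_config(name: str) -> dict:
    """Assemble kwargs for bedrock.create_guardrail (values are illustrative)."""
    return {
        "name": name,
        # Custom messages returned when the guardrail blocks input or output
        "blockedInputMessaging": (
            "Sorry, your request contains a forbidden data element "
            "and cannot be processed."
        ),
        "blockedOutputsMessaging": (
            "Sorry, the response was blocked by the guardrail policy."
        ),
        "sensitiveInformationPolicyConfig": {
            "piiEntitiesConfig": [
                {"type": "US_BANK_ACCOUNT_NUMBER", "action": "BLOCK"},
                {"type": "EMAIL", "action": "ANONYMIZE"},  # mask instead of block
                {"type": "PHONE", "action": "ANONYMIZE"},
            ]
        },
    }


# To actually create the guardrail (requires AWS credentials):
# import boto3
# bedrock = boto3.client("bedrock")
# response = bedrock.create_guardrail(**build_pii_guardrail_config("pii-guardrail"))
```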
Let's review our use cases:
  • There is a transcript available for a case resolution and conversation between a customer and a support/call center team member.
  • A call summary needs to be created based on this resolution/conversation transcript.
  • An automated solution is required to create call summary.
  • An automated solution will provide a repeatable way to create these call summary notes.
  • Productivity increases, as team members who usually document these notes can focus on other tasks.
  • Guardrail should be configured so that PII information is not displayed in the response.
  • Guardrail will also be applied to the input. If the input contains a blocked data element like an account number, the API will not invoke the LLM and will instead return the configured blocked-input response to the consumer.
I am generating my Lambda function using AWS SAM; however, a similar function can be created using the AWS Console. I like to use AWS SAM wherever possible as it gives me the flexibility to test the function locally before deploying it to the AWS cloud.
Here is the architecture diagram for our use case.
Architecture
I will create a SAM template for the Lambda function that will contain the code to invoke the Bedrock Converse API along with the required parameters and a prompt. The Lambda function can be created without a SAM template; however, I prefer an Infrastructure as Code approach since it allows for easy recreation of cloud resources. Here is the SAM template for the Lambda function.
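As a rough sketch of such a template (the logical ID, handler name, runtime, and route are illustrative, not the exact template shown in the screenshot):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  TranscriptSummaryFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.lambda_handler
      Runtime: python3.12
      Timeout: 60
      Policies:
        - Statement:
            - Effect: Allow
              Action:
                - bedrock:InvokeModel
                - bedrock:ApplyGuardrail
              Resource: '*'
      Events:
        SummaryApi:
          Type: Api
          Properties:
            Path: /summary
            Method: post
```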
AWS SAM
Create a Lambda Function
The Lambda function serves as the core of this automated solution. It contains the code necessary to fulfill the business requirement of creating a summary of the call center transcript using the Amazon Bedrock Converse API. This Lambda function accepts a prompt, which is then forwarded to the Bedrock Converse API to generate a response using the Anthropic Haiku foundation model. Now, let's look at the code behind it.
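A minimal sketch of such a handler is shown below; the event shape (an API Gateway proxy event with a `transcript` field in the body), the inference settings, and the helper name are my assumptions, and error handling is trimmed.

```python
import json

MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # Anthropic Haiku on Bedrock


def build_messages(transcript: str) -> list:
    """Wrap the transcript in a Converse-API message with a summary instruction."""
    prompt = f"Summarize the following call center transcript:\n\n{transcript}"
    return [{"role": "user", "content": [{"text": prompt}]}]


def lambda_handler(event, context):
    """Summarize the transcript posted in the request body (sketch only)."""
    import boto3  # imported here so build_messages stays testable offline

    client = boto3.client("bedrock-runtime")
    transcript = json.loads(event["body"])["transcript"]
    response = client.converse(
        modelId=MODEL_ID,
        messages=build_messages(transcript),
        inferenceConfig={"maxTokens": 500, "temperature": 0.2},
    )
    summary = response["output"]["message"]["content"][0]["text"]
    return {"statusCode": 200, "body": json.dumps({"summary": summary})}
```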
Examples of applying the guardrail in the function:
input guardrail
output guardrail
AWS Lambda
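For the output side, a minimal sketch might look like the following: the model's answer is passed back through the guardrail with `source="OUTPUT"`, and the guardrail's rewritten (masked) text is returned when it intervenes. The function names are my own.

```python
def pick_guarded_text(result: dict, original: str) -> str:
    """Choose the guardrail-rewritten text when it intervened, else the original."""
    if result.get("action") == "GUARDRAIL_INTERVENED" and result.get("outputs"):
        return result["outputs"][0]["text"]
    return original


def mask_response(model_text: str, guardrail_id: str, guardrail_version: str) -> str:
    """Run the model's answer through the guardrail before returning it."""
    import boto3  # imported here so pick_guarded_text stays testable offline

    client = boto3.client("bedrock-runtime")
    result = client.apply_guardrail(
        guardrailIdentifier=guardrail_id,
        guardrailVersion=guardrail_version,
        source="OUTPUT",  # screen the model response, not the user input
        content=[{"text": {"text": model_text}}],
    )
    return pick_guarded_text(result, model_text)
```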
Build function locally using AWS SAM
Next, build and validate the function using AWS SAM before deploying it to the AWS cloud. A few SAM commands used are:
  • sam build
  • sam local invoke
  • sam deploy
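In a terminal session these steps might look like the following; the function's logical ID and the event file path are illustrative.

```shell
# Build the function and resolve its dependencies
sam build

# Invoke locally with a sample API Gateway event (names illustrative)
sam local invoke TranscriptSummaryFunction -e events/event.json

# Deploy; --guided prompts for stack name, region, etc. on the first run
sam deploy --guided
```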
Bedrock InvokeModel vs. Bedrock Converse API
Bedrock InvokeModel
Bedrock Invoke
Bedrock Converse
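The key difference can be sketched as follows: InvokeModel requires a provider-specific JSON body (here, the Anthropic Messages format), while Converse accepts one provider-agnostic message shape for every model. The helper names and inference values are illustrative.

```python
import json

MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"


def invoke_model_body(prompt: str) -> str:
    """InvokeModel needs a provider-specific JSON body (Anthropic Messages format)."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 500,
        "messages": [
            {"role": "user", "content": [{"type": "text", "text": prompt}]}
        ],
    })


def converse_messages(prompt: str) -> list:
    """Converse uses the same message shape regardless of the model provider."""
    return [{"role": "user", "content": [{"text": prompt}]}]


# With a bedrock-runtime client (requires AWS credentials):
# client.invoke_model(modelId=MODEL_ID, body=invoke_model_body("Hi"))
# client.converse(modelId=MODEL_ID, messages=converse_messages("Hi"))
```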
Validate the GenAI Model response using a prompt
Prompt engineering is an essential component of any Generative AI solution. It is both art and science, as crafting an effective prompt is crucial for obtaining the desired response from the foundation model. Often, it requires multiple attempts and adjustments to the prompt to achieve the desired outcome from the Generative AI model.
Given that I'm deploying the solution to AWS API Gateway, I'll have an API endpoint post-deployment. I plan to utilize Postman for passing the prompt in the request and reviewing the response. Additionally, I can opt to post the response to an AWS S3 bucket for later review.
Postman
I am using Postman to pass the transcript file for the prompt.
This transcript file has a conversation between a call center employee (John) and a customer (Girish) about a request to reset the password due to a locked account.
  • John: Hello, thank you for calling technical support. My name is John and I will be your technical support representative. Can I have your account number, please?
  • Girish: Yes, my account number is 21X-45X-8790.
  • John: Thank you. I see that you have locked your account due to multiple failed attempts to enter your password. To reset your password, I will need to ask you a few security questions. Can you please provide me with the answers to your security questions?
  • Girish: Sure, my security questions are: What is your favorite color? and What is your favorite food?
  • John: Please can you provide your zip code?
  • Girish: Yes, my zip code is 43215.
  • John: one final question, Please confirm your email address.
  • Girish: my email is gbtest@gmailtest.com.
  • John: Great, thank you. I will now reset your password and send you an email with instructions on how to log in to your account. Please check your email in a few minutes.
  • Girish: Thank you so much for your help.
  • John: You're welcome. Is there anything else I can assist you with today?
  • Girish: No, that's all for now. Thank you again for your help.
  • John: You're welcome. Have a great day!
Review the guarded/masked response returned by Generative AI Foundation Model
As you can note in the response above, the GenAI response has masked the PII information.
Let's look at the response once the guardrail policy is updated to block the PII data.
Response with blocked data
Here is the response when the policy is updated to block PII containing an account number.
Input with blocked data
Since Apply Guardrail provides the flexibility to guard the input or the response independently, here is the result of applying the guardrail to an input containing a blocked data element:
With these steps, a serverless GenAI solution to create call center transcript summaries via a REST API, Lambda, and the AWS Bedrock Converse API has been successfully completed. An Amazon Bedrock guardrail has been configured to protect PII information, and Apply Guardrail was used to demonstrate how both input and response data can be protected. Python/Boto3 were used to invoke the Bedrock Converse API with Anthropic Haiku.
As demonstrated, with the Converse API, a guardrail was used to implement a policy that controls the GenAI response and masks or blocks PII data!
A guardrail was created to remove PII information from the response. The guardrail configuration was also updated to validate that an account number, when configured for blocking, is indeed blocked.
Thanks for reading!
Click here to get to YouTube video for this solution.
π’’π’Ύπ“‡π’Ύπ“ˆπ’½ ℬ𝒽𝒢𝓉𝒾𝒢
𝘈𝘞𝘚 𝘊𝘦𝘳𝘡π˜ͺ𝘧π˜ͺ𝘦π˜₯ 𝘚𝘰𝘭𝘢𝘡π˜ͺ𝘰𝘯 𝘈𝘳𝘀𝘩π˜ͺ𝘡𝘦𝘀𝘡 & π˜‹π˜¦π˜·π˜¦π˜­π˜°π˜±π˜¦π˜³ 𝘈𝘴𝘴𝘰𝘀π˜ͺ𝘒𝘡𝘦
𝘊𝘭𝘰𝘢π˜₯ π˜›π˜¦π˜€π˜©π˜―π˜°π˜­π˜°π˜¨π˜Ί 𝘌𝘯𝘡𝘩𝘢𝘴π˜ͺ𝘒𝘴𝘡