Building a WhatsApp genAI Assistant with Amazon Bedrock and Claude 3
This blog shows you how to deploy a WhatsApp app on Amazon Bedrock and chat with an LLM in any language: send voice notes, get transcripts, and talk through them. Previously you used Claude 1 or 2; now you can leverage Claude 3 for conversations and for visual content such as images, charts, and diagrams.
Elizabeth Fuentes
Amazon Employee
Published Mar 29, 2024
In the previous blog, "Building a WhatsApp genAI Assistant with Amazon Bedrock", you learned how to deploy a WhatsApp app that allows you to chat in any language using either Anthropic Claude 1 or 2 as the large language model (LLM) on Amazon Bedrock. You can send voice notes and receive transcripts, and if you prefer, you can even converse with the model using voice notes.
In this new blog, I'll show you how to harness the enhanced capabilities of Anthropic Claude 3 to handle conversations more effectively while seamlessly processing visual content such as photos, charts, graphs, and technical diagrams.
🔒 Your data remains securely stored within your AWS account and is never shared or used for model training purposes, ensuring complete privacy. However, it's advisable to avoid sharing sensitive personal information, as WhatsApp's data security cannot be guaranteed.
AWS Level: 300
Prerequisites:
💰 Cost to complete:
In previous versions, the Text Completions API (now a legacy API) was used. For proper response generation, you need to format your prompt using alternating `\n\nHuman:` and `\n\nAssistant:` conversational turns. This is what the code looks like with Amazon Bedrock:
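A minimal sketch of the legacy call, assuming boto3 credentials are configured and the Claude 2 model ID (the prompt text here is a placeholder):

```python
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

# Legacy Text Completions format: alternating Human/Assistant turns
prompt = "\n\nHuman: Translate 'good morning' to Spanish.\n\nAssistant:"

response = bedrock_runtime.invoke_model(
    modelId="anthropic.claude-v2",
    body=json.dumps({
        "prompt": prompt,
        "max_tokens_to_sample": 300,
    }),
)
completion = json.loads(response["body"].read())["completion"]
print(completion)
```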
With Anthropic Claude 3, the conversation is handled through the Messages API:

`messages=[{"role": "user", "content": content}]`

Each input message must be an object with a `role` (`user` or `assistant`) and `content`. The content can be either a single string or an array of content blocks, each block having its own designated `type` (`text` or `image`), as in the sketch below.
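A sketch of the two block shapes, following the Anthropic Messages API field names (the image payload is a placeholder):

```python
# A content block with type equal to "text":
text_block = {"type": "text", "text": "What is in this image?"}

# A content block with type equal to "image" (base64-encoded source):
image_block = {
    "type": "image",
    "source": {
        "type": "base64",
        "media_type": "image/jpeg",  # or image/png, image/gif, image/webp
        "data": "<base64-encoded image bytes>",
    },
}

# Blocks are combined into a single user message:
content = [image_block, text_block]
messages = [{"role": "user", "content": content}]
```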
🖼️ Anthropic currently supports the base64 source type for images, and the image/jpeg, image/png, image/gif, and image/webp media types. See more input examples in the Anthropic documentation.
The Messages API also allows us to add context or instructions to the model through a System Prompt (`system`). This is what the code looks like with Amazon Bedrock:
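A minimal sketch of the Claude 3 call through Amazon Bedrock, assuming boto3 and the Claude 3 Sonnet model ID (the repository's Lambda code is the reference implementation):

```python
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

# A simple text-only content array; image blocks work the same way
content = [{"type": "text", "text": "Hola, ¿cómo estás?"}]

response = bedrock_runtime.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1024,
        # System Prompt: context/instructions for the whole conversation
        "system": "Always reply in the original user language.",
        "messages": [{"role": "user", "content": content}],
    }),
)
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```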
Let me break down the key components:
- The system receives user inputs in the form of text, voice, or images through WhatsApp.
- Message processing is performed based on the input format (text, voice, or image).
- For text processing, the process_stream Lambda function sends the message text to another Lambda Function that invokes a Large Language Model (LLM) through a call to the Amazon Bedrock API. The response from the LLM is then sent using the whatsapp_out Lambda function, which delivers it to the user via WhatsApp.
- For voice processing, the audio_job_transcriptor Lambda Function is triggered. This Lambda Function downloads the WhatsApp audio from the link in the message to an Amazon S3 bucket, using WhatsApp Token authentication. It then converts the audio to text using the Amazon Transcribe start_transcription_job API, which leaves the transcript file in an output Amazon S3 bucket (see the transcription sketch after this list). The transcriber_done Lambda Function is triggered by an Amazon S3 Event Notification put event once the Transcribe job is complete. It extracts the transcript from the output S3 bucket and sends it to the whatsapp_out Lambda Function to respond to WhatsApp.
- For image processing, the system invokes Claude 3 through a call to the Amazon Bedrock API.
- The system can access databases like Amazon DynamoDB to retrieve contextual information like message history and user sessions.
- After processing, the system generates a response that is sent back to the user via WhatsApp.
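A minimal sketch of the transcription call inside audio_job_transcriptor, assuming boto3 and hypothetical bucket and job names (WhatsApp voice notes arrive as Opus audio in an OGG container):

```python
import boto3

transcribe = boto3.client("transcribe")

# Hypothetical names for illustration; the real values come from the event
job_name = "whatsapp-voice-note-12345"
input_uri = "s3://my-audio-bucket/voice-note.ogg"

transcribe.start_transcription_job(
    TranscriptionJobName=job_name,
    Media={"MediaFileUri": input_uri},
    MediaFormat="ogg",
    IdentifyLanguage=True,  # detect the language instead of fixing one
    OutputBucketName="my-transcripts-output-bucket",
)
```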
You have the option to uncomment the code in the transcriber_done Lambda Function and send the voice note transcription to the agent_text_v3 Lambda Function.
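For reference, the whatsapp_out Lambda Function ultimately delivers replies to the user. A minimal sketch, assuming the WhatsApp Business Cloud API with placeholder token and IDs:

```python
import json
import urllib3

http = urllib3.PoolManager()

# Placeholder values; in the real app these come from configuration/secrets
whatsapp_token = "<WHATSAPP_TOKEN>"
phone_number_id = "<PHONE_NUMBER_ID>"
recipient = "<USER_PHONE_NUMBER>"

payload = {
    "messaging_product": "whatsapp",
    "to": recipient,
    "type": "text",
    "text": {"body": "Aquí está tu respuesta..."},
}

resp = http.request(
    "POST",
    f"https://graph.facebook.com/v18.0/{phone_number_id}/messages",
    body=json.dumps(payload),
    headers={
        "Authorization": f"Bearer {whatsapp_token}",
        "Content-Type": "application/json",
    },
)
```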
The following system prompt is used:
💡 The phrase "Always reply in the original user language" ensures that the assistant always responds in the user's original language; the multilingual capability itself is provided by Anthropic Claude.
Follow the steps in https://github.com/build-on-aws/building-gen-ai-whatsapp-assistant-with-amazon-bedrock-and-python
✅ Chat and ask follow-up questions. Test your multi-language skills.

✅ Send and transcribe voice notes. Test the app's capabilities for transcribing multiple languages.

✅ Send photos and test the app's capabilities to describe and identify what's in the images. Play with prompts!
If you finish testing and want to clean up the application, you just have to follow these two steps:
- Delete the files from the Amazon S3 bucket created in the deployment.
- Run this command in your terminal:
In this post, you explored how to build a WhatsApp app powered by Anthropic's Claude 3 language model using Amazon Bedrock. You leveraged the new Messages API to handle conversations and incorporate visual content like images, charts, and diagrams seamlessly.
With Claude 3's advanced capabilities, the assistant can engage in natural, context-aware conversations, understanding and responding to both text and visual inputs. Whether you're practicing a new language, transcribing voice notes, or seeking insights from technical diagrams, this WhatsApp assistant stands ready to assist.
The power of large language models combined with the scalability and ease of deployment offered by Amazon Bedrock opens up exciting possibilities for building intelligent, multimodal conversational interfaces.
If you're interested in exploring other use cases or diving deeper into the technical details, be sure to check out the AWS Samples repository for more projects and code samples. Additionally, the Anthropic and Amazon Bedrock documentation are excellent resources for staying up-to-date with the latest features and best practices.
We encourage you to experiment with this WhatsApp chatbot and share your feedback or ideas for improvements in the comments below. Happy coding!
Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.