
Building a WhatsApp genAI Assistant with Amazon Bedrock and Claude 3

This blog shows you how to deploy a WhatsApp app on Amazon Bedrock so you can chat with an LLM in any language, send voice notes, get transcripts, and talk through them. The previous version used Claude 1 or 2; this one leverages Claude 3 for conversations and for visual content like images, charts, and diagrams.

Elizabeth Fuentes
Amazon Employee
Published Mar 29, 2024
In the previous blog, "Building a WhatsApp genAI Assistant with Amazon Bedrock", you learned how to deploy a WhatsApp app that lets you chat in any language using either Anthropic Claude 1 or 2 as the large language model (LLM) on Amazon Bedrock. You can send voice notes and receive transcripts, and, if you prefer, you can even converse with the model through voice notes.
In this new blog, I'll show you how to harness the enhanced capabilities of Anthropic Claude 3 to handle conversations more effectively while seamlessly processing visual content such as photos, charts, graphs, and technical diagrams.

Example: Claude 3 handling visual content

Claude 3 describing a diagram that illustrates a workflow integrating AWS services to process WhatsApp messages.
Claude 3 delivering a JSON representation of a handwritten note.

Example: Claude 3 text generation

Request to explain how to create a complex application.
Answer on how to build a complex application (part 1).
Answer on how to build a complex application (part 2).

πŸ” Your data remains securely stored within your AWS account and is never shared or used for model training purposes, ensuring complete privacy. However, it's advisable to avoid sharing sensitive personal information, as WhatsApp's data security cannot be guaranteed.
βœ… AWS Level: 300
Prerequisites:
💰 Cost to complete:

What differentiates the API call of Claude 3 from its previous versions

Previous versions use Text Completions (now a legacy API). For proper response generation, you need to format your prompt with alternating \n\nHuman: and \n\nAssistant: conversational turns.
This is what the code looks like with Amazon Bedrock:
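A minimal sketch of such a call using boto3 and the Claude 2 model on Bedrock (the region, model ID, prompt, and parameters below are illustrative, not the exact code from the app):

```python
import json
import boto3

# Amazon Bedrock runtime client (region is illustrative)
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Legacy Text Completions format: alternating \n\nHuman: / \n\nAssistant: turns
prompt = "\n\nHuman: Summarize what Amazon Bedrock is in one sentence.\n\nAssistant:"

body = json.dumps({
    "prompt": prompt,
    "max_tokens_to_sample": 300,
    "temperature": 0.5,
})

response = bedrock_runtime.invoke_model(
    modelId="anthropic.claude-v2",  # Claude 2 on Amazon Bedrock
    body=body,
)

# The legacy API returns the generated text under the "completion" key
completion = json.loads(response["body"].read())["completion"]
print(completion)
```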
With Anthropic Claude 3, the conversation is handled through the Messages API: messages=[{"role": "user", "content": content}].
Each input message must be an object with a role (user or assistant) and content. The content can be either a single string or an array of content blocks, each block having its own designated type (text or image).
A content block with type equal to text:
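For example (the text value here is illustrative):

```python
# A content block of type "text" (the text value is illustrative)
text_block = {
    "type": "text",
    "text": "Describe the attached architecture diagram."
}
```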
A content block with type equal to image:
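For example, assuming a local PNG file (the file path and media type are illustrative):

```python
import base64

# Read a local image and encode it as base64 (the file path is illustrative)
with open("diagram.png", "rb") as f:
    image_data = base64.b64encode(f.read()).decode("utf-8")

# A content block of type "image"; the source must be base64-encoded
image_block = {
    "type": "image",
    "source": {
        "type": "base64",
        "media_type": "image/png",
        "data": image_data,
    },
}
```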
πŸ–ΌοΈ Anthropic currently support the base64 source type for images, and the image/jpeg, image/png, image/gif, and image/webp media types. See more input examples.
This Messages API allows us to add context or instructions to the model through a System Prompt (system).
This is what the code looks like with Amazon Bedrock:
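A minimal sketch of a Messages API call with boto3, using the Claude 3 Sonnet model ID on Bedrock (the region, system prompt, and user message are illustrative, not the exact code from the app):

```python
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# System prompt and user content are illustrative
system_prompt = "You are a helpful WhatsApp assistant. Always reply in the original user language."

body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 1000,
    "system": system_prompt,
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What can you help me with?"}
            ],
        }
    ],
})

response = bedrock_runtime.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=body,
)

# Claude 3 returns a list of content blocks; print the first text block
response_body = json.loads(response["body"].read())
print(response_body["content"][0]["text"])
```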

How The App Works

App flow: a 3-step process of input, message processing, and LLM output for handling text, voice, and images.
App diagram.
Let me break down the key components:
  1. The system receives user inputs in the form of text, voice, or images through WhatsApp.
  2. Message processing is performed based on the input format (text, voice, or image).
  3. For text processing, the process_stream Lambda function sends the message text to another Lambda Function that invokes a Large Language Model (LLM) through a call to the Amazon Bedrock API. The response from the LLM is then sent using the whatsapp_out Lambda function, which delivers it to the user via WhatsApp.
  4. For voice processing, the audio_job_transcriptor Lambda function is triggered. This Lambda function downloads the WhatsApp audio from the link in the message to an Amazon S3 bucket, using WhatsApp token authentication. It then converts the audio to text using the Amazon Transcribe start_transcription_job API (see the sketch after this list), which leaves the transcript file in an output Amazon S3 bucket. The transcriber_done Lambda function is triggered by an Amazon S3 Event Notification once the Transcribe job is complete. It extracts the transcript from the output S3 bucket and sends it to the whatsapp_out Lambda function, which responds to WhatsApp.
  5. For image processing, the system invokes Claude 3 through a call to the Amazon Bedrock API.
  6. The system can access databases like Amazon DynamoDB to retrieve contextual information like message history and user sessions.
  7. After processing, the system generates a response that is sent back to the user via WhatsApp.
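As a reference for step 4, this is a minimal sketch of the Amazon Transcribe call the audio_job_transcriptor Lambda function makes (the job name, bucket names, and audio format are illustrative; the real function builds them from the WhatsApp message):

```python
import boto3

transcribe = boto3.client("transcribe")

# Job name, S3 URIs, and buckets are illustrative
job_name = "whatsapp-audio-123"
audio_uri = "s3://input-bucket/voice-notes/audio-123.ogg"

transcribe.start_transcription_job(
    TranscriptionJobName=job_name,
    Media={"MediaFileUri": audio_uri},
    MediaFormat="ogg",                 # WhatsApp voice notes are Ogg/Opus
    IdentifyLanguage=True,             # let Transcribe detect the spoken language
    OutputBucketName="output-bucket",  # the transcript lands here and triggers transcriber_done
)
```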
✅ You have the option to uncomment the code in the transcriber_done Lambda function and send the voice note transcription to the agent_text_v3 Lambda function.
The following system prompt is used:
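The exact prompt ships with the sample code; a minimal sketch that keeps the key instruction could look like this:

```python
# Illustrative sketch of the system prompt; only the last sentence is
# quoted from the post, the rest of the wording is assumed.
system_prompt = (
    "You are a friendly and helpful WhatsApp assistant. "
    "Answer the user's questions clearly and concisely. "
    "Always reply in the original user language."
)
```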
💡 The phrase "Always reply in the original user language" ensures that the assistant always responds in the user's original language; the multilingual capability itself is provided by Anthropic Claude.

🚀 Let's build!

✅ Chat and ask follow-up questions. Test your multi-language skills.
✅ Send and transcribe voice notes. Test the app's capabilities for transcribing multiple languages.
✅ Send photos and test the app's capabilities to describe and identify what's in the images. Play with prompts.

🚀 Keep testing the app, play with the prompt, and adjust it to your needs.

🧹 Clean the house!

If you finish testing and want to clean up the application, you just have to follow these two steps:
  1. Delete the files from the Amazon S3 bucket created in the deployment.
  2. Run this command in your terminal:
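Assuming the app was deployed with the AWS CDK, as in the previous blog, the cleanup command would be:

```bash
cdk destroy
```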

Conclusion:

In this post, you explored how to build a WhatsApp app powered by Anthropic's Claude 3 language model using Amazon Bedrock. You leveraged the new Messages API to handle conversations and incorporate visual content like images, charts, and diagrams seamlessly.
With Claude 3's advanced capabilities, the assistant can engage in natural, context-aware conversations, understanding and responding to both text and visual inputs. Whether you're practicing a new language, transcribing voice notes, or seeking insights from technical diagrams, this WhatsApp assistant stands ready to assist.
The power of large language models combined with the scalability and ease of deployment offered by Amazon Bedrock opens up exciting possibilities for building intelligent, multimodal conversational interfaces.
If you're interested in exploring other use cases or diving deeper into the technical details, be sure to check out the AWS Samples repository for more projects and code samples. Additionally, the Anthropic and Amazon Bedrock documentation are excellent resources for staying up-to-date with the latest features and best practices.
We encourage you to experiment with this WhatsApp chatbot and share your feedback or ideas for improvements in the comments below. Happy coding!

🚀 Some links for you to continue learning and building:

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
