A Comprehensive Guide to Mastering GenAI with Amazon Bedrock
Dive into GenAI with Amazon Bedrock with an AWS solution architect's curated curriculum spanning two great AWS Workshops
Kinman
Amazon Employee
Published Aug 15, 2024
One of the great things about being a solutions architect at AWS is facilitating workshops and immersion days, both for individual customers and for multi-customer events.
If you haven’t attended an AWS Immersion Day, I’d definitely recommend one on a topic that interests you. The general format for an all-day event is presentations in the morning and workshops in the afternoon, and the day’s food is provided!
I’ve evaluated many of the great workshops the amazing people here at AWS have built to help people on their GenAI journey. Two stood out, and I used them to build a “Mastering GenAI with Amazon Bedrock” series, which I’ll highlight today.
The two workshops are:
The great thing about these workshops is that you don’t have to attend an AWS event to get access to them. Both have a self-serve option that allows you to go through the workshop in your own AWS account. Be aware that if you go the self-serve route, you will pay for the services and resources you use. Review each workshop's cost guidance, but generally speaking, if you follow the clean-up process the workshops are very inexpensive to run.
Here is my overview of each workshop and guidance on how to go through them most effectively. It should go without saying that the workshop creators put a lot of thought and effort into designing the content, and I agree with the incremental steps they use to introduce new concepts. I equally believe there are quick learners who could benefit from a condensed curriculum that takes larger steps.
If you are just getting started with building GenAI applications, this is the workshop I recommend starting with because it provides a good intro to:
The Boto3 library: The Python library used to interact with various AWS services including Amazon Bedrock.
The LangChain framework: A popular open-source framework and wrapper that facilitates building and connecting the components essential to GenAI applications.
Streamlit: Another staple open-source package that is great for rapidly prototyping the UI/UX of a GenAI application.
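To give a flavor of the Boto3 pieces the labs build on, here is a minimal sketch of invoking a Bedrock model with the Converse API. The model ID is a placeholder (use any model you have access to), and the call itself requires AWS credentials and model access; treat this as an illustration rather than the workshop's exact code.

```python
def build_messages(prompt: str) -> list:
    """Single-turn message list in the Converse API shape."""
    return [{"role": "user", "content": [{"text": prompt}]}]

def ask_bedrock(prompt: str,
                model_id: str = "anthropic.claude-3-haiku-20240307-v1:0") -> str:
    """Send one prompt to a Bedrock model via the Converse API."""
    import boto3  # lazy import so build_messages stays testable offline
    client = boto3.client("bedrock-runtime")
    response = client.converse(
        modelId=model_id,  # placeholder: pick any model enabled in your account
        messages=build_messages(prompt),
        inferenceConfig={"maxTokens": 256, "temperature": 0.5},
    )
    return response["output"]["message"]["content"][0]["text"]
```

The same `build_messages` helper can be reused for multi-turn chat by appending prior user and assistant turns to the list.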
The example use-cases focus on text generation, document Q&A, document summarization, image generation, image search and image modification.
The workshop concludes by providing the building blocks for good content and data moderation, using guardrails to perform content blocking, PII masking, and prompt-attack blocking.
🏡 Running from your own AWS account - Complete this section so that your account is setup to run the rest of the labs.
- Lab F-1: Bedrock Console - Provides a good overview of the Bedrock Console which is a graphical way to interact with FMs without the need for an IDE or command-line
- Lab F-4: Inference parameters - A cool, quick example of invoking multiple models and seeing how each model's output varies
- Lab B-1: Text generation - Example of Streamlit & invoking an LLM
- Lab L-3: Retrieval-Augmented Generation - Introduces in-memory vector storage
- Leverages Langchain’s memory buffer and conversation chain (client-side)
- ConversationSummaryBufferMemory combines the two ideas. It keeps a buffer of recent interactions in memory, but rather than just completely flushing old interactions it compiles them into a summary and uses both. It uses token length rather than number of interactions to determine when to flush interactions.
- Lab I-1: Chatbot with RAG - Combines concepts from Lab B-3 and B-4
Choose either I-6 or I-7 based on the format you are most interested in; the rest of the concepts are the same.
- Lab M-1: Image search - Good parallel to text embeddings
- Lab M-4: Masking introduction - Good example of more advanced image manipulation use-case
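The summary-buffer idea behind ConversationSummaryBufferMemory can be sketched in plain Python: keep recent turns verbatim and fold older turns into a running summary once a token budget is exceeded. This is a simplified illustration only; words stand in for tokens, and the "summary" is a naive truncation, whereas LangChain's real class asks the LLM itself to write the summary.

```python
class SummaryBufferMemory:
    """Keep recent turns verbatim; fold older turns into a summary
    once the buffer exceeds a token budget (words stand in for tokens)."""

    def __init__(self, max_tokens: int = 50):
        self.max_tokens = max_tokens
        self.summary = ""   # compressed history of older turns
        self.buffer = []    # recent turns, kept verbatim

    def _buffer_tokens(self) -> int:
        return sum(len(turn.split()) for turn in self.buffer)

    def add_turn(self, turn: str) -> None:
        self.buffer.append(turn)
        # Flush oldest turns into the summary while over budget.
        while self._buffer_tokens() > self.max_tokens and len(self.buffer) > 1:
            oldest = self.buffer.pop(0)
            # Real implementations ask the LLM to summarize; we just keep
            # the first sentence as a stand-in.
            self.summary += oldest.split(".")[0] + ". "

    def context(self) -> str:
        """What would be sent to the model: summary plus verbatim buffer."""
        return (self.summary + " ".join(self.buffer)).strip()
```

Note the design choice the lab highlights: flushing is triggered by token length, not by the number of interactions, so one long turn can push out several short ones.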
The workshop concludes by focusing on the building blocks for good content and data moderation: guardrails, content blocking, PII masking, and prompt-attack blocking.
Once you are comfortable interacting directly with LLMs and putting the basic building blocks around them, the next step is building GenAI agents, the trend I am seeing with my most innovative customers.
Put simply, agents give an LLM the ability to:
1) Be aware of APIs available for it to use
2) Determine when it is appropriate to use those APIs to get additional information or perform a desired action
3) Request those actions to be performed
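These three steps can be sketched as a toy loop. Everything here is illustrative: the tool registry, the stubbed "model decision," and the vacation-lookup function are made up for this sketch, and Agents for Amazon Bedrock handles this orchestration for you with a real LLM in the loop.

```python
def get_vacation_days(employee_id: str) -> str:
    """Hypothetical API the agent is aware of (step 1)."""
    return f"Employee {employee_id} has 12 vacation days left."

TOOLS = {"get_vacation_days": get_vacation_days}  # step 1: available APIs

def fake_model_decide(user_input: str):
    """Stand-in for the LLM deciding whether a tool is needed (step 2)."""
    if "vacation" in user_input.lower():
        return ("call_tool", "get_vacation_days", {"employee_id": "1234"})
    return ("answer", "I can help with HR questions.", None)

def run_agent(user_input: str) -> str:
    kind, target, args = fake_model_decide(user_input)
    if kind == "call_tool":
        result = TOOLS[target](**args)           # step 3: perform the action
        return f"Based on the lookup: {result}"  # model would phrase the reply
    return target
```

In a real agent, `fake_model_decide` is the LLM reasoning over the declared tool definitions, and the loop may run several iterations before producing a final answer.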
This workshop provides progressively more advanced HR, insurance, and restaurant-related use-cases that leverage GenAI agents.
Self paced (Setup) - Absolutely necessary :-)
Lab 1 - Create an Agent with Function Definition – You will create an HR agent that looks up an employee's available vacation time and books time off, if possible.
[Skip] Lab 2 - Create Agents with API Schema - You will create an insurance agent, and instead of using function declaration to define the available APIs, you will use an API schema-based declaration in OpenAPI format. The agent can create, look up, and see pending items, and also send out reminders on pending items.
Know that in addition to the function-declaration method, you can also define the APIs available to the LLM using the OpenAPI format.
Lab 3 - Create Agents with Return of Control (Function Calling) – You will modify the HR agent from Lab 1, and instead of having Agents for Bedrock and Action Groups execute the API request, you will execute it yourself. Consider this implementation if you want to add your own validation prior to calling the API.
[Skip to Lab 5] Lab 4 - Create Agent with a Single Knowledge Base – You will create a new agent that is able to lookup information from Knowledge Bases for Amazon Bedrock. Knowledge Bases for Amazon Bedrock is a fully managed RAG solution.
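As a taste of what the lab walks through, a Knowledge Base can be queried with a single Boto3 call that retrieves relevant chunks and generates an answer in one step. The knowledge base ID and model ARN below are placeholders you would replace with your own, and the live call requires AWS credentials.

```python
def build_kb_query(question: str, kb_id: str, model_arn: str) -> dict:
    """Assemble the retrieveAndGenerate request (managed RAG in one call)."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,      # placeholder ID
                "modelArn": model_arn,         # placeholder model ARN
            },
        },
    }

def ask_knowledge_base(question: str, kb_id: str, model_arn: str) -> str:
    import boto3  # lazy import so build_kb_query stays testable offline
    client = boto3.client("bedrock-agent-runtime")
    response = client.retrieve_and_generate(
        **build_kb_query(question, kb_id, model_arn)
    )
    return response["output"]["text"]
```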
Lab 5 - Create an Agent with a Knowledge Base and an Action Group – You will create a restaurant assistant that combines Knowledge Base lookups like in Lab 4 with the ability to take actions / call APIs using Action Groups. Additionally, there is an intro to the Agent Evaluation framework.
Lab 6 - Passing Prompt and Session Attributes to your Agent – You will create a more advanced HR agent that has additional session attributes to work with. In the lab you introduce the user’s timezone, which persists during the chat session and eliminates the ambiguity when a user types “tomorrow”.
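The session-attribute idea can be sketched with Boto3's `invoke_agent` call. The timezone attribute mirrors the lab's example; the agent, alias, and session IDs are placeholders, and the live call requires a deployed agent and AWS credentials.

```python
def build_session_state(timezone: str) -> dict:
    """Session attributes persist across all turns of the chat session."""
    return {"sessionAttributes": {"timezone": timezone}}

def chat_with_agent(text: str, agent_id: str, alias_id: str,
                    session_id: str) -> str:
    import boto3  # lazy import so build_session_state stays testable offline
    client = boto3.client("bedrock-agent-runtime")
    response = client.invoke_agent(
        agentId=agent_id,          # placeholder IDs from your deployed agent
        agentAliasId=alias_id,
        sessionId=session_id,
        inputText=text,
        sessionState=build_session_state("America/Vancouver"),
    )
    # invoke_agent streams the reply back as an event stream of chunks.
    parts = []
    for event in response["completion"]:
        if "chunk" in event:
            parts.append(event["chunk"]["bytes"].decode("utf-8"))
    return "".join(parts)
```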
Lab 7 - Overwriting Advanced Prompt and Custom Lambda Parsers – In all the previous labs you have been using the default system prompts the Agents for Bedrock service generates for you; the only guidance you’ve provided is the role the agent performs. In Lab 7, you start modifying the prompts used under the hood to further tailor the actions taken by the agent.
Lab 8 - Creating Agent with Guardrails for Amazon Bedrock integration – Guardrails! Guardrails are an important tool to safeguard the LLM from malicious end-user prompts and to prevent the LLM from going off-topic.
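Once a guardrail exists, attaching it at inference time is essentially one extra parameter. Here is a sketch using the Converse API; the guardrail ID, version, and model ID are placeholders you would replace with your own, and the live call requires AWS credentials.

```python
def build_guarded_request(prompt: str, guardrail_id: str,
                          version: str) -> dict:
    """Request kwargs that run the prompt and response through a guardrail."""
    return {
        "modelId": "anthropic.claude-3-haiku-20240307-v1:0",  # placeholder
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "guardrailConfig": {
            "guardrailIdentifier": guardrail_id,  # placeholder guardrail ID
            "guardrailVersion": version,
        },
    }

def guarded_ask(prompt: str, guardrail_id: str, version: str) -> str:
    import boto3  # lazy import so build_guarded_request stays testable offline
    client = boto3.client("bedrock-runtime")
    response = client.converse(
        **build_guarded_request(prompt, guardrail_id, version)
    )
    return response["output"]["message"]["content"][0]["text"]
```

When the guardrail intervenes, the response contains the guardrail's configured blocked message instead of the model's raw output.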
After completing Labs 1 through 8, you have all the necessary skills and examples to get started building. For advanced agent capabilities, see:
Lab 10 - Creating an agent with memory – Where Lab 6 supplied additional session data to the LLM, Lab 10 provides persistent memory that can be used to store the chat history the LLM had with your end user across multiple chat sessions.
With the knowledge and skills gained from completing these two Amazon Bedrock workshops, you'll be well-equipped to embark on your GenAI journey, unlock new possibilities, and push the boundaries of what's achievable with this revolutionary technology. If you’d like to get notified about my next publication, follow me on LinkedIn.
Till next time!
Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.