Solving LLM Amnesia - Cross Session Memory

Public Preview: Long Term Memory for Agents for Amazon Bedrock

Mike Chambers
Amazon Employee
Published Jul 18, 2024
In the rapidly evolving world of generative AI, one of the most exciting developments is conversational architectures, whether that be within a chatbot or under the covers. However, because large language models (LLMs) are stateless, we need to manage conversational state ourselves. For a basic conversation we can keep track of the conversation history, but over time this may grow larger than the model's context window, and when we stop the conversation the history is lost. To address this, at the end of a session you can summarize the chat history and save it, to be used in the system prompt of the next session. Today, we're going to explore a new feature for Agents for Amazon Bedrock that's currently in public preview: long-term memory. This feature provides fully managed, cross-session, long-term memory for agents.

Understanding the Need for Long-Term Memory

Traditional large language models, which power most chatbots, don't have inherent state or memory. They typically rely on passing the entire conversation history with each request to maintain context. While this approach works for short interactions, it falls short when you want to create a bot that remembers information across multiple sessions or days.
This is where the new long-term memory feature for Agents for Amazon Bedrock comes in. It allows us to create chatbots that can recall information from previous conversations, even after the session has ended. Let's walk through how to implement this feature and see it in action.

Setting Up Your Environment

Before we begin, make sure you have:
  1. An AWS account with access to Amazon Bedrock
  2. The AWS CLI configured with appropriate permissions
  3. Python 3.x installed
  4. The Boto3 library installed (pip install boto3==1.34.144)

Creating an Agent with Long-Term Memory

Let's start by creating a new Bedrock Agent with long-term memory enabled. You can use the attached Jupyter notebook for this process, or you can adapt this code to run in any Python environment.

Step 1: Import Required Libraries

First, let's import the necessary Python libraries:
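As a minimal sketch (the original notebook isn't reproduced here), the snippets below assume only the standard library plus Boto3; `json` and `uuid` are used later for the IAM policy documents and session IDs:

```python
import json
import time
import uuid

import boto3
```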

Step 2: Set Up AWS Clients

Next, we'll set up our AWS clients and define some variables:
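Something along these lines, assuming placeholder names and a model ID chosen for the demo; substitute any model that Agents for Amazon Bedrock supports in your Region:

```python
# Control-plane client for creating and managing the agent,
# runtime client for invoking it and reading its memory.
bedrock_agent = boto3.client("bedrock-agent")
bedrock_agent_runtime = boto3.client("bedrock-agent-runtime")
iam = boto3.client("iam")

region = boto3.session.Session().region_name

# These names and the model ID are assumptions for this walkthrough.
agent_name = "memory-demo-agent"
agent_role_name = "memory-demo-agent-role"
foundation_model = "anthropic.claude-3-sonnet-20240229-v1:0"
```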

Step 3: Create IAM Role and Policy

We need to create an IAM role that allows our agent to invoke the foundation model:
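A minimal sketch of that role, using the variables defined above: it trusts the Bedrock service and grants `bedrock:InvokeModel` on the chosen foundation model only:

```python
# Trust policy so the Bedrock Agents service can assume the role.
assume_role_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "bedrock.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

role = iam.create_role(
    RoleName=agent_role_name,
    AssumeRolePolicyDocument=json.dumps(assume_role_policy),
)
agent_role_arn = role["Role"]["Arn"]

# Inline policy allowing the agent to invoke the chosen foundation model.
invoke_model_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "bedrock:InvokeModel",
        "Resource": f"arn:aws:bedrock:{region}::foundation-model/{foundation_model}",
    }],
}

iam.put_role_policy(
    RoleName=agent_role_name,
    PolicyName="invoke-foundation-model",
    PolicyDocument=json.dumps(invoke_model_policy),
)

# IAM changes can take a few seconds to propagate before the role is usable.
time.sleep(10)
```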

Step 4: Create the Bedrock Agent

Now, let's create our Bedrock Agent with long-term memory enabled. We create the agent as per usual, and add the configuration for the memory:
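Here's a sketch of the call, assuming the role ARN, model ID, and names from the earlier steps; the instruction text is just an example:

```python
response = bedrock_agent.create_agent(
    agentName=agent_name,
    agentResourceRoleArn=agent_role_arn,
    foundationModel=foundation_model,
    instruction="You are a friendly assistant that remembers the user's preferences.",
    # This is the new part: enable session-summary memory and retain it for 30 days.
    memoryConfiguration={
        "enabledMemoryTypes": ["SESSION_SUMMARY"],
        "storageDays": 30,
    },
)

agent_id = response["agent"]["agentId"]
```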
And that's all we need to do to enable the feature!

Step 5: Prepare the Agent and Create an Alias

After creating the agent, we need to prepare it and create an alias:
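Something like the following, assuming the agent_id from the previous step; the waits are there because creation and preparation take a few seconds:

```python
# Give the agent a moment to finish creating before preparing it.
time.sleep(10)

# Prepare the agent (builds the DRAFT version) and wait until it is ready.
bedrock_agent.prepare_agent(agentId=agent_id)
while bedrock_agent.get_agent(agentId=agent_id)["agent"]["agentStatus"] != "PREPARED":
    time.sleep(5)

# Create an alias to invoke; memory is scoped to an agent alias.
alias_response = bedrock_agent.create_agent_alias(
    agentId=agent_id,
    agentAliasName="memory-demo-alias",
)
agent_alias_id = alias_response["agentAlias"]["agentAliasId"]
```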

Interacting with the Agent

Now that our agent is set up, let's create some helper functions to interact with it.
The memory feature relies on a memoryId parameter passed to the invoke_agent() call. This ID represents the memory for this agent alias, and you should store it securely against the user of the agent.
In this code, I set the sessionId to a random value to represent the session, and use a hard-coded memoryId for the sake of the demo.
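A sketch of a helper along those lines, assuming the agent_id and agent_alias_id from the earlier steps:

```python
def invoke_agent(input_text, session_id, memory_id, end_session=False):
    """Send one turn to the agent and return the streamed text response."""
    response = bedrock_agent_runtime.invoke_agent(
        agentId=agent_id,
        agentAliasId=agent_alias_id,
        sessionId=session_id,
        memoryId=memory_id,      # ties this session to the user's long-term memory
        inputText=input_text,
        endSession=end_session,  # True signals the end of the session
    )

    # invoke_agent returns an event stream; concatenate the text chunks.
    completion = ""
    for event in response["completion"]:
        if "chunk" in event:
            completion += event["chunk"]["bytes"].decode("utf-8")
    return completion


# A random session ID for this conversation, and a hard-coded memory ID for the demo.
# In a real application the memory ID should be unique per user and stored securely.
session_id = str(uuid.uuid4())
memory_id = "DEMO-MEMORY-ID-001"
```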

Testing Long-Term Memory

Let's have a conversation with our agent and test its long-term memory:
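For example (the exact messages don't matter, as long as the conversation contains something worth remembering, and ending the session is what triggers the managed summarization):

```python
# Tell the agent something about ourselves and ask for a suggestion.
print(invoke_agent("Hi! My name is Mike and I love hiking in the mountains.", session_id, memory_id))
print(invoke_agent("Can you suggest a weekend activity for me?", session_id, memory_id))

# End the session so the conversation gets summarized into long-term memory.
invoke_agent("Goodbye for now.", session_id, memory_id, end_session=True)
```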
Now, wait for the agent to process and summarize the conversation; this should take around 90 seconds.
While we wait, we can poll get_agent_memory() to see what memory has been captured.
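For example, with a simple polling loop along these lines:

```python
# Poll the agent's memory until the session summary appears.
memory_contents = []
while not memory_contents:
    memory = bedrock_agent_runtime.get_agent_memory(
        agentId=agent_id,
        agentAliasId=agent_alias_id,
        memoryId=memory_id,
        memoryType="SESSION_SUMMARY",
    )
    memory_contents = memory.get("memoryContents", [])
    if not memory_contents:
        time.sleep(15)

# Print the summary text captured for each completed session.
for item in memory_contents:
    print(item["sessionSummary"]["summaryText"])
```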
Finally, let's start a new session and see if our agent remembers our preferences:
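For example:

```python
# A brand-new session ID, but the same memory ID: the agent should recall
# what it learned about the user in the previous session.
new_session_id = str(uuid.uuid4())
print(invoke_agent("What outdoor activities do I enjoy?", new_session_id, memory_id))
```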

Conclusion

We've successfully created an Amazon Bedrock Agent with long-term memory, had a conversation with it, and demonstrated how it can recall information across sessions. This capability opens up a world of possibilities for creating more intelligent and personalized AI assistants.
Remember, this feature is currently in public preview, so Amazon is eager for your feedback. As you experiment with long-term memory in Bedrock Agents, consider the following.
  • The memory may well end up containing sensitive information, including PII (if that's what the user added to the chat session). As such, it's important to follow Well-Architected good practice: make the `memoryId` unique per user, and store it securely in an encrypted data store or database.
Don't forget to clean up your resources when you're done experimenting:
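A sketch of the teardown, assuming the resources created above:

```python
# Delete the alias and agent, then the IAM role and its inline policy.
bedrock_agent.delete_agent_alias(agentId=agent_id, agentAliasId=agent_alias_id)
bedrock_agent.delete_agent(agentId=agent_id)

iam.delete_role_policy(RoleName=agent_role_name, PolicyName="invoke-foundation-model")
iam.delete_role(RoleName=agent_role_name)
```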
As we continue to push the boundaries of what's possible with generative AI, features like long-term memory will play a crucial role in creating more human-like and context-aware AI assistants. Keep experimenting, and don't hesitate to share your findings and experiences with the AWS community!
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
