
Working With Your Live Data Using LangChain

Use LangChain with Amazon Bedrock and Amazon DynamoDB to build applications that keep conversations with LLMs consistent and engage in natural dialogue

Elizabeth Fuentes
Amazon Employee
Published Oct 6, 2023
Last Modified May 8, 2024
When building applications leveraging large language models (LLMs), providing the full conversation context in each prompt is crucial for coherence and natural dialogue. Rather than treating each user input as an isolated question (Fig. 1), the model must understand how it fits into the evolving conversation.
[Fig. 1: Architecture]
Storing every new entry and response in the prompt (Fig. 2) makes it grow, requiring more memory and processing. Without techniques to optimize how the dialogue history is stored, balancing performance against natural interaction, resources would quickly be exhausted.
[Fig. 2: Architecture]
In this blog post, I will show you techniques for efficiently providing conversation context to models with LangChain, building a conversational agent that engages in natural dialogue and maintains the context of the conversation by appending each generated response to the prompt that informs the next one. This allows you to have extended, coherent conversations with the agent across multiple turns. By the end, you will have the skills to create your own conversational application powered by the latest advances in generative AI.

Let’s get started!

1 - Install The LangChain Library
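A minimal install with pip, adding boto3 for the AWS SDK (version pins omitted):

```shell
pip install langchain boto3
```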

Once installed, you can import the modules you need into your application.

2 - Create the LLM Invocation

The invocation is made using Amazon Bedrock, a fully managed service that makes foundation models from Amazon and third-party model providers accessible through an API.
The Anthropic Claude V2 100K model is used in this example.
To use Amazon Bedrock’s capabilities with LangChain, import:
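For example, using the import path from the classic (pre-1.0) LangChain releases current when this post was written; newer versions move the Bedrock integration into separate packages:

```python
# LangChain wrapper for LLMs served through Amazon Bedrock.
from langchain.llms.bedrock import Bedrock
```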
Then create the Amazon Bedrock Runtime Client:
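A sketch of the client and the LLM wrapper; the Region and the inference parameters in `model_kwargs` are illustrative assumptions, so use the values that apply to your account:

```python
import boto3
from langchain.llms.bedrock import Bedrock

# Low-level client for the Amazon Bedrock runtime API.
bedrock_client = boto3.client(
    service_name="bedrock-runtime",
    region_name="us-east-1",  # assumption: a Region where Bedrock is enabled for you
)

# LangChain LLM object backed by Anthropic Claude V2.
llm = Bedrock(
    model_id="anthropic.claude-v2",
    client=bedrock_client,
    model_kwargs={"max_tokens_to_sample": 500, "temperature": 0.5},
)
```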
📚Note: Learn more about Amazon Bedrock and LangChain here, the Amazon Bedrock client here and here.
A Chain, the tool that calls the components of the application, is needed to generate conversation with the model. Set verbose=True to debug and see the internal states of the Chain:
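A minimal sketch, assuming the Bedrock `llm` object created in the previous step:

```python
from langchain.chains import ConversationChain

# verbose=True prints the full prompt and internal state on every call.
conversation = ConversationChain(llm=llm, verbose=True)
```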
Test the Chain with this line:
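For example (the prompt text is illustrative), assuming the `conversation` Chain from this step:

```python
print(conversation.predict(input="Hi there! Tell me about yourself."))
```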
[Screenshot: Amazon Bedrock Chain output]
Additionally, you can invoke the Amazon Bedrock API directly with the Invoke Model API:
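A direct call looks roughly like this; the prompt and parameters are illustrative, and the request body format shown is specific to Claude V2:

```python
import json

import boto3

bedrock_runtime = boto3.client(service_name="bedrock-runtime", region_name="us-east-1")

# Claude V2 expects a Human/Assistant-formatted prompt in the request body.
body = json.dumps({
    "prompt": "\n\nHuman: Tell me a fun fact about llamas.\n\nAssistant:",
    "max_tokens_to_sample": 300,
})

response = bedrock_runtime.invoke_model(
    body=body,
    modelId="anthropic.claude-v2",
    accept="application/json",
    contentType="application/json",
)

# The response body is a streaming object; read and decode it.
print(json.loads(response["body"].read())["completion"])
```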
[Screenshot: Amazon Bedrock InvokeModel output]

3 - Add Chat Memory To The Chain

There are different memory types in LangChain, but in this blog we are going to review the following:

3.1 ConversationBufferMemory

Using this memory allows you to store all the messages in the conversation.

3.2 ConversationBufferWindowMemory

Limits the dialogue history size to the most recent K interactions. Older interactions are discarded as new ones are added to keep the size fixed at K.

3.3 ConversationSummaryMemory

This uses an LLM to create a summary of the conversation, which is then injected into the prompt; useful for long conversations.

3.4 ConversationSummaryBufferMemory

Uses both a buffer and a summary: it stores the most recent interactions in full in a buffer, and compiles older interactions into a summary.

3.5 ConversationTokenBufferMemory

Keeps a buffer of recent interactions in memory, and uses token length rather than number of interactions to determine when to flush interactions.
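The three LLM-backed variants are constructed similarly; this sketch assumes the Bedrock `llm` from step 2, and `max_token_limit=200` is an arbitrary example value:

```python
from langchain.memory import (
    ConversationSummaryBufferMemory,
    ConversationSummaryMemory,
    ConversationTokenBufferMemory,
)

# Summarize the whole history with the LLM before injecting it into the prompt.
summary_memory = ConversationSummaryMemory(llm=llm, return_messages=True)

# Keep recent turns verbatim, fold older ones into a running summary.
summary_buffer_memory = ConversationSummaryBufferMemory(
    llm=llm, max_token_limit=200, return_messages=True
)

# Trim the buffer by token count instead of number of interactions.
token_buffer_memory = ConversationTokenBufferMemory(
    llm=llm, max_token_limit=200, return_messages=True
)
```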
📚Note: In all of these memory types the parameter return_messages=True is set, so that the history is returned as a list of messages.

4 - Try it!

To try the different memory configurations, pass each one as the memory parameter of the Chain.
Test the Chain:
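For example, with a window memory of the 3 most recent interactions; this sketch assumes the Bedrock `llm` from step 2, and the prompts are illustrative:

```python
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferWindowMemory

# Keep only the 3 most recent interactions in the prompt.
conversation = ConversationChain(
    llm=llm,  # the Bedrock LLM created in step 2
    memory=ConversationBufferWindowMemory(k=3, return_messages=True),
    verbose=True,
)

print(conversation.predict(input="Hi! My name is Liz."))
print(conversation.predict(input="Do you remember my name?"))
```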
In the following gif you see an example of the ConversationBufferMemory.
Try the different memory types and check the difference.

5 - Save The Conversation Memory In An Amazon DynamoDB Table

Do this using the LangChain integration with Amazon DynamoDB, following the instructions in the LangChain documentation:
  • Create the Amazon DynamoDB table:
  • Add Chat Memory To The Chain:
  • Try It!
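The three steps above can be sketched as follows, based on the LangChain DynamoDB integration docs; it assumes the Bedrock `llm` from step 2, and the table name and session ID are illustrative (the partition key must be named SessionId):

```python
import boto3
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain.memory.chat_message_histories import DynamoDBChatMessageHistory

# 1. Create the DynamoDB table the integration expects.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.create_table(
    TableName="SessionTable",
    KeySchema=[{"AttributeName": "SessionId", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "SessionId", "AttributeType": "S"}],
    BillingMode="PAY_PER_REQUEST",
)
table.meta.client.get_waiter("table_exists").wait(TableName="SessionTable")

# 2. Back the chat memory with the table; one item per session ID.
history = DynamoDBChatMessageHistory(table_name="SessionTable", session_id="user-123")
memory = ConversationBufferMemory(chat_memory=history, return_messages=True)

# 3. Try it: each turn is persisted to DynamoDB as part of the session item.
conversation = ConversationChain(llm=llm, memory=memory, verbose=True)
print(conversation.predict(input="Hi! I'm testing persistent memory."))
```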
You now have a new record in the DynamoDB table:
[Screenshot: the conversation history record in the DynamoDB table]

Conclusion

Thank you for joining me on this journey, in which you gained the skills to maintain a coherent conversation with LLMs and engage in natural dialogue: using the LangChain memory module, invoking models through the Amazon Bedrock API, and storing the conversation history in an Amazon DynamoDB table.
Some links for you to continue learning and building:

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
