
Integrate Amazon Bedrock Prompt Management in LangChain applications

This post demonstrates how to enable prompt versioning and management using LangChain and Amazon Bedrock Prompt Management.

Michele Ricciardi
Amazon Employee
Published Aug 9, 2024
Prompts are one of the most important components of any GenAI application. Prompts and model parameters control the outputs of LLMs, determining the quality of the LLM responses and the overall behavior of the application.
The lifecycle of your prompts will often differ from the lifecycle of the entire application. For example, you may want to iterate on a prompt at a much higher frequency than an API specification or business logic. Additionally, you may want to experiment with multiple prompts, and keep track of previous prompt versions.
As a result, many prompt management tools have emerged to address these needs, and on July 10th, 2024, AWS announced the preview launch of Amazon Bedrock Prompt Management.
Amazon Bedrock Prompt Management simplifies the creation, evaluation, versioning, and sharing of prompts to help developers and prompt engineers get the best responses from foundation models for their use cases.
In this post, I will show you how to create a prompt with Amazon Bedrock Prompt Management using the AWS SDK for Python, and then show you how to use that prompt in a LangChain application.

Creating a Prompt

It is possible to create a Prompt in Bedrock Prompt Management using the AWS Console, the AWS CLI, or an AWS SDK.
Here is an example of how to create a Prompt using the AWS SDK for Python (boto3).
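The following is a minimal sketch; the prompt name, template text, model ID, and parameter values are illustrative placeholders that you would replace with your own.

```python
import boto3

# Prompt Management is exposed through the bedrock-agent API
client = boto3.client("bedrock-agent", region_name="us-east-1")

# Operation 1: create the Prompt (it starts out as a "draft")
create_response = client.create_prompt(
    name="MakePlaylist",  # illustrative name
    description="Generates a playlist for a given genre and number of songs.",
    defaultVariant="variantOne",
    variants=[
        {
            "name": "variantOne",
            "templateType": "TEXT",
            "templateConfiguration": {
                "text": {
                    "text": "Make me a {{genre}} playlist consisting of {{number}} songs.",
                    "inputVariables": [
                        {"name": "genre"},
                        {"name": "number"},
                    ],
                }
            },
            # The Prompt stores the Bedrock model identifier...
            "modelId": "anthropic.claude-3-sonnet-20240229-v1:0",
            # ...and the model parameters alongside the template
            "inferenceConfiguration": {
                "text": {
                    "temperature": 0.5,
                    "topP": 0.9,
                    "maxTokens": 512,
                }
            },
        }
    ],
)
prompt_id = create_response["id"]

# Operation 2: publish the draft as an immutable numbered version
version_response = client.create_prompt_version(promptIdentifier=prompt_id)
prompt_version = version_response["version"]
```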
This code performs two separate operations. First, it creates a new Prompt; second, it publishes a new "version". By default, when a new Prompt is created in Amazon Bedrock Prompt Management, the Prompt is marked as a "draft", and once finalized it can be published as a new version.
Additionally, notice how the Prompt contains not only the template, but also the LLM identifier for Bedrock (modelId) and the model parameters (inferenceConfiguration). This is extremely helpful, as you can iterate on the entire LLM configuration, as well as the prompt template, with each new Prompt version.

Retrieving a Prompt

Now that I have shown you how to create a new Prompt, let's take a look at how to use it in an application.
First, retrieve the Prompt in the application using the get_prompt SDK call.
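A sketch, reusing the bedrock-agent client and the prompt_id and prompt_version values from the previous step:

```python
# Retrieve a specific published version of the Prompt
prompt_response = client.get_prompt(
    promptIdentifier=prompt_id,
    promptVersion=prompt_version,
)
```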
Next, let's extract the details that the application needs from the prompt_response.
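For example, assuming the single-variant Prompt sketched above:

```python
# This sketch assumes the Prompt has a single variant
variant = prompt_response["variants"][0]

# Pull out the template text, the Bedrock model ID,
# and the model parameters stored with the Prompt
template_text = variant["templateConfiguration"]["text"]["text"]
model_id = variant["modelId"]
inference_config = variant["inferenceConfiguration"]["text"]
```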

Using the Prompt with LangChain

Now, let's build our LangChain chain using all the prompt configuration extracted in the previous step.
Note that when using a template from Bedrock Prompt Management, you need to set the template_format field to jinja2. Bedrock Prompt Management uses double curly braces to mark variables (as in {{variable}}), which is the same convention used by Jinja, one of the template formats supported by LangChain.
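Here is a sketch of the chain, assuming the langchain-aws and jinja2 packages are installed and using the variables extracted in the previous step:

```python
from langchain_aws import ChatBedrock
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

# Configure the model from the settings stored with the Prompt
llm = ChatBedrock(
    model_id=model_id,
    model_kwargs={
        "temperature": inference_config["temperature"],
        "top_p": inference_config["topP"],
        "max_tokens": inference_config["maxTokens"],
    },
)

# template_format="jinja2" lets LangChain parse the {{variable}}
# syntax used by Bedrock Prompt Management
prompt = ChatPromptTemplate.from_template(template_text, template_format="jinja2")

chain = prompt | llm | StrOutputParser()
```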
Lastly, let's go ahead and invoke the chain with some input.
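For example, with the playlist template sketched earlier:

```python
# Supply a value for each input variable defined in the template
response = chain.invoke({"genre": "pop", "number": 5})
print(response)
```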

Conclusion

In this blog post I showed you how to add prompt management to your LangChain-based application with Amazon Bedrock Prompt Management, following these steps:
  1. Create and manage the prompt outside the application.
  2. Retrieve the prompt in the application using Prompt ID and Prompt Version.
  3. Use the prompt template in LangChain by setting the Jinja2 template format.
     

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
