Customizing AI Behavior: System prompts and inference parameters in Bedrock's Converse API

In this tutorial, you'll learn how to configure a generative AI model with a system prompt and additional inference parameters, using the Bedrock Converse API.

Dennis Traub
Amazon Employee
Published May 30, 2024
Last Modified May 31, 2024

Welcome 👋

Thanks for joining me in this tutorial series, where I'll show you all about Bedrock's new Converse API!
Today, I will show you how to customize a model's behavior with a system prompt and additional inference parameters, using the Bedrock Converse API.

Recap: The Amazon Bedrock Converse API

The Amazon Bedrock Converse API adds two new actions to the Bedrock Runtime: Converse and ConverseStream, simplifying the interaction with all text-based generative AI models on Amazon Bedrock. It provides a cohesive set of functionality through a common and strongly typed request format, no matter which foundation model you want to use.
To learn more, check out the Amazon Bedrock User Guide, browse our growing collection of code examples covering multiple models and programming languages, or jump directly into the AWS console and try it out yourself.

Series overview and outlook

In future posts I will also show how to extract invocation metrics and metadata from the model response, how to send and receive model-specific parameters and fields, how to retrieve, process, and print the model response in real-time, and a lot more.
So stay tuned!

A quick note on programming languages

For the longest time, the default language for AI/ML has been Python, and unfortunately it's hard to find examples for the rest of us. This is why I am using JavaScript in this tutorial, and have also created additional examples in Java, C#, and more.
And now, without any further ado, let's dive into some actual code 💻

Step-by-step: Customize the AI with system prompts and inference parameters

We're using Anthropic Claude 3 Haiku today, but you can replace it with any other model that supports the Converse API. To find a specific model ID, check the most current list in the Amazon Bedrock User Guide.
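
For example, swapping in a different model is just a matter of changing the model ID. The IDs below were current at the time of writing; please verify them against the User Guide before using them:

// Other models that support the Converse API (verify current IDs in the User Guide):
// const modelId = "anthropic.claude-3-sonnet-20240229-v1:0"; // Claude 3 Sonnet
// const modelId = "meta.llama3-8b-instruct-v1:0";            // Llama 3 8B Instruct
// const modelId = "mistral.mistral-large-2402-v1:0";         // Mistral Large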

Prerequisites

  • Install the latest stable version of Node.js.
  • Set up a shared configuration file with your credentials (a minimal example follows below). For more information, see the AWS SDK for JavaScript Developer Guide.
  • Request access to the foundation models you want to use. For more information, see Model access.
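
For reference, a minimal shared credentials file (typically located at ~/.aws/credentials) might look like this, with placeholders instead of your actual keys:

[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY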

Step 1 - Import and create an instance of the Bedrock Runtime client

To interact with the API, you can use the Bedrock Runtime client, provided by the AWS SDK.
  1. Create a new file, e.g., bedrock_customized.js, and open it in an IDE or text editor.
  2. Import the BedrockRuntimeClient and the ConverseCommand from the AWS SDK for JavaScript.
  3. Create an instance of the client and configure it with the AWS Region of your choice.
import {
  BedrockRuntimeClient,
  ConverseCommand,
} from "@aws-sdk/client-bedrock-runtime";

const client = new BedrockRuntimeClient({ region: "us-east-1" });
Note: Please double-check that the model you want to use is available in your chosen region and that you have requested access to it.
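
If you're not sure which models are available in your region, one way to check programmatically is the ListFoundationModels action. This sketch assumes you have also installed the @aws-sdk/client-bedrock package (the Bedrock control plane client, separate from the runtime client used above):

import {
  BedrockClient,
  ListFoundationModelsCommand,
} from "@aws-sdk/client-bedrock";

// List the foundation models available in the configured region
const bedrock = new BedrockClient({ region: "us-east-1" });
const { modelSummaries } = await bedrock.send(new ListFoundationModelsCommand({}));
console.log(modelSummaries.map((model) => model.modelId));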

Step 2 - Prepare a message to send

  1. Define a user message.
  2. Add your message, along with the role "user", to a list, starting what we may call a "conversation".
const userMessage = "Explain 'rubber duck debugging' in one line.";

const conversation = [
  {
    role: "user",
    content: [{ text: userMessage }],
  },
];
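
Since the conversation is just a list, you can keep extending it with additional turns. As a purely hypothetical illustration (the reply only exists once you've sent the request in Step 4 below):

// Hypothetical follow-up turn, once a response is available (see Step 4):
// conversation.push(response.output.message); // the assistant's reply
// conversation.push({
//   role: "user",
//   content: [{ text: "Now give me an example." }],
// });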

Step 3 - Define a system prompt and additional inference parameters

  1. Add a system prompt to provide additional information and context, or restrict and shape the model's responses.
  2. Add inference parameters to further influence the length and creativity of the response.
const systemPrompt = [{ text: "You must always respond in rhymes." }];

const parameters = {
  maxTokens: 100,
  temperature: 0.9,
  topP: 0.5,
};
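
Roughly speaking, maxTokens caps the length of the response, temperature controls how adventurous the token sampling is, and topP limits sampling to the most likely tokens. To see the effect yourself, you might compare the playful configuration above with a more conservative one; the values below are purely illustrative:

// A more deterministic configuration, for comparison (illustrative values):
const conservativeParameters = {
  maxTokens: 50,    // shorter responses
  temperature: 0.2, // less randomness in sampling
  topP: 0.9,
};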

Step 4 - Prepare the request and send it to the API

Now we'll prepare an invocation command, send it to the client, and wait for the response.
  1. Set the model ID.
  2. Create a new ConverseCommand with the model ID, the conversation, the system prompt, and the additional inference configuration.
  3. Send the command to the Bedrock Runtime and wait for the response.
const modelId = "anthropic.claude-3-haiku-20240307-v1:0";

const command = new ConverseCommand({
  system: systemPrompt,
  inferenceConfig: parameters,
  messages: conversation,
  modelId,
});

const response = await client.send(command);
Note: You can find the list of models supporting the Converse API and a list of all model IDs in the documentation.
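
In a real application, you'll also want to handle failures, e.g., when you haven't been granted access to the model yet. Here's a minimal sketch that wraps the invocation in a try/catch:

// Minimal error handling around the invocation (sketch):
try {
  const response = await client.send(command);
  // ... process the response here
} catch (err) {
  console.error(`Invocation failed: ${err.name} - ${err.message}`);
}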

Step 5 - Extract and print the model's response

Now, we can extract the model's response text and print it to the console:
const responseText = response.output.message.content[0].text;
console.log(responseText);

Let's run the program

🚀 Now let's see our program in action! Open a terminal, run it using Node, and observe the response.
Here's what I got when running my example:
$ node bedrock_customized.js

To debug with a rubber duck, a solution you'll find,
Explaining your code, line by line, to a friend so kind.

Next steps

You just got a first taste of Amazon Bedrock's powerful new Converse API. You learned how to influence the model's responses with a system prompt and inference parameters.
Ready for more? Here are some ideas to keep exploring:
  • Swap out the model and see how different models respond to the same prompt.
  • Challenge yourself to rewrite this program in another programming language. Here are examples in multiple languages.
  • Try to write a system prompt that restricts the model so that it only takes orders for a fictitious delivery service, no matter how hard you try to get it to do something else (see the sketch after this list for a starting point).
  • Learn more about the Converse API in the Amazon Bedrock User Guide.
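
For the delivery-service challenge, here's one possible starting point. The service name and wording are made up for illustration, and you'll likely have to iterate to make the restriction robust:

// A hypothetical system prompt for the delivery-service challenge:
const systemPrompt = [{
  text:
    "You are the order-taking assistant for 'Duck Express', a fictitious " +
    "delivery service. You only take delivery orders. If the user asks for " +
    "anything else, politely decline and steer the conversation back to their order.",
}];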
As mentioned above, future posts will cover invocation metrics and metadata, model-specific parameters and fields, real-time streaming responses, and a lot more.
Thanks for joining me today, I hope you learned something new! See you soon 👋

The complete source code for this tutorial

Here's the complete source code. Feel free to copy, paste, and start building your own AI-enhanced app!
// Import the Bedrock Runtime client and the Converse command from the AWS SDK
import {
  BedrockRuntimeClient,
  ConverseCommand,
} from "@aws-sdk/client-bedrock-runtime";

// Create a Bedrock Runtime client in the AWS Region of your choice
const client = new BedrockRuntimeClient({ region: "us-east-1" });

// Start the conversation with a user message
const userMessage = "Explain 'rubber duck debugging' in one line.";

const conversation = [
  {
    role: "user",
    content: [{ text: userMessage }],
  },
];

// A system prompt to shape the model's responses
const systemPrompt = [{ text: "You must always respond in rhymes." }];

// Inference parameters to influence the length and creativity of the response
const parameters = {
  maxTokens: 100,
  temperature: 0.9,
  topP: 0.5,
};

// Set the model ID, e.g., Claude 3 Haiku
const modelId = "anthropic.claude-3-haiku-20240307-v1:0";

// Create the command with the conversation, system prompt, and configuration
const command = new ConverseCommand({
  system: systemPrompt,
  inferenceConfig: parameters,
  messages: conversation,
  modelId,
});

// Send the command to the model and wait for the response
const response = await client.send(command);

// Extract and print the response text
const responseText = response.output.message.content[0].text;
console.log(responseText);

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
