Use System Prompts with Anthropic Claude on Amazon Bedrock


System prompts are a great way to predefine how a generative AI model should respond to subsequent user input. In this post you'll learn what they are and how you can use them for your own applications with Anthropic Claude 2.x and 3.

Dennis Traub
Amazon Employee
Published Mar 6, 2024
Last Modified Mar 20, 2024
Did you know that you can use system prompts when interacting with Anthropic's Claude models on Amazon Bedrock?
In this post I will briefly explain what system prompts are, before showing you how to use them with the Text Completions API for Claude 2.x and the new Messages API for Claude 3.
⚠ Warning: This post contains code!
Speaking of which, if you already know about system prompts and Claude's two types of APIs, and want to get right to the code, I've added two fully functional examples at the bottom of this post!

What is a system prompt?

Since we're talking about Claude today, let's start by having a look at what Anthropic has to say:
A system prompt is a way to provide context, instructions, and guidelines to Claude before presenting it with a question or task. By using a system prompt, you can set the stage for the conversation, specifying Claude's role, personality, tone, or any other relevant information that will help it better understand and respond to the user's input.
In other words, a system prompt is a form of in-context learning, providing an effective way to pre-define the context, scope, guardrails, or output format for the model to use during an interaction.
System prompts try to ensure that the AI's output aligns with specific goals or tasks across various domains. Some typical use cases include:
  • Pizza Order Processing: Reduce the scope of the model to focus on taking orders based on the pizza service's menu and location.
  • Technical Support Troubleshooting: Inform the model about product details, FAQs, and decision trees to help users solve technical issues with a product.
  • Code Debugging: Inject information about libraries, frameworks, and programming language versions to identify and suggest fixes for software bugs.
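To make the first use case concrete, here is a minimal sketch of what such a system prompt string might look like. The restaurant name, menu items, and prices are invented for illustration:

```python
# A hypothetical system prompt for the pizza-ordering use case.
# All menu details below are made up for illustration.
system_prompt = (
    "You are an order-taking assistant for Luigi's Pizza. "
    "Only discuss items on the menu: Margherita ($10), Pepperoni ($12), "
    "and Veggie ($11). Politely decline any request unrelated to ordering."
)

print(system_prompt)
```

The same pattern applies to the other use cases: swap in product FAQs and decision trees for technical support, or library and language versions for code debugging.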

The two types of APIs for Claude on Amazon Bedrock

Now that we know about system prompts, let's have a look at the two different APIs provided by the different versions of Claude on Amazon Bedrock:
  • The text completion API, used by Claude versions 1 and 2.x.
  • The messages API introduced by the new Claude version 3.

The text completion API

With the release of Amazon Bedrock, AWS customers gained access to Anthropic's Claude versions 1 and 2, followed by the release of Claude 2.1 during re:Invent 2023. All Claude models up to 2.1 offer a text completions API, which is optimized for single-turn text generation based on a user-provided prompt with the following template:
Note: You can find fully functional code examples at the end of this post.
# The prompt format for the text completion API (Claude 1 and 2.x)
user_prompt = "Tell me a story."
prompt = "Human: " + user_prompt + "\n\nAssistant:"
When I ran this prompt with Claude 2.1, I received the following completion:
The young boy wandered into the dark, mysterious forest, hoping to find the rare flower his mother needed to recover from her illness, but instead encountered a wise old owl who offered him cryptic advice about believing in himself.

Using a system prompt with the text completion API

To add a system prompt, all you need to do is add it to the beginning of the prompt. Let's try:
# Using a system prompt with the text completion API (Claude 1 and 2.x)
system_prompt = "All your output must be pirate speech 🦜"
user_prompt = "Tell me a story."
prompt = "System: " + system_prompt + "\n\nHuman: " + user_prompt + "\n\nAssistant:"
Sending this prompt to Claude 2.1 created the following completion:
Yarrr, 'twas a dark 'n stormy night when Blackbeard 'n his scurvy crew set sail on the seven seas, plunderin' merchant ships fer pieces of eight 'n fine silks, before returnin' to Tortuga fer a night of rum-filled debauchery!

The messages API

With the addition of Claude 3 Sonnet in March 2024, Amazon Bedrock introduced a messages API, optimized for conversational exchanges (e.g., chatbots or virtual assistants) and multimodal requests (e.g., sending an image along with a text prompt to ask questions about the image).
Claude 3 has been trained to operate on alternating conversational turns between user and assistant. When creating a new message, you specify the prior conversational turns with the messages parameter and the model will generate the next message in the conversation.
Each input message must be an object with a role and content. You can specify a single user-role message, or you can include multiple user and assistant messages.
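To illustrate the alternating-turns format, here is a sketch of a messages array containing two prior turns plus a new user request; the conversation content is invented for illustration:

```python
# A sketch of a multi-turn conversation for the messages API.
# Roles must alternate between "user" and "assistant"; the model
# generates the next (assistant) message in the sequence.
messages = [
    {"role": "user", "content": [{"type": "text", "text": "Tell me a story."}]},
    {"role": "assistant", "content": [{"type": "text", "text": "Once upon a time..."}]},
    {"role": "user", "content": [{"type": "text", "text": "Make it scarier!"}]},
]

# Verify the turns alternate, as the API requires
roles = [m["role"] for m in messages]
print(roles)  # → ['user', 'assistant', 'user']
```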
Note: You can find fully functional code examples at the end of this post.
Here's the example from above, wrapped in a request object for the messages API:
// The request object for the messages API (Claude 3)
{
  "anthropic_version": "bedrock-2023-05-31",
  "max_tokens": 1024,
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "Tell me a story."
        }
      ]
    }
  ]
}
As you can see, there's only one message, its role is user, and Claude will return a new message containing the response:
// The response object returned by the messages API
{
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "Orphaned as a child, she overcame poverty, discrimination, and countless ..."
    }
  ],
  ...
}

Using a system prompt with the messages API

To add a system prompt, all you have to do is add the "system" parameter to the request object like this:
// The request including a system prompt with the messages API
{
  "system": "All your output must be pirate speech 🦜",
  "anthropic_version": "bedrock-2023-05-31",
  "max_tokens": 1024,
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "Tell me a story."
        }
      ]
    }
  ]
}
This produces the following result:
// The response object returned by the messages API
{
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "Ahoy, matey! Hoist the mainsail an' brace yeself ... 🏴 ☠️"
    }
  ],
  ...
}
And that's it!
If you enjoyed this post or learned something new, please hit the like button or let me know what you think in the comments below.

Example code

If you're anything like me, you probably want to go ahead and experiment with it yourself. So, as promised, here are two fully functional scripts in Python for both APIs. Happy coding!

System prompt with the text completion API - full example

# This Python example demonstrates the use of a system prompt with the
# Text Completion API for Claude 2.x

import boto3
import json

# Initialize the client with the service and region
client = boto3.client('bedrock-runtime', 'us-east-1')

# Define model ID and prompt
model_id = 'anthropic.claude-v2:1'

system_prompt = 'All your output must be pirate speech 🦜'
user_prompt = 'Tell me a story.'
prompt = f"System: {system_prompt}\n\nHuman: {user_prompt}\n\nAssistant:"

# Create the request body
body = {
    "prompt": prompt,
    "max_tokens_to_sample": 200,
    "temperature": 0.5,
    "stop_sequences": ["\n\nHuman:"]
}

# Invoke the model and print the response
response = client.invoke_model(modelId=model_id, body=json.dumps(body))
print(json.loads(response["body"].read())["completion"])

System prompt with the messages API - full example

# This Python example demonstrates the use of a system prompt with the
# Messages API for Claude 3

import boto3
import json

client = boto3.client(service_name="bedrock-runtime", region_name="us-east-1")
model_id = "anthropic.claude-3-sonnet-20240229-v1:0"

response = client.invoke_model(
    modelId=model_id,
    body=json.dumps(
        {
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 1024,
            "system": "All your output must be pirate speech 🦜",
            "messages": [
                {
                    "role": "user",
                    "content": [
                        {
                            "type": "text",
                            "text": "Tell me a story."
                        }
                    ]
                }
            ],
        }
    ),
)

# Process and print the response(s)
response_body = json.loads(response.get("body").read())
for output in response_body.get("content", []):
    print(output["text"])
Have fun!

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.