Use System Prompts with Anthropic Claude on Amazon Bedrock
System prompts are a great way to predefine how a generative AI model should respond to subsequent user input. In this post, you'll learn what they are and how to use them in your own applications with Anthropic Claude 2.x and 3.
A system prompt is a way to provide context, instructions, and guidelines to Claude before presenting it with a question or task. By using a system prompt, you can set the stage for the conversation, specifying Claude's role, personality, tone, or any other relevant information that will help it better understand and respond to the user's input. Here are a few example use cases:
- Pizza Order Processing: Reduce the scope of the model to focus on taking orders based on the pizza service's menu and location.
- Technical Support Troubleshooting: Inform the model about product details, FAQs, and decision trees to help users solve technical issues with a product.
- Code Debugging: Inject information about libraries, frameworks, and programming language versions to identify and suggest fixes for software bugs.
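To make the first use case concrete, a pizza-ordering system prompt could be assembled directly from the service's menu data. The menu items, prices, and helper function below are hypothetical placeholders; a real application would pull this data from its own backend:

```python
# Hypothetical menu data for illustration only.
MENU = {
    "Margherita": 9.50,
    "Pepperoni": 11.00,
    "Quattro Formaggi": 12.50,
}

def build_pizza_system_prompt(menu: dict, city: str) -> str:
    """Compose a system prompt that restricts the model to order-taking."""
    menu_lines = "\n".join(f"- {name}: ${price:.2f}" for name, price in menu.items())
    return (
        f"You are an order-taking assistant for a pizza service in {city}. "
        "Only answer questions about the menu below and help the user place "
        "an order. Politely decline any unrelated request.\n\n"
        f"Menu:\n{menu_lines}"
    )

print(build_pizza_system_prompt(MENU, "Seattle"))
```

The same pattern applies to the other use cases: gather the domain knowledge (FAQs, decision trees, library versions) into a string and pass it as the system prompt.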
Depending on the Claude version, Amazon Bedrock offers two different APIs:
- The text completion API, used by Claude versions 1 and 2.x.
- The messages API, introduced with Claude 3.
Note: You can find fully functional code examples at the end of this post.
# The prompt format for the text completion API (Claude 1 and 2.x)
user_prompt = "Tell me a story."
prompt = "Human: " + user_prompt + "\n\nAssistant:"
The young boy wandered into the dark, mysterious forest, hoping to find the rare flower his mother needed to recover from her illness, but instead encountered a wise old owl who offered him cryptic advice about believing in himself.
# Using a system prompt with the text completion API (Claude 1 and 2.x)
system_prompt = "All your output must be pirate speech 🦜"
user_prompt = "Tell me a story."
prompt = "System: " + system_prompt + "\n\nHuman: " + user_prompt + "\n\nAssistant:"
Yarrr, 'twas a dark 'n stormy night when Blackbeard 'n his scurvy crew set sail on the seven seas, plunderin' merchant ships fer pieces of eight 'n fine silks, before returnin' to Tortuga fer a night of rum-filled debauchery!
Note: You can find fully functional code examples at the end of this post.
// The request object for the messages API (Claude 3)
{
  "anthropic_version": "bedrock-2023-05-31",
  "max_tokens": 1024,
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "Tell me a story."
        }
      ]
    }
  ]
}
// The response object returned by the messages API
{
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "Orphaned as a child, she overcame poverty, discrimination, and countless ..."
    }
  ],
  ...
}
To use a system prompt with the messages API, simply add a "system" parameter to the request object like this:
// The request including a system prompt with the messages API
{
  "system": "All your output must be pirate speech 🦜",
  "anthropic_version": "bedrock-2023-05-31",
  "max_tokens": 1024,
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "Tell me a story."
        }
      ]
    }
  ]
}
// The response object returned by the messages API
{
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "Ahoy, matey! Hoist the mainsail an' brace yeself ... 🏴‍☠️"
    }
  ],
  ...
}
- Amazon Bedrock code examples - Our constantly growing list of examples across models and programming languages.
- The inference parameter reference for Claude and all other models on Amazon Bedrock.
- And, of course, the Generative AI Space here on community.aws with a curated list of articles all around Amazon Bedrock and Generative AI.
# This Python example demonstrates the use of a system prompt with the
# Text Completion API for Claude 2.x
import boto3
import json

# Initialize the client with the service and region
client = boto3.client('bedrock-runtime', 'us-east-1')

# Define model ID and prompt
model_id = 'anthropic.claude-v2:1'
system_prompt = 'All your output must be pirate speech 🦜'
user_prompt = 'Tell me a story.'
prompt = f"System: {system_prompt}\n\nHuman: {user_prompt}\n\nAssistant:"

# Create the request body
body = {
    "prompt": prompt,
    "max_tokens_to_sample": 200,
    "temperature": 0.5,
    "stop_sequences": ["\n\nHuman:"]
}

# Invoke the model and print the response
response = client.invoke_model(modelId=model_id, body=json.dumps(body))
print(json.loads(response["body"].read())["completion"])
# This Python example demonstrates the use of a system prompt with the
# Messages API for Claude 3
import boto3
import json

client = boto3.client(service_name="bedrock-runtime", region_name="us-east-1")

model_id = "anthropic.claude-3-sonnet-20240229-v1:0"

response = client.invoke_model(
    modelId=model_id,
    body=json.dumps(
        {
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 1024,
            "system": "All your output must be pirate speech 🦜",
            "messages": [
                {
                    "role": "user",
                    "content": [
                        {
                            "type": "text",
                            "text": "Tell me a story."
                        }
                    ]
                }
            ],
        }
    ),
)

# Process and print the response(s)
response_body = json.loads(response.get("body").read())
for output in response_body.get("content", []):
    print(output["text"])
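The response-processing loop in the example above can also be wrapped in a small reusable helper that pulls all text blocks out of a parsed messages API response. The helper name is my own, and the sample dictionary is hypothetical, mirroring the response shape shown earlier in this post:

```python
def extract_text(response_body: dict) -> str:
    """Concatenate all text blocks from a parsed messages API response body."""
    return "".join(
        block["text"]
        for block in response_body.get("content", [])
        if block.get("type") == "text"
    )

# Hypothetical parsed response, matching the shape returned by the messages API
sample = {
    "type": "message",
    "role": "assistant",
    "content": [{"type": "text", "text": "Ahoy, matey!"}],
}
print(extract_text(sample))
```

This keeps the invocation code focused on the request while the helper handles the (potentially multi-block) response content.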
Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.