Customizing AI Behavior: System prompts and inference parameters with Java and Bedrock's Converse API


In this tutorial, you'll learn how to configure a generative AI model with a system prompt and additional inference parameters, using the Bedrock Converse API and the AWS SDK for Java.

Dennis Traub
Amazon Employee
Published Jun 27, 2024
Welcome to Part 3 of my tutorial series on Amazon Bedrock's Converse API!
In previous parts, we covered the basics of sending requests to the API and implementing conversational turns. If you haven't gone through those tutorials yet, I recommend starting with Part 1 to get a solid foundation.
In this tutorial, we'll explore how to customize the behavior of AI models using system prompts and inference parameters.
Note: The examples in this edition use Java. I've also prepared a JavaScript edition, and many more examples in Python, C#, etc.

Series overview

This series guides you through Amazon Bedrock's Converse API:
  • In Part 1: Getting Started, you learned how to send your first request.
  • In Part 2: Conversational AI, you learned how to implement conversational turns.
  • In Part 3: Customizing AI Behavior (this post), we'll configure the model with a system prompt and additional inference parameters.
Future posts will cover extracting invocation metrics and metadata, sending and receiving model-specific parameters and fields, processing model responses in real-time, the new tool-use feature, and more.
Let's dive in and start building! 💻

Step-by-step: Customize the AI response with system prompts and inference parameters


Before you begin, ensure all prerequisites are in place. You should have:
  • The AWS CLI installed and configured with your credentials
  • A Java Development Kit (JDK) version 17 or later and a build tool like Apache Maven installed
  • Requested access to the model you want to use

Step 1: Set up a new Java project

If you haven't done so already, create a new Java project using Maven and add the Bedrock Runtime and STS dependencies to your pom.xml file:
Note: Replace the aws.sdk.version with the latest version of the AWS SDK for Java.
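The dependencies section of your pom.xml might look something like this (the version shown is only a placeholder; check Maven Central for the latest AWS SDK for Java 2.x release):

```xml
<properties>
    <!-- Placeholder: replace with the latest AWS SDK for Java 2.x version -->
    <aws.sdk.version>2.25.0</aws.sdk.version>
</properties>

<dependencies>
    <!-- Bedrock Runtime: required to call the Converse API -->
    <dependency>
        <groupId>software.amazon.awssdk</groupId>
        <artifactId>bedrockruntime</artifactId>
        <version>${aws.sdk.version}</version>
    </dependency>
    <!-- STS: required when authenticating with IAM roles or SSO profiles -->
    <dependency>
        <groupId>software.amazon.awssdk</groupId>
        <artifactId>sts</artifactId>
        <version>${aws.sdk.version}</version>
    </dependency>
</dependencies>
```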

Step 2: Create an instance of the Bedrock Runtime client

Create an instance of the BedrockRuntimeClient, specifying the AWS region where the model is available:
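One way to set this up with the SDK's builder pattern (the region used here is an assumption; pick the region where you requested model access):

```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.bedrockruntime.BedrockRuntimeClient;

// Create the Bedrock Runtime client, pointing it at a region
// where the model is available (US_EAST_1 is just an example).
var client = BedrockRuntimeClient.builder()
        .region(Region.US_EAST_1)
        .build();
```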

Step 3: Specify the model ID

Specify the ID of the model you want to use.
In this example, we'll use Claude 3 Haiku:
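The model ID is a plain string (the ID below was current at the time of writing; verify it against the documentation before using it):

```java
// Model ID for Anthropic Claude 3 Haiku on Amazon Bedrock
var modelId = "anthropic.claude-3-haiku-20240307-v1:0";
```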
You can find the complete list of models supporting the Converse API and a list of all available model IDs in the documentation.

Step 4: Prepare a message to send

Prepare a message with your input text and the USER role:
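Using the SDK's Message builder, that could look like this (the prompt text itself is just an example; use whatever input you like):

```java
import software.amazon.awssdk.services.bedrockruntime.model.ContentBlock;
import software.amazon.awssdk.services.bedrockruntime.model.ConversationRole;
import software.amazon.awssdk.services.bedrockruntime.model.Message;

// Wrap the input text in a content block and assign the USER role
var inputText = "Write a short text about Amazon Bedrock."; // example prompt
var message = Message.builder()
        .content(ContentBlock.fromText(inputText))
        .role(ConversationRole.USER)
        .build();
```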

Step 5: Define a system prompt and additional inference parameters

A system prompt is a special type of message that provides additional context or instructions to the AI model. Unlike user messages, which represent the user's input, a system prompt is used to guide the model's behavior and set expectations for its responses.
In this example, we'll use a system prompt to instruct the model to respond in rhymes:
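With the SDK, a system prompt is a SystemContentBlock; the exact wording below is illustrative:

```java
import software.amazon.awssdk.services.bedrockruntime.model.SystemContentBlock;

// The system prompt steers the model's overall behavior;
// here we ask for rhyming responses.
var systemPrompt = SystemContentBlock.fromText(
        "You are a helpful assistant. Always respond in rhymes.");
```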
In addition to the system prompt, we can also specify inference parameters to further customize the model's behavior. Inference parameters allow us to control various aspects of the generated response, such as its length and randomness.
In this example, we'll limit the response to a maximum of 100 tokens and maintain a balance between creativity and coherence by setting the temperature to 0.5.
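In SDK terms, these settings go into an InferenceConfiguration object:

```java
import software.amazon.awssdk.services.bedrockruntime.model.InferenceConfiguration;

// Cap the response length and reduce randomness
var inferenceConfig = InferenceConfiguration.builder()
        .maxTokens(100)     // upper bound on generated tokens
        .temperature(0.5F)  // lower = more deterministic, higher = more creative
        .build();
```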
By combining the system prompt and inference parameters, you can fine-tune the model's behavior to suit your specific use case. Experiment with different values to observe their impact on the generated responses.

Step 6: Send the request

Send the message, inference configuration, and system prompt to the model using the Bedrock Runtime client's converse() method:
The converse() method sends the conversation to the specified model and returns its response.
Print out the model's response:
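Putting the pieces from the previous steps together (this assumes the client, modelId, message, systemPrompt, and inferenceConfig variables defined above):

```java
// Send the message, system prompt, and inference configuration
// in a single converse() call
var response = client.converse(request -> request
        .modelId(modelId)
        .messages(message)
        .system(systemPrompt)
        .inferenceConfig(inferenceConfig));

// Extract the first text block from the model's reply and print it
var responseText = response.output().message().content().get(0).text();
System.out.println(responseText);
```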

Let's run the program

With the code complete, let's run it and see how the system prompt and inference parameters shape the model's response!
Here's the full code for reference:
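The following is a consolidated sketch of all the steps above (region, model ID, and prompt texts are the example values used throughout this post; adjust them to your setup):

```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.bedrockruntime.BedrockRuntimeClient;
import software.amazon.awssdk.services.bedrockruntime.model.ContentBlock;
import software.amazon.awssdk.services.bedrockruntime.model.ConversationRole;
import software.amazon.awssdk.services.bedrockruntime.model.InferenceConfiguration;
import software.amazon.awssdk.services.bedrockruntime.model.Message;
import software.amazon.awssdk.services.bedrockruntime.model.SystemContentBlock;

public class BedrockCustomized {

    public static void main(String[] args) {

        // Step 2: Create the Bedrock Runtime client
        var client = BedrockRuntimeClient.builder()
                .region(Region.US_EAST_1)
                .build();

        // Step 3: Specify the model ID (Claude 3 Haiku)
        var modelId = "anthropic.claude-3-haiku-20240307-v1:0";

        // Step 4: Prepare the user message
        var message = Message.builder()
                .content(ContentBlock.fromText("Write a short text about Amazon Bedrock."))
                .role(ConversationRole.USER)
                .build();

        // Step 5: Define the system prompt and inference parameters
        var systemPrompt = SystemContentBlock.fromText(
                "You are a helpful assistant. Always respond in rhymes.");

        var inferenceConfig = InferenceConfiguration.builder()
                .maxTokens(100)
                .temperature(0.5F)
                .build();

        // Step 6: Send the request and print the model's response
        var response = client.converse(request -> request
                .modelId(modelId)
                .messages(message)
                .system(systemPrompt)
                .inferenceConfig(inferenceConfig));

        System.out.println(response.output().message().content().get(0).text());
    }
}
```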
To run it:
  1. Save the code in a file named BedrockCustomized.java
  2. Compile and run the Java application using your preferred IDE or command-line tools.
If everything is set up correctly, you should see the model's rhyming response printed in the console.
If you encounter any errors, double-check that you have met all the prerequisites: the AWS CLI configured with valid credentials, a supported JDK installed, and access granted to the model you specified.

Next steps

Congratulations on influencing the model's responses with a system prompt and additional inference parameters.
Ready for more? In future posts you will learn how to extract invocation metrics and metadata from the model response, how to send and receive model-specific parameters and fields, how to retrieve, process, and print the model response in real time, and a lot more.
I'd love to see what you build with Amazon Bedrock! Feel free to share your projects or ask questions in the comments.
Thanks for following along and happy building! 💻🤖

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.