Customizing AI Behavior: System prompts and inference parameters in Bedrock's Converse API
In this tutorial, you'll learn how to configure a generative AI model with a system prompt and additional inference parameters, using the Amazon Bedrock Converse API and the AWS SDK for JavaScript.
- In Part 1: Getting Started, you learned how to send your first request.
- In Part 2: Conversational AI, you learned how to implement conversational turns.
- In Part 3: Customizing AI Behavior (this post), we'll configure the model with a system prompt and additional inference parameters.
To follow along, you will need:
- An AWS account
- The AWS CLI installed and configured with your credentials
- The latest stable version of Node.js and npm installed
- Requested access to the model you want to use
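If you haven't set up a project yet, one way to do so is sketched below. The package name comes from the AWS SDK for JavaScript v3; the project folder name is just an example.

```shell
# Create a project folder and initialize it
mkdir bedrock-tutorial && cd bedrock-tutorial
npm init -y

# Allow ES module "import" syntax in .js files
npm pkg set type=module

# Install the Bedrock Runtime client from the AWS SDK for JavaScript v3
npm install @aws-sdk/client-bedrock-runtime
```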
As in the previous parts, create a BedrockRuntimeClient, specifying the AWS region where the model is available, then define your system prompt and inference parameters and send the request with a ConverseCommand:
To run the program:
- Save the code in a file named `bedrock_customized.js`.
- In your terminal, navigate to the directory containing `bedrock_customized.js`.
- Run the following command:
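With Node.js installed, that command is:

```shell
node bedrock_customized.js
```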
If you run into an error, verify that you have:
- Configured your AWS credentials
- Requested access to the model you are using
- Installed the required dependencies
- Experiment with different models and compare their responses. Here's the list of all models supporting the Converse API.
- Challenge yourself to rewrite this program in another language. Here are examples in Python, Java, C#, and more.
- Dive deeper into the Converse API in the Amazon Bedrock User Guide.
Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.