
Telling bedtime stories with generative AI
Learn how to build an interactive storyteller with AWS Amplify, Amazon Bedrock, and the Converse API
We'll pass a systemPrompt to provide context, instructions, and guidelines on how the model should respond. Our function also needs IAM permission to call bedrock:InvokeModel on the specific model resource we specify. Grant that permission in the amplify/backend.ts file:
// amplify/backend.ts
import { defineBackend } from '@aws-amplify/backend';
import { auth } from './auth/resource';
import { data, CHAT_MODEL_ID, generateChatResponseFunction } from './data/resource';
import { Effect, PolicyStatement } from 'aws-cdk-lib/aws-iam';

export const backend = defineBackend({
  auth,
  data,
  generateChatResponseFunction,
});

backend.generateChatResponseFunction.resources.lambda.addToRolePolicy(
  new PolicyStatement({
    effect: Effect.ALLOW,
    actions: ["bedrock:InvokeModel"],
    resources: [
      `arn:aws:bedrock:*::foundation-model/${CHAT_MODEL_ID}`,
    ],
  })
);
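Because CHAT_MODEL_ID is interpolated into the resource ARN, the function is only allowed to invoke that one foundation model (here, arn:aws:bedrock:*::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0), not every model in Bedrock.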
Next, we define the generateChatResponseFunction using defineFunction and configure it with the model ID, timeout, and which Node runtime to use. The entry key specifies the file containing the handler with the core logic. We then define the generateChatResponse query and add it to the schema, declaring the arguments it accepts, the return type, and which function handles it. Put this in the amplify/data/resource.ts file like this:
// amplify/data/resource.ts
import { type ClientSchema, a, defineData, defineFunction } from "@aws-amplify/backend";

export const CHAT_MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0";

export const generateChatResponseFunction = defineFunction({
  entry: "./generateChatResponse.ts",
  environment: {
    CHAT_MODEL_ID,
  },
  timeoutSeconds: 180,
  runtime: 20,
});

const schema = a.schema({
  generateChatResponse: a
    .query()
    .arguments({ conversation: a.json().required(), systemPrompt: a.string().required() })
    .returns(a.string())
    .authorization((allow) => [allow.publicApiKey()])
    .handler(a.handler.function(generateChatResponseFunction)),
});

export type Schema = ClientSchema<typeof schema>;

export const data = defineData({
  schema,
  authorizationModes: {
    defaultAuthorizationMode: "apiKey",
    apiKeyAuthorizationMode: {
      expiresInDays: 30,
    },
  },
});
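With the schema deployed, the query becomes callable from the frontend through Amplify's generated, typed client. Here's a minimal sketch of that call, which the full ChatComponent later in this post performs; askStoryteller is an illustrative helper, and the relative import path is an assumption that depends on your project layout:

import { generateClient } from "aws-amplify/api";
import type { Schema } from "../../amplify/data/resource"; // adjust for your project layout

const client = generateClient<Schema>();

// Send the alternating user/assistant message array plus a system prompt
async function askStoryteller(
  conversation: { role: string; content: { text: string }[] }[],
  systemPrompt: string
) {
  const { data, errors } = await client.queries.generateChatResponse({
    conversation: JSON.stringify(conversation),
    systemPrompt,
  });
  if (errors) throw new Error(errors[0].message);
  return data; // a JSON string of the assistant's reply message
}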
The query takes two arguments. The first is conversation, which will include the entire conversation from our interactions with our interactive storyteller. The second is systemPrompt, which we'll use to customize our interactions with the model by providing context, instructions, and guidelines on how to respond. In the handler, we create a BedrockRuntimeClient, prepare the input and conversation as a ConverseCommandInput, and then make the call. The conversation argument is passed a JSON string representing the full conversation between the user and the interactive storyteller, which is then parsed and loaded as an object with this structure:
[
  {
    role: "user",
    content: [{ text: firstUserMessage }]
  },
  {
    role: "assistant",
    content: [{ text: firstResponseMessage }]
  },
  {
    role: "user",
    content: [{ text: secondUserMessage }]
  }
]
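In TypeScript terms, each entry is a message with a role and a list of content blocks. A minimal type sketch (the @aws-sdk/client-bedrock-runtime package exports a richer Message type for the same shape):

// Minimal shape of one message in the conversation (illustrative sketch)
type ChatMessage = {
  role: "user" | "assistant";  // the two roles alternate
  content: { text: string }[]; // the Converse API supports other block types too
};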
Create the amplify/data/generateChatResponse.ts file with the code below:
// amplify/data/generateChatResponse.ts
import type { Schema } from "./resource";
import {
  BedrockRuntimeClient,
  ConverseCommand,
  ConverseCommandInput,
} from "@aws-sdk/client-bedrock-runtime";

const client = new BedrockRuntimeClient();

export const handler: Schema["generateChatResponse"]["functionHandler"] = async (
  event
) => {
  const conversation = event.arguments.conversation;

  // System prompt for context
  const systemPrompt = [{ text: event.arguments.systemPrompt }];

  const input = {
    modelId: process.env.CHAT_MODEL_ID,
    system: systemPrompt,
    messages: conversation,
    inferenceConfig: {
      maxTokens: 1000,
      temperature: 0.5,
    },
  } as ConverseCommandInput;

  const command = new ConverseCommand(input);
  const response = await client.send(command);

  const jsonResponse = JSON.stringify(response.output?.message);
  return jsonResponse;
};
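The Converse API returns the assistant's reply under output.message, which is why we stringify just that field before returning it to the client. A typical response looks roughly like this (abridged; the real response also includes token usage and latency metrics):

{
  output: {
    message: {
      role: "assistant",
      content: [{ text: "Once upon a time..." }]
    }
  },
  stopReason: "end_turn"
}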
The maxTokens inference parameter defines the maximum number of tokens the model can generate, and temperature controls how creative it can get (a number between 0 and 1, with values closer to 1 being more creative). Read more about these inference parameters here.
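For example, to let the storyteller ramble a bit longer and take more creative liberties, the handler's input could be tuned like this (illustrative values, not the ones used above):

// A more creative configuration for the same handler (illustrative sketch)
const creativeInput = {
  modelId: process.env.CHAT_MODEL_ID,
  system: systemPrompt,
  messages: conversation,
  inferenceConfig: {
    maxTokens: 2000,  // allow longer story parts before truncation
    temperature: 0.9, // closer to 1 = more varied, creative wording
  },
} as ConverseCommandInput;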
Now let's walk through ChatComponent.tsx and how each part works, and then I'll share the full code (jump to [here] if that's all you're looking for).

In ChatComponent.tsx, we iterate over the conversation between the AI (labeled assistant here) and the user (labeled human). The conversation alternates between the assistant and the human, so we style each a little differently. A TextField component captures the user's message, setting the inputValue via handleInputChange when text is entered and calling setNewUserMessage whenever the Enter key or the Send button is pressed. There are also properties to show an error message if one exists.
<View width="60vw">
  <Flex direction="column" wrap="wrap" justifyContent="space-between">
    {conversation.map((item, i) => item.role === "assistant" ? (
      <Message width="40vw" className="assistant-message" colorTheme="neutral" key={i}>{item.content[0].text}</Message>
    ) : (
      <Message width="40vw" className="human-message" hasIcon={false} colorTheme="info" key={i}>{item.content[0].text}</Message>
    ))}
    {isLoading ? (<Loader />) : (<div></div>)}
    <TextField label="What would you like to chat about?"
      name="prompt"
      value={inputValue}
      onChange={handleInputChange}
      onKeyUp={(event) => {
        if (event.key === 'Enter') {
          setNewUserMessage();
        }
      }}
      labelHidden={true}
      hasError={error !== ""}
      errorMessage={error}
      width="60vw"
      outerEndComponent={<Button onClick={setNewUserMessage}>Send</Button>} />
  </Flex>
</View>
Next is the handleInputChange function, which clears any existing error as soon as the user starts typing and then sets the input value.
const handleInputChange = (e: ChangeEvent<HTMLInputElement>) => {
  setError("");
  setInputValue(e.target.value);
};
When the user submits a message, we call setNewUserMessage to add the new message from the human (using the role user) to the conversation. This follows the same structure we covered earlier, alternating between the user and assistant roles.
const setNewUserMessage = async () => {
  const newUserMessage = { role: "user", content: [{ text: inputValue }] };
  setConversation(prevConversation => [...prevConversation, newUserMessage]);
};
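Note the functional form of setConversation (prevConversation => ...): it appends to the latest state even if React batches updates, which matters because the effect below appends the assistant's reply to the same array.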
Finally, we call the generateChatResponse query to send the conversation to the model through Amazon Bedrock. We use the useEffect hook because we need to wait for setConversation to complete before making the call. We implement fetchChatResponse as an async function and only call it when the conversation's last message has the user role. That check matters because appending the assistant's response back onto the conversation (remember that alternating user-assistant array from earlier?) re-runs the effect, and we only want a new model call for user messages. The full array still goes along with each request, so the model has our entire conversation history as context.
useEffect(() => {
  const fetchChatResponse = async () => {
    setInputValue('');
    setIsLoading(true);

    const { data, errors } = await client.queries.generateChatResponse({
      conversation: JSON.stringify(conversation),
      systemPrompt: systemPrompt
    });

    if (!errors && data) {
      setConversation(prevConversation => [...prevConversation, JSON.parse(data)]);
    } else {
      setError(errors?.[0].message || "An unknown error occurred.");
      console.error("errors", errors);
    }

    setIsLoading(false);
  }

  // only fetch the response if there is a conversation and it ends with a user role message
  if (conversation.length > 0 && conversation[conversation.length - 1].role === "user") {
    fetchChatResponse();
  }
}, [conversation]);
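Putting it all together, here is the full ChatComponent.tsx: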
// ChatComponent.tsx
import { ChangeEvent, useState, useEffect } from "react";
import { Button, Flex, Loader, TextField, View, Message } from "@aws-amplify/ui-react";
import { generateClient } from "aws-amplify/api";
import { Schema } from "../../../amplify/data/resource";
import "./chat.css"

const client = generateClient<Schema>();

export default function ChatComponent({ systemPrompt }: { systemPrompt: string }) {
  const [conversation, setConversation] = useState<{ role: string, content: { text: string }[] }[]>([]);
  const [inputValue, setInputValue] = useState("");
  const [error, setError] = useState("");
  const [isLoading, setIsLoading] = useState(false);

  const handleInputChange = (e: ChangeEvent<HTMLInputElement>) => {
    setError("");
    setInputValue(e.target.value);
  };

  useEffect(() => {
    const fetchChatResponse = async () => {
      setInputValue('');
      setIsLoading(true);

      const { data, errors } = await client.queries.generateChatResponse({
        conversation: JSON.stringify(conversation),
        systemPrompt: systemPrompt
      });

      if (!errors && data) {
        setConversation(prevConversation => [...prevConversation, JSON.parse(data)]);
      } else {
        setError(errors?.[0].message || "An unknown error occurred.");
        console.error("errors", errors);
      }

      setIsLoading(false);
    }

    // only fetch the response if there is a conversation and it ends with a user role message
    if (conversation.length > 0 && conversation[conversation.length - 1].role === "user") {
      fetchChatResponse();
    }
  }, [conversation]);

  const setNewUserMessage = async () => {
    const newUserMessage = { role: "user", content: [{ text: inputValue }] };
    setConversation(prevConversation => [...prevConversation, newUserMessage]);
  };

  return (
    <View width="60vw">
      <Flex direction="column" wrap="wrap" justifyContent="space-between">
        {conversation.map((item, i) => item.role === "assistant" ? (
          <Message width="40vw" className="assistant-message" colorTheme="neutral" key={i}>{item.content[0].text}</Message>
        ) : (
          <Message width="40vw" className="human-message" hasIcon={false} colorTheme="info" key={i}>{item.content[0].text}</Message>
        ))}
        {isLoading ? (<Loader />) : (<div></div>)}
        <TextField label="What would you like to chat about?"
          name="prompt"
          value={inputValue}
          onChange={handleInputChange}
          onKeyUp={(event) => {
            if (event.key === 'Enter') {
              setNewUserMessage();
            }
          }}
          labelHidden={true}
          hasError={error !== ""}
          errorMessage={error}
          width="60vw"
          outerEndComponent={<Button onClick={setNewUserMessage}>Send</Button>} />
      </Flex>
    </View>
  )
}
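The component imports chat.css, a couple of rules that align the assistant's messages to the left and the human's messages to the right: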
/* chat.css */
.assistant-message {
  margin-right: auto;
}
.human-message {
  margin-left: auto;
}
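Because each Message is only 40vw wide inside the 60vw flex column, margin-right: auto hugs the assistant's bubbles against the left edge while margin-left: auto pushes the human's to the right, giving the familiar back-and-forth chat layout.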
Finally, we use the chat component, passing in a systemPrompt, in the App.tsx file:
// App.tsx
<View as="section">
  <Heading
    width='60vw'
    level={2}>
    Let's tell a story together.
  </Heading>
  <Text
    variation="primary"
    as="p"
    lineHeight="1.5em"
    fontWeight={400}
    fontSize="1em"
    fontStyle="normal"
    textDecoration="none"
    width="60vw">
    Start by introducing yourself and saying hi.
  </Text>
  <Chat
    systemPrompt={"Pretend you are an author of a choose your own adventure style story for children age 3-5. Start by asking the user a series of three questions to understand the theme of the adventure. Tell the first of four parts of the story and then ask the user to make a choice about the path they would like to take. Repeat this until all four parts of the story are complete. Each part is 2-4 paragraphs long."}
  />
</View>
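(Chat here refers to the ChatComponent built above; the exact name depends on how you import it in App.tsx.)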
Here, the systemPrompt is set to:

Pretend you are an author of a choose your own adventure style story for children age 3-5. Start by asking the user a series of three questions to understand the theme of the adventure. Tell the first of four parts of the story and then ask the user to make a choice about the path they would like to take. Repeat this until all four parts of the story are complete. Each part is 2-4 paragraphs long.
You can change the CHAT_MODEL_ID in amplify/data/resource.ts to use a different model that works for your use case. You can find supported Amazon Bedrock models here. You can also adjust the systemPrompt and the inference parameters, maxTokens and temperature, to customize the assistant even more. To explore more ways to use Amazon Bedrock, check out these code samples in various languages.