
Telling bedtime stories with generative AI

Learn how to build an interactive storyteller with AWS Amplify, Amazon Bedrock, and the Converse API

Jenna Pederson
Amazon Employee
Published Aug 14, 2024
Last Modified Aug 15, 2024
I read to my kid a lot. To be clear, I read one or two books a lot to my kid. To the point that I have the words to both books memorized. What if we could create our own story, together, to tell a new story each night with new characters? In this article, I will show how we can use an AI chat integration and customize it to act as an interactive bedtime storyteller. With this approach, you could build a personalized meal planner, a travel guide, or an assistant to help you ideate and refine your social media copy.
Let's first check out the high-level solution.

Our solution

I'm working within a React app built and deployed with AWS Amplify. If you'd like to start here, you can spin up the sample app using this quickstart guide and then incorporate the code I share.
We'll use Amazon Bedrock as a data source and make a custom query to generate a story using the new Converse API. The Converse API lets us handle multi-turn conversations and maintain a conversation history. We'll go a step further, using the systemPrompt feature to provide context, instructions, and guidelines on how the model should respond.
The example code below is TypeScript, but you can also find sample code for other languages here.
Let's get started building!

Add a data source

Our first task is to add Amazon Bedrock as a data source to our app. This approach gives an AWS Lambda function (we'll define that in the next step) permission to call a specific foundation model in Amazon Bedrock. We're limiting the actions it can take to bedrock:InvokeModel and to the specific model resource we specify.
To do this, let's add the following to the amplify/backend.ts file:
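A minimal sketch of what that addition could look like, assuming the Amplify (Gen 2) backend pattern and that `generateChatResponseFunction` and `CHAT_MODEL_ID` are exported from `amplify/data/resource.ts` (the exact ARN pattern and resource names may differ in your app):

```typescript
// amplify/backend.ts
import { defineBackend } from "@aws-amplify/backend";
import { Effect, PolicyStatement } from "aws-cdk-lib/aws-iam";
import { auth } from "./auth/resource";
import { data, generateChatResponseFunction, CHAT_MODEL_ID } from "./data/resource";

const backend = defineBackend({
  auth,
  data,
  generateChatResponseFunction,
});

// Grant the Lambda function permission to invoke only the one
// foundation model we specify, and only the InvokeModel action.
backend.generateChatResponseFunction.resources.lambda.addToRolePolicy(
  new PolicyStatement({
    effect: Effect.ALLOW,
    actions: ["bedrock:InvokeModel"],
    resources: [`arn:aws:bedrock:*::foundation-model/${CHAT_MODEL_ID}`],
  })
);
```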
Next, we need to define the custom query and connect it to the handler function that will make the call to Amazon Bedrock.

Define a custom query

In this step, we first define the generateChatResponseFunction using defineFunction and configure it with the model ID, timeout, and which Node.js runtime to use. The entry key specifies the file containing the handler with the core logic.
Now, we define a custom query generateChatResponse and add it to the schema. Here, we define the arguments allowed, the return type, and which function to use.
To do this, update the amplify/data/resource.ts file like this:
In the code above, the first argument is a JSON string, conversation. This will include the entire conversation from our interactions with our interactive storyteller.
The second argument is a string, systemPrompt. We'll use this to customize our interactions with the model, by providing context, instructions, and guidelines on how to respond.
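Putting those pieces together, the resource file could look roughly like this (a sketch, not the article's exact code: the model ID, timeout, and authorization mode are assumptions you should adjust for your app):

```typescript
// amplify/data/resource.ts
import { type ClientSchema, a, defineData, defineFunction } from "@aws-amplify/backend";

// Assumed model ID; swap in any supported Amazon Bedrock model.
export const CHAT_MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0";

export const generateChatResponseFunction = defineFunction({
  entry: "./generateChatResponse.ts", // handler file with the core logic
  environment: { CHAT_MODEL_ID },
  timeoutSeconds: 60,
  runtime: 20, // Node.js 20
});

const schema = a.schema({
  generateChatResponse: a
    .query()
    .arguments({
      conversation: a.string().required(), // full conversation as a JSON string
      systemPrompt: a.string().required(), // context/instructions for the model
    })
    .returns(a.string())
    .handler(a.handler.function(generateChatResponseFunction))
    .authorization((allow) => [allow.authenticated()]),
});

export type Schema = ClientSchema<typeof schema>;

export const data = defineData({
  schema,
  authorizationModes: { defaultAuthorizationMode: "userPool" },
});
```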

Create the handler code

Next, we'll create the handler code with our core logic that makes the call to Amazon Bedrock. Here, we'll initialize the BedrockRuntimeClient, prepare the input and conversation as a ConverseCommandInput, and then make the call.
When this handler code is called, the conversation argument will contain a JSON string representing the full conversation between the user and the interactive storyteller. This is then parsed and loaded as an object with this structure:
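The structure follows the Converse API's message format: an array of messages that alternates between the user and assistant roles, each with a content array of text blocks. A small sketch (the sample messages are illustrative):

```typescript
// Shape of the parsed conversation (Converse API message format),
// alternating between "user" and "assistant" roles.
type Message = {
  role: "user" | "assistant";
  content: { text: string }[];
};

// The component sends the conversation as a JSON string...
const conversationJson = JSON.stringify([
  { role: "user", content: [{ text: "Tell me a bedtime story." }] },
  { role: "assistant", content: [{ text: "What should our story be about?" }] },
  { role: "user", content: [{ text: "A brave turtle." }] },
]);

// ...and the handler parses it back into the message array.
const conversation: Message[] = JSON.parse(conversationJson);
```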
This allows us to maintain a conversation history and handle multi-turn conversations.
To implement this, create the amplify/data/generateChatResponse.ts file with the code below:
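A sketch of what that handler could look like, assuming the `@aws-sdk/client-bedrock-runtime` package and the `CHAT_MODEL_ID` environment variable set in the function definition (error handling is kept minimal here):

```typescript
// amplify/data/generateChatResponse.ts
import {
  BedrockRuntimeClient,
  ConverseCommand,
  type ConverseCommandInput,
  type Message,
} from "@aws-sdk/client-bedrock-runtime";
import type { Schema } from "./resource";

const client = new BedrockRuntimeClient();

export const handler: Schema["generateChatResponse"]["functionHandler"] = async (
  event
) => {
  // The conversation arrives as a JSON string of alternating
  // user/assistant messages and is parsed back into an array.
  const conversation = JSON.parse(event.arguments.conversation) as Message[];

  const input: ConverseCommandInput = {
    modelId: process.env.CHAT_MODEL_ID,
    messages: conversation,
    system: [{ text: event.arguments.systemPrompt }],
    inferenceConfig: {
      maxTokens: 1000, // cap on generated tokens
      temperature: 0.5, // 0 to 1; closer to 1 is more creative
    },
  };

  const response = await client.send(new ConverseCommand(input));

  // Return just the assistant's reply text.
  return response.output?.message?.content?.[0]?.text ?? "";
};
```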
The maxTokens inference parameter defines the maximum number of tokens the model can generate, and temperature controls how creative it can get (a number between 0 and 1, with values closer to 1 being more creative). Read more about these inference parameters here.

Create the component to facilitate the conversation

Our last big step is creating the component that makes the function call via the custom query. This component will facilitate the conversation between the user and the interactive storyteller. Here's what our component will look like:
React component showing conversation between assistant and user with a text field for user message.
Below, I'll cover the major parts of ChatComponent.tsx and how each part works, and then I'll share the full code (jump ahead to the full code if that's all you're looking for).
In the JSX portion of ChatComponent.tsx, we iterate over the conversation between the AI (labeled assistant here) and the user (labeled human). The conversation alternates between the assistant and human, so we style each a little differently.
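The iteration could look something like this sketch, where the class names used for styling are assumptions for illustration:

```tsx
{conversation.map((message, index) => (
  <div
    key={index}
    // Style assistant and user messages differently.
    className={message.role === "assistant" ? "assistant-message" : "user-message"}
  >
    <p>{message.content[0].text}</p>
  </div>
))}
```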
We add a TextField component to capture the user's message, setting the inputValue via handleInputChange when text is entered and calling setNewUserMessage whenever the Enter key is pressed or the Send button is pressed. There are also properties to show an error message if one exists.
We'll use a simple handleInputChange function that clears the error if there was one whenever the user starts typing and then sets the input value.
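As a sketch, assuming `error`, `setError`, and `setInputValue` are component state from `useState`:

```tsx
const handleInputChange = (event: React.ChangeEvent<HTMLInputElement>) => {
  // Clear any previous error once the user starts typing again.
  if (error) setError("");
  setInputValue(event.target.value);
};
```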
Next, we'll implement setNewUserMessage to add the new message from the human (using the role user) to the conversation. This is in the same structure as we covered earlier, alternating between user and assistant roles.
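The core of that update can be sketched as a small pure helper that appends the message in the same alternating structure (the helper name is mine, for illustration):

```typescript
type Message = { role: "user" | "assistant"; content: { text: string }[] };

// Returns a new conversation with the human's message appended
// under the "user" role, without mutating the original array.
function appendUserMessage(conversation: Message[], text: string): Message[] {
  return [...conversation, { role: "user", content: [{ text }] }];
}
```

Inside the component, setNewUserMessage would call something like `setConversation((prev) => appendUserMessage(prev, inputValue))` and then clear the input value.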
Now it's time to make the call to the generateChatResponse query to send the message to the model with Amazon Bedrock. We use the useEffect hook because we need to wait for setConversation to complete before making the call. We implement fetchChatResponse as an async function and only call it when the last message in the conversation is from the user. We do this check because only a new user message should trigger a request. The assistant's responses are also pushed back onto the conversation (remember that alternating user-assistant array from earlier?) so the model has our entire conversation history as context, but they shouldn't trigger another call.
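A sketch of that effect, assuming `client` was created with `generateClient<Schema>()` from `aws-amplify/data` and that the component receives the system prompt via props (the error message text and state names are illustrative):

```tsx
useEffect(() => {
  const fetchChatResponse = async () => {
    setIsLoading(true);
    // Send the full conversation as a JSON string, plus the system prompt.
    const { data, errors } = await client.queries.generateChatResponse({
      conversation: JSON.stringify(conversation),
      systemPrompt: props.systemPrompt,
    });
    setIsLoading(false);

    if (errors || !data) {
      setError("Something went wrong. Please try again.");
      return;
    }

    // Push the assistant's reply back onto the conversation history.
    setConversation((prev) => [
      ...prev,
      { role: "assistant", content: [{ text: data }] },
    ]);
  };

  // Only call the model when the latest message is from the user.
  if (
    conversation.length > 0 &&
    conversation[conversation.length - 1].role === "user"
  ) {
    fetchChatResponse();
  }
}, [conversation]);
```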

Full code for the component

Here is the full code for the component:
And to style the human-assistant messages:

Putting it all together

Now we have our backend code - the custom query with a handler backed by a Lambda function that makes the call to Amazon Bedrock. And our chat component - a React component that displays the full chat conversation between human and AI assistant with a text field to collect the human's next message.
We'll add the following to our App.tsx file:
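A minimal sketch, assuming the component lives in `./ChatComponent` and accepts the system prompt as a prop (the heading text and constant name are illustrative; the prompt itself is the one shown just below):

```tsx
// App.tsx
import ChatComponent from "./ChatComponent";

const SYSTEM_PROMPT =
  "Pretend you are an author of a choose your own adventure style story " +
  "for children age 3-5. Start by asking the user a series of three questions " +
  "to understand the theme of the adventure. Tell the first of four parts of " +
  "the story and then ask the user to make a choice about the path they would " +
  "like to take. Repeat this until all four parts of the story are complete. " +
  "Each part is 2-4 paragraphs long.";

function App() {
  return (
    <main>
      <h1>Bedtime Storyteller</h1>
      <ChatComponent systemPrompt={SYSTEM_PROMPT} />
    </main>
  );
}

export default App;
```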
To make this customized to our use case -- interactive bedtime storyteller -- we set the systemPrompt to:
Pretend you are an author of a choose your own adventure style story for children age 3-5. Start by asking the user a series of three questions to understand the theme of the adventure. Tell the first of four parts of the story and then ask the user to make a choice about the path they would like to take. Repeat this until all four parts of the story are complete. Each part is 2-4 paragraphs long.
You can customize this to your own use case so that your assistant has context, instructions, and guidelines on how to respond to your user.
Additionally, you may also want to update the CHAT_MODEL_ID in amplify/data/resource.ts to use a different model that works for your use case. You can find supported Amazon Bedrock models here.

Wrapping up

And that's it! In this article, we used the interactive bedtime storyteller use case to show how to integrate the Amazon Bedrock Converse API into your Amplify (Gen 2) app, how to send the full conversation to maintain a history and handle multi-turn conversations, and how to use the systemPrompt and inference parameters, maxTokens and temperature, to customize the assistant even more. To explore more ways to use Amazon Bedrock, check out these code samples in various languages.
I hope this has been helpful. If you'd like more like this, smash that like button 👍, share this with your friends 👯, or drop a comment below 💬.
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
