
Harness the power of Nova Canvas for creative content generation
Learn how to use Nova Canvas to quickly generate and manipulate creative content to accelerate productivity
Gareth Floodgate
Amazon Employee
Published Jan 14, 2025
Last Modified Jan 17, 2025
The Amazon Nova model family was announced as generally available at re:Invent 2024. It includes:
- Amazon Nova Micro
- Amazon Nova Lite
- Amazon Nova Pro
- Amazon Nova Canvas
- Amazon Nova Reel
This guide will focus on Amazon Nova Canvas, a model that can generate images from text and/or input images, along with other image manipulation features. You will learn how to generate images and how to tune the image style and attributes. I will also discuss some of the constraints for working with Nova Canvas.
"Amazon Nova Canvas is a state-of-the-art image generation model that creates professional-grade images from text or images provided in prompts. Amazon Nova Canvas also provides features that make it easy to edit images using text inputs, and it provides controls for adjusting color scheme and layout. The model comes with built-in controls to support safe and responsible AI use. These include features such as watermarking and content moderation." - source: Amazon Nova creative content generation models
For more information, see the Image Generation section of the Amazon Nova user guide.
At the time of writing, the Nova Canvas model can be invoked using the Amazon Bedrock Runtime API InvokeModel. More information on this API can be found in the InvokeModel API reference documentation. This is a synchronous operation.
Please Note: For larger image generation tasks you may need to increase the default timeout value of your client, whether you are making direct API calls or using an AWS SDK. The sample code shown throughout this guide demonstrates how the timeout value can be modified in the boto3 SDK; please consult the relevant SDK documentation for the equivalent in other language SDKs.
Nova Canvas is not just a text-to-image model; it provides a series of features that can be used for different use cases:
- Text-to-image (T2I) generation
- Inpainting
- Outpainting
- Image Variation
- Image Conditioning
- Subject Consistency
- Color Guided Content
- Background Removal
- Content Provenance
Useful information on how to use these features can be found in the Amazon Nova user guide.
This guide will initially focus on text-to-image generation.
IMPORTANT: Amazon Nova Canvas provides significant flexibility with respect to output image generation. You must, however, ensure that the dimensions you specify adhere to the following constraints (a small validation helper is sketched after the list):
- Each side must be between 320-4096 pixels, inclusive.
- Each side must be evenly divisible by 16.
- The aspect ratio must be between 1:4 and 4:1. That is, one side can't be more than 4 times longer than the other side.
- The total pixel count must be less than 4,194,304.
Source: Supported Image Resolutions in the Amazon Nova user guide documentation.
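As an illustration only (this helper is not part of any SDK), you could validate the requested dimensions before calling the model like this:

```python
# Illustrative helper that checks the Nova Canvas output dimension rules
def validate_dimensions(width: int, height: int) -> None:
    for side in (width, height):
        if not 320 <= side <= 4096:
            raise ValueError("Each side must be between 320 and 4096 pixels, inclusive")
        if side % 16 != 0:
            raise ValueError("Each side must be evenly divisible by 16")
    if max(width, height) / min(width, height) > 4:
        raise ValueError("The aspect ratio must be between 1:4 and 4:1")
    if width * height >= 4_194_304:
        raise ValueError("The total pixel count must be less than 4,194,304")

validate_dimensions(1280, 720)  # passes all of the checks above
```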
IMPORTANT: To use this model you will need to:
- Authorise access to the Amazon Nova Canvas model via the Amazon Bedrock console
- Have an active session with attached IAM permission to call bedrock:InvokeModel on the Amazon Nova Canvas model amazon.nova-canvas-v1:0 in us-east-1 (the available region at the time of writing)
Imagine you work for a creative agency. You have been asked to put together a draft story for a client meeting to illustrate the client's concept. Their brief states: "We want a story about bears in the woods".
You have only hours to put together an example to show to the client, and certainly do not have time to illustrate a full page of content.
This is where you can use the power of Nova Canvas!
The demonstration code is written in Python 3.10, with the following requirements.txt:
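(The original file contents are not reproduced here; as a minimal assumption, the snippets in this guide only require boto3, and the version pin is illustrative.)

```text
boto3>=1.35.0
```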
You can either copy/paste these code blocks into your favourite IDE, or if you wish you can run them in a SageMaker Jupyter Notebook. If you choose the notebook option, you can create a cell at the top of your notebook as follows to install the packages:
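(One way to do this; the original cell contents are not shown.)

```python
# Notebook cell: install the packages listed in requirements.txt
%pip install -r requirements.txt
```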
To generate an image, you can use the text-to-image operation in Nova Canvas in combination with the Amazon Bedrock InvokeModel API. The code block below shows how you can invoke this model:
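(The following is a minimal sketch using boto3; the prompt, image dimensions, and timeout value are illustrative.)

```python
import base64
import json

import boto3
from botocore.config import Config

# Increase the read timeout for larger image generation tasks
client = boto3.client(
    "bedrock-runtime",
    region_name="us-east-1",
    config=Config(read_timeout=300),
)

request_body = {
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {
        "text": "Two brown bears walking through a sunlit forest, photorealistic",
    },
    "imageGenerationConfig": {
        "numberOfImages": 1,
        "width": 1280,
        "height": 720,
        "cfgScale": 6.5,
        "seed": 0,
        "quality": "standard",
    },
}

response = client.invoke_model(
    modelId="amazon.nova-canvas-v1:0",
    body=json.dumps(request_body),
    accept="application/json",
    contentType="application/json",
)

# The response contains a list of base64-encoded images; decode the first one
response_body = json.loads(response["body"].read())
image_bytes = base64.b64decode(response_body["images"][0])
```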
At the end of this invocation, the generated image data will be in the image_bytes variable. You can now save this image using the following code snippet:
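```python
# Write the decoded image bytes to a PNG file (the file name is illustrative)
with open("bears.png", "wb") as f:
    f.write(image_bytes)
```

You can now open the generated image; it will look similar to the image below.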

If the generated image is not as you desire, then there are several mechanisms where you can modify the output.
You can ask Nova Canvas to output multiple images for the given settings. This allows you to choose 'the most appropriate' output. This can be achieved by modifying the request body as follows:
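(Continuing from the request_body built in the earlier snippet; the value shown is illustrative.)

```python
# Ask Nova Canvas for three candidate images in a single call
request_body["imageGenerationConfig"]["numberOfImages"] = 3
```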
Important: The maximum number of images is 5
You can then access the multiple generated images using the response as follows:
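(This continues from the earlier snippet; the file names are illustrative.)

```python
# After re-running the invoke_model call with the updated request body,
# each entry in the "images" list is a separate base64-encoded image
response_body = json.loads(response["body"].read())
for index, image_b64 in enumerate(response_body["images"]):
    with open(f"bears_{index}.png", "wb") as f:
        f.write(base64.b64decode(image_b64))
```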
You can see the three images generated below
(Please Note: The images have been placed side by side using an image editor for the purposes of showing in this blog)

If the generated image does not capture the gist of what you expect, or the output needs tweaking (maybe a missing feature), you can modify the prompt. This can be achieved by modifying the prompt text as follows:
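(Continuing from the earlier request_body; the revised wording is illustrative.)

```python
# Reword the prompt to add the detail that was missing from the first attempt
request_body["textToImageParams"]["text"] = (
    "Two brown bears walking through a sunlit forest towards a wooden log cabin, photorealistic"
)
```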
You can see the image generated below

If the prompt generated a good image, but you want to try some variations of the image, you can change the seed value. This can be achieved by modifying the request body as follows:
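(Continuing from the earlier request_body; the seed value is illustrative.)

```python
# Keep the prompt the same but change the seed to explore a different variation
request_body["imageGenerationConfig"]["seed"] = 42
```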
You can see the image generated below

If you feel that the model is not following the prompt instructions well, you can modify the cfgScale parameter to change how closely the model follows the prompt vs how much randomness is introduced. The lower the value, the less closely the prompt will be followed.
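(Continuing from the earlier request_body; the value shown is illustrative.)

```python
# Raise cfgScale so the output follows the prompt more closely
request_body["imageGenerationConfig"]["cfgScale"] = 8.0
```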
Important: The minimum cfgScale value is 1.1; the maximum cfgScale value is 10.0.
You can see the image generated below.

If you want to remove specific features from an image, you must use the negativeText property. Note that the words used are not themselves negative; you are telling the model what to remove.
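(Continuing from the earlier request_body; the value shown is illustrative.)

```python
# Describe the features to leave out as plain nouns - here we remove the windows
request_body["textToImageParams"]["negativeText"] = "windows"
```

You can see the image generated below. Note that the windows are gone.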

You now understand the mechanisms for tweaking the output from Nova Canvas. For more information on prompt optimisation and output tweaking, see the Amazon Nova documentation.
Now that you know how to manage the image generation, you decide to scale this up to generate a storyboard page for you, ready to show to the client.
You decide on a story, cell layout, and put together some Python code to generate the storyboard as a single page PDF.
Update requirements.txt:
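(Again, the original contents are not shown; as an assumption, the storyboard script below additionally uses fpdf2 for the PDF layout, and the version pins are illustrative.)

```text
boto3>=1.35.0
fpdf2>=2.7.0
```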
You can either copy/paste these code blocks into your favourite IDE, or if you wish you can run them in a SageMaker Jupyter Notebook. If you choose the notebook option, you can create a cell at the top of your notebook as follows to install the packages:
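(As before, one way to do this; the original cell contents are not shown.)

```python
# Notebook cell: install the updated package list
%pip install -r requirements.txt
```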
The code snippet for this is shown below:
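(The original snippet is not reproduced here, so the sketch below shows one way it could be put together. It assumes fpdf2 for the single-page PDF layout and a hypothetical six-cell story, and reuses the Nova Canvas invocation pattern from earlier; the prompts, grid dimensions, and file names are all illustrative.)

```python
import base64
import json

import boto3
from botocore.config import Config
from fpdf import FPDF  # assumption: fpdf2 is used for the single-page PDF layout

MODEL_ID = "amazon.nova-canvas-v1:0"

# Illustrative storyboard: one prompt per cell of the page
STORY_CELLS = [
    "Two brown bears waking up in a den at the edge of a forest at dawn",
    "The bears walking along a woodland trail dappled with sunlight",
    "The bears discovering a beehive in a hollow oak tree",
    "The bears sharing honey beside a stream",
    "The bears watching the sunset from a hilltop clearing",
    "The bears asleep under the stars in the forest",
]

client = boto3.client(
    "bedrock-runtime",
    region_name="us-east-1",
    config=Config(read_timeout=300),  # allow time for larger generation tasks
)


def generate_image(prompt: str, index: int) -> str:
    """Generate one storyboard cell with Nova Canvas and return the saved file name."""
    request_body = {
        "taskType": "TEXT_IMAGE",
        "textToImageParams": {"text": prompt},
        "imageGenerationConfig": {
            "numberOfImages": 1,
            "width": 1280,
            "height": 720,
            "cfgScale": 6.5,
            "seed": index,  # use the cell index as the seed so runs are reproducible
        },
    }
    response = client.invoke_model(
        modelId=MODEL_ID,
        body=json.dumps(request_body),
        accept="application/json",
        contentType="application/json",
    )
    response_body = json.loads(response["body"].read())
    file_name = f"cell_{index}.png"
    with open(file_name, "wb") as f:
        f.write(base64.b64decode(response_body["images"][0]))
    return file_name


def build_storyboard(image_files: list[str], output: str = "storyboard.pdf") -> None:
    """Lay the generated images out in a 2 x 3 grid on a single landscape A4 page."""
    pdf = FPDF(orientation="L", format="A4")
    pdf.add_page()
    cell_w, cell_h = 90, 55
    for i, file_name in enumerate(image_files):
        x = 10 + (i % 3) * (cell_w + 5)
        y = 20 + (i // 3) * (cell_h + 5)
        pdf.image(file_name, x=x, y=y, w=cell_w, h=cell_h)
    pdf.output(output)


if __name__ == "__main__":
    files = [generate_image(prompt, index) for index, prompt in enumerate(STORY_CELLS)]
    build_storyboard(files)
```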
You run the code, and it generates you a PDF ready to show. The output will look something like this:

You show the finished page to your manager. He is really impressed that you were able to produce an output that shows the concept at such a fast pace. He does, however, give you some feedback that the client prefers to see cartoon-style mock-ups in meetings.
Fortunately, Nova Canvas allows a user to prompt with differing image styles.
You modify your code, adding a method that changes the prompting so that you can switch between different styles. The modified code block is shown below:
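(One simple way to do this, offered as an assumption rather than the author's exact method, is to prepend a style description to each cell prompt; the style names and wording below are illustrative.)

```python
# Assumption: styles are applied by prefixing each cell prompt with a style description
STYLE_PREFIXES = {
    "photorealistic": "A photorealistic, professionally shot photograph of",
    "cartoon": "A colourful, family-friendly cartoon illustration of",
}


def styled_prompt(prompt: str, style: str) -> str:
    """Return the cell prompt with the requested style description prepended."""
    return f"{STYLE_PREFIXES[style]} {prompt}"


# In the storyboard script, pass the styled prompt to generate_image instead of the raw prompt:
# files = [generate_image(styled_prompt(p, "cartoon"), i) for i, p in enumerate(STORY_CELLS)]
```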
You then change the style to cartoon and re-run the generation. You now have the same output, but conceptualised as a cartoon.

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.