
AWS Bedrock From Scratch: Build & Integrate AI Models Today!
By the end of this article, you will have Amazon Bedrock up and running, ready to integrate foundation models into your application.
Published Jan 22, 2025
For the last few weeks, I have been playing with Amazon Bedrock to create a game for the AWS Game Challenge. You can learn more about the game I created here. During this challenge, I had to learn how to integrate Amazon Bedrock into my code, and here I will share how to do it.

First things first: you need to request access to the models you want to use. On the AWS Console, go to the Model Access section and click Enable Specific Models. You could also enable all models, but let's enable them only as they are needed. Don't forget that model access is regional; you need to enable models per region, and some models might not be available in all regions.
I enabled two models: Llama 3 8B Instruct and Llama 3.2 3B Instruct. There is a difference between them, which we will get to shortly. Let's start with Llama 3 8B Instruct, which is quite straightforward to use.

It might take a few seconds for you to see "Access Granted" on the models you chose.
Here, we are going to use Python to write a simple script that calls the Llama 3 8B Instruct model through Amazon Bedrock. You need the model ID for that: on the Amazon Bedrock console, go to the Model Catalog section, find the model you want to use, and copy its model ID from there.
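If you prefer the SDK to clicking around the console, you can also list the available model IDs programmatically. A minimal sketch, assuming boto3 is installed and your AWS credentials are configured (the byProvider filter is optional):

```python
import boto3

# The "bedrock" control-plane client (not "bedrock-runtime") lists models.
bedrock = boto3.client("bedrock", region_name="us-east-1")

response = bedrock.list_foundation_models(byProvider="Meta")
for model in response["modelSummaries"]:
    print(model["modelId"])
```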

Now it's time to code. Let's create a virtual environment and install the required packages:
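On a Unix-like shell, that setup could look like this (boto3 is the only package we need):

```bash
python3 -m venv .venv
source .venv/bin/activate
pip install boto3
```

Next, a minimal sketch of the invocation code. The request and response fields (prompt, max_gen_len, generation) follow the Meta Llama format on Bedrock; the region, prompt text, and inference parameters are assumptions you should adjust to your setup:

```python
import json
import boto3

# Bedrock runtime client; use a region where you enabled the model.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

model_id = "meta.llama3-8b-instruct-v1:0"

# Llama 3 instruct models expect the prompt wrapped in their chat template.
prompt = "What is Amazon Bedrock?"  # change this as you like
formatted_prompt = (
    "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n"
    f"{prompt}"
    "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n"
)

# Request body in the Meta Llama format expected by Bedrock.
body = json.dumps({
    "prompt": formatted_prompt,
    "max_gen_len": 512,
    "temperature": 0.5,
    "top_p": 0.9,
})

try:
    response = client.invoke_model(modelId=model_id, body=body)
    result = json.loads(response["body"].read())
    print(result["generation"])
except Exception as e:
    print(f"Error: {e}")
```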
Great job! Run the script and you should see the model's response printed out. You can change the prompt as you like.
Now let's use the other model, Llama 3.2 3B Instruct. You might notice that this model is marked Cross-region inference in the AWS model access section. This feature increases throughput and improves resiliency by routing your requests across multiple AWS Regions.
If we simply change the variable to model_id = "meta.llama3-2-3b-instruct-v1:0", we encounter the following error:
```
Error: An error occurred (ValidationException) when calling the InvokeModel operation: Invocation of model ID meta.llama3-2-3b-instruct-v1:0 with on-demand throughput isn't supported. Retry your request with the ID or ARN of an inference profile that contains this model.
```
Go to the Cross-region Inference section on the Amazon Bedrock console and search for your model among the system-defined inference profiles; in our case, it's US Meta Llama 3.2 3B Instruct. Copy the Inference Profile ARN, replace the account ID with yours, and place it in the model_id variable:
```python
model_id = "arn:aws:bedrock:us-east-1:12345678912:inference-profile/us.meta.llama3-2-3b-instruct-v1:0"
```
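If you'd rather not copy the ARN from the console by hand, recent boto3 versions can also list the system-defined inference profiles. A minimal sketch, assuming an up-to-date boto3:

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# List the system-defined cross-region inference profiles and their ARNs.
response = bedrock.list_inference_profiles(typeEquals="SYSTEM_DEFINED")
for profile in response["inferenceProfileSummaries"]:
    print(profile["inferenceProfileName"], "->", profile["inferenceProfileArn"])
```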
Now you have working code. Well done.