Democratising Generative AI - One API Call at a Time with Amazon Bedrock

Learn how to integrate your code with Amazon Bedrock as part of a chatbot reference architecture in this re:Invent 2023 interactive Chalk Talk (BOA314).

Paul Colmer
Amazon Employee
Published Nov 9, 2023
You can subscribe to this Chalk Talk here in the re:Invent 2023 session catalogue:
Have you ever wanted to create a simple chatbot that integrates with the latest generative AI models?
In this 300-level, one-hour re:Invent chalk talk, I will cover the following:
  1. The basics of generative AI, including its definition, typical use cases, and what a foundation model is.
  2. Whiteboard a typical chatbot architecture using AWS Lambda, Amazon API Gateway, Amazon SageMaker, and Amazon Bedrock, including integration with third-party models.
  3. Provide a hands-on demonstration, in Python, of how to invoke the Amazon Bedrock API calls using simple prompt engineering techniques and a few lines of code.
  4. Provide a hands-on demonstration of how to use the Amazon Bedrock playground to experiment with each model, before writing code.
All of the links that I share in the chalk talk will also be shared in this blog post.
A foundation model is one of the basic building blocks of creating an interactive chatbot. It is a general-purpose model pre-trained on a massive amount of data. You can then choose to fine-tune your foundation model with specific information that may be domain-specific or proprietary to your organisation. For example, you could fine-tune a model using data relating to diseases, to help your customers answer their questions about the signs and symptoms of illness.
With Amazon Bedrock there is a range of foundation models that you can choose for your use case, from Claude 2, supplied by Anthropic, through to the Titan models supplied by Amazon.
You have the option to use the fine-tuning feature to train each model on your own data. A range of security features is available, including encryption at rest for your custom models and IAM policies to control who can access your models. At no stage is your prompt data used by AWS or by any third parties. More information on the Amazon Bedrock features is here:
Amazon Bedrock is serverless, which means AWS manages and runs all of the infrastructure for you. AWS also takes care of the model hosting component, and there is a range of model providers to choose from. That means you can focus on writing great code to serve your customers using a simple API call, without the burden of worrying about infrastructure, model hosting, or model training.
You simply create a secure connection to Bedrock, specify the model that you would like to use along with the prompt, and within seconds you'll receive a response back.
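To give you a feel for how little code this takes, here is a minimal sketch using boto3. It assumes the Claude 2 model (anthropic.claude-v2) has been enabled in your account and region; any other Bedrock model ID works the same way, provided you adjust the request body to that provider's schema.

```python
import json
import boto3

# Create a secure connection to the Bedrock runtime
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Claude-style prompt format (assumption: Claude 2 is the chosen model)
prompt = "\n\nHuman: Explain what a foundation model is in two sentences.\n\nAssistant:"

body = json.dumps({
    "prompt": prompt,
    "max_tokens_to_sample": 300,   # cap on the length of the completion
    "temperature": 0.5,            # lower values give more deterministic output
})

response = bedrock_runtime.invoke_model(
    modelId="anthropic.claude-v2",
    contentType="application/json",
    accept="application/json",
    body=body,
)

# The response body is a JSON document containing the model's completion
result = json.loads(response["body"].read())
print(result["completion"])
```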
A secure chatbot architecture places your code behind Amazon API Gateway, keeping your back-end business functions secure. I will sketch this out and explain each component in the talk. For example, API Gateway provides authentication and authorisation controls, mitigates the risk of a DDoS attack, and does not directly expose your code to the outside world. You can find more information on a more complex form of this architecture in the following GitHub article:
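To show where your code sits in that picture, here is a minimal sketch of a Lambda function that could sit behind API Gateway. It assumes a proxy integration that POSTs a JSON body such as {"prompt": "..."} and, again, assumes the Claude 2 model ID; adapt the event parsing and model ID to your own setup.

```python
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

def lambda_handler(event, context):
    # Assumption: API Gateway proxy integration with a JSON request body
    payload = json.loads(event.get("body") or "{}")
    prompt = payload.get("prompt", "")

    body = json.dumps({
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": 300,
    })

    response = bedrock_runtime.invoke_model(
        modelId="anthropic.claude-v2",
        contentType="application/json",
        accept="application/json",
        body=body,
    )
    completion = json.loads(response["body"].read())["completion"]

    # Return an API Gateway proxy-style response
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"response": completion}),
    }
```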
I recommend using a Lambda function to host your code in production. As part of the hands-on demonstration, I will introduce you to LangChain, an open-source library that makes writing Python code easier when working with generative AI models and helps you learn about coding with Amazon Bedrock. The demo is built on the Amazon Bedrock Workshop, running in Python using the SageMaker Studio IDE. The link to this workshop is here:
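As a taste of the LangChain approach, here is a minimal sketch using its Bedrock LLM wrapper. Class and parameter names can differ between LangChain versions, so treat this as illustrative and check the documentation for the version you are running.

```python
import boto3
from langchain.llms import Bedrock  # assumption: LangChain version with the Bedrock wrapper

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Wrap the Bedrock client so LangChain handles the request/response plumbing
llm = Bedrock(
    client=bedrock_runtime,
    model_id="anthropic.claude-v2",
    model_kwargs={"max_tokens_to_sample": 300, "temperature": 0.5},
)

print(llm("Summarise the benefits of a serverless chatbot architecture."))
```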
The two main API calls that you can use to invoke a model are InvokeModel and InvokeModelWithResponseStream. I will explain the usage of these API calls in the talk. Here is a link that provides more information on these and other API calls in Amazon Bedrock:
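The difference between the two is that InvokeModel returns the full completion in one response, whereas InvokeModelWithResponseStream returns it as a stream of chunks, which is useful for a chatbot that displays text as it is generated. Here is a minimal streaming sketch, again assuming the Claude 2 model ID; other models stream a slightly different chunk schema.

```python
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "prompt": "\n\nHuman: Write a haiku about serverless chatbots.\n\nAssistant:",
    "max_tokens_to_sample": 200,
})

response = bedrock_runtime.invoke_model_with_response_stream(
    modelId="anthropic.claude-v2",
    contentType="application/json",
    accept="application/json",
    body=body,
)

# Each event in the stream carries a JSON chunk with a piece of the completion
for event in response["body"]:
    chunk = json.loads(event["chunk"]["bytes"])
    print(chunk.get("completion", ""), end="", flush=True)
```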
At the end of the talk, you will walk away with the knowledge and confidence to start developing your own chatbot apps that integrate with your generative AI model of choice, including the models available in Amazon Bedrock. Take a look at what our customers are doing with Amazon Bedrock:
If you want to learn more about prompt engineering techniques or better understand how some of the foundation model algorithms work, please check out my previous blog post here:
That's it from me. I hope you have enjoyed this blog post, and I look forward to seeing you at re:Invent 2023.
Take care,
Paul Colmer @digitalcolmer
About Paul
Paul is an AWS Senior Technical Trainer based in Brisbane, Australia. His vision is to inspire customers through engaging experiences, empowering them to deliver business outcomes. Paul has a passion for helping people develop and grow through compelling storytelling, sharing his experiences, hands-on demonstrations, digital storyboarding, and comedy.
Please connect with Paul on LinkedIn here:

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
