Family of Titan Text Models - Boto3 Demo

AWS Bedrock Learning Series. Explore sample code for the Amazon Titan model family, including Text Express, Text Lite, Embeddings, Multimodal Embeddings, and the Image Generator.

Published Dec 13, 2023
Last Modified Mar 11, 2024

GitHub Link - Notebook

https://github.com/jayyanar/learning-aws-bedrock/blob/main/blog3-Titan/Bedrock_Titan_Learning.ipynb

Environment Setup

I am using a local VS Code environment with AWS credentials configured.

Check the Python Version

! python --version
Python 3.11.5

Upgrade pip

! pip install --upgrade pip

Install the latest boto3, awscli, and botocore

! pip install --no-build-isolation --force-reinstall \
"boto3>=1.33.6" \
"awscli>=1.31.6" \
"botocore>=1.33.6"

Load the Library

import json
import os
import sys
import boto3
import botocore
bedrock = boto3.client(service_name="bedrock")
bedrock_runtime = boto3.client(service_name="bedrock-runtime")
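For reference, the Bedrock model IDs exercised in the sections below can be collected in one place. This dict is purely a convenience for this post, not part of the original notebook:

```python
# Bedrock model IDs used in the sections that follow.
TITAN_MODELS = {
    "text-express": "amazon.titan-text-express-v1",
    "text-lite": "amazon.titan-text-lite-v1",
    "text-embedding": "amazon.titan-embed-text-v1",
    "multimodal-embedding": "amazon.titan-embed-image-v1",
    "image-generator": "amazon.titan-image-generator-v1",
}

print(TITAN_MODELS["text-express"])  # amazon.titan-text-express-v1
```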

Titan Text Model - Express

Set the Prompt

express_prompt = "write article about AWS Lambda"

Configure the Model Parameters

body = json.dumps({
    "inputText": express_prompt,
    "textGenerationConfig": {
        "maxTokenCount": 128,
        "stopSequences": [],  # Phrases that signal the model to stop generating.
        "temperature": 0,     # Higher values increase randomness; 0 is the most deterministic.
        "topP": 0.9           # Nucleus sampling: sample only from the most probable tokens.
    }
})
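The same textGenerationConfig is reused verbatim for the Lite model later in this post, so a small helper (hypothetical, not in the original notebook) can build the request body for any Titan text model:

```python
import json

def build_titan_body(prompt, max_tokens=128, temperature=0.0, top_p=0.9, stop_sequences=None):
    """Build the JSON request body shared by the Titan text models."""
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "stopSequences": stop_sequences or [],
            "temperature": temperature,
            "topP": top_p,
        }
    })

body = build_titan_body("write article about AWS Lambda")
```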

Invoke the Model

response = bedrock_runtime.invoke_model(
    body=body,
    modelId="amazon.titan-text-express-v1",
    accept="application/json",
    contentType="application/json"
)

Parse the Response

response_body = json.loads(response.get('body').read())
outputText = response_body.get('results')[0].get('outputText')
# outputText[outputText.index('\n')+1:] drops everything up to and including the
# first newline, keeping only the text that follows it.
text = outputText[outputText.index('\n')+1:]
about_lambda = text.strip()
print(about_lambda)
Text completion:
AWS Lambda, an Amazon Web Services (AWS) serverless computing service, enables developers to execute code in response to events without managing infrastructure. It offers high scalability, automatically adjusting to application demand. With a pay-as-you-go model, it's cost-effective for unpredictable workloads. Supporting various languages like Python and Java, Lambda provides flexibility. Integration with AWS services like S3 and DynamoDB streamlines application development. To use AWS Lambda, developers create functions, configure event sources (e.g., CloudWatch, S3), and invoke functions through the AWS Lambda API or CLI, providing a versatile and efficient solution for building serverless applications.
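Besides outputText, each entry in results also carries metadata such as tokenCount and completionReason (per the Bedrock Titan documentation). A small parsing sketch, using a hypothetical sample payload shaped like the response above:

```python
def summarize_titan_response(response_body):
    """Extract the generated text and metadata from a Titan text response."""
    result = response_body["results"][0]
    return {
        "text": result["outputText"].strip(),
        "tokens": result.get("tokenCount"),
        "finish": result.get("completionReason"),
    }

# Hypothetical response body, shaped like the Titan output above.
sample = {
    "inputTextTokenCount": 6,
    "results": [
        {"tokenCount": 128, "outputText": "\nAWS Lambda is ...", "completionReason": "LENGTH"}
    ],
}
print(summarize_titan_response(sample))
```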

Titan Text Model - Lite

Set the Prompt

lite_prompt = "2 difference between AWS DynamoDB and AWS Redis"

Configure the Model Parameters

body = json.dumps({
    "inputText": lite_prompt,
    "textGenerationConfig": {
        "maxTokenCount": 128,
        "stopSequences": [],  # Phrases that signal the model to stop generating.
        "temperature": 0,     # Higher values increase randomness; 0 is the most deterministic.
        "topP": 0.9           # Nucleus sampling: sample only from the most probable tokens.
    }
})

Invoke the Model

response = bedrock_runtime.invoke_model(
    body=body,
    modelId="amazon.titan-text-lite-v1",
    accept="application/json",
    contentType="application/json"
)

Parse the Response

response_body = json.loads(response.get('body').read())
outputText = response_body.get('results')[0].get('outputText')
text = outputText[outputText.index('\n')+1:]
compare_dynamodb_redis = text.strip()
print(compare_dynamodb_redis)
Text completion:
Amazon DynamoDB is a fully managed NoSQL database service in the cloud that offers fast and predictable performance with seamless scalability. It is designed to run high-performance applications at any scale. On the other hand, Amazon Redis is a fully managed in-memory data structure store that provides real-time analytics, caching, and key-value data storage. It is suitable for applications that require fast data retrieval and low latency.

Titan Text Model - Embedding

Set the Prompt

embed_prompt = "AWS re:Invent 2023, our biggest cloud event of the year, in Las Vegas, Nevada, featured keynotes, innovation talks, builder labs, workshops, tech and sustainability demos"

Configure the Model Parameters

body = json.dumps({
    "inputText": embed_prompt
})

Invoke the Model

response = bedrock_runtime.invoke_model(
    body=body,
    modelId="amazon.titan-embed-text-v1",
    accept="application/json",
    contentType="application/json"
)

Parse the Response

response_body = json.loads(response.get("body").read())
embedding_output = response_body.get("embedding")
#This code retrieves the "embedding" vector from the response body and prints its length along with a preview of the first three and last three values, showing a snippet of the embedding vector.
print(f"Embedding vector of {len(embedding_output)} values:\n{embedding_output[0:3]+['...']+embedding_output[-3:]}")
Embedding vector of 1536 values:
[0.40429688, -0.38085938, 0.19726562, '...', 0.2109375, 0.012573242, 0.18847656]
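Text embeddings like these are typically compared with cosine similarity, for example to rank documents in a semantic search. A minimal pure-Python sketch (not part of the original notebook; in practice you would use numpy or a vector database):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
```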

Titan Multimodal Model - Embedding

Download the Image

! wget https://raw.githubusercontent.com/jayyanar/learning-aws-bedrock/main/blog3/tajmahal.jpg

Configure the Model Parameters

import base64

# The image is 1024x1024 - created using Stability XL
with open("tajmahal.jpg", "rb") as image_file:
    tajmahal_image = base64.b64encode(image_file.read()).decode('utf8')

body = json.dumps({
    "inputImage": tajmahal_image
})
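The multimodal model also accepts an inputText field alongside (or instead of) inputImage, plus an optional embeddingConfig.outputEmbeddingLength of 256, 384, or 1024, per the Bedrock documentation. This helper is a sketch, not part of the original notebook:

```python
import json

def build_multimodal_body(text=None, image_b64=None, output_length=1024):
    """Build a request body for amazon.titan-embed-image-v1."""
    payload = {"embeddingConfig": {"outputEmbeddingLength": output_length}}
    if text is not None:
        payload["inputText"] = text
    if image_b64 is not None:
        payload["inputImage"] = image_b64
    return json.dumps(payload)

# Text-only request with a shorter embedding.
body = build_multimodal_body(text="Taj Mahal at sunrise", output_length=384)
```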

Invoke the Model

response = bedrock_runtime.invoke_model(
    body=body,
    modelId="amazon.titan-embed-image-v1",
    accept="application/json",
    contentType="application/json"
)

Parse the Response

response_body = json.loads(response.get("body").read())
embedding_output = response_body.get("embedding")
#This code retrieves the "embedding" vector from the response body and prints its length along with a preview of the first three and last three values, showing a snippet of the embedding vector.
print(f"Embedding vector of {len(embedding_output)} values:\n{embedding_output[0:3]+['...']+embedding_output[-3:]}")
Embedding vector of 1024 values:
[0.016090883, 0.03545449, 0.0026249958, '...', 0.0065908986, 0.0172727, 0.002738632]

Titan Image Generator

Prompt for Image Generation

image_prompt = "Beautiful basketball ground"

Configure the Model Parameters

body = json.dumps({
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {
        "text": image_prompt,       # Required
        # "negativeText": "<text>"  # Optional
    },
    "imageGenerationConfig": {
        "numberOfImages": 1,   # Range: 1 to 5
        "quality": "premium",  # Options: standard or premium
        "height": 1024,        # Supported height list in the docs
        "width": 1024,         # Supported width list in the docs
        "cfgScale": 7.5,       # Range: 1.0 (exclusive) to 10.0
        "seed": 42             # Range: 0 to 2147483646
    }
})

Invoke the Model

response = bedrock_runtime.invoke_model(
    body=body,
    modelId="amazon.titan-image-generator-v1",
    accept="application/json",
    contentType="application/json"
)

Parse and store the Image Output

import base64
from PIL import Image
from io import BytesIO

response_body = json.loads(response.get("body").read())
images = [Image.open(BytesIO(base64.b64decode(base64_image))) for base64_image in response_body.get("images")]

for i, img in enumerate(images):
    img.save(f"basketball_bedrock_{i+1}.jpg")
Thanks for reading blog 3 of the series. Feel free to share your feedback!
 
