Family of Titan Text Models - CLI Demo

This is a continuation of AWS Bedrock - Learning Series - Blog 1: https://community.aws/content/2ZAHJMCN4Ffi6W2DPJFIgq8MHkX

Published Dec 13, 2023

Model Name: amazon.titan-embed-text-v1

About Model:

The latest Titan Embeddings G1 – Text v1.2 processes up to 8k tokens and generates a 1536-dimensional vector. It operates in 25+ languages and excels at text retrieval, semantic similarity, and clustering. Although it supports long documents, it's advisable to segment them into logical units, such as paragraphs or sections, for optimal retrieval results.

AWS CLI:

aws bedrock-runtime invoke-model --model-id amazon.titan-embed-text-v1 \
--body "{\"inputText\":\"Write a Article About AWS Cloudwatch for Linkedin\"}" \
--cli-binary-format raw-in-base64-out --region us-east-1 embedding-invoke-model-output.txt

AWS CLI Output: Number of Tokens

$ cat embedding-invoke-model-output.txt | jq -r '.inputTextTokenCount'

9

AWS CLI Output: Embedding

$ cat embedding-invoke-model-output.txt | jq -r '.embedding' | head

[
0.6015625,
-0.625,
0.28125,
-0.006958008,
-0.47460938,
0.26757812,
-0.38476562,
-0.0005950928,
-0.55859375,
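Once you have embeddings, the usual way to compare them is cosine similarity. Here is a minimal sketch in plain Python, using only the first few dimensions shown above as toy vectors (real Titan embeddings are 1536-dimensional, and the second vector is a hypothetical near-identical one, not actual model output):

```python
import math

def cosine_similarity(a, b):
    # Dot product of the two vectors divided by the product of their norms
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# First few dimensions of the embedding shown above (illustrative only)
v1 = [0.6015625, -0.625, 0.28125, -0.006958008, -0.47460938]
# A hypothetical near-identical vector, for comparison
v2 = [0.60, -0.62, 0.28, -0.007, -0.475]

print(round(cosine_similarity(v1, v1), 6))  # identical vectors score exactly 1.0
print(round(cosine_similarity(v1, v2), 6))  # near-identical vectors score close to 1.0
```

The same computation works on the full 1536-dimensional vectors returned by the model, which is the basis for semantic search and clustering use cases mentioned above.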

Model Name: amazon.titan-text-lite-v1

About Model:

Amazon Titan Text Lite is a lightweight, efficient model perfect for fine-tuning English-language tasks like summarization and copywriting. It caters to customers seeking a smaller, cost-effective, and highly customizable model. It supports various formats, including text generation, code generation, rich text formatting, and orchestration (agents). Key model attributes encompass fine-tuning, text generation, code generation, and rich text formatting.

AWS CLI:

aws bedrock-runtime invoke-model \
--model-id amazon.titan-text-lite-v1 \
--body "{\"inputText\":\"Write article about benifits of Sagemaker for DataScientist\",\"textGenerationConfig\":{\"maxTokenCount\":256,\"stopSequences\":[],\"temperature\":0,\"topP\":0.9}}" \
--cli-binary-format raw-in-base64-out --region us-east-1 titanlite-invoke-model-output.txt
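Escaping this JSON body by hand inside a shell string quickly gets error-prone. The same body can be assembled with Python's json module; this is a minimal sketch (the helper name build_text_request is illustrative, not part of any AWS SDK), mirroring the parameters of the CLI call above:

```python
import json

def build_text_request(prompt, max_tokens=256, temperature=0.0, top_p=0.9):
    """Build the JSON request body used by the Titan text models above."""
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "stopSequences": [],
            "temperature": temperature,
            "topP": top_p,
        },
    })

body = build_text_request("Write article about benifits of Sagemaker for DataScientist")
print(body)
```

The resulting string can be written to a file and passed to the CLI, or used as the body of an InvokeModel API call.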

Text Completion Details:

$ cat titanlite-invoke-model-output.txt | jq -r '.results[].outputText'
Amazon SageMaker is a fully managed service that makes machine learning (ML) and artificial intelligence (AI) easy for developers. With SageMaker, you can easily deploy, monitor, and manage ML models at scale. SageMaker provides a wide range of built-in data preparation, feature engineering, and model training capabilities, making it easy for data scientists to get started with machine learning. It also offers a flexible runtime environment that can run on multiple machine learning frameworks, including TensorFlow, Scikit-Learn, and PyTorch. Additionally, SageMaker provides built-in support for popular data storage and transfer services, such as Amazon S3 and Amazon Redshift, making it easy to integrate data with machine learning models. Overall, SageMaker is a powerful tool that makes it easy for data scientists to build, train, and deploy machine learning models at scale, making it a great choice for anyone looking to get started with machine learning.
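If you prefer not to shell out to jq, the same extraction can be done in Python. Here is a minimal sketch that mirrors jq -r '.results[].outputText'; the inline sample only imitates the shape of the output file above (extract_output_texts is an illustrative helper name):

```python
import json

def extract_output_texts(response_json):
    """Return the outputText of every result, like jq -r '.results[].outputText'."""
    data = json.loads(response_json)
    return [result["outputText"] for result in data.get("results", [])]

# Inline sample imitating the shape of titanlite-invoke-model-output.txt
sample = json.dumps({"results": [{"outputText": "Amazon SageMaker is a fully managed service."}]})
for text in extract_output_texts(sample):
    print(text)
```

In practice you would read the saved output file with open(...).read() and pass its contents to the helper.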

Model Name: amazon.titan-tg1-large

About Model:

Amazon Titan Large is an efficient model well suited for fine-tuning on English-language tasks such as summarization, article creation, marketing copy, and code generation.

AWS CLI:

aws bedrock-runtime invoke-model \
--model-id amazon.titan-tg1-large \
--body "{\"inputText\":\"Provide Python to create S3 Bucket name awsome-s3-greatservice\",\"textGenerationConfig\":{\"maxTokenCount\":256,\"stopSequences\":[],\"temperature\":0,\"topP\":0.9}}" \
--cli-binary-format raw-in-base64-out --region us-east-1 titanlarge-invoke-model-output.txt

Text Completion Details:

$ cat titanlarge-invoke-model-output.txt | jq -r '.results[].outputText'

To create a new S3 bucket in Amazon S3, you can use the following Python code:
```Python
import boto3
# Create a boto3 S3 client
s3 = boto3.client('s3')
# Specify the bucket name
bucket_name = 'awesome-s3-greatservice'
# Create the bucket
try:
    s3.create_bucket(Bucket=bucket_name)
    print(f"Bucket '{bucket_name}' created successfully.")
except Exception as e:
    print(f"Error creating bucket '{bucket_name}': {e}")
```
The 'boto3' library is imported, which provides the necessary AWS SDK for Python. An S3 client is created using 'boto3.client('s3')', which allows us to interact with S3 services. The 'bucket_name' variable is set to the desired name for the S3 bucket. The 'create_bucket()' method is called on the S3 client, passing the 'Bucket' parameter with the value of 'bucket_name'. This method creates a new S3 bucket with the specified name.

Model Name: amazon.titan-text-express-v1

About Model:

Amazon Titan Text Express, with a context length of up to 8,000 tokens, excels in advanced language tasks like open-ended text generation and conversational chat. It's also optimized for Retrieval Augmented Generation (RAG). Initially designed for English, the model offers preview multilingual support for over 100 additional languages.

AWS CLI:

aws bedrock-runtime invoke-model \
--model-id amazon.titan-text-express-v1 \
--body "{\"inputText\":\"Write a Article About AWS Cloudwatch for Linkedin\",\"textGenerationConfig\":{\"maxTokenCount\":256,\"stopSequences\":[],\"temperature\":0,\"topP\":0.9}}" \
--cli-binary-format raw-in-base64-out --region us-east-1 titanexpress-invoke-model-output.txt
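The same invocation can also be made from Python. Below is a minimal, untested sketch using boto3's bedrock-runtime client (the function name invoke_titan_express is illustrative); it mirrors the CLI call above and requires valid AWS credentials to actually run:

```python
import json

def invoke_titan_express(prompt, region="us-east-1"):
    """Invoke amazon.titan-text-express-v1 via boto3, mirroring the CLI call above."""
    import boto3  # imported lazily; requires AWS credentials to actually invoke
    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.invoke_model(
        modelId="amazon.titan-text-express-v1",
        body=json.dumps({
            "inputText": prompt,
            "textGenerationConfig": {
                "maxTokenCount": 256,
                "stopSequences": [],
                "temperature": 0,
                "topP": 0.9,
            },
        }),
    )
    payload = json.loads(response["body"].read())
    return [result["outputText"] for result in payload["results"]]
```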

Text Completion Details:

$ cat titanexpress-invoke-model-output.txt | jq -r '.results[].outputText'
Amazon CloudWatch is a monitoring and observability service provided by Amazon Web Services (AWS). It allows users to monitor their AWS resources, applications, and services in real-time, and provides a wide range of features and capabilities to help users optimize their infrastructure, improve performance, and detect and resolve issues.
One of the key features of CloudWatch is its ability to collect and store metric data. Metric data is information about the performance, behavior, and health of AWS resources and applications. CloudWatch can collect metrics from a wide range of sources, including AWS services such as Amazon EC2, Amazon S3, and Amazon DynamoDB, as well as custom applications and services.
CloudWatch provides a flexible and scalable way to store and analyze metric data. Users can create metric alarms, which trigger notifications when a metric exceeds a specified threshold. This can help users quickly identify and respond to issues before they impact their applications or services. CloudWatch also provides a range of visualization tools, such as graphs, charts, and dashboards, which can help users easily understand and analyze metric data.

Thanks for reading. Looking forward to your feedback!

Stay tuned for below models in the next blog…

  • amazon.titan-image-generator-v1
  • amazon.titan-embed-image-v1
