
AWS Bedrock - Boto3 Demo - Cohere Model
Cohere offers a range of text generation and representation models designed for diverse business applications. The Command model, with 52 billion parameters, excels at tasks such as chat, text generation, and summarization. A lighter variant, Command Light, with 6 billion parameters, provides a more resource-efficient alternative.
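As a quick orientation, the Cohere model IDs available in your region can be listed with the control-plane client's `list_foundation_models` call. The helper below filters the returned model summaries by provider; the sample data is illustrative (hand-written here, not a live API response), though the two Cohere IDs match the ones used later in this demo.

```python
def cohere_model_ids(model_summaries):
    """Return the model IDs whose provider is Cohere."""
    return [m["modelId"] for m in model_summaries
            if m.get("providerName") == "Cohere"]

# Illustrative summaries, shaped like
# boto3.client("bedrock").list_foundation_models()["modelSummaries"]
sample_summaries = [
    {"modelId": "cohere.command-text-v14", "providerName": "Cohere"},
    {"modelId": "cohere.command-light-text-v14", "providerName": "Cohere"},
    {"modelId": "amazon.titan-text-express-v1", "providerName": "Amazon"},
]
print(cohere_model_ids(sample_summaries))
# → ['cohere.command-text-v14', 'cohere.command-light-text-v14']
```

Against a live account you would pass `bedrock.list_foundation_models()["modelSummaries"]` instead of the sample list.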
! python --version
Python 3.11.5
! pip install --upgrade pip
! pip install --no-build-isolation --force-reinstall \
"boto3>=1.33.6" \
"awscli>=1.31.6" \
"botocore>=1.33.6"
import json

import boto3
import botocore

# "bedrock" is the control-plane client (model listing and management);
# "bedrock-runtime" is the data-plane client used for inference calls.
bedrock = boto3.client(service_name="bedrock")
bedrock_runtime = boto3.client(service_name="bedrock-runtime")
cohere_command_prompt = "Create a story similar to the Marvel Ant-Man story"
body = json.dumps({
    "prompt": cohere_command_prompt,
    "max_tokens": 1024,
    "temperature": 0.2  # Temperature controls randomness; higher values increase diversity, lower values boost predictability.
})
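A small helper can validate the request parameters before serializing them. This is a sketch, not part of the Bedrock API: the 0-5 temperature range below is taken from Cohere's Command documentation and should be treated as an assumption to re-verify against the current model card.

```python
import json

def build_command_body(prompt, max_tokens=1024, temperature=0.2):
    """Serialize a Cohere Command request body, rejecting bad values."""
    if not prompt:
        raise ValueError("prompt must be non-empty")
    if not 0.0 <= temperature <= 5.0:  # assumed Cohere Command range
        raise ValueError("temperature must be between 0 and 5")
    return json.dumps({
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    })

body = build_command_body("Create a story similar to the Marvel Ant-Man story")
print(body)
```

The resulting string can be passed directly as the `body` argument of `invoke_model`.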
response = bedrock_runtime.invoke_model(
    body=body,
    modelId="cohere.command-text-v14",  # Cohere Command on Bedrock
    accept="*/*",
    contentType="application/json"
)
response_body = json.loads(response.get('body').read())
parse_text = response_body['generations'][0]['text']
parse_text
Would you like me to expand on the story or provide a different ending?
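The indexing above assumes the response always carries a non-empty `generations` list. A slightly more defensive parser, shown here against a stubbed payload (the real bytes come from `response['body'].read()`), might look like:

```python
import json

def parse_generation(raw_bytes):
    """Extract the first generated text from a Cohere Command response payload."""
    body = json.loads(raw_bytes)
    generations = body.get("generations") or []
    if not generations:
        raise ValueError("response contained no generations")
    return generations[0].get("text", "").strip()

# Stub payload with the same shape as the streamed response body
fake_payload = json.dumps(
    {"generations": [{"text": " Once, in a quiet lab... "}]}
).encode("utf-8")
print(parse_generation(fake_payload))
# → Once, in a quiet lab...
```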
cohere_command_light_prompt = "List the benefits of Artificial Intelligence"
body = json.dumps({
    "prompt": cohere_command_light_prompt,
    "max_tokens": 128,
    "temperature": 0.2  # Temperature controls randomness; higher values increase diversity, lower values boost predictability.
})
response = bedrock_runtime.invoke_model(
    body=body,
    modelId="cohere.command-light-text-v14",  # Cohere Command Light on Bedrock
    accept="*/*",
    contentType="application/json"
)
response_body = json.loads(response.get('body').read())
parse_text = response_body['generations'][0]['text']
parse_text
embed_prompt = "AWS re:Invent 2023, our biggest cloud event of the year, in Las Vegas, Nevada, featured keynotes, innovation talks, builder labs, workshops, tech and sustainability demos"
corpus = [
    "I love playing football",
    "Football is my favorite sport",
    "I like shooting three-pointers in basketball",  # note: a comma was missing here, which silently concatenated two strings
    "Basketball is an exciting game",
    "I like swimming in the ocean"
]

body = json.dumps({
    "texts": corpus,
    # "search_document" is for embedding a corpus; "search_query" is for
    # embedding the query you later run against that corpus.
    "input_type": "search_document"
})
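Before sending a larger corpus, it can help to guard the batch against service limits. This is a sketch: the 96-text batch limit and the four `input_type` values below are taken from Cohere's Embed documentation and should be treated as assumptions to re-verify against current quotas.

```python
import json

MAX_BATCH_TEXTS = 96  # assumed per-call limit for Cohere Embed
VALID_INPUT_TYPES = {"search_document", "search_query",
                     "classification", "clustering"}

def build_embed_body(texts, input_type="search_document"):
    """Serialize a cohere.embed-english-v3 request body with basic checks."""
    if not texts or len(texts) > MAX_BATCH_TEXTS:
        raise ValueError(f"need 1-{MAX_BATCH_TEXTS} texts, got {len(texts)}")
    if input_type not in VALID_INPUT_TYPES:
        raise ValueError(f"unknown input_type: {input_type}")
    return json.dumps({"texts": texts, "input_type": input_type})

print(build_embed_body(["I love playing football"]))
```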
response = bedrock_runtime.invoke_model(
    body=body,
    modelId="cohere.embed-english-v3",
    accept="application/json",
    contentType="application/json"
)
response_body = json.loads(response.get("body").read())
embedding_output = response_body.get("embeddings")
print(embedding_output)
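With the embeddings in hand, sentence similarity is typically scored with cosine similarity. A stdlib-only sketch on tiny stand-in vectors (the real vectors returned by cohere.embed-english-v3 are much higher-dimensional) shows the idea:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Tiny stand-in vectors; in practice these come from embedding_output above.
football = [0.9, 0.1, 0.0]
sport = [0.8, 0.2, 0.1]
ocean = [0.0, 0.1, 0.9]

print(cosine_similarity(football, sport))  # near 1: related sentences
print(cosine_similarity(football, ocean))  # near 0: unrelated sentences
```

Pairing this with the corpus embeddings lets you rank the corpus sentences against a query embedding, which is the usual retrieval pattern for these representation models.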