AWS BedRock - Boto3 Demo - Anthropic's Claude Models

Explore Anthropic's Claude models, purpose-built for conversations, summarization, Q&A, and more, using Boto3 with Bedrock

Published Dec 13, 2023
Last Modified Mar 11, 2024

Previous Blogs in this Learning Series

Blog 1: https://www.dataopslabs.com/p/aws-bedrock-learning-series-blog
Blog 2: https://www.dataopslabs.com/p/family-of-titan-text-models-cli-demo
Blog 3: https://www.dataopslabs.com/p/family-of-titan-text-models-boto3

GitHub Link - Notebook

https://github.com/jayyanar/learning-aws-bedrock/blob/main/blog4-Anthropic-Claude/Bedrock_Anthropic_Claude.ipynb

Environment Setup

I am using a local VS Code environment with AWS credentials configured.

Check the Python Version

! python --version
Python 3.11.5
Upgrade pip

! pip install --upgrade pip

Install the latest boto3, awscli, and botocore

! pip install --no-build-isolation --force-reinstall \
"boto3>=1.33.6" \
"awscli>=1.31.6" \
"botocore>=1.33.6"

Load the Library

import json
import os
import sys

import boto3
import botocore

# "bedrock" is the control-plane client (model metadata);
# "bedrock-runtime" is the data-plane client used for inference calls.
bedrock = boto3.client(service_name="bedrock")
bedrock_runtime = boto3.client(service_name="bedrock-runtime")
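Before invoking anything, it can be handy to confirm which Claude models your account can see in the current region. This quick check is my own addition (not part of the original notebook) and uses the control-plane client's list_foundation_models call, filtered to Anthropic:

# Optional sanity check: list the Anthropic model IDs available to this account.
for model in bedrock.list_foundation_models(byProvider="Anthropic")["modelSummaries"]:
    print(model["modelId"])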

Anthropic - Claude 2 Model

Anthropic's Claude models excel at conversations, summarization, and Q&A. Claude 2.1 doubles the context window and improves reliability across a range of use cases.

Set the Prompt

claude_instant_prompt = """
Human: Please provide a summary of the following text.
<text>
Guardrails for Amazon Bedrock offer a robust framework for implementing safeguards in generative AI applications, aligning with responsible AI policies and use cases. These guardrails facilitate controlled user-Foundation Model (FM) interactions by filtering out undesirable content and will soon include redacting personally identifiable information (PII), enhancing privacy. Multiple guardrails, each configured for specific use cases, can be created, allowing continuous monitoring for policy violations.
The safeguard features within Guardrails encompass Denied Topics, enabling the definition of undesirable topics; Content Filters, with configurable thresholds for filtering harmful content in categories like hate, insults, sexual, and violence; and upcoming features like Word Filters and PII Redaction. The latter will allow blocking specific words and redacting PII in FM-generated responses, contributing to content safety.
Guardrails are compatible with various large language models on Amazon Bedrock, including Titan, Anthropic Claude, Meta Llama 2, AI21 Jurassic, Cohere Command FMs, as well as fine-tuned FMs and Agents.
AWS extends intellectual property indemnity, specifically uncapped for copyright claims, covering generative outputs from services like Amazon Titan models and CodeWhisperer Professional. This indemnity protects customers from third-party copyright claims linked to outputs generated in response to customer-provided inputs, emphasizing responsible usage and avoiding inputting infringing data or disabling filtering features. The standard IP indemnity safeguards customers from third-party claims regarding IP infringement, encompassing copyright claims for the services and their training data.
</text>
Assistant:"""
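Claude's text-completions API expects this alternating "\n\nHuman:" / "\n\nAssistant:" turn format. If you want to summarize different documents with the same template, a tiny helper keeps it in one place. The function below is a hypothetical convenience of my own, not from the original notebook:

def build_claude_prompt(instruction: str, text: str) -> str:
    # Wrap an instruction and source text in the Human/Assistant
    # turn format required by Claude's text-completions API.
    return f"\n\nHuman: {instruction}\n<text>\n{text}\n</text>\n\nAssistant:"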

Configure the Model Parameters

body = json.dumps({
    "prompt": claude_instant_prompt,
    "max_tokens_to_sample": 256,  # Maximum number of tokens to generate
    "top_k": 250,  # Sample only from the 250 most likely next tokens
    "stop_sequences": [],  # Phrases that signal the model to conclude text generation
    "temperature": 0,  # Controls randomness; higher values increase diversity, lower values boost predictability
    "top_p": 0.9  # Nucleus sampling: draw from the smallest set of tokens whose cumulative probability reaches 0.9
})
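With temperature set to 0, the summary is effectively deterministic from run to run; raise temperature (or top_p) when you want more varied phrasing.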

Invoke the Model

You can also use anthropic.claude-instant-v1, a faster and cheaper yet still capable model that handles a range of tasks including casual dialogue, text analysis, summarization, and document question-answering.
response = bedrock_runtime.invoke_model(
    body=body,
    modelId="anthropic.claude-v2",
    accept="application/json",
    contentType="application/json"
)
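For longer generations you may prefer to stream tokens as they arrive instead of waiting for the full completion. The original notebook does not cover this, but here is a minimal sketch using invoke_model_with_response_stream, assuming the same body and model ID as above:

streaming_response = bedrock_runtime.invoke_model_with_response_stream(
    body=body,
    modelId="anthropic.claude-v2",
    accept="application/json",
    contentType="application/json"
)

# Each event carries a JSON chunk; for Claude the generated text
# is under the "completion" key.
for event in streaming_response.get("body"):
    chunk = json.loads(event["chunk"]["bytes"])
    print(chunk.get("completion", ""), end="")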

Parse the Response

from io import StringIO
import sys
import textwrap

def llm_output_parser(*args, width: int = 100, **kwargs):
    """
    Parses and prints output with line wrapping to a specified width.

    Parameters:
    - *args: Variable-length argument list for the print function.
    - width (int): Width for line wrapping (default 100).
    - **kwargs: Keyword arguments for the print function.

    Returns:
    None

    Example Usage:
    llm_output_parser("This is a sample output for llm_output_parser function.", width=50)
    """
    buffer = StringIO()

    try:
        # Redirect sys.stdout to capture the output
        _stdout = sys.stdout
        sys.stdout = buffer
        print(*args, **kwargs)
        output = buffer.getvalue()
    except Exception as e:
        # Restore stdout before reporting, otherwise the error message
        # would itself be swallowed by the buffer
        sys.stdout = _stdout
        print(f"Error capturing output: {e}")
        return
    finally:
        # Restore the original sys.stdout
        sys.stdout = _stdout

    try:
        # Wrap lines and print the parsed output
        for line in output.splitlines():
            print("\n".join(textwrap.wrap(line, width=width)))
    except Exception as e:
        # Handle any exceptions that may occur during line wrapping
        print(f"Error wrapping lines: {e}")

response_body = json.loads(response.get('body').read())
llm_output_parser(response_body.get('completion'))

Text completion:

Here is a summary of the key points from the text:
- Amazon Bedrock's guardrails provide a framework for implementing safeguards in generative AI
applications, aligning with responsible AI policies and use cases.
- The guardrails allow controlled user-Foundation Model interactions by filtering out undesirable
content.
- Soon the guardrails will also redact personally identifiable information to enhance privacy.
- Multiple guardrails can be created, each configured for specific use cases, allowing continuous
monitoring for policy violations.
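Since the notebook already imports botocore, one practical addition (my own, not from the original post) is to wrap the invocation in a try/except so that access or validation errors surface cleanly:

try:
    response = bedrock_runtime.invoke_model(
        body=body,
        modelId="anthropic.claude-v2",
        accept="application/json",
        contentType="application/json"
    )
except botocore.exceptions.ClientError as error:
    # Typical failures: AccessDeniedException if model access has not been
    # granted in the Bedrock console, or ValidationException for a
    # malformed request body.
    print(f"Bedrock invocation failed: {error}")
    raise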
 
