AWS BedRock - Boto3 Demo - Anthropic's Claude Models

Explore Anthropic's Claude Models: Purpose-built for Conversations, Summarization, Q&A, and more, using Boto3 with Bedrock

Published Dec 13, 2023
Last Modified Mar 11, 2024

Previous Blogs in this Learning Series

Blog 1: https://www.dataopslabs.com/p/aws-bedrock-learning-series-blog
Blog 2: https://www.dataopslabs.com/p/family-of-titan-text-models-cli-demo
Blog 3: https://www.dataopslabs.com/p/family-of-titan-text-models-boto3

GitHub Link - Notebook

https://github.com/jayyanar/learning-aws-bedrock/blob/main/blog4-Anthropic-Claude/Bedrock_Anthropic_Claude.ipynb

Environment Setup

I am using a local VS Code environment with AWS credentials configured.

Install Latest Python

Upgrade pip

Install the latest boto3, awscli, and botocore
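The setup steps above can be run from a terminal roughly as follows (a sketch, assuming Python 3 is already installed; adjust the pip invocation to your environment):

```shell
# Upgrade pip itself, then install/upgrade the AWS SDK packages.
python3 -m pip install --upgrade pip
python3 -m pip install --upgrade boto3 botocore awscli
```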

Load the Library

Anthropic - Claude 2 Model

Anthropic's Claude models excel at conversations, summarization, and Q&A. Claude 2.1 introduces improvements, doubling the context window (from 100K to 200K tokens) and enhancing reliability across a range of use cases.

Set the Prompt
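Claude's text-completion API expects the prompt wrapped in alternating `\n\nHuman:` / `\n\nAssistant:` turns. A sketch, with placeholder text standing in for the article to summarize:

```python
# Placeholder input text; substitute the document you want summarized.
article_text = (
    "Amazon Bedrock guardrails provide safeguards for generative AI "
    "applications."
)

# Claude requires the Human:/Assistant: turn format for text completion.
prompt = (
    f"\n\nHuman: Summarize the key points of the following text:\n"
    f"{article_text}"
    f"\n\nAssistant:"
)
```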

Configure the Model Parameters
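The request body carries the prompt plus the inference parameters for Claude's text-completion API. The values below are illustrative defaults, not tuned recommendations:

```python
import json

# Example prompt in Claude's Human:/Assistant: format (placeholder text).
prompt = "\n\nHuman: Summarize this text for me.\n\nAssistant:"

model_config = {
    "prompt": prompt,
    "max_tokens_to_sample": 512,       # cap on generated tokens
    "temperature": 0.5,                # lower = more deterministic
    "top_p": 0.9,                      # nucleus-sampling cutoff
    "stop_sequences": ["\n\nHuman:"],  # stop before a new human turn
}
body = json.dumps(model_config)
```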

Invoke the Model

# You can also use "anthropic.claude-instant-v1", a faster and cheaper yet still very capable model that can handle a range of tasks, including casual dialogue, text analysis, summarization, and document question-answering.

Parse the Response
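Bedrock returns the payload as a streaming body; JSON-decode it and read the `completion` field. A fake in-memory response stands in below so the parsing step can run without a live API call:

```python
import io
import json

# Simulated invoke_model response: the real one wraps the JSON payload
# in a streaming body that must be read before decoding.
fake_payload = json.dumps({"completion": "Here is a summary of the key points..."})
response = {"body": io.BytesIO(fake_payload.encode("utf-8"))}

result = json.loads(response["body"].read())
print(result["completion"])
```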

Text completion:

Here is a summary of the key points from the text:
- Amazon Bedrock's guardrails provide a framework for implementing safeguards in generative AI
applications, aligning with responsible AI policies and use cases.
- The guardrails allow controlled user-Foundation Model interactions by filtering out undesirable
content.
- Soon the guardrails will also redact personally identifiable information to enhance privacy.
- Multiple guardrails can be created, each configured for specific use cases, allowing continuous
monitoring for policy violations.
 
