Exploring Claude 3.7 Sonnet's Hybrid Reasoning on Amazon Bedrock

Discover how to leverage Claude 3.7 Sonnet's hybrid reasoning capabilities on Amazon Bedrock with practical Python examples comparing standard and extended thinking modes.

Bhavin Patel
Amazon Employee
Published Feb 26, 2025
Anthropic's Claude 3.7 Sonnet has arrived on Amazon Bedrock, bringing with it a groundbreaking new capability: hybrid reasoning. This innovative model can now perform detailed step-by-step thinking before responding, giving you unprecedented insight into its problem-solving process.
In this article, we'll explore Claude 3.7 Sonnet's reasoning capabilities through practical Python examples. You'll see how to enable reasoning, compare standard and extended thinking modes, and even combine reasoning with tool use.

✨ What Makes Claude 3.7 Sonnet Special?

Claude 3.7 Sonnet represents a significant advancement in generative AI. As the first hybrid reasoning model in the Claude family, it can work through complex problems using careful, step-by-step reasoning while maintaining the ability to provide quick responses when appropriate.
Key features include:
  • Hybrid Reasoning - A single model that can toggle between standard responses and detailed reasoning
  • Extended Thinking Mode - Analyses problems in detail with transparent step-by-step thinking
  • Adjustable Reasoning Budget - Control how many tokens are allocated to the thinking process
  • Massive Output Capacity - Up to 15x longer output than predecessor models (up to 128K tokens)
  • Enhanced Coding Capabilities - Industry-leading performance on coding benchmarks

🛠️ Prerequisites

Before getting started with the examples in this article, make sure you have:
  1. An AWS account with access to Amazon Bedrock
  2. AWS CLI installed and configured with appropriate permissions
  3. Python 3.x installed
  4. The latest version of boto3 and AWS CLI
To install or upgrade boto3 and the AWS CLI:
pip install --upgrade boto3 awscli
Most importantly, you need to request access to Claude 3.7 Sonnet in your AWS account:
  1. Navigate to the Amazon Bedrock console
  2. Go to "Model access" under "Bedrock configurations"
  3. Select "Modify model access" and request access for Claude 3.7 Sonnet
Claude 3.7 Sonnet is currently available in the following regions:
  • us-east-1 (N. Virginia)
  • us-east-2 (Ohio)
  • us-west-2 (Oregon)

🧠 Example 1: Comparing Standard and Extended Thinking Modes

Our first example compares Claude 3.7 Sonnet's responses with and without reasoning enabled. This helps illustrate the difference between standard mode and extended thinking mode.
import boto3
from botocore.exceptions import ClientError

def compare_thinking_modes():
    """
    Compares Claude 3.7 Sonnet's responses with and without reasoning enabled.
    Shows the difference in responses, thinking process, latency, and token usage.
    """

    # Create the Amazon Bedrock runtime client
    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    # Specify the model ID for Claude 3.7 Sonnet
    model_id = "us.anthropic.claude-3-7-sonnet-20250219-v1:0"

    # Example prompt to test
    prompt = "What would be the impact on global sea levels if all ice in Greenland melted?"

    print(f"PROMPT: {prompt}\n")

    try:
        # Get standard response (without reasoning)
        print("Getting standard response...")
        standard_response = client.converse(
            modelId=model_id,
            messages=[
                {
                    "role": "user",
                    "content": [{"text": prompt}]
                }
            ],
            inferenceConfig={"maxTokens": 8000}  # We use default values for other parameters
        )

        # Get response with reasoning enabled
        print("Getting extended thinking response...")
        reasoning_response = client.converse(
            modelId=model_id,
            messages=[
                {
                    "role": "user",
                    "content": [{"text": prompt}]
                }
            ],
            inferenceConfig={"maxTokens": 8000},  # Max tokens must be higher than budget_tokens
            additionalModelRequestFields={
                "thinking": {
                    "type": "enabled",
                    "budget_tokens": 4000  # Set a budget for the thinking process
                }
            }
        )

        # Extract standard response content
        standard_content_blocks = standard_response["output"]["message"]["content"]
        standard_text = None
        for block in standard_content_blocks:
            if "text" in block:
                standard_text = block["text"]
                break

        # Extract reasoning response content
        reasoning_content_blocks = reasoning_response["output"]["message"]["content"]
        reasoning_text = None
        thinking = None
        for block in reasoning_content_blocks:
            if "text" in block:
                reasoning_text = block["text"]
            elif "reasoningContent" in block:
                thinking = block["reasoningContent"]["reasoningText"]["text"]

        # Display results for standard mode
        print("\n--- STANDARD MODE ---")
        print(f"Tokens used: {standard_response['usage']['inputTokens']} input, {standard_response['usage']['outputTokens']} output")
        print("\nRESPONSE:")
        print(standard_text)

        # Display results for reasoning mode
        print("\n\n--- EXTENDED THINKING MODE ---")
        print(f"Tokens used: {reasoning_response['usage']['inputTokens']} input, {reasoning_response['usage']['outputTokens']} output")

        if thinking:
            print("\nTHINKING PROCESS:")
            print(thinking)

        print("\nRESPONSE:")
        print(reasoning_text)

    except (ClientError, Exception) as e:
        print(f"ERROR: Can't invoke Claude 3.7 Sonnet. Reason: {e}")
        exit(1)

if __name__ == "__main__":
    compare_thinking_modes()
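
The block-extraction loops in this example can be factored into a small reusable helper. Here's a sketch (`extract_text_and_thinking` is my own name, not part of the Bedrock SDK) that operates on the response shape shown above; the usage example builds a minimal hand-made response in that shape rather than calling the API:

```python
def extract_text_and_thinking(response):
    """Pull the answer text and any reasoning text out of a Converse API response dict."""
    text, thinking = None, None
    for block in response["output"]["message"]["content"]:
        if "text" in block and text is None:
            text = block["text"]
        elif "reasoningContent" in block:
            thinking = block["reasoningContent"]["reasoningText"]["text"]
    return text, thinking

# Minimal hand-built response in the same shape as a Converse API reply:
fake_response = {
    "output": {
        "message": {
            "content": [
                {"reasoningContent": {"reasoningText": {"text": "step-by-step notes"}}},
                {"text": "final answer"},
            ]
        }
    }
}
print(extract_text_and_thinking(fake_response))  # ('final answer', 'step-by-step notes')
```

With a helper like this, both the standard and reasoning branches of the example collapse to a single call each.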

Output Analysis: Standard vs. Extended Thinking

When I ran this example with the prompt "What would be the impact on global sea levels if all ice in Greenland melted?", I received two distinctly different responses:

Standard Mode Response:

# Impact of Greenland Ice Sheet Melting

If all ice in Greenland completely melted, global sea levels would rise by approximately **7.2 meters (24 feet)**.

This significant rise would:
- Inundate coastal cities worldwide
- Displace hundreds of millions of people
- Redraw coastlines globally
- Impact critical infrastructure in low-lying areas

For context, while the Antarctic ice sheet contains much more ice (potential for ~58 meters of sea level rise), Greenland's ice is currently melting at a faster rate due to Arctic warming.
Scientists estimate complete melting of Greenland's ice sheet would take centuries under current climate scenarios, though the process is accelerating with climate change.

Extended Thinking Process:

This question is asking about the potential impact on global sea levels from a complete melt of Greenland's ice sheet. This is a well-studied scientific question related to climate change impacts.

Greenland contains the second largest ice sheet on Earth (after Antarctica). The Greenland ice sheet contains approximately 2.85-3 million cubic kilometers of ice. If all of this ice melted, it would contribute significantly to sea level rise.

Based on scientific estimates, the complete melting of the Greenland ice sheet would raise global sea levels by approximately 7.2 meters (24 feet) to 7.4 meters. This is a substantial increase that would have dramatic consequences for coastal cities and low-lying areas around the world.

Some key facts about this scenario:

1. Timeframe: Complete melting would likely take centuries or millennia under current warming projections, although the process is already underway.

2. Global impact: A 7+ meter sea level rise would inundate many coastal cities, island nations, and heavily populated river deltas.

3. Current situation: Greenland is already losing ice mass, contributing to observed sea level rise. The rate of ice loss has been accelerating in recent decades due to climate change.

4. Comparison: Antarctica's ice sheet contains much more ice and if it completely melted, it would raise sea levels by about 58 meters (190 feet).

In providing my answer, I'll focus on the estimated sea level rise and briefly mention the impacts this would have, while noting that this scenario represents a long-term projection rather than an immediate threat.

Extended Thinking Mode Response:

# Impact of Complete Greenland Ice Melt on Sea Levels

If all the ice in Greenland melted completely, global sea levels would rise by approximately **7.2 meters (24 feet)**.

## Key context:

- Greenland contains the world's second-largest ice sheet (after Antarctica)
- This amount of sea level rise would have catastrophic consequences for coastal cities and communities worldwide
- Complete melting would likely take centuries to occur under current warming scenarios
- However, partial melting is already happening and accelerating due to climate change

Many major cities like New York, Mumbai, Shanghai, and London would face severe flooding or partial submersion at this level of rise, and some island nations would become uninhabitable.

Would you like me to elaborate on any particular aspect of this scenario?

Analysis:

What's fascinating about these responses is how the extended thinking mode reveals Claude's internal reasoning process, which contains several elements that don't appear in the final response:
  1. Detailed quantification: The thinking process references "2.85-3 million cubic kilometers of ice" - a detail not included in either final response.
  2. Structured approach: Claude organises its thoughts into numbered points (timeframe, global impact, current situation, comparison) before synthesising a more readable response.
  3. Self-instruction: Claude tells itself "In providing my answer, I'll focus on the estimated sea level rise..." showing how it plans its final response.
  4. Tone differences: The standard mode response is more direct and confident, while the extended thinking shows a more deliberative, academic approach weighing facts.
  5. Token usage: Extended thinking used 532 output tokens compared to 161 for standard mode - more than 3 times as many tokens.
  6. Different final format: The extended thinking mode response uses a different structure with a more conversational ending, asking if the user would like more elaboration on any aspect.
This transparency into the reasoning process helps us better understand how Claude reaches its conclusions and allows us to verify its thought process for accuracy.

🔧 Example 2: Tool Use with Reasoning

Our second example combines Claude 3.7 Sonnet's reasoning capability with its ability to use tools. This demonstrates how the model thinks through a problem before determining that it needs to use a tool to solve it.
import boto3
import json
from botocore.exceptions import ClientError

def tool_use_with_reasoning():
    """
    Demonstrates how to use Claude 3.7 Sonnet with tools and reasoning enabled.
    Shows how the model thinks through a problem before using tools.
    """

    # Create the Amazon Bedrock runtime client
    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    # Specify the model ID for Claude 3.7 Sonnet
    model_id = "us.anthropic.claude-3-7-sonnet-20250219-v1:0"

    # Example prompt that requires tools
    prompt = "I need to calculate the compound interest on an investment of $5,000 with an annual interest rate of 6.5% compounded monthly for 8 years."

    # Define a calculator tool
    tools = [
        {
            "toolSpec": {
                "name": "calculator",
                "description": "Evaluate mathematical expressions and return the result.",
                "inputSchema": {
                    "json": {
                        "type": "object",
                        "properties": {
                            "expression": {
                                "type": "string",
                                "description": "The mathematical expression to evaluate."
                            }
                        },
                        "required": ["expression"]
                    }
                }
            }
        }
    ]

    try:
        # Send initial request with tools and reasoning enabled
        print(f"PROMPT: {prompt}\n")
        print("Sending request with tools and reasoning enabled...")

        response = client.converse(
            modelId=model_id,
            messages=[
                {
                    "role": "user",
                    "content": [{"text": prompt}]
                }
            ],
            inferenceConfig={"maxTokens": 8000},  # Must be higher than budget_tokens
            toolConfig={"tools": tools},  # Define available tools
            additionalModelRequestFields={
                "thinking": {
                    "type": "enabled",
                    "budget_tokens": 4000  # Allocate tokens for thinking
                }
            }
        )

        # Check if the model wants to use a tool
        if response["stopReason"] == "tool_use":
            content_blocks = response["output"]["message"]["content"]
            thinking = None
            tool_use = None

            # Extract thinking content and remove it from content_blocks
            filtered_content_blocks = []
            for block in content_blocks:
                if "reasoningContent" in block:
                    thinking = block["reasoningContent"]["reasoningText"]["text"]
                else:
                    filtered_content_blocks.append(block)

            # Now find the tool use in the filtered blocks
            for block in filtered_content_blocks:
                if "toolUse" in block:
                    tool_use = block["toolUse"]

            # Display the thinking process
            if thinking:
                print("--- THINKING PROCESS ---")
                print(thinking)
                print()

            # Handle the tool use request
            if tool_use:
                tool_name = tool_use.get("name")
                tool_input = tool_use.get("input", {})
                tool_id = tool_use.get("toolUseId")

                print("--- TOOL REQUEST ---")
                print(f"Tool: {tool_name}")
                print(f"Input: {json.dumps(tool_input, indent=2)}")

                # Execute the calculator tool
                if tool_name == "calculator":
                    expression = tool_input.get("expression", "")
                    try:
                        # Python uses ** for exponentiation, not ^
                        # Replace ^ with ** for Python evaluation
                        expression = expression.replace("^", "**")
                        result = str(eval(expression))
                        print(f"Result: {result}")
                    except Exception as e:
                        result = f"Error: {str(e)}"
                        print(f"Result: {result}")
                else:
                    # If we don't recognize the tool, return an error
                    result = "Error: Unknown tool"
                    print(f"Result: {result}")

                # Send follow-up with tool result
                print("\nSending follow-up with tool result...")

                follow_up_response = client.converse(
                    modelId=model_id,
                    messages=[
                        {
                            "role": "user",
                            "content": [{"text": prompt}]
                        },
                        {
                            "role": "assistant",
                            "content": filtered_content_blocks  # Use filtered blocks without thinking
                        },
                        {
                            "role": "user",
                            "content": [
                                {
                                    "toolResult": {  # Provide tool result in the correct format
                                        "toolUseId": tool_id,
                                        "content": [{"text": result}]
                                    }
                                }
                            ]
                        }
                    ],
                    inferenceConfig={"maxTokens": 8000},
                    toolConfig={"tools": tools}  # Must include toolConfig in follow-up
                )

                # Extract and display the final response
                final_text = None
                for block in follow_up_response["output"]["message"]["content"]:
                    if "text" in block:
                        final_text = block["text"]
                        break

                print("\n--- FINAL RESPONSE ---")
                print(final_text)
            else:
                print("--- NO TOOL USE REQUESTED ---")
                print("The model responded with a stop reason of 'tool_use' but no tool use block was found.")
        else:
            # If the model didn't request a tool, display the direct response
            content_blocks = response["output"]["message"]["content"]
            thinking = None
            response_text = None

            for block in content_blocks:
                if "reasoningContent" in block:
                    thinking = block["reasoningContent"]["reasoningText"]["text"]
                elif "text" in block:
                    response_text = block["text"]

            if thinking:
                print("--- THINKING PROCESS ---")
                print(thinking)
                print()

            print("--- RESPONSE (No tool used) ---")
            print(response_text)

    except (ClientError, Exception) as e:
        print(f"ERROR: Can't invoke Claude 3.7 Sonnet. Reason: {e}")
        exit(1)

if __name__ == "__main__":
    tool_use_with_reasoning()

Output Analysis: Tool Use with Reasoning

I ran this example with a prompt asking for a compound interest calculation: "I need to calculate the compound interest on an investment of $5,000 with an annual interest rate of 6.5% compounded monthly for 8 years."
Claude 3.7 Sonnet first thought through the problem, then requested the calculator tool to perform the computation.

Thinking Process:

To calculate compound interest, I'll use the formula: A = P(1 + r/n)^(nt)

Where:
- A = final amount
- P = principal (initial investment)
- r = annual interest rate (as a decimal)
- n = number of times interest is compounded per year
- t = time in years

Given information:
- Principal (P) = $5,000
- Annual interest rate (r) = 6.5% = 0.065
- Compounding frequency (n) = 12 (monthly)
- Time (t) = 8 years

I need to calculate: A = 5000 * (1 + 0.065/12)^(12*8)

I'll use the calculator function to evaluate this expression:

Tool Request:

Tool: calculator
Input: {
"expression": "5000 * (1 + 0.065/12)^(12*8)"
}
Result: 8398.344845047079

Final Response:

For your investment of $5,000 with an annual interest rate of 6.5% compounded monthly over 8 years:

- Initial investment: $5,000
- Final value: $8,398.34
- Compound interest earned: $3,398.34

The calculation used the compound interest formula: P(1 + r/n)^(nt), where:
- P = Principal ($5,000)
- r = Annual interest rate (6.5% or 0.065)
- n = Number of times interest is compounded per year (12 for monthly compounding)
- t = Time in years (8 years)
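
You can sanity-check the model's figures by evaluating the same formula directly in Python:

```python
# Verify the compound interest figures from the response above:
# A = P(1 + r/n)^(nt) with P=5000, r=0.065, n=12 (monthly), t=8
principal = 5000
amount = principal * (1 + 0.065 / 12) ** (12 * 8)
interest = amount - principal
print(round(amount, 2))    # 8398.34
print(round(interest, 2))  # 3398.34
```

Both values match what Claude reported after calling the calculator tool.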

Analysis:

This example demonstrates how Claude 3.7 Sonnet's reasoning capability integrates with tool use:
  1. Structured Problem Solving: Claude first breaks down the problem, identifies the formula and values needed, and realises it needs computational help.
  2. Appropriate Tool Selection: It correctly determines that the calculator tool is needed to evaluate the complex expression.
  3. Formula Translation: Claude correctly translates the mathematical formula A = P(1 + r/n)^(nt) into a calculable expression.
  4. Complete Response: After receiving the calculation result, Claude formats a clear, comprehensive response that explains both the result and how it was calculated.
  5. Tool Use Steps: The code demonstrates the full life cycle of tool use - from thinking, to tool request, to result processing, to final response.
Crucially, the reasoning process here shows that Claude isn't blindly using a tool but is thinking through why the tool is needed and what to do with the result afterwards.

🔍 Important Implementation Details

When working with Claude 3.7 Sonnet's reasoning capability, keep these technical details in mind:
  1. Reasoning and Inference Parameters: Reasoning is not compatible with modifications to temperature, top_p, or top_k, nor with forced tool use. When comparing standard and reasoning modes, I used default values for these parameters to ensure a fair comparison.
  2. Budget Tokens: You must specify how many tokens to allocate for reasoning via the budget_tokens parameter. The minimum is 1,024 tokens, but 4,000+ tokens are recommended for complex problems.
  3. Max Tokens Requirement: The maxTokens value must be higher than budget_tokens. A good rule of thumb is to set maxTokens at least twice as high as budget_tokens.
  4. Filtered Content in Follow-ups: When using tool results in a follow-up request, you must filter out the reasoningContent blocks from the previous response to avoid validation errors.
  5. Tool Config in Follow-ups: When sending tool results back to the model, you must include the same toolConfig in the follow-up request.
  6. Python Exponentiation: Note that Claude uses ^ for exponentiation in mathematical expressions, but Python uses **. The code handles this conversion automatically.
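
On points 5 and 6: the demo's eval() call works for illustration, but evaluating model-generated strings with eval is risky in production. One safer alternative (my own substitution, not part of the article's code) is to parse the expression with Python's ast module and walk only arithmetic nodes, applying the same ^ to ** conversion:

```python
import ast
import operator

# Operators permitted in expressions (maps AST node types to functions)
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expression):
    """Evaluate a pure-arithmetic expression; raise ValueError on anything else."""
    expression = expression.replace("^", "**")  # Claude tends to write ^ for powers

    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError("Disallowed element in expression")

    return _eval(ast.parse(expression, mode="eval"))

print(safe_eval("5000 * (1 + 0.065/12)^(12*8)"))  # 8398.344845047079
```

Anything that isn't a number or arithmetic operator, such as a function call or attribute access, raises ValueError instead of executing.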

💡 Use Cases for Extended Thinking

The hybrid reasoning capability of Claude 3.7 Sonnet opens up exciting possibilities:
  1. Educational Tools: Showing students the step-by-step reasoning process for solving complex problems
  2. Research Assistance: Breaking down complex research questions into logical components
  3. Math and Science Problem Solving: Tackling multi-step calculations with transparent working
  4. Decision Making Transparency: Understanding how AI arrives at recommendations or conclusions
  5. Complex Planning: Creating detailed plans with clear reasoning behind each step

🚀 Best Practices

To get the most out of Claude 3.7 Sonnet's reasoning capabilities:
  1. Adjust Budget Based on Complexity: Use higher reasoning budgets (6,000+ tokens) for very complex problems and lower budgets for simpler ones.
  2. Explicitly Request Step-by-Step Thinking: When you want detailed reasoning, phrases like "Think step by step" or "Show your work" can help guide the model.
  3. Consider Performance Trade-offs: Extended thinking increases token usage and response time, so use it strategically when deeper reasoning is valuable.
  4. Examine Thinking Process for Verification: The thinking process can reveal potential issues in the model's reasoning that might not be apparent in the final response.
  5. Code Defensively: Handle different response structures and potential errors when working with reasoning and tool use in production code.
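
On "Code Defensively": transient errors such as throttling are common with any remote model API, so production calls to converse are often wrapped in retries. The sketch below is a generic, hypothetical helper (with_retries is my own name, not a boto3 API); boto3 also supports built-in retry configuration through botocore's Config object.

```python
import time

def with_retries(call, max_attempts=4, base_delay=1.0, retryable=(Exception,)):
    """Retry a zero-argument callable with exponential backoff (1s, 2s, 4s, ...)."""
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except retryable:
            if attempt == max_attempts:
                raise  # Give up after the final attempt
            time.sleep(base_delay * 2 ** (attempt - 1))

# Demo with a flaky function that fails twice before succeeding
# (base_delay shortened so the demo runs quickly):
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

print(with_retries(flaky, base_delay=0.01))  # ok
```

In a real application, call would be a lambda wrapping client.converse(...), and retryable would be narrowed to the specific throttling exceptions you observe.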

🌐 Conclusion

Claude 3.7 Sonnet's hybrid reasoning capability represents a significant advancement in making AI thinking more transparent and trustworthy. By providing insight into its step-by-step reasoning process, Claude allows developers and users to better understand, verify, and trust the outputs it produces.
The examples we've explored show how reasoning can be used both standalone and in combination with tools to solve complex problems while maintaining transparency throughout the process.
As you build with Claude 3.7 Sonnet on Amazon Bedrock, consider how the reasoning capability might enhance your applications by providing deeper insights, better explanations, and more transparent problem-solving.

Have you tried Claude 3.7 Sonnet's reasoning capabilities? What interesting use cases have you found? Share your experiences in the comments below!
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
