Building AI Agents with Strands: Part 1 - Creating Your First Agent

Create your first AI agent with just a few lines of code using the Strands Agents SDK and Amazon Bedrock.

Dennis Traub
Amazon Employee
Published May 21, 2025
In this first tutorial, I'll walk you through creating a simple but functional AI agent using the Strands Agents SDK.
We'll set up our development environment, install the necessary packages, and create a subject expert agent - the first component of our Integrated Learning Lab project.
Let's dive in!

Prerequisites

Before we begin, you'll need:
  • A recent version of Python installed on your machine
  • An AWS account with access to Amazon Bedrock and the Claude models enabled
  • AWS credentials configured locally
If you're not familiar with setting up AWS credentials, I recommend checking out the AWS CLI configuration guide.

Setting Up Your Environment

First, let's create a virtual environment and install the required packages:
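A typical setup looks like this (the activation command assumes a Unix-like shell; on Windows, use .venv\Scripts\activate instead):

```bash
# Create and activate a virtual environment
python -m venv .venv
source .venv/bin/activate

# Install the Strands Agents SDK and its built-in tools
pip install strands-agents strands-agents-tools
```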
The strands-agents package provides the core SDK functionality, while strands-agents-tools includes a variety of built-in tools we can use to enhance our agents.

Creating Your First Agent

Now, let's create a simple Subject Expert agent focused on computer science education. Create a new file called subject_expert.py with the following code:
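Here's a minimal sketch of what subject_expert.py can look like - the system prompt wording is just an example, so feel free to adapt it:

```python
from strands import Agent

# A system prompt that defines the agent's expertise and behavior guidelines
SUBJECT_EXPERT_PROMPT = """You are a computer science educator.
Explain concepts clearly, use practical examples, and keep your answers
focused on the question being asked."""

# Create the agent with the specialized system prompt
agent = Agent(system_prompt=SUBJECT_EXPERT_PROMPT)

# Ask a question - the response is printed to the console and also returned
response = agent("Explain the difference between a stack and a queue.")
```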
Save this file and run it:
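With your virtual environment still active:

```bash
python subject_expert.py
```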
And just like that, you have a functional AI agent! Isn't it incredible how little code it took to create something so capable?
Cross-Region Inference note: If you encounter permission errors despite configuring credentials correctly, you may need to enable model access in multiple AWS regions. Strands Agents uses inference profiles that can route requests to the region with the lowest latency. For US users, consider enabling Claude model access in all US regions where it's available (us-east-1, us-east-2, us-west-2), even if you're primarily working in just one region - similarly in the EU. See the Strands Model Providers documentation for more details on region configuration.

Understanding What's Happening

Let's break down what's going on in our code:

The Agent Object
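This is the line from the sketch above that constructs the agent:

```python
agent = Agent(system_prompt=SUBJECT_EXPERT_PROMPT)
```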

This creates an Agent instance with a specialized system prompt. The system prompt is crucial as it defines the agent's personality, expertise, and behavior guidelines.
I've found that crafting effective system prompts is something of an art—it's about finding the right balance between specificity and flexibility. For our Subject Expert, I wanted to ensure it provides clear explanations with practical examples.

Agent Invocation
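And this is the invocation from the sketch above:

```python
response = agent("Explain the difference between a stack and a queue.")
```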

This line does a few things at once:
  • It sends our query to the agent
  • The agent processes it using the AI model
  • The response is automatically printed to the console
  • The response is also returned as a string for further use if needed

The Default Model

By default, Strands Agents uses Amazon Bedrock with Claude 3.7 Sonnet. The SDK automatically handles:
  1. Creating a secure connection to Amazon Bedrock
  2. Formatting requests and responses for Claude 3.7 Sonnet
  3. Managing token limits and other model-specific requirements
This abstraction lets us focus on building agent functionality rather than worrying about API details.
What's even better is that Strands Agents supports multiple model providers beyond Amazon Bedrock. The framework is designed to be provider-agnostic, so you can easily switch between Amazon Bedrock, Anthropic, LiteLLM, Ollama, and the Llama API, or build your own custom providers for specialized needs.
This flexibility means you can start with one provider and easily switch to another later, or even use different providers for different agents in the same application. You can find more details in the Model Providers documentation.
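For example, instead of relying on the defaults you can pass a model object to the Agent explicitly. Here's a sketch using the Bedrock provider; treat the model ID and region as placeholders and substitute the ones you've actually enabled access to:

```python
from strands import Agent
from strands.models import BedrockModel

# Explicitly configure the Bedrock model and region instead of using the defaults
model = BedrockModel(
    model_id="us.anthropic.claude-3-7-sonnet-20250219-v1:0",  # placeholder - use a model you have access to
    region_name="us-west-2",
)

agent = Agent(
    model=model,
    system_prompt="You are a computer science educator.",
)
```

Note that this still uses the Amazon Bedrock provider, just configured explicitly rather than through the defaults.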
In future tutorials, we'll explore how to use alternative model providers, but for now, we'll stick with the default Amazon Bedrock integration to keep things simple.

Enabling Debug Logging

While developing, I often want to see what's happening "under the hood." Strands provides built-in logging to help with this. Let's modify our script to enable debug logging:
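The snippet below uses Python's standard logging module and assumes the SDK emits its logs under the "strands" logger name:

```python
import logging

from strands import Agent

# Turn on debug-level logging for the Strands SDK
logging.getLogger("strands").setLevel(logging.DEBUG)

# Send log output to the console with a simple format
logging.basicConfig(
    format="%(levelname)s | %(name)s | %(message)s",
    handlers=[logging.StreamHandler()],
)

agent = Agent(system_prompt="You are a computer science educator.")
agent("Explain the difference between a stack and a queue.")
```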
With logging enabled, you'll see detailed information about what's happening behind the scenes:
  • Model initialization
  • Tool discovery and registration
  • Event loop processing
  • Request/response flow
  • Conversation management
  • Any errors or warnings
Here's a snippet of what the logging output may look like:
These logs have been invaluable for troubleshooting issues and understanding the inner workings of the agent as I've been experimenting with it.

Building an Interactive Session

Let's enhance our script to create an interactive session with our subject expert agent:
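A simple version might look like this - the prompt text and exit command are just examples:

```python
from strands import Agent

SUBJECT_EXPERT_PROMPT = """You are a computer science educator.
Explain concepts clearly, use practical examples, and keep your answers
focused on the question being asked."""

agent = Agent(system_prompt=SUBJECT_EXPERT_PROMPT)

print("Subject Expert Agent - type 'exit' to quit.")

while True:
    user_input = input("\n> ")
    if user_input.strip().lower() in ("exit", "quit"):
        print("Goodbye!")
        break
    # The agent prints its answer to the console as it responds,
    # and keeps the conversation history between turns
    agent(user_input)
```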
Run this script to start an interactive session:
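Assuming you saved the interactive version as subject_expert.py as well:

```bash
python subject_expert.py
```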
Now you can have a continuous conversation with your subject expert agent, asking various computer science questions.

What We've Learned So Far

Creating this simple agent has taught us several things about the Strands Agents SDK:
  1. The entry barrier is remarkably low—you can have a functional agent in just a few lines of code
  2. System prompts are powerful for defining agent behavior and expertise
  3. The SDK handles many complexities (authentication, request formatting, etc.) behind the scenes
  4. Debug logging provides valuable insights into the agent's operation
While this is just the beginning, it's exciting to see how quickly we can create a functional AI agent. In the next tutorial, we'll explore how to enhance our agent with custom tools to extend its capabilities beyond conversation.

Next Steps & Resources

In the next lesson, we'll learn how to add tools that allow our agent to:
  • Access and manage learning resources
  • Look up specific information
  • Perform calculations and analysis
These capabilities will significantly enhance our subject expert's ability to deliver an interactive learning experience.
Ready? Let's continue with Part 2: Tool Integration

Troubleshooting tip: If you encounter authentication errors, double-check that your AWS credentials are properly configured and that you have access to Amazon Bedrock models. Sometimes simply running aws configure to refresh your credentials can solve these issues.
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
