
Building Intelligent AI Agents with AWS Strands: A Hands-On Guide
Discover how to build intelligent, task-specific AI agents using the new AWS Strands SDK. This hands-on guide walks you through integrating Amazon Bedrock models with Strands Agents and connecting to Model Context Protocol (MCP) servers for real-time tools like AWS documentation lookup and diagram generation.
Prasanna Sridharan
Amazon Employee
Published Jun 3, 2025
In the rapidly evolving landscape of generative AI, the ability to create intelligent agents that can reason, plan, and interact with external systems is becoming increasingly vital. AWS's Strands Agents SDK is a powerful, open-source framework designed to simplify the development of such agents and agentic workflows.
In this blog post, we’ll delve into Strands Agents: what they are, how to get started, and eight practical use cases that highlight the power of agentic AI in enterprise environments.
AWS Strands Agents is an open-source SDK that adopts a model-driven approach to building and running AI agents with minimal code. It enables developers to create agents that can plan, reason, and select tools autonomously, leveraging the capabilities of large language models (LLMs) like those available through Amazon Bedrock.
Key Features:
- Simplified Agent Development: Build agents with just a few lines of code by defining prompts and tools.
- Tool Integration: Easily integrate external tools and APIs to extend agent capabilities.
- Model Agnostic: Supports various LLM providers, including Amazon Bedrock, Anthropic, and more.
- Multi-Agent Systems: Facilitates the creation of complex workflows involving multiple collaborating agents.
- Security and Observability: Includes features for guardrails, prompt security, and monitoring agent performance.
The AWS Strands Agents SDK empowers developers to build intelligent AI agents by leveraging the advanced reasoning capabilities of large language models (LLMs). At its core, the SDK facilitates a model-driven approach where agents can autonomously plan, reason, and act to accomplish complex tasks.
Core Components:
- Model: The LLM serves as the agent's "brain," interpreting prompts, making decisions, and determining the sequence of actions required to achieve a goal.
- Tools: External functions or APIs that the agent can invoke to perform specific tasks, such as data retrieval, computations, or interactions with other services.
- Prompt: Defines the agent's objective and provides context for the task at hand.
Agentic Loop:
The Strands Agents SDK operates on an iterative loop, enabling agents to:
- Plan: Decompose tasks into manageable sub-tasks and determine the optimal sequence of actions.
- Reason: Utilize the LLM to make informed decisions based on context, available tools, and prior interactions.
- Act: Invoke appropriate tools to execute actions.
- Reflect: Assess the outcomes of actions, identify any discrepancies or errors, and adjust strategies accordingly.
This feedback loop ensures that agents can adapt to dynamic environments, handle unexpected scenarios, and improve their performance over time.
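The loop above can be illustrated with a toy sketch. This is plain Python, not the Strands SDK: the "model" here is a stub function that decides whether to call a tool or produce a final answer.

```python
# Toy illustration of the plan/reason/act/reflect loop (not the Strands SDK;
# the "model" is a stub that decides whether to call a tool or answer).

def calculator(expression: str) -> str:
    """A tool the agent can invoke."""
    return str(eval(expression))  # acceptable in a trusted toy example

TOOLS = {"calculator": calculator}

def model(messages):
    """Stub LLM: if the last message is the user's question, request the tool."""
    last = messages[-1]
    if last["role"] == "user":
        # Reason: the question contains an arithmetic expression.
        return {"tool": "calculator", "input": "6 * 7"}
    # Reflect on the tool result and produce a final answer.
    return {"answer": f"The result is {last['content']}."}

def run_agent(prompt: str) -> str:
    messages = [{"role": "user", "content": prompt}]
    while True:
        decision = model(messages)                            # plan / reason
        if "tool" in decision:
            result = TOOLS[decision["tool"]](decision["input"])  # act
            messages.append({"role": "tool", "content": result})  # reflect
        else:
            return decision["answer"]

print(run_agent("What is 6 * 7?"))  # -> The result is 42.
```

A real agent replaces the stub with an LLM call and the dictionary of tools with registered functions, but the control flow is the same.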
Let's begin by creating a simple "Hello, World!" agent using the Strands Agents SDK.
Prerequisites:
- Python 3.10 or higher
- An AWS account with access to Amazon Bedrock
- Access enabled for the following Bedrock models:
- Anthropic Claude 3.5 Haiku
- Anthropic Claude 3.5 Sonnet V2
- Anthropic Claude 3.7 Sonnet
- Amazon Nova Micro
- Amazon Nova Lite
- Amazon Nova Pro
- AWS CLI configured with appropriate credentials and permissions to the required resources
Install the SDK with `pip install strands-agents strands-agents-tools`. After installing the packages, restart the kernel.
In this example, we import the Agent class from the strands module and interact with the agent by passing a message. The agent processes the input using the underlying LLM and returns a response.
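A minimal version of that example might look like the following. This is a sketch assuming the `strands-agents` package is installed and AWS credentials with Bedrock model access are configured; the Bedrock call sits inside `main()` so nothing runs on import.

```python
"""Minimal "Hello, World!" Strands agent (sketch; requires AWS credentials
with Bedrock model access and the strands-agents package)."""

PROMPT = "Explain Amazon Bedrock Agents."

def main():
    # Imported here so the sketch can be read even without the SDK installed.
    from strands import Agent

    # With no arguments, Agent picks a default Bedrock model and system prompt.
    agent = Agent()

    # Calling the agent runs the agentic loop and returns the model's response.
    response = agent(PROMPT)
    print(response)

# main()  # uncomment to run (invokes Amazon Bedrock and incurs usage)
```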
This simple Python example shows how to interact with an Amazon Bedrock model using the `strands` library. The `Agent` is created with a system prompt (or defaults) and then sent a question, in this case "Explain Amazon Bedrock Agents." The agent processes the query and returns a natural language response, which is printed out. This demonstrates how easily you can build conversational AI apps powered by Amazon Bedrock with minimal code.
The true power of Strands Agents becomes evident when exploring real-world applications. Below are several use cases demonstrating the versatility of the SDK.
Automate the extraction of article titles and links from Hacker News using Python libraries like `requests` and `BeautifulSoup`.
This Python snippet demonstrates how to build a simple web scraping agent using the `strands` library with two tools: a Python REPL to run scripts, and a file writer to save results. The agent is prompted to scrape article titles and links from the Hacker News front page (https://news.ycombinator.com/news), then save the output as a CSV file named with the current date. The Python code is executed in a non-interactive way, letting the agent automate data extraction and storage seamlessly. This shows how you can quickly build agents to fetch, process, and save live web data using minimal code.
When you run this code, the agent reads the prompt, plans the steps, uses the Python REPL tool to scrape article titles and links from Hacker News, and saves the results as a date-stamped CSV file using the file write tool, all autonomously.
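The scraping code the agent typically produces boils down to something like this. The sketch below is stdlib-only, parsing a saved HTML snippet with `html.parser` instead of making a live request (Hacker News markup can change; in the real flow you would first fetch https://news.ycombinator.com/news with `requests`).

```python
import csv
from datetime import date
from html.parser import HTMLParser

# Simplified snippet imitating Hacker News markup, where titles live inside
# <span class="titleline"><a href="...">. The real markup may differ.
SAMPLE_HTML = """
<span class="titleline"><a href="https://example.com/a">First story</a></span>
<span class="titleline"><a href="https://example.com/b">Second story</a></span>
"""

class TitleParser(HTMLParser):
    """Collect (title, link) pairs from titleline anchors."""
    def __init__(self):
        super().__init__()
        self.in_titleline = False
        self.current_link = None
        self.rows = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "span" and attrs.get("class") == "titleline":
            self.in_titleline = True
        elif tag == "a" and self.in_titleline:
            self.current_link = attrs.get("href")

    def handle_data(self, data):
        if self.current_link:
            self.rows.append((data.strip(), self.current_link))
            self.current_link = None
            self.in_titleline = False

parser = TitleParser()
parser.feed(SAMPLE_HTML)

# Save as a date-stamped CSV, as the agent is prompted to do.
filename = f"hn_articles_{date.today().isoformat()}.csv"
with open(filename, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["title", "link"])
    writer.writerows(parser.rows)

print(parser.rows)
```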
Analyze historical stock data for Amazon using `yfinance`, calculate moving averages, and compare with the S&P 500.
This example shows how to create a financial analyst agent using the `strands` library with a Python REPL tool. The agent uses the `yfinance` Python module to fetch historical stock data for a given company (e.g., Amazon). It then generates key visualizations, including the 20-day moving average of closing prices and a daily return rate comparison against the S&P 500 over the past year. Additionally, the agent calculates the stock’s volatility based on return rates. This demonstrates how to combine natural language instructions with automated data retrieval and analysis to generate insightful financial reports programmatically.
When you run this code, the agent acts as a financial analyst: it retrieves Amazon’s historical stock data using `yfinance`, calculates and plots the 20-day moving average and daily return rates vs. the S&P 500, computes the volatility of returns, and generates the results, all by reasoning through the system prompt and executing code via the Python REPL tool.
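The core stock-analysis calculations reduce to a few pandas operations. Here is a sketch on a synthetic price series standing in for a live `yfinance` download, with plotting omitted:

```python
import numpy as np
import pandas as pd

# Synthetic daily closing prices standing in for yf.download("AMZN")["Close"].
rng = np.random.default_rng(seed=42)
prices = pd.Series(100 * np.cumprod(1 + rng.normal(0.0005, 0.02, 252)),
                   index=pd.bdate_range("2024-01-01", periods=252),
                   name="Close")

# 20-day moving average of closing prices.
ma20 = prices.rolling(window=20).mean()

# Daily return rate (percentage change day over day).
returns = prices.pct_change().dropna()

# Annualized volatility: std of daily returns scaled by sqrt(252 trading days).
volatility = returns.std() * np.sqrt(252)

print(f"Last close: {prices.iloc[-1]:.2f}")
print(f"Last 20-day MA: {ma20.iloc[-1]:.2f}")
print(f"Annualized volatility: {volatility:.1%}")
```

The agent's generated script does the same math, then plots the series with matplotlib and repeats the return calculation for the S&P 500 (`^GSPC`) to compare.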
Fetch detailed weather information by city and date/time from a public weather source, extract key weather metrics, and store them in DynamoDB for historical analysis.
This example demonstrates how to build a weather data agent using the `strands` library that fetches weather details for a specified city and date/time from a public weather service like wttr.in. The agent sends an HTTP GET request to retrieve JSON weather data including temperature, humidity, wind speed, and general conditions. It then stores the extracted weather information in an AWS DynamoDB table named CityWeatherData in the us-west-2 region. The DynamoDB table uses City as the partition key and DateTime as the sort key, with other weather metrics saved as additional attributes. If the live data is unavailable or access is blocked, the agent simulates realistic weather data for testing purposes.
This approach enables automated weather data collection, storage, and retrieval, making it easy to build time-series weather analysis or alerting applications.
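The extract-and-store step can be sketched as follows. The JSON below imitates the shape of wttr.in's `?format=j1` response (field names follow that format but should be verified), and the DynamoDB write is left commented out so the sketch runs without AWS access:

```python
# Simulated wttr.in JSON (the real call is roughly
# GET https://wttr.in/New+York?format=j1). Field names follow that format.
weather_json = {
    "current_condition": [{
        "temp_C": "21",
        "humidity": "60",
        "windspeedKmph": "14",
        "weatherDesc": [{"value": "Partly cloudy"}],
    }]
}

def build_item(city: str, when: str, data: dict) -> dict:
    """Flatten the weather payload into a DynamoDB item keyed by City/DateTime."""
    current = data["current_condition"][0]
    return {
        "City": city,                      # partition key
        "DateTime": when,                  # sort key
        "TemperatureC": current["temp_C"],
        "Humidity": current["humidity"],
        "WindSpeedKmph": current["windspeedKmph"],
        "Conditions": current["weatherDesc"][0]["value"],
    }

item = build_item("New York", "2025-06-03T12:00:00", weather_json)
print(item)

# With AWS credentials configured, the agent's use_aws tool effectively does:
# import boto3
# table = boto3.resource("dynamodb", region_name="us-west-2").Table("CityWeatherData")
# table.put_item(Item=item)
```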
When you run this code, the agent fetches weather data for New York City from wttr.in via an HTTP GET request, extracts relevant weather details, and saves them into the DynamoDB table CityWeatherData using City and DateTime as keys, autonomously reasoning through the task with the `http_request` and `use_aws` tools.
Perform data transformations using pandas, such as adding computed columns and aggregating statistics.
This example demonstrates how to use an AI agent to automatically generate a Python script that performs common data analysis tasks using the pandas library. The prompt asks the agent to:
- Create a sample DataFrame with columns like 'Name', 'Age', and 'Salary'.
- Calculate a new 'Bonus' column as 10% of each salary.
- Filter the data to include only entries where age is above 30.
- Group the filtered data into age brackets (20s, 30s, 40s) and compute average Salary and Bonus for each group.
- Include inline comments and a detailed function docstring for clarity.
- Provide usage examples and list any required external libraries.
- Generate documentation explaining how the code works and how to run it.
The agent executes the script and returns the output, providing an end-to-end example of AI-assisted code generation for data processing tasks, complete with explanations and runnable examples.
When this prompt is executed, the agent uses the Python REPL tool to generate, run, and display the output of a full pandas-based script that creates and transforms a DataFrame, computes bonuses, filters by age, groups data into brackets, and calculates averages—while also adding comments, docstrings, usage examples, required libraries, and documentation as instructed.
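The script the agent generates for this prompt comes down to something like the following sketch (the sample data and column names are illustrative):

```python
import pandas as pd

def analyze(df: pd.DataFrame) -> pd.DataFrame:
    """Add a 10% Bonus column, keep rows with Age > 30, and average
    Salary/Bonus per age bracket (20s, 30s, 40s)."""
    df = df.copy()
    df["Bonus"] = df["Salary"] * 0.10                 # 10% of each salary
    over_30 = df[df["Age"] > 30]                      # filter: age above 30
    # Bracket ages: [20, 30) -> 20s, [30, 40) -> 30s, [40, 50) -> 40s.
    brackets = pd.cut(over_30["Age"], bins=[20, 30, 40, 50],
                      labels=["20s", "30s", "40s"], right=False)
    return over_30.groupby(brackets, observed=True)[["Salary", "Bonus"]].mean()

# Sample DataFrame with Name, Age, and Salary columns.
data = pd.DataFrame({
    "Name": ["Ann", "Ben", "Cara", "Dev", "Eli"],
    "Age": [25, 32, 38, 41, 29],
    "Salary": [50000, 65000, 72000, 90000, 58000],
})

summary = analyze(data)
print(summary)
```

Here Ben and Cara (ages 32 and 38) land in the 30s bracket and Dev (41) in the 40s; Ann and Eli are filtered out.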
Process large datasets using PySpark, perform transformations, and save results as Parquet files.
This example shows how an AI agent can generate a complete PySpark script to perform typical big data processing steps. The prompt instructs the agent to:
- Initialize a SparkSession to start the PySpark environment.
- Create a sample CSV file named `users.csv` containing user data columns like `id`, `name`, `age`, and `city`.
- Load this CSV file into a DataFrame.
- Transform the data by:
- Filtering users older than 25.
- Adding a new column classifying users as 'Adult' or 'Minor' based on age.
- Grouping by city to compute the average age per city.
- Save the final DataFrame as a Parquet file for efficient storage.
- Include detailed inline comments, a function docstring, usage examples, external dependencies, and documentation on how to run the code.
- Execute the script and return the output.
This use case highlights how AI can help quickly generate complex data engineering workflows in PySpark with clear documentation and runnable code, making it easier for developers and data engineers to get started or prototype quickly.
When executed, the agent generates and runs a PySpark script that creates a sample CSV, performs filtering, labeling, and grouping transformations, and saves the result as a Parquet file, along with comments, docstrings, usage example, and documentation.
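The generated script follows a standard PySpark shape; here is a hedged sketch. The age-classification helper is plain Python so the labeling logic is visible, and `main()` is not invoked automatically since running it requires a local PySpark installation:

```python
def classify(age: int) -> str:
    """Label used for the new column: 'Adult' at 18+, else 'Minor'."""
    return "Adult" if age >= 18 else "Minor"

def main():
    # Imported here so the sketch can be read without PySpark installed.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("users-demo").getOrCreate()

    # Sample rows standing in for the users.csv file the agent creates.
    df = spark.createDataFrame(
        [(1, "Ann", 34, "Austin"), (2, "Ben", 17, "Boston"),
         (3, "Cara", 29, "Austin")],
        ["id", "name", "age", "city"],
    )

    transformed = (
        df.filter(F.col("age") > 25)                      # users older than 25
          .withColumn("category",
                      F.when(F.col("age") >= 18, "Adult").otherwise("Minor"))
    )
    avg_age = transformed.groupBy("city").agg(F.avg("age").alias("avg_age"))

    # Save as Parquet for efficient columnar storage.
    transformed.write.mode("overwrite").parquet("users_transformed.parquet")
    avg_age.show()
    spark.stop()

# main()  # uncomment to run with a local Spark installation
```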
Train and evaluate machine learning models for tasks like customer churn prediction.
This example demonstrates how an AI agent can help automatically generate Python code for a full machine learning workflow — from data preprocessing to model training and evaluation — with detailed explanations and runnable output at each stage.
Step 1: Data Loading and Preprocessing for Customer Churn Prediction
Creates a synthetic customer dataset, handles missing values, encodes categorical features, normalizes numerical data, and splits into training and test sets with detailed commentary.
The AI generates code to create a sample customer churn CSV dataset, handle missing values, encode categorical variables, normalize numerical features, and split the data into training and test sets. It uses popular libraries like `pandas` and `scikit-learn` with clear inline comments and documentation.
Step 2: Training Multiple Machine Learning Models
Trains Random Forest, Gradient Boosting, and Logistic Regression models using 5-fold cross-validation, calculating and displaying key classification metrics for comparison.
Building on the preprocessed data, the agent writes code to train three different classifiers: Random Forest, Gradient Boosting, and Logistic Regression. It applies 5-fold cross-validation and calculates performance metrics (accuracy, precision, recall, F1 score) for each model. This step includes detailed metric summaries and example usage.
Step 3: Model Evaluation, Visualization, and Selection
Evaluates all trained models, visualizes ROC curves and confusion matrices, selects the best model based on F1 score
Finally, the AI produces code to visualize model performance via ROC curves and confusion matrices, compare models based on F1 scores, select the best-performing model, and save it to disk using `joblib`. Visualization is done with `matplotlib` and `seaborn`, and the code is documented for ease of use.
When executed, the agent builds a full churn prediction pipeline by generating Python code to preprocess data, train and evaluate multiple ML models with cross-validation, and finally visualize performance, select the best model, and save it to disk.
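A condensed version of the three steps might look like this sketch. Synthetic data from `make_classification` stands in for the preprocessed customer CSV, and the plotting and `joblib` persistence steps are omitted for brevity:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Step 1: synthetic stand-in for the preprocessed churn dataset.
X, y = make_classification(n_samples=400, n_features=10, n_informative=5,
                           random_state=0)

# Step 2: three candidate models scored with 5-fold cross-validated F1.
models = {
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression()),
}
scores = {name: cross_val_score(m, X, y, cv=5, scoring="f1").mean()
          for name, m in models.items()}

# Step 3: pick the best model by mean F1. The full example additionally plots
# ROC curves/confusion matrices and saves the winner with joblib.
best_name = max(scores, key=scores.get)
print(scores, "->", best_name)
```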
Deploy specialized agents for investment research, budget optimization, and financial planning.
This Python code creates a multi-agent financial assistant using the Strands SDK and Amazon Bedrock models, each tailored for a specific financial task:
- Specialized Agents
  - Investment Research Assistant: uses `us.amazon.nova-pro-v1:0` to provide insights into stocks, ETFs, and market trends.
  - Budget Optimizer Assistant: uses `us.amazon.nova-lite-v1:0` to help users track and improve monthly spending habits.
  - Financial Planner Assistant: uses `us.amazon.nova-micro-v1:0` to guide users through long-term financial planning.
  Each assistant is wrapped in a `@tool`-decorated function and initialized with a relevant system prompt to specialize the agent’s behavior.
- Orchestrator Agent
A main orchestrator agent uses a system prompt to route user queries to the most suitable assistant, based on the query’s intent. In this case, it identifies that the query involves budgeting, investing, and financial planning, and delegates the tasks accordingly.
- Natural User Interaction
A user simply asks:
"I'm 30 years old, earning $6,000/month. I want help managing my budget, investing, and building a plan for early retirement."
The orchestrator automatically distributes this complex query to the relevant specialized tools and returns an integrated, helpful response.
When executed, the orchestrator agent receives the user’s financial query and intelligently routes parts of it to three specialized assistants—investment research, budget optimization, and financial planning—each powered by a tailored Bedrock model, then aggregates their responses to provide comprehensive, expert financial advice.
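Structurally, the orchestrator pattern looks like the sketch below. It assumes the `strands-agents` package; the tool names and system prompts are illustrative, and nothing runs on import since every call hits Bedrock:

```python
def main():
    # Imported here so the sketch can be read without the SDK installed.
    from strands import Agent, tool

    @tool
    def investment_research(query: str) -> str:
        """Answer questions about stocks, ETFs, and market trends."""
        return str(Agent(model="us.amazon.nova-pro-v1:0",
                         system_prompt="You are an investment research assistant.")(query))

    @tool
    def budget_optimizer(query: str) -> str:
        """Help users track and improve monthly spending habits."""
        return str(Agent(model="us.amazon.nova-lite-v1:0",
                         system_prompt="You are a budget optimization assistant.")(query))

    @tool
    def financial_planner(query: str) -> str:
        """Guide users through long-term financial planning."""
        return str(Agent(model="us.amazon.nova-micro-v1:0",
                         system_prompt="You are a long-term financial planner.")(query))

    # The orchestrator routes each part of the query to a specialist tool.
    orchestrator = Agent(
        system_prompt=("Route each part of the user's query to the most "
                       "suitable specialist tool and combine the answers."),
        tools=[investment_research, budget_optimizer, financial_planner],
    )
    print(orchestrator(
        "I'm 30 years old, earning $6,000/month. I want help managing my "
        "budget, investing, and building a plan for early retirement."))

# main()  # uncomment to run (requires AWS credentials and Bedrock access)
```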
Call the Tools
1. Query AWS documentation (via an MCP server).
2. Generate architectural diagrams (via another MCP server).
This code uses the Strands SDK, Amazon Bedrock, and MCP (Model Context Protocol) to generate AWS architecture diagrams from natural language.
- Uses Claude 3.5 Haiku via Bedrock to understand and respond to user prompts.
- Connects to AWS MCP servers:
  - 📚 `aws-documentation-mcp-server` for querying AWS docs.
  - 🗺️ `aws-diagram-mcp-server` for generating diagrams.
- Sends a prompt like: "Create a diagram of a website that uses AWS Lambda for a static site hosted on S3."
- The agent returns the diagram file path, and the code renders it automatically.
This enables developers and architects to visually design AWS solutions just by asking questions, automating documentation and diagrams in one go.
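The wiring typically follows the pattern below. This is a sketch assuming the `strands-agents` and `mcp` packages plus `uv`; the server package names follow the awslabs MCP naming and the model ID is the cross-region Claude 3.5 Haiku inference profile, both of which should be verified against current docs. `main()` is not invoked automatically:

```python
def main():
    # Imported here so the sketch can be read without the packages installed.
    from mcp import StdioServerParameters, stdio_client
    from strands import Agent
    from strands.tools.mcp import MCPClient

    # Each MCP server runs as a subprocess speaking MCP over stdio.
    docs_client = MCPClient(lambda: stdio_client(StdioServerParameters(
        command="uvx", args=["awslabs.aws-documentation-mcp-server@latest"])))
    diagram_client = MCPClient(lambda: stdio_client(StdioServerParameters(
        command="uvx", args=["awslabs.aws-diagram-mcp-server@latest"])))

    with docs_client, diagram_client:
        # Expose both servers' tools to a single Bedrock-backed agent.
        tools = docs_client.list_tools_sync() + diagram_client.list_tools_sync()
        agent = Agent(model="us.anthropic.claude-3-5-haiku-20241022-v1:0",
                      tools=tools)
        agent("Create a diagram of a website that uses AWS Lambda "
              "for a static site hosted on S3.")

# main()  # uncomment to run (requires uv, AWS credentials, and Bedrock access)
```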
Generate the Architecture Diagram
AWS Strands Agents SDK offers a powerful yet accessible framework for building intelligent AI agents capable of complex reasoning and interaction with external systems. With its model-driven approach, support for various tools and models, and ease of integration, developers can rapidly prototype and deploy agents for a wide range of applications.
To explore more and get hands-on experience, visit the [Strands Agent Jupyter Notebook](https://github.com/aws-samples/sample-advanced-rag-using-bedrock-and-sagemaker/blob/main/Lab%205.%20Strands%20Agent/Strands.ipynb).
Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.