
Building AI Agents with Strands: Part 3 - MCP Integration

Learn how to build and integrate MCP servers with the Strands Agents SDK. Connect your AI agents to external tools and services using the Model Context Protocol.

Dennis Traub
Amazon Employee
Published May 22, 2025
Last Modified May 27, 2025
Welcome to the third part of our series on building AI agents with Strands!
In Part 1: Creating Your First Agent, we created a simple agent that acts as a computer science expert, and in Part 2: Tool Integration, we enhanced it with custom tools, like a glossary of terms and the ability to directly access the web.
Now, we'll integrate the Model Context Protocol (MCP) to connect our agent with external specialized services. You'll learn how to expand your agent's capabilities by connecting to any MCP server with just a few lines of code.
Our use case: Connect your agent to a specialized quiz service - perhaps a platform provided by a university or a commercial vendor. For this tutorial, we'll build a simple quiz service to show the integration patterns, but in practice you'll often connect to an existing service from a third-party provider.

Prerequisites

  • Strands Agents SDK and tools installed (pip install strands-agents strands-agents-tools)

What is the Model Context Protocol (MCP)?

Before we dive into the code, let's briefly talk about MCP:
The Model Context Protocol (MCP) is an open protocol standardizing how AI agents connect to external services, like databases, APIs, legacy systems, or third-party tools. Instead of building custom integrations for each service, MCP provides one standard interface for all external connections - somewhat like REST, but for AI agents.
Manual MCP implementation involves a lot of work: managing handshakes, connection state, message parsing, schema validation, etc.
With Strands, on the other hand, it's really just a few lines of code:
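Here's a minimal sketch of what that can look like, assuming the quiz server from this tutorial is reachable over streamable HTTP at http://localhost:8080/mcp (double-check the import paths against your installed SDK version):

```python
from mcp.client.streamable_http import streamablehttp_client
from strands import Agent
from strands.tools.mcp import MCPClient

# Wrap the MCP transport in a Strands MCP client
quiz_service = MCPClient(lambda: streamablehttp_client("http://localhost:8080/mcp"))

with quiz_service:
    # Discover the tools the server exposes and hand them to the agent
    tools = quiz_service.list_tools_sync()
    agent = Agent(tools=tools)
    agent("What quizzes are available?")
```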
The Strands SDK handles all the protocol complexity, letting you focus on agent functionality rather than integration details.

Building Our Quiz MCP Server

To demonstrate MCP integration, we'll create a simple quiz server in a new file, called quiz_mcp_server.py:
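Below is a sketch of such a server, built with the FastMCP helper from the mcp Python package (installed as a dependency of the Strands SDK, or via pip install mcp). The quiz data and the tool names (list_quizzes, get_quiz, check_answer) are illustrative - adapt them to whatever content you want to serve:

```python
from mcp.server.fastmcp import FastMCP

# Illustrative quiz data - a real service would load this from a database or API
QUIZZES = {
    "data-structures": {
        "title": "Data Structures Basics",
        "questions": [
            {
                "question": "Which data structure uses FIFO (first-in, first-out) ordering?",
                "options": ["Stack", "Queue", "Tree", "Graph"],
                "answer": "Queue",
            },
        ],
    },
}

# Serve the tools over streamable HTTP on port 8080 (the default path is /mcp)
mcp = FastMCP("Quiz Service", host="0.0.0.0", port=8080)

@mcp.tool()
def list_quizzes() -> list[str]:
    """Return the identifiers of all available quizzes."""
    return list(QUIZZES.keys())

@mcp.tool()
def get_quiz(quiz_id: str) -> dict:
    """Return the questions and options for a quiz, without the answers."""
    quiz = QUIZZES.get(quiz_id)
    if quiz is None:
        return {"error": f"Unknown quiz: {quiz_id}"}
    return {
        "title": quiz["title"],
        "questions": [
            {"question": q["question"], "options": q["options"]}
            for q in quiz["questions"]
        ],
    }

@mcp.tool()
def check_answer(quiz_id: str, question_index: int, answer: str) -> dict:
    """Check an answer against the stored solution."""
    quiz = QUIZZES.get(quiz_id)
    if quiz is None or not (0 <= question_index < len(quiz["questions"])):
        return {"error": "Unknown quiz or question"}
    correct = quiz["questions"][question_index]["answer"]
    return {"correct": answer.strip().lower() == correct.lower(), "correct_answer": correct}

if __name__ == "__main__":
    mcp.run(transport="streamable-http")
```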

Running the MCP Server

Once you've created quiz_mcp_server.py with the code above, start it in its own terminal:
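```
# From the project directory, with the virtual environment active
python quiz_mcp_server.py
```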
The server should start up and report that it's listening on port 8080.
Important: Keep this terminal running! The MCP server needs to stay active for your agent to connect to it.

Connecting to the MCP Server

Now let's integrate our subject expert agent with the quiz service. Create subject_expert_with_mcp.py:
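Here's a sketch of what this file can contain. It reuses the subject-expert idea from the earlier parts and assumes the built-in http_request tool from strands-agents-tools; adjust the system prompt and tool list to match your code from Part 2:

```python
from mcp.client.streamable_http import streamablehttp_client
from strands import Agent
from strands.tools.mcp import MCPClient
from strands_tools import http_request  # built-in tool from strands-agents-tools

SYSTEM_PROMPT = """You are a computer science expert and tutor.
Answer questions clearly, and use the quiz tools to help the user practice."""

# Connect to the quiz MCP server running in the other terminal
quiz_service = MCPClient(lambda: streamablehttp_client("http://localhost:8080/mcp"))

def main():
    with quiz_service:
        # Discover the tools exposed by the quiz service
        mcp_tools = quiz_service.list_tools_sync()

        # Combine external MCP tools with local and built-in tools
        agent = Agent(
            system_prompt=SYSTEM_PROMPT,
            tools=[http_request, *mcp_tools],
        )

        print("Subject Expert with Quiz Service. Type 'exit' to quit.")
        while True:
            user_input = input("\nYou: ")
            if user_input.lower() in ("exit", "quit"):
                break
            agent(user_input)

if __name__ == "__main__":
    main()
```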

Connecting Your Agent

Open a second terminal and activate your virtual environment there too:
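For example (assuming your virtual environment lives in .venv):

```
# macOS/Linux - adjust the path if your virtual environment lives elsewhere
source .venv/bin/activate

# Start the agent
python subject_expert_with_mcp.py
```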
The agent should connect to the MCP server, discover its tools, and wait for your input.

Testing the Integration

Now you can interact with your agent, which seamlessly combines local tools with external services. For example:
  • Discovering available content: ask something like "What quizzes are available?"
  • Taking a quiz: "I'd like to take the data structures quiz."
  • Getting feedback: give your answer to a question and ask whether you got it right.
Once you've answered all questions, the agent will show you the results.
Now try experimenting with correct and incorrect answers. You could also ask the agent for more detailed explanations to help you learn the concepts.

Direct Tool Calling

While the agent automatically selects tools based on conversation, you can also call MCP tools directly:
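For example, a direct call through the Strands MCP client might look like this (the tool name comes from our sketch of the quiz server; verify the exact method signature against the SDK docs):

```python
with quiz_service:
    # Call a specific tool on the MCP server without going through the model
    result = quiz_service.call_tool_sync(
        tool_use_id="direct-quiz-lookup",  # arbitrary identifier for this call
        name="list_quizzes",
        arguments={},
    )
    print(result)
```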
This gives you direct control when needed, while still benefiting from the agent's natural language interface.

Understanding Strands' MCP Integration

This integration demonstrates several key advantages of the MCP approach:
  • Service Abstraction: Your agent doesn't need to know the internal implementation of the quiz service. It could be a simple JSON file, a complex database, or even an AI-powered agent itself - the MCP interface remains the same.
  • Technology Independence: The quiz service could be rewritten in Java, hosted anywhere on the internet, or replaced with a completely different provider - your agent code doesn't change.
  • Scalability: You can easily connect to multiple services, and even mix them with your own custom or built-in tools:
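Here's a sketch of what mixing several tool sources can look like - the flashcards URL and the study_tip tool are purely hypothetical:

```python
from mcp.client.streamable_http import streamablehttp_client
from strands import Agent, tool
from strands.tools.mcp import MCPClient

quiz_service = MCPClient(lambda: streamablehttp_client("http://localhost:8080/mcp"))
flashcards_service = MCPClient(lambda: streamablehttp_client("https://flashcards.example.com/mcp"))

@tool
def study_tip(topic: str) -> str:
    """A simple local custom tool that returns a study tip for a topic."""
    return f"Break '{topic}' into small chunks and quiz yourself often."

with quiz_service, flashcards_service:
    agent = Agent(
        tools=[
            study_tip,                              # local custom tool
            *quiz_service.list_tools_sync(),        # tools from the quiz server
            *flashcards_service.list_tools_sync(),  # tools from another MCP server
        ]
    )
```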

Production Considerations

Here are some important considerations when connecting to real MCP servers in production:
šŸ“Š Monitoring: Track service health and performance.
āš ļø Error Handling: Implement robust fallbacks for service unavailability.
šŸ” Authentication: Many commercial MCP servers may require API keys or OAuth.
Here is an example with a custom timeout, an authorization header, and a local fallback in case the MCP server is unavailable:
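This sketch assumes a hypothetical hosted quiz endpoint and API key, plus a placeholder offline_quiz tool to fall back to; the headers and timeout parameters are passed to the streamable HTTP transport:

```python
from datetime import timedelta

from mcp.client.streamable_http import streamablehttp_client
from strands import Agent
from strands.tools.mcp import MCPClient

from my_local_tools import offline_quiz  # hypothetical local fallback tool

QUIZ_SERVICE_URL = "https://quiz.example.com/mcp"  # hypothetical hosted endpoint

quiz_service = MCPClient(
    lambda: streamablehttp_client(
        QUIZ_SERVICE_URL,
        headers={"Authorization": "Bearer YOUR_API_KEY"},  # auth required by the provider
        timeout=timedelta(seconds=10),                     # custom connection timeout
    )
)

prompt = "Give me a short quiz on algorithms."

try:
    with quiz_service:
        tools = quiz_service.list_tools_sync()
        agent = Agent(tools=tools)
        agent(prompt)
except Exception as exc:
    # Fall back to a local tool if the external service is unavailable
    print(f"Quiz service unavailable ({exc}); falling back to local questions.")
    agent = Agent(tools=[offline_quiz])
    agent(prompt)
```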

What We've Learned

In this tutorial, we've:
  • āœ… Built a simple MCP server to demonstrate external integration
  • āœ… Connected our agent to the MCP server with minimal code
  • āœ… Accomplished seamless tool integration through natural language
  • āœ… Understood how the Strands Agents SDK abstracts MCP complexity
  • āœ… Explored advanced patterns for scaling, security, and error handling
Our subject expert agent can now use external services, opening up endless possibilities to integrate with all kinds of specialized platforms and tools.

Next Steps & Resources

In Part 4, we'll explore Alternative Model Providers, showing you how to set up local model deployment for development and testing.

Want to learn more about the Strands Agents SDK?

The official Strands Agents documentation and the earlier parts of this series are good places to deepen your understanding.
What kind of MCP servers would you like to use - or even build yourself? Share your ideas in the comments below!

šŸ’” Troubleshooting Tips

Connection Issues:
  • Ensure the MCP server is running before starting your agent
  • Verify the URL includes the /mcp path: http://localhost:8080/mcp
  • Check firewall settings if running on different machines
Service Discovery Problems:
  • Restart both server and client if tools aren't discovered
  • Check the MCP server terminal for error messages
Virtual Environment Issues:
  • Make sure both terminals have the virtual environment activated before running the server and client
  • If you see import errors, verify that strands-agents and strands-agents-tools are installed in your active environment with pip list | grep strands

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
