
Building Safe AI Agents: Integrating Amazon Bedrock Guardrails with CrewAI
Implement multi-layered protection for your AI agents using Amazon Bedrock Guardrails at every level - from initial queries and tool execution to final output
zeekg
Amazon Employee
Published Jun 8, 2025
As organizations increasingly adopt multi-agent AI systems to automate complex workflows, ensuring these agents operate safely and within policy boundaries becomes critical. A single AI agent making an inappropriate recommendation or exposing sensitive information can have serious business consequences. This is particularly challenging when agents use multiple tools and interact with external data sources. That's where guardrails come in.
Amazon Bedrock Guardrails helps you build safe AI applications by providing consistent safety controls across all types of AI models, whether they're hosted on Amazon Bedrock or elsewhere.
In this post, I'll show you how to integrate Amazon Bedrock Guardrails with CrewAI, a popular multi-agent framework, to create AI systems that are both powerful and safe. We'll walk through a practical implementation that demonstrates how to apply content filtering at every stage of an AI agent's workflow.
Traditional AI safety approaches often focus on filtering final outputs, but multi-agent systems can present unique challenges:
- Multiple Entry Points: Agents can receive input from users, APIs, web scraping, and file systems
- Tool Interactions: Each tool an agent uses can introduce new risks
- Chain Reactions: One agent's output becomes another's input, potentially amplifying problems
- Dynamic Workflows: Agents make real-time decisions about which tools to use
Consider a scenario where you have an AI agent that can research topics online, analyze documents, and provide recommendations. Without proper guardrails, this agent might:
- Scrape and propagate misinformation
- Generate inappropriate financial advice
- Expose sensitive information from documents
- Bypass content policies through tool chaining
Our solution implements defense in depth by applying Amazon Bedrock Guardrails at multiple layers:
- Input validation on the user's query before any agent work begins
- Tool-level filtering on each tool's output during execution
- Output validation on the final response before it reaches the user

Install dependencies
Note: This was tested using crewai version 0.70.1
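Assuming pip, the dependencies can be pinned to match (package names are the standard PyPI ones; `crewai-tools` is only needed if you use the bundled tools):

```shell
# Pin CrewAI to the version this walkthrough was tested with;
# boto3 provides the bedrock-runtime client used for the guardrail calls.
pip install "crewai==0.70.1" crewai-tools boto3
```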
This guide assumes you already know how to create a Bedrock guardrail; if you need help, follow this guide to create one. For this implementation, I've configured a guardrail with a denied-topic policy that blocks investment and financial advice content.
Now I'll create a function that calls the ApplyGuardrail API.
Test out the guardrail to make sure everything is working so far.
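A minimal sketch of that function, using the `bedrock-runtime` client's `apply_guardrail` operation; the guardrail ID and version below are placeholders you'd replace with your own:

```python
GUARDRAIL_ID = "your-guardrail-id"  # placeholder: your guardrail's ID
GUARDRAIL_VERSION = "1"             # or "DRAFT" while iterating

def check_with_guardrail(text, source="INPUT", client=None):
    """Run text through the ApplyGuardrail API.

    Returns (blocked, message): blocked is True when the guardrail
    intervened; message is the guardrail's (possibly masked) output.
    """
    if client is None:
        import boto3  # deferred so a stub client can be injected in tests
        client = boto3.client("bedrock-runtime")
    response = client.apply_guardrail(
        guardrailIdentifier=GUARDRAIL_ID,
        guardrailVersion=GUARDRAIL_VERSION,
        source=source,  # "INPUT" for user queries, "OUTPUT" for model responses
        content=[{"text": {"text": text}}],
    )
    blocked = response["action"] == "GUARDRAIL_INTERVENED"
    outputs = response.get("outputs", [])
    message = outputs[0]["text"] if outputs else text
    return blocked, message
```

With the denied-topic policy above, a query like "Should I invest in tech stocks?" should come back blocked, while an innocuous travel question should not.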
Next, I'll wrap standard CrewAI tools with guardrail protection:
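One framework-agnostic way to do that, since CrewAI tool internals vary between versions, is to wrap the tool's callable. Here `guardrail_check` is assumed to be any function returning a `(blocked, message)` pair, such as a wrapper around the ApplyGuardrail API:

```python
def guard_tool(tool_fn, guardrail_check, blocked_message="[Blocked by guardrails]"):
    """Wrap a tool callable so its output is screened before the agent sees it."""
    def wrapped(*args, **kwargs):
        result = tool_fn(*args, **kwargs)
        blocked, _ = guardrail_check(str(result))  # screen tool output, layer 2
        return blocked_message if blocked else result
    return wrapped
```

The same pattern applies to any tool the agent can call: web search, file readers, scrapers.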
Now create the main agent workflow with comprehensive guardrail integration:
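The orchestration can be sketched framework-agnostically; `agent_fn` stands in for however you kick off your crew (e.g. a `crew.kickoff(...)` call, whose exact signature depends on your CrewAI version), and `guardrail_check` is any function returning a `(blocked, message)` pair:

```python
def run_guarded(query, agent_fn, guardrail_check):
    """Defense-in-depth pipeline: validate the query, run the agent
    (whose tools are assumed to be individually wrapped), then validate
    the final answer before returning it."""
    # Layer 1: input validation before any agent work starts
    blocked, msg = guardrail_check(query)
    if blocked:
        return f"Query blocked: {msg}"
    # Layer 2 happens inside the agent, via guardrail-wrapped tools.
    answer = agent_fn(query)
    # Layer 3: output validation on the final response
    blocked, msg = guardrail_check(str(answer))
    if blocked:
        return f"Response blocked: {msg}"
    return answer
```

Even if a tool slips something past layer 2, or the model itself generates denied content, layer 3 still catches it before the user sees the response.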
Let's see how our protected agent handles different types of queries:
Safe Query Example:
Result: The agent successfully researches and provides a comprehensive travel guide with recommendations for accommodations, activities, and dining.
Blocked Query Example:
Result: The guardrail is triggered and blocks the response.
- Input validation prevents problematic queries from starting
- Tool-level filtering catches issues during execution
- Output validation ensures final responses are appropriate
- Users receive clear feedback about why content was blocked
- Logs provide audit trails for compliance
- Graceful degradation maintains user experience
- Guardrails can be updated without code changes
- Different guardrails for different use cases
- Fine-grained control over content policies
- Works with any CrewAI agent configuration
- Integrates with existing AWS infrastructure
- Scales with your application needs
- Start with broader policies and refine based on usage
- Test guardrails with representative data before deployment
- Consider different guardrails for different agent roles
- Always provide fallback behavior when guardrails fail
- Log guardrail actions for monitoring and debugging
- Implement graceful degradation for blocked content
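The fallback point deserves a concrete shape: decide explicitly whether a guardrail outage fails open or closed. A sketch, assuming a `check_fn` that returns `(blocked, message)`:

```python
def safe_check(check_fn, text, fail_open=False):
    """Call a guardrail check, but handle the check itself failing
    (throttling, network errors). Defaults to failing closed: if we
    can't verify the content, treat it as blocked."""
    try:
        return check_fn(text)
    except Exception:
        # Log the failure here for the audit trail before falling back.
        return (not fail_open, "Guardrail check unavailable")
```

Failing closed is the safer default for regulated content; fail open only where a blocked response is worse than an unscreened one.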
- Cache guardrail results for repeated content
- Use asynchronous processing for non-blocking validation
- Monitor guardrail API usage and costs
- Create comprehensive test suites with edge cases
- Test both positive and negative scenarios
- Validate guardrail behavior across different content types
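The caching suggestion can be as simple as memoizing the check on the exact text, assuming the check is deterministic for a given input and policy version (invalidate the cache whenever you update the guardrail):

```python
import functools

def cached(check_fn, maxsize=1024):
    """Memoize a guardrail check keyed on (text, source), so repeated
    content -- common when agents loop over the same documents -- only
    incurs one ApplyGuardrail call."""
    @functools.lru_cache(maxsize=maxsize)
    def _cached(text, source):
        return check_fn(text, source)
    return _cached
```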
Amazon Bedrock Guardrails pricing is based on the number of text units processed. To optimize costs, apply guardrails only where needed by using tags to mark the content that should be evaluated.
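With the Bedrock Converse API, selective evaluation uses `guardContent` blocks: only the wrapped portion counts as guardrail text units. A sketch (field names follow the Converse API; `trusted_context` is a hypothetical trusted preamble you don't need to re-screen):

```python
def tag_query_for_guardrail(trusted_context, user_query):
    """Build a Converse-style message in which only the untrusted user
    query is wrapped as guarded content, so the trusted preamble is not
    billed as guardrail text units."""
    return {
        "role": "user",
        "content": [
            {"text": trusted_context},                         # skipped by the guardrail
            {"guardContent": {"text": {"text": user_query}}},  # evaluated by the guardrail
        ],
    }
```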
Integrating Amazon Bedrock Guardrails with CrewAI creates a robust foundation for building safe, enterprise-ready AI agent systems. By implementing protection at multiple layers, you can harness the power of multi-agent AI while maintaining control over content and compliance requirements.
The approach demonstrated here can be extended to support various use cases, from customer service bots to research assistants to content generation systems. As AI agents become more prevalent in enterprise applications, implementing comprehensive safety measures like these will be essential for successful deployment.
To implement this solution in your environment:
- Set up AWS Bedrock Guardrails in your AWS account
- Configure appropriate policies for your use case
- Integrate the guardrail functions into your CrewAI implementation
- Test thoroughly with representative data
- Monitor and refine based on usage patterns
Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.