Enforcing Guardrails in Amazon Bedrock using IAM


Enforce AI safety with IAM policies that require guardrails on all Amazon Bedrock model invocations, creating systematic protection that doesn't depend on each developer remembering to add them.

Steven Warwick
Amazon Employee
Published Apr 22, 2025
Last Modified Apr 23, 2025
Organizations using Amazon Bedrock often need to ensure all generative AI requests include appropriate content guardrails. Rather than depending on developers to consistently implement these safeguards, AWS Identity and Access Management (IAM) policies can be configured to automatically reject any Bedrock requests that don't include guardrails. This approach creates a systematic enforcement mechanism at the permission level, ensuring unfiltered AI content cannot be generated within your environment. This post demonstrates how to implement IAM policies that require guardrails for all Bedrock model invocations. By establishing these policies, organizations can implement consistent safety standards across all AI applications without relying on individual compliance.

Policies to Enforce Guardrails

The following AWS IAM policy grants permission to invoke Amazon Bedrock foundation models through the InvokeModel and InvokeModelWithResponseStream API actions. At the same time, it explicitly denies those same actions when a request does not include a guardrail identifier, making guardrails mandatory for every model invocation.
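A minimal sketch of such a policy follows. The Resource ARNs are placeholders for your own values; the Null condition on the bedrock:GuardrailIdentifier context key is what enforces the requirement:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowModelInvocation",
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "arn:aws:bedrock:*::foundation-model/*"
    },
    {
      "Sid": "DenyInvocationWithoutGuardrail",
      "Effect": "Deny",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "*",
      "Condition": {
        "Null": {
          "bedrock:GuardrailIdentifier": "true"
        }
      }
    }
  ]
}
```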
  • Allow Statement (First Part)
    • This statement permits users to invoke Amazon Bedrock foundation models using both the standard (InvokeModel) and streaming (InvokeModelWithResponseStream) methods.
  • Deny Statement (Second Part)
    • This statement blocks any attempt to use Bedrock models without specifying a guardrail identifier.
    • The "Condition" block is the most important piece: the Null check on bedrock:GuardrailIdentifier denies the request whenever no guardrail is supplied.
A second policy is also required to ensure the role has permission to apply the guardrail itself, as shown below.
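A minimal sketch, assuming the guardrail lives in us-east-1 under a placeholder account ID; scope the Resource to specific guardrail ARNs if you want tighter control:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGuardrailUse",
      "Effect": "Allow",
      "Action": "bedrock:ApplyGuardrail",
      "Resource": "arn:aws:bedrock:us-east-1:111122223333:guardrail/*"
    }
  ]
}
```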

AWS Lambda Example

If this AWS Lambda function is invoked with a payload whose "guardrailIdentifier" parameter contains a valid Bedrock guardrail ID, the system produces the expected results. If the "guardrailIdentifier" parameter is missing, the call fails with an "AccessDeniedException".

Example Input
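A test payload along these lines exercises the happy path; the guardrail ID and version below are placeholders for your own values:

```json
{
  "prompt": "Tell me about Amazon Bedrock.",
  "guardrailIdentifier": "abc1234567xy",
  "guardrailVersion": "1"
}
```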

Lambda Function
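A minimal Python sketch of such a function, assuming boto3 and the Anthropic Claude messages request format (the model ID is a placeholder; adapt the body for your model):

```python
import json

import boto3

# Placeholder model ID for this sketch; use any model your role may invoke.
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

bedrock_runtime = boto3.client("bedrock-runtime")


def lambda_handler(event, context):
    # Request body in the Anthropic Messages format (model-specific).
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user", "content": event["prompt"]}],
    })

    kwargs = {
        "modelId": MODEL_ID,
        "contentType": "application/json",
        "accept": "application/json",
        "body": body,
    }

    # Forward the guardrail parameters only when the caller supplies them.
    # Without them, the IAM deny statement rejects the call before it ever
    # reaches the model.
    if "guardrailIdentifier" in event:
        kwargs["guardrailIdentifier"] = event["guardrailIdentifier"]
        kwargs["guardrailVersion"] = event.get("guardrailVersion", "DRAFT")

    response = bedrock_runtime.invoke_model(**kwargs)
    return {
        "statusCode": 200,
        "body": json.loads(response["body"].read()),
    }
```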

Result with guardrail specified
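With a valid guardrail ID and version in the payload, the deny condition is not triggered: the invocation succeeds and the function returns the model's response (the exact output depends on the model, the prompt, and the guardrail's filters).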

Result without guardrail specified
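Without the guardrail parameters, the explicit deny takes effect and boto3 raises an AccessDeniedException. The message looks roughly like the following; the ARNs are illustrative placeholders:

```
An error occurred (AccessDeniedException) when calling the InvokeModel operation:
User: arn:aws:sts::111122223333:assumed-role/my-lambda-role/my-function is not
authorized to perform: bedrock:InvokeModel on resource:
arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0
with an explicit deny in an identity-based policy
```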

Conclusion

The IAM policies shown here provide a simple yet effective way to enforce Amazon Bedrock guardrails across your organization. Instead of hoping developers remember to implement safety measures, these policies automatically block any AI requests that don't include guardrails. As demonstrated in the Lambda example, properly configured requests work normally, while those missing guardrails are immediately rejected. This approach ensures consistent AI safety standards without requiring individual compliance from every developer.
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
