
Secure Deployment Strategies in Amazon EKS with Azure DevOps

Build and Deploy containerized applications on Amazon EKS using Azure DevOps

Abhishek Nanda
Amazon Employee
Published Mar 12, 2025
This article is written in collaboration with Vijay Kamath and Jayaprakash Alawala

Introduction

As enterprises explore multi-cloud DevOps strategies, managing development workflows across different cloud providers presents significant challenges. Many organizations have invested in Azure DevOps and are now seeking to extend their proven pipelines to the Amazon Elastic Kubernetes Service (Amazon EKS) ecosystem. However, a comprehensive approach to securely connecting Azure DevOps with Amazon EKS has been lacking. This gap poses a critical challenge for enterprises looking to streamline software delivery, improve collaboration, and accelerate time-to-market by leveraging the scalability and flexibility of AWS's Kubernetes platform.

What is Azure DevOps?

Azure DevOps supports a collaborative culture and set of processes that bring together developers, project managers, and contributors to develop software. It allows organizations to create and improve products at a faster pace than they can with traditional software development approaches. Azure DevOps provides integrated features that you can access through your web browser or IDE client. You can use all the services included with Azure DevOps or choose just what you need to complement your existing workflows.

What is Amazon EKS?

Amazon EKS is a fully managed Kubernetes service that reduces the operational overhead of running Kubernetes, automatically updating, patching, and operating the control plane. EKS provides high availability and scalability, with a highly resilient and fault-tolerant control plane across multiple AWS Availability Zones. By integrating natively with other AWS services, EKS simplifies security management and enables organizations to leverage the full breadth of the AWS ecosystem. EKS offers an easier path for organizations already using AWS to adopt Kubernetes, without the need to manage the entire Kubernetes infrastructure themselves.

Benefits of Integrating Azure DevOps with Amazon EKS

  • Increased Flexibility and Resilience
    • A multi-cloud strategy helps you choose the best-fit services for your specific needs, avoiding lock-in to a single provider. This approach improves your operational resilience by reducing dependency on one cloud platform.
  • Accelerated Development Cycles
    • Integrate DevOps workflows across cloud providers to streamline your software delivery process. By automating deployments and connecting development directly with infrastructure provisioning, you can significantly reduce time-to-market for new features and updates.
  • Enhanced Collaboration
    • Implement DevOps practices consistently across cloud environments to foster better collaboration between development and operations teams. This shared understanding helps break down silos and drive more efficient product development.

Solution Overview

You'll learn how to:
  • Configure secure authentication between Azure DevOps and AWS using OpenID Connect (OIDC)
  • Set up automated container image building and pushing to Amazon Elastic Container Registry (Amazon ECR)
  • Deploy applications to Amazon EKS using Azure DevOps pipelines
  • Implement monitoring and observability for your deployments

Prerequisites

Before you begin, make sure you have:
  • An AWS account with administrator access
  • An Azure DevOps organization and project
  • The AWS Toolkit for Azure DevOps (version 1.15 or later) installed
  • Basic familiarity with Kubernetes concepts

Cost

This solution uses several AWS services that incur costs:
  • Amazon EKS cluster running costs
  • Amazon ECR storage and data transfer
  • AWS Identity and Access Management (IAM) (no additional cost)

Solution Architecture

Solution architecture diagram
Here's the pipeline flow explained in sequential steps:
  1. Development Phase:
    • Developer pushes code to Git repository
    • Azure DevOps Pipeline is triggered automatically
  2. Build & Integration between Azure DevOps and Amazon EKS:
    • Pipeline executes through Azure DevOps
    • Establishes connection to AWS via Service Connection
    • Authenticates using IAM role configured for Azure DevOps
  3. Container Management:
    • Pipeline pushes the built image to Amazon ECR
    • Image is tagged and stored in registry
  4. Deployment Phase:
    • Pipeline triggers deployment to the EKS cluster
    • Amazon EKS control plane manages orchestration
    • Container images for pods are pulled from Amazon ECR
    • Application is deployed across multiple pods for high availability
This architecture demonstrates a seamless integration between Azure DevOps and AWS services, enabling automated deployment while maintaining security through proper IAM roles and AWS Service Connections.

Configure Azure DevOps for Amazon EKS integration

Azure DevOps integration with Amazon EKS requires setting up AWS IAM roles, Azure service connections, and proper kubectl configurations to enable pipeline deployments. The connection between Azure DevOps and Amazon EKS is established through AWS IAM roles, allowing Azure DevOps Pipelines to interact with the Kubernetes API server. This integration leverages EKS access entries (or the aws-auth ConfigMap) for RBAC management and requires specific IAM permissions to handle cluster operations, while the Azure DevOps service connection maintains the authentication context for pipeline execution.
High-level overview
Note: Since this post uses IAM as part of the solution, it requires at minimum the iam:CreateRole, iam:ListOpenIDConnectProviders, and iam:CreateOpenIDConnectProvider permissions. In most cases you would also attach permissions policies to the role, but that is not needed in this example.
  1. Create a pipeline (using YAML) in Azure DevOps and gather the organization GUID.
  2. Configure an Identity provider in AWS for OIDC federation.
  3. Create an IAM role in AWS that can be assumed from the Identity provider.
  4. Run the Azure DevOps pipeline to confirm successful federation.
  5. Create a service connection to connect the Azure DevOps pipeline and AWS
Prerequisites
  • An AWS account with sufficient permissions to create IAM Identity providers and IAM role and policies
  • An Azure DevOps project with access to configure service connections (authenticated connections between Azure Pipelines and external or remote services)
  • The AWS Toolkit for Azure DevOps version 1.15+ installed for that project, see AWS Toolkit for Azure DevOps in the Visual Studio Marketplace for installation instructions.
For the configuration of the identity provider in AWS, we will need the organization GUID from Azure DevOps. First, we need to create an AWS service connection that will reference an IAM role named azdo-federation, which we will create later.
Create a service connection
Service connections are authenticated connections between Azure Pipelines and external or remote services (AWS in this case) that you use to execute tasks in a job. You can read more in the Azure DevOps documentation on service connections.
From your Azure DevOps project settings:
  1. Log into your Azure DevOps Portal and create an Organization if not already available
  2. Select + New project, set the Name to eks-devops, and set Visibility to Private
  3. Click on the newly created project eks-devops and Go to Project Settings option on the bottom left of the screen
  4. Under Pipelines, select Service Connections.
  5. Click on Create service connection.
  6. Choose AWS, select Next.
  7. In Role to assume, enter the role ARN. For our example we will use a role named azdo-federation; to build the ARN, replace your AWS account ID in the following: arn:aws:iam::<<accountid>>:role/azdo-federation.
  8. Select the Use OIDC (optional) checkbox.
  9. For Service Connection Name use aws-oidc-federation.
  10. Click on Save.
Service connection configuration
Create a pipeline (using YAML) in Azure DevOps and gather the organization GUID
Now let's obtain the organization GUID from Azure DevOps by running the pipeline.
Create a Repo for the project eks-devops
  1. Go to the project eks-devops and select Repos
  2. Under Initialize main branch with a README or gitignore, click Initialize.
  3. This will add a README file to the newly created repo named eks-devops.
From your Azure DevOps project pipelines:
  • Go to the project eks-devops and select Pipelines
  • Click on Create pipeline.
  • Choose Azure Repos Git and select the repo eks-devops
  • Select Starter pipeline.
  • Copy and paste the following YAML (a sketch is shown after this list), adjusting awsCredentials for the service connection name, and the regionName if needed.
  • Click Save and run.
  • Leave the default commit message Set up CI with Azure Pipelines
  • After a few seconds, the pipeline will prompt you for permission to use the service connection
  • Select View, review the information, and grant permission using the Permit button.
  • After the pipeline runs, check the logs of the task named Running aws-cli get-caller-identity for a line that starts with OIDC Token generated. From this, you will have the issuer, audience, and subject values needed for the rest of the setup (Figure 3).
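The exact starter YAML is not reproduced here, but a minimal sketch looks like the following. It assumes the service connection is named aws-oidc-federation (created earlier) and uses us-east-1 as the Region, and it runs aws sts get-caller-identity through the AWSShellScript task from the AWS Toolkit so the OIDC token details appear in the logs:
```yaml
trigger:
- main

pool:
  vmImage: ubuntu-latest

steps:
- task: AWSShellScript@1
  displayName: 'Running aws-cli get-caller-identity'
  inputs:
    awsCredentials: 'aws-oidc-federation'   # name of the AWS service connection
    regionName: 'us-east-1'                 # adjust to your Region
    scriptType: 'inline'
    inlineScript: |
      aws sts get-caller-identity
```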
Figure 3: Logs of the task Running aws-cli get-caller-identity
With this information, we can create the identity provider in our AWS account.
Configure an identity provider in AWS for OIDC federation
In this step, we will use the issuer obtained from the logs.
From the AWS IAM console, follow these steps.
  • Under Access management, select Identity providers on the left menu.
  • Select Add Provider.
  • Choose OpenID Connect as the Provider type.
  • In the Provider URL use the issuer URL obtained from the previous section. Each tenant of Azure DevOps will have a unique OrganizationGUID.
  • In the Audience field, use api://AzureADTokenExchange. This is a fixed value for Azure DevOps. It was also found in the logs from the pipeline run.
  • Select Add Provider.
  • Take note of the ARN of the newly created provider; it will be needed in the next step.
IAM OIDC identity provider
Create an IAM role in AWS that can be assumed from the identity provider
An IAM role is an entity that allows you to assign specific permissions. To control who can use that role and under which conditions, we use a trust policy. To follow the least-privilege principle, we will add a condition in the trust policy so that only one specific service connection from Azure DevOps is able to use the IAM role that we are creating. Azure DevOps passes the service connection in the subject field as follows: "sc://{OrganizationName}/{ProjectName}/{ServiceConnectionName}".
Continue from the AWS IAM console.
In this step, we will use the subject that we got from the logs in our first run. The expected format is sc://{OrganizationName}/{ProjectName}/{ServiceConnectionName}.
Under Access management, select Roles.
  1. Select Create role.
  2. For Trusted entity type, select Web Identity.
  3. Select the right identity provider from the drop-down.
  4. In the Audience drop-down select api://AzureADTokenExchange.
  5. To limit this role to only one service connection, we will add a condition. Under Condition, select Add condition. For Key, select vstoken.dev.azure.com/{OrganizationGUID}:sub; for Condition, select StringEquals; for Value, use the subject obtained from the logs, in the format sc://{OrganizationName}/{ProjectName}/{ServiceConnectionName}.
  6. Select Next. You can leave the permissions empty, as our pipeline only validates our identity; in a real pipeline, this is where you would attach the needed policies.
  7. Select Next, enter azdo-federation as the Role name, and review the details. Here is the complete trust policy; replace the placeholder values with the correct IDs.
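The sketch below reconstructs the trust policy from the values gathered above; the account ID, organization GUID, and subject are placeholders to replace with your own:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<account-id>:oidc-provider/vstoken.dev.azure.com/<OrganizationGUID>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "vstoken.dev.azure.com/<OrganizationGUID>:aud": "api://AzureADTokenExchange",
          "vstoken.dev.azure.com/<OrganizationGUID>:sub": "sc://<OrganizationName>/<ProjectName>/<ServiceConnectionName>"
        }
      }
    }
  ]
}
```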
Run the Azure DevOps pipeline to confirm successful federation
Now that we have created the role, rerunning the pipeline we created earlier should result in a successful federation.

Set up automated container image building and pushing to Amazon Elastic Container Registry (Amazon ECR)

Building Docker Images with Azure DevOps
Creating a Docker image build pipeline in Azure DevOps is a crucial step in containerizing your applications. Let's walk through the process, focusing on key concepts and best practices.
First, consider your Docker build context. While your application code might be complex, your Dockerfile should be clean and efficient. Here's a minimal example to illustrate the key concepts:
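As a sketch, assuming a simple static web application served by nginx (the site/ directory is illustrative):
```dockerfile
# Minimal Dockerfile: one small base image and only the files the app needs
FROM nginx:alpine
COPY site/ /usr/share/nginx/html/
EXPOSE 80
```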
When setting up your build pipeline, triggers are your first consideration. Rather than triggering on every commit, consider implementing path filters to build only when relevant files change. Your pipeline should start with trigger configuration that defines when builds occur:
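A trigger block along these lines builds only when application code or the Dockerfile changes (the src/ and docs/ paths are assumptions about your repository layout):
```yaml
trigger:
  branches:
    include:
      - main
  paths:
    include:
      - src/*
      - Dockerfile
    exclude:
      - docs/*
      - README.md
```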
The heart of your pipeline is the Docker build task. While the command itself is straightforward, the power lies in how you structure your build arguments and leverage variables. Here's a streamlined example:
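One way to sketch this is with the standard Docker@2 task, using pipeline variables for the image name and tag plus a build argument (the names imageName and BUILD_VERSION are illustrative):
```yaml
variables:
  imageName: 'eks-devops-app'        # hypothetical repository/image name
  imageTag: '$(Build.BuildId)'       # tie the tag to the build for traceability

steps:
- task: Docker@2
  displayName: 'Build container image'
  inputs:
    command: 'build'
    repository: '$(imageName)'
    Dockerfile: '**/Dockerfile'
    tags: |
      $(imageTag)
    arguments: '--build-arg BUILD_VERSION=$(Build.BuildNumber)'
```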
Image tagging strategy deserves special attention. While 'latest' is common, implementing a more sophisticated versioning scheme provides better traceability. Consider using build ID, git commit hash, or semantic versioning. Your tags should tell a story about the image's origin and version.
Authentication to your container registry is crucial. Rather than storing long-lived access keys as pipeline secrets, the OIDC-federated AWS service connection configured earlier provides short-lived credentials for pushing to Amazon ECR, eliminating the need to manage static credentials.
Build optimization is another critical consideration. Leverage Docker's layer caching effectively by organizing your Dockerfile commands from least to most frequently changing. Additionally, consider implementing multi-stage builds for production images:
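A multi-stage sketch, assuming a Node.js front-end whose build output is served by nginx (package names and paths are illustrative):
```dockerfile
# Build stage: install dependencies and produce the static bundle
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: ship only the build output on a minimal web server
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
```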
Variables play a crucial role in making your pipeline flexible and maintainable. Instead of hardcoding values, use variable groups or library variables for elements like registry URLs, image names, and environment-specific settings. This approach makes your pipeline more maintainable and reusable across projects.
Finally, consider implementing quality gates in your pipeline. Before pushing an image, you might want to run security scans, test coverage checks, or validate the image size. These checks ensure that only high-quality images make it to your registry:
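One possible sketch uses the open-source Trivy scanner and a simple size check before the push step; the imageName and imageTag variables come from the earlier build example and are assumptions:
```yaml
steps:
- script: |
    # Install Trivy and fail the build on HIGH/CRITICAL vulnerabilities
    curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b ./bin
    ./bin/trivy image --exit-code 1 --severity HIGH,CRITICAL $(imageName):$(imageTag)
  displayName: 'Scan image for vulnerabilities'

- script: |
    # Simple quality gate: fail if the image exceeds ~500 MB
    size=$(docker image inspect $(imageName):$(imageTag) --format '{{.Size}}')
    if [ "$size" -gt 524288000 ]; then
      echo "Image is too large: $size bytes"
      exit 1
    fi
  displayName: 'Check image size'
```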
Remember that a well-structured pipeline is not just about building images; it's about creating a reliable, secure, and maintainable process for your containerization workflow. Regular reviews and updates of your pipeline configuration ensure it evolves with your project's needs and incorporates the latest best practices in container security and efficiency.
This foundational setup provides a robust starting point for your containerization journey with Azure DevOps and can be extended based on your specific requirements and complexity needs.
Orchestrating Secure Multi-Environment Deployments
The implementation of a release pipeline for Amazon EKS deployments begins with establishing a secure authentication mechanism using AWS IAM roles and OIDC federation, eliminating the need for static credentials while ensuring robust security. By leveraging OIDC tokens, the pipeline authenticates seamlessly with AWS services, allowing for dynamic access to different Amazon EKS clusters based on the environment context and deployment stage. The IAM role configurations can be customized for each environment (development, staging, production), enabling granular access control and maintaining the principle of least privilege throughout the deployment process. Environment-specific variables and configurations are managed through Azure DevOps variable groups, while approval gates and authentication policies ensure controlled progression between environments without requiring credential reconfiguration. This streamlined approach not only enhances security but also simplifies the maintenance of deployment pipelines across multiple environments, as the OIDC federation handles authentication automatically based on the pipeline's context and the predefined IAM role permissions.
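A hedged sketch of how such stages might be laid out, assuming per-environment variable groups (eks-dev, eks-prod), per-environment service connections, and Azure DevOps environments with approval checks configured in the portal; clusterName is assumed to come from the variable group:
```yaml
stages:
- stage: DeployDev
  variables:
  - group: eks-dev                    # hypothetical variable group (cluster name, namespace, ...)
  jobs:
  - deployment: deploy_dev
    environment: eks-dev              # approvals and checks are configured on the environment
    pool:
      vmImage: ubuntu-latest
    strategy:
      runOnce:
        deploy:
          steps:
          - checkout: self
          - task: AWSShellScript@1
            inputs:
              awsCredentials: 'aws-oidc-federation-dev'   # per-environment service connection
              regionName: 'us-east-1'
              scriptType: 'inline'
              inlineScript: |
                aws eks update-kubeconfig --name $(clusterName)
                kubectl apply -f k8s/

- stage: DeployProd
  dependsOn: DeployDev
  variables:
  - group: eks-prod
  jobs:
  - deployment: deploy_prod
    environment: eks-prod
    pool:
      vmImage: ubuntu-latest
    strategy:
      runOnce:
        deploy:
          steps:
          - checkout: self
          - task: AWSShellScript@1
            inputs:
              awsCredentials: 'aws-oidc-federation-prod'
              regionName: 'us-east-1'
              scriptType: 'inline'
              inlineScript: |
                aws eks update-kubeconfig --name $(clusterName)
                kubectl apply -f k8s/
```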
ECR Image Management
Implementing a robust container image management strategy in Amazon ECR begins with establishing a standardized tagging convention that incorporates build information, version numbers, and environment designations to ensure traceability and reproducibility. Leveraging ECR's native vulnerability scanning capabilities, combined with automated security gates in your CI/CD pipeline, ensures that only validated and secure images are promoted to production environments. The implementation of lifecycle policies automates the cleanup of unused or outdated images, optimizing storage costs while maintaining compliance with retention requirements for critical releases. Cross-region replication configurations enhance disaster recovery capabilities and improve global application deployment performance by maintaining synchronized image repositories across geographical regions. Finally, integrating immutable tags and implementing strict access controls through IAM roles and repository policies ensures the integrity of your container images while maintaining a clear audit trail of image modifications and deployments.
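For instance, a lifecycle policy like the sketch below (the build- tag prefix and retention counts are assumptions) keeps recent build-tagged images and expires untagged ones; it can be applied with aws ecr put-lifecycle-policy:
```json
{
  "rules": [
    {
      "rulePriority": 1,
      "description": "Keep only the 20 most recent build-tagged images",
      "selection": {
        "tagStatus": "tagged",
        "tagPrefixList": ["build-"],
        "countType": "imageCountMoreThan",
        "countNumber": 20
      },
      "action": { "type": "expire" }
    },
    {
      "rulePriority": 2,
      "description": "Expire untagged images after 14 days",
      "selection": {
        "tagStatus": "untagged",
        "countType": "sinceImagePushed",
        "countUnit": "days",
        "countNumber": 14
      },
      "action": { "type": "expire" }
    }
  ]
}
```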

Transitioning from Build to Deploy: Orchestrating Amazon EKS Deployments

After successfully building and pushing your container images to ECR, the next crucial phase is orchestrating their deployment to your EKS cluster.
EKS access management via access entries is crucial because it provides a secure way to map AWS IAM roles to Kubernetes RBAC permissions without modifying the aws-auth ConfigMap. This new method replaces the traditional aws-auth approach, offering better security, easier automation, and more granular access control for CI/CD pipelines and cross-account access.
Create EKS Access Entry:
This maps AWS IAM roles to Kubernetes RBAC groups, enabling role-based access control for AWS principals like service accounts and IAM users/roles.
This is the foundation of modern EKS access management, replacing the traditional aws-auth ConfigMap method with a more secure and scalable approach.
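A sketch of the access entry creation with the AWS CLI; the cluster name eks-devops-cluster and the Kubernetes group azdo-deployers are assumptions, while azdo-federation is the role created earlier:
```bash
# Map the pipeline's IAM role to a Kubernetes group via an EKS access entry
aws eks create-access-entry \
  --cluster-name eks-devops-cluster \
  --principal-arn arn:aws:iam::<account-id>:role/azdo-federation \
  --kubernetes-groups azdo-deployers \
  --region us-east-1
```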
Create ClusterRoleBinding
Once the Kubernetes groups are created, we need to link them (as specified in the access entries) to Kubernetes RBAC roles, defining what actions the mapped IAM roles can perform within the cluster.
This provides granular control over permissions and enables you to use standard Kubernetes RBAC mechanisms while maintaining AWS IAM integration.
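For example, a binding like the sketch below grants the assumed azdo-deployers group the built-in edit ClusterRole; in practice you would scope this down to the namespaces and verbs your pipeline actually needs, and apply it with kubectl apply -f:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: azdo-deployers-binding
subjects:
- kind: Group
  name: azdo-deployers                 # group referenced in the access entry
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                           # built-in ClusterRole; narrow as needed
  apiGroup: rbac.authorization.k8s.io
```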
Associate Access Policy
After mapping the groups to Kubernetes RBAC roles, we define the scope and level of access for IAM principals within EKS, linking AWS managed access policies to determine cluster-wide or namespace-specific permissions.
This acts as the final piece that ties together IAM roles, Kubernetes groups, and actual permissions, making the access control system complete and operational.
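A sketch with the AWS CLI, again assuming the cluster and role names used above; the AmazonEKSEditPolicy and cluster-wide scope are illustrative choices:
```bash
# Attach an EKS access policy to the IAM principal mapped by the access entry
aws eks associate-access-policy \
  --cluster-name eks-devops-cluster \
  --principal-arn arn:aws:iam::<account-id>:role/azdo-federation \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy \
  --access-scope type=cluster \
  --region us-east-1
```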
Now that we have given the role the necessary EKS access, we can proceed with the deployment.
Deployment Configuration
Let's walk through each section of the deployment pipeline.
Authentication & ECR Setup (First Task - displayName: "ECR login"):
The pipeline begins with AWS authentication and ECR login, using OIDC federation through Azure DevOps service connections to securely access AWS resources without storing credentials, and establishes connection to the ECR repository for container image access.
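A sketch of this task, assuming the aws-oidc-federation service connection, us-east-1, and a placeholder account ID:
```yaml
- task: AWSShellScript@1
  displayName: 'ECR login'
  inputs:
    awsCredentials: 'aws-oidc-federation'   # OIDC-federated AWS service connection
    regionName: 'us-east-1'
    scriptType: 'inline'
    inlineScript: |
      # Authenticate the Docker client to the private ECR registry
      aws ecr get-login-password --region us-east-1 | \
        docker login --username AWS --password-stdin <account-id>.dkr.ecr.us-east-1.amazonaws.com
```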
EKS Configuration & Deployment (Second Task - displayName: "Deploy to EKS"):
This section handles EKS cluster authentication, kubeconfig setup, and deployment creation, where it verifies AWS identity, connects to the EKS cluster, and prepares the Kubernetes manifests for a containerized web application with specific security contexts and resource limits.
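A sketch of this task; the cluster name, manifest path, and deployment name are assumptions:
```yaml
- task: AWSShellScript@1
  displayName: 'Deploy to EKS'
  inputs:
    awsCredentials: 'aws-oidc-federation'
    regionName: 'us-east-1'
    scriptType: 'inline'
    inlineScript: |
      # Verify the federated identity, point kubectl at the cluster, and roll out the manifests
      aws sts get-caller-identity
      aws eks update-kubeconfig --name eks-devops-cluster --region us-east-1
      kubectl apply -f k8s/deployment.yaml
      kubectl rollout status deployment/web-app
```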
The next section will cover monitoring and observability of your deployments, ensuring you can effectively track and manage your applications post-deployment.

Monitoring and Observability

  • Azure DevOps can be integrated with Grafana using the grafana-azuredevops plugin. To use this plugin, a Grafana Enterprise license is required.
  • We created an Amazon Managed Grafana workspace and set up SAML authentication integration using OIDC. We then enabled the Grafana Enterprise license; once the enterprise license is available, the Azure DevOps plugin can be installed and added as a data source in Grafana.
  • Azure DevOps Grafana plugin
  • We can create Grafana dashboards that query Azure DevOps pipeline runs. In the example below, we visualize all the runs in the configured Azure DevOps data source.
  • Grafana dashboard

Conclusion

The integration of Azure DevOps with Amazon EKS represents a powerful approach to modern cloud-native application deployment, combining the robust DevOps capabilities of Azure with the scalability and reliability of AWS's managed Kubernetes service. Through the implementation of OIDC federation and IAM roles, organizations can maintain high security standards while achieving seamless automation across multiple environments. This integration not only streamlines the deployment process but also establishes a foundation for sustainable, scalable, and secure container operations.
The journey from initial setup through image management to production deployment demonstrates how carefully planned DevOps practices can bridge multi-cloud environments effectively. By leveraging container best practices, implementing proper security controls, and maintaining robust monitoring solutions, organizations can build a reliable and efficient pipeline that supports their containerization initiatives across cloud boundaries.
Consider this implementation as a starting point for your multi-cloud DevOps journey, and continue to iterate and improve based on your specific requirements and emerging best practices in the cloud-native ecosystem.
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
