
Easily Consume AWS Secrets Manager Secrets From Your Amazon EKS Workloads

Leverage secret stores without complex code modifications.

Ryan Stebich
Amazon Employee
Published Oct 30, 2023
Last Modified Mar 28, 2024
Secrets management is a challenging but critical aspect of running secure and dynamic containerized applications at scale. To support this need to securely distribute secrets to running applications, Kubernetes provides native functionality to manage secrets in the form of Kubernetes Secrets. However, many customers choose to centralize the management of secrets outside of their Kubernetes clusters by using external secret stores such as AWS Secrets Manager to improve the security, management, and auditability of their secret usage.
Consuming secrets from external secret stores often requires modifications to your application code so it supports secret store specific API calls, allowing retrieval of secrets at application run time. This can increase the complexity of your application code base and potentially reduce the portability of containerized applications as they move between environments or even leverage different secret stores. However, when running applications on Amazon EKS, you have a more streamlined alternative that minimizes code changes. Specifically, you can leverage the AWS Secrets and Configuration Provider (ASCP) and the Kubernetes Secrets Store CSI Driver. Acting as a bridge between AWS Secrets Manager and your Kubernetes environment, ASCP mounts your application secrets directly into your pods as files within a mounted storage volume. This approach simplifies management and enhances the portability of your workloads, without requiring significant application-level code modifications to access secrets.
Building on the Amazon EKS cluster from part 1 of our series, this tutorial dives into setting up the AWS Secrets and Configuration Provider (ASCP) for the Kubernetes Secrets Store CSI Driver. Included in the cluster configuration for the previous tutorial is the OpenID Connect (OIDC) endpoint to be used by the ASCP IAM Roles for Service Accounts (IRSA). For part one of this series, see Building an Amazon EKS Cluster Preconfigured to Run High Traffic Microservices. Alternatively, to set up an existing cluster with the components required for this tutorial, use the instructions in Create an IAM OpenID Connect (OIDC) endpoint in the EKS official documentation.
In this tutorial, you will learn how to set up the AWS Secrets and Configuration Provider (ASCP) for the Kubernetes Secrets Store CSI Driver on your Amazon EKS cluster, and how to use AWS Secrets Manager to store your application secrets. You will leverage ASCP to expose secrets to your applications running on EKS, improving the security and portability of your workloads.
About
✅ AWS experience: 200 - Intermediate
⏱ Time to complete: 30 minutes
🧩 Prerequisites: AWS Account
📢 Feedback: Any feedback, issues, or just a 👍 / 👎?
⏰ Last Updated: 2023-10-30

Prerequisites

  • Install the latest version of kubectl. To check your version, run: kubectl version --client
  • Install the latest version of eksctl. To check your version, run: eksctl info
  • Install the latest version of Helm. To check your version, run: helm version
  • Install the latest version of the AWS CLI (v2). To check your version, run: aws --version
  • Configure an IAM OIDC provider on your existing EKS cluster.

Overview

This tutorial is part of a series on managing high traffic microservices platforms using Amazon EKS, and it's dedicated to managing application secrets with AWS Secrets and Configuration Provider (ASCP) for the Kubernetes Secrets Store CSI Driver. It shows not only how to consume an external secret from your EKS workloads, but also how to create a secret in AWS Secrets Manager, grant your workloads access to it through IAM Roles for Service Accounts, install the ASCP and the Secrets Store CSI Driver, and mount the secret into a sample workload.
Note that AWS Secrets Manager includes a 30-day free trial period that starts when you store your first secret. If you have already stored a secret and are past the 30-day mark, additional charges based on usage will apply.

Step 1: Set Environment Variables

Before interacting with your Amazon EKS cluster using Helm or other command-line tools, it's essential to define specific environment variables that encapsulate your cluster's details. These variables will be used in subsequent commands, ensuring that they target the correct cluster and resources.
  1. First, confirm that you are operating within the correct cluster context. This ensures that any subsequent commands are sent to the intended Kubernetes cluster. The sketch after this list shows how to verify the current context.
  2. Define the CLUSTER_NAME environment variable for your EKS cluster. If you are using your own existing EKS cluster, replace the sample value with its name.
  3. Define the CLUSTER_REGION environment variable for your EKS cluster. Replace the sample value with your cluster's Region.
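A minimal sketch of these commands; the cluster name and Region shown are illustrative, so replace them with your own values:

```bash
# Confirm the current kubectl context points at the intended cluster
kubectl config current-context

# Set the cluster name and Region (illustrative values)
export CLUSTER_NAME=managednodes-quickstart
export CLUSTER_REGION=us-east-1
```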
To validate the variables have been set properly, run the following commands. Verify the output matches your specific inputs.
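```bash
echo $CLUSTER_NAME
echo $CLUSTER_REGION
```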

Step 2: Create Secret in AWS Secrets Manager

Creating a secret in AWS Secrets Manager is the first step in securely managing sensitive information for your applications. Using the AWS CLI, you'll store a sample secret that will later be accessed by your Kubernetes cluster. This eliminates the need to hard-code sensitive information in your application, thereby enhancing security.
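A sketch of this step, assuming an illustrative secret name and value; the command stores the new secret's ARN in the SECRET_ARN variable referenced in the next step:

```bash
# The secret name and value below are illustrative; replace them with your own
export SECRET_ARN=$(aws secretsmanager create-secret \
  --name eksdemo-secret \
  --secret-string '{"username":"demo-user","password":"demo-password"}' \
  --region $CLUSTER_REGION \
  --query ARN --output text)
```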
The above command will store the Secret’s ARN in a variable for later use. To validate you successfully created the secret, run the following command to output the variable:
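```bash
echo $SECRET_ARN
```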
The output should be the full ARN of the secret you just created.

Step 3: Create IAM Policy for Accessing the Secret in AWS Secrets Manager

In this section, you'll use the AWS CLI to create an IAM policy that grants specific permissions for accessing the secret stored in AWS Secrets Manager. By using the $SECRET_ARN variable from the previous step, you'll specify which secret the IAM policy should apply to. This approach ensures that only the specified secret can be accessed by authorized entities within your Kubernetes cluster. We will associate this IAM Policy to a Kubernetes service account in the next step.
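A sketch of this step; the policy name is illustrative, the actions are the two that the ASCP needs to read a secret, and the resulting policy ARN is captured in a POLICY_ARN variable for the next step:

```bash
# Policy name is illustrative; the policy is scoped to the single secret in $SECRET_ARN
export POLICY_ARN=$(aws iam create-policy \
  --policy-name eksdemo-secret-policy \
  --policy-document "{
    \"Version\": \"2012-10-17\",
    \"Statement\": [{
      \"Effect\": \"Allow\",
      \"Action\": [\"secretsmanager:GetSecretValue\", \"secretsmanager:DescribeSecret\"],
      \"Resource\": [\"$SECRET_ARN\"]
    }]
  }" \
  --query Policy.Arn --output text)
```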
The above command will store the policy’s ARN in a variable for later use. To validate you successfully created the policy, run the following command to output the variable:
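```bash
echo $POLICY_ARN
```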
The output should be the ARN of the newly created IAM policy.

Step 4: Create IAM Role and Associate With Kubernetes Service Account

In this section, you'll use IAM Roles for service accounts (IRSA) to map your Kubernetes service accounts to AWS IAM roles, thereby enabling fine-grained permission management for your applications running on EKS. Using eksctl, you'll create and associate an AWS IAM Role with a specific Kubernetes service account within your EKS cluster. With the Secret Store CSI driver, you will apply IAM permissions at the application pod level, not the CSI driver pods. This ensures that only the specific application pods that are leveraging the IRSA associated Kubernetes service account will have permission to access the secret stored in AWS Secrets Manager. We will associate the IAM policy we created in the previous step to the newly created IAM role. Note that you must have an OpenID Connect (OIDC) endpoint associated with your cluster before you run these commands.
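A sketch of the eksctl command, using the service account name and namespace referenced below, the cluster variables from Step 1, and the policy ARN from Step 3:

```bash
eksctl create iamserviceaccount \
  --name eksdemo-secretmanager-sa \
  --namespace default \
  --cluster $CLUSTER_NAME \
  --region $CLUSTER_REGION \
  --attach-policy-arn $POLICY_ARN \
  --approve
```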
Upon completion, eksctl will report that it created the IAM role and the Kubernetes service account.
Ensure the "eksdemo-secretmanager-sa" service account is correctly set up in the "default" namespace in your cluster.
The output should show the service account with an eks.amazonaws.com/role-arn annotation referencing the IAM role that eksctl created.

Step 5: Install AWS Secrets and Configuration Provider and Secrets Store CSI Driver

In this section, you’ll install the AWS Secrets and Configuration Provider (ASCP) and Secrets Store CSI Driver using Helm, which sets up a secure bridge between AWS Secrets Manager and your Kubernetes cluster. This enables your cluster to access secrets stored in AWS Secrets Manager without requiring complex application-code changes. The ASCP and Secrets Store CSI Driver will each be installed as DaemonSets to ensure a copy of the driver and provider are running on each node in the cluster.
The following command will add the Secrets Store CSI Driver Helm chart repository to your local Helm index to allow for installation:
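For example (the repository alias is illustrative):

```bash
helm repo add secrets-store-csi-driver https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
```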
Helm should confirm that the repository has been added to your repositories.
The following command will add the AWS Secrets and Configuration Provider (ASCP) Helm chart repository to your local Helm index to allow for installation:
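Similarly (the repository alias is illustrative):

```bash
helm repo add aws-secrets-manager https://aws.github.io/secrets-store-csi-driver-provider-aws
```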
Again, Helm should confirm that the repository has been added.
To install the Secrets Store CSI Driver, run the following Helm command:
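A sketch of the install, using an illustrative release name, the kube-system namespace, and the repository alias added above:

```bash
# Release name "csi-secrets-store" is illustrative
helm install -n kube-system csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver
```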
Helm should report the release as deployed, and the chart notes include a command to verify that the driver has started.
To verify the Secrets Store CSI Driver has started, run the following command:
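Assuming the driver was installed into kube-system as in the sketch above:

```bash
kubectl --namespace=kube-system get pods -l "app=secrets-store-csi-driver"
```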
The output lists one driver pod per node; make sure each pod's STATUS is Running.
To install the AWS Secrets and Configuration Provider (ASCP), run the following Helm command:
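A sketch of the install, again with an illustrative release name and the repository alias added above:

```bash
# Release name "secrets-provider-aws" is illustrative
helm install -n kube-system secrets-provider-aws aws-secrets-manager/secrets-store-csi-driver-provider-aws
```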
Helm should again report the release as deployed.
You can also run the following Helm command to verify the installation has completed successfully:
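Again assuming the kube-system namespace:

```bash
helm list -n kube-system
```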
The output should list both the Secrets Store CSI Driver and the ASCP releases with a status of deployed.

Step 6: Create ASCP SecretProviderClass Resource

In this section, you’re defining the SecretProviderClass Kubernetes object, which sets the stage for seamless secrets management within your Kubernetes workloads. This resource acts as a set of instructions for the AWS Secrets and Configuration Provider (ASCP), specifying which secrets to fetch from AWS Secrets Manager and how to mount them into your pods. Note that the SecretProviderClass must be deployed in the same namespace as the workload that references it. To learn more, see SecretProviderClass documentation.
Create a Kubernetes manifest called eksdemo-spc.yaml and paste the following contents into it:
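A minimal sketch of the manifest; the SecretProviderClass name is illustrative, and objectName must match the secret created in Step 2 (here, the illustrative eksdemo-secret):

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: eksdemo-spc        # referenced by the workload's volume in Step 7
  namespace: default       # must match the namespace of the consuming workload
spec:
  provider: aws
  parameters:
    # objectName is the name (or full ARN) of the secret in AWS Secrets Manager
    objects: |
      - objectName: "eksdemo-secret"
        objectType: "secretsmanager"
```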
Apply the YAML manifest.
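```bash
kubectl apply -f eksdemo-spc.yaml
```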
To verify the SecretProviderClass was created successfully, run the following command:
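```bash
kubectl get secretproviderclass -n default
```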
The output should show the newly created SecretProviderClass.

Step 7: Deploy Sample Workload to Consume Secret

In this section, you'll deploy a sample workload to bridge your application and AWS Secrets Manager. By mounting the secret as a file on the workload's filesystem, you'll complete the end-to-end process of securely managing and accessing secrets within your Kubernetes environment. In the pod template, you will specify the Secrets Store CSI driver as the volume driver and then a path to mount your secret, just like you would a traditional volume mount. In this example, we will mount the secret at the /mnt/secrets-store location.
Create a Kubernetes manifest called eksdemo-app.yaml and paste the following contents into it:
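A minimal sketch of the manifest; the Deployment name, labels, and container image are illustrative, while the service account, mount path, and SecretProviderClass name come from the steps above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eksdemo-app
  namespace: default
  labels:
    app: eksdemo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: eksdemo-app
  template:
    metadata:
      labels:
        app: eksdemo-app
    spec:
      # Service account created in Step 4; it carries the IAM permissions for the secret
      serviceAccountName: eksdemo-secretmanager-sa
      containers:
        - name: eksdemo-app
          image: nginx                     # illustrative image
          volumeMounts:
            - name: secrets-store-inline
              mountPath: "/mnt/secrets-store"
              readOnly: true
      volumes:
        - name: secrets-store-inline
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              # Name of the SecretProviderClass created in Step 6
              secretProviderClass: "eksdemo-spc"
```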
Apply the YAML manifest.
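```bash
kubectl apply -f eksdemo-app.yaml
```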
To verify the pod was created successfully, run the following command:
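Using the label from the manifest sketch above:

```bash
kubectl get pods -l app=eksdemo-app
```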
Make sure the pod's STATUS is Running.

Step 8: Test the Secret

Finally, we’ll use kubectl to execute into the pod we just deployed and see if we can read the mounted secret.
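A sketch of that check, using the label from Step 7; the mounted file name matches the objectName from the SecretProviderClass:

```bash
# Read the secret file mounted by the Secrets Store CSI Driver
kubectl exec -it $(kubectl get pods -l app=eksdemo-app -o jsonpath='{.items[0].metadata.name}') \
  -- cat /mnt/secrets-store/eksdemo-secret
```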
The command should print the secret value we stored in AWS Secrets Manager earlier.

Clean Up

After finishing with this tutorial, for better resource management, you may want to delete the specific resources you created.
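A sketch of the cleanup, assuming the illustrative names used throughout this tutorial:

```bash
# Remove the sample workload and the SecretProviderClass
kubectl delete -f eksdemo-app.yaml
kubectl delete -f eksdemo-spc.yaml

# Uninstall the ASCP and the Secrets Store CSI Driver
helm uninstall -n kube-system secrets-provider-aws
helm uninstall -n kube-system csi-secrets-store

# Delete the IAM role and service account created by eksctl
eksctl delete iamserviceaccount --name eksdemo-secretmanager-sa --namespace default \
  --cluster $CLUSTER_NAME --region $CLUSTER_REGION

# Delete the IAM policy and the secret
aws iam delete-policy --policy-arn $POLICY_ARN
aws secretsmanager delete-secret --secret-id $SECRET_ARN --recovery-window-in-days 7
```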

Conclusion

Upon completion of this tutorial, you will have successfully set up an integration between AWS Secrets Manager and your Amazon EKS cluster. This integration allows you to centralize the management of your application secrets while easily consuming them from your workloads running on EKS, without complex code modifications. Security and governance of your secrets, as well as portability of your applications, are improved with minimal overhead. This example can easily be replicated for the various types of secrets your workloads may require, such as database credentials, API keys, and more.
To learn more about setting up and managing Amazon EKS for your workloads, check out Navigating Amazon EKS.
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
