Using AWS AppConfig with EKS without stress and code changes

A Practical Guide to Integrating AWS AppConfig with Kubernetes (EKS) for Application Configuration Management using EventBridge, Lambda, and External Secrets Operator

Viacheslav Romanov, AWS
Mikhail Ishenin, AWS
Published Apr 2, 2025

Task at Hand

Recently, a customer approached us with a request: they needed to give different team members (developers, testers, analysts) the ability to manage application configurations in Kubernetes while ensuring security, versioning, and automatic propagation of changes. They had run into the limitations of standard AWS solutions and wanted an architectural approach that could be implemented at scale. We studied the task and developed the architecture described in this article. It serves as prescriptive guidance: build the system this way, and it will meet all the stated requirements.
Many engineering teams need to delegate configuration management not only to DevOps engineers but also to other participants in the development process: backend and frontend developers, analysts, and testers. This is a logical trend: configurations are increasingly becoming part of application logic rather than exclusively infrastructure. However, nested structures, versioning, access control, and editing without direct cluster access all pose serious challenges.
AWS offers several solutions for storing configurations: Parameter Store and Secrets Manager. The first is simple to use but limited in size and not designed for complex nested structures. The second is better suited to storing sensitive information, such as API keys. As a result, teams start inventing their own solutions, from custom controllers to JSON files in S3 behind IAM access, but these are either inconvenient or unsafe.

AppConfig Advantages

For the task at hand, AWS AppConfig looks like a more mature and suitable solution. It stores configurations in JSON format, validates them before they are applied, manages versions, and keeps a change history. The editing interface in the AWS Console makes it accessible to users without deep infrastructure knowledge, which lets the DevOps team focus on building the platform while developers focus on application logic.
Several key advantages of AppConfig over Parameter Store are particularly worth highlighting:
  1. Phased Deployment - the ability to roll out new configurations gradually: first to 5% of servers, then to 25%, and so on. This helps detect problems early and prevents a complete system failure.
  2. Lambda Validation - you can attach a Lambda function that verifies configuration correctness before it is applied. This enables complex validation: schema checking, business rule compliance, and parameter dependencies (see the sketch after this list).
  3. Feature Flags and A/B Testing - built-in feature flag support makes it easy to enable or disable specific application features without redeployment. This is ideal for running experiments and gradually introducing new features.
  4. Monitoring and Automatic Rollback - AppConfig provides configuration deployment metrics and can automatically roll back a deployment when anomalies are detected.
  5. Detailed Access Control - fine-grained IAM policies let you give different teams different rights: some can only view configurations, others can edit specific profiles or only certain environments.
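To illustrate the second point, here is a minimal validator sketch. It assumes the standard AppConfig Lambda validator payload, in which the candidate configuration arrives base64-encoded in the content field; the specific checks (a required timeout_seconds within a range) are purely illustrative:

```python
import base64
import json


def handler(event, context):
    # AppConfig passes the candidate configuration base64-encoded in the
    # "content" field of the validator payload. Raising an exception
    # rejects the deployment; returning normally approves it.
    config = json.loads(base64.b64decode(event["content"]))

    # Illustrative business rules; replace with your own schema checks.
    if "timeout_seconds" not in config:
        raise ValueError("timeout_seconds is required")
    if not 0 < config["timeout_seconds"] <= 300:
        raise ValueError("timeout_seconds must be between 1 and 300")
```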
However, integration with Kubernetes isn't so straightforward. Unlike a Secret or ConfigMap, AppConfig cannot be connected to a pod directly: there is no volume mounting mechanism and no way to expose it as environment variables. Applications can reach AWS AppConfig only through the data plane API or through the AWS AppConfig agent, which in a container environment runs as a sidecar exposing a local HTTP server.
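As a point of reference, here is a minimal sketch of the agent-based approach; the application, environment, and profile names in the URL are placeholders, and the port is the agent's documented default:

```python
import json
import urllib.request

# The AppConfig agent sidecar serves configurations over a local HTTP
# endpoint (port 2772 by default). The names in this URL are placeholders.
AGENT_URL = (
    "http://localhost:2772"
    "/applications/my-app/environments/prod/configurations/my-profile"
)


def load_config() -> dict:
    # The agent handles polling, caching, and IAM authentication;
    # the application simply reads from localhost.
    with urllib.request.urlopen(AGENT_URL) as response:
        return json.loads(response.read())
```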

Integrating AWS AppConfig and Amazon EKS

We approached the task differently: instead of integrating AppConfig directly into the pod, we built an event-reactive infrastructure on top of it. The solution centers around EventBridge, Lambda, and External Secrets Operator. When a configuration changes in AppConfig, an event is emitted that EventBridge catches. A Lambda function then runs and updates the corresponding value in Secrets Manager. External Secrets Operator notices this update and synchronizes it into a Kubernetes Secret. Finally, Reloader, which monitors changes to the Secret, initiates a rolling update of the corresponding pods.
This approach provides several advantages. First, we use standard Kubernetes mechanisms for delivering configurations: Secret and ConfigMap. Second, the configuration automatically enters the pod without needing to access external APIs during runtime. Third, we maintain role isolation: developers edit AppConfig, infrastructure components update cluster state, and the application simply restarts with the updated configuration.
The implementation works as follows:
  • A developer or QA engineer edits the configuration in AppConfig;
  • The operator initiates a configuration deployment;
  • EventBridge captures the update event;
  • Lambda updates only the configuration hash in Secrets Manager (under the fixed key appconfig-version-hash);
  • External Secrets Operator syncs the updated value into a Kubernetes Secret;
  • Reloader notices the change and performs a rolling restart of the affected pods.
This approach separates the update signal (the hash) from the configuration data itself, allowing standard Kubernetes mechanisms to drive the restart while the configuration is fetched from the primary source.

Lambda and CDK Implementation Example

Here's an example of a Python Lambda function that receives an event from AppConfig, calculates a hash of the configuration, and saves it to Secrets Manager:
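A minimal sketch follows. It assumes the EventBridge event detail carries the AppConfig application, environment, and profile identifiers (the field names below are illustrative, so inspect a real event before relying on them) and that the target secret name is supplied via an environment variable:

```python
import hashlib
import json
import os

import boto3

# Illustrative secret name; the fixed key matches the article's
# appconfig-version-hash convention.
SECRET_ID = os.environ.get("SECRET_ID", "appconfig-version-hash")

appconfigdata = boto3.client("appconfigdata")
secretsmanager = boto3.client("secretsmanager")


def handler(event, context):
    # EventBridge events from AppConfig carry identifiers, not the
    # configuration itself, so fetch the deployed configuration through
    # the AppConfig data plane. The detail field names here are an
    # assumption; adjust them to the actual event shape.
    detail = event["detail"]
    session = appconfigdata.start_configuration_session(
        ApplicationIdentifier=detail["ApplicationId"],
        EnvironmentIdentifier=detail["EnvironmentId"],
        ConfigurationProfileIdentifier=detail["ConfigurationProfileId"],
    )
    response = appconfigdata.get_latest_configuration(
        ConfigurationToken=session["InitialConfigurationToken"]
    )
    configuration = response["Configuration"].read()

    # Store only a hash under the fixed key: the changed value is the
    # signal that drives External Secrets Operator and, in turn, Reloader.
    config_hash = hashlib.sha256(configuration).hexdigest()
    secretsmanager.put_secret_value(
        SecretId=SECRET_ID,
        SecretString=json.dumps({"appconfig-version-hash": config_hash}),
    )
    return {"hash": config_hash}
```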
This function does not pass the configuration itself to the cluster, only its hash, which is enough to trigger the update chain through External Secrets Operator and Reloader. It's important to understand that EventBridge events from AppConfig don't contain the configuration itself, so the function additionally calls the AppConfig API to obtain it. When the pod restarts, the application retrieves the configuration directly from AppConfig via an initContainer, a sidecar, or the AppConfig SDK. This separation of signal and data simplifies the architecture and improves security, since configurations aren't duplicated in different places.
We can also create all the necessary infrastructure using AWS CDK. Below is a CDK example that creates a Lambda function and binds an EventBridge trigger:
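Below is a minimal sketch in CDK for Python. It assumes the handler above lives in a local lambda/ directory; the broad event pattern (all events from aws.appconfig) should be narrowed to your application and configuration profile, and the wildcard IAM resources should likewise be scoped to real ARNs:

```python
from aws_cdk import (
    Duration,
    Stack,
    aws_events as events,
    aws_events_targets as targets,
    aws_iam as iam,
    aws_lambda as _lambda,
)
from constructs import Construct


class AppConfigHashStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Lambda that recalculates the configuration hash (the handler
        # shown above, expected in a local lambda/ directory).
        fn = _lambda.Function(
            self,
            "AppConfigHashFunction",
            runtime=_lambda.Runtime.PYTHON_3_12,
            handler="index.handler",
            code=_lambda.Code.from_asset("lambda"),
            timeout=Duration.seconds(30),
        )

        # Minimal rights: read from the AppConfig data plane and write the
        # secret. Narrow the resources to your actual ARNs in production.
        fn.add_to_role_policy(
            iam.PolicyStatement(
                actions=[
                    "appconfig:StartConfigurationSession",
                    "appconfig:GetLatestConfiguration",
                ],
                resources=["*"],
            )
        )
        fn.add_to_role_policy(
            iam.PolicyStatement(
                actions=["secretsmanager:PutSecretValue"],
                resources=["*"],
            )
        )

        # Subscribe the function to events from AppConfig.
        rule = events.Rule(
            self,
            "AppConfigDeploymentRule",
            event_pattern=events.EventPattern(source=["aws.appconfig"]),
        )
        rule.add_target(targets.LambdaFunction(fn))
```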
This CDK wrapper creates the Lambda, configures IAM access, and subscribes it to events from AppConfig. Specify the needed configuration profile in the event filter, and everything starts working automatically.
What's particularly valuable in this solution is that it provides predictability and control. All configuration changes can be tracked by versions in AppConfig. Change history is available, rollback is done with one click. Role separation makes the system secure: no one gets direct access to the prod cluster. Everything updates through predefined channels.
However, it's worth remembering that the solution isn't without nuances. The configuration update chain isn't fully reactive: changes can take anywhere from a few seconds to several minutes to propagate from AppConfig to the pods. This delay comes primarily from External Secrets Operator's periodic reconciliation schedule rather than real-time updates. By default, the operator checks for updates every 5 minutes (controlled by the --store-requeue-interval parameter).
Careful IAM policy configuration is also required: Lambda, External Secrets Operator, and the pods must all have properly restricted rights. Add monitoring as well: if a failure occurs somewhere in the chain, you need to understand quickly where and why.

Conclusion

From an engineering perspective, the architecture turned out to be flexible, scalable, and manageable. It suits scenarios where configurations change frequently but security, control, and automation must be maintained. This isn't an out-of-the-box solution; it needs to be configured, but the result is worth the effort.
In conclusion: AWS AppConfig combined with EventBridge, Lambda, External Secrets Operator, and Reloader makes it possible to build a mature configuration management process for applications in EKS. It not only reduces the load on the infrastructure team but also gives the business more flexibility. This isn't just a technical improvement; it's a step towards a true product platform where configurations become part of the dialogue between teams rather than a closed DevOps world.
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
