
Kong on AWS EKS: Cloud-Native API Management Journey
Explore how Kong Gateway on AWS EKS streamlines cloud-native API management with features like CORS, TLS, and authentication in a scalable Kubernetes setup.
Published Apr 26, 2025
At its core, an API Gateway acts as a proxy, mediating communication between clients and backend services. It streamlines the process by handling various tasks in transit, including CORS validation, TLS termination, JWT authentication, header injection, session management, response transformation, rate-limiting, ACLs, and much more. This intermediary layer ensures seamless and secure interactions within a microservices architecture.
Developed by Kong Inc., Kong Gateway stands out as a lightweight and decentralized API Gateway solution. Built as a Lua application running inside NGINX and distributed with OpenResty, Kong Gateway offers modular extensibility through a rich ecosystem of plugins. Whether your API management needs are basic or complex, Kong Gateway provides a scalable and versatile solution.
Traditionally, Kong Gateway configurations, including routes, services, and plugins, were stored in a database. However, the landscape shifted with the advent of "DBLess" Kong Gateway, also known as the "declarative" method. In this mode, configuration management shifts entirely to code, typically saved as a declarative.yaml file. This paradigm shift brings about several advantages:
- Configuration becomes easily versionable, enabling seamless collaboration and tracking of changes over time.
- Eliminating the need for a separate database streamlines deployment and enhances agility in managing configurations.
- The move towards a code-centric approach aligns with the principles of Infrastructure as Code, promoting consistency and reproducibility.
- With configurations stored as code, there is no separate database to maintain, simplifying the overall maintenance process.
Now that we grasp the significance of Kong Gateway and the advantages of the DBLess approach, let's delve into the process of deploying Kong Gateway on an AWS EKS cluster. The setup consists of a handful of configuration files and scripts:
The declarative configuration file specifies the Kong services, routes, and associated plugins using the DBLess approach.
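As a sketch of what such a file can look like, here is a minimal declarative configuration; the service, route, and upstream names are purely illustrative, not the ones from this setup:

```yaml
# declarative.yaml — hypothetical DBLess configuration (names are illustrative)
_format_version: "3.0"

services:
  - name: orders-api                 # hypothetical upstream service
    url: http://orders.default.svc.cluster.local:8080
    routes:
      - name: orders-route
        paths:
          - /orders
    plugins:
      - name: rate-limiting          # per-service rate limiting
        config:
          minute: 60
          policy: local
      - name: cors
        config:
          origins:
            - https://example.com

plugins:                             # global plugin, applied to every request
  - name: correlation-id
    config:
      header_name: X-Request-Id
```

Because this file is plain YAML living in the repository, every change to a route or plugin goes through the normal review and version-control workflow.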
The Helm values file holds the chart configuration for the Kong deployment, including resource limits, ingress controller settings, environment variables, and autoscaling parameters.
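A trimmed-down values file for the official kong/kong Helm chart might look like the following; the image tag, ConfigMap name, and all resource figures are assumptions, not the values used in this setup:

```yaml
# values.yaml — sketch for the kong/kong Helm chart (figures are illustrative)
image:
  repository: kong
  tag: "3.6"

env:
  database: "off"                  # DBLess mode

dblessConfig:
  configMap: kong-declarative-config   # hypothetical ConfigMap holding declarative.yaml

ingressController:
  enabled: false                   # routing comes from the declarative file instead

proxy:
  type: NodePort                   # fronted by an AWS ALB via the Ingress resource

resources:
  requests:
    cpu: 250m
    memory: 256Mi
  limits:
    cpu: "1"
    memory: 1Gi

autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75
```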
The Vertical Pod Autoscaler (VPA) configuration adjusts resource requests and limits for pods based on their observed usage.
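A minimal VPA manifest for the Kong deployment could look like this; the target Deployment name and the resource bounds are assumptions:

```yaml
# vpa.yaml — hypothetical VPA for the Kong deployment
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: kong-vpa
  namespace: kong
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kong-kong               # hypothetical Helm release deployment name
  updatePolicy:
    updateMode: "Auto"
  resourcePolicy:
    containerPolicies:
      - containerName: "*"
        minAllowed:
          cpu: 100m
          memory: 128Mi
        maxAllowed:
          cpu: "2"
          memory: 2Gi
```

One caveat worth noting: a VPA in `Auto` mode and an HPA scaling on the same CPU metric can work against each other; a common compromise is `updateMode: "Off"`, which produces recommendations without applying them.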
The Ingress configuration specifies rules and annotations for the AWS Application Load Balancer (ALB).
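With the AWS Load Balancer Controller, the Ingress might be declared roughly as follows; the backend service name and the certificate ARN placeholder are illustrative:

```yaml
# ingress.yaml — hypothetical ALB Ingress in front of the Kong proxy
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kong-proxy
  namespace: kong
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    # Placeholder — use your own ACM certificate ARN here:
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:REGION:ACCOUNT:certificate/ID
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kong-kong-proxy   # hypothetical Helm release proxy service
                port:
                  number: 443
```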
A shell helper library provides functions for common tasks, such as printing colored text, checking whether a command exists, and defining an exit strategy.
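Such a helper library is typically just a few POSIX shell functions; this is a hypothetical sketch, not the script from this setup:

```shell
#!/bin/sh
# common.sh — hypothetical helper library sourced by the install/validate scripts

# Print a message with a colored level prefix (green for info, red for errors).
info()  { printf '\033[0;32m[INFO]\033[0m %s\n' "$*"; }
error() { printf '\033[0;31m[ERROR]\033[0m %s\n' "$*" >&2; }

# Return 0 if a command exists on PATH, non-zero otherwise.
command_exists() { command -v "$1" >/dev/null 2>&1; }

# Exit strategy: print an error and abort with the given (or default) code.
die() { error "$1"; exit "${2:-1}"; }

# Typical guard used by the other scripts:
#   command_exists helm || die "helm is required"
```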
An installation script installs kubectl and Helm, adds the Kong Helm repository, sets up AWS authentication, creates the namespaces, applies the VPA configuration, and deploys Kong using Helm.
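The installation steps can be sketched as below; the cluster name, region, namespace, release name, and file names are assumptions, and the script only prints its commands unless you set DRY_RUN=0:

```shell
#!/bin/sh
# install.sh — hypothetical deployment sketch. By default (DRY_RUN=1) it only
# prints the commands it would run; set DRY_RUN=0 for a real deployment.
set -eu

DRY_RUN="${DRY_RUN:-1}"
NAMESPACE="kong"      # assumption
RELEASE="kong"        # assumption

run() {
  if [ "$DRY_RUN" = 1 ]; then echo "DRY-RUN: $*"; else "$@"; fi
}

# 0. The real script also installs kubectl and Helm if missing (omitted here).

# 1. Point kubectl at the EKS cluster (assumes AWS CLI credentials are configured).
run aws eks update-kubeconfig --name my-eks-cluster --region eu-west-1

# 2. Register the Kong Helm repository.
run helm repo add kong https://charts.konghq.com
run helm repo update

# 3. Create the namespace and apply the VPA configuration.
#    (A real script might 'kubectl apply' a Namespace manifest to stay idempotent.)
run kubectl create namespace "$NAMESPACE"
run kubectl apply -f vpa.yaml -n "$NAMESPACE"

# 4. Install or upgrade Kong with the chart values.
run helm upgrade --install "$RELEASE" kong/kong \
  --namespace "$NAMESPACE" \
  --values values.yaml
```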
A validation script checks the setup, including a Helm diff, the VPA configuration, and the Ingress configuration (for the prd environment).
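A matching validation sketch, under the same assumptions (a real run additionally requires the helm-diff plugin):

```shell
#!/bin/sh
# validate.sh — hypothetical validation sketch; file and release names are
# assumptions. DRY_RUN=1 (the default) only prints the commands.
set -eu

DRY_RUN="${DRY_RUN:-1}"
ENVIRONMENT="${1:-prd}"

run() {
  if [ "$DRY_RUN" = 1 ]; then echo "DRY-RUN: $*"; else "$@"; fi
}

# 1. Show what a deploy would change, without applying it (helm-diff plugin).
run helm diff upgrade kong kong/kong --namespace kong --values values.yaml

# 2. Validate the VPA manifest against the API server without persisting it.
run kubectl apply --dry-run=server -f vpa.yaml -n kong

# 3. The Ingress is only validated for the production environment.
if [ "$ENVIRONMENT" = "prd" ]; then
  run kubectl apply --dry-run=server -f ingress.yaml -n kong
fi
```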
Finally, the CI/CD pipeline configuration validates and deploys Kong in the prd environment.
The pipeline consists of two jobs: a validation job that exercises the validation script, and a deployment job that performs the actual rollout.
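A hypothetical .gitlab-ci.yml wiring the two jobs together; the job names, image tag, script paths, and rules are all assumptions:

```yaml
# .gitlab-ci.yml — sketch of the validation and deployment jobs
stages:
  - validate
  - deploy

validate:kong:
  stage: validate
  image: alpine/k8s:1.29.2            # assumed image shipping kubectl + helm
  script:
    - ./scripts/validate.sh prd
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"

deploy:kong:prd:
  stage: deploy
  image: alpine/k8s:1.29.2
  script:
    - ./scripts/install.sh
  environment:
    name: prd
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      when: manual                    # gate production deploys behind a manual step
```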
This deployment setup is well-organized and follows best practices for running Kong on AWS EKS. Using GitLab CI/CD adds automation and keeps deployments consistent.
- Ensure that your deployment scripts and configurations align with your specific requirements and AWS EKS environment.
- Monitor Kong Gateway's performance, logs, and metrics in the AWS EKS cluster to identify and address any issues.
- Consider further optimizations or enhancements based on specific use cases or evolving requirements.
If you have any specific concerns or questions, feel free to ask!