
Amazon EKS Auto Mode ENABLED - Build your super-powered cluster
Deploy a Fully Functional Amazon EKS Cluster with Auto Mode Using Terraform – See How It Simplifies Operations
- Compute: It creates new nodes when pods can't fit onto existing ones, and it identifies low-utilization nodes for deletion.
- Networking: It configures AWS load balancers for Kubernetes Service and Ingress resources to expose cluster apps to the internet.
- Storage: It creates EBS volumes to back Kubernetes storage resources.
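To make the compute point concrete, here is a minimal sketch (the names, image, and resource sizes are all illustrative, not from my repos): with Auto Mode, deploying an ordinary workload with resource requests is all it takes to get nodes provisioned for it.

```terraform
# Illustrative only: Auto Mode provisions nodes to fit these replicas
# and consolidates them again when utilization drops.
resource "kubernetes_deployment_v1" "demo" {
  metadata {
    name = "demo"
  }

  spec {
    replicas = 3

    selector {
      match_labels = { app = "demo" }
    }

    template {
      metadata {
        labels = { app = "demo" }
      }

      spec {
        container {
          name  = "web"
          image = "nginx:1.27"

          # The requests are what Auto Mode bin-packs against
          resources {
            requests = {
              cpu    = "500m"
              memory = "512Mi"
            }
          }
        }
      }
    }
  }
}
```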
#
# EKS Cluster using Auto Mode
module "eks" {
  # --- removed code that is the same ---

  cluster_compute_config = {
    enabled    = true
    node_pools = ["general-purpose"]
  }
}
#
# EKS Cluster
module "eks" {
  # --- removed code that is the same ---

  eks_managed_node_groups = {
    xyz_managed_nodes = {
      name                           = "managed-eks-nodes"
      ami_type                       = "AL2023_x86_64_STANDARD"
      use_latest_ami_release_version = true
      instance_type                  = var.instance_type

      min_size     = 1
      max_size     = 5
      desired_size = 3

      # Set up a custom launch template for the managed nodes
      # Note: these settings are the same as the defaults
      use_custom_launch_template = true
      create_launch_template     = true
    }
  }
}
Note: for my EKS Cluster code without Auto Mode, I am using the GitHub repos setheliot/xyz_infra_poc and setheliot/xyz_app_poc.

Although enabled = true is set on cluster_compute_config, it actually enables Auto Mode for not just compute, but for everything:
- compute
- networking (load balancing)
- storage

You cannot pick and choose among these capabilities when using the Terraform eks module; you get them all when setting enabled = true.

By specifying the general-purpose node pool you are letting EKS Auto Mode select the instance size and number of nodes for you. If, however, you need more control over such things, you can create custom node pools.
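For a rough idea of what a custom node pool involves: Auto Mode node pools are Karpenter-style NodePool custom resources that reference the built-in Auto Mode NodeClass. The sketch below applies one as a raw manifest; the pool name and the requirement key are assumptions on my part based on the EKS Auto Mode documentation, so check the docs before relying on them.

```terraform
# Sketch only: a custom EKS Auto Mode node pool applied as a raw manifest.
# The "eks.amazonaws.com/instance-category" key and values are assumptions.
resource "kubernetes_manifest" "custom_node_pool" {
  manifest = {
    apiVersion = "karpenter.sh/v1"
    kind       = "NodePool"
    metadata = {
      name = "custom-nodes"
    }
    spec = {
      template = {
        spec = {
          # Auto Mode supplies a built-in NodeClass named "default"
          nodeClassRef = {
            group = "eks.amazonaws.com"
            kind  = "NodeClass"
            name  = "default"
          }
          requirements = [
            {
              key      = "eks.amazonaws.com/instance-category"
              operator = "In"
              values   = ["c", "m", "r"]
            }
          ]
        }
      }
    }
  }
}
```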
#
# EBS Storage Class
resource "kubernetes_storage_class" "ebs" {
  metadata {
    name = "ebs-storage-class"
  }

  storage_provisioner = "ebs.csi.eks.amazonaws.com"
  reclaim_policy      = "Delete"
  volume_binding_mode = "WaitForFirstConsumer"

  parameters = {
    type      = "gp3"
    encrypted = "true"
  }
}

#
# EBS Persistent Volume Claim
resource "kubernetes_persistent_volume_claim_v1" "ebs_pvc" {
  metadata {
    name = local.ebs_claim_name
  }

  spec {
    access_modes = ["ReadWriteOnce"]

    resources {
      requests = {
        storage = "1Gi"
      }
    }

    storage_class_name = "ebs-storage-class"
  }

  wait_until_bound = false
}
# Retrieve the CSI driver policy
data "aws_iam_policy" "csi_policy" {
  name = "AmazonEBSCSIDriverPolicy"
}

# Attach the policy to the node IAM role
resource "aws_iam_role_policy_attachment" "csi_policy_attachment" {
  policy_arn = data.aws_iam_policy.csi_policy.arn
  role       = local.eks_node_iam_role_name
}

#
# EBS Storage Class
resource "kubernetes_storage_class" "ebs" {
  metadata {
    name = "ebs-storage-class"
  }

  storage_provisioner = "ebs.csi.aws.com"
  reclaim_policy      = "Delete"
  volume_binding_mode = "WaitForFirstConsumer"

  parameters = {
    type   = "gp3"
    fsType = "ext4"
  }
}

#
# EBS Persistent Volume Claim
resource "kubernetes_persistent_volume_claim_v1" "ebs_pvc" {
  metadata {
    name = "ebs-volume-claim"
  }

  spec {
    access_modes = ["ReadWriteOnce"]

    resources {
      requests = {
        storage = "1Gi"
      }
    }

    storage_class_name = "ebs-storage-class"
  }

  wait_until_bound = false
}
With Auto Mode enabled, there is no need to retrieve and attach the CSI driver policy (AmazonEBSCSIDriverPolicy). Auto Mode has its own CSI driver, and permissions are already set up.

Note: EKS purists may notice I took a shortcut when setting up CSI permissions in the "without Auto Mode" case. I should be using IRSA (IAM roles for service accounts), but am not. In the new repo "with Auto Mode enabled" I do indeed use IRSA when needed later (to give my application necessary permissions).

The StorageClass looks pretty similar in both cases. However, note the different storage_provisioner values:
- ebs.csi.eks.amazonaws.com with Auto Mode enabled
- ebs.csi.aws.com using the older CSI driver

The eks in the provisioner name shows that with Auto Mode enabled, the cluster is using a new CSI driver specific to Auto Mode.

The PersistentVolumeClaim setup is the same. Also, later, when configuring your pods, you would reference the PersistentVolumeClaim the same way to create the PersistentVolumes (here if you are curious).
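As a sketch of that later step, a pod could mount the claim like this (the pod name, image, and mount path are illustrative; the claim is referenced through the Terraform resource defined above):

```terraform
# Illustrative only: mounting the EBS-backed claim into a pod.
# Binding happens at first use because of WaitForFirstConsumer.
resource "kubernetes_pod_v1" "app_with_ebs" {
  metadata {
    name = "app-with-ebs"
  }

  spec {
    container {
      name  = "app"
      image = "nginx:1.27"

      volume_mount {
        name       = "data"
        mount_path = "/data"
      }
    }

    volume {
      name = "data"

      persistent_volume_claim {
        claim_name = kubernetes_persistent_volume_claim_v1.ebs_pvc.metadata[0].name
      }
    }
  }
}
```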
#
# Create policy to give EKS nodes necessary permissions to run the LBC
resource "aws_iam_policy" "alb_controller_custom" {
  name        = "AWSLoadBalancerControllerIAMPolicy"
  description = "IAM policy for AWS Load Balancer Controller"
  policy      = file("${path.module}/policies/iam_policy.json")
}
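The iam_policy.json vendored here is the policy document the AWS Load Balancer Controller project publishes with each release. As an alternative sketch, it could be fetched at apply time with the hashicorp/http provider; the version tag in the URL is illustrative, so pin whichever release you actually deploy.

```terraform
# Sketch: fetch the published LBC policy instead of vendoring the file.
# Requires the hashicorp/http provider; the version tag is an assumption.
data "http" "lbc_iam_policy" {
  url = "https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.2/docs/install/iam_policy.json"
}

resource "aws_iam_policy" "alb_controller_fetched" {
  name   = "AWSLoadBalancerControllerIAMPolicy"
  policy = data.http.lbc_iam_policy.response_body
}
```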
#
# AWS Load Balancer Controller

# Retrieve the LBC IAM policy
data "aws_iam_policy" "lbc_policy" {
  name = "AWSLoadBalancerControllerIAMPolicy"
}

# Attach the policy (existing or newly created) to the node IAM role
resource "aws_iam_role_policy_attachment" "alb_policy_node" {
  policy_arn = data.aws_iam_policy.lbc_policy.arn
  role       = local.eks_node_iam_role_name
}

#
# Create the K8s Service Account that will be used by Helm
resource "kubernetes_service_account" "alb_controller" {
  metadata {
    name      = "aws-load-balancer-controller"
    namespace = "kube-system"
  }
}

resource "helm_release" "aws_load_balancer_controller" {
  name       = "aws-load-balancer-controller"
  namespace  = "kube-system"
  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-load-balancer-controller"

  set {
    name  = "clusterName"
    value = local.cluster_name
  }

  set {
    name  = "serviceAccount.create"
    value = "false"
  }

  set {
    name  = "serviceAccount.name"
    value = kubernetes_service_account.alb_controller.metadata[0].name
  }

  set {
    name  = "region"
    value = local.region
  }

  set {
    name  = "vpcId"
    value = local.vpc_id
  }
}
# Kubernetes Ingress Resource for ALB via EKS Auto Mode
resource "kubernetes_ingress_v1" "ingress_alb" {
  metadata {
    name      = "${var.prefix_env}-ingress-alb"
    namespace = "default"
    annotations = {
      "alb.ingress.kubernetes.io/scheme" = "internet-facing"
    }
  }

  spec {
    ingress_class_name = "${var.prefix_env}-ingressclass-alb"

    rule {
      http {
        path {
          path      = "/"
          path_type = "Prefix"
          backend {
            service {
              name = kubernetes_service_v1.service_alb.metadata[0].name
              port {
                number = 8080
              }
            }
          }
        }
      }
    }
  }
}

# Kubernetes Service for the App
resource "kubernetes_service_v1" "service_alb" {
  metadata {
    name      = "${var.prefix_env}-service-alb"
    namespace = "default"
    labels = {
      app = var.app_name
    }
  }

  spec {
    selector = {
      app = var.app_name
    }
    port {
      port        = 8080
      target_port = 8080
    }
    type = "ClusterIP"
  }
}

resource "kubernetes_ingress_class_v1" "ingressclass_alb" {
  depends_on = [null_resource.apply_ingressclassparams_manifest]

  metadata {
    name = "${var.prefix_env}-ingressclass-alb"
  }

  spec {
    # Configures the IngressClass to use EKS Auto Mode
    controller = "eks.amazonaws.com/alb"
    parameters {
      api_group = "eks.amazonaws.com"
      kind      = "IngressClassParams"
      name      = "${var.prefix_env}-ingressclassparams-alb"
    }
  }
}

resource "null_resource" "apply_ingressclassparams_manifest" {
  provisioner "local-exec" {
    command = <<EOT
aws eks --region ${var.aws_region} update-kubeconfig --name ${var.cluster_name}
kubectl apply -f - <<EOF
apiVersion: eks.amazonaws.com/v1
kind: IngressClassParams
metadata:
  name: "${var.prefix_env}-ingressclassparams-alb"
spec:
  scheme: internet-facing
EOF
EOT
  }
}
# This is the IngressClass created when Helm installed the AWS LBC
locals {
  ingress_class_name = "alb"
}

# Kubernetes Ingress Resource for ALB via AWS Load Balancer Controller
resource "kubernetes_ingress_v1" "xyz_ingress_alb" {
  metadata {
    name      = "xyz-ingress-alb-${var.env_name}"
    namespace = "default"
    annotations = {
      "alb.ingress.kubernetes.io/scheme"                   = "internet-facing"
      "alb.ingress.kubernetes.io/target-type"              = "ip"
      "alb.ingress.kubernetes.io/listen-ports"             = "[{\"HTTP\": 80}]"
      "alb.ingress.kubernetes.io/load-balancer-attributes" = "idle_timeout.timeout_seconds=60"
    }
  }

  spec {
    ingress_class_name = local.ingress_class_name

    rule {
      http {
        path {
          path      = "/"
          path_type = "Prefix"
          backend {
            service {
              name = kubernetes_service_v1.xyz_service_alb.metadata[0].name
              port {
                number = 8080
              }
            }
          }
        }
      }
    }
  }
}

# Kubernetes Service for the App
resource "kubernetes_service_v1" "xyz_service_alb" {
  metadata {
    name      = "xyz-service-alb-${var.env_name}"
    namespace = "default"
    labels = {
      app = var.app_name
    }
  }

  spec {
    selector = {
      app = var.app_name
    }
    port {
      port        = 8080
      target_port = 8080
    }
    type = "ClusterIP"
  }
}
With Auto Mode enabled, we create our own IngressClass. If you look at the "without Auto Mode" code there is no such resource, but you can see it referenced:

ingress_class_name = local.ingress_class_name

The IngressClass referenced there was automatically created by Helm when installing the AWS Load Balancer Controller (LBC).

IngressClassParams is optional when using the LBC, but mandatory for Auto Mode:
- IngressClassParams is not a standard Kubernetes resource; it is a custom resource managed inside the cluster.
- Terraform does not have a built-in resource type to create it, so we must use a workaround: executing the AWS CLI and kubectl via Terraform to apply it.

The controller value also differs between the two IngressClass resources. With Auto Mode enabled, you can see in the code above that we use:

controller = "eks.amazonaws.com/alb"

whereas the IngressClass that was created by Helm looks like this:

controller = "ingress.k8s.aws/alb"

One last note on permissions: even with Auto Mode, when an application needs access to AWS resources such as Secrets, it still requires setting up an OIDC provider and configuring roles properly.
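As an aside on the IngressClassParams workaround: the same custom resource could instead be applied with the kubernetes_manifest resource, avoiding the shell-out, at the cost of requiring the cluster to be reachable at plan time (a known constraint of that resource type). A minimal sketch:

```terraform
# Sketch: apply IngressClassParams without local-exec.
# kubernetes_manifest needs cluster connectivity at plan time,
# so it cannot be created in the same apply that creates the cluster.
resource "kubernetes_manifest" "ingressclassparams" {
  manifest = {
    apiVersion = "eks.amazonaws.com/v1"
    kind       = "IngressClassParams"
    metadata = {
      name = "${var.prefix_env}-ingressclassparams-alb"
    }
    spec = {
      scheme = "internet-facing"
    }
  }
}
```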