How to run Kubernetes on Amazon Lightsail

A brief step-by-step guide to running a Kubernetes cluster on Amazon Lightsail instances

Rio Astamal
Amazon Employee
Published Dec 22, 2023
Last Modified Apr 26, 2024
To deploy containerized applications on Amazon Lightsail, you can use the Amazon Lightsail container service. It provides a simple way to run containers without setting up clusters or managing servers. However, if you need more control over your cluster and more flexibility, you have the option to run your own container orchestration software such as Kubernetes. This post gives you a starting point for running your own Kubernetes cluster on Amazon Lightsail.
We will create a Kubernetes cluster using one control plane node and two worker nodes. We will deploy a small web app into the cluster. Each node will be hosted on a different availability zone for high availability. We will expose the app to the internet using Lightsail Load Balancer for secure and highly available access. The load balancer also provides a free SSL/TLS certificate for a custom domain. Below is a diagram showing an overview of the Kubernetes cluster that we will create.
Kubernetes cluster on Amazon Lightsail

Create nodes

For this Kubernetes cluster deployment, we will use the Amazon Lightsail $10 instance plan for each node. With the $10 plan, we get 2 GB of RAM, 2 vCPUs, and 60 GB of SSD storage per node.
Go to your Amazon Lightsail console and create 3 instances as described below. You may choose a different region than mine, but make sure to spread the nodes across different availability zones (AZ).
Configuration       Control plane       Worker 1             Worker 2
Name                al2023-kube-cp      al2023-kube-wrk1     al2023-kube-wrk2
Instance plan       $10                 $10                  $10
OS                  Amazon Linux 2023   Amazon Linux 2023    Amazon Linux 2023
Availability zone   us-east-1a          us-east-1b           us-east-1c
Networking (ports)  22 (SSH)            22 (SSH), 80 (HTTP)  22 (SSH), 80 (HTTP)
Kubernetes nodes
By default, Lightsail allows instances to communicate with each other over private IP addresses, so you do not need to set up any OS-level firewall. However, if you do run a firewall, refer to the Kubernetes documentation on the ports and protocols used by the cluster.
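For example, if you choose to run firewalld on the nodes (it is not enabled by default on these instances), a minimal sketch of opening the commonly required ports might look like the following; take the authoritative port list from the Kubernetes documentation, and note that 8472/udp is for flannel's VXLAN traffic:
# Control plane node
sudo firewall-cmd --permanent --add-port=6443/tcp --add-port=2379-2380/tcp \
    --add-port=10250/tcp --add-port=10257/tcp --add-port=10259/tcp --add-port=8472/udp
sudo firewall-cmd --reload

# Worker nodes
sudo firewall-cmd --permanent --add-port=10250/tcp --add-port=30000-32767/tcp --add-port=8472/udp
sudo firewall-cmd --reload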

Install Kubernetes (All nodes)

SSH into each Amazon Lightsail instance, then run the following series of commands to install Kubernetes and all required packages. The default user for Amazon Linux 2023 is ec2-user. To SSH into an instance, use the following command:
ssh ec2-user@INSTANCE_PUBLIC_IP
Based on the Kubernetes documentation, it is recommended to set SELinux to permissive mode. This is required to allow containers to access the host filesystem. You can leave SELinux in enforcing mode, but you may need additional configuration.
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
Amazon Linux 2023 is an RPM-based distribution. Add the Kubernetes yum repository from the official pkgs.k8s.io package repository. We're going to use Kubernetes version 1.28.
KUBERNETES_VERSION=1.28
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v${KUBERNETES_VERSION}/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v${KUBERNETES_VERSION}/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
Now install the Kubernetes tools kubelet, kubeadm, and kubectl, along with iproute-tc.
sudo yum install -y kubelet kubeadm kubectl iproute-tc --disableexcludes=kubernetes
Enable the kubelet systemd service so it starts automatically on boot.
sudo systemctl enable --now kubelet
We will not use Docker Engine; instead, we will install containerd as the container runtime. The containerd package is available in the official Amazon Linux 2023 repository.
sudo yum install -y containerd
Note: If you use another distribution, refer to the Docker documentation to install containerd; the package name there is containerd.io.
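For example, on Ubuntu or Debian the equivalent step would roughly be the following, assuming you have already added Docker's apt repository as described in their documentation:
sudo apt-get update
sudo apt-get install -y containerd.io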
Create the default configuration for the containerd process and write it to /etc/containerd/config.toml.
sudo mkdir -p /etc/containerd/
sudo rm -r /etc/containerd/config.toml >/dev/null 2>&1
containerd config default | sudo tee /etc/containerd/config.toml
Configure systemd as the cgroup driver for containerd.
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
As of Kubernetes version 1.28.4, kubeadm expects the sandbox (pause) container image registry.k8s.io/pause:3.9, while containerd's generated default configuration still references pause:3.8. Let's update the containerd configuration to use the newer image.
sudo sed -i 's@registry.k8s.io/pause:3.8@registry.k8s.io/pause:3.9@g' /etc/containerd/config.toml
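You can confirm the change by checking the sandbox_image setting in the generated configuration:
grep sandbox_image /etc/containerd/config.toml
The line should now reference registry.k8s.io/pause:3.9.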
To apply the changes, enable and restart containerd.
sudo systemctl enable --now containerd
sudo systemctl restart containerd
Enable IPv4 packet forwarding, make bridged traffic visible to iptables, and keep swap usage to a minimum (vm.swappiness=0). Also make sure the required kernel modules are loaded.
sudo mkdir -p /etc/modules-load.d/
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
vm.swappiness = 0
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
Apply all the changes.
sudo sysctl --system
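To quickly verify that the modules are loaded and the settings took effect, you can run:
lsmod | grep -e overlay -e br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
All three sysctl values should be reported as 1.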

Configure control plane

SSH into the control plane node, al2023-kube-cp.
ssh ec2-user@CONTROL_PLANE_PUBLIC_IP
Change the system hostname to kube-cp.
sudo hostnamectl set-hostname kube-cp
You may not see the change in your current shell prompt. To see it, log out and log back in. To verify the change without logging out, run the hostname command.
hostname
You should now see your current hostname.
kube-cp
Create the Kubernetes cluster using kubeadm. We will use flannel for pod networking; flannel's default pod network CIDR is 10.244.0.0/16.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
Output:
[...cut...]
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.26.9.209:6443 --token [TOKEN] \
--discovery-token-ca-cert-hash sha256:[LONG_SHA256]
Note: If you need to configure a pod network CIDR other than the default, refer to the flannel documentation.
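As a quick sketch, if you wanted to use a different (hypothetical) CIDR such as 10.100.0.0/16, you would pass it to kubeadm init and later to the flannel Helm chart so both stay in sync:
sudo kubeadm init --pod-network-cidr=10.100.0.0/16
helm install flannel --set podCidr="10.100.0.0/16" --namespace kube-flannel flannel/flannel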
Wait a few minutes for the initialization to complete. Now create the Kubernetes config so that kubectl can administer the cluster as a non-root user.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
This is important: make sure to write down the kubeadm join command from the output. We will run this command on the worker nodes later to join them to the cluster.
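If you lose the join command, you can generate a fresh token and print a new join command on the control plane node at any time:
sudo kubeadm token create --print-join-command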
Check the status of our control plane in the cluster.
kubectl get nodes
Output:
NAME STATUS ROLES AGE VERSION
kube-cp NotReady control-plane 107s v1.28.4
As you can see above, the status is NotReady. Did we miss something? Let's see the status of the pods in the cluster.
kubectl get pods -A
Output:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5dd5756b68-hf6fz 0/1 Pending 0 2m1s
kube-system coredns-5dd5756b68-xbddq 0/1 Pending 0 2m1s
kube-system etcd-kube-cp 1/1 Running 0 2m13s
kube-system kube-apiserver-kube-cp 1/1 Running 0 2m15s
kube-system kube-controller-manager-kube-cp 1/1 Running 0 2m16s
kube-system kube-proxy-t78gg 1/1 Running 0 2m1s
kube-system kube-scheduler-kube-cp 1/1 Running 0 2m13s
The coredns pods are not running because we still need to install flannel as the pod network plugin. We will install flannel using Helm, so let's install Helm first.
HELM_VERSION=3.13.2
curl -s -L https://get.helm.sh/helm-v${HELM_VERSION}-linux-amd64.tar.gz -o 'helm.tar.gz'
sudo tar xvf helm.tar.gz --strip-components=1 -C /usr/local/bin/ linux-amd64/helm
Verify the Helm installation.
helm version
Output:
version.BuildInfo{Version:"v3.13.2", GitCommit:"2a2fb3b98829f1e0be6fb18af2f6599e0f4e8243", GitTreeState:"clean", GoVersion:"go1.20.10"}
Install the flannel pod network plugin using Helm. Use the same pod CIDR 10.244.0.0/16 that was specified when creating the cluster.
kubectl create ns kube-flannel
kubectl label --overwrite ns kube-flannel pod-security.kubernetes.io/enforce=privileged

helm repo add flannel https://flannel-io.github.io/flannel/
helm install flannel --set podCidr="10.244.0.0/16" --namespace kube-flannel flannel/flannel
Output:
NAME: flannel
LAST DEPLOYED: Fri Dec 8 21:08:38 2023
NAMESPACE: kube-flannel
STATUS: deployed
REVISION: 1
TEST SUITE: None
In a couple of seconds, the new flannel pods should be up and running, as well as the coredns pods that were previously in Pending status.
kubectl get pods -A
Output:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-dsmf5 1/1 Running 0 67m
kube-system coredns-5dd5756b68-j4lxr 1/1 Running 0 8h
kube-system coredns-5dd5756b68-qmqf5 1/1 Running 0 8h
kube-system etcd-kube-cp 1/1 Running 1 8h
kube-system kube-apiserver-kube-cp 1/1 Running 1 8h
kube-system kube-controller-manager-kube-cp 1/1 Running 1 8h
kube-system kube-proxy-lnlgp 1/1 Running 0 8h
kube-system kube-scheduler-kube-cp 1/1 Running 1 8h
Check the current status of the control plane node.
kubectl get nodes
Output:
NAME STATUS ROLES AGE VERSION
kube-cp Ready control-plane 6m30s v1.28.4
Our control plane is ready. Now let's proceed to worker node 1.

Configure worker node 1

SSH into the al2023-kube-wrk1 instance.
ssh ec2-user@WORKER1_PUBLIC_IP
Change the system hostname to kube-wrk1.
sudo hostnamectl set-hostname kube-wrk1
Verify that the hostname was set correctly.
hostname
Output:
kube-wrk1
Join worker node 1 to the cluster using the kubeadm join command that was shown to you when creating the control plane. Make sure to run the command as root.
sudo kubeadm join [CONTROL_PLANE_PRIVATE_IP]:6443 --token [TOKEN] \
--discovery-token-ca-cert-hash sha256:[LONG_SHA256]
Output:
[preflight] Running pre-flight checks
[WARNING Hostname]: hostname "kube-wrk1" could not be reached
[WARNING Hostname]: hostname "kube-wrk1": lookup kube-wrk1 on 172.26.0.2:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Let's go back to the control plane node instance and check the status of worker node 1.
kubectl get nodes
If you find that the status of kube-wrk1 is NotReady, check the status of the flannel pod for node kube-wrk1.
kubectl get pods -A -o wide
Output:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-flannel kube-flannel-ds-j6kqw 1/1 Running 0 108s 172.26.23.64 kube-wrk1 <none> <none>
kube-flannel kube-flannel-ds-t9548 1/1 Running 0 10m 172.26.9.209 kube-cp <none> <none>
kube-system coredns-5dd5756b68-hf6fz 1/1 Running 0 15m 10.244.0.2 kube-cp <none> <none>
kube-system coredns-5dd5756b68-xbddq 1/1 Running 0 15m 10.244.0.3 kube-cp <none> <none>
kube-system etcd-kube-cp 1/1 Running 0 16m 172.26.9.209 kube-cp <none> <none>
kube-system kube-apiserver-kube-cp 1/1 Running 0 16m 172.26.9.209 kube-cp <none> <none>
kube-system kube-controller-manager-kube-cp 1/1 Running 0 16m 172.26.9.209 kube-cp <none> <none>
kube-system kube-proxy-fskw9 1/1 Running 0 108s 172.26.23.64 kube-wrk1 <none> <none>
kube-system kube-proxy-t78gg 1/1 Running 0 15m 172.26.9.209 kube-cp <none> <none>
kube-system kube-scheduler-kube-cp 1/1 Running 0 16m 172.26.9.209 kube-cp <none> <none>
Notice that the role of worker node 1 is <none>. To label it as a worker, run the following command on the control plane node.
kubectl label node kube-wrk1 node-role.kubernetes.io/worker=worker

kubectl get nodes
Output:
NAME STATUS ROLES AGE VERSION
kube-cp Ready control-plane 12h v1.28.4
kube-wrk1 Ready worker 3h6m v1.28.4
Let’s proceed to worker node 2.

Configure worker node 2

SSH into the al2023-kube-wrk2 instance.
ssh ec2-user@WORKER2_PUBLIC_IP
Change the system hostname to kube-wrk2.
sudo hostnamectl set-hostname kube-wrk2
Verify that the hostname was set correctly.
hostname
Output:
kube-wrk2
Run the kubeadm join command on worker node 2, just like we did on worker node 1, to join it to the cluster.
sudo kubeadm join [CONTROL_PLANE_PRIVATE_IP]:6443 --token [TOKEN] \
--discovery-token-ca-cert-hash sha256:[LONG_SHA256]
Switch back to the control plane node instance and check the status of worker node 2. Wait a few seconds for kube-wrk2 to become ready.
kubectl get nodes
Output:
NAME STATUS ROLES AGE VERSION
kube-cp Ready control-plane 6h33m v1.28.4
kube-wrk1 Ready worker 6h18m v1.28.4
kube-wrk2 Ready <none> 109s v1.28.4
Let's add the worker label to kube-wrk2.
kubectl label node kube-wrk2 node-role.kubernetes.io/worker=worker

kubectl get nodes
Output:
NAME STATUS ROLES AGE VERSION
kube-cp Ready control-plane 6h33m v1.28.4
kube-wrk1 Ready worker 6h19m v1.28.4
kube-wrk2 Ready worker 2m17s v1.28.4

Deploy app to the cluster (control plane)

We will deploy a simple web server called http-echo, which will echo back the text argument provided when starting the server. To give an overview, here is how you would run the server using Docker:
docker run -p 8080:8080 hashicorp/http-echo -listen=:8080 -text="hello world"
Make sure you're on the control plane node. To deploy http-echo on our cluster, create a YAML Deployment file.
cat <<'EOF' > http-echo-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-echo-deployment
  labels:
    app: http-echo
spec:
  replicas: 6
  selector:
    matchLabels:
      app: http-echo
  template:
    metadata:
      annotations:
        add-node-label: always
      labels:
        app: http-echo
    spec:
      containers:
      - name: http-echo
        image: hashicorp/http-echo
        args:
        - "-text=Node: $(MY_NODE_NAME)/$(MY_HOST_IP) - Pod: $(MY_POD_NAME)/$(MY_POD_IP)"
        - "-listen=:8080"
        ports:
        - containerPort: 8080
        env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: MY_HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
EOF
The Deployment creates 6 replicas, which should be spread across the worker nodes. The app echoes back several pieces of information: the node name and its IP address, and the pod name and its IP address. The server listens on port 8080; by default, containerd does not allow containers to bind to privileged ports (< 1024), so keep that in mind when running containers.
Now apply the changes.
kubectl apply -f http-echo-deployment.yaml
Check the status of the deployment.
kubectl get deployment --show-labels
Output:
NAME READY UP-TO-DATE AVAILABLE AGE LABELS
http-echo-deployment 6/6 6 6 78s app=http-echo
There should be 6 pods created and running at the moment.
kubectl get pods --show-labels
Output:
http-echo-deployment-6759fdb5d-7rvqh 1/1 Running 0 27s app=http-echo,pod-template-hash=6759fdb5d
http-echo-deployment-6759fdb5d-84zwj 1/1 Running 0 27s app=http-echo,pod-template-hash=6759fdb5d
http-echo-deployment-6759fdb5d-8mlg4 1/1 Running 0 27s app=http-echo,pod-template-hash=6759fdb5d
http-echo-deployment-6759fdb5d-fpvqw 1/1 Running 0 27s app=http-echo,pod-template-hash=6759fdb5d
http-echo-deployment-6759fdb5d-pqzst 1/1 Running 0 27s app=http-echo,pod-template-hash=6759fdb5d
http-echo-deployment-6759fdb5d-pxczb 1/1 Running 0 27s app=http-echo,pod-template-hash=6759fdb5d
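To confirm that the replicas really are spread across both worker nodes, you can request the wide output, which adds a NODE column:
kubectl get pods -l app=http-echo -o wide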
For the app to be reachable from outside its pods, the Deployment needs to be exposed via a Service.
cat <<'EOF' > http-echo-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: http-echo-svc
spec:
  selector:
    app: http-echo
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: ClusterIP
  externalIPs:
  - WORKER_NODE_1_PRIVATE_IP
  - WORKER_NODE_2_PRIVATE_IP
EOF
The Service exposes the app on port 80 and forwards traffic to container port 8080. The Service type is ClusterIP, and it selects all pods with the label app=http-echo. The externalIPs entries are the private IP addresses of the Lightsail instances for the worker nodes; by configuring externalIPs, the Service can receive the traffic that the Lightsail Load Balancer sends to those nodes.
You need to change the value of externalIPs to match the private IP addresses of the worker nodes. You can find each worker node's private IP address on your Amazon Lightsail console, by using the ip command, or with the kubectl command.
kubectl get nodes -o wide
Output:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kube-cp Ready control-plane 6h45m v1.28.4 172.26.9.209 <none> Amazon Linux 2023 6.1.61-85.141.amzn2023.x86_64 containerd://1.7.2
kube-wrk1 Ready worker 6h30m v1.28.4 172.26.23.64 <none> Amazon Linux 2023 6.1.61-85.141.amzn2023.x86_64 containerd://1.7.2
kube-wrk2 Ready worker 13m v1.28.4 172.26.38.139 <none> Amazon Linux 2023 6.1.61-85.141.amzn2023.x86_64 containerd://1.7.2
You can see the private IP of each node in the INTERNAL-IP column.
Once you finish editing the file, deploy the Service using the kubectl apply command.
kubectl apply -f http-echo-svc.yaml
Check the status of all services in the cluster.
kubectl get services
Output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
http-echo-svc ClusterIP 10.99.201.31 172.26.23.64,172.26.38.139 80/TCP 112s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6h52m
The http-echo-svc is up, listening on port 80 on the two external IPs. To test the application, we can send requests to the cluster IP or to the external IPs. Let's try sending requests to the cluster IP a few times; the response should differ depending on which pod handles the request.
curl -s 10.99.201.31
Node: kube-wrk1/172.26.23.64 - Pod: http-echo-deployment-6759fdb5d-q9ms5/10.244.1.2

curl -s 10.99.201.31
Node: kube-wrk2/172.26.38.139 - Pod: http-echo-deployment-6759fdb5d-4jh9r/10.244.2.2
Now let’s try with external IPs.
curl -s 172.26.23.64
Node: kube-wrk2/172.26.38.139 - Pod: http-echo-deployment-6759fdb5d-5kvnp/10.244.2.4

curl -s 172.26.38.139
Node: kube-wrk1/172.26.23.64 - Pod: http-echo-deployment-6759fdb5d-9bq9c/10.244.1.3
Behind the scenes, when you hit an external IP, the traffic is routed to the Service by iptables rules that kube-proxy sets up. That's why you see the same behavior whether you hit the cluster IP or the external IPs.
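If you are curious, you can peek at those rules on any node. Assuming kube-proxy runs in its default iptables mode, the Service appears in the KUBE-SERVICES chain of the nat table:
sudo iptables -t nat -L KUBE-SERVICES -n | grep http-echo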
Now, how do you expose the app to the world? Enter Lightsail Load Balancer.

Expose app to internet using Lightsail Load Balancer

Follow these steps:
  1. Open the Amazon Lightsail console and navigate to the Networking page.
  2. Create a new load balancer; make sure to choose the same region your instances are in.
  3. Give your load balancer the name http-echo-lb.
  4. For the target instances, attach both worker nodes to the load balancer: al2023-kube-wrk1 and al2023-kube-wrk2.
  5. Wait a couple of minutes for the health checks to pass before proceeding.
Expose app using Lightsail load balancer
You can now access the app from the internet using the load balancer URL. I recommend using cURL to test it, since browsers tend to cache the results. This means you may not see a difference when hitting the URL multiple times.
curl -s http://RANDOM_IDS.REGION.elb.amazonaws.com/
Output:
Node: kube-wrk1/172.26.23.64 - Pod: http-echo-deployment-6759fdb5d-84zwj/10.244.1.3
Try sending requests to the URL multiple times to see the different responses from different nodes and pods.
We will not discuss setting up custom domains and free SSL/TLS certificates in this post. However, you can find more information in the Lightsail load balancer documentation.

Improve load balancer (optional)

Our web app is exposed through a Service called http-echo-svc. This Service uses the ClusterIP service type, as seen below.
kubectl get services
Output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
http-echo-svc ClusterIP 10.99.201.31 172.26.23.64,172.26.38.139 80/TCP 15m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7h5m
When the Lightsail Load Balancer sends traffic to one of the instances, it is handled by http-echo-svc. Even though the load balancer hits worker node 1, the Service may forward the traffic to a pod running on worker node 2, because http-echo-svc balances traffic across all matching pods on both worker nodes.
This is not optimal. What we want is that when the Lightsail Load Balancer hits a worker node, the response comes from a pod running on that particular node.
How do we achieve this? We can create one Service per worker node and add a new pod label containing the name of the worker node (e.g., node=kube-wrk1 for worker node 1). Take a look at the diagram below.
Improved Lightsail Load balancer routing
Unfortunately, there is no straightforward way to set a pod label dynamically. We could use a policy engine such as Kyverno to add the label when pods are created, or we could use Init Containers to add the label to the pods. For this post, we will use the Init Containers method. In a nutshell, Init Containers run before the main containers; each Init Container must exit with status zero to indicate success, otherwise the main containers will not start.
The idea is to call the Kubernetes API server from the Init Container to add a new label to the pod. But first, we need a token to authorize our API calls. For this example, we will create a new Service Account and assign it the cluster-admin role. In a real-world scenario, you should create your own role that grants least-privilege access to the Service Account calling the API server.
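As a rough sketch of such a least-privilege setup (the pod-labeler name is just an example, and you would bind this role instead of cluster-admin), the Service Account only needs to read and patch pods:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-labeler
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "patch"]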
On the control plane node, create a new YAML file for the Service Account and its token.
cat <<'EOF' > service-account-initc.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: initc-pod-labels
---
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: initc-pod-labels-token
  annotations:
    kubernetes.io/service-account.name: initc-pod-labels
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: initc-pod-labels-bind
subjects:
- kind: ServiceAccount
  name: initc-pod-labels
  namespace: default
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
EOF
Apply the changes.
kubectl apply -f service-account-initc.yaml
Output:
serviceaccount/initc-pod-labels created
secret/initc-pod-labels-token created
clusterrolebinding.rbac.authorization.k8s.io/initc-pod-labels-bind created
Now we should be able to call the API server using the token stored in the initc-pod-labels-token Secret.
kubectl describe secret/initc-pod-labels-token
Output:
Name: initc-pod-labels-token
Namespace: default
Labels: <none>
Annotations: kubernetes.io/service-account.name: initc-pod-labels
kubernetes.io/service-account.uid: 92f6ace5-d9af-4aa9-883c-66f3db0d9edc

Type: kubernetes.io/service-account-token

Data
====
ca.crt: 1107 bytes
namespace: 7 bytes
token: YOUR_SERVICE_ACCOUNT_TOKEN
We can test the token by using cURL to call the Kubernetes API server. Let's first try without the token; we should get a 403 response from the API server.
curl -k https://localhost:6443/api/v1/namespaces/default/pods
Output:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "pods is forbidden: User \"system:anonymous\" cannot list resource \"pods\" in API group \"\" in the namespace \"default\"",
  "reason": "Forbidden",
  "details": {
    "kind": "pods"
  },
  "code": 403
}
Now let’s try with our Service Account token.
SVC_TOKEN="$( kubectl describe secret/initc-pod-labels-token | grep 'token: ' | awk '{print $NF}' )"

curl -k -H "Authorization: Bearer $SVC_TOKEN" https://localhost:6443/api/v1/namespaces/default/pods
You should get a list of pods in the default namespace in JSON format.
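As an aside, instead of parsing the describe output you can read the token directly from the Secret with jsonpath; it is stored base64-encoded:
SVC_TOKEN="$( kubectl get secret initc-pod-labels-token -o jsonpath='{.data.token}' | base64 -d )"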
Now that we know the token is valid, we need to modify the http-echo-deployment Deployment to include an Init Container that uses the token to add a new label to the pod. Rather than modifying the old file, let's create a new one.
cat <<'EOF' > http-echo-deployment-initc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-echo-deployment
  labels:
    app: http-echo
spec:
  replicas: 6
  selector:
    matchLabels:
      app: http-echo
  template:
    metadata:
      labels:
        app: http-echo
    spec:
      initContainers:
      - name: initc
        image: alpine:3
        command: ['sh', '-c']
        args:
        - |
          # Download curl to call the Kubernetes API server
          wget 'https://github.com/moparisthebest/static-curl/releases/download/v8.4.0/curl-amd64' -O /usr/bin/curl
          chmod +x /usr/bin/curl

          # Get the value of the service account token
          TOKEN_PATH=/var/run/init-secrets
          TOKEN="$( cat $TOKEN_PATH/token )"
          NAMESPACE="$( cat $TOKEN_PATH/namespace )"

          # Get the node name
          NODE_NAME="$( curl -s -k -H "Accept: application/yaml" \
            -H "Authorization: Bearer $TOKEN" \
            https://$KUBERNETES_SERVICE_HOST/api/v1/namespaces/$NAMESPACE/pods/$HOSTNAME \
            | grep nodeName: | grep -v '{' | awk '{ print $NF }' )"

          # Patch the pod to add the new label 'node=NODE_NAME'
          printf '{"metadata":{"labels":{"node":"%s"}}}\n' "$NODE_NAME" | curl -k -v -XPATCH \
            -H "Accept: application/json" \
            -H "Content-Type: application/merge-patch+json" \
            -H "Authorization: Bearer $TOKEN" \
            "https://$KUBERNETES_SERVICE_HOST/api/v1/namespaces/$NAMESPACE/pods/$HOSTNAME" \
            -d @-
        volumeMounts:
        - name: sa-init-token
          mountPath: /var/run/init-secrets
      volumes:
      - name: sa-init-token
        secret:
          secretName: initc-pod-labels-token
          defaultMode: 0600

      containers:
      - name: http-echo
        image: hashicorp/http-echo
        args:
        - "-text=Node: $(MY_NODE_NAME)/$(MY_HOST_IP) - Pod: $(MY_POD_NAME)/$(MY_POD_IP)"
        - "-listen=:8080"
        ports:
        - containerPort: 8080
        env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: MY_HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
EOF
For the Init Container, we use the alpine:3 image and mount the service account token at /var/run/init-secrets/. Once the Init Container starts, it calls the Kubernetes API server to get the name of the node the pod is running on, then patches the pod's labels to add node=$NODE_NAME.
Apply the changes to the current http-echo-deployment.
kubectl apply -f http-echo-deployment-initc.yaml
Kubernetes should terminate all the old pods and create new ones with the new label.
kubectl get pods --show-labels
Output:
NAME READY STATUS RESTARTS AGE LABELS
http-echo-deployment-5b85c5bb95-4drk7 1/1 Running 0 5s app=http-echo,node=kube-wrk1,pod-template-hash=5b85c5bb95
http-echo-deployment-5b85c5bb95-6mh28 1/1 Running 0 9s app=http-echo,node=kube-wrk1,pod-template-hash=5b85c5bb95
http-echo-deployment-5b85c5bb95-b44vw 1/1 Running 0 9s app=http-echo,node=kube-wrk1,pod-template-hash=5b85c5bb95
http-echo-deployment-5b85c5bb95-bxjhs 1/1 Running 0 6s app=http-echo,node=kube-wrk2,pod-template-hash=5b85c5bb95
http-echo-deployment-5b85c5bb95-cclwm 1/1 Running 0 5s app=http-echo,node=kube-wrk2,pod-template-hash=5b85c5bb95
http-echo-deployment-5b85c5bb95-vjz2z 1/1 Running 0 9s app=http-echo,node=kube-wrk2,pod-template-hash=5b85c5bb95
As you can see, all pods now have a node label that indicates which worker node each pod runs on.
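If a pod ever gets stuck in the Init state, a quick way to troubleshoot is to read the Init Container's logs:
kubectl logs -l app=http-echo -c initc --tail=20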
We now need one Service per node, with the selector updated to include the node label as an additional condition. Create two new Services that route traffic to the app's pods on a specific node. Don't forget to replace the externalIPs values with your own.
cat <<'EOF' > http-echo-svc-initc.yaml
apiVersion: v1
kind: Service
metadata:
  name: http-echo-svc-1
spec:
  selector:
    app: http-echo
    node: kube-wrk1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: ClusterIP
  externalIPs:
  - WORKER_NODE_1_PRIVATE_IP
---
apiVersion: v1
kind: Service
metadata:
  name: http-echo-svc-2
spec:
  selector:
    app: http-echo
    node: kube-wrk2
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: ClusterIP
  externalIPs:
  - WORKER_NODE_2_PRIVATE_IP
EOF
Apply the new Services.
kubectl apply -f http-echo-svc-initc.yaml
Run kubectl get services again. You should now have several Services running.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
http-echo-svc ClusterIP 10.99.201.31 172.26.23.64,172.26.38.139 80/TCP 20m
http-echo-svc-1 ClusterIP 10.102.153.206 172.26.23.64 80/TCP 5s
http-echo-svc-2 ClusterIP 10.97.33.102 172.26.38.139 80/TCP 5s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7h10m
You can safely delete the old Service http-echo-svc.
kubectl delete service/http-echo-svc
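To double-check that each new Service only selects pods on its own node, you can compare their endpoints; each Service should list only pod IPs from a single flannel subnet:
kubectl get endpoints http-echo-svc-1 http-echo-svc-2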
Now let's hit the Service http-echo-svc-1 several times; the response should always come from worker node 1.
curl -s WORKER_NODE_1_PRIVATE_IP
Node: kube-wrk1/172.26.23.64 - Pod: http-echo-deployment-5b85c5bb95-4drk7/10.244.1.7

curl -s WORKER_NODE_1_PRIVATE_IP
Node: kube-wrk1/172.26.23.64 - Pod: http-echo-deployment-5b85c5bb95-6mh28/10.244.1.6

curl -s WORKER_NODE_1_PRIVATE_IP
Node: kube-wrk1/172.26.23.64 - Pod: http-echo-deployment-5b85c5bb95-b44vw/10.244.1.5
Let's do the same with http-echo-svc-2.
curl -s WORKER_NODE_2_PRIVATE_IP
Node: kube-wrk2/172.26.38.139 - Pod: http-echo-deployment-5b85c5bb95-bxjhs/10.244.2.6

curl -s WORKER_NODE_2_PRIVATE_IP
Node: kube-wrk2/172.26.38.139 - Pod: http-echo-deployment-5b85c5bb95-vjz2z/10.244.2.5

curl -s WORKER_NODE_2_PRIVATE_IP
Node: kube-wrk2/172.26.38.139 - Pod: http-echo-deployment-5b85c5bb95-cclwm/10.244.2.7
Now the traffic sent from the Lightsail Load Balancer to the Service takes a more optimal route, even though the end user will not notice the difference.
curl -s http://RANDOM_IDS.REGION.elb.amazonaws.com/
Node: kube-wrk1/172.26.23.64 - Pod: http-echo-deployment-5b85c5bb95-4drk7/10.244.1.7

curl -s http://RANDOM_IDS.REGION.elb.amazonaws.com/
Node: kube-wrk2/172.26.38.139 - Pod: http-echo-deployment-5b85c5bb95-vjz2z/10.244.2.5

curl -s http://RANDOM_IDS.REGION.elb.amazonaws.com/
Node: kube-wrk2/172.26.38.139 - Pod: http-echo-deployment-5b85c5bb95-cclwm/10.244.2.7

Clean up

To avoid incurring future charges, clean up the resources created in this post by deleting the following in the Amazon Lightsail console:
  • Lightsail instances: al2023-kube-cp, al2023-kube-wrk1, and al2023-kube-wrk2.
  • Lightsail load balancer: http-echo-lb

Summary

In this post, we deployed a Kubernetes cluster on Amazon Lightsail using kubeadm and kubectl. To form the cluster, we created three Lightsail instances: one control plane and two worker nodes. To expose the app's Kubernetes Service to the internet, we used a Lightsail Load Balancer, which routes traffic to the worker nodes. To make the app highly available, we spread the worker nodes across different availability zones. Visit the Lightsail Getting Started page to learn more.
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
