
How to run Kubernetes on Amazon Lightsail

A brief step-by-step guide to running a Kubernetes cluster on Amazon Lightsail instances

Rio Astamal
Amazon Employee
Published Dec 22, 2023
Last Modified Apr 26, 2024
To deploy containerized applications on Amazon Lightsail, you can use the Amazon Lightsail container service. It provides a simple and easy way to run containers without needing to set up clusters or manage servers. However, if you need more control over your cluster and require more flexibility, you have the option to run your own container orchestration software such as Kubernetes. This post will give you a starting point for running your own Kubernetes cluster on Amazon Lightsail.
We will create a Kubernetes cluster using one control plane node and two worker nodes. We will deploy a small web app into the cluster. Each node will be hosted on a different availability zone for high availability. We will expose the app to the internet using Lightsail Load Balancer for secure and highly available access. The load balancer also provides a free SSL/TLS certificate for a custom domain. Below is a diagram showing an overview of the Kubernetes cluster that we will create.
Kubernetes cluster on Amazon Lightsail

Create nodes

For this Kubernetes cluster deployment, we will use the Amazon Lightsail $10 instance plan for each node. With the $10 plan, we get 2 GB of RAM, 2 vCPUs, and 60 GB of SSD storage per node.
Go to your Amazon Lightsail console and create three instances as described below. You may choose a different region than mine, but keep in mind to spread the worker nodes across different availability zones (AZs).
Configuration       | Control plane      | Worker 1             | Worker 2
--------------------|--------------------|----------------------|----------------------
Name                | al2023-kube-cp     | al2023-kube-wrk1     | al2023-kube-wrk2
Instance plan       | $10                | $10                  | $10
OS                  | Amazon Linux 2023  | Amazon Linux 2023    | Amazon Linux 2023
Availability zone   | us-east-1a         | us-east-1b           | us-east-1c
Networking (ports)  | 22 (SSH)           | 22 (SSH), 80 (HTTP)  | 22 (SSH), 80 (HTTP)
Kubernetes nodes
By default, Lightsail allows instances to communicate with each other via private IP addresses. You do not need to set up any OS-level firewalls. However, if you do set up firewalls, you can refer to the ports and protocols documentation for Kubernetes.

Install Kubernetes (All nodes)

SSH into each Amazon Lightsail instance, then run the following series of commands to install Kubernetes and all required packages. The default user for Amazon Linux 2023 is ec2-user. To SSH into an instance, you can use the following command:
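Here, the identity file path is only an example; use your own SSH key and the instance's public IP:

```bash
ssh -i ~/LightsailDefaultKey-us-east-1.pem ec2-user@<INSTANCE_PUBLIC_IP>
```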
The Kubernetes documentation recommends setting SELinux to permissive mode. This is required to allow containers to access the host filesystem. You can leave SELinux enabled, but you may need to configure it properly.
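Following the kubeadm installation docs:

```bash
# Set SELinux to permissive mode (effectively disabling its enforcement)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
```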
Amazon Linux 2023 is an RPM-based distribution. Add the Kubernetes yum repository from the official k8s.io package repository. We're going to use Kubernetes version 1.28.
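The repository definition below follows the official pkgs.k8s.io instructions for v1.28:

```bash
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
```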
Now install Kubernetes tools such as kubelet, kubeadm, and kubectl.
Enable kubelet systemd service to start automatically on system startup.
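Both steps, per the kubeadm installation docs (the repo excludes the packages, so we disable the exclude during install):

```bash
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
```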
We will not use Docker Engine; instead, we will install containerd as the container runtime. The containerd package is available in the official Amazon Linux 2023 repository.
Note: If you use another distribution, you may refer to the Docker documentation to install containerd. The package name should be containerd.io.
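On Amazon Linux 2023:

```bash
sudo dnf install -y containerd
```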
Create default configuration to be used by the containerd process. The configuration should be written to /etc/containerd/config.toml.
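For example:

```bash
containerd config default | sudo tee /etc/containerd/config.toml
```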
Configure systemd as the cgroup driver for containerd.
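One way is to flip the SystemdCgroup flag in the generated config:

```bash
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
```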
As of Kubernetes version 1.28.4, kubeadm suggests using container image registry.k8s.io/pause:3.9. Let's update containerd configuration to use this image.
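A sketch of the edit, replacing whatever sandbox_image value the default config generated:

```bash
sudo sed -i 's|sandbox_image = ".*"|sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml
```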
To apply the changes, enable and restart containerd.
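```bash
sudo systemctl enable --now containerd
sudo systemctl restart containerd
```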
Enable IPv4 packet forwarding, bridged network traffic, and disable swap. Make sure to load required kernel modules.
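A sketch of these steps, following the Kubernetes container runtime prerequisites:

```bash
# Load kernel modules required by containerd and bridged networking
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Enable IPv4 forwarding and let iptables see bridged traffic
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Disable swap (Amazon Linux 2023 ships without swap, so this is usually a no-op)
sudo swapoff -a
```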
Apply all the changes.
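```bash
sudo sysctl --system
```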

Configure control plane

SSH into the control plane node al2023-kube-cp.
Change the system hostname to kube-cp.
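```bash
sudo hostnamectl set-hostname kube-cp
```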
You may not see the change in your current shell. To see it, you need to log out and log back in. To verify the change without logging out, issue the hostname command.
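```bash
hostname
```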
You should now see your current hostname.
Create the Kubernetes cluster using kubeadm. We will use flannel for pod networking; flannel's default pod network CIDR is 10.244.0.0/16.
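```bash
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
```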
Output:
Note: If you need to configure a CIDR other than the default one, please refer to the flannel documentation.
Wait a few minutes for it to complete. Now create the Kubernetes config for kubectl so you can administer the cluster as a non-root user.
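These commands are printed at the end of the kubeadm init output:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```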
This is important: Make sure to write down the kubeadm join command shown in the kubeadm init output. We will run this command on the worker nodes later to join them to the cluster.
Check the status of our control plane in the cluster.
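```bash
kubectl get nodes
```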
Output:
As you can see above, the status is NotReady. Did we miss something? Let's see the status of the pods in the cluster.
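```bash
kubectl get pods --all-namespaces
```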
Output:
The coredns pods are not running. We still need to install flannel as the pod network plugin. Let's install Helm first.
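One way to install Helm is with the official installer script:

```bash
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
```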
Verify the helm installation.
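```bash
helm version
```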
Output:
Install the flannel pod network plugin using Helm. Use the same pod CIDR 10.244.0.0/16 that was specified when creating the cluster.
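Following the flannel project's Helm installation steps (namespace, pod security label, chart repo, install):

```bash
kubectl create namespace kube-flannel
kubectl label --overwrite namespace kube-flannel pod-security.kubernetes.io/enforce=privileged

helm repo add flannel https://flannel-io.github.io/flannel/
helm install flannel --set podCidr="10.244.0.0/16" --namespace kube-flannel flannel/flannel
```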
Output:
In a couple of seconds, the new flannel pods should be up and running, as well as the coredns pods which were previously in Pending status.
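Check again:

```bash
kubectl get pods --all-namespaces
```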
Output:
Check the current status of the control plane node.
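```bash
kubectl get nodes
```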
Output:
Our control plane is ready; now proceed to worker node 1.

Configure worker node 1

SSH into the al2023-kube-wrk1 instance.
Change the system hostname to kube-wrk1.
Verify that the hostname was set correctly.
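As on the control plane, a quick sketch of both steps:

```bash
sudo hostnamectl set-hostname kube-wrk1
hostname
```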
Output:
Join worker node 1 to the cluster using the kubeadm join command that was shown to you when creating the control plane. Make sure to run the command as root.
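The command has the following shape; the address, token, and hash placeholders come from your own kubeadm init output:

```bash
sudo kubeadm join <CONTROL_PLANE_PRIVATE_IP>:6443 \
    --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```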
Output:
Let's go back to the control plane node instance and check the status of worker node 1.
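```bash
kubectl get nodes
```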
If you find that the status of kube-wrk1 is NotReady, check the status of the flannel pod for node kube-wrk1.
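For example:

```bash
kubectl get pods -n kube-flannel -o wide
```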
Output:
You may notice that the role of worker node 1 is <none>. To label it as a worker, run the following command on the control plane node.
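One common way to do this:

```bash
kubectl label node kube-wrk1 node-role.kubernetes.io/worker=worker
```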
Output:
Let’s proceed to worker node 2.

Configure worker node 2

SSH into the al2023-kube-wrk2 instance.
Change the system hostname to kube-wrk2.
Verify that the hostname was set correctly.
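Same steps as before, with the new name:

```bash
sudo hostnamectl set-hostname kube-wrk2
hostname
```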
Output:
Run the kubeadm join command on worker node 2, just like we did on worker node 1, to join it to the cluster.
Switch back to the control plane node instance and check the status of worker node 2. Wait a few seconds for kube-wrk2 to become ready.
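```bash
kubectl get nodes
```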
Output:
Let's add the worker label to kube-wrk2.
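```bash
kubectl label node kube-wrk2 node-role.kubernetes.io/worker=worker
```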
Output:

Deploy app to the cluster (control plane)

We will deploy a simple web server called http-echo, which will echo back the text argument provided when starting the server. To give an overview, here is how you would run the server using Docker:
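For instance (the text is arbitrary; -listen overrides the image's default port):

```bash
docker run -p 8080:8080 hashicorp/http-echo -listen=:8080 -text="hello world"
```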
Make sure you're on the control plane node. To deploy http-echo on our cluster, create a YAML Deployment file.
The Deployment will create 6 replicas, which should be spread across the worker nodes. We display several pieces of information, such as the node name and its IP address, and the pod name and its IP. The server listens on port 8080; by default, containerd does not allow binding to privileged ports (< 1024), so keep that in mind when running containers.
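A sketch of the manifest (saved as http-echo-deployment.yaml here, an example name; the $(VAR) references in args are expanded by Kubernetes from the Downward API env entries):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-echo-deployment
  labels:
    app: http-echo
spec:
  replicas: 6
  selector:
    matchLabels:
      app: http-echo
  template:
    metadata:
      labels:
        app: http-echo
    spec:
      containers:
      - name: http-echo
        image: hashicorp/http-echo
        args:
        - -listen=:8080
        # $(VAR) references are expanded from the env entries below
        - -text=Node: $(NODE_NAME) ($(NODE_IP)) / Pod: $(POD_NAME) ($(POD_IP))
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: NODE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        ports:
        - containerPort: 8080
```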
Now apply the changes.
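Assuming the file was saved as http-echo-deployment.yaml:

```bash
kubectl apply -f http-echo-deployment.yaml
```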
Check the status of the deployment.
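```bash
kubectl get deployment http-echo-deployment
```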
Output:
There should be 6 pods created and running at the moment.
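```bash
kubectl get pods -o wide
```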
Output:
In order for other services to be able to access the app, the deployment needs to be exposed via a Service.
The Service will expose the app on port 80 and forward the traffic to container port 8080. The type of the Service is ClusterIP, and it will select all pods that have the label app=http-echo. The externalIPs values are the private IP addresses of the Lightsail instances for the worker nodes. By configuring externalIPs, the Service can be exposed to the Lightsail Load Balancer.
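A sketch of the manifest (http-echo-service.yaml here, an example name; the externalIPs values are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: http-echo-svc
spec:
  type: ClusterIP
  selector:
    app: http-echo
  ports:
  - port: 80
    targetPort: 8080
  externalIPs:
  - 172.26.1.10   # private IP of kube-wrk1 (replace with yours)
  - 172.26.2.10   # private IP of kube-wrk2 (replace with yours)
```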
You need to change the value of externalIPs to match the private IP addresses of your worker nodes. You can find each worker node's private IP address in your Amazon Lightsail console, by using the ip command, or with the kubectl command.
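```bash
kubectl get nodes -o wide
```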
Output:
You can see the private IP of each node in the INTERNAL-IP column.
Once you finish editing the file, deploy the Service using the kubectl apply command.
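```bash
kubectl apply -f http-echo-service.yaml
```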
Check the status of all services in the cluster.
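```bash
kubectl get services
```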
Output:
The http-echo-svc is up and listening on port 80 on the two external IPs as well as the cluster IP. To test the application, we can send requests to the cluster IP or the external IPs. Let's try sending requests to the cluster IP multiple times; the result should differ depending on which pod responds to the request.
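For example:

```bash
# Replace <CLUSTER_IP> with the CLUSTER-IP of http-echo-svc from the previous output
for i in 1 2 3; do curl http://<CLUSTER_IP>; done
```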
Now let’s try with external IPs.
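```bash
curl http://<WORKER_1_PRIVATE_IP>
curl http://<WORKER_2_PRIVATE_IP>
```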
Behind the scenes, when you hit external IPs, the traffic is routed to the cluster IP via iptables rules set by Kubernetes. That's why you will see the same result whether you hit the cluster IP or external IPs.
Now, how do you expose the app to the world? Enter Lightsail Load Balancer.

Expose app to internet using Lightsail Load Balancer

Follow these steps:
  1. Open Amazon Lightsail Console and navigate to the Networking page.
  2. Create a new load balancer; make sure to choose the same region that your instances are in.
  3. Give your load balancer the name http-echo-lb.
  4. For the target instances, attach both worker nodes to the load balancer: al2023-kube-wrk1 and al2023-kube-wrk2.
  5. Wait a couple of minutes for the health check to pass before proceeding.
Expose app using Lightsail load balancer
You can now access the app from the internet using the load balancer URL. I recommend using cURL to test it, since browsers tend to cache the results, which means you may not see a difference when hitting the URL multiple times.
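The URL placeholder below stands in for your load balancer's DNS name, shown in the Lightsail console:

```bash
curl http://<your-load-balancer-dns-name>
```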
Output:
Try sending requests to the URL multiple times to see the different responses from different nodes and pods.
We will not discuss setting up custom domains and free SSL/TLS certificates in this post. However, you can find more information in the Lightsail load balancer documentation.

Improve load balancer (optional)

Our web app is exposed through a Service called http-echo-svc. This Service uses a ClusterIP service type configuration, as seen below.
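One way to inspect it:

```bash
kubectl get service http-echo-svc -o yaml
```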
Output:
When the Lightsail Load Balancer sends traffic to one of the instances, it gets handled by http-echo-svc. Even though the load balancer hits worker node 1, the Service may forward the traffic to a pod running on worker node 2, as http-echo-svc tries to load balance the traffic across the two worker nodes.
This is not optimal. What we want is that when the Lightsail Load Balancer hits a worker node, the response comes from a pod running on that particular worker node.
How do we achieve this? We can create one Service per worker node and add a new label containing the name of the worker node (e.g., node=kube-wrk1 for worker node 1). Take a look at the diagram below.
Improved Lightsail Load balancer routing
Unfortunately, there is no straightforward way to dynamically set a pod's label to the name of the node it runs on. We could use a policy engine such as Kyverno to add the label when pods are created, or we could use Init Containers to add the label to the pods. For this post, we will use the Init Containers method. In a nutshell, Init Containers run before the main containers; they must exit with status zero to indicate success, otherwise the main containers will not be executed.
The idea is that the Init Containers will call the Kubernetes API server on the control plane to add a new label to the pod. But first, we need a token to authorize our API calls. For this example, we will create a new Service Account and assign the cluster-admin role to it. In a real-world scenario, you should create your own role and give least-privilege access to the Service Account that will call the API server.
On the control plane node, create a new YAML file for the Service Account and its token.
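A sketch of the manifest (saved as initc-service-account.yaml here; the Service Account name initc-pod-labels is an example, while the Secret name initc-pod-labels-token matches the one used later):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: initc-pod-labels
  namespace: default
---
apiVersion: v1
kind: Secret
metadata:
  name: initc-pod-labels-token
  namespace: default
  annotations:
    kubernetes.io/service-account.name: initc-pod-labels
type: kubernetes.io/service-account-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: initc-pod-labels-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: initc-pod-labels
  namespace: default
```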
Apply the changes.
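```bash
kubectl apply -f initc-service-account.yaml
```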
Output:
Now we should be able to call the API server using the token stored in the initc-pod-labels-token Secret. Retrieve and decode it.
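A sketch, storing the token in a shell variable for later use:

```bash
TOKEN=$(kubectl get secret initc-pod-labels-token -o jsonpath='{.data.token}' | base64 --decode)
echo $TOKEN
```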
Output:
We can test the token by using cURL to call the Kubernetes API server. Let's first try without the token; we should get a 403 response from the API server.
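Assuming you are still on the control plane node, where the API server listens on port 6443 (-k skips verification of the cluster's self-signed certificate):

```bash
curl -k https://localhost:6443/api/v1/namespaces/default/pods
```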
Output:
Now let’s try with our Service Account token.
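```bash
curl -k -H "Authorization: Bearer $TOKEN" https://localhost:6443/api/v1/namespaces/default/pods
```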
You should get a list of pods in the default namespace in JSON format.
Now that we know the token is valid, we need to modify the http-echo-deployment Deployment to include Init Containers and use the token to add a new label to each pod. Rather than modifying the old file, let's just create a new one.
For the Init Containers, we use the alpine:3 image and mount the Service Account token at /var/run/init-secrets/. Once an Init Container starts, it calls the Kubernetes API server to get the name of the node where the current pod is running, and then patches the pod's labels to add node=$NODE_NAME.
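A sketch of the updated manifest (saved as http-echo-deployment-v2.yaml here, an example name). It reaches the API server via the in-cluster address kubernetes.default.svc and installs curl and jq in the Init Container at runtime, which assumes the pods have internet access:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-echo-deployment
  labels:
    app: http-echo
spec:
  replicas: 6
  selector:
    matchLabels:
      app: http-echo
  template:
    metadata:
      labels:
        app: http-echo
    spec:
      volumes:
      - name: init-secrets
        secret:
          secretName: initc-pod-labels-token
      initContainers:
      - name: add-node-label
        image: alpine:3
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: init-secrets
          mountPath: /var/run/init-secrets
          readOnly: true
        command:
        - /bin/sh
        - -c
        - |
          apk add --no-cache curl jq
          TOKEN=$(cat /var/run/init-secrets/token)
          API=https://kubernetes.default.svc
          # Ask the API server which node this pod was scheduled on
          NODE_NAME=$(curl -sk -H "Authorization: Bearer ${TOKEN}" \
            ${API}/api/v1/namespaces/${POD_NAMESPACE}/pods/${POD_NAME} | jq -r '.spec.nodeName')
          # Patch the pod to add the node=<node name> label
          curl -sk -X PATCH \
            -H "Authorization: Bearer ${TOKEN}" \
            -H "Content-Type: application/json-patch+json" \
            ${API}/api/v1/namespaces/${POD_NAMESPACE}/pods/${POD_NAME} \
            -d "[{\"op\": \"add\", \"path\": \"/metadata/labels/node\", \"value\": \"${NODE_NAME}\"}]"
      containers:
      - name: http-echo
        image: hashicorp/http-echo
        args:
        - -listen=:8080
        - -text=Node: $(NODE_NAME) ($(NODE_IP)) / Pod: $(POD_NAME) ($(POD_IP))
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: NODE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        ports:
        - containerPort: 8080
```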
Apply the changes to current http-echo-deployment.
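```bash
kubectl apply -f http-echo-deployment-v2.yaml
```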
Kubernetes should terminate all of the pods and create new ones with the new label.
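```bash
kubectl get pods --show-labels
```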
Output:
As you can see, all pods now have a new node label that indicates where each pod runs. We now need a Service for each node, with the selector updated to include the node label as one of the conditions.
Create the new Services to route traffic to a specific app and node. Don't forget to replace the values of externalIPs with your own.
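A sketch with both Services in one file (http-echo-services.yaml here, an example name; the externalIPs values are placeholders for your worker nodes' private IPs):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: http-echo-svc-1
spec:
  type: ClusterIP
  selector:
    app: http-echo
    node: kube-wrk1
  ports:
  - port: 80
    targetPort: 8080
  externalIPs:
  - 172.26.1.10   # private IP of kube-wrk1 (replace with yours)
---
apiVersion: v1
kind: Service
metadata:
  name: http-echo-svc-2
spec:
  type: ClusterIP
  selector:
    app: http-echo
    node: kube-wrk2
  ports:
  - port: 80
    targetPort: 8080
  externalIPs:
  - 172.26.2.10   # private IP of kube-wrk2 (replace with yours)
```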
Apply the new Services.
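```bash
kubectl apply -f http-echo-services.yaml
```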
Now you should have several Services running.
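```bash
kubectl get services
```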
You can safely delete the old Service http-echo-svc.
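```bash
kubectl delete service http-echo-svc
```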
Now let's hit the Service http-echo-svc-1 several times; the result should always come from worker node 1.
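For example:

```bash
# Replace <SVC_1_CLUSTER_IP> with the CLUSTER-IP of http-echo-svc-1
for i in 1 2 3; do curl http://<SVC_1_CLUSTER_IP>; done
```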
Let's do the same with http-echo-svc-2.
Now the traffic sent from the Lightsail Load Balancer to the Service uses more optimal routing, even though the end user will not notice it.

Clean up

To avoid incurring future charges, clean up the resources created in this post using the Amazon Lightsail console by deleting the following resources:
  • Lightsail instances: al2023-kube-cp, al2023-kube-wrk1, and al2023-kube-wrk2.
  • Lightsail load balancer: http-echo-lb

Summary

In this post, we deployed a Kubernetes cluster on Amazon Lightsail using the kubeadm and kubectl commands. To form the cluster, we created three Lightsail instances: one control plane node and two worker nodes. To expose the app from a Kubernetes Service to the internet, we utilized a Lightsail Load Balancer, which routes traffic to the worker nodes. To make the app highly available, we spread the worker nodes across different availability zones. Visit the Lightsail Getting Started page to learn more.
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
