Picturesocial - How to deploy an app to Kubernetes
Deploying an app to Kubernetes is like having that one recipe that always works: written well once, it can be reused to create many different dishes. In this post, we are going to learn about Kubernetes manifests and the basic commands to deploy an app in an easy and reusable way.
This is an 8-part series about Picturesocial:
So far we have learned about containers, Kubernetes, and Terraform. Now, it’s time to use the knowledge that we acquired in the previous posts to deploy a container on our Amazon EKS cluster. In this post, we are also going to learn about the Kubectl tool and some commands to handle basic Kubernetes tasks.
To understand the basic flow of application deployment into Kubernetes, we have to understand the complete flow of containerized application development. I designed this diagram to help summarize the Build, Push, Compose, Connect, Deploy process.
I divided the diagram above into the five steps explained below to clarify the activities involved:
- First, we have to build our container image, setting an image name and a tag. Once the image is created, we can test the container locally before going further.
- Once the container works properly, we have to push the image to a container registry. In our case, we are going to use Amazon ECR. All the required steps to get here are in our first post.
- When the container image is stored in the container registry, we have to create a Kubernetes manifest so we can send instructions to the cluster to create a pod, a replica set, and a service. We also tell the cluster where to retrieve the container image and how to update the application version. If you want a refresher, this is a good time to review the second post.
- Now, we are ready to create our Kubernetes cluster and get its credentials. This is done once per project. We learned how to create and connect to a Kubernetes cluster in our third post.
- And last but not least, we use Kubectl to deploy our application to Kubernetes using the manifest. We are going to learn about this final step in the walk-through below.
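As a quick recap of steps 1 and 2, the build and push flow looks roughly like this. The account ID, region, and repository name below are placeholders for illustration; the exact commands for your registry are covered in the first post:

```
# Step 1: build the container image locally, setting a name and a tag
docker build -t helloworld:latest .

# Authenticate Docker against Amazon ECR (placeholder account ID and region)
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 111122223333.dkr.ecr.us-east-1.amazonaws.com

# Step 2: tag the image with the registry address and push it
docker tag helloworld:latest 111122223333.dkr.ecr.us-east-1.amazonaws.com/helloworld:latest
docker push 111122223333.dkr.ecr.us-east-1.amazonaws.com/helloworld:latest
```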
Now that we have reviewed and put together what we learned from previous posts, let’s go and deploy the application! I’m assuming you already completed the walk-throughs from the previous posts before continuing with this one.
- An AWS Account.
- If you are using Linux or macOS, you can continue to the next bullet point. If you are using Microsoft Windows, I suggest you use WSL2.
- Install Git.
- Install Kubectl.
- Install AWS CLI 2.
If this is your first time working with AWS CLI or you need a refresher on how to set up your credentials, I suggest you follow this step-by-step guide on how to configure your local AWS environment. In the same guide, you can also follow the steps to configure AWS Cloud9, which will be very helpful if you don’t want to install everything from scratch.
- First, we are going to check that Kubectl is correctly installed by running:

kubectl version --client

You should have at least version 1.23 (Major:"1", Minor:"23") to run this walk-through. Otherwise, I suggest you upgrade the version first.
- When you created the cluster, you also ran a command to update the kubeconfig file. You don’t have to run it again, but here is a friendly reminder that it is a necessary step to continue further:

aws eks --region $(terraform output -raw region) update-kubeconfig --name $(terraform output -raw cluster_name)
This downloads a kubeconfig file to your local terminal with: a) the cluster name, b) the Kubernetes API URL, and c) the key to connect. By default, that file is saved at ~/.kube/config. You can see an abbreviated example of its users section below, from the official Kubernetes documentation:

users:
- name: developer
  user:
    client-certificate: fake-cert-file
    client-key: fake-key-file
- name: experimenter
  user:
    password: some-password
    username: exp

## Source: https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters

Now that we have the kubeconfig, we have established a trust relationship between our terminal and Kubernetes that will work through kubectl.
- Next, let’s look at the workers for this cluster by running the command below. The command will return the 3 workers that we created, the version of Kubernetes, and the age of each worker since creation or last upgrade. Each worker is on a different subnet that belongs to one of 3 different availability zones.

kubectl get nodes
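The output will look something like the following. The node names, IPs, and exact version are illustrative and will differ in your cluster:

```
NAME                                       STATUS   ROLES    AGE   VERSION
ip-10-0-1-25.us-east-1.compute.internal    Ready    <none>   15m   v1.23.9-eks-ba74326
ip-10-0-2-114.us-east-1.compute.internal   Ready    <none>   15m   v1.23.9-eks-ba74326
ip-10-0-3-201.us-east-1.compute.internal   Ready    <none>   15m   v1.23.9-eks-ba74326
```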
- In addition to nodes, you can also check for pods by running the command below. Keep in mind that we haven’t deployed anything yet. Also, if you don’t specify a namespace in the command, it will return everything from the "default" namespace.

kubectl get pods
- We can also list the pods in all namespaces, including the ones that Kubernetes needs to run properly, by adding the --all-namespaces parameter:

kubectl get pods --all-namespaces
- Similarly, we can check all the services in the cluster. As you can see, you have the default kubernetes service that will handle the kubectl requests and kube-dns, which will handle the calls to the coredns of the cluster. It’s important that we don’t edit or delete any of those services or pods, believe me :)

kubectl get services --all-namespaces
- We can also check the replica sets of the cluster by running this command:
kubectl get rs --all-namespaces
As shown above, the basic commands are pretty simple and self-explanatory.
- Now let’s deploy the container that we created in our first post. I have prepared a branch with everything that you will need. We are going to clone it first and position ourselves in the folder that we are going to use.

git clone https://github.com/aws-samples/picture-social-sample.git -b ep4
cd picture-social-sample
- Now, let’s open the file manifest.yml. That file contains the Kubernetes manifest. This manifest will create a deployment called helloworld with a pod running the container stored at 111122223333.dkr.ecr.us-east-1.amazonaws.com/helloworld, replicated twice. The manifest also includes a service with a public load balancer that will expose port 80 and target container port 5111.

# Pod definition
- name: helloworld
# Service definition
- port: 80
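Pieced together from the description above, the full manifest.yml will look roughly like the following. The label names and selector are assumptions for illustration; check the actual file in the cloned repository for the exact values:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
spec:
  replicas: 2                 # two copies of the pod
  selector:
    matchLabels:
      app: helloworld         # assumed label, may differ in the repo
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        image: 111122223333.dkr.ecr.us-east-1.amazonaws.com/helloworld:latest
        ports:
        - containerPort: 5111 # port the app listens on inside the container
---
apiVersion: v1
kind: Service
metadata:
  name: helloworld
spec:
  type: LoadBalancer          # provisions a public load balancer
  ports:
  - port: 80                  # port exposed by the load balancer
    targetPort: 5111          # forwarded to the container port
  selector:
    app: helloworld
```

The `---` separator lets the Deployment and the Service live in one file, so a single `kubectl apply -f` creates both objects.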
Now we are ready to deploy the application to Kubernetes. Make sure you change the Amazon ECR Account ID on the manifest above before proceeding.
We are going to work in the namespace "tests" for this post. Remember that namespaces help us keep pods organized and group them by business domain or affinity. So let’s create the namespace with this command:
kubectl create namespace tests
- Now that we have the namespace created, we are going to apply changes to Kubernetes using the manifest and specifying the newly created namespace.
kubectl apply -f manifest.yml -n tests
- Now you can check the deployment, the pods, and the service. Don’t forget to always pass the namespace parameter (-n). For pods, you should get two replicas of the same pod.
kubectl get pods -n tests
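Because the manifest requested two replicas, you should see two pods with generated name suffixes, something like this (your suffixes and ages will differ):

```
NAME                          READY   STATUS    RESTARTS   AGE
helloworld-7d9c6b8f4d-8xkq2   1/1     Running   0          1m
helloworld-7d9c6b8f4d-tzm5n   1/1     Running   0          1m
```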
- If you want to see details for a specific pod, you can run the following command, where podName is the name of the pod that you want to check. The output also includes the scheduling of that pod by the Kubernetes control plane and the history of all events.

kubectl describe pod **podName** -n tests
- You can also stream the logs from a specific pod by running the following command, specifying the pod name:

kubectl logs **podName** -f -n tests
I recommend storing the Kubernetes logs in CloudWatch for observability, but we are going to cover this in the next post. You can take a look at the official EKS documentation for more information.
Now, you can also check your service status and address to test if the application is running. The EXTERNAL-IP column contains the FQDN that you can use.

kubectl get services -n tests
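The output will look roughly like the following; the EXTERNAL-IP value below is an illustrative placeholder, yours will be the DNS name of the load balancer that AWS provisioned for the service:

```
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP                                              PORT(S)        AGE
helloworld   LoadBalancer   172.20.45.110   a1b2c3d4e5f6g7-1234567890.us-east-1.elb.amazonaws.com   80:31234/TCP   2m
```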
- Now, you can open the browser and test your application. Be sure to use http instead of https for this specific test. We are going to learn how to protect your API endpoints in a future post.
That simple "Hello Jose" from the API is the response to a call that the load balancer routed to one of the two pods, which then rendered "Hello Jose" in your browser as the output. I highly suggest trying this only locally and not exposing it to the Internet. We are going to learn how to expose endpoints to the outside world using other security layers, like API Gateways and Layer 7 load balancers, in the next posts.
- Now let’s prove why Kubernetes is a self-healing container orchestrator. We are going to delete one of the two pods and see what happens.
kubectl delete pod **podName** -n tests
As soon as a pod is deleted, Kubernetes will provision a replacement because the replica set has to be honored. If you want to watch the stream of pods and their status changes, you can add the -w parameter:

kubectl get pods -w -n tests
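With -w, you can watch the replacement pod come up in real time. The stream will look something like this (names and timings are illustrative):

```
NAME                          READY   STATUS              RESTARTS   AGE
helloworld-7d9c6b8f4d-8xkq2   1/1     Terminating         0          5m
helloworld-7d9c6b8f4d-p4wvx   0/1     ContainerCreating   0          2s
helloworld-7d9c6b8f4d-p4wvx   1/1     Running             0          6s
```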
- You can also set an autoscale rule for your deployment by running the following command, where --max is the maximum number of replicas that this HPA (Horizontal Pod Autoscaler) will handle, --min is the minimum number of replicas running, and --cpu-percent is the average CPU percentage across the current pods of this deployment. If that number is exceeded, the HPA will scale up.

kubectl autoscale deployment helloworld --max 10 --min 2 --cpu-percent 70 -n tests
- You can check the status of the HPA by running the following command:
kubectl get hpa -n tests
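If the deployment is mostly idle, the output will look something like this (TARGETS shows the current average CPU against the 70% threshold; you may see <unknown> until metrics are reported). Note that the HPA relies on the Kubernetes Metrics Server to read CPU usage, which on EKS you may need to deploy separately:

```
NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
helloworld   Deployment/helloworld   3%/70%    2         10        2          30s
```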
I hope you enjoyed this post as much as I enjoyed writing it! And I also hope that it helped tie together the previous posts. If everything went well, you learned how to deploy an application to Kubernetes, create services, create and work with namespaces, check object descriptions, review the logs of your application, scale your application, and create rules for autoscaling.
In the next post we are going to develop one of the core parts of Picturesocial, the API for image recognition and auto tagging using Amazon Rekognition!