Picturesocial - How to deploy an app to Kubernetes
Deploying an app to Kubernetes is like having that one recipe that always works: write it well once and you can reuse it for a lot of different dishes. In this post, we are going to learn about the Kubernetes manifest and the basic commands to deploy an app in an easy and reusable way.
- First, we have to build our container image, setting an image name and a tag. Once the image is created, we can test the container locally before going further.
- Once the container works properly, we have to push the image to a container registry. In our case, we are going to use Amazon ECR. All the required steps to get there are in our first post (a condensed sketch of this flow appears right after this list).
- When the container image is stored on the container registry, we have to create a Kubernetes manifest so we can send instructions to the cluster to create a pod, a replica set, and a service. We also tell the cluster where to retrieve the container image and how to update the application version. If you want to remember how, this is a good time to review the second post.
- Now, we are ready to create our Kubernetes cluster and get its credentials. This is done once per project. We learned how to create and connect to a Kubernetes cluster in our third post.
- And last but not least, we use kubectl to deploy our application to Kubernetes using the manifest. We are going to learn about this final step in the walk-through below.
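If you want a quick refresher before diving in, here is a condensed sketch of the build-and-push flow from the earlier posts. The repository name, tag, account ID, and region below are illustrative placeholders, not necessarily the exact values used in the series:

# Build and tag the container image locally (placeholder names)
docker build -t helloworld:latest .

# Authenticate Docker against Amazon ECR (placeholder account ID and region)
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 111122223333.dkr.ecr.us-east-1.amazonaws.com

# Tag and push the image to the ECR repository
docker tag helloworld:latest 111122223333.dkr.ecr.us-east-1.amazonaws.com/helloworld:latest
docker push 111122223333.dkr.ecr.us-east-1.amazonaws.com/helloworld:latest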
For this walk-through, you will need the following prerequisites:
- An AWS Account.
- If you are using Linux or macOS, you can continue to the next bullet point. If you are using Microsoft Windows, I suggest you use WSL2.
- Install Git.
- Install Kubectl.
- Install AWS CLI 2.
- First, we are going to check that kubectl is correctly installed by running:
kubectl version
Major:"1", Minor:"23"
to run this walk-through. Otherwise, I suggest you to upgrade the version first.- When you created the cluster, you also run a command to update the
kubeconfig
file. You don’t have to run it again, but just a friendly reminder that it is a necessary step to continue further.
aws eks --region $(terraform output -raw region) update-kubeconfig --name $(terraform output -raw cluster_name)
- That command updates the kubeconfig file in your local terminal with: a) the cluster name, b) the Kubernetes API URL, and c) the key to connect. That file is saved by default in ~/.kube/config. You can see an example below, from the Kubernetes official documentation:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: fake-ca-file
    server: https://1.2.3.4
  name: development
- cluster:
    insecure-skip-tls-verify: true
    server: https://5.6.7.8
  name: scratch
contexts:
- context:
    cluster: development
    namespace: frontend
    user: developer
  name: dev-frontend
- context:
    cluster: development
    namespace: storage
    user: developer
  name: dev-storage
- context:
    cluster: scratch
    namespace: default
    user: experimenter
  name: exp-scratch
current-context: ""
kind: Config
preferences: {}
users:
- name: developer
  user:
    client-certificate: fake-cert-file
    client-key: fake-key-file
- name: experimenter
  user:
    password: some-password
    username: exp
## Source: https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters
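If you want to double-check which cluster and context your terminal is currently pointing at, kubectl can read that same file for you. These are standard kubectl subcommands, no extra setup assumed:

# List all contexts defined in your kubeconfig and mark the active one
kubectl config get-contexts

# Print only the name of the context currently in use
kubectl config current-context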
- With this kubeconfig, we have established a trust relationship between your terminal and Kubernetes that works through kubectl.
- Next, let's look at the workers for this cluster by running the command below. The command will return the 3 workers that we created, the version of Kubernetes, and the age of each worker since creation or last upgrade. Each worker is on a different subnet, and those subnets belong to 3 different availability zones.
kubectl get nodes
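If you want to confirm the availability zone of each worker, you can ask kubectl to print the zone label that EKS applies to every node (assuming the standard topology.kubernetes.io/zone label is present on your nodes):

# Show each node together with its availability zone label
kubectl get nodes --label-columns topology.kubernetes.io/zone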
- In addition to nodes, you can also check for pods by running the command below. But keep in mind that we haven't deployed anything yet. Also, if you don't specify a namespace in the command, it will return everything from the "default" namespace.
kubectl get pods
- We can also list the pods in all namespaces, including the ones that Kubernetes needs to run properly, by adding the --all-namespaces parameter.
kubectl get pods --all-namespaces
- Similarly with services, we can check all the services in the cluster. As you can see, you have the default kubernetes service that handles requests to the Kubernetes API and the kube-dns service that handles the calls to the coredns of the cluster. It's important that we don't edit or delete any of those services or pods, believe me :)
kubectl get services --all-namespaces
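If you are curious about the pods behind those system services, they live in the kube-system namespace. A quick, read-only way to peek at them:

# List the system pods, including the coredns pods behind kube-dns
kubectl get pods -n kube-system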
- We can also check the replica sets of the cluster by running this command:
kubectl get rs --all-namespaces
- Now let's deploy the container that we created in our first post. I have prepared a branch with everything that you will need, so we are going to clone it first and position ourselves in the folder that we are going to use.
git clone https://github.com/aws-samples/picture-social-sample.git -b ep4
cd picture-social-sample/HelloWorld
- Now, let's open the file manifest.yml. That file contains the Kubernetes manifest. This manifest will create a deployment called helloworld with a pod running the container stored at 111122223333.dkr.ecr.us-east-1.amazonaws.com/helloworld, replicated twice. The manifest also includes a service with a public load balancer that will expose port 80 and target container port 5111.
#########################
# Pod definition
#########################
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
  labels:
    app: helloworld
spec:
  replicas: 2
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        image: 111122223333.dkr.ecr.us-east-1.amazonaws.com/helloworld
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
---
#########################
# Service definition
#########################
kind: Service
apiVersion: v1
metadata:
  name: helloworld-lb
spec:
  selector:
    app: helloworld
  ports:
    - port: 80
      targetPort: 5111
  type: LoadBalancer
- Now we are ready to deploy the application to Kubernetes. Make sure you change the Amazon ECR account ID in the manifest above before proceeding.
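Before applying it for real, you can optionally let kubectl parse the manifest with a client-side dry run (available in recent kubectl versions). This only validates the file locally and does not create anything in the cluster:

# Validate the manifest locally without creating any resources
kubectl apply -f manifest.yml --dry-run=client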
- We are going to work in the namespace "tests" for this post. Remember that namespaces help us organize the pods and group them by business domain or affinity. So let's create the namespace with this command:
kubectl create namespace tests
- Now that we have the namespace created, we are going to apply changes to Kubernetes using the manifest and specifying the newly created namespace.
kubectl apply -f manifest.yml -n tests
- Now you can check the deployment, the pods, and the service. Don't forget to always pass the namespace parameter. For the pods, you should get two replicas of the same pod.
kubectl get pods -n tests
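To check the deployment itself and wait for the rollout to finish, you can also run the following standard kubectl commands as an optional extra step:

# Show the helloworld deployment in the tests namespace
kubectl get deployments -n tests

# Block until the rollout has completed (or failed)
kubectl rollout status deployment/helloworld -n tests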
- If you want to see details of a specific pod, you can run the following command, where podName is the name of the pod that you want to check. The output also includes the scheduling from the Kubernetes control plane to that specific pod and the history of all events.
kubectl describe pod podName -n tests
- You can also stream the logs from a specific pod by running the following command and specifying the podName.
kubectl logs podName -f -n tests
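If you don't want to hunt for an individual pod name, you can also stream logs from every pod of the deployment at once by selecting on the app=helloworld label (this relies on the label defined in the manifest above):

# Stream logs from all pods that carry the app=helloworld label
kubectl logs -l app=helloworld -f -n tests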
- I recommend storing the Kubernetes logs in CloudWatch for observability, but we are going to cover that in the next post. You can take a look at the official EKS documentation for more information.
- Now, you can also check your service status and address to test if the application is running. The EXTERNAL-IP column contains the FQDN address that you can use.
kubectl get services -n tests
- Now, you can open the browser and test your application. Be sure to use http instead of https for this specific test. We are going to learn how to protect your API endpoints in a future post.
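If you prefer the terminal, you can run the same check with curl against the load balancer address reported by the previous command. The placeholder below stands for your own EXTERNAL-IP value, and the exact path depends on the routes your application exposes:

# Replace <EXTERNAL-IP> with the address from kubectl get services -n tests
curl -i http://<EXTERNAL-IP>/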
- Now let’s prove why Kubernetes is a self-healing container orchestrator. We are going to delete one of the two pods and see what happens.
kubectl delete pod podName -n tests
- If you list the pods again, you will see that Kubernetes automatically creates a new pod to replace the deleted one. You can add the -w parameter to watch for changes.
kubectl get pods -w -n tests
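The same desired-state mechanism is what makes manual scaling trivial: if you change the replica count, Kubernetes creates or removes pods until reality matches the spec. A quick, optional experiment:

# Scale the deployment to 3 replicas and watch the extra pod appear
kubectl scale deployment helloworld --replicas=3 -n tests
kubectl get pods -w -n tests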
- You can also set an autoscale rule for your deployment by running the following command, where --max is the maximum number of replicas that this HPA (Horizontal Pod Autoscaler) will run, --min is the minimum number of replicas running, and --cpu-percent is the target CPU percentage across the current pods of this deployment. When that value is exceeded, it will scale up.
kubectl autoscale deployment helloworld --max 10 --min 2 --cpu-percent 70 -n tests
- You can check the status of the HPA by running the following command:
kubectl get hpa -n tests
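If you prefer to keep the autoscaling rule in version control instead of creating it imperatively, the same rule can be expressed as a manifest. This is an equivalent sketch using the autoscaling/v1 API; note that CPU-based autoscaling relies on the Kubernetes Metrics Server being installed in the cluster.

# Declarative equivalent of the kubectl autoscale command above
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: helloworld
  namespace: tests
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: helloworld
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70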
Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.