Installing and Configuring Karpenter on Fargate for Autoscaling in Amazon EKS

Doing less to improve the efficiency and cost of running workloads.

Olawale Olaleye
Amazon Employee
Published Jan 19, 2024
Last Modified Jul 25, 2024
Customers seeking to architect their Kubernetes clusters for best practices make the most of autoscaling, an important concept in the AWS Well-Architected Framework. As workloads grow, often with changing compute capacity requirements, organizations want to adapt to these changes while selecting the resource types and sizes optimized for workload requirements, ultimately avoiding unnecessary costs.
Karpenter is an open-source cluster autoscaler that automatically provisions new nodes in response to unschedulable pods in a Kubernetes cluster. With Karpenter, you don’t need to create several node groups to achieve flexibility and diversity when you want to isolate nodes based on operating system or compute type. For example, you may have a cluster that consists of GPU, CPU, and Habana Gaudi accelerator instance types. To achieve this without Karpenter, you would need to create a dedicated node group for each instance type and use nodeSelectors to enforce node selection constraints.
When you deploy Karpenter in your Kubernetes cluster, it installs the Karpenter controller and a webhook pod that must be in the Running state before the controller can be used to scale your cluster. This would require a minimum of one small node group with at least one worker node. As an alternative, you can run these pods on EKS Fargate by creating a Fargate profile for the karpenter namespace. Doing so causes all pods deployed into this namespace to run on EKS Fargate.
EKS Cluster with Fargate and Karpenter
About
  • ✅ AWS experience: 200 - Intermediate
  • ⏱ Time to complete: 30 minutes
  • 🧩 Prerequisites: AWS Account

Prerequisites

Before you begin this tutorial, you need to:
  • Install the latest version of kubectl. To check your version, run: kubectl version.
  • Install the latest version of eksctl. To check your version, run: eksctl info.
  • Install the latest version of the Helm CLI. To check your version, run: helm version.

Overview

Using the eksctl cluster template that follows, you'll build an Amazon EKS cluster with a Fargate profile that provides the compute capacity needed to run the core cluster components in the karpenter and kube-system namespaces. It configures the following components:
  • Fargate Profile: AWS Fargate is a compute engine for EKS that removes the need to configure, manage, and scale EC2 instances. Fargate provides Availability Zone spread while removing the complexity of managing EC2 infrastructure, and works to ensure that the pods of a replicated service are balanced across Availability Zones.
  • Authentication: The IAM Roles for Service Accounts (IRSA) mappings needed to enable communication between Kubernetes pods and AWS services. This includes the role for the Karpenter controller, which is responsible for provisioning the EC2 compute capacity needed in the cluster. Additionally, an OpenID Connect (OIDC) endpoint enables seamless and secure communication.
  • Identity Mappings: Necessary mapping for the Karpenter IAM principal and the required cluster's role-based access control (RBAC) configuration in the Amazon EKS control plane.
  • Sample Application Deployment: A sample deployment used to validate that Karpenter autoscales the compute capacity as pod counts increase.

Step 1: Create the Cluster

In this section, you will deploy the base infrastructure using CloudFormation, which Karpenter needs for its core functions. You will then create the cluster config, defining the settings for a Fargate profile that provides the compute capacity needed by Karpenter and by core cluster components such as CoreDNS.
To create the cluster
  1. Copy and paste the content below in your terminal to define your environment variable parameters:
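The original code listing is no longer shown here, so the following is a minimal sketch of the variables the later steps rely on. The cluster name, region, and version values are placeholder assumptions; substitute your own.

```bash
# Example values only -- adjust for your environment.
export KARPENTER_NAMESPACE="karpenter"
export KARPENTER_VERSION="0.37.0"     # Karpenter release to install (assumed)
export K8S_VERSION="1.29"             # EKS Kubernetes version (assumed)
export CLUSTER_NAME="karpenter-demo"  # hypothetical cluster name
export AWS_DEFAULT_REGION="us-west-2"
# Resolve your account ID (requires configured AWS CLI credentials)
export AWS_ACCOUNT_ID="$(aws sts get-caller-identity --query Account --output text 2>/dev/null || true)"
```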
  2. Copy and paste the content below in your terminal to use CloudFormation to set up the infrastructure needed by the EKS cluster.
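The stripped listing likely resembled the stack deployment from the Karpenter getting-started guide; the template URL below follows that guide's published path and should be treated as an assumption to verify against the Karpenter docs for your version.

```bash
# Download the Karpenter CloudFormation template (URL per the
# getting-started guide; verify for your KARPENTER_VERSION)
curl -fsSL "https://raw.githubusercontent.com/aws/karpenter-provider-aws/v${KARPENTER_VERSION}/website/content/en/docs/getting-started/getting-started-with-karpenter/cloudformation.yaml" \
  > cloudformation.yaml

# Deploy the stack: node IAM role, controller policy, SQS
# interruption queue, and EventBridge rules
aws cloudformation deploy \
  --stack-name "Karpenter-${CLUSTER_NAME}" \
  --template-file cloudformation.yaml \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameter-overrides "ClusterName=${CLUSTER_NAME}"
```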
CloudFormation for EKS Cluster with Fargate and Karpenter
Above is a graphical representation of the interrelationships between the resources, such as the SQS queue, EventBridge rules, and IAM role, that the CloudFormation template deploys.
Now we're ready to create our Amazon EKS cluster. This process takes several minutes to complete. If you'd like to monitor the status, open the AWS CloudFormation console, changing the Region if you are creating the cluster in a different AWS Region.
  3. Copy and paste the content below in your terminal to create the Amazon EKS cluster.
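The original cluster config is not shown, so here is a sketch modeled on the Karpenter getting-started guide's Fargate variant. It assumes the environment variables from step 1 and the KarpenterControllerPolicy and KarpenterNodeRole resources created by the CloudFormation stack.

```bash
eksctl create cluster -f - <<EOF
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ${CLUSTER_NAME}
  region: ${AWS_DEFAULT_REGION}
  version: "${K8S_VERSION}"
  tags:
    karpenter.sh/discovery: ${CLUSTER_NAME}   # used by Karpenter to find subnets/SGs
iam:
  withOIDC: true                              # OIDC endpoint for IRSA
  serviceAccounts:
    - metadata:
        name: karpenter
        namespace: "${KARPENTER_NAMESPACE}"
      roleName: ${CLUSTER_NAME}-karpenter
      attachPolicyARNs:
        - arn:aws:iam::${AWS_ACCOUNT_ID}:policy/KarpenterControllerPolicy-${CLUSTER_NAME}
      roleOnly: true
iamIdentityMappings:
  # Map the node role so Karpenter-launched instances can join the cluster
  - arn: "arn:aws:iam::${AWS_ACCOUNT_ID}:role/KarpenterNodeRole-${CLUSTER_NAME}"
    username: system:node:{{EC2PrivateDNSName}}
    groups:
      - system:bootstrappers
      - system:nodes
fargateProfiles:
  # Run Karpenter and core components on Fargate -- no managed node group
  - name: karpenter
    selectors:
      - namespace: "${KARPENTER_NAMESPACE}"
  - name: kube-system
    selectors:
      - namespace: kube-system
EOF
```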
  4. Verify the cluster creation by executing the commands below in your terminal:
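A straightforward way to verify is to confirm that the API server responds and that the kube-system pods are running on Fargate nodes:

```bash
# Fargate-backed nodes appear with names like fargate-ip-...
kubectl get nodes
# CoreDNS should be Running, scheduled on the Fargate nodes
kubectl get pods --namespace kube-system -o wide
```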

Step 2: Install Karpenter on the Cluster

Upon successful verification of the cluster creation, we are now ready to install Karpenter in the cluster.
  1. Log out of the Helm registry to perform an unauthenticated pull against the public ECR, in case you were previously logged in:
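This is the standard logout command from the Karpenter install instructions; it is safe to run even if you were never logged in:

```bash
helm registry logout public.ecr.aws || true
```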
  2. Proceed to install Karpenter with the command below:
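The stripped listing likely matched the Helm invocation from the Karpenter getting-started guide, sketched below. The chart values shown (cluster name, interruption queue, and the IRSA role annotation) assume the names created in step 1.

```bash
helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter \
  --version "${KARPENTER_VERSION}" \
  --namespace "${KARPENTER_NAMESPACE}" --create-namespace \
  --set "settings.clusterName=${CLUSTER_NAME}" \
  --set "settings.interruptionQueue=${CLUSTER_NAME}" \
  --set "serviceAccount.annotations.eks\.amazonaws\.com/role-arn=arn:aws:iam::${AWS_ACCOUNT_ID}:role/${CLUSTER_NAME}-karpenter" \
  --set controller.resources.requests.cpu=1 \
  --set controller.resources.requests.memory=1Gi \
  --wait
```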
When the previous command completes, verify that the Karpenter pods and CoreDNS, a core cluster component, are in the Running state, with Fargate providing the compute capacity, using the following commands:
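The wide output lets you confirm that the pods are scheduled on Fargate nodes:

```bash
kubectl get pods --namespace "${KARPENTER_NAMESPACE}" -o wide
kubectl get pods --namespace kube-system -o wide
```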
Sample output:
The previous output shows that Karpenter is healthy and ready to start provisioning the EC2 compute capacity required by the workloads we will deploy in the cluster.

Step 3: Create a default NodePool

After installing Karpenter, you need to set up a default NodePool. The NodePool sets constraints on the nodes that Karpenter can create and on the pods that can be scheduled on those nodes.
  1. Copy and paste the content below in your terminal to create the default NodePool for the cluster:
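The original manifest is not shown; the sketch below follows the v1beta1 NodePool and EC2NodeClass APIs current at the time of writing, constrained to Spot capacity as described in the next paragraph. The selector tags assume the karpenter.sh/discovery tag applied when creating the cluster.

```bash
kubectl apply -f - <<EOF
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot"]          # provision EC2 Spot Instances
      nodeClassRef:
        apiVersion: karpenter.k8s.aws/v1beta1
        kind: EC2NodeClass
        name: default
  limits:
    cpu: 1000                       # cap total provisioned vCPUs
  disruption:
    consolidationPolicy: WhenUnderutilized
---
apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: default
spec:
  amiFamily: AL2
  role: "KarpenterNodeRole-${CLUSTER_NAME}"   # from the CloudFormation stack
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: "${CLUSTER_NAME}"
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: "${CLUSTER_NAME}"
EOF
```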
This NodePool will create EC2 Spot Instances to provide the capacity for pods in namespaces other than kube-system, which runs core cluster components, and karpenter, which runs the Karpenter pods on Fargate.

Step 4: Autoscaling Demo

  1. Deploy the sample application with zero replicas using the command below:
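A common choice for this demo, used in the Karpenter getting-started guide, is an "inflate" deployment of pause containers; the manifest below is a sketch along those lines. The CPU request forces Karpenter to provision real capacity when the deployment is scaled up.

```bash
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 0                 # start with zero; we scale up later
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      containers:
        - name: inflate
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
          resources:
            requests:
              cpu: 1          # each replica demands a full vCPU
EOF
```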
  2. Now we are ready to observe how Karpenter autoscales and provisions the EC2 compute capacity needed by pods. Open a second terminal and run the command below to monitor Karpenter:
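Streaming the controller logs shows the provisioning decisions as they happen:

```bash
kubectl logs -f -n "${KARPENTER_NAMESPACE}" \
  -l app.kubernetes.io/name=karpenter -c controller
```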
  3. Back in the first terminal, run the command below to scale the workload, and watch the Karpenter controller logs in the other terminal:
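Scaling the deployment creates unschedulable pods, which triggers Karpenter (the deployment name assumes the inflate sample above):

```bash
kubectl scale deployment inflate --replicas 5
```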
  4. The previous command launches 5 pods that need to be scheduled on EC2 worker nodes. Verify that Karpenter has launched the pods on EC2 worker nodes:
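One way to check, assuming the default NodePool and inflate deployment sketched earlier:

```bash
# Karpenter-provisioned nodes carry the karpenter.sh/nodepool label
kubectl get nodes -l karpenter.sh/nodepool=default
# The pods should be Running on those EC2 nodes, not on Fargate
kubectl get pods -l app=inflate -o wide
```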
Sample output:

Clean Up

To avoid incurring future charges, you should delete the resources created during this tutorial.
Delete the sample deployment:
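Assuming the inflate sample deployment from the demo:

```bash
kubectl delete deployment inflate
```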
Uninstall Karpenter:
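```bash
helm uninstall karpenter --namespace "${KARPENTER_NAMESPACE}"
```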
Delete the infrastructure deployed with CloudFormation:
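Assuming the stack name used in Step 1:

```bash
aws cloudformation delete-stack --stack-name "Karpenter-${CLUSTER_NAME}"
```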
You can delete the EKS cluster with the following command:
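```bash
eksctl delete cluster --name "${CLUSTER_NAME}" --region "${AWS_DEFAULT_REGION}"
```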
Upon completion, you should see the following response output:

Conclusion

In this article, you've successfully set up an Amazon EKS cluster with Karpenter deployed on Fargate to autoscale the cluster with the EC2 compute capacity needed as your workload increases. With Fargate providing the compute capacity for the Karpenter pods, you don't need to manage those nodes yourself, and Karpenter manages the lifecycle of the other nodes in the EKS cluster. To explore more tutorials, check out Navigating Amazon EKS.
This article was co-authored with Ryan French.
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
