Amazon EKS Managed Node Groups

From our initial discussions with users, our vision has always been to ensure that Amazon Elastic Kubernetes Service (EKS) delivers the finest managed Kubernetes experience available in the cloud. With the launch of EKS, we began by providing a managed Kubernetes control plane, but this was just the beginning. Today, we are thrilled to introduce the next phase of the Amazon EKS service: EKS managed node groups.

In this article, we will discuss the rationale behind the development of EKS managed node groups, guide you on how to create and oversee worker nodes for your clusters, and outline what lies ahead in our mission to simplify running Kubernetes on AWS.

Overview

Managed node groups streamline the process of adding worker nodes (EC2 instances) that supply compute capacity for your clusters. You can create, update, scale, or terminate nodes for your cluster using a single command via the EKS console, eksctl, AWS CLI, AWS API, or infrastructure-as-code tools like CloudFormation and Terraform.
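
For example, with eksctl (a minimal sketch; my-cluster and ng0 are placeholder names, and exact flags may vary by eksctl version), adding a managed node group to an existing cluster is a one-liner:

$ eksctl create nodegroup \
    --cluster my-cluster \
    --name ng0 \
    --managed \
    --nodes 2 --nodes-min 1 --nodes-max 4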

Each managed node group launches an Auto Scaling group (ASG) for your cluster, which can span multiple Availability Zones. EKS handles rolling updates and drains nodes before termination, so your applications remain highly available.

All nodes operate using the latest EKS-optimized AMIs within your AWS account, and there are no extra charges for using EKS managed node groups. You only incur costs for the AWS resources you provision, such as EC2 instances or EBS volumes.

Before diving into the specifics of how managed node groups function, let’s take a moment to conceptualize how they transform the management of your EKS clusters.

The Extended EKS API

Managed node groups introduce several new capabilities to the EKS API.

Previously, as depicted on the left, the EKS API provided a highly available control plane across multiple availability zones (AZs), including logging and least privilege access (IAM) support at the pod level. After establishing your control plane, you would utilize eksctl, CloudFormation, or other tools to create and manage the EC2 instances for your cluster. Now, with the EKS API’s expansion, we can natively manage the Kubernetes data plane, as illustrated on the right. We’ve elevated node groups to first-class status in the EKS management console, simplifying the management and visualization of the infrastructure used to operate your cluster from a single location.

Why is this Important?

This development lays the foundation for offering you a fully managed data plane, enabling us to handle everything from security patches to Kubernetes version updates, as well as monitoring and alerting. It brings us closer to our vision of delivering production-ready clusters while alleviating the burdens of undifferentiated heavy lifting. Let’s delve into the new features in the EKS API.

New API Commands

Let’s briefly explore the new EKS node group API from the AWS CLI perspective. For simplicity, we will present the command usage conceptually, omitting additional parameters, such as --cluster-name, that you would typically include when executing a command.

To create a managed node group, you would use:

$ aws eks create-nodegroup

Node groups can only be created at the cluster's current Kubernetes version, and every node group starts on the latest AMI release for that minor Kubernetes version. Initially, managed worker nodes are available only for newly created Kubernetes version 1.14 clusters (platform version eks.3). There is a limit of 10 managed node groups per EKS cluster, with a maximum of 100 nodes per group, for a maximum of 1,000 managed nodes on a given EKS cluster.
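
Putting the omitted parameters back in, a complete invocation looks roughly like the following. The names, subnet IDs, and role ARN are placeholders; run aws eks create-nodegroup help for the authoritative parameter list:

$ aws eks create-nodegroup \
    --cluster-name my-cluster \
    --nodegroup-name ng0 \
    --subnets subnet-0abc1234 subnet-0def5678 \
    --node-role arn:aws:iam::111122223333:role/eks-node-role \
    --instance-types t3.medium \
    --ami-type AL2_x86_64 \
    --disk-size 20 \
    --scaling-config minSize=1,maxSize=4,desiredSize=2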

To view the managed node groups operating on a cluster, you can run:

$ aws eks list-nodegroups

For more information about a specific managed node group, you would use:

$ aws eks describe-nodegroup
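
For example, with the cluster name filled in (the list output below is abridged to the shape the API returns):

$ aws eks list-nodegroups --cluster-name my-cluster
{
    "nodegroups": [
        "ng0"
    ]
}
$ aws eks describe-nodegroup --cluster-name my-cluster --nodegroup-name ng0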

To modify the configuration of a node group, such as scaling parameters, you can execute:

$ aws eks update-nodegroup-config

While node groups are lightweight and largely immutable, you can adjust three parameters (see the CLI sketch after this list):

  1. Add or remove Kubernetes labels. Labels you change directly through the Kubernetes API or via kubectl are not reflected in the EKS API and will not be applied to new nodes launched for this group.
  2. Add or remove AWS tags. These tags apply to the node group object within the EKS API and can be used to control IAM access. Note that at initial launch, these tags do not propagate down to the EC2 resources created by the node group.
  3. Change the size of your node group (minimum, maximum, and desired number of nodes).
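
As a sketch, scaling a node group and adding a label in one call looks like this (values are placeholders; removing labels uses a removeLabels key in the same shorthand, per the CLI help):

$ aws eks update-nodegroup-config \
    --cluster-name my-cluster \
    --nodegroup-name ng0 \
    --scaling-config minSize=1,maxSize=6,desiredSize=3 \
    --labels 'addOrUpdateLabels={env=prod}'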

To remove a managed node group, you would execute:

$ aws eks delete-nodegroup

The draining of worker nodes is handled automatically when you delete a node group or update its version, as well as during ASG rebalancing or scaling. This is a significant improvement over self-managed nodes, where you would typically need DaemonSet deployments or Lambda functions to orchestrate graceful node termination. During version updates, EKS respects pod disruption budgets, so you control the node lifecycle and your critical pods stay up until they can be safely rescheduled.
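
As a minimal sketch, a PodDisruptionBudget like the one below (the app label and minAvailable value are placeholders) is enough for EKS to hold back node terminations until your replicas can be rescheduled safely. Note that Kubernetes 1.14 uses the policy/v1beta1 API group:

$ cat <<'EOF' | kubectl apply -f -
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2        # keep at least 2 pods running during drains
  selector:
    matchLabels:
      app: web           # pods this budget protects
EOF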

Please remember that you must delete node groups attached to a cluster before deleting the cluster itself; refer to the managed node groups documentation for more information.
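
A full teardown therefore looks like this (names are placeholders; the wait subcommand is available in recent AWS CLI versions):

$ aws eks delete-nodegroup --cluster-name my-cluster --nodegroup-name ng0
$ aws eks wait nodegroup-deleted --cluster-name my-cluster --nodegroup-name ng0
$ aws eks delete-cluster --name my-cluster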

EKS Console Walkthrough

One of the exciting features of managed node groups is the revamped EKS console. Let’s take a look.

To get started, you will need to create a new EKS cluster. The initial screen for configuring the cluster’s general settings, VPC/security groups selection, logging settings, etc., remains unchanged and is therefore not covered here. However, once your cluster is operational, you’ll need to create a Node IAM Role according to the EKS documentation.
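
If you prefer the CLI, here is a minimal sketch of creating that role (the role name is arbitrary; the three managed policies are the ones the EKS documentation prescribes for worker nodes):

$ cat > node-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
$ aws iam create-role \
    --role-name eks-node-role \
    --assume-role-policy-document file://node-trust-policy.json
$ aws iam attach-role-policy --role-name eks-node-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
$ aws iam attach-role-policy --role-name eks-node-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
$ aws iam attach-role-policy --role-name eks-node-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly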

Once the EKS cluster is active, meaning the control plane is operational, you will see the cluster overview screen, including the new Node Groups list beneath the general configuration:

Now, let’s create a node group using the default values by clicking the “Add node group” button:

In this step, provide the name of the node group (ng0, in our example), specify the role to use for the node group (the IAM role you created earlier), and choose the subnets for the node group.

The next step involves setting the compute configuration for the nodes:

When specifying the compute configuration, you can select the AMI (with or without GPU support), the instance type, and the disk size of the attached EBS for each node in the group.

Next, choose the scaling behavior:

Aside from the scaling configuration, tags, and labels, a node group cannot be modified once created. Before finalizing, review the node group and adjust any settings:

After a minute or so, the EC2 instances will be provisioned and will join your cluster as worker nodes; you should see this reflected in the cluster overview screen:

Let’s verify that we can see the nodes in Kubernetes as well:

$ kubectl get nodes
NAME                                           STATUS   ROLES    AGE     VERSION
ip-192-168-128-14.us-west-2.compute.internal   Ready    <none>   6m10s   v1.14.7-eks-1861c5
ip-192-168-192-6.us-west-2.compute.internal    Ready    <none>   6m3s    v1.14.7-eks-1861c5

Indeed, the two worker nodes that make up the node group ng0 are present and ready to run workloads.
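
Managed nodes also carry an eks.amazonaws.com/nodegroup label (per the managed node groups documentation), so you can confirm which group each node belongs to:

$ kubectl get nodes -L eks.amazonaws.com/nodegroup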
