Optimize AZ Traffic Costs Using Amazon EKS, Karpenter, and Istio


Businesses running Amazon Elastic Kubernetes Service (Amazon EKS) frequently face obstacles to operational efficiency and cost-effectiveness. Chief among these are the charges incurred by cross-Availability Zone (AZ) traffic, along with challenges in scaling smoothly, provisioning right-sized instances for nodes, and managing service networking through topology-aware load balancing. Left unaddressed, these issues raise operational costs and undermine high availability and effective resource scaling.

This post aims to offer a comprehensive solution by integrating Istio, Karpenter, and Amazon EKS to specifically tackle these challenges and pain points. The proposed approach is designed to optimize Cross AZ traffic costs, ensuring high availability in a cost-effective manner while facilitating efficient scaling of the underlying compute resources.

By following this implementation, you will learn how to combine Istio, Karpenter, and Amazon EKS to address scalability issues, provision right-sized instances, and manage service networking effectively, all while keeping tight control over operational expenses. The goal is to give you the knowledge needed to tune your Amazon EKS environments for better performance, scalability, and cost efficiency.

One effective strategy to minimize costs and ensure smooth operations is to distribute your workloads (i.e., pods) across multiple zones. Key Kubernetes features like Pod affinity, Pod anti-affinity, Node Affinity, and Pod Topology Spread can assist in this effort, and tools such as Istio and Karpenter further refine this approach.

Let’s delve into these Kubernetes constructs in depth:

Pod Affinity:

Pod affinity gives you the capability to influence Pod scheduling based on the presence or absence of other Pods. By defining specific rules, you can determine whether Pods should be co-located or dispersed across various AZs, thus optimizing network costs and potentially enhancing application performance.
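As a sketch (the `app: backend` label is illustrative, not from the repository), a preferred pod-affinity rule that co-locates Pods in the same AZ as Pods labeled `app: backend` could be added to a Pod spec like this:

```yaml
affinity:
  podAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: backend            # co-locate with Pods carrying this label
        topologyKey: topology.kubernetes.io/zone   # "same place" means same AZ
```

Using `preferredDuringSchedulingIgnoredDuringExecution` keeps the rule a soft preference, so Pods still schedule when no matching zone has capacity.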

Pod Anti-Affinity:

In contrast to Pod affinity, Pod anti-affinity keeps designated Pods from being scheduled onto the same node (or other topology domain). This is crucial for high availability and redundancy: a single node failure cannot take down every replica of an essential service.
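A sketch of a hard anti-affinity rule (the `app: frontend` label is illustrative) that forbids two replicas of the same app from sharing a node:

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: frontend              # keep replicas of this app apart
      topologyKey: kubernetes.io/hostname   # "apart" means different nodes
```

Swapping the `topologyKey` for `topology.kubernetes.io/zone` would instead spread the replicas across AZs.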

Node Affinity:

Whereas Pod affinity reasons about other Pods, node affinity constrains which nodes a Pod can be scheduled on, based on node labels such as instance type or availability zone. This helps you place Pods on suitable nodes, which can reduce costs or improve performance.
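For example (the zone names below assume a cluster in `us-west-2` and are illustrative), a node-affinity rule restricting a Pod to two specific AZs might look like:

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values: ["us-west-2a", "us-west-2b"]   # only schedule in these AZs
```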

Pod Topology Spread:

Pod Topology Spread allows for the even distribution of Pods across specified topology domains, such as nodes, racks, or AZs. Implementing topology spread constraints fosters better load balancing and fault tolerance, creating a more resilient and balanced cluster.
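As a sketch (the `app: frontend` label is illustrative), a topology spread constraint that keeps replicas evenly balanced across AZs can be added to a Pod spec like this:

```yaml
topologySpreadConstraints:
- maxSkew: 1                            # zones may differ by at most one Pod
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: ScheduleAnyway     # prefer, rather than require, even spread
  labelSelector:
    matchLabels:
      app: frontend
```

Setting `whenUnsatisfiable: DoNotSchedule` instead would make the spread a hard requirement.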

In addition to these Kubernetes features, tools like Istio and Karpenter contribute to the refinement of your Amazon EKS cluster. Istio addresses service networking challenges, while Karpenter focuses on right-sized instance provisioning—both critical for scaling efficiency and cost management.

Istio

Istio serves as a service mesh, utilizing the high-performance Envoy proxy to streamline the connection, management, and security of microservices. It simplifies traffic management, security, and observability, allowing developers to delegate these tasks to the service mesh while enabling detailed metric collection on network traffic between microservices through sidecar proxies.

Karpenter

Karpenter is tailored to deliver the appropriate compute resources to match your application’s needs in mere seconds, rather than minutes. It observes the aggregate resource requests of unschedulable pods and makes decisions to launch and terminate nodes to minimize scheduling latencies.

By combining Pod affinity, Pod anti-affinity, node affinity, and Pod topology spread with Istio and Karpenter, organizations can optimize network traffic within their Amazon EKS clusters, reducing cross-AZ traffic and potentially the costs that come with it.

Incorporating these best practices into your Amazon EKS cluster configuration can lead to an optimized, cost-effective, and highly available Kubernetes environment on AWS. Continuously assess your cluster’s performance and fine-tune the configuration as necessary to maintain the ideal balance between cost savings, high availability, and redundancy.

Optimize AZ Traffic Costs

We will deploy two versions of a simple application behind a ClusterIP service. Each version returns a distinct response for the same endpoint, so when you query the istio-ingress-gateway's LoadBalancer with the relevant application path, the response itself tells you which version, and therefore which destination, served the request. This makes the effects of the load-balancing configuration easy to observe later.
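Assuming illustrative names (a `helloworld` app with `version: v1` and `version: v2` Deployments; the actual manifests live in the repository linked below), the two Deployments can sit behind a single ClusterIP Service by selecting only the shared `app` label:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: helloworld
spec:
  type: ClusterIP
  selector:
    app: helloworld        # matches Pods from both the v1 and v2 Deployments
  ports:
  - name: http
    port: 5000
    targetPort: 5000
```

Each Deployment's Pod template then carries `app: helloworld` plus a distinguishing `version` label, which Istio can use to tell the two subsets apart.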

Solution Overview

We will utilize Istio’s weighted distribution feature by configuring a DestinationRule object to control traffic between AZs. The aim is to direct the majority of traffic from the load balancer to the pods operating in the same AZ.
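A minimal sketch of such a DestinationRule, assuming a cluster in `us-west-2` and a service named `helloworld` (both illustrative): Istio's locality load balancing only takes effect when `outlierDetection` is also configured, so the rule includes it.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: helloworld
spec:
  host: helloworld.default.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      localityLbSetting:
        enabled: true
        distribute:                      # keep most traffic zone-local
        - from: us-west-2/us-west-2a/*
          to:
            "us-west-2/us-west-2a/*": 90
            "us-west-2/us-west-2b/*": 5
            "us-west-2/us-west-2c/*": 5
    outlierDetection:                    # required for locality LB to activate
      consecutive5xxErrors: 1
      interval: 1s
      baseEjectionTime: 1m
```

The weights are a starting point: 90 percent of requests originating in `us-west-2a` stay in-zone, with the remainder spilling over to the other AZs for resilience.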

Prerequisites

  • eksdemo CLI
  • kubectl
  • Amazon EKS Cluster (version 1.28)
  • eks-node-viewer
  • Karpenter (version v0.32)
  • Istio (version 1.19.1)

Provision an Amazon EKS Cluster using the eksdemo command line interface (CLI):
eksdemo create cluster istio-demo -i m5.large -N 3

Install Karpenter to manage node provisioning for the cluster:
eksdemo install autoscaling-karpenter -c istio-demo

Install eks-node-viewer for visualizing dynamic node usage within the cluster. After installation, run eks-node-viewer in the terminal to observe the existing three nodes of your Amazon EKS cluster provisioned during creation.

Example output:

Install Istio version 1.19.1 using istioctl CLI and apply the demo profile:
istioctl install --set profile=demo

Verify that all the pods and services have been installed:
kubectl -n istio-system get pods

Example output:

Check that the Istio ingress gateway has a public external IP assigned:
kubectl -n istio-system get svc

Example output:

Let’s proceed with deploying the solution. All deployment configurations discussed in this post can be found in the GitHub repository – amazon-eks-istio-karpenter.

Deploy Karpenter NodePool and NodeClasses:

NodePools are essential components in Karpenter; they consist of a set of rules dictating the characteristics of the nodes Karpenter provisions and the pods that can be scheduled on them. Users can define specific taints to control pod placement, set temporary startup taints, restrict nodes to particular zones, instance types, and architectures, and even establish expiration defaults for nodes. It is vital to have at least one NodePool configured, as Karpenter relies on them for operation.
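A hedged sketch of a NodePool for Karpenter v0.32's `v1beta1` API (the zone values assume `us-west-2`, and the `default` EC2NodeClass it references is defined separately; the exact manifests are in the repository above):

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      nodeClassRef:
        name: default                    # EC2NodeClass defined elsewhere
      requirements:
      - key: karpenter.sh/capacity-type
        operator: In
        values: ["on-demand"]
      - key: kubernetes.io/arch
        operator: In
        values: ["amd64"]
      - key: topology.kubernetes.io/zone # restrict provisioning to these AZs
        operator: In
        values: ["us-west-2a", "us-west-2b", "us-west-2c"]
  limits:
    cpu: "100"                           # cap total provisioned CPU
  disruption:
    consolidationPolicy: WhenUnderutilized
```

Constraining the zone requirement is what lets Karpenter launch nodes in the AZs where your topology-spread Pods need capacity.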

