Deploying Highly Available Microsoft SQL Server Containers in Amazon EKS with Portworx Cloud Native Storage


In this blog post, we delve into the implementation of Microsoft SQL Server on containers utilizing Amazon Elastic Kubernetes Service (Amazon EKS). The strategies and concepts outlined here are applicable to any stateful application that requires high availability (HA) and durability, along with a streamlined and repeatable DevOps approach. Use cases include database platforms such as MongoDB, Apache Cassandra, and MySQL, as well as big data processing workloads.

Support for SQL Server in containers was first introduced with SQL Server 2017, enabling production workloads to run inside Linux containers managed by Kubernetes (often referred to as K8s). Microsoft SQL Server is among the most widely used database engines today. While it offers numerous features and has a robust community, it often requires more maintenance and can be costlier than cloud-based or open-source database alternatives. To mitigate these expenses, some organizations explore open-source solutions to lower their licensing costs, while others opt to transition their workloads to managed relational database management systems (RDBMS), such as Amazon RDS for SQL Server or Amazon Aurora.

However, there are instances when an organization may be unwilling or unable to transition away from the SQL Server engine. This could stem from various factors, such as unreasonable rework and associated costs or a lack of expertise within their teams of developers, IT administrators, and engineers. Some businesses may also find themselves unable to leverage a managed cloud service due to licensing and support agreements or specific technical needs.

In such cases, it remains feasible to harness the numerous advantages of the cloud by deploying SQL Server databases on Amazon Elastic Compute Cloud (Amazon EC2) instances. This method preserves the flexibility necessary to meet particular requirements while still delivering many cloud benefits, including complete abstraction from hardware and physical infrastructure, pay-as-you-go pricing with no upfront commitment, and seamless integration with other services. Still, while this is a better option than an on-premises SQL Server deployment, self-managed DB instances carry more management overhead than a managed service, which leaves room for improvement.

Utilizing Kubernetes for SQL Server

Running SQL Server on Kubernetes offers several compelling advantages:

  • Simplicity: Deploying and managing SQL Server workloads in containers is significantly easier and quicker than traditional methods. Deployment is swift and requires no installation, upgrades are as simple as rolling out a new container image, and containers provide an abstraction layer that can run in any environment.
  • Optimized Resource Utilization: Containerizing SQL Server workloads facilitates high density, allowing multiple internal enterprise workloads to share a common resource pool (memory, CPU, and storage). This reduces wasted capacity and enhances infrastructure efficiency.
  • Reduced Licensing Costs: In some scenarios, operating SQL Server within containers, whether at high or low density, can decrease overall licensing expenses. We will elaborate on this in the licensing section.

Available Container Services

Currently, AWS offers four primary container services:

  1. Amazon Elastic Container Service (Amazon ECS): A scalable, high-performance orchestration service seamlessly integrated with various AWS services.
  2. Amazon Elastic Kubernetes Service (Amazon EKS): A managed service from AWS that simplifies the deployment, management, and scaling of containerized applications via Kubernetes.
  3. AWS Fargate: A compute engine for Amazon ECS that allows you to run containers without managing servers or clusters.
  4. Amazon Elastic Container Registry (Amazon ECR): A fully managed Docker container registry that simplifies the storage, management, and deployment of Docker container images.

A fundamental principle in AWS architecture is Multi-AZ deployment, which fosters highly resilient and high-performance workloads. While you can use Amazon Elastic Block Store (Amazon EBS) volumes directly for SQL Server containers, EBS volumes are bound to a single Availability Zone, which would confine those containers to that zone. This blog post illustrates how Portworx can address this limitation.

Portworx, an AWS Partner and a Microsoft high availability and disaster recovery partner, enables SQL Server to operate in HA mode across multiple AWS Availability Zones within your EKS cluster. It can also function in highly available configurations across AWS Auto Scaling groups. When utilized as a storage solution for SQL Server instances, this feature guarantees storage availability, a critical factor for the high availability of containerized SQL Server instances.
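
To make this concrete, the following StorageClass is a minimal sketch of how Portworx replication is typically requested; the provisioner name and parameters (repl, io_profile, priority_io) follow common Portworx conventions but are assumptions here, so verify them against the Portworx release you deploy.

```yaml
# Minimal sketch of a Portworx-backed StorageClass for SQL Server volumes.
# Parameter names follow common Portworx conventions; verify against your release.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-mssql-sc              # assumed name, choose your own
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "3"                      # keep three replicas so data survives an AZ failure
  io_profile: "db"               # tune the volume for database-style I/O
  priority_io: "high"
allowVolumeExpansion: true
```

With replicas spread across Availability Zones, a SQL Server pod can be rescheduled into another zone and still find its data available.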

This article outlines how to deploy SQL Server workloads in production with Amazon EKS and Portworx cloud native storage, which is supported by Amazon EBS volumes. We provide a sample script that automates the deployment process, allowing you to set up your SQL Server instances in mere minutes.
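
The script essentially applies a small set of Kubernetes manifests. The sketch below shows roughly what the core pieces look like; the resource names (mssql-data, mssql-deployment, mssql-secret) are placeholders rather than the script's actual names, and the StorageClass refers to the Portworx example above.

```yaml
# Persistent volume claim backed by the Portworx StorageClass.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-data                      # placeholder name
spec:
  storageClassName: px-mssql-sc         # assumes the StorageClass sketched earlier
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
# Single-replica SQL Server Deployment; on node failure Kubernetes reschedules
# the pod and Portworx serves the replicated data in the new Availability Zone.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment                # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  strategy:
    type: Recreate                      # never run two instances against the same data
  template:
    metadata:
      labels:
        app: mssql
    spec:
      containers:
        - name: mssql
          image: mcr.microsoft.com/mssql/server:2017-latest
          ports:
            - containerPort: 1433
          env:
            - name: ACCEPT_EULA
              value: "Y"
            - name: SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mssql-secret    # placeholder Secret holding the sa password
                  key: SA_PASSWORD
          volumeMounts:
            - name: mssql-data
              mountPath: /var/opt/mssql # SQL Server's data directory in the container
      volumes:
        - name: mssql-data
          persistentVolumeClaim:
            claimName: mssql-data
```

Applying manifests of this shape with kubectl is all it takes to bring an instance online; the sample script wraps steps like these together with the EKS cluster and Portworx setup so the whole deployment completes in minutes.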

Advantages of Running SQL Server in Containers

The primary benefit of utilizing containers is their simplicity and elegance. There is no requirement to install SQL Server or configure a failover cluster. SQL Server containers can be deployed with a single command, and Kubernetes automatically restarts or reschedules failed containers, providing high availability for your SQL Server deployments. In certain cases, the availability of SQL Server instances in a container on Kubernetes could surpass that of workloads running on a failover cluster. For further details, refer to the “High Availability” section later in this article.

Nonetheless, the key advantage of running SQL Server in containers lies in high-density deployments and resource sharing. Unlike virtual machines (VMs), containers are not confined to a set amount of resources for their entire runtime. Instead, they can dynamically utilize a shared resource pool, enabling them to consume varying amounts of resources at different times. As long as the total resource use remains below the available pool, all containers receive the resources they require.

As illustrated in the accompanying diagram, a VM that is running at full capacity of its allocation cannot borrow idle resources elsewhere on the same host. In this instance, there are two physical hosts with eight CPU cores in total. Even though three cores sit idle, VM 4 and VM 7 remain resource-constrained.

Imagine an alternative approach where physical hosts are no longer a concern. In this scenario, you provision a single VM with the aggregate resources necessary to run multiple applications, such as several SQL Server instances. Furthermore, your applications can share all these resources without contention. This concept is depicted on the right side of the diagram, showcasing containers running on an Amazon EC2 instance.

With this solution, no container is resource-constrained, and the overall number of cores required is reduced from eight to six physical cores. The same principle applies to available memory, highlighting the efficiency of containerization.
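
Kubernetes expresses this sharing model through resource requests and limits: the request is a container's guaranteed baseline, while the limit lets it burst into whatever capacity is currently idle in the shared pool. The values in the sketch below are chosen purely for illustration.

```yaml
# Illustrative resource settings: the request is guaranteed, the limit is the
# ceiling the container may burst to when the node has idle capacity.
apiVersion: v1
kind: Pod
metadata:
  name: mssql-burstable-example        # example name only
spec:
  containers:
    - name: mssql
      image: mcr.microsoft.com/mssql/server:2017-latest
      resources:
        requests:
          cpu: "2"                     # guaranteed two cores
          memory: 8Gi
        limits:
          cpu: "4"                     # may use up to four cores if they are idle
          memory: 16Gi
```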


Chanci Turner