Securing containerized applications has become a priority as organizations move business-critical workloads into containers. Attackers have taken notice, and many companies are not yet prepared to properly protect containers from malicious code and other threats. Microsoft recently introduced a new Pod Sandboxing feature for Azure Kubernetes Service (AKS), now in preview and based on Kata Containers, to help with these challenges.
Container security threats
Hackers continue to compromise containerized environments with malicious code, targeting them to steal proprietary data and software. One recent example is an attack campaign dubbed SCARLETEEL, in which attackers exploited containerized workloads hosted in AWS accounts to steal data and credentials.
In the SCARLETEEL attack, the intruders deployed cryptomining software, either to generate profit or to distract defenders. How was the attack carried out? The attackers infected public-facing, self-managed Kubernetes clusters hosted in AWS accounts.
Once the cryptomining payload was launched, the attackers used a Bash script to exfiltrate credentials and other sensitive data. They also disabled CloudTrail logging to further hide their tracks as they compromised the environment and exfiltrated data. In addition, as noted in the recent SCARLETEEL analysis, the attackers attempted to use a Terraform state file connected to other AWS accounts to compromise additional resources.
This attack and others like it highlight the need for strong security when running production workloads in containerized environments, and isolation can be especially challenging within shared Kubernetes clusters.
From an isolation standpoint, containers are weaker than virtual machines because all containers on a host share the host operating system's kernel. Each virtual machine, by contrast, runs its own kernel and comes with a built-in security boundary.
This security challenge transcends on-premises and cloud environments, including the Azure Kubernetes Service. When customers running the Azure Kubernetes Service (AKS) want to provide strong isolation of workloads between team members or other use cases, they have had to spin up separate clusters or node pools.
However, with the new addition of Kata VM isolated containers, AKS now has a new mechanism for isolation in the same AKS cluster.
What is Kata Isolated Pod Sandboxing with Azure Kubernetes Service (AKS)?
Microsoft recently announced the Pod Sandboxing with Azure Kubernetes Service (AKS) preview. With this new AKS feature, built on Kata-isolated pod sandboxing, customers can achieve much stronger isolation without changing their containers, which now run inside a secure VM boundary. This architecture helps protect your container workloads from untrusted or potentially malicious code.
What does the new feature do? It creates an isolation boundary between your container application and the shared kernel and compute resources of the container host in AKS, including CPU, memory, and networking. The solution complements the security and data protection mechanisms organizations already use for their workloads, and it can help companies meet compliance regulations and requirements for securing sensitive data.
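As a sketch of how this surfaces to users, a pod opts into sandboxing by referencing a Kata runtime class in its spec. The `runtimeClassName` value below follows the preview documentation and the image is a placeholder; verify both in your own cluster:

```yaml
# Hypothetical pod spec opting into Pod Sandboxing on AKS (preview).
# The runtimeClassName value follows the preview docs; treat it as an assumption.
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-workload
spec:
  runtimeClassName: kata-mshv-vm-isolated  # schedules the pod into an isolated Kata VM
  containers:
    - name: app
      image: nginx   # placeholder image; any OCI image works unchanged
```

Note that nothing inside the container image changes; the isolation boundary is selected entirely through the runtime class.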
Kata Containers provide hypervisor-based isolation per container pod. They are not a Microsoft solution but an open-source project governed by the OpenInfra Foundation. Kata Containers extend Kubernetes functionality and capabilities without significant performance implications for your production-critical workloads.
What are Kata Containers?
The Kata Containers project is an open-source community that has set out to build lightweight VMs for securing production containerized workloads and providing more robust isolation between the various containers running in your environment. These VMs feel and perform like standard Linux containers but provide much stronger isolation using hardware virtualization technologies as a security boundary.
Traditional containers vs. Kata containers
Learn more about the Kata containers solution here: Kata Containers – Open Source Container Runtime Software | Kata Containers.
Kata Containers in Kubernetes
Kata Containers introduces a unique approach by representing a Kubernetes pod as a virtual machine (VM). In a typical Kubernetes cluster, the control plane oversees the scheduling of pods, and on each compute node a Kubelet manages the pods' lifecycle. The Kubelet uses a container runtime to execute the containers, with lifecycle management decoupled from container execution through a dedicated gRPC-based Container Runtime Interface (CRI).
In essence, the Kubelet acts as a CRI client and requires a CRI implementation to handle the server side of the interface. CRI-O and containerd are CRI implementations that leverage OCI-compatible runtimes for managing container instances. Kata Containers is officially supported as a runtime by both CRI-O and containerd.
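In a generic (non-AKS) Kubernetes cluster, the link between a pod and the Kata runtime is expressed through a RuntimeClass object whose handler must match the runtime name registered with the CRI implementation. A minimal sketch, assuming the handler has been named `kata` in the containerd or CRI-O configuration:

```yaml
# Generic sketch of a Kata RuntimeClass; the handler name "kata" is an
# assumption and must match the runtime registered with containerd/CRI-O.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
```

Pods then select this runtime with `runtimeClassName: kata`; pods without a runtime class continue to use the default runtime (typically runc).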
Kata Container limitations
Kata Containers is a technology that utilizes a virtual machine (VM) to provide an extra layer of security and isolation for container workloads. Compared to the default runtime, runc, the system has several differences and limitations.
Although some of the limitations may have potential solutions, others are due to fundamental architectural differences that arise from the use of VMs. For example, with the Kata Container runtime, each container is launched within its own isolated VM with its own kernel. This increased isolation level may prevent certain container capabilities from being supported, or may require enabling them through the VM.
Azure Kubernetes Service (AKS) Kata sandboxing architecture
In Microsoft Azure, Kata Containers run on top of the Azure hypervisor using the Mariner Linux AKS Container Host. Each pod runs inside a nested Kata virtual machine that draws resources from the parent VM node and provides a completely isolated kernel.
Because Kata VMs are so lightweight, you can provision many Kata-isolated pods on a single node while also running ordinary containers in the parent VM, gaining superior isolation in a shared Azure Kubernetes Service cluster deployment. Note the following architecture components of Kata-powered pod sandboxing:
- Mariner AKS Container host
- Microsoft Hypervisor with Linux Root Partition
- Integration with the Kata Container runtime
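Putting these components together, provisioning a sandbox-capable cluster looks roughly like the following. This is a sketch based on the preview documentation: the flag values may change as the feature matures, the resource names are placeholders, and the VM size must support nested virtualization.

```shell
# Sketch: create an AKS cluster with a Kata-capable (Mariner) node pool.
# Flag values follow the preview docs; resource names are placeholders.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --os-sku mariner \
    --workload-runtime KataMshvVmIsolated \
    --node-vm-size Standard_D4s_v3   # a size that supports nested virtualization
```

Node pools created this way run the Mariner AKS Container Host with the Microsoft hypervisor underneath, so pods that request the Kata runtime class land in nested guest VMs on those nodes.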
Mariner AKS Container host with Kata Guest VM isolation
Mariner AKS Container Host
The Mariner Linux distribution is Microsoft's internal Linux distribution, optimized to run on Microsoft Azure. It is extremely lean and efficient, with many optimizations that allow it to perform well in the cloud-native Azure environment.
Microsoft hypervisor with Linux Root partition
Another core component is the Microsoft hypervisor, which is mature and battle-tested in both Windows Server and Microsoft Azure. A critical optimization of the Azure hypervisor for Kata virtual machines is that it runs with a Linux root partition, which hosts the management stack and controls the hypervisor.
Integration with the Kata Containers
As mentioned, Kata Containers is an open-source project that provides a more secure container environment using extremely lightweight virtual machines. Microsoft has developed integration with Kata Containers, which is used natively by the new pod sandboxing feature.
The new Azure Kubernetes Service (AKS) Kata VM-isolated containers offer a great new way to achieve strong security isolation between containerized workloads, using the lean, efficient VMs of the Kata Containers open-source project. By adopting Kata VM-isolated containers, organizations may no longer need to spin up separate AKS clusters to isolate tenant containers. The solution brings lightweight Kata VMs into the architecture so that each pod effectively has its own container host, eliminating kernel sharing between pods.
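One simple way to observe the isolation described above is to compare the kernel version reported inside a sandboxed pod with that of a regular pod on the same node. A sketch, with placeholder pod names:

```shell
# Sketch: compare kernels to confirm isolation; pod names are placeholders.
# A Kata-sandboxed pod reports its own guest kernel version...
kubectl exec untrusted-workload -- uname -r
# ...while a regular pod reports the shared node kernel.
kubectl exec trusted-workload -- uname -r
```

If sandboxing is working, the two commands return different kernel versions, since the sandboxed pod no longer shares the node's kernel.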