Kubernetes is the de facto container management platform in the modern cloud-native world. It allows teams to develop, deploy, and manage microservices in a flexible and scalable way. Kubernetes works with various cloud providers, container runtimes, and authentication providers through its extensible integration points.
However, Kubernetes still has one major drawback: security. Kubernetes’ integrative approach to running any containerized application on any infrastructure makes it difficult to build holistic security around Kubernetes and the application stack that lives within it.
According to Red Hat’s 2022 “State of Kubernetes” security report, the majority of Kubernetes users have had their delivery halted due to unresolved security issues. Additionally, over the past 12 months, nearly all Kubernetes users in the study experienced at least one security incident. Therefore, it is fair to say that Kubernetes environments are insecure by default and are at risk.
This article presents the top 10 security risks with concrete examples and advice on how to avoid them.
1. Kubernetes Secrets
Secrets are one of the basic elements of Kubernetes to store sensitive data such as passwords, certificates or tokens and use them in containers. There are three critical issues with Kubernetes secrets:
- Secrets store sensitive data as base64-encoded strings, which are not encrypted by default. Kubernetes supports encryption at rest for Secrets stored in etcd, but you must configure it yourself. Additionally, a major threat to Secrets is over-broad access: any pod in the same namespace whose service account is allowed to read Secrets can retrieve them.
- Role-Based Access Control (RBAC) helps you determine who has access to Kubernetes resources. You must properly configure RBAC rules so that only relevant people and applications have access to secrets.
- Secrets and ConfigMaps are the two methods of passing data to running containers. Old, unused Secrets or ConfigMaps create confusion and can leak sensitive data. For example, if you delete your back-end application deployment but forget to delete the Secret containing your database passwords, any malicious pod can use them in the future.
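Encryption at rest is enabled by passing an encryption configuration file to the API server via its `--encryption-provider-config` flag. A minimal sketch (the key value is a placeholder, not a real key):

```yaml
# EncryptionConfiguration read by kube-apiserver at startup;
# encrypts Secrets in etcd at rest.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # aescbc encrypts newly written Secrets with the given key.
      - aescbc:
          keys:
            - name: key1
              secret: "<base64-encoded 32-byte key>"  # placeholder
      # identity allows reading Secrets written before encryption was enabled.
      - identity: {}
```

After enabling this, existing Secrets stay unencrypted until rewritten, so a common follow-up is to update every Secret in place to force re-encryption.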
2. Container images with vulnerabilities
Kubernetes is a container orchestration platform that distributes and runs containers on worker nodes. However, it does not check the contents of containers to determine if they have any security vulnerabilities or exposures.
Therefore, it is necessary to scan images before deployment to ensure that only images from trusted registries without critical vulnerabilities (like remote code execution) will run on the cluster. Container image analysis should also be integrated with CI/CD systems for automation and early defect detection.
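One way to enforce a trusted-registry rule at admission time is a policy engine such as Kyverno (an assumption — any admission policy tooling works similarly). A sketch, with `registry.example.com` as a placeholder for your trusted registry:

```yaml
# Kyverno ClusterPolicy rejecting pods whose images come
# from outside the trusted registry.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: allowed-registries        # illustrative name
spec:
  validationFailureAction: Enforce
  rules:
    - name: only-trusted-registry
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Images must come from the trusted registry."
        pattern:
          spec:
            containers:
              - image: "registry.example.com/*"   # placeholder registry
```

Pairing a policy like this with image scanning in CI/CD ensures unscanned or unapproved images never reach the cluster at all.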
3. Runtime threats
Kubernetes workloads, namely containers, run on worker nodes, and running containers are controlled by the host operating system. Permissive policies or vulnerable container images can open backdoors throughout your cluster. Therefore, operating system-level runtime protection is necessary to enhance runtime security, and the most important defense against runtime threats is to implement the principle of least privilege in Kubernetes.
Open source and widely accepted tools such as Seccomp, SELinux and AppArmor at the Linux kernel level are available to enforce policies and restrict access. These tools are not internal to Kubernetes and require external configuration and effort to enable runtime threat protection. In order to secure Kubernetes in an automated way, try using the Kubernetes Security Posture Management (KSPM) approach. KSPM leverages automation tools to detect, remediate, and report security, configuration, and compliance issues using a holistic approach.
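At the workload level, least privilege translates into a restrictive `securityContext`. A sketch of a hardened pod (the names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app                  # illustrative name
spec:
  securityContext:
    runAsNonRoot: true                # refuse to run containers as root
    seccompProfile:
      type: RuntimeDefault            # apply the runtime's default seccomp filter
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]               # drop all Linux capabilities
```

Settings like these confine a compromised container to the minimum the application needs, which is exactly what tools like Seccomp and AppArmor enforce at the kernel level.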
4. Improper cluster configuration and default settings
The Kubernetes API and its components consist of a complex set of resource definitions and configuration options. Therefore, Kubernetes offers default values for most of its configuration parameters and tries to remove the burden of creating long YAML files.
However, there are three critical cluster and resource configuration issues that you should be aware of:
- Default Kubernetes configurations are useful because they attempt to increase flexibility and agility, but they are not always the most secure options.
- The online sample Kubernetes resources are useful to get started, but it’s worth double-checking what these sample resource definitions will deploy to your cluster.
- It is customary to make changes to Kubernetes resources using “kubectl edit” commands while working on clusters. However, if you forget to update the source files, the changes will be overwritten on the next deployment, and untracked changes could lead to unpredictable behavior.
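Rather than relying on defaults, pin security-relevant settings explicitly in version-controlled manifests. For example, Kubernetes automounts an API token into every pod via its service account by default; disabling that unless the workload needs it removes an easy credential for attackers (the name below is illustrative):

```yaml
# ServiceAccount that does not automount its API token into pods.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa                        # illustrative name
automountServiceAccountToken: false   # override the insecure default
```

Because the setting lives in the manifest, it survives redeployments and stays visible in code review, unlike ad hoc `kubectl edit` changes.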
5. Kubernetes RBAC Policies
RBAC is the native Kubernetes method for managing and controlling authorization of Kubernetes resources. Therefore, configuring and maintaining RBAC policies is critical to securing clusters from unwanted access.
There are two critical points to consider when using RBAC policies. First, some RBAC policies are too permissive, such as the cluster-admin role, which can do almost anything in the cluster. Such roles are often handed to regular developers to keep them agile. However, in the event of a security breach, attackers who compromise a cluster-admin binding immediately gain high-level access to the cluster. To avoid this, you should scope RBAC policies to specific resources and assign them to particular user groups.
Second, in general, various environments, such as development, testing, staging, and production, exist in the software development lifecycle. Moreover, there are several teams with different purposes, such as developers, testers, operators and cloud administrators. RBAC policies must be assigned correctly for each group and each environment to limit exposure.
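A scoped alternative to cluster-admin is a namespace-level Role bound to a specific group. A sketch (namespace and group names are illustrative):

```yaml
# Namespace-scoped Role granting read-only access to pods.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: staging                  # illustrative environment namespace
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
# Bind the Role to the developers group in that namespace only.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: staging
  name: developers-read-pods
subjects:
  - kind: Group
    name: developers                  # illustrative group name
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Separate Roles and RoleBindings per environment namespace keep a breach in one environment from spilling into the others.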
6. Network Access
In Kubernetes, a pod can connect to other pods and to external addresses outside the cluster, and, by default, any other pod in the cluster can connect to it. Network policies are Kubernetes' native resources for managing and restricting network access between pods, namespaces, and IP blocks.
Network policies select pods by their labels, so inconsistent or sloppy labeling can lead to unwanted access. Additionally, when clusters reside in cloud providers, the cluster network must also be isolated from the rest of the virtual private cloud (VPC).
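A common baseline is to deny all ingress in a namespace, then allow only the traffic you expect. A sketch (namespace, labels, and port are illustrative):

```yaml
# Deny all ingress traffic to every pod in the namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: prod                     # illustrative namespace
spec:
  podSelector: {}                     # empty selector = every pod in the namespace
  policyTypes: ["Ingress"]
---
# Then allow only frontend pods to reach backend pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: prod
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080                  # illustrative backend port
```

Note that NetworkPolicy enforcement requires a CNI plugin that supports it; on clusters without one, these resources are silently ignored.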
7. Holistic monitoring and audit logging
When you deploy an application to a Kubernetes cluster, it’s not enough to just monitor application metrics. You should also monitor the status of the Kubernetes cluster, cloud infrastructure, and cloud controllers to get a full stack overview. It is also important to monitor for vulnerabilities and detect anomalies, as intruders will test access to your clusters from all possible openings.
Kubernetes provides audit logging of all API activity, though on self-managed clusters you must enable and configure it on the API server. You should also collect logs from your applications and monitor their health in a central place.
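Audit logging is driven by a policy file passed to the API server via its `--audit-policy-file` flag. A minimal sketch:

```yaml
# Audit policy read by kube-apiserver.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Record Secret and ConfigMap access at Metadata level only,
  # so sensitive payloads never end up in the audit log itself.
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets", "configmaps"]
  # Record everything else with full request bodies.
  - level: Request
```

Shipping these audit logs to the same central place as your application logs makes it far easier to correlate a suspicious API call with anomalous workload behavior.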
8. Kubernetes APIs
The Kubernetes API is the heart of the entire system, where all internal and external clients connect and communicate with Kubernetes. If you deploy and manage Kubernetes components internally, you need to be more careful because the Kubernetes API server and its components are open source tools with potential and actual vulnerabilities. Therefore, you should use the latest stable version of Kubernetes and patch live clusters as soon as possible.
If you use a managed Kubernetes service, the cloud provider operates the control plane, so control plane updates and security fixes are applied automatically. However, users are responsible for upgrading worker nodes in most cases. Therefore, you can use automation and resource provisioning tools to easily upgrade nodes or replace them with new ones.
9. Kubernetes Resource Requests and Limits
In addition to scheduling and running containers, Kubernetes can also throttle container resource usage in terms of CPU and memory. Although mostly overlooked by Kubernetes users, resource requests and limits are critical for two reasons:
- Security: When pods and namespaces have no limits, a single compromised container can consume node resources unchecked, starving or destabilizing every other workload in your cluster.
- Cost: When requested resources greatly exceed actual usage, the scheduler reserves capacity that sits idle and treats nodes as full. This causes the node pool to grow if autoscaling is enabled, and new nodes inevitably increase your cloud bill.
When resource demands are calculated and allocated correctly, the entire cluster operates efficiently in terms of CPU and memory. Also, when resource limits are set, faulty applications and intruders will be limited in terms of resource usage. For example, if there is no resource limitation, a malicious container could consume almost all of the node’s resources and render your application unusable.
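Requests and limits are set per container, and a LimitRange can apply defaults to containers that omit them. A sketch (all values and names are illustrative):

```yaml
# Container-level requests and limits.
apiVersion: v1
kind: Pod
metadata:
  name: app                           # illustrative name
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
      resources:
        requests:                     # what the scheduler reserves
          cpu: "250m"
          memory: "256Mi"
        limits:                       # hard cap enforced at runtime
          cpu: "500m"
          memory: "512Mi"
---
# LimitRange supplies defaults for containers with no resources set.
apiVersion: v1
kind: LimitRange
metadata:
  name: container-defaults
  namespace: prod                     # illustrative namespace
spec:
  limits:
    - type: Container
      default:                        # default limits
        cpu: "500m"
        memory: "512Mi"
      defaultRequest:                 # default requests
        cpu: "100m"
        memory: "128Mi"
```

With a LimitRange in place, even a workload deployed without explicit resources cannot monopolize a node.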
10. Data and storage
Although containers are designed to be ephemeral, Kubernetes makes it possible to run stateful containerized applications in a scalable and reliable way. With the StatefulSet resource, you can quickly deploy databases, data analysis tools, and machine learning applications in Kubernetes. Data will be accessible to pods as volumes attached to containers.
However, it is essential to limit access by policies and labels to prevent unwanted access by other pods in the cluster. Additionally, storage in Kubernetes is provided by external systems, so you should consider using encryption for critical cluster data. If you manage your storage plugins, you should also check the security settings to make sure they are enabled.
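A StatefulSet provisions per-replica storage through `volumeClaimTemplates`, and pointing the claim at an encrypted StorageClass addresses the encryption concern above. A sketch (`encrypted-ssd` is a hypothetical StorageClass your provider would need to offer):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db                            # illustrative name
spec:
  serviceName: db
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db                       # label used by selectors and policies
    spec:
      containers:
        - name: db
          image: postgres:16          # illustrative database image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: encrypted-ssd   # hypothetical encrypted StorageClass
        resources:
          requests:
            storage: 10Gi
```

Combining an encrypted StorageClass with the label-scoped network policies from earlier keeps both the data at rest and the path to it restricted.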
Kubernetes is the dominant container management platform for running microservices applications. However, holistic security remains one of its weak points, since security is not the core focus of the project. Therefore, you should take additional steps to make your clusters and applications more secure.