Reducing Vulnerabilities in Kubernetes – The New Stack



Containerized environments offer many benefits to developers, but they also present complicated security challenges. Following container security best practices can help you mitigate risk, reduce vulnerabilities, and successfully deploy verified software.

Here I give an overview of the basics of container security for developers. If you are a developer building a Docker container and want to ensure its security, the following reminders may be helpful.

No, namespaces are not enough isolation

Aleksandr Volochnev

Aleks is DataStax’s Lead Developer Advocate in EMEA. After many years as a developer, technical lead, DevOps engineer and architect, Aleks now focuses on cloud computing and distributed systems and shares his expertise in high-performance and disaster-tolerant systems.

A namespace is a virtual cluster within the main Kubernetes cluster that provides a mechanism for isolating groups of related components within the control plane. However, assuming that namespaces are sufficient to run processes in an isolated environment is a common fallacy among developers.

Although namespaces—and the resources that run within them—give you additional context and isolation within those scopes, they don’t isolate your resources from other namespaces.

So a namespace only scopes your deployment or service; additional configuration is required on top of it before any real isolation takes effect.

Also, pods can still communicate with each other using their IP addresses, no matter which namespace they're in, because cross-namespace communication is enabled by default in a Kubernetes cluster.
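To get actual isolation, you can layer a NetworkPolicy on top of the namespace. As a minimal sketch (the namespace name `team-a` is illustrative), a default-deny policy blocks all ingress traffic to pods in the namespace, including traffic from pods in other namespaces:

```yaml
# Deny all ingress to every pod in the team-a namespace.
# Traffic must then be re-allowed explicitly with further policies.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a
spec:
  podSelector: {}        # empty selector = all pods in this namespace
  policyTypes:
    - Ingress            # no ingress rules listed, so nothing is allowed
```

Note that NetworkPolicy is only enforced if your cluster runs a network plugin that supports it, such as Calico or Cilium.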

Don’t run containers as root

According to the principle of least privilege (PoLP), the main difference between root and non-root containers is that the latter are designed with the minimum privileges required to run a process. If developers followed this principle as much as possible and didn’t run containers as root, there would be fewer serious security problems.

Let me explain: suppose you have a service listening on a privileged port, say port 80, which requires root to bind. You should ask yourself whether you really must run your service on port 80. It is your responsibility to make this decision with full awareness of the possible implications.
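A common way out is to listen on an unprivileged port inside the container and let Kubernetes or a Docker port mapping expose it as port 80 at the edge. As a sketch, with hypothetical names (`appuser`, `server.js`):

```dockerfile
FROM node:20-slim

# Create an unprivileged user instead of running as root
RUN useradd --create-home --shell /usr/sbin/nologin appuser
WORKDIR /home/appuser/app
COPY --chown=appuser:appuser . .
RUN npm ci --omit=dev

USER appuser          # everything from here on runs unprivileged
EXPOSE 8080           # unprivileged port; map 80 -> 8080 at the edge
CMD ["node", "server.js"]
```

A Kubernetes Service can then declare `port: 80` with `targetPort: 8080`, so nothing inside the container ever needs root just to own the port.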

“But it runs in a container – shouldn’t that be okay?”

Unfortunately not, due to a critical security issue: privilege escalation.

Preventing privilege escalation is a key argument for not running a container as root. A root user inside a container can, if it escapes, run any command as the root user on the host system: starting services, installing software packages, creating new users. This is of course undesirable for an application on a traditional host, and it applies just as much to containers.

You also need to be aware of container runtime and kernel vulnerabilities. Through such a vulnerability, the root user can access and run anything on the underlying host as a highly privileged user. That means access to usernames and passwords configured on the host for connecting to other services, the ability to install malware, access to other cloud resources, and the ability to tamper with file system mounts, among other things.

Most services can run without escalated privileges. For those that can’t, Linux capabilities give you fine-grained control. Be especially wary of privileged containers, which represent the highest level of escalation, with all capabilities available: anyone who compromises a service in such a container inherits that full privilege. Even if a containerized app really does require some elevated privileges, you should still not run it with --privileged; it is better to use --cap-add for more fine-grained control.
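For example, a service that only needs to bind a privileged port can be granted exactly that one capability instead of everything (the image name is illustrative):

```shell
# Bad: grants every capability plus broad device access
docker run --privileged my-web-server

# Better: drop everything, then add back only what the service needs
docker run \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --security-opt no-new-privileges \
  my-web-server
```

The `no-new-privileges` option additionally prevents processes in the container from gaining privileges via setuid binaries.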

Docker also has a rootless mode that allows you to run the Docker daemon and containers as a non-root user. You can use this mode to mitigate vulnerabilities in daemons and container runtimes that sometimes grant malicious agents root access to entire nodes and clusters.

Another important note here is that rootless mode runs daemons and containers inside a user namespace by default, so even “root” inside a container maps to an unprivileged user on the host.
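Setting rootless mode up looks roughly like this (a sketch assuming the docker-ce-rootless-extras package is installed; paths vary by distribution):

```shell
# Install and start the rootless Docker daemon for the current user
dockerd-rootless-setuptool.sh install

# Point the Docker CLI at the rootless daemon's socket
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock

# Containers now run without root privileges on the host
docker run --rm hello-world
```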

Networks need segmentation

Using an intrusion prevention system (IPS) to block malicious traffic is not enough to adequately protect containerized environments, and I say that from experience.

Let me illustrate this with a short anecdote. I used to work at a company that adopted the flat network model for the sake of collaboration. As you may know, in a flat network everything can be reached by anything else on the network, which can pose significant security problems. I asked the network architect at the time whether they were concerned about this, and they told me they had “implemented an IPS to prevent malicious attempts.”

The architect left the company a year later, and a major security breach ensued. Simply relying on an IPS wasn’t good enough then, and it certainly isn’t good enough now.

So what can you do to avoid breaches? Build network segments. The main advantages are the following:

  • Minimize breaches — By creating network segments that contain only the resources specific to the consumers you need to authorize, you create an optimal least-privilege environment that significantly reduces the chance of a breach.
  • Minimize damage — Segments contain the blast radius of a successful exploit. This makes it easier to prevent lateral movement from a container that has been compromised by malicious agents, and to comply with PCI DSS standards.
  • Minimize data exfiltration — Network segmentation also helps prevent or minimize the negative effects of data exfiltration, an area where an intrusion detection system (IDS) and an IPS are insufficient. Because they are reactive, IDS and IPS can only detect a new attack or vulnerability once the rules on those devices have been updated. There is therefore always a chance that the update arrives late, making it easier for cybercriminals to find and exploit your networks.
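In Kubernetes, network segments are typically expressed as NetworkPolicy objects. As a sketch (the labels `app: payments` and `app: checkout` and the port are illustrative), this policy allows the payments pods to be reached only by the checkout pods, and only on one port:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payments-allow-checkout-only
spec:
  podSelector:
    matchLabels:
      app: payments          # the segment being protected
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: checkout  # the only authorized consumer
      ports:
        - protocol: TCP
          port: 8443
```

Everything not explicitly allowed by a matching policy is dropped, which is exactly the least-privilege environment described above.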

Keep your dependencies up to date

Dependency management is a critical aspect of creating and maintaining a secure and reliable software supply chain. It’s also a bit complicated, as daemons, base images and other embedded components make it difficult to determine exactly what needs to be done to keep dependencies up to date.

For this reason, simply pulling the latest Docker image tag is also not enough to ensure adequate protection. To do this properly, you must first know your dependencies by:

  • Running dependency checks in your CI/CD pipeline (npm or Apache Maven).
  • Monitoring relevant data sources, such as security advisories.
  • Updating and rebuilding regularly (this helps ensure the latest security patches are in place).
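In a CI pipeline, such checks can be as simple as the following commands (a sketch assuming a Node.js or Maven project; the severity threshold is illustrative):

```shell
# Fail the build if known vulnerabilities of high severity or worse exist
npm audit --audit-level=high

# For Maven projects, report available dependency updates
# (requires the versions-maven-plugin)
mvn versions:display-dependency-updates
```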

You also need to be careful with “hidden” or embedded dependencies. If you’re running a Docker container, you’re probably using a package manager to install dependencies. Sometimes you may need to download modules directly or even build some software yourself. In either case, you should definitely keep these dependencies updated too, as they may not show up in every scan and can become a security issue for your containers if they are outdated.

Another important aspect of dependency management is using only trusted sources and ensuring that the project at hand is the genuine Docker Hub project.

First of all, you need to beware of typosquatting, where scammers use misspelled or altered names to trick users into pulling deceptive packages and images. It’s shocking how many projects out there will try to serve you malicious software if you type “MGINX” instead of “NGINX,” for example. So be careful what you type and click.

Other best practices include using specific, descriptive version tags instead of the latest tag, and regularly checking for dependency updates.
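As a sketch, pinning a base image looks like this (the version tag is illustrative; for full reproducibility you can additionally pin the exact image digest you verified):

```dockerfile
# Avoid: the image this resolves to changes over time
# FROM nginx:latest

# Better: a specific version tag you can audit and update deliberately
FROM nginx:1.25

# Best: additionally pin the digest, e.g.
# FROM nginx:1.25@sha256:<digest>
```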


Container security involves the ongoing implementation and maintenance of security controls to protect containers and the underlying infrastructure.

By applying the above best practices to your development pipeline, you can improve visibility into container workloads. This helps you secure all container assets and components from the initial development stage to the end of their lifecycle, preventing breaches and ensuring timely remediation.


The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.


