Kubernetes version 1.23 alleviates management headaches


The latest version of Kubernetes promises to make it easier to monitor and manage individual pods within clusters using short-lived, ephemeral containers.

Rey Lejano, SUSE field engineer and head of the Kubernetes 1.23 release team, says the PodSpec.EphemeralContainers feature will make it easier to troubleshoot and debug pods within a cluster by allowing IT management tools to be deployed temporarily alongside running workloads.

Version 1.23 of Kubernetes also includes a kubectl debug command that makes it easier to launch these temporary containers in a running pod. Unlike regular containers, ephemeral containers cannot specify ports or resource requests and limits, because they are intended to be short-lived.
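In practice, attaching an ephemeral container looks something like the following sketch. The pod name "myapp" and the busybox image are illustrative; the command assumes a cluster where ephemeral containers are enabled.

```shell
# Attach an interactive ephemeral debug container to a running pod.
# --target shares the process namespace with the existing "myapp"
# container, so its processes are visible from the debug shell.
kubectl debug -it myapp --image=busybox:1.35 --target=myapp -- sh
```

Because the ephemeral container shares the pod, tools like a shell or network utilities can inspect a distroless or crashed container without rebuilding its image.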

Additional administrative functions include a kubectl events command, which makes it easier to monitor the overall health of a cluster and troubleshoot problems. This command, available in alpha, makes it easier to view all events related to a particular resource, search for specific events in the cluster, filter events by type, or restrict the output to a specific namespace.
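As an alpha command in 1.23, it is invoked under the kubectl alpha group. A minimal sketch, assuming an illustrative pod name "myapp" and namespace "staging":

```shell
# List events across the current namespace (alpha in 1.23).
kubectl alpha events

# Limit events to a single resource in a specific namespace;
# the pod name and namespace here are illustrative.
kubectl alpha events --for pod/myapp -n staging

# Stream new events as they occur.
kubectl alpha events --watch
```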

The horizontal pod autoscaler (HPA) autoscaling/v2 API, a central component of Kubernetes that automatically scales the number of pods based on observed metrics, is now generally available. Additionally, a new alpha feature allows Custom Resource Definitions (CRDs) to be validated with expression-language rules based on the Common Expression Language (CEL).
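A minimal HPA manifest using the now-stable autoscaling/v2 API might look like the following sketch; the deployment name "myapp", the replica bounds, and the CPU target are all illustrative.

```shell
# Apply a HorizontalPodAutoscaler via the GA autoscaling/v2 API.
# Targets a hypothetical Deployment named "myapp" and scales it
# between 2 and 10 replicas around 80% average CPU utilization.
cat <<'EOF' | kubectl apply -f -
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
EOF
```

The CRD validation rules, by contrast, are declared inside a CRD's OpenAPI schema under `x-kubernetes-validations`, with CEL expressions such as `rule: "self.minReplicas <= self.replicas"`.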

In general, according to Lejano, the Technical Oversight Committee (TOC) for Kubernetes is currently focused on making Kubernetes clusters easier to manage and maintain once they are deployed in a production environment. These functions can be invoked either directly through the command-line interface (CLI) or through a higher level of abstraction supplied by an IT vendor, Lejano notes.

It’s unlikely that the TOC would ever create an abstraction layer of its own, but several initiatives are underway within the Cloud Native Computing Foundation (CNCF) to create such abstractions, Lejano adds.

In the longer term, according to Lejano, IT teams should also expect machine learning algorithms to be used more frequently to simplify the management of fleets of Kubernetes clusters.

In addition, the Kubernetes community will make good early next year on its promise to end Dockershim support in version 1.24 of Kubernetes. Together with Docker Inc., the TOC has signaled its intention to move away from the Docker Engine in favor of a runtime for Kubernetes based on containerd. This shift is intended to provide a more efficient runtime accessed through the Container Runtime Interface (CRI). The Dockershim project made the original Docker Engine compatible with the CRI as defined by Kubernetes; it is now a separate open source project maintained by Mirantis and Docker, Inc.
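Ahead of the 1.24 removal, administrators can check which runtime each node reports; the following sketch uses standard kubectl commands against any live cluster.

```shell
# The CONTAINER-RUNTIME column shows the runtime per node:
# Docker Engine nodes report "docker://", containerd nodes
# report "containerd://".
kubectl get nodes -o wide

# The same information, extracted directly from node status.
kubectl get nodes \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'
```

Nodes still reporting `docker://` will need to migrate to a CRI-compliant runtime such as containerd before upgrading to 1.24.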

Regardless of how companies approach Kubernetes management, it is certain that more IT teams will be exposed to Kubernetes in 2022 than ever before. The challenge – now and in the future – is to find a way to make managing all of these Kubernetes pods and clusters much easier than it is today.

