DETROIT — A new Istio service mesh architecture that ditches sidecar proxies is turning corporate IT pros’ heads with its promise of simpler operations, but proponents of the rival Linkerd project argue that the real problem isn’t the sidecar architecture, but the Envoy proxy.
The service mesh approach to networking in distributed application environments first emerged with Linkerd version 1 in 2016, which was designed for VM environments. Backed by Google, IBM, and Lyft, Istio followed in 2017, built specifically for use with the Kubernetes container orchestrator. Linkerd version 2 then refocused on Kubernetes as well, and since then the growing ubiquity of container orchestrators has driven the popularity of service mesh as a way to shift the burden of complex microservices networking away from the application layer and the developers building applications.
Up until this year, the basic architectural design of each of these service meshes was the same – both used a special type of container, called a sidecar proxy, to offload network administration from applications. These sidecar proxies were tightly coupled to applications, deployed as part of each Kubernetes pod, and this proximity allowed more granular control over application routing and monitoring than was possible with traditional networks.
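In the sidecar model, each pod carries the application container plus a proxy container injected alongside it. A minimal sketch of what such a pod might look like (the container names and images below are illustrative, not taken from any particular mesh's injector; the `sidecar.istio.io/inject` annotation is Istio's opt-in switch for injection):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app
  annotations:
    sidecar.istio.io/inject: "true"  # opt the pod into sidecar injection (Istio)
spec:
  containers:
  - name: app                    # the application itself
    image: example/app:1.0       # illustrative image name
    ports:
    - containerPort: 8080
  - name: proxy-sidecar          # injected proxy that intercepts the pod's traffic
    image: example/proxy:1.0     # illustrative; real meshes inject Envoy or Linkerd's proxy
```

One proxy per pod is what gives the mesh its fine-grained visibility, and also what multiplies the container count across a large cluster.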
However, as service mesh becomes more prevalent in high-scale environments, problems with sidecar proxies have emerged. Attaching a proxy container to every application instance can add unsustainable overhead in very performance-sensitive environments. Upgrading the service mesh can also be painful, since all sidecars must be restarted, which may affect application availability.
It’s also possible for application containers to get out of sync with sidecar containers, leading to further potential reliability issues. And managing a massive fleet of sidecars can be an unjustifiable burden in environments where applications require some service mesh functions, such as mutual TLS (mTLS), which operates at the lower layers of the Open Systems Interconnection model – specifically Layer 4 – but don’t need the finer application-level filtering that happens further up at Layer 7.
Istio Ambient Mesh, an experimental project that engineers from Google and Solo.io donated to open source in September, includes a new architecture that its maintainers say gets around these problems with service mesh sidecars.
Rather than bundling all of the service mesh’s capabilities into a sidecar deployed with each app, Ambient Mesh decomposes the proxy into a set of shared resources, deployed as Kubernetes DaemonSets in each cluster. IT admins can specify whether applications require Layer 4 or Layer 7 routing capabilities using the same kinds of Istio configuration files and Kubernetes app manifests they already have. The consolidated proxies in Ambient Mesh route traffic accordingly, without requiring a sidecar for each pod.
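Opting a workload into the ambient data plane is intended to be a namespace-level switch rather than per-pod injection. A rough sketch, assuming the `istio.io/dataplane-mode` label used by early Ambient Mesh builds to enroll a namespace (like anything in an experimental project, subject to change):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app-ns                      # illustrative namespace name
  labels:
    istio.io/dataplane-mode: ambient   # enroll every pod in this namespace; no sidecars injected
```

Pods in the labeled namespace get Layer 4 handling, such as mTLS, from the shared per-node proxies, with Layer 7 processing applied only where configuration calls for it.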
It’s early days for this new approach, but some Istio users said they’re keen to try it.
“It’s amazing — we’re going to roll it out as soon as we can,” said David Ortiz, senior software developer at martech firm Constant Contact, in an online interview this week. “It greatly simplifies running Istio, especially in the context of upgrades.”
One KubeCon attendee said he plans to carefully evaluate Ambient Mesh when it matures, but is interested.
“Sidecars were helpful in getting things started, but we like the idea of being able to service and scale Layer 7 and Layer 4 differently,” said Greg Otto, executive director of cloud services at cable provider Comcast, in an interview here this week. “At the edge, we are very focused on Layer 7 [filtering], but we don’t want to carry it all through our whole [network], where Layer 4 [routing] is more appropriate.”
While sidecar proxies provide the strictest separation between services for security purposes, most of the critical Common Vulnerabilities and Exposures (CVEs) in Istio’s Envoy proxy have been at Layer 7, Otto said.
“Where we don’t need it [Layer 7 filtering], I don’t want to have to carry it,” he said. “Because if there’s a CVE, then I have a much smaller attack surface that I don’t have to worry about.”
Linkerd counterpoint: The problem isn’t sidecars, it’s Envoy
According to William Morgan, creator of Linkerd and CEO of Buoyant, there is another way to reduce critical Layer 7 vulnerabilities and much of the resource overhead associated with sidecars: Don’t use Envoy.
“At the end of the day, sidecars are actually extremely simple – they’re very straightforward to operate, people understand them, and the fault and safety domains are very clear,” Morgan said. “The problem isn’t the sidecar – the problem is that you have this huge, versatile, resource-hungry, and difficult-to-use proxy [with Envoy].”
Support for Envoy, a popular Cloud Native Computing Foundation project, was a selling point for Istio over Linkerd in the past. But Linkerd’s maintainers, led by Morgan, held out with their own proxy, designed exclusively for use in a service mesh and with a smaller code base and lower resource requirements than Envoy.
Accordingly, one Linkerd enterprise user said he sees no need for a sidecarless service mesh, and that it’s still possible to have simplicity and transparency with a sidecar.
“From our point of view, a sidecar is simple and easy to understand – it’s the same [kind of container] technology we use for everything else,” said Kasper Nissen, lead platform architect at Lunar, a digital financial services company based in Denmark, in an interview here this week. “We went full service mesh by default for everything a year and a half ago, and we saw maybe a 10% increase in resource consumption, which was mostly mTLS, and not much compared to all the [detailed] visibility features we gained.”
Nissen said he did run into sync issues between sidecar proxies and the Humio log analysis app Lunar used. That service didn’t have time to offload its local data when a sidecar restarted, meaning some data was lost until Nissen’s team found a workaround that boiled down to “setting a timeout and hoping for the best,” he said.
However, Morgan and Nissen said the sidecar synchronization issue has its roots in a deeper problem with Kubernetes networking that has remained unresolved in the open source community for three years. By default, there is no way to ensure that different containers, whether the ephemeral init containers used by services such as Linkerd or regular application containers, start up and shut down in a specific order. A Kubernetes Enhancement Proposal addressing this was created in 2019 but rejected; discussions have continued in the community, but the situation has not changed.
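The ordering gap is visible in any pod spec that mixes init and regular containers: Kubernetes runs `initContainers` strictly in sequence before the main containers start, but the regular containers start and stop concurrently with no guaranteed order among themselves. A schematic example with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ordering-example
spec:
  initContainers:                # init containers DO run one at a time, in order, before the rest
  - name: init-network
    image: example/init:1.0
  containers:                    # regular containers start concurrently, in no guaranteed order
  - name: proxy-sidecar          # may not be ready before the app starts sending traffic...
    image: example/proxy:1.0
  - name: app                    # ...and may outlive, or be killed before, the proxy on shutdown
    image: example/app:1.0
```

Without a startup and shutdown ordering guarantee between `proxy-sidecar` and `app`, the kinds of restart races Nissen described have no clean fix at the mesh level.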
“You would expect Kubernetes to be able to do this by now, especially with so many services deployed as sidecars,” Nissen said.
Fixing this issue in Kubernetes is the best way to resolve sidecar sync problems, Morgan said.
“It’s not a very exciting statement in the fashion-driven cloud-native world, but sidecars will continue to be the future of the service mesh,” Morgan said. “We know they have warts, but many of them will ultimately be addressed by changes to Kubernetes, not by dramatically changing the architecture and making your infrastructure much more complicated to run.”
Beth Pariseau, Senior News Writer at TechTarget, is an award-winning IT journalism veteran. She can be reached at [email protected] or on Twitter @PariseauTT.