Microservices have taken center stage in the software industry. Transitioning from a monolith to a microservices-based architecture empowers companies to deploy their applications more frequently, reliably, and independently, and to scale them with far less friction. That doesn’t mean everything is rosy in a microservices architecture; as with any distributed system, there are problems that need to be addressed. This is where the “Service Mesh” concept has become popular.
We have been breaking big monolithic applications into smaller ones for quite some time to ease software development and deployment. The chart below, borrowed from Burr Sutter’s talk “9 Steps to Awesome with Kubernetes,” illustrates the evolution of microservices.
The introduction of the service mesh was mainly due to a perfect storm within the IT scene. When developers began developing distributed systems using a multi-language (polyglot) approach, they needed dynamic service discovery. Operations were required to handle the inevitable communication failures smoothly and enforce network policies. Platform teams started adopting container orchestration systems like Kubernetes and wanted to route traffic dynamically around the system using modern API-driven network proxies, such as Envoy.
What Is a Service Mesh?
Agreed, microservices CAN decrease the complexity of software development in organizations, but as the number of microservices in an organization rises from a handful to dozens or hundreds, the complexity of inter-service communication can become daunting.
A service mesh is therefore a suitable approach to managing and controlling how the various parts of an application interact, communicate, and share data. It is a dedicated infrastructure layer built right into an app, one that helps optimize communication and avoid downtime as the app grows.
Microservices pose challenges such as operational complexity, networking, communication between services, data consistency, and security. Service meshes are designed specifically to address these challenges by offering granular control over how services communicate with each other.
Service meshes offer:
- Service discovery
- Service-to-service networking
- Routing and traffic management
- Encryption and authentication/authorization
- Granular metrics and monitoring capabilities
- Rate limiting
- Circuit breaking
- Load balancing
- Distributed tracing
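To make one of these capabilities concrete, here is a minimal Python sketch of the circuit-breaking pattern a mesh proxy might apply to a failing upstream service. This is an illustration of the pattern only, not any real mesh’s implementation, and all names and thresholds are made up for the example:

```python
import time


class CircuitBreaker:
    """Illustrative circuit breaker: open the circuit after N consecutive
    failures, then allow a probe request only after a cooldown period."""

    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # monotonic time the circuit was opened

    def allow_request(self):
        if self.opened_at is None:
            return True  # circuit closed: let traffic through
        # Circuit open: only allow a probe once the cooldown has elapsed.
        return time.monotonic() - self.opened_at >= self.reset_timeout

    def record_success(self):
        self.failures = 0
        self.opened_at = None  # close the circuit again

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.failure_threshold:
            self.opened_at = time.monotonic()  # trip the circuit
```

In a real mesh, this bookkeeping lives in the sidecar proxy, so the application code never has to implement it.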
How Does a Service Mesh Work?
A service mesh mainly consists of two essential components: a data plane and a control plane. What a service mesh strives to do is make service-to-service calls within a microservices architecture fast, reliable, and secure. Although it is called a “mesh of services,” it is more accurate to call it a “mesh of proxies” that services plug into, completely abstracting the network away.
In a typical service mesh, these proxies are injected into each service deployment as a sidecar. Rather than calling other services directly over the network, a service calls its local sidecar proxy, which handles the request on the service’s behalf, encapsulating the complexities of the service-to-service exchange. The interconnected set of sidecar proxies forms what is known as the data plane. The components that configure those proxies and gather metrics from them are collectively known as the control plane.
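As a rough mental model of that split (not any real mesh’s implementation; every class, service, and endpoint name here is illustrative), the control plane pushes routing configuration to sidecar proxies, and each service talks only to its local proxy:

```python
class ControlPlane:
    """Illustrative control plane: holds routing configuration and
    pushes it out to every registered sidecar proxy (the data plane)."""

    def __init__(self):
        self.routes = {}   # service name -> list of callable endpoints
        self.proxies = []

    def register(self, proxy):
        self.proxies.append(proxy)
        proxy.routes = dict(self.routes)  # initial config push

    def set_routes(self, routes):
        self.routes = routes
        for proxy in self.proxies:        # push new config to the data plane
            proxy.routes = dict(routes)


class SidecarProxy:
    """Illustrative data-plane element: the service calls its local proxy,
    which picks an endpoint (round-robin here) and forwards the request."""

    def __init__(self):
        self.routes = {}
        self.next_index = {}  # per-service round-robin counter

    def call(self, service, request):
        endpoints = self.routes[service]
        i = self.next_index.get(service, 0)
        endpoint = endpoints[i % len(endpoints)]  # round-robin load balancing
        self.next_index[service] = i + 1
        return endpoint(request)                  # "forward" the request
```

The point of the sketch is the shape, not the code: services stay unaware of discovery and load balancing, and operators change behavior by reconfiguring the control plane, never the services.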
Service meshes are meant to resolve the many hurdles developers encounter when communicating with remote endpoints. In particular, they help applications running on a container orchestration platform such as Kubernetes.
Is Service Mesh Necessary?
According to The New Stack, as technology stacks become more heterogeneous and the number of endpoints booms, a Service Mesh can solve several networking and operational challenges. The four pillars of a Service Mesh (connecting, securing, controlling, and observing connectivity) make it a sensible place to invest centrally.
Istio Service Mesh & Harness
An extremely powerful pattern in Istio is traffic shifting. Being able to apply percentages/weights to services (for example, v1 and v2 of a service) in a traffic-splitting pattern opens the door to canary deployments.
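To illustrate the idea behind weighted traffic splitting (this is a conceptual Python sketch, not Istio’s implementation; in Istio the weights live in a VirtualService resource), picking a destination subset by percentage can look like this:

```python
import random


def pick_subset(weights, rng=random):
    """Pick a destination subset ("v1", "v2", ...) according to
    percentage weights, mimicking mesh-style traffic shifting."""
    total = sum(weights.values())
    r = rng.uniform(0, total)
    cumulative = 0.0
    for subset, weight in weights.items():
        cumulative += weight
        if r <= cumulative:
            return subset
    return subset  # fallback for floating-point edge cases


# Canary example: send 90% of traffic to v1 and 10% to the v2 canary.
weights = {"v1": 90, "v2": 10}
```

A canary rollout is then just a sequence of weight changes (90/10, then 50/50, then 0/100), with a rollback being nothing more than restoring the previous weights.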
You would most likely be touching a traffic shifting rule during a deployment. Orchestrating a set of kubectl and istioctl commands, maintaining the configurations, and designing for failure (rollback) in those tasks certainly requires proper planning and thought. The Harness Platform, with its Traffic Management support, lets you step away from the orchestration and failure complexity and focus on just the rules and outcomes themselves.
Follow this simple tutorial and see how Istio and Harness fit together in your continuous delivery journey.
Credits: Part of the article is originally published on TheNewStack by me.