What is a service mesh?
A service mesh is a solution that adds security, observability, and reliability features to applications by running those features at the platform layer rather than the application layer.
The mesh consists of a scalable set of network proxies, known as sidecars, deployed alongside the application code. These sidecars handle all communication between the microservices and act as the injection point for mesh features. Together, the proxies form the data plane, which is fully configured and controlled by a control plane.
The growth of the service mesh is tied to the growth of "cloud-native" applications, meaning applications designed to run in the cloud. In the cloud-native world, an application may consist of hundreds of services, each service may have thousands of instances, and each of those instances may be constantly changing as an orchestrator like Kubernetes schedules them. Service-to-service communication in this world is complex, and it is a fundamental part of the application's runtime environment. Managing it is vital to ensuring end-to-end performance, reliability, and security.
Container orchestration framework:
As more containers are added to an application's infrastructure, a separate tool for monitoring and managing the set of containers, called a container orchestration framework, is required. Kubernetes leads this market; competitors such as Docker Swarm and Mesosphere offer integration with Kubernetes as an alternative.
Containers (Kubernetes pods):
A container instance is a single running copy of a microservice. Sometimes the application instance is a single container; in Kubernetes, an instance is a small group of interdependent containers known as a pod. Clients rarely access an instance or pod directly; rather, they access a service, a scalable and fault-tolerant set of identical instances or pods (replicas).
A sidecar proxy is a network proxy instance that runs alongside a single application instance or pod. The sidecar routes traffic to and from the container it runs alongside, communicates with the other sidecar proxies, and is managed by the orchestration framework. Service mesh implementations typically use a sidecar proxy to manage all ingress (incoming) and egress (outgoing) traffic to the instance or pod.
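The idea of injecting mesh features at the proxy rather than in the application can be sketched in a few lines. This is an illustrative model only, not any real mesh's API: the sidecar is a wrapper that intercepts each request, adds a mesh feature (here, a request counter standing in for observability), and forwards to the unmodified service code.

```python
# Illustrative sketch: a sidecar modeled as a wrapper that intercepts every
# request, so mesh features are injected without changing the service itself.
metrics = {"requests": 0}

def sidecar(handler):
    """Wrap a service handler so all ingress traffic passes through the proxy."""
    def proxy(request):
        metrics["requests"] += 1      # observability added at the proxy layer
        return handler(request)       # forward to the application container
    return proxy

@sidecar
def hello_service(name):
    # The application code itself is unaware of the mesh.
    return f"hello, {name}"
```

The application function never changes; retries, authentication, or tracing could be added to `proxy` the same way.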
When an instance needs to interact with a different service, it must find an available instance of that service, usually by performing a DNS lookup. The container orchestration framework maintains the list of instance IPs and their DNS names, thereby providing DNS service within the service mesh.
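That lookup step can be modeled with a small in-memory registry standing in for the DNS records the orchestrator maintains. The service names and IP addresses below are made up for illustration; they follow the Kubernetes `<service>.<namespace>.svc.cluster.local` naming convention but refer to no real cluster.

```python
# Hypothetical registry: maps a service's DNS name to its instance IPs,
# as the orchestration framework would for in-mesh DNS.
registry = {
    "payments.default.svc.cluster.local": ["10.0.1.4", "10.0.1.5"],
    "orders.default.svc.cluster.local": ["10.0.2.7"],
}

def resolve(service_name: str) -> str:
    """Return one instance IP for a service, like a DNS lookup in the mesh."""
    instances = registry.get(service_name)
    if not instances:
        raise LookupError(f"no instances registered for {service_name}")
    return instances[0]  # a real resolver would rotate across the replicas
```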
The mesh encrypts and decrypts requests and responses between services. It also improves performance by reusing existing connections, eliminating the cost of creating new ones. The most common mechanism for encrypting traffic is mutual TLS (mTLS), in which the mesh's public key infrastructure (PKI) generates and distributes the certificates and keys used by the sidecar proxies.
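As a rough sketch of what a sidecar's mTLS setup involves, the snippet below configures the client half of a TLS context with Python's standard `ssl` module. The certificate paths are assumptions: in a real mesh they would be provisioned by the mesh's PKI, not hard-coded.

```python
import ssl

def mesh_client_context(ca_bundle=None):
    """Configure the client side of an mTLS connection (sketch only)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.verify_mode = ssl.CERT_REQUIRED      # always verify the peer's certificate
    if ca_bundle:
        # CA bundle distributed by the mesh's PKI (path is an assumption).
        ctx.load_verify_locations(ca_bundle)
    # For *mutual* TLS the sidecar would also present its own certificate,
    # issued by the same PKI: ctx.load_cert_chain(cert_file, key_file)
    return ctx
```

The same context can be kept alive and reused across requests, which is the connection-reuse benefit described above.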
Most orchestration frameworks already provide OSI Layer 4 network load balancing. A service mesh adds OSI Layer 7 application-level load balancing, with more powerful traffic-management algorithms. Load-balancing parameters are modifiable via API, making it possible to do staggered canary deployments.
Authentication and authorization:
The service mesh can authenticate and authorize requests made from both outside and within the app, sending only validated requests to instances. This acts somewhat like a traditional firewall, preventing unwanted access to application resources.
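The staggered canary deployment mentioned above boils down to weighted selection among backend versions. The sketch below shows the mechanism with invented version names and weights; real meshes expose the same idea through their traffic-management APIs rather than application code.

```python
import random

def pick_backend(weights):
    """Choose a backend version in proportion to its traffic weight."""
    versions = list(weights)
    return random.choices(versions, weights=[weights[v] for v in versions], k=1)[0]

# Staggered canary: send roughly 5% of traffic to the new version.
random.seed(0)  # deterministic for this demo
weights = {"v1-stable": 95, "v2-canary": 5}
counts = {v: 0 for v in weights}
for _ in range(10_000):
    counts[pick_backend(weights)] += 1
```

Shifting more traffic to the canary is then just an API call that updates the weights, with no redeploy of the services themselves.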
In a distributed architecture, microservices and containers move from the cloud to the edge, and this need is driving edge-native applications. A hybrid edge-and-cloud model lets edge-critical services run at the edge while other services remain in the cloud, with a service mesh hub acting as an orchestration layer across the meshes. Using a service mesh at the edge to offload network infrastructure work, however, brings new challenges.
For example, resource constraints become a challenge. In the cloud, a service mesh can scale across many nodes; at the edge there may be only a single node for a service. A data center may also have custom hardware to accelerate applications and networking, while the edge must make do with standard edge hardware. We therefore have to design for limited compute and network resources.
Another challenge for a service mesh at the edge is that edge devices may not be secure. The service mesh can secure the microservices, but the extra layer may itself become a target for exploits. The mesh provides a layer of security for the application, yet an attacker who gets past the mesh's defenses will be able to compromise the edge service.
This UrIoTNews article is syndicated from Mobodexter.