Let’s imagine you are writing code that invokes a service with a REST or Thrift API. To make a request, your code needs to know the network location (IP address and port) of a service instance. In a traditional application running on physical hardware, the network locations of service instances are relatively static; for example, your code can read them from a configuration file that is occasionally updated. In a modern, cloud-based microservices application, however, instances are assigned network locations dynamically and change because of autoscaling, failures, and upgrades, so clients need a service discovery mechanism.
There are two service discovery patterns: client-side discovery and server-side discovery.

Client-Side Service Discovery
With client-side discovery, the client is responsible for determining the network locations of available service instances. It queries the service registry, uses a load-balancing algorithm to select one of the available instances, and makes the request. Netflix OSS is an example of this pattern: Eureka acts as the service registry and Ribbon performs the client-side load balancing.
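As a rough sketch of the idea (the registry interface, service names, and round-robin strategy below are illustrative placeholders, not a real client library), client-side discovery boils down to asking the registry for instances and choosing one locally:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of client-side discovery: the client queries the registry
// for all instances of a service and load-balances across them itself.
public class ClientSideDiscoveryExample {

    // Placeholder for a registry lookup (e.g. backed by Eureka or Consul).
    interface ServiceRegistry {
        List<String> lookup(String serviceName); // returns "host:port" entries
    }

    private final ServiceRegistry registry;
    private final AtomicInteger counter = new AtomicInteger();

    ClientSideDiscoveryExample(ServiceRegistry registry) {
        this.registry = registry;
    }

    // Simple round-robin selection over the instances returned by the registry.
    String chooseInstance(String serviceName) {
        List<String> instances = registry.lookup(serviceName);
        if (instances.isEmpty()) {
            throw new IllegalStateException("No instances registered for " + serviceName);
        }
        int index = Math.floorMod(counter.getAndIncrement(), instances.size());
        return instances.get(index);
    }
}
```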
Server-Side Service Discovery
With server-side discovery, the client makes an HTTP request to a service through a load balancer. The load balancer queries the service registry and routes each request to an available service instance. As with client-side discovery, service instances are registered with and deregistered from the service registry. AWS Elastic Load Balancing (ELB) is an example of a server-side discovery router; an ELB is typically used to balance external traffic coming from the internet.
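From the client’s point of view, server-side discovery is simpler: it only needs the load balancer’s stable address. A minimal sketch, assuming a placeholder ELB DNS name (orders-lb.example.com) and Java 11+:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// The client knows only the load balancer's DNS name; the load balancer consults
// the service registry and forwards the request to a healthy instance.
public class ServerSideDiscoveryExample {
    public static void main(String[] args) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://orders-lb.example.com/orders/42")) // placeholder LB address
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```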
Self‑Registration Pattern
Service instances must be registered with and deregistered from the service registry. There are a couple of different ways to handle registration and deregistration. One option is for service instances to register themselves, the self‑registration pattern. The other option is for some other system component to manage the registration of service instances, the third‑party registration pattern. Let’s first look at the self‑registration pattern: when using it, a service instance is responsible for registering and deregistering itself with the service registry and, where the registry supports it, for sending periodic heartbeat requests so that its registration does not expire.
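A minimal sketch of self-registration (the RegistryClient interface and the 30-second heartbeat interval are assumptions for illustration, not a specific registry’s API):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// A service instance registers itself at startup, renews its registration with
// periodic heartbeats, and deregisters when the process shuts down.
public class SelfRegisteringService {

    // Placeholder for whatever registry client the application actually uses.
    interface RegistryClient {
        void register(String serviceName, String host, int port);
        void heartbeat(String serviceName, String host, int port);
        void deregister(String serviceName, String host, int port);
    }

    private final RegistryClient registry;
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    SelfRegisteringService(RegistryClient registry) {
        this.registry = registry;
    }

    void start(String serviceName, String host, int port) {
        registry.register(serviceName, host, port);
        // Renew the registration periodically so the registry can evict dead instances.
        scheduler.scheduleAtFixedRate(
                () -> registry.heartbeat(serviceName, host, port), 30, 30, TimeUnit.SECONDS);
        // Deregister cleanly when the JVM shuts down.
        Runtime.getRuntime().addShutdownHook(
                new Thread(() -> registry.deregister(serviceName, host, port)));
    }
}
```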
Third‑Party Registration Pattern
When using the third-party registration pattern, service instances aren’t responsible for registering themselves with the service registry. Instead, another system component known as the service registrar handles registration. The service registrar tracks changes to the set of running instances either by polling the deployment environment or by subscribing to events. When it notices a newly available service instance, it registers the instance with the service registry; it also deregisters terminated service instances.
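A sketch of the registrar’s reconciliation loop (both interfaces below are placeholders; a real registrar would talk to a concrete deployment platform and registry):

```java
import java.util.HashSet;
import java.util.Set;

// A separate registrar process keeps the service registry in sync with the set
// of instances actually running in the deployment environment.
public class ServiceRegistrar {

    // Placeholder: something that can list running instances (e.g. a container platform API).
    interface DeploymentEnvironment {
        Set<String> runningInstances(String serviceName); // "host:port" entries
    }

    // Placeholder: the registry being kept up to date.
    interface ServiceRegistry {
        void register(String serviceName, String instance);
        void deregister(String serviceName, String instance);
    }

    private final DeploymentEnvironment environment;
    private final ServiceRegistry registry;
    private Set<String> known = new HashSet<>();

    ServiceRegistrar(DeploymentEnvironment environment, ServiceRegistry registry) {
        this.environment = environment;
        this.registry = registry;
    }

    // Called periodically (or on platform events) to reconcile the registry.
    void reconcile(String serviceName) {
        Set<String> current = environment.runningInstances(serviceName);
        for (String instance : current) {
            if (!known.contains(instance)) {
                registry.register(serviceName, instance);   // newly started instance
            }
        }
        for (String instance : known) {
            if (!current.contains(instance)) {
                registry.deregister(serviceName, instance); // terminated instance
            }
        }
        known = current;
    }
}
```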
Service Discovery Tools
Netflix OSS Eureka Server is an application that holds information about all client service applications. Every microservice registers with the Eureka server, so the Eureka server knows the IP address and port of each running client application. Eureka Server is also known as Discovery Server.
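For reference, a minimal Eureka server in Spring Boot looks roughly like this (assuming the spring-cloud-starter-netflix-eureka-server dependency is on the classpath; clients register by adding the corresponding client starter and pointing eureka.client.serviceUrl.defaultZone at this server):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

// Stand-alone Eureka server (Discovery Server) that holds the registrations
// of all client applications.
@SpringBootApplication
@EnableEurekaServer
public class DiscoveryServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(DiscoveryServerApplication.class, args);
    }
}
```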
Netflix Ribbon is part of Netflix Open Source Software (Netflix OSS). It is a cloud library that provides client-side load balancing and, because it is a member of the Netflix family, it interacts automatically with Netflix service discovery (Eureka). Ribbon supplies the client-side load-balancing algorithms and gives you control over the behavior of its HTTP and TCP clients. An important point is that when you use Feign, Ribbon is applied automatically.
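A common way to use Ribbon from a Spring Cloud application is a @LoadBalanced RestTemplate; the sketch below assumes a Eureka registry is available and that order-service is just an example service name:

```java
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

// Requests addressed by logical service name (e.g. http://order-service/orders/42)
// are resolved from the registry and load-balanced on the client side.
@Configuration
public class RibbonClientConfig {

    @Bean
    @LoadBalanced
    RestTemplate restTemplate() {
        return new RestTemplate();
    }
}
```

With this bean in place, a call such as restTemplate.getForObject("http://order-service/orders/42", String.class) is spread across the registered instances of order-service.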
Zuul Server is an API gateway application. It handles all incoming requests and performs dynamic routing to the microservice applications, working as a front door for all requests; it is also known as the Edge Server. Zuul is built to enable dynamic routing, monitoring, resiliency, and security, and it can also route requests to multiple Amazon Auto Scaling groups.
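Enabling a Zuul edge server in Spring Boot is similarly small (this sketch assumes the spring-cloud-starter-netflix-zuul dependency and a Eureka registry; explicit routes can also be declared under the zuul.routes.* configuration properties):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

// Edge server: with a service registry in place, Zuul can forward
// /order-service/** to the instances registered under that service name.
@SpringBootApplication
@EnableZuulProxy
public class EdgeServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(EdgeServerApplication.class, args);
    }
}
```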
Consul is a service mesh solution that provides a full-featured control plane with service discovery, configuration, and segmentation functionality. Each of these features can be used individually as needed, or they can be used together to build a full service mesh.
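For a feel of the discovery piece, a service instance can be registered with a local Consul agent through its HTTP API (PUT /v1/agent/service/register); the service name, address, and port below are placeholders, and in practice most teams use a client library or Consul’s configuration files instead:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Registers one service instance with the local Consul agent (default HTTP port 8500).
public class ConsulRegistrationExample {
    public static void main(String[] args) throws Exception {
        String definition = """
                {
                  "ID": "order-service-1",
                  "Name": "order-service",
                  "Address": "10.0.0.5",
                  "Port": 8080
                }
                """;
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8500/v1/agent/service/register"))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(definition))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Consul responded with status " + response.statusCode());
    }
}
```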
Kubernetes (also known as K8s or “kube”) is an open source container orchestration platform that automates many of the manual processes involved in deploying, managing, and scaling containerized applications. The primary advantage of using Kubernetes, especially if you are optimizing app development for the cloud, is that it gives you a platform to schedule and run containers on clusters of physical or virtual machines (VMs).
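In Kubernetes, service discovery is largely invisible to application code: a Service gets a stable DNS name, and traffic to it is balanced across the pods behind it. The sketch below assumes a Service named order-service in the caller’s own namespace:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Inside the cluster, the platform's DNS resolves the Service name; no registry
// client is needed in the application itself.
public class KubernetesClientExample {
    public static void main(String[] args) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                // Fully qualified form: order-service.<namespace>.svc.cluster.local
                .uri(URI.create("http://order-service/orders/42"))
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```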