Zuul - Single Point of Failure - Microservices

When Zuul is used as an edge service together with Eureka, doesn't it stand out as a single point of failure? Is there any way around this to make it scalable and resilient?

You can run multiple instances of Zuul behind a load balancer.
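For example, a minimal nginx configuration fronting two Zuul instances could look like the following sketch (the hostnames and ports are illustrative placeholders, not values from the question):

# nginx.conf fragment: round-robin across two Zuul instances
upstream zuul {
    server zuul-1.internal:8080;
    server zuul-2.internal:8080;
}
server {
    listen 80;
    location / {
        proxy_pass http://zuul;
    }
}

Each Zuul instance can still register itself with Eureka, so adding or removing instances only requires updating the upstream block.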

Related

Custom API Gateway vs Reverse-proxy products

I am trying to understand the main difference between
(1) Using a reverse-proxy such as Nginx or Envoy as a gateway to route requests to corresponding microservice
vs
(2) Building a custom solution which uses the HttpClient (in .Net) to forward the request to the corresponding microservice. I would like to understand the benefits and drawbacks of each approach.
I think the first approach uses Layer 7 routing; does that mean it is more performant than the second?
I agree with wander3r. I'm currently using Envoy as my API proxy. Envoy not only acts as a proxy but also handles logging, load balancing, rate limiting, circuit breaking, etc.
If you are new to Envoy, I wrote a simple step-by-step guide on how to start using it:
Envoy Proxy Guide
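To give a flavour of what that looks like, here is a minimal static envoy.yaml sketch that routes /api traffic to a single upstream service; the listener port, cluster name, and upstream address are assumed placeholders:

# Minimal illustrative Envoy (v3 API) static configuration
static_resources:
  listeners:
  - name: main_listener
    address:
      socket_address: { address: 0.0.0.0, port_value: 10000 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
          route_config:
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              - match: { prefix: "/api" }
                route: { cluster: my_service }
  clusters:
  - name: my_service
    type: STRICT_DNS
    load_assignment:
      cluster_name: my_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: my-service, port_value: 8081 }

Rate limiting, circuit breaking, and the other features mentioned above are layered onto this same structure via additional HTTP filters and per-cluster settings.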

Load balancing when Docker Swarm is combined with Spring Cloud Eureka/Gateway

I have a question regarding how load balancing works when you have a single-node Docker Swarm combined with Spring Eureka and Spring Cloud Gateway. I have successfully configured a non-Eureka swarm and can see Swarm load balancing between replicas of a service:
Cloud Gateway route config
.route(r -> r.path("/api/**")
    .uri("http://my-service:8081")
    .id("my-service"))
If I then configure this to use Eureka I now have this:
.route(r -> r.path("/api/**")
    .uri("lb://MY-SERVICE")
    .id("my-service"))
I believe I'm right in assuming that the gateway will know the IPs/ports and load balance accordingly; however, when a request hits an IP, will Swarm then also decide to load balance between the replicas?
I appreciate that Eureka may be overkill for a small single-node swarm, but I feel it could be beneficial as the app expands and possibly becomes more distributed. Obviously I want to avoid a situation where load balancing happens twice.
I assume I could just use http instead of lb to stop the Gateway from load balancing.
The Eureka discovery service will provide the API gateway with all available addresses for a given service. Each service registered with Eureka will have a unique (container) IP and port. If the API gateway is configured to load balance the requests, then yes, each replica of the service will be used; Swarm doesn't need to do anything for load balancing, since you are targeting a specific running service (task) and not, for example, a node.
However, for a multi-node scenario, Docker Swarm has routing-mesh functionality that essentially removes the need for a discovery service. Imagine you have multiple nodes with replicas distributed across them. With Swarm's routing mesh you don't even have to know which nodes are running which services: the API gateway can route the incoming request to literally any node, and if that node happens to lack the requested service, Swarm will automatically balance the request to nodes that do have the task (the name given to a running instance of a service).
So that means the load balancer doesn't need any sort of discovery service such as Eureka in order to balance requests to particular container IPs or nodes; it can simply round-robin across all available nodes and that's it.
As for internal requests between services that have replicas, Swarm also provides load-balancing capabilities. An example of creating such a replicated service is sketched below.
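As an illustration, a replicated service published through the routing mesh might be created like this (the image and service names are placeholders):

# Three replicas; the published port is reachable on every node
docker service create --name my-service --replicas 3 --publish 8081:8081 my-image:latest

With the port published in swarm mode, any node accepts connections on 8081 and forwards them to the service's virtual IP, which balances across the three replicas.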

Consul & Envoy Integration

Background
I come from an HAProxy background, and recently there has been a lot of hype around the "Service Mesh" architecture. Long story short, I began to learn Envoy and Consul.
My understanding so far is that Envoy is proxy software, deployed as a sidecar to abstract inbound/outbound networking, with "xDS" as the API through which its data plane gets its source of truth (clusters, routes, filters, etc.). Consul provides service discovery, segmentation, and so on. It also abstracts the network and has a data plane, but Consul can't do complex load balancing or filter-based routing the way Envoy does.
Standalone, I can understand how each works and how to set them up, since the documentation is relatively good. But it quickly becomes a headache when I want to integrate Envoy and Consul, since the documentation for both lacks specifics on integration, use cases, and best practices.
Schematic
Consider the following simple infrastructure design (the original schematic image is not reproduced here; the legend below describes it):
Legends:
CS: Consul Server
CA: Consul Agent
MA: Microservice A
MB: Microservice B
MC: Microservice C
EF: Envoy Front Facing / Edge Proxy
Questions
Following are my questions:
1. In the case of multi-instance microservices, standalone Consul will round-robin the instances randomly. With the Envoy and Consul integration, how does Consul handle multi-instance microservices? Which piece of software does the load balancing?
2. Consul has the Consul Server to store its data; however, Envoy does not seem to have an "Envoy Server" to store its data. Where is its data stored, and how is it distributed across multiple instances?
3. What about an Envoy cluster (a logical group of Envoy front-facing proxies, not a cluster of services)? How is the leader elected?
4. As I mentioned above, when run separately, Consul and Envoy each have their own sidecar/agent on every machine. I read that when integrated, Consul injects an Envoy sidecar, but I found no further information on how this works.
5. If Envoy uses the Consul Server as its "xDS" source, what if, for example, I want to add an advanced filter so that requests for a certain URL segment must be forwarded to a certain instance?
6. If Envoy uses the Consul Server as its "xDS" source, what if I have another machine and services that (for some reason) are not managed by the Consul Server? How do I configure Envoy to add filters, clusters, etc. for that machine and those services?
Thank you! I'm excited, and I hope this thread can be helpful to others too.
Apologies for the late reply. I figure it's better late than never. :-)
1. If you are only using Consul for service discovery and querying it directly via DNS, then Consul will randomize the order of the IP addresses returned to the client. If you're querying the HTTP interface, it is up to the client to implement a load-balancing strategy based on the hosts returned in the response. When you're using Consul service mesh, the load-balancing function is handled entirely by Envoy.
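For example, the DNS interface can be queried like this (the service name is a placeholder; 8600 is Consul's default DNS port):

dig @127.0.0.1 -p 8600 my-service.service.consul SRV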
2. Consul is an xDS server. The data is stored within Consul and distributed to the agents within the cluster. See the Connect Architecture docs for more information.
3. Envoy clusters are similar to backend server pools. Proxies contain clusters for each upstream service. Within each cluster, there are endpoints which represent the individual proxy instances for the upstream services.
4. Consul can inject the Envoy sidecar when it is deployed on Kubernetes. It does this through a Kubernetes mutating admission webhook. See Connect Sidecar on Kubernetes: Installation and Configuration for more information.
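On Kubernetes, opting a pod into sidecar injection is done with an annotation; a minimal sketch (the names and image are placeholders) looks like this:

apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    consul.hashicorp.com/connect-inject: "true"
spec:
  containers:
  - name: my-app
    image: my-app:latest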
5. Consul supports advanced layer 7 routing features. You can configure a service-router to route requests to different destinations by URL paths, headers, query params, etc.
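As a sketch, a service-router config entry that sends one URL path prefix to a dedicated service might look like this (the service names and path are placeholders), applied with consul config write:

Kind = "service-router"
Name = "my-service"
Routes = [
  {
    Match {
      HTTP {
        PathPrefix = "/admin"
      }
    }
    Destination {
      Service = "my-service-admin"
    }
  }
]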
6. Consul has an upcoming feature in version 1.8 called Terminating Gateways, which may enable this use case. See the GitHub issue "Connect: Terminating (External Service) Gateways" (hashicorp/consul#6357) for more information.

Difficulty using Wiremock, microservices, and external https service - no traffic captured

I have a system with a microservice architecture. So I have a setup where, when I hit a URL, for example "http://localhost:8081/data/myId/", I get back a JSON response describing a resource. This is the system I have created.
It turns out there is some more complexity: to produce this response, I make a connection to an external service provider, and this is what I want to use WireMock to mock, as this service provider has an API call limit. Let us say that I am interacting with this service provider at the URL "https://api.dummyservice.com/". (So every "http://localhost:8081/data/myId/" call involves a "https://api.dummyservice.com/" call.)
So I am using WireMock as follows:
java -jar '/home/user/Desktop/wiremock-standalone-2.19.0.jar'
--recd-mappings
--proxy-all https://api.dummyservice.com/
--verbose
--print-all-network-traffic
My intention is to capture all calls that my microservice-based system makes to https://api.dummyservice.com/ so that I can stub and mock the responses. The problem is that I am not capturing any traffic at all, even though when I access "http://localhost:8081/data/myId/" I get a successful response back!
Have I misunderstood WireMock's application? How can I debug this issue? It seems that I am performing quite a straightforward task.
I am on an Ubuntu 18.04 system if it makes any difference.
It seems you are using standalone WireMock in the proper way, but please double-check the parameters; the correct forms are:
--record-mappings
--proxy-all="https://api.dummyservice.com/"
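So the full corrected invocation would be:

java -jar '/home/user/Desktop/wiremock-standalone-2.19.0.jar' \
  --record-mappings \
  --proxy-all="https://api.dummyservice.com/" \
  --verbose \
  --print-all-network-traffic

Also note that WireMock can only record traffic that actually flows through it: the microservice's outbound base URL for the external provider has to point at the WireMock instance (by default http://localhost:8080) rather than directly at https://api.dummyservice.com/, otherwise no calls will be captured.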

How to setup a websocket in Google Kubernetes Engine

How do I enable a port on Google Kubernetes Engine to accept websocket connections? Is there a way of doing so other than using an ingress controller?
WebSockets are supported by Google's global load balancer, so you can use a Kubernetes Service of type LoadBalancer to expose such a service beyond your cluster.
Do be aware that load balancers created and managed outside Kubernetes in this way have a default connection timeout of 30 seconds, which interferes with WebSocket operation and causes connections to be closed frequently. That makes them nearly useless for WebSockets.
Until this issue is resolved, you will either need to modify this timeout parameter manually (see the sketch below), or (recommended) consider using an in-cluster ingress controller (e.g. nginx), which affords you more control.
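Adjusting the timeout manually looks roughly like this (the backend service name is a placeholder; list yours first to find it):

gcloud compute backend-services list
gcloud compute backend-services update my-backend-service --global --timeout=86400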
As per this article in the GCP documentation, there are 4 ways that you may expose a Service to external applications.
It can be exposed with a ClusterIP, a NodePort, a (TCP/UDP) load balancer, or an ExternalName.
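For the WebSocket case above, the LoadBalancer option is the relevant one. A minimal manifest might look like this sketch (the names, selector, and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: websocket-service
spec:
  type: LoadBalancer
  selector:
    app: websocket-app
  ports:
  - port: 80
    targetPort: 8080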
