Consul Go client: redundant server connections

I'm testing a Consul server cluster and using the Go client for this.
How do I specify multiple servers for the client to connect to?
Ideally it would be something like:
client, err := api.NewClient(api.DefaultConfig())
client.remotes = host_array
Or is this a wrong-headed approach to using Consul, and is the expected way to start a local client agent and then read the locally replicated state?

The Consul API client defaults to 127.0.0.1:8500 because there is an expectation that it will connect to a local Consul Agent running in client mode. The Consul Agent should be your "proxy" to the Consul Servers and maintain the connections with active servers so you don't have to.
https://www.consul.io/docs/internals/architecture.html
https://github.com/hashicorp/consul/issues/3689
An alternate approach could be to utilize a load balancer in front of a cluster of Consul servers. Strategies for that are documented here: https://www.hashicorp.com/blog/load-balancing-strategies-for-consul
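If you do end up pointing a client at several agents or servers yourself, the client-side failover this implies can be sketched in plain Go. The `hostRing` type below is a hypothetical helper, not part of the `consul/api` package; on each failed request you would build a fresh `api.Client` against whatever address `pick()` returns.

```go
package main

import "fmt"

// hostRing is a hypothetical helper that rotates through a list of
// Consul HTTP addresses so a client can fail over to the next one
// when a request errors out. It is NOT part of the consul/api package.
type hostRing struct {
	hosts []string
	next  int
}

// pick returns the next address in round-robin order.
func (r *hostRing) pick() string {
	h := r.hosts[r.next%len(r.hosts)]
	r.next++
	return h
}

func main() {
	ring := &hostRing{hosts: []string{
		"10.0.0.1:8500",
		"10.0.0.2:8500",
		"10.0.0.3:8500",
	}}
	// Rotate through the servers; after the last one, wrap around.
	for i := 0; i < 4; i++ {
		fmt.Println(ring.pick())
	}
}
```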

Related

Consul & Envoy Integration

Background
I come from an HAProxy background, and recently there has been a lot of hype around the "service mesh" architecture. Long story short, I began to learn Envoy and Consul.
My understanding is that Envoy is proxy software, usually deployed as a sidecar to abstract inbound/outbound networking, with xDS as the data plane's source of truth (clusters, routes, filters, etc.). Consul provides service discovery, segmentation, etc. It also abstracts the network and has a data plane, but Consul can't do the complex load balancing and filter-based routing that Envoy does.
As standalone tools I can understand how they work and how to set them up, since the documentation is relatively good. But it quickly became a headache when I wanted to integrate Envoy and Consul, since the documentation for both lacks specifics on integration, use cases, and best practices.
Schematic
Consider the following simple infrastructure design:
Legends:
CS: Consul Server
CA: Consul Agent
MA: Microservice A
MB: Microservice B
MC: Microservice C
EF: Envoy Front Facing / Edge Proxy
Questions
Following are my questions:
1. In the case of multi-instance microservices, standalone Consul will randomize / round-robin between instances. With the Envoy and Consul integration, how does Consul handle a multi-instance microservice? Which software does the load balancing?
2. Consul has the Consul Server to store its data; however, Envoy does not seem to have an "Envoy Server" to store its data, so where is its data stored and how is it distributed across multiple instances?
3. What about an Envoy cluster (a logical group of Envoy front-facing proxies, NOT a cluster of services)? How is the leader elected?
4. As I mentioned above, when run separately, Consul and Envoy each have their own sidecar/agent on each machine. I read that when integrated, Consul injects the Envoy sidecar, but there is no further information on how this works.
5. If Envoy uses the Consul Server as its xDS, what if, for example, I want to add an advanced filter so that requests for a certain URL segment must be forwarded to a certain instance?
6. If Envoy uses the Consul Server as its xDS, what if I have another machine and services that (for some reason) are not managed by the Consul Server? How do I configure Envoy to add filters, clusters, etc. for that machine and its services?
Thank you! I'm so excited, and I hope this thread can be helpful to others too.
Apologies for the late reply. I figure it's better late than never. :-)
If you are only using Consul for service discovery, and directly querying it via DNS then Consul will randomize the IP addresses returned to the client. If you're querying the HTTP interface, it is up to the client to implement a load balancing strategy based on the hosts returned in the response. When you're using Consul service mesh, the load balancing function will be entirely handled by Envoy.
Consul is an xDS server. The data is stored within Consul and distributed to the agents within the cluster. See the Connect Architecture docs for more information.
Envoy clusters are similar to backend server pools. Proxies contain Clusters for each upstream service. Within each cluster, there are Endpoints which represent the individual proxy instances for the upstream services.
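For illustration, a hand-written static Envoy cluster definition looks roughly like the sketch below (Envoy v3 API; the service name and addresses are made up). In the Consul integration these clusters and endpoints are generated dynamically over xDS rather than written by hand:

```yaml
clusters:
- name: service_b              # one cluster per upstream service
  connect_timeout: 1s
  type: STRICT_DNS
  lb_policy: ROUND_ROBIN       # Envoy, not Consul, balances the load
  load_assignment:
    cluster_name: service_b
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: 10.0.0.21   # an individual upstream instance
              port_value: 8080
```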
Consul can inject the Envoy sidecar when it is deployed on Kubernetes. It does this through a Kubernetes mutating admission webhook. See Connect Sidecar on Kubernetes: Installation and Configuration for more information.
Consul supports advanced layer 7 routing features. You can configure a service-router to route requests to different destinations by URL paths, headers, query params, etc.
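As a sketch, a service-router config entry that sends one URL prefix to a different destination service might look like this (the service names are made up; see the Consul L7 traffic management docs for the full schema):

```hcl
Kind = "service-router"
Name = "web"

Routes = [
  {
    Match {
      HTTP {
        PathPrefix = "/admin"
      }
    }
    Destination {
      Service = "web-admin"
    }
  },
]
```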
Consul has an upcoming feature in version 1.8 called Terminating Gateways which may enable this use case. See the GitHub issue "Connect: Terminating (External Service) Gateways" (hashicorp/consul#6357) for more information.

Connecting to a Socket.IO server running in Node-Red on IBM Bluemix

I've set up a Node-Red instance on IBM Cloud with a Socket.IO server using node-red-contrib-socketio.
I was able to subscribe to events on port 3000 on my local host fine, but I'm having difficulty doing the same with the Node-RED instance on IBM Cloud.
According to my client console I seem to be able to connect, but I get no response using the following URL: ws://MYAPP.eu-gb.mybluemix.net/red:3000/socket.io/?EIO=3&transport=websocket Is this correct, or should I be using something else, like ws://MYAPP.eu-gb.mybluemix.net:3000/socket.io/?EIO=3&transport=websocket ?
Is any further configuration required in IBM Cloud to enable the connection?
If I need to authenticate within the URL I pass to the server, is there a particular way that string should be structured?
Many thanks,
This will not work on Bluemix.
The Bluemix router only forwards external traffic on ports 80 and 443 (HTTP/HTTPS) to apps.
But the app may not actually be listening on those ports: the port to listen on is passed to the application at startup in an environment variable.
You cannot just pick an arbitrary port and listen on it.
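The port lookup the answer describes can be sketched in Go (the same idea applies to a Node app reading process.env.PORT). The fallback value and the simulated port below are made up for illustration:

```go
package main

import (
	"fmt"
	"os"
)

// listenAddr returns the address the app should bind to. Bluemix
// injects the assigned port via the PORT environment variable; the
// router then forwards external :80/:443 traffic to that port.
func listenAddr() string {
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080" // local-development fallback, not used on Bluemix
	}
	return ":" + port
}

func main() {
	os.Setenv("PORT", "61234") // simulate the platform-assigned port
	fmt.Println(listenAddr())
}
```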

Elasticsearch multinode environment

In Elasticsearch I created a multi-node setup. I use the Java API TransportClient to communicate with the Elasticsearch server.
I created the transport client with only one IP (assume 192.129.129.12:9300). If I send a query to that single IP, it communicates with all nodes and returns results. What happens if the node I specified in the TransportClient (192.129.129.12:9300) fails? Can I still communicate with the other nodes? What is the optimal way to configure the TransportClient for a multi-node setup?
You need to activate the sniff option (the client.transport.sniff setting) so the TransportClient discovers the other nodes in the cluster and can keep working when the node you listed goes down.
See http://www.elasticsearch.org/guide/en/elasticsearch/client/java-api/current/client.html#transport-client

how to configure gearman_proxy to handle inbound flows between workers and gearmand?

We configured a centralized Nagios / Mod-Gearman setup with multiple Gearman workers to monitor our servers. I need to monitor remote servers by deploying Gearman workers on the remote site. But, for security reasons, I would like to reverse the direction of the initial connection between these workers and the NEB module (incoming flows from the workers into our network are forbidden). The Gearman proxy seems to be the solution, since it just moves jobs from the "central" gearmand into another gearmand.
I would like to know if it's possible to configure the Gearman proxy to send information to a remote gearmand and get check results back from it without having to open inbound flows.
Unfortunately, the documentation does not give use cases for that. Do you know where I could find more documentation about Gearman proxy configurations?

How does finagle kestrel cluster work

The documentation says we can use the Finagle ServerSet with ZooKeeper to create a cluster.
Should I use the Finagle server builder to launch a Kestrel cluster, or can the cluster be built with the Finagle client only?
What's the algorithm used to distribute the queue across a cluster?
1. We need to use Kestrel as a library instead of running the original Kestrel, i.e. build the Kestrel server on top of the Finagle library.
We can use a ServerSet on the client side to refer to a Kestrel cluster registered in ZooKeeper.
https://github.com/robey/kestrel/blob/master/docs/guide.md
On the Kestrel server side, if the optional zookeeper field of KestrelConfig is specified, Kestrel will attempt to use the given configuration to join a logical set of Kestrel servers. The ZooKeeper host, port, and other connection options are documented here: ZooKeeperBuilder.
Kestrel servers will join 0, 1, or 2 server sets depending on their current status.
2. The message sender sends each message to one randomly picked Kestrel server. The message receiver listens to all the Kestrel servers and gets notified when any of them receives a message. So the same queue is distributed across all the servers, and there is no placement algorithm.
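The pattern in point 2 can be sketched in plain Go. The code below is an in-memory stand-in, not the finagle-kestrel API: writers enqueue to one randomly chosen server, readers drain every server, so no placement algorithm is needed.

```go
package main

import (
	"fmt"
	"math/rand"
)

// cluster is an in-memory stand-in for a set of Kestrel servers;
// it is NOT the finagle-kestrel API, just the distribution pattern.
type cluster struct {
	names   []string
	servers map[string][]string // server name -> queued items
}

// enqueue sends an item to one randomly picked server (no hashing).
func (c *cluster) enqueue(item string) {
	name := c.names[rand.Intn(len(c.names))]
	c.servers[name] = append(c.servers[name], item)
}

// drain reads from every server, the way a receiver listens to all.
func (c *cluster) drain() []string {
	var out []string
	for _, name := range c.names {
		out = append(out, c.servers[name]...)
		c.servers[name] = nil
	}
	return out
}

func main() {
	c := &cluster{
		names:   []string{"kestrel-1", "kestrel-2", "kestrel-3"},
		servers: map[string][]string{},
	}
	for i := 0; i < 6; i++ {
		c.enqueue(fmt.Sprintf("msg-%d", i))
	}
	// Every message is recovered no matter which server it landed on.
	fmt.Println(len(c.drain()))
}
```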
