Suppose a data centre has different network partitions (e.g. a DMZ zone), so some sets of hosts can't contact other sets of hosts. If I want to propagate a message to all hosts in the datacenter, can gossip/Consul work for this use case?
For the above problem, one solution I am thinking of is: all hosts in the DMZ zones are allowed to connect to the Consul servers (a few hosts only). Some sets of hosts still can't contact other sets of hosts, but every host in the datacenter can talk to the Consul servers. Even so, I am not sure whether a message can be propagated to all the hosts in the datacenter this way.
Gossip is just used by Consul, which in turn is used for service registration, service discovery, and key/value data related to configuration.
The Event mechanism is probably what you want, from the Python docs:
Event = <class 'consul.base.Consul.Event'>
The event command provides a mechanism to fire a custom user event to
an entire datacenter. These events are opaque to Consul, but they can
be used to build scripting infrastructure to do automated deploys,
restart services, or perform any other orchestration action.
Unlike most Consul data, which is replicated using consensus, event
data is purely peer-to-peer over gossip.
This means it is not persisted and does not have a total ordering. In
practice, this means you cannot rely on the order of message delivery.
An advantage however is that events can still be used even in the
absence of server nodes or during an outage.
I'm building a system where client IoT devices will be making persistent websocket connections to a single instance of a microservice. We'll call it the "hardware gateway". End devices will be connecting to one of these service instances and may migrate between services at any time (perhaps due to a reboot or network interruption).
Other services will be pushing notifications to these hardware clients via some hardware gateway instance. I need a way to route these requests to the specific instance that is maintaining a connection to a specific IoT device. At the moment, my solution is to maintain an external KV store where I can map an IoT device's UUID to a service instance, but that puts an extra dependency on all other services to know about this KV store. Not to mention the additional latency introduced by this query.
Maybe there's some reverse proxy that allows me to dynamically update its matching criteria? I've also looked into using a message broker like RabbitMQ, but it doesn't seem to support this use case.
There's a reasonable solution in JVM land for this: Akka.
The instances form an Akka cluster. When a device makes a websocket connection, an actor is spawned to handle the interactions over the websocket. That actor registers itself, with a cluster-sharded actor keyed by the device's ID, as the actor interacting with the device (and likely periodically re-registers with the sharded actor). As instances are deployed, etc., the cluster rebalances. An important feature of this is that the service is stateful, but the instances deploy in a way that looks stateless to the outside world: requests can go to any node.
For pushing notifications to the devices, the HTTP endpoint or message-bus consumer in the service looks up the cluster sharded actor which forwards the notification to the websocket actor (you'll want to think about whether you want at-least-once or at-most-once delivery, which will govern whether there's some portion of the cluster sharded actor which should be persistent).
Background
I come from an HAProxy background, and recently there has been a lot of hype around the "Service Mesh" architecture. Long story short, I began to learn Envoy and Consul.
My understanding so far is that Envoy is proxy software deployed as a sidecar to abstract inbound/outbound networking, with "xDS" as the source of truth for its data plane configuration (clusters, routes, filters, etc.). Consul provides service discovery, segmentation, etc. It also abstracts the network and has a data plane, but Consul can't do complex load balancing or filter-based routing the way Envoy does.
Standalone, I can understand how each of them works and how to set them up, since the documentation is relatively good. But it quickly becomes a headache when I want to integrate Envoy and Consul, since the documentation for both lacks specifics on integration, use cases, and best practices.
Schematic
Consider the following simple infrastructure design:
Legends:
CS: Consul Server
CA: Consul Agent
MA: Microservice A
MB: Microservice B
MC: Microservice C
EF: Envoy Front Facing / Edge Proxy
Questions
Following are my questions:
In the case of multi-instance microservices, standalone Consul will randomize / round-robin between instances. With the Envoy and Consul integration, how does Consul handle multi-instance microservices? Which software does the load balancing?
Consul has Consul Servers to store its data; however, Envoy does not seem to have an "Envoy Server" to store its data, so where is its data stored and how is it distributed across multiple instances?
What about an Envoy cluster (a logical group of Envoy front-facing proxies, NOT a cluster of services)? How is the leader elected?
As I mentioned above, when run separately, Consul and Envoy each have their own sidecar/agent on every machine. I read that, when integrated, Consul injects an Envoy sidecar, but there is no further information on how this works.
If Envoy uses the Consul Server as its "xDS", what if, for example, I want to add an advanced filter so that requests for a certain URL segment must be forwarded to a certain instance?
If Envoy uses the Consul Server as its "xDS", what if I have another machine and services that (for some reason) are not managed by the Consul Server? How do I configure Envoy to add filters, clusters, etc. for that machine and those services?
Thank you! I'm excited, and I hope this thread can be helpful to others too.
Apologies for the late reply. I figure it's better late than never. :-)
If you are only using Consul for service discovery and directly querying it via DNS, then Consul will randomize the IP addresses returned to the client. If you're querying the HTTP interface, it is up to the client to implement a load-balancing strategy based on the hosts returned in the response. When you're using Consul service mesh, the load-balancing function is handled entirely by Envoy.
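As an illustration of the HTTP-interface case, a minimal client-side sketch with the python-consul client (the service name "web" and the pick-one-at-random strategy are just assumptions):

    import random

    import consul

    # Assumes a local Consul agent and a service registered as "web".
    c = consul.Consul()

    # Ask only for healthy instances; Consul returns all of them and the
    # client decides how to balance across them (here: pick one at random).
    _, nodes = c.health.service("web", passing=True)
    instances = [
        (entry["Service"]["Address"] or entry["Node"]["Address"],
         entry["Service"]["Port"])
        for entry in nodes
    ]
    address, port = random.choice(instances)
    print(f"sending request to {address}:{port}")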
Consul is an xDS server. The data is stored within Consul and distributed to the agents within the cluster. See the Connect Architecture docs for more information.
Envoy clusters are similar to backend server pools. Proxies contain Clusters for each upstream service. Within each cluster, there are Endpoints which represent the individual proxy instances for the upstream services.
Consul can inject the Envoy sidecar when it is deployed on Kubernetes. It does this through a Kubernetes mutating admission webhook. See Connect Sidecar on Kubernetes: Installation and Configuration for more information.
Consul supports advanced layer 7 routing features. You can configure a service-router to route requests to different destinations by URL paths, headers, query params, etc.
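As a rough sketch of that kind of URL-based routing, a service-router config entry can be written through Consul's config HTTP API (shown here with the requests library; the service names and path are hypothetical, and `consul config write` with an HCL file is the equivalent CLI route):

    import requests

    # Hypothetical example: send requests whose path starts with /reports
    # to the "reporting" service; everything else keeps going to "web".
    # Assumes a Consul agent reachable on localhost:8500.
    service_router = {
        "Kind": "service-router",
        "Name": "web",
        "Routes": [
            {
                "Match": {"HTTP": {"PathPrefix": "/reports"}},
                "Destination": {"Service": "reporting"},
            }
        ],
    }

    resp = requests.put("http://127.0.0.1:8500/v1/config", json=service_router)
    resp.raise_for_status()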
Consul has an upcoming feature in version 1.8 called Terminating Gateways which may enable this use case. See the GitHub issue "Connect: Terminating (External Service) Gateways" (hashicorp/consul#6357) for more information.
In DP XB62, a B2B persistence store can be set up to run in an HA configuration with a primary node with write access and a standby/slave node with read-only access. This is tightly connected with virtual IPs and standby control. This works fine for inbound connections (HTTP, for instance), but how can I put pollers under active/standby control?
I.e. the MQ, SFTP, and FTP polling front-side handlers should be deactivated when the machine is in standby mode (and the B2B persistence store is in standby mode).
Can this be achieved in XB62 firmware 6.0.0.2?
Sorry, no, it can't...
As the stand-by control of the DataPower boxes isn't a "real" cluster, it won't deactivate the passive box; it merely removes the IP address from it.
The pollers will still poll on the stand-by box and there is unfortunately no way around that.
For customers who want all processing to be done on one box, I normally set up a "poller MPGW" that has its backside set to the VIP. That way, any poller that picks up data will send it to the "active" box and the processing will happen there.
This is most convenient if, for example, you only want to monitor a single B2B Transaction Viewer.
I have also been testing a few scripts that enable/disable the FSH depending on events sent at fail-over, but I have found that there are a few too many events to monitor for that to be a "safe" approach...
We have been trying - without success - to get transactional message queues working between local servers and our cloud servers up in Amazon EC2.
We're using NServiceBus, and have got the pub/sub examples and various other trivial apps working locally between here and EC2, but trying to spin up the components of our actual application is proving... vexatious.
As far as I can work out, to allow a local server (DYLAN-PC) to send a message transactionally via a queue on an Amazon EC2 instance, I will need to:
Enable NETBIOS name resolution (e.g. via the /etc/lmhosts file) at both ends
Allow RPC connections to be initiated from either end (so open port 135 for RPC plus various other ports)
Configure MSTDC on both systems, enabling remote connections and inbound/outbound connections
Have I missed something? In particular, the requirement to allow NetBIOS in an age where everything (including Active Directory!) runs on DNS seems particularly archaic. Are we doing something stupid trying to use MSMQ between sites like this? This is the first big project where we've tried this kind of distributed architecture, and the deployment/configuration is starting to hurt so much I'm convinced we've taken a wrong turn somewhere... a little perspective or advice would be gratefully received!
If you're looking to build a geographically distributed system where you can't arrange a VPN between the sites, you should be using the gateway capabilities of NServiceBus to communicate over alternate transports (like HTTP) between those sites.
RPC is required for reading from remote queues.
If you push to remote queues and pull from local queues, you won't be using RPC.
Is that called "clustering" of servers? When a web request is sent, does it go through the main server, and if the main server can't handle the extra load, then it forwards it to the secondary servers that can handle the load? Also, is one "server" that's up and running the application called an "instance"?
[...] Is that called "clustering" of servers?
Clustering is indeed transparently using multiple nodes that are seen as a single entity: the cluster. Clustering allows you to scale: you can spread your load across all the nodes and, if you need more power, you can add more nodes (short version). Clustering also makes you fault tolerant: if one node (physical or logical) goes down, other nodes can still process requests and your service remains available (short version).
When a web request is sent, does it go through the main server, and if the main server can't handle the extra load, then it forwards it to the secondary servers that can handle the load?
In general, this is the job of a dedicated component called a "load balancer" (hardware or software) that can use many algorithms to balance requests: round-robin, FIFO, LIFO, load-based...
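To illustrate the round-robin strategy mentioned above, a minimal sketch (the backend addresses are made up; a real load balancer would also track health and load):

    import itertools

    # Hypothetical backend pool behind the load balancer.
    backends = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]

    # Round-robin: hand out backends in a fixed rotation, one per request.
    rotation = itertools.cycle(backends)

    def pick_backend() -> str:
        return next(rotation)

    for _ in range(5):
        print(pick_backend())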
In the case of EC2, you previously had to load balance with round-robin DNS and/or HA Proxy. See Introduction to Software Load Balancing with Amazon EC2. But for some time now, Amazon has launched load balancing and auto-scaling (beta) as part of their EC2 offerings. See Elastic Load Balancing.
Also, is one "server" that's up and running the application called an "instance"?
Actually, an instance can be many things (depending of who's speaking): a machine, a virtual machine, a server (software) up and running, etc.
In the case of EC2, you might want to read Amazon EC2 Instance Types.
Here is a real example:
This specific configuration is hosted at RackSpace in their Managed Colo group.
Requests pass through a Cisco firewall. They are then routed across a gigabit LAN to a Cisco CSS 11501 Content Services Switch (i.e. a load balancer). The load balancer matches the incoming content to a content rule, handles the SSL decryption if necessary, and then forwards the traffic to one of several back-end web servers.
Every 5 seconds, the load balancer requests a URL on each webserver. If the webserver fails (two times in a row, IIRC) to respond with the correct value, that server is not sent any traffic until the URL starts responding correctly.
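A rough sketch of that health-check behaviour, with hypothetical URLs and the two-failures-in-a-row rule (the real logic lives in the load balancer's own configuration):

    import time
    import urllib.request

    # Hypothetical health-check URLs for the back-end web servers.
    healthy = {"http://10.0.1.11/health": True, "http://10.0.1.12/health": True}
    failures = {url: 0 for url in healthy}

    while True:
        for url in healthy:
            try:
                with urllib.request.urlopen(url, timeout=2) as resp:
                    ok = resp.status == 200  # stand-in for "the correct value"
            except OSError:
                ok = False
            failures[url] = 0 if ok else failures[url] + 1
            # Two failures in a row: stop sending traffic until it recovers.
            healthy[url] = failures[url] < 2
        time.sleep(5)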
Further behind the webservers is a MySQL master/slave configuration. Connections may be made to the master (for transactions) or to the slaves for read-only requests.
Memcached is installed on each of the webservers, with 1 GB of RAM dedicated to caching. Each web application may utilize the cluster of memcache servers to cache all kinds of content.
Deployment is handled using rsync to sync specific directories on a management server out to each webserver. Apache restarts, etc.. are handled through similar scripting over ssh from the management server.
The amount of traffic that can be handled through this configuration is significant. The advantages of easy scaling and easy maintenance are great as well.
For clustering, any web request would be handled by a load balancer which, being kept up to date on the current load of the servers forming the cluster, sends the request to the least burdened server. As for whether it's called an "instance"... I believe so, but I'd wait for confirmation on that first.
You'd need a very large application to be bothered with thinking about clustering and the "fun" that comes with it, software- and hardware-wise, though. Unless you're looking to start or are already running something big, it wouldn't be anything to worry about.
Yes, it can be required for clustering. Typically, as the load goes up, you might find yourself with a frontend server that does URL rewriting, HTTPS if required, and caching with Squid, say. The requests get passed on to multiple backend servers, probably using cookies to associate a session with a particular backend if necessary. You might have the database on a separate server as well.
I should add that there are other reasons why you might need multiple servers; for instance, there may be a requirement that the database not be on the frontend server for security reasons.