How do the master and slave communicate in Mesos? Does the master run a webserver? Is it using HTTP or raw TCP/IP?
Thanks for your reply
The master and worker (aka slave) exchange protobuf messages packed in HTTP/1.1. The master has a tiny built-in webserver that processes messages from workers and requests coming in via HTTP endpoints. If you want to learn more, you can start by looking at mesos/3rdparty/libprocess/src/encoder.hpp:107
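To make "protobuf packed in HTTP/1.1" concrete, here is a rough sketch of what such a message looks like on the wire. This is illustrative only: the general scheme (a POST to a path naming the target actor and message type, with the serialized protobuf as the body, plus a `Libprocess-From` header identifying the sender) follows what encoder.hpp produces, but exact header names and paths vary by Mesos version, and the PID and payload below are made-up placeholders.

```python
# Illustrative sketch of a libprocess-style message as an HTTP/1.1 request.
# The path encodes the target actor and message name; the body is the raw
# serialized protobuf. All concrete values here are placeholders.
def encode_libprocess_message(from_pid: str, to_actor: str,
                              message_name: str, payload: bytes) -> bytes:
    head = (
        f"POST /{to_actor}/{message_name} HTTP/1.1\r\n"
        f"Libprocess-From: {from_pid}\r\n"
        "Connection: Keep-Alive\r\n"
        f"Content-Length: {len(payload)}\r\n"
        "\r\n"
    )
    return head.encode("ascii") + payload

raw = encode_libprocess_message(
    "slave(1)@10.0.0.2:5051",                # sender's libprocess PID
    "master",                                # target actor on the webserver
    "mesos.internal.RegisterSlaveMessage",   # message type
    b"\x0a\x05hello",                        # placeholder protobuf bytes
)
print(raw.splitlines()[0])
```

The master's webserver dispatches on the request path, deserializes the body with the named protobuf type, and hands the message to the target actor.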
I am using DataPower to redirect the incoming requests to the application clusters.
I have 2 clusters, a primary cluster and a standby cluster. In case of a failure in the primary cluster, requests get redirected to the standby cluster. But I am having trouble with already-established WebSocket connections: requests received on them still try to go to the primary cluster.
Has anyone had a similar problem? Can you please help me with a solution?
Thank you.
Unfortunately it is not possible to "move" a WebSocket connection without a reconnect. The connection is persistent, and moving hosts would force it to do a new handshake with the new host.
There are more advanced load balancers, and you can run a pub/sub broker (e.g. RabbitMQ or Kafka) behind your WebSockets to handle failover and scaling, but unfortunately DataPower can't do this out of the box...
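Since the connection can't be migrated, the practical fix is on the client side: detect the drop and re-handshake against whichever cluster is reachable. A minimal sketch of that failover logic, assuming a hypothetical `connect` callable (whatever opens your WebSocket) that raises `ConnectionError` on failure:

```python
import time

def connect_with_failover(connect, hosts, retries_per_host=3, backoff=1.0):
    """Try each host in order (primary first, then standby).

    `connect` is a stand-in for whatever opens your WebSocket; it must
    raise ConnectionError when the host is unreachable.
    """
    last_error = None
    for host in hosts:
        for attempt in range(retries_per_host):
            try:
                return connect(host)  # fresh handshake against this host
            except ConnectionError as exc:
                last_error = exc
                time.sleep(backoff * (2 ** attempt))  # exponential backoff
    raise last_error

# Demo with stubbed connections: the primary is down, the standby answers.
def fake_connect(host):
    if host == "primary.example":
        raise ConnectionError("primary down")
    return f"ws-session@{host}"

session = connect_with_failover(
    fake_connect, ["primary.example", "standby.example"], backoff=0.0)
print(session)
```

The same loop can be re-entered whenever an established connection drops mid-session, so a DataPower-level failover just looks like one reconnect to the client.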
I am building a proxy server in Java. The application is deployed in Docker containers (multiple instances).
Below are the requirements I am working on:
Clients send HTTP requests to my proxy server.
The proxy server forwards those requests, in the order received, to the destination node server.
When the destination is not reachable, the proxy server stores those requests and forwards them when the destination becomes available again.
Similarly, when a request fails, it is retried after "X" time.
I implemented a per-node queue (a HashMap keyed by node name, whose value holds the node's reachability status plus a queue of requests in the order received).
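The per-node structure described above can be sketched like this (in-memory Python for illustration; the node names and request strings are made up). A shared, multi-instance version would keep the same shape but back each node's queue with something external, e.g. one Redis list per node instead of an in-process deque:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class NodeState:
    reachable: bool = True
    pending: deque = field(default_factory=deque)  # FIFO of queued requests

nodes: dict = {}  # node name -> NodeState

def enqueue(node: str, request: str) -> None:
    """Queue a request for a node, preserving arrival order."""
    nodes.setdefault(node, NodeState()).pending.append(request)

def drain(node: str) -> list:
    """Return everything queued for a node once it is reachable again."""
    state = nodes.get(node)
    if state is None or not state.reachable:
        return []
    sent = list(state.pending)
    state.pending.clear()
    return sent

# Demo: backend goes down, requests queue up, then it comes back.
nodes["backend-1"] = NodeState(reachable=False)
enqueue("backend-1", "GET /a")
enqueue("backend-1", "GET /b")
nodes["backend-1"].reachable = False and drain("backend-1")  # nothing sent while down
nodes["backend-1"].reachable = True
result = drain("backend-1")
print(result)
```

With multiple proxy instances, the danger is two instances draining the same queue concurrently; an external queue (Redis lists with atomic pops, or a Kafka/ActiveMQ topic per node) gives you that single ordered queue across instances.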
The above solution works well when there is only one instance, but how do I solve this when there are multiple instances? Is there a shared data structure I can use to solve this issue, such as ActiveMQ, Redis, or Kafka? (I am very new to shared memory / distributed processing.)
Any help would be appreciated.
Thanks in advance.
Ajay
There is an open-source REST proxy for Kafka, based on Jetty, that you might get some implementation ideas from:
https://github.com/confluentinc/kafka-rest
This proxy doesn't store messages itself, because Kafka clusters are highly available for writes and there are typically a minimum of 3 Kafka nodes available for message persistence. The Kafka client in the proxy can be configured to retry if the cluster is temporarily unavailable for writes.
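The retry behaviour mentioned above is plain producer configuration rather than custom queueing code. The setting names below are the standard Kafka producer configs (as used by the Java client); the broker addresses are placeholders, and the values are just one reasonable choice, shown here as a plain dict for illustration:

```python
# Standard Kafka producer settings that let writes survive a temporarily
# unavailable cluster. Broker addresses are placeholders.
producer_config = {
    "bootstrap.servers": "kafka-1:9092,kafka-2:9092,kafka-3:9092",
    "acks": "all",                  # wait for in-sync replicas to persist
    "retries": 2147483647,          # keep retrying until the timeout below
    "retry.backoff.ms": 100,        # pause between retry attempts
    "delivery.timeout.ms": 120000,  # total time budget per record (2 min)
    "enable.idempotence": True,     # no duplicates/reordering across retries
}
print(sorted(producer_config))
```

With settings like these, a transient broker outage shows up to the proxy as latency rather than lost requests, which is why the proxy itself needs no persistent store.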
I want to create a networked architecture where a master process is connected to some slave processes, and they exchange messages in this way:
Every slave should be able to send a message to the master. The master should be able to send a message to every subset of connected slaves.
i.e.
Master sends a message to Slave 1
Master sends a message to Slave 2 and Slave 3
Master sends a message to all Slaves
Slave 1 send a message to Master
These messages could have answers, but this can be handled at a higher level if ZeroMQ has no dedicated way of doing it.
This should work using only one port.
With plain sockets I could have the master bind on a port, accept connections, spawn a thread for every slave to handle incoming data, and use each individual connection to contact the corresponding slave.
Since this architecture will use message-based communication, I think ZeroMQ is the proper tool to implement it, but browsing the docs I can't find a way to do that.
I'm going to write this in Python, but the problem should be language-agnostic.
Using only one port, I think it's best to use DEALER/ROUTER:
Master would be a Router socket
Slave would be a Dealer socket
When slaves start, they send an "I'm here" message to the master, which should store the identity (the first frame received) in a list of known slaves.
The master then sends to a slave by prepending that identity and an empty frame to the message. (You can only send to one client at a time with a ROUTER socket, but it's trivial to write a function that takes a message and a list of slave identities and sends to each in turn.)
The identities of the slaves can either be set by you, using setsockopt on the DEALER sockets, or ZeroMQ will auto-assign unique ones if you don't.
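The whole pattern can be sketched with pyzmq. This is a minimal single-process demo (hence the `inproc` transport; over a network it would be one `tcp` endpoint, i.e. one port); the slave names are arbitrary, and the empty delimiter frame follows the convention described above:

```python
import zmq

ctx = zmq.Context.instance()

# Master: one ROUTER socket on a single endpoint (one port over TCP).
master = ctx.socket(zmq.ROUTER)
master.bind("inproc://master")  # e.g. "tcp://*:5555" over the network

# Slaves: DEALER sockets, each given an explicit identity before connecting.
slaves = {}
for name in (b"slave-1", b"slave-2"):
    s = ctx.socket(zmq.DEALER)
    s.setsockopt(zmq.IDENTITY, name)
    s.connect("inproc://master")
    slaves[name] = s
    s.send_multipart([b"", b"I'm here"])  # announce to the master

# Master learns each slave's identity from the first frame it receives.
known = []
for _ in range(len(slaves)):
    identity, _empty, _msg = master.recv_multipart()
    known.append(identity)

def send_to(identities, payload):
    """Send one message to each slave in the subset, one at a time."""
    for identity in identities:
        master.send_multipart([identity, b"", payload])

send_to(known, b"hello")           # "broadcast" = loop over all known slaves
send_to([b"slave-2"], b"only you") # targeted send to a subset of one

_empty, reply = slaves[b"slave-1"].recv_multipart()
print(reply)  # b'hello'
```

Slave-to-master messages arrive on the ROUTER with the sender's identity prepended, so request/reply bookkeeping can be layered on top at a higher level, as the question anticipated.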
I want to build an ASP.NET Web API server that can re-route incoming HTTP requests to other Web API servers. The main server will be the master, and its only job will be accepting requests and routing them to other servers. Slave servers will inform the master server when they have started and are ready to accept HTTP requests. Slave servers must not only announce that they are alive but also send which APIs they support. I think I have to re-map the routing tables on the master server at runtime. Is that possible?
This seems like load balancing according to functionality. Is there any way to do this? I have to write a load balancer for Web API; any suggestion is welcome.
We configured a centralized Nagios / Mod_Gearman setup with multiple Gearman workers to monitor our servers. I need to monitor remote servers by deploying Gearman workers on the remote site. But, for security reasons, I would like to reverse the direction of the initial connection between these workers and the NEB module (incoming flows from the workers to our network are forbidden). The Gearman proxy seems to be the solution, since it just puts jobs from the "central" gearmand into another gearmand.
I would like to know if it's possible to configure the Gearman proxy to send information to a remote gearmand and get check results back from it without having to open inbound flows.
Unfortunately, the documentation does not give use cases for that. Do you know where I could find more documentation about Gearman proxy configurations?