WebSocket: How to switch WebSocket connections from primary to standby cluster with DataPower?

I am using DataPower to redirect incoming requests to the application clusters.
I have two clusters, a primary cluster and a standby cluster. In case of a failure in the primary cluster, the requests get redirected to the standby cluster. But I am having trouble with already established WebSocket connections: the requests received over them still try to go to the primary cluster.
Has anyone had a similar problem and can help me with a solution?
Thank you.

Unfortunately it is not possible to "move" a WebSocket connection without a re-connect. The connection is persistent, and moving to another host requires a new handshake with that host.
There are more advanced load balancers, and you can run a pub/sub broker (e.g. RabbitMQ/Kafka) behind your WebSocket layer to handle failover/scaling for WS, but unfortunately DataPower can't do this out of the box.
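So the failover has to happen on the client side: when the socket to the old cluster drops, the client opens a brand-new connection (and handshake) and lets the gateway route it to whichever cluster is currently active. A rough sketch using Java's built-in java.net.http.WebSocket client; the gateway URI and class name are placeholders, not part of any DataPower API:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.concurrent.CompletionStage;

public class ReconnectingWebSocketClient {

    private final HttpClient http = HttpClient.newHttpClient();
    // Placeholder for whatever WebSocket endpoint DataPower exposes.
    private final URI gatewayUri = URI.create("wss://datapower.example.com/ws");

    public void connect() {
        http.newWebSocketBuilder()
            .buildAsync(gatewayUri, new WebSocket.Listener() {
                @Override
                public CompletionStage<?> onClose(WebSocket ws, int statusCode, String reason) {
                    // The old connection cannot be moved; a fresh handshake is
                    // required, which the gateway can then route to the
                    // cluster that is currently active.
                    connect();
                    return null;
                }

                @Override
                public void onError(WebSocket ws, Throwable error) {
                    connect();
                }
            })
            // If the handshake itself fails, retry (a real client would back off).
            .exceptionally(ex -> { connect(); return null; });
    }

    public static void main(String[] args) throws InterruptedException {
        new ReconnectingWebSocketClient().connect();
        Thread.sleep(Long.MAX_VALUE); // keep the demo process alive
    }
}
```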

Related

Working of websocket services in clustered deployment

Let's say I have a WebSocket implemented in Spring Boot. The architecture is microservices. I have deployed the service in a Kubernetes cluster and I have two running instances of the service; the socket implementation uses STOMP with Redis as the broker.
Now the first connection is created between a client and one of the instances. Does all the data flow occur between the client and that connected instance? Would the other instance also have a connection? In case the current instance goes down, would the other instance open up a connection?
Now let's say I'm sending some data back to the client which comes through a Kafka topic. Either instance could read it. If so, would either of them be able to send the data back to the client?
Can someone help me understand these scenarios?
A WebSocket is a permanent connection. After opening it, it will be routed through Kubernetes to one fixed pod. No other pod will receive the connection.
If the pod goes down, the connection is terminated.
If a new connection is created, for example by a different user, it may be routed to a different pod.
Where the transmitted data comes from, for example Kafka, is not relevant in this context. It could be anything.
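Because the socket is pinned to one pod, the usual way to let "either instance" deliver data to the client is to relay publications through the shared broker the question mentions. A minimal sketch of the Spring config, assuming a STOMP-capable broker (e.g. RabbitMQ's STOMP adapter; Redis would need a different relay mechanism); host, port and destinations are placeholders:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        // Each client connects (and stays connected) to exactly one instance.
        registry.addEndpoint("/ws").withSockJS();
    }

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        registry.setApplicationDestinationPrefixes("/app");
        // Relay subscriptions/publications through an external broker so that a
        // message published by instance A reaches a client whose socket is held
        // by instance B. Host and port are placeholders.
        registry.enableStompBrokerRelay("/topic", "/queue")
                .setRelayHost("broker-host")
                .setRelayPort(61613);
    }
}
```

With a relay like this in place, a Kafka listener on either instance can simply publish via SimpMessagingTemplate.convertAndSend and the broker delivers it to whichever instance holds the subscriber's socket.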

ActiveMQ Artemis: Can slave instance be a connection point for the clients?

The ActiveMQ Artemis documentation says:
Slave will be in passive mode until the master crashes...
That's OK, but it's not clear if brokers in passive mode can be a connection point. In other words, can I put my slave in the connection list for a remote client like below?
(tcp://my-master:61616,tcp://my-slave:61616)?reconnectAttempts=5
If yes, does it mean that a broker in passive mode is just a router?
The ActiveMQ Artemis JMS client supports a list of servers to be used for the initial connection attempt. It can be specified in the connection URI using a parenthesized list, e.g.: (tcp://myhost:61616,tcp://myhost2:61616)?reconnectAttempts=5. The client uses this list of servers to create the first connection.
The slave broker doesn't accept incoming client connections until it becomes live, but it is important to include both the master and the slave URIs in the list because the client can't know which one is live when it creates the initial connection.
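For reference, a minimal sketch of a JMS client using such a URI with the Artemis JMS client; the host names, queue name and class name are placeholders taken from the question:

```java
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class ArtemisFailoverClient {
    public static void main(String[] args) {
        // Both master and slave are listed for the *initial* attempt;
        // only the live broker will actually accept the connection.
        ConnectionFactory cf = new ActiveMQConnectionFactory(
                "(tcp://my-master:61616,tcp://my-slave:61616)?reconnectAttempts=5");

        try (JMSContext context = cf.createContext()) {
            context.createProducer().send(context.createQueue("exampleQueue"), "hello");
        }
    }
}
```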

Shared node-wise queue

I am building a proxy server in Java. The application is deployed in Docker containers (multiple instances).
Below are the requirements I am working on:
Clients send HTTP requests to my proxy server.
The proxy server forwards those requests, in the order received, to the destination node server.
When a destination is not reachable, the proxy server stores those requests and forwards them when the destination becomes available again.
Similarly, when a request fails, it is retried after "X" time.
I implemented a node-wise queue (a HashMap keyed by node name, whose value holds the node's reachability status plus a queue of requests in arrival order).
The above solution works well when there is only one instance. But I would like to know how to solve this when there are multiple instances. Is there any shared data structure I can use to solve this issue? ActiveMQ, Redis, Kafka, something of that kind (I am very new to shared memory/processing).
Any help would be appreciated.
Thanks in advance.
Ajay
There is an open-source REST proxy for Kafka, based on Jetty, which you might get some implementation ideas from:
https://github.com/confluentinc/kafka-rest
This proxy doesn't store messages itself, because Kafka clusters are highly available for writes and there are typically a minimum of three Kafka nodes available for message persistence. The Kafka client in the proxy can be configured to retry if the cluster is temporarily unavailable for writes.
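That retry behaviour is plain Kafka producer configuration. A minimal sketch; the broker addresses, topic name and class name are placeholders:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class RetryingProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // A typical cluster has at least three brokers, so writes stay available
        // even if one node is down. Hosts below are placeholders.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-1:9092,kafka-2:9092,kafka-3:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Keep retrying while the cluster is temporarily unavailable for writes,
        // bounded by the overall delivery timeout.
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
        props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 120_000);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("requests", "payload")); // placeholder topic
        }
    }
}
```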

Is a HornetQ cluster designed to cope with losing one or more nodes?

I know there's HornetQ HA with master/backup setups. But I would like to run HornetQ in a non-master setup and handle duplicate messages myself.
The cluster setup looks perfect for this, but nowhere do I see a hint that it can service these requirements. What happens to clients of a failed node? Do they connect to other servers?
Will a rebooted/repaired node be able to rejoin the cluster and continue distributing its persistent messages?
Failover on clients requires a backup node at the moment. You would have to reconnect manually in case of a failure to get onto other nodes.
Example: get the connection factory and connect there.
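A rough sketch of that manual reconnect, assuming the application already holds one JMS ConnectionFactory per cluster node (the list of factories and the class name are illustrative, not a HornetQ-specific API):

```java
import java.util.List;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;

public class ManualFailover {

    // One factory per cluster node, built elsewhere (e.g. via JNDI lookup).
    private final List<ConnectionFactory> nodeFactories;

    public ManualFailover(List<ConnectionFactory> nodeFactories) {
        this.nodeFactories = nodeFactories;
    }

    // Try each node in turn until one accepts the connection.
    public Connection connectToAnyNode() throws JMSException {
        JMSException last = null;
        for (ConnectionFactory cf : nodeFactories) {
            try {
                return cf.createConnection();
            } catch (JMSException e) {
                last = e; // this node is down, try the next one
            }
        }
        throw last != null ? last : new JMSException("No nodes configured");
    }
}
```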

ActiveMQ network of brokers connectivity scheme

I need to scale up my ActiveMQ solution, so I have defined a network of brokers.
I'm trying to figure out how to connect my producers and consumers to the cluster.
Does each producer have to be connected to a single broker (with the failover URI for availability)? In that case, how can I guarantee the distribution of traffic across the brokers? Do I need to configure the producers to each connect to a different broker?
Should I apply the same scheme for the consumers?
This makes the application aware of the cluster topology, which I hope can be avoided by a decent cluster.
Tx
Tomer
I strongly suggest you carefully read through the documentation from activemq.apache.org on clustering ActiveMQ. There are a lot of very helpful tips.
From what you have written, I suggest you pay special attention to this. At the bottom of the page it details how you can control, from the server side, the failover/failback configuration for your producers.
For example:
updateClusterClients - if true, pass information to connected clients about changes in the topology of the broker cluster
rebalanceClusterClients - if true, connected clients will be asked to rebalance across a cluster of brokers when a new broker joins the network of brokers
updateURIsURL - A URL (or path to a local file) to a text file containing a comma separated list of URIs to use for reconnect in the case of failure
In a production system, I would think that making use of updateURIsURL would make scaling out a lot less painful.
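On the client side, each producer and consumer typically connects through a failover URI listing the brokers it may start from; with updateClusterClients and rebalanceClusterClients enabled on the brokers, the cluster then keeps that list up to date and rebalances clients itself. A minimal sketch (broker host names and class name are placeholders):

```java
import javax.jms.Connection;
import javax.jms.JMSException;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ClusterAwareClient {
    public static void main(String[] args) throws JMSException {
        // The failover transport picks one of the listed brokers (randomize=true
        // spreads clients across them) and transparently reconnects on failure.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://broker1:61616,tcp://broker2:61616)?randomize=true");

        Connection connection = factory.createConnection();
        connection.start();
        // ... create sessions, producers and consumers as usual ...
        connection.close();
    }
}
```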
