Vertx Websocket: Load Balancing

I am working on a system where I send messages to a server over WebSockets. For this I use the Vert.x (without Rx) WebSocket client with multiple verticles, and the messages are stored in an internal queue. I wish to dequeue from the messaging queue and submit each message to the verticle that has the least load. To identify the verticle with the least load I plan to use the ratio of the number of acks received to the number of messages sent. I understand Vert.x WebSockets are async in nature, but is there a provision to use any of the provided handlers to parse a response which indicates the message has reached the server?
Please note: the communication is one-way, i.e. only from client to server, and the server is not on Vert.x; it uses plain Netty.
Thanks in advance.

You can use a combination of SharedData and regular handlers:
this.vertx.createHttpServer().websocketHandler(ws -> {
    ws.handler(data -> {
        this.vertx.sharedData().getCounter(this.deploymentID(), (c) -> {
            c.result().incrementAndGet((dummy) -> {});
        });
    });
});
This will increment an atomic counter for the specific verticle each time new data arrives on that verticle's WebSocket.
Now you can get all deployment IDs using vertx.deploymentIDs()
What's left is to iterate over those, gather the counters, then get the minimal one.
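For example, a rough sketch of that gathering step (Vert.x 3.x assumed; the helper class and the println are only for illustration):

import io.vertx.core.Vertx;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class LeastLoadedLookup {

    // Collects the per-verticle counters (named after each deployment ID, as above)
    // and prints the deployment ID with the smallest value once all lookups finish.
    static void findLeastLoaded(Vertx vertx) {
        Set<String> ids = vertx.deploymentIDs();
        Map<String, Long> loads = new ConcurrentHashMap<>();
        AtomicInteger pending = new AtomicInteger(ids.size());

        for (String id : ids) {
            vertx.sharedData().getCounter(id, res -> {
                if (res.succeeded()) {
                    res.result().get(value -> {
                        loads.put(id, value.succeeded() ? value.result() : Long.MAX_VALUE);
                        reportIfDone(loads, pending);
                    });
                } else {
                    loads.put(id, Long.MAX_VALUE);
                    reportIfDone(loads, pending);
                }
            });
        }
    }

    private static void reportIfDone(Map<String, Long> loads, AtomicInteger pending) {
        if (pending.decrementAndGet() == 0) {
            loads.entrySet().stream()
                 .min(Map.Entry.comparingByValue())
                 .ifPresent(e -> System.out.println("Least loaded verticle: " + e.getKey()));
        }
    }
}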

Related

ZeroMQ Request/Response pattern with Node.js

I'm implementing a distributed system for a project and am a bit confused as to how I should properly implement the Req/Res pattern. Basically I have a few endpoints that will send requests to a client to process tasks and respond.
So basically:
Incoming request is received
The endpoint opens a req and res socket type with the broker
Broker receives the request, proxies it to an available worker
Worker responds, and the endpoint receives the processed value and reports it back to the caller.
I've found a decent load balance broker script here: http://zguide.zeromq.org/js:lbbroker. There's also an async client/server pattern I'm interested in implementing: http://zguide.zeromq.org/js:asyncsrv which I might adapt into a load balanced implementation.
My question is perhaps a bit simplistic, but would each endpoint open a new socket on EVERY request or maintain an open socket for every request? That means there would be n connections for every request made to the endpoint.
You'd keep the sockets open, there's no need to close them after each request. And there'd be a single socket on every endpoint (client and server). At the server end you read a request from the socket and write your response back to the socket; zmq takes care of ensuring that the response goes back to the right client.
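As a rough illustration of the single long-lived socket idea (shown with the JeroMQ Java bindings rather than the Node zmq module, purely as a sketch; the address and message text are made up, and the Node version has the same shape):

import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class ReqClient {
    public static void main(String[] args) {
        try (ZContext ctx = new ZContext()) {
            // One REQ socket, opened once and reused for every request.
            ZMQ.Socket req = ctx.createSocket(SocketType.REQ);
            req.connect("tcp://localhost:5559"); // broker/server frontend (assumed address)
            for (int i = 0; i < 10; i++) {
                req.send("request " + i);
                String reply = req.recvStr();    // the matching reply comes back on the same socket
                System.out.println("got: " + reply);
            }
        }
    }
}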

The theory of websockets with API

I have an API running on a server, which handles user connections and a messaging system.
Beside that, I launched a websocket on that same server, waiting for connections and stuff.
And let's say we can get access to this by an Android app.
I'm having trouble figuring out what I should do now; here are my thoughts:
1 - When a user connects to the app, the API connects to the websocket. We allow the Android app only to listen on this socket to get new messages. When the user wants to answer, the Android app sends a message to the API. The API itself writes the received message to the socket, which will be read back by the Android app used by another user.
This way, the API can store the message in the database before writing it to the socket.
2 - The API does not connect to the websocket in any way. The Android app listens and writes to the websocket when needed, and should, when writing to the websocket, also send a request to the API so it can store the message in the DB.
Maybe none of the above is correct, please let me know.
EDIT
I already understand why I should use a websocket; it seems like the best way to have this "real time" system (when getting a new message for example) instead of forcing the client to make an HTTP request every x seconds to check if there are new messages.
What I still don't understand is how it is supposed to communicate with my database. Sorry if my example is not clear, but I'll try to keep going with it:
My messaging system needs to store all messages in my API database, to have some kind of history of the conversation.
But it seems like a websocket must be running separately from the API, I mean it's another program, right? Because it's not for HTTP requests.
So should the API also listen to this websocket to catch new messages and store them?
You really have not described what the requirements are for your application so it's hard for us to directly advise what your app should do. You really shouldn't start out your analysis by saying that you have a webSocket and you're trying to figure out what to do with it. Instead, lay out the requirements of your app and figure out what technology will best meet those requirements.
Since your requirements are not clear, I'll talk about what a webSocket is best used for and what more traditional http requests are best used for.
Here are some characteristics of a webSocket:
It's designed to be continuously connected over some longer duration of time (much longer than the duration of one exchange between client and server).
The connection is typically made from a client to a server.
Once the connection is established, then data can be sent in either direction from client to server or from server to client at any time. This is a huge difference from a typical http request where data can only be requested by the client - with an http request the server can not initiate the sending of data to the client.
A webSocket is not a request/response architecture by default. In fact to make it work like request/response requires building a layer on top of the webSocket protocol so you can tell which response goes with which request. http is natively request/response.
Because a webSocket is designed to be continuously connected (or at least connected for some duration of time), it works very well (and with lower overhead) for situations where there is frequent communication between the two endpoints. The connection is already established and data can just be sent without any connection establishment overhead. In addition, the overhead per message is typically smaller with a webSocket than with http.
So, here are a couple typical reasons why you might choose one over the other.
If you need to be able to send data from server to client without having the client regularly poll for new data, then a webSocket is very well designed for that and http cannot do that.
If you are frequently sending lots of small bits of data (for example, a temperature probe sending the current temperature every 10 seconds), then a webSocket will incur less network and server overhead than initiating a new http request for every new piece of data.
If you don't have either of the above situations, then you may not have any real need for a webSocket and an http request/response model may just be simpler.
If you really need request/response where a specific response is tied to a specific request, then that is built into http and is not a built-in feature of webSockets.
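To illustrate the point about layering request/response on top of a webSocket, here is a hedged sketch of such a layer (all names are made up, and the id|payload framing is just one possible convention):

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Each outgoing message carries a correlation id; the reply must echo it back.
public abstract class RequestResponseOverWebSocket {

    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Implement with whatever webSocket client you use (hypothetical hook).
    protected abstract void sendRaw(String frame);

    // Sends "id|payload" and returns a future completed when the matching reply arrives.
    public CompletableFuture<String> request(String payload) {
        String id = UUID.randomUUID().toString();
        CompletableFuture<String> reply = new CompletableFuture<>();
        pending.put(id, reply);
        sendRaw(id + "|" + payload);
        return reply;
    }

    // Call this from the webSocket's message handler; replies use the "id|payload" form too.
    public void onMessage(String frame) {
        int sep = frame.indexOf('|');
        if (sep < 0) return;                  // not a reply to one of our requests
        CompletableFuture<String> reply = pending.remove(frame.substring(0, sep));
        if (reply != null) reply.complete(frame.substring(sep + 1));
    }
}

This is exactly the kind of bookkeeping http gives you for free, which is why plain request/response is often simpler over http.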
You may also find these other posts useful:
What are the pitfalls of using Websockets in place of RESTful HTTP?
What's the difference between WebSocket and plain socket communication?
Push notification | is websocket mandatory?
How does WebSockets server architecture work?
Response to Your Edit
But it seems like a websocket must be running separately from the API, I mean it's another program right? Because it's not for HTTP requests
The same process that supports your API can also be serving the webSocket connections. Thus, when you get incoming data on the webSocket, you can just write it directly to the database the same way the API would access the database. So, NO the webSocket server does not have to be a separate program or process.
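For instance, using Vert.x (the framework from the main question above) purely as an illustration: one process can serve both the HTTP API and the webSocket endpoint. This is only a sketch, and saveMessage() is a hypothetical stand-in for your real database code.

import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;

public class ApiAndWebSocketServer extends AbstractVerticle {

    @Override
    public void start() {
        vertx.createHttpServer()
            .requestHandler(req -> {
                // regular HTTP API route(s) would go here
                req.response().end("api response");
            })
            .websocketHandler(ws -> ws.handler(buffer -> {
                saveMessage(buffer.toString());   // same DB access code the API uses
                ws.writeFinalTextFrame("stored"); // optional ack back to the app
            }))
            .listen(8080);
    }

    private void saveMessage(String msg) {
        // hypothetical: insert the message into your database here
    }

    public static void main(String[] args) {
        Vertx.vertx().deployVerticle(new ApiAndWebSocketServer());
    }
}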
So should the API also listen to this websocket to catch new messages and store them?
No, I don't think so. Only one process can be listening to a set of incoming webSocket connections.

Using ZeroMQ to send replies to specific clients and queue if client disconnects

I'm new to ZeroMQ and trying to figure out a design issue. My scenario is that I have one or more clients sending requests to a single server. The server will process the requests, do some stuff, and send a reply to the client. There are two conditions:
The replies must go to the clients that sent the request.
If the client disconnects, the server should queue messages for a period of time so that if the client reconnects, it can receive the messages it missed.
I am having a difficult time figuring out the simplest way to implement this.
Things I've tried:
PUB/SUB - I could tag replies with topics to ensure only the subscribers that sent their request (with their topic as their identifier) would receive the correct reply. This takes care of the routing issue, but since the publisher is unaware of the subscribers, it knows nothing about clients that disconnect.
PUSH/PULL - Seems to be able to handle the message queuing issue, but looks like it won't support my plan of having messages sent to specific clients (based on their ID, for example).
ROUTER/DEALER - Design seemed like the solution to both, but all of the examples seem pretty complex.
My thinking right now is continuing with PUB/SUB, try to implement some sort of heartbeat on the client end (allowing the server to detect the client's presence), and when the client no longer sends a heartbeat, it will stop sending messages tagged with its topic. But that seems sub-optimal and would also involve another socket.
Are there any ideas or suggestions on any other ways I might go about implementing this? Any info would be greatly appreciated. I'm working in Python but any language is fine.
To prepare the best proposition for your solution, more data about your application requirements would be needed. I have done a little research on your conditions and connected it with my experience with ZMQ; here I present two possibilities:
1) PUSH/PULL pattern in two directions: bigger impact on scalability, but messages from the server will be cached.
The server has one PULL socket to register each client and receive all messages from clients. Each message should include the client ID so the server knows where to send the response.
For each client, the server creates a PUSH socket to send responses. The socket configuration is sent in the register message. You can also use the REQ/REP pattern to register clients (assign a socket number).
Each client has its own PULL socket, whose configuration was sent to the server in the register message.
It means that a server with three clients requires (example port numbers in []):
server: 1 x PULL[5555] socket, 3 x PUSH[5560,5561,5562] sockets (+ optional 1 X REQ[5556] socket for registrations, but I think it depends how you prepare client identity)
client: 1 x PUSH[5555] socket, 1 x PULL[5560|5561|5562] (one per client) (+ optional 1 X REP[5556])
You have to connect the server to multiple client sockets to send responses, but if a client disconnects, messages will not be lost; the client will get its own messages when it reconnects to its PULL socket. The disadvantage is the need to create several PUSH sockets on the server side (one per client).
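A rough server-side sketch of option 1 (JeroMQ assumed; the clientId|port|payload framing and the addresses are made up):

import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;
import java.util.HashMap;
import java.util.Map;

public class PushPullServer {
    public static void main(String[] args) {
        try (ZContext ctx = new ZContext()) {
            // One PULL socket receives registrations and requests from all clients.
            ZMQ.Socket incoming = ctx.createSocket(SocketType.PULL);
            incoming.bind("tcp://*:5555");

            // One PUSH socket per client, created lazily and kept around.
            Map<String, ZMQ.Socket> perClientPush = new HashMap<>();

            while (!Thread.currentThread().isInterrupted()) {
                String[] parts = incoming.recvStr().split("\\|", 3);
                if (parts.length < 3) continue;
                String clientId = parts[0], clientPort = parts[1], payload = parts[2];

                ZMQ.Socket out = perClientPush.computeIfAbsent(clientId, id -> {
                    ZMQ.Socket s = ctx.createSocket(SocketType.PUSH);
                    s.connect("tcp://localhost:" + clientPort); // the client's own PULL socket
                    return s;
                });
                // If the client is currently disconnected, the PUSH socket queues the
                // message (up to the high-water mark) and delivers it on reconnect.
                out.send("response to: " + payload);
            }
        }
    }
}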
2) PUB/SUB + PUSH/PULL or REQ/REP: static socket configuration on the server side (only 2), but the server has to prepare some mechanism to retransmit or cache messages.
The server creates a PUB socket and a PULL or REP socket. A client registers its identity via that PULL or REP socket, and the server publishes all messages for that client with this identity as the filter. The server uses the monitor() function on the PUB socket to count connected and disconnected clients (events: 'accept' and 'disconnect'). After a 'disconnect' event the server publishes a message telling all clients to register again; for clients which do not re-register, the server stops publishing messages.
The client creates a SUB socket, plus a PUSH or REQ socket to register and send requests.
This solution probably requires some cache on the server side. The client could confirm each message after getting it from the SUB socket. It is more complicated and has to be matched to your requirements. If you just want to know that a client lost a message, the client could send the timestamp of the last message received from the server during registration. If you need a guarantee that clients get all messages, you need some cache implementation, maybe another process which subscribes to all messages and deletes each one confirmed by a client.
In this solution, a server with three clients requires (example port numbers in []):
server: 1 x PUB[5555] socket, 1 x REP or PULL[5560] socket + monitoring PUB socket
client: 1 x SUB[5555] socket and own identity for filter, 1 x REQ or PUSH[5560] socket
About monitoring you could read here: https://github.com/JustinTulloss/zeromq.node#monitoring (NodeJS implementation, but Python will be similar)
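And a compressed sketch of option 2's publish-by-identity idea (JeroMQ assumed; the registration step, the monitor() bookkeeping and the cache described above are left out, and both sides are squeezed into one process only for the demo):

import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class PubByIdentity {
    public static void main(String[] args) throws InterruptedException {
        try (ZContext ctx = new ZContext()) {
            ZMQ.Socket pub = ctx.createSocket(SocketType.PUB);
            pub.bind("tcp://*:5555");

            ZMQ.Socket sub = ctx.createSocket(SocketType.SUB);
            sub.connect("tcp://localhost:5555");
            sub.subscribe("client-42".getBytes()); // the client filters on its own identity

            Thread.sleep(500); // give the SUB socket time to connect (demo only)

            // Server side: first frame is the identity used as the filter, second is the payload.
            pub.send("client-42", ZMQ.SNDMORE);
            pub.send("hello client-42", 0);

            System.out.println(sub.recvStr()); // "client-42"
            System.out.println(sub.recvStr()); // "hello client-42"
        }
    }
}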
I thought about other patterns, but I am not sure that ROUTER/DEALER or REQ/REP will cover your requirements. You should read more about the patterns, because each of them is better for some solutions. Look here:
official ZMQ guide (a lot of examples and pictures)
easy ROUTER/DEALER example: http://blog.scottlogic.com/2015/03/20/ZeroMQ-Quick-Intro.html

stomp message acknowledgement from client

I am using the spring/stomp/websocket framework to notify users of messages asynchronously. I have done this successfully. However, I would like to get an ACK from the client so that some server-side action can take place when this is done.
The flow is roughly as follows:
Service notifies a specific user about a decision and updates a record in the DB with status = "notified"
Client receives the message (using stompClient.subscribe(...))
Client acknowledges that the message was received.
The service "knows" that this message was acknowledged and updates the status to "ACK" in the DB.
stompClient.connect({login: 'guest', passcode: 'guest'},
    function(frame) {
        setConnected(true);
        var headers = {ack: 'client'};
        ...
        stompClient.subscribe('/user/guest/response', function(notification) {
            // doSomething
        }, headers);
    });
In the service, the message is sent:
this.messagingTemplate.convertAndSendToUser(user, "/response",msg, map);
Is there a way to handle the client ACK on the server side?
Alternatively, I tried to do a
stompClient.send("/app/response/ack/"+messageId);
on the client, in the method that handles the subscription, but in vain.
Can someone please tell me what is the standard way to handle acknowledgments? I have been struggling with this for a couple of days and any thoughts would be very helpful.
Thanks!
Use the ACK frame as per spec. The server sends an ack:some_id header, the client uses that some_id in the ACK frame.
The answer is no for the simple broker.
https://docs.spring.io/spring/docs/current/spring-framework-reference/html/websocket.html
The simple broker is great for getting started but supports only a subset of STOMP commands (e.g. no acks, receipts, etc.), relies on a simple message sending loop, and is not suitable for clustering. As an alternative, applications can upgrade to using a full-featured message broker.
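If you do need broker-handled ACKs and receipts, a hedged sketch of switching to a full-featured broker relay might look like this (Spring 5 style; host, port, credentials and the /ws endpoint path are placeholders):

import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        // Relay /queue and /topic destinations to a real STOMP broker
        // (e.g. RabbitMQ or ActiveMQ) instead of the simple broker.
        registry.enableStompBrokerRelay("/queue", "/topic")
                .setRelayHost("localhost")
                .setRelayPort(61613)
                .setClientLogin("guest")
                .setClientPasscode("guest");
        registry.setApplicationDestinationPrefixes("/app");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/ws"); // endpoint path is a placeholder
    }
}

Alternatively, the application-level acknowledgment the question attempts (stompClient.send("/app/response/ack/" + messageId)) can be received on the server by a controller method annotated with @MessageMapping("/response/ack/{messageId}") and used to update the status in the DB.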

ZMQ: Multiple request/reply-pairs

ZeroMQ's Pub/Sub pattern makes it easy for the server to reply to the right client. However, it is less obvious how to handle communication that cannot be resolved within two steps, i.e. protocols where multiple request/reply pairs are necessary.
For example, consider a case where the client is a worker which asks the server for new work of a specific type, the server replies with the parameters of the work, the client then sends the results and the server checks these and replies whether they were correct.
Obviously, I can't just use recv, send, recv, send sequentially and assume that the first and the second recv are from the same client. What would be the idiomatic way to use multiple recv/send pairs without having to handle messages from other clients in between?
Multiple Request/Reply pairs can be made through the use of ZMQ_ROUTER sockets. I recommend using ZMQ_REQ sockets on the clients for bidirectional communication.
If you want to have multiple clients accessing a single server you could use a router socket on the server and request sockets on the clients.
Check out the ZMQ guide's section on this pattern:
http://zguide.zeromq.org/php:chapter3#The-Asynchronous-Client-Server-Pattern
All the clients will interact with the server in the same pattern as Pub/Subs except they will all point at a single server Router socket.
The server, on the other hand, will receive three message parts for every single message a client sends. These parts represent:
Part0 = Identity of the connection (a random identifier telling which client it is)
Part1 = Empty frame
Part2 = Data of the ZMQ message.
Reference:
http://zguide.zeromq.org/php:chapter3#ROUTER-Broker-and-REQ-Workers
The identity can be used to differentiate between clients accessing on a single port. Repacking the message in the same order and responding on the router socket (with a different data frame) will automatically route it to the client who sent the message.
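A minimal ROUTER-side sketch of that repacking (JeroMQ assumed; port and reply text are made up):

import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class RouterServer {
    public static void main(String[] args) {
        try (ZContext ctx = new ZContext()) {
            ZMQ.Socket router = ctx.createSocket(SocketType.ROUTER);
            router.bind("tcp://*:5555");

            while (!Thread.currentThread().isInterrupted()) {
                byte[] identity = router.recv(0);    // Part0: which client this is
                byte[] empty = router.recv(0);       // Part1: empty delimiter frame
                String request = router.recvStr(0);  // Part2: the actual data

                // Repack in the same order with a different data frame; the identity
                // frame makes the ROUTER deliver it to the client that sent the request.
                router.send(identity, ZMQ.SNDMORE);
                router.send(empty, ZMQ.SNDMORE);
                router.send("checked: " + request, 0);
            }
        }
    }
}

Because every reply carries the originating identity frame, the server can interleave exchanges with several clients and each reply still reaches the client that sent the corresponding request.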
