What I'm doing: I'm sending data from a Python server to the browser over a WebSocket connection. On the server I'm using the websockets library and asyncio.
I get the data from a pub/sub system as JSON strings and forward the messages to the clients with await websocket.send(message).
My problem: await websocket.send(message) is taking too long.
What I mean by too long: a 1.5 MB message is received from the pub/sub system at a frequency of 16 Hz (~ every 63 ms), and await websocket.send(message) takes up to 90 ms to forward the message to a single client.
So my queue fills up and the messages arriving in the browser are already stale.
Does anyone have an idea what can be done to "speed up" the sending process? I would prefer not to drop any messages...
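A simplified sketch of the setup (the queue wiring, handler name and port here are illustrative, not the actual code):

import asyncio
import websockets

queue = asyncio.Queue()

def on_pubsub_message(message: str):
    # Called by the pub/sub client roughly 16 times per second
    # with a ~1.5 MB JSON string.
    queue.put_nowait(message)

async def handler(websocket, path=None):  # path is optional in newer websockets releases
    while True:
        message = await queue.get()
        # With 1.5 MB frames this single call can take up to ~90 ms,
        # more than the ~63 ms between incoming messages, so the queue grows.
        await websocket.send(message)

async def main():
    async with websockets.serve(handler, "localhost", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())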
Thanks in advance!
I'm trying to get a "communication line" between a server app that uses MQTT for messaging and a web page where I want to see the messages in real time and send back messages to the server-side app.
I use mosquitto, Bottle and gevent on the server and I want to keep it as simple as possible. Using gevent I managed to receive the MQTT messages in a greenlet, put them in a queue and send the messages to the webpage in the websocket procedure which looks like this:
while True:
    mqt = queue.get(True)
    ws.send(mqt)
I can also send messages from the web page back to the server and MQTT like this (also through a queue):
while True:
    msg = ws.receive()
    queue2.put(msg)
However, I want these two loops to run at the same time on the same websocket. Is there any way to combine them? For example, does receive() have a timeout? I guess I could use two separate websockets, but that would be a waste if I can do it with only one.
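For example, I imagine something like moving the send loop into its own greenlet so both loops share the one websocket (untested sketch; ws is the geventwebsocket-style handler argument and the queues are the same as above):

import gevent
from gevent.queue import Queue

queue = Queue()    # MQTT -> browser, filled by the MQTT greenlet
queue2 = Queue()   # browser -> MQTT

def websocket_handler(ws):
    def sender():
        # forward queued MQTT messages to the page
        while True:
            ws.send(queue.get(True))

    g = gevent.spawn(sender)       # the send loop runs concurrently...
    try:
        while True:                # ...while this loop keeps receiving
            msg = ws.receive()
            if msg is None:        # client disconnected
                break
            queue2.put(msg)
    finally:
        g.kill()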
Why not just have messages delivered directly to the page using MQTT over WebSockets? There are a number of brokers that support WebSockets, and the Paho JavaScript client allows both subscribing and publishing of messages.
I am using the Spring/STOMP/WebSocket framework to notify users of messages asynchronously. I have done this successfully. However, I would like to get an ACK from the client so that some server-side action can take place when this is done.
The flow is roughly as follows:
Service notifies a specific user about a decision and updates a record in the DB with status = "notified"
Client receives the message (using stompClient.subscribe(...))
Client acknowledges that the message was received.
The service "knows" that this message was acknowledged and updates the status to "ACK" in the DB.
stompClient.connect({login: 'guest', passcode: 'guest'},
    function(frame) {
        setConnected(true);
        var headers = {ack: 'client'};
        ...
        stompClient.subscribe('/user/guest/response', function(notification) {
            //doSomething
        }, headers);
    });
In the service, the message is sent:
this.messagingTemplate.convertAndSendToUser(user, "/response", msg, map);
Is there a way to handle the client ACK on the server side?
Alternatively, I tried to do a
stompClient.send("/app/response/ack/"+messageId);
on the client, in the method that handles the subscription, but in vain.
Can someone please tell me what is the standard way to handle acknowledgments? I have been struggling with this for a couple of days and any thoughts would be very helpful.
Thanks!
Use the ACK frame as per the spec: the server sends an ack:some_id header in the MESSAGE frame, and the client sends that some_id back as the id header of its ACK frame.
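Roughly, with STOMP 1.2 the exchange looks like this (ids, destination and body are placeholders; ^@ is the NUL frame terminator):

MESSAGE
subscription:sub-0
message-id:12345
ack:some_id
destination:/user/guest/response

{...}^@

ACK
id:some_id

^@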
The answer is no for the simple broker.
https://docs.spring.io/spring/docs/current/spring-framework-reference/html/websocket.html
The simple broker is great for getting started but supports only a
subset of STOMP commands (e.g. no acks, receipts, etc.), relies on a
simple message sending loop, and is not suitable for clustering. As an
alternative, applications can upgrade to using a full-featured message
broker.
I have a single publisher application (PUB) which has N subscribers (SUB).
These subscribers need to be able to catch up if they are restarted, or go down and miss messages.
We have implemented a simple event store that the publisher writes to.
We have implemented a CatchupService which can query the event store and send missed messages to the subscriber.
We have implemented in the subscriber a PUSH socket which sends a request for missed messages.
The subscriber also has a PULL socket which listens for missed messages on a separate port.
The subscriber will:
Detect a gap
Send a request for missed messages to our CatchupService; the request also contains the address to send the results to.
The catchup service has a PULL socket on which it listens for requests
When the CatchupService receives a request it starts a worker thread which:
Gets the missed messages
Opens a PUSH socket connecting to the subscribers PULL socket
Sends the missed messages to the subscriber.
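In pyzmq terms the exchange looks roughly like this (endpoints, message format and the event-store/handler functions are simplified placeholders, not our actual code):

import threading
import zmq

ctx = zmq.Context.instance()

# Subscriber side: request a replay, then pull the missed messages.
def request_catchup(last_seq):
    req = ctx.socket(zmq.PUSH)
    req.connect("tcp://catchup-host:6000")
    req.send_json({"from_seq": last_seq,
                   "reply_to": "tcp://subscriber-host:6001"})

    pull = ctx.socket(zmq.PULL)
    pull.bind("tcp://*:6001")
    while True:
        msg = pull.recv_json()
        if msg.get("end_of_replay"):
            break
        handle(msg)                                     # placeholder application handler

# CatchupService side: listen for requests, replay from a worker thread.
def catchup_service():
    requests = ctx.socket(zmq.PULL)
    requests.bind("tcp://*:6000")
    while True:
        threading.Thread(target=replay_worker,
                         args=(requests.recv_json(),)).start()

def replay_worker(req):
    push = ctx.socket(zmq.PUSH)
    push.connect(req["reply_to"])
    for msg in load_missed_messages(req["from_seq"]):   # placeholder event-store query
        push.send_json(msg)
    push.send_json({"end_of_replay": True})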
This seems to work quite well, however we are unsure whether we are using the right socket types for this sort of application. Are these correct, or should we be using a different pattern?
Sounds okay. Otherwise, 0MQ is able to recover from message loss when peers go offline for a short time. Take a look at the Socket Options, specifically the ZMQ_SNDHWM option.
I don't know just how guaranteed the 0MQ recovery mechanisms are so maybe you're best to stay with what you've got, but it is something to be aware of.
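For example, raising the send high-water mark on the PUB socket in pyzmq looks like this (the value and endpoint are illustrative):

import zmq

ctx = zmq.Context.instance()
pub = ctx.socket(zmq.PUB)
# Allow more messages to be queued per subscriber before ZeroMQ
# starts dropping them (the default ZMQ_SNDHWM is 1000).
pub.setsockopt(zmq.SNDHWM, 100000)
pub.bind("tcp://*:5556")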
I have been working with different patterns in ZeroMQ in my project, and right now I am using REQ/REP (later I will shift to DEALER/ROUTER) and PUB/SUB. The client sends messages to the server and the server publishes this information to other clients who have subscribed.
To use multiple sockets I followed the suggestions in this thread:
Combining pub/sub with req/rep in zeromq, and used zmq_poll. My server polls on the REQ socket and the PUB socket.
While writing the code, and while reading the above post, I guessed that my PUB socket would never get polled in, and that's what I am observing now when I run the program. Only my request is polled in and publishing is not happening at all.
If I don't use polling it works fine, i.e. as soon as the server gets the message I publish it.
So I am unclear on how polling will be useful in this pattern and how I can use it.
You probably don't need to poll the PUB socket. You certainly don't need to poll for input (POLLIN) on it, because that can never be triggered (PUB sockets are send-only).
The polling pattern might be useful in the case where you want to poll for "ready to send" on the REQ and the PUB socket, allowing you to multiplex those channels. This will be particularly useful if/when you move to using a DEALER/ROUTER.
The reason for that is that replacing REQ with a DEALER (for example) can allow you to send multiple messages before receiving responses. Polling for inbound and outbound messages will let you take maximum advantage of that.
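Concretely, a sketch of that in pyzmq, polling only the server's reply socket for input and publishing as soon as a request arrives (ports are placeholders; this assumes the server side holds the REP end):

import zmq

ctx = zmq.Context.instance()
rep = ctx.socket(zmq.REP)
rep.bind("tcp://*:5555")
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5556")

poller = zmq.Poller()
poller.register(rep, zmq.POLLIN)   # only the REP socket needs POLLIN

while True:
    events = dict(poller.poll())
    if rep in events:
        msg = rep.recv()
        pub.send(msg)              # publish to subscribers right away
        rep.send(b"ok")            # REP must reply before the next recv()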
I'm currently using ZMQ with Python. The server is using a REP socket.
Is there a way, when I recv() a message, to know who sent it? If I receive 2 messages, I just need to know whether they come from the same user or not, so a UID, for example, would be enough.
It looks like you want to implement async request handling on the server side: you let the server accept requests, process them asynchronously, and send the responses back to clients whenever the response data is available for each request. Now of course: how would you know, after you're done processing a request, which client to send it back to?
With simple REP sockets, ZMQ makes sure you won't run into this kind of problem by enforcing a recv() -> send(), recv() -> send() sequentiality. In other words, after you do a recv() on a REP socket, you must do a send() before recv()ing from it again. The response will be sent back to the client you got the message from, and there's no doubt about client's address because it's only one client at a time.
But this doesn't really help when you want to parallelize the request handling, does it? There are many cases when the behavior of REP is too restrictive, and that's exactly what Multipart messages and ROUTER (or XREP) sockets are for. XREP breaks the recv() -> send() lockstep of REP, but that causes a problem as we saw earlier - how do you know which client to send the reply back to, if multiple clients are connected? In order to make this work, XREP in ZMQ adds a message part to the front of a message, like an envelope, that includes the identity of the connection that it recv()'d the request from.
There's a whole chapter in the ZMQ Guide about the advanced Request-Reply patterns. You can also find an example for handling async requests here and a good short explanation of the ZMQ connection handling here.
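For illustration, a minimal ROUTER server in pyzmq that keeps the identity envelope so each reply goes back to the client it came from (the port is a placeholder):

import zmq

ctx = zmq.Context.instance()
router = ctx.socket(zmq.ROUTER)
router.bind("tcp://*:5555")

while True:
    # ROUTER prepends the peer's identity frame; REQ clients also add
    # an empty delimiter frame before the actual payload.
    identity, empty, request = router.recv_multipart()
    reply = b"got: " + request
    # Reuse the envelope so the reply is routed back to that client.
    router.send_multipart([identity, empty, reply])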
Reading http://zguide.zeromq.org/page%3aall#Transient-vs-Durable-Sockets, you can only get the identity of the socket you're working with... not the socket of any peers you're connected to.
This being said, just build the sender information into the message. This should be trivial to do (either with a UUID or specific name per client).
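A sketch of that approach with pyzmq, embedding a per-client UUID in every message (names and port are illustrative):

import uuid
import zmq

ctx = zmq.Context.instance()

def client():
    client_id = str(uuid.uuid4())       # generated once per client
    req = ctx.socket(zmq.REQ)
    req.connect("tcp://localhost:5555")
    # Embed the sender id in every request.
    req.send_json({"client_id": client_id, "payload": "hello"})
    print(req.recv_json())

def server():
    rep = ctx.socket(zmq.REP)
    rep.bind("tcp://*:5555")
    while True:
        msg = rep.recv_json()
        # msg["client_id"] tells you whether two requests came from the same user.
        rep.send_json({"ok": True})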