How to drop buffered websocket messages in aiohttp? - websocket

Handling certain messages from a websocket takes my app longer than the time before the next ones arrive. When the code returns from the handling function, I want to read only the newest message. How do I do that in aiohttp?

Related

Number of messages consumed from MQTT broker seems to be capped

I'm running a Go service that uses the Paho Go MQTT client to subscribe to a topic. The clients that produce the MQTT messages (also Paho, but on Android devices) log when they produce, and my service logs when it receives. The produce and receive graphs follow each other almost perfectly up to a pretty consistent cap just below 36,000 messages per day on the receiving side; past that, the Go service tops out at slightly below 600 messages per minute, around 10 messages per second.
Where should I look for the solution to this? I cannot find any client option that could explain this cap.
As per the comments, paho.mqtt.golang defaults to ordered delivery of messages (the MQTT spec provides some guarantees regarding message ordering, and calling handlers in separate goroutines could break them). The upshot is that messages are delivered one by one and, if your handler is not keeping up, a queue may form (at QoS 1+ the broker needs to retain messages because it may have to resend them).
Some brokers limit the number of messages queued for a client; for example, the max_queued_messages option in Mosquitto defaults to 1000 (the default was lower in Mosquitto 1.x) and, if the queue exceeds the limit, "messages will be silently dropped".
That appears to be what happened here: the application was not keeping up with incoming messages, so the broker began dropping them once the queue exceeded its limit.
In many cases the paho.mqtt.golang option ClientOptions.SetOrderMatters(false) will help; with this option set, the message handler is called in a separate goroutine (so the handler must be threadsafe). Alternatively, start a goroutine within the handler, but note that this approach results in the ACK being sent before the handler completes (which may result in message loss if your application terminates unexpectedly).
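A minimal sketch of that option in use (the broker address, topic, and client ID below are made up for illustration):

package main

import (
	"fmt"
	"time"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
	opts := mqtt.NewClientOptions().
		AddBroker("tcp://broker.example.com:1883"). // hypothetical broker
		SetClientID("slow-consumer").               // hypothetical client ID
		SetOrderMatters(false)                      // handlers run in their own goroutines

	client := mqtt.NewClient(opts)
	if token := client.Connect(); token.Wait() && token.Error() != nil {
		panic(token.Error())
	}

	// With SetOrderMatters(false) a slow handler no longer blocks delivery
	// of subsequent messages, but it must be threadsafe.
	client.Subscribe("sensors/#", 1, func(_ mqtt.Client, msg mqtt.Message) {
		time.Sleep(200 * time.Millisecond) // simulate slow processing
		fmt.Printf("handled message on %s\n", msg.Topic())
	})

	select {} // keep the service running
}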

Is WebSocket onMessage atomic?

I mean, will the onMessage method be triggered exactly once per atomic message from the server, or:
If messages are short, might two or more messages trigger onMessage only once, with one concatenated message, so that we have to split the message manually?
If a message is long, might onMessage be triggered multiple times, each with a partial message, so that we have to concatenate the parts manually?
I'm using C# and JavaScript on the client side (two products) and Go on the server side.
All the messages are technically events; you can only read the data from them. You can also control the buffer size for both write and read operations, but a message that reaches those limits still wouldn't be split: WebSocket is a message-oriented protocol, fragmentation is handled below the API, and each complete message triggers exactly one event.
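To illustrate on the Go server side, here is a minimal read loop; it assumes github.com/gorilla/websocket, which the question doesn't name, so treat the library choice as an assumption:

package main

import (
	"log"
	"net/http"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{}

func handler(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		return
	}
	defer conn.Close()
	for {
		// ReadMessage reassembles fragmented frames internally, so each
		// call yields exactly one complete message as the peer sent it:
		// messages are neither concatenated nor delivered in pieces.
		msgType, data, err := conn.ReadMessage()
		if err != nil {
			return
		}
		log.Printf("one complete message: type=%d len=%d", msgType, len(data))
	}
}

func main() {
	http.HandleFunc("/ws", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}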

omnetpp: Avoid "sending while transmitting" error using sendDelayed()

I am implementing a PON in OMNeT++ and I am trying to avoid the runtime error that occurs when transmitting while another transmission is ongoing. The only way to avoid this is by using sendDelayed() (or scheduleAt() + send(), but I don't prefer that way).
Even though I have used sendDelayed() I am still getting this runtime error. My question is: when exactly does the kernel check whether the channel is free if I'm using sendDelayed(msg, startTime, out)? Does it check at simTime() + startTime or at simTime()?
I read the Simulation Manual but it is not clear about the case I'm asking about.
The busy state of the channel is checked only when you schedule the message (i.e. at simTime(), as you asked). At this point it is checked whether the message is scheduled to be delivered at a time after channel->getTransmissionFinishTime(), i.e. you can query when the currently ongoing transmission will finish, and you must schedule the message for that time or later. But be aware that this check only catches the most common errors. If you schedule, for example, TWO messages for the same time using sendDelayed(), the kernel will check only that each starts after the currently transmitted message has finished; it will NOT detect that you have scheduled two or more messages for the same time after that point.
Generally, when you transmit over a channel that has a non-zero datarate (i.e. it takes time to transmit the message), you always have to consider what happens when messages arrive faster than the channel can carry them. In that case you should either throw the message away or queue it. If you queue it, you put it into a data structure (a queue) and schedule a self-timer for the time when the channel becomes free (and the current message is delivered at the other side). When that timer fires, you take the next packet from the queue, put it on the channel, and schedule the next self-timer for the time when this message is delivered.
For this reason, using just sendDelayed() is NOT the correct solution: you are implicitly implementing a queue by postponing the message. The problem is that once you schedule a message with sendDelayed(), what delay will you use if another packet arrives, and then another, within a short timeframe? You are implicitly creating a queue by postponing events, using the simulation's main event queue to store the packets, which is much more convoluted and error-prone.
Long story short: create a queue and schedule self-events to manage its contents properly, or drop the packets if that suits your needs, as in the sketch below.
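Since an OMNeT++ module is C++, treat the following as a language-agnostic sketch of the queue-plus-timer pattern (written in Go only to match the other examples here): in a real module the queue would be a cQueue, the busy check would use channel->getTransmissionFinishTime(), and the finish event would be a self-message scheduled with scheduleAt().

package main

import "fmt"

type node struct {
	queue       []string // explicit FIFO for packets waiting for the channel
	channelBusy bool     // stands in for getTransmissionFinishTime() > simTime()
}

// onPacketArrived: a packet arrives while the channel may be busy.
func (n *node) onPacketArrived(pkt string) {
	if n.channelBusy {
		n.queue = append(n.queue, pkt) // queue it (or drop it, if that suits you)
		return
	}
	n.startTransmission(pkt)
}

// onTransmissionFinished plays the role of the self-timer that fires
// when the channel becomes free again.
func (n *node) onTransmissionFinished() {
	n.channelBusy = false
	if len(n.queue) > 0 {
		next := n.queue[0]
		n.queue = n.queue[1:]
		n.startTransmission(next)
	}
}

func (n *node) startTransmission(pkt string) {
	n.channelBusy = true
	fmt.Println("send", pkt, "and schedule a self-timer for its finish time")
}

func main() {
	n := &node{}
	n.onPacketArrived("p1")    // channel free: transmitted immediately
	n.onPacketArrived("p2")    // channel busy: queued, no runtime error
	n.onTransmissionFinished() // p2 dequeued and transmitted
	n.onTransmissionFinished()
}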

Async Request-Response Algorithm with response time limit

I am writing a Message Handler for an ebXML message-passing application. The messages follow the Request-Response pattern. The process is straightforward: the Sender sends a message, the Receiver receives it and sends back a response. So far so good.
On receipt of a message, the Receiver has a set Time To Respond (TTR) to the message. This could be anywhere from seconds to hours/days.
My question is this: how should the Sender deal with the TTR? This needs to be an async process, as the TTR could be quite long (several days). How can I count down the timer without tying up system resources for long periods? There could be large volumes of messages.
My initial idea is to have a "Waiting" Collection, to which the message Id is added, along with its TTR expiry time. I would then poll the collection on a regular basis. When the timer expires, the message Id would be moved to an "Expired" Collection and the message transaction would be terminated.
When the Sender receives a response, it can check the "Waiting" collection for its matching sent message, and confirm the response was received in time. The message would then be removed from the collection for the next stage of processing.
Does this sound like a robust solution? I am sure this is a solved problem, but there is precious little information about this type of algorithm. I plan to implement it in C#, but the implementation language is kind of irrelevant at this stage, I think.
Thanks for your input
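For concreteness, a minimal sketch of the "Waiting" collection scheme described above (in Go rather than C#, since the language is flexible; the tracker type, the sweep interval, and the channel of expired IDs are all made up for illustration):

package main

import (
	"fmt"
	"sync"
	"time"
)

// tracker holds sent message IDs with their TTR expiry. A background
// ticker sweeps for expired entries; an arriving response removes its entry.
type tracker struct {
	mu      sync.Mutex
	waiting map[string]time.Time // message ID -> TTR expiry
	expired chan string          // IDs whose TTR ran out
}

func newTracker(sweep time.Duration) *tracker {
	t := &tracker{
		waiting: make(map[string]time.Time),
		expired: make(chan string, 64),
	}
	go func() {
		for range time.Tick(sweep) { // the regular poll from the question
			now := time.Now()
			t.mu.Lock()
			for id, deadline := range t.waiting {
				if now.After(deadline) {
					delete(t.waiting, id)
					t.expired <- id // terminate this transaction
				}
			}
			t.mu.Unlock()
		}
	}()
	return t
}

// sent records an outgoing message and its TTR.
func (t *tracker) sent(id string, ttr time.Duration) {
	t.mu.Lock()
	t.waiting[id] = time.Now().Add(ttr)
	t.mu.Unlock()
}

// responded reports whether a response arrived within its TTR.
func (t *tracker) responded(id string) bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	if _, ok := t.waiting[id]; ok {
		delete(t.waiting, id)
		return true
	}
	return false // unknown ID, or already expired
}

func main() {
	tr := newTracker(100 * time.Millisecond)
	tr.sent("msg-1", 50*time.Millisecond)
	time.Sleep(200 * time.Millisecond)
	fmt.Println("in time?", tr.responded("msg-1")) // false: TTR elapsed
	fmt.Println("expired:", <-tr.expired)          // "msg-1"
}

If volumes get large, a heap ordered by expiry time (or per-message timers) would avoid scanning the whole collection on every sweep.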
Depending on the number of clients, you can use persistent JMS queues, one queue per client ID. The message will stay in the queue until the client connects and retrieves it.
I'm not understanding the purpose of the TTR. Is it more of a client-side measure, meaning that if the response cannot be returned within a certain time then just don't bother sending it? Or is it used on the server to schedule the work, doing what's required now and pushing the requests with later response times to be done later?
It's a broad question...

Do I have to listen for server output while sending data to an SMTP server?

I have problems implementing the SMTP protocol over the WinSock API. Here's what I do now: the client creates a non-blocking socket, connects to the server, and exchanges data with it.
It uses a "send string" primitive, which effectively calls send() in a loop until the whole string is transmitted, and a "receive string" primitive, which calls recv() in a loop until a string ending with CRLF is accumulated or a timeout occurs.
This dumb approach works, except on one specific server deployed at a customer's site. On this server the client sends EHLO, AUTH, MAIL FROM, RCPT, DATA, each time receiving a reasonable response. Then it sends the mail message body line by line (without trying to receive anything from the server), and after several hundred lines have been sent, send() starts returning WSAEWOULDBLOCK.
How do I handle this? Do I have to check for pending input on the socket after each line? Or what else should I do to predict and possibly prevent this situation?
From the documentation:
This error is returned from operations on nonblocking sockets that cannot be completed immediately, for example recv when no data is queued to be read from the socket. It is a nonfatal error, and the operation should be retried later.
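In other words: the send buffer is full, so keep the unsent remainder and retry later. While waiting, it is also worth checking for pending input, since an SMTP server that has stopped reading may be trying to send you an error response. A sketch of that pattern, using POSIX-style non-blocking sockets via Go's syscall package (on WinSock the same loop wraps send() and a WSAEWOULDBLOCK check; socket setup is omitted):

package main

import (
	"syscall"
	"time"
)

// sendAll writes the whole buffer on a non-blocking socket, treating
// EAGAIN (the POSIX spelling of WSAEWOULDBLOCK) as "retry later"
// rather than as a failure.
func sendAll(fd int, data []byte) error {
	for len(data) > 0 {
		n, err := syscall.Write(fd, data)
		switch err {
		case nil:
			data = data[n:] // partial writes are normal: keep the rest
		case syscall.EAGAIN:
			// Send buffer full: the peer is not reading fast enough.
			drainPending(fd)
			time.Sleep(10 * time.Millisecond) // crude; real code would select()/poll()
		default:
			return err
		}
	}
	return nil
}

// drainPending does a non-blocking read; EAGAIN just means nothing is
// pending. Any bytes read should go to your response parser.
func drainPending(fd int) {
	buf := make([]byte, 4096)
	for {
		n, err := syscall.Read(fd, buf)
		if err != nil || n == 0 {
			return
		}
	}
}

func main() {} // socket creation/connect/SetNonblock omitted from this sketch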
