Server-sent events in Go

I would like to do some one-way streaming of data and am experimenting with SSE vs. WebSockets.
Using SSE from a Go server, I'm finding it confusing how to notify the client when a session is finished (e.g. the server has finished sending events, the server suddenly goes offline, or the client loses connectivity).
One thing I need is to reliably detect these disconnect situations, without resorting to timeouts etc.
In my experiments so far, when I take the server offline the client gets EOF. But I'm having trouble figuring out how to signal from the server to the client that a connection is closed/finished, and then how to handle/read that on the client. Is EOF a reliable way to determine a closed/error/finished state?
Many of the SSE examples out there fail to show good client-side connection handling.
Would this be easier with WebSockets?
Any experiences or suggestions most appreciated.
Thanks

The SSE standard requires that the browser reconnect, automatically, after N seconds, if the connection is lost or if the server deliberately closes the socket. (N defaults to 5 in Firefox, 3 in Chrome and Safari, last time I checked.) So, if that is desirable, you don't need to do anything. (In WebSockets you would have to implement this kind of reconnect for yourself.)
If that kind of reconnect is not desirable, you should instead send a message back to the client, saying "the show is over, go away". E.g. if you are streaming financial data, you might send that on a Friday evening, when the markets shut. The client should then intercept this message and close the connection from its side. (The socket will then disappear, so the server side of the connection gets closed too.)
In JavaScript, and assuming you are using JSON to send data, that would look something like:
var es = new EventSource("/datasource");
es.addEventListener("message", function(e){
    var d = JSON.parse(e.data);
    if(d.shutdownRequest){
        // The server says the show is over: close from our side,
        // so the browser does not auto-reconnect.
        es.close();
        es = null;
        // Tell the user what just happened.
    }
    else{
        // Normal processing here.
    }
}, false);
UPDATE:
You can find out when the reconnects are happening by listening for the "error" event and then looking at e.target.readyState:
es.addEventListener("error", handleError, false);

function handleError(e){
    // readyState 0 is CONNECTING (i.e. reconnecting), 2 is CLOSED.
    if(e.target.readyState == 0) console.log("Reconnecting...");
    if(e.target.readyState == 2) console.log("Giving up.");
}
No other information is available and, more importantly, it cannot tell the difference between your server process deliberately closing the connection, your web server crashing, or your client's internet connection going down.
One other thing you can customize is the retry time, by having the server send a retry:NN message, where NN is in milliseconds. So if you don't want quick reconnections, but instead want at least 60 seconds between any reconnect attempts, have your server send retry: 60000.
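On the Go side, a minimal sketch of such a server might look like the following (the /datasource endpoint, the shutdownRequest field and the marketsClosed helper are assumptions for illustration, chosen to match the JavaScript above):

package main

import (
    "fmt"
    "net/http"
    "time"
)

func dataSource(w http.ResponseWriter, r *http.Request) {
    // SSE responses must use this content type and should not be cached.
    w.Header().Set("Content-Type", "text/event-stream")
    w.Header().Set("Cache-Control", "no-cache")

    flusher, ok := w.(http.Flusher)
    if !ok {
        http.Error(w, "streaming unsupported", http.StatusInternalServerError)
        return
    }

    // Ask the browser to wait 60 seconds between reconnect attempts.
    // Note: the retry field is in milliseconds.
    fmt.Fprint(w, "retry: 60000\n\n")
    flusher.Flush()

    ticker := time.NewTicker(time.Second)
    defer ticker.Stop()

    for {
        select {
        case <-r.Context().Done():
            // The client went away; stop writing.
            return
        case t := <-ticker.C:
            if marketsClosed(t) {
                // Tell the client the show is over; it should call es.close().
                fmt.Fprint(w, "data: {\"shutdownRequest\": true}\n\n")
                flusher.Flush()
                return
            }
            fmt.Fprintf(w, "data: {\"time\": %q}\n\n", t.Format(time.RFC3339))
            flusher.Flush()
        }
    }
}

// marketsClosed is a stand-in for whatever "the show is over" means in your app.
func marketsClosed(t time.Time) bool {
    return false
}

func main() {
    http.HandleFunc("/datasource", dataSource)
    http.ListenAndServe(":8080", nil)
}

Watching r.Context().Done() is how the handler notices that the client closed the connection (or that it got an EOF-style disconnect), which covers the other half of the question.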

Related

Golang server default timeout with (long-polling) server-sent event calls. The call is closed; how do I keep it open?

I'm using this amazing SSE Server in Golang (https://github.com/r3labs/sse) on Heroku.
There is a timeout limit there: https://devcenter.heroku.com/articles/request-timeout#long-polling-and-streaming-responses:
If you’re sending a streaming response, such as with server-sent events, you’ll need to detect when the client has hung up, and make sure your app server closes the connection promptly. If the server keeps the connection open for 55 seconds without sending any data, you’ll see a request timeout.
I know in the WebSocket world there is the concept of Keep-Alive ping.
I'm trying with this code:
go func() {
    for range time.Tick(time.Second * 1) {
        fmt.Println("a second")
        sseServer.Publish("test", &sse.Event{Data: []byte("ping")})
    }
}()
using a simple server like this:
httpServer := &http.Server{
    //...
    ReadTimeout:  10 * time.Second,
    WriteTimeout: 10 * time.Second,
}
but it doesn't work. The call is closed after 10 seconds (and 10 pings).
I think it will fail on Heroku too.
Where am I wrong?
Can we change the timeout for these SSE calls only?
Is there another way?
UPDATE
I don't need accurate detection of disconnected clients; I don't care about that at all. I'm using SSE to refresh a dashboard when something happens on the server.
I prefer not to use WebSocket for something so easy.
I think I have not explained myself well: I would like to use the ping not so much for detecting disconnected clients, but so that the connection does not get interrupted (as is happening on Heroku).
THE SITUATION RIGHT NOW
LOCALLY
If I remove the ReadTimeout field entirely, both on Windows and in a Docker Linux container, the connection does not stop and everything works fine.
ON HEROKU
Since on Heroku the connection drops every 55 seconds, because of the timeout I mentioned in the first post, I tried that loop with that very simple code and it works: the SSE calls are not closed anymore!
THE REMAINING ISSUE
How do I keep a default ReadTimeout for all the other (non-SSE) calls? I think it's best practice to set a default ReadTimeout.
How can I do that?
You do not need the ReadTimeout; the server will never read anything from the client after the initial EventSource/Server Sent Events (SSE) connection.
Thus, it is not a best practice to set a default read timeout with an SSE connection, because that read timeout will always get hit. You can't ever send more data back up through the initial SSE GET request.
You should think about SSE as basically a GET request that simply never closes, because that's almost literally what it is. That means that it works great through most proxy servers, and where it doesn't (where the proxy server applies its own timeouts), the client side will automatically reconnect, which is actually a very nice feature that is not found in websockets (although most websocket client libraries do implement it).
You might want to read through this article to learn some more of the great (and not-so-great) things about SSE: https://www.smashingmagazine.com/2018/02/sse-websockets-data-flow-http2/
With regard to your other question: you're probably looking for an HTTP routing library that lets you apply timeouts to some GET requests and not others. But the question is why; if you are trying to protect against a resource drain, you should apply that protection evenly across all endpoints.
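For example, a sketch using only the standard library (the route names and durations here are made up): wrap the non-streaming handlers in http.TimeoutHandler and rely on ReadHeaderTimeout instead of a server-wide ReadTimeout/WriteTimeout, which would kill the stream:

package main

import (
    "log"
    "net/http"
    "time"
)

func sseHandler(w http.ResponseWriter, r *http.Request) {
    // Stream events here; no per-request timeout applies to this handler.
}

func apiHandler() http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("ok"))
    })
}

func main() {
    mux := http.NewServeMux()

    // The SSE endpoint is left unwrapped, so it can stream indefinitely.
    mux.HandleFunc("/events", sseHandler)

    // Every other endpoint gets a per-request timeout instead of a
    // server-wide ReadTimeout/WriteTimeout.
    mux.Handle("/api/", http.TimeoutHandler(apiHandler(), 10*time.Second, "request timed out"))

    srv := &http.Server{
        Addr:    ":8080",
        Handler: mux,
        // ReadHeaderTimeout still protects against slow clients without
        // limiting how long a response body may take to stream.
        ReadHeaderTimeout: 5 * time.Second,
    }
    log.Fatal(srv.ListenAndServe())
}

Note that WriteTimeout would also cut a stream off after the configured duration, so leave it unset (zero) on a server that hosts SSE endpoints.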

Long-polling vs websocket when expecting one-time response from server-side

I have read many articles on real-time push notifications, and the summary is that WebSocket is generally the preferred technique as long as you are not concerned about 100% browser compatibility. And yet, one article states that
Long polling - potentially when you are exchanging single call with
server, and server is doing some work in background.
This is exactly my case. The user presses a button which initiates some complex calculation on the server side, and as soon as the answer is ready, the server sends a push notification to the client. The question is: can we say that for the case of one-time responses, long-polling is a better choice than websockets?
Or, unless we are concerned about supporting obsolete browsers, if I am starting the project from scratch, should websockets ALWAYS be preferred to long-polling when it comes to a push protocol?
The question is: can we say that for the case of one-time responses, long-polling is a better choice than websockets?
Not really. Long polling is inefficient (multiple incoming requests, multiple times your server has to check on the state of the long running job), particularly if the usual time period is long enough that you're going to have to poll many times.
If a given client page is only likely to do this operation once, then you can really go either way. There are some advantages and disadvantages to each mechanism.
At a response time of 5-10 minutes you cannot assume that a single http request will stay alive that long awaiting a response, even if you make sure the server side will stay open that long. Clients or intermediate network equipment (proxies, etc...) may just not keep the initial http connection open that long. That would have been the most efficient mechanism, if you could have counted on it. But I don't think you can count on that for a random network configuration and client configuration that you do not control.
So, that leaves you with several options which I think you already know, but I will describe here for completeness for others.
Option 1:
Establish websocket connection to the server by which you can receive push response.
Make http request to initiate the long running operation. Return response that the operation has been successfully initiated.
Receive websocket push response some time later.
Close webSocket (assuming this page won't be doing this again).
Option 2:
Make http request to initiate the long running operation. Return response that the operation has been successfully initiated and probably some sort of taskID that can be used for future querying.
Using http "long polling" to "wait" for the answer. Since these requests will likely "time out" before the response is received, you will have to regularly long poll until the response is received.
Option 3:
Establish webSocket connection.
Send message over webSocket connection to initiate the operation.
Receive response some time later that the operation is complete.
Close webSocket connection (assuming this page won't be using it any more).
Option 4:
Same as option 3, but using socket.io instead of plain webSocket to give you heartbeat and auto-reconnect logic to make sure the webSocket connection stays alive.
If you're looking at things purely from the networking and server efficiency point of view, then options 3 or 4 are likely to be the most efficient. You only have the overhead of one TCP connection between client and server and that one connection is used for all traffic and the traffic on that one connection is pretty efficient and supports actual push so the client gets notified as soon as possible.
From an architecture point of view, I'm not a fan of option 1 because it just seems a bit convoluted when you initiate the request using one technology and then send the response via another and it requires you to create a correlation between the client that initiated an incoming http request and a connected webSocket. That can be done, but it's extra bookkeeping on the server. Option 2 is simple architecturally, but inefficient (regularly polling the server) so it's not my favorite either.
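For what it's worth, a rough Go sketch of option 3 on the server side might look like this (using the github.com/gorilla/websocket package; the /calc endpoint and the runLongJob helper are assumptions for illustration):

package main

import (
    "net/http"
    "time"

    "github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{}

func calcHandler(w http.ResponseWriter, r *http.Request) {
    conn, err := upgrader.Upgrade(w, r, nil)
    if err != nil {
        return
    }
    defer conn.Close()

    // Wait for the client to ask for the long-running operation.
    _, msg, err := conn.ReadMessage()
    if err != nil {
        return
    }

    // Run the job and push the result over the same connection when done.
    result := runLongJob(msg)
    conn.WriteMessage(websocket.TextMessage, result)
}

// runLongJob stands in for the 5-10 minute calculation.
func runLongJob(request []byte) []byte {
    time.Sleep(2 * time.Second) // pretend to work
    return []byte(`{"status": "done"}`)
}

func main() {
    http.HandleFunc("/calc", calcHandler)
    http.ListenAndServe(":8080", nil)
}

For a genuinely 5-10 minute job you would combine this with the heartbeat logic from option 4, since idle intermediaries may still drop a silent connection.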
There is an alternative that doesn't require polling or keeping a socket connection open all the time.
It's called Web Push.
The Push API gives web applications the ability to receive messages pushed to them from a server, whether or not the web app is in the foreground, or even currently loaded, on a user agent. This lets developers deliver asynchronous notifications and updates to users that opt in, resulting in better engagement with timely new content.
Some caveats are:
You need to ask for notification permission
Your site needs to have a registered service worker running in the background
Having a service worker also means you need SSL / HTTPS

Socket.io data loss when Internet speed drops

I am using socket.io 1.4 and I want to know that what happens in this scenario:
The client emits like this:
socket.emit('test', data);
The client does 3 emits to the server, but suddenly the Internet speed drops and those emits may not reach the server.
After a while the Internet speed rises again, but what happens to the previously failed emits?
Will they be emitted again automatically?
How should I handle that?
Websockets use TCP, which is in general a reliable protocol. There is not exactly such a thing as "The internet speed dropped and I lost some messages." If some messages are lost they will be automatically retransmitted at the TCP level. If retransmission fails completely, the connection will be reset.
So what you really are asking is how socket.io handles this. And the answer is that it has some amount of reconnecting logic, and you may also want to monitor the connection in case it resets (hook up a listener for the disconnect event on the socket), if you want to take some extra action (like notify the user).

Websockets and uwsgi - detect broken connections client side?

I'm using uwsgi's websockets support and so far it's looking great: the server detects when the client disconnects, and the client likewise when the server goes down. But I'm concerned this will not work in every case/browser.
In other frameworks, namely sockjs, the connection is monitored by sending regular messages that work as heartbeats/pings. But uwsgi sends PING/PONG frames (i.e. control frames, not regular messages) according to the websockets spec, and so from the client side I have no way to know when the last ping was received from the server. So my question is this:
If the connection is dropped or blocked by some proxy, will browsers (i.e. Chrome, IE, Firefox, Opera) reliably detect that no PING was received from the server and signal the connection as down, or should I implement some additional ping/pong system so that the connection is detected as closed from the client side?
Thanks
You are totally right. There is no way from the client side to track or send ping/pongs. So if the connection drops, the server is able to detect this condition through the ping/pong, but the client is left hanging... until it tries to send something and the underlying TCP mechanism detects that the other side is not ACKnowledging its packets.
Therefore, if the client application expects to be "listening" most of the time, it may be convenient to implement a keep-alive system that works "both ways", as Stephen Cleary explains in the link you posted. But this keep-alive system would be part of your application layer, rather than part of the transport layer like ping/pongs.
For example, you could have a message like {token:'whatever'} that the server and client simply echo back with a 5-second delay. The client should have a timer with a 10-second timeout that is reset every time the message is received and restarted every time the message is echoed; if the timer fires, the connection can be considered dropped.
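A sketch of that client-side timer in Go (using the github.com/gorilla/websocket package; the URL, the token message and the 5/10 second intervals are taken from the description above, and are otherwise arbitrary):

package main

import (
    "log"
    "time"

    "github.com/gorilla/websocket"
)

func main() {
    conn, _, err := websocket.DefaultDialer.Dial("ws://example.com/ws", nil)
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    // Send the heartbeat token every 5 seconds.
    go func() {
        ticker := time.NewTicker(5 * time.Second)
        defer ticker.Stop()
        for range ticker.C {
            if err := conn.WriteMessage(websocket.TextMessage, []byte(`{"token":"whatever"}`)); err != nil {
                return
            }
        }
    }()

    for {
        // If no echo (or any other message) arrives within 10 seconds,
        // treat the connection as dropped.
        conn.SetReadDeadline(time.Now().Add(10 * time.Second))
        _, msg, err := conn.ReadMessage()
        if err != nil {
            log.Println("connection considered dropped:", err)
            return
        }
        log.Printf("received: %s", msg)
    }
}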
Although browsers that implement the same RFC as uWSGI should reliably detect when the server closes the connection cleanly, they won't detect when the connection is interrupted midway (half-open connections). So from what I understand, we should employ an extra mechanism like application-level pings.

WinSock best accept() practices

Imagine you have a server which can handle only one client at a time. The server uses WSAAsyncSelect to be notified of new connections. In this case, what is the best way of handling FD_ACCEPT messages:
A > Accept the connection attempt right away but queue the client until its turn?
B > Do not accept the next connection attempt until we are done serving the currently connected client?
What do you guys think is the most efficient?
Here I describe the cons I'm aware of for both options. Hopefully this helps you decide.
A)
Upon a new client connection, it could send tons of data, making your receive buffer fill up, which causes unnecessary packets to be transmitted (see this). If you don't plan to receive any data from the client, shut down receiving on that socket; then, if the client sends any data after that, the connection is reset. Moreover, if your protocol has strict rules, disconnect the client.
If the connection stays idle for too long, the system might disconnect it. To solve this, use setsockopt to set SO_KEEPALIVE on each client socket.
B)
If you don't accept the connection within a certain period (I believe the default is 60 seconds), it will time out. In a normal (or most common) situation this indicates the server is overloaded and thus unable to answer in time. However, if the client is also designed by you, you can make the socket non-blocking, try to connect, and then manage the timeout as you wish.
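The question is WinSock-specific, but for comparison, here is a sketch of those two suggestions (keepalive on accepted sockets for A, a connect timeout for B) in Go, which the rest of this thread uses; the port and durations are arbitrary:

package main

import (
    "log"
    "net"
    "time"
)

// acceptWithKeepAlive shows the server-side half: enable TCP keepalive
// (the setsockopt(SO_KEEPALIVE) equivalent) on each accepted socket.
func acceptWithKeepAlive(ln *net.TCPListener) (*net.TCPConn, error) {
    conn, err := ln.AcceptTCP()
    if err != nil {
        return nil, err
    }
    conn.SetKeepAlive(true)
    conn.SetKeepAlivePeriod(30 * time.Second)
    return conn, nil
}

// dialWithTimeout shows the client-side half: connect with an explicit
// timeout instead of a non-blocking socket plus manual timeout handling.
func dialWithTimeout(addr string) (net.Conn, error) {
    return net.DialTimeout("tcp", addr, 5*time.Second)
}

func main() {
    ln, err := net.ListenTCP("tcp", &net.TCPAddr{Port: 9000})
    if err != nil {
        log.Fatal(err)
    }
    go func() {
        conn, err := acceptWithKeepAlive(ln)
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        // Normally you would serve the client here.
    }()

    conn, err := dialWithTimeout("localhost:9000")
    if err != nil {
        log.Fatal(err)
    }
    conn.Close()
}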
Ask yourself: what do you want the user experience to be at the other end? Do you want them to be stuck? Do you want them to time out? Do you want them to get a polite message?
