socket.io - Emit an event every X seconds, or just emit it after a POST event? - websocket

I'm using socket.io, and I was wondering which approach is better.
Emitting an event every X seconds to always stay in sync with the database, or emitting the event only after e.g. a POST request, so it's more efficient.
I believe updating every X seconds would be easier, and maybe scales better, but I don't know if that's the correct way.
EDIT-1: To give more context: the application is for an accounting team. They basically want their Excel sheets converted to an app. They have a lot of data, so I don't know if emitting an event every X seconds is a good idea.
Thanks.

There is no "correct" way. It depends entirely upon the needs of your client and the capabilities of your server. If the client needs to be kept more instantly up-to-date, then send data from your server to the client whenever the server has new data. If the client only needs to be updated every once-in-a-while, then only send it data every once-in-a-while. There is no "correct" way. It depends upon your application.
It is always more efficient to only send data to the client when the data has actually changed and when the client actually cares that something has changed. So, it would be foolish to send a client update every few seconds if the data isn't actually changing that often. If you have a means of knowing when the data changes on the server, then use that event to know when to send data to the client and even then, don't send it more often than the client actually cares to know.
It is always more efficient to have the server do no more work than is actually required by the client. Things like caching and keeping track of what each client was last sent can sometimes save lots of work for the server too.
Any further advice on this matter would need to know a lot more about the needs of your application and how this particular data fits into that and how often the data in question actually changes.
A summary on this topic:
- Send data to the client no more often than it needs it.
- Sending data that has not changed since the last time you sent it wastes server work and bandwidth.
- Only you can decide how often your client needs updates (it depends upon your application).
- Only you can test the impact on scalability of sending data to every client every time the data changes.
- Server-side caching and keeping track of which client already has which data can help you avoid sending a client data it already has.
- Server-side scalability probably has a lot to do with how many simultaneous clients are connected and how frequently there is changed data to send them.
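To make the "emit on change" idea concrete, here is a minimal sketch assuming an Express + socket.io server; the /records route, the payload shape and the event name are all hypothetical:

import express from "express";
import { createServer } from "http";
import { Server } from "socket.io";

const app = express();
app.use(express.json());
const httpServer = createServer(app);
const io = new Server(httpServer);

app.post("/records", (req, res) => {
  // ...persist req.body to the database here (omitted)...
  // Notify connected clients only because something actually changed:
  io.emit("records:updated", req.body);
  res.status(201).end();
});

httpServer.listen(3000);

No timer is involved: clients hear about a change exactly once, at the moment it happens.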

Related

Performance improvement idea about saving a large number of objects into the database

My web application has a feature that allows the user to send messages to all friends. The number of friends can be 100K to 200K. The application uses Spring and Hibernate.
It entails fetching the friends' info, building the message objects and saving them to the database. After all the messages are sent (actually saved to the db), a notification pops up showing how many messages were sent successfully, such as 99/100 sent or 100/100 sent.
However, when I was load testing this feature, it took an extremely long time to finish. I am trying to improve the performance. One approach I tried was to divide the friends into small batches and fetch/save each batch concurrently, waiting on all of them to finish. But that still didn't yield much improvement.
I was wondering if there are any other ways I can try. Another approach I can think of is to use WebSockets to send each batch and update the notification after each batch and start the next batch until all the batches are sent. But how can the user still get the notification after he navigates away from the message page? The Websocket logic on the client side has to be somewhere global, correct?
Thank you in advance.
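For illustration, the batch-plus-progress idea sketched in the question is language-independent. Here is a rough TypeScript sketch (the question's actual stack is Spring/Hibernate), where saveBatch stands in for a real bulk insert and notify stands in for a WebSocket progress event; all names are hypothetical:

interface Friend { id: number; name: string }

// Stand-in for a single multi-row INSERT or a batched ORM flush:
async function saveBatch(batch: Friend[]): Promise<void> { /* ... */ }

function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size));
  return out;
}

async function sendToAllFriends(
  friends: Friend[],
  notify: (sent: number, total: number) => void,
) {
  let sent = 0;
  for (const batch of chunk(friends, 1000)) {
    await saveBatch(batch);       // one round trip per batch, not per row
    sent += batch.length;
    notify(sent, friends.length); // e.g. socket.emit("progress", { sent, total })
  }
}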

Preventing data loss in client authoritative database writes

A project I'm working on requires users to insert themselves into a list on a server. We expect a few hundred users over a weekend, and while very unlikely, a collision could happen in which two users submit the list concurrently and one submission is lost. The server has no validation; it simply allows you to get and put data.
I was pointed in the direction of "optimistic locking", but I'm having trouble grasping when exactly the data should be validated and how it prevents this from happening. If one of the clients reads the data, adds itself, and then checks again to ensure that the data is the same with the use of an index or timestamp, how does this prevent the other client from doing the same and then one overwriting the other?
I'm trying to understand the flow in the context of two clients getting data and putting data.
The point of optimistic locking is that the decision to accept or reject a write is taken on the server, and is protected against concurrency by a pessimistic transaction or some sort of hardware protection, such as compare-and-swap. So a client requests a write together with some sort of timestamp or version identifier, and the server only accepts the write if the timestamp is still accurate. If it isn't, the client gets some sort of rejection code and will have to try again. If it is, the client gets told that its write succeeded.
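A minimal sketch of that version-check flow, assuming a single server-held list guarded by a version counter (all names are illustrative):

interface VersionedList {
  version: number;
  entries: string[];
}

const list: VersionedList = { version: 0, entries: [] };

// Returns the new version on success, or null if the client's copy was stale.
function tryAppend(clientVersion: number, entry: string): number | null {
  if (clientVersion !== list.version) return null; // stale read: reject; the client must re-fetch
  list.entries.push(entry);
  list.version += 1; // with a real database, wrap this check-then-act in a transaction or compare-and-swap
  return list.version;
}

For two concurrent clients the flow would be: both read version 0 and add themselves; the first PUT succeeds and bumps the version to 1; the second PUT still carries version 0, gets rejected, re-reads the list (which now contains the first user), and retries. Neither entry is lost.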
This is not the only way to handle receiving data from multiple clients. One popular alternative is to use a reliable messaging system - for example, the Java Message Service (JMS) specifies an interface for such systems, for which you can find open source implementations. Clients write into the messaging system and can go away as soon as their message is accepted. The server reads requests from the messaging system and acts on them. If the server or the network goes down, it's no big deal: the messages will still be there to be read when they come back (typically they are written to disk and have the same level of protection as database data, although if you look at a reliable message queue implementation you may find that it is not, in fact, built on top of a standard database table).
One example of a write-up of the details of optimistic locking is the HTTP ETag mechanism, e.g. https://en.wikipedia.org/wiki/HTTP_ETag

Front-facing REST API with an internal message queue?

I have created a REST API - in a few words, my client hits a particular URL and she gets back a JSON response.
Internally, quite a complicated process starts when the URL is hit, and there are various services involved as a microservice architecture is being used.
I was observing some performance bottlenecks and decided to switch to a message queue system. The idea is that now, once the user hits the URL, a request is published on an internal message queue, where it waits to be consumed. A consumer will process it and publish the result back on a queue, and this will happen quite a few times until finally the same node servicing the user receives back the processed response to be delivered to the user.
An asynchronous "fire-and-forget" pattern is now being used. But my question is, how can the node servicing a particular person remember who it was servicing once the processed result arrives back, without blocking (i.e. so it can handle several requests while a response is pending)? If it makes any difference, my stack looks a little like this: Tomcat, Spring, Kubernetes and RabbitMQ.
In summary, how can the request node (whose job is to push items onto the queue) maintain an open connection with the client who requested a JSON response (i.e. the client is waiting for the JSON response) and deliver the data back to the correct client?
There are a few different scenarios, depending on how much control you have over the client.
If the client behaviour cannot be changed, you will have to keep the session open until the request has been fully processed. This can be achieved by employing a pool of workers (futures/coroutines, threads or processes) where each worker keeps the session open for a given request.
This method has a few drawbacks, and I would keep it as a last resort. Firstly, you will only be able to serve a limited number of concurrent requests, proportional to your pool size. Secondly, as your processing is behind a queue, your front end won't be able to estimate how long it will take for a task to complete. This means you will have to deal with long-lasting sessions, which are prone to fail (what if the user gives up?).
If the client behaviour can be changed, the most common approach is to use a fully asynchronous flow. When the client initiates a request, it is placed within the queue and a Task Identifier is returned. The client can use the given TaskId to poll for status updates. Each time the client requests updates about a task, you simply check whether it is complete and respond accordingly. A common pattern when a task is still in progress is to have the front end return to the client the estimated amount of time to wait before trying again. This allows your server to control how frequently clients poll. If your architecture supports it, you can go the extra mile and provide information about the progress as well.
Example response when task is in progress:
{"status": "in_progress",
"retry_after_seconds": 30,
"progress": "30%"}
A more complex yet elegant solution consists of using HTTP callbacks (webhooks). In short, when the client makes a request for a new task, it provides a tuple (URL, method) the server can use to signal that processing is done; it then waits for the server to send the signal to the given URL. In most cases this solution is overkill, yet I think it's worth mentioning.
One option would be to use the DeferredResult provided by Spring, but that means you need to maintain a pool of threads in the request-serving node, and the maximum number of active threads will determine the throughput of your system. For more details on how to implement DeferredResult, see this link: https://www.baeldung.com/spring-deferred-result
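Whichever framework is used, the underlying pattern for "how does the node remember who it was servicing" is a correlation-id map: park the pending response under a generated id, send that id along with the queued work, and resolve the response when the reply comes back. A hedged TypeScript sketch, where publish stands in for a real queue call (e.g. amqplib's channel.sendToQueue) and all names are illustrative:

import crypto from "crypto";
import type { Response } from "express";

const pending = new Map<string, Response>();

function handleRequest(res: Response, payload: unknown) {
  const correlationId = crypto.randomUUID();
  pending.set(correlationId, res); // non-blocking: the event loop keeps serving other requests
  publish("work", { correlationId, payload });
  setTimeout(() => {
    // Avoid leaking parked responses if the reply never arrives:
    if (pending.delete(correlationId)) res.status(504).end();
  }, 30_000);
}

// Called by the queue consumer when a processed result arrives back:
function onReply(message: { correlationId: string; result: unknown }) {
  const res = pending.get(message.correlationId);
  if (res) {
    pending.delete(message.correlationId);
    res.json(message.result);
  }
}

function publish(queue: string, msg: unknown): void {
  // Stand-in for e.g. channel.sendToQueue(queue, Buffer.from(JSON.stringify(msg)))
}

RabbitMQ supports this pattern directly via the correlationId and replyTo message properties, which is the usual way to route a reply back to the specific node that parked the response.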

What is the disadvantage of using websocket/socket.io where ajax will do?

Similar questions have been asked before, and they all reached the conclusion that Ajax will not become obsolete. But in what ways is Ajax better than websockets?
With socket.io, it's easy to fall back to Flash or long polling, so browser compatibility seems to be a non-issue.
Websockets are bidirectional. Where Ajax would make an asynchronous request, a websocket client would send a message to the server. The POST/GET parameters can be encoded in JSON.
So what is wrong with using 100% websockets? If every visitor maintains a persistent websocket connection to the server, would that be more wasteful than making a few ajax requests throughout the visit session?
I think it would be more wasteful. For every connected client you need some sort of object/function/code/whatever on the server paired up with that one client: a socket handler, or a file descriptor, or however your server is set up to handle the connections.
With AJAX you don't need a 1:1 mapping of server-side resources to clients. Your number of clients can scale more independently of your server-side resources. Even node.js has its limits on how many connections it can handle and keep open.
The other thing to consider is that certain AJAX responses can be cached too. As you scale up you can add an HTTP cache to help reduce the load from frequent AJAX requests.
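For instance, a hedged Express sketch of that kind of caching (the route and the cache lifetime are illustrative):

import express from "express";

const app = express();

app.get("/api/summary", (req, res) => {
  // Let browsers and shared HTTP caches reuse this response for 60 seconds,
  // so repeated polls within that window never reach the application at all:
  res.set("Cache-Control", "public, max-age=60");
  res.json({ total: 42 }); // illustrative payload
});

app.listen(3000);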
Short Answer
Keeping a websocket active has an ongoing cost for both the client and the server, whereas an Ajax request has a one-time cost, depending on what you're doing with it.
Long Answer
Websockets are often misunderstood because of this whole "Hey, use Ajax, that will do!" attitude. No, websockets are not a replacement for Ajax. They can potentially be applied to the same fields, but there are cases where using a websocket is absurd.
Let's take a simple example: a dynamic page which loads data after the page is loaded on the client side. It's simple: make an Ajax call. We only need one direction, from the server to the client. The client asks for the data, the server sends it to the client, done. Why would you implement websockets for such a task? You don't need your connection to be open all the time, you don't need the client to constantly ask the server, and you don't need the server to notify the client. The connection would stay open and waste resources, because keeping a connection open means constantly checking it.
Now for a chat application, things are totally different. You need the client to be notified by the server, instead of forcing the client to ask the server every x seconds or milliseconds whether something is new. That would make no sense.
To understand this better, think of the server and the client as two people. One of the two is the server, the other is the client. Ajax is like sending a letter: the client sends a letter, and the server responds with another letter. For a chat application, the conversation would go like this:
"Hey Server, got something for me ?
- No.
- Hey Server, got something for me ?
- No.
- Hey Server, got something for me ?
- Yes, here it is."
The server can't actually send a letter to the client, if the client never asked for an answer. It's a huge waste of resources. Because for every Ajax request, even if it's cached, you need to make an operation on the server side.
Now back to the case I discussed earlier, with data loaded via Ajax. Imagine the client is on the phone with the server. Keeping the connection active has a cost: it costs electricity and you have to pay your operator. Why would you call someone and keep them on the phone for an hour if you just want that person to tell you three words? Send a goddamn letter.
In conclusion, websockets are not a total replacement for Ajax!
Sometimes you will need Ajax where websocket usage is absurd.
Edit: the SSE case
This technology isn't used very widely, but it can be useful. As its name states, Server-Sent Events are a one-way push from the server to the client. The client doesn't request anything; the server just sends the data.
In short :
- Unidirectional from the client: Ajax
- Unidirectional from the server: SSE
- Bidirectional: Websockets
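A minimal SSE sketch, assuming an Express server (the /events route and the timer standing in for a real data source are illustrative); on the client, new EventSource("/events") with an onmessage handler is all that's needed:

import express from "express";

const app = express();

app.get("/events", (req, res) => {
  res.set({
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
    Connection: "keep-alive",
  });
  // Push a message whenever the server has something new; here a timer
  // stands in for a real data source:
  const timer = setInterval(() => {
    res.write(`data: ${JSON.stringify({ now: Date.now() })}\n\n`);
  }, 5000);
  req.on("close", () => clearInterval(timer)); // stop pushing when the client disconnects
});

app.listen(3000);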
Personally, I think that websockets will be used more and more in web applications instead of AJAX. They are not well suited to web sites where caching and SEO are of greater concern, but they will do wonders for webapps.
Projects such as DNode and socketstream help to remove the complexity and enable simple RPC-style coding. This means your client code just calls a function on the server, passing whatever data to that function it wants, and the server can call a function on the client and pass it data as well. You don't need to concern yourself with the nitty-gritty of TCP.
Furthermore, there is a lot of overhead with AJAX calls. For instance, a connection needs to be established and HTTP headers (cookies, etc.) are passed with every request. Websockets eliminate much of that. Some say that websockets are more wasteful, and perhaps they are right. But I'm not convinced that the difference is really that substantial.
I answered another related question in detail, including many links to related resources. You might check it out:
websocket api to replace rest api?
I think that sooner or later websocket-based frameworks will start to pop up, not just for writing the real-time, chat-like parts of web apps, but as standalone web frameworks. Once a permanent connection is created, it can be used for receiving all kinds of stuff, including the UI parts of a web application which are now served, for example, through AJAX requests. This approach may hurt SEO in some ways, although it can reduce the amount of traffic and load generated by asynchronous requests, which include redundant HTTP headers.
However, I doubt that websockets will replace or endanger AJAX, because there are numerous scenarios where permanent connections are unnecessary or unwanted. For example, mashup applications using one-time, single-purpose REST-based services that don't need to be permanently connected to clients.
There's nothing "wrong" about it.
The only difference is mostly readability. The main advantage of Ajax is that it allows you fast development because most of the functionality is written for you.
There's a great advantage in not having to re-invent the wheel every time you want to open a socket.
WS:// connections have far less overhead than "AJAX" requests.
As other people have said, keeping the connection open can be overkill in scenarios where you don't need server-to-client notifications, or where client-to-server requests happen at low frequency.
But another disadvantage is that WebSocket is a low-level protocol, offering no additional features on top of TCP once the initial handshake is performed. So when implementing a request-response paradigm over websockets, you will probably miss features that HTTP (a very mature and extensive protocol family) offers, like caching (client and shared caches), validation (conditional requests), safety and idempotence (with implications on how the agent behaves), range requests, content types, status codes, ...
That is, you reduce message sizes at a cost.
So my choice is AJAX for request-response, and websockets for server push and high-frequency, low-latency messaging.
If you want the connection to the server to stay open, and data would otherwise be polled from the server continuously, go for sockets; otherwise, you are good to go with Ajax.
Simple analogy:
Ajax asks questions (requests) of the server, and the server gives answers (responses) to those questions. If you want to ask questions continuously, Ajax won't work well: it has a large overhead which requires resources at both ends.

Push or Pull for a near real time automation server?

We are currently developing a server whereby a client registers interest in changes to specific data elements, and when that data changes the server pushes the data back to the client. There has been vigorous debate at work about whether it would be better for the client to poll for this data.
What is considered to be the ideal method, in terms of performance, scalability and network load, of data transfer in a near real time environment?
Update:
Here's a link that gives some food for thought with regard to UI updates.
There's probably no ideal method for every situation, but push is usually better and used more often. It allows you to optimize server caching and data transfers, which helps performance and scalability, and it cuts network traffic a bit by avoiding client requests and empty responses. It can be an important advantage for a server to operate at its own pace and supply clients with data when it is ready.
Industry standards - such as OPC and GID - support both. The server pushes updates to subscribed clients, but a client can pull some rarely used data without bothering with a subscription.
As long as the client initiates the connection (to get past firewall and NAT problems), either way is fine.
If there are several different types of data you need to send, you might want to have the client specify which type it wants, but this is only needed once per connection. Then you can have the server continue to send updates as it has them, as sketched below.
It would be less network traffic to have the server send updates without the client continually asking for updates.
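A hedged sketch of this subscribe-then-push flow using socket.io rooms (the event names and the dataChanged hook are hypothetical):

import { Server } from "socket.io";

const io = new Server(3000);

io.on("connection", (socket) => {
  // The client declares interest once per connection:
  socket.on("subscribe", (element: string) => socket.join(element));
});

// Call this from wherever the server learns that a data element changed;
// only clients subscribed to that element receive the push:
function dataChanged(element: string, value: unknown) {
  io.to(element).emit("update", { element, value });
}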
What do you have on the client's side? Many firewalls allow outgoing requests but block incoming requests. In other words, pull may be your only option if you are crossing the Internet unless you are sending out e-mails.
