I am writing a frontend application that needs to know the current transfer rate with the server. When using HTTP, this is easy enough thanks to the Performance API. You can access many measurements of the HTTP call just by using
performance.getEntriesByName(url);
However, it seems WebSockets are not covered by this API, so I have been trying to find a way to do the same without it.
It seems the MessageEvent has a timestamp which indicates the time at which the event is created. However, there isn't much documentation about when that MessageEvent is created. Is it at the reception of the first byte (which I hope), or is it once the whole message is downloaded (which is probably the case)? Does anyone happen to know if there is more detail on how WebSocket messages are managed by browsers?
More generally, how do you measure your WebSocket transfer rate from the frontend without server-side help?
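For what it's worth, as far as the HTML specification goes, the message event only fires once a complete message has been received, so a frontend-only measurement can only average over many messages rather than time a single one byte-by-byte. A minimal sketch along those lines (the URL and the one-second reporting window are placeholders):

    // Count payload bytes as messages arrive and derive an average rate.
    // Note this measures when a *complete* message reaches JavaScript,
    // not when its first byte hits the wire.
    const ws = new WebSocket('wss://example.com/stream'); // placeholder URL
    ws.binaryType = 'arraybuffer';

    let bytesReceived = 0;
    let windowStart = performance.now();

    ws.onmessage = (event) => {
      // byteLength for binary frames; Blob gives a string's UTF-8 size
      bytesReceived += typeof event.data === 'string'
        ? new Blob([event.data]).size
        : event.data.byteLength;
    };

    // Report the average rate once per second.
    setInterval(() => {
      const seconds = (performance.now() - windowStart) / 1000;
      console.log(`~${(bytesReceived / seconds / 1024).toFixed(1)} KiB/s`);
      bytesReceived = 0;
      windowStart = performance.now();
    }, 1000);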
I'm using Socket.IO, and I was wondering which is better:
emitting an event every X seconds to always stay in sync with the database, or emitting the event only after something changes (e.g. after a POST), which would be more efficient.
I believe updating every X seconds would be easier to implement, and maybe scales better, but I don't know if that's the correct way.
EDIT-1: To give more context: the application is for an accounting team. They basically want their Excel sheets converted to an app. They have a lot of data, so I don't know if emitting an event every X seconds is a good idea.
Thanks.
There is no "correct" way. It depends entirely upon the needs of your client and the capabilities of your server. If the client needs to be kept more instantly up-to-date, then send data from your server to the client whenever the server has new data. If the client only needs to be updated every once-in-a-while, then only send it data every once-in-a-while. There is no "correct" way. It depends upon your application.
It is always more efficient to only send data to the client when the data has actually changed and when the client actually cares that something has changed. So, it would be foolish to send a client update every few seconds if the data isn't actually changing that often. If you have a means of knowing when the data changes on the server, then use that event to know when to send data to the client and even then, don't send it more often than the client actually cares to know.
It is always more efficient to have the server do no more work than is actually required by the client. Things like caching and keeping track of what each client was last sent can sometimes save lots of work for the server too.
Any further advice on this matter would require knowing a lot more about the needs of your application, how this particular data fits into them, and how often the data in question actually changes.
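As an illustration of the emit-on-change approach, here is a minimal server-side sketch (Node.js with Socket.IO; the records:changed event name and the change emitter are stand-ins for whatever change notification your data layer actually provides):

    const { Server } = require('socket.io');
    const { EventEmitter } = require('events');

    const io = new Server(3000);

    // Stand-in for your real change source: a database trigger, an ORM
    // hook, or simply the code path that handles the POST.
    const dataEvents = new EventEmitter();

    let lastSent = null;

    dataEvents.on('change', (records) => {
      const payload = JSON.stringify(records);
      if (payload === lastSent) return;     // nothing actually changed
      lastSent = payload;
      io.emit('records:changed', records);  // one broadcast per real change
    });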
A summary on this topic:
- Send data to the client no more often than it needs it.
- Sending data that has not changed since the last time you sent it wastes server work and bandwidth.
- Only you can decide how often your client needs updates (it depends upon your application).
- Only you can test the impact on scalability of sending data to every client every time the data changes.
- Server-side caching, and keeping track of which data each client already has, can help you avoid sending a client data it already has.
- Server-side scalability probably has a lot to do with how many simultaneous clients are connected and how frequently there is changed data to send them.
We have a web app making use of WebSockets with Netty. We are trying to push through a large stream of data and are noticing that once we hit about 300-400 responses a second, the channel begins to get marked as NOT_WRITABLE and we are unable to send some of the responses.
It seems as though Netty is waiting for an ACK on the socket. Is there any way to disable that, or to speed it up so we can handle more? The JavaScript implementation in Chrome is pretty bare-bones, and it doesn't seem like there's much we can do to increase performance from that side.
It seems like you write faster than the data can be sent out to the client. You can try to increase the write buffer high watermark; see [1].
Also, you may want to experiment with some TCP options like "tcpNoDelay" and "sendBufferSize"; see [2].
[1] http://static.netty.io/3.5/api/org/jboss/netty/channel/socket/nio/NioChannelConfig.html#setWriteBufferHighWaterMark(int)
[2] http://static.netty.io/3.5/api/org/jboss/netty/channel/socket/SocketChannelConfig.html
Similar questions have been asked before, and they all reached the conclusion that AJAX will not become obsolete. But in what ways is AJAX better than WebSockets?
With socket.io, it's easy to fall back to Flash or long polling, so browser compatibility seems to be a non-issue.
WebSockets are bidirectional. Where AJAX would make an asynchronous request, a WebSocket client would send a message to the server. The POST/GET parameters can be encoded as JSON.
So what is wrong with using 100% websockets? If every visitor maintains a persistent websocket connection to the server, would that be more wasteful than making a few ajax requests throughout the visit session?
I think it would be more wasteful. For every connected client you need some sort of object/function/code/whatever on the server paired up with that one client: a socket handler, a file descriptor, or however your server is set up to handle the connections.
With AJAX you don't need a 1:1 mapping of server-side resources to clients, so your number of clients can scale more independently of your server-side resources. Even node.js has its limits on how many connections it can handle and keep open.
The other thing to consider is that certain AJAX responses can be cached too. As you scale up you can add an HTTP cache to help reduce the load from frequent AJAX requests.
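As a sketch of that idea, assuming an Express backend (the endpoint and max-age are illustrative), a read-only AJAX endpoint can declare itself cacheable so a browser or intermediary cache absorbs repeat requests:

    const express = require('express');
    const app = express();

    app.get('/api/prices', (req, res) => {
      // Let browsers and shared caches reuse this response for 30s.
      res.set('Cache-Control', 'public, max-age=30');
      res.json({ prices: [] }); // illustrative payload
    });

    app.listen(8080);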
Short Answer
Keeping a WebSocket active has an ongoing cost for both the client and the server, whereas an Ajax call costs something only once, depending on what you're doing with it.
Long Answer
WebSockets are often misunderstood because of this whole "Hey, use Ajax, that will do!" attitude. No, WebSockets are not a replacement for Ajax. They can potentially be applied to the same fields, but there are cases where using a WebSocket is absurd.
Let's take a simple example: a dynamic page which loads data after the page is rendered on the client side. It's simple: make an Ajax call. We only need one direction, from the server to the client. The client asks for the data, the server sends it to the client, done. Why would you implement WebSockets for such a task? You don't need the connection to be open all the time, you don't need the client to constantly ask the server, and you don't need the server to notify the client. The connection would stay open and waste resources, because keeping a connection open means constantly checking it.
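For instance, the one-shot case needs nothing more than a plain request (the /api/report endpoint and target element are placeholders):

    // Load the data once after the page loads; no connection stays open.
    fetch('/api/report')
      .then((res) => res.json())
      .then((data) => {
        document.querySelector('#report').textContent =
          JSON.stringify(data, null, 2);
      });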
Now for a chat application things are totally different. You need the client to be notified by the server, instead of forcing the client to ask the server every x seconds or milliseconds whether there is something new. That polling would make no sense.
To understand this better, think of the server and the client as two people. Ajax is like sending a letter: the client sends a letter, and the server responds with another letter. For a chat application, the conversation would go like this:
"Hey Server, got something for me ?
- No.
- Hey Server, got something for me ?
- No.
- Hey Server, got something for me ?
- Yes, here it is."
The server can't actually send a letter to the client if the client never asked for an answer. It's a huge waste of resources, because every Ajax request, even a cached one, requires an operation on the server side.
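With a WebSocket, the same conversation needs no asking at all. A minimal chat-client sketch (the URL and the appendToChatLog helper are hypothetical):

    const chat = new WebSocket('wss://example.com/chat'); // placeholder URL

    // The server pushes; the client never polls.
    chat.onmessage = (event) => {
      appendToChatLog(JSON.parse(event.data)); // hypothetical UI helper
    };

    function sendMessage(text) {
      chat.send(JSON.stringify({ text }));
    }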
Now back to the case I discussed earlier, the data loaded once with Ajax. Imagine the client is on the phone with the server. Keeping the connection active has a cost: it costs electricity, and you have to pay your operator. Why would you call someone and keep them on the phone for an hour if you just want them to tell you three words? Send a goddamn letter.
In conclusion, WebSockets are not a total replacement for Ajax!
Sometimes you will need Ajax where WebSocket usage is absurd.
Edit: The SSE case
That technology isn't used very widely, but it can be useful. As its name states, Server-Sent Events are a one-way push from the server to the client. The client doesn't request anything; the server just sends the data.
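A minimal sketch (the /api/updates endpoint is a placeholder; the server must answer with the text/event-stream content type):

    // Subscribe once; the server pushes events one way from then on.
    const source = new EventSource('/api/updates');

    source.onmessage = (event) => {
      console.log('server said:', event.data);
    };

    source.onerror = () => {
      // EventSource reconnects automatically; just log for visibility.
      console.warn('SSE connection lost, retrying...');
    };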
In short:
- Unidirectional from the client : Ajax
- Unidirectional from the server : SSE
- Bidirectional : Websockets
Personally, I think that WebSockets will be used more and more in web applications instead of AJAX. They are not well suited to websites where caching and SEO are of greater concern, but they will do wonders for web apps.
Projects such as DNode and socketstream help to remove the complexity and enable simple RPC-style coding. This means your client code just calls a function on the server, passing whatever data to that function it wants, and the server can call a function on the client and pass it data as well. You don't need to concern yourself with the nitty-gritty of TCP.
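This is not DNode's actual API, but Socket.IO acknowledgements give a similar RPC feel; a sketch with an illustrative getUser handler:

    // --- server.js ---
    const { Server } = require('socket.io');
    const io = new Server(3000);

    io.on('connection', (socket) => {
      socket.on('getUser', (id, reply) => {
        reply({ id, name: 'Ada' }); // the "return value" goes back via the ack
      });
    });

    // --- client.js ---
    const { io: connect } = require('socket.io-client');
    const socket = connect('http://localhost:3000');

    // Reads almost like a local function call across the wire.
    socket.emit('getUser', 42, (user) => {
      console.log(user.name); // -> 'Ada'
    });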
Furthermore, there is a lot of overhead with AJAX calls. For instance, a connection needs to be established, and HTTP headers (cookies, etc.) are sent with every request. WebSockets eliminate much of that. Some say that WebSockets are more wasteful, and perhaps they are right, but I'm not convinced the difference is really that substantial.
I answered another related question in detail, including many links to related resources. You might check it out:
websocket api to replace rest api?
I think that sooner or later WebSocket-based frameworks will start to pop up not just for writing the real-time, chat-like parts of web apps, but also as standalone web frameworks. Once a permanent connection is created, it can be used for receiving all kinds of things, including the UI parts of a web application which are now served, for example, through AJAX requests. This approach may hurt SEO in some ways, although it can reduce the amount of traffic and load generated by asynchronous requests with their redundant HTTP headers.
However, I doubt that WebSockets will replace or endanger AJAX, because there are numerous scenarios where permanent connections are unnecessary or unwanted. For example, mashup applications that use one-off, single-purpose REST-based services don't need to be permanently connected to clients.
There's nothing "wrong" about it.
The only difference is mostly readability. The main advantage of Ajax is that it allows fast development, because most of the functionality is written for you.
There's a great advantage in not having to re-invent the wheel every time you want to open a socket.
WS:// connections have far less overhead than "AJAX" requests.
As other people have said, keeping the connection open can be overkill in scenarios where you don't need server-to-client notifications, or where client-to-server requests happen at low frequency.
But another disadvantage is that WebSocket is a low-level protocol, offering no features on top of TCP once the initial handshake is performed. So when implementing a request-response paradigm over WebSockets, you will probably miss features that HTTP (a very mature and extensive protocol family) offers, like caching (client and shared caches), validation (conditional requests), safety and idempotence (with implications on how the agent behaves), range requests, content types, status codes...
That is, you reduce message sizes at a cost.
So my choice is AJAX for request-response, and WebSockets for server push and high-frequency, low-latency messaging.
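To make the validation point concrete, here is what a conditional request looks like over plain HTTP, something you would have to reinvent on top of raw WebSockets (the endpoint, the stored ETag, and the cache object are placeholders):

    // Revalidate a cached response instead of re-downloading it.
    async function loadLedger(cached) {
      const res = await fetch('/api/ledger', {
        headers: cached.etag ? { 'If-None-Match': cached.etag } : {},
      });

      if (res.status === 304) {
        return cached.body; // unchanged: reuse the copy we already have
      }
      cached.etag = res.headers.get('ETag');
      cached.body = await res.json();
      return cached.body;
    }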
If you need the connection to the server to stay open, or you would otherwise be continuously polling the server, then go for sockets; otherwise, you are good to go with AJAX.
Simple Analogy:
Ajax asks questions (requests) of the server, and the server gives answers (responses) to those questions. If you want to ask continuous questions, AJAX won't work: it has a large overhead that consumes resources at both ends.
We are currently developing a server whereby a client registers interest in changes to specific data elements, and when that data changes the server pushes the data back to the client. There has been vigorous debate at work about whether it would be better for the client to poll for this data.
What is considered to be the ideal method, in terms of performance, scalability and network load, of data transfer in a near real time environment?
Update:
Here's a link that gives some food for thought with regard to UI updates.
There's probably no ideal method for every situation, but push is usually better and more often used. It lets the server optimize caching and data transfers, which helps performance and scalability, and it cuts network traffic a bit by avoiding client requests and empty responses. It can be an important advantage for a server to operate at its own pace and supply clients with data when it is ready.
Industry standards, such as OPC and GID, support both: the server pushes updates to subscribed clients, but a client can pull some rarely used data without bothering with a subscription.
As long as the client initiates the connection (to get past firewall and NAT problems), either way is fine.
If there are several different types of data you need to send, you might want to have the client specify which types it wants, but this is only needed once per connection. Then you can have the server continue to send updates as it has them.
It would be less network traffic to have the server send updates without the client continually asking for updates.
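A browser-side sketch of that subscribe-once, push-forever pattern (the message shapes and the render helper are illustrative):

    const ws = new WebSocket('wss://example.com/feed'); // placeholder URL

    ws.onopen = () => {
      // The client initiates the connection (firewall/NAT friendly) and
      // states once which data it cares about.
      ws.send(JSON.stringify({ type: 'subscribe', topics: ['prices'] }));
    };

    // From here on, the server sends updates as it has them.
    ws.onmessage = (event) => {
      const msg = JSON.parse(event.data);
      if (msg.type === 'update') render(msg.data); // hypothetical renderer
    };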
What do you have on the client's side? Many firewalls allow outgoing requests but block incoming requests. In other words, pull may be your only option if you are crossing the Internet unless you are sending out e-mails.