How to read data from an Ajax web service with Qt?

I would like to process some data in a Qt application. This data can be found on a web page which uses Ajax to dynamically update itself.
For example, the page itself is www.example.com, and it uses Ajax to load data from www.example.com/data, which is a plain text file. If I view www.example.com in a browser, I can clearly see when the data is updated.
The brute force solution would be to just call the QWebView's load(QUrl("www.example.com/data")) every couple of seconds, or every time its loadFinished() signal is emitted, but that would be a waste of bandwidth, as I would be downloading the same data over and over. The time between updates could theoretically be a few seconds, but it could also be minutes, hours, or longer.
Is there a possibility to only reload the data when the page is updated?

The traditional AJAX model uses the following sequence of events:
Browser opens connection
Browser sends request
Server sends response
Server closes connection
Because the connection is closed, there is no way for the server to notify your browser if any data have changed. In order to get this information, you have no option but to query the server periodically.
As you mentioned in your question, this is not very efficient since you can waste a lot of bandwidth if nothing changes for a long while.
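To make that concrete in the Qt context of the question above, here is a minimal polling sketch (a sketch only, with a made-up 5-second interval) that fetches the plain-text www.example.com/data URL with QNetworkAccessManager on a timer, rather than driving a QWebView:

    // Minimal periodic-polling sketch (Qt 5/6, QT += core network).
    // The URL comes from the question; the 5-second interval is arbitrary.
    #include <QCoreApplication>
    #include <QNetworkAccessManager>
    #include <QNetworkReply>
    #include <QNetworkRequest>
    #include <QTimer>
    #include <QUrl>
    #include <QDebug>

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);

        QNetworkAccessManager manager;
        QTimer timer;
        timer.setInterval(5000);                        // poll every 5 seconds (arbitrary)

        QObject::connect(&timer, &QTimer::timeout, [&manager]() {
            QNetworkRequest request(QUrl("http://www.example.com/data"));
            QNetworkReply *reply = manager.get(request);
            QObject::connect(reply, &QNetworkReply::finished, [reply]() {
                if (reply->error() == QNetworkReply::NoError)
                    qDebug() << "data:" << reply->readAll();   // process the payload here
                else
                    qDebug() << "request failed:" << reply->errorString();
                reply->deleteLater();
            });
        });

        timer.start();
        return app.exec();
    }

This is exactly the wasteful pattern described above: the request goes out whether or not anything has changed; it merely avoids re-downloading a whole page.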
WebSockets are a more recent technology that tries to overcome this inefficiency, and Qt has a module (Qt WebSockets) that caters for it.
Unfortunately, it is not universally supported yet, so if you want to use WebSocket technology with a third-party server, you need traditional AJAX code to fall back on in case WebSockets are not available.
EDIT:
Unfortunately, WebSockets are not the golden solution. It's still up to the server to have been programmed to send out notifications of changes. If the server does not have this feature, it won't matter if you're using WebSockets or traditional AJAX, you'll still have to keep querying for changes.
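For reference, here is roughly what the client side looks like with the Qt WebSockets module mentioned above. The ws://www.example.com/updates endpoint and the subscribe message are purely hypothetical; the real server would have to expose something like them:

    // Sketch of a WebSocket client with Qt's QWebSocket (QT += websockets).
    // The ws:// endpoint is hypothetical -- the real server must actually offer one.
    #include <QCoreApplication>
    #include <QWebSocket>
    #include <QString>
    #include <QUrl>
    #include <QDebug>

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);

        QWebSocket socket;

        QObject::connect(&socket, &QWebSocket::connected, [&socket]() {
            qDebug() << "connected, waiting for server pushes";
            // Some servers expect a subscription message first; this one is made up.
            socket.sendTextMessage(QStringLiteral("subscribe:data"));
        });
        QObject::connect(&socket, &QWebSocket::textMessageReceived,
                         [](const QString &message) {
            qDebug() << "update pushed by server:" << message;  // process new data here
        });
        QObject::connect(&socket, &QWebSocket::disconnected, []() {
            qDebug() << "connection closed";    // reconnect or AJAX fallback logic goes here
        });

        socket.open(QUrl(QStringLiteral("ws://www.example.com/updates")));
        return app.exec();
    }

With this model the handler connected to textMessageReceived only runs when the server actually pushes something, which is the bandwidth saving described above - but, as the EDIT says, only if the server has been written to push.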

Related

Should I be using AJAX or WebSockets.

Oh, the joyous question of HTTP vs WebSockets is at it again. However, even after quite a bit of reading through the hundreds of versus blog posts, SO questions, etc., I'm still at a complete loss as to what I should be working towards for our application. In this post I will be supplying information on the application's functionality and the types of requests/responses used in our application currently.
Currently our application is a sloppy piece of work, thrown together using AngularJS and AJAX requests to an Apache server running PHP, namely XAMPP. With the launch of our application I've noticed that we're having problems with response times when the server is under any kind of load. This probably has something to do with the sloppy architecture of our server, the hardware, and the fact that our MySQL database isn't exactly optimized.
However, with such a loyal fanbase and investors seeing potential in our application and giving us a chance to roll out a 2.0, I've been studying hard into how to turn this application into a powerhouse of low-latency scalability. Honestly, the best option would be to hire someone with experience, but unfortunately I'm a hobbyist, and a one-man army without much experience.
After some extensive research, I've decided on writing the backend using NodeJS this time. However, I'm having a hard time deciding between HTTP and WebSockets. Here are the types of transactions that are done between the server and client.
Client sends a request to the server in JSON format. The request has a few different things.
A request id (For processing logic based on the request)
The data associated with the request ID.
The server receives the request, polls the database (if necessary) and then responds to the client in JSON format. Sometimes the server is serving files to the client. Namely images in Base64 format.
Currently the application (when in use) sends a request to the server every time an interface is changed, which on average for our application is once every few seconds. Every action on our interfaces sends another request to the server. The application also sends requests to check for notifications/messages every 8 seconds (or every 2 seconds when the messaging interface is open).
Currently, here are the benefits I see of a stateful connection over a stateless connection for our application.
If the connection is stateful, I can eliminate the requests for notifications and messages, as the server can just tell the client whenever one becomes available. This alone can eliminate x(n)/4 requests per second to the server.
Handling something like a disconnection from the server is as simple as attempting to reconnect, as opposed to handling timeouts/errors per request; it would only be handled on the socket.
Additional security can be obtained by removing security keys for database interaction; this should prevent the possibility of hijacking(?) a session_key and using it to manipulate or access another user's data. The session_key is only needed because there is no state in the AJAX setup.
However, I'm someone who started learning programming through TCP game-server emulation. So I understand some benefits of a STATEFUL connection, while I don't understand the benefits of a STATELESS connection very much at all. I know they both have their benefits and quirks, but I'm curious what would be the best approach for us.
We're mainly looking for scalability, as we had a local application launch and managed to bottleneck at nearly 10,000 users in under 48 hours. Luckily I announced this as a BETA, and the users are cutting me a lot of slack after learning that I did it all on my own as a learning project. I've disabled registrations while looking into improving the application's frontend and backend.
IMPORTANT:
If using WebSockets, would we be able to asynchronously download pictures from the server like we can with AJAX? For example, I can make 5 requests to the server using AJAX for 5 different images, and they will all start downloading immediately. Using a stateful connection, would I have to wait for each photo to be streamed before moving to the next request? Would this bottleneck only a single user, or every user that is waiting on a request to be completed?
It all boils down to how your application works and how it needs to scale. I would use bare WebSockets rather than any wrapper, since it is already an easy-to-use API and your hands won't be tied when you need to scale out.
Here are some links that will give you insight, although not concrete answers to your questions, because as I said, it depends on your expectations.
Hard downsides of long polling?
WebSocket/REST: Client connections?
Websockets, and identifying unique peers[PHP]
How HTML5 Web Sockets Interact With Proxy Servers
If your question is "Should I use HTTP over WebSockets?", the answer is: you should not.
Even if it is faster because you don't lose time opening the connection, you also lose everything the HTTP specification gives you: verbs (GET, POST, PATCH, PUT, ...), paths, request bodies, responses, and status codes. This seems simple, but you will have to re-implement all or part of these protocol features yourself.
So you should use Ajax as long as it is an occasional, one-off request.
When you find yourself making an Ajax request every 2 seconds, what you actually need is for the server to send you data, not for you to keep asking the server whether the API has changed. That is a sign that you should implement a WebSocket server.
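The asker's backend is Node, but to keep the sketches on this page in one language (and tied to the Qt question at the top), here is the shape of such a push server using Qt's QWebSocketServer; a Node WebSocket library would do the same three things: accept connections, remember the sockets, broadcast when something changes. The port, message payload and timer-driven "change detector" below are placeholders:

    // Push-server sketch with QWebSocketServer (QT += websockets).
    // Shows the pattern only: keep client sockets, broadcast when data changes.
    #include <QCoreApplication>
    #include <QWebSocketServer>
    #include <QWebSocket>
    #include <QHostAddress>
    #include <QString>
    #include <QTimer>
    #include <QList>
    #include <QDebug>

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);

        QWebSocketServer server(QStringLiteral("push-demo"),
                                QWebSocketServer::NonSecureMode);
        if (!server.listen(QHostAddress::Any, 8080)) {          // placeholder port
            qWarning() << "cannot listen:" << server.errorString();
            return 1;
        }

        QList<QWebSocket *> clients;

        QObject::connect(&server, &QWebSocketServer::newConnection, [&]() {
            QWebSocket *client = server.nextPendingConnection();
            clients.append(client);
            QObject::connect(client, &QWebSocket::disconnected, [&, client]() {
                clients.removeAll(client);
                client->deleteLater();
            });
        });

        // Stand-in for "the data changed": here a timer, in reality a database
        // trigger, message queue, or whatever the application uses to detect updates.
        QTimer notifier;
        QObject::connect(&notifier, &QTimer::timeout, [&clients]() {
            for (QWebSocket *client : clients)
                client->sendTextMessage(QStringLiteral("{\"type\":\"notification\"}"));
        });
        notifier.start(10000);

        return app.exec();
    }

The point is the inversion of control: the clients stop asking "any messages?" every few seconds, and the server tells them the moment there is one.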

browser implication when ajax call every 3 sec

We would like to check every 3 seconds whether there are any updates in our database, using jQuery $.ajax. The technique is clear, but are there any reasons not to fire so many Ajax calls (browser, cache, performance, etc.)? The web application runs for roughly 10 hours per day on every client.
We are using Firefox.
Ajax calls have implications not so much on the client side (browser, etc.) as on the server side. Every Ajax call is a hit on the server: more bandwidth consumption and a higher request count, which in turn increases server load. Ajax is really meant to improve client-side friendliness at the cost of server-side work.
You should think carefully before implementing infinite repeating AJAX calls with an arbitrary delay between them. How did you come up with 3 seconds? If you're going to be polling your server in this way, you need to reduce the frequency of requests to as low a number as possible. Here are some things to think about:
Is the data you're fetching really going to change that often?
Can your server handle a request every 3 seconds? How long does a single request take to process?
Could you increase the delay after inactivity, or guess, based on previous server responses, how long the next delay should be? (See the sketch after this list.)
Can you stop the polling completely when the window loses focus, and restart it when it comes back to the foreground?
If a user opens the same page in a website 10 times, your server should recognise this and throttle its responses, either using a cookie with a unique value in it (recommended) or based on the client IP address.
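A tiny sketch of the back-off idea from the list above, independent of any HTTP library; the interval bounds and the doubling rule are arbitrary choices, not a recommendation:

    // Widen the polling interval while nothing changes, snap back when an update
    // arrives, and effectively pause while the window/tab is not in the foreground.
    #include <algorithm>
    #include <chrono>
    #include <cstdio>

    using namespace std::chrono;

    struct PollScheduler {
        milliseconds minDelay{3000};     // the original 3-second interval
        milliseconds maxDelay{60000};    // never wait longer than a minute
        milliseconds current{3000};

        // Returns how long to wait before the next request.
        milliseconds next(bool dataChanged, bool windowHasFocus) {
            if (!windowHasFocus)
                return maxDelay;                            // effectively pause in background
            if (dataChanged)
                current = minDelay;                         // activity: poll quickly again
            else
                current = std::min(current * 2, maxDelay);  // quiet: back off exponentially
            return current;
        }
    };

    int main()
    {
        PollScheduler sched;
        // Simulate a quiet stretch: the delay grows 3 s -> 6 s -> 12 s -> ...
        for (int i = 0; i < 5; ++i)
            std::printf("next poll in %lld ms\n",
                        static_cast<long long>(sched.next(false, true).count()));
    }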
Above all, instead of polling, consider using HTML5 WebSockets to "push" data to the client - most modern browsers support this. Several frameworks are available that will fall back to polling if WebSockets are not available - one excellent .NET example is SignalR.
I've seen a lot of applications making a request every 5 seconds or so, for instance a remote control (web player) or a chat, so that in itself should not be a problem for the browser.
What would be good practice is to wait for the answer before making a new request, which means not firing the requests with setInterval, for instance.
(In case the user loses their connection, that prevents opening too many connections.)
Also verify that all the processing associated with one answer is finished before the next answer arrives.
And if you have access to it on the server side, configure your server to send the Connection: Keep-Alive HTTP header, so you won't add too much TCP overhead to each of your requests. That can speed up small requests a lot.
The last point, of course, is verifying that your server is able to handle that many requests.
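Translated into the Qt terms of the question at the top of this page, "wait for the answer before making a new request" means scheduling the next request from the reply's finished handler rather than from a free-running timer. The URL and the 3-second pause are placeholders:

    // Chained polling sketch: schedule the next request only after the previous
    // reply has been fully handled, instead of firing on a fixed interval.
    #include <QCoreApplication>
    #include <QNetworkAccessManager>
    #include <QNetworkReply>
    #include <QNetworkRequest>
    #include <QTimer>
    #include <QUrl>
    #include <QDebug>

    static void poll(QNetworkAccessManager *manager)
    {
        QNetworkReply *reply = manager->get(
            QNetworkRequest(QUrl("http://www.example.com/data")));   // placeholder URL
        QObject::connect(reply, &QNetworkReply::finished, [manager, reply]() {
            if (reply->error() == QNetworkReply::NoError)
                qDebug() << "payload:" << reply->readAll();          // finish processing first
            reply->deleteLater();
            // Only now schedule the next request; if the connection is slow or down,
            // requests cannot pile up the way fixed-interval polling lets them.
            QTimer::singleShot(3000, [manager]() { poll(manager); });
        });
    }

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);
        QNetworkAccessManager manager;
        poll(&manager);
        return app.exec();
    }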
You are looking for changes every 3 seconds, so the traffic increases because you are fetching data at short intervals, continuously. It may also steadily increase memory usage on the browser side. Since you need to check for updates in the database, you could look at alternatives such as Sheepjax, Comet or SignalR. (SignalR generally broadcasts the data to all users, and Comet needs a license.) Hope this helps.

Periodic Ajax POST calls versus COMET/Websocket Push

On a site like Trello.com, I noticed in the Firebug console that it makes frequent, periodic Ajax POST calls to its server to retrieve new data from the database and update the DOM as and when something new is available.
On the other hand, something like Facebook notifications seems to be implemented with a COMET push mechanism.
What are the advantages and disadvantages of each approach? Specifically, my question is why Trello.com uses a "pull" mechanism, since pinging its server so frequently does not seem like a scalable solution as more and more users sign up for its services.
Short Answer to Your Question
Your gut instinct is correct. Long-polling (aka Comet) will be more efficient than straight-up polling. And when available, WebSockets will be more efficient than long-polling. So why do some companies use plain "pull" polling? Quite simply: they are out of date and need to put some time into updating their code base!
Comparing Polling, Long-Polling (comet) and WebSockets
With traditional polling you will make the same request repeatedly, often parsing the response as JSON or stuffing the results into a DOM container as content. The frequency of this polling is not in any way tied to the frequency of the data updates. For example you could choose to poll every 3 seconds for new data, but maybe the data stays the same for 30 seconds at a time? In that case you are wasting HTTP requests, bandwidth, and server resources to process many totally useless HTTP requests (the 9 repeats of the same data before anything has actually changed).
With long polling (aka Comet), we significantly reduce the waste. When your request goes out for updated data, the server accepts the request but doesn't respond if there are no new changes; instead it holds the request open for 10, 20, 30, or 60 seconds, or until some new data is ready and it can respond. Eventually the request will either time out or the server will respond with an update. The idea here is that you won't be repeating the same data as often as in the 3-second polling above, but you still get very fast notification of new data, as there is likely already an open request just waiting for the server to respond to.
You'll notice that long polling reduces the waste considerably, but there is still the chance of some waste. 30-60 seconds is a common timeout period for long polling, as many routers and gateways will shut down hanging connections beyond that time anyway. So what if your data actually changes every 15 minutes? Polling every 3 seconds would be horribly inefficient, but long-polling with timeouts at 60 seconds would still have some wasted round trips to the server.
WebSockets are the next technological advancement: they allow a browser to open a connection with the server, keep it open for as long as it wants, and deliver multiple messages or chunks of data over the same open socket. The server can then send down updates exactly when new data is ready. The WebSocket connection is already established and waiting for data, so it is quick and efficient.
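On the client side, the difference between plain polling and long polling is small: a long timeout, and an immediate re-request once a response (or timeout) arrives, while the server does the real work of holding the request open. A Qt-flavoured sketch, assuming a cooperating server and a hypothetical /updates endpoint:

    // Long-polling client sketch: send the request, let the server hold it open,
    // and re-issue immediately when a response or timeout arrives.
    #include <QCoreApplication>
    #include <QNetworkAccessManager>
    #include <QNetworkReply>
    #include <QNetworkRequest>
    #include <QUrl>
    #include <QDebug>

    static void longPoll(QNetworkAccessManager *manager)
    {
        QNetworkRequest request(QUrl("http://www.example.com/updates"));  // hypothetical endpoint
        request.setTransferTimeout(60000);   // let the server hold the request ~60 s (Qt 5.15+)
        QNetworkReply *reply = manager->get(request);
        QObject::connect(reply, &QNetworkReply::finished, [manager, reply]() {
            if (reply->error() == QNetworkReply::NoError)
                qDebug() << "update:" << reply->readAll();
            // A timeout or error simply means "nothing new"; either way, ask again at once.
            reply->deleteLater();
            longPoll(manager);
        });
    }

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);
        QNetworkAccessManager manager;
        longPoll(&manager);
        return app.exec();
    }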
Reality Check
The problem is that WebSockets are still in their infancy. Only the very latest generation of browsers supports them, if at all. The spec hasn't been fully ratified as of this posting, so implementations can vary from browser to browser. And of course your visitors may be using browsers a few years old. So unless you can control which browsers your visitors use (say, a corporate intranet where IT can dictate the software on the workstations), you'll need a mechanism to abstract away this transport layer so that your code can use the best technique available for that particular visitor's browser.
There are other benefits to having an abstracted communications layer. For example, what if you had 3 grid controls on your page, all pull-polling every 3 seconds - see what a mess that would be? Rolling your own long-polling implementation can clean this up some, but it would be even better if you aggregated the updates for all 3 of these tables into one long-polling request. That will again cut down on waste. If you have a small project, again, you could roll your own, but there is a standard Bayeux protocol that many server-push implementations follow. The Bayeux protocol automatically aggregates messages for delivery and then segregates messages by "channel" (an arbitrary path-like string you as a developer use to direct your messages). Clients can listen on channels, and you can publish data on channels; the messages will reach all clients listening on the channel(s) you published to.
The number of server-side push toolkits available is growing quite fast these days, as push technology is becoming the next big thing. There are probably 20 or more working implementations of server push out there. Do your own search for "{Your favorite platform} comet implementation", as it will continue to change every few months, I'm sure (and it has been covered on Stack Overflow before).

Reverse AJAX? Can data changes be 'PUSHED' to script?

I have noticed that some of my ajax-heavy sites (ones I visit, not ones I have built) have certain auto-refresh features. For example, in Gmail, if I get a new message, I see the new message without a page reload. It's the same with the Facebook browser-based IM client. From what I can tell, there aren't any Java applets handling the server-browser binding, so I'm left to assume it's being done by AJAX and perhaps some element I'm unaware of. By my best guess, it's done in one of two ways:
The JavaScript does a steady "ping" to a server-side script, checking for any updates that might be available (which would explain why some of these pages bring other heavy-duty pages to a crawl); or
The JavaScript sits idly by and a server-side script actually "pushes" any updates to the browser. But I'm not sure if this is possible. I'd imagine there is some kind of AJAX function that still pings, but all it asks is "any updates?", and the server-side script has a simple boolean that says "nope" or "I'm glad you asked." But if this is the case, any data change would need to call the script directly so that it has the changes ready and flips that boolean.
So is that possible/feasible/how it works? I imagine something like:
Someone sends an email/IM/DB update to the server; the server calls the script using the script's URL plus some relevant GET variable; the script notes the change and updates the "updates available" variable; the AJAX gets the response that there are in fact updates; the AJAX runs its normal "update page" functions, which execute the normal update scripts and output the results to the browser.
I ask because it seems really inefficient that the js is just doing a constant check which requires a) the server to do work every 1.5 seconds, and b) my browser to do work every 1.5 seconds just so that on my end I can say "Oh boy, I got an IM! just like a real IM client!"
Read about Comet
I've actually been working on a small .NET web app that uses the Ajax long-polling technique described.
Depending on what technology you're using, you could use thread-signaling mechanisms to hold your request until an update is retrieved.
With ASP.NET I'm running my server on a single machine, so I store a reference to my Producer object (which contains a thread that processes the data). To initiate the data pull, my service's Subscribe method is called, which creates a Consumer object that's registered with the Producer. If the Consumer is in long-polling mode, it has an AutoResetEvent that is signaled whenever new data arrives; whenever the web client makes a request for data, the Consumer first waits on the reset event and then returns the data.
But you mentioned something about PHP - as far as I know, persistence there is maintained through serialization, not by actually keeping the object in memory, so I don't know how you could reference a Producer object using $_CACHE[] or $_SESSION[]. When I developed in PHP I never really knew anything about multithreading, so I didn't play around with it, but I guess you can look into that.
Using infinite loops is going to consume a lot of your processing power - I would exhaust all other options first.

Is there an alternative of ajax that does not require polling without server side modifications?

I'm trying to create a small and basic "Ajax"-based multiplayer game. Coordinates of objects are provided by a PHP "handler". This handler.php file is polled every 200 ms using Ajax.
Since there is no need to poll when nothing happens, I wonder: is there something that could do the same thing without frequent polling? E.g. Comet, though I've heard that you need to configure server-side applications for Comet. It's a shared web server, so I can't do that.
Maybe prevent the handler.php file from even returning a response if nothing has changed for the client - is that possible? Then again, you'd still have the client uselessly asking for a response even though nothing has changed yet. Basically, it should only use bandwidth and server resources if something needs to be told to the client, e.g. the change of an object's coordinates.
Comet is generally used for this kind of thing, and it can be a fragile setup as it's not a particularly common technology so it can be easy not to "get it right." That said, there are more resources available now than when I last tried it ~2 years ago.
I don't think you can do what you're thinking and have handler.php simply not return anything and stop execution: The web server will keep the connection open and prevent any further polling until handler.php does something (terminates or provides output). When it does, you're still handling a response.
You can try a long polling technique, where your AJAX allows a very large timeout (e.g. 30 seconds), and handler.php spins without responding until it has something to report, then returns. (You'll want to make sure the spinning is not resource-intensive). If handler.php "expires" and nothing happens, have it exit and let AJAX poll again. Since it only happens every 30 seconds, it will be a huge improvement over ~5 times a second. That would keep your polling to a minimum.
But that's the sort of thing Comet is designed for.
As Ajax only offers you a client-server request model (normally termed pull, rather than push), the only way to get data from the server is via requests. However, a common technique to get around this is for the server to respond only when it has new data. So the client makes a request, and the server hangs on to that request until something happens, then replies. This gets around the need for frequent polling even when the data hasn't changed, as you only need the client to send a new request after it gets a response.
Since you are using PHP, one simple method might be to have the PHP code call the sleep command for 200ms at a time between checks for data changes and then return the data to the client when it does change.
EDIT: I would also recommend having a timeout on the request, so that if nothing happens for, say, 2 seconds, a "no change" message is sent back. That way the client knows the server is still alive and processing its request.
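The answers above describe this loop in PHP terms; since the sketches on this page are in C++, here is the same hold-and-check loop as a framework-neutral function. hasNewData(), the 30-second deadline and the "no change" payload are all placeholders:

    // Server side of a long poll, stripped of any HTTP framework: check for new
    // data every 200 ms, respond as soon as there is some, or give up after a
    // timeout and answer "no change" so the client knows the server is alive.
    #include <chrono>
    #include <cstdio>
    #include <optional>
    #include <string>
    #include <thread>

    // Placeholder: in a real handler this would query the database or a cache.
    static std::optional<std::string> hasNewData()
    {
        return std::nullopt;   // pretend nothing has changed
    }

    std::string handleLongPoll()
    {
        using namespace std::chrono;
        const auto deadline = steady_clock::now() + seconds(30);   // typical long-poll timeout

        while (steady_clock::now() < deadline) {
            if (auto data = hasNewData())
                return *data;                                      // respond as soon as data is ready
            std::this_thread::sleep_for(milliseconds(200));        // cheap wait between checks
        }
        return "{\"status\":\"no change\"}";                       // tell the client to poll again
    }

    int main()
    {
        // With the stub above, this waits out the 30 s deadline and prints "no change".
        std::printf("%s\n", handleLongPoll().c_str());
    }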
Since this is tagged “html5”: HTML5 has <eventsource> (server-sent events) and WebSocket, but the implementation side is still largely in the future tense in practice.
Opera implemented an old version of <eventsource> called <event-source>.
Here's a solution - use a SaaS comet provider, such as WebSync On-Demand. No server resources to worry about, shared hosting or not, since it's all offloaded, and you can push out the information as needed.
Since it's SaaS, it'll work with any server language. For PHP, there's already a publisher written and ready to go.
The server must take part in this. Check with the hosting provider what modules are available. Or try to convince them to support Comet.
Maybe you should consider a small Virtual Private Server (VPS) for this.
One thing to add to the long-polling suggestions: if you're on a shared server, this solution will have limited scalability, as each active long poll keeps a connection (and a server-side process to service that connection) open. Your provider most likely has limits (either policy-defined or de facto) on the number of connections you can have open at a time, so you'll hit a wall if you have more sessions/windows than that playing concurrently.
