Continuous AJAX requests - effect on web app?

I have a web app idea which will require AJAX requests to function. It's crucial that the AJAX requests keep running for as long as the user is on the site.
My question is: if the user leaves the site open (with my AJAX requests running) for, say, 4 or 5 hours, will these AJAX requests still run? My concerns are screen dimming, screensavers, and computer sleep states. Will any of these affect the performance of my web app?

Unfortunately, it's very client-dependent. For example, mobile devices may stop processing JS when going into a sleep state (e.g., to save battery life). However, on an image rotator application I wrote some time ago, which sent regular requests to the server to retrieve images (there was good reason not to cache them, I swear) and was accessed primarily by non-mobile clients, I observed that requests continued for hours, even days. While I can't know for certain whether the client machines ever entered a sleep state, I'm pretty confident they did.
Long story short - I think you can't be sure, but for some target audiences, you can be reasonably sure. I would recommend investigating your audience.
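If you do target clients that may sleep, one pragmatic mitigation is to detect the gap when the machine wakes up and restart your requests. A minimal sketch, assuming a hypothetical resumePolling() that restarts the AJAX loop (the 5 s tick and 15 s threshold are arbitrary choices):

    var lastTick = Date.now();
    setInterval(function () {
        var now = Date.now();
        if (now - lastTick > 15000) {
            // This interval should fire every 5 s; a much larger gap means
            // the timers were suspended (sleep, heavy tab throttling), so
            // restart whatever polling loop the app runs.
            resumePolling(); // hypothetical: restarts the AJAX loop
        }
        lastTick = now;
    }, 5000);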

If your site requires users to log in, simply set a session timeout of, let's say, 15 minutes. That means that after 15 minutes of idle time the session will be destroyed and the AJAX requests will automatically stop working.
If you do not have a login this becomes difficult, but it can still be achieved via IP tracking or similar mechanisms, though these will never be as foolproof as the first option.

I must agree with Sean in that it's very client-side oriented. If the client stays active, be it through human interaction or not, then the AJAX should keep going.

Related

Notification System - Socket.io or Ajax?

I'm using Laravel 5 and I want to create a notification system for my (web) project. What I want to do is notify the user of new notifications, such as:
another user starts following him,
another user writes on his wall,
another user sends him a message, etc.
(possibly by highlighting an icon on the header with a drop-down menu, the way Stack Overflow does).
I found the new tutorial on Laracasts, Real-time Laravel with Socket.io, where a kind of similar thing is achieved using Node, Redis and Socket.io.
If I choose Socket.io and I have 5000 users online, I assume I will have to make 5000 connections and 5000 broadcasts plus the notifications, so it will generate a large number of requests. And I need to start a connection for every user on login, in the master blade - is that true?
Is it a bad way of doing it? I also think the same thing could be achieved with AJAX requests. Should I avoid using that many continuous AJAX requests?
I want to ask whether Socket.io is a good choice for building such a system, or whether it would be a better approach to use AJAX requests every 5 seconds instead. Or is there any better alternative? Pusher could be an alternative; however, I think a free solution is a better fit in my case.
A few thoughts:
Websockets and Socket.io are two different things.
Socket.io might use Websockets and it might fall back to AJAX (among different options).
Websockets are more web-friendly and resource-efficient, but they require work as far as coding and setup are concerned.
Also, using SSL with Websockets in production is quite important for many reasons, and some browsers require that the SSL certificate be valid... so there could be a price to pay.
Websockets sometimes fail to connect even when supported by the browser (that's one reason using SSL is recommended)... so writing an AJAX fallback for legacy clients or connectivity issues means that coding for Websockets usually doesn't replace the AJAX code (a minimal Socket.io sketch follows these notes).
5000 users polling every 5 seconds means 1000 new connections and requests per second. Some apps can't handle 1000 requests per second. This isn't always the case, but it is a common enough issue.
The more users you have, the closer your AJAX polling gets to acting like a DoS attack.
On the other hand, Websockets are persistent: no new connections are constantly opened, which is a big resource saving, especially considering TCP/IP's slow-start feature (yes, it's a feature, not a bug).
Existing clients shouldn't experience a DoS even when new clients are refused (server design might affect this).
A Heroku dyno should be able to handle 5000 Websocket connections and still have room for more, while still answering regular HTTP requests.
On the other hand, I think Heroku imposes an active-requests and/or backlog limit per dyno (~50 requests each), meaning that if more than a certain number of requests are waiting for a first response, or for your application to accept the connection, new requests will be refused automatically... So you have to make sure you have no more than 100 new requests pending at a time. For 1000 requests per second, you need your concurrency to allow for 100 simultaneous requests at 10 ms per request as a minimal performance state... This might be easy on your local machine, but once network latency kicks in it's quite hard to achieve.
This means it's quite likely that an application which fits on a single Heroku dyno when using Websockets would require a number of dynos when using AJAX polling.
These are just things you might consider when choosing your approach, no matter what gem or framework you use to implement it.
Outsourcing parts of your application, such as push notifications, would require other considerations, such as scalability management (what resources are you saving on?) vs. price, etc.
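To make the comparison concrete, here is a minimal Socket.io sketch of the push approach, not the Laracasts setup itself; the port, the 'user.' room naming, the event names and currentUserId are assumptions, and real code would authenticate the user before joining a room.

    // Server (Node.js):
    var io = require('socket.io')(3000);
    io.on('connection', function (socket) {
        socket.on('subscribe', function (userId) {
            // In a real app, verify the user's identity before this join.
            socket.join('user.' + userId);
        });
    });
    // Elsewhere in the backend, when user B writes on user A's wall:
    // io.to('user.' + userAId).emit('notification', { text: 'New wall post' });

    // Client (e.g. in the master blade):
    var socket = io('http://localhost:3000');
    socket.emit('subscribe', currentUserId); // currentUserId: assumed to exist
    socket.on('notification', function (data) {
        // e.g. highlight the header icon and add the item to the drop-down
        console.log(data.text);
    });

One persistent connection per user replaces a request every few seconds, which is exactly the trade-off discussed above.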

Browser implications when making an AJAX call every 3 seconds

We would like to check every 3 seconds whether there are any updates in our database, using jQuery $.ajax. The technology is clear, but are there any reasons not to fire so many AJAX calls (browser, cache, performance, etc.)? The web application runs for roughly 10 hours per day on every client.
We are using Firefox.
AJAX calls have implications not on the client side (browser, etc.) but on the server side. For example, every AJAX call is a hit on the server: more bandwidth consumption, and the number of server request hits increases, which in turn increases server load, and so on. AJAX is really meant to increase client friendliness at the cost of server-side implications.
You should think carefully before implementing infinite repeating AJAX calls with an arbitrary delay between them. How did you come up with 3 seconds? If you're going to be polling your server in this way, you need to reduce the frequency of requests to as low a number as possible. Here are some things to think about:
Is the data you're fetching really going to change that often?
Can your server handle a request every 3 seconds? How long does the operation take for a single request?
Could you increase the delay after inactivity, or guess from previous server responses how long the next delay should be?
Can you stop the polling completely when the window loses focus, and restart it when it's in the foreground again? (See the sketch after this answer.)
If a user opens the same page in a website 10 times, your server should recognise this and throttle its responses, either using a cookie with a unique value in it (recommended) or based on the client IP address.
Above all, instead of polling, consider using HTML5 WebSockets to "push" data to the client - most modern browsers support this. Several frameworks are available that will fall back to polling if WebSockets are not available - one excellent .NET example is SignalR.
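A minimal polling sketch combining two of the points above (back off while nothing changes, pause while the tab is hidden); the /updates URL, the 3 s base delay and the 60 s cap are assumptions:

    var delay = 3000;
    function poll() {
        if (document.hidden) {
            // Tab is in the background: re-check later without hitting the server.
            setTimeout(poll, delay);
            return;
        }
        $.ajax({ url: '/updates' }).done(function (data) {
            if (data.changed) {
                delay = 3000;                       // activity: back to base rate
            } else {
                delay = Math.min(delay * 2, 60000); // quiet: back off, capped at 60 s
            }
        }).always(function () {
            setTimeout(poll, delay); // schedule the next poll only after this one finishes
        });
    }
    poll();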
I've seen a lot of applications making a request every 5 seconds or so, for instance a remote control (web player) or a chat, so that in itself shouldn't be a problem for the browser.
What would be good practice is to wait for an answer before making a new request; that means not firing the requests with a setInterval, for instance.
(In case the user loses their connection, that would prevent opening too many connections.)
Also verify that all the calculations associated with an answer are done before the next answer is received.
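To illustrate the difference between the two scheduling styles, a short sketch; the '/status' endpoint and render() handler are assumptions:

    // Avoid: setInterval fires on the clock even if the previous request
    // is still pending, so requests pile up on a slow connection.
    // setInterval(function () { $.get('/status', render); }, 5000);

    // Prefer: re-arm the timer only once the answer (success or error)
    // has arrived and been processed, so at most one request is in flight.
    (function poll() {
        $.get('/status').done(render).always(function () {
            setTimeout(poll, 5000);
        });
    })();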
And if you have access to it on the server side, configure your server to send the HTTP header Connection: Keep-Alive, so you won't add too much TCP overhead to each of your requests. That can speed up small requests a lot.
The last point, of course, is verifying that your server is able to answer that many requests.
You are polling for changes every 3 seconds, so traffic increases because you are fetching data at short, continuous intervals. It may also steadily increase memory usage on the browser side. Since you need to check for updates in the database, you could look at alternatives like Sheepjax, Comet or SignalR (SignalR generally broadcasts the data to all users, and Comet needs a license). Hope this helps.

Long running ajax vs polling

I'm trying to decide what the best solution would be for my web application. I have a page that will fire an arbitrary number of AJAX requests to retrieve data from the server. For example, a page may fire 10 AJAX requests on load, and each request may take roughly 10 seconds to return content.
Given that this is a web app in a multi-user, high-concurrency environment, is it a good idea to use a traditional AJAX approach, or would you opt for long polling, such as SignalR?
What are the pros/cons of both approaches (pull vs. push)? Ultimately I'm after the most resource-efficient approach.
Thanks
In your stated example you are talking about a pure 'pull' scenario, i.e. 'when the page loads I want X, Y, Z to happen and then I want to see the results'.
Long polling/websockets (SignalR) are useful for a push scenario, i.e. 'oh look, I have finished running this super long process... I had better tell any users currently connected'.
You can use SignalR to run those normal-style AJAX requests... but you won't get any performance enhancement. The AJAX will run asynchronously and in parallel, and once the server-side process is complete a callback will execute. Perhaps you might get a slight increase in performance, as SignalR keeps a continuous connection open, so you lose the small delay of creating a connection each time. On the flip side, the server will have a large number of open connections, which may degrade performance (especially if you are hitting it with 10 x 10-second computations).
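For the plain pull case, a sketch of firing the requests in parallel and rendering each as it completes; the '/data/' + i endpoints and renderPanel() are assumptions:

    var requests = [];
    for (var i = 0; i < 10; i++) {
        requests.push(
            $.get('/data/' + i).done(function (data) {
                renderPanel(data); // each panel renders as soon as its own request returns
            })
        );
    }
    // If something should happen only once everything is back:
    $.when.apply($, requests).done(function () {
        console.log('all 10 requests finished');
    });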

User closes the browser without logging out

I am developing a social network in ASP.NET MVC 3. Every user must have the ability to see which people are connected.
What is the best way to do this?
I added a flag in the Contact table in my database, and I set it to true when the user logs in and to false when he logs out.
But the problem with this solution is that when the user closes the browser without logging out, he will still appear connected.
The only way to truly know that a user is currently connected is to maintain some sort of connection between the user and the server. Two options immediately come to mind:
Use javascript to periodically call your server using ajax. You would have a special endpoint on your server that would be used to update a "last connected time" status, and you would have a second endpoint for users to poll to see who is online.
Use a websocket to maintain a persistent connection with your server
Option 1 should be fairly easy to implement. The main thing to keep in mind is that this will increase the number of requests coming into your server, and you will have to plan accordingly in order to handle the traffic this could generate. You have some control over the load on your server by configuring how often the JavaScript timer calls back to it.
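A minimal heartbeat sketch for option 1; the /heartbeat endpoint and the 30 s interval are assumptions (the server simply stamps a "last connected time" for the authenticated user on each call):

    setInterval(function () {
        $.post('/heartbeat'); // server records: lastSeen[currentUser] = now
    }, 30000);
    // Server side, treat a user as online if their last heartbeat is recent,
    // e.g. lastSeen within the last 90 s, so a missed ping or two is tolerated.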
Option 2 could be a little more involved if you did this without library support. Of course there are libraries out there such as SignalR that make this really easy to do. This also has an impact on the performance of your site since each user will be maintaining a persistent connection. The advantage with this approach is that it reduces the need for polling like option 1 does. If you use this approach it would also be very easy to push a message to user A that user B has gone offline.
I guess I should also mention a really easy third option: if your site is fairly interactive, you could simply track the last time each user made a request to it. This may not give you enough accuracy to determine whether a user is truly "connected", though.

Session timeout in web applications

The session timeout in web applications typically denotes the idle time - i.e. the period of time when the user doesn't work with the application.
Now, what if there is an automated script that posts a request every 5 minutes - wouldn't that user's session go on endlessly? And if so, won't this approach heavily load the application, affecting its performance in the long run?
Running an automated call to the server, say via an AJAX request, will keep the session alive. Typically that's the point though. An interesting side effect of this is that if the request happens predictably and regularly, you can use it as a "ping" to determine if the user's browser is still open. If one or two pings are missed, you can close the session earlier and actually free up resources sooner than if you just let the session time out.
Yes, and yes.
This is why, if you're going to write an application for the web, you really want to find a way to implement it without server-side sessions. Usually you can implement the same functionality using cookies; then the session data is client-side, so who cares if it stays around permanently?
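A trivial sketch of that idea, keeping a small piece of state in a cookie rather than in a server session; the cookie name, value and lifetime are assumptions (and nothing sensitive should be stored this way unsigned):

    // Persist a preference client-side for 30 days; no server session needed.
    document.cookie = 'theme=dark; path=/; max-age=' + 60 * 60 * 24 * 30;

    // Read it back on the next page load:
    var match = document.cookie.match(/(?:^|;\s*)theme=([^;]*)/);
    var theme = match ? match[1] : 'default';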
I did something similar for an application that relies heavily on session data.
What I did was set the IIS timeout to a relatively low number, say 10 minutes, then have a timed AJAX call that pings a blank page every 5 minutes.
The overhead of this is actually fairly low, as all you are doing is requesting a blank page, and if a person closes their browser, the session ends within 10 minutes.
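A sketch of that timed call; keepalive.aspx (a blank page) and the interval are assumptions, chosen as half the 10 minute IIS timeout:

    // Touch the session every 5 minutes so it survives the 10 minute
    // IIS timeout for as long as the browser stays open.
    setInterval(function () {
        $.get('keepalive.aspx');
    }, 5 * 60 * 1000);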
You want to keep the session as small as possible. That said, if everyone starts doing that, of course it will load your application, with or without sessions. If you think your users are compelled to do that, consider why: either your application is missing an important feature or it is forcing them into something.
Now, regardless of that, if you are expecting lots of users to be active at the same time, so many that a single server won't do, then you will end up moving the session out of process. If the session is stored in SQL Server, it is just saved data, so in that case we wouldn't be talking about memory usage.
Well... I guess "it depends". The first question you should ask yourself is whether you even need session state.
If you have an automated process, my guess is that you don't really need to use session.
In that case, either turn it off or don't worry about it.
I guess your session table would be a little larger, but on the other hand you won't be constantly tearing down and recreating sessions. I don't see how this would "heavily load" the application; I suppose it would depend on the application itself and how much memory is used to maintain session state.
It would allow the user's session to go on endlessly, as long as they have their browser open. If you need to keep a session alive for an extended period of time, you could also track sessions in the DB rather than in memory.
Also, if you are worried about indefinitely open sessions, you could implement a hard timeout from when the session was opened, in addition to the idle-time limit.
