Check availability of resource used by another user - ajax

Building a web application.
Users access shared resources hosted on a server through their browsers. However, if UserA is already using Resource1, Resource1 should not be available to UserB until UserA releases Resource1 or until a given amount of time has elapsed.
For this part, I chose to use a MySQL table with a list of tuples (resource, currentuser) and run a cron task to delete expired tuples.
Now I want to be able to notify UserA that UserB wants to access Resource1, and if I get no answer from UserA, then UserA loses his lock on Resource1 and the resource becomes available to UserB.
For this part, I guess I have to use AJAX. I have thought about the following solution:
The user's browser makes a periodic AJAX call (let's say every minute) to prove he is still alive, and upon a call, if another user has requested the same resource, he has to answer a server challenge within a given amount of time (for example a captcha). If the challenge fails, it means the user is not there anymore (maybe he left his browser open or the webpage unfocused).
The tricky part is: "he has to answer a server challenge within a given amount of time (for example a captcha)". How do I do that?
Am I following the best path?

Yes, what you've outlined is fine. Using AJAX is also completely fine, especially if you're simply polling every minute.
For example, let's say you have the following:
setInterval(function() {
    $.get('/resource/status', function(response) {
        if (response.data.newRequest) {
            // This would signal a new request to the resource
        }
    });
}, 60000);
When handling the new request to access the resource, you could use something like reCaptcha and display it however is appropriate (overlay or inline). When you do this, you could also start a timer to determine whether the allocated amount of time has been exceeded. If it has, you can make another AJAX request and revoke this person's access to the resource, or handle it however you want.
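A minimal sketch of that challenge-with-timeout idea, assuming hypothetical endpoints /resource/challenge (verifies the captcha answer server side) and /resource/release (revokes the lock); the names and showCaptcha() helper are illustrative only:
function startChallenge(timeoutMs) {
    var answered = false;

    // Give the user a limited time to respond before the lock is released.
    var timer = setTimeout(function() {
        if (!answered) {
            $.post('/resource/release'); // no answer in time: drop the lock
        }
    }, timeoutMs);

    // showCaptcha() stands in for whatever UI you use (overlay, inline reCaptcha, ...)
    showCaptcha(function(captchaResponse) {
        answered = true;
        clearTimeout(timer);
        // Let the server verify the captcha and keep or drop the lock accordingly.
        $.post('/resource/challenge', { response: captchaResponse });
    });
}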

I would use WebSockets to control all the users that need to get the resource.
This way you will know who is connected and using the resource, and when he finishes using it you can give the resource to the next user, and so on.
(This way you can also tell each user an estimate of how much time it will take to get the resource and show a progress bar.)
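A rough client-side sketch of that idea, assuming a WebSocket server at a hypothetical wss://example.com/resource-queue that pushes JSON messages such as {type: 'queued', position: 2, etaSeconds: 90} or {type: 'granted'}; the message format and updateProgress() helper are made up for illustration:
var socket = new WebSocket('wss://example.com/resource-queue');

socket.addEventListener('message', function(event) {
    var msg = JSON.parse(event.data);
    if (msg.type === 'granted') {
        // the resource is now ours; enable the UI
    } else if (msg.type === 'queued') {
        // show the user their position and an estimated wait (progress bar, etc.)
        updateProgress(msg.position, msg.etaSeconds);
    }
});

// Tell the server we are done so the next user can take over.
function releaseResource() {
    socket.send(JSON.stringify({ type: 'release' }));
}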

I think there are two problems here.
How to notify users that the resource becomes available?
Periodic AJAX requests might be okay, but you can also consider long polling or WebSockets to get close to notifying waiting users in real time.
How to find out whether the resource is still used by a user?
If you want to catch the moment when a human user is not doing anything on the page, you can track mouse movement/clicking or keyboard presses. If nothing has been done for the last n minutes, the page might be considered inactive.
If you want to make sure that the page is not exploited by automated software, you can ask the user to fill in a captcha every n minutes while the resource is being used.
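A rough sketch of the idle-detection part: reset a timer on any user input and treat the page as inactive after n minutes without activity (the markInactive() body is whatever your app needs, e.g. releasing the lock or requiring a captcha):
var idleMinutes = 5; // n, pick whatever threshold makes sense
var idleTimer;

function markInactive() {
    // e.g. tell the server to release the lock, or require a captcha on return
}

function resetIdleTimer() {
    clearTimeout(idleTimer);
    idleTimer = setTimeout(markInactive, idleMinutes * 60 * 1000);
}

['mousemove', 'mousedown', 'keydown', 'scroll', 'touchstart'].forEach(function(evt) {
    document.addEventListener(evt, resetIdleTimer, true);
});
resetIdleTimer();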

Related

Send notifications from one laravel app to another

I have two different Laravel 5.4 apps: a restaurant menu system to receive and manage orders, and a website from where customers can place their orders. Both apps run on different servers (locally), which means on my (Windows) system I can run only one app at a time (localhost:8000). Both use the same database tables. My question is: how can I notify the restaurant menu system when a user places an order from the website, i.e., when a new row is added to the Orders table in the DB? I need a notification as well as a new row automatically appearing in the table, like here:
Restaurant Menu System. I have tried doing it with jQuery AJAX, but failed, as there is nothing to trigger the AJAX function on the order page. I tried jQuery setInterval() from here, but it seems like a very inefficient way and it also gives the error Uncaught SyntaxError: Invalid or unexpected token. I want it to be as smooth as Facebook notifications. Is there any package or trick to do it?
The website looks just like any other e-commerce website, with a cart and checkout system from where the user can pay and place orders. Any leads are appreciated.
You have two options that I can think of.
One is a technique called comet, which I believe Facebook uses or at least used at one point. It basically opens an AJAX connection with your server; your server will occasionally check whether there are any changes (in your case, new orders) and, when there are, respond to the request appropriately. A very basic version of what that might look like is...
// inside a controller method; the request stays open until a new order shows up
set_time_limit(0); // don't let PHP kill the request after the default time limit

while (true) {
    $order = Order::where('new', 1)->first();
    if ($order !== null) {
        $order->new = 0;
        $order->save();
        return $order;
    }
    sleep(5); // however long you want it to sleep between checks
}
When you open up an AJAX connection to this, it's just going to wait for the server to respond. When an order is made and the server finally does respond, your AJAX function will get a response and you will need to do two things:
Show the order and do whatever other processing you want to do on it
Re-open the connection, which will start the waiting process again (see the client-side sketch below)
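A rough client-side sketch of that loop, assuming the PHP above is exposed at a hypothetical /orders/poll route that returns the new order as JSON (showOrder() is a stand-in for your rendering code):
function pollForOrders() {
    $.get('/orders/poll', function(order) {
        showOrder(order);   // 1. render the new order however you like
        pollForOrders();    // 2. re-open the connection and wait again
    }).fail(function() {
        // if the request times out or errors, wait a moment and try again
        setTimeout(pollForOrders, 5000);
    });
}
pollForOrders();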
The disadvantage to this approach is that it's still basically the setInterval approach, except you've moved that logic to the server. It is more efficient this way because it's a single request instead of many, so maybe that's not a big deal. The advantage is that it's really easy.
The second way is a little bit more work I think but it would be even more efficient.
https://laravel.com/docs/5.4/broadcasting
You'd probably have to set up an event on your orders table so that whenever anything is created there, it would use broadcasting to reach out to whatever JavaScript code you have set up to manage that.
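A sketch of the listening side, assuming broadcasting is configured and an OrderCreated event (hypothetical name) is broadcast on an "orders" channel; Laravel Echo is the usual client, with Pusher or a socket server handling the transport. The addOrderRow() and showNotification() helpers are placeholders for your own UI code:
Echo.channel('orders')
    .listen('OrderCreated', function(e) {
        // e contains whatever the event broadcasts, e.g. the new order row
        addOrderRow(e.order);      // append the row to the orders table
        showNotification(e.order); // pop a notification for the staff
    });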

one session per user or one session for all users

I am curious about the value of PHPSESSID because I created a simple login-type web app. When I try to log in with different accounts, the value of PHPSESSID does not change. I got curious whether that is okay or not, because I tried logging in to YouTube with different accounts too, and their SIDs differ for each user.
My questions are:
1) Is what is happening in my web app okay?
2) If yes, how can I make a session ID per account/user?
3) If no, how can I fix it?
I would really appreciate your suggestions.
It partly depends on exactly how you implemented "login." One way to do it is simply to change the user-identity (which, by definition, is part of the data that is stored in the session), while keeping the same session.
Another equally-valid way to do it is to first update the existing session (to show that the user, in that session, is now "logged off") (maybe...), and then to coin a completely new session-id, thus starting an entirely new session, in which you now "log on."
One advantage of the second approach ... and probably the reason why so many sites do it this way ... has to do with the possibility that the user might wish to open a new browser-window, and to log-in to the application a second time, intending to keep both logins alive at the same time. If the session-id token is part of the URL, or maybe is part of a hidden form or what-have-you, such that both session-id's can be retained independently, it becomes possible for the user to do what he has done without conflict. Two parallel sessions exist. In one, he is logged on as "joe," and in the second, he is logged on as "jeff." And so on. One set of browser-windows (somehow ...) carries the "jeff session" token; others carry the "joe session" token.
Fundamentally, a "session" is just a pool of server-side values, identified by the (PHPSESSID ...) token furnished each time by the client. Exactly how you choose to manage it, is at your discretion. It's a design-decision with no "correct" approach.

Varnish: purge cache every time user hits "like" button

I need to implement like/dislike functionality (for anonymous users, so there is no need to sign up). The problem is that the content is served by Varnish and I need to display the actual number of likes.
I'm wondering how it's done on a website like Stack Overflow. Assuming pages are cached in Varnish (for anonymous users only), every time a user votes on an answer/question, the page needs to be purged from the cache. Am I right? The current number of votes needs to be visible to other users.
What is a good approach in this situation? Should I send a PURGE to Varnish every time a user hits the "like" button?
A common way of implementing this is to handle the like button and display the count client side in JavaScript instead. This largely sidesteps the issue.
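A sketch of that client-side approach, assuming hypothetical uncached endpoints /votes/count and /votes/like that bypass Varnish (e.g. excluded in your VCL); the cached page stays as-is and only the counter is fetched and updated via JavaScript:
function loadLikeCount(answerId) {
    $.get('/votes/count', { id: answerId }, function(response) {
        $('#likes-' + answerId).text(response.count);
    });
}

$('.like-button').on('click', function() {
    var answerId = $(this).data('id');
    $.post('/votes/like', { id: answerId }, function(response) {
        $('#likes-' + answerId).text(response.count); // show the fresh count
    });
});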
Assuming that pressing Like leads to a POST request hitting a single Varnish server, you can make the object be invalidated/replaced in different ways. Using purge and a VCL restart is most likely the better way to do this.
Of course there is a slight race here, where other clients will be served the old page while this is ongoing.

how can I use long polling to automatically refresh a webpage

I am trying to figure out how to use long polling to trigger a webpage refresh (the entire page, as opposed to just a single section). Although it would be nicer to update just part of the page instead of the whole page, I would rather get the initial page-refresh part working first and then move on from there. Having said that, I was wondering if anyone could point me in the right direction as to how I can go about doing this. I have been searching for examples of long polling online, but unfortunately have not been able to find anything similar to this yet. Essentially I would have a webpage which I could remotely refresh using long polling, based on some condition on the server (Apache on Debian). So, for instance, if I had a bash-script-based CGI page that showed AM or PM based on the server time, when the time on the server changes from AM to PM or vice versa, the server would trigger a page refresh on the client side so the CGI page reloads and displays the correct data.
Well, first of all, if you do long-polling requests you need to keep in mind that there will be an open connection to your server for each page that is viewed in the browsers.
That requires that your server infrastructure is able to handle this without huge memory consumption and without running out of free connections to handle the long-polling requests.
I don't assume you use PHP, but it is a good example: if you have Apache with the PHP module, there is on the one hand a limit on the maximum number of connections in the Apache configuration, and on the other hand the whole PHP module is loaded for each connection, which uses a lot of memory if you have many page views. If you use php-fpm as FastCGI, there is also a maximum number of available workers, and you also don't want to increase that number beyond a certain limit.
So generally I would suggest not using long-polling requests for public websites unless you have a good server backend with some proper logic for handling this.
Depending on the requirements, you could think of the following solution, if you know at which intervals the page should check for a refresh:
You could add the attributes data-check-for-refresh-at and data-modified-at to your html node:
<html data-check-for-refresh-at='2013-02-04 12:00:00 GMT' data-modified-at='2013-01-01 12:00:00 GMT'>
Parse this with JavaScript and then do a refresh check at that time, submitting the modified-at time with the request. If the content changed, you respond with the new content and the next time the client should check for updates.
Another important thing is that the client should add a random offset to this refresh time, otherwise you will probably DDOS yourself, because all clients would send a refresh request at the same time.
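A sketch of that client-side check, assuming the data attributes shown above and a hypothetical /check-for-refresh endpoint that answers with whether the content changed; names and response format are illustrative only:
var html = document.documentElement;
var checkAt = new Date(html.getAttribute('data-check-for-refresh-at')).getTime();
var modifiedAt = html.getAttribute('data-modified-at');

// add a random offset so all clients do not hit the server at the same moment
var jitter = Math.random() * 30 * 1000;
var delay = Math.max(0, checkAt - Date.now()) + jitter;

setTimeout(function() {
    $.get('/check-for-refresh', { modifiedAt: modifiedAt }, function(response) {
        if (response.changed) {
            location.reload(); // or swap in response.content without a full reload
        }
    });
}, delay);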
EDIT (Based on comments)
First, a short explanation of how it should be done for a real system:
The server should not use one thread or process per connection; instead it should use an event-driven approach (registering callbacks to be informed when streams are ready to be read or written). Then, if a long-polling request arrives, the server stores the information about which changes the client wants to be informed of. The connection then sleeps; no CPU cycles are wasted for that connection anymore until the client needs to be informed, and the memory usage is quite low. When a URL changes, the server is informed that it should notify all clients that listen for changes of this URL. The server then submits the responses to the clients (a publish/subscribe system). Depending on the number of clients to be notified, the notifications should probably be queued and handled in an intelligent way, so that you get better balancing of the outgoing traffic. With this approach you are more likely to run into the maximum-allowed-open-ports/file-descriptor problem than to have problems with CPU or memory usage.
Of course this is a very simplistic description, but I think it is sufficient to get an idea of how it would be implemented.
Quick&Dirty Solution
It is more pseudo-code than real code, so it would not work with copy and paste; it is also assumed that the server creates the $notificationFile before any long-polling request arrives.
The long-polling request will call a PHP script like this:
set_time_limit(0);

/*
$urlToCheck and $modificationTimeToCheckAgainst should be initialized from the values
sent by the client as parameters of the long-polling request.
$someTime should be the maximum time the long-polling request should be kept alive.
*/
$someTime = 30;
$forceResponseTimeout = microtime(true) + $someTime;
$urlToCheck = "the/url/to/observe.html";
$modificationTimeToCheckAgainst = 1359979200; // a Unix timestamp in seconds (not a date string)

$notificationFile = "./tmp/observer-file-".sha1($urlToCheck);

$responseStatus = "did-not-change";
while (microtime(true) < $forceResponseTimeout) {
    clearstatcache(); // need to clear the stat cache, otherwise we don't get the current modification date (also not the best idea for keeping CPU usage low)
    if (filemtime($notificationFile) > $modificationTimeToCheckAgainst) {
        $responseStatus = "changed";
        break;
    }
    usleep(100000); // sleep 100 ms; this busy-waiting still creates noticeable CPU usage
}

echo $responseStatus; // here some JSON response should be created; the client then knows whether it should resend the long-polling request or do a refresh
The update script should look like this:
$urlThatIsUpdated = "the/url/to/observe.html";
//doing the update of the file
$notificationFile = "./tmp/observer-file-".sha1($urlThatIsUpdated);
touch($notificationFile); //updates the modification time of the notification file, which should be recognized by the script above.
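A rough client-side counterpart for the original question: call the long-polling script and either refresh the page or immediately poll again, depending on the answer. The script name longpoll.php and its parameters are hypothetical, matching the sketch above:
// the modification time (Unix timestamp) the page was rendered with; example value
var lastModifiedAt = 1359979200;

function longPoll() {
    $.get('longpoll.php', { url: location.pathname, modifiedAt: lastModifiedAt })
        .done(function(status) {
            if (status === 'changed') {
                location.reload();       // the condition on the server was met
            } else {
                longPoll();              // "did-not-change": reopen the connection
            }
        })
        .fail(function() {
            setTimeout(longPoll, 5000);  // back off briefly on errors
        });
}
longPoll();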

Do browsers limit AJAX polling rate? What is the limit?

I just read that some browsers would prevent HTTP polling (I guess by limiting the rate of requests)...
From https://github.com/sstrigler/JSJaC:
Note: As security restrictions of most modern browsers prevent HTTP
Polling from being usable anymore this module is disabled by default
now. If you want to compile it in use 'make polling'.
This could explain some misbehavior of some of my JavaScripts (sometimes requests are just not sent or are retried, even if they were actually successful). But I couldn't find further information on the details.
Questions
if it's "max. number of requests n per x seconds", what are the usual/default settings for x and n?
Is there any way good resource for this?
Any way to detect if a request has been "delayed" or "rejected" because of a rate limit?
Thanks for your help...
Stefan
Yes, as far as I am aware there is a default pool limit of 10 and a default request timeout of 30 seconds per request; however, the timeout and poll limits can be controlled, and different browsers implement different limitations!
Check out this Google implementation.
and this is an awesome implementation of catching a timeout error!
You can find the Firefox specifics HERE!
Internet Explorer specifics are controlled from inside the Windows registry.
Also have a look at this question.
Basically, the way you control it is not by changing the browser limitations, but by abiding by them. So you apply a technique called throttling.
Think of it as creating a FIFO/priority queue of functions: a queue structure that takes XHR requests as members and enforces a delay between them is an XHR poll. For instance, I am using JSONP to get data from a node.js server located on another domain, and I am polling, of course, due to browser limitations. Otherwise, I get zero response back from the server, and that is only because of the browser limitations.
I am actually doing a console log for every request that's supposed to be sent, but not all of them are being logged. So the browser limits them.
I'll be even more specific in helping you out. I have a page on my website which is supposed to render a view for tens or even hundreds of articles. You go through them using a cool horizontal slider.
The current value of the slider matches the current 'page'. Since I am only displaying 5 articles per page and I can't exactly load thousands of articles 'onload' without severe performance implications, I load the articles for the current page. I get them from MongoDB by sending a cross-domain request to a Python script.
The script is supposed to return an array of five objects with all the details I need to build the DOM elements for a 'page'. However, there are a couple of issues.
First, the slider works extremely fast, as it's more or less a value change. Even with drag-and-drop functionality, key-down events, etc., the actual change takes milliseconds. However, the code of the slider looks something like this:
goog.events.listen(slider, goog.events.EventType.CHANGE, function() {
    myProject.Articles.page(slider.getValue());
});
The slider.getValue() method returns an int with the current page number, so basically I have to load from:
currentPage * articlesPerPage to (currentPage + 1) * articlesPerPage - 1
But in order to load, I do something like this:
I have a storage engine (think of it as an array):
I check whether the content is already there.
If it is, there is no point in making another request, so I go forward with getting the DOM elements from the array with the already created DOM elements in place.
If it isn't, then I need to get it, so I send the request I was mentioning, which would look something like this (without accounting for browser limitations):
JSONP.send({'action': 'getMeSomeArticles', 'start': start, 'length': itemsPerPage}, function(callback) {
    // now I just parse the callback quickly to make sure it is consistent
    // create DOM elements, and populate the client side storage
    // and update the view for the user.
});
The problem comes from the speed with which you can change that slider. Since every change supposedly triggers a request (the same would happen for normal XHR requests), you are basically exceeding the limitations of all browsers, so without throttling there would be no 'callback' for most of the requests. 'callback' is the JS code returned by the JSONP request (which is more of a remote script inclusion than anything else).
So what I do is push a request to a priority queue, not poll, as now I don't need to send multiple simultaneous requests. If the queue is empty, the recently added member is executed and everyone is happy. If it's not, then all non-completed requests in progress are cancelled and only the last one is executed.
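A simplified sketch of that "only the newest request matters" idea, using plain XHR via $.ajax rather than JSONP (JSONP cannot really be aborted); the /articles endpoint and onLoaded callback are hypothetical stand-ins:
var pendingRequest = null;

function loadPage(start, itemsPerPage, onLoaded) {
    if (pendingRequest) {
        pendingRequest.abort(); // drop the stale request; only the last one counts
    }
    pendingRequest = $.ajax({
        url: '/articles',
        data: { start: start, length: itemsPerPage },
        dataType: 'json'
    }).done(function(articles) {
        pendingRequest = null;
        onLoaded(articles); // build DOM elements, fill the storage engine, update the view
    });
}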
Now in my particular case, I do a binary search (O(log n)) to see whether the storage engine has data for the previous requests yet, which tells me whether the previous request has been completed or not. If it has, it's removed from the queue and the current one is processed; otherwise the new one fires. And so on and so forth.
Again, for speed considerations and poor browser wanna-bes such as Internet Explorer, I do the above-described procedure about 3-4 steps ahead. So I pre-load 20 pages ahead until everything is in the client-side storage engine. This way, every limitation is successfully dealt with.
The cooldown time is covered by the minimum time it would take to slide through 20 pages, and the throttling makes sure there is no more than 1 active request at any given time (with backwards compatibility going as far as Internet Explorer 5).
The reason why I wrote all this is to give you an example, trying to say that you cannot always enforce the delay directly from the FIFO structure, as your calls may need to turn into what a user sees, and you don't exactly want to make a user wait 10-15 seconds for a single page to render.
Also, always minimize the polling and the need to poll (simultaneously fired AJAX events, as not all browsers actually handle them well). For instance, instead of sending one request to get content and another for that content to be tracked as viewed in your app metrics, do as many tasks at the server level as you possibly can!
Of course, you probably want to track your errors properly, so the XHR object from your library of choice implements error handling for AJAX, and because you are an awesome developer you want to make use of it.
So say you have a try-catch block in place.
The scenario is this:
An AJAX call has finished and it's supposed to return JSON, but the call somehow failed. However, you try to parse the JSON and do whatever you need to do with it.
So:
function onAjaxSuccess(ajaxResponse) {
    try {
        var yourObj = JSON.parse(ajaxResponse);
    } catch (err) {
        // Now I've actually seen this on a number of occasions: to log that an error occurred,
        // a lot of developers will attempt to send yet another ajax request to log the
        // failure of the previous one.
        // For these reasons, workers exist.
        myProject.worker.message('preferably a pre-determined error code should go here');
        // Then only the worker should again throttle and poll the ajax requests that log the
        // specific error.
    }
}
While I have seen various implementations that try to fire as many XHR requests at the same time as they possibly can until they hit browser limitations, and then do quite a good job of stalling the ones that haven't fired while waiting for the browser 'cooldown', what I can advise you is to think about the following:
How important is speed for your app?
Just how scalable and how intensive will the I/O be?
If the answer to the first one is 'very' and to the latter 'OMFG modern technology', then try to optimize your code and architecture as much as you can so that you never need to send 10 simultaneous XHR requests. Also, for large-scale apps, multi-thread your processes. The JavaScript way to accomplish that is by using workers. Or you could call the ECMA board, tell them to make this a default, and then post it here so that the rest of us JS devs can enjoy native multi-threading in JS :) (how did they not think about this?!)
Stefan, quick answers below:
-if it's "max. number of requests n per x seconds", what are the usual/default settings for x and n?
This sounds more like a server restriction. The browser ones usually sound like:
-"the maximum requests for the same hostname is x"
-"the maximum connections for ANY hostname is y"
-Is there any good resource for this?
http://www.browserscope.org/?category=network (also hover over table headers to see what is measured)
http://www.stevesouders.com/blog/2008/03/20/roundup-on-parallel-connections
-Any way to detect if a request has been "delayed" or "rejected" because of a rate limit?
You could look at the HTTP headers for "Connection: close" to detect server restrictions, but I am not aware of any way to read such settings from JavaScript across so many browsers in a consistent, browser-independent way. (For Firefox, you could read this: http://support.mozilla.org/en-US/questions/746848)
Hope this quick answer helps?
No, the browser does not in any way limit polling. I think what was meant on that page is the same-origin policy - you can only access the same host and port as your original page.
The only known limitation on connections themselves is that you can usually have only two to four simultaneous connections to the same host.
I've written some apps with long polling, some with a C++ backend with my own webserver, and one with a PHP backend with Apache2.
My long-poll timeout is 4..10 s. When something occurs, or 4..10 s pass, my server returns an empty response. Then the client immediately starts another AJAX request. I found that some browsers hang up when I start an AJAX call from the previous AJAX handler, so I am using setTimeout() with a small value to start the next AJAX request.
When something happens on the client side which should be sent to the server, I use another AJAX request for it, but it's a one-way thing: the server does not send any response, and the client does not process anything. The result of the operation (if any) will be received on the long poll. It requires at most 2 connections to the server, which all browsers support.
Keep in mind that if there are 500 clients, that means 500 server-side webserver threads, which will move together, producing load peaks: when something happens, the server has to report it at the same time to each client, the clients will take about the same time to process it, and they will start the next long request at the same time; from then on, the timeouts will also expire at the same time, and so will the following ones. You can play tricks with a random timeout, say 4 + rnd(0..4), but it's worthless: if anything happens, they will "sync" again, since all the requests have to be served at the same time when something reportable happens.
I've tested it through a router, and it works. I assume routers respect the 4..10 s lag; it's around the speed of a slow webpage (far, far away), so no router thinks it should be cancelled.
My PHP work is a collaborative spreadsheet; it looks amazing when you hit enter and the data updates simultaneously in several browsers. Have fun!
There is no limit on the number of AJAX requests. However, they will be to the same host and port.
The server can limit the number of requests from a machine based on its settings.
For example, a server can be configured so that if there are more than a few requests from the same machine within a specified time, it will reject them.
After a small mistake in JavaScript code, a never-ending loop was created in which each step made 2 AJAX requests. In Firebug I could see more and more requests until Firefox started to slow down, stopped responding, and finally crashed.
So, yes, there is a "limit" ;)
