Why should $time in $lock = Cache::lock('name', $time) be greater than the time the locked code takes to run? - laravel

I placed this code inside a Route::get() method only to test it quicker. So this is how it looks:
use Illuminate\Support\Facades\Cache;

Route::get('/cache', function () {
    $lock = Cache::lock('test', 4);

    if ($lock->get()) {
        Cache::put('name', 'SomeName'.now());
        dump(Cache::get('name'));
        sleep(5);
        // dump('inside get');
    } else {
        dump('locked');
    }

    // $lock->release();
});
If you hit this route from two browsers at (almost) the same time, they both respond with the result of dump(Cache::get('name'));. Shouldn't the second browser's response be "locked"? When it calls $lock->get(), that is supposed to return false, because the lock should still be set when the second browser reaches the route.
The same code works just fine if the code after $lock = Cache::lock('test', 4) takes less than 4 seconds to execute. If you use sleep($sec) with $sec < 4, the first browser to reach this route responds with the result of Cache::get('name') and the second browser responds with "locked", as expected.
Can anyone explain why this is happening? Isn't every get() call on that lock, except the first one, supposed to return false for as long as the lock is set? I used 2 different browsers, but it behaves the same with 2 tabs in the same browser too.

Quote from the 5.6 docs https://laravel.com/docs/5.6/cache#atomic-locks:
To utilize this feature, your application must be using the memcached or redis cache driver as your application's default cache driver. In addition, all servers must be communicating with the same central cache server.
Quote from the 5.8 docs https://laravel.com/docs/5.8/cache#atomic-locks:
To utilize this feature, your application must be using the memcached, dynamodb, or redis cache driver as your application's default cache driver. In addition, all servers must be communicating with the same central cache server.
Quote from the 8.0 docs https://laravel.com/docs/8.x/cache#atomic-locks:
To utilize this feature, your application must be using the memcached, redis, dynamodb, database, file, or array cache driver as your application's default cache driver. In addition, all servers must be communicating with the same central cache server.
Apparently, they have been adding support for more drivers for this lock functionality over time. Check which cache driver you are using and whether it is on the supported list for your Laravel version.
There is likely an atomicity issue here, where the cache driver you are using is not able to lock a file atomically. What should happen is that while a process (i.e. a PHP request) is writing to the lock file, all other processes requiring the lock file should wait at least until the lock file is available to be read again. If not, they read the lock file before it has been written to, which obviously causes a race condition.

Coming back to this question I asked: I can now say that the problem I was trying to solve was not caused by the atomic lock. The problem is the sleep call. If the time passed to sleep is longer than the lock's lifetime, the lock will already have expired (been released) by the time the next request is able to hit the route. To see why, say you have defined a route like this:
Route::get('case/{value}', function ($value) {
    if ($value) {
        dump('hit-1');
    } else {
        sleep(5);
        dump('hit-0');
    }
});
and you open two browser tabs that hit this route, something like:
127.0.0.1:8000/case/0
and
127.0.0.1:8000/case/1
You will see that the first request takes 5 seconds to finish, and even if the second request is sent at almost the same time as the first, it still waits for the first one to finish before it runs. This means the second request lasts the 5 seconds from the first request plus its own execution time.
Back to the original question: the lock will already have expired by the time the second request gets to run the $lock->get() statement.

Related

Is there a way to delay cache revalidation in service worker?

I am currently working on performance improvements for a React-based SPA. Most of the more basic stuff is already done so I started looking into more advanced stuff such as service workers.
The app makes quite a lot of requests on each page (most of the calls are not to REST endpoints but to an endpoint that basically makes different SQL queries to the database, hence the amount of calls). The data in the DB is not updated too often so we have a local cache for the responses, but it's obviously getting lost when a user refreshes a page. This is where I wanted to use the service worker - to keep the responses either in cache store or in IndexedDB (I went with the second option). And, of course, the cache-first approach does not fit here too well as there is still a chance that the data may become stale. So I tried to implement the stale-while-revalidate strategy: fetch the data once, then if the response for a given request is already in cache, return it, but make a real request and update the cache just in case.
I tried the approach from Jake Archibald's offline cookbook, but it seems like the app is still waiting for the real requests to resolve even when there is a cache entry to return (I see those responses in the Network tab).
Basically the sequence seems to be the following: request > cache entry found! > need to update the cache > only then show the data. Doing the update immediately is unnecessary in my case, so I was wondering if there is any way to delay that? Or, alternatively, not to wait for the "real" response to be resolved?
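For reference, the cookbook's stale-while-revalidate recipe against the Cache API looks roughly like this (a sketch for comparison only - the 'dynamic-v1' cache name is arbitrary, and my responses live in IndexedDB rather than the cache store):
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.open('dynamic-v1').then((cache) =>
      cache.match(event.request).then((cached) => {
        const networkFetch = fetch(event.request).then((networkResponse) => {
          cache.put(event.request, networkResponse.clone());
          return networkResponse;
        });
        // Return the cached response immediately if present; the network fetch
        // continues in the background and refreshes the cache for next time.
        return cached || networkFetch;
      })
    )
  );
});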
Here's the code that I currently have (serializeRequest, cachePut and cacheMatch are helper functions that I have to communicate with IndexedDB):
self.addEventListener('fetch', (event) => {
  // some checks to get out of the event handler if certain conditions don't match...
  event.respondWith(
    serializeRequest(request).then((serializedRequest) => {
      return cacheMatch(serializedRequest, db.post_cache).then((response) => {
        const fetchPromise = fetch(request).then((networkResponse) => {
          cachePut(serializedRequest, response.clone(), db.post_cache);
          return networkResponse;
        });
        return response || fetchPromise;
      });
    })
  );
});
Thanks in advance!
EDIT: Can this be due to the fact that I put stuff into IndexedDB instead of the cache store? I am sort of forced to use IndexedDB because those "magic endpoints" are POST instead of GET (they require a body), and POST requests cannot be inserted into the cache...

Directly using require() in Express instead of placing in a variable

I'm building an app with Express and using Passport's Facebook login.
The example application is:
https://github.com/passport/express-4.x-facebook-example/blob/master/server.js
From it, it has come to my attention that I can skip the const/var = require(...) format and call require() directly if I never have to reference the module again. For example, this:
const createError = require('http-errors'),
      session = require('cookie-session');
...
app.use(session({ secret: process.env.cookie_secret, resave: true, saveUninitialized: true }));
app.use(function (req, res, next) {
  next(createError(404));
});
becomes
app.use(require('cookie-session')({ secret: process.env.cookie_secret, resave: true, saveUninitialized: true }));
app.use(function (req, res, next) {
  next(require('http-errors')(404));
});
This works great, and my file is half its length now, but I'm worried about the performance implications of this.
require() is a synchronous operation and blocks the event loop. As such, you generally do not want to ever be doing the first require() of a module in the middle of an actual request handler in a server because that will momentarily block the event loop.
Now, since modules are cached, only the first require() of a given module actually takes any significant time. Nevertheless, it is considered good coding practice to load your dependencies at startup, when synchronous I/O is no big deal, and not at run-time.
If there were any problems with loading dependencies, you probably also want those to be discovered at server startup time, not once the server is already serving customers.
So, I think the answer to your question is yes and no. Yes, it's fine to call require() directly without assigning to variables in your startup code. No, it's not recommended to do so inside a request handler or middleware; it is better to load your dependencies at startup. Now, no great harm comes to your code if you happen to call require() inside a request handler, because only the first call actually loads it from disk and takes very long, but as a general practice it's not the recommended way of coding just to save a variable name somewhere.
Personally, I'd also like to know that once my server has started up, all dependencies have been loaded successfully, so there is no danger of an imperfect install being discovered later, after the server starts serving requests (where it may be less obvious what went wrong and where users would see the consequences).
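To make that concrete, here is a minimal sketch of the conventional style, reusing the middleware from the question (the port number is just an example; nothing module-related happens inside the handler at request time):
// Load dependencies once, at startup, in module scope.
const express = require('express');
const createError = require('http-errors');
const session = require('cookie-session');

const app = express();

app.use(session({ secret: process.env.cookie_secret, resave: true, saveUninitialized: true }));

// The 404 handler only calls a function that was already loaded (and verified) at startup,
// so the event loop is never blocked by module loading during a request.
app.use(function (req, res, next) {
  next(createError(404));
});

app.listen(3000);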
Here's one other thing to consider. JavaScript is moving from require() to import over time, and you cannot use import except at the top level of a module. You can't use it inside a statement.
Summary:
You want to load dependencies at startup so you don't block the event loop during actual processing of requests.
You want to load dependencies at startup so you discover any missing dependencies at server startup and not during server run-time.
Code is generally considered more reader-friendly if dependencies are obvious and easy to see for anyone who works on this module.
In the future, when we are all using import instead of require(), import is only allowed at the top level (see the snippet below).
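For example, the ESM equivalent of loading at startup has to sit at module top level; there is no inline form you can drop into a handler (a sketch, assuming Node's CommonJS interop for the http-errors package):
// ESM: imports must appear at the top level of the module.
import createError from 'http-errors';

// ...later, inside a handler, you simply use the already-imported binding:
// next(createError(404));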

How to prevent AJAX polling keeping Asp.Net sessions alive

We have a ASP.Net application that has been given a 20 minute sliding expiry for the session (and cookie).
However, we have some AJAX that is polling the server for new information. The downside of this of course is that the session will continue indefinitely, as it is being kept alive by the polling calls causing the expiry time to be refreshed. Is there a way of preventing this - i.e. to only allow the session to be refreshed on non-ajax calls?
Turning off sliding expiry is not really an option as this is an application that business users will be using for most of their day between telephone calls.
Other Stack Overflow discussions on this talk about maintaining 2 separate applications (one for authenticated calls, one for unauthenticated). I'm not sure this is an option, as all calls need to be authenticated.
Any ideas?
As this question is old, I am assuming it has been resolved or a workaround implemented. However, I wanted to mention that instead of AJAX polling the server, we have used SignalR, which allows the client to communicate with the server via jQuery and/or the server to notify the client.
Check it out: Learn About ASP.NET SignalR
Add the code below to the controller action you are hitting for polling. You can convert this into an attribute so it can be used everywhere. Removing the cookie means this call will not extend the session timeout:
[HttpPost]
public ActionResult Run()
{
    // Strip the forms-auth cookie from the response so this polling call does not slide the expiry.
    Response.Cookies.Remove(FormsAuthentication.FormsCookieName);
    return Json("");
}
There is no way to stop the ajax from keeping the session and cookies alive!
However, there is a way to achieve what you want, if the process I describe below is acceptable to you.
I think what you really want is, first, to keep refreshing your page with AJAX so that some processes stay active and running, and second, to know when the user has stopped operating the application.
If that is what you want, there is a simple process to achieve it (a client-side sketch follows the steps below):
You will have your AJAX running for the things you want to run.
Instead of relying on the session to tell whether the user has stopped operating the page, manage this with a variable.
The variable can be a global or class variable that is reset to its initial value whenever the user clicks an element on the page
(bind the click event of an element and reset the variable there).
Increment the variable at a regular interval (say, every time your AJAX runs).
Also have a function/method that checks whether the variable has exceeded the limit you set. This can run every time your AJAX runs, or on its own timer.
If the value of the variable is greater than the limit, invalidate/clear the session and log the user out.
This way, if the user stops operating the page (clicking elements), any page running this will eventually log out the current user and stop the program.
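A minimal client-side sketch of those steps; the /poll and /logout URLs, the 10-second interval and the 20-minute limit are placeholders, not part of the original answer:
var idleTicks = 0;      // incremented on every poll, reset on user activity
var IDLE_LIMIT = 120;   // 120 ticks * 10 s = 20 minutes of inactivity

// Reset the counter whenever the user clicks anything on the page.
$(document).on('click', function () {
    idleTicks = 0;
});

setInterval(function () {
    idleTicks++;

    if (idleTicks > IDLE_LIMIT) {
        // User has been idle too long: end the session instead of keeping it alive.
        window.location.href = '/logout';
        return;
    }

    // The normal polling call that (unavoidably) keeps the ASP.NET session alive.
    $.post('/poll');
}, 10000);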
I have done this by creating a hidden page in an i-Frame. Then using JavaScript it posts back every 18 minutes to keep the session alive. This works really well.
This example is from a ASP.NET Forms project but could be tweaked for MVC.
Create a page called KeepSessionAlive and add a meta refresh tag:
<meta id="MetaRefresh" http-equiv="refresh" content="21600;url=KeepSessionAlive.aspx">
In the code behind
protected string WindowStatusText = "";

protected void Page_Load(object sender, EventArgs e)
{
    //string RefreshValue = Convert.ToString((Session.Timeout * 60) - 60);
    string RefreshValue = Convert.ToString((Session.Timeout * 60) - 90);

    // Refresh this page 90 seconds before session timeout, effectively resetting the session timeout counter.
    MetaRefresh.Attributes["content"] = RefreshValue + ";url=KeepSessionAlive.aspx?q=" + DateTime.Now.Ticks;

    WindowStatusText = "Last refresh " + DateTime.Now.ToShortDateString() + " " + DateTime.Now.ToShortTimeString();
}
Add the hidden iframe to a master page:
<iframe id="KeepAliveFrame" src="KeepSessionAlive.aspx" frameborder="0" width="0" height="0"></iframe>

Problem with Boost Asio asynchronous connection using C++ in Windows

Using MS Visual Studio 2008 C++ on 32-bit Windows (XP), I am trying to construct a POP3 client managed from a modeless dialog box.
The first step is to create a persistent object - say pop3 - with all the Boost.Asio machinery for asynchronous connections, in the WM_INITDIALOG message of the dialog box procedure. Something like:
case WM_INITDIALOG:
    return (iniPop3Dlg (hDlg, lParam));
Here we assume that iniPop3Dlg() creates the pop3 object on the heap - say, pointed to by pop3p - then connects to the remote server and initiates a session with the client's id and password (the USER and PASS commands). At this point we assume the server is in the TRANSACTION state.
Then, in response to some user input, the dialog box procedure calls the appropriate function, say:
case IDS_TOTAL:   // get how many emails are on the server
    total (pop3p);
    return FALSE;

case IDS_DETAIL:  // get date, sender and subject for each email on the server
    detail (pop3p);
    return FALSE;
Note that total() uses POP3's STAT command to get how many emails are on the server, while detail() uses two commands consecutively: first STAT to get the total, and then a loop with the GET command to retrieve the content of each message.
As an aside: detail() and total() share the same subroutines - the STAT handling routine - and when finished, both leave the session as-is, that is, without closing the connection; the socket remains open and the server stays in the TRANSACTION state.
When either option is selected for the first time, things run as expected and the desired results are obtained. But on the second attempt, the connection hangs.
A closer inspection shows that the first time the statement
socket_.get_io_service().run();
is used, it never returns.
Note that all asynchronous write and read routines use the same io_service, and each routine calls socket_.get_io_service().reset() prior to any run().
Note also that all R/W operations use the same timer, which is reset to a zero wait after each operation completes:
dTimer_.expires_from_now (boost::posix_time::seconds(0));
I suspect that the problem lies in the io_service or in the timer, and in the fact that subsequent executions occur in a different invocation of the routine.
As a first approach to my problem, I hope someone can shed some light on it before I give a more detailed exposition of the - very few and simple - routines involved.
Have you looked at the Asio examples and studied them? There are several asynchronous examples that should help you understand the basic control flow. Pay particular attention to the main event loop started by invoking io_service::run; it's important to understand that control is not expected to return to the caller until the io_service has no more work to do.

How to Track the Online Status of Users of my WebSite?

I want to track users that are online at the moment. The definition of being online is that they are on the index page of the website, which has the chat function.
So far, all I can think of is setting a cookie for the user and, when the cookie is found on the next visit, an ajax call is made to update a table with their username, their status online and the time.
Now my actual question is: how can I reliably turn their status to off when they leave the website? The only thing I can think of is to set a predetermined amount of time of no user interaction and then set the status to off.
But what I really want is to keep the status on as long as they are on the site, with or without interaction, and only go to off when they leave the site.
Full Solution. Start-to-finish.
If you only want this working on the index.php page, you could send updates to the server asynchronously (AJAX-style) alerting the server that $_SESSION["userid"] is still online.
setInterval("update()", 10000); // Update every 10 seconds
function update() {
$.post("update.php"); // Sends request to update.php
}
Your update.php file would have a bit of code like this:
session_start();
if ($_SESSION["userid"])
updateUserStatus($_SESSION["userid"]);
This all assumes that you store your userid as a session-variable when users login to your website. The updateUserStatus() function is just a simple query, like the following:
UPDATE users
SET lastActiveTime = NOW()
WHERE userid = $userid
So that takes care of your storage. Now to retrieve the list of users who are "online." For this, you'll want another jQuery-call, and another setInterval() call:
setInterval("getList()", 10000) // Get users-online every 10 seconds
function getList() {
$.post("getList.php", function(list) {
$("listBox").html(list);
});
}
This function requests a bit of HTML from the server every 10 seconds. The getList.php page would look like this:
session_start();

if (!$_SESSION["userid"])
    die; // Don't give the list to anybody not logged in

$users = getOnlineUsers(); /* Gets all users with lastActiveTime within the last 1 minute */

$output = "<ul>";
foreach ($users as $user) {
    $output .= "<li>" . $user["userName"] . "</li>";
}
$output .= "</ul>";

print $output;
That would output the following HTML:
<ul>
<li>Jonathan Sampson</li>
<li>Paolo Bergantino</li>
<li>John Skeet</li>
</ul>
That list is included in your jQuery variable named "list." Look back up into our last jQuery block and you'll see it there.
jQuery will take this list, and place it within a div having the classname of "listBox."
<div class="listBox"></div>
Hope this gets you going.
In the general case, there's no way to know when a user leaves your page.
But you can do things behind the scenes so that they load something from your server frequently while they're on the page, e.g. by loading an <iframe> with some content that reloads every minute:
<meta http-equiv="refresh" content="60">
That will cause some small extra server load, but it will do what you want (if not to the second).
Well, how does the chat function work? Is it an ajax-based chat system?
Ajax-based chat systems work by the clients consistently hitting the chat server to see if there are any new messages in queue. If this is the case, you can update the user's online status either in a cookie or a PHP Session (assuming you are using PHP, of course). Then you can set the online timeout to be something slightly longer than the update frequency.
That is, if your chat system typically requests new messages from the server every 5 seconds, then you can assume that any user who hasn't sent a request for 10-15 seconds is no longer on the chat page.
If you are not using an ajax-based chat system (maybe Java or something), then you can still accomplish the same thing by adding an ajax request that goes out to the server periodically to establish whether or not the user is online.
I would not suggest storing this online status information in a database. Querying the database every couple of seconds to see who is online and who isn't is very resource intensive, especially if this is a large site. You should cache this information and operate on the cache (very fast) vs. the database (very slow by comparison).
The question is tagged as "jquery" - what about a javascript solution? Instead of meta/refresh you could use window.setInterval(), perform an ajax-request and provide something "useful" like e.g. an updated "who's online" list (if you consider that useful ;-))
I have not tried this, so take it with a grain of salt: set an event handler for window.onunload that notifies the server when the user leaves the page (a rough sketch is below). Some problems with this are: 1) the event won't fire if the browser or computer crashes, and 2) if the user has two instances of the index page open and closes one, they will appear to log out unless you implement reference counting. On its own this is not robust, but combined with Jonathan's polling method, it should allow you to have pretty good response time and larger intervals between updates.
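A rough sketch of that idea; the setOffline.php endpoint is a hypothetical page that marks the current session's user as offline:
window.onunload = function () {
    // Asynchronous requests are often cancelled while the page is being torn down,
    // so use a synchronous request here (or navigator.sendBeacon where available).
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "setOffline.php", false); // false = synchronous
    xhr.send();
};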
The ultimate solution would be implementing something with websockets.
