I am using https://github.com/JosephSilber/page-cache to cache pages. To prepare pages beforehand (about 100,000), I used to run 8 HTTP requests in parallel via GuzzleHttp. It worked, but was pretty slow because of the HTTP overhead.
I am looking for a way to process an instance of Illuminate\Http\Request directly via the app instance, avoiding a real HTTP request. I noticed that this is much faster. However, parallelizing this with https://github.com/amphp/parallel-functions poses some problems.
The basic code is this:
wait(parallelMap($urlChunks->all(), function($urls) {
    foreach($urls as $url) {
        // handle the request
    }
}, $pool));
I tried several variants for handling the request.
1.
$request = \Illuminate\Http\Request::create($url, 'GET');
$response = app()->handle($request);
In this case app() returns an instance of Illuminate\Container\Container, not the application instance, so it does not have the handle() method, and so on.
2.
$request = \Illuminate\Http\Request::create($url, 'GET');
$response = $app->handle($request);
The only difference here: the variable $app was injected into the closure. Its value is the correct return value of app() called outside the closure. It is the application, but amp fails because the PDO connections held by the Application instance cannot be serialized.
3.
$request = \Illuminate\Http\Request::create($url, 'GET');
$app = require __DIR__.'/../../../bootstrap/app.php';
$app->handle($request);
This works for a short while. But with each instantiation of the app, one or two MySQL connections start to linger around in status "Sleep". They only get closed when the script ends. Important: this has nothing to do with the parallelization. I tried the same thing in a sequential loop and noticed the same effect. This looks like a bug in the framework to me, because one would expect the Application instance to close all of its connections when it is destroyed. Or can I do this manually? That would be one way to get this thing to work.
Any ideas?
The third version is my recommended way to do it. Resources in PHP are usually cleaned up when PHP exits, but that doesn't work for long-running applications. To change that, I'd file an issue in the Laravel repository or with whatever is creating that database connection.
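Until that is fixed, you may be able to close the connections yourself after each handled request. A minimal sketch, assuming the standard 'db' container binding (the database manager) and the bootstrap approach from variant 3:

$request = \Illuminate\Http\Request::create($url, 'GET');
$app = require __DIR__.'/../../../bootstrap/app.php';
$app->handle($request);

// Explicitly close every connection the database manager opened,
// so nothing lingers in "Sleep" until the script exits.
$db = $app->make('db');
foreach (array_keys($db->getConnections()) as $name) {
    $db->disconnect($name);
}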
Related
I placed this code inside a Route::get() method only to test it quicker. So this is how it looks:
use Illuminate\Support\Facades\Cache;

Route::get('/cache', function(){
    $lock = Cache::lock('test', 4);
    if($lock->get()){
        Cache::put('name', 'SomeName'.now());
        dump(Cache::get('name'));
        sleep(5);
        // dump('inside get');
    }else{
        dump('locked');
    }
    // $lock->release();
});
If you reach this route from two browsers (almost) at the same time, they both respond with the result of dump(Cache::get('name')). Shouldn't the second browser's response be "locked"? When it calls $lock->get(), that should return false, because the lock should still be held when the second browser reaches the route.
The same code works just fine if the code after $lock = Cache::lock('test', 4) takes less than 4 seconds to execute. If you use sleep($sec) with $sec < 4, the first browser to reach the route responds with the result of Cache::get('name') and the second responds with "locked", as expected.
Can anyone explain why this is happening? Isn't every get() call on that lock, except the first one, supposed to return false for as long as the lock is held? I used 2 different browsers, but it works the same with 2 tabs of the same browser.
Quote from the 5.6 docs https://laravel.com/docs/5.6/cache#atomic-locks:
To utilize this feature, your application must be using the memcached or redis cache driver as your application's default cache driver. In addition, all servers must be communicating with the same central cache server.
Quote from the 5.8 docs https://laravel.com/docs/5.8/cache#atomic-locks:
To utilize this feature, your application must be using the memcached, dynamodb, or redis cache driver as your application's default cache driver. In addition, all servers must be communicating with the same central cache server.
Quote from the 8.0 docs https://laravel.com/docs/8.x/cache#atomic-locks:
To utilize this feature, your application must be using the memcached, redis, dynamodb, database, file, or array cache driver as your application's default cache driver. In addition, all servers must be communicating with the same central cache server.
Apparently, they have been adding lock support for more drivers over time. Check which cache driver you are using and whether it is in the supported list for your Laravel version.
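A quick way to verify which driver is actually in use (config('cache.default') is the standard config key, usually set via the CACHE_DRIVER variable in .env):

// e.g. in a route closure or a tinker session
dump(config('cache.default')); // "file", "redis", "memcached", ...

Compare the result against the list quoted above for your Laravel version.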
There is likely an atomicity issue here: the cache driver you are using is not able to acquire the lock atomically. What should happen is that while one process (i.e. a PHP request) is writing the lock file, all other processes that want the lock should wait at least until the lock file is available to be read again. If not, they read the lock file before it has been written to, which obviously causes a race condition.
Coming back to this question I asked: I can now say that the problem I was trying to solve here was not caused by the atomic lock. The problem is the sleep() call. If the time given to sleep() is longer than the lock's lifetime, the lock will already have expired (been released) by the time the next request is able to hit the route. That is because the requests are processed one after the other. To see this, say you have defined a route like this:
Route::get('case/{value}', function($value){
    if($value){
        dump('hit-1');
    }else{
        sleep(5);
        dump('hit-0');
    }
});
Then you open two browser tabs with URLs that hit this route, something like:
127.0.0.1:8000/case/0
and
127.0.0.1:8000/case/1
You will see that the first request takes 5 seconds to finish, and even though the second request is sent almost at the same time as the first, it still waits for the first one to finish before it runs. This means the second request takes the 5 seconds of the first request plus its own run time.
Back to the original question: the lock will already have expired by the time the second request gets to run, i.e. by the time it executes the $lock->get() statement.
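So if the lock is meant to cover the whole critical section, its lifetime has to be at least as long as the work it protects. A minimal sketch of the route from the question with the numbers adjusted (10 and 5 are arbitrary values):

Route::get('/cache', function(){
    // The lock lives 10s, longer than the 5s of work it protects
    $lock = Cache::lock('test', 10);

    if($lock->get()){
        Cache::put('name', 'SomeName'.now());
        dump(Cache::get('name'));
        sleep(5);
        $lock->release();
    }else{
        dump('locked');
    }
});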
So I have a Laravel 5.2 project where Redis is used as the cache driver.
There is a controller method that connects to Redis, increments a value, and adds the value to a set each time the method is called, just like this:
$redis = Redis::connection();
$redis->incr($value);
$redis->sadd("set", $value);
But the problem is that sometimes there are many connections and many calls to this method at the same time, and there is data loss: if two callers call this method while $value is 2, after the incr it becomes 3, but it should be 4 (after two incrs, basically).
I have thought about using Redis transactions, but I can't figure out when I should issue the MULTI command to start queuing commands and when to EXEC them.
I also had the idea of collecting all the incrs and sadds as strings in another set and then applying them with a cron job, but that would cost too much RAM.
So, any suggestions on how this data loss can be avoided?
Laravel uses Predis as its Redis driver.
To execute a transaction with Predis, you invoke the driver's transaction method and give it a callback:
$responses = $redis->transaction(function ($tx) use ($value) {
    // Call the commands on the transaction object, not on $redis
    $tx->incr($value);
    $tx->sadd("set", $value);
});
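A note on the design: the commands called on $tx are only queued by the client and are sent to Redis in a single MULTI/EXEC block, so no commands from other clients can be interleaved between the incr and the sadd. $responses then holds the reply of each queued command, in order.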
I have simple authentication for users. In UserController I have a function called postLogin():
public function postLogin()
{
    if(Auth::attempt($credentials))
    {
        return Redirect::intended('desk')->with('stream', "SomeData");
    }
}
With the above code I am able to log in successfully with the "SomeData" value, which I retrieve via:
<?php
$class = Session::get('stream');
var_dump($class);
?>
The first time it goes to the "/desk" URL it dumps the value perfectly fine, that is "SomeData", but once I refresh the page the session value is reset and it turns to null.
How do I keep this value until the user logs out?
From the official Laravel documentation:
Flash Data
Sometimes you may wish to store items in the session only for the next
request. You may do so using the flash method. Data stored in the
session using this method will only be available during the subsequent
HTTP request, and then will be deleted. Flash data is primarily useful
for short-lived status messages:
$request->session()->flash('status', 'Task was successful!');
If you need to keep your flash data around for even more requests, you
may use the reflash method, which will keep all of the flash data
around for an additional request. If you only need to keep specific
flash data around, you may use the keep method:
$request->session()->reflash();
$request->session()->keep(['username', 'email']);
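So with('stream', ...) only flashes the value for the next request. If the value should survive until logout, a minimal sketch is to store it with put() instead (using the Session facade; the rest mirrors the question's code):

public function postLogin()
{
    if(Auth::attempt($credentials))
    {
        // put() persists for the whole session, unlike with()/flash()
        Session::put('stream', 'SomeData');

        return Redirect::intended('desk');
    }
}

The value then stays in the session until you remove it, e.g. with Session::forget('stream') on logout.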
I'm trying to use the Laravel IoC container by creating a singleton object. I'm following the pattern from the tutorial, as below. I put a log message into the object's constructor (FooBar in this example), and I can see that the object is created every time I refresh the page in the browser. How is the singleton pattern meant to work in Laravel's IoC? I understood that it's a shared object for the entire application, but it's obviously being created every time it's requested via App::make(...). Can someone explain, please? I was thinking of using the singleton pattern to maintain a shared MongoDB connection.
App::singleton('foo', function()
{
    return new FooBar;
});
What has been said in the Laravel docs:
Sometimes, you may wish to bind something into the container that
should only be resolved once, and the same instance should be returned
on subsequent calls into the container:
This is how you bind a singleton object, and you did it right:
App::singleton('foo', function()
{
    return new FooBar;
});
But the problem is that you are thinking about the whole request/response process in the wrong way. You mentioned that:
I can see that object is being created every time I refresh page in
browser.
Well, this is the normal behaviour of an HTTP request: every time you refresh the page, you send a new request, and every time the application boots up and processes the request you've sent. Once the application has sent the response to your browser, its job is finished and nothing is kept on the server (sessions and cookies are persistent, but that is a different mechanism).
Now, it has been said that the same instance should be returned on subsequent calls into the container. Here, subsequent calls means: if you call App::make(...) several times within the same request, in a single life cycle of the application, it won't make a new instance every time. For example, suppose you bind the singleton in a before filter, like this:
App::before(function($request)
{
    App::singleton('myApp', function(){ ... });
});
In the same request, you call it a first time in your controller:
class HomeController {
    public function showWelcome()
    {
        App::make('myApp'); // new instance will be returned
        // ...
    }
}
And you call it a second time in an after filter:
App::after(function($request, $response)
{
    App::make('myApp'); // Application will check for an instance and, if found, it'll be returned
});
In this case, both calls happen in the same request. Because the binding is a singleton, the container builds only one instance on the first call, keeps that instance, and returns the same one on subsequent calls.
It is meant to be used multiple times throughout one instance of the application. Each time you refresh the page, it's a new instance of the application.
Check this out for more info and practical usage: http://codehappy.daylerees.com/ioc-container
It's written for L3, but the same applies for L4.
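So within one request, a shared MongoDB connection fits the container well. A minimal sketch (the binding name is arbitrary, and MongoClient is assumed to be the connection class provided by your MongoDB PHP extension):

App::singleton('mongo', function()
{
    // Built once per request life cycle; later make() calls reuse it
    return new MongoClient('mongodb://localhost:27017');
});

// Anywhere later in the same request:
$mongo = App::make('mongo'); // same instance every time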
Both QWebFrame and QWebPage have a void loadFinished(bool ok) signal, which can be used to detect when a web page has loaded completely. The problem is when a web page loads some content asynchronously (AJAX). How can you know when the page is completely loaded in that case?
I haven't actually done this, but I think you may be able to achieve your solution using QNetworkAccessManager.
You can get the QNetworkAccessManager from your QWebPage using the networkAccessManager() function. QNetworkAccessManager has a finished(QNetworkReply *reply) signal, which is emitted whenever a request made by the QWebPage instance finishes.
The finished signal gives you a QNetworkReply instance, from which you can get a copy of the original request that was made, in order to identify it.
So, create a slot attached to the finished signal, and use the passed-in QNetworkReply's methods to figure out which file has just finished downloading; if it's your AJAX request, do whatever processing you need to do.
My only caveat is that I've never done this before, so I'm not 100% sure that it would work.
Another alternative might be to use QWebFrame's methods to insert objects into the page's object model and also insert some JavaScript which then notifies your object when the Ajax request is complete. This is a slightly hackier way of doing it, but should definitely work.
EDIT:
The second option seems better to me. The workflow is as follows:
Attach a slot to the QWebFrame::javaScriptWindowObjectCleared() signal. At this point, call QWebFrame::evaluateJavaScript() to add code similar to the following:
window.onload = function() { /* page has fully loaded */ };
Put whatever code you need in that function. You might want to add a QObject to the page via QWebFrame::addToJavaScriptWindowObject() and then call a function on that object. This code will only execute when the page is fully loaded.
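For example, in PyQt4 (a sketch; webView, LoadWatcher, and pageFullyLoaded are placeholder names):

from PyQt4.QtCore import QObject, pyqtSlot

class LoadWatcher(QObject):
    @pyqtSlot()
    def pageFullyLoaded(self):
        print "window.onload fired - page fully loaded"

watcher = LoadWatcher()  # keep a reference so it isn't garbage collected
frame = webView.page().mainFrame()

def inject():
    # Re-expose the object and re-install the handler on every new page
    frame.addToJavaScriptWindowObject("loadWatcher", watcher)
    frame.evaluateJavaScript(
        "window.onload = function() { loadWatcher.pageFullyLoaded(); };")

frame.javaScriptWindowObjectCleared.connect(inject)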
Hopefully this answers the question!
To check for the load of a specific element you can use a QTimer. Something like this in Python:
@pyqtSlot()
def on_webView_loadFinished(self):
    self.tObject = QTimer()
    self.tObject.setInterval(1000)
    self.tObject.setSingleShot(True)
    self.tObject.timeout.connect(self.on_tObject_timeout)
    self.tObject.start()

@pyqtSlot()
def on_tObject_timeout(self):
    dElement = self.webView.page().currentFrame().documentElement()
    element = dElement.findFirst("css selector")
    if element.isNull():
        self.tObject.start()
    else:
        print "Page loaded"
When your initial HTML/images/etc. finish loading, that's it: the page is completely loaded. This fact doesn't change if you then decide to use some JavaScript to fetch extra data, page views or whatever after the fact.
That said, what I suspect you want to do here is expose a QtScript object/interface to your view that you can invoke from your page's script, effectively providing a "callback" into your C++ once you've decided (from the page script) that the page has "completely loaded".
Hope this helps give you a direction to try...
The OP thought it was due to delayed AJAX requests, but there could also be another reason, one that also explains why a very short time delay fixes the problem. There is a bug that causes the described behaviour:
https://bugreports.qt-project.org/browse/QTBUG-37377
To work around this problem, the loadFinished() signal must be connected using a queued connection.
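In PyQt4 that might look like this (a sketch, matching the earlier Python example; assumes a QWebView named webView):

from PyQt4.QtCore import Qt

# Queue the slot invocation instead of running it synchronously
self.webView.loadFinished.connect(self.on_webView_loadFinished,
                                  Qt.QueuedConnection)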