After moving from an AWS Linux 2/Apache server to Ubuntu 14.2/NGINX, Laravel controller output is being cached.
When I update code in a controller and save it, it consistently takes 30+ seconds before the changes are reflected in the browser. I created a test function in a controller that includes a random string, to see whether the web output from the controller was being cached or the controller code itself. When refreshing the page in rapid succession the random string always changes, but if I update a static output it does not change.
Here are the steps:
1. Call function below from a route to the controller where it lives.
public static function test_function() {
    echo(str_random(32));
    echo('<br>Line 1');
    //echo('<br>Line 2');
}
Browser output:
d3SomhsJ0KfUKgvd1aSwwzI3d0y8w0Zx
Line 1
2. Uncomment echo('<br>Line 2'); and save file.
Refreshed output:
6BZCh9xbvvYFUz1uOQP8wCyDQxfrFblU
Line 1
3. After 30 seconds
Refreshed output:
xpKXULHxWmTIrcneESKEJHDn4AO3HthV
Line 1
Line 2
Sometimes it takes closer to 40 seconds.
Making updates to controllers and testing is now taking forever.
Here's what I've tried to fix the problem, to no avail:
Ran php artisan cache:clear and php artisan route:clear.
Changed the opcache.revalidate_freq=30 value to 0 in php.ini, then restarted nginx.
Help!
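One thing worth noting: opcache lives inside the PHP-FPM worker processes, so restarting nginx alone does not flush it; the FPM service has to be restarted for php.ini changes to take effect. A development-oriented opcache configuration (a sketch; adjust paths and service names to your setup) would be:

```ini
; php.ini (development): make opcache pick up file changes immediately
opcache.enable=1
; timestamp validation must stay on, otherwise revalidate_freq is ignored
opcache.validate_timestamps=1
; check file timestamps on every request
opcache.revalidate_freq=0
```

After editing, restart PHP-FPM (for example `sudo service php5-fpm restart` on Ubuntu 14.x), not just nginx.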
Longtime D7 user, first time with D9. I am writing my first custom module and having a devil of a time. My routing calls a controller that simply does this:
\Drupal::service('page_cache_kill_switch')->trigger();
die("hello A - ". rand());
I can refresh the page over and over and get a new random number each
time. But, when I change the code to:
\Drupal::service('page_cache_kill_switch')->trigger();
die("hello B - ". rand());
I still get "hello A 34234234" for several minutes. Clearing the cache doesn't help; all I can do is wait, normally about two minutes. I am at my wits' end.
I thought it might be an issue with my Docker instance, so I generated a simple HTML file, but if I edit and then reload that file, changes are reflected immediately.
In my settings.local.php I have disabled the render cache, caching for migrations, Internal Page Cache, and Dynamic Page Cache.
In my mymod.routing.yml I have:
  options:
    _admin_route: TRUE
    no_cache: TRUE
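For context, a complete route entry with those options might look like the following (the route name, path, and controller class here are placeholders, not taken from the original module):

```yaml
# mymod.routing.yml (sketch; names and paths are placeholders)
mymod.test:
  path: '/mymod/test'
  defaults:
    _controller: '\Drupal\mymod\Controller\TestController::build'
    _title: 'Test'
  requirements:
    _permission: 'access content'
  options:
    _admin_route: TRUE
    no_cache: TRUE
```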
Any hint on what I am missing would be deeply appreciated.
thanks,
summer
I have a Yii2-based web project. Recently I've written some REST APIs for it.
I've realized that every REST API call has a very long response time:
1134ms, 1250ms, 1034ms, etc., so basically the average response time is above 1 second.
The Client model has no relations (the client table is a 'standalone' table).
My test table (client) contains 173 records (1 row has 10 columns). I debugged the problem and marked the related line:
...
$client_id = Yii::$app->request->post('client_id');
// check client_id (e.g. whether the mobile client is blocked)
if (!empty($client_id)) {
$client = Client::findOne($client_id); // <----- the slow line
...
So far I haven't configured any cache components, because I don't think a table with 173 records requires that.
Without the mentioned findOne() line the response time averages 30ms.
The environment:
PHP 7
MySQL 5.5
Yii 2
What could the problem be? Something in the configuration? I developed another project with Yii 1.1 a few years ago, and I don't remember this kind of problem there.
Thank you.
UPDATE #1:
UPDATE #2:
I've noticed that every ActiveRecord-related operation takes about 1 second to finish (not just Client-related operations): getting 10 items for a gridview, updating 1 record, etc.
UPDATE #3:
OK, something strange is happening. I've created a very simple action, which also requires ~1 second to render:
public function actionTest() {
echo "OK";
}
The login page requires on average 32ms to load.
OK, after half a day of searching I found the problem and the solution.
After I managed to get the Yii debug toolbar running (~2 hours :) ) I could see where the time was being spent, and after spending another few hours, I found the solution.
I had to replace this config line:
'dsn' => 'mysql:host=localhost;dbname=test',
to
'dsn' => 'mysql:host=127.0.0.1;dbname=test',
Maybe MySQL is not listening on IPv6 sockets, or some other configuration causes this problem, but now the average response time is ~53ms.
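For reference, the change sits in the db component of the application config; a minimal sketch (credentials and database name are placeholders):

```php
<?php
// config/db.php (sketch; credentials are placeholders)
return [
    'class' => 'yii\db\Connection',
    // an IP literal avoids the slow hostname resolution that 'localhost'
    // can trigger (e.g. an IPv6 attempt that has to time out first)
    'dsn' => 'mysql:host=127.0.0.1;dbname=test',
    'username' => 'dbuser',
    'password' => 'secret',
    'charset' => 'utf8',
];
```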
I've been running some benchmark tests to find out why my application was running tremendously slowly. Our application runs on an EC2 m3 instance with a MySQL database on RDS. At first I thought it had something to do with RDS or a bad configuration, but as I started putting time checks in the code I came to the conclusion that, as optimized as my code was, apparently the Laravel kernel itself was taking a long time to execute.
In one of my main controllers the average execution time for all the code within the controller was around 175-200ms.
However, the page took an excruciating 1.3 seconds to load! There was definitely nothing wrong in the controller code, so I figured something else must be causing the issue. I benchmarked the base code in the index.php file in the public directory of the Laravel application and found that creating the Illuminate\Contracts\Http\Kernel object and getting/sending the response alone took 1120ms!
<?php
require __DIR__.'/../bootstrap/autoload.php';
$app = require_once __DIR__.'/../bootstrap/app.php';
// FROM HERE ->
$kernel = $app->make('Illuminate\Contracts\Http\Kernel');
$response = $kernel->handle(
$request = Illuminate\Http\Request::capture()
);
$response->send();
//<--to here takes 1120 ms of which 200 ms is my code in the controller
$kernel->terminate($request, $response);
I'm assuming this is a framework issue, but how can I overcome it? A one-second average response time is unacceptable here.
There might be thousands of reasons for this. What you should do is profile your code to verify what takes so long.
The first thing I would check is the database connection and the queries that are executed. If you run, for example, 10 queries and each takes 100ms, query execution alone takes 1s, so a 1.3-second total is nothing strange.
So verify what exactly is happening in your code: check the execution time of a simple controller action (one that only returns a string, for example), check which providers are loaded, etc., because each such thing can affect overall performance.
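To put numbers on each suspect section, a plain-PHP timing helper (a sketch; no framework APIs involved) can bracket the calls in public/index.php:

```php
<?php
// Minimal timing helper: runs a callable and records its duration in ms.
function time_section(string $label, callable $section, array &$timings)
{
    $start = microtime(true);
    $result = $section();
    $timings[$label] = (microtime(true) - $start) * 1000.0;
    return $result;
}

// Example: time a stand-in "controller" workload.
$timings = [];
$sum = time_section('controller', function () {
    $total = 0;
    for ($i = 0; $i < 1000; $i++) {
        $total += $i;
    }
    return $total;
}, $timings);

echo $sum, "\n"; // 499500
printf("controller took %.2f ms\n", $timings['controller']);
```

In index.php the same helper could wrap the bootstrap requires and the $kernel->handle(...) call separately, to see which phase dominates the 1120ms.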
I am using Laravel 4.2.
I have a project requirement to send an analysis report email to all the users every Monday at 6 am.
Obviously it's a scheduled task, hence I've decided to use a cron job.
For this I've installed the liebig/cron package. The package installed successfully. To test the email, I've added the following code in app/start/global.php:
Event::listen('cron.collectJobs', function() {
Cron::setEnablePreventOverlapping();
// to test the email, I am setting the day of week to today i.e. Tuesday
Cron::add('send analytical data', '* * * * 2', function() {
$maildata = array('email' => 'somedomain@some.com');
Mail::send('emails.analytics', $maildata, function($message){
$message->to('some_email@gmail.com', 'name of user')->subject('somedomain.com analytic report');
});
return null;
}, true);
Cron::run();
});
Also in app\config\packages\liebig\cron\config.php the key preventOverlapping is set to true.
Now, if I run it like php artisan cron:run, it sends the same email twice at the same time.
I've deployed the same code on my DigitalOcean development server (Ubuntu) and set its crontab to execute this command every minute, but it is still sending the same email twice.
Also, it is not generating a lock file in the app/storage directory; according to some search results, the package creates a lock file to prevent overlapping. The directory has full permissions granted.
Does anybody know how to solve this?
Remove Cron::run().
Here's what's happening:
1. Your Cron route or cron:run command is invoked.
2. Cron fires off the cron.collectjobs event to get a list of jobs.
3. You call Cron::run() and run all the jobs.
4. Cron then calls Cron::run() itself and runs all the jobs a second time.
In the cron.collectjobs event you should only be making a list of jobs using Cron::add().
The reason you're not seeing a lock file is either that preventOverlapping is set to false (it's true by default), or that the jobs run so fast you don't see the file being created and deleted. The lock file only exists while the jobs run, which may be only milliseconds.
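The double execution can be reproduced with a toy model of the collect-then-run flow (a simplified simulation, not the real liebig/cron API):

```php
<?php
// Toy model: the framework fires a "collect jobs" listener, then runs
// the collected jobs itself. Calling run() inside the listener as well
// makes every job execute twice.
class MiniCron
{
    private $jobs = [];
    private $collector;
    public $log = [];

    public function onCollectJobs(callable $listener) { $this->collector = $listener; }
    public function add($name, callable $job) { $this->jobs[] = [$name, $job]; }

    public function run()
    {
        foreach ($this->jobs as $entry) {
            $this->log[] = $entry[0];
            $entry[1]();
        }
    }

    // What `cron:run` does: fire the collect event, then run the jobs.
    public function cronRun()
    {
        call_user_func($this->collector, $this);
        $this->run();
    }
}

$cron = new MiniCron();
$cron->onCollectJobs(function (MiniCron $c) {
    $c->add('send analytical data', function () { /* Mail::send(...) */ });
    $c->run(); // BUG: the framework will call run() again after this listener
});
$cron->cronRun();

echo count($cron->log), "\n"; // 2 -- the mail job ran twice
```

Dropping the $c->run() line from the listener leaves the job to run exactly once, which is the fix described above.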
One of the main purposes of caching is to save resources and avoid things like hitting your database on every request. In light of this, I'm confused about what CodeIgniter does in a controller when it encounters a cache() statement.
For example:
$this->output->cache(5);
$data=$this->main_model->get_data_from_database();
$this->load->view("main/index", $data);
I realize that the cached main/index HTML file will be shown for the next 5 minutes, but during these 5 minutes will the controller still execute the get_data_from_database() step, or will it just skip it?
Note: the CodeIgniter documentation says you can put the cache() statement anywhere in the controller function, which confuses me even more about what's getting executed.
I can answer my own question: NOTHING in the controller function other than the cached output gets executed while the cache is valid.
To test this yourself, do a database INSERT or something else that would be logged somehow (e.g. write to a blank file).
I added the following code below my cache() statement and it inserted into the some_table table only the first time I loaded the controller function, not the 2nd time (within the 5-minute span).
$this->db->insert('some_table', array('field_name' => 'value1') );
I think this can be verified by enabling the Profiler in your controller and checking whether any query is executed. Make sure this is enabled only for your IP if you're using it in a production environment.
$this->output->enable_profiler(TRUE);
-- EDIT 1 --
This will be visible only once. After the cached page is stored, the profiler result won't be visible again (so you might want to delete the cache file and refresh the page).
-- EDIT 2 --
You might also use:
log_message('info', 'message');
inside your model, then change $config['log_threshold'] to 3 in config.php and check the log file.
-- EDIT 3 --
The SELECT will certainly be executed on a cache miss, unless you have enabled the database cache; in that case you'll see the cached SELECT in the cache folder.
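The short-circuit behaviour described above can be illustrated with a toy file-based page cache in plain PHP (a simplified sketch, not CodeIgniter's actual implementation):

```php
<?php
// Toy page cache: while a fresh cache file exists, the "controller"
// callable is never invoked -- mirroring how an output cache skips
// the whole controller (database queries included) on a cache hit.
function serve($cacheFile, $ttlSeconds, callable $controller, array &$calls)
{
    if (is_file($cacheFile) && time() - filemtime($cacheFile) < $ttlSeconds) {
        return file_get_contents($cacheFile); // cache hit: controller skipped
    }
    $calls[] = 'controller';                  // cache miss: controller runs
    $html = $controller();
    file_put_contents($cacheFile, $html);
    return $html;
}

$cacheFile = tempnam(sys_get_temp_dir(), 'page_cache_');
unlink($cacheFile); // start without a cache file
$calls = [];

$first  = serve($cacheFile, 300, function () { return '<h1>hello</h1>'; }, $calls);
$second = serve($cacheFile, 300, function () { return '<h1>hello</h1>'; }, $calls);

echo ($first === $second) ? "same output\n" : "different output\n";
echo count($calls), "\n"; // 1 -- the controller body ran only once
unlink($cacheFile);
```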