CakePHP 3.x ORM doesn't use cache data when key exists - caching

CakePHP 3.5.13 with Redis configured as the cache engine:
// config/app.php
'Cache' => [
    'default' => [
        'className' => 'Redis',
        'duration' => '+1 hours',
        'prefix' => 'cake_redis_',
        'host' => '127.0.0.1',
        'port' => 6379,
    ],
],
I have a table with ~260,000 rows in it and a corresponding Table class called SubstancesTable.php. I'm attempting to get the first 5000 rows and then cache the results, so that on subsequent queries, the cached results are used rather than executing the same query:
// Controller method
// (assumes: use Cake\ORM\TableRegistry; at the top of the controller)
public function test()
{
    $this->autoRender = false;
    $Substances = TableRegistry::get('Substances');

    // Get 5000 rows from the table
    $query = $Substances->find('list')->limit(5000);

    // Cache the results under this key
    $query->cache('test_cache_key');

    // Output the results (this executes the query)
    debug($query->toArray());
}
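For comparison purposes, the timing can be measured like this (an illustrative sketch, not part of the original question; it uses the same finder and cache key as above):
// Illustrative sketch: time the fetch. On a second request, with the key
// already in Redis, the elapsed time should drop if the cache is being read.
$start = microtime(true);
$rows = $Substances->find('list')
    ->limit(5000)
    ->cache('test_cache_key')   // same key as above
    ->toArray();                // executes the query (or reads the cached result set)
debug(microtime(true) - $start);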
When I log in to Redis (running redis-cli over SSH on my web server), I can see a key has been generated with the name "test_cache_key":
127.0.0.1:6379> KEYS *
1) "cake_redis_test_cache_key"
I can also see the serialized data in there using GET cake_redis_test_cache_key.
When I execute the above in a browser, there is virtually no difference in the time taken between the cache not existing and after the cache has been created. I have deleted the cached key in Redis using DEL cake_redis_test_cache_key and confirmed it was gone by listing the keys again (KEYS *).
Clearly CakePHP isn't reading from the cache in this situation, even though it is writing to it without problems. Why is this happening?
The documentation (https://book.cakephp.org/3.0/en/orm/query-builder.html#caching-query-results) is not clear. Do I need to do something else to get it to read the results from the cache? I've also read "CakePHP 3: find() with cache" but can't see what is being done differently from what I'm doing above.

Related

json requested via Ajax: queries are very fast but response is returned very slowly

Note the following:
It's a single Ajax request.
As you can see, I included the duration in the result; it is the duration of all the queries executed in the API backend.
The response length is 11 KB, so it's not a response-size problem.
But as you can see, the server takes 5 seconds to serve the page.
I'm using nginx, and on this server (a single-project dev VPS) there is NO traffic and no concurrency problem.
The backend is made in Laravel 8 and it's doing only this:
$start = microtime(true);
$data = $this->articleRepository->getProducts($request->all());
$duration = microtime(true) - $start;

return response()->json([
    'status' => 'success',
    'data' => $data,
    'debug' => [
        'duration' => $duration
    ]
]);
I tried to replace the Laravel magic with a plain json_encode():
$json = json_encode([
    'status' => 'success',
    'data' => $data,
    'debug' => [
        'duration' => $duration
    ]
]);
return $json;
But it takes the same time, so I think the problem is on the server side.
By the way, note that the dev VPS is a Debian 11 machine on my local network. We already verified that up/down bandwidth is well over 350 Mbit/s, symmetric, and stable.
I cannot diagnose it. I have root access to the VPS, but I have no idea what could cause so much slowness.
Any ideas?
In this very specific case, it was a DNS issue. Resolving "localhost" was causing a series of problems and latencies. We changed every reference from 'localhost' to 127.0.0.1 and everything was resolved.
Note for future Googlers using Laragon and/or XAMPP: we discovered by accident that changing localhost to 127.0.0.1 when configuring Redis in the Laravel .env variables fixes a huge amount of latency on a Windows machine, with both Laragon and XAMPP.
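As an illustration (assuming the default Laravel env variable names; check your own config/database.php for the exact keys), the change amounts to something like this in .env:
# Before: resolving "localhost" can add DNS latency on some setups
# REDIS_HOST=localhost
# DB_HOST=localhost

# After: point directly at the loopback address
REDIS_HOST=127.0.0.1
DB_HOST=127.0.0.1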

Laravel Queue generating `illuminate:queue:restart` continuously

I have a Laravel queue running, but on the database connection; here is the config:
'database' => [
    'driver' => 'database',
    'connection' => 'mysql',
    'table' => 'jobs',
    'queue' => 'default',
    'retry_after' => 190,
    'block_for' => 0,
],
This is how I run it:
php artisan queue:work --queue=xyz_queue > storage/logs/queue.log
On the Redis CLI, this is what is happening every second: the illuminate:queue:restart key is being read over and over.
It is normal and expected behavior. According to the documentation:
Since queue workers are long-lived processes, they will not pick up changes to your code without being restarted. So, the simplest way to deploy an application using queue workers is to restart the workers during your deployment process. You may gracefully restart all of the workers by issuing the queue:restart command: php artisan queue:restart
This command will instruct all queue workers to gracefully "die" after they finish processing their current job so that no existing jobs are lost.
What queue:restart does is set the current timestamp as the value of the illuminate:queue:restart key.
When a queued job is about to be consumed by a worker process (php artisan queue:work), the worker reads that timestamp from the illuminate:queue:restart key, and once the job has finished it reads the value again from the same key.
It then compares whether the value read before the job was processed is the same as the value read after.
If it is different, it stops the long-lived process.
It is an efficient way (since Redis is very fast for this kind of scenario) to detect whether the code has changed and whether the workers should be restarted to pick up that change.
The reason the value is saved in Redis is that, most probably, your cache driver is Redis. If you change it to file, the value will be saved to a file instead and the reads will go to that file.
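For example, assuming the default Laravel env names, the store used for this key simply follows the cache driver setting:
# .env
CACHE_DRIVER=redis   # restart timestamp stored in Redis (what you are seeing)
# CACHE_DRIVER=file  # would store it on disk (typically under storage/framework/cache) instead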
Here are the related methods:
protected function stopIfNecessary(WorkerOptions $options, $lastRestart, $job = null)
{
    if ($this->shouldQuit) {
        $this->stop();
    } elseif ($this->memoryExceeded($options->memory)) {
        $this->stop(12);
    } elseif ($this->queueShouldRestart($lastRestart)) {
        $this->stop();
    } elseif ($options->stopWhenEmpty && is_null($job)) {
        $this->stop();
    }
}

protected function queueShouldRestart($lastRestart)
{
    return $this->getTimestampOfLastQueueRestart() != $lastRestart;
}

protected function getTimestampOfLastQueueRestart()
{
    if ($this->cache) {
        return $this->cache->get('illuminate:queue:restart');
    }
}
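For completeness, the write side is roughly this (a sketch based on the description above of what queue:restart does, not a verbatim copy of the framework source):
// Roughly what "php artisan queue:restart" does: store the current timestamp
// under the restart key in the default cache store. Running workers compare
// this value before and after each job (see queueShouldRestart above) and
// stop gracefully once it changes.
$cache = app('cache.store'); // default cache repository (Redis in this setup)
$cache->forever('illuminate:queue:restart', time());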

Laravel Database migration procedure

I am currently using Laravel 5.4. I have a separate database per client and would like to run database migrations on all of the client databases. The database names are in the format clientdb_{clientid}. I have tried using:
Config::set("database.connections.mysql", ["database" =>
"clientdb_".$client['id'],
"username" => "root","password" => ""]);
$this->callSilent('migrate',
[ '--path' => 'database/migrations/clients','--database'=>'clientdb_'.$client['id']]);
but I am getting this exception:
[InvalidArgumentException] Database [clientdb_1] not configured.
The code you show is configuring the connection labelled mysql. I think what you are really trying to do is configure a new database connection called clientdb_1:
Config::set("database.connections.clientdb_" . $client['id'], [
"database" => "clientdb_" . $client['id'],
"username" => "root",
"password" => ""
]);
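Putting the two pieces together, a sketch of a per-client loop step might look like this (hypothetical; in practice the new connection also needs 'driver', 'host', and other keys, e.g. copied from the base mysql connection):
$connection = 'clientdb_' . $client['id'];

// Start from the base mysql connection so driver/host/charset are set
$base = Config::get('database.connections.mysql');
Config::set("database.connections.$connection", array_merge($base, [
    'database' => $connection,
]));

// Run the client migrations against the newly configured connection
$this->callSilent('migrate', [
    '--path' => 'database/migrations/clients',
    '--database' => $connection,
]);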
Looks like the database is not configured in your config/database.php file

Zend 2 How to Disable shared instantiation of services on the fly

In Zend 2, we use the function get() to get the same instance of a service when we request it multiple times. It is created the first time and cached during the request. That's what a shared service is.
$ar = $this->serviceLocator->get('ActionResponsibility');
Now, a non-shared service will create a new instance every time it is requested. To do this, we have to change the configuration file as follows:
<?php
return [
    'service_manager' => [
        'invokables' => [
            'MyService' => 'Application\Service\MyService',
            'AnotherService' => 'Application\Service\AnotherService',
        ],
        'shared' => [
            'MyService' => false,
            'AnotherService' => false,
            'ThirdPartyService' => true,
        ],
        // [...]
    ],
];
The question is: how can we get a new instance only when required in the code? Isn't there a way, using the get() function, to force a new instance instead of a cached copy?
You can use the build() method instead of get() to retrieve a new, non-cached instance.
$ar = $this->serviceLocator->build('ActionResponsibility');
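A small sketch of the difference, assuming zend-servicemanager v3 (where build() was introduced); 'ActionResponsibility' is just the service name used above:
// get() returns the shared (cached) instance for the duration of the request
$a = $this->serviceLocator->get('ActionResponsibility');
$b = $this->serviceLocator->get('ActionResponsibility');
var_dump($a === $b); // true

// build() always creates a fresh instance and never caches it
$c = $this->serviceLocator->build('ActionResponsibility');
var_dump($c === $a); // false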

How can I extend the default session duration in Concrete 5.7?

How can I extend the default duration of sessions in the Concrete5 CMS (v5.7)? It feels like I have to log in again way too frequently.
One way I discovered to achieve this is by modifying the session-handling settings inside /application/config/concrete.php:
return [
    //----------------------- SUPER LONG SESSIONS -------------------------
    // We want to extend the session cookie to last for 4 months
    // so that users are not bugged for their password all the time.
    // WARNING: This does reduce security and potentially increase the chance of
    // session hijacking, but if you're willing to make the trade-off, here goes.
    'session' => [
        'name' => 'CONCRETE5',
        'handler' => 'file',
        // We'll use our own specific save_path so that others on our
        // server don't garbage-collect our sessions
        'save_path' => DIR_APPLICATION . '/files/tmp/sessions',
        // 40 days (in seconds). This is a timeout value:
        // if the session is not used for 40 days, it is likely to be garbage collected.
        'max_lifetime' => 3456000,
        'cookie' => [
            'cookie_path' => false,
            // This defaults to 0, which is a session cookie
            // (ends when the browser is closed).
            // Extending to last 4 months (in seconds). The cookie will span multiple
            // browser restarts up until this max value, and then the user will be forced
            // to log in again (yes, even in the middle of a session, beware!)
            'cookie_lifetime' => 10510000,
            'cookie_domain' => false,
            'cookie_secure' => false,
            'cookie_httponly' => true
        ]
    ],
    // Browser user agents and IP addresses may change within that time,
    // so we will disable strict checking for those
    'security' => [
        'session' => [
            'invalidate_on_user_agent_mismatch' => false,
            'invalidate_on_ip_mismatch' => false
        ],
    ]
];
Sidenote:
The specific groups a member is a part of are stored in the session and only refreshed when logging in, or when certain permissions are changed in the Dashboard. When this occurs, Concrete5 automatically updates the timestamp in /application/config/generated_overrides/concrete.php, but you can also do this manually if you want to force users' permissions to be refreshed mid-session:
return array(
    // ...
    'misc' => array(
        'access_entity_updated' => 1453869371,
    ),
);

Resources