Yii2 - Use APC cache from console - caching

I am trying to make a console command that adds data to the APC cache, but so far without any success.
This code works perfectly when I run it as a standard action (controller/action):
public function actionUpdateExchangeRate()
{
    $curl = curl_init();
    curl_setopt_array($curl, array(
        CURLOPT_RETURNTRANSFER => 1,
        CURLOPT_URL => 'https://openexchangerates.org/api/latest.json?app_id=xxxxxxx',
        CURLOPT_USERAGENT => 'Exchange Rates'
    ));
    $json = curl_exec($curl);
    curl_close($curl);
    $rates = json_decode($json);
    Yii::$app->cache->set('rates', $rates->rates);
}
I made a console command with the same code, but when I try to set the cache, nothing happens.
var_dump('<pre>', Yii::$app->cache->set('rates', $rates->rates), '</pre>');die;
The dump gives true when run as a controller/action, and false when run from the console command.
In console.php I added this configuration to the cache component (it is the same in web.php):
'cache' => [
    'class' => 'yii\caching\ApcCache',
    'keyPrefix' => 'test',
    'useApcu' => true
],
PHP version is:
PHP 7.2.4-1+ubuntu16.04.1+deb.sury.org+1 (cli) (built: Apr 5 2018 08:53:57) ( NTS )
Any idea what I am doing wrong here?

Make sure that you have enabled apc.enable_cli in php.ini.
But using ApcCache from the console does not make much sense: the APCu cache is per process, so it will be removed anyway once the command ends, and console commands will not share the cache with web requests. As the PHP manual puts it:
Mostly for testing and debugging. Setting this enables APC for the CLI version of PHP. Under normal circumstances, it is not ideal to create, populate and destroy the APC cache on every CLI request, but for various test scenarios it is useful to be able to enable APC for the CLI version of PHP easily.
https://secure.php.net/manual/en/apcu.configuration.php#ini.apcu.enable-cli
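For a quick sanity check (my own sketch, not part of the original answer), you can run something like this from the console to confirm that apc.enable_cli is on and that APCu is actually writable in the CLI process:
// Sketch only: verify APCu is usable from the CLI before blaming Yii's cache component.
var_dump(ini_get('apc.enable_cli'));                          // should be "1" for CLI processes
var_dump(function_exists('apcu_enabled') && apcu_enabled());  // APCu loaded and enabled?
var_dump(apcu_store('probe', 42), apcu_fetch('probe'));       // both should succeed if APCu works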

Related

How to use `laravel-backup`-Package with Laravel Vapor

We are using Laravel Vapor to manage our Laravel application and are planning to use the laravel-backup package to create automated database backups for our production environment.
I tested the implementation and managed to get it working (with version 7.3.3) on my Windows machine.
I set the mail configuration to get notified when a backup runs (successfully or not) and set the path to mysqldump like this:
'dump' => [
    'dump_binary_path' => 'C:\xampp\mysql\bin',
    'use_single_transaction',
    'timeout' => 60 * 5,
]
To get this up and running with Vapor, I changed the destination disk config from local to s3, with s3 defined as:
's3' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
    'url' => env('AWS_URL'),
    'endpoint' => env('AWS_ENDPOINT'),
],
I removed the dump_binary_path, because I didn't know what to point it at in the context of Vapor. I hoped mysqldump would be in a default location, as mentioned in the docs of the laravel-backup package:
mysqldump is used to backup MySQL databases. pg_dump is used to dump PostgreSQL databases. If these binaries are not installed in a default location, you can add a key named dump.dump_binary_path in Laravel's own database.php config file.
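For reference, the key the docs mention would look roughly like this (a sketch; the path below is purely illustrative, and as the answer further down notes, Vapor's native runtime does not ship mysqldump at all):
// config/database.php (sketch)
'mysql' => [
    // ... the usual connection settings ...
    'dump' => [
        'dump_binary_path' => '/opt/bin', // hypothetical location of mysqldump in a custom runtime
        'use_single_transaction',
        'timeout' => 60 * 5,
    ],
],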
I included the backup commands in the console Kernel file:
$schedule->command('backup:clean')->daily()->at('01:00');
$schedule->command('backup:run --only-db')->daily()->at('01:30');
and deployed it with Vapor.
Unfortunately it isn't working. I didn't receive an email (neither success nor failure), and nothing was created in our S3 bucket.
Has anyone used laravel-backup with Vapor before and knows how to fix this? What am I missing?
Thanks in advance!
Laravel Vapor support replied with this:
"Our native runtime doesn't have mysqldump installed. You will need to run the Docker runtime and install yourself if this is a requirement. It's also worth noting the maximum execution time on Lambda is 15 mins so if your backup is large, you could run into issues."

Laravel Horizon - Redis - HAProxy - Error while reading line from the server

Sorry for the title, which might sound like an "already answered" topic, but I believe my case is unique.
Also, this is my first post, so I apologize if I am not on the proper channel, as I am not sure whether my problem is on the server administration side or on Laravel's configuration side.
I am trying to get some fresh ideas on how to resolve an issue with Horizon / Predis / HAProxy which I thought was fixed but is showing up again.
Some details on environment
2x Apache servers : PHP Version 7.2.29-1+ubuntu18.04.1+deb.sury.org+1
thread safe is disabled and we use FPM
2x Redis servers using a simple master-slave setup (no high availability, no sentinel) : redis version 4.0.9
load balancing with HAProxy version 1.9
Libraries
laravel/framework: 6.14.0
laravel/horizon": 3.7.2
redis/predis: 1.1.1
Horizon configuration
The Horizon daemon is managed through Supervisor.
This is the Redis client configuration in config/database.php:
'redis' => [
    'client' => 'predis',
    'options' => [
        'prefix' => strtoupper(env('APP_NAME') . ':')
    ],
    'default' => [
        'host' => env('REDIS_HOST'),
        'password' => env('REDIS_PASSWORD'),
        'port' => env('REDIS_PORT'),
        'database' => env('REDIS_DB'),
        'read_write_timeout' => -1
    ],
    ...
This is the Redis connection configuration in config/queue.php:
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => env('REDIS_QUEUE', 'default'),
    'retry_after' => 110
],
'redis-long-run' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => env('REDIS_QUEUE', 'long-running-queue'),
    'retry_after' => 3620
],
As you can see there are two connections defined for the same physical Redis server.
The application uses queues for 2 different types of jobs:
Fast / short processes like broadcasting, notifications or some Artisan command calls.
These use the first connection configuration with a low timeout setting.
Long running processes which essentially query large amounts of data on a Snowflake DB (a cloud-based, SQL-like DB) and/or update/insert documents on a Solr server.
These processes use the second connection configuration, as they can take quite some time to complete (usually around 20 minutes for the one combining a read from Snowflake and a write to Solr).
The application is a business webapp meant for private use by my company and the load is rather small (around 200 jobs queued / day) but the long running processes are critical to the business : job failure or double run is not acceptable.
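For illustration (not from the original post), this is roughly how one of those long jobs would be pushed onto the dedicated connection and queue; SyncSolrIndexJob is a hypothetical job name, while the connection and queue names come from the configs shown here:
use App\Jobs\SyncSolrIndexJob; // hypothetical job class

// Dispatch onto the long-running Redis connection and its queue.
SyncSolrIndexJob::dispatch($datasetId)
    ->onConnection('redis-long-run')
    ->onQueue('long-running-queue');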
This is the config/horizon.php file:
'environments' => [
    'production' => [
        'supervisor-default' => [
            'connection' => 'redis',
            'queue' => ['live-rules', 'solr-cmd', 'default'],
            'balance' => 'simple',
            'processes' => 3,
            // must be lower than /config/queue.php > 'connections.redis'
            'timeout' => 90,
            'tries' => 3,
        ],
        'supervisor-long-run' => [
            'connection' => 'redis-long-run',
            'queue' => ['long-running-queue', 'solr-sync'],
            'balance' => 'simple',
            'processes' => 5,
            // must be lower than /config/queue.php > 'connections.redis-long-run'
            'timeout' => 3600,
            'tries' => 10,
        ],
    ],
    'staging' => [
        ...
Initial problem <solved>
When we went live at the beginning of the year we immediately hit a problem with the jobs running on the long-running-queue connection:
Error while reading line from the server. [tcp://redis_host:6379] errors started popping left and right.
These translated into jobs being stuck in a pending state until they finally ended up being marked as failed, although the tasks had in reality succeeded.
At the time the application's long running processes were limited to the Snowflake SELECT queries.
After going through the numerous posts about it in Laravel Horizon's GitHub issues as well as SO topics, and testing the suggestions without luck, we finally figured out that the culprit was our load balancer closing the connection after 90 seconds.
Redis has a tcp-keepalive default config parameter of 300 seconds, so we tweaked HAProxy's configuration to close at 310 seconds and - poof! - everything worked fine for a while.
This is HAProxy's configuration for the application nowadays:
listen PROD-redis
    bind 0.0.0.0:6379
    mode tcp
    option tcplog
    option tcp-check
    balance leastconn
    timeout connect 10s
    timeout client 310s
    timeout server 310s
    server 1 192.168.12.34:6379 check inter 5s rise 2 fall 3
    server 2 192.168.43.21:6379 check inter 5s rise 2 fall 3 backup
New problem (the initial one reborn?)
Coming back a few months later, the application has evolved and we now have a job which reads and yields from Snowflake in batches to build a Solr update query. The Solr client is solarium/solarium and we use the addBuffered plugin.
This worked flawlessly on our pre-production environment which doesn't have load balancing.
So next we moved to the production environment, and the Redis connection issues arose again unexpectedly, except this time we have HAProxy set up properly.
Monitoring the keys in Redis, we can see that these jobs do indeed get reserved but end up in the delayed state after some time, waiting to be tried again once the job's timeout is reached.
This is a real problem, as we end up going through the job's max tries count until it eventually gets marked as failed, running it x times because it never gets the complete flag, putting unnecessary stress on the environment and consuming resources when in fact the job DID succeed on the first try.
This is what we get from HAProxy's logs:
Jun 26 11:35:43 apache_host haproxy[215280]: 127.0.0.1:42660 [26/Jun/2020:11:29:02.454] PROD-redis PROD-redis/redis_host 1/0/401323 61 cD 27/16/15/15/0 0/0
Jun 26 11:37:18 apache_host haproxy[215280]: 127.0.0.1:54352 [26/Jun/2020:11:28:23.409] PROD-redis PROD-redis/redis_host 1/0/535191 3875 cD 24/15/14/14/0 0/0
The cD part is the interesting information; as per HAProxy's documentation:
c : the client-side timeout expired while waiting for the client to send or receive data.
D : the session was in the DATA phase.
There are more logs like this, and there is no obvious pattern in the delay between the connection being established and the moment it closes, as you can see from the dates.
Before getting there we had:
switched to a Redis 5.0.3 server: same issue.
removed HAProxy from the equation and established a direct connection between the client and Redis: works flawlessly.
I'm a bit at a loss as to how to figure out and fix the issue for good.
Going back to the HAProxy log entries concerning the client-side timeout, I wonder what could possibly be wrong with the client configuration and what I should try next.
Maybe someone here will come up with a suggestion? Thank you for reading.
From the Laravel documentation, it is better to use the PhpRedis client instead of Predis:
Predis has been abandoned by the package's original author and may be removed from Laravel in a future release.
In short, PhpRedis is a PHP module written in C, while Predis is a library written in PHP. There is a huge performance difference, described here.
BTW, we have a similar stack: Laravel + Horizon -> HAProxy -> Redis. We have 3 Redis servers (1 master, 2 slaves) and Sentinel to keep track of the actual master.
We had similar problems with Redis until we migrated from Predis to PhpRedis. When researching the problems, the best answer was to use PhpRedis.
PS. We just changed REDIS_CLIENT in .env from predis to phpredis and everything kept working.
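For reference, the switch itself is tiny; a minimal sketch of what changes (your keys and defaults may differ):
// config/database.php - only the client line matters; .env gets REDIS_CLIENT=phpredis
'redis' => [
    'client' => env('REDIS_CLIENT', 'phpredis'),
    // ... the rest of the connection settings stay exactly the same ...
],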
Error
Error while reading line from the server. [tls://private-XXX-do-user-XXX-0.b.db.ondigitalocean.com:25061]
Reason
DigitalOcean Managed Redis is not compatible with Predis.
Solution
Remove Predis and use phpredis. Phpredis is an extension for PHP.
Check if you have installed phpredis:
# php -i | grep async_redis
async_redis => enabled
If you don't have phpredis, just install it: pecl install redis (more information about php-redis installation)
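As an extra sanity check (my own addition, not from the original answer), you can confirm the extension is loaded for the SAPI you actually run:
var_dump(extension_loaded('redis')); // true when phpredis is installed and enabled
var_dump(phpversion('redis'));       // the extension version string, or false if it is missing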
Remove predis from your composer.json and add this line (good practice):
"require": {
"php": "^8.0",
/* ... */
"ext-redis": "*",
/* ... */
Change your code or .env REDIS_CLIENT value:
'redis' => [
    'client' => env('REDIS_CLIENT', 'phpredis'),
    'default' => [
        'scheme' => env('DATA_REDIS_SCHEME', 'tcp'),
        'host' => env('DATA_REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => 0,
        // 'read_write_timeout' => 0, // was required by DO in the past, but just in case...
    ],
    'cache' => [
    ],
],

Laravel[PAYPAL] annoying issue with PayPal SDK config fix

I'm trying to create a Laravel wrapper for the PayPal SDK, but I could not continue due to the annoying fact that, as developers, we are expected to edit this file:
vendor/paypal/sdk-core-php/lib/../config/sdk_config.ini
Is there any way in the PayPal SDK to easily change the config.ini without compromising the structure?
I'm trying not to touch the /vendor/ folder as much as possible.
Or should I create a filesystem editor in my functions that would create an sdk_config.ini for the PayPal SDK?
Any ideas?
Yes, with an old version like SDK 1.4.0 you can do it by declaring the sdk_config path; see the PayPal documentation.
I got the same issue too and found some errors in the code after updating the SDK, so I posted an issue on the PayPal SDK repository:
https://github.com/paypal/SDKs/issues/51
Hope it helps.
In Composer:
Check composer.json to be sure you have:
"paypal/rest-api-sdk-php" : "*"
If you have a different string, change it and run composer update.
In your PHP code:
So now you should be able to inject the configuration in the following way:
$cred = new OAuthTokenCredential($paypal_conf['client_id'], $paypal_conf['secret']);
$this->_api_context = new ApiContext($cred);
$this->_api_context->setConfig($paypal_conf['settings']);
Where $paypal_conf is:
array:3 [
    "client_id" => "..."
    "secret" => "..."
    "settings" => array:5 [
        "mode" => "sandbox"
        "http.ConnectionTimeOut" => 30
        "log.LogEnabled" => true
        "log.FileName" => "./storage/logs/paypal.log"
        "log.LogLevel" => "FINE"
    ]
]
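For completeness, here is a minimal sketch of the full wiring inside a Laravel class, assuming the array above lives in a hypothetical config/paypal.php file:
use PayPal\Auth\OAuthTokenCredential;
use PayPal\Rest\ApiContext;

// Sketch: load the array shown above from a hypothetical config/paypal.php
$paypal_conf = config('paypal');

$cred = new OAuthTokenCredential($paypal_conf['client_id'], $paypal_conf['secret']);
$this->_api_context = new ApiContext($cred);
$this->_api_context->setConfig($paypal_conf['settings']);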

what's wrong with graphviz in php-fpm mode?

I'm trying to configure a program to use XHProf.
When I use PHP in CLI mode or PHP's built-in web server, the callgraph image generation works well.
But when I use nginx + php-fpm, the dot exec in xhprof_generate_image_by_dot blocks forever.
Then I installed the PEAR Image_GraphViz package and wrote a simple test case like this:
require_once 'Image/GraphViz.php';

$img = new Image_GraphViz();
$img->addNode(
    'Node1',
    array(
        'URL' => 'http://link1',
        'label' => 'This is a label',
        'shape' => 'box'
    )
);
$img->image('png');
PHP's built-in web server can generate PNG files fine, but php-fpm blocks at the dot exec forever.
So, can anybody help me? What's wrong here? Here is some relevant machine information:
The OS: OS X
The Graphviz version: 2.34.0
The app: nginx 1.2.8 + php-fpm + PHP 5.4.21 + XHProf (the latest version from GitHub)
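One thing worth checking in a case like this (my own suggestion, not from the original post) is whether the php-fpm worker can see and execute the dot binary at all, since FPM workers usually start with a much smaller PATH/environment than the CLI or the built-in server:
// Drop this into a script served by php-fpm and compare with the CLI output.
var_dump(getenv('PATH'));                     // often much shorter under php-fpm
var_dump(shell_exec('command -v dot 2>&1'));  // is dot on the worker's PATH?
var_dump(shell_exec('dot -V 2>&1'));          // can the worker actually run it?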
After rebooting my machine, everything worked again.
Mystery.

ci-merchant purchase() not working

I'm using the ci-merchant library in my PyroCMS module locally on my development WAMP server (all working fine).
When I upload to my Linux test server, the purchase() function being called does not seem to work.
When it executes, it hangs for 5 minutes and then I get the response "Could not connect to host".
PHP
$params = array(
    'amount' => 20,
    'currency' => 'USD',
    'return_url' => 'http://someurl.com/return/',
    'cancel_url' => 'http://someurl.com/cancel/'
);

$settings = array(
    'test_mode' => TRUE,
    'username' => 'PAYPAL_TEST_USERNAME',
    'password' => 'MY_PAPAL_TEST_PASS',
    'signature' => 'MY_PAYPAL_TEST_SIG'
);
$this->load->library('merchant');
$this->merchant->load('paypal_express');
$this->merchant->initialize($settings);
// this is where I'm having the issue
$response = $this->merchant->purchase($params);
$message = $response->message();
When I echo $message:
echo $message; // Outputs: "couldn't connect to host"
CURL - Server Settings
Below is a list of the differences in the cURL settings on the two servers. Perhaps this is the issue. I don't think these settings can be changed without having to recompile cURL, but I'm not sure.
Development Server (WAMP server - status: working)
AsynchDNS : Yes
CurlInfo : 7.21.7
GSS Neg : Yes
IDN : No
SSPI : Yes
libSSH : libssh2/1.2.7
Test Server (Linux server - status: not working)
AsynchDNS : No
CurlInfo : 7.24.0
GSS Neg : No
IDN : Yes
SSPI : No
libSSH : <<not listed>>
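A quick way to reproduce such a comparison on any server (my own note, not part of the original post) is to dump what PHP's cURL was built with:
// Prints version, host, supported protocols, SSL/zlib versions and a capability bitmask.
print_r(curl_version());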
After much trial and error and some advice from friends, I found this to be caused by a missing libSSH module.
Since then I have moved my site from a shared hosting company to a more reliable VPS host.
I installed the appropriate libraries and everything is working fine.
I would recommend anyone hosting their sites to move away from "shared" hosting companies. I only encountered very delayed support, and VPS hosting wasn't really that much more than what I was paying before.
But you will need to know how to manage a server before you do.
