How to use the `laravel-backup` package with Laravel Vapor

We are using Laravel Vapor to manage our Laravel application and are planning to use the laravel-backup package to create automated database backups for our production environment.
I tested the implementation and managed to get it working (with version 7.3.3) on my Windows machine.
I set the mail configuration to get notified when a backup runs (successfully or not) and set the path to mysqldump like this:
'dump' => [
    'dump_binary_path' => 'C:\xampp\mysql\bin',
    'use_single_transaction',
    'timeout' => 60 * 5,
]
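For context, the notification side mentioned above is just the mail block in the package's config/backup.php; a minimal sketch, with a placeholder address:

'notifications' => [
    // ...
    'mail' => [
        'to' => 'you@example.com',
    ],
],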
To get this up and running with Vapor, I changed the backup destination disk in the config from local to s3, with s3 defined as:
's3' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
    'url' => env('AWS_URL'),
    'endpoint' => env('AWS_ENDPOINT'),
],
I removed the dump_binary_path because I didn't know where it should point in the context of Vapor, so I hoped the binary would be in a default location, as mentioned in the docs of the laravel-backup package:
mysqldump is used to backup MySQL databases. pg_dump is used to dump PostgreSQL databases. If these binaries are not installed in a default location, you can add a key named dump.dump_binary_path in Laravel's own database.php config file.
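For reference, on the working Windows setup the dump settings sit inside the mysql connection in config/database.php, roughly like this (same values as above):

'connections' => [
    'mysql' => [
        // ... the usual host / database / credentials settings ...
        'dump' => [
            'dump_binary_path' => 'C:\xampp\mysql\bin', // directory only, without the mysqldump filename
            'use_single_transaction',
            'timeout' => 60 * 5,
        ],
    ],
],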
I included the backup commands in the console Kernel file
$schedule->command('backup:clean')->daily()->at('01:00');
$schedule->command('backup:run --only-db')->daily()->at('01:30');
and deployed it with Vapor.
Unfortunately it isn't working: I didn't receive an email (neither success nor failure) and nothing was created in our S3 bucket.
Has anyone used laravel-backup with Vapor before and knows how to fix this? What am I missing?
Thanks in advance!

Laravel Vapor support replied:
"Our native runtime doesn't have mysqldump installed. You will need to run the Docker runtime and install yourself if this is a requirement. It's also worth noting the maximum execution time on Lambda is 15 mins so if your backup is large, you could run into issues."

Related

Laravel: How to transfer files via SFTP to Azure Blob Storage without downloading other libraries?

I need to be able to transfer files, list files and delete files on a certain directory on my Azure Blob Storage and use existing Laravel Disk commands like put(), allFiles(), delete().
Below is my code for uploading a file:
use Storage;

class SFTPFileUploader
{
    public function uploadFileToAzure($fileName, $content)
    {
        $sftpAzureDisk = Storage::disk('sftp');
        $sftpAzureDisk->put($fileName, $content);
    }
}
To be able to do this without having to download other libraries, first make sure you have the following set up in your Azure Portal:
1. Azure Storage Account (e.g myprojectstorage)
2. Azure Storage Container (e.g myprojectcontainerfolder)
3. Azure Storage Account -> Settings -> SFTP
3.1 Create a Local User (If none yet)
3.1.1 Create User Name
3.1.2 Generate Password
3.1.3 Set Permissions
3.2 Enable SFTP
After setting all of that up, you can proceed to your config/filesystems.php and create a new connection:
'sftp' => [
    'driver' => 'sftp',
    'host' => "<myprojectstorage>.blob.core.windows.net",
    'port' => 22,
    'username' => "<myprojectstorage>.<myprojectcontainer>.<username>",
    'password' => "<password>",
    'privateKey' => storage_path('app/public/your.key'), // optional, depends on Azure setup
    'root' => '/',
],
Then this should already work without having to download any other library:
use Storage;

class SFTPFileUploader
{
    public function uploadFileToAzure($fileName, $content)
    {
        $sftpAzureDisk = Storage::disk('sftp');
        $sftpAzureDisk->put($fileName, $content);
    }
}
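Listing and deleting files then work through the same disk with the usual Storage methods; a minimal sketch (directory and file names are placeholders):

use Storage;

$sftpAzureDisk = Storage::disk('sftp');

// list every file in a directory on the container
$files = $sftpAzureDisk->allFiles('reports');

// delete a single file
$sftpAzureDisk->delete('reports/old-report.csv');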
P.S. I have only tested this with password authentication; I haven't tested it with a key file yet.

A problem with a file automatically increasing in size in a Laravel project

I have a problem with a file named
storage\logs\laravel.log
in my Laravel project files.
The strange thing is that this file automatically grows in size every day, so I have to keep deleting it so that the site does not stop working.
Do you have a solution for how to stop this file from growing automatically, so that the site does not stop because there is no free space?
You can change the logging driver to daily. This will create a new log file every day, with the date in the file name.
In config/logging.php:
'default' => env('LOG_CHANNEL', 'daily'),

'channels' => [
    ...
    'daily' => [
        'driver' => 'daily',
        'path' => storage_path('logs/laravel.log'),
        'level' => env('LOG_LEVEL', 'debug'),
        'days' => 14,
    ],
    ...
Set the number of days that you want to keep log files for. After that, the old log files will be deleted automatically.
For more information, check the Laravel documentation on logging.

Laravel Horizon - Redis - HAProxy - Error while reading line from the server

Sorry for the title, which might sound like an "already answered" topic, but I believe my case is unique.
Also, this is my first post, so I apologize if I am not on the proper channel, as I am not sure whether my problem is on the server administration side or on the Laravel configuration side.
I am trying to get some fresh ideas on how to resolve an issue with Horizon / Predis / HAProxy which I thought was fixed but is showing up again.
Some details on environment
2x Apache servers : PHP Version 7.2.29-1+ubuntu18.04.1+deb.sury.org+1
thread safe is disabled and we use FPM
2x Redis servers using a simple master-slave setup (no high availability, no sentinel) : redis version 4.0.9
load balancing with HAProxy version 1.9
Libraries
laravel/framework: 6.14.0
laravel/horizon: 3.7.2
redis/predis: 1.1.1
Horizon configuration
The Horizon daemon is managed through Supervisor.
This is the Redis client configuration in config/database.php:
'redis' => [
    'client' => 'predis',
    'options' => [
        'prefix' => strtoupper(env('APP_NAME') . ':')
    ],
    'default' => [
        'host' => env('REDIS_HOST'),
        'password' => env('REDIS_PASSWORD'),
        'port' => env('REDIS_PORT'),
        'database' => env('REDIS_DB'),
        'read_write_timeout' => -1
    ],
    ...
This is the Redis connection configuration in config/queue.php:
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => env('REDIS_QUEUE', 'default'),
    'retry_after' => 110
],

'redis-long-run' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => env('REDIS_QUEUE', 'long-running-queue'),
    'retry_after' => 3620
],
As you can see there are two connections defined for the same physical Redis server.
The application uses queues for 2 different types of jobs:
Fast / short processes like broadcasting, notifications or some Artisan commands calls.
These use the first connection configuration with low timeout setting.
Long-running processes which essentially query large amounts of data on a Snowflake DB (a cloud-based, SQL-like DB) and/or update or insert documents on a Solr server.
These processes use the second connection configuration as they can take quite some time to complete (usually around 20 minutes for the one combining reads from Snowflake and writes to Solr).
The application is a business webapp meant for private use by my company and the load is rather small (around 200 jobs queued per day), but the long-running processes are critical to the business: a job failure or a double run is not acceptable.
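For illustration, a long-running job gets pushed onto the second connection and queue roughly like this (the job class name is hypothetical):

// dispatch onto the long-running connection / queue defined above
SyncSnowflakeToSolr::dispatch($batch)
    ->onConnection('redis-long-run')
    ->onQueue('long-running-queue');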
This is the config/horizon.php file:
'environments' => [
    'production' => [
        'supervisor-default' => [
            'connection' => 'redis',
            'queue' => ['live-rules', 'solr-cmd', 'default'],
            'balance' => 'simple',
            'processes' => 3,
            // must be lower than /config/queue.php > 'connections.redis'
            'timeout' => 90,
            'tries' => 3,
        ],
        'supervisor-long-run' => [
            'connection' => 'redis-long-run',
            'queue' => ['long-running-queue', 'solr-sync'],
            'balance' => 'simple',
            'processes' => 5,
            // must be lower than /config/queue.php > 'connections.redis-long-run'
            'timeout' => 3600,
            'tries' => 10,
        ],
    ],
    'staging' => [
        ...
Initial problem <solved>
When we went live at the beginning of the year we immediately hit a problem with the jobs running on the long-running-queue connection:
Error while reading line from the server. [tcp://redis_host:6379] errors started popping left and right.
These translated into jobs being stuck in pending state, until they finally ended up being marked as failed although the tasks had in reality succeeded.
At the time the application's long running processes were limited to the Snowflake SELECT queries.
After going through the numerous posts about it in Laravel Horizon's GitHub issues as well as SO topics, and testing the suggestions without luck, we finally figured out that the culprit was our load balancer closing the connection after 90 seconds.
Redis has a default tcp-keepalive config parameter of 300 seconds, so we tweaked HAProxy's configuration to close at 310 seconds and, poof, everything worked fine for a while.
This is HAProxy's configuration for the application nowadays:
listen PROD-redis
    bind 0.0.0.0:6379
    mode tcp
    option tcplog
    option tcp-check
    balance leastconn
    timeout connect 10s
    timeout client 310s
    timeout server 310s
    server 1 192.168.12.34:6379 check inter 5s rise 2 fall 3
    server 2 192.168.43.21:6379 check inter 5s rise 2 fall 3 backup
New problem (the initial one reborn?)
Coming back a few months later the application has evolved and we now have a job which reads and yields from Snowflake in batch to build a Solr update query. The Solr client is solarium/solarium and we use the addBuffered plugin.
This worked flawlessly on our pre-production environment which doesn't have load balancing.
So next we moved to the production environment and the Redis connection issues arose again unexpectedly, except this time we have HAProxy set up properly.
Monitoring the keys in Redis we can see that these jobs do get reserved but end up in the delayed state after some time, waiting to be tried again once the job's timeout is reached.
This is a real problem: we end up going through the job's max tries count until it eventually gets marked as failed, running it x times because it never gets the complete flag, putting unnecessary stress on the environment and consuming resources when in fact the job DID succeed on the first try.
This is what we get from HAProxy's logs:
Jun 26 11:35:43 apache_host haproxy[215280]: 127.0.0.1:42660 [26/Jun/2020:11:29:02.454] PROD-redis PROD-redis/redis_host 1/0/401323 61 cD 27/16/15/15/0 0/0
Jun 26 11:37:18 apache_host haproxy[215280]: 127.0.0.1:54352 [26/Jun/2020:11:28:23.409] PROD-redis PROD-redis/redis_host 1/0/535191 3875 cD 24/15/14/14/0 0/0
The cD part is the interesting information, as per HAProxy's documentation:
c : the client-side timeout expired while waiting for the client to send or receive data.
D : the session was in the DATA phase.
There are more logs like this and there is no obvious pattern in the delay between the connection being established and the moment it closes, as you can see from the dates.
Before getting to this point we have:
switched to a Redis version 5.0.3 server: same issue.
removed HAProxy from the equation and established direct connection between the client and Redis : works flawlessly.
I'm a bit at a loss as to how to figure out and fix the issue for good.
Going back to the HAProxy log concerning client-side timeout, I wonder what could possibly be wrong about the client configuration and what I should try next.
Maybe someone here will come up with a suggestion? Thank you for reading.
According to the Laravel documentation, it is better to use the PhpRedis client instead of Predis:
Predis has been abandoned by the package's original author and may be removed from Laravel in a future release.
In short, PhpRedis is a PHP module written in C, while Predis is a library written in PHP. There is a huge performance difference, described here.
BTW, we have a similar stack: Laravel + Horizon -> HAProxy -> Redis server. We have 3 Redis servers (1 master, 2 slaves) and Sentinel to keep track of the actual master.
And we had similar problems with Redis until we migrated from Predis to PhpRedis. When researching the problems, the best answer was to use PhpRedis.
PS. We just changed REDIS_CLIENT in .env from predis to phpredis and everything kept working.
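In config/database.php that simply means reading the client from the environment instead of hard-coding predis; a minimal sketch:

'redis' => [
    // with REDIS_CLIENT=phpredis set in .env
    'client' => env('REDIS_CLIENT', 'phpredis'),
    // the options / connection settings can stay as they are; note that
    // 'read_write_timeout' is a Predis option, phpredis typically uses 'read_timeout' instead
],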
Error
Error while reading line from the server. [tls://private-XXX-do-user-XXX-0.b.db.ondigitalocean.com:25061]
Reason
DigitalOcean Managed Redis is not compatible with Predis.
Solution
Remove Predis and use phpredis. Phpredis is an extension for PHP.
Check if you have installed phpredis:
# php -i | grep async_redis
async_redis => enabled
If you don't have phpredis, just install it: pecl install redis (more information about php-redis installation).
Remove predis from your composer.json and add this line (good practice):
"require": {
    "php": "^8.0",
    /* ... */
    "ext-redis": "*",
    /* ... */
Change your code or .env REDIS_CLIENT value:
'redis' => [
    'client' => env('REDIS_CLIENT', 'phpredis'),

    'default' => [
        'scheme' => env('DATA_REDIS_SCHEME', 'tcp'),
        'host' => env('DATA_REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => 0,
        // 'read_write_timeout' => 0, // was required by DO in the past, but just in case...
    ],

    'cache' => [
    ],
],

Unable to connect to remote server - Laravel 4.1

I want to execute some commands in a remote server using SSH.
This is my configuration :
'connections' => [
    'production' => [
        'host' => '172.55.81.20',
        'username' => 'user',
        'password' => '',
        'key' => 'C:\cygwin64\home\oqannouf\.ssh\id_rsa'
    ],
]
I tried SSH::into('production')->run(['ls']);
but I get the following error:
Unable to connect to remote server
with no additional information explaining why it doesn't work.
Note:
The RSA key is correct and I can use it to log in to the server using Cygwin.
Even with a correct password it doesn't work.
I have all the permissions on the folder C:\cygwin64\home\oqannouf\.ssh\

Laravel environment files for local and production possibly not working as intended?

In Laravel 4, I'm setting up my local development machine (my laptop) and then I have my production server.
In my /bootstrap/start.php:
$env = $app->detectEnvironment(array(
    'local' => array('MacBook-Pro-2.local'),
    'production' => array('ripken'),
));
I then created a .env.local.php in my root directory with the following:
<?php

return [
    // Database Connection Settings
    'db_host' => '127.0.0.1',
    'db_name' => 'censored',
    'db_user' => 'censored',
    'db_password' => 'censored'
];
That part works. However, I went to my production server and created the exact same file, called it .env.production.php, and it doesn't load the variables from that file. I don't know what made me think to do it, but I renamed that file to .env.php, without "production" in the name, and it works.
My question therefore is: why wouldn't .env.production.php work? I thought that is what the good folks over at Laravel wanted me to name my various environment files.
Production is the 'default' ruleset, so it is a bit special. Laravel will look for the "base" or "default" rules when loading production, rather than the actual name production.
So you can change
$env = $app->detectEnvironment(array(
    'local' => array('MacBook-Pro-2.local'),
    'production' => array('ripken'),
));
to
$env = $app->detectEnvironment(array(
    'local' => array('MacBook-Pro-2.local'),
));
That way anything that is not "MacBook-Pro-2.local" is going to be production automatically. Then just use the default .env.php for the production settings.
Every other environment needs to be explicitly defined, such as .env.local.php, .env.testing.php, etc.
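In other words, the production settings just go into a plain .env.php with the same shape as .env.local.php; for example (values are placeholders):

<?php

// .env.php -- picked up for the default (production) environment
return [
    // Database Connection Settings
    'db_host' => '127.0.0.1',
    'db_name' => 'production_db',
    'db_user' => 'production_user',
    'db_password' => 'production_password'
];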
