Deploy to multiple servers with different project roots using Laravel Envoy

When deploying to multiple servers using Laravel Envoy, how can you specify the project root per server? The example provided in the documentation assumes that the project root is the same path on both servers.
Assume web-1 has its project root at /var/html/www and web-2 at /var/foo/bar. How can I access each server's project root at runtime?

There are different ways to use Laravel Envoy for what you want to achieve. For example, based on your description, something like the following Envoy.blade.php would work when you run envoy run deploy:
@servers(['web-1' => '127.0.0.1', 'web-2' => '127.0.0.1'])

@setup
    // Helper that echoes a green message in the remote shell.
    function logMessage($message) {
        return "echo '\033[32m" . $message . "\033[0m';\n";
    }
@endsetup

@story('deploy')
    deploy-web-1
    deploy-web-2
@endstory

@task('deploy-web-1', ['on' => ['web-1']])
    cd /Users/Shared
    {{ logMessage('🚀 Task complete for web-1') }}
@endtask

@task('deploy-web-2', ['on' => ['web-2']])
    cd /Users/khill
    {{ logMessage('🚀 Task complete for web-2') }}
@endtask
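Since the whole file is a Blade template, you can also centralise the per-server paths in a @setup map instead of hard-coding them inside each task. A minimal sketch using the paths from the question (the git pull line is a placeholder for your real deploy steps):

@setup
    // Map each Envoy server name to its project root (paths from the question).
    $roots = [
        'web-1' => '/var/html/www',
        'web-2' => '/var/foo/bar',
    ];
@endsetup

@task('deploy-web-1', ['on' => ['web-1']])
    cd {{ $roots['web-1'] }}
    git pull origin main
@endtask

@task('deploy-web-2', ['on' => ['web-2']])
    cd {{ $roots['web-2'] }}
    git pull origin main
@endtask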

You can also build the server list programmatically and pass it to @servers:
$webServerIps = [
    'web-1' => 'xxx.xxx.xxx.xxx',
    'web-2' => 'xxx.xxx.xxx.xxx',
];

@servers(array_merge($webServerIps, ['persistent' => 'xxx.xxx.xxx.xxx', 'worker' => 'xxx.xxx.xxx.xxx', 'local' => '127.0.0.1']))
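With the servers named this way, any task can then target them by key as usual; a hypothetical example:

@task('restart-queues', ['on' => ['worker']])
    php artisan queue:restart
@endtask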
I hope that gets you to a solution. The Laravel Envoy documentation covers these directives in more detail.

Related

How to manage data at DO Spaces with Laravel Storage

I am trying to manage DO's Spaces with Laravel 8's Storage, but I am getting errors which seem to come from Laravel's side.
To start, I ran this line in the terminal, as instructed by Laravel's documentation:
composer require league/flysystem-aws-s3-v3 "~1.0"
Afterwards I edited my environment variables:
DO_SPACES_KEY=*KEY*
DO_SPACES_SECRET=*SECRET*
DO_SPACES_ENDPOINT=ams3.digitaloceanspaces.com
DO_SPACES_REGION=AMS3
DO_SPACES_BUCKET=test-name
and added a disk entry in config/filesystems.php:
'do_spaces' => [
    'driver' => 's3',
    'key' => env('DO_SPACES_KEY'),
    'secret' => env('DO_SPACES_SECRET'),
    'endpoint' => env('DO_SPACES_ENDPOINT'),
    'region' => env('DO_SPACES_REGION'),
    'bucket' => env('DO_SPACES_BUCKET'),
],
After visiting this test route:
Route::get('/test', function (Request $request) {
    Storage::disk('do_spaces')->put('test.txt', 'hello world');
});
I get this error:
Error executing "PutObject" on "//test-name./test-name/test.txt"; AWS HTTP error: cURL error 6: Couldn't resolve host 'test-name' (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) for http://test-name./test-name/test.txt
The problem seems to occur while Laravel is building the URL, which should not look like it does here (http://test-name./test-name/test.txt is clearly wrong). However, I have no clue how to fix this or what I am doing wrong, since I followed all the steps exactly as the tutorials and documentation describe.
I had the same problem and solved it as follows:
Add https:// to DO_SPACES_ENDPOINT (https://ams3.digitaloceanspaces.com). Without a scheme, the S3 client cannot parse the endpoint correctly, which appears to be why the generated URL comes out mangled.
In the put method, prefix the path with your Space name (and folder, if you have one):
Storage::disk('do_spaces')->put('YOUR_SPACE_NAME/YOUR_FOLDER_NAME(if you have)/test.txt', 'hello world');
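Putting the two steps together, the working setup differs from the question only in the endpoint scheme. In .env:

DO_SPACES_ENDPOINT=https://ams3.digitaloceanspaces.com

and a quick round-trip sketch to verify the disk works (the folder name is just an example; 'test-name' is the Space name from the question):

Storage::disk('do_spaces')->put('test-name/test-folder/test.txt', 'hello world');
$contents = Storage::disk('do_spaces')->get('test-name/test-folder/test.txt'); // "hello world"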

How to fix mpdf temporary files directory writable issue?

I'm getting this error in my Laravel application:
Mpdf\MpdfException (E_ERROR)
Temporary files directory "/var/www/html/../temp/" is not writable
Can anybody tell me how to fix this issue?
I fixed it like so:
$mpdf = new \Mpdf\Mpdf(['tempDir' => storage_path('tempdir')]);
storage_path('tempdir') resolves to a tempdir folder inside Laravel's storage directory, which the application already needs write access to.
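If the directory does not exist yet, you can create it on first use. A minimal sketch assuming mpdf/mpdf is installed (the storage/app/mpdf path is just a choice):

use Mpdf\Mpdf;

$tempDir = storage_path('app/mpdf');

// Create the temp directory on first use; 0775 keeps it writable by the web server group.
if (! is_dir($tempDir)) {
    mkdir($tempDir, 0775, true);
}

$mpdf = new Mpdf(['tempDir' => $tempDir]);
$mpdf->WriteHTML('<h1>Hello</h1>');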
I solved my issue by specifying tempDir as a usable temp directory in config/pdf.php:
<?php

return [
    'mode' => 'utf-8',
    'format' => 'A4',
    'author' => '',
    'subject' => '',
    'keywords' => '',
    'creator' => 'Laravel Pdf',
    'display_mode' => 'fullpage',
    'tempDir' => base_path('storage/app/mpdf'),
    'pdf_a' => false,
    'pdf_a_auto' => false,
    'icc_profile_path' => ''
];
If you can't find config/pdf.php, publish the package's config file to your config directory with the following command:
php artisan vendor:publish
If you are using Docker and having trouble setting the permissions, make sure to enter the container as the root user using the following command:
docker container exec -u 0 -it yourContainer bash
From this answer: Root password inside a Docker container
Go to the directory /var/www/html/../temp/ and check whether it exists.
If it does not exist, create it (e.g. mkdir -p /var/www/temp).
If it does exist, give it the necessary permissions (often 777, though the right mode depends on your environment).

Error uploading from Laravel 5.4 to S3 bucket

[I run the script from localhost]
I'm trying to upload files from Laravel 5.4 to an AWS S3 bucket, but I get this error:
Error executing "PutObject" on "https://bucket_name.s3.amazonaws.com/1520719994357906.png"; AWS HTTP error: cURL error 60: SSL certificate problem: unable to get local issuer certificate (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)
In filesystems.php:
's3' => [
    'driver' => 's3',
    'key' => 'KEY_HERE',
    'secret' => 'SECRET_HERE',
    'region' => 'us-east-1',
    'bucket' => 'bucket_name', // has global access to read files
],
In the controller:
Storage::disk('s3')->put($imageName, file_get_contents(public_path('galleries/').$imageName));
How do I solve this? If I upload the app to an EC2 instance, does it require SSL to be installed to upload files to the S3 bucket? Thanks in advance.
Uploading from the server worked fine with no need to install SSL; it just doesn't work from localhost.
It just doesn't work from localhost; if you want it to work on localhost, you have to make a change in the vendor directory (for local use only):
vendor/guzzlehttp/guzzle/src/Handler/CurlFactory.php
Around line 350, comment out the two existing lines and add the two new ones (or simply replace them):
if ($options['verify'] === false) {
    unset($conf[\CURLOPT_CAINFO]);
    $conf[\CURLOPT_SSL_VERIFYHOST] = 0;
    $conf[\CURLOPT_SSL_VERIFYPEER] = false;
} else {
    /* $conf[\CURLOPT_SSL_VERIFYHOST] = 2;
    $conf[\CURLOPT_SSL_VERIFYPEER] = true; */ // comment out these two lines
    $conf[\CURLOPT_SSL_VERIFYHOST] = 0;
    $conf[\CURLOPT_SSL_VERIFYPEER] = false;
}
Now it works fine.
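Note that edits under vendor/ are overwritten by the next composer install or update. A more durable local-only alternative (not from the answer above) is to point PHP's cURL at a CA bundle in php.ini instead of disabling verification; the path below is wherever you saved the bundle:

curl.cainfo = "/path/to/cacert.pem"
openssl.cafile = "/path/to/cacert.pem"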

spatie/laravel-backup notifications not posting

Using Laravel 5.3 and v4 of the spatie/laravel-backup package.
I am using this package from Spatie, which lets me take backups with a simple terminal command. It is pretty straightforward to set up, and when I run the command the backup runs as intended.
But there is also an option in the config file to send notifications (mail, post to Slack, ...) after a backup. This does not seem to do anything for me: I neither receive mails (and I have set my mail address) nor see posts in my dedicated Slack channel (and I have added the webhook).
I have already included the following composer packages while researching this problem:
guzzlehttp/guzzle
maknz/slack-laravel
maknz/slack
This is the simple notifications section in the config file:
'notifications' => [
    'notifications' => [
        \Spatie\Backup\Notifications\Notifications\BackupHasFailed::class => ['mail'],
        \Spatie\Backup\Notifications\Notifications\UnhealthyBackupWasFound::class => ['mail'],
        \Spatie\Backup\Notifications\Notifications\CleanupHasFailed::class => ['mail'],
        \Spatie\Backup\Notifications\Notifications\BackupWasSuccessful::class => ['mail'],
        \Spatie\Backup\Notifications\Notifications\HealthyBackupWasFound::class => ['mail'],
        \Spatie\Backup\Notifications\Notifications\CleanupWasSuccessful::class => ['mail'],
    ],

    /*
     * Here you can specify the notifiable to which the notifications should be sent. The default
     * notifiable will use the variables specified in this config file.
     */
    'notifiable' => \Spatie\Backup\Notifications\Notifiable::class,

    'mail' => [
        'to' => 'nicolas#******.***',
    ],

    'slack' => [
        'webhook_url' => 'https://hooks.slack.com/services/*****/*****/*************',
    ],
],
Not really sure if I am forgetting something. Thanks for the help.
The output I get in the terminal after running the commands: gist.github
Extra:
This is a talk at Laracon EU 2016, where the creator (Freek) shows off his package.
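One sanity check worth doing before digging into the package config: these notifications go out through Laravel's own notification and mail layers, so if a plain mail send fails, the backup notifications will fail just as silently. A minimal check (not part of the package; the address is a placeholder and this assumes your mail driver is configured in .env):

use Illuminate\Support\Facades\Mail;

// If this mail does not arrive either, the problem is the mail setup, not laravel-backup.
Mail::raw('backup notification test', function ($message) {
    $message->to('you@example.com')->subject('Mail config check');
});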

Fog VSphere provider vm_clone request cannot use datastore in folder

The following code works great, but only when the specified datastore is in the root of the datacenter. We organise our datastores in folders named after the cluster they are associated with.
I tried putting a path in (e.g. dc_name/ds_name), but no good.
server = connection.vm_clone(
  'datacenter'    => 'EWL',
  'template_path' => '.Templates/RHEL 6.2 x64',
  'name'          => 'new_vm_name',
  'datastore'     => 'E2-CL01-T2-OS-015',
  'dest_folder'   => 'Self-Service',
  'transform'     => 'sparse',
  'power_on'      => false
)
Any clues?
