I am trying to manage DigitalOcean Spaces with Laravel 8's Storage facade, but I am getting errors which seem to come from Laravel's side.
To start, I ran this command in the terminal, as instructed in Laravel's documentation:
composer require league/flysystem-aws-s3-v3 "~1.0"
Afterwards I edited my environment variables:
DO_SPACES_KEY=*KEY*
DO_SPACES_SECRET=*SECRET*
DO_SPACES_ENDPOINT=ams3.digitaloceanspaces.com
DO_SPACES_REGION=AMS3
DO_SPACES_BUCKET=test-name
I also added this disk to config/filesystems.php:
'do_spaces' => [
    'driver' => 's3',
    'key' => env('DO_SPACES_KEY'),
    'secret' => env('DO_SPACES_SECRET'),
    'endpoint' => env('DO_SPACES_ENDPOINT'),
    'region' => env('DO_SPACES_REGION'),
    'bucket' => env('DO_SPACES_BUCKET'),
],
After visiting this test route:
Route::get('/test', function (Request $request) {
    Storage::disk('do_spaces')->put('test.txt', 'hello world');
});
I get this error:
Error executing "PutObject" on "//test-name./test-name/test.txt"; AWS HTTP error: cURL error 6: Couldn't resolve host 'test-name' (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) for http://test-name./test-name/test.txt
It seems the problem occurs while Laravel is building the URL, which should not look like it does here (wrong: http://test-name./test-name/test.txt). However, I have no clue how to fix this or what I am doing wrong, since I followed all the steps described in many tutorials and in the documentation.
I had the same problem. I solved it the following way:
Add https:// to DO_SPACES_ENDPOINT (https://ams3.digitaloceanspaces.com)
In the put method, use a path to test.txt:
Storage::disk('do_spaces')->put('YOUR_SPACE_NAME/YOUR_FOLDER_NAME(if you have)/test.txt', 'hello world');
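For clarity, a minimal sketch of the corrected pieces under that answer (the uploads/ folder is just a hypothetical example):

# .env
DO_SPACES_ENDPOINT=https://ams3.digitaloceanspaces.com

// routes/web.php
Route::get('/test', function () {
    // Prefix the key with the Space name (plus any folder), as described above
    Storage::disk('do_spaces')->put('test-name/uploads/test.txt', 'hello world');
});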
I am running Laravel 5.7 on Forge. Things are working well. I have two simple jobs: one runs when a user logs in and one when a user wants to download a large file.
Locally they both work great. Once deployed on Forge they both fail with the same exception:
ErrorException: Undefined index: queue in /home/forge/SITE/vendor/laravel/framework/src/Illuminate/Queue/Connectors/RedisConnector.php:46
The stack trace points right back to the two lines where I call dispatch();
My setup is the default for Redis. I have not changed my env or anything else related to a normal Redis setup.
Both my local and my production Forge site have:
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379
I have no idea why this would happen only in my Forge setup. TIA
--
After continuing to look into this: locally I run php artisan queue:work.
I tested running the same command after SSHing into my Forge server and got this:
ErrorException : Undefined index: queue
at /home/forge/members.spaceangels.com/vendor/laravel/framework/src/Illuminate/Queue/Connectors/RedisConnector.php:46
42| */
43| public function connect(array $config)
44| {
45| return new RedisQueue(
46| $this->redis, $config['queue'],
47| $config['connection'] ?? $this->connection,
48| $config['retry_after'] ?? 60,
49| $config['block_for'] ?? null
50| );
Exception trace:
1 Illuminate\Foundation\Bootstrap\HandleExceptions::handleError("Undefined index: queue", "/home/forge/members.spaceangels.com/vendor/laravel/framework/src/Illuminate/Queue/Connectors/RedisConnector.php", [])
/home/forge/members.spaceangels.com/vendor/laravel/framework/src/Illuminate/Queue/Connectors/RedisConnector.php:46
2 Illuminate\Queue\Connectors\RedisConnector::connect(["redis"])
/home/forge/members.spaceangels.com/vendor/laravel/framework/src/Illuminate/Queue/QueueManager.php:157
Please use the argument -v to see more details.
My config/queue.php setting:
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => 'default',
    'retry_after' => 90,
    'block_for' => null,
],
I feel like something is missing between my config and how Forge enables Redis.
Well, it seems like your .env file is missing:
CACHE_DRIVER=redis
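While you're in the .env file on the Forge box, a sketch of the relevant block may help as a checklist; QUEUE_CONNECTION is an assumption on my part (not part of the original setup), included only because in Laravel 5.7 it selects which connection from config/queue.php is used:

CACHE_DRIVER=redis
QUEUE_CONNECTION=redis
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379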
I'm trying to upload files using Laravel 5.4 to an AWS S3 bucket (I run the script from localhost), but I get this error:
Error executing "PutObject" on "https://bucket_name.s3.amazonaws.com/1520719994357906.png"; AWS HTTP error: cURL error 60: SSL certificate problem: unable to get local issuer certificate (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)
In filesystems.php:
's3' => [
    'driver' => 's3',
    'key' => 'KEY_HERE',
    'secret' => 'SECRET_HERE',
    'region' => 'us-east-1',
    'bucket' => 'bucket_name', // has global access to read files
],
In the controller:
Storage::disk('s3')->put($imageName, file_get_contents(public_path('galleries/').$imageName));
How do I solve this? If I upload the app to an EC2 instance, does it require SSL to be installed to upload files to the S3 bucket? Thanks in advance.
Uploading from the server worked fine, with no need to install SSL; it just doesn't work from localhost.
It just doesn't work from localhost. If you want to get it working on localhost, you have to make some changes in the vendor directory (for your local use only):
vendor/guzzlehttp/guzzle/src/Handler/CurlFactory.php
Around line 350, comment out these two lines and add the new two lines below them (or simply replace them, as you wish):
if ($options['verify'] === false) {
    unset($conf[\CURLOPT_CAINFO]);
    $conf[\CURLOPT_SSL_VERIFYHOST] = 0;
    $conf[\CURLOPT_SSL_VERIFYPEER] = false;
} else {
    /* $conf[\CURLOPT_SSL_VERIFYHOST] = 2;
    $conf[\CURLOPT_SSL_VERIFYPEER] = true; */ // comment out these two lines
    $conf[\CURLOPT_SSL_VERIFYHOST] = 0;
    $conf[\CURLOPT_SSL_VERIFYPEER] = false;
}
Now it works fine.
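A less invasive variant of the same local-only workaround, assuming Laravel forwards extra disk-config keys to the underlying S3 client (which accepts Guzzle's 'http' => ['verify' => ...] option), would be to turn verification off, or point it at a CA bundle, from config/filesystems.php instead of editing vendor code:

's3' => [
    'driver' => 's3',
    'key' => 'KEY_HERE',
    'secret' => 'SECRET_HERE',
    'region' => 'us-east-1',
    'bucket' => 'bucket_name',
    // Local development only: disable TLS verification...
    'http' => ['verify' => false],
    // ...or, preferably, point it at a downloaded cacert.pem bundle:
    // 'http' => ['verify' => '/path/to/cacert.pem'],
],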
I have a Laravel 5.4 project integrated and working. I had actually configured this correctly, and php artisan commands were working perfectly before. But at some point during development (I implemented a scheduled task using Laravel, and I'm not sure whether the issue appeared after that) it started producing an error on every php artisan command. Can anybody help me with this?
The following is the error log for any artisan command:
PHP Fatal error: Uncaught
Symfony\Component\Debug\Exception\FatalThrowableError: Type error:
Argument 2 passed to Illuminate\Routing\UrlGenerator::__construct()
must be an instance of Illuminate\Http\Request, null given, called in
/var/www/html/project/vendor/laravel/framework/src/Illuminate/Routing/RoutingServiceProvider.php
on line 60 in
/var/www/html/project/vendor/laravel/framework/src/Illuminate/Routing/UrlGenerator.php:103
Stack trace:
#0 /var/www/html/projrct/vendor/laravel/framework/src/Illuminate/Routing/RoutingServiceProvider.php(60):
Illuminate\Routing\UrlGenerator->__construct(Object(Illuminate\Routing\RouteCollection),
NULL)
#1 /var/www/html/project/vendor/laravel/framework/src/Illuminate/Container/Container.php(290):
Illuminate\Routing\RoutingServiceProvider->Illuminate\Routing{closure}(Object(Illuminate\Foundation\Application))
#2 /var/www/html/project/vendor/laravel/framework/src/Illuminate/Container/Container.php(746):
Illuminate\Container\Container->Illuminate\Container{closur in
/var/www/html/project/vendor/laravel/framework/src/Illuminate/Routing/UrlGenerator.php
on line 103
Please make sure that you are not using url(), asset(), or other helper functions inside your configuration files.
In my case the url() helper function in my filesystems.php was causing the issue. I removed it and everything works fine.
An alternative to commenting out the url() and asset() calls could be to check the environment at run time:
return [
    'URL' => app()->runningInConsole() ? '' : url(''),
    ...
];
If you really need the function to be inside your config, you could use PHP_SAPI to check whether the app is running over HTTP or from the CLI:
'redirect' => PHP_SAPI === 'cli' ? false : url('synchronise')
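For context, a sketch of how that guard might sit inside a config file (config/example.php and the 'synchronise' route are hypothetical names taken from the snippet above):

// config/example.php
return [
    // Skip building the URL when running via artisan (CLI), where no request is bound yet
    'redirect' => PHP_SAPI === 'cli' ? false : url('synchronise'),
];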
I figured out the problem: when you are running any artisan command you should avoid using helper functions in any of your config files. Just comment those out, run the artisan command, and uncomment them in your config files afterwards.
// in config/any_file.php
return [
    'name' => 'Laravel',
    'url' => url('/')
];

// just comment out the url() helper
return [
    'name' => 'Laravel',
    //'url' => url('/')
];
Well, I got stuck on the same issue while I was using asset() in the config file (adminlte.php) of AdminLTE.
Please comment out your asset() and url() calls in config files while using artisan commands, like this:
[
    'type' => 'js',
    'asset' => false,
    // 'location' => asset('js/waitme/waitMe.min.js'),
],
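If you'd rather keep the asset path available for HTTP requests, the same runtime check from the earlier answer could be applied here as well (a sketch, reusing the hypothetical adminlte.php entry above):

[
    'type' => 'js',
    'asset' => false,
    // Only build the asset URL when handling an HTTP request, not under artisan
    'location' => app()->runningInConsole() ? '' : asset('js/waitme/waitMe.min.js'),
],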
Using Laravel 5.3 and v4 of the spatie/laravel-backup package.
I am using this package from Spatie, which lets me take backups with a simple terminal command. It is pretty straightforward to set up, and when I run the command the backup runs as intended.
But there is also an option in the config file to set up notifications (send mail, post to Slack, ...) after a backup. This does not seem to do anything for me. I neither receive mails (and I have set my mail address) nor see posts in my dedicated Slack channel (and I have added the webhook).
I have already included the following composer packages since researching this problem:
guzzlehttp/guzzle
maknz/slack-laravel
maknz/slack
This is the simple notifications section in the config file:
'notifications' => [

    'notifications' => [
        \Spatie\Backup\Notifications\Notifications\BackupHasFailed::class => ['mail'],
        \Spatie\Backup\Notifications\Notifications\UnhealthyBackupWasFound::class => ['mail'],
        \Spatie\Backup\Notifications\Notifications\CleanupHasFailed::class => ['mail'],
        \Spatie\Backup\Notifications\Notifications\BackupWasSuccessful::class => ['mail'],
        \Spatie\Backup\Notifications\Notifications\HealthyBackupWasFound::class => ['mail'],
        \Spatie\Backup\Notifications\Notifications\CleanupWasSuccessful::class => ['mail'],
    ],

    /*
     * Here you can specify the notifiable to which the notifications should be sent. The default
     * notifiable will use the variables specified in this config file.
     */
    'notifiable' => \Spatie\Backup\Notifications\Notifiable::class,

    'mail' => [
        'to' => 'nicolas#******.***',
    ],

    'slack' => [
        'webhook_url' => 'https://hooks.slack.com/services/*****/*****/*************',
    ],
],
I'm not really sure what I am forgetting. Thanks for the help.
The output I get in the terminal after running the commands: gist.github
Extra:
This is a talk at Laracon EU 2016, where the creator (Freek) shows off his package.
How can I change the default log file location <project-name>/storage/logs/laravel.log to something like /var/logs/<project-name>/laravel.log?
I resolved this by using the errorlog logging mode and configuring the web server.
1. Configure Laravel:
In the config/app.php configuration file:
'log' => 'errorlog'
Read more about Laravel log configuration: http://laravel.com/docs/5.1/errors#configuration
2. Configure the web server (in my case Nginx):
error_log /var/log/nginx/<project_name>-error.log;
For those who don't want to use errorlog and really just want to change the file being logged to, you can do this:
// Add a handler that writes to the file named in the APP_LOG_FILE env variable
\Log::useFiles(env('APP_LOG_FILE'), config('app.log_level', 'debug'));

// Grab that newly added handler (Monolog returns the most recently pushed handler first)
// and stop records from bubbling down to the default laravel.log handler
$handlers = \Log::getMonolog()->getHandlers();
$handler = array_shift($handlers);
$handler->setBubble(false);
in App\Providers\AppServiceProvider.php (e.g. in its boot() method), or any provider for that matter. This will log to the value of APP_LOG_FILE instead of the default laravel.log. Set bubbling to true and the application will log to both files.
For anyone still coming across this post in hopes of changing their log file location, I believe this is now easier in newer versions of Laravel. I am currently using 8.x
In your /config/logging.php you can define the path for your single and daily logs. Just update whichever one you are looking to change, and make sure you also include the name of the log file, not just the path to where you'd like it saved.
'single' => [
    'driver' => 'single',
    'path' => "/your/desired/log/path/file.log", // edit here
    'level' => env('LOG_LEVEL', 'debug'),
],

'daily' => [
    'driver' => 'daily',
    'path' => "/your/desired/log/path/file.log", // edit here
    'level' => env('LOG_LEVEL', 'debug'),
    'days' => 14,
],
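If you'd rather not hard-code the path, a small sketch of the same idea driven by an environment variable (LOG_PATH is a hypothetical .env key; env() and storage_path() are standard Laravel helpers):

'single' => [
    'driver' => 'single',
    // Falls back to the stock location when LOG_PATH is not set in .env
    'path' => env('LOG_PATH', storage_path('logs/laravel.log')),
    'level' => env('LOG_LEVEL', 'debug'),
],

You would then set, for example, LOG_PATH=/var/logs/my-project/laravel.log in your .env file.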