Please excuse me, as there are already many questions on the same topic.
First, some background: I started working on a Laravel 9 web application a few months ago, created AWS S3 buckets and an IAM user with full access to AmazonS3, and used that user's access credentials in the .env file. I was able to upload files to the specified bucket and to access the uploaded files in my web application.
Last week, I worked on the SMTP setup for sending emails and made some changes to the .env file (though I am sure that I did not change the AWS settings). Now uploading files to the AWS S3 bucket fails with the message:
exception: "League\\Flysystem\\UnableToWriteFile"
file:"/var/www/vhosts/silkweb.ca/vendor/league/flysystem/src/UnableToWriteFile.php"
line: 24
message: "Unable to write file at location: user/profile/3/P2jFdBHTE49mxym6jxa4LHTAPvV0qDiFZ9SsYtZt.png. Error executing \"PutObject\"
I use the following code to put the file in the AWS S3 bucket:
$filepath = "/user/profile/".$user_id;
$upload_path = Storage::disk('s3')->put($filepath, $request->file('file'));
I even created a new IAM user and used that user's credentials in my .env file, but still no luck. My current .env settings for AWS are as follows:
AWS_ACCESS_KEY_ID=AKI***************DAY
AWS_SECRET_ACCESS_KEY=jotz*************************ru
AWS_DEFAULT_REGION=us-east-2
AWS_BUCKET=silkweb
AWS_URL=silkweb.s3.us-east-2.amazonaws.com
AWS_ENDPOINT=http://silkweb-s3.us-east-2.amazonaws.com
AWS_USE_PATH_STYLE_ENDPOINT=false
I have run php artisan cache:clear and php artisan config:clear several times.
Any idea why I am not able to create a file in the AWS S3 bucket?
After I commented out AWS_URL and AWS_ENDPOINT, file upload started working. The following are the working settings:
AWS_ACCESS_KEY_ID=AKI***********DG
AWS_SECRET_ACCESS_KEY=l36***************0C
AWS_DEFAULT_REGION=us-east-2
AWS_BUCKET=silkweb
AWS_USE_PATH_STYLE_ENDPOINT=false
AWS_S3_SILKWEB_URL="https://silkweb.s3.us-east-2.amazonaws.com/"
#AWS_URL=silkweb.s3.us-east-2.amazonaws.com
#AWS_ENDPOINT=http://silkweb-s3.us-east-2.amazonaws.com
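For context on why this likely worked: in Laravel's default config/filesystems.php, 'url' (fed by AWS_URL) only affects the links that Storage generates, while 'endpoint' (fed by AWS_ENDPOINT) redirects the SDK's actual API calls. The endpoint above pointed at silkweb-s3.us-east-2.amazonaws.com (hyphen instead of dot), which is not a valid S3 hostname, so PutObject had nowhere valid to go. A one-line sketch of where the 'url' value comes into play (the file name is a placeholder):

$url = Storage::disk('s3')->url('user/profile/3/avatar.png'); // string building only, no API call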
Related
I get an error when trying to upload files to a MinIO object-storage server (similar to AWS S3).
My endpoint uses a hostname such as example.com, and the bucket is "mybucket".
But when I upload a file I get an error like this:
Error executing "ListObjects" on "http://mybucket.example.com/?prefix=xxxx&max-keys=1&encoding-type=url"; AWS HTTP error: cURL error 6: Could not resolve host: mybucket.example.com
It seems the library prepends the bucket name to the server hostname, so hostname resolution fails. The error does not happen when I upload using the server's IP address instead.
Currently I'm using league/flysystem-aws-s3-v3:1.0.29
PHP 7.3.9
Laravel 7.2
By default, the client uses virtual-hosted-style URLs, i.e. the bucket name becomes a subdomain. If you want path-style URLs instead (bucket name as a path segment), set 'use_path_style_endpoint' => true in your config or when manually initializing the S3Client object; a sketch follows the links below. You can read about the related MinIO configuration here:
https://docs.min.io/docs/how-to-use-aws-sdk-for-php-with-minio-server.html
https://laravel.com/docs/7.x/homestead#configuring-minio
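For reference, a minimal sketch of the s3 disk entry in config/filesystems.php with path-style URLs enabled, assuming the endpoint and credentials come from .env (the env keys shown are Laravel's defaults):

's3' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
    'endpoint' => env('AWS_ENDPOINT'), // e.g. http://example.com for MinIO
    // path-style: http://example.com/mybucket/... instead of http://mybucket.example.com/...
    'use_path_style_endpoint' => true,
],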
I have a Laravel 8 project on my local Windows PC. I uploaded the project to my shared web hosting on DreamHost via a zip file and copied the entire database to the remote host. (I am unable to use Composer and php artisan commands on the remote server.) I am using Spatie Roles & Permissions in my project.
Later I had to add a new permission, 'holiday_vacation', to my project. I created the new permission using artisan commands on my local system. I believe that when a new permission is created, a new record is added to the permissions table, and when a user is granted a specific permission, a record is added to the model_has_permissions table; no other table is changed during this process. The newly created 'holiday_vacation' permission works fine on my local system.
However, after I manually updated the remote tables (permissions and model_has_permissions), the remote system is unable to find the new permission (holiday_vacation). The following code in a controller displays the error message, "There is no permission named holiday_vacation for guard web."
if (auth()->user()->hasPermissionTo('holiday_vacation'))
{
    dd("Has access");
}
I am absolutely sure that the permissions table has the holiday_vacation permission, as I copied the permissions and model_has_permissions tables from the local database to the remote one.
Searching on this issue turns up advice about clearing the permission cache (e.g. php artisan cache:forget spatie.permission.cache followed by php artisan cache:clear). Unfortunately, I can't execute php artisan commands on my shared hosting.
Can someone offer a workaround, please?
@BABAK ASHRAFI's comment did the trick, except that the command needed to be modified slightly (ref: https://spatie.be/docs/laravel-permission/v3/advanced-usage/cache):
app()->make(\Spatie\Permission\PermissionRegistrar::class)->forgetCachedPermissions();
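If anyone else is stuck without shell access, one workaround is to expose that call behind a throwaway route (the route path here is hypothetical; delete it as soon as the cache is cleared):

// routes/web.php — temporary, hypothetical route; remove after one use
Route::get('/flush-permission-cache', function () {
    app()->make(\Spatie\Permission\PermissionRegistrar::class)->forgetCachedPermissions();
    return 'Spatie permission cache cleared';
});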
I am trying to upload a file to Amazon S3, but I keep getting an error.
local.ERROR: The PutObject operation requires non-empty parameter: Bucket {"exception":"[object] (InvalidArgumentException(code: 0): The PutObject operation requires non-empty parameter: Bucket at /usr/share/nginx/html/PaymentCloud-API/vendor/aws/aws-sdk-php/src/InputValidationMiddleware.php:64)
I looked at all the posts related to this on Stack Overflow and GitHub.
This is how I upload the file:
Storage::disk('s3')->put('filename', 'content');
I checked the content, and it is received successfully.
I checked the S3 configuration in .env and in filesystems.php, and both are fine.
I solved this issue!
It took a lot of time to fix, but the solution turned out to be really simple: it was the cached .env configuration. I cleared the cache and restarted my server, and now it works:
php artisan config:clear
php artisan cache:clear
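If you want to confirm this diagnosis first, a quick sketch (assuming the default disk layout in filesystems.php) is to dump the bucket name the running app actually resolves:

// e.g. in php artisan tinker — null means the SDK will be handed
// an empty Bucket parameter from the stale cached config
dd(config('filesystems.disks.s3.bucket'));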
I'm trying to validate a CloudFormation template locally using the AWS CLI on my Windows machine.
The command is:
aws cloudformation validate-template --template-body file:///C:/AWS/template.json
But I'm getting the error below:
Error parsing parameter '--template-body': Unable to load param file file:///C:/AWS/template.json: [Errno 2] No such file or directory: 'file:///C:/AWS/template.json'
Check the permissions of the AWS directory and of your template.json file as well.
On Windows, files created on the system drive (C:\) are sometimes restricted by user permissions, so the CLI may not be allowed to read them.
Second way:
You can upload your template to any S3 bucket and then validate the file using its S3 URL; the AWS CLI has the permissions needed for that operation.
Below is the command; change the S3 URL to point to your bucket and stored file:
aws cloudformation validate-template --template-url https://s3.amazonaws.com/cloudformation-templates-us-east-1/S3_Bucket.template
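For example, to validate your own template this way (your-bucket is a placeholder for a bucket you own and can write to):

aws s3 cp C:/AWS/template.json s3://your-bucket/template.json
aws cloudformation validate-template --template-url https://your-bucket.s3.amazonaws.com/template.json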
I am trying to deploy my Laravel application (v7) to AWS Elastic Beanstalk. I have seen tutorials that suggest uploading a zip file containing a .env file and updating config/database.php to use the global RDS_* environment variables.
This does not work for me because I want to use CodePipeline and CodeBuild to build my application with git hooks. I have tried to set that up, but my CodeBuild run does not succeed because in my buildspec.yml file I added the usual Laravel setup commands, such as installing dependencies and migrating the application's database.
Migrating the database is where I am encountering the issue. It seems CodeBuild does not get the RDS_* variables for my app database. I have been stuck here for a while.
This has made me question how CodeBuild handles environment variables. How does it create the .env file it uses to deploy? I even added a Linux command to copy my .env.example into a new .env file, but I am having the same issues.
Any help would be greatly appreciated. Thanks
The error in the logs:
SQLSTATE[HY000] [2002] Connection refused (SQL: select * from information_schema.tables where table_schema = forge and table_name = migrations and table_type = 'BASE TABLE')
CodeBuild runs in a different environment from Elastic Beanstalk, so environment variables created in Elastic Beanstalk cannot be accessed in the container CodeBuild runs in.
What CodeBuild actually does is build your application and transfer it to an S3 bucket so that, during deployment, your app can be fetched and moved into your VPC, which in my case is an EC2 instance managed by Elastic Beanstalk.
After deployment (i.e. once the app has been moved into the VPC), the EB environment variables can be accessed by the application.
So if you want to run commands that require access to EB environment variables, CodeBuild is the wrong place to put them. You should use EB extensions instead. You can read about them here.
For my Laravel application, I added an init.config file in the .ebextensions directory at the root of my application and then added my migration command as a container command. This worked for my use case.
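For reference, a minimal sketch of what that .ebextensions/init.config could contain; the exact migrate command is an assumption based on the use case described above:

container_commands:
  01_migrate:
    # --force suppresses the interactive confirmation prompt in production
    command: "php artisan migrate --force"
    # run on a single instance only, not on every instance in the environment
    leader_only: true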