Trying to configure Laravel to upload to my AWS S3 bucket. It works fine until I change the visibility to public. Then it still seems to work, or at least it shows no error, but nothing gets uploaded to AWS.
Here is the part of my register controller where I upload a profile picture:
if ($request->hasFile('avatar')) {
    $file = $request->file('avatar');
    $filename = $file->getClientOriginalName();
    $file->storeAs('avatars/' . $user->id, $filename, 's3');
    $user->update([
        'avatar' => $filename,
    ]);
}
And here is the s3 configuration in config/filesystems.php:
's3' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
    'url' => env('AWS_URL'),
    'endpoint' => env('AWS_ENDPOINT'),
    'use_path_style_endpoint' => env('AWS_USE_PATH_STYLE_ENDPOINT', false),
    'throw' => false,
    'visibility' => 'public',
],
Without the 'visibility' => 'public' line it works fine, but as soon as I add it, nothing gets uploaded anymore.
I ran into this before: you need to edit Object Ownership so that ACLs are enabled. By default ACLs are disabled, which means every object in the bucket is owned by the bucket owner and treated as private; with ACLs enabled, you can choose per object whether it is public or private.
If you are creating a new S3 bucket, enable ACLs under Object Ownership.
If you have already created the bucket, go to Permissions and edit Object Ownership to enable ACLs.
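For reference, the same Object Ownership change can also be scripted instead of using the console. This is a minimal sketch using the AWS SDK for PHP; the bucket name and region are placeholders, and it assumes your AWS credentials are already configured in the environment:

```php
<?php

require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client([
    'region'  => 'us-east-1', // placeholder region
    'version' => 'latest',
]);

// 'BucketOwnerPreferred' (or 'ObjectWriter') enables ACLs;
// the default 'BucketOwnerEnforced' setting disables them.
$s3->putBucketOwnershipControls([
    'Bucket' => 'my-bucket', // placeholder bucket name
    'OwnershipControls' => [
        'Rules' => [
            ['ObjectOwnership' => 'BucketOwnerPreferred'],
        ],
    ],
]);
```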
What worked for me was enabling public access on my bucket and adding a bucket policy that makes the files in question publicly readable. You can read the official docs here.
Also, make sure to remove the 'visibility' => 'public' line from the Laravel disk config, as that is what prevents the upload from taking place.
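For illustration, a minimal bucket policy of that kind might look like the following (my-bucket is a placeholder; apply this only if you actually want every object in the bucket publicly readable):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-bucket/*"
        }
    ]
}
```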
I am using the Spatie Media Library in Laravel with the code below to upload images to an S3 bucket:
$file = $this->fileUploadModel
->addMediaFromDisk($file->path(), 's3')
->toMediaCollection();
The image is saved to the S3 bucket in the format:
my_bucket_name/1/image_name.png
my_bucket_name/2/image_name.png
etc
However, I want to store the images inside an images folder, i.e.
my_bucket_name/images/1/image_name.png
Using plain Laravel you can do that with a simple:
$file->store('images','s3');
How can I do that?
I implemented the following solution:
In config/filesystems.php I defined the following disk:
's3-media' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
    'url' => env('AWS_URL'),
    'root' => 'images/media',
],
The 'root' key of the s3-media array sets the base path under which the images will be stored.
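With that disk defined, Spatie's toMediaCollection() accepts a disk name as its second argument, so the media can be routed to it. A minimal sketch, assuming the default collection and the same $file variable as in the question:

```php
// Route the media to the custom disk; with 'root' => 'images/media'
// the object lands under images/media/{id}/image_name.png.
$this->fileUploadModel
    ->addMediaFromDisk($file->path(), 's3')
    ->toMediaCollection('default', 's3-media');
```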
Hi everyone, I am stuck on DigitalOcean Spaces: I want to keep my files private (not accessible to the public).
First of all, I set the .env file like this:
DO_SPACES_KEY= THE KEY
DO_SPACES_SECRET= THE SECRET
DO_SPACES_ENDPOINT=https://sgp1.digitaloceanspaces.com
DO_SPACES_REGION=sgp1
DO_SPACES_BUCKET= MY BUCKET NAME
DO_SPACES_URL=https://mydomain.sgp1.digitaloceanspaces.com
Then I set up the disk in config/filesystems.php:
'do_spaces' => [
    'driver' => 's3',
    'key' => env('DO_SPACES_KEY'),
    'secret' => env('DO_SPACES_SECRET'),
    'region' => env('DO_SPACES_REGION'),
    'bucket' => env('DO_SPACES_BUCKET'),
    'url' => env('DO_SPACES_URL'),
    'endpoint' => env('DO_SPACES_ENDPOINT'),
    'visibility' => 'public',
],
After that, the controller stores the file:
// Build the image name
$stringImageReFormat = base64_encode('_' . time());
$ext = $request->file('image')->getClientOriginalExtension();
$imageName = $stringImageReFormat . "." . $ext;
$imageEncoded = File::get($request->image);

// Upload the file
Storage::disk('do_spaces')->put('public/user_image/' . $imageName, $imageEncoded);

// Insert data into the table
$user = new User();
$user->image = $imageName;
$user->save();
In my Blade template, I retrieve the file like this:
{{ Storage::disk('do_spaces')->url('public/user_image/'.$user->image) }}
This is what I get when I do not set the visibility to public:
<Error>
<Code>AccessDenied</Code>
<BucketName>mybucket</BucketName>
<RequestId>tx0000000000000088d0617-00607228ae-13200e4-sgp1b</RequestId>
<HostId>13200e4-sgp1b-sgp1-zg02</HostId>
</Error>
If I set the visibility in filesystems.php to public, I can see the files without authentication.
Thank you in advance for any help or advice.
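One common approach for keeping the objects private while still letting the app display them is a signed temporary URL instead of public visibility. A sketch using the do_spaces disk above (the 5-minute lifetime is an arbitrary choice):

```php
// Generate a pre-signed URL that expires after five minutes;
// the object itself can stay private in the Space.
$url = Storage::disk('do_spaces')->temporaryUrl(
    'public/user_image/' . $user->image,
    now()->addMinutes(5)
);
```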
We are using the Nova package for the admin backend of our Laravel app. All files and images are stored in an AWS S3 bucket.
After trying to download a file from Nova, the download begins with the name download.json and a Server Error message.
Files are stored correctly in S3 (I can check manually), and the path to them inside S3 is correctly stored in the database.
Here is the code we use to create a download field in Nova
->download(function () {
    return Storage::disk('s3')->download($this->name);
})
->onlyOnDetail()
$this->name holds the path inside the s3 bucket.
config/filesystems.php is also defined:
'disks' => [
    ...
    's3' => [
        'driver' => 's3',
        'key' => env('AWS_ACCESS_KEY_ID'),
        'secret' => env('AWS_SECRET_ACCESS_KEY'),
        'region' => env('AWS_DEFAULT_REGION'),
        'bucket' => env('AWS_BUCKET'),
        'url' => env('AWS_URL'),
    ],
The Nova documentation did not help me with this problem. Any input would be really helpful.
UPDATE:
The problem was not in the code but in the configuration.
Without changing the configuration, the following workaround did help:
Text::make('File/Document', function () {
    $linkToFile = Storage::disk('s3')->temporaryUrl($this->name, now()->addMinutes(1));
    return '<a href="' . $linkToFile . '" target="_blank">Download file</a>';
})
->asHtml(),
It is hard to see any issue without seeing the complete function, but make sure your $this->name property has the same value as the remote file key shown in your Amazon S3 bucket.
Also, make sure your .env file is correct and contains the following values:
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_access_key
AWS_DEFAULT_REGION=your_default_region
AWS_BUCKET=your_bucket_name
AWS_URL=your_url #if applicable
Hope this makes sense.
Edit: One more thing, in config/filesystems.php, this line:
'url' => env('AWS_URL'),
was changed in Laravel 6.x according to this bug and became:
'endpoint' => env('AWS_URL'),
I have a file in S3, served through CloudFront using a CNAME (with an Amazon SSL certificate). While the file is public I can access it without problems using the URL.
Valid examples with public files:
https://xxxxxxxxxxxxx.cloudfront.net/media/logos/logo1.png
https://cdn.{mydomain.com}/media/logos/logo1.png
https://s3.amazonaws.com/{mys3bucketname}/media/logos/logo1.png
In Laravel:
$disk = Storage::disk('cnames3');
$tempUrl = $disk->temporaryUrl($file, now()->addMinutes(5));
The best option I found was this question:
"Should I use CloudFront together with temporaryUrl for sensitive files in S3?"
'cnames3' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
    'url' => env('AWS_URL'),
    'endpoint' => env('AWS_ENDPOINT'),
]
=====
.env
AWS_BUCKET={mys3bucketname}
AWS_ENDPOINT=https://xxxxxxxxxxxxx.cloudfront.net
AWS_URL=https://cdn.{mydomain.com}
but the URL I produce includes the name of the bucket, so it does not work for me since access is denied:
https://cdn.{mydomain.com}/{mys3bucketname}/media/logos/logoprivate.png?{params}
How can I get a URL compatible with the CNAME, i.e. use my own domain with signed URLs? I am looking for this format:
https://cdn.{mydomain.com}/media/logos/logoprivate.png?{params}
If I take the private file and use temporaryUrl without the endpoint,
it returns a valid URL:
https://s3.amazonaws.com/{mys3bucketname}/media/logos/logoprivate.png?{params}
but without my domain, which does not work for me. I have been looking for a solution for hours; I hope you can help this beginner in the subject.
You have to set bucket_endpoint to true in the disk config; then the SDK does not prepend the bucket name to your domain.
'cnames3' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
    'url' => env('AWS_URL'),
    'bucket_endpoint' => true, // add this
    'endpoint' => env('AWS_ENDPOINT'),
]
You can check here https://github.com/aws/aws-sdk-php/blob/master/src/S3/S3Client.php
(Assuming that CloudFront with the CNAME is already working.)
In the options of CloudFront Distributions > {yourCFID}:
In the Origins tab, edit:
Origin Access Identity: Use an Identity
Restrict Bucket Access: Yes (and "Yes, Update Bucket Policy")
In Default Cache Behavior Settings:
Restrict Viewer Access (Use Signed URLs or Signed Cookies): Yes
Trusted Signers: Self
1.- First, create a private key from CloudFront (see "Creating CloudFront Key Pairs for Your Trusted Signers").
2.- Install the SDK:
composer require league/flysystem-aws-s3-v3
OR
composer require aws/aws-sdk-php
3.- Create the signing functions (see "Signing CloudFront URLs for Private Distributions").
Example:
use Aws\CloudFront\CloudFrontClient;
...
//$filesystemDisk = "s3"
private function signUrl($filesystemDisk, $resourceKey = null)
{
    $cloudFront = new CloudFrontClient([
        'region'  => config('filesystems.disks.' . $filesystemDisk . '.region'),
        'version' => '2014-11-06',
    ]);

    // Set up parameter values for the resource, for example:
    $resourceKey = 'https://cdn.mydomain.com/media/logos/logoprivate.jpg';
    $expires = time() + 200;

    // Create a signed URL for the resource using the canned policy
    $signedUrlCannedPolicy = $cloudFront->getSignedUrl([
        'url'         => $resourceKey,
        'expires'     => $expires,
        'private_key' => '/path/to/keys/amazon/cloudfront/private/pk-APKFYWFAKEFAKEFAKEIQ.pem',
        'key_pair_id' => 'APKFYWFAKEFAKEFAKEIQ',
    ]);

    return $signedUrlCannedPolicy;
}
4.- Done; you get (with your cname):
https://cdn.{mydomain.com}/media/logos/logoprivate.jpg?{params}
Amazon S3 has different storage classes, with different price brackets.
I was wondering if there is a way to choose a storage class in Laravel's Filesystem / Cloud Storage solution.
It would be good to choose a class on a per-upload basis, so I can decide throughout my application, not just once in a configuration file.
To pass additional options to Flysystem you have to use getDriver():
Storage::disk('s3')->getDriver()->put(
    'sample.txt',
    'This is a demo',
    [
        'StorageClass' => 'REDUCED_REDUNDANCY',
    ]
);
This can be used in Laravel 7
Storage::disk('s3')->put(
    'file path',
    $request->file('file'),
    [
        'StorageClass' => 'STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA|INTELLIGENT_TIERING|GLACIER|DEEP_ARCHIVE',
    ]
);
You can use the putFileAs() method as well, like below:
Storage::disk('s3')->putFileAs(
    'file path',
    $request->file('file'),
    'file name',
    [
        'StorageClass' => 'STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA|INTELLIGENT_TIERING|GLACIER|DEEP_ARCHIVE',
    ]
);
I couldn't really find this answer on the internet; hope it helps someone else.
If you want to set StorageClass at the disk level (once for every upload), you can change it in config/filesystems.php:
's3' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
    'url' => env('AWS_URL'),
    'endpoint' => env('AWS_ENDPOINT'),
    'use_path_style_endpoint' => env('AWS_USE_PATH_STYLE_ENDPOINT', false),
    'throw' => false,
    'options' => [
        'StorageClass' => 'INTELLIGENT_TIERING',
    ],
],
Other possible options...
'ACL',
'CacheControl',
'ContentDisposition',
'ContentEncoding',
'ContentLength',
'ContentType',
'Expires',
'GrantFullControl',
'GrantRead',
'GrantReadACP',
'GrantWriteACP',
'Metadata',
'RequestPayer',
'SSECustomerAlgorithm',
'SSECustomerKey',
'SSECustomerKeyMD5',
'SSEKMSKeyId',
'ServerSideEncryption',
'StorageClass',
'Tagging',
'WebsiteRedirectLocation',
Ref: thephpleague/flysystem-aws-s3-v3