Laravel, AWS S3 "Access Denied" Private Access

I created an S3 bucket and can upload to it. I can view the upload programmatically in Laravel as long as the bucket is fully public; once I make it private, I get an "Access Denied" error. I've created a user, given that user the AmazonS3FullAccess permission, and added the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to .env. I followed this guide: https://dev.to/aschmelyun/getting-started-with-amazon-s3-storage-in-laravel-5b6d. Any ideas?

Does it still store correctly, or do you only get the error when trying to view the file?
Have you taken a look at
https://laravel.com/docs/8.x/filesystem#file-urls
You should be using something like
Storage::url($customer->file-path-name);
By the way, try to avoid hyphen-separated names; go either camelCase or snake_case. Hyphens are typically reserved for file names (the actual file, not the reference in your DB).
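If the bucket is meant to stay private, a plain public URL will keep returning "Access Denied" no matter what; a signed temporary URL is the usual fix. A minimal sketch, assuming a snake_case column file_path holding the S3 object key (the column name is hypothetical):

use Illuminate\Support\Facades\Storage;

// Generate a signed URL against the private bucket; it expires after 30 minutes
$url = Storage::disk('s3')->temporaryUrl(
    $customer->file_path,       // the object key stored in the DB
    now()->addMinutes(30)
);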

Related

The specified key does not exist - Laravel

I'm trying to create a temporary URL for a file in my application. I'm able to upload the file to the S3 bucket, and calling \Storage::temporaryUrl($this->url, now()->addHour(1)) generates the following URL:
https://xxxxxx.s3.eu-west-2.amazonaws.com/https%3A//xxxxxxx.s3.eu-west-2.amazonaws.com/images/fYr20cgYh3nAwoEEQCOTaVTLLo7nRFrXjp7cYcCz.jpg?X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAVHH6TLEV3Z2FBWLY%2F20210622%2Feu-west-2%2Fs3%2Faws4_request&X-Amz-Date=20210622T191649Z&X-Amz-SignedHeaders=host&X-Amz-Expires=3600&X-Amz-Signature=6300aa81e69c6f4c96cb6f319a2b5ed2bfc2b2767138994928a49f3f93906745
When I click on this URL I get the following error:
The specified key does not exist.
From this question https://stackoverflow.com/a/28654035/4581336 I have checked the following:
the file name exists in my bucket, and it is a copy-paste of the Object URL shown for the file in my bucket
I tried removing the extension from the file name to see if it affects the URL, but no luck either; having xxxx.jpg vs xxxxx as the file name behaves the same
I'm fairly new to the AWS world, so I will copy-paste things that I think might be important to help solve the issue.
The IAM user created has the following permissions:
AmazonS3FullAccess
The bucket's Block Public Access settings for this account have everything blocked.
My bucket's public permissions have everything enabled as well.
I'm currently logged in as the root user (so I'm assuming I can do whatever I want, since I'm root).
If I make all my buckets public, I'm able to access the files using the URL generated by the temporaryUrl method.
The final goal
The objective of the bucket is to have a place to store files uploaded by users of my application. I don't want all of the files to be public, so I would like to restrict users to the files they own; that is why I create a temporary URL with
Storage::disk('s3')->temporaryUrl($url, now()->addMinutes(30));
Since I'm fairly new to storing files in S3, my logic may be flawed. Please correct me if this is not how it's supposed to go.
Questions I have looked at that didn't help me
Amazon S3 exception: "The specified key does not exist"
CloudFront + S3 Website: "The specified key does not exist" when an implicit index document should be displayed
aws-sdk: NoSuchKey: The specified key does not exist?
NoSuchKey: The specified key does not exist AWS S3 Codebuild
AWS::S3::NoSuchKey The specified key does not exist
In your first URL, it seems like you've got the hostname there twice: https://xxxxxx.s3.eu-west-2.amazonaws.com appears once, and then a second time URL-encoded. Are you storing the full hostname in the $url parameter to temporaryUrl? You should only be passing the key (the file's path within the bucket) to that method.
This doesn't sound like a permission error; it appears you have access to the bucket but just aren't getting the file path correct.
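In other words, a minimal sketch of that flow (the image field name is hypothetical): keep the key that store() returns, and sign only that key.

use Illuminate\Support\Facades\Storage;

// store() returns the object key, e.g. "images/fYr20cgY...jpg" -- save that, not a full URL
$key = $request->file('image')->store('images', 's3');

// Later, pass only the key; the SDK prepends the bucket hostname itself
$url = Storage::disk('s3')->temporaryUrl($key, now()->addMinutes(30));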

How to access storage file from .env file in Laravel

I created a JSON key file to use BigQuery in my Laravel project (see the BigQuery docs). I put the file in the storage folder to limit access to it.
I only need to access it from my .env file.
GOOGLE_APPLICATION_CREDENTIALS='/storage/file.json'
Naturally, I cannot access the folder that easily. I know there are ways to access it, but creating a symbolic link would make the file accessible from anywhere, and I don't want that. Is there a secure way to reference that file from my .env file? Or is there a better way, another folder I should put the JSON file in?
I highly discourage the use of env variables for this; instead, use a secret manager to load the credentials at runtime, or KMS (Key Management Service).
Look at laravel-env-security for implementation.
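If you do keep the credentials as a file, note that storage/ is not web-accessible unless you symlink it, so referencing it by server path is safe. One common pattern is to keep only a relative path in .env and resolve it in a config file; a minimal sketch, where the config file name, env key, and app/keys/bigquery.json location are all assumptions:

// config/bigquery.php
return [
    // resolves to <project>/storage/app/keys/bigquery.json
    'key_file' => storage_path(env('BIGQUERY_KEY_FILE', 'app/keys/bigquery.json')),
];

// elsewhere, hand the absolute path straight to the client
$bigQuery = new \Google\Cloud\BigQuery\BigQueryClient([
    'keyFilePath' => config('bigquery.key_file'),
]);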

Using Amazon AWS Lambda Function Serverless Image Handler Resizer function template results in No S3 Access

This is exactly what I did.
I have an S3 bucket, and the image in it is completely public: you can paste the URL directly into any browser and it displays the image. CORS is set to allow all origins.
I followed the AWS tutorial and clicked Launch Stack to launch the default template for serverless image handling.
https://docs.aws.amazon.com/solutions/latest/serverless-image-handler/deployment.html#step1
While creating the stack, it asks you to enter parameters, including the source bucket. I entered the SOURCE_BUCKET name exactly! I even double-checked. The image in that bucket is public too. So that's the whole setup, right? I'm using the default template, it asks for the bucket name, I enter it correctly, and yet when I try to use the resizer it can't find the key.
When I type in the CloudFront URL and the key, I get:
{"status":500,"code":"NoSuchKey","message":"The specified key does not exist."}
I have no idea what else to do! The image is completely PUBLIC! I am certain I am typing in the key and URL correctly; I had gotten this to work before, but something changed all of a sudden. So I deleted the entire stack and created it again from scratch, and now it is not even making a connection with the key!
What do I do? What other permissions do I need to turn on? And why, when creating the stack, do they not set this up for you?
UPDATE
So I found out that if I add an image to the root of the bucket, it works! But if the key is a path in a subfolder, it does not work.
Keys are not the same as paths in S3. If you are going to stick all your images into an S3 pseudo-folder, call it images and make sure that prefix is part of the permissions granted to the Lambda function. Just because something is public does not mean Lambda will be able to access it.
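For example, a minimal sketch of the kind of statement the handler's Lambda role needs; SOURCE_BUCKET is the placeholder from the question, and the images/ prefix is the assumption above:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::SOURCE_BUCKET/images/*"
    }
  ]
}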

AWS S3 and CEPH / Rados Bucket permission inheritance

I'm having issues with creating a publicly readable bucket. I'm working with a CEPH / Rados store through the Amazon aws-sdk v1.60.2.
Following several tutorials, I created a bucket with
s3.buckets.create('bucketName', :acl => :public_read)
I then uploaded a number of files to s3.buckets['bucketName']. But when I look at the specific permissions for the bucket and its internal objects, I see that the bucket has READ granted to the AllUsers group as well as FULL_CONTROL for the user I created the bucket with. The objects, however, do not inherit the anonymous read permission, and I need the objects in the bucket to be readable anonymously.
As a note I see these permissions when I run s3.buckets['bucketName'].acl. When I try to run s3.buckets['bucketName'].policy I get the following error that makes no sense:
/var/lib/gems/1.9.1/gems/json-1.8.3/lib/json/common.rb:155:in `parse': 757: unexpected token at '<?xml version="1.0" encoding="UTF-8"?><ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Name>erik.test</Name><Prefix></Prefix><Marker></Marker><MaxKeys>1000</MaxKeys><IsTruncated>false</IsTruncated></ListBucketResult>' (JSON::ParserError)
from /var/lib/gems/1.9.1/gems/json-1.8.3/lib/json/common.rb:155:in `parse'
from /var/lib/gems/1.9.1/gems/aws-sdk-v1-1.60.2/lib/aws/core/policy.rb:146:in `from_json'
from /var/lib/gems/1.9.1/gems/aws-sdk-v1-1.60.2/lib/aws/s3/bucket.rb:621:in `policy'
from test.rb:20:in `<main>'
The above error looks like aws-sdk is calling a JSON parser on an XML string, which shouldn't be happening.
I cannot simply upload the objects with explicit permissions, because in my project BOSH uploads to the store automatically.
Unfortunately, ACLs are not inherited: the bucket's anonymous READ makes it possible to list the objects in the bucket, but as it stands the read permission doesn't carry over to the items uploaded into it; each object gets its own ACL.
http://ceph.com/docs/master/radosgw/s3/
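If per-object ACLs are the only lever, one workaround is to sweep the bucket after BOSH has uploaded and apply a canned ACL to each object. A minimal sketch with aws-sdk v1, assuming the gateway honors canned ACL symbols the way S3 does:

require 'aws-sdk-v1'

s3 = AWS::S3.new
# Bucket ACLs don't propagate, so grant anonymous read object by object
s3.buckets['bucketName'].objects.each do |object|
  object.acl = :public_read
end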

Once an image is uploaded to parse.com, can I point a different domain name at it (rather than http://files.parsetfss.com/blah/blah)?

I have a mobile app that uploads images to parse.com.
I can reference the photo with a URL that looks like:
http://files.parsetfss.com/<BigLongHexString>/tfss<BigLongHexString>-Photo
BUT, for a variety of reasons, I'd like to be able to refer to that image using a different domain name so it would look more like
http://www.mydomainname.com/<BigLongHexString>/tfss<BigLongHexString>-Photo
Can this be done? I just attempted it by creating a CNAME record pointing *.mydomainname.com at files.parsetfss.com, but the result was:
<Error>
<Code>NoSuchBucket</Code>
<Message>The specified bucket does not exist</Message>
<BucketName>www.mydomainname.com</BucketName>
<RequestId>9B3C39803AABE102</RequestId>
<HostId>
5XbotEGZP/t03kgr8FkmKvyLTHN6ZBhoRcrmXU7pBn1yz1TngkulQ/RSRuAqgBxm
</HostId>
</Error>
Has anyone accomplished this? Is it possible?
I don't believe it is possible, at least the way you are trying to do it, since the files are actually stored in Amazon S3 buckets. Because your domain isn't the one set up on those buckets, you get this error.
If you are using Parse hosting (https://www.parse.com/docs/hosting_guide#webapp) and your domain is set to point to it, you can create a route there that handles the redirect by returning the corresponding parsetfss URLs.
