I'm trying to create a temporary URL for a file in my application. I can upload the file to an S3 bucket, and calling \Storage::temporaryUrl($this->url, now()->addHour(1)) generates the following URL:
https://xxxxxx.s3.eu-west-2.amazonaws.com/https%3A//xxxxxxx.s3.eu-west-2.amazonaws.com/images/fYr20cgYh3nAwoEEQCOTaVTLLo7nRFrXjp7cYcCz.jpg?X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAVHH6TLEV3Z2FBWLY%2F20210622%2Feu-west-2%2Fs3%2Faws4_request&X-Amz-Date=20210622T191649Z&X-Amz-SignedHeaders=host&X-Amz-Expires=3600&X-Amz-Signature=6300aa81e69c6f4c96cb6f319a2b5ed2bfc2b2767138994928a49f3f93906745
When I click on this URL I get the following error:
The specified key does not exist.
From this question https://stackoverflow.com/a/28654035/4581336 I have checked the following:
the file name exists in my bucket and is a copy-paste of the key shown under the Object URL in my bucket
I tried removing the extension from the file name to see if it affects the URL, but no luck; xxxx.jpg vs xxxxx as the file name behaves the same
I'm fairly new to the AWS world, so I'll copy-paste anything I think might be important to help solve the issue.
The IAM user created has the following permissions:
AmazonS3FullAccess
The bucket's Block Public Access settings for this account have everything blocked:
My bucket's public permissions have everything enabled as well:
I'm currently logged in as the root user (so I'm assuming I can do whatever I want)
If I make all my buckets public, I'm able to access the files using the extracted URL generated by the temporaryUrl method
The final goal
The objective of the bucket is to store files uploaded by users in my application. I don't want all the files to be public, so I'd like to restrict users to the files they own by creating a temporary URL with:
Storage::disk('s3')->temporaryUrl($url, now()->addMinutes(30));
Since I'm fairly new to storing files in S3, my logic may be flawed. Please correct me if this isn't how it's supposed to work.
Questions I have looked at but didn't help me
Amazon S3 exception: "The specified key does not exist"
CloudFront + S3 Website: "The specified key does not exist" when an implicit index document should be displayed
aws-sdk: NoSuchKey: The specified key does not exist?
NoSuchKey: The specified key does not exist AWS S3 Codebuild
AWS::S3::NoSuchKey The specified key does not exist
In your first URL, it looks like you've got the hostname there twice - https://xxxxxx.s3.eu-west-2.amazonaws.com appears once, and then a second time URL-encoded. Are you storing the full hostname in the $url parameter to temporaryUrl? You should pass only the key (the path of the file within the bucket) to that method.
The error doesn't sound like a permission error - it appears you have access to the bucket but just aren't getting the file path correct.
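A minimal sketch of the fix, reusing the key from the URL above ($this->url is the property from the snippet in the question; the literal assignment here is only for illustration):

// Wrong: $this->url holds the full object URL, so S3 signs a key that
// literally starts with "https%3A//..." and reports NoSuchKey.
// $this->url = 'https://xxxxxx.s3.eu-west-2.amazonaws.com/images/fYr20cgYh3nAwoEEQCOTaVTLLo7nRFrXjp7cYcCz.jpg';

// Right: store only the key, relative to the bucket root...
$this->url = 'images/fYr20cgYh3nAwoEEQCOTaVTLLo7nRFrXjp7cYcCz.jpg';

// ...and sign that key with the same call as before.
$signedUrl = \Storage::disk('s3')->temporaryUrl($this->url, now()->addHour(1));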
Related
I have developed a small web application that runs a web server in golang.
Each user can log in, view the list of their docs (previously uploaded), and click on an item to view an HTML page that shows some fields of the document plus a tag with a src attribute
The src attribute contains a URL like "mydocuments/download/123-456-789.pdf"
On the server side I handle the URL ("mydocuments/download/*") via an http.Handler
mymux.HandleFunc(pat.Get("/mydocuments/download/:docname"), DocDownloadHandler)
where:
I check that the user has the rights to view the document in the URL
Then I create a file server that re-maps the URL to the real path of the folder where the files are stored on the server's filesystem
fileServer := http.StripPrefix("/mydocuments/download/", http.FileServer(http.Dir("/the-real-path-to-documents-folder/user-specific-folder/")))
and of course I serve the files
fileServer.ServeHTTP(w, r)
IMPORTANT: the directory where the documents are stored is not the static-files directory I use for the website, but a directory where all files end up after being uploaded by users.
My QUESTION
As I am trying to convert the code to also work on Google Cloud, I am changing it so that files are stored in a bucket (or, better, in "sub-directories" of a bucket, even though those don't properly exist).
How can I modify the code to map the real document URL to the object available via the cloud storage bucket?
Can I still use the http.FileServer technique above? (If so, what should I use instead of http.Dir to map the bucket "sub-folder" path where the documents are stored?)
I hope I was clear enough in explaining my issue, and I apologise in advance for any unclear points.
Some options are:
Give the user direct access to the resource using a signed URL.
Write code to proxy the request to GCS.
Use http.FS with an fs.FS backed by GCS.
It's possible that an fs.FS for GCS already exists, but you may need to write one.
You can use http.FileSystem since it is an interface and can be implemented however you like.
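As a sketch of the second option (proxying to GCS), using the cloud.google.com/go/storage client; the bucket name and folder prefix below are placeholders, not values from the question:

package main

import (
	"errors"
	"io"
	"net/http"
	"strings"

	"cloud.google.com/go/storage"
)

var gcsClient *storage.Client // created once at startup via storage.NewClient(ctx)

func DocDownloadHandler(w http.ResponseWriter, r *http.Request) {
	// ... the existing per-user authorization check goes here ...

	// Map the URL to an object key, mirroring what
	// http.StripPrefix + http.Dir did on the local filesystem.
	key := strings.TrimPrefix(r.URL.Path, "/mydocuments/download/")

	rc, err := gcsClient.Bucket("my-documents-bucket").
		Object("user-specific-folder/" + key).
		NewReader(r.Context())
	if errors.Is(err, storage.ErrObjectNotExist) {
		http.NotFound(w, r)
		return
	}
	if err != nil {
		http.Error(w, "storage error", http.StatusInternalServerError)
		return
	}
	defer rc.Close()

	w.Header().Set("Content-Type", rc.Attrs.ContentType)
	io.Copy(w, rc) // stream the object body to the client
}

For the first option, the same package can mint a time-limited link instead (BucketHandle.SignedURL, or the package-level storage.SignedURL in older releases); you would then redirect the user to that link rather than streaming the bytes yourself.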
I created an S3 bucket and can upload to it. I can view the upload programmatically in Laravel as long as my S3 bucket is fully public. Once I make it private, I get an "Access Denied" error. I've created a user, given that user the AmazonS3FullAccess permission, and put the AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY into .env. I followed this guide: https://dev.to/aschmelyun/getting-started-with-amazon-s3-storage-in-laravel-5b6d. Any ideas?
Does it still store correctly, or do you only get the error when trying to view the file?
Have you taken a look at
https://laravel.com/docs/8.x/filesystem#file-urls
You should be using something like
Storage::url($customer->file-path-name);
By the way, try avoiding hyphen-separated names. Go camelCase or snake_case instead; hyphens are typically reserved for file names (the actual file, not the reference in your DB).
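For a private bucket specifically, Storage::url() gives a plain unsigned URL, so a signed temporary URL (as in the first question on this page) is the usual route. A minimal sketch, where file_path is a hypothetical snake_case column holding only the object key:

// Store the upload and keep only the object key (e.g. "uploads/abc123.jpg"),
// never the full object URL.
$path = $request->file('document')->store('uploads', 's3');
$customer->file_path = $path; // hypothetical column name
$customer->save();

// Later: generate a signed URL that works even though the bucket is private.
$url = \Storage::disk('s3')->temporaryUrl($customer->file_path, now()->addMinutes(30));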
I'm trying to access a file in a Google Drive directory, but when I link to it using the file ID provided by the API, it says I don't have permission. What I noticed is that the file ID in the URL is different from the one returned by the API. Why?
Using the Google API's test page, it returns a "Not Found" error (404) rather than the "No Permissions" error. Does anybody know how to get the ID that links to the file (the same one in the URL) instead of the file's ID?
Edit: I found that the file resource has a property called "webViewLink". Is that the link to the file to use instead of the ID?
When you are trying the Drive API, you can set which values you want returned from your call using the fields property:
webViewLink will return you the link that's shown when you open the file in your browser.
id will return you the ID of the file.
I specified some values, but you can see HERE all the values you could use; if you pass "*", all of them are returned.
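As a sketch with the PHP client (google/apiclient, using its older class naming; the $client credential/scope setup is assumed, not shown):

// $client is an authorized Google_Client; its setup is omitted here.
$service = new Google_Service_Drive($client);

// Ask only for the fields you need.
$file = $service->files->get($fileId, ['fields' => 'id, webViewLink']);

echo $file->getId();          // the file's ID
echo $file->getWebViewLink(); // the link that opens the file in the browser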
HERE you can see why you are getting that error. Most likely you don't have enough permissions, since (as I understood from your question) you have already checked that the file exists.
This is exactly what I did.
I have an S3 bucket, and the image in it is completely public. You can paste the URL directly into any browser and it displays the image. CORS is set to allow all origins.
I followed the AWS tutorial and clicked Launch Stack to launch the default template for serverless image handling.
https://docs.aws.amazon.com/solutions/latest/serverless-image-handler/deployment.html#step1
While creating the stack, it asks you to enter parameters, including the source bucket. I entered the SOURCE_BUCKET name exactly (I even double-checked), and the image in that bucket is public too. So this should be the right link. I am using the default template, it asks for the bucket name, I enter it correctly, and then I try the resizer exactly as described and it can't find the key.
When I type in the cloudfront URL and the key I get:
{"status":500,"code":"NoSuchKey","message":"The specified key does not exist."}
I have no idea what else to do! The image is completely PUBLIC! I am certain I am typing the key and URL correctly. I had gotten this to work before, but something changed all of a sudden, so I deleted the entire stack and recreated it from scratch, and now it is not even making a connection with the key!
What do I do? What other permissions do I need to turn on? And why doesn't creating the stack set this up for you?
UPDATE
So I found out that if I add an image to the root of the bucket, it works! But if the key is a path in a subfolder, it does not work.
Keys are not the same as paths in S3. If you are going to stick all your images into an S3 pseudo-folder, call it images and make sure that prefix is part of the permissions granted to the Lambda function. Just because something is public does not mean Lambda will be able to access it.
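As a sketch, the Lambda role's policy needs a statement whose resource covers that prefix, something like the following (SOURCE_BUCKET and the images/ prefix are placeholders):

{
  "Effect": "Allow",
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::SOURCE_BUCKET/images/*"
}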
I'm having issues creating a publicly readable bucket. I'm working with a CEPH / Rados store using the Amazon aws-sdk v1 (1.60.2).
Following many different tutorials, I created a bucket with
s3.buckets.create('bucketName', :acl => :public_read)
I then uploaded a number of files to s3.buckets['bucketName']. But when I look at the specific permissions for the bucket and its internal objects, I see the bucket has READ permission granted to the AllUsers group as well as FULL_CONTROL for the user I created the bucket with. The objects, however, do not inherit the anonymous read permission. I need the objects in the bucket to be readable anonymously.
As a note, I see these permissions when I run s3.buckets['bucketName'].acl. When I try to run s3.buckets['bucketName'].policy, I get the following error, which makes no sense:
/var/lib/gems/1.9.1/gems/json-1.8.3/lib/json/common.rb:155:in `parse': 757: unexpected token at '<?xml version="1.0" encoding="UTF-8"?><ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Name>erik.test</Name><Prefix></Prefix><Marker></Marker><MaxKeys>1000</MaxKeys><IsTruncated>false</IsTruncated></ListBucketResult>' (JSON::ParserError)
from /var/lib/gems/1.9.1/gems/json-1.8.3/lib/json/common.rb:155:in `parse'
from /var/lib/gems/1.9.1/gems/aws-sdk-v1-1.60.2/lib/aws/core/policy.rb:146:in `from_json'
from /var/lib/gems/1.9.1/gems/aws-sdk-v1-1.60.2/lib/aws/s3/bucket.rb:621:in `policy'
from test.rb:20:in `<main>'
The above error looks like aws-sdk is calling a JSON parser on an XML string, which shouldn't be happening.
I cannot simply upload the objects with explicit permissions because my project would have BOSH uploading to the store automatically.
Unfortunately, ACLs are not inherited by objects, so while it is possible to read the list of objects in the bucket, the anonymous read permission doesn't carry over to the items uploaded into it.
http://ceph.com/docs/master/radosgw/s3/
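One workaround with aws-sdk v1 is to set a canned ACL per object, either after the fact or at upload time (a sketch; 'bucketName' and the key are placeholders, and since BOSH uploads automatically, the after-the-fact loop may be the more practical route):

require 'aws-sdk-v1'

s3 = AWS::S3.new
bucket = s3.buckets['bucketName']

# The bucket's ACL is not inherited, so grant anonymous read on each object.
bucket.objects.each do |obj|
  obj.acl = :public_read
end

# Or set the ACL when writing an object in the first place:
bucket.objects['path/to/file.txt'].write('contents', :acl => :public_read)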