Amazon S3 sporadic "Access is denied" errors over HTTPS

Some of our users have trouble loading the .js and .css files hosted on S3. We track JS errors on the server through window.onerror.
In the logs I can quite often see that, for some users, loading the .js from S3 produces an "Access is denied" error.
We use the bucketname.s3.amazonaws.com/folder/file.js syntax.
What can we do about it? Is it a problem on Amazon's side?

You probably need to set up a bucket policy on S3 with a statement whose Effect is Allow, granting the s3:GetObject action to all principals on your bucket's objects. You can generate policies for S3 (among other services) here: http://awspolicygen.s3.amazonaws.com/policygen.html.
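As a sketch, a minimal policy of that shape could look like the following (the bucket name is a placeholder; generate and validate your own with the policy generator linked above):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}
```

Note the /* in the Resource: s3:GetObject applies to objects, so the ARN must cover the objects inside the bucket, not just the bucket itself.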

Related

Files stored in AWS bucket to only be downloaded if users are logged into Laravel app. Not by directly going to URL

I only want users to be able to download files saved in my AWS bucket if they are logged into my Laravel app. I do not want users to be able to download the files when they are not logged into the system, nor by going directly to the URL. Can someone please help me achieve this? I think I may need to somehow set the documents as private, but I'm not sure about this.
There are two options that I know of:
Returning a temporary signed URL which the user can use to download the file directly from S3.
Streaming or downloading the file from S3 through your application.
For the first option you can use the Storage facade's temporaryUrl method like this to allow access for 5 minutes:
$url = Storage::disk('s3')->temporaryUrl(
    'file.jpg', now()->addMinutes(5)
);
To download the file through your server you could return:
return Storage::download('file.jpg');
Don't quote me on the following, but I think the advantage of using the S3 temporary URL is that the download doesn't have to happen through your server, which frees it up for other requests. The benefit of downloading through your server is that you can authenticate every request, and there is no temporary URL that a user could share.

gsutil signed URL, hide file and bucket name

So I can successfully generate a temporary signed url on google cloud storage, with an expiry time etc.
However the signed URL still has the clearly visible bucket name and file name.
Now I understand that once the download occurs the file name will be visible, as we have downloaded that file. However, it would be nice to obscure the bucket and file name in the URL.
Is this possible? The documentation does not give any clues, and a Google search session has not really turned up anything that helps.
I don't think there's a way. Bucket naming best practices basically state that bucket and object names are "public", that you should avoid using sensitive information as part of those names, and advise you to use random names if you are concerned about name guessing/enumeration.
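Building on that random-name advice, a minimal Ruby sketch of generating an object name that is impractical to guess or enumerate (the "uploads/" prefix and the 32-byte length are arbitrary assumptions):

```ruby
require 'securerandom'

# Generate an unguessable object name: 32 random bytes (~256 bits of
# entropy), URL-safe Base64 encoded. Even though the name appears in
# the signed URL, it cannot be enumerated or guessed.
object_name = "uploads/#{SecureRandom.urlsafe_base64(32)}"
puts object_name
```

This does not hide the name from someone who already has the URL, but it prevents third parties from discovering other objects by pattern-guessing.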
A possible workaround would be to proxy the "get" for the Cloud Storage objects through Cloud Functions or App Engine, so the app retrieves the objects from Cloud Storage and then sends them to the client.
This is more costly, and requires writing more code.
I can think of another possible workaround, which consists of generating the signed URL in server-side code (such as PHP), so that users cannot see how the URL is constructed. Nevertheless, taking into account that you want to avoid any of this data being visible in the network activity when downloading, you should test this workaround first to see whether it works as intended.

Enable notifications / watch a Google Play bucket to programmatically download reports

There's a lot of new information regarding how to programmatically download Google Play reports using the gsutil tool. Google Play uses a bucket to store these reports, just like Google Cloud Storage does. I'm already able to download reports from the Google Play bucket without a problem. For example:
gsutil cp gs://pubsite_prod_rev_<my project id>/stats/installs/installs_<my app id>_201502_overview.csv .
On the other hand, gsutil offers a feature to watch Google Cloud Storage buckets, so you can receive notifications every time an object in the bucket changes (gsutil notification watchbucket). I am also able to enable notifications in buckets created in my own Google Cloud projects.
The problem is, I'm not able to enable notifications in my Google Play bucket. Is it even possible? I get an AccessDeniedException: 403 Forbidden error when calling:
gsutil notification watchbucket -i playnotif -t sometoken https://notif.mydomain.com gs://pubsite_prod_rev_<my project id>
I've followed all the steps here, being specially careful with those regarding identifying a domain to receive notifications.
As I mentioned above, I'm already able to do all the process I need, but with my own buckets in Google Cloud, not with the Google Play bucket.
The Google Play project has been linked to a Google Cloud project. It did so automatically when I enabled Google Play API access (Google Play Developer Console -> Configuration (left menu) -> API access).
The Google Play project owner and my own Google Cloud project owner is the same.
This owner has successfully registered and validated the domain used to receive the notifications (following the example, I validated both just in case: notif.mydomain.com and mydomain.com, using https in the Google Webmaster Tools)
These domains have also been whitelisted in the Google Developers Console (left sidebar -> APIs & Auth -> Push).
I've successfully enabled notifications in my own Google Cloud buckets using either the project owner account or a service account I created. I've already tried using both (owner and a corresponding service account) in the Google Play bucket, without success.
Any ideas will be greatly appreciated. Thanks!
EDIT:
I had already followed the steps here, but using different procedures (as explained in the comment below). Following Nikita's suggestion, I tried to follow the steps using the same procedure.
So I configured gsutil (through gcloud) to use the owner account:
gcloud config set account owner-of-play-store-project@gmail.com
and while trying to grant full access to the service account, I encountered this error:
$ gsutil acl ch -u my-play-store-service-account@developer.gserviceaccount.com:FC gs://pubsite_prod_rev_my-bucket-id
CommandException: Failed to set acl for gs://pubsite_prod_rev_my-bucket-id/. Please ensure you have OWNER-role access to this resource.
So, I tried to list the default ACL for this bucket, and found:
$ gsutil defacl get gs://pubsite_prod_rev_my-bucket-id
No default object ACL present for gs://pubsite_prod_rev_my-bucket-id. This could occur if the default object ACL is private, in which case objects created in this bucket will be readable only by their creators. It could also mean you do not have OWNER permission on gs://pubsite_prod_rev_my-bucket-id and therefore do not have permission to read the default object ACL.
[]
Conclusion:
It really makes me think that, even when using the project owner account, that account doesn't have the OWNER role on the Play Store bucket. This means ACLs can't be modified (or even listed), and notifications can't be enabled since, sadly, we don't really own the bucket.
At the moment, you cannot. Google Play owns these buckets, and end users do not have the bucket FULL_CONTROL access necessary to subscribe to Object Change Notifications.

How do I setup a public Google Cloud Storage bucket

I am trying to upload files to a bucket on Google Cloud Storage, but I am having trouble figuring out how to set it up so that it is publicly writable and readable. In other words, I don't want to have to require authentication from the user in order to upload to the bucket. Does anybody know the steps to follow in order to set this up?
I would also like to know what I need to append to my request headers in order to upload files to this bucket. I am using the iOS API client, but if anybody knows what the raw headers are, I can probably figure out from there how to do it in iOS. At the moment, I am only including the following additional header: x-goog-project-id
For your first question, you can make your newly uploaded objects public with the following command, which uses the gsutil acl syntax:
gsutil defacl ch -u AllUsers:R gs://<bucket>
To give everybody write access to the bucket itself, grant the W (WRITE) permission:
gsutil acl ch -u AllUsers:W gs://<bucket>
Regarding your other question, I'm not familiar with iOS, but you can go to the bottom of this page, upload an object, and you'll see the HTTP request that you can use in your code.
Also, there is a Google API Client Library for Objective-C, and it seems that with that library you can manage Google Cloud Storage, as per these files.
I hope it helps.
Please consider using signed URLs (https://developers.google.com/storage/docs/accesscontrol#Signed-URLs) instead of making your bucket publicly writable. Having a publicly writable bucket can be an opening to various forms of abuse, and could also result in a surprisingly high bill if your bucket is discovered by someone on the Internet who then uploads large amounts of data to it.

Finding Out if an Amazon S3 Bucket is a Website in Ruby

Amazon recently allowed S3 buckets to be enabled as websites. Using the aws-s3 gem or something similar, is there a way to programmatically determine whether a given bucket is enabled as a website or not?
Edit: In addition, if a bucket is indeed a website, how would you obtain the endpoint url?
You can use the REST API to get and set that option:
http://docs.amazonwebservices.com/AmazonS3/latest/API/index.html?RESTBucketPUTwebsite.html
In your case, use the "GET Bucket website" operation.
Extra points:
The website endpoint is derived from the bucket name and region: example-bucket.s3-website-<region>.amazonaws.com (note this is different from the REST endpoint example-bucket.s3.amazonaws.com).
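For illustration, a small Ruby helper that builds the website endpoint from a bucket name and region (this is a hypothetical helper, not part of the aws-s3 gem; the bucket name and default region are placeholder assumptions):

```ruby
# S3 static-website endpoints follow the pattern
# "<bucket>.s3-website-<region>.amazonaws.com". Some newer regions
# use a dot instead of a dash ("s3-website.<region>"); check the
# S3 endpoint documentation for your region.
def website_endpoint(bucket, region = 'us-east-1')
  "#{bucket}.s3-website-#{region}.amazonaws.com"
end

puts website_endpoint('example-bucket')
# => example-bucket.s3-website-us-east-1.amazonaws.com
```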
