I am trying to upload files to a bucket on Google Cloud Storage, but I am having trouble figuring out how to set it up so that it is publicly writable and readable. In other words, I don't want to have to require authentication from the user in order to upload to the bucket. Does anybody know the steps to follow in order to set this up?
I would also like to know what I need to append to my request headers in order to upload files to this bucket. I am using the iOS API client, but if anybody knows what the raw headers are, I can probably figure out from there how to do it in iOS. At the moment, I am only including the following additional header: x-goog-project-id
For your first question, you can make your newly uploaded objects public with the following command, which uses the gsutil acl syntax:
gsutil defacl ch -u allUsers:R gs://<bucket>
Now you need to give everybody write access to that bucket with the command:
gsutil acl ch -u allUsers:W gs://<bucket>
Regarding your other question, I'm not familiar with iOS, but you can go to the bottom of this page, upload an object, and you'll see the HTTP request that you can use in your code.
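If it helps, here is roughly what that raw request boils down to, sketched in Python with the requests library (the bucket name, file name, and project ID are placeholders, and this assumes the bucket was made publicly writable as above):

import requests

# Simple upload via the XML API endpoint; no auth headers are needed
# once the bucket accepts writes from allUsers.
with open("photo.jpg", "rb") as f:
    resp = requests.put(
        "https://storage.googleapis.com/my-public-bucket/photo.jpg",
        data=f,
        headers={
            "Content-Type": "image/jpeg",
            # The header you are already sending; not required for a
            # simple PUT, but harmless to include.
            "x-goog-project-id": "my-project-id",
        },
    )
resp.raise_for_status()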
Also, there is the Google API Client Library for Objective-C, and it seems that with that library you can manage Google Cloud Storage, as per these files.
I hope it helps.
Please consider using signed URLs (https://developers.google.com/storage/docs/accesscontrol#Signed-URLs) instead of making your bucket publicly writable. A publicly writable bucket is an opening for various forms of abuse, and it could also land you a surprisingly high bill if someone on the Internet discovers the bucket and uploads large amounts of data to it.
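For uploads, for instance, your server could mint a short-lived URL and hand it to the client. A minimal sketch with the google-cloud-storage Python library (the bucket and object names are placeholders):

from datetime import timedelta
from google.cloud import storage

client = storage.Client()
blob = client.bucket("my-bucket").blob("uploads/photo.jpg")

# Anyone holding this URL can PUT this one object until it expires;
# nothing else in the bucket is exposed.
url = blob.generate_signed_url(
    version="v4",
    expiration=timedelta(minutes=15),
    method="PUT",
    content_type="image/jpeg",
)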
Related
I am working on an app that has some social network elements: users can create posts with images and they can share these publicly or with friends.
I am now considering the security aspect of this. These images should only be available to the person that uploaded them and the people they select to view them.
From the posts I have seen, it seems one recommended approach is to expose an API endpoint through my backend service and control access there (that way I can check a user's permissions), then return the requested image. However, I feel that serving images this way would be quite expensive.
Are there any other approaches that do not sacrifice security but achieve a good performance?
In case it matters, I am using Spring Boot for my backend and Expo + React Native for my app, and I am planning to store the images on AWS S3.
It turns out AWS S3 allows access to files through signed URLs, which means only people holding the signed URL can access the file. The signed URL can be further restricted by specifying the duration for which it remains valid.
Generating these URLs can be done by the back-end service without reaching out to AWS, so that does not create a big performance hit.
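As a sketch with boto3 in Python (the bucket and key are placeholders; the AWS SDK for Java offers the same operation if you want to do this from a Spring Boot backend):

import boto3

s3 = boto3.client("s3")

# Signing happens locally using the credentials boto3 already holds,
# so no round trip to AWS is needed to mint the URL.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-posts-bucket", "Key": "images/post-123.jpg"},
    ExpiresIn=300,  # seconds the link remains valid
)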
I only want users to be able to download files saved in my AWS bucket if they are logged into my Laravel app. Users who are not logged in should not be able to download the files, not even by going directly to the URL. Can someone please help me achieve this? I think I may need to somehow set the documents as private, but I'm not sure about this.
There are two options that I know of:
Returning a temporary signed URL which the user can use to download the file directly from S3.
Streaming or downloading the file from S3 through your application.
For the first option you can use the Storage facade's temporaryUrl method like this to allow access for 5 minutes:
$url = Storage::disk('s3')->temporaryUrl(
    'file.jpg', now()->addMinutes(5)
);
To download the file through your server you could return:
return Storage::download('file.jpg');
Don't quote me on the following, but I think the advantage of the S3 temporary URL is that the download doesn't have to pass through your server, which frees up your server for other requests. The benefit of downloading through your server is that you can authenticate every request, and there is no temporary URL for a user to share.
So I can successfully generate a temporary signed url on google cloud storage, with an expiry time etc.
However the signed URL still has the clearly visible bucket name and file name.
Now I understand that once the download occurs the filename will be visible, as we have downloaded that file. However, is it possible to obscure the bucket and filename in the URL itself? The documentation gives no clues, and a Google search session has not really turned up anything that helps.
I don't think there's a way. Bucket naming best practices basically state that bucket and object names are "public", that you should avoid using sensitive information as part of those names, and advise you to use random names if you are concerned about name guessing/enumeration.
A possible workaround for this would be to proxy the "get" for the Cloud Storage objects using Cloud Functions or App Engine, so the app retrieves the objects from Cloud Storage and then sends them to the client. This is more costly and would require writing more code.
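As a rough sketch, such a proxy could look like the following Python Cloud Function, where my-private-bucket and the check_access helper are hypothetical stand-ins for your bucket and your own authorization logic:

from google.cloud import storage

client = storage.Client()

def serve_object(request):
    # check_access is a hypothetical placeholder for your auth check.
    if not check_access(request):
        return ("Forbidden", 403)
    # The client only sends an opaque identifier; the bucket and object
    # names never appear in the URL it sees.
    blob = client.bucket("my-private-bucket").get_blob(request.args.get("object", ""))
    if blob is None:
        return ("Not found", 404)
    return (blob.download_as_bytes(), 200,
            {"Content-Type": blob.content_type or "application/octet-stream"})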
I can think of another possible workaround, which is to keep the signed URL hidden in server-side code (such as PHP), so that users cannot know what the URL is. Nevertheless, taking into account that you want to avoid any displayed data in the network activity when downloading, you should test this workaround first to see whether it works as intended.
There's a lot of new information about how to programmatically download Google Play reports using the gsutil tool. Google Play stores these reports in a Google Cloud Storage bucket. I'm already able to download reports from the Google Play bucket without a problem. For example:
gsutil cp gs://pubsite_prod_rev_<my project id>/stats/installs/installs_<my app id>_201502_overview.csv .
On the other hand, gsutil offers a feature to watch Google Cloud Storage buckets, so you can receive notifications every time an object in the bucket changes (gsutil notification watchbucket). I am also able to enable notifications in buckets created in my own Google Cloud projects.
The problem is, I'm not able to enable notifications in my Google Play bucket. Is it even possible? I get an AccessDeniedException: 403 Forbidden error when calling:
gsutil notification watchbucket -i playnotif -t sometoken https://notif.mydomain.com gs://pubsite_prod_rev_<my project id>
I've followed all the steps here, being especially careful with those regarding identifying a domain to receive notifications.
As I mentioned above, I'm already able to do all the process I need, but with my own buckets in Google Cloud, not with the Google Play bucket.
The Google Play project has been linked to a Google Cloud project. It did so automatically when I enabled Google Play API access (Google Play Developer Console -> Configuration (left menu) -> API access).
The Google Play project owner and my own Google Cloud project owner are the same account.
This owner has successfully registered and validated the domain used to receive the notifications (following the example, I validated both just in case: notif.mydomain.com and mydomain.com, using https in the Google Webmaster Tools)
These domains have also been whitelisted in the Google Developers Console (left sidebar -> APIs & Auth -> Push).
I've successfully enabled notifications in my own Google Cloud buckets using either the project owner account or a service account I created. I've already tried using both (owner and a corresponding service account) in the Google Play bucket, without success.
Any ideas will be greatly appreciated. Thanks!
EDIT:
I had already followed the steps here, but using different procedures (as explained in the comment below). Following Nikita's suggestion, I tried to follow the steps using the same procedure.
So I configured gsutil (through gcloud) to use the owner account:
gcloud config set account owner-of-play-store-project@gmail.com
and while trying to grant full access to the service account, I encountered this error:
$ gsutil acl ch -u my-play-store-service-account@developer.gserviceaccount.com:FC gs://pubsite_prod_rev_my-bucket-id
CommandException: Failed to set acl for gs://pubsite_prod_rev_my-bucket-id/. Please ensure you have OWNER-role access to this resource.
So, I tried to list the default ACL for this bucket, and found:
$ gsutil defacl get gs://pubsite_prod_rev_my-bucket-id
No default object ACL present for gs://pubsite_prod_rev_my-bucket-id. This could occur if the default object ACL is private, in which case objects created in this bucket will be readable only by their creators. It could also mean you do not have OWNER permission on gs://pubsite_prod_rev_my-bucket-id and therefore do not have permission to read the default object ACL.
[]
Conclusion:
It really makes me think that, even when using the project owner account, that account doesn't have the OWNER role on the Play Store bucket. This means the ACLs can't be modified or even listed, and notifications can't be enabled, since, sadly, we don't really own the bucket.
At the moment, you cannot. Google Play owns these buckets, and end users do not have the bucket FULL_CONTROL access necessary to subscribe to Object Change Notifications.
I'm letting people post files to my S3 account and I don't know the filename that they'll be posting.
How do I get a signed URL for them ahead of time so they can download whatever file they have posted? I want to do this ahead of time because I don't want to hit my server again.
I'm using the Ruby AWS SDK.
You can't.
You can generate a signed URL for a file that already exists on S3, but you can't generate the URL before you know the object's key.
While you can't distribute pre-signed URLs without knowing the key, you can use the S3 POST technique to generate a signed policy, which can then be used to upload files. This requires building a simple web form, or other tool which can create an HTTP POST request.
Ideally, you would create a separate signed policy document for each user; that way you can revoke access on an individual basis. The policy also allows you to constrain the maximum file size, the key prefix, file types, and other things.
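Since you're on the Ruby SDK I won't guess at its exact API, but here's the idea sketched with boto3 in Python (the bucket, prefix, and size limit are placeholders); the ${filename} variable lets the uploader choose the object name under a prefix you control:

import boto3

s3 = boto3.client("s3")

post = s3.generate_presigned_post(
    Bucket="my-upload-bucket",
    Key="user-123/${filename}",  # uploader picks the name under this prefix
    Conditions=[
        ["starts-with", "$key", "user-123/"],
        ["content-length-range", 0, 10 * 1024 * 1024],  # cap uploads at 10 MB
    ],
    ExpiresIn=3600,  # policy valid for one hour
)
# post["url"] and post["fields"] become the action and hidden inputs
# of an HTML form that POSTs the file directly to S3.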