Amazon recently allowed S3 buckets to be enabled as websites. Using the aws-s3 gem or something similar, is there a way to programmatically determine whether a given bucket is enabled as a website or not?
Edit: In addition, if a bucket is indeed a website, how would you obtain the endpoint URL?
You can use the REST API to read and set that option:
http://docs.amazonwebservices.com/AmazonS3/latest/API/index.html?RESTBucketPUTwebsite.html
In your case, use "GET Bucket website".
Extra points:
The website endpoint is not the plain bucket URL; it is derived from the bucket name and region, e.g. example-bucket.s3-website-us-east-1.amazonaws.com.
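For example, a minimal sketch with the modern aws-sdk-s3 gem (the older aws-s3 gem named in the question doesn't expose this call, as far as I know; the bucket name is illustrative). It probes the website configuration and treats the "no such configuration" error as "not a website":

require "aws-sdk-s3"

s3 = Aws::S3::Client.new(region: "us-east-1")

begin
  # Corresponds to the "GET Bucket website" REST operation.
  s3.get_bucket_website(bucket: "example-bucket")
  puts "website hosting is enabled"
rescue Aws::S3::Errors::NoSuchWebsiteConfiguration
  puts "website hosting is not enabled"
end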
I am working on an app that has some social network elements: users can create posts with images and they can share these publicly or with friends.
I am now considering the security aspect of this. These images should only be available to the person that uploaded them and the people they select to view them.
From the posts I have seen, one recommended approach is to expose an API endpoint through my backend service that checks the user's permissions and then returns the requested image, but I feel that serving images this way would be quite expensive.
Are there any other approaches that do not sacrifice security but achieve a good performance?
In case it matters, I am using Spring Boot for my backend and Expo + React Native for my app, and I am planning to store the images on AWS S3.
It turns out that S3 allows access to files through signed URLs, which means only people holding a given signed URL can access the file. The signed URL can be further restricted by specifying the duration for which it remains valid.
Generating these URLs is done locally by the back-end service without reaching out to AWS, so it does not create a big performance hit.
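As a rough sketch of what that looks like, here it is with the Ruby aws-sdk-s3 gem, since that is the SDK referenced elsewhere in these threads (the AWS SDK for Java offers the same presigner for a Spring Boot backend; bucket and key names are made up):

require "aws-sdk-s3"

signer = Aws::S3::Presigner.new(client: Aws::S3::Client.new(region: "us-east-1"))

# The signature is computed locally; no request is sent to AWS here.
url = signer.presigned_url(
  :get_object,
  bucket: "my-app-images",    # hypothetical bucket
  key: "posts/42/photo.jpg",  # hypothetical object key
  expires_in: 300             # link expires after 5 minutes
)
puts url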
How can I get a bucket's website endpoint, for example example.com.s3-website-us-east-1.amazonaws.com, programmatically via the SDK? I can't seem to find it.
link to docs: http://docs.aws.amazon.com/sdkforruby/api/Aws/S3.html
The value doesn't appear to be exposed by the S3 API itself.
It is, however, easily derived from the bucket location, which is accessible via the S3 REST API ("GET Bucket location"). It isn't obvious from skimming the docs whether the Ruby SDK implements this, despite its presence in the underlying API; I didn't find it.
But the web site endpoints are always in this form:
${bucket}.s3-website-${region}.amazonaws.com
In us-east-1, the endpoint for a bucket named example.com fits this pattern: example.com.s3-website-us-east-1.amazonaws.com.
http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
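A minimal sketch of that derivation with the aws-sdk-s3 gem (bucket name illustrative; "GET Bucket location" returns an empty string for us-east-1):

require "aws-sdk-s3"

bucket = "example.com"
s3 = Aws::S3::Client.new(region: "us-east-1")

# Legacy buckets may return "EU" for eu-west-1; not handled here.
location = s3.get_bucket_location(bucket: bucket).location_constraint
region = location.empty? ? "us-east-1" : location

puts "#{bucket}.s3-website-#{region}.amazonaws.com"
# => example.com.s3-website-us-east-1.amazonaws.com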
I have an EC2 instance that runs Laravel 5.1. I am using an S3 bucket through Laravel's API:
AMAZON_KEY=key
AMAZON_SECRET=secret
AMAZON_REGION=us-west-2
AMAZON_S3_BUCKET=my_app_bucket
But I already set up a role that enables this box to use that particular bucket. Why do I also need a key and a secret? From an analysis of the code, it looks like Laravel always demands a key and a secret, so it would seem that I have to create an IAM user account with a key/secret and use that for S3 access instead of using roles, which is the preferred approach. Is there a way around this, or is this just how Laravel S3 access works?
A fix to use IAM credentials for filesystem, queue, and email was merged a few days ago, so upgrading to Laravel v5.1.7 should do the trick.
https://github.com/laravel/framework/pull/9558
I am trying to upload files to a bucket on Google Cloud Storage, but I am having trouble figuring out how to set it up so that it is publicly writable and readable. In other words, I don't want to have to require authentication from the user in order to upload to the bucket. Does anybody know the steps to follow in order to set this up?
I would also like to know what I need to append to my request headers in order to upload files to this bucket. I am using the iOS API client, but if anybody knows what the raw headers are, I can probably figure out from there how to do it in iOS. At the moment, I am only including the following additional header: x-goog-project-id
For your first question, you can make your newly uploaded objects public by default with the following command, which uses the gsutil acl syntax:
gsutil defacl ch -u allUsers:R gs://<bucket>
Now you need to give everybody write access to that bucket with the command:
gsutil acl ch -u allUsers:W gs://<bucket>
Regarding your other question, I'm not familiar with iOS, but you can go to the bottom of this page, upload an object, and you'll see the HTTP request that you can use in your code.
Also, there is the Google API Client Library for Objective-C, and it seems that with that library you can manage Google Cloud Storage as per these files.
I hope it helps.
Please consider using signed URLs (https://developers.google.com/storage/docs/accesscontrol#Signed-URLs) instead of making your bucket publicly writable. A publicly writable bucket is an opening to various forms of abuse, and could also result in a surprisingly high bill if the bucket is discovered by someone on the Internet who then uploads large amounts of data to it.
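For what it's worth, here is a hedged sketch of generating such a signed upload URL server-side with the google-cloud-storage Ruby gem (project, keyfile, bucket, and object names are all placeholders):

require "google/cloud/storage"

storage = Google::Cloud::Storage.new(
  project_id:  "my-project",   # placeholder project
  credentials: "keyfile.json"  # placeholder service-account key
)
bucket = storage.bucket "my-bucket"

# Signed PUT URL: the client can upload this one object for 5 minutes,
# without the bucket ever being publicly writable.
url = bucket.signed_url "uploads/photo.jpg",
                        method:  "PUT",
                        expires: 300
puts url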
I'm letting people post files to my S3 account and I don't know the filename that they'll be posting.
How do I get a signed URL for them ahead of time so they can download whatever file they have posted? I want to do this ahead of time because I don't want to hit my server again.
I'm using the Ruby AWS SDK.
You can't.
You can generate a signed URL for a file that already exists on S3. You can't generate a URL before you know what the object's key will be.
While you can't distribute pre-signed URLs without knowing the key, you can use the S3 POST technique to generate a signed policy, which can then be used to upload files. This requires building a simple web form, or another tool that can create an HTTP POST request.
Ideally, you would create a separate signed policy document for each user; that way, you can revoke access on an individual basis. Also, the policy lets you constrain the maximum file size, the key prefix, file types, and other things.
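In the Ruby SDK this is exposed as a presigned POST; a sketch, with the bucket name and per-user prefix being illustrative (an explicit expiry can also be set via the :signature_expiration option, if I recall correctly):

require "aws-sdk-s3"

bucket = Aws::S3::Resource.new(region: "us-east-1").bucket("my-uploads")

# The uploader picks the final key, but the policy pins it to a per-user
# prefix and caps the object size at 10 MB.
post = bucket.presigned_post(
  key_starts_with: "user-123/",
  content_length_range: 1..(10 * 1024 * 1024)
)

# Render post.url as the form action and post.fields as hidden inputs.
puts post.url
puts post.fields.inspect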