minio presignedPutObject generated URL only valid for 7 days, how to make it public without the expiration restriction - minio

I am working with the MinIO server. The presignedPutObject method generates a publicly accessible URL, but by default that URL only works for 7 days; I tried to extend it to 30 days but was restricted.
So how would I give all the uploaded files public permission so that the generated URL can exist forever?

I believe the 7 day limit is an S3 Spec limitation rather than a MinIO limitation.
If you need indefinite unauthenticated access to a bucket, you can instead set a read-only bucket policy, for example with mc policy set --recursive download play/bucket/prefix/.
Beyond that, to my knowledge we adhere to the S3 spec on the expiration of presigned URLs.
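For reference, here is a minimal sketch of the same idea with the MinIO Python SDK (the endpoint, keys, bucket and prefix below are placeholders). With an anonymous read-only policy in place, objects can be fetched with plain, never-expiring URLs instead of presigned ones:

    import json
    from minio import Minio

    # Placeholders: point this at your own MinIO server and bucket.
    client = Minio("play.min.io", access_key="YOUR-ACCESS-KEY",
                   secret_key="YOUR-SECRET-KEY", secure=True)

    # Anonymous read-only policy for a prefix, the SDK equivalent of
    # `mc policy set download`; objects under the prefix can then be
    # downloaded without any signature or expiry.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": ["*"]},
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::bucket/prefix/*"],
        }],
    }
    client.set_bucket_policy("bucket", json.dumps(policy))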

Related

Hosting uploads on Amazon S3 in a private bucket, accessing URLs from within Laravel

I'm using an S3 bucket for my application's users' uploads. This bucket is private.
When I use the following code, the generated URL is not accessible from within the application:
return Storage::disk('s3')->url($this->path);
I can solve this by generating a temporary URL, which is accessible:
return Storage::disk('s3')->temporaryUrl($this->path, Carbon::now()->addMinutes(10));
Is this the only way to do this? Or are there other alternatives?
When objects are private in Amazon S3, they cannot be accessed by an "anonymous" URL. This is what makes them private.
An object can be accessed via an AWS API call from within your application if the IAM credentials associated with the application have permission to access the object.
If you wish to make the object accessible via a URL in a web browser (e.g. as the page URL or when referencing it within a tag such as <img>), then you will need to create an Amazon S3 pre-signed URL, which provides time-limited access to a private object. The URL includes authorization information.
While I don't know Laravel, it would appear that your first code sample just provides a normal "anonymous" URL to the object in Amazon S3 and is therefore (correctly) failing. Your second code sample is apparently generating a pre-signed URL, which will work for the given time period. This is the correct way to make a URL that you can use in the browser.
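For what it's worth, Laravel's temporaryUrl() generates exactly this kind of pre-signed URL under the hood. A minimal boto3 sketch of the same operation, with placeholder bucket and key names:

    import boto3

    s3 = boto3.client("s3")
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-private-bucket", "Key": "uploads/avatar.png"},
        ExpiresIn=600,  # seconds; the same 10-minute window as the Laravel example
    )
    print(url)  # a time-limited link that carries its own authorization info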

Set a default cache control and expires for entire S3 bucket/CloudFront

I have an Amazon S3 bucket with approximately 300K items in it that are used by a large website. I would like to set the expiration of all the objects that are served out of CloudFront from the S3 bucket so that they can be cached in the user's browser. Is there an easy way to set the cache control on all the S3 objects currently in the bucket and, most importantly, to set a default for the bucket so that any new items added also gain the Expires and Cache-Control headers? Or can this be done using CloudFront?
So far I have read a number of AWS documents but have found nothing:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html
http://docs.aws.amazon.com/cli/latest/reference/s3/index.html
Steps for adding cache control to the existing objects in your bucket:
1. git clone https://github.com/s3tools/s3cmd
2. Run s3cmd --configure (you will be asked for the two keys; copy and paste them from your confirmation email or from your Amazon account page. Be careful when copying them! They are case sensitive and must be entered accurately or you'll keep getting errors about invalid signatures or similar. Remember to add the s3:ListAllMyBuckets permission to the keys or you will get an AccessDenied error while testing access.)
3. ./s3cmd --recursive modify --add-header="Cache-Control: public, max-age=31536000" s3://your_bucket_name/
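If you would rather stay in the AWS SDK, here is a rough boto3 sketch of the same idea (bucket and key names are placeholders): existing objects are updated with an in-place copy, and new uploads get the header at upload time, since S3 itself has no bucket-wide default for Cache-Control:

    import boto3

    s3 = boto3.client("s3")
    bucket = "your_bucket_name"

    # Existing objects: copy each object onto itself with replaced metadata.
    # Note that MetadataDirective="REPLACE" also resets other metadata such
    # as Content-Type unless you re-specify it.
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            s3.copy_object(
                Bucket=bucket,
                Key=obj["Key"],
                CopySource={"Bucket": bucket, "Key": obj["Key"]},
                MetadataDirective="REPLACE",
                CacheControl="public, max-age=31536000",
            )

    # New objects: there is no bucket default, so set the header per upload.
    s3.upload_file("local.jpg", bucket, "images/local.jpg",
                   ExtraArgs={"CacheControl": "public, max-age=31536000"})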
For CloudFront you can specify Minimum TTL, Maximum TTL, and Default TTL for a cache behavior. These are basically the time for which an object can be cached on CloudFront and have nothing to do with adding an expiry header to the object, i.e. they don't modify any headers.
So if you haven't added any header, CloudFront will cache the object for the Default TTL.
For more info, see the following table:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html#ExpirationDownloadDist

How do I copy an object with a presigned URL?

I'm using a service that puts the data I need on S3 and gives me a list of presigned URLs to download (http://.s3.amazonaws.com/?AWSAccessKeyID=...&Signature=...&Expires=...).
I want to copy those files into my S3 bucket without having to download them and upload again.
I'm using the Ruby SDK (but willing to try something else if it works) and couldn't write anything that does this.
I was able to initialize the S3 object with my credentials (access_key and secret) that grants me access to my bucket, but how do I pass the "source-side" access_key_id, signature and expires parameters?
To make the problem a bit simpler - I can't even do a GET request to the object using the presigned parameters. (not with regular HTTP, I want to do it through the SDK API).
I found a lot of examples of how to create a presigned URL but nothing about how to authenticate using already-given parameters (I obviously don't have the secret_key of my data provider).
Thanks!
You can't do this with a signed url, but as has been mentioned, if you fetch and upload within EC2 in an appropriate region for the buckets in question, there's essentially no additional cost.
Also worth noting, the two buckets do not have to be in the same account, but the AWS key that you use to make the request has to have permission to put the target object and get the source object. Permissions can be granted across accounts... though in many cases, that's unlikely to be granted.
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
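As a concrete illustration, here is a minimal boto3 sketch of such a server-side copy (bucket and key names are placeholders). It only succeeds if the credentials making the call can read the source object and write to the target bucket; a presigned URL from the provider is not a substitute for those permissions:

    import boto3

    s3 = boto3.client("s3")
    s3.copy_object(
        Bucket="my-target-bucket",
        Key="copied/data.csv",
        CopySource={"Bucket": "provider-bucket", "Key": "exports/data.csv"},
    )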
You actually can do a copy with a presigned URL. To do this, you need to create a presigned PUT request that also includes a header like x-amz-copy-source: /sourceBucket/sourceObject in order to specify where you are copying from. In addition, if you want the copied object to have new metadata, you will also need to add the header x-amz-metadata-directive: REPLACE. See the REST API documentation for more details.
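A rough sketch of issuing such a request (the URL and source path are placeholders, and the presigned PUT URL must have been generated with these headers included in its signature, otherwise S3 will reject the request):

    import requests

    # Hypothetical presigned PUT URL for the target bucket/key, created by
    # whoever controls the target bucket with x-amz-copy-source in the signature.
    presigned_put_url = "https://my-target-bucket.s3.amazonaws.com/copied-key?X-Amz-Signature=..."

    resp = requests.put(
        presigned_put_url,
        headers={
            "x-amz-copy-source": "/sourceBucket/sourceObject",
            "x-amz-metadata-directive": "REPLACE",  # only if you want new metadata
        },
    )
    resp.raise_for_status()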

Upload files directly to Amazon S3 using Fine Uploader

I am trying to upload files directly to S3, but as per my research it needs server-side code or a dependency on Facebook, Google, etc. Is there any way to upload files directly to Amazon using Fine Uploader only?
There are three ways to upload files directly to S3 using Fine Uploader:
Allow Fine Uploader S3 to send a small request to your server before each API call it makes to S3. In this request, your server will respond with a signature that Fine Uploader needs to make the request. This signature ensures the integrity of the request and requires you to use your secret key, which should not be exposed client-side. This is discussed here: http://blog.fineuploader.com/2013/08/16/fine-uploader-s3-upload-directly-to-amazon-s3-from-your-browser/. (A minimal sketch of such a signing endpoint follows these three options.)
Ask Fine Uploader to sign all requests client-side. This is a good option if you don't want Fine Uploader to make any requests to your server at all. However, it is critical that you don't simply hardcode your AWS secret key. Again, this key should be kept a secret. By utilizing an identity provider such as Facebook, Google, or Amazon, you can request very limited and temporary credentials which are fed to Fine Uploader. It then uses these credentials to submit requests to S3. You can read more about this here: http://blog.fineuploader.com/2014/01/15/uploads-without-any-server-code/.
The third way to upload files directly to S3 using Fine Uploader is to either generate temporary security credentials yourself when you create a Fine Uploader instance, or simply hard-code them in your client-side code. I would suggest you not hard-code security credentials.
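As a sketch of what the server piece for option 1 could look like, here is a minimal Flask endpoint that signs the policy document Fine Uploader sends, assuming version 2 (HMAC-SHA1) signatures; the exact request and response fields Fine Uploader expects are described in the linked blog posts, so treat the route and field names here as placeholders:

    import base64, hashlib, hmac, json, os
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    AWS_SECRET_KEY = os.environ["AWS_SECRET_KEY"]  # never ship this to the browser

    @app.route("/s3/signature", methods=["POST"])
    def sign_policy():
        # Fine Uploader POSTs the policy document it wants signed.
        policy_document = request.get_json()
        policy_b64 = base64.b64encode(json.dumps(policy_document).encode()).decode()
        signature = base64.b64encode(
            hmac.new(AWS_SECRET_KEY.encode(), policy_b64.encode(), hashlib.sha1).digest()
        ).decode()
        return jsonify({"policy": policy_b64, "signature": signature})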
Yes, you can do this with Fine Uploader. Here is a link that explains very well what you need to do: http://blog.fineuploader.com/2013/08/16/fine-uploader-s3-upload-directly-to-amazon-s3-from-your-browser/
Here is what you need. In this blog post the Fine Uploader team introduces serverless S3 upload via JavaScript: http://blog.fineuploader.com/2014/01/15/uploads-without-any-server-code/

Is there a way to serve S3 files directly to the user, with a URL that can't be shared?

I'm storing some files for a website on S3. Currently, when a user needs a file, I create a signed URL (query string authentication) that expires and send that to their browser. However, they can then share this URL with others before it expires.
What I want is some sort of authentication that ensures the URL will only work from the authenticated user's browser.
I have implemented a way to do this by using my server as a relay between Amazon and the user, but I would prefer to point the users directly to Amazon.
Is there a way to have a session cookie of some sort created in the user's browser, and then have Amazon expect that session cookie before serving files?
That's not possible with S3 alone, but CloudFront provides this feature. Take a look at this chapter in the documentation: Using a Signed URL to Serve Private Content.
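For completeness, a minimal sketch of generating such a CloudFront signed URL with botocore's CloudFrontSigner (the key pair ID, private key file, domain and object path are placeholders):

    from datetime import datetime, timedelta

    from botocore.signers import CloudFrontSigner
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def rsa_signer(message):
        # Sign with the private key of a CloudFront key pair (placeholder file).
        with open("cloudfront_private_key.pem", "rb") as f:
            key = serialization.load_pem_private_key(f.read(), password=None)
        return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

    signer = CloudFrontSigner("KEY_PAIR_ID", rsa_signer)
    url = signer.generate_presigned_url(
        "https://d1234example.cloudfront.net/private/report.pdf",
        date_less_than=datetime.utcnow() + timedelta(minutes=10),
    )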
