Hosting uploads on Amazon S3 in a private bucket, accessing URLs from within Laravel

I'm using an S3 bucket for my application's users' uploads. This bucket is private.
When I use the following code, the generated URL is not accessible from within the application:
return Storage::disk('s3')->url($this->path);
I can solve this by generating a temporary URL, which is accessible:
return Storage::disk('s3')->temporaryUrl($this->path, Carbon::now()->addMinutes(10));
Is this the only way to do this? Or are there other alternatives?

When objects are private in Amazon S3, they cannot be accessed by an "anonymous" URL. This is what makes them private.
An object can be accessed via an AWS API call from within your application if the IAM credentials associated with the application have permission to access the object.
If you wish to make the object accessible via a URL in a web browser (e.g. as the page URL or when referenced within a tag such as <img>), then you will need to create an Amazon S3 pre-signed URL, which provides time-limited access to a private object. The URL includes authorization information.
While I don't know Laravel, it would appear that your first code sample just provides a normal "anonymous" URL to the object in Amazon S3 and is therefore (correctly) failing. Your second code sample is apparently generating a pre-signed URL, which will work for the given time period. This is the correct way to make a URL that you can use in the browser.
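For reference, Laravel's temporaryUrl() appears to produce exactly this kind of pre-signed URL. The same thing can be generated with the AWS SDK directly; here is a minimal sketch in Python/boto3, with a placeholder bucket and key:

import boto3

s3 = boto3.client('s3')
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-bucket', 'Key': 'uploads/avatar.png'},  # placeholder bucket/key
    ExpiresIn=600,  # ten minutes, like addMinutes(10) above
)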

Related

Video streaming with object storage bucket

We are storing the videos in object storage (AWS S3 / OCI Object Storage) and, using object URIs, we are able to play the videos from an HTML video player. But if we make the bucket access private, the possible approaches are to use pre-authenticated URLs, or to use the object storage SDK API to get an input stream for the video object and stream the data using data buffers with ResourceRegion in WebFlux (we can handle all the authentication stuff to access private bucket data).
My query: is there any better way to access the private bucket videos (content delivery & streaming)? Can we provide a proxy URL to the client instead of the video object URI directly? I can handle some authentication & authorization on that URL and hide the actual video object URI, so that we can prevent the video from being downloaded by third-party apps.
Kindly provide suggestions on this.
Yes, there are ways. One way is to have a proxy server route external HTTP calls, but that will have only limited features. Another option is to have a custom-written microservice that streams data from a private/public bucket via an HTTP endpoint with additional custom business logic.
You may refer to this sample Spring Boot microservice code to stream content from OCI Object Storage.
https://github.com/oracle-devrel/oci-sdk-java-samples/tree/main/usecases/storage-file-streaming
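The linked sample is Java against OCI Object Storage; roughly the same streaming-proxy idea against a private S3 bucket might look like the sketch below (Python, Flask + boto3; the bucket name, route, and auth check are placeholders, and error handling is omitted). Passing the player's Range header through keeps the video seekable:

import boto3
from flask import Flask, Response, request

app = Flask(__name__)
s3 = boto3.client('s3')
BUCKET = 'my-private-videos'  # placeholder

@app.route('/videos/<path:key>')
def stream_video(key):
    # TODO: your own authentication/authorization check goes here
    params = {'Bucket': BUCKET, 'Key': key}
    range_header = request.headers.get('Range')  # e.g. "bytes=0-1048575" from the player
    if range_header:
        params['Range'] = range_header
    obj = s3.get_object(**params)
    headers = {
        'Accept-Ranges': 'bytes',
        'Content-Type': obj.get('ContentType', 'application/octet-stream'),
    }
    if 'ContentRange' in obj:
        headers['Content-Range'] = obj['ContentRange']
    # Stream the body through in chunks instead of buffering the whole video
    return Response(obj['Body'].iter_chunks(chunk_size=256 * 1024),
                    status=206 if range_header else 200,
                    headers=headers)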
You can generate a new access key and secret from your S3 storage, create a small/simple service/API with Node or any language of your choice, and every time your app needs a URL for a video, it can send a request to the service for a new URL, which can have an expiration time on it.
Also, in your API you can ensure that only your app can request a new URL.
However, if you mean that you want your browser or your clients to be the only ones that can access the video, that may be difficult. With the above, you can control who can access the URL, how long the URL is active, and who can call the API. Third parties would have to do a lot to bypass your restrictions.
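As a minimal sketch of the small "give me a fresh URL" service described above (Python, Flask + boto3; the route, header-based auth check, and bucket name are placeholders):

import boto3
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
s3 = boto3.client('s3')
BUCKET = 'my-private-videos'  # placeholder

@app.route('/video-url/<path:key>')
def video_url(key):
    if request.headers.get('X-App-Token') != 'expected-app-token':  # placeholder auth check
        abort(403)
    url = s3.generate_presigned_url(
        'get_object',
        Params={'Bucket': BUCKET, 'Key': key},
        ExpiresIn=300,  # the URL stops working after five minutes
    )
    return jsonify({'url': url})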

How do I copy an object with a presigned URL?

I'm using a service that puts the data I need on S3 and gives me a list of presigned URLs to download (http://.s3.amazonaws.com/?AWSAccessKeyID=...&Signature=...&Expires=...).
I want to copy those files into my S3 bucket without having to download them and upload again.
I'm using the Ruby SDK (but willing to try something else if it works..) and couldn't figure out how to write anything like this.
I was able to initialize the S3 object with my credentials (access_key and secret) that grant me access to my bucket, but how do I pass the "source-side" access_key_id, signature and expires parameters?
To make the problem a bit simpler: I can't even do a GET request to the object using the presigned parameters (not with regular HTTP; I want to do it through the SDK API).
I found a lot of examples of how to create a presigned URL, but nothing about how to authenticate using already-given parameters (I obviously don't have the secret_key of my data provider).
Thanks!
You can't do this with a signed URL, but as has been mentioned, if you fetch and upload within EC2 in an appropriate region for the buckets in question, there's essentially no additional cost.
Also worth noting, both buckets do not have to be in the same account, but the AWS key that you use to make the request has to have permission to put the target object and get the source object. Permissions can be granted across accounts... though in many cases, that's unlikely to be granted.
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
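As a concrete sketch of that credential-based copy (Python/boto3; bucket and key names are placeholders), a single copy_object call performs the copy server-side without downloading the data:

import boto3

s3 = boto3.client('s3')  # a key allowed to GET the source and PUT the destination
s3.copy_object(
    CopySource={'Bucket': 'source-bucket', 'Key': 'path/to/object'},
    Bucket='destination-bucket',
    Key='path/to/copy',
)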
You actually can do a copy with a presigned URL. To do this, you need to create a presigned PUT request that also includes a header like x-amz-copy-source: /sourceBucket/sourceObject in order to specify where you are copying from. In addition, if you want the copied object to have new metadata, you will also need to add the header x-amz-metadata-directive: REPLACE. See the REST API documentation for more details.
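If you want to see that shape in code, here is a rough, untested sketch in Python (boto3 + requests; bucket and key names are placeholders). The copy source travels as a signed header, so the PUT must send it with exactly the value that was signed, and the credentials used to presign still need read access to the source object:

import boto3
import requests

s3 = boto3.client('s3')  # credentials that can PUT the destination and GET the source
copy_source = 'source-bucket/source-key'  # keys with special characters must be URL-encoded

url = s3.generate_presigned_url(
    'copy_object',
    Params={'Bucket': 'dest-bucket', 'Key': 'dest-key', 'CopySource': copy_source},
    ExpiresIn=600,
)

# x-amz-copy-source is part of the signature, so it has to accompany the request itself
resp = requests.put(url, headers={'x-amz-copy-source': copy_source})
resp.raise_for_status()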

Fine Uploader with Amazon S3, s3Key vs UUID clarification

According to the documentation, the response to the initial file list request sent by Fine Uploader requires at least the following items: name, uuid, s3Key, blobName.
As the documentation says, s3Key is for Amazon S3 usage and blobName is for MS Azure. That is clear. However, if the implementation provides the s3Key for use with Amazon S3, can the uuid be excluded? I just want to clarify how the uuid is used when dealing with Amazon S3 objects, and, if the uuid is not being provided correctly, what the guidelines are for doing it correctly.
name and uuid are always required in your server's response to the initial file list GET request. The UUID is simply sent as a parameter back to your server when a delete request is made. The UUID is also required by some internal components of Fine Uploader for tracking purposes. If you want to use the initial file list feature, you should be storing the UUID Fine Uploader assigns each file along with the file record in your DB. If you aren't doing this, you can probably just generate a new UUID for each canned/initial file and return it in your response to the initial file list GET. If you choose to do this, and you are utilizing the delete file feature, you may need to be sure to persist this association server-side.
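If it helps to visualize, here is a bare-bones sketch of such an initial file list response (Python/Flask; the field names follow the documentation quoted above, while the route, key, and UUID handling are placeholders):

import uuid
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/uploads/initial')
def initial_file_list():
    files = [
        {
            'name': 'photo.jpg',            # display name shown in the uploader
            'uuid': str(uuid.uuid4()),      # ideally the UUID you stored at upload time
            's3Key': 'user-123/photo.jpg',  # the object's key in your bucket
        },
    ]
    return jsonify(files)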

Upload files directly to Amazon S3 using Fine Uploader

I am trying to upload files directly to S3, but as per my research it needs server-side code or a dependency on Facebook, Google, etc. Is there any way to upload files directly to Amazon using Fine Uploader only?
There are three ways to upload files directly to S3 using Fine Uploader:
Allow Fine Uploader S3 to send a small request to your server before each API call it makes to S3. In response, your server returns a signature that Fine Uploader needs to complete the call. This signature ensures the integrity of the request, and requires you to use your secret key, which should not be exposed client-side. This is discussed here: http://blog.fineuploader.com/2013/08/16/fine-uploader-s3-upload-directly-to-amazon-s3-from-your-browser/. A sketch of such a signing endpoint appears after this list.
Ask Fine Uploader to sign all requests client-side. This is a good option if you don't want Fine Uploader to make any requests to your server at all. However, it is critical that you don't simply hardcode your AWS secret key. Again, this key should be kept a secret. By utilizing an identity provider such as Facebook, Google, or Amazon, you can request very limited and temporary credentials which are fed to Fine Uploader. It then uses these credentials to submit requests to S3. You can read more about this here: http://blog.fineuploader.com/2014/01/15/uploads-without-any-server-code/.
The third way to upload files directly to S3 using Fine Uploader is to either generate temporary security credentials yourself when you create a Fine Uploader instance, or simply hard-code them in your client-side code. I would suggest you not hard-code security credentials.
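For option 1, the signing endpoint itself is small. The exact request/response shape Fine Uploader expects is defined by its documentation and the blog post linked above, so treat this Python/Flask outline (AWS signature version 2: HMAC-SHA1 over either the POSTed policy document or the request string) as an assumption-laden sketch rather than a drop-in implementation:

import base64
import hashlib
import hmac
import json
import os

from flask import Flask, jsonify, request

app = Flask(__name__)
AWS_SECRET_KEY = os.environ['AWS_SECRET_ACCESS_KEY']  # never ship this to the browser

def sign(data: bytes) -> str:
    # Base64-encoded HMAC-SHA1, as used by AWS signature version 2
    return base64.b64encode(hmac.new(AWS_SECRET_KEY.encode(), data, hashlib.sha1).digest()).decode()

@app.route('/s3/signature', methods=['POST'])
def s3_signature():
    payload = request.get_json()
    if 'headers' in payload:  # chunked/REST calls send a request string to sign
        return jsonify({'signature': sign(payload['headers'].encode())})
    policy = base64.b64encode(json.dumps(payload).encode()).decode()  # policy document case
    return jsonify({'policy': policy, 'signature': sign(policy.encode())})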
Yes, you can do this with Fine Uploader. Here is a link that explains very well what you need to do: http://blog.fineuploader.com/2013/08/16/fine-uploader-s3-upload-directly-to-amazon-s3-from-your-browser/
Here is what you need. In this blog post the Fine Uploader team introduces serverless S3 uploads via JavaScript: http://blog.fineuploader.com/2014/01/15/uploads-without-any-server-code/

Is there a way to serve S3 files directly to the user, with a URL that can't be shared?

I'm storing some files for a website on S3. Currently, when a user needs a file, I create a signed URL (query string authentication) that expires and send that to their browser. However, they can then share this URL with others before the expiration.
What I want is some sort of authentication that ensures that the URL will only work from the authenticated user's browser.
I have implemented a way to do this by using my server as a relay between amazon and the user, but would prefer to point the users directly to amazon.
Is there a way to have a session cookie of some sort created in the user's browser, and then have Amazon expect that session cookie before serving files?
That's not possible with S3 alone, but CloudFront provides this feature. Take a look at this chapter in the documentation: Using a Signed URL to Serve Private Content.
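For completeness, a signed CloudFront URL can be generated from application code with botocore's CloudFrontSigner; here is a minimal sketch (Python; the key pair ID, private key file, and distribution domain are placeholders, and the cryptography package is assumed). A custom policy can additionally restrict the URL to a source IP range, which gets closer to "not shareable":

from datetime import datetime, timedelta
from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def rsa_signer(message):
    # Sign the CloudFront policy with the private key of your CloudFront key pair
    with open('cloudfront_private_key.pem', 'rb') as f:  # placeholder key file
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner('KEY_PAIR_ID', rsa_signer)  # placeholder key pair ID
url = signer.generate_presigned_url(
    'https://d1234example.cloudfront.net/private/video.mp4',  # placeholder distribution/path
    date_less_than=datetime.utcnow() + timedelta(minutes=10),
)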
