I'm using a service that puts the data I need on S3 and gives me a list of presigned URLs to download (of the form http://<bucket>.s3.amazonaws.com/<key>?AWSAccessKeyId=...&Signature=...&Expires=...).
I want to copy those files into my S3 bucket without having to download them and upload again.
I'm using the Ruby SDK (but I'm willing to try something else if it works) and couldn't find a way to do this.
I was able to initialize the S3 object with my credentials (access key and secret), which grant me access to my bucket, but how do I pass the "source-side" AWSAccessKeyId, Signature and Expires parameters?
To make the problem a bit simpler: I can't even perform a GET request on the object using the presigned parameters (not with plain HTTP; I want to do it through the SDK API).
I found a lot of examples of how to create a presigned URL, but nothing about how to authenticate using already-given parameters (I obviously don't have my data provider's secret key).
Thanks!
You can't do this with a signed URL, but as has been mentioned, if you fetch and upload within EC2 in an appropriate region for the buckets in question, there's essentially no additional cost.
Also worth noting: the two buckets do not have to be in the same account, but the AWS key that you use to make the request has to have permission to put the target object and get the source object. Permissions can be granted across accounts, though in many cases that's unlikely to be granted.
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
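For reference, a plain server-side copy with the current aws-sdk-s3 gem looks like the sketch below. The bucket and key names are hypothetical, and the call only works under the permission rules described above; it does not accept the provider's presigned parameters.

    require "aws-sdk-s3"  # AWS SDK for Ruby, v3

    # Hypothetical bucket and key names. This server-side copy uses your own
    # credentials and only succeeds if that key can GET the source object and
    # PUT the destination object.
    s3 = Aws::S3::Client.new(region: "us-east-1")
    s3.copy_object(
      bucket: "my-destination-bucket",          # destination bucket
      key: "copied-object",                     # destination key
      copy_source: "sourceBucket/sourceObject"  # "bucket/key" of the source
    )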
You actually can do a copy with a presigned URL. To do this, you need to create a presigned PUT request that also includes a header like x-amz-copy-source: /sourceBucket/sourceObject to specify where you are copying from. In addition, if you want the copied object to have new metadata, you will also need to add the header x-amz-metadata-directive: REPLACE. See the REST API documentation for more details.
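A minimal sketch of what such a request could look like with Ruby's Net::HTTP, assuming you have generated a presigned PUT URL for the destination object with the x-amz-copy-source header included in the signature (the URL, bucket, and key below are placeholders):

    require "net/http"
    require "uri"

    # Hypothetical presigned PUT URL for the destination object. The
    # x-amz-copy-source header must have been included when the URL was
    # signed, or S3 will reject the request with a signature mismatch.
    uri = URI("https://dest-bucket.s3.amazonaws.com/target-key?AWSAccessKeyId=AKIA...&Signature=...&Expires=...")

    request = Net::HTTP::Put.new(uri)
    request["x-amz-copy-source"] = "/sourceBucket/sourceObject"
    request["x-amz-metadata-directive"] = "REPLACE"  # optional: replace metadata on the copy

    response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
      http.request(request)
    end
    puts response.code  # 200 with a <CopyObjectResult> body on success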
Related
I know how to get the metadata for a Google document, and I can get the thumbnailLink value, but I have no clue how to actually download it using the Ruby client for the Google API. I'm authenticating with an access_token from the user and would like to download the image representing that document.
I can't find any example or documentation on how to make a credentialed request to do a simple GET of the thumbnailLink URL in Ruby.
I have tried adding the Authorization: Bearer <ACCESS_TOKEN> header to the request to no avail.
Any suggestions?
The problem was the scope. The start of this was actually the creation of the Google document itself, for which I used the scope https://www.googleapis.com/auth/drive.file ("View and manage Google Drive files and folders that you have opened or created with this app"). You would think that would give you access to the thumbnailLink for the document you just created with an access_token generated using that scope, but it does not. Instead you need to widen your scope. For now I'm going with https://www.googleapis.com/auth/drive, which is pretty much wide-open permissions, but I will try to limit that later. Once I changed to that scope, I was able to make a simple GET request with the Authorization: Bearer <access_token> header and it works.
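For anyone else landing here, a minimal sketch of that GET in Ruby with Net::HTTP; the thumbnail_link and access_token values are placeholders you would pull from the file's metadata and your OAuth flow:

    require "net/http"
    require "uri"

    thumbnail_link = "https://docs.google.com/feeds/vt?gd=true&id=..."  # hypothetical
    access_token   = "ya29...."                                          # hypothetical

    # Credentialed GET: the bearer token goes in the Authorization header.
    uri = URI(thumbnail_link)
    request = Net::HTTP::Get.new(uri)
    request["Authorization"] = "Bearer #{access_token}"

    response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
      http.request(request)
    end
    File.binwrite("thumbnail.png", response.body) if response.is_a?(Net::HTTPSuccess)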
I'm using an S3 bucket for my application's users' uploads. This bucket is private.
When I use the following code, the generated URL is not accessible from within the application:
return Storage::disk('s3')->url($this->path);
I can solve this by generating a temporary URL, which is accessible:
return Storage::disk('s3')->temporaryUrl($this->path, Carbon::now()->addMinutes(10));
Is this the only way to do this, or are there alternatives?
When objects are private in Amazon S3, they cannot be accessed by an "anonymous" URL. This is what makes them private.
An object can be accessed via an AWS API call from within your application if the IAM credentials associated with the application have permission to access the object.
If you wish to make the object accessible via a URL in a web browser (e.g. as the page URL, or when referencing it within a tag such as <img>), then you will need to create an Amazon S3 pre-signed URL, which provides time-limited access to a private object. The URL includes authorization information.
While I don't know Laravel, it would appear that your first code sample just provides a normal "anonymous" URL to the object in Amazon S3 and is therefore (correctly) failing. Your second code sample apparently generates a pre-signed URL, which will work for the given time period. This is the correct way to make a URL that you can use in the browser.
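For comparison outside Laravel, here is a sketch of the same idea with the AWS SDK for Ruby (region, bucket, and key are hypothetical); temporaryUrl is presumably doing the equivalent under the hood:

    require "aws-sdk-s3"

    # Generate a time-limited pre-signed GET URL for a private object.
    object = Aws::S3::Resource.new(region: "us-east-1")
                              .bucket("my-private-bucket")
                              .object("uploads/avatar.png")
    puts object.presigned_url(:get, expires_in: 600)  # valid for 10 minutes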
According to the documentation, the response to the initial file list request sent by Fine Uploader requires at least the following items: name, uuid, s3Key, blobName.
As the documentation says, s3Key is for Amazon S3 usage and blobName is for MS Azure. That much is clear. However, if you provide the s3Key for use with Amazon S3, can the uuid be excluded? I just want to clarify how the uuid is used when dealing with Amazon S3 objects, and, if simply omitting the uuid is not correct, what the guidelines are for providing it correctly.
name and uuid are always required in your server's response to the initial file list GET request. The UUID is simply sent as a parameter back to your server when a delete request is made; it is also required by some internal components of Fine Uploader for tracking purposes. If you want to use the initial file list feature, you should be storing the UUID Fine Uploader assigns each file along with the file record in your DB. If you aren't doing this, you can probably just generate a new UUID for each canned/initial file and return it in your response to the initial file GET. If you choose to do this and you are utilizing the delete file feature, you may need to make sure you persist this association server-side.
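A sketch of what the response body might look like for S3; the file name and key are hypothetical, and ideally the uuid would be the one stored at upload time rather than freshly generated:

    require "json"
    require "securerandom"

    # Response to the initial file list GET request. name and uuid are always
    # required; s3Key identifies the object in your bucket.
    initial_files = [
      {
        name:  "photo.jpg",
        uuid:  SecureRandom.uuid,  # or, better, the UUID stored at upload time
        s3Key: "uploads/abc123/photo.jpg"
      }
    ]
    puts JSON.generate(initial_files)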
I am trying to upload files directly to S3, but as per my research it needs server-side code or a dependency on Facebook, Google, etc. Is there any way to upload files directly to Amazon using Fine Uploader only?
There are three ways to upload files directly to S3 using Fine Uploader:
The first option is to allow Fine Uploader S3 to send a small request to your server before each API call it makes to S3. Your server responds with a signature that Fine Uploader needs to make the request. This signature ensures the integrity of the request, and requires your secret key, which should not be exposed client-side. This is discussed here: http://blog.fineuploader.com/2013/08/16/fine-uploader-s3-upload-directly-to-amazon-s3-from-your-browser/. (A minimal server-side sketch appears after the third option below.)
The second option is to ask Fine Uploader to sign all requests client-side. This is a good option if you don't want Fine Uploader to make any requests to your server at all. However, it is critical that you don't simply hard-code your AWS secret key; again, this key should be kept secret. By utilizing an identity provider such as Facebook, Google, or Amazon, you can request very limited and temporary credentials that are fed to Fine Uploader, which then uses them to submit requests to S3. You can read more about this here: http://blog.fineuploader.com/2014/01/15/uploads-without-any-server-code/.
The third option is to either generate temporary security credentials yourself when you create a Fine Uploader instance, or simply hard-code them in your client-side code. I would suggest you not hard-code security credentials.
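Here is a minimal sketch of the signing endpoint from the first option, based on my reading of the signature flow described in the linked blog post; the endpoint path and framework (Sinatra) are my own choices, so check the Fine Uploader docs for the exact request shapes:

    require "sinatra"
    require "base64"
    require "openssl"
    require "json"

    AWS_SECRET_KEY = ENV.fetch("AWS_SECRET_ACCESS_KEY")  # keep server-side only

    helpers do
      # Base64-encoded HMAC-SHA1, as used by S3 v2 signatures.
      def sign(data)
        Base64.strict_encode64(OpenSSL::HMAC.digest("sha1", AWS_SECRET_KEY, data))
      end
    end

    post "/s3/signature" do
      payload = JSON.parse(request.body.read)
      content_type :json
      if payload["headers"]
        # Chunked/REST requests: Fine Uploader sends the string to sign directly.
        { signature: sign(payload["headers"]) }.to_json
      else
        # Simple uploads: base64-encode the policy document, then sign it.
        policy = Base64.strict_encode64(JSON.generate(payload))
        { policy: policy, signature: sign(policy) }.to_json
      end
    end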
Yes, you can do this with Fine Uploader. Here is a link that explains very well what you need to do: http://blog.fineuploader.com/2013/08/16/fine-uploader-s3-upload-directly-to-amazon-s3-from-your-browser/
Here is what you need. In this blog post the Fine Uploader team introduces serverless S3 uploads via JavaScript: http://blog.fineuploader.com/2014/01/15/uploads-without-any-server-code/
I'm storing some files for a website on S3. Currently, when a user needs a file, I create a signed URL (query string authentication) that expires and send it to their browser. However, they can then share this URL with others before it expires.
What I want is some sort of authentication that ensures the URL will only work from the authenticated user's browser.
I have implemented a way to do this by using my server as a relay between Amazon and the user, but I would prefer to point users directly to Amazon.
Is there a way to have a session cookie of some sort created in the user's browser, and then have Amazon expect that cookie before serving files?
That's not possible with S3 alone, but CloudFront provides this feature. Take a look at this chapter in the documentation: Using a Signed URL to Serve Private Content.
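As a rough sketch of the CloudFront side using the aws-sdk-cloudfront gem's UrlSigner (the key pair ID, key path, and distribution domain are placeholders; check the SDK documentation for the exact options):

    require "aws-sdk-cloudfront"

    # The distribution must front the private S3 bucket and be configured to
    # require signed requests. All values below are hypothetical.
    signer = Aws::CloudFront::UrlSigner.new(
      key_pair_id: "APKAEXAMPLE",
      private_key_path: "./cloudfront_private_key.pem"
    )
    url = signer.signed_url(
      "https://d111111abcdef8.cloudfront.net/private/file.pdf",
      expires: Time.now + 600  # canned policy, 10 minutes
    )
    puts url

CloudFront also supports signed cookies (Aws::CloudFront::CookieSigner in the same gem), which are closer to the session-cookie behavior you describe, since the browser sends them automatically with each request instead of requiring every URL to be rewritten.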