Upload files directly to Amazon S3 using Fine Uploader

I am trying to upload files directly to S3, but from my research it seems to need server-side code or a dependency on Facebook, Google, etc. Is there any way to upload files directly to Amazon using Fine Uploader only?

There are three ways to upload files directly to S3 using Fine Uploader:
Allow Fine Uploader S3 to send a small request to your server before each API call it makes to S3. In this request, your server responds with a signature that Fine Uploader needs to make the request. This signature ensures the integrity of the request and requires your secret key, which should not be exposed client-side. This is discussed here: http://blog.fineuploader.com/2013/08/16/fine-uploader-s3-upload-directly-to-amazon-s3-from-your-browser/ (a client-side sketch follows this list).
Ask Fine Uploader to sign all requests client-side. This is a good option if you don't want Fine Uploader to make any requests to your server at all. However, it is critical that you don't simply hardcode your AWS secret key. Again, this key should be kept a secret. By utilizing an identity provider such as Facebook, Google, or Amazon, you can request very limited and temporary credentials which are fed to Fine Uploader. It then uses these credentials to submit requests to S3. You can read more about this here: http://blog.fineuploader.com/2014/01/15/uploads-without-any-server-code/.
The third way to upload files directly to S3 using Fine Uploader is to either generate temporary security credentials yourself when you create a Fine Uploader instance, or simply hard-code them in your client-side code. I would suggest you not hard-code security credentials.
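For illustration, here is roughly what option 1 looks like on the client. This is a minimal sketch based on the Fine Uploader S3 documentation; the element ID, bucket endpoint, access key, and /s3/signature path are placeholders you would replace:

declare const qq: any; // Fine Uploader's global, provided by its S3 bundle

// Sketch of option 1: Fine Uploader asks your server to sign each request.
const uploader = new qq.s3.FineUploader({
  element: document.getElementById("fine-uploader"),
  request: {
    endpoint: "https://your-bucket.s3.amazonaws.com", // your bucket's endpoint
    accessKey: "YOUR_PUBLIC_ACCESS_KEY", // the public key only, never the secret
  },
  signature: {
    endpoint: "/s3/signature", // your server-side signing route (placeholder)
  },
});

The /s3/signature route is where your secret key lives: it HMAC-signs whatever policy document or string-to-sign Fine Uploader posts to it and returns the signature, so the secret itself never reaches the browser. The linked blog post covers the exact request and response shapes.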

Yes, you can do this with Fine Uploader. Here is a link that explains very well what you need to do: http://blog.fineuploader.com/2013/08/16/fine-uploader-s3-upload-directly-to-amazon-s3-from-your-browser/

Here is what you need. In this blog post the Fine Uploader team introduces serverless S3 uploads via JavaScript: http://blog.fineuploader.com/2014/01/15/uploads-without-any-server-code/

Related

Hosting uploads on Amazon S3 in a private bucket, accessing URLs from within Laravel

I'm using an S3 bucket for my application's users' uploads. This bucket is private.
When I use the following code, the generated URL is not accessible from within the application:
return Storage::disk('s3')->url($this->path);
I can solve this by generating a temporary URL, which is accessible:
return Storage::disk('s3')->temporaryUrl($this->path, Carbon::now()->addMinutes(10));
Is this the only way to do this? Or are there other alternatives?
When objects are private in Amazon S3, they cannot be accessed by an "anonymous" URL. This is what makes them private.
An object can be accessed via an AWS API call from within your application if the IAM credentials associated with the application have permission to access the object.
If you wish to make the object accessible via a URL in a web browser (e.g. as the page URL, or when referencing it within a tag such as <img>), then you will need to create an Amazon S3 pre-signed URL, which provides time-limited access to a private object. The URL includes authorization information.
While I don't know Laravel, it would appear that your first code sample just provides a normal "anonymous" URL to the object in Amazon S3 and is therefore (correctly) failing. Your second code sample is apparently generating a pre-signed URL, which will work for the given time period. This is the correct way to make a URL that you can use in the browser.
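Under the hood, Laravel's temporaryUrl() asks the AWS SDK to presign the request. For reference, here is the equivalent call made directly against the SDK (a sketch using the AWS SDK for JavaScript v3; the region, bucket, and key are placeholders):

import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "us-east-1" });

// Sign a time-limited GET for the private object (10 minutes here),
// matching Carbon::now()->addMinutes(10) in the Laravel sample above.
const url = await getSignedUrl(
  s3,
  new GetObjectCommand({ Bucket: "my-bucket", Key: "uploads/avatar.png" }),
  { expiresIn: 600 }
);

Anyone holding that URL can fetch the object until it expires; S3 validates the signature in the query string instead of requiring credentials.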

Handling image uploads in parse.com Cloud Code

I am trying to handle file uploads for a web app through Cloud Code. I am facing the following issues:
We can't add third-party middleware such as busboy to Parse.
Express's built-in functionality such as req.files doesn't seem to work with the body parser parse.com provides.
I don't want to expose my app key in the client code.
I wanted to know if there is any other way to handle this.
Parse Cloud is not a Node environment, so it is not a surprise that it does not support npm modules.
The middleware Parse provides for express.js does not support file uploads. Instead you need to send the file contents as base64 to your endpoint and create the Parse.File object from this data (see the sketch after this list); more info here.
Your app and client keys (except for the master key) are PUBLIC INFORMATION and NOT secrets. This is clearly mentioned in the documentation by Parse, and you cannot hide them at all. Use CLPs, ACLs and Cloud Code to protect your data from unauthorised access.
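A sketch of the base64 flow from the second point above. The function name uploadImage and the parameter names are hypothetical, and this uses the callback-style Cloud Code API of the parse.com era:

declare const Parse: any; // ambient global inside Cloud Code

// The client POSTs { name, data } to this function, where "data" is the
// file's contents encoded as base64 on the client.
Parse.Cloud.define("uploadImage", function (request: any, response: any) {
  const file = new Parse.File(request.params.name, {
    base64: request.params.data,
  });
  file.save().then(
    function (saved: any) { response.success(saved.url()); },
    function (error: any) { response.error(error); }
  );
});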

How do I copy an object with presigned URL?

I'm using a service that puts the data I need on S3 and gives me a list of presigned URLs to download (http://<bucket>.s3.amazonaws.com/<key>?AWSAccessKeyId=...&Signature=...&Expires=...).
I want to copy those files into my S3 bucket without having to download them and upload again.
I'm using the Ruby SDK (but I'm willing to try something else if it works) and couldn't manage to write anything that does this.
I was able to initialize the S3 object with my credentials (access_key and secret) that grant me access to my bucket, but how do I pass the "source-side" access_key_id, signature and expires parameters?
To make the problem a bit simpler: I can't even do a GET request to the object using the presigned parameters (not with regular HTTP; I want to do it through the SDK API).
I found a lot of examples of how to create a presigned URL, but nothing about how to authenticate using already-given parameters (I obviously don't have the secret_key of my data provider).
Thanks!
You can't do this with a signed URL, but as has been mentioned, if you fetch and upload within EC2 in an appropriate region for the buckets in question, there's essentially no additional cost.
Also worth noting, the two buckets do not have to be in the same account, but the AWS key that you use to make the request has to have permission to put the target object and get the source object. Permissions can be granted across accounts... though in many cases, that's unlikely to be granted.
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
You actually can do a copy with a presigned URL. To do this, you need to create a presigned PUT request that also includes a header like x-amz-copy-source: /sourceBucket/sourceObject in order to specify where you are copying from. In addition, if you want the copied object to have new metadata, you will also need to add the header x-amz-metadata-directive: REPLACE. See the REST API documentation for more details.
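Here is a sketch of what presigning that copy can look like with the AWS SDK for JavaScript v3. The bucket and key names are placeholders, and note the assumption here: the CopySource value travels as the x-amz-copy-source header, so whoever executes the PUT must send that header exactly as signed.

import { S3Client, CopyObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "us-east-1" });

// Presign a server-side copy. The executing client must also send the
// x-amz-copy-source header ("/source-bucket/source-key") with the PUT.
const url = await getSignedUrl(
  s3,
  new CopyObjectCommand({
    Bucket: "dest-bucket",
    Key: "dest-key",
    CopySource: "/source-bucket/source-key",
    MetadataDirective: "REPLACE", // only if you want new metadata
  }),
  { expiresIn: 300 }
);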

How can I make POST requests without making my API key public?

Using the ImageShack API I can upload images to ImageShack, but I have to use an API key to do that. I can create a POST form for the image upload to ImageShack, but the key has to be put in the form, and that exposes the API key publicly. How can I upload images to ImageShack without exposing my API key?
I think the only way to do this properly is to have the image first POSTed to your OWN application by the user.
Then in your app you internally redirect this POST to ImageShack, where you can use your API key safely without anyone ever seeing it.
You can use something easy like RestClient to run the POST request from your back-end. You will need to store the image temporarily on your server, either in memory or on disk, for retransmission to ImageShack (a sketch follows the steps below).
So:
User sends image with POST to your server
Your server receives the image in the POST request from the user
Your server runs a POST with this image to ImageShack using your API key
The POST request from step 1 returns successfully to the user
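A minimal sketch of that relay in Node/Express (multer keeps the upload in memory; the ImageShack endpoint shown is a placeholder, so check their current API docs):

import express from "express";
import multer from "multer";

const app = express();
const upload = multer({ storage: multer.memoryStorage() }); // step 2: receive in memory

app.post("/upload", upload.single("image"), async (req, res) => {
  const file = req.file!; // steps 1-2: the user's image arrives here, not at ImageShack

  // Step 3: re-POST the image to ImageShack; the key stays server-side.
  const form = new FormData();
  form.append("key", process.env.IMAGESHACK_API_KEY!); // never exposed to the browser
  form.append("file", new Blob([file.buffer]), file.originalname);

  // Placeholder URL -- consult the ImageShack API documentation.
  const upstream = await fetch("https://api.imageshack.com/v2/images", {
    method: "POST",
    body: form,
  });

  // Step 4: relay ImageShack's result back to the user.
  res.status(upstream.status).send(await upstream.text());
});

app.listen(3000);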

Is there a way to serve S3 files directly to the user, with a URL that can't be shared?

I'm storing some files for a website on S3. Currently, when a user needs a file, I create a signed URL (query string authentication) that expires and send that to their browser. However, they can then share this URL with others before the expiration.
What I want is some sort of authentication that ensures the URL will only work from the authenticated user's browser.
I have implemented a way to do this by using my server as a relay between Amazon and the user, but would prefer to point the users directly to Amazon.
Is there a way to have a session cookie of some sort created in the user's browser, and then have Amazon expect that session cookie before serving files?
That's not possible with S3 alone, but CloudFront provides this feature. Take a look at this chapter in the documentation: Using a Signed URL to Serve Private Content.
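For example, with the CloudFront signer from the AWS SDK for JavaScript v3 (a sketch; the distribution domain, key pair ID, and expiry are placeholders):

import { getSignedUrl } from "@aws-sdk/cloudfront-signer";

// Sign a CloudFront URL that expires in 10 minutes. The private key pairs
// with the public key registered with your CloudFront distribution.
const url = getSignedUrl({
  url: "https://d1234example.cloudfront.net/private/report.pdf",
  keyPairId: "K2EXAMPLEKEYID",
  privateKey: process.env.CLOUDFRONT_PRIVATE_KEY!,
  dateLessThan: new Date(Date.now() + 10 * 60 * 1000).toISOString(),
});

With a custom policy, CloudFront can additionally restrict the signed URL to the requester's IP address, which is what makes a shared link useless from another machine.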
