S3: use PUT instead of POST - fine-uploader

I've got this setup in my Amazon environment:
CloudFront distribution -> S3 bucket in Frankfurt.
Unfortunately, newer regions support only v4 signatures, and this is causing me some headaches.
I use Fine Uploader to upload directly to the CloudFront distribution, and everything works fine if the file is chunked (in this case Fine Uploader uses PUT to upload the file).
The problem happens when the file size is smaller than the chunk size. In that case Fine Uploader switches the method to POST. Since POST is not supported by CloudFront (as noted in the documentation), I'm not able to upload files. Is there any way to override the upload method for non-chunked files?

I just made some adjustments to the pre-release of Fine Uploader 5.4. If you are sending files through a CDN and using v4 signatures, you will need to supply the hostname of your S3 bucket to Fine Uploader S3 as well. This ensures the headers are signed using the bucket's hostname, and not the hostname of your CDN. This was tested and verified with Fastly, and should work with any sane CDN. CloudFront is, for the most part, a mess, so no guarantees with CF.
As a result of my change, I don't believe you will have to use an Origin Access Identity anymore, and will therefore not be restricted to PUT requests.
I've updated the documentation for the CDN section on the S3 feature page in the develop branch for reference. 5.4.0 is scheduled to be released next week.
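For reference, a minimal sketch of what that setup might look like once 5.4.0 is out; the exact option names for the bucket hostname and signature version (objectProperties.host and signature.version below) are assumptions to verify against the updated S3 feature page:

// Sketch only: verify option names against the Fine Uploader 5.4 S3 docs.
var uploader = new qq.s3.FineUploader({
    request: {
        // The CDN endpoint that fronts the bucket (your CloudFront/Fastly hostname)
        endpoint: 'https://dxxxxxxxxxxxx.cloudfront.net'
    },
    objectProperties: {
        region: 'eu-central-1',
        // Assumed option: the real bucket hostname, so v4 headers are signed
        // against the bucket rather than the CDN hostname
        host: 'mybucket.s3.eu-central-1.amazonaws.com'
    },
    signature: {
        endpoint: '/s3/signature',
        version: 4
    }
});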

You can simply lower your S3 maximum chunk size. See the docs here: http://docs.aws.amazon.com/cli/latest/topic/s3-config.html#multipart-chunksize
multipart_chunksize
Default - 8MB
Once the S3 commands have decided to use multipart operations, the file is divided into chunks. This configuration option specifies what the chunk size (also referred to as the part size) should be. This value can be specified using the same semantics as multipart_threshold, that is either as the number of bytes as an integer, or using a size suffix.
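For reference, that setting lives in the AWS CLI configuration file and only affects uploads made through the AWS CLI, for example:

[default]
s3 =
  multipart_threshold = 5MB
  multipart_chunksize = 5MB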


Browser Cache Private S3 Resources

Stack is:
Angular
Laravel
S3
nginx
I'm using S3 to store confidential resources of my users. Bucket access is set to private, which means I can access files either by creating temporary (signed, dynamic) links or by using the Storage::disk('s3')->get('path/to/resource') method and returning the actual file as a response.
I'm looking for a way to cache resources in the user's browser. I have tried to set cache headers on the resource response directly on AWS, but since I'm creating temporary URLs, they are dynamic and caching does not work in that case.
Any suggestion is highly appreciated.
EDIT: One thing that makes the whole problem even more complex is that the security of the resources must stay intact. It means that I need a way to cache resources, but at the same time I must prevent users from copy-pasting links and using them outside of the app (sharing with others via direct links).
In terms of security, temporary links are still not an ideal solution, since they can be shared (and accessed multiple times) within the period of time they are valid for (in my case it's 30 seconds).
Caching will work as-is (based on Cache-Control, et al.) as long as the URL stays the same. So, if your application uses the same signed URL for a while, you'll be fine.
The problem comes when you want to update an expiration date or something. This of course has different querystring parameters, and is effectively a different URL. You need a different caching key, but the browser has no concept of this by default.
If it is acceptable for your security, you can create a Service Worker which uses just the base URL (without the querystring) as the cache key. Then, future requests for the same object in the bucket will be able to use the cached response, regardless of other URL parameters.
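A hedged sketch of such a worker, assuming every S3 request passes through it and that keying the cache by path alone is acceptable for your threat model (the cache name and hostname check are placeholders to adapt):

// sw.js - cache S3 responses keyed by origin + path, ignoring the signature querystring.
var CACHE_NAME = 'signed-s3';

self.addEventListener('fetch', function (event) {
  var url = new URL(event.request.url);
  if (!url.hostname.endsWith('.amazonaws.com')) {
    return; // let non-S3 requests pass through untouched
  }
  var cacheKey = url.origin + url.pathname; // strips ?X-Amz-Signature=... and friends
  event.respondWith(
    caches.open(CACHE_NAME).then(function (cache) {
      return cache.match(cacheKey).then(function (cached) {
        if (cached) {
          return cached; // serve the previously fetched object
        }
        return fetch(event.request).then(function (response) {
          if (response.ok) {
            cache.put(cacheKey, response.clone()); // store under the stripped key
          }
          return response;
        });
      });
    })
  );
});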
I must prevent users from copy-pasting links and using them outside of the app (sharing with others via direct links).
This part of your requirement is impossible, and unrelated to caching. Once that URL is signed, it can be used by others.
You just have to add one parameter in your code.
'ResponseCacheControl' => 'no-store'
Storage::disk('s3')->getAwsTemporaryUrl(
    Storage::disk('s3')->getDriver()->getAdapter(),
    trim($mNameS3),
    \Carbon\Carbon::now()->addMinutes(config('app.aws_bucket_temp_url_time')),
    ['ResponseCacheControl' => 'no-store']
);

Storing an object with a Cache-Control header in Object Storage is unachievable

I uploaded an object with Cache-Control as a parameter, and it does not take effect in the Object Storage bucket, but it does in an AWS S3 bucket using the same code:
$s3Client->putObject([
    'ACL' => 'public-read',
    'Bucket' => config('filesystems.disks.object-storage.bucket_name'),
    'CacheControl' => 'public, max-age=86400',
    'Key' => $path,
    'SourceFile' => $path,
]);
I don't really understand why the same code does not have the same effect in both cloud buckets, since both use the S3 API.
The uploaded file has the Cache-Control header in AWS S3, while the same file in IBM Object Storage doesn't.
How can I correctly set the Cache-Control header on an Object Storage file?
IBM Object Storage currently does not have all the options that AWS S3 has; the valid API operations are listed here: https://ibm-public-cos.github.io/crs-docs/api-reference
As you can see, there is no support for cache control.
It can be done now - at least, certainly through the IBM Cloud Object Storage CLI:
ibmcloud cos put-object --bucket bucket-name-here \
  --cache-control "public, max-age=31536000" \
  --body dir/file.jpg --key prefix/file.jpg
Assuming you have the rights to do this, it will result in an object with the appropriate Cache-Control header. There are optional parameters for e.g. Content-Type as well, although it seemed to detect the correct one for a JPG. To replace metadata on existing files you may have to copy from the bucket to the same bucket, as is done here.
Prior to this I created a service account with HMAC and entered the credentials with ibmcloud cos config hmac. You may also need ibmcloud cos config region to set your default region first.
As for the API itself, setCacheControl() [and setHttpExpiresDate()] seem like what you need. For the REST API you may need Cache-Control to be part of the PUT - it has been listed as a "common header" since June 2018. I'm not sure this is how you achieve this goal via REST, but it seems likely - this is how you set Content-Type.
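As a rough sketch (not tested against COS), a REST PUT with the header might look like the following, assuming you already have an IAM bearer token and that the regional endpoint, bucket, and key below are replaced with your own:

// Sketch only: endpoint, bucket, and token handling are assumptions to adapt.
const endpoint = 'https://s3.eu-de.cloud-object-storage.appdomain.cloud'; // example regional endpoint
const bucket = 'bucket-name-here';
const key = 'prefix/file.jpg';

async function putWithCacheControl(iamToken, body) {
  const response = await fetch(`${endpoint}/${bucket}/${key}`, {
    method: 'PUT',
    headers: {
      'Authorization': `Bearer ${iamToken}`,       // IAM token obtained elsewhere
      'Content-Type': 'image/jpeg',
      'Cache-Control': 'public, max-age=31536000', // should be stored with the object if supported
    },
    body: body,                                    // Buffer, Blob, or stream with the file contents
  });
  if (!response.ok) {
    throw new Error(`PUT failed: ${response.status}`);
  }
  return response;
}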
In the web console I wasn't able to see an equivalent to the way Oracle offers to set Cache-Control headers when selecting files to upload, as it starts uploading immediately upon drag-and-drop using Aspera Connect. (This is unfortunate, as it's a relatively user-friendly way to upload a moderate number of files with paths.)

[play framework] Leverage browser caching

I ran a speed test of my page. It said: "Setting an expiry date or a maximum age in the HTTP headers for static resources instructs the browser to load previously downloaded resources from local disk rather than over the network."
My page uses the Play Framework. I came across a lot of answers regarding the .htaccess file, but it is not supported in Play Framework. How can I cache the static files at the browser level?
When using Play in production mode, it already sets the ETag header, so whenever a browser requests a file matching that ETag, Play just returns 304 Not Modified. This saves data (the browser will not download the file again if it has the right version), but still requires a request to the server.
If you want to specify an expiry date, you can add assets.defaultCache="max-age=3600" to your application.conf (adapt the value to your needs: 3600 is one hour in seconds).
I can't check this right now, but I think Play also sets Cache-Control: max-age=3600, so probably the warning you are getting is because this value is too low for the tool you are using to check the caching.
You can also set the expiry time for individual assets (see https://www.playframework.com/documentation/2.5.x/AssetsOverview#Additional-Cache-Control-directive).
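As a sketch, in application.conf (the exact key may be namespaced differently depending on your Play version, so confirm against the linked docs):

# Default Cache-Control for assets served by the Assets controller
# (newer Play versions may expect play.assets.defaultCache instead)
assets.defaultCache = "max-age=3600"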
Note that you should only specify a high expiry time for assets that you are sure don't change often...

Receive file via websocket and save/write to local folder

Our application is entirely built on WebSockets. We don't do any HTTP request-reply. However, we are stuck on file download. If I receive file content via WebSockets, can I write it to a local folder on the user's computer?
If it makes a difference, we are only supporting Chrome, so it's not an issue if it doesn't work on other browsers.
Also, I know I can do this via HTTP. I'm trying to avoid it and stick to WebSockets, since that's how the entire app works.
Thanks a lot in advance!
The solution depends on the size of your file.
If the size is less than about 50 MB, I would encode the file's content to a base64 string on the server and send this string to the client. The client should receive parts of the string, concatenate them into a single result, and store it. After receiving the whole string, add a link (an <a> tag) to your page with the href attribute set to "data:<data_type>;base64,<base64_encoded_file_content>". <data_type> is the MIME type of your file, for example "text/html" or "image/png". Suggest a file name by adding a download attribute set to the name of the file (this doesn't work for Chrome on OS X).
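A rough client-side sketch of that approach, assuming the server sends the base64 chunks over the existing WebSocket and signals completion with a final message (the endpoint and the { type, data, name, mime } message shape are made up for illustration, not a fixed protocol):

// Accumulate base64 chunks arriving over the socket, then offer a download link.
var socket = new WebSocket('wss://example.com/files'); // assumed endpoint
var chunks = [];

socket.addEventListener('message', function (event) {
  var msg = JSON.parse(event.data);
  if (msg.type === 'file-chunk') {
    chunks.push(msg.data);               // base64-encoded piece of the file
  } else if (msg.type === 'file-end') {
    var base64 = chunks.join('');
    chunks = [];
    var link = document.createElement('a');
    link.href = 'data:' + msg.mime + ';base64,' + base64;
    link.download = msg.name;            // suggested file name (ignored by some browsers)
    link.textContent = 'Download ' + msg.name;
    document.body.appendChild(link);
  }
});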
Unfortunately, I have no solution for large files. Currently only the FileEntry API allows writing files with JS, and according to the documentation it is supported only by Chrome v13+; learn more here: https://developer.mozilla.org/en-US/docs/Web/API/FileEntry.

What's the fastest way to upload an image to a webserver?

I am building an application which will allow users to upload images. Mostly, it will work with mobile browsers on slow internet connections. I was wondering if there are best practices for this. Is doing some encryption, then transferring and decoding on the server, a trick to try? Or something else?
You would want something preferably with resumable uploads. Since your connections are slow, you'd need something that can resume where it left off. A module I've come across over the years is the nginx upload module:
http://www.grid.net.ru/nginx/upload.en.html
According to the site:
The module parses request body storing all files being uploaded to a directory specified by upload_store directive. The files are then being stripped from body and altered request is then passed to a location specified by upload_pass directive, thus allowing arbitrary handling of uploaded files. Each of file fields are being replaced by a set of fields specified by upload_set_form_field directive. The content of each uploaded file then could be read from a file specified by $upload_tmp_path variable or the file could be simply moved to ultimate destination. Removal of output files is controlled by directive upload_cleanup. If a request has a method other than POST, the module returns error 405 (Method not allowed). Requests with such methods could be processed in alternative location via error_page directive.
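A minimal location block using those directives might look like the following; the paths and backend location are placeholders, and the $upload_field_name/$upload_file_name variables follow the module's documented example, so verify against the linked page:

location /upload {
    upload_pass   /internal_upload_handler;   # request is forwarded here after files are stripped
    upload_store  /var/tmp/nginx_uploads;     # where uploaded files are written

    # Replace each file field with metadata about the stored file
    upload_set_form_field $upload_field_name.name "$upload_file_name";
    upload_set_form_field $upload_field_name.path "$upload_tmp_path";

    # Remove stored files if the backend answers with these status codes
    upload_cleanup 400 404 499 500-505;
}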
