How to set the Cache-Control header with Paperclip / fog for a file hosted on Google Cloud Storage?

I'm able to upload files to my Google Cloud Storage bucket with Paperclip and fog-google.
How can I set file headers? I am looking for an equivalent of the s3_headers option in fog-aws.
I have tried the google_headers and fog_headers options with no success.

Here we go:
fog_file: {
  cache_control: 'max-age=86400'
}
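For context, a minimal sketch of where that option sits in a Paperclip model, assuming Paperclip's fog storage with fog-google and an attachment named :image (the model name and credential keys are illustrative):

class Document < ActiveRecord::Base
  has_attached_file :image,
    storage: :fog,
    fog_credentials: {
      provider: 'Google',
      google_storage_access_key_id: ENV['GOOGLE_STORAGE_ACCESS_KEY_ID'],
      google_storage_secret_access_key: ENV['GOOGLE_STORAGE_SECRET_ACCESS_KEY']
    },
    fog_directory: 'my-bucket',
    # fog_file is merged into the attributes of every file written through fog,
    # so each upload is stored with Cache-Control: max-age=86400
    fog_file: { cache_control: 'max-age=86400' }
end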

Related

Optimizing image files on Laravel Vapor

I'm using Laravel Vapor to host a site. Up until now, I've not had a problem with the lack of a filesystem, but now I've hit a brick wall.
I'm trying to optimize .png and .jpeg files, and the libraries I found require a filesystem to write the compressed files:
Image Optimizer (https://github.com/spatie/image-optimizer)
PHP Image Cache (https://nielse63.github.io/php-image-cache/)
I'm guessing that I can set up an external service that runs on an additional traditional server... But I'd prefer to make it work with Vapor.
Any ideas?
Have you tried using GD Library or Imagick directly?
Using Imagick with files on S3 you can do something like this:
$s3 = \Storage::disk('s3');
$file = $s3->get('tmp/'.$uuid); // assuming Vapor uploaded to S3 here
$imagick = new \Imagick(); // the Imagick extension can be added to Vapor
$imagick->readImageBlob($file);
$imagick->thumbnailImage(200, 200); // whatever size you are looking for
$s3->put('path/on/s3/for/your/optimized/file', $imagick->getImageBlob(), ['CacheControl' => 'max-age=10000000, public', 'ACL' => 'public-read']); // whatever options you need
Note that both the read and the write go to S3 directly; there is no need to write to the local disk.

How to retrieve files from S3 in Laravel Vapor

I'm having a problem loading images dynamically in my HTML after storing them successfully with Laravel Vapor.
I have followed the documentation provided by Laravel Vapor to store files, and it works like a charm. I copy my uploaded files from the tmp directory into the root of my S3 bucket and then store the path of that file in my database's images table, so that later I can return the file path to my front end and display the image in my browser.
Unfortunately this is always returning a 403 status code from AWS S3.
I could fix this by making my generated S3 bucket public, but that would raise a security issue. I believe this should work out of the box, not sure where I could have gone wrong... any ideas?
I am returning the uploaded image URL using the Storage facade.
use Illuminate\Support\Facades\Storage;
return Storage::url($image->path);
Where $image->path is the file path in my S3 bucket.
I'm sure that the Storage facade is working correctly, because it returns the correct URL with the file's path.
I got the solution to this problem. I contacted Laravel Vapor support and was told to set the visibility property of my file to public when copying it to the permanent location, as stated in Laravel's official documentation here.
So after you upload your file using the JS vapor.store method, you should copy it to a permanent directory and then set its visibility to public.
Storage::copy($request->path, str_replace('tmp/', '', $request->path));
Storage::setVisibility(str_replace('tmp/', '', $request->path), 'public');
I also noticed that you can set the visibility of the file directly in the vapor.store method by passing a visibility attribute with the respective value.
vapor.store(file, { visibility: 'public-read' });
As a side note: just 'public' will return a 400 Bad Request; it must be set to 'public-read'.

Serve static files in Flask from private AWS S3 bucket

I am developing a Flask app running on Heroku that allows users to upload images. The app has a page displaying the user's images in a table.
For development purposes, I was saving the uploaded files to Heroku's ephemeral filesystem, and everything worked fine: the images were correctly loaded and displayed (I am using the last method shown here, which relies on send_from_directory()). Now I have moved the storage to S3 and I am adapting the code. I use boto3 to upload the files to the bucket, and that works fine. My doubts concern the download step that populates the users' pages with their images.
As explained here, I could set the file as "public-read" and use the URL (I think this is what Flask-S3 does), but I'd rather not leave the files freely accessible. So my attempted solution is to download the file to Heroku's filesystem and serve the image again using send_from_directory(), as follows:
app.py
import os
import boto3
from flask import current_app, send_from_directory

@app.route('/download/<resource>')
def download_image(resource):
    """resource: name of the file to download"""
    s3 = boto3.client('s3',
                      aws_access_key_id=current_app.config['S3_ACCESS_KEY'],
                      aws_secret_access_key=current_app.config['S3_SECRET_KEY'])
    s3.download_file(current_app.config['S3_BUCKET_NAME'],
                     resource,
                     os.path.join('tmp', resource))
    return send_from_directory('tmp',  # Heroku's ephemeral filesystem
                               resource,
                               as_attachment=False)
Then, in the template I generate the URL for the image as follows:
...
<img src="{{ url_for('app.download_image', resource=resource) }}"
     height="120" width="120">
...
It works, but I don't think this is the proper way, for several reasons: among them, I would have to manage Heroku's filesystem to avoid using up all the space between dyno restarts (i.e. delete the images from the filesystem).
What is the best/preferred way, also considering performance?
Thanks a lot
The preferred way is to simply create a pre-signed URL for the image and return a redirect to that URL. This keeps the files private in S3 but generates a temporary, time-limited URL that can be used to download the file directly from S3. That greatly reduces the amount of work happening on your server, as well as the amount of data transfer your server consumes. Something like this:
from flask import redirect

@app.route('/download/<resource>')
def download_image(resource):
    """resource: name of the file to download"""
    s3 = boto3.client('s3',
                      aws_access_key_id=current_app.config['S3_ACCESS_KEY'],
                      aws_secret_access_key=current_app.config['S3_SECRET_KEY'])
    url = s3.generate_presigned_url('get_object',
                                    Params={'Bucket': current_app.config['S3_BUCKET_NAME'],
                                            'Key': resource},
                                    ExpiresIn=100)
    return redirect(url, code=302)
If you don't like that solution, you should at least look into streaming the file contents from S3 instead of writing it to the file system.
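If it helps, here is a minimal sketch of that streaming approach, assuming the same boto3 client and config keys as above (the route name and chunk size are illustrative):

from flask import Response

@app.route('/stream/<resource>')
def stream_image(resource):
    """resource: name of the file to stream from S3"""
    s3 = boto3.client('s3',
                      aws_access_key_id=current_app.config['S3_ACCESS_KEY'],
                      aws_secret_access_key=current_app.config['S3_SECRET_KEY'])
    obj = s3.get_object(Bucket=current_app.config['S3_BUCKET_NAME'], Key=resource)
    # obj['Body'] is a botocore StreamingBody: iterating it in chunks means the
    # file never has to land on the dyno's filesystem or fit entirely in memory
    return Response(obj['Body'].iter_chunks(chunk_size=8192),
                    content_type=obj['ContentType'])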

Error "SignatureDoesNotMatch". Google Cloud Storage Bucket PUT

I'm losing my mind.
I'm using Shrine (https://github.com/janko-m/shrine) with Google Cloud Storage (https://github.com/renchap/shrine-google_cloud_storage), but when I start the PUT call I get this:
<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>
The request signature we calculated does not match the signature you provided. Check your Google secret key and signing method.
</Message>
<StringToSign>
PUT
image/jpeg
1518399402
/mybucket.appspot.com/7d5e4aad1e3a737fb8d2c59571fdb980.jpg
</StringToSign>
</Error>
I followed this info (http://shrinerb.com/rdoc/classes/Shrine/Plugins/PresignEndpoint.html) for presign_endpoint, but still nothing:
class FileUploader < Shrine
  plugin :presign_endpoint, presign_options: -> (request) do
    filename = request.params["filename"]
    extension = File.extname(filename)
    content_type = Rack::Mime.mime_type(extension)

    {
      content_type: content_type
    }
  end
end
I tried with and without this (restarting the Rails server every time).
Where am I wrong?
I also tried with Postman, with a PUT to that URL and without any Content-Type. But still nothing.
I read here: https://github.com/GoogleCloudPlatform/google-cloud-node/issues/1976 and here: https://github.com/GoogleCloudPlatform/google-cloud-node/issues/1695
How can I try without Rails?
Is there a REPL (or similar) to try with my credentials and with a file?
You have several options for simply uploading a file to Google Cloud Storage, as you can see in the official docs here.
For example, if you want to use the Ruby client library, you can use this code:
# project_id = "Your Google Cloud project ID"
# your-bucket-name = "Your Google Cloud Storage bucket name"
# local_file_path = "Path to local file to upload"
# storage_file_path = "Path to store the file in Google Cloud Storage"
require "google/cloud/storage"

storage = Google::Cloud::Storage.new(project: "your-project-id")
bucket = storage.bucket "your-bucket-name"
file = bucket.create_file "local_file_path", "storage_file_path"

puts "Uploaded #{file.name}"
You will need to set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the file path of the JSON file that contains your service account key, as you can see in the Cloud Storage client libraries docs here.
However, it looks like the GitHub repo you are using makes use of signed URLs. If you need to create a signed URL, you have the instructions here. A signed_urls method for Ruby already exists in the official repo.
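For completeness, a minimal sketch of signing a PUT URL directly with the google-cloud-storage gem, assuming GOOGLE_APPLICATION_CREDENTIALS is set (the bucket name is illustrative, and this is not the shrine-google_cloud_storage internals):

require "google/cloud/storage"

storage = Google::Cloud::Storage.new(project_id: "your-project-id")
bucket = storage.bucket "your-bucket-name"

# The signed content_type must match the Content-Type header the client later
# sends with the PUT; a mismatch yields exactly a SignatureDoesNotMatch error
url = bucket.signed_url "7d5e4aad1e3a737fb8d2c59571fdb980.jpg",
                        method: "PUT",
                        content_type: "image/jpeg",
                        expires: 300 # seconds

puts url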
Anyway, the error you were getting means that something was wrong with how the signed URLs were being generated. And indeed, there was a commit to the repo you were referencing (https://github.com/renchap/shrine-google_cloud_storage) 7 days ago named "Fix presigned uploads". It looks like this commit should fix the PUT upload (before, a GET was being signed, so it wouldn't work). Still, I haven't tried it, so I don't know whether it actually works.

Upload to Amazon S3 with Codeigniter

I am developing an API using CodeIgniter, and I want to let users upload images to my Amazon S3 account. I am using Phil's REST server and Donovan Schönknecht's S3 library (for CI).
It works perfectly for uploading a local file to Amazon, but how can I get the image file sent via a normal external form?
Using the built-in CI upload library works fine, but then I have to store the files locally on my own server, and I want them on S3. Can the two be combined?
I guess what I am asking is: how can I "get" the image file that is sent to the controller, resize it, and then upload it to S3?
Do I perhaps need to temporarily save it on the local server, upload it to S3, and then remove it from the local server?
This is my "upload" model:
// Load the S3 library
$this->load->library('S3');

// Make the upload
if ($this->s3->putObjectFile($args['local'], "siticdev", $args['remote'], S3::ACL_PUBLIC_READ)) {
    // Handle success
    return TRUE;
} else {
    // Handle failure
    return FALSE;
}
Thankful for all input!
If I understand you correctly, you want a user to upload an image via a form, resize that image, and then transfer it to Amazon S3.
You'll have to store the file locally (at least for a few seconds) to resize it with CI. After you resize it, you can transfer it to Amazon S3, and once the transfer succeeds you can delete the image from your server.
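A minimal sketch of that flow, assuming CI's built-in upload and image_lib libraries alongside the same S3 library as in the question (the form field name, bucket, and dimensions are illustrative):

// Receive the form upload into a temporary local directory
$config['upload_path'] = './tmp/';
$config['allowed_types'] = 'jpg|jpeg|png';
$this->load->library('upload', $config);

if ($this->upload->do_upload('image')) {
    $data = $this->upload->data();
    $local = $data['full_path'];

    // Resize in place with CI's image manipulation library (GD2 by default)
    $this->load->library('image_lib', array(
        'source_image' => $local,
        'width' => 800,
        'height' => 600,
        'maintain_ratio' => TRUE,
    ));
    $this->image_lib->resize();

    // Push the resized file to S3, then remove the temporary local copy
    $this->load->library('S3');
    $this->s3->putObjectFile($local, "siticdev", $data['file_name'], S3::ACL_PUBLIC_READ);
    unlink($local);
}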
You should definitely check out the CI S3 library. The "spark" is available here: http://getsparks.org/packages/amazon-s3/versions/HEAD/show
