Error "SignatureDoesNotMatch". Google Cloud Storage Bucket PUT - ruby

I'm losing my mind.
I'm using Shrine (https://github.com/janko-m/shrine) with Google Cloud Storage (https://github.com/renchap/shrine-google_cloud_storage), but when I make the PUT call I get this:
<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>
The request signature we calculated does not match the signature you provided. Check your Google secret key and signing method.
</Message>
<StringToSign>
PUT
image/jpeg
1518399402
/mybucket.appspot.com/7d5e4aad1e3a737fb8d2c59571fdb980.jpg
</StringToSign>
</Error>
I followed this info (http://shrinerb.com/rdoc/classes/Shrine/Plugins/PresignEndpoint.html) for presign_endpoint, but still nothing:
class FileUploader < Shrine
  plugin :presign_endpoint, presign_options: -> (request) do
    filename     = request.params["filename"]
    extension    = File.extname(filename)
    content_type = Rack::Mime.mime_type(extension)

    {
      content_type: content_type
    }
  end
end
I tried with and without this (restarting the Rails server every time).
Where am I going wrong?
I also tried a PUT to that URL with Postman, without any Content-Type, but still nothing.
I read here: https://github.com/GoogleCloudPlatform/google-cloud-node/issues/1976 and here: https://github.com/GoogleCloudPlatform/google-cloud-node/issues/1695
How can I try without Rails?
Is there a REPL (or similar) to try with my credentials and with a file?

You have several options for simply uploading a file to Google Cloud Storage, as you can see in the official docs here.
For example, if you want to use the Ruby client library, you can use this code:
# project_id        = "Your Google Cloud project ID"
# bucket_name       = "Your Google Cloud Storage bucket name"
# local_file_path   = "Path to local file to upload"
# storage_file_path = "Path to store the file in Google Cloud Storage"
require "google/cloud/storage"

storage = Google::Cloud::Storage.new(project: project_id)
bucket  = storage.bucket bucket_name
file    = bucket.create_file local_file_path, storage_file_path

puts "Uploaded #{file.name}"
You will need to set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the file path of the JSON file that contains your service account key, as you can see in the Cloud Storage client libraries docs here.
However, it looks like the GitHub repo you are using relies on signed URLs. If you need to create a signed URL, the instructions are here, and the signed_urls method for Ruby already exists in the official repo.
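If you want to try this outside of Rails (you asked about a REPL), something like the following should work from an irb console with the google-cloud-storage gem. This is only a sketch: the project id and keyfile path are placeholders, the bucket and object names are just the ones from your error message, and the exact constructor keyword names vary slightly between gem versions.
require "google/cloud/storage"

storage = Google::Cloud::Storage.new(
  project: "your-project-id",               # placeholder
  keyfile: "/path/to/service_account.json"  # placeholder
)

bucket = storage.bucket "mybucket.appspot.com"
file   = bucket.file "7d5e4aad1e3a737fb8d2c59571fdb980.jpg", skip_lookup: true

# The content_type signed here must match the Content-Type header sent with
# the actual PUT request, otherwise GCS responds with SignatureDoesNotMatch.
url = file.signed_url method: "PUT", content_type: "image/jpeg", expires: 300
puts url
You can then PUT a file to that URL (with curl or Postman, for example), as long as the Content-Type header of the request matches the one you signed.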
Anyway, the error you were getting was caused by something going wrong in how the signed URLs were generated. In fact, there was a commit to the repo you were referencing (https://github.com/renchap/shrine-google_cloud_storage) 7 days ago named "Fix presigned uploads". It looks like this commit should fix the "PUT" upload (before, a "GET" was being signed, so it couldn't work). I haven't tried it myself, so I don't know whether it actually works.

Related

Serve static files in Flask from private AWS S3 bucket

I am developing a Flask app running on Heroku that allows users to upload images. The app has a page displaying the user's images in a table.
For development purposes, I am saving the uploaded files to Heroku's ephemeral file system, and everything works fine: the images are correctly loaded and displayed (I am using the last method shown here, which relies on send_from_directory()). Now I have moved the storage to S3 and I am trying to adapt the code. I use boto3 to upload the files to the bucket, and that works fine. My doubts are about the download needed to populate the users' pages with their images.
As explained here, I could set each file to "public-read" and use its URL (I think this is what Flask-S3 does), but I'd rather not leave the files freely accessible. So my attempted solution is to download the file to Heroku's filesystem and serve the image again with send_from_directory(), as follows:
app.py
@app.route('/download/<resource>')
def download_image(resource):
    """resource: name of the file to download"""
    s3 = boto3.client('s3',
                      aws_access_key_id=current_app.config['S3_ACCESS_KEY'],
                      aws_secret_access_key=current_app.config['S3_SECRET_KEY'])
    s3.download_file(current_app.config['S3_BUCKET_NAME'],
                     resource,
                     os.path.join('tmp', resource))
    return send_from_directory('tmp',  # Heroku's filesystem
                               resource,
                               as_attachment=False)
Then, in the template I generate the URL for the image as follows:
...
<img src="{{ url_for('app.download_image',
                     resource=resource) }}" height="120" width="120">
...
It works, but I don't think this is the proper way, for several reasons: among them, I would have to manage Heroku's filesystem to avoid using up all the space between dyno restarts (i.e. delete the images from the filesystem after serving them).
What is the best/preferred way, also considering performance?
Thanks a lot
The preferred way is to simply create a pre-signed URL for the image and return a redirect to that URL. This keeps the files private in S3, but generates a temporary, time-limited URL that can be used to download the file directly from S3. That will greatly reduce the amount of work happening on your server, as well as the amount of data transfer being consumed by your server. Something like this:
@app.route('/download/<resource>')
def download_image(resource):
    """resource: name of the file to download"""
    s3 = boto3.client('s3',
                      aws_access_key_id=current_app.config['S3_ACCESS_KEY'],
                      aws_secret_access_key=current_app.config['S3_SECRET_KEY'])
    url = s3.generate_presigned_url('get_object',
                                    Params={'Bucket': current_app.config['S3_BUCKET_NAME'],
                                            'Key': resource},
                                    ExpiresIn=100)
    return redirect(url, code=302)
If you don't like that solution, you should at least look into streaming the file contents from S3 instead of writing it to the file system.

Change s3 file access with aws-sdk -> 2 and ruby

I'm trying to migrate my project to aws-sdk 2, so I need to use the AWS SDK for Ruby - Version 2 for this.
I have found all the methods I need, but I can't change the access of a file (make it public).
In the previous version I used this:
bucket.objects[file_path].acl = :public_read
But I can't find the method for doing this with the new API version.
This is the link to the old API documentation
This is the link to the new API documentation
I presume here that you want to change the object ACL after it has been uploaded to S3. If you can, consider setting the ACL when the object is sent to S3 rather than afterwards; a sketch of that is below.
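For example, a minimal sketch using the aws-sdk v2 Resource API (region, bucket name, key and file path are placeholders):
require "aws-sdk"

s3  = Aws::S3::Resource.new(region: "eu-west-1")                # placeholder region
obj = s3.bucket("my-bucket").object("path/in/bucket/file.txt")  # placeholders

# upload_file forwards its options to put_object, so the canned ACL
# can be applied in the same call as the upload itself
obj.upload_file("/local/path/file.txt", acl: "public-read")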
If you do need to change the ACL after the upload, there are two ways to do it. Both are similar and perform the same action; pick the one you like best or are more comfortable with.
Using the Client API
client = Aws::S3::Client.new(region: myregion)
resp = client.put_object_acl({ acl: "public-read", bucket: mybucket, key: mykey })
Documentation:
http://docs.aws.amazon.com/sdkforruby/api/Aws/S3/Client.html#put_object_acl-instance_method
The Resource API
s3 = Aws::S3::Resource.new(region: myregion)
bucket = s3.bucket(mybucket)
object = bucket.object(mykey)
resp = object.acl.put({ acl: "public-read" })
Documentation:
http://docs.aws.amazon.com/sdkforruby/api/Aws/S3/ObjectAcl.html#put-instance_method
Bonus
If absolutely all the objects inside your bucket need to be public, you can set a default on your whole bucket so that any object uploaded will automatically be public without you having to specify it. You do that by attaching a bucket policy to your bucket.
Make a bucket public in Amazon S3
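For reference, a minimal sketch of attaching such a policy with the aws-sdk v2 client (the region and bucket name are placeholders; the policy is the standard public-read statement):
require "aws-sdk"
require "json"

policy = {
  "Version"   => "2012-10-17",
  "Statement" => [{
    "Sid"       => "PublicReadGetObject",
    "Effect"    => "Allow",
    "Principal" => "*",
    "Action"    => "s3:GetObject",
    "Resource"  => "arn:aws:s3:::my-bucket/*"   # placeholder bucket name
  }]
}.to_json

client = Aws::S3::Client.new(region: "eu-west-1")              # placeholder region
client.put_bucket_policy(bucket: "my-bucket", policy: policy)  # placeholder bucket name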

How can I generate a signed URL for a file in cloud storage that uses the static site URL?

Say I have a domain, media.coolsite.com, and a bucket in Google Cloud Storage with the same name. I also have the CNAME set up properly, so that when I visit media.coolsite.com in the browser, it serves index.html from the bucket.
The bucket also contains a file 01.mp3 that I am trying to get a signed URL for. I can do this with the following code:
gcloud = Gcloud.new
storage = gcloud.storage
bucket = storage.bucket 'media.coolsite.com'
file = bucket.file '01.mp3'
signed_url = file.signed_url
This returns a URL like so:
https://storage.googleapis.com/media.coolsite.com/01.mp3?GoogleAccessId=...&Expires=...etc...
But what I would like is a URL like this:
http://media.coolsite.com/01.mp3?GoogleAccessId=...&Expires=...etc...
Note the different host: media.coolsite.com instead of storage.googleapis.com.
How can I do this using the gcloud gem?

SignatureDoesNotMatch using ruby aws-sdk gem and presigned post

I need some help getting the new ruby AWS S3 SDK 2.0 working. I am using the region eu-central-1, so it is required to use the latest signature method (v4) for all requests.
I want to create a presigned post URL to use in combination with jquery-fileupload. I have set up S3 correctly with all access keys, bucket, CORS configuration, etc. But every time I generate a URL with the following code
@signer = Aws::S3::Presigner.new
@url = @signer.presigned_url(:put_object, bucket: ENV['S3_BUCKET_NAME'], key: "documents/#{SecureRandom.uuid}/${filename}", acl: :public_read)
which creates the following URL
https://project-xxxxx-staging.s3.eu-central-1.amazonaws.com/documents/a253feb0-4c60-4735-8d95-4649c0d3dcb5/%24%7Bfilename%7D?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAJVY57LY6XGIRIRHQ%2F20150205%2Feu-central-1%2Fs3%2Faws4_request&X-Amz-Date=20150205T140425Z&X-Amz-Expires=900&X-Amz-SignedHeaders=host&x-amz-acl=public_read&X-Amz-Signature=ff2fbe233ed7380dc745aa7ba37421d7d8703db0d67208541e500367262a8c51
I get the following error
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
Does anybody know how to fix this or any hints on the underlying problem?
The environment I am using:
ruby '2.2.0'
gem 'rails', '4.2.0'
gem 'aws-sdk', '~> 2.0.21.pre'
A pre-signed URL contains a signature computed from the headers and query parameters that will be sent. What is likely happening is that the jquery uploader is adding a header that Amazon S3 requires to be signed (such as Content-Type). You would need to pre-specify this content type when building the presigned URL:
signer = Aws::S3::Presigner.new
signer.presigned_url(:put_object, bucket:'name', key:'key', acl:'public-read', content_type:'...')
Also, the correct canned ACL is "public-read", not :public_read. This could also possibly be causing an issue.
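For completeness, here is a hedged sketch of what the matching upload then has to look like, using Ruby's Net::HTTP instead of jquery-fileupload (the file path and content type are placeholders): the Content-Type header of the actual PUT must be exactly the content_type that was presigned.
require "net/http"
require "uri"

uri = URI(url)  # `url` is the presigned URL generated above

request = Net::HTTP::Put.new(uri)
request["Content-Type"] = "application/pdf"       # must equal the presigned content_type
request.body = File.binread("/path/to/file.pdf")  # placeholder path

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  http.request(request)
end
puts response.code  # 200 means the signature (and the signed headers) matched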

Why are my S3 images not valid for the Facebook JavaScript SDK?

I'm running into an error with the Facebook SDK which appears to be related to the permissions on my S3 bucket. I'm using Ruby on Rails with the Paperclip gem and Amazon S3 for storage.
Right now I have the dialog set up like so:
FB.ui({
  method: 'feed',
  name: "Check out this project on WorkHands",
  picture: "https://workhands_images.s3.amazonaws.com/images/avatars/1100/original/2013-08-05_04_13_28__0000.jpeg?1376351034",
  link: link.attr('href'),
  caption: 'Work by',
  description: "hello",
  display: 'popup',
  redirect_ui: window.location.origin
});
The reason why I think it has something to do with S3 is that I can pass in an image URL from another source not on S3 (even from Google Images) and the dialog works perfectly fine.
My understanding is that Paperclip sets the ACL of each object to public_read by default: https://github.com/thoughtbot/paperclip/blob/master/lib/paperclip/storage/s3.rb
I have tried setting a bucket policy similar to the example here: http://ariejan.net/2010/12/24/public-readable-amazon-s3-bucket-policy/
But that didn't seem to fix anything.
For the image above, when I call s3object.acl.grants.inspect, I get XML like this:
[<Grant><Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser"><ID>0e77d1de2a82b95d7b735e0071296ef5f903fa17ba0b98ecfe5ab2d36a8f17d0</ID><DisplayName>cush4437</DisplayName></Grantee><Permission>FULL_CONTROL</Permission></Grant>,
 <Grant><Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group"><URI>http://acs.amazonaws.com/groups/global/AllUsers</URI></Grantee><Permission>READ</Permission></Grant>]
I think it's the numbers after the '?' in your url. Facebook is (probably?) being strict about formatting URL queries in the "k=v" format, and since there is no '=' it is unhappy.
Drop the 's' from 'https'. Facebook won't always reliably fetch them.
It turns out that Facebook throws this error because the source URL has two subdomains. See https://stackoverflow.com/a/7320178/1296645
mybucket.s3.amazonaws.com - throws an error
s3.amazonaws.com/mybucket - works fine
