Fog with Carrierwave upload to S3 default upload path invalid - ruby

I'm trying to upload to S3 with Carrierwave and Fog-Aws, and I'm having an issue. For some reason, fog is trying to upload to my bucket at
https://{bucket-name}.s3.amazonaws.com
But, when I access a file directly from aws, the url format is like this:
https://s3-{region}.amazonaws.com/{bucket-name}
Whenever I try to use the path that Fog is using, it gives me the following error:
The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
So my question is, is there a way to
A) Change the endpoint format on S3 to match what Fog is expecting it to be, or
B) Change a setting for Fog to use this different format instead?
For reference:
I'm using Carrierwave version 1.0 and fog-aws version 0.11.0.
Here's my carrierwave.rb file:
if Rails.env.test? or Rails.env.development?
  CarrierWave.configure do |config|
    config.storage   = :file
    config.root      = "#{Rails.root}/tmp"
    config.cache_dir = "#{Rails.root}/tmp/images"
  end
else
  CarrierWave.configure do |config|
    config.fog_provider = 'fog/aws'
    config.fog_credentials = {
      :provider              => 'AWS',
      :aws_access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
      :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY'],
      :region                => ENV['AWS_S3_REGION'],
      :endpoint              => "https://s3-#{ENV['AWS_S3_REGION']}.amazonaws.com/#{ENV['AWS_S3_BUCKET_NAME']}"
    }
    config.storage       = :fog
    config.fog_directory = ENV['AWS_S3_BUCKET_NAME']
    config.fog_public    = false
  end
end

I believe :region is the only setting you should need to change in this case. As long as it is set accurately (and isn't the default us-east-1 region), it should change the host as you desire.
That said, I would NOT expect to also need to set :endpoint like this. It would only be set if you needed CNAME-style custom endpoints, which it doesn't sound like you need. Omitting it, while setting :region, should hopefully get you what you are after; a sketch of that configuration follows.
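For illustration, a minimal sketch of the production branch of the initializer with :endpoint omitted (assuming the same ENV variable names used in the question):
CarrierWave.configure do |config|
  config.fog_provider = 'fog/aws'
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
    :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY'],
    # fog-aws derives the regional host (e.g. s3-eu-west-1.amazonaws.com) from :region,
    # so no :endpoint is needed here
    :region                => ENV['AWS_S3_REGION']
  }
  config.storage       = :fog
  config.fog_directory = ENV['AWS_S3_BUCKET_NAME']
  config.fog_public    = false
end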

Related

Customize Instagram widget on a Dashing.io dashboard

I have set up a dashboard using Dashing with a number of (mostly) existing widgets. That has worked so far - see the production dashboard here (work in progress).
Now I would like to have an Instagram widget that displays the n latest images taken by a given username.
I have found a widget that displays images by longitude and latitude, and I was also able to get my tokens configured, so I can talk to the Instagram API.
Here's the code of my current widget, originally from #mjamieson's gist on GitHub.
require 'instagram'
require 'rest-client'
require 'json'

# Instagram Client ID from http://instagram.com/developer
Instagram.configure do |config|
  config.client_id     = ENV['INSTAGRAM_CLIENT_ID']
  config.client_secret = ENV['INSTAGRAM_CLIENT_SECRET']
end

# Latitude, Longitude for location
instadash_location_lat  = '45.429522'
instadash_location_long = '-75.689613'

SCHEDULER.every '10m', :first_in => 0 do |job|
  photos = Instagram.media_search(instadash_location_lat, instadash_location_long)
  if photos
    photos.map do |photo|
      { photo: "#{photo.images.low_resolution.url}" }
    end
  end
  send_event('instadash', photos: photos)
end
I got this to work, but would like to modify the given API call to only display images taken by me / a user of my choice. Unfortunately I don't understand Ruby or JSON well enough to figure out what the Instagram API documentation wants me to do.
I found the following URL
https://api.instagram.com/v1/users/{user-id}/media/recent/?access_token={access-token}
and tried it (with my credentials filled in). It returned JSON data correctly, including my images (among other data).
How can I modify the given code to display images by username instead of location?
Any help is greatly appreciated.
You'll need an access_token to get content from a given user. Take a look at the sample application on the gem's page.
It seems you need something like this:
# here we take access token from session, assuming you already got it
# sometime before and stored it there for future use
client = Instagram.client(:access_token => session[:access_token])
photos = client.user_recent_media
And here is an example of how to get this access_token using OAuth2 browser authorization and a Sinatra app:
require "sinatra"
require "instagram"
enable :sessions
CALLBACK_URL = "http://localhost:4567/oauth/callback"
Instagram.configure do |config|
config.client_id = "YOUR_CLIENT_ID"
config.client_secret = "YOUR_CLIENT_SECRET"
# For secured endpoints only
#config.client_ips = '<Comma separated list of IPs>'
end
get "/" do
'Connect with Instagram'
end
get "/oauth/connect" do
redirect Instagram.authorize_url(:redirect_uri => CALLBACK_URL)
end
get "/oauth/callback" do
response = Instagram.get_access_token(params[:code], :redirect_uri => CALLBACK_URL)
session[:access_token] = response.access_token
redirect "/nav"
end
Solution
require 'sinatra'
require 'instagram'

# Instagram Client ID from http://instagram.com/developer
Instagram.configure do |config|
  config.client_id     = ENV['INSTAGRAM_CLIENT_ID']
  config.client_secret = ENV['INSTAGRAM_CLIENT_SECRET']
  config.access_token  = ENV['INSTAGRAM_ACCESS_TOKEN']
end

user_id = ENV['INSTAGRAM_USER_ID']

SCHEDULER.every '2m', :first_in => 0 do |job|
  photos = Instagram.user_recent_media("#{user_id}")
  if photos
    photos.map! do |photo|
      { photo: "#{photo.images.low_resolution.url}" }
    end
  end
  send_event('instadash', photos: photos)
end
Explanation
1.) In addition to the client_id and client_secret I had defined before, I just needed to add my access_token to the Instagram.configure section.
2.) The SCHEDULER was correctly working, but needed to call Instagram.user_recent_media("#{user_id}") instead of Instagram.media_search(instadash_location_lat,instadash_location_long)
3.) To do that I had to set a second missing variable for user_id
Now the call gets recent media filtered by user ID and outputs it into the dashing widget.
Thanks for the participation and hints! They pointed me in the right direction in the documentation and helped me figure it out myself.

accessing non standard s3 bucket

Using the aws-s3 gem, I can successfully perform transactions with a standard S3 bucket, but one made in Ireland (s3-eu-west-1) gives the error "The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint." After 2 hours of searching this still means nothing to me; is there a way to get around this problem?
This simple tutorial works fine for a standard S3 bucket but not for Ireland.
This person's experiences seem to suggest it's not possible.
Ok I've just found the answer here.
require 'aws/s3'

AWS::S3::Base.establish_connection!(
  :access_key_id     => ACCESS_KEY_ID,
  :secret_access_key => SECRET_ACCESS_KEY
)

AWS::S3::DEFAULT_HOST.replace('s3-eu-west-1.amazonaws.com') # <= the crucial hacky line

AWS::S3::S3Object.store(
  file_name,
  temp_file,
  bucket,
  :content_type => mime_type
)
Edit
A much better option is to use the aws-sdk gem, whose API seems a lot nicer, e.g.:
require 'aws-sdk'

s3 = AWS::S3.new(
  :access_key_id     => ACCESS_KEY_ID,
  :secret_access_key => SECRET_ACCESS_KEY,
  :s3_endpoint       => 's3-eu-west-1.amazonaws.com'
)

bucket = s3.buckets[bucket_name]
bucket.objects.create(
  file_name,
  temp_file,
  :content_type => mime_type
)

Verifying permissions on S3 object in Ruby

I'm using aws-sdk for Ruby to manage objects on S3. I'm able to grant public read permission by setting
object.acl = :public_read
Is there a way to determine if there is already public read permission granted to the object before doing that?
The Ruby aws-sdk has poor documentation, and I wasn't able to locate this either. Below is a function I created to check whether a file has public read permission or not. Modify it as per your needs:
def check_if_public_read(object)
  object.acl.grants.each do |grant|
    begin
      if grant.grantee.uri == "http://acs.amazonaws.com/groups/global/AllUsers"
        return true if [:read, :full_control].include?(grant.permission.name)
      end
    rescue
    end
  end
  return false
end
where object is any S3 Object:
AWS.config(
  :access_key_id     => "access key",
  :secret_access_key => "secret key"
)

s3 = AWS::S3.new
file = s3.buckets["my_bucket"].objects["path/to/file.png"]
check_if_public_read(file) # => true
Please note that I figured this out by looking at the objects and the aws-sdk source code, so the uri parameter may change over time. This works now, as of aws-sdk gem version 1.3.5.

How do I update a batch of S3 objects' metadata using ruby?

I need to change some metadata (Content-Type) on hundreds or thousands of objects on S3. What's a good way to do this with Ruby? As far as I can tell, there is no way to save only metadata with fog.io; the entire object must be re-saved. It seems like using the official SDK library would require me to roll a wrapper environment just for this one task.
You're right; the official SDK lets you modify the object metadata without uploading it again. What it does is copy the object, but that happens on the server, so you don't need to download the file and re-upload it.
A wrapper would be easy to implement, something like
bucket.objects.each do |object|
  object.metadata['content-type'] = 'application/json'
end
In the v2 API, you can use Object#copy_from() or Object#copy_to() with the :metadata and :metadata_directive => 'REPLACE' options to update an object's metadata without downloading it from S3.
The code in Joost's gist throws this error:
Aws::S3::Errors::InvalidRequest: This copy request is illegal because
it is trying to copy an object to itself without changing the object's
metadata, storage class, website redirect location or encryption
attributes.
This is because, by default, AWS ignores the :metadata supplied with a copy operation and instead copies the existing metadata along with the object. We must set the :metadata_directive => 'REPLACE' option if we want to update the metadata in place.
See http://docs.aws.amazon.com/sdkforruby/api/Aws/S3/Object.html#copy_from-instance_method
Here's a full, working code snippet that I recently used to perform metadata update operations:
require 'aws-sdk'

# S3 setup boilerplate
client = Aws::S3::Client.new(
  :region            => 'us-east-1',
  :access_key_id     => ENV['AWS_ACCESS_KEY'],
  :secret_access_key => ENV['AWS_SECRET_KEY'],
)
s3 = Aws::S3::Resource.new(:client => client)

# Get an object reference
object = s3.bucket('my-bucket-name').object('my-object/key')

# Create our new metadata hash. This can be any hash; in this example we update
# existing metadata with a new key-value pair.
new_metadata = object.metadata.merge('MY_NEW_KEY' => 'MY_NEW_VALUE')

# Use the copy operation to replace our metadata
object.copy_to(object,
  :metadata => new_metadata,
  # IMPORTANT: normally S3 copies the metadata along with the object.
  # We must supply this directive to replace the existing metadata with
  # the values we supply.
  :metadata_directive => "REPLACE",
)
For easy re-use:
def update_metadata(s3_object, new_metadata = {})
  s3_object.copy_to(s3_object,
    :metadata           => new_metadata,
    :metadata_directive => "REPLACE"
  )
end
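For example, reusing the object reference from the full snippet above, a call might look like this (the key and value are placeholders):
# hypothetical usage of the helper above; merge keeps the existing metadata
update_metadata(object, object.metadata.merge('MY_NEW_KEY' => 'MY_NEW_VALUE'))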
For future readers, here's a complete sample of changing things using the Ruby aws-sdk v1 (also see this Gist for an aws-sdk v2 sample):
# Using v1 of the Ruby aws-sdk, as currently v2 seems not able to do this (broken?).
require 'aws-sdk-v1'

key    = YOUR_AWS_KEY
secret = YOUR_AWS_SECRET
region = YOUR_AWS_REGION

AWS.config(access_key_id: key, secret_access_key: secret, region: region)
s3 = AWS::S3.new
bucket = s3.buckets[bucket_name]

bucket.objects.with_prefix('images/').each do |obj|
  puts obj.key
  # Add metadata: {} to the next line for more metadata.
  obj.copy_from(obj.key, content_type: obj.content_type, cache_control: 'max-age=1576800000', acl: :public_read)
end
After some searching, this seems to work for me:
obj.copy_to(obj, :metadata_directive => "REPLACE", :acl => "public-read", :content_type => "text/plain")
Using the SDK to change the content type will result in an x-amz-meta- prefix. My solution was to use Ruby + the AWS CLI. This writes directly to content-type instead of x-amz-meta-content-type.
ids_to_copy = all_object_ids
ids_to_copy.each do |id|
  object_key = "#{id}.pdf"
  command = "aws s3 cp s3://{bucket-name}/#{object_key} s3://{bucket-name}/#{object_key} --no-guess-mime-type --content-type='application/pdf' --metadata-directive='REPLACE'"
  system(command)
end
This API appears to be available now:
Fog::Storage.new({
  :provider              => 'AWS',
  :aws_access_key_id     => 'foo',
  :aws_secret_access_key => 'bar',
  :endpoint              => 'https://s3.amazonaws.com/',
  :path_style            => true
}).put_object_tagging(
  'bucket_name',
  's3_key',
  {foo: 'bar'}
)

Carrierwave Gem - Heroku - Fog Gem configuration - Giving name error

I am a little lost with Heroku and the Carrierwave gem. I have read the wiki and the readme and searched the net, and I admit I need help. Everything works locally, but Heroku crashes the application.
///ERROR MESSAGE FROM HEROKU LOGS
2012-01-03T17:33:26+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/carrierwave-0.5.8/lib/carrierwave/uploader/configuration.rb:91:in `eval': uninitialized constant CarrierWave::Storage::Fog (NameError
///GEM FILE
gem "fog"
gem 'carrierwave'
/app/uploaders/avatar_uploader.rb
storage :fog
/config/initializers/carrierwave.rb
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => 'XXXX',
    :aws_secret_access_key => 'XXXX',
    :region                => 'eu-west-1' # optional, defaults to 'us-east-1'
  }
  config.fog_directory  = 'site_images'  # required
  config.fog_public     = true           # optional, defaults to true
  config.fog_attributes = {'Cache-Control' => 'max-age=315576000'} # optional, defaults to {}
end
When I change the storage to file instead of fog, I do not get errors. Are there any other fog settings I am skipping or missing? Any help is greatly appreciated. Do I need to create a separate document with fog settings?
It might not be the solution to your problem, but it is worth trying to add config.cache_dir = "#{Rails.root}/tmp/uploads" to your configuration. That will help keep the files around until they are uploaded to your S3 bucket; a sketch of the initializer with that line added is below.
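For illustration only, a minimal sketch of the initializer from the question with the suggested cache_dir line added (the credentials and bucket name are the placeholders from the question):
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => 'XXXX',
    :aws_secret_access_key => 'XXXX',
    :region                => 'eu-west-1'
  }
  config.fog_directory  = 'site_images'
  config.fog_public     = true
  config.fog_attributes = {'Cache-Control' => 'max-age=315576000'}
  # suggested addition: keep cached uploads under tmp until they reach S3
  config.cache_dir = "#{Rails.root}/tmp/uploads"
end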
If that does not help can you also post your uploader file?
