Carrierwave Gem - Heroku - Fog Gem configuration - Giving name error - ruby-on-rails-3.1

I am a little lost with Heroku and the CarrierWave gem. I have read the wiki and the README and searched the net, and I admit I need help. Everything works locally, but Heroku crashes the application.
///ERROR MESSAGE FROM HEROKU LOGS
2012-01-03T17:33:26+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/carrierwave-0.5.8/lib/carrierwave/uploader/configuration.rb:91:in `eval': uninitialized constant CarrierWave::Storage::Fog (NameError
///GEM FILE
gem "fog"
gem 'carrierwave'
/app/uploaders/avatar_uploader.rb
storage :fog
/config/initializers/carrierwave.rb
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => 'XXXX',
    :aws_secret_access_key => 'XXXX',
    :region                => 'eu-west-1' # optional, defaults to 'us-east-1'
  }
  config.fog_directory  = 'site_images'                              # required
  config.fog_public     = true                                       # optional, defaults to true
  config.fog_attributes = {'Cache-Control' => 'max-age=315576000'}   # optional, defaults to {}
end
When I change the storage to :file instead of :fog, I do not get errors. Are there any other fog settings I am skipping or missing? Any help is greatly appreciated. Do I need to create a separate file with the fog settings?

It might not be the solution to your problem, but it is worth trying to add
config.cache_dir = "#{Rails.root}/tmp/uploads". That will help keep the files around until they are uploaded to your S3 bucket.
If that does not help, can you also post your uploader file?
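To show where that line would go, a minimal sketch, assuming the initializer from the question:

# config/initializers/carrierwave.rb -- sketch based on the config above
CarrierWave.configure do |config|
  # ... fog settings from the question ...
  # Keep cached uploads on local disk until they are pushed to S3.
  # On Heroku only tmp/ is reliably writable, and it is ephemeral,
  # which is fine for a cache directory.
  config.cache_dir = "#{Rails.root}/tmp/uploads"
end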

Related

firebase: ruby authentication with database secret

Re-asking the same question because the console now shows this warning under Database Secrets when getting the "secret" (by which I infer they mean the secret key):
Database secrets are currently deprecated and use a legacy Firebase token generator. Update your source code with the Firebase Admin SDK.
for reference:
thufir@dur:~/ruby/firebase$ ./quickstart.rb
true
200
{"name"=>"-Kxf9rMd9p1F0cb2HTeM"}
thufir@dur:~/ruby/firebase$ cat quickstart.rb
#!/usr/bin/env ruby
require 'rubygems'
require 'firebase'
require 'pp'
require_relative 'config'
config = Config.new
#firebase = Firebase::Client.new(config.database_url)
firebase = Firebase::Client.new(config.database_url,config.database_secret)
response = firebase.push("todos", { :name => 'Pick the milk', :priority => 1 })
pp response.success? # => true
pp response.code # => 200
pp response.body # => { 'name' => "-INOQPH-aV_psbk3ZXEX" }
response.raw_body # => '{"name":"-INOQPH-aV_psbk3ZXEX"}'
thufir@dur:~/ruby/firebase$
Is this approach relatively stable? I copied the information from the Google console GUI:
Add Firebase to your web app
Copy and paste the snippet below at the bottom of your HTML, before other script tags.
into the config file for reference. (I probably should use YAML or similar; it works for now.)
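For context, a hypothetical sketch of the config.rb that quickstart.rb requires; only the Config class name and the two readers used above come from the question, the attribute values are assumed placeholders:

# config.rb -- hypothetical sketch; values are placeholders
class Config
  attr_reader :database_url, :database_secret

  def initialize
    @database_url    = 'https://my-project.firebaseio.com/' # assumed placeholder
    @database_secret = 'XXXX'                               # assumed placeholder
  end
end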

Fog with Carrierwave upload to S3 default upload path invalid

I'm trying to upload to S3 with Carrierwave and Fog-Aws, and I'm having an issue. For some reason, fog is trying to upload to my bucket at
https://{bucket-name}.s3.amazonaws.com
But, when I access a file directly from aws, the url format is like this:
https://s3-{region}.amazonaws.com/{bucket-name}
Whenever I try to use the path that Fog is using, it gives me the following error:
The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
So my question is, is there a way to
A) Change the endpoint format on S3 to match what Fog is expecting it to be, or
B) Change a setting for Fog to use this different format instead?
For reference:
I'm using Carrierwave version 1.0, fog-aws version 0.11.0
Here's my carrierwave.rb file:
if Rails.env.test? or Rails.env.development?
  CarrierWave.configure do |config|
    config.storage   = :file
    config.root      = "#{Rails.root}/tmp"
    config.cache_dir = "#{Rails.root}/tmp/images"
  end
else
  CarrierWave.configure do |config|
    config.fog_provider = 'fog/aws'
    config.fog_credentials = {
      :provider              => 'AWS',
      :aws_access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
      :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY'],
      :region                => ENV['AWS_S3_REGION'],
      :endpoint              => "https://s3-#{ENV['AWS_S3_REGION']}.amazonaws.com/#{ENV['AWS_S3_BUCKET_NAME']}"
    }
    config.storage       = :fog
    config.fog_directory = ENV['AWS_S3_BUCKET_NAME']
    config.fog_public    = false
  end
end
I believe :region is the only setting you should need to change in this case. As long as it is set accurately (and isn't the default us-east-1 region), it should change the host as you desire.
That said, you should NOT also need to set :endpoint like this. It would be needed for CNAME-style addressing, which it doesn't sound like you are using. Omitting it, while setting :region, should get you what you are after.
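A sketch of what the production branch would look like with :endpoint dropped, assuming the same ENV variables as in the question:

# Sketch: the fog branch from the question, with :endpoint omitted so
# fog-aws derives the regional host from :region by itself.
CarrierWave.configure do |config|
  config.fog_provider = 'fog/aws'
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
    :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY'],
    :region                => ENV['AWS_S3_REGION']
  }
  config.storage       = :fog
  config.fog_directory = ENV['AWS_S3_BUCKET_NAME']
  config.fog_public    = false
end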

compass/sass remote themeing via sftp/scp with alternate port

I am trying to get Compass/Sass to watch changes on my local computer and reflect those changes remotely using a custom config.rb script. Net::SFTP works, but my server requires a custom SSH port. I couldn't find any mods to make SFTP work with an alternate port, so I'm trying Net::SCP now. The problem is I don't know the proper command structure to upload using Net::SCP, and I wanted to see if someone can help me. Here is my code:
# Require any additional compass plugins here.
require 'net/ssh'
require 'net/scp'
# SFTP Connection Details - Does not support alternate ports or SSH keys, but could with mods
remote_theme_dir_absolute = '/home2/trinsic/public_html/scottrlarson.com/sites/all/themes/gateway_symbology_zen/css'
sftp_host = 'xxx.xxx.xxx.xxx' # Can be an IP
sftp_user = 'user' # SFTP Username
sftp_pass = 'password' # SFTP Password
# Callback to be used when a file change is written. This will upload to a remote WP install
on_stylesheet_saved do |filename|
  $local_path_to_css_file = css_dir + '/' + File.basename(filename)
  Net::SSH.start( sftp_host, sftp_user, {:password => sftp_pass, :port => 2222} ) do |ssh|
    ssh.scp.upload! $local_path_to_css_file, remote_theme_dir_absolute + '/' + File.basename(filename)
  end
  puts ">>>> Compass is polling for changes. Press Ctrl-C to Stop"
end
#
# This file is only needed for Compass/Sass integration. If you are not using
# Compass, you may safely ignore or delete this file.
#
# If you'd like to learn more about Sass and Compass, see the sass/README.txt
# file for more information.
#
# Change this to :production when ready to deploy the CSS to the live server.
environment = :development
#environment = :production
# In development, we can turn on the FireSass-compatible debug_info.
firesass = false
#firesass = true
# Location of the theme's resources.
css_dir = "css"
sass_dir = "sass"
extensions_dir = "sass-extensions"
images_dir = "images"
javascripts_dir = "js"
# Require any additional compass plugins installed on your system.
#require 'ninesixty'
#require 'zen-grids'
# Assuming this theme is in sites/*/themes/THEMENAME, you can add the partials
# included with a module by uncommenting and modifying one of the lines below:
#add_import_path "../../../default/modules/FOO"
#add_import_path "../../../all/modules/FOO"
#add_import_path "../../../../modules/FOO"
##
## You probably don't need to edit anything below this.
##
# You can select your preferred output style here (can be overridden via the command line):
# output_style = :expanded or :nested or :compact or :compressed
output_style = (environment == :development) ? :expanded : :compressed
# To enable relative paths to assets via compass helper functions. Since Drupal
# themes can be installed in multiple locations, we don't need to worry about
# the absolute path to the theme from the server root.
relative_assets = true
# To disable debugging comments that display the original location of your selectors. Uncomment:
# line_comments = false
# Pass options to sass. For development, we turn on the FireSass-compatible
# debug_info if the firesass config variable above is true.
sass_options = (environment == :development && firesass == true) ? {:debug_info => true} : {}
I get an error when I run the command: compass watch:
NoMethodError on line ["17"] of K: undefined method `upload!' for #<Net::SSH::Connection::Session:0x000000036bb220>
Run with --trace to see the full backtrace
I needed a solution for this too but did not find a satisfying answer anywhere.
After reading the Ruby Net::SSH documentation and some of Compass's source, this is my solution to upload CSS and a sourcemap to a remote SSH server with a non-standard port and forced public-key authorisation:
First make sure you have the required gems installed
sudo gem install net-ssh net-sftp
then add this to your config.rb
# Add this to the first lines of your config.rb
require 'net/ssh'
require 'net/sftp'
...
# Your normal compass config comes here
...
# At the end of your config.rb add the config for the upload code
remote_theme_dir_absolute = '/path/to/my/remote/stylesheets'
sftp_host = 'ssh_host' # Can be an IP
sftp_user = 'ssh_user' # SFTP Username
on_stylesheet_saved do |filename|
  # You can use the ssh-agent for authorisation.
  # In this case you can remove the :passphrase from the config and set :use_agent => true.
  Net::SFTP.start(
    sftp_host,
    sftp_user,
    :port         => 10022,
    :keys_only    => true,
    :keys         => ['/path/to/my/private/id_rsa'],
    :auth_methods => ['publickey'],
    :passphrase   => 'my_secret_passphrase',
    :use_agent    => false,
    :verbose      => :warn
  ) do |sftp|
    puts sftp.upload! css_dir + '/app.css', remote_theme_dir_absolute + '/' + 'app.css'
  end
end
on_sourcemap_saved do |filename|
  # You can use the ssh-agent for authorisation.
  # In this case you can remove the :passphrase from the config and set :use_agent => true.
  Net::SFTP.start(
    sftp_host,
    sftp_user,
    :port         => 10022,
    :keys_only    => true,
    :keys         => ['/path/to/my/private/id_rsa'],
    :auth_methods => ['publickey'],
    :passphrase   => 'my_secret_passphrase',
    :use_agent    => false,
    :verbose      => :warn
  ) do |sftp|
    puts sftp.upload! css_dir + '/app.css.map', remote_theme_dir_absolute + '/' + 'app.css.map'
  end
end
It took quite some trial and error until this worked for me.
Some points of failure were:
If no ssh-agent is available, the connection will fail until you set :use_agent => false explicitly.
If you do not limit the available keys with :keys, all available keys will be tried one after another. If you use the ssh-agent and have more than 3 keys loaded, chances are high that the remote server will close the connection because you tried too many keys that are not valid for the server you are currently connecting to.
On any connection issue, set the verbosity level to :verbose => :debug to see what is going on. Remember to stop compass watch and restart it to ensure configuration changes apply. A consolidated sketch follows below.
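Since on_stylesheet_saved and on_sourcemap_saved both receive the saved file's path, the two callbacks above can share one upload routine. A sketch, assuming the same connection options and variables (sftp_host, sftp_user, remote_theme_dir_absolute) as in the answer; the lambda keeps the helper reachable inside the callback blocks regardless of how Compass evaluates config.rb:

# Sketch: one shared upload helper for both callbacks.
upload_to_theme = lambda do |local_path, remote_dir|
  Net::SFTP.start(
    sftp_host,
    sftp_user,
    :port         => 10022,
    :keys_only    => true,
    :keys         => ['/path/to/my/private/id_rsa'],
    :auth_methods => ['publickey'],
    :passphrase   => 'my_secret_passphrase',
    :use_agent    => false
  ) do |sftp|
    # Upload the file Compass just wrote, keeping its basename.
    sftp.upload! local_path, remote_dir + '/' + File.basename(local_path)
  end
end

on_stylesheet_saved { |filename| upload_to_theme.call(filename, remote_theme_dir_absolute) }
on_sourcemap_saved  { |filename| upload_to_theme.call(filename, remote_theme_dir_absolute) }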

dalli on heroku not caching

I want to enable action caching in my rails app on heroku.
In development.rb I set:
config.action_controller.perform_caching = true
and I see this in the logs:
Started GET "..." for 127.0.0.1 at 2013-05-17 14:03:25 +0400
...
Write fragment ...
or
Read fragment ... (0.2ms)
To move to production, I installed the memcache add-on via $ heroku addons:add memcache, installed a new gem in the Gemfile (gem 'dalli'), and changed settings in production.rb:
config.action_controller.perform_caching = true
config.cache_store = :dalli_store #, ENV['MEMCACHE_SERVERS'], { :namespace => 'myapp', :expires_in => 1.day, :compress => true }
I have also tried enabling those two commented-out parameters, but I still don't see any Read/Write fragment ... lines in the logs. I see that the app gets authenticated, but the cache always misses:
Started GET "..." for 195.178.108.38 at 2013-05-17 09:54:19 +0000
Dalli/SASL authenticating as myapp%40heroku.com
Dalli/SASL: Authenticated
cache: [GET ...] miss
Running $ heroku run console, I check that the cache itself works:
irb(main):001:0> Rails.cache.read('color')
Dalli/SASL authenticating as myapp%40heroku.com
Dalli/SASL: Authenticated
=> nil
irb(main):002:0> Rails.cache.write('color', 'red')
=> true
irb(main):003:0> Rails.cache.read('color')
=> "red"
Why does action caching not work?
Can you try using MemCachier instead?
remove the memcache add-on
add the memcachier add-on
add the "memcachier" gem just above "dalli" in your Gemfile
It should "just work".
See here in the Dev Center: Memcachier
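For reference, a minimal sketch of the resulting setup, assuming the production.rb from the question; the memcachier gem maps the add-on's MEMCACHIER_* environment variables to the names dalli expects:

# Gemfile -- sketch; memcachier must come before dalli
gem 'memcachier'
gem 'dalli'

# config/environments/production.rb -- same settings as in the question
config.action_controller.perform_caching = true
config.cache_store = :dalli_store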

How do I update a batch of S3 objects' metadata using ruby?

I need to change some metadata (Content-Type) on hundreds or thousands of objects on S3. What's a good way to do this with Ruby? As far as I can tell, there is no way to save only metadata with fog.io; the entire object must be re-saved. It seems like using the official SDK would require rolling my own wrapper just for this one task.
You're right; the official SDK lets you modify the object metadata without uploading it again. What it does is copy the object, but that happens server-side, so you don't need to download the file and re-upload it.
A wrapper would be easy to implement, something like:
bucket.objects.each do |object|
  object.metadata['content-type'] = 'application/json'
end
In the v2 API, you can use Object#copy_from or Object#copy_to with the :metadata and :metadata_directive => 'REPLACE' options to update an object's metadata without downloading it from S3.
The code in Joost's gist throws this error:
Aws::S3::Errors::InvalidRequest: This copy request is illegal because
it is trying to copy an object to itself without changing the object's
metadata, storage class, website redirect location or encryption
attributes.
This is because, by default, S3 copies the existing metadata and ignores the :metadata supplied with a copy operation. We must set the :metadata_directive => 'REPLACE' option if we want to update the metadata in place.
See http://docs.aws.amazon.com/sdkforruby/api/Aws/S3/Object.html#copy_from-instance_method
Here's a full, working code snippet that I recently used to perform metadata update operations:
require 'aws-sdk'

# S3 setup boilerplate
client = Aws::S3::Client.new(
  :region            => 'us-east-1',
  :access_key_id     => ENV['AWS_ACCESS_KEY'],
  :secret_access_key => ENV['AWS_SECRET_KEY'],
)
s3 = Aws::S3::Resource.new(:client => client)

# Get an object reference
object = s3.bucket('my-bucket-name').object('my-object/key')

# Create our new metadata hash. This can be any hash; in this example we update
# existing metadata with a new key-value pair.
new_metadata = object.metadata.merge('MY_NEW_KEY' => 'MY_NEW_VALUE')

# Use the copy operation to replace our metadata
object.copy_to(object,
  :metadata => new_metadata,
  # IMPORTANT: normally S3 copies the metadata along with the object.
  # We must supply this directive to replace the existing metadata with
  # the values we supply.
  :metadata_directive => "REPLACE",
)
For easy re-use:
def update_metadata(s3_object, new_metadata = {})
  s3_object.copy_to(s3_object,
    :metadata           => new_metadata,
    :metadata_directive => "REPLACE"
  )
end
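A usage sketch, reusing the object reference from the full snippet above (the key-value pair is just an illustration):

# Merge one illustrative key into the existing metadata, then rewrite it.
update_metadata(object, object.metadata.merge('reviewed' => 'true'))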
For future readers, here's a complete sample of changing object attributes using the Ruby aws-sdk v1 (also see this Gist for an aws-sdk v2 sample):
# Using v1 of Ruby aws-sdk as currently v2 seems not able to do this (broken?).
require 'aws-sdk-v1'
key = YOUR_AWS_KEY
secret = YOUR_AWS_SECRET
region = YOUR_AWS_REGION
AWS.config(access_key_id: key, secret_access_key: secret, region: region)
s3 = AWS::S3.new
bucket = s3.buckets[bucket_name]
bucket.objects.with_prefix('images/').each do |obj|
  puts obj.key
  # Add metadata: {} to the next line for more metadata.
  obj.copy_from(obj.key, content_type: obj.content_type, cache_control: 'max-age=1576800000', acl: :public_read)
end
After some searching, this seems to work for me:
obj.copy_to(obj, :metadata_directive=>"REPLACE", :acl=>"public-read",:content_type=>"text/plain")
Using the SDK to change the content type will result in an x-amz-meta- prefix. My solution was to use Ruby + the AWS CLI. This writes directly to Content-Type instead of x-amz-meta-content-type.
ids_to_copy = all_object_ids
ids_to_copy.each do |id|
  object_key = "#{id}.pdf"
  command = "aws s3 cp s3://{bucket-name}/#{object_key} s3://{bucket-name}/#{object_key} --no-guess-mime-type --content-type='application/pdf' --metadata-directive='REPLACE'"
  system(command)
end
This API appears to be available now:
Fog::Storage.new({
  :provider              => 'AWS',
  :aws_access_key_id     => 'foo',
  :aws_secret_access_key => 'bar',
  :endpoint              => 'https://s3.amazonaws.com/',
  :path_style            => true
}).put_object_tagging(
  'bucket_name',
  's3_key',
  {foo: 'bar'}
)
Note that put_object_tagging sets S3 object tags, which are stored separately from object metadata such as Content-Type.
