Bad Request/Bad URI on production (Heroku) but not locally - ruby

Let me preface this by saying that most of this application is a giant hack put together in a short window of time under pressure, so I may have deeper issues. This question will likely have some bad code in it.
I've built a Sinatra application to handle some tasks with purchase and sales orders in-house. Part of that is to send a few parameters to one of the routes in the app which will then push those off to an API that does useful things with them.
Right now, I'm generating the links with parameters from within a pretty ugly loop in HAML:
%td
  - opts = JSON.generate({ "key" => d[d.keys.first]["key"], "sa_id" => d.keys.first, "site" => d[d.keys.first]["site"], "name" => d[d.keys.first]["name"], "recipient" => d[d.keys.first]["email"], "items" => d[d.keys.first]["descriptions"], "date" => d[d.keys.first]["ship_date"]})
  - if (d[d.keys.first]["email"]) && (d[d.keys.first]["site"] != "")
    %a{:href => "/notify?options=#{opts}", :title => "Deliver"} Deliver
  - else
    Deliver
  %a{:href => "/destroy?key=#{d.keys.first}", :title => "Destroy"} Destroy
When clicking the "Deliver" link (%a{:href => "/notify?options=#{opts}", :title => "Deliver"} Deliver) locally, everything behaves as expected: my /notify route is called, it hands the parameters off to the desired API, and everything is rainbows and unicorns. When I click that same link on Heroku, it throws a "Bad Request" stating "Bad URI". The only difference between the two generated URLs is the hostname (localhost:3000 vs. myapp.herokuapp.com), and vimdiff confirms this.
All else being equal, why would Heroku (using WEBrick) kick back my URI when my local instance (Thin) doesn't seem to care?

Switching from WEBrick to Thin on Heroku resolved this issue, though I'm not sure of the specifics as to why.
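One likely factor: the href embeds raw JSON, so the query string contains unescaped characters such as braces, quotes, and spaces, which WEBrick rejects as a bad URI while Thin lets through. A minimal sketch of escaping the options before interpolating them, assuming the same d structure as the HAML above (only two keys shown):

  - opts = Rack::Utils.escape(JSON.generate({ "key" => d[d.keys.first]["key"], "sa_id" => d.keys.first }))
  %a{:href => "/notify?options=#{opts}", :title => "Deliver"} Deliver

Rack unescapes query parameters on the way in, so a /notify route that does something like JSON.parse(params[:options]) should keep working unchanged.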

We've had some issues where development and production environments that were seemingly identical behaved differently once deployed on Heroku. A quick sanity check I like to do is to diff the local Gemfile.lock against the one on Heroku after the deploy completes. I know we should lock our gem versions down more tightly, and we will, so don't consider this a best practice, but it has unearthed some hard-to-track-down issues. Some of those were related to the fact that we have some devs on Windows and some on Mac, and the Gemfile.lock often contained Windows-specific gems, which causes issues on Heroku. Just something to consider.

Related

EncryptedCookie gem causes TypeError in Sinatra

I'm trying to make my Sinatra app's sessions more secure, and to do that I'd like to use the EncryptedCookie gem. Unfortunately, when I visit any page in my app, I get this error:
TypeError at /
no _dump_data is defined for class UnboundMethod
file: encrypted_cookie.rb location: dump line: 68
Here's my code:
configure do
  use Rack::Session::EncryptedCookie,
      :key => 'myapp.session',
      :domain => 'myapp.com',
      :path => '/',
      :expire_after => 1200,
      :secret => 'bigcrazysecretstringhere'
end
I tried using the EncryptedCookie gem with the same settings shown above in a simple Sinatra app I made to test the gem, and it worked fine. There must be some other setting in my app that's interfering with it, but I can't figure out what it might be. Has anyone out there experienced a similar issue?
(I've also tried starting the app with 'thin start', 'rackup config.ru', and 'ruby myapp.rb'; none of these made a difference.)
After banging my head against the wall for well over a week, I discovered the contractor who worked on the project before me had put
use Rack::Session::Pool
200 lines below the original
use Rack::Session::Cookie
I removed that and EncryptedCookie worked fine. Lesson learned: immediately abstract away everything you can when handed a 500-line Sinatra app. Then things can't hide from you.
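A quick way to catch that kind of hidden duplication when inheriting a large single-file app is to grep for every session middleware that gets mounted; only one belongs in the stack. A sketch, run from the project root:

$ grep -rn "use Rack::Session" .
# Expect exactly one hit. A second hit (here, the stray
# Rack::Session::Pool a couple of hundred lines below the original
# Rack::Session::Cookie) means two session middlewares are fighting
# over the same session data.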

Why can Gibbon Gem access API but can't listSubscribe()?

I'm trying to get MailChimp integrated with my Ruby on Rails app using the Gibbon gem.
I've successfully accessed the API; I tested it by getting a list of all my MailChimp lists. However, I'm running into problems with the listSubscribe method. I'm not getting any errors; it just isn't doing anything.
I have the following code in the controller for the page where users sign up, after the user is made and their information can be accessed.
gb = Gibbon::API.new
gb.listSubscribe({
  :id => "the-id-for-list",
  :email_address => user.email,
  :update_existing => false,
  :double_optin => false,
  :send_welcome => true,
  :merge_vars => { 'FNAME' => user.first_name,
                   'LNAME' => user.last_name,
                   'MERGE3' => user.subscription,
                   'MERGE4' => DateTime.now }
})
It does nothing. I've tried playing around with the parameter phrasing (à la this post: How do you use the Gibbon Gem to automatically add subscribers to specific interest groups in MailChimp?), and I've tried structuring it more like in this tutorial: http://www.leorodriguez.me/2011/08/subscribe-member-into-mailchimp-using.html
I have no idea what's going wrong. As I said before, other API calls are going through to MailChimp. Do you have any suggestions? Thank you in advance.
It turns out I had the code in the wrong place. I was not putting it where users were actually being created, but in the code that generates the view asking users to sign up. Once I moved it to where the user is actually created, it worked fine.
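For reference, a minimal sketch of the working placement, assuming a conventional UsersController (the controller and attribute names here are hypothetical):

class UsersController < ApplicationController
  def create
    @user = User.new(params[:user])
    if @user.save
      # Subscribe only once the user record actually exists
      gb = Gibbon::API.new
      gb.listSubscribe({
        :id => "the-id-for-list",
        :email_address => @user.email,
        :double_optin => false,
        :send_welcome => true,
        :merge_vars => { 'FNAME' => @user.first_name, 'LNAME' => @user.last_name }
      })
      redirect_to @user
    else
      render :new # the signup view itself should not call the API
    end
  end
end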

Devise not working well with multiple subdomains on RoR3 application

I have seen a lot of questions about this topic, but many of them contain contradictory information, and for some reason none of it worked for me.
I have:
a top-level domain, e.g. lvh.me (development).
each user has a subdomain, e.g. userdomain.lvh.me
The login form is on the top-level domain: lvh.me
I want:
If a user logs in, the session needs to be shared between all the subdomains; that is, the session needs to be active on lvh.me:3000/something and on userdomain.lvh.me:3000.
If a user logs out from lvh.me:3000/something it should work, and if the user logs out from userdomain.lvh.me:3000 it should also work.
I tried
Setting in an initializer the following:
MyApplication::Application.config.session_store :cookie_store, :key => '_mykey', :domain => :all
What happened?
I can log in at lvh.me:3000, I am correctly redirected to lvh.me:3000/internalpage, and if I go to subdomain.lvh.me:3000 it works great. I can also log out from lvh.me:3000/internalpage, BUT if I try to log out from subdomain.lvh.me:3000 it doesn't work. The destroy action in Devise's SessionsController is executed and everything, but the session doesn't die.
According to http://excid3.com/blog/sharing-a-devise-user-session-across-subdomains-with-rails-3/,
The trick here is the :domain option. What this does is set the level of the TLD (top level domain) and tell Rails how long the domain is. The part you want to watch out for here is that if you set :domain => :all as is recommended in some places, it simply won’t work unless you’re using localhost. :all defaults to a TLD length of 1, which means if you’re testing with Pow (myapp.dev) it won’t work either because that is a TLD of length 2.
So, after reading that I also tried
MyApplication::Application.config.session_store :cookie_store, :key => '_mykey', :domain => 'lvh.me'
What happened?
I can log in at lvh.me:3000 and I am correctly redirected to lvh.me:3000/internalpage, but if I go to subdomain.lvh.me:3000 it doesn't work; I have no session there. If I go back to lvh.me:3000/internalpage my session has disappeared. What happened there?
What else?
Then, after reading rails 3.2 subdomains and devise I changed my initializer line to
MyApplication::Application.config.session_store :cookie_store, :key => '_mykey', :domain => '.lvh.me'
Note the "." before the domain name.
According to the post in SO:
This allows this cookie to be accessible across subdomains and the application should maintain its session across subdomains. It may not be 100% what you are looking for, but it should get you going in the right direction.
What happened?
Nothing; it didn't work. Same behavior as with the previous attempt.
I finally tried What does Rails 3 session_store domain :all really do?, creating a custom class to handle the cookies, but I had no luck.
Of course, I deleted all the cookies and temp files before each attempt. I also changed the name of the cookie.
Any help? Thanks!
According to this answer (Rails: how can I share permanent cookies across multiple subdomains?), you need to set the domain manually. Googling around, it looks like '.domainname.com' with the dot at the beginning really is the way to go.
If you inherit from Devise::SessionsController, you can manually set it on create:
class SessionsController < Devise::SessionsController
  def create
    # modify the cookie here
    super
  end
end
I am setting up a working example to test that out; I'll post back afterwards. Cheers!
And here is my Edit
Forget tampering with the token on create. The problem is this: you need to have the token's domain set to '.lvh.me', that's all there is to it, but domain: '.lvh.me' just doesn't do anything. Here is my proof of concept; ultimately it boiled down to a single change inside a controller:
class HomeController < ApplicationController
  def index
    cookies[:_cookietest_session] = {domain: '.lvh.me'}
  end
end
In Chrome the cookie looked the same (screenshot omitted) for subdomain.lvh.me, lvh.me, and any other subdomain I tried. I can sign_in/sign_out from any of them and the session is created/destroyed accordingly.
Now, I wouldn't advise doing it the way I did; I liked the middleware approach, and I think it would work just fine if set up properly. Let me know if you need further help on this.
Cheers!
OK, one last thing.
I went back and tried domain: :all because it really ought to work the way you expected. If I access lvh.me I get a cookie with .lvh.me, but if I go to subdomain.lvh.me I get one that reads .subdomain.lvh.me.
I think the issue is that :all adds a . to subdomain.lvh.me, so you would stay logged in with foo.subdomain.lvh.me, which doesn't do you much good.
:all seems to work if your original login is from the root domain lvh.me and you then redirect to a subdomain, but you can't log in through a subdomain with it set that way.
MyApplication::Application.config.session_store :cookie_store, :key => '_mykey', :domain => '.lvh.me'
looks like the correct way to specify this.
Note:
Make sure you restart Rails after making the change.
Make sure you clear out cookies for your domain before testing again; leftover cookies can be confusing between tests.
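For completeness, a sketch of an initializer that switches the cookie domain per environment ('.myapp.com' is a placeholder for the real production domain):

# config/initializers/session_store.rb
cookie_domain = Rails.env.production? ? '.myapp.com' : '.lvh.me'

MyApplication::Application.config.session_store :cookie_store,
  :key => '_mykey',
  :domain => cookie_domain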

Sinatra session members "disappearing"

I've successfully troubleshot an issue with session members not being available even though they were set, and I'd like to know why it was happening. My situation can be described as:
Sinatra app using enable :sessions.
Using OAuth to authorise users and, in the process, setting a :ret_url session member so that the app knows where to come back to after auth.
Server is Unicorn on the Cedar stack (Heroku)
This worked perfectly while running locally, but the :ret_url session member was completely disappearing from the session on Heroku. I found that removing this code fixed the problem:
before do
  cache_control :public, :must_revalidate, :max_age => 60
end
Question 1: I'm guessing that my cookie was being cached without the :ret_url value and that's why it was breaking?
Question 2: I was setting the session member as shown in the route condition code below; is this the wrong place to do it?
# redirect users to login if necessary
set(:auth) do |access_token|
  condition do
    if request.request_method == 'GET'
      session[:ret_url] = request.path_info
    end
    redirect '/' unless user_logged_in?
  end
end
I'd like to use caching and still have my cookie be valid.
It's hard to see what is going on without knowing all the details, but there is a simple rule that you are most probably violating: do not use HTTP caching on actions that are supposed to do something (other than just show a page). When HTTP caching is on, your browser does not even try to reload the page and renders it from the browser cache.
Cookies are not cached anywhere; the only thing cache_control does is set the Cache-Control HTTP response header.
In your case the best thing you can do is add the routes for pages with no actions to your before block:
before '/my/static/page' do
  cache_control :public, :must_revalidate, :max_age => 60
end
Most probably you will have a very limited set of routes where you can benefit from HTTP caching.
A chap by the name of Ari Brown (waves at Ari), who is not a member here but deserves the credit for this answer, pointed me at the right solution, which, as per the Sinatra FAQ, is to not use enable :sessions but to use Rack::Session::Cookie instead:
use Rack::Session::Cookie, :key => 'rack.session',
                           :domain => 'foo.com',
                           :path => '/',
                           :expire_after => 2592000, # In seconds
                           :secret => 'change_me'
I've added this into my config.ru and all is well.
I also noticed over in this post the alternative suggestion to set :session_secret, 'change_me' and, indeed, to do this via an environment variable, namely:
$ heroku config:add SESSION_KEY=a_longish_secret_key
then in your app
enable :sessions
set :session_secret, ENV['SESSION_KEY'] || 'change_me'
Obviously you can use the environment variable strategy with the Rack::Session::Cookie approach too. That's the way I went as it offers more flexibility in configuration.
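Combined, the config.ru ends up looking roughly like this (a classic-style Sinatra app and a myapp.rb filename are assumed):

# config.ru
require './myapp'

use Rack::Session::Cookie, :key => 'rack.session',
                           :domain => 'foo.com',
                           :path => '/',
                           :expire_after => 2592000, # In seconds
                           :secret => ENV['SESSION_KEY'] || 'change_me'

run Sinatra::Application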
The reason these work is that the cache controller middleware is farming requests out to multiple server instances, and without a session secret being set explicitly, each one just makes up its own, which breaks the sessions.

Where can I look to see why Paperclip is failing silently in Rails 3?

I have followed the simple example here.
I have performed the generation, run the migration, added the code to my model and view, and restarted the application.
This is on a company edit screen, where the user can upload a logo.
Running Rails 3.0.3 in dev mode. The only thing even close to Paperclip that I see in the log is:
Started GET "/logos/original/missing.png" for 127.0.0.1 at Tue Dec 14 15:27:42 -0500 2010
ActionController::RoutingError (No route matches "/logos/original/missing.png"):
I was under the impression that Paperclip was pretty easy to use, but I can't seem to even locate an error message. Can anyone help?
Please set your "default_url" path to an image that is displayed when there is no uploaded image.
For example,
has_attached_file :image,
                  :default_url => '/images/nopicture.jpeg',
                  :styles => {
                    :large => "300x300>",
                    :thumb => "160x120>"
                  }
Where "nopicture.jpeg" which is available in your "/images" folder under public is the default picture to be displayed if none is available.
This should solve your problem.
Fixed! The power of Google. Or Bing, rather. My first problem was that my form helper did not include:
:html => { :multipart => true }
That at least got the call to Paperclip going. But it was hanging.
I am using Passenger to serve up Rails. And it turns out that Passenger did not know where ImageMagick was installed on my machine. So I added an initialization file to config/initializers called "paperclip.rb" with one line:
Paperclip.options[:image_magick_path] = "/opt/local/bin"
Problem solved.
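Putting the pieces together, a sketch of the three files involved (the Company model and its edit form are stand-ins for the actual code):

# app/models/company.rb
class Company < ActiveRecord::Base
  has_attached_file :logo,
                    :default_url => '/images/nopicture.jpeg',
                    :styles => { :thumb => "160x120>" }
end

# app/views/companies/edit.html.erb -- uploads silently go nowhere
# without the multipart option:
#   <%= form_for @company, :html => { :multipart => true } do |f| %>
#     <%= f.file_field :logo %>
#   <% end %>

# config/initializers/paperclip.rb -- tell Paperclip where ImageMagick lives
Paperclip.options[:image_magick_path] = "/opt/local/bin"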
