application and actioncable won't share cookie - ruby

I am using Devise for authentication, but when I implemented your method I got "An unauthorized connection attempt was rejected".
After hours of searching I found out that:
cookies.signed['user.id']
returns nil in the following code block:
def find_verified_user
  if verified_user = User.find_by(id: cookies.signed['user.id'])
    verified_user
  else
    reject_unauthorized_connection
  end
end
I checked and there is definitely a cookie, but it does not contain the cookie data set by Devise.
To check whether 'user.id' is actually set, I raise it in the view. This, as expected, returns the user id:
Signed in as ##{cookies.signed[:username]}.
- raise(cookies.signed['user.id'].inspect)
%br/
%br/
#messages
%br/
%br/
= form_for :message, url: messages_path, remote: true, id: 'messages-form' do |f|
  = f.label :body, 'Enter a message:'
  %br/
  = f.text_field :body
  %br/
  = f.submit 'Send message'
My question/issue:
It seems like the cookie is not available to the Action Cable server.
Is there a way to share the cookie set by Devise with the cable server?
https://github.com/stsc3000/actioncable-chat.git

Check the client-side JavaScript file that connects to your Action Cable server. Some tutorials have you put that in 'app/assets/javascripts/application_cable.coffee' and others in 'app/assets/javascripts/channels/index.coffee' but it looks like this:
#= require cable
@App = {}
App.cable = Cable.createConsumer("ws://cable.example.com:28080")
You need the WebSocket address to point to your Cable server and that address needs to share a cookie namespace with the rest of your app. Most likely yours is pointing at the wrong place, so for example if you're working on this locally you would need to change:
App.cable = Cable.createConsumer("ws://cable.example.com:28080")
to
App.cable = Cable.createConsumer("ws://localhost:28080")
assuming of course that your Cable server is running on port 28080 (specified in the bin/cable executable).
Also make sure to clear your browser cache so the updated file is the one being used by the browser.
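If you are on Rails 5 with the in-app Action Cable server instead of the standalone bin/cable process, the equivalent setting is the cable URL in your environment config. A minimal sketch for development, assuming the default /cable mount point:
# config/environments/development.rb (sketch; host and port must match how you browse the app)
Rails.application.configure do
  config.action_cable.url = "ws://localhost:3000/cable"
end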

Not sure if you got it running by now, but I had the same issue on Rails 5.0.0.beta3. I did not change it to the following line:
App.cable = Cable.createConsumer("ws://localhost:3000")
I kept it as it was before:
@App ||= {}
App.cable = ActionCable.createConsumer()
But what I did change had to do with the cookies. No matter what I tried, the cookie for my user_id would not show up, so I made a workaround: I got the cookie to save the username instead, and then I was finally able to see it in the find_verified_user call.
After the user logs in (Sessions#create), I call a helper function:
sessions_helper.rb
def set_cookie(user)
  the_username = user.username.to_s
  cookies.permanent.signed[:username] = the_username
end
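For reference, a rough sketch of where that call might live; the controller and the sign-in logic itself are assumptions, not part of the original answer:
# app/controllers/sessions_controller.rb (hypothetical sketch)
def create
  user = authenticate_user_somehow(params)  # placeholder for the app's own sign-in logic
  if user
    set_cookie(user)  # writes the signed :username cookie read by find_verified_user
    redirect_to root_path
  else
    render :new
  end
end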
The new find_verified_user:
def find_verified_user
  if current_user = User.find_by_username(cookies.signed[:username])
    current_user
  else
    reject_unauthorized_connection
  end
end
This may or may not be the best solution, but after hours of confusion and frustration this worked for my situation. I hope this can help someone.

You need to configure this in config/initializers/session_store.rb:
# using cookie store
if Rails.env.production?
  # to share across subdomains
  Rails.application.config.session_store :cookie_store,
    key: '_app_name_session', domain: ".example.com"
else
  # to share with any domain
  Rails.application.config.session_store :cookie_store,
    key: '_app_name_session', domain: :all, tld_length: 2
end

# for redis store
if Rails.env.production?
  # to share across subdomains
  Rails.application.config.session_store :redis_store, {
    servers: [
      { host: YourRedisHost, port: YourRedisPort },
    ],
    key: '_app_name_session',
    expire_after: 1.day,
    domain: '.example.com'
  }
else
  # to share with any domain
  Rails.application.config.session_store :redis_store, {
    servers: [
      { host: YourRedisHost, port: YourRedisPort },
    ],
    key: '_app_name_session',
    expire_after: 1.day,
    domain: :all,
    tld_length: 2
  }
end
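Once the session cookie is scoped to the parent domain, the cable endpoint just needs to live under that same domain so the browser sends the cookie with the WebSocket handshake. A minimal sketch, assuming a standalone cable host like the one used in the consumer example above:
# config/environments/production.rb (sketch; host and port are assumptions)
Rails.application.configure do
  # cable.example.com falls under the ".example.com" cookie domain configured above
  config.action_cable.url = "ws://cable.example.com:28080"
end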

The problem I found is that I have two different users logged in: one is logged in at 127.0.0.1, and the other is logged in at localhost.
So when I access my website using 127.0.0.1:3000 but my cable is configured to run on localhost like this:
config.action_cable.url = "ws://localhost:3000/cable"
in config/environments/development.rb,
the user logged in at 127.0.0.1 makes the cable request to "ws://localhost:3000/cable" (as per the configuration), so the cookie saved for localhost is sent, even though I am making the request from 127.0.0.1 as a different user (or no user at all).
So the root cause is actually what Pwnrar points out above: the cable address configuration and the way you access your website.
So to solve the problem, always access your website using the server address configured for your cable, otherwise the cookies get mixed up.

Related

Why is Net::HTTP timing out when I try to access a Prawn Generated PDF?

I am using Prawn to generate a PDF from my controller, and when accessed directly at the URL it works flawlessly, e.g. localhost:3000/responses/1.pdf.
However, when I try to generate this file on the fly for inclusion in a Mailer, everything freezes up and it times out.
I have tried various methods for generating / attaching the file and none have changed the outcome.
I also tried modifying the timeout for Net::HTTP, to no avail; it just takes longer to time out.
If I run this command on the Rails Console, I receive a PDF data stream.
Net::HTTP.get('127.0.0.1',"/responses/1.pdf", 3000)
But if I include this code in my controller, it times out.
I have tried two different methods, and both fail repeatedly.
Method 1
Controller:
http = Net::HTTP.new('localhost', 3000)
http.read_timeout = 6000
file = http.get(response_path(@response, :format => 'pdf')) # timeout here
ResponseMailer.confirmComplete(@response, file).deliver # deliver the mail!
Method 1 Mailer:
def confirmComplete(response, file)
  email_address = response.supervisor_id
  attachments["test.pdf"] = { :mime_type => "application/pdf", :content => file }
  mail to: email_address, subject: 'Thank you for your feedback!'
end
The above code times out.
Method 2 Controller:
ResponseMailer.confirmComplete(@response).deliver # deliver the mail!
Method 2 Mailer:
def confirmComplete(response)
  email_address = response.supervisor_id
  attachment "application/pdf" do |a|
    a.body = Net::HTTP.get('127.0.0.1', "/responses/1.pdf", 3000) # timeout here
    a.filename = "test.pdf"
  end
  mail to: email_address, subject: 'Thank you for your feedback!'
end
If I switch the a.body and a.filename, it errors out first with
undefined method `filename=' for #<Mail::Part:0x007ff620e05678>
Every example I find has a different syntax or suggestion, but none fixes the underlying problem that Net::HTTP times out. (Rails 3.1, Ruby 1.9.2)
The problem is that, in development, you're only running one server process, which is busy generating the email. That process sends another request (to itself) to generate the PDF and waits for a response. The request for the PDF is basically standing in line at the server so that it can get its PDF, but the server is busy generating the email and waiting to get the PDF before it can finish. And thus, you're waiting forever.
What you need to do is start up a second server process...
script/rails server -p 3001
and then get your PDF with something like ...
args = ['127.0.0.1','/responses/1.pdf']
args << 3001 unless Rails.env == 'production'
file = Net::HTTP.get(*args)
As an aside, depending on what server you're running on your production machine, you might run into issues with pointing at 127.0.0.1. You might need to make that dynamic and point to the full domain when in production, but that should be easy.
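For example, a small sketch of making the host dynamic; the production domain here is a placeholder:
# Hypothetical sketch: pick the host (and port) based on environment
host = Rails.env.production? ? 'www.example.com' : '127.0.0.1'
args = [host, '/responses/1.pdf']
args << 3001 unless Rails.env.production?
file = Net::HTTP.get(*args)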
I agree with https://stackoverflow.com/users/811172/jon-garvin's analysis that you're only running one server process, but I would mention another solution: refactor your PDF generation so it doesn't depend on your controller.
If you're using Prawnto, I'm guessing you have a view like
# app/views/response.pdf.prawn
pdf.text "Hello world"
Move this to your Response model (or somewhere else more appropriate, like a presenter):
# app/models/response.rb
require 'tmpdir'
class Response < ActiveRecord::Base
  def pdf_path
    return @pdf_path if @pdf_generated == true
    @pdf_path = File.join(Dir.tmpdir, rand(1e11).to_s)
    Prawn::Document.generate(@pdf_path) do |pdf|
      pdf.text "Hello world"
    end
    @pdf_generated = true
    @pdf_path
  end

  def pdf_cleanup
    if @pdf_generated and File.exist?(@pdf_path.to_s)
      File.unlink @pdf_path
    end
  end
end
Then in your ResponsesController you can do:
# app/controllers/responses_controller.rb
def show
  @response = Response.find params[:id]
  respond_to do |format|
    # this sends the PDF to the browser (doesn't email it)
    format.pdf { send_file @response.pdf_path, :type => 'application/pdf', :disposition => 'attachment', :filename => 'test.pdf' }
  end
end
And in your mailer you can do:
# this sends an email with the PDF attached
def confirm_complete(response)
  email_address = response.supervisor_id
  attachments['test.pdf'] = { :mime_type => "application/pdf", :content => File.read(response.pdf_path, :binmode => true) }
  mail to: email_address, subject: 'Thank you for your feedback!'
end
Since you created it in the tmpdir, it will be automatically deleted when your server restarts. You can also call the cleanup function.
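For instance, a rough sketch of tying it together in a controller action; this assumes the mail is delivered synchronously (as it is with deliver in Rails 3.1), so the file is safe to remove right afterwards:
# Hypothetical sketch: generate, email, then clean up the temporary PDF
@response = Response.find params[:id]
ResponseMailer.confirm_complete(@response).deliver
@response.pdf_cleanup  # removes the tmpdir file created by pdf_path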
One final note: you might want to use a different model name, like SupervisorReport or something (Response might get you in namespacing trouble later).

Sinatra cookies vanish on certain routes?

I have a simple Sinatra app that I am playing with, and for some reason the cookies don't seem to work for certain routes, which I find quite bizarre.
require "sinatra"
set(:authenticate) do |*vars|
condition do
unless request.cookies.has_key?("TestCookie")
redirect to("/login"), 303
end
end
end
get "/login" do
return "No valid cookie"
end
get "/secret", :authenticate => [:auth_cookie] do
cookie = request.cookies["TestCookie"]
return "Secrets ahoy - #{cookie}"
end
get '/cookie/set' do
response.set_cookie("TestCookie", {
:expires => Time.now + 2400,
:value => "TestValue"
})
return "Cookie is set"
end
get '/cookie/get' do
cookie = request.cookies["TestCookie"]
return "Cookie with value #{cookie}"
end
If I go to /cookie/set it correctly sets the cookie (I can see it in Firecookie), and if I then go to /cookie/get I get the correct cookie output. However, if I go to /secret it always redirects to /login. As I am still fairly new to Ruby syntax I thought it might be a problem with my condition within the authenticate extension, so I tried removing that and just spitting out the cookie like the other route does. Still nothing, so I am at a loss: the cookie is there, I can see it in the browser, and /cookie/get works, but /secret doesn't...
Am I missing something here?
The problem is that the cookie is set with the path /cookie. When you set a cookie you can specify a path, which is effectively the sub-part of the website that you want the cookie to apply to. I guess Sinatra/Rack uses the path of the current request by default, which for /cookie/set would be /cookie.
You can make it work the way you expect by explicitly specifying the path:
response.set_cookie("TestCookie", {
:expires => Time.now + 2400,
:value => "TestValue",
:path => '/'
})
Or you could set the cookie at a route called, say, /cookie-set rather than /cookie/set.
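For example, a minimal sketch of that alternative; with no explicit :path, the browser should default the cookie's path to / because the route sits at the top level:
get '/cookie-set' do
  response.set_cookie("TestCookie", :expires => Time.now + 2400, :value => "TestValue")
  return "Cookie is set"
end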

Post/Redirect Auth problem with Sinatra

I've stumbled across a bit of a problem when it comes to redirects behind a protected set of URLs (an admin section) within a Sinatra app. It's most likely a silly mistake, but I haven't found anything online that helps.
This is for a password-protected area, as the helpers show, where the user can create new events. The first time a user tries to access the admin they are prompted for a password, and subsequent pages are left alone. The problem I have is that when the app attempts to redirect after a new event is successfully created, the user has to re-auth themselves... which seems a bit redundant.
This also applies to the deletion and editing process; the user always gets prompted when a redirect is attempted. I've tried passing 303 as the second parameter to redirect to get a different HTTP status code, but to no avail.
Anyway, here's the code; any questions/help would be appreciated.
helpers do
  def protected!
    unless authorized?
      response['WWW-Authenticate'] = %(Basic realm="Restricted Area")
      throw(:halt, [401, "Not authorized\n"])
    end
  end

  def authorized?
    @auth ||= Rack::Auth::Basic::Request.new(request.env)
    @auth.provided? && @auth.basic? && @auth.credentials && @auth.credentials == ['admin', 'admin']
  end
end
...
get "/admin/events/:id" do
protected!
conf = Conference.where(:_id => params[:id]).first
not_found unless conf
haml :admin_event_edit, :layout => :admin_layout, :locals => { :event => conf }
end
post "/admin/events/new/" do
protected!
conf = Conference.new(params[:event])
if conf.save!
redirect "/admin/events/"
else
"Something went horribly wrong creating the new event, heres the form contents #{params.inspect}"
end
end
get "/admin/events/" do
protected!
haml :admin_events, :layout => :admin_layout, :locals => { :our_events => Conference.where(:made => true).order_by(:start_date.asc).limit(15), :other_events => Conference.where(:made => false).order_by(:start_date.asc).limit(15)}
end
Is this only happening in Safari?
I've used the code above and it only re-auths in Safari; Chrome and Firefox work as expected.
It seems that unless you check "remember my username/password", Safari will send each subsequent request without the Authorization header (a great tool for watching headers etc. is Charles). If you do check it, Safari sends the Auth header correctly, and even if you quit Safari it will continue to remember to send the Auth on relaunch.
So it's Apple being silly, not you :)

How can I persistently overwrite an attribute initialized by Rack::Builder?

I am trying to use OmniAuth to handle the OAuth flow for a small-ish Sinatra app. I can get 37signals OAuth to work perfectly; however, I'm trying to create a strategy for Freshbooks OAuth as well.
Unfortunately Freshbooks requires OAuth requests to go to a user-specific subdomain. I'm acquiring the subdomain as an input, and I then need to persistently use the customer-specific site URL for all requests.
Here's what I've tried up to now. The problem is that the new site value doesn't persist past the first request.
There has to be a simple way to achieve this, but I'm stumped.
# Here's the setup -
def initialize(app, consumer_key, consumer_secret, subdomain='api')
  super(app, :freshbooks, consumer_key, consumer_secret,
    :site => "https://" + subdomain + ".freshbooks.com",
    :signature_method => 'PLAINTEXT',
    :request_token_path => "/oauth/oauth_request.php",
    :access_token_path => "/oauth/oauth_access.php",
    :authorize_path => "/oauth/oauth_authorize.php"
  )
end
def request_phase
  # Here's the overwrite -
  consumer.options[:site] = "https://" + request.env["rack.request.form_hash"]["subdomain"] + ".freshbooks.com"
  request_token = consumer.get_request_token(:oauth_callback => callback_url)
  (session[:oauth] ||= {})[name.to_sym] = { :callback_confirmed => request_token.callback_confirmed?,
                                            :request_token => request_token.token,
                                            :request_secret => request_token.secret }
  r = Rack::Response.new
  r.redirect request_token.authorize_url
  r.finish
end
Ok, here's a summary of what I did for anyone who comes across this via Google.
I didn't solve the problem in the way I asked it; instead, I pushed the subdomain into the session and I now overwrite the site value whenever it needs to be used.
Here's the code:
# Monkeypatching to inject the user subdomain
def request_phase
  # Subdomain is expected to be submitted as <input name="subdomain">
  session[:subdomain] = request.env["rack.request.form_hash"]["subdomain"]
  consumer.options[:site] = "https://" + session[:subdomain] + ".freshbooks.com"
  super
end

# Monkeypatching to inject the subdomain again
def callback_phase
  consumer.options[:site] = "https://" + session[:subdomain] + ".freshbooks.com"
  super
end
Note that you still have to set something as the site when it's initialised, otherwise you will get errors due to OAuth not using SSL to make the requests.
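For context, a rough sketch of how such a strategy might be wired into the Sinatra app; the keys are placeholders, and it assumes the custom class is registered as OmniAuth::Strategies::Freshbooks, which the original post doesn't show:
# Hypothetical sketch of mounting the custom strategy
require 'sinatra'
require 'omniauth'

use Rack::Session::Cookie, :secret => 'change_me'  # session storage for session[:subdomain]
use OmniAuth::Builder do
  # subdomain defaults to 'api' here and is overwritten per request from the form field
  provider :freshbooks, 'CONSUMER_KEY', 'CONSUMER_SECRET'
end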
If you want to see the actual code I'm using it's at: https://github.com/joeharris76/omniauth I'll push the fork up to the main project once I've battle tested this solution a bit more.

Testing basic HTTP authenticated request in Merb

The Merb Open Source Book has a chapter on authentication. However, the example in the section on testing an authenticated request only shows what you can do for forms-based authentication. I have a web service that I want to test with HTTP basic authentication. How would I do that?
After posting my question, I tried a few more things and found my own answer. You can do something like the following:
response = request('/widgets/2222',
  :method => "GET",
  "X_HTTP_AUTHORIZATION" => 'Basic ' + ["myusername:mypassword"].pack('m').delete("\r\n"))
I may get around to updating the book, but at least this info is here for Google to find and possibly help someone else.
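Incidentally, ["myusername:mypassword"].pack('m') is just Base64-encoding the credentials; an equivalent sketch using the standard library looks like this:
require 'base64'

# Builds the same header value with Base64 instead of Array#pack
credentials = Base64.encode64("myusername:mypassword").delete("\r\n")
response = request('/widgets/2222',
  :method => "GET",
  "X_HTTP_AUTHORIZATION" => 'Basic ' + credentials)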
Here is an example for HTTP basic auth from inside a controller:
class MyMerbApp < Application
  before :authenticate, :only => [:admin]

  def index
    render
  end

  def admin
    render
  end

  protected

  def authenticate
    basic_authentication("Protected Area") do |username, password|
      username == "name" && password == "secret"
    end
  end
end
You'll need to define the merb_auth_slice in config/router.rb if it's not already done for you:
Merb::Router.prepare do
  slice(:merb_auth_slice_password, :name_prefix => nil, :path_prefix => "")
end
