I'm trying to use Minitest with Sinatra. My issue is that running the test (ruby test_login.rb) is unable to find the login page, and when I print out the document HTML I get the Sinatra 404 page. I have no idea how to connect my web app with this test program, and none of the documentation or previous questions I have scoured has helped.
Here is my code:
require 'sinatra/base'
require 'minitest/autorun'
require 'minitest/spec'
require 'rack/test'
require 'nokogiri'
require_relative 'login'

class Test < MiniTest::Test
  include Rack::Test::Methods

  def app
    Sinatra::Application
  end

  def test_login
    response = get('/home')
    doc = Nokogiri::HTML(response.body)
    puts last_response
    puts doc
    # response = post '/login', username: 'test_user', password: 'password'
    # get '/home'
    # follow_redirect!
    # puts doc
    # assert_equal "Admin", doc.at_css("#admin-block div h1")
  end
end
Please do not comment asking me to use a different testing gem.
Thank you
Since you're testing that the login works, you only want to check whether the login information you pass to the login form actually gives an HTTP 200 response.
There is no need for the first, uncommented part of the code in your example.
As a side note: checking whether a response has a certain HTTP status code instead of checking the content of the page body is not considered a 'good' test, but if you consider it decent enough, then go ahead with it.
The following test should work if the login information that you pass is valid.
def test_login
  response = post '/login', username: 'test_user', password: 'password'
  assert_equal 200, response.status
end
EDIT
An update to the question came in right after I posted my answer, but if you're getting an error when sending a GET request to /home, it sounds like that route is not defined in your application.
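For reference, here is a minimal sketch of what the missing piece might look like in login.rb; the route names are just taken from your test, so adjust them to whatever your app actually uses:

require 'sinatra'

# GET /home has to exist, otherwise Rack::Test gets Sinatra's 404 page back.
get '/home' do
  "Home page"   # or render your real template here
end

# A login route matching the commented-out POST in the test.
post '/login' do
  # ...authenticate here, then send the user on to /home...
  redirect '/home'
end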
It would appear that either I am missing something very basic, or Rack/Test can't cope with Sinatra doing a redirect.
Presumably there is a way around this or Rack/Test would be useless. Can anyone tell me what I should be doing to get it to work here?
(Edit: What I ultimately want to achieve is to test which page I eventually get, not the status. In the test, the last_response object is pointing to a page that doesn't exist in my app, and certainly not the page you actually get when you run it.)
An example app:
require 'sinatra'
require 'haml'

get "/" do
  redirect to('/test')
end

get '/test' do
  haml :test
end
This works as you would expect. Going to either '/' or '/test' gets you the contents of views/test.haml.
But this test does not work:
require_relative '../app.rb'
require 'rspec'
require 'rack/test'

describe "test" do
  include Rack::Test::Methods

  def app
    Sinatra::Application
  end

  it "tests" do
    get '/'
    expect(last_response.status).to eq(200)
  end
end
This is what happens when you run the test:
1) test tests
Failure/Error: expect(last_response.status).to eq(200)
expected: 200
got: 302
And this is what last_response.inspect looks like:
#<Rack::MockResponse:0x000000035d0838 @original_headers={"Content-Type"=>"text/html;charset=utf-8", "Location"=>"http://example.org/test", "Content-Length"=>"0", "X-XSS-Protection"=>"1; mode=block", "X-Content-Type-Options"=>"nosniff", "X-Frame-Options"=>"SAMEORIGIN"}, @errors="", @body_string=nil, @status=302, @header={"Content-Type"=>"text/html;charset=utf-8", "Location"=>"http://example.org/test", "Content-Length"=>"0", "X-XSS-Protection"=>"1; mode=block", "X-Content-Type-Options"=>"nosniff", "X-Frame-Options"=>"SAMEORIGIN"}, @chunked=false, @writer=#<Proc:0x000000035cfeb0@/home/jonea/.rvm/gems/ruby-1.9.3-p547@sandbox/gems/rack-1.5.2/lib/rack/response.rb:27 (lambda)>, @block=nil, @length=0, @body=[]>
I wonder if Rack/Test has just arbitrarily decided to insert 'http://example.org' into the redirect?
As @sirl33tname points out, a redirect is still a redirect, so the best possible status I can expect is 302, not 200. If I want to test whether I got a good page at the end of the redirect, I should test ok?, not the status.
But if I want to test what URL I eventually end up at, I need to do a tiny bit more, because Rack/Test is basically a mocking system and returns a mock of a page on a redirect, not the actual page.
But this is easy enough to override, it turns out, with follow_redirect!.
The test becomes:
it "tests" do
get '/'
follow_redirect!
expect(last_response.status).to be_ok
# ...and now I can test for the contents of test.haml, too...
expect(last_response.body).to include('foo')
end
And that does the job.
Your test is wrong.
A GET on a URL that redirects gives status code 302, so the right test is:
expect(last_response.status).to eq(302)
Maybe a better way to check this is just assert last_response.ok?
http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.3.3
Or like the example from GitHub:

get "/"
follow_redirect!

assert_equal "http://example.org/yourexpectedpath", last_request.url
assert last_response.ok?
And yes, this is always example.org, because you get a mock instead of a real response.
Another way would be to test last_response.header['Location'].
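For example, a short sketch in the RSpec style used in the question, asserting on the Location header instead of following the redirect:

it "redirects / to /test" do
  get '/'
  expect(last_response.status).to eq(302)
  # Rack::Test resolves the mock request against example.org, so that host shows up here.
  expect(last_response.header['Location']).to eq('http://example.org/test')
end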
I'm trying to parse the Twitter usernames from a bit.ly stats page using Nokogiri:
require 'rubygems'
require 'nokogiri'
require 'open-uri'
doc = Nokogiri::HTML(open('http://bitly.com/U026ue+/global'))
twitter_accounts = []
shares = doc.xpath('//*[@id="tweets"]/li')

shares.map do |tweet|
  twitter_accounts << tweet.at_css('.conv.tweet.a')
end
puts twitter_accounts
My understanding is that Nokogiri will save shares in some form of tree structure, which I can use to drill down into, but my mileage is varying.
That data is coming in from an Ajax request with a JSON response. It's pretty easy to get at though:
require 'json'
require 'open-uri'

url = 'http://search.twitter.com/search.json?_usragnt=Bitly&include_entities=true&rpp=100&q=nowness.com%2Fday%2F2012%2F12%2F6%2F2643'
hash = JSON.parse(open(url).read)

puts hash['results'].map { |x| x['from_user'] }
I got that URL by loading the page in Chrome and then looking at the Network panel. I also removed the timestamp and callback parameters just to clean things up a bit.
Actually, Eric Walker was onto something. If you look at doc, the section where the tweets are supposed to be looks like:
<h2>Tweets</h2>
<ul id="tweets"></ul>
</div>
This is likely because they're generated by a JavaScript call which Nokogiri isn't executing. One possible solution is to use Watir to load the page, let the JavaScript run, and then hand the resulting HTML to Nokogiri.
Here is a script that accomplishes just that. Note that you had some issues with your XPath arguments, which I've fixed, and that Watir will open a new browser every time you run this script:
require 'watir'
require 'nokogiri'

browser = Watir::Browser.new
browser.goto 'http://bitly.com/U026ue+/global'

doc = Nokogiri::HTML.parse(browser.html)

twitter_accounts = []

shares = doc.xpath('//li[contains(@class, "tweet")]/a')
shares.each do |tweet|
  twitter_accounts << tweet.attr('title')
end

puts twitter_accounts
browser.close
You can also use headless to prevent a window from opening.
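As a rough sketch (this assumes the headless gem, which wraps an Xvfb virtual display, so it only applies on Linux/X11 setups), the Watir part could run without a visible window like this:

require 'headless'
require 'watir'
require 'nokogiri'

headless = Headless.new
headless.start            # start the virtual display

browser = Watir::Browser.new
browser.goto 'http://bitly.com/U026ue+/global'
doc = Nokogiri::HTML.parse(browser.html)
browser.close

headless.destroy          # shut the virtual display down again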
I'm trying to use the Ruby version of Mechanize to extract my employer's tickets from a ticket management system that we're moving away from and that does not supply an API.
Problem is, it seems Mechanize isn't keeping the cookies between the post call and the get call shown below:
require 'rubygems'
require 'nokogiri'
require 'mechanize'

@agent = Mechanize.new

page = @agent.post('http://<url>.com/user_session', {
  'authenticity_token' => '<token>',
  'user_session[login]' => '<login>',
  'user_session[password]' => '<password>',
  'user_session[remember_me]' => '0',
  'commit' => 'Login'
})

page = @agent.get 'http://<url>.com/<organization>/<repo-name>/tickets/1'
puts page.title
user_session is the URL to which the site's login page POSTs, and I've verified that this indeed logs me in. But the page returned from the get call is the 'Oops, you're not logged in!' page.
I've verified that clicking links on the page that returns from the post call works, but I can't actually get to where I need to go without JavaScript. And of course I've done this successfully on the browser with the same login.
What am I doing wrong?
Okay, this might help you. First of all, what version of Mechanize are you using? You need to identify whether this problem is due to the cookies being overwritten/cleaned by Mechanize between the requests, or whether the cookies are wrong/not being set in the first place. You can do that by adding a puts @agent.cookie_jar.jar between the two requests to see what is stored.
If it's an overwriting issue, you might be able to solve it by collecting the cookies from the first request and applying them to the second. There are many ways to do this.
One way is to just do temp_jar = @agent.cookie_jar.jar and then go through each cookie and add it again using the .add method.
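A rough sketch of that approach; note that the exact CookieJar#add signature differs between Mechanize versions (older ones take a URI plus a cookie, newer ones take just the cookie), so treat this as an outline rather than copy-paste code:

new_agent = Mechanize.new

# Copy every cookie from the old agent's jar into the new agent's jar.
@agent.cookie_jar.each do |cookie|
  new_agent.cookie_jar.add(cookie)
end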
HOWEVER, the easiest way is to just install the latest 2.1 pre-release of Mechanize (many fixes), because you will then be able to do it very simply.
To install the latest, do gem install mechanize --pre, and make sure to get rid of the old version with gem uninstall mechanize. After this, you can simply do as follows:
require 'rubygems'
require 'nokogiri'
require 'mechanize'

@agent = Mechanize.new

page = @agent.post('http://<url>.com/user_session', {
  'authenticity_token' => '<token>',
  'user_session[login]' => '<login>',
  'user_session[password]' => '<password>',
  'user_session[remember_me]' => '0',
  'commit' => 'Login'
})

temp_jar = @agent.cookie_jar

# Do whatever you need and use the cookies again in a new session after that
@agent = Mechanize.new
@agent.cookie_jar = temp_jar

page = @agent.get 'http://<url>.com/<organization>/<repo-name>/tickets/1'
puts page.title
BTW the documentation is here http://mechanize.rubyforge.org/index.html
Mechanize automatically sends cookies obtained from a response in subsequent requests. You can use the same agent without re-creating it.
require 'mechanize'

@agent = Mechanize.new
@agent.post(create_sessions_url, params, headers)
@agent.get(ticket_url)
Tested with mechanize 2.7.6.
I would like to specify a base URL so I don't have to always specify absolute URLs. How can I specify a base URL for Mechanize to use?
To accomplish the previously proffered answer using Webrat, you can do the following e.g. in your Cucumber env.rb:
require 'webrat'

Webrat.configure do |config|
  config.mode = :mechanize
end

World do
  session = Webrat::Session.new
  session.extend(Webrat::Methods)
  session.extend(Webrat::Matchers)
  session.visit 'http://yoursite/yourbasepath/'
  session
end
To make it more robust, such as for use in different environments, you could do:
ENV['CUCUMBER_HOST'] ||= 'yoursite'
ENV['CUCUMBER_BASE_PATH'] ||= '/yourbasepath/'

# Webrat
require 'webrat'

Webrat.configure do |config|
  config.mode = :mechanize
end

World do
  session = Webrat::Session.new
  session.extend(Webrat::Methods)
  session.extend(Webrat::Matchers)
  session.visit('http://' + ENV['CUCUMBER_HOST'] + ENV['CUCUMBER_BASE_PATH'])
  session
end
Note that if you're using Mechanize, Webrat will also fail to follow your redirects because it won't interpret the current host correctly. To work around this, you can add session.header('Host', ENV['CUCUMBER_HOST']) to the above.
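With that work-around folded in, the World block from above would look roughly like this (same code as before, just with the Host header set):

World do
  session = Webrat::Session.new
  session.extend(Webrat::Methods)
  session.extend(Webrat::Matchers)
  session.header('Host', ENV['CUCUMBER_HOST'])
  session.visit('http://' + ENV['CUCUMBER_HOST'] + ENV['CUCUMBER_BASE_PATH'])
  session
end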
To make sure the right paths are being used everywhere for visiting and matching, add ENV['CUCUMBER_BASE_PATH'] + to the beginning of your path_to method in paths.rb, if you use it. It should look like this:

def path_to(page_name)
  ENV['CUCUMBER_BASE_PATH'] +
    case page_name
    # ...the rest of the standard Cucumber path mappings, unchanged...
    end
end
For Mechanize, the first URL you specify will be considered the base URL. For example:
require "rubygems"
require "mechanize"
agent = Mechanize.new
agent.get("http://some-site.org")
# Subsequent requests can now use the relative path:
agent.get("/contact.html")
This way you only specify the base URL once.
I'm writing Cucumber tests for a Sinatra based application using Webrat. For some tests I need to implement a scenario like
Given I am logged in as "admin"
When I am viewing "/"
Then I should see "Settings"
I define steps like this:
Given /^I am logged in as "(.+)"$/ do |user|
visit "/login"
fill_in "login", :with => user
fill_in "password", :with => "123456"
click_button "Login"
end
When /^I am viewing "(.+)"$/ do |url|
visit(url)
end
Then /^I should see "(.+)"$/ do |text|
response_body.should =~ /#{text}/
end
On success, a cookie is created:

response.set_cookie(cookie_name, cookie_value)
and then verified in views when user tries to access admin pages via helper method:
def logged_in?
  request.cookies[cookie_name] == cookie_value
end
And it looks like Webrat doesn't store cookies. The tests don't report any errors, but logged_in? in views is always false, as if the cookie was never saved.
Am I doing something wrong? If this is just how Webrat works, what is the best workaround?
The real problem is the way Sinatra treats sessions in the test environment. Search the Google group for the discussion; the real solution is to simply use:
use Rack::Session::Cookie
and not
enable :sessions
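In other words, in the Sinatra app itself (a minimal sketch; the session secret is a placeholder):

require 'sinatra'

# Use Rack's cookie-based session middleware instead of enable :sessions.
use Rack::Session::Cookie, :secret => 'some_long_random_secret'

get '/' do
  session[:example] = 'value'
  'ok'
end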
Using Selenium is nice, but it's overkill as a solution for the OP's problem.
The workaround is to use Webrat with the Selenium back end. It runs all tests in a separate Firefox window, so cookies and JavaScript are not a problem. The downside is the extra time and resources required to run Firefox and do all the real clicks, rendering, etc.
You could have your "Given /^I am logged in" step hack logged_in?:
Given /^I am logged in as "(.+)"$/ do |user|
  visit "/login"
  fill_in "login", :with => user
  fill_in "password", :with => "123456"
  click_button "Login"

  ApplicationController.class_eval <<-EOE
    def current_user
      @current_user ||= User.find_by_name(#{user.inspect})
    end
  EOE
end
There are two downsides:
It's really hackish to mix view-level and controller-level issues like this.
It'll be difficult to mock up "logout"