I am trying to use capybara-poltergeist with a proxy to emulate a browser.
require 'capybara/poltergeist'
require 'capybara/dsl'
Capybara.register_driver :poltergeist_proxy do |app|
  Capybara::Poltergeist::Driver.new(app,
    :js_errors => false,
    :phantomjs_options => ['--ignore-ssl-errors=yes', '--proxy-type=https', '--proxy=112.124.46.186:80'])
end
Capybara.current_driver = :poltergeist_proxy
Capybara.default_wait_time = 90
Capybara.app_host = 'https://www.bbc.co.uk'
visit('/')
Unfortunately, I am getting the following error -
/Ruby1.9.3/lib/ruby/gems/1.9.1/gems/poltergeist-1.5.0/lib/capybara/poltergeist/web_socket_server.rb:87:in `rescue in send': Timed out waiting for response to {"name":"visit","args":["https://www.bbc.co.uk/"]}. It's possible that this happened because something took a very long time (for example a page load was slow). If so, setting the Poltergeist :timeout option to a higher value will help (see the docs for details). If increasing the timeout does not help, this is probably a bug in Poltergeist - please report it to the issue tracker. (Capybara::Poltergeist::TimeoutError)
I am not sure what mistake I am making. I believe the syntax I am using is correct, based on a related query here as well as the examples on GitHub.
I don't think https is a valid proxy type (see https://github.com/ariya/phantomjs/wiki/API-Reference). Also, you can try adding :timeout => 180 to your driver options.
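For reference, a corrected registration might look like the sketch below (assuming an HTTP proxy; the proxy address is carried over from the question and may not be live):

require 'capybara/poltergeist'

Capybara.register_driver :poltergeist_proxy do |app|
  Capybara::Poltergeist::Driver.new(app,
    :js_errors => false,
    :timeout => 180, # seconds Poltergeist waits for a command before raising TimeoutError
    :phantomjs_options => ['--ignore-ssl-errors=yes',
                           '--proxy-type=http', # 'https' is not a documented PhantomJS proxy type
                           '--proxy=112.124.46.186:80'])
end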
Related
I'm using Capybara to fill in a form and download the results.
It's a bit slow when filling in the form, and I want to check if JavaScript is the culprit.
How do I turn off JavaScript?
The Ruby code was something similar to, but not the same as, the following (it won't reproduce the error message, but it is somewhat slow):
require "capybara"
url = "http://www.hiv.lanl.gov/content/sequence/HIGHLIGHT/highlighter.html"
fasta_text = [">seq1", "gattaca" * 1000, ">seq2", "aattaca" * 1000].join("\n")
session = Capybara::Session.new(:selenium)
# Code similar to this was run several times
session.visit(url)
session.fill_in('sample', :with => fasta_text)
session.click_on('Submit')
And the error I was getting (with my real code, but not the code I have above) was
Warning: Unresponsive script
A script on this page may be busy, or it may have stopped responding.
You can stop the script now, open the script in the debugger, or let
the script continue.
Script: chrome://browser/content/tabbrowser.xml:2884
I wasn't running Capybara as part of a test or as part of a spec.
To confirm that my current setup has JavaScript enabled (which is what I want to disable), running
url = "http://www.isjavascriptenabled.com"
session = Capybara::Session.new(:selenium)
session.visit(url)
indicates that JavaScript is enabled.
Capybara only uses JavaScript if you've specified a javascript_driver:
Capybara.javascript_driver = :poltergeist
And if you've specified js: true as metadata in your spec:
context "this is a test", js: true do
Check for both of those things. If they're not there and the test is not running in a browser or using Poltergeist, then it's probably not using JavaScript.
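If the goal is to turn JavaScript off entirely for a real browser session, one option is a Selenium driver whose Firefox profile disables it. This is a sketch, not part of Capybara itself, and it assumes an older Firefox that still honours the javascript.enabled preference:

require 'capybara'
require 'selenium-webdriver'

Capybara.register_driver :selenium_no_js do |app|
  profile = Selenium::WebDriver::Firefox::Profile.new
  profile['javascript.enabled'] = false # disable JS for this profile
  Capybara::Selenium::Driver.new(app, :browser => :firefox, :profile => profile)
end

session = Capybara::Session.new(:selenium_no_js)
session.visit("http://www.isjavascriptenabled.com") # should now report JS disabled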
I'm using Ruby 1.9.3 and need to GET a URL. I have this working with Net::HTTP, however, if the site is down, Net::HTTP ends up hanging.
While searching the internet, I've seen many people face similar problems, all with hacky solutions; however, many of those posts are quite old.
Requirements:
I'd prefer using Net::HTTP to installing a new gem.
I need both the Body and the Response Code. (e.g. 200)
I do not want to require open-uri, since that makes global changes and raises some security issues.
I need to GET a URL within X seconds, or return error.
Using Ruby 1.9.3, how can I GET a URL while setting a timeout?
To clarify, my existing code looks like:
Net::HTTP.get_response(URI.parse(url))
Trying to add:
Net::HTTP.open_timeout(1000)
Results in:
NoMethodError: undefined method `open_timeout' for Net::HTTP:Class
You can set the open_timeout attribute of the Net::HTTP object before making the connection:

uri = URI.parse(url)
http = Net::HTTP.new(uri.hostname, uri.port)
http.open_timeout = 1000 # seconds
response = http.request_get(uri.request_uri)
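A minimal usage sketch that also surfaces the status code and body, and treats a slow connection as an error. On 1.9.3 a timed-out connect raises Timeout::Error (the more specific Net::OpenTimeout only arrived in Ruby 2.0):

require 'net/http'

uri = URI.parse(url)
http = Net::HTTP.new(uri.hostname, uri.port)
http.open_timeout = 5 # seconds to wait for the TCP connection
http.read_timeout = 5 # seconds to wait for each read

begin
  response = http.request_get(uri.request_uri)
  puts response.code # e.g. "200"
  puts response.body
rescue Timeout::Error
  warn "GET #{url} timed out"
end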
I tried all the solutions here and on the other questions about this problem, but I only got everything working with the following code. The open-uri library is a wrapper for Net::HTTP. I needed a GET that waits longer than the default timeout and then reads the response; the code is also simpler.
require 'open-uri'

open(url, :read_timeout => 5 * 60) do |response|
  if response.read[/Return: Ok/i]
    log "sending ok"
  else
    raise "error sending, no confirmation received"
  end
end
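If you also need the status code, the IO object open-uri yields responds to status as well as read (note that open-uri raises OpenURI::HTTPError for non-2xx responses, so the block only sees successful ones). A small sketch:

require 'open-uri'

open(url, :read_timeout => 5 * 60) do |response|
  code, message = response.status # e.g. ["200", "OK"]
  body = response.read
  puts "got #{code} #{message}, #{body.length} bytes"
end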
I'm not sure how to solve this big performance issue in my application. I'm using open-uri to request the most popular videos from YouTube, and when I ran perftools (https://github.com/tmm1/perftools.rb), it showed that the biggest performance issue is Timeout.timeout. Can anyone suggest how to solve this problem?
I'm using Ruby 1.8.7.
Edit:
This is the output from my profiler
https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B4bANr--YcONZDRlMmFhZjQtYzIyOS00YjZjLWFlMGUtMTQyNzU5ZmYzZTU4&hl=en_US
Timeout is wrapping the function that is actually doing the work to ensure that if the server fails to respond within a certain time, the code will raise an error and stop execution.
I suspect that what you are seeing is that the server is taking some time to respond. You should look at caching the response in some way.
For instance, using memcached (pseudocode)
require 'dalli'
require 'open-uri'
require 'date'

DALLI = Dalli::Client.new

class PopularVideos
  def self.get
    result = []
    unless result = DALLI.get("videos_#{Date.today.to_s}")
      doc = open("http://youtube/url")
      result = parse_videos(doc) # parse the doc somehow
      DALLI.set("videos_#{Date.today.to_s}", result)
    end
    result
  end
end
PopularVideos.get # calls your expensive parsing script once
PopularVideos.get # gets the result from memcached for the rest of the day
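A variant: if you would rather let memcached expire the entry than key it by date, Dalli's fetch takes an optional TTL in seconds and runs its block only on a cache miss (same assumed client and parse_videos helper as above):

ONE_DAY = 24 * 60 * 60

result = DALLI.fetch("popular_videos", ONE_DAY) do
  parse_videos(open("http://youtube/url")) # executed only when the key is absent
end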
I am trying to test selecting an option from a select tag (these options are fetched from a remote database server). During normal interaction with the website, it does not take more than a fraction of a second to populate this dropdown. However, when I run the following test,
When /^(?:|I )select "([^"]*)" from "([^"]*)" in search form$/ do |value, field|
  within "#select_container" do
    save_and_open_page
    page.should have_css("#criteria_div_code > option:nth-child(10)")
    select(value, :from => field)
  end
end
I get the following error,
expected css "#criteria_div_code > option:nth-child(10)" to return something (RSpec::Expectations::ExpectationNotMetError)
The dropdown is populated with at least 20 options and so I just test for the presence of the 10th option (for now).
save_and_open_page shows that only one option (the default option) exists instead of at least 10, hence the ExpectationNotMetError comes up.
I have set Capybara.default_wait_time = 30, which should be ample time for the lists to get populated.
Isn't capybara waiting for the ajax call to finish?
Am I missing something here?
You might want to check my response about setting timeouts for AJAX resynchronization in Using Capybara for AJAX integration tests. The resynchronization timeout defaults to 10 seconds; if your response does not return before that time, you will not get any response, especially if you have set :resynchronize to false in your configuration. Below is a snippet to set that timeout:
Capybara.register_driver :selenium do |app|
  Capybara::Selenium::Driver.new(app, :browser => :firefox, :resynchronization_timeout => 1000)
end
NOTE: if you previously set :resynchronize to false, you need to set this to true.
I guess you need to use a JS driver for AJAX testing:
describe 'some stuff which requires js', :js => true do
  it 'will use the default js driver'
  it 'will switch to one specific driver', :driver => :celerity
end
Also note the following line from the docs: Capybara can block and wait for Ajax requests to finish after you've interacted with the page. To enable this behaviour, set the :resynchronize driver option to true.
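If the app under test uses jQuery, another common approach is an explicit helper that polls jQuery's internal counter of in-flight requests before asserting. This is a sketch, not part of Capybara itself:

require 'timeout'

def wait_for_ajax(session, timeout = Capybara.default_wait_time)
  Timeout.timeout(timeout) do
    sleep 0.1 until session.evaluate_script('jQuery.active').zero?
  end
end

# Usage inside the step, before asserting on the options:
wait_for_ajax(page)
page.should have_css("#criteria_div_code > option:nth-child(10)")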
I would like to specify a base URL so I don't have to always specify absolute URLs. How can I specify a base URL for Mechanize to use?
To accomplish what the previous answer describes using Webrat, you can do the following, e.g. in your Cucumber env.rb:
require 'webrat'

Webrat.configure do |config|
  config.mode = :mechanize
end

World do
  session = Webrat::Session.new
  session.extend(Webrat::Methods)
  session.extend(Webrat::Matchers)
  session.visit 'http://yoursite/yourbasepath/'
  session
end
To make it more robust, such as for use in different environments, you could do:
ENV['CUCUMBER_HOST'] ||= 'yoursite'
ENV['CUCUMBER_BASE_PATH'] ||= '/yourbasepath/'

# Webrat
require 'webrat'

Webrat.configure do |config|
  config.mode = :mechanize
end

World do
  session = Webrat::Session.new
  session.extend(Webrat::Methods)
  session.extend(Webrat::Matchers)
  session.visit('http://' + ENV['CUCUMBER_HOST'] + ENV['CUCUMBER_BASE_PATH'])
  session
end
Note that if you're using Mechanize, Webrat will also fail to follow your redirects because it won't interpret the current host correctly. To work around this, you can add session.header('Host', ENV['CUCUMBER_HOST']) to the above.
To make sure the right paths are used everywhere for visiting and matching, prepend ENV['CUCUMBER_BASE_PATH'] + to the beginning of your path_to method in paths.rb, if you use it. It should look like this:

def path_to(page_name)
  ENV['CUCUMBER_BASE_PATH'] +
    case page_name
    # ... your existing when clauses ...
For Mechanize, relative paths are resolved against the page you last fetched, so after an initial absolute request you can keep using relative paths. For example:
require "rubygems"
require "mechanize"
agent = Mechanize.new
agent.get("http://some-site.org")
# Subsequent requests can now use the relative path:
agent.get("/contact.html")
This way you only specify the base URL once.
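If you prefer something more explicit than relying on the current page, you can keep the base in one place and join paths yourself. A sketch using the stdlib's URI.join:

require "mechanize"

BASE = "http://some-site.org"
agent = Mechanize.new

agent.get(URI.join(BASE, "/contact.html"))
agent.get(URI.join(BASE, "/about.html")) # hypothetical page, for illustration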