I have a page that sometimes takes over a minute to load. Assume this is the expected behavior and won't change. In these cases, I get Net::ReadTimeout.
Note that this happens after navigating to a page by clicking a button on the previous page, not during an AJAX request, so Capybara.using_wait_time doesn't help.
I have tried a number of radical things (some of which I knew wouldn't work) like:
Setting page.driver.browser.manage.timeouts's implicit_wait, script_timeout and page_load.
Looping through the entire object space and setting every Selenium::WebDriver::Remote::Http::Default's timeout value.
Looping through the entire object space and setting every Net::HTTP's read_timeout.
page.driver.browser.send(:bridge).http.instance_variable_get(:@http).read_timeout=
None seem to work. This should be trivial, yet I couldn't find a way to do it.
If you know of a webdriver-agnostic solution, that would be great. If not, I am using Selenium.
Selenium has a lot of different timeout settings, some of which can be changed at runtime, while others have to be set when the driver is initialized. You are most likely running into the Http::Default timeout, which defaults to 60 seconds. You can override this by passing your own instance into the Selenium driver as http_client:
Capybara.register_driver :slow_selenium do |app|
  client = Selenium::WebDriver::Remote::Http::Default.new
  client.timeout = 120
  Capybara::Selenium::Driver.new(app, http_client: client)
end
and then use the :slow_selenium driver for tests where the page will take over a minute to load.
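To see the underlying error in isolation, here is a small self-contained sketch (no browser needed) that reproduces the same Net::ReadTimeout the Selenium HTTP client raises, using a deliberately unresponsive local server. The port, delay, and timeout values are arbitrary choices for the demonstration:

```ruby
require "net/http"
require "socket"

# A local server that accepts the connection but never sends a response,
# standing in for a page that takes longer than the client's read timeout.
server = TCPServer.new("127.0.0.1", 0)
port = server.addr[1]
slow = Thread.new do
  client = server.accept
  sleep 2            # longer than the client's read_timeout below
  client.close
end

http = Net::HTTP.new("127.0.0.1", port)
http.read_timeout = 0.5 # seconds; Selenium's Http::Default defaults to 60

begin
  http.get("/")
  result = :no_timeout
rescue Net::ReadTimeout
  result = :timed_out
end
slow.kill
puts result  #=> timed_out
```

Raising client.timeout in the registered driver moves exactly this read deadline.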
Related
I'm writing a script that registers users for me, but the website loads a lot of junk (like statistics URLs that have to load), so the script is really slow because it waits for the site to fully load even when all the elements it needs are already present. Is it possible to disable this wait time? It would make my script about 10 seconds faster.
The waiting for the page to load is controlled by the page load strategy. By default, it is set to "normal", which waits for the document readiness state to be "complete". You can set the strategy to "none" to remove the waiting. Some of the browsers/drivers also support an "eager" strategy that waits for the browser to be in the "interactive" state.
require 'webdrivers'
require 'watir'
browser = Watir::Browser.new :chrome, page_load_strategy: 'none'
browser.goto 'www.google.com'
p browser.ready_state
#=> "loading"
See https://w3c.github.io/webdriver/#navigation for more details.
My problem is that in a jMeter test I see the load time for a web page element at around 200 milliseconds, while when browsing I usually get 3 or 4 seconds, with a response size of around 331,000 bytes.
I must mention that I cleared the cache and cookies for each iteration and also inserted a constant timer between the steps.
Searching for an id is the actual case described previously.
var pkg = JavaImporter(org.openqa.selenium);
var wait_ui = JavaImporter(org.openqa.selenium.support.ui.WebDriverWait);
var wait = new wait_ui.WebDriverWait(WDS.browser, 5); // timeout is in seconds, not milliseconds

WDS.sampleResult.sampleStart();
var searchBox = WDS.browser.findElement(pkg.By.id("140:0;p"));
searchBox.click();
searchBox.sendKeys("1053606032");
searchBox.sendKeys(org.openqa.selenium.Keys.ENTER);
WDS.sampleResult.sampleEnd();
I expected to see similar load times. Maybe an option would be to wait until some elements on the search results page are visible, but I cannot explain this difference. I had another case where the page loads in 10 seconds in Chrome but shows 300 milliseconds in the jMeter test results.
Please try waiting until a specific element that loads at roughly the same time the page finishes loading.
Below is another attempt at the same. Use the code below and check if it helps:
WDS.sampleResult.sampleStart()
WDS.browser.get('http://jmeter-plugins.org')
// poll until the document reports it has finished loading;
// a bare executeScript call would return the readyState but not wait on it
while (!WDS.browser.executeScript("return document.readyState").toString().equals("complete")) {
    java.lang.Thread.sleep(100)
}
WDS.sampleResult.sampleEnd()
For me, without executeScript the page loads in 3 seconds; with executeScript it loads in 7 seconds, while in the browser it loads in around 7.57 seconds.
Hope this helps.
I have rspec tests and I feel there is a bug with puffing-billy, or maybe I am misunderstanding the use of whitelists in the settings.
Basically the test checks that, if a third-party hosted image (on a service called Cloudinary) takes time to be downloaded by a page, the user sees a loading spinner, and after the image is finally loaded the spinner disappears and is no longer visible.
I'm sure billy is the root cause:
If I don't use puffing-billy (by removing billy: true) to stub a 10-second response delay for the third-party image, then, as you might guess, I see the image too soon (verified with save_and_open_page) and my test can't be implemented.
If I do use billy (billy: true), then the image NEVER appears (checked with save_and_open_page) and the spinner keeps turning because the stubbed image never shows up. It's as if it were "blocked" by puffing-billy, which also makes the test impossible :(
Puffing billy settings
Capybara.configure do |config|
  # we must force the value of the capybara server port. We have to do that because puffing-billy
  # saves everything locally for a specific host and port. But capybara starts your rack application
  # on a new port each time you run your tests. The consequence is that puffing-billy
  # is not able to reuse the created cache files and tries to do all the API calls again.
  # source - kevin.disneur.me/archives/2015-03-05-end-to-end-tests-in-javascript.html
  config.server_port = 60001
  # source: comments in coderwall.com/p/jsutlq; enables save_and_open_page to load with css and js
  config.asset_host = 'http://localhost:3000'
end
require 'billy/capybara/rspec'
Billy.configure do |c|
  c.cache = true
  c.cache_request_headers = false
  c.persist_cache = true
  c.non_successful_cache_disabled = true
  c.non_successful_error_level = :warn
  c.whitelist = ['localhost', '127.0.0.1', 'https://res.cloudinary.com']
  c.ignore_cache_port = true
  c.cache_path = "spec/support/http_cache/billy/"
  # Only set non_whitelisted_requests_disabled **temporarily**
  # to false when first recording a 3rd party interaction. After
  # the recording has been stored to cache_path, then set
  # non_whitelisted_requests_disabled back to true.
  c.non_whitelisted_requests_disabled = true
end
The rspec test:
context "non signed in visitor", js: true, billy: true do
  describe "Spinner shows while waiting then disappears" do
    it "should work" do
      proxy.stub("https://res.cloudinary.com/demo/image/upload/sample.jpg").and_return(
        Proc.new { |params, headers, body|
          sleep 10
          {code: 200}
        }
      )
      visit actual_deal_page_path(deal)
      # detect spinner
      expect(page).to have_css('div#fullPageLoadingSpinner', visible: :visible)
      # check subsequent image elements not yet visible
      expect(page).to have_no_css("img[src*='https://res.cloudinary.com/demo/image/upload/sample.jpg']")
      # then after 15 seconds, the cloudinary image finally is loaded and
      # the spinner disappears
      sleep 15
      expect(page).to have_css('div#fullPageLoadingSpinner', visible: :hidden)
      expect(page).to have_css("img[src*='https://res.cloudinary.com/demo/image/upload/sample.jpg']")
    end
  end
end
The image in the view
<section id="introImg">
  <img src="https://res.cloudinary.com/demo/image/upload/sample.jpg" class="cld-responsive deal-page-bckdImgCover">
</section>
I have tried different variations of the settings, and also tried changing https://res.cloudinary to res.cloudinary.com... nothing works.
Finally, and I think it matters, I keep seeing this in my test logs:
puffing-billy: CACHE KEY for 'https://res.cloudinary.com:443/demo/image/upload/sample.jpg' is 'get_res.cloudinary.com_2c4fefdac8978387ee341535c534e21e2588ed76'
puffing-billy: Connection to https://res.cloudinary.com:443/demo/image/upload/sample.jpg not cached and new http connections are disabled
puffing-billy: CACHE KEY for 'https://res.cloudinary.com:443/demo/image/upload/sample.jpg' is 'get_res.cloudinary.com_2c4fefdac8978387ee341535c534e21e2588ed76'
puffing-billy: Connection to https://res.cloudinary.com:443/demo/image/upload/sample.jpg not cached and new http connections are disabled
puffing-billy: CACHE KEY for 'https://res.cloudinary.com:443/demo/image/upload/sample.jpg' is 'get_res.cloudinary.com_2c4fefdac8978387ee341535c534e21e2588ed76'
puffing-billy: Connection to https://res.cloudinary.com:443/demo/image/upload/sample.jpg not cached and new http connections are disabled
puffing-billy: CACHE KEY for 'https://res.cloudinary.com:443/demo/image/upload/sample.jpg' is 'get_res.cloudinary.com_2c4fefdac8978387ee341535c534e21e2588ed76'
puffing-billy: Connection to https://res.cloudinary.com:443/demo/image/upload/sample.jpg not cached and new http connections are disabled
In these test logs, first I don't quite understand why there are so many lines for the same resource, and second, what the message "not cached and new http connections are disabled" means. I looked at other tickets with similar-sounding issues, such as #104 or #179, but my bug might be different...
I wrote this script a couple months ago and it has been working great. However, I had an issue last night where the page never loads and just spins forever, so my script timed out. I need to add code to test whether the page fully loads within 30 seconds and, if not, exit with a status of 2 and a proper message. Here is the code I have:
#-------------------------------------------------------------#
# Watir Login to XXXX SSO Site
# Written 2017-09-17 for ITC by Jim Clark
#-------------------------------------------------------------#
# the Watir controller
require "watir"
# set a variable
#test_site = "https://oracle.pomeroy.com"
test_site = "http://xxx.xxx"
# open a browser
browser = Watir::Browser.new :phantomjs
#puts "Beginning of test: XXXX SSO Login."
#puts " Step 1: go to the test site: " + test_site
browser.goto test_site
# Test if page fully loaded after 30 seconds, if not, exit with proper status and message
# validate login site loads
if browser.text.include? "Username"
  #puts " Step 2: enter username and password text fields."
  browser.text_field(:name, "username").set "xxxx"
  browser.text_field(:name, "password").set "xxxx"
  #puts " Step 3: click login"
  browser.button(:text, "Login").click
  if browser.text.include? "Logged In As XXXX"
    puts "OK: Test Passed. Login worked!"
    status = 0
  else
    puts "CRITICAL: Open a SEV1 - Test login failed!"
    status = 2
  end
else
  puts "CRITICAL: Open a SEV1 - Could not find login page!"
  status = 2
end
browser.close
exit status
I have just spent the last 3 hours searching and reading, tried about 20 different things, and nothing seemed to work.
The wait methods have a default timeout, settable like this: Watir.default_timeout = 30. However, it appears there is no such setting in Watir itself for the goto method. Since goto just calls webdriver's navigate.to method, and there IS a timeout settable in webdriver for that, you can set it via Watir's .driver method before trying to goto the page. You should be able to set it as follows:
b.driver.manage.timeouts.page_load = 30
b.goto test_site
In your case, since it's the initial load that is failing, this ought to work for you; and if you don't like the default error message, you could rescue and provide your own.
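A sketch of that rescue-and-exit pattern, in the style of the monitoring script from the question. The local FakeTimeoutError class and goto_with_deadline helper are stand-ins for the real browser call and its timeout error (Selenium::WebDriver::Error::TimeoutError in current selenium-webdriver), so the control flow can be run without a browser:

```ruby
# Stand-in for the error the real page-load timeout would raise.
class FakeTimeoutError < StandardError; end

# Stand-in for browser.goto with page_load = 30 set: here it always
# simulates a page that never finishes loading within the deadline.
def goto_with_deadline(site)
  raise FakeTimeoutError, "#{site} did not load within 30 seconds"
end

status = 0
begin
  goto_with_deadline("http://xxx.xxx")
rescue FakeTimeoutError => e
  puts "CRITICAL: Open a SEV1 - #{e.message}!"
  status = 2
end
puts status  #=> 2
```

In the real script, the begin/rescue would wrap browser.goto test_site and the rescue would name Selenium's timeout error class instead.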
In most cases I've seen, however, the delays are with REST services that are slow or overloaded, so setting that prior to your goto will most often not be sufficient if the delays are in subsequent AJAX calls. What goto checks is the browser status, which will be 'ready' once the initial page content has been loaded; often that happens quickly, since these days that content is usually static, cached, and rapidly served by the webserver. Basically, that is the time for loading a 'web 1.0' page.
Typically, modern 'web 2.0' pages have a LOT of scripts and AJAX/REST-style interactions that are usually necessary for all the page content to appear. It's not uncommon for the browser to report 'ready' while the page is mostly blank, as the remaining contents are pulled in via REST or GraphQL requests. Unfortunately, there is no standard way to tell whether the browser is busy making further requests due to scripts.
The most standard approach is to wait a given amount of time for a specific condition to become true, usually for something to appear on the page that is among the last things loaded/rendered. The wait methods take timeout parameters, so you could do something along these lines:
b.goto 'www.mysite.com'
Watir::Wait.until(timeout: 30) { b.text_field(name: 'username').exists? }
The Watir::Wait.until method (rubydocs here) also accepts a custom message if you want to provide one to make it easier to know what timed out when it fails.
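To make the timeout and message parameters concrete, here is a bare-bones, runnable version of the wait-until pattern that Watir::Wait implements. This is a simplified sketch, not Watir's actual code; the interval and the simulated "loaded" condition are illustrative choices:

```ruby
require "timeout"

# Poll a block until it returns truthy, or raise with a custom message
# once the deadline passes -- the essence of Watir::Wait.until.
def wait_until(timeout: 30, interval: 0.1, message: "condition not met")
  deadline = Time.now + timeout
  until yield
    raise Timeout::Error, message if Time.now > deadline
    sleep interval
  end
  true
end

# Simulate an element that only "appears" 0.3 seconds from now.
loaded_at = Time.now + 0.3
puts wait_until(timeout: 2) { Time.now >= loaded_at }  #=> true
```

With a real browser, the block would hold the element check (for example, `b.text_field(name: 'username').exists?`), exactly as in the snippet above.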
I need to find the time an element takes to load and become accessible. As I have so many explicit sleeps in my script, I'm trying to reduce them by using this metric.
Try setting an implicit wait so that the webdriver polls the DOM until the element appears:
driver = Selenium::WebDriver.for :firefox
driver.manage.timeouts.implicit_wait = 10 # seconds
http://www.seleniumhq.org/docs/04_webdriver_advanced.jsp
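If the goal is specifically to measure how long the element takes to appear (rather than just wait for it), a small polling timer does the job. This is a browser-free sketch: the "element appears" condition is simulated with a clock check, and with Selenium you would instead yield something like `driver.find_elements(id: 'foo').any?` inside the block:

```ruby
# Poll a block and return the elapsed seconds until it first returns truthy.
def measure_until(interval: 0.05)
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  sleep interval until yield
  Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
end

# Simulate an element that becomes available after roughly 0.2 seconds.
ready_at = Process.clock_gettime(Process::CLOCK_MONOTONIC) + 0.2
elapsed = measure_until { Process.clock_gettime(Process::CLOCK_MONOTONIC) >= ready_at }
puts format("element appeared after %.2fs", elapsed)
```

Recording this value per element gives the metric to replace the fixed sleeps with.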