I'm attempting to use the selenium-webdriver [ruby bindings][1] to access an internal website that requires a proxy to be configured and HTTP Basic Auth.
I currently have:
require "selenium-webdriver"
driver = Selenium::WebDriver.for :firefox
driver.navigate.to "http://my-internal-site.com"
But this fails due to both the proxy and HTTP auth issues. If I add my username and password to the URL (e.g. http://username:password@site.com) I can do basic authentication against another site that doesn't require the proxy, but this doesn't seem like an ideal solution.
Any suggestions?
Unfortunately, http://username:password@site.com has been the standard way of doing this, but more and more browsers are blocking the approach. Patrick Lightbody of BrowserMob discussed on the company blog how they get it to work.
Until there is full support for this across browsers in WebDriver (or Selenium), an alternative is to integrate with desktop GUI automation tools and let the desktop GUI tool automate the HTTP authentication part. You can probably find some examples of this (and of file downloads and uploads) if you Google for things like "Selenium AutoIt", etc.
For a cross-platform solution, replace AutoIt with Sikuli or something similar. A rough sketch of the AutoIt approach is shown below.
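For illustration, a minimal Ruby sketch, assuming a compiled AutoIt script (the auth_handler.exe name is hypothetical) that waits for the credentials dialog and types the username/password into it:

require "selenium-webdriver"

# Launch the (hypothetical) AutoIt helper before triggering the auth dialog.
autoit_pid = spawn("auth_handler.exe")

driver = Selenium::WebDriver.for :firefox
driver.navigate.to "http://my-internal-site.com" # AutoIt dismisses the prompt

Process.wait(autoit_pid)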
I tried the approach with AutoIt and it worked fine until Selenium 2.18.0, when they implemented UnhandledAlertException, which is thrown as soon as the proxy login dialog pops up.
If you try to catch it, you end up with a nil driver, so you would need to loop the attempt to create a driver and trust your AutoIt script to kill the window.
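A hedged sketch of that retry loop; the error class name (Selenium::WebDriver::Error::UnhandledAlertError in the Ruby bindings of that era) may differ in your version:

require "selenium-webdriver"

driver = nil
attempts = 0
begin
  driver = Selenium::WebDriver.for :firefox
rescue Selenium::WebDriver::Error::UnhandledAlertError
  # Trust the AutoIt script to kill the login window, then try again.
  attempts += 1
  retry if attempts < 5
  raise
end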
If you're using Google Chrome, try creating a custom extension and loading it through ChromeOptions. This handles HTTP(S) authentication, which browsermob_proxy doesn't support in Chrome. If you're testing redirects, this is the only way that will help you as of now.
For details, check this post
https://devopsqa.wordpress.com/2018/08/05/handle-basic-authentication-in-selenium-for-chrome-browser/
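In the Ruby bindings, that would look roughly like this (proxy_auth.crx is a placeholder name for the packed extension the post describes):

require "selenium-webdriver"

options = Selenium::WebDriver::Chrome::Options.new
options.add_extension("./proxy_auth.crx") # extension that answers the auth challenge
driver = Selenium::WebDriver.for :chrome, options: options
driver.navigate.to "http://my-internal-site.com"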
I am running my tests on a headless Chrome browser and need to get the user agent of the headless browser.
For a Chrome browser that is not headless, I use this code to get the user agent:
page.execute_script("navigator.userAgent") # => works as required
But for a headless browser this doesn't seem to work. Is there a way to get the userAgent?
PS: I use Ruby and Capybara in my framework.
Your issue is that you're using execute_script when you need to be using evaluate_script, because you want a return value. That being said, your code shouldn't have worked without headless set either, so I'm not sure what version of Capybara you're running.
page.evaluate_script("navigator.userAgent")
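For completeness, a minimal sketch of registering a headless Chrome driver and reading the UA; this assumes Capybara's Selenium driver, and the driver name and arguments are illustrative:

Capybara.register_driver :headless_chrome do |app|
  options = Selenium::WebDriver::Chrome::Options.new(args: %w[--headless])
  Capybara::Selenium::Driver.new(app, browser: :chrome, options: options)
end
Capybara.default_driver = :headless_chrome

page.evaluate_script("navigator.userAgent") # => "Mozilla/5.0 ... HeadlessChrome/..."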
Is there a universal way to detect when a selenium browser opens an error page? For example, disable your internet connection and do
driver.get("http://google.com")
In Firefox, Selenium will load the 'Try Again' error page containing text like "Firefox can't establish a connection to the server at www.google.com." Selenium will NOT throw any errors.
Is there a browser-independent way to detect these cases? For Firefox (Python), I can do
if "errorPageContainer" in [ elem.get_attribute("id") for elem in driver.find_elements_by_css_selector("body > div") ]
But (1) this seems like computational overkill (see next point below) and (2) I must create custom code for every browser.
If you disable your internet and use HtmlUnit as the browser, you will get a page with the following HTML:
<html>
<head></head>
<body>Unknown host</body>
</html>
How can I detect this without doing
if driver.find_element_by_css_selector("body").text == "Unknown host"
It seems like this would be very expensive to check on every single page load since there would usually be a ton of text in the body.
Bonus points if you also know of a way to detect the type of load problem, for example no internet connection, unreachable host, etc.
The WebDriver API does not expose HTTP status codes, so if you want to detect or manage HTTP errors, you should use a debugging proxy.
See Jim's excellent post Implementing WebDriver HTTP Status on how to do exactly that.
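As a rough sketch of the idea in Ruby, using the browsermob-proxy gem (the proxy binary path is a placeholder; API details may differ in your setup):

require "selenium-webdriver"
require "browsermob/proxy"

server = BrowserMob::Proxy::Server.new("/path/to/browsermob-proxy") # adjust path
server.start
proxy = server.create_proxy

profile = Selenium::WebDriver::Firefox::Profile.new
profile.proxy = proxy.selenium_proxy

driver = Selenium::WebDriver.for :firefox, :profile => profile

proxy.new_har "google"
driver.get "http://google.com"

# Each HAR entry carries the HTTP status of one request/response pair.
puts proxy.har.entries.first.response.status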
If you just need to remote-control the Tor Browser, you might also consider the Marionette framework by Mozilla. Bonus: it fails when a page cannot be loaded (see navigate(url) in the API):
The command will return with a failure if there is an error loading
the document or the URL is blocked. This can occur if it fails to
reach the host, the URL is malformed, the page is restricted (about:*
pages), or if there is a certificate issue to name some examples.
Example use (copied from another answer):
To use with the Tor Browser, enable marionette at startup via
Browser/firefox -marionette
(inside the bundle). Then, you can connect via
from marionette import Marionette
client = Marionette('localhost', port=2828)
client.start_session()
and load a new page for example via
url='http://mozilla.org'
client.navigate(url)
For more examples, there is a tutorial.
I go through the intranet/internet using proxy auth.
I'm not familiar with automation through a proxy (or proxies). In Internet Explorer we set up the proxy in LAN settings under "Use automatic configuration script" with something like:
http://some-url/url/file.proxy
We uncheck "Automatically detect settings" and don't set anything in the "Proxy Server" section.
That's how we get "out" (internet/intranet). I have a username/password, so every time I open a new IE instance I get a prompt for them. How should I set these values in PhantomJS to get access to the network/internet? I just can't make it work; every time I try to get a screenshot of any page, I get a screenshot of the proxy auth page instead.
I've tried setting the full script-proxy URL in the proxy property, along with the username/password, but it didn't work. I hope someone can provide an example for my understanding. I'd also appreciate some resources/good-to-read articles.
Got it.
I took a look at the script I mentioned in my question and got the proxy (ip:port) needed for PhantomJS.
Basically, the script makes some decisions about which proxy to use based on the requested URL and returns the proxy IP.
The PhantomJS docs are pretty straightforward; I just wasn't understanding how my proxy was set (by script). If that's your case too, you can copy/paste the script URL into your browser, analyze it, and retrieve the information you need to set up PhantomJS.
The code in the proxy script is fairly easy to read (if you have any programming experience).
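For reference, an untested sketch of passing such a proxy to PhantomJS through GhostDriver's phantomjs.cli.args capability; the ip:port and credentials are placeholders for whatever your PAC script returns:

require "selenium-webdriver"

caps = Selenium::WebDriver::Remote::Capabilities.phantomjs(
  "phantomjs.cli.args" => ["--proxy=10.0.0.1:8080", "--proxy-auth=username:password"]
)
driver = Selenium::WebDriver.for :phantomjs, :desired_capabilities => caps
driver.navigate.to "http://my-internal-site.com"
driver.save_screenshot("page.png") # should now show the page, not the auth error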
So I'm trying to write a suite of tests using Selenium WebDriver in Ruby for our web application, but I can't even get into the application because of SSL certificate issues in Firefox. Our application is deployed on a local server, and uses a self-signed SSL Certificate for testing/development. When you're simply using the browser manually, you can tell Firefox to set a security exception, and store it permanently, which works fine. This isn't really a possibility using Selenium. First off, the tests fail before I would be able to set the permanent exception. Secondly, the moment I set the exception, Selenium forgets it and displays the screen again.
I've already tried creating a custom profile with firefox -p and adding the exception in that profile and loading it up via Selenium, but Selenium doesn't seem to respect that exception. I also tried setting various profile parameters to get it to ignore or accept the certificate, but Selenium appears to ignore those profile parameters as well. Finally, I made Selenium add an extension that skips the invalid certificate screen, but it still doesn't work. Here's my code:
require 'rubygems'
require 'selenium-webdriver'
profile = Selenium::WebDriver::Firefox::Profile.from_name "Selenium"
profile.add_extension("./skip_cert_error-0.3.2-fx.xpi")
profile["browser.xul.error_pages"] = false
profile["browser.ssl_override_behavior"] = 1
driver = Selenium::WebDriver.for(:firefox, :profile => profile)
I figured it out. The trick was to download the site's certificate and save it somewhere, then go back into the Firefox settings (with the Selenium profile loaded), manually import the certificate, and use "Edit Trust" to trust it.
There's another way to skin this cat. I am using a Cucumber/Ruby/Selenium framework, and the traditional profile adjustments for skipping bad certs did not work for me either. What I ended up doing was creating a new FF profile inside the Ruby code and setting one member variable on it (assume_untrusted_certificate_issuer), then passing the profile along to the browser/driver instance. Check it out:
profile = Selenium::WebDriver::Firefox::Profile.new
profile.assume_untrusted_certificate_issuer = false
browser = Selenium::WebDriver.for :firefox, :profile => profile
This all lives in my env.rb file.
Versions in play here:
Windows 7 Pro
Ruby 1.9.3
Selenium WebDriver gem 2.19
Firefox 14.0.1
Pretty sweet, huh?
I want to change Firefox's settings to allow it to make cross-domain Ajax calls, since Firefox's security model blocks such calls by default (same-domain calls are allowed). I have code, given below, that works fine in Safari, but Firefox doesn't display the results: because the code runs from my local machine, the call to the csce server is blocked and returns an error. I know it would start working if I uploaded the code to the csce server, but I want to run it from my machine. Can anyone help me resolve this? I have spent the past couple of days searching for a solution.
Kindly suggest how to achieve this, or should I go with an older version of Firefox?
I googled and set the browser parameters in the config file as specified on this site, but it still doesn't work:
http://code.google.com/p/httpfox/issues/detail?id=20
Maybe you could use Privoxy and tell it to inject something like "Access-Control-Allow-Origin: *" into the server response.
To do this, you would have to go into the file user.filter (create it if it doesn't exist) in Privoxy's configuration directory and insert something like this:
SERVER-HEADER-FILTER: allow-crossdomain
s|Server: .*|Access-Control-Allow-Origin: *|
Instead of Server, you can also use any other header that's always present and you don't need.
And this into user.action:
{+server-header-filter{allow-crossdomain}}
csce.unl.edu
Note: I didn't test it.
https://developer.mozilla.org/En/HTTP_access_control
http://config.privoxy.org/user-manual/
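To check whether the filter fires, you could fetch the page through Privoxy (which listens on 127.0.0.1:8118 by default, assuming a stock configuration) and inspect the response headers, e.g. in Ruby:

require "net/http"

# Request csce.unl.edu through the local Privoxy instance and dump the headers;
# the injected Access-Control-Allow-Origin header should show up here.
res = Net::HTTP.start("csce.unl.edu", 80, "127.0.0.1", 8118) { |http| http.get("/") }
res.each_header { |name, value| puts "#{name}: #{value}" }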
This appears to enable cross-domain requests from file:// pages in Firefox 4, although it prompts you, so it might not be suitable for more than simple test pages:
netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect");