How to get the user agent for a headless browser - ruby

I am running my tests on a headless Chrome browser and need to get the user agent of the headless browser.
For a Chrome browser that is not headless, I use this code to get the user agent:
page.execute_script("navigator.userAgent") # => works as required
But for a headless browser this doesn't seem to work. Is there a way to get the userAgent?
PS: I use Ruby and Capybara in my framework.

Your issue is that you're using execute_script when you need to be using evaluate_script, because you want a return value. That said, your code shouldn't have worked without headless set either, so I'm not sure what version of Capybara you're running.
page.evaluate_script("navigator.userAgent")
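For context, here is a minimal sketch of how this fits together with a headless Chrome driver, assuming a recent Capybara and selenium-webdriver (the :headless_chrome driver name and the about:blank URL are arbitrary choices, not from the original post):
require "capybara"
require "selenium-webdriver"
# register a Selenium-backed driver that starts Chrome in headless mode
Capybara.register_driver :headless_chrome do |app|
  options = Selenium::WebDriver::Chrome::Options.new
  options.add_argument("--headless")
  Capybara::Selenium::Driver.new(app, browser: :chrome, options: options)
end
session = Capybara::Session.new(:headless_chrome)
session.visit("about:blank")
# evaluate_script returns the value of the expression; execute_script discards it
puts session.evaluate_script("navigator.userAgent")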

Related

RSpec, Appium: provide desired capabilities from CLI

I am using this official Appium page http://appium.io/docs/en/writing-running-appium/default-capabilities-arg/
and I want to override the capabilities provided in spec_helper.rb this way:
rspec --default-capabilities spec/test_data/new_caps.json spec/cli_test/cli_test_spec.rb
but it gives me the error invalid option: --default-capabilities.
The Appium server is already running in the background.
What am I doing wrong here?
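For what it's worth, the linked page documents --default-capabilities as a flag for the Appium server CLI, not an rspec option, which is why rspec rejects it. One possible workaround (a sketch only; the CAPS_FILE variable and fallback path are hypothetical names, not Appium or RSpec API) is to let spec_helper.rb pick the capabilities file from an environment variable:
require "json"
# read an alternate capabilities file if CAPS_FILE is set, else fall back to the default
caps_file = ENV.fetch("CAPS_FILE", "spec/test_data/default_caps.json")
CAPS = JSON.parse(File.read(caps_file))
# run as: CAPS_FILE=spec/test_data/new_caps.json rspec spec/cli_test/cli_test_spec.rb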

Chrome DevTools Protocol addScriptToEvaluateOnNewDocument using ruby chrome_remote

So I am trying to inject a script to run on any page using addScriptToEvaluateOnNewDocument on Chrome 79, but it doesn't seem to be working.
I am using the Ruby gem chrome_remote, which gives pretty basic access to the CDP.
Here is an example in Ruby:
scpt = <<EOF
window.THIS_WAS_SET = 1
EOF
ChromeRemote.client.send_cmd "Page.addScriptToEvaluateOnNewDocument", source: scpt
ChromeRemote.client.send_cmd "Page.navigate", url: "http://localhost:4567/test"
I then start Chrome with --remote-debugging-port=9222.
The Page.addScriptToEvaluateOnNewDocument will always return {"identifier"=>"1"} (even if I call it multiple times, say with different scripts).
And when I open console on the opened tab in Chrome (which works, so I know CDP in general is working), and check the value of window.THIS_WAS_SET, it is undefined.
Is there any way to verify the command was sent to the browser, such as a log in the browser that it was received? Any way to see which scripts were injected? Why does each call always return a script identifier of 1? That seems problematic.
Anyone have a similar example working?
You should call Page.enable first.
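A minimal sketch of that fix with chrome_remote, assuming Chrome was started with --remote-debugging-port=9222 (the gem's default host and port):
require "chrome_remote"
client = ChromeRemote.client  # keep one client so every command goes over the same connection
client.send_cmd "Page.enable" # enable the Page domain before using its commands
client.send_cmd "Page.addScriptToEvaluateOnNewDocument", source: "window.THIS_WAS_SET = 1"
client.send_cmd "Page.navigate", url: "http://localhost:4567/test"
Note that the original snippet calls ChromeRemote.client twice, which opens two separate connections; reusing a single client may matter here as well.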

Karma, Mocha, Chrome Headless and click testing on an url

I'm trying to use Karma & Chrome Headless to replace testing I would have done with CasperJS & PhantomJS: loading a public site and running end-to-end click testing against it.
The idea is that this would be an automated test against a canary server to ensure functionality like login, signup, etc. behaves as expected.
I've gotten simple test-loading examples up and running with Karma, Mocha, Chai, and Chrome Headless, but I can't figure out how to navigate to a URL from the test and then check/click on DOM elements.
I haven't found any useful documentation or examples to point me in the right direction.

How to make Selenium WebDriver in Ruby remember SSL Certificate exceptions in Firefox?

So I'm trying to write a suite of tests using Selenium WebDriver in Ruby for our web application, but I can't even get into the application because of SSL certificate issues in Firefox. Our application is deployed on a local server, and uses a self-signed SSL Certificate for testing/development. When you're simply using the browser manually, you can tell Firefox to set a security exception, and store it permanently, which works fine. This isn't really a possibility using Selenium. First off, the tests fail before I would be able to set the permanent exception. Secondly, the moment I set the exception, Selenium forgets it and displays the screen again.
I've already tried creating a custom profile with firefox -p and adding the exception in that profile and loading it up via Selenium, but Selenium doesn't seem to respect that exception. I also tried setting various profile parameters to get it to ignore or accept the certificate, but Selenium appears to ignore those profile parameters as well. Finally, I made Selenium add an extension that skips the invalid certificate screen, but it still doesn't work. Here's my code:
require 'rubygems'
require 'selenium-webdriver'
profile = Selenium::WebDriver::Firefox::Profile.from_name "Selenium"
profile.add_extension("./skip_cert_error-0.3.2-fx.xpi")
profile["browser.xul.error_pages"] = false     # preference values should be booleans/integers, not strings
profile["browser.ssl_override_behavior"] = 1
driver = Selenium::WebDriver.for(:firefox, :profile => profile)
I figured it out. The trick was to download the site's certificate and save it somewhere, then go back into the Firefox settings (with the Selenium profile loaded), manually import the certificate, and "Edit Trust" to trust it.
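Once the certificate has been imported and trusted in that profile, loading the profile by name should be all that's needed; a sketch assuming the profile is still named "Selenium":
profile = Selenium::WebDriver::Firefox::Profile.from_name "Selenium"  # profile with the cert already trusted
driver = Selenium::WebDriver.for :firefox, :profile => profile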
There's another way to skin this cat. I am using a Cucumber/Ruby/Selenium framework, and the traditional profile adjustments for skipping bad certs did not work for me either. What I ended up doing was creating a new Firefox profile inside the Ruby code and setting one attribute on it (assume_untrusted_certificate_issuer), then passing the profile along to the browser/driver instance. Check it out:
profile = Selenium::WebDriver::Firefox::Profile.new
profile.assume_untrusted_certificate_issuer = false  # don't assume the cert comes from an untrusted issuer
browser = Selenium::WebDriver.for :firefox, :profile => profile
This all lives in my env.rb file.
Versions in play here:
Windows 7 Pro
Ruby 1.9.3
selenium-webdriver gem 2.19
Firefox 14.0.1
Pretty sweet, huh?

HTTP Basic Auth and Proxy for selenium-webdriver (ruby bindings)

I'm attempting to use the selenium-webdriver Ruby bindings to access an internal web site that requires a proxy to be configured, and HTTP Basic Auth.
I currently have:
require "selenium-webdriver"
driver = Selenium::WebDriver.for :firefox
driver.navigate.to "http://my-internal-site.com"
But this fails due to both the proxy and HTTP auth issues. If I add my username and password to the URL (i.e. http://username:password@site.com) I can do basic authentication on another site that doesn't require the proxy, but this doesn't seem like an ideal solution.
Any suggestions?
Unfortunately, http://username:password@site.com has been the standard way of doing this, but more and more browsers are blocking the approach. Patrick Lightbody of BrowserMob discussed on the company blog how they get it to work.
Until there is full support for this across browsers for WebDriver (or Selenium), an alternative option is to integrate with desktop GUI automation tools, where the desktop GUI tool automates the HTTP authentication part. You can probably find some examples of this, or of file downloads and uploads, if you google for things like "Selenium AutoIt", etc.
For a cross platform solution, replace AutoIt with Sikuli or something similar.
I tried the approach with AutoIt and it worked fine until Selenium 2.18.0, because they implemented UnhandledAlertException, which is thrown as soon as the proxy login dialog pops up.
If you try to catch it, you end up with a null driver; you would need to loop the attempt to create a driver and trust your AutoIt script to kill the window.
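As for the proxy part of the original question, the Ruby bindings let you attach a Selenium::WebDriver::Proxy to the Firefox profile. A minimal sketch (proxy.example.com:8080 is a hypothetical address, not from the original post):
require "selenium-webdriver"
# point HTTP and HTTPS traffic at the proxy
proxy = Selenium::WebDriver::Proxy.new(http: "proxy.example.com:8080", ssl: "proxy.example.com:8080")
profile = Selenium::WebDriver::Firefox::Profile.new
profile.proxy = proxy
driver = Selenium::WebDriver.for :firefox, :profile => profile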
If you're using Google Chrome, try creating a custom extension and importing it through ChromeOptions. It supports HTTP(S), which wasn't supported by browsermob_proxy in Chrome. For testing redirects, this is the only way that will help you as of now.
For details, check this post:
https://devopsqa.wordpress.com/2018/08/05/handle-basic-authentication-in-selenium-for-chrome-browser/
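Loading the extension itself is straightforward; a sketch assuming a packed extension at ./auth_extension.crx (a hypothetical filename), built as the linked post describes:
require "selenium-webdriver"
options = Selenium::WebDriver::Chrome::Options.new
options.add_extension("./auth_extension.crx")  # extension that answers the Basic Auth prompt
driver = Selenium::WebDriver.for :chrome, options: options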
