My problem is that when I open the website in Chrome on my computer, everything works fine. But when I use Cypress cy.visit() to reach this endpoint in the Cypress test runner, the data does not appear.
In the runner I always get a null response and have no idea why.
I've tried waiting for the page to load, but it doesn't work. I click the module I'm interested in, which sends POST and GET requests to the endpoint; nothing works, and the table in the test runner still opens empty.
I'm trying to create a program that automates fetching data from one of the Google services, using Chrome and Watir (which is basically a Ruby library built on top of Selenium). Everything works fine as long as I keep my browser window open. But when I minimize the window, my program is not even able to get past the login process, since it cannot find certain elements. This is my code to log in:
# keep Chrome open after the script finishes
@browser = Watir::Browser.new :chrome, options: { detach: true }
@browser.goto BASE_URL
# fill in the e-mail, advance, then fill in the password
@browser.text_field(name: 'identifier').set USER_EMAIL
@browser.element(xpath: '//*[@id="identifierNext"]').click
@browser.text_field(xpath: '//input[@type="password"]').set USER_PASSWORD
@browser.element(xpath: '//*[@id="passwordNext"]/div/button/div[2]').click
When my browser is minimized, during the attempt to set the password I get this error message:
*** Watir::Exception::UnknownObjectException Exception: element located, but timed out after 30 seconds, waiting for
#<Watir::TextField: located: true; {:xpath=>"//input[@type='password']", :tag_name=>"input"}> to be present
And it works just fine with the window open. Even if I maximize the window partway through the process, the program is suddenly able to locate the missing input fields. The same story repeats at many other points further on: the program is not able to locate some elements unless the Chrome window is visible.
Needless to say, it works even worse in headless mode, where I'm basically not able to locate any of those elements in the HTML.
As far as I understand, the frontend of Google services is built with the Angular framework, which injects HTML dynamically. But shouldn't Selenium act like a regular user and trigger the same responses in a minimized window as in an open one (and in headless mode as well)?
Is this some kind of blockade from Google to prevent this kind of automated process, and how can I bypass it?
Is this an issue with Chrome, and would switching to e.g. Firefox fix it?
Can I implement some additional actions to actually mimic human interaction and pretend that my Chrome window is open?
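For what it's worth regarding the headless part: headless Chrome starts with a small default viewport, and responsive layouts can hide or re-render form fields at small sizes, so forcing a desktop-sized window plus explicit waits sometimes helps. A minimal sketch, assuming the same Watir setup as above (BASE_URL and USER_EMAIL come from the question; this does not address any bot detection on Google's side):

require 'watir'

# Force a desktop-sized viewport; headless Chrome otherwise starts
# with a small default window where responsive pages may hide fields.
browser = Watir::Browser.new :chrome,
                             headless: true,
                             options: { args: ['--window-size=1920,1080'] }

browser.goto BASE_URL

# Wait explicitly for the field to become present before typing,
# instead of relying on the implicit 30-second timeout to fail.
browser.text_field(name: 'identifier').wait_until(&:present?).set USER_EMAIL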
I am running my tests in a headless Chrome browser and need to get the user agent of the headless browser.
For a Chrome browser that is not headless, I use this code to get the user agent:
page.execute_script("navigator.userAgent") # => which works as required
But for a headless browser this doesn't seem to work. Is there a way to get the userAgent?
PS: I use Ruby and Capybara in my framework.
Your issue is that you're using execute_script when you need to be using evaluate_script, because you want a return value. That being said, your code shouldn't have worked without headless set either, so I'm not sure what version of Capybara you're running.
page.evaluate_script("navigator.userAgent")
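To illustrate the difference, here's a small sketch assuming a registered headless Chrome driver (the :headless_chrome driver name and registration block are assumptions, not from the question):

require 'capybara'
require 'selenium-webdriver'

# Assumed driver registration; adapt to your framework's setup.
Capybara.register_driver :headless_chrome do |app|
  opts = Selenium::WebDriver::Chrome::Options.new(args: ['--headless'])
  Capybara::Selenium::Driver.new(app, browser: :chrome, options: opts)
end

session = Capybara::Session.new(:headless_chrome)
session.visit 'about:blank'

# execute_script runs the snippet for its side effects and discards the
# result; evaluate_script returns the value of the expression.
session.execute_script('navigator.userAgent')  # => nil
session.evaluate_script('navigator.userAgent') # => "Mozilla/5.0 ... HeadlessChrome/..."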
I am currently writing a Selenium test case for a page which has a DoubleClick ad integrated in the header and footer.
Selenium works perfectly fine until it comes to testing this page, where the test breaks with the following issue:
Exception class=[java.lang.IllegalArgumentException]
com.gargoylesoftware.htmlunit.ScriptException: Exception invoking Window.getComputedStyle() with arguments [Text, String]
at com.gargoylesoftware.htmlunit.javascript.JavaScriptEngine$HtmlUnitContextAction.run(JavaScriptEngine.java:669)
at net.sourceforge.htmlunit.corejs.javascript.Context.call(Context.java:601)
at net.sourceforge.htmlunit.corejs.javascript.ContextFactory.call(ContextFactory.java:507)
I wonder if anyone has run into this issue.
Here is my stack:
WebDriver: Firefox
HtmlUnit 2.12
selenium-htmlunit-driver 2.33.0
I think the CSS pulled in by DoubleClick at runtime is causing the issue.
Any help in this regard would be appreciated.
I'm attempting to use the selenium-webdriver Ruby bindings to access an internal website that requires a proxy to be configured, as well as HTTP Basic Auth.
I currently have:
require "selenium-webdriver"
driver = Selenium::WebDriver.for :firefox
driver.navigate.to "http://my-internal-site.com"
But this fails due to both the proxy and HTTP auth issues. If I add my username and password to the URL (i.e. http://username:password@site.com) I can do basic authentication against another site that doesn't require the proxy, but this doesn't seem like an ideal solution.
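For the proxy half, I believe the Ruby bindings expose a Proxy class that can be attached to the browser options; a rough sketch of what I mean (proxy.example.com:8080 is a placeholder, and this still leaves the Basic Auth problem unsolved):

require 'selenium-webdriver'

# Placeholder proxy address; Basic Auth still has to be handled separately.
proxy = Selenium::WebDriver::Proxy.new(
  http: 'proxy.example.com:8080',
  ssl:  'proxy.example.com:8080'
)

options = Selenium::WebDriver::Firefox::Options.new
options.proxy = proxy

driver = Selenium::WebDriver.for :firefox, options: options
driver.navigate.to 'http://my-internal-site.com'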
Any suggestions?
Unfortunately, http://username:password@site.com has been the standard way of doing this, but more and more browsers are blocking the approach. Patrick Lightbody of BrowserMob discussed on the company blog how they get it to work.
Until there is full support for this across browsers in WebDriver (or Selenium), an alternative option is to integrate with desktop GUI automation tools, where the desktop GUI tool automates the HTTP authentication part. You can probably find some examples of this (or of file downloads and uploads) if you google for things like "Selenium AutoIt", etc.
For a cross-platform solution, replace AutoIt with Sikuli or something similar.
I tried the approach with AutoIt and it worked fine until Selenium 2.18.0, because they implemented UnhandledAlertException, which is thrown as soon as the proxy login dialog pops up.
If you try to catch it, you end up with a null driver; you would need to loop the attempt to create a driver and trust your AutoIt script to kill the window.
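A rough sketch of that retry loop, translated to the Ruby bindings used elsewhere in this thread (the exception class and the retry limit are assumptions; the proxy dialog is expected to be dismissed out-of-band by the AutoIt script):

require 'selenium-webdriver'

attempts = 0
begin
  attempts += 1
  driver = Selenium::WebDriver.for :firefox
rescue Selenium::WebDriver::Error::UnhandledAlertError
  # The AutoIt script is expected to kill the login dialog out-of-band;
  # keep retrying driver creation until it does (up to a sane limit).
  retry if attempts < 5
  raise
end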
If you're using Google Chrome, try creating a custom extension and loading it through ChromeOptions. It supports HTTP(S), which wasn't supported by browsermob_proxy in Chrome. In the case of redirect testing, this is the only way that will help you as of now...
For details, check this post
https://devopsqa.wordpress.com/2018/08/05/handle-basic-authentication-in-selenium-for-chrome-browser/
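For reference, loading such an extension from the Ruby bindings might look like the sketch below (the .crx path is a placeholder; building the auth extension itself is covered in the linked post):

require 'selenium-webdriver'

options = Selenium::WebDriver::Chrome::Options.new
# Packed extension that injects the Basic Auth credentials;
# '/path/to/auth_extension.crx' is a placeholder.
options.add_extension('/path/to/auth_extension.crx')

driver = Selenium::WebDriver.for :chrome, options: options
driver.navigate.to 'http://my-internal-site.com'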