Locating WebElements using XPath (NoSuchElementException)

I am having problems locating elements by XPath while writing automated web UI tests with Arquillian Drone + Graphene.
To figure things out I tried to locate the search button on the Google homepage. I cannot even get that working, with either an absolute or a relative XPath.
However, I am able to locate elements by ID, or with an XPath expression that contains an ID, but only when the ID is a real one and not generated. For example, on the Google homepage the Google logo has a real ID, "hplogo". I can locate this element either by the ID directly or by an XPath expression containing that ID.
Why does locating the Google logo via the ID "hplogo" work, while the absolute XPath "/html/body/div[1]/div[5]/span/center/div[1]/div/div" fails?
I am really confused. What am I doing wrong? Any help is appreciated!
EDIT:
WebElement e = browser.findElement(By.xpath("/html/body/div[1]/div[5]/span/center/div[1]/div/div"));
is causing a NoSuchElementException.

Your expression works on Firefox, but on WebKit-based browsers (e.g., Chrome) the rendered DOM is a bit different. It may also depend on localization (google.co.uk for me). If I force google.com, the logo for me is:
/html/body/div/div[5]/span/center/div[1]/img on Firefox 37 and /html/body/div/div[6]/span/center/div[1]/img on Chrome 42.
EDIT:
After discussing in chat, we figured out that HtmlUnit is indeed creating a DOM that differs from the one real browsers render. I suggested migrating to FirefoxDriver.
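The takeaway generalizes: an absolute, position-based XPath encodes the exact DOM structure, so any difference between the DOM HtmlUnit builds and the one a real browser renders breaks it, while an id-based query survives. A minimal sketch using Python's standard library (the two markup snippets are invented stand-ins, not Google's real HTML):

```python
import xml.etree.ElementTree as ET

# Hypothetical simplified DOMs: what a real browser renders vs. what HtmlUnit
# builds for the same page (invented markup for illustration only).
browser_dom = ET.fromstring(
    "<html><body><div><div/><div><span><img id='hplogo'/></span></div></div></body></html>"
)
htmlunit_dom = ET.fromstring(
    "<html><body><div><div><span><img id='hplogo'/></span></div></div></body></html>"
)

positional = "./body/div/div[2]/span/img"  # encodes the exact structure
by_id = ".//img[@id='hplogo']"             # independent of structure

print(browser_dom.find(positional) is not None)   # True: structure matches
print(htmlunit_dom.find(positional) is not None)  # False: one wrapper div fewer
print(htmlunit_dom.find(by_id) is not None)       # True: id query still works
```

The positional path silently stops matching as soon as one wrapper element differs, which is exactly the NoSuchElementException seen above.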


Import XML of span element fails with #N/A

I finally decided to sign up to Stack Overflow because of this, so I'd be super grateful for a solution!
I'm trying to get a number out of a <span> element in a data box on this page: https://de.marketscreener.com/kurs/aktie/SNOWFLAKE-INC-112440376/analystenerwartungen/
The relevant XPath is //*[@id="highcharts-0oywbsk-200"]/div[2]/div/span/span
I'm trying: =IMPORTXML("https://de.marketscreener.com/kurs/aktie/SNOWFLAKE-INC-112440376/analystenerwartungen/", "//div[2]/div/span/span")
I'm ignoring the id element because I can't use it: it changes on every page. This works pretty well with many other elements on the same page, but not at all in this case. Is that OK?
Google Sheets always gives me a #N/A error. Any idea how to scrape that number?
Disabling JavaScript reveals what you can scrape: IMPORTXML fetches the HTML as served, without running any JavaScript, so content the page builds client-side never appears in the document the formula queries.
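This is easy to verify in miniature: run the same kind of query against the static document the server sends, and the JavaScript-inserted node simply is not there. A small illustration with Python's standard library (the placeholder markup is invented, not the site's real source):

```python
import xml.etree.ElementTree as ET

# Invented stand-in for the server-sent HTML: the Highcharts <span> holding
# the number is only created later, in the browser, by JavaScript.
static_html = "<html><body><div id='chart-placeholder'/></body></html>"
root = ET.fromstring(static_html)

# The query finds nothing in the static document -- which is exactly
# why the IMPORTXML formula returns #N/A.
print(root.find(".//span"))  # None
```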

Setting the correct xpath

I'm trying to set the right xpath for using RSelenium, but I'm not very experienced in this area, so any help would be much appreciated.
Since I'm not allowed to post pictures yet I have tried to add a link to a screenshot of the html:
The html
I need R to scrape the dates (28-10-2020 - 13-11-2020), but so far I have not been able to set the correct XPath when using html_nodes.
I'm trying to scrape from sites like this one: https://www.boligsiden.dk/adresse/topperne-9-3-33-2620-albertslund-01650532___9__3__33
I usually do this in Python rather than R.
As you can see when you right-click on the element concerned, you get a drop-down menu with an XPath to the element.
That said, the site layout, and with it the full XPath, might change, so a full XPath is only a good option in the short run; I'd rather use driver.find_element_by_xpath('//button[contains(text(), "Login")]').click()
In your case that would be driver.find_element_by_xpath("//*[contains(@class, 'u-pb-4 u-block')]")
I hope this helps; it is mostly the same across different languages.
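As a side note, contains(@class, ...) is full XPath 1.0 and is supported by Selenium/RSelenium drivers; Python's standard-library XPath subset does not support contains(), but the same class-based matching can be emulated by hand, which also shows why it is more robust than a positional path. A small sketch (the markup is an invented stand-in for the listing page):

```python
import xml.etree.ElementTree as ET

# Invented stand-in for the listing page's markup.
html = ("<html><body>"
        "<span class='u-pb-4 u-block'>28-10-2020 - 13-11-2020</span>"
        "</body></html>")
root = ET.fromstring(html)

# Match on class tokens rather than position, like contains(@class, ...)
dates = [el.text for el in root.iter("span")
         if "u-block" in el.get("class", "").split()]
print(dates)  # ['28-10-2020 - 13-11-2020']
```

Splitting the class attribute into tokens avoids the classic contains() pitfall of matching substrings of unrelated class names.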

Confused about scrapy and Xpath

I am trying to scrape some data from the following website: https://xrpcharts.ripple.com/
The data I am interested in is Total XRP, which you can see immediately below or to the side (depending on your browser) of the circle diagram. Inspecting the element I am interested in, I see that it sits inside <div class="stat">, in <span ng-bind="totalXRP | number:2" class="ng-binding">99,993,056,930.18</span>.
The number 99,993,056,930.18 is what I am interested in.
So I started in a scrapy shell and wrote:
fetch("https://xrpcharts.ripple.com")
I then used chrome to copy the Xpath by right clicking on that place of HTML code, the result chrome gave me was:
/html/body/div[5]/div[3]/div/div/div[2]/div[3]/ul/li[1]/div/span
Then I used the Xpath command to extract the text:
response.xpath('/html/body/div[5]/div[3]/div/div/div[2]/div[3]/ul/li[1]/div/span/text()').extract()
but this gave me an empty list []. I really do not understand what I am doing wrong here. I think I am making an obvious mistake but I don't see it. Thanks in advance!
The bottom line is: you cannot expect the page you see in the browser to be the same page Scrapy would download and have available to work with. Scrapy is not a browser.
This page is quite dynamic and complex and is constructed with the help of multiple asynchronous requests bringing in both the logic and the data. There is also JavaScript executed in the browser that plays an important role in forming and supporting the HTML document object tree.
Scrapy does not have all of this; what you get from fetch() is just the initial "bare bones" HTML page, without the "dynamic content".
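A practical consequence: instead of parsing the rendered page, look in the browser's Network tab for the asynchronous request that actually carries the number, and consume that response directly. A hedged sketch of that approach (the JSON shape and field name here are assumptions for illustration, not the real ripple.com API):

```python
import json

# Assumed payload shape, as if captured from the site's XHR traffic in the
# browser's Network tab (the field name is hypothetical).
payload = '{"totalXRP": 99993056930.18}'
data = json.loads(payload)

# Format the raw number the way the Angular template ("totalXRP | number:2") does
print(f"{data['totalXRP']:,.2f}")  # 99,993,056,930.18
```

Scraping the JSON endpoint is usually both simpler and more stable than trying to reproduce the browser's rendered DOM in Scrapy.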

Get background color of a webelement using rspec, ruby and capybara

I want to get the background color of a web element, but I am not sure of the exact command in Ruby/Capybara for this.
We are using Ruby, Selenium, and Capybara in our web application automation.
As far as I understand Capybara, it was not developed for node manipulation; it leverages finding/matching elements. I'd suggest using Nokogiri for this purpose.
Capybara::Node::Element provides only value and text properties.
Capybara doesn't provide direct access to the complete style of an element, however you can access it using evaluate_script. Something like
page.evaluate_script("window.getComputedStyle(document.getElementById('my_element_id'))['background-color']")
should return what you're looking for. Obviously, if the element doesn't have an id you'd have to change the document.getElementById call to a different method of locating your element. Since you're using Selenium, if you're willing to use methods that won't work with other drivers and have already found the element in Capybara, you can do something like the following, which lets you pass the element in instead of having to figure out how to find it in the DOM again from JS:
el = page.find(....) # however you've found the element in Capybara
page.driver.browser.execute_script("return window.getComputedStyle(arguments[0])['background-color']", el.native)

How to select from frames with the Watir Ruby Gem

When trying to select a list element's option I attempted to do:
myvar=ie.select_list(:id, 'myid').option(:text, 'mytext').select
But for some reason while I'm using Watir in irb to access the website and attempting to manipulate any of the items I get this exception.
Watir::Exception::UnknownObjectException: Unable to locate element...etc
I'm looking at the page in the browser, but .html isn't showing the full page. It looks like the rest of the page is hidden and I'm not sure how to get into/around this.
irb(main):011:0> ie.html
=> "<HTML><HEAD><TITLE>My Title</TITLE>\r\n
<SCRIPT language=JavaScript type=text/javascript src=\"../../script.js\"></SCRIPT>\r\n</HEAD><FRAMESET id=mainFrameSet name=mainFrameSet rows=100%,0%><FRAME id=frmMain src=\"DefaultT.cfm?ID=2197024\" name=frmMain><FRAME id=frmHidden src=\"Dummy.html\" name=frmHidden scrolling=no></FRAMESET></HTML>"
EDIT:
Looking at this in retrospect, I have changed the title so it would more accurately address the issue I was having: it was difficult for a new Watir user to find information on Watir and frames. The original title was something like "Using Watir On An Encrypted Site". I have severely edited the question to get to the essence of what I was asking. I can't thank enough those who attempted to answer the ramblings of a new Ruby user with minimal knowledge of the web and programming in general. Please see previous revisions if necessary.
Based on the html you added, your webpage is using frames. Unlike other elements, you have to explicitly specify the frames you want to use.
You probably want the frame with id 'frmMain', so try:
myvar=ie.frame(:id, 'frmMain').select_list(:id, 'myid').option(:text, 'mytext').select
My guess is that the element is not on the page when you try to access it.
Try this (please notice when_present):
myvar=ie.select_list(:id, 'myid').when_present.option(:text, 'mytext').select
More information: http://watirwebdriver.com/waiting/
