Does WatiN support XPath?

Does WatiN support XPath?
How can I access an element that does not have an id, class, or anything else unique to it?

AFAIK, it does not.
My suggestion would be to look for a container element that has an id/class/name to identify it, and then look up the element you need inside that container.
If you are running into problems identifying such elements, you might want to post some of the DOM that you are working with.

Related

Appium: usage of XPath axes in a locator. Find sibling/parent/child/etc. elements in the context of the current element

Is it possible to find sibling/parent/child/etc. mobile elements in the context of an existing mobile element?
For example I have basic element:
MobileElement mobileElement = driver.findElement(By.xpath("//any/xpath/locator"));
Then I need to find the following sibling element.
In Selenium I could use code like this:
MobileElement nextMobileElement = mobileElement.findElement(By.xpath("following-sibling::nextelement['any conditions']"));
But in Appium I am getting a NoSuchElementException.
What is the proper syntax for using XPath axes in Appium to locate elements in the way described above?
In general, if you want to find an element under a parent element using XPath, prefix the XPath with a . to indicate that the child element should be searched for under the given parent element. So in your case you can try something like:
mobileElement.findElement(By.xpath(".(xpath of the child element)"))
For example: mobileElement.findElement(By.xpath(".//div[@class='xyz']"))
I found a solution in the following way:
When Appium is unable to select an element by its name, e.g. there is an element <android.widget.ViewAnimator ....>, you can look at its "class" attribute, which will contain identical text: class="android.widget.ViewAnimator". So you can build your complicated XPath locators based on this attribute. With XPath axes it will look like this: //some/xpath/preceding-sibling::*[@class='android.widget.ViewAnimator'].
Appium supports XPath just as WebDriver does. Make sure your XPath is correct; you can test it in Appium Desktop. following-sibling was working for me without any issues.
When Appium is not able to process an XPath expression, you should get a related exception.
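The same ideas can be combined in a short, hedged sketch using the Ruby Selenium/Appium bindings (the locators below are placeholders, and driver is assumed to be an already-initialised Appium driver):
mobile_element = driver.find_element(:xpath, '//any/xpath/locator')
# Prefixing the XPath with "." scopes the search under mobile_element,
# and the class attribute stands in for the widget name on Android.
sibling = mobile_element.find_element(
  :xpath, "./following-sibling::*[@class='android.widget.ViewAnimator']"
)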

How to get href with Watir using Ruby

I'm trying to use Watir to grab a specific link on a page:
Screenshot: Here is the href I am trying to grab.
My guess is I need to specify the ancestor element biz-website(?) and then traverse down to the a tag and grab its href somehow, but I'm not sure what the syntax of my code would need to be to do that.
Any ideas or tips?
You should be able to get the value of the href with
browser.span(:class, 'biz-website').a.href
If the class 'biz-website' is not unique for spans on your page, you can also use 'biz-website js-add-url-tagging'. If that is still not unique, you could also try
browser.span(:text, 'Business website').parent.a.href
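Putting that together, a minimal hedged sketch of the full Watir flow (the URL is a placeholder):
require 'watir'

browser = Watir::Browser.new :chrome
browser.goto 'https://www.example.com/some-business-page'  # placeholder URL

# Grab the href of the link inside the "biz-website" span
href = browser.span(class: 'biz-website').a.href
puts href

browser.close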

Get background color of a web element using RSpec, Ruby and Capybara

I want to get the background color of a web element. I am not sure of the exact command in Ruby/Capybara for this.
We are using Ruby, Selenium and Capybara in our web application automation.
As far as I understand Capybara, it was not developed for node manipulation, but rather for finding/matching elements. I'd suggest using Nokogiri for this purpose.
Capybara::Node::Element provides only value and text properties.
Capybara doesn't provide direct access to the complete style of an element; however, you can access it using evaluate_script. Something like
page.evaluate_script("window.getComputedStyle(document.getElementById('my_element_id'))['background-color']")
should return what you're looking for. Obviously, if the element doesn't have an id you'd have to change document.getElementById to a different method of locating your element. Since you're using Selenium, if you're willing to use methods that won't work with other drivers, and you have already found the element in Capybara, you can do something like the following, which allows you to pass the element instead of having to figure out how to find it in the DOM again from JS:
el = page.find(....) # however you've found the element in Capybara
page.driver.browser.execute_script("return window.getComputedStyle(arguments[0])['background-color']", el.native)
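As a hedged usage sketch inside an RSpec example (the selector and the expected color are assumptions):
it 'shows the element with the expected background color' do
  el = page.find('#my_element_id')  # placeholder selector
  bg = page.driver.browser.execute_script(
    "return window.getComputedStyle(arguments[0])['background-color']", el.native
  )
  expect(bg).to eq('rgba(255, 255, 0, 1)')  # computed colors typically come back as rgb()/rgba()
end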

How to use Selenium with XPath to determine the classes of an element?

I am trying to use XPath within Selenium to select a div element that is within a td.
What I am really trying to do is determine the class of the div and whether it is classed LOGO1, LOGO2, LOGO3 and so on. Originally I was going to just snag the image URL to determine which logo.jpg was used, but whoever made the target website used one image for all of the logo types and used CSS to determine which portion of the image is displayed (imagine 4 logos on one sprite image). This is the reason why I have to determine the class of the div instead of digging through the CSS paths.
In selenium I am using storeElementPresent | /html/body/form/center/table/tbody/tr/td[2]/div[3]/div[2]/fieldset/table/tbody/tr[2]/td/div/table/tbody/tr[${i}]/td[8]/div//class | cardLogo .
The div has multiple classes so I am thinking that this is the issue, but any help is appreciated. Below is the target source. This is source from within the table in the tbody. Selenium has no problems identifying all the way up to td[8] but then fails to gather the div. Please help!
<td class="togglehidefields" style="width:80px;">
<div class="cardlogo LOGO1" style="background-image:url(https://www.somesite.com/merchants/images/image.jpg)"></div>
<span id="ContentPlaceHolder1_grdCCChargebackDetail_lblCardNumber_0">7777</span>
</td>
I was fiddling with selenium.getAttribute() but it kept erroring out, any ideas there?
This <div/> element has one class attribute with one value, but this one is tokenized when parsed as HTML.
As Selenium only supports XPath 1.0, you will need to check for classes like this:
//div[contains(@class, "LOGO1") or contains(@class, "LOGO2")]
Extend that pattern as needed and embed it in your expression.
With XPath 2.0 and above, you could tokenize and use the = operator, which has set-based semantics:
//div[tokenize(@class, ' ') = ("LOGO1", "LOGO2")]
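To apply the XPath 1.0 pattern and then read the class back (which also sidesteps the getAttribute() trouble mentioned above), here is a hedged sketch using the Ruby selenium-webdriver bindings; the URL and locator details are assumptions:
require 'selenium-webdriver'

driver = Selenium::WebDriver.for :chrome
driver.get 'https://www.somesite.com/merchants/report'  # placeholder URL

# Locate the logo div with an XPath 1.0 class check
div = driver.find_element(
  :xpath, "//td[contains(@class, 'togglehidefields')]/div[contains(@class, 'cardlogo')]"
)

# Read the full class attribute (e.g. "cardlogo LOGO1") and pick out the LOGO* token
logo = div.attribute('class').split.find { |c| c.start_with?('LOGO') }
puts logo

driver.quit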
Old post but I'll put the solution I used up just in case it can help anyone.
xpath=//div[contains(@class,'carouselNavNext ')]/.[contains(@class, 'disabled')]
Fire off your first contains(), and then follow with /. to apply an additional check to the same element.

Extract XPath

I want to retrieve the XPath of an attribute (for example, the "brand" of a product on a retailer website).
One way of doing it is using Firefox add-ons like XPather or XPath Checker: open the website in Firefox and right-click the attribute I am interested in. This is OK, but I want to capture this information for many attributes, and right-clicking each and every attribute may be time consuming. Also, another problem I have is that some attributes I may be interested in will only be there for one product; other attributes may only appear for some other product. So I will have to go to that product and then do it manually again.
Is there an automated or programmatic way of retrieving the XPath of the desired attributes from a website rather than having to do this manually?
Note that not all websites serve valid XML that you can use XPath on...
That said, you should check out HTML parsers that will allow you to use XPath on HTML even if it is not valid XML.
Since you did not specify the technology you are working with, I'll suggest the .NET HTML Agility Pack; if you need others, search for questions dealing with this here on SO.
The solution I use for this kind of thing is to write an xpath something like this:
//*[text()="Brand"]/following-sibling::*
//*[text()="Color"]/following-sibling::*
//*[text()="Size"]/following-sibling::*
//*[text()="Material"]/following-sibling::*
It works by finding all elements (labels) with the text you want and then looking to the next sibling in the HTML. Without a specific URL to see I can't help any further.
This is a generalised version; you can make more specific versions by replacing the asterisks with tag names, and you can navigate differently by replacing the following-sibling axis with something else.
I use XPaths in import.io to build APIs for this kind of thing all the time. It's just a matter of finding an XPath that's generic enough to find the HTML no matter where it is on the page, yet specific enough to get the right data.
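As a hedged illustration, those label/following-sibling expressions could be applied with Nokogiri, which supports XPath over real-world HTML (the URL and page structure are assumptions):
require 'nokogiri'
require 'open-uri'

# Hypothetical product page; replace with the real URL
doc = Nokogiri::HTML(URI.open('https://example.com/product/123'))

%w[Brand Color Size Material].each do |label|
  value = doc.at_xpath(%Q{//*[text()="#{label}"]/following-sibling::*})
  puts "#{label}: #{value.text.strip}" if value
end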
