What is the difference between locator and selector? - xpath

While writing test scripts for PHPUnit, I read a lot about selectors and locators, but I don't really know what the difference is.
Does anyone know the difference?

Selenium uses what are called locators to find and match the elements on your page that it needs to interact with. There are eight locator strategies included in Selenium:
Identifier
Id
Name
Link
DOM
XPath
CSS
UI-element
Here, the CSS locator strategy uses CSS selectors to find the elements on the page. In other words, a selector is the expression itself (CSS, XPath, ...), while a locator is the strategy-plus-value pair that Selenium uses to find an element. Is anything else unclear?
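To make the distinction concrete, here is a minimal sketch using the selenium-webdriver Node bindings (the URL and element IDs are hypothetical): each findElement call takes a locator, and only for the CSS strategy is the locator's value a CSS selector.
import { Builder, By } from 'selenium-webdriver';

(async () => {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.com/login');
    // Three locators, three strategies, all pointing at the same element;
    // only the last value is a CSS selector.
    await driver.findElement(By.id('username'));                   // Id strategy
    await driver.findElement(By.xpath('//input[@id="username"]')); // XPath strategy
    await driver.findElement(By.css('input#username'));            // CSS strategy
  } finally {
    await driver.quit();
  }
})();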

Related

Unable to identify element in Blue Prism using XPath

I have spied an input text box using the Application Modeller of Blue Prism and was able to successfully highlight the text box using the below XPath:
/HTML/BODY(1)/DIV(4)/main(1)/DIV(1)/DIV(1)/DIV(1)/DIV(2)/DIV(1)/DIV(1)/DIV(2)/IFRAME(1)/HTML/BODY(1)/DIV(2)/FORM(1)/DIV(3)/TABLE(2)/TBODY(1)/TR(1)/TD(1)/DIV(1)/DIV(1)/DIV(1)/DIV(2)/DIV(1)/DIV(1)/DIV(1)/DIV(1)/DIV(1)/DIV(1)/DIV(1)/DIV(1)/DIV(1)/SPAN(1)/DIV(1)/DIV(2)/DIV(1)/DIV(1)/DIV(1)/DIV(1)/DIV(1)/TABLE(1)/TBODY(1)/TR(1)/TD(1)/INPUT(1)
I wanted to use a more robust XPath, so I tried the following:
//*[#id="CT"]/div/div/div/div[1]/div[1]/table/tbody[1]/tr/td/input[1]
The above XPath identified the element correctly in Chrome, but I got the below error message when trying the same in Blue Prism:
Error - Highlighting results - Object reference not set to an instance of an object.
Let me know if I am doing anything incorrectly.
Sorry for replying to a pretty old question! The workaround we've devised for this scenario (where making the path dynamic would require too long a loop/search) is to use jQuery snippets. If the page is using jQuery, it is trivial to execute these queries very quickly through Blue Prism's capability of executing JavaScript functions.
And we put in an enhancement request, because it'd be supremely useful functionality.
Update: As a user points out below, the vanilla JS querySelector method is probably safer and more future-proof than jQuery, if it can be used.
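For illustration, a minimal sketch of the kind of snippet one might inject, assuming (hypothetically) that the target element is reachable with a CSS selector; the jQuery line and the vanilla line are equivalent lookups.
// jQuery lookup, if the page ships jQuery ('#CT input.search' is a
// hypothetical selector):
//   const el = $('#CT input.search').get(0);
// Vanilla-DOM equivalent, with no dependency on the page's libraries:
const el = document.querySelector('#CT input.search');
if (el instanceof HTMLElement) {
  el.click(); // or read el.textContent, set attributes, etc.
}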
Blue Prism does not fully support the XPath spec; alas, the construct you're attempting to use here won't work.
Alternatively, you can set the Path attribute of an application modeler entry to be Dynamic, which allows you to insert dynamic parameters from the process/object level to pinpoint elements you'd like to interact with.
Unfortunately, Blue Prism doesn't actually use "real" XPaths, but only an extremely limited subset: absolute paths without wildcards. (Note: it is technically possible to match the XPath against a string with wildcards, but this seemingly causes BP to check every single element in the document, and it is so slow that it is almost never the right solution.)
For cases where an element can't be robustly identified via the BP application modeller (perhaps because it requires complex or dynamic selectors), my workaround is to inject a JS snippet. JS can select elements much more reliably, and it can then generate the Blue Prism path for that element.
Returning data from JS to Blue Prism is not trivial, but one of the nicer solutions is to have JS create a <script id="_output"> element, put JSON inside it, and then have Blue Prism read the contents of this element.
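A minimal sketch of that handoff, with a hypothetical selector and payload; Blue Prism can then read the carrier element's contents with its ordinary read actions.
(() => {
  // Select the element JS-side; the selector is a hypothetical example.
  const input = document.querySelector('#CT input');
  const payload = {
    found: input !== null,
    value: input instanceof HTMLInputElement ? input.value : null,
  };
  // Create (or reuse) the <script id="_output"> carrier element.
  let out = document.getElementById('_output');
  if (!out) {
    out = document.createElement('script');
    out.id = '_output';
    out.setAttribute('type', 'application/json'); // keeps the browser from executing it
    document.body.appendChild(out);
  }
  out.textContent = JSON.stringify(payload);
})();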

Get background color of a webelement using rspec, ruby and capybara

I want to get the background color of a web element, but I am not sure of the exact command in Ruby/Capybara for this.
We are using Ruby, Selenium, and Capybara in our web application automation.
As far as I understand Capybara, it was not developed for node manipulation, but rather for finding/matching elements. I'd suggest using Nokogiri for this purpose.
Capybara::Node::Element exposes only value and text properties.
Capybara doesn't provide direct access to the complete style of an element; however, you can access it using evaluate_script. Something like
page.evaluate_script("window.getComputedStyle(document.getElementById('my_element_id'))['background-color']")
should return what you're looking for. Obviously, if the element doesn't have an id, you'd have to change the document.getElementById call to a different method of locating your element. Since you're using Selenium, if you're willing to use methods that won't work with other drivers and you have already found the element in Capybara, you can do something like the following, which lets you pass the element itself instead of having to figure out how to find it in the DOM again from JS:
el = page.find(....) # however you've found the element in Capybara
page.driver.browser.execute_script("return window.getComputedStyle(arguments[0])['background-color']", el.native)

Locating WebElements using XPATH (NoSuchElementException)

I am having problems with locating elements using xpath while trying to write automated webUI tests with Arquillian Drone + Graphene.
To figure things out, I tried to locate the search button on the Google homepage. Even that I could not get to work, neither with an absolute nor a relative XPath.
However, I am able to locate elements using IDs, or when the XPath string has an ID in it, but only when the ID is a real ID and not generated. For example, on the Google homepage the Google logo has a real ID, "hplogo". I can locate this element directly by the ID or via the ID within an XPath expression.
Why is locating the Google logo using the ID "hplogo" possible, while it fails with the absolute XPath "/html/body/div[1]/div[5]/span/center/div[1]/div/div"?
I am really confused. What am I doing wrong? Any help is appreciated!
EDIT:
WebElement e = browser.findElement(By.xpath("/html/body/div[1]/div[5]/span/center/div[1]/div/div"));
is causing a NoSuchElementException.
Your expression works on Firefox, but on WebKit-based browsers (e.g., Chrome) the rendered DOM is a bit different. It may also depend on localization (google.co.uk for me). If I force google.com, the image logo for me is:
/html/body/div/div[5]/span/center/div[1]/img on Firefox 37 and /html/body/div/div[6]/span/center/div[1]/img on Chrome 42.
EDIT:
After discussing in chat, we figured out that HtmlUnit is indeed creating a DOM that differs from the one real browsers render. I suggested migrating to FirefoxDriver.
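To see the contrast in code, a hedged sketch with the selenium-webdriver Node bindings (the same distinction applies to the Java API used above): the ID-based locator survives DOM differences between browsers, while the absolute path depends on every intermediate element.
import { Builder, By } from 'selenium-webdriver';

(async () => {
  const driver = await new Builder().forBrowser('firefox').build();
  try {
    await driver.get('https://www.google.com');
    // Robust: anchored to a stable attribute, independent of how each
    // browser (or HtmlUnit) structures the surrounding DOM.
    await driver.findElement(By.id('hplogo'));
    // Brittle: every step and index must match the rendered DOM exactly,
    // so this may throw NoSuchElementException on another browser.
    await driver.findElement(By.xpath('/html/body/div/div[5]/span/center/div[1]/img'));
  } finally {
    await driver.quit();
  }
})();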

Selenium WebDriver CSS equivalent to xpath not contains

I want to avoid using xpath in my java selenium code but cannot figure out what the css equivalent to the code below would be
("xpath", "//div[#class='error'][not(contains(#style,'display: none'))]");
Does anyone know if there is a CSS equivalent to XPath's not(contains(...))?
You can't easily match against a compound attribute (i.e., style=) in CSS. The not part is easy - CSS3 has a :not(...) selector. You're going to have to find something else to identify the elements you want to exclude, and then use :not(...) to exclude them, as sketched below.
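A hedged sketch of both routes, assuming (hypothetically) that the hidden error divs also carry a hidden class:
import { By } from 'selenium-webdriver';

// Preferred: exclude by some other marker on the hidden elements,
// here a hypothetical `hidden` class.
const visibleErrors = By.css('div.error:not(.hidden)');

// Possible but brittle: a substring match on the compound style
// attribute, which breaks on variants like "display:none" (no space).
const byStyleSubstring = By.css('div.error:not([style*="display: none"])');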

extract xpath

I want to retrieve the XPath of an attribute (for example, the "brand" of a product on a retailer website).
One way of doing it is to use Firefox add-ons like XPather or XPath Checker, open the website in Firefox, and right-click the desired attribute I am interested in. This is OK, but I want to capture this information for many attributes, and right-clicking each and every one may be time-consuming. The other problem I have is that some attributes I am interested in will be there for one product, while other attributes may only appear for some other product, so I would have to go to that product and do it all manually again.
Is there an automated or programmatic way of retrieving the XPath of the desired attributes from a website, rather than having to do this manually?
Note that not all websites are valid XML that you can run XPath on...
That said, you should check out HTML parsers that allow you to use XPath on HTML even when it is not valid XML.
Since you did not specify the technology you are working with, I'll suggest the .NET HTML Agility Pack; if you need others, search for questions dealing with this here on SO.
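As one concrete route: in a browser (or headless-browser) context, the standard document.evaluate API runs XPath directly against the parsed HTML DOM; the expression below is a hypothetical example.
// Collect every node matching an XPath expression from the live DOM.
function xpathAll(expr: string): Node[] {
  const result = document.evaluate(
    expr, document, null, XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);
  const nodes: Node[] = [];
  for (let i = 0; i < result.snapshotLength; i++) {
    nodes.push(result.snapshotItem(i)!);
  }
  return nodes;
}

// Hypothetical usage on a retailer page:
const brandCells = xpathAll('//span[@class="brand"]');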
The solution I use for this kind of thing is to write an XPath something like this:
//*[text()="Brand"]/following-sibling::*
//*[text()="Color"]/following-sibling::*
//*[text()="Size"]/following-sibling::*
//*[text()="Material"]/following-sibling::*
It works by finding all elements (labels) with the text you want and then looking at the next sibling in the HTML. Without a specific URL to look at, I can't help any further.
This is a generalised version; you can make more specific versions by replacing the asterisks with tag types, and you can navigate differently by replacing the following-sibling axis with something else, as in the sketch below.
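For instance, a more specific variant under the (hypothetical) assumption that the labels sit in th cells with the values in the adjacent td:
// Generic form from above, then a narrowed variant for a table layout.
const generic = document.evaluate(
  '//*[text()="Brand"]/following-sibling::*',
  document, null, XPathResult.FIRST_ORDERED_NODE_TYPE, null).singleNodeValue;
const specific = document.evaluate(
  '//th[text()="Brand"]/following-sibling::td[1]',
  document, null, XPathResult.FIRST_ORDERED_NODE_TYPE, null).singleNodeValue;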
I use XPaths in import.io to make APIs for this kind of thing all the time. It's just a matter of finding an XPath that's generic enough to find the HTML no matter where it is on the page, yet specific enough to get the right data.
