When I check an XPath in Firebug, it works as expected.
The XPath I'm trying is below:
.//div[Text()='Data Fields']/following::div[contains(@style,'db3a1a10.pn`g')][2]
However, in Selenium WebDriver
ArrayList<WebElement> al = toolActionObject.getAllElementsByXpath(".//div[Text()='Data Fields']/following::div[contains(@style,'db3a1a10.png')][2]");
System.out.println(al.size());
the output looks like this:
DEBUG (SeleniumActions.java:91) - Locating elements by By.xpath: *//div[Text()='Data Fields']/following::div[contains(@style,'db3a1a10.png')][2]
DEBUG (SeleniumActions.java:91) - Locating elements by By.xpath: *//div[Text()='Data Fields']/following::div[contains(@style,'db3a1a10.png')][2]
DEBUG (SeleniumActions.java:91) - Locating elements by By.xpath: *//div[Text()='Data Fields']/following::div[contains(@style,'db3a1a10.png')][2]
DEBUG (SeleniumActions.java:91) - Locating elements by By.xpath: *//div[Text()='Data Fields']/following::div[contains(@style,'db3a1a10.png')][2]
DEBUG (SeleniumActions.java:91) - Locating elements by By.xpath: *//div[Text()='Data Fields']/following::div[contains(@style,'db3a1a10.png')][2]
DEBUG (SeleniumActions.java:111) - Exception : Elements not found.
Cause : Elements not found by By.xpath: *//div[Text()='Data Fields']/following::div[contains(@style,'db3a1a10.png')][2]. Returning empty Array List of WebElement.
0
INFO (GSUILogInLogOut.java:95) - Clossing browser.
PASSED: testHere
What is the reason for such conflicting behavior?
You have two issues in your XPath: text() needs to be written in lowercase (XPath function names are case-sensitive), and 'db3a1a10.png' must not contain a backtick (probably just an error in your example code).
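The case-sensitivity is easy to verify outside Selenium. A minimal sketch using the JDK's built-in XPath engine (the snippet, class name, and helper are made up for illustration): lowercase text() matches, while Text() is rejected as an unknown function.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;
import java.io.StringReader;

public class XPathCaseDemo {
    // Returns true if the expression matches at least one node in a tiny
    // stand-in document, false if it matches nothing or fails to evaluate.
    static boolean matches(String expression) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(
                        "<root><div>Data Fields</div></root>")));
        XPath xp = XPathFactory.newInstance().newXPath();
        try {
            return xp.evaluate(expression, doc, XPathConstants.NODE) != null;
        } catch (Exception e) {
            // Text() is not a defined XPath function, so evaluation fails here.
            return false;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(matches("//div[text()='Data Fields']")); // true
        System.out.println(matches("//div[Text()='Data Fields']")); // false
    }
}
```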
Related
internet.find(:xpath, '/html/body/div[1]/div[10]/div[2]/div[2]/div[1]/div[1]/div[1]/div[5]/div/div[2]/a').text
I am looping through a series of pages, and sometimes this XPath will not be available. How do I continue to the next URL instead of throwing an error and stopping the program? Thanks.
First, stop using XPaths like that - they're going to be ultra-fragile and horrendous to read. Without seeing the HTML I can't give you a better one, but at least part way along there has to be an element with an id you can target instead.
Next, you could catch the exception returned from find and ignore it or better yet you could check if page has the element first
if internet.has_xpath?(...)
  internet.find(:xpath, ...).text
  ...
else
  ... whatever you want to do when it's not on the page
end
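If you prefer catching the exception for the loop-over-pages case, here is a minimal plain-Ruby sketch of the rescue-and-continue pattern. The raise is a stand-in for the Capybara::ElementNotFound that find raises, and the page names are made up:

```ruby
# Hypothetical page list; the raise below stands in for the
# Capybara::ElementNotFound error that find raises on a missing element.
urls = ["page1", "page2", "page3"]
texts = []

urls.each do |url|
  begin
    raise "element not found" if url == "page2" # simulate a missing xpath
    texts << "text from #{url}"
  rescue StandardError
    next # skip this page and carry on with the next url
  end
end

puts texts.inspect # => ["text from page1", "text from page3"]
```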
As an alternative to the accepted answer, you could consider the #first method, which accepts a count argument as the number of expected matches, or nil to allow empty results as well:
internet.first(:xpath, ..., count: nil)&.text
That returns the element's text if one is found, and nil otherwise, so there's no need to rescue and ignore an exception.
See the Capybara docs.
Below is the error shown when my test fails.
JMeter version: 3.2
Assertion error: false
Assertion failure: true
Assertion failure message: Value expected to be 'ptgrna9jgc1f3a77881+iamtestpass', but found 'ptgrna9jgc1f3a77881+iamtestpass'
There is a failure even though the two strings are exactly the same. Any idea why this is happening?
PS: The tests are running as expected on a different machine.
Everything started working fine after switching to JMeter 3.1!
If the assertion fails, something is wrong. My expectation is that the reason is one of the following:
Your JSON Path expression returns a JSON array, and you are comparing against only its first element.
There is a leading or trailing space (or both) either in the JSON Path return value or in your expected data. Double-check both using, for example, the Debug Sampler and View Results Tree listener combination.
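A quick plain-Java illustration of why invisible whitespace trips an equals-style assertion. The trailing space in actual is assumed for the example, not taken from the question:

```java
public class AssertionWhitespace {
    // Exact string equality, the way an Equals-style assertion compares values.
    static boolean exactMatch(String expected, String actual) {
        return expected.equals(actual);
    }

    // The same comparison after stripping surrounding whitespace from both sides.
    static boolean trimmedMatch(String expected, String actual) {
        return expected.trim().equals(actual.trim());
    }

    public static void main(String[] args) {
        String expected = "ptgrna9jgc1f3a77881+iamtestpass";
        String actual = "ptgrna9jgc1f3a77881+iamtestpass "; // hypothetical trailing space
        System.out.println(exactMatch(expected, actual));   // false
        System.out.println(trimmedMatch(expected, actual)); // true
    }
}
```

The two strings print identically in a failure message, which is why the assertion looks self-contradictory until you check for whitespace.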
In Ruby/Capybara, I tried passing multiple (two) CSS locators in a single find query and found that it automatically searches for both of them and performs the action on whichever locator is present on the page.
For example:
find("css1","css2").set "ABC"
I observed that, while running the script, it searches for both locators at run time and performs the action on the one that is present on the page.
However, when I tried the same logic using XPath, it doesn't work and throws an element-not-found or invalid-selector error (one of the XPaths is present on the page).
For example:
find(:xpath,"xpath1","xpath2").set "ABC"
Can anyone please explain how we can do this for XPath as well in Ruby/Capybara?
The example you show of find("css1","css2").set "ABC" won't actually do anything with the "css2" argument passed and, in the current version of Capybara, will emit a warning about unused parameters. What will work is
find("css1, css2").set("ABC")
because it's using the CSS grouping comma, which finds items matching either css1 or css2. In XPath you can do the same with the union operator |, which returns elements that match xpath1 or xpath2:
find(:xpath, "xpath1 | xpath2").set("ABC")
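As a quick illustration of the union operator outside the browser, here is a sketch using Ruby's bundled REXML library with a made-up stand-in document (Capybara is not involved; the //a and //b paths play the role of xpath1 and xpath2):

```ruby
require "rexml/document"

# A stand-in document; the two "locators" are //a and //b.
doc = REXML::Document.new("<root><a id='x'/><b id='y'/></root>")

# The XPath union operator | returns nodes matching either branch,
# which is what find(:xpath, "xpath1 | xpath2") relies on in Capybara.
hits = REXML::XPath.match(doc, "//a | //b")
puts hits.map(&:name).inspect
```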
I am trying to locate an Ajax control (mouse-over) on the Amazon home page for sign-in.
WebElement element = driver.findElement(By.xpath("//*[@id='nav-link-yourAccount']"));
However, this element locator works some of the time; at other times it does not find the element and the script fails.
I have noticed that the XPath of this element sometimes changes to //*[@id='nav-link-yourAccount']/span[1], and there is no other unique identifier that can be used to locate this element.
Could you please let me know how to resolve this variable-XPath issue?
If you fail to find an element at one of the XPath values, you could try searching for the other. You could use ExpectedConditions to wait a certain period of time for the element to exist; if that time elapses and the element is not found, use the second locator. This assumes the XPath only changes between these two known values. Also, once you locate the element, you might want to assert some of its other properties to further ensure you've found the element you're looking for.
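The wait itself needs a live browser, but the try-first-then-fall-back logic can be sketched in plain Java. The suppliers below are stand-ins for driver.findElement calls guarded by a WebDriverWait; the class and helper names are illustrative, not Selenium API:

```java
import java.util.List;
import java.util.NoSuchElementException;
import java.util.function.Supplier;

public class FallbackLocator {
    // Try each lookup in order and return the first result that succeeds.
    // In a real test each Supplier would wrap a driver.findElement(...) call
    // guarded by a WebDriverWait/ExpectedConditions timeout.
    static <T> T firstFound(List<Supplier<T>> lookups) {
        for (Supplier<T> lookup : lookups) {
            try {
                T result = lookup.get();
                if (result != null) {
                    return result;
                }
            } catch (RuntimeException e) {
                // e.g. NoSuchElementException / TimeoutException: try the next locator
            }
        }
        throw new NoSuchElementException("no locator matched");
    }

    public static void main(String[] args) {
        // Simulated lookups: the first "locator" fails, the second succeeds.
        List<Supplier<String>> lookups = List.of(
                () -> { throw new RuntimeException("not found"); },
                () -> "element");
        System.out.println(firstFound(lookups)); // prints "element"
    }
}
```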
Here's a post about waiting for an element:
Equivalent of waitForVisible/waitForElementPresent in Selenium WebDriver tests using Java?
I am trying to scrape an HTML page for a particular input field (so that I can extract a token from it for use during login). I'm using SBCL 1.0.54 (because that version works properly with StumpWM), quicklisp, and the following quicklisp packages:
drakma
closure-html
cxml-stp
If I load the HTML page using Drakma and convert it to valid XHTML, I can use the following code (loosely adapted from the Plexippus XPath examples):
(xpath:do-node-set (node (xpath:evaluate "//*" xhtml-tree))
  (format t "found element: ~A~%"
          (xpath-protocol:local-name node)))
... to obtain the following results (snipped for brevity; the page in question is large):
found element: img
found element: a
found element: img
found element: script
found element: div
found element: img
found element: a
found element: input
found element: input
However I can't seem to get any XPath statement more complicated than "//*" working correctly. My aim is to find an input with a particular name, but even just finding all inputs fails:
* (xpath:evaluate "//input" xhtml-tree)
#<XPATH:NODE-SET empty {10087146F3}>
I'm obviously missing something pretty basic here. Could someone please give me a pointer in the right direction?
Could it be a namespace issue? That is, if there is an xmlns attribute on the root html element, then you will need to declare the namespace with xpath:with-namespaces and use the declared prefix in your XPath expression. The expression "//input" only finds input elements that aren't in any namespace.
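If that is the case, a sketch of the fix might look like this (the xh prefix is an arbitrary name chosen for the example; the URI must match the document's xmlns, which for XHTML is the one shown):

```lisp
;; Bind a prefix to the XHTML namespace, then qualify the element name.
;; "xh" is an arbitrary prefix; the URI must match the document's xmlns.
(xpath:with-namespaces (("xh" "http://www.w3.org/1999/xhtml"))
  (xpath:evaluate "//xh:input" xhtml-tree))
```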