I'm getting a "not a valid xpath expression" error after using this XPath:
(//span[contains(text(),'1h')]//following::span[contains(text(),'15m')])[1]
When I inspect this element in the browser, the expression returns one result.
I tried using different locators in the same keyword: the first two XPaths are found, but the third one fails with the error above.
Assert 'xpath=(//span[contains(text(),'${course}')])[1]' Has State Of '${state}' (finds it successfully)
Assert 'xpath=(//span[text()='${hr}'])[1]' Has State Of '${state}' (finds it successfully)
Assert 'xpath=(//span[text()='${min}'])[1]' Has State Of '${state}' (throws the error for this one)
It is visible on the page.
internet.find(:xpath, '/html/body/div[1]/div[10]/div[2]/div[2]/div[1]/div[1]/div[1]/div[5]/div/div[2]/a').text
I am looping through a series of pages and sometimes this xpath will not be available. How do I continue to the next url instead of throwing an error and stopping the program? Thanks.
First, stop using xpaths like that - they're going to be ultra-fragile and horrendous to read. Without seeing the HTML I can't give you a better one - but at least part way along there has to be an element with an id you can target instead.
Next, you could catch the exception returned from find and ignore it (there's a sketch of that approach after the example below), or better yet you could check if the page has the element first:
if internet.has_xpath?(...)
  internet.find(:xpath, ...).text
  ...
else
  ... whatever you want to do when it's not on the page
end
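Here is the exception-handling alternative as a minimal sketch, using the absolute XPath from the question; Capybara's find raises Capybara::ElementNotFound when nothing matches:
begin
  text = internet.find(:xpath, '/html/body/div[1]/div[10]/div[2]/div[2]/div[1]/div[1]/div[1]/div[5]/div/div[2]/a').text
rescue Capybara::ElementNotFound
  # the element isn't on this page, so fall back to nil and move on
  text = nil
end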
As an alternative to the accepted answer, you could consider the #first method, which accepts a count argument for the number of expected matches, or nil to allow empty results as well:
internet.first(:xpath, ..., count: nil)&.text
That returns the element's text if one is found and nil otherwise, and there's no need to rescue and ignore an exception.
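Tying that back to the loop in the question, a minimal sketch (the urls array is a hypothetical placeholder; the XPath is the one from the question) could look like:
urls.each do |url|
  internet.visit(url)
  # count: nil (as described above) allows zero matches, so a missing element yields nil instead of raising
  text = internet.first(:xpath, '/html/body/div[1]/div[10]/div[2]/div[2]/div[1]/div[1]/div[1]/div[5]/div/div[2]/a', count: nil)&.text
  next if text.nil?
  puts text
end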
See Capybara docs
In one of the API solutions, the incoming request is in XML format, and I need to fetch the first child node's tag name to decide which logic to run. I am using XPath to get the tag name, but when I run the XPath I get the error "Can not convert #STRING to a NodeList".
I have tried local-name() and name(), but both give the same error.
My XML is as below:
<p:Check xmlns:p="http://amarwayx.com.cu/WCSXMLSchema/creptonium">
  <AttributeChnageLocal>
    <TaskID>17723</TaskID>
    <BatchID>12345</BatchID>
    <Expiry>2022-12-06</Expiry>
    <TimeStamp>2019-07-20T22:45:48</TimeStamp>
  </AttributeChnageLocal>
</p:Check>
and the XPath expressions I used are:
local-name(/p:Check/*)
name(/p:Check/*)
local-name(/p:Check/*[1])
name(/p:Check/*[1])
However, an online XPath evaluator evaluates these correctly (returning AttributeChnageLocal), so I don't see where the XPath syntax is wrong.
Below is a snapshot of my tool.
The same kind of expression works fine in the online evaluator.
You have ticked a box labelled "store the string value of the selected node as text", which suggests that the XPath evaluation tool you are using expects your XPath expression to select a node; but your expression doesn't select a node, it returns a string.
I don't know what this tool you are using is, but unfortunately all its options seem to assume that you are selecting nodes.
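To see the distinction outside that tool, here is a small illustration in Ruby with Nokogiri (purely as an example of an XPath API that accepts non-node results; it is not the tool from the question). local-name(...) evaluates to a string, so only a caller prepared for a string result can use it; selecting the node itself and reading its name in code sidesteps the problem:
require 'nokogiri'

xml = <<~XML
  <p:Check xmlns:p="http://amarwayx.com.cu/WCSXMLSchema/creptonium">
    <AttributeChnageLocal>
      <TaskID>17723</TaskID>
    </AttributeChnageLocal>
  </p:Check>
XML
doc = Nokogiri::XML(xml)
ns  = { 'p' => 'http://amarwayx.com.cu/WCSXMLSchema/creptonium' }

doc.xpath('local-name(/p:Check/*[1])', ns)  # => "AttributeChnageLocal" (a String, not a node-set)
doc.xpath('/p:Check/*[1]', ns).first.name   # select the node instead, then read its name in code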
The first line here does find the element:
Wait Until Element Is Visible    xpath=//a[contains(text(),'Download selected certificate')]
Then when I try to get the href attribute:
${url}=    Get Element Attribute    xpath=//a[contains(text(),'Download selected certificate')]/@href
It fails.
The error says:
InvalidSelectorException: Message: invalid selector: Unable to locate an element with the xpath expression //a[contains(text(),'Download selected certificate')]/ because of the following error:
SyntaxError: Failed to execute 'evaluate' on 'Document': The string '//a[contains(text(),'Download selected certificate')]/' is not a valid XPath expression.
I'm not sure why, as I have a working unrelated example with similar syntax:
${url}=    Get Element Attribute    xpath=//tr[contains(b/span, Jul)][${row_number}]/td[5]/span/a/@href
In XPath, the expression //a[contains(text(),'Download selected certificate')]/@href would return the href attribute node of that a tag.
In Selenium2Library though, when calling Get Element Attribute, the @ sign designates the attribute name to get the value of for a particular DOM element.
So what that keyword does is split the string on the last @, find a webelement using the first part, and retrieve the value of the target attribute (the second part of the split).
Doing so, it ends up with //a[contains(text(),'Download selected certificate')]/ as the locator for the element - and that trailing / makes it an invalid XPath.
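Roughly, the split looks like this (sketched in Ruby purely for illustration; Selenium2Library itself is a Python library):
locator, attribute = "xpath=//a[contains(text(),'Download selected certificate')]/@href".rpartition('@').values_at(0, 2)
# locator   => "xpath=//a[contains(text(),'Download selected certificate')]/"  (trailing slash - invalid XPath)
# attribute => "href"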
The solution is simple - lose the trailing slash; e.g.:
${url}=    Get Element Attribute    xpath=//a[contains(text(),'Download selected certificate')]@href
As for why your last sample works - it beats me too :)
In Ruby/Capybara, I tried searching for multiple (two) CSS locators in a single find query and found that it automatically searches for both of them and performs the action on the locator that is present on the page.
Example:
find("css1","css2").set "ABC"
I observed that while running the script, at run time it searches for both locators and performs the action on the one that is present on the page.
However, when I tried the same logic using XPath, it doesn't work and throws an element-not-found or invalid-selector error (one of the XPaths is present on the page).
Example:
find(:xpath,"xpath1","xpath2").set "ABC"
Can anyone please explain how to do this with XPath in Ruby/Capybara as well?
The example you show of find("css1","css2").set "ABC" won't actually do anything with the "css2" argument passed and, in the current version of Capybara, will emit a warning about unused parameters. What will work is
find("css1, css2").set("ABC")
because it's using the CSS grouping comma, which will find items matching either css1 or css2. In XPath you can do that with the union operator |, which will return elements that match either xpath1 or xpath2:
find(:xpath, "xpath1 | xpath2").set("ABC")
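As a concrete sketch (the two locators here are hypothetical), the union behaves like the CSS grouping above; note that if both expressions match on the same page, find may raise an ambiguous-match error, since it still expects a single element overall:
# only one of these two inputs is expected to exist on a given page
find(:xpath, "//input[@id='new_name'] | //input[@id='edit_name']").set("ABC")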
I want to use an assertion "expected result" that uses both some form of "contains" function or wildcard AND gets the text to test against from an Excel dataSource. The SoapUI 'contains' function has no way to use a dataSource that I've found, and I cannot figure out how to use an XPath function like contains with a dataSource. Can someone please explain how that works?
--
I've been asked for more detail.
In SoapUI, if I add an assertion and choose the request/response as the source, I then have a choice of assertions. One of them is "XPath Match". I can use that to designate a specific field in the response - in this case, the value I want to test.
Having chosen the "XPath expression" in the top half of the "XPath Match Configuration", I can then choose my Excel dataSource as the content for the lower half "Expected Result". I have used this to test an error code against an error code from the Excel spreadsheet.
What I don't know how to do is determine, in this assertion, that the error message returned contains the value in Excel. I figure something special goes into "Expected Result" in the "XPath Match Configuration" box, but I don't know what.
The Expected Result of the XPath assertion is only a "dumb" string. The best that you can do in this field is property expansion ... which does not help your cause.
Instead you will need to use the top portion, where you enter the XPath expression, to provide the logic you are looking for. Your XPath expression will need to look something like:
contains(//*:some/*:node, '${data_source#property}')
and your Expected Result will simply be:
true