Verifying that searched text is displayed in a single line - ruby

How can I test whether a sentence (a combination of four or five words) is displayed in a single line?
I have to search by a name or some other field. After the search results are displayed, I need to verify that the displayed text is a single line. For example, the XPath below is used to verify the search result link:
//ol[contains(@class,'search results')]/li[contains(@class,'mod result') and contains(@class,'XXXXXX')]//a[contains(@href,'trk=XXXXXX')]

I am not familiar with Ruby, but the following Java approach should work in any language.
Assuming that your "sentence" is entirely contained in one element, you could find all occurrences with something like:
driver.findElements(By.xpath("//*[text()='your sentence']"))
Then simply test the size of the returned list.

Assuming that a single line or multiple lines will be contained within a single DOM element, you could use the vertical component of the element size to check for the multiple-line condition.
webElement.getSize()
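A rough sketch of the same idea with Selenium's Python bindings (the URL, locator and height threshold below are placeholders, not taken from the question):

from selenium import webdriver
from selenium.webdriver.common.by import By

SINGLE_LINE_MAX_HEIGHT = 30  # px; placeholder, adjust to your page's line-height

driver = webdriver.Chrome()
driver.get("https://example.com/search?q=some+name")  # placeholder URL

# Placeholder locator for the search-result link whose text must not wrap.
result_link = driver.find_element(By.XPATH, "//ol[contains(@class,'search results')]//a")

# The vertical component of the element size grows when the text wraps,
# so a height above the single-line threshold means more than one line.
assert result_link.size["height"] <= SINGLE_LINE_MAX_HEIGHT, \
    "search result text wrapped onto more than one line"

driver.quit()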

Related

Combining several XPaths into one to extract text from a web page?

I have a few Xpaths as below:
//*[@id="904735f0-bb82-11ea-a473-6d0f51688222"]/div/p
//*[@id="729c0860-a71d-11ea-b994-53a3e91a35c2"]/div/div/div[1]/div/p
//*[@id="2555ab30-bb84-11ea-9e8b-277e7f6208b2"]/div/div/div[1]/div/p
//*[@id="7e100250-a71d-11ea-b994-53a3e91a35c2"]/div/div/div[1]/div/p
//*[@id="811727d0-a71d-11ea-b994-53a3e91a35c2"]/div/div/div[1]/div/p
All of the above are used to extract text from a single web page, since the text is located in different viewports, but I wish to find a single XPath to extract the text for all of them. Is it possible to use 'and' with multiple IDs to extract all of it through one XPath?
Any other suggestions would be appreciated.
You can use the or operator for the last four,
and the node-set union operator | to add the first one.
So to select all 5 expressions in one, use the following expression:
//*[@id="904735f0-bb82-11ea-a473-6d0f51688222"]/div/p | //*[@id="729c0860-a71d-11ea-b994-53a3e91a35c2" or @id="2555ab30-bb84-11ea-9e8b-277e7f6208b2" or @id="7e100250-a71d-11ea-b994-53a3e91a35c2" or @id="811727d0-a71d-11ea-b994-53a3e91a35c2"]/div/div/div[1]/div/p
A shorter and more generic solution could be :
(//div/div/div[1]/div/p|//div/p)[parent::*[string-length(@id)=36 and substring(@id,24,1)="-"]]
The first part, in parentheses, is used to specify the end of the path. Since the @id attributes all have the same length, we use that inside the predicate. We also verify the presence of a - at a specific position with substring().
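A minimal sketch with lxml showing how the union and or operators combine (the sample HTML only contains two of the five ids; the others simply match nothing):

from lxml import html

doc = html.fromstring("""
<body>
  <div id="904735f0-bb82-11ea-a473-6d0f51688222"><div><p>first</p></div></div>
  <div id="729c0860-a71d-11ea-b994-53a3e91a35c2">
    <div><div><div><div><p>second</p></div></div></div></div>
  </div>
</body>
""")

combined = (
    '//*[@id="904735f0-bb82-11ea-a473-6d0f51688222"]/div/p'
    ' | //*[@id="729c0860-a71d-11ea-b994-53a3e91a35c2"'
    ' or @id="2555ab30-bb84-11ea-9e8b-277e7f6208b2"'
    ' or @id="7e100250-a71d-11ea-b994-53a3e91a35c2"'
    ' or @id="811727d0-a71d-11ea-b994-53a3e91a35c2"]/div/div/div[1]/div/p'
)
print([p.text for p in doc.xpath(combined)])  # ['first', 'second']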

Scrapy - How can I handle a random number of elements?

I have a Scrapy crawler with which I can reliably acquire the first desired paragraph, but sometimes there is a second or third paragraph.
response.xpath(f"string(//h2[contains(text(), '{card}')]/following-sibling::p)").get()
is the XPath expression I am using to acquire said paragraph.
response.xpath(f"string(//h2[contains(text(), '{card}')]/following-sibling::p[1])").get() acquires the same paragraph, but sometimes, I need response.xpath(f"string(//h2[contains(text(), '{card}')]/following-sibling::p[2])").get().
How might I go about taking this varying number of paragraphs into account when scraping?
You could try to use a wildcard *.
EDIT: with the string() function you'll only get the first paragraph.
Just remove string() from your XPath expression to get all the paragraphs (assuming they are in the same parent node) and store the result in a variable.
//h2[contains(text(), '{card}')]/following-sibling::p/text()
Alternative: if you know the maximum possible number of paragraphs, you can use concat().
concat(//h2[contains(text(), '{card}')]/following-sibling::p[1],'|',//h2[contains(text(), '{card}')]/following-sibling::p[2])
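Either way, a minimal sketch with Scrapy's .getall() shows how to collect however many paragraphs exist (response and card come from the question; the helper name is made up):

def extract_paragraphs(response, card):
    # Without string(), the selector returns every matching <p>;
    # .getall() yields a list of however many there happen to be.
    paragraphs = response.xpath(
        f"//h2[contains(text(), '{card}')]/following-sibling::p//text()"
    ).getall()
    # Join the one, two or three paragraphs into a single string.
    return " ".join(text.strip() for text in paragraphs)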

Need to count the number of emails in a string using Capybara/RSpec

So I have an application which holds multiple entries that are strings of comma-separated emails. The strings live in textarea elements where they can be modified. The application uses JavaScript to modify these strings, and I need to use Capybara to verify that a target string has the correct number of emails in it. To illustrate what I mean, here's my Cucumber scenario (assuming the target list starts with a string of 5 emails):
When I remove the 3rd email under list one
Then I should see 4 emails under list one
When I click the "Cancel" button for list one
Then I should see 5 emails under list one
I can pretty easily grab the string with Capybara like so:
expect(page).to have_css(".css-selectors textarea")
but I don't know what to do from there. I need to be able to assert that the number of emails in the string is in fact changing to the desired number. I need to split the string and count the number of emails to see if they match the target number, but everything I've tried leads to a race condition where Capybara checks the value before the JS can finish updating. I've looked into passing a filter block to the have_css call but I can't find documentation on how that would work, or if it's even the right tactic. And so I'm out of ideas here.
Since all the emails are in one element, your inclination to use a filter block is exactly correct. The filter block receives each element that matches the initial selector and needs to return whether or not it matches whatever extra filtering you wish to do (true/false). Therefore, to check that the element has a string (value, not text, since it's a form field) with 4 comma-separated items, it would be something like
expect(page).to have_css(".css-selectors textarea"){ |ta|
  ta.value.split(',').size == 4
}
This will then use Capybara's waiting/retrying behavior while also performing the extra step of checking for a matching number of comma-separated items in the text of the element, thereby getting around the race condition.
Your check could also be performed by using a regex for the with option of the field selector, along the lines of
expect(page).to have_field(type: 'textarea', with: /^[^,]+,[^,]+,[^,]+,[^,]+$/)
or fillable_field selector
expect(page).to have_selector(:fillable_field, type: 'textarea', with: /^[^,]+,[^,]+,[^,]+,[^,]+$/)
Those don't currently scope to the .css-selectors element but you could do that with within or a chained find. You could also ensure a unique element by passing the id/name/label text of the element. Obviously you could make the regex more complicated if you want to actually verify the text strings are emails, etc.

Selenium: How to locate a node using exact text match

I want to locate an element on a web page using its text.
I know there is a function named contains() to do this, for example:
tr[contains(.,'hello')]/td
But the problem is that if I have two elements named hello and hello1, this function does not work properly.
Is there any other method like contains() for an exact string match when locating elements?
tr[.//text()='hello']/td
This will select all td child elements of all tr elements that have a descendant text node equal to exactly 'hello'. Such an XPath still sounds odd to me.
I believe that this makes more sense:
tr/td[./text()='hello']
because it selects only the td that contains the text.
Does that help?
It all depends on what your HTML actually contains, but your tr[contains(.,'hello')]/td XPath selector means "the first cell of the first row that contains the string 'hello' anywhere within it" (or, more accurately, "the first TD element in the TR element that contains the string 'hello' anywhere within it", since Selenium has no idea what the elements involved really do). That's why it's getting the wrong result when there are rows containing "hello" and "hello1" - both contain "hello".
The selector tr[. ='hello']/td would be more accurate, but it's a little unusual (because HTML TR elements aren't supposed to contain text - the text is supposed to be in TH or TD elements within the TR), and it probably won't work (because text in any other cells would break the comparison). You probably want tr[td[.='hello']]/td, which means "the first TD element contained in the TR element that contains a TD element that has the string 'hello' as its complete text".
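A short sketch of that difference with Selenium's Python bindings (the question doesn't name a language, and the URL is a placeholder):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/table")  # placeholder URL

# Substring match: any row containing 'hello', so rows with 'hello1' match too.
loose = driver.find_elements(By.XPATH, "//tr[contains(., 'hello')]/td")

# Exact match: only rows with a td whose text is exactly 'hello'.
exact = driver.find_elements(By.XPATH, "//tr[td[. = 'hello']]/td")

driver.quit()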
Well, your problem is that you are searching for the text in the tr (which is not correct anyway), and this causes a problem for the contains() function, which cannot accept a list of text nodes. Try using this location path instead; it should retrieve what you want.
//tr/td[contains(./text(),"hello")]
This location path will retrieve a set of nodes over which you have to iterate to get the text. You can try to append
/text()
but this will produce (at least in my test) a result that is a single string, a concatenation of all the matched strings.
I had the same problem. I had a list of elements, one was named "Image" and another one was named "Text and Image". After reading all the posts here, none of the suggestions worked for me. So I tried the following and it worked:
List<WebElement> elementList = driver.findElements(By.xpath("//*[text()= '" + componentName + "']"));
for (WebElement element : elementList) {
    if (element.getText().equals(componentName)) {
        element.click();
    }
}

Trouble using Xpath "starts with" to parse xhtml

I'm trying to parse a webpage to get posts from a forum.
The start of each message starts with the following format
<div id="post_message_somenumber">
and I only want to get the first one
I tried xpath='//div[starts-with(@id, '"post_message_')]' in yql without success
I'm still learning this, anyone have suggestions
I think I have a solution that does not require dealing with namespaces.
Here is one that selects all matching div's:
//div[@id[starts-with(.,"post_message")]]
But you said you wanted just the "first one" (I assume you mean the first "hit" in the whole page?). Here is a slight modification that selects just the first matching result:
(//div[@id[starts-with(.,"post_message")]])[1]
These use the dot to represent the id's value within the starts-with() function. You may have to escape special characters in your language.
It works great for me in PowerShell:
# Load a sample xml document
$xml = [xml]'<root><div id="post_message_somenumber"/><div id="not_post_message"/><div id="post_message_somenumber2"/></root>'
# Run the xpath selection of all matching div's
$xml.selectnodes('//div[@id[starts-with(.,"post_message")]]')
Result:
id
--
post_message_somenumber
post_message_somenumber2
Or, for just the first match:
# Run the xpath selection of the first matching div
$xml.selectnodes('(//div[@id[starts-with(.,"post_message")]])[1]')
Result:
id
--
post_message_somenumber
I tried xpath='//div[starts-with(@id, '"post_message_')]' in yql without success I'm still learning this, anyone have suggestions
If the problem isn't due to the many nested apostrophes and the unclosed double-quote, then the most likely cause (we can only guess without being shown the XML document) is that a default namespace is used.
Specifying names of elements that are in a default namespace is the most frequently asked question about XPath. If you search for "XPath default namespace" on SO or on the internet, you'll find many sources with the correct solution.
Generally, a special method must be called that binds a prefix (say "x:") to the default namespace. Then, in the XPath expression, every element name "someName" must be replaced by "x:someName".
Here is a good answer on how to do this in C#.
Read the documentation of your language/XPath engine to see how something similar should be done in your specific environment.
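For example, a sketch with Python's lxml, assuming the page really is XHTML in the standard http://www.w3.org/1999/xhtml default namespace (the prefix "x" is arbitrary and the ids are made up):

from lxml import etree

xhtml = b"""<html xmlns="http://www.w3.org/1999/xhtml">
  <body>
    <div id="post_message_12345">first post</div>
    <div id="post_message_67890">second post</div>
  </body>
</html>"""

tree = etree.fromstring(xhtml)
first = tree.xpath(
    '(//x:div[starts-with(@id, "post_message_")])[1]',
    namespaces={"x": "http://www.w3.org/1999/xhtml"},
)
print(first[0].text)  # first post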
@FindBy(xpath = "//div[starts-with(@id,'expiredUserDetails') and contains(text(), 'Details')]")
private WebElementFacade ListOfExpiredUsersDetails;
This one matches all elements on the page whose id starts with expiredUserDetails and whose text also contains 'Details'.
