I want to use Watir to tell whether or not an element is hidden using the overflow: hidden css property. The only way that I have thought to do this would be to figure out where the containing div is and see whether the contained element falls within it or not. Methods like visible? and present? return true even when the contained element is completely hidden.
Is there a better way?
Thanks!
One option might be to use document.elementFromPoint(x,y). This should tell you what the top element is at a specific coordinate. If your element is hidden due to the overflow, it will return a different element.
The following seems to work for the examples on w3schools.com:
require 'rubygems'
require 'watir-webdriver'
b = Watir::Browser.new :firefox
class Watir::Element
  def hidden_by_overflow?
    assert_exists
    assert_enabled
    # Add one to the coordinates because, otherwise, if there is no border
    # between this element and another element, we might get the element
    # right above or to the left of this one
    top_left_x = @element.location.x + 1
    top_left_y = @element.location.y + 1
    top_test = browser.execute_script("return document.elementFromPoint(#{top_left_x}, #{top_left_y})")
    top_test != self
  end
end
begin
  b.goto('http://www.w3schools.com/cssref/playit.asp?filename=playcss_overflow&preval=hidden')
  b.div(:id, 'demoDIV').ps.each { |x| puts x.hidden_by_overflow? }
  #=> The first 9 of the 16 paragraphs return true.
ensure
  b.close
end
Note:
I tested this in Firefox; I am not sure if it will work in other browsers.
The check only verifies that the top-left corner of the element is visible. Elements that are only partially outside the containing div would still be reported as visible. I tried adding a check for the bottom corner using @element.size.height, but webdriver started scrolling the results in the containing div.
Related
Noob question. I need to pass 3,000+ URLs from a CSV sheet to Selenium. I need Selenium to navigate to each one of these links, scrape information and then put that information into a CSV.
The issue I am running into is when I push my CSV URLS into an array, I cannot pass one single object (url) into Selenium at a time.
I know I likely need some sort of loop. I have tried setting up loops and selecting from the array using .map, .select. and just a do loop.
urls.map do |url|
  @driver.navigate.to(url)
  name = @driver.find_element(:css, '.sites-embed-footer>a').attribute('href')
  puts name
  kb_link = name
  kb_array.push(kb_link)
  puts "url is #{url}"
end
In the above example, Selenium returns an "invalid URL" error message. Debugging with Pry tells me that my url object is not a single URL, but still the entire array.
How can I set Selenium to visit each URL from the array one by one?
EDIT: ----------------
So, after extensive debugging with Pry, I found a couple of issues. The first was that my CSV was feeding a nested array to my loop, which was causing the URL error. I had to flatten the array to un-nest it and get around that issue.
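For reference, the nesting problem can be sketched with just the stdlib (the URLs below are made up): CSV.parse returns an array of row arrays, so even single-column data comes back nested, and iterating over it yields one-element arrays instead of strings.

```ruby
require 'csv'

# A minimal stand-in for the CSV file: one URL per line.
data = "http://a.example\nhttp://b.example\n"

rows = CSV.parse(data)
puts rows.inspect  # => [["http://a.example"], ["http://b.example"]]
# Passing rows[0] to navigate.to would hand Selenium an array, not a URL.

urls = rows.flatten
puts urls.inspect  # => ["http://a.example", "http://b.example"]
```

After flattening, each element of `urls` is a plain string that can be passed to the driver one at a time.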
After that, I had to build a rescue into my loop so that my script didn't die when it encountered a page without the CSS element I was looking for.
Here's the finalized loop.
@urls1.each do |url|
  begin
    @driver.navigate.to(url)
    @driver.manage.timeouts.implicit_wait = 10
    name = @driver.find_element(:css, '.sites-embed-footer>a').attribute('href')
    puts name
    kb_link = name
    kb_array.push(kb_link)
    puts 'done'
  rescue Selenium::WebDriver::Error::NoSuchElementError
    puts 'no google doc'
    x = 'no google doc'
    kb_array.push(x)
    next
  end
end
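The per-item rescue pattern above can be sketched with plain Ruby, no browser needed: each item is processed inside its own begin/rescue, and a placeholder is recorded when processing fails (the item names and the ArgumentError stand-in for NoSuchElementError are made up for illustration).

```ruby
results = []
items = ['ok-1', 'bad', 'ok-2']

items.each do |item|
  begin
    # Simulate a scrape that fails for some pages, the way
    # find_element raises NoSuchElementError for missing elements.
    raise ArgumentError, 'element missing' if item == 'bad'
    results.push(item.upcase)
  rescue ArgumentError
    # Record a placeholder and move on to the next item.
    results.push('no google doc')
    next
  end
end

puts results.inspect  # => ["OK-1", "no google doc", "OK-2"]
```

Because begin/rescue sits inside the block, one bad page never kills the whole loop.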
What about using .each?
Example:
array = [1, 2, 3, 4, 5, 6]
array.each { |x| puts x }
In your code:
urls.each do |url|
  @driver.navigate.to(url)
  name = @driver.find_element(:css, '.sites-embed-footer>a').attribute('href')
  puts name
  kb_link = name
  kb_array.push(kb_link)
  puts "url is #{url}"
end
First of all, it doesn't make much sense to use map if you don't use the result of the block somewhere. map, applied to an Enumerable, returns a new Array, and you don't do anything with the returned array. In your case it would contain just the return values of puts, which is always nil, so you would get back an array of nils, with the side effect that something is written to stdout.
If you are only interested in the side effects, each or each_with_index should be used to traverse an Enumerable. Given the problems you have with map and with each, I wonder what the actual content of your urls object is. Did you ever inspect it? You could do a
p urls
before entering the loop. With 3,000 URLs the output will be huge, but maybe you can run it on a simpler example with fewer URLs.
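A minimal, stand-alone illustration of the difference (the URLs are made up): map collects the block's return values, while each returns the receiver itself and is used purely for side effects. Since puts returns nil, mapping over puts yields only nils.

```ruby
urls = ['http://a.example', 'http://b.example']

# map builds a new array from the block's return values;
# puts returns nil, so we collect nils.
mapped = urls.map { |url| puts url }

# each also runs the block for its side effects,
# but returns the original array unchanged.
each_result = urls.each { |url| puts url }

puts mapped.inspect            # => [nil, nil]
puts each_result.equal?(urls)  # => true
```

This is why each (not map) is the right tool when you only want to visit every URL.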
I have a method that checks whether a panel is displayed or not.
def verifyNav(section)
  wait = Selenium::WebDriver::Wait.new(:timeout => 10)
  wait.until { @driver.find_element(:id => section + '-panel').displayed? }
end
Now I want to add some code that says that any other elements that have an id that ends in '-panel' should not be displayed.
I've done some searching and I found that I can use the end_with method and there seems to be a find_elements method that returns a list of matching elements.
I've found out that
a = 'radio-panel'
a.end_with?('-panel')
returns true.. but if I try to call
@driver.find_elements(:id => end_with?('-panel'))
I get an error saying that end_with is an undefined method.
Any ideas on how I can do this?
This should help:
p @driver.find_elements(xpath: "(.//*)[contains(@id, '-panel')]").select { |element| element.attribute('id').end_with?('-panel') }
If you want to iterate and perform some action on each element, you can do:
@driver.find_elements(xpath: "(.//*)[contains(@id, '-panel')]").select { |element| element.attribute('id').end_with?('-panel') }.each do |i|
  puts i.displayed? # Or whatever operation you want to perform
end
If you want to check whether every element with an id ending in '-panel' is displayed:
p @driver.find_elements(xpath: "(.//*)[contains(@id, '-panel')]")
  .select { |element| element.attribute('id').end_with?('-panel') }
  .map { |element| element.displayed? }
  .all?
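The select / end_with? / all? chain can be illustrated with plain strings, independent of Selenium (the ids below are invented for the example):

```ruby
ids = ['radio-panel', 'nav-panel', 'radio-button', 'side-panel']

# Keep only the ids that end in '-panel', mirroring the select step above.
panels = ids.select { |id| id.end_with?('-panel') }
puts panels.inspect  # => ["radio-panel", "nav-panel", "side-panel"]

# all? returns true only if the block is truthy for every element,
# the same way the displayed? check is aggregated above.
puts panels.all? { |id| id.include?('panel') }  # => true
```

The XPath contains() filter is only a coarse pre-filter; end_with? then removes ids where '-panel' appears in the middle rather than at the end.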
Question
I need to search a given web page for a particular node when given the exact HTML as a string. For instance, if given:
url = "https://www.wikipedia.org/"
node_to_find = "<title>Wikipedia</title>"
I want to "select" the node on the page (and eventually return its children and sibling nodes). I'm having trouble with the Nokogiri docs, and how to exactly go about this. It seems as though, most of the time, people want to use Xpath syntax or the #css method to find nodes that satisfy a set of conditions. I want to use the HTML syntax and just find the exact match within a webpage.
Possible start of a solution?
If I create two Nokogiri::HTML::DocumentFragment objects, they look similar but do not match due to the memory id being different. I think this might be a precursor to solving it?
irb(main):018:0> n = Nokogiri::HTML::DocumentFragment.parse("<title>Wikipedia</title>").child
=> #<Nokogiri::XML::Element:0x47e7e4 name="title" children=[#<Nokogiri::XML::Text:0x47e08c "Wikipedia">]>
irb(main):019:0> n.class
=> Nokogiri::XML::Element
Then I create a second one using the exact same arguments. Compare them - it returns false:
irb(main):020:0> x = Nokogiri::HTML::DocumentFragment.parse("<title>Wikipedia</title>").child
=> #<Nokogiri::XML::Element:0x472958 name="title" children=[#<Nokogiri::XML::Text:0x4724a8 "Wikipedia">]>
irb(main):021:0> n == x
=> false
So I'm thinking that if I can somehow create a method that can find matches like this, then I can perform operations of that node. In particular - I want to find the descendents (children and next sibling).
EDIT: I should mention that I have a method in my code that creates a Nokogiri::HTML::Document object from a given URL. So - that will be available to compare with.
require 'nokogiri'
require 'open-uri'

class Page
  attr_accessor :url, :node, :doc, :root

  def initialize(params = {})
    @url = params.fetch(:url, "").to_s
    @node = params.fetch(:node, "").to_s
    @doc = parse_html(@url)
  end

  def parse_html(url)
    Nokogiri::HTML(open(url).read)
  end
end
As suggested by commenter @August, you could use Node#traverse to see if the string representation of any node matches the string form of your target node.
def find_node(html_document, html_fragment)
  matching_node = nil
  html_document.traverse do |node|
    matching_node = node if node.to_s == html_fragment.to_s
  end
  matching_node
end
Of course, this approach is fraught with problems that boil down to the canonical representation of the data (do you care about attribute ordering? specific syntax items like quotation marks? whitespace?).
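To see why exact string comparison is fragile, note that semantically identical markup can differ as text. A stdlib-only sketch (no Nokogiri required; the markup is made up):

```ruby
# The same element, serialized three different ways.
a = '<div id="foo" class="bar">Hi</div>'
b = '<div class="bar" id="foo">Hi</div>'   # attribute order differs
c = "<div id='foo' class='bar'>Hi</div>"   # quoting style differs

puts a == b  # => false
puts a == c  # => false
```

A browser or parser treats all three as the same element, but a byte-for-byte comparison of their serializations does not, which is what makes the to_s-matching approach brittle.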
[Edit] Here's a prototype of converting an arbitrary HTML element to an XPath expression. It needs some work but the basic idea (match any element with the node name, specific attributes, and possibly text child) should be a good starting place.
def html_to_xpath(html_string)
  node = Nokogiri::HTML.fragment(html_string).children.first
  has_more_than_one_child = (node.children.size > 1)
  has_non_text_child = node.children.any? { |x| x.type != Nokogiri::XML::Node::TEXT_NODE }
  if has_more_than_one_child || has_non_text_child
    raise ArgumentError.new('element may only have a single text child')
  end

  xpath = "//#{node.name}"
  node.attributes.each do |_, attr|
    xpath += "[@#{attr.name}='#{attr.value}']" # TODO: escaping.
  end
  xpath += "[text()='#{node.children.first}']" unless node.children.empty?
  xpath
end

html_to_xpath('<title>Wikipedia</title>') # => "//title[text()='Wikipedia']"
html_to_xpath('<div id="foo">Foo</div>') # => "//div[@id='foo'][text()='Foo']"
html_to_xpath('<div><br/></div>') # => ArgumentError: element may only have a single text child
It seems possible that you could build an XPath from any HTML fragment (e.g. not restricted to those with only a single text child, per my prototype above) but I'll leave that as an exercise for the reader ;-)
I am new to the Watir world, having used webdriver and geb in a previous company. I want to know if Watir offers any method that is analogous to the get_elements method from webdriver. See below for an example.
Imagine the following html exists within a larger page
<div class="someClass">someText</div>
<div class="someClass">someMoreText</div>
<div class="someClass">evenMoreText</div>
I want make some assertion against each of the divs by locating all elements of the given class and iterating through them. Using webdriver, I could do it like this:
elements = driver.get_elements(:css, ".someClass")
elements.each do |element|
  # some assert on element
end
Does Watir provide an equivalent construct? I can't find anything useful in the Watir documentation.
You can do:
elements = driver.elements(:css => '.someClass')
Or if you know they are all divs, you should do:
elements = driver.divs(:css => '.someClass')
Both of these methods would return a collection of elements that match your criteria. In the first case it would match any tag type, where as the second case the results would be limited to divs.
Then, with either of the above, you can iterate the same way:
elements.each do |element|
  # some assert on element
end
Instead of using the :css locator, I'd recommend using the :class locator, since it is usually faster and makes your tests more readable:
elements = driver.divs(:class => 'someClass')
Also, don't forget :id, :name, :text and others.
So I am cycling through a list of links on a page with Nokogiri and pushing all the links onto a 2D array. The issue is that it is pushing nil in some elements which I don't want.
How do I force it to skip the elements that are nil, so my array just has links and not some links and some nil values?
See code:
url = 'http://www.craigslist.org/about/sites'
def my_list(url)
  root = Nokogiri::HTML(open(url))
  list = root.css("a").map do |link|
    if link[:href] =~ /http/
      [link.text, link[:href]]
    end
  end
end
Thoughts?
P.S. I tried if link[:href].nil?, but I am not sure how to tell it to skip that particular link element.
You can post-process the list, as root doesn't seem to support all the collection methods. Try this at the end of your method to clean it up; it'll drop all the nils:
list = list.reject { |x| x.nil? }
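A stand-alone sketch of the post-processing step (the link values are made up): map yields nil for every entry the if-expression filters out, and either reject or the more concise Array#compact removes them.

```ruby
links = ['http://a.example', 'ftp://skip.me', 'http://b.example']

# As in my_list above, the if-expression returns nil for non-matching links.
list = links.map { |href| href if href =~ /http/ }
puts list.inspect  # => ["http://a.example", nil, "http://b.example"]

# Both of these drop the nils.
puts list.reject { |x| x.nil? }.inspect  # => ["http://a.example", "http://b.example"]
puts list.compact.inspect                # same result, more concisely
```

compact is purpose-built for exactly this, so it reads more directly than the reject block.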
You can try:
list = root.css("a").reject { |l| l[:href].nil? }.map do |link|
  [link.text, link[:href]]
end
Note the non-bang reject: reject! returns nil when nothing was removed, which would break the chained map.