Check whether element is present - ruby

Is there any way to check whether an element is present in Selenium WebDriver? I tried to use this code:
if @driver.find_element(:link, "Save").displayed? == true
but it breaks with an exception, which is not what I want because I still need the script to continue running.

I'm not a Ruby expert and may make some syntax errors, but you should get the general idea:
if @driver.find_elements(:link, "Save").size > 0
This code doesn't raise a NoSuchElementError.
But this method will "hang" for a while if you have an implicit wait greater than zero and there are no matching elements on the page.
The second issue: if the element exists on the page but is not displayed, you'll still get true.
To work around both issues, create a method like this:
def is_element_present(how, what)
  @driver.manage.timeouts.implicit_wait = 0
  result = @driver.find_elements(how, what).size > 0
  if result
    result = @driver.find_element(how, what).displayed?
  end
  @driver.manage.timeouts.implicit_wait = 30
  return result
end
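Usage might look like this (a minimal sketch reusing the "Save" link from the question):
if is_element_present(:link, "Save")
  @driver.find_element(:link, "Save").click
else
  # element is missing or hidden; carry on without raising
end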

@driver.find_element raises a Selenium::WebDriver::Error::NoSuchElementError exception.
So you can write your own method that uses a begin/rescue block and returns true when there is no exception and false when there is one.
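A minimal sketch of that approach (the method name element_present? is just an example; the rescue targets the exception class mentioned above):
def element_present?(how, what)
  @driver.find_element(how, what)
  true
rescue Selenium::WebDriver::Error::NoSuchElementError
  false
end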

If the element is expected to be on the page no matter what, use a Selenium wait object with element.displayed? rather than begin/rescue:
wait = Selenium::WebDriver::Wait.new(:timeout => 15)
element = $driver.find_element(id: 'foo')
wait.until { element.displayed? } ## Or `.enabled?` etc.
This is useful in instances where parts of the page take longer to properly render than others.
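If the element never becomes visible within the timeout, wait.until raises a timeout error rather than returning false, so wrap it in a rescue if the script should continue (a sketch; Selenium::WebDriver::Error::TimeoutError is the class name in recent selenium-webdriver versions, older releases spell it TimeOutError):
begin
  wait.until { element.displayed? }
rescue Selenium::WebDriver::Error::TimeoutError
  # element never became visible within 15 seconds; carry on
end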

I am using selenium-webdriver version 3.14.0, released earlier this month. I was trying to put a check on @web_driver_instance.find_element(:xpath, "//div[contains(text(), 'text_under_search')]").displayed? using:
element_exists = @wait.until { @web_driver_instance.find_element(:xpath, "//div[contains(text(), 'text_under_search')]").displayed? }
unless element_exists
  # do something if the element does not exist
end
The above failed with a NoSuchElementError exception, so I tried the following:
begin
  @wait.until { @web_driver_instance.find_element(:xpath, "//div[contains(text(), 'text_under_search')]").displayed? }
rescue NoSuchElementError
  # do something if the element does not exist
end
This also did not work for me and failed again with a NoSuchElementError exception.
Since the text I was checking for was likely to be unique on the page, I tried the following, which worked for me:
unless /text_under_search_without_quotes/.match?(@web_driver_instance.page_source)
  # do something if the text does not exist
end
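For completeness, rescuing the fully qualified exception classes is another variation worth trying in this situation (a sketch using the same locator; whether it helps will depend on the setup):
begin
  @wait.until { @web_driver_instance.find_element(:xpath, "//div[contains(text(), 'text_under_search')]").displayed? }
rescue Selenium::WebDriver::Error::NoSuchElementError, Selenium::WebDriver::Error::TimeoutError
  # do something if the element does not exist
end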

Find Element
Solution #1
expect(is_visible?(page.your_element)).to be(false)
[or]
expect(is_visible?(@driver.find_element(:css => 'locator_value'))).to be(false)
[or]
expect(is_visible?(@driver.first(:css => 'locator_value'))).to be(true)
=> Generic ruby method
def is_visible?(element)
  begin
    # return the element's actual visibility, not a blanket true
    return element.displayed?
  rescue => e
    p e.message
    return false
  end
end
Solution #2
expect(is_visible?(".locator_value")).to be(false) # default css locator
[or]
expect(is_visible?("locator_value", 'xpath')).to be(true)
[or]
expect(is_visible?("locator_value", 'css')).to be(false)
[or]
expect(is_visible?("locator_value", 'id')).to be(false)
=> Generic ruby method
def is_visible?(value, locator = 'css')
  begin
    # build the locator key as a symbol, e.g. :css, :xpath, :id
    return @driver.first(locator.to_sym => value).displayed?
  rescue => e
    p e.message
    return false
  end
end
Find Elements (list of elements)
Solution #1
=> Variable declared in the page class
proceed_until { @driver.find_elements(:css => 'locator_value').size == 0 }
[or]
proceed_until { @driver.all(:css => 'locator_value').size == 0 }
=> Generic ruby method
def proceed_until
  init = 0
  # re-evaluate the condition each second until it passes or ~9 seconds elapse
  until yield
    sleep 1
    init += 1
    raise ArgumentError.new("Assertion not matching") if init == 9
  end
end

Related

Browsermob Proxy + Watir not capturing traffic continuously

I have BrowserMob Proxy set up correctly with Watir, and it is capturing traffic and saving the HAR file; however, it is not capturing the traffic continuously. Here is what I'm trying to achieve:
1. Go to the homepage.
2. Click on a link to go to another page where I need to wait for some events to happen.
3. Once on the second page, start capturing traffic after the event happens, wait for a specific call to occur, and capture its contents.
What I'm noticing, however, is that it follows all of the above steps, but at step 3 the proxy stops capturing traffic before that call is even made on the page. The HAR that is returned doesn't have that call in it, so the test fails before it even does its job. Here is what the code looks like.
class BMP
  attr_accessor :server, :proxy, :net_har, :sel_proxy

  def initialize
    bm_path = File.path(Support::Paths.cucumber_root + "/browsermob-proxy-2.1.4/bin/browsermob-proxy")
    @server = BrowserMob::Proxy::Server.new(bm_path, {:port => 9999, :log => false, :use_little_proxy => true, :timeout => 100})
    @server.start
    @proxy = @server.create_proxy
    @sel_proxy = @proxy.selenium_proxy
    @proxy.timeouts(:read => 50000, :request => 50000, :dns_cache => 50000)
    @net_har = @proxy.new_har("new_har", :capture_binary_content => true, :capture_headers => true, :capture_content => true)
  end

  def fetch_har_entries(target_url)
    har_logs = File.join(Support::Paths.har_logs, "har_file_#{Time.now.strftime("%m%d%y_%H%M%S")}.har")
    @net_har.save_to har_logs
    index = 0
    while @net_har.entries.count > index do
      entry = @net_har.entries[index]
      if entry.request.url.include?(target_url) && entry.request.method.eql?("GET")
        logs = JSON.parse(entry.response.content.text) if not entry.response.content.text.nil?
        har_logs = File.join(Support::Paths.har_logs, "json_file_#{Time.now.strftime("%m%d%y_%H%M%S")}.json")
        File.open(har_logs, "w") do |json|
          json.write(logs)
        end
        break
      end
      index += 1
    end
  end
end
In my test file I have the following:
Then("I navigate to the homepage") do
  visit(HomePage) do |page|
    page.element.click
  end
end
And("I should wait for event to capture traffic") do
  visit(SecondPage) do |page|
    page.wait_until { page.element2.present? }
    BMP.fetch_har_entries("target/url")
  end
end
What am I missing that is causing the proxy to not capture traffic in its entirety?
In case anyone gets here from a Google search, I figured out how to resolve this on my own (thanks Stack Overflow community for nothing, lol). To resolve the issue, I used a custom retriable loop built around a method called eventually.
logs = nil
eventually(timeout: 110, interval: 1) do
  @net_har = @proxy.new_har("har", capture_binary_content: true, capture_headers: true, capture_content: true)
  @net_har.entries.each do |entry|
    begin
      break if @net_har.entries.index(entry) == @net_har.entries.count
      next unless entry.request.url.include?(target_url) &&
                  entry.request.post_data.text.include?(target_body_text)
      logs = entry.request.post_data.text
      break
    rescue TypeError
      fail("Response body for the network call came back empty")
    end
  end
  raise EOFError if logs.nil?
end
logs
Basically, I'm assuming what was happening was that BMP would only cache or capture about 30 seconds' worth of HAR logs, and if my network event didn't occur during those 30 seconds, I was SOL. So what the above code does is wait for the logs variable to become non-nil; if it is still nil, it raises an EOFError, goes back into the loop, initializes the HAR again, and looks for the network call again. It keeps doing that until it finds the call or the 110 seconds are up. Here is the eventually method I'm using:
def eventually(options = {})
  timeout = options[:timeout] || 30
  interval = options[:interval] || 0.1
  time_limit = Time.now + timeout
  loop do
    # retry the block until it stops raising EOFError or the time limit passes
    begin
      yield
    rescue EOFError => error
    end
    return if error.nil?
    raise error if Time.now >= time_limit
    sleep interval
  end
end

Watir and Jira integration not producing desired output

I wrote a Ruby script to check whether the data layer found in the DOM (via Firebug) for the page www.jira.com matches the hash values declared in my script. Below is the Ruby script I have written:
require 'watir'

browser = Watir::Browser.new(:chrome)
browser.goto('https://jira.com')

JIRA_DATA_LAYER = {
  'jira' => {
    'event' => ['gtm.js', 'gtm.load'],
    'gtm.start' => '1468949036556',
  }
}

def get_jira_data_layer(get_data_layer)
  result = []
  get_data_layer.each do |data_layer|
    data_layer.each do |data_layer_key, data_layer_value|
      result << {"#{data_layer_key}" => data_layer_value}
    end
  end
  return result
end

def compare_jira_data_layer(layer, jira_name)
  message = []
  index = 0
  JIRA_DATA_LAYER[jira_name].each do |jira_key, jira_value|
    if layer.include?({jira_key => jira_value})
      result = 'matches - PASS'
    else
      result = 'matches - FAIL'
    end
    index += 1
    message.push("'#{jira_key} => #{jira_value}' #{result}")
  end
  return message.join("\n")
end

data_layer = browser.execute_script("return dataLayer")
get_data_layer = get_jira_data_layer(data_layer)
compare_data_layer = compare_jira_data_layer(get_data_layer, "jira")
puts compare_data_layer
I am getting the following output:
'event => ["gtm.js", "gtm.load"]' matches - FAIL
'gtm.start => 1468949036556' matches - FAIL
I want the following to be achieved:
'event => gtm.js' matches - FAIL
'gtm.start => 1468949036556' matches - FAIL
You could simply change the value for the event key in JIRA_DATA_LAYER, but I guess it has to stay that way.
Try expanding the if statement that checks the key for this hash, and use the is_a? method to check whether the value for a particular key is an array. If so, loop through each member of that array, as sketched below.
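A minimal sketch of that idea, adapted from the compare_jira_data_layer method above (only the array handling is new):
def compare_jira_data_layer(layer, jira_name)
  message = []
  JIRA_DATA_LAYER[jira_name].each do |jira_key, jira_value|
    # when the expected value is an array, compare each member on its own line
    values = jira_value.is_a?(Array) ? jira_value : [jira_value]
    values.each do |value|
      result = layer.include?({jira_key => value}) ? 'matches - PASS' : 'matches - FAIL'
      message.push("'#{jira_key} => #{value}' #{result}")
    end
  end
  return message.join("\n")
end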

Code not actually asserting in RSpec?

I'm new to Ruby and in various open source software I've noticed a number of "statements" in some RSpec descriptions that appear not to accomplish what they intended, like they wanted to make an assertion, but didn't. Are these coding errors or is there some RSpec or Ruby magic I'm missing? (Likelihood of weirdly overloaded operators?)
The examples, with #??? added to the suspect lines:
(rubinius/spec/ruby/core/array/permutation_spec.rb)
it "returns no permutations when the given length has no permutations" do
  @numbers.permutation(9).entries.size == 0 #???
  @numbers.permutation(9) { |n| @yielded << n }
  @yielded.should == []
end
(discourse/spec/models/topic_link_spec.rb)
it 'works' do
  # ensure other_topic has a post
  post
  url = "http://#{test_uri.host}/t/#{other_topic.slug}/#{other_topic.id}"
  topic.posts.create(user: user, raw: 'initial post')
  linked_post = topic.posts.create(user: user, raw: "Link to another topic: #{url}")
  TopicLink.extract_from(linked_post)
  link = topic.topic_links.first
  expect(link).to be_present
  expect(link).to be_internal
  expect(link.url).to eq(url)
  expect(link.domain).to eq(test_uri.host)
  link.link_topic_id == other_topic.id #???
  expect(link).not_to be_reflection
  ...
(chef/spec/unit/chef_fs/parallelizer.rb)
context "With :ordered => false (unordered output)" do
  it "An empty input produces an empty output" do
    parallelize([], :ordered => false) do
      sleep 10
    end.to_a == [] #???
    expect(elapsed_time).to be < 0.1
  end
(bosh/spec/external/aws_bootstrap_spec.rb)
it "configures ELBs" do
  load_balancer = elb.load_balancers.detect { |lb| lb.name == "cfrouter" }
  expect(load_balancer).not_to be_nil
  expect(load_balancer.subnets.sort {|s1, s2| s1.id <=> s2.id }).to eq([cf_elb1_subnet, cf_elb2_subnet].sort {|s1, s2| s1.id <=> s2.id })
  expect(load_balancer.security_groups.map(&:name)).to eq(["web"])
  config = Bosh::AwsCliPlugin::AwsConfig.new(aws_configuration_template)
  hosted_zone = route53.hosted_zones.detect { |zone| zone.name == "#{config.vpc_generated_domain}." }
  record_set = hosted_zone.resource_record_sets["\\052.#{config.vpc_generated_domain}.", 'CNAME'] # E.g. "*.midway.cf-app.com."
  expect(record_set).not_to be_nil
  record_set.resource_records.first[:value] == load_balancer.dns_name #???
  expect(record_set.ttl).to eq(60)
end
I don't think there is any special behavior. I think you've found errors in the test code.
This doesn't work because there's no assertion, only a comparison:
@numbers.permutation(9).entries.size == 0
It would need to be written as:
@numbers.permutation(9).entries.size.should == 0
Or using the newer RSpec syntax:
expect(@numbers.permutation(9).entries.size).to eq(0)
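The other flagged lines have the same problem and can be rewritten the same way, for example (a sketch using the expect syntax):
expect(link.link_topic_id).to eq(other_topic.id)
expect(parallelize([], :ordered => false) { sleep 10 }.to_a).to eq([])
expect(record_set.resource_records.first[:value]).to eq(load_balancer.dns_name)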

Wait until particular element to disappear

Using Ruby, we can wait for a particular element by doing the following:
wait = Selenium::WebDriver::Wait.new(:timeout => 10)
wait.until { driver.find_element(:class, 'gritter-item') }
but if I want a particular element to disappear from the DOM, I write a method like:
def disappear_element
  begin
    driver.find_element(:class, 'gritter-item')
  rescue Selenium::WebDriver::Error::NoSuchElementError
    true
  else
    false
  end
end
and call it like:
wait.until { disappear_element }
This way I can detect the absence of the element. Is there a better way in Ruby to achieve the same?
You can write disappear_element as follows (using find_elements instead of find_element):
def disappear_element
  driver.find_elements(:class, 'gritter-item').size == 0
end
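One caveat (a sketch, assuming a default implicit wait of 30 seconds as in the first answer above): once the element is gone, each find_elements call will block for the full implicit wait before returning an empty list, so you may want to zero it out around the check and pair the method with an explicit wait:
def disappear_element
  driver.manage.timeouts.implicit_wait = 0
  result = driver.find_elements(:class, 'gritter-item').empty?
  driver.manage.timeouts.implicit_wait = 30 # restore your usual default
  result
end
wait = Selenium::WebDriver::Wait.new(:timeout => 10)
wait.until { disappear_element }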

Workling return store not retrieving value from set

In my app/controllers/model_controller.rb I have (names of models/methods changed to protect the innocent):
def background_sync
  @background_task_uid = Model.async_process_model_cache({:name => 'name'})
  @model_sync = ModelSync.new # Adds a new record in the queue of pending jobs
  @model_sync.process_id = @background_task_uid # Puts the background process id into the new ModelSync record
  @model_sync.save
end
In app/workers/model_worker.rb:
def process_model_cache(options={})
  # [long background task]
  result = Workling::Return::Store.set(options[:uid], 'done')
  result = Workling::Return::Store.get(options[:uid]) #=> 'done'
end
Notice that the set and get are functioning properly here within this worker. The problem is later on...
Back in app/views/model/index.html.rb, I have a prototype helper polling a request to the same controller to determine whether the background job is complete:
<%= periodically_call_remote( :url => { :action => :background_complete }, :frequency => 5, :update => 'status_div') %>
And in app/controllers/model_controller.rb, the function for checking the status of the background job:
def background_complete
  @background_task_uid = ModelSync.find(:last)
  if @background_task_uid
    @background_task_uid.each do |task|
      unless task.process_id == "" || task.process_id.nil?
        @result = Workling::Return::Store.get(task.process_id) #=> nil
        if @result.nil?
          task.destroy
        end
      else
        task.destroy
      end
      unless @result.nil?
        render :text => "<span style='font-size:12px;margin-left:20px;'>"+@result+"</span>"
      else
        @result = "none" if @result.nil?
        render :text => "<span style='font-size:12px;margin-left:20px;'>"+@result+"</span>"
      end
    end
  end
end
And finally, in config/environments/development.rb:
Workling::Return::Store.instance = Workling::Return::Store::MemoryReturnStore.new
Workling::Remote.dispatcher = Workling::Remote::Runners::StarlingRunner.new
(Note that I've tried running this with and without the last line commented. If commented out, Workling reverts to Spawn rather than Starling.)
So the problem is that I get nil from this line in background_complete:
@result = Workling::Return::Store.get(task.process_id) #=> nil
I know it has been a year since you asked this question, but I'm just getting into Starling now myself, so I didn't see this till now.
But it looks like your problem was (from development.rb):
Workling::Return::Store.instance = Workling::Return::Store::MemoryReturnStore.new
Workling::Remote.dispatcher = Workling::Remote::Runners::StarlingRunner.new
Since the Starling runner dispatches the job to a separate worker process, an in-memory return store that lives in the Rails process never sees the values the worker sets. It needed to be:
Workling::Return::Store.instance = Workling::Return::Store::StarlingReturnStore.new
Workling::Remote.dispatcher = Workling::Remote::Runners::StarlingRunner.new
At least for the benefit of those google searchers out there... :)
Found the answer to this question. The fix is to remove the Workling::Return::Store.instance line from config/environments/development.rb
Then replace the get AND set calls as follows:
In app/workers/model_worker.rb:
store = Workling::Return::Store::StarlingReturnStore.new
key, value = @uid, @progress
store.set(key, value)
In app/controllers/models_controller.rb:
store = Workling::Return::Store::StarlingReturnStore.new
@result = store.get(task.process_id)
Obviously, there is a way to declare a shortcut in environments.rb to avoid calling a new StarlingReturnStore each time, but I'm out of my depth because I can't make that work.
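For what it's worth, one shape such a shortcut could take (a sketch, not tested against Workling; the ReturnStore module name is made up) is a small memoized helper that both the worker and the controller call:
# e.g. lib/return_store.rb, loaded from an initializer
module ReturnStore
  def self.instance
    @instance ||= Workling::Return::Store::StarlingReturnStore.new
  end
end
# worker:     ReturnStore.instance.set(key, value)
# controller: @result = ReturnStore.instance.get(task.process_id)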
Anyway, this fix works for me. The output from each background job is reported via set to the get in the controller, which is then captured by the AJAX call and reported to the page via RJS.
Nice!
