# Variables empty in Gherkin/Cucumber test - Ruby

This is my step definition. Even though @timeout_exception is set while the code runs, it is empty during the test. How can I test whether this variable is set?

```ruby
Then(/^the output should be '(.*)'$/) do |expectedException|
  expect(@timeout_exception).to eq(expectedException)
end
```
This is the output of the `bundle exec cucumber` run:

```
And the output should be 'Execution Timeout Error: This deployment has taken too long to run' # features/step_definitions/my_steps.rb:309
  expected: "Execution Timeout Error: This deployment has taken too long to run"
       got: nil
  (compared using ==)
  (RSpec::Expectations::ExpectationNotMetError)
  ./features/step_definitions/my_steps.rb:310:in `/^the output should be '(.*)'$/'
  features/timeout_lengthy_deploys.feature:25:in `And the output should be 'Execution Timeout Error: This deployment has taken too long to run''

Failing Scenarios:
cucumber features/timeout_lengthy_deploys.feature:11 # Scenario: Normal deploy that times out because it takes too long
```

Selenium has its own waits, and if they are set lower than your expected wait, you will never see your expected wait triggered.

The following sets the maximum wait for a page load to 5 seconds:

```ruby
@browser.driver.manage.timeouts.page_load = 5
```

Script timeout is another (generally used with Ajax):

```ruby
@browser.driver.manage.timeouts.script_timeout = 5
@browser.execute_script("return jQuery.active")
```

Implicit wait is the maximum time Selenium will wait for an operation on an element to complete. If this triggers first, your expectation will fail:

```ruby
@browser.driver.manage.timeouts.implicit_wait = 5
```

I would suggest setting `implicit_wait` higher than your timeout just before the test and setting it back just afterwards. By the way, if your timeout raises an exception, you will need a rescue block.
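Putting that advice together with the original question, here is a hedged sketch (the `When` step, `deploy!`, and the numeric timeouts are illustrative placeholders, not code from the question). Instance variables set in one step live on Cucumber's World object and are visible to later steps of the same scenario, which is what the `Then` step above relies on:

```ruby
# Sketch only: deploy! and the timeout values are placeholders.
When(/^I deploy the application$/) do
  # widen the implicit wait so Selenium doesn't fire before our timeout
  @browser.driver.manage.timeouts.implicit_wait = 120
  begin
    deploy!
  rescue StandardError => e
    # visible to later steps in this scenario via the World object
    @timeout_exception = e.message
  ensure
    @browser.driver.manage.timeouts.implicit_wait = 5 # restore the default
  end
end
```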

Related

RSpec pending groups/tests results in non-zero exit status

I have a set of RSpec groups and examples, with a few of them marked pending using the `pending: 'Some notes about why'` approach.
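For reference, a minimal sketch of that marking approach (RSpec 3 semantics, where a pending example is expected to fail; the example body is illustrative):

```ruby
RSpec.describe 'pending examples' do
  it 'is marked pending with a reason', pending: 'Some notes about why' do
    expect(1 + 1).to eq(3) # fails, so the example is reported as pending
  end
end
```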
My suite runs, and successfully reports:

```
Finished in 35.63 seconds (files took 6.51 seconds to load)
65 examples, 0 failures, 26 pending
```

Unfortunately, it still returns a non-zero exit status, causing CI to treat the run as a failure:

```
petejohanson@xo-mb:~/Dev/xo-web/api $ echo $?
1
```
I don't want to ignore the non-zero code altogether, because then I will miss true test failures.
Has anyone successfully used pending tests without them causing problems with CI systems?

Delayed Job: set max run time of the job, not the worker

I have the following piece of code:

```ruby
Converter.delay.convert("some params")
```

I want this job to run for at most one minute; if that is exceeded, Delayed Job should raise an exception. I tried setting

```ruby
Delayed::Worker.max_run_time = 1.minute
```

but it seems to set a timeout on the worker, not on the job. The `Converter` class is defined in `RAILS_ROOT/lib/my_converter.rb`.
Timeout in the job itself:

```ruby
require 'timeout'

class Converter
  def self.convert(params)
    Timeout.timeout(60) do
      # your processing
    end
  end
end
```
`Delayed::Worker.max_run_time = 1.minute` is the maximum time allowed for each task given to a worker. If the execution of any task takes longer than specified, an exception is raised:

```
execution expired (Delayed::Worker.max_run_time is only 1 minutes)
```

The worker continues to run and processes the next tasks.
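In a Rails app this setting typically lives in an initializer, so it applies to every worker; the file name below is the conventional one, not a requirement:

```ruby
# config/initializers/delayed_job_config.rb (conventional location)
Delayed::Worker.max_run_time = 1.minute # upper bound for each job a worker runs
```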

Ruby: running external processes in parallel and keeping track of exit codes

I have a smoke test that I run against my servers before making them live. At the moment it runs serially and takes around 60s per server. I can run the tests in parallel, and I've done it with Thread.new, which is great because it's much faster, but I lose track of whether each test actually passed or not.
I'm trying to improve this by using Process.spawn to manage my processes.
```ruby
pids = []
uris.each do |uri|
  command = get_http_tests_command("Smoke")
  update_http_tests_config(uri)
  # spawn the command string itself; passing system(command)'s boolean
  # return value to Process.spawn, as originally written, was a bug
  pid = Process.spawn(command)
  pids.push pid
  Process.detach pid # note: a detached pid cannot later be Process.wait-ed on
end
# make sure all pids return a passing status code
# results = Process.waitall
```
I'd like to kick off all my tests but then afterwards make sure that all the tests return a passing status code.
I tried using Process.waitall, but I believe that to be incorrect and meant for forks, not spawns.
After all the processes have completed, I'd like to return true if they all pass, or false if any one of them fails.
Try:

```ruby
statuses = pids.map { |pid| Process.wait(pid, 0); $? }
```

This waits for each of the process ids to finish and checks the result status set in `$?` for each process.
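Building on that, a sketch of the whole flow. Note that the `Process.detach` call from the question must be dropped if you intend to `Process.wait` on the pid: the detach thread reaps the child, so a later wait raises `Errno::ECHILD`. `Process::Status#success?` does the per-process check:

```ruby
# Sketch: spawn each test, wait for all of them, aggregate exit statuses.
# get_http_tests_command and update_http_tests_config are the question's
# own helpers, assumed to exist.
pids = uris.map do |uri|
  update_http_tests_config(uri)
  Process.spawn(get_http_tests_command("Smoke"))
end

statuses = pids.map { |pid| Process.wait(pid); $? }
statuses.all?(&:success?) # => true only if every test passed
```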

Why is there a difference in the output of this Ruby script?

I have the following Ruby scripts.

rubyScript.rb:

```ruby
require "rScript"

t1 = Thread.new { LongRunningOperation(); puts "doneLong" }
sleep 1
shortOperation()
puts "doneShort"
t1.join
```

rScript.rb:

```ruby
def LongRunningOperation()
  puts "In LongRunningOperation method"
  for i in 0..100000
  end
  return 0
end

def shortOperation()
  puts "IN shortOperation method"
  return 0
end
```
The output of the above script (`ruby rubyScript.rb`):

1) With the sleep call:

```
In LongRunningOperation method
doneLong
IN shortOperation method
doneShort
```

2) Without the sleep call, i.e. removing it:

```
In LongRunningOperation method
IN shortOperation method
doneShort
doneLong
```
Why is there a difference in the output? What does sleep do in the above case? Thanks in advance.
The sleep lets the main thread sleep for one second. Your long-running function runs longer than your short-running function, but it is still faster than one second.
If you remove the sleep, your long-running function starts in a new thread and the main thread continues without any wait. It then starts the short-running function, which finishes almost immediately, while the long-running function is still running.
With the sleep left in, it goes as follows: your long-running function starts in a new thread and the main thread continues. The main thread then encounters the sleep command and waits for one second. In that time the long-running function in the other thread is still running, and it finishes. The main thread continues after its sleep and starts the short-running function.

`sleep 1` makes the current thread sleep (i.e. do nothing) for one second. So LongRunningOperation (which, despite the name, still takes less than a second) has enough time to finish before shortOperation even starts.
`sleep 1` makes the main thread wait for one second, which allows t1 to finish before shortOperation is executed.
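If the goal is to guarantee that the long operation finishes first, rather than racing it against a timer, joining the thread before calling shortOperation is more reliable than a guessed sleep. A minimal sketch:

```ruby
t1 = Thread.new { LongRunningOperation(); puts "doneLong" }
t1.join          # block until the long-running operation has finished
shortOperation() # now guaranteed to run second
puts "doneShort"
```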

How to compare results of two RSpec suite runs?

I have a pretty big spec suite (watirspec) that I run against a Ruby gem (safariwatir), and there are a lot of failures:

```
1002 examples, 655 failures, 1 pending
```

When I make a change in the gem and run the suite again, sometimes a lot of previously failing specs pass (52 in this example):

```
1002 examples, 603 failures, 1 pending
```

I would like to know which previously failing specs are now passing, and of course whether any previously passing specs are now failing. What I do now is run the tests with the --format documentation option, output the results to a text file, and then diff the files:

```
rspec --format documentation --out output.txt
```

Is there a better way? Comparing text files is not the easiest way to see what changed.
Just save the results to a file, like you're doing right now, and then diff those results with a diffing tool.

I don't know of anything out there that can do exactly that. That said, if you need it badly enough to spend some time hacking your own formatter, take a look at Spec::Runner::Formatter::BaseFormatter. It is pretty well documented.
I've implemented @Serabe's solution for you. See the gist: https://gist.github.com/1142145.
Put the file my_formatter.rb into your spec folder and run rspec --format MyFormatter. The formatter will compare the current run's results with the previous run's results and output the difference as a table.
NOTE: The formatter creates/overwrites the file result.txt in the current folder.
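The gist targets the RSpec API of its day; for orientation, here is a rough sketch of the same idea against the modern RSpec 3 formatter API. The file format and table layout are illustrative, not the gist's actual code:

```ruby
# my_formatter.rb — sketch of a diffing formatter (RSpec 3 API)
require 'rspec/core'

class MyFormatter
  RSpec::Core::Formatters.register self, :example_passed, :example_failed,
                                   :example_pending, :close

  RESULT_FILE = 'result.txt' # created/overwritten in the current folder

  def initialize(output)
    @output = output
    @current = {} # full description => '.', 'F' or 'P'
  end

  def example_passed(notification)
    @current[notification.example.full_description] = '.'
  end

  def example_failed(notification)
    @current[notification.example.full_description] = 'F'
  end

  def example_pending(notification)
    @current[notification.example.full_description] = 'P'
  end

  def close(_notification)
    previous = load_previous
    changed  = @current.select { |desc, st| previous[desc] && previous[desc] != st }
    if changed.empty?
      @output.puts 'No changes since last run'
    else
      @output.puts "Affected tests (#{changed.size})."
      @output.puts 'PS CS Description'
      changed.each { |desc, st| @output.puts "#{previous[desc]}  #{st}  #{desc}" }
    end
    save_current
  end

  private

  def load_previous
    return {} unless File.exist?(RESULT_FILE)
    File.readlines(RESULT_FILE).to_h { |line| line.chomp.split("\t", 2).reverse }
  end

  def save_current
    File.write(RESULT_FILE, @current.map { |d, s| "#{s}\t#{d}" }.join("\n"))
  end
end
```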
Example usage:

```
D:\Projects\ZPersonal\equatable>rspec spec --format MyFormatter
..........
No changes since last run

Finished in 0.011 seconds
10 examples, 0 failures
```

The 'No changes since last run' line was added by the formatter.
And now I intentionally break one spec and rerun rspec:

```
D:\Projects\ZPersonal\equatable>rspec spec --format MyFormatter
..F.......
Affected tests (1).
PS CS Description
.  F  Equatable#== should be equal to the similar sock
PS - Previous Status
CS - Current Status

Failures:

  1) Equatable#== should be equal to the similar sock
     Failure/Error: subject.should == Sock.new(10, :black, 0)
       expected: #<Sock:0x2fbb930 @size=10, @color=:black, @price=0>
            got: #<Sock:0x2fbbae0 @size=10, @color=:black, @price=20> (using ==)
       Diff:
       @@ -1,2 +1,2 @@
       -#<Sock:0x2fbb930 @color=:black, @price=0, @size=10>
       +#<Sock:0x2fbbae0 @color=:black, @price=20, @size=10>
     # ./spec/equatable_spec.rb:30:in `block (3 levels) in <top (required)>'

Finished in 0.008 seconds
10 examples, 1 failure

Failed examples:

rspec ./spec/equatable_spec.rb:29 # Equatable#== should be equal to the similar sock
```
The table with affected specs was added by the formatter:

```
Affected tests (1).
PS CS Description
.  F  Equatable#== should be equal to the similar sock
PS - Previous Status
CS - Current Status
```

If a spec's status differs between the current and previous run, the formatter outputs the previous status, the current status, and the spec description. '.' stands for passed specs, 'F' for failed, and 'P' for pending.
The code is far from perfect, so feel free to criticize and change it as you want.
Hope this helps. Let me know if you have any questions.
