In my app, I sometimes write tests using iterations like this:
%w[one two three].each do |number|
  it 'is going to fail under several circumstances' do
    expect(something_from(number))
  end
end
I know that for some of the values in the array I iterate over, this spec can fail temporarily and will be fine again a few days later, so I want to conditionally mark those cases using RSpec's internal pending mechanics, so that I get notified when the spec stops failing. Is there a way to do this?
One can use pending inside an example, guarded by an if, like this:
%w[one two three].each do |number|
  it 'is going to fail under several circumstances' do
    pending("It's not your fault") if we_know_about_this(number)
    expect(something_from(number))
  end
end
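For contrast, skip would silence the example without running its body, so you would never hear about it when the code is fixed; pending (in RSpec 3) still runs the body and reports a failure once the expectation starts passing, which is exactly the notification asked for. A minimal sketch of the difference, with a be_truthy matcher added for illustration since the question omits one:

it 'is silenced rather than tracked' do
  # skip: the body below never runs, so a fix goes unnoticed
  skip("we will not hear about it when this is fixed") if we_know_about_this(number)
  expect(something_from(number)).to be_truthy
end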
I believe you don't need to skip them; you can change the expectation for the particular item instead. When the underlying code is fixed, the test case will fail and you will get notified:
%w[one two three].each do |number|
  it 'is going to fail under several circumstances' do
    if number == 'two'
      expect(something_from(number)).to be false
    else
      expect(something_from(number))
    end
  end
end
When something_from('two') == false
...
Finished in 0.01532 seconds (files took 0.80331 seconds to load)
3 examples, 0 failures
When something_from('two') == true
.F.
Failures:
1) is going to fail under several circumstances
Failure/Error: expect(something_from(number)).to be false
expected false
got true
# ./spec/conditional_spec.rb:7:in `block (3 levels) in <top (required)>'
Finished in 0.0175 seconds (files took 0.79527 seconds to load)
3 examples, 1 failure
Failed examples:
rspec ./spec/conditional_spec.rb[1:2] # is going to fail under several circumstances
UPD
Using pending, so that you are notified (via a failure) when a particular example stops failing:
%w[one two three].each do |number|
  it 'is going to fail under several circumstances' do
    if number == 'two'
      pending
      expect(something_from(number))
    else
      expect(something_from(number))
    end
  end
end
In one of my specs, I find myself repeating lines like these often:
expect(result.status).to be(:success)
expect(result.offers).not_to be_empty
expect(result.comments).to be_empty
To make my tests more succinct and readable, I want to compose these into a line like this:
expect(result).to be_successful
I can do this by creating a custom matcher:
matcher :be_successful do
  match { |result|
    result.status == :success &&
      result.offers.length > 0 &&
      result.comments.empty?
  }
end
But I now have a failing test, and the failure message is completely useless. All it says now is Expected #<Result ...> to be successful.
I know I can override the failure message, but now this solution is getting more complicated than it's worth for saving two lines in every spec example. The original three lines generated useful failure messages; all I wanted to do was combine them into one line.
I could move the 3 lines into a separate function (e.g. assert_successful) and call that from each spec example, but I'd like to keep the matcher syntax.
Can this be done?
According to this
You could do something like this:
RSpec::Matchers.define :be_successful do
  match do |result|
    result.status == :success &&
      result.offers.length > 0 &&
      result.comments.empty?
  end

  failure_message do |result|
    "expected #{result} to be successful"
  end

  failure_message_when_negated do |result|
    "expected #{result} not to be successful"
  end
end
If you reuse this check in more than a few places, it makes sense to create a custom matcher and override the failure messages (it is not much overhead). If you use it only once, it makes sense to keep the plain expectations rather than over-abstracting them.
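If you do go the simpler route, the assert_successful helper the question mentions is enough to keep the built-in failure messages. A minimal sketch, where assert_successful is the question's hypothetical name and result is assumed to be provided by the surrounding group (e.g. via let):

def assert_successful(result)
  # Three separate expectations, so each failure keeps its own built-in message.
  expect(result.status).to be(:success)
  expect(result.offers).not_to be_empty
  expect(result.comments).to be_empty
end

it 'returns a successful result' do
  assert_successful(result)
end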
One way of organizing this is to put the real testing code into a method _check_ok(expected, actual) of its own, which returns either [true] or [false, message]. That method is then called as follows:
RSpec::Matchers.define :check_ok do |expected|
  match do |actual|
    _check_ok(expected, actual)[0]
  end

  failure_message do |actual|
    _check_ok(expected, actual)[1]
  end
end
This repeats the _check_ok call in the failure case,
which normally isn't a problem.
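The answer leaves _check_ok itself undefined; for the result object from the earlier question it might look something like the sketch below (the checks and messages are illustrative, not a fixed API):

def _check_ok(expected, actual)
  # Return [false, message] for the first failing check, [true] if everything passes.
  return [false, "expected status #{expected.inspect}, got #{actual.status.inspect}"] unless actual.status == expected
  return [false, "expected offers not to be empty"] if actual.offers.empty?
  return [false, "expected comments to be empty"] unless actual.comments.empty?
  [true]
end

Used as expect(result).to check_ok(:success), a failure then reports the specific message returned by _check_ok instead of a generic one.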
In prep for Hurricane Irma I wrote a quick trash script to download a bunch of exercises off exercism.io. It works, but there's an error at the call to threads.each that I don't understand. All the code up until threads.each is synchronous, if I understand correctly, so I'm not sure what the best way to fix it is:
rb:14:in '<main>': undefined method 'each' for nil:NilClass (NoMethodError)
It's interesting to me because I get the error but the program still runs as expected, so I'm sure I'm not writing this properly.
language = ARGV[0]
exercises = `exercism list #{language}`.split("\n")

threads = exercises.map do |exercise|
  break if exercise == ''
  Thread.new do
    system("exercism fetch #{language} #{exercise}")
  end
end

threads.each(&:join)
Use next instead of break so that threads is still set even when an exercise is blank: break makes the whole map return nil, while next only skips the current iteration.
Some elements of threads will then be nil (one for each blank exercise, because no thread was started for it). You can use threads.compact.each(&:join) to skip those nil values.
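Putting both suggestions together, the loop could look like this (the rest of the script is unchanged):

threads = exercises.map do |exercise|
  next if exercise == '' # map stores nil for this element instead of a Thread
  Thread.new do
    system("exercism fetch #{language} #{exercise}")
  end
end

threads.compact.each(&:join) # drop the nils before joining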
Or, if you do need the break (stopping at the first blank entry), add to threads inside the loop instead:
threads = []
exercises.each do |exercise|
  break if exercise == ''
  threads << Thread.new do
    system("exercism fetch #{language} #{exercise}")
  end
end
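Another option, assuming the blank entries can simply be ignored rather than treated as the end of the list, is to filter them out before mapping, so that every element of threads really is a Thread:

threads = exercises.reject { |exercise| exercise.strip.empty? }.map do |exercise|
  Thread.new { system("exercism fetch #{language} #{exercise}") }
end

threads.each(&:join)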
I have a test which is a bit like the following. The details aren't important, but I have a method which takes about 10 seconds and returns some data which I want to use several times across several tests. The data won't be any fresher if I fetch it again; I only need to fetch it once. My understanding of let is that it memoizes, so I would expect the following to call slow_thing only once. But I see it called as many times as I refer to slowthing. What am I doing wrong?
describe 'example' do
  def slow_thing
    puts "CALLING ME!"
    sleep(100)
  end

  let(:slowthing) { slow_thing }

  it 'does something slow' do
    expect(slowthing).to be_true
  end

  it 'does another slow thing' do
    expect(slowthing).to be_true
  end
end
When I run the test, I see CALLING ME! as many times as I have assertions or use slowthing.
The documentation states values are not cached across examples:
The value will be cached across multiple calls in the same example but not across examples. [Emphasis mine.]
E.g., also from the docs:
$count = 0
describe "let" do
  let(:count) { $count += 1 }

  it "memoizes the value" do
    count.should == 1
    count.should == 1
  end

  it "is not cached across examples" do
    count.should == 2
  end
end
From https://www.relishapp.com/rspec/rspec-core/v/2-6/docs/helper-methods/let-and-let
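If the data really only needs to be fetched once for the whole group, one workaround is to compute it in a before(:all) hook and read it through let. This is a minimal sketch, assuming the slow call has no per-example state and you accept that the value is shared (not reset) between examples:

describe 'example' do
  def slow_thing
    puts "CALLING ME!"
    sleep(100)
  end

  before(:all) { @slow_result = slow_thing } # runs once for the whole group

  let(:slowthing) { @slow_result }           # each example just reads the shared value

  it 'does something slow' do
    expect(slowthing).to be_true
  end

  it 'does another slow thing' do
    expect(slowthing).to be_true
  end
end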
So I'm working through RSpec problems, and this is the last one I have left. For whatever reason, it's been much harder than all of the others. The three RSpec tests in question are as follows:
it "runs a block N times" do
n = 0
measure(4) do
n += 1
end
n.should == 4
end
it "returns the average time, not the total time, when running multiple times" do
run_times = [8,6,5,7]
fake_time = #eleven_am
Time.stub(:now) { fake_time }
average_time = measure(4) do
fake_time += run_times.pop
end
average_time.should == 6.5
end
it "returns the average time when running a random number of times for random lengths of time" do
fake_time = #eleven_am
Time.stub(:now) { fake_time }
number_of_times = rand(10) + 2
average_time = measure(number_of_times) do
delay = rand(10)
fake_time += delay
end
average_time.should == (fake_time - #eleven_am).to_f/number_of_times
end
And my code is as follows:
require 'time'

def measure(pass = 0)
  start_time = Time.now
  if pass == 0
    yield
  else
    pass.times { |current| result = yield(current) }
  end
  Time.now - start_time
end
(The if/else is present, as an earlier test requires that the code takes one second to execute a program that sleeps for 1 second. In that case, pass would be 0, so the program would jump straight to the yield.)
Full RSpec here
Now, the code DOES pass the "runs a block N times" test, but I feel that the way I have it set up prevents the other two tests from passing. (At the same time, a simple yield won't allow it to pass, because it will get an error from trying to add 1 to nil.)
I'm not looking for a copy/paste answer, but rather whether or not I'm on the right track (or whether my pass.times should be reworked).
If you have any examples that may be able to lead me in the right direction, I'd be more than happy to see them!
You say you "feel" it won't let the other tests pass; do you actually know this? I just ran it, and here are the results:
Performance Monitor
takes about 0 seconds to run an empty block
takes exactly 0 seconds to run an empty block (with stubs)
takes about 1 second to run a block that sleeps for 1 second
takes exactly 1 second to run a block that sleeps for 1 second (with stubs)
runs a block N times
returns the average time, not the total time, when running multiple times (FAILED - 1)
So the last spec fails. It looks like it's just returning the wrong value: it doesn't take into account whether there were multiple passes. So, updating that:
require 'time'

def measure(pass = 0)
  start_time = Time.now
  if pass == 0
    yield
  else
    pass.times { |current| result = yield(current) }
  end
  (Time.now - start_time) / (pass == 0 ? 1 : pass)
end
Now running the specs shows me:
(in /Users/nick/learn_ruby)
Performance Monitor
takes about 0 seconds to run an empty block
takes exactly 0 seconds to run an empty block (with stubs)
takes about 1 second to run a block that sleeps for 1 second
takes exactly 1 second to run a block that sleeps for 1 second (with stubs)
runs a block N times
returns the average time, not the total time, when running multiple times
returns the average time when running a random number of times for random lengths of time
Finished in 1.01 seconds
7 examples, 0 failures
The great part about testing first is that you can just find out whether what you are doing is wrong, and then fix it.
For any future passers-by: this could be written more concisely, using a default value of 1 for pass:

def measure(pass = 1)
  start_time = Time.now
  pass.times { yield }
  (Time.now - start_time) / pass
end
Is there a way in Ruby to have it print the __LINE__ number of the code it's working on (at my script level, not in required gems) if that code takes longer than 9 seconds (adjustable)?
For debugging, I am getting it to print verbose output of what it's trying to do, where it is in the code, etc., rather than sitting silently for long periods of time.
A flaky situation makes it unpredictable how far it gets before something times out, so stepping through it bit by bit doesn't apply here.
EDIT
Something like a trap would work, such that:
The original line number, and hopefully the code itself, get remembered (both the benchmark and timeout gems lose track of __LINE__, for instance... maybe there is a way to push it off to another .rb file and manipulate the stack to include my file and line of interest?)
When the overtime warning prints, execution still continues as if nothing had changed.
require 'timeout'

def do_something
  Timeout::timeout(9) do
    sleep 10
  end
rescue Timeout::Error => e
  puts "Something near line #{__LINE__} is taking too long!"
  # or, count backwards in the method
  puts "Line #{__LINE__ - 5} is taking too long!"
end
do_something
This will stop execution of the block if it runs out of time and raise a Timeout::Error (rescued here).
If you want execution to continue, you might do better with Benchmark:
require 'benchmark'

time = Benchmark.realtime do
  sleep 10
end
puts "Line #{__LINE__ - 2} is slow" if time > 9
One benchmark block can have multiple timers:
Benchmark.bm do |b|
  b.report('sleeping:') { sleep 3 }
  b.report('chomping:') { " I eat whitespace ".chomp }
end
See more about benchmark here:
http://ruby-doc.org/stdlib-1.9.3/libdoc/benchmark/rdoc/Benchmark.html
If you want to keep track of the line number being executed, why don't you try passing it in to a custom method like so:
def timethis(line, &block)
  if Benchmark.realtime(&block) > 2
    puts "Line #{line} is slow"
  end
end

timethis(__LINE__) { sleep 1 }
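If you want the warning to appear while the block is still running, rather than only after it finishes, one approach is a small watchdog thread that prints but never interrupts. This is a minimal sketch, reusing the timethis name and the 9-second threshold from above:

def timethis(line, threshold = 9)
  # Watchdog: fires only if the block is still running after `threshold` seconds.
  watchdog = Thread.new do
    sleep threshold
    puts "Line #{line} has been running for more than #{threshold} seconds..."
  end
  yield
ensure
  watchdog.kill # finished (or failed) in time, or already warned; either way, clean up
end

timethis(__LINE__) { sleep 10 }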