RSpec "allow" silently terminates "it" block - ruby

I have an RSpec test like this:
RSpec.describe 'outer' do
  describe 'inner' do
    context 'context' do
      it 'my test' do
        puts ":::: BEFORE"
        allow(String).to receive(:broken?).and return(false)
        puts ":::: AFTER"
        expect(0).to eq(1) # Won't be executed
      end
    end
  end
end
Of course, in my own test I do not use allow on String but on one of my own classes; the effect is the same either way. When I run this code, BEFORE is printed, but AFTER is not (nor is anything else after the allow executed). It is as if the allow terminated the test. There is no error message; RSpec just says "1 example, 0 failures".
Other things worth mentioning:
The RSpec mocking is configured with mocks.verify_partial_doubles = true, but the effect is the same regardless of whether the method named in the allow exists or (as in my example) does not.
Warnings are turned on, with config.warnings = true.
Any idea why allow might show this behaviour?

You have a line:
allow(String).to receive(:broken?).and return(false)
This calls allow(String).to(receive(:broken?).and(return(false)))
So you are invoking the return keyword, which silently ends the example.
You should call and_return, not and return (note the underscore instead of a space).
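With the underscore, the whole line parses as one fluent chain, the example runs to completion, and it then fails on the deliberate expect(0).to eq(1). This also explains why verify_partial_doubles never complained about the non-existent method: return fires while the argument to .and is still being evaluated, so the .to call, where the partial double would be verified, is never reached. The corrected line:
allow(String).to receive(:broken?).and_return(false)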

Related

Best way to generate dynamic tests that don't leak using RSpec (LeakyConstantDeclaration issue)?

I am taking over some Ruby code which includes a fairly large test suite. One thing that has been missing, and that I am adding, is RuboCop to fix some problems. I've noticed that numerous tests are set up to be dynamically generated like so:
describe 'some test' do
  SOME_CONSTANT = { some hash }
  SOME_CONSTANT.each do |param1, param2|
    it "does #{param1} and successfully checks #{param2}" do
      # do something with #{param1} or #{param2}
      expect(param2).to eq "blahblah"
    end
  end
end
The issue here is SOME_CONSTANT. With RuboCop this violates the RSpec/LeakyConstantDeclaration cop. The way these tests are set up, these constants can accidentally reassign a global constant and cause random spec failures elsewhere if folks aren't paying attention.
The only workable solution I've found is to change these constants into instance variables. For example:
describe 'some test' do
  @some_constant = { some hash }
  @some_constant.each do |param1, param2|
    it "does #{param1} and successfully checks #{param2}" do
      # do something with #{param1} or #{param2}
      expect(param2).to eq "blahblah"
    end
  end
end
There is a danger that these instance variables can leak into other it/example blocks too (within the same spec file, if a single test changes one), but at least that is limited to the individual *_spec.rb file and won't impact the global scope of the entire test suite. This also satisfies RSpec/LeakyConstantDeclaration.
Would anyone have any better suggestions? One that does not use instance variables and is more modern-RSpec friendly? I've tried let and let!, but the way the tests are set up, any variables defined that way are only accessible within the it blocks. I have also tried stub_const in a before(:context) block, but ran into the same issue: the stubbed constant is only accessible within the it/example context. I even tried RSpec::Mocks.with_temporary_scope, with the same result. Instance variables seem to be the only thing that works in this setup.
Thanks in advance for any helpful suggestions!
The example you provided feels a little too much like programming rather than a test; I'd try to be a little more explicit rather than looping over the behaviors.
Two constructs come to mind:
Using the let (which I know you said you did), but with additional nesting. IIRC, you might be able to add another describe block outside of your each block
Shared examples. This is what I'd try first. I added some pseudocode below
describe 'my test' do
  shared_examples 'does the thing' do |arg1, arg2|
    before { arg1.call }
    it "does #{arg1} and returns #{arg2}" do
      expect(arg2).to eq(true)
    end
  end

  it_behaves_like 'does the thing', Foo.new, bar
  it_behaves_like 'does the thing', Blarge.new, blah
end
You can also combine these with lets and/or a block (and refactor the shared example to reference these rather than passing in the arguments explicitly):
it_behaves_like 'does the thing' do
  let(:method) { :foo }
  let(:result) { 42 }
end
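For completeness, a rough sketch of how the shared example itself might then read those lets instead of positional arguments; the names object, method, and result are illustrative, not from the original suite:

shared_examples 'does the thing' do
  # `object`, `method`, and `result` are supplied via `let` by each caller.
  it 'does the thing and returns the expected result' do
    expect(object.public_send(method)).to eq(result)
  end
end

it_behaves_like 'does the thing' do
  let(:object) { Foo.new }
  let(:method) { :foo }
  let(:result) { 42 }
end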
I agree with Jay: I try to steer away from dynamic tests and prefer to make them simpler and explicit (if more repetitive). However, that may be more of a refactor than you're looking to take on. To address only the issue of variable names, I would consider not assigning the data to a variable at all, e.g.:
[:each, :attribute, :to, :test].each do |attr|
  it "does a thing with #{attr}"...
end
and if it's a larger array/hash:
{
  some
  larger
  hash
}.each do |param1, param2|
  it "does #{param1} and successfully checks #{param2}" do
    # do something with #{param1} or #{param2}
    expect(param2).to eq "blahblah"
  end
end

Can variables be passed after a do/end block?

I am working with a custom testing framework and we are trying to expand some of the assert functionality to include a custom error message if the assert fails. The current assert is called like this:
assert_compare(first_term, :neq, second_term) do
  puts 'foobar'
end
and we want something with the functionality of:
assert_compare(first_term, :neq, second_term, error_message) do
  puts 'foobar'
end
so that if the block fails, the error message will describe the failure. I think this is ugly, however, as the framework we are moving away from did this, and I have to go through a lot of statements that look like:
assert.compare(variable_foo['ARRAY1'][2], variable_bar['ARRAY2'][2], 'This assert failed because someone did something unintelligent when writing the test. Probably me, since in am the one writing this really really long error statement on the same line so that you have to spend a quarter of your day scrolling to the side just to read it')
This type of method call makes it difficult to read, even when using a variable for the error message. I feel like a better way should be possible.
assert_compare(first_term, :neq, second_term) do
  puts 'foobar'
end on_fail: 'This is a nice error message'
This, to me, is the best way to do it, but I don't know how, or whether it is even possible, to accomplish this in Ruby.
The goal here is to make it as aesthetic as possible. Any suggestions?
You could make on_fail a method of whatever assert_compare returns and write:
assert_compare(first_term, :neq, second_term) do
  puts 'foobar'
end.on_fail('This is a nice error message')
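A minimal sketch of that approach; the AssertionResult class and the comparison logic here are hypothetical, not part of any existing framework:

# Hypothetical result object that assert_compare could return.
class AssertionResult
  def initialize(passed)
    @passed = passed
  end

  # Report the custom message only when the assertion failed.
  def on_fail(message)
    warn message unless @passed
    self
  end
end

def assert_compare(first_term, operator, second_term)
  passed =
    case operator
    when :eq  then first_term == second_term
    when :neq then first_term != second_term
    else false
    end
  yield if block_given?
  AssertionResult.new(passed)
end

assert_compare(1, :neq, 1) { puts 'foobar' }.on_fail('This is a nice error message')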
In short, no. Methods in Ruby only take a block as the final parameter. As Chuck mentioned, you could make on_fail a method of whatever assert_compare returns, and that is a good solution. The solution I've come up with is not what you are looking for, but it works:
def test(block, param)
  block.call
  puts param
end

test proc { puts "hello" }, "hi"
will print:
hello
hi
What I've done here is create a Proc (which is essentially a block object) and pass it as a regular parameter.

override namespaced puts only works after overriding Kernel.puts?

Sorry for the vague question title, but I have no clue what causes the following:
module Capistrano
  class Configuration
    def puts(string)
      ::Kernel.puts 'test'
    end
  end
end
Now when Capistrano calls puts, I don't see "test", but I see the original output.
However, when I also add this:
module Kernel
  def puts(string)
    ::Kernel.puts 'what gives?'
  end
end
Now, suddenly, puts actually returns "test", not "what gives?", not the original content, but "test".
Is there a reasonable explanation why this is happening (besides my limited understanding of the inner-workings of Ruby Kernel)?
Things that look off to me (but somehow "seem to work"):
I would expect the first block to return 'test', but it didn't
I would expect the combination of the two blocks to return 'what gives?', but it returns 'test'?
The way I override the Kernel.puts seems like a never-ending loop to me?
module Capistrano
  class Configuration
    def puts(string)
      ::Kernel.puts 'test'
    end

    def an_thing
      puts "foo"
    end
  end
end

Capistrano::Configuration.new.an_thing
gives the output:
test
The second version also gives the same output. The reason is that you're defining an instance-level method rather than a class-level method (this post seems to do a good job of explaining the differences). A slightly different version:
module Kernel
  def self.puts(string)
    ::Kernel.puts 'what gives?'
  end
end
produces the following, because it causes infinite recursion, as you expected:
/tmp/foo.rb:14:in `puts': stack level too deep (SystemStackError)
from /tmp/foo.rb:14:in `puts'
from /tmp/foo.rb:4:in `puts'
from /tmp/foo.rb:7:in `an_thing'
from /tmp/foo.rb:18
shell returned 1
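If the goal really is to replace the class-level ::Kernel.puts without recursing, one option is to save the original singleton method under another name before redefining it. A sketch (the alias name here is arbitrary); note that bare puts calls still go through the untouched instance method Kernel#puts:

module Kernel
  class << self
    # Save the original singleton Kernel.puts so the replacement can
    # delegate to it instead of calling itself.
    alias_method :puts_without_prefix, :puts

    def puts(*args)
      puts_without_prefix('what gives?')
    end
  end
end

::Kernel.puts 'anything'  # prints "what gives?" instead of blowing the stack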
I use an answer rather than a comment because of its editing capabilities. You can edit it to add more information and I may delete it later.
Now when Capistrano calls puts, I don't see "test", but I see the original output.
It's difficult to answer your question without seeing how Capistrano calls puts, and which puts it calls. I would say it's normal if puts displays its parameter using the original Kernel#puts (it is not clear what you mean by "original output"; I assume you mean the string given to puts).
I would expect the first block to return 'test', but it didn't
The only way I see to call the instance method puts defined in the class Configuration in the module Capistrano is:
Capistrano::Configuration.new.puts 'xxx'
or
my_inst_var = Capistrano::Configuration.new
and somewhere else
my_inst_var.puts 'xxx'
and of course it prints test. Again, without seeing the puts statement whose result surprises you, it's impossible to tell what's going on.
I would expect the combination of the two blocks to return 'what gives?', but it returns 'test'?
The second point is mysterious, and I would need to see the code calling puts as well as the console output.

RSpec test for a module

I'm brand new to RSpec and TDD. I was wondering if someone might help me with creating a test well-suited for this Module:
module Kernel
  # define a new 'puts' which appends "This will be appended!" to all puts output
  def puts_with_append(*args)
    puts_without_append args.map { |a| a + "This will be appended!" }
  end

  # back up the name of the old puts
  alias_method :puts_without_append, :puts

  # now set our version as the new puts
  alias_method :puts, :puts_with_append
end
I'd like my test to check that the output from a puts call ends with "This will be appended!". Would that be a sufficient test? How would I do that?
The best tests test what you're trying to achieve, not how you achieve it... Tying tests to implementation makes your tests brittle.
So, what you're trying to achieve with this method is a change to "puts" whenever your extension is loaded. Testing the method puts_with_append doesn't achieve this goal... If you later accidentally re-alias that to something else, your desired puts change won't work.
However, testing this without using an implementation detail would be rather difficult, so instead, we can try to push the implementation details down to somewhere they won't change, like STDOUT.
Just the Test Content
$stdout.stub!(:write)
$stdout.should_receive(:write).with("OneThis will be appended!")
puts "One"
Full Test
I'm going to turn this into a blog post within the next day or so, but I think you should also consider that you have a desired result for both one argument and many arguments, and your tests should be easy to read. The ultimate structure I'd use is:
require "rspec"
require "./your_extention.rb"
describe Kernel do
describe "#puts (overridden)" do
context "with one argument" do
it "should append the appropriate string" do
$stdout.stub!(:write)
$stdout.should_receive(:write).with("OneThis will be appended!")
puts "One"
end
end
context "with more then one argument" do
it "should append the appropriate string to every arg" do
$stdout.stub!(:write)
$stdout.should_receive(:write).with("OneThis will be appended!")
$stdout.should_receive(:write).with("TwoThis will be appended!")
puts("One", "Two")
end
end
end
end
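The stub!/should_receive calls above use the old RSpec syntax; current RSpec (3+) favors allow/expect, and the output matcher gives another way to express the same check without stubbing $stdout.write at all. A sketch, assuming the extension above is loaded from a file named your_extension.rb:

require "rspec"
require "./your_extension.rb"

RSpec.describe Kernel do
  describe "#puts (overridden)" do
    it "appends the string to a single argument" do
      expect { puts "One" }.to output("OneThis will be appended!\n").to_stdout
    end

    it "appends the string to every argument" do
      expect { puts("One", "Two") }
        .to output("OneThis will be appended!\nTwoThis will be appended!\n").to_stdout
    end
  end
end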

Optional parens in Ruby for method with uppercase start letter?

I just started using IronRuby (though the behaviour seems consistent when I tested it in plain Ruby) for a DSL in my .NET application, and as part of this I'm defining methods to be called from the DSL via define_method.
However, I've run into an issue regarding optional parens when calling methods starting with an uppercase letter.
Given the following program:
class DemoClass
  define_method :test do puts "output from test" end
  define_method :Test do puts "output from Test" end

  def run
    puts "Calling 'test'"
    test()
    puts "Calling 'test'"
    test
    puts "Calling 'Test()'"
    Test()
    puts "Calling 'Test'"
    Test
  end
end

demo = DemoClass.new
demo.run
Running this code in a console (using plain ruby) yields the following output:
ruby .\test.rb
Calling 'test'
output from test
Calling 'test'
output from test
Calling 'Test()'
output from Test
Calling 'Test'
./test.rb:13:in `run': uninitialized constant DemoClass::Test (NameError)
from ./test.rb:19:in `<main>'
I realize that the Ruby convention is that constants start with an uppercase letter and that the general naming convention for methods in Ruby is lowercase. But the parens are really killing my DSL syntax at the moment.
Is there any way around this issue?
This is just part of Ruby's ambiguity resolution.
In Ruby, methods and variables live in different namespaces, therefore there can be methods and variables (or constants) with the same name. This means that, when using them, there needs to be some way to distinguish them. In general, that's not a problem: messages have receivers, variables don't. Messages have arguments, variables don't. Variables are assigned to, messages aren't.
The only problem is when you have no receiver, no argument and no assignment. Then, Ruby cannot tell the difference between a receiverless message send without arguments and a variable. So, it has to make up some arbitrary rules, and those rules are basically:
for an ambiguous token starting with a lowercase letter, prefer to interpret it as a message send, unless you positively know it is a variable (i.e. the parser (not(!) the interpreter) has seen an assignment before)
for an ambiguous token starting with an uppercase letter, prefer to interpret it as a constant
Note that for a message send with arguments (even if the argument list is empty), there is no ambiguity, which is why your third example works.
test(): obviously a message send, no ambiguity here
test: might be a message send or a variable; resolution rules say it is a message send
Test(): obviously a message send, no ambiguity here
self.Test: also obviously a message send, no ambiguity here (a short sketch of this case follows the list)
Test: might be a message send or a constant; resolution rules say it is a constant
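Since the explicit-receiver form is unambiguous, it also gives a paren-free way to call the capitalized DSL method; a minimal sketch based on the class above:

class DemoClass
  define_method :Test do puts "output from Test" end

  def run
    self.Test  # explicit receiver: parsed as a message send, no parens needed
  end
end

DemoClass.new.run  # prints "output from Test"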
Note that those rules are a little bit subtle, for example here:
if false
  foo = 'This will never get executed'
end
foo # still this will get interpreted as a variable
The rules say that whether an ambiguous token gets interpreted as a variable or a message send is determined by the parser and not the interpreter. So, because the parser has seen foo = whatever, it tags foo as a variable, even though the code will never get executed and foo will evaluate to nil as all uninitialized variables in Ruby do.
TL;DR summary: you're SOL.
What you could do is override const_missing to translate into a message send. Something like this:
class DemoClass
  def test; puts "output from test" end
  def Test; puts "output from Test" end

  def run
    puts "Calling 'test'"
    test()
    puts "Calling 'test'"
    test
    puts "Calling 'Test()'"
    Test()
    puts "Calling 'Test'"
    Test
  end

  def self.const_missing(const)
    send const.downcase
  end
end

demo = DemoClass.new
demo.run
Except this obviously won't work: const_missing is defined on DemoClass, so when const_missing runs, self is DemoClass, which means it tries to call DemoClass.test when it should be calling the instance method DemoClass#test via demo.test.
I don't know how to easily solve this.
