Running rspec as part of a program, rather than as a dedicated test suite [duplicate] - ruby

I am trying to execute rspec from Ruby and get the status or number of failures back from a method call. Currently I am running something like this:
system("rspec 'myfilepath'")
but that only gives me the string output of the command. Is there any way to do this directly using objects?

I think the best way is to use RSpec's configuration and a Formatter. This avoids parsing the IO stream and gives you much richer, programmatic access to the results.
RSpec 2:
require 'rspec'
config = RSpec.configuration
# optionally set the console output to colourful
# equivalent to set --color in .rspec file
config.color = true
# use the output to create a formatter
# the JSON formatter is one of the formatters that ship with rspec
json_formatter = RSpec::Core::Formatters::JsonFormatter.new(config.output)
# set up the reporter with this formatter
reporter = RSpec::Core::Reporter.new(json_formatter)
config.instance_variable_set(:@reporter, reporter)
# run the test with rspec runner
# 'my_spec.rb' is the location of the spec file
RSpec::Core::Runner.run(['my_spec.rb'])
Now you can use the json_formatter object to get the results and a summary of the spec run.
# gets an array of examples executed in this test run
json_formatter.output_hash
An example of output_hash value can be found here:
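For instance, the summary portion can be read straight off the hash. The snippet below is a sketch that assumes the hash layout produced by the RSpec 2 JSON formatter (keys such as :summary, :examples and :status):
summary = json_formatter.output_hash[:summary]
puts "#{summary[:example_count]} examples, #{summary[:failure_count]} failures"

# each entry in :examples is a hash describing a single example
json_formatter.output_hash[:examples].each do |example|
  puts "#{example[:status]}: #{example[:full_description]}"
end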
RSpec 3
require 'rspec'
require 'rspec/core/formatters/json_formatter'
config = RSpec.configuration
formatter = RSpec::Core::Formatters::JsonFormatter.new(config.output_stream)
# create reporter with json formatter
reporter = RSpec::Core::Reporter.new(config)
config.instance_variable_set(:@reporter, reporter)
# internal hack
# the API may not be stable, so make sure to lock down your RSpec version
loader = config.send(:formatter_loader)
notifications = loader.send(:notifications_for, RSpec::Core::Formatters::JsonFormatter)
reporter.register_listener(formatter, *notifications)
RSpec::Core::Runner.run(['spec.rb'])
# here's your json hash
p formatter.output_hash
Other Resources
Detailed work through
Gist example

I suggest you take a look at the rspec source code to find out the answer. I think you can start with example_group_runner.
Edit: OK, here is the way:
RSpec::Core::Runner::run(options, err, out)
options is an array of directories or spec files, and err and out are output streams. For example:
RSpec::Core::Runner.run(['spec', 'another_specs'], $stderr, $stdout)
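The return value is the exit status (0 when every example passed), so you can branch on it directly. A minimal sketch:
status = RSpec::Core::Runner.run(['spec'], $stderr, $stdout)
puts status.zero? ? 'all examples passed' : 'there were failures'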

Your problem is that you're using the Kernel#system method to execute your command, which only returns true or false (or nil if the command could not be run at all) depending on the command's exit status. Instead you want to capture the output of the rspec command, i.e. everything that rspec writes to STDOUT. You can then iterate over the output to find and parse the line that tells you how many examples were run and how many failures there were.
Something along the following lines:
require 'open3'

stdin, stdout, stderr = Open3.popen3('rspec spec/models/my_crazy_spec.rb')
total_examples = 0
total_failures = 0
stdout.readlines.each do |line|
  # the summary line looks like "12 examples, 3 failures"
  if line =~ /(\d+) examples?, (\d+) failures?/
    total_examples = $1.to_i
    total_failures = $2.to_i
  end
end
puts total_examples
puts total_failures
This should output the total number of examples and the number of failures; adapt as needed.
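If you also want rspec's exit status rather than just the parsed counts, note that Open3.popen3 returns a fourth value, a wait thread, whose value method gives you the Process::Status. A small sketch:
require 'open3'

stdin, stdout, stderr, wait_thr = Open3.popen3('rspec spec/models/my_crazy_spec.rb')
output = stdout.read
exit_status = wait_thr.value.exitstatus   # 0 when every example passed
puts "rspec exited with #{exit_status}"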

This one prints to the console and captures the messages at the same time. formatter.stop is just a stub; I don't know what it is normally for, but I had to define it in order to use DocumentationFormatter. Also note that the formatter output contains console colouring codes.
require 'rspec'
require 'stringio'

formatter = RSpec::Core::Formatters::DocumentationFormatter.new(StringIO.new)
# stop is stubbed out so the formatter can be used standalone
def formatter.stop(arg1)
end
RSpec.configuration.reporter.register_listener(formatter, :message, :dump_summary, :dump_profile, :stop, :seed, :close, :start, :example_group_started)
RSpec::Core::Runner.run(['test.rb', '-fdocumentation'])
puts formatter.output.string
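Since the captured string contains those colouring (ANSI escape) codes, you may want to strip them before parsing it. A simple sketch:
plain = formatter.output.string.gsub(/\e\[[\d;]*m/, '')
puts plain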

Related

inspec - i want to output structured data to be parsed by another function

I have an inspec test, which works great:
inspec exec scratchpad/profiles/forum_profile --reporter yaml
Trouble is, I want to run this in a script and get the output into an array.
I cannot find the documentation that says which method I need to use to get the same output.
I do this:
def my_func
  http_checker = Inspec::Runner.new()
  http_checker.add_target('scratchpad/profiles/forum_profile')
  http_checker.run
  puts http_checker.report
end
The report method seems to give me a load of equivalent data and much more. Does anyone have documentation or advice on getting the same output as the --reporter yaml response, but from within a script? I want to parse the response so I can share the output with another function.
I've never touched inspec, so take the following with a grain of salt, but according to https://github.com/inspec/inspec/blob/master/lib/inspec/runner.rb#L140, you can provide a reporter option when instantiating the runner. Looking at https://github.com/inspec/inspec/blob/master/lib/inspec/reporters.rb#L11, I think it should be something like ["yaml", {}]. So, could you please try
# ...
http_checker = Inspec::Runner.new(reporter: ["yaml", {}])
# ...
(chances are it will give you the desired output)
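Putting the pieces together, a sketch of the full helper might look like the following (method names are taken from the question and assume inspec is already loaded as in your snippet; I haven't verified the exact shape of what report returns):
def my_func
  http_checker = Inspec::Runner.new(reporter: ["yaml", {}])
  http_checker.add_target('scratchpad/profiles/forum_profile')
  http_checker.run
  # report should now contain structured data you can hand to another function
  http_checker.report
end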

How do I run a program via Ruby and access its standard input and output

I have a completely separate Ruby file that reads from Standard Input and writes to Standard Output.
I have certain test cases that I want to try. How do I pass my inputs to Standard Input to the file, and then test the Standard Output against the expected results?
As an example, here's the stuff I've already figured out:
There's a file that reads a number from standard input, squares it, and writes the result to standard output.
square.rb:
#!/usr/local/bin/ruby -w
input = STDIN.read
# square it
puts input.to_i ** 2
Complete the pass_input_to_file method of test.rb:
require 'minitest/autorun'
def pass_input_to_file(input)
  # complete code here
end

class Test < Minitest::Test
  def test_file
    assert_equal pass_input_to_file(2), 4
  end
end
You can use Ruby's Open3 library to supply STDIN when calling a Ruby script.
require 'open3'
def pass_input_to_file(input)
  output, _status = Open3.capture2('path_to_script', :stdin_data => input)
  output
end
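With that helper in place, the original test case can be completed along these lines. This is a sketch that assumes square.rb sits next to the test file; note that capture2 expects a String for :stdin_data and returns the output as a String, hence the conversions:
require 'minitest/autorun'
require 'open3'

def pass_input_to_file(input)
  # 'square.rb' is the script from above; adjust the path to your layout
  output, _status = Open3.capture2('ruby', 'square.rb', :stdin_data => input.to_s)
  output.to_i   # capture2 returns a string such as "4\n"
end

class Test < Minitest::Test
  def test_file
    assert_equal 4, pass_input_to_file(2)
  end
end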
The easiest way to test this would probably be to have your program look to see if it was passed any arguments first. Something like this:
#!/usr/local/bin/ruby -w
input = ARGV[0] || STDIN.read
# square it
puts input.to_i ** 2
and then you can shell out to test it:
def pass_input_to_file(input)
  `path/to/file #{input}`.to_i
end
Otherwise, I would reach for something like expect to automate a subshell.
As an aside, for more complicated programs, using OptionParser or a CLI gem is probably better than looking at ARGV directly.
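For instance, a minimal OptionParser version of the squaring script could look like this (a sketch; the flag names are made up for illustration):
#!/usr/local/bin/ruby -w
require 'optparse'

options = {}
OptionParser.new do |opts|
  opts.banner = "Usage: square.rb [options]"
  opts.on("-n NUMBER", "--number NUMBER", Integer, "number to square") { |n| options[:number] = n }
end.parse!

# fall back to standard input when no flag was given
input = options[:number] || STDIN.read.to_i
puts input ** 2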

Test CLI with parameters

I assume this is very newbie stuff, but I'm learning Ruby by doing, and I'm developing a small CLI tool that receives a couple of parameters in order to do its job properly. This is my current workflow:
I want to test (using Minitest) all the possible flows:
Exits with 0 and help message is shown if ARGV.count != 2
Exits with 1 if first param is not correct
Exits with 1 if second param is not correct
Exits with 1 if both params are not correct
Exits with 0 and does stuff if all params are correct
Now, if I run the tests, the only thing I see is the help output, as no parameters are being passed.
So, a couple of questions:
How can I pass arguments to the main program in tests?
How can I test the output? (I'm using puts)
Thanks!
Nice diagram!
You can either use helpers like aruba (https://github.com/cucumber/aruba),
or dig into Ruby internals in order to bend them to your will!
# test.rb
require 'stringio'

pseudoIO = StringIO.new
$stdout = pseudoIO                     # capture everything written via puts
puts "hi #{ARGV.join(', ')}"
ARGV.replace ["file1"]                 # overwrite the arguments the script sees
puts "now its #{ARGV.join(', ')}"
abort "captured: #{pseudoIO.string}"   # abort writes to STDERR, so this stays visible
output should be
ruby test.rb "whutup"
# => captured: hi whutup
# => now its file1
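The same tricks translate to Minitest: point $stdout at a StringIO, set ARGV, run your entry point, and assert on the captured string. A sketch, where run_cli is a hypothetical stand-in for your tool's main method:
require 'minitest/autorun'
require 'stringio'

# stand-in for your tool's entry point; in practice you would require your CLI code instead
def run_cli
  if ARGV.count != 2
    puts "Usage: mytool FIRST SECOND"
    return 0
  end
  # ... real work ...
  0
end

class CLITest < Minitest::Test
  def test_help_is_shown_without_args
    ARGV.replace []                       # simulate running with no parameters
    captured = StringIO.new
    original_stdout, $stdout = $stdout, captured
    status = run_cli
    assert_equal 0, status
    assert_match(/Usage/, captured.string)
  ensure
    $stdout = original_stdout             # always restore the real stdout
  end
end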

RSpec and Testing Command Line Args Passed to Script

I'm writing some rspec tests for a command-line Ruby application. I'm trying to build my test suite to allow for testing of missing command line parameters. Specifically, I'd like to stub out how ARGV[0]..ARGV[N] would appear to the application. I've seen similar posts mention ENV.stub; however, I don't see how I can simulate "nameless" args and a given order.
Any help would be appreciated. Thanks.
I simply did this in my test case located in spec:
it "does something" do
arg = "/tmp"
exe = File.expand_path('../bin/site_checker', File.dirname(__FILE__))
stdin, stdout, stderr = Open3.popen3("#{exe} -Ilib #{arg}")
stdout.readlines.should be_empty
end

How to test a script that generates files

I am creating a Rubygem that will let me generate jekyll post files. One of the reasons I am developing this project is to learn TDD. This gem is strictly functional on the command line, and it has to make a series of checks to make sure that it finds the _posts directory. This depends on two things:
Whether or not a location option was passed
Is that location option valid?
If a location option was not passed:
Is the posts dir in the current directory?
Is the posts dir the current working directory?
At that point, I am really having a hard time testing that part of the application. So I have two questions:
Is it acceptable/okay to skip tests for small parts of the application like the one described above?
If not, how do you test file manipulation in Ruby using minitest?
Some projects I've seen implement their command line tools as Command objects (for example Rubygems and my linebreak gem). These objects are initialized with ARGV and simply have a call or execute method that starts the whole process. This enables these projects to put their command line applications into a virtual environment: they can, for example, hold the input and output stream objects in instance variables of the command object to make the application independent of STDOUT/STDIN, which in turn makes it possible to test the input/output of the command line application. In the same way, I imagine you could hold your current working directory in an instance variable to make your command line application independent of your real working directory. You could then create a temporary directory for each test and set it as the working directory for your Command object.
And now some code:
require 'pathname'

class MyCommand
  attr_accessor :input, :output, :error, :working_dir

  def initialize(options = {})
    @input       = options[:input]  ? options[:input]  : STDIN
    @output      = options[:output] ? options[:output] : STDOUT
    @error       = options[:error]  ? options[:error]  : STDERR
    @working_dir = options[:working_dir] ? Pathname.new(options[:working_dir]) : Pathname.pwd
  end

  # Override the puts method to use the specified output stream
  def puts(output = nil)
    @output.puts(output)
  end

  def execute(arguments = ARGV)
    # Change to the given working directory
    Dir.chdir(working_dir) do
      # Analyze the arguments
      if arguments[0] == '--readfile'
        posts_dir = Pathname.new('posts')
        my_file = posts_dir + 'myfile'
        puts my_file.read
      end
    end
  end
end

# Start the command without mockups if the ruby script is called directly
if __FILE__ == $PROGRAM_NAME
  MyCommand.new.execute
end
Now in your test's setup and teardown methods you could do:
require 'pathname'
require 'tmpdir'
require 'stringio'

def setup
  @working_dir = Pathname.new(Dir.mktmpdir('mycommand'))
  @output = StringIO.new
  @error = StringIO.new
  @command = MyCommand.new(:working_dir => @working_dir, :output => @output, :error => @error)
end

def test_some_stuff
  @command.execute(['--readfile'])
  # ...
end

def teardown
  @working_dir.rmtree
end
(In the example I'm using Pathname, which is a really nice object-oriented file system API from Ruby's standard library, and StringIO, which is useful for mocking STDOUT as it's an IO object that streams into a simple String.)
In the actual test you could now use the @working_dir variable to test for the existence or content of files:
path = @working_dir + 'posts' + 'myfile'
path.exist?
path.file?
path.directory?
path.read == "abc\n"
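Wrapped in Minitest assertions, that might look like this (a small sketch reusing the setup above):
path = @working_dir + 'posts' + 'myfile'
assert path.exist?, "expected #{path} to exist"
assert path.file?
assert_equal "abc\n", path.read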
From my experience (and thus this is VERY subjective), I think it's OK sometimes to skip unit testing in areas which are difficult to test. You need to weigh what you get in return against the cost of testing or not testing. My rule of thumb is that the decision not to test a class should be very unusual (roughly less than 1 in 300 classes).
If what you're trying to test is very difficult because of its dependencies on the file system, I think you could try to extract all the bits that interact with the file system.
