Is there a matcher for comparing REXML elements for logical equality in RSpec? I tried writing a custom matcher that converts them to formatted strings, but it fails if the attribute order is different. (As noted in the XML spec, the order of attributes should not be significant.)
I could grind through writing a custom matcher that compares the name, namespace, child nodes, attributes, etc., etc., but this seems time-consuming and error-prone, and if someone else has already done it I'd rather not reinvent the wheel.
I ended up using the equivalent-xml gem and writing an RSpec custom matcher to convert the REXML to Nokogiri, compare with equivalent-xml, and pretty-print the result if needed.
The test assertion is pretty simple:
expect(actual).to be_xml(expected)
or
expect(actual).to be_xml(expected, path)
if you want to display the file path or some sort of identifier (e.g. if you're comparing a lot of documents).
The match code is a little fancier than it needs to be because it handles REXML, Nokogiri, and strings.
module XMLMatchUtils
  def self.to_nokogiri(xml)
    return nil unless xml
    case xml
    when Nokogiri::XML::Element
      xml
    when Nokogiri::XML::Document
      xml.root
    when String
      to_nokogiri(Nokogiri::XML(xml, &:noblanks))
    when REXML::Element
      to_nokogiri(xml.to_s)
    else
      raise "be_xml() expected XML, got #{xml.class}"
    end
  end

  def self.to_pretty(nokogiri)
    return nil unless nokogiri
    out = StringIO.new
    save_options = Nokogiri::XML::Node::SaveOptions::FORMAT | Nokogiri::XML::Node::SaveOptions::NO_DECLARATION
    nokogiri.write_xml_to(out, encoding: 'UTF-8', indent: 2, save_with: save_options)
    out.string
  end

  def self.equivalent?(expected, actual, filename = nil)
    expected_xml = to_nokogiri(expected) || raise("expected value #{expected || 'nil'} does not appear to be XML#{" in #{filename}" if filename}")
    actual_xml = to_nokogiri(actual)
    EquivalentXml.equivalent?(expected_xml, actual_xml, element_order: false, normalize_whitespace: true)
  end

  def self.failure_message(expected, actual, filename = nil)
    expected_string = to_pretty(to_nokogiri(expected))
    actual_string = to_pretty(to_nokogiri(actual)) || actual

    # Uncomment this to dump expected/actual to file for manual diffing
    #
    # now = Time.now.to_i
    # FileUtils.mkdir('tmp') unless File.directory?('tmp')
    # File.open("tmp/#{now}-expected.xml", 'w') { |f| f.write(expected_string) }
    # File.open("tmp/#{now}-actual.xml", 'w') { |f| f.write(actual_string) }

    diff = Diffy::Diff.new(expected_string, actual_string).to_s(:text)
    "expected XML differs from actual#{" in #{filename}" if filename}:\n#{diff}"
  end

  def self.to_xml_string(actual)
    to_pretty(to_nokogiri(actual))
  end

  def self.failure_message_when_negated(actual, filename = nil)
    "expected not to get XML#{" in #{filename}" if filename}:\n\t#{to_xml_string(actual) || 'nil'}"
  end
end
The actual matcher is fairly straightforward:
RSpec::Matchers.define :be_xml do |expected, filename = nil|
  match do |actual|
    XMLMatchUtils.equivalent?(expected, actual, filename)
  end

  failure_message do |actual|
    XMLMatchUtils.failure_message(expected, actual, filename)
  end

  failure_message_when_negated do |actual|
    XMLMatchUtils.failure_message_when_negated(actual, filename)
  end
end
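For illustration, a spec using the matcher might look something like this (the report markup and the 'report.xml' label are invented for the example, and it assumes the matcher and XMLMatchUtils are loaded from your spec helper along with the nokogiri, equivalent-xml, and diffy gems):

require 'rexml/document'

describe 'report generator' do
  it 'treats attribute order as insignificant' do
    expected = '<report id="1" lang="en"><title>Annual</title></report>'
    actual = REXML::Document.new('<report lang="en" id="1"><title>Annual</title></report>').root

    expect(actual).to be_xml(expected, 'report.xml')
  end
end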
I want to load a JSON file and pull a nested value out of it using a path that's passed in as a string, e.g.:
def do_the_thing(file_to_load, hash_path)
  file = File.read(file)
  data = JSON.parse(file, { symbolize_names: true })
  data[sections.to_sym]
end

do_the_thing(file_I_want, '[:foo][:bar][0]')
I've tried a few approaches but haven't managed it so far.
Thanks in advance for any help :)
Assuming you just mistyped the parameter names...
Let's assume our file is:
// test.json
{
  "foo": {
    "bar": ["foobar"]
  }
}
Recommended solution
Does your param really need to be a string?
If your code can be more flexible and you can pass the arguments as plain Ruby objects, you can use the Hash#dig method:
require 'json'

def do_the_thing(file, *hash_path)
  file = File.read(file)
  data = JSON.parse(file, symbolize_names: true)
  data.dig(*hash_path)
end
do_the_thing('test.json', :foo, :bar, 0)
You should get:
"foobar"
It should work fine!
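As a quick illustration of what Hash#dig does with missing keys (the hash literal mirrors test.json above):

data = { foo: { bar: ['foobar'] } }
data.dig(:foo, :bar, 0) #=> "foobar"
data.dig(:foo, :baz, 0) #=> nil (dig returns nil instead of raising when a key is absent)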
Read the rest of the answer if that doesn't satisfy your question
Alternative solution (using the same argument)
If you REALLY need to pass that argument as a string, you can
parse it into the arguments the first solution expects. It won't be small or fancy code, but it will work:
require 'json'

BRACKET_REGEX = /(\[[^\[]*\])/.freeze

# Converts the literal string to its corresponding Ruby value
def treat_type(param)
  # Remove the surrounding brackets from the string
  # (you could do this step directly in the regex if you want to)
  param = param[1..-2]
  case param[0]
  # Checks if it is a string
  when '\''
    param[1..-2]
  # Checks if it is a symbol
  when ':'
    param[1..-1].to_sym
  else
    begin
      Integer(param)
    rescue ArgumentError
      param
    end
  end
end

# Converts your param into the argument list accepted by the 'dig' method
def string_to_args(param)
  # The scan method breaks the regex matches into an array
  param.scan(BRACKET_REGEX).flatten.map { |match| treat_type(match) }
end
def do_the_thing(file, hash_path)
  hash_path = string_to_args(hash_path)
  file = File.read(file)
  data = JSON.parse(file, symbolize_names: true)
  data.dig(*hash_path)
end
so:
do_the_thing('test.json', '[:foo][:bar][0]')
returns
"foobar"
This solution, though, is open to bugs when the hash_path string doesn't follow the expected pattern, and handling those cases could make the code even longer.
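For reference, a couple of examples of what string_to_args produces with the code above:

string_to_args('[:foo][:bar][0]')   #=> [:foo, :bar, 0]
string_to_args("[:foo]['bar'][42]") #=> [:foo, "bar", 42]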
Shortest solution (Not safe)
You can use the Kernel#eval method, which I EXTREMELY discourage for security reasons; read the documentation and understand its dangers before using it.
require 'json'

def do_the_thing(file, hash_path)
  file = File.read(file)
  data = JSON.parse(file, symbolize_names: true)
  eval("data#{hash_path}")
end
do_the_thing('test.json', '[:foo][:bar][0]')
If all you were trying to do is extract the JSON data into an object, you might use either of the following approaches:
def do_the_thing(file_to_load)
  file = File.read(file_to_load)
  data = JSON.parse(file, { symbolize_names: true })
  data
end

do_the_thing(file_I_want)[:foo][:bar][0]
or use the dig method of Hash:
def do_the_thing(file_to_load, sections)
  file = File.read(file_to_load)
  data = JSON.parse(file, { symbolize_names: true })
  data.dig(*sections)
end

do_the_thing(file_I_want, [:foo, :bar, 0])
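The splat expands the sections array into dig's argument list; with a hash shaped like the test.json example above, it behaves like this:

sections = [:foo, :bar, 0]
data = { foo: { bar: ['foobar'] } }
data.dig(*sections) #=> "foobar", same as data.dig(:foo, :bar, 0)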
I have been trying to use Minitest to test my code (full repo) but am having trouble with one method which downloads a SHA1 hash from a .txt file on a website and returns the value.
Method:
def download_remote_sha1
  @log.info('Downloading Elasticsearch SHA1.')
  @remote_sha1 = ''
  Kernel.open(@verify_url) do |file|
    @remote_sha1 = file.read
  end
  @remote_sha1 = @remote_sha1.split(/\s\s/)[0]
  @remote_sha1
end
You can see that I log what's happening to the command line, create a variable to hold the SHA1 value, and open the URL (e.g. https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.4.2.deb.sha1.txt) to read its contents.
I then split the string so that I only have the SHA1 value.
The problem is that during a test I want to stub Kernel.open, which uses OpenURI to open the URL. I would like to ensure that I'm not actually reaching out to download any file, but rather passing my own mock IO object to the block, testing just that the string gets split correctly.
I attempted it with the block below, but when @remote_sha1 = file.read runs, file is nil.
@mock_file = Minitest::Mock.new
@mock_file.expect(:read, 'd377e39343e5cc277104beee349e1578dc50f7f8 elasticsearch-1.4.2.deb')

Kernel.stub :open, @mock_file do
  @downloader = ElasticsearchUpdate::Downloader.new(hash, true)
  @downloader.download_remote_sha1.must_equal 'd377e39343e5cc277104beee349e1578dc50f7f8'
end
I was working on this question too, but matt figured it out first. To add to what matt posted:
When you write:
Kernel.stub(:open, @mock_file) do
  # block code
end
...that means when Kernel.open() is called--in any code, anywhere before the stub() block ends--the return value of Kernel.open() will be @mock_file. However, you never use the return value of Kernel.open() in your code:
Kernel.open(@verify_url) do |f|
  @remote_sha1 = f.read
end
If you wanted to use the return value of Kernel.open(), you would have to write:
return_val = Kernel.open(@verify_url) do |f|
  @remote_sha1 = f.read
end

# do something with return_val
Therefore, the return value of Kernel.open() is irrelevant in your code--which means the second argument of stub() is irrelevant.
A careful examination of the source code for stub() reveals that stub() takes a third argument--an argument which will be passed to a block specified after the stubbed method call. You, in fact, have specified a block after your stubbed Kernel.open() method call:
stubbed method call  start of block
       |                  |
       V                  V
Kernel.open(@verify_url) do |f|
  @remote_sha1 = f.read
end
 ^
 |
end of block
So, in order to pass @mock_file to the block you need to specify it as the third argument to Kernel.stub():
Kernel.stub(:open, 'irrelevant', @mock_file) do
end
Here is a full example for future searchers:
require 'minitest/autorun'

class Dog
  def initialize
    @verify_url = 'http://www.google.com'
  end

  def download_remote_sha1
    @remote_sha1 = ''
    Kernel.open(@verify_url) do |f|
      @remote_sha1 = f.read
    end
    # puts @remote_sha1[0..300]
    @remote_sha1 = @remote_sha1.split(" ")[0] # Using a single space for the split() pattern will split on contiguous whitespace.
  end
end

# Dog.new.download_remote_sha1

describe 'downloaded file' do
  it 'should be an sha1 code' do
    @mock_file = Minitest::Mock.new
    @mock_file.expect(:read, 'd377e39343e5cc277104beee349e1578dc50f7f8 elasticsearch-1.4.2.deb')

    Kernel.stub(:open, 'irrelevant', @mock_file) do
      @downloader = Dog.new
      @downloader.download_remote_sha1.must_equal 'd377e39343e5cc277104beee349e1578dc50f7f8'
    end
  end
end
The second argument to stub is what you want the return value to be for the duration of your test, but the way Kernel.open is used here requires the value it yields to the block to be changed instead.
You can achieve this by providing a third argument. Try changing the call to Kernel.stub to
Kernel.stub :open, true, @mock_file do
  # ...
Note the extra argument true, so that @mock_file is now the third argument and will be yielded to the block. The actual value of the second argument doesn't really matter in this case; you might want to use @mock_file there too, to more closely correspond to how open behaves.
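Applied to the test from the question, the stub call might look like this (reusing @mock_file for the second argument too, as suggested; the ElasticsearchUpdate::Downloader setup is the question's own):

@mock_file = Minitest::Mock.new
@mock_file.expect(:read, 'd377e39343e5cc277104beee349e1578dc50f7f8 elasticsearch-1.4.2.deb')

Kernel.stub :open, @mock_file, @mock_file do
  @downloader = ElasticsearchUpdate::Downloader.new(hash, true)
  @downloader.download_remote_sha1.must_equal 'd377e39343e5cc277104beee349e1578dc50f7f8'
end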
I'm writing a test for one of my classes which has the following constructor:
def initialize(filepath)
  @transactions = []
  File.open(filepath).each do |line|
    next if $. == 1
    elements = line.split(/\t/).map { |e| e.strip }
    transaction = Transaction.new(elements[0], Integer(elements[1]))
    @transactions << transaction
  end
end
I'd like to test this by using a fake file, not a fixture. So I wrote the following spec:
it "should read a file and create transactions" do
filepath = "path/to/file"
mock_file = double(File)
expect(File).to receive(:open).with(filepath).and_return(mock_file)
expect(mock_file).to receive(:each).with(no_args()).and_yield("phrase\tvalue\n").and_yield("yo\t2\n")
filereader = FileReader.new(filepath)
filereader.transactions.should_not be_nil
end
Unfortunately this fails because I'm relying on $. to equal 1 and increment on every line and for some reason that doesn't happen during the test. How can I ensure that it does?
Global variables make code hard to test: $. is only updated by real IO reads, so a stubbed each never touches it. You could use each_with_index instead:
File.open(filepath) do |file|
  file.each_with_index do |line, index|
    next if index == 0 # zero based
    # ...
  end
end
But it looks like you're parsing a CSV file with a header line. Therefore I'd use Ruby's CSV library:
require 'csv'

CSV.foreach(filepath, col_sep: "\t", headers: true, converters: :numeric) do |row|
  @transactions << Transaction.new(row['phrase'], row['value'])
end
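Putting that together, a sketch of the constructor rewritten around CSV.foreach might look like this (it assumes the file has "phrase" and "value" header columns and the Transaction class from the question):

require 'csv'

class FileReader
  attr_reader :transactions

  def initialize(filepath)
    @transactions = []
    # headers: true consumes the header line, so no manual line-number check is needed
    CSV.foreach(filepath, col_sep: "\t", headers: true, converters: :numeric) do |row|
      @transactions << Transaction.new(row['phrase'], row['value'])
    end
  end
end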
You can (and should) use IO#each_line together with Enumerable#each_with_index which will look like:
File.open(filepath).each_line.each_with_index do |line, i|
  next if i == 0 # each_with_index is zero-based
  # …
end
Or you can drop the first line, and work with others:
File.open(filepath).each_line.drop(1).each do |line|
  # …
end
If you don't want to mess around with mocking File for each test you can try FakeFS which implements an in memory file system based on StringIO that will clean up automatically after your tests.
This way your tests don't need to change if your implementation changes.
require 'fakefs/spec_helpers'

describe "FileReader" do
  include FakeFS::SpecHelpers

  def stub_file(file, content)
    FileUtils.mkdir_p File.dirname(file)
    File.open(file, 'w') { |f| f.write(content) }
  end

  it "should read a file and create transactions" do
    file_path = "path/to/file"
    stub_file file_path, "phrase\tvalue\nyo\t2\n"

    filereader = FileReader.new(file_path)
    expect(filereader.transactions).to_not be_nil
  end
end
Be warned: FakeFS reimplements most of Ruby's file access, passing calls back to the original methods where possible. If you are doing anything advanced with files you may start running into bugs in the FakeFS implementation. I got stuck on some binary byte read/write operations that weren't implemented in FakeFS quite the way Ruby implements them.
I've got a complex XML file, and I want to extract the content of a specific tag from it.
I'm using a Ruby script with the XmlSimple gem. I retrieve the XML file with an HTTP request, then strip out the unnecessary tags and pull out the info I need. Here's the script itself:
data = XmlSimple.xml_in(response.body)
hash_1 = Hash[*data['results']]

def find_value(hash, value)
  hash.each do |key, val|
    if val[0].kind_of? Hash then
      find_value(val[0], value)
    else
      if key.to_s.eql? value
        puts val
      end
    end
  end
end

hash_1['book'].each do |arg|
  find_value(arg, "title")
  puts("\n")
end
The problem is that when I replace puts val with return val and then call find_value with puts find_value(arg, "title"), I get the whole contents of hash_1['book'] on the screen.
How to correct the find_value method?
A "complex XML file" and XmlSimple don't mix. Your task would be solved a lot easier with Nokogiri, and be faster as well:
require 'nokogiri'
doc = Nokogiri::XML(response.body)
puts doc.xpath('//book/title/text()')
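If you want the titles collected into an array rather than printed, the same NodeSet can be mapped:

titles = doc.xpath('//book/title').map(&:text)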
I am parsing a text file and want to be able to easily extend the set of tokens that can be recognized. Currently I have the following:
if line =~ /!DOCTYPE/
  puts "token doctype " + line[0,20]
  @ast[:doctype] << line
elsif line =~ /<html/
  puts "token main HTML start " + line[0,20]
  html_scanner_off = false
elsif line =~ /<head/ and not html_scanner_off
  puts "token HTML header starts " + line[0,20]
  html_header_scanner_on = true
elsif line =~ /<title/
  puts "token HTML title " + line[0,20]
  @ast[:HTML_header_title] << line
end
Is there a way to write this with a yield block, e.g. something like:
scanLine("title", :HTML_header_title, line)
?
Don't parse HTML with regexes.
That aside, there are several ways to do what you're talking about. One:
class Parser
  class Token
    attr_reader :name, :pattern, :block

    def initialize(name, pattern, block)
      @name = name
      @pattern = pattern
      @block = block
    end

    def process(line)
      @block.call(self, line)
    end
  end

  def initialize
    @tokens = []
  end

  def scanLine(line)
    @tokens.find { |t| line =~ t.pattern }.process(line)
  end

  def addToken(name, pattern, &block)
    @tokens << Token.new(name, pattern, block)
  end
end
p = Parser.new
p.addToken("title", /<title/) {|token, line| puts "token #{token.name}: #{line}"}
p.scanLine('<title>This is the title</title>')
This has some limitations (like not checking for duplicate tokens), but works:
$ ruby parser.rb
token title: <title>This is the title</title>
$
If you're intending to parse HTML content, you might want to use one of the HTML parsers like nokogiri (http://nokogiri.org/) or Hpricot (http://hpricot.com/) which are really high-quality. A roll-your-own approach will probably take longer to perfect than figuring out how to use one of these parsers.
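For instance, a minimal sketch along those lines (the file name is just a placeholder) that pulls out the title with Nokogiri rather than a regexp:

require 'nokogiri'

doc = Nokogiri::HTML(File.read('page.html'))
puts "token HTML title #{doc.at('title').text}" if doc.at('title')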
On the other hand, if you're dealing with something that's not quite HTML, and can't be parsed that way, then you'll need to roll your own somehow. There's a few Ruby parser frameworks out there that may help, but for simple tasks where performance isn't a critical factor, you can get by with a pile of regexps like you have here.