How do I test reading a file? - ruby

I'm writing a test for one of my classes which has the following constructor:
def initialize(filepath)
  @transactions = []
  File.open(filepath).each do |line|
    next if $. == 1
    elements = line.split(/\t/).map { |e| e.strip }
    transaction = Transaction.new(elements[0], Integer(elements[1]))
    @transactions << transaction
  end
end
I'd like to test this by using a fake file, not a fixture. So I wrote the following spec:
it "should read a file and create transactions" do
filepath = "path/to/file"
mock_file = double(File)
expect(File).to receive(:open).with(filepath).and_return(mock_file)
expect(mock_file).to receive(:each).with(no_args()).and_yield("phrase\tvalue\n").and_yield("yo\t2\n")
filereader = FileReader.new(filepath)
filereader.transactions.should_not be_nil
end
Unfortunately this fails because I'm relying on $. to equal 1 and increment on every line and for some reason that doesn't happen during the test. How can I ensure that it does?

Global variables make code hard to test. You could use each_with_index:
File.open(filepath) do |file|
  file.each_with_index do |line, index|
    next if index == 0 # the index is zero-based, so this skips the header line
    # ...
  end
end
But it looks like you're parsing a tab-separated file with a header line. Therefore I'd use Ruby's CSV library, which handles other column separators via col_sep:
require 'csv'

CSV.foreach(filepath, col_sep: "\t", headers: true, converters: :numeric) do |row|
  @transactions << Transaction.new(row['phrase'], row['value'])
end
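Put together, the whole reader could look roughly like this. This is a sketch only: it assumes the header row really is phrase and value separated by a tab, that Transaction takes the phrase and the numeric value, and that an attr_reader is what makes filereader.transactions work in the spec.
require 'csv'

class FileReader
  # Sketch: exposes transactions so the spec's filereader.transactions call works
  attr_reader :transactions

  def initialize(filepath)
    @transactions = []
    CSV.foreach(filepath, col_sep: "\t", headers: true, converters: :numeric) do |row|
      @transactions << Transaction.new(row['phrase'], row['value'])
    end
  end
end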

You can (and should) use IO#each_line together with Enumerable#each_with_index, which will look like:
File.open(filepath).each_line.each_with_index do |line, i|
  next if i == 0 # each_with_index is zero-based, so this skips the header line
  # …
end
Or you can drop the first line and work with the rest:
File.open(filepath).each_line.drop(1).each do |line|
  # …
end

If you don't want to mess around with mocking File for each test, you can try FakeFS, which implements an in-memory file system based on StringIO and cleans up automatically after your tests.
This way your tests don't need to change if your implementation changes.
require 'fakefs/spec_helpers'

describe "FileReader" do
  include FakeFS::SpecHelpers

  def stub_file(file, content)
    FileUtils.mkdir_p File.dirname(file)
    File.open(file, 'w') { |f| f.write(content) }
  end

  it "should read a file and create transactions" do
    file_path = "path/to/file"
    stub_file file_path, "phrase\tvalue\nyo\t2\n"
    filereader = FileReader.new(file_path)
    expect(filereader.transactions).to_not be_nil
  end
end
Be warned: this is an implementation of most of the file access in Ruby, passing it back onto the original method where possible. If you are doing anything advanced with files you may start running into bugs in the FakeFS implementation. I got stuck with some binary file byte read/write operations which weren't implemented in FakeFS quite how Ruby implemented them.

Related

How to "observe" a stream in Ruby's CSV module?

I am writing a class that takes a CSV file, transforms it, and then writes the new data out.
module Transformer
  class Base
    def initialize(file)
      @file = file
    end

    def original_data(&block)
      opts = { headers: true }
      CSV.open(@file, 'rb', opts, &block)
    end

    def transformer
      # complex manipulations here like modifying columns, picking only certain
      # columns to put into new_data, etc but simplified to `+10` to keep
      # example concise
      ->(row) { new_data << row['some_header'] + 10 }
    end

    def transformed_data
      self.original_data(self.transformer)
    end

    def write_new_data
      CSV.open('new_file.csv', 'wb', opts) do |new_data|
        transformed_data
      end
    end
  end
end
What I'd like to be able to do is:
Look at the transformed data without writing it out (so I can test that it transforms the data correctly, and I don't need to write it to file right away: maybe I want to do more manipulation before writing it out)
Don't slurp all the file at once, so it works no matter the size of the original data
Have this as a base class with an empty transformer so that instances only need to implement their own transformers but the behavior for reading and writing is given by the base class.
But obviously the above doesn't work because I don't really have a reference to new_data in transformer.
How could I achieve this elegantly?
I can recommend one of two approaches, depending on your needs and personal taste.
I have intentionally distilled the code to just its bare minimum (without your wrapping class), for clarity.
1. Simple read-modify-write loop
Since you do not want to slurp the file, use CSV.foreach. For example, for a quick debugging session, do:
CSV.foreach "source.csv", headers: true do |row|
row["name"] = row["name"].upcase
row["new column"] = "new value"
p row
end
And if you wish to write to file during that same iteration:
require 'csv'

csv_options = { headers: true }

# Open the target file for writing
CSV.open("target.csv", "wb") do |target|
  # Add a header
  target << %w[new header column names]

  # Iterate over the source CSV rows
  CSV.foreach "source.csv", **csv_options do |row|
    # Mutate and add columns
    row["name"] = row["name"].upcase
    row["new column"] = "new value"

    # Push the new row to the target file
    target << row
  end
end
2. Using CSV::Converters
There is built-in functionality that might be helpful, CSV::Converters (see the :converters option in the documentation for CSV.new):
require 'csv'

# Register a converter in the options hash
csv_options = { headers: true, converters: [:stripper] }

# Define a converter
CSV::Converters[:stripper] = lambda do |value, field|
  value ? value.to_s.strip : value
end

CSV.open("target.csv", "wb") do |target|
  # same as above
  CSV.foreach "source.csv", **csv_options do |row|
    # same as above - input data will already be converted
    # you can do additional things here if needed
  end
end
3. Separate input and output from your converter classes
Based on your comment, and since you want to minimize I/O and iterations, perhaps extracting the read/write operations from the responsibility of the transformers might be of interest. Something like this.
require 'csv'

class NameCapitalizer
  def self.call(row)
    row["name"] = row["name"].upcase
  end
end

class EmailRemover
  def self.call(row)
    row.delete 'email'
  end
end

csv_options = { headers: true }
converters  = [NameCapitalizer, EmailRemover]

CSV.open("target.csv", "wb") do |target|
  CSV.foreach "source.csv", **csv_options do |row|
    converters.each { |c| c.call row }
    target << row
  end
end
Note that the above code still does not handle the header row, in case it was changed. You will probably have to hold on to the last row (after all transformations) and prepend its #headers to the output CSV.
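As a sketch of that header handling (reusing the NameCapitalizer and EmailRemover classes from above, and assuming every transformed row ends up with the same headers), you could write the header row once, just before the first data row goes out:
require 'csv'

csv_options = { headers: true }
converters  = [NameCapitalizer, EmailRemover]

CSV.open("target.csv", "wb") do |target|
  headers_written = false
  CSV.foreach "source.csv", **csv_options do |row|
    converters.each { |c| c.call row }
    unless headers_written
      target << row.headers   # the row's headers reflect any deleted columns
      headers_written = true
    end
    target << row
  end
end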
There are probably plenty other ways to do it, but the CSV class in Ruby does not have the cleanest interface, so I try to keep code that deals with it as simple as I can.

Using Ruby to parse and write Puppet node definitions

I am writing a helper API in Ruby to automatically create and manipulate node definitions. My code is working; it can read and write the node defs successfully, however, it is a bit clunky.
Ruby is not my main language, so I'm sure there is a cleaner, and more rubyesque solution. I would appreciate some advice or suggestions.
Each host has its own file in manifests/nodes containing just the node definition. e.g.
node 'testnode' {
  class {'firstclass': }
  class {'secondclass': enabled => false }
}
The classes are all either enabled (the default) or disabled. In the Ruby code, I store these in an instance variable hash, @elements.
The read method looks like this:
def read()
  data = File.readlines(@filepath)
  for line in data do
    if line.include? 'class'
      element = line[/.*\{'([^\']*)':/, 1]
      if @elements.include? element.to_sym
        if not line.include? 'enabled => false'
          @elements[element.to_sym] = true
        else
          @elements[element.to_sym] = false
        end
      end
    end
  end
end
And the write method looks like this:
def write()
  data = "node #{@hostname} {\n"
  for element in @elements do
    if element[1]
      line = " class {'#{element[0]}': }\n"
    else
      line = " class {'#{element[0]}': enabled => false}\n"
    end
    data += line
  end
  data += "}\n"
  file = File.open(@filepath, 'w')
  file.write(data)
  file.close()
end
One thing to add is that these systems will be isolated from the internet, so I'd prefer to avoid a large number of dependency libraries, as I'll need to install and maintain them manually.
If your goal is to define your nodes programmatically, there is a much more straightforward way than reading and writing manifests. One of the built-in features of Puppet is the "External Node Classifier" (ENC). The basic idea is that something external to Puppet defines what a node should look like.
In the simplest form, the ENC can be a Ruby/Python/whatever script that writes out YAML with the list of classes and their parameters. Reading and writing YAML from Ruby is about as simple as it gets.
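For example, a minimal ENC might be nothing more than a script that prints YAML for the requested node. The class names below are placeholders standing in for whatever your API decides to enable:
#!/usr/bin/env ruby
# Minimal ENC sketch: Puppet calls this with the node name as the first
# argument and expects a YAML document describing the node on stdout.
require 'yaml'

node_name = ARGV[0] # could be used to look up a per-node definition

node_definition = {
  'classes' => {
    'firstclass'  => {},
    'secondclass' => { 'enabled' => false },
  },
}

puts node_definition.to_yaml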
Ruby has some pretty good methods to iterate over data structures. See below for an example of how to rubify your code a little bit. I am by no means an expert on the subject, and have not tested the code. :)
def read
  data = File.readlines(@filepath)
  data.each do |line|
    element = line[/.*\{'([^\']*)':/, 1]
    next if element.nil? # lines without a class declaration don't match the regex
    element = element.to_sym
    if @elements.include?(element)
      @elements[element] = !line.include?('enabled => false')
    end
  end
end
def write
  File.open(@filepath, 'w') do |file|
    file.puts "node #{@hostname} {"
    @elements.each do |name, enabled|
      if enabled
        file.puts " class {'#{name}': }"
      else
        file.puts " class {'#{name}': enabled => false }"
      end
    end
    file.puts '}'
  end
end
Hope this points you in the right direction.

How can I copy the contents of one file to another using Ruby's file methods?

I want to copy the contents of one file to another using Ruby's file methods.
How can I do it using a simple Ruby program using file methods?
There is a very handy method for this, IO.copy_stream; see the output of ri IO.copy_stream.
Example usage:
File.open('src.txt', 'w') do |f|
  f.puts 'Some text'
end

IO.copy_stream('src.txt', 'dest.txt')
For those who are interested, here's a variation of the IO.copy_stream / File.open + block answer(s) (written against Ruby 2.2.x, 3 years too late).
require 'tempfile'

copy = Tempfile.new('copy')
File.open(file, 'rb') do |input_stream| # file holds the path of the source file
  File.open(copy, 'wb') do |output_stream|
    IO.copy_stream(input_stream, output_stream)
  end
end
As a precaution I would recommend using a buffer unless you can guarantee the whole file always fits into memory:
File.open("source", "rb") do |input|
File.open("target", "wb") do |output|
while buff = input.read(4096)
output.write(buff)
end
end
end
Here is my implementation:
class File
  def self.copy(source, target)
    File.open(source, 'rb') do |infile|
      File.open(target, 'wb') do |outfile|
        while buffer = infile.read(4096)
          outfile << buffer
        end
      end
    end
  end
end
Usage:
File.copy sourcepath, targetpath
Here is a simple way of doing that using Ruby file operation methods:
source_file, destination_file = ARGV
script = $0
input = File.open(source_file)
data_to_copy = input.read() # gather the data using read() method
puts "The source file is #{data_to_copy.length} bytes long"
output = File.open(destination_file, 'w')
output.write(data_to_copy) # write up the data using write() method
puts "File has been copied"
output.close()
input.close()
You can also use File.exist? to check whether the file exists; it returns true if it does (File.exists? is a deprecated alias).
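For example (the path here is just a placeholder):
File.exist?('src.txt') # => true if the file is there, false otherwise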
Here's a fast and concise way to do it.
# Open first file, read it, store it, then close it
input = File.open(ARGV[0]) {|f| f.read() }
# Open second file, write to it, then close it
output = File.open(ARGV[1], 'w') {|f| f.write(input) }
An example for running this would be.
$ ruby this_script.rb from_file.txt to_file.txt
This runs this_script.rb and takes in two arguments through the command line. The first one in our case is from_file.txt (the file being copied from) and the second is to_file.txt (the file being copied to).
You can also use File.binread and File.binwrite if you wish to hold onto the file contents for a bit (other answers use an immediate copy_stream instead).
If the contents are something other than plain text, such as images, basic File.read and File.write may not handle them correctly.
require 'tempfile'

temp_image = Tempfile.new('image.jpg')
actual_img = IO.binread('image.jpg')
IO.binwrite(temp_image, actual_img)
Source: binread,
binwrite.

trying to find the 1st instance of a string in a CSV using fastercsv

I'm trying to open a CSV file, look up a string, and then return the 2nd column of the CSV file, but only the first instance of it. I've gotten as far as the following, but unfortunately, it returns every instance. I'm a bit flummoxed.
Can the gods of Ruby help? Thanks much in advance.
M
for the purpose of this example, let's say names.csv is a file with the following:
foo, happy
foo, sad
bar, tired
foo, hungry
foo, bad
#!/usr/local/bin/ruby -w
require 'rubygems'
require 'fastercsv'
require 'pp'

FasterCSV.open('newfile.csv', 'w') do |output|
  FasterCSV.foreach('names.csv') do |lookup|
    index_PL = lookup.index('foo')
    if index_PL
      output << lookup[2]
    end
  end
end
OK, so if I want to return all instances of foo, but in a CSV, then how does that work?
What I'd like as an outcome is happy, sad, hungry, bad. I thought it would be:
FasterCSV.open('newfile.csv', 'w') do |output|
  FasterCSV.foreach('names.csv') do |lookup|
    index_PL = lookup.index('foo')
    if index_PL
      build_str << "," << lookup[2]
    end
    output << build_str
  end
end
but it does not seem to work
Replace foreach with open (to get an Enumerable) and find:
FasterCSV.open('newfile.csv', 'w') do |output|
  output << FasterCSV.open('names.csv').find { |r| r.index('foo') }[2]
end
The index call will return nil if it doesn't find anything; that means that the find will give you the first row that has 'foo' and you can pull out the column at index 2 from the result.
If you're not certain that names.csv will have what you're looking for then a bit of error checking would be advisable:
FasterCSV.open('newfile.csv', 'w') do |output|
  foos_row = FasterCSV.open('names.csv').find { |r| r.index('foo') }
  if foos_row
    output << foos_row[2]
  else
    # complain or something
  end
end
Or, if you want to silently ignore the lack of 'foo' and use an empty string instead, you could do something like this:
FasterCSV.open('newfile.csv', 'w') do |output|
  output << (FasterCSV.open('names.csv').find { |r| r.index('foo') } || ['', '', ''])[2]
end
I'd probably go with the "complain if it isn't found" version though.
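For the follow-up (collecting the value from every 'foo' row into a single output row), a sketch along the same lines, keeping the lookup[2] column index used above (adjust it if your value actually lives in a different column):
FasterCSV.open('newfile.csv', 'w') do |output|
  values = []
  FasterCSV.foreach('names.csv') do |lookup|
    values << lookup[2] if lookup.index('foo')
  end
  output << values # all collected values as one CSV row
end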

RSpec: how to test file operations and file content

In my app, I have the following code:
File.open "filename", "w" do |file|
file.write("text")
end
I want to test this code via RSpec. What are the best practices for doing this?
I would suggest using StringIO for this and making sure your SUT accepts a stream to write to instead of a filename. That way, different files or outputs can be used (more reusable), including StringIO (good for testing).
So in your test code (assuming your SUT instance is sutObject and the serializer method is named writeStuffTo):
require 'stringio'

testIO = StringIO.new
sutObject.writeStuffTo testIO
testIO.string.should == "Hello, world!"
String IO behaves like an open file. So if the code already can work with a File object, it will work with StringIO.
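As a sketch of the production side (the class and method names here are hypothetical), the writer only needs something that responds to write, so a File, a StringIO, or any other IO-like object will do:
class Greeter
  # Hypothetical SUT: writes to whatever IO-like object it is given
  def writeStuffTo(io)
    io.write("Hello, world!")
  end
end

# In production you would pass a real file instead of a StringIO:
File.open("greeting.txt", "w") { |f| Greeter.new.writeStuffTo(f) }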
For very simple i/o, you can just mock File. So, given:
def foo
  File.open "filename", "w" do |file|
    file.write("text")
  end
end
then:
describe "foo" do
it "should create 'filename' and put 'text' in it" do
file = mock('file')
File.should_receive(:open).with("filename", "w").and_yield(file)
file.should_receive(:write).with("text")
foo
end
end
However, this approach falls flat in the presence of multiple reads/writes: simple refactorings which do not change the final state of the file can cause the test to break. In that case (and possibly in any case) you should prefer @Danny Staple's answer.
This is how to mock File.open for writing (with RSpec 3.4), so you can write to a buffer and check its content later:
it 'How to mock File.open for write with rspec 3.4' do
  @buffer = StringIO.new
  @filename = "somefile.txt"
  @content = "the content of the file"

  allow(File).to receive(:open).with(@filename, 'w').and_yield(@buffer)

  # call the function that writes to the file
  File.open(@filename, 'w') { |f| f.write(@content) }

  # read the buffer and check its content
  expect(@buffer.string).to eq(@content)
end
You can use FakeFS. It stubs the filesystem and creates files in memory.
You can check whether a file was created with File.exist? "filename", and you can read it back with File.open and run expectations on its contents.
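A minimal sketch of that pattern, assuming the foo method from the earlier answer is in scope:
require 'fakefs/spec_helpers'

describe "foo" do
  include FakeFS::SpecHelpers

  it "creates 'filename' containing 'text'" do
    foo
    expect(File.exist?("filename")).to be true
    expect(File.read("filename")).to eq("text")
  end
end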
For someone like me who needs to modify multiple files in multiple directories (e.g. a generator for Rails), I use a temp folder.
require 'tmpdir'

Dir.mktmpdir do |dir|
  Dir.chdir(dir) do
    # Generate a clean Rails folder
    Rails::Generators::AppGenerator.start ['foo', '--skip-bundle']

    File.open(File.join(dir, 'foo.txt'), 'w') { |f| f.write("write your stuff here") }
    expect(File.exist?(File.join(dir, 'foo.txt'))).to eq(true)
  end
end
