Parse CSV file with header fields as attributes for each row - ruby

I would like to parse a CSV file so that each row is treated like an object, with the header row providing the names of the object's attributes. I could write this myself, but I'm sure it's already out there.
Here is my CSV input:
"foo","bar","baz"
1,2,3
"blah",7,"blam"
4,5,6
The code would look something like this:
CSV.open('my_file.csv','r') do |csv_obj|
puts csv_obj.foo #prints 1 the 1st time, "blah" the 2nd time, etc
puts csv_obj.bar #prints 2 the 1st time, 7 the 2nd time, etc
end
With Ruby's CSV module I believe I can only access the fields by index. I think the above code would be a bit more readable. Any ideas?

Using Ruby 1.9 and above, you can get an indexable object:
CSV.foreach('my_file.csv', :headers => true) do |row|
puts row['foo'] # prints 1 the 1st time, "blah" 2nd time, etc
puts row['bar'] # prints 2 the first time, 7 the 2nd time, etc
end
It's not dot syntax but it is much nicer to work with than numeric indexes.
As an aside, for Ruby 1.8.x FasterCSV is what you need to use the above syntax.
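For reference, here's a minimal sketch of the 1.8 equivalent, assuming the fastercsv gem is installed (FasterCSV is what became the 1.9 CSV class, so the options are the same):
require 'fastercsv'

FasterCSV.foreach('my_file.csv', :headers => true) do |row|
  puts row['foo'] # same header-based access as above
  puts row['bar']
end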

Here is an example of the symbolic syntax, using Ruby 1.9. In the examples below, the code reads a CSV file named data.csv from the Rails db directory.
:headers => true treats the first row as a header instead of a data row. The :header_converters => :symbol option then converts each cell in the header row into a Ruby symbol.
CSV.foreach("#{Rails.root}/db/data.csv", {:headers => true, :header_converters => :symbol}) do |row|
puts "#{row[:foo]},#{row[:bar]},#{row[:baz]}"
end
In Ruby 1.8, with the FasterCSV gem:
require 'fastercsv'
FasterCSV.foreach("#{Rails.root}/db/data.csv", {:headers => true, :header_converters => :symbol}) do |row|
puts "#{row[:foo]},#{row[:bar]},#{row[:baz]}"
end
Based on the CSV provided by Poul (the Stack Overflow asker), the output from the example code above will be:
1,2,3
blah,7,blam
4,5,6
Depending on the characters used in the headers of the CSV file, it may be necessary to output the headers in order to see how CSV (FasterCSV) converted the string headers to symbols. You can output the array of headers from within the CSV.foreach block:
row.headers
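For instance, a minimal sketch that prints the symbolized headers alongside each row:
CSV.foreach("#{Rails.root}/db/data.csv", :headers => true, :header_converters => :symbol) do |row|
  puts row.headers.inspect # e.g. [:foo, :bar, :baz] for the sample file above
  puts row[:foo]
end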

Easy to get a hash in Ruby 2.3:
CSV.foreach('my_file.csv', headers: true, header_converters: :symbol) do |row|
puts row.to_h[:foo]
puts row.to_h[:bar]
end
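Note that with header_converters: :symbol the row itself is already indexable by symbol, so the to_h calls are optional; a shorter sketch of the same thing:
CSV.foreach('my_file.csv', headers: true, header_converters: :symbol) do |row|
  puts row[:foo] # CSV::Row supports symbol lookup directly once the headers are symbolized
  puts row[:bar]
end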

Although I am pretty late to the discussion, a few months ago I started a "CSV to object mapper" at https://github.com/vicentereig/virgola.
Given your CSV contents, mapping them to an array of FooBar objects is pretty straightforward:
"foo","bar","baz"
1,2,3
"blah",7,"blam"
4,5,6
require 'virgola'
class FooBar
include Virgola
attribute :foo
attribute :bar
attribute :baz
end
csv = <<CSV
"foo","bar","baz"
1,2,3
"blah",7,"blam"
4,5,6
CSV
foo_bars = FooBar.parse(csv).all
foo_bars.each { |foo_bar| puts foo_bar.foo, foo_bar.bar, foo_bar.baz }

Since I hit this question with some frequency:
array_of_hashmaps = CSV.read("path/to/file.csv", headers: true)
puts array_of_hashmaps.first["foo"] # 1
This is the non-block version, when you want to slurp the whole file.
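Strictly speaking, CSV.read with headers: true returns a CSV::Table rather than a plain array of hashes, but it behaves much the same way; a minimal sketch:
require 'csv'

table = CSV.read("path/to/file.csv", headers: true)
table.each do |row|        # each row is a CSV::Row
  puts row["foo"]
end
hashes = table.map(&:to_h) # an actual array of hashes, if that's what you need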

Related

Ruby CSV converter, remove all converters?

I have some data I was writing from one CSV to another CSV because I need to do some data manipulation.
I noticed the CSV library has some default converters that take my values that look like dates and parse them into new date strings.
I was wondering if I could remove all converters? I tried using my own custom converter, but no matter what I do, the dates keep getting parsed.
Here is my code simplified:
require 'csv'
CSV::Converters[:my_converter] = lambda do |value|
  value
end
CSV.open('new-data.csv', 'w') do |csv|
  data = CSV.read('original-data.csv', :converters => [:my_converter]).each do |row|
    csv << row
  end
end
The value 9/30/14 0:00 is getting changed to 9/30/2014 0:00, for example.
Are you sure that your CSV file doesn't actually contain the 4-digit year? Try looking at puts File.read('original-data.csv')
When I tried this on Ruby 2.1.8, it didn't change the value
require 'csv'
my_csv_data = 'hello,"9/30/14 0:00",world'
CSV.new(my_csv_data).each do |row|
puts row.inspect # prints ["hello", "9/30/14 0:00", "world"], as expected
end
CSV files are not parsed and converted into objects; the data in the fields is returned as strings. Always. This behavior is different from YAML or JSON, which do convert values back to their base types.
Consider this:
require 'csv'
CSV.parse("1,10/1/14,foo") # => [["1", "10/1/14", "foo"]]
All values are strings.
csv = ["foo", 'bar', 1, Date.new(2014, 10, 1)].to_csv # => "foo,bar,1,2014-10-01\n"
Converting an array containing native Ruby objects results in a string of comma-delimited values.
CSV.parse(csv) # => [["foo", "bar", "1", "2014-10-01"]]
Reparsing that string returns the string versions but doesn't attempt to return them to their original types as CSV doesn't have a way of knowing what those were. The developer (you) has to know and do that.
The end-result of all that is that CSV won't change a year from '14' to '2014'. It doesn't know that it's a date, and, because it's not CSV's place to convert to objects, it only splits the fields appropriately and passes the information on to be massaged by the developer.
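In other words, simply omitting the :converters option (or passing an empty list) leaves every field as the raw string; a minimal sketch, assuming the same original-data.csv from the question:
require 'csv'

CSV.open('new-data.csv', 'w') do |csv|
  CSV.read('original-data.csv').each do |row| # no converters: every field stays a String
    csv << row
  end
end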

Taking JSON data and converting it to a CSV file

Okay... so I'm new to Ruby here but loving it so far. My problem is I cannot get the data to go into the CSV file.
#!/usr/bin/env ruby
require 'date'
require_relative 'amf'
require 'json'
require 'csv'

amf = Amf.new

# This makes it go out 3 days
apps = amf.post( 'Appointments.getBetweenDates',
                 { 'startDate' => Date.today, 'endDate' => Date.today + 4 }
               )

apps.each do |app|
  cor_md_params = { 'appId' => app['appID'], 'relId' => 7 }
  cor_md = amf.post( 'Clinicians.getByAppIdAndRelId', cor_md_params ).first

  # this is where it breaks ----->
  CSV.open("ile.csv", "wb") do |csv|
    csv << ["column1", "column2", "etc.", "etc.."]
    csv << ([
      # if added puts ([ I can display the info and then make a csv...
      app['patFirstName'],
      app['patMiddleName'],
      app['patLastName'],
      app['patBirthdate'],
      app['patHin'],
      app['patPhone'],
      app['patCellPhone'],
      app['patBusinessPhone'],
      app['appTime'],
      app['appID'],
      app['patPostalCode'],
      app['patProvince'],
      app['locName'],
      # note that this is not exactly accurate for follow-ups,
      # where you have to replace the "1" with the actual value
      # in weeks, days, months, etc
      #app[ 'bookName' ], => not sure this is needed
      cor_md['id'],
      cor_md['providerCode'],
      cor_md['firstName'],
      cor_md['lastName']
    ].join(', '))
  end
end
Now, if I remove the attempt to make the ile.csv file and just output it with a puts, all the data shows. But I don't want to have to go into the terminal and create a csv file... I would rather just run the .rb program and have it created. Also, hopefully I am making the columns correctly as well...
The thought occurred to me that I could just add another puts above the output.
Or, better, insert a row into the array before I output it...
Really not sure what best practice and standards are here.
This is what I have done and attempted. How can I get it to cleanly output to a CSV file, since my attempts are not working?
Also, to clarify where it breaks, it does add the column names just not the JSON info that is parsed. I could also be completely doing this the wrong way or a way that isn't possible. I just do not know.
What kind of error do you get? Is it this one:
`<<': undefined method `map' for "something":String (NoMethodError)
I think you should remove the .join(', ').
The << method of CSV accepts an Array, but not a String:
http://ruby-doc.org/stdlib-1.9.2/libdoc/csv/rdoc/CSV.html#method-i-3C-3C
So instead of:
cor_md['lastName']
].join(', '))
rather:
cor_md['lastName']
])
The problem with the loop (why it writes only 1 row of data)
In the body of your loop you reopen the file on every iteration, overwriting what you wrote before. What you probably want to do is this:
CSV.open("ile3.csv", "wb") do |csv|
  csv << ["column1", "column2", "etc.", "etc.."]
  apps.each do |app|
    cor_md_params = { 'appId' => app['appID'], 'relId' => 7 }
    cor_md = amf.post( 'Clinicians.getByAppIdAndRelId', cor_md_params ).first
    # csv << your long array
  end
end
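Putting both fixes together (open the file once, and pass a plain Array to <<), here is a minimal sketch that keeps just a few of the fields from the question for illustration:
CSV.open("ile3.csv", "wb") do |csv|
  csv << ["first_name", "last_name", "app_time", "clinician_last_name"] # illustrative column names
  apps.each do |app|
    cor_md_params = { 'appId' => app['appID'], 'relId' => 7 }
    cor_md = amf.post( 'Clinicians.getByAppIdAndRelId', cor_md_params ).first
    # no .join here: << expects an Array and quotes/joins the fields itself
    csv << [app['patFirstName'], app['patLastName'], app['appTime'], cor_md['lastName']]
  end
end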

Merging CSV tables with Ruby

I'm trying to join CSV files containing stock indexes with Ruby, and having a surprisingly hard time understanding the documentation out there. It's late, and I could use a friend, so go easy on me:
I have several files, with identical headers:
["Date", "Open", "High", "Low", "Close", "Volume"]
I would like my Ruby script to read each "Date" column and write to a new CSV compiling an all-encompassing date range, from the earliest date to the latest.
Bonus:
Ideally, I would like to add all of the other column data ("Open", "High", etc.) into this new CSV file as well, with a column simply containing the source CSV's filename for reference.
Thanks for any consideration given to this. What I'd really like to do is sit down with a Ruby sensei who could help me make sense of the documentation. How can I use the CSV.read() or CSV.foreach() methods to create arrays / hashes to work with?
(Theoretical and intelligent responses welcomed)
hypothetical:
CSV.read("data/DOW.csv") do |output|
puts output
end
returns:
[["Date", "Open", "High", "Low", "Close", "Volume"], ["2014-07-14", "71.35", "71.52", "70.82", "71.28", "823063.0"], ["2014-07-15", "71.32", "71.76", "71.0", "71.28", "813861.0"], ["2014-07-16", "71.34", "71.58", "70.68", "71.02", "843347.0"], ["2014-07-17", "70.54", "71.46", "70.54", "71.13", "1303839.0"], ["2014-07-18", "71.46", "72.95", "71.09", "72.46", "1375922.0"], ["2014-07-21", "72.21", "73.46", "71.88", "73.38", "1603854.0"], ["2014-07-22", "73.46", "74.76", "73.46", "74.57", "1335305.0"], ["2014-07-23", "74.54", "75.1", "73.77", "74.88", "1834953.0"]]
How can I identify rows, columns, etc? I'm looking for methods or ways to transform this array into hashes etc. Honestly, an overarching theoretical approach would suit my needs.
I've been playing with Ruby and CSV for most of the day. I might be able to help (even though I am a beginner myself), but I don't understand what you want as output (a little example would help).
This example would load only columns "Date", "High" and "Volume" into "my_array".
my_array = []
CSV.foreach("data.csv") do |row|
my_array.push([row[0], row[2], row[5]])
end
If you want every column try:
my_array = []
CSV.foreach("data.csv") do |row|
my_array.push(row)
end
If you want to access an element of an array inside the array:
puts my_array[0][0].inspect #=> "Date"
puts my_array[1][0].inspect #=> "2014-07-14"
When you finally get the output you want, if you are on Windows you can save it from the command prompt like this:
my_file.rb > output_in_text_form.txt
You can do something like this:
#!/usr/bin/env ruby
require 'csv'
input = ARGV.shift
output = ARGV.shift
File.open(output, 'w') do |o|
  csv_string = File.read(input)
  CSV.parse(csv_string).each do |r|
    # r is an array of columns. Do something with it.
    ...
    # Generate string version.
    new_csv_row = CSV.generate_line(r, {:force_quotes => true})
    # Write to file
    o.puts new_csv_row
  end
end
Using files is optional. You can use shell redirection and directly read from STDIN and/or directly write to STDOUT.
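Neither snippet above actually performs the merge, so here is a minimal sketch of one way to do it, assuming hypothetical input files dow.csv and sp500.csv that share the headers from the question; it writes a single combined file sorted by date, with an extra column naming the source file:
require 'csv'

files = ['dow.csv', 'sp500.csv'] # hypothetical file names
rows = []

files.each do |path|
  CSV.foreach(path, headers: true) do |row|
    rows << row.to_h.merge('Source' => File.basename(path))
  end
end

rows.sort_by! { |r| r['Date'] } # ISO dates like "2014-07-14" sort correctly as strings

columns = ['Date', 'Open', 'High', 'Low', 'Close', 'Volume', 'Source']
CSV.open('combined.csv', 'w') do |csv|
  csv << columns
  rows.each { |r| csv << r.values_at(*columns) }
end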

How do I make an array of arrays out of a CSV?

I have a CSV file that looks like this:
Jenny, jenny#example.com ,
Ricky, ricky#example.com ,
Josefina josefina#example.com ,
I'm trying to get this output:
users_array = [
['Jenny', 'jenny#example.com'], ['Ricky', 'ricky#example.com'], ['Josefina', 'josefina#example.com']
]
I've tried this:
users_array = Array.new
file = File.new('csv_file.csv', 'r')
file.each_line("\n") do |row|
puts row + "\n"
columns = row.split(",")
users_array.push columns
puts users_array
end
Unfortunately, in Terminal, this returns:
Jenny
jenny#example.com
Ricky
ricky#example.com
Josefina
josefina#example.com
Which I don't think will work for this:
users_array.each_with_index do |user|
add_page.form_with(:id => 'new_user') do |f|
f.field_with(:id => "user_email").value = user[0]
f.field_with(:id => "user_name").value = user[1]
end.click_button
end
What do I need to change? Or is there a better way to solve this problem?
Ruby's standard library has a CSV class with an API similar to File's, but with a number of useful methods for working with tabular data. To get the output you want, all you need to do is this:
require 'csv'
users_array = CSV.read('csv_file.csv')
PS - I think you are getting the output you expected with your file parsing as well, but maybe you're thrown off by how it is printing to the terminal. puts behaves differently with arrays, printing each member object on a new line instead of as a single array. If you want to view it as an array, use puts my_array.inspect.
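A quick illustration of that difference:
users = [['Jenny', 'jenny#example.com'], ['Ricky', 'ricky#example.com']]

puts users         # prints each nested element on its own line
puts users.inspect # prints [["Jenny", "jenny#example.com"], ["Ricky", "ricky#example.com"]]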
Assuming that your CSV file actually has a comma between the name and email address on the third line:
require 'csv'
users_array = []
CSV.foreach('csv_file.csv') do |row|
users_array.push row.delete_if(&:nil?).map(&:strip)
end
users_array
# => [["Jenny", "jenny#example.com"],
# ["Ricky", "ricky#example.com"],
# ["Josefina", "josefina#example.com"]]
There may be a simpler way, but what I'm doing there is discarding the nil field created by the trailing comma and stripping the spaces around the email addresses.

How do you change headers in a CSV File with FasterCSV then save the new headers?

I'm having trouble understanding the :header_converters and :converters in FasterCSV. Basically, all I want to do is change column headers to their appropriate column names.
something like:
FasterCSV.foreach(csv_file, {:headers => true, :return_headers => false, :header_converters => :symbol, :converters => :all} ) do |row|
puts row[:some_column_header] # Would be "Some Column Header" in the csv file.
except I don't understand :symbol and :all in the converter parameters.
The :all converter means that it tries all of the built-in converters, specifically:
:integer: Converts any field Integer() accepts.
:float: Converts any field Float() accepts.
:date: Converts any field Date::parse() accepts.
:date_time: Converts any field DateTime::parse() accepts.
Essentially, it means that it will attempt to convert any field into those values (if possible) instead of leaving them as a string. So if you do row[i] and it would have returned the String value '9', it will instead return an Integer value 9.
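A small illustration of the effect, using the standard csv library (FasterCSV behaves the same way):
require 'csv'

CSV.parse("9,1.5,foo").first                   # => ["9", "1.5", "foo"]  (no converters: all Strings)
CSV.parse("9,1.5,foo", converters: :all).first # => [9, 1.5, "foo"]      (Integer, Float, String)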
Header converters change the way the headers are used to index a row. For example, if doing something like this:
FasterCSV.foreach(some_file, :header_converters => :downcase) do |row|
You would index a column with the header "Some Header" as row['some header'].
If you used :symbol instead, you would index it with row[:some_header]. Symbol downcases the header name, replaces spaces with underscores, and removes characters other than a-z, 0-9, and _. It's useful because comparison of symbols is far faster than comparison of strings.
If you want to index a column with row['Some Header'], then just don't provide any :header_converter option.
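For example, a minimal sketch using the 1.9+ CSV API (which FasterCSV mirrors):
require 'csv'

row = CSV.parse("Some Header,Other Header\n1,2",
                headers: true, header_converters: :symbol).first
row.headers       # => [:some_header, :other_header]
row[:some_header] # => "1"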
EDIT:
In response to your comment, :header_converters won't do what you want, I'm afraid. It doesn't change the values of the header row, just how they are used as an index. Instead, you'll have to use the :return_headers option, detect the header row, and make your changes. To change the file and write it out again, you can use something like this:
require 'fastercsv'
input = File.open 'original.csv', 'r'
output = File.open 'modified.csv', 'w'
FasterCSV.filter input, output, :headers => true, :write_headers => true, :return_headers => true do |row|
change_headers(row) if row.header_row?
end
input.close
output.close
If you need to completely replace the original file, require 'fileutils' and add this line after doing the above:
FileUtils.mv 'modified.csv', 'original.csv', :force => true
I've found a simple approach for solving this problem. The FasterCSV library works just fine. I'm sure the ~7 years between when the post was created and now have something to do with it, but I thought it was worth noting here.
When reading CSV files, the :header_converters option isn't well documented, in my opinion. But instead of assigning a symbol (header_converters: :symbol), one can assign a lambda (header_converters: lambda { ... }). When the CSV library reads the file, it transforms the headers using the lambda. One can then save a new CSV file that reflects the transformed headers.
For example:
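# Assumption for illustration: HEADER_MAP maps the symbolized original headers
# to their replacement names, e.g. HEADER_MAP = { old_name: "New Name" }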
options = {
  headers: true,
  header_converters: lambda { |h| HEADER_MAP.keys.include?(h.to_sym) ? HEADER_MAP[h.to_sym] : h }
}

table = CSV.read(FILE_TO_PROCESS, options)

File.open(PROCESSED_FILE, "w") do |file|
  file.write(table.to_csv)
end
Rewriting CSV file headers is a common requirement for anyone converting exported CSV files into imports.
I found the following approach gave me what I needed:
lookup_headers = { "old" => "new", "cat" => "dog" } # the desired header swaps (string keys, since the converter receives each header as a String)
CSV($>, headers: true, write_headers: true) do |csv_out|
  CSV.foreach( ARGV[0],
               headers: true,
               # the following lambda replaces the header if it is found, leaving it if not...
               header_converters: lambda { |h| lookup_headers[h] || h },
               return_headers: true) do |master_row|
    if master_row.header_row?
      # The headers are now correctly replaced by calling the updated headers
      csv_out << master_row.headers
    else
      csv_out << master_row
    end
  end
end
Hope this helps!
