ruby hash.values is not working with built in method - ruby

I tried almost everything but I am feeling cornered.
I have a CSV and I am reading a line from it:
CSV.foreach(file, quote_char: '"', col_sep: ',', row_sep: :auto, headers: true) { |line|
  newLine = []
  newLine = line.values # undefined method .values
  ...
}
line is apparently a hash, because line['column_name'] works fine and line.to_a returns ["col","value","col2","value2",...]
please help, thank you!

You can use #fields on the class CSV::Row
http://ruby-doc.org/stdlib-1.9.3/libdoc/csv/rdoc/CSV/Row.html
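A minimal sketch of how that would look in your loop (same file and options as in your question):
CSV.foreach(file, quote_char: '"', col_sep: ',', row_sep: :auto, headers: true) { |line|
  newLine = line.fields # CSV::Row#fields returns the row's values as an Array
}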

It is not a regular hash; it is an instance of CSV::Row (see the CSV::Row API).
As you can see in the result of the following code, the method values isn't there. Your solution of using line['column_name'] is fine.
You can get all the fields with the method fields, called without a parameter.
CSV.parse(DATA, :col_sep => ",", :headers => true).each do |row|
  puts row.class
  puts row.methods - Object.methods
end
__END__
kId,kName,kURL
1,Google UK,http://google.co.uk
2,Yahoo UK,http://yahoo.co.uk
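For the sample rows above, calling fields on each row returns the plain values (all strings here, since no :converters are set), for example:
row.fields #=> ["1", "Google UK", "http://google.co.uk"]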

It is a CSV::Row, which is part Array and part Hash and doesn't have a .values method available. Use .to_hash first and then you will be able to use .values. (Note that this loses the field ordering and any duplicate fields.)
newLine = line.to_hash.values

How to remove a row from a CSV with Ruby

Given the following CSV file, how would you remove all rows that contain the word 'true' in the column 'foo'?
Date,foo,bar
2014/10/31,true,derp
2014/10/31,false,derp
I have a working solution; however, it requires making a secondary CSV object, @csv_no_foo:
@csv = CSV.read(@csvfile, headers: true) # http://bit.ly/1mSlqfA
@headers = CSV.open(@csvfile, 'r', :headers => true).read.headers
# Make a new CSV
@csv_no_foo = CSV.new(@headers)
@csv.each do |row|
  # puts row[5]
  if row[@headersHash['foo']] == 'false'
    @csv_no_foo.add_row(row)
  else
    puts "not pushing row #{row}"
  end
end
Ideally, I would just remove the offending row from the CSV like so:
...
if row[@headersHash['foo']] == 'false'
  @csv.delete(true) # Doesn't work
...
Looking at the Ruby documentation, it looks like the row class has a delete_if function. I'm confused about the syntax that function requires. Is there a way to remove the row without making a new CSV object?
http://ruby-doc.org/stdlib-1.9.2/libdoc/csv/rdoc/CSV/Row.html#method-i-each
You should be able to use CSV::Table#delete_if, but you need to use CSV::table instead of CSV::read, because the former gives you a CSV::Table object, whereas the latter (without headers: true) results in an Array of Arrays. Be aware that CSV::table also converts the headers to symbols.
table = CSV.table(@csvfile)
table.delete_if do |row|
  row[:foo] == 'true'
end
File.open(@csvfile, 'w') do |f|
  f.write(table.to_csv)
end
You might want to filter rows in a Ruby manner:
require 'csv'

csv = CSV.parse(File.read(@csvfile), :col_sep => ",", :headers => true)
         .select { |row| row['foo'] != 'true' }
Hope it helps.
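If you also want to write the filtered rows back to the file, a minimal sketch (assuming the same @csvfile and the Date,foo,bar headers from the question):
filtered = CSV.parse(File.read(@csvfile), :headers => true)
              .reject { |row| row['foo'] == 'true' }
CSV.open(@csvfile, 'w', :write_headers => true, :headers => ['Date', 'foo', 'bar']) do |out|
  filtered.each { |row| out << row }
end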

CSV.generate and converters?

I'm trying to create a converter to remove newline characters from CSV output.
I've got:
nonewline = lambda do |s|
  s.gsub(/(\r?\n)+/, ' ')
end
I've verified that this works properly IF I load a variable and then run something like:
csv = CSV(variable, :converters => [nonewline])
However, I'm attempting to use this code to update a bunch of preexisting code using CSV.generate, and it does not appear to work at all.
CSV.generate(:converters => [nonewline]) do |csv|
  csv << ["hello\ngoodbye"]
end
returns:
"\"hello\ngoodbye\"\n"
I've tried quite a few things as well as trying other examples I've found online, and it appears as though :converters has no effect when used with CSV.generate.
Is this correct, or is there something I'm missing?
You need to write your converter as below:
CSV::Converters[:nonewline] = lambda do |s|
  s.gsub(/(\r?\n)+/, ' ')
end
Then do:
CSV.generate(:converters => [:nonewline]) do |csv|
  csv << ["hello\ngoodbye"]
end
Read the documentation for Converters.
Okay, I didn't remove the part above, so as to show you how to write a custom CSV converter; the way you wrote it is incorrect.
Read the documentation of CSV::generate
This method wraps a String you provide, or an empty default String, in a CSV object which is passed to the provided block. You can use the block to append CSV rows to the String and when the block exits, the final String will be returned.
After reading the docs, it is quite clear that this method is for writing to a CSV, not for reading. All the converter options (like :converters, :header_converters) are applied when you are reading a CSV file, but not when you are writing to one.
Let me show you 2 examples to illustrate this more clearly.
require 'csv'
string = <<_
foo,bar
baz,quack
_
File.write('a',string)
CSV::Converters[:upcase] = lambda do |s|
  s.upcase
end
I am reading from a CSV file, so the :converters option is applied:
CSV.open('a', 'r', :converters => :upcase) do |csv|
  puts csv.read
end
output
# >> FOO
# >> BAR
# >> BAZ
# >> QUACK
Now I am writing into the CSV file; the :converters option is not applied:
CSV.open('a', 'w', :converters => :upcase) do |csv|
  csv << ['dog', 'cat']
end
CSV.read('a') # => [["dog", "cat"]]
Attempting to remove newlines using :converters did not work.
I had to override the << method from csv.rb adding the following code to it:
# Change all CR/NL's into one space
row.map! { |element|
  if element.is_a?(String)
    element.gsub(/(\r?\n)+/, ' ')
  else
    element
  end
}
Placed right before
output = row.map(&@quote).join(@col_sep) + @row_sep # quote and separate
at line 21.
I would think this would be a good patch to CSV, as newlines will always produce bad CSV output.
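If you would rather not patch csv.rb at all, a minimal alternative sketch is to scrub each row yourself just before appending it:
nonewline = lambda { |field| field.is_a?(String) ? field.gsub(/(\r?\n)+/, ' ') : field }
output = CSV.generate do |csv|
  csv << ["hello\ngoodbye"].map(&nonewline)
end
output #=> "hello goodbye\n"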

import from CSV into Ruby array, with 1st field as hash key, then lookup a field's value given header row

Maybe somebody can help me.
Starting with a CSV file like so:
Ticker,"Price","Market Cap"
ZUMZ,30.00,933.90
XTEX,16.02,811.57
AAC,9.83,80.02
I manage to read them into an array:
require 'csv'
tickers = CSV.read("stocks.csv", {:headers => true, :return_headers => true, :header_converters => :symbol, :converters => :all} )
To verify data, this works:
puts tickers[1][:ticker]
ZUMZ
However this doesn't:
puts tickers[:ticker => "XTEX"][:price]
How would I go about turning this array into a hash, using the ticker field as the unique key, such that I could easily look up any other field associatively as defined in line 1 of the input? I am dealing with many more columns and rows.
Much appreciated!
Like this (it works with other CSVs too, not just the one you specified):
require 'csv'
tickers = {}
CSV.foreach("stocks.csv", :headers => true, :header_converters => :symbol, :converters => :all) do |row|
tickers[row.fields[0]] = Hash[row.headers[1..-1].zip(row.fields[1..-1])]
end
Result:
{"ZUMZ"=>{:price=>30.0, :market_cap=>933.9}, "XTEX"=>{:price=>16.02, :market_cap=>811.57}, "AAC"=>{:price=>9.83, :market_cap=>80.02}}
You can access elements in this data structure like this:
puts tickers["XTEX"][:price] #=> 16.02
Edit (according to comment): For selecting elements, you can do something like
tickers.select { |ticker, vals| vals[:price] > 10.0 }
CSV.read(file_path, headers: true, header_converters: :symbol, converters: :all).collect do |row|
  Hash[row.collect { |c, r| [c, r] }]
end
CSV.read(file_path, headers: true, header_converters: :symbol, converters: :all).collect do |row|
  row.to_h
end
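Both of these return an array with one hash per row, something like [{:ticker=>"ZUMZ", :price=>30.0, :market_cap=>933.9}, ...], rather than a single hash keyed by ticker.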
To add on to Michael Kohl's answer, if you want to access the elements in the following manner
puts tickers[:price]["XTEX"] #=> 16.02
You can try the following code snippet:
CSV.foreach("Workbook1.csv", :headers => true, :header_converters => :symbol, :converters => :all) do |row|
hash_row = row.headers[1..-1].zip( (Array.new(row.fields.length-1, row.fields[0]).zip(row.fields[1..-1])) ).to_h
hash_row.each{|key, value| tickers[key] ? tickers[key].merge!([value].to_h) : tickers[key] = [value].to_h}
end
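With the stocks.csv sample above (and tickers initialized to an empty hash, as in Michael Kohl's answer), this builds roughly:
{:price=>{"ZUMZ"=>30.0, "XTEX"=>16.02, "AAC"=>9.83}, :market_cap=>{"ZUMZ"=>933.9, "XTEX"=>811.57, "AAC"=>80.02}}
so tickers[:price]["XTEX"] #=> 16.02 works as intended.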
To get the best of both worlds (very fast reading from a huge file AND the benefits of a native Ruby CSV object) my code had since evolved into this method:
$stock="XTEX"
csv_data = CSV.parse IO.read(%`|sed -n "1p; /^#{$stock},/p" stocks.csv`), {:headers => true, :return_headers => false, :header_converters => :symbol, :converters => :all}
# Now the 1-row CSV object is ready for use, eg:
$company = csv_data[:company][0]
$volatility_month = csv_data[:volatility_month][0].to_f
$sector = csv_data[:sector][0]
$industry = csv_data[:industry][0]
$rsi14d = csv_data[:relative_strength_index_14][0].to_f
which is closer to my original method, but only reads in one record plus line 1 of the input CSV file containing the headers. The inline sed instruction takes care of that, and the whole thing is noticeably instant. This is better than the last approach because now I can access all the fields from Ruby, and associatively, no longer caring about column numbers as was the case with awk.
Not a one-liner, but this was clearer to me:
csv_headers = CSV.parse(STDIN.gets)
csv = CSV.new(STDIN)
kick_list = []
csv.each_with_index do |row, i|
  row_hash = {}
  row.each_with_index do |field, j|
    row_hash[csv_headers[0][j]] = field
  end
  kick_list << row_hash
end
While this isn't a 100% native Ruby solution to the original question, should others stumble here and wonder what awk call I wound up using for now, here it is:
$dividend_yield = IO.readlines("|awk -F, '$1==\"#{$stock}\" {print $9}' datafile.csv")[0].to_f
where $stock is the variable I had previously assigned to a company's ticker symbol (the wannabe key field).
It conveniently survives problems by returning 0.0 if the ticker, the file, or field #9 is not found or is empty, or if the value cannot be cast to a float. So any trailing '%' in my case gets nicely truncated.
Note that at this point one could easily add more filters within awk to have IO.readlines return a 1-dimensional array of output lines from the smaller resulting CSV, e.g.
awk -F, '$9 >= 2.01 && $2 > 99.99 {print $0}' datafile.csv
outputs in bash which lines have a DivYld (col 9) over 2.01 and a price (col 2) over 99.99. (Unfortunately I'm not using the header row to determine field numbers, which is where I was ultimately hoping for some searchable associative Ruby array.)
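A minimal sketch, assuming the header names from stocks.csv above, of deriving the awk column number from a header name instead of hard-coding it:
headers = CSV.open("stocks.csv", 'r', :headers => true).read.headers # e.g. ["Ticker", "Price", "Market Cap"]
col = headers.index("Price") + 1 # awk fields are 1-based
price = IO.readlines(%Q{|awk -F, '$1=="#{$stock}" {print $#{col}}' stocks.csv})[0].to_f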

Parse CSV file with header fields as attributes for each row

I would like to parse a CSV file so that each row is treated like an object, with the header row being the names of the attributes of the object. I could write this, but I'm sure it's already out there.
Here is my CSV input:
"foo","bar","baz"
1,2,3
"blah",7,"blam"
4,5,6
The code would look something like this:
CSV.open('my_file.csv', 'r') do |csv_obj|
  puts csv_obj.foo # prints 1 the 1st time, "blah" the 2nd time, etc.
  puts csv_obj.bar # prints 2 the 1st time, 7 the 2nd time, etc.
end
With Ruby's CSV module I believe I can only access the fields by index. I think the above code would be a bit more readable. Any ideas?
Using Ruby 1.9 and above, you can get an indexable object:
CSV.foreach('my_file.csv', :headers => true) do |row|
  puts row['foo'] # prints 1 the 1st time, "blah" the 2nd time, etc.
  puts row['bar'] # prints 2 the 1st time, 7 the 2nd time, etc.
end
It's not dot syntax but it is much nicer to work with than numeric indexes.
As an aside, for Ruby 1.8.x FasterCSV is what you need to use the above syntax.
Here is an example of the symbolic syntax using Ruby 1.9. In the examples below, the code reads a CSV file named data.csv from the Rails db directory.
:headers => true treats the first row as a header instead of a data row. The :header_converters => :symbol option then converts each cell in the header row into a Ruby symbol.
CSV.foreach("#{Rails.root}/db/data.csv", {:headers => true, :header_converters => :symbol}) do |row|
puts "#{row[:foo]},#{row[:bar]},#{row[:baz]}"
end
In Ruby 1.8:
require 'fastercsv'

FasterCSV.foreach("#{Rails.root}/db/data.csv", {:headers => true, :header_converters => :symbol}) do |row|
  puts "#{row[:foo]},#{row[:bar]},#{row[:baz]}"
end
Based on the CSV provided by Poul (the Stack Overflow asker), the output from the example code above will be:
1,2,3
blah,7,blam
4,5,6
Depending on the characters used in the headers of the CSV file, it may be necessary to output the headers in order to see how CSV (FasterCSV) converted the string headers to symbols. You can output the array of headers from within the CSV.foreach.
row.headers
Easy to get a hash in Ruby 2.3:
CSV.foreach('my_file.csv', headers: true, header_converters: :symbol) do |row|
  puts row.to_h[:foo]
  puts row.to_h[:bar]
end
Although I am pretty late to the discussion, a few months ago I started a "CSV to object mapper" at https://github.com/vicentereig/virgola.
Given your CSV contents, mapping them to an array of FooBar objects is pretty straightforward:
"foo","bar","baz"
1,2,3
"blah",7,"blam"
4,5,6
require 'virgola'

class FooBar
  include Virgola

  attribute :foo
  attribute :bar
  attribute :baz
end

csv = <<CSV
"foo","bar","baz"
1,2,3
"blah",7,"blam"
4,5,6
CSV

foo_bars = FooBar.parse(csv).all
foo_bars.each { |foo_bar| puts foo_bar.foo, foo_bar.bar, foo_bar.baz }
Since I hit this question with some frequency:
array_of_hashmaps = CSV.read("path/to/file.csv", headers: true)
puts array_of_hashmaps.first["foo"] # 1
This is the non-block version, when you want to slurp the whole file.

How do you change headers in a CSV File with FasterCSV then save the new headers?

I'm having trouble understanding the :header_converters and :converters in FasterCSV. Basically, all I want to do is change column headers to their appropriate column names.
something like:
FasterCSV.foreach(csv_file, {:headers => true, :return_headers => false, :header_converters => :symbol, :converters => :all}) do |row|
  puts row[:some_column_header] # Would be "Some Column Header" in the csv file.
end
except I don't understand :symbol and :all in the converter parameters.
The :all converter means that it tries all of the built-in converters, specifically:
:integer: Converts any field Integer() accepts.
:float: Converts any field Float() accepts.
:date: Converts any field Date::parse() accepts.
:date_time: Converts any field DateTime::parse() accepts.
Essentially, it means that it will attempt to convert any field into those values (if possible) instead of leaving them as a string. So if you do row[i] and it would have returned the String value '9', it will instead return an Integer value 9.
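For example, a quick sketch of the effect on parsed fields:
CSV.parse("9,1.5,text").first #=> ["9", "1.5", "text"]
CSV.parse("9,1.5,text", :converters => :all).first #=> [9, 1.5, "text"]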
Header converters change the way the headers are used to index a row. For example, if doing something like this:
FasterCSV.foreach(some_file, :header_converters => :downcase) do |row|
You would index a column with the header "Some Header" as row['some header'].
If you used :symbol instead, you would index it with row[:some_header]. Symbol downcases the header name, replaces spaces with underscores, and removes characters other than a-z, 0-9, and _. It's useful because comparison of symbols is far faster than comparison of strings.
If you want to index a column with row['Some Header'], then just don't provide any :header_converter option.
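A compact sketch of the three indexing styles, assuming a file whose header row contains "Some Header":
FasterCSV.foreach(some_file, :headers => true) { |row| row['Some Header'] } # no header converter
FasterCSV.foreach(some_file, :headers => true, :header_converters => :downcase) { |row| row['some header'] }
FasterCSV.foreach(some_file, :headers => true, :header_converters => :symbol) { |row| row[:some_header] }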
EDIT:
In response to your comment, :header_converters won't do what you want, I'm afraid. It doesn't change the values of the header row, just how they are used as an index. Instead, you'll have to use the :return_headers option, detect the header row, and make your changes. To change the file and write it out again, you can use something like this:
require 'fastercsv'

input = File.open 'original.csv', 'r'
output = File.open 'modified.csv', 'w'

FasterCSV.filter input, output, :headers => true, :write_headers => true, :return_headers => true do |row|
  change_headers(row) if row.header_row?
end

input.close
output.close
If you need to completely replace the original file, add this line after doing the above:
FileUtils.mv 'modified.csv', 'original.csv', :force => true
I've found a simple approach for solving this problem. The FasterCSV library works just fine. I'm sure that ~7 years between when the post was created to now may have something to do with it, but I thought it was worth noting here.
When reading CSV files, the FasterCSV :header_converters option isn't well documented, in my opinion. But, instead of assigning a symbol (header_converters: :symbol) one can assign a lambda (header_converters: lambda {...}). When the CSV library reads the file it transforms the headers using the lambda. Then, one can save a new CSV file that reflects the transformed headers.
For example:
options = {
  headers: true,
  header_converters: lambda { |h| HEADER_MAP.keys.include?(h.to_sym) ? HEADER_MAP[h.to_sym] : h }
}

table = CSV.read(FILE_TO_PROCESS, options)
File.open(PROCESSED_FILE, "w") do |file|
  file.write(table.to_csv)
end
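Here HEADER_MAP maps the symbolized incoming header names to the headers you want written out; a hypothetical example:
HEADER_MAP = { old_name: "New Name", cat: "Dog" } # hypothetical mapping: keys are the raw incoming headers run through to_sym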
Rewriting CSV file headers is a common requirement for anyone converting exported CSV files into imports.
I found the following approach gave me what I needed:
lookup_headers = { "old" => "new", "cat" => "dog" } # The desired header swaps (string keys, since the converter receives the header as a String)
CSV($>, headers: true, write_headers: true) do |csv_out|
  CSV.foreach(ARGV[0],
              headers: true,
              # the following lambda replaces the header if it is found, leaving it if not...
              header_converters: lambda { |h| lookup_headers[h] || h },
              return_headers: true) do |master_row|
    if master_row.header_row?
      # The headers are now correctly replaced by calling the updated headers
      csv_out << master_row.headers
    else
      csv_out << master_row
    end
  end
end
Hope this helps!
