Hash: no implicit conversion of String into Integer - ruby

In Ruby, if I have a CSV file called vehicles.csv:
make,model,color,doors
dodge,charger,black,4
ford,focus,blue,5
nissan,350z,black,2
mazda,miata,white,2
honda,civid,brown,4
corvette,stingray,red,2
ford,fiesta,blue,5
This is my code:
require "csv"
file = CSV.open("vehicles.csv", headers: :first_row).map(&:to_h)
puts file["make"]
I turn this CSV file into a hash and then try to output one of the keys, but I keep getting "no implicit conversion of String into Integer". What must be done? I am trying to get output that looks like this:
dodge
ford
nissan
mazda
honda
corvette
ford

file = CSV.open("vehicles.csv", headers: :first_row).map(&:to_h)
leaves file as an array of hashes. Try this to see what I mean.
require "csv"
file = CSV.open("vehicles.csv", headers: :first_row).map(&:to_h)
puts file.class
puts file.first.class
puts file.first
puts file.map { _1["make"] }
Another way to accomplish what you want is to read the CSV file into a table and then use the values_at method to get all the data in a given column:
file = CSV.open("vehicles.csv", headers: :first_row)
table = file.read
puts table.values_at("make")
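For what it's worth, values_at here returns an array of one-element arrays (one per row), which puts happens to flatten. CSV::Table also supports plain column access with [], which gives a flat array. A minimal sketch, assuming the vehicles.csv shown above:
require "csv"
table = CSV.read("vehicles.csv", headers: true) # CSV.read with headers: true returns a CSV::Table
puts table["make"]                              # dodge, ford, nissan, ... one per line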

Related

Ignoring multiple header lines in a CSV

I've worked a bit with Ruby's CSV module, but am having some problems getting it to ignore multiple header lines.
Specifically, here are the first twenty lines of a file I want to parse:
USGS Digital Spectral Library splib06a
Clark and others 2007, USGS, Data Series 231.
For further information on spectrsocopy, see: http://speclab.cr.usgs.gov
ASCII Spectral Data file contents:
line 15 title
line 16 history
line 17 to end: 3-columns of data:
wavelength reflectance standard deviation
(standard deviation of 0.000000 means not measured)
( -1.23e34 indicates a deleted number)
----------------------------------------------------
Olivine GDS70.a Fo89 165um W1R1Bb AREF
copy of splib05a r 5038
0.205100 -1.23e34 0.090781
0.213100 -1.23e34 0.018820
0.221100 -1.23e34 0.005416
0.229100 -1.23e34 0.002928
The actual headers are given on the tenth line, and the seventeenth line is where the actual data start.
Here's my code:
require "nyaplot"
# Note that DataFrame basically just inherits from Ruby's CSV module.
class SpectraHelper < Nyaplot::DataFrame
  class << self
    def from_csv filename
      df = super(filename, col_sep: ' ') do |csv|
        csv.convert do |field, info|
          STDERR.puts "Field is #{field}"
        end
      end
    end
  end

  def csv_headers
    [:wavelength, :reflectance, :standard_deviation]
  end
end
def read_asc filename
  f = File.open(filename, "r")
  16.times do
    line = f.gets
    puts "Ignoring #{line}"
  end
  d = SpectraHelper.from_csv(f)
end
The output suggests that my calls to f.gets are not actually ignoring those lines, and I can't understand why. Here are the first few lines of output:
Field is Clark
Field is and
Field is others
Field is 2007,
Field is USGS,
I tried looking for a tutorial or example which shows processing of more complicated CSV files, but haven't had much luck. If someone could point me towards a resource which answers this question, I would be grateful (and would prefer to mark that as accepted over a solution to my specific problem — but both would be appreciated).
Using Ruby 2.1.
I believe that you are using ::open, which uses IO.open. This method opens the file again, so the lines you already read with f.gets are not skipped.
I modified the script a bit:
require 'csv'
class SpectraHelper < CSV
  def self.from_csv(filename)
    df = open(filename, 'r', col_sep: ' ') do |csv|
      csv.drop(16).each { |c| p c }
    end
  end
end

def read_asc(filename)
  SpectraHelper.from_csv(filename)
end
read_asc "data/csv1.csv"
It turns out the problem here was not with my understanding of CSV, but rather with how Nyaplot::DataFrame handles CSV files.
Basically, Nyaplot doesn't actually store things as CSVs. CSV is just an intermediate format. So a simple way to handle the files makes use of #khelli's suggestion:
def read_asc filename
  Nyaplot::DataFrame.new(CSV.open(filename, 'r',
      col_sep: ' ',
      headers: [:wavelength, :reflectance, :standard_deviation],
      converters: :numeric).
    drop(16).
    map do |csv_row|
      csv_row.to_h.delete_if { |k, v| k.nil? }
    end)
end
Thanks, everyone, for the suggestions.
I wouldn't use the CSV module, since your file is not well formatted. The following code will read the file and give you an array of your records:
lines = File.open(filename,'r').readlines
lines.slice!(0,16)
records = lines.map {|line| line.chomp.split}
The records output:
[["0.205100", "-1.23e34", "0.090781"], ["0.213100", "-1.23e34", "0.018820"], ["0.221100", "-1.23e34", "0.005416"], ["0.229100", "-1.23e34", "0.002928"]]

Difficulty processing json with ruby

I have the following json...
{
  "NumPages":"17",
  "Page":"1",
  "PageSize":"50",
  "Total":"808",
  "Start":"1",
  "End":"50",
  "FirstPageUri":"/v3/results?PAGE=1",
  "LastPageUri":"/v3/results?PAGE=17",
  "PreviousPageUri":"",
  "NextPageUri":"/v3/results?PAGE=2",
  "User":[
    {
      "RowNumber":"1",
      "UserId":"86938",
      "InternalId":"",
      "CompletionPercentage":"100",
      "DateTimeTaken":"2014-06-18T01:43:25Z",
      "DateTimeLastUpdated":"2014-06-18T01:58:11Z",
      "DateTimeCompleted":"2014-06-18T01:58:11Z",
      "Account":{
        "Id":"655",
        "Name":"Technical Community College"
      },
      "FirstName":"Matthew",
      "LastName":"Knice",
      "EmailAddress":"knice#gmail.com",
      "AssessmentResults":[
        {
          "Title":"Life Factors",
          "Code":"LifeFactors",
          "IsComplete":"1",
          "AttemptNumber":"1",
          "Percent":"58",
          "Readiness":"fail",
          "DateTimeCompleted":"2014-06-18T01:46:00Z"
        },
        {
          "Title":"Learning Styles",
          "Code":"LearnStyles",
          "IsComplete":"0"
        },
        {
          "Title":"Personal Attributes",
          "Code":"PersonalAttributes",
          "IsComplete":"1",
          "AttemptNumber":"1",
          "Percent":"52.08",
          "Readiness":"fail",
          "DateTimeCompleted":"2014-06-18T01:49:00Z"
        },
        {
          "Title":"Technical Competency",
          "Code":"TechComp",
          "IsComplete":"1",
          "AttemptNumber":"1",
          "Percent":"100",
          "Readiness":"pass",
          "DateTimeCompleted":"2014-06-18T01:51:00Z"
        },
        {
          "Title":"Technical Knowledge",
          "Code":"TechKnowledge",
          "IsComplete":"1",
          "AttemptNumber":"1",
          "Percent":"73.44",
          "Readiness":"question",
          "DateTimeCompleted":"2014-06-18T01:58:00Z"
        },
        {
          "Title":"Reading Rate & Recall",
          "Code":"Reading",
          "IsComplete":"0"
        },
        {
          "Title":"Typing Speed & Accuracy",
          "Code":"Typing",
          "IsComplete":"0"
        }
      ]
    },
    {
      "RowNumber":"2",
      "UserId":"8654723",
      "InternalId":"",
      "CompletionPercentage":"100",
      "DateTimeTaken":"2014-06-13T14:37:59Z",
      "DateTimeLastUpdated":"2014-06-13T15:00:12Z",
      "DateTimeCompleted":"2014-06-13T15:00:12Z",
      "Account":{
        "Id":"655",
        "Name":"Technical Community College"
      },
      "FirstName":"Virginia",
      "LastName":"Bustas",
      "EmailAddress":"bigBusta#students.college.edu",
      "AssessmentResults":[
        {
          ...
I need to start processing where you see "User". The stuff at the beginning (NumPages, Page, etc.) I want to ignore. Here is the processing script I am working on...
require 'csv'
require 'json'
CSV.open("your_csv.csv", "w") do |csv| #open new file for write
JSON.parse(File.open("sample.json").read).each do |hash| #open json to parse
csv << hash.values
end
end
Right now this fails with the error:
convert.rb:6:in `block (2 levels) in <main>': undefined method `values' for ["NumPages", "17"]:Array (NoMethodError)
I have run the JSON through a parser, and it seems to be valid. What is the best way to only process the "User" data?
You have to look at the structure of the JSON object being created. Here's a very small subset of your document being parsed, which makes it easier to see and understand:
require 'json'
foo = '{"NumPages":17,"User":[{"UserId":12345}]}'
bar = JSON[foo]
# => {"NumPages"=>17, "User"=>[{"UserId"=>12345}]}
bar['User'].first['UserId'] # => 12345
foo contains the JSON for a hash. bar contains the Ruby object created by the JSON parser after it reads foo.
User is the key pointing to an array of hashes. Because it's an array, you have to specify which of the hashes in the array you want to look at, which is what bar['User'].first does.
An alternate way to access that sub-hash is:
bar['User'][0]['UserId'] # => 12345
If there were multiple hashes inside the array, you could access them by using the appropriate index value. For example, if there are two hashes, and I want the second one:
foo = '{"NumPages":17,"User":[{"UserId":12345},{"UserId":12346}]}'
bar = JSON[foo]
# => {"NumPages"=>17, "User"=>[{"UserId"=>12345}, {"UserId"=>12346}]}
bar['User'].first['UserId'] # => 12345
bar['User'][0]['UserId'] # => 12345
bar['User'][1]['UserId'] # => 12346
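And if you want every UserId rather than one at a time, iterate over the array (a small sketch building on the same bar object):
bar['User'].each do |user|
  puts user['UserId']
end
# >> 12345
# >> 12346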
I'm wondering if I am going down the wrong road with the JSON.parse(File.open("sample.json").read).each do |hash|?
Yes, you are. You need to understand what you're doing, and break your code into digestible pieces so they make sense to you. Consider this:
require 'csv'
require 'json'
json_object = JSON.parse(File.read("sample.json"))
CSV.open("your_csv.csv", "w") do |csv| #open new file for write
csv << %w[RowNumber UserID AccountID AccountName FirstName LastName EmailAddress]
json_object['User'].each do |user_hash|
puts 'RowNumber: %s' % user_hash['RowNumber']
puts 'UserID: %s' % user_hash['UserID']
account = user_hash['UserID']['Account']
puts 'Account->Id: %s' % account['Id']
puts 'Account->Name: %s' % account['Name']
puts 'FirstName: %s' % user_hash['FirstName']
puts 'LastName: %s' % user_hash['LastName']
puts 'EmailAddress: %s' % user_hash['EmailAddress']
csv << [
user_hash['RowNumber'],
user_hash['UserID'],
account['Id'],
account['Name'],
user_hash['FirstName'],
user_hash['LastName'],
user_hash['EmailAddress']
]
end
end
This reads the JSON file and parses it into a Ruby object immediately. There is no special magic or anything else happening with the file: it's opened, read, and closed, and its content is passed to the JSON parser and assigned to json_object.
Once parsed, the CSV file is opened and a header row is written. It could have been written as part of the open statement but this is clearer for explaining what's going on.
json_object is a hash, so to access the 'User' data you have to use a normal hash access json_object['User']. The value for the User key is an array of hashes, so those need to be iterated over, which is what json_object['User'].each does, passing the hash elements of that array into the block as user_hash.
Inside that block it's much the same as accessing the value for 'User': each "element" is a key/value pair, except 'Account', which is an embedded hash.
Read the error message. each called on a hash gives you a sequence of two-element arrays (the key and value together), and there is no values method on an array. In any case, if what you have is a hash, there seems little point in cycling through it with each; if you want the "User" entry in the hash, why not ask for it up front?
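In other words, grab the "User" array directly and skip the rest. A minimal sketch of that idea (the column choice here is just an example):
require 'csv'
require 'json'

users = JSON.parse(File.read('sample.json'))['User'] # go straight to the array you care about

CSV.open('your_csv.csv', 'w') do |csv|
  users.each do |user|
    csv << user.values_at('UserId', 'FirstName', 'LastName', 'EmailAddress')
  end
end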
Just for posterity and context, this is the script I ended up using in its entirety. I needed to pull from a URL, process the results, and move them to a simple CSV. I needed to write the student ID, first name, last name, and the score from each of 4 assessments to the CSV.
require 'csv'
require 'json'
require 'curb'
c = Curl::Easy.new('myURL/m/v3/results')
c.http_auth_types = :basic
c.username = 'myusername'
c.password = 'mypassword'
c.perform
json_object = JSON.parse(c.body_str)
CSV.open("your_csv.csv", "w") do |csv| #open new file for write
csv << %w[UserID FirstName LastName LifeFactors PersonalAttributes TechComp TechKnowledge]
json_object['User'].each do |user_hash|
csv << [
user_hash['UserId'],
user_hash['FirstName'],
user_hash['LastName'],
user_hash['AssessmentResults'][0]['Percent'],
user_hash['AssessmentResults'][2]['Percent'],
user_hash['AssessmentResults'][3]['Percent'],
user_hash['AssessmentResults'][4]['Percent']
]
end
end
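One caveat with indexing AssessmentResults by position: if an assessment is missing or the order changes, the numeric indexes shift. A sketch of a more defensive lookup, keyed on the Code values shown in the sample JSON:
json_object['User'].each do |user_hash|
  # build e.g. { "LifeFactors" => "58", "PersonalAttributes" => "52.08", ... }
  percents = user_hash['AssessmentResults'].each_with_object({}) do |result, hash|
    hash[result['Code']] = result['Percent']
  end
  csv << [
    user_hash['UserId'],
    user_hash['FirstName'],
    user_hash['LastName'],
    percents['LifeFactors'],
    percents['PersonalAttributes'],
    percents['TechComp'],
    percents['TechKnowledge']
  ]
end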

How do I make an array of arrays out of a CSV?

I have a CSV file that looks like this:
Jenny, jenny#example.com ,
Ricky, ricky#example.com ,
Josefina josefina#example.com ,
I'm trying to get this output:
users_array = [
['Jenny', 'jenny#example.com'], ['Ricky', 'ricky#example.com'], ['Josefina', 'josefina#example.com']
]
I've tried this:
users_array = Array.new
file = File.new('csv_file.csv', 'r')
file.each_line("\n") do |row|
  puts row + "\n"
  columns = row.split(",")
  users_array.push columns
  puts users_array
end
Unfortunately, in Terminal, this returns:
Jenny
jenny#example.com
Ricky
ricky#example.com
Josefina
josefina#example.com
Which I don't think will work for this:
users_array.each_with_index do |user|
  add_page.form_with(:id => 'new_user') do |f|
    f.field_with(:id => "user_email").value = user[0]
    f.field_with(:id => "user_name").value = user[1]
  end.click_button
end
What do I need to change? Or is there a better way to solve this problem?
Ruby's standard library has a CSV class with a similar api to File but contains a number of useful methods for working with tabular data. To get the output you want, all you need to do is this:
require 'csv'
users_array = CSV.read('csv_file.csv')
PS - I think you are getting the output you expected with your file parsing as well, but maybe you're thrown off by how it is printing to the terminal. puts behaves differently with arrays, printing each member object on a new line instead of as a single array. If you want to view it as an array, use puts my_array.inspect.
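For example:
puts [['Jenny', 'jenny#example.com']]
# Jenny
# jenny#example.com
puts [['Jenny', 'jenny#example.com']].inspect
# [["Jenny", "jenny#example.com"]]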
Assuming that your CSV file actually has a comma between the name and email address on the third line:
require 'csv'
users_array = []
CSV.foreach('csv_file.csv') do |row|
  users_array.push row.delete_if(&:nil?).map(&:strip)
end
users_array
# => [["Jenny", "jenny#example.com"],
# ["Ricky", "ricky#example.com"],
# ["Josefina", "josefina#example.com"]]
There may be a simpler way, but what I'm doing there is discarding the nil field created by the trailing comma and stripping the spaces around the email addresses.
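The same cleanup can also be written on top of CSV.read if you prefer a one-liner (a sketch, using the same file):
require 'csv'
users_array = CSV.read('csv_file.csv').map { |row| row.compact.map(&:strip) }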

Ruby CSV parsing string with escaped quotes

I have a line in my CSV file that has some escaped quotes:
173,"Yukihiro \"The Ruby Guy\" Matsumoto","Japan"
When I try to parse it the the Ruby CSV parser:
require 'csv'
CSV.foreach('my.csv', headers: true, header_converters: :symbol) do |row|
  puts row
end
I get this error:
.../1.9.3-p327/lib/ruby/1.9.1/csv.rb:1914:in `block (2 levels) in shift': Missing or stray quote in line 122 (CSV::MalformedCSVError)
How can I get around this error?
The \" is typical Unix whereas Ruby CSV expects ""
To parse it:
require 'csv'
text = File.read('test.csv').gsub(/\\"/,'""')
CSV.parse(text, headers: true, header_converters: :symbol) do |row|
  puts row
end
Note: if your CSV file is very large, it uses a lot of RAM to read the entire file. Consider reading the file one line at a time.
Note: if your CSV file may have backslashes in front of backslashes, use Andrew Grimm's suggestion below to help:
gsub(/(?<!\\)\\"/,'""')
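A quick check of what the look-behind version produces on the sample line (the same result as the simpler gsub here, since the line has no escaped backslashes):
line = '173,"Yukihiro \"The Ruby Guy\" Matsumoto","Japan"'
puts line.gsub(/(?<!\\)\\"/, '""')
# 173,"Yukihiro ""The Ruby Guy"" Matsumoto","Japan"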
CSV supports "converters", which we can normally use to massage the content of a field before it's passed back to our code. For instance, that can be used to strip extra spaces on all fields in a row.
Unfortunately, the converters fire off after the line is split into fields, and it's during that step that CSV is getting mad about the embedded quotes, so we have to get between the "line read" step, and the "parse the line into fields" step.
This is my sample CSV file:
ID,Name,Country
173,"Yukihiro \"The Ruby Guy\" Matsumoto","Japan"
Preserving your CSV.foreach method, this is my example code for parsing it without CSV getting mad:
require 'csv'
require 'pp'
header = []
File.foreach('test.csv') do |csv_line|
  row = CSV.parse(csv_line.gsub('\"', '""')).first
  if header.empty?
    header = row.map(&:to_sym)
    next
  end
  row = Hash[header.zip(row)]
  pp row
  puts row[:Name]
end
And the resulting hash and name value:
{:ID=>"173", :Name=>"Yukihiro \"The Ruby Guy\" Matsumoto", :Country=>"Japan"}
Yukihiro "The Ruby Guy" Matsumoto
I assumed you wanted a hash back because you specified the :headers flag:
CSV.foreach('my.csv', headers: true, header_converters: :symbol) do |row|
Open the file in MS Excel and save it as MS-DOS Comma Separated (.csv).

Parse CSV file with header fields as attributes for each row

I would like to parse a CSV file so that each row is treated like an object, with the header row being the names of the attributes in the object. I could write this, but I'm sure it's already out there.
Here is my CSV input:
"foo","bar","baz"
1,2,3
"blah",7,"blam"
4,5,6
The code would look something like this:
CSV.open('my_file.csv', 'r') do |csv_obj|
  puts csv_obj.foo # prints 1 the 1st time, "blah" 2nd time, etc
  puts csv_obj.bar # prints 2 the first time, 7 the 2nd time, etc
end
With Ruby's CSV module I believe I can only access the fields by index. I think the above code would be a bit more readable. Any ideas?
Using Ruby 1.9 and above, you can get an indexable object:
CSV.foreach('my_file.csv', :headers => true) do |row|
  puts row['foo'] # prints 1 the 1st time, "blah" 2nd time, etc
  puts row['bar'] # prints 2 the first time, 7 the 2nd time, etc
end
It's not dot syntax but it is much nicer to work with than numeric indexes.
As an aside, for Ruby 1.8.x FasterCSV is what you need to use the above syntax.
Here is an example of the symbolic syntax using Ruby 1.9. In the examples below, the code reads a CSV file named data.csv from the Rails db directory.
The :headers => true option treats the first row as a header instead of a data row. The :header_converters => :symbol option then converts each cell in the header row into a Ruby symbol.
CSV.foreach("#{Rails.root}/db/data.csv", {:headers => true, :header_converters => :symbol}) do |row|
puts "#{row[:foo]},#{row[:bar]},#{row[:baz]}"
end
In Ruby 1.8:
require 'fastercsv'

FasterCSV.foreach("#{Rails.root}/db/data.csv", {:headers => true, :header_converters => :symbol}) do |row|
  puts "#{row[:foo]},#{row[:bar]},#{row[:baz]}"
end
Based on the CSV provided by Poul (the StackOverflow asker), the output from the example code above will be:
1,2,3
blah,7,blam
4,5,6
Depending on the characters used in the headers of the CSV file, it may be necessary to output the headers in order to see how CSV (FasterCSV) converted the string headers to symbols. You can output the array of headers from within the CSV.foreach.
row.headers
Easy to get a hash in Ruby 2.3:
CSV.foreach('my_file.csv', headers: true, header_converters: :symbol) do |row|
  puts row.to_h[:foo]
  puts row.to_h[:bar]
end
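A small refinement, since to_h builds a new hash on every call; caching it once per row does the same thing (just a sketch):
CSV.foreach('my_file.csv', headers: true, header_converters: :symbol) do |row|
  fields = row.to_h
  puts fields[:foo]
  puts fields[:bar]
end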
Although I am pretty late to the discussion, a few months ago I started a "CSV to object mapper" at https://github.com/vicentereig/virgola.
Given your CSV contents, mapping them to an array of FooBar objects is pretty straightforward:
"foo","bar","baz"
1,2,3
"blah",7,"blam"
4,5,6
require 'virgola'

class FooBar
  include Virgola

  attribute :foo
  attribute :bar
  attribute :baz
end

csv = <<CSV
"foo","bar","baz"
1,2,3
"blah",7,"blam"
4,5,6
CSV

foo_bars = FooBar.parse(csv).all
foo_bars.each { |foo_bar| puts foo_bar.foo, foo_bar.bar, foo_bar.baz }
Since I hit this question with some frequency:
array_of_hashmaps = CSV.read("path/to/file.csv", headers: true)
puts array_of_hashmaps.first["foo"] # 1
This is the non-block version, when you want to slurp the whole file.
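If you want plain hashes rather than CSV::Row objects (for example, to serialize them), you can convert in one pass (a sketch):
array_of_hashmaps = CSV.read("path/to/file.csv", headers: true).map(&:to_h)
puts array_of_hashmaps.first["foo"] # 1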
