Ruby - iterating over a hash of hashes

I have the following array and am struggling to format it for my needs.
consolidated = [
  {:name=>"Bob", :details=>{"work"=>"Carpenter", "age"=>"26", "Experience"=>"6"}},
  {:name=>"Colin", :details=>{"work"=>"painting", "age"=>"20", "Experience"=>"4"}}
]
I am trying to format it as below:
Bob work Carpenter
age 26
Experience 6
Colin work painting
age 20
Experience 4
I tried the following:
require 'csv'

CSV.open("output.csv", "wb") do |csv|
  csv << ["name", "nature", "details"]
  consolidated.each do |val|
    csv << [val[:name], val[:details]]
  end
end
#=> [{:name=>"Bob", :details=>{"work"=>"Carpenter", "age"=>"26", "Experience"=>"6"}},
#    {:name=>"Colin", :details=>{"work"=>"painting", "age"=>"20", "Experience"=>"4"}}]
but it prints the following
name nature details
Bob "work"=>"Carpenter", "age"=>"26", "Experience"=>"6"
Colin "work"=>"painting", "age"=>"20", "Experience"=>"4"
I'm not sure how to iterate over the inner hash from the first loop to get the expected format.
Thanks.

Here's something to get you started:
require 'csv'

data = [
  {:name => "Bob", :details => {"work"=>"Carpenter", "age"=>"26", "Experience"=>"6"}},
  {:name => "Colin", :details => {"work"=>"painting", "age"=>"20", "Experience"=>"4"}}
]

str = CSV.generate do |csv|
  data.each do |datum|
    datum[:details].each do |detail_key, detail_value|
      csv << [datum[:name], detail_key, detail_value]
    end
  end
end

puts str
# >> Bob,work,Carpenter
# >> Bob,age,26
# >> Bob,Experience,6
# >> Colin,work,painting
# >> Colin,age,20
# >> Colin,Experience,4
Simply iterate over all the details and emit a new row for each key-value pair, prepending the person's name.
This gets you almost what you need: the only things missing are blank rows between sections, and the person's name is duplicated on each line. It'll be your homework to figure out how to add those improvements.

I don't know much about CSV generation (so I'm assuming it works as you have written it), but you can iterate over your object this way:
consolidated = [
  {:name => "Bob", :details => {"work"=>"Carpenter", "age"=>"26", "Experience"=>"6"}},
  {:name => "Colin", :details => {"work"=>"painting", "age"=>"20", "Experience"=>"4"}}
]

CSV.open("output.csv", "wb") do |csv|
  csv << ["name", "nature", "details"]
  consolidated.each do |val|
    details = val[:details]
    nature_1 = details.keys.first
    detail_1 = details.delete(nature_1)
    csv << [val[:name], nature_1, detail_1]
    details.each do |k, v|
      csv << [nil, k, v]
    end
  end
end
Note: this mutates your original consolidated array (keys are deleted from the nested detail hashes). If you want to preserve it, deep-copy it first (a shallow dup won't protect the nested hashes), or modify the logic so it does not delete the first key-value pair from val[:details].
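For illustration, a minimal sketch of the non-destructive variant (it only reads the details, so consolidated is left intact, and the blank name cells mimic the output above):
require 'csv'

CSV.open("output.csv", "wb") do |csv|
  csv << ["name", "nature", "details"]
  consolidated.each do |val|
    val[:details].each_with_index do |(nature, detail), i|
      # put the name only on the first detail row of each person
      csv << [i.zero? ? val[:name] : nil, nature, detail]
    end
  end
end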

You need to iterate over the embedded hash with the each_pair iterator.
Something like this:
require 'csv'

data = [{:name => "Bob", :details => {"work"=>"Carpenter", "age"=>"26", "Experience"=>"6"}}]

CSV.open("output.csv", "wb") do |csv|
  csv << ["name", "nature", "details"]
  data.each do |val|
    csv << [val[:name], 'work', val[:details]['work']]
    val[:details].each_pair do |key, value|
      # drop the first pair because it was already used on the name row
      next if key == 'work'
      csv << ["", key, value]
    end
  end
end

Related

Nested Ruby Hash to CSV

I have a nested hash, as in the following example:
{
  "0001" => {
    "All nodes" => ["N001", "N002", "N003"],
    "All links" => ["N001.1", "N002.1", "N003.1"],
    "Pumps"     => ["N001.2"]
  },
  "0002" => {
    "All nodes" => ["N004", "N005", "N006"],
    "All links" => ["N004.1", "N005.1", "N006.1"],
    "Pumps"     => ["N005.2"]
  },
  "0003" => {
    "All nodes" => ["N007", "N008", "N009"],
    "All links" => ["N007.1", "N008.1", "N009.1"],
    "Pumps"     => ["N007.2"]
  }
}
Under Nodes is stored information like coordinates; under Links, inverts, diameters and storage; and under Pumps, on/off levels and discharge.
I would like to know how I can export the information stored in the hash to CSV, with the values in the right columns (the columns being the hash keys: 0001, 0002 and 0003).
As an example, this is what I have managed so far:
require 'CSV'
net = WSApplication.current_network
CSVsaveloc = WSApplication.file_dialog(false, "csv", "Comma Separated Variable File", "testexportcsv", false, true)
f = File.new(CSVsaveloc, "w")
CSV.open(CSVsaveloc, "wb") do |csv|
  csv << Hash.keys
  Hash.each do |key, values|
    csv << x.mean
    csv << y.mean
    csv << diameter.min
    csv << discharge.min
  end
end
Now I'm getting the export like this:
0001,0002,0003
246164.2646
518466.7589
300mm
0.01
246181.6492
518444.1727
250mm
0.005
246171.5763
518509.8948
500mm
0.1
BUT, I would like to have it like this:
0001,0002,0003
246164.2646,246181.6492,246171.5763
518466.7589,518444.1727,518509.8948
300mm,250mm,500mm
0.01,0.005,0.1
From what I'm noticing, it seems that CSV starts a new line every time you run csv <<. It might help to build an array for each line you're expecting, then append each array to the CSV.
Expanding on the example code you gave earlier, maybe you can try this instead? (A self-contained sketch with made-up values follows the script below.)
require 'csv'
net = WSApplication.current_network
CSVsaveloc = WSApplication.file_dialog(false, "csv", "Comma Separated Variable File", "testexportcsv", false, true)
f = File.new(CSVsaveloc, "w")

# Create one array per output line
x_mean = []
y_mean = []
diameter_min = []
discharge_min = []

# Insert each value into the corresponding array
# (your_hash, x, y, diameter and discharge below are placeholders from your example)
your_hash.each do |key, values|
  x_mean << x.mean
  y_mean << y.mean
  diameter_min << diameter.min
  discharge_min << discharge.min
end

# Write each array as one line of the CSV
CSV.open(CSVsaveloc, "wb") do |csv|
  csv << your_hash.keys
  csv << x_mean
  csv << y_mean
  csv << diameter_min
  csv << discharge_min
end

manipulating csv with ruby

I have a CSV from which I've removed the irrelevant data.
Now I need to split "Name and surname" into 2 columns by space but ignoring a 3rd column in case there are 3 names, then invert the order of the columns "Name and surname" and "Phone" (phone first) and then put them into a file ignoring the headers. I've never actually learned Ruby but I've played with Python 10 years ago. Can you help me? This is what I was able to do until now:
E.g.
require 'csv'

csv_table = CSV.read(ARGV[0], :headers => true)
keep = ["Name and surname", "Phone", "Email"]
new_csv_table = csv_table.by_col!.delete_if do |column_name, column_values|
  !keep.include?(column_name)
end
new_csv_table.to_csv
Begin by creating a CSV file.
str =<<~END
Name and surname,Phone,Email
John Doe,250-256-3145,John#Doe.com
Marsha Magpie,250-256-3154,Marsha#Magpie.com
END
File.write('t_in.csv', str)
#=> 109
Initially, let's read the file, add two columns, "Name" and "Surname", and optionally delete the column, "Name and surname", without regard to column order.
First read the file into a CSV::Table object.
require 'csv'
tbl = CSV.read('t_in.csv', headers: true)
#=> #<CSV::Table mode:col_or_row row_count:3>
Add the new columns.
tbl.each do |row|
  row["Name"], row["Surname"] = row["Name and surname"].split
end
  #=> #<CSV::Table mode:col_or_row row_count:3>
Note that if row["Name and surname"] had equaled “John Paul Jones”, we would have obtained row["Name"] #=> “John” and row["Surname"] #=> “Paul”.
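If you do want the surname to be the last word regardless of middle names, a hedged variant of the loop above (assuming the surname is always the final token):
tbl.each do |row|
  parts = row["Name and surname"].split
  row["Name"]    = parts.first
  row["Surname"] = parts.last   # "John Paul Jones" => Name "John", Surname "Jones"
end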
If the column "Name and surname" is no longer required we can delete it.
tbl.delete("Name and surname")
#=> ["John Doe", "Marsha Magpie"]
Write tbl to a new CSV file.
CSV.open('t_out.csv', "w") do |csv|
  csv << tbl.headers
  tbl.each { |row| csv << row }
end
  #=> #<CSV::Table mode:col_or_row row_count:3>
Let's see what was written.
puts File.read('t_out.csv')
displays
Phone,Email,Name,Surname
250-256-3145,John#Doe.com,John,Doe
250-256-3154,Marsha#Magpie.com,Marsha,Magpie
Now let's rearrange the order of the columns.
header_order = ["Phone", "Name", "Surname", "Email"]

CSV.open('t_out.csv', "w") do |csv|
  csv << header_order
  tbl.each { |row| csv << header_order.map { |header| row[header] } }
end
  #=> #<CSV::Table mode:col_or_row row_count:3>

puts File.read('t_out.csv')
displays
Phone,Name,Surname,Email
250-256-3145,John,Doe,John#Doe.com
250-256-3154,Marsha,Magpie,Marsha#Magpie.com

Ruby - Merge CSV duplicate columns with same SKU

I have created a CSV file about my eshop that contains multiple items with different SKUs. Some SKUs appear more than once because they can be in more than one category (but the Title and Price will always be the same for a given SKU). Example:
SKU,Title,Category,Price
001,Soap,Bathroom,0.5
001,Soap,Kitchen,0.5
002,Water,Kitchen,0.4
002,Water,Garage,0.4
003,Juice,Kitchen,0.8
I now wish to create another CSV file from it that has no duplicate SKUs and aggregates the "Category" attributes, as follows:
SKU,Title,Category,Price
001,Soap,Bathroom/Kitchen,0.5
002,Water,Kitchen/Garage,0.4
003,Juice,Kitchen,0.8
How can I do that?
It's my understanding that you wish to read a CSV file, perform some operations on the data, and then write the result to a new CSV file. You could do that as follows.
Code
require 'csv'

def convert(csv_file_in, csv_file_out, group_field, aggregate_field)
  csv = CSV.read(csv_file_in, headers: true)
  headers = csv.headers
  arr = csv.group_by { |row| row[group_field] }.
        map do |_, a|
          headers.map { |h| h == aggregate_field ?
            a.map { |row| row[aggregate_field] }.join('/') : a.first[h] }
        end
  CSV.open(csv_file_out, "wb") do |csv|
    csv << headers
    arr.each { |row| csv << row }
  end
end
Example
Let's create a CSV file with the following data:
s =<<_
SKU,Title,Category,Price
001,Soap,Bathroom,0.5
001,Soap,Kitchen,0.5
002,Water,Kitchen,0.4
002,Water,Garage,0.4
003,Juice,Kitchen,0.8
_
FNameIn = 'testin.csv'
FNameOut = 'testout.csv'
IO.write(FNameIn, s)
#=> 135
Now execute the method with these values:
convert(FNameIn, FNameOut, "SKU", "Category")
and confirm FNameOut was written correctly:
puts IO.read(FNameOut)
SKU,Title,Category,Price
001,Soap,Bathroom/Kitchen,0.5
002,Water,Kitchen/Garage,0.4
003,Juice,Kitchen,0.8
Explanation
The steps are as follows:
csv_file_in = FNameIn
csv_file_out = FNameOut
group_field = "SKU"
aggregate_field = "Category"
csv = CSV.read(FNameIn, headers: true)
See CSV::read.
headers = csv.headers
#=> ["SKU", "Title", "Category", "Price"]
h = csv.group_by { |row| row[group_field] }
  #=> {"001"=>[
  #      #<CSV::Row "SKU":"001" "Title":"Soap" "Category":"Bathroom" "Price":"0.5">,
  #      #<CSV::Row "SKU":"001" "Title":"Soap" "Category":"Kitchen" "Price":"0.5">
  #    ],
  #    "002"=>[
  #      #<CSV::Row "SKU":"002" "Title":"Water" "Category":"Kitchen" "Price":"0.4">,
  #      #<CSV::Row "SKU":"002" "Title":"Water" "Category":"Garage" "Price":"0.4">
  #    ],
  #    "003"=>[
  #      #<CSV::Row "SKU":"003" "Title":"Juice" "Category":"Kitchen" "Price":"0.8">
  #    ]
  #   }
arr = h.map do |_, a|
  headers.map { |h| h == aggregate_field ?
    a.map { |row| row[aggregate_field] }.join('/') : a.first[h] }
end
  #=> [["001", "Soap", "Bathroom/Kitchen", "0.5"],
  #    ["002", "Water", "Kitchen/Garage", "0.4"],
  #    ["003", "Juice", "Kitchen", "0.8"]]
See CSV#headers and Enumerable#group_by, an oft-used method. Lastly, write the output file:
CSV.open(FNameOut, "wb") do |csv|
  csv << headers
  arr.each { |row| csv << row }
end
See CSV::open. Now let's return to the calculation of arr. This is most easily explained by inserting some puts statements and executing the code.
arr = h.map do |_, a|
  puts " _=#{_}"
  puts " a=#{a}"
  headers.map do |h|
    puts " header=#{h}"
    if h == aggregate_field
      a.map { |row| row[aggregate_field] }.join('/')
    else
      a.first[h]
    end.
      tap { |s| puts " mapped to #{s}" }
  end
end
See Object#tap. The following is displayed.
_=001
a=[#<CSV::Row "SKU":"001" "Title":"Soap" "Category":"Bathroom" "Price":"0.5">,
#<CSV::Row "SKU":"001" "Title":"Soap" "Category":"Kitchen" "Price":"0.5">]
header=SKU
mapped to 001
header=Title
mapped to Soap
header=Category
mapped to Bathroom/Kitchen
header=Price
mapped to 0.5
_=002
a=[#<CSV::Row "SKU":"002" "Title":"Water" "Category":"Kitchen" "Price":"0.4">,
#<CSV::Row "SKU":"002" "Title":"Water" "Category":"Garage" "Price":"0.4">]
header=SKU
mapped to 002
header=Title
mapped to Water
header=Category
mapped to Kitchen/Garage
header=Price
mapped to 0.4
_=003
a=[#<CSV::Row "SKU":"003" "Title":"Juice" "Category":"Kitchen" "Price":"0.8">]
header=SKU
mapped to 003
header=Title
mapped to Juice
header=Category
mapped to Kitchen
header=Price
mapped to 0.8

It seems that, in order for this to be correct, we must assume the SKU number and the price are always the same for a given SKU. Since the only key whose data you want to merge is Category, here is how you can do it (a quick check of that assumption is sketched after the script).
Assuming this is your test.csv, in the same directory as the Ruby script:
# test.csv
SKU,Title,Category,Price
001,Soap,Bathroom,0.5
001,Soap,Kitchen,0.5
002,Water,Kitchen,0.4
002,Water,Garage,0.4
003,Juice,Kitchen,0.8
Ruby script, in the same directory as your test.csv file:
# fix_csv.rb
require 'csv'

rows = CSV.read 'test.csv', :headers => true
skews = rows.group_by { |row| row['SKU'] }.keys.uniq
values = rows.group_by { |row| row['SKU'] }

merged = skews.map do |key|
  group = values.select { |k, v| k == key }.values.flatten.map(&:to_h)
  category = group.map { |h| h['Category'] }.join('/')
  new_data = group[0]
  new_data['Category'] = category
  new_data
end

CSV.open('merged_data.csv', 'w') do |csv|
  csv << merged.first.keys # writes the header row
  merged.each do |hash|
    csv << hash.values
  end
end
puts 'see contents of merged_data.csv'
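As mentioned above, a quick hedged check of the assumption that Title and Price never differ within a SKU could look like this:
require 'csv'

rows = CSV.read 'test.csv', :headers => true
conflicts = rows.group_by { |row| row['SKU'] }.select do |_, group|
  group.map { |r| [r['Title'], r['Price']] }.uniq.size > 1
end
warn "SKUs with conflicting Title/Price: #{conflicts.keys.join(', ')}" unless conflicts.empty?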

How to split string in array in two?

I currently have a single column CSV file such as: ["firstname lastname", "firstname lastname", ...].
I would like to create a CSV file such as ["f.lastname", "f.lastname"...]; f being the first letter of the firstname.
Any idea how I should do that?
Update
OK, I feel that I am close thanks to you guys; here's what I've got so far:
require 'csv'

filename = CSV.read("mails.csv")
mails = []
CSV.foreach(filename) do |col|
  mails << filename.map { |n| n.sub(/\A(\w)\w* (\w+)\z/, '\1. \2') }
end
puts mails.to_s
But I still get an error.
Update 2
OK, this works just fine:
require 'csv'

mails = []
CSV.foreach('mails.csv', :headers => false) do |row|
  mails << row.map(&:split).map { |f, l| "#{f[0]}.#{l}#mail.com" }
end
File.open("mails_final.csv", 'w') { |f| f.puts mails }
puts mails.to_s
Thanks a lot to all of you ;)
A solution without using a regular expression:
ary = ["firstname lastname", "firstname lastname"]
ary.map(&:split).map{|f, l| "#{f[0]}. #{l}" }
#=> ["f. lastname", "f. lastname"]
ary = ["firstname lastname", "firstname lastname"]
ary.map{|a| e=a.split(" "); e[0][0]+"."+e[1]}
#=> ["f.lastname", "f.lastname"]

You need to modify this code of yours:
CSV.foreach(filename) do |col|
  mails << filename.map { |n| n.sub(/\A(\w)\w* (\w+)\z/, '\1. \2') }
end
to something like the following:
CSV.foreach('path/to/mails.csv', headers: false) do |row|
  # use your CSV file's path; with headers: false each row is an Array of field strings.
  # Do not call map on filename -- that is what causes the error.
  mails << row.map { |n| n.sub(/\A(\w)\w* (\w+)\z/, '\1. \2') }
end

I would do it this way:
array = ["firstname lastname", "firstname lastname"]
array.map { |n| "#{n[0]}.#{n.split[1]}" }

Output array to CSV in Ruby

It's easy enough to read a CSV file into an array with Ruby but I can't find any good documentation on how to write an array into a CSV file. Can anyone tell me how to do this?
I'm using Ruby 1.9.2 if that matters.
To a file:
require 'csv'

CSV.open("myfile.csv", "w") do |csv|
  csv << ["row", "of", "CSV", "data"]
  csv << ["another", "row"]
  # ...
end
To a string:
require 'csv'

csv_string = CSV.generate do |csv|
  csv << ["row", "of", "CSV", "data"]
  csv << ["another", "row"]
  # ...
end
Here's the current documentation on CSV: http://ruby-doc.org/stdlib/libdoc/csv/rdoc/index.html
If you have an array of arrays of data:
rows = [["a1", "a2", "a3"],["b1", "b2", "b3", "b4"], ["c1", "c2", "c3"]]
Then you can write this to a file with the following, which I think is much simpler:
require "csv"
File.write("ss.csv", rows.map(&:to_csv).join)
I've got this down to just one line.
require 'csv'

rows = [['a1', 'a2', 'a3'], ['b1', 'b2', 'b3', 'b4'], ['c1', 'c2', 'c3'], ... ]
csv_str = rows.inject([]) { |csv, row| csv << CSV.generate_line(row) }.join("")
  #=> "a1,a2,a3\nb1,b2,b3,b4\nc1,c2,c3\n"
Do all of the above and save to a CSV, in one line:
File.open("ss.csv", "w") { |f| f.write(rows.inject([]) { |csv, row| csv << CSV.generate_line(row) }.join("")) }
NOTE: to export an ActiveRecord model's table to CSV, it would be something like this, I think:
CSV.open(fn, 'w') do |csv|
  csv << Model.column_names
  Model.where(query).each do |m|
    csv << m.attributes.values
  end
end
Hmm @tamouse, that gist is somewhat confusing to me without reading the CSV source, but generically, assuming each hash in your array has the same number of key/value pairs and that the keys are always the same and in the same order (i.e. your data is structured), this should do the deed:
rowid = 0
CSV.open(fn, 'w') do |csv|
  hsh_ary.each do |hsh|
    rowid += 1
    # write the header row (column labels) before the first data row
    csv << hsh.keys if rowid == 1
    csv << hsh.values
  end
end
If your data isn't structured, this obviously won't work.
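One hedged way to cope with unstructured hashes is to take the union of all keys as the header and leave blank cells where a hash is missing a key (fn and hsh_ary as above):
require 'csv'

headers = hsh_ary.flat_map(&:keys).uniq          # union of all keys, in first-seen order
CSV.open(fn, 'w') do |csv|
  csv << headers
  # hsh[k] is nil (an empty cell) where a key is absent
  hsh_ary.each { |hsh| csv << headers.map { |k| hsh[k] } }
end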
If anyone is interested, here are some one-liners (and a note on loss of type information in CSV):
require 'csv'
rows = [[1,2,3],[4,5]] # [[1, 2, 3], [4, 5]]
# To CSV string
csv = rows.map(&:to_csv).join # "1,2,3\n4,5\n"
# ... and back, as String[][]
rows2 = csv.split("\n").map(&:parse_csv) # [["1", "2", "3"], ["4", "5"]]
# File I/O:
filename = '/tmp/vsc.csv'
# Save to file -- answer to your question
IO.write(filename, rows.map(&:to_csv).join)
# Read from file
# rows3 = IO.read(filename).split("\n").map(&:parse_csv)
rows3 = CSV.read(filename)
rows3 == rows2 # true
rows3 == rows # false
Note: CSV loses all type information. You can use JSON to preserve basic type information, or go to the more verbose (but more easily human-editable) YAML to preserve all type information (for example, a date, which would become a string in both CSV and JSON).
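For instance, a small illustration of the type loss and of the JSON alternative (Integers survive the JSON round trip but come back as Strings from CSV):
require 'csv'
require 'json'

rows = [[1, 2, 3], [4, 5]]
rows.map(&:to_csv).join.split("\n").map(&:parse_csv)  #=> [["1", "2", "3"], ["4", "5"]]
JSON.parse(rows.to_json)                              #=> [[1, 2, 3], [4, 5]]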
Building on @boulder_ruby's answer, this is what I'm looking for, assuming us_eco contains the CSV table from my gist.
CSV.open('outfile.txt', 'wb', col_sep: "\t") do |csvfile|
  csvfile << us_eco.first.keys
  us_eco.each do |row|
    csvfile << row.values
  end
end
Updated the gist at https://gist.github.com/tamouse/4647196
Struggling with this myself. This is my take:
https://gist.github.com/2639448:
require 'csv'

class CSV
  def CSV.unparse(array)
    CSV.generate do |csv|
      array.each { |i| csv << i }
    end
  end
end

CSV.unparse [ %w(your array), %w(goes here) ]
