I have two CSV files, one with columns A, B, C and the other with columns D, E, F. I want to join them into a new file containing the rows where File1.B = File2.E; each output row should have the columns A, B/E, C, D, F. How can I achieve this JOIN without using database tables?
Givens
We are given the following.
The paths for the two input files:
fname1 = 't1.csv'
fname2 = 't2.csv'
The path for the output file:
fname3 = 't3.csv'
The names of the headers to match in each of the two input files:
target1 = 'B'
target2 = 'E'
I assume (as is the case with the example) that rows are matched by position, so the two files contain the same number of lines.
Create test files
Let's first create the two files:
str = [%w|A B C|, %w|1 1 1|, %w|2 2 2|, %w|3 4 5|, %w|6 9 9|].
map { |a| a.join(",") }.join("\n")
#=> "A,B,C\n1,1,1\n2,2,2\n3,4,5\n6,9,9"
File.write(fname1, str)
#=> 29
str = [%w|D E F|, %w|21 1 41|, %w|22 5 42|, %w|23 8 45|, %w|26 9 239|].
map { |a| a.join(",") }.join("\n")
#=> "D,E,F\n21,1,41\n22,5,42\n23,8,45\n26,9,239"
File.write(fname2, str)
#=> 38
Read the input files into CSV::Table objects
When reading fname1 I will use the :header_converters option to convert the header "B" to "B/E". Note that this does not require knowledge of the location of the column with header "B" (or whatever it may be).
require 'csv'
new_target1 = target1 + "/" + target2
#=> "B/E"
csv1 = CSV.read(fname1, headers: true,
header_converters: lambda { |header| header==target1 ? new_target1 : header})
csv2 = CSV.read(fname2, headers: true)
Construct arrays of headers to be written from each input file
headers1 = csv1.headers
#=> ["A", "B/E", "C"]
headers2 = csv2.headers - [target2]
#=> ["D", "F"]
Create the output file
We will first write the new headers headers1 + headers2 to the output file.
Next, for each row index i (i = 0 corresponding to the first row after the header row in each file) for which a condition is satisfied, we write as a single row the elements of csv1[i] and csv2[i] that lie in the columns whose headers appear in headers1 and headers2. The condition for writing the rows at index i is:
csv1[i][new_target1] == csv2[i][target2] #=> true
Now open fname3 for writing, write the headers and then the body.
CSV.open(fname3, 'w') do |csv|
csv << headers1 + headers2
[csv1.size, csv2.size].min.times do |i|
csv << (headers1.map { |h| csv1[i][h] } +
headers2.map { |h| csv2[i][h] }) if
csv1[i][new_target1] == csv2[i][target2]
end
end
#=> 4
Let's confirm that what was written is correct.
puts File.read(fname3)
A,B/E,C,D,F
1,1,1,21,41
6,9,9,26,239
If you have CSV files like these:
first.csv:
A | B | C
1 | 1 | 1
2 | 2 | 2
3 | 4 | 5
6 | 9 | 9
second.csv:
D | E | F
21 | 1 | 41
22 | 5 | 42
23 | 8 | 45
26 | 9 | 239
You can do something like this:
require 'csv'
first = CSV.read('first.csv')
second = CSV.read('second.csv')
CSV.open("result.csv", "w") do |csv|
csv << %w[A B.E C D F]
first.each do |rowF|
second.each do |rowS|
csv << [rowF[0],rowF[1],rowF[2],rowS[0],rowS[2]] if rowF[1] == rowS[1]
end
end
end
To get this:
result.csv:
A | B.E | C | D | F
1 | 1 | 1 | 21 | 41
6 | 9 | 9 | 26 | 239
The answer is to use group_by to create a hash table and then iterate over its keys. Assuming the column you're joining on is unique in each table:
require 'csv'
join_column = :whatever # CSV.table symbolizes headers, so use a symbol
csv1 = CSV.table("file1.csv").group_by { |r| r[join_column] }
csv2 = CSV.table("file2.csv").group_by { |r| r[join_column] }
joined_data = (csv1.keys & csv2.keys).sort.map do |join_value|
csv1[join_value].first.to_h.merge(csv2[join_value].first.to_h)
end
If the column is not unique in either table, you need to decide how to handle those cases, since there will be more than one element in the arrays csv1[join_column] and csv2[join_column]. You could do an O(m×n) join as suggested in one of the other answers (i.e. nested map calls), or you could filter or combine the matches based on some criteria. The choice really depends on your use case.
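For the non-unique case, one option is the cross product of each key's matching groups. Here is a minimal sketch using plain hashes; the cross_join helper and the sample data are made up for illustration, not part of the answer above:

```ruby
# Sketch: join two arrays of row hashes on a shared key, producing the
# cross product of each key's matching groups. Keys present in only one
# input are dropped (an inner join).
def cross_join(rows1, rows2, key)
  g1 = rows1.group_by { |r| r[key] }
  g2 = rows2.group_by { |r| r[key] }
  (g1.keys & g2.keys).flat_map do |k|
    g1[k].product(g2[k]).map { |a, b| a.merge(b) }
  end
end

rows1 = [{ id: 1, x: 'a' }, { id: 1, x: 'b' }]
rows2 = [{ id: 1, y: 'c' }, { id: 2, y: 'd' }]
joined = cross_join(rows1, rows2, :id)
# joined holds two rows: id 1's y value merged into each of the two
# id-1 rows from rows1; id 2 has no partner and is dropped.
```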
Related
I'm trying to split multiple values in a CSV cell. I can do it right if the multiple values in a cell is found in a single column only, but I'm having difficulty doing it if the multiple values are found in multiple columns. Any guidance will be appreciated.
Here's the sample of the data I'm trying to split:
| Column A | Column B |
|Value1, Value2, Value3 | Value3, Value4, Value5 |
|Value6 | Value7, Value8 |
I'm aiming to have a result like this:
| Column A | Column B |
| Value1 | Value3 |
| Value2 | Value4 |
| Value3 | Value5 |
| Value6 | Value7 |
| Value6 | Value8 |
Here's my code:
require 'csv'
split_a = []
split_b = []
def split_values(value)
value = value.to_s
value = value.gsub('/', ',').gsub('|', ',').gsub(' ', ',')
return value.split(',').map(&:strip)
end
source_csv = kendo_shipment = CSV.read('source_file.csv', headers: true, header_converters: :symbol, liberal_parsing: true).map(&:to_h)
source_csv.each do |source_csv|
column_a = source_csv[:column_a]
column_b = source_csv[:column_b]
column_a = split_values(column_a)
column_a.each do |column_a|
next if column_a.nil? || column_a.empty?
split_a << [
column_a: column_a,
column_b: column_b
]
end
end
split_a.each do |key, split_a|
column_a = key[:column_a]
column_b = key[:column_b]
column_b = split_values(column_b)
column_b.each do |column_b|
next if column_b.nil? || column_b.empty?
split_b << [
column_a,
column_b
]
end
end
There is a special option, col_sep: '|', to define the column separator; it simplifies the code.
require 'csv'
source_csv = CSV.read('source_file.csv', col_sep: '|', headers: true, header_converters: :symbol, liberal_parsing: true)
split_a = []
split_b = []
# I just assign values to separate arrays, because I am not sure what kind of data structure you want to get at the end.
source_csv.each do |row|
split_a += row[:column_a].split(',').map(&:strip)
split_b += row[:column_b].split(',').map(&:strip)
end
# The result
split_a
# => ["Value1", "Value2", "Value3", "Value6"]
split_b
# => ["Value3", "Value4", "Value5", "Value7", "Value8"]
Here is the code:
require 'csv'
source_csv = CSV.read('source_file.csv',
col_sep: '|',
headers: true,
header_converters: lambda {|h| h.strip},
liberal_parsing: true
)
COLUMN_NAMES = ['Column A', 'Column B']
# Column values
columns = COLUMN_NAMES.map do |col_name|
source_csv&.map do |row|
row[col_name]&.split(',')&.map(&:strip)
end&.flatten(1) || []
end
# repeat the last value in case the number of values in the columns differs:
vals_last_id = columns.map {|col| col.count}.max - 1
columns.each do |col|
# replace `col.last` with `nil` on the next line if you want to leave the value blank
col.fill(col.last, col.length..vals_last_id) if col.length <= vals_last_id
end
values = columns[0].zip(*columns[1..-1])
# result:
pp values; 1
# [["Value1", "Value3"],
# ["Value2", "Value4"],
# ["Value3", "Value5"],
# ["Value6", "Value7"],
# ["Value6", "Value8"]]
Generate CSV text, with | (pipe) as the delimiter instead of a comma:
csv = CSV.new( '',
col_sep: '|',
headers: COLUMN_NAMES,
write_headers: true
);
values.each {|row| csv << row};
puts csv.string
# Column A|Column B
# Value1|Value3
# Value2|Value4
# Value3|Value5
# Value6|Value7
# Value6|Value8
Formatted output:
col_val = [COLUMN_NAMES] + values
col_widths = (0..(COLUMN_NAMES.count - 1)).map do |col_id|
col_val.map {|row| row[col_id]&.length || 0}.max
end
fmt = "|" + col_widths.map {|w| " %-#{w}s "}.join('|') + "|\n"
col_val.each {|row| printf fmt % row}; 1
# | Column A | Column B |
# | Value1 | Value3 |
# | Value2 | Value4 |
# | Value3 | Value5 |
# | Value6 | Value7 |
# | Value6 | Value8 |
As you want the output to be a CSV file I would suggest that it look like:
Column A|Column B
Value1|Value3
Value2|Value4
Value3|Value5
Value6|Value7
Value6|Value8
rather than
| Column A | Column B |
| Value1 | Value3 |
| Value2 | Value4 |
| Value3 | Value5 |
| Value6 | Value7 |
| Value6 | Value8 |
Enclosing each line with column separators and adding unnecessary spaces makes it unnecessarily difficult to extract the text of interest from the file.
Let's begin by creating the file you are given, though I have modified your example to make it easier to follow what is happening.
str=<<~_
| Column A | Column B |
| A1, A2, A3 | B1, B2, B3 |
| A4 | B4, B5 |
_
IN_NAME = 'in.csv'
OUT_NAME = 'out.csv'
File.write(IN_NAME, str)
#=> 84
See IO::write.[1]
As the structure of this file resembles a CSV file only vaguely, I think it's easiest to read it using ordinary file I/O methods.
header, *body = IO.foreach(IN_NAME, chomp: true).with_object([]) do |line,arr|
arr << line.gsub(/^\| *| *\|$/, '')
.split(/ *\| */)
.flat_map { |s| s.split(/, +/) }
end
(I provide an explanation of this calculation later.) This results in the following:
header
#=> ["Column A", "Column B"]
body
#=> [["A1", "A2", "A3", "B1", "B2", "B3"], ["A4", "B4", "B5"]]
See IO::foreach, Enumerator#with_object and Enumerable#flat_map. Note that foreach without a block returns an enumerator that I have chained to with_object.
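A quick illustration of that last point, using a throwaway temp file (not part of the original answer):

```ruby
require 'tempfile'

# IO.foreach without a block returns an Enumerator, so it can be
# chained to methods such as with_object, as done above.
lines = Tempfile.create('demo') do |f|
  f.write("a\nb\n")
  f.flush
  IO.foreach(f.path, chomp: true).with_object([]) { |line, arr| arr << line.upcase }
end
p lines #=> ["A", "B"]
```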
At this point it is convenient to compute the number of rows to be written to the output file after the header row.
mx = body.map(&:size).max
#=> 6
Next we need to modify body to make it suitable for writing the output CSV file.
mod_body = Array.new(body.size) do |i|
Array.new(mx) { |j| body[i][j] || body[i].last }
end.transpose
#=> [["A1", "A4"], ["A2", "B4"], ["A3", "B5"], ["B1", "B5"],
# ["B2", "B5"], ["B3", "B5"]]
See Array::new.
It is now a simple matter to write the output CSV file.
require 'csv'
CSV.open(OUT_NAME, "wb", col_sep: '|', headers: header, write_headers: true) do |csv|
mod_body.each { |a| csv << a }
end
See CSV::open.
Lastly, let's look at the file that was written.
puts File.read(OUT_NAME)
displays
Column A|Column B
A1|A4
A2|B4
A3|B5
B1|B5
B2|B5
B3|B5
See IO::read.[1]
To explain the calculations made in
header, *body = IO.foreach(IN_NAME, chomp: true).with_object([]) do |line,arr|
arr << line.gsub(/^\| *| *\|$/, '')
.split(/ *\| */)
.flat_map { |s| s.split(/, +/) }
end
it is easiest to run it with some puts statements inserted.
header, *body = IO.foreach(IN_NAME, chomp: true).with_object([]) do |line,arr|
puts "line = #{line}"
puts "arr = #{arr}"
arr << line.gsub(/^\| *| *\|$/, '')
.tap { |l| puts " line after gsub = #{l}" }
.split(/ *\| */)
.tap { |a| puts " array after split = #{a}" }
.flat_map { |s| s.split(/, +/) }
.tap { |a| puts " array after flat_map = #{a}" }
end
#=> [["Column A", "Column B"], ["A1", "A2", "A3", "B1", "B2", "B3"],
# ["A4", "B4", "B5"]]
The following is displayed.
line = | Column A | Column B |
arr = []
line after gsub = Column A | Column B
array after split = ["Column A", "Column B"]
array after flat_map = ["Column A", "Column B"]
line = | A1, A2, A3 | B1, B2, B3 |
arr = [["Column A", "Column B"]]
line after gsub = A1, A2, A3 | B1, B2, B3
array after split = ["A1, A2, A3", "B1, B2, B3"]
array after flat_map = ["A1", "A2", "A3", "B1", "B2", "B3"]
line = | A4 | B4, B5 |
arr = [["Column A", "Column B"], ["A1", "A2", "A3", "B1", "B2", "B3"]]
line after gsub = A4 | B4, B5
array after split = ["A4", "B4, B5"]
array after flat_map = ["A4", "B4", "B5"]
[1] IO methods are commonly invoked on File. That is permissible since File.superclass #=> IO.
I have two CSV files
file1.csv
username;userid;full_name;follower_count;following_count;media_count;email;category
helloworld;1234;data3;data4;data5;data6;data7;data8
file2.csv
username;owner_id;owner_profile_pic_url;media_url;tagged_brand_username
helloworld;1234;data3b;data4b;data5b
I need the following output file, produced using Ruby, with blank fields if a file1.csv username is not found in file2.csv (e.g. row 2).
output.csv
username;userid;full_name;follower_count;following_count;media_count;email;category;owner_profile_pic_url;media_url;tagged_brand_username
helloworld;1234;data3;data4;data5;data6;data7;data8;data3b;data4b;data5b
helloworld;1234;data3;data4;data5;data6;data7;data8;;;
Currently I'm doing this with an Excel VLOOKUP function.
Thanks
There's a lot to unpack in this script. Essentially you need to read both CSV files into a hash, merge file2 into file1, and write it back to a CSV.
require "csv"
dict = Hash.new
options = { col_sep: ";", headers: true}
# read file1
CSV.foreach("file1.csv", options) do |row|
row = row.to_h
user = "#{row['username']+row['userid']}"
dict[user] = row
end
# read file2
CSV.foreach("file2.csv", options) do |row|
row = row.to_h
user = "#{row['username']+row['owner_id']}"
row.delete('owner_id')
dict[user] = row.merge(dict[user]) if dict[user]
end
# turn hash into rows
rows = [['username','userid','full_name','follower_count','following_count','media_count','email','category','owner_profile_pic_url','media_url','tagged_brand_username']]
dict.each do |key, value|
row = rows[0].map{|h| value[h] || "" }
rows.push(row)
end
# write to csv
File.write("output.csv", rows.map{|r| r.to_csv(col_sep: ";") }.join)
This covers both the case where there is a match and the case where there is no username match in file1.
# file1.csv
username;userid;full_name;follower_count;following_count;media_count;email;category
helloworld;1234;data3;data4;data5;data6;data7;data8
goodbyeworld;5678;data3;data4;data5;data6;data7;data8
# file2.csv
username;owner_id;owner_profile_pic_url;media_url;tagged_brand_username
helloworld;1234;data3b;data4b;data5b
# output.csv
username;userid;full_name;follower_count;following_count;media_count;email;category;owner_profile_pic_url;media_url;tagged_brand_username
helloworld;1234;data3;data4;data5;data6;data7;data8;data3b;data4b;data5b
goodbyeworld;5678;data3;data4;data5;data6;data7;data8;"";"";""
As mentioned, the fact that there are two lines with the same ID in output.csv is very confusing. Next time, add an extra row showing what happens when there's no match. While this is a good question, we have guidelines on how to write an excellent question.
There are two existing CSV input files and we wish to create one CSV output file:
FNAME1 = 'file1.csv'
FNAME2 = 'file2.csv'
FILE_OUT = 'output.csv'
Let's first create the two input files.
File.write(FNAME1, "username;userid;full_name;follower_count;following_count;media_count;email;category\nhelloworld;1234;data3;data4;data5;data6;data7;data8\n")
#=> 136
File.write(FNAME2, "username;owner_id;owner_profile_pic_url;media_url;tagged_brand_username\nhelloworld;1234;data3b;data4b;data5b\n")
#=> 109
Now go through the steps to read those files, manipulate their contents and write the output file.
require 'csv'
First read both input files and save their contents in variables.
def read_csv(fname)
CSV.read(fname, col_sep: ';', headers: true)
end
csv1 = read_csv(FNAME1)
#=> #<CSV::Table mode:col_or_row row_count:2>
csv2 = read_csv(FNAME2)
#=> #<CSV::Table mode:col_or_row row_count:2>
Note:
csv1.to_a
#=> [["username", "userid", "full_name", "follower_count", "following_count",
# "media_count", "email", "category"],
# ["helloworld", "1234", "data3", "data4", "data5",
# "data6", "data7", "data8"]]
csv2.to_a
#=> [["username", "owner_id", "owner_profile_pic_url", "media_url", "tagged_brand_username"],
# ["helloworld", "1234", "data3b", "data4b", "data5b"]]
As you see, these are ordinary arrays, so if we wished we could at this point forget they came from CSV files and use standard Ruby methods to create the desired output file.
Now see if the values of "username" are the same in both files:
username1 = csv1['username'].first
#=> "helloworld"
username2 = csv2['username'].first
#=> "helloworld"
csv1['username'] creates an array of all values in the "username" column. Here that is simply ["helloworld"]; hence .first. Same for csv2, of course.
If username1 == username2 were false, we would perform some action that the question does not specify, then quit. Henceforth, I assume the two usernames are equal.
Read the headers of both files into arrays.
headers1 = csv1.headers
#=> ["username", "userid", "full_name", "follower_count", "following_count",
# "media_count", "email", "category"]
headers2 = csv2.headers
#=> ["username", "owner_id", "owner_profile_pic_url", "media_url",
# "tagged_brand_username"]
The output file is to contain all the columns in headers1 and all the columns in headers2 with the exception of "username" and "owner_id" in headers2, so let's next get rid of those headers in headers2:
headers2 -= ["username", "owner_id"]
#=> ["owner_profile_pic_url", "media_url", "tagged_brand_username"]
Next retrieve the values of the headers in the first file:
values1 = headers1.flat_map { |h| csv1[h] }
#=> ["helloworld", "1234", "data3", "data4", "data5", "data6", "data7", "data8"]
and the values of the remaining headers in the second file:
values2 = headers2.flat_map { |h| csv2[h] }
#=> ["data3b", "data4b", "data5b"]
We will modify values2 below so we need to save its current size:
values2_size = values2.size
#=> 3
The first line in the output file after the header line is to contain the values:
values1 += values2
#=> ["helloworld", "1234", "data3", "data4", "data5", "data6", "data7", "data8",
# "data3b", "data4b", "data5b"]
and the second line is to contain:
values2 = values1 - values2
#=> ["helloworld", "1234", "data3", "data4", "data5", "data6", "data7", "data8"]
plus values2_size #=> 3 empty fields.
We could use CSV methods to write this to file, but there is really no advantage in doing so over using regular file methods. We can simply write the following string to file.
str = [(headers1 + headers2).join(';'),
values1.join(';'),
values2.join(';') + ';' * values2_size
].join("\n")
puts str
username;userid;full_name;follower_count;following_count;media_count;email;category;owner_profile_pic_url;media_url;tagged_brand_username
helloworld;1234;data3;data4;data5;data6;data7;data8;data3b;data4b;data5b
helloworld;1234;data3;data4;data5;data6;data7;data8;;;
Let's do it.
File.write(FILE_OUT, str)
#=> 265
Note that, if a and b are arrays, a += b and a -= b expand to a = a + b and a = a - b, respectively. The CSV methods I've used are documented here.
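A tiny demonstration of those abbreviated assignments (nothing here is specific to CSV):

```ruby
# a += b and a -= b expand to a = a + b and a = a - b.
# Note that Array#- removes every occurrence of each element of b, so
# the subtraction used above works cleanly only when the two value
# sets don't overlap.
a = [1, 2]
b = [2, 3]
a += b
p a #=> [1, 2, 2, 3]
a -= b
p a #=> [1]
```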
I will leave it to the OP to combine the operations I've discussed into a method.
I have a text file:
GLKIIM 08052016 08052016 444-22222222 33333 5675555
ABCDEF 87645123 34211016 333-11111111 22222 5123455
I am using CSV.read to read the text file.
For each line in the text file, I need to extract the column values by the start and end positions. For that I have arrays:
start_pos = [1 8 17 26 30 39 45]
end_pos = [6 15 24 28 37 43 51]
which means that in the text file, positions start_pos[0] to end_pos[0], i.e. 1 to 6, contain the first column's values, GLKIIM and ABCDEF.
The column names are:
column_name = [SOURCE_NAME BATCH_DATE EFFECT_DATE ID ACCOUNT_NO ENTITY ACCOUNT]
I need to create a hash as follows:
{
0=>{"SOURCE_NAME"=>"GLKIIM", "BATCH_DATE"=>"08052016", "EFFECT_DATE"=>"08052016", "ID"=>"444", "ACCOUNT_NO"=>"22222222", "ENTITY"=>"33333", "ACCOUNT"=>"5675555"},
1=>{"SOURCE_NAME"=>"ABCDEF", "BATCH_DATE"=>"87645123", "EFFECT_DATE"=>"34211016", "ID"=>"333", "ACCOUNT_NO"=>"11111111", "ENTITY"=>"22222", "ACCOUNT"=>"5123455"}
}
I cannot use space () as a delimiter to segregate the columns values, I need to use the start and end positions.
input = 'GLKIIM 08052016 08052016 444-22222222 33333 5675555
ABCDEF 87645123 34211016 333-11111111 22222 5123455'
start_pos = %w|1 8 17 26 30 39 45|.map &:to_i
end_pos = %w|6 15 24 28 37 43 51|.map &:to_i
input.split($/).map do |line|
start_pos.zip(end_pos).map { |s, e| line[s-1..e-1] }
end
#⇒ [["GLKIIM", "08052016", "08052016", "444", "22222222", "33333", "5675555"],
# ["ABCDEF", "87645123", "34211016", "333", "11111111", "22222", "5123455"]]
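To obtain the index-keyed hash shown in the question, the parsed rows can be zipped with the column names. A sketch building on arrays like those above:

```ruby
# Combine each parsed row with the column names and key by row index.
column_name = %w[SOURCE_NAME BATCH_DATE EFFECT_DATE ID ACCOUNT_NO ENTITY ACCOUNT]
rows = [%w[GLKIIM 08052016 08052016 444 22222222 33333 5675555],
        %w[ABCDEF 87645123 34211016 333 11111111 22222 5123455]]
result = rows.each_with_index.map { |row, i| [i, column_name.zip(row).to_h] }.to_h
result[0]['SOURCE_NAME'] #=> "GLKIIM"
result[1]['ACCOUNT']     #=> "5123455"
```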
Do not read the file as a Comma-Separated-Values (CSV) file if it isn't one.
Using "speaking code" you could use File.readlines instead:
#!/usr/bin/env ruby
result = ARGF.readlines.map do |line|
[line[0..5], line[7..14], line[16..23], line[24..36]]
end
puts result.inspect
# => [["GLKIIM", "08052016", "08052016", " 444-22222222"], ["ABCDEF", "87645123", "34211016", " 333-11111111"]]
If you save this script you can run it as:
readliner.rb MYFILE.TXT MYFILE2.TXT MYFILE3.TXT
or pipe into it:
cat myfile | readliner.rb
Alternatively use
File.readlines("MYFILE.TXT")
instead of ARGF.readlines in the script.
The use of readlines can bring problems with it, as it reads the whole file into memory to yield an array of lines. See the comments for a small discussion on that topic.
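As a sketch of the memory-friendly alternative mentioned in those comments, each_line streams one line at a time instead of materializing an array of all lines (the sample data is recreated inline and the file name is made up):

```ruby
# Stream the file line by line rather than loading it all with readlines.
File.write('sample.txt',
  "GLKIIM 08052016 08052016 444-22222222 33333 5675555\n" \
  "ABCDEF 87645123 34211016 333-11111111 22222 5123455\n")

result = File.open('sample.txt') do |f|
  f.each_line.map { |line| [line[0..5], line[7..14], line[16..23]] }
end
p result
#=> [["GLKIIM", "08052016", "08052016"], ["ABCDEF", "87645123", "34211016"]]
```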
Let's code-golf a bit, while staying somewhat readable and removing readlines:
#!/usr/bin/env ruby
COLS = { "SOURCE_NAME" => 0..5,
"BATCH_DATE" => 7..14,
"EFFECT_DATE" => 16..23 }
result = ARGF.each_with_index.map do |line, idx|
[idx, COLS.map{|name,range| [name, line[range]] }.to_h ]
end.to_h
puts result.inspect
# => {0=>{"SOURCE_NAME"=>"GLKIIM", "BATCH_DATE"=>"08052016", "EFFECT_DATE"=>"08052016"}, 1=>{"SOURCE_NAME"=>"ABCDEF", "BATCH_DATE"=>"87645123", "EFFECT_DATE"=>"34211016"}}
I used the code below:
column_name = %w[SOURCE_NAME BATCH_DATE EFFECT_DATE ID ACCOUNT_NO ENTITY ACCOUNT]
start_pos = [1, 8, 17, 26, 30, 39, 45]
end_pos = [6, 15, 24, 28, 37, 43, 51]
data_hash = {}
file = File.open('abc.TXT', "r")
i = 0
file.each_line do |line|
temp = {}
(0..column_name.length - 1).each do |iterator|
temp[column_name[iterator]] = line[start_pos[iterator] - 1..end_pos[iterator] - 1]
end
data_hash[i] = temp
i += 1
end
file.close
puts data_hash
Assuming that the file containing the following data is named abc.TXT:
GLKIIM 08052016 08052016 444-22222222 33333 5675555
ABCDEF 87645123 34211016 333-11111111 22222 5123455
I need to sort (ascending order) a table by a field.
This is my table:
vTable=Text::Table.new
vTable.head= ["Value", "Rest", "NumberRun"]
I store my data in the table inside a loop
lp= [value, rest, num]
vTable.rows << lp
After completing my table, it looks like:
33183 | 109 | 5.10.0.200
34870 | 114 | 5.4.1.100
28437 | 93 | 5.6.0.050
31967 | 105 | 5.6.2.500
29942 | 98 | 5.7.2.900
I would like to sort, ascending order, this table by "NumberRun" considering that 5.10.0.200 is bigger than 5.7.2.900.
My idea was to remove "." from "NumberRun", convert it into a number, and then sort. In this way 5.10.x > 5.9.x
You can sort the rows the following way. It uses the Array#sort_by!() method, which was introduced with Ruby 1.9.2.
vTable.rows.sort_by! { |_,_,num| num.split('.').map(&:to_i) }
It also uses destructuring for the block parameters of sort_by!.
For Ruby versions older than 1.9.2 you can sort the rows and reassign them.
vTable.rows = vTable.rows.sort_by { |_,_,num| num.split('.').map(&:to_i) }
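As an alternative sketch (not in the original answer): Ruby ships with Gem::Version, which already compares dotted version strings numerically, so "5.10" sorts after "5.9":

```ruby
# Gem::Version parses each dot-separated segment as a number,
# giving the natural version ordering the question asks for.
nums = %w[5.10.0.200 5.4.1.100 5.6.0.050 5.6.2.500 5.7.2.900]
sorted = nums.sort_by { |v| Gem::Version.new(v) }
p sorted.last  #=> "5.10.0.200"
p sorted.first #=> "5.4.1.100"
```

With the table, that would be vTable.rows.sort_by! { |_, _, num| Gem::Version.new(num) }.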
I have created a NumberRun class for you that can do the comparison.
#number_run.rb
class NumberRun
include Comparable
attr_accessor :run_number
def initialize(str)
@run_number = str
end
def <=> str1
str_split = @run_number.split(".")
str1_split = str1.run_number.split(".")
output = 0
str_split.each_with_index do |num, index|
if((num.to_i <=> str1_split[index].to_i) != 0 && index != (str_split.count - 1))
output = (num.to_i <=> str1_split[index].to_i)
break
elsif index == (str_split.count - 1)
output = (num.to_i <=> str1_split[index].to_i)
end
end
output
end
end
For example:
a = [
NumberRun.new('5.10.0.200'),
NumberRun.new('5.4.1.100'),
NumberRun.new('5.6.0.050'),
NumberRun.new('5.6.2.500'),
NumberRun.new('5.7.2.900')
]
a.sort.map(&:run_number)
Ok, I have a hash which contains several properties. I wanted certain properties of this hash to be added to a CSV file.
Here's what I've written:
require 'csv'
require 'curb'
require 'json'
arr = []
CSV.foreach('test.csv') do | row |
details = []
details << result['results'][0]['formatted_address']
result['results'][0]['address_components'].each do | w |
details << w['short_name']
end
arr << details
end
CSV.open('test_result.csv', 'w') do | csv |
arr.each do | e |
csv << [e]
end
end
All works fine apart from the fact that I get each row like so:
["something", "300", "something", "something", "something", "something", "something", "GB", "something"]
As an array, which I do not want. I want each element of the array in a new column. The problem is that I do not know how many items I'll have; otherwise I could do something like this:
CSV.open('test_result.csv', 'w') do | csv |
arr.each do | e |
csv << [e[0], e[1], ...]
end
end
Any ideas?
Change csv << [e] to csv << e.
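To see why, here is a minimal before/after sketch using CSV.generate:

```ruby
require 'csv'

# csv << expects an array of fields; wrapping the row in another array
# turns the whole row into a single field.
row = ['a', 'b', 'c']
good = CSV.generate { |csv| csv << row }
p good #=> "a,b,c\n"
bad = CSV.generate { |csv| csv << [row] }
# bad is one quoted field containing the array's string representation
```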