I have keys and data as follows, which I need to export to a text file.
keys = %w[ID No time]
Data = ["a", ["1", "2", "3", "4"], 20]
My desired output is:
ID No time
a 1 20
a 2 20
a 3 20
a 4 20
I have attempted the following code so far:
File.open('test1.txt', 'w') {|f| f.write Data.join("\t")}
But it doesn't produce my desired output.
Any direction regarding this would be highly appreciated.
Update:
Just extending the question:
If there are the same keys and a block of data (Data1, Data2, Data3, ...), how can I efficiently concatenate everything and export the total output to a text file?
Data1 = ["a", [1, 2, 3, 4], 20]
Data2 = ["b", [5, 6, 7, 8], 8]
Data3 = ["c", [9, 10, 11, 13], 10]
require 'csv'

keys = %w(ID No time)
data = ['a', [1, 2, 3, 4], 20]
id, numbers, time = data

CSV.open('test1.txt', 'w', headers: keys, write_headers: true, col_sep: "\t") do |csv|
  numbers.each do |number|
    csv << [id, number, time]
  end
end
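With the data above, the resulting test1.txt contains the desired rows, with columns separated by tabs:
ID  No  time
a   1   20
a   2   20
a   3   20
a   4   20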
Without using the CSV library:
keys = %w[ID No time]
data = ["a", ["1", "2", "3", "4"], 20]

File.open('test1.txt', 'w') do |file|
  file.write(keys.join("\t") + "\n")
  data[1].each { |x| file.write("#{data[0]}\t#{x}\t#{data[2]}\n") }
end
For multiple data:
data_array = []
data_array << data1
data_array << data2
data_array << data3
.....
This results in data_array being:
data_array = [['a', [1, 2, 3, 4], 20], ['b', [5, 6, 7, 8], 8], ['c', [9, 10, 11, 13], 10]]
File.open('test1.txt', 'w') do |file|
  file.write(keys.join("\t") + "\n")
  data_array.each do |data|
    data[1].each { |x| file.write("#{data[0]}\t#{x}\t#{data[2]}\n") }
  end
end
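For the three data sets above, test1.txt then contains one header line followed by a tab-separated row per number:
ID  No  time
a   1   20
a   2   20
a   3   20
a   4   20
b   5   8
b   6   8
b   7   8
b   8   8
c   9   10
c   10  10
c   11  10
c   13  10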
I have a 2D array... is there any way to create a CSV::Table with the first row treated as headers, assuming all rows have the same number of columns?
You can create a CSV::Table object with headers from a 2D array using CSV.parse.
First convert your 2D array to a string in which the values in each row are joined by a comma and the rows are joined by newlines, then pass that string to CSV.parse along with the headers: true option.
require 'csv'
sample_array = [
["column1", "column2", "column3"],
["r1c1", "r1c2", "r1c3"],
["r2c1", "r2c2", "r2c3"],
["r3c1", "r3c2", "r3c3"],
]
csv_data = sample_array.map {_1.join(",")}.join("\n")
table = CSV.parse(csv_data, headers: true)
p table
p table.headers
p table[0]
p table[1]
p table[2]
=>
#<CSV::Table mode:col_or_row row_count:4>
["column1", "column2", "column3"]
#<CSV::Row "column1":"r1c1" "column2":"r1c2" "column3":"r1c3">
#<CSV::Row "column1":"r2c1" "column2":"r2c2" "column3":"r2c3">
#<CSV::Row "column1":"r3c1" "column2":"r3c2" "column3":"r3c3">
Below is a basic example of creating a CSV file in Ruby:
require 'csv'

hash = {a: [1, 2, 3], b: [4, 5, 6]}

CSV.open("my_file.csv", "wb") do |csv|
  csv << %w(header1 header2 header3)
  hash.each_value do |array|
    csv << array
  end
end
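Running this writes my_file.csv with the header row followed by one row per hash value:
header1,header2,header3
1,2,3
4,5,6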
@diwanshu-tyagi, will this help resolve your question? If not, please add an example of your input value and I'll update this answer.
Thanks
So, I have a hash with arrays, like this one:
{"name": ["John","Jane","Chris","Mary"], "surname": ["Doe","Doe","Smith","Martins"]}
I want to merge them into an array of hashes, combining the corresponding elements.
The results should be like that:
[{"name"=>"John", "surname"=>"Doe"}, {"name"=>"Jane", "surname"=>"Doe"}, {"name"=>"Chris", "surname"=>"Smith"}, {"name"=>"Mary", "surname"=>"Martins"}]
Any idea how to do that efficiently?
Please, note that the real-world use scenario could contain a variable number of hash keys.
Try this:
h[:name].zip(h[:surname]).map do |name, surname|
  { 'name' => name, 'surname' => surname }
end
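With the hash h from the question, this returns:
#=> [{"name"=>"John", "surname"=>"Doe"}, {"name"=>"Jane", "surname"=>"Doe"}, {"name"=>"Chris", "surname"=>"Smith"}, {"name"=>"Mary", "surname"=>"Martins"}]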
I suggest writing the code to permit arbitrary numbers of attributes. It's no more difficult than assuming there are two (:name and :surname), yet it provides greater flexibility, accommodating, for example, future changes to the number or naming of attributes:
def squish(h)
  keys = h.keys.map(&:to_s)
  h.values.transpose.map { |a| keys.zip(a).to_h }
end
h = { name:    ["John", "Jane", "Chris"],
      surname: ["Doe", "Doe", "Smith"],
      age:     [22, 34, 96] }
squish(h)
#=> [{"name"=>"John", "surname"=>"Doe", "age"=>22},
# {"name"=>"Jane", "surname"=>"Doe", "age"=>34},
# {"name"=>"Chris", "surname"=>"Smith", "age"=>96}]
The steps for the example above are as follows:
b = h.keys
#=> [:name, :surname, :age]
keys = b.map(&:to_s)
#=> ["name", "surname", "age"]
c = h.values
#=> [["John", "Jane", "Chris"], ["Doe", "Doe", "Smith"], [22, 34, 96]]
d = c.transpose
#=> [["John", "Doe", 22], ["Jane", "Doe", 34], ["Chris", "Smith", 96]]
d.map { |a| keys.zip(a).to_h }
#=> [{"name"=>"John", "surname"=>"Doe", "age"=>22},
# {"name"=>"Jane", "surname"=>"Doe", "age"=>34},
# {"name"=>"Chris", "surname"=>"Smith", "age"=>96}]
In the last step the first element of d is passed to map's block and the block variable a is assigned its value.
a = d.first
#=> ["John", "Doe", 22]
e = keys.zip(a)
#=> [["name", "John"], ["surname", "Doe"], ["age", 22]]
e.to_h
#=> {"name"=>"John", "surname"=>"Doe", "age"=>22}
The remaining calculations are similar.
If your dataset is really big, you can consider using Enumerator::Lazy.
This way Ruby will not create intermediate arrays during the calculation.
This is how @Ursus's answer can be improved:
h[:name]
.lazy
.zip(h[:surname])
.map { |name, surname| { 'name' => name, 'surname' => surname } }
.to_a
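Because the chain is lazy, you can also take just the first few combined rows without materializing the rest, for example:
h[:name]
  .lazy
  .zip(h[:surname])
  .map { |name, surname| { 'name' => name, 'surname' => surname } }
  .first(2)
#=> [{"name"=>"John", "surname"=>"Doe"}, {"name"=>"Jane", "surname"=>"Doe"}]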
Another option, for the case where:
[..] the real-world use scenario could contain a variable number of hash keys
h = {
  name: ['John', 'Jane', 'Chris', 'Mary'],
  surname: ['Doe', 'Doe', 'Smith', 'Martins'],
  whathever: [1, 2, 3, 4, 5]
}
You could use Object#then with a splat operator in a one-liner:
h.values.then { |a, *b| a.zip(*b) }.map { |e| h.keys.zip(e).to_h }
#=> [{:name=>"John", :surname=>"Doe", :whathever=>1}, {:name=>"Jane", :surname=>"Doe", :whathever=>2}, {:name=>"Chris", :surname=>"Smith", :whathever=>3}, {:name=>"Mary", :surname=>"Martins", :whathever=>4}]
The first part works this way:
h.values.then { |a, *b| a.zip(*b) }
#=> [["John", "Doe", 1], ["Jane", "Doe", 2], ["Chris", "Smith", 3], ["Mary", "Martins", 4]]
The last part just maps the elements, zipping each with the original keys, then calls Array#to_h to convert each to a hash.
Here I removed the .to_h call to show the intermediate result:
h.values.then { |a, *b| a.zip(*b) }.map { |e| h.keys.zip(e) }
#=> [[[:name, "John"], [:surname, "Doe"], [:whathever, 1]], [[:name, "Jane"], [:surname, "Doe"], [:whathever, 2]], [[:name, "Chris"], [:surname, "Smith"], [:whathever, 3]], [[:name, "Mary"], [:surname, "Martins"], [:whathever, 4]]]
[h[:name], h[:surname]].transpose.map do |name, surname|
  { 'name' => name, 'surname' => surname }
end
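This produces the same array of hashes as the zip-based answer above:
#=> [{"name"=>"John", "surname"=>"Doe"}, {"name"=>"Jane", "surname"=>"Doe"}, {"name"=>"Chris", "surname"=>"Smith"}, {"name"=>"Mary", "surname"=>"Martins"}]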
I want to generate the below Excel sheet:
I tried the below code:
row = [1, 2, [31, 32]]

p = Axlsx::Package.new
wb = p.workbook
wb.add_worksheet(:name => "Sheet1") do |sheet|
  sheet.add_row row
end
But I get the below result:
|column1|column2| column3 |
| 1 | 2 | [31, 32]|
Cell merging in axlsx cannot be performed during row insertion.
What you want to do here is insert row 1 using [1, 2, 31] and row 2 using [nil, nil, 32], and then perform your merging after the insert.
Have a look at the example:
require 'axlsx'

package = Axlsx::Package.new
package.workbook do |workbook|
  workbook.add_worksheet name: 'merged_cells' do |sheet|
    4.times do
      sheet.add_row %w(a b c d e f g)
    end
    sheet.merge_cells "A1:A2"
    sheet.merge_cells "B1:B2"
  end
end
https://github.com/randym/axlsx/blob/master/examples/merge_cells.rb
It will likely get you most of the way there.
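Applied to your row, a minimal sketch of that approach might look like this (the file and sheet names are just examples):
require 'axlsx'

package = Axlsx::Package.new
package.workbook.add_worksheet(name: 'Sheet1') do |sheet|
  sheet.add_row [1, 2, 31]      # first half of the split cell
  sheet.add_row [nil, nil, 32]  # second half; A and B left blank
  sheet.merge_cells 'A1:A2'     # merge the 1 across both rows
  sheet.merge_cells 'B1:B2'     # merge the 2 across both rows
end
package.serialize 'merged.xlsx'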
How can I convert a string of JSON data to a multidimensional array?
# Begin with JSON
json_data = "[
{"id":1,"name":"Don"},
{"id":2,"name":"Bob"},
...
]"
# do something here to convert the JSON data to array of arrays.
# End with multidimensional arrays
array_data = [
  ["id", "name"],
  [1, "Don"],
  [2, "Bob"],
  ...
]
For readability and efficiency, I would do it like this:
require 'json'
json_data = '[{"id":1,"name":"Don"},{"id":2,"name":"Bob"}]'
arr = JSON.parse(json_data)
#=> [{"id"=>1, "name"=>"Don"}, {"id"=>2, "name"=>"Bob"}]
keys = arr.first.keys
#=> ["id", "name"]
arr.map! { |h| h.values_at(*keys) }.unshift(keys)
#=> [["id", "name"], [1, "Don"], [2, "Bob"]]
This should do the trick:
require 'json'
json_data = '[{"id":1,"name":"Don"},{"id":2,"name":"Bob"}]'
JSON.parse(json_data).inject([]) { |result, e| result + [e.keys, e.values] }.uniq
First, we read the JSON into an array with JSON.parse. For each element in the JSON, we collect all keys and values using inject which results in the following array:
[
["id", "name"],
[1, "Don"],
["id", "name"],
[2, "Bob"]
]
To get rid of the repeating key-arrays, we call uniq and are done.
[
["id", "name"],
[1, "Don"],
[2, "Bob"]
]
Adding to @tessi's answer, we can avoid using uniq if we combine with_index and inject.
require 'json'
json_data = '[{"id":1,"name":"Don"},{"id":2,"name":"Bob"}]'
array_data = JSON.parse(json_data).each.with_index.inject([]) { |result, (e, i)| result + (i == 0 ? [e.keys, e.values] : [e.values]) }
puts array_data.inspect
The result is:
[["id", "name"], [1, "Don"], [2, "Bob"]]
I have a big array of hashes, like this
# note that the key order isn't consistent
data = [
  {foo: 1, bar: 2, baz: 3},
  {foo: 11, baz: 33, bar: 22}
]
I want to turn this into a CSV
foo,bar,baz
1,2,3
11,22,33
I am doing so like this:
require 'csv'

columns = [:foo, :bar, :baz]

csv_string = CSV.generate do |csv|
  csv << columns
  data.each do |d|
    row = []
    columns.each do |column|
      row << d[column]
    end
    csv << row
  end
end
Is there a better way to do this? What I'd like to do is something like...
csv_string = CSV.generate do |csv|
  csv << [:foo, :bar, :baz]
  data.each do |row|
    csv.add_row_hash row
  end
end
With the appropriate options passed to generate, you can achieve what you want. Note that you can add the hash directly to the CSV once the headers are set.
c = CSV.generate(:headers => [:foo, :bar, :baz], :write_headers => true) do |csv|
  data.each { |row| csv << row }
end
Output:
foo,bar,baz
1,2,3
11,22,33
If keys can be missing, you need to get all the possible keys first:
keys = data.flat_map(&:keys).uniq
Then map each row using those keys.
csv_string = CSV.generate do |csv|
  csv << keys
  data.each do |row|
    csv << row.values_at(*keys)
  end
end
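To see why the splat in values_at matters, consider hypothetical data where a key is missing from one hash; values_at returns nil for the absent key, which CSV renders as an empty field:
data = [{foo: 1, bar: 2}, {foo: 11, baz: 33}]
keys = data.flat_map(&:keys).uniq  #=> [:foo, :bar, :baz]
# The generated CSV is then:
# foo,bar,baz
# 1,2,
# 11,,33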
My first idea: your data could be used to insert rows into a database table.
If you combine this with the CSV output of a DB table, you have another solution.
Example:
data = [
  {foo: 1, bar: 2, baz: 3},
  {foo: 11, baz: 33, bar: 22},
  {foo: 11, baz: 33, bar: 22, xx: 3}, # additional parameters are no problem
]

# Prepare a DB as a helper
require 'sequel'
DB = Sequel.sqlite
DB.extension(:sequel_3_dataset_methods) # defines to_csv
DB.create_table(:tab){
  add_column :foo
  add_column :bar
  add_column :baz
}
DB[:tab].multi_insert(data) # fill the table

# Output as CSV (the gsub is necessary on Windows; it may not be needed on other OSes)
puts DB[:tab].to_csv.gsub("\r\n","\n")
Disadvantage: you need Sequel.
Advantage: you can adapt the column order quite easily:
puts DB[:tab].select(:bar, :baz).to_csv.gsub("\r\n","\n")