How to group date and time data from an API - Ruby

I am trying to group data I am getting from an API to serve to our front-end application; that is, group the "time" values by "date":
dates: {date1: [time1, time2, timeN], date2: [time1...]}
My input is like this:
{"date"=>"2017-04-04T00:00:00", "time"=>"1754-01-01T13:00:00"}
{"date"=>"2017-04-04T00:00:00", "time"=>"1754-01-01T14:00:00"}
{"date"=>"2017-04-05T00:00:00", "time"=>"1754-01-01T12:00:00"}
{"date"=>"2017-04-05T00:00:00", "time"=>"1754-01-01T13:00:00"}
And my output should be like this:
dates: [{date: "2017-04-04T00:00:00", availableTimes: ["1754-01-01T13:00:00", "1754-01-01T14:00:00"]}, {date: "2017-04-05T00:00:00", availableTimes: ["1754-01-01T12:00:00", "1754-01-01T13:00:00"]}]
I am trying to do this without going into loop madness. I have the following:
dates = Hash[input_data.map{|sd| [sd.date, [""]]}]
This gives me output like this:
{"2017-04-04T00:00:00"=>[""],
"2017-04-05T00:00:00"=>[""],
"2017-04-11T00:00:00"=>[""],
"2017-04-12T00:00:00"=>[""],
"2017-04-18T00:00:00"=>[""],
"2017-04-19T00:00:00"=>[""],
"2017-04-25T00:00:00"=>[""],
"2017-04-26T00:00:00"=>[""]}

Just one possible way:
input.each_with_object(Hash.new { |h, k| h[k] = [] }) do |h, m|
  m[h['date']] << h['time']
end.map { |k, v| { date: k, available_times: v } }
#=> [{:date=>"2017-04-04T00:00:00", :available_times=>["1754-01-01T13:00:00", "1754-01-01T14:00:00"]},
#    {:date=>"2017-04-05T00:00:00", :available_times=>["1754-01-01T12:00:00", "1754-01-01T13:00:00"]}]
Actually, it seems like your data structure would be more concise without the last map, i.e.:
#=> {"2017-04-04T00:00:00"=>["1754-01-01T13:00:00", "1754-01-01T14:00:00"],
# "2017-04-05T00:00:00"=>["1754-01-01T12:00:00", "1754-01-01T13:00:00"]}

You are getting that output because your map call is not actually modifying any data structure. It simply returns a new array of arrays, each containing a date and an array holding an empty string. This isn't going to be done with a single map call.
So, the basic algorithm would be:
Find the array of all unique dates
Loop through the unique dates and use select to get only the date/time pairs for the current date of the iteration
Set up the data in the format you prefer
The following code builds filteredDates in the format you need:
filteredDates = { dates: [] }
uniqueDates = input_data.map { |d| d["date"] }.uniq # an array of only the unique dates
uniqueDates.each do |date|
  dateTimes = input_data.select { |d| d["date"] == date }
  newObj = { date: date }
  newObj[:availableTimes] = dateTimes.map { |d| d["time"] }
  filteredDates[:dates].push(newObj)
end
Here is what filteredDates will look like:
{:dates=>[{:date=>"2017-04-04T00:00:00", :availableTimes=>["1754-01-01T13:00:00", "1754-01-01T14:00:00"]}, {:date=>"2017-04-05T00:00:00", :availableTimes=>["1754-01-01T12:00:00", "1754-01-01T13:00:00"]}]}

There are many ways to do this; one is to create a new hash whose default block assigns an empty array to each new key, then loop over the results and append the times:
dates = Hash.new { |hash, key| hash[key] = [] }
input_data.each{ |sd| dates[sd["date"]] << sd["time"] }
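If the front end needs the array-of-hashes shape from the question, one more pass converts the grouped hash; a small sketch reusing the availableTimes key name from the question:
dates.map { |date, times| { date: date, availableTimes: times } }
#=> [{:date=>"2017-04-04T00:00:00", :availableTimes=>["1754-01-01T13:00:00", "1754-01-01T14:00:00"]},
#    {:date=>"2017-04-05T00:00:00", :availableTimes=>["1754-01-01T12:00:00", "1754-01-01T13:00:00"]}]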

I would use Enumerable#group_by.
dates = [{"date"=>"2017-04-04T00:00:00", "time"=>"1754-01-01T13:00:00"},
{"date"=>"2017-04-04T00:00:00", "time"=>"1754-01-01T14:00:00"},
{"date"=>"2017-04-05T00:00:00", "time"=>"1754-01-01T12:00:00"},
{"date"=>"2017-04-05T00:00:00", "time"=>"1754-01-01T13:00:00"}]
dates.group_by { |g| g["date"] }.
map { |k,v| { date: k, available_times: v.map { |h| h["time"] } } }
#=> [{:date=>"2017-04-04T00:00:00",
# :available_times=>["1754-01-01T13:00:00", "1754-01-01T14:00:00"]},
# {:date=>"2017-04-05T00:00:00",
# :available_times=>["1754-01-01T12:00:00", "1754-01-01T13:00:00"]}]
The first step produces the following intermediate value:
dates.group_by { |g| g["date"] }
#=> {"2017-04-04T00:00:00"=>
# [{"date"=>"2017-04-04T00:00:00", "time"=>"1754-01-01T13:00:00"},
# {"date"=>"2017-04-04T00:00:00", "time"=>"1754-01-01T14:00:00"}],
# "2017-04-05T00:00:00"=>
# [{"date"=>"2017-04-05T00:00:00", "time"=>"1754-01-01T12:00:00"},
# {"date"=>"2017-04-05T00:00:00", "time"=>"1754-01-01T13:00:00"}]}

There are probably more elegant ways, but
results = Hash.new
dates.each do |entry|
  d = entry['date'].split('T').first # keep only the calendar date
  t = entry['time'].split('T').last  # keep only the clock time
  results[d] ||= Array.new
  results[d] << t
end
puts results
# => {"2017-04-04"=>["13:00:00", "14:00:00"], "2017-04-05"=>["12:00:00", "13:00:00"]}

Related

Convert object with array values into array of object

I have params of this kind:
params = { "people" =>
{
"fname" => ['john', 'megan'],
"lname" => ['doe', 'fox']
}
}
I loop through it using this code:
result = []
params["people"].each do |key, values|
  values.each_with_index do |value, i|
    result[i] = {}
    result[i][key.to_sym] = value
  end
end
The problem with my code is that the result only keeps the last key and value:
[
{ lname: 'doe' },
{ lname: 'fox' }
]
I want to convert it into:
[
{fname: 'john', lname: 'doe'},
{fname: 'megan', lname: 'fox'}
]
so that I can loop through them and save them to the database.
Your question has been answered but I'd like to mention an alternative calculation that does not employ indices:
keys, values = params["people"].to_a.transpose
#=> [["fname", "lname"], [["john", "megan"], ["doe", "fox"]]]
keys = keys.map(&:to_sym)
#=> [:fname, :lname]
values.transpose.map { |val| keys.zip(val).to_h }
#=> [{:fname=>"john", :lname=>"doe"},
# {:fname=>"megan", :lname=>"fox"}]
result[i] = {}
The problem is that you're doing this on each loop iteration, which resets the value and deletes any keys you already put there. Instead, only set the value to {} if it doesn't already exist:
result[i] ||= {}
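Applied to the original loop, that one-line change is all that's needed; a sketch of the question's code with only that line altered:
result = []
params["people"].each do |key, values|
  values.each_with_index do |value, i|
    result[i] ||= {}              # keep keys added by earlier passes
    result[i][key.to_sym] = value
  end
end
result
#=> [{:fname=>"john", :lname=>"doe"}, {:fname=>"megan", :lname=>"fox"}]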
In your inner loop, you're resetting the i-th element to an empty hash:
result[i] = {}
So you only end up with the data from the last key-value-pair, i.e. lname.
Instead you can use this to only set it to an empty hash if it doesn't already exist:
result[i] ||= {}
On the first pass it gets set to {}, but after that it just keeps its existing value.
Alternatively, you can also use
result[i] = {} if !result[i]
which may or may not be more performant. I don't know.

How to create a Hash from a nested CSV in Ruby?

I have a CSV in the following format:
name,contacts.0.phone_no,contacts.1.phone_no,codes.0,codes.1
YK,1234,4567,AB001,AK002
As you can see, this is a nested structure. The CSV may contain multiple rows. I would like to convert this into an array of hashes like this:
[
  {
    name: 'YK',
    contacts: [
      { phone_no: '1234' },
      { phone_no: '4567' }
    ],
    codes: ['AB001', 'AK002']
  }
]
The structure uses numbers in the given format to represent arrays. There can be hashes inside arrays. Is there a simple way to do that in Ruby?
The CSV headers are dynamic; they can change. I will have to create the hash on the fly based on the CSV file.
There is a similar Node library, csvtojson, that does this for JavaScript.
Just read and parse it line by line; this assumes the fixed column layout shown above. The arr variable in the code below will hold the array of hashes you need:
arr = []
File.readlines('data.csv').drop(1).each do |line| # drop(1) skips the header row
  fields = line.split(',').map(&:strip)
  hash = { name: fields[0],
           contacts: [{ phone_no: fields[1] }, { phone_no: fields[2] }],
           codes: [fields[3], fields[4]] }
  arr.push(hash)
end
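A variant using the CSV standard library avoids the manual splitting; a minimal sketch under the same fixed-header assumption, with data.csv as a hypothetical path:
require 'csv'

rows = CSV.foreach('data.csv', headers: true).map do |row|
  { name:     row['name'],
    contacts: [{ phone_no: row['contacts.0.phone_no'] },
               { phone_no: row['contacts.1.phone_no'] }],
    codes:    [row['codes.0'], row['codes.1']] }
end
#=> [{:name=>"YK", :contacts=>[{:phone_no=>"1234"}, {:phone_no=>"4567"}], :codes=>["AB001", "AK002"]}]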
Let's first construct a CSV file.
str = <<~END
name,contacts.0.phone_no,contacts.1.phone_no,codes.0,IQ,codes.1
YK,1234,4567,AB001,173,AK002
ER,4321,7654,BA001,81,KA002
END
FName = 't.csv'
File.write(FName, str)
#=> 121
I have constructed a helper method that builds a pattern used to convert each row of the CSV file (after the first row, which contains the headers) into an element (hash) of the desired array.
require 'csv'
def construct_pattern(csv)
  csv.headers.group_by { |col| col[/[^.]+/] }.
    transform_values do |arr|
      case arr.first.count('.')
      when 0
        arr.first
      when 1
        arr
      else
        key = arr.first[/(?<=\d\.).*/]
        arr.map { |v| { key=>v } }
      end
    end
end
For the example being considered, the helper returns:
construct_pattern(csv)
#=> {"name"=>"name",
# "contacts"=>[{"phone_no"=>"contacts.0.phone_no"},
# {"phone_no"=>"contacts.1.phone_no"}],
# "codes"=>["codes.0", "codes.1"],
# "IQ"=>"IQ"}
In the main code below, tacking if pattern.empty? onto this call ensures the pattern is constructed only once.
We may now construct the desired array.
pattern = {}
CSV.foreach(FName, headers: true).map do |csv|
  pattern = construct_pattern(csv) if pattern.empty?
  pattern.each_with_object({}) do |(k,v),h|
    h[k] =
      case v
      when Array
        case v.first
        when Hash
          v.map { |g| g.transform_values { |s| csv[s] } }
        else
          v.map { |s| csv[s] }
        end
      else
        csv[v]
      end
  end
end
#=> [{"name"=>"YK",
# "contacts"=>[{"phone_no"=>"1234"}, {"phone_no"=>"4567"}],
# "codes"=>["AB001", "AK002"],
# "IQ"=>"173"},
# {"name"=>"ER",
# "contacts"=>[{"phone_no"=>"4321"}, {"phone_no"=>"7654"}],
# "codes"=>["BA001", "KA002"],
# "IQ"=>"81"}]
The CSV methods I've used are documented in CSV. See also Enumerable#group_by and Hash#transform_values.

Best way to parse json in Ruby for the format given

For my Rails app, an SQL query result is received in the format below.
@data = JSON.parse(request, symbolize_names: true)[:data]
# @data sample
[{"time":"2017-11-14","A":0,"B":0,"C":0,"D":0,"E":0},
{"time":"2017-11-15","A":0,"B":0,"C":0,"D":0,"E":0},
{"time":"2017-11-16","A":2,"B":1,"C":1,"D":0,"E":1},
{"time":"2017-11-17","A":0,"B":0,"C":1,"D":0,"E":1},
{"time":"2017-11-20","A":0,"B":0,"C":0,"D":0,"E":0},
{"time":"2017-11-21","A":6,"B":17,"C":0,"D":0,"E":1}]
But I want the data in this format:
[{"name":"A","data":{"2017-11-16":2,"2017-11-21":6}},
{"name":"B","data":{"2017-11-16":1,"2017-11-21":17}},
{"name":"C","data":{"2017-11-16":1,"2017-11-17":1}},
{"name":"D","data":{}},
{"name":"E","data":{"2017-11-16":1,"2017-11-17":1,"2017-11-21":1}}]
What is the best way to parse this in Ruby?
I tried using the @data.each method, but it gets lengthy.
I am totally new to Ruby. Any help would be appreciated.
Oddly specific question, but kind of an interesting problem, so I took a stab at it. If this is coming from a SQL database, I feel like the better solution would be to have SQL format the data for you rather than transforming it in Ruby.
@data = JSON.parse(request, symbolize_names: true)[:data]
intermediate = {}
@data.each do |row|
  time = row.delete(:time)
  row.each do |key, val|
    intermediate[key] ||= {data: {}}
    intermediate[key][:data][time] = val if val > 0
  end
end
transformed = []
intermediate.each do |key, val|
  transformed << {name: key.to_s, data: val[:data]}
end
At the end of this, transformed will contain the transformed data. Horrible variable names, and I hate having to do this in two passes, but I got something working and figured I would share in case it is helpful.
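For what it's worth, the same transformation can be done in a single pass with each_with_object; a rough sketch, not the answerer's code, assuming @data holds the freshly parsed rows from above:
grouped = @data.each_with_object(Hash.new { |h, k| h[k] = {} }) do |row, acc|
  time = row[:time]
  row.each do |key, val|
    next if key == :time
    series = acc[key]            # auto-created as {} on first access
    series[time] = val if val > 0
  end
end
transformed = grouped.map { |key, data| { name: key.to_s, data: data } }
#=> [{:name=>"A", :data=>{"2017-11-16"=>2, "2017-11-21"=>6}}, ...]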
I agree with csexton that it looks like a better query to source the data would be the ultimate solution here.
Anyway, here's a solution that's similar to csexton's but uses nested default Hash procs to simplify some of the operations:
def pivot(arr, column)
  results = Hash.new do |hash, key|
    hash[key] = Hash.new(0)
  end
  arr.each do |hash|
    data = hash.dup
    pivot = data.delete(column)
    data.each_pair do |name, value|
      results[name][pivot] += value
    end
  end
  results.map { |name, data| {
    name: name.to_s,
    data: data.delete_if { |_, sum| sum.zero? }
  }}
end
pivot(@data, :time) # => [{:name=>"A", :data=>{"2017-11-16"=>2, "2017-11-21"=>6}}, ..
Here's a more "Ruby-ish" (depending on who you ask) solution:
def pivot(arr, column)
  arr
    .flat_map do |hash|
      hash
        .to_a
        .delete_if { |key, _| key == column }
        .map! { |data| data << hash[column] }
    end
    .group_by(&:shift)
    .map { |name, outer| {
      name: name.to_s,
      data: outer
        .group_by(&:last)
        .transform_values! { |inner| inner.sum(&:first) }
        .delete_if { |_, sum| sum.zero? }
    }}
end
pivot(@data, :time) # => [{:name=>"A", :data=>{"2017-11-16"=>2, "2017-11-21"=>6}}, ..
Quite frankly, I find it pretty unreadable and I wouldn't want to support it. :)
arr = [{"time":"2017-11-14","A":0,"B":0,"C":0,"D":0,"E":0},
{"time":"2017-11-15","A":0,"B":0,"C":0,"D":0,"E":0},
{"time":"2017-11-16","A":2,"B":1,"C":1,"D":0,"E":1},
{"time":"2017-11-17","A":0,"B":0,"C":1,"D":0,"E":1},
{"time":"2017-11-20","A":0,"B":0,"C":0,"D":0,"E":0},
{"time":"2017-11-21","A":6,"B":17,"C":0,"D":0,"E":1}]
(arr.first.keys - [:time]).map do |key|
  { name: key.to_s,
    data: arr.select { |h| h[key] > 0 }.
              each_with_object({}) { |h,g| g.update(h[:time]=>h[key]) } }
end
#=> [{:name=>"A", :data=>{"2017-11-16"=>2, "2017-11-21"=>6}},
# {:name=>"B", :data=>{"2017-11-16"=>1, "2017-11-21"=>17}},
# {:name=>"C", :data=>{"2017-11-16"=>1, "2017-11-17"=>1}},
# {:name=>"D", :data=>{}},
# {:name=>"E", :data=>{"2017-11-16"=>1, "2017-11-17"=>1, "2017-11-21"=>1}}]
Note that
arr.first.keys - [:time]
#=> [:A, :B, :C, :D, :E]

How to compare ruby hash with same key?

I have two hashes like this:
hash1 = Hash.new
hash1["part1"] = "test1"
hash1["part2"] = "test2"
hash1["part3"] = "test3"
hash2 = Hash.new
hash2["part1"] = "test1"
hash2["part2"] = "test2"
hash2["part3"] = "test4"
Expected output: part3
Basically, I want to iterate over both hashes and print out "part3", because the value for "part3" differs between them. I can guarantee that the keys of both hashes will be the same; only the values might differ. How can I print out the keys whose values are different?
I have tried iterating over both hashes at once and comparing values, but that does not seem to give the right solution.
The cool thing about Ruby is that it is so high level that it is often basically English:
Print keys from the first hash if the values in the two hashes are different:
hash1.keys.each { |key| puts key if hash1[key] != hash2[key] }
Select the first hash keys that have different values in the two hashes and print each of them:
hash1.keys.select { |key| hash1[key] != hash2[key] }.each { |key| puts key }
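For the hashes above, both one-liners print part3; the select form additionally returns the differing keys as an array, which is handy if you need them later:
hash1.keys.select { |key| hash1[key] != hash2[key] }
#=> ["part3"]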
Edit: I'll leave this here should it be of interest, but ndn's solution is certainly better.
p hash1.merge(hash2) { |_,v1,v2| v1==v2 }.reject { |_,v| v }.keys
# ["part3"]
hash1["part1"] = "test99"
p hash1.merge(hash2) { |_,v1,v2| v1==v2 }.reject { |_,v| v }.keys
# ["part1", "part3"]
This uses the form of Hash#merge that employs a block (here { |_,v1,v2| v1==v2 }) to determine the values of keys that are present in both hashes being merged. See the doc for an explanation of the three block variables, _, v1 and v2. The first block variable equals the common key. I've used the local variable _ for that, as is customary when the variable is not used in the block calculation.
The steps (for the original hash1):
g = hash1.merge(hash2) { |_,v1,v2| v1==v2 }
#=> {"part1"=>true, "part2"=>true, "part3"=>false}
h = g.reject { |_,v| v }
#=> {"part3"=>false}
h.keys
#=> ["part3"]
The obvious way is ndn's. Here is a solution without blocks: convert both hashes to arrays, join them, subtract the elements that are the same, then convert back to a hash and ask for its keys.
Next time it would be better to include what you have tried so far.
((hash1.to_a + hash2.to_a) - (hash1.to_a & hash2.to_a)).to_h.keys
# ["part3"]

How to update a Ruby nested hash inside a loop?

I'm creating a nested hash in Ruby with REXML and want to update the hash inside a loop.
My code looks like this:
hash = {}
doc.elements.each('//address') do |n|
  a = # ...
  b = # ...
  hash = { "NAME" => { a => { "ADDRESS" => b } } }
end
When I execute the above code, the hash gets overwritten and I only get the info from the last iteration of the loop.
I don't want to do it the following way, as it makes my code verbose:
hash["NAME"] = {}
hash["NAME"][a] = {}
and so on...
So could someone help me out on how to make this work...
Assuming the names are unique:
hash.merge!({"NAME" => { a => { "ADDRESS" => b } } })
You always create a new hash in each iteration, which gets saved in hash.
Just assign the key directly in the existing hash:
hash["NAME"] = { a => { "ADDRESS" => b } }
hash = {"NAME" => {}}
doc.elements.each('//address') do |n|
a = ...
b = ...
hash['NAME'][a] = {'ADDRESS' => b, 'PLACE' => ...}
end
blk = proc { |hash, key| hash[key] = Hash.new(&blk) }
hash = Hash.new(&blk)
doc.elements.each('//address') do |n|
  a = # ...
  b = # ...
  hash["NAME"][a]["ADDRESS"] = b
end
Basically creates a lazily instantiated infinitely recurring hash of hashes.
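To illustrate what that default proc buys you, here is a tiny standalone demonstration; the key and address values are made up for the example:
blk = proc { |hash, key| hash[key] = Hash.new(&blk) }
h = Hash.new(&blk)
h["NAME"]["home"]["ADDRESS"] = "1 Main St"  # intermediate levels spring into existence
h["NAME"]["work"]["ADDRESS"] = "2 High St"
h #=> {"NAME"=>{"home"=>{"ADDRESS"=>"1 Main St"}, "work"=>{"ADDRESS"=>"2 High St"}}}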
EDIT: I just thought of something else that could work. It has only been tested with a couple of very simple hashes, so it may have some problems.
class Hash
  def can_recursively_merge? other
    Hash === other
  end
  def recursive_merge! other
    other.each do |key, value|
      if self.include? key and self[key].can_recursively_merge? value
        self[key].recursive_merge! value
      else
        self[key] = value
      end
    end
    self
  end
end
Then use hash.recursive_merge!("NAME" => { a => { "ADDRESS" => b } }) in your code block (the parentheses matter: with bare braces Ruby would try to parse the hash literal as a block).
This simply recursively merges a hierarchy of hashes, and any other types if you define the recursive_merge! and can_recursively_merge? methods on them.
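A quick usage sketch with made-up values, showing that nested keys accumulate instead of being overwritten:
hash = {}
hash.recursive_merge!("NAME" => { "a1" => { "ADDRESS" => "b1" } })
hash.recursive_merge!("NAME" => { "a2" => { "ADDRESS" => "b2" } })
hash
#=> {"NAME"=>{"a1"=>{"ADDRESS"=>"b1"}, "a2"=>{"ADDRESS"=>"b2"}}}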
