array = [
["sean", "started", "shift", "at", "10:30:00"],
["anna", "started", "shift", "at", "11:00:00"],
["sean", "started", "shift", "at", "10:41:45"],
["anna", "finished", "shift", "at", "11:30:00"],
["sean", "finished", "shift", "at", "10:48:45"],
["sean", "started", "shift", "at", "11:31:00"],
["sean", "finished", "shift", "at", "11:40:00"]
]
A few things to consider:
If you look at Sean's entries, there are two 'started' entries: one at 10:30:00 and another at 10:41:45. The system can record multiple 'started' times but only one 'finished' time. The logic is to pair the first 'started' with the first 'finished' and combine them.
How can I skip the duplicated 'start time' entries (such as Sean's) and get the desired output below?
array = [
["sean", "started", "shift", "at", "10:30:00", "finished", "shift", "at", "10:48:45"],
["anna", "started", "shift", "at", "11:00:00", "finished", "shift", "at", "11:30:00"],
["sean", "started", "shift", "at", "11:31:00", "finished", "shift", "at", "11:40:00"]
]
There's no easy way, is there?
array.group_by(&:first).flat_map do |person, events|
  events.chunk { |_, event_type| event_type }.each_slice(2).map do |(_, (start, _)), (_, (finish, _))|
    # each slice pairs a "started" chunk with the "finished" chunk that follows it;
    # start is the first row of its chunk, so duplicate starts are skipped
    %W(#{person} started shift at #{start[4]} finished shift at #{finish[4]})
  end
end
# => [
# => ["sean", "started", "shift", "at", "10:30:00", "finished", "shift", "at", "10:48:45"],
# => ["sean", "started", "shift", "at", "11:31:00", "finished", "shift", "at", "11:40:00"],
# => ["anna", "started", "shift", "at", "11:00:00", "finished", "shift", "at", "11:30:00"]
# => ]
started = {}
result = []
array.each do |name, *event|
if event[0] == "started" && !started[name]
result << (started[name] = [name] + event)
elsif event[0] == "finished" && started[name]
started[name].concat(event)
started[name] = nil
end
end
result
EDIT: Completely agree with fotanus, BTW.
EDIT2: Forgot to change a variable name. Also, the logic: you take each row in turn. started will contain any records that have started but not yet finished. So, take a row; if it's "started", and only if we don't already know that that person has started, remember the start. If it's "finished", and only if we already know the person has started, finish the record by appending the finish info.
I want to copy all the keys in the JSON except the ones I want to transform.
For example:
Input JSON
{
"ts": "20200420121222",
"name": "broker",
"city": "queensland",
"age": 21,
"gender": "male"
"characteristics": {
"Card Id": "63247354",
"Termination Plan": "paid"
}
}
Output JSON
{
"ts": "20200420121222",
"name": "broker",
"city": "queensland",
"age": 21,
"gender": "male"
"characteristics": {
"card_id": "63247354", // change here
"termination_plan": "paid" // change here
}
}
Is there a better way to change just the above keys and copy the rest?
You can use the "*": "&" construct to include all other fields that have not yet been matched:
[
{
"operation": "shift",
"spec": {
"characteristics": {
"Card Id": "characteristics.card_id",
"Termination Plan": "characteristics.termination_plan"
},
"*": "&"
}
}
]
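For comparison, the same reshaping can be sketched in plain Ruby (this is not part of the JOLT answer; the input hash is assumed to be already parsed, and transform_keys needs Ruby 2.5+):

```ruby
require 'json'

# Plain-Ruby sketch of the same idea: snake_case the keys under
# "characteristics" and copy every other key unchanged.
input = {
  "ts" => "20200420121222",
  "name" => "broker",
  "characteristics" => { "Card Id" => "63247354", "Termination Plan" => "paid" }
}

output = input.merge(
  "characteristics" =>
    input["characteristics"].transform_keys { |k| k.downcase.tr(" ", "_") }
)

puts JSON.pretty_generate(output)
```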
I have incoming JSON rows of data like,
{"signalName": "IU_BATT_ParkAssist", "msgId": 2268, "epoch": 1582322746, "usec": 376360, "vlan": "-1", "msgName": "EBS_Frame12", "vin": "000004", "value": 14.171869, "timestamp": 1582322746376}
I want the output to be modified to produce,
{"IU_BATT_ParkAssist":14.171869, "msgId": 2268, "epoch": 1582322746, "usec": 376360, "vlan": "-1", "msgName": "EBS_Frame12", "vin": "000004", "timestamp": 1582322746376}
The signalName and value keys were combined into one new key/value pair, with the key taken from signalName and the value from value ("IU_BATT_ParkAssist": 14.171869), alongside the other original key/value pairs.
How can I achieve this in Nifi given that the signalName field will be dynamically changing in each row?
Try the spec below:
[
{
"operation": "shift",
"spec": {
"#(1,value)": "#(2,signalName)",
"*": "&"
}
},
{
"operation": "remove",
"spec": {
"signalName": "",
"value": ""
}
}
]
In the shift operation we combine signalName and value.
In the remove operation we strip signalName and value from the JSON data.
Output:
{
"IU_BATT_ParkAssist" : 14.171869,
"msgId" : 2268,
"epoch" : 1582322746,
"usec" : 376360,
"vlan" : "-1",
"msgName" : "EBS_Frame12",
"vin" : "000004",
"timestamp" : 1582322746376
}
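Outside NiFi, the same two steps (promote value under the key named by signalName, then drop both original keys) can be sketched in a few lines of Ruby; promote_signal is a hypothetical helper, not part of the JOLT spec:

```ruby
require 'json'

# Promote "value" under the key named by "signalName", then drop both.
def promote_signal(row)
  row = row.dup
  key = row.delete("signalName")   # e.g. "IU_BATT_ParkAssist"
  row[key] = row.delete("value")   # e.g. 14.171869
  row
end

row = JSON.parse('{"signalName": "IU_BATT_ParkAssist", "msgId": 2268, "value": 14.171869}')
puts promote_signal(row)
```

Because the new key is read from the row itself, this handles a signalName that changes on every record, which is the same property the `#(...)` references give the JOLT spec.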
I am working with an API which accepts some JSON objects (sent as POST requests) and fails others based on certain criteria.
I am trying to compile a "log" of the objects which have failed and of those which have been validated successfully, so I don't have to copy and paste them manually each time. (There are hundreds of objects.)
Basically, if the API returns "false", I want to push that object into one file, and if it returns true, the object goes into another file.
I have read a bunch of documentation and blog posts on the select, detect, and reject enumerable methods, but my problem is quite different from the examples given.
I have written some pseudo-code in my Ruby file below; I think I'm along the right lines, but need a bit of guidance to complete the task:
restaurants = JSON.parse File.read('pretty-minified.json')
restaurants.each do |restaurant|
  create_response = HTTParty.post("https://api.hailoapp.com/business/create",
    {
      :body => restaurant.to_json,
      :headers => { "Content-Type" => "text", "Accept" => "application/x-www-form-urlencoded", "Authorization" => "token #{api_token}" }
    })
  data = create_response.to_hash
  alert = data["valid"]
  if alert == false
    # push restaurant objects which returned false into one file
    # ('a' appends, so earlier objects are not overwritten on each pass)
    File.open('false_objects.json', 'a') do |file|
      file << JSON.pretty_generate(restaurant)
    end
  else
    # push restaurant objects which returned true into another file
    File.open('true_objects.json', 'a') do |file|
      file << JSON.pretty_generate(restaurant)
    end
  end
end
An example of the output (JSON) from the API is as follows:
{"id":"102427","valid":true}
{"valid":false}
The JSON file is basically a huge array of hashes (objects); here is a short excerpt:
[
{
"id": "223078",
"name": "3 South Place",
"phone": "+442032151270",
"email": "3sp#southplacehotel.com",
"website": "",
"location": {
"latitude": 51.5190536,
"longitude": -0.0871038,
"address": {
"line1": "3 South Place",
"line2": "",
"line3": "",
"postcode": "EC2M 2AF",
"city": "London",
"country": "UK"
}
}
},
{
"id": "210071",
"name": "5th View Bar & Food",
"phone": "+442077347869",
"email": "waterstones.piccadilly#elior.com",
"website": "http://www.5thview.com",
"location": {
"latitude": 51.5089594,
"longitude": -0.1359897,
"address": {
"line1": "Waterstone's Piccadilly",
"line2": "203-205 Piccadilly",
"line3": "",
"postcode": "W1J 9HA",
"city": "London",
"country": "UK"
}
}
},
{
"id": "239971",
"name": "65 & King",
"phone": "+442072292233",
"email": "hello#65king.com",
"website": "http://www.65king.com/",
"location": {
"latitude": 51.5152533,
"longitude": -0.1916538,
"address": {
"line1": "65 Westbourne Grove",
"line2": "",
"line3": "",
"postcode": "W2 4UJ",
"city": "London",
"country": "UK"
}
}
}
]
Assuming you want to filter by emails ending with elior.com (this condition can easily be changed):
NB! The data above looks like a JavaScript var; it's not a valid Ruby object. I assume you just got it from somewhere as a string, which is why JSON is used:
require 'json'
array = JSON.parse(restaurants) # restaurants is the raw string: '[{....... as you received it
result = array.group_by do |e|
# more sophisticated condition goes here
e['email'] =~ /elior\.com$/ ? true : false
end
File.open('false_objects.json', 'w') do |file|
file << JSON.pretty_generate(result[false])
end
File.open('true_objects.json', 'w') do |file|
file << JSON.pretty_generate(result[true])
end
There is a hash in result, containing two elements:
#⇒ {
#     true  => [ ...valid entries... ],
#     false => [ ...invalid entries... ]
#   }
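Since the split in the original question actually depends on the API response rather than on the data itself, Enumerable#partition is another natural fit. A minimal sketch, where valid_via_api? is a hypothetical stand-in for the real HTTParty call (here it just reuses the email criterion from above):

```ruby
require 'json'

# Hypothetical predicate standing in for the real API call:
# returns true when the API would accept the object.
def valid_via_api?(restaurant)
  restaurant["email"].to_s =~ /elior\.com$/ ? true : false
end

restaurants = [
  { "id" => "223078", "email" => "3sp#southplacehotel.com" },
  { "id" => "210071", "email" => "waterstones.piccadilly#elior.com" }
]

# partition returns [matching, non-matching] in one pass
valid, invalid = restaurants.partition { |r| valid_via_api?(r) }

File.write('true_objects.json',  JSON.pretty_generate(valid))
File.write('false_objects.json', JSON.pretty_generate(invalid))
```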
Hi, I convert a PDF to a txt file in Ruby 1.9.3.
Here is part of the txt file:
[["Rate", "Card", "February", "29,", "2012"]]
[["Termination", "Color", "Test", "No", "Rate", "Currency", "Notes"]]
[["x", "A", "CAMEL", "56731973573", "$", "0.1400", "USD", "30/45/100%"]]
["y", "A", "CARDINAL", "56731972501", "$", "0.1400", "USD", "30/45/100%"]]
[["z", "A", "CARNELIAN", "56731971654", "$", "0.1400", "USD", "30/45/100%"]]
.....
....
[["Rate", "Card", "February", "29,", "2012"]]
[["Termination", "Color", "Test", "No", "Rate", "Currency", "Notes"]]
I store every line in a different array, but the problem is that I don't want to read the first two lines, which appear many times in my txt file because they are the header on every page of the PDF. Any idea how to do that? Thanks!
You can read the file into an array and reject the lines you do not need:
rejected = [
  '[["Rate", "Card", "February", "29,", "2012"]]',
  '[["Termination", "Color", "Test", "No", "Rate", "Currency", "Notes"]]',
]
# chomp strips the trailing newline so the exact-match comparison works
lines = File.readlines('/path/to/file').reject { |line| rejected.include?(line.chomp) }
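If the header lines ever vary slightly (a different date on another rate card, say), matching on a pattern instead of the exact strings is more robust. A sketch, assuming the headers always begin with [["Rate" or [["Termination" (inline data stands in for the file):

```ruby
# Reject any line that looks like one of the two page headers.
header = /\A\[\["(Rate|Termination)",/

lines = [
  '[["Rate", "Card", "February", "29,", "2012"]]',
  '[["x", "A", "CAMEL", "56731973573", "$", "0.1400", "USD", "30/45/100%"]]',
  '[["Termination", "Color", "Test", "No", "Rate", "Currency", "Notes"]]'
].reject { |line| line =~ header }
```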
I have two json files that I'm trying to merge. The JSONs have different formatting (see below). I'd like to merge records, so [0] from file one and [0] from file two would become one record [0] in the new merged file.
The first JSON (file_a.json), appears like so:
{
"query": {
"count": 4,
"created": "2012-11-21T23:07:00Z",
"lang": "en-US",
"results": {
"quote": [
{
"Name": "Bill",
"Age": "46",
"Number": "3.55"
},
{
"Name": "Jane",
"Age": "33",
"Number": nil
},
{
"Name": "Jack",
"Age": "55",
"Number": nil
},
{
"Name": "Xavier",
"Age": nil,
"Number": "153353535"
}
]
}
}
}
The second JSON (file_b.json) appears like so:
[
{
"Number2": 25253,
"Number3": 435574,
"NAME": "Bill"
},
{
"Number2": 345353,
"Number3": 5566,
"NAME": "Jane"
},
{
"Number2": 56756,
"Number3": 232435,
"NAME": "Jack"
},
{
"Number2": 7457,
"Number3": 45425,
"NAME": "Xavier"
}
]
None of the keys are the same in both JSONs. (Well, "Name" is a key in both, but the first file uses "Name" and the second uses "NAME"; I keep both just so I can check that the merge works correctly, so I want both "Name" and "NAME" in the final JSON.) The first record in the first file matches the first record in the second file, and so on.
So far, I tried merging like this:
merged = %w[a b].inject([]) { |m,f| m << JSON.parse(File.read("file_#{f}.json")) }.flatten
But this of course merged them, just not how I wanted (they are merged successively, and because of the different formatting it gets quite ugly).
I also tried merging like this:
a = JSON.parse(File.read("file_a.json"))
b = JSON.parse(File.read("file_b.json"))
merged = a.zip(b)
This came closer, but it was still not correct, and the formatting was still horrendous.
In the end, what I want is this (formatting of second JSON - headers from first JSON can be junked):
[
{
"Name": "Bill",
"Age": 46,
"Number": 3.55,
"Number2": 25253,
"Number3": 435574,
"NAME": "Bill"
},
{
"Name": "Jane",
"Age": 33,
"Number": nil,
"Number2": 345353,
"Number3": 5566,
"NAME": "Jane"
},
{
"Name": "Jack",
"Age": 55,
"Number": nil,
"Number2": 56756,
"Number3": 232435,
"NAME": "Jack"
},
{
"Name": "Xavier",
"Age": nil,
"Number": 153353535,
"Number2": 7457,
"Number3": 45425,
"NAME": "Xavier"
}
]
Any help is appreciated. Thanks a lot.
Hello, it seems the format changed from last time :)
UPDATE: a more readable version that also converts the corresponding values to integers/floats:
require 'json'
require 'ap' # awesome_print, used for the formatted output below

# dig out the array of quotes; fall back to [] if the structure is missing
a = JSON.parse(File.read('./a.json'))['query']['results']['quote'] rescue []
b = JSON.parse(File.read('./b.json'))

final = []
a.each_with_index do |ah, i|
  unless bh = b[i]
    bh = {}
    puts "seems b has no #{i} key, merging skipped"
  end
  # merge the paired hashes, converting numeric-looking strings on the way
  final << ah.merge(bh).inject({}) do |f, (k, v)|
    if v.is_a?(String)
      if v =~ /\A\d+\.\d+\Z/
        v = v.to_f
      elsif v =~ /\A\d+\Z/
        v = v.to_i
      end
    end
    f.update k => v
  end
end

ap final
will display:
[
[0] {
"Name" => "Bill",
"Age" => 46,
"Number" => 3.55,
"Number2" => 25253,
"Number3" => 435574,
"NAME" => "Bill"
},
[1] {
"Name" => "Jane",
"Age" => 33,
"Number" => nil,
"Number2" => 345353,
"Number3" => 5566,
"NAME" => "Jane"
},
[2] {
"Name" => "Jack",
"Age" => 55,
"Number" => nil,
"Number2" => 56756,
"Number3" => 232435,
"NAME" => "Jack"
},
[3] {
"Name" => "Xavier",
"Age" => nil,
"Number" => 153353535,
"Number2" => 7457,
"Number3" => 45425,
"NAME" => "Xavier"
}
]
Here is a working demo
Btw, your json is a bit wrong in both files.
See the fixed versions here and here
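For reference, the core pairing in the answer above (without the string-to-number conversion) reduces to a zip-and-merge; a minimal sketch with inline data:

```ruby
# Pair records by index and merge them key-wise; b[i] may be nil
# when the arrays differ in length, hence the (bh || {}).
a = [{ "Name" => "Bill", "Age" => "46" }, { "Name" => "Jane", "Age" => "33" }]
b = [{ "Number2" => 25253, "NAME" => "Bill" }, { "Number2" => 345353, "NAME" => "Jane" }]

merged = a.zip(b).map { |ah, bh| ah.merge(bh || {}) }
```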