I'm looking for a method in pure Ruby of taking a timezone string such as "US/Eastern" and using it with a string representation of a timestamp to convert into a timestamp with timezone.
All I've found so far is strptime which supports a short timezone name like "EST" or a timezone offset like "-0500", but not a full timezone string.
I need to be able to run this as part of a Logstash ruby filter. I have some JSON that contains timestamps with no timezone that looks like this:
{
  "event": {
    "created": "2021-02-15_11-26-29"
  },
  "Accounts": [
    {
      "Name": "operator",
      "Caption": "SERVER\\operator",
      "Domain": "SERVER",
      "PasswordChangeable": "False",
      "PasswordRequired": "True",
      "PasswordExpires": "False",
      "Disabled": "False",
      "Lockout": "False",
      "LocalAccount": "True",
      "FullName": "operator",
      "Status": "OK",
      "LastLogon": "07/08/2020 2:14:13 PM"
    },
    ...
  ]
}
For the event.created field I can just use a date filter:
date {
  match => [ "[event][created]", "yyyy-MM-dd_HH-mm-ss" ]
  timezone => "${TIMEZONE}"
  target => "[event][created]"
}
Where ${TIMEZONE} is an environment variable holding the full timezone name, e.g. "US/Eastern". But for the Accounts.LastLogon field I can't use a date filter, because it resides in a list of variable length, so I have to resort to a ruby filter. The closest I was able to come was this:
ruby {
  code => 'event.get("[Accounts]").each_index {|x|
    tz = "-0500"
    last_logon_str = event.get("[Accounts][#{x}][LastLogon]")
    last_logon = DateTime.strptime(last_logon_str + " " + tz, "%m/%d/%Y %I:%M:%S %p %z")
    event.set("[users][#{x}][last_logon]", last_logon.strftime("%Y-%m-%dT%H:%M:%S%z"))
  }'
}
But of course this is using a hardcoded timezone offset and not the variable containing the full name.
The docs I looked at for the Time object at https://ruby-doc.org/core-2.6/Time.html stated that a Time object can be created using a timezone:
Or a timezone object:
tz = timezone("Europe/Athens") # Eastern European Time, UTC+2
Time.new(2002, 10, 31, 2, 2, 2, tz) #=> 2002-10-31 02:02:02 +0200
Which I could then use to extract the offset, but I couldn't find a reference to timezone anywhere.
What's the best way to handle this?
timezone comes from the TZInfo::Timezone class, provided by the tzinfo gem, so you need to require it. The following code
ruby {
  code => '
    require "tzinfo"
    tz = TZInfo::Timezone.get(ENV["TIMEZONE"])
    event.set("offset", tz.observed_utc_offset)
  '
}
gets me "offset" => -18000 when $TIMEZONE is "US/Eastern".
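If you'd rather skip the offset arithmetic entirely, here is a pure-stdlib sketch of an alternative: on Unix-like systems, Time honours the TZ environment variable, so you can parse the LastLogon strings directly in the named zone with DST handled for you. This is an assumption about the host, not part of the original filter, and parse_in_zone is a hypothetical helper name:

```ruby
require 'time'

# Hypothetical helper (not from the original filter): parse a local-time
# string in a named zone by temporarily pointing TZ at it. This relies on
# the C library consulting TZ, so it assumes a Unix-like host with tzdata.
def parse_in_zone(str, fmt, zone)
  old_tz = ENV["TZ"]
  ENV["TZ"] = zone
  Time.strptime(str, fmt)
ensure
  ENV["TZ"] = old_tz
end

t = parse_in_zone("07/08/2020 2:14:13 PM", "%m/%d/%Y %I:%M:%S %p", "US/Eastern")
puts t.strftime("%Y-%m-%dT%H:%M:%S%z")  # July is EDT, so the offset is -0400
```

The ensure block restores the original TZ so the rest of the pipeline is unaffected; the Time object keeps the offset it was created with.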
I'm new to Ruby, so please excuse any ignorance I may bear. I was wondering how to parse a JSON response for every value belonging to a specific key. The response is in this format:
[
  {
    "id": 10008,
    "name": "vpop-fms-inventory-ws-client",
    "msr": [
      {
        "key": "blocker_violations",
        "val": 0,
        "frmt_val": "0"
      }
    ]
  },
  {
    "id": 10422,
    "name": "websample Maven Webapp",
    "msr": [
      {
        "key": "blocker_violations",
        "val": 0,
        "frmt_val": "0"
      }...
There are some other entries in the response, but for the sake of not having a huge block of code, I've shortened it. The code I've written is:
require 'uri'
require 'net/http'
require 'json'
url = URI({my url})
http = Net::HTTP.new(url.host, url.port)
request = Net::HTTP::Get.new(url)
request["cache-control"] = 'no-cache'
request["postman-token"] = '69430784-307c-ea1f-a488-a96cdc39e504'
response = http.request(request)
parsed = response.read_body
h = JSON.parse(parsed)
num = h["msr"].find {|h1| h1['key']=='blocker_violations'}['val']
I am essentially looking for the val for each blocker violation (the JSON response contains hundreds of entries, so I'm expecting hundreds of blocker values). I had hoped num would contain an array of all the 'val's. If you have any insight into this, it would be of great help!
EDIT! I'm getting a console output of
scheduler caught exception:
no implicit conversion of String into Integer
C:/dashing/test_board/jobs/issue_types.rb:20:in `[]'
C:/dashing/test_board/jobs/issue_types.rb:20:in `block (2 levels) in <top (required)>'
C:/dashing/test_board/jobs/issue_types.rb:20:in `select'
I suspect that might have too much to do with the question, but some help is appreciated!
You need to do 2 things. Firstly, you're being returned an array and you're only interested in a subset of the elements. This is a common pattern that is solved by a filter, or select in Ruby. Secondly, the condition by which you wish to select these elements also depends on the values of another array, which you need to filter using a different technique. You could attempt it like this:
res = [
  {
    "id": 10008,
    "name": "vpop-fms-inventory-ws-client",
    "msr": [
      {
        "key": "blocker_violations",
        "val": 123,
        "frmt_val": "0"
      }
    ]
  },
  {
    "id": 10008,
    "name": "vpop-fms-inventory-ws-client",
    "msr": [
      {
        "key": "safe",
        "val": 0,
        "frmt_val": "0"
      }
    ]
  }
]
# define a lambda function that we will use later on to filter out the blocker violations
violation = -> (h) { h[:key] == 'blocker_violations' }
# Select only those objects who contain any msr with a key of blocker_violations
violations = res.select { |h1| h1[:msr].any?(&violation) }
# Which msr value should we take? Here I just take the first.
values = violations.map {|v| v[:msr].first[:val] }
The problem you may have with this code is that msr is an array. So theoretically, you could end up with 2 objects in msr, one that is a blocker violation and one that is not. You have to decide how you handle that. In my example, I include an object if at least one of its msr entries is a blocker violation, through the use of any?. However, you may wish to only include objects whose msr entries are all blocker violations. You can do this via the all? method.
The second problem you then face is, which value to return? If there are multiple blocker violations in the msr object, which value do you choose? I just took the first one - but this might not work for you.
Depending on your requirements, my example might work or you might need to adapt it.
Also, if you've never come across the lambda syntax before, you can read more about it here
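As a footnote: if all you're after is the flat list of vals across every entry, a flat_map over the parsed array is a compact alternative to select-then-map. A minimal sketch with inline sample data (string keys, as JSON.parse produces by default):

```ruby
require 'json'

# Inline stand-in for the parsed API response (two entries, shortened).
json = '[{"id":1,"msr":[{"key":"blocker_violations","val":3}]},
         {"id":2,"msr":[{"key":"other","val":9}]}]'
parsed = JSON.parse(json)

# Collect the "val" of every blocker_violations entry across all objects;
# entries without a matching key simply contribute nothing.
vals = parsed.flat_map do |entry|
  entry["msr"].select { |m| m["key"] == "blocker_violations" }
              .map { |m| m["val"] }
end
p vals  # => [3]
```

Unlike the select/any? approach, this sidesteps the "which msr value to take?" question by returning every matching value.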
Here is my JSON file:
[{
"name": "chetan",
"age": 23,
"hobby": ["cricket", "football"]
}, {
"name": "raj",
"age": 24,
"hobby": ["cricket", "golf"]
}]
Here is the Go code I tried, which didn't work as expected.
id := "ket"
c.EnsureIndexKey("hobby")
err = c.Find(bson.M{"$hobby": bson.M{"$search": id}}).All(&result)
It gives error:
$hobby exit status 1
From $search I'm assuming you're trying to use a text index/search, but in your case that wouldn't work: text indexes don't support partial matches. You can still use a regex to find those documents, but performance-wise it probably wouldn't be a wise choice, unless you can utilize the index, which in your case wouldn't happen.
Still, you could achieve what you want with:
id := "ket"
regex := bson.M{"$regex": bson.RegEx{Pattern: id}}
err = c.Find(bson.M{"hobby": regex}).All(&result)
I have an array of objects I am sending to a REST API to receive back information on those objects. To do this I am using RestClient with the following lines to send the call and parse the response.
response_raw = RestClient.get "http://#{re_host}:#{re_port}/reachengine/api/inventory/search?rql=fCSAssetNumber=#{fcs_id_num}%20size%20#{size}%20&apiKey=#{api_key}", headers
response_json = Crack::JSON.parse(response_raw)
response_json['results'].each do |result|
For the first 20+ records I perform this action on, everything works fine. Then I start to get a NoMethodError: undefined method `[]' for nil:NilClass
When I run the code step by step in IRB, what I see in the results is very strange
result = response_json['results'][0]
>=> {"name"=>"Publicaciòn_Listin_Diario.png", " id"=>" 294290", " dateCreated"=>" 2015-09-20T20:35:06.000+0000", " dateUpdated"=>" 2015-12-23T19:33:13.000+0000", " systemKeywords"=>" Publicaciòn_Listin_Diario.png Image ", "t humbnailId"=>"4 24725", "m etadata"=>{" sourceFilePath"=>"/ Volumes/ONLINE_DAM/MEDIA/RAW_GRAPHICS/1307001_August_2013_KOS/Publicaciòn_Listin_Diario.png", "pa MdCustAgency_picklist_sortable"=>"nul l", "th umbnailAssetFlag"=>"fal se", "re storeKey"=>"nul l", "ar chiveStatus_picklist_sortable"=>"nul l", "fC SAssetNumber"=>"18 2725", "fC SMetadataSet"=>"Ra w Graphic", "cu stKeywords"=>"Do minican Republic Cycling Team, 1307001 August 2013 KOS Kickoff show", "cu stAssetStatus_picklist_sortable"=>"nul l", "se archableFlag"=>"fal se", "as setType"=>"Im age", "pa MdCustHerbalifeJobNumber"=>"13 07001", "da teCreated"=>"20 15-09-20T20:35:06", "da teLocked"=>"nul l", "uu id"=>"30 9d9bb3-6935-4ab6-a04a-ef7264132bc6", "ve rsionFlag"=>"nul l", "ag ency_picklist_sortable"=>"nul l", "pr oducer_picklist_sortable"=>"nul l", "tr uncatedFlag"=>"fal se", "cu stDescription"=>"*R aw Graphics for 1307001_August_2013_KOS"}, "in ventoryKey"=>"im age"}
Usually, with this response, I can run
result['metadata']['fCSAssetNumber']
However, because of the random spaces, this fails with a "NoMethodError: undefined method `[]' for nil:NilClass", because instead of the string being 'metadata' it is actually 'm etadata'.
What's really strange about all of this, and why this looks like a Ruby issue rather than the API's, is that the same exact call made via the Postman REST client in Chrome returns this result:
>{
"results": [
{
"name": "Publicaciòn_Listin_Diario.png",
"id": "294290",
"dateCreated": "2015-09-20T20:35:06.000+0000",
"dateUpdated": "2015-12-23T19:33:13.000+0000",
"systemKeywords": "Publicaciòn_Listin_Diario.png Image ",
"thumbnailId": "424725",
"metadata": {
"sourceFilePath": "/Volumes/ONLINE_DAM/MEDIA/RAW_GRAPHICS/1307001_August_2013_KOS/Publicaciòn_Listin_Diario.png",
"paMdCustAgency_picklist_sortable": null,
"thumbnailAssetFlag": false,
"restoreKey": null,
"archiveStatus_picklist_sortable": null,
"fCSAssetNumber": "182725",
"fCSMetadataSet": "Raw Graphic",
"custKeywords": "Dominican Republic Cycling Team, 1307001 August 2013 KOS Kickoff show",
"custAssetStatus_picklist_sortable": null,
"searchableFlag": false,
"assetType": "Image",
"paMdCustHerbalifeJobNumber": "1307001",
"dateCreated": "2015-09-20T20:35:06",
"dateLocked": null,
"uuid": "309d9bb3-6935-4ab6-a04a-ef7264132bc6",
"versionFlag": null,
"agency_picklist_sortable": null,
"producer_picklist_sortable": null,
"truncatedFlag": false,
"custDescription": "*Raw Graphics for 1307001_August_2013_KOS"
},
"inventoryKey": "image"
}
],
"total": "1"
}
As you can see above, when Postman runs the same exact call there is no issue with the response, but when Ruby runs the call there is. Also note that this doesn't happen all of the time.
Below is a sample response from the same exact ruby call that actually worked.
result = response_json['results'][0]
=> {"name"=>"Marco_1er_dia.png", "id"=>"294284", "dateCreated"=>"2015-09-20T20:34:54.000+0000", "dateUpdated"=>"2015-12-23T19:33:10.000+0000", "systemKeywords"=>"Marco_1er_dia.png Image ", "thumbnailId"=>"424716", "metadata"=>{"sourceFilePath"=>"/Volumes/ONLINE_DAM/MEDIA/RAW_GRAPHICS/1307001_August_2013_KOS/Marco_1er_dia.png", "paMdCustAgency_picklist_sortable"=>nil, "collectionMemberships"=>"320 321", "thumbnailAssetFlag"=>false, "restoreKey"=>nil, "fCSMetadataSet"=>"Raw Graphic", "fCSAssetNumber"=>"182722", "archiveStatus_picklist_sortable"=>nil, "custAssetStatus_picklist_sortable"=>nil, "custKeywords"=>"1307001 August 2013 KOS Kickoff show, Dominican Republic Cycling Team", "searchableFlag"=>false, "assetType"=>"Image", "paMdCustHerbalifeJobNumber"=>"1307001", "dateCreated"=>"2015-09-20T20:34:54", "dateLocked"=>nil, "uuid"=>"b5e55c14-b94e-4629-9e2a-61a2dc0876f6", "versionFlag"=>nil, "fCSProductionStatus_picklist_sortable"=>nil, "agency_picklist_sortable"=>nil, "producer_picklist_sortable"=>nil, "truncatedFlag"=>false, "custDescription"=>"*Raw Graphics for 1307001_August_2013_KOS"}, "inventoryKey"=>"image"}
Notice how the response above has no spacing issue? The only glaring difference I can see here is that there is a special character in use in the filename: ò
Is there something specific I need to do for RestClient to work with this?
Anyone have any idea how this can be fixed?
The issue was related to the 'crack' gem. I've used this for a long time. Apparently, as of Ruby 1.9, the parse method has been available on the standard JSON class. When I switched to using this by changing
response_json = Crack::JSON.parse(response_raw)
to
response_json = JSON.parse(response_raw)
The issue went away.
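A minimal sketch of the fix in isolation (the payload here is a shortened stand-in for the real API response, not the actual data): with the stdlib parser, keys come through exactly as sent.

```ruby
require 'json'

# Shortened stand-in for the API payload from the question.
raw = '{"results":[{"name":"Publicaciòn_Listin_Diario.png",' \
      '"metadata":{"fCSAssetNumber":"182725"}}]}'

# The stdlib JSON class (available since Ruby 1.9) replaces Crack::JSON here.
response_json = JSON.parse(raw)
response_json['results'].each do |result|
  p result['metadata']['fCSAssetNumber']  # => "182725", keys intact
end
```

Note that non-ASCII characters such as ò pass through unharmed, which was the apparent trigger for the mangled keys with the crack gem.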
I have a JSON like this:
[
{
"Low": 8.63,
"Volume": 14211900,
"Date": "2012-10-26",
"High": 8.79,
"Close": 8.65,
"Adj Close": 8.65,
"Open": 8.7
},
{
"Low": 8.65,
"Volume": 12167500,
"Date": "2012-10-25",
"High": 8.81,
"Close": 8.73,
"Adj Close": 8.73,
"Open": 8.76
},
{
"Low": 8.68,
"Volume": 20239700,
"Date": "2012-10-24",
"High": 8.92,
"Close": 8.7,
"Adj Close": 8.7,
"Open": 8.85
}
]
And I have calculated a simple moving average of the closing prices for each day and stored it in a variable called sma9day. I'd like to join the moving average values with the original JSON, so I get something like this for each day:
{
"Low": 8.68,
"Volume": 20239700,
"Date": "2012-10-24",
"High": 8.92,
"Close": 8.7,
"Adj Close": 8.7,
"Open": 8.85,
"SMA9": 8.92
}
With the sma9day variable I did this:
h = { "SMA9" => sma9day }
sma9json = h.to_json
puts sma9json
which outputs this:
{"SMA9":[8.92,8.93,8.93]}
How do I put it in a format compatible with the JSON and join the two? I'll need to "match/join" from the top down, as the last 8 records in the JSON will not have 9-day moving average values (in these cases I'd still like the key (SMA9) to be there, but have nil or zero as the value).
Thank you.
LATEST UPDATE:
I now have this, which gets me very close; however, it puts the entire array of averages into the SMA9 field of every record in the JSON...
require 'json'
require 'simple_statistics'

json = File.read("test.json")
quotes = JSON.parse(json)

# Calculations
def sma9day(quotes, i)
  close = quotes.collect {|quote| quote['Close']}
  sma9day = close.each_cons(9).collect {|close| close.mean}
end

quotes = quotes.each_with_index do |day, i|
  day['SMA9'] = sma9day(quotes, i)
end
p quotes[0]
=> {"Low"=>8.63, "Volume"=>14211900, "Date"=>"2012-10-26", "High"=>8.79, "Close"=>8.65, "Adj Close"=>8.65, "Open"=>8.7, "SMA9"=>[8.922222222222222, 8.93888888888889, 8.934444444444445, 8.94222222222222, 8.934444444444445, 8.937777777777777, 8.95, 8.936666666666667, 8.924444444444443, 8.906666666666666, 8.912222222222221, 8.936666666666666, 8.946666666666665, 8.977777777777778, 8.95111111111111, 8.92, 8.916666666666666]}
When I try to do sma9day.round(2) before the end of the calculations, it gives a method error (presumably because of the array?), and when I did sma9day[0].round(2), it does correctly round, but every record has the same SMA of course.
Any help is appreciated. Thanks
Presumably, to do the calculation in Ruby, you somehow parsed the JSON and got an array of Ruby hashes out of it.
To get this straight, you have an array of sma9day values, and an array of objects, and you want to iterate through them.
To do that, something like this should get you started:
hashes = JSON.parse(json)
sma9day_values = [9.83, 9.82, etc... ]

hashes.each_with_index do |hash, index|
  if index >= 9
    hash["SMA9"] = sma9day_values[index - 9]
  else
    hash["SMA9"] = 0
  end
end
puts hashes.to_json
Edit:
You really need to try a beginner Ruby tutorial. The problem is that you are calling round(2) on an array. The variable i in the sma9day(quotes, i) function is not used (hint). Maybe try something like sma9day[i].round(2)
Also, the return value of each_with_index is not something to assign. Don't do that; just call each_with_index on the array. I.e.
quotes = quotes.each_with_index do |day, i| #bad
quotes.each_with_index do |day, i| #good
I took your input and compiled a solution in this gist. I hope it helps.
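Pulling those corrections together, here is a self-contained sketch of the idea. It assumes inline closing prices instead of test.json and plain arithmetic instead of simple_statistics' mean, so nothing external is required; the last 8 records get nil (the question allowed nil or zero):

```ruby
require 'json'

# Ten inline closing prices, newest first, standing in for test.json.
closes = [8.65, 8.73, 8.7, 8.8, 8.9, 8.85, 8.95, 9.0, 9.1, 9.2]
quotes = closes.map { |c| { "Close" => c } }

# One rounded 9-day average per window; with 10 closes there are only 2.
sma9 = closes.each_cons(9).map { |w| (w.sum / 9.0).round(2) }

# Index into the precomputed averages rather than recomputing per record;
# past the end of sma9 the lookup yields nil, exactly as required.
quotes.each_with_index do |day, i|
  day["SMA9"] = sma9[i]
end

puts quotes.first.to_json
```

The rounding happens once, on each scalar average, which avoids the round-on-an-array NoMethodError from the question.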
I'm new to Ruby and had a question. I'm trying to create a .rb file that converts JSON to CSV.
I came across some disparate sources that got me to make:
require "rubygems"
require 'fastercsv'
require 'json'
csv_string = FasterCSV.generate({}) do |csv|
  JSON.parse(File.open("small.json").read).each do |hash|
    csv << hash
  end
end
puts csv_string
Now, it does in fact output text, but the values are all squashed together without spaces, commas, etc. How do I make the output a clearer, properly customised CSV file so I can export it?
The JSON would look like:
{
  "results": [
    {
      "reportingId": "s",
      "listingType": "Business",
      "hasExposureProducts": false,
      "name": "Medeco Medical Centre World Square",
      "primaryAddress": {
        "geoCodeGranularity": "PROPERTY",
        "addressLine": "Shop 9.01 World Sq Shopng Cntr 644 George St",
        "longitude": "151.206172",
        "suburb": "Sydney",
        "state": "NSW",
        "postcode": "2000",
        "latitude": "-33.876416",
        "type": "VANITY"
      },
      "primaryContacts": [
        {
          "type": "PHONE",
          "value": "(02) 9264 8500"
        }
      ]
    },xxx
}
The CSV to just have something like:
reportingId, s, listingType, Business, name, Medeco Medical...., addressLine, xxxxx, longitude, xxxx, latitude, xxxx, state, NSW, postcode, 2000, type, phone, value, (02) 92648544
Since your JSON structure is a mix of hashes and lists, with nesting at different depths, it is not as trivial as the code you show. However (assuming your input files always look the same), it shouldn't be hard to write an appropriate converter. At the lowest level, you can transform a hash into a CSV row with
hash.to_a.flatten
E.g.
input = JSON.parse(File.open("small_file.json").read)
writer = FasterCSV.open("out.csv", "w")
writer << input["results"][0]["primaryAddress"].to_a.flatten
will give you
type,VANITY,latitude,-33.876416,postcode,2000,state,NSW,suburb,Sydney,longitude,151.206172,addressLine,Shop 9.01 World Sq Shopng Cntr 644 George St,geoCodeGranularity,PROPERTY
Hope that points you in the right direction.
Btw, your JSON looks invalid. You should change the },xxx line to }].
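For what it's worth, on Ruby 1.9+ FasterCSV became the built-in CSV class, so the same idea can be sketched with the stdlib alone. The payload below is a shortened stand-in for small.json (two address fields only):

```ruby
require 'json'
require 'csv'

# Shortened stand-in for small.json.
json = '{"results":[{"reportingId":"s",' \
       '"primaryAddress":{"state":"NSW","postcode":"2000"}}]}'
input = JSON.parse(json)

# Flatten the nested address hash into alternating key,value cells,
# then let CSV handle quoting and separators for the row.
row = input["results"][0]["primaryAddress"].to_a.flatten
csv_string = CSV.generate { |csv| csv << row }
puts csv_string  # => state,NSW,postcode,2000
```

CSV.generate takes care of commas and quoting, which is what the squashed output from pushing a raw hash was missing.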