How do I access JSON array data? - ruby

I have the following array:
[ { "attributes": {
"id": "usdeur",
"code": 4
},
"name": "USD/EUR"
},
{ "attributes": {
"id": "eurgbp",
"code": 5
},
"name": "EUR/GBP"
}
]
How can I get both ids as output for further processing?
I have tried a lot, but with no success. My problem is that I always get only one id as output:
Market.all.select.each do |market|
present market.id
end
Or:
Market.all.each{|attributes| present attributes[:id]}
which gives me only "eurgbp" as a result while I need both ids.

JSON.parse should help you with this:
require 'json'
json = '[ { "attributes": {
"id": "usdeur",
"code": 4
},
"name": "USD/EUR"
},
{ "attributes": {
"id": "eurgbp",
"code": 5
},
"name": "EUR/GBP"
}]'
ids = JSON.parse(json).map{|hash| hash['attributes']['id'] }
#=> ["usdeur", "eurgbp"]
JSON.parse turns a JSON string into Ruby objects (here, an array of hashes); from there you just use standard Array and Hash methods for access.
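If Market.all already hands you that parsed array (an assumption about your setup; adjust to however you actually load the data), the same map call collects both ids:
ids = Market.all.map { |market| market['attributes']['id'] }
ids.each { |id| present id } # "usdeur", then "eurgbp"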

I'm going to assume that the data is JSON that you're parsing (with JSON.parse) into a Ruby Array of Hashes, which would look like this:
hashes = [ { "attributes" => { "id" => "usdeur", "code" => 4 },
"name" => "USD/EUR"
},
{ "attributes" => { "id" => "eurgbp", "code" => 5 },
"name" => "EUR/GBP"
} ]
If you wanted to get just the first "id" value, you'd do this:
first_hash = hashes[0]
first_hash_attributes = first_hash["attributes"]
p first_hash_attributes["id"]
# => "usdeur"
Or just:
p hashes[0]["attributes"]["id"]
# => "usdeur"
To get them all, you'll do this:
all_attributes = hashes.map {|hash| hash["attributes"] }
# => [ { "id" => "usdeur", "code" => 4 },
# { "id" => "eurgbp", "code" => 5 } ]
all_ids = all_attributes.map {|attrs| attrs["id"] }
# => [ "usdeur", "eurgbp" ]
Or just:
p hashes.map {|hash| hash["attributes"]["id"] }
# => [ "usdeur", "eurgbp" ]

The JSON library that Rails uses is quite slow.
I prefer to use:
gem 'oj'
from https://github.com/ohler55/oj
Fast and simple!
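A minimal sketch of the same parsing with Oj, reusing the json string from the answer above:
require 'oj'
ids = Oj.load(json).map { |hash| hash['attributes']['id'] }
#=> ["usdeur", "eurgbp"]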

Related

Proper way to Parse a Payload in Ruby

I have the following payload:
[{:payload=>
"{\"user\":\"test\",\"job\":\"Test\",\"username\":\"Bob\",\"blocks\":[{\"type\":\"section\",\"text\":{\"type\":\"mrkdwn\",\"text\":\"this is the title\"}},{\"type\":\"context\",\"elements\":[{\"type\":\"mrkdwn\",\"text\":\"Test\"}]},{\"type\":\"divider\"}]}"}]
I'm trying to figure out how to extract it. I tried
JSON.parse(response)
But I get the following error
TypeError: no implicit conversion of Hash into String
How can I extract this value to something where I can do something like:
response.job == "test" ?
Let's assume that you meant to say:
response = [{:payload => "{\"user\":\"test\",\"job\":\"Test\",\"username\":\"Bob\",\"blocks\":[{\"type\":\"section\",\"text\":{\"type\":\"mrkdwn\",\"text\":\"this is the title\"}},{\"type\":\"context\",\"elements\":[{\"type\":\"mrkdwn\",\"text\":\"Test\"}]},{\"type\":\"divider\"}]}"}]
Then response is an array with one element. That one element is a hash. You would thus access the payload with:
payload = JSON.parse(response.first[:payload])
=> {
"user" => "test",
"job" => "Test",
"username" => "Bob",
"blocks" => [
[0] {
"type" => "section",
"text" => {
"type" => "mrkdwn",
"text" => "this is the title"
}
},
[1] {
"type" => "context",
"elements" => [
[0] {
"type" => "mrkdwn",
"text" => "Test"
}
]
},
[2] {
"type" => "divider"
}
]
}
The payload object is then a hash and its child elements can be accessed using the standard [] call:
job = payload['job']
=> "Test"

How to ignore RSpec::ExampleGroups::EmptyContentValidation error on compiling a null value in JSON?

Here is what I get:
*** NameError Exception: undefined local variable or method `null' for #<RSpec::ExampleGroups::EmptyContentValidation:0x00007fdd2d2775d8>
Here is what I have as code:
actual =
{
"posts": [
{
"id": 3,
"title": "Post 3"
}
],
"profile": {
"name": ""
},
"nulldata": {
"res": null
}
}
I'm not sure why RSpec::ExampleGroups::EmptyContentValidation is
thrown when the above JSON snippet is compiled.
How can I get rid of this error message (or turn off this error, which is not relevant)?
That isn't JSON; that's a Ruby hash. And in Ruby there is no null; instead you use nil. You can confirm all of this by pasting it into your REPL:
actual =
{
"posts": [
{
"id": 3,
"title": "Post 3"
}
],
"profile": {
"name": ""
},
"nulldata": {
"res": null
}
}
This will return:
NameError: undefined local variable or method `null' for main:Object
Now change it to nil:
actual =
{
"posts": [
{
"id": 3,
"title": "Post 3"
}
],
"profile": {
"name": ""
},
"nulldata": {
"res": nil
}
}
This will return:
=> {
:posts => [
[0] {
:id => 3,
:title => "Post 3"
}
],
:profile => {
:name => ""
},
:nulldata => {
:res => nil
}
}
And you can even confirm that it's not JSON (which is a String) but is instead a Hash:
actual.class
=> Hash < Object
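If what you actually need is a JSON document containing null, keep nil in the Ruby hash and serialize it; to_json renders Ruby nil as JSON null (a minimal sketch using the hash above):
require 'json'

actual.to_json
# => '{"posts":[{"id":3,"title":"Post 3"}],"profile":{"name":""},"nulldata":{"res":null}}'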

JSON array splitting issue in Logstash configuration: Unexpected end-of-input: expected close marker for Array (start marker at [Source: (S

This is what my JSON object looks like; I have verified that the JSON I am getting is valid. I tried setting up configuration files for it, but I always get the same error:
JSON parse error, original data now in message field {:error=>#, :data=>"{\"total_rows\":15587,\"offset\":0,\"rows\":[\r"}
[2019-08-05T21:07:49,799][WARN ][logstash.filters.split ] Only String and Array types are splittable. field:[doc][serversGroups] is of type = NilClass
[2019-08-05T21:07:50,584][WARN ][logstash.filters.split ] Only String and Array types are splittable. field:[doc][serversGroups][ActiveUsers] is of type = NilClass
This is the source config file I am using for Logstash:
filter {
json {
source => "message"
skip_on_invalid_json => "true"
target => "doc"
}
split {
field => "[doc][serversGroups]"
}
split {
field => "[doc][serversGroups][ActiveUsers]"
}
date {
match => [ "[doc][date]", "UNIX" ]
target => "unix_time"
}
mutate {
convert => { "[doc][serversGroups][ActiveUsers][handle]" => "integer"
"[doc][serversGroups][list][UsedLicenses]" => "integer"
"[doc][serversGroups][list][issuedLicenses]" => "integer"
}
}
fingerprint {
concatenate_all_fields => "true"
method => "SHA256"
target => "fingerprint"
}
}
output {
stdout {
codec => "rubydebug"
}
elasticsearch {
hosts => ["localhost:9200"]
index => "pyyython"
codec => "json"
document_id => "%{[fingerprint]}"
}
}
This is my source JSON
{
"total_rows": 156122,
"offset": 12,
"rows": [
{
"id": "12345",
"key": "12345",
"value": {
"rev": "1-12345"
},
"doc": {
"_id": "12345",
"_rev": "1-12345",
"date": "15645348122",
"HostServerName": "abc.com",
"serversGroups": [
{
"ServiceName": "--- ",
"list": {
"issuedLicenses": "123",
"UsedLicenses": "12"
},
"ActiveUsers": [
{}
]
},
{
"ServiceName": "--- ",
"list": {
"issuedLicenses": "123",
"UsedLicenses": "12"
},
"ActiveUsers": [
{}
]
},
{
"ServiceName": "--- ",
"list": {
"issuedLicenses": "123",
"UsedLicenses": "12"
},
"ActiveUsers": [
{}
]
},
{
"ServiceName": "--- ",
"list": {
"issuedLicenses": "123",
"UsedLicenses": "1"
},
"ActiveUsers": [
{
"user": "me",
"user_host": "myself",
"dispay": "andI",
"version": "v1.1",
"server_host": "testing.abc.com",
"handle": "12345",
"last_date_license_check": "7/7",
"last_time_license_check": "12:12"
}
]
}
]
}
}
]
}
I keep getting this error
JSON parse error, original data now in message field {:error=>#<LogStash::Json::ParserError: Unexpected end-of-input: expected close marker for Array (start marker at [Source: (S"; line: 1, column: 39])87,"offset":0,"rows":[
"; line: 2, column: 41]>, :data=>"{\"total_rows\":15587,\"offset\":0,\"rows\":[\r"}
[2019-08-05T21:07:49,799][WARN ][logstash.filters.split ] Only String and Array types are splittable. field:[doc][serversGroups] is of type = NilClass
[2019-08-05T21:07:50,584][WARN ][logstash.filters.split ] Only String and Array types are splittable. field:[doc][serversGroups][ActiveUsers] is of type = NilClass
not sure if my splitting is wrong!
The source JSON that you show is clearly invalid, since it ends with a comma. If I replace the comma with
]
}
}
]
}
then it is valid. With that change made it can be split using
split { field => "[doc][rows][0][doc][serversGroups]" }
split { field => "[doc][rows][0][doc][serversGroups][ActiveUsers]" }

How do I make Ruby write my JSON correctly?

How can I make Ruby write the way my JSON is structured?
I want this way:
{
"keywords": [
{
"id": "1" ,
"product": "car"
} ,
{
"id": "2" ,
"product": "mobile"
}
]
}
When I run the code with a 3rd object, Ruby writes:
{
"keywords": [
{
"id": "1" ,
"product": "car"
} ,
{
"id": "2" ,
"product": "mobile"
}
],"3":"ball"
}
I'm generating the JSON this way:
data_hash.store(3, 'ball')
json_output = data_hash.to_json
file = File.open('keywords.json','w')
file.write(json_output)
You probably want to use the following instead of store:
data_hash['keywords'] << { 'id' => '3', 'product' => 'ball' }
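Putting it together, a minimal sketch (assuming keywords.json already holds the structure shown above; JSON.parse gives string keys, and JSON.pretty_generate keeps the indented layout you are after):
require 'json'

data_hash = JSON.parse(File.read('keywords.json'))
data_hash['keywords'] << { 'id' => '3', 'product' => 'ball' }
File.write('keywords.json', JSON.pretty_generate(data_hash))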

Merging records in JSON with Ruby

I have two JSON files that I'm trying to merge. The two files have different formats (see below). I'd like to merge the records, so [0] from file one and [0] from file two would become one record [0] in the new merged file.
The first JSON (file_a.json), appears like so:
{
"query": {
"count": 4,
"created": "2012-11-21T23:07:00Z",
"lang": "en-US",
"results": {
"quote": [
{
"Name": "Bill",
"Age": "46",
"Number": "3.55"
},
{
"Name": "Jane",
"Age": "33",
"Number": nil
},
{
"Name": "Jack",
"Age": "55",
"Number": nil
},
{
"Name": "Xavier",
"Age": nil,
"Number": "153353535"
}
]
}
}
}
The second JSON (file_b.json) appears like so:
[
{
"Number2": 25253,
"Number3": 435574,
"NAME": "Bill"
},
{
"Number2": 345353,
"Number3": 5566,
"NAME": "Jane"
},
{
"Number2": 56756,
"Number3": 232435,
"NAME": "Jack"
},
{
"Number2": 7457,
"Number3": 45425,
"NAME": "Xavier"
}
]
None of the keys are the same in both JSONs (well, actually "Name" is a key in both, but in the first file the key is "Name" and in the second it's "NAME"; that's just so I can check that the merge works correctly, so I want both "Name" and "NAME" in the final JSON). The first record in the first file matches the first record in the second file, and so on.
So far, I tried merging like this:
merged = %w[a b].inject([]) { |m,f| m << JSON.parse(File.read("file_#{f}.json")) }.flatten
But this of course merged them, just not how I wanted them merged (they are merged successively, and because of the different formatting it gets quite ugly).
I also tried merging like this:
a = JSON.parse(File.read("file_a.json"))
b = JSON.parse(File.read("file_b.json"))
merged = a.zip(b)
This came closer, but it was still not correct and the formatting was still horrendous.
In the end, what I want is this (formatting of second JSON - headers from first JSON can be junked):
[
{
"Name": "Bill",
"Age": 46,
"Number": 3.55,
"Number2": 25253,
"Number3": 435574,
"NAME": "Bill"
},
{
"Name": "Jane",
"Age": 33,
"Number": nil,
"Number2": 345353,
"Number3": 5566,
"NAME": "Jane"
},
{
"Name": "Jack",
"Age": 55,
"Number": nil,
"Number2": 56756,
"Number3": 232435,
"NAME": "Jack"
},
{
"Name": "Xavier",
"Age": nil,
"Number": 153353535,
"Number2": 7457,
"Number3": 45425,
"NAME": "Xavier"
}
]
Any help is appreciated. Thanks a lot.
Hello, it seems the format changed since last time :)
UPDATE: a more readable version that also converts the corresponding values to integers/floats:
require 'json'
require 'ap' # awesome_print, used for the formatted output below

# Dig the array of quote records out of the first file; fall back to [] if the path is missing.
a = JSON.parse(File.read('./a.json'))['query']['results']['quote'] rescue []
b = JSON.parse(File.read('./b.json'))

final = []
a.each_with_index do |ah, i|
  unless bh = b[i]
    bh = {}
    puts "seems b has no #{i} key, merging skipped"
  end
  # Merge the record pair, then convert numeric-looking strings to floats/integers.
  final << ah.merge(bh).inject({}) do |f, (k, v)|
    if v.is_a?(String)
      if v =~ /\A\d+\.\d+\Z/
        v = v.to_f
      elsif v =~ /\A\d+\Z/
        v = v.to_i
      end
    end
    f.update k => v
  end
end
ap final
will display:
[
[0] {
"Name" => "Bill",
"Age" => 46,
"Number" => 3.55,
"Number2" => 25253,
"Number3" => 435574,
"NAME" => "Bill"
},
[1] {
"Name" => "Jane",
"Age" => 33,
"Number" => nil,
"Number2" => 345353,
"Number3" => 5566,
"NAME" => "Jane"
},
[2] {
"Name" => "Jack",
"Age" => 55,
"Number" => nil,
"Number2" => 56756,
"Number3" => 232435,
"NAME" => "Jack"
},
[3] {
"Name" => "Xavier",
"Age" => nil,
"Number" => 153353535,
"Number2" => 7457,
"Number3" => 45425,
"NAME" => "Xavier"
}
]
By the way, your JSON is a bit wrong in both files: nil is not valid JSON, the value should be null.
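If you do not need the string-to-number conversion, a shorter variant with zip does the record-by-record merge (a sketch assuming the same two files, fixed to be valid JSON):
require 'json'

a = JSON.parse(File.read('./a.json')).dig('query', 'results', 'quote') || []
b = JSON.parse(File.read('./b.json'))
merged = a.zip(b).map { |ah, bh| ah.merge(bh || {}) }
File.write('merged.json', JSON.pretty_generate(merged))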
