Ruby - Parse a file into hash

I have a file containing hundreds of object and value combinations like the ones below. I want to take an object name and a numeric value as input from the user and return the associated value.
Object cefcFRUPowerOperStatus
Type PowerOperType
1:offEnvOther
2:on
3:offAdmin
4:offDenied
5:offEnvPower
6:offEnvTemp
Object cefcModuleOperStatus
Type ModuleOperType
1:unknown
2:ok
3:disabled
4:okButDiagFailed
5:boot
6:selfTest
E.g., for input:
objectName = 'cefcModuleOperStatus'
TypeNumber = '4'
the return value should be 'okButDiagFailed'.
I am not familiar with Ruby and am doing this to help a peer, so please excuse me if this is a novice question.
Note: I have to create the file myself, so a solution using any file format would be a great help.

If, as you say, you have control over creating the original data file, then creating it in JSON format makes accessing it trivial.
For example, if you create a JSON file like this:
data.json
{
  "cefcFRUPowerOperStatus": {
    "type": "PowerOperType",
    "status": {
      "1": "offEnvOther",
      "2": "on",
      "3": "offAdmin",
      "4": "offDenied",
      "5": "offEnvPower",
      "6": "offEnvTemp"
    }
  },
  "cefcModuleOperStatus": {
    "type": "ModuleOperType",
    "status": {
      "1": "unknown",
      "2": "ok",
      "3": "disabled",
      "4": "okButDiagFailed",
      "5": "boot",
      "6": "selfTest"
    }
  }
}
Then parsing and accessing it in Ruby is as simple as:
require 'json'
file = File.read('data.json')
data = JSON.parse(file)
# accessing this data is simple now:
puts data["cefcModuleOperStatus"]["status"]["4"]
# => okButDiagFailed
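To wire this up to user input as described in the question, a minimal sketch could look like the following (the prompt strings and the no-match message are just illustrative):
require 'json'

data = JSON.parse(File.read('data.json'))

print 'Object name: '
object_name = gets.chomp            # e.g. cefcModuleOperStatus
print 'Type number: '
type_number = gets.chomp            # e.g. 4

entry = data[object_name]
if entry && entry['status'].key?(type_number)
  puts entry['status'][type_number] # => okButDiagFailed
else
  puts "No match for #{object_name} / #{type_number}"
end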
Note that this JSON format will work as long as your object names (the top-level keys) are unique. If they are not, you can still use this approach, but you will need to convert the JSON to an array format. Let me know if that is the case and I can show you how to modify the JSON and the Ruby code for it.
Hope that helps, let me know if you have further questions about how this works.

Related

How can I remove dot from a nested object in logstash

We have a complex object with nested fields whose names can be dynamic and can contain dots. When I try to ingest the data into Elasticsearch, it gives me the following error:
Object mapping for [x] tried to parse field [x.y] as object, but found a concrete value
One record can have key/values like a.b.c:4 while another record can have a.b:3. We don't have control over the source of the incoming data, so the only option is to change the object in Logstash. Here is an example of an incoming object:
{
  "result": "https://www.yahoo.com",
  "tags": {
    "url": "https://www.yahoo.com",
    "projectName": "monitor",
    "host": "ttt",
    "dd": 12345,
    "vv": "kk"
  },
  "timestamp": 1586599441000,
  "runId": 12345,
  "performance": {
    "x.y.z": 31307
  },
  "channel": "clientperf",
  "asset": {
    "a.b.c": 5,
    "a.b": 4
  }
}
As you can see, the keys inside asset and performance contain dots. The fields at the root (like runId, performance, ...) are fine. How can I resolve this, either by replacing the dots in Logstash or in any other way that avoids the error? I'm aware of the de_dot plugin, but to use it we need to name the nested fields explicitly, and we cannot enforce the naming of the incoming records. I also know that we could probably achieve this with the ruby plugin, but I have zero knowledge of Ruby. Any help would be appreciated.
You could use Hash#deep_transform_keys from ActiveSupport, or define an equivalent yourself:
class Hash
  def deep_transform_keys(&block)
    result = {}
    each do |key, value|
      result[yield(key)] = value.is_a?(Hash) ? value.deep_transform_keys(&block) : value
    end
    result
  end
end
puts hash.deep_transform_keys { |key| key.to_s.gsub(".", "") }
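Applied to the relevant slice of the sample event from the question, a minimal sketch looks like this (it relies on the deep_transform_keys definition above; the hash literal only reproduces the two problematic fields):
hash = {
  "performance" => { "x.y.z" => 31307 },
  "asset" => { "a.b.c" => 5, "a.b" => 4 }
}

puts hash.deep_transform_keys { |key| key.to_s.gsub(".", "") }
# => {"performance"=>{"xyz"=>31307}, "asset"=>{"abc"=>5, "ab"=>4}}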

How do I use FreeFormTextRecordSetWriter

In my NiFi controller I want to configure the FreeFormTextRecordSetWriter, but I have no idea what I should put in the "Text" field. I'm getting the text from my source (in my case GetSolr) and just want to write that out, period.
The documentation and mailing list do not seem to tell me how this is done; any help appreciated.
EDIT: Here are the sample input and the output I want to achieve (as you can see: no transformation needed, just plain text, no JSON).
EDIT: I now realize that I can't tell GetSolr to return just CSV data; I have to use JSON.
So referencing with an attribute seems to be fine. What the documentation omits is that the ${flowFile} attribute should contain the complete FlowFile that is returned.
Sample input:
{
  "responseHeader": {
    "zkConnected": true,
    "status": 0,
    "QTime": 0,
    "params": {
      "q": "*:*",
      "_": "1553686715465"
    }
  },
  "response": {
    "numFound": 3194,
    "start": 0,
    "docs": [
      {
        "id": "{402EBE69-0000-CD1D-8FFF-D07756271B4E}",
        "MimeType": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
        "FileName": "Test.docx",
        "DateLastModified": "2019-03-27T08:05:00.103Z",
        "_version_": 1629145864291221504,
        "LAST_UPDATE": "2019-03-27T08:16:08.451Z"
      }
    ]
  }
}
Wanted output
{402EBE69-0000-CD1D-8FFF-D07756271B4E}
BTW: The documentation says this:
The text to use when writing the results. This property will evaluate the Expression Language using any of the fields available in a Record.
Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)
I want to use my source's text, so I'm confused
You need to use expression language as if the record's fields are the FlowFile's attributes.
Example:
Input:
{
  "t1": "test",
  "t2": "ttt",
  "hello": true,
  "testN": 1
}
Text property in FreeFormTextRecordSetWriter:
${t1} k!${t2} ${hello}:boolean
${testN}Num
Output (using ConvertRecord):
test k!ttt true:boolean
1Num
EDIT:
It seems like what you needed was to read from Solr and write a single-column CSV. You need to use CSVRecordSetWriter for that.
Consider upgrading to 1.9.1; starting from 1.9.0, the schema can be inferred for you.
Otherwise, you can set Schema Access Strategy to Use 'Schema Text' Property
and then use the following schema in the Schema Text property:
{
  "name": "MyClass",
  "type": "record",
  "namespace": "com.acme.avro",
  "fields": [
    {
      "name": "id",
      "type": "string"
    }
  ]
}
this should work
I'll edit it into my answer. If it works for you, please choose my answer :)

How to read value from JSON object?

I'm trying to read an individual value from the JSON array object below to display it in the page. I have tried the code below but couldn't make it work. Please advise what I am doing wrong here.
Appreciate your help.
You can get the length of a JavaScript array via its property length. To access the array Reference in your object, you can use dot notation.
In combination, the following should do what you expect:
var obj = {
  "Reference": [
    {
      "name": "xxxxxxxx",
      "typeReference": {
        "articulation": 0,
        "locked": false,
        "createdBy": {
          "userName": "System"
        },
        "lastModifiedBy": {
          "userName": "System"
        },
        "lastModified": 1391084398660,
        "createdOn": 1391084398647,
        "isSystem": true
      },
      ...
    },
    ...
  ]
};
console.log(obj.Reference.length);
In case you are actually dealing with a JSON string, not a JavaScript object, you will need to parse it first via JSON.parse().
You get the length of an array by simply accessing its length attribute.
For example, [0,1,2,3].length === 4.
If you just want to loop through the array, use forEach or map instead of a for loop. It's safer, cleaner, less hassle, and you don't need to know the length.
E.g.
[0,1,2,3].forEach(num => console.log(num))

using ruby json library to parse json data into a hash some keys missing

Let's say I have this JSON data file:
{
  "page": {
    "title": "Example Page"
  },
  "employers": {
    "name": "Jon"
  },
  "employees": [
    { "name": "Mike", "nicknames": ["Superman"] },
    { "name": "Peter", "nicknames": ["Peet", "Peetee", "Peterr"] }
  ]
}
This data.json file exists as a file outside of the script.
I have these 3 lines to read and parse it with the json Ruby library:
data = File.read("data.json")
obj = JSON.parse(data)
puts obj.values
In my terminal the output comes out like this:
{"title"=>"Example Page"}
{"name"=>"Jon"}
{"name"=>"Mike", "nicknames"=>["Superman"]}
{"name"=>"Peter", "nicknames"=>["Peet", "Peetee", "Peterr"]}
What happened to employers and employees? Now several of the values share the same key (name in this case), which makes it difficult for me to grab the values I want to use.
employers and employees are the keys of the primary (top-level) hash, and you asked only for the values; that's why you get what you get. Try puts obj instead.
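If the goal is simply to grab individual values, keeping the keys makes that straightforward. A minimal sketch, assuming the data.json file from the question:
require 'json'

obj = JSON.parse(File.read("data.json"))

puts obj["page"]["title"]       # => Example Page
puts obj["employers"]["name"]   # => Jon

obj["employees"].each do |employee|
  puts "#{employee["name"]}: #{employee["nicknames"].join(", ")}"
end
# => Mike: Superman
# => Peter: Peet, Peetee, Peterr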

Efficiently check that a JSON response contains a specific element within an array

Given the JSON response:
{
  "tags": [
    {
      "id": 81499,
      "name": "sign-in"
    },
    {
      "id": 81500,
      "name": "user"
    },
    {
      "id": 81501,
      "name": "authentication"
    }
  ]
}
Using RSpec 2, I want to verify that this response contains the tag with the name authentication. Being fairly new to Ruby, I figure there is a more efficient way than iterating the array and checking each value of name using include? or map/collect. I could simply use a regex to check for /authentication/i, but that doesn't seem like the best approach either.
This is my spec so far:
it "allows filtering" do
response = #client.story(15404)
#response.tags.
end
So, if
t = JSON.parse '{ ... }'
then the following expression will either return nil, which is falsy, or it will return the element it detected, which evaluates as true in a boolean context:
t['tags'].detect { |e| e['name'] == 'authentication' }
This will raise NoMethodError if there is no tags key. I think that's handled just fine in a test, but you can arrange for that case to also show up as false (i.e., nil) with:
t['tags'].to_a.detect { |e| e['name'] == 'authentication' }
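Dropped into the spec from the question, a minimal sketch might look like this (it assumes @client.story returns the raw JSON string shown above and uses RSpec 2's should syntax; adjust the parsing step if the client already returns parsed data):
it "allows filtering" do
  response = @client.story(15404)
  tags = JSON.parse(response)['tags']   # skip the parse if response is already a Hash

  tags.to_a.detect { |e| e['name'] == 'authentication' }.should_not be_nil
end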
