Couchbase cbimport key generation with key in nested JSON - full-text-search

I have the following JSON document to be uploaded to a full-text search index:
{
  "bank_id": {
    "country_code": "AT",
    "bank_code": "ASPKAT2LXXX",
    "bank_code_type": "SWIFT_CODE"
  },
  "institution": {
    "name": "Allgemeine Sparkasse Oberoesterreich Bankaktiengesellschaft"
  },
  "address": {
    "address_line_1": "Promenade 11-13",
    "address_line_2": "Linz",
    "country_code": "AT"
  },
  "bank_capabilities": ["INSTANT_CREDIT"],
  "payment_network_details": [{
    "network": "REAL_TIME_PAYMENTS",
    "capabilities": ["INSTANT_CREDIT"]
  }]
}
I want to generate the key from country_code and bank_code. How can I accomplish this?
I tried -g %bank_id.country_code%::%bank_id.bank_code% and it is not working.
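For context, the full invocation would look something like the sketch below; the cluster address, credentials, bucket, and file name are placeholders, and whether dotted paths like %bank_id.country_code% are expanded inside %...% depends on the cbimport version, so verify against the docs for your build:
cbimport json -c couchbase://127.0.0.1 -u Administrator -p password \
  -b banks -d file://banks.json -f list \
  -g %bank_id.country_code%::%bank_id.bank_code%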

Related

Turn a JSON array of key/value pairs into object properties

I'm trying to use JSONata to convert arrays of "key/value" objects into properties of the parent object. My input looks like this:
[
  {
    "city": "Ottawa",
    "properties": [
      {
        "name": "population",
        "value": 37
      },
      {
        "name": "postalCode",
        "value": 10001
      },
      {
        "name": "founded",
        "value": 1826
      }
    ]
  },
  {
    "city": "Toronto",
    "properties": [
      {
        "name": "population",
        "value": 54
      },
      {
        "name": "postalCode",
        "value": 10002
      }
    ]
  }
]
I'm struggling to generate the output I need. I've seen examples that reference explicit elements, like in this answer, but I need the properties to be converted "dynamically" since I don't know them in advance. I think I need something like this, but I'm missing some particular function:
$[].{
  "city": city,
  properties.name: properties.value
}
This is the output I need to generate:
[
  {
    "city": "Ottawa",
    "population": 37,
    "postalCode": 10001,
    "founded": 1826
  },
  {
    "city": "Toronto",
    "population": 54,
    "postalCode": 10002
  }
]
The properties arrays don't always contain the same keys, but the city attributes are always present.
You can use the reduce operator, as described in the Grouping docs here:
$[].(
  $city := city;
  properties{ "city": $city, name: value }
)
You can play with it live: https://stedi.link/uUANwtE
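The $city variable binding matters here: inside the properties{ ... } grouping, the evaluation context is each element of the properties array, so city is no longer directly reachable and has to be captured in the enclosing step.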
Please try this expression.
$[].{
  "city": $.city,
  $.properties[0].name: $.properties[0].value,
  $.properties[1].name: $.properties[1].value,
  $.properties[2].name: $.properties[2].value,
  $.properties[3].name: $.properties[3].value
}
https://try.jsonata.org/s1Ea4kUvo
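Note that this expression hard-codes the first four entries of each properties array, so shorter arrays are handled but a fifth property would be silently dropped; the grouping approach above scales to any number of entries.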

How to use JSONPath to extract specific values

I'm using JSONPath to try to find data within an array of JSON objects, but I'm struggling to get to the information I want. The array contains many objects similar to the one below, where there are values for RecID throughout. If I use $..RecID I get them all, when I only want the first Key.RecID of each object (the one with the value 1338438 in this example). Is there a way to only extract the top-level Key.RecID value?
BTW, I'm trying to do this in JMeter, and I'm assuming JSONPath is the best way to do what I want, but if there is a better way I'd be happy to hear about it.
Thanks in advance.
[{
  "Key": {
    "RecID": 1338438
  },
  "Users": [{
      "FullName": "Miss Burns",
      "Users": {
        "Key": {
          "Name": "Burns",
          "RecID": 1317474
        }
      }
    },
    {
      "FullName": "Mrs Fisher",
      "Users": {
        "Key": {
          "Name": "Fisher",
          "RecID": 1317904
        }
      }
    }
  ],
  "User": {
    "FullName": "Mrs Fisher",
    "Key": {
      "Name": "Fisher",
      "RecID": 1317904
    }
  },
  "Organisation": {
    "Key": {
      "RecID": 1313881
    }
  }
}]
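No accepted answer is recorded here, but one possibility (an assumption, not a confirmed fix) is to avoid the recursive-descent operator and anchor the path at the top level of each array element, so that only the outermost Key.RecID is matched:
$[*].Key.RecID
In JMeter, this expression can be used as-is in a JSON Extractor.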

elasticsearch filebeat mapper_parsing_exception when using decode_json_fields

I have ECK set up and I'm using Filebeat to ship logs from Kubernetes to Elasticsearch.
I've recently added the decode_json_fields processor to my configuration so that I'm able to decode the JSON that is usually in the message field:
- decode_json_fields:
    fields: ["message"]
    process_array: false
    max_depth: 10
    target: "log"
    overwrite_keys: true
    add_error_key: true
However, logs have stopped appearing since I added it.
Example log:
{
  "_index": "filebeat-7.9.1-2020.10.01-000001",
  "_type": "_doc",
  "_id": "wF9hB3UBtUOF3QRTBcts",
  "_score": 1,
  "_source": {
    "@timestamp": "2020-10-08T08:43:18.672Z",
    "kubernetes": {
      "labels": {
        "controller-uid": "9f3f9d08-cfd8-454d-954d-24464172fa37",
        "job-name": "stream-hatchet-cron-manual-rvd"
      },
      "container": {
        "name": "stream-hatchet-cron",
        "image": "<redacted>.dkr.ecr.us-east-2.amazonaws.com/stream-hatchet:v0.1.4"
      },
      "node": {
        "name": "ip-172-20-32-60.us-east-2.compute.internal"
      },
      "pod": {
        "uid": "041cb6d5-5da1-4efa-b8e9-d4120409af4b",
        "name": "stream-hatchet-cron-manual-rvd-bh96h"
      },
      "namespace": "default"
    },
    "ecs": {
      "version": "1.5.0"
    },
    "host": {
      "mac": [],
      "hostname": "ip-172-20-32-60",
      "architecture": "x86_64",
      "name": "ip-172-20-32-60",
      "os": {
        "codename": "Core",
        "platform": "centos",
        "version": "7 (Core)",
        "family": "redhat",
        "name": "CentOS Linux",
        "kernel": "4.9.0-11-amd64"
      },
      "containerized": false,
      "ip": []
    },
    "cloud": {
      "instance": {
        "id": "i-06c9d23210956ca5c"
      },
      "machine": {
        "type": "m5.large"
      },
      "region": "us-east-2",
      "availability_zone": "us-east-2a",
      "account": {
        "id": "<redacted>"
      },
      "image": {
        "id": "ami-09d3627b4a09f6c4c"
      },
      "provider": "aws"
    },
    "stream": "stdout",
    "message": "{\"message\":{\"log_type\":\"cron\",\"status\":\"start\"},\"level\":\"info\",\"timestamp\":\"2020-10-08T08:43:18.670Z\"}",
    "input": {
      "type": "container"
    },
    "log": {
      "offset": 348,
      "file": {
        "path": "/var/log/containers/stream-hatchet-cron-manual-rvd-bh96h_default_stream-hatchet-cron-73069980b418e2aa5e5dcfaf1a29839a6d57e697c5072fea4d6e279da0c4e6ba.log"
      }
    },
    "agent": {
      "type": "filebeat",
      "version": "7.9.1",
      "hostname": "ip-172-20-32-60",
      "ephemeral_id": "6b3ba0bd-af7f-4946-b9c5-74f0f3e526b1",
      "id": "0f7fff14-6b51-45fc-8f41-34bd04dc0bce",
      "name": "ip-172-20-32-60"
    }
  },
  "fields": {
    "@timestamp": [
      "2020-10-08T08:43:18.672Z"
    ],
    "suricata.eve.timestamp": [
      "2020-10-08T08:43:18.672Z"
    ]
  }
}
In the Filebeat logs I can see the following error:
2020-10-08T09:25:43.562Z WARN [elasticsearch] elasticsearch/client.go:407 Cannot
index event
publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0x36b243a0,
ext:63737745936, loc:(*time.Location)(nil)}, Meta:null,
Fields:{"agent":{"ephemeral_id":"5f8afdba-39c3-4fb7-9502-be7ef8f2d982","hostname":"ip-172-20-32-60","id":"0f7fff14-6b51-45fc-8f41-34bd04dc0bce","name":"ip-172-20-32-60","type":"filebeat","version":"7.9.1"},"cloud":{"account":{"id":"700849607999"},"availability_zone":"us-east-2a","image":{"id":"ami-09d3627b4a09f6c4c"},"instance":{"id":"i-06c9d23210956ca5c"},"machine":{"type":"m5.large"},"provider":"aws","region":"us-east-2"},"ecs":{"version":"1.5.0"},"host":{"architecture":"x86_64","containerized":false,"hostname":"ip-172-20-32-60","ip":["172.20.32.60","fe80::af:9fff:febe:dc4","172.17.0.1","100.96.1.1","fe80::6010:94ff:fe17:fbae","fe80::d869:14ff:feb0:81b3","fe80::e4f3:b9ff:fed8:e266","fe80::1c19:bcff:feb3:ce95","fe80::fc68:21ff:fe08:7f24","fe80::1cc2:daff:fe84:2a5a","fe80::3426:78ff:fe22:269a","fe80::b871:52ff:fe15:10ab","fe80::54ff:cbff:fec0:f0f","fe80::cca6:42ff:fe82:53fd","fe80::bc85:e2ff:fe5f:a60d","fe80::e05e:b2ff:fe4d:a9a0","fe80::43a:dcff:fe6a:2307","fe80::581b:20ff:fe5f:b060","fe80::4056:29ff:fe07:edf5","fe80::c8a0:5aff:febd:a1a3","fe80::74e3:feff:fe45:d9d4","fe80::9c91:5cff:fee2:c0b9"],"mac":["02:af:9f:be:0d:c4","02:42:1b:56:ee:d3","62:10:94:17:fb:ae","da:69:14:b0:81:b3","e6:f3:b9:d8:e2:66","1e:19:bc:b3:ce:95","fe:68:21:08:7f:24","1e:c2:da:84:2a:5a","36:26:78:22:26:9a","ba:71:52:15:10:ab","56:ff:cb:c0:0f:0f","ce:a6:42:82:53:fd","be:85:e2:5f:a6:0d","e2:5e:b2:4d:a9:a0","06:3a:dc:6a:23:07","5a:1b:20:5f:b0:60","42:56:29:07:ed:f5","ca:a0:5a:bd:a1:a3","76:e3:fe:45:d9:d4","9e:91:5c:e2:c0:b9"],"name":"ip-172-20-32-60","os":{"codename":"Core","family":"redhat","kernel":"4.9.0-11-amd64","name":"CentOS
Linux","platform":"centos","version":"7
(Core)"}},"input":{"type":"container"},"kubernetes":{"container":{"image":"700849607999.dkr.ecr.us-east-2.amazonaws.com/stream-hatchet:v0.1.4","name":"stream-hatchet-cron"},"labels":{"controller-uid":"a79daeac-b159-4ba7-8cb0-48afbfc0711a","job-name":"stream-hatchet-cron-manual-c5r"},"namespace":"default","node":{"name":"ip-172-20-32-60.us-east-2.compute.internal"},"pod":{"name":"stream-hatchet-cron-manual-c5r-7cx5d","uid":"3251cc33-48a9-42b1-9359-9f6e345f75b6"}},"log":{"level":"info","message":{"log_type":"cron","status":"start"},"timestamp":"2020-10-08T09:25:36.916Z"},"message":"{"message":{"log_type":"cron","status":"start"},"level":"info","timestamp":"2020-10-08T09:25:36.916Z"}","stream":"stdout"},
Private:file.State{Id:"native::30998361-66306", PrevId:"",
Finished:false, Fileinfo:(*os.fileStat)(0xc001c14dd0),
Source:"/var/log/containers/stream-hatchet-cron-manual-c5r-7cx5d_default_stream-hatchet-cron-4278d956fff8641048efeaec23b383b41f2662773602c3a7daffe7c30f62fe5a.log",
Offset:539, Timestamp:time.Time{wall:0xbfd7d4a1e556bd72,
ext:916563812286, loc:(*time.Location)(0x607c540)}, TTL:-1,
Type:"container", Meta:map[string]string(nil),
FileStateOS:file.StateOS{Inode:0x1d8ff59, Device:0x10302},
IdentifierName:"native"}, TimeSeries:false}, Flags:0x1,
Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=400):
{"type":"mapper_parsing_exception","reason":"failed to parse field
[log.message] of type [keyword] in document with id
'56aHB3UBLgYb8gz801DI'. Preview of field's value: '{log_type=cron,
status=start}'","caused_by":{"type":"illegal_state_exception","reason":"Can't
get text on a START_OBJECT at 1:113"}}
It throws an error because apparently log.message is of type "keyword"; however, this does not exist in the index mapping.
I thought this might be an issue with "target": "log", so I've tried changing it to something arbitrary like "my_parsed_message" or "m_log" or "mlog", and I get the same error for all of them:
{"type":"mapper_parsing_exception","reason":"failed to parse field
[mlog.message] of type [keyword] in document with id
'J5KlDHUB_yo5bfXcn2LE'. Preview of field's value: '{log_type=cron,
status=end}'","caused_by":{"type":"illegal_state_exception","reason":"Can't
get text on a START_OBJECT at 1:217"}}
Elastic version: 7.9.2
The problem is that some of your JSON messages contain a message field that is sometimes a simple string and other times a nested JSON object (like in the case you're showing in your question).
After this index was created, the very first message that was parsed was probably a string, and hence the mapping was modified to add the following field (line 10553):
"mlog": {
"properties": {
...
"message": {
"type": "keyword",
"ignore_above": 1024
},
}
}
You'll find the same pattern for my_parsed_message (line 10902), my_parsed_logs (line 10742), etc...
Hence the next message that comes with message being a JSON object, like
{"message":{"log_type":"cron","status":"start"}, ...
will not work because it's an object, not a string...
Looking at the fields of your custom JSON, it seems you don't really have control over either their taxonomy (i.e. naming) or what they contain...
If you're serious about searching within those custom fields (which I think you are, since you're parsing the field; otherwise you'd just store the stringified JSON), then I can only suggest figuring out a proper taxonomy to make sure that they all get a standard type.
If all you care about is logging your data, then I suggest simply disabling the indexing of that message field. Another solution is to set dynamic: false in your mapping to ignore those fields, i.e. not modify your mapping.
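For illustration, disabling indexing could look roughly like the sketch below. The index name is hypothetical and the field path assumes the target: "log" setting from the question; with "enabled": false the decoded object is still kept in _source but is neither indexed nor searchable:
PUT my-logs-index
{
  "mappings": {
    "properties": {
      "log": {
        "properties": {
          "message": {
            "type": "object",
            "enabled": false
          }
        }
      }
    }
  }
}
Alternatively, setting "dynamic": false on the log object makes the mapper silently ignore new subfields instead of rejecting the document, which also avoids the mapper_parsing_exception.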

NiFi - Copy all the keys after transforming only one key

I want to copy all the keys in the JSON except one, which I want to transform.
Example:
Input JSON
{
  "ts": "20200420121222",
  "name": "broker",
  "city": "queensland",
  "age": 21,
  "gender": "male",
  "characteristics": {
    "Card Id": "63247354",
    "Termination Plan": "paid"
  }
}
Output JSON
{
  "ts": "20200420121222",
  "name": "broker",
  "city": "queensland",
  "age": 21,
  "gender": "male",
  "characteristics": {
    "card_id": "63247354", // change here
    "termination_plan": "paid" // change here
  }
}
Is there a better way by which I can just change the above keys and copy the rest?
You can use the "*": "&" construct to include all other fields that have not yet been matched:
[
  {
    "operation": "shift",
    "spec": {
      "characteristics": {
        "Card Id": "characteristics.card_id",
        "Termination Plan": "characteristics.termination_plan"
      },
      "*": "&"
    }
  }
]
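In NiFi, a spec like this would typically be pasted into the Jolt Specification property of a JoltTransformJSON processor (with the transformation DSL set to Chain, since the spec is wrapped in an array); the "*": "&" wildcard passes every key that was not explicitly matched through unchanged.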

How to add a key/value pair to a hash with the Ruby language

I am using Ruby 2.0.0. At present I have one demo.json file with the following hash values:
{
  "users": {
    "@jon": {
      "name": "pradeep",
      "Email": "pradeep@yahoo.com",
      "area": "@jon",
      "location": "@newyork"
    },
    "@smith": {
      "name": "Smith",
      "Email": "Joe@yahoo.com",
      "area": "@smith",
      "location": "@lverginia"
    }
  }
}
Now I am reading the JSON values into an object using the following code:
require 'json'
json = File.read('demo.json')
obj = JSON.parse(json)
Here @jon and @smith are usernames. Now I want to take the username via keyboard input, and then all of the other values (like the ones inside @jon) via the keyboard as well.
Suppose I have one new user @david and his other values are:
"name": "pradeep",
"Email": "pradeep#yahoo.com",
"area": "#jon",
"location": "#newyork"
I want to add this to the above demo.json file without removing the other values. Any idea how I can do this?
I tried to do this in this way:
obj["users"]
But since I get the username via input, I can't hard-code the username after the "users" key on the obj object. Hope this makes sense.
I'd do it as below:
require 'json'
# Load the existing file into a Hash
json = JSON.parse(File.read("test.json"))
# Prompt for the username first, then for each attribute, collecting [key, value] pairs
new_information_arry = ["users", "name", "Email", "area", "location"].map do |elem|
  puts "please give the value of #{elem}"
  [elem, gets.chomp]
end
# Everything after the first pair becomes the new user's attribute hash
new_information_hash = Hash[new_information_arry[1..-1]]
# The first pair is ["users", "<username>"]; its value becomes the new key under "users"
json['users'][new_information_arry.first.last] = new_information_hash
# Write the merged structure back out, pretty-printed
File.write("outputfile.json", JSON.pretty_generate(json))
I put the below content into my file test.json:
{
  "users": {
    "@jon": {
      "name": "pradeep",
      "Email": "pradeep@yahoo.com",
      "area": "@jon",
      "location": "@newyork"
    },
    "@smith": {
      "name": "Smith",
      "Email": "Joe@yahoo.com",
      "area": "@smith",
      "location": "@lverginia"
    }
  }
}
Then I ran the above code as below:
please give the value of users
@david
please give the value of name
pradeep
please give the value of Email
pradeep@yahoo.com
please give the value of area
@jon
please give the value of location
@newyor
And now my output file outputfile.json contains:
{
  "users": {
    "@jon": {
      "name": "pradeep",
      "Email": "pradeep@yahoo.com",
      "area": "@jon",
      "location": "@newyork"
    },
    "@smith": {
      "name": "Smith",
      "Email": "Joe@yahoo.com",
      "area": "@smith",
      "location": "@lverginia"
    },
    "@david": {
      "name": "pradeep",
      "Email": "pradeep@yahoo.com",
      "area": "@jon",
      "location": "@newyor"
    }
  }
}
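The key point for the original question is that a Ruby hash can be indexed with any expression, not just a literal, so the username read from the keyboard can be used directly as the key. A minimal sketch of just that step, with illustrative file and variable names:
require 'json'
obj = JSON.parse(File.read('demo.json'))
username = gets.chomp # e.g. "@david"
obj['users'][username] = {
  'name' => 'pradeep', 'Email' => 'pradeep@yahoo.com',
  'area' => '@jon', 'location' => '@newyork'
}
File.write('demo.json', JSON.pretty_generate(obj))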
