I am trying to understand the model for the messages that Neo4j Streams emits to a Kafka topic.
The data looks like this:
{
  "meta": {
    "timestamp": 1608725179199,
    "username": "neo4j",
    "txId": 16443,
    "txEventId": 0,
    "txEventsCount": 1,
    "operation": "created",
    "source": {
      "hostname": "9945"
    }
  },
  "payload": {
    "id": "1000",
    "before": null,
    "after": {
      "properties": {
        "name": "sdfdf"
      },
      "labels": [
        "aaq"
      ]
    },
    "type": "node"
  },
  "schema": {
    "properties": {
      "name": "String"
    },
    "constraints": []
  }
}
To consume this kind of complex structured data from Kafka in Spring Boot, do we need to create a nested model, i.e. roughly four classes nested inside each other?
From my understanding, I would create the classes below.
meta (**1st class**)
    operation
payload (**2nd class, alongside meta in the same event**)
    id
    before
    after (**3rd class, nested inside payload**)
        properties (**4th class, nested within after; this is the only data we need to store**)
        labels
    type
I haven't faced this kind of nesting before, so I have no idea how to proceed.
Is the above approach right, or are there other possibilities? A rough sketch of these classes is shown below.
The ultimate goal is to consume the data from the Kafka topic emitted by Neo4j Streams.
Language: Java
Framework: Spring Boot
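For illustration, a minimal sketch of what those nested classes might look like, assuming Jackson is used for deserialization; all class names (ChangeEvent, NodeState, etc.) are my own choices, not a fixed contract:

```java
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import java.util.List;
import java.util.Map;

// Top-level event: note that meta and payload are siblings inside it,
// not nested in each other.
@JsonIgnoreProperties(ignoreUnknown = true)
public class ChangeEvent {
    public Meta meta;
    public Payload payload;

    @JsonIgnoreProperties(ignoreUnknown = true)
    public static class Meta {
        public long timestamp;
        public String username;
        public String operation;
    }

    @JsonIgnoreProperties(ignoreUnknown = true)
    public static class Payload {
        public String id;
        public NodeState before;   // null for "created" events
        public NodeState after;
        public String type;
    }

    // Shared shape for "before"/"after"
    @JsonIgnoreProperties(ignoreUnknown = true)
    public static class NodeState {
        public Map<String, Object> properties;
        public List<String> labels;
    }
}
```

With spring-kafka, this class can then be bound in a listener, e.g. via a JsonDeserializer<ChangeEvent>, or by consuming the raw string and calling ObjectMapper.readValue(json, ChangeEvent.class).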
I have an ECK setup and I'm using Filebeat to ship logs from Kubernetes to Elasticsearch.
I've recently added the decode_json_fields processor to my configuration, so that I'm able to decode the JSON that is usually in the message field.
- decode_json_fields:
    fields: ["message"]
    process_array: false
    max_depth: 10
    target: "log"
    overwrite_keys: true
    add_error_key: true
However, logs have stopped appearing since adding it.
Example log:
{
  "_index": "filebeat-7.9.1-2020.10.01-000001",
  "_type": "_doc",
  "_id": "wF9hB3UBtUOF3QRTBcts",
  "_score": 1,
  "_source": {
    "@timestamp": "2020-10-08T08:43:18.672Z",
    "kubernetes": {
      "labels": {
        "controller-uid": "9f3f9d08-cfd8-454d-954d-24464172fa37",
        "job-name": "stream-hatchet-cron-manual-rvd"
      },
      "container": {
        "name": "stream-hatchet-cron",
        "image": "<redacted>.dkr.ecr.us-east-2.amazonaws.com/stream-hatchet:v0.1.4"
      },
      "node": {
        "name": "ip-172-20-32-60.us-east-2.compute.internal"
      },
      "pod": {
        "uid": "041cb6d5-5da1-4efa-b8e9-d4120409af4b",
        "name": "stream-hatchet-cron-manual-rvd-bh96h"
      },
      "namespace": "default"
    },
    "ecs": {
      "version": "1.5.0"
    },
    "host": {
      "mac": [],
      "hostname": "ip-172-20-32-60",
      "architecture": "x86_64",
      "name": "ip-172-20-32-60",
      "os": {
        "codename": "Core",
        "platform": "centos",
        "version": "7 (Core)",
        "family": "redhat",
        "name": "CentOS Linux",
        "kernel": "4.9.0-11-amd64"
      },
      "containerized": false,
      "ip": []
    },
    "cloud": {
      "instance": {
        "id": "i-06c9d23210956ca5c"
      },
      "machine": {
        "type": "m5.large"
      },
      "region": "us-east-2",
      "availability_zone": "us-east-2a",
      "account": {
        "id": "<redacted>"
      },
      "image": {
        "id": "ami-09d3627b4a09f6c4c"
      },
      "provider": "aws"
    },
    "stream": "stdout",
    "message": "{\"message\":{\"log_type\":\"cron\",\"status\":\"start\"},\"level\":\"info\",\"timestamp\":\"2020-10-08T08:43:18.670Z\"}",
    "input": {
      "type": "container"
    },
    "log": {
      "offset": 348,
      "file": {
        "path": "/var/log/containers/stream-hatchet-cron-manual-rvd-bh96h_default_stream-hatchet-cron-73069980b418e2aa5e5dcfaf1a29839a6d57e697c5072fea4d6e279da0c4e6ba.log"
      }
    },
    "agent": {
      "type": "filebeat",
      "version": "7.9.1",
      "hostname": "ip-172-20-32-60",
      "ephemeral_id": "6b3ba0bd-af7f-4946-b9c5-74f0f3e526b1",
      "id": "0f7fff14-6b51-45fc-8f41-34bd04dc0bce",
      "name": "ip-172-20-32-60"
    }
  },
  "fields": {
    "@timestamp": [
      "2020-10-08T08:43:18.672Z"
    ],
    "suricata.eve.timestamp": [
      "2020-10-08T08:43:18.672Z"
    ]
  }
}
In the Filebeat logs I can see the following error:
2020-10-08T09:25:43.562Z WARN [elasticsearch] elasticsearch/client.go:407 Cannot
index event
publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0x36b243a0,
ext:63737745936, loc:(*time.Location)(nil)}, Meta:null,
Fields:{"agent":{"ephemeral_id":"5f8afdba-39c3-4fb7-9502-be7ef8f2d982","hostname":"ip-172-20-32-60","id":"0f7fff14-6b51-45fc-8f41-34bd04dc0bce","name":"ip-172-20-32-60","type":"filebeat","version":"7.9.1"},"cloud":{"account":{"id":"700849607999"},"availability_zone":"us-east-2a","image":{"id":"ami-09d3627b4a09f6c4c"},"instance":{"id":"i-06c9d23210956ca5c"},"machine":{"type":"m5.large"},"provider":"aws","region":"us-east-2"},"ecs":{"version":"1.5.0"},"host":{"architecture":"x86_64","containerized":false,"hostname":"ip-172-20-32-60","ip":["172.20.32.60","fe80::af:9fff:febe:dc4","172.17.0.1","100.96.1.1","fe80::6010:94ff:fe17:fbae","fe80::d869:14ff:feb0:81b3","fe80::e4f3:b9ff:fed8:e266","fe80::1c19:bcff:feb3:ce95","fe80::fc68:21ff:fe08:7f24","fe80::1cc2:daff:fe84:2a5a","fe80::3426:78ff:fe22:269a","fe80::b871:52ff:fe15:10ab","fe80::54ff:cbff:fec0:f0f","fe80::cca6:42ff:fe82:53fd","fe80::bc85:e2ff:fe5f:a60d","fe80::e05e:b2ff:fe4d:a9a0","fe80::43a:dcff:fe6a:2307","fe80::581b:20ff:fe5f:b060","fe80::4056:29ff:fe07:edf5","fe80::c8a0:5aff:febd:a1a3","fe80::74e3:feff:fe45:d9d4","fe80::9c91:5cff:fee2:c0b9"],"mac":["02:af:9f:be:0d:c4","02:42:1b:56:ee:d3","62:10:94:17:fb:ae","da:69:14:b0:81:b3","e6:f3:b9:d8:e2:66","1e:19:bc:b3:ce:95","fe:68:21:08:7f:24","1e:c2:da:84:2a:5a","36:26:78:22:26:9a","ba:71:52:15:10:ab","56:ff:cb:c0:0f:0f","ce:a6:42:82:53:fd","be:85:e2:5f:a6:0d","e2:5e:b2:4d:a9:a0","06:3a:dc:6a:23:07","5a:1b:20:5f:b0:60","42:56:29:07:ed:f5","ca:a0:5a:bd:a1:a3","76:e3:fe:45:d9:d4","9e:91:5c:e2:c0:b9"],"name":"ip-172-20-32-60","os":{"codename":"Core","family":"redhat","kernel":"4.9.0-11-amd64","name":"CentOS
Linux","platform":"centos","version":"7
(Core)"}},"input":{"type":"container"},"kubernetes":{"container":{"image":"700849607999.dkr.ecr.us-east-2.amazonaws.com/stream-hatchet:v0.1.4","name":"stream-hatchet-cron"},"labels":{"controller-uid":"a79daeac-b159-4ba7-8cb0-48afbfc0711a","job-name":"stream-hatchet-cron-manual-c5r"},"namespace":"default","node":{"name":"ip-172-20-32-60.us-east-2.compute.internal"},"pod":{"name":"stream-hatchet-cron-manual-c5r-7cx5d","uid":"3251cc33-48a9-42b1-9359-9f6e345f75b6"}},"log":{"level":"info","message":{"log_type":"cron","status":"start"},"timestamp":"2020-10-08T09:25:36.916Z"},"message":"{"message":{"log_type":"cron","status":"start"},"level":"info","timestamp":"2020-10-08T09:25:36.916Z"}","stream":"stdout"},
Private:file.State{Id:"native::30998361-66306", PrevId:"",
Finished:false, Fileinfo:(*os.fileStat)(0xc001c14dd0),
Source:"/var/log/containers/stream-hatchet-cron-manual-c5r-7cx5d_default_stream-hatchet-cron-4278d956fff8641048efeaec23b383b41f2662773602c3a7daffe7c30f62fe5a.log",
Offset:539, Timestamp:time.Time{wall:0xbfd7d4a1e556bd72,
ext:916563812286, loc:(*time.Location)(0x607c540)}, TTL:-1,
Type:"container", Meta:map[string]string(nil),
FileStateOS:file.StateOS{Inode:0x1d8ff59, Device:0x10302},
IdentifierName:"native"}, TimeSeries:false}, Flags:0x1,
Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=400):
{"type":"mapper_parsing_exception","reason":"failed to parse field
[log.message] of type [keyword] in document with id
'56aHB3UBLgYb8gz801DI'. Preview of field's value: '{log_type=cron,
status=start}'","caused_by":{"type":"illegal_state_exception","reason":"Can't
get text on a START_OBJECT at 1:113"}}
It throws an error because apparently log.message is of type "keyword"; however, this does not exist in the index mapping.
I thought this might be an issue with "target": "log", so I've tried changing it to something arbitrary like "my_parsed_message", "m_log", or "mlog", and I get the same error for all of them.
{"type":"mapper_parsing_exception","reason":"failed to parse field
[mlog.message] of type [keyword] in document with id
'J5KlDHUB_yo5bfXcn2LE'. Preview of field's value: '{log_type=cron,
status=end}'","caused_by":{"type":"illegal_state_exception","reason":"Can't
get text on a START_OBJECT at 1:217"}}
Elastic version: 7.9.2
The problem is that some of your JSON messages contain a message field that is sometimes a simple string and other times a nested JSON object (like in the case you're showing in your question).
After this index was created, the very first message that was parsed was probably a string and hence the mapping has been modified to add the following field (line 10553):
"mlog": {
"properties": {
...
"message": {
"type": "keyword",
"ignore_above": 1024
},
}
}
You'll find the same pattern for my_parsed_message (line 10902), my_parsed_logs (line 10742), etc...
Hence the next message that comes with message being a JSON object, like
{"message":{"log_type":"cron","status":"start"}, ...
will not work because it's an object, not a string...
Looking at the fields of your custom JSON, it seems you don't really have control over either their taxonomy (i.e. naming) or what they contain...
If you're serious about wanting to search within those custom fields (which I think you are, since you're parsing the field; otherwise you'd just store the stringified JSON), then I can only suggest starting to figure out a proper taxonomy in order to make sure that they all get a standard type.
If all you care about is logging your data, then I suggest simply disabling the indexing of that message field. Another solution is to set dynamic: false in your mapping so those fields are ignored, i.e. your mapping is not modified. A sketch of the first option is below.
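For illustration, a minimal mapping sketch, assuming you manage the mapping yourself (with Filebeat this would normally go into the index template); my-index and the field path are placeholders matching the example in the question. With enabled: false, Elasticsearch stores the field in _source (whether it is a string or an object) but never tries to index or parse it:

```json
PUT my-index
{
  "mappings": {
    "properties": {
      "log": {
        "properties": {
          "message": {
            "type": "object",
            "enabled": false
          }
        }
      }
    }
  }
}
```

The dynamic: false variant would instead set "dynamic": false on the enclosing object, so new sub-fields are accepted and stored but never added to the mapping or indexed.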
How should I design a RESTful API PATCH operation that supports updating a property of a list item based on a condition?
Say I have the following JSON model:
{
  "key1": "value",
  "key2": "value",
  "list": [
    {
      "property": "someValue",
      "toBePatched": "value"
    },
    {
      "property": "otherValue",
      "toBePatched": "value"
    }
  ]
}
I need to patch the "toBePatched" property of the list item whose "property" equals "someValue". Looking at JSON Patch here, I think it is a good way to go, but I don't think JSON Pointer supports queries, so how should I define a path like "/list/property=someValue/toBePatched"?
One crude way would be to pass the condition as a query parameter to the API and put some logic around it, but I don't think that's a standard way to do it.
[
{ "op": "test", "path": "/list/0/property", "value": "someValue"},
{ "op": "test", "path": "/list/0/toBePatched", "value": "value"},
{ "op": "replace", "path": "/list/0/toBePatched", "value": "the-new-value"}
]
The test op is important: it lets you verify that the server hasn't changed the part of the document that you intend to change. See section 5 on Error Handling for details.
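As an illustration, a minimal sketch of applying such a patch on the server side, assuming the javax.json (JSON-P) API with an implementation such as org.glassfish:javax.json on the classpath; the document literal is abbreviated from the model above:

```java
import java.io.StringReader;
import javax.json.Json;
import javax.json.JsonObject;
import javax.json.JsonPatch;

public class PatchDemo {
    public static void main(String[] args) {
        // Abbreviated version of the document from the question
        JsonObject doc = Json.createReader(new StringReader(
                "{\"list\":[{\"property\":\"someValue\",\"toBePatched\":\"value\"}]}"))
                .readObject();

        // test guards the replace: if the item at index 0 no longer matches,
        // the whole patch fails instead of overwriting the wrong element
        JsonPatch patch = Json.createPatchBuilder()
                .test("/list/0/property", "someValue")
                .replace("/list/0/toBePatched", "the-new-value")
                .build();

        System.out.println(patch.apply(doc));
    }
}
```

The client resolves the index (0 here) before building the patch; the test op then makes the request safe against concurrent modification of that element.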
When I search with only birthdate in FHIR I am getting results.
For example: http://localhost:8080/hapi-fhir-jpaserver/fhir/Patient?_pretty=true&birthdate=2020-03-16 will return patients whose birthdate is 2020-03-16.
When I search with _content I am not getting any results, e.g.:
http://localhost:8080/hapi-fhir-jpaserver/fhir/Patient?_content=2019-09-05
_content is for searching text content.
If you want to search for dates, you need to use a date search parameter, e.g.:
http://localhost:8080/hapi-fhir-jpaserver/fhir/Patient?birthdate=2019-09-05
This can be achieved using Search Parameters.
Search parameters are essentially named paths within resources that are indexed by the system so that they can be used to find resources matching given criteria.
Using Search Parameters:
- we can add search parameters that index fields which do not have a standard search parameter defined;
- we can add search parameters that index extensions used by your clients;
- we can disable search parameters.
Example:
Let's say I have a PractitionerRole:
"resourceType": "PractitionerRole",
"id": "6639",
"meta": {
"versionId": "1",
"lastUpdated": "2020-03-19T13:26:34.748+05:30",
"source": "#aYyeIlv9Yutudiwy"
},
"text": {
"status": "generated",
"div": "<div xmlns=\"<http://www.w3.org/1999/xhtml\">foo</div>">
},
"active": true,
"practitioner": {
"reference": "Practitioner/6607"
},
"organization": {
"reference": "Organization/6528"
},
"specialty": [
{
"coding": [
{
"system": "<http://snomed.info/sct",>
"code": "42343242",
"display": "Clinical immunology"
}
]
}
]
}
PractitionerRole has its own search parameters. Apart from those, we want a search parameter that filters all practitioner roles based on practitioner.reference. We can achieve this using a custom SearchParameter; all we need to do is create one just like below.
{
  "resourceType": "SearchParameter",
  "title": "Practitioner Reference",
  "base": [ "PractitionerRole" ],
  "status": "active",
  "code": "practitioner_reference",
  "type": "token",
  "expression": "PractitionerRole.practitioner.reference",
  "xpathUsage": "normal"
}
This tells FHIR that when a user filters by practitioner_reference, it should evaluate PractitionerRole.practitioner.reference.
The resulting query looks something like this:
http://localhost:8080/hapi-fhir-jpaserver/fhir/PractitionerRole?practitioner_reference=Practitioner/6607
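A note on registration (an assumption about the workflow, not shown above): the SearchParameter resource itself is created through the normal REST API, e.g. POST http://localhost:8080/hapi-fhir-jpaserver/fhir/SearchParameter with the JSON above as the body; on HAPI FHIR, resources stored before the parameter existed may also need to be reindexed before the new parameter matches them.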
We can also extend this to search across multiple fields by creating a search parameter whose expression ORs several paths together, so one parameter can match any of them.
"resourceType": "SearchParameter",
"title": "Patient Multi Search",
"base": [ "Patient" ],
"status": "active",
"code": "pcontent",
"type": "token",
"expression": "Patient.managingOrganization.reference|Patient.birthDate|Patient.address[0].city",
"xpathUsage": "normal"
}
The above SearchParameter will search with Patient.managingOrganization.reference or Patient.birthDate or Patient.address[0].city.
The query looks like this:
Search With City → http://localhost:8080/hapi-fhir-jpaserver/fhir/Patient?pcontent=Bruenmouth
Search With Birth Date → http://localhost:8080/hapi-fhir-jpaserver/fhir/Patient?pcontent=2019-04-06
I don't know how to handle nested data with admin-on-rest.
My GET request returns the full object without additional calls for filters and thumbnails (see below).
Example object:
{
  "id": "58bd633e4b77c718e63bf931",
  "title": "Project A",
  "description": "Blabla",
  "image": "https://placeholdit.imgix.net/~text?txtsize=33&txt=350%C3%97150&w=350&h=150",
  "disable": false,
  "filters": [
    {
      "id": "58c662aa4ea73e3d4373dad7",
      "filterValue": {
        "label": "Filter value",
        "color": "#0094d8",
        "id": "58c7999162700623b4aac559"
      },
      "isMain": true
    }
  ],
  "thumbnails": [
    {
      "id": "58bfeac780021c56cc71bfac",
      "image": "http://lorempixel.com/1024/768/",
      "description": "Bla",
      "device": "desktop"
    },
    {
      "id": "58bfeacf80021c56cc71bfad",
      "image": "http://lorempixel.com/800/600/",
      "description": "Bla",
      "device": "laptop"
    }
  ]
}
My first idea was to create custom input components, but I don't know if that's the best solution. Any ideas or examples?
Admin-on-rest relies on redux-form, which supports nested attributes. Just set the source of your input as the path to the nested property, with dot separator:
<TextInput source="foo.bar" />
For your filters and thumbnails, you'll have to use redux-form's <Fields> component and create a custom input component with it, roughly as sketched below.
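A rough sketch of that idea, assuming redux-form's <Fields> API; the field names and rendering are illustrative only, and a real component would also handle meta (errors, touched state):

```jsx
import React from 'react';
import { Fields } from 'redux-form';

// Fields passes props nested to match the shape of `names`:
// each leaf holds { input, meta } as with a single Field
const renderFilter = fields => (
    <div>
        <input {...fields.filters[0].filterValue.label.input} placeholder="Label" />
        <input {...fields.filters[0].filterValue.color.input} placeholder="Color" />
    </div>
);

export const FilterInput = () => (
    <Fields
        names={['filters[0].filterValue.label', 'filters[0].filterValue.color']}
        component={renderFilter}
    />
);
```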