JEST Client is not working - elasticsearch

I have been stuck on this for a couple of days; any help will be highly appreciated.
How do we actually retrieve documents from the result object of the Jest client?
I am using the following code:
List<SearchResult.Hit<Part, Void>> hits = searchresult.getHits(Part.class);
for (SearchResult.Hit<Part, Void> hit : hits) {
    Part part = hit.source; // deserialized by Jest's default Gson mapper
}
In the Part POJO class I have a field attributes:
private Object attributes;
Here is the mapping of the attributes field; it is of object type:
"attributes": {
"properties": {
"AttrCatgId": {
"type": "long"
},
"AttrCatgText": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"AttrText": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"AttributeId": {
"type": "long"
}
}
}
Now when I retrieve the documents from the result object, the numeric values come back as doubles (.0 is appended by default):
{AttrText=3/4 TON, AttrCatgText=LOAD RATING, AttrCatgId=3.0, AttributeId=11.0}
How do I get the values in this format instead: AttrText=3/4 TON, AttrCatgText=LOAD RATING, AttrCatgId=3, AttributeId=11?
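For reference, here is a minimal sketch of the typed mapping I am considering instead of the plain Object field (PartAttribute is a hypothetical class name), on the assumption that Jest's default Gson mapper turns untyped JSON numbers into Double when the target type is Object:
// Hypothetical typed class for the entries of the "attributes" array. With
// concrete field types, Gson keeps AttrCatgId and AttributeId as longs
// instead of falling back to Double.
public class PartAttribute {
    private String AttrText;
    private String AttrCatgText;
    private long AttrCatgId;
    private long AttributeId;

    public String getAttrText() { return AttrText; }
    public String getAttrCatgText() { return AttrCatgText; }
    public long getAttrCatgId() { return AttrCatgId; }
    public long getAttributeId() { return AttributeId; }
}
In Part the field would then be declared as private List<PartAttribute> attributes;, since attributes is an array in the documents.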
Thanks for your help in advance.
Here is a sample document:
{
"catalogLineCode": "G12",
"supplierId": [
5904
],
"partId": 5278493,
"partGrpName": "Thermostat, Gasket & Housing",
"terminologyName": "Rear Right Wheel Cylinder",
"catalogName": "EPE",
"catId": [
10
],
"perCarQty": 1,
"catalogId": 1,
"partGrpId": 10,
"regionId": 1,
"partNumber": "12T1B",
"attributes": [
{
"AttrText": "3/4 TON",
"AttrCatgText": "LOAD RATING",
"AttrCatgId": 3,
"AttributeId": 11
},
{
"AttrText": "M ENG CODE",
"AttrCatgText": "ENG SOURCE CODE",
"AttrCatgId": 16,
"AttributeId": 111
},
{
"AttrText": "4 WHEEL/ALL WHEEL DRIVE",
"AttrCatgText": "DRIVE TYPE",
"AttrCatgId": 27,
"AttributeId": 168
}
],
"vehicleId": [
5274
],
"extendedDescription": "ALTERNATE TEMP - ALL MAKES-STD LINE~STD CAB - BASE",
"terminologyId": 20

Related

Elasticsearch - Logstash: filter on an aggregation created by Logstash in the Elasticsearch index?

I posted this question on the Elastic forums, but I thought I should try it here as well. The problem is as follows:
We have Elasticsearch with Logstash (version 8.2), inserting data into an Elasticsearch index from a JDBC source. In Logstash we use an aggregate filter. The config looks like this:
input {
jdbc {
jdbc_connection_string => "jdbc:oracle:thin:@wntstdb03.izaaksuite.nl:1521:wntstf2"
jdbc_user => "webnext_zaken"
jdbc_password => "webnext_zaken"
jdbc_driver_library => ""
jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
statement_filepath =>"/appl/sw/webnext/logstash/config_documenten/queries/documenten.sql"
last_run_metadata_path => "/appl/sw/webnext/logstash/config_documenten/parameters/.jdbc_last_run_doc"
}
}
# The filter part of this file is commented out to indicate that it is
# optional.
filter {
aggregate {
task_id => "%{zaakdoc_id}"
code => "
map['zaak_id'] ||= event.get('zaak_id')
map['result_type'] ||= event.get('result_type')
map['mutatiedatum'] ||= event.get('mutatiedatum')
map['oge_id'] ||= event.get('oge_id')
map['zaakidentificatie'] ||= event.get('zaakidentificatie')
map['zaakomschrijving'] ||= event.get('zaakomschrijving')
map['titel'] ||= event.get('titel')
map['beschrijving'] ||= event.get('beschrijving')
map['zaakdoc_id'] ||= event.get('zaakdoc_id')
map['groepsrollenlijst'] ||= []
map['groepsrollenlijst'] << {'groepsrol' => event.get('rol')}
event.cancel()
"
push_previous_map_as_event => true
timeout => 5
}
}
output {
# stdout { codec => rubydebug }
# file {
# path => ["/appl/sw/webnext/logstash/config_documenten/output/documenten.txt"]
# }
elasticsearch {
hosts => ["localhost:9200"]
index => "documenten"
document_id => "%{zaakdoc_id}"
}
}
The index config looks like this:
{
"documenten": {
"aliases": {
"izaaksuite": {}
},
"mappings": {
"properties": {
"#timestamp": {
"type": "date"
},
"#version": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"beschrijving": {
"type": "text"
},
"groepsrollenlijst": {
"properties": {
"groepsrol": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
},
"mutatiedatum": {
"type": "date"
},
"oge_id": {
"type": "text"
},
"result_type": {
"type": "text"
},
"rol": {
"type": "text"
},
"tags": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"titel": {
"type": "text"
},
"zaak_id": {
"type": "text"
},
"zaakdoc_id": {
"type": "long"
},
"zaakidentificatie": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"zaakomschrijving": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
},
"settings": {
"index": {
"routing": {
"allocation": {
"include": {
"_tier_preference": "data_content"
}
}
},
"number_of_shards": "1",
"provided_name": "documenten",
"creation_date": "1654158264412",
"number_of_replicas": "1",
"uuid": "bf4xj4TwQ-mP5K4Orc5HEA",
"version": {
"created": "8010399"
}
}
}
}
}
One document in the index that is eventually built looks like this:
{
"_index": "documenten",
"_id": "25066386",
"_version": 1,
"_seq_no": 33039,
"_primary_term": 6,
"found": true,
"_source": {
"groepsrollenlijst": [
{
"groepsrol": "7710_AFH1"
},
{
"groepsrol": "7710_AFH2"
},
{
"groepsrol": "MR_GRP1"
}
],
"zaak_id": 44973087,
"oge_id": 98,
"#version": "1",
"#timestamp": "2022-07-11T08:24:07.717572Z",
"zaakdoc_id": 25066386,
"zaakomschrijving": "testOSiZaakAOS",
"result_type": "doc",
"titel": "Test4",
"zaakidentificatie": "077215353",
"mutatiedatum": "2022-06-27T09:51:52.078119Z",
"beschrijving": "Test4"
}
}
As you can see, the "groepsrollenlijst" field is present. Now our problem: when searching, we need to match one of the values in groepsrollenlijst (Dutch for "group role list"; a group role is basically an authorisation within the application the data comes from) against the group roles of the user doing the search. This is to prevent users from getting data in their search results that they don't have access to.
Our Java code looks like this:
List<SearchResult> searchResults = new ArrayList<>();
SearchRequest searchRequest = new SearchRequest(index);
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
BoolQueryBuilder fieldsQuery = new BoolQueryBuilder();
/*
 * For each index, fetch the fields that can and may be searched. We cannot
 * search across all fields, because then there would also be hits on the
 * groepsrollenlijst (if you search for e.g. "rutten", hits are also found on
 * the group role "RUTTENGROEP", which you don't want). For documenten and
 * betrokkenen you also don't want hits on the zaakomschrijving, for example.
 */
String indexFields = index + "Fields";
indexFields = indexFields.substring(0, 1).toUpperCase() + indexFields.substring(1);
List<String> fields = getFieldsFor(indexFields);
// Add a query per field for the entered search text
HighlightBuilder highlightBuilder = new HighlightBuilder();
QueryStringQueryBuilder queryStringQueryBuilder = new QueryStringQueryBuilder(autoCompleteText);
for (String field : fields) {
queryStringQueryBuilder.field(field);
highlightBuilder.field(field);
}
fieldsQuery.should(queryStringQueryBuilder);
// Manipulate the roles for testing purposes
roles.clear();
roles.add("7710_AFH1");
roles.add("7710_AFH2");
BoolQueryBuilder rolesQuery = QueryBuilders.boolQuery();
for (String role : roles) {
rolesQuery.should(QueryBuilders.wildcardQuery("groepsrol", "*" + role + "*"));
}
LOG.info("Rollen medewerker: " + roles);
BoolQueryBuilder mainQuery = new BoolQueryBuilder();
mainQuery.must(new TermsQueryBuilder("oge_id", String.valueOf(ogeId)));
mainQuery.must(fieldsQuery);
mainQuery.must(rolesQuery);
searchSourceBuilder.query(mainQuery);
searchSourceBuilder.highlighter(highlightBuilder);
searchRequest.source(searchSourceBuilder);
searchRequest.validate();
// Execute search
LOG.info("Search query: {}", searchRequest.source().toString());
SearchResponse searchResponse = null;
try {
searchResponse = client.search(searchRequest, RequestOptions.DEFAULT);
} catch (IOException | ElasticsearchStatusException e) {
// TODO Auto-generated catch block
e.printStackTrace();
return;
}
if (searchResponse == null) {
return;
}
SearchHits hits = searchResponse.getHits();
For the test we hardcoded the user's group roles into the code.
The issue is that when we search for "testOSiZaakAOS" (one of the values in the document shown above), which should be a hit, we don't get a result. If we comment out the "mainQuery.must(rolesQuery);" part, we do get a result, but then the roles are not taken into account.
How do we go about fixing this? Say the user has role x; some documents in the index have key-value pairs for roles x, y and z, and some have only y and z.
Search should only show those where role x is present.
Basically at least one of the roles of the user should match one of the roles present in the document in the index.
Your help is greatly appreciated! Let me know if you need more info.
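For reference, here is a minimal sketch of the kind of role clause we have been experimenting with (assuming the same Java high-level REST client as above; the class and method names are hypothetical). One thing to note from the mapping shown earlier: the roles in the documents live under groepsrollenlijst.groepsrol, which has a .keyword sub-field, while the query above targets a top-level groepsrol field.
import java.util.List;
import org.elasticsearch.index.query.BoolQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

public class RoleFilterSketch {
    // Build a clause that matches documents where at least one of the user's
    // roles equals one of the values under groepsrollenlijst.groepsrol.
    // Field names are taken from the index mapping shown above; the .keyword
    // sub-field is used for exact matching.
    static BoolQueryBuilder buildRolesQuery(List<String> roles) {
        BoolQueryBuilder rolesQuery = QueryBuilders.boolQuery();
        for (String role : roles) {
            rolesQuery.should(
                QueryBuilders.termQuery("groepsrollenlijst.groepsrol.keyword", role));
        }
        rolesQuery.minimumShouldMatch(1); // at least one role must match
        return rolesQuery;
    }
}
This is only a sketch of the idea, not a verified fix.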

Gaussian constraint in `normfactor`

I would like to understand how to impose a Gaussian constraint with central value expected_yield and error expected_y_error on a normfactor modifier. I want to fit observed_data with a single sample MC_derived_sample. My goal is to extract the bu_y modifier such that the integral of MC_derived_sample scaled by bu_y is Gaussian-constrained to expected_yield +/- expected_y_error.
My present attempt employs the normsys modifier as follows:
spec = {
"channels": [
{
"name": "singlechannel",
"samples": [
{
"name": "constrained_template",
"data": MC_derived_sample*expected_yield, #expect normalisation around 1
"modifiers": [
{"name": "bu_y", "type": "normfactor", "data": None },
{"name": "bu_y_constr", "type": "normsys",
"data":
{"lo" : 1 - (expected_y_error/expected_yield),
"hi" : 1 + (expected_y_error/expected_yield)}
},
]
},
]
},
],
"observations": [
{
"name": "singlechannel",
"data": observed_data,
}
],
"measurements": [
{
"name": "sig_y_extraction",
"config": {
"poi": "bu_y",
"parameters": [
{"name":"bu_y", "bounds": [[(1 - (5*expected_y_error/expected_yield), 1+(5*expected_y_error/expected_yield)]], "inits":[1.]},
]
}
}
],
"version": "1.0.0"
}
My thinking is that normsys will introduce a Gaussian constraint about unity on the sample scaled by expected_yield.
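Spelling out what I think this does (assuming the usual HistFactory conventions, with mu = expected_yield, delta = expected_y_error, nu_b the scaled template in bin b, and alpha the normsys nuisance parameter constrained by a unit Gaussian on auxiliary data a = 0):
% sketch of the intended likelihood, written from the HistFactory conventions,
% not copied from the pyhf documentation
p(n, a \mid \texttt{bu\_y}, \alpha)
  = \prod_b \mathrm{Pois}\!\left(n_b \,\middle|\, \texttt{bu\_y}\,\eta(\alpha)\,\nu_b\right)
    \times \mathcal{N}(a = 0 \mid \alpha, 1),
\qquad \eta(0) = 1, \quad \eta(\pm 1) = 1 \pm \delta/\mu .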
Can you give me any feedback on whether this approach is correct?
In addition, suppose I wanted to include a staterror modifier for the Barlow-Beeston lite implementation, would this be the correct way of doing so?
"samples": [
{
"name": "constrained_template",
"data": MC_derived_sample*expected_yield, #expect normalisation around 1
"modifiers": [
{"name": "BB_lite_uncty", "type": "staterror", "data": np.sqrt(MC_derived_sample)*expected_yield }, #assume poisson error and scale by central value of constraint
{"name": "bu_y", "type": "normfactor", "data": None },
{"name": "bu_y_constr", "type": "normsys",
"data":
{"lo" : 1 - (expected_y_error/expected_yield),
"hi" : 1 + (expected_y_error/expected_yield)}
},
]
}
]
Thanks a lot in advance for your help,
Blaise

elasticsearch filebeat mapper_parsing_exception when using decode_json_fields

I have ECK set up and I'm using Filebeat to ship logs from Kubernetes to Elasticsearch.
I've recently added the decode_json_fields processor to my configuration, so that I'm able to decode the JSON that is usually in the message field:
- decode_json_fields:
fields: ["message"]
process_array: false
max_depth: 10
target: "log"
overwrite_keys: true
add_error_key: true
However, logs have stopped appearing since adding it.
Example log:
{
"_index": "filebeat-7.9.1-2020.10.01-000001",
"_type": "_doc",
"_id": "wF9hB3UBtUOF3QRTBcts",
"_score": 1,
"_source": {
"#timestamp": "2020-10-08T08:43:18.672Z",
"kubernetes": {
"labels": {
"controller-uid": "9f3f9d08-cfd8-454d-954d-24464172fa37",
"job-name": "stream-hatchet-cron-manual-rvd"
},
"container": {
"name": "stream-hatchet-cron",
"image": "<redacted>.dkr.ecr.us-east-2.amazonaws.com/stream-hatchet:v0.1.4"
},
"node": {
"name": "ip-172-20-32-60.us-east-2.compute.internal"
},
"pod": {
"uid": "041cb6d5-5da1-4efa-b8e9-d4120409af4b",
"name": "stream-hatchet-cron-manual-rvd-bh96h"
},
"namespace": "default"
},
"ecs": {
"version": "1.5.0"
},
"host": {
"mac": [],
"hostname": "ip-172-20-32-60",
"architecture": "x86_64",
"name": "ip-172-20-32-60",
"os": {
"codename": "Core",
"platform": "centos",
"version": "7 (Core)",
"family": "redhat",
"name": "CentOS Linux",
"kernel": "4.9.0-11-amd64"
},
"containerized": false,
"ip": []
},
"cloud": {
"instance": {
"id": "i-06c9d23210956ca5c"
},
"machine": {
"type": "m5.large"
},
"region": "us-east-2",
"availability_zone": "us-east-2a",
"account": {
"id": "<redacted>"
},
"image": {
"id": "ami-09d3627b4a09f6c4c"
},
"provider": "aws"
},
"stream": "stdout",
"message": "{\"message\":{\"log_type\":\"cron\",\"status\":\"start\"},\"level\":\"info\",\"timestamp\":\"2020-10-08T08:43:18.670Z\"}",
"input": {
"type": "container"
},
"log": {
"offset": 348,
"file": {
"path": "/var/log/containers/stream-hatchet-cron-manual-rvd-bh96h_default_stream-hatchet-cron-73069980b418e2aa5e5dcfaf1a29839a6d57e697c5072fea4d6e279da0c4e6ba.log"
}
},
"agent": {
"type": "filebeat",
"version": "7.9.1",
"hostname": "ip-172-20-32-60",
"ephemeral_id": "6b3ba0bd-af7f-4946-b9c5-74f0f3e526b1",
"id": "0f7fff14-6b51-45fc-8f41-34bd04dc0bce",
"name": "ip-172-20-32-60"
}
},
"fields": {
"#timestamp": [
"2020-10-08T08:43:18.672Z"
],
"suricata.eve.timestamp": [
"2020-10-08T08:43:18.672Z"
]
}
}
In the Filebeat logs I can see the following error:
2020-10-08T09:25:43.562Z WARN [elasticsearch] elasticsearch/client.go:407 Cannot
index event
publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0x36b243a0,
ext:63737745936, loc:(*time.Location)(nil)}, Meta:null,
Fields:{"agent":{"ephemeral_id":"5f8afdba-39c3-4fb7-9502-be7ef8f2d982","hostname":"ip-172-20-32-60","id":"0f7fff14-6b51-45fc-8f41-34bd04dc0bce","name":"ip-172-20-32-60","type":"filebeat","version":"7.9.1"},"cloud":{"account":{"id":"700849607999"},"availability_zone":"us-east-2a","image":{"id":"ami-09d3627b4a09f6c4c"},"instance":{"id":"i-06c9d23210956ca5c"},"machine":{"type":"m5.large"},"provider":"aws","region":"us-east-2"},"ecs":{"version":"1.5.0"},"host":{"architecture":"x86_64","containerized":false,"hostname":"ip-172-20-32-60","ip":["172.20.32.60","fe80::af:9fff:febe:dc4","172.17.0.1","100.96.1.1","fe80::6010:94ff:fe17:fbae","fe80::d869:14ff:feb0:81b3","fe80::e4f3:b9ff:fed8:e266","fe80::1c19:bcff:feb3:ce95","fe80::fc68:21ff:fe08:7f24","fe80::1cc2:daff:fe84:2a5a","fe80::3426:78ff:fe22:269a","fe80::b871:52ff:fe15:10ab","fe80::54ff:cbff:fec0:f0f","fe80::cca6:42ff:fe82:53fd","fe80::bc85:e2ff:fe5f:a60d","fe80::e05e:b2ff:fe4d:a9a0","fe80::43a:dcff:fe6a:2307","fe80::581b:20ff:fe5f:b060","fe80::4056:29ff:fe07:edf5","fe80::c8a0:5aff:febd:a1a3","fe80::74e3:feff:fe45:d9d4","fe80::9c91:5cff:fee2:c0b9"],"mac":["02:af:9f:be:0d:c4","02:42:1b:56:ee:d3","62:10:94:17:fb:ae","da:69:14:b0:81:b3","e6:f3:b9:d8:e2:66","1e:19:bc:b3:ce:95","fe:68:21:08:7f:24","1e:c2:da:84:2a:5a","36:26:78:22:26:9a","ba:71:52:15:10:ab","56:ff:cb:c0:0f:0f","ce:a6:42:82:53:fd","be:85:e2:5f:a6:0d","e2:5e:b2:4d:a9:a0","06:3a:dc:6a:23:07","5a:1b:20:5f:b0:60","42:56:29:07:ed:f5","ca:a0:5a:bd:a1:a3","76:e3:fe:45:d9:d4","9e:91:5c:e2:c0:b9"],"name":"ip-172-20-32-60","os":{"codename":"Core","family":"redhat","kernel":"4.9.0-11-amd64","name":"CentOS
Linux","platform":"centos","version":"7
(Core)"}},"input":{"type":"container"},"kubernetes":{"container":{"image":"700849607999.dkr.ecr.us-east-2.amazonaws.com/stream-hatchet:v0.1.4","name":"stream-hatchet-cron"},"labels":{"controller-uid":"a79daeac-b159-4ba7-8cb0-48afbfc0711a","job-name":"stream-hatchet-cron-manual-c5r"},"namespace":"default","node":{"name":"ip-172-20-32-60.us-east-2.compute.internal"},"pod":{"name":"stream-hatchet-cron-manual-c5r-7cx5d","uid":"3251cc33-48a9-42b1-9359-9f6e345f75b6"}},"log":{"level":"info","message":{"log_type":"cron","status":"start"},"timestamp":"2020-10-08T09:25:36.916Z"},"message":"{"message":{"log_type":"cron","status":"start"},"level":"info","timestamp":"2020-10-08T09:25:36.916Z"}","stream":"stdout"},
Private:file.State{Id:"native::30998361-66306", PrevId:"",
Finished:false, Fileinfo:(*os.fileStat)(0xc001c14dd0),
Source:"/var/log/containers/stream-hatchet-cron-manual-c5r-7cx5d_default_stream-hatchet-cron-4278d956fff8641048efeaec23b383b41f2662773602c3a7daffe7c30f62fe5a.log",
Offset:539, Timestamp:time.Time{wall:0xbfd7d4a1e556bd72,
ext:916563812286, loc:(*time.Location)(0x607c540)}, TTL:-1,
Type:"container", Meta:map[string]string(nil),
FileStateOS:file.StateOS{Inode:0x1d8ff59, Device:0x10302},
IdentifierName:"native"}, TimeSeries:false}, Flags:0x1,
Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=400):
{"type":"mapper_parsing_exception","reason":"failed to parse field
[log.message] of type [keyword] in document with id
'56aHB3UBLgYb8gz801DI'. Preview of field's value: '{log_type=cron,
status=start}'","caused_by":{"type":"illegal_state_exception","reason":"Can't
get text on a START_OBJECT at 1:113"}}
It throws an error because apparently log.message is of type "keyword"; however, this field does not exist in the index mapping.
I thought this might be an issue with the "target": "log" setting, so I've tried changing it to something arbitrary like "my_parsed_message" or "m_log" or "mlog", and I get the same error for all of them.
{"type":"mapper_parsing_exception","reason":"failed to parse field
[mlog.message] of type [keyword] in document with id
'J5KlDHUB_yo5bfXcn2LE'. Preview of field's value: '{log_type=cron,
status=end}'","caused_by":{"type":"illegal_state_exception","reason":"Can't
get text on a START_OBJECT at 1:217"}}
Elastic version: 7.9.2
The problem is that some of your JSON messages contain a message field that is sometimes a simple string and other times a nested JSON object (as in the case you're showing in your question).
After this index was created, the very first message that was parsed was probably a string, and hence the mapping was modified to add the following field (line 10553):
"mlog": {
"properties": {
...
"message": {
"type": "keyword",
"ignore_above": 1024
},
}
}
You'll find the same pattern for my_parsed_message (line 10902), my_parsed_logs (line 10742), etc...
Hence the next message that comes in with message being a JSON object, like
{"message":{"log_type":"cron","status":"start"}, ...
will not work, because it's an object, not a string...
Looking at the fields of your custom JSON, it seems you don't really have control over either their taxonomy (i.e. naming) or what they contain...
If you're serious about searching within those custom fields (which I think you are, since you're parsing the field; otherwise you'd just store the stringified JSON), then I can only suggest figuring out a proper taxonomy to make sure they all get a standard type.
If all you care about is logging your data, then I suggest simply disabling the indexing of that message field. Another solution is to set dynamic: false in your mapping to ignore those fields, i.e. not modify your mapping.
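(In mapping terms, that roughly means either declaring the target object, e.g. mlog, with "enabled": false so its contents stay in _source but are not indexed, or setting "dynamic": false on it so unmapped sub-fields are simply ignored; which of the two fits depends on whether you still need to query any of the decoded fields.)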

Elasticsearch groovy script not working as expected

My partial mapping of an index listing in Elasticsearch 2.5 (I know I have to upgrade to a newer version and start using Painless; let's keep that aside for this question):
"name": { "type": "string" },
"los": {
"type": "nested",
"dynamic": "strict",
"properties": {
"start": { "type": "date", "format": "yyyy-MM" },
"max": { "type": "integer" },
"min": { "type": "integer" }
}
}
I have only one document in my storage and that is as follows:
{
"name": 'foobar',
"los": [{
"max": 12,
"start": "2018-02",
"min": 1
},
{
"max": 8,
"start": "2018-03",
"min": 3
},
{
"max": 10,
"start": "2018-04",
"min": 2
},
{
"max": 12,
"start": "2018-05",
"min": 1
}
]
}
I have a Groovy script in my Elasticsearch query as follows:
los_map = [doc['los.start'], doc['los.max'], doc['los.min']].transpose()
return los_map.size()
This Groovy script ALWAYS returns 0, which should not be possible: I have one document, as shown above (even if I add multiple documents, it still returns 0), and the los field is guaranteed to be present in every doc, with multiple objects in it. So it seems the transpose I am doing is not working correctly?
I also tried changing the line los_map = [doc['los.start'], doc['los.max'], doc['los.min']].transpose() to los_map = [doc['los'].start, doc['los'].max, doc['los'].min].transpose(), but then I get the error "No field found for [los] in mapping with types [listing]".
Does anyone have any idea how to get the transpose to work?
By the way, if you are curious, my complete script is as follows:
losMinMap = [:]
losMaxMap = [:]
los_map = [doc['los.start'], doc['los.max'], doc['los.min']].transpose()
los_map.each {st, mx, mn ->
losMinMap[st] = mn
losMaxMap[st] = mx
}
return los_map['2018-05']
Thank you in advance.

Filter objects in geojson based on a specific key

I am trying to edit a GeoJSON file to keep only the features that have the key "name".
The filter works, but I can't find a way to keep the rest of each object (specifically the geometry) and redirect the whole thing to a new GeoJSON file. Is there a way to output the whole object after filtering on one of its child objects?
Here is an example of my data. The first object has the "name" property and the second doesn't:
{
"features": [
{
"type": "Feature",
"id": "way/24824633",
"properties": {
"#id": "way/24824633",
"highway": "tertiary",
"lit": "yes",
"maxspeed": "50",
"name": "Rue de Kleinbettingen",
"surface": "asphalt"
},
"geometry": {
"type": "LineString",
"coordinates": [
[
5.8997935,
49.6467825
],
[
5.8972561,
49.6467445
]
]
}
},
{
"type": "Feature",
"id": "way/474396855",
"properties": {
"#id": "way/474396855",
"highway": "path"
},
"geometry": {
"type": "LineString",
"coordinates": [
[
5.8020608,
49.6907648
],
[
5.8020695,
49.6906054
]
]
}
}
]
}
Here is what I tried, using jq:
cat file.geojson | jq '.features[].properties | select(has("name"))'
The "geometry" is also a child of "features" but I can't find a way to make the selection directly from the "features" level. Is there some way to do that? Or a better path to the solution?
So, the required ouput is:
{
"type": "Feature",
"id": "way/24824633",
"properties": {
"#id": "way/24824633",
"highway": "tertiary",
"lit": "yes",
"maxspeed": "50",
"name": "Rue de Kleinbettingen",
"surface": "asphalt"
},
"geometry": {
"type": "LineString",
"coordinates": [
[
5.8997935,
49.6467825
],
[
5.8972561,
49.6467445
]
]
}
}
You can assign the filtered list back to .features:
jq '.features |= map(select(.properties|has("name")))'
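The |= update assignment keeps the surrounding object (and each feature's geometry) intact; only the features array is replaced by its filtered version. To write the result to a new file, redirect the output, e.g. jq '.features |= map(select(.properties|has("name")))' file.geojson > filtered.geojson.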
