Kibana does not search on nested field - elasticsearch

I'm working with Elasticsearch/Kibana and trying to search on a field in a nested object. However, it does not seem to work. Here's the mapping that I use in a template:
{
"order": 0,
"template": "ss7_signaling*",
"settings": {
"index": {
"mapping.total_fields.limit": 3000,
"number_of_shards": "5",
"refresh_interval": "30s"
},
"mappings": {
"_default_": {
"dynamic_templates": [
{
"string_fields": {
"mapping": {
"fielddata": {
"format": "disabled"
},
"index": "no",
"type": "string"
},
"match_mapping_type": "string",
"match": "*"
}
}
],
"properties": {
"message": {
"index": "not_analyzed",
"type": "string"
},
"Protocol": {
"index": "not_analyzed",
"type": "string"
},
"IMSI": {
"index": "not_analyzed",
"type": "string"
},
"nested": {
"type": "nested",
"properties": {
"name": {
"type": "string",
"index": "not_analyzed"
}
}
},
"Timestamp": {
"format": "strict_date_optional_time||epoch_millis",
"type": "date"
},
"@timestamp": {
"type": "date"
},
"@version": {
"index": "not_analyzed",
"type": "string"
}
},
"_all": {
"norms": false,
"enabled": false
}
}
},
"aliases": {
"signaling": {}
}
}
When I search in Kibana on single fields, everything works fine. However, I cannot search on nested fields like 'nested.name'.
Example of my query in Kibana: nested.name:hi
Thanks.

Kibana uses the query_string query under the hood, and the latter does not support querying on nested fields.
Support is still being worked on, but in the meantime you need to proceed differently.
UPDATE:
As of ES 7.6, it is now possible to search on nested fields.
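Before 7.6 the workaround is the dedicated nested query in the full query DSL, sent from the Dev Tools console or curl rather than the Kibana search bar. A sketch against the mapping above (the path and field names come from the template; the search value is the one from the question):

```json
{
  "query": {
    "nested": {
      "path": "nested",
      "query": {
        "term": {
          "nested.name": "hi"
        }
      }
    }
  }
}
```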

Related

Delete by Query API throws type [[_delete_by_query, trying to auto create mapping, but dynamic mapping is disabled]] missing]

I am using Elasticsearch version "1.7.1".
Whenever I run _delete_by_query, I get this error on all my indexes.
Here's the mapping for one of the indexes:
{
"composite_task_group": {
"mappings": {
"stock_take": {
"dynamic": "true",
"dynamic_templates": [
{
"strings": {
"mapping": {
"index": "not_analyzed",
"type": "string"
},
"match_mapping_type": "string"
}
}
],
"properties": {
"data": {
"properties": {
"attributes.container_id": {
"type": "string",
"index": "not_analyzed"
},
"attributes.container_label": {
"type": "string",
"index": "not_analyzed"
},
"attributes.container_type": {
"type": "string",
"index": "not_analyzed"
},
"attributes.ref_id": {
"type": "string",
"index": "not_analyzed"
},
"attributes.stock_take_id": {
"type": "string",
"index": "not_analyzed"
},
"attributes.stock_take_type": {
"type": "string",
"index": "not_analyzed"
},
"attributes.wid": {
"type": "string",
"index": "not_analyzed"
},
"client_id": {
"type": "string",
"index": "not_analyzed"
},
"createdAtEpoch": {
"type": "long"
},
"facility_id": {
"type": "string",
"index": "not_analyzed"
},
"id": {
"type": "string",
"index": "not_analyzed"
},
"resources.id": {
"type": "string",
"index": "not_analyzed"
},
"resources.type": {
"type": "string",
"index": "not_analyzed"
},
"status": {
"type": "string",
"index": "not_analyzed"
},
"tenant_id": {
"type": "string",
"index": "not_analyzed"
},
"updatedAtEpoch": {
"type": "long"
}
}
},
"id": {
"type": "string",
"index": "not_analyzed"
},
"tenant": {
"type": "string",
"index": "not_analyzed"
}
}
}
}
}
}
This is the _delete_by_query request that I am using to delete all documents that have the value "A" for facility_id:
{
"query": {
"query_string": {
"query": "A",
"default_field": "facility_id"
}
}
}
The same payload returns 600 documents via the _search API.
Where am I going wrong?
I think _delete_by_query wasn't available in version 1.7; it was only introduced as a core API in ES 5.0.
In your case, the correct way to delete by query would be:
curl -XDELETE 'http://xyz:9200/index_name/composite_task_group/_query' -d '{
"query" : {
...
}
}
'
The error you're seeing is typically thrown when you insert a new document with fields that are not yet in the index's mapping: Elasticsearch then tries to update the mapping dynamically, but dynamic mapping updates are disabled on that index, hence the statement "dynamic mapping is disabled".
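For reference, on ES 5.0 and later, where the _delete_by_query API does exist, the equivalent request would be a sketch like the following (host placeholder from the answer above, index name from the question):

```
curl -XPOST 'http://xyz:9200/composite_task_group/_delete_by_query' -d '{
  "query": {
    "query_string": {
      "query": "A",
      "default_field": "facility_id"
    }
  }
}'
```

On 6.x and later you would also need -H 'Content-Type: application/json'.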

ElasticSearch Logstash template

I would like to index the SMTP receive log of my Exchange Server with Elasticsearch, so I created a Logstash config file. It works very well, but all of my fields are strings instead of, for example, ip for the source and target server. So I tried to change the default mapping in the Logstash template:
I ran the command curl -XGET http://localhost:9200/_template/logstash?pretty > C:\temp\logstashTemplate.txt
Then I edited the text file and added my 'SourceIP' field:
{
"template": "logstash-*",
"settings": {
"index": {
"refresh_interval": "5s"
}
},
"mappings": {
"_default_": {
"dynamic_templates": [{
"message_field": {
"mapping": {
"fielddata": {
"format": "disabled"
},
"index": "analyzed",
"omit_norms": true,
"type": "string"
},
"match_mapping_type": "string",
"match": "message"
}
}, {
"string_fields": {
"mapping": {
"fielddata": {
"format": "disabled"
},
"index": "analyzed",
"omit_norms": true,
"type": "string",
"fields": {
"raw": {
"ignore_above": 256,
"index": "not_analyzed",
"type": "string"
}
}
},
"match_mapping_type": "string",
"match": "*"
}
}],
"_all": {
"omit_norms": true,
"enabled": true
},
"properties": {
"@timestamp": {
"type": "date"
},
"geoip": {
"dynamic": true,
"properties": {
"ip": {
"type": "ip"
},
"latitude": {
"type": "float"
},
"location": {
"type": "geo_point"
},
"longitude": {
"type": "float"
}
}
},
"@version": {
"index": "not_analyzed",
"type": "string"
},
"SourceIP": {
"type": "ip"
}
}
}
},
"aliases": {}
}
I uploaded the edited template with the command curl -XPUT http://localhost:9200/_template/logstash -d#C:\temp\logstash.template
Then I restarted the Elasticsearch server and deleted/re-created the index.
The 'SourceIP' field still did not change to type ip. What am I doing wrong? Can you please give me a hint? Thanks!
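Two things worth double-checking here, as a sketch: curl reads a request body from a file with -d @file (note the @), and index templates are only applied at index creation time, so the logstash-* index has to be re-created after the template is uploaded:

```
curl -XPUT 'http://localhost:9200/_template/logstash' -d @C:\temp\logstash.template
```

If the upload succeeded, GET /_template/logstash should show the SourceIP field with "type": "ip", and newly created logstash-* indices will pick it up.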

Recreation of mapping elastic search

I have created my index on Elasticsearch (and through Kibana as well) and have uploaded data. Now I want to change the mapping for the index and change some fields to not_analyzed. Below is the mapping I want to replace the existing one with, but when I run the command below I get an error:
{"error":{"root_cause":[{"type":"index_already_exists_exception","reason":"already
exists","index":"rettrmt"}],"type":"index_already_exists_exception","reason":"already
exists","index":"rettrmt"},"status":400}
Kindly help me resolve this.
curl -XPUT 'http://10.56.139.61:9200/rettrmt' -d '{
"rettrmt": {
"aliases": {},
"mappings": {
"RETTRMT": {
"properties": {
"@timestamp": {
"type": "date",
"format": "strict_date_optional_time||epoch_millis"
},
"@version": {
"type": "string"
},
"acid": {
"type": "string"
},
"actor_id": {
"type": "string",
"index": "not_analyzed"
},
"actor_type": {
"type": "string",
"index": "not_analyzed"
},
"channel_id": {
"type": "string",
"index": "not_analyzed"
},
"circle": {
"type": "string",
"index": "not_analyzed"
},
"cr_dr_indicator": {
"type": "string",
"index": "not_analyzed"
},
"host": {
"type": "string"
},
"message": {
"type": "string"
},
"orig_input_amt": {
"type": "double"
},
"path": {
"type": "string"
},
"r_cre_id": {
"type": "string"
},
"sub_use_case": {
"type": "string",
"index": "not_analyzed"
},
"tran_amt": {
"type": "double"
},
"tran_id": {
"type": "string"
},
"tran_particulars": {
"type": "string"
},
"tran_particulars_2": {
"type": "string"
},
"tran_remarks": {
"type": "string"
},
"tran_sub_type": {
"type": "string"
},
"tran_timestamp": {
"type": "date",
"format": "strict_date_optional_time||epoch_millis"
},
"tran_type": {
"type": "string"
},
"type": {
"type": "string"
},
"use_case": {
"type": "string",
"index": "not_analyzed"
}
}
}
},
"settings": {
"index": {
"creation_date": "1457331693603",
"uuid": "2bR0yOQtSqqVUb8lVE2dUA",
"number_of_replicas": "1",
"number_of_shards": "5",
"version": {
"created": "2000099"
}
}
},
"warmers": {}
}
}'
You first need to delete your index and then recreate it with the proper mapping. You're getting an index_already_exists_exception error because you're trying to create an index while the older index still exists, hence the conflict.
Run this first:
curl -XDELETE 'http://10.56.139.61:9200/rettrmt'
And then you can run your command again. Note that this will erase your data, so you will have to repopulate your index.
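Note also that the create-index body should not be wrapped in the index name, and read-only metadata such as creation_date and uuid cannot be set. A trimmed sketch of the re-creation call, showing only one of the not_analyzed fields from the question:

```
curl -XPUT 'http://10.56.139.61:9200/rettrmt' -d '{
  "settings": {
    "index": {
      "number_of_shards": "5",
      "number_of_replicas": "1"
    }
  },
  "mappings": {
    "RETTRMT": {
      "properties": {
        "actor_id": {
          "type": "string",
          "index": "not_analyzed"
        }
      }
    }
  }
}'
```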
Did you try something like this?
curl -XPUT 'http://10.56.139.61:9200/rettrmt/_mapping/RETTRMT' -d '
{
"properties": {
"actor_id": { // or whichever properties you want to add
"type": "string",
"index": "not_analyzed"
}
}
}
works for me

ElasticSearch term query vs query_string?

When I query my index with query_string, I get results.
But when I query using a term query, I don't get any results:
{
"query": {
"bool": {
"must": [],
"must_not": [],
"should": [
{
"query_string": {
"default_field": "Printer.Name",
"query": "HL-2230"
}
}
]
}
},
"from": 0,
"size": 10,
"sort": [],
"aggs": {}
}
I know that term is not_analyzed and query_string is analyzed, but Name is already "HL-2230", so why doesn't it match the term query? I also tried searching with "hl-2230" and still didn't get any results.
EDIT: The mapping looks like below. Printer is a child of Product; not sure if this makes a difference.
{
"state": "open",
"settings": {
"index": {
"creation_date": "1453816191454",
"number_of_shards": "5",
"number_of_replicas": "1",
"version": {
"created": "1070199"
},
"uuid": "TfMJ4M0wQDedYSQuBz5BjQ"
}
},
"mappings": {
"Product": {
"properties": {
"index": "not_analyzed",
"store": true,
"type": "string"
},
"ProductName": {
"type": "nested",
"properties": {
"Name": {
"store": true,
"type": "string"
}
}
},
"ProductCode": {
"type": "string"
},
"Number": {
"index": "not_analyzed",
"store": true,
"type": "string"
},
"id": {
"index": "no",
"store": true,
"type": "integer"
},
"ShortDescription": {
"store": true,
"type": "string"
},
"Printer": {
"_routing": {
"required": true
},
"_parent": {
"type": "Product"
},
"properties": {
"properties": {
"RelativeUrl": {
"index": "no",
"store": true,
"type": "string"
}
}
},
"PrinterId": {
"index": "no",
"store": true,
"type": "integer"
},
"Name": {
"store": true,
"type": "string"
}
}
},
"aliases": []
}
}
As per the mapping you provided above:
"Name": {
"store": true,
"type": "string"
}
Name is analyzed, so HL-2230 is split into two tokens (the standard analyzer also lowercases them: hl and 2230). That's why the term query is not working while query_string is: a term query searches for the exact term HL-2230, which is not in the index.
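The effect can be illustrated with a short sketch (an approximation for illustration, not Elasticsearch's actual analyzer code): the standard analyzer splits on non-alphanumeric characters and lowercases the result.

```python
import re

def standard_like_tokens(text):
    # Rough approximation of the standard analyzer:
    # split on non-alphanumeric characters, lowercase, drop empty tokens.
    return [t.lower() for t in re.split(r"[^0-9A-Za-z]+", text) if t]

print(standard_like_tokens("HL-2230"))  # ['hl', '2230']
```

A term query compares its input verbatim against these stored tokens, so "HL-2230" matches neither; query_string analyzes the input the same way as the field and therefore matches.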

Update ElasticSearch Mapping type without delete it

I have this mapping type on my Index.
{
"iotsens-summarizedmeasures": {
"mappings": {
"summarizedmeasure": {
"properties": {
"id": {
"type": "long"
},
"location": {
"type": "boolean"
},
"rawValue": {
"type": "string"
},
"sensorId": {
"type": "string"
},
"summaryTimeUnit": {
"type": "string"
},
"timestamp": {
"type": "date",
"format": "dateOptionalTime"
},
"value": {
"type": "string"
},
"variableName": {
"type": "string"
}
}
}
}
}
}
I want to update the sensorId field to:
"sensorId": {
"type": "string",
"index": "not_analyzed"
}
Is there any way to update the index without deleting and re-mapping it? I don't need to change the type of the field, I only want to set "index": "not_analyzed".
Thank you.
What you can do is turn your existing sensorId field into a multi-field, with a sub-field called raw that is not_analyzed:
curl -XPUT localhost:9200/iotsens-summarizedmeasures/_mapping/summarizedmeasure -d '{
"summarizedmeasure": {
"properties": {
"sensorId": {
"type": "string",
"fields": {
"raw": {
"type": "string",
"index": "not_analyzed"
}
}
}
}
}
}'
However, you still have to re-index your data to make sure all sensorId.raw sub-fields get created.
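Once the data is re-indexed, exact-match lookups can then target the sub-field with a term query. A sketch (the value here is a hypothetical sensor id, not from the question):

```json
{
  "query": {
    "term": {
      "sensorId.raw": "sensor-0042"
    }
  }
}
```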
