I'm having the following issue with Elasticsearch 7 when trying to create a template.
I'm copying a template from Elasticsearch 6 to 7, and I have already removed some of the fields that Elasticsearch 7 no longer accepts, but I still get this error:
{
"error": {
"root_cause": [
{
"type": "mapper_parsing_exception",
"reason": "Root mapping definition has unsupported parameters: [events : {properties={msg={fields={raw={type=keyword}}}, requestId={type=keyword}, logger={type=keyword}, host={type=keyword}, jwtOwner={type=keyword}, requestOriginator={type=keyword}, tag={analyzer=firsttoken, fields={disambiguator={analyzer=keyword, type=text}}}, jwtAuthenticatedUser={type=keyword}, thread={type=keyword}, requestChainOriginator={type=keyword}, revision={type=keyword}}}]"
}
],
"type": "mapper_parsing_exception",
"reason": "Failed to parse mapping [_doc]: Root mapping definition has unsupported parameters: [events : {properties={msg={fields={raw={type=keyword}}}, requestId={type=keyword}, logger={type=keyword}, host={type=keyword}, jwtOwner={type=keyword}, requestOriginator={type=keyword}, tag={analyzer=firsttoken, fields={disambiguator={analyzer=keyword, type=text}}}, jwtAuthenticatedUser={type=keyword}, thread={type=keyword}, requestChainOriginator={type=keyword}, revision={type=keyword}}}]",
"caused_by": {
"type": "mapper_parsing_exception",
"reason": "Root mapping definition has unsupported parameters: [events : {properties={msg={fields={raw={type=keyword}}}, requestId={type=keyword}, logger={type=keyword}, host={type=keyword}, jwtOwner={type=keyword}, requestOriginator={type=keyword}, tag={analyzer=firsttoken, fields={disambiguator={analyzer=keyword, type=text}}}, jwtAuthenticatedUser={type=keyword}, thread={type=keyword}, requestChainOriginator={type=keyword}, revision={type=keyword}}}]"
}
},
"status": 400
}
Mapping template: The following is the template I'm trying to post.
POST _template/logstash
{
"order" : 0,
"index_patterns" : [
"logstash*"
],
"settings" : {
"index" : {
"analysis" : {
"filter" : {
"firsttoken" : {
"type" : "pattern_capture",
"preserve_original" : "false",
"patterns" : [
"""^([^\.]*)\.?.*$"""
]
},
"secondtoken" : {
"type" : "pattern_capture",
"preserve_original" : "false",
"patterns" : [
"""^[^\.]*\.([^\.]*)\.?.*$"""
]
},
"thirdtoken" : {
"type" : "pattern_capture",
"preserve_original" : "false",
"patterns" : [
"""^[^\.]*\.[^\.]*\.([^\.]*)\.?.*$"""
]
}
},
"analyzer" : {
"firsttoken" : {
"filter" : [
"firsttoken"
],
"tokenizer" : "keyword"
},
"secondtoken" : {
"filter" : [
"secondtoken"
],
"tokenizer" : "keyword"
},
"thirdtoken" : {
"filter" : [
"thirdtoken"
],
"tokenizer" : "keyword"
}
}
},
"mapper" : {
}
}
},
"mappings" : {
"events" : {
"properties" : {
"msg" : {
"type" : "text",
"fields" : {
"raw" : {
"type" : "keyword"
}
}
},
"requestId" : {
"type" : "keyword"
},
"logger" : {
"type" : "keyword"
},
"host" : {
"type" : "keyword"
},
"jwtOwner" : {
"type" : "keyword"
},
"requestOriginator" : {
"type" : "keyword"
},
"tag" : {
"analyzer" : "firsttoken",
"fields" : {
"disambiguator" : {
"analyzer" : "keyword",
"type" : "text"
}
}
},
"jwtAuthenticatedUser" : {
"type" : "keyword"
},
"thread" : {
"type" : "keyword"
},
"requestChainOriginator" : {
"type" : "keyword"
},
"revision" : {
"type" : "keyword"
}
}
}
},
"aliases" : { }
}
Please help me resolve the issue. Thanks in advance.
There are two issues. One is the issue mentioned by #OpsterESNinjaKamal,
but even with that fixed it still won't work, because the tag field has no type.
Here is the template that will work:
PUT _template/logstash
{
"order": 0,
"index_patterns": [
"logstash*"
],
"settings": {
"index": {
"analysis": {
"filter": {
"firsttoken": {
"type": "pattern_capture",
"preserve_original": "false",
"patterns": [
"^([^\\.]*)\\.?.*$"
]
},
"secondtoken": {
"type": "pattern_capture",
"preserve_original": "false",
"patterns": [
"^[^\\.]*\\.([^\\.]*)\\.?.*$"
]
},
"thirdtoken": {
"type": "pattern_capture",
"preserve_original": "false",
"patterns": [
"^[^\\.]*\\.[^\\.]*\\.([^\\.]*)\\.?.*$"
]
}
},
"analyzer": {
"firsttoken": {
"filter": [
"firsttoken"
],
"tokenizer": "keyword"
},
"secondtoken": {
"filter": [
"secondtoken"
],
"tokenizer": "keyword"
},
"thirdtoken": {
"filter": [
"thirdtoken"
],
"tokenizer": "keyword"
}
}
},
"mapper": {}
}
},
"mappings": {
"properties": {
"msg": {
"type": "text",
"fields": {
"raw": {
"type": "keyword"
}
}
},
"requestId": {
"type": "keyword"
},
"logger": {
"type": "keyword"
},
"host": {
"type": "keyword"
},
"jwtOwner": {
"type": "keyword"
},
"requestOriginator": {
"type": "keyword"
},
"tag": {
"type": "text", <--- add type here
"analyzer": "firsttoken",
"fields": {
"disambiguator": {
"analyzer": "keyword",
"type": "text"
}
}
},
"jwtAuthenticatedUser": {
"type": "keyword"
},
"thread": {
"type": "keyword"
},
"requestChainOriginator": {
"type": "keyword"
},
"revision": {
"type": "keyword"
}
}
},
"aliases": {}
}
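To double-check that the template was stored and actually gets applied, you can create a throwaway index matching the pattern and look at its mapping (the index name below is just an example that matches logstash*):

GET _template/logstash

PUT logstash-template-check

GET logstash-template-check/_mapping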
Notice your mappings. From version 7.0 onwards, ES no longer supports mapping types (events in this case); they have been deprecated and removed.
Post version 7.0, you would need to create a separate index for every type you had in the index prior to 7.0 (see the reindex sketch after the mapping below).
This link should help you with migrating from version 6.x to 7.x.
Basically, your mappings section would be as follows:
{
"mappings":{
"properties":{ <---- Notice there is no `events` before `properties` as mentioned in your question
"msg":{
"type":"text",
"fields":{
"raw":{
"type":"keyword"
}
}
},
"requestId":{
"type":"keyword"
},
"logger":{
"type":"keyword"
},
"host":{
"type":"keyword"
},
"jwtOwner":{
"type":"keyword"
},
"requestOriginator":{
"type":"keyword"
},
"tag":{
"analyzer":"firsttoken",
"fields":{
"disambiguator":{
"analyzer":"keyword",
"type":"text"
}
}
},
"jwtAuthenticatedUser":{
"type":"keyword"
},
"thread":{
"type":"keyword"
},
"requestChainOriginator":{
"type":"keyword"
},
"revision":{
"type":"keyword"
}
}
}
}
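If the old 6.x index actually held more than one type, one way to split it into per-type 7.x indices is a reindex per type. A minimal sketch with hypothetical index names (specifying a type in the reindex source is deprecated in 7.x but still accepted):

POST _reindex
{
  "source": {
    "index": "logstash-6x",
    "type": "events"
  },
  "dest": {
    "index": "logstash-events"
  }
}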
Sorry, Vol and Opster, I missed adding the events template. I had deleted the events type because it was not being accepted. The following is the template with the events type:
PUT _template/logstash
{
"order" : 0,
"index_patterns" : [
"logstash*"
],
"settings" : {
"index" : {
"analysis" : {
"filter" : {
"firsttoken" : {
"type" : "pattern_capture",
"preserve_original" : "false",
"patterns" : [
"""^([^\.]*)\.?.*$"""
]
},
"secondtoken" : {
"type" : "pattern_capture",
"preserve_original" : "false",
"patterns" : [
"""^[^\.]*\.([^\.]*)\.?.*$"""
]
},
"thirdtoken" : {
"type" : "pattern_capture",
"preserve_original" : "false",
"patterns" : [
"""^[^\.]*\.[^\.]*\.([^\.]*)\.?.*$"""
]
}
},
"analyzer" : {
"firsttoken" : {
"filter" : [
"firsttoken"
],
"tokenizer" : "keyword"
},
"secondtoken" : {
"filter" : [
"secondtoken"
],
"tokenizer" : "keyword"
},
"thirdtoken" : {
"filter" : [
"thirdtoken"
],
"tokenizer" : "keyword"
}
}
},
"mapper" : {
}
}
},
"mappings" : {
"events" : {
"properties" : {
"msg" : {
"type" : "text",
"fields" : {
"raw" : {
"type" : "keyword"
}
}
},
"requestId" : {
"type" : "keyword"
},
"logger" : {
"type" : "keyword"
},
"host" : {
"type" : "keyword"
},
"jwtOwner" : {
"type" : "keyword"
},
"requestOriginator" : {
"type" : "keyword"
},
"tag" : {
"analyzer" : "firsttoken",
"fields" : {
"disambiguator" : {
"analyzer" : "keyword",
"type" : "text"
}
},
"type" : "text"
},
"jwtAuthenticatedUser" : {
"type" : "keyword"
},
"thread" : {
"type" : "keyword"
},
"requestChainOriginator" : {
"type" : "keyword"
},
"revision" : {
"type" : "keyword"
}
}
}
},
"aliases" : { }
}
I am struggling to write a query for a Kibana dashboard where two criteria need to be met. My field name is test.keyword, and I need the results where both test A and test B have result.keyword (another field) as PASS.
{ "query": {
"match_phrase": {
"test.keyword": "EOL_Overall_test_result" }
}
}
So I need another criterion, test.keyword:"EOL_flash_app_fw", and both of these need to have result.keyword:"PASS".
{
"mte" : {
"mappings" : {
"properties" : {
"EESWVer" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"acdID" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"board" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"create" : {
"properties" : {
"board" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"device" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"reason" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"result" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"test" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"timeStamp" : {
"type" : "date"
}
}
},
"device" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"hostname" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"reason" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"result" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"test" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"timeStamp" : {
"type" : "date"
}
}
}
}
}
DOCUMENT SAMPLE
{
"_index": "mte",
"_type": "result",
"_id": "fY1Amn4BTPepfjg1c5x5",
"_version": 1,
"_score": 1,
"_source": {
"timeStamp": "2022-01-27T14:37:01+08:00",
"test": "EOL_Overall_test_result",
"hostname": "eol-test-0",
"acdID": "0x00A2F16A",
"EESWVer": "0.3.0",
"device": "",
"result": "PASS",
"reason": "0b001111111110011011111111",
"board": "JENMUL90"
},
"fields": {
"acdID.keyword": [
"0x00A2F16A"
],
"reason": [
"0b001111111110011011111111"
],
"device.keyword": [
""
],
"test": [
"EOL_Overall_test_result"
],
"test.keyword": [
"EOL_Overall_test_result"
],
"result.keyword": [
"PASS"
],
"EESWVer.keyword": [
"0.3.0"
],
"board.keyword": [
"JENMU90"
],
"result": [
"PASS"
],
"timeStamp": [
"2022-01-27T06:37:01.000Z"
],
"hostname": [
"eol-test-0"
],
"reason.keyword": [
"0b001111111110011011111111"
],
"acdID": [
"0x00A2F16A"
],
"EESWVer": [
"0.3.0"
],
"hostname.keyword": [
"eol-test-0"
],
"device": [
""
],
"board": [
"JENMUL90"
]
}
}
Can you try this query? As far as I understand, it should work the way you expect (not sure though, as the test field seems to contain only one single value per document):
{
"query": {
"bool": {
"filter": [
{
"terms": {
"test.keyword": [
"EOL_Overall_test_result",
"EOL_flash_app_fw"
]
}
},
{
"term": {
"result.keyword": "PASS"
}
}
]
}
}
}
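For what it's worth, here is one way to run it directly against the index from the mapping above (assuming the index is really named mte, as in the sample document):

curl -X GET "localhost:9200/mte/_search?pretty" -H 'Content-Type: application/json' -d'
{
  "query": {
    "bool": {
      "filter": [
        {
          "terms": {
            "test.keyword": [
              "EOL_Overall_test_result",
              "EOL_flash_app_fw"
            ]
          }
        },
        {
          "term": {
            "result.keyword": "PASS"
          }
        }
      ]
    }
  }
}'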
Consider a query such as this one:
{
"size": 200,
"query": {
"bool" : {
....
}
},
"sort": {
"_script" : {
"script" : {
"source" : "params._source.participants[0].participantEmail",
"lang" : "painless"
},
"type" : "string",
"order" : "desc"
}
}
}
This query works for almost every document, but some of them are not in their correct place. How can that be?
The order of the last documents looks like this (I'm displaying the first item of the participants array of each doc):
shiend#....
denys#...
Lynn#...
How is this possible? I have no idea where to look. Is the sort query wrong?
Settings:
"myindex" : {
"settings" : {
"index" : {
"refresh_interval" : "30s",
"number_of_shards" : "5",
"provided_name" : "myindex",
"creation_date" : "1600703588497",
"analysis" : {
"filter" : {
"english_keywords" : {
"keywords" : [
"example"
],
"type" : "keyword_marker"
},
"english_stemmer" : {
"type" : "stemmer",
"language" : "english"
},
"synonym" : {
"type" : "synonym",
"synonyms_path" : "analysis/UK_US_Sync_2.csv",
"updateable" : "true"
},
"english_possessive_stemmer" : {
"type" : "stemmer",
"language" : "possessive_english"
},
"english_stop" : {
"type" : "stop",
"stopwords" : "_english_"
},
"my_katakana_stemmer" : {
"type" : "kuromoji_stemmer",
"minimum_length" : "4"
}
},
"normalizer" : {
"custom_normalizer" : {
"filter" : [
"lowercase",
"asciifolding"
],
"type" : "custom",
"char_filter" : [ ]
}
},
"analyzer" : {
"somevar_english" : {
"filter" : [
"english_possessive_stemmer",
"lowercase",
"english_stop",
"english_keywords",
"english_stemmer",
"asciifolding",
"synonym"
],
"tokenizer" : "standard"
},
"myvar_chinese" : {
"filter" : [
"porter_stem"
],
"tokenizer" : "smartcn_tokenizer"
},
"myvar" : {
"filter" : [
"my_katakana_stemmer"
],
"tokenizer" : "kuromoji_tokenizer"
}
}
},
"number_of_replicas" : "1",
"uuid" : "d0LlBVqIQGSk4afEWFD",
"version" : {
"created" : "6081099",
"upgraded" : "6081299"
}
}
}
}
Mapping:
{
"myindex": {
"mappings": {
"doc": {
"dynamic_date_formats": [
"yyyy-MM-dd HH:mm:ss.SSS"
],
"properties": {
"all_fields": {
"type": "text"
},
"participants": {
"type": "nested",
"include_in_root": true,
"properties": {
"participantEmail": {
"type": "keyword",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256,
"normalizer": "custom_normalizer"
}
},
"copy_to": [
"all_fields"
]
},
"participantType": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256,
"normalizer": "custom_normalizer"
}
},
"copy_to": [
"all_fields"
]
}
}
}
}
}
}
}
}
EDIT: Maybe it's because the email Lynn#... starts with an uppercase letter?
Indeed, strings are sorted in lexicographical order, i.e. uppercase letters come before lowercase ones (and the other way around for a descending sort), which is why "Lynn#..." ends up after the lowercase emails.
What you can do is to lowercase all emails in your script:
"sort": {
"_script" : {
"script" : {
"source" : "params._source.participants[0].participantEmail.toLowerCase()",
"lang" : "painless"
},
"type" : "string",
"order" : "desc"
}
}
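As a side note, the mapping above already defines participants.participantEmail.keyword with the lowercase custom_normalizer, so a plain nested field sort might also be worth trying instead of the script (a sketch, untested; keep in mind that a nested sort uses the min/max email across all participants rather than strictly participants[0], so it only matches your current logic when the first participant is the relevant one):

"sort": {
  "participants.participantEmail.keyword": {
    "order": "desc",
    "nested": {
      "path": "participants"
    }
  }
}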
The data inserted in Elasticsearch is in Korean, so I cannot present the exact case, but let's say
I have a word ABBCC that has been tokenized as ["A","BBCC"] and another word AZZXXX tokenized as ["A","ZZXXX"].
If I search for ABBCC, shouldn't AZZXXX also come up, since they share a token? Or is this not how Elasticsearch works?
This is how I checked the analyzed words:
GET recpost_test/_analyze
{
"analyzer": "my_analyzer",
"text":"my query String!"
}
This is how I created my index:
PUT recpost
{
"settings": {
"index": {
"analysis": {
"tokenizer": {
"nori_user_dict": {
"type": "nori_tokenizer",
"decompound_mode": "mixed",
"user_dictionary": "userdict_ko.txt"
}
},
"analyzer": {
"my_analyzer": {
"type": "custom",
"tokenizer": "nori_user_dict"
}
},
"filter": {
"substring": {
"type": "edgeNGram",
"min_gram": 1,
"max_gram": 10
}
}
}
}
}
}
This is how I searched:
GET recpost/_search
{
"_source": [""],
"from": 0,
"size": 2,
"query":{
"multi_match": {
"query" : "my query String!",
"type": "best_fields",
"fields" : [
"brandkor",
"content",
"itemname",
"name",
"review",
"shortreview^2",
"title^3"]
}
}
}
EDIT:
I tried adding the "analyzer" field to the search, and it still doesn't work:
GET recpost/_search
{
"_source": [""],
"from": 0,
"size": 2,
"query":{
"multi_match": {
"query" : "깡스",
"analyzer": "my_analyzer",
"type": "best_fields",
"fields" : [
"brandkor",
"content",
"itemname",
"name",
"review",
"shortreview^2",
"title^3"]
}
}
}
EDIT2: This is my mapping:
{
"recpost_test" : {
"mappings" : {
"properties" : {
"#timestamp" : {
"type" : "date"
},
"brandkor" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"content" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"field_statistics" : {
"type" : "boolean"
},
"fields" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"itemname" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"name" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"offsets" : {
"type" : "boolean"
},
"payloads" : {
"type" : "boolean"
},
"positions" : {
"type" : "boolean"
},
"review" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"shortreview" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"term_statistics" : {
"type" : "boolean"
},
"title" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"type" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
}
}
}
}
}
I don't see that you mapped your fields to your analyzer in the index mapping.
So, as far as I can tell, you're indexing all of the fields (brandkor, content, etc.) as text with the default analyzer, and your my_analyzer settings are never applied.
Unless you associate each field with its analyzer, searches won't match on the nori tokens.
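As a sketch of what that could look like when (re)creating the index, using a couple of the field names from your search request as examples (untested; the analyzer of an existing field cannot be changed in place, so you would need to delete and recreate the index and then reindex your data):

PUT recpost
{
  "settings": {
    "index": {
      "analysis": {
        "tokenizer": {
          "nori_user_dict": {
            "type": "nori_tokenizer",
            "decompound_mode": "mixed",
            "user_dictionary": "userdict_ko.txt"
          }
        },
        "analyzer": {
          "my_analyzer": {
            "type": "custom",
            "tokenizer": "nori_user_dict"
          }
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "itemname": {
        "type": "text",
        "analyzer": "my_analyzer"
      },
      "title": {
        "type": "text",
        "analyzer": "my_analyzer"
      }
    }
  }
}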
I have a curl command to fetch data from nested ES documents by date.
Currently it is not working.
Refer to the following for the mapping:
{
"test" : {
"mappings" : {
"doc" : {
"properties" : {
"#timestamp" : {
"type" : "date"
},
"#version" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"_APIName" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"_parameters" : {
"properties" : {
"event" : {
"properties" : {
"body_json" : {
"properties" : {
"apps" : {
"properties" : {
"bundle" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"version" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
}
}
},
"model_name" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"serial_number" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"version" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
}
}
}
}
}
}
},
"_stackName" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"beat" : {
"type" : "object"
},
"category" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"log" : {
"properties" : {
"name" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
}
}
},
"log_name" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"message" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"offset" : {
"type" : "long"
},
"prospector" : {
"properties" : {
"type" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
}
}
},
"source" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"stack" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
}
}
}
}
}
}
The following is a sample document in ES.
{
"_index": "test",
"_type": "doc",
"_id": "odUvZFjNxoBJGtXhSoBA",
"_version": 1,
"_score": null,
"_source": {
"log.name": "information",
"offset": 8106321,
"prospector": {
"type": "log"
},
"#version": "1",
"beat": {},
"_stackName": "test",
"_APIName": "Information",
"category": "lambda",
"#timestamp": "2019-04-16T02:22:32.000Z",
"_parameters": {
"event": {
"body_json": {
"model_name": "model-01",
"serial_number": "1234567890",
"version": "1.2",
"apps": [
{
"name": "app1",
"version": "1.0.14"
},
{
"name": "app2",
"version": "1.0.15"
}
]
}
}
},
"stack": "test"
},
"fields": {
"#timestamp": [
"2019-04-16T02:22:32.000Z"
]
}
}
This is my curl command:
#!/bin/bash
curl -XGET "http://localhost:9200/test*/_search?pretty" -H 'Content-Type: application/json' -d' {
"query": {
"bool":{
"must":[
{
"range": {
"#timestamp": {
"gte": 1546837215000,
"lte": 1552712415000,
"format": "epoch_millis"
}
}
}
]
}
},
"aggs": {
"source_bucket": {
"nested": {
"path": "_source._parameters.event.body_json"
},
"aggs": {
"model_name": {
"terms": {
"script": {
"inline": "def model = doc['_source._parameters.event.body_json.model_name'].value;\n def serial = doc['_source._parameters.event.body_json.serial_number'].value;\nreturn \"model + serial\";",
"lang": "painless"
}
}
}
}
}
}
}'
As of now, it returns this error:
{
"error" : {
"root_cause" : [
{
"type" : "script_exception",
"reason" : "compile error",
"script_stack" : [
"def model = doc[_parameters.event.body_js ...",
" ^---- HERE"
],
"script" : "def model = doc[_parameters.event.body_json.model_name.keyword].value;\n def serial = doc[_parameters.event.body_json.serial_number.keyword].value;\nreturn model + serial;",
"lang" : "painless"
}
],
"type" : "search_phase_execution_exception",
"reason" : "all shards failed",
"phase" : "query",
"grouped" : true,
"failed_shards" : [
{
"shard" : 0,
"index" : "test",
"node" : "-OHA7hfMTBGqlTNwjOOngg",
"reason" : {
"type" : "script_exception",
"reason" : "compile error",
"script_stack" : [
"def model = doc[_parameters.event.body_js ...",
" ^---- HERE"
],
"script" : "def model = doc[_parameters.event.body_json.model_name.keyword].value;\n def serial = doc[_parameters.event.body_json.serial_number.keyword].value;\nreturn model + serial;",
"lang" : "painless",
"caused_by" : {
"type" : "illegal_argument_exception",
"reason" : "Variable [_parameters] is not defined."
}
}
}
]
},
"status" : 500
}
How can I get model_name and serial_number, concatenate them, and return the result?
Ok, you don't have any nested fields in your mapping (there is no "type": "nested" anywhere), so your query should look like this instead. Note the exists filter, which restricts the query to documents that actually have _parameters.event.body_json, and the script, which returns model + serial without quotes so the two values are actually concatenated (your curl command returned the quoted literal "model + serial"):
#!/bin/bash
curl -XGET "http://localhost:9200/test*/_search?pretty" -H 'Content-Type: application/json' -d'{
"query": {
"bool": {
"filter": [
{
"range": {
"#timestamp": {
"gte": 1546837215000,
"lte": 1552712415000,
"format": "epoch_millis"
}
}
},
{
"exists": {
"field": "_parameters.event.body_json"
}
}
]
}
},
"aggs": {
"model_name": {
"terms": {
"script": {
"source": "def model = doc['_parameters.event.body_json.model_name.keyword'].value;\n def serial = doc['_parameters.event.body_json.serial_number.keyword'].value;\nreturn model + serial;",
"lang": "painless"
}
}
}
}
}'
I am trying to execute the query below on ES 2.3.4. If I remove the inline script at the end, the query works as expected, but when I include the script, the query should return results yet doesn't. It is a Groovy script, and "bio" is a nested object. Can anyone verify the query and suggest any changes that are required?
{
"bool" : {
"must" : [ {
"nested" : {
"query" : {
"term" : {
"bio.cl" : "Position"
}
},
"path" : "bio"
}
}, {
"nested" : {
"query" : {
"terms" : {
"bio.type" : [ "SV" ]
}
},
"path" : "bio"
}
}, {
"nested" : {
"query" : {
"terms" : {
"bio.node" : [ "XX" ]
}
},
"path" : "bio"
}
}, {
"terms" : {
"domain" : [ "YY" ]
}
} ],
"filter" : [ {
"nested" : {
"query" : {
"term" : {
"bio.chromo" : 1
}
},
"path" : "bio"
}
}, {
"nested" : {
"query" : {
"range" : {
"bio.start" : {
"from" : null,
"to" : 1000140.0,
"include_lower" : true,
"include_upper" : true
}
}
},
"path" : "bio"
}
}, {
"nested" : {
"query" : {
"range" : {
"bio.stop" : {
"from" : 1000861.0,
"to" : null,
"include_lower" : true,
"include_upper" : true
}
}
},
"path" : "bio"
}
}, {
"script" : {
"script" : {
"inline" : "percent <= ([stop,_source.bio.stop.value].min() - [start,_source.bio.start.value].max())/[length,_source.bio.stop.value-_source.bio.start.value+1].max()",
"params" : {
"stop" : 1001100,
"start" : 999901,
"length" : 1200,
"percent" : 0.8
}
}
}
} ]
}
}
Mapping:
"mappings": {
"XX": {
"properties": {
"bio": {
"type": "nested",
"properties": {
"alt": {
"type": "string",
"index": "not_analyzed"
},
"ann": {
"type": "string",
"index": "not_analyzed"
},
"chromo": {
"type": "string",
"index": "not_analyzed"
},
"cod": {
"type": "string"
},
"conseq": {
"type": "string",
"index": "not_analyzed"
},
"contri": {
"type": "string",
"index": "not_analyzed"
},
"created": {
"type": "string",
"index": "not_analyzed"
},
"createdDate": {
"type": "date",
"format": "strict_date_optional_time"
},
"domain": {
"type": "string",
"index": "not_analyzed"
}"id": {
"type": "long"
},
"name": {
"type": "string",
"index": "not_analyzed"
},
"node": {
"type": "string",
"index": "not_analyzed"
},
"position": {
"type": "string",
"index": "not_analyzed"
},
"level": {
"type": "string",
"index": "not_analyzed"
},
"start": {
"type": "long"
},
"stop": {
"type": "long"
}
}
}
}
}
}
Sample document:
_source" : {
"id" : 25,
"bio" : [ {
"creation" : "2018-03-05T20:26:46.466Z",
"updateDate" : "2018-03-05T20:26:46.466Z",
"createdBy" : "XX",
"type" : "SV",
"creationDate" : "2018-03-05T20:26:46.472Z",
"updateDate" : "2018-03-05T20:26:46.521Z",
"createdBy" : "XX",
"updatedBy" : "XX",
"domain" : "YY",
"node" : "XX",
"ann" : "1.6",
"gen" : "37",
"level" : "Position",
"chromo" : "1",
"start" : 999901,
"stop" : 1001100
}]
}
Following up from our discussion in the comments above...
You need to concatenate the arrays correctly, i.e.
[stop] + _source.biomarkers.collect{it.stop}
will create an array like [stop, bio[0].stop, bio[1].stop, ...], and then we can take the min() (or max()) of that array as needed.
So I suggest something like this should work (untested though):
percent <= (([stop] + _source.biomarkers.collect{it.stop}).min() - ([start] + _source.biomarkers.collect{it.start}).max()) / ([length] +_source.biomarkers.collect{it.stop - it.start + 1}).max()