Elasticsearch - get docs with array field size greater than 1 - elasticsearch

I want to get all docs with a field, which is an array, with size greater than 1.
I'm using the following command:
curl -H "Content-Type: application/json" -XGET '127.0.0.1:9200/poly/_search?pretty' -d '
{ "query": { "bool": { "filter": { "script" : { "script" : "doc['emoji'].length > 1" } } } } }
'
but I get the following error:
{
"error" : {
"root_cause" : [
{
"type" : "script_exception",
"reason" : "compile error",
"script_stack" : [
"doc[emoji].length > 1",
" ^---- HERE"
],
"script" : "doc[emoji].length > 1",
"lang" : "painless",
"position" : {
"offset" : 4,
"start" : 0,
"end" : 21
}
}
],
"type" : "search_phase_execution_exception",
"reason" : "all shards failed",
"phase" : "query",
"grouped" : true,
"failed_shards" : [
{
"shard" : 0,
"index" : "poly",
"node" : "Dnp4BI4YSgCz-C-p8NmcTg",
"reason" : {
"type" : "query_shard_exception",
"reason" : "failed to create query: compile error",
"index_uuid" : "HlWRuJb5TY-L_2_9iyuVmg",
"index" : "poly",
"caused_by" : {
"type" : "script_exception",
"reason" : "compile error",
"script_stack" : [
"doc[emoji].length > 1",
" ^---- HERE"
],
"script" : "doc[emoji].length > 1",
"lang" : "painless",
"position" : {
"offset" : 4,
"start" : 0,
"end" : 21
},
"caused_by" : {
"type" : "illegal_argument_exception",
"reason" : "cannot resolve symbol [emoji]"
}
}
}
}
],
"caused_by" : {
"type" : "script_exception",
"reason" : "compile error",
"script_stack" : [
"doc[emoji].length > 1",
" ^---- HERE"
],
"script" : "doc[emoji].length > 1",
"lang" : "painless",
"position" : {
"offset" : 4,
"start" : 0,
"end" : 21
},
"caused_by" : {
"type" : "illegal_argument_exception",
"reason" : "cannot resolve symbol [emoji]"
}
}
},
"status" : 400
}
Nevertheless, the field "emoji" exists in my Elasticsearch docs, as you can see in the result of the following command:
curl -H "Content-Type: application/json" -XGET '127.0.0.1:9200/poly/_search?pretty' -d '
{
"_source": ["emoji"],
"query" : {
"constant_score" : {
"filter" : {
"exists" : {
"field" : "emoji"
}
}
}
}
}
'
Here is the result for the previous command:
"took" : 28,
"timed_out" : false,
"_shards" : {
"total" : 1,
"successful" : 1,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : {
"value" : 10000,
"relation" : "gte"
},
"max_score" : 1.0,
"hits" : [
{
"_index" : "poly",
"_type" : "_doc",
"_id" : "1307256887860174848",
"_score" : 1.0,
"_source" : {
"emoji" : [
"❤️"
]
}
},
{
"_index" : "poly",
"_type" : "_doc",
"_id" : "1278766523134414848",
"_score" : 1.0,
"_source" : {
"emoji" : [
"⏩"
]
}
},
{
"_index" : "poly",
"_type" : "_doc",
"_id" : "1298632385605431296",
"_score" : 1.0,
"_source" : {
"emoji" : [
"\uD83D\uDC47\uD83C\uDFFF",
"\uD83D\uDC47\uD83C\uDFFF",
"\uD83D\uDC47\uD83C\uDFFF",
"\uD83D\uDC47\uD83C\uDFFF"
]
}
},
{
"_index" : "poly",
"_type" : "_doc",
"_id" : "1300563120184651776",
"_score" : 1.0,
"_source" : {
"emoji" : [
"\uD83D\uDC4D",
"\uD83D\uDE00",
"\uD83D\uDE80"
]
}
}
]
}
}
Can someone tell me why I'm getting that error above?

You need to escape the single quotes inside the shell's single-quoted string. Check the corrected script and command below:
Script:
"script": {"script": "doc['\''emoji'\''].length > 1"}
Command:
curl -H "Content-Type: application/json" -XGET '127.0.0.1:9200/poly/_search?pretty' -d '{ "query": {"bool": {"filter": {"script": {"script": "doc['\''emoji'\''].length > 1"}}}}}'
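Why the escape works: inside a single-quoted shell string, the sequence '\'' closes the current quote, emits a literal apostrophe, and reopens the quote. You can verify the expansion locally before sending it to Elasticsearch:

```shell
# '\'' = close quote, escaped literal apostrophe, reopen quote.
# The shell therefore passes doc['emoji'].length > 1 to curl verbatim.
echo 'doc['\''emoji'\''].length > 1'
```

An alternative that sidesteps the quoting entirely is to put the request body in a file and pass it with -d @query.json.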

Related

Unable to delete index pattern in Kibana

I have moved from ELK 7.9 to ELK 7.15 in an attempt to solve this problem and it looks like all that effort was of no use. I am still unable to delete the index pattern in Kibana, neither through the console, nor through the GUI.
Trying to delete it from the Saved Objects section simply gets stuck forever.
Deleting it from the index patterns page temporarily shows that the index pattern has been deleted, but once you reload the page, it comes back again.
The Elasticsearch index itself does get deleted successfully, however.
Trying it all from the devtools also does not seem to work. Some of the attempts made and the respective outputs are shown below:
GET .kibana/_search
{
"_source": ["index-pattern.title"],
"query": {
"term": {
"type": "index-pattern"
}
}
}
{
"took" : 2,
"timed_out" : false,
"_shards" : {
"total" : 1,
"successful" : 1,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : {
"value" : 1,
"relation" : "eq"
},
"max_score" : 4.0535226,
"hits" : [
{
"_index" : ".kibana_7.15.0_001",
"_type" : "_doc",
"_id" : "index-pattern:822d7f50-7cd9-11ec-95d5-a3730f55fd30",
"_score" : 4.0535226,
"_source" : {
"index-pattern" : {
"title" : "all-security-bugs-from-jira"
}
}
}
]
}
}
DELETE /index_patterns/index_pattern/index-pattern:822d7f50-7cd9-11ec-95d5-a3730f55fd30
{
"error" : {
"root_cause" : [
{
"type" : "index_not_found_exception",
"reason" : "no such index [index_patterns]",
"resource.type" : "index_expression",
"resource.id" : "index_patterns",
"index_uuid" : "_na_",
"index" : "index_patterns"
}
],
"type" : "index_not_found_exception",
"reason" : "no such index [index_patterns]",
"resource.type" : "index_expression",
"resource.id" : "index_patterns",
"index_uuid" : "_na_",
"index" : "index_patterns"
},
"status" : 404
}
DELETE index-pattern:822d7f50-7cd9-11ec-95d5-a3730f55fd30
{
"error" : {
"root_cause" : [
{
"type" : "index_not_found_exception",
"reason" : "no such index [index-pattern:822d7f50-7cd9-11ec-95d5-a3730f55fd30]",
"resource.type" : "index_or_alias",
"resource.id" : "index-pattern:822d7f50-7cd9-11ec-95d5-a3730f55fd30",
"index_uuid" : "_na_",
"index" : "index-pattern:822d7f50-7cd9-11ec-95d5-a3730f55fd30"
}
],
"type" : "index_not_found_exception",
"reason" : "no such index [index-pattern:822d7f50-7cd9-11ec-95d5-a3730f55fd30]",
"resource.type" : "index_or_alias",
"resource.id" : "index-pattern:822d7f50-7cd9-11ec-95d5-a3730f55fd30",
"index_uuid" : "_na_",
"index" : "index-pattern:822d7f50-7cd9-11ec-95d5-a3730f55fd30"
},
"status" : 404
}
DELETE 822d7f50-7cd9-11ec-95d5-a3730f55fd30
{
"error" : {
"root_cause" : [
{
"type" : "index_not_found_exception",
"reason" : "no such index [822d7f50-7cd9-11ec-95d5-a3730f55fd30]",
"resource.type" : "index_or_alias",
"resource.id" : "822d7f50-7cd9-11ec-95d5-a3730f55fd30",
"index_uuid" : "_na_",
"index" : "822d7f50-7cd9-11ec-95d5-a3730f55fd30"
}
],
"type" : "index_not_found_exception",
"reason" : "no such index [822d7f50-7cd9-11ec-95d5-a3730f55fd30]",
"resource.type" : "index_or_alias",
"resource.id" : "822d7f50-7cd9-11ec-95d5-a3730f55fd30",
"index_uuid" : "_na_",
"index" : "822d7f50-7cd9-11ec-95d5-a3730f55fd30"
},
"status" : 404
}
GET /.kibana?pretty
does not return any document with the index pattern concerned. This was also confirmed by the two queries below:
GET .kibana/index-pattern/822d7f50-7cd9-11ec-95d5-a3730f55fd30
{
"_index" : ".kibana_7.15.0_001",
"_type" : "index-pattern",
"_id" : "822d7f50-7cd9-11ec-95d5-a3730f55fd30",
"found" : false
}
GET .kibana/index-pattern/index-pattern:822d7f50-7cd9-11ec-95d5-a3730f55fd30
{
"_index" : ".kibana_7.15.0_001",
"_type" : "index-pattern",
"_id" : "index-pattern:822d7f50-7cd9-11ec-95d5-a3730f55fd30",
"found" : false
}
I have been trying to follow the suggestions on https://discuss.elastic.co/t/cant-delete-index-pattern-in-kibana/148341/5
Any help understanding what could I be doing wrong here is greatly appreciated.
You are not deleting from the right index. Run the following to delete it:
DELETE .kibana/_doc/index-pattern:822d7f50-7cd9-11ec-95d5-a3730f55fd30
To fetch the document, you were using the wrong type; try the following instead:
GET .kibana/_doc/index-pattern:822d7f50-7cd9-11ec-95d5-a3730f55fd30
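As a side note, Kibana 7.x also exposes a Saved Objects HTTP API, which is generally safer than writing to the .kibana system index directly. A sketch of the equivalent delete, assuming Kibana listens on localhost:5601 (the kbn-xsrf header is required for write requests):

```
curl -X DELETE 'localhost:5601/api/saved_objects/index-pattern/822d7f50-7cd9-11ec-95d5-a3730f55fd30' -H 'kbn-xsrf: true'
```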

Query with match to get all values for a given field - Elasticsearch

I'm pretty new to Elasticsearch and would like to write a query that returns all the values of a specific field. Say I have the fields "Number" and "change_manager_group": is there a query to list all the Numbers for which "change_manager_group" = "Change Managers - 2"?
{
"took" : 2,
"timed_out" : false,
"_shards" : {
"total" : 10,
"successful" : 10,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : 1700,
"max_score" : 1.0,
"hits" : [
{
"_index" : "test-tem-changes",
"_type" : "_doc",
"_id" : "CHG0393073_1554800400000",
"_score" : 1.0,
"_source" : {
"work_notes" : "",
"priority" : "4 - Low",
"planned_start" : 1554800400000,
"Updated_by" : "system",
"Updated" : 1554819333000,
"phase" : "Requested",
"Number" : "CHG0312373",
"change_manager_group" : "Change Managers - 1",
"approval" : "Approved",
"downtime" : "false",
"close_notes" : "",
"Standard_template_version" : "",
"close_code" : null,
"actual_start" : 1554819333000,
"closed_by" : "",
"Type" : "Normal"
}
},
{
"_index" : "test-tem-changes",
"_type" : "_doc",
"_id" : "CHG0406522_0",
"_score" : 1.0,
"_source" : {
"work_notes" : "",
"priority" : "4 - Low",
"planned_start" : 0,
"Updated_by" : "svcmdeploy_automation",
"Updated" : 1553320559000,
"phase" : "Requested",
"Number" : "CHG041232",
"change_manager_group" : "Change Managers - 2",
"approval" : "Approved",
"downtime" : "false",
"close_notes" : "Change Installed",
"Standard_template_version" : "",
"close_code" : "Successful",
"actual_start" : 1553338188000,
"closed_by" : "",
"Type" : "Automated"
}
},
{
"_index" : "test-tem-changes",
"_type" : "_doc",
"_id" : "CHG0406526_0",
"_score" : 1.0,
"_source" : {
"work_notes" : "",
"priority" : "4 - Low",
"planned_start" : 0,
"Updated_by" : "svcmdeploy_automation",
"Updated" : 1553321854000,
"phase" : "Requested",
"Number" : "CHG0412326",
"change_manager_group" : "Change Managers - 2",
"approval" : "Approved",
"downtime" : "false",
"close_notes" : "Change Installed",
"Standard_template_version" : "",
"close_code" : "Successful",
"actual_start" : 1553339629000,
"closed_by" : "",
"Type" : "Automated"
}
},
I tried this after a bit of googling, but it errors out:
curl -XGET "http://localhost:9200/test-tem-changes/_search?pretty=true" -H 'Content-Type: application/json' -d '
{
"query" : { "Number" : {"query" : "*"} }
}
'
What am I missing here?
To get all the documents where change_manager_group == "Change Managers - 2", you want to use a term query. Below, I am wrapping it in a filter context so that it is faster (it does not score relevance).
If change_manager_group is not mapped as a keyword field, you may have to use change_manager_group.keyword instead, depending on your mapping.
GET test-tem-changes/_search
{
"query": {
"bool": {
"filter": {
"term": {
"change_manager_group": "Change Managers - 2"
}
}
}
}
}
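If the goal is to list only the Number values rather than whole documents, _source filtering can be combined with the same filter. A sketch, assuming a change_manager_group.keyword sub-field exists in your mapping:

```
GET test-tem-changes/_search
{
  "_source": ["Number"],
  "query": {
    "bool": {
      "filter": {
        "term": {
          "change_manager_group.keyword": "Change Managers - 2"
        }
      }
    }
  }
}
```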

How to perform arithmetic operations on data from Elasticsearch

I need the average of CpuAverageLoad for a specific NodeType. For example, if I give the NodeType tpt, it should return the average CpuAverageLoad across all available tpt nodes. I tried different methods, but in vain...
My data in elasticsearch is below:
{
"took" : 5,
"timed_out" : false,
"_shards" : {
"total" : 5,
"successful" : 5,
"failed" : 0
},
"hits" : {
"total" : 4,
"max_score" : 1.0,
"hits" : [
{
"_index" : "kpi",
"_type" : "kpi",
"_id" : "\u0003",
"_score" : 1.0,
"_source" : {
"kpi" : {
"CpuAverageLoad" : 13,
"NodeId" : "kishan",
"NodeType" : "Tpt",
"State" : "online",
"Static_limit" : 0
}
}
},
{
"_index" : "kpi",
"_type" : "kpi",
"_id" : "\u0005",
"_score" : 1.0,
"_source" : {
"kpi" : {
"CpuAverageLoad" : 15,
"NodeId" : "kishan1",
"NodeType" : "tpt",
"State" : "online",
"Static_limit" : 0
}
}
},
{
"_index" : "kpi",
"_type" : "kpi",
"_id" : "\u0004",
"_score" : 1.0,
"_source" : {
"kpi" : {
"MaxLbCapacity" : "700000",
"NodeId" : "kishan2",
"NodeType" : "bang",
"OnlineCSCF" : [
"001",
"002"
],
"State" : "Online",
"TdbGroup" : 1,
"TdGroup" : 0
}
}
},
{
"_index" : "kpi",
"_type" : "kpi",
"_id" : "\u0002",
"_score" : 1.0,
"_source" : {
"kpi" : {
"MaxLbCapacity" : "700000",
"NodeId" : "kishan3",
"NodeType" : "bang",
"OnlineCSCF" : [
"001",
"002"
],
"State" : "Online",
"TdLGroup" : 1,
"TGroup" : 0
}
}
}
]
}
}
And my query is
curl -XGET 'localhost:9200/_search?pretty' -H 'Content-Type: application/json' -d'
{
"query": {
"bool" : {
"must" : {
"script" : {
"script" : {
"source" : "kpi[CpuAverageLoad].value > params.param1",
"lang" : "painless",
"params" : {
"param1" : 5
}
}
}
}
}
}
}'
but it is failing, as it is unable to parse the script source:
{
"error" : {
"root_cause" : [
{
"type" : "illegal_argument_exception",
"reason" : "[script] unknown field [source], parser not found"
}
],
"type" : "illegal_argument_exception",
"reason" : "[script] unknown field [source], parser not found"
},
"status" : 400
}
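Two things seem to be going on here. The error "[script] unknown field [source]" suggests an Elasticsearch version older than 5.6, where the script body key was "inline" rather than "source". More importantly, an average per NodeType does not need a script at all; the usual approach is an avg aggregation restricted by a query. A sketch, assuming the fields are queryable under the kpi. prefix shown in the documents above:

```
POST kpi/_search
{
  "size": 0,
  "query": {
    "term": { "kpi.NodeType": "tpt" }
  },
  "aggs": {
    "avg_cpu_load": {
      "avg": { "field": "kpi.CpuAverageLoad" }
    }
  }
}
```

Note that the sample documents mix "Tpt" and "tpt"; whether the term "tpt" matches both depends on whether the field's index-time analyzer lowercases values, so check the mapping.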

Why are my completion suggester options empty?

I'm currently trying to set up my suggestion implementation.
My index settings / mappings:
{
"settings" : {
"analysis" : {
"analyzer" : {
"trigrams" : {
"tokenizer" : "mesh_default_ngram_tokenizer",
"filter" : [ "lowercase" ]
},
"suggestor" : {
"type" : "custom",
"tokenizer" : "standard",
"char_filter" : [ "html_strip" ],
"filter" : [ "lowercase" ]
}
},
"tokenizer" : {
"mesh_default_ngram_tokenizer" : {
"type" : "nGram",
"min_gram" : "3",
"max_gram" : "3"
}
}
}
},
"mappings" : {
"default" : {
"properties" : {
"uuid" : {
"type" : "string",
"index" : "not_analyzed"
},
"language" : {
"type" : "string",
"index" : "not_analyzed"
},
"fields" : {
"properties" : {
"content" : {
"type" : "string",
"index" : "analyzed",
"analyzer" : "trigrams",
"fields" : {
"suggest" : {
"type" : "completion",
"analyzer" : "suggestor"
}
}
}
}
}
}
}
}
My query:
{
"suggest": {
"query-suggest" : {
"text" : "som",
"completion" : {
"field" : "fields.content.suggest"
}
}
},
"_source": ["fields.content", "uuid", "language"]
}
The query result:
{
"took" : 44,
"timed_out" : false,
"_shards" : {
"total" : 20,
"successful" : 20,
"failed" : 0
},
"hits" : {
"total" : 5,
"max_score" : 0.0,
"hits" : [ {
"_index" : "node-08c5d084d4e842b385d084d4e8a2b301-fe6212a62ad94590a212a62ad9759026-44874a2a8d2e4483874a2a8d2e44830c-draft",
"_type" : "default",
"_id" : "c6b7391075cc437ab7391075cc637a05-en",
"_score" : 0.0,
"_source" : {
"language" : "en",
"fields" : {
"content" : "This is<pre>another set of <strong>important</strong>content s<b>om</b>e text with more content you can poke a stick at"
},
"uuid" : "c6b7391075cc437ab7391075cc637a05"
}
}, {
"_index" : "node-08c5d084d4e842b385d084d4e8a2b301-fe6212a62ad94590a212a62ad9759026-44874a2a8d2e4483874a2a8d2e44830c-draft",
"_type" : "default",
"_id" : "96e2c6765b6841fea2c6765b6871fe36-en",
"_score" : 0.0,
"_source" : {
"language" : "en",
"fields" : {
"content" : "This is<pre>another set of <strong>important</strong>content no text with more content you can poke a stick at"
},
"uuid" : "96e2c6765b6841fea2c6765b6871fe36"
}
}, {
"_index" : "node-08c5d084d4e842b385d084d4e8a2b301-fe6212a62ad94590a212a62ad9759026-44874a2a8d2e4483874a2a8d2e44830c-draft",
"_type" : "default",
"_id" : "fd1472555e9d4d039472555e9d5d0386-en",
"_score" : 0.0,
"_source" : {
"language" : "en",
"fields" : {
"content" : "This is<pre>another set of <strong>important</strong>content someth<strong>ing</strong> completely different"
},
"uuid" : "fd1472555e9d4d039472555e9d5d0386"
}
}, {
"_index" : "node-08c5d084d4e842b385d084d4e8a2b301-fe6212a62ad94590a212a62ad9759026-44874a2a8d2e4483874a2a8d2e44830c-draft",
"_type" : "default",
"_id" : "5a3727b134064de4b727b134063de4c4-en",
"_score" : 0.0,
"_source" : {
"language" : "en",
"fields" : {
"content" : "This is<pre>another set of <strong>important</strong>content some<strong>what</strong> strange content"
},
"uuid" : "5a3727b134064de4b727b134063de4c4"
}
}, {
"_index" : "node-08c5d084d4e842b385d084d4e8a2b301-fe6212a62ad94590a212a62ad9759026-44874a2a8d2e4483874a2a8d2e44830c-draft",
"_type" : "default",
"_id" : "865257b6be4340c69257b6be4340c603-en",
"_score" : 0.0,
"_source" : {
"language" : "en",
"fields" : {
"content" : "This is<pre>another set of <strong>important</strong>content some <strong>more</strong> content you can poke a stick at too"
},
"uuid" : "865257b6be4340c69257b6be4340c603"
}
} ]
},
"suggest" : {
"query-suggest" : [ {
"text" : "som",
"offset" : 0,
"length" : 3,
"options" : [ ]
} ]
}
}
I'm currently using Elasticsearch 2.4.6 and I can't upgrade.
There are 5 documents in my index and only 4 contain the word "some".
Why do I see 5 hits but no options?
The options are not empty if I start my suggest text with the first word of the field string (e.g. "this").
Is my usage of the suggest feature valid when dealing with fields that contain full HTML pages? I'm not sure whether the feature was meant to handle many tokens per document.
I have already tried to use an ngram tokenizer for my suggestor analyzer, but that did not change the situation. Any hint or feedback would be appreciated.
It seems that the issue I'm seeing is a restriction of completion suggesters:
Matching always starts at the beginning of the text. So, for example, “Smi” will match “Smith, Fed” but not “Fed Smith”. However, you could list both “Smith, Fed” and “Fed Smith” as two different inputs for the one output.
http://rea.tech/implementing-autosuggest-in-elasticsearch/
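A common workaround for this restriction is to index multiple input phrases per document into a dedicated completion field, instead of relying on a multi-field under content. A sketch for Elasticsearch 2.x (index name, field name, and input phrases here are made up for illustration):

```
PUT /my_index
{
  "mappings": {
    "default": {
      "properties": {
        "suggest": {
          "type": "completion",
          "analyzer": "simple"
        }
      }
    }
  }
}

PUT /my_index/default/1
{
  "suggest": {
    "input": [
      "some text with more content",
      "text with more content",
      "more content you can poke a stick at"
    ]
  }
}
```

Each entry in input becomes a separate prefix-matchable starting point, so "som" would now match via the first entry even though it is not the first word of the full field value.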

Elasticsearch wildcard query with spaces

I'm trying to do a wildcard query with spaces. It easily matches words on a per-term basis, but not on a whole-field basis.
I've read the documentation, which says that I need to set the field to not_analyzed, but with this setting it returns nothing.
This is the mapping with which it works on term basis:
{
"denshop" : {
"mappings" : {
"products" : {
"properties" : {
"code" : {
"type" : "string"
},
"id" : {
"type" : "long"
},
"name" : {
"type" : "string"
},
"price" : {
"type" : "long"
},
"url" : {
"type" : "string"
}
}
}
}
}
}
This is the mapping with which the exact same query returns nothing:
{
"denshop" : {
"mappings" : {
"products" : {
"properties" : {
"code" : {
"type" : "string"
},
"id" : {
"type" : "long"
},
"name" : {
"type" : "string",
"index" : "not_analyzed"
},
"price" : {
"type" : "long"
},
"url" : {
"type" : "string"
}
}
}
}
}
}
The query is here:
curl -XPOST http://127.0.0.1:9200/denshop/products/_search?pretty -d '{"query":{"wildcard":{"name":"*test*"}}}'
Response with the not_analyzed property:
{
"took" : 3,
"timed_out" : false,
"_shards" : {
"total" : 5,
"successful" : 5,
"failed" : 0
},
"hits" : {
"total" : 0,
"max_score" : null,
"hits" : [ ]
}
}
Response without not_analyzed:
{
"took" : 4,
"timed_out" : false,
"_shards" : {
"total" : 5,
"successful" : 5,
"failed" : 0
},
"hits" : {
"total" : 5,
"max_score" : 1.0,
"hits" : [ {
...
EDIT: Adding requested info
Here is the list of documents:
{
"took" : 2,
"timed_out" : false,
"_shards" : {
"total" : 5,
"successful" : 5,
"failed" : 0
},
"hits" : {
"total" : 5,
"max_score" : 1.0,
"hits" : [ {
"_index" : "denshop",
"_type" : "products",
"_id" : "3L1",
"_score" : 1.0,
"_source" : {
"id" : 3,
"name" : "Testovací produkt 2",
"code" : "",
"price" : 500,
"url" : "http://www.denshop.lh/damske-obleceni/testovaci-produkt-2/"
}
}, {
"_index" : "denshop",
"_type" : "products",
"_id" : "4L1",
"_score" : 1.0,
"_source" : {
"id" : 4,
"name" : "Testovací produkt 3",
"code" : "",
"price" : 666,
"url" : "http://www.denshop.lh/damske-obleceni/testovaci-produkt-3/"
}
}, {
"_index" : "denshop",
"_type" : "products",
"_id" : "2L1",
"_score" : 1.0,
"_source" : {
"id" : 2,
"name" : "Testovací produkt",
"code" : "",
"price" : 500,
"url" : "http://www.denshop.lh/damske-obleceni/testovaci-produkt/"
}
}, {
"_index" : "denshop",
"_type" : "products",
"_id" : "5L1",
"_score" : 1.0,
"_source" : {
"id" : 5,
"name" : "Testovací produkt 4",
"code" : "",
"price" : 666,
"url" : "http://www.denshop.lh/damske-obleceni/testovaci-produkt-4/"
}
}, {
"_index" : "denshop",
"_type" : "products",
"_id" : "6L1",
"_score" : 1.0,
"_source" : {
"id" : 6,
"name" : "Testovací produkt 5",
"code" : "",
"price" : 666,
"url" : "http://www.denshop.lh/tricka-tilka-tuniky/testovaci-produkt-5/"
}
} ]
}
}
Without not_analyzed, it returns results with this:
curl -XPOST http://127.0.0.1:9200/denshop/products/_search?pretty -d '{"query":{"wildcard":{"name":"*testovací*"}}}'
But not with this (notice the space before the asterisk):
curl -XPOST http://127.0.0.1:9200/denshop/products/_search?pretty -d '{"query":{"wildcard":{"name":"*testovací *"}}}'
When I add not_analyzed to the mapping, it returns no hits no matter what I put in the wildcard query.
Add a custom analyzer that lowercases the text. Then, in your client application, lowercase the search text before passing it to the query.
To also keep the original analysis chain, I've added a sub-field to your name field that uses the custom analyzer.
PUT /denshop
{
"settings": {
"analysis": {
"analyzer": {
"keyword_lowercase": {
"type": "custom",
"tokenizer": "keyword",
"filter": [
"lowercase"
]
}
}
}
},
"mappings": {
"products": {
"properties": {
"name": {
"type": "string",
"fields": {
"lowercase": {
"type": "string",
"analyzer": "keyword_lowercase"
}
}
}
}
}
}
}
And the query will work on the sub-field:
GET /denshop/products/_search
{
"query": {
"wildcard": {
"name.lowercase": "*testovací *"
}
}
}
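One caveat: a mapping change like this only affects documents indexed afterwards, so the existing products must be reindexed before name.lowercase is populated. Assuming a version with the reindex API (2.3+), and denshop_v2 as a hypothetical new index created with the mapping above, a sketch would be:

```
POST _reindex
{
  "source": { "index": "denshop" },
  "dest":   { "index": "denshop_v2" }
}
```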
