I am working with Elasticsearch. I want to use a search suggester across multiple indices at once. I have two indices, tags and pool_tags, each of which has a name field. How do I use a suggester on these two indices, which share a similarly named field (name)?
I tried naming the suggester differently in each index (pool_tag_suggest in pool_tags). Here are the mappings:
tags:
{
"tags" : {
"mappings" : {
"properties" : {
"name" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword"
},
"suggest" : {
"type" : "completion",
"analyzer" : "simple",
"preserve_separators" : true,
"preserve_position_increments" : true,
"max_input_length" : 50
}
}
}
}
}
}
}
pool_tags:
{
"pool_tags" : {
"mappings" : {
"properties" : {
"name" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword"
},
"pool_tag_suggest" : {
"type" : "completion",
"analyzer" : "simple",
"preserve_separators" : true,
"preserve_position_increments" : true,
"max_input_length" : 50
}
}
}
}
}
}
}
WHAT I TRIED
POST pool_tags,tags/_search
{
"suggest": {
"tags_suggestor": {
"text": "ww",
"term": {
"field": "name.suggest"
}
},
"pooltags_suggestor": {
"text": "ww",
"term": {
"field": "name.pool_tag_suggest"
}
}
}
}
ERROR
{
"error" : {
"root_cause" : [
{
"type" : "illegal_argument_exception",
"reason" : "no mapping found for field [name.suggest]"
},
{
"type" : "illegal_argument_exception",
"reason" : "no mapping found for field [name.pool_tag_suggest]"
}
],
"type" : "search_phase_execution_exception",
"reason" : "all shards failed",
"phase" : "query",
"grouped" : true,
"failed_shards" : [
{
"shard" : 0,
"index" : "pool_tags",
"node" : "g2rCnS4PQMWyldWABVJawQ",
"reason" : {
"type" : "illegal_argument_exception",
"reason" : "no mapping found for field [name.suggest]"
}
},
{
"shard" : 0,
"index" : "tags",
"node" : "g2rCnS4PQMWyldWABVJawQ",
"reason" : {
"type" : "illegal_argument_exception",
"reason" : "no mapping found for field [name.pool_tag_suggest]"
}
}
],
"caused_by" : {
"type" : "illegal_argument_exception",
"reason" : "no mapping found for field [name.suggest]",
"caused_by" : {
"type" : "illegal_argument_exception",
"reason" : "no mapping found for field [name.suggest]"
}
}
},
"status" : 400
}
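One way to make this work, applying the same principle as the answer further down (a field referenced by a suggester must be mapped in every index the request targets), is to give both indices a common completion subfield. A minimal sketch, assuming a typeless 7.x cluster as the mapping output suggests; existing documents would need to be reindexed before the new subfield returns anything, and since these subfields are of type completion they are queried with the completion suggester, not term:
PUT pool_tags/_mapping
{
  "properties": {
    "name": {
      "type": "text",
      "fields": {
        "keyword": { "type": "keyword" },
        "suggest": {
          "type": "completion",
          "analyzer": "simple"
        }
      }
    }
  }
}

POST pool_tags,tags/_search
{
  "suggest": {
    "name_suggestor": {
      "prefix": "ww",
      "completion": {
        "field": "name.suggest"
      }
    }
  }
}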
Related
I have different indices that contain different fields, and I am trying to figure out how to get suggestions from all indices and all fields. I know that with GET /_all/_search I can search for results across all indices. But how can I get all suggestions from all indices and all fields? I want a feature like Google's "Did you mean:" suggestions.
So, I tried this out:
GET /_all/_search
{
"query" : {
"multi_match" : {
"query" : "berlin"
}
},
"suggest" : {
"text" : "berlin",
"my-suggest-1" : {
"term" : {
"field" : "street"
}
},
"my-suggest-2" : {
"term" : {
"field" : "city"
}
},
"my-suggest-3" : {
"term" : {
"field" : "description"
}
}
}
}
"my-suggest-1" and "-2" belongs to Index address (see below) and "my-suggest-3" belongs to Index product. I get the following error:
"error" : {
"root_cause" : [
{
"type" : "illegal_argument_exception",
"reason" : "no mapping found for field [street]"
},
{
"type" : "illegal_argument_exception",
"reason" : "no mapping found for field [city]"
},
{
"type" : "illegal_argument_exception",
"reason" : "no mapping found for field [description]"
}
]
}
But if I use only the fields of one index, I get suggestions. See:
GET /_all/_search
{
"query" : {
"multi_match" : {
"query" : "berlin"
}
},
"suggest" : {
"text" : "berlin",
"my-suggest-1" : {
"term" : {
"field" : "street"
}
},
"my-suggest-2" : {
"term" : {
"field" : "city"
}
}
}
}
Response
...
"failures" : {
...
},
"hits" : {
...
},
"suggest" : {
"my-suggest-1" : [
{
"text" : "berlin",
"offset" : 0,
"length" : 10,
"options" : [
{
"text" : "berliner",
"score" : 0.9,
"freq" : 12
},
{
"text" : "berlinger",
"score" : 0.9,
"freq" : 1
}
]
}
],
"my-suggest-2" : [
{
"text" : "berlin",
"offset" : 0,
"length" : 10,
"options" : []
}
]
...
I don't know how to get suggestions from both the address and product indices. I would be happy if someone could help me.
Index 1 - Address:
"address" : {
"aliases" : {
....
},
"mappings" : {
"dynamic" : "strict",
"properties" : {
"_entity_type" : {
"type" : "keyword",
"index" : false
},
"street" : {
"type" : "text"
},
"city" : {
"type" : "text"
}
}
},
"settings" : {
...
}
}
Index 2 - Product:
"product" : {
"aliases" : {
...
},
"mappings" : {
"dynamic" : "strict",
"properties" : {
"_entity_type" : {
"type" : "keyword",
"index" : false
},
"description" : {
"type" : "text"
}
}
},
"settings" : {
...
}
}
You can add multiple indices to your search, but then you need to suggest over fields that exist in all of those indices. So in your case you need to define all three fields in both indices: "street" and "city" are filled only in the first index, and "description" is filled only in the second. This will be your mapping for the address index; in this index the "description" field exists but holds no data, and in the second index "street" and "city" exist but hold no data.
"address" : {
"aliases" : {
....
},
"mappings" : {
"dynamic" : "strict",
"properties" : {
"_entity_type" : {
"type" : "keyword",
"index" : false
},
"street" : {
"type" : "text"
},
"city" : {
"type" : "text"
},
"description" : {
"type" : "text"
}
}
},
"settings" : {
...
}
}
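In the same way, the mapping for the product index would get the two empty fields; a sketch mirroring the above:
"product" : {
  "aliases" : {
    ...
  },
  "mappings" : {
    "dynamic" : "strict",
    "properties" : {
      "_entity_type" : {
        "type" : "keyword",
        "index" : false
      },
      "description" : {
        "type" : "text"
      },
      "street" : {
        "type" : "text"
      },
      "city" : {
        "type" : "text"
      }
    }
  },
  "settings" : {
    ...
  }
}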
I've just started with ES 5.2.2.
I'm trying to add an analyzer with Russian morphology support.
I run ES using Docker; I created an image with elasticsearch-analysis-morphology installed.
Then I:
create the index,
then put the settings,
and after that get the settings, and everything seems right:
curl http://localhost:9200/news/_settings?pretty
{
"news" : {
"settings" : {
"index" : {
"number_of_shards" : "5",
"provided_name" : "news",
"creation_date" : "1489343955314",
"analysis" : {
"analyzer" : {
"russian_analyzer" : {
"filter" : [
"stop",
"custom_stop",
"russian_stop",
"custom_word_delimiter",
"lowercase",
"russian_morphology",
"english_morphology"
],
"char_filter" : [
"html_strip",
"ru"
],
"type" : "custom",
"tokenizer" : "standard"
}
},
"char_filter" : {
"ru" : {
"type" : "mapping",
"mappings" : [
"Ё=>Е",
"ё=>е"
]
}
},
"filter:" : {
"custom_stop" : {
"type" : "stop",
"stopwords" : [
"n",
"r"
]
},
"russian_stop" : {
"ignore_case" : "true",
"type" : "stop",
"stopwords" : [
"а",
"без",
]
},
"custom_word_delimiter" : {
"split_on_numerics" : "false",
"generate_word_parts" : "false",
"preserve_original" : "true",
"catenate_words" : "true",
"generate_number_parts" : "true",
"catenate_all" : "true",
"split_on_case_change" : "false",
"type" : "word_delimiter",
"catenate_numbers" : "false"
}
}
},
"number_of_replicas" : "1",
"uuid" : "IUkHHwWrStqDMG6fYOqyqQ",
"version" : {
"created" : "5020299"
}
}
}
}
}
Then I try to open the index, but ES gives me this:
{
"error" : {
"root_cause" : [
{
"type" : "exception",
"reason" : "Failed to verify index [news/IUkHHwWrStqDMG6fYOqyqQ]"
}
],
"type" : "exception",
"reason" : "Failed to verify index [news/IUkHHwWrStqDMG6fYOqyqQ]",
"caused_by" : {
"type" : "illegal_argument_exception",
"reason" : "Custom Analyzer [russian_analyzer] failed to find filter under name [custom_stop]"
}
},
"status" : 500
}
I can't understand where I'm wrong.
Can anyone see what the problem is?
There was a mistake in the "filter" section.
It was:
look here, this colon was the mistake
       |
       v
"filter:" : {
"custom_stop" : {
"type" : "stop",
"stopwords" : [
"n",
"r"
]
}...
Thanks @asettou and @andrey-morozov
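For completeness, the corrected section is the same block with the stray colon removed from the key name:
"filter" : {
  "custom_stop" : {
    "type" : "stop",
    "stopwords" : [
      "n",
      "r"
    ]
  }...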
I have been trying to make one of the indexed fields dynamic, and I also changed elasticsearch.yml accordingly by adding
index.mapper.dynamic: false
at the end, and restarted Elasticsearch and Kibana Sense. I also tried different fields and index names, but I am still getting the same error:
{
"error": {
"root_cause": [
{
"type": "mapper_parsing_exception",
"reason": "Mapping definition for [albumdetailid] has unsupported parameters: [dynamic : true]"
}
],
"type": "mapper_parsing_exception",
"reason": "Failed to parse mapping [package]: Mapping definition for [albumdetailid] has unsupported parameters: [dynamic : true]",
"caused_by": {
"type": "mapper_parsing_exception",
"reason": "Mapping definition for [albumdetailid] has unsupported parameters: [dynamic : true]"
}
},
"status": 400
}
The code for adding the index is below:
PUT /worldtest221
{
"settings" : {
"index" : {
"creation_date" : "1474989052008",
"uuid" : "Ae-7aFrLT466ZJL4U9QGLQ",
"number_of_replicas" : "0",
"analysis" : {
"char_filter" : {
"quotes" : {
"type" : "mapping",
"mappings" : [ "=>'", "=>'", "‘=>'", "’=>'", "‛=>'" ]
}
},
"filter" : {
"nGram_filter" : {
"max_gram" : "400",
"type" : "nGram",
"min_gram" : "1",
"token_chars" : [ "letter", "digit", "punctuation", "symbol" ]
}
},
"analyzer" : {
"quotes_analyzer" : {
"char_filter" : [ "quotes" ],
"tokenizer" : "standard"
},
"nGram_analyzer" : {
"type" : "custom",
"filter" : [ "lowercase", "asciifolding", "nGram_filter" ],
"tokenizer" : "whitespace"
},
"whitespace_analyzer" : {
"type" : "custom",
"filter" : [ "lowercase", "asciifolding" ],
"tokenizer" : "whitespace"
}
}
},
"cache" : {
"query_result" : {
"enable" : "true"
}
},
"number_of_shards" : "1",
"version" : {
"created" : "2030099"
}
}
},
"mappings" : {
"package" : {
"properties" : {
"autosuggestionpackagedetail" : {
"type" : "string",
"index" : "not_analyzed"
},
"availability" : {
"type" : "nested",
"include_in_all" : false,
"properties" : {
"displaysequence" : {
"type" : "long",
"include_in_all" : false
},
"isquantityavailable" : {
"type" : "boolean"
},
.....
"metatags" : {
"type" : "string",
"include_in_all" : false
},
"minadultage" : {
"type" : "long",
"include_in_all" : false
},
"newmemberrewardpoints" : {
"type" : "long",
"include_in_all" : false
},
"packagealbum" : {
"include_in_all" : false,
"properties" : {
"albumdetailid" : {
"type" : "string",
"include_in_all" : false,
"dynamic": true
},
....
Look at the second-to-last line, where I set "dynamic": true.
This happens because you are trying to set "dynamic": true on a field ("albumdetailid") of type string. This makes no sense: no new fields can be created under a "string" field. The "dynamic" parameter should be set on an "object" field. So either define "albumdetailid" as "object", or put "dynamic": true one level higher, under "packagealbum", like this:
"packagealbum" : {
"include_in_all" : false,
"dynamic": true,
"properties" : {
"albumdetailid" : {
"type" : "string",
"include_in_all" : false
},
....
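For the other option, defining "albumdetailid" itself as an object, a minimal sketch; this assumes the dynamic subfields are meant to appear under albumdetailid:
"packagealbum" : {
  "include_in_all" : false,
  "properties" : {
    "albumdetailid" : {
      "type" : "object",
      "dynamic" : true,
      "include_in_all" : false
    },
    ....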
I have this mapping for the fuas type:
curl -XGET 'http://localhost:9201/living_team/_mapping/fuas?pretty'
{
"living_v1" : {
"mappings" : {
"fuas" : {
"properties" : {
"backlogStatus" : {
"type" : "long"
},
"comment" : {
"type" : "string"
},
"dueTimestamp" : {
"type" : "date",
"format" : "strict_date_optional_time||epoch_millis"
},
"matter" : {
"type" : "string"
},
"metainfos" : {
"properties" : {
"category 1" : {
"type" : "string"
},
"key" : {
"type" : "string"
},
"null" : {
"type" : "string"
},
"processos" : {
"type" : "string"
}
}
},
"resources" : {
"properties" : {
"noteId" : {
"type" : "string"
},
"resourceId" : {
"type" : "string"
}
}
},
"status" : {
"type" : "long"
},
"timestamp" : {
"type" : "date",
"format" : "strict_date_optional_time||epoch_millis"
},
"user" : {
"type" : "string",
"index" : "not_analyzed"
}
}
}
}
}
}
I'm trying to perform this aggregation:
curl -XGET 'http://ESNode01:9201/living_team/fuas/_search?pretty' -d '
{
"aggs" : {
"demo" : {
"nested" : {
"path" : "metainfos"
},
"aggs" : {
"key" : { "terms" : { "field" : "metainfos.key" } }
}
}
}
}
'
ES responds with:
"error" : {
"root_cause" : [ {
"type" : "aggregation_execution_exception",
"reason" : "[nested] nested path [metainfos] is not nested"
} ],
"type" : "search_phase_execution_exception",
"reason" : "all shards failed",
"phase" : "query_fetch",
"grouped" : true,
"failed_shards" : [ {
"shard" : 3,
"index" : "living_v1",
"node" : "HfaFBiZ0QceW1dpqAnv-SA",
"reason" : {
"type" : "aggregation_execution_exception",
"reason" : "[nested] nested path [metainfos] is not nested"
}
} ]
},
"status" : 500
}
Any ideas?
You're missing "type":"nested" from your metainfos mapping.
It should have been:
"metainfos" : {
"type":"nested",
"properties" : {
"category 1" : {
"type" : "string"
},
"key" : {
"type" : "string"
},
"null" : {
"type" : "string"
},
"processos" : {
"type" : "string"
}
}
}
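Note that "nested" cannot be switched on for an existing field in place: the index has to be recreated (or a new index created and the documents reindexed into it). A minimal sketch of the relevant part, with the new index name living_v2 being hypothetical:
curl -XPUT 'http://ESNode01:9201/living_v2' -d '
{
  "mappings" : {
    "fuas" : {
      "properties" : {
        "metainfos" : {
          "type" : "nested",
          "properties" : {
            "key" : { "type" : "string" }
          }
        }
      }
    }
  }
}'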
I'm new to Elasticsearch; I started working with Elasticsearch 1.7.3 as part of a Logstash-Elasticsearch-Kibana deployment.
I've defined a mapping template for my log messages; this is the interesting part:
{
"template" : "logstash-*",
"settings" : { "index.refresh_interval" : "5s" },
"mappings" : {
"_default_" : {
"_all" : {"enabled" : true, "omit_norms" : true},
"dynamic_templates" : [ {
"date_fields" : {
"match" : "*",
"match_mapping_type" : "date",
"mapping" : { "type" : "date", "doc_values" : true }
}
}],
"properties" : {
"#version" : { "type" : "string", "index" : "not_analyzed" },
"#timestamp" : { "type" : "date", "format" : "dateOptionalTime" },
"message" : { "type" : "string" }
}
} ,
"my_log" : {
"_all" : { "enabled" : true, "omit_norms" : true },
"dynamic_templates" : [ {
"date_fields" : {
"match" : "*",
"match_mapping_type" : "date",
"mapping" : { "type" : "date", "doc_values" : true }
}
}],
"properties" : {
"#timestamp" : { "type" : "date", "format" : "dateOptionalTime" },
"file" : { "type" : "string" },
"message" : { "type" : "string" }
"geolocation" : { "type" : "string" },
}
}
}
}
Although the #timestamp field is defined with doc_values: true, I get a memory exception because it is loaded as fielddata:
[FIELDDATA] Data too large, data for [#timestamp] would be larger than
limit of [633785548/604.4 mb]
NOTE:
I know I can increase the memory or add more nodes to the cluster, but in my view this is a design problem: this field should not be loaded into memory at all.
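One thing worth checking (an assumption on my part, not a confirmed diagnosis): index templates only apply to indices created after the template is installed, so logstash-* indices that already existed may still map #timestamp without doc_values and fall back to fielddata. The live mapping can be inspected directly; the index name below is hypothetical:
curl -XGET 'http://localhost:9200/logstash-2015.11.05/_mapping?pretty'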