I want a simple Pie chart based on my index. However, the fields in the result seem to be embedded within the _source field, which cannot be used in a Terms aggregation in Kibana.
Sample Result is shown below:
Now if I disable the _source field in the mapping:
I don't get any of the fields:
However, the Kibana Discover page lists the available fields, which were never returned in the ES results when _source was enabled.
The Index Mapping is as shown below:
{
"settings": {
"analysis": {
"filter": {
"filter_stemmer": {
"type": "stemmer",
"language": "english"
}
},
"analyzer": {
"tags_analyzer": {
"type": "custom",
"filter": [
"standard",
"lowercase",
"filter_stemmer"
],
"tokenizer": "standard"
}
}
}
},
"mappings": {
"schemav1": {
"properties": {
"user_id": {
"type": "text"
},
"technician_query": {
"analyzer": "tags_analyzer",
"type": "text"
},
"staffer_queries": {
"analyzer": "tags_analyzer",
"type": "text"
},
"status":{
"type":"text"
}
}
}
}
}
OK, the reason is simple: in order for your fields to be used in aggregations, you need to have a keyword version of them. You cannot aggregate text fields.
Transform your mapping to this:
"mappings": {
"schemav1": {
"properties": {
"user_id": {
"type": "keyword"
},
"technician_query": {
"analyzer": "tags_analyzer",
"type": "text",
"fields": {
"raw": {
"type": "keyword"
}
}
},
"staffer_queries": {
"analyzer": "tags_analyzer",
"type": "text",
"fields": {
"raw": {
"type": "keyword"
}
}
},
"status":{
"type":"keyword"
}
}
}
}
So, user_id and status are now keyword fields, and technician_query.raw and staffer_queries.raw are also keyword fields, which you can use in terms aggregations, and hence in Pie charts as well.
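For example, the pie chart's slices can be driven by a terms aggregation on one of those keyword fields. A minimal sketch (the index name my_index is an assumption, use your own):
# Hypothetical index name; the aggregation targets the keyword sub-field
GET my_index/_search
{
  "size": 0,
  "aggs": {
    "by_technician_query": {
      "terms": {
        "field": "technician_query.raw"
      }
    }
  }
}
In the Kibana pie chart, pick a Terms bucket aggregation and select technician_query.raw, staffer_queries.raw, status or user_id as the field.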
I have an index with documents that have 3 fields: name, summary and tags.
name is a short text field that contains a small phrase, e.g. "Japanese Handmade Sword".
summary is a long text field with the description of certain products; it may be more than 200 words.
tags is an array of strings with keywords, e.g. ["Japanese", "Antiquity", "Weapon", "Katana"].
I need to combine these fields into one search query to get the desired search results. For example, when a user searches "Japan" I should get this item. However, the match query always gives me an empty result, although I have data and can see all documents without a query.
Here are my mapping and index settings, which perform some tokenization on the fields.
PUT lessons
{
"settings": {
"index": {
"number_of_shards": 1
},
"refresh_interval": "5s",
"similarity": {
"string_similarity": {
"type": "BM25"
}
},
"analysis": {
"analyzer": {
"autocomplete": {
"filter": [
"lowercase"
],
"tokenizer": "standard"
},
"autocomplete_search": {
"type": "custom",
"filter": "lowercase",
"tokenizer": "standard"
}
}
}
},
"mappings": {
"properties": {
"name": {
"type": "text",
"analyzer": "autocomplete",
"search_analyzer": "autocomplete_search",
"fielddata": true,
"fields": {
"keyword": {
"type": "keyword"
}
}
},
"summary": {
"type": "text",
"analyzer": "autocomplete",
"search_analyzer": "autocomplete_search",
"fielddata": true,
"fields": {
"keyword": {
"type": "keyword"
}
}
},
"tags": {
"type": "text",
"search_analyzer": "autocomplete_search",
"fielddata": true,
"fields": {
"keyword": {
"type": "keyword"
}
}
}
}
}
}
I am using Kibana, and when I run the below query I get no results:
GET lessons/_search
{
"query": {
"match": {
"summary": "Japan"
}
}
}
What is wrong with my index settings or mapping?
You can use a multi_match query to search on multiple fields for the same query text:
{
"query": {
"multi_match" : {
"query": "Japan",
"fields": [ "summary", "tags", "name" ]
}
}
}
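If you are running it in Kibana like the match query above, the full request against the lessons index would look like this (same multi_match body, just wrapped in a search request):
GET lessons/_search
{
  "query": {
    "multi_match": {
      "query": "Japan",
      "fields": [ "summary", "tags", "name" ]
    }
  }
}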
I'm currently having an issue with being unable to return hits for a particular search term, and it's a bit perplexing to me:
Term: navy flower
The query ends up looking like:
(name: "navy flower"~5 OR sku: "navy flower"~10 OR description: "navy flower"~5)
No hits.
If I change the term to: navy flowers
I get 3 hits with it:
The mappings I currently have setup on the index are as follows:
{
"mappings": {
"_doc": {
"properties": {
"active": {
"type": "long"
},
"description": {
"type": "text"
},
"id": {
"type": "integer"
},
"name": {
"type": "text"
},
"sku": {
"type": "text"
},
"upc": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
}
}
}
I must be missing something obvious for the match to not be working on the singular vs. plural form of the word.
As per your index mapping, you have not specified any analyzer, which means Elasticsearch uses the standard analyzer by default. The standard analyzer doesn't do stemming, since by default it has only 2 token filters (see the _analyze sketch after this list):
Lower Case Token Filter
Stop Token Filter (by default disabled)
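You can confirm this with a quick _analyze call; a sketch that runs against the cluster, no index required:
# The standard analyzer only tokenizes and lowercases; "flowers" is not stemmed
POST _analyze
{
  "analyzer": "standard",
  "text": "navy flowers"
}
The returned tokens are navy and flowers, so a search for the singular flower cannot match them.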
To support your use case, you need the stemmer token filter in your analyzer. So you can create a custom analyzer and configure it on the required field:
{
"settings": {
"analysis": {
"analyzer": {
"my_analyzer": {
"tokenizer": "standard",
"filter": [
"lowercase",
"stemmer"
]
}
}
}
},
"mappings": {
"properties": {
"active": {
"type": "long"
},
"description": {
"type": "text"
},
"id": {
"type": "integer"
},
"name": {
"type": "text",
"analyzer": "my_analyzer"
},
"sku": {
"type": "text"
},
"upc": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
}
}
After this, you can search with the below query:
GET test/_search?q=(name:"navy flower"~5 OR sku: "navy flower"~10 OR description: "navy flower"~5)
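To double-check that the custom analyzer stems as intended, you can run the same text through it; a sketch, assuming the index is called test as in the query above:
# my_analyzer lowercases and stems, so "flowers" becomes the token "flower"
GET test/_analyze
{
  "analyzer": "my_analyzer",
  "text": "navy flowers"
}
With flower in the index, the singular search term matches as well.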
At a high level, my use case requires a nested object, and I would also like to perform exact, case-insensitive matches on my nested objects.
I've started with the example here:
https://www.codementor.io/mehuljain/case-insensitive-exact-matches-in-elasticsearch-nny7ii7fw
which does almost exactly what I want, except it doesn't use nested objects.
I've tried to modify the code on the above page by changing the type from text to nested:
PUT titles
{
"settings": {
"analysis": {
"normalizer": {
"my_normalizer": {
"type": "custom",
"filter": ["lowercase"]
}
}
}
},
"mappings": {
"default": {
"properties": {
"title": {
"type": "nested",
"fields": {
"normalize": {
"type": "keyword",
"normalizer": "my_normalizer"
},
"keyword" : {
"type": "keyword"
}
}
}
}
}
}
}
This, however, doesn't work and I get an error message.
How do I perform a case insensitive exact matching search on a nested object in Elastic?
Since you are dealing with a nested object, you need to define its properties and not fields.
{
"settings": {
"analysis": {
"normalizer": {
"my_normalizer": {
"type": "custom",
"filter": [
"lowercase"
]
}
}
}
},
"mappings": {
"default": {
"properties": {
"title": {
"type": "nested",
"properties": { <----------- should be properties and not fields
"normalize": {
"type": "keyword",
"normalizer": "my_normalizer"
},
"keyword": {
"type": "keyword"
}
}
}
}
}
}
}
Based on the above change, title will be a nested object with two properties, namely normalize and keyword.
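For the case-insensitive exact match itself, a nested query wrapping a match on title.normalize should do it. A minimal sketch, assuming documents shaped like {"title": {"normalize": "Some Title", "keyword": "Some Title"}}:
# Hypothetical document; the value is stored under both nested properties
PUT titles/default/1
{
  "title": {
    "normalize": "Some Title",
    "keyword": "Some Title"
  }
}

# The query text differs in case, but my_normalizer lowercases both sides
GET titles/_search
{
  "query": {
    "nested": {
      "path": "title",
      "query": {
        "match": {
          "title.normalize": "SOME title"
        }
      }
    }
  }
}
The keyword property remains available when you need a case-sensitive exact match.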
I am trying to set up stemming in my ES mapping. I pass the name of the stemming analyzer through the indexed document (de_analyzer).
I observe that the below mapping properly adds the stemmed terms to the index, but now I can no longer search for the unstemmed terms. No matches are returned. It seems that only the stemmed terms are indexed?
This is the index configuration showing the filter, index analyzer, search analyzer and field configuration.
What am I overlooking?
Thanks!
{
"globalfashionmonitor": {
"template": "myindex*",
"settings": {
"index.number_of_shards": 5,
"default_search": "analyzer_search",
"analysis": {
"filter": {
"de_stem_filter": {
"type": "stemmer",
"name": "minimal_german"
}
}
},
"analyzer": {
"analyzer_search": {
"type": "custom",
"tokenizer": "icu_tokenizer",
"filter": [
"icu_folding"
]
},
"de_analyzer": {
"type": "custom",
"filter": [
"icu_normalizer",
"de_stop_filter",
"de_stem_filter",
"icu_folding"
],
"tokenizer": "icu_tokenizer"
}
},
"mappings": {
"items": {
"_analyzer": {
"path": "use_analyzer"
},
"properties": {
"summarizedArticle": {
"fields": {
"stemmed": {
"index_analyzer": "de_analyzer",
"type": "string",
"index": "analyzed"
}
},
"type": "string"
}
}
}
}
}
}
}
I have an existing mapping for a field, and I want to change it to a multi-field.
The existing mapping is
{
"my_index": {
"mappings": {
"my_type": {
"properties": {
"author": {
"type": "string"
},
"isbn": {
"type": "string",
"analyzer": "standard",
"fields": {
"ngram": {
"type": "string",
"search_analyzer": "keyword"
}
}
},
"title": {
"type": "string",
"analyzer": "english",
"fields": {
"std": {
"type": "string",
"analyzer": "standard"
}
}
}
}
}
}
}
}
Based on the documentation, I should be able to change "author" to a multi-field by executing the following
PUT /my_index
{
"mappings": {
"my_type": {
"properties": {
"author":
{
"type": "multi-field",
"fields": {
"ngram": {
"type": "string",
"indexanalyzer": "ngram_analyzer",
"search_analyzer": "keyword"
},
"name" : {
"type": "string"
}
}
}
}
}
}
}
But instead I get the following error:
{
"error": "IndexAlreadyExistsException[[my_index] already exists]",
"status": 400
}
Am I missing something really obvious?
Instead of PUT to /my_index do:
POST /my_index/_mapping
You won't be able to change the field type in an already existing index.
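You can, however, add sub-fields to the existing author field with that endpoint, since its type stays string. A sketch against the my_type mapping from the question (ngram_analyzer is assumed to exist in your index settings):
# Adds a multi-field to author; the field type itself is unchanged
POST /my_index/_mapping/my_type
{
  "properties": {
    "author": {
      "type": "string",
      "fields": {
        "ngram": {
          "type": "string",
          "index_analyzer": "ngram_analyzer",
          "search_analyzer": "keyword"
        }
      }
    }
  }
}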
If you can't recreate your index, you can make use of the copy_to field to achieve a similar capability.
PUT /my_index
{
"mappings": {
"my_type": {
"properties": {
"author":
{
"type": "string",
"copy_to": ["author-name","author-ngram"]
},
"author-ngram": {
"type": "string",
"indexanalyzer": "ngram_analyzer",
"search_analyzer": "keyword"
},
"author-name" : {
"type": "string"
}
}
}
}
}
}
While I have not tried it in your particular example, it is indeed possible to update field mappings by first closing the index and then applying the mappings.
Example:
POST /my_index/_close
POST /my_index/_mapping
{
"my_field:{"new_mapping"}
}
POST /my_index/_open
I have tested it by adding a "copy_to" mapping property to a mapped field.
Based on https://gist.github.com/nicolashery/6317643.
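Putting those steps together for this case, a sketch only (I have not run it against your index, and the mapping type my_type is spelled out in the URL):
# 1) Close the index so the mapping update is accepted
POST /my_index/_close

# 2) Add copy_to on author plus its target field
POST /my_index/_mapping/my_type
{
  "properties": {
    "author": {
      "type": "string",
      "copy_to": ["author-name"]
    },
    "author-name": {
      "type": "string"
    }
  }
}

# 3) Reopen the index
POST /my_index/_open
Note that copy_to is applied at index time, so existing documents only get author-name after they are reindexed.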