I have an Elasticsearch index with customer information.
I'm having trouble getting results when accents are involved.
For example, I have {name: 'anais'} and {name: 'anaïs'}.
Running
GET /my-index/_search
{
  "size": 25,
  "query": {
    "match": {"name": "anaïs"}
  }
}
I would like to get both documents, but this query only returns anaïs. The same goes for this query:
GET /my-index/_search
{
  "size": 25,
  "query": {
    "match": {"name": "anais"}
  }
}
I would like to get both anais and anaïs, but in this case I only get anais.
I tried adding an analyzer:
PUT /my-new-celebrity/_settings
{
  "analysis": {
    "analyzer": {
      "default": {
        "type": "custom",
        "tokenizer": "standard",
        "filter": [
          "lowercase",
          "asciifolding"
        ]
      }
    }
  }
}
But in this case, both searches return only anais.
Looks like you forgot to apply your custom default analyzer to your name field; below is a working example.
Index definition with settings and mapping:
{
  "settings": {
    "analysis": {
      "analyzer": {
        "default": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "asciifolding"
          ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "name": {
        "type": "text",
        "analyzer": "default" // note this
      }
    }
  }
}
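To verify that the analyzer folds accents as intended, you can run the field through the _analyze API (a quick check, assuming the index was created as myindexascii, as in the results below):

GET /myindexascii/_analyze
{
  "field": "name",
  "text": "anaïs"
}

This should return the single token anais, confirming that "anais" and "anaïs" end up as the same term in the index.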
Index sample docs:
{
  "name": "anais"
}
{
  "name": "anaïs"
}
Search query, same as yours:
{
  "size": 25,
  "query": {
    "match": {
      "name": "anaïs"
    }
  }
}
And, as expected, both results:
"hits": [
{
"_index": "myindexascii",
"_type": "_doc",
"_id": "1",
"_score": 0.18232156,
"_source": {
"name": "anaïs"
}
},
{
"_index": "myindexascii",
"_type": "_doc",
"_id": "2",
"_score": 0.18232156,
"_source": {
"name": "anais"
}
}
]
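Also note that analysis settings are static: the index has to be closed before they can be updated, and documents indexed before the change are not reanalyzed, which is likely why your PUT _settings attempt still returned only anais for both searches. If you update the settings in place rather than recreating the index, the sequence would look roughly like this (a sketch; the _update_by_query step is needed so existing documents pick up the new analyzer):

POST /my-new-celebrity/_close

PUT /my-new-celebrity/_settings
{
  "analysis": {
    "analyzer": {
      "default": {
        "type": "custom",
        "tokenizer": "standard",
        "filter": ["lowercase", "asciifolding"]
      }
    }
  }
}

POST /my-new-celebrity/_open

POST /my-new-celebrity/_update_by_query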
Related
I am confused by the skip_duplicates option of the Elasticsearch completion suggester. The settings and mappings are like this:
{
  "settings": {
    "index": {
      "analysis": {
        "analyzer": {
          "autocomplete_analyzer": {
            "type": "custom",
            "tokenizer": "standard",
            "filter": [
              "lowercase"
            ]
          }
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "name": {
        "type": "text"
      },
      "category": {
        "type": "text"
      },
      "category_suggest": {
        "type": "completion",
        "analyzer": "autocomplete_analyzer"
      }
    }
  }
}
I have docs like this:
"category_suggest": {
  "input": [
    "Automotive",
    "Auto"
  ]
}
and also docs like this:
"category_suggest": {
  "input": [
    "Automotive",
    "Car"
  ]
}
If I use the _search endpoint and submit this query with skip_duplicates=true:
"suggest":
{
"suggestions":{
"prefix": "Aut",
"completion":{
"field": "category_suggest",
"size": 5,
"skip_duplicates": true
}
}
}
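(For reference, that fragment is the body of a request like the following; the index name my_index is taken from the response below:)

POST /my_index/_search
{
  "suggest": {
    "suggestions": {
      "prefix": "Aut",
      "completion": {
        "field": "category_suggest",
        "size": 5,
        "skip_duplicates": true
      }
    }
  }
}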
It only returns:
"suggest": {
  "suggestions": [
    {
      "text": "Aut",
      "offset": 0,
      "length": 3,
      "options": [
        {
          "text": "Auto",
          "_index": "my_index",
          "_type": "_doc",
          "_id": "oCNRa4IAiXc1hBrN9UnM",
          "_score": 1.0
        }
      ]
    }
  ]
}
If I set skip_duplicates=false, both 'Auto' and 'Automotive' are returned. Is that because, in one of the documents, the category_suggest field has both 'Auto' and 'Automotive'? I am using ES 7.10 with the Python client.
Thanks
I have hundreds of chemical records in my index climate_change.
I'm using an ngram search, and these are the settings I'm using for the index:
{
  "settings": {
    "index.max_ngram_diff": 30,
    "index": {
      "analysis": {
        "analyzer": {
          "analyzer": {
            "tokenizer": "test_ngram",
            "filter": [
              "lowercase"
            ]
          },
          "search_analyzer": {
            "tokenizer": "test_ngram",
            "filter": [
              "lowercase"
            ]
          }
        },
        "tokenizer": {
          "test_ngram": {
            "type": "edge_ngram",
            "min_gram": 1,
            "max_gram": 30,
            "token_chars": [
              "letter",
              "digit"
            ]
          }
        }
      }
    }
  }
}
My main problem is that if I run a query like this one:
GET climate_change/_search?size=1000
{
  "query": {
    "match": {
      "description": {
        "query": "oxygen"
      }
    }
  }
}
I see that a lot of results have the same score, 7.381186, which is strange:
{
  "_index": "climate_change",
  "_type": "_doc",
  "_id": "XXX",
  "_score": 7.381186,
  "_source": {
    "recordtype": "chemicals",
    "description": "carbon/oxygen"
  }
},
{
  "_index": "climate_change",
  "_type": "_doc",
  "_id": "YYY",
  "_score": 7.381186,
  "_source": {
    "recordtype": "chemicals",
    "description": "oxygen"
  }
}
How is this possible?
In the example above, since I'm using ngrams and searching for oxygen in the description field, I'd expect the second result to have a higher score than the first one.
I've also tried setting the tokenizer type to "standard" and "whitespace" in the settings, but that didn't help.
Maybe it's the '/' character inside the description?
Thanks a lot!
You need to define the analyzer in the mapping for the description field as well.
Adding a working example with index data, mapping, search query, and search result.
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "test_ngram",
          "filter": [
            "lowercase"
          ]
        },
        "search_analyzer": {
          "tokenizer": "test_ngram",
          "filter": [
            "lowercase"
          ]
        }
      },
      "tokenizer": {
        "test_ngram": {
          "type": "edge_ngram",
          "min_gram": 1,
          "max_gram": 30,
          "token_chars": [
            "letter",
            "digit"
          ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "description": {
        "type": "text",
        "analyzer": "my_analyzer"
      }
    }
  }
}
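With the analyzer actually applied to the field, "carbon/oxygen" produces many more edge-ngram tokens than "oxygen", so field-length normalization gives the exact document a higher score, as the result below shows. You can inspect the tokens with the _analyze API (a quick check against the index created above, here named 67180160 as in the results):

GET /67180160/_analyze
{
  "field": "description",
  "text": "oxygen"
}

This should return the tokens o, ox, oxy, oxyg, oxyge and oxygen, while "carbon/oxygen" yields the edge ngrams of both carbon and oxygen (the '/' just splits the words, since token_chars only keeps letters and digits).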
Index Data:
{
  "recordtype": "chemicals",
  "description": "carbon/oxygen"
}
{
  "recordtype": "chemicals",
  "description": "oxygen"
}
Search Query:
{
  "query": {
    "match": {
      "description": {
        "query": "oxygen"
      }
    }
  }
}
Search Result:
"hits": [
{
"_index": "67180160",
"_type": "_doc",
"_id": "2",
"_score": 0.89246297,
"_source": {
"recordtype": "chemicals",
"description": "oxygen"
}
},
{
"_index": "67180160",
"_type": "_doc",
"_id": "1",
"_score": 0.6651374,
"_source": {
"recordtype": "chemicals",
"description": "carbon/oxygen"
}
}
]
For the EN language I have a custom analyzer using porter_stem. I want queries with the words "virus" and "viruses" to return the same results.
What I'm finding is that porter stems virus->viru and viruses->virus. Consequently, I get differing results.
How can I handle this?
You can achieve your use case (queries with the words "virus" and "viruses" returning the same results) by using the snowball token filter, which stems words to their root form.
Adding a working example with index data, mapping, search query, and search result
Index Mapping:
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "my_snow"
          ]
        }
      },
      "filter": {
        "my_snow": {
          "type": "snowball",
          "language": "English"
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "desc": {
        "type": "text",
        "analyzer": "my_analyzer"
      }
    }
  }
}
Analyze API
GET /_analyze
{
  "analyzer": "my_analyzer",
  "text": "viruses"
}
The following tokens are generated:
{
  "tokens": [
    {
      "token": "virus",
      "start_offset": 0,
      "end_offset": 7,
      "type": "<ALPHANUM>",
      "position": 0
    }
  ]
}
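Running the same call with "text": "virus" should likewise produce the single token virus: unlike the original Porter algorithm, which strips the final s (giving viru), the snowball (Porter2) stemmer leaves words ending in -us intact, so both forms map to the same term.

GET /_analyze
{
  "analyzer": "my_analyzer",
  "text": "virus"
}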
Index Data:
{
  "desc": "viruses"
}
{
  "desc": "virus"
}
Search Query:
{
  "query": {
    "match": {
      "desc": {
        "query": "viruses"
      }
    }
  }
}
Search Result:
"hits": [
{
"_index": "65707743",
"_type": "_doc",
"_id": "1",
"_score": 0.18232156,
"_source": {
"desc": "viruses"
}
},
{
"_index": "65707743",
"_type": "_doc",
"_id": "2",
"_score": 0.18232156,
"_source": {
"desc": "virus"
}
}
]
I'm working on a Spanish search engine. (I don't speak Spanish.) But based on my research, the goal is more or less this: 1. filter stopwords like "dos", "de", "la"... 2. stem the words for both search and index, e.g. if you search "primera", then "primero" and "primer" should also show up.
My attempt:
es_analyzer = {
  "settings": {
    "analysis": {
      "filter": {
        "spanish_stop": {
          "type": "stop",
          "stopwords": "_spanish_"
        },
        "spanish_stemmer": {
          "type": "stemmer",
          "language": "spanish"
        }
      },
      "analyzer": {
        "default_search": {
          "type": "spanish"
        },
        "rebuilt_spanish": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "spanish_stop",
            "spanish_stemmer"
          ]
        }
      }
    }
  }
}
The problem:
When I use "type": "spanish" in "default_search", my query "primera" gets stemmed to "primer", which is correct. But even though I specified "spanish_stemmer" in the filter, the documents in the index aren't stemmed, so when I search for "primera" it only shows exact matches for "primer". Any suggestions on fixing this?
Potential fixes, but I haven't figured out the syntax:
Using the built-in "spanish" analyzer in the filter. What's the syntax?
Adding the Spanish stemmer and stopwords to "default_search". But I don't know how to use compound settings there.
Adding a working example with index data, mapping, search query, and search result
Index Mapping:
{
  "settings": {
    "analysis": {
      "filter": {
        "spanish_stop": {
          "type": "stop",
          "stopwords": "_spanish_"
        },
        "spanish_stemmer": {
          "type": "stemmer",
          "language": "spanish"
        }
      },
      "analyzer": {
        "default_search": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "spanish_stop",
            "spanish_stemmer"
          ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "analyzer": "default_search"
      }
    }
  }
}
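You can confirm that all three forms collapse to the same term with the _analyze API (a quick check against the index created above, here named stof_64420517 as in the results):

GET /stof_64420517/_analyze
{
  "field": "title",
  "text": "primera primero primer"
}

Each word should come back stemmed to primer, which is why all three documents below match with the same score.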
Index Data:
{
  "title": "primer"
}
{
  "title": "primera"
}
{
  "title": "primero"
}
Search Query:
{
  "query": {
    "match": {
      "title": "primer"
    }
  }
}
Search Result:
"hits": [
{
"_index": "stof_64420517",
"_type": "_doc",
"_id": "3",
"_score": 0.13353139,
"_source": {
"title": "primer"
}
},
{
"_index": "stof_64420517",
"_type": "_doc",
"_id": "1",
"_score": 0.13353139,
"_source": {
"title": "primera"
}
},
{
"_index": "stof_64420517",
"_type": "_doc",
"_id": "2",
"_score": 0.13353139,
"_source": {
"title": "primero"
}
}
]
I have a field indexed with a custom analyzer with the configuration below:
"COMPNAYNAME" : {
"type" : "text",
"analyzer" : "textAnalyzer"
}
"textAnalyzer" : {
"filter" : [
"lowercase"
],
"char_filter" : [ ],
"type" : "custom",
"tokenizer" : "ngram_tokenizer"
}
"tokenizer" : {
"ngram_tokenizer" : {
"type" : "ngram",
"min_gram" : "2",
"max_gram" : "3"
}
}
When I search for the text "ikea" I get the results below.
Query:
GET company_info_test_1/_search
{
  "query": {
    "match": {
      "COMPNAYNAME": {"query": "ikea"}
    }
  }
}
The following are the results:
1. mikea
2. likeable
3. maaikeart
4. likeables
5. ikea b.v. <------
6. likeachef
7. ikea breda <------
8. bernikeart
9. ikea duiven
10. mikea media
I'm expecting the exact match to be boosted above the rest of the results.
Could you please help me with the best way to index if I need both exact matching and fuzziness?
Thanks in advance.
You can use the ngram tokenizer along with "search_analyzer": "standard". Refer to this to learn more about search_analyzer.
As pointed out by @EvaldasBuinauskas, you can also use the edge_ngram tokenizer here if you want tokens to be generated only from the beginning of words and not from the middle; see the sketch at the end of this answer.
Adding a working example with index data, mapping, search query, and result
Index Data:
{ "title": "ikea b.v."}
{ "title" : "mikea" }
{ "title" : "maaikeart"}
Index Mapping:
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "ngram",
          "min_gram": 2,
          "max_gram": 10,
          "token_chars": [
            "letter",
            "digit"
          ]
        }
      }
    },
    "max_ngram_diff": 50
  },
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "analyzer": "my_analyzer",
        "search_analyzer": "standard"
      }
    }
  }
}
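The idea: at index time the title is split into ngrams of length 2 to 10 (including the full gram ikea), while the standard search analyzer leaves the query as the single token ikea. Both "ikea b.v." and "mikea" contain that gram and therefore match, but the shorter field produces fewer grams, so the closer-to-exact match ranks higher (note the scores below). You can inspect the index-time tokens with _analyze (a quick check against the index created above, here named normal as in the results):

GET /normal/_analyze
{
  "field": "title",
  "text": "ikea"
}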
Search Query:
{
  "query": {
    "match": {
      "title": "ikea"
    }
  }
}
Search Result:
"hits": [
{
"_index": "normal",
"_type": "_doc",
"_id": "4",
"_score": 0.1499838, <-- note this
"_source": {
"title": "ikea b.v."
}
},
{
"_index": "normal",
"_type": "_doc",
"_id": "1",
"_score": 0.13562363, <-- note this
"_source": {
"title": "mikea"
}
},
{
"_index": "normal",
"_type": "_doc",
"_id": "3",
"_score": 0.083597526,
"_source": {
"title": "maaikeart"
}
}
]