I have a field indexed with a custom analyzer, using the configuration below:
"COMPNAYNAME" : {
"type" : "text",
"analyzer" : "textAnalyzer"
}
"textAnalyzer" : {
"filter" : [
"lowercase"
],
"char_filter" : [ ],
"type" : "custom",
"tokenizer" : "ngram_tokenizer"
}
"tokenizer" : {
"ngram_tokenizer" : {
"type" : "ngram",
"min_gram" : "2",
"max_gram" : "3"
}
}
When I search for the text "ikea", I get the results below.
Query :
GET company_info_test_1/_search
{
"query": {
"match": {
"COMPNAYNAME": {"query": "ikea"}
}
}
}
Following are the results:
1.mikea
2.likeable
3.maaikeart
4.likeables
5.ikea b.v. <------
6.likeachef
7.ikea breda <------
8.bernikeart
9.ikea duiven
10.mikea media
I'm expecting the exact-match result to be boosted above the rest of the results.
Could you please help me with the best way to index if I have to search with an exact match as well as with fuzziness?
Thanks in advance.
You can use the ngram tokenizer along with "search_analyzer": "standard". Refer to the search_analyzer documentation to know more about it.
As pointed out by @EvaldasBuinauskas, you can also use the edge_ngram tokenizer here, if you want the tokens to be generated from the beginning only and not from the middle.
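To see why substrings like "mikea" match at all, here is a rough Python sketch (an approximation, not Elasticsearch's actual tokenizers) of the tokens the ngram and edge_ngram tokenizers emit for "ikea" with min_gram 2 and max_gram 3:

```python
def ngrams(text, min_gram=2, max_gram=3):
    """Emit every substring of length min_gram..max_gram (like the ngram tokenizer)."""
    return [text[i:i + n]
            for n in range(min_gram, max_gram + 1)
            for i in range(len(text) - n + 1)]

def edge_ngrams(text, min_gram=2, max_gram=3):
    """Emit only prefixes of length min_gram..max_gram (like the edge_ngram tokenizer)."""
    return [text[:n] for n in range(min_gram, min(max_gram, len(text)) + 1)]

print(ngrams("ikea"))       # ['ik', 'ke', 'ea', 'ike', 'kea']
print(edge_ngrams("ikea"))  # ['ik', 'ike']
```

Since "mikea", "likeable", etc. also produce grams such as "ik", "ke", and "ike", they share indexed tokens with the query when the same analyzer is applied at search time, which is exactly why a standard search_analyzer (which keeps the query as the whole token "ikea") helps.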
Adding a working example with index data, mapping, search query, and result
Index Data:
{ "title": "ikea b.v."}
{ "title" : "mikea" }
{ "title" : "maaikeart"}
Index Mapping:
{
"settings": {
"analysis": {
"analyzer": {
"my_analyzer": {
"tokenizer": "my_tokenizer"
}
},
"tokenizer": {
"my_tokenizer": {
"type": "ngram",
"min_gram": 2,
"max_gram": 10,
"token_chars": [
"letter",
"digit"
]
}
}
},
"max_ngram_diff": 50
},
"mappings": {
"properties": {
"title": {
"type": "text",
"analyzer": "my_analyzer",
"search_analyzer": "standard"
}
}
}
}
Search Query:
{
"query": {
"match" : {
"title" : "ikea"
}
}
}
Search Result:
"hits": [
{
"_index": "normal",
"_type": "_doc",
"_id": "4",
"_score": 0.1499838, <-- note this
"_source": {
"title": "ikea b.v."
}
},
{
"_index": "normal",
"_type": "_doc",
"_id": "1",
"_score": 0.13562363, <-- note this
"_source": {
"title": "mikea"
}
},
{
"_index": "normal",
"_type": "_doc",
"_id": "3",
"_score": 0.083597526,
"_source": {
"title": "maaikeart"
}
}
]
Related
I have hundreds of chemicals in my index climate_change.
I'm using an ngram search, and these are the settings I'm using for the index:
{
"settings": {
"index.max_ngram_diff": 30,
"index": {
"analysis": {
"analyzer": {
"analyzer": {
"tokenizer": "test_ngram",
"filter": [
"lowercase"
]
},
"search_analyzer": {
"tokenizer": "test_ngram",
"filter": [
"lowercase"
]
}
},
"tokenizer": {
"test_ngram": {
"type": "edge_ngram",
"min_gram": 1,
"max_gram": 30,
"token_chars": [
"letter",
"digit"
]
}
}
}
}
}
}
My main problem is that if I try to do a query like this one
GET climate_change/_search?size=1000
{
"query": {
"match": {
"description": {
"query":"oxygen"
}
}
}
}
I see that a lot of results have the same score, 7.381186, which is strange:
{
"_index" : "climate_change",
"_type" : "_doc",
"_id" : "XXX",
"_score" : 7.381186,
"_source" : {
"recordtype" : "chemicals",
"description" : "carbon/oxygen"
}
},
{
"_index" : "climate_change",
"_type" : "_doc",
"_id" : "YYY",
"_score" : 7.381186,
"_source" : {
"recordtype" : "chemicals",
"description" : "oxygen"
}
How is that possible?
In the example above, since I'm using ngram and searching for "oxygen" in the description field, I'd expect the second result to have a higher score than the first one.
I've also tried specifying the "standard" and "whitespace" tokenizer types in the settings, but it didn't help.
Maybe it's the '/' character inside the description?
Thanks a lot!
You need to define the analyzer in the mapping for the description field also.
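Regarding the '/' question: without a field-level analyzer, the description field falls back to the standard analyzer, which splits on punctuation such as '/', so both documents end up containing the whole token "oxygen". A rough Python approximation of that splitting behaviour (illustrative only, not the actual Lucene tokenizer):

```python
import re

def standard_like_tokenize(text):
    # Rough approximation: split on anything that is not a letter or digit,
    # then lowercase -- close to what the standard analyzer does for this input.
    return [t.lower() for t in re.findall(r"[A-Za-z0-9]+", text)]

print(standard_like_tokenize("carbon/oxygen"))  # ['carbon', 'oxygen']
print(standard_like_tokenize("oxygen"))         # ['oxygen']
```

Once the custom edge_ngram analyzer is applied to the field (as in the example below), the two documents produce different token sets and the scores diverge as expected.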
Adding a working example with index data, mapping, search query and search result
{
"settings": {
"analysis": {
"analyzer": {
"my_analyzer": {
"tokenizer": "test_ngram",
"filter": [
"lowercase"
]
},
"search_analyzer": {
"tokenizer": "test_ngram",
"filter": [
"lowercase"
]
}
},
"tokenizer": {
"test_ngram": {
"type": "edge_ngram",
"min_gram": 1,
"max_gram": 30,
"token_chars": [
"letter",
"digit"
]
}
}
}
},
"mappings": {
"properties": {
"description": {
"type": "text",
"analyzer": "my_analyzer"
}
}
}
}
Index Data:
{
"recordtype": "chemicals",
"description": "carbon/oxygen"
}
{
"recordtype": "chemicals",
"description": "oxygen"
}
Search Query:
{
"query": {
"match": {
"description": {
"query":"oxygen"
}
}
}
}
Search Result:
"hits": [
{
"_index": "67180160",
"_type": "_doc",
"_id": "2",
"_score": 0.89246297,
"_source": {
"recordtype": "chemicals",
"description": "oxygen"
}
},
{
"_index": "67180160",
"_type": "_doc",
"_id": "1",
"_score": 0.6651374,
"_source": {
"recordtype": "chemicals",
"description": "carbon/oxygen"
}
}
]
For the EN language I have a custom analyser using the porter_stem. I want queries with the words "virus" and "viruses" to return the same results.
What I am finding is that porter stems virus->viru and viruses->virus. Consequently I get differing results.
How can I handle this?
You can achieve your use case (queries with the words "virus" and "viruses" returning the same results) by using the snowball token filter,
which stems words to their root form.
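The difference can be illustrated with a toy suffix-stripper (purely illustrative; neither function is the real Porter or Snowball algorithm): blindly stripping a trailing "s" is what turns "virus" into "viru", while a stemmer with better suffix rules maps both forms to "virus".

```python
def naive_stem(word):
    # Blindly strip a trailing "s" -- this is what turns "virus" into "viru".
    return word[:-1] if word.endswith("s") and not word.endswith("ss") else word

def toy_snowball_stem(word):
    # Illustrative only: strip plural "-es"/"-s", but never strip the "s"
    # of a word ending in "-us" (virus, campus, ...).
    if word.endswith("uses"):
        return word[:-2]          # viruses -> virus
    if word.endswith("us"):
        return word               # virus stays virus
    if word.endswith("es"):
        return word[:-2]
    if word.endswith("s") and not word.endswith("ss"):
        return word[:-1]
    return word

print(naive_stem("virus"))           # 'viru'  (the mismatch you observed)
print(toy_snowball_stem("virus"))    # 'virus'
print(toy_snowball_stem("viruses"))  # 'virus'
```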
Adding a working example with index data, mapping, search query, and search result
Index Mapping:
{
"settings": {
"analysis": {
"analyzer": {
"my_analyzer": {
"tokenizer": "standard",
"filter": [
"lowercase",
"my_snow"
]
}
},
"filter": {
"my_snow": {
"type": "snowball",
"language": "English"
}
}
}
},
"mappings": {
"properties": {
"desc": {
"type": "text",
"analyzer": "my_analyzer"
}
}
}
}
Analyze API
GET /_analyze
{
"analyzer" : "my_analyzer",
"text" : "viruses"
}
The following tokens are generated:
{
"tokens": [
{
"token": "virus",
"start_offset": 0,
"end_offset": 7,
"type": "<ALPHANUM>",
"position": 0
}
]
}
Index Data:
{
"desc":"viruses"
}
{
"desc":"virus"
}
Search Query:
{
"query": {
"match": {
"desc": {
"query": "viruses"
}
}
}
}
Search Result:
"hits": [
{
"_index": "65707743",
"_type": "_doc",
"_id": "1",
"_score": 0.18232156,
"_source": {
"desc": "viruses"
}
},
{
"_index": "65707743",
"_type": "_doc",
"_id": "2",
"_score": 0.18232156,
"_source": {
"desc": "virus"
}
}
]
I have an Elasticsearch index with customer information.
I have some issues searching for results with accents.
For example, I have {name: 'anais'} and {name: 'anaïs'}.
Running
GET /my-index/_search
{
"size": 25,
"query": {
"match": {"name": "anaïs"}
}
}
I would like to get both for this query, but in this case I only get anaïs.
GET /my-index/_search
{
"size": 25,
"query": {
"match": {"name": "anais"}
}
}
I would like to get anais and anaïs, but in this case I only get anais.
I tried adding an analyser
PUT /my-new-celebrity/_settings
{
"analysis": {
"analyzer": {
"default": {
"type": "custom",
"tokenizer": "standard",
"filter": [
"lowercase",
"asciifolding"
]
}
}
}
}
But in this case, for both searches I only get anais.
Looks like you forgot to apply your custom default analyzer on your name field; below is a working example:
Index definition with mapping and settings
{
"settings": {
"analysis": {
"analyzer": {
"default": {
"type": "custom",
"tokenizer": "standard",
"filter": [
"lowercase",
"asciifolding"
]
}
}
}
},
"mappings" : {
"properties" :{
"name" : {
"type" : "text",
"analyzer" : "default" // note this
}
}
}
}
Index sample docs
{
"name" : "anais"
}
{
"name" : "anaïs"
}
Search query same as yours
{
"size": 25,
"query": {
"match": {
"name": "anaïs"
}
}
}
And the expected result, with both documents returned:
"hits": [
{
"_index": "myindexascii",
"_type": "_doc",
"_id": "1",
"_score": 0.18232156,
"_source": {
"name": "anaïs"
}
},
{
"_index": "myindexascii",
"_type": "_doc",
"_id": "2",
"_score": 0.18232156,
"_source": {
"name": "anais"
}
}
]
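For reference, the effect of the asciifolding filter can be approximated in plain Python (a rough sketch, not the actual Lucene filter): decompose accented characters and drop the combining marks, so "anaïs" and "anais" index to the same token.

```python
import unicodedata

def ascii_fold(text):
    # Approximation of the asciifolding filter: decompose accented
    # characters (NFD) and drop the combining marks.
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(c for c in decomposed if not unicodedata.combining(c))

print(ascii_fold("anaïs"))  # 'anais'
```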
This question is similar to my other question, which Val answered.
I have an index containing 3 documents.
{
"firstname": "Anne",
"lastname": "Borg",
}
{
"firstname": "Leanne",
"lastname": "Ray"
},
{
"firstname": "Anne",
"middlename": "M",
"lastname": "Stone"
}
When I search for "Ann", I would like elastic to return all 3 of these documents (because they all match the term "Ann" to a degree). BUT, I would like Leanne Ray to have a lower score (relevance ranking) because the search term "Ann" appears at a later position in this document than the term appears in the other two documents.
Here are my index settings...
{
"settings": {
"analysis": {
"analyzer": {
"my_analyzer": {
"filter": [
"lowercase"
],
"type": "custom",
"tokenizer": "my_tokenizer"
}
},
"tokenizer": {
"my_tokenizer": {
"token_chars": [
"letter",
"digit",
"custom"
],
"custom_token_chars": "'-",
"min_gram": "1",
"type": "ngram",
"max_gram": "2"
}
}
}
},
"mappings": {
"properties": {
"firstname": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword"
}
},
"copy_to": [
"full_name"
]
},
"lastname": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword"
}
},
"copy_to": [
"full_name"
]
},
"middlename": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
},
"copy_to": [
"full_name"
]
},
"full_name": {
"type": "text",
"analyzer": "my_analyzer",
"fields": {
"keyword": {
"type": "keyword"
}
}
}
}
}
}
The following query brings back the expected documents, but attributes a higher score to Leanne Ray than to Anne Borg.
{
"query": {
"bool": {
"must": {
"query_string": {
"query": "Ann",
"fields": ["full_name"]
}
},
"should": {
"match": {
"full_name": "Ann"}
}
}
}
}
Here are the results...
"hits": [
{
"_index": "contacts_4",
"_type": "_doc",
"_id": "2",
"_score": 6.6333585,
"_source": {
"firstname": "Anne",
"middlename": "M",
"lastname": "Stone"
}
},
{
"_index": "contacts_4",
"_type": "_doc",
"_id": "1",
"_score": 6.142234,
"_source": {
"firstname": "Leanne",
"lastname": "Ray"
}
},
{
"_index": "contacts_4",
"_type": "_doc",
"_id": "3",
"_score": 6.079495,
"_source": {
"firstname": "Anne",
"lastname": "Borg"
}
}
Using an ngram token filter and an ngram tokenizer together seems to fix this problem...
{
"settings": {
"analysis": {
"analyzer": {
"my_analyzer": {
"filter": [
"ngram"
],
"tokenizer": "ngram"
}
}
}
},
"mappings": {
"properties": {
"firstname": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword"
}
},
"copy_to": [
"full_name"
]
},
"lastname": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword"
}
},
"copy_to": [
"full_name"
]
},
"middlename": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword"
}
},
"copy_to": [
"full_name"
]
},
"full_name": {
"type": "text",
"analyzer": "my_analyzer",
"search_analyzer": "my_analyzer"
}
}
}
}
The same query brings back the expected results with the desired relative scoring. Why does this work? Note that above, I am using an ngram tokenizer with a lowercase filter; the only difference here is that I am using an ngram filter instead of the lowercase filter.
Here are the results. Notice that Leanne Ray scored lower than both Anne Borg and Anne M Stone, as desired.
"hits": [
{
"_index": "contacts_4",
"_type": "_doc",
"_id": "3",
"_score": 4.953257,
"_source": {
"firstname": "Anne",
"lastname": "Borg"
}
},
{
"_index": "contacts_4",
"_type": "_doc",
"_id": "2",
"_score": 4.87168,
"_source": {
"firstname": "Anne",
"middlename": "M",
"lastname": "Stone"
}
},
{
"_index": "contacts_4",
"_type": "_doc",
"_id": "1",
"_score": 1.0364896,
"_source": {
"firstname": "Leanne",
"lastname": "Ray"
}
}
By the way, this query also brings back a whole lot of false-positive results when the index contains other documents as well. It's not such a problem because these false positives have very low scores relative to the scores of the desirable hits, but it's still not ideal. For example, if I add {firstname: Gideon, lastname: Grossma} to the index, the above query will bring back that document in the result set as well, albeit with a much lower score than the documents containing the string "Ann".
The answer is the same as in the linked thread. Since you're ngramming all the indexed data, it works the same way with "Ann" as with "Anne". You'll get the exact same response (see below), though with different scores:
"hits" : [
{
"_index" : "test",
"_type" : "_doc",
"_id" : "5Jr-DHIBhYuDqANwSeiw",
"_score" : 4.8442974,
"_source" : {
"firstname" : "Anne",
"lastname" : "Borg"
}
},
{
"_index" : "test",
"_type" : "_doc",
"_id" : "5pr-DHIBhYuDqANwSeiw",
"_score" : 4.828779,
"_source" : {
"firstname" : "Anne",
"middlename" : "M",
"lastname" : "Stone"
}
},
{
"_index" : "test",
"_type" : "_doc",
"_id" : "5Zr-DHIBhYuDqANwSeiw",
"_score" : 0.12874341,
"_source" : {
"firstname" : "Leanne",
"lastname" : "Ray"
}
}
]
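To see why Leanne matches at all: with the default settings of the ngram token filter (min_gram 1, max_gram 2), every gram of the query "ann" also occurs in both "anne" and "leanne", so all three documents match and only the scores (term frequencies, field length) differ. A rough Python sketch of that gram overlap (not Lucene's actual analysis chain):

```python
def grams(text, min_gram=1, max_gram=2):
    """Set of all substrings of length min_gram..max_gram, lowercased."""
    text = text.lower()
    return {text[i:i + n]
            for n in range(min_gram, max_gram + 1)
            for i in range(len(text) - n + 1)}

query = grams("ann")  # {'a', 'n', 'an', 'nn'}
for name in ["anne", "leanne"]:
    # Every gram of the query also occurs in the name, so every document
    # matches; only the relevance scores differ.
    print(name, query.issubset(grams(name)))  # both print True
```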
UPDATE
Here is a modified query that you can use to check for parts (i.e. ann vs anne). Again, the casing makes no difference here, since the analyzer lowercases everything before indexing.
{
"query": {
"bool": {
"must": {
"query_string": {
"query": "ann",
"fields": [
"full_name"
]
}
},
"should": [
{
"match_phrase_prefix": {
"firstname": {
"query": "ann",
"boost": "10"
}
}
},
{
"match_phrase_prefix": {
"lastname": {
"query": "ann",
"boost": "10"
}
}
}
]
}
}
}
I have an index containing 3 documents.
{
"firstname": "Anne",
"lastname": "Borg",
}
{
"firstname": "Leanne",
"lastname": "Ray"
},
{
"firstname": "Anne",
"middlename": "M",
"lastname": "Stone"
}
When I search for "Anne", I would like elastic to return all 3 of these documents (because they all match the term "Anne" to a degree). BUT, I would like Leanne Ray to have a lower score (relevance ranking) because the search term "Anne" appears at a later position in this document than the term appears in the other two documents.
Initially, I was using an ngram tokenizer. I also have a generated field in my index's mapping called "full_name" that contains the firstname, middlename and lastname strings. When I searched for "Anne", all 3 documents are in the result set. However, Anne M Stone has the same score as Leanne Ray. Anne M Stone should have a higher score than Leanne.
To address this, I changed my ngram tokenizer to an edge_ngram tokenizer. This had the effect of completely leaving out Leanne Ray from the result set. We would like to keep this result in the result set - because it still contains the query string - but with a lower score than the other two better matches.
I read somewhere that it may be possible to use the edge ngram filter alongside an ngram filter in the same index. If so, how should I recreate my index to do so? Is there a better solution?
Here are the initial index settings.
{
"settings": {
"analysis": {
"analyzer": {
"my_analyzer": {
"filter": [
"lowercase"
],
"type": "custom",
"tokenizer": "my_tokenizer"
}
},
"tokenizer": {
"my_tokenizer": {
"token_chars": [
"letter",
"digit",
"custom"
],
"custom_token_chars": "'-",
"min_gram": "3",
"type": "ngram",
"max_gram": "4"
}
}
}
},
"mappings": {
"properties": {
"contact_id": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"firstname": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword"
}
},
"copy_to": [
"full_name"
]
},
"lastname": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword"
}
},
"copy_to": [
"full_name"
]
},
"middlename": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
},
"copy_to": [
"full_name"
]
},
"full_name": {
"type": "text",
"analyzer": "my_analyzer",
"fields": {
"keyword": {
"type": "keyword"
}
}
}
}
}
}
And here is my query
{
"query": {
"bool": {
"should": [
{
"query_string": {
"query": "Anne",
"fields": [
"full_name"
]
}
}
]
}
}
}
This brought back these results
"hits": {
"total": {
"value": 3,
"relation": "eq"
},
"max_score": 0.59604377,
"hits": [
{
"_index": "contacts_15",
"_type": "_doc",
"_id": "3",
"_score": 0.59604377,
"_source": {
"firstname": "Anne",
"lastname": "Borg"
}
},
{
"_index": "contacts_15",
"_type": "_doc",
"_id": "1",
"_score": 0.5592099,
"_source": {
"firstname": "Anne",
"middlename": "M",
"lastname": "Stone"
}
},
{
"_index": "contacts_15",
"_type": "_doc",
"_id": "2",
"_score": 0.5592099,
"_source": {
"firstname": "Leanne",
"lastname": "Ray"
}
}
]
}
If I instead use an edge ngram tokenizer, this is what the index's settings look like...
{
"settings": {
"max_ngram_diff": "10",
"analysis": {
"analyzer": {
"my_analyzer": {
"filter": [
"lowercase"
],
"type": "custom",
"tokenizer": ["edge_ngram_tokenizer"]
}
},
"tokenizer": {
"edge_ngram_tokenizer": {
"token_chars": [
"letter",
"digit",
"custom"
],
"custom_token_chars": "'-",
"min_gram": "2",
"type": "edge_ngram",
"max_gram": "10"
}
}
}
},
"mappings": {
"properties": {
"contact_id": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"firstname": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword"
}
},
"copy_to": [
"full_name"
]
},
"lastname": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword"
}
},
"copy_to": [
"full_name"
]
},
"middlename": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
},
"copy_to": [
"full_name"
]
},
"full_name": {
"type": "text",
"analyzer": "my_analyzer",
"fields": {
"keyword": {
"type": "keyword"
}
}
}
}
}
}
and that same query brings back this new result set...
"hits": {
"total": {
"value": 2,
"relation": "eq"
},
"max_score": 1.5131824,
"hits": [
{
"_index": "contacts_16",
"_type": "_doc",
"_id": "3",
"_score": 1.5131824,
"_source": {
"firstname": "Anne",
"middlename": "M",
"lastname": "Stone"
}
},
{
"_index": "contacts_16",
"_type": "_doc",
"_id": "1",
"_score": 1.4100108,
"_source": {
"firstname": "Anne",
"lastname": "Borg"
}
}
]
}
You can keep using ngram (i.e. the first solution), but then you need to change your query to improve the relevance. The way it works is that you add a boosted multi_match query in a should clause to increase the score of documents whose first or last name matches the input exactly:
{
"query": {
"bool": {
"must": [
{
"query_string": {
"query": "Anne",
"fields": [
"full_name"
]
}
}
],
"should": [
{
"multi_match": {
"query": "Anne",
"fields": [
"firstname",
"lastname"
],
"boost": 10
}
}
]
}
}
}
This query would bring Anne Borg and Anne M Stone before Leanne Ray.
UPDATE
Here is how I arrived at the results.
First I created a test index with the exact same settings/mappings as you have added to your question:
PUT test
{ ... copy/pasted mappings/settings ... }
Then I added the three sample documents you provided:
POST test/_doc/_bulk
{"index":{}}
{"firstname":"Anne","lastname":"Borg"}
{"index":{}}
{"firstname":"Leanne","lastname":"Ray"}
{"index":{}}
{"firstname":"Anne","middlename":"M","lastname":"Stone"}
Finally, if you run my query above, you get the following results, which is exactly what you expect (look at the scores):
{
"hits" : {
"total" : {
"value" : 3,
"relation" : "eq"
},
"max_score" : 5.1328206,
"hits" : [
{
"_index" : "test",
"_type" : "_doc",
"_id" : "4ZqbDHIBhYuDqANwQ-ih",
"_score" : 5.1328206,
"_source" : {
"firstname" : "Anne",
"lastname" : "Borg"
}
},
{
"_index" : "test",
"_type" : "_doc",
"_id" : "45qbDHIBhYuDqANwQ-ih",
"_score" : 5.0862665,
"_source" : {
"firstname" : "Anne",
"middlename" : "M",
"lastname" : "Stone"
}
},
{
"_index" : "test",
"_type" : "_doc",
"_id" : "4pqbDHIBhYuDqANwQ-ih",
"_score" : 0.38623023,
"_source" : {
"firstname" : "Leanne",
"lastname" : "Ray"
}
}
]
}
}