Elasticsearch word_delimiter filter with uppercase token doesn't match - elasticsearch

I built an Elasticsearch index using a custom analyzer which uses the lowercase filter and a custom word_delimiter filter with the keyword tokenizer.
"merged_analyzer": {
"type": "custom",
"tokenizer": "keyword",
"filter": [
"lowercase",
"asciifolding",
"word_delim",
"trim"
]
},
"merged_search_analyzer": {
"type": "custom",
"tokenizer": "keyword",
"filter": [
"lowercase",
"asciifolding"
]
}
"word_delim": {
"type": "word_delimiter",
"catenate_words": true,
"generate_word_parts": false,
"generate_number_parts": false,
"preserve_original": true
}
"properties": {
"lastName": {
"type": "keyword",
"normalizer": "keyword_normalizer",
"fields": {
"merged": {
"type": "text",
"analyzer": "merged_analyzer",
"search_analyzer": "merged_search_analyzer"
}
}
}
}
Then I tried searching for documents containing dash-separated sub-words, e.g. 'Abc-Xyz', using the .merged field. Both 'abc-xyz' and 'abcxyz' (in lowercase) match, which is exactly what I expected, but I want my analyzer to also match input with uppercase letters or trailing whitespace (e.g. 'Abc-Xyz', 'abc-xyz ').
It seems like the trim and lowercase filters have no effect in my analyzer.
Any idea what I could be doing wrong?
I'm using Elasticsearch 6.2.4.

I'm not sure, but it might be that the search analyzer behaves differently from the index analyzer. There are two things you can do to check this:
configure a search_analyzer (https://www.elastic.co/guide/en/elasticsearch/reference/6.2/search-analyzer.html) that analyzes queries using your merged_analyzer
use the Analyze API (https://www.elastic.co/guide/en/elasticsearch/reference/6.2/indices-analyze.html) to check whether your search tokens are as expected
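For example, a minimal sketch of that check with the Analyze API (assuming your index is called my_index):
GET my_index/_analyze
{
  "analyzer": "merged_analyzer",
  "text": "Abc-Xyz"
}

GET my_index/_analyze
{
  "analyzer": "merged_search_analyzer",
  "text": "Abc-Xyz"
}
If the two calls return different token sets, that difference is what's breaking the match.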

Related

Elasticsearch: Search with special characters, open and close parentheses

Hi, I am trying to search for a word which contains the characters '(' and ')' in Elasticsearch, but I am not able to get the expected result.
This is the query I am using:
{
  "query": {
    "query_string": {
      "default_field": "name",
      "query": "\\(Pas\\)ta\""
    }
  }
}
In the results I am getting records with "PASTORS", "PAST", "PASCAL", "PASSION" first. I want the name 'Pizza & (Pas)ta' to come first in the search results, as it is the best match.
Here is the analyzer for the name field in the schema:
"analysis": {
"filter": {
"autocomplete_filter": {
"type": "edge_ngram",
"min_gram": "1",
"max_gram": "20"
}
},
"analyzer": {
"autocomplete": {
"type": "custom",
"tokenizer": "standard",
"filter": [
"lowercase",
"autocomplete_filter"
]
}
}
"name": {
"analyzer": "autocomplete",
"search_analyzer": "standard",
"type": "string"
},
Please help me fix this. Thanks!
You have used the standard tokenizer, which strips ( and ) from the generated tokens. Instead of getting the token (pas)ta, the tokens generated for that word are pas and ta, and hence you are not getting a match for (pas)ta.
Instead of the standard tokenizer you can use the whitespace tokenizer, which retains all the special characters in the name. Change the analyzer definition to the one below:
"analyzer": {
"autocomplete": {
"type": "custom",
"tokenizer": "whitespace",
"filter": [
"lowercase",
"autocomplete_filter"
]
}
}
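You can see the difference between the two tokenizers with the Analyze API (a quick sketch; both tokenizers are built in, so no index is needed):
POST _analyze
{
  "tokenizer": "standard",
  "text": "Pizza & (Pas)ta"
}

POST _analyze
{
  "tokenizer": "whitespace",
  "text": "Pizza & (Pas)ta"
}
The standard tokenizer should produce [Pizza, Pas, ta], while the whitespace tokenizer should produce [Pizza, &, (Pas)ta], keeping the parentheses intact.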

Why is my Elasticsearch prefix query case-sensitive despite using lowercase filters on both index and search?

The Problem
I am working on an autocompleter using ElasticSearch 6.2.3. I would like my query results (a list of pages with a Name field) to be ordered using the following priority:
Prefix match at start of "Name" (Prefix query)
Any other exact (whole word) match within "Name" (Term query)
Fuzzy match (this is currently done on a different field from Name using an ngram tokenizer ... so I assume it cannot be relevant to my problem, but I would like to apply this to the Name field as well)
My Attempted Solution
I will be using a Bool/Should query consisting of three queries (corresponding to the three priorities above), using boost to define relative importance.
The issue I am having is with the Prefix query - it appears to not be lowercasing the search query despite my search analyzer having the lowercase filter. For example, the below query returns "Harry Potter" for 'harry' but returns zero results for 'Harry':
{ "query": { "prefix": { "Name.raw" : "Harry" } } }
I have verified using the _analyze API that both my analyzers do indeed lowercase the text "Harry" to "harry". Where am I going wrong?
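(For reference, that verification looks roughly like this, with myIndex standing in for the index name:)
GET myIndex/_analyze
{
  "analyzer": "keywordAnalyzer",
  "text": "Harry"
}

GET myIndex/_analyze
{
  "analyzer": "pageSearchAnalyzer",
  "text": "Harry"
}
Both calls return the lowercased token harry.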
From the ES documentation I understand I need to analyze the Name field in two different ways to enable use of both Prefix and Term queries:
using the "keyword" tokenizer to enable the Prefix query (I have applied this on a .raw field)
using a standard analyzer to enable the Term query (I have applied this on the Name field)
I have checked duplicate questions such as this one but the answers have not helped
My mapping and settings are below
ES Index Mapping
{
  "myIndex": {
    "mappings": {
      "pages": {
        "properties": {
          "Id": {},
          "Name": {
            "type": "text",
            "fields": {
              "raw": {
                "type": "text",
                "analyzer": "keywordAnalyzer",
                "search_analyzer": "pageSearchAnalyzer"
              }
            },
            "analyzer": "pageSearchAnalyzer"
          },
          "Tokens": {} // Other fields not important for this question
        }
      }
    }
  }
}
ES Index Settings
{
  "myIndex": {
    "settings": {
      "index": {
        "analysis": {
          "filter": {
            "ngram": {
              "type": "edgeNGram",
              "min_gram": "2",
              "max_gram": "15"
            }
          },
          "analyzer": {
            "keywordAnalyzer": {
              "filter": [
                "trim",
                "lowercase",
                "asciifolding"
              ],
              "type": "custom",
              "tokenizer": "keyword"
            },
            "pageSearchAnalyzer": {
              "filter": [
                "trim",
                "lowercase",
                "asciifolding"
              ],
              "type": "custom",
              "tokenizer": "standard"
            },
            "pageIndexAnalyzer": {
              "filter": [
                "trim",
                "lowercase",
                "asciifolding",
                "ngram"
              ],
              "type": "custom",
              "tokenizer": "standard"
            }
          }
        },
        "number_of_replicas": "1",
        "uuid": "l2AXoENGRqafm42OSWWTAg",
        "version": {}
      }
    }
  }
}
Prefix queries don't analyze the search term, so the text you pass in bypasses whatever would be used as the search analyzer (in your case, the configured search_analyzer: pageSearchAnalyzer). Harry is therefore evaluated as-is, directly against the keyword-tokenized, custom-filtered harry potter that resulted from applying keywordAnalyzer at index time.
In your case here, you'll need to do one of a few different things:
Since you're using a lowercase filter on the field, you could just always pass lowercase terms to your prefix query, using application-side lowercasing if necessary (see the sketch just below this list)
Run a match query against an edge_ngram-analyzed field instead of a prefix query, as described in the ES search_analyzer docs
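For the first option, the sketch is just your original query with a lowercased term:
{ "query": { "prefix": { "Name.raw" : "harry" } } }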
Here's an example of the latter:
1) Create the index w/ ngram analyzer and (recommended) standard search analyzer
PUT my_index
{
  "settings": {
    "index": {
      "analysis": {
        "filter": {
          "ngram": {
            "type": "edgeNGram",
            "min_gram": "2",
            "max_gram": "15"
          }
        },
        "analyzer": {
          "pageIndexAnalyzer": {
            "filter": [
              "trim",
              "lowercase",
              "asciifolding",
              "ngram"
            ],
            "type": "custom",
            "tokenizer": "keyword"
          }
        }
      }
    }
  },
  "mappings": {
    "pages": {
      "properties": {
        "name": {
          "type": "text",
          "fields": {
            "ngram": {
              "type": "text",
              "analyzer": "pageIndexAnalyzer",
              "search_analyzer": "standard"
            }
          }
        }
      }
    }
  }
}
2) Index some sample docs
POST my_index/pages/_bulk
{"index":{}}
{"name":"Harry Potter"}
{"index":{}}
{"name":"Hermione Granger"}
3) Run a match query against the ngram field
POST my_index/pages/_search
{
  "query": {
    "match": {
      "name.ngram": {
        "query": "Har",
        "operator": "and"
      }
    }
  }
}
I think it is better to use a match_phrase_prefix query without the .keyword suffix. Check the docs here: https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-match-query-phrase-prefix.html
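A minimal sketch of that suggestion, reusing the Name field from the question:
POST my_index/pages/_search
{
  "query": {
    "match_phrase_prefix": {
      "Name": "Harry Pot"
    }
  }
}
Since match_phrase_prefix analyzes its input with the field's search analyzer, the casing of the query text no longer matters here.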

Custom sorting in Elasticsearch

I have some documents in Elasticsearch with the completion suggester. When I search for a value like Stack, the results are shown in the order given below:
Stack Overflow
Stack-Overflow
Stack
StackOver
StackOverflow
I want the result to be displayed in the order:
Stack
StackOver
StackOverflow
Stack Overflow
Stack-Overflow
i.e., the exact matches should come first, before results with spaces or special characters.
TIA
It all depends on the way you are analyzing the string you are querying. I suggest you apply more than one analyzer to the same string field. Below is an example mapping for the "name" field over which you want the autocomplete/suggester feature:
"name": {
"type": "string",
"analyzer": "keyword_analyzer",
"fields": {
"name_ac": {
"type": "string",
"index_analyzer": "string_autocomplete_analyzer",
"search_analyzer": "keyword_analyzer"
}
}
}
Here, keyword_analyzer and string_autocomplete_analyzer are analyzers defined in your index settings. Below is an example:
"keyword_analyzer": {
"type": "custom",
"filter": [
"lowercase"
],
"tokenizer": "keyword"
}
"string_autocomplete_analyzer": {
"type": "custom",
"filter": [
"lowercase"
,
"autocomplete"
],
"tokenizer": "whitespace"
}
Here autocomplete is an analysis filter:
"autocomplete": {
"type": "edgeNGram",
"min_gram": "1",
"max_gram": "10"
}
Having set this up, when searching Elasticsearch for auto suggestions you can use multiMatch queries instead of normal match queries, providing boosts to the individual fields in the multiMatch. Below is an example in Java:
QueryBuilders.multiMatchQuery(yourSearchString,"name^3","name_ac");
You may need to alter the boost (^3) as per your needs.
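If you are querying over the REST API rather than the Java client, the equivalent query would look roughly like this:
{
  "query": {
    "multi_match": {
      "query": "Stack",
      "fields": ["name^3", "name_ac"]
    }
  }
}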
If even this does not satisfy your requirements, you can look at adding one more analyzer which analyzes the string based on its first word, and include that field in the multiMatch as well. Below is an example of such an analyzer:
"first_word_name_analyzer": {
"type": "custom",
"filter": [
"lowercase"
,
"whitespace_merge"
,
"edgengram"
],
"tokenizer": "keyword"
}
With these analysis filters:
"whitespace_merge": {
"pattern": "\s+",
"type": "pattern_replace",
"replacement": " "
},
"edgengram": {
"type": "edgeNGram",
"min_gram": "1",
"max_gram": "32"
}
You may have to experiment with the boost values to reach the optimal results for your requirements. Hope this helps.

Erroneous match using snowball analyzer

I am using the snowball analyzer for stemming. But it has mapped the words "insider" and "inside" to the same stem "insid", which is totally wrong. How can I improve this kind of stemming in Elasticsearch?
That's how the snowball analyzer works; it's simply too aggressive. Based on my own experience, I tend to use a modified English analyzer:
I remove the stemmer filter, as it's too aggressive, just like snowball
I replace it with kstem, which is a lightweight English stemmer. It does a great job
I add a hunspell dictionary token filter to normalize words (instead of the stemmer)
I add the asciifolding filter to normalize letters, so that words such as rôle and role are treated as equal
That would be it:
{
  "settings": {
    "analysis": {
      "filter": {
        "english_hunspell": {
          "type": "hunspell",
          "language": "en_GB"
        },
        "english_stop": {
          "type": "stop",
          "stopwords": "_english_"
        },
        "english_possessive_stemmer": {
          "type": "stemmer",
          "language": "possessive_english"
        }
      },
      "analyzer": {
        "english": {
          "tokenizer": "standard",
          "filter": [
            "asciifolding",
            "english_possessive_stemmer",
            "lowercase",
            "english_stop",
            "kstem",
            "english_hunspell"
          ]
        }
      }
    }
  }
}
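You can confirm the difference with the Analyze API. A sketch, assuming an index named my_index created with the settings above (note the hunspell filter requires an en_GB dictionary installed under the node's config/hunspell/en_GB directory):
POST _analyze
{
  "analyzer": "snowball",
  "text": "insider inside"
}

POST my_index/_analyze
{
  "analyzer": "english",
  "text": "insider inside"
}
The built-in snowball analyzer stems both words to insid, while the modified analyzer should keep insider and inside distinct.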

Using a prefix query in Elasticsearch for an integer field

I am trying to use the prefix query to return a list of possible matches. The prefix field is an integer that is not analyzed. I would expect that:
{
  "prefix": { "id": "1" }
}
would return all documents where the id starts with 1 (e.g. 1, 10, 11, 12, 13, etc.). However, it only returns an exact match (e.g. 1).
Does the prefix query work on integer fields?
I'm still learning Elasticsearch too, but I believe the field has to be a string. You can work around this by using a multi_field type and adding a string field that you can run your prefix query against. I use the following for a number of fields (integers and non-integers) that I need to run prefix queries against:
Under settings.analysis in your mapping, add:
"analyzer": {
"str_filtered_search_analyzer": {
"tokenizer": "keyword",
"filter": [
"lowercase"
]
},
"str_prefix_analyzer": {
"tokenizer": "keyword",
"filter": [
"lowercase",
"prefix"
]
}
}
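Note that str_prefix_analyzer references a prefix token filter that the snippet never defines. A plausible definition (my assumption, not from the original answer) is an edge n-gram filter under the same analysis settings:
"filter": {
  "prefix": {
    "type": "edgeNGram",
    "min_gram": 1,
    "max_gram": 20
  }
}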
And then under mappings.type.properties (replace type with your type, e.g. user):
"id": {
"type": "multi_field",
"fields": {
"id": {
"type": "integer"
},
"prefix": {
"type": "string",
"index_analyzer": "str_prefix_analyzer",
"search_analyzer": "str_filtered_search_analyzer"
}
}
}
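With that in place, a sketch of the lookup would be a match query against the string sub-field rather than a prefix query against the integer:
{
  "query": {
    "match": {
      "id.prefix": "1"
    }
  }
}
The search string is analyzed with str_filtered_search_analyzer into the single token 1, which matches the edge n-grams that str_prefix_analyzer produced at index time for ids like 1, 10, and 13.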
