How to require a minimum length of matching characters from the query in Elasticsearch

I want to require at least 5 consecutive matching characters from the query before it matches a particular field. The matching can be somewhat fuzzy (ideally, the longer the matching sequence, the fuzzier it may be).

In this example I defined an edge n-gram filter with a minimum of 5 characters per gram. That way a match requires at least 5 characters.
PUT teste
{
"mappings": {
"properties": {
"name": {
"type": "text",
"fields": {
"ngram": {
"type": "text",
"analyzer": "shingle_analyzer"
}
}
}
}
},
"settings": {
"analysis": {
"analyzer": {
"shingle_analyzer": {
"type": "custom",
"tokenizer": "standard",
"filter": [
"lowercase",
"shingle_filter"
]
}
},
"filter": {
"shingle_filter": {
"type": "edge_ngram",
"min_gram": 5,
"max_gram": 8
}
}
}
}
}
POST teste/_doc
{
"name":"example text match fiver terms sequence"
}
GET teste/_search
{
"query": {
"match": {
"name.ngram": "exampl"
}
}
}
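As a quick check of that behavior, a query shorter than 5 characters should return no hits, because the edge_ngram filter is also applied at search time (no separate search_analyzer is configured) and it drops tokens shorter than min_gram. This extra request is only an illustration:
GET teste/_search
{
  "query": {
    "match": {
      "name.ngram": "exam"
    }
  }
}
With "exam" (4 characters) nothing should match, while "exampl" above produces the grams "examp" and "exampl" and therefore matches the indexed document.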

Related

What is the approach for edge_ngram search query to have a correct word slop count?

I'm trying to build a search-as-you-type feature with some fuzziness across n fields. The distance between tokens also matters, so I've decided to use edge_ngrams with a bool query. But since the tokens are edge_ngrams, the slop is calculated the same way, by ngrams instead of words.
Initial conditions:
Index settings PUT http://localhost:9200/test
{
"mappings": {
"properties": {
"someField": {
"type": "text",
"analyzer": "autocomplete",
"search_analyzer": "autocomplete_search"
},
"anotherField": {
"type": "text"
}
}
},
"settings": {
"number_of_shards": "1",
"number_of_replicas": "1",
"analysis": {
"analyzer": {
"autocomplete": {
"tokenizer": "autocomplete",
"filter": [
"lowercase"
]
},
"autocomplete_search": {
"tokenizer": "lowercase"
}
},
"tokenizer": {
"autocomplete": {
"type": "edge_ngram",
"min_gram": 2,
"max_gram": 20,
"token_chars": [
"letter"
]
}
}
}
}
}
Sample document POST http://localhost:9200/test/_create/1
{
"someField": "one two three four five six seven eight nine ten eleven",
"anotherField": "one two three four five six seven eight nine ten eleven"
}
Search request POST http://localhost:9200/test/_search?typed_keys=true
{
"highlight": {
"fields": {
"someField": {},
"anotherField": {}
}
},
"query": {
"bool": {
"must": {
"dis_max": {
"tie_breaker": 0.9,
"queries": [
{
"match_phrase": {
"someField": {
"query": "thre elev",
"slop": 24
}
}
},
{
"match_phrase": {
"anotherField": {
"query": "thre elev",
"slop": 24
}
}
}
]
}
},
"filter": [
// my custom filters...
]
}
}
}
I expect to see the example document when searching for "thre elev", and this works fine.
But the problem is that there are 7 words between "three" and "eleven", while the edge_ngram tokenization inflates the token positions, so the real slop is higher and unpredictable. A slop of 24 works, but that doesn't seem normal, I guess...
Is there any way to tune the match_phrase search to avoid this?
I don't have enough ES expertise to work out an alternative for this type of search; could anyone suggest another approach?
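One possible direction, sketched only and not verified against this exact setup: generate the edge_ngrams with a token filter instead of a tokenizer. Grams produced by a token filter keep the position of the word they were generated from, so match_phrase slop should be counted in words again, and a slop of 7 (the number of words between "three" and "eleven") should be enough. The filter name autocomplete_filter below is made up for the example:
PUT http://localhost:9200/test
{
  "settings": {
    "analysis": {
      "filter": {
        "autocomplete_filter": {
          "type": "edge_ngram",
          "min_gram": 2,
          "max_gram": 20
        }
      },
      "analyzer": {
        "autocomplete": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "autocomplete_filter"
          ]
        },
        "autocomplete_search": {
          "tokenizer": "lowercase"
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "someField": {
        "type": "text",
        "analyzer": "autocomplete",
        "search_analyzer": "autocomplete_search"
      }
    }
  }
}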

Wildcard / regexp in a phrase which has space

Create an index:
Here I am using edge_ngram:
PUT my_index
{
"settings": {
"analysis": {
"analyzer": {
"my_analyzer": {
"tokenizer": "my_tokenizer"
}
},
"tokenizer": {
"my_tokenizer": {
"type": "edge_ngram",
"min_gram": 3,
"max_gram": 3,
"token_chars": [
"letter",
"digit"
]
}
}
}
},
"mappings": {
"my_type": {
"properties": {
"city": {
"type": "keyword",
"fields": {
"raw": {
"type": "text",
"analyzer": "my_analyzer"
}
}
}
}
}
}
}
POST my_index/my_type/1
{
"text": "2 #Quick Foxes lived and died"
}
POST my_index/my_type/2
{
"text": "2 #Quick Foxes lived died"
}
Now when we search
GET my_index/my_type/_search
{
"query": {
"query_string": {
"default_operator" : "AND",
"query" : "f* d*",
"fields": ["text.raw"]
}
}
}
Only document 2 should be returned, but nothing comes back.
When you try this:
GET my_index/my_type/_search
{
"query": {
"query_string": {
"default_operator" : "AND",
"query" : "f* d*",
"fields": ["text"]
}
}
}
It will return both.
If we have an index with a huge amount of data and we want to run wildcard searches, how should we do it?
A single keyword works, but if we add a phrase like the one in the example above, it won't give any proper result.
To generate a regex expression you can use these websites:
Generate the regex expression here: http://buildregex.com/
and test your string against the generated expression here: https://regex101.com/

Elasticsearch unordered partial phrase matching with ngram

Maybe I am going down the wrong route, but I am trying to set up Elasticsearch to do partial phrase matching that returns parts of words regardless of their order in the sentence.
E.g. I have the following input:
test name
tester name
name test
namey mcname face
test
I want to search for "test name" (or "name test") and have all of these returned, ideally sorted by score. I can do partial searches and out-of-order searches, but I am not able to combine the two. I am sure this is a very common issue.
Below are my settings:
{
"myIndex": {
"settings": {
"index": {
"analysis": {
"filter": {
"mynGram": {
"type": "nGram",
"min_gram": "2",
"max_gram": "5"
}
},
"analyzer": {
"custom_analyser": {
"filter": [
"lowercase",
"mynGram"
],
"type": "custom",
"tokenizer": "my_tokenizer"
}
},
"tokenizer": {
"my_tokenizer": {
"type": "nGram",
"min_gram": "2",
"max_gram": "5"
}
}
}
}
}
}
}
My mapping
{
"myIndex": {
"mappings": {
"myIndex": {
"properties": {
"name": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword"
}
},
"analyzer": "custom_analyser"
}
}
}
}
}
}
And my query
{
"query": {
"bool": {
"must": [{
"match_phrase": {
"name": {
"query": "test name",
"slop": 5
}
}
}]
}
}
}
Any help would be greatly appreciated.
Thanks in advance
Not sure if you found your solution - I bet you did because this is such an old post - but I was hunting for the same thing and found this: Query-Time Search-as-you-type.
Look up slop.
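For reference, a minimal sketch of that query-time approach, assuming the name field were indexed with the standard analyzer rather than the ngram analyzer above (untested against this exact mapping). match_phrase treats transposed terms as a slop of 2, so a small slop lets "name test" match documents that contain "test name":
{
  "query": {
    "match_phrase": {
      "name": {
        "query": "test name",
        "slop": 2
      }
    }
  }
}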

Get exact match after doing mapping as not_analyzed

I have an Elasticsearch type that I mapped as below:
"mappings": {
"jardata": {
"properties": {
"groupID": {
"index": "not_analyzed",
"type": "string"
},
"artifactID": {
"index": "not_analyzed",
"type": "string"
},
"directory": {
"type": "string"
},
"jarFileName": {
"index": "not_analyzed",
"type": "string"
},
"version": {
"index": "not_analyzed",
"type": "string"
}
}
}
}
I am indexing directory as analyzed since I want to give only the last folder and still get results, but when I want to search for a specific directory I need to give the whole path, since the same folder name can exist under two paths. The problem is that, because the field is analyzed, the search returns all matching data instead of the specific path I want.
In short, I want the field to act like both analyzed and not_analyzed. Is there a way to do that?
Let's say you have the following document indexed:
{
"directory": "/home/docs/public"
}
The standard analyzer is not enough in your case, as it will create the following terms while indexing:
[home, docs, public]
Note that it misses the [/home/docs/public] token - characters like "/" act as separators here.
One solution could be to use the ngram tokenizer with the punctuation character class in the token_chars list. Elasticsearch would then treat "/" as if it were a letter or digit. This would allow searching with the following tokens:
[/hom, /home, ..., /home/docs/publi, /home/docs/public, ..., /docs/public, etc...]
Index mapping:
{
"settings": {
"analysis": {
"analyzer": {
"ngram_analyzer": {
"tokenizer": "my_tokenizer"
}
},
"tokenizer": {
"my_tokenizer": {
"type": "ngram",
"min_gram": 4,
"max_gram": 18,
"token_chars": [
"letter",
"digit",
"punctuation"
]
}
}
}
},
"mappings": {
"jardata": {
"properties": {
"directory": {
"type": "string",
"analyzer": "ngram_analyzer"
}
}
}
}
}
Now both search queries:
{
"query": {
"bool" : {
"must" : {
"term" : {
"directory": "/docs/private"
}
}
}
}
}
and
{
"query": {
"bool" : {
"must" : {
"term" : {
"directory": "/home/docs/private"
}
}
}
}
}
will return the indexed document.
One thing you have to consider is the maximum token length specified in the "max_gram" setting. For directory paths it may need to be higher.
An alternative solution is to use the whitespace tokenizer, which breaks the phrase into terms only on whitespace, combined with an ngram filter, using the following mapping:
{
"settings": {
"analysis": {
"filter": {
"ngram_filter": {
"type": "ngram",
"min_gram": 4,
"max_gram": 20
}
},
"analyzer": {
"my_analyzer": {
"type": "custom",
"tokenizer": "whitespace",
"filter": [
"lowercase",
"ngram_filter"
]
}
}
}
},
"mappings": {
"jardata": {
"properties": {
"directory": {
"type": "string",
"analyzer": "my_analyzer"
}
}
}
}
}
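With this second mapping, term queries like the ones above should still match, because the ngram filter emits the lowercased path substrings as separate tokens. For example (a sketch, assuming the "/home/docs/public" document above is indexed):
{
  "query": {
    "term": {
      "directory": "/docs/public"
    }
  }
}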
Alternatively, update the mapping of the directory field to contain a raw sub-field like this:
"directory": {
"type": "string",
"fields": {
"raw": {
"index": "not_analyzed",
"type": "string"
}
}
}
And modify your query to use directory.raw, which is treated as not_analyzed. Refer to this.
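For example, a sketch of this second suggestion against the same "/home/docs/public" document:
{
  "query": {
    "term": {
      "directory.raw": "/home/docs/public"
    }
  }
}
Since directory.raw is not_analyzed, only the exact full path matches on this sub-field; a partial path such as "/docs/public" would return nothing here.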

Elasticsearch custom analyzer with ngram and without word delimiter on hyphens

I am trying to index strings that contain hyphens but do not contain spaces, periods or any other punctuation. I do not want to split up the words based on hyphens, instead I would like to have the hyphens be part of the indexed text.
For example, my 6 text strings would be:
magazineplayon
magazineofhorses
online-magazine
best-magazine
friend-of-magazines
magazineplaygames
I would like to be able to search these strings for text containing "play" or for text starting with "magazine".
I have been able to use ngram to make the "contains play" search work properly. However, the hyphen causes the text to split, so the results include strings where "magazine" appears in a word after a hyphen. I only want strings that start with "magazine" to appear.
Based on the sample above, only these 3 should appear when searching for strings beginning with "magazine":
magazineplayon
magazineofhorses
magazineplaygames
Please help with my ElasticSearch Index Sample:
DELETE /sample
PUT /sample
{
"settings": {
"index.number_of_shards":5,
"index.number_of_replicas": 0,
"analysis": {
"filter": {
"nGram_filter": {
"type": "nGram",
"min_gram": 2,
"max_gram": 20,
"token_chars": [
"letter",
"digit"
]
},
"word_delimiter_filter": {
"type": "word_delimiter",
"preserve_original": true,
"catenate_all" : true
}
},
"analyzer": {
"ngram_index_analyzer": {
"type" : "custom",
"tokenizer": "lowercase",
"filter" : ["nGram_filter", "word_delimiter_filter"]
}
}
}
}
}
PUT /sample/1/_create
{
"name" : "magazineplayon"
}
PUT /sample/3/_create
{
"name" : "magazineofhorses"
}
PUT /sample/4/_create
{
"name" : "online-magazine"
}
PUT /sample/5/_create
{
"name" : "best-magazine"
}
PUT /sample/6/_create
{
"name" : "friend-of-magazines"
}
PUT /sample/7/_create
{
"name" : "magazineplaygames"
}
GET /sample/_search
{
"query": {
"wildcard": {
"name": "*play*"
}
}
}
GET /sample/_search
{
"query": {
"wildcard": {
"name": "magazine*"
}
}
}
Update 1
I updated all my create statements to include a "test" type after "sample":
PUT /sample/test/7/_create
{
"name" : "magazinefairplay"
}
I then ran the following command to return only names that had the word "play" in them instead of doing the wildcard search. This worked correctly and returned only two records.
POST /sample/test/_search
{
"query": {
"bool": {
"minimum_should_match": 1,
"should": [
{"match": { "name.substrings": "play" }}
]
}
}
}
I ran the following command to return only names that started with "magazine". My expectation was that "online-magazine", "best-magazine" and "friend-of-magazines" would not appear. However, all seven records were returned including these three.
POST /sample/test/_search
{
"query": {
"bool": {
"minimum_should_match": 1,
"should": [
{"match": { "name.prefixes": "magazine" }}
]
}
}
}
Is there a way to filter out the prefix where the hyphen is used?
You're on the right path; however, you also need to add another analyzer that leverages the edge-ngram token filter in order to make the "starts with" constraint work. You can keep the ngram for checking that fields "contain" a given word, but you need edge-ngram to check that a field "starts with" some token.
PUT /sample
{
"settings": {
"index.number_of_shards": 5,
"index.number_of_replicas": 0,
"analysis": {
"filter": {
"nGram_filter": {
"type": "nGram",
"min_gram": 2,
"max_gram": 20,
"token_chars": [
"letter",
"digit"
]
},
"edgenGram_filter": {
"type": "edgeNGram",
"min_gram": 2,
"max_gram": 20
}
},
"analyzer": {
"ngram_index_analyzer": {
"type": "custom",
"tokenizer": "keyword",
"filter": [
"lowercase",
"nGram_filter"
]
},
"edge_ngram_index_analyzer": {
"type": "custom",
"tokenizer": "keyword",
"filter": [
"lowercase",
"edgenGram_filter"
]
}
}
}
},
"mappings": {
"test": {
"properties": {
"name": {
"type": "string",
"fields": {
"prefixes": {
"type": "string",
"analyzer": "edge_ngram_index_analyzer",
"search_analyzer": "standard"
},
"substrings": {
"type": "string",
"analyzer": "ngram_index_analyzer",
"search_analyzer": "standard"
}
}
}
}
}
}
}
Then your query becomes the following (i.e. search for all documents whose name field contains "play" or starts with "magazine"):
POST /sample/test/_search
{
"query": {
"bool": {
"minimum_should_match": 1,
"should": [
{"match": { "name.substrings": "play" }},
{"match": { "name.prefixes": "magazine" }}
]
}
}
}
Note: don't use wildcard queries for searching substrings, as they will kill the performance of your cluster (more info here and here).
