Elasticsearch search_as_you_type field with partial search

I recently updated my ngram implementation settings to use the search_as_you_type field type.
https://www.elastic.co/guide/en/elasticsearch/reference/7.x/search-as-you-type.html
This worked great, but I noticed that partial searching does not work.
If I search for the number 00060434 I get the desired result, but I would also like to be able to search for 60434 and have it return document 3.
Is there a way to do this with the search_as_you_type field type, or can I only do this with ngrams?
PUT searchasyoutype_example
{
  "settings": {
    "analysis": {
      "analyzer": {
        "englishAnalyzer": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "trim",
            "ascii_folding"
          ]
        }
      },
      "filter": {
        "ascii_folding": {
          "type": "asciifolding",
          "preserve_original": true
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "number": {
        "type": "search_as_you_type",
        "analyzer": "englishAnalyzer"
      },
      "fullName": {
        "type": "search_as_you_type",
        "analyzer": "englishAnalyzer"
      }
    }
  }
}
PUT searchasyoutype_example/_doc/1
{
  "number": "00069794",
  "fullName": "Employee 1"
}
PUT searchasyoutype_example/_doc/2
{
  "number": "00059840",
  "fullName": "Employee 2"
}
PUT searchasyoutype_example/_doc/3
{
  "number": "00060434",
  "fullName": "Employee 3"
}
GET searchasyoutype_example/_search
{
  "query": {
    "multi_match": {
      "query": "00060434",
      "type": "bool_prefix",
      "fields": [
        "number",
        "number._index_prefix",
        "fullName",
        "fullName._index_prefix"
      ]
    }
  }
}

I think you need to query on number, number._2gram and number._3gram, like below:
GET searchasyoutype_example/_search
{
  "query": {
    "multi_match": {
      "query": "00060434",
      "type": "bool_prefix",
      "fields": [
        "number",
        "number._2gram",
        "number._3gram"
      ]
    }
  }
}
search_as_you_type creates three subfields (._2gram, ._3gram and ._index_prefix). You can read more about how it works in this article:
https://ashish.one/blogs/search-as-you-type/
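The reason 60434 can't match is that search_as_you_type (via its ._index_prefix subfield) indexes prefixes of the field value, while an ngram tokenizer also emits substrings from the middle of the term. A rough Python illustration of the difference between the two token sets (this is a sketch, not Elasticsearch code):

```python
def edge_ngrams(term, min_len=1, max_len=20):
    # Prefix-only grams, analogous to what the _index_prefix subfield stores.
    return {term[:i] for i in range(min_len, min(len(term), max_len) + 1)}

def ngrams(term, min_len=2, max_len=5):
    # Substring grams from every offset, analogous to an ngram tokenizer.
    return {term[i:i + n]
            for n in range(min_len, max_len + 1)
            for i in range(len(term) - n + 1)}

number = "00060434"
print("60434" in edge_ngrams(number))   # False: 60434 is not a prefix
print("60434" in ngrams(number, 2, 8))  # True: ngrams cover mid-string substrings
```

So with search_as_you_type only queries that start at the beginning of the stored value can match, which is exactly the behavior observed above.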

Related

Elasticsearch: using fuzzy search to find abbreviations

I have indexed textual articles that mention company names, like apple and lemonade, and I am trying to search for these companies using their abbreviations, like APPL and LMND. But fuzzy search gives other results: for example, searching with LMND returns land, which is mentioned in the text, but never lemonade, whatever parameters I tried.
First question
Is fuzzy search a suitable solution for this kind of search?
Second question
What would be good parameter value ranges to support my problem?
UPDATE
I have tried a synonym filter:
{
  "settings": {
    "index": {
      "analysis": {
        "filter": {
          "synonyms_filter": {
            "type": "synonym",
            "synonyms": [
              "apple,APPL",
              "lemonade,LMND"
            ]
          }
        },
        "analyzer": {
          "synonym_analyzer": {
            "tokenizer": "standard",
            "filter": [
              "lowercase",
              "synonyms_filter"
            ]
          }
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "transcript_data": {
        "properties": {
          "words": {
            "type": "nested",
            "properties": {
              "word": {
                "type": "text",
                "search_analyzer": "synonym_analyzer"
              }
            }
          }
        }
      }
    }
  }
}
and for the search I used:
{
  "_source": false,
  "query": {
    "nested": {
      "path": "transcript_data.words",
      "query": {
        "match": {
          "transcript_data.words.word": "lmnd"
        }
      }
    }
  }
}
but it's not working.
I believe the best option for you is synonyms; they serve exactly what you need.
I'll leave an example and a link to an article explaining some details.
PUT teste
{
  "settings": {
    "index": {
      "analysis": {
        "filter": {
          "synonyms_filter": {
            "type": "synonym",
            "synonyms": [
              "apple,APPL",
              "lemonade,LMND"
            ]
          }
        },
        "analyzer": {
          "synonym_analyzer": {
            "tokenizer": "standard",
            "filter": [
              "lowercase",
              "synonyms_filter"
            ]
          }
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "transcript_data": {
        "properties": {
          "words": {
            "type": "nested",
            "properties": {
              "word": {
                "type": "text",
                "analyzer": "synonym_analyzer"
              }
            }
          }
        }
      }
    }
  }
}
POST teste/_bulk
{"index":{}}
{"transcript_data": {"words":{"word":"apple"}}}
GET teste/_search
{
  "query": {
    "nested": {
      "path": "transcript_data.words",
      "query": {
        "match": {
          "transcript_data.words.word": "appl"
        }
      }
    }
  }
}
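For intuition: at analysis time a synonym filter adds the synonym terms as extra tokens, so the index-time and query-time term sets overlap even when the user types only the abbreviation. A loose Python sketch of that expansion (the synonym table mirrors the filter above; real analysis also tracks positions and multi-word synonyms):

```python
# Bidirectional synonym table, mirroring "apple,APPL" and "lemonade,LMND"
# after lowercasing.
SYNONYMS = {
    "apple": ["appl"], "appl": ["apple"],
    "lemonade": ["lmnd"], "lmnd": ["lemonade"],
}

def analyze(text):
    # lowercase + synonym expansion, loosely mimicking synonym_analyzer
    terms = set()
    for token in text.lower().split():
        terms.add(token)
        terms.update(SYNONYMS.get(token, []))
    return terms

doc_terms = analyze("apple")        # index-time terms: {"apple", "appl"}
print(analyze("APPL") & doc_terms)  # non-empty intersection -> a match
```

This is why the answer's version applies the analyzer at index time ("analyzer") rather than only at search time.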

Elastic synonyms are taking over other words

On this sequence of commands:
Create the index:
PUT /test_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "GermanCompoundWordsAnalyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "german_compound_synonym",
            "german_normalization"
          ]
        }
      },
      "filter": {
        "german_compound_synonym": {
          "type": "synonym",
          "synonyms": [
            "teppichläufer, auslegware läufer"
          ]
        }
      }
    }
  },
  "mappings": {
    "_doc": {
      "properties": {
        "name": {
          "type": "text",
          "analyzer": "GermanCompoundWordsAnalyzer"
        }
      }
    }
  }
}
Adding a few documents:
POST test_index/_doc/
{
  "sku": "kimchy",
  "name": "teppichläufer alfa"
}
POST test_index/_doc/
{
  "sku": "kimchy",
  "name": "teppichläufer beta"
}
Searching, I would expect one document, but 2 are returned :(
GET /test_index/_search
{
  "query": {
    "match": {
      "name": {
        "query": "teppichläufer beta",
        "operator": "and"
      }
    }
  }
}
I get both documents because the synonym teppichläufer, auslegware läufer puts läufer at position 1, where it 'substitutes' for beta. If I remove "analyzer": "GermanCompoundWordsAnalyzer", I get just one document, as expected.
How do I use these synonyms without running into this issue?
POST /test_index/_search
{
  "query": {
    "bool": {
      "should": [
        {
          "query_string": {
            "default_field": "name",
            "query": "teppichläufer beta",
            "default_operator": "AND"
          }
        }
      ]
    }
  }
}
After a little more searching I found it in the documentation. This is an RTFM problem, sorry guys.
I tried with:
https://www.elastic.co/guide/en/elasticsearch/reference/master/analysis-synonym-graph-tokenfilter.html
The funny part is that it makes the NDCG of the results worse :)
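The position problem described above can be modeled directly: a plain (non-graph) synonym filter flattens the multi-word replacement onto existing positions, so läufer lands at position 1 alongside beta, and an AND match succeeds for both documents. A rough Python model of position-based matching (the token groups are the ones described in the question, not pulled from Elasticsearch):

```python
# Terms each document emits after the flattened synonym expansion
# ("teppichläufer" adds "auslegware" at pos 0 and "läufer" at pos 1):
doc_alfa = {"teppichläufer", "auslegware", "läufer", "alfa"}
doc_beta = {"teppichläufer", "auslegware", "läufer", "beta"}

# The query "teppichläufer beta" goes through the same analyzer, so it
# expands to one group of alternatives per original position:
query = [{"teppichläufer", "auslegware"}, {"läufer", "beta"}]

def and_match(doc_terms, query_groups):
    # operator=and: every position group must hit at least one doc term
    return all(group & doc_terms for group in query_groups)

print(and_match(doc_beta, query))  # matches, as expected
print(and_match(doc_alfa, query))  # also matches: "läufer" satisfies the
                                   # second group, swallowing "beta"
```

The synonym_graph filter linked above fixes this at search time by keeping the multi-word synonym as a proper graph instead of flattening its positions.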

Elasticsearch: unordered partial phrase matching with ngram

Maybe I am going down the wrong route, but I am trying to set up Elasticsearch to do partial phrase matching that matches parts of words, in any order, within a sentence.
Eg. I have the following input
test name
tester name
name test
namey mcname face
test
I hope that a search for "test name" (or "name test") returns all of these, ideally sorted by score. I can do partial searches, and I can do out-of-order searches, but I am not able to combine the two. I am sure this is a very common issue.
Below is my Settings
{
  "myIndex": {
    "settings": {
      "index": {
        "analysis": {
          "filter": {
            "mynGram": {
              "type": "nGram",
              "min_gram": "2",
              "max_gram": "5"
            }
          },
          "analyzer": {
            "custom_analyser": {
              "filter": [
                "lowercase",
                "mynGram"
              ],
              "type": "custom",
              "tokenizer": "my_tokenizer"
            }
          },
          "tokenizer": {
            "my_tokenizer": {
              "type": "nGram",
              "min_gram": "2",
              "max_gram": "5"
            }
          }
        }
      }
    }
  }
}
My mapping
{
  "myIndex": {
    "mappings": {
      "myIndex": {
        "properties": {
          "name": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword"
              }
            },
            "analyzer": "custom_analyser"
          }
        }
      }
    }
  }
}
And my query
{
  "query": {
    "bool": {
      "must": [
        {
          "match_phrase": {
            "name": {
              "query": "test name",
              "slop": 5
            }
          }
        }
      ]
    }
  }
}
Any help would be greatly appreciated.
Thanks in advance.
Not sure if you found your solution - I bet you did, because this is such an old post - but I was on the hunt for the same thing and found this: Query-Time Search-as-you-type.
Look up slop.
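One detail worth noting: with an ngram tokenizer applied at both index and search time, a plain match query (rather than match_phrase) already gives unordered partial matching, because both word orders produce the same gram set. A rough Python sketch of the 2-5 gram overlap (an illustration of the analyzer, not Elasticsearch code):

```python
def grams(text, lo=2, hi=5):
    # ngram tokenizer sketch: grams of each whitespace token, lowercased
    out = set()
    for tok in text.lower().split():
        for n in range(lo, hi + 1):
            out.update(tok[i:i + n] for i in range(len(tok) - n + 1))
    return out

names = ["test name", "tester name", "name test", "namey mcname face", "test"]
query = grams("name test")
for name in names:
    print(f"{name!r}: {len(query & grams(name))} shared grams")
```

Since "test name" and "name test" produce identical gram sets, an OR-style match over grams scores both orderings equally, and partial forms like "tester" still overlap on many grams.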

Elasticsearch: how to use edge_ngram and have the truly relevant hits displayed first

I'm new to Elasticsearch and I'm trying to develop a search for an ecommerce site that suggests 5~10 matching products to the user.
Since it should work while the user is typing, we found the edge_ngram approach in the official documentation, and it KIND OF worked. But when we tested it, the results were not as expected, as the example below (from our test) shows:
[Searching example]
As the image shows, the results for the term "Furadeira" (power drill) return accessories before the power drill itself. How can I improve the results? Even the position in the string where the match occurs would help me, I guess.
So, this is the code I have so far:
//PUT example
{
  "settings": {
    "number_of_shards": 1,
    "analysis": {
      "filter": {
        "autocomplete_filter": {
          "type": "edge_ngram",
          "min_gram": 1,
          "max_gram": 20
        },
        "portuguese_stop": {
          "type": "stop",
          "stopwords": "_portuguese_"
        },
        "portuguese_stemmer": {
          "type": "stemmer",
          "language": "light_portuguese"
        }
      },
      "analyzer": {
        "portuguese": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "portuguese_stop",
            "portuguese_stemmer"
          ]
        },
        "autocomplete": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "autocomplete_filter"
          ]
        }
      }
    }
  }
}
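For reference, the autocomplete analyzer above emits every prefix of each word at index time, which is why any product whose name merely contains the typed word - accessories included - matches. A quick Python sketch of what the edge_ngram(1, 20) filter produces (an illustration, not Elasticsearch code):

```python
def edge_ngram(token, min_gram=1, max_gram=20):
    # edge_ngram filter sketch: all prefixes between min_gram and max_gram
    return [token[:i] for i in range(min_gram, min(len(token), max_gram) + 1)]

print(edge_ngram("furadeira"))
# ['f', 'fu', 'fur', 'fura', 'furad', 'furade', 'furadei', 'furadeir', 'furadeira']
```

Every one of those prefixes is indexed, so "furadeira" hits both "FURADEIRA X" and "BROCA PARA FURADEIRA"-style names equally; ranking between them has to come from the query side.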
/* mapping */
//PUT /example/products/_mapping
{
  "products": {
    "properties": {
      "name": {
        "type": "text",
        "analyzer": "autocomplete",
        "search_analyzer": "standard"
      }
    }
  }
}
/* Search */
//GET /example/products/_search
{
  "query": {
    "query_string": {
      "query": "furadeira",
      "type": "most_fields", // Tried without this as well
      "fields": [
        "name^8",
        "model^10",
        "manufacturer^4",
        "description"
      ]
    }
  }
}
/* Product example */
// PUT example/products/38313
{
  "name": "FITA VEDA FRESTA (ESPUMA 4503) 12X5 M [ H0000164055 ]",
  "description": "Caracteristicas do produto:Ve…Diminui ruidos indesejaveis.",
  "price": 21.90,
  "product_id": 38313,
  "image": "http://placehold.it/200x200",
  "quantity": 92,
  "width": 20.200,
  "height": 1.500,
  "length": 21.500,
  "weight": 0.082,
  "model": "167083",
  "manufacturer": "3M DO BRASIL"
}
Thanks in advance.
You could enhance your query to a so-called bool query, which contains your existing query in a must clause but adds a query in a should clause that matches exactly (not using the ngrammed field). If a document matches the should clause, it is scored higher.
See the bool query documentation.
Let's assume you have a field that differentiates the main product from accessories; I'll call it level_field.
Now you have two approaches:
1) Boost the main product's _score by adding a should clause:
Put your main query in the must clause, and in the should clause use level_field to boost the _score of documents that are main products.
{
  "query": {
    "bool": {
      "must": {
        "match": {
          "name": {
            "query": "furadeira"
          }
        }
      },
      "should": [
        {
          "match": {
            "level_field": {
              "query": "level1",
              "boost": 3
            }
          }
        },
        {
          "match": {
            "level_field": {
              "query": "level2",
              "boost": 2
            }
          }
        }
      ]
    }
  }
}
2) In the second approach you can decrease the _score of documents that are not main products by using a boosting query:
{
  "query": {
    "boosting": {
      "positive": {
        "query_string": {
          "query": "furadeira",
          "type": "most_fields",
          "fields": [
            "name^8",
            "model^10",
            "manufacturer^4",
            "description"
          ]
        }
      },
      "negative": {
        "term": {
          "level_field": {
            "value": "level2"
          }
        }
      },
      "negative_boost": 0.2
    }
  }
}
I hope this helps.
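For intuition on approach 2: a boosting query keeps the positive query's score but multiplies it by negative_boost for documents that also match the negative clause, so accessories are demoted rather than excluded. A tiny Python sketch (the scores are made-up examples, not real BM25 values):

```python
def boosting_score(positive_score, matches_negative, negative_boost=0.2):
    # boosting query: demote (not exclude) docs matching the negative clause
    return positive_score * negative_boost if matches_negative else positive_score

print(boosting_score(7.3, matches_negative=False))  # main product keeps 7.3
print(boosting_score(7.3, matches_negative=True))   # accessory drops to ~1.46
```

Because demoted documents still match, they can fill out the suggestion list when there are few main-product hits, which is usually what you want for autocomplete.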

Elasticsearch index search for currency $ and £ signs

In some of my documents I have $ or £ symbols. I want to search for £ and retrieve documents containing that symbol. I've gone through the documentation but I'm getting some cognitive dissonance.
# Delete the `my_index` index
DELETE /my_index
# Create a custom analyzer
PUT /my_index
{
  "settings": {
    "analysis": {
      "char_filter": {
        "&_to_and": {
          "type": "mapping",
          "mappings": [
            "&=> and ",
            "$=> dollar "
          ]
        }
      },
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "char_filter": [
            "html_strip",
            "&_to_and"
          ],
          "tokenizer": "standard",
          "filter": [
            "lowercase"
          ]
        }
      }
    }
  }
}
This returns "the", "quick", "and", "brown", "fox" just as the documentation states:
# Test out the new analyzer
GET /my_index/_analyze?analyzer=my_analyzer&text=The%20quick%20%26%20brown%20fox
This returns "the", "quick", "dollar", "brown", "fox"
GET /my_index/_analyze?analyzer=my_analyzer&text=The%20quick%20%24%20brown%20fox
Adding some records:
PUT /my_index/test/1
{
  "title": "The quick & fast fox"
}
PUT /my_index/test/2
{
  "title": "The daft fox owes me $100"
}
I would have thought that if I searched for "dollar", I would get a result? Instead I get no results:
GET /my_index/test/_search
{
  "query": {
    "simple_query_string": {
      "query": "dollar"
    }
  }
}
Or even using '$' with an analyzer:
GET /my_index/test/_search
{
  "query": {
    "query_string": {
      "query": "dollar10",
      "analyzer": "my_analyzer"
    }
  }
}
Your problem is that you specify a custom analyzer but never use it. You can verify that with term vectors. So follow these steps:
When creating the index, set the custom analyzer for the `title` field:
PUT /my_index
{
  "settings": {
    "analysis": {
      "char_filter": {
        "&_to_and": {
          "type": "mapping",
          "mappings": [
            "&=> and ",
            "$=> dollar "
          ]
        }
      },
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "char_filter": [
            "html_strip",
            "&_to_and"
          ],
          "tokenizer": "standard",
          "filter": [
            "lowercase"
          ]
        }
      }
    }
  },
  "mappings": {
    "test": {
      "properties": {
        "title": {
          "type": "string",
          "analyzer": "my_analyzer"
        }
      }
    }
  }
}
Insert data:
PUT my_index/test/1
{
  "title": "The daft fox owes me $100"
}
Check for term vectors:
GET /my_index/test/1/_termvectors?fields=title
Response:
{
  "_index": "my_index",
  "_type": "test",
  "_id": "1",
  "_version": 1,
  "found": true,
  "took": 3,
  "term_vectors": {
    "title": {
      "field_statistics": { "sum_doc_freq": 6, "doc_count": 1, "sum_ttf": 6 },
      "terms": {
        "daft":      { "term_freq": 1, "tokens": [ { "position": 1, "start_offset": 4,  "end_offset": 8  } ] },
        "dollar100": { "term_freq": 1, "tokens": [ { "position": 5, "start_offset": 21, "end_offset": 25 } ] }, <-- You can see it here
        "fox":       { "term_freq": 1, "tokens": [ { "position": 2, "start_offset": 9,  "end_offset": 12 } ] },
        "me":        { "term_freq": 1, "tokens": [ { "position": 4, "start_offset": 18, "end_offset": 20 } ] },
        "owes":      { "term_freq": 1, "tokens": [ { "position": 3, "start_offset": 13, "end_offset": 17 } ] },
        "the":       { "term_freq": 1, "tokens": [ { "position": 0, "start_offset": 0,  "end_offset": 3  } ] }
      }
    }
  }
}
Now search:
GET /my_index/test/_search
{
  "query": {
    "match": {
      "title": "dollar100"
    }
  }
}
That will find the match. But searching with a query string like:
GET /my_index/test/_search
{
  "query": {
    "simple_query_string": {
      "query": "dollar100"
    }
  }
}
won't find anything, because it searches the special _all field, and as far as I can see that field aggregates the fields' values unanalyzed:
GET /my_index/test/_search
{
  "query": {
    "match": {
      "_all": "dollar100"
    }
  }
}
does not find a result. But:
GET /my_index/test/_search
{
  "query": {
    "match": {
      "_all": "$100"
    }
  }
}
finds it. I am not sure, but the reason may be that the default analyzer is not your custom analyzer. To set a custom analyzer as the default, check:
Changing the default analyzer in ElasticSearch or LogStash
http://elasticsearch-users.115913.n3.nabble.com/How-we-can-change-Elasticsearch-default-analyzer-td4040411.html
http://grokbase.com/t/gg/elasticsearch/148kwsxzee/overriding-built-in-analyzer-and-set-it-as-default
http://elasticsearch-users.115913.n3.nabble.com/How-to-set-the-default-analyzer-td3935275.html
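To summarize why this behaves as it does: the mapping char_filter runs before tokenization, so the index ends up with the single term dollar100 rather than a standalone dollar token. A rough Python sketch of the pipeline (the replacement is glued to the following digits here, matching the dollar100 term shown in the term vectors above):

```python
def analyze(text):
    # char_filter first ("&" -> " and ", "$" -> "dollar"), then a
    # whitespace-ish tokenizer, then the lowercase filter
    text = text.replace("&", " and ").replace("$", "dollar")
    return [tok.lower() for tok in text.split()]

tokens = analyze("The daft fox owes me $100")
print(tokens)  # ['the', 'daft', 'fox', 'owes', 'me', 'dollar100']
```

So a search for "dollar" can never hit this document: the inverted index only contains dollar100, and only a query that goes through the same analyzer (or types the full term) will produce it.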
