Elasticsearch partial searching with search-as-you-type

I have documents with a field called title containing values like "the lord of the rings", "lord of the rings", "the ring", etc.
I would like to build a search-as-you-type feature.
So if a user types "th", the order of the results should be:
"the lord of the rings",
"the ring",
"lord of the rings"
since I want the strings that start with "th" to appear first, sorted alphabetically.
I tried looking into edge n-grams, but those generate grams for every word in the string.
I would like to generate them only from the beginning of the string.
Can you please let me know which analyzers I need to use to achieve this?
Thanks

This is the best link I've seen so far:
Search like a Google with Elasticsearch. Autocomplete, Did you mean and search for items

You can try the Match Phrase Prefix Query:
{
  "query": {
    "match_phrase_prefix": {
      "text": "the"
    }
  }
}
Hope this helps
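If you specifically want n-grams generated only from the beginning of the whole string, as the question asks, another option is an edge_ngram token filter combined with the keyword tokenizer, so the whole title is treated as a single token before the grams are produced. A sketch of such a mapping (the index name, gram sizes, and ES 7+ mapping syntax are assumptions):

```json
PUT /titles
{
  "settings": {
    "analysis": {
      "filter": {
        "front_edge": { "type": "edge_ngram", "min_gram": 1, "max_gram": 20 }
      },
      "analyzer": {
        "prefix_index": {
          "tokenizer": "keyword",
          "filter": ["lowercase", "front_edge"]
        },
        "prefix_search": {
          "tokenizer": "keyword",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "analyzer": "prefix_index",
        "search_analyzer": "prefix_search"
      }
    }
  }
}
```

With this mapping, a match query for "th" matches only titles whose full string starts with "th"; breaking ties alphabetically would still need a sort on a separate keyword subfield.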

Related

Elasticsearch: search data as in WHERE LIKE 'TEXT%'?

I want to find sentences or words that start with the characters I'm looking for. What should I do?
For example:
given a data list like this:
automatic car
car
carpet
car accessories
car battery
cast
game cards
race car
When I search for the word "car", I find the following data.
car
car accessories
car battery
carpet
I find the following data when I search for the word "ca"
cast
car
car accessories
car battery
carpet
That is, I don't want it to search the whole sentence; I just want it to match words that start with the search characters.
To give an example in SQL, I would like a search equivalent to WHERE LIKE 'car%'.
You can achieve that using the Wildcard query:
GET /_search
{
  "query": {
    "wildcard": {
      "field_name": {
        "value": "ca*"
      }
    }
  }
}
Additionally, if you want to implement an autocomplete-like feature, read the Suggesters documentation.
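As a rough sketch of the suggester approach (the index and field names here are made up, and the name_suggest field would need to be mapped with type completion):

```json
POST /products/_search
{
  "suggest": {
    "product_suggest": {
      "prefix": "ca",
      "completion": {
        "field": "name_suggest"
      }
    }
  }
}
```

The completion suggester matches from the start of the indexed input, which lines up with the WHERE LIKE 'car%' behaviour asked for here.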
Elasticsearch has a feature called the prefix query, which returns documents that contain a specific prefix in a provided field. The wildcard query should also work for you.
GET /_search
{
  "query": {
    "prefix": { "your_index_field": "car" }
  }
}
See more: https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-prefix-query.html#prefix-query-short-ex

elasticsearch: or operator, number of matches

Is it possible to score my searches according to the number of matches when using the "or" operator?
Currently my query looks like this:
"query": {
  "function_score": {
    "query": {
      "match": {
        "tags.eng": {
          "query": "apples banana juice",
          "operator": "or",
          "fuzziness": "AUTO"
        }
      }
    },
    "script_score": {
      "script": # TODO
    },
    "boost_mode": "replace"
  }
}
I don't want to use the "and" operator, since I want documents containing "apple juice" to be found, as well as documents containing only "juice", etc. However, a document containing all three words should score higher than documents containing two words or a single word, and so on.
I found a possible solution here: https://github.com/elastic/elasticsearch/issues/13806
which uses bool queries. However, I don't know how to access the tokens (in this example: apples, banana, juice) generated by the analyzer.
Any help?
Based on the discussions above I came up with the following solution, which is a bit different than what I imagined when I asked the question, but works for my case.
First of all I defined a new similarity:
"settings": {
  "similarity": {
    "boost_similarity": {
      "type": "scripted",
      "script": {
        "source": "return 1;"
      }
    }
  }
  ...
}
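For completeness, a similarity defined in the index settings only takes effect once a field references it; a sketch of the corresponding mapping (the field name tags_text is an assumption):

```json
"mappings": {
  "properties": {
    "tags_text": {
      "type": "text",
      "similarity": "boost_similarity"
    }
  }
}
```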
Then I had the following problem:
a query for "apple banana juice" had the same score for a doc with tags ["apple juice", "apple"] and another doc with tags ["banana", "apple juice"], although I would like to score the second one higher.
From this other discussion I found out that the issue was caused by my use of a nested field, and I created a regular text field to address it.
But I also wanted to distinguish between a doc with tags ["apple", "banana", "juice"] and another doc with the tag ["apple banana juice"] (all three words in the same tag). The final solution was therefore to keep both fields (a nested field and a text field) for my tags.
Finally, the query consists of a bool query with two should clauses: the first should clause is performed on the text field and uses an "or" operator; the second is performed on the nested field and uses an "and" operator.
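Roughly, that final query might look like the following sketch (the field names tags_text and tags, and the nested path, are assumptions that depend on the actual mapping):

```json
{
  "query": {
    "bool": {
      "should": [
        {
          "match": {
            "tags_text": {
              "query": "apple banana juice",
              "operator": "or"
            }
          }
        },
        {
          "nested": {
            "path": "tags",
            "query": {
              "match": {
                "tags.eng": {
                  "query": "apple banana juice",
                  "operator": "and"
                }
              }
            }
          }
        }
      ]
    }
  }
}
```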
Although I found a solution for this specific issue, I still face a few other problems when using ES to search for tagged documents. The examples in the documentation seem to work very well for full-text search, but does someone know where I can find something more specific to tagged documents?

Search text in Elasticsearch ignoring uppercase and lowercase alphabet

First of all, I am new to Elasticsearch. I have a field skillName: "Android Sdk". I mapped this field as a keyword in Elasticsearch. The problem is that when I search with something like
POST _search
{
  "query": {
    "match": { "skillName": "Android sdk" }
  }
}
where "sdk" is lowercase in the search query, it does not give me any results. How can I search ignoring the case of the text when the field is mapped as keyword?
Yes, it treats a term with different casing as a different token, since you used the keyword analyzer, which doesn't transform the token but preserves it as-is. In your case it will only match if you query the exact same token.
So I would propose changing this behaviour and at least applying a lowercase token filter, so you will be able to match terms with different casing.
To search case-insensitively on a keyword field you need to use a normalizer, which was introduced in 5.2.0. See here for an example.
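A sketch of such a normalizer (the index name is made up; the field name follows the question, and the syntax assumes ES 5.2+):

```json
PUT /skills
{
  "settings": {
    "analysis": {
      "normalizer": {
        "lowercase_normalizer": {
          "type": "custom",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "skillName": {
        "type": "keyword",
        "normalizer": "lowercase_normalizer"
      }
    }
  }
}
```

With this in place, both "Android Sdk" and "android sdk" are indexed and queried as the same lowercase term, while the field still behaves as a single keyword for aggregations and sorting.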
You can apply different analyzers to the same field and have one for full-text search and another one for sorting and aggregations.
Try the following:
{
  "query": {
    "query_string": {
      "fields": ["skillName"],
      "query": "Android sdk"
    }
  }
}

Is it possible to chain fquery filters in Elasticsearch with exact matches?

I have been having trouble writing a method that takes in various search parameters in Elasticsearch. I was working with queries that looked like this:
body:
{
  "query": {
    "filtered": {
      "filter": {
        "and": [
          { "term": { "some_term": "foo" } },
          { "term": { "is_visible": true } },
          { "term": { "term_two": "something" } }
        ]
      }
    }
  }
}
Using this syntax I thought I could chain these terms together and programmatically generate these queries. I was using simple strings, and if there was a term like "person_name" I could split the query in two and say "where person_name matches 'JOHN'" and "where person_name matches 'SMITH'", getting accurate results.
However, I just came across "fquery" upon asking this question:
Escaping slash in elasticsearch
I was not able to use this "and"/"term" filter to search a value with slashes in it, so I learned that I can use fquery to search for the full value, like this:
"fquery": {
  "query": {
    "match": {
      "by_line": "John Smith"
    }
  }
}
But how can I search like this for multiple items? It seems that when I combine fquery with my filtered/filter/and/term queries, my "and" term queries are ignored. What is the best practice for making nested/chained queries in Elasticsearch?
As in the comment below, yes, I can just add fquery to the "and" block like so:
{:filtered =>
  {:filter =>
    {:and => [
      {:term => {:is_visible => true}},
      {:term => {:is_private => false}},
      {:fquery =>
        {:query => {:match => {:sub_location => "New JErsey"}}}}]}}}
Why would Elasticsearch also return results with "sub_location" = "New York"? I would like to only return "New Jersey" here.
A match query analyzes the input and by default it is a boolean OR query if there are multiple terms after the analysis. In your case, "New JErsey" gets analyzed into the terms "new" and "jersey". The match query that you are using will search for documents in which the indexed value of field "sub_location" is either "new" or "jersey". That is why your query also matches documents where the value of field "sub_location" is "new York" because of the common term "new".
To only match for "new jersey", you can use the following version of the match query:
{
  "query": {
    "match": {
      "sub_location": {
        "query": "New JErsey",
        "operator": "and"
      }
    }
  }
}
This will not match documents where the value of field "sub_location" is "New York". But, it will match documents where the value of field "sub_location" is say "York New" because the query finally translates into a boolean query like "York" AND "New". If you are fine with this behaviour, well and good, else read further.
All these issues arise because you are using the default analyzer for the field "sub_location", which breaks tokens at word boundaries and indexes them. If you really do not care about partial matches and always want to match the entire string, you can make use of custom analyzers that use the Keyword Tokenizer and the Lowercase Token Filter. Mind you, going ahead with this approach will require you to re-index all your documents.
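A sketch of the custom analyzer described above (the index name is an assumption):

```json
PUT /locations
{
  "settings": {
    "analysis": {
      "analyzer": {
        "exact_lowercase": {
          "tokenizer": "keyword",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "sub_location": {
        "type": "text",
        "analyzer": "exact_lowercase"
      }
    }
  }
}
```

After re-indexing, a match query for "New JErsey" is analyzed to the single token "new jersey" and will no longer match "New York".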

Elasticsearch prefer exact match over partial matches when doing typeahead searches

I have configured ES to do autocomplete, and I can also get exact matches preferred over suggestions.
For example, if someone types London, the API returns London first, then Londonderry. But if someone types Londo, ES returns Londonderry first, then London. Surely London is a closer match than Londonderry.
The same thing happens with "New York" and "York": "New York" is preferred over "York" when I search for York.
I am using the solution provided here.
Favor exact matches over nGram in elasticsearch
This code was helpful for me:
"query": {
  "match": {
    "message": {
      "query": inputQuery,
      "fuzziness": "AUTO",
      "prefix_length": 2
    }
  }
}
First of all, you should use fuzziness - see the ES documentation.
I hope it will help you also.
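For reference, the pattern in the linked answer ("Favor exact matches over nGram") is roughly to index the field twice - once with n-grams for partial matching and once as a plain analyzed field - and to boost the non-n-gram subfield in a bool/should, so that a full-word hit like "London" outscores a pure n-gram hit like "Londonderry". A sketch with hypothetical field names:

```json
{
  "query": {
    "bool": {
      "should": [
        {
          "match": {
            "city.whole": {
              "query": "Londo",
              "boost": 5
            }
          }
        },
        {
          "match": {
            "city.ngram": "Londo"
          }
        }
      ]
    }
  }
}
```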
