Elasticsearch: Use match query along with autocomplete

I want to use a match query along with autocomplete suggestions in ES5. Basically I want to restrict my autocomplete results based on an attribute, e.g. autocomplete should return results within a single city only.
MatchQueryBuilder queryBuilder = QueryBuilders.matchQuery("cityName", city);
SuggestBuilder suggestBuilder = new SuggestBuilder()
        .addSuggestion("region", SuggestBuilders.completionSuggestion("region").text(text));
SearchResponse response = client.prepareSearch(index).setTypes(type)
        .suggest(suggestBuilder)
        .setQuery(queryBuilder)
        .execute()
        .actionGet();
The above doesn't seem to work correctly. I am getting both results in the response, but they are independent of each other.
Any suggestions?

It looks like the suggestion builder is creating a completion suggester. Completion suggesters are stored in a specialized structure that is separate from the main index, which means they have no access to filter fields like cityName. To filter suggestions you need to explicitly define those same filter values when you create the suggestion, separately from the attributes you index for the document to which the suggestion is attached. These suggester filters are called contexts. More information can be found in the docs.
The docs linked above will explain this better than I can, but here is a short example, using a mapping like the following:
"auto_suggest": {
"type": "completion",
"analyzer": "simple",
"contexts": [
{
"name": "cityName",
"type": "category",
"path": "cityName"
}
]
}
This section of the index settings defines a completion suggester called auto_suggest with a cityName context that can be used to filter the suggestions. Note that the path value is set, which means this context filter gets its value from the cityName attribute in your main index. You can remove the path value if you want to explicitly set the context to something that isn't already in the main index.
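For example, indexing a document like this (hypothetical values; the index and type names are placeholders) would attach the suggestion with a "Los Angeles" context taken automatically from its cityName field, because of the path setting:
PUT my_index/my_type/1
{
  "cityName": "Los Angeles",
  "auto_suggest": {
    "input": [ "Silver Lake", "Silverlake" ]
  }
}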
To request suggestions while providing context, something like this in combination with the settings above should work:
"suggest": {
"auto_complete":{
"text":"Silv",
"completion": {
"field" : "auto_suggest",
"size": 10,
"fuzzy" : {
"fuzziness" : 2
},
"contexts": {
"cityName": [ "Los Angeles" ]
}
}
}
}
Note that this request also allows for fuzziness, to make it a little resilient to spelling mistakes. It also restricts the number of suggestions returned to 10.
It's also worth noting that in ES 5.x completion suggesters are document-centric, so if multiple documents have the same suggestion, you will receive duplicates of that suggestion when it matches the characters entered. There's an option in ES 6 to de-duplicate suggestions, but nothing similar in 5.x. Again, it's best to think of completion suggesters as living in their own index, specifically an FST, which is explained in more detail here.
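For reference, the ES 6 option mentioned above is skip_duplicates (available from 6.1); a request using it would look something like this:
"suggest": {
  "auto_complete": {
    "text": "Silv",
    "completion": {
      "field": "auto_suggest",
      "skip_duplicates": true
    }
  }
}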

Related

Elasticsearch 7 number_format_exception for input value as a String

I have a field in my index with the following mapping:
"sequence_number" : {
"type" : "long",
"copy_to" : [
"_custom_all"
]
}
and I am using this search query:
POST /my_index/_search
{
  "query": {
    "term": {
      "sequence_number": {
        "value": "we"
      }
    }
  }
}
I am getting this error message:
,"index_uuid":"FTAW8qoYTPeTj-cbC5iTRw","index":"my_index","caused_by":{"type":"number_format_exception","reason":"For input string: \"we\""}}}]},"status":400}
at org.elasticsearch.client.RestClient.convertResponse(RestClient.java:260) ~[elasticsearch-rest-client-7.1.1.jar:7.1.1]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:238) ~[elasticsearch-rest-client-7.1.1.jar:7.1.1]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:212) ~[elasticsearch-rest-client-7.1.1.jar:7.1.1]
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1433) ~[elasticsearch-rest-high-level-client-7.1.1.jar:7.1.1]
at
How can I ignore number_format_exception errors, so that the query just doesn't return anything or ignores this filter in particular? Either is acceptable.
Thanks in advance.
What you are looking for is not possible. Ideally, you should have coerce enabled on your numeric fields so that your index doesn't contain dirty data.
The best solution is to handle this in the application that generates the Elasticsearch query: since your index doesn't contain dirty data in the first place, check for a NumberFormatException when searching numeric fields and reject the query in your application if the input doesn't parse.
Edit: another interesting approach is to validate the query before sending it to ES, using the Validate API as suggested by @prakash. The only downside is that it adds another network call, but if your application is not latency-sensitive it can be used as a workaround.
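For example, a validation request for the failing query above might look like this (the explain flag is optional and includes the reason a query is invalid):
GET my_index/_validate/query?explain=true
{
  "query": {
    "term": {
      "sequence_number": {
        "value": "we"
      }
    }
  }
}
For the input "we" this should return "valid": false together with a number_format_exception explanation, so the application can drop the filter instead of sending the search and getting a 400 back.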

How to boost Elasticsearch results based on another field?

A fairly simple use case, but I cannot come up with a good solution.
Basically I have two indexed fields: content and keywords (keyword tokenizer), where content is a long text field and keywords contains the important terms within that content. When I query with some long text, I have to boost those results based on the keywords present in the matching document.
I tried querying the complete text on both the content and keywords fields, but it is too slow or throws a too_many_clauses error for text with more than 40 words.
{"query": {
"match": {
"keywords": {
"query": "some long text",
"analyzer": "custom_analyzer"
}
}
}}
Is there any better way? Would percolator work here?
I can relate this to my application, which is similar to Stack Overflow: it consists of questions and answers, and each question has a subject, body, tags, etc.
The subject here corresponds to your keywords field and the body to your content field. Normally the subject contains the important keywords about the post, which is also the case for you.
Now, coming to the solution: we solve this by querying both the subject and body fields, but boosting subject by a factor of 15, which is configurable.
The ES query we use:
{
  "query": {
    "multi_match": {
      "query": "this is a test",
      "fields": [ "subject^15", "message" ]
    }
  }
}
This ES doc also has a similar example, where they boost the subject field in a multi_match query by a factor of 3.
Let me know if you have any questions.

Elasticsearch 6.2: terms query require lowercase input when searching on keyword

I've created an example index, with the following mapping:
{
  "_doc": {
    "_source": {
      "enabled": false
    },
    "properties": {
      "status": { "type": "keyword" }
    }
  }
}
And indexed a document:
{"status": "CMP"}
When searching the documents with this status with a terms query, I find no results:
{
  "query": {
    "terms": { "status": ["CMP"] }
  }
}
However, if I make the same query by putting the input in lowercase, I will find my document:
{
  "query": {
    "terms": { "status": ["cmp"] }
  }
}
Why is that? Since I'm searching on a keyword field, the indexed content should not be analyzed and should match the uppercase value...
@Oliver Charlesworth: not any more. In Elastic 6.x you can keep using a keyword datatype while lowercasing your text with a normalizer (doc here). However, in either case you would need to change your index mapping and reindex your docs.
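A sketch of such a mapping, assuming the field from the question (the index and normalizer names are arbitrary); the normalizer lowercases values at index time and is also applied to term-level queries, so a search for "CMP" should match:
PUT my_index
{
  "settings": {
    "analysis": {
      "normalizer": {
        "lowercase_normalizer": {
          "type": "custom",
          "filter": [ "lowercase" ]
        }
      }
    }
  },
  "mappings": {
    "_doc": {
      "properties": {
        "status": {
          "type": "keyword",
          "normalizer": "lowercase_normalizer"
        }
      }
    }
  }
}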
The index and mapping creation and the search were part of a test suite. It seems that the setup part of the test suite was not executed, so the mapping was not applied to the index.
The index was then using the default dynamic types instead of the mapping types, resulting in the use of string fields instead of keywords.
After changing the setup method of the automated tests, the mappings are correctly applied to the index, and the uppercase value "CMP" for the status field now matches documents.
The symptoms you're seeing shouldn't occur unless something else is wrong.
A keyword field is not analysed, so your index should contain only CMP. A terms query is also not analysed, so the index is searched only for CMP. Hence there should be a match.

Elasticsearch to wildcard search email addresses

I'm trying to use elasticsearch for a project I'm working on. I was wondering if someone could help steer me in the right direction. I'm using an index with 100+ million records.
I need to be able to search with a wildcard query like the following:
b*g@gmail.com
b*g@*.com
*gus@gmail.com
br*gu*@gmail.com
*g*@*
When I try using Wildcard and other searches, I don't get completely expected results.
What type of search should I look into implementing with Elasticsearch? Is Elasticsearch even the right tool to be using? The source I'm pulling this out of is MySQL, so if not, I may consider using Sphinx or Solr.
I assume that you have tried out the wildcard query as described here.
However, it behaves very differently depending on whether your email field is analyzed or not. I would suggest you delete your index and change your mapping, e.g.
PUT /emails
{
  "mappings": {
    "email": {
      "properties": {
        "email": {
          "type": "string",
          "index": "not_analyzed"
        }
      }
    }
  }
}
Once you have this, you can just do the normal wildcard query or query_string. e.g.
GET emails/_search
{
  "query": {
    "wildcard": {
      "email": {
        "value": "s*com"
      }
    }
  }
}
As an aside, when you just index email without setting it as not_analyzed, the default mapping actually splits the email prefix from the domain, and that's why you don't get results when you search for s*@gmail.com. You would still get results for s* or *gmail.com, but for your case not_analyzed works correctly. If you want to support case insensitivity, you might want to look at a custom analyzer that uses the uax_url_email tokenizer as described here.
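A sketch of such an analyzer, assuming the same index layout as above (the analyzer name is arbitrary); the uax_url_email tokenizer keeps each email address as a single token, while the lowercase filter makes matching case-insensitive:
PUT /emails
{
  "settings": {
    "analysis": {
      "analyzer": {
        "email_analyzer": {
          "type": "custom",
          "tokenizer": "uax_url_email",
          "filter": [ "lowercase" ]
        }
      }
    }
  },
  "mappings": {
    "email": {
      "properties": {
        "email": {
          "type": "string",
          "analyzer": "email_analyzer"
        }
      }
    }
  }
}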

Is it possible to add filters when performing a GET in Elasticsearch?

I have a situation where I want to filter the results not when performing a search but rather a GET using Elasticsearch. Basically I have a document with a status field indicating that the entity has a state of "discarded". When performing the GET I need to check the value of this field, excluding the document if the status is indeed "discarded".
I know I can do this using a search with a term query, but what about when using a GET against the index based on document ID?
Update: upon further investigation, it seems the only way to do this is to use percolation or a search. I hope I am wrong; if anyone has any suggestions, I am all ears.
Just to clarify, I am using the Java API.
Thanks
Try something like this:
curl http://domain/my_index/_search -d '{
  "filter": {
    "and": [
      {
        "ids": {
          "type": "my_type",
          "values": ["123"]
        }
      },
      {
        "term": {
          "discarded": "false"
        }
      }
    ]
  }
}'
NOTE: you can also use a missing filter if the discarded field does not exist on some docs.
NOTE 2: I don't think this will be markedly slower than a normal get request either...
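For what it's worth, the and filter shown above is from older ES versions (it was deprecated in 2.x and later removed); on newer versions the same lookup can be expressed with a bool filter, e.g.:
POST my_index/_search
{
  "query": {
    "bool": {
      "filter": [
        { "ids": { "values": [ "123" ] } },
        { "term": { "discarded": "false" } }
      ]
    }
  }
}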
