How to add a user-defined field and value to an Elasticsearch query

Goal: I want a query which adds a discriminator field to distinguish between fuzzy results and non-fuzzy results.
Consider these documents:
curl -X POST "localhost:9200/_bulk" -H 'Content-Type: application/json' -d'
{ "index": { "_index": "dishes", "_type": "dish", "_id": "1" } }
{ "name": "butter chicken" }
{ "index": { "_index": "dishes", "_type": "dish", "_id": "2" } }
{ "name": "chicken burger" }
'
Consider the following query:
curl -X POST "localhost:9200/dishes/_search?pretty" -H 'Content-Type: application/json' -d'
{
"query": {
"bool": {
"should": [
{
"term": {
"name": "burger"
}
},
{
"fuzzy": {
"name": {
"value": "burger"
}
}
}
],
"minimum_should_match": 1,
"boost": 1.0
}
}
}
'
Can I have a result with an additional tag, created at query time (it is not stored in the document), that can be used to discriminate between fuzzy and non-fuzzy results? Something like this:
...
"hits" : [
{
"_index" : "dishes",
"_type" : "dish",
"_id" : "2",
"_score" : 1.3862942,
"_source" : {
"name" : "chicken burger"
},
"is_fuzzy": false
},
{
"_index" : "dishes",
"_type" : "dish",
"_id" : "1",
"_score" : 0.46209806,
"_source" : {
"name" : "butter chicken"
},
"is_fuzzy": true
}
]
Scripted fields would have been ideal here, but I have had no luck with them yet.
I have a requirement to present the non-fuzzy results before the fuzzy results, so sorting on is_fuzzy and then on _score would be guaranteed to work. (The actual query is more complex.)
"sort": [
{
"is_fuzzy": {
"order": "desc"
}
},
{
"_score": {
"order": "desc"
}
}
]

One more option is to use named queries, but your term clauses will need to be slightly reworked:
GET dishes/_search
{
"query": {
"bool": {
"should": [
{
"term": {
"name": {
"value": "burger",
"_name": "not_fuzzy"
}
}
},
{
"fuzzy": {
"name": {
"value": "burger",
"_name": "fuzzy"
}
}
}
],
"minimum_should_match": 1,
"boost": 1
}
}
}
yielding
[
{
"_index":"dishes",
"_type":"dish",
"_id":"2",
"_score":1.3862944,
"_source":{
"name":"chicken burger"
},
"matched_queries":[ <---
"fuzzy",
"not_fuzzy"
]
},
{
"_index":"dishes",
"_type":"dish",
"_id":"1",
"_score":0.46209806,
"_source":{
"name":"butter chicken"
},
"matched_queries":[ <---
"fuzzy"
]
}
]
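If the non-fuzzy hits must also be ranked before the fuzzy ones (the ordering requirement from the question), note that matched_queries is not sortable. One option, separate from the named-queries approach and sketched here purely as an assumption, is to boost the exact term clause so that exact matches always outscore purely fuzzy ones:
GET dishes/_search
{
  "query": {
    "bool": {
      "should": [
        {
          "term": {
            "name": {
              "value": "burger",
              "boost": 10,
              "_name": "not_fuzzy"
            }
          }
        },
        {
          "fuzzy": {
            "name": {
              "value": "burger",
              "_name": "fuzzy"
            }
          }
        }
      ],
      "minimum_should_match": 1
    }
  }
}
Whether a boost of 10 is large enough depends on your real query and data, so treat the value as a starting point rather than a guarantee.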

Related

Should and Filter combination in ElasticSearch

I have this query, which returns the correct result:
GET /person/_search
{
"query": {
"bool": {
"should": [
{
"fuzzy": {
"nameDetails.name.nameValue.surname": {
"value": "Pibba",
"fuzziness": "AUTO"
}
}
},
{
"fuzzy": {
"nameDetails.nameValue.firstName": {
"value": "Fawsu",
"fuzziness": "AUTO"
}
}
}
]
}
}
}
and the result is below:
{
"took" : 1,
"timed_out" : false,
"_shards" : {
"total" : 1,
"successful" : 1,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : {
"value" : 1,
"relation" : "eq"
},
"max_score" : 3.6012557,
"hits" : [
{
"_index" : "person",
"_type" : "_doc",
"_id" : "70002",
"_score" : 3.6012557,
"_source" : {
"gender" : "Male",
"activeStatus" : "Inactive",
"deceased" : "No",
"nameDetails" : {
"name" : [
{
"nameValue" : {
"firstName" : "Fawsu",
"middleName" : "L.",
"surname" : "Pibba"
},
"nameType" : "Primary Name"
},
{
"nameValue" : {
"firstName" : "Fausu",
"middleName" : "L.",
"surname" : "Pibba"
},
"nameType" : "Spelling Variation"
}
]
}
}
}
]
}
}
But when I add a filter on gender, it returns no results:
GET /person/_search
{
"query": {
"bool": {
"should": [
{
"fuzzy": {
"nameDetails.name.nameValue.surname": {
"value": "Pibba",
"fuzziness": "AUTO"
}
}
},
{
"fuzzy": {
"nameDetails.nameValue.firstName": {
"value": "Fawsu",
"fuzziness": "AUTO"
}
}
}
],
"filter": [
{
"term": {
"gender": "Male"
}
}
]
}
}
}
Even if I just use the filter on its own, it returns no results:
GET /person/_search
{
"query": {
"bool": {
"filter": [
{
"term": {
"gender": "Male"
}
}
]
}
}
}
You are not getting any search results because you are using a term query (in the filter clause). A term query returns a document only if there is an exact match.
When no analyzer is specified, the standard analyzer is used, and it tokenizes Male to male. So either search for male instead of Male, or use one of the solutions below.
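You can verify this with the _analyze API; a quick check against the standard analyzer:
GET /_analyze
{
  "analyzer": "standard",
  "text": "Male"
}
It returns a single lowercased token, male, which is why a term query for Male finds nothing.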
If you have not defined an explicit index mapping, you can add .keyword to the gender field. This targets the keyword sub-field, which stores the value verbatim instead of passing it through the standard analyzer (notice the ".keyword" after the gender field). Try the query below:
{
"query": {
"bool": {
"filter": [
{
"term": {
"gender.keyword": "Male"
}
}
]
}
}
}
Search Result:
"hits": [
{
"_index": "66879128",
"_type": "_doc",
"_id": "1",
"_score": 0.0,
"_source": {
"gender": "Male",
"activeStatus": "Inactive",
"deceased": "No",
"nameDetails": {
"name": [
{
"nameValue": {
"firstName": "Fawsu",
"middleName": "L.",
"surname": "Pibba"
},
"nameType": "Primary Name"
},
{
"nameValue": {
"firstName": "Fausu",
"middleName": "L.",
"surname": "Pibba"
},
"nameType": "Spelling Variation"
}
]
}
}
}
]
If you have defined an explicit index mapping, modify the mapping for the gender field as shown below:
{
"mappings": {
"properties": {
"gender": {
"type": "keyword"
}
}
}
}
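With either fix in place, the original query from the question should return the document again. A sketch that combines the fuzzy should clauses with the keyword filter (field names copied from the question):
GET /person/_search
{
  "query": {
    "bool": {
      "should": [
        {
          "fuzzy": {
            "nameDetails.name.nameValue.surname": {
              "value": "Pibba",
              "fuzziness": "AUTO"
            }
          }
        },
        {
          "fuzzy": {
            "nameDetails.nameValue.firstName": {
              "value": "Fawsu",
              "fuzziness": "AUTO"
            }
          }
        }
      ],
      "filter": [
        {
          "term": {
            "gender.keyword": "Male"
          }
        }
      ]
    }
  }
}
Because a bool query with a filter clause defaults minimum_should_match to 0, you may also want to add "minimum_should_match": 1 if at least one fuzzy clause should be required to match.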

Elasticsearch: index boost with completion suggester

Is it possible to use index boost when using the completion suggester in Elasticsearch? I have tried many different ways, but it doesn't seem to work, and I haven't found any reference in the documentation saying that it does not work for the completion suggester. Example:
POST index1,index2/_search
{
"suggest" : {
"name_suggest" : {
"text" : "my_query",
"completion" : {
"field" : "name_suggest",
"size" : 7,
"fuzzy" :{}
}
}
},
"indices_boost" : [
{ "index1" : 2 },
{ "index2" : 1.5 }
]
}
The above does not return boosted scores; the scores are the same as when running it without the indices_boost parameter.
I tried a few options, but they didn't work directly. Instead, you can define the weight of a document at index time and use it as a workaround to get the boosted document. Below is the complete example.
Index mapping (the same for both indices):
{
"mappings": {
"properties": {
"suggest": {
"type": "completion"
},
"title": {
"type": "keyword"
}
}
}
}
Index doc 1 with a weight in index-1:
{
"suggest": {
"input": [
"Nevermind",
"Nirvana"
],
"weight": 30
}
}
A similar doc is indexed in index-2 with a different (lower) weight:
{
"suggest": {
"input": [
"Nevermind",
"Nirvana"
],
"weight": 10 --> note less weight
}
}
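For completeness, a sketch of how the two documents above could be indexed; the index names index-1 and index-2 and the document id are taken from the search result below, so treat them as assumptions:
PUT index-1/_doc/1
{
  "suggest": {
    "input": ["Nevermind", "Nirvana"],
    "weight": 30
  }
}
PUT index-2/_doc/1
{
  "suggest": {
    "input": ["Nevermind", "Nirvana"],
    "weight": 10
  }
}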
A simple suggest query will now rank the options according to their weight:
{
"suggest": {
"song-suggest": {
"prefix": "nir",
"completion": {
"field": "suggest"
}
}
}
}
And the search result:
{
"text": "Nirvana",
"_index": "index-1",
"_type": "_doc",
"_id": "1",
"_score": 34.0,
"_source": {
"suggest": {
"input": [
"Nevermind",
"Nirvana"
],
"weight": 30
}
}
},
{
"text": "Nirvana",
"_index": "index-2",
"_type": "_doc",
"_id": "1",
"_score": 30.0,
"_source": {
"suggest": {
"input": [
"Nevermind",
"Nirvana"
],
"weight": 10
}
}
}
]

Filter Elasticsearch data when fields contain ~

I have a bunch of documents like the one below. I want to filter the data where projectKey starts with ~.
I read some articles saying that ~ is an operator in Elasticsearch queries, so you cannot really filter on it directly.
Can someone help me form the search query for the /branch/_search API?
{
"_index": "branch",
"_type": "_doc",
"_id": "GAz-inQBJWWbwa_v-l9e",
"_version": 1,
"_score": null,
"_source": {
"branchID": "refs/heads/feature/12345",
"displayID": "feature/12345",
"date": "2020-09-14T05:03:20.137Z",
"projectKey": "~user",
"repoKey": "deploy",
"isDefaultBranch": false,
"eventStatus": "CREATED",
"user": "user"
},
"fields": {
"date": [
"2020-09-14T05:03:20.137Z"
]
},
"highlight": {
"projectKey": [
"~#kibana-highlighted-field#user#/kibana-highlighted-field#"
],
"projectKey.keyword": [
"#kibana-highlighted-field#~user#/kibana-highlighted-field#"
],
"user": [
"#kibana-highlighted-field#user#/kibana-highlighted-field#"
]
},
"sort": [
1600059800137
]
}
UPDATE:
I used prerana's answer below to use a prefix query in my query.
Something is still wrong when I use prefix and range together; I get the error below. What am I missing?
GET /branch/_search
{
"query": {
"prefix": {
"projectKey": "~"
},
"range": {
"date": {
"gte": "2020-09-14",
"lte": "2020-09-14"
}
}
}
}
{
"error": {
"root_cause": [
{
"type": "parsing_exception",
"reason": "[prefix] malformed query, expected [END_OBJECT] but found [FIELD_NAME]",
"line": 6,
"col": 5
}
],
"type": "parsing_exception",
"reason": "[prefix] malformed query, expected [END_OBJECT] but found [FIELD_NAME]",
"line": 6,
"col": 5
},
"status": 400
}
If I understood your issue correctly, I suggest creating a custom analyzer so that the special character ~ becomes searchable.
I did a test locally, replacing ~ with __SPECIAL__ at analysis time.
I created an index with a custom char_filter and added a sub-field to the projectKey field. The name of the new multi-field is special_characters.
Here is the mapping:
PUT wildcard-index
{
"settings": {
"analysis": {
"char_filter": {
"special-characters-replacement": {
"type": "mapping",
"mappings": [
"~ => __SPECIAL__"
]
}
},
"analyzer": {
"special-characters-analyzer": {
"tokenizer": "standard",
"char_filter": [
"special-characters-replacement"
]
}
}
}
},
"mappings": {
"properties": {
"projectKey": {
"type": "text",
"fields": {
"special_characters": {
"type": "text",
"analyzer": "special-characters-analyzer"
}
}
}
}
}
}
Then I ingested the following contents in the index:
"projectKey": "content1 ~"
"projectKey": "This ~ is a content"
"projectKey": "~ cars on the road"
"projectKey": "o ~ngram"
Then, the query was:
GET wildcard-index/_search
{
"query": {
"match": {
"projectKey.special_characters": "~"
}
}
}
The response was:
"hits" : [
{
"_index" : "wildcard-index",
"_type" : "_doc",
"_id" : "h1hKmHQBowpsxTkFD9IR",
"_score" : 0.43250346,
"_source" : {
"projectKey" : "content1 ~"
}
},
{
"_index" : "wildcard-index",
"_type" : "_doc",
"_id" : "iFhKmHQBowpsxTkFFNL5",
"_score" : 0.3034693,
"_source" : {
"projectKey" : "This ~ is a content"
}
},
{
"_index" : "wildcard-index",
"_type" : "_doc",
"_id" : "-lhKmHQBowpsxTkFG9Kg",
"_score" : 0.3034693,
"_source" : {
"projectKey" : "~ cars on the road"
}
}
]
Please let me know if you have any issues; I will be glad to help.
Note: This method works if there is a blank space after the ~. You can see from the response that the 4th document (o ~ngram) was not returned.
While @hansley's answer would work, it requires you to create a custom analyzer, and, as you mentioned, you want only the docs that start with ~, whereas his result includes all docs containing ~. So here is my answer, which requires very little configuration and works as required.
The index mapping is the default one, so just index the docs below and Elasticsearch will create a default mapping with a .keyword sub-field for every text field.
Index the sample docs:
{
"title" : "content1 ~"
}
{
"title" : "~ staring with"
}
{
"title" : "in between ~ with"
}
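A sketch of how these could be indexed; the index name pre and the document ids are assumptions taken from the search result below:
PUT pre/_doc/1
{
  "title": "content1 ~"
}
PUT pre/_doc/2
{
  "title": "~ staring with"
}
PUT pre/_doc/3
{
  "title": "in between ~ with"
}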
The search query should fetch only the 2nd doc from the sample docs:
{
"query": {
"prefix" : { "title.keyword" : "~" }
}
}
And the search result:
"hits": [
{
"_index": "pre",
"_type": "_doc",
"_id": "2",
"_score": 1.0,
"_source": {
"title": "~ staring with"
}
}
]
Please refer to the prefix query documentation for more info.
Update 1: the parsing error above occurs because two query clauses (prefix and range) cannot sit side by side directly under query; to combine them, wrap both inside a bool query's must clause, as in the example below.
Index Mapping:
{
"mappings": {
"properties": {
"date": {
"type": "date"
}
}
}
}
Index Data:
{
"date": "2015-02-01",
"title" : "in between ~ with"
}
{
"date": "2015-01-01",
"title": "content1 ~"
}
{
"date": "2015-02-01",
"title" : "~ staring with"
}
{
"date": "2015-02-01",
"title" : "~ in between with"
}
Search Query:
{
"query": {
"bool": {
"must": [
{
"prefix": {
"title.keyword": "~"
}
},
{
"range": {
"date": {
"lte": "2015-02-05",
"gte": "2015-01-11"
}
}
}
]
}
}
}
Search Result:
"hits": [
{
"_index": "stof_63924930",
"_type": "_doc",
"_id": "2",
"_score": 2.0,
"_source": {
"date": "2015-02-01",
"title": "~ staring with"
}
},
{
"_index": "stof_63924930",
"_type": "_doc",
"_id": "4",
"_score": 2.0,
"_source": {
"date": "2015-02-01",
"title": "~ in between with"
}
}
]

How to sort result set in order of matching words

How can I sort the result set by how the words match?
I am searching for the words "heinz meyer".
My query returns:
Heinz A. Meyer
Heinz Meyer GmbH Heizung-Sanitär
Heinz Meyer
Karl-Heinz Meyer GmbH
but I need them ordered by match position, like this:
Heinz Meyer
Heinz Meyer GmbH Heizung-Sanitär
Heinz A. Meyer
Karl-Heinz Meyer GmbH
My query is:
{
"query": {
"bool": {
"must": [{
"wildcard": {
"name": "heinz*"
}
}, {
"wildcard": {
"name": "meyer*"
}
}],
"must_not": [],
"should": [],
"filter": {
"bool": {
"must": [{
"range": {
"latestRevenueStatistics.revenue": {
"gte": "0",
"lte": "40000000"
}
}
}, {
"range": {
"latestRevenueStatistics.number_of_employees": {
"gte": "0",
"lte": "300"
}
}
}, {
"term": {
"addresses.postal_code_length": 5
}
}]
}
}
}
},
"from": 0,
"size": 10
}
FINAL SOLUTION:
{
"query": {
"bool": {
"must": [{
"wildcard": {
"name": "heinz*"
}
}, {
"wildcard": {
"name": "mayer*"
}
}, {
"span_near": {
"clauses": [{
"span_term": {
"name": {
"value": "heinz"
}
}
}, {
"span_term": {
"name": {
"value": "mayer"
}
}
}],
"slop": 4,
"in_order": true
}
}],
"must_not": [],
"should": [{
"span_first": {
"match": {
"span_term": {
"name": "heinz"
}
},
"end": 1
}
}, {
"span_first": {
"match": {
"span_term": {
"name": "mayer"
}
},
"end": 2
}
}],
"filter": {
"bool": {
"must": [{
"range": {
"latestRevenueStatistics.revenue": {
"gte": "0",
"lte": "40000000"
}
}
}, {
"range": {
"latestRevenueStatistics.number_of_employees": {
"gte": "0",
"lte": "300"
}
}
}, {
"term": {
"addresses.postal_code_length": 5
}
}]
}
}
}
},
"from": 0,
"size": 10
}
You can implement this using a combination of Span First, Span Term, and Span Near queries.
For the sake of simplicity, I've created a sample index with only one field, name, of type text, along with the documents below.
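A minimal mapping for that index could look like the following (a sketch; the answer only states that name is a text field):
PUT sortindex
{
  "mappings": {
    "properties": {
      "name": {
        "type": "text"
      }
    }
  }
}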
Documents:
POST sortindex/_doc/1
{
"name": "Heinz A. Meyer"
}
POST sortindex/_doc/2
{
"name": "Heinz Meyer GmbH Heizung-Sanitär"
}
POST sortindex/_doc/3
{
"name": "Heinz Meyer"
}
POST sortindex/_doc/4
{
"name": "Karl-Heinz Meyer GmbH"
}
Query:
POST sortindex/_search
{
"query": {
"bool": {
"must": [
{
"span_near": { <---- Span Near Query
"clauses": [
{
"span_term": { <---- Span Term Query
"name": {
"value": "heinz"
}
}
},
{
"span_term": {
"name": {
"value": "meyer"
}
}
}
],
"slop": 4, <---- Retrieve all docs having both heinz and meyer with distance of <= 4 words
"in_order": true <---- Heinz must always come before Meyer
}
}
],
"should": [
{
"span_first": { <---- Span First Query
"match": {
"span_term": { <---- Span Term Query
"name": "heinz"
}
},
"end": 1 <---- Retrieve docs having heinz's postition <= 1 and > 0 i.e. the first word
}
}
]
}
}
}
Notice that Span Near is placed in the must clause whereas Span First is placed in the should clause. That way, documents satisfying the should clause get a higher score than the ones that don't match it.
Internally, both use Span Term, which is essentially a term query that is specifically meant for use with span queries.
I'd suggest going through the links if you would like to understand more about span queries.
From the link:
Span queries are low-level positional queries which provide expert
control over the order and proximity of the specified terms. These are
typically used to implement very specific queries on legal documents
or patents.
Response:
{
"took" : 1,
"timed_out" : false,
"_shards" : {
"total" : 1,
"successful" : 1,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : {
"value" : 4,
"relation" : "eq"
},
"max_score" : 0.38327998,
"hits" : [
{
"_index" : "sortindex",
"_type" : "_doc",
"_id" : "3",
"_score" : 0.38327998,
"_source" : {
"name" : "Heinz Meyer"
}
},
{
"_index" : "sortindex",
"_type" : "_doc",
"_id" : "2",
"_score" : 0.26893127,
"_source" : {
"name" : "Heinz Meyer GmbH Heizung-Sanitär"
}
},
{
"_index" : "sortindex",
"_type" : "_doc",
"_id" : "1",
"_score" : 0.25940484,
"_source" : {
"name" : "Heinz A. Meyer"
}
},
{
"_index" : "sortindex",
"_type" : "_doc",
"_id" : "4",
"_score" : 0.19908611,
"_source" : {
"name" : "Karl-Heinz Meyer GmbH"
}
}
]
}
}
You can go ahead and add the above query to the one you have.
Hope this helps!

How to see which of the queries in a bool query matched?

I have combined multiple queries using a bool query. It can happen that some of them have matches and some do not. How can I know which of the queries had a match?
For example, here I have a bool query with two should conditions against the field landMark:
{
"query": {
"bool": {
"should": [
{
"match": {
"landMark": "wendys"
}
},
{
"match": {
"landMark": "starbucks"
}
}
]
}
}
}
How can I know which one of them matched in the above query if only one of them matches the documents?
You can use named queries for this purpose. Try this
{
"query": {
"bool": {
"should": [
{
"match": {
"landMark": {
"query": "wendys",
"_name": "wendy match"
}
}
},
{
"match": {
"landMark": {
"query": "starbucks",
"_name": "starbucks match"
}
}
}
]
}
}
}
You can use any _name. In the response, you will get something like this:
"matched_queries": ["wendy match"]
so you will be able to tell which query matched that specific document.
Named queries are certainly the way to go.
LINK - https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-named-queries-and-filters.html
The idea of a named query is simple: you tag each of your queries with a name, and in the result it shows which tags matched for each document.
curl -XPOST 'http://localhost:9200/data/data' -d ' { "landMark" : "wendys near starbucks" }'
curl -XPOST 'http://localhost:9200/data/data' -d ' { "landMark" : "wendys" }'
curl -XPOST 'http://localhost:9200/data/data' -d ' { "landMark" : "starbucks" }'
Hence, create your query in this fashion:
curl -XPOST 'http://localhost:9200/data/_search?pretty' -d '{
"query": {
"bool": {
"should": [
{
"match": {
"landMark": {
"query": "wendys",
"_name": "wendy_is_a_match"
}
}
},
{
"match": {
"landMark": {
"query": "starbucks",
"_name": "starbuck_is_a_match"
}
}
}
]
}
}
}'
{
"took" : 7,
"timed_out" : false,
"_shards" : {
"total" : 5,
"successful" : 5,
"failed" : 0
},
"hits" : {
"total" : 3,
"max_score" : 0.581694,
"hits" : [ {
"_index" : "data",
"_type" : "data",
"_id" : "AVMCNNCY3OZJfBZCJ_tO",
"_score" : 0.581694,
"_source": { "landMark" : "wendys near starbucks" },
"matched_queries" : [ "starbuck_is_a_match", "wendy_is_a_match" ] ---> "Matched tags
}, {
"_index" : "data",
"_type" : "data",
"_id" : "AVMCNS0z3OZJfBZCJ_tQ",
"_score" : 0.1519148,
"_source": { "landMark" : "starbucks" },
"matched_queries" : [ "starbuck_is_a_match" ]
}, {
"_index" : "data",
"_type" : "data",
"_id" : "AVMCNRsF3OZJfBZCJ_tP",
"_score" : 0.04500804,
"_source": { "landMark" : "wendys" },
"matched_queries" : [ "wendy_is_a_match" ]
} ]
}
}
