I'm trying to create an Elasticsearch query that searches across multiple fields. This works fine so far. However, I would like to refine it.
Let's say the word "test" was indexed. When I search for "tes", Elasticsearch does not find that word, but I would like it to match already for the partial input. Combining this with my existing query is where I'm stuck.
{
  "multi_match": {
    "query": "*" + query + "*",
    "type": "cross_fields",
    "operator": "and",
    "fields": ["article.number^1", "article.name_de^1", "article.name_en^5", "article.name_fr^5", "article.description^1"],
    "tie_breaker": 0
  }
}
Depending on your constraints, here are your options.
If you wish to use a wildcard before/after your search term, you can use a wildcard query, as sketched below. This has a high processing cost at query time.
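A minimal sketch of that option, assuming one of the fields from your question (for several fields you would combine wildcard clauses in a bool/should, or use query_string with wildcards):
{
  "query": {
    "wildcard": {
      "article.name_de": {
        "value": "*tes*"
      }
    }
  }
}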
If you are fine with the additional storage cost, you can instead tokenize your input during the analysis process; see the ngram tokenizer (a sample definition follows). Beware that with long strings the storage requirement can quickly explode.
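A sketch of such a tokenizer definition (the name my_ngram_tokenizer is a placeholder; it still has to be wired into a custom analyzer and set on the relevant fields):
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "my_ngram_tokenizer": {
          "type": "ngram",
          "min_gram": 2,
          "max_gram": 3
        }
      }
    }
  }
}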
Suppose I have a query clause like,
{
  "query": {
    "query_string": {
      "query": "ads spark~",
      "fields": [
        "flowName",
        "projectName"
      ],
      "default_operator": "and"
    }
  }
}
For this the explain output is:
"explanation": "+(projectName:ads | flowName:ads) +(projectName:spark~1 | flowName:spark~1)"
Whereas if I remove the fuzzy operator from the query (updated query clause below),
{
  "query": {
    "query_string": {
      "query": "ads spark",
      "fields": [
        "flowName",
        "projectName"
      ],
      "default_operator": "and"
    }
  }
}
I get a different explain output,
"explanation": "(projectName:ads spark | flowName:ads spark)"
Any idea why the generated tokens are different in the two cases?
When you use fuzzy queries, the way the query is parsed and constructed in Lucene differs from the normal behavior.
The one you see in the explanation is the Lucene query built from the query text.
When using fuzziness, most of the text analysis is not done; only the filters that work on a per-character basis are allowed, as you can read in the documentation [1][2].
In the first case, since you are using fuzziness, the query text is split on whitespace. Then, for each term, a mandatory clause is built (the AND operator states that each term MUST appear in the document). You can call this a "term-centric" query. Each term is then searched across the multiple input fields with a disjunction (|) clause.
You therefore see "ads MUST be in projectName OR flowName, AND spark (with variations within the Levenshtein distance) MUST be in projectName OR flowName".
In the second case, no fuzziness is used. Here the query is passed to each field, and the terms follow the corresponding field's text analysis (if any). You may call this a "field-centric" query. Therefore you see "ads spark MUST be in projectName OR flowName" for a document to match.
You are effectively moving from "I want all the terms to appear in the document" (possibly in different fields) to "I want all the terms to appear in a single field".
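You can reproduce these parsed queries yourself with the validate API (the index name below is a placeholder); the response contains the rewritten Lucene query shown in the explanations above:
curl -XGET 'localhost:9200/your_index/_validate/query?explain=true' -H 'Content-Type: application/json' -d '{
  "query": {
    "query_string": {
      "query": "ads spark~",
      "fields": ["flowName", "projectName"],
      "default_operator": "and"
    }
  }
}'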
If you want an in-depth analysis, you can read this blog post: https://sease.io/2021/05/apache-solr-sow-parameter-split-on-whitespace-and-multi-field-full-text-search.html. It is about Solr, but Elasticsearch applies the same behavior.
We have an Elasticsearch structure that specifies fields in a multi_match query like this:
"multi_match": {
"query": "find this string",
"fields": ["*_id^20", "*_name^20", "*"]
}
This works great, except under certain circumstances, like when the query is "Find NOWAK". This is because "NOW" is a reserved word for date searching, and the field pattern "*" matches fields that are defined as dates.
So what I would like to do is ignore fields that match "*_at".
Is there a way to tell Elasticsearch to ignore certain fields in a multi_match query?
If the answer to that is "no", then the follow-up question is how to escape the search term so that it won't trigger keywords.
Running version 6.7
Try this:
Exclude a field on an Elasticsearch query
curl -XGET 'localhost:9200/testidx/items/_search?pretty=true' -d '{
  "query": {
    "query_string": {
      "fields": ["title", "field2", "field3"],   <-- add this
      "query": "Titulo"
    }
  },
  "_source": {
    "exclude": ["*.body"]
  }
}'
Apparently the answer is "No: there is not a way to tell Elasticsearch to ignore certain fields in a multi_match query".
For my particular issue I found an inexpensive way to determine the necessary white-listed fields (this is performed outside of Elasticsearch, otherwise I would post it here) and list those in place of the "*" when building the query.
I am hopeful someone will tell me I'm wrong, but I don't think I am.
I'm having trouble finding the solution for a use case here.
Basically, it's pretty simple: I need to perform a "contains" query, like a SQL LIKE '%...%'.
I've seen there is a regexp query, which I actually managed to get working perfectly, but since it seems to scale badly, I'm trying out ngrams. I've played around with them before and know "how they work", but the behaviour isn't the one I expect.
Basically, I've configured my analyzer with min_gram = 2, max_gram = 20. Say I index a user called "Christophe". I want the query "Chris" to match, which it does, since "Chris" is a 5-gram of "Christophe". The problem is that "Risotto" matches as well: the query gets broken down into ngrams too, and ultimately "is" is a 2-gram of "Christophe", so it matches.
What I need is for the analyzer to break the indexed field into ngrams at index time, but to compare those against the FULL text of the query. "Risotto" should match "Risotto", "XXXRisottoXXX" and so on, but not "Risolo" or something where only the ngrams match.
Is there any solution?
You need to use the search_analyzer setting to have distinct index-time and search-time analyzers.
Sample from docs:
"mappings": {
"my_type": {
"properties": {
"text": {
"type": "text",
"analyzer": "autocomplete",
"search_analyzer": "standard"
}
}
}
}
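Note that autocomplete is not a built-in analyzer, so it has to be defined in the index settings. For the "contains" requirement in your question, an ngram-based definition could look roughly like this (names are placeholders; with min_gram 2 and max_gram 20, recent versions also require raising index.max_ngram_diff):
{
  "settings": {
    "index": {
      "max_ngram_diff": 18
    },
    "analysis": {
      "tokenizer": {
        "my_ngrams": {
          "type": "ngram",
          "min_gram": 2,
          "max_gram": 20
        }
      },
      "analyzer": {
        "autocomplete": {
          "type": "custom",
          "tokenizer": "my_ngrams",
          "filter": ["lowercase"]
        }
      }
    }
  }
}
With analyzer set to autocomplete and search_analyzer set to standard as in the mapping above, the field is ngrammed at index time while the query "Risotto" stays whole at search time, so it matches "XXXRisottoXXX" without short grams like "is" matching unrelated words.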
In our Elasticsearch index we have indexed some persons, where each person can have multiple taggings.
Take for example 2 persons (fullname - (taggings)):
Bart Newman - (bart,engineer,ceo)
Bart Holland - (developer,employer)
Our search query:
{
  "multi_match": {
    "type": "most_fields",
    "query": "bart developer",
    "operator": "or",
    "boost": 5,
    "fields": [
      "fullname^5",
      "taggings.tag.name^5"
    ],
    "fuzziness": 0
  }
}
Let's say we are searching for "bart developer". We would expect Bart Holland to come before Bart Newman, but because Bart Newman has bart both in his fullname and as a tag, he scores higher than Bart Holland does.
Is there a way to configure scoring so that matches on different words (bart, developer) score higher than multiple matches on the same word (bart)?
I already tried the and operator, without success.
Thanks!
This is kind of expected with the most_fields query; it is field-centric rather than term-centric. From the docs:
most_fields being field-centric rather than term-centric: it looks for the most matching fields, when really what we're interested in is the most matching terms.
Another problem is inverse document frequency (IDF), which is likely also at play in your case. I guess only a few documents have a tag named bart, which is why its IDF is very high and hence it gets a higher score.
As suggested in the links above, you should check how documents are scored with validate and explain.
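For example, sending this body to /persons/_search (the index name is assumed here) returns a detailed per-field score breakdown for every hit:
{
  "explain": true,
  "query": {
    "multi_match": {
      "type": "most_fields",
      "query": "bart developer",
      "fields": ["fullname^5", "taggings.tag.name^5"]
    }
  }
}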
There are a couple of ways to solve this issue:
1) You can use a custom _all field, i.e. copy both the full name and the tag information to a new field with the copy_to parameter and then query on it, but you have to reindex your data for that (a sketch follows after the cross_fields example below).
2) I think the better solution is to use cross fields, which takes a term-centric approach. From the docs:
The cross_fields type first analyzes the query string to produce a
list of terms, and then it searches for each term in any field.
It also solves the IDF issue by blending it across all fields.
This should solve your issue.
{
  "query": {
    "multi_match": {
      "type": "cross_fields",
      "query": "bart developer",
      "operator": "or",
      "fields": [
        "fullname",
        "taggings.tag.name"
      ],
      "fuzziness": 0
    }
  }
}
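For completeness, a rough sketch of option 1 with copy_to (the type name person and the field name combined are made up; the source fields are taken from your question). Both source fields are copied into combined at index time, and you then run a plain match query against combined:
{
  "mappings": {
    "person": {
      "properties": {
        "fullname": {
          "type": "text",
          "copy_to": "combined"
        },
        "taggings": {
          "properties": {
            "tag": {
              "properties": {
                "name": {
                  "type": "text",
                  "copy_to": "combined"
                }
              }
            }
          }
        },
        "combined": {
          "type": "text"
        }
      }
    }
  }
}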
Hope this helps!
Haystack generates Elasticsearch queries to get results from Elasticsearch. The queries get prepended with a filter containing the following query:
"query": {
"query_string": {
"query": "django_ct:(customers.customer)"
}
}
What is the meaning of the django_ct:(..) query? Is this a function that Haystack installs in Elasticsearch? Is it some caching magic? Can I get rid of this part altogether?
The reason why I'm asking is that I have to build a custom query to use an Elasticsearch multi_field. In order to change the queries, I first want to understand how Haystack generates its own queries.
Haystack uses Django's content types to determine which model attributes to search against in Elasticsearch. This is not really best practice, but it's how it's done in HS.
Basically, the code in HS looks something like this:
# django_ct is a "<app_label>.<model_name>" string, e.g. "customers.customer"
from django.contrib.contenttypes.models import ContentType

app_name, model_name = django_ct.split('.')
ct = ContentType.objects.get_by_natural_key(app_name, model_name)
model = ct.model_class()
# do stuff with model
So you really don't want to ignore it when using Haystack if you are indexing more than one model in your index.
I have a couple other answers based on elasticsearch here: index analyzer vs query analyzer in haystack - elasticsearch? and here: Django Haystack Distinct Value for Field
EDIT regarding multi-fields:
I've used Haystack and multi-fields in the past, so I'm not sure you need to write your own backend. The key is understanding how Haystack creates searches. As I said in one of the other posts, everything goes into query_string, and from there it creates a Lucene-based search string. Again, not really best practice.
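As a rough illustration only (this is not Haystack's literal output), the kind of request it ends up building combines the django_ct filter from your question with the user's terms in another query_string, along these lines (1.x-era filtered syntax, matching this thread):
{
  "query": {
    "filtered": {
      "query": {
        "query_string": {
          "query": "some_field_edgengram:(cat)"
        }
      },
      "filter": {
        "query": {
          "query_string": {
            "query": "django_ct:(customers.customer)"
          }
        }
      }
    }
  }
}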
So let's say you have a multi-field that looks like this:
"some_field": {
"type": "multi_field",
"fields": {
"some_field_edgengram": {
"type": "string",
"index": "analyzed",
"index_analyzer": "autocomplete_index",
"search_analyzer": "autocomplete_search"
},
"some_field": {
"type": "string",
"index": "not_analyzed"
}
}
},
In Haystack, you can just search against some_field and some_field_edgengram directly.
For example, SearchQuerySet().filter(some_field="cat") and SearchQuerySet().filter(some_field_edgengram="cat") will both work, but the first will only match tokens that are exactly cat, while the second will match cat, cats, catlin, catch, etc., at least using my edgengram analyzers.
However, just because you use Haystack for indexing and search doesn't mean you have to use it for 100% of your search solution. In the past, I've used PYES in some areas of the app and Haystack in others, because Haystack lacked support for more advanced features and the query_string parsing was losing some of the finer-grained accuracy we were looking for.
In your case, you could get results from the search engine directly via elasticutils or python-elasticsearch for the more advanced searches, and use Haystack for the other, more routine searches.
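For instance, a hand-written request sent directly to Elasticsearch (field names assumed from the mapping sketch above) can keep the django_ct restriction while using a plain match clause instead of query_string parsing:
{
  "query": {
    "bool": {
      "must": [
        { "term": { "django_ct": "customers.customer" } },
        { "match": { "some_field_edgengram": { "query": "cat", "operator": "and" } } }
      ]
    }
  }
}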