Lucene performance drop with single character

I am currently using Lucene to search a large number of documents.
Most commonly the search is on the name of the object in the document.
I am using StandardAnalyzer with a null list of stop words, which means words like 'and' remain searchable.
The search query looks like this: (+keys:bunker +keys:s*) (keys:0x000bunkers*)
The 0x000 is a prefix to make sure that it comes higher up the list of results.
The 'keys' field also contains other information, like postcode.
So the query must match at least one of those clauses.
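For reference, here is a rough sketch of how an equivalent query could be built programmatically (the field name and terms are just the ones from the example above, not my exact code):

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.*;

    // Illustrative only: (+keys:bunker +keys:s*) OR (keys:0x000bunkers*)
    BooleanQuery.Builder inner = new BooleanQuery.Builder();
    inner.add(new TermQuery(new Term("keys", "bunker")), BooleanClause.Occur.MUST);
    inner.add(new PrefixQuery(new Term("keys", "s")), BooleanClause.Occur.MUST);

    BooleanQuery.Builder outer = new BooleanQuery.Builder();
    outer.add(inner.build(), BooleanClause.Occur.SHOULD);
    outer.add(new PrefixQuery(new Term("keys", "0x000bunkers")), BooleanClause.Occur.SHOULD);
    Query query = outer.build();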
Now, with the background done, on to the main problem.
For some reason, when I search a term with a single character, whether it is just 's' or bunker 's', it takes around 1.7 seconds, compared to, say, 'bunk', which takes less than 0.5 seconds.
I have sorting; I have tried with and without it and there is no difference. I have also tried with and without the prefix.
Just wondering if anyone else has come across anything like this, or will have any inkling of why it would do this.
Thank you.

The most commonly used terms in your index will be the slowest terms to search on.
You're using StandardAnalyzer, which in your configuration does not remove any stop words. Further, it splits words on punctuation, so John's is indexed as two terms, John and s. These splits are likely creating a lot of occurrences of s in your index.
The more occurrences of a term in your index, the more work Lucene has to do at search time. A term like bunk likely occurs orders of magnitude less often in your index, so it requires a lot less work to process at search time.
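If you want to confirm this, you can compare the document frequencies of the two terms directly; a minimal sketch (the index path is an assumption, the field name is taken from your example):

    import java.nio.file.Paths;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.store.FSDirectory;

    public class TermStats {
        public static void main(String[] args) throws Exception {
            // Compare how many documents contain 's' versus 'bunk' in the 'keys' field.
            try (IndexReader reader = DirectoryReader.open(FSDirectory.open(Paths.get("/path/to/index")))) {
                System.out.println("docFreq(s)    = " + reader.docFreq(new Term("keys", "s")));
                System.out.println("docFreq(bunk) = " + reader.docFreq(new Term("keys", "bunk")));
            }
        }
    }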

Related

Why are Lucene/Elasticsearch prefix queries slower than term queries?

I've been recently reading about Lucene and Elasticsearch and it seems the following are true (correct me if I'm wrong):
1. prefix queries are slower than term queries
2. suffix queries (*ing) are slower than prefix queries (ing*)
This seems like a strange combination of properties. Perhaps I need to broaden the scope of data structures I'm considering, but if a segment were structured like a hash table, I could easily see that 1 would be true (the term query would be O(1) and a prefix query would require a full scan); however, 2 would not be true (both prefix and suffix would require a full scan). If the segment were laid out like a sorted array, I could easily see that 2 would be true (a prefix query could be performed with a binary search, O(log n), and the suffix would require a full scan); however, 1 would no longer be true (both a term and a prefix query would require a binary search).
My only other thought is that there might be some combination of both hashing and sorting going on to account for this behavior (e.g. hash to some partition and sort within that partition). However, my understanding is that Elasticsearch partitions by a document identifier while the inverted index key is a term, so a query for a term still requires the request to be sent to all partitions.
Can anyone provide me with some intuition as to how/why this behavior exists?
Note:
https://www.youtube.com/watch?v=T5RmMNDR5XI would suggest that a segment is structured similar to a sorted array rather than a hash table.
The reason I believe 1 is true is that https://medium.com/@mourjo_sen/a-detailed-comparison-between-autocompletion-strategies-in-elasticsearch-66cb9e9c62c4 mentions "The most important reason why prefix-like queries should never be used in production environments is that they are extremely slow. The reason for this is that the tokens in ES are not directly prefix-able"
The reason I believe 2 is true is that https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html mentions "Allowing a wildcard at the beginning of a word (eg "*ing") is particularly heavy, because all terms in the index need to be examined, just in case they match".
I'm not that familiar with ES-specific details, so they might be doing something different from plain Solr/Lucene - but #1 is not usually the case.
A prefix match will be more expensive than looking up a single term, but it's not that much more expensive. It can be compared to doing a range search, which you can perform yourself if you want to: field:[aa TO ab) could be compared to doing field:aa* (in theory), effectively retrieving all tokens that lie within that range and then resolving the document set that matches those tokens.
The fact that there are more tokens that match means that you can't simply take the list attached to a single token (a matching term) and retrieve those documents; you have to retrieve a possibly large set of matching tokens and then compute the document set for that. This is not a very expensive computation, but it is more expensive than just a single match. The lookup can be done by finding the start and end indexes of the matching tokens in the term dictionary, retrieving all terms between those two, and finding the set of matching document ids.
A query of foo* against an index with the following terms:
bar, baz, foo, foobar, spam
^----------^
will collect the list of documents attached to foo and foobar, merge it and then retrieve the documents.
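To make that concrete, here is a rough sketch of how you could walk the term dictionary for a prefix yourself with Lucene's TermsEnum (Lucene 8.x-style API; it assumes reader is an already-open IndexReader, and "field"/"foo" are just the example values):

    import org.apache.lucene.index.*;
    import org.apache.lucene.util.BytesRef;
    import org.apache.lucene.util.StringHelper;

    // Enumerate all terms in 'field' starting with "foo" - exactly the range scan described above.
    Terms terms = MultiTerms.getTerms(reader, "field");   // use MultiFields.getTerms(...) on older Lucene versions
    if (terms != null) {                                   // null if the field does not exist
        TermsEnum te = terms.iterator();
        BytesRef prefix = new BytesRef("foo");
        if (te.seekCeil(prefix) != TermsEnum.SeekStatus.END) {
            do {
                BytesRef term = te.term();
                if (!StringHelper.startsWith(term, prefix)) {
                    break;                                 // walked past the end of the matching range
                }
                System.out.println(term.utf8ToString() + " -> " + te.docFreq() + " docs");
            } while (te.next() != null);
        }
    }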
Slower does not mean that it's catastrophic or not optimised in any way; just that it's more expensive than a direct match where the set of documents has already been determined. However, you probably have more than one term in your query already, so the same process (albeit slightly higher up in the hierarchy) happens there as well.
A postfix match (your #2) - i.e. matching a wildcard at the beginning of the token - is expensive, since all tokens in the index usually have to be considered. The index has the terms sorted alphanumerically, so when you want to only look at the end of the string you have to consider that each token could match, regardless of where it's located in the index - so you get a full index scan. However, if this is a use case you see happening often, you can use the reverse wildcard filter (there's a sketch of the idea after the example below). This works by reversing the string and indexing tokens that match the terms in reverse order, so that foo is indexed as oof and a wildcard search gets turned into oof* instead.
A query of *ar against an index with the following terms:
bar, baz, foo, foobar, spam
?! ? ? ?! ?
will have to look at each term to decide if it ends with ar.
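Here is a minimal sketch of what such a reversed field's index-time analysis chain could look like at the Lucene level (Solr's ReversedWildcardFilterFactory and the ES reverse token filter are built on the same idea; the tokenizer choice here is just an example):

    import org.apache.lucene.analysis.*;
    import org.apache.lucene.analysis.core.WhitespaceTokenizer;
    import org.apache.lucene.analysis.reverse.ReverseStringFilter;

    // Index-time analyzer for a separate "reversed" field: every token is stored backwards,
    // so a leading-wildcard query like *ar can be rewritten into the prefix query ra* on this field.
    Analyzer reversedAnalyzer = new Analyzer() {
        @Override
        protected TokenStreamComponents createComponents(String fieldName) {
            Tokenizer source = new WhitespaceTokenizer();
            TokenStream sink = new LowerCaseFilter(source);
            sink = new ReverseStringFilter(sink);
            return new TokenStreamComponents(source, sink);
        }
    };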
The reason for using an EdgeNGramFilter (your comment / #3) is that you move as much of the required processing as possible to indexing time (doing up front the work you'd otherwise do at query time - even if prefix queries aren't really expensive, they still have a cost), and additionally: wildcard queries do not support most analysis. So many people end up running wildcard queries against a set of tokens that have been stemmed or otherwise processed, and are then surprised when their wildcard queries don't generate a match. Only a small subset of filters can be applied to wildcard queries (such as the LowercaseFilter). Those filters are known as being "multi-term aware", since the terms they process can end up being expanded to multiple terms before collection of documents happens.
Another reason is that using an EdgeNGramFilter will give you proper frequency scores for each prefix, giving you effective scoring for prefixed terms as well.
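For completeness, a minimal sketch of an edge n-gram index-time analyzer in plain Lucene (the constructor signature varies a bit between Lucene versions, and the gram sizes are just example values to tune):

    import org.apache.lucene.analysis.*;
    import org.apache.lucene.analysis.ngram.EdgeNGramTokenFilter;
    import org.apache.lucene.analysis.standard.StandardTokenizer;

    // Index-time analyzer: "bunker" is indexed as b, bu, bun, bunk, bunke, bunker,
    // so a prefix search becomes a plain term lookup at query time.
    // Use a plain (non-n-gram) analyzer on the query side so the query term matches a single gram.
    Analyzer prefixAnalyzer = new Analyzer() {
        @Override
        protected TokenStreamComponents createComponents(String fieldName) {
            Tokenizer source = new StandardTokenizer();
            TokenStream sink = new LowerCaseFilter(source);
            sink = new EdgeNGramTokenFilter(sink, 1, 10, true); // minGram, maxGram, preserveOriginal (Lucene 8.x)
            return new TokenStreamComponents(source, sink);
        }
    };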

Elastic Search substring search

I need to implement search by substring. It is supposed to work like “CTRL + F”, which highlights a word if the typed substring matches part of it.
The search is going to be performed by two fields only:
Name - no more than 255 chars
Id - no more than 200 chars
However, the number of records is going to be pretty large, about a million.
So far I'm using a query_string search with keywords wrapped in wildcards, but it will definitely lead to performance problems later on once the number of records starts growing.
Do you have any suggestions for a more performant solution?
Searching with leading wildcards is going to be extremely slow on a large index
Avoid beginning patterns with * or ?. This can increase the iterations needed to find matching terms and slow search performance.
As written in the documentation, wildcard queries are very slow.
It is better to use an ngram strategy if you want it to be fast at query time. If you want to search by partial match, word prefix, or any substring match, use the n-gram tokenizer, which will improve the full-text search.
The ngram tokenizer first breaks text down into words whenever it encounters one of a list of specified characters, then it emits N-grams of each word of the specified length.
Please go through this SO answer, which includes a working example of a partial match using ngrams.
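At the Lucene level underneath Elasticsearch, the ngram approach amounts to an index-time analyzer along these lines (the gram size is an example value you would tune; at query time the input is analyzed the same way and all grams must match):

    import org.apache.lucene.analysis.*;
    import org.apache.lucene.analysis.ngram.NGramTokenizer;

    // Index-time analyzer: "elastic" becomes ela, las, ast, sti, tic (trigrams),
    // so an arbitrary-substring query can be answered with ordinary term matches.
    Analyzer substringAnalyzer = new Analyzer() {
        @Override
        protected TokenStreamComponents createComponents(String fieldName) {
            Tokenizer source = new NGramTokenizer(3, 3); // minGram = maxGram = 3
            TokenStream sink = new LowerCaseFilter(source);
            return new TokenStreamComponents(source, sink);
        }
    };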

Matching words and valid sub-words in elasticseach

I've been working with ElasticSearch within an existing code base for a few days, so I expect that the answer is easy once I know what I'm doing. I want to extend a search to yield the same results when I search with a compound word, like "eyewitness", or its component words separated by a whitespace, like "eye witness".
For example, I have a catalog of toy cars that includes both "firetruck" toys and "fire truck" toys. I would like to ensure that if someone searched on either of these terms, the results would include both the "firetruck" and the "fire truck" entries.
I attempted to do this at first with the "fuzziness" of a match, hoping that "fire truck" would be considered one transform away from "firetruck", but that does not work: ES fuzziness is per-word and will not add or remove whitespace characters as a valid transformation.
I know that I could do some brute-forcing before generating the query by trying to come up with additional search terms by breaking big words into smaller words and also joining smaller words into bigger words and checking all of them against a dictionary, but that falls apart pretty quickly when "fuzziness" and proper names are part of the task.
It seems like this is exactly the kind of thing that ES should do well, and that I simply don't have the right vocabulary yet for searching for the solution.
Thanks, everyone.
There are two things you could do:
you could split compound words into their components, i.e. firetruck would be split into the two tokens fire and truck, see here
you could use n-grams, i.e. for 4-grams the original firetruck gets split into the tokens fire, iret, retr, etru, truc, ruck. At query time, the scoring function helps you end up with pretty decent results. Check out this.
Always remember to do the same tokenization on both the index side and the query side.
I would start with the ngrams and if that is not good enough you should go with the compounds and split them yourself - but that's a lot of work depending on the vocabulary you have under consideration.
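If you do go the compound route, Lucene ships a DictionaryCompoundWordTokenFilter that can do the splitting for you given a word list; a minimal sketch (the dictionary entries and tokenizer are obviously just examples):

    import java.util.Arrays;
    import org.apache.lucene.analysis.*;
    import org.apache.lucene.analysis.compound.DictionaryCompoundWordTokenFilter;
    import org.apache.lucene.analysis.core.WhitespaceTokenizer;

    // "firetruck" is kept as a token and additionally emits "fire" and "truck",
    // so both "firetruck" and "fire truck" queries can match the same document.
    CharArraySet dictionary = new CharArraySet(Arrays.asList("fire", "truck", "eye", "witness"), true);
    Analyzer compoundAnalyzer = new Analyzer() {
        @Override
        protected TokenStreamComponents createComponents(String fieldName) {
            Tokenizer source = new WhitespaceTokenizer();
            TokenStream sink = new LowerCaseFilter(source);
            sink = new DictionaryCompoundWordTokenFilter(sink, dictionary);
            return new TokenStreamComponents(source, sink);
        }
    };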
hope the concepts and the links help, fricke

Solr Boosting Logic Concepts

I'm trying to understand boosting and if boosting is the answer to my problem.
I have an index that has different types of data.
E.g.: Index Animals. One of the fields is animaltype. This value can be Carnivorous, Herbivorous, etc.
Now when we query in search, I want to show results of type carnivorous at the top, and then the herbivorous type.
Also would it be possible to show only say top 3 results from a type and then remaining from other types?
Let's assume that for the herbivorous type we have a field named vegetables. This will have values only for a herbivorous animaltype.
Now, would it be possible to have boosting rules specified as follows:
Boost Levels:
animaltype:Carnivorous
then animaltype:Herbivorous and vegetablesfield: spinach
then animaltype:Herbivorous and vegetablesfield: carrot
etc. Basically, boosting on various fields at various levels. I'm new to this concept. It would be really helpful to get some inputs/guidance.
Thanks,
Kasturi Chavan
Your example is closer to sorting than boosting, as you have a priority list for how important each document is - while boosting (in Solr) is usually applied a bit more fluidly, meaning that there is no hard line between documents of type X and type Y.
However - boosting with appropriately large values will in effect give you the same result, putting the documents into different score "areas" which will then give you the sort order you're looking for. You can see the score contributed by each term by appending debugQuery=true to your query. Boosting says that 'a document with this value is z times more important than those with a different value', but if the document only contains low-scoring tokens from the search (usually words that are very common), while other documents contain high-scoring tokens (words that are infrequent), the latter documents might still be considered more important.
Example: searching for "city paris", where most documents contain the word 'city', but only a few contain the word 'paris' (and do not contain 'city'). Even if you boost all documents assigned to country 'germany', the score contributed by 'city' might still be lower - even with the boost factor - than what 'paris' contributes alone. This might not occur in real life, but you should know what the boost actually changes.
Using the edismax handler, you can apply the boost in two different ways - one is to use boost=, which is multiplicative, or to use either bq= or bf=, which are additive. The difference is how the boost contributes to the end score.
For your example, the easiest way to get something similar to what you're asking, is to use bq (boost query):
bq=animaltype:Carnivorous^1000&
bq=animaltype:Herbivorous^10
These boosts will probably be large enough to move all documents matching these queries into their own buckets, without moving between groups. To create "different levels" as your example shows, you'll need to tweak these values (and remember, multiple boosts can be applied to the same document if something is both herbivorous and eats spinach).
A different approach would be to create a function query using query, if and similar functions to result in a single integer value that you can use as a sorting value. You can also calculate this value when indexing the document if it's static (which your example is), and then sort by that field instead. It will require you to reindex your documents if the sorting values change, but it might be an easy and effective solution.
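As a rough illustration, such a function-query sort could look something like this (the field and values are the ones from your example, and it assumes animaltype is indexed as-is, e.g. a string field; termfreq() is non-zero only when the document contains the term, and if() picks the level):

    sort=if(termfreq(animaltype,'Carnivorous'),2,if(termfreq(animaltype,'Herbivorous'),1,0)) desc, score desc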
To achieve the "Top 3 results from a type" you're probably going to want to look at Result grouping support - which makes it possible to get "x documents" for each value in a single field. There is, as far as I know, no way to say "I want three of these at the top, then the rest from other values", except for doing multiple queries (and excluding the three you've already retrieved from the second query). Usually issuing multiple queries works just as fine (or better) performance wise.

How do you Index Files for Fast Searches?

Nowadays, Microsoft and Google will index the files on your hard drive so that you can search their contents quickly.
What I want to know is how do they do this? Can you describe the algorithm?
The simple case is an inverted index.
The most basic algorithm is simply:
scan the file for words, creating a list of unique words
normalize and filter the words
place an entry tying that word to the file in your index
The details are where things get tricky, but the fundamentals are the same.
By "normalize and filter" the words, I mean things like converting everything to lowercase, removing common "stop words" (the, if, in, a etc.), possibly "stemming" (removing common suffixes for verbs and plurals and such).
After that, you've got a unique list of words for the file and you can build your index off of that.
There are optimizations for reducing storage, techniques for checking locality of words (is "this" near "that" in the document, for example).
But, that's the fundamental way it's done.
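To make the fundamentals concrete, here's a minimal in-memory sketch of that idea (toy code, not how a real engine stores things on disk):

    import java.util.*;

    // Toy inverted index: term -> sorted set of document ids.
    class TinyInvertedIndex {
        private static final Set<String> STOP_WORDS = new HashSet<>(Arrays.asList("the", "a", "if", "in"));
        private final Map<String, SortedSet<Integer>> postings = new HashMap<>();

        // Scan a document: split into words, normalize and filter, then record the doc id per term.
        void add(int docId, String text) {
            for (String raw : text.split("\\W+")) {
                String term = raw.toLowerCase(Locale.ROOT);                  // normalize
                if (term.isEmpty() || STOP_WORDS.contains(term)) continue;   // filter
                postings.computeIfAbsent(term, t -> new TreeSet<>()).add(docId);
            }
        }

        // Boolean AND query: intersect the posting lists of all query terms.
        Set<Integer> search(String... queryTerms) {
            SortedSet<Integer> result = null;
            for (String q : queryTerms) {
                SortedSet<Integer> list = postings.getOrDefault(q.toLowerCase(Locale.ROOT), new TreeSet<>());
                if (result == null) result = new TreeSet<>(list);
                else result.retainAll(list);
            }
            return result == null ? Collections.emptySet() : result;
        }
    }

A real engine layers storage optimizations, term positions, and ranking on top of this same structure.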
Here's a really basic description; for more details, you can read this textbook (free online): http://informationretrieval.org/¹
1) For all files, create an index. The index consists of all unique words that occur in your dataset (called a "corpus"). With each word, a list of document ids is associated; each document id refers to a document that contains the word.
Variations: sometimes when you generate the index you want to ignore stop words ("a", "the", etc). You have to be careful, though ("to be or not to be" is a real query composed of stopwords).
Sometimes you also stem the words. This has more impact on search quality in non-English languages that use suffixes and prefixes to a greater extent.
2) When a user enters a query, look up the corresponding lists, and merge them. If it's a strict boolean query, the process is pretty straightforward -- for AND, a docid has to occur in all the word lists, for OR, in at least one wordlist, etc.
3) If you want to rank your results, there are a number of ways to do that, but the basic idea is to use the frequency with which a word occurs in a document, as compared to the frequency you expect it to occur in any document in the corpus, as a signal that the document is more or less relevant (a simplified form of this idea is shown after this list). See the textbook.
4) You can also store word positions to infer phrases, etc.
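For point 3, one common simplified form of that frequency-based signal is the classic tf-idf weight, where tf(t, d) is the count of term t in document d, N is the number of documents in the corpus, and df(t) is the number of documents containing t:

    score(t, d) = tf(t, d) * log(N / df(t))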
Most of that is irrelevant for desktop search, as you are more interested in recall (all documents that include the term) than ranking.
¹ previously on http://www-csli.stanford.edu/~hinrich/information-retrieval-book.html, accessible via wayback machine
You could always look into something like Apache Lucene.
Apache Lucene is a high-performance, full-featured text search engine library written entirely in Java. It is a technology suitable for nearly any application that requires full-text search, especially cross-platform.

Resources