Partial word tokenizers vs word-oriented tokenizers in Elasticsearch

Reading the link below, I am looking for a use case or example comparing when an ngram tokenizer would be a better choice than the standard tokenizer.
I hope the Elastic documentation will include more examples and comparisons in the future.
https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-tokenizers.html
Can someone help me?
Thank you.

The Elastic documentation does include more examples. You can find them on the dedicated page of each tokenizer (here is the standard, here is the ngram).
In general, you might want to use an ngram tokenizer to implement a search-as-you-type functionality, such as the auto-suggest in a search input.
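The search-as-you-type case can be made concrete with a sketch. Elasticsearch's `edge_ngram` tokenizer emits the leading substrings of each term at index time; here is a minimal, illustrative Python version of the idea (the function and its `min_gram`/`max_gram` parameters mirror the tokenizer's settings, but this is not the actual implementation):

```python
def edge_ngrams(term, min_gram=1, max_gram=5):
    """Emit the leading substrings (edge n-grams) of a term."""
    return [term[:n] for n in range(min_gram, min(max_gram, len(term)) + 1)]

# Indexing "green" stores every prefix, so a partial query such as
# "gre" typed by the user matches an indexed token exactly.
print(edge_ngrams("green"))  # → ['g', 'gr', 'gre', 'gree', 'green']
```

With the standard tokenizer, a query for `gre` would not match the token `green` at all, which is why autocomplete inputs typically use edge n-grams.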

Related

Is there a list of punctuations removed in standard analyzer in Elasticsearch?

In the official documentation of the standard analyzer in Elasticsearch, it is mentioned that "It removes most punctuation".
I need the list of punctuation characters that the standard analyzer removes. Can someone point me to any reference, or to the section of the Elasticsearch source code that might be useful in this regard?
The Elasticsearch standard analyzer is based on the Unicode Text Segmentation algorithm, as specified in Unicode Standard Annex #29. There are a couple of rules and regular expressions that you can find in detail in the documentation.
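To see roughly what "removes most punctuation" means in practice, here is a crude Python approximation: UAX #29 word segmentation keeps runs of letters and digits and drops the punctuation between them. This sketch is only illustrative; the real tokenizer follows the full UAX #29 rules (e.g. it keeps word-internal apostrophes, which this regex splits):

```python
import re

def standard_like_tokens(text):
    # Keep runs of letters/digits; drop everything else, including
    # the underscore. This is a rough stand-in for UAX #29 word
    # segmentation, not the actual StandardTokenizer behaviour.
    return re.findall(r"[^\W_]+", text)

print(standard_like_tokens("Hello, world! 2-QUICK."))  # → ['Hello', 'world', '2', 'QUICK']
```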

Elasticsearch: Return same search results regardless of diacritics/accents

I've got a word in the text (e.g. nagymező) and I want to be able to type either nagymező or nagymezo into the search query, and it should show the text containing that word in the search results.
How can it be accomplished?
You want to use a Unicode folding strategy, probably the asciifolding filter. I'm not sure which version of Elasticsearch you're on, so here are a couple of documentation links:
asciifolding for ES 2.x (older version, but much more detailed guide)
asciifolding for ES 6.3
The trick is to remove the diacritics at index time so they don't bother you anymore.
Have a look at ignore accents in elastic search with haystack
and also at https://www.elastic.co/guide/en/elasticsearch/guide/current/custom-analyzers.html (look for 'diacritic' on the page).
Then, just because it will probably be useful to someone one day or another, know that the regular expression \p{L} will match any Unicode letter :D
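The folding idea itself is easy to demonstrate outside Elasticsearch. This Python sketch approximates what the asciifolding filter does for accented letters, using Unicode decomposition (the real filter applies many more mappings, e.g. for letters like ß that don't decompose into base + mark):

```python
import unicodedata

def ascii_fold(text):
    """Decompose characters (NFD) and drop the combining marks,
    so accented letters fold to their base letter."""
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

print(ascii_fold("nagymező"))  # → nagymezo
```

If both the indexed text and the query pass through the same folding step, `nagymező` and `nagymezo` produce the same terms and therefore match.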
Hope this helps,

When are Stemmers used in ElasticSearch?

I am confused about when stemmers are used in ElasticSearch.
In the Dealing with Human Language / Reducing Words to Their Root Form section, I see that stemmers are used to strip words down to their root forms. This led me to believe that stemmers were used as a token filter on an analyzer.
But doesn't a token filter only filter tokens, rather than actually reducing words to their root forms?
So, where are stemmers used?
In fact, you can do stemming with a token filter in an analyzer. That is exactly how stemming works in ES. Have a look at the documentation for Stemmer Token Filter.
ES also provides the Snowball Analyzer, which is a convenient analyzer to use for stemming.
Otherwise, if there is a different type of stemming you would like to use, you can always build your own Custom Analyzer. This gives you complete control over the stemming solution that works best for you, as discussed here in the guide.
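The point that a token filter can transform tokens, not just drop them, is worth illustrating. Here is a toy Python sketch of an analyzer as a tokenizer plus a filter chain, with a deliberately naive suffix stripper standing in for a real stemmer (Porter/Snowball stemmers use much richer rule sets; this is only to show where stemming sits in the pipeline):

```python
def tokenize(text):
    # Stand-in for a tokenizer: lowercase and split on whitespace.
    return text.lower().split()

def stem_filter(tokens):
    # A token filter receives tokens and may rewrite them in place.
    # This toy stripper removes a few common English suffixes.
    suffixes = ("ing", "ed", "es", "s")
    out = []
    for tok in tokens:
        for suf in suffixes:
            if tok.endswith(suf) and len(tok) - len(suf) >= 3:
                tok = tok[: -len(suf)]
                break
        out.append(tok)
    return out

# Analyzer = tokenizer + token filters:
print(stem_filter(tokenize("Jumping jumps jumped")))  # → ['jump', 'jump', 'jump']
```

All three inflected forms collapse to the same term, which is exactly why stemming is implemented as a token filter in ES.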
Hope this helps!

Custom Language Stemmer for Elasticsearch

Is there any way to create a new stemmer? For example, an analyzer for the Czech language, with a Czech stemmer, is already built in. That algorithm was made by some people in the Netherlands. It's not bad, but to a native speaker it is clear that those honorable people do not speak the language. If I wanted to create my own stemming algorithm, how could I do it in Elasticsearch?
Thanks.
Elasticsearch is based on Lucene, so this answer is about how to add a custom stemmer to Lucene.
This is how I implemented Lucene's Analyzer interface based on a custom stemmer (or lemmatizer, to be more precise):
https://code.google.com/p/hunglish-webapp/source/browse/trunk/src/main/java/hu/mokk/hunglish/lucene/analysis/StemmerAnalyzer.java
See also these two classes:
https://code.google.com/p/hunglish-webapp/source/browse/trunk/src/main/java/hu/mokk/hunglish/lucene/analysis/CompoundStemmerTokenFilter.java
https://code.google.com/p/hunglish-webapp/source/browse/trunk/src/main/java/hu/mokk/hunglish/jmorph/LemmatizerWrapper.java
Note that this is for an older version of Lucene (3.2/3.3). The same implementation would probably be simpler for newer versions.
https://code.google.com/p/hunglish-webapp/source/browse/trunk/pom.xml

Lightweight fuzzy search library

Can you suggest a lightweight fuzzy text search library?
What I want to do is allow users to find the correct data for search terms with typos.
I could use a full-text search engine like Lucene, but I think that's overkill.
Edit:
To make question more clear here is a main scenario for that library:
I have a large list of strings. I want to be able to search in this list (something like MSVS's IntelliSense), but it should also be possible to filter the list by a string which is not present in it but is close enough to some string that is in the list.
Example:
Red
Green
Blue
When I type 'Gren' or 'Geen' in a text box, I want to see 'Green' in the result set.
Main language for indexed data will be English.
I think that Lucene is too heavy for that task.
Update:
I found one product matching my requirements. It's ShuffleText.
Do you know any alternatives?
Lucene is very scalable, which means it's good for small applications too. You can create an index in memory very quickly if that's all you need.
For fuzzy searching, you really need to decide what algorithm you'd like to use. With information retrieval, I use an n-gram technique with Lucene successfully. But that's a special indexing technique, not a "library" in itself.
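To make the n-gram technique for fuzzy matching concrete: instead of indexing whole words, you index character n-grams and rank candidates by overlap. A minimal Python sketch using the Dice coefficient over bigram sets (an illustrative scoring choice, not the only one):

```python
def ngrams(s, n=2):
    s = f" {s} "  # pad so prefixes/suffixes form grams too
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def ngram_similarity(a, b, n=2):
    """Dice coefficient over character bigram sets: one common way
    to rank fuzzy candidates when indexing n-grams."""
    ga, gb = ngrams(a.lower(), n), ngrams(b.lower(), n)
    return 2 * len(ga & gb) / (len(ga) + len(gb))

# The typo "Gren" still shares most of its bigrams with "Green":
print(ngram_similarity("Gren", "Green"))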
Without knowing more about your application, it won't be easy to recommend a suitable library. How much data are you searching? What format is the data? How often is the data updated?
I'm not sure how well Lucene is suited for fuzzy searching; a custom library might be a better choice. For example, this search is done in Java and works pretty fast, but it is custom-made for such a task:
http://www.softcorporation.com/products/people/
Soundex is very 'English' in its encoding. Daitch-Mokotoff works better for many names, especially European (Germanic) and Jewish names. In my UK-centric world, it's what I use.
Wiki here.
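For reference, classic (American) Soundex is simple enough to sketch in a few lines; Daitch-Mokotoff replaces the single-digit table below with a much larger, multi-code table, which is why it handles European names better. This is a simplified version that omits the full algorithm's h/w adjacency rule:

```python
def soundex(name):
    """Simplified American Soundex: first letter kept, remaining
    consonants mapped to digits, adjacent duplicates collapsed,
    padded/truncated to four characters. The h/w rule is omitted."""
    codes = {}
    for chars, digit in [("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                         ("l", "4"), ("mn", "5"), ("r", "6")]:
        for c in chars:
            codes[c] = digit
    name = name.lower()
    encoded = [codes.get(c, "") for c in name]
    result = name[0].upper()
    prev = encoded[0]
    for code in encoded[1:]:
        if code and code != prev:
            result += code
        prev = code
    return (result + "000")[:4]

print(soundex("Robert"))  # → R163
print(soundex("Green"))   # → G650
```

Note how "Robert" and "Rupert" encode identically (R163), which is the point of phonetic matching, and also its weakness for non-English names.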
You didn't specify your development platform, but if its PHP then suggest you look at the ZEND Lucene lubrary :
http://ifacethoughts.net/2008/02/07/zend-brings-lucene-to-php/
http://framework.zend.com/manual/en/zend.search.lucene.html
As it LAMP its far lighter than Lucene on Java, and can easily be extended for other filetypes, provided you can find a conversion library or cmd line converter - there are lots of OSS solutions around to do this.
Try Walnutil, which is based on the Lucene API and integrates with SQL Server and Oracle databases. You can create any type of index and then use it. For simple searches you can use some methods from Walnutilsoft; for more complicated cases you can use the Lucene API. See the web-based example that uses indexes created with the Walnutil tools. You can also see some code examples written in Java and C# that you can use to create different types of search.
This tool is free.
http://www.walnutilsoft.com/
If you can choose to use a database, I recommend using PostgreSQL and its fuzzy string matching functions.
If you can use Ruby, I suggest looking into the amatch library.
@aku: links to working Soundex libraries are right there at the bottom of the page.
As for Levenshtein distance, the Wikipedia article on that also has implementations listed at the bottom.
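Levenshtein distance is also small enough to implement directly if you don't want a library. A standard dynamic-programming version in Python, applied to the typos from the question:

```python
def levenshtein(a, b):
    """Edit distance (insertions, deletions, substitutions)
    computed row by row with dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Both typos from the question are one edit away from "Green":
print(levenshtein("Gren", "Green"))  # → 1
print(levenshtein("Geen", "Green"))  # → 1
```

A practical filter for the intellisense-style list is then: keep every candidate whose distance to the query is at most 1 or 2.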
A powerful, lightweight solution is Sphinx.
It's smaller than Lucene and it supports disambiguation.
It's written in C++; it's fast, battle-tested, has libraries for every environment, and it's used by large companies like craigslist.org.
