Practical usage of keyword analyzer - elasticsearch

In what scenario would you need to map a field with the keyword analyzer, as opposed to marking it as not_analyzed with doc values turned on for the field?
From the Elasticsearch documentation, it seems that if a field is not analyzed, it is better to turn doc_values on for that field.
The Elasticsearch documentation also states specifically: "Note, when using mapping definitions, it might make more sense to simply mark the field as not_analyzed."
I am a bit confused as to why the keyword analyzer would ever be used.

According to a core committer, both are equivalent.
That wouldn't be the case for the keyword tokenizer, though, which can be combined with other token filters (lowercase, etc.) and thus participate in many different ways of tokenizing your input.
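As an illustration of that, here is a minimal sketch of a custom analyzer combining the keyword tokenizer with the lowercase filter; the index, analyzer, and field names are made up, and the syntax is the current typeless mapping API rather than the old not_analyzed-era one:

PUT my-index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "lowercase_keyword": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "product_code": {
        "type": "text",
        "analyzer": "lowercase_keyword"
      }
    }
  }
}

With this mapping, a value like "ABC 123" is indexed as the single token "abc 123", which neither not_analyzed nor the stock keyword analyzer would give you.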

I had an exact use case for the keyword analyzer: analyzers let you add character filters in addition to ordinary token filters, and that's where they are useful.
I had a field containing numeric values, including whitespace. The digits could be Persian (for example ۱۲۳۴۵۶۷۸۹) or English, so I needed a character filter to normalize them to English digits without any other tokenizing. When I marked the field not_analyzed, I was unable to use my char filter.
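Along the lines of what I did (all names here are hypothetical), a mapping char filter can rewrite the Persian digits while the keyword tokenizer keeps the value as one token:

PUT numbers-index
{
  "settings": {
    "analysis": {
      "char_filter": {
        "persian_to_english_digits": {
          "type": "mapping",
          "mappings": [
            "۰=>0", "۱=>1", "۲=>2", "۳=>3", "۴=>4",
            "۵=>5", "۶=>6", "۷=>7", "۸=>8", "۹=>9"
          ]
        }
      },
      "analyzer": {
        "normalized_number": {
          "type": "custom",
          "char_filter": ["persian_to_english_digits"],
          "tokenizer": "keyword"
        }
      }
    }
  }
}

The char filter rewrites the digits before tokenization, so "۱۲۳ ۴۵" is indexed as the single token "123 45".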

Related

How to value exact match higher than term frequency in elasticsearch?

I have an index that has several title fields: main_title, sub_titles, preferred_titles, etc.
These text fields also each have a suggest field, where I run a custom analyzer that uses the edge n-gram tokenizer so we can search as we type.
I would like to value exact match over term frequency. And exact match in main_title is worth more than exact match in preferred_titles.
Anyone know how I can achieve this? Thanks in advance.
I have tried a bool query with a multi_match query in the must clause. The multi_match is cross_fields with no fields specified and the operator 'and'.
I have both the text fields and the suggest fields in the should clause. Each text field is in a match query with a boost and the operator 'and'. Each suggest field is in a match_phrase query with a boost. The issue is that the various boosts are added on top of the scores and I end up with very inflated scores.
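For reference, here is a minimal sketch of the query shape described above; the field names, sub-field names, boost values, and search text are all hypothetical:

GET titles/_search
{
  "query": {
    "bool": {
      "must": {
        "multi_match": {
          "query": "the hobbit",
          "type": "cross_fields",
          "operator": "and"
        }
      },
      "should": [
        { "match": { "main_title": { "query": "the hobbit", "operator": "and", "boost": 3 } } },
        { "match": { "preferred_titles": { "query": "the hobbit", "operator": "and", "boost": 1 } } },
        { "match_phrase": { "main_title.suggest": { "query": "the hobbit", "boost": 2 } } }
      ]
    }
  }
}

Every matching should clause adds its boosted score to the bool query's total, which is where the inflation described above comes from.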

How to search for an exact word in a text in Elasticsearch

Let's say I have two texts:
Text 1 - "The fox has been living in the wood cabin for days."
Text 2 - "The wooden hammer is a dangerous weapon."
And I would like to search for the word "wood" without it matching "wooden hammer". How would I do that in Elasticsearch or NEST?
The term query is used for exact-match searches. However, it's not recommended for use against text fields; the following quote is from the term query documentation:
To better search text fields, the match query also analyzes your provided search term before performing a search. This means the match query can search text fields for analyzed tokens rather than an exact term. The term query does not analyze the search term. The term query only searches for the exact term you provide. This means the term query may return poor or no results when searching text fields.
The problem with exact matches on text, as described in the term query documentation:
By default, Elasticsearch changes the values of text fields as part of analysis. This can make finding exact matches for text field values difficult.
So the document's data is modified (i.e., analyzed) before indexing. How it is analyzed depends on the index mapping definition for each field, falling back to the index's default analyzer or, absent that, the standard analyzer.
But the default standard analyzer will not change the token "Wooden" to "Wood"; that would only happen if you used stemming on this field.
This means that if you don't use a different analyzer or stemming, querying with "Wood" shouldn't match the "Wooden" token.
To summarize: indexed data is modified/analyzed before indexing (based on the field mapping definition). The match query analyzes the search query, while the term query doesn't. So you have to choose the field mapping and the search query that best suit your use case.
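To make the difference concrete, here is a sketch assuming Text 1 was indexed into a text field called body (a hypothetical name) with the standard analyzer:

GET my-index/_search
{
  "query": {
    "match": { "body": "Wood" }
  }
}

GET my-index/_search
{
  "query": {
    "term": { "body": "Wood" }
  }
}

The match query analyzes "Wood" into the lowercase token "wood" and finds Text 1 (but not Text 2, since without a stemmer "wooden" is a separate token). The term query searches for the literal "Wood", which was never indexed, and finds nothing.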
For some use cases, like storing email addresses, phone numbers, or other identifier-like fields, consider using the keyword type, which is suitable for exact matches. However, the Elasticsearch docs recommend:
Avoid using keyword fields for full-text search. Use the text field type instead.
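A minimal sketch of that keyword use case (the index and field names are made up):

PUT users
{
  "mappings": {
    "properties": {
      "email": { "type": "keyword" }
    }
  }
}

GET users/_search
{
  "query": {
    "term": { "email": "jane.doe@example.com" }
  }
}

keyword fields are not analyzed, so the term query compares against the exact stored value.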
So for a more concrete, practical answer for your use case, it would help to elaborate on the field mapping you use and what you want to achieve.

What's the difference between Search-as-you-type datatype and Edge NGram Tokenizer?

I can't understand the difference between setting the search_as_you_type datatype on a field, setting an edge n-gram tokenizer in an analyzer, and adding an index_prefixes parameter. It seems to me that they do the same job after all.
https://www.elastic.co/guide/en/elasticsearch/reference/current/search-as-you-type.html
https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-edgengram-tokenizer.html
https://www.elastic.co/guide/en/elasticsearch/reference/current/index-prefixes.html
edge_ngram is a tokenizer, which means it kicks in at indexing time to tokenize your input data. There is also an edge_ngram token filter. Both are similar but work at different levels of the analysis chain.
search_as_you_type is a field type which contains a few sub-fields, one of which is called _index_prefix and which leverages the edge_ngram tokenizer.
So basically, what you see in the edge_ngram tokenizer documentation has actually been leveraged when they decided to add the new search_as_you_type field type.
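For comparison, a minimal sketch of the prebuilt field type (the index and field names are made up):

PUT movies
{
  "mappings": {
    "properties": {
      "title": { "type": "search_as_you_type" }
    }
  }
}

This one declaration creates the title._2gram, title._3gram, and title._index_prefix sub-fields for you, instead of you wiring up edge_ngram analyzers by hand.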
Rafiqul is correct that search_as_you_type is built using edge_ngram, but it also incorporates the concept of shingles. Shingles are sets of adjacent words, which allows search_as_you_type to better handle multi-word queries.
Note that search_as_you_type expects the words to appear in the order entered, which makes it especially well suited to known entities like movie titles rather than free-form documents.
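The matching query side, per the search_as_you_type documentation, is a multi_match of type bool_prefix over the root field and its shingle sub-fields; the index and field names follow the sketch above, and the search text is hypothetical:

GET movies/_search
{
  "query": {
    "multi_match": {
      "query": "star wa",
      "type": "bool_prefix",
      "fields": ["title", "title._2gram", "title._3gram"]
    }
  }
}

The final partial term ("wa") is matched as a prefix, while documents whose shingles contain the terms in the entered order score higher.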

How to query all fields individually with ElasticSearch

As I understand it, ElasticSearch searches on the magic _all field by default. The problem with this seems to be that if a field uses a different index analyzer, the analyzed data from this field is not searched.
I've had success with searching on the fields ['domain', '_all'] but I really need to avoid having to manually specify each field which was analyzed differently. I see fields supports wildcards but seemingly not '*' on its own. I could do a*, b*, c*, d*, etc., but this seems a tad inefficient.
The special _all field is discontinued, and the copy_to feature can be used instead, as per the official documentation. This approach lets you create a computed field (managed by Elasticsearch) that you can tell other fields to copy their data into, mimicking an _all search.
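A minimal copy_to sketch (all field names are hypothetical):

PUT my-index
{
  "mappings": {
    "properties": {
      "domain": { "type": "text", "copy_to": "everything" },
      "body": { "type": "text", "copy_to": "everything" },
      "everything": { "type": "text" }
    }
  }
}

Searching the everything field then behaves like the old _all search. Note, though, that everything is analyzed with its own single analyzer, so the per-field analyzers still don't apply to it.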
However, there is an alternative approach: use multi_match with wildcard field names as part of the query. This works just like the earlier mechanism of searching the _all field.
{"multi_match":{"query":"java","fields":["*"]}}]}}

Is it possible to set a custom analyzer to not tokenize in elasticsearch?

I want to treat the field of one of the indexed items as one big string even though it might have whitespace. I know how to do this by setting a non-custom field to be not_analyzed, but what tokenizer can you use via a custom analyzer?
The only tokenizer items I see on elasticsearch.org are:
Edge NGram
Keyword
Letter
Lowercase
NGram
Standard
Whitespace
Pattern
UAX URL Email
Path Hierarchy
None of these do what I want.
The Keyword tokenizer is what you are looking for.
The keyword tokenizer doesn't really tokenize at all: it emits the entire input as a single token. When searching, it'll likewise turn the entire query string into a single token, making text queries behave like a term query.
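You can verify this with the _analyze API:

POST _analyze
{
  "tokenizer": "keyword",
  "text": "The quick brown fox"
}

This returns exactly one token, "The quick brown fox", rather than four.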
The issue I ran into is that I want to add filters and then search for indexed keywords within a long text (keyword assignment). I would say there's no tokenizer that can do this, and a normalizer can't accept the necessary filters. The workaround for me is to prepare the text before feeding it to Elasticsearch.
