Search with hyphen, without and with a space - elasticsearch

How can I tokenize a hyphenated term such that I can search using the following acceptance criteria:
with a hyphen (co-trimoxazole)
without a hyphen (cotrimoxazole)
with a space (co trimoxazole)
I managed to use the standard analyzer, which tokenizes on hyphens on both the index side and the query side, and that allows me to search on:
cotrimoxazole
co-trimoxazole
but not
co trimoxazole

I would suggest using a combination of analyzers.
Create two analyzers: one that tokenizes with the standard analyzer and another that tokenizes on whitespace, and index the field with both.
This should work fine for you.
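A minimal sketch of that idea, using hypothetical index and field names (meds, drug_name): index the field as a multi-field with both analyzers and query both sub-fields. Depending on your data you may still need extra token filters to cover every variant; this only covers the tokenization part.

PUT /meds
{
  "mappings": {
    "properties": {
      "drug_name": {
        "type": "text",
        "analyzer": "standard",
        "fields": {
          "ws": { "type": "text", "analyzer": "whitespace" }
        }
      }
    }
  }
}

GET /meds/_search
{
  "query": {
    "multi_match": {
      "query": "co trimoxazole",
      "fields": ["drug_name", "drug_name.ws"]
    }
  }
}

Here drug_name (standard) splits "co-trimoxazole" into co and trimoxazole, which covers the hyphenated and spaced query forms, while drug_name.ws keeps a hyphenated value as a single whitespace-delimited token; note that the whitespace analyzer does not lowercase.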

Related

How to search for an exact word in a text in Elasticsearch

Let's say I have two texts:
Text 1 - "The fox has been living in the wood cabin for days."
Text 2 - "The wooden hammer is a dangerous weapon."
And I would like to search for the word "wood" without it matching "wooden hammer". How would I do that in Elasticsearch or NEST?
The term query is used for exact-match searches. However, it is not recommended against text fields; the following quote is from the term query documentation:
To better search text fields, the match query also analyzes your provided search term before performing a search. This means the match query can search text fields for analyzed tokens rather than an exact term.
The term query does not analyze the search term. The term query only searches for the exact term you provide. This means the term query may return poor or no results when searching text fields.
The problem with exact matches on text fields, as described in the term query documentation:
By default, Elasticsearch changes the values of text fields as part of analysis. This can make finding exact matches for text field values difficult.
So the document data is modified (i.e., analyzed) before indexing. How each field is analyzed depends on the index mapping definition for that field, falling back to the index's default analyzer or, ultimately, the standard analyzer.
But the default standard analyzer will not change the token "Wooden" to "Wood"; that would only happen if you used stemming for this field.
This means that, unless you use a different analyzer or stemming, querying for "Wood" shouldn't match the "Wooden" token.
To summarize: indexed data is modified/analyzed before indexing (based on the field mapping definition). The match query analyzes the search query, while the term query does not. So you have to choose the field mapping and the search query that best suit your use case.
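To illustrate the difference, here is a sketch against an assumed index articles with a text field body (both names are made up), holding the two example sentences:

// match analyzes "Wood" into the token "wood", which matches "wood cabin" but not "wooden"
GET /articles/_search
{
  "query": { "match": { "body": "Wood" } }
}

// term is not analyzed: it only matches the literal indexed token, so lowercase "wood" matches,
// while "Wood" (capitalized) would match nothing in this analyzed text field
GET /articles/_search
{
  "query": { "term": { "body": "wood" } }
}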
For some use cases, like storing email addresses, phone numbers, or fields that always hold the same literal value, consider using the keyword type, which is suitable for exact matches. However, ES recommends:
Avoid using keyword fields for full-text search. Use the text field type instead.
So, for better visibility and a practical solution for your use case, it would help to elaborate on the field mapping you use and what you want to achieve.
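As a rough starting point (field and index names are illustrative), a common pattern is a text field for full-text search with a keyword sub-field for exact matching:

PUT /articles
{
  "mappings": {
    "properties": {
      "body": {
        "type": "text",
        "fields": {
          "raw": { "type": "keyword" }
        }
      }
    }
  }
}

A match query on body then works on analyzed tokens, while a term query on body.raw only matches the complete, unmodified value of the field.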

Searching for error codes with regex pattern in Kibana

I am trying to search my logs in the Kibana UI search bar for error codes that consist of:
a fixed 3-character string
a minus sign
a 5-digit number
e.g. TED-12345. The error codes can be located anywhere inside the message field.
I tried the following regex: message: /.*TED-[0-9]{5}.*/ but it did not return the expected results. I probably mixed up query syntax and "search bar syntax". Can anybody suggest the correct regex?
Please make sure you have the Lucene syntax enabled for your queries, because Kibana Query Language does not support regular expressions.
From docs: https://www.elastic.co/guide/en/kibana/master/kuery-query.html
KQL has a different set of features than the Lucene query syntax. KQL is able to query nested fields and scripted fields. KQL does not support regular expressions or searching with fuzzy terms. To use the legacy Lucene syntax, click KQL next to the Search field, and then turn off KQL.
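With the legacy Lucene syntax enabled, an expression like message:/.*TED-[0-9]{5}.*/ is at least parsed as a regular expression (KQL would treat it literally). The corresponding Query DSL request would look roughly like this; the index name my-logs and the message.keyword sub-field are assumptions about your mapping:

GET /my-logs/_search
{
  "query": {
    "regexp": {
      "message.keyword": ".*TED-[0-9]{5}.*"
    }
  }
}

Elasticsearch regexp patterns are anchored to the entire value they run against, hence the leading and trailing .*; against the analyzed message field the pattern is matched per (lowercased) token, so querying the keyword sub-field is usually more predictable for codes like TED-12345.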

Elasticsearch giving strange results

I am following this tutorial on Elasticsearch.
Two employees have 'about' value as:
"about": "I love to go rock climbing"
"about": "I like to collect rock albums"
I run following query:
GET /megacorp/employee/_search
{
  "query": {
    "match": {
      "about": "rock coll"
    }
  }
}
Both of the above entries are returned, but surprisingly with the same score:
"_score": 0.2876821
Shouldn't the second one have a higher score, since its 'about' value contains both 'rock' and 'coll', while the first one only contains 'rock'?
That depends entirely on which analyzer you are using. If you are using the standard or english analyzer, this result is correct. I recommend spending some time with Elasticsearch's Analyze API to get familiar with how each analyzer affects your text.
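For example, you can see exactly which tokens the standard analyzer produces for the second document:

GET /_analyze
{
  "analyzer": "standard",
  "text": "I like to collect rock albums"
}

The output tokens are i, like, to, collect, rock, albums; there is no coll token, so the query term coll contributes nothing to that document's score.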
By the way, if you want the second document to have a higher score, take a look at partial matching.
When we search on a full-text field, we need to pass the query string through the same analysis process as we have when we index a document, to ensure that we are searching for terms in the same form as those that exist in the index.
Analysis process usually consists of normalization and tokenization (the string is tokenized into individual terms by a tokenizer).
As for the match query:
If you run a match query against a full-text field, it will analyze the query string using the correct analyzer for that field before executing the search. It then looks for the words that were specified.
So, in your match query, Elasticsearch will look for occurrences of the whole separate words: rock and/or coll.
Your second document doesn't contain the separate word coll, but it was matched by the word rock.
Conclusion: the two documents are equivalent in their _score value (they were each matched by the same single word, rock).
Elasticsearch analyzes each text field before storing it. The default analyzer (the standard analyzer) splits the text on whitespace and lowercases it. The output of the analysis process is a list of tokens, which are used to match your query tokens. If any of the tokens match exactly, the relevant document is returned. That being said, your second document doesn't contain the token coll, and that's why you get the same score for both documents.
Even if you build a custom analyzer and use stemming, the word collect won't be stemmed to coll.
You can, however, build a custom analyzer that produces tokens as short as a single character; Elasticsearch will then treat each single character as a token, and you can search for the existence of any character in your documents.
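A rough sketch of that idea (analyzer and tokenizer names are made up): in practice the more useful form is an edge_ngram tokenizer that emits every prefix from one character up, so that a query like coll matches collect:

PUT /megacorp
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "prefix_tokenizer": {
          "type": "edge_ngram",
          "min_gram": 1,
          "max_gram": 10,
          "token_chars": ["letter", "digit"]
        }
      },
      "analyzer": {
        "prefix_analyzer": {
          "type": "custom",
          "tokenizer": "prefix_tokenizer",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "about": {
        "type": "text",
        "analyzer": "prefix_analyzer",
        "search_analyzer": "standard"
      }
    }
  }
}

The search_analyzer stays standard so the query coll remains a single token and matches the indexed prefix coll of collect; with this mapping the second document would typically score higher for "rock coll".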

Practical usage of keyword analyzer

What is the scenario in which you would need to map a field with the keyword analyzer, compared to marking it as not_analyzed with doc values turned on for the field?
From the Elasticsearch documentation, it seems that if a field is not analyzed, then it is better to turn doc_values on for that field.
The Elasticsearch documentation also states specifically: "Note, when using mapping definitions, it might make more sense to simply mark the field as not_analyzed".
I am a bit confused as to why the keyword analyzer would ever be used.
According to a core committer, both are equivalent.
That wouldn't be the case for the keyword tokenizer, though, which can be combined with other filters (lowercase, etc.) and can thus participate in many different ways of tokenizing your input.
I had exactly this use case for the keyword analyzer: analyzers allow you to add character filters in addition to the usual token filters, and that's where they are needed.
I had a field containing numeric values, including whitespace. The numbers could be Persian, e.g. ۱۲۳۴۵۶۷۸۹, or English, so I needed a char filter to normalize them to English digits without any other tokenizing process. When I marked the field as not_analyzed, I was unable to use my char filter.
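A sketch of that kind of analyzer (index, field, and analyzer names are illustrative): a mapping char filter rewrites Persian digits to ASCII digits, and the keyword tokenizer keeps the whole value as a single token:

PUT /phone-numbers
{
  "settings": {
    "analysis": {
      "char_filter": {
        "persian_digits": {
          "type": "mapping",
          "mappings": [
            "۰=>0", "۱=>1", "۲=>2", "۳=>3", "۴=>4",
            "۵=>5", "۶=>6", "۷=>7", "۸=>8", "۹=>9"
          ]
        }
      },
      "analyzer": {
        "digit_normalizer": {
          "type": "custom",
          "char_filter": ["persian_digits"],
          "tokenizer": "keyword"
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "number": { "type": "text", "analyzer": "digit_normalizer" }
    }
  }
}

With this, ۱۲۳۴۵ and 12345 both index (and search) as the single token 12345.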

Is it possible to set a custom analyzer to not tokenize in elasticsearch?

I want to treat the field of one of the indexed items as one big string, even though it might contain whitespace. I know how to do this by setting a non-custom field to be not_analyzed, but what tokenizer can you use via a custom analyzer?
The only tokenizer items I see on elasticsearch.org are:
Edge NGram
Keyword
Letter
Lowercase
NGram
Standard
Whitespace
Pattern
UAX URL Email
Path Hierarchy
None of these do what I want.
The Keyword tokenizer is what you are looking for.
The keyword tokenizer doesn't really split anything; it emits the entire input as a single token:
When searching, it'll tokenize the entire query string into a single token, making text queries behave like a term query.
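A minimal sketch (index, analyzer, and field names are made up) of a custom analyzer built on the keyword tokenizer, which keeps the whole value as one token while still allowing token filters such as lowercase:

PUT /my-index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "whole_value": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "title": { "type": "text", "analyzer": "whole_value" }
    }
  }
}

With this mapping, "New York City" is indexed as the single token new york city, whitespace included, which mirrors not_analyzed except that the value is also lowercased.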
The issue I ran into is that I want to add filters and then search for indexed keywords in a long text (keyword assignment). I would say there is no tokenizer that can do this, and a normalizer can't accept the necessary filters. The workaround for me is to prepare the text before feeding it to Elasticsearch.
