I have a text field in Solr that I would like to sort in a special manner:
title
-------
The Book
When Available
Since the words (when, on) are included in my stop words list, when I query and sort the field ascending I would like the titles to appear as:
When Available (first)
The Book (second)
So far I've tried it with various combinations of
<fieldType name="sortString" class="solr.TextField" sortMissingLast="true" omitNorms="true">
<analyzer type="index">
<tokenizer class="solr.KeywordTokenizerFactory"/>
<filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt" enablePositionIncrements="true"/>
.......
</analyzer>
</fieldType>
and so on, with no success.
Is it possible to achieve this?
I suspect that this will not work.
The stop filter removes tokens that match your stopwords, but the keyword tokenizer doesn't actually break the text into multiple tokens. Since the entire title is a single token, and that token isn't itself one of your stopwords, the filter does nothing.
Yet you can't use any other tokenizer for a sort field, because sorting requires each document to produce exactly one token.
So I see two options:
One, use PatternReplaceFilterFactory to apply regular expressions that match and remove your stopwords within the unbroken text value (see the sketch below).
Two, remove the stopwords in the code that prepares your documents for submission to Solr.
Both have significant disadvantages compared to the built-in Solr stopword filter. I personally use option two pretty heavily. Option one can get extremely difficult to manage with more than a few stopwords.
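Here is a minimal sketch of option one, assuming only a handful of stopwords and that a stopword only needs stripping from the start of the title (the fieldType name and the pattern are illustrative, not from the original post):

<fieldType name="sortStringNoStops" class="solr.TextField" sortMissingLast="true" omitNorms="true">
  <analyzer>
    <!-- keep the whole title as a single token so the field stays sortable -->
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- strip one leading stopword; extend the alternation for each stopword -->
    <filter class="solr.PatternReplaceFilterFactory"
            pattern="^(the|when|a|an)\s+"
            replacement=""
            replace="first"/>
    <filter class="solr.TrimFilterFactory"/>
  </analyzer>
</fieldType>

With this analyzer, "When Available" sorts under "available" and "The Book" under "book", which gives the order asked for above. Removing stopwords anywhere in the string needs a much broader pattern, which is exactly why this option becomes hard to manage.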
Related
Let's say I have two texts:
Text 1 - "The fox has been living in the wood cabin for days."
Text 2 - "The wooden hammer is a dangerous weapon."
And I would like to search for the word "wood" without it matching "wooden hammer". How would I do that in Elasticsearch or NEST?
The term query is used for exact-match searches. However, it's not recommended against text fields. The following quote is from the term query documentation:
To better search text fields, the match query also analyzes your provided search term before performing a search. This means the match query can search text fields for analyzed tokens rather than an exact term.
The term query does not analyze the search term. The term query only searches for the exact term you provide. This means the term query may return poor or no results when searching text fields.
The problem with exact matches on text fields, as described in the term query documentation:
By default, Elasticsearch changes the values of text fields as part of analysis. This can make finding exact matches for text field values difficult.
So the document's data is modified (i.e., analyzed) before indexing. How it's modified depends on the index mapping definition for each field, which defaults to the index's default analyzer, or to the standard analyzer.
But the default standard analyzer will not change the token "Wooden" to "Wood"; that could only happen if you used stemming on this field.
This means that, if you don't use stemming or a different analyzer, querying for "Wood" shouldn't match the "Wooden" token.
To summarize: indexed data is modified/analyzed before indexing (based on the field mapping definition), the match query analyzes the search query, and the term query doesn't. So you have to choose the field mapping and the search query that best suit your use case.
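To make the difference concrete, here is a minimal sketch (the index and field names are made up; the request shapes follow the Elasticsearch docs):

PUT /texts
{ "mappings": { "properties": { "body": { "type": "text" } } } }

POST /texts/_doc
{ "body": "The wooden hammer is a dangerous weapon." }

GET /texts/_search
{ "query": { "match": { "body": "Wood" } } }

GET /texts/_search
{ "query": { "term": { "body": "Wood" } } }

The match query analyzes "Wood" down to the token "wood" before searching; the term query looks up the literal string "Wood", which can never match here because the standard analyzer lowercased everything at index time. Neither query matches the indexed token "wooden", but only the match query would find a document containing the standalone word "wood".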
For some use cases, like storing email addresses, phone numbers, or other fields that always hold the same literal value, consider using the keyword type, which is suitable for exact matches. However, Elasticsearch recommends:
Avoid using keyword fields for full-text search. Use the text field type instead.
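Where exact matching is genuinely what you want, a keyword field is the usual pattern (a sketch with made-up names):

PUT /contacts
{ "mappings": { "properties": { "email": { "type": "keyword" } } } }

GET /contacts/_search
{ "query": { "term": { "email": "jane@example.com" } } }

keyword values are indexed verbatim, so the term query matches the stored value exactly.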
So, for a practical solution to your use case, it would help to elaborate on the field mapping you use and on what you want to achieve.
I can't understand the difference between setting the search_as_you_type datatype on a field, setting an edge n-gram tokenizer in an analyzer, and adding an index_prefixes parameter. It seems to me that they all do the same job.
https://www.elastic.co/guide/en/elasticsearch/reference/current/search-as-you-type.html
https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-edgengram-tokenizer.html
https://www.elastic.co/guide/en/elasticsearch/reference/current/index-prefixes.html
edge_ngram is a tokenizer, which means it kicks in at indexing time to tokenize your input data. There is also an edge_ngram token filter. Both are similar but work at different levels of the analysis chain.
search_as_you_type is a field type which contains a few sub-fields, one of which is called _index_prefix and which leverages the edge_ngram tokenizer.
So basically, what you see in the edge_ngram tokenizer documentation has actually been leveraged when they decided to add the new search_as_you_type field type.
Rafiqul is correct that search_as_you_type is built using edge_ngram, but it also incorporates the concept of shingles. Shingles are groups of adjacent words, which allows search_as_you_type to handle multi-word queries much better.
Note that search_as_you_type requires the words to be in the order they were entered, which makes it especially suited to known entities like movie titles rather than free-form documents.
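A minimal sketch of how the field type is used, following the search_as_you_type docs linked above (the index and field names are made up):

PUT /movies
{ "mappings": { "properties": { "title": { "type": "search_as_you_type" } } } }

GET /movies/_search
{
  "query": {
    "multi_match": {
      "query": "when avail",
      "type": "bool_prefix",
      "fields": [ "title", "title._2gram", "title._3gram" ]
    }
  }
}

The ._2gram and ._3gram sub-fields are the shingle sub-fields described above, and the bool_prefix type matches the final partial term ("avail") as a prefix.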
I apparently misunderstood how nGram works with Elasticsearch. I wanted to be able to efficiently search for a substring. That way I could type 'loud' and still find words like 'clouds'. I have my nGram tokenizer set up to have min=2 and max=10.
Apparently, nGram also splits up the search term ('loud') into 'lo', 'ou', 'ud', 'lou', 'oud' and 'loud'. In some cases this is nice, because a search for 'cloud' will then find 'louder'. Generally, though, I think it just confuses my users.
Is there a way to prevent Elasticsearch from splitting up the search term? I tried using quotes in the querystring but that doesn't seem to work.
You should specify two separate analyzers, one for indexing and one for search, in your mapping (the analyzer and search_analyzer mapping parameters in current Elasticsearch; index_analyzer existed in old versions). The index analyzer is the same as the search analyzer, but with the nGram filter added.
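A sketch of that mapping with the poster's min=2/max=10 grams (the index, analyzer, and field names are made up; recent Elasticsearch also requires raising index.max_ngram_diff for such a wide gram range):

PUT /docs
{
  "settings": {
    "index": { "max_ngram_diff": 9 },
    "analysis": {
      "filter": {
        "grams_2_10": { "type": "ngram", "min_gram": 2, "max_gram": 10 }
      },
      "analyzer": {
        "index_with_ngrams": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [ "lowercase", "grams_2_10" ]
        },
        "search_plain": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [ "lowercase" ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "body": {
        "type": "text",
        "analyzer": "index_with_ngrams",
        "search_analyzer": "search_plain"
      }
    }
  }
}

With this, "clouds" is indexed as all of its 2-10 character grams (including "loud"), but a search for "loud" stays a single token, so it matches as a substring without being shredded into "lo", "ou", and so on.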
I want to treat the field of one of the indexed items as one big string even though it might have whitespace. I know how to do this by setting a non-custom field to be 'not-analyzed', but what tokenizer can you use via a custom analyzer?
The only tokenizers I see on elasticsearch.org are:
Edge NGram
Keyword
Letter
Lowercase
NGram
Standard
Whitespace
Pattern
UAX URL Email
Path Hierarchy
None of these do what I want.
The Keyword tokenizer is what you are looking for.
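For example, a custom analyzer built on the keyword tokenizer keeps the whole value as one token while still letting you attach filters such as lowercasing (a sketch with made-up names):

PUT /items
{
  "settings": {
    "analysis": {
      "analyzer": {
        "keyword_lowercase": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": [ "lowercase" ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "name": { "type": "text", "analyzer": "keyword_lowercase" }
    }
  }
}

This is the custom-analyzer equivalent of a not-analyzed field, except the token filters still run on the single token.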
To be precise, the Keyword tokenizer doesn't really do any tokenizing: when searching, it emits the entire query string as a single token, making text queries behave like a term query.
The issue I ran into is that I wanted to add filters and then search for indexed keywords in a long text (keyword assignment). I'd say there's no tokenizer that can do this, and a normalizer can't accept the necessary filters. The workaround for me was to prepare the text before feeding it to Elasticsearch.
It is quite simple to describe:
q=mydynamicfield_txt:"video"
I want hits only when mydynamicfield is exactly "video".
The other way round: how do I suppress hits where "video" is only part of the field (like "home video")?
Is this supported in Solr 3.1 out of the box, or do I have to add my own special markers like "SOLRSTARTSOLR video SOLRENDSOLR" to my index, so that I can later retrieve my term between "START" and "END"? A kind of manual regex anchoring.
This is a PITA because it needs special handling in the index/GUI and breaks highlighting.
What is the way to go?
regards
Peter
(=PA=)
One solution is to create an untokenized (keyword-analyzed) field and search within it; your whole text will then be a single token in the Solr index.
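A sketch of that in schema.xml, reusing the field name from the question (the fieldType name and the copyField wiring are illustrative):

<fieldType name="exactString" class="solr.TextField" omitNorms="true">
  <analyzer>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

<dynamicField name="*_exact" type="exactString" indexed="true" stored="false"/>
<copyField source="mydynamicfield_txt" dest="mydynamicfield_exact"/>

Querying q=mydynamicfield_exact:"video" then matches only documents whose whole field value is "video" (case-insensitively); "home video" no longer matches.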
The other solution is to write a filter that reads the token count from the index and compares it to the number of query tokens, i.e., filter out entities where doc_tokens > query_tokens, assuming all query tokens matched.