How to add a full phrase tokenizer in NEST for Elasticsearch?

When I create a search using facets, I want the facet results to be on the whole phrase, not the individual words, and I want it NOT to be case sensitive, as 'not_analyzed' would be.
For example, if I have a music JSON object and want to organize the facet results by genre, I want to see each genre as the whole genre term ('rhythm and blues'), not one facet for 'rhythm' and one for 'blues', and I want a search for 'rhythm and blues' to match 'Rhythm and Blues' (note the case).
It seems the Elasticsearch documentation suggests using a custom analyzer composed of a tokenizer and a lowercase filter.
Here's the Elasticsearch suggestion I mentioned (mid-page):
http://www.elasticsearch.org/blog/starts-with-phrase-matching/
I want to be able to say something like this in my POCO (pseudo code):
[ElasticProperty(Analyzer = "tokenizer, lowercase")]
public string Genre { get; set; }

Use the multi field type in your mapping. Doing so will allow you to index the Genre field in two ways: analyzed (using the standard or a lowercase analyzer) for conducting searches, and not_analyzed for faceting.
For more advanced mappings like this, the attribute based mapping in NEST won't cut it. You'll have to use the fluent API, for example:
client.CreateIndex("songs", c => c
    .AddMapping<Song>(m => m
        .MapFromAttributes()
        .Properties(props => props
            .MultiField(mf => mf
                .Name(s => s.Genre)
                .Fields(f => f
                    .String(s => s.Name(o => o.Genre).Analyzer("standard"))
                    .String(s => s.Name(o => o.Genre.Suffix("raw")).Index(FieldIndexOption.not_analyzed)))))));
Hope this helps!
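To then facet on the verbatim phrase, point the facet at the not_analyzed "genre.raw" sub-field. A minimal sketch, assuming the Song POCO and the multi_field mapping above (facet names and sizes are illustrative):

```csharp
// Facet on the not_analyzed "genre.raw" sub-field so each whole phrase
// ("Rhythm and Blues") comes back as a single facet entry, while
// full-text searches still run against the analyzed "genre" field.
var result = client.Search<Song>(s => s
    .Size(0)
    .FacetTerm(ft => ft
        .OnField(f => f.Genre.Suffix("raw"))
        .Size(20)));

// Retrieve the facet with the same field selector used above.
var genreFacet = result.Facet<TermFacet>(f => f.Genre.Suffix("raw"));
```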

Related

Creating an Index with English Analyzer using Nest

I am using NEST to create my Elasticsearch index. I have two questions:
Question 1: How can I add the settings to use the english analyzer with a fallback to the standard analyzer?
This is how I am creating my Index:
Uri _node = new Uri("elasticUri");
ConnectionSettings _connectionSettings = new ConnectionSettings(_node)
    .DefaultIndex("MyIndexName")
    .DefaultMappingFor<POCO>(m => m
        .IndexName("MyIndexName")
    );
IElasticClient _elasticClient = new ElasticClient(_connectionSettings);
var createIndexResponse = _elasticClient.CreateIndex("MyIndexName", c => c
    .Mappings(m => m
        .Map<POCO>(d => d.AutoMap())
    )
);
Looking at the examples here, I am also not sure what I should pass for "english_keywords", "english_stemmer", etc.
Question 2: If I use English Analyzer, will Elasticsearch automatically realize that the terms: "Barbecue" and "BBQ" are synonyms? Or do I need to explicitly pass a list of Synonyms to ES?
Take a look at the NEST documentation for configuring a built-in analyzer for an index.
The documentation for the english analyzer simply demonstrates how you could reimplement the english analyzer yourself, as a custom analyzer built from the built-in analysis components, if you need to customize any part of the analysis. If you don't need to do that, simply use english as the analyzer name for the field:
client.CreateIndex("my_index", c => c
    .Mappings(m => m
        .Map<POCO>(mm => mm
            .AutoMap()
            .Properties(p => p
                .Text(t => t
                    .Name(n => n.MyProperty)
                    .Analyzer("english")
                )
            )
        )
    )
);
This will use the built-in english analyzer for the MyProperty field on POCO.
The english analyzer will not perform automatic synonym expansion for you; you'll need to configure the synonyms that are relevant to your search problem. You have two choices with regard to synonyms:
Perform synonym expansion at index time on the index input. This will result in faster search at the expense of being a relatively fixed approach.
Perform synonym expansion at query time on the query input. This will result in slower search, but affords the flexibility to more easily add new synonym mappings as and when you need to.
You can always take the approach of using both, that is, indexing the synonyms that you expect to be relevant to your search use case, and adding new synonyms at query time, as you discover them to be relevant to your use case.
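A sketch of the index-time approach, defining a synonym token filter and wiring it into a custom analyzer (the filter name, analyzer name, synonym list, and the MyProperty field are all illustrative):

```csharp
var createIndexResponse = _elasticClient.CreateIndex("MyIndexName", c => c
    .Settings(st => st
        .Analysis(an => an
            .TokenFilters(tf => tf
                // Illustrative synonym list; supply your own pairs.
                .Synonym("my_synonyms", sy => sy
                    .Synonyms("bbq, barbecue")))
            .Analyzers(a => a
                // Custom analyzer: standard tokenizer + lowercase + synonyms.
                .Custom("synonym_analyzer", ca => ca
                    .Tokenizer("standard")
                    .Filters("lowercase", "my_synonyms")))))
    .Mappings(m => m
        .Map<POCO>(mm => mm
            .AutoMap()
            .Properties(p => p
                .Text(t => t
                    .Name(n => n.MyProperty)
                    .Analyzer("synonym_analyzer"))))));
```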

Simple query without a specified field searching in whole ElasticSearch index

Say we have an ElasticSearch instance and one index. I now want to search the whole index for documents that contain a specific value. It's relevant to the search for this query over multiple fields, so I don't want to specify every field to search in.
My attempt so far (using NEST) is the following:
var res2 = client.Search<ElasticCompanyModelDTO>(s => s
    .Index("cvr-permanent")
    .AllTypes()
    .Query(q => q
        .Bool(bo => bo
            .Must(sh => sh
                .Term(c => c.Value(query))
            )
        )
    ));
However, the query above results in an empty query:
I get the following output, ### ES REQUEST ### {}, after applying the following debugging settings to my connection string:
.DisableDirectStreaming()
.OnRequestCompleted(details =>
{
Debug.WriteLine("### ES REQUEST ###");
if (details.RequestBodyInBytes != null) Debug.WriteLine(Encoding.UTF8.GetString(details.RequestBodyInBytes));
})
.PrettyJson();
How do I do this? Why is my query wrong?
Your problem is that you must specify a single field to search as part of a TermQuery. In fact, all ElasticSearch queries require a field or fields to be specified as part of the query. If you want to search every field in your document, you can use the built-in "_all" field (unless you've disabled it in your mapping.)
You should be sure you really want a TermQuery, too, since that will only match exact strings in the text. This type of query is typically used when querying short, unanalyzed string fields (for example, a field containing an enumeration of known values like US state abbreviations.)
If you'd like to query longer full-text fields, consider the MultiMatchQuery (it lets you specify multiple fields, too.)
https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-multi-match-query.html
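A sketch of what a multi_match query could look like in NEST (the Name and Description fields are placeholders; substitute the full-text fields from ElasticCompanyModelDTO):

```csharp
var res = client.Search<ElasticCompanyModelDTO>(s => s
    .Index("cvr-permanent")
    .Query(q => q
        .MultiMatch(mm => mm
            // Placeholder fields; list the analyzed text fields
            // you want the query to run against.
            .Fields(f => f
                .Field(p => p.Name)
                .Field(p => p.Description))
            .Query(query))));
```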
Try this
var res2 = client.Search<ElasticCompanyModelDTO>(s => s
    .Index("cvr-permanent")
    .AllTypes()
    .Query(qry => qry
        .Bool(b => b
            .Must(m => m
                .QueryString(qs => qs
                    .DefaultField("_all")
                    .Query(query))))));
The existing answers rely on the presence of _all. In case anyone comes across this question at a later date, it is worth knowing that _all was removed in ElasticSearch 6.0
There's a really good video from ElasticOn explaining the reasons behind this and how the replacements work, starting at around 07:30.
In short, the _all query can be replaced by a simple_query_string, and it will work the same way. The form for the _search API would be:
GET <index>/_search
{
  "query": {
    "simple_query_string": {
      "query": "<queryTerm>"
    }
  }
}
The NEST pages on Elastic's documentation for this query are here;
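For reference, a sketch of the same query through NEST (6.x-era syntax; leaving the fields unset lets Elasticsearch fall back to its defaults):

```csharp
var res = client.Search<ElasticCompanyModelDTO>(s => s
    .Index("cvr-permanent")
    .Query(q => q
        .SimpleQueryString(sqs => sqs
            // No .Fields(...) call: Elasticsearch uses its default
            // field list, replacing the old _all behaviour.
            .Query(query))));
```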

ElasticSearch - Filter stop words from Top words

I have a list of documents I am indexing like this:
ElasticIndex.CreateIndex(IndexName, _ => _
    .Mappings(__ => __
        .Map<AlbumMetadata>(M => M
            .AutoMap()
            .Properties(P => P
                .Text(T => T
                    .Name(N => N.Keywords)
                    .Analyzer("stop")
                    .Fields(F => F
                        .Keyword(K => K.Name("keywords"))))))));
In my class AlbumMetaData, the field Keywords is a list:
[Keyword]
public List<string> Keywords { get; set; }
When I want to retrieve the top terms, I do the following query (you can ignore Category and Type, they're not relevant to the problem):
var Match = Driver.Search<AlbumMetadata>(_ => _
    .Query(Q => Q
        .Term(P => P.Category, (int)Category) && Q
        .Term(P => P.Type, (int)Type))
    .Source(F => F.Includes(S => S.Fields(L => L.Keywords)))
    .Aggregations(A => A
        .Terms("Tags", T => T
            .Field(E => E.Keywords)
            .Size(Limit)
        )
    ));
var Tags = Match.Aggs.Terms("Tags").Buckets.ToDictionary(K => K.Key, V => V.DocCount);
The problem is that in the output I get some stop words, as well as some symbols like /, -, & and |.
What am I doing wrong?
Edit:
In order to clarify the question, here is what I am trying to achieve:
I have documents that have titles (full English sentences) and tags (list of single words, sometimes a tag is a two word tag).
I need to be able to perform a search that will find documents based on the title and tags (and ideally using word stems, ignoring plurals, etc).
I also need to extract the list of top words. The Keywords list is a concatenation of all words from the title and all the entries from the tags list.
Is the way I create the index appropriate in this context? Also, is the way I do the aggregation the right way?
There are a few things:
When you create the index, .AutoMap() on the mapping will infer Elasticsearch field datatypes from the POCO property types and the attributes applied to them. Then, .Properties() overrides any of these inferred mappings. So, the end result of your mapping for Keywords is a text datatype field with the stop analyzer applied, and a multi-field sub field of "keywords" (queryable via "keywords.keywords"), set as a keyword datatype.
The aggregation is running on the "keywords" text field with the stop analyzer applied. The stop analyzer uses English stop words by default, but you can configure the stop analyzer with other stop words by defining a custom stop analyzer in the index. The stop analyzer will not remove symbols like /, -, & and |.
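A sketch of such a custom stop analyzer, using the same index-creation call as the question (the filter name, analyzer name, and extra stop words are illustrative):

```csharp
ElasticIndex.CreateIndex(IndexName, c => c
    .Settings(st => st
        .Analysis(an => an
            .TokenFilters(tf => tf
                // "_english_" pulls in the built-in English stop word
                // list; the extra words are illustrative additions.
                .Stop("my_stop", sf => sf
                    .StopWords("_english_", "vs", "feat")))
            .Analyzers(a => a
                .Custom("my_stop_analyzer", ca => ca
                    .Tokenizer("standard")
                    .Filters("lowercase", "my_stop"))))));
```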
With a terms aggregation, you generally want to get back aggregations on the verbatim terms for a field, which you can get with your mapping by using the "keywords.keywords" field in the aggregation. You can apply a normalizer to a keyword field which is similar to an analyzer, except it produces only one token. This is because a keyword field uses doc_values, an on-disk columnar data structure that is suited for well performing, large scale aggregations.
You can run the aggregation on a text field too as you're doing, but you also need to enable fielddata and be aware of how it works. text fields can't use doc_values.
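Putting that together, a sketch of the aggregation run against the keyword sub-field instead (same query as the question, trimmed to the relevant part):

```csharp
// Aggregate on the "keywords.keywords" keyword sub-field from the
// mapping, so the buckets hold the verbatim tags rather than
// stop-analyzed tokens.
var match = Driver.Search<AlbumMetadata>(s => s
    .Size(0)
    .Aggregations(a => a
        .Terms("Tags", t => t
            .Field(f => f.Keywords.Suffix("keywords"))
            .Size(Limit))));

var tags = match.Aggs.Terms("Tags").Buckets
    .ToDictionary(k => k.Key, v => v.DocCount);
```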

How to use wildcards with ngrams in ElasticSearch

Is it possible to combine wildcard matches and ngrams in ElasticSearch? I'm already using ngrams of length 3-11.
As a very small example, I have records C1239123 and C1230123. The user wants to return both of these. This is the only info they know: C123?12
The above case won't work on my full match analyzer because the query is missing the 3 on the end. I was under the impression wildcard matches would work out of the box, but if I perform a search similar to the above I get gibberish.
Query:
.Search<ElasticSearchProject>(a => a
    .Size(100)
    .Query(q => q
        .SimpleQueryString(query => query
            .OnFieldsWithBoost(b => b
                .Add(f => f.Summary, 2.1)
                .Add(f => f.Summary.Suffix("ngram"), 2.0))
            .Query(searchQuery))));
Analyzer:
var projectPartialMatch = new CustomAnalyzer
{
Filter = new List<string> { "lowercase", "asciifolding" },
Tokenizer = "ngramtokenizer"
};
Tokenizer:
.Tokenizers(t=>t
.Add("ngramtokenizer", new NGramTokenizer
{
TokenChars = new[] {"letter","digit","punctuation"},
MaxGram = 11,
MinGram = 3
}))
EDIT:
The main purpose is to allow the user to tell the search engine exactly where the unknown characters are. This preserves the match order. I do not ngram the query, only the indexed fields.
EDIT 2 with more test results:
I had simplified my prior example a bit too much. The gibberish was being caused by punctuation filters. With a proper example there's no gibberish, but results aren't returned in a relevant order. Seeing below, I'm unsure why the first 2 results match at all. Ngram is not applied to the query.
Searching for c.a123?.7?0 gives results in this order:
C.A1234.560
C.A1234.800
C.A1234.700 <--Shouldn't this be first?
C.A1234.950
To anyone looking for a resolution to this, wildcards are used on ngrammed tokens by default. My problem was due to my queries having punctuation in them and using a standard analyzer on my query (which breaks on punctuation).
Duc.Duong's suggestion to use the Inquisitor plugin helped show exactly how data would be analyzed.
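One way to express that fix in the mapping (shown here in later NEST syntax) is to keep the ngram analyzer at index time but set a search analyzer that lowercases without splitting on punctuation. A sketch, assuming the "projectPartialMatch" analyzer from the question is registered under that name; "lowercase_keyword" is an assumed keyword-tokenizer + lowercase analyzer:

```csharp
var response = client.CreateIndex("projects", c => c
    .Settings(st => st
        .Analysis(an => an
            .Analyzers(a => a
                // Query-time analyzer: one token, lowercased,
                // punctuation preserved.
                .Custom("lowercase_keyword", ca => ca
                    .Tokenizer("keyword")
                    .Filters("lowercase")))))
    .Mappings(m => m
        .Map<ElasticSearchProject>(mm => mm
            .Properties(p => p
                .Text(t => t
                    .Name(n => n.Summary)
                    .Fields(f => f
                        .Text(tt => tt
                            .Name("ngram")
                            .Analyzer("projectPartialMatch")
                            .SearchAnalyzer("lowercase_keyword"))))))));
```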

How to do facet search with mpdreamz Nest

Does anybody know how to do facet search with NEST?
My index is https://gist.github.com/3606852
I would like to search for a keyword in 'NumberEvent' and display the result if the keyword exists. Please help!
This assumes that the MyPoco class exists and maps to your Elasticsearch document. If it doesn't, you can use dynamic, but you'll have to swap the lambda-based field selectors for strings.
var result = client.Search<MyPoco>(s => s
    .From(0)
    .Size(10)
    .Filter(ff => ff
        .Term(f => f.Categories.Types.Events.First().NumberEvent.event, "keyword")
    )
    .FacetTerm(q => q.OnField(f => f.Categories.Types.Facets.First().Person.First().entity))
);
result.Documents now holds your documents
result.Facet<TermFacet>(f => f.Categories.Types.Facets.First().Person.First().entity); now holds your facets
Your document seems a bit strange though in the sense that it already has Facets with counts in them.