I am writing a query in NEST for Elasticsearch that matches against a list of countries. It currently matches whenever any of the countries in the list is present in ESCountryDescription (a list of countries), but I only want a match when all of the countries in CountryList are present in ESCountryDescription. I believe that I need to use MinimumShouldMatch, as in this example: http://www.elastic.co/guide/en/elasticsearch/reference/0.90/query-dsl-terms-query.html
a.Terms(t => t.ESCountryDescription, CountryList)
But I cannot find a way of adding MinimumShouldMatch into my query above.
You can apply the MinimumShouldMatch parameter in TermsDescriptor. Here is an example:
var lookingFor = new List<string> { "netherlands", "poland" };
var searchResponse = client.Search<IndexElement>(s => s
.Query(q => q
.TermsDescriptor(t => t.OnField(f => f.Countries).MinimumShouldMatch("100%").Terms(lookingFor))));
or
var lookingFor = new List<string> { "netherlands", "poland" };
var searchResponse = client.Search<IndexElement>(s => s
.Query(q => q
.TermsDescriptor(t => t.OnField(f => f.Countries).MinimumShouldMatch(lookingFor.Count).Terms(lookingFor))));
And here is the whole example:
class Program
{
public class IndexElement
{
public int Id { get; set; }
[ElasticProperty(Index = FieldIndexOption.NotAnalyzed)]
public List<string> Countries { get; set; }
}
static void Main(string[] args)
{
var indexName = "sampleindex";
var uri = new Uri("http://localhost:9200");
var settings = new ConnectionSettings(uri).SetDefaultIndex(indexName).EnableTrace(true);
var client = new ElasticClient(settings);
client.DeleteIndex(indexName);
client.CreateIndex(
descriptor =>
descriptor.Index(indexName)
.AddMapping<IndexElement>(
m => m.MapFromAttributes()));
client.Index(new IndexElement {Id = 1, Countries = new List<string> {"poland", "germany", "france"}});
client.Index(new IndexElement {Id = 2, Countries = new List<string> {"poland", "france"}});
client.Index(new IndexElement {Id = 3, Countries = new List<string> {"netherlands"}});
client.Refresh();
var lookingFor = new List<string> { "germany" };
var searchResponse = client.Search<IndexElement>(s => s
.Query(q => q
.TermsDescriptor(t => t.OnField(f => f.Countries).MinimumShouldMatch("100%").Terms(lookingFor))));
}
}
Regarding your problem:
For terms: "netherlands" you will get the document with Id 3
For terms: "poland" and "france" you will get the documents with Id 1 and 2
For terms: "germany" you will get the document with Id 1
For terms: "poland", "france" and "germany" you will get the document with Id 1
I hope this is what you were after.
Instead of doing
.Query(q => q
.Terms(t => t.ESCountryDescription, CountryList))
You can use the query below, where x is either CountryList.Count or "100%":
.Query(q => q
.TermsDescriptor(td => td
.OnField(t => t.ESCountryDescription)
.MinimumShouldMatch(x)
.Terms(CountryList)))
See the unit tests in the elasticsearch-net GitHub repository for more examples.
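If your NEST version does not expose MinimumShouldMatch on the terms query, another way to require every country is to AND together one term query per country. A minimal sketch, assuming a document type (here called MyDocument) exposing ESCountryDescription, and the same client and CountryList as in the question:

// Sketch: combine one term query per country with && so that a document
// matches only when every entry in CountryList is present in ESCountryDescription.
// MyDocument, client and CountryList are assumed from the question's context;
// requires using System.Linq;
var searchResponse = client.Search<MyDocument>(s => s
    .Query(q => CountryList
        .Select(country => q.Term(t => t.ESCountryDescription, country))
        .Aggregate((left, right) => left && right)
    )
);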
We are using a filter as follows:
filters.Add(fq => fq
.Term(t => t
.Field(f => f.LocalityId)
.Value(locationParams[2])) || fq
.GeoShape(g => g
.Field("locationShape")
.Relation(GeoShapeRelation.Within)
.IndexedShape(f => f
.Id(searchCriteria.spLocationId)
.Index(indexName)
.Path("geometry")
)
)
);
However, if the geometry field is missing, Elasticsearch throws an exception.
Is there any way to avoid this, for example by using a default (null value) in the mapping, or in some other way?
It is not possible to avoid the exception in this case. Elasticsearch assumes that the parameters that the user provides to a pre-indexed shape are valid.
Ideally, the values supplied for the indexed shape should be constrained so that an end user cannot supply invalid ones. If that is not feasible, you could first run a bool query with exists and ids filter clauses against the indexName index, and only add the indexed shape geoshape query filter to the search request when the shape document and its geometry field exist.
For example
private static void Main()
{
var documentsIndex = "documents";
var shapesIndex = "shapes";
var host = "localhost";
var pool = new SingleNodeConnectionPool(new Uri($"http://{host}:9200"));
var settings = new ConnectionSettings(pool)
.DefaultMappingFor<Document>(m => m.IndexName(documentsIndex))
.DefaultMappingFor<Shape>(m => m.IndexName(shapesIndex));
var client = new ElasticClient(settings);
if (client.Indices.Exists(documentsIndex).Exists)
client.Indices.Delete(documentsIndex);
client.Indices.Create(documentsIndex, c => c
.Map<Document>(m => m
.AutoMap()
)
);
if (client.Indices.Exists(shapesIndex).Exists)
client.Indices.Delete(shapesIndex);
client.Indices.Create(shapesIndex, c => c
.Map<Shape>(m => m
.AutoMap()
)
);
client.Bulk(b => b
.IndexMany(new [] {
new Document
{
Id = 1,
LocalityId = 1,
LocationShape = GeoWKTReader.Read("POLYGON ((30 20, 20 15, 20 25, 30 20))")
},
new Document
{
Id = 2,
LocalityId = 2
},
})
.IndexMany(new []
{
new Shape
{
Id = 1,
Geometry = GeoWKTReader.Read("POLYGON ((20 35, 10 30, 10 10, 30 5, 45 20, 20 35))")
}
})
.Refresh(Refresh.WaitFor)
);
var shapeId = 1;
var searchResponse = client.Search<Shape>(s => s
.Size(0)
.Query(q => +q
.Ids(i => i.Values(shapeId)) && +q
.Exists(e => e.Field("geometry"))
)
);
Func<QueryContainerDescriptor<Document>, QueryContainer> geoShapeQuery = q => q;
if (searchResponse.Total == 1)
geoShapeQuery = q => +q
.GeoShape(g => g
.Field("locationShape")
.Relation(GeoShapeRelation.Within)
.IndexedShape(f => f
.Id(shapeId)
.Index(shapesIndex)
.Path("geometry")
)
);
client.Search<Document>(s => s
.Query(q => +q
.Term(t => t
.Field(f => f.LocalityId)
.Value(2)
) || geoShapeQuery(q)
)
);
}
public class Document
{
public int Id { get; set; }
public int LocalityId { get; set; }
public IGeoShape LocationShape { get; set; }
}
public class Shape
{
public int Id { get; set; }
public IGeoShape Geometry { get; set; }
}
If var shapeId = 1; is changed to var shapeId = 2; then the geoshape query is not added to the filter clauses when searching on the documents index.
I have a list of users, and each user has a string array property called Tags. I am trying to get a unique list of tags with a count for each; any idea what I am missing here? I am using LINQPad to write my test query, please see the example code:
void Main()
{
List<User> users = new List<User>(){
new User {Id = 1, Tags = new string[]{"tag1", "tag2"}},
new User {Id = 2, Tags = new string[]{"tag3", "tag7"}},
new User {Id = 3, Tags = new string[]{"tag7", "tag8"}},
new User {Id = 4, Tags = new string[]{"tag1", "tag4"}},
new User {Id = 5 },
};
var uniqueTags = users.Where(m=>m.Tags != null).GroupBy(m=>m.Tags).Select(m=> new{TagName = m.Key, Count = m.Count()});
uniqueTags.Dump();
// RESULT should BE:
// tag1 - Count(2)
// tag2 - Count(1)
// tag3 - Count(1)
// tag4 - Count(1)
// tag7 - Count(2)
// tag8 - Count(1)
}
public class User{
public int Id {get;set;}
public string[] Tags {get;set;}
}
You can flatten to IEnumerable<string> before grouping:
var uniqueTags = users.SelectMany(u => u.Tags ?? new string[0])
.GroupBy(t => t)
.Select(g => new { TagName = g.Key, Count = g.Count() } );
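Outside LINQPad's Dump(), a quick way to print the result (a usage sketch; with the sample users above it produces the expected counts listed in the question):

// Sketch: print each distinct tag with its count.
foreach (var tag in uniqueTags.OrderBy(t => t.TagName))
    Console.WriteLine($"{tag.TagName} - Count({tag.Count})");
// prints: tag1 - Count(2), tag2 - Count(1), tag3 - Count(1),
//         tag4 - Count(1), tag7 - Count(2), tag8 - Count(1)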
LINQPad C# Expression version:
new[] {
new { Id = 1, Tags = new[] { "tag1", "tag2" } },
new { Id = 2, Tags = new[] { "tag3", "tag7" } },
new { Id = 3, Tags = new[] { "tag7", "tag8" } },
new { Id = 4, Tags = new[] { "tag1", "tag4" } },
new { Id = 5, Tags = (string[])null }
}
.SelectMany(u => u.Tags ?? Enumerable.Empty<string>())
.GroupBy(t => t)
.Select(g => new { TagName = g.Key, Count = g.Count() } )
I'm using the NEST client with the following syntax:
_server.Search<Document>(s => s.Index(_config.SearchIndex)
.Query(q => q.MatchAll(p => p)).Aggregations(
a => a
.Terms("type", st => st
.Field(p => p.Type)
)));
However, I keep getting the following exception:
A first chance exception of type 'System.NotSupportedException' occurred in mscorlib.dll
Additional information: Collection is read-only.
It only seems to occur when I'm using aggregations. The Type field has the following mapping:
[Keyword(Name = "Type")]
public string Type { get; set; }
I would double-check the versions of NEST and Elasticsearch.Net that you are using. I just tried the following example with Elasticsearch 5.1.2 and NEST 5.0.1 and don't see the issue:
void Main()
{
var pool = new SingleNodeConnectionPool(new Uri("http://localhost:9200"));
var defaultIndex = "default-index";
var connectionSettings = new ConnectionSettings(pool)
.DefaultIndex(defaultIndex);
var client = new ElasticClient(connectionSettings);
if (client.IndexExists(defaultIndex).Exists)
client.DeleteIndex(defaultIndex);
client.CreateIndex(defaultIndex, c => c
.Mappings(m => m
.Map<Document>(mm => mm.AutoMap())
)
);
var documents = Enumerable.Range(1, 100).Select(i =>
new Document
{
Type = $"Type {i % 5}"
}
);
client.IndexMany(documents);
client.Refresh(defaultIndex);
var searchResponse = client.Search<Document>(s => s
.Index(defaultIndex)
.Query(q => q.MatchAll(p => p))
.Aggregations(a => a
.Terms("type", st => st
.Field(p => p.Type)
)
)
);
foreach (var bucket in searchResponse.Aggs.Terms("type").Buckets)
Console.WriteLine($"key: {bucket.Key}, count {bucket.DocCount}");
}
public class Document
{
[Keyword(Name = "Type")]
public string Type { get; set; }
}
This outputs:
key: Type 0, count 20
key: Type 1, count 20
key: Type 2, count 20
key: Type 3, count 20
key: Type 4, count 20
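If the issue persists, one quick way to confirm exactly which NEST and Elasticsearch.Net assemblies are loaded at runtime (a sketch; assumes both packages are referenced):

// Sketch: print the loaded NEST and Elasticsearch.Net assembly versions so
// they can be checked against the Elasticsearch cluster version in use.
Console.WriteLine(typeof(Nest.ElasticClient).Assembly.GetName().Version);
Console.WriteLine(typeof(Elasticsearch.Net.ElasticLowLevelClient).Assembly.GetName().Version);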
I have two types in one index and both have a Suggest field (as required):
public class LegalAreaSearchModel : LegalAreaModel
{
public SuggestField Suggest
{
get
{
List<string> input = new List<string>();
string[] childArea = !string.IsNullOrEmpty(this.LegalArea) ? this.LegalArea.Split(' ') : new string[] { "" };
string[] ParentArea = !string.IsNullOrEmpty(this.Parent) ? this.Parent.Split(' ') : new string[] { "" };
input.AddRange(childArea);
input.AddRange(ParentArea);
return
new SuggestField
{
Input = input,
Output = this.LegalArea,
Payload = new
{
Id = this.RowId,
Name = !string.IsNullOrEmpty(this.Parent) ? this.Parent + "/" + this.LegalArea : this.LegalArea,
Type = "Specialization"
},
Weight = !string.IsNullOrEmpty(this.LegalArea) ? this.LegalArea.Length : 0
};
}
}
}
AND
public class LegalDocumentSearchModel : LegalDocumentModel
{
public LegalDocumentSearchModel()
{
//this = ObjectCopier.Clone<LegalDocumentSearchModel>(legalDocumentSearchModel);
SupplierDetails = new List<LegalDocumentSupplierDetailForSearch>();
CategoryDetails = new List<CategoryDetails>();
}
[Nested()]
public List<LegalDocumentSupplierDetailForSearch> SupplierDetails { get; set; }
[Nested()]
public List<CategoryDetails> CategoryDetails { get; set; }
public SuggestField Suggest
{
get
{
return
new SuggestField
{
Input = new List<string>(!string.IsNullOrEmpty(this.Name) ? this.Name.Split(' ') : new string[] { "" }) { this.Name },
Output = this.Name,
Payload = new
{
Id = this.RowId,
SEOFriendlyURLName = !string.IsNullOrEmpty(this.SEOFriendlyURLName) ? string.Concat(this.SEOFriendlyURLName) : string.Empty,
Name = this.Name,
ProductType = this.ProductType,
Description = this.Description
},
Weight = !string.IsNullOrEmpty(this.Description) ? this.Description.Length : 0
};
}
}
}
public class LegalDocumentSupplierDetailForSearch
{
public string PinCode { get; set; }
public string Lattitude { get; set; }
public string Longitude { get; set; }
}
public class CategoryDetails
{
public int CategoryId { get; set; }
public string CategoryName { get; set; }
}
Now my search is as follows:
List<AdvocateListingSuggestionModel> lsResult = new List<AdvocateListingSuggestionModel>();
var result = _searchProvider.Client.Suggest<LegalAreaSearchModel>(s => s
.Index(SearchConfigurationManager.DefaultSearchIndex)
.Completion("ml-la-suggestions", c => c
.Text(Search)
.Field(p => p.Suggest)
.Fuzzy(fz => fz
.Fuzziness(Fuzziness.Auto))
)
);
I am getting results from the other type as well (a mix of both). How can we restrict results to only one type?
Type 1: LegalAreaSearchModel
Type 2: LegalDocumentSearchModel
My index creation is as follows:
public static bool CheckForIndex(ElasticSearchProvider searchProvider)
{
Nest.IndexExistsRequest idr = new Nest.IndexExistsRequest(Nest.Indices.Index(new Nest.IndexName() { Name = SearchConfigurationManager.DefaultSearchIndex }));
var indxres = searchProvider.Client.IndexExists(idr);
if (!indxres.Exists)
{
searchProvider.Client.CreateIndex(SearchConfigurationManager.DefaultSearchIndex, i => i
.Settings(s => s
.NumberOfShards(2)
.NumberOfReplicas(0)
.Analysis(analysis => analysis
.Tokenizers(tokenizers => tokenizers
.Pattern("ml-id-tokenizer", p => p.Pattern(#"\W+"))
)
.TokenFilters(tokenfilters => tokenfilters
.WordDelimiter("ml-id-words", wd => wd
.SplitOnCaseChange()
.PreserveOriginal()
.SplitOnNumerics()
.GenerateNumberParts(false)
.GenerateWordParts()
)
)
.Analyzers(analyzers => analyzers
.Custom("ml-id-analyzer", c => c
.Tokenizer("ml-id-tokenizer")
.Filters("ml-id-words", "lowercase")
)
.Custom("ml-id-keyword", c => c
.Tokenizer("keyword")
.Filters("lowercase")
)
)
)
));
}
return true;
}
Creating the document type as follows:
public bool CreateDocumentIndex()
{
bool retVal = false;
if (Common.CheckForIndex(_searchProvider))
{
var res = _searchProvider.Client.Map<LegalDocumentSearchModel>(m =>
m.Index(SearchConfigurationManager.DefaultSearchIndex)
.Type(this.type)
.AutoMap()
.Properties(ps => ps
.String(s => s
.Name(p => p.Id)
.Analyzer("ml-id-analyzer")
.Fields(f => f
.String(p => p.Name("keyword").Analyzer("ml-id-keyword"))
.String(p => p.Name("raw").Index(FieldIndexOption.NotAnalyzed))
)
)
.Completion(c => c
.Name(p => p.Suggest)
.Payloads()
)
));
retVal = res.IsValid;
}
return retVal;
}
Creating the LegalArea type as follows:
public bool CreateLegalAreaIndex()
{
bool retVal = false;
if (Common.CheckForIndex(_searchProvider))
{
var res = _searchProvider.Client.Map<LegalAreaSearchModel>(m =>
m.Index(SearchConfigurationManager.DefaultSearchIndex)
.Type(this.type)
.AutoMap()
.Properties(ps => ps
.String(s => s
.Name(p => p.Id)
.Analyzer("ml-id-analyzer")
.Fields(f => f
.String(p => p.Name("keyword").Analyzer("ml-id-keyword"))
.String(p => p.Name("raw").Index(FieldIndexOption.NotAnalyzed))
)
)
.Completion(c => c
.Name(p => p.Suggest)
.Payloads()
)
));
retVal = res.IsValid;
}
return retVal;
}
Now when I run legal area suggestions as follows:
public List<AdvocateListingSuggestionModel> LegalAreaSuggestion(string Search)
{
List<AdvocateListingSuggestionModel> lsResult = new List<AdvocateListingSuggestionModel>();
var result = _searchProvider.Client.Suggest<LegalAreaSearchModel>(s => s
.Index(SearchConfigurationManager.DefaultSearchIndex)
.Completion("ml-la-suggestions", c => c
.Text(Search)
.Field(p => p.Suggest)
.Fuzzy(fz => fz
.Fuzziness(Fuzziness.Auto))
)
);
if (result.IsValid)
{
lsResult = result.Suggestions["ml-la-suggestions"]
.FirstOrDefault()
.Options
.Select(suggest => suggest.Payload<AdvocateListingSuggestionModel>()).ToList();
}
return lsResult;
}
I am getting results of LegalDocumentSearchModel as well. Please advise.
Thanks
Suggesters work at the index level; that is, one Finite State Transducer (FST) is created per completion field per index. So if you have two types with the same field name in the same index (both fields must be of the same type), data from both types will be represented in the one FST.
We can see this with a simple example. Here we have two types with completion fields that we're going to index into the same index
public class Band
{
public int Id { get; set;}
public CompletionField<object> Suggest { get; set;}
}
public class Sport
{
public int Id { get; set;}
public CompletionField<object> Suggest { get; set; }
}
void Main()
{
var pool = new SingleNodeConnectionPool(new Uri("http://localhost:9200"));
var defaultIndex = "default-index";
var connectionSettings = new ConnectionSettings(pool)
.DefaultIndex(defaultIndex);
var client = new ElasticClient(connectionSettings);
// just so we can re-run this example...
if (client.IndexExists(defaultIndex).Exists)
client.DeleteIndex(defaultIndex);
client.CreateIndex(defaultIndex, ci => ci
.Mappings(m => m
.Map<Band>(l => l
.AutoMap()
.Properties(p => p
.Completion(c => c
.Name(n => n.Suggest)
)
)
)
.Map<Sport>(l => l
.AutoMap()
.Properties(p => p
.Completion(c => c
.Name(n => n.Suggest)
)
)
)
)
);
var bands = new List<Band>
{
new Band { Id = 1, Suggest = new CompletionField<object> { Input = new [] {"Bowling for Soup"} } },
new Band { Id = 2, Suggest = new CompletionField<object> { Input = new [] {"Fastball"} } },
new Band { Id = 3, Suggest = new CompletionField<object> { Input = new [] {"Dropkick Murphys"} } },
new Band { Id = 4, Suggest = new CompletionField<object> { Input = new [] {"Yellowcard"} } },
new Band { Id = 5, Suggest = new CompletionField<object> { Input = new [] {"American Football"} } },
};
client.IndexMany(bands);
var sports = new List<Sport>
{
new Sport { Id = 1, Suggest = new CompletionField<object> { Input = new [] {"Bowling"} } },
new Sport { Id = 2, Suggest = new CompletionField<object> { Input = new [] {"Football"} } },
new Sport { Id = 3, Suggest = new CompletionField<object> { Input = new [] {"Baseball"} } },
new Sport { Id = 4, Suggest = new CompletionField<object> { Input = new [] {"Table Tennis"} } },
new Sport { Id = 5, Suggest = new CompletionField<object> { Input = new [] {"American Football"} } },
};
client.IndexMany(sports);
client.Refresh(defaultIndex);
var suggestResponse = client.Suggest<Band>(s => s
.Completion("suggestion", cs => cs
.Text("Bo")
.Field(f => f.Suggest)
)
);
}
We get back the following from our suggest call
{
"_shards" : {
"total" : 5,
"successful" : 5,
"failed" : 0
},
"suggestion" : [ {
"text" : "Bo",
"offset" : 0,
"length" : 2,
"options" : [ {
"text" : "Bowling",
"score" : 1.0
}, {
"text" : "Bowling for Soup",
"score" : 1.0
} ]
} ]
}
We get suggestions sourced from both the Band type and the Sport type in the results. Perhaps not what we expected or wanted, but that's how suggesters work. Note that in the .Suggest<T>() call, the type T is used to provide strongly typed access to properties that may be used in the query; in this example, the Suggest field on Band.
The recommended solution here is to have separate indices for different types; you can still query across multiple indices when you need to and avoid the pitfalls that can come with having multiple types in the same index.
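For example, with bands and sports in their own indices, a single suggest call can still target both when mixed results are wanted (a rough sketch; the index names are illustrative and the exact overloads depend on your NEST version):

// Sketch: suggest across two indices in one call; "bands-index" and
// "sports-index" are placeholder names. Targeting just "bands-index"
// would return band suggestions only.
var suggestAcrossIndices = client.Suggest<Band>(s => s
    .Index("bands-index,sports-index")
    .Completion("suggestion", cs => cs
        .Text("Bo")
        .Field(f => f.Suggest)
    )
);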
With that aside, you can get this to work by using a Context Suggester and taking advantage of category context on the completion field mapping. Ideally, you want to use a field that exists within the document as the category, to give you flexibility; you can, however, use a metadata field if you want to. I recommend investigating and researching approaches for your domain.
With our previous example, if we now include category context in the mapping (same code as before but with this updated mapping)
client.CreateIndex(defaultIndex, ci => ci
.Mappings(m => m
.Map<Band>(l => l
.AutoMap()
.Properties(p => p
.Completion(c => c
.Name(n => n.Suggest)
.Context(sc => sc
.Category("type", csc => csc
// recommend that you use another field on
// the document here instead of a metadata field
.Field("_type")
)
)
)
)
)
.Map<Sport>(l => l
.AutoMap()
.Properties(p => p
.Completion(c => c
.Name(n => n.Suggest)
.Context(sc => sc
.Category("type", csc => csc
// recommend that you use another field on
// the document here instead of a metadata field
.Field("_type")
)
)
)
)
)
)
);
And use category context whilst searching
var suggestResponse = client.Suggest<Band>(s => s
.Completion("suggestion", cs => cs
.Text("Bo")
.Field(f => f.Suggest)
.Context(d => d.Add("type", "band"))
)
);
then we get the desired result
{
"_shards" : {
"total" : 5,
"successful" : 5,
"failed" : 0
},
"suggestion" : [ {
"text" : "Bo",
"offset" : 0,
"length" : 2,
"options" : [ {
"text" : "Bowling for Soup",
"score" : 1.0
} ]
} ]
}
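As noted in the mapping comments, a field on the document is usually a better category source than the _type metadata field. A sketch of what that could look like; the SuggestType property (and its "suggestType" field name) is an assumption, not part of the original example:

// Illustrative only: carry the category on the document itself.
public class Band
{
    public int Id { get; set; }
    public string SuggestType { get; set; } = "band";   // category source field
    public CompletionField<object> Suggest { get; set; }
}
// ...and in the completion mapping, point the category context at that field,
// replacing .Field("_type") with .Field("suggestType").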
I am trying to create some completion suggesters on some of my fields. My document class looks like this:
[ElasticType(Name = "rawfiles", IdProperty = "guid")]
public class RAW
{
[ElasticProperty(OmitNorms = true, Index = FieldIndexOption.NotAnalyzed, Type = FieldType.String, Store = true)]
public string guid { get; set; }
[ElasticProperty(OmitNorms = true, Index = FieldIndexOption.Analyzed, Type = FieldType.String, Store = true, IndexAnalyzer = "def_analyzer", SearchAnalyzer = "def_analyzer_search", AddSortField = true)]
public string filename { get; set; }
[ElasticProperty(OmitNorms = true, Index = FieldIndexOption.Analyzed, Type = FieldType.String, Store = true, IndexAnalyzer = "def_analyzer", SearchAnalyzer = "def_analyzer_search")]
public List<string> tags { get { return new List<string>(); } }
}
And here is how I am trying to create the completion fields
public bool CreateMapping(ElasticClient client, string indexName)
{
IIndicesResponse result = null;
try
{
result = client.Map<RAW>(
c => c.Index(indexName)
.MapFromAttributes()
.AllField(f => f.Enabled(false))
.SourceField(s => s.Enabled())
.Properties(p => p
.Completion(s => s.Name(n => n.tags.Suffix("comp"))
.IndexAnalyzer("standard")
.SearchAnalyzer("standard")
.MaxInputLength(20)
.Payloads()
.PreservePositionIncrements()
.PreserveSeparators())
.Completion(s2 => s2.Name(n=>n.filename.Suffix("comp"))
.IndexAnalyzer("standard")
.SearchAnalyzer("standard")
.MaxInputLength(20)
.Payloads()
.PreservePositionIncrements()
.PreserveSeparators())
)
);
}
catch (Exception)
{
}
return result != null && result.Acknowledged;
}
My problem is that this is only creating a single completion field named "comp". I was under the impression that this would create two completion fields, one named filename.comp and the other named tags.comp.
I then tried the answer on this SO question, but that made matters even worse, as now my two fields were mapped as completion fields only.
Just to be clear, I want to create a multi-field that has data, sort and completion fields, much like the one in this example.
This is how you can reproduce the auto-complete example from the article you attached.
My simple class (we are going to implement auto-complete on the Name property):
public class Document
{
public int Id { get; set; }
public string Name { get; set; }
}
To create a multi-field mapping in NEST, we have to define the mapping in the following manner:
var indicesOperationResponse = client.CreateIndex(descriptor => descriptor
.Index(indexName)
.AddMapping<Document>(m => m
.Properties(p => p.MultiField(mf => mf
.Name(n => n.Name)
.Fields(f => f
.String(s => s.Name(n => n.Name).Index(FieldIndexOption.Analyzed))
.String(s => s.Name(n => n.Name.Suffix("sortable")).Index(FieldIndexOption.NotAnalyzed))
.String(s => s.Name(n => n.Name.Suffix("autocomplete")).IndexAnalyzer("shingle_analyzer"))))))
.Analysis(a => a
.Analyzers(b => b.Add("shingle_analyzer", new CustomAnalyzer
{
Tokenizer = "standard",
Filter = new List<string> {"lowercase", "shingle_filter"}
}))
.TokenFilters(b => b.Add("shingle_filter", new ShingleTokenFilter
{
MinShingleSize = 2,
MaxShingleSize = 5
}))));
Let's index some documents:
client.Index(new Document {Id = 1, Name = "Tremors"});
client.Index(new Document { Id = 2, Name = "Tremors 2: Aftershocks" });
client.Index(new Document { Id = 3, Name = "Tremors 3: Back to Perfection" });
client.Index(new Document { Id = 4, Name = "Tremors 4: The Legend Begins" });
client.Index(new Document { Id = 5, Name = "True Blood" });
client.Index(new Document { Id = 6, Name = "Tron" });
client.Index(new Document { Id = 7, Name = "True Grit" });
client.Index(new Document { Id = 8, Name = "Land Before Time" });
client.Index(new Document { Id = 9, Name = "The Shining" });
client.Index(new Document { Id = 10, Name = "Good Burger" });
client.Refresh();
Now we are ready to write a prefix query :)
var searchResponse = client.Search<Document>(s => s
.Query(q => q
.Prefix("name.autocomplete", "tr"))
.SortAscending(sort => sort.Name.Suffix("sortable")));
This query will get us
Tremors 2: Aftershocks
Tremors 3: Back to Perfection
Tremors 4: The Legend Begins
Tron
True Blood
True Grit
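The titles above can be read straight off the search response, for example:

// Sketch: enumerate the documents matched by the prefix query.
foreach (var doc in searchResponse.Documents)
    Console.WriteLine(doc.Name);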
Hope this helps.
Recently, the folks from NEST prepared a great tutorial about NEST and Elasticsearch. There is a part about suggestions which should be really useful for you.