Elasticsearch multi-word, multi-field search with analyzers - elasticsearch

I want to use Elasticsearch for multi-word searches, where all fields of a document are checked using their assigned analyzers.
So if I have a mapping:
{
  "settings": {
    "analysis": {
      "analyzer": {
        "folding": {
          "tokenizer": "standard",
          "filter": [ "lowercase", "asciifolding" ]
        }
      }
    }
  },
  "mappings": {
    "typeName": {
      "date_detection": false,
      "properties": {
        "stringfield": {
          "type": "string",
          "analyzer": "folding"
        },
        "numberfield": {
          "type": "multi_field",
          "fields": {
            "numberfield": { "type": "double" },
            "untouched": { "type": "string", "index": "not_analyzed" }
          }
        },
        "datefield": {
          "type": "multi_field",
          "fields": {
            "datefield": { "type": "date", "format": "dd/MM/yyyy||yyyy-MM-dd" },
            "untouched": { "type": "string", "index": "not_analyzed" }
          }
        }
      }
    }
  }
}
As you can see, I have different types of fields, but I do know the structure.
What I want to do is run a search with a query string that checks all fields, using their analyzers as well.
For example if the query string is:
John Smith 2014-10-02 300.00
I want to search for "John", "Smith", "2014-10-02" and "300.00" in all the fields, calculating the relevance score as well. The best hit should be the document with the most field matches.
So far I have been able to search in all the fields by using multi_field, but in that case I was not able to parse 300.00, since 300 was stored in the string part of the multi_field.
If I search the "_all" field, then no analyzer is used.
How should I modify my mapping or my queries to be able to do a multi-word search, where dates and numbers are recognized in the multi-word query string?
Right now, when I do a search, an error occurs, since the whole string cannot be parsed as a number or a date. And if I use the string representation of the multi_field, then 300.00 will not be a hit, since the string representation is 300.
(What I would like is similar to a Google search, where dates, numbers and strings are recognized in a multi-word query.)
Any ideas?
Thanks!

Using the built-in whitespace analyzer as the search_analyzer on the mapped fields splits the query into whitespace-separated parts, and each part is matched against the index to find the best hits. Using an ngram filter in the index_analyzer improves the results considerably.
I am using the following setup for the query:
"query": {
"multi_match": {
"query": "sample query",
"fuzziness": "AUTO",
"fields": [
"title",
"subtitle",
]
}
}
And for mappings and settings:
{
  "settings": {
    "analysis": {
      "analyzer": {
        "autocomplete": {
          "type": "custom",
          "tokenizer": "whitespace",
          "filter": [
            "standard",
            "lowercase",
            "ngram"
          ]
        }
      },
      "filter": {
        "ngram": {
          "type": "ngram",
          "min_gram": 2,
          "max_gram": 15
        }
      }
    }
  },
  "mappings": {
    "title": {
      "type": "string",
      "search_analyzer": "whitespace",
      "index_analyzer": "autocomplete"
    },
    "subtitle": {
      "type": "string"
    }
  }
}
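Before wiring this into queries, it can help to check what the autocomplete analyzer actually emits; a sketch using the request-body form of the _analyze API (the index name my_index is just a placeholder):
POST /my_index/_analyze
{
  "analyzer": "autocomplete",
  "text": "sample query"
}
Each whitespace-separated word should come back as a series of 2-15 character grams; if it doesn't, the filter chain isn't being applied where expected.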
See the following answer and article for more details.
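Coming back to the typed fields in the original question: the parse errors from the number and date fields can also be sidestepped with the lenient flag on multi_match, which ignores data-type mismatches instead of failing the whole request. A minimal sketch against the fields from the question's mapping:
{
  "query": {
    "multi_match": {
      "query": "John Smith 2014-10-02 300.00",
      "fields": [ "stringfield", "numberfield", "datefield" ],
      "lenient": true
    }
  }
}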

Related

Completion suggester and exact matches in Elasticsearch

I'm a bit surprised by the behavior that Elasticsearch completion has sometimes. I've set up a mapping that has a suggest field. In the input of the suggest field I put 3 elements: the name, the ISIN and the issuer of one security.
Here is the mapping that I use:
"suggest": {
"type" : "completion",
"analyzer" : "simple"
}
When I query my index with this query:
{
  "suggest": {
    "my_suggestion": {
      "prefix": "FR0011597335",
      "completion": {
        "field": "suggest"
      }
    }
  }
}
I get a list of results, but not necessarily with my exact prefix, and most of the time the exact match is not at the top.
So I'd like to know if there is a way to boost exact matches in a suggestion and make such exact term matches appear in first position when possible.
I think my problem is solved by using a custom analyzer: the simple analyzer was not suitable for the entries I had.
"settings": {
"analysis": {
"char_filter": {
"punctuation": {
"type": "mapping",
"mappings": [".=>"]
}
},
"filter": {},
"analyzer": {
"analyzer_text": {
"tokenizer": "standard",
"char_filter": ["punctuation"],
"filter": ["lowercase", "asciifolding"]
}
}
}
},
and
"suggest": {
"type" : "completion",
"analyzer" : "analyzer_text"
}
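A quick way to confirm the new analyzer behaves as intended for ISIN-like values is the _analyze API (a sketch; the index name securities is just a placeholder):
POST /securities/_analyze
{
  "analyzer": "analyzer_text",
  "text": "FR0011597335"
}
This should come back as the single token fr0011597335, whereas the built-in simple analyzer splits on non-letters and would leave only fr from this input, which is why exact prefixes behaved so oddly before.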

Elasticsearch, lowercase search doesn't work

I am trying to search against content using a prefix query, and if I search for diode I get results that differ from Diode. How do I get ES to return the same results for both diode and Diode? These are the mappings and settings I am using in ES.
"settings":{
"analysis": {
"analyzer": {
"lowercasespaceanalyzer": {
"type": "custom",
"tokenizer": "whitespace",
"filter": [
"lowercase"
]
}
}
}
},
"mappings": {
"articles": {
"properties": {
"title": {
"type": "text"
},
"url": {
"type": "keyword",
"index": "true"
},
"imageurl": {
"type": "keyword",
"index": "true"
},
"content": {
"type": "text",
"analyzer" : "lowercasespaceanalyzer",
"search_analyzer":"whitespace"
},
"description": {
"type": "text"
},
"relatedcontentwords": {
"type": "text"
},
"cmskeywords": {
"type": "text"
},
"partnumbers": {
"type": "keyword",
"index": "true"
},
"pubdate": {
"type": "date"
}
}
}
}
Here is an example of the query I use:
POST _search
{
  "query": {
    "bool": {
      "must": {
        "prefix": { "content": "capacitance" }
      }
    }
  }
}
It happens because you use two different analyzers at search time and at indexing time.
So when you type the query "Diod" at search time, the "whitespace" search analyzer leaves it as "Diod".
However, because you use "lowercasespaceanalyzer" at index time, "Diod" was indexed as "diod". Just use the same analyzer at both search and index time, or a search analyzer that lowercases your strings, because the default "whitespace" analyzer doesn't: https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-whitespace-analyzer.html
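For example, a sketch of that fix, keeping the field definition from the question: dropping the separate search_analyzer makes Elasticsearch reuse the index-time analyzer at search time, so both sides lowercase consistently.
"content": {
  "type": "text",
  "analyzer": "lowercasespaceanalyzer"
}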
There will be no term Diode in your index, so if you want the same results you should have your query text analyzed by the same analyzer.
You can use a query_string query like:
"query_string" : {
"default_field" : "content",
"query" : "Diode",
"analyzer" : "lowercasespaceanalyzer"
}
UPDATE
You can also analyze your text before querying.
// Analyze the raw query text with the same analyzer that was used at index time
AnalyzeResponse resp = client.admin().indices()
    .prepareAnalyze(index, text)
    .setAnalyzer("lowercasespaceanalyzer")
    .get();
// getTokens() returns AnalyzeToken objects; getTerm() gives the analyzed string
String analyzedContext = resp.getTokens().get(0).getTerm();
...
Then use analyzedContext as the new query text.
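The analyzed token can then go into a query that performs no further analysis, such as a term query (a sketch, assuming the lowercased output "diode" from the step above):
{
  "query": {
    "term": { "content": "diode" }
  }
}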

Elasticsearch not analyzed field

I have an analyzed field which contains 'quick brown foxes' and another one which contains 'quick brown fox'.
I want to find the documents which explicitly contain 'foxes' (not fox). As far as I know, I have to create a multi-field with an analyzed and a not-analyzed sub-field (see my mapping below). But how can I query this?
Here's an example (note that my analyzer is set to Hungarian, but I guess that doesn't matter here):
{
  "settings": {
    "number_of_replicas": 0,
    "number_of_shards": 1,
    "analysis": {
      "analyzer": {
        "hu": {
          "tokenizer": "standard",
          "filter": [ "lowercase", "hu_HU" ]
        }
      },
      "filter": {
        "hu_HU": {
          "type": "hunspell",
          "locale": "hu_HU",
          "language": "hu_HU"
        }
      }
    }
  },
  "mappings": {
    "foo": {
      "_source": { "enabled": true },
      "properties": {
        "text": {
          "type": "string",
          "analyzer": "hu",
          "store": false,
          "fields": {
            "raw": {
              "type": "string",
              "index": "not_analyzed",
              "store": false
            }
          }
        }
      }
    }
  }
}
Queries that I tried: match, term, span_term, query_string. All were executed on the text and text.raw fields.
"index": "not_analyzed" means this field will be not analyzed at all (https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-index.html). So it will not be even split into words. I believe this is not what you want.
Instead of that, you need to add new analyzer, which will include only tokenizer whitespace (https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-whitespace-tokenizer.html):
"analyzer" : {
"hu" : {
"tokenizer" : "standard",
"filter" : [ "lowercase", "hu_HU" ]
},
"no_filter":{
"tokenizer" : "whitespace"
}
}
Then you need to use this new analyzer for your field:
"raw": {
"type": "string",
"analyzer": "no_filter",
"store": false
}
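With that in place, a match query against the text.raw sub-field finds documents that literally contain the token foxes, since the whitespace-only analyzer does no stemming at index or search time (a sketch, assuming the multi-field layout from the question; note that without a lowercase filter the match is case-sensitive):
{
  "query": {
    "match": {
      "text.raw": "foxes"
    }
  }
}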

How do I search for partial accented keyword in elasticsearch?

I have the following elasticsearch settings:
"settings": {
"index":{
"analysis":{
"analyzer":{
"analyzer_keyword":{
"tokenizer":"keyword",
"filter":["lowercase", "asciifolding"]
}
}
}
}
}
The above works fine for the following keywords:
Beyoncé
Céline Dion
The above data is stored in elasticsearch as beyonce and celine dion respectively.
I can search for Celine or Celine Dion without the accent and I get the same results. However, the moment I search for Céline, I don't get any results. How can I configure elasticsearch to search for partial keywords with the accent?
The query body looks like:
{
  "track_scores": true,
  "query": {
    "bool": {
      "must": [
        {
          "multi_match": {
            "fields": ["name"],
            "type": "phrase",
            "query": "Céline"
          }
        }
      ]
    }
  }
}
and the mapping is
"mappings" : {
"artist" : {
"properties" : {
"name" : {
"type" : "string",
"fields" : {
"orig" : {
"type" : "string",
"index" : "not_analyzed"
},
"simple" : {
"type" : "string",
"analyzer" : "analyzer_keyword"
}
},
}
I would suggest this mapping and then go from there:
{
  "settings": {
    "index": {
      "analysis": {
        "analyzer": {
          "analyzer_keyword": {
            "tokenizer": "whitespace",
            "filter": [
              "lowercase",
              "asciifolding"
            ]
          }
        }
      }
    }
  },
  "mappings": {
    "test": {
      "properties": {
        "name": {
          "type": "string",
          "analyzer": "analyzer_keyword"
        }
      }
    }
  }
}
Confirm that the same analyzer is being used at query time (see the _analyze sketch after this list). Here are some possible reasons why that might not be happening:
- you specify a separate analyzer at query time on purpose that does not perform similar analysis
- you are using a term or terms query, for which no analyzer is applied (see Term Query and the section titled "Why doesn't the term query match my document?")
- you are using a query_string query (e.g. see Simple Query String Query); I have found that when multiple fields with different analyzers are specified, I have needed to separate the fields into separate queries and specify the analyzer parameter (working with version 2.0)
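One quick check is to run the accented text through the same analyzer and confirm it folds to the indexed form; a sketch using the _analyze API (the index name artists is just a placeholder):
POST /artists/_analyze
{
  "analyzer": "analyzer_keyword",
  "text": "Céline"
}
With whitespace tokenization plus lowercase and asciifolding this should come back as the single token celine, which is what the phrase query has to produce at search time for the folded index to match.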

Partial Search using Analyzer in ElasticSearch

I am using Elasticsearch to build an index of URLs.
I split each URL into 3 parts: "domain", "path", and "query".
For example, testing.com/index.html?user=who&pw=no is separated into
domain = testing.com
path = index.html
query = user=who&pw=no
There are problems when I want to do a partial search on my index, such as "user=who" or "ing.com".
Is it possible to use an analyzer when I search, even if I didn't use an analyzer when indexing?
How can I do a partial search based on the analyzer?
Thank you very much.
2 approaches:
1. Wildcard search - easy and slow
"query": {
"query_string": {
"query": "*ing.com",
"default_field": "domain"
}
}
2. Use an nGram tokenizer - harder but faster
Index Settings
"settings" : {
"analysis" : {
"analyzer" : {
"my_ngram_analyzer" : {
"tokenizer" : "my_ngram_tokenizer"
}
},
"tokenizer" : {
"my_ngram_tokenizer" : {
"type" : "nGram",
"min_gram" : "1",
"max_gram" : "50"
}
}
}
}
Mapping
"properties": {
"domain": {
"type": "string",
"index_analyzer": "my_ngram_analyzer"
},
"path": {
"type": "string",
"index_analyzer": "my_ngram_analyzer"
},
"query": {
"type": "string",
"index_analyzer": "my_ngram_analyzer"
}
}
Querying
"query": {
"match": {
"domain": "ing.com"
}
}
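Since all three fields share the same ngram index analyzer, the partial matching extends across them with a multi_match (a sketch; the default search-time analysis splits "user=who" into terms that hit the indexed grams):
{
  "query": {
    "multi_match": {
      "query": "user=who",
      "fields": [ "domain", "path", "query" ]
    }
  }
}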
The trick with the query string is to split a string like "user=who&pw=no" into the tokens ["user=who&pw=no", "user=who", "pw=no"] at index time. That lets you easily run queries like "user=who". You could do this with the pattern_capture token filter, but there may be better ways as well.
You can also make the hostname and path more searchable with the path_hierarchy tokenizer; for example, "/some/path/somewhere" becomes ["/some/path/somewhere", "/some/path", "/some"]. You can index the hostname with the path_hierarchy tokenizer as well by setting reverse: true and delimiter: ".". You may also want to use a stopword filter to exclude top-level domains.
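A sketch of both ideas as index settings (the analyzer, filter and tokenizer names below are made up for illustration, and the capture pattern is just one way to split on &):
"settings": {
  "analysis": {
    "filter": {
      "querystring_parts": {
        "type": "pattern_capture",
        "preserve_original": true,
        "patterns": [ "([^&]+)" ]
      }
    },
    "tokenizer": {
      "domain_parts": {
        "type": "path_hierarchy",
        "delimiter": ".",
        "reverse": true
      }
    },
    "analyzer": {
      "querystring_analyzer": {
        "tokenizer": "keyword",
        "filter": [ "querystring_parts" ]
      },
      "domain_analyzer": {
        "tokenizer": "domain_parts",
        "filter": [ "lowercase" ]
      }
    }
  }
}
With this, user=who&pw=no is indexed as the tokens user=who&pw=no, user=who and pw=no, and testing.com is indexed as testing.com and com, matching the behaviour described above.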
