Does analyzer prevent fields from highlighting? - elasticsearch

Could you help me with a little problem regarding language-specific analyzers and highlighting in Elasticsearch?
I need to search documents by a query string and highlight the matched strings.
Here is my mapping:
{
"usr": {
"properties": {
"text0": {
"type": "string",
"analyzer": "english"
},
"text1": {
"type": "string"
}
}
}
}
Note that the "english" analyzer is set for the "text0" field, while the "text1" field uses the standard analyzer by default.
In my index there is one document for now:
hits": [{
"_index": "tt",
"_type": "usr",
"_id": "AUxvIPAv84ayQMZV-3Ll",
"_score": 1,
"_source": {
"text0": "highlighted. need to be highlighted.",
"text1": "highlighted. need to be highlighted."
}
}]
Consider the following query:
{
"query": {
"query_string" : {
"query" : "highlighted"
}
},
"highlight" : {
"fields" : {
"*" : {}
}
}
}
I expected each field in the document to be highlighted, but highlighting appeared only in the "text1" field (the one with no analyzer set):
"hits": [{
"_type": "usr",
"_source": {
"text0": "highlighted. need to be highlighted.",
"text1": "highlighted. need to be highlighted."
},
"_score": 0.19178301,
"_index": "tt",
"highlight": {
"text1": [
"<em>highlighted</em>. need to be <em>highlighted</em>."
]
},
"_id": "AUxvIPAv84ayQMZV-3Ll"
}]
Let's consider the following query (I expected "highlight" to match "highlighted" because of the analyzer):
{
"query": {
"query_string" : {
"query" : "highlight"
}
},
"highlight" : {
"fields" : {
"*" : {}
}
}
}
But there were no hits in the response at all (did the english analyzer even work here?):
"hits": {
"hits": [],
"total": 0,
"max_score": null
}
Lastly, consider some curl commands (requests and responses):
curl "http://localhost:9200/tt/_analyze?field=text0" -d "highlighted"
{"tokens":[{
"token":"highlight",
"start_offset":0,
"end_offset":11,
"type":"<ALPHANUM>",
"position":1
}]}
curl "http://localhost:9200/tt/_analyze?field=text1" -d "highlighted"
{"tokens":[{
"token":"highlighted",
"start_offset":0,
"end_offset":11,
"type":"<ALPHANUM>",
"position":1
}]}
We can see that passing the text through the english and standard analyzers produces different results.
Finally, the question: does the analyzer prevent fields from being highlighted? How can I get my fields highlighted during a full-text search?
P.S. I am using Elasticsearch v1.4.4 on my local machine running Windows 8.1.

It has to do with your query. You are using the query_string query without specifying a field, so it is searching the _all field by default.
That is why you're seeing the strange results. Change your query to a multi_match query that searches both fields:
{
"query": {
"multi_match": {
"fields": [
"text1",
"text0"
],
"query": "highlighted"
}
},
"highlight": {
"fields": {
"*": {}
}
}
}
Now highlight results for both fields will be returned in the response.
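If you prefer to keep the query_string query, it also accepts a fields parameter; a minimal sketch (the same request, just with both fields listed explicitly) would be:
{
  "query": {
    "query_string": {
      "query": "highlighted",
      "fields": ["text0", "text1"]
    }
  },
  "highlight": {
    "fields": {
      "*": {}
    }
  }
}
With the fields listed, a query for "highlight" should also hit and highlight text0, because the query text is then analyzed with that field's english analyzer instead of the _all field's standard analyzer.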

Related

Username search in Elasticsearch

I want to implement a simple username search within Elasticsearch. I don't want weighted username searches yet, so I expected it wouldn't be too hard to find resources on how to do this. But in the end I came across NGrams and a lot of outdated Elasticsearch tutorials, and I completely lost track of the best practice for doing this.
This is my current setup, but it is really bad because it matches so many unrelated usernames:
{
"settings": {
"index" : {
"max_ngram_diff": "11"
},
"analysis": {
"analyzer": {
"username_analyzer": {
"tokenizer": "username_tokenizer",
"filter": [
"lowercase"
]
}
},
"tokenizer": {
"username_tokenizer": {
"type": "ngram",
"min_gram": "1",
"max_gram": "12"
}
}
}
},
"mappings": {
"properties": {
"_all" : { "enabled" : false },
"username": {
"type": "text",
"analyzer": "username_analyzer"
}
}
}
}
I am using the newest Elasticsearch and I just want to query similar/exact usernames. I have a user db and users should be able to search for each other, nothing too fancy.
If you want to search for exact usernames, you can use the term query.
The term query returns documents that contain an exact term in a provided field. If you have not defined an explicit index mapping, you need to add .keyword to the field name; this targets the keyword sub-field, which stores the value as a single unanalyzed term, instead of the analyzed text field.
There is no need for an n-gram tokenizer if you want to search for the exact term.
Adding a working example with index data, index mapping, search query, and search result
Index Mapping:
{
"mappings": {
"properties": {
"username": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword"
}
}
}
}
}
}
Index Data:
{
"username": "Jack"
}
{
"username": "John"
}
Search Query:
{
"query": {
"term": {
"username.keyword": "Jack"
}
}
}
Search Result:
"hits": [
{
"_index": "68844541",
"_type": "_doc",
"_id": "1",
"_score": 0.2876821,
"_source": {
"username": "Jack"
}
}
]
Edit 1:
To match for similar terms, you can use the fuzziness parameter along with the match query
{
"query": {
"match": {
"username": {
"query": "someting",
"fuzziness":"auto"
}
}
}
}
Search Result will be
"hits": [
{
"_index": "68844541",
"_type": "_doc",
"_id": "3",
"_score": 0.6065038,
"_source": {
"username": "something"
}
}
]
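For reference, AUTO fuzziness allows 0 edits for terms of 1-2 characters, 1 edit for terms of 3-5 characters, and 2 edits for anything longer; "someting" is 8 characters, so it gets an allowance of 2 edits, and it is a single insertion away from "something", which is why the document above matches.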

Returning documents that match multiple wildcard string queries

I'm new to Elasticsearch and would greatly appreciate help with this.
In the query below I only want the first document to be returned, but instead both documents are returned. How can I write a query that searches for two wildcard strings on two separate fields, but only returns documents that match both?
I think what's being returned currently is score dependent, but I don't need the score.
POST /pr/_doc/1
{
"type": "Type ONE",
"currency":"USD"
}
POST /pr/_doc/2
{
"type": "Type TWO",
"currency":"USD"
}
GET /pr/_search
{
"query": {
"bool": {
"must": [
{
"simple_query_string": {
"query": "Type ON*",
"fields": ["type"],
"analyze_wildcard": true
}
},
{
"simple_query_string": {
"query": "US*",
"fields": ["currency"],
"analyze_wildcard":true
}
}
]
}
}
}
Use the query below, which relies on default_operator: AND; see the query_string documentation for in-depth information and further reading.
Search query
{
"query": {
"query_string": {
"query": "(Type ON*) AND (US*)",
"fields" : ["type", "currency"],
"default_operator" : "AND"
}
}
}
Index your sample docs and the query returns only the expected document:
"hits": [
{
"_index": "multiplequery",
"_type": "_doc",
"_id": "1",
"_score": 2.1823215,
"_source": {
"type": "Type ONE",
"currency": "USD"
}
}
]
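If you'd rather keep the bool structure from the question, a sketch that should also work is to set default_operator to AND on each simple_query_string clause, since the implicit OR between the terms "Type" and "ON*" is what let document 2 match:
{
  "query": {
    "bool": {
      "must": [
        {
          "simple_query_string": {
            "query": "Type ON*",
            "fields": ["type"],
            "analyze_wildcard": true,
            "default_operator": "AND"
          }
        },
        {
          "simple_query_string": {
            "query": "US*",
            "fields": ["currency"],
            "analyze_wildcard": true,
            "default_operator": "AND"
          }
        }
      ]
    }
  }
}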

Elasticsearch - pass fuzziness parameter in query_string

I have a fuzzy query with a customized AUTO:10,20 fuzziness value.
{
"query": {
"match": {
"name": {
"query": "nike",
"fuzziness": "AUTO:10,20"
}
}
}
}
How can I convert it to a query_string query? I tried nike~AUTO:10,20 but it is not working.
It's possible with query_string as well. Let me show it using the same example the OP provided: the match query from the OP and the query_string query below fetch the same document with the same score.
According to this and this in the ES docs, Elasticsearch supports the AUTO:10,20 format, which is shown in my example as well.
Index mapping
{
"mappings": {
"properties": {
"name": {
"type": "text"
}
}
}
}
Index some doc
{
"name" : "nike"
}
Search query using match with fuzziness
{
"query": {
"match": {
"name": {
"query": "nike",
"fuzziness": "AUTO:10,20"
}
}
}
}
And result
"hits": [
{
"_index": "so-query",
"_type": "_doc",
"_id": "1",
"_score": 0.9808292,
"_source": {
"name": "nike"
}
}
]
Query_string with fuzziness
{
"query": {
"query_string": {
"fields": ["name"],
"query": "nike",
"fuzziness": "AUTO:10,20"
}
}
}
And result
"hits": [
{
"_index": "so-query",
"_type": "_doc",
"_id": "1",
"_score": 0.9808292,
"_source": {
"name": "nike"
}
}
]
Lucene syntax only allows you to specify "fuzziness" with the tilde symbol "~", optionally followed by 0, 1 or 2 to indicate the edit distance.
Elasticsearch Query DSL supports a configurable special value for AUTO which then is used to build the proper Lucene query.
You would need to implement that logic on your application side by evaluating the desired edit distance based on the length of your search term, and then use <searchTerm>~<editDistance> in your query_string query.
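For example, a sketch of that approach, assuming your application has decided on an edit distance of 1 for the term nike:
{
  "query": {
    "query_string": {
      "fields": ["name"],
      "query": "nike~1"
    }
  }
}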

elasticsearch: How to rank first appearing words or phrases higher

For example, if I have the following documents:
1. Casa Road
2. Jalan Casa
Say my query term is "cas"... on searching, both documents get the same score. I want the one where "casa" appears earlier (i.e. document 1 here) to rank first in my query output.
I am using an edgeNGram analyzer. I am also using aggregations, so I cannot use the normal sorting that happens after querying.
You can use the Bool Query to boost the items that start with the search query:
{
"bool" : {
"must" : {
"match" : { "name" : "cas" }
},
"should": {
"prefix" : { "name" : "cas" }
}
}
}
I'm assuming the values you gave are in the name field, and that the field is not analyzed. If it is analyzed, maybe look at this answer for more ideas.
The way it works is:
Both documents will match the query in the must clause, and will receive the same score for that. A document won't be included if it doesn't match the must query.
Only the document with the term starting with cas will match the query in the should clause, causing it to receive a higher score. A document won't be excluded if it doesn't match the should query.
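Put together, the full request body might look like the sketch below (my_index is just a placeholder for your own index name):
GET /my_index/_search
{
  "query": {
    "bool": {
      "must": {
        "match": { "name": "cas" }
      },
      "should": {
        "prefix": { "name": "cas" }
      }
    }
  }
}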
This might be a bit more involved, but it should work.
Basically, you need the position of the term within the text itself and also the total number of terms in the text. The actual scoring is computed using scripts, so you need to enable dynamic scripting in the elasticsearch.yml config file:
script.engine.groovy.inline.search: on
This is what you need:
a mapping that uses term_vector set to with_positions, an edgeNGram analyzer, and a sub-field of type token_count:
PUT /test
{
"mappings": {
"test": {
"properties": {
"text": {
"type": "string",
"term_vector": "with_positions",
"index_analyzer": "edgengram_analyzer",
"search_analyzer": "keyword",
"fields": {
"word_count": {
"type": "token_count",
"store": "yes",
"analyzer": "standard"
}
}
}
}
}
},
"settings": {
"analysis": {
"filter": {
"name_ngrams": {
"min_gram": "2",
"type": "edgeNGram",
"max_gram": "30"
}
},
"analyzer": {
"edgengram_analyzer": {
"type": "custom",
"filter": [
"standard",
"lowercase",
"name_ngrams"
],
"tokenizer": "standard"
}
}
}
}
}
test documents:
POST /test/test/1
{"text":"Casa Road"}
POST /test/test/2
{"text":"Jalan Casa"}
the query itself:
GET /test/test/_search
{
"query": {
"bool": {
"must": [
{
"function_score": {
"query": {
"term": {
"text": {
"value": "cas"
}
}
},
"script_score": {
"script": "termInfo=_index['text'].get('cas',_POSITIONS);wordCount=doc['text.word_count'].value;if (termInfo) {for(pos in termInfo){return (wordCount-pos.position)/wordCount}};"
},
"boost_mode": "sum"
}
}
]
}
}
}
and the results:
"hits": {
"total": 2,
"max_score": 1.3715843,
"hits": [
{
"_index": "test",
"_type": "test",
"_id": "1",
"_score": 1.3715843,
"_source": {
"text": "Casa Road"
}
},
{
"_index": "test",
"_type": "test",
"_id": "2",
"_score": 0.8715843,
"_source": {
"text": "Jalan Casa"
}
}
]
}
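Those scores are consistent with the script: with zero-based term positions, "Casa Road" has "cas" at position 0 out of 2 words, so the script adds (2 - 0) / 2 = 1 to the base score, while "Jalan Casa" has it at position 1 and only gets (2 - 1) / 2 = 0.5, which accounts for the 0.5 gap between 1.3715843 and 0.8715843.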

Elastic Search in a complex document

I have the document shown below. I would like to search it, but I could not manage it as I lack the knowledge. Please help. How can I search complex, nested documents like this in Elasticsearch?
My Document
{
"_index": "vehicles",
"_type": "car",
"_id": "e16bd474-fa8e-4858-ab6c-3bbb3d0aa603",
"_version": 1,
"found": true,
"_source": {
"Type": {
"Name": "Mustang"
}
}
}
My Search Query
GET _search
{
"query":{
"filtered": {
"filter": {
"term": {
"Name": "Mustang"
}
}
}
},
"from":0,
"size":10
}
The Standard Analyzer is being applied to your Name field, so the term Mustang is being stored in the index as mustang. Change your query to use "Name": "mustang" and you should get a match.
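You can check this with the _analyze API; assuming default settings, something like the request below should return the single lowercased token mustang:
curl "http://localhost:9200/_analyze?analyzer=standard" -d "Mustang"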
If you only want the doc with "Name": "Mustang", you can use:
"query" : {
"bool" : {
"must" : {
"term" : {
"Name" : "Mustang"
}
}
}
}
There are two issues:
You are using a term filter, which searches for the token Mustang in the index; however, the standard analyzer is applied at index time, so it is actually indexed as mustang.
You are also searching the wrong field. You should use the full path of the field, e.g. Type.Name.
This query should work as expected:
{"query":{ "filtered": { "filter": {
"term": { "Type.Name": "mustang" }
}}}}
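If you'd rather not lowercase the term yourself, a match query is an alternative sketch, since it runs the query text through the field's analyzer before searching:
{
  "query": {
    "match": {
      "Type.Name": "Mustang"
    }
  }
}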
