elasticsearch query string doesn't search by word part

I'm sending this request
curl -XGET 'host/process_test_3/14/_search' -d '{
"query" : {
"query_string" : {
"query" : "\"*cor interface*\"",
"fields" : ["title", "obj_id"]
}
}
}'
And I'm getting the correct result:
{
"took": 12,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 3,
"max_score": 5.421598,
"hits": [
{
"_index": "process_test_3",
"_type": "14",
"_id": "141_dashboard_14",
"_score": 5.421598,
"_source": {
"obj_type": "dashboard",
"obj_id": "141",
"title": "Cor Interface Monitoring"
}
}
]
}
}
But when I want to search by a word part, for example
curl -XGET 'host/process_test_3/14/_search' -d '
{
"query" : {
"query_string" : {
"query" : "\"*cor inter*\"",
"fields" : ["title", "obj_id"]
}
}
}'
I'm getting no results back:
{
"took" : 4,
"timed_out" : false,
"_shards" : {
"total" : 5,
"successful" : 5,
"failed" : 0
},
"hits" : {
"total" : 0,
"max_score" : null,
"hits" : []
}
}
What am I doing wrong?

This is because your title field has probably been analyzed by the standard analyzer (default setting) and the title Cor Interface Monitoring has been tokenized as the three tokens cor, interface and monitoring.
In order to search any substring of words, you need to create a custom analyzer which leverages the ngram token filter in order to also index all substrings of each of your tokens.
You can create your index like this:
curl -XPUT localhost:9200/process_test_3 -d '{
"settings": {
"analysis": {
"analyzer": {
"substring_analyzer": {
"tokenizer": "standard",
"filter": ["lowercase", "substring"]
}
},
"filter": {
"substring": {
"type": "nGram",
"min_gram": 2,
"max_gram": 15
}
}
}
},
"mappings": {
"14": {
"properties": {
"title": {
"type": "string",
"analyzer": "substring_analyzer"
}
}
}
}
}'
Then you can reindex your data. As a result, the title Cor Interface Monitoring will now be tokenized as:
co, cor, or
in, int, inte, inter, interf, etc
mo, mon, moni, etc
so that your second search query will now return the document you expect because the tokens cor and inter will now match.
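To sanity-check the new analyzer (a sketch, assuming the index and substring_analyzer above have been created), you can ask the _analyze API which tokens it produces:
curl -XGET 'localhost:9200/process_test_3/_analyze?analyzer=substring_analyzer&pretty' -d 'Cor Interface Monitoring'
The response should list cor, inter and the other n-grams among the generated tokens.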

+1 to Val's solution.
Just wanted to add something.
Since your query is relatively simple, you may want to have a look at match/match_phrase queries. Match queries do not have the regex parsing that query_string does and are thus lighter.
You can find the details here: https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-match-query.html
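For instance, once the substring analyzer from Val's answer is in place, a plain match query (a sketch reusing the index, type and field names from the question) could replace the wildcard query_string:
curl -XGET 'host/process_test_3/14/_search' -d '{
  "query": {
    "match": {
      "title": {
        "query": "cor inter",
        "operator": "and"
      }
    }
  }
}'
Both tokens are produced by the n-gram filter at index time, so the document should match without any wildcards.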

Related

Query on Elasticsearch on multiple criteria

I have this document in Elasticsearch:
{
"_index" : "master",
"_type" : "_doc",
"_id" : "q9IGdXABeXa7ITflapkV",
"_score" : 0.0,
"_source" : {
"customer_acct" : "64876457056",
"ssn_number" : "123456789",
"name" : "Julie",
"city" : "NY"
}
}
I want to query the master index with the customer_acct and ssn_number to retrieve the entire document. I also want to disable scoring and relevance. I have used the query below:
curl -X GET "localhost/master/_search/?pretty" -H 'Content-Type: application/json' -d'
{
"query": {
"term": {
"customer_acct": {
"value":"64876457056"
}
}
}
}'
I need to include the second criterion, ssn_number, in the query as well. How would I do that? I also want to turn off scoring and relevance; is that possible? I am new to Elasticsearch, so how would I fit the second criterion on ssn_number into the query above?
First, you need to define a proper mapping for your index. Your customer_acct and ssn_number are numeric, but you are storing them as strings. Looking at your sample, long is the right type to store them. Then you can simply use a filter context in your query, since you don't need score and relevance in your result. Read more about filter context in the official ES docs, as well as the snippet below from that link.
In a filter context, a query clause answers the question “Does this
document match this query clause?” The answer is a simple Yes or
No — no scores are calculated. Filter context is mostly used for
filtering structured data,
which is exactly your use-case.
1. Index Mapping
{
"mappings": {
"properties": {
"customer_acct": {
"type": "long"
},
"ssn_number" :{
"type": "long"
},
"name" : {
"type": "text"
},
"city" :{
"type": "text"
}
}
}
}
2. Index sample docs
{
"name": "Smithe John",
"city": "SF",
"customer_acct": 64876457065,
"ssn_number": 123456790
}
{
"name": "Julie",
"city": "NY",
"customer_acct": 64876457056,
"ssn_number": 123456789
}
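To try this locally, the two sample documents could be indexed along these lines (a sketch; the index name master and the document IDs are assumptions):
PUT master/_doc/1
{
  "name": "Smithe John",
  "city": "SF",
  "customer_acct": 64876457065,
  "ssn_number": 123456790
}

PUT master/_doc/2
{
  "name": "Julie",
  "city": "NY",
  "customer_acct": 64876457056,
  "ssn_number": 123456789
}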
3. Main search query to filter without the score
{
"query": {
"bool": {
"filter": [ --> only filter clause
{
"term": {
"customer_acct": 64876457056
}
},
{
"term": {
"ssn_number": 123456789
}
}
]
}
}
}
The above search query gives the result below:
{
"took": 186,
"timed_out": false,
"_shards": {
"total": 1,
"successful": 1,
"skipped": 0,
"failed": 0
},
"hits": {
"total": {
"value": 1,
"relation": "eq"
},
"max_score": 0.0,
"hits": [
{
"_index": "so-master",
"_type": "_doc",
"_id": "1",
"_score": 0.0, --> notice score is 0.
"_source": {
"name": "Smithe John",
"city": "SF",
"customer_acct": 64876457056,
"ssn_number": 123456789
}
}
]
}
}

Elasticsearch query does not work with @ value

When I execute a simple search query on an email address, it does not return anything unless I remove what follows the "@". Why?
I want to run fuzzy and autocomplete queries on the e-mail addresses.
ELASTICSEARCH INFOS:
{
"name" : "ZZZ",
"cluster_name" : "YYY",
"cluster_uuid" : "XXX",
"version" : {
"number" : "6.5.2",
"build_flavor" : "default",
"build_type" : "tar",
"build_hash" : "WWW",
"build_date" : "2018-11-29T23:58:20.891072Z",
"build_snapshot" : false,
"lucene_version" : "7.5.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
MAPPING :
PUT users
{
"mappings":
{
"_doc": { "properties": { "mail": { "type": "text" } } }
}
}
ALL DATA :
[
{ "mail": "firstname.lastname#company.com" },
{ "mail": "john.doe#company.com" }
]
QUERY WORKS :
The term query works, but the mail value is "firstname.lastname@company.com", not "firstname.lastname"...
QUERY :
GET users/_search
{ "query": { "term": { "mail": "firstname.lastname" } }}
RETURN :
{
"took": 7,
"timed_out": false,
"_shards": { "total": 6, "successful": 6, "skipped": 0, "failed": 0 },
"hits": {
"total": 1,
"max_score": 4.336203,
"hits": [
{
"_index": "users",
"_type": "_doc",
"_id": "H1dQ4WgBypYasGfnnXXI",
"_score": 4.336203,
"_source": {
"mail": "firstname.lastname#company.com"
}
}
]
}
}
QUERY DOES NOT WORK :
QUERY :
GET users/_search
{ "query": { "term": { "mail": "firstname.lastname#company.com" } }}
RETURN :
{
"took": 0,
"timed_out": false,
"_shards": { "total": 6, "successful": 6, "skipped": 0, "failed": 0 },
"hits": {
"total": 0,
"max_score": null,
"hits": []
}
}
SOLUTION :
Change the mapping (reindex after mapping changes) to use a uax_url_email-based analyzer for the mail field.
PUT users
{
"settings":
{
"index": { "analysis": { "analyzer": { "mail": { "tokenizer":"uax_url_email" } } } }
},
"mappings":
{
"_doc": { "properties": { "mail": { "type": "text", "analyzer":"mail" } } }
}
}
If you use no other tokenizer for your indexed text field, it will use the standard tokenizer, which tokenizes on the @ symbol [I don't have a source on this, but there's proof below].
If you use a term query rather than a match query, then that exact term will be searched for verbatim in the inverted index (see "Elasticsearch match vs term query").
Your inverted index looks like this
GET users/_analyze
{
"text": "firstname.lastname#company.com"
}
{
"tokens": [
{
"token": "firstname.lastname",
"start_offset": 0,
"end_offset": 18,
"type": "<ALPHANUM>",
"position": 0
},
{
"token": "company.com",
"start_offset": 19,
"end_offset": 30,
"type": "<ALPHANUM>",
"position": 1
}
]
}
To resolve this, you could specify your own analyzer for the mail field, or you could use a match query, which analyzes the search text the same way the indexed text was analyzed.
GET users/_search
{
"query": {
"match": {
"mail": "firstname.lastname#company.com"
}
}
}
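As a quick check (a sketch, assuming the mapping from the SOLUTION above is in place), the uax_url_email tokenizer keeps the whole address as a single token:
GET users/_analyze
{
  "analyzer": "mail",
  "text": "firstname.lastname@company.com"
}
With the whole address stored as one token, the term query on the full address should then find the document.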

Elasticsearch geospatial queries returning no hits

I'm using Kibana to look at a geospatial dataset in Elasticsearch for a feature currently under development. There is an index of positions which contains a field "loc.coordinates", which is a geo_point and has data such as:
loc.coordinates 25.906958000000003, 51.776407000000006
However when I run the following query I get no results:
Query
GET /positions/_search
{
"query": {
"bool" : {
"must" : {
"match_all" : {}
},
"filter" : {
"geo_distance" : {
"distance" : "2000km",
"loc.coordinates" : {
"lat" : 25,
"lon" : 51
}
}
}
}
}
}
Response
{
"took": 12,
"timed_out": false,
"_shards": {
"total": 6,
"successful": 6,
"skipped": 0,
"failed": 0
},
"hits": {
"total": 0,
"max_score": null,
"hits": []
}
}
I'm trying to understand why this is, as there are over 250,000 datapoints in the index, and I'm getting no hits regardless of how big the search area is. When I look at the positions index mapping I see the following:
"loc": {
"type": "nested",
"properties": {
"coordinates": {
"type": "geo_point"
},
"type": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
},
I'm new to Elasticsearch and have been making my way through the documentation, but so far I don't see why my geo queries aren't working as expected. What am I doing wrong?
Your loc field is of type nested, so you need to query that field accordingly with a nested query:
GET /positions/_search
{
"query": {
"bool" : {
"filter" : {
"nested": {
"path": "loc",
"query": {
"geo_distance" : {
"distance" : "2000km",
"loc.coordinates" : {
"lat" : 25,
"lon" : 51
}
}
}
}
}
}
}
}

elasticsearch query string problem with lowercase and underscore

I created a custom analyzer in Elasticsearch. I want to tokenize the words in my defined field, "my_field", only on whitespace, and my search needs to be case-insensitive; for that I used the lowercase filter.
PUT my_index
{
"settings": {
"analysis": {
"analyzer": {
"my_custom_analyzer": {
"type": "custom",
"tokenizer": "whitespace",
"filter": [
"lowercase",
"trim"
]
}
}
}
},
"mappings" : {
"my_type" : {
"properties" : {
"my_field" : {
"type" : "string",
"analyzer" : "my_custom_analyzer"
}
}
}
}
}
After this creation, I analyze my sample data:
POST my_index/_analyze
{
"analyzer": "my_custom_analyzer",
"text": "my_Sample_TEXT"
}
and the output is
{
"tokens": [
{
"token": "my_sample_text",
"start_offset": 0,
"end_offset": 14,
"type": "word",
"position": 0
}
]
}
I have many documents whose "my_field" contains "my_Sample_TEXT", but when I search for this text using query_string, the result returns 0 hits:
GET my_index/_search
{
"query": {
"query_string" : {
"default_field" : "my_type",
"query" : "*my_sample_text*",
"analyzer" : "my_custom_analyzer",
"enable_position_increments": true,
"default_operator": "AND"
}
}
}
My result is:
{
"took": 9,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 0,
"max_score": null,
"hits": []
}
}
I found that this problem happens when my text contains underscores and uppercase letters. Can anyone help me fix this?
Could you try changing the mapping of "my_field" as below:
"my_field" : {
"type" : "string",
"analyzer" : "my_custom_analyzer"
"search_analyzer": "my_custom_analyzer"
}
This is because ES uses the standard analyzer when you do not set one explicitly, and the standard analyzer can create multiple tokens from your underscored text.
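Putting that together with the settings from the question, the full index definition might look like this (a sketch; you would need to delete and recreate my_index, then reindex your data):
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_custom_analyzer": {
          "type": "custom",
          "tokenizer": "whitespace",
          "filter": ["lowercase", "trim"]
        }
      }
    }
  },
  "mappings": {
    "my_type": {
      "properties": {
        "my_field": {
          "type": "string",
          "analyzer": "my_custom_analyzer",
          "search_analyzer": "my_custom_analyzer"
        }
      }
    }
  }
}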

Treatment of special characters in elasticsearch

I use the following analyzer:
curl -XPUT 'http://localhost:9200/sample/' -d '
{
"settings" : {
"index": {
"analysis": {
"analyzer": {
"default": {
"type": "custom",
"tokenizer": "keyword",
"filter": ["trim", "lowercase"]}
}
}
}
}
}'
Then when I try to insert some documents which contain special characters like %, they get converted into hex.
1%2fPJJP3JV2C24iDfEu9XpHBaYxXh%2fdHTbmchB35SDznXO2g8Vz4D7GTIvY54iMiX_149c95f02a8 -> actual value
1%2fPJJP3JV2C24iDfEu9XpHBaYxXh%2fdHTbmchB35SDznXO2g8Vz4D7GTIvY54iMiX_149c95f02a8 -> stored value.
Sample:
curl -XPUT 'http://localhost:9200/sample/strom/1' -d '{
"user" : "user1",
"message" : "1%2fPJJP3JV2C24iDfEu9XpHBaYxXh%2fdHTbmchB35SDznXO2g8Vz4D7GTIvY54iMiX_149c95f02a8"
}'
The problem started occurring only after the data crossed a few million documents. Earlier it used to store the value as is.
Now if I try to search using,
1%2fPJJP3JV2C24iDfEu9XpHBaYxXh%2fdHTbmchB35SDznXO2g8Vz4D7GTIvY54iMiX_149c95f02a8
it is not able to retrieve the document. How do I deal with this? The behavior in converting special characters to hex seems to be non-deterministic.
I am unable to replicate the issue on my local machine.
Can someone explain the mistake I am making?
That is not how the document is tokenized on my end with that analyzer:
curl -XGET localhost:9200/_analyze?tokenizer=keyword\&filters=trim,lowercase\&pretty -d '1%2fPJJP3JV2C24iDfEu9XpHBaYxXh%2fdHTbmchB35SDznXO2g8Vz4D7GTIvY54iMiX_149c95f02a8'
{
"tokens" : [ {
"token" : "1%2fpjjp3jv2c24idfeu9xphbayxxh%2fdhtbmchb35sdznxo2g8vz4d7gtivy54imix_149c95f02a8",
"start_offset" : 0,
"end_offset" : 80,
"type" : "word",
"position" : 1
} ]
}
Reading the analyzer output above, your example text is converted into a single, lowercase-but-otherwise-identical token given the analyzer shown. Are you sure there is no character filter at play? That's what would do the HTML encoding.
You should be able to run it as:
curl -XGET 'localhost:9200/sample/_analyze?field=message' -d 'text to analyze'
Since it was not reproducing with the analyzer directly, I tried to reproduce this on my end by creating an index to test it:
curl -XPUT localhost:9200/indexed-analysis -d '
{
"settings": {
"number_of_shards" : 1,
"number_of_replicas" : 0,
"index": {
"analysis": {
"analyzer": {
"default": {
"type": "custom",
"tokenizer": "keyword",
"filter": ["trim", "lowercase"]
}
}
}
}
},
"mappings": {
"indexed" : {
"properties": {
"text" : { "type" : "string" }
}
}
}
}'
curl -XPUT localhost:9200/indexed-analysis/indexed/1 -d '{
"text" :
"1%2fPJJP3JV2C24iDfEu9XpHBaYxXh%2fdHTbmchB35SDznXO2g8Vz4D7GTIvY54iMiX_149c95f02a8"
}'
curl -XGET localhost:9200/indexed-analysis/indexed/1?pretty
This produced the correct, identical result:
{
"_index" : "indexed-analysis",
"_type" : "indexed",
"_id" : "1",
"_version" : 1,
"found" : true,
"_source":{
"text" : "1%2fPJJP3JV2C24iDfEu9XpHBaYxXh%2fdHTbmchB35SDznXO2g8Vz4D7GTIvY54iMiX_149c95f02a8"
}
}
So, I tried _searching for it, and I found it appropriately.
curl -XGET localhost:9200/indexed-analysis/_search -d '{
"query": {
"match": {
"text": "1%2fPJJP3JV2C24iDfEu9XpHBaYxXh%2fdHTbmchB35SDznXO2g8Vz4D7GTIvY54iMiX_149c95f02a8"
}
}
}'
Result:
{
"took": 5,
"timed_out": false,
"_shards": {
"total": 1,
"successful": 1,
"failed": 0
},
"hits": {
"total": 1,
"max_score": 0.30685282,
"hits": [
{
"_index": "indexed-analysis",
"_type": "indexed",
"_id": "1",
"_score": 0.30685282,
"_source": {
"text": "1%2fPJJP3JV2C24iDfEu9XpHBaYxXh%2fdHTbmchB35SDznXO2g8Vz4D7GTIvY54iMiX_149c95f02a8"
}
}
]
}
}
All of this leads back to three possibilities:
1. Your search analyzer is different from your index analyzer. This is almost always going to produce unexpected results. Using default should force it to be used for both reading and writing, but you can/should verify that it is actually being used (as opposed to default_index or default_search):
curl -XGET localhost:9200/sample/_settings
curl -XGET localhost:9200/sample/_mapping
If you see analyzers being configured in the mapping for the message field, then that should probably be a red flag.
2. You have a character filter messing with the indexed string (and it's probably not doing the same thing for your search string, thus pointing back to #1).
3. There is a bug in the version of Elasticsearch that you are using (hopefully not, but you never know). All of the tests above were done against version 1.3.2.