In Elasticsearch, the match_phrase query matches the full phrase.
The match_phrase_prefix query matches the phrase as a prefix.
For example:
"my_field": "confidence ab"
will match: "confidence above" and "confidence about".
Is there a query for "match phrase any", like the example below:
"my_field": "dence ab"
should match "confidence above" and "confidence about".
Thanks
There are two ways you can do this:
Store the field values as-is in ES by applying the keyword type in the mapping, then do a wildcard search
(OR)
Store the field using an ngram tokenizer, then search your data based on your requirements, with or without a standard or keyword search analyzer (see the sketch after this answer)
Note that wildcard searches are usually inefficient performance-wise.
Please let me know how you get on with the above suggestions so that I can help you further if needed.
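A rough sketch of the ngram approach (the index name, analyzer name, and gram sizes are illustrative choices, not from the question; tune them to your data). Indexing 2-3 character grams that include whitespace, and requiring every gram of the query to match, gives an approximate "contains" search:
PUT ngram-test
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "my_ngram_tokenizer": {
          "type": "ngram",
          "min_gram": 2,
          "max_gram": 3,
          "token_chars": ["letter", "digit", "whitespace"]
        }
      },
      "analyzer": {
        "my_ngram_analyzer": {
          "type": "custom",
          "tokenizer": "my_ngram_tokenizer",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "my_field": {
        "type": "text",
        "analyzer": "my_ngram_analyzer"
      }
    }
  }
}

GET ngram-test/_search
{
  "query": {
    "match": {
      "my_field": {
        "query": "dence ab",
        "operator": "and"
      }
    }
  }
}
Note this only checks that all grams of "dence ab" occur somewhere in the field, not that they are contiguous, so it can return a few extra matches.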
You need to define the mapping of your field as keyword, like below:
PUT test
{
  "mappings": {
    "properties": {
      "name": {
        "type": "keyword"
      }
    }
  }
}
Then search over this field using a wildcard query, like below:
GET test/_search
{
  "query": {
    "wildcard": {
      "name": {
        "value": "*dence ab*"
      }
    }
  }
}
Please let me know if you have any problem with this.
In your case, the simplest solution is to use the query string query or the simple query string query. The latter is less strict about query syntax errors.
First, make sure that your field is mapped with type text. The example below shows the mapping of a field named my_field under test-index (a PUT request to create it follows the block):
{
  "test-index": {
    "mappings": {
      "properties": {
        "my_field": {
          "type": "text"
        }
      }
    }
  }
}
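For reference, the block above is in the form returned by GET test-index/_mapping; a sketch of the PUT request that would create the same mapping (index and field names taken from the example above):
PUT test-index
{
  "mappings": {
    "properties": {
      "my_field": {
        "type": "text"
      }
    }
  }
}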
Then, for searching, use a query string query with wildcards.
{
  "query": {
    "query_string": {
      "fields": ["my_field"],
      "query": "*dence ab*"
    }
  }
}
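One detail worth knowing: query_string splits the query on whitespace, so "*dence ab*" becomes two wildcard terms that are OR'ed by default. If you want both to be required, query_string accepts a default_operator parameter (a sketch, same field as above):
{
  "query": {
    "query_string": {
      "fields": ["my_field"],
      "query": "*dence ab*",
      "default_operator": "AND"
    }
  }
}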
Related
In order to search for the exact string "AGA>23/180#20210212", I've tried the match queries below:
{"query": { "match" : {"mid": "AGA>23/180#20210212"}}}
{"query": {"bool": { "must" : [ { "match" : { "mid": "AGA>23/180#23221"}}]}}}
Elasticsearch matches on "AGA>135/880#20210212" and "AGA>212/880#20210212",
so it seems the values 135 and 212 are treated like wildcards.
If I use a term query instead: {"query": { "term" : {"mid": "AGA>23/180#20210212"}}}
then 0 results are returned.
How can I search for the value "AGA>23/180#20210212" only?
The term query returns documents that contain an exact term in a
provided field.
By default, the standard analyzer is used. It performs grammar-based tokenization of AGA>23/180#20210212 and generates the following tokens:
aga, 23, 180, 20210212
Because of this, the match query also matches "AGA>135/880#20210212" and "AGA>212/880#20210212", since they share the tokens aga and 20210212.
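You can see this for yourself with the _analyze API (a sketch; on very old versions the analyzer and text were passed as query parameters instead of a JSON body):
GET /_analyze
{
  "analyzer": "standard",
  "text": "AGA>23/180#20210212"
}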
To search for the exact term, you need to query the mid.keyword sub-field (created automatically by dynamic mapping if you have not explicitly defined a mapping for the field). That sub-field is of type keyword, so the value is indexed as a single unanalyzed term instead of going through the standard analyzer (notice the ".keyword" after the mid field). Try out the query below:
{
  "query": {
    "term": {
      "mid.keyword": "AGA>23/180#20210212"
    }
  }
}
Or you can change your index mapping to:
{
  "mappings": {
    "properties": {
      "mid": {
        "type": "keyword"
      }
    }
  }
}
What is the best way to query the exact value of a field in Elasticsearch? Say, for example, I have:
profile: {
  email: "test#email.com"
}
How do I check whether a profile contains exactly the email test#email.com?
Whenever you require exact search, you can define the data type of that field as keyword. If you require both partial (analyzed) search and exact search on the same field, you can define a sub-field and refer to that sub-field when an exact search is required.
The field definition then looks as below:
"email": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword"
}
}
}
You can then use a term query on the sub-field to perform an exact search.
{
  "query": {
    "term": {
      "email.keyword": "test#email.com"
    }
  }
}
NOTE: Defining the type as keyword results in a case-sensitive exact search.
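If you need the exact match to be case-insensitive, one option is to attach a lowercase normalizer to the keyword sub-field (a sketch; the index name and normalizer name are made up for illustration):
PUT profiles
{
  "settings": {
    "analysis": {
      "normalizer": {
        "lowercase_normalizer": {
          "type": "custom",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "email": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "normalizer": "lowercase_normalizer"
          }
        }
      }
    }
  }
}
With this mapping, a term query on email.keyword matches regardless of the letter case used at index or search time.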
I am using ES version 2.3. I have indexed some documents which have a structure like this:
{
  "BUSINESSLINE": "ABC CORP",
  "NAME": "John"
  ....
  ...
}
The field BUSINESSLINE is a not_analyzed string.
The problem is that this query returns results:
{
  "query": {
    "multi_match": {
      "query": "ABC",
      "fields": [ "_all" ]
    }
  }
}
But this one does not (it shows no hits!):
{
  "query": {
    "multi_match": {
      "query": "ABC",
      "fields": [ "BUSINESSLINE " ]
    }
  }
}
Any help is appreciated. I tried to Google and research, but I am not able to find any reason for this.
Thanks!
Yes, you are correct. The first query matches the document because of the _all field, which is a big string constructed by concatenating all the fields with a space separator. It is also analyzed, which is why your query matches. The BUSINESSLINE field itself is not_analyzed, so it is indexed as the single term "ABC CORP", and a search for just "ABC" against it finds nothing.
You can read more about it here.
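For completeness, a sketch of a query that should hit the not_analyzed field directly: a term query with the full, exact value (field name and value taken from the question above):
{
  "query": {
    "term": {
      "BUSINESSLINE": "ABC CORP"
    }
  }
}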
I am trying to create a search that will return exactly what I requested.
For instance, let's say I have two documents with a field named 'Val'.
The first doc has a value of 'a - Copy'; the second document has 'a - Copy (2)'.
My goal is to search for exactly the value 'a - Copy' and find only the first document in my results, not both of them with different similarity rankings.
When I try most of the usual queries, like:
GET test/_search
{
  "query": {
    "match": {
      "Val": {
        "query": "a - copy",
        "type": "phrase"
      }
    }
  }
}
or:
GET /test/doc/_search
{
  "query": {
    "query_string": {
      "default_field": "Val",
      "query": "a - copy"
    }
  }
}
I get both documents every time.
There is very good documentation for finding exact values in ES:
https://www.elastic.co/guide/en/elasticsearch/guide/current/_finding_exact_values.html
It shows how to use the term filter and also mentions problems with analyzed fields.
In a nutshell, you need to run a term filter like this (I've put your values in):
GET /test/doc/_search
{
  "query": {
    "filtered": {
      "query": {
        "match_all": {}
      },
      "filter": {
        "term": {
          "Val": "a - copy"
        }
      }
    }
  }
}
However, this doesn't work with analyzed fields; you won't get any results.
To prevent this from happening, we need to tell Elasticsearch that this field contains an exact value by setting it to be not_analyzed.
There are multiple ways to achieve that, e.g. custom field mappings (see the sketch below).
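A sketch of such a mapping, using the pre-5.x string / not_analyzed syntax implied by the filtered query above (the index and type names are taken from your search URL; the mapping has to be in place before indexing, or the index reindexed):
PUT /test
{
  "mappings": {
    "doc": {
      "properties": {
        "Val": {
          "type": "string",
          "index": "not_analyzed"
        }
      }
    }
  }
}
With Val stored as a single untouched term, the term filter matches only the first document. Note the value is then case-sensitive, so you would search for 'a - Copy', not 'a - copy'.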
Yes, you are getting that because your field is most likely analyzed and split into tokens.
You need an analyzer similar to this one:
"custom_keyword_analyzer": {
"type": "custom",
"tokenizer": "keyword",
"filter": "lowercase"
}
which uses the keyword tokenizer and the lowercase filter (I noticed you indexed upper-case letters but expect to search with lower-case letters).
Then use a term filter to search your documents.
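A sketch of how that might be wired together, using the pre-5.x string mapping and the index/type/field names from the question (settings and mappings would normally go in at index-creation time):
PUT /test
{
  "settings": {
    "analysis": {
      "analyzer": {
        "custom_keyword_analyzer": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": "lowercase"
        }
      }
    }
  },
  "mappings": {
    "doc": {
      "properties": {
        "Val": {
          "type": "string",
          "analyzer": "custom_keyword_analyzer"
        }
      }
    }
  }
}

GET /test/doc/_search
{
  "query": {
    "filtered": {
      "filter": {
        "term": {
          "Val": "a - copy"
        }
      }
    }
  }
}
Because the whole value is kept as one lowercased token, 'a - Copy' is indexed as "a - copy" and the term filter above matches only that document.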
I am attempting to run a query where I filter on a specific term. This is the query I am attempting to run:
{
  "query": {
    "filtered": {
      "filter": {
        "term": {
          "tags": "sports"
        }
      }
    }
  },
  "sort": {
    "timestamp": "desc"
  }
}
When I run the same query with a different field (ex: "type": "blog_post") it works, so I am confident in the syntax.
I checked to make sure that tags was properly mapped (I checked at "http://server_name/index/_mapping") and it was.
I also checked that there are documents with "tags" : "sports" in Elasticsearch.
Any ideas what the issue could be? It is only that field, all others work, and "tags" is indexed.
What mapping/analyzer have you defined for the field "tags"? If the field is analyzed (for example with a language analyzer such as english, which stems tokens), the indexed token may be "sport" rather than "sports".
If you do a term search or term filter, the input is not analyzed, and Elasticsearch looks for an exact match against the indexed tokens, so a search for the term "sports" won't match.
You should either change the mapping for tags to "not_analyzed" or change the search query to something other than term, such as a query string query (see the sketch below).
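A sketch of the query-string alternative, reusing the field and sort from the question (the query text is analyzed with the field's analyzer, so it is compared against the same tokens that were indexed):
{
  "query": {
    "query_string": {
      "default_field": "tags",
      "query": "sports"
    }
  },
  "sort": {
    "timestamp": "desc"
  }
}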
Based on the use case you've described, I assume tags is mapped as an array of values. That said, the term filter can only be used for exact matches.
What I would try is the terms filter or the exists filter instead, changing the filter to this:
"terms" : { "tags" : ["sports"] }
or this
"exists" : { "tags" : "sports" }