How do I write a search query in Elasticsearch when the field name is unknown

I want to use Elasticsearch to query documents that have the matching value "dc883c6f24776ad6ce1f86c41b5cf87cfb784e85".
Please find the structure of the source document attached in the image. Is it possible to query this with something like the snippet below, using a wildcard (*) when the field name is unknown to us?
"query" : {
  "constant_score" : {
    "filter" : {
      "terms" : {
        "commitId.persistence-statics-service.*" : ["dc883c6f24776ad6ce1f86c41b5cf87cfb784e85"]
      }
    }
  }
}

You can use query_string or multi_match to specify a wildcard in the field name part. I think multi_match is simpler, though:
{
  "query": {
    "constant_score": {
      "filter": {
        "multi_match": {
          "query": "dc883c6f24776ad6ce1f86c41b5cf87cfb784e85",
          "fields": [
            "commitId.persistence-statics-service.*"
          ],
          "analyzer": "keyword"
        }
      }
    }
  }
}

Try using query_string. It is very powerful for partial search in ES; check the answer I linked:
{
  "query": {
    "query_string": {
      "fields": ["commitId.persistence-statics-service.*"],
      "query": "*dc883c6f24776ad6ce1f86c41b5cf87cfb784e85*"
    }
  }
}
query_string is more powerful than multi_match (link2).

Related

Multi-query match_phrase_prefix elasticsearch

I would like to query 2 different prefixes for the same field. The code below works exactly how I would like it to when working with one field:
GET /logstash-*/_search
{
  "query": {
    "match_phrase_prefix" : {
      "type" : {
        "query" : "job-source"
      }
    }
  }
}
I could not find in the docs how to do this with two queries (I did find how to search in multiple fields). I have tried a boolean should and the snippet below, but neither gives me the results I am looking for.
GET /logstash-*/_search
{
  "query": {
    "match_phrase_prefix" : {
      "type" : {
        "query" : ["job-source","job-find"]
      }
    }
  }
}
How do I query for only documents that have type:job-source or type:job-find as the prefix?
Thank you in advance,
You can combine two match_phrase_prefix queries inside a bool should clause and set minimum_should_match to 1.
Sample Query:
{
  "query": {
    "bool": {
      "should": [
        {
          "match_phrase_prefix": {
            "type": "job-source"
          }
        },
        {
          "match_phrase_prefix": {
            "type": "job-find"
          }
        }
      ],
      "minimum_should_match": 1
    }
  }
}

_all enabled to true and keyword token analyzer does not return any results

I am using the keyword analyzer in my Elasticsearch index settings, as given below:
{
  "settings" : {
    "analysis" : {
      "analyzer" : {
        "default" : {
          "type" : "keyword"
        }
      }
    }
  }
}
My order mapping is here:
{
  "order": {
    "_all": { "enabled" : true },
    "properties": {
      "OrderData": {
        "properties": {
          "BusinessRuleData": {
            .........
}
So now when I query using the following JSON:
{
  "query": {
    "bool": {
      "must": [
        {
          "query_string": {
            "default_field": "_all",
            "query": "SomeText"
          }
        }
      ]
    }
  }
}
I don't get any results for this, whereas if I change my analyzer to "standard" then the _all search works fine. Any answers are appreciated.
I think the _all field is probably indexed as a single token, so you would have to search for the whole content as one string, which does not make sense. The point is that the standard tokenizer has to be used for _all indexing.
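One way to get there, sketched below, is to keep the keyword analyzer as the index default but give _all its own standard analyzer in the mapping. This assumes a pre-5.x Elasticsearch where _all still exists and accepts an analyzer setting; the exact mapping is my assumption, not something from the original answer:
{
  "settings": {
    "analysis": {
      "analyzer": {
        "default": { "type": "keyword" }
      }
    }
  },
  "mappings": {
    "order": {
      "_all": {
        "enabled": true,
        "analyzer": "standard"
      }
    }
  }
}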

Elastic search filter

I am new to Elasticsearch. Please help me find the filter/query to be written to match the exact records using the Java API.
Below are the MongoDB records. I need to get both records matching the word 'Jerry' using Elasticsearch.
{
  "searchcontent" : [
    {
      "key" : "Jerry",
      "sweight" : "1"
    },
    {
      "key" : "Kate",
      "sweight" : "1"
    }
  ],
  "contentId" : "CON_4",
  "keyword" : "TEST",
  "_id" : ObjectId("55ded619e4b0406bbd901a47")
},
{
  "searchcontent" : [
    {
      "key" : "TOM",
      "sweight" : "2"
    },
    {
      "key" : "Kruse",
      "sweight" : "2"
    }
  ],
  "contentId" : "CON_3",
  "keyword" : "Jerry",
  "_id" : ObjectId("55ded619e4b0406ccd901a47")
}
If you would like to search in all the fields, then you can just do a match query on _all:
POST <index name>/<type name>/_search
{
  "query": {
    "match" : {
      "_all" : "Jerry"
    }
  }
}
This searches for 'Jerry' in all the fields.
A multi_match query is what you need to search across multiple fields. The query below will search for the word "jerry" in both the fields "searchcontent.key" and "keyword", which is what you want.
POST <index name>/<type name>/_search
{
  "query": {
    "multi_match": {
      "query": "jerry",
      "fields": [
        "searchcontent.key",
        "keyword"
      ]
    }
  }
}
There is no single solution; it depends on how you map your data in Elasticsearch and what you are indexing. You can check the index settings with:
GET /intu/_settings
You can use a query_string query.
If you don't need to combine filters, you can remove the bool and should.
From the documentation: "The bool query takes a more-matches-is-better approach, so the score from each matching must or should clause will be added together to provide the final _score for each document."
For example:
GET /yourstaff/_search
{
  "query": {
    "filtered": {
      "query": {
        "bool": {
          "should": {
            "query_string": {
              "query": "jerry"
            }
          }
        }
      }
    }
  }
}
Take a look at the documentation:
Query string
Term vs full-text search
Bool query
Use Sense to figure out what results you want to have.
Using a filter is a better option, as it caches the results:
{
  "query": {
    "bool": {
      "should": [
        {
          "term": {
            "searchcontent.key": "jerry"
          }
        },
        {
          "term": {
            "keyword": "jerry"
          }
        }
      ]
    }
  }
}
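If you want those term clauses to actually run in filter context, where Elasticsearch can cache them and skip scoring, one option is to wrap them in a constant_score filter. This is only a sketch of that idea, assuming Elasticsearch 2.x-or-later syntax; it is not part of the original answer:
{
  "query": {
    "constant_score": {
      "filter": {
        "bool": {
          "should": [
            { "term": { "searchcontent.key": "jerry" } },
            { "term": { "keyword": "jerry" } }
          ]
        }
      }
    }
  }
}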
A suggested read for better search: https://www.elastic.co/blog/found-optimizing-elasticsearch-searches

ElasticSearch multi_match query over multiple fields with Fuzziness

How can I add fuzziness to a multi_match query? So if someone searches for 'basball', it would still find 'baseball' articles. Currently my query looks like this:
POST /newspaper/articles/_search
{
  "query": {
    "function_score": {
      "query": {
        "multi_match": {
          "query": "baseball",
          "type": "phrase",
          "fields": [
            "subject^3",
            "section^2.5",
            "article^2",
            "tags^1.5",
            "notes^1"
          ]
        }
      }
    }
  }
}
One option I was looking at is to do something like this; I just don't know if this is the best option. It's important to keep the sorting based on the scoring:
"query" : {
  "query_string" : {
    "query" : "subject:basball^3 section:basball^2.5 article:basball^2",
    "fuzzy_prefix_length" : 1
  }
}
Suggestions?
To add fuzziness to a multi_match query, you need to add the fuzziness property, as described here:
{
  "query": {
    "function_score": {
      "query": {
        "multi_match": {
          "query": "baseball",
          "type": "phrase",
          "fields": [
            "subject^3",
            "section^2.5",
            "article^2",
            "tags^1.5",
            "notes^1"
          ],
          "fuzziness" : "AUTO",
          "prefix_length" : 2
        }
      }
    }
  }
}
Please note that prefix_length is explained in the docs as:
The number of initial characters which will not be "fuzzified". This helps to reduce the number of terms which must be examined. Defaults to 0.
To check the possible values of fuzziness, please visit the ES docs.
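As an illustration (my own sketch, not from the original answer), fuzziness can also be an explicit maximum edit distance such as 1 or 2. Here it is applied to a plain multi_match with the default best_fields type, which accepts fuzziness; the field list is just an example:
{
  "query": {
    "multi_match": {
      "query": "basball",
      "fields": ["subject^3", "article^2"],
      "fuzziness": 2,
      "prefix_length": 2
    }
  }
}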

Elasticsearch: how to disable scoring on a field?

I am new to Elasticsearch, so please forgive me if the answer is obvious.
Here is what I have for the mapping of the field in question:
"condition" : { "type" : "string", "store" : "no", "index": "not_analyzed", "omit_norms" : "true" }
I need to search on this field, but I need a 100% string match (no stemming, etc.) on a sub-string (blank-separated). An example of this field in a document is as follows:
{
  "condition": "abc xyz"
}
An example query is:
/_search?q=condition:xyz
Is the above mapping correct? I also used omit_norms (true). Is this the correct thing to do in my case?
How can I disable scoring on this field? Can I do it in the mapping? What is the best way of doing it? (Actually I need to disable scoring on more than one field; I do have other fields that need scoring.)
Thanks and regards!
Using omit_norms: true means the length of the field will not be taken into consideration for scoring, because Elasticsearch won't index the norms information. So if you don't want to use scoring, that is a good thing to do, as it will also save you some disk space.
If you're not interested in scoring in your queries use a filtered query:
{
  "query": {
    "filtered": {
      "query": {
        "match_all": {}
      },
      "filter": {
        "bool": {
          "must": {
            "term": {
              "condition": "abc xyz"
            }
          }
        }
      }
    }
  }
}
The newer syntax for a filtered query is:
{
  "query": {
    "bool": {
      "must": {
        "match_all": {}
      },
      "filter": {
        "term": {
          "condition": "abc"
        }
      }
    }
  }
}
