Elasticsearch: filter fields by data

I need to use an API with an Elasticsearch backend. I don't have any control over that server, but I need to reduce the fields I get in the responses. I already figured out how to use "filter_path" and how to include specific fields by their name. But I couldn't find anything about how to exclude specific fields based on their data. I already tried nested terms, but with no positive results.
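For context, filter_path works on the URL, independently of the query body, and only trims the response envelope; for example (the index name myindex is just a placeholder):

GET /myindex/_search?filter_path=hits.hits._source
{
  "query": { "match_all": {} }
}

What it cannot do is drop individual array entries based on their values, which is what the rest of this question is about.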
My current JSON query looks like this:
{
  "_source" : ["field1", "field2", "field3.1", "field3.2"],
  "query": {
    "bool" : {
      "must" : {
        "query_string" : {
          "query" : "*"
        }
      }
    }
  }
}
And the current output looks like this:
{
  "hits": {
    "hits": [
      {
        "_source": {
          "field1": "data",
          "field2": "data",
          "field3": [
            {
              "field3.1": "SPECIALDATA",
              "field3.2": "SPECIALDATA"
            },
            {
              "field3.1": "data",
              "field3.2": "data"
            },
            {
              "field3.1": "data",
              "field3.2": "data"
            }
          ]
        }
      }
    ]
  }
}
I only want the field3 entries that have SPECIALDATA in field3.1 and field3.2; all other field3.1/field3.2 entries should be dropped.
Is this possible? And if so, how do I have to change my query to achieve this?
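One possible approach, assuming the server maps field3 as a nested type (only the mapping can confirm this): a nested query with inner_hits returns, for each hit, only the nested entries that actually matched, so filtering on SPECIALDATA would surface just those entries. The field paths below follow the question's example and may need adjusting to the real mapping:

{
  "_source": ["field1", "field2"],
  "query": {
    "nested": {
      "path": "field3",
      "query": {
        "match": { "field3.field3.1": "SPECIALDATA" }
      },
      "inner_hits": {}
    }
  }
}

The matching entries then come back in each hit's inner_hits section rather than in _source. If field3 is a plain object array instead of nested, inner_hits is not available and the entries cannot be filtered server-side this way.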


Elasticsearch: How to filter out results with a specific word in a value

I need to add a parameter to my search that filters out results containing a specific word in a value. The query searches user history records, and each record contains a url key. I need to filter out /history and any other url containing that string.
Here's my current query:
GET /user_log/_search
{
  "size" : 50,
  "query": {
    "match": {
      "user_id": 56678
    }
  }
}
Here's an example of a record, boiled down to just the value we're looking at:
"_source": {
  "url": "/history?page=2&direction=desc"
},
How can the parameters of the search be changed to filter out this result?
You can use the must_not clause of the bool query in Elasticsearch.
If your url field is of type keyword, you can use the query below; the prefix query matches every url starting with /history, and must_not excludes those documents:
{
  "query": {
    "bool": {
      "must": {
        "match": {
          "user_id": 56678
        }
      },
      "must_not": {          <--- note the must_not clause
        "prefix": {
          "url": "/history"
        }
      }
    }
  }
}
I found a way to solve my specific issue. Instead of filtering on the url I'm filtering on a different value. Here's what I'm using now:
{
  "size" : 50,
  "query": {
    "bool" : {
      "must" : {
        "match" : { "user_id" : 56678 }
      },
      "must_not": {
        "match" : { "controller": "History" }
      }
    }
  }
}
I'm still going to leave this question open for a while to see if anyone has other ways of solving the original problem.
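For the original url-based exclusion, one option (assuming url is a keyword field) is a wildcard query under must_not, which drops every document whose url contains /history anywhere; note that a pattern with a leading wildcard can be slow on large indices:

{
  "size": 50,
  "query": {
    "bool": {
      "must": {
        "match": { "user_id": 56678 }
      },
      "must_not": {
        "wildcard": { "url": "*/history*" }
      }
    }
  }
}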

How to get the best matching document in Elasticsearch?

I have an index where I store all the places used in my documents. I want to use this index to see whether the user mentioned one of the places in the text query I receive.
Unfortunately, I have two documents whose names are similar enough to trick Elasticsearch scoring: Stockholm and Stockholm-Arlanda.
My test phrase is intyg stockholm, and this is the query I use to get the best matching document:
{
  "size": 1,
  "query": {
    "bool": {
      "should": [
        {
          "match": {
            "name": "intyg stockholm"
          }
        }
      ],
      "must": [
        {
          "term": {
            "type": {
              "value": "4"
            }
          }
        },
        {
          "terms": {
            "name": [
              "intyg",
              "stockholm"
            ]
          }
        },
        {
          "exists": {
            "field": "data.coordinates"
          }
        }
      ]
    }
  }
}
As you can see, I use a terms query to find the interesting documents, and a match query in the should part of the root bool query so that scoring puts the document I want (Stockholm) on top.
This worked locally (where I run ES in a container) but broke when I started testing on a cluster hosted in AWS (with the exact same dataset). I found an explanation of what happens: scores are computed per shard, so term statistics can differ between shards, and adding the search_type argument actually fixes the issue.
Since that workaround is best not used in production, I'm looking for other ways to get the expected result.
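For reference, the workaround is a standard query-string parameter (the index name places is illustrative); it makes Elasticsearch gather global term statistics before scoring, at the cost of an extra round-trip per search:

GET /places/_search?search_type=dfs_query_then_fetch
{
  ... same query as above ...
}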
Here are the two documents:
// Stockholm
{
  "type" : 4,
  "name" : "Stockholm",
  "id" : "42",
  "searchableNames" : [
    "Stockholm"
  ],
  "uniqueId" : "Place:42",
  "data" : {
    "coordinates" : "59.32932349999999,18.0685808"
  }
}
// Stockholm-Arlanda
{
  "type" : 4,
  "name" : "Stockholm-Arlanda",
  "id" : "1832",
  "searchableNames" : [
    "Stockholm-Arlanda"
  ],
  "uniqueId" : "Place:1832",
  "data" : {
    "coordinates" : "59.6497622,17.9237807"
  }
}
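One way to stop depending on cross-shard term statistics altogether, sketched under the assumption that name can be given a keyword sub-field with a lowercase normalizer (name.exact below is a hypothetical sub-field name): boost an exact whole-name match in the should block, so Stockholm always outranks the partial match Stockholm-Arlanda whenever the query contains the token stockholm:

{
  "size": 1,
  "query": {
    "bool": {
      "should": [
        { "match": { "name": "intyg stockholm" } },
        {
          "terms": {
            "name.exact": [ "intyg", "stockholm" ],
            "boost": 10
          }
        }
      ],
      "must": [
        { "term": { "type": { "value": "4" } } },
        { "terms": { "name": [ "intyg", "stockholm" ] } },
        { "exists": { "field": "data.coordinates" } }
      ]
    }
  }
}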

Enriching the data in Elasticsearch

We will be ingesting data into an index (Index1); however, one of the fields in the document (field1) is an ENUM value, which needs to be converted into a string value using a lookup through a REST API call.
The REST API call returns a JSON response like this, which has string values for all the ENUMs:
{
  "values" : {
    "ENUMVALUE1" : "StringValue1",
    "ENUMVALUE2" : "StringValue2"
  }
}
I am thinking of building an index from this response document and using that for the lookup.
The incoming document has field1 set to ENUMVALUE1 or ENUMVALUE2 (only one of them), and we eventually want to save StringValue1 or StringValue2 under field1, not ENUMVALUE1.
I went through the documentation of the enrich processor, but I am not sure whether that is the correct approach for this scenario.
When forming the match enrich policy, I am not sure how match_field and enrich_fields should be configured.
Could you please advise whether this can be done in Elasticsearch and, if yes, what options I have if the above is not an optimal approach.
OK, 150-200 enums might not be enough to warrant an enrich index, but here is a potential solution.
You first need to build the source index containing all the enum mappings; it would look like this:
POST enums/_bulk
{"index":{}}
{"enum_id": "ENUMVALUE1", "string_value": "StringValue1"}
{"index":{}}
{"enum_id": "ENUMVALUE2", "string_value": "StringValue2"}
Then you need to create an enrich policy out of this index:
PUT /_enrich/policy/enum-policy
{
  "match": {
    "indices": "enums",
    "match_field": "enum_id",
    "enrich_fields": [
      "string_value"
    ]
  }
}
POST /_enrich/policy/enum-policy/_execute
Once it's built (with 200 values it should take a few seconds), you can create your ingest pipeline using an enrich processor:
PUT _ingest/pipeline/enum-pipeline
{
  "description": "Enum enriching pipeline",
  "processors": [
    {
      "enrich" : {
        "policy_name": "enum-policy",
        "field" : "field1",
        "target_field": "tmp"
      }
    },
    {
      "set": {
        "if": "ctx.tmp != null",
        "field": "field1",
        "value": "{{tmp.string_value}}"
      }
    },
    {
      "remove": {
        "if": "ctx.tmp != null",
        "field": "tmp"
      }
    }
  ]
}
Testing this pipeline, we get this:
POST _ingest/pipeline/enum-pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "field1": "ENUMVALUE1"
      }
    },
    {
      "_source": {
        "field1": "ENUMVALUE4"
      }
    }
  ]
}
Results =>
{
  "docs" : [
    {
      "doc" : {
        "_source" : {
          "field1" : "StringValue1" <--- value has been replaced
        }
      }
    },
    {
      "doc" : {
        "_source" : {
          "field1" : "ENUMVALUE4" <--- value has NOT been replaced
        }
      }
    }
  ]
}
For the sake of completeness, I'm sharing the other solution, without an enrich index, so you can test both and use whichever makes the most sense for you.
In this second option, we simply use an ingest pipeline with a script processor whose params contain a map of your enums. field1 is replaced by whatever value is mapped to the enum value it contains, or keeps its value if there's no corresponding enum value.
PUT _ingest/pipeline/enum-pipeline
{
  "description": "Enum enriching pipeline",
  "processors": [
    {
      "script": {
        "source": """
          ctx.field1 = params.getOrDefault(ctx.field1, ctx.field1);
        """,
        "params": {
          "ENUMVALUE1": "StringValue1",
          "ENUMVALUE2": "StringValue2",
          ... // add all your enums here
        }
      }
    }
  ]
}
Testing this pipeline, we get this:
POST _ingest/pipeline/enum-pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "field1": "ENUMVALUE1"
      }
    },
    {
      "_source": {
        "field1": "ENUMVALUE4"
      }
    }
  ]
}
Results =>
{
  "docs" : [
    {
      "doc" : {
        "_source" : {
          "field1" : "StringValue1" <--- value has been replaced
        }
      }
    },
    {
      "doc" : {
        "_source" : {
          "field1" : "ENUMVALUE4" <--- value has NOT been replaced
        }
      }
    }
  ]
}
So both solutions would work for your case; you just need to pick the one that is the best fit. Just know that in the first option, if your enums change, you'll need to update your source index and re-execute the enrich policy, while in the second case, you only need to modify the params map of your pipeline.
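Either way, once the pipeline exists you can apply it per indexing request or make it the default for the index (index1 stands in for the Index1 mentioned in the question):

PUT index1/_doc/1?pipeline=enum-pipeline
{
  "field1": "ENUMVALUE1"
}

PUT index1/_settings
{
  "index.default_pipeline": "enum-pipeline"
}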

Elasticsearch 5.1: applying additional filters to the "more like this" query

I'm building a search engine on top of emails. MLT is great at finding emails with similar bodies or subjects, but sometimes I want to do something like: show me the emails with content similar to this one, but only from joe#yahoo.com and only within this date range. This seems to have been possible with ES 2.x, but 5.x doesn't seem to allow filtering on fields other than the one being considered for similarity. Am I missing something?
I still can't figure out how to do what I described. Imagine I have an index of emails with two fields for the sake of simplicity: body and sender. I know that to find messages restricted to a sender, the posted query would be something like:
{
  "query": {
    "bool": {
      "filter": {
        "bool": {
          "must": [
            {
              "term": {
                "sender": "mike#foo.com"
              }
            }
          ]
        }
      }
    }
  }
}
Similarly, to find messages that are similar to a single hero message based on the contents of the body, I can issue a query like:
{
  "query": {
    "more_like_this": {
      "fields" : ["body"],
      "like" : [{
        "_index" : "foo",
        "_type" : "email",
        "_id" : "a1af33b9c3dd436dabc1b7f66746cc8f"
      }],
      "min_doc_freq" : 2,
      "min_word_length" : 2,
      "max_query_terms" : 12,
      "include" : "true"
    }
  }
}
Both of these queries specify the results by adding clauses inside the query clause of the root object. However, any way I try to put them together gives me parse exceptions. I can't find any examples or documentation that would say: give me emails that are similar to this hero message, but only from mike#foo.com.
You're almost there; you can combine them both using a bool/filter query like this, i.e. turn your filter into an array and put both constraints in it:
{
  "query": {
    "bool": {
      "filter": [
        {
          "term": {
            "sender": "mike#foo.com"
          }
        },
        {
          "more_like_this": {
            "fields": [
              "body"
            ],
            "like": [
              {
                "_index": "foo",
                "_type": "email",
                "_id": "a1af33b9c3dd436dabc1b7f66746cc8f"
              }
            ],
            "min_doc_freq": 2,
            "min_word_length": 2,
            "max_query_terms": 12,
            "include": "true"
          }
        }
      ]
    }
  }
}
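To also cover the date-range part of the original question, append a range clause to the same filter array (the date field name below is illustrative; use whatever your mapping defines):

{
  "query": {
    "bool": {
      "filter": [
        { "term": { "sender": "mike#foo.com" } },
        {
          "range": {
            "date": {
              "gte": "2017-01-01",
              "lte": "2017-06-30"
            }
          }
        },
        { "more_like_this": { ... same as above ... } }
      ]
    }
  }
}

If you want the similarity score to drive the ranking, move the more_like_this clause into a must block and keep the term and range clauses in filter.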

Highlight not working along with term lookup filter

I'm new to Elasticsearch and have started exploring it over the past few days. My requirement is to get the matched keywords highlighted.
I have 2 indices:
http://localhost:9200/lookup/type/1?pretty
Output
{
  "_index" : "lookup",
  "_type" : "type",
  "_id" : "1",
  "_version" : 1,
  "found" : true,
  "_source" : {
    "terms" : ["Apache Storm", "Kafka", "MR", "Pig", "Hive", "Hadoop", "Mahout"]
  }
}
And another one as follows:
http://localhost:9200/skillsetanalyzer/resume/_search?fields=keySkills
Output
{"took":19,"timed_out":false,"_shards":{"total":5,"successful":5,"failed":0},"hits":{"total":3,"max_score":1.0,"hits":[{"_index":"skillsetanalyzer","_type":"resume","_id":"1","_score":1.0,"fields":{"keySkills":["Core Java","J2EE","Struts 1.x","SOAP based Web Services using JAX-WS","Maven","Ant","JMS","Apache Storm","Kafka","RDBMS (MySQL","Tomcat","Weblogic","Eclipse","Toad","TIBCO product Suite (Administrator","Business Work","Designer","EMS)","CVS","SVN"]}},
And the below query returns the correct results but does not highlight the matched keywords:
curl -XGET 'localhost:9200/skillsetanalyzer/resume/_search?pretty' -d '
{
  "query": {
    "filtered": {
      "filter": {
        "terms": {
          "keySkills": {
            "index": "lookup",
            "type": "type",
            "id": "1",
            "path": "terms"
          },
          "_cache_key": "1"
        }
      }
    }
  },
  "highlight": {
    "fields": {
      "keySkills": {}
    }
  }
}'
Field "KeySkills" is not analyzed and its type is String. I'm not able to make out what is wrong with the
query.
Please help in providing the necessary pointers.
~Shweta
Highlighting works against the query; you are just filtering the results. You need to specify a highlight_query along with your filters, like this:
{
  "query": {
    "filtered": {
      "filter": {
        "terms": {
          "keySkills": [
            "MR", "Pig", "Hive"
          ]
        }
      }
    }
  },
  "highlight": {
    "fields": {
      "keySkills": {
        "highlight_query": {
          "terms": {
            "keySkills": [
              "MR", "Pig", "Hive"
            ]
          }
        }
      }
    }
  }
}
I hope this helps.
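If you'd rather keep the terms lookup instead of inlining the list, the same lookup definition can in principle be reused inside highlight_query, since it is an ordinary terms query (a sketch, not verified against every version):

{
  "query": {
    "filtered": {
      "filter": {
        "terms": {
          "keySkills": {
            "index": "lookup",
            "type": "type",
            "id": "1",
            "path": "terms"
          }
        }
      }
    }
  },
  "highlight": {
    "fields": {
      "keySkills": {
        "highlight_query": {
          "terms": {
            "keySkills": {
              "index": "lookup",
              "type": "type",
              "id": "1",
              "path": "terms"
            }
          }
        }
      }
    }
  }
}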
