Is there a way in Elasticsearch to cast a string to a long value at query time?
I have something like this in my document:
"attributes": [
{
"key": "age",
"value": "23"
},
{
"key": "name",
"value": "John"
}
],
I would like to write a query to get all the persons that have an age > 23. For that I need to cast the value to an int such that I can compare it when the key is age.
The above document is an example very specific to this problem.
I would greatly appreciate your help.
Thanks!
You can use scripting for that:
POST /index/type/_search
{
"query": {
"filtered": {
"filter": {
"script": {
"script": "foreach(attr : _source['attributes']) {if ( attr['key']=='age') { return attr['value'] > ageValue;} } return false;",
"params" : {
"ageValue" : 23
}
}
},
"query": {
"match_all": {}
}
}
}
}
UPD: Note that dynamic scripting should be enabled in elasticsearch.yml.
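For the older 1.x releases this answer targets, that typically means a line like the one below in elasticsearch.yml; the exact setting name has changed across versions, so check the docs for yours:
script.disable_dynamic: false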
Also, I suppose you can achieve better query performance by refactoring your document structure and applying an appropriate mapping for the age field.
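For illustration, a sketch of such a refactoring (index and type names here are made up): map age as a numeric field so a plain range filter works without any scripting.
PUT /persons
{
"mappings": {
"person": {
"properties": {
"name": { "type": "string" },
"age": { "type": "integer" }
}
}
}
}

POST /persons/person/_search
{
"query": {
"range": {
"age": { "gt": 23 }
}
}
}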
Related
Say I create an index people whose entries will have two properties: name and friends.
PUT /people
{
"mappings": {
"properties": {
"friends": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword"
}
}
}
}
}
}
and I put in two entries, each of which has two friends.
POST /people/_doc
{
"name": "Jack",
"friends": [
"Jill", "John"
]
}
POST /people/_doc
{
"name": "Max",
"friends": [
"John", "John" # Max will have two friends, but both named John
]
}
Now I want to search for people that have multiple friends
GET /people/_search
{
"query": {
"bool": {
"filter": [
{
"script": {
"script": {
"source": "doc['friends.keyword'].length > 1"
}
}
}
]
}
}
}
This only returns Jack and ignores Max. I assume this is because we are actually traversing the inverted index, and "John" and "John" produce only one token, 'john', so the number of tokens is actually 1 here.
Since my index is relatively small and performance is not key, I would like to traverse the _source instead of the inverted index:
GET /people/_search
{
"query": {
"bool": {
"filter": [
{
"script": {
"script": {
"source": "ctx._source.friends.length > 1"
}
}
}
]
}
}
}
But according to https://github.com/elastic/elasticsearch/issues/20068, _source is supported only when updating, not when searching, so I cannot do that.
One obvious solution seems to be to compute the length of the field and store it in the index, something like friends_count: 2, and then filter on that. But that requires reindexing, and this also looks like something that should have an obvious solution I am missing.
Thanks a lot.
There is a new feature in ES 7.11 called runtime fields. A runtime field is a field that is evaluated at query time. Runtime fields enable you to:
Add fields to existing documents without reindexing your data
Start working with your data without understanding how it’s structured
Override the value returned from an indexed field at query time
Define fields for a specific use without modifying the underlying schema
You can find more information about runtime fields here. As for how to use them, you can do something like this:
Index Time:
PUT my-index/
{
"mappings": {
"runtime": {
"friends_count": {
"type": "keyword",
"script": {
"source": "doc['#friends'].size()"
}
}
},
"properties": {
"#timestamp": {"type": "date"}
}
}
}
You can also use runtime fields at search time; for more information check here.
Search Time
GET my-index/_search
{
"runtime_mappings": {
"friends_count": {
"type": "keyword",
"script": {
"source": "ctx._source.friends.size()"
}
}
}
}
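A sketch of how that search-time runtime field might then be used to filter in the same request (my addition, assuming the friends field from the question; note that reading _source in a script is relatively slow):
GET my-index/_search
{
"runtime_mappings": {
"friends_count": {
"type": "long",
"script": {
"source": "emit(params['_source']['friends'].size())"
}
}
},
"query": {
"range": {
"friends_count": {
"gt": 1
}
}
}
}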
Update:
POST mytest/_update_by_query
{
"query": {
"match_all": {}
},
"script": {
"source": "ctx._source.arrayLength = ctx._source.friends.size()"
}
}
You can update all of your documents with the query above and then adjust your search to filter on the new field.
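For instance, the adjusted search could be a plain range filter on the stored count (a sketch; substitute your own index name, and note that arrayLength will be dynamically mapped as a numeric field):
GET mytest/_search
{
"query": {
"range": {
"arrayLength": {
"gt": 1
}
}
}
}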
For everyone wondering about the same issue: I think @Kaveh's answer is the most likely way to go, but I did not manage to make it work in my case. It seems to me that _source is created after the query is performed, and therefore you cannot access _source for the purposes of a filtering query.
This leaves you with two options:
filter the results at the application level (an ugly and slow solution)
actually save the field length in a separate field, such as friends_count
possibly there is another option I don't know about(?)
For example, I have a date field delivery_datetime in my index, and I have to show the user, for the current day, whether a particular parcel is Due today, Over Due, or Not Due.
I can't create a separate field and reindex, because the value depends on the current date and that changes every day; if I had to calculate it while indexing, I would have to reindex every day, which is not feasible because I have a lot of data.
I could use update by query, but my index is frequently updated via a Python script, and since we don't have ACID properties here we would get version conflicts.
To my knowledge, my only option is to use a scripted field.
If I have to write the logic in pseudocode:
Due - delivery_datetime.dateOnly == now.dateOnly
Over Due - delivery_datetime.dateOnly < now.dateOnly
Not Due - delivery_datetime.dateOnly > now.dateOnly
Though I have a lot of data, if I generate a CSV I don't want the scripted field to have a major impact on cluster performance.
So I need some help doing this efficiently in a scripted field, and any completely different solution would also be greatly helpful.
I'm hoping for a Painless script if a scripted field is the only solution.
Once we've ruled out doc upserts/updates, there are essentially two approaches to this: script_fields or filter aggregations.
Let's first assume your mapping looks similar to:
{
"mappings": {
"properties": {
"delivery_datetime": {
"type": "object",
"properties": {
"dateOnly": {
"type": "date",
"format": "dd.MM.yyyy"
}
}
}
}
}
}
Now, if we filter our parcels down to, say, a single ID and want to know which due-state it is in, we can create 3 script fields like so:
GET parcels/_search
{
"_source": "timeframe_*",
"script_fields": {
"timeframe_due": {
"script": {
"source": "doc['delivery_datetime.dateOnly'].value.dayOfMonth == params.nowDayOfMonth",
"params": {
"nowDayOfMonth": 8
}
}
},
"timeframe_overdue": {
"script": {
"source": "doc['delivery_datetime.dateOnly'].value.dayOfMonth < params.nowDayOfMonth",
"params": {
"nowDayOfMonth": 8
}
}
},
"timeframe_not_due": {
"script": {
"source": "doc['delivery_datetime.dateOnly'].value.dayOfMonth > params.nowDayOfMonth",
"params": {
"nowDayOfMonth": 8
}
}
}
}
}
which'll return something along the lines of:
...
"fields" : {
"timeframe_due" : [
true
],
"timeframe_not_due" : [
false
],
"timeframe_overdue" : [
false
]
}
This is trivial to set up, but the day-of-month math has a significant weak point that'll be addressed below.
Alternatively, we can use 3 filter aggregations and similarly narrow things down to the 1 document in question, like so:
GET parcels/_search
{
"size": 0,
"query": {
"ids": {
"values": [
"my_id_thats_due_today"
]
}
},
"aggs": {
"due": {
"filter": {
"range": {
"delivery_datetime.dateOnly": {
"gte": "now/d",
"lte": "now/d"
}
}
}
},
"overdue": {
"filter": {
"range": {
"delivery_datetime.dateOnly": {
"lt": "now/d"
}
}
}
},
"not_due": {
"filter": {
"range": {
"delivery_datetime.dateOnly": {
"gt": "now/d"
}
}
}
}
}
}
yielding
...
"aggregations" : {
"overdue" : {
"doc_count" : 0
},
"due" : {
"doc_count" : 1
},
"not_due" : {
"doc_count" : 0
}
}
Now the advantages of the 2nd approach are as follows:
There are no scripts involved -> faster execution.
More importantly, you don't have to worry about day-of-month math: Dec 15th is later than Nov 20th, but the trivial day-of-month comparison would say otherwise. You can implement something similar in your scripts, but more complexity means worse execution speed.
You can ditch the ID filtering and use those aggregated counts in an internal dashboard, possibly even a customer dashboard, although regular customers rarely have enough parcels to make aggregating them worthwhile.
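For instance, dropping the ids query and bucketing every parcel in one request could look roughly like the following sketch (my addition, using a date_range aggregation over the same field):
GET parcels/_search
{
"size": 0,
"aggs": {
"due_state": {
"date_range": {
"field": "delivery_datetime.dateOnly",
"ranges": [
{ "key": "overdue", "to": "now/d" },
{ "key": "due", "from": "now/d", "to": "now+1d/d" },
{ "key": "not_due", "from": "now+1d/d" }
]
}
}
}
}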
Answering my own question, here is what worked for me.
Scripted Field Script:
def DiffMillis = 0;
if (!doc['delivery_datetime'].empty) {
  // Converting each to days, 1000*60*60*24 = 86400000
  DiffMillis = (new Date().getTime() / 86400000) - (doc['delivery_datetime'].value.getMillis() / 86400000);
}
doc['delivery_datetime'].empty ? "No Due Date" : (DiffMillis == 0 ? "Due" : (DiffMillis > 0 ? "Over Due" : "Not Due"))
I specifically used the ternary operator because with if/else I would have to use return, and when I used return I got a search_phase_execution_exception while adding filters on the scripted field.
I store objects like this in Elasticsearch:
{
"userName": "Cool User",
"orders":[
{
"orderType": "type1",
"amount": 500
},
{
"orderType": "type2",
"amount": 1000
}
]
}
And all is OK while I'm searching by the 'orders.orderType' or 'orders.amount' fields.
But what query do I have to use to get objects that have 'orders.amount >= 500' AND 'orders.orderType = type2'?
I've tried a query like this:
{
"query": {
"bool": {
"must": [
{
"range": {
"orders.amount": {
"from": "499"
}
}
},
{
"query_string": {
"query": "type2",
"fields": [
"orders.orderType"
]
}
}
]
}
}
}
...but this request returns records that have 'orders.orderType = type2' OR 'orders.amount >= 500'.
Please help me construct a query that looks for objects whose orders array contains an object with amount >= 500 AND orderType = type2.
Finally, I found a blog post that describes exactly my case.
https://www.bmc.com/blogs/elasticsearch-nested-searches-embedded-documents/
Thanks for the help.
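In short, the idea boils down to mapping orders as a nested type and putting both conditions inside a single nested query. A rough sketch (the index name and field types here are my assumptions; adjust them to your mapping):
PUT /my_orders_index
{
"mappings": {
"properties": {
"userName": { "type": "keyword" },
"orders": {
"type": "nested",
"properties": {
"orderType": { "type": "keyword" },
"amount": { "type": "integer" }
}
}
}
}
}

GET /my_orders_index/_search
{
"query": {
"nested": {
"path": "orders",
"query": {
"bool": {
"must": [
{ "range": { "orders.amount": { "gte": 500 } } },
{ "term": { "orders.orderType": "type2" } }
]
}
}
}
}
}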
Per our requirements, we need to find the max ID of the existing documents before adding a new document. The problem here is that the doc may also contain string data, so I had to use an inline script in the Elasticsearch query to find the max ID only for documents that have integer data and otherwise return 0. I am using the following inline script query to find the max Key, but it is not working. Can you help me with this?
{
"size": 0,
"query": {
"bool": {
"filter": [
{
"term": {
"Name": {
"value": "Test2"
}
}
}
]
}
},
"aggs": {
"MaxId": {
"max": {
"field": "Key",
"script": {
"inline": "((doc['Key'].value).isNumber()) ? Integer.parseInt(doc['Key'].value) : 0"
}
}
}
}
}
The error is because the max aggregation only supports numeric fields, i.e. you cannot specify a string field (i.e. Key) in a max aggregation.
Simply remove the "field": "Key" part and keep only the script part:
{
"size": 0,
"query": {
"bool": {
"filter": [
{
"term": {
"Name": "Test2"
}
}
]
}
},
"aggs": {
"MaxId": {
"max": {
"script": {
"source": "((doc['Key'].value).isNumber()) ? Integer.parseInt(doc['Key'].value) : 0"
}
}
}
}
}
I'm trying to learn Elasticsearch with a simple example application that lists quotations associated with people. The example mapping might look like:
{
"people" : {
"properties" : {
"name" : { "type" : "string"},
"quotations" : { "type" : "string" }
}
}
}
Some example data might look like:
{ "name" : "Mr A",
"quotations" : [ "quotation one, this and that and these"
, "quotation two, those and that"]
}
{ "name" : "Mr B",
"quotations" : [ "quotation three, this and that"
, "quotation four, those and these"]
}
I would like to be able to use the querystring api on individual quotations, and return the people who match. For instance, I might want to find people who have a quotation that contains (this AND these) - which should return "Mr A" but not "Mr B", and so on. How can I achieve this?
EDIT1:
Andrei's answer below seems to work, with data values now looking like:
{"name":"Mr A","quotations":[{"value" : "quotation one, this and that and these"}, {"value" : "quotation two, those and that"}]}
However, I can't seem to get a query_string query to work. The following produces no results:
{
"query": {
"nested": {
"path": "quotations",
"query": {
"query_string": {
"default_field": "quotations",
"query": "quotations.value:this AND these"
}
}
}
}
}
Is there a way to get a query_string query working with a nested object?
Edit2: Yes it is, see Andrei's answer.
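For anyone looking for the query_string form, here is a sketch that should work against the nested mapping from the answer below; the key change is pointing default_field at quotations.value so that both terms are matched within the same nested object:
{
"query": {
"nested": {
"path": "quotations",
"query": {
"query_string": {
"default_field": "quotations.value",
"query": "this AND these"
}
}
}
}
}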
For that requirement to be achieved, you need to look at nested objects, so that you query individual values from the nested object rather than a flattened list of values. For example:
{
"mappings": {
"people": {
"properties": {
"name": {
"type": "string"
},
"quotations": {
"type": "nested",
"properties": {
"value": {
"type": "string"
}
}
}
}
}
}
}
Values:
{"name":"Mr A","quotations":[{"value": "quotation one, this and that and these"}, {"value": "quotation two, those and that"}]}
{"name":"Mr B","quotations":[{"value": "quotation three, this and that"}, {"value": "quotation four, those and these"}]}
Query:
{
"query": {
"nested": {
"path": "quotations",
"query": {
"bool": {
"must": [
{ "match": {"quotations.value": "this"}},
{ "match": {"quotations.value": "these"}}
]
}
}
}
}
}
Unfortunately there is no good way to do that.
https://web.archive.org/web/20141021073225/http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/complex-core-fields.html
When you get a document back from Elasticsearch, any arrays will be in the same order as when you indexed the document. The _source field that you get back contains exactly the same JSON document that you indexed.
However, arrays are indexed — made searchable — as multi-value fields, which are unordered. At search time you can’t refer to “the first element” or “the last element”. Rather, think of an array as a bag of values.
In other words, it is always considering all values in the array.
This will return only Mr A
{
"query": {
"match": {
"quotations": {
"query": "quotation one",
"operator": "AND"
}
}
}
}
But this will return both Mr A & Mr B:
{
"query": {
"match": {
"quotations": {
"query": "this these",
"operator": "AND"
}
}
}
}
If scripting is enabled, this should work:
"script": {
"inline": "for(element in _source.quotations) { if(element == 'this' && element == 'these') {return true;} }; return false;"
}
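For context, that is just the script part; a full request wrapping it in a script query might look roughly like this (a sketch for the older Groovy-style _source scripting the snippet assumes):
{
"query": {
"bool": {
"filter": {
"script": {
"script": {
"inline": "for (element in _source.quotations) { if (element.contains('this') && element.contains('these')) { return true; } }; return false;"
}
}
}
}
}
}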