My Elasticsearch mapping looks roughly like this:
{
  "myIndex": {
    "mappings": {
      "myType": {
        "_timestamp": {
          "enabled": true,
          "store": true
        },
        "properties": {
          "LastUpdated": {
            "type": "date",
            "format": "dateOptionalTime"
          }
          /* lots of other properties */
        }
      }
    }
  }
}
So, _timestamp is enabled, and there's also a LastUpdated property on every document. LastUpdated can have a different value than _timestamp: sometimes documents get updated physically (e.g. updates to denormalized data), which updates _timestamp, but LastUpdated remains unchanged because the document hasn't actually been "updated" from a business perspective.
Also, there are many documents without a LastUpdated value (mostly old data).
What I'd like to do is run an aggregation which counts the number of documents per calendar day (kindly ignore the fact that the dates need to be midnight-aligned, please). For every document, use LastUpdated if it's there, otherwise use _timestamp.
Here's what I've tried:
{
  "aggregations": {
    "counts": {
      "terms": {
        "script": "doc.LastUpdated == empty ? doc._timestamp : doc.LastUpdated"
      }
    }
  }
}
The bucketization appears to work to some extent, but the keys in the result look weird:
buckets: [
  {
    key: org.elasticsearch.index.fielddata.ScriptDocValues$Longs#7ba1f463,
    doc_count: 300544
  },
  {
    key: org.elasticsearch.index.fielddata.ScriptDocValues$Longs#5a298acb,
    doc_count: 257222
  },
  {
    key: org.elasticsearch.index.fielddata.ScriptDocValues$Longs#6e451b5e,
    doc_count: 101117
  },
  ...
]
What's the proper way to run this aggregation and get meaningful keys (i.e. timestamps) in the result?
I've tested this and put together a Groovy script for you:
POST index/type/_search
{
  "aggs": {
    "counts": {
      "terms": {
        "script": "ts=doc['_timestamp'].getValue();v=doc['LastUpdated'].getValue();rv=v?:ts;rv",
        "lang": "groovy"
      }
    }
  }
}
This returns the required result.
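If you also want the buckets aligned to calendar days (as mentioned in the question), Elasticsearch versions that still have _timestamp also accept a script on the date_histogram aggregation, which yields proper date keys per day. Here is an untested sketch using the same fallback logic:
POST index/type/_search
{
  "size": 0,
  "aggs": {
    "counts_per_day": {
      "date_histogram": {
        "interval": "day",
        "lang": "groovy",
        "script": "doc['LastUpdated'].empty ? doc['_timestamp'].value : doc['LastUpdated'].value"
      }
    }
  }
}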
Hope this helps!! Thanks!!
Related
We're working with Elasticsearch for the first time and we are currently deciding on what would be the best solution for our problem at hand.
We are receiving event-based logs (in JSON form) from our applications directly into an Elasticsearch index. These logs are highly interconnected (they share a common unique ID), and therefore we need to convert/aggregate them in an entity-centered fashion.
Each event usually carries a status change in the target field; there are more statuses than just start/end. Each document also has more data, which can be used to create more than one entity-centered index.
{
  "uniqueID": "ain123in145512kn",
  "name": "Bob",
  "target": {
    "eventStart": { "timestamp": "2020-06-01T13:50:55.000Z" }
  }
}
{
  "uniqueID": "ain123in145512kn",
  "name": "Bob",
  "target": {
    "eventStop": { "timestamp": "2021-06-01T13:50:55.000Z" }
  }
}
We were already able to join these documents using Python or Logstash. We basically created an index that contains the following documents:
{
  "uniqueID": "ain123in145512kn",
  "name": "Bob",
  "target": {
    "eventStart": { "timestamp": "2020-06-01T13:50:55.000Z" },
    "eventStop": { "timestamp": "2021-06-01T13:50:55.000Z" },
    "time_dif_Start_Stop": "xxxx"
  }
}
We assigned each event a document ID equal to its uniqueID, so indexing a later event automatically updated the same document (roughly the upsert pattern sketched below). A final step then calculated the difference between the eventStart and eventStop timestamps.
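A minimal sketch of that pattern as a raw update call; the index name my_events is just a placeholder, and in practice the call is issued by the Python/Logstash job:
# the partial doc is merged into the existing document with this ID (or created if absent)
POST my_events/_update/ain123in145512kn
{
  "doc": {
    "name": "Bob",
    "target": {
      "eventStop": { "timestamp": "2021-06-01T13:50:55.000Z" }
    }
  },
  "doc_as_upsert": true
}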
We have certain requirements for our pipeline, so we would prefer that the data never has to leave Elasticsearch. Therefore, we are wondering whether this is possible with any of the tools that already exist in the ELK stack or are hosted in Elastic Cloud. We tried using Transforms, but we were only able to calculate aggregated fields in a new index. Is it possible to also merge/update all the documents into a single one with this tool or any other? That would be ideal for us, as it runs on a schedule and we would not need any external tools to modify documents.
Any other suggestions or help would also be greatly appreciated.
A transform sounds good. I tried the following quick example:
PUT test/_doc/1
{
  "uniqueID": "one",
  "eventStart": {
    "timestamp": "2020-06-01T13:50:55.000Z"
  }
}

PUT test/_doc/2
{
  "uniqueID": "one",
  "eventStop": {
    "timestamp": "2020-06-01T13:53:55.000Z"
  }
}

PUT test/_doc/3
{
  "uniqueID": "one",
  "eventStop": {
    "timestamp": "2020-06-01T13:54:55.000Z"
  }
}

PUT test/_doc/4
{
  "uniqueID": "other",
  "eventStop": {
    "timestamp": "2020-06-01T13:54:55.000Z"
  }
}

GET test/_mapping

POST _transform/_preview
{
  "source": {
    "index": "test"
  },
  "dest": {
    "index": "test_transformed"
  },
  "pivot": {
    "group_by": {
      "id": {
        "terms": {
          "field": "uniqueID.keyword"
        }
      }
    },
    "aggregations": {
      "event_count": {
        "value_count": {
          "field": "_id"
        }
      },
      "start": {
        "min": {
          "field": "eventStart.timestamp"
        }
      },
      "stop": {
        "max": {
          "field": "eventStop.timestamp"
        }
      },
      "duration": {
        "bucket_script": {
          "buckets_path": {
            "start": "start.value",
            "stop": "stop.value"
          },
          "script": """
            return (params.stop - params.start) / 1000; // in seconds (initially in ms)
          """
        }
      }
    }
  }
}
This generates the following result, and the aggregation and calculation look correct:
[
  {
    "duration" : 240.0,
    "stop" : "2020-06-01T13:54:55.000Z",
    "event_count" : 3,
    "start" : "2020-06-01T13:50:55.000Z",
    "id" : "one"
  },
  {
    "stop" : "2020-06-01T13:54:55.000Z",
    "event_count" : 1,
    "start" : null,
    "id" : "other"
  }
]
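Once the preview looks right, the same configuration can be stored as a real transform and started. The transform ID my_event_transform is just an example name; add a sync section if you want it to run continuously on a schedule rather than as a one-off batch:
PUT _transform/my_event_transform
{
  "source": {
    "index": "test"
  },
  "dest": {
    "index": "test_transformed"
  },
  "pivot": {
    "group_by": {
      "id": {
        "terms": { "field": "uniqueID.keyword" }
      }
    },
    "aggregations": {
      "event_count": {
        "value_count": { "field": "_id" }
      },
      "start": {
        "min": { "field": "eventStart.timestamp" }
      },
      "stop": {
        "max": { "field": "eventStop.timestamp" }
      },
      "duration": {
        "bucket_script": {
          "buckets_path": {
            "start": "start.value",
            "stop": "stop.value"
          },
          "script": "return (params.stop - params.start) / 1000;"
        }
      }
    }
  }
}

POST _transform/my_event_transform/_start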
PS: I've turned the answer into a blog post that dives a little deeper into the general topic :)
I am working on a scenario where I aggregate daily data per user. The data is processed in real time and stored in Elasticsearch. Now I want to use an Elasticsearch feature to aggregate the data in real time. I've read about Transforms in Elasticsearch and found that this is what we need.
The problem is that when the source index is updated, the destination index, which is supposed to hold the calculated aggregations, is not updated. This is the case I have tested:
source_index data model:
{
  "my_datetime": "2021-06-26T08:50:59",
  "client_no": "1",
  "my_date": "2021-06-26",
  "amount": 1000
}
and the transform I defined:
PUT _transform/my_transform
{
  "source": {
    "index": "source_index"
  },
  "pivot": {
    "group_by": {
      "client_no": {
        "terms": {
          "field": "client_no"
        }
      },
      "my_date": {
        "terms": {
          "field": "my_date"
        }
      }
    },
    "aggregations": {
      "sum_amount": {
        "sum": {
          "field": "amount"
        }
      },
      "count_amount": {
        "value_count": {
          "field": "amount"
        }
      }
    }
  },
  "description": "total amount sum per client",
  "dest": {
    "index": "my_analytic"
  },
  "frequency": "60s",
  "sync": {
    "time": {
      "field": "my_datetime",
      "delay": "10s"
    }
  }
}
Now when I add another document or update existing documents in the source index, the destination index is not updated and does not take the new documents into account.
Also note that the Elasticsearch version I'm using is 7.13.
I also changed the date field to a timestamp (epoch format, like 1624740659000), but I still have the same problem.
What am I doing wrong here?
Could it be that your "my_datetime" is further in the past than the "delay": "10s" (plus the time of "frequency": "60s")?
The docs for sync.field note:
In general, it’s a good idea to use a field that contains the ingest timestamp. If you use a different field, you might need to set the delay such that it accounts for data transmission delays.
You might just need a higher delay.
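If you want to follow that advice, one common pattern is to stamp each document with its ingest time via an ingest pipeline and point sync.time.field at that field instead of my_datetime. A minimal sketch, where the pipeline and field names are just examples:
PUT _ingest/pipeline/add_ingest_timestamp
{
  "processors": [
    {
      "set": {
        "field": "event_ingested",
        "value": "{{_ingest.timestamp}}"
      }
    }
  ]
}
Documents indexed through this pipeline (for example by setting index.default_pipeline on the source index) then carry an event_ingested value that reflects arrival time, so a relatively small delay is enough.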
Say I create an index people whose entries will have two properties: name and friends
PUT /people
{
  "mappings": {
    "properties": {
      "friends": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword"
          }
        }
      }
    }
  }
}
and I add two entries, each of which has two friends.
POST /people/_doc
{
  "name": "Jack",
  "friends": [
    "Jill", "John"
  ]
}

POST /people/_doc
{
  "name": "Max",
  "friends": [
    "John", "John" # Max has two friends, but both are named John
  ]
}
Now I want to search for people that have multiple friends
GET /people/_search
{
  "query": {
    "bool": {
      "filter": [
        {
          "script": {
            "script": {
              "source": "doc['friends.keyword'].length > 1"
            }
          }
        }
      ]
    }
  }
}
This will only return Jack and ignore Max. I assume this is because we are actually traversing the inverted index, and "John" and "John" produce only one token ('john'), so the number of tokens is actually 1 here.
Since my index is relatively small and performance is not the key concern, I would like to actually traverse the source and not the inverted index:
GET /people/_search
{
  "query": {
    "bool": {
      "filter": [
        {
          "script": {
            "script": {
              "source": "ctx._source.friends.length > 1"
            }
          }
        }
      ]
    }
  }
}
But according to https://github.com/elastic/elasticsearch/issues/20068, _source access is supported only when updating, not when searching, so I cannot do that.
One obvious solution seems to be to take the length of the field and store it in the index, something like friends_count: 2, and then filter based on that. But that requires reindexing, and this also seems like something that should be solvable in some obvious way I am missing.
Thanks a lot.
There is a new feature in ES 7.11 called runtime fields. A runtime field is a field that is evaluated at query time. Runtime fields enable you to:
Add fields to existing documents without reindexing your data
Start working with your data without understanding how it’s structured
Override the value returned from an indexed field at query time
Define fields for a specific use without modifying the underlying schema
You can find more information about runtime fields here. As for how to use them, you can do something like this:
Index Time:
PUT my-index
{
  "mappings": {
    "runtime": {
      "friends_count": {
        "type": "keyword",
        "script": {
          "source": "emit(String.valueOf(doc['friends.keyword'].size()))"
        }
      }
    },
    "properties": {
      "@timestamp": {"type": "date"}
    }
  }
}
You can also use runtime fields at search time; for more information check here.
Search Time
GET my-index/_search
{
  "runtime_mappings": {
    "friends_count": {
      "type": "keyword",
      "script": {
        "source": "emit(String.valueOf(params._source['friends'].size()))"
      }
    }
  }
}
Update:
POST mytest/_update_by_query
{
  "query": {
    "match_all": {}
  },
  "script": {
    "source": "ctx._source.arrayLength = ctx._source.friends.size()"
  }
}
You can update all of your documents with the query above and then adjust your search to filter on the new field, as shown below.
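For example, a minimal sketch reusing the arrayLength field written by the _update_by_query above:
GET mytest/_search
{
  "query": {
    "range": {
      "arrayLength": {
        "gt": 1
      }
    }
  }
}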
For everyone wondering about the same issue, I think @Kaveh's answer is the most likely way to go, but I did not manage to make it work in my case. It seems to me that _source is created after the query is performed, and therefore you cannot access _source for the purposes of a filtering query.
This leaves you with two options:
filter the result on the application level (ugly and slow solution)
actually save the field length in a separate field, such as friends_count
possibly there is another option I don't know about(?).
Let's assume there's a (simplified) index:
PUT test
{
  "mappings": {
    "properties": {
      "numeric_field_sometimes_empty": {
        "type": "integer"
      },
      "numeric_field_always_present": {
        "type": "integer"
      }
    }
  }
}
with numeric fields that may or may not be present in some (potentially in all of the filtered) docs:
POST test/_doc
{
  "numeric_field_always_present": 10
}

POST test/_doc
{
  "numeric_field_always_present": 20
}
I want to execute a bucket_script to calculate certain trends, and since bucket_script needs to be a child of a multi-bucket agg, I simulate one using a filters agg. After that, nothing stands in the way of creating my numeric single-bucket sub-aggs like so:
GET test/_search
{
  "size": 0,
  "aggs": {
    "multibucket_simulator": {
      "filters": {
        "filters": {
          "all": {
            "match_all": {}
          }
        }
      },
      "aggs": {
        "avg_empty": {
          "avg": {
            "field": "numeric_field_sometimes_empty"
          }
        },
        "avg_non_null": {
          "avg": {
            "field": "numeric_field_always_present"
          }
        },
        "diff": {
          "bucket_script": {
            "buckets_path": {
              "now": "avg_empty.value",
              "before": "avg_non_null.value"
            },
            "script": """
              return (params.now != null ? params.now : 0)
                - (params.before != null ? params.before : 0)
            """,
            "format": "###.##"
          }
        }
      }
    }
  }
}
Since I know that some of those sub-aggs' results may be null (a strict null type, not 0), I check whether that was the case with the ternary operators and proceed to return the value difference. This yields:
{
  "aggregations": {
    "multibucket_simulator": {
      "buckets": {
        "all": {
          "doc_count": 2,
          "avg_non_null": {
            "value": 15.0
          },
          "avg_empty": {
            "value": null
          }
        }
      }
    }
  }
}
with the diff bucket script sub-agg being fully left out. That's suboptimal...
I've tried removing .value from the paths so I can access and check .value directly in the script -- to no avail.
The question, then, is: why are the null buckets being skipped, and is there an alternative besides bucket_script for this use case?
The docs state that
The specified metric must be numeric and the script must return a
numeric value.
I suppose it's a matter of discussion whether null falls into that category.
EDIT 1
With that being said, setting a missing parameter on each avg agg solves this problem.
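For reference, the parameter looks like this; a stand-alone sketch for brevity, with the same "missing": 0 going on each avg inside the filters agg above:
GET test/_search
{
  "size": 0,
  "aggs": {
    "avg_empty": {
      "avg": {
        "field": "numeric_field_sometimes_empty",
        "missing": 0
      }
    }
  }
}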
EDIT 2 & Actual Answer
It was the gap_policy. It defaults to skip and needed to be set to insert_zeros. Here's the reason.
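For completeness, here is the query from the question with that one-line fix applied to the bucket_script:
GET test/_search
{
  "size": 0,
  "aggs": {
    "multibucket_simulator": {
      "filters": {
        "filters": {
          "all": { "match_all": {} }
        }
      },
      "aggs": {
        "avg_empty": {
          "avg": { "field": "numeric_field_sometimes_empty" }
        },
        "avg_non_null": {
          "avg": { "field": "numeric_field_always_present" }
        },
        "diff": {
          "bucket_script": {
            "buckets_path": {
              "now": "avg_empty.value",
              "before": "avg_non_null.value"
            },
            "gap_policy": "insert_zeros",
            "script": """
              return (params.now != null ? params.now : 0)
                - (params.before != null ? params.before : 0)
            """,
            "format": "###.##"
          }
        }
      }
    }
  }
}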
I have documents that I want to index/search with ElasticSearch. These documents may contain multiple dates, and in some cases, the dates are actually date ranges. I'm wondering if someone can help me figure out how to write a query that does the right thing (or how to properly index my document so I can query it).
An example is worth a thousand words. Suppose the document contains two marriage date ranges: 2005-05-05 to 2007-07-07 and 2012-12-12 to 2014-03-03.
If I index each date range in start and end date fields, and write a typical range query, then a search for 2008-01-01 will return this record because one marriage will satisfy one of the inequalities and the other will satisfy the other. I don't know how to get ES to keep the two date ranges separate. Obviously, having marriage1 and marriage2 fields would resolve this particular problem, but in my actual data set I have an unbounded number of dates.
I know that ES 5.2 supports the date_range data type, which I believe would resolve this issue, but I'm stuck with 5.1 because I'm using AWS's managed ES.
Thanks in advance.
You can use nested objects for this purpose.
PUT /records
{
  "mappings": {
    "record": {
      "properties": {
        "marriage": {
          "type": "nested",
          "properties": {
            "start": { "type": "date" },
            "end": { "type": "date" },
            "person1": { "type": "text" },
            "person2": { "type": "text" }
          }
        }
      }
    }
  }
}

PUT /records/record/1
{
  "marriage": [
    { "start": "2005-05-05", "end": "2007-07-07", "person1": "", "person2": "" },
    { "start": "2012-12-12", "end": "2014-03-03", "person1": "", "person2": "" }
  ]
}
POST /records/record/_search
{
  "query": {
    "nested": {
      "path": "marriage",
      "query": {
        "range": {
          "marriage.start": { "gte": "2008-01-01", "lte": "2015-02-03" }
        }
      }
    }
  }
}
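And if the goal is specifically "find records where a given date falls inside any single marriage" (the 2008-01-01 case from the question), the same nested structure supports that by combining two range clauses in a bool query inside the nested query; a sketch:
POST /records/record/_search
{
  "query": {
    "nested": {
      "path": "marriage",
      "query": {
        "bool": {
          "filter": [
            { "range": { "marriage.start": { "lte": "2008-01-01" } } },
            { "range": { "marriage.end": { "gte": "2008-01-01" } } }
          ]
        }
      }
    }
  }
}
Because both clauses are evaluated against the same nested marriage object, 2008-01-01 (which falls between the two marriages) correctly does not match this document.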