Count of unique aggregation doc_count in Elasticsearch

Using Elasticsearch 7.0, I can get how many logs I have for each user with an aggregation:
"aggs": {
"by_user": {
"terms": {
"field": "user_id",
}
}
}
This returns something like:
user32: 25
user52: 20
user10: 20
...
What I would like to know is how many users have 25 logs, how many users have 20 logs, etc. The ideal result would be something like:
25: 1
20: 2
19: 4
12: 54
because 54 users have 12 log lines.
How can I write an aggregation that returns this result?

It sounds like you could use the Bucket Script Aggregation to simplify your query, but the problem is that there is still an open PR on this topic.
So, for now, I think the simplest option is to use a Painless script with the Scripted Metric Aggregation. I recommend carefully reading about the stages of its execution.
In terms of code, I know it's not the best algorithm for your problem, but quick and dirty, your query could look something like this:
GET my_index/_search
{
  "size": 0,
  "query": {
    "match_all": {}
  },
  "aggs": {
    "profit": {
      "scripted_metric": {
        "init_script": "state.transactions = [:];",
        "map_script":
        """
          // count the docs per user on this shard
          if (doc['user_id'].size() != 0) {
            def key = doc['user_id'].value;
            def value = state.transactions[key];
            if (value == null) value = 0;
            state.transactions[key] = value + 1;
          }
        """,
        "combine_script": "return state.transactions",
        "reduce_script":
        """
          // invert the per-shard maps: count how many users share each doc_count
          def result = [:];
          for (state in states) {
            for (item in state.entrySet()) {
              def key = item.getValue().toString();
              def value = result[key];
              if (value == null) value = 0;
              result[key] = value + 1;
            }
          }
          return result;
        """
      }
    }
  }
}
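Assuming a single shard (with several shards, a user whose documents are split across them would be counted once per shard), the response value would be a map keyed by doc_count, along the lines of the desired result above:
"aggregations" : {
  "profit" : {
    "value" : {
      "25" : 1,
      "20" : 2,
      "19" : 4,
      "12" : 54
    }
  }
}
Note the keys come back as strings because of the toString() call in the reduce script.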

Related

Aggregating sequence of connected events

Let's say I have events like this in my log:
{type:"approval_revokation", approval_id=22}
{type:"approval", request_id=12, approval_id=22}
{type:"control3", request_id=12}
{type:"control2", request_id=12}
{type:"control1", request_id=12}
{type:"request", request_id=12 requesting_user="user1"}
{type:"registration", userid="user1"}
I would like to do a search that aggregates one bucket for each approval_id, containing all events connected to it as above. As you can see, there is no single id field that runs through all the events, but they are all connected in a chain.
The reason I would like this is to feed the result into an anomaly detector, to verify things like whether all controls were executed, and to validate the registration event for an eventual approval.
Can this be done using aggregations, or are there any other suggestions?
If there's no single unique "glue" parameter to tie these events together, I'm afraid the only choice is a brute-force map-reduce iterator on all the docs in the index.
After ingesting the above events:
POST _bulk
{"index":{"_index":"events","_type":"_doc"}}
{"type":"approval_revokation","approval_id":22}
{"index":{"_index":"events","_type":"_doc"}}
{"type":"approval","request_id":12,"approval_id":22}
{"index":{"_index":"events","_type":"_doc"}}
{"type":"control3","request_id":12}
{"index":{"_index":"events","_type":"_doc"}}
{"type":"control2","request_id":12}
{"index":{"_index":"events","_type":"_doc"}}
{"type":"control1","request_id":12}
{"index":{"_index":"events","_type":"_doc"}}
{"type":"request","request_id":12,"requesting_user":"user1"}
{"index":{"_index":"events","_type":"_doc"}}
{"type":"registration","userid":"user1"}
we can link them together like so:
POST events/_search
{
  "size": 0,
  "aggs": {
    "log_groups": {
      "scripted_metric": {
        "init_script": "state.groups = [];",
        "map_script": """
          // find the index of the group whose docs already contain this value
          // (checking the backup key too, for requesting_user <-> userid links)
          int fetchIndex(List groups, def key, def value, def backup_key) {
            if (key == null || value == null) {
              // nothing to search
              return -1
            }
            return IntStream.range(0, groups.size())
                .filter(i -> groups.get(i)['docs']
                    .stream()
                    .anyMatch(_doc -> _doc.get(key) == value
                        || (backup_key != null
                            && _doc.get(backup_key) == value)))
                .findFirst()
                .orElse(-1);
          }

          // extract whichever identifiers this event carries
          def approval_id = doc['approval_id'].size() != 0
              ? doc['approval_id'].value
              : null;
          def request_id = doc['request_id'].size() != 0
              ? doc['request_id'].value
              : null;
          def requesting_user = doc['requesting_user.keyword'].size() != 0
              ? doc['requesting_user.keyword'].value
              : null;
          def userid = doc['userid.keyword'].size() != 0
              ? doc['userid.keyword'].value
              : null;

          HashMap valueMap = ['approval_id': approval_id,
                              'request_id': request_id,
                              'requesting_user': requesting_user,
                              'userid': userid];

          // try to join an existing group via any of the identifiers
          def found = false;
          for (def entry : valueMap.entrySet()) {
            def field = entry.getKey();
            def value = entry.getValue();
            def backup_key = field == 'userid'
                ? 'requesting_user'
                : field == 'requesting_user'
                    ? 'userid'
                    : null;
            def found_index = fetchIndex(state.groups, field, value, backup_key);
            if (found_index != -1) {
              state.groups[found_index]['docs'].add(params._source);
              if (approval_id != null) {
                state.groups[found_index]['approval_id'] = approval_id;
              }
              found = true;
              break;
            }
          }
          // no match anywhere -> start a new group
          if (!found) {
            HashMap nextInLine = ['docs': [params._source]];
            if (approval_id != null) {
              nextInLine['approval_id'] = approval_id;
            }
            state.groups.add(nextInLine);
          }
        """,
        "combine_script": "return state",
        "reduce_script": "return states"
      }
    }
  }
}
returning the grouped events + the inferred approval_id:
"aggregations" : {
"log_groups" : {
"value" : [
{
"groups" : [
{
"docs" : [
{...}, {...}, {...}, {...}, {...}, {...}, {...}
],
"approval_id" : 22
},
{ ... }
]
}
]
}
}
Keep in mind that such scripts are going to be quite slow, esp. when run on large numbers of events.
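Also note that the reduce_script above just returns the raw per-shard states, so with more than one shard, parts of the same chain that landed on different shards would come back as separate groups. A rough merging sketch for the reduce phase (hypothetical, matching only on the inferred approval_id for brevity) could look like:
"reduce_script": """
  def merged = [];
  for (state in states) {
    if (state == null) { continue; }  // shards with no docs send null
    for (group in state.groups) {
      def target = -1;
      for (int i = 0; i < merged.size(); i++) {
        // naive match: both groups carry the same inferred approval_id
        if (group.containsKey('approval_id')
            && merged.get(i).containsKey('approval_id')
            && merged.get(i)['approval_id'] == group['approval_id']) {
          target = i;
          break;
        }
      }
      if (target == -1) {
        merged.add(group);
      } else {
        merged.get(target)['docs'].addAll(group['docs']);
      }
    }
  }
  return merged;
"""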

Elasticsearch: Multiply each nested element plus aggregation

Let's imagine an index composed of two documents like these:
doc1 = {
  "x": 1,
  "y": [{"brand": "b1", "value": 1},
        {"brand": "b2", "value": 2}]
},
doc2 = {
  "x": 2,
  "y": [{"brand": "b1", "value": 0},
        {"brand": "b2", "value": 3}]
}
Is it possible to multiply each value of y by x for each document, and then run a sum aggregation based on the brand term to get this result:
b1: 1
b2: 8
If not, could it be done with any other mapping type?
This is a highly custom use case, so I don't think there's any sort of pre-optimized mapping for it.
What I would suggest is the following:
Set up an index w/ y being nested:
PUT xy/
{"mappings":{"properties":{"y":{"type":"nested"}}}}
Ingest the docs from your example:
POST xy/_doc
{"x":1,"y":[{"brand":"b1","value":1},{"brand":"b2","value":2}]}
POST xy/_doc
{"x":2,"y":[{"brand":"b1","value":0},{"brand":"b2","value":3}]}
Use a scripted_metric aggregation to compute the products and add them up in a shared HashMap:
GET xy/_search
{
  "size": 0,
  "aggs": {
    "multiply_and_add": {
      "scripted_metric": {
        "init_script": "state.by_brands = [:]",
        "map_script": """
          def x = params._source['x'];
          for (def brand_pair : params._source['y']) {
            def brand = brand_pair['brand'];
            def product = x * brand_pair['value'];
            if (state.by_brands.containsKey(brand)) {
              state.by_brands[brand] += product;
            } else {
              state.by_brands[brand] = product;
            }
          }
        """,
        "combine_script": "return state",
        "reduce_script": "return states"
      }
    }
  }
}
which would yield something along the lines of
{
  ...
  "aggregations": {
    "multiply_and_add": {
      "value": [
        {
          "by_brands": {   <----
            "b2": 8,
            "b1": 1
          }
        }
      ]
    }
  }
}
UPDATE
The reduce_script could then look like this, replacing the plain "return states" above:
def combined_states = [:];
for (def state : states) {
  if (state == null) { continue; }  // empty shards return null
  for (def brand_pair : state['by_brands'].entrySet()) {
    def key = brand_pair.getKey();
    def value = brand_pair.getValue();
    if (combined_states.containsKey(key)) {
      combined_states[key] += (float)value;
    } else {
      combined_states[key] = (float)value;
    }
  }
}
return combined_states;
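Plugged into the aggregation above in place of "reduce_script": "return states", this returns one merged map instead of a list of per-shard maps, e.g.:
"aggregations": {
  "multiply_and_add": {
    "value": {
      "b2": 8.0,
      "b1": 1.0
    }
  }
}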

Elasticsearch scripted_metric null_pointer_exception

I'm trying to use the scripted_metric aggs of Elasticsearch, and normally it works perfectly fine with my other scripts.
However, with the script below I'm encountering a "null_pointer_exception" error, even though it's just a copy-pasted script that is already working in 6 other modules.
$max = 10;
{
  "query": {
    "match_all": {}
    // omitted some queries here, so I just turned it into match_all
  },
  "aggs": {
    "ARTICLE_CNT_PDAY": {
      "histogram": {
        "field": "pub_date",
        "interval": "86400"
      },
      "aggs": {
        "LATEST": {
          "nested": {
            "path": "latest"
          },
          "aggs": {
            "SUM_SVALUE": {
              "scripted_metric": {
                "init_script": "
                  state.te = [];
                  state.g = 0;
                  state.d = 0;
                  state.a = 0;
                ",
                "map_script": "
                  if (state.d != doc['_id'].value) {
                    state.d = doc['_id'].value;
                    state.te.add(state.a);
                    state.g = 0;
                    state.a = 0;
                  }
                  state.a = doc['latest.soc_mm_score'].value;
                ",
                "combine_script": "
                  state.te.add(state.a);
                  double count = 0;
                  for (t in state.te) {
                    count += ((t * 10) / $max);
                  }
                  return count;
                ",
                "reduce_script": "
                  double count = 0;
                  for (a in states) {
                    count += a;
                  }
                  return count;
                "
              }
            }
          }
        }
      }
    }
  }
}
I tried running this script in Kibana, and it fails with a null_pointer_exception.
What I'm getting is that there's something wrong with the reduce_script portion, so I tried changing this part:
FROM
for (a in states) {
  count += a;
}
TO
for (a in states) {
  count += 1;
}
And it worked perfectly fine, so I suspect the a variable isn't holding what it's supposed to.
Any ideas? I would appreciate your help, thank you very much!
The reason is explained here:
If a parent bucket of the scripted metric aggregation does not collect any documents an empty aggregation response will be returned from the shard with a null value. In this case the reduce_script's states variable will contain null as a response from that shard. reduce_script's should therefore expect and deal with null responses from shards.
So obviously one of your buckets is empty, and you need to deal with that null like this:
"reduce_script": "
double count = 0;
for (a in states) {
count += (a ?: 0);
}
return count;
"

Elasticsearch complex multi bucket time aggregation - Session data to User count

Having a dataset with user session data like this:
{"username": "TestUser",
 "sessionStartTime": "2019-02-14 09:00:00",
 "sessionEndTime": "2019-02-14 10:20:00"},
{"username": "User2",
 "sessionStartTime": "2019-02-14 02:00:00",
 "sessionEndTime": "2019-02-14 12:00:00"}
Is there an easy way to query Elasticsearch for a multi-bucket aggregated sum of sessions in a time range?
So basically I want to query for the time range 09:00:00 to 11:00:00 and get an aggregated hourly result like this:
{"bucketStart": "2019-02-14 09:00:00",
 "bucketEnd": "2019-02-14 10:00:00",
 "sessioncount": 2},
{"bucketStart": "2019-02-14 10:00:00",
 "bucketEnd": "2019-02-14 11:00:00",
 "sessioncount": 1}
The goal of this is to use the resulting data to draw a line graph of the "online" user session count, having only the session data in the database.
OK, I made this on my dates (day by day), so adjust the 3600000 * 24 (the number of ms in the date_histogram interval, a day for me).
The second thing you may have to do is round your dates to the hour (I mean 14:03 => 14:00, 12:37 => 12:00, etc., rounding down for the start time and up for the end time).
I am not a pro in Painless, so I store the agg result in a predefined array (size 99); maybe it can be done with a list or something dynamic. Anyway, if your sessions could be longer than 99 hours, adjust it.
The script builds an array of bucket dates per document, splitting the range between start and end date interval by interval.
{
  "query": {
    // your filter query
  },
  "aggs": {
    "active_alerts": {
      "date_histogram": {
        "interval": "day",
        "script": {
          "inline": "def currentDate = doc['sessionStartTime'].value; def endDate = doc['sessionEndTime'].value; def combined = new def[99]; def counter = 0; while ((currentDate < endDate) && (counter < 99)) { combined[counter] = currentDate; currentDate += 3600000 * 24; counter++; } return combined",
          "lang": "painless"
        }
      }
    }
  }
}
Hope it helps, let me know ;)
Full solutions for reference:
Additionally, this allows for an open-ended range where the "to" date can be absent.
In Kibana, add the following script to the X series Date Histogram agg:
{"script": {
"lang": "painless",
"source": "
def currentDate=(doc['from'].value);
def endDate=(doc['to']);
def endDateValue = endDate.size() == 0 ? ZonedDateTime.ofInstant(Calendar.getInstance().toInstant(), ZoneOffset.UTC): endDate.value;
def combined = new ArrayList();
while ((currentDate.isBefore(endDateValue))) { combined.add(currentDate); currentDate = currentDate.plusDays(1) } return combined"
},
"field": null,
"calendar_interval": "1d"
}
For an ES API agg:
GET /<index>/_search
{
  "query": {
    "match_all": {}
  },
  "aggs": {
    "fromto_range": {
      "date_histogram": {
        "script": {
          "lang": "painless",
          "source": "def currentDate = doc['from'].value; def endDate = doc['to']; def endDateValue = endDate.size() == 0 ? ZonedDateTime.ofInstant(Calendar.getInstance().toInstant(), ZoneOffset.UTC) : endDate.value; def combined = new ArrayList(); while (currentDate.isBefore(endDateValue)) { combined.add(currentDate); currentDate = currentDate.plusDays(1) } return combined"
        },
        "calendar_interval": "1d"
      }
    }
  }
}
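For the hourly buckets from the original question, a minimal adaptation of the same idea (a sketch, assuming sessionStartTime and sessionEndTime are mapped as date fields; the hourly_sessions name is just for illustration) could look like:
GET /<index>/_search
{
  "query": {
    "match_all": {}
  },
  "aggs": {
    "hourly_sessions": {
      "date_histogram": {
        "script": {
          "lang": "painless",
          "source": "def current = doc['sessionStartTime'].value.truncatedTo(ChronoUnit.HOURS); def end = doc['sessionEndTime'].value; def buckets = new ArrayList(); while (current.isBefore(end)) { buckets.add(current); current = current.plusHours(1); } return buckets"
        },
        "calendar_interval": "1h"
      }
    }
  }
}
Each session then emits one entry per hour it overlaps, so the per-bucket doc_count of the date_histogram is the session count the question asks for.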

Script to return array for scripted metric aggregation from combine

For the scripted metric aggregation, in the example shown in the documentation, the combine script returns a single number.
Can I instead return an array or hash?
I tried doing it, and though it did not return any error, I was not able to access those values from the reduce script.
In the reduce script, for each shard I am getting an instance that, converted to a string, reads as 'Script2$_run_closure1#52ef3bd9'.
Kindly let me know if this can be accomplished in any way.
At least as of Elasticsearch version 1.5.1, you can do so.
For example, we can modify the Elasticsearch example (scripted metric aggregation) to return an average profit (profit divided by the number of transactions):
{
  "query": {
    "match_all": {}
  },
  "aggs": {
    "avg_profit": {
      "scripted_metric": {
        "init_script": "_agg['transactions'] = []",
        "map_script": "if (doc['type'].value == \"sale\") { _agg.transactions.add(doc['amount'].value) } else { _agg.transactions.add(-1 * doc['amount'].value) }",
        "combine_script": "profit = 0; num_of_transactions = 0; for (t in _agg.transactions) { profit += t; num_of_transactions += 1 }; return [profit, num_of_transactions]",
        "reduce_script": "profit = 0; num_of_transactions = 0; for (a in _aggs) { profit += a[0] as int; num_of_transactions += a[1] as int }; return profit / num_of_transactions as float"
      }
    }
  }
}
NOTE: this is just a demo of an array in the combine script; you can calculate the average easily without using any arrays.
The response will look like:
"aggregations" : {
"avg_profit" : {
"value" : 42.5
}
}
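In newer Elasticsearch versions the Groovy _agg/_aggs style is gone, but the same array-returning trick works with Painless's state/states. A rough, untested sketch (assuming the same sale/amount documents) could be:
{
  "query": {
    "match_all": {}
  },
  "aggs": {
    "avg_profit": {
      "scripted_metric": {
        "init_script": "state.transactions = []",
        "map_script": "state.transactions.add(doc['type'].value == 'sale' ? doc['amount'].value : -1 * doc['amount'].value)",
        "combine_script": "double profit = 0; long n = 0; for (t in state.transactions) { profit += t; n++; } return [profit, n];",
        "reduce_script": "double profit = 0; long n = 0; for (a in states) { if (a != null) { profit += a[0]; n += a[1]; } } return n == 0 ? 0 : profit / n;"
      }
    }
  }
}
The combine script hands an array [profit, count] per shard to the reduce script, which sums both before dividing (guarding against null states from empty shards and division by zero).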
