Elasticsearch: find documents with distinct values and then aggregate over them

My index has a log-like structure: I insert a version of a document whenever an event occurs. For example, here are documents in the index:
{ "key": "a", subkey: 0 }
{ "key": "a", subkey: 0 }
{ "key": "a", subkey: 1 }
{ "key": "a", subkey: 1 }
{ "key": "b", subkey: 0 }
{ "key": "b", subkey: 0 }
{ "key": "b", subkey: 1 }
{ "key": "b", subkey: 1 }
I'm trying to construct a query in Elasticsearch which is basically equivalent to the following SQL query:
SELECT COUNT(*), key, subkey
FROM (SELECT DISTINCT key, subkey FROM t) AS pairs
GROUP BY key, subkey
The answer to this query would obviously be
(1, a, 0)
(1, a, 1)
(1, b, 0)
(1, b, 1)
How would I replicate this query in Elasticsearch? I came up with the following:
GET test_index/test_type/_search?search_type=count
{
  "aggregations": {
    "count_aggr": {
      "terms": {
        "field": "concatenated_key"
      },
      "aggs": {
        "sample_doc": {
          "top_hits": {
            "size": 1
          }
        }
      }
    }
  }
}
concatenated_key is a concatenation of key and subkey. This query creates a bucket for each (key, subkey) combination and returns a sample document from each bucket. However, I don't know how I can aggregate over the fields of _source.
Would appreciate any ideas. Thanks!

If you can't re-index the documents to add your own concatenated key field, this is one way of doing it:
GET /my_index/my_type/_search?search_type=count
{
  "aggs": {
    "key_agg": {
      "terms": {
        "field": "key",
        "size": 10
      },
      "aggs": {
        "sub_key_agg": {
          "terms": {
            "field": "subkey",
            "size": 10
          }
        }
      }
    }
  }
}
It will give you something like this:
"buckets": [
{
"key": "a",
"doc_count": 4,
"sub_key_agg": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": 0,
"doc_count": 2
},
{
"key": 1,
"doc_count": 2
}
]
}
},
{
"key": "b",
"doc_count": 4,
"sub_key_agg": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": 0,
"doc_count": 2
},
{
"key": 1,
"doc_count": 2
}
]
}
}
]
where you have the key ("key": "a") and then, nested under it, each combination with this key and the number of docs that match key=a and subkey=0 or key=a and subkey=1:
"buckets": [
{
"key": 0,
"doc_count": 2
},
{
"key": 1,
"doc_count": 2
}
]
Same goes for the other key.
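If you need the flat (count, key, subkey) rows of the SQL example rather than nested buckets, and you are on a more recent Elasticsearch version (6.1+), a composite aggregation can produce one bucket per distinct pair directly. A minimal sketch, assuming the same index and that key and subkey are aggregatable (keyword/numeric) fields:
GET test_index/_search
{
  "size": 0,
  "aggs": {
    "distinct_pairs": {
      "composite": {
        "size": 100,
        "sources": [
          { "key": { "terms": { "field": "key" } } },
          { "subkey": { "terms": { "field": "subkey" } } }
        ]
      }
    }
  }
}
Each returned bucket carries the pair in its key object and the number of matching documents in doc_count, and the after parameter lets you page through all pairs.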

Related

Elasticsearch - Sort results of Terms aggregation by key string length

I am querying ES with a Terms aggregation to find the first N unique values of a string field foo where the field contains a substring bar, and the document matches some other constraints.
Currently I am able to sort the results by the key string alphabetically:
{
  "query": {other constraints},
  "aggs": {
    "my_values": {
      "terms": {
        "field": "foo.raw",
        "include": ".*bar.*",
        "order": {"_key": "asc"},
        "size": N
      }
    }
  }
}
This gives results like
{
  ...
  "aggregations": {
    "my_values": {
      "doc_count_error_upper_bound": 0,
      "sum_other_doc_count": 145,
      "buckets": [
        {
          "key": "aa_bar_aa",
          "doc_count": 1
        },
        {
          "key": "iii_bar_iii",
          "doc_count": 1
        },
        {
          "key": "z_bar_z",
          "doc_count": 1
        }
      ]
    }
  }
}
How can I change the order option so that the buckets are sorted by the length of the strings in the foo key field, so that the results are like
{
  ...
  "aggregations": {
    "my_values": {
      "doc_count_error_upper_bound": 0,
      "sum_other_doc_count": 145,
      "buckets": [
        {
          "key": "z_bar_z",
          "doc_count": 1
        },
        {
          "key": "aa_bar_aa",
          "doc_count": 1
        },
        {
          "key": "iii_bar_iii",
          "doc_count": 1
        }
      ]
    }
  }
}
This is desired because a shorter string is closer to the search substring, so it is considered a 'better' match and should appear earlier in the results than a longer string.
Any alternative way to sort the buckets by how similar they are to the original substring would also be helpful.
I need the sorting to occur in ES so that I only have to load the top N results from ES.
I worked out a way to do this.
I used a sub-aggregation per dynamic bucket to calculate the length of the key string as another field.
Then I was able to sort by this new length field first, then by the actual key so keys of the same length are sorted alphabetically.
{
  "query": {other constraints},
  "aggs": {
    "my_values": {
      "terms": {
        "field": "foo.raw",
        "include": ".*bar.*",
        "order": [
          {"key_length": "asc"},
          {"_key": "asc"}
        ],
        "size": N
      },
      "aggs": {
        "key_length": {
          "max": {"script": "doc['foo.raw'].value.length()"}
        }
      }
    }
  }
}
This gave me results like
{
  ...
  "aggregations": {
    "my_values": {
      "doc_count_error_upper_bound": 0,
      "sum_other_doc_count": 145,
      "buckets": [
        {
          "key": "z_bar_z",
          "doc_count": 1
        },
        {
          "key": "aa_bar_aa",
          "doc_count": 1
        },
        {
          "key": "dd_bar_dd",
          "doc_count": 1
        },
        {
          "key": "bbb_bar_bbb",
          "doc_count": 1
        }
      ]
    }
  }
}
which is what I wanted.
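As a side note, on more recent Elasticsearch versions the inline script above is Painless, and it can also be written in the explicit object form. A sketch of the same sub-aggregation under that assumption:
"aggs": {
  "key_length": {
    "max": {
      "script": {
        "lang": "painless",
        "source": "doc['foo.raw'].value.length()"
      }
    }
  }
}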

Elasticsearch return document ids while doing aggregate query

Is it possible to get an array of Elasticsearch document ids while doing a group by, i.e.
Current output
"aggregations": {,
"types": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "Text Document",
"doc_count": 3310
},
{
"key": "Unknown",
"doc_count": 15
},
{
"key": "Document",
"doc_count": 13
}
]
}
}
Desired output
"aggregations": {,
"types": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "Text Document",
"doc_count": 3310,
"ids":["doc1","doc2", "doc3"....]
},
{
"key": "Unknown",
"doc_count": 15,
"ids":["doc11","doc12", "doc13"....]
},
{
"key": "Document",
"doc_count": 13
"ids":["doc21","doc22", "doc23"....]
}
]
}
}
Not sure if this is possible in Elasticsearch or not; below is my aggregation query:
{
  "size": 0,
  "aggs": {
    "types": {
      "terms": {
        "field": "docType",
        "size": 10
      }
    }
  }
}
Elasticsearch version:
6.3.2
You can use a top_hits aggregation, which will return the documents under each bucket. Using source filtering, you can select which fields are returned under hits.
Query:
"aggs": {
"district": {
"terms": {
"field": "docType",
"size": 10
},
"aggs": {
"docs": {
"top_hits": {
"size": 10,
"_source": ["ids"]
}
}
}
}
}
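If what you actually need are the document _id values rather than a stored ids field, note that _source filtering only controls which parts of the document source are returned; the _id comes back in each hit's metadata regardless, so you can disable _source entirely. A minimal sketch under that assumption (the top_hits size is capped, typically at 100 by default, so very large buckets cannot be fully listed this way):
"aggs": {
  "types": {
    "terms": {
      "field": "docType",
      "size": 10
    },
    "aggs": {
      "docs": {
        "top_hits": {
          "size": 100,
          "_source": false
        }
      }
    }
  }
}
The _id of every returned hit then appears under hits.hits within each bucket.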
For anyone interested, another solution is to create a custom key value using a script that builds a string of delimited values from the doc, including the id. It may not be pretty, but you can then parse it out later, and if you just need something minimal like the doc id, it may be worth it.
{
  "size": 0,
  "aggs": {
    "types": {
      "terms": {
        "script": "doc['docType'].value+'::'+doc['_id'].value",
        "size": 10
      }
    }
  }
}

How to get the count of a pair of field values?

I need to build a heatmap from the data I have in Elasticsearch. The heatmap is the count of cases for each combination of values of two specific fields. For the data
{'name': 'john', 'age': '10', 'car': 'peugeot'}
{'name': 'john', 'age': '10', 'car': 'audi'}
{'name': 'john', 'age': '12', 'car': 'fiat'}
{'name': 'mary', 'age': '3', 'car': 'mercedes'}
I would like to get the number of unique pairs for the values of name and age. That would be
john, 10, 2
john, 12, 1
mary, 3, 1
I could get all the events and make the count myself but I was hoping that there would be some magical aggregation which could provide that.
It would not be a problem to have it in a nested form, such as
{
  'john': {
    '10': 2,
    '12': 1
  },
  'mary': {
    '3': 1
  }
}
or whatever is practical.
You can use an inner (sub) aggregation. Use a query like:
POST count-test/_search
{
  "size": 0,
  "aggs": {
    "group By Name": {
      "terms": {
        "field": "name"
      },
      "aggs": {
        "group By age": {
          "terms": {
            "field": "age"
          }
        }
      }
    }
  }
}
The output won't be exactly in the form you mentioned, but it will look like this:
"aggregations": {
"group By Name": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "john",
"doc_count": 3,
"group By age": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "10",
"doc_count": 2
},
{
"key": "12",
"doc_count": 1
}
]
}
},
{
"key": "mary",
"doc_count": 1,
"group By age": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "3",
"doc_count": 1
}
]
}
}
]
}
}
Hope this helps!!
You can use a terms aggregation with a script:
https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-terms-aggregation.html#_multi_field_terms_aggregation
This way you can "concat" whatever you want, such as:
{
  "aggs": {
    "data": {
      "terms": {
        "script": {
          "source": "doc['name'].value + doc['age'].value",
          "lang": "painless"
        }
      }
    }
  }
}
(Not sure about the string concat syntax).
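If scripting is undesirable and you are on Elasticsearch 7.12 or later, a multi_terms aggregation can bucket on the pair directly. A sketch, assuming both fields are mapped as keyword (otherwise point it at the .keyword sub-fields):
{
  "size": 0,
  "aggs": {
    "name_age_pairs": {
      "multi_terms": {
        "terms": [
          { "field": "name" },
          { "field": "age" }
        ]
      }
    }
  }
}
Each bucket's key is then the [name, age] pair and doc_count is the number of documents with that combination.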

Elasticsearch count doc_count occurrences on aggs

I have an elasticsearch aggregation query like this.
{
  "size": 0,
  "aggs": {
    "Domains": {
      "terms": {
        "field": "domains",
        "size": 0
      },
      "aggs": {
        "Identifier": {
          "terms": {
            "field": "alertIdentifier",
            "size": 0
          }
        }
      }
    }
  }
}
And it results in a bucket aggregation like the following:
"aggregations": {
"Domains": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "IT",
"doc_count": 147,
"Identifier": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "-2623493027134706869",
"doc_count": 7
},
{
"key": "-6590617724257725266",
"doc_count": 7
},
{
"key": "1106147277275983835",
"doc_count": 4
},
{
"key": "-3070527890944301111",
"doc_count": 4
},
{
"key": "-530975388352676402",
"doc_count": 3
},
{
"key": "-6225620509938623294",
"doc_count": 2
},
{
"key": "1652134630535374656",
"doc_count": 1
},
{
"key": "4191687133126999365",
"doc_count": 8
},
{
"key": "6882920925888555081",
"doc_count": 2
}
]
}
}
What I need is to count the number of occurrences of each doc_count value, like this:
1 times: 0
2 times: 2
3 times: 1
equal or more than 4 times: 5
Any idea how to build the ES query to count the occurrences of doc_count?
Thanks in advance.
Below is the ES query:
POST /xt-history*/_search
{
"query": {
"filtered": {"query": {"match_all": {} },
"filter": {
"and": [
{"term": {"type": "10"}}
]
}
}
},
"size": 0,
"aggs": {
"repetitions": {
"scripted_metric": {
"init_script" : "_agg['all'] = []; _agg['all2'] = [];",
"map_script" : "_agg['all'].add(_source['alert']['alertIdentifier'])",
"combine_script" : "for (alertId in _agg['all']) { _agg['all2'].add(alertId); }; return _agg['all2']",
"reduce_script" : "all3 = []; answer = {}; answer['one'] = []; answer['two'] = []; answer['three'] = []; answer['four'] = []; answer['five'] = []; answer['five_plus'] = []; for (alertIds in _aggs) { for (alertId1 in alertIds) { all3.add(alertId1); }; }; for (alertId in all3) { if (answer['five_plus'].contains(alertId)) { } else if(answer['five'].contains(alertId)) {answer['five'].remove(alertId); answer['five_plus'].add(alertId);} else if(answer['four'].contains(alertId)) {answer['four'].remove(alertId); answer['five'].add(alertId);} else if(answer['three'].contains(alertId)) {answer['three'].remove(alertId); answer['four'].add(alertId);} else if(answer['two'].contains(alertId)) {answer['two'].remove(alertId); answer['three'].add(alertId);} else if(answer['one'].contains(alertId)) {answer['one'].remove(alertId); answer['two'].add(alertId);} else {answer['one'].add(alertId);}; }; fans = []; fans.add(answer['one'].size()); fans.add(answer['two'].size()); fans.add(answer['three'].size()); fans.add(answer['four'].size()); fans.add(answer['five'].size()); fans.add(answer['five_plus'].size()); return fans"
}
}
}
}
query output:
{
  "took": 4770,
  "timed_out": false,
  "_shards": {
    "total": 190,
    "successful": 189,
    "failed": 0
  },
  "hits": {
    "total": 334,
    "max_score": 0,
    "hits": []
  },
  "aggregations": {
    "repetitions": {
      "value": [
        63,
        39,
        3,
        10,
        2,
        13
      ]
    }
  }
}
where the first value is the number of repetitions for doc_count=1, the second value is the number of repetitions for doc_count=2, ..., and the last value is the number of repetitions for doc_count >= 5

Elasticsearch Terms or Cardinality Aggregation - Order by number of distinct values

Friends,
I am doing some analysis to find unique pairs across hundreds of millions of documents. A mock example is shown below:
doc  field1 : field2
1    AAA : BBB
2    AAA : CCC
3    PPP : QQQ
4    PPP : QQQ
5    XXX : YYY
6    XXX : YYY
7    MMM : NNN
90% of the documents contain a unique pair, as shown above in docs 3, 4, 5, 6 and 7, which I am not interested in for my aggregation result. I am interested in aggregating docs 1 and 2.
Terms Aggregation Query:
"aggs": {
"f1": {
"terms": {
"field": "FIELD1",
"min_doc_count": 2
},
"aggs": {
"f2": {
"terms": {
"field": "FIELD2"
}
}
}
}
}
Terms Aggregation Result
"aggregations": {
"f1": {
"buckets": [
{
"key": "PPP",
"doc_count": 2,
"f2": {
"buckets": [
{
"key": "QQQ",
"doc_count": 2
}
]
}
},
{
"key": "XXX",
"doc_count": 2,
"f2": {
"buckets": [
{
"key": "YYY",
"doc_count": 2
}
]
}
},
{
"key": "AAA",
"doc_count": 2,
"f2": {
"buckets": [
{
"key": "BBB",
"doc_count": 1
},
{
"key": "CCC",
"doc_count": 1
}
]
}
}
]
}
}
I am interested in only key AAA being in the aggregation result. What is the best way to filter the aggregation result so that it contains only such keys with distinct pairs?
I tried a cardinality aggregation, which returns the unique value count. However, I am not able to filter out what I am not interested in from the aggregation results.
Cardinality Aggregation Query
"aggs": {
"f1": {
"terms": {
"field": "FIELD1",
"min_doc_count": 2
},
"aggs": {
"f2": {
"cardinality": {
"field": "FIELD2"
}
}
}
}
}
Cardinality Aggregation Result
"aggregations": {
"f1": {
"buckets": [
{
"key": "PPP",
"doc_count": 2,
"f2": {
"value" : 1
}
},
{
"key": "XXX",
"doc_count": 2,
"f2": {
"value" : 1
}
},
{
"key": "AAA",
"doc_count": 2,
"f2": {
"value" : 2
}
}
]
}
}
At least if I could sort by the cardinality value, that would help me find some workarounds. Please help me in this regard.
P.S.: Writing a Spark/MapReduce program to post-process/filter the aggregation result is not the expected solution for this issue.
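For reference, a terms aggregation can be ordered by a single-value metric sub-aggregation, so the FIELD1 buckets can at least be sorted by the cardinality of FIELD2. A minimal sketch (keep in mind that cardinality is approximate, so the ordering is approximate too):
"aggs": {
  "f1": {
    "terms": {
      "field": "FIELD1",
      "min_doc_count": 2,
      "order": { "f2": "desc" }
    },
    "aggs": {
      "f2": {
        "cardinality": {
          "field": "FIELD2"
        }
      }
    }
  }
}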
I suggest using a filter query along with the aggregations, since you are only interested in field1=AAA.
I have a similar example here.
For example, I have an index of all patients in my hospital. I store their drug use in a nested object DRUG. Each patient could take different drugs, and each could take a single drug for multiple times.
Now if I wanted to find the number of patients who took aspirin at least once, the query could be:
{
  "size": 0,
  "_source": false,
  "query": {
    "filtered": {
      "query": {
        "match_all": {}
      },
      "filter": {
        "nested": {
          "path": "DRUG",
          "filter": {
            "bool": {
              "must": [{ "term": { "DRUG.NAME": "aspirin" } }]
            }
          }
        }
      }
    }
  },
  "aggs": {
    "DRUG_FACETS": {
      "nested": {
        "path": "DRUG"
      },
      "aggs": {
        "DRUG_NAME_FACETS": {
          "terms": { "field": "DRUG.NAME", "size": 0 },
          "aggs": {
            "DISTINCT": { "cardinality": { "field": "DRUG.PATIENT" } }
          }
        }
      }
    }
  }
}
Sample result:
{
  "took": 6,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "failed": 0
  },
  "hits": {
    "total": 6,
    "max_score": 0,
    "hits": []
  },
  "aggregations": {
    "DRUG_FACETS": {
      "doc_count": 11,
      "DRUG_NAME_FACETS": {
        "buckets": [
          {
            "key": "aspirin",
            "doc_count": 6,
            "DISTINCT": {
              "value": 6
            }
          },
          {
            "key": "vitamin-b",
            "doc_count": 3,
            "DISTINCT": {
              "value": 2
            }
          },
          {
            "key": "vitamin-c",
            "doc_count": 2,
            "DISTINCT": {
              "value": 2
            }
          }
        ]
      }
    }
  }
}
The first one in the buckets would be aspirin. But you can see that 2 other patients also took vitamin-b when they took aspirin.
If you change the field value of DRUG.NAME to another drug name, for example "vitamin-b", I suppose you would get vitamin-b in the first position of the buckets.
Hopefully this is helpful to your question.
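Note that this example uses the pre-5.0 filtered query and the size: 0 shortcut in the terms aggregation. On Elasticsearch 5.x and later the same filter would be expressed with a bool query and an explicit terms size; a rough sketch of the query part only, under that assumption:
"query": {
  "bool": {
    "filter": {
      "nested": {
        "path": "DRUG",
        "query": {
          "term": { "DRUG.NAME": "aspirin" }
        }
      }
    }
  }
}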
A bit late, but hopefully it helps others.
A simple approach is to filter only 'AAA' records in a top-level filter aggregation:
{
  "size": 0,
  "aggregations": {
    "filterAAA": {
      "filter": {
        "term": {
          "FIELD1": "AAA"
        }
      },
      "aggregations": {
        "f1": {
          "terms": {
            "field": "FIELD1",
            "min_doc_count": 2
          },
          "aggregations": {
            "f2": {
              "terms": {
                "field": "FIELD2"
              }
            }
          }
        }
      }
    }
  }
}
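If the goal is to keep every FIELD1 bucket whose FIELD2 values are not all identical, rather than hard-coding AAA, a bucket_selector pipeline aggregation over a cardinality sub-aggregation is another option. A sketch, assuming a version with pipeline aggregations and Painless scripting (the aggregation names are made up for illustration):
"aggs": {
  "f1": {
    "terms": {
      "field": "FIELD1",
      "min_doc_count": 2
    },
    "aggs": {
      "f2_distinct": {
        "cardinality": { "field": "FIELD2" }
      },
      "keep_multi_valued": {
        "bucket_selector": {
          "buckets_path": { "distinct": "f2_distinct" },
          "script": "params.distinct > 1"
        }
      }
    }
  }
}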
