I am trying to get the unique count for each label used across a set of documents. In order to do that, and have the JSON returned in the buckets (cardinality doesn't return the JSON and the count together), I need to write a pipeline query.
My query gets me halfway there, but I'm missing the second part that counts the number of buckets a label appears in.
Here's my query:
{
  "size": 0,
  "aggs": {
    "unique_count": {
      "composite": {
        "sources": [
          { "metadataId": { "terms": { "field": "document.metadata.id" } } },
          { "label": { "terms": { "field": "document.label" } } }
        ]
      }
    }
  }
}
This produces
...
"buckets" : [
{
"key" : {
"metadataId" : "1",
"label" : "label one"
},
"doc_count" : 2
},
{
"key" : {
"metadataId" : "2",
"label" : "label one"
},
"doc_count" : 1
},
{
"key" : {
"metadataId" : "3",
"label" : "label three"
},
"doc_count" : 3
}
]
...
The problem I'm facing is that each (metadataId, label) combination gets its own bucket, and what I want to return is, per label, the number of buckets it appears in. For example, in the buckets above the label "label one" is contained within two buckets, so its doc_count should be 2, while "label three" should have a doc_count of 1.
After the last phase in the pipeline I'd like to see the following output:
"buckets" : [
  {
    "label" : "label one",
    "doc_count" : 2
  },
  {
    "label" : "label three",
    "doc_count" : 1
  }
]
I've tried all sorts of things, but they're just not getting me close to the output I need. Can anyone point me in the right direction?
Try nested terms aggregations, where the first-level aggregation is on the label field and the second level is on the metadataId field. The aggs block should look something like:
"aggs": {
  "labels": {
    "terms": {
      "field": "label.keyword",
      "size": 1000
    },
    "aggs": {
      "metadata": {
        "terms": {
          "field": "metadataId.keyword",
          "size": 1000
        }
      }
    }
  }
}
As output, you will get buckets of labels, each with key set to the label value and doc_count set to the number of docs matching that label. Each label bucket will contain nested buckets of metadataId, each with key set to the metadataId value and doc_count set to the number of docs matching both that label and that metadataId.
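For the sample buckets from the question, the response would look roughly like this (a sketch; the doc counts follow the example data and the aggregation names match the query above):
"aggregations" : {
  "labels" : {
    "buckets" : [
      {
        "key" : "label one",
        "doc_count" : 3,
        "metadata" : {
          "buckets" : [
            { "key" : "1", "doc_count" : 2 },
            { "key" : "2", "doc_count" : 1 }
          ]
        }
      },
      {
        "key" : "label three",
        "doc_count" : 3,
        "metadata" : {
          "buckets" : [
            { "key" : "3", "doc_count" : 3 }
          ]
        }
      }
    ]
  }
}
The number you are after ("label one" → 2, "label three" → 1) is the number of nested metadata buckets under each label; if you would rather have it as a single value, a cardinality sub-aggregation on metadataId under labels would surface it directly.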
Related
I need stats from Elasticsearch but can't work out the request. I would like to know how many appointments there are for each number of people per appointment.
Example document from the appointment index:
{
  "id" : "383577",
  "persons" : [
    {
      "id" : "1"
    },
    {
      "id" : "2"
    }
  ]
}
What I would like:
"buckets" : [
{
"key" : "1", <--- appointment of 1 person
"doc_count" : 1241891
},
{
"key" : "2", <--- appointment of 2 persons
"doc_count" : 10137
},
{
"key" : "3", <--- appointment of 3 persons
"doc_count" : 8064
}
]
Thank you
The easiest way to do this is to create another integer field containing the length of the persons array and to aggregate on that field.
{
  "id" : "383577",
  "personsCount": 2, <---- add this field
  "persons" : [
    {
      "id" : "1"
    },
    {
      "id" : "2"
    }
  ]
}
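Once personsCount exists, the buckets you want come from a plain terms aggregation on it; a minimal sketch (the index and aggregation names are illustrative):
GET appointment/_search
{
  "size": 0,
  "aggs": {
    "persons_per_appointment": {
      "terms": {
        "field": "personsCount"
      }
    }
  }
}
Each bucket key is then a number of persons and its doc_count is the number of appointments with that many persons.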
The non-optimal way of achieving what you expect is to use a script that will return the length of the persons array dynamically, but be aware that this is sub-optimal and can potentially harm your cluster depending on the volume of data you have:
GET /_search
{
"aggs": {
"persons": {
"terms": {
"script": "doc['persons.id'].size()"
}
}
}
}
If you want to update all your documents to create that field you can do it like this:
POST index/_update_by_query
{
"script": {
"source": "ctx._source.personsCount = ctx._source.persons.length"
}
}
However, you'll also need to modify the logic of your indexing application to create that new field.
I am looking for an Elasticsearch aggregation + mapping that will return the most common list value for a certain field.
For example for docs:
{"ToneCurvePV2012": [1,2,3]}
{"ToneCurvePV2012": [1,5,6]}
{"ToneCurvePV2012": [1,7,8]}
{"ToneCurvePV2012": [1,2,3]}
I wish for the aggregation result to be:
[1,2,3] (since it appears twice).
So far, every aggregation I have tried returns individual values such as 1.
This is not possible with the default terms aggregation. You need to use a terms aggregation with a script. Please note that this might impact your cluster's performance.
Here, I have used a script that builds a string from the array and uses it as the aggregation key. So if you have an array value like [1,2,3], the script creates a string representation such as '[1,2,3]', and that key is used for the aggregation.
Below is a sample query you can use to generate the aggregation you expect:
POST index1/_search
{
"size": 0,
"aggs": {
"tone_s": {
"terms": {
"script": {
"source": "def value='['; for(int i=0;i<doc['ToneCurvePV2012'].length;i++){value= value + doc['ToneCurvePV2012'][i] + ',';} value+= ']'; value = value.replace(',]', ']'); return value;"
}
}
}
}
}
Output:
{
"hits" : {
"total" : {
"value" : 4,
"relation" : "eq"
},
"max_score" : null,
"hits" : [ ]
},
"aggregations" : {
"tone_s" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 0,
"buckets" : [
{
"key" : "[1,2,3]",
"doc_count" : 2
},
{
"key" : "[1,5,6]",
"doc_count" : 1
},
{
"key" : "[1,7,8]",
"doc_count" : 1
}
]
}
}
}
PS: the key will come back as a string, not as an array, in the aggregation response.
I'm collecting logs through Elasticsearch. The logs are collected as below.
Example:
{
  "name" : "John",
  "team" : "IT",
  "startTime" : "21:00",
  "result" : "pass"
},
{
  "name" : "James",
  "team" : "HR",
  "startTime" : "21:04",
  "result" : "pass"
},
{
  "name" : "Paul",
  "team" : "IT",
  "startTime" : "21:05",
  "result" : "pass"
},
{
  "name" : "Jackson",
  "team" : "Marketing",
  "startTime" : "21:30",
  "result" : "fail"
},
{
  "name" : "John",
  "team" : "IT",
  "startTime" : "21:41",
  "result" : "pass"
},
.....and so on
If you run the query below on these collected logs,
GET logData/_search
{
"size": 0,
"aggs": {
"Documents_per_team": {
"terms": {
"field": "team"
}
}
}
}
The following results are returned:
"aggregations" : {
"Documents_per_team" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 0,
"buckets" : [
{
"key" : "IT",
"doc_count" : 70
},
{
"key" : "Marketing",
"doc_count" : 55
},
{
"key" : "HR",
"doc_count" : 11
}
]
}
}
}
What I want is to eliminate duplicates in this result, so that each name is counted only once per team.
[AS-IS]
As shown above, the IT team count comes out as 70.
[The result I want]
If John performed 50 times, Kate performed 10 times, and Paul performed 10 times, the IT team count should be 3 (because there are three IT team members).
Can I get a team-by-team result after removing duplicates?
Thanks
You've got two options:
a cardinality sub-aggregation (straightforward, but approximate and not very scalable, although those limits only matter in very specific/advanced situations)
or a scripted metric aggregation (slower and more verbose, but exact).
Both approaches assume that names are unique at the team level. If they're not, you'll need to adjust accordingly. It is also assumed that name is mapped as a keyword, just like team. If not, you'll need to replace the field names with your_field.keyword.
1. Cardinality
{
"size": 0,
"aggs": {
"Documents_per_team": {
"terms": {
"field": "team"
},
"aggs": {
"unique_names_per_team": {
"cardinality": {
"field": "name"
}
}
}
}
}
}
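For the example data, the cardinality variant returns something along these lines (a sketch; the doc counts are illustrative and cardinality values are approximate by design):
"aggregations" : {
  "Documents_per_team" : {
    "buckets" : [
      {
        "key" : "IT",
        "doc_count" : 70,
        "unique_names_per_team" : { "value" : 3 }
      },
      {
        "key" : "Marketing",
        "doc_count" : 55,
        "unique_names_per_team" : { "value" : 1 }
      },
      {
        "key" : "HR",
        "doc_count" : 11,
        "unique_names_per_team" : { "value" : 1 }
      }
    ]
  }
}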
2. Scripted Metric
{
"size": 0,
"aggs": {
"Documents_per_team": {
"scripted_metric": {
"init_script": "state.by_department = [:]; state.dept_vs_name = [:];",
"map_script": """
def dept = doc['team'].value;
def name = doc['name'].value;
def name_already_considered = state.by_department.containsKey(dept) && state.dept_vs_name[dept].containsKey(name);
if (name_already_considered) {
return;
}
if (state.by_department.containsKey(dept)) {
state.by_department[dept] += 1;
} else {
state.by_department[dept] = 1
}
if (!state.dept_vs_name.containsKey(dept)) {
// init a new map and add its first member
state.dept_vs_name[dept] = [name:true];
} else if (!state.dept_vs_name[dept].containsKey(name)) {
state.dept_vs_name[dept][name] = true;
}
""",
"combine_script": "return state.by_department",
"reduce_script": "return states"
}
}
}
}
Note: If you also wish to see the underlying dept vs. name breakdown, you can modify the combine_script to return the whole state, i.e. return state.
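With a single shard, the scripted metric variant would return something shaped like this (a sketch; the value is a list because the reduce_script returns states, one entry per shard, and the counts are illustrative):
"aggregations" : {
  "Documents_per_team" : {
    "value" : [
      {
        "IT" : 3,
        "HR" : 1,
        "Marketing" : 1
      }
    ]
  }
}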
We index the following products:
{
"id": "1",
"name": "the-name",
"categories": [
{
"id" : 10,
"name" : "cat-1"
},
{
"id" : 20,
"name" : "cat-2"
}
]
}
We are doing an aggregation on categories.id using:
REQUEST:
//...
"aggs": {
"by_cat": {
"terms": {
"field": "categories.id",
"size": 10
}
}
}
---
RESPONSE:
// ...
"by_cat" : {
"buckets" : [
{
"key" : 10,
"doc_count" : 804
},
{
"key" : 20,
"doc_count" : 327
},
It works well; however, each bucket contains only the categories.id in the key field. What we would like is to also have the name of the category in the bucket, for example:
// ...
"buckets" : [
{
"key" : 10,
"metadata": {
"name": "cat-1"
},
"doc_count" : 804
},
{
"key" : 20,
"metadata": {
"name": "cat-2"
},
"doc_count" : 327
},
What is the right way to do that? We found two ways to get this information, but they both look "hackish":
Using top_hits with size 1 and _source limited to categories: it retrieves one document per bucket containing the information we need (sketched below). This first solution doesn't look great performance-wise, and the more aggregations we have, the more bloated the response becomes.
Adding a new field id_name that concatenates id and name and running the terms aggregation on it. This looks more like a hack and may get complicated with many fields.
We also tried mixing field and script in terms, but it doesn't help.
Aggregation metadata looked like exactly what we wanted, but it is global for all the buckets and not dynamic.
Is there another way to retrieve this information?
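For reference, the top_hits variant from the first option looks roughly like this (a sketch, assuming categories is not mapped as nested; the sub-aggregation name is illustrative):
"aggs": {
  "by_cat": {
    "terms": {
      "field": "categories.id",
      "size": 10
    },
    "aggs": {
      "category_sample": {
        "top_hits": {
          "size": 1,
          "_source": ["categories"]
        }
      }
    }
  }
}
The returned _source contains the whole categories array of the sample document, so the entry matching the bucket key still has to be picked out client-side, which is part of why this feels bloated.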
Elasticsearch (ES) terms aggregation results are approximate, both in terms of which finalists make the list and their counts. https://www.elastic.co/guide/en/elasticsearch/reference/1.6/search-aggregations-bucket-terms-aggregation.html
I'd like to have accurate counts for the estimated finalists, even though the finalists themselves are not exact. In other words, I want to eliminate the per-bucket document count error.
I am thinking of issuing a second query that is filtered by the finalists; since I know the number of finalists, I can count them accurately if I set size to the number of finalists.
Using the example from the link above: after I have the top 5 products a, z, c, g, b from the first aggregation result, I want to find their accurate counts:
{
...
"aggregations" : {
"products" : {
"doc_count_error_upper_bound" : 46,
"buckets" : [
{
"key" : "Product A",
"doc_count" : 100,
"doc_count_error_upper_bound" : 0
},
{
"key" : "Product Z",
"doc_count" : 52,
"doc_count_error_upper_bound" : 2
},
...
]
}
}
}
Since these doc_counts are only estimates, I can issue a second query filtered by the product ids:
{
...
"query": {
"filtered": {
"filter": {
"terms": {"product": ["Product A", "Product Z","Product C","Product G","Product B"]}
}
}
},
"aggs":{
"products":{
"terms":{
"field": "product",
"size": 5,
"shard_size": 5
}
}
}
}
My questions are:
Does this give me the correct counts for a, z, c, g, b?
Is there a better way to do this, perhaps inside one query with a nested aggregation?
Parsing the aggregation results to prepare the filters is done with Java code, and it is error-prone. Is there an example of this task, or can it be done by ES?
Thanks in advance.