Dividing counts of two different queries in Kibana - Elasticsearch

I am trying to create a Lucene expression that displays the division of the counts of two queries. Both queries match textual information, and in both cases the result is in the message field. I am not sure how to write this correctly. So far I have tried the following, without any luck:
doc['message'].value/doc['message'].value
For the first query, message contains text such as "404 not found".
For the second query, message contains text such as "500 error".
What I want to compute is count(404 not found) / count(500 error).
I would appreciate any help.

I'm going to add the disclaimer that it would be significantly cleaner to just fetch the two counts and perform the calculation on the client side, like this:
GET /INDEX/_search
{
"size": 0,
"aggs": {
"types": {
"terms": {
"field": "type",
"size": 10
}
}
}
}
This returns something like the following (with your distinct keys in place of the types from my example):
"aggregations": {
"types": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "Article",
"doc_count": 881
},
{
"key": "Page",
"doc_count": 301
}
]
}
}
Using that, take your distinct counts and do the division on the client side.
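For what it's worth, the two counts could also come from two plain _count requests, again dividing the values on the client. A sketch using the message values from the question (match_phrase is an assumption here, since message looks like analyzed text):
GET /INDEX/_count
{
  "query": {
    "match_phrase": {
      "message": "404 not found"
    }
  }
}

GET /INDEX/_count
{
  "query": {
    "match_phrase": {
      "message": "500 error"
    }
  }
}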
With the above being said, here is the hacky way I was able to put this together (via a single request):
GET /INDEX/_search
{
"size": 0,
"aggs": {
"parent_agg": {
"terms": {
"script": "'This approach is a weird hack'"
},
"aggs": {
"four_oh_fours": {
"filter": {
"term": {
"message": "404 not found"
}
},
"aggs": {
"count": {
"value_count": {
"field": "_index"
}
}
}
},
"five_hundreds": {
"filter": {
"term": {
"message": "500 error"
}
},
"aggs": {
"count": {
"value_count": {
"field": "_index"
}
}
}
},
"404s_over_500s": {
"bucket_script": {
"buckets_path": {
"four_oh_fours": "four_oh_fours.count",
"five_hundreds": "five_hundreds.count"
},
"script": "return params.four_oh_fours / (params.five_hundreds == 0 ? 1: params.five_hundreds)"
}
}
}
}
}
}
This should return an aggregate value based on the calculation within the script.
If someone can offer an approach aside from these two, I would love to see it. Hope this helps.
Edit - the same script done via the "expression" type rather than Painless (the default). Just replace the above script value with the following:
"script": {
"inline": "four_oh_fours / (five_hundreds == 0 ? 1 : five_hundreds)",
"lang": "expression"
}
This updates the script to accomplish the same thing via a Lucene expression.

Related

Search and aggregation on two indices

Two indices are created, each containing dates.
First index mapping:
PUT /index_one
{
"mappings": {
"properties": {
"date_start": {
"type": "date",
"format": "yyyy-MM-dd HH:mm:ss.SSSZZ||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis"
}
}
}
}
Second index mapping:
PUT /index_two
{
"mappings": {
"properties": {
"date_end": {
"type": "date",
"format": "yyyy-MM-dd HH:mm:ss.SSSZZ||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis"
}
}
}
}
I need to find dates in a certain range and compute the average of the difference between the dates.
I tried a request like this:
GET /index_one,index_two/_search?scroll=1m&q=[2021-01-01+TO+2021-12-31]&filter_path=aggregations,hits.total.value,hits.hits
{
"aggs": {
"filtered_dates": {
"filter": {
"bool": {
"must": [
{
"exists": {
"field": "date_start"
}
},
{
"exists": {
"field": "date_end"
}
}
]
}
},
"aggs": {
"avg_date": {
"avg": {
"script": {
"lang": "painless",
"source": "doc['date_end'].value.toInstant().toEpochMilli() - doc['date_begin'].value.toInstant().toEpochMilli()"
}
}
}
}
}
}
}
I get the following response to the request:
{
"hits": {
"total": {
"value": 16508
},
"hits": [
{
"_index": "index_one",
"_type": "_doc",
"_id": "93a34c5b-101b-45ea-9965-96a2e0446a28",
"_score": 1.0,
"_source": {
"date_begin": "2021-02-26 07:26:29.732+0300"
}
}
]
},
"aggregations": {
"filtered_dates": {
"meta": {},
"doc_count": 0,
"avg_date": {
"value": null
}
}
}
}
Can you please tell me if it is possible to make a query with search and aggregation over two indices in Elasticsearch? If so, how?
If you stored date_start on the document which contains date_end, it'd be much easier to figure out the average — check my answer to Store time related data in ElasticSearch.
Now, the script context operates on one single document at a time and has "no clue" about the other, potentially related docs. So if you don't store both dates at the same time in at least one doc, you'd need to somehow connect the docs nonetheless.
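For illustration, a minimal sketch of that single-document layout (the index name is made up; with both dates on one doc, a plain avg aggregation with the question's script works, because the script sees both values at once):
POST index_merged/_doc
{ "date_start": "2021-01-01T10:00:00", "date_end": "2021-01-02T10:00:00" }

GET index_merged/_search?size=0
{
  "aggs": {
    "avg_duration": {
      "avg": {
        "script": {
          "lang": "painless",
          "source": "doc['date_end'].value.toInstant().toEpochMilli() - doc['date_start'].value.toInstant().toEpochMilli()"
        }
      }
    }
  }
}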
If storing them together isn't possible, one option to connect the docs would be to use shared ids:
POST index_one/_doc
{ "id":1, "date_start": "2021-01-01" }
POST index_two/_doc
{ "id":1, "date_end": "2021-12-31" }
POST index_one/_doc/2
{ "id":2, "date_start": "2021-01-01" }
POST index_two/_doc/2
{ "id":2, "date_end": "2021-01-31" }
After that, it's possible to:
Target multiple indices — as you already do.
Group the docs by their IDs and select only those that include at least 2 buckets (assuming two buckets represent the start & the end).
Obtain the min & max dates — essentially cherry-picking the date_start and date_end to be used later down the line.
Use a bucket_script aggregation to calculate their difference (in milliseconds).
Leverage a top-level average bucket aggregation to run over all the difference buckets and ... average them.
In concrete terms:
GET /index_one,index_two/_search?scroll=1m&q=[2021-01-01+TO+2021-12-31]&filter_path=aggregations,hits.total.value,hits.hits
{
"aggs": {
"grouped_by_id": {
"terms": {
"field": "id",
"min_doc_count": 2,
"size": 10
},
"aggs": {
"min_date": {
"min": {
"field": "date_start"
}
},
"max_date": {
"max": {
"field": "date_end"
}
},
"diff": {
"bucket_script": {
"buckets_path": {
"min": "min_date",
"max": "max_date"
},
"script": "params.max - params.min"
}
}
}
},
"avg_duration_across_the_board": {
"avg_bucket": {
"buckets_path": "grouped_by_id>diff",
"gap_policy": "skip"
}
}
}
}
If everything goes right, you'll end up with:
...
"aggregations" : {
"grouped_by_id" : {
...
},
"avg_duration_across_the_board" : {
"value" : 1.70208E10 <-- 17,020,800,000 milliseconds ~ 4,728 hrs
}
}
⚠️ Caveat: note that the 2nd level terms aggregation has an adjustable size. You'll probably need to increase it to cover more docs. But there are theoretical and practical limits as to how far it makes sense to increase it.
📖 Shameless plug: this was inspired in part by the chapter Aggregations & Buckets in my recently published Elasticsearch Handbook — containing lots of other real-world, non-trivial examples 🙌

Term aggregation on filtered array items

I want to aggregate on terms that are inside an array, but I am only interested in some of the array items. I made up a simplified example. Basically, I want to aggregate on Type.string if Type.field is valid.
POST so/question
{
"Type": [
[
{
"field": "invalid",
"string": "A"
}
],
[
{
"field": "valid",
"string": "B"
}
]
]
}
GET /so/_search
{
"size": 0,
"aggs": {
"xxx": {
"filter": {
"term": {
"Type.field": "valid"
}
},
"aggs": {
"yyy": {
"terms": {
"field": "Type.string.keyword",
"min_doc_count": 0
}
}
}
}
}
}
The aggregation result has 2 keys, whereas I only need the "B" key.
"aggregations": {
"xxx": {
"doc_count": 1,
"yyy": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "A",
"doc_count": 1
},
{
"key": "B",
"doc_count": 1
}
]
}
}
}
Is there a way to aggregate on array items which match the filter?
Unfortunately I can't change the data format which would be the obvious solution.
Unless the documents are of nested type, I don't think it's possible with simple array types, because of the way Elasticsearch flattens the objects when it stores them.
Querying anything on these flattened objects will give you completely unexpected results.
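For reference, if reindexing were ever an option, mapping Type as nested would preserve the pairing between field and string, and a nested + filter aggregation would then bucket only the matching array items. A rough sketch (the index name so_nested is made up, and newer Elasticsearch versions drop mapping types, so adjust to your version):
PUT so_nested
{
  "mappings": {
    "properties": {
      "Type": {
        "type": "nested",
        "properties": {
          "field": { "type": "keyword" },
          "string": { "type": "keyword" }
        }
      }
    }
  }
}

GET so_nested/_search
{
  "size": 0,
  "aggs": {
    "types": {
      "nested": { "path": "Type" },
      "aggs": {
        "valid_only": {
          "filter": { "term": { "Type.field": "valid" } },
          "aggs": {
            "yyy": { "terms": { "field": "Type.string" } }
          }
        }
      }
    }
  }
}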
With the data as it is, though, I've come up with the below query: a terms aggregation driven by a script, which works fine for the document you've mentioned in the question.
POST so/_search
{
"size": 0,
"aggs": {
"xxx": {
"filter": {
"term": {
"Type.field": "valid"
}
},
"aggs": {
"yyy": {
"terms": {
"script": {
"source": """
int size = doc['Type.string.keyword'].values.length;
for(int i=0; i<size; i++){
String myString = doc['Type.string.keyword'][i];
if(myString.equals("B") && doc['Type.field.keyword'][i].equals("valid")){
return myString;
}
}""",
"lang": "painless"
}
}
}
}
}
}
}
However, if you ingest the below document, you'll see that the aggregation response becomes completely different. That is because array fields don't store each Type.field value and Type.string value at the same position i in their respective flattened value lists.
POST so/question/2
{
"Type": [
[
{
"field": "valid",
"string": "A"
}
],
[
{
"field": "invalid",
"string": "B"
}
]
]
}
Notice that even the simple bool query below doesn't work as expected and ends up matching both documents.
POST so/_search
{
"query": {
"bool": {
"must": [
{ "match": { "Type.field.keyword": "valid" }},
{ "match": { "Type.string.keyword": "B" }}
]
}
}
}
Hope it helps!

Aggregate over multiple fields without subaggregation

I have documents in my Elasticsearch index which have two fields. I want to build an aggregation over the combination of these, kind of like SQL's GROUP BY field_A, field_B, and get a row per existing combination. Everything I read says I should use subaggregations for this.
{
"aggs": {
"sales_by_article": {
"terms": {
"field": "catalogs.article_grouping",
"size": 1000000,
"order": {
"total_amount": "desc"
}
},
"aggs": {
"total_amount": {
"sum": {
"script": "Math.round(doc['amount.value'].value*100)/100.0"
}
},
"sales_by_submodel": {
"terms": {
"field": "catalogs.submodel_grouping",
"size": 1000,
"order": {
"total_amount": "desc"
}
},
"aggs": {
"total_amount": {
"sum": {
"script": "Math.round(doc['amount.value'].value*100)/100.0"
}
}
}
}
}
}
},
"size": 0
}
With the following simplified result:
{
"aggregations": {
"sales_by_article": {
"buckets": [
{
"key": "19114",
"total_amount": {
"value": 426794.25
},
"sales_by_submodel": {
"buckets": [
{
"key": "12",
"total_amount": {
"value": 51512.200000000004
}
},
...
]
}
},
...
]
}
}
}
However, the problem with this is that the ordering is not what I want. In this particular case, it first orders the articles based on total_amount per article, and then within an article it orders the submodels based on total_amount per submodel. However, what I want to achieve is to only have the deepest level and get an aggregation for the combination of article and submodel, ordered by the total_amount of this combination. This is the result I would like:
{
"aggregations": {
"sales_by_article_and_submodel": {
"buckets": [
{
"key": "1911412",
"total_amount": {
"value": 51512.200000000004
}
},
...
]
}
}
}
It's discussed in the docs a bit here: https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-terms-aggregation.html#_multi_field_terms_aggregation
Basically you can use a script to create a term which is derived from each document (using as many fields as you want) at query run time, but it will be slow. If you are doing it for ad hoc analysis, it'll work fine. If you need to serve these requests at some high rate, then you probably want to make a field in your model that is a combination of the two fields you're interested in, so the index is populated for you already.
Example query using the script approach:
GET agreements/agreement/_search?size=0
{
"aggs" : {
"myAggregationName" : {
"terms" : {
"script" : {
"source": "doc['owningVendorCode'].value + '|' + doc['region'].value",
"lang": "painless"
}
}
}
}
}
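If you need to serve these requests at a high rate, one way to populate that precombined field at index time is an ingest pipeline. A sketch, where the pipeline name and the vendor_region target field are made up, and vendor_region would need to be mapped as a keyword for the terms aggregation:
PUT _ingest/pipeline/combine_vendor_region
{
  "processors": [
    {
      "set": {
        "field": "vendor_region",
        "value": "{{owningVendorCode}}|{{region}}"
      }
    }
  ]
}

PUT agreements/_doc/1?pipeline=combine_vendor_region
{
  "owningVendorCode": "ACME",
  "region": "EMEA"
}
A plain terms aggregation on vendor_region then replaces the runtime script.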
I have since learned that a composite aggregation is the intended tool for this.
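For completeness, a sketch of that composite approach using the field names from the question (the index name sales is a placeholder; note that composite buckets are ordered by key and paginated via after_key, not ordered by a metric, so sorting by total_amount would still happen on the client):
GET sales/_search?size=0
{
  "aggs": {
    "by_article_and_submodel": {
      "composite": {
        "size": 1000,
        "sources": [
          { "article": { "terms": { "field": "catalogs.article_grouping" } } },
          { "submodel": { "terms": { "field": "catalogs.submodel_grouping" } } }
        ]
      },
      "aggs": {
        "total_amount": {
          "sum": {
            "script": "Math.round(doc['amount.value'].value*100)/100.0"
          }
        }
      }
    }
  }
}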

Aggregating with multiple fields returned in ElasticSearch

Suppose I have a relatively simple index with the following fields...
"testdata": {
"properties": {
"code": {
"type": "integer"
},
"name": {
"type": "string"
},
"year": {
"type": "integer"
},
"value": {
"type": "integer"
}
}
}
I can write a query to get the total sum of the values aggregated by the code like so:
{
"from":0,
"size":0,
"aggs": {
"by_code": {
"terms": {
"field": "code"
},
"aggs": {
"total_value": {
"sum": {
"field": "value"
}
}
}
}
}
}
And this returns the following (abridged) results:
"aggregations": {
"by_code": {
"doc_count_error_upper_bound": 478,
"sum_other_doc_count": 328116,
"buckets": [
{
"key": 236948,
"doc_count": 739,
"total_value": {
"value": 12537
}
},
However, this data is being fed to a web front-end, where both the code and the name need to be displayed. So the question is: is it possible to amend the query somehow to also return the name field, as well as the code field, in the results?
So, for example, the results can look a bit like this:
"aggregations": {
"by_code": {
"doc_count_error_upper_bound": 478,
"sum_other_doc_count": 328116,
"buckets": [
{
"key": 236948,
"code": 236948,
"name": "Test Name",
"doc_count": 739,
"total_value": {
"value": 12537
}
},
I've read up on sub-aggregations, but in this case there is a one-to-one relationship between code and name (so you wouldn't have different names for the same key). Also, in my real case, there are 5 other fields, like description, that I would like to return, so I am wondering if there is another way to do it.
In SQL (where this data originally came from before it was moved to Elasticsearch), I would write the following query:
SELECT Code, Name, SUM(Value) AS Total_Value
FROM [TestData]
GROUP BY Code, Name
You can achieve this using scripting, i.e. instead of specifying a field, you specify a combination of fields:
{
"from":0,
"size":0,
"aggs": {
"by_code": {
"terms": {
"script": "[doc.code.value, doc.name.value].join('-')"
},
"aggs": {
"total_value": {
"sum": {
"field": "value"
}
}
}
}
}
}
Note: you need to make sure dynamic scripting is enabled for this to work.
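On newer Elasticsearch versions, where dynamic Groovy scripting is gone, a Painless equivalent would look roughly like the following (assuming name has a keyword subfield, since aggregating on an analyzed string requires doc values):
{
  "from": 0,
  "size": 0,
  "aggs": {
    "by_code": {
      "terms": {
        "script": {
          "lang": "painless",
          "source": "doc['code'].value + '-' + doc['name.keyword'].value"
        }
      },
      "aggs": {
        "total_value": {
          "sum": {
            "field": "value"
          }
        }
      }
    }
  }
}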

Elasticsearch: how to scope aggregations to your query and filter?

I have been playing around with Elasticsearch queries and filters for some time now, but I have never worked with aggregations before. The idea that we can scope the aggregations with our query seems quite amazing to me, but I want to understand how to do it properly so that I do not make any mistakes. Currently, all my search queries are designed this way:
{
"query": {
},
"filter": {
},
"from": 0,
"size": 60
}
Now, when I added some aggregation buckets, the structure became this:
{
"aggs": {
"all_colors": {
"terms": {
"field": "color.name"
}
},
"all_brands": {
"terms": {
"field": "brand_slug"
}
},
"all_sizes": {
"terms": {
"field": "sizes"
}
}
},
"query": {
},
"filter": {
},
"from": 0,
"size": 60
}
However, the results of the aggregation are always the same, irrespective of what I provide in the filter.
Now, when I changed the query structure to something like this, it started showing different results:
{
"aggs": {
"all_colors": {
"terms": {
"field": "color.name"
}
},
"all_brands": {
"terms": {
"field": "brand_slug"
}
},
"all_sizes": {
"terms": {
"field": "sizes"
}
}
},
"query": {
"filtered": {
"query": {
},
"filter": {
}
}
},
"from": 0,
"size": 60
}
Does this mean I will have to change the structure of my search queries everywhere to this new filtered structure? Is there any other workaround that achieves the desired results without changing that much code?
Also, another thing I observed is that if my brand_slug field contains a multi-word value like "peter england", then the words are returned in separate buckets like this:
{
"buckets": [
{
"key": "england",
"doc_count": 368
},
{
"key": "peter",
"doc_count": 368
}
]
}
How can I ensure that both words end up in the same bucket, like this:
{
"buckets": [
{
"key": "peter england",
"doc_count": 368
}
]
}
UPDATE: I have been able to accomplish this second part by indexing brand, color, and sizes differently, like this:
"sizes": {
"type": "string",
"fields": {
"raw": {
"type": "string",
"index": "not_analyzed"
}
}
}
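With that mapping in place, the terms aggregations just have to target the raw subfields so that multi-word values stay in a single bucket, e.g.:
{
  "aggs": {
    "all_sizes": {
      "terms": {
        "field": "sizes.raw"
      }
    }
  },
  "from": 0,
  "size": 0
}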
What you've noticed is by design. Have a look at my answer to a similar question on SO. Basically, the input to both the aggregation and filter sections is the output of the query section. A filtered query, as you've suggested, would be the best way to achieve the results you desire. There is another way too: you can use a filter aggregation. Then you would not need to change your query and filter sections, but would simply copy the filter section inside each aggregation section; in my opinion, though, that would be overkill and a violation of the DRY principle in general.
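For illustration, that filter-aggregation alternative would look roughly like this, with the top-level filter repeated inside the aggregation (the term filter value here is a made-up example); the duplication is exactly what makes it un-DRY:
{
  "size": 0,
  "aggs": {
    "all_colors_for_brand": {
      "filter": {
        "term": { "brand_slug.raw": "peter england" }
      },
      "aggs": {
        "all_colors": {
          "terms": { "field": "color.name" }
        }
      }
    }
  }
}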
