I'm trying to do something with Elasticsearch that should be quite simple. I have an index containing documents of the shape: {"timestamp": int, "pricePerUnit": int, "units": int}. I want to visualize the average price over time in a histogram. Note that I don't want the average of "pricePerUnit"; I want the average price paid per unit. That means, for each time bucket, multiplying "pricePerUnit" by "units" for each document and summing to get the total value sold, then dividing by the sum of the total units sold in the bucket.
A standard Kibana line chart won't work: I can get the average of "pricePerUnit * units", but I can't divide that aggregation by the sum of the total units. It also can't be done in TSVB, which doesn't allow scripts/scripted fields. I can't use Timelion either, because the "timestamp" field isn't a time field (I know, but there's nothing I can do about it).
I'm therefore trying to use Vega. However, I'm running into a problem with nested aggregations. Here's the ES query I'm running:
{
"$schema": "https://vega.github.io/schema/vega/v3.json",
"data": {
"name": "vals",
"url": {
"index": "index_name",
"body": {
"aggs": {
"2": {
"histogram": {
"field": "timestamp",
"interval": 2000,
"min_doc_count": 1
},
"aggs": {
"1": {
"avg": {
"field": "pricePerUnit",
"script": {
"inline": "doc['pricePerUnit'].value * doc['units'].value",
"lang": "painless"
}
}
}
}
}
},
"size": 0,
"stored_fields": [
"*"
],
"script_fields": {
"spend": {
"script": {
"source": "doc['pricePerUnit'].value * doc['units'].value",
"lang": "painless"
}
}
},
"docvalue_fields": [],
"_source": {
"excludes": []
},
"query": {
"bool": {
"must": [],
"filter": [
{
"match_all": {}
},
{
"range": {
"timeslot.startTime": {
"gte": 1621292400,
"lt": 1621428349
}
}
}
],
"should": [],
"must_not": []
}
}
},
"format": {"property": "aggregations.2.buckets"}
}
}
,
"scales": [
{
"name": "yscale",
"type": "linear",
"zero": true,
"domain": {"data": "vals", "field": "1.value"},
"range": "height"
},
{
"name": "xscale",
"type": "time",
"range": "width"
}
],
"axes": [
{"scale": "yscale", "orient": "left"},
{"scale": "xscale", "orient": "bottom"}
],
"marks": [
{
"type": "line",
"encode": {
"update": {
"x": {"scale": "xscale", "field": "key"},
"y": {"scale": "yscale", "field": "1.value"}
}
}
}
]
}
It gives me the following result set:
{
"took": 1,
"timed_out": false,
"_shards": {
"total": 4,
"successful": 4,
"skipped": 0,
"failed": 0
},
"hits": {
"total": 401,
"max_score": null,
"hits": []
},
"aggregations": {
"2": {
"buckets": [
{
"1": {
"value": 86340
},
"key": 1621316000,
"doc_count": 7
},
{
"1": {
"value": 231592.92307692306
},
"key": 1621318000,
"doc_count": 13
},
{
"1": {
"value": 450529.23529411765
},
"key": 1621320000,
"doc_count": 17
},
{
"1": {
"value": 956080.0555555555
},
"key": 1621322000,
"doc_count": 18
},
{
"1": {
"value": 1199865.5714285714
},
"key": 1621324000,
"doc_count": 14
},
{
"1": {
"value": 875300.7368421053
},
"key": 1621326000,
"doc_count": 19
},
{
"1": {
"value": 926738.8
},
"key": 1621328000,
"doc_count": 20
},
{
"1": {
"value": 3239475.3333333335
},
"key": 1621330000,
"doc_count": 18
},
{
"1": {
"value": 3798063.714285714
},
"key": 1621332000,
"doc_count": 21
},
{
"1": {
"value": 482089.5
},
"key": 1621334000,
"doc_count": 4
},
{
"1": {
"value": 222952.33333333334
},
"key": 1621336000,
"doc_count": 12
},
{
"1": {
"value": 742225.75
},
"key": 1621338000,
"doc_count": 8
},
{
"1": {
"value": 204203.25
},
"key": 1621340000,
"doc_count": 4
},
{
"1": {
"value": 294886
},
"key": 1621342000,
"doc_count": 4
},
{
"1": {
"value": 284393.75
},
"key": 1621344000,
"doc_count": 4
},
{
"1": {
"value": 462800.5
},
"key": 1621346000,
"doc_count": 4
},
{
"1": {
"value": 233321.2
},
"key": 1621348000,
"doc_count": 5
},
{
"1": {
"value": 436757.8
},
"key": 1621350000,
"doc_count": 5
},
{
"1": {
"value": 4569021
},
"key": 1621352000,
"doc_count": 1
},
{
"1": {
"value": 368489.5
},
"key": 1621354000,
"doc_count": 4
},
{
"1": {
"value": 208359.4
},
"key": 1621356000,
"doc_count": 5
},
{
"1": {
"value": 7827146.375
},
"key": 1621358000,
"doc_count": 8
},
{
"1": {
"value": 63873.5
},
"key": 1621360000,
"doc_count": 6
},
{
"1": {
"value": 21300
},
"key": 1621364000,
"doc_count": 1
},
{
"1": {
"value": 138500
},
"key": 1621366000,
"doc_count": 2
},
{
"1": {
"value": 5872400
},
"key": 1621372000,
"doc_count": 1
},
{
"1": {
"value": 720200
},
"key": 1621374000,
"doc_count": 1
},
{
"1": {
"value": 208634.33333333334
},
"key": 1621402000,
"doc_count": 3
},
{
"1": {
"value": 306248.5
},
"key": 1621404000,
"doc_count": 10
},
{
"1": {
"value": 328983.77777777775
},
"key": 1621406000,
"doc_count": 18
},
{
"1": {
"value": 1081724
},
"key": 1621408000,
"doc_count": 10
},
{
"1": {
"value": 2451076.785714286
},
"key": 1621410000,
"doc_count": 14
},
{
"1": {
"value": 1952910.2857142857
},
"key": 1621412000,
"doc_count": 14
},
{
"1": {
"value": 2294818.1875
},
"key": 1621414000,
"doc_count": 16
},
{
"1": {
"value": 2841910.388888889
},
"key": 1621416000,
"doc_count": 18
},
{
"1": {
"value": 2401278.9523809524
},
"key": 1621418000,
"doc_count": 21
},
{
"1": {
"value": 4311845.4
},
"key": 1621420000,
"doc_count": 5
},
{
"1": {
"value": 617102.5333333333
},
"key": 1621422000,
"doc_count": 15
},
{
"1": {
"value": 590469.7142857143
},
"key": 1621424000,
"doc_count": 14
},
{
"1": {
"value": 391918.85714285716
},
"key": 1621426000,
"doc_count": 14
},
{
"1": {
"value": 202163.66666666666
},
"key": 1621428000,
"doc_count": 3
}
]
}
}
}
The problem is that I can't extract the "value" field from the "1" sub-aggregation. I've tried using a flatten transform, but it doesn't seem to work. If anyone can either:
a) Tell me how to solve this specific problem with Vega; or
b) Tell me another way to solve my original problem
I'd be much obliged!
Your DSL query looks fine. If I've read this correctly, what you're looking for is a project transform. It can make life a lot easier when dealing with nested fields, since there are cases where they just don't behave as expected.
You also need to reference the data set from within your marks (via "from"), otherwise the line mark plots nothing.
Below is a fixed spec; you'll just need to fill in your url parameter.
{
$schema: https://vega.github.io/schema/vega/v3.json
data: [
{
name: vals
url: ... // fill this in
transform: [
{
type: project
fields: [
1.value
doc_count
key
]
as: [
val
doc_count
key
]
}
]
}
]
scales: [
{
name: yscale
type: linear
zero: true
domain: {
data: vals
field: val
}
range: height
}
{
name: xscale
type: time
domain: {
data: vals
field: key
}
range: width
}
]
axes: [
{
scale: yscale
orient: left
}
{
scale: xscale
orient: bottom
}
]
marks: [
{
type: line
from: {
data: vals
}
encode: {
update: {
x: {
scale: xscale
field: key
}
y: {
scale: yscale
field: val
}
}
}
}
]
}
In future, if you run into issues, look at the examples in the Vega gallery. Vega also has extensive documentation; the two combined are usually all you need.
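As for part (b) of the question, the weighted average itself can also be computed server-side with a bucket_script pipeline aggregation, dividing each bucket's total spend by its total units. A sketch reusing the field names from the question (the aggregation names here are arbitrary):

```json
{
  "size": 0,
  "aggs": {
    "per_slot": {
      "histogram": { "field": "timestamp", "interval": 2000, "min_doc_count": 1 },
      "aggs": {
        "total_spend": {
          "sum": {
            "script": {
              "lang": "painless",
              "source": "doc['pricePerUnit'].value * doc['units'].value"
            }
          }
        },
        "total_units": { "sum": { "field": "units" } },
        "avg_price_per_unit": {
          "bucket_script": {
            "buckets_path": { "spend": "total_spend", "units": "total_units" },
            "script": "params.spend / params.units"
          }
        }
      }
    }
  }
}
```

Each bucket then carries an avg_price_per_unit.value that Vega can plot directly, with no client-side division needed.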
I am indexing some events and trying to get the unique hours, but the terms aggregation is giving a weird response. I have the following query:
{
"size": 0,
"query": {
"bool": {
"must": [
{
"terms": {
"City": [
"Chicago"
]
}
},
{
"range": {
"eventDate": {
"gte": "2018-06-22",
"lte": "2018-06-22"
}
}
}
]
}
},
"aggs": {
"Hours": {
"terms": {
"script": "doc['eventDate'].date.getHourOfDay()"
}
}
}
}
This query produces the following response:
"buckets": [
{
"key": "19",
"doc_count": 12
},
{
"key": "9",
"doc_count": 7
},
{
"key": "15",
"doc_count": 4
},
{
"key": "16",
"doc_count": 4
},
{
"key": "20",
"doc_count": 4
},
{
"key": "12",
"doc_count": 2
},
{
"key": "6",
"doc_count": 2
},
{
"key": "8",
"doc_count": 2
},
{
"key": "10",
"doc_count": 1
},
{
"key": "11",
"doc_count": 1
}
]
Then I changed the range to get the events for the past month:
{
"range": {
"eventDate": {
"gte": "2018-05-22",
"lte": "2018-06-22"
}
}
}
and the response I got was
"Hours": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 1319,
"buckets": [
{
"key": "22",
"doc_count": 805
},
{
"key": "14",
"doc_count": 370
},
{
"key": "15",
"doc_count": 250
},
{
"key": "21",
"doc_count": 248
},
{
"key": "16",
"doc_count": 195
},
{
"key": "0",
"doc_count": 191
},
{
"key": "13",
"doc_count": 176
},
{
"key": "3",
"doc_count": 168
},
{
"key": "20",
"doc_count": 159
},
{
"key": "11",
"doc_count": 148
}
]
}
As you can see, I got buckets with keys 6, 8, 9, 10, and 12 in the response to the first query but not in the second, which is very strange, as the documents returned by the first query are a small subset of those returned by the second. Is this a bug, or am I missing something obvious?
Thanks
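One thing worth checking before assuming a bug: a terms aggregation returns only the top buckets (ten by default), and the second response's "sum_other_doc_count": 1319 shows that many documents fell into buckets that were cut off. Hours 6, 8, 9, 10, and 12 are still counted; they just didn't make the top ten by doc_count over the larger range. A sketch of the same aggregation with the bucket limit raised (24 covers every possible hour):

```json
"aggs": {
  "Hours": {
    "terms": {
      "script": "doc['eventDate'].date.getHourOfDay()",
      "size": 24
    }
  }
}
```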
Problem: the aggregation below returns all the 'Windows' tags, but the match is case-sensitive. How can I make it case-insensitive?
GET /record_new/_search
{"size":0,
"aggs" : {
"software_tags" : {
"terms" : {
"field" : "software_tags.keyword",
"include" : ".*Windows.*",
"size" : 10000,
"order" : { "_term" : "asc" }
}
}
}
}
Mapping
{
"record_new": {
"mappings": {
"record_new": {
"software_tags": {
"full_name": "software_tags",
"mapping": {
"software_tags": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
},
"fielddata": true
}
}
}
}
}
}
}
Response
{
"took": 4,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 5706542,
"max_score": 0,
"hits": []
},
"aggregations": {
"software_tags": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "Bloc-notes (Windows)",
"doc_count": 1
},
{
"key": "Windows CE",
"doc_count": 8
},
{
"key": "Windows CE 5.0",
"doc_count": 2
},
{
"key": "Windows Calculator",
"doc_count": 33
},
{
"key": "Windows Communication Foundation",
"doc_count": 43
},
{
"key": "Windows Contacts",
"doc_count": 1
},
{
"key": "Windows DVD Maker",
"doc_count": 3
},
{
"key": "Windows Defender",
"doc_count": 409
},
{
"key": "Windows Desktop Gadgets",
"doc_count": 14
},
{
"key": "Windows Desktop Update",
"doc_count": 33
},
{
"key": "Windows Display Driver Model",
"doc_count": 64
},
{
"key": "Windows DreamScene",
"doc_count": 5
},
{
"key": "Windows Driver Frameworks",
"doc_count": 1
},
{
"key": "Windows Driver Kit",
"doc_count": 12
},
{
"key": "Windows Driver Model",
"doc_count": 99
},
{
"key": "Windows Easy Transfer",
"doc_count": 3
},
{
"key": "Windows Embedded Automotive",
"doc_count": 1
},
{
"key": "Windows Embedded CE 6.0",
"doc_count": 7
},
{
"key": "Windows Embedded Compact",
"doc_count": 361
},
{
"key": "Windows Embedded Compact 7",
"doc_count": 1
},
{
"key": "Windows Embedded Industry",
"doc_count": 2
},
{
"key": "Windows Essential Business Server 2008",
"doc_count": 2
},
{
"key": "Windows Essentials",
"doc_count": 13
},
{
"key": "Windows Filtering Platform",
"doc_count": 1
},
{
"key": "Windows Firewall",
"doc_count": 588
},
{
"key": "Windows Fundamentals for Legacy PCs",
"doc_count": 21
},
{
"key": "Windows Genuine Advantage",
"doc_count": 60
},
{
"key": "Windows Home Server",
"doc_count": 7
},
{
"key": "Windows Image Acquisition",
"doc_count": 1
},
{
"key": "Windows Insider",
"doc_count": 10
},
{
"key": "Windows Installer",
"doc_count": 562
},
{
"key": "Windows Internal Database",
"doc_count": 2
},
{
"key": "Windows IoT",
"doc_count": 132
},
{
"key": "Windows Live Mail",
"doc_count": 117
},
{
"key": "Windows Live Mesh",
"doc_count": 1
},
{
"key": "Windows Live Messenger",
"doc_count": 1595
},
{
"key": "Windows Live OneCare",
"doc_count": 18
},
{
"key": "Windows Live OneCare Safety Scanner",
"doc_count": 1
},
{
"key": "Windows Live Spaces",
"doc_count": 1
},
{
"key": "Windows Live Toolbar",
"doc_count": 4
},
{
"key": "Windows ME",
"doc_count": 1055
},
{
"key": "Windows Management Instrumentation",
"doc_count": 289
},
{
"key": "Windows Marketplace",
"doc_count": 4
},
{
"key": "Windows Media",
"doc_count": 168
},
{
"key": "Windows Mobile",
"doc_count": 439
},
{
"key": "Windows SideShow",
"doc_count": 1
},
{
"key": "Windows SteadyState",
"doc_count": 6
},
{
"key": "Центр обновления Windows",
"doc_count": 2
}
]
}
}
}
I think you are approaching this the wrong way: searching and collecting unique values are different things. How about the following approach?
Note that I used slightly different settings for the aggregation, and I added a query: a term query for "windows" against the analyzed text field, which (with the standard analyzer) is lowercased at index time, so it matches any casing.
GET record_new/_search
{
"size": 0,
"query": {
"term": {
"software_tags": {
"value": "windows"
}
}
},
"aggs": {
"software_tags": {
"terms": {
"field": "software_tags.keyword",
"include" : ".*Windows.*",
"size": 10000,
"order": {
"_count": "desc"
}
}
}
}
}
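If the aggregation itself has to be case-insensitive rather than just the query, another option is to index the keyword sub-field with a lowercase normalizer, so both the buckets and the include pattern are compared in lower case. A sketch of such a mapping; the index and normalizer names here are made up, and the data would need to be reindexed:

```json
PUT /record_new_ci
{
  "settings": {
    "analysis": {
      "normalizer": {
        "lowercase_normalizer": {
          "type": "custom",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "record_new": {
      "properties": {
        "software_tags": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "normalizer": "lowercase_normalizer",
              "ignore_above": 256
            }
          }
        }
      }
    }
  }
}
```

With that in place, "include": ".*windows.*" matches every casing, at the cost of the bucket keys themselves coming back lowercased.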
In the query below, occasionally I receive a "NaN" response (see the response below the query).
I'm assuming that, occasionally, some invalid data gets into the "amount" field (the one being aggregated). If that is a valid assumption, how can I find the documents with the invalid "amount" fields so I can troubleshoot them?
If that's not a valid assumption, how do I troubleshoot the occasional "NaN" value being returned?
REQUEST:
POST /_msearch
{
"search_type": "query_then_fetch",
"ignore_unavailable": true,
"index": [
"view-2017-10-22",
"view-2017-10-23"
]
}
{
"size": 0,
"query": {
"bool": {
"filter": [
{
"range": {
"handling-time": {
"gte": "1508706273585",
"lte": "1508792673586",
"format": "epoch_millis"
}
}
},
{
"query_string": {
"analyze_wildcard": true,
"query": "+page:\"checkout order confirmation\" +pageType:\"d\""
}
}
]
}
},
"aggs": {
"2": {
"date_histogram": {
"interval": "1h",
"field": "time",
"min_doc_count": 0,
"extended_bounds": {
"min": "1508706273585",
"max": "1508792673586"
},
"format": "epoch_millis"
},
"aggs": {
"1": {
"sum": {
"field": "amount"
}
}
}
}
}
}
RESPONSE:
{
"responses": [
{
"took": 12,
"timed_out": false,
"_shards": {
"total": 10,
"successful": 10,
"failed": 0
},
"hits": {
"total": 44587,
"max_score": 0,
"hits": []
},
"aggregations": {
"2": {
"buckets": [
{
"1": {
"value": "NaN"
},
"key_as_string": "1508706000000",
"key": 1508706000000,
"doc_count": 2915
},
{
"1": {
"value": 300203.74
},
"key_as_string": "1508709600000",
"key": 1508709600000,
"doc_count": 2851
},
{
"1": {
"value": 348139.5600000001
},
"key_as_string": "1508713200000",
"key": 1508713200000,
"doc_count": 3197
},
{
"1": {
"value": "NaN"
},
"key_as_string": "1508716800000",
"key": 1508716800000,
"doc_count": 3449
},
{
"1": {
"value": "NaN"
},
"key_as_string": "1508720400000",
"key": 1508720400000,
"doc_count": 3482
},
{
"1": {
"value": 364449.60999999987
},
"key_as_string": "1508724000000",
"key": 1508724000000,
"doc_count": 3103
},
{
"1": {
"value": 334914.68
},
"key_as_string": "1508727600000",
"key": 1508727600000,
"doc_count": 2722
},
{
"1": {
"value": 315368.09000000014
},
"key_as_string": "1508731200000",
"key": 1508731200000,
"doc_count": 2161
},
{
"1": {
"value": 102244.34
},
"key_as_string": "1508734800000",
"key": 1508734800000,
"doc_count": 742
},
{
"1": {
"value": 37178.63
},
"key_as_string": "1508738400000",
"key": 1508738400000,
"doc_count": 333
},
{
"1": {
"value": 25345.68
},
"key_as_string": "1508742000000",
"key": 1508742000000,
"doc_count": 233
},
{
"1": {
"value": 85454.47000000002
},
"key_as_string": "1508745600000",
"key": 1508745600000,
"doc_count": 477
},
{
"1": {
"value": 24102.719999999994
},
"key_as_string": "1508749200000",
"key": 1508749200000,
"doc_count": 195
},
{
"1": {
"value": 23352.309999999994
},
"key_as_string": "1508752800000",
"key": 1508752800000,
"doc_count": 294
},
{
"1": {
"value": 44353.409999999996
},
"key_as_string": "1508756400000",
"key": 1508756400000,
"doc_count": 450
},
{
"1": {
"value": 80129.89999999998
},
"key_as_string": "1508760000000",
"key": 1508760000000,
"doc_count": 867
},
{
"1": {
"value": 122797.11
},
"key_as_string": "1508763600000",
"key": 1508763600000,
"doc_count": 1330
},
{
"1": {
"value": 157442.29000000004
},
"key_as_string": "1508767200000",
"key": 1508767200000,
"doc_count": 1872
},
{
"1": {
"value": 198831.71
},
"key_as_string": "1508770800000",
"key": 1508770800000,
"doc_count": 2251
},
{
"1": {
"value": 218384.08000000002
},
"key_as_string": "1508774400000",
"key": 1508774400000,
"doc_count": 2305
},
{
"1": {
"value": 229829.22000000006
},
"key_as_string": "1508778000000",
"key": 1508778000000,
"doc_count": 2381
},
{
"1": {
"value": 217157.56000000006
},
"key_as_string": "1508781600000",
"key": 1508781600000,
"doc_count": 2433
},
{
"1": {
"value": 208877.13
},
"key_as_string": "1508785200000",
"key": 1508785200000,
"doc_count": 2223
},
{
"1": {
"value": "NaN"
},
"key_as_string": "1508788800000",
"key": 1508788800000,
"doc_count": 2166
},
{
"1": {
"value": 18268.14
},
"key_as_string": "1508792400000",
"key": 1508792400000,
"doc_count": 155
}
]
}
},
"status": 200
}
]
}
You can do a search for <fieldName>:NaN (on numeric fields) to find numbers that are set to NaN.
Once you find those, you can either fix the root cause of the field being set to NaN, or exclude those records from the aggregation by adding -<fieldName>:NaN to the query.
(It turns out that the input was feeding in some garbage characters once in every few million documents.)
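Concretely, that check can be expressed as a query_string query; "amount" is the field from the question, and the index names match the original request:

```json
GET /view-2017-10-22,view-2017-10-23/_search
{
  "size": 10,
  "query": {
    "query_string": {
      "query": "amount:NaN"
    }
  }
}
```

To keep the bad documents out of the sum, the original query string can be extended with -amount:NaN.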
I am doing some aggregations, but the results are not at all what I expect: it seems they are not aggregating over all the documents matching my query, in which case, what good are they?
As an example, First I do this query:
{"index":"datalayer","type":"analysis2","body":{"query":{
"match_all" : {}
},
"aggs" : {
"objects" : {
"terms" : {
"field" : "action"
}
}
}
}}
and the result is 500 hits with aggregations as follows:
"aggregations": {
"objects": {
"buckets": [
{
"key": "thing",
"doc_count": 278
},
{
"key": "hover",
"doc_count": 273
},
{
"key": "embedded",
"doc_count": 57
},
{
"key": "view",
"doc_count": 50
},
{
"key": "widgets",
"doc_count": 49
},
{
"key": "hovered",
"doc_count": 20
},
{
"key": "widgetembed",
"doc_count": 20
},
{
"key": "products",
"doc_count": 19
},
{
"key": "create",
"doc_count": 15
},
{
"key": "image",
"doc_count": 13
}
]
}
}
That's all well and good, but I know I have some documents where the key should be "activation".
So if I then do the query
{"index":"datalayer","type":"analysis2","body":{"query":{
"bool": {
"must" : [
{"match": {"object": "Widget"}}
]
}},
"aggs" : {
"objects" : {
"terms" : {
"field" : "action"
}
}
}
}}
then the result is 45 hits with aggregations
"aggregations": {
"objects": {
"buckets": [
{
"key": "widgets",
"doc_count": 41
},
{
"key": "embedded",
"doc_count": 40
},
{
"key": "view",
"doc_count": 32
},
{
"key": "activation",
"doc_count": 9
},
{
"key": "image",
"doc_count": 4
},
{
"key": "create",
"doc_count": 3
},
{
"key": "mapping",
"doc_count": 3
},
{
"key": "widget",
"doc_count": 3
},
{
"key": "adding",
"doc_count": 2
},
{
"key": "edit",
"doc_count": 1
}
]
}
}
As can be seen, these aggregations contain some keys that did not appear in my first aggregation over all documents. Why is that? And what do I have to do to get buckets for every action across all documents?
I don't think it's just a matter of pagination or something, because I have also tried
{"index":"datalayer","type":"analysis2","body":{"from":0,"size":500,"query":{
"match_all" : {}
},
"aggs" : {
"objects" : {
"terms" : {
"field" : "action"
}
}
}
}}
with the exact same aggregation result of
"aggregations": {
"objects": {
"buckets": [
{
"key": "thing",
"doc_count": 278
},
{
"key": "hover",
"doc_count": 273
},
{
"key": "embedded",
"doc_count": 57
},
{
"key": "view",
"doc_count": 50
},
{
"key": "widgets",
"doc_count": 49
},
{
"key": "hovered",
"doc_count": 20
},
{
"key": "widgetembed",
"doc_count": 20
},
{
"key": "products",
"doc_count": 19
},
{
"key": "create",
"doc_count": 15
},
{
"key": "image",
"doc_count": 13
}
]
}
}
So, I'm hoping someone can explain why I am not seeing the bucket keys I'm expecting here.
From the documentation:
By default, the terms aggregation will return the buckets for the top ten terms ordered by the doc_count. One can change this default behaviour by setting the size parameter.
So you need to specify a "size" larger than 10 to see more buckets, or set it to 0 to see all of them. From the same documentation:
If set to 0, the size will be set to Integer.MAX_VALUE.
"aggs" : {
"objects" : {
"terms" : {
"field" : "action",
"size": 0
}
}
}
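One caveat: "size": 0 on the terms aggregation was removed in Elasticsearch 5.0. On 5.x and later, request an explicit upper bound instead, large enough to cover the expected number of distinct values, for example:

```json
"aggs" : {
  "objects" : {
    "terms" : {
      "field" : "action",
      "size": 500
    }
  }
}
```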