I'm trying to display the last value of a metric in a Query Value widget on my Datadog dashboard.
For the moment, I'm using:
"queries": [
  {
    "query": "max:blabla.mycount{$env}",
    "data_source": "metrics",
    "name": "query1",
    "aggregator": "last"
  }
]
Is this the right way to do that? For this series of mycount values [20, 1, 5, 3, 2], which number will be taken? Is it really the last one of the series (2), or the biggest one in the series (20)?
Regards,
Blured.
So there are going to be three levels of aggregation to consider: the Time Aggregation and Space Aggregation of your query, and then the aggregation of the query value widget on the frontend (which is what you're asking about). For now, let's understand time aggregation by thinking of a time series widget, and then we'll see what happens with the query value widget afterwards.
Space aggregation is the simplest one. The idea is that you have multiple time series being submitted from multiple applications/servers. If 20 computers all send a metric at the same time, which value should we pick to display? You decide that with the aggregation chunk of your query; yours is currently set to max.
In other words, you have to decide which of the dozens or hundreds of instances of your metric is the one you want to display.
If you don't want to worry about space aggregation, you have to make your query specific enough that only one time series exists for that metric. For example, a CPU metric will need to be scoped to at least the hostname. For a container metric, hostname isn't enough; you would need at least the container_id. For a database, there should be a db_identifier or something similar that gets you just one result back.
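For example, the scoping might look something like this (the metric names and tag values here are just illustrations, not from your environment):
avg:system.cpu.user{host:web-01}
avg:docker.cpu.usage{host:web-01,container_id:abc123}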
Now for time aggregation, let's look at the docs a bit:
As Datadog stores data at a 1 second granularity, it cannot display all real data on graphs. See How data is aggregated in graphs for more details.
For a graph on a 1-week time window, it would require sending hundreds of thousands of values to your browser—and besides, not all these points could be graphed on a widget occupying a small portion of your screen.
...
The Datadog backend tries to keep the number of intervals to a number below ~300.
https://docs.datadoghq.com/dashboards/guide/query-to-the-graph/#proceed-to-time-aggregation
So, for example, if you are looking at a 5-minute window, the time aggregation will be as granular as possible. There are 300 seconds in 5 minutes, so every interval on the graph will represent 1 second. If we zoomed out to 10 minutes (600 seconds), we can only show data every 2 seconds, so each bucket will represent 2 data points (assuming the metric is submitted every second).
In most scenarios your metrics are being submitted at a 15-second interval, so you won't notice any time-aggregation rollups until 15 * 300 = 4500 seconds (a bit over an hour).
You control this with the rollup function, as described in the docs. If you don't want to worry about time aggregation, just make sure your time range is zoomed in enough to not have any bucketing.
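For example, you could append a rollup to the query from your question to force one-minute buckets (.rollup() is standard Datadog query syntax; the avg function and the 60-second interval here are just illustrative choices):
"query": "max:blabla.mycount{$env}.rollup(avg, 60)"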
And now for the last level of aggregation: the query value widget. You have now obtained a set of up to ~300 points from the backend; space and time aggregation have already been applied. Out of those points, which one do you want to display? You could choose the last point, or a sum of the points, or whatever. With "aggregator": "last", as in your JSON, the widget displays the final point of the series (2 in your [20, 1, 5, 3, 2] example), not the maximum (20); the max in your query controls space aggregation, not this step.
Hopefully that helps!
I am able to get requests per second with
ts(abc.xyz.count and url = "some/url")
How do I get the total count per hour?
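If this is Wavefront's query language (the ts() syntax suggests it), one possibility might be a moving sum over an hour. msum() is a documented Wavefront function, but whether it gives you exactly what you want depends on how the counter is reported, so treat this as a sketch:
msum(1h, ts(abc.xyz.count and url = "some/url"))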
I have a time-based index pattern in Kibana. I want to show the following metric on a Kibana dashboard:
A percentage: the number of documents falling under the global time
filter, relative to the total number of documents present in the index pattern.
For example:
The percentage of the number of documents in the last 2 days relative to the total number of documents present in the index pattern. Here the user has applied a time filter of the last 2 days.
Can this be visualized using Kibana at all? How?
I don't understand what Search time per second (Δ) means. Is it the delta in the number of milliseconds that search requests took between the previous and the current refresh interval? Also, there is a Query and Fetch time below the chart; I'm not sure what that represents.
Attached is a screenshot.
A query in Elasticsearch is actually a two-phase process:
Query Phase:
During the initial query phase, the query is broadcast to a shard copy (a primary or replica shard) of every shard in the index. Each shard executes the search locally and builds a priority queue of matching documents.
And
Fetch Phase:
The query phase identifies which documents satisfy the search request, but we still need to retrieve the documents themselves. This is the job of the fetch phase.
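A minimal _search request makes the two phases concrete (hypothetical index and field names): in the query phase, each shard ranks its matching documents; in the fetch phase, only the 10 documents actually being returned are retrieved:
GET /my_index/_search
{
  "from": 0,
  "size": 10,
  "query": { "match": { "title": "elasticsearch" } }
}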
And that mail explains the Search time per second (Δ) part in detail:
Here is an example for "Search requests per second (Δ)":
- You do some "_search" request
- It hits 15 shards of some indices on that node, so the value of indices -> search -> "query_total" in the nodes stats API response increases by 15
- Bigdesk refresh value is 5000 (5 sec)
As a result, the chart should display a peak of 3 (15/5) in the Query line. So if the value is ~1500 in your case, it means that on average X shards are hit by search requests per refresh interval, where X = 1500 * refresh (does that make sense)?
You can see the chart is really only informative (it depends on
refresh interval and number of shards). But there is the cumulative
"query_total" value displayed as well in the web UI.
Similarly, the second chart "Search time per second (Δ)" displays the average time (in milliseconds) spent in the query or fetch phase on the node. Again, this value includes all involved shards on that node.
"Search time per second (Δ)" is based on two series, series1 and series2; they are explained here. It looks like the chart shows these metrics per time unit.
We have Apache logs analyzed by Elasticsearch (2.1.0) and Kibana (4.3.0).
Logs are parsed and shipped to Elasticsearch by Logstash, which runs on the web servers and reads the Apache combined log format.
All works well, but now we need to analyze a more complicated pattern.
We have documents with a field “purchase_id” which holds an integer value (like 130012, 130016, 133552, etc.).
We have OTHER documents which have an integer field “view_id” with the same values (like 130012, 130016, 133552, etc.).
The two fields never appear in the same document, because they are extracted from different URIs in the Apache log.
Our goal is to calculate and visualize, for a given time frame, the percentage of appearances of values in “purchase_id” compared to values in “view_id”.
For example, let's say we want to see the current purchase rate of item 130012. In the last 30 seconds it may appear 1000 times in documents with the field “purchase_id”, and in the same 30 seconds it may appear 40000 times in documents with the field “view_id”.
This is expected, because only a small share of the people exposed to a product actually buy it. I need to calculate and visualize that, within the time frame, purchase_id of item 130012 appeared 1000 times and view_id of item 130012 appeared 40000 times, then divide 1000 by 40000 and multiply by 100%, so I get 2.5% visualized on the dashboard (for item 130012).
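For a single item, the per-interval math could be done on the Elasticsearch side with an aggregation sketch along these lines (the apache-logs index name and the @timestamp field are assumptions; bucket_script is a pipeline aggregation available since ES 2.0, so it should exist on 2.1.0):
POST /apache-logs/_search
{
  "size": 0,
  "aggs": {
    "per_interval": {
      "date_histogram": { "field": "@timestamp", "interval": "30s" },
      "aggs": {
        "purchases": { "filter": { "term": { "purchase_id": 130012 } } },
        "views": { "filter": { "term": { "view_id": 130012 } } },
        "purchase_rate": {
          "bucket_script": {
            "buckets_path": { "p": "purchases>_count", "v": "views>_count" },
            "script": "v == 0 ? 0 : 100.0 * p / v"
          }
        }
      }
    }
  }
}
Getting the top 20 items across all ids and visualizing them in Kibana is the harder part, as the answer below suggests.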
Of course, I have many such pairs where purchase_id and view_id share the same integer value, so I need to calculate the percentage for all of them and display, let's say, the 20 with the highest percentage.
This will let me know the best-selling items relative to the advertising we invest in.
I would track this issue for Kibana.