How to accommodate minutely and hourly data in the same visualisation? - elasticsearch

Current Scenario -
The current dashboard uses a Sum aggregation at a one-minute interval. It works only when the interval is set to one minute; if I change the interval, the graph shows incorrect values. This happens because more than one document is generated per minute, and the correct value for a minute is the sum of the field values across all documents in that minute.
So even today we are obliged to use the minute interval, but I'm fine with that.
The hourly documents, on the other hand, are ingested after all the math has already been done (and we have validated the ingestion logic), so there is one document per hour. This is why the visualisation cannot accommodate both types of data.
If I had one document per minute and one document per hour, I could have used an Average or perhaps a Max metric. But as it stands I must sum the document values for each minute (that is mandatory), so whatever metric is applied to the minutely data also gets applied to the hourly data.
Is there a way to show both types of data in the same graph?

Mathematically, the approach is wrong.
Having n documents per minute (where n depends on the number of hosts in the cluster) plus one document per hour per type does not work from a visualisation perspective, because the value actually needed per minute is the sum of all n documents generated in that minute, so the Sum metric applied at the minute level was also being applied to the hourly data. To accommodate both types of data in the same graph you need uniformity: aggregate the data at the minute level on the producing side and send the already-aggregated data to Elasticsearch.
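For reference, this is roughly the query the minutely visualisation boils down to: a date_histogram bucketed per minute with a Sum sub-aggregation, so each bucket adds up all the documents in that minute. A minimal sketch in Python (the index pattern minutely-metrics-* and the field metric_value are placeholders, not from the question; "fixed_interval" is the 7.x+ parameter name, older versions use "interval"):

import requests

ES = "http://localhost:9200"

query = {
    "size": 0,
    "aggs": {
        "per_minute": {
            "date_histogram": {"field": "@timestamp", "fixed_interval": "1m"},
            "aggs": {"minute_total": {"sum": {"field": "metric_value"}}}
        }
    }
}

resp = requests.post(f"{ES}/minutely-metrics-*/_search", json=query).json()
for bucket in resp["aggregations"]["per_minute"]["buckets"]:
    # one value per minute = sum of all documents in that minute
    print(bucket["key_as_string"], bucket["minute_total"]["value"])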

Related

Get the last value of a metric in a Datadog dashboard

I'm trying to display, in my Datadog dashboard, the last value of a metric in a QueryValue widget.
For the moment, I'm using:
"queries": [
{
"query": "max:blabla.mycount{$env}",
"data_source": "metrics",
"name": "query1",
"aggregator": "last"
}
]
Is this the right way to do that? For this series of mycount values [20, 1, 5, 3, 2], which number will be taken? Is it really the last one of the series (2), or the biggest one in the series (20)?
Regards,
Blured.
So there are going to be 3 levels of aggregation to consider: the Time Aggregation and Space Aggregation of your query, and then the aggregation of the query value widget on the frontend (which is what you're asking about). For now, let's understand time aggregation by thinking of a time series widget, and then we'll see what happens with the query value widget after.
Space aggregation is the simplest one. The idea is that you have multiple time series being submitted from multiple applications/servers. If 20 computers send a metric all at the same time, which one should we pick to display? You decide that with the aggregation chunk of your query; yours is currently set to max.
The idea is that you have to decide which out of the dozens or hundreds of instances of your metric is the one you want to display.
If you don't want to worry about space aggregation, you have to make your query specific enough that only one time series exists for that metric. For example, a CPU metric will need to be scoped to at least the hostname. For a container metric, hostname isn't enough; you would need at least the container_id. For a database there should be a db_identifier or something that gets you just one result back.
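As a small illustration of scoping (the first query string is the one from the question; the host tag value below is made up):

# Many hosts match {$env}, so max: has to collapse many series into one.
ambiguous_query = "max:blabla.mycount{$env}"

# Scoped tightly enough that only one series should exist, so space
# aggregation no longer has anything to decide.
single_series_query = "max:blabla.mycount{$env, host:web-01}"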
Now for time aggregation, let's look at the docs a bit:
As Datadog stores data at a 1 second granularity, it cannot display all real data on graphs. See How data is aggregated in graphs for more details.
For a graph on a 1-week time window, it would require sending hundreds of thousands of values to your browser—and besides, not all these points could be graphed on a widget occupying a small portion of your screen.
...
The Datadog backend tries to keep the number of intervals to a number below ~300.
https://docs.datadoghq.com/dashboards/guide/query-to-the-graph/#proceed-to-time-aggregation
So for example if you are looking at a 5-minute window, the time aggregation will be as granular as possible. There are 300 seconds in 5 minutes, so every interval on the graph will represent 1 second. If we zoomed out to 10 minutes (600 seconds), we can only show data every 2 seconds, so each bucket will represent 2 data points (assuming the metric is submitted every second).
In most scenarios your metrics are being submitted at a 15-second interval, so you won't notice any time-aggregation rollups until 15*300 = 4500 seconds (a bit over an hour).
You control this with the rollup function, as described in the docs. If you don't want to worry about time aggregation, just make sure your time range is zoomed in enough to not have any bucketing.
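For example, appending a rollup to the query from the question forces fixed 60-second buckets, summed per bucket, instead of letting the backend choose the bucket size (a sketch; whether sum is the right method depends on your metric):

# .rollup(<method>, <seconds>) controls time aggregation explicitly.
query_with_rollup = "max:blabla.mycount{$env}.rollup(sum, 60)"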
And now for the last level of aggregation, the query value widget. You have now obtained a set of up to ~300 points from the backend; space and time aggregation have already been applied. Out of those points, which one do you want to display? You could choose the last point, or a sum of the points, or whatever.
Hopefully that helps!

Elasticsearch - Search with in near realtime (1 sec)

I came across the following phrase in https://www.elastic.co/guide/en/elasticsearch/reference/6.8/documents-indices.html:
When a document is stored, it is indexed and fully searchable in near real-time—​within 1 second.
Assuming the 1 second is subjective and depends on various factors, can we safely assume it is at least 1 second? I also see different time intervals that kick in as part of indexing, like the refresh interval, etc.; is this 1 second approximately the sum of all those intermediate intervals?
How real-time is it when we say Elasticsearch is a (near) real-time search engine?
The default refresh interval (controlled by the index setting index.refresh_interval) is one second. The sentence you cite means exactly that. By default, a document you index will be available for search within at most one second, but it can be less than that.
If a refresh happens at instant T and you index a document at that same moment, then the underlying segments will be refreshed in pretty much exactly one second and your document will be searchable after that refresh.
If a refresh happens at instant T, and you index your document 500ms after that instant, then it will be available for search just 500ms after being indexed.
That also means your document could be available just a few milliseconds (say 10ms) after being indexed if you index it at instant T+990ms after the last refresh that happened at instant T.
It's not an exact science, so that one second should be taken with a grain of salt; sometimes it could last a tad longer, say 10xx ms, where xx depends on various factors. You should not rely on that duration being nano-exact, though.
So near-real time simply means the duration of that refresh interval (which you can modify).
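If you want to see the refresh behaviour for yourself, here is a small sketch against a local cluster (the index name nrt-test is arbitrary; ?refresh=wait_for and the _refresh endpoint are standard REST options, but avoid forcing refreshes per document in production):

import requests

ES = "http://localhost:9200"

# Indexed without forcing a refresh: searchable only after the next
# periodic refresh (index.refresh_interval, 1s by default).
requests.post(f"{ES}/nrt-test/_doc", json={"msg": "hello"})

# Block until the document is visible to search before returning.
requests.post(f"{ES}/nrt-test/_doc?refresh=wait_for", json={"msg": "visible now"})

# Or trigger a refresh explicitly.
requests.post(f"{ES}/nrt-test/_refresh")

hits = requests.post(f"{ES}/nrt-test/_search", json={"query": {"match_all": {}}}).json()
print(hits["hits"]["total"])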

How to design a system in which we can query top results in last n hours

I was asked this question in an interview. The details were that, assume we are getting millions of events; each event has a timestamp and other details. The system design requires the ability to let the end user query the most frequent records in the last 10 minutes, 9 hours, or maybe 3 months.
An event can be seen as follows:
event_type: {CRUD + Search}
event_info: xxx
timestamp : ts...
The easiest way to figure this out is to look at how other stream-processing or map-reduce libraries do this (and I have a feeling your interviewers have seen these libraries). It's basically real-time map-reduce (you can look up how that works as well).
I will outline two techniques for event processing. In reality most companies need to do both.
New school Stream processing (real time)
Let's assume for now they don't want the actual events but the more likely case of aggregates (I think that was the intent of your question).
An example stream processing project is pipelinedb (they explain how it works at the bottom of their home page).
Events go into a queue/ring buffer.
A worker process reads those events in batches and rolls them up into partial buckets or windows.
Finally there is a combiner or reducer which takes the micro-batches and actually does the updating. An example would be event counts. Because we are using a queue, events come in ordered, and depending on the queue we might be able to have multiple consumers that do the combining operation.
So if you want minute counts you would do rollups per minute and only store the sum of the events for that minute. This turns out to be fairly small space wise so you can store this in memory.
If you wanted those counts for month or day or even year you would just add up all the minute count buckets.
Now there is of course a major problem with this technique. You need to know what aggregates and pivots you would like to collect a priori.
But you get extremely fast look up of results.
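A toy version of that per-minute rollup (the event_type and timestamp fields follow the event sketch in the question; everything else is illustrative):

from collections import Counter, defaultdict

# minute bucket (epoch seconds truncated to the minute) -> Counter of event types
minute_buckets = defaultdict(Counter)

def ingest(event):
    # Roll one event up into its minute bucket; only the counts are kept.
    minute = int(event["timestamp"]) // 60 * 60
    minute_buckets[minute][event["event_type"]] += 1

def top_k(start_ts, end_ts, k=10):
    # "Most frequent in the last 10 minutes / 9 hours / 3 months" is just
    # the sum of the pre-aggregated minute buckets inside that window.
    total = Counter()
    for minute, counts in minute_buckets.items():
        if start_ts <= minute < end_ts:
            total += counts
    return total.most_common(k)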
Old school data warehousing (partitioning) and Map Reduce (batch processed)
Now let's assume they do want the actual events for a certain time period. This is expensive, because if you store all the events in one place, lookup and retrieval are difficult. But if you use the fact that time is hierarchical, you can store the events in a tree of tuples.
The reason you would want the actual events is that you are doing ad hoc querying and are willing to wait for the queries to run.
You need some sort of queue for the stream of events.
A worker reads the queue and partitions the events based on time. For example, you would have a partition for a certain day. This is akin to sharding. Many storage systems have support for this (e.g. Postgres partitions).
When you want a certain number of events over a period you union the partitions.
The partitioning is essentially hierarchical (minutes < hours < days, etc.), which means you can do tree-like operations on them.
Such events are what is called time series data, and there are ways to store them so that the partitioning index is automatic and fast. These stores are called TSDBs, which you can google for more info.
An example TSDB product would be InfluxDB.
Now, going back to the fact that time (or at least how humans represent it) is organized tree-like, we can perform parallel operations. This is because a tree is a DAG (directed acyclic graph). With a DAG you can do some analysis and basically recursively operate on the branches (also known as fork/join).
An example generic parallel storage product would be CitusDB.
Now of course this method has a massive drawback: it is expensive! Even if you make it fast by increasing the number of nodes, you will have to pay for those nodes (distributed shards). And in theory the performance should scale linearly, but in practice this does not happen (I will spare you the details).
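To make the "union the partitions" idea concrete, here is a sketch of routing a time range to daily partitions (the events_YYYY_MM_DD table naming and the dates are purely illustrative):

from datetime import date, timedelta

def daily_partitions(start, end):
    # Yield the (hypothetical) daily partition tables a query must touch.
    day = start
    while day <= end:
        yield f"events_{day:%Y_%m_%d}"
        day += timedelta(days=1)

# A query covering three days becomes a union over three partitions.
tables = list(daily_partitions(date(2024, 1, 13), date(2024, 1, 15)))
sql = " UNION ALL ".join(f"SELECT * FROM {t}" for t in tables)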
I think you will need to persist the data to disk because:
- the query duration is super vague, and data might be lost due to unforeseen circumstances like the process being killed, machine failure, etc.
- you can't keep all the events in memory due to memory constraints (millions of events)
I would suggest using MySQL as the data store, with the timestamp as one of the index keys. But two events might have the same timestamp, so make a composite index key of auto-increment id + timestamp.
Advantages of MySQL:
Super-reliable with replication
Supports all kinds of CRUD operations and queries
For each query you can get the range of timestamps you need.
First count the number of events satisfying the query:
select count(*) from `events` where timestamp >= x and timestamp <= y;
If too many events satisfy the query, query them in batches:
select * from `events` where timestamp >= x and timestamp <= y limit 1000 offset 0;
select * from `events` where timestamp >= x and timestamp <= y limit 1000 offset 1000;
and so on, until the offset exceeds the count of events matching the first query.
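A sketch of that batching loop in Python (the table and column names follow the queries above; the mysql-connector-python driver, the connection details, and the example epoch bounds are assumptions):

import mysql.connector

conn = mysql.connector.connect(host="localhost", user="app",
                               password="secret", database="events_db")
cur = conn.cursor()

start_ts, end_ts, batch = 1696000000, 1696003600, 1000  # example epoch bounds

# First count the matching events, then page through them in batches.
cur.execute("SELECT COUNT(*) FROM `events` WHERE `timestamp` BETWEEN %s AND %s",
            (start_ts, end_ts))
total = cur.fetchone()[0]

offset = 0
while offset < total:
    cur.execute("SELECT * FROM `events` WHERE `timestamp` BETWEEN %s AND %s "
                "ORDER BY `id` LIMIT %s OFFSET %s",
                (start_ts, end_ts, batch, offset))
    for row in cur.fetchall():
        ...  # process the row
    offset += batch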

Bigdesk charts explanation

I don't understand what Search time per second (Δ) means. Is it the delta of the number of milliseconds that the search requests took between the previous and the current refresh interval? Also there is a Query and Fetch time below the chart; I'm not sure what that represents.
A screenshot is attached.
A query in Elasticsearch is actually a two-phased process:
Query Phase :
During the initial query phase, the query is broadcast to a shard copy (a primary or replica shard) of every shard in the index. Each shard executes the search locally and builds a priority queue of matching documents.
And
Fetch Phase :
The query phase identifies which documents satisfy the search request, but we still need to retrieve the documents themselves. This is the job of the fetch phase.
And that mail explains the Search time per second (Δ) part in detail:
Here is an example for "Search requests per second (Δ)":
- You do some "_search" request
- It hits 15 shards of some indices on that node, so the value of indices -> search -> "query_total" in the nodes stats API response increases by 15
- Bigdesk's refresh value is 5000 (5 sec)
As a result the chart should display a peak of 3 (15/5) in the Query line. So if the value is ~1500 in your case, then it means that on average an X number of shards is hit by search requests per second, where X = 1500 * refresh (does it make sense)?
You can see the chart is really only informative (it depends on the refresh interval and the number of shards). But the cumulative "query_total" value is displayed as well in the web UI.
Similarly, the second chart, "Search time per second (Δ)", displays the average time (in milliseconds) spent in the query or fetch phase on the node. Again, this value includes all involved shards on that node.
Search time per second (Δ) is based on two series, series1 and series2; they are explained here. It looks like the chart shows these metrics per time unit.
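If you want to reproduce the number yourself, you can sample the stats from the nodes stats API twice and divide the delta by the sampling interval, which is essentially what the (Δ) charts do. A sketch (the node address and the 5-second interval are assumptions):

import time
import requests

ES = "http://localhost:9200"
INTERVAL = 5  # seconds, like Bigdesk's refresh value

def totals():
    # query_total and query_time_in_millis summed over all nodes.
    stats = requests.get(f"{ES}/_nodes/stats/indices/search").json()
    q = sum(n["indices"]["search"]["query_total"] for n in stats["nodes"].values())
    t = sum(n["indices"]["search"]["query_time_in_millis"] for n in stats["nodes"].values())
    return q, t

q1, t1 = totals()
time.sleep(INTERVAL)
q2, t2 = totals()

print("shard query executions per second:", (q2 - q1) / INTERVAL)
print("query time (ms) per second:", (t2 - t1) / INTERVAL)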

Logstash + ElasticSearch + Kibana combine results from different fields in different documents

We have Apache logs analyzed by Elasticsearch (2.1.0) and Kibana (4.3.0).
The logs are parsed and shipped to Elasticsearch by Logstash running on the web servers, reading the Apache combined log format.
All works well, but now we need to analyze a more complicated pattern.
We have documents with the field “purchase_id”, which has an integer value (like 130012, 130016, 133552, etc.).
We have OTHER documents which have an integer field “view_id” with the same values (like 130012, 130016, 133552, etc.).
The two fields never appear in the same document, because they are extracted from different URIs in the Apache log.
Our goal is to calculate and visualize the percentage of appearances, in a given time frame, of values in “purchase_id” compared to values in “view_id”.
For example, let's say we want to see the current purchase rate of item 130012. In the last 30 seconds it may appear 1000 times in documents with the field “purchase_id”, and in the same 30 seconds it may appear 40000 times in documents with the field “view_id”.
This is expected, because only a small share of the people exposed to a product actually buy it. I need to calculate and visualize that in this time frame there were 1000 occurrences of purchase_id for item 130012 and 40000 occurrences of view_id for item 130012, then divide 1000 by 40000 and multiply by 100%, so I get 2.5% visualized on the dashboard (for item 130012).
Of course I have many such purchase_id = view_id = (some number) pairs, so I need to calculate the percentage for all of them and display, let's say, the 20 with the highest percentage.
This will let me know the best-selling items relative to the advertising we invest in.
I would track this issue for Kibana.
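In the meantime the ratio can be computed outside Kibana: run one terms aggregation per field over the same window and divide the counts client-side. A sketch (the index pattern logstash-*, the @timestamp field, and the 1000-term limit are assumptions; the 30-second window and field names follow the question):

import requests

ES = "http://localhost:9200"

def counts_by_item(field):
    # Doc count per item id, over the last 30 seconds, for docs having `field`.
    query = {
        "size": 0,
        "query": {"range": {"@timestamp": {"gte": "now-30s"}}},
        "aggs": {"per_item": {"terms": {"field": field, "size": 1000}}},
    }
    resp = requests.post(f"{ES}/logstash-*/_search", json=query).json()
    return {b["key"]: b["doc_count"]
            for b in resp["aggregations"]["per_item"]["buckets"]}

purchases = counts_by_item("purchase_id")
views = counts_by_item("view_id")

# e.g. 1000 purchases / 40000 views * 100 = 2.5% for item 130012
rates = {item: purchases[item] / views[item] * 100
         for item in purchases if views.get(item)}
for item, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True)[:20]:
    print(item, f"{rate:.1f}%")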
