Logstash + Elasticsearch + Kibana: combine results from different fields in different documents

We have Apache logs analyzed by Elasticsearch (2.1.0) and Kibana (4.3.0).
Logs are parsed and shipped to Elasticsearch by Logstash running on the web servers and reading the Apache combined log format.
Everything works well, but now we need to analyze a more complicated pattern.
We have documents with a field “purchase_id” that holds an integer value (like 130012, 130016, 133552, etc.).
We have OTHER documents with an integer field “view_id” holding the same kind of values (like 130012, 130016, 133552, etc.).
The two fields never appear in the same document, because they are extracted from different URIs in the Apache log.
Our goal is to calculate and visualize, for a given time frame, the percentage of appearances of values in “purchase_id” compared to the same values in “view_id”.
For example, let's say we want to see the current purchase rate of item 130012. In the last 30 seconds it may appear 1,000 times in documents with the field “purchase_id”, and in the same 30 seconds it may appear 40,000 times in documents with the field “view_id”.
This is expected, because only a small share of the people exposed to a product actually buy it. I need to calculate that in the time frame there were 1,000 occurrences of purchase_id for item 130012 and 40,000 occurrences of view_id for item 130012, then divide 1,000 by 40,000 and multiply by 100% to get 2.5%, visualized on the dashboard (for item 130012).
Of course I have many such purchase_id = view_id = (some number) pairs, so I need to calculate the percentage for all of them and display, let's say, the 20 with the highest percentage.
This will let us see the best-selling items relative to the advertising we invest in them.

I would track this issue for Kibana.
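In the meantime, the ratio itself can be computed on the Elasticsearch side. A minimal sketch, assuming Logstash also copies the value into a common item_id field on both document types (item_id and the index name logs are assumptions, not part of the original setup), using a terms aggregation with two filter sub-aggregations and a bucket_script pipeline aggregation (available since ES 2.0):

POST /logs/_search
{
  "size": 0,
  "query": { "range": { "@timestamp": { "gte": "now-30s" } } },
  "aggs": {
    "per_item": {
      "terms": { "field": "item_id", "size": 20 },
      "aggs": {
        "purchases": { "filter": { "exists": { "field": "purchase_id" } } },
        "views": { "filter": { "exists": { "field": "view_id" } } },
        "purchase_rate": {
          "bucket_script": {
            "buckets_path": { "p": "purchases._count", "v": "views._count" },
            "script": "v == 0 ? 0 : 100.0 * p / v"
          }
        }
      }
    }
  }
}

Note that the terms aggregation orders its buckets by document count, not by the computed rate, so picking the 20 items with the highest percentage would still need a client-side sort.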

Related

Elasticsearch query over huge data

We have over 100 million records stored in Elasticsearch.
The dataset is too large to be fully loaded into our service's memory.
Each record has a field called amount. The search has to find a set of records (sometimes over 10 thousand of them) whose amounts sum to a value equal or close to an input value.
Below is our current solution:
We merge the 100 million records into 4,000 buckets using ES's bucket aggregations. Each bucket's amount is the sum of the amounts of the records it contains.
We load the 4,000 buckets into our service. Then we solve the problem described above over those 4,000 buckets.
The obvious disadvantage is the lack of accuracy: the difference between the sum of the results we find and the input target is sometimes quite large.
We are three young engineers without much experience; we would appreciate some guidance.
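For reference, the bucketing step described above can be sketched as a histogram aggregation (assuming that is what "ES's bucket" refers to; the index name records and the interval of 1000 are placeholders to be tuned so the amount range splits into roughly 4,000 buckets):

POST /records/_search
{
  "size": 0,
  "aggs": {
    "amount_buckets": {
      "histogram": { "field": "amount", "interval": 1000 },
      "aggs": {
        "bucket_amount": { "sum": { "field": "amount" } }
      }
    }
  }
}

The underlying task is a form of the subset-sum problem, so any bucketing scheme trades accuracy for memory: smaller intervals reduce the error but produce more buckets to load.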

Get the last value of a metric in a Datadog dashboard

I want to display the last value of a metric in a Query Value widget on my Datadog dashboard.
For the moment, I'm using:
"queries": [
{
"query": "max:blabla.mycount{$env}",
"data_source": "metrics",
"name": "query1",
"aggregator": "last"
}
]
Is this the right way to do that? For this series of mycount values [20, 1, 5, 3, 2], which number will be taken? Is it really the last one of the series (2), or the biggest one in the series (20)?
Regards,
Blured.
So there are going to be 3 levels of aggregation to consider: the time aggregation and space aggregation of your query, and then the aggregation of the query value widget on the frontend (which is what you're asking about). For now, let's work through time and space aggregation by thinking of a timeseries widget, and then we'll see what happens with the query value widget after.
Space aggregation is the simplest one. The idea is that you have multiple time series being submitted from multiple applications/servers. If 20 computers send a metric all at the same time, which value should we pick to display? You decide that with the aggregation chunk of your query; yours is currently set to max.
The idea is that you have to decide which out of the dozens or hundreds of instances of your metric is the one you want to display.
If you don't want to worry about space aggregation, you have to make your query specific enough that only one time series exists for that metric. For example, a CPU metric will need to be scoped to at least the hostname. For a container metric, hostname isn't enough; you would need at least the container_id. For a database there should be a db_identifier or something that gets you just one result back.
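For example (an illustrative scope; system.cpu.user is a standard Datadog metric and web-01 is a placeholder hostname):

avg:system.cpu.user{host:web-01}

Scoped like that, only one series matches, so space aggregation has nothing left to merge.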
Now for time aggregation, let's look at the docs a bit:
As Datadog stores data at a 1 second granularity, it cannot display all real data on graphs. See How data is aggregated in graphs for more details.
For a graph on a 1-week time window, it would require sending hundreds of thousands of values to your browser—and besides, not all these points could be graphed on a widget occupying a small portion of your screen.
...
The Datadog backend tries to keep the number of intervals to a number below ~300.
https://docs.datadoghq.com/dashboards/guide/query-to-the-graph/#proceed-to-time-aggregation
So for example, if you are looking at a 5 minute window, the time aggregation will be as granular as possible. There are 300 seconds in 5 minutes, so every interval on the graph will represent 1 second. If we zoom out to 10 minutes (600 seconds), we can only show data every 2 seconds, so each bucket will represent 2 data points (assuming the metric is submitted every second).
In most scenarios your metrics are being submitted at a 15 second interval. So you won't notice any time aggregation rollups until 15*300=4500 seconds (a bit over an hour).
You control this with the rollup function, as described in the docs. If you don't want to worry about time aggregation, just make sure your time range is zoomed in enough to not have any bucketing.
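For example, a sketch that pins the time aggregation to fixed 60-second buckets using the rollup function on the query from the question:

max:blabla.mycount{$env}.rollup(max, 60)

Each graphed point then represents the maximum over a fixed 60-second bucket (subject to the backend's point limits), regardless of the dashboard's zoom level.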
And now for the last level of aggregation: the query value widget. You have now obtained a set of up to ~300 points from the backend; space and time aggregation have already been applied. Out of those points, which one do you want to display? You could choose the last point, or a sum of the points, or whatever. That is what the "aggregator": "last" in your widget JSON controls, so it picks the final point of the already-aggregated series rather than the biggest one.
Hopefully that helps!

Elasticsearch - Search within near realtime (1 sec)

I came across the following phrase in https://www.elastic.co/guide/en/elasticsearch/reference/6.8/documents-indices.html
When a document is stored, it is indexed and fully searchable in near real-time, within 1 second.
Assuming the 1 sec is approximate and depends on various factors, can we safely assume it is at least 1 sec? Also, I see different time intervals that kick in as part of indexing, like the refresh interval, etc. Is this 1 sec approximately the sum of all those intermediate intervals?
How real-time is Elasticsearch when we say it is a (near) real-time search engine?
The default refresh interval (controlled by the index setting index.refresh_interval) is one second. The sentence you cite means exactly that. By default, a document you index will be available for search within at most one second, but it can be less than that.
If a refresh happens at instant T and you index a document at that same moment, then the underlying segments will be refreshed in pretty much exactly one second and your document will be searchable after that refresh.
If a refresh happens at instant T, and you index your document 500ms after that instant, then it will be available for search just 500ms after being indexed.
That also means your document could be available just a few milliseconds (say 10ms) after being indexed if you index it at instant T+990ms after the last refresh that happened at instant T.
It's not an exact science, so that one second should be taken with a grain of salt; sometimes it could last a tad longer, say 10xx ms, where xx depends on various factors. You should not rely on that duration being nano-exact, though.
So near-real time simply means the duration of that refresh interval (which you can modify).
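For reference, a minimal sketch of changing that refresh interval on an existing index (my-index is a placeholder name):

PUT /my-index/_settings
{
  "index": { "refresh_interval": "5s" }
}

Setting it to -1 disables periodic refreshes altogether, which is sometimes done during bulk loading.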

Elasticsearch index by date search performance - to split or not to split

I am currently playing around with Elasticsearch (ES). We are ingesting sensor data, and for 3 years' worth we have approximately 1,000,000,000 documents in one index, making the index about 50 GB in size. Indexing performance is not that important, as new data only arrives every 15 minutes per sensor on average, so I want to focus on search and aggregation performance. We are running a front-end showing basically a dashboard of average values from the last week compared to one year before, etc.
I am using ES on AWS, and after performance on one machine was quite slow, I spun up a cluster with 3 data nodes (each 2 cores, 8 GB mem) and gave the index 3 primary shards and one replica. Throwing computing power at the data certainly improved the situation, and more power would help more, but my question is:
Would splitting the index, for example by month, increase the performance? Or, being more specific: is querying (esp. by date) a smaller index faster if I adjust the queries adequately, or does ES already 'know' where to find specific dates in a shard?
(I know about other benefits of having smaller indices, like being able to roll over and keep only a specific time interval, etc.)
1/ Elasticsearch only knows where to find a specific date in an index if your index is sorted by your date field. You can check the index sorting documentation for the details.
In your use case, it can drastically improve search performance; see the sketch below. And since all the data will be added at the "end of the index" because it is date-sorted, you should not see much indexing overhead.
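A minimal sketch of enabling it (index sorting can only be defined at index-creation time; the index name sensor-data and the field name timestamp are placeholders, and the typeless mapping assumes ES 7.x):

PUT /sensor-data
{
  "settings": {
    "index.sort.field": "timestamp",
    "index.sort.order": "desc"
  },
  "mappings": {
    "properties": {
      "timestamp": { "type": "date" }
    }
  }
}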
2/ Without an index sort, smaller time-bounded indices will work better (even if you target all your indices), since that often allows a rewrite of your range query to a match_all / match_none internal query.
For more information about this behavior, you should read this blog post:
Instant Aggregations: Rewriting Queries for Fun and Profit

How to accommodate minutely and hourly data in the same visualisation?

Current Scenario -
The current dashboard is set to a Sum aggregation at the minutely level. My dashboard currently works only when the interval is set to minutely; if I change the interval, the graph shows incorrect values. This happens because more than one document is generated per minute, and the correct value per minute is the sum of the field values at the minutely level.
So even today we are obliged to use the minute interval, but I'm fine with this.
Now, the hourly documents are designed to ingest data after doing all the math (and we have validated the ingestion logic), so there is 1 doc per hour. This is the reason the visualisation is not able to accommodate both types of data.
If I had a scenario like 1 document per minute and 1 document per hour, then I could have gone with an average metric or perhaps a max metric. But at present I have to sum the doc values for each minute (mandatory), therefore whatever internal logic applies to the minutely data also gets applied to the hourly data.
Is there a way where I can show both types of data in the same graph?
Mathematically, the approach is wrong.
Having n documents per minute (where n depends on the number of hosts in that cluster) and then 1 document per hour per type is illogical from a visualisation perspective, because the actual value needed is the sum of all n documents generated per minute, and the sum metric applied at the minutely level also gets applied to the hourly data. If we want to accommodate both types of data in the same graph, we need uniformity: aggregate the data at the minutely level on the other end and then send the already-aggregated data to Elasticsearch.
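For reference, a minimal sketch of the minutely Sum as an Elasticsearch date_histogram aggregation (the index name metrics and field name value are placeholders; fixed_interval assumes ES 7.x, older versions use interval):

POST /metrics/_search
{
  "size": 0,
  "aggs": {
    "per_minute": {
      "date_histogram": { "field": "@timestamp", "fixed_interval": "1m" },
      "aggs": {
        "value_sum": { "sum": { "field": "value" } }
      }
    }
  }
}

Once each stream delivers exactly one pre-aggregated document per bucket, the same Sum metric yields correct values at either granularity.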
