I have two metrics already created.
The 1st metric represents the number of transactions started by the client.
The 2nd metric represents the number of transactions received by the server.
I want to get the number of transactions which failed (sent by the client but not received by the server), which is a simple subtraction.
Can I achieve this in Kibana?
There is a plugin for Kibana 5.0.0+. It is based on the core Metric-Plugin but gives you the ability to output custom aggregates on metric results by using a custom formula and/or JavaScript.
You can find more details here.
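If a plugin is not an option, the same subtraction can also be done outside Kibana by querying the two counts directly. A minimal sketch with the Python elasticsearch client, where the index names client-transactions and server-transactions are assumptions:

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    # Count transactions started by the client (index name is an assumption).
    started = es.count(index="client-transactions")["count"]

    # Count transactions received by the server (index name is an assumption).
    received = es.count(index="server-transactions")["count"]

    # Failed = started by the client but never received by the server.
    failed = started - received
    print(f"started={started} received={received} failed={failed}")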
Related
I'm new to Grafana and I'm trying to create a graph that shows the request count together with the average response time for the requests. I was able to create my request count, but now I'm struggling to add the response-time information. Is there an option to show both pieces of information inside one panel, or do I need to create two panels, one with the request count and another with the average time?
And another question: is there an option to show the average time in milliseconds?
I've been reviewing different ways to aggregate log messages that have a start event but no end event. I've been struggling with the logstash aggregate filter plugin not sorting correctly, and I was looking at retrofitting an old entity-centric model built for a previous version of elasticsearch (Entity-Centric Indexing - Mark Harwood | Elastic Videos) when I realized that elasticsearch 7.13 transforms introduce the concept of 'latest', which (hopefully) removes my need for a bunch of external scripts to do this.
I am looking at the "Getting Web Session Details by using Scripted Metric Aggregation" sample painless script https://www.elastic.co/guide/en/elasticsearch/reference/current/transform-painless-examples.html#painless-web-session which produces session details, including session duration. Because the logs do not have an end time, I need to make use of a timeout interval, something like a 30-minute window, for aggregating message events based on my group by.
Is this possible to do within the transform by adjusting that script, and could anyone help?
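To make the intent concrete, this is the session-splitting logic I have in mind, sketched in plain Python (the field names are made up); I'd like to express the same thing inside the transform:

    from datetime import datetime, timedelta

    TIMEOUT = timedelta(minutes=30)

    def split_into_sessions(events):
        """Split one group's events into sessions: a new session starts
        whenever the gap to the previous event exceeds the timeout."""
        events = sorted(events, key=lambda e: e["timestamp"])
        sessions, current = [], []
        for event in events:
            if current and event["timestamp"] - current[-1]["timestamp"] > TIMEOUT:
                sessions.append(current)
                current = []
            current.append(event)
        if current:
            sessions.append(current)
        return sessions

    # Example: two events 45 minutes apart end up in separate sessions.
    t0 = datetime(2021, 6, 1, 12, 0)
    events = [{"timestamp": t0}, {"timestamp": t0 + timedelta(minutes=45)}]
    print(len(split_into_sessions(events)))  # -> 2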
I am working with a graph panel in grafana and using elasticsearch as a data source. In the data source, I have memory-used values with timestamps. I am trying to send a notification alert when the difference is more than 100 MB. How do I find the difference between the memory used on day one and the memory used on the current day, and send an alert notification?
You would set up a query which is basically grouped by timestamp, and define it based on whether you are looking for the 100 MB difference on the max value or on the average. Assuming it is the max value, your query would be something like the sketch below.
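This is only a sketch of the underlying elasticsearch aggregation, written as a Python dict; the index name metrics-*, the field names @timestamp and memory_used, and the daily bucket interval are all assumptions:

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    # Daily buckets, each reporting the maximum memory used on that day.
    query = {
        "size": 0,
        "aggs": {
            "per_day": {
                "date_histogram": {"field": "@timestamp", "calendar_interval": "1d"},
                "aggs": {"max_memory": {"max": {"field": "memory_used"}}},
            }
        },
    }

    resp = es.search(index="metrics-*", body=query)
    buckets = resp["aggregations"]["per_day"]["buckets"]

    # Day-over-day difference between the last two daily buckets
    # (in whatever unit memory_used is stored in).
    if len(buckets) >= 2:
        diff = buckets[-1]["max_memory"]["value"] - buckets[-2]["max_memory"]["value"]
        print("difference:", diff)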
Then you would set up alerts by going to the alert tab, based on the query and the diff in the values over 24 hours.
I have a problem with creating metrics and later triggering alerts based on those metrics. I have two datasources, both elasticsearch. One contains documents (logs from a service) saying that a message was produced to kafka; the second contains documents (also logs from a service) saying that a message was consumed. What I want to achieve is to trigger an alert if the ratio of produced to consumed messages drops below 1.
Unfortunately it is impossible to use prometheus, for two reasons:
1) the counter resets each time the service is restarted;
2) the second service doesn't have (and won't have in a reasonable time) prometheus integration.
The question is how to approach metrics and alerting based on those data sources. Is it possible? Maybe there is another way to achieve my goal?
The question is somewhat generic (no mapping or code is provided), so I'll describe an approach.
You can use a watcher on top of an aggregation that you create.
It's relatively straightforward to create a percentage of consume/produce, and based upon that percentage you can trigger an alert via the watcher.
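Just to make the ratio check concrete, here is a minimal sketch done outside Watcher with two count queries; the index names, the time field and the 5-minute window are assumptions, and in practice you would put the equivalent search and condition inside the watcher itself:

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    # Restrict both counts to the same recent window (field name assumed).
    window = {"query": {"range": {"@timestamp": {"gte": "now-5m"}}}}

    produced = es.count(index="kafka-produced-logs", body=window)["count"]
    consumed = es.count(index="kafka-consumed-logs", body=window)["count"]

    # Alert if the produced/consumed ratio drops below 1.
    if consumed > 0 and produced / consumed < 1:
        # Replace the print with your notification hook.
        print(f"ALERT: produced={produced} consumed={consumed}")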
Take a look at this tutorial (official elasticsearch channel) on how to do this. Also, check the tutorials for your specific version of elasticsearch. From 5.x to 7.x, setting alerts has been significantly improved (this means that for 7.x you might be able to do this via the kibana UI, but for 5.x you'll probably need to add the alert by indexing JSON into the appropriate .watcher indices).
I haven't used grafana, but I believe the same approach can be applied. You'll need an aggregation as mentioned before, and then you add the alert: https://grafana.com/docs/grafana/latest/alerting/rules/
I've recently done a lot of research into graphite with statsD instrumentation. With help from our developer operations team we managed to get multiple servers reporting metrics to graphite and to combine all the metrics. This is partially what we are looking for; however, I want to filter the metric collection by server rather than having all the metrics averaged together. The purpose of this is to monitor metric collection on a per-server basis, as many of our stats could also be used to visualize server uptime and performance. I haven't been able to find anything in my research about how this might be achieved, other than maybe some trickery with the aggregation rules.
You should include the server name as the first path component of the metric name being emitted. Graphite separates the metric name into path components using . as the delimiter. For example, you may want to use a naming schema like:
<data_center>_<environment>_<role>_<node_id>.gauges.cpu.idle_pct
This will cause each server to be listed as a separate category on http://graphite_hostname.com/dashboard/
If you need to perform aggregations across servers, you can do that at the graphite layer, or you could emit the same metric under two different names: one metric name that has the first path component as the server name, and one metric name that has the first path component as a value that is shared across all servers you want that metric aggregated across.
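As an illustration, here is a sketch that emits the same gauge under both a per-server name and a shared name using the plain statsd line protocol over UDP; the host, port and naming values are assumptions:

    import socket

    STATSD_ADDR = ("localhost", 8125)  # statsd host/port are assumptions
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def gauge(name, value):
        # statsd line protocol for a gauge: "<metric name>:<value>|g"
        sock.sendto(f"{name}:{value}|g".encode(), STATSD_ADDR)

    idle_pct = 73

    # Per-server metric: the first path component identifies the node.
    gauge("dc1_prod_web_node42.gauges.cpu.idle_pct", idle_pct)

    # Shared metric: the first path component is common to all web servers,
    # so Graphite can aggregate across them without per-node wildcards.
    gauge("dc1_prod_web_all.gauges.cpu.idle_pct", idle_pct)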