How to set up monitoring for Redmine tasks/issues in Grafana?

I plan to set up monitoring for Redmine so that I can see man-hours spent on tickets, time taken to complete a ticket, etc., to monitor the productivity of my team. I want to see all of this in Grafana. My current thinking is to use Prometheus and expose the metrics to it, but I am not sure how (I might have to create an exporter, but I am not sure whether that would work). So basically, how can this be done?

A Prometheus exporter is simply an HTTP server that sits next to your target (Redmine in your case, although I have no experience with it). Whenever it receives a /metrics request, it makes one or more API calls to the target (assuming Redmine provides an API to query the numbers you need) and returns those numbers as Prometheus metrics with names, labels, etc.
Here are the Prometheus clients (which help expose metrics in the format Prometheus accepts) for Go and Java (look for simpleclient_http or simpleclient_servlet). Many other languages are supported as well.
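To make this concrete, here is a minimal exporter sketch in Go using the official Prometheus client. The metric name, port, and the hard-coded value are illustrative assumptions, not an existing exporter's conventions; a real exporter would refresh the gauge from the Redmine REST API:

// Minimal Prometheus exporter sketch; redmine_open_issues and port 9420
// are made-up examples for illustration.
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var openIssues = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Name: "redmine_open_issues",
		Help: "Number of open issues per project.",
	},
	[]string{"project"},
)

func main() {
	prometheus.MustRegister(openIssues)

	// In a real exporter you would set these values from Redmine API
	// responses, either on a timer or inside a custom Collector.
	openIssues.WithLabelValues("example-project").Set(42) // placeholder

	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9420", nil))
}

Point Prometheus at this endpoint with a scrape_config and the metrics show up under the names you defined.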

Adding on to @Alin's answer on exposing Redmine metrics to Prometheus: you would need to install an exporter. There is a Redmine plugin available for Prometheus:
https://github.com/mbeloshitsky/redmine_prometheus.git

You can get the hours and all the other data you need through the Redmine REST APIs. Write a small program to fetch the data and push it into Graphite or Prometheus; you could run such a script via Sensu by writing a metric check in Python, Ruby, or Perl (a sketch follows below). After that, all you have to do is plot the graphs. Well, that's another race :P
Redmine guide: http://www.redmine.org/projects/redmine/wiki/Rest_api_with_python
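As a hedged sketch of the fetch-and-push idea (in Go rather than the scripting languages above; the Redmine URL, API key, metric path, and Graphite host are all placeholders), here is a tiny program that pulls spent hours from Redmine's /time_entries.json endpoint and writes a total to Graphite's plaintext port:

// Fetch Redmine time entries and push the total to Graphite.
// All hosts and the API key below are placeholders.
package main

import (
	"encoding/json"
	"fmt"
	"net"
	"net/http"
	"time"
)

type timeEntries struct {
	TimeEntries []struct {
		Hours float64 `json:"hours"`
	} `json:"time_entries"`
}

func main() {
	resp, err := http.Get("https://redmine.example.com/time_entries.json?key=YOUR_API_KEY")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var entries timeEntries
	if err := json.NewDecoder(resp.Body).Decode(&entries); err != nil {
		panic(err)
	}

	var total float64
	for _, e := range entries.TimeEntries {
		total += e.Hours
	}

	// Graphite plaintext protocol: "<metric.path> <value> <unix-timestamp>\n"
	conn, err := net.Dial("tcp", "graphite.example.com:2003")
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	fmt.Fprintf(conn, "redmine.time_entries.hours_total %f %d\n", total, time.Now().Unix())
}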

Related

Linkerd proxies metrics reset

Good evening,
I'm a student at the University of Rome Tor Vergata. I'm currently working on my master's thesis, which involves the use of Linkerd.
Very briefly, the thesis is about implementing a fully distributed root-cause localization system for microservices architectures.
In the metrics-collection phase I'm facing an issue with Linkerd, since I'm not using Prometheus but instead manually scraping metrics from the proxies through the /metrics endpoint.
I can't understand how or when Linkerd's proxies reset the various metrics they collect.
Does anybody know if they have a timer? Or is there a way to make them reset the metrics after scraping?
Thanks in advance for any help.
The Linkerd proxy stores metrics in memory from the moment the proxy process starts running.
Most of the metrics are histogram buckets whose main purpose is to view the data over time, so there isn't a way to reset them, and they don't reset themselves.
You could write Prometheus queries that select the windows of time where you would otherwise have reset the metrics, or you could restart the containers and write queries that filter the metrics to the newer workloads.
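If you are scraping by hand, the usual workaround is to treat the counters as cumulative and compute your own deltas between scrapes. A hedged Go sketch (the :4191 admin port and the request_total counter name are assumptions based on a typical Linkerd proxy; adjust for your setup):

// Poll the proxy's /metrics endpoint and report per-window deltas,
// since the cumulative counters never reset on their own.
package main

import (
	"fmt"
	"net/http"
	"time"

	"github.com/prometheus/common/expfmt"
)

// scrapeCounter sums a counter over all of its label sets.
func scrapeCounter(url, name string) (float64, error) {
	resp, err := http.Get(url)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()

	var parser expfmt.TextParser
	families, err := parser.TextToMetricFamilies(resp.Body)
	if err != nil {
		return 0, err
	}
	var total float64
	for _, m := range families[name].GetMetric() {
		total += m.GetCounter().GetValue()
	}
	return total, nil
}

func main() {
	const url = "http://localhost:4191/metrics" // assumed proxy admin port
	prev, _ := scrapeCounter(url, "request_total")
	for {
		time.Sleep(10 * time.Second)
		cur, err := scrapeCounter(url, "request_total")
		if err != nil {
			continue
		}
		fmt.Printf("requests in the last window: %.0f\n", cur-prev)
		prev = cur
	}
}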

How do I instrument my code for Splunk metrics?

I'm brand new to Splunk, having worked exclusively with Prometheus before. The one obvious thing I can't find on the Splunk website is how I create/expose a metric in my code: whether I must provide an HTTP endpoint for consumption, call into some API to push values, etc. Furthermore, I can't see which languages Splunk provides libraries for to aid instrumentation. I can't find where all this low-level stuff is documented!
Can anyone help me understand how Splunk works, particularly how it compares to Prometheus?
Usually, programs write their normal log files and Splunk ingests those files so they can be searched and data extracted.
There are other ways to get data into Splunk, though. See https://dev.splunk.com/enterprise/reference for the SDKs available in a few languages.
You could write your metrics to collectd and then send them to Splunk. See https://splunkonbigdata.com/2020/05/09/metrics-data-collection-via-collectd-part-2/
You could write your metrics directly to Splunk using their HTTP Event Collector (HEC). See https://dev.splunk.com/enterprise/docs/devtools/httpeventcollector/
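To show what the HEC route looks like in practice, here is a hedged Go sketch of pushing a single metric. The host, the default HEC port 8088, and the token are placeholders, and the target index needs to be a metrics index:

// Send one measurement to Splunk's HTTP Event Collector.
// Endpoint and token below are placeholders for your HEC settings.
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Single-metric HEC payload: the event is the literal string "metric",
	// the value goes in "_value" and the name in "metric_name".
	payload := []byte(`{
		"event": "metric",
		"source": "my-app",
		"fields": {"metric_name": "queue.depth", "_value": 42, "region": "us-east-1"}
	}`)

	req, err := http.NewRequest("POST", "https://splunk.example.com:8088/services/collector", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Splunk YOUR-HEC-TOKEN")
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("HEC response status:", resp.Status)
}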

How to consume Google PubSub opencensus metrics using GoLang?

I am new to Google Pub/Sub and am using the Go client library.
How do I see the OpenCensus metrics recorded by the google-cloud-go library?
I have already successfully published a message to Pub/Sub, and now I want to see metrics such as the one below, but I cannot find them in Google Stackdriver.
PublishLatency = stats.Float64(statsPrefix+"publish_roundtrip_latency", "The latency in milliseconds per publish batch", stats.UnitMilliseconds)
https://github.com/googleapis/google-cloud-go/blob/25803d86c6f5d3a315388d369bf6ddecfadfbfb5/pubsub/trace.go#L59
This is curious; I'm surprised to see these (machine-generated) APIs sprinkled with OpenCensus (Stats) integration.
I've not tried this but I'm familiar with OpenCensus.
One of OpenCensus' benefits is that it loosely-couples the generation of e.g. metrics from the consumption. So, while the code defines the metrics (and views), I expect (!?) the API leaves it to you to choose which Exporter(s) you'd like to use and to configure these.
In your code, you'll need to import the Stackdriver exporter (and any other exporters you wish to use) and then follow these instructions:
https://opencensus.io/exporters/supported-exporters/go/stackdriver/#creating-the-exporter
NOTE I encourage you to look at the OpenCensus Agent too, as this further decouples your code; you reference the generic OpenCensus Agent in your code and configure the agent to route e.g. metrics to e.g. Stackdriver.
For Stackdriver, you will need to configure the exporter with a GCP Project ID, and that project will need to have Stackdriver Monitoring enabled (and configured). I've not used Stackdriver in some months, but this used to require a manual step too. The easiest way to check is to visit:
https://console.cloud.google.com/monitoring/?project=[[YOUR-PROJECT]]
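A minimal sketch of that wiring, assuming the standard OpenCensus Go Stackdriver exporter and the DefaultPublishViews slice that the linked trace.go defines (YOUR-PROJECT is a placeholder):

// Export the pubsub package's OpenCensus stats to Stackdriver.
package main

import (
	"log"

	"cloud.google.com/go/pubsub"
	"contrib.go.opencensus.io/exporter/stackdriver"
	"go.opencensus.io/stats/view"
)

func main() {
	exporter, err := stackdriver.NewExporter(stackdriver.Options{
		ProjectID: "YOUR-PROJECT", // must have Stackdriver Monitoring enabled
	})
	if err != nil {
		log.Fatal(err)
	}
	defer exporter.Flush()
	view.RegisterExporter(exporter)

	// Register the publish-side views from the pubsub package so the
	// recorded stats (publish latency etc.) are actually aggregated.
	if err := view.Register(pubsub.DefaultPublishViews...); err != nil {
		log.Fatal(err)
	}

	// ... create the pubsub client and publish as usual ...
}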
If I understand the intent (!) correctly, I expect API calls will then record stats for the metrics in the views defined in the code that you referenced.
Once you're confident that metrics are being shipped to Stackdriver, the easiest way to confirm this is to query a metric using Stackdriver's metrics explorer:
https://console.cloud.google.com/monitoring/metrics-explorer?project=[[YOUR-PROJECT]]
You may wish to test this approach using the Prometheus Exporter first, because it's simpler. After configuring the Prometheus Exporter, when you run your code, it will create an HTTP server and you can curl the metrics that are being generated at:
http://localhost:8888/metrics
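A corresponding sketch of that check, serving the OpenCensus metrics on :8888 so you can curl them locally (the port matches the URL above):

// Expose OpenCensus stats via the Prometheus exporter for local testing.
package main

import (
	"log"
	"net/http"

	"contrib.go.opencensus.io/exporter/prometheus"
	"go.opencensus.io/stats/view"
)

func main() {
	pe, err := prometheus.NewExporter(prometheus.Options{})
	if err != nil {
		log.Fatal(err)
	}
	view.RegisterExporter(pe)

	// The exporter doubles as an http.Handler for the scrape endpoint.
	http.Handle("/metrics", pe)
	log.Fatal(http.ListenAndServe(":8888", nil))
}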
NOTE OpenCensus is being (!?) deprecated in favor of a replacement solution called OpenTelemetry.

Showing HTTP Request API latency using the Spring Boot Micrometer metrics

We use Prometheus to scrape Spring Boot 2.0.0 metrics and then persist them in InfluxDB.
We then use Grafana to visualize them from InfluxDB.
Our micrometer dependencies are
micrometer-core
micrometer-registry-prometheus
I want to be able to show a latency metric for our REST APIs.
From our Prometheus scraper I can see these metrics are generated for HTTP requests.
http_server_requests_seconds_count
http_server_requests_seconds_sum
http_server_requests_seconds_max
I understand from the Micrometer documentation, https://micrometer.io/docs/concepts#_client_side, that latency can be computed by combining two of the generated metrics above: totalTime / count.
However, our data source is InfluxDB, which does not support combining measurements, https://docs.influxdata.com/influxdb/v1.7/troubleshooting/frequently-asked-questions/#how-do-i-query-data-across-measurements,
so I am unable to implement that function in InfluxDB.
Do I need to provide my own implementation of this latency metric in the Spring Boot component, or is there an easier way to achieve this?
You essentially can join your measurements in Kapacitor, another component of the InfluxData TICK stack.
It's going to be pretty simple with JoinNode, possibly followed by Eval to calculate what you want right in place. There are tons of examples of this in the documentation.
The problem lies elsewhere, though: you've unnecessarily overengineered your solution, and moreover, you're trying to combine two products that have the same purpose but take different approaches to it. How smart is that?
You're already scraping things with Prometheus? Fine! Stay with it and do the math there; it's simple (see the query sketch below). And Grafana works with Prometheus too, right out of the box!
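For instance, a typical Grafana panel query for average request latency divides the rate of the sum by the rate of the count. The metric names come from the question; the 5m window is an assumption:

rate(http_server_requests_seconds_sum[5m]) / rate(http_server_requests_seconds_count[5m])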
You want to have your data in Influx (I can understand that; it's certainly more advanced)?
Fine! Micrometer can send it straight to Influx out of the box, in at least two ways!
Personally, I don't see any reason to do what you propose to do; can you share one?

Remote Execution in Ruby (Capistrano or MCollective) to collect cloud server performance metrics

I am looking for a way to collect data remotely from various cloud instances (EC2, Rackspace). The Rackspace API provides no way to collect server performance metrics (i.e. load average, CPU usage, memory); otherwise this would never have been asked.
I started looking at solutions like Capistrano and MCollective (I have also considered collectd), but I am unsure which one would best suit my application. I am trying to avoid using SSH keys for trending purposes (I don't want to have to keep logging in to collect these metrics). The script I am writing is a Ruby script that reboots a cloud server if its load average is over a certain number. Because these providers don't expose these metrics via their APIs, I am looking for a way to gather them myself, and since I am new to the Ruby community, after reading over the documentation for all of these tools I still haven't been able to get a sense of which framework would work best, or whether there are other alternatives.
It sounds like Capistrano is more suited to being a deployment tool; although it can perform remote tasks, after I read its documentation it was pretty much out for the purposes of my script.
MCollective looks really attractive for what I am trying to do, but it seems I would have to write my own RPC-style plugin for this purpose.
I've also considered plugging into some larger monitoring system such as Nagios, Munin, Zenoss, or Hyperic, but I'd rather not install a big, bulky monitoring system when all I want to collect is a few simple metrics.
If your intention is to trigger certain actions based on system performance (like restarting when CPU usage is too high), you should check out god.
I'm not sure whether it is also useful for generating performance statistics over a longer time period. Personally, I'm using Munin for that, but if you don't like it, maybe you can find something on Ruby Toolbox | Server Monitoring.
