Grafana not showing metrics exposed by the Spring Boot Prometheus endpoint - spring-boot

I have configured Micrometer and Prometheus for my Spring Boot application, and I can see the following metrics (generated using a Timer) at the endpoint /actuator/prometheus:
# HELP timer_test_seconds Time spent serving test requests
# TYPE timer_test_seconds summary
timer_test_seconds_count{class="com.test.MyController",exception="none",method="testTimer",} 2.0
timer_test_seconds_sum{class="com.test.MyController",exception="none",method="testTimer",} 8.6461705
# HELP timer_test_seconds_max Time spent serving test requests
# TYPE timer_test_seconds_max gauge
timer_test_seconds_max{class="com.test.MyController",exception="none",method="testTimer",} 4.5578234
But when I run a query for any of these metrics in Grafana (configured against the Prometheus instance), I don't see any results.
Is any configuration needed for this?
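(For reference, this is roughly how such a Timer can be produced; a hedged sketch with assumed class and endpoint names, not the actual code behind the metrics above.)

// Hedged sketch: one common way to get a Timer with class/method/exception tags
// is Micrometer's @Timed annotation plus a TimedAspect bean (requires spring-boot-starter-aop).
import io.micrometer.core.annotation.Timed;
import io.micrometer.core.aop.TimedAspect;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@Configuration
class MetricsConfig {
    // TimedAspect is what adds the class/method/exception tags seen in the output above
    @Bean
    TimedAspect timedAspect(MeterRegistry registry) {
        return new TimedAspect(registry);
    }
}

@RestController
class MyController {
    @Timed(value = "timer.test", description = "Time spent serving test requests")
    @GetMapping("/test")
    public String testTimer() throws InterruptedException {
        Thread.sleep(1000); // simulate work so the Timer records a measurable duration
        return "ok";
    }
}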

These are the metrics exposed by your service. Grafana does not interact with your service directly.
Is Prometheus up and scraping this target? You can check if Prometheus has any data at localhost:port/graph.
Is Grafana pointing to your Prometheus endpoint correctly?
Most likely, Prometheus is not scraping your target correctly. Keep in mind that Grafana is completely unaware of your service, so you should first check whether there is any data in Prometheus.
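If the data isn't there, the fix is usually on the Prometheus side. A minimal scrape job along these lines is what is typically missing (a hedged sketch; the job name, host and port are assumptions and must match your deployment):

# prometheus.yml (fragment)
scrape_configs:
  - job_name: 'spring-boot-app'            # assumed job name
    metrics_path: '/actuator/prometheus'   # the actuator endpoint shown in the question
    scrape_interval: 15s
    static_configs:
      - targets: ['localhost:8080']        # assumed host:port of your service

Once the target shows up as UP under Status > Targets, the timer_test_seconds_* series should be queryable in Prometheus, and only then in Grafana.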

Related

Micrometer vs Metricbeat

Looking to send metrics to Elasticsearch. I have a number of Docker services running Spring Boot.
What is the difference between using Micrometer and Metricbeat?
There's a nice example in the official observability documentation of how Metricbeat and Micrometer can be complementary.
Micrometer provides metrics in a vendor-neutral way. Those metrics are then pulled by Prometheus, and Metricbeat (with its Prometheus module) is used to forward them to Elasticsearch.
It is also possible to remove Prometheus from the picture and simply configure Micrometer to push metrics directly to Elasticsearch.
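For that last option, a hedged sketch of what the direct push can look like with the micrometer-registry-elastic dependency on the classpath (property names assume Spring Boot 2.x; host, index and interval are placeholders):

# application.properties (fragment) - assumes Spring Boot 2.x property names
# and the micrometer-registry-elastic dependency; values are placeholders
management.metrics.export.elastic.host=http://localhost:9200
management.metrics.export.elastic.index=micrometer-metrics
management.metrics.export.elastic.step=1m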

Scale Spring Boot App based on Thread Pool State

We have a Spring Boot microservice which should get some data from an old/legacy system. This microservice exposes an external, modern REST API. Sometimes we have to issue 7-10 requests to the legacy system in order to get all the data we need for a single API call. Unfortunately we can't use Reactor / WebClient and have to stick with WebServiceTemplate to issue those "legacy" calls. We also can't use the approach from Reactive Spring WebClient - Making a SOAP call.
What is the best way to scale such a microservice in Kubernetes? We have serious concerns that the thread pool used for parallel WebServiceTemplate invocations will be depleted very quickly, but I'm not sure that creating and exposing a custom metric based on active thread count / thread pool size is a good idea.
Any advice will be helpful.
Enable the Prometheus exporter in Spring
Make sure metrics are scraped. You're going to watch for a threadpool_size metric. Refer to your k8s/Prometheus distro docs to get Prometheus service discovery working for you.
Write a horizontal pod autoscaler (HPA) based on a Prometheus metric (see the sketch after this list):
Set up Prometheus Adapter and follow the HPA walkthrough.
Or follow this guide https://github.com/stefanprodan/k8s-prom-hpa
Depending on what k8s distro you are using, you might have different ways to get Prometheus and Prometheus service discovery:
(example platform built-in) https://cloud.google.com/stackdriver/docs/solutions/gke/prometheus
(example product) https://docs.datadoghq.com/integrations/prometheus/
(example opensource) https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
any other Prometheus solution
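For the HPA step, a hedged sketch of what such an autoscaler can look like once Prometheus Adapter exposes the metric through the custom metrics API (the deployment name, metric name and target value are assumptions for illustration):

# hpa.yaml (sketch)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: legacy-facade-hpa            # assumed name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: legacy-facade              # assumed deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: threadpool_size      # the metric named above, assumed to be exposed via Prometheus Adapter
        target:
          type: AverageValue
          averageValue: "50"         # assumed scaling threshold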

How to monitor Apache Kafka metrics?

How can I build a microservice to monitor Kafka metrics?
I don't want to use Confluent Control Center or any other tool.
Before building anything like a microservice, I would explore the Kafka exporter for Prometheus to expose Kafka metrics in Prometheus format. You could then use the Prometheus server to scrape these metrics and Grafana for dashboards/visualisations. There are other tools you could use for scraping instead of Prometheus/Grafana, e.g. Elastic Metricbeat (which I mention because you've tagged the question with 'elasticsearch'), but the Prometheus/Grafana combination is quite easy to get up and running - there are also out-of-the-box Grafana dashboards that you can install without having to set everything up manually, e.g. this one.
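For completeness, once the Kafka exporter is running, the Prometheus side is just another scrape job (a hedged sketch; the hostname is an assumption, and 9308 is the exporter's usual default port):

# prometheus.yml (fragment)
scrape_configs:
  - job_name: 'kafka-exporter'
    static_configs:
      - targets: ['kafka-exporter:9308']   # assumed exporter address; 9308 is the common default port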

How to export multiple microservices' Jaeger metrics stored in Elasticsearch to Prometheus

We are able to get latency metrics of multiple microservices using Jaeger. Currently Jaeger stores application metrics in Elasticsearch.
My use case is to get the application latency from Elasticsearch into Prometheus.
Is there any way to read Jaeger's Elasticsearch metrics? I already used elasticsearch-prometheus-exporter, which only exports cluster details of ES.
The prometheus-es-exporter provides a way to create metrics using queries.
For further details you can check prometheus-es-exporter#query-metrics.
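As a rough illustration only (check the exact file layout and key names against the prometheus-es-exporter README linked above; the index pattern and aggregation below are assumptions about a Jaeger span index):

; exporter.cfg (sketch) - key names as described in the project's README
[query_jaeger_latency]
QueryIntervalSecs = 30
QueryIndices = jaeger-span-*
QueryJson = {
        "size": 0,
        "query": { "match_all": {} },
        "aggs": {
            "avg_duration": { "avg": { "field": "duration" } }
        }
    }

The exporter turns the aggregation results of such a query into Prometheus metrics exposed on its /metrics endpoint.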

How do I pull Elasticsearch metrics into Prometheus using the elasticsearch_exporter

I have installed Prometheus into a Kubernetes cluster using the Helm stable chart. We run Elasticsearch, and I want to scrape metrics from it and then create alerts based on events.
I have installed the elasticsearch exporter via Helm, but nowhere can I find how to then import these metrics into Prometheus.
There must be some config I am missing, such as creating a scrape job or something. Any help is much appreciated.
I connected to the elasticsearch exporter and can see it pulling metrics.
If you're using an elasticsearch exporter, it should come with some documentation. There is more than one solution out there, and you didn't specify which one you're using. In my opinion it would be best for you to start from a tutorial like this one, which explains the whole process step by step. As you can read there:
Metrics collection in Prometheus follows the pull model. That means Prometheus is responsible for getting metrics from the services that it monitors. This process is known as scraping. The Prometheus server scrapes the defined service endpoints, collects the metrics and stores them in a local database.
which means you need to configure Prometheus to scrape metrics exposed by the elasticsearch exporter you chose.
The official Prometheus documentation will also be a great source of knowledge and a good starting point.
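As a concrete starting point, a static scrape job for the commonly used elasticsearch_exporter looks roughly like this (a hedged sketch; the service name is an assumption, and 9114 is the exporter's usual default port):

# prometheus.yml (fragment)
scrape_configs:
  - job_name: 'elasticsearch-exporter'
    static_configs:
      - targets: ['elasticsearch-exporter:9114']   # assumed exporter service name and default port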
EDIT:
If you run your Elasticsearch instance on a Kubernetes cluster, you should use the service discovery mechanism rather than static configs. You can find more on <kubernetes_sd_config> here.
There are five different types of Kubernetes service discovery you can use with Prometheus: node, endpoints, service, pod, and ingress. The one you most probably need in your case is endpoints. Prometheus uses the Kubernetes API to discover targets. Below you have some examples:
https://blog.sebastian-daschner.com/entries/prometheus-kubernetes-discovery
https://raw.githubusercontent.com/prometheus/prometheus/master/documentation/examples/prometheus-kubernetes.yml
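In the same spirit, a hedged sketch of an endpoints-based scrape job; the prometheus.io/* annotation convention used here is common but is an assumption about how your exporter's Service is annotated:

# prometheus.yml (fragment)
scrape_configs:
  - job_name: 'kubernetes-service-endpoints'
    kubernetes_sd_configs:
      - role: endpoints
    relabel_configs:
      # keep only endpoints whose Service is annotated prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      # rewrite the scrape address from the prometheus.io/port annotation, if present
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__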
