How to use Grafana to create graphs with data from Spring Boot - spring-boot

I am new to Grafana and Spring Boot. I am trying to create a Spring Boot application and use the Grafana SimpleJSON Datasource plugin to pull data from my Spring Boot APIs and create graphs. I'm following the instructions at https://grafana.com/grafana/plugins/grafana-simple-json-datasource/. Right now I am just hard-coding data into my Spring Boot app.
My question is: are there better plugins or approaches people would suggest? SimpleJSON seems to require a very specific format of JSON response, and I don't see many detailed docs online. Is there any way that I can be freer with the JSON responses of my APIs and still set the parameters needed to plot graphs in Grafana?
Thank you.
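For reference, the response shape the SimpleJSON plugin expects from its /query endpoint is a list of series, each with a target name and [value, timestamp-in-ms] pairs. A sketch with an invented metric name and values:

```json
[
  {
    "target": "requests_per_second",
    "datapoints": [
      [12.5, 1609455600000],
      [13.1, 1609455660000]
    ]
  }
]
```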

You can use Micrometer with the Spring Boot Actuator to expose metric data to a time-series database such as Prometheus. Alternatively, you can simply write log files, collect them with Promtail, and store them in Loki.
At first this might seem like a lot of work to get these things running, but it might be worth it.
I found it surprisingly simple to get the whole monitoring stack running locally with docker-compose:
Add services grafana, prometheus, promtail and loki.
Configure each of them.
The docker-compose.yml might look like this:
version: "3"
services:
  prometheus:
    image: prom/prometheus:latest
    command:
      - --config.file=/etc/prometheus/prometheus.yml
    volumes:
      - ./config/prometheus/prometheus_local.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
  loki:
    depends_on:
      - promtail
    image: grafana/loki:latest
    volumes:
      - ./config/loki:/etc/loki
    ports:
      - "3100:3100"
    command: -config.file=/etc/loki/loki-local-config.yaml
  promtail:
    image: grafana/promtail:latest
    volumes:
      - .log:/var/log
      - ./config/promtail/promtail-docker-config.yaml:/etc/promtail/config.yml
    command: -config.file=/etc/promtail/config.yml
  grafana:
    depends_on:
      - prometheus
      - loki
    image: grafana/grafana:latest
    volumes:
      - ./config/grafana/grafana.ini:/etc/grafana/grafana.ini
      - ./config/grafana/provisioning:/etc/grafana/provisioning
      - ./config/grafana/dashboards:/etc/grafana/dashboards
    ports:
      - "3000:3000"
Sample config files for provisioning Grafana can be found in the Grafana git repository. Loki provides sample configurations for itself and Promtail. For Prometheus, see here.
The official documentation about installing Grafana with Loki can be found here. There is also documentation about the Prometheus configuration.
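A minimal prometheus_local.yml sketch for scraping a Spring Boot app; the target host and port are assumptions about your setup, so adjust them accordingly:

```yaml
# prometheus_local.yml (sketch): scrape the Spring Boot Actuator endpoint.
scrape_configs:
  - job_name: spring-boot-app
    metrics_path: /actuator/prometheus
    static_configs:
      # host.docker.internal reaches the host machine from inside the
      # Prometheus container; port 8080 is the assumed app port.
      - targets: ["host.docker.internal:8080"]
```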
Now you need to configure your application. Enable and expose the prometheus endpoint as described in the Spring Boot documentation. Configure a log file appender to write the logs to the log directory .log configured above.
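A sketch of the Spring Boot side, assuming the micrometer-registry-prometheus dependency is on the classpath and the default Actuator setup:

```properties
# application.properties: expose the prometheus endpoint over HTTP
management.endpoints.web.exposure.include=health,prometheus
# write logs into the .log directory that is mounted into the promtail container
logging.file.name=.log/app.log
```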
Your logs will now get collected by Promtail and sent to Loki. Metrics will get stored in Prometheus. You can use PromQL and LogQL to write Grafana queries and render the results in Grafana panels.
With this solution you can add tags to your data that can later be used by Grafana.

Related

Datasource Metrics for Postgres DB are not available at /q/metrics endpoint [QUARKUS]

Quarkus App - Rest service to fetch data from Postgres DB
Quarkus version : 2.9.0.Final
Postgres Extension - quarkus-reactive-pg-client
Micrometer Extension - quarkus-micrometer-registry-prometheus
After adding the above extensions, the Postgres DB metrics are not available at the /q/metrics endpoint.
How do I get the pg datasource metrics while using the reactive pg drivers?
Support for metrics in the reactive client is not ready yet; it is expected to become available in Quarkus 2.16.
Once it is available, you need to enable the DB metrics by setting this property:
quarkus.datasource.metrics.enabled=true
As explained here: https://quarkus.io/guides/datasource#datasource-metrics

Grafana not showing metrics shown by springboot prometheus endpoint

I have configured Micrometer and Prometheus for my Spring Boot application, and I can see the following metrics (generated using a Timer) at the endpoint /actuator/prometheus:
# HELP timer_test_seconds Time spent serving test requests
# TYPE timer_test_seconds summary
timer_test_seconds_count{class="com.test.MyController",exception="none",method="testTimer",} 2.0
timer_test_seconds_sum{class="com.test.MyController",exception="none",method="testTimer",} 8.6461705
# HELP timer_test_seconds_max Time spent serving test requests
# TYPE timer_test_seconds_max gauge
timer_test_seconds_max{class="com.test.MyController",exception="none",method="testTimer",} 4.5578234
But when I run one of these queries in Grafana (configured against the Prometheus instance), I don't see any results.
Is any configuration needed for this?
These are the metrics exposed by your service. Grafana does not interact with your service directly.
Is Prometheus up and scraping this target? You can check if Prometheus has any data at localhost:port/graph.
Is Grafana pointing to your Prometheus endpoint correctly?
Most likely, Prometheus is not scraping your target correctly. Keep in mind that Grafana is not aware of your service at all, so you should first check whether there is any data in Prometheus.
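Once Prometheus is scraping the target, a query over the exposed timer such as the following should return data in a Grafana panel (the 5m window is an arbitrary choice):

```promql
# average duration of test requests over the last 5 minutes
rate(timer_test_seconds_sum[5m]) / rate(timer_test_seconds_count[5m])
```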

How to monitor Apache Kafka metrics?

How can I build a microservice to monitor Kafka metrics?
I don't want to use the confluent control center or any other tool.
Before building anything like a microservice, I would explore the kafka_exporter for Prometheus, which exposes Kafka metrics in Prometheus format. You could then use the Prometheus server to scrape these metrics and Grafana for dashboarding/visualisations. There are other tools you could use for scraping instead of Prometheus/Grafana, e.g. Elastic Metricbeat (which I mention because you've tagged the question with 'elasticsearch'), but the Prometheus/Grafana combination is quite easy to get up and running. There are also out-of-the-box Grafana dashboards that you can install without having to set this up manually, e.g. this one.
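A sketch of the Prometheus scrape config for the exporter, assuming it runs locally on its default port 9308:

```yaml
# prometheus.yml fragment (sketch): scrape kafka_exporter
scrape_configs:
  - job_name: kafka-exporter
    static_configs:
      # 9308 is kafka_exporter's default port; the localhost target
      # is an assumption about where the exporter runs.
      - targets: ["localhost:9308"]
```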

Implement opentracing in Spring Boot for Datadog

I need to implement OpenTracing (OpenTelemetry) tracing for Datadog in my Spring Boot application with a REST controller.
I have a given kubernetes endpoint, to which I should send traces.
Not sure I fully grasp the issue. Here are some steps to collect your traces:
Enable trace collection on Kubernetes and open the relevant port (8126): doc
Configure your app to send traces to the right container. Here is an example to adapt to your situation: doc on Java instrumentation
env:
  - name: DD_AGENT_HOST
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
Just in case, more info on OpenTracing here.

How do I pull Elasticsearch metrics into Prometheus using the elasticsearch_exporter

I have installed Prometheus into a Kubernetes cluster using the helm stable chart. We run Elastic Search and I want to scrape metrics from this and then create Alerts based on events.
I have installed the elasticsearch exporter via helm, but nowhere can I find how to then import these metrics into Prometheus.
There is some config I am missing, such as creating a scraping job. Any help is much appreciated.
I connected to the elasticsearch exporter and can see it pulling metrics.
If you're using an elasticsearch exporter, it should come with some documentation. There is more than one solution out there, and you didn't specify which one you're using. In my opinion it would be best for you to start from a tutorial like this one, which explains the whole process step by step. As you can read there:
Metrics collection in Prometheus follows the pull model. That means Prometheus is responsible for getting metrics from the services that it monitors. This process is known as scraping. The Prometheus server scrapes the defined service endpoints, collects the metrics and stores them in its local database.
which means you need to configure Prometheus to scrape metrics exposed by the elasticsearch exporter you chose.
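As a sketch, a static scrape job for the exporter might look like this; the target name and the default elasticsearch_exporter port 9114 are assumptions about your deployment:

```yaml
# prometheus.yml fragment (sketch): scrape the elasticsearch exporter
scrape_configs:
  - job_name: elasticsearch-exporter
    static_configs:
      # 9114 is the exporter's default port; replace the host with the
      # service name your helm release created.
      - targets: ["elasticsearch-exporter:9114"]
```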
Official Prometheus documentation will be also a great source of knowledge and good starting point.
EDIT:
If you run your Elasticsearch instance on a Kubernetes cluster, you should use the Service Discovery mechanism rather than static configs. More on <kubernetes_sd_config> can be found here.
There are five different types of Kubernetes service discoveries you can use with Prometheus: node, endpoints, service, pod, and ingress. The one which you most probably need in your case is endpoints. Prometheus uses the Kubernetes API to discover targets. Below you have some examples:
https://blog.sebastian-daschner.com/entries/prometheus-kubernetes-discovery
https://raw.githubusercontent.com/prometheus/prometheus/master/documentation/examples/prometheus-kubernetes.yml
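A sketch of an endpoints-based discovery job, using the common (but conventional, not mandatory) prometheus.io/scrape service annotation to select targets:

```yaml
# prometheus.yml fragment (sketch): discover scrape targets via the
# Kubernetes API instead of listing them statically.
scrape_configs:
  - job_name: kubernetes-endpoints
    kubernetes_sd_configs:
      - role: endpoints
    relabel_configs:
      # keep only endpoints whose service is annotated prometheus.io/scrape=true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```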
