Prometheus is not scraping metrics from actuator/prometheus - spring-boot

My Actuator Prometheus metrics are reachable at: localhost:5550/linksky/actuator/prometheus
For example, I can see a metric named "http_server_requests_seconds_count".
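For context, such an endpoint typically comes from Spring Boot settings along these lines. A minimal sketch in application.yml, assuming the micrometer-registry-prometheus dependency is on the classpath; the property values are inferred from the URL above, not taken from the question:
# hypothetical application.yml for the app in the question
server:
  port: 5550                # matches localhost:5550
  servlet:
    context-path: /linksky  # matches the /linksky prefix
management:
  endpoints:
    web:
      exposure:
        include: prometheus # exposes /actuator/prometheus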
I have set up Prometheus with this docker-compose.yml:
services:
  prometheus:
    image: prom/prometheus
    ports:
      - 9090:9090
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
    networks:
      monitoring:
        aliases:
          - prometheus
networks:
  monitoring:
and here is my prometheus.yml:
scrape_configs:
  - job_name: 'linksky_monitoring'
    scrape_interval: 2s
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['host.docker.internal:5550']
When I start Prometheus, I can query the metric "scrape_duration_seconds", and I can see that the scrape job is running correctly.
But when I query "http_server_requests_seconds_count", I get no result.
Am I expecting something wrong? Why do I only have this metric in Prometheus, although the "linksky_monitoring" job seems to be running?
UPDATE and SOLUTION
I needed to use a TLS connection, because every request to my Spring Boot app has to go over TLS.
To fix this, I extracted the key and certificate from my .p12 certificate and made the following config:
scrape_configs:
  - job_name: 'monitoring'
    scrape_interval: 2s
    metrics_path: '/jReditt/actuator/prometheus'
    static_configs:
      - targets: ['host.docker.internal:5550']
    scheme: https
    tls_config:
      cert_file: '/etc/prometheus/myApp.cert'
      key_file: '/etc/prometheus/myApp.key'
      insecure_skip_verify: true
Now it is working fine.
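For reference, the key and certificate can typically be extracted from a .p12 keystore with openssl, e.g. openssl pkcs12 -in myApp.p12 -nocerts -nodes -out myApp.key for the private key and openssl pkcs12 -in myApp.p12 -clcerts -nokeys -out myApp.cert for the certificate (the keystore file name here is assumed to match the tls_config above).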

Your metrics_path in the prometheus.yml is wrong because it is missing part of the endpoint: the application's context path. It should be like below (/linksky/actuator/prometheus):
scrape_configs:
  - job_name: 'linksky_monitoring'
    scrape_interval: 2s
    metrics_path: '/linksky/actuator/prometheus'
    static_configs:
      - targets: ['host.docker.internal:5550']
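Before pointing Prometheus at the endpoint, you can verify the full path from the host, e.g. curl http://localhost:5550/linksky/actuator/prometheus; if that returns the metrics in plain text, the metrics_path above is correct.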

Related

Collecting container metrics and host-system metrics with node-exporter: how can I do this?

My host machine is a Windows system and I'm running Docker Desktop. I have Prometheus, node-exporter, cAdvisor and Grafana running in containers.
Currently I only get the metrics of the containers, not of the Windows host system.
How is it possible to collect data from the host system?
There is a similar question on Stack Overflow, but it did not work for me; it is probably for a Linux host system:
https://stackoverflow.com/questions/66060894/how-to-resolve-prometheus-node-exporter-node-filesystem-device-error-within-do
Here is my compose-file:
version: '3'
services:
  prometheus:
    container_name: Monitoring-Prometheus
    image: prometheus
    networks:
      - monitor-net
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - ./prometheus_db:/var/lib/prometheus
      - ./prometheus_db:/prometheus
      - ./prometheus_db:/etc/prometheus
      - ./alert.rules:/etc/prometheus/alert.rules
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--web.route-prefix=/'
      - '--storage.tsdb.retention.time=200h'
      - '--web.enable-lifecycle'
    ports:
      - '1840:9090'
    restart: unless-stopped
  node-exporter:
    container_name: Monitoring-Node-Exporter
    image: node-exporter
    ports:
      - '1841:9100'
  cadvisor:
    container_name: Monitoring-Cadvisor
    image: cadvisor
    networks:
      - monitor-net
    ports:
      - '1842:8080/tcp'
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
  grafana:
    container_name: Monitoring-Grafana
    image: grafana:latest
    networks:
      - monitor-net
    ports:
      - "1843:3000"
    volumes:
      - ./grafana_db:/var/lib/grafana
    depends_on:
      - Monitoring-Prometheus
    restart: always
  reports:
    image: skedler
    container_name: Monitoring-Reports
    privileged: true
    cap_add:
      - SYS_ADMIN
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
      - reportdata:/var/lib/skedler
      - ./reporting.yml:/opt/skedler/config/reporting.yml
    ports:
      - '1844:3001'
networks:
  monitor-net:
    name: monitoring-network
    driver: bridge
volumes:
  reportdata:
    name: reports-data
    driver: local
Here is my prometheus.yml file:
global:
  scrape_interval: 5s
  external_labels:
    monitor: 'Monitoring'
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['host.docker.internal:1840']
  - job_name: 'node-exporter'
    static_configs:
      - targets: ['host.docker.internal:1841']
  - job_name: 'cadvisor'
    static_configs:
      - targets: ['host.docker.internal:1842']
You'll need to run your exporters directly as Windows processes to get metrics from your host. Otherwise, the containers are running inside Docker Desktop's Linux VM, and that is what you'd be getting metrics from with host.docker.internal references.
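On Windows, the usual host-level counterpart to node-exporter is windows_exporter from the prometheus-community project, installed directly on the host; Prometheus inside Docker can then reach it through host.docker.internal. A minimal sketch, assuming windows_exporter's default port 9182:
# assumed scrape job for windows_exporter running on the Windows host
scrape_configs:
  - job_name: 'windows-host'
    static_configs:
      - targets: ['host.docker.internal:9182'] # 9182 is windows_exporter's default port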

Prometheus not scraping metrics from my Spring Boot application

I am running a Docker bundle with these images on my server:
- Spring Boot app: port 18081
- Grafana: port 3001
- PostgreSQL: port 5432
- Prometheus: port 9090
and I would like to set up Prometheus to scrape from Springboot with this prometheus.yml configuration:
# my global config
global:
  scrape_interval: 15s
  evaluation_interval: 15s
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']
  - job_name: spring-actuator
    scrape_interval: 5s
    scrape_timeout: 5s
    metrics_path: /actuator/prometheus
    scheme: http
    static_configs:
      - targets: ['172.30.0.9:18081']
where 172.30.0.9 is the Docker-internal IP of my Spring Boot application, obtained with this command:
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-id>
I checked the Prometheus dashboard on ip:9090 and I was able to observe that the prometheus job is successfully scraped, but not the endpoint of the Spring application.
However, if I perform a curl on the VM (curl http://172.30.0.9:18081/actuator/prometheus), it successfully returns all the Prometheus metrics.
I have tried setting the target to:
localhost:18081
external_ip:18081
container-name:18081
host.docker.internal:18081
but Prometheus is still not accessing the endpoint as expected.
Did I miss anything in the configuration?
There are some redundant settings you can remove. Try the following for scrape_configs (the Prometheus self-scrape is not necessary, and some of your settings are already covered by the global section):
scrape_configs:
  - job_name: 'spring-actuator'
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['172.30.0.9:18081']
If it still does not work, check the YAML layout: an indentation issue can silently break the scrape config.
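If the containers share a Docker network, a service-name target is also usually more robust than a hard-coded internal IP, which can change when the container restarts. A sketch, where springboot-app is a hypothetical compose service name:
scrape_configs:
  - job_name: 'spring-actuator'
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['springboot-app:18081'] # 'springboot-app' is an assumed service name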

How to distinguish metrics from different services

I'm playing with OpenTelemetry and have the following setup:
Golang, docker-compose, 3 services, 1 standalone open-telemetry collector, 1 Prometheus.
I collect some system metrics to a standalone open-telemetry collector. These metrics are collected from 3 different services and metrics have identical names. Then, Prometheus gets the data from the open-telemetry collector. The problem is that I can't distinguish metrics from different services in Prometheus because all of the metrics have the same "instance" value, which is equal to the open-telemetry-collector's host.
I know that I can add a label with the service's name to each metric record and then distinguish the metrics by that label, but I'm looking for another solution because it is not always possible to add the label to every metric. Maybe something like HTTP middleware, but for metrics, or maybe something at the infrastructure level.
The services are written in Golang, but I would be glad to see a solution in any other language.
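One infrastructure-level option, assuming your SDKs honor the standard OTEL_RESOURCE_ATTRIBUTES environment variable from the OpenTelemetry specification, is to set the service name per container in docker-compose instead of in code. A sketch for one service:
service1:
  environment:
    - TELEMETRY_COLLECTOR_ADDR=otel-collector:55681
    # assumes the SDK reads the spec-defined resource env var
    - OTEL_RESOURCE_ATTRIBUTES=service.name=service1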
otel-collector-config:
receivers:
  otlp:
    protocols:
      grpc:
      http:
exporters:
  prometheus:
    endpoint: otel-collector:8889
    const_labels:
      label1: value1
    send_timestamps: true
    metric_expiration: 5m
processors:
  batch:
service:
  pipelines:
    metrics:
      receivers: [ otlp ]
      processors: [ batch ]
      exporters: [ prometheus ]
Prometheus config:
scrape_configs:
  - job_name: 'otel-collector'
    scrape_interval: 5s
    static_configs:
      - targets: ['otel-collector:8889']
docker-compose:
version: "3.9"
services:
service1:
build:
context: ./service1
network: host
environment:
- TELEMETRY_COLLECTOR_ADDR=otel-collector:55681
ports:
- "8094:8080"
expose:
- "8080"
service2:
build:
context: ./service2
network: host
environment:
- TELEMETRY_COLLECTOR_ADDR=otel-collector:55681
ports:
- "8095:8080"
expose:
- "8080"
service3:
build:
context: ./service3
network: host
environment:
- TELEMETRY_COLLECTOR_ADDR=otel-collector:55681
expose:
- "8080"
ports:
- "8096:8080"
prometheus:
image: prom/prometheus:v2.26.0
volumes:
- ./prometheus.yaml:/etc/prometheus/prometheus.yml
ports:
- "9090:9090"
otel-collector:
image: otel/opentelemetry-collector:0.23.0
command: [ "--config=/etc/otel-collector-config.yaml" ]
expose:
- "55681" # HTTP otel receiver
- "8889" # Prometheus exporter metrics
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
Update 1:
I found that some new parameters were added to the exporter config: https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/exporterhelper . One of them, resource_to_telemetry_conversion, is exactly what I need. But as far as I can see, prometheusexporter and prometheusremotewriteexporter don't support that field in their configs.
The resource_to_telemetry_conversion setting you mentioned has been part of prometheusexporter since version 0.26.0 (issue #2498); it adds a service_name label based on the SDK's resource settings, which lets you distinguish metrics from different services.
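Assuming that version of the collector, the exporter section would then look roughly like this (a sketch based on the option named above):
exporters:
  prometheus:
    endpoint: otel-collector:8889
    # converts resource attributes (e.g. service.name) into metric labels
    resource_to_telemetry_conversion:
      enabled: true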

Endpoint IP not changed in Prometheus target specified in prometheus.yml

I want to use Prometheus with my Spring Boot project. I'm new to Prometheus, so I don't know why I get the error described in the picture.
My prometheus.yml looks like below:
global:
  scrape_interval: 10s
scrape_configs:
  - job_name: 'spring_micrometer'
    metrics_path: '/actuator/prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['192.168.43.71:8080/app']
I run Prometheus with this command: docker run -d -p 9090:9090 -v <path-to-prometheus.yml>:/etc/prometheus/prometheus.yml prom/prometheus
I notice my IP does not show up on the Prometheus targets page.
Normally the endpoint should look like 192.168.43.71:8080/app/actuator/prometheus, but instead I get http://localhost:9090/metrics, and when I click it I get the error described in picture 1.
What did I do wrong? Can anyone help me resolve this issue? Thanks.
You cannot do this - targets: ['192.168.43.71:8080/app']. Try the following:
global:
  scrape_interval: 10s
scrape_configs:
  - job_name: 'spring_micrometer'
    metrics_path: '/app/actuator/prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['192.168.43.71:8080']
Why does your config not work? Take a look at the config docs here: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#host
targets is a collection of <host> values, and a host must be a "valid string consisting of a hostname or IP followed by an optional port number".
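In other words, Prometheus builds each scrape URL as <scheme>://<target><metrics_path>, so the config above makes it request http://192.168.43.71:8080/app/actuator/prometheus; any path component belongs in metrics_path, not in the target.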

Only summary data in Grafana for blackbox_exporter, not hosts separately

I added blackbox_exporter to my docker-compose.yml:
blackbox_exporter:
  container_name: blackbox_exporter
  image: prom/blackbox-exporter
  restart: always
  ports:
    - "9115:3115"
  networks:
    - monitor-net
  labels:
    org.label-schema.group: "monitoring"
I added a job to prometheus.yml:
- job_name: 'blackbox'
  metrics_path: /probe
  params:
    module: [http_2xx] # Look for a HTTP 200 response.
  static_configs:
    - targets: ['google.com', 'amazon.com'] # Targets to probe.
  relabel_configs:
    - source_labels: [__address__]
      target_label: __param_target
    - source_labels: [__param_target]
      target_label: instance
    - target_label: __address__
      replacement: blackbox_exporter:9115 # The blackbox exporter's real hostname:port.
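For clarity on what this relabeling does: each listed site ends up as the target query parameter, so Prometheus effectively probes URLs like http://blackbox_exporter:9115/probe?module=http_2xx&target=google.com, while the instance label keeps the site name.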
I added this dashboard in Grafana: https://grafana.com/dashboards/5345, because the screenshot on that page was exactly what I needed.
Alas, I only get summary graphs, without a legend and without per-site sections.
Where did I go wrong? What can I do about it?
In the config you posted, you relabel the blackbox exporter label from __param_target to instance, but the dashboard uses target for all the filters and also for the templating variable.
Either change your config to
  - source_labels: [__param_target]
    target_label: target
or adjust the queries and settings in the dashboard to use instance.
