How can I add/set multiple IPs in Prometheus with different targets - yaml

I am trying to simplify the Prometheus configuration so that when we add/remove servers, we can easily replace the IP address.
Here is my prometheus.yml file
scrape_configs:
  - job_name: 'MAP-map-health-test'
    scrape_interval: 5s
    metrics_path: /probe
    params:
      module: [prod-map-servers]
    file_sd_configs:
      - files:
          - 'map-servers.yml'
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: blackbox_exporter:9115
and here is my map-servers.yml
targets:
  - http://10.0.2.16
  - http://10.0.2.11
  - http://10.0.2.12
  - http://10.0.2.13
  - http://10.0.2.14
  - http://10.0.2.17
  - http://10.0.2.44
The above works if I only have to check the Apache service.
What I want to achieve is to add multiple checks against the same IPs:
- 'map-server.yml'/test1.php
...
...
- 'map-server.yml'/test2.php
Is there a way that I can achieve this?

If you want alternative metric paths for the same set of IP addresses, you can just create another job using a different metric path:
scrape_configs:
  - job_name: 'MAP-map-health-test_probe-a'
    scrape_interval: 5s
    metrics_path: /probe-a
    params:
      module: [prod-map-servers]
    file_sd_configs:
      - files:
          - 'map-servers.yml'
    relabel_configs:
      # ...
  - job_name: 'MAP-map-health-test_probe-b'
    scrape_interval: 5s
    metrics_path: /probe-b
    params:
      module: [prod-map-servers]
    file_sd_configs:
      - files:
          - 'map-servers.yml'
    relabel_configs:
      # ...
This will scrape http://10.0.2.16/probe-a, http://10.0.2.16/probe-b, http://10.0.2.11/probe-a, http://10.0.2.11/probe-b, etc.
If an IP address changes, it is reflected immediately in both jobs.
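Because both jobs read the same map-servers.yml via file_sd_configs, adding or removing a server is a single edit to that shared file, and both jobs pick it up without a restart. A minimal sketch (the last address is purely illustrative):
targets:
  - http://10.0.2.16
  - http://10.0.2.11
  # ...
  - http://10.0.2.45   # hypothetical new server, picked up by both jobs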

Related

prometheus fails to start with error on Windows

I am trying to run Prometheus for the WMI exporter with the configuration below, but it's failing with this error:
ts=2022-06-27T16:04:42.665Z caller=main.go:450 level=error msg="Error loading config (--config.file=prometheus.yml)" file=C:\prometheus\prometheus.yml err="parsing YAML file prometheus.yml: yaml: line 31: did not find expected key"
Any idea what could be the issue?
Below is my prometheus.yml:
# my global config
global:
scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
# scrape_timeout is set to the global default (10s).
# Alertmanager configuration
alerting:
alertmanagers:
- static_configs:
- targets:
# - alertmanager:9093
# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
# - "first_rules.yml"
# - "second_rules.yml"
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
# The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
- job_name: "prometheus"
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
static_configs:
- targets: ["localhost:9090"]
- job_name: "wmi_Exporter"
static_configs:
- targets: ["10.10.10.1:9182"]
Your indentation is wrong from line 31 onwards. YAML is very strict about spacing.
Fixed:
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["localhost:9090"]

  - job_name: "wmi_Exporter"
    static_configs:
      - targets: ["10.10.10.1:9182"]

Caching Tokens in Prometheus

I am using OAuth 2.0 with Prometheus and don't want to generate a new access token on every single scrape (which is too often); I want to generate it once every hour. How can I use a caching system in Prometheus to accomplish this?
global:
  scrape_interval: 15s
  evaluation_interval: 15s

alerting:
  alertmanagers:
    - static_configs:
        - targets: []

rule_files: []

scrape_configs:
  - job_name: "prometheus"
    metrics_path: "/actuator/prometheus"
    static_configs:
      - targets: ["localhost:8080"]
    oauth2:
      client_id: ""
      client_secret_file: ""
      scopes: []
      token_url: ""

Prometheus yaml file: did not find expected key

I am new to YAML file formatting and I cannot figure out why, when I run the application, I get the error:
error loading config from \"prometheus.yml\": couldn't load
configuration (--config.file=\"prometheus.yml\"): parsing YAML file
prometheus.yml: yaml: line 34: did not find expected key
That is the only notification I get and there is nothing specific about it. This is what my file looks like:
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['localhost:9090']

global:
  scrape_interval: 10s
  evaluation_interval: 10s

  - job_name: 'kafka'
    static_configs:
      - targets:
          - localhost:7071
Is my spacing causing the error? I tried duplicating the spacing from the default file. If I remove everything after the second global, it runs. How can I fix this?
You can't have two "global" sections. The "scrape_interval" and "evaluation_interval" parameters are already defined in the first "global" section, so you don't need to define them again at the end.
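A minimal sketch of how the end of the file could look once the second global block is removed (assuming the kafka job should simply use the intervals from the first global section):
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'kafka'
    static_configs:
      - targets:
          - localhost:7071
If the kafka job really needs a different interval, scrape_interval can be set per job instead of in a second global block.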

Is there a way to use the aws_ec2 ansible inventory plugin to define a prometheus target group dynamically?

I'm trying to configure a Prometheus server with Ansible. I want to monitor some AWS EC2 instances, but I'm stuck on the Prometheus targets. I should probably use a JSON file to define my targets, but I don't know how to update that file when I create a new instance with Ansible. I'm also using the aws_ec2 plugin for dynamic inventory in Ansible.
So my question is: how can I use the inventory plugin to keep the Prometheus target file updated without doing it manually?
You can use Prometheus EC2 service discovery with the same or similar filters that you use for the aws_ec2 inventory. I spent time on this issue as well, but am very happy with the outcome. Basic example:
scrape_configs:
  - ec2_sd_configs:
      - filters:
          - name: tag:Environment
            values:
              - PROD
    job_name: ec2
    relabel_configs:
      - replacement: ${1}.prod.yourdomain.com:9100
        source_labels:
          - __meta_ec2_tag_dnsname
        target_label: __address__
      - source_labels:
          - __meta_ec2_tag_Name
        target_label: instance
  - ec2_sd_configs:
      - filters:
          - name: tag:Environment
            values:
              - PROD
    job_name: application_actuators
    relabel_configs:
      - replacement: ${1}.prod.yourdomain.com:9001
        source_labels:
          - __meta_ec2_tag_dnsname
        target_label: __address__
      - source_labels:
          - __meta_ec2_tag_Name
        target_label: instance
      - action: drop
        regex: ^$
        source_labels:
          - __meta_ec2_tag_application
      - regex: (.+)
        replacement: /$1/api/actuator/prometheus
        source_labels:
          - __meta_ec2_tag_application
        target_label: __metrics_path__
ec2_sd_config documentation
This config is generated by Ansible.
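One detail worth adding (not from the original answer): if Prometheus does not run inside AWS, the EC2 service discovery also needs a region (and credentials), and by default it exposes each instance's private IP plus the configured port as __address__. A minimal sketch with placeholder values:
scrape_configs:
  - job_name: ec2
    ec2_sd_configs:
      - region: eu-west-1        # placeholder: use your region; if omitted, the instance metadata is consulted
        port: 9100               # appended to the discovered private IP to form __address__
        filters:
          - name: tag:Environment
            values:
              - PROD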

prevent to_nice_yaml from generating aliases

Is it possible to force the function to_nice_yaml to avoid generating aliases?
The following line in the Ansible template
scrape_configs:
{{ scrape_configs | to_nice_yaml(indent=2) | indent(2,False) }}
where
common_relabeling:
  - stuff1
  - stuff2

scrape_configs:
  - job_name: process_exporter
    relabel_configs: "{{ common_relabeling }}"
  - job_name: node_exporter
    relabel_configs: "{{ common_relabeling }}"
expands into a YAML file that uses aliases (see below), which I'm not sure is supported by Prometheus' configuration parser. Obviously I'd like to fix this without hardcoding common_relabeling in every entry:
scrape_configs:
  - job_name: process_exporter
    relabel_configs: &id001
      - stuff1
      - stuff2
  - job_name: node_exporter
    relabel_configs: *id001
You can just leave the anchor and alias as is.
Prometheus uses the package gopkg.in/yaml.v2, and if you read through the documentation of that package, you'll see that it is based on libyaml, which has been parsing anchors and aliases for over a decade now. And the documentation for gopkg.in/yaml.v2 explicitly states that anchors are supported:
The yaml package supports most of YAML 1.1 and 1.2, including support for anchors, tags ...
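That said, if you still want the rendered file to contain no anchors at all, a commonly used workaround (not specific to Prometheus) is to round-trip the data through JSON before dumping it; JSON has no alias concept, so the shared references are materialized as plain copies. A sketch based on the template from the question:
scrape_configs:
{{ scrape_configs | to_json | from_json | to_nice_yaml(indent=2) | indent(2, False) }}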
