Prometheus yaml file: did not find expected key - yaml

I am new to YAML formatting and I cannot figure out why, when I run the application, I get this error:
error loading config from "prometheus.yml": couldn't load configuration (--config.file="prometheus.yml"): parsing YAML file prometheus.yml: yaml: line 34: did not find expected key
That is the only output I get, and it isn't any more specific. This is what my file looks like:
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
    - targets: ['localhost:9090']

global:
  scrape_interval: 10s
  evaluation_interval: 10s
  - job_name: 'kafka'
    static_configs:
    - targets:
      - localhost:7071
Is my spacing causing the error? I tried to duplicate the spacing of the default file. If I remove everything after the second global, it runs. How can I fix this?

You can't have two global sections. The scrape_interval and evaluation_interval parameters are already defined in the first global block; you don't need to define them again further down.
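For illustration, a minimal sketch of what the end of the file could look like once the duplicate block is gone, assuming the intent behind it was a 10-second scrape interval for the Kafka job (scrape_interval can be overridden per scrape job, while evaluation_interval exists only under the top-level global):
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
    - targets: ['localhost:9090']

  - job_name: 'kafka'
    # hypothetical per-job override; only needed if 10s was actually intended for Kafka
    scrape_interval: 10s
    static_configs:
    - targets:
      - localhost:7071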

Related

prometheus fails to start with error on Windows

I am trying to run Prometheus with the WMI exporter using the configuration below, but it is failing with this error:
ts=2022-06-27T16:04:42.665Z caller=main.go:450 level=error msg="Error loading config (--config.file=prometheus.yml)" file=C:\prometheus\prometheus.yml err="parsing YAML file prometheus.yml: yaml: line 31: did not find expected key"
Any idea what the issue could be? Below is my prometheus.yml:
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ["localhost:9090"]

    - job_name: "wmi_Exporter"
      static_configs:
        - targets: ["10.10.10.1:9182"]
Your indentation is wrong from line 31 onwards. YAML is very strict about spacing.
Fixed:
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["localhost:9090"]

  - job_name: "wmi_Exporter"
    static_configs:
      - targets: ["10.10.10.1:9182"]

Caching Tokens in Prometheus

I am using OAuth 2.0 with Prometheus and don't want to generate a new access token on every single scrape (which is far too often); I want to generate one every hour. How can I use some kind of caching in Prometheus to accomplish this? Here is my current configuration:
global:
  scrape_interval: 15s
  evaluation_interval: 15s

alerting:
  alertmanagers:
    - static_configs:
        - targets: []

rule_files: []

scrape_configs:
  - job_name: "prometheus"
    metrics_path: "/actuator/prometheus"
    static_configs:
      - targets: ["localhost:8080"]
    oauth2:
      client_id: ""
      client_secret_file: ""
      scopes: []
      token_url: ""

How can I add/set multiple IPs in Prometheus with different targets

I am trying to simplify the Prometheus configuration so that when we add or remove servers, we can easily update the IP addresses.
Here is my prometheus.yml file:
scrape_configs:
  - job_name: 'MAP-map-health-test'
    scrape_interval: 5s
    metrics_path: /probe
    params:
      module: [prod-map-servers]
    file_sd_configs:
      - files:
          - 'map-servers.yml'
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: blackbox_exporter:9115
and here is my map-servers.yml
targets:
  - http://10.0.2.16
  - http://10.0.2.11
  - http://10.0.2.12
  - http://10.0.2.13
  - http://10.0.2.14
  - http://10.0.2.17
  - http://10.0.2.44
The above works if I only have to check the Apache service.
What I want to achieve is to add multiple checks against the same IPs, something like:
- 'map-server.yml'/test1.php
...
...
- 'map-server.yml'/test2.php
Is there a way I can achieve this?
If you want alternative metric paths for the same set of IP addresses, you can just create another job using a different metric path:
scrape_configs:
  - job_name: 'MAP-map-health-test_probe-a'
    scrape_interval: 5s
    metrics_path: /probe-a
    params:
      module: [prod-map-servers]
    file_sd_configs:
      - files:
          - 'map-servers.yml'
    relabel_configs:
      # ...

  - job_name: 'MAP-map-health-test_probe-b'
    scrape_interval: 5s
    metrics_path: /probe-b
    params:
      module: [prod-map-servers]
    file_sd_configs:
      - files:
          - 'map-servers.yml'
    relabel_configs:
      # ...
This will scrape http://10.0.2.16/probe-a, http://10.0.2.16/probe-b, http://10.0.2.11/probe-a, http://10.0.2.11/probe-b, etc.
If an IP address changes, it is reflected immediately in both jobs.
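A sketch of an alternative, assuming the /probe endpoint is the blackbox exporter's usual probe handler: since the relabeling copies __address__ into the target parameter unchanged, the entries in the service-discovery file can be full URLs including a path, so a single job can probe several pages on the same hosts. A hypothetical variant of map-servers.yml, written in the list-of-target-groups form that file_sd_configs expects:
- targets:
    - http://10.0.2.16/test1.php
    - http://10.0.2.16/test2.php
    - http://10.0.2.11/test1.php
    - http://10.0.2.11/test2.php
The trade-off is that adding or removing a server then means editing one line per check instead of one line per server.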

Why doesn't the metricbeat index name change each day?

I am using a metricbeat (7.3) docker container alongside several other docker containers, and sending the results to an elasticsearch (7.3) instance. This works, and the first time everything spins up I get an index in elasticsearch called metricbeat-7.3.1-2019.09.06-000001
The initial problem is that I have a Grafana dashboard set up to look for an index with today's date, so it seems to ignore one created several days ago altogether. I could try to figure out what's wrong with those Grafana queries, but more generally I need those index names to roll over at some point - the index that's there is already over 1.3GB, and at some point that will simply be too big for the system.
My initial metricbeat.yml config:
- module: docker
  metricsets:
    - "container"
    - "cpu"
    - "diskio"
    - "memory"
    - "network"
  hosts: ["unix:///var/run/docker.sock"]
  period: 10s
  enabled: true

output.elasticsearch:
  hosts: ["${ELASTICSEARCH_URL}"]
Searching around a bit, it seems like the index field on the elasticsearch output should configure the index name, so I tried the following:
- module: docker
  metricsets:
    - "container"
    - "cpu"
    - "diskio"
    - "memory"
    - "network"
  hosts: ["unix:///var/run/docker.sock"]
  period: 10s
  enabled: true

output.elasticsearch:
  hosts: ["${ELASTICSEARCH_URL}"]
  index: "metricbeat-%{[beat.version]}-instance1-%{+yyyy.MM.dd}"
That throws an error about needing setup.template settings, so I settled on this:
- module: docker
  metricsets:
    - "container"
    - "cpu"
    - "diskio"
    - "memory"
    - "network"
  hosts: ["unix:///var/run/docker.sock"]
  period: 10s
  enabled: true

output.elasticsearch:
  hosts: ["${ELASTICSEARCH_URL}"]
  index: "metricbeat-%{[beat.version]}-instance1-%{+yyyy.MM.dd}"

setup.template:
  overwrite: true
  name: "metricbeat"
  pattern: "metricbeat-*"
I don't really know what the setup.template section does, so most of that is guesswork from Google searches.
I'm not really sure whether the issue is on the metricbeat side, on the elasticsearch side, or somewhere in between. But bottom line: how do I get the index to roll over to a new one when the day changes?
These are the settings/steps that worked for me.
In the metricbeat.yml file:
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["<es-ip>:9200"]
  index: metricbeat-%{[beat.version]}
  index_pattern: -%{+yyyy.MM.dd}
  ilm.enabled: true
Then, over in Kibana (i.e. on :5601): go to "Stack Monitoring" and select "metricbeat-*".
Start with a setting along those lines to begin with; what follows from there is self-explanatory too.
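If the goal is specifically the date-suffixed daily indices from the question rather than ILM-managed rollover, a hedged sketch of the other direction (assuming Metricbeat 7.x behaviour, where a custom output.elasticsearch.index is ignored while ILM is enabled) would be to disable ILM and name the template explicitly:
setup.ilm.enabled: false
setup.template:
  name: "metricbeat"
  pattern: "metricbeat-*"

output.elasticsearch:
  hosts: ["${ELASTICSEARCH_URL}"]
  index: "metricbeat-%{[beat.version]}-instance1-%{+yyyy.MM.dd}"
With that in place, a new index such as metricbeat-7.3.1-instance1-2019.09.07 should be created when the day changes.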

prevent to_nice_yaml from generating aliases

Is it possible to force the to_nice_yaml filter to avoid generating aliases?
The following line in the Ansible template
scrape_configs:
  {{ scrape_configs | to_nice_yaml(indent=2) | indent(2,False) }}
where
common_relabeling:
  - stuff1
  - stuff2

scrape_configs:
  - job_name: process_exporter
    relabel_configs: "{{ common_relabeling }}"
  - job_name: node_exporter
    relabel_configs: "{{ common_relabeling }}"
expands into a YAML file that uses anchors and aliases (see below), which I'm not sure is supported by Prometheus's configuration parser. Obviously I'd like to fix it without hardcoding common_relabeling in every entry:
scrape_configs:
  - job_name: process_exporter
    relabel_configs: &id001
    - stuff1
    - stuff2
  - job_name: node_exporter
    relabel_configs: *id001
You can just leave the anchors and aliases as they are.
Prometheus uses the package gopkg.in/yaml.v2, and if you read through that package's documentation, you'll see that it is based on libyaml, which has been parsing anchors and aliases for over a decade now. The documentation for gopkg.in/yaml.v2 explicitly states that anchors are supported:
The yaml package supports most of YAML 1.1 and 1.2, including support for anchors, tags ...
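If you would still rather not emit aliases at all (for example, to keep the rendered file easy to read and diff), one common workaround, sketched here against the same template rather than taken from the answer above, is to round-trip the data through JSON before dumping it. JSON has no aliases, so the shared reference is broken and to_nice_yaml writes the list out in full for every job:
scrape_configs:
  {{ scrape_configs | to_json | from_json | to_nice_yaml(indent=2) | indent(2,False) }}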
