Caching Tokens in Prometheus

I am using OAuth 2.0 with Prometheus and don't want to generate a new access token on every single scrape (which is far too often); I want to generate one once an hour. How can I use a caching system in Prometheus to accomplish this? Here is my prometheus.yml:
global:
  scrape_interval: 15s
  evaluation_interval: 15s
alerting:
  alertmanagers:
    - static_configs:
        - targets: []
rule_files: []
scrape_configs:
  - job_name: "prometheus"
    metrics_path: "/actuator/prometheus"
    static_configs:
      - targets: ["localhost:8080"]
    oauth2:
      client_id: ""
      client_secret_file: ""
      scopes: []
      token_url: ""
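For illustration, a filled-in version of the oauth2 block might look like the sketch below; every value is a placeholder (the token URL, client id, scope, and secret file are hypothetical). Prometheus's built-in oauth2 client is generally expected to cache and reuse a fetched token until it expires, so the refresh cadence should follow the expires_in returned by the token endpoint rather than the 15s scrape interval.

scrape_configs:
  - job_name: "prometheus"
    metrics_path: "/actuator/prometheus"
    static_configs:
      - targets: ["localhost:8080"]
    oauth2:
      # Hypothetical placeholder values.
      client_id: "prometheus-scraper"
      client_secret_file: "/etc/prometheus/oauth2-secret"
      scopes: ["metrics.read"]
      # If this endpoint returns expires_in: 3600, the token should only be
      # re-fetched about once an hour, not on every scrape.
      token_url: "https://auth.example.com/oauth2/token"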

Related

prometheus fails to start with error on Windows

I am trying to run Prometheus for the WMI exporter with the configuration below, but it fails with this error:
ts=2022-06-27T16:04:42.665Z caller=main.go:450 level=error msg="Error loading config (--config.file=prometheus.yml)" file=C:\prometheus\prometheus.yml err="parsing YAML file prometheus.yml: yaml: line 31: did not find expected key"
Any idea what the issue could be? Below is my prometheus.yml:
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).
# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093
# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["localhost:9090"]
- job_name: "wmi_Exporter"
  static_configs:
    - targets: ["10.10.10.1:9182"]
Your indentation is wrong from line 31 onwards: the wmi_Exporter job is not indented to sit under scrape_configs. YAML is very strict about spacing.
Fixed:
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).
# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093
# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["localhost:9090"]
  - job_name: "wmi_Exporter"
    static_configs:
      - targets: ["10.10.10.1:9182"]

Prometheus yaml file: did not find expected key

I am new to yml file formatting and I cannot figure out why, when I run the application, I get the error:
error loading config from \"prometheus.yml\": couldn't load
configuration (--config.file=\"prometheus.yml\"): parsing YAML file
prometheus.yml: yaml: line 34: did not find expected key
That is the only notification I get, and there is nothing specific about it. This is what my file looks like:
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).
# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093
# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['localhost:9090']
global:
  scrape_interval: 10s
  evaluation_interval: 10s
  - job_name: 'kafka'
    static_configs:
      - targets:
        - localhost:7071
Is my spacing causing the error? I tried matching the spacing of the default file. If I remove everything after the second global, it runs. How can I fix this?
You can't have two "global" sections. The "scrape_interval" and "evaluation_interval" parameters are already defined in the first "global"; you don't need those definitions again at the end. Move the kafka job under scrape_configs with the other jobs, as in the sketch below.
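A corrected scrape section might look like this (a minimal sketch assembled from the question's own values; a per-job override replaces the second global block):

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'kafka'
    # scrape_interval may be overridden per job; evaluation_interval
    # can only be set once, in the global block.
    scrape_interval: 10s
    static_configs:
      - targets:
          - localhost:7071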

Is there a way to use the aws_ec2 Ansible inventory plugin to define a Prometheus target group dynamically?

I'm trying to configure a Prometheus server with Ansible. I want to monitor some AWS EC2 instances, but I'm stuck on the Prometheus targets. I should probably use a JSON file to define my targets, but I don't know how to update that file when I create a new instance with Ansible. I'm also using the aws_ec2 plugin for dynamic inventory in Ansible.
So my question is: how can I use the inventory plugin to keep the Prometheus target file updated without doing it manually?
You can use Prometheus EC2 service discovery with the same or similar filters that you use for the aws_ec2 inventory. I spent time on this issue as well and am very happy with the outcome. Basic example:
scrape_configs:
  - ec2_sd_configs:
      - filters:
          - name: tag:Environment
            values:
              - PROD
    job_name: ec2
    relabel_configs:
      - replacement: ${1}.prod.yourdomain.com:9100
        source_labels:
          - __meta_ec2_tag_dnsname
        target_label: __address__
      - source_labels:
          - __meta_ec2_tag_Name
        target_label: instance
  - ec2_sd_configs:
      - filters:
          - name: tag:Environment
            values:
              - PROD
    job_name: application_actuators
    relabel_configs:
      - replacement: ${1}.prod.yourdomain.com:9001
        source_labels:
          - __meta_ec2_tag_dnsname
        target_label: __address__
      - source_labels:
          - __meta_ec2_tag_Name
        target_label: instance
      - action: drop
        regex: ^$
        source_labels:
          - __meta_ec2_tag_application
      - regex: (.+)
        replacement: /$1/api/actuator/prometheus
        source_labels:
          - __meta_ec2_tag_application
        target_label: __metrics_path__
See the ec2_sd_config documentation for the available filters and meta labels.
This config is generated by Ansible; a sketch of how that generation might look follows.
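For example, a hypothetical playbook task that renders the scrape configuration from a Jinja2 template and reloads Prometheus could look like this (the template path, file locations, and handler name are assumptions, not part of the original answer):

# Render prometheus.yml from a template and reload Prometheus when it changes.
- name: Deploy Prometheus configuration
  ansible.builtin.template:
    src: prometheus.yml.j2          # hypothetical template containing the scrape_configs above
    dest: /etc/prometheus/prometheus.yml
    owner: prometheus
    group: prometheus
    mode: "0644"
  notify: reload prometheus         # assumed handler (SIGHUP or POST to /-/reload)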

How can I add/set multiple IPs in Prometheus with different targets

I am trying to simplify the Prometheus configuration so that when we add or remove servers, we can easily replace the IP addresses.
Here is my prometheus.yml file:
scrape_configs:
  - job_name: 'MAP-map-health-test'
    scrape_interval: 5s
    metrics_path: /probe
    params:
      module: [prod-map-servers]
    file_sd_configs:
      - files:
          - 'map-servers.yml'
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: blackbox_exporter:9115
and here is my map-servers.yml:
- targets:
    - http://10.0.2.16
    - http://10.0.2.11
    - http://10.0.2.12
    - http://10.0.2.13
    - http://10.0.2.14
    - http://10.0.2.17
    - http://10.0.2.44
The above works if I only have to check the Apache service.
What I want to achieve is to add multiple checks against the same IPs:
- 'map-server.yml'/test1.php
...
...
- 'map-server.yml'/test2.php
Is there a way that I can achieve this?
If you want alternative metric paths for the same set of IP addresses, you can just create another job using a different metric path:
scrape_configs:
  - job_name: 'MAP-map-health-test_probe-a'
    scrape_interval: 5s
    metrics_path: /probe-a
    params:
      module: [prod-map-servers]
    file_sd_configs:
      - files:
          - 'map-servers.yml'
    relabel_configs:
      # ...
  - job_name: 'MAP-map-health-test_probe-b'
    scrape_interval: 5s
    metrics_path: /probe-b
    params:
      module: [prod-map-servers]
    file_sd_configs:
      - files:
          - 'map-servers.yml'
    relabel_configs:
      # ...
This will scrape http://10.0.2.16/probe-a, http://10.0.2.16/probe-b, http://10.0.2.11/probe-a, http://10.0.2.11/probe-b, and so on.
If an IP address changes in map-servers.yml, the change is reflected in both jobs.
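One caveat, since the original config routes probes through the blackbox exporter (whose probe endpoint is /probe): each job still needs the relabeling from the question, and to check different pages such as test1.php on the same hosts you can append the page to the probe target instead of changing metrics_path. A sketch built from the question's own relabel rules (the job name and /test1.php path are illustrative):

- job_name: 'MAP-map-health-test_test1'
  scrape_interval: 5s
  metrics_path: /probe
  params:
    module: [prod-map-servers]
  file_sd_configs:
    - files:
        - 'map-servers.yml'
  relabel_configs:
    # Append the page to check to each target from map-servers.yml.
    - source_labels: [__address__]
      regex: (.+)
      target_label: __param_target
      replacement: ${1}/test1.php
    - source_labels: [__param_target]
      target_label: instance
    # Send the actual scrape to the blackbox exporter.
    - target_label: __address__
      replacement: blackbox_exporter:9115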

Why doesn't the metricbeat index name change each day?

I am using a Metricbeat (7.3) Docker container alongside several other Docker containers, sending the results to an Elasticsearch (7.3) instance. This works, and the first time everything spins up I get an index in Elasticsearch called metricbeat-7.3.1-2019.09.06-000001.
The initial problem is that I have a Grafana dashboard set up to look for an index with today's date, so it ignores an index created several days ago altogether. I could try to figure out what's wrong with those Grafana queries, but more generally I need those index names to roll over at some point: the index that's there is already over 1.3 GB, and at some point that will just be too big for the system.
My initial metricbeat.yml config:
- module: docker
  metricsets:
    - "container"
    - "cpu"
    - "diskio"
    - "memory"
    - "network"
  hosts: ["unix:///var/run/docker.sock"]
  period: 10s
  enabled: true

output.elasticsearch:
  hosts: ["${ELASTICSEARCH_URL}"]
Searching around a bit, it seems like the index field on the elasticsearch output should configure the index name, so I tried the following:
- module: docker
  metricsets:
    - "container"
    - "cpu"
    - "diskio"
    - "memory"
    - "network"
  hosts: ["unix:///var/run/docker.sock"]
  period: 10s
  enabled: true

output.elasticsearch:
  hosts: ["${ELASTICSEARCH_URL}"]
  index: "metricbeat-%{[beat.version]}-instance1-%{+yyyy.MM.dd}"
That throws an error about needing setup.template settings, so I settled on this:
- module: docker
  metricsets:
    - "container"
    - "cpu"
    - "diskio"
    - "memory"
    - "network"
  hosts: ["unix:///var/run/docker.sock"]
  period: 10s
  enabled: true

output.elasticsearch:
  hosts: ["${ELASTICSEARCH_URL}"]
  index: "metricbeat-%{[beat.version]}-instance1-%{+yyyy.MM.dd}"

setup.template:
  overwrite: true
  name: "metricbeat"
  pattern: "metricbeat-*"
I don't really know what the setup.template section does, so most of that is guesswork from Google searches.
I'm not really sure if the issue is on the Metricbeat side, on the Elasticsearch side, or somewhere in between. But bottom line: how do I get the index to roll over to a new one when the day changes?
These are the settings/steps that worked for me. metricbeat.yml file:
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["<es-ip>:9200"]
  index: metricbeat-%{[beat.version]}
  index_pattern: -%{+yyyy.MM.dd}
  ilm.enabled: true
then, over in Kibana (i.e. on :5601): go to "Stack Monitoring", select the "metricbeat-*" index, and apply this kind of setting to begin with; what follows later is self-explanatory.
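For comparison, on Metricbeat 7.x the ILM options are documented under the setup.ilm namespace rather than under the output; a minimal sketch (the rollover alias shown is the 7.x default, everything else is an assumption) would be:

output.elasticsearch:
  hosts: ["<es-ip>:9200"]

# With ILM enabled, Metricbeat writes through a rollover alias and
# Elasticsearch rolls the backing index according to the lifecycle
# policy, producing names like metricbeat-7.3.1-2019.09.06-000001.
setup.ilm.enabled: true
setup.ilm.rollover_alias: "metricbeat-%{[agent.version]}"
setup.ilm.pattern: "{now/d}-000001"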
