I've been trying to set up Prometheus on my project, but I'm having an issue when setting the target in prometheus.yml.
I've already done everything else: I added the dependencies and I'm exposing the metrics through Spring Actuator.
My local URL is localhost.tac.com:8080, and to see the metrics I go to localhost.tac.com:8080/actuator/prometheus.
And this is my prometheus.yml... Could anyone tell me why it is not working for me? I've also tried setting 'targets' to localhost:8080 and to localhost.tac.com:8080/actuator/prometheus, with the same result.
I know the question is not really technical, but I've been trying this for a long time. Thanks!
EDIT: I'm running Prometheus inside a remote desktop I connect to... I connect to this desktop, open Eclipse, start the server, log in to an authentication page, and then I'm able to use localhost.tac.com... I've downloaded Prometheus inside this remote desktop and I'm trying to run it from there. Maybe there is an issue with Prometheus resolving this hostname?
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).
# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093
# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - 'first_rules.yml'
  # - 'second_rules.yml'
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'api'
    metrics_path: /actuator/prometheus
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost.tac.com:8080']
I am running a Docker bundle with these images on my server
- SpringBoot app : PORT 18081
- Grafana : PORT 3001
- PostgreSQL : PORT 5432
- Prometheus : PORT 9090
and I would like to set up Prometheus to scrape from Spring Boot with this prometheus.yml configuration:
# My global config
global:
  scrape_interval: 15s
  evaluation_interval: 15s
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']
  - job_name: spring-actuator
    scrape_interval: 5s
    scrape_timeout: 5s
    metrics_path: /actuator/prometheus
    scheme: http
    static_configs:
      - targets: ['172.30.0.9:18081']
where 172.30.0.9 is the docker internal IP for my SpringBoot application obtained with this command:
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-id>
I checked the Prometheus dashboard on ip:9090 and I was able to observe that the prometheus job is successfully scraped, but not the endpoint from the Spring application.
However, if I perform a curl on the VM (curl http://172.30.0.9:18081/actuator/prometheus), it successfully returns all the Prometheus metrics.
I have tried to set the target to:
localhost:18081
external_ip:18081
container-name:18081
host.docker.internal:18081
but Prometheus is still not accessing the endpoint as expected.
Did I miss anything to configure?
I see some things you can remove since they are redundant. You can try the following for scrape_configs (the Prometheus self-scrape job is not necessary, and neither are some of the per-job settings, since you already defined them in global):
scrape_configs:
  - job_name: 'spring-actuator'
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['172.30.0.9:18081']
There might also be a layout (YAML indentation) issue.
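Also worth checking: Docker's embedded DNS only resolves container names on user-defined networks, not on the default bridge, which may be why the container-name:18081 attempt failed. If the containers share a user-defined network (for example in a compose bundle), a sketch like the following scrapes by service name instead of the internal IP, which can change on restart (springboot-app is a placeholder for your actual container/service name):
scrape_configs:
  - job_name: 'spring-actuator'
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['springboot-app:18081']   # placeholder: your Spring Boot container/service name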
I have started working with Prometheus for my microservices, and I was able to get it working initially. Now it's time to put the actuator endpoints behind Spring Security. After adding security, the actuator expects a bearer token from Prometheus. So, how do I configure the username and password in the Prometheus job so that Prometheus gets the bearer token from the login and adds it as the 'Authorization' header on all requests?
I'm running Prometheus in a Docker container using the commands below:
1. $ docker run --name prometheus -p 9090:9090 -v prometheus.yml:/etc/prometheus/prometheus.yml -d prom/prometheus
2. $ docker run --name grafana -d -p 3000:3000 grafana/grafana
Following is the prometheus.yml file
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).
# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any time series scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['127.0.0.1:9090']
  - job_name: 'NL-APPLICATION'
    metrics_path: '/actuator/prometheus'
    scrape_interval: 5s
    scheme: http
    static_configs:
      - targets: ['172.17.0.1:8085']
  - job_name: 'NL-ADMIN-API'
    metrics_path: '/actuator/prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['172.17.0.1:8083']
How do I instruct Prometheus to do the following:
Call '/login' to get the bearer token using a username and password
Add the bearer token as the 'Authorization' header on every actuator API call
You can either point to the token as a file or add the token directly to the config:
- job_name: 'test'
  metrics_path: "/metrics"
  scheme: "http"
  bearer_token_file: /var/run/secrets/   # or inline: bearer_token: token_here
  static_configs:
    - targets: ['host.com']
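Note that Prometheus cannot run a custom login flow against '/login' by itself; it can only attach static credentials to each scrape. If the actuator endpoints are protected with HTTP Basic auth instead (which Spring Security supports), a sketch like this avoids token handling entirely (the username and password values are placeholders):
- job_name: 'NL-APPLICATION'
  metrics_path: '/actuator/prometheus'
  scrape_interval: 5s
  basic_auth:
    username: prometheus   # placeholder: the user you configure in Spring Security
    password: changeme     # placeholder: consider basic_auth.password_file instead of plain text
  static_configs:
    - targets: ['172.17.0.1:8085']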
I want to use Prometheus with my Spring Boot project. I'm new to Prometheus, so I do not know why I get the error described in the picture.
My prometheus.yml is like below:
global:
  scrape_interval: 10s
scrape_configs:
  - job_name: 'spring_micrometer'
    metrics_path: '/actuator/prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['192.168.43.71:8080/app']
I run Prometheus with this command: docker run -d -p 9090:9090 -v <path-to-prometheus.yml>:/etc/prometheus/prometheus.yml prom/prometheus
I notice my IP does not show on the Prometheus targets page:
Normally the endpoint should be like 192.168.43.71:8080/app/actuator/prometheus, but I get http://localhost:9090/metrics, and when I click on it I get the error described in picture 1.
What am I doing wrong?! Can anyone help me resolve this issue? Thanks.
You cannot do this - targets: ['192.168.43.71:8080/app']. Try the following:
global:
  scrape_interval: 10s
scrape_configs:
  - job_name: 'spring_micrometer'
    metrics_path: '/app/actuator/prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['192.168.43.71:8080']
Why does your config not work? Take a look at the config docs here: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#host
targets is a collection of <host> values, and a host must be a "valid string consisting of a hostname or IP followed by an optional port number".
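To make the split concrete, here is a minimal sketch (reusing the IP and context path from the question) of how Prometheus combines the fields into the scrape URL:
scrape_configs:
  - job_name: 'spring_micrometer'
    scheme: http                              # the default
    metrics_path: '/app/actuator/prometheus'  # the path lives here
    static_configs:
      - targets: ['192.168.43.71:8080']       # host:port only, never a path
# Resulting scrape URL: http://192.168.43.71:8080/app/actuator/prometheus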
Getting error
level=error ts=2020-08-23T17:24:34.036Z caller=file.go:323 component="discovery manager scrape" discovery=file msg="Error reading file" path=/etc/prometheus/prometheus.yml err="yaml: unmarshal errors:\n line 1: cannot unmarshal !!map into []*targetgroup.Group"
when trying to load a yml file_sd config.
The prometheus.yml is:
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
alerting:
  alertmanagers:
    - static_configs:
        - targets:
rule_files:
scrape_configs:
  - job_name: file
    file_sd_configs:
      - files:
          - '*.yml'
The file_sd_config is
---
- targets:
    - x.x.x.x:9100
    - x.x.x.x:9100
    - x.x.x.x:9100
    - x.x.x.x:9100
  labels:
    job: node
- targets:
    - x.x.x.x:9090
  labels:
    job: prometheus
(Real IPs obfuscated.) The yml was converted from a working JSON file_sd_config.
The problem is in prometheus.yml. If a JSON file is specified, the wildcard works, but for the yml file I had to give the entire filename to make it work. (Note that the error above shows file discovery reading /etc/prometheus/prometheus.yml itself, which the '*.yml' glob also matches.)
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
alerting:
  alertmanagers:
    - static_configs:
        - targets:
rule_files:
scrape_configs:
  - job_name: file
    file_sd_configs:
      - files:
          - 'clients.yml'
Looks like a prometheus bug
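One workaround (an untested sketch, assuming the target files are moved into their own subdirectory such as targets/, which is a hypothetical layout) is to keep a wildcard but point it away from the directory that holds prometheus.yml:
scrape_configs:
  - job_name: file
    file_sd_configs:
      - files:
          - 'targets/*.yml'   # hypothetical subdirectory holding only file_sd target files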
I want to deploy Prometheus to Cloud Foundry without using a Docker container. When I try to deploy it with the standard Cloud Foundry Go Buildpack I get the following error:
can't load package: package prometheus: no buildable Go source files in /tmp/tmp.vv4iyDzMvE/.go/src/prometheus
Which somehow makes sense, because there are actually no sources in the root directory and the project is compiled with the Prometheus utility tool.
Is there any way to deploy Prometheus to Cloud Foundry, like using another Buildpack or something?
I had the same question, but (just today) came up with a slightly different solution that seemed easier to me.
I used the prometheus-2.2.1-linux-amd64 binary build.
I modified the prometheus.yml to use the default port 8080 as a target (last line):
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).
# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093
# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['localhost:8080'] ###### Only changed this line
Then I added a manifest.yml
---
applications:
- name: prometheus
  instances: 1
  buildpack: https://github.com/cloudfoundry/binary-buildpack.git
  command: ./prometheus --config.file=prometheus.yml --web.listen-address=:8080 --web.enable-admin-api
  memory: 1024M
  random-route: true
That uses the binary buildpack and tells Prometheus to start the server listening on port 8080.
Two file changes and this:
cf push
Now I have prometheus running in my space on Pivotal Web Services.
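As a variation (a sketch, not part of the original setup): Cloud Foundry exposes the port it assigns to the app in the PORT environment variable (typically 8080 for HTTP apps), and the start command runs through a shell, so the port can be taken from the environment instead of being hardcoded:
---
applications:
- name: prometheus
  instances: 1
  buildpack: https://github.com/cloudfoundry/binary-buildpack.git
  command: ./prometheus --config.file=prometheus.yml --web.listen-address=:$PORT --web.enable-admin-api
  memory: 1024M
  random-route: true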
Prometheus is a TSDB. And it is intended to consume gigabytes and gigabytes of data.
On a Cloud Foundry platform, you are limited by available resources.
So, why deploy Prometheus to Cloud Foundry?
Why not spin up a standalone BOSH director, deploy Prometheus through the director as a standalone BOSH deployment, and then inject it into Cloud Foundry as a CUPS (user-provided service)?
I am just curious and trying to understand the use case.
OK, after digging around a bit I got the whole thing working as follows.
manifest.yml
---
applications:
- name: prometheus
  instances: 1
  buildpack: https://github.com/cloudfoundry/go-buildpack.git
  command: prometheus
  env:
    GOPACKAGENAME: github.com/prometheus/prometheus
    GO_INSTALL_PACKAGE_SPEC: github.com/prometheus/prometheus/cmd/prometheus
  memory: 1000M
BUT in order to listen on the right port, the only solution I could find was to add the following at the beginning of the init() function in the cmd/prometheus/config.go file (make sure "os" is imported):
// Cloud Foundry passes the assigned HTTP port in the PORT environment variable;
// fall back to Prometheus' default :9090 when it is not set.
port := ":9090"
if s := os.Getenv("PORT"); s != "" {
	port = ":" + s
}
and then changing the following part (also in the init() function)
cfg.fs.StringVar(
	&cfg.web.ListenAddress, "web.listen-address", ":9090",
	"Address to listen on for the web interface, API, and telemetry.",
)
to
cfg.fs.StringVar(
	&cfg.web.ListenAddress, "web.listen-address", port,
	"Address to listen on for the web interface, API, and telemetry.",
)
After that you can simply deploy the application with cf push and everything should work like a charm.