I want to deploy Prometheus to Cloud Foundry without using a Docker container. When I try to deploy it with the standard Cloud Foundry Go Buildpack I get the following error:
can't load package: package prometheus: no buildable Go source files in /tmp/tmp.vv4iyDzMvE/.go/src/prometheus
That somehow makes sense, because there are actually no buildable sources in the root directory and the project is compiled with the Prometheus utility tool.
Is there any way to deploy Prometheus to Cloud Foundry, like using another Buildpack or something?
I had the same question, but (just today) came up with a slightly different solution that seemed easier to me.
I used the prometheus-2.2.1-linux-amd64 binary build.
I modified the prometheus.yml to use the default port 8080 as a target (last line):
# my global config
global:
scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
# scrape_timeout is set to the global default (10s).
# Alertmanager configuration
alerting:
alertmanagers:
- static_configs:
- targets:
# - alertmanager:9093
# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
# - "first_rules.yml"
# - "second_rules.yml"
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
# The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
- job_name: 'prometheus'
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
static_configs:
- targets: ['localhost:8080'] ###### Only changed this line
Then I added a manifest.yml
---
applications:
- name: prometheus
instances: 1
buildpack: https://github.com/cloudfoundry/binary-buildpack.git
command: ./prometheus --config.file=prometheus.yml --web.listen-address=:8080 --web.enable-admin-api
memory: 1024M
random-route: true
That uses the binary buildpack and tells Prometheus to start the server listening on port 8080.
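If you prefer not to hard-code the port, a variant of the command (my own assumption, not part of the original setup) could read the PORT environment variable that Cloud Foundry sets at runtime; the scrape target in prometheus.yml would then have to match whatever port is assigned:
command: ./prometheus --config.file=prometheus.yml --web.listen-address=:$PORT --web.enable-admin-api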
Two file changes and this:
cf push
Now I have Prometheus running in my space on Pivotal Web Services.
Prometheus is a TSDB, and it is intended to consume gigabytes and gigabytes of data.
On a Cloud Foundry platform, you are limited by available resources.
So, why deploy Prometheus to Cloud Foundry?
Why not spin up a standalone BOSH director, deploy Prometheus through the director as a standalone BOSH deployment, and then inject it into Cloud Foundry as a CUPS (user-provided service)?
I am just curious and trying to understand the use case.
OK, after digging around a bit I got the whole thing working as follows.
manifest.yml
---
applications:
- name: prometheus
instances: 1
buildpack: https://github.com/cloudfoundry/go-buildpack.git
command: prometheus
env:
GOPACKAGENAME: github.com/prometheus/prometheus
GO_INSTALL_PACKAGE_SPEC: github.com/prometheus/prometheus/cmd/prometheus
memory: 1000M
BUT, in order to listen on the right port, the only solution I could find was to add the following to the beginning of the init() function in the cmd/prometheus/config.go file:
// PORT is set by Cloud Foundry at runtime; fall back to Prometheus' default 9090 otherwise.
// (Make sure the "os" package is imported in this file.)
port := ":9090"
if s := os.Getenv("PORT"); s != "" {
	port = ":" + s
}
and then changing the following part (also in the init() function)
cfg.fs.StringVar(
&cfg.web.ListenAddress, "web.listen-address", ":9090",
"Address to listen on for the web interface, API, and telemetry.",
)
to
cfg.fs.StringVar(
&cfg.web.ListenAddress, "web.listen-address", port,
"Address to listen on for the web interface, API, and telemetry.",
)
After that you can simply deploy the application with cf push and everything should work like a charm.
Related
Please note: my Prometheus is running in an Ubuntu terminal and my Spring Boot application is running on Windows. It seems like my Ubuntu is not able to connect to the localhost of Windows.
I have created Spring Boot metrics using the "actuator" and my metrics are being exposed at "http://localhost:8080/actuator/prometheus".
My application.yml configuration in my springboot application looks like this:
management:
endpoints:
web:
exposure:
include: prometheus
The configuration file of prometheus i.e. prometheus.yml is as below:
scrape_configs:
# The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
- job_name: "services"
static_configs:
- targets: ["localhost:8080"]
metrics_path: '/actuator/prometheus'
Despite this configuration, I see the target as down in the Prometheus interface. It says Get "http://localhost:8080/actuator/prometheus": dial tcp 127.0.0.1:8080: connect: connection refused. Why is Prometheus not able to pick up the metrics at localhost?
I've been trying to set up Prometheus on my project but I'm having an issue when setting the target in prometheus.yml.
I've already done everything else, added the dependencies, and I'm using Prometheus through Spring Actuator.
My local URL is localhost.tac.com:8080 and to hit the metrics I go to localhost.tac.com:8080/actuator/prometheus.
And this is my prometheus.yml... Could anyone tell me why it is not working for me? I've also tried the 'targets' as localhost:8080 / localhost.tac.com:8080/actuator/prometheus with the same result.
I know the question is not really technical but I've been trying this for a long time, thanks!
EDIT: I'm using Prometheus inside a remote desktop I connect to... I connect to this desktop, open Eclipse, start the server, log in to an authentication page, and then I'm able to use localhost.tac.com... I've downloaded Prometheus inside this remote desktop and I'm trying to run it from there. Maybe there is an issue with Prometheus resolving this?
# my global config
global:
scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
# scrape_timeout is set to the global default (10s).
# Alertmanager configuration
alerting:
alertmanagers:
- static_configs:
- targets:
# - alertmanager:9093
# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
# - 'first_rules.yml'
# - 'second_rules.yml'
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
# The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
- job_name: 'prometheus'
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
static_configs:
- targets: ['localhost:9090']
- job_name: 'api'
metrics_path: /actuator/prometheus
scrape_interval: 5s
static_configs:
- targets: ['localhost.tac.com:8080']
I am using Spring Boot for my microservices.
I have around 20 microservices in my k8s cluster.
I am using Prometheus to collect data to show in Grafana.
In those applications there are some URLs which use path variables, like the following:
/v1/contacts/{id}
/v1/users/{id}
There are a few more URLs; if I consider all such URLs across all microservices, that could be around 60 to 70 URLs which use path variables.
Problem :
Now whenever any URL is requested, e.g.:
/v1/contacts/10
/v1/contacts/111
/v1/contacts/51
/v1/users/91
and so on...
Then Prometheus collects metrics for every such URL. After some time it has a huge number of metrics, and in the end the response time for collecting data from Prometheus increases.
So basically I want to clear the Prometheus data after some interval from my Spring Boot application.
I am not sure whether it's possible or not.
Can someone please help me solve this?
Thanks
Prometheus has several flags that configure local storage.
--storage.tsdb.path: Where Prometheus writes its database. Defaults to data/.
--storage.tsdb.retention.time: When to remove old data. Defaults to 15d. Overrides storage.tsdb.retention if this flag is set to anything other than default.
--storage.tsdb.retention.size: [EXPERIMENTAL] The maximum number of bytes of storage blocks to retain. The oldest data will be removed first. Defaults to 0 or disabled. This flag is experimental and may change in future releases. Units supported: B, KB, MB, GB, TB, PB, EB. Ex: "512MB"
Prometheus storage link
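As an illustration (the binary path, retention window, and size below are only examples), a server could be started with a tighter retention policy like this:
./prometheus --config.file=prometheus.yml \
  --storage.tsdb.path=data/ \
  --storage.tsdb.retention.time=2d \
  --storage.tsdb.retention.size=512MB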
Steps:
Have the Spring Boot application expose monitoring metrics by adding one of the
following dependencies:
<artifactId>spring-boot-starter-actuator</artifactId> or
<groupId>io.prometheus</groupId>
<artifactId>simpleclient_spring_boot</artifactId>
Let Prometheus collect the Spring Boot metric data. First, get the
Docker image of Prometheus: docker pull prom/prometheus
Then, write the configuration file prometheus.yml:
global:
scrape_interval: 10s
scrape_timeout: 10s
evaluation_interval: 10m
scrape_configs:
- job_name: prometheus
scrape_interval: 5s
scrape_timeout: 5s
metrics_path: /metrics
scheme: http
You can add more values to it. Sample here
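For instance, to actually scrape the Spring Boot application you could add a job with a static target under scrape_configs; the job name, host, port, and path below are placeholders and depend on which dependency you picked and where the app runs relative to the Prometheus container:
  - job_name: spring-boot-app                    # hypothetical job name
    metrics_path: /actuator/prometheus           # use /metrics if that is where your client exposes data
    scrape_interval: 5s
    static_configs:
      - targets: ['host.docker.internal:8080']   # placeholder host:port of the Spring Boot app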
Next, start Prometheus:
docker run -d -p 9090:9090 \
-u root \
-v /opt/prometheus/tsdb:/etc/prometheus/tsdb \
-v /opt/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml \
--privileged=true prom/prometheus \
--storage.tsdb.path=/etc/prometheus/tsdb \
--storage.tsdb.retention.time=7d \
--storage.tsdb.retention.size=300MB \
--config.file=/etc/prometheus/prometheus.yml
Alternatively, you can use this link: Monitoring Spring Boot projects with Prometheus.
But make sure, when you run the Prometheus server with Docker Compose, to update the command section with the size or time retention properties.
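For example, a Compose service with those retention flags added to its command section might look like this sketch (the image, published port, and volume paths are illustrative):
version: "3"
services:
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.retention.time=7d'
      - '--storage.tsdb.retention.size=300MB'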
I've written a very basic Spring Boot 2 application that connects to Zookeeper for service discovery (by using spring-cloud-starter-zookeeper-discovery).
The application gets registered at /services/example-service with the following value:
{"name":"example-service","id":"cb14ad15-4d33-4f1c-a420-29980ddf2fa8","address":"bf3fb9191373","port":8080,"sslPort":null,"payload":{"#class":"org.springframework.cloud.zookeeper.discovery.ZookeeperInstance","id":"application-1","name":"example-service","metadata":{}},"registrationTimeUTC":1524120820273,"serviceType":"DYNAMIC","uriSpec":{"parts":[{"value":"scheme","variable":true},{"value":"://","variable":false},{"value":"address","variable":true},{"value":":","variable":false},{"value":"port","variable":true}]}}
The address is a container ID because I've deployed the stack with Docker.
My Prometheus configuration looks like this:
- job_name: 'example-service'
metrics_path: '/actuator/prometheus'
serverset_sd_configs:
- servers:
- zookeeper:2181
paths:
- '/services/example-service'
The service discovery page of Prometheus shows the following discovered labels:
__address__=":0" __meta_serverset_endpoint_host="" __meta_serverset_endpoint_port="0" __meta_serverset_path="/services/example-service/cb14ad15-4d33-4f1c-a420-29980ddf2fa8" __meta_serverset_shard="0" __meta_serverset_status="" __metrics_path__="/actuator/prometheus" __scheme__="http" job="example-service"
Any idea why __address__ is :0?
Serverset discovery is a particular way of using Zookeeper, which your application is not following. In this case you probably want file service discovery.
Serversets use a config like the one below:
{"serviceEndpoint":{"host":"localhost","port":9100},"additionalEndpoints":{},"status":"ALIVE"}
I am trying to get some custom application metrics captured in golang using the prometheus client library to show up in Prometheus.
I have the following working:
I have a go application which is exposing metrics on localhost:8080/metrics as described in this article:
https://godoc.org/github.com/prometheus/client_golang/prometheus
I have a Kubernetes minikube cluster running which has Prometheus, Grafana and AlertManager, using the operator from this article:
https://github.com/coreos/prometheus-operator/tree/master/contrib/kube-prometheus
I created a Docker image for my Go app; when I run it and go to localhost:8080/metrics I can see the Prometheus metrics showing up in a browser.
I use the following pod.yaml to deploy my docker image to a pod in k8s
apiVersion: v1
kind: Pod
metadata:
name: my-app-pod
labels:
zone: prod
version: v1
annotations:
prometheus.io/scrape: 'true'
prometheus.io/port: '8080'
spec:
containers:
- name: my-container
image: name/my-app:latest
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
If I connect to my pod using:
kubectl exec -it my-app-pod -- /bin/bash
and then run wget on "localhost:8080/metrics", I can see my metrics.
So far so good; here is where I am hitting a wall. I could have multiple pods running this same image. I want to expose all of them to Prometheus as targets. How do I configure my pods so that they show up in Prometheus and I can report on my custom metrics?
Thanks for any help offered!
The kubernetes_sd_config directive can be used to discover all pods with a given label. Your prometheus.yml config file should have something like this:
- job_name: 'some-app'
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_label_app]
regex: python-app
action: keep
The source label [__meta_kubernetes_pod_label_app] basically uses the Kubernetes API to look at pods that have a label 'app' whose value is matched by the regex given on the line below (in this case, 'python-app').
Once you've done this Prometheus will automatically discover the pods you want and start scraping the metrics from your app.
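If you would rather key off the prometheus.io/* annotations already present in the pod spec from the question instead of an app label, a commonly used variant is sketched below (treat it as a starting point, not a drop-in config):
- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # keep only pods annotated with prometheus.io/scrape: "true"
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    # scrape the port given in the prometheus.io/port annotation
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__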
Hope that helps. You can follow the blog post here for more detail.
Note: it is worth mentioning that at the time of writing, kubernetes_sd_config is still in beta. Thus breaking changes to configuration may occur in future releases.
You need 2 things:
a ServiceMonitor for the Prometheus Operator, which specifies which services will be scraped for metrics
a Service which matches the ServiceMonitor and points to your pods
There is an example in the docs over here: https://coreos.com/operators/prometheus/docs/latest/user-guides/running-exporters.html
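A rough sketch of those two resources, based on the pod labels and port from the question (the resource names and selector labels are my assumptions), could look like this:
apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    app: my-app                  # the ServiceMonitor selects the Service by this label
spec:
  selector:
    zone: prod                   # matches the labels on the pod above
  ports:
    - name: metrics
      port: 8080
      targetPort: 8080
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  # may also need labels that your Prometheus resource's serviceMonitorSelector matches
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
    - port: metrics
      path: /metrics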
Can you share the Prometheus config that you are using to scrape the metrics? The config controls which sources the metrics are scraped from. Here are a few links that you can refer to: https://groups.google.com/forum/#!searchin/prometheus-users/Application$20metrics$20monitoring$20of$20Kubernetes$20Pods%7Csort:relevance/prometheus-users/uNPl4nJX9yk/cSKEBqJlBwAJ