KEDA with RabbitMQ and Spring Boot (spring-rabbit)

I am trying to run KEDA with RabbitMQ and Spring Boot, but it is not working: KEDA is not generating the Kubernetes HPA object.
I tried the sample code provided by KEDA (written in Go) and it works fine.
My producer/consumer code is written in Spring Boot. When I apply KEDA, it does not scale the RabbitMQ consumers (the HPA object is not even created).
https://github.com/sky29/rabbitmq-k8s-broker-publisher-consumer
https://github.com/sky29/rabbitmq-k8s-keda-spring-boot
https://github.com/sky29/rabbitmq-k8s-keda-spring-boot/tree/master/app/myclients
https://github.com/sky29/rabbitmq-k8s-keda-spring-boot/blob/master/app/04_scaled-object-new.yaml

I was able to achieve this; sharing in case anyone faces the same issue.
I am using KEDA version 2.8.2.
apiVersion: v1
kind: Secret
metadata:
  name: keda-rabbitmq-secret
  namespace: default
data:
  host: <HTTP API endpoint> # base64-encoded value of format http://guest:password@localhost:15672/vhost
---
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: trigger-auth-rabbitmq-conn
  namespace: default
spec:
  secretTargetRef:
    - parameter: host
      name: keda-rabbitmq-secret
      key: host
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: test-analysis
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test-celery                    # Mandatory. Must be in the same namespace as the ScaledObject
  pollingInterval: 10                    # Optional. Default: 30 seconds
  cooldownPeriod: 3600                   # Optional. Default: 300 seconds
  minReplicaCount: 1                     # Optional. Default: 0
  maxReplicaCount: 2                     # Optional. Default: 100
  fallback:                              # Optional. Section to specify fallback options
    failureThreshold: 3                  # Mandatory if fallback section is included
    replicas: 1                          # Mandatory if fallback section is included
  advanced:                              # Optional. Section to specify advanced options
    restoreToOriginalReplicaCount: true  # Optional. Default: false
    horizontalPodAutoscalerConfig:       # Optional. Section to specify HPA related options
      name: keda-hpa-auto-analysis       # Optional. Default: keda-hpa-{scaled-object-name}
      behavior:                          # Optional. Use to modify HPA's scaling behavior
        scaleDown:
          stabilizationWindowSeconds: 600
          policies:
            - type: Percent
              value: 100
              periodSeconds: 15
  triggers:
    - type: rabbitmq
      metadata:
        protocol: amqp
        queueName: test_queue
        queueLength: "1"
        activationValue: "0"
      authenticationRef:
        name: trigger-auth-rabbitmq-conn
Reference: https://keda.sh/docs/2.9/scalers/rabbitmq-queue/
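A note on producing that Secret value: the `host` field must hold the base64 encoding of the full connection string. A minimal sketch, assuming GNU coreutils and the placeholder credentials/vhost from the comment in the Secret above:

```shell
# Encode the RabbitMQ connection string for the Secret's `host` field.
# -w0 disables base64's default 76-column wrapping so it pastes cleanly into YAML.
HOST='http://guest:password@localhost:15672/vhost'
ENCODED=$(printf '%s' "$HOST" | base64 -w0)
echo "$ENCODED"
# Sanity check: decoding must round-trip to the original string.
printf '%s' "$ENCODED" | base64 -d
```

Once the ScaledObject is accepted, KEDA creates the HPA itself; `kubectl get hpa -n default` should then list keda-hpa-auto-analysis (the name set under horizontalPodAutoscalerConfig).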

Related

How Can I Publish Kibana

I have a question, if possible (I work with GKE).
This is my Kibana:
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.6.2
  count: 1
  elasticsearchRef:
    name: cdbridgerpayelasticsearch
  http:
    service:
      spec:
        type: LoadBalancer
(It ran well with the LoadBalancer - ...in the browser.)
I wanted to publish it on https://some.thing.net, so I made an ingress:
apiVersion: v1
kind: Secret
metadata:
  name: bcd-kibana-testsecret-tls
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: |
    LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUREVENDQWZXZ0F3SUJBZ0lVVkk1ellBakR0
    RWFyd0Zqd2xuTnlLeU0xMnNrd0RRWUpLb1pJaHZjTkFRRUwKQlFBd0ZqRVVNQklHQTFVRUF3d0xa
    bTl2TG1KaGNpNWpiMjB3SGhjTk1qSXdOakUxTVRJeE16RTVXaGNOTWpNdwpOakUxTVRJeE16RTVX
    akFXTVJRd0VnWURWUVFEREF0bWIyOHVZbUZ5TG1OdmJUQ0NBU0l3RFFZSktvWklodmNOCkFRRUJC
    UUFEZ2dFUEFEQ0NBUW9DZ2dFQkFLMVRuTDJpZlluTUFtWEkrVmpXQU9OSjUvMGlpN0xWZlFJMjQ5
    aVoKUXl5Zkw5Y2FQVVpuODZtY1NSdXZnV1JZa3dLaHUwQXpqWW9aQWR5N21TQ3ROMmkzZUZIRGdq
    NzdZNEhQcFA1TQpKVTRzQmU5ZmswcnlVeDA3aU11UVA5T3pibGVzNHcvejJIaXcyYVA2cUl5ZFI3
    bFhGSnQ0NXNSeDJ3THRqUHZCClErMG5UQlJrSndsQVZQYTdIYTN3RjBTSDJXL1dybTgrNlRkVGpG
    MmFUanpkMFFXdy9hKzNoUU9HSnh4b1JxY1MKYmNNYmMrYitGeCtqZ3F2N0xuQ3R1Njd5L2tsb3ZL
    Z2djdW41ZVNqY3krT0ZpdTNhY2hLVDlUeWJlSUxPY2FSYQp3KzJSeitNOUdTMWR2aUI0Q0dRNnlw
    RDhkazc4bktja1FBam9QMXV5ZXJnR2hla0NBd0VBQWFOVE1GRXdIUVlEClZSME9CQllFRkpLUWps
    KzI0TVJrNVpqTlB4ZVRnVU1pbE5xWk1COEdBMVVkSXdRWU1CYUFGSktRamwrMjRNUmsKNVpqTlB4
    ZVRnVU1pbE5xWk1BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dF
    QgpBSUp2Y1ZNclpEUEZ6TEhvZ3IyZklDN0E0TTB5WFREZXhONWNEZFFiOUNzVk0zUjN6bkZFU1Jt
    b21RVVlCeFB3CmFjUVpWQ25qM0xGamRmeExBNkxrR0hhbjBVRjhDWnJ4ODRRWUtzQTU2dFpJWFVm
    ZXRIZk1zOTZsSE5ROW5samsKT3RoazU3ZkNRZVRFMjRCU0RIVDJVL1hhNjVuMnBjcFpDU2FYWStF
    SjJaWTBhZjlCcVBrTFZud3RTQ05lY0JLVQp3N0RCM0o4U2h1Z0FES21xU2VJM1R2N015SThvNHJr
    RStBdmZGTUw0YlpFbS9IWW4wNkxQdVF3TUE1cndBcFN3CnlDaUJhcjRtK0psSzNudDRqU0hGeU4x
    N1g1M1FXcnozQTNPYmZSWXI0WmJObE8zY29ObzFiQnd3eWJVVmgycXoKRGIrcnVWTUN5WjBTdXlE
    OGZURFRHY1E9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  tls.key: |
    LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZB
    QVNDQktjd2dnU2pBZ0VBQW9JQkFRQ3RVNXk5b24ySnpBSmwKeVBsWTFnRGpTZWY5SW91eTFYMENO
    dVBZbVVNc255L1hHajFHWi9PcG5Fa2JyNEZrV0pNQ29idEFNNDJLR1FIYwp1NWtnclRkb3QzaFJ3
    NEkrKzJPQno2VCtUQ1ZPTEFYdlg1Tks4bE1kTzRqTGtEL1RzMjVYck9NUDg5aDRzTm1qCitxaU1u
    VWU1VnhTYmVPYkVjZHNDN1l6N3dVUHRKMHdVWkNjSlFGVDJ1eDJ0OEJkRWg5bHYxcTV2UHVrM1U0
    eGQKbWs0ODNkRUZzUDJ2dDRVRGhpY2NhRWFuRW0zREczUG0vaGNmbzRLcit5NXdyYnV1OHY1SmFM
    eW9JSExwK1hrbwozTXZqaFlydDJuSVNrL1U4bTNpQ3puR2tXc1B0a2MvalBSa3RYYjRnZUFoa09z
    cVEvSFpPL0p5bkpFQUk2RDliCnNucTRCb1hwQWdNQkFBRUNnZ0VCQUpyb3FLUy83aTFTM1MyMVVr
    MTRickMxSkJjVVlnRENWNGk4SUNVOHpWRzcKTUZteVJPT0JFc0FiUXlmd1V0ZXBaaktxODUwc3Rp
    cWZzUTlqeHpieU9SeHBKYXNGN29sMXluaUJhYmd4dkFIQwp6TWNsQjVLclEyZFVCeTNRVFl0YXla
    cW9sUU56NzV2bWk0M0lBQTQwbjU3aFdqU2QrTG5IL0hNQWRzbW04SnVwCjA5d3VHMGltdEhQYm5B
    TERnY2N3V04zY2xxU0FtR3pxbW9kOEE1YjBWZTQwZHhjTXh3TldaN0JqOTBnamNWYnQKVU1aaFhp
    T2E2bnNSSHZDQjF0b0lmZVBuOEdzRk9nOUVqUHZDTWM4QWZKVW84Qk5TK2N0RmF3RjRWaUUyUHlB
    VgpxZzR1MHhBQ2Qya28zUUtpZFpsbjAyZkc2ZWg1UmxGYzdsL1RiWWxQY2xFQ2dZRUEyU0NvK2pJ
    MUZ1WkZjSFhtCm1Ra2Z3Q0Vvc28xd3VTV3hRWDBCMnJmUmJVbS9xdXV4Rm92MUdVc0hwNEhmSHJu
    RWRuSklqVUw4bWxpSjFFUTkKVUpxZC9SVkg1OTdZZ2Zib1dCbHh0cVhIRFRObjNIU3JzQmJlQTh6
    NXFMZjE2QzZaa3U3YmR3L2pxazJVaFgrcwp6T3piYTVqYVVYU0pXYXpzRmZoZjdSMlZ3UTBDZ1lF
    QXpGdDZBTDNYendBSi9EcXU5QlVuaTIxemZXSUp5QXFwCnBwRnNnQUczVkVYZWFRMjVGcjV6cURn
    dlBWRkF5QUFYMU9TL3pHZVcxaDQ0SERzQjRrVmdxVlhHSTdoQUV1RjYKRlgra1M5Uk5QdmFsYXdQ
    cXp4VTdPQmcvUis5NVB1NW1oNnFrRWVUekM2T21ZUGRGbmVQcWxKZk03YU43OEhjOApGVU1xRTBa
    NkNVMENnWUJJUUVYNmU1cU85REZIS3ZTQkdEZ29odUEwQ2p6b1gxS01xRHhsdTZWRTZMV08rcjhD
    CjhhK3Rxdm54RTVaYmN4V2RGSXB2OTBwM1VkOExjMm16MkwrWjUrcjFqWUllUFRzemxjUHhNMWo1
    VzVIRUdrN0gKV2RTbkR4NUV0bkp0d0pQNkFPR216UExGU091VFFOa1BtQUdyM0VGSnVhMjYyWC8y
    RDZCY0Z1d3VRUUtCZ0V0aApHcm1YVFRsdnZEOHJya2tlWEgzVG01d09RNmxrTlh2WmZIb2pKK3FQ
    OHlBeERhclVDWGx0Y0E5Z0gxTW1wYVBECjFQT2k2a0tFMXhHaXVta3FTaU5zSGpBaTBJK21XQkFD
    Q3lwbFh6RHdiY2Z4by9WSzBaTTVibTRzYVQ3TFZVcUoKcVFkb3VqWDY0VzQzQjVqYjd6VnNZUXp2
    RnRKMlNOVlc5dmd4TU9hcEFvR0FDaHphN3BMWWhueFA5QWVUL1pxZwpvUWM0ckh1SDZhTDMveCtq
    dlpRVXdiSitKOWZGY05pcTFZRlJBL2RJSEJWcGZTMWJWR3N3OW9MT2tsTWxDckxRCnJKSjkzWlRu
    dFdSQ1FCOTNMbXhoZmJuRk9CemZtbHZYTjFJeE1FNStVbVZRQmRLaS9YMktMZnFSUW5HVTExL2UK
    NytFcXFFbllrWTBraE1HL0xqRlRTU0E9Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-bcd-ingress
spec:
  tls:
    - hosts:
        - some.thing.net
      secretName: bcd-kibana-testsecret-tls
  rules:
    - host: some.thing.net
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: quickstart-kb-http
                port:
                  number: 5601
(P.S. to get the tls.crt and tls.key I ran in the GCP CLI:
kubectl create secret tls bcdsecret --key="tls.key" --cert="tls.crt"
and then:
cat tls.crt | base64 and cat tls.key | base64)
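One pitfall with encoding by hand: `base64` wraps its output at 76 columns by default, and a wrapped value pasted under `data:` is a common source of broken Secrets. A small sketch with dummy data (not the real certificate), assuming GNU coreutils:

```shell
# Stand-in for tls.crt; real certificates are far longer than one wrap width.
head -c 120 /dev/zero > /tmp/demo.crt

base64 /tmp/demo.crt | wc -l      # several lines: wrapped at 76 columns
base64 -w0 /tmp/demo.crt | wc -l  # one line: safe to paste into a manifest
```

Also, since `kubectl create secret tls bcdsecret` already created and encoded the Secret, an alternative is to read the encoded values back with `kubectl get secret bcdsecret -o yaml` instead of re-encoding the files yourself.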
But the ingress failed.
When I ran kubectl get ingress I got:
$ kubectl get ingress
W0615 21:28:17.715766 604 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
NAME CLASS HOSTS ADDRESS PORTS AGE
...
tls-bcd-ingressbcd some.thing.net 34.120.185.85 80, 443 35m
but when I opened that hostname in the browser I got nothing.
What am I supposed to do?
Thank you!
Or, in other words: how can I publish the Kibana on a hostname?
Thanks 🙂
Frida

Can't communicate with pods through services

I have two deployments: one creates 4 replicas of php-fpm, and the other is an nginx webserver exposed to the Internet through an Ingress.
The problem is that I can't connect to the app service from the webserver pod (and I hit the same issue when trying to connect to other services).
ping result:
$ ping -c4 app.ternobo-connect
PING app.ternobo-connect (10.245.240.225): 56 data bytes
--- app.ternobo-connect ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
but the pods are individually reachable via their ClusterIPs.
app-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    ternobo.kubernates.service: app
    ternobo.kubernates.network/app-network: "true"
  name: app
  namespace: ternobo-connect
spec:
  replicas: 4
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 50%
  selector:
    matchLabels:
      ternobo.kubernates.service: app
  template:
    metadata:
      labels:
        ternobo.kubernates.network/app-network: "true"
        ternobo.kubernates.service: app
    spec:
      containers:
        - env:
            - name: SERVICE_NAME
              value: app
            - name: SERVICE_TAGS
              value: production
          image: ghcr.io/ternobo/ternobo-connect:0.1.01
          name: app
          ports:
            - containerPort: 9000
          resources: {}
          tty: true
          workingDir: /var/www
          envFrom:
            - configMapRef:
                name: appenvconfig
      imagePullSecrets:
        - name: regsecret
      restartPolicy: Always
status: {}
app-service.yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    ternobo.kubernates.network/app-network: "true"
  name: app
  namespace: ternobo-connect
spec:
  type: ClusterIP
  ports:
    - name: "9000"
      port: 9000
      targetPort: 9000
  selector:
    ternobo.kubernates.service: app
status:
  loadBalancer: {}
network-policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app-network
  namespace: ternobo-connect
spec:
  podSelector: {}
  ingress:
    - {}
  policyTypes:
    - Ingress
I also tried removing the network policy, but that didn't work; changing the podSelector rules to select only pods with the ternobo.kubernates.network/app-network: "true" label didn't help either.
Kubernetes service URLs follow the my-svc.my-namespace.svc.cluster-domain.example format, see: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#a-aaaa-records
So the ping should be:
ping -c4 app.ternobo-connect.svc.cluster.local
If the webserver is in the same namespace as the service, you can ping the service name directly:
ping -c4 app
I don't know the impact of network policy, I haven't worked with it.
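To make the naming concrete, the in-cluster DNS name is just the Service name plus the namespace plus the cluster domain (the names below are taken from the question). One general caveat worth adding: a ClusterIP is a virtual address and may not answer ICMP at all, so a TCP probe on the service port is a more reliable connectivity test than ping:

```shell
# Assemble the in-cluster DNS name for the Service from the question.
SERVICE='app'
NAMESPACE='ternobo-connect'
FQDN="${SERVICE}.${NAMESPACE}.svc.cluster.local"
echo "$FQDN"   # prints app.ternobo-connect.svc.cluster.local
# From inside a pod, prefer a TCP check on the service port over ICMP:
# nc -vz "$FQDN" 9000
```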

How to rewrite in Istio the right way and avoid 404 error?

Scenario:
I have 2 deployments, deployment-1 with label version: v1 and deployment-2 with label version: v2, both hosted behind a NodePort service, test-1. I have created a virtual service with two match conditions as follows:
- match:
    - uri:
        exact: /v1
  rewrite:
    uri: /
  route:
    - destination:
        host: test-1
        port:
          number: 80
        subset: v1
- match:
    - uri:
        exact: /v2
  rewrite:
    uri: /
  route:
    - destination:
        host: test-1
        port:
          number: 80
        subset: v2
The code file can be found here
Problem:
When I visit the Ingress Gateway IP at http://ingress-gateway-ip/v1, I encounter a 404 error in the console saying http://ingress-gateway-ip/favicon.ico was not found (because the path has been rewritten to "/"); the stylings and JS are also absent at this route. But when I visit
http://ingress-gateway-ip/v1/favicon.ico directly, I can see the favicon icon along with all the JS and stylings.
Please find the screenshots of the problem here
Expectation:
How can I access these two services using prefix routing in the URL? Meaning: when I navigate to /v1, only the v1 version should come up (without 404s), and when I navigate to /v2, only the v2 version should come up.
EDIT-1:
Added code snippet from the original code
Added code file link
EDIT-2:
Added screenshot of the problem
Modified problem statement for clear understanding
How can I access these two services using a prefix routing in the url, meaning when I navigate to /v1, only the V1 version should come up without 404, and when I navigate to /v2, only the V2 version should come up
I assume your issue is in your DestinationRule: in the v2 subset the label is version: v1 when it should be version: v2. That's why requests to both /v1 and /v2 went only to the v1 version of your pods.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: test-destinationrule
spec:
  host: test-1
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v1 <---
It should be:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: test-destinationrule
spec:
  host: test-1
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
When I try to visit this Ingress Gateway IP, I encounter a 404 error in the console saying http://ingress-gateway-ip/favicon.ico
It's working as designed: you haven't specified a route for /, just for /v1 and /v2.
If you want to be able to access /, you have to add another match for it:
- match:
    - uri:
        prefix: /
  route:
    - destination:
        host: test-1
Below is a working example with 2 nginx pods; take a look.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-v1
spec:
  selector:
    matchLabels:
      version: v1
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
        version: v1
    spec:
      containers:
        - name: nginx1
          image: nginx
          ports:
            - containerPort: 80
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "echo Hello nginx1 > /usr/share/nginx/html/index.html"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-v2
spec:
  selector:
    matchLabels:
      version: v2
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
        version: v2
    spec:
      containers:
        - name: nginx2
          image: nginx
          ports:
            - containerPort: 80
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "echo Hello nginx2 > /usr/share/nginx/html/index.html"]
---
apiVersion: v1
kind: Service
metadata:
  name: test-1
  labels:
    app: frontend
spec:
  ports:
    - name: http-front
      port: 80
      protocol: TCP
  selector:
    app: frontend
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: simpleexample
spec:
  selector:
    istio: ingressgateway
  servers:
    - hosts:
        - '*'
      port:
        name: http
        number: 80
        protocol: HTTP
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test-virtualservice
spec:
  gateways:
    - simpleexample
  hosts:
    - '*'
  http:
    - match:
        - uri:
            prefix: /v1
      rewrite:
        uri: /
      route:
        - destination:
            host: test-1
            port:
              number: 80
            subset: v1
    - match:
        - uri:
            prefix: /v2
      rewrite:
        uri: /
      route:
        - destination:
            host: test-1
            port:
              number: 80
            subset: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: test-destinationrule
spec:
  host: test-1
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
Results from curl:
$ curl -v ingress-gateway-ip/
404 Not Found
(there is no path specified for / in the virtual service)
$ curl -v ingress-gateway-ip/v1
HTTP/1.1 200 OK
Hello nginx1
$ curl -v ingress-gateway-ip/v2
HTTP/1.1 200 OK
Hello nginx2
EDIT
The problem is that all the stylings and JS are not readable by the browser at "/" when they are being rewritten.
It was already explained by @Rinor here.
I would add this Istio in practice tutorial here; it explains well a way of dealing with that problem, which is to add more match paths for your dependencies (js, css, etc.).
Let's break down the requests that should be routed to Frontend:
Exact path / should be routed to Frontend to get the index.html
Prefix path /static/* should be routed to Frontend to get any static files needed by the frontend, like Cascading Style Sheets and JavaScript files.
Paths matching the regex ^.*\.(ico|png|jpg)$ should be routed to Frontend as it is an image that the page needs to show.
http:
  - match:
      - uri:
          exact: /
      - uri:
          exact: /callback
      - uri:
          prefix: /static
      - uri:
          regex: '^.*\.(ico|png|jpg)$'
    route:
      - destination:
          host: frontend
          port:
            number: 80
Let me know if you have any more questions.

Prometheus metrics from custom exporter display in /metrics, but not in /graph (k8s)

I've written a node exporter in Go named "my-node-exporter", with some collectors to expose metrics. From my cluster, I can view my metrics just fine with the following:
kubectl port-forward my-node-exporter-999b5fd99-bvc2c 9090:8080 -n kube-system
localhost:9090/metrics
However when I try to view my metrics within the prometheus dashboard
kubectl port-forward prometheus-prometheus-operator-158978-prometheus-0 9090
localhost:9090/graph
my metrics are nowhere to be found and I can only see default metrics. Am I missing a step for getting my metrics on the graph?
Here are the pods in my default namespace which has my prometheus stuff in it.
pod/alertmanager-prometheus-operator-158978-alertmanager-0 2/2 Running 0 85d
pod/grafana-1589787858-fd7b847f9-sxxpr 1/1 Running 0 85d
pod/prometheus-operator-158978-operator-75f4d57f5b-btwk9 2/2 Running 0 85d
pod/prometheus-operator-1589787700-grafana-5fb7fd9d8d-2kptx 2/2 Running 0 85d
pod/prometheus-operator-1589787700-kube-state-metrics-765d4b7bvtdhj 1/1 Running 0 85d
pod/prometheus-operator-1589787700-prometheus-node-exporter-bwljh 1/1 Running 0 85d
pod/prometheus-operator-1589787700-prometheus-node-exporter-nb4fv 1/1 Running 0 85d
pod/prometheus-operator-1589787700-prometheus-node-exporter-rmw2f 1/1 Running 0 85d
pod/prometheus-prometheus-operator-158978-prometheus-0 3/3 Running 1 85d
I used helm to install prometheus operator.
EDIT: adding my yaml file
# Configuration to deploy
#
# example usage: kubectl create -f <this_file>
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-node-exporter-sa
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-node-exporter-binding
subjects:
  - kind: ServiceAccount
    name: my-node-exporter-sa
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: my-node-exporter-role
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-node-exporter-role
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
---
#####################################################
############         Service             ############
#####################################################
kind: Service
apiVersion: v1
metadata:
  name: my-node-exporter-svc
  namespace: kube-system
  labels:
    app: my-node-exporter
spec:
  ports:
    - name: my-node-exporter
      port: 8080
      targetPort: metrics
      protocol: TCP
  selector:
    app: my-node-exporter
---
#########################################################
############         Deployment             ############
#########################################################
kind: Deployment
apiVersion: apps/v1
metadata:
  name: my-node-exporter
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: my-node-exporter
  replicas: 1
  template:
    metadata:
      labels:
        app: my-node-exporter
    spec:
      serviceAccount: my-node-exporter-sa
      containers:
        - name: my-node-exporter
          image: locationofmyimagehere
          args:
            - "--telemetry.addr=8080"
            - "--telemetry.path=/metrics"
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: log-dir
              mountPath: /var/log
      volumes:
        - name: log-dir
          hostPath:
            path: /var/log
Service monitor yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-node-exporter-service-monitor
  labels:
    app: my-node-exporter-service-monitor
spec:
  selector:
    matchLabels:
      app: my-node-exporter
    matchExpressions:
      - {key: app, operator: Exists}
  endpoints:
    - port: my-node-exporter
  namespaceSelector:
    matchNames:
      - default
      - kube-system
Prometheus yaml
# Prometheus will use selected ServiceMonitor
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: my-node-exporter
  labels:
    team: frontend
spec:
  serviceMonitorSelector:
    matchLabels:
      app: my-node-exporter
    matchExpressions:
      - key: app
        operator: Exists
You need to explicitly tell Prometheus what metrics to collect - and where from - by first creating a Service that points to your my-node-exporter pods (if you haven't already), and then a ServiceMonitor, as described in the Prometheus Operator docs - search for the phrase "This Service object is discovered by a ServiceMonitor".
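Two details in the manifests above are worth double-checking (a sketch, not a definitive diagnosis): the ServiceMonitor's endpoints.port must match the Service's port name, and the ServiceMonitor's own labels must match the Prometheus serviceMonitorSelector. Here the ServiceMonitor is labeled app: my-node-exporter-service-monitor while the Prometheus object selects app: my-node-exporter, which looks like a mismatch:

```shell
# Values copied from the manifests in the question.
svc_port_name='my-node-exporter'                 # Service: .spec.ports[0].name
sm_endpoint_port='my-node-exporter'              # ServiceMonitor: .spec.endpoints[0].port
sm_label_app='my-node-exporter-service-monitor'  # ServiceMonitor: .metadata.labels.app
prom_selector_app='my-node-exporter'             # Prometheus: serviceMonitorSelector matchLabels.app

test "$svc_port_name" = "$sm_endpoint_port" \
  && echo 'OK: endpoint port matches the Service port name'
test "$sm_label_app" = "$prom_selector_app" \
  || echo 'MISMATCH: serviceMonitorSelector will not match this ServiceMonitor'
```

If the label mismatch is real, Prometheus never discovers the ServiceMonitor, and the custom metrics never appear in /graph even though /metrics serves them fine.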
Getting Deployment/Service/ServiceMonitor/PrometheusRule working with the Prometheus Operator requires great care.
So I created a helm chart repo, kehao95/helm-prometheus-exporter, to install any prometheus-exporter, including your custom exporter; you can try it out.
It will create not only the exporter Deployment but also the Service/ServiceMonitor/PrometheusRule for you.
Install the chart:
helm repo add kehao95 https://kehao95.github.io/helm-prometheus-exporter/
Create a values file my-exporter.yaml for kehao95/prometheus-exporter:
exporter:
  image: your-exporter
  tag: latest
  port: 8080
  args:
    - "--telemetry.addr=8080"
    - "--telemetry.path=/metrics"
Install it with helm:
helm install --namespace yourns my-exporter kehao95/prometheus-exporter -f my-exporter.yaml
Then you should see your metrics in Prometheus.

Kubernetes and spring boot variable parsing conflict

I have a conflict between Kubernetes and Spring Boot environment variables. The details are as follows:
When creating my zipkin server pod, I need to set the env variables RABBITMQ_HOST=http://172.16.100.83 and RABBITMQ_PORT=5672.
Initially I defined zipkin_pod.yaml as follows:
apiVersion: v1
kind: Pod
metadata:
  name: gearbox-rack-zipkin-server
  labels:
    app: gearbox-rack-zipkin-server
    purpose: platform-demo
spec:
  containers:
    - name: gearbox-rack-zipkin-server
      image: 192.168.1.229:5000/gearboxrack/gearbox-rack-zipkin-server
      ports:
        - containerPort: 9411
      env:
        - name: EUREKA_SERVER
          value: http://172.16.100.83:31501
        - name: RABBITMQ_HOST
          value: http://172.16.100.83
        - name: RABBITMQ_PORT
          value: 31503
With this configuration, when I run the command
kubectl apply -f zipkin_pod.yaml
the console throws an error:
[root@master3 sup]# kubectl apply -f zipkin_pod.yaml
Error from server (BadRequest): error when creating "zipkin_pod.yaml": Pod in version "v1" cannot be handled as a Pod: v1.Pod: Spec: v1.PodSpec: Containers: []v1.Container: v1.Container: Env: []v1.EnvVar: v1.EnvVar: Value: ReadString: expects " or n, parsing 1018 ...,"value":3... at {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"gearbox-rack-zipkin-server\",\"purpose\":\"platform-demo\"},\"name\":\"gearbox-rack-zipkin-server\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"EUREKA_SERVER\",\"value\":\"http://172.16.100.83:31501\"},{\"name\":\"RABBITMQ_HOST\",\"value\":\"http://172.16.100.83\"},{\"name\":\"RABBITMQ_PORT\",\"value\":31503}],\"image\":\"192.168.1.229:5000/gearboxrack/gearbox-rack-zipkin-server\",\"name\":\"gearbox-rack-zipkin-server\",\"ports\":[{\"containerPort\":9411}]}]}}\n"},"labels":{"app":"gearbox-rack-zipkin-server","purpose":"platform-demo"},"name":"gearbox-rack-zipkin-server","namespace":"default"},"spec":{"containers":[{"env":[{"name":"EUREKA_SERVER","value":"http://172.16.100.83:31501"},{"name":"RABBITMQ_HOST","value":"http://172.16.100.83"},{"name":"RABBITMQ_PORT","value":31503}],"image":"192.168.1.229:5000/gearboxrack/gearbox-rack-zipkin-server","name":"gearbox-rack-zipkin-server","ports":[{"containerPort":9411}]}]}}
So I modified the last line of the zipkin_pod.yaml file as follows, using brute force to make the port number an int:
apiVersion: v1
kind: Pod
metadata:
  name: gearbox-rack-zipkin-server
  labels:
    app: gearbox-rack-zipkin-server
    purpose: platform-demo
spec:
  containers:
    - name: gearbox-rack-zipkin-server
      image: 192.168.1.229:5000/gearboxrack/gearbox-rack-zipkin-server
      ports:
        - containerPort: 9411
      env:
        - name: EUREKA_SERVER
          value: http://172.16.100.83:31501
        - name: RABBITMQ_HOST
          value: http://172.16.100.83
        - name: RABBITMQ_PORT
          value: !!31503
Then the pod is successfully created, but Spring's getProperties throws an exception.
[root@master3 sup]# kubectl apply -f zipkin_pod.yaml
pod "gearbox-rack-zipkin-server" created
When I check the logs:
[root@master3 sup]# kubectl logs gearbox-rack-zipkin-server
2018-05-28 07:56:26.792 INFO [zipkin-server,,,] 1 --- [ main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@4ac68d3e: startup date [Mon May 28 07:56:26 UTC 2018]; root of context hierarchy
...
***************************
APPLICATION FAILED TO START
***************************
Description:
Binding to target org.springframework.boot.autoconfigure.amqp.RabbitProperties@324c64cd failed:
Property: spring.rabbitmq.port
Value:
Reason: Failed to convert property value of type 'java.lang.String' to required type 'int' for property 'port'; nested exception is org.springframework.core.convert.ConverterNotFoundException: No converter found capable of converting from type [java.lang.String] to type [int]
Action:
Update your application's configuration
My question is: how can I make Kubernetes accept the port number while not breaking Spring Boot's string-to-int conversion? Spring Boot cannot convert !!31503 to the int 31503.
As @Bal Chua and @Pär Nilsson mentioned, environment variables can only be strings, because Linux environment variables can only be strings.
So, in YAML, you need to place the value in quotes to force Kubernetes to pass it as a string.
For example:
- name: RABBITMQ_PORT
  value: '31503'
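The rule behind both errors can be sketched in two lines: everything that crosses the process environment is an untyped string, and the consumer (Spring Boot here) performs the string-to-int conversion itself:

```shell
# Environment variables are untyped strings at the OS level.
export RABBITMQ_PORT='31503'
# Any child process reads the value back as a string and converts it itself:
sh -c 'echo "spring.rabbitmq.port=$RABBITMQ_PORT"'   # prints spring.rabbitmq.port=31503
```

This is also why the API server rejects an unquoted 31503 in the manifest: the Pod spec's env value field is typed as a string, so a YAML integer cannot be accepted there.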
