I have a question, if possible:
(I work with GKE.)
This is my Kibana manifest:
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.6.2
  count: 1
  elasticsearchRef:
    name: cdbridgerpayelasticsearch
  http:
    service:
      spec:
        type: LoadBalancer
(It ran well with the LoadBalancer; I could reach Kibana in the browser.)
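(For reference, a hedged aside that is not part of the original question: the LoadBalancer address used in the browser would be the one on the quickstart-kb-http Service that ECK creates for this Kibana, which the Ingress below also references; it can be checked with:
kubectl get service quickstart-kb-http
)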
I then wanted to publish it at https://some.thing.net,
so I created an Ingress:
apiVersion: v1
kind: Secret
metadata:
name: bcd-kibana-testsecret-tls
namespace: default
type: kubernetes.io/tls
data:
tls.crt: |
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUREVENDQWZXZ0F3SUJBZ0lVVkk1ellBakR0
RWFyd0Zqd2xuTnlLeU0xMnNrd0RRWUpLb1pJaHZjTkFRRUwKQlFBd0ZqRVVNQklHQTFVRUF3d0xa
bTl2TG1KaGNpNWpiMjB3SGhjTk1qSXdOakUxTVRJeE16RTVXaGNOTWpNdwpOakUxTVRJeE16RTVX
akFXTVJRd0VnWURWUVFEREF0bWIyOHVZbUZ5TG1OdmJUQ0NBU0l3RFFZSktvWklodmNOCkFRRUJC
UUFEZ2dFUEFEQ0NBUW9DZ2dFQkFLMVRuTDJpZlluTUFtWEkrVmpXQU9OSjUvMGlpN0xWZlFJMjQ5
aVoKUXl5Zkw5Y2FQVVpuODZtY1NSdXZnV1JZa3dLaHUwQXpqWW9aQWR5N21TQ3ROMmkzZUZIRGdq
NzdZNEhQcFA1TQpKVTRzQmU5ZmswcnlVeDA3aU11UVA5T3pibGVzNHcvejJIaXcyYVA2cUl5ZFI3
bFhGSnQ0NXNSeDJ3THRqUHZCClErMG5UQlJrSndsQVZQYTdIYTN3RjBTSDJXL1dybTgrNlRkVGpG
MmFUanpkMFFXdy9hKzNoUU9HSnh4b1JxY1MKYmNNYmMrYitGeCtqZ3F2N0xuQ3R1Njd5L2tsb3ZL
Z2djdW41ZVNqY3krT0ZpdTNhY2hLVDlUeWJlSUxPY2FSYQp3KzJSeitNOUdTMWR2aUI0Q0dRNnlw
RDhkazc4bktja1FBam9QMXV5ZXJnR2hla0NBd0VBQWFOVE1GRXdIUVlEClZSME9CQllFRkpLUWps
KzI0TVJrNVpqTlB4ZVRnVU1pbE5xWk1COEdBMVVkSXdRWU1CYUFGSktRamwrMjRNUmsKNVpqTlB4
ZVRnVU1pbE5xWk1BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dF
QgpBSUp2Y1ZNclpEUEZ6TEhvZ3IyZklDN0E0TTB5WFREZXhONWNEZFFiOUNzVk0zUjN6bkZFU1Jt
b21RVVlCeFB3CmFjUVpWQ25qM0xGamRmeExBNkxrR0hhbjBVRjhDWnJ4ODRRWUtzQTU2dFpJWFVm
ZXRIZk1zOTZsSE5ROW5samsKT3RoazU3ZkNRZVRFMjRCU0RIVDJVL1hhNjVuMnBjcFpDU2FYWStF
SjJaWTBhZjlCcVBrTFZud3RTQ05lY0JLVQp3N0RCM0o4U2h1Z0FES21xU2VJM1R2N015SThvNHJr
RStBdmZGTUw0YlpFbS9IWW4wNkxQdVF3TUE1cndBcFN3CnlDaUJhcjRtK0psSzNudDRqU0hGeU4x
N1g1M1FXcnozQTNPYmZSWXI0WmJObE8zY29ObzFiQnd3eWJVVmgycXoKRGIrcnVWTUN5WjBTdXlE
OGZURFRHY1E9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
tls.key: |
LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZB
QVNDQktjd2dnU2pBZ0VBQW9JQkFRQ3RVNXk5b24ySnpBSmwKeVBsWTFnRGpTZWY5SW91eTFYMENO
dVBZbVVNc255L1hHajFHWi9PcG5Fa2JyNEZrV0pNQ29idEFNNDJLR1FIYwp1NWtnclRkb3QzaFJ3
NEkrKzJPQno2VCtUQ1ZPTEFYdlg1Tks4bE1kTzRqTGtEL1RzMjVYck9NUDg5aDRzTm1qCitxaU1u
VWU1VnhTYmVPYkVjZHNDN1l6N3dVUHRKMHdVWkNjSlFGVDJ1eDJ0OEJkRWg5bHYxcTV2UHVrM1U0
eGQKbWs0ODNkRUZzUDJ2dDRVRGhpY2NhRWFuRW0zREczUG0vaGNmbzRLcit5NXdyYnV1OHY1SmFM
eW9JSExwK1hrbwozTXZqaFlydDJuSVNrL1U4bTNpQ3puR2tXc1B0a2MvalBSa3RYYjRnZUFoa09z
cVEvSFpPL0p5bkpFQUk2RDliCnNucTRCb1hwQWdNQkFBRUNnZ0VCQUpyb3FLUy83aTFTM1MyMVVr
MTRickMxSkJjVVlnRENWNGk4SUNVOHpWRzcKTUZteVJPT0JFc0FiUXlmd1V0ZXBaaktxODUwc3Rp
cWZzUTlqeHpieU9SeHBKYXNGN29sMXluaUJhYmd4dkFIQwp6TWNsQjVLclEyZFVCeTNRVFl0YXla
cW9sUU56NzV2bWk0M0lBQTQwbjU3aFdqU2QrTG5IL0hNQWRzbW04SnVwCjA5d3VHMGltdEhQYm5B
TERnY2N3V04zY2xxU0FtR3pxbW9kOEE1YjBWZTQwZHhjTXh3TldaN0JqOTBnamNWYnQKVU1aaFhp
T2E2bnNSSHZDQjF0b0lmZVBuOEdzRk9nOUVqUHZDTWM4QWZKVW84Qk5TK2N0RmF3RjRWaUUyUHlB
VgpxZzR1MHhBQ2Qya28zUUtpZFpsbjAyZkc2ZWg1UmxGYzdsL1RiWWxQY2xFQ2dZRUEyU0NvK2pJ
MUZ1WkZjSFhtCm1Ra2Z3Q0Vvc28xd3VTV3hRWDBCMnJmUmJVbS9xdXV4Rm92MUdVc0hwNEhmSHJu
RWRuSklqVUw4bWxpSjFFUTkKVUpxZC9SVkg1OTdZZ2Zib1dCbHh0cVhIRFRObjNIU3JzQmJlQTh6
NXFMZjE2QzZaa3U3YmR3L2pxazJVaFgrcwp6T3piYTVqYVVYU0pXYXpzRmZoZjdSMlZ3UTBDZ1lF
QXpGdDZBTDNYendBSi9EcXU5QlVuaTIxemZXSUp5QXFwCnBwRnNnQUczVkVYZWFRMjVGcjV6cURn
dlBWRkF5QUFYMU9TL3pHZVcxaDQ0SERzQjRrVmdxVlhHSTdoQUV1RjYKRlgra1M5Uk5QdmFsYXdQ
cXp4VTdPQmcvUis5NVB1NW1oNnFrRWVUekM2T21ZUGRGbmVQcWxKZk03YU43OEhjOApGVU1xRTBa
NkNVMENnWUJJUUVYNmU1cU85REZIS3ZTQkdEZ29odUEwQ2p6b1gxS01xRHhsdTZWRTZMV08rcjhD
CjhhK3Rxdm54RTVaYmN4V2RGSXB2OTBwM1VkOExjMm16MkwrWjUrcjFqWUllUFRzemxjUHhNMWo1
VzVIRUdrN0gKV2RTbkR4NUV0bkp0d0pQNkFPR216UExGU091VFFOa1BtQUdyM0VGSnVhMjYyWC8y
RDZCY0Z1d3VRUUtCZ0V0aApHcm1YVFRsdnZEOHJya2tlWEgzVG01d09RNmxrTlh2WmZIb2pKK3FQ
OHlBeERhclVDWGx0Y0E5Z0gxTW1wYVBECjFQT2k2a0tFMXhHaXVta3FTaU5zSGpBaTBJK21XQkFD
Q3lwbFh6RHdiY2Z4by9WSzBaTTVibTRzYVQ3TFZVcUoKcVFkb3VqWDY0VzQzQjVqYjd6VnNZUXp2
RnRKMlNOVlc5dmd4TU9hcEFvR0FDaHphN3BMWWhueFA5QWVUL1pxZwpvUWM0ckh1SDZhTDMveCtq
dlpRVXdiSitKOWZGY05pcTFZRlJBL2RJSEJWcGZTMWJWR3N3OW9MT2tsTWxDckxRCnJKSjkzWlRu
dFdSQ1FCOTNMbXhoZmJuRk9CemZtbHZYTjFJeE1FNStVbVZRQmRLaS9YMktMZnFSUW5HVTExL2UK
NytFcXFFbllrWTBraE1HL0xqRlRTU0E9Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-bcd-ingress
spec:
  tls:
  - hosts:
    - some.thing.net
    secretName: bcd-kibana-testsecret-tls
  rules:
  - host: some.thing.net
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: quickstart-kb-http
            port:
              number: 5601
(P.S. To get the tls.crt and tls.key values I ran the following in the GCP CLI:
kubectl create secret tls bcdsecret --key="tls.key" --cert="tls.crt"
and then:
cat tls.crt | base64 and cat tls.key | base64)
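As an editorial aside (assumptions, not from the original question): kubectl create secret tls already creates the Secret object in the cluster, so hand-crafting a Secret manifest with base64 values is optional; and when values are pasted into a manifest, each data field is expected as a single unwrapped base64 line, which GNU base64 produces with -w 0. A minimal sketch, reusing the name the Ingress references:
kubectl create secret tls bcd-kibana-testsecret-tls --cert=tls.crt --key=tls.key
kubectl get secret bcd-kibana-testsecret-tls -o yaml   # shows the generated data fields
base64 -w 0 tls.crt                                    # unwrapped value for a hand-written manifest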
But the Ingress did not work.
When I ran kubectl get ingress I got:
$ kubectl get ingress
W0615 21:28:17.715766 604 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
NAME CLASS HOSTS ADDRESS PORTS AGE
...
tls-bcd-ingressbcd some.thing.net 34.120.185.85 80, 443 35m
but when I opened this hostname in the browser I got nothing.
What am I supposed to do?
Thank you!
Or, in other words:
how can I publish Kibana on a hostname?
Thanks!
Frida
Related
I have two Deployments: one creates 4 replicas of php-fpm, and the other is an nginx webserver exposed to the Internet through an Ingress.
The problem is that I can't connect to the app Service from the webserver pod (same issue when trying to connect to other Services).
Ping result:
$ ping -c4 app.ternobo-connect
PING app.ternobo-connect (10.245.240.225): 56 data bytes
--- app.ternobo-connect ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
but the pods are individually reachable via their cluster IPs.
app-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    ternobo.kubernates.service: app
    ternobo.kubernates.network/app-network: "true"
  name: app
  namespace: ternobo-connect
spec:
  replicas: 4
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 50%
  selector:
    matchLabels:
      ternobo.kubernates.service: app
  template:
    metadata:
      labels:
        ternobo.kubernates.network/app-network: "true"
        ternobo.kubernates.service: app
    spec:
      containers:
        - env:
            - name: SERVICE_NAME
              value: app
            - name: SERVICE_TAGS
              value: production
          image: ghcr.io/ternobo/ternobo-connect:0.1.01
          name: app
          ports:
            - containerPort: 9000
          resources: {}
          tty: true
          workingDir: /var/www
          envFrom:
            - configMapRef:
                name: appenvconfig
      imagePullSecrets:
        - name: regsecret
      restartPolicy: Always
status: {}
app-service.yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    ternobo.kubernates.network/app-network: "true"
  name: app
  namespace: ternobo-connect
spec:
  type: ClusterIP
  ports:
    - name: "9000"
      port: 9000
      targetPort: 9000
  selector:
    ternobo.kubernates.service: app
status:
  loadBalancer: {}
network-policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app-network
  namespace: ternobo-connect
spec:
  podSelector: {}
  ingress:
    - {}
  policyTypes:
    - Ingress
I also tried removing the NetworkPolicy, but that didn't work, and I tried changing the podSelector rules to select only pods with the ternobo.kubernates.network/app-network: "true" label.
Kubernetes Service URLs follow the my-svc.my-namespace.svc.cluster-domain.example format; see: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#a-aaaa-records
So the ping should be
ping -c4 app.ternobo-connect.svc.cluster.local
If the webserver is in the same namespace as the service you can ping the service name directly
ping -c4 app
I don't know the impact of network policy, I haven't worked with it.
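A hedged aside (not part of the original answer): a ClusterIP Service usually does not answer ICMP at all, because kube-proxy only forwards the ports declared on the Service, so ping can fail even when the Service works. A TCP-level check against the declared port is more telling, assuming a shell in the webserver pod and a netcat binary in the image:
nslookup app.ternobo-connect          # confirm the Service name resolves
nc -vz app.ternobo-connect 9000       # TCP connect test against the declared port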
I'm deploying an app on Kubernetes, but I cannot connect to Oracle, which runs on an external machine.
I tried to connect directly via the DB IP and via a Service/Endpoints, but it doesn't work.
Please help me solve this.
Here is the database info:
Database IP: 192.168.1.25
Port: 1521
Here is service.yaml
apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  ports:
    - port: 1521
      targetPort: 1521
      protocol: TCP
---
kind: Endpoints
apiVersion: v1
metadata:
  name: mydb
subsets:
  - addresses:
      - ip: 192.168.1.25
    ports:
      - port: 1521
And the connection string is:
"User ID=test;Password=pwd;Data Source=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=mydb)(PORT=1521)))(CONNECT_DATA=(SID=ORCLCDB)));";
Here is app_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: apptest
spec:
  selector:
    matchLabels:
      run: web
  replicas: 1
  template:
    metadata:
      labels:
        run: web
    spec:
      containers:
        - name: app-web
          image: app-web
          imagePullPolicy: IfNotPresent
          env:
            - name: "ASPNETCORE_ENVIRONMENT"
              value: "Staging"
          volumeMounts:
            - name: app-web-log
              mountPath: /app/Log
      volumes:
        - name: app-web-log
          hostPath:
            path: /log
            type: DirectoryOrCreate
---
apiVersion: v1
kind: Service
metadata:
  name: web-svc
  namespace: apptest
  labels:
    run: web
spec:
  type: NodePort
  ports:
    - name: web
      protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30200
  selector:
    run: web
Here is the output when I run kubectl logs core-dns-66bff467f8-htn5w -n kube-system:
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b
[ERROR] plugin/errors: 2 4425317009050045698.2183862687326411378. HINFO: read udp 10.244.0.8:37732->8.8.8.8:53: read: no route to host
[ERROR] plugin/errors: 2 4425317009050045698.2183862687326411378. HINFO: read udp 10.244.0.8:41292->8.8.8.8:53: read: no route to host
[ERROR] plugin/errors: 2 4425317009050045698.2183862687326411378. HINFO: read udp 10.244.0.8:36947->8.8.8.8:53: read: no route to host
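A hedged reading of these logs (not from the original post): the "no route to host" errors concern CoreDNS forwarding to the upstream resolver 8.8.8.8, so they would break external name resolution, while cluster-internal names such as mydb.default are answered by CoreDNS itself. Both cases can be probed from a throwaway pod (image and pod name are illustrative):
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup mydb.default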
Scenario:
I have 2 Deployments: deployment-1 with the label version: v1 and deployment-2 with the label version: v2, both behind a NodePort Service, test-1. I have created a VirtualService with two match conditions, as follows:
- match:
  - uri:
      exact: /v1
  rewrite:
    uri: /
  route:
  - destination:
      host: test-1
      port:
        number: 80
      subset: v1
- match:
  - uri:
      exact: /v2
  rewrite:
    uri: /
  route:
  - destination:
      host: test-1
      port:
        number: 80
      subset: v2
The code file can be found here
Problem:
When I visit the Ingress Gateway IP at http://ingress-gateway-ip/v1, I get a 404 error in the console saying http://ingress-gateway-ip/favicon.ico was not found (because the path has been rewritten to "/"), and the styles and JS are also absent at this route. But when I visit
http://ingress-gateway-ip/v1/favicon.ico directly, I can see the favicon along with all the JS and styling.
Please find the screenshots of the problem here.
Expectation:
How can I access these two services using prefix routing in the URL, so that when I navigate to /v1 only the v1 version comes up (without 404s), and when I navigate to /v2 only the v2 version comes up?
EDIT-1:
Added code snippet from the original code
Added code file link
EDIT-2:
Added screenshot of the problem
Modified problem statement for clear understanding
How can I access these two services using a prefix routing in the url, meaning when I navigate to /v1, only the V1 version should come up without 404, and when I navigate to /v2, only the V2 version should come up
I assume your issue is in your DestinationRule: in the v2 subset the label is version: v1 when it should be version: v2, which is why requests to both /v1 and /v2 went only to the v1 version of your pods.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: test-destinationrule
spec:
  host: test-1
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v1   <---
It should be
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: test-destinationrule
spec:
  host: test-1
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
When I try to visit this Ingress Gateway IP, I encounter a 404 error in the console saying http://ingress-gateway-ip/favicon.ico
It's working as designed: you haven't specified a route for /, only for /v1 and /v2.
If you want to be able to access /, you have to add another match for it:
- match:
  - uri:
      prefix: /
  route:
  - destination:
      host: test-1
Here is a working example with 2 nginx pods; take a look.
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-v1
spec:
selector:
matchLabels:
version: v1
replicas: 1
template:
metadata:
labels:
app: frontend
version: v1
spec:
containers:
- name: nginx1
image: nginx
ports:
- containerPort: 80
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "echo Hello nginx1 > /usr/share/nginx/html/index.html"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-v2
spec:
selector:
matchLabels:
version: v2
replicas: 1
template:
metadata:
labels:
app: frontend
version: v2
spec:
containers:
- name: nginx2
image: nginx
ports:
- containerPort: 80
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "echo Hello nginx2 > /usr/share/nginx/html/index.html"]
---
apiVersion: v1
kind: Service
metadata:
name: test-1
labels:
app: frontend
spec:
ports:
- name: http-front
port: 80
protocol: TCP
selector:
app: frontend
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: simpleexample
spec:
selector:
istio: ingressgateway
servers:
- hosts:
- '*'
port:
name: http
number: 80
protocol: HTTP
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: test-virtualservice
spec:
gateways:
- simpleexample
hosts:
- '*'
http:
- match:
- uri:
prefix: /v1
rewrite:
uri: /
route:
- destination:
host: test-1
port:
number: 80
subset: v1
- match:
- uri:
prefix: /v2
rewrite:
uri: /
route:
- destination:
host: test-1
port:
number: 80
subset: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: test-destinationrule
spec:
host: test-1
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
Results from curl:
curl -v ingress-gateway-ip/
404 Not Found
(no route is specified for / in the VirtualService)
curl -v ingress-gateway-ip/v1
HTTP/1.1 200 OK
Hello nginx1
curl -v ingress-gateway-ip/v2
HTTP/1.1 200 OK
Hello nginx2
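For reference (an assumption, not in the original answer), the ingress-gateway-ip used above is typically the external address of the istio-ingressgateway Service, assuming a LoadBalancer installation:
kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}'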
EDIT
The problem is that the styles and JS are not loadable by the browser at "/" when the paths are rewritten.
That was already explained by #Rinor here.
I would add this Istio in practice tutorial here; it explains a good way of dealing with that problem, which is to add more match paths for your dependencies (JS, CSS, etc.):
Let’s break down the requests that should be routed to Frontend:
Exact path / should be routed to Frontend to get the Index.html
Prefix path /static/* should be routed to Frontend to get any static files needed by the frontend, like Cascading Style Sheets and JavaScript files.
Paths matching the regex ^.*\.(ico|png|jpg)$ should be routed to Frontend, as these are images that the page needs to show.
http:
- match:
  - uri:
      exact: /
  - uri:
      exact: /callback
  - uri:
      prefix: /static
  - uri:
      regex: '^.*\.(ico|png|jpg)$'
  route:
  - destination:
      host: frontend
      port:
        number: 80
Let me know if you have any more questions.
I've written a node exporter in Go named "my-node-exporter" with some collectors to expose metrics. From my cluster, I can view my metrics just fine with the following:
kubectl port-forward my-node-exporter-999b5fd99-bvc2c 9090:8080 -n kube-system
localhost:9090/metrics
However, when I try to view my metrics in the Prometheus dashboard
kubectl port-forward prometheus-prometheus-operator-158978-prometheus-0 9090
localhost:9090/graph
my metrics are nowhere to be found and I can only see the default metrics. Am I missing a step for getting my metrics on the graph?
Here are the pods in my default namespace, which holds my Prometheus stack:
pod/alertmanager-prometheus-operator-158978-alertmanager-0 2/2 Running 0 85d
pod/grafana-1589787858-fd7b847f9-sxxpr 1/1 Running 0 85d
pod/prometheus-operator-158978-operator-75f4d57f5b-btwk9 2/2 Running 0 85d
pod/prometheus-operator-1589787700-grafana-5fb7fd9d8d-2kptx 2/2 Running 0 85d
pod/prometheus-operator-1589787700-kube-state-metrics-765d4b7bvtdhj 1/1 Running 0 85d
pod/prometheus-operator-1589787700-prometheus-node-exporter-bwljh 1/1 Running 0 85d
pod/prometheus-operator-1589787700-prometheus-node-exporter-nb4fv 1/1 Running 0 85d
pod/prometheus-operator-1589787700-prometheus-node-exporter-rmw2f 1/1 Running 0 85d
pod/prometheus-prometheus-operator-158978-prometheus-0 3/3 Running 1 85d
I used Helm to install the Prometheus operator.
EDIT: adding my YAML files
# Configuration to deploy
#
# example usage: kubectl create -f <this_file>
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: my-node-exporter-sa
namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: my-node-exporter-binding
subjects:
- kind: ServiceAccount
name: my-node-exporter-sa
namespace: kube-system
roleRef:
kind: ClusterRole
name: my-node-exporter-role
apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: my-node-exporter-role
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
---
#####################################################
############ Service ############
#####################################################
kind: Service
apiVersion: v1
metadata:
name: my-node-exporter-svc
namespace: kube-system
labels:
app: my-node-exporter
spec:
ports:
- name: my-node-exporter
port: 8080
targetPort: metrics
protocol: TCP
selector:
app: my-node-exporter
---
#########################################################
############ Deployment ############
#########################################################
kind: Deployment
apiVersion: apps/v1
metadata:
name: my-node-exporter
namespace: kube-system
spec:
selector:
matchLabels:
app: my-node-exporter
replicas: 1
template:
metadata:
labels:
app: my-node-exporter
spec:
serviceAccount: my-node-exporter-sa
containers:
- name: my-node-exporter
image: locationofmyimagehere
args:
- "--telemetry.addr=8080"
- "--telemetry.path=/metrics"
imagePullPolicy: Always
ports:
- containerPort: 8080
volumeMounts:
- name: log-dir
mountPath: /var/log
volumes:
- name: log-dir
hostPath:
path: /var/log
ServiceMonitor YAML:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-node-exporter-service-monitor
  labels:
    app: my-node-exporter-service-monitor
spec:
  selector:
    matchLabels:
      app: my-node-exporter
    matchExpressions:
    - {key: app, operator: Exists}
  endpoints:
  - port: my-node-exporter
  namespaceSelector:
    matchNames:
    - default
    - kube-system
Prometheus YAML:
# Prometheus will use selected ServiceMonitor
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: my-node-exporter
  labels:
    team: frontend
spec:
  serviceMonitorSelector:
    matchLabels:
      app: my-node-exporter
    matchExpressions:
    - key: app
      operator: Exists
You need to explicitly tell Prometheus what metrics to collect, and where from, by first creating a Service that points to your my-node-exporter pods (if you haven't already) and then a ServiceMonitor, as described in the Prometheus Operator docs; search for the phrase "This Service object is discovered by a ServiceMonitor".
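A hedged addition (not from the original answer): with a Helm-installed Prometheus Operator, the Prometheus instance created by the chart frequently selects only ServiceMonitors that carry the Helm release label, so it is worth checking which selector the running Prometheus actually uses and labelling the ServiceMonitor to match. The release name below is illustrative:
kubectl get prometheus -A -o jsonpath='{.items[*].spec.serviceMonitorSelector}'
# if that prints e.g. {"matchLabels":{"release":"prometheus-operator-1589787700"}}, add the label:
kubectl label servicemonitor my-node-exporter-service-monitor release=prometheus-operator-1589787700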
Getting Deployment/Service/ServiceMonitor/PrometheusRule working with the Prometheus Operator takes great care.
So I created a Helm chart repo, kehao95/helm-prometheus-exporter, to install any Prometheus exporter, including your custom exporter; you can try it out.
It will create not only the exporter Deployment but also the Service/ServiceMonitor/PrometheusRule for you.
Install the chart:
helm repo add kehao95 https://kehao95.github.io/helm-prometheus-exporter/
Create a values file my-exporter.yaml for kehao95/prometheus-exporter:
exporter:
  image: your-exporter
  tag: latest
  port: 8080
  args:
  - "--telemetry.addr=8080"
  - "--telemetry.path=/metrics"
Install it with Helm:
helm install --namespace yourns my-exporter kehao95/prometheus-exporter -f my-exporter.yaml
Then you should see your metrics in Prometheus.
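To verify (an assumption, not part of the original answer), the exporter endpoint should appear on the Prometheus targets page; prometheus-operated is the governing Service the operator creates in front of the Prometheus pods:
kubectl port-forward svc/prometheus-operated 9090
# then open http://localhost:9090/targets and look for the my-node-exporter endpoint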
I have a running Elasticsearch StatefulSet with a headless Service assigned to it:
svc.yaml:
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: elasticsearch-namespace
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
    - port: 9200
      name: rest
    - port: 9300
      name: inter-node
stateful.yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: es-cluster
namespace: elasticsearch-namespace
spec:
serviceName: elasticsearch
replicas: 3
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
resources:
limits:
cpu: 1000m
requests:
cpu: 100m
ports:
- containerPort: 9200
name: rest
protocol: TCP
- containerPort: 9300
name: inter-node
protocol: TCP
volumeMounts:
- name: elasticsearch-persistent-storage
mountPath: /usr/share/elasticsearch/data
env:
- name: cluster.name
value: k8s-logs
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: discovery.seed_hosts
value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
- name: cluster.initial_master_nodes
value: "es-cluster-0,es-cluster-1,es-cluster-2"
- name: ES_JAVA_OPTS
value: "-Xms512m -Xmx512m"
initContainers:
- name: fix-permissions
image: busybox
command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
securityContext:
privileged: true
volumeMounts:
- name: elasticsearch-persistent-storage
mountPath: /usr/share/elasticsearch/data
- name: increase-vm-max-map
image: busybox
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
- name: increase-fd-ulimit
image: busybox
command: ["sh", "-c", "ulimit -n 65536"]
securityContext:
privileged: true
volumeClaimTemplates:
- metadata:
name: elasticsearch-persistent-storage
labels:
app: elasticsearch
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: elasticsearch-storageclass
resources:
requests:
storage: 20Gi
The question is: how can I access this StatefulSet from pods of a Deployment? Let's say, from this Redis pod:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-ws-app
  labels:
    app: redis-ws-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-ws-app
  template:
    metadata:
      labels:
        app: redis-ws-app
    spec:
      containers:
        - name: redis-ws-app
          image: redis:latest
          command: ["redis-server"]
          ports:
            - containerPort: 6379
I have been trying to create another Service that would enable me to access it from outside, but without any luck:
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch-tcp
  namespace: elasticsearch-namespace
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  ports:
    - protocol: TCP
      port: 9200
      targetPort: 9200
You can reach it directly by hitting the headless Service. As an example, take this StatefulSet and this Service:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 4
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
              name: web
---
kind: Service
apiVersion: v1
metadata:
  name: nginx-headless
spec:
  selector:
    app: nginx
  clusterIP: None
  ports:
    - port: 80
      name: http
I could reach the pods of the StatefulSet through the headless Service from any pod within the cluster:
/ # curl -I nginx-headless
HTTP/1.1 200 OK
Server: nginx/1.19.0
Date: Tue, 09 Jun 2020 12:36:47 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 26 May 2020 15:00:20 GMT
Connection: keep-alive
ETag: "5ecd2f04-264"
Accept-Ranges: bytes
The particularity of a headless Service is that it doesn't create iptables rules for the Service. So when you query it, the request goes to kube-dns (or CoreDNS), which returns the backends rather than the IP address of the Service itself. If you do an nslookup, for example, it will return all the backends (pods) of that Service:
/ # nslookup nginx-headless
Name: nginx-headless
Address 1: 10.56.1.44
Address 2: 10.56.1.45
Address 3: 10.56.1.46
Address 4: 10.56.1.47
And it won't have any iptables rules assigned to it:
$ sudo iptables-save | grep -i nginx-headless
$
Unlike a normal Service, which returns the IP address of the Service itself:
/ # nslookup nginx
Name: nginx
Address 1: 10.60.15.30 nginx.default.svc.cluster.local
And it will have iptables rules assigned to it:
$ sudo iptables-save | grep -i nginx
-A KUBE-SERVICES ! -s 10.56.0.0/14 -d 10.60.15.30/32 -p tcp -m comment --comment "default/nginx: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.60.15.30/32 -p tcp -m comment --comment "default/nginx: cluster IP" -m tcp --dport 80 -j KUBE-SVC-4N57TFCL4MD7ZTDA
User #suren was right about the headless Service. In my case, I was just using the wrong reference.
The Kube-DNS naming convention is
service.namespace.svc.cluster-domain.tld
and the default cluster domain is cluster.local
In my case, in order to reach the pods, one has to use:
curl -I elasticsearch.elasticsearch-namespace
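A hedged addition: with a headless Service in front of a StatefulSet, each pod also gets a stable DNS record of the form pod-name.service-name.namespace.svc.cluster.local, which is exactly what the discovery.seed_hosts entries such as es-cluster-0.elasticsearch rely on, so a specific node can be addressed directly:
curl -I es-cluster-0.elasticsearch.elasticsearch-namespace:9200
curl -I elasticsearch.elasticsearch-namespace:9200   # DNS returns all backing pods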