Shell script to write a ConfigMap - bash

I'm looking for some help/guidance on writing a shell script to generate a ConfigMap to use in my cluster. I want the script to loop through a map or dictionary of values to fill out the ConfigMap.
Example:
apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: prometheus-adapter
    meta.helm.sh/release-namespace: prometheus
  labels:
    app.kubernetes.io/component: metrics
    app.kubernetes.io/instance: prometheus-adapter
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: prometheus-adapter
    app.kubernetes.io/part-of: prometheus-adapter
    app.kubernetes.io/version: v0.10.0
    helm.sh/chart: prometheus-adapter-3.4.1
  name: prometheus-adapter
  namespace: prometheus
data:
  config.yaml: |
    rules:
    - seriesQuery: 'haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="pod1"}'
      metricsQuery: 'sum(rate(haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="pod1"}[2m])) by (<<.GroupBy>>)'
      resources:
        overrides:
          namespace: {resource: "namespace"}
      name:
        matches: "^(.*)_total$"
        # The metric we want the individual pod to match
        as: "dfs_requests_per_second"
I want the script to add everything from - seriesQuery downward, replacing values such as proxy="value" and the [2m] metric interval wherever that variable appears.
For each key/value pair, it would add the same block with the correct indentation. Something like this:
pod names: [pod1, pod2]
interval: [2m, 3m]
- seriesQuery: 'haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="pod1"}'
  metricsQuery: 'sum(rate(haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="pod1"}[2m])) by (<<.GroupBy>>)'
  resources:
    overrides:
      namespace: {resource: "namespace"}
  name:
    matches: "^(.*)_total$"
    # The metric we want the individual pod to match
    as: "dfs_requests_per_second"
- seriesQuery: 'haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="pod2"}'
  metricsQuery: 'sum(rate(haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="pod2"}[3m])) by (<<.GroupBy>>)'
  resources:
    overrides:
      namespace: {resource: "namespace"}
  name:
    matches: "^(.*)_total$"
    # The metric we want the individual pod to match
    as: "dfs_requests_per_second"
And it would append that to everything below:
apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: prometheus-adapter
    meta.helm.sh/release-namespace: prometheus
  labels:
    app.kubernetes.io/component: metrics
    app.kubernetes.io/instance: prometheus-adapter
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: prometheus-adapter
    app.kubernetes.io/part-of: prometheus-adapter
    app.kubernetes.io/version: v0.10.0
    helm.sh/chart: prometheus-adapter-3.4.1
  name: prometheus-adapter
  namespace: prometheus
data:
  config.yaml: |
    rules:

It would be preferable to use a template engine that is YAML-aware, but provided that your pod names and intervals don't contain any problematic characters, you can generate your config file easily with bash:
#!/bin/bash
pods=( pod1 pod2 )
intervals=( 2m 3m )

cat <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: prometheus-adapter
    meta.helm.sh/release-namespace: prometheus
  labels:
    app.kubernetes.io/component: metrics
    app.kubernetes.io/instance: prometheus-adapter
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: prometheus-adapter
    app.kubernetes.io/part-of: prometheus-adapter
    app.kubernetes.io/version: v0.10.0
    helm.sh/chart: prometheus-adapter-3.4.1
  name: prometheus-adapter
  namespace: prometheus
data:
  config.yaml: |
    rules:
EOF

# note: "${!pods[@]}" expands to the array indices (0, 1, ...)
for i in "${!pods[@]}"
do
  cat <<EOF
    - seriesQuery: 'haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="${pods[i]}"}'
      metricsQuery: 'sum(rate(haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="${pods[i]}"}[${intervals[i]}])) by (<<.GroupBy>>)'
      resources:
        overrides:
          namespace: {resource: "namespace"}
      name:
        matches: "^(.*)_total$"
        # The metric we want the individual pod to match
        as: "dfs_requests_per_second"
EOF
done
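As a usage sketch (the file names generate-configmap.sh and prometheus-adapter-cm.yaml are just placeholders, not part of the original answer), you can inspect the generated manifest first and then feed it to kubectl:
chmod +x generate-configmap.sh
./generate-configmap.sh > prometheus-adapter-cm.yaml   # review the generated ConfigMap
kubectl apply -f prometheus-adapter-cm.yaml            # or pipe directly: ./generate-configmap.sh | kubectl apply -f -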

Related

ELK: Implement anonymous authentication on Kubernetes Deployment

I have an ElasticSearch + Kibana cluster on Kubernetes. We want to bypass authentication so that users land directly on the dashboard without having to log in.
We have managed to implement Elastic anonymous access on our Elastic nodes. Unfortunately, that is not what we want: we want users to bypass the Kibana login, so what we need is Kibana anonymous authentication.
Unfortunately, we can't figure out how to implement it. We declare Kubernetes objects with YAML (Deployment, Services, etc.) without using a ConfigMap. In order to add Elastic/Kibana config, we pass it through environment variables.
For example, here is how we define the es01 Kubernetes Deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "37"
    kompose.cmd: kompose convert
    kompose.version: 1.26.1 (a9d05d509)
    objectset.rio.cattle.io/applied: H4sIAAAAAAAA/7RVTW/jNhD9KwueWkCmJfl
    objectset.rio.cattle.io/id: 3dc011c2-d20c-465b-a143-2f27f4dc464f
  creationTimestamp: "2022-05-24T15:17:53Z"
  generation: 37
  labels:
    io.kompose.service: es01
    objectset.rio.cattle.io/hash: 83a41b68cabf516665877d6d90c837e124ed2029
  name: es01
  namespace: waked-elk-pre-prod-test
  resourceVersion: "403573505"
  uid: e442cf0a-8100-4af1-a9bc-ebf65907398a
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      io.kompose.service: es01
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        cattle.io/timestamp: "2022-09-09T13:41:29Z"
        kompose.cmd: kompose convert
        kompose.version: 1.26.1 (a9d05d509)
      creationTimestamp: null
      labels:
        io.kompose.service: es01
    spec:
      affinity: {}
      containers:
      - env:
        - name: ELASTIC_PASSWORD
          valueFrom:
            secretKeyRef:
              key: ELASTIC_PASSWORD
              name: elastic-credentials
              optional: false
        - name: cluster.initial_master_nodes
          value: es01,es02,es03
And here is the one for the Kibana node:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "41"
    field.cattle.io/publicEndpoints: '[{"addresses":["10.130.10.6","10.130.10.7","10.130.10.8"],"port":80,"protocol":"HTTP","serviceName":"waked-elk-pre-prod-test:kibana","ingressName":"waked-elk-pre-prod-test:waked-kibana-ingress","hostname":"waked-kibana-pre-prod.cws.cines.fr","path":"/","allNodes":false}]'
    kompose.cmd: kompose convert
    kompose.version: 1.26.1 (a9d05d509)
    objectset.rio.cattle.io/applied: H4sIAAAAAAAA/7ST34/.........iNhDH/5WTn1o
    objectset.rio.cattle.io/id: 5b109127-cb95-4c93-857d-12399979d85a
  creationTimestamp: "2022-05-19T08:37:59Z"
  generation: 49
  labels:
    io.kompose.service: kibana
    objectset.rio.cattle.io/hash: 0d2e2477ef3e7ee3c8f84b485cc594a1e59aea1d
  name: kibana
  namespace: waked-elk-pre-prod-test
  resourceVersion: "403620874"
  uid: 6f22f8b1-81da-49c0-90bf-9e773fbc051b
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      io.kompose.service: kibana
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        cattle.io/timestamp: "2022-09-21T13:00:47Z"
        kompose.cmd: kompose convert
        kompose.version: 1.26.1 (a9d05d509)
        kubectl.kubernetes.io/restartedAt: "2022-11-08T14:04:53+01:00"
      creationTimestamp: null
      labels:
        io.kompose.service: kibana
    spec:
      affinity: {}
      containers:
      - env:
        - name: xpack.security.authc.providers.anonymous.anonymous1.order
          value: "0"
        - name: xpack.security.authc.providers.anonymous.anonymous1.credentials.username
          value: username
        - name: xgrzegrgepack.security.authc.providers.anonymous.anonymous1.credentials.password
          value: password
        image: docker.elastic.co/kibana/kibana:8.2.0
        imagePullPolicy: IfNotPresent
        name: kibana
        ports:
        - containerPort: 5601
          name: 5601tcp
          protocol: TCP
        resources:
          limits:
            memory: "1073741824"
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /usr/share/kibana/config/certs
          name: certs
        - mountPath: /usr/share/kibana/data
          name: kibanadata
      dnsPolicy: ClusterFirst
      nodeName: k8-worker-cpu-3.cines.fr
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: certs
        persistentVolumeClaim:
          claimName: certs
      - name: kibanadata
        persistentVolumeClaim:
          claimName: kibanadata
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2022-11-08T13:45:44Z"
    lastUpdateTime: "2022-11-08T13:45:44Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2022-06-07T14:12:17Z"
    lastUpdateTime: "2022-11-08T13:45:44Z"
    message: ReplicaSet "kibana-84b65ffb69" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 49
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
We don't face any problem when modifying/applying the YAML, and the pods run flawlessly. But it just doesn't work: if we try to access Kibana, we land on the login page.
Both files are a bit cropped. Feel free to ask for the full files if needed.
Have a good night!

yq replace value keep rest intact

I just want to replace a value in a multi-document .yaml file with yq, leaving the rest of the document intact. So the question is: how do I replace the value of the image key only when the sibling name key contains node-red? I have tried:
yq '.spec.template.spec.containers[] | select(.name = "node-red") | .image |= "test"' ./test.yaml
with this yaml:
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: node-red
spec:
  ingressClassName: nginx-private
  rules:
  - host: node-red.k8s.lan
    http:
      paths:
      - path: "/"
        pathType: Prefix
        backend:
          service:
            name: node-red
            port:
              number: 1880
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: node-red
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-red
  template:
    metadata:
      labels:
        app: node-red
    spec:
      containers:
      - name: node-red
        image: "cir.domain.com/node-red:84"
the output is:
name: node-red
image: "test"
What I want is the full YAML output with only the value of the image key updated, like this:
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: node-red
spec:
  ingressClassName: nginx-private
  rules:
  - host: node-red.k8s.lan
    http:
      paths:
      - path: "/"
        pathType: Prefix
        backend:
          service:
            name: node-red
            port:
              number: 1880
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: node-red
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-red
  template:
    metadata:
      labels:
        app: node-red
    spec:
      containers:
      - name: node-red
        image: "test"
Keep the context by including the traversal and selection in the LHS of the assignment/update, i.e. by putting parentheses around all of it:
yq '(.spec.template.spec.containers[] | select(.name == "node-red") | .image) |= "test"'
In order to prevent the creation of elements previously not present in one of the documents, add filters using select accordingly, e.g.:
yq '(.spec | select(.template).template.spec.containers[] | select(.name == "node-red").image) |= "test"'
Note that in any case it should read == not = when checking for equality.
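If you want to update the file in place rather than print to stdout, and assuming mikefarah/yq v4, the same guarded expression can be combined with the -i flag (a sketch, using the file name from the question):
yq -i '(.spec | select(.template).template.spec.containers[] | select(.name == "node-red").image) |= "test"' ./test.yaml
yq v4 processes every document in a multi-document file by default, so the Ingress and the Deployment both pass through unchanged except for the selected image field.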

Modify YAML to include label under each entry in metadata using yq

I have a YAML file like the one below. It needs to be patched with a new label under each ConfigMap.
apiVersion: v1
kind: ConfigMapList
items:
- apiVersion: v1
  data:
    test: 1
  kind: ConfigMap
  metadata:
    name: grafana-dashboard-apiserver1
    namespace: monitoring
- apiVersion: v1
  data:
    test: 2
  kind: ConfigMap
  metadata:
    name: grafana-dashboard-apiserver2
    namespace: monitoring
- apiVersion: v1
  data:
    test: 3
  kind: ConfigMap
  metadata:
    name: grafana-dashboard-apiserver3
    namespace: monitoring
I want to insert labels under every ConfigMap in this ConfigMapList, as below:
apiVersion: v1
kind: ConfigMapList
items:
- apiVersion: v1
  data:
    test: 1
  kind: ConfigMap
  metadata:
    name: grafana-dashboard-apiserver1
    namespace: monitoring
    labels:
      grafana_dashboard: "1"
- apiVersion: v1
  data:
    test: 2
  kind: ConfigMap
  metadata:
    name: grafana-dashboard-apiserver2
    namespace: monitoring
    labels:
      grafana_dashboard: "1"
- apiVersion: v1
  data:
    test: 3
  kind: ConfigMap
  metadata:
    name: grafana-dashboard-apiserver3
    namespace: monitoring
    labels:
      grafana_dashboard: "1"
I am not sure how to do this in bash. Please help me do this in bash. Does yq help? Or Kustomize?
If you are using mikefarah/yq, you can use the following path expression to write to the metadata field of every entry under the items array:
yq w yaml 'items[*].metadata.labels.grafana_dashboard' --tag '!!str' 1
If the YAML output produced is as expected, you can write in place with the -i flag, i.e. yq w -i '..'.
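Note that yq w is the legacy v3 syntax. If you are on mikefarah/yq v4, a rough equivalent (a sketch only; file.yaml stands in for your actual file name) would be:
yq -i '.items[].metadata.labels.grafana_dashboard = "1"' file.yaml
Quoting the "1" keeps the label value a string, which is what Kubernetes labels require.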

yq: how to read a sibling element value?

I am exploring yq to modify my YAML: I want to add a new element under spec of the ImageStream whose name == openshift45.
apiVersion: v1
items:
- apiVersion: image.openshift.io/v1
  kind: ImageStream
  metadata:
    annotations:
      openshift.io/generated-by: OpenShiftNewApp
    creationTimestamp: null
    labels:
      app: openshift45
      app.kubernetes.io/component: openshift45
      app.kubernetes.io/instance: openshift45
    name: nodejs-10
  spec:
    lookupPolicy:
      local: false
    tags:
    - annotations:
        openshift.io/imported-from: registry.access.redhat.com/ubi8/nodejs-10
      from:
        kind: DockerImage
        name: registry.access.redhat.com/ubi8/nodejs-10
      generation: null
      importPolicy: {}
      name: latest
      referencePolicy:
        type: ""
  status:
    dockerImageRepository: ""
- apiVersion: image.openshift.io/v1
  kind: ImageStream
  metadata:
    annotations:
      openshift.io/generated-by: OpenShiftNewApp
    creationTimestamp: null
    labels:
      app: openshift45
      app.kubernetes.io/component: openshift45
      app.kubernetes.io/instance: openshift45
    name: openshift45
  spec:
    lookupPolicy:
      local: false
  status:
    dockerImageRepository: ""
The command below returns the matching metadata element. Now I want to move up to the parent and then pick spec. Is this possible with yq (https://github.com/mikefarah/yq)?
yq r openshift45.yaml --printMode pv "items(kind==ImageStream).(name==openshift45)"
returns
items.[1].metadata:
  annotations:
    openshift.io/generated-by: OpenShiftNewApp
  creationTimestamp: null
  labels:
    app: openshift45
    app.kubernetes.io/component: openshift45
    app.kubernetes.io/instance: openshift45
  name: openshift45
Expected output:
apiVersion: v1
items:
- apiVersion: image.openshift.io/v1
  kind: ImageStream
  metadata:
    annotations:
      openshift.io/generated-by: OpenShiftNewApp
    creationTimestamp: null
    labels:
      app: openshift45
      app.kubernetes.io/component: openshift45
      app.kubernetes.io/instance: openshift45
    name: nodejs-10
  spec:
    lookupPolicy:
      local: false
    tags:
    - annotations:
        openshift.io/imported-from: registry.access.redhat.com/ubi8/nodejs-10
      from:
        kind: DockerImage
        name: registry.access.redhat.com/ubi8/nodejs-10
      generation: null
      importPolicy: {}
      name: latest
      referencePolicy:
        type: ""
  status:
    dockerImageRepository: ""
- apiVersion: image.openshift.io/v1
  kind: ImageStream
  metadata:
    annotations:
      openshift.io/generated-by: OpenShiftNewApp
    creationTimestamp: null
    labels:
      app: openshift45
      app.kubernetes.io/component: openshift45
      app.kubernetes.io/instance: openshift45
    name: openshift45
  spec:
    *dockerImageRepository: <$MYREGISTRY>/<$MYNAMESPACE>/<$MYPROJECT>*
    lookupPolicy:
      local: false
  status:
    dockerImageRepository: ""
The path expressions in mikefarah/yq are not documented well enough to show a real example of using multiple conditions to get to the desired object. For the YAML in question, using one unique condition, you could do something like the following (verified in yq version 3.3.2):
yq w openshift45.yaml 'items.(metadata.name == openshift45).spec.dockerImageRepository' '<$MYREGISTRY>/<$MYNAMESPACE>/<$MYPROJECT>'
You can use the -i flag along with write to modify the YAML in-place. See Updating files in-place
If this isn't desired and you need multiple conditional selection to get to the desired object, suggest raising a issue at the GitHub page - https://github.com/mikefarah/yq/issues requesting the right syntax for the same.
Currently it is not possible in yq. You will have to capture the output of the first command in a variable and pass that variable to the next one.
The Python yq is a wrapper around jq, but it doesn't have the functionality of custom functions. Custom functions make it possible to use any value from any level of the document in one place.
Thanks to Inian for pointing out that the OP is not using the wrapper yq version. Unfortunately, the version used also doesn't allow custom functions for now.
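For mikefarah/yq v4, where the expression syntax changed completely, a sketch of an equivalent in-place update (same file name and placeholder values as above, not verified against your exact setup) would be:
yq -i '(.items[] | select(.kind == "ImageStream" and .metadata.name == "openshift45").spec.dockerImageRepository) = "<$MYREGISTRY>/<$MYNAMESPACE>/<$MYPROJECT>"' openshift45.yaml
Here both conditions (kind and metadata.name) are combined in a single select, which covers the multiple-condition case directly.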

Prometheus metrics from custom exporter display in /metrics, but not in /graph (k8s)

I've written a node exporter in golang named "my-node-exporter" with some collectors to show metrics. From my cluster, I can view my metrics just fine with the following:
kubectl port-forward my-node-exporter-999b5fd99-bvc2c 9090:8080 -n kube-system
localhost:9090/metrics
However, when I try to view my metrics within the Prometheus dashboard
kubectl port-forward prometheus-prometheus-operator-158978-prometheus-0 9090
localhost:9090/graph
my metrics are nowhere to be found and I can only see default metrics. Am I missing a step for getting my metrics on the graph?
Here are the pods in my default namespace which has my prometheus stuff in it.
pod/alertmanager-prometheus-operator-158978-alertmanager-0 2/2 Running 0 85d
pod/grafana-1589787858-fd7b847f9-sxxpr 1/1 Running 0 85d
pod/prometheus-operator-158978-operator-75f4d57f5b-btwk9 2/2 Running 0 85d
pod/prometheus-operator-1589787700-grafana-5fb7fd9d8d-2kptx 2/2 Running 0 85d
pod/prometheus-operator-1589787700-kube-state-metrics-765d4b7bvtdhj 1/1 Running 0 85d
pod/prometheus-operator-1589787700-prometheus-node-exporter-bwljh 1/1 Running 0 85d
pod/prometheus-operator-1589787700-prometheus-node-exporter-nb4fv 1/1 Running 0 85d
pod/prometheus-operator-1589787700-prometheus-node-exporter-rmw2f 1/1 Running 0 85d
pod/prometheus-prometheus-operator-158978-prometheus-0 3/3 Running 1 85d
I used helm to install prometheus operator.
EDIT: adding my yaml file
# Configuration to deploy
#
# example usage: kubectl create -f <this_file>
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-node-exporter-sa
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-node-exporter-binding
subjects:
- kind: ServiceAccount
  name: my-node-exporter-sa
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: my-node-exporter-role
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-node-exporter-role
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
---
#####################################################
############        Service              ############
#####################################################
kind: Service
apiVersion: v1
metadata:
  name: my-node-exporter-svc
  namespace: kube-system
  labels:
    app: my-node-exporter
spec:
  ports:
  - name: my-node-exporter
    port: 8080
    targetPort: metrics
    protocol: TCP
  selector:
    app: my-node-exporter
---
#########################################################
############        Deployment               ############
#########################################################
kind: Deployment
apiVersion: apps/v1
metadata:
  name: my-node-exporter
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: my-node-exporter
  replicas: 1
  template:
    metadata:
      labels:
        app: my-node-exporter
    spec:
      serviceAccount: my-node-exporter-sa
      containers:
      - name: my-node-exporter
        image: locationofmyimagehere
        args:
        - "--telemetry.addr=8080"
        - "--telemetry.path=/metrics"
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: log-dir
          mountPath: /var/log
      volumes:
      - name: log-dir
        hostPath:
          path: /var/log
Service monitor yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-node-exporter-service-monitor
  labels:
    app: my-node-exporter-service-monitor
spec:
  selector:
    matchLabels:
      app: my-node-exporter
    matchExpressions:
    - {key: app, operator: Exists}
  endpoints:
  - port: my-node-exporter
  namespaceSelector:
    matchNames:
    - default
    - kube-system
Prometheus yaml
# Prometheus will use selected ServiceMonitor
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: my-node-exporter
  labels:
    team: frontend
spec:
  serviceMonitorSelector:
    matchLabels:
      app: my-node-exporter
    matchExpressions:
    - key: app
      operator: Exists
You need to explicitly tell Prometheus what metrics to collect - and where from - by firstly creating a Service that points to your my-node-exporter pods (if you haven't already), and then a ServiceMonitor, as described in the Prometheus Operator docs - search for the phrase "This Service object is discovered by a ServiceMonitor".
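As a quick way to check whether the ServiceMonitor is actually being picked up (a sketch, reusing the Prometheus pod name from the question), port-forward the Prometheus pod and look at its targets page:
kubectl port-forward prometheus-prometheus-operator-158978-prometheus-0 9090
# then browse to http://localhost:9090/targets - your exporter should appear as a scrape target once the Service and ServiceMonitor labels line up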
Getting Deployment/Service/ServiceMonitor/PrometheusRule working with the Prometheus Operator needs great caution.
So I created a Helm chart repo, kehao95/helm-prometheus-exporter, to install any Prometheus exporter, including your custom exporter; you can try it out.
It will create not only the exporter Deployment but also the Service/ServiceMonitor/PrometheusRule for you.
install the chart
helm repo add kehao95 https://kehao95.github.io/helm-prometheus-exporter/
create a values file my-exporter.yaml for kehao95/prometheus-exporter
exporter:
  image: your-exporter
  tag: latest
  port: 8080
  args:
  - "--telemetry.addr=8080"
  - "--telemetry.path=/metrics"
install it with helm
helm install --namespace yourns my-exporter kehao95/prometheus-exporter -f my-exporter.yaml
Then you should see your metrics in Prometheus.
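To confirm that the chart created the monitoring objects (assuming the yourns namespace used above and that the Prometheus Operator CRDs are installed), you can list them with kubectl:
kubectl get deployment,service,servicemonitor,prometheusrule -n yourns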
