I have a YAML file like the one below. It needs to be patched with a new label under each ConfigMap.
apiVersion: v1
kind: ConfigMapList
items:
- apiVersion: v1
data:
test: 1
kind: ConfigMap
metadata:
name: grafana-dashboard-apiserver1
namespace: monitoring
- apiVersion: v1
data:
test: 2
kind: ConfigMap
metadata:
name: grafana-dashboard-apiserver2
namespace: monitoring
- apiVersion: v1
data:
test: 3
kind: ConfigMap
metadata:
name: grafana-dashboard-apiserver3
namespace: monitoring
I want to insert labels under every ConfigMap in this ConfigMapList, as below:
apiVersion: v1
kind: ConfigMapList
items:
- apiVersion: v1
data:
test: 1
kind: ConfigMap
metadata:
name: grafana-dashboard-apiserver1
namespace: monitoring
labels:
grafana_dashboard: "1"
- apiVersion: v1
data:
test: 2
kind: ConfigMap
metadata:
name: grafana-dashboard-apiserver2
namespace: monitoring
labels:
grafana_dashboard: "1"
- apiVersion: v1
data:
test: 3
kind: ConfigMap
metadata:
name: grafana-dashboard-apiserver3
namespace: monitoring
labels:
grafana_dashboard: "1"
I am not sure how to do this in bash. Please help. Does yq help? Or Kustomize?
If you are using mikefarah/yq, you can use the following path expression to write to the metadata field of every entry under the items array:
yq w file.yaml 'items[*].metadata.labels.grafana_dashboard' --tag '!!str' 1
If the YAML output produced is as expected, you can write in place with the -i flag, i.e. yq w -i '..'
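Note that this is the yq v3 w (write) syntax. In yq v4 and later the w command is gone and updates are written as expressions instead; a minimal sketch, assuming the file is named file.yaml (quoting "1" in the expression is what keeps the !!str tag):
yq '.items[].metadata.labels.grafana_dashboard = "1"' file.yaml
As above, add -i once the output looks right. As for Kustomize: its commonLabels transformer would also add the label to every resource, but note it is applied to selectors as well, which may or may not be what you want.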
I'm looking for some help/guidance on writing a shell script to generate a ConfigMap for my cluster. I want the script to loop through a map or dictionary of values to fill out the ConfigMap.
Example:
apiVersion: v1
kind: ConfigMap
metadata:
annotations:
meta.helm.sh/release-name: prometheus-adapter
meta.helm.sh/release-namespace: prometheus
labels:
app.kubernetes.io/component: metrics
app.kubernetes.io/instance: prometheus-adapter
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/part-of: prometheus-adapter
app.kubernetes.io/version: v0.10.0
helm.sh/chart: prometheus-adapter-3.4.1
name: prometheus-adapter
namespace: prometheus
data:
config.yaml: |
rules:
- seriesQuery: 'haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="pod1"}'
metricsQuery: 'sum(rate(haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="pod1"}[2m])) by (<<.GroupBy>>)'
resources:
overrides:
namespace: {resource: "namespace"}
name:
matches: "^(.*)_total$"
# The metric we want the individual pod to match
as: "dfs_requests_per_second"
I want to write a shell script that adds everything from seriesQuery down, replacing values such as proxy="value" and the [2m] metric interval wherever those variables occur.
For each key/value pair, it would add the same block with the proper indentation. Something like this:
pod names: [pod1, pod2]
intervals: [2m, 3m]
- seriesQuery: 'haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="pod1"}'
metricsQuery: 'sum(rate(haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="pod1"}[2m])) by (<<.GroupBy>>)'
resources:
overrides:
namespace: {resource: "namespace"}
name:
matches: "^(.*)_total$"
# The metric we want the individual pod to match
as: "dfs_requests_per_second"
- seriesQuery: 'haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="pod2"}'
metricsQuery: 'sum(rate(haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="pod2"}[3m])) by (<<.GroupBy>>)'
resources:
overrides:
namespace: {resource: "namespace"}
name:
matches: "^(.*)_total$"
# The metric we want the individual pod to match
as: "dfs_requests_per_second"
And it would append those blocks to the end of the static part below:
apiVersion: v1
kind: ConfigMap
metadata:
annotations:
meta.helm.sh/release-name: prometheus-adapter
meta.helm.sh/release-namespace: prometheus
labels:
app.kubernetes.io/component: metrics
app.kubernetes.io/instance: prometheus-adapter
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/part-of: prometheus-adapter
app.kubernetes.io/version: v0.10.0
helm.sh/chart: prometheus-adapter-3.4.1
name: prometheus-adapter
namespace: prometheus
data:
config.yaml: |
rules:
It would be preferable to use a template engine that's YAML-aware, but provided that your pod names and intervals don't contain any problematic characters, you can generate your config file easily with bash:
#!/bin/bash
pods=( pod1 pod2 )
intervals=( 2m 3m )
cat <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
annotations:
meta.helm.sh/release-name: prometheus-adapter
meta.helm.sh/release-namespace: prometheus
labels:
app.kubernetes.io/component: metrics
app.kubernetes.io/instance: prometheus-adapter
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/part-of: prometheus-adapter
app.kubernetes.io/version: v0.10.0
helm.sh/chart: prometheus-adapter-3.4.1
name: prometheus-adapter
namespace: prometheus
data:
config.yaml: |
rules:
EOF
for i in "${!pods[@]}"
do
cat <<EOF
- seriesQuery: 'haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="${pods[i]}"}'
metricsQuery: 'sum(rate(haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="${pods[i]}"}[${intervals[i]}])) by (<<.GroupBy>>)'
resources:
overrides:
namespace: {resource: "namespace"}
name:
matches: "^(.*)_total$"
# The metric we want the individual pod to match
as: "dfs_requests_per_second"
EOF
done
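Assuming you save this as generate-configmap.sh (the name is just for illustration), you can review the output first and then pipe it straight to kubectl:
chmod +x generate-configmap.sh
./generate-configmap.sh                      # review the generated YAML
./generate-configmap.sh | kubectl apply -f -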
I just want to replace a value in a multi-document .yaml file with yq, leaving the rest of the document intact. So the question is: how do I replace the value of the image key only when the sibling name key contains node-red? I have tried:
yq '.spec.template.spec.containers[] | select(.name = "node-red") | .image |= "test"' ./test.yaml
with this yaml:
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
name: node-red
spec:
ingressClassName: nginx-private
rules:
- host: node-red.k8s.lan
http:
paths:
- path: "/"
pathType: Prefix
backend:
service:
name: node-red
port:
number: 1880
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: node-red
spec:
replicas: 1
selector:
matchLabels:
app: node-red
template:
metadata:
labels:
app: node-red
spec:
containers:
- name: node-red
image: "cir.domain.com/node-red:84"
the output is:
name: node-red
image: "test"
What I want is YAML output with only the value of the image key updated, like this:
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
name: node-red
spec:
ingressClassName: nginx-private
rules:
- host: node-red.k8s.lan
http:
paths:
- path: "/"
pathType: Prefix
backend:
service:
name: node-red
port:
number: 1880
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: node-red
spec:
replicas: 1
selector:
matchLabels:
app: node-red
template:
metadata:
labels:
app: node-red
spec:
containers:
- name: node-red
image: "test"
Keep the context by including the traversal and selection in the LHS of the assignment/update, i.e. by putting parentheses around all of it:
yq '(.spec.template.spec.containers[] | select(.name == "node-red") | .image) |= "test"'
In order to prevent the creation of elements previously not present in one of the documents, add filters using select accordingly, e.g.:
yq '(.spec | select(.template).template.spec.containers[] | select(.name == "node-red").image) |= "test"'
Note that in any case it should read ==, not =, when checking for equality.
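As with other yq edits, once the output looks right you can persist it to the file with the -i flag, e.g.:
yq -i '(.spec.template.spec.containers[] | select(.name == "node-red") | .image) |= "test"' ./test.yaml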
I have a YAML config file which looks like this:
apiVersion: controller.kubeslice.io/v1alpha1
kind: Cluster
metadata:
name: kind-worker1 # kind-worker-1
namespace: kubeslice-avesha # kubeslice-avesha
spec:
networkInterface: eth0
---
apiVersion: controller.kubeslice.io/v1alpha1
kind: Cluster
metadata:
name: kind-worker2 # kind-worker-2
namespace: kubeslice-avesha # kubeslice-avesha
spec:
networkInterface: eth0
Now, how do I get or patch the name field of a specific Cluster object? When I execute:
yq '.metadata.name' config.yaml
then I get the respective values from both manifests:
kind-worker1
---
kind-worker2
Or, if I try to patch using:
yq '.metadata.name = "kind-worker"' config.yaml
both of the fields get updated:
apiVersion: controller.kubeslice.io/v1alpha1
kind: Cluster
metadata:
name: kind-worker # kind-worker-1
namespace: kubeslice-avesha # kubeslice-avesha
spec:
networkInterface: eth0
---
apiVersion: controller.kubeslice.io/v1alpha1
kind: Cluster
metadata:
name: kind-worker # kind-worker-2
namespace: kubeslice-avesha # kubeslice-avesha
spec:
networkInterface: eth0
You can use select to filter the contents before querying/updating.
For example, to query the first one, use document_index == 0 as the filter (counting starts at 0):
yq 'select(document_index == 0).metadata.name' config.yaml
kind-worker1
Or update the one where .metadata.name equals "kind-worker2" (which is the second one):
yq 'select(.metadata.name == "kind-worker2").metadata.name = "xy"' config.yaml
apiVersion: controller.kubeslice.io/v1alpha1
kind: Cluster
metadata:
name: kind-worker1 # kind-worker-1
namespace: kubeslice-avesha # kubeslice-avesha
spec:
networkInterface: eth0
---
apiVersion: controller.kubeslice.io/v1alpha1
kind: Cluster
metadata:
name: xy # kind-worker-2
namespace: kubeslice-avesha # kubeslice-avesha
spec:
networkInterface: eth0
Note: You can query and update all fields independently.
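Both approaches can also be combined with updates; a sketch that renames the second document by position instead of by name (the new name is just for illustration):
yq 'select(document_index == 1).metadata.name = "kind-worker-renamed"' config.yaml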
I am exploring yq to modify my YAML: I want to add a new element under spec of the ImageStream with name == openshift45.
apiVersion: v1
items:
- apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
annotations:
openshift.io/generated-by: OpenShiftNewApp
creationTimestamp: null
labels:
app: openshift45
app.kubernetes.io/component: openshift45
app.kubernetes.io/instance: openshift45
name: nodejs-10
spec:
lookupPolicy:
local: false
tags:
- annotations:
openshift.io/imported-from: registry.access.redhat.com/ubi8/nodejs-10
from:
kind: DockerImage
name: registry.access.redhat.com/ubi8/nodejs-10
generation: null
importPolicy: {}
name: latest
referencePolicy:
type: ""
status:
dockerImageRepository: ""
- apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
annotations:
openshift.io/generated-by: OpenShiftNewApp
creationTimestamp: null
labels:
app: openshift45
app.kubernetes.io/component: openshift45
app.kubernetes.io/instance: openshift45
name: openshift45
spec:
lookupPolicy:
local: false
status:
dockerImageRepository: ""
The command below returns the right metadata element. Now, I want to move to the parent and then pick spec. Is this possible with yq (https://github.com/mikefarah/yq)?
yq r openshift45.yaml --printMode pv "items(kind==ImageStream).(name==openshift45)"
returns
items.[1].metadata:
annotations:
openshift.io/generated-by: OpenShiftNewApp
creationTimestamp: null
labels:
app: openshift45
app.kubernetes.io/component: openshift45
app.kubernetes.io/instance: openshift45
name: openshift45
Expected output:
apiVersion: v1
items:
- apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
annotations:
openshift.io/generated-by: OpenShiftNewApp
creationTimestamp: null
labels:
app: openshift45
app.kubernetes.io/component: openshift45
app.kubernetes.io/instance: openshift45
name: nodejs-10
spec:
lookupPolicy:
local: false
tags:
- annotations:
openshift.io/imported-from: registry.access.redhat.com/ubi8/nodejs-10
from:
kind: DockerImage
name: registry.access.redhat.com/ubi8/nodejs-10
generation: null
importPolicy: {}
name: latest
referencePolicy:
type: ""
status:
dockerImageRepository: ""
- apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
annotations:
openshift.io/generated-by: OpenShiftNewApp
creationTimestamp: null
labels:
app: openshift45
app.kubernetes.io/component: openshift45
app.kubernetes.io/instance: openshift45
name: openshift45
spec:
dockerImageRepository: <$MYREGISTRY>/<$MYNAMESPACE>/<$MYPROJECT>
lookupPolicy:
local: false
status:
dockerImageRepository: ""
The path expressions in mikefarah/yq are not documented well enough to show a real example of how to use multiple conditions to get to the desired object. So for the YAML in question, using one unique condition, you could do something like below. Verified in yq version 3.3.2.
yq w openshift45.yaml 'items.(metadata.name == openshift45).spec.dockerImageRepository' '<$MYREGISTRY>/<$MYNAMESPACE>/<$MYPROJECT>'
You can use the -i flag along with write to modify the YAML in place. See "Updating files in-place" in the docs.
If this isn't desired and you need multiple conditional selections to get to the desired object, I suggest raising an issue at the GitHub page - https://github.com/mikefarah/yq/issues - requesting the right syntax for it.
Currently this is not possible in yq. You will have to capture the output of the first command in a variable and pass that variable to the next one.
yq is a wrapper over jq, but it doesn't support jq's custom functions. Custom functions would make it possible to use any value from any level of the document hierarchy in one place.
Thanks to Inian for pointing out that the OP is not using the wrapper yq version. Unfortunately, the version used doesn't allow custom functions for now, either.
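For what it's worth, in mikefarah/yq v4 and later the w command was replaced by expression-based updates, where combining multiple conditions is straightforward; a sketch under that assumption:
yq '(.items[] | select(.kind == "ImageStream" and .metadata.name == "openshift45").spec.dockerImageRepository) = "<$MYREGISTRY>/<$MYNAMESPACE>/<$MYPROJECT>"' openshift45.yaml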
I've written a node exporter in golang named "my-node-exporter" with some collectors to show metrics. From my cluster, I can view my metrics just fine with the following:
kubectl port-forward my-node-exporter-999b5fd99-bvc2c 9090:8080 -n kube-system
localhost:9090/metrics
However, when I try to view my metrics within the Prometheus dashboard
kubectl port-forward prometheus-prometheus-operator-158978-prometheus-0 9090
localhost:9090/graph
my metrics are nowhere to be found and I can only see default metrics. Am I missing a step for getting my metrics on the graph?
Here are the pods in my default namespace which has my prometheus stuff in it.
pod/alertmanager-prometheus-operator-158978-alertmanager-0 2/2 Running 0 85d
pod/grafana-1589787858-fd7b847f9-sxxpr 1/1 Running 0 85d
pod/prometheus-operator-158978-operator-75f4d57f5b-btwk9 2/2 Running 0 85d
pod/prometheus-operator-1589787700-grafana-5fb7fd9d8d-2kptx 2/2 Running 0 85d
pod/prometheus-operator-1589787700-kube-state-metrics-765d4b7bvtdhj 1/1 Running 0 85d
pod/prometheus-operator-1589787700-prometheus-node-exporter-bwljh 1/1 Running 0 85d
pod/prometheus-operator-1589787700-prometheus-node-exporter-nb4fv 1/1 Running 0 85d
pod/prometheus-operator-1589787700-prometheus-node-exporter-rmw2f 1/1 Running 0 85d
pod/prometheus-prometheus-operator-158978-prometheus-0 3/3 Running 1 85d
I used helm to install prometheus operator.
EDIT: adding my yaml file
# Configuration to deploy
#
# example usage: kubectl create -f <this_file>
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: my-node-exporter-sa
namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: my-node-exporter-binding
subjects:
- kind: ServiceAccount
name: my-node-exporter-sa
namespace: kube-system
roleRef:
kind: ClusterRole
name: my-node-exporter-role
apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: my-node-exporter-role
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
---
#####################################################
############ Service ############
#####################################################
kind: Service
apiVersion: v1
metadata:
name: my-node-exporter-svc
namespace: kube-system
labels:
app: my-node-exporter
spec:
ports:
- name: my-node-exporter
port: 8080
targetPort: metrics
protocol: TCP
selector:
app: my-node-exporter
---
#########################################################
############ Deployment ############
#########################################################
kind: Deployment
apiVersion: apps/v1
metadata:
name: my-node-exporter
namespace: kube-system
spec:
selector:
matchLabels:
app: my-node-exporter
replicas: 1
template:
metadata:
labels:
app: my-node-exporter
spec:
serviceAccount: my-node-exporter-sa
containers:
- name: my-node-exporter
image: locationofmyimagehere
args:
- "--telemetry.addr=8080"
- "--telemetry.path=/metrics"
imagePullPolicy: Always
ports:
- containerPort: 8080
volumeMounts:
- name: log-dir
mountPath: /var/log
volumes:
- name: log-dir
hostPath:
path: /var/log
Service monitor yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: my-node-exporter-service-monitor
labels:
app: my-node-exporter-service-monitor
spec:
selector:
matchLabels:
app: my-node-exporter
matchExpressions:
- {key: app, operator: Exists}
endpoints:
- port: my-node-exporter
namespaceSelector:
matchNames:
- default
- kube-system
Prometheus yaml
# Prometheus will use selected ServiceMonitor
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
name: my-node-exporter
labels:
team: frontend
spec:
serviceMonitorSelector:
matchLabels:
app: my-node-exporter
matchExpressions:
- key: app
operator: Exists
You need to explicitly tell Prometheus what metrics to collect, and where from, by first creating a Service that points to your my-node-exporter pods (if you haven't already), and then a ServiceMonitor, as described in the Prometheus Operator docs - search for the phrase "This Service object is discovered by a ServiceMonitor".
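Once the Service and ServiceMonitor are in place, a quick way to check whether Prometheus has actually picked up the target (reusing the port-forward from the question) is the targets API:
kubectl port-forward prometheus-prometheus-operator-158978-prometheus-0 9090
curl -s localhost:9090/api/v1/targets | grep my-node-exporter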
Getting Deployment/Service/ServiceMonitor/PrometheusRule working together under the Prometheus Operator takes some care.
So I created a Helm chart repo, kehao95/helm-prometheus-exporter, to install any Prometheus exporter, including your custom exporter; you can try it out.
It will create not only the exporter Deployment but also Service/ServiceMonitor/PrometheusRule for you.
install the chart
helm repo add kehao95 https://kehao95.github.io/helm-prometheus-exporter/
create a values file my-exporter.yaml for kehao95/prometheus-exporter
exporter:
image: your-exporter
tag: latest
port: 8080
args:
- "--telemetry.addr=8080"
- "--telemetry.path=/metrics"
install it with helm
helm install --namespace yourns my-exporter kehao95/prometheus-exporter -f my-exporter.yaml
Then you should see your metrics in Prometheus.
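To confirm what the chart created (assuming the namespace from the install command above), something like this should list all four objects:
kubectl get deployment,service,servicemonitor,prometheusrule -n yourns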