I just want to replace a value in a multi-document .yaml file with yq while leaving the rest of the document intact. So the question is: how do I replace the value of the image key only when the neighbouring key reads name: node-red? I have tried:
yq '.spec.template.spec.containers[] | select(.name = "node-red") | .image |= "test"' ./test.yaml
with this yaml:
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
name: node-red
spec:
ingressClassName: nginx-private
rules:
- host: node-red.k8s.lan
http:
paths:
- path: "/"
pathType: Prefix
backend:
service:
name: node-red
port:
number: 1880
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: node-red
spec:
replicas: 1
selector:
matchLabels:
app: node-red
template:
metadata:
labels:
app: node-red
spec:
containers:
- name: node-red
image: "cir.domain.com/node-red:84"
the output is:
name: node-red
image: "test"
What I want is the full YAML output with only the value of the image key updated, like this:
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
name: node-red
spec:
ingressClassName: nginx-private
rules:
- host: node-red.k8s.lan
http:
paths:
- path: "/"
pathType: Prefix
backend:
service:
name: node-red
port:
number: 1880
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: node-red
spec:
replicas: 1
selector:
matchLabels:
app: node-red
template:
metadata:
labels:
app: node-red
spec:
containers:
- name: node-red
image: "test"
Keep the context by including the traversal and selection in the LHS of the assignment/update, i.e. by putting parentheses around all of it:
yq '(.spec.template.spec.containers[] | select(.name == "node-red") | .image) |= "test"'
In order to prevent the creation of elements previously not present in one of the documents, add filters using select accordingly, e.g.:
yq '(.spec | select(.template).template.spec.containers[] | select(.name == "node-red").image) |= "test"'
Note that in any case it should read == not = when checking for equality.
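Putting both points together for the multi-document file above, a document-level select on .kind keeps the Ingress untouched while the update is written back in place; a sketch, assuming yq v4 and its -i (in-place) flag:
yq -i '(select(.kind == "Deployment") | .spec.template.spec.containers[] | select(.name == "node-red") | .image) |= "test"' ./test.yaml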
Related
I have an ElasticSearch + Kibana cluster on Kubernetes. We want to bypass authentication to let users go directly to the dashboard without having to log in.
We have managed to implement Elastic anonymous access on our Elastic nodes. Unfortunately, it is not what we want: we want users to bypass the Kibana login, so what we need is Anonymous Authentication.
Unfortunately we can't figure out how to implement it. We are declaring Kubernetes objects with YAML, such as Deployment, Services etc., without using ConfigMap. To add Elastic/Kibana config, we pass it through env variables.
For example, here is how we define the es01 Kubernetes Deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "37"
kompose.cmd: kompose convert
kompose.version: 1.26.1 (a9d05d509)
objectset.rio.cattle.io/applied: H4sIAAAAAAAA/7RVTW/jNhD9KwueWkCmJfl
objectset.rio.cattle.io/id: 3dc011c2-d20c-465b-a143-2f27f4dc464f
creationTimestamp: "2022-05-24T15:17:53Z"
generation: 37
labels:
io.kompose.service: es01
objectset.rio.cattle.io/hash: 83a41b68cabf516665877d6d90c837e124ed2029
name: es01
namespace: waked-elk-pre-prod-test
resourceVersion: "403573505"
uid: e442cf0a-8100-4af1-a9bc-ebf65907398a
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
io.kompose.service: es01
strategy:
type: Recreate
template:
metadata:
annotations:
cattle.io/timestamp: "2022-09-09T13:41:29Z"
kompose.cmd: kompose convert
kompose.version: 1.26.1 (a9d05d509)
creationTimestamp: null
labels:
io.kompose.service: es01
spec:
affinity: {}
containers:
- env:
- name: ELASTIC_PASSWORD
valueFrom:
secretKeyRef:
key: ELASTIC_PASSWORD
name: elastic-credentials
optional: false
- name: cluster.initial_master_nodes
value: es01,es02,es03
And here is the one for the Kibana node:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "41"
field.cattle.io/publicEndpoints: '[{"addresses":["10.130.10.6","10.130.10.7","10.130.10.8"],"port":80,"protocol":"HTTP","serviceName":"waked-elk-pre-prod-test:kibana","ingressName":"waked-elk-pre-prod-test:waked-kibana-ingress","hostname":"waked-kibana-pre-prod.cws.cines.fr","path":"/","allNodes":false}]'
kompose.cmd: kompose convert
kompose.version: 1.26.1 (a9d05d509)
objectset.rio.cattle.io/applied: H4sIAAAAAAAA/7ST34/.........iNhDH/5WTn1o
objectset.rio.cattle.io/id: 5b109127-cb95-4c93-857d-12399979d85a
creationTimestamp: "2022-05-19T08:37:59Z"
generation: 49
labels:
io.kompose.service: kibana
objectset.rio.cattle.io/hash: 0d2e2477ef3e7ee3c8f84b485cc594a1e59aea1d
name: kibana
namespace: waked-elk-pre-prod-test
resourceVersion: "403620874"
uid: 6f22f8b1-81da-49c0-90bf-9e773fbc051b
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
io.kompose.service: kibana
strategy:
type: Recreate
template:
metadata:
annotations:
cattle.io/timestamp: "2022-09-21T13:00:47Z"
kompose.cmd: kompose convert
kompose.version: 1.26.1 (a9d05d509)
kubectl.kubernetes.io/restartedAt: "2022-11-08T14:04:53+01:00"
creationTimestamp: null
labels:
io.kompose.service: kibana
spec:
affinity: {}
containers:
- env:
- name: xpack.security.authc.providers.anonymous.anonymous1.order
value: "0"
- name: xpack.security.authc.providers.anonymous.anonymous1.credentials.username
value: username
- name: xgrzegrgepack.security.authc.providers.anonymous.anonymous1.credentials.password
value: password
image: docker.elastic.co/kibana/kibana:8.2.0
imagePullPolicy: IfNotPresent
name: kibana
ports:
- containerPort: 5601
name: 5601tcp
protocol: TCP
resources:
limits:
memory: "1073741824"
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /usr/share/kibana/config/certs
name: certs
- mountPath: /usr/share/kibana/data
name: kibanadata
dnsPolicy: ClusterFirst
nodeName: k8-worker-cpu-3.cines.fr
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- name: certs
persistentVolumeClaim:
claimName: certs
- name: kibanadata
persistentVolumeClaim:
claimName: kibanadata
status:
availableReplicas: 1
conditions:
- lastTransitionTime: "2022-11-08T13:45:44Z"
lastUpdateTime: "2022-11-08T13:45:44Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2022-06-07T14:12:17Z"
lastUpdateTime: "2022-11-08T13:45:44Z"
message: ReplicaSet "kibana-84b65ffb69" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 49
readyReplicas: 1
replicas: 1
updatedReplicas: 1
We don't face any problem when modifying/applying the YAML, and the pod runs flawlessly. But it just doesn't work: if we try to access Kibana, we land on the login page.
Both files are a bit cropped. Feel free to ask for the full files if needed.
Have a good night!
I have tried many times with different combinations, but I can't get it working. Here is my yq command:
yq -i e '(.spec.template.spec.containers[]|select(.name == "od-fe").image) = "abcd"'
It is supposed to replace the Deployment image, which is successful, but it also adds template.spec.containers to the Service. Here is the deployment + service YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: od
name: od-fe
spec:
replicas: 2
selector:
matchLabels:
app: od-fe
template:
metadata:
labels:
app: od-fe
spec:
containers:
- name: od-fe
image: od-frontend:latest. <<<replace here only
imagePullPolicy: Always
ports:
- containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
namespace: od
name: od-fe-service
labels:
run: od-fe-service
spec:
ports:
- port: 3000
targetPort: 3000
protocol: TCP
type: NodePort
selector:
app: od-fe
Now the issue is the Service also gets changed to become:
apiVersion: v1
kind: Service
metadata:
namespace: od
name: od-fe-service
labels:
run: od-fe-service
spec:
ports:
- port: 3000
targetPort: 3000
protocol: TCP
type: NodePort
selector:
app: od-fe
template:
spec:
containers: []
One way to fix that would be to include a select statement at the top level to act only on documents of kind Deployment:
yq e '(select(.kind == "Deployment").spec.template.spec.containers[]|select(.name == "od-fe").image) |= "abcd"' yaml
Note: If you are using yq version 4.18.1 or beyond, the eval flag e is no longer needed as it has been made the default action.
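To write the change back to the file instead of printing to stdout, the same expression can be combined with the in-place flag; a sketch, assuming yq v4 and a hypothetical file name deployment.yaml:
yq -i '(select(.kind == "Deployment").spec.template.spec.containers[] | select(.name == "od-fe").image) |= "abcd"' deployment.yaml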
Hello, I'm currently setting up a rook-cephfs test environment using minikube running on Windows 10.
So far I've run crds.yaml, common.yaml, operator.yaml and cluster-test.yaml. I am following the guide at https://github.com/kubernetes/kubernetes/tree/release-1.9/cluster/addons/registry to set up the storage.
From this guide, I've created the ReplicationController and the Service. The issue that I'm having is that when I run kubectl get svc, I don't see the service. Any idea why it's not showing up? Thanks
service.yaml
apiVersion: v1
kind: Service
metadata:
name: kube-registry
namespace: kube-system
labels:
k8s-app: kube-registry-upstream
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "KubeRegistry"
spec:
selector:
k8s-app: kube-registry-upstream
ports:
- name: registry
port: 5000
protocol: TCP
Docker registry
apiVersion: v1
kind: ReplicationController
metadata:
name: kube-registry-v0
namespace: kube-system
labels:
k8s-app: kube-registry-upstream
version: v0
kubernetes.io/cluster-service: "true"
spec:
replicas: 1
selector:
k8s-app: kube-registry-upstream
version: v0
template:
metadata:
labels:
k8s-app: kube-registry-upstream
version: v0
kubernetes.io/cluster-service: "true"
spec:
containers:
- name: registry
image: registry:2
resources:
limits:
cpu: 100m
memory: 100Mi
env:
- name: REGISTRY_HTTP_ADDR
value: :5000
- name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
value: /var/lib/registry
volumeMounts:
- name: image-store
mountPath: /var/lib/registry
ports:
- containerPort: 5000
name: registry
protocol: TCP
volumes:
- name: image-store
emptyDir: {}
Based on the Service YAML you shared, the service is getting created in the kube-system namespace.
You can view the service by using the -n option to specify the namespace:
kubectl get svc kube-registry -n kube-system
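If you are unsure which namespace an object ended up in, you can also list services across all namespaces as a quick check:
kubectl get svc --all-namespaces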
I've written a node exporter in golang named "my-node-exporter" with some collectors to show metrics. From my cluster, I can view my metrics just fine with the following:
kubectl port-forward my-node-exporter-999b5fd99-bvc2c 9090:8080 -n kube-system
localhost:9090/metrics
However, when I try to view my metrics within the Prometheus dashboard
kubectl port-forward prometheus-prometheus-operator-158978-prometheus-0 9090
localhost:9090/graph
my metrics are nowhere to be found and I can only see default metrics. Am I missing a step for getting my metrics on the graph?
Here are the pods in my default namespace, which has my Prometheus stuff in it.
pod/alertmanager-prometheus-operator-158978-alertmanager-0 2/2 Running 0 85d
pod/grafana-1589787858-fd7b847f9-sxxpr 1/1 Running 0 85d
pod/prometheus-operator-158978-operator-75f4d57f5b-btwk9 2/2 Running 0 85d
pod/prometheus-operator-1589787700-grafana-5fb7fd9d8d-2kptx 2/2 Running 0 85d
pod/prometheus-operator-1589787700-kube-state-metrics-765d4b7bvtdhj 1/1 Running 0 85d
pod/prometheus-operator-1589787700-prometheus-node-exporter-bwljh 1/1 Running 0 85d
pod/prometheus-operator-1589787700-prometheus-node-exporter-nb4fv 1/1 Running 0 85d
pod/prometheus-operator-1589787700-prometheus-node-exporter-rmw2f 1/1 Running 0 85d
pod/prometheus-prometheus-operator-158978-prometheus-0 3/3 Running 1 85d
I used Helm to install the Prometheus Operator.
EDIT: adding my YAML file
# Configuration to deploy
#
# example usage: kubectl create -f <this_file>
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: my-node-exporter-sa
namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: my-node-exporter-binding
subjects:
- kind: ServiceAccount
name: my-node-exporter-sa
namespace: kube-system
roleRef:
kind: ClusterRole
name: my-node-exporter-role
apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: my-node-exporter-role
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
---
#####################################################
############ Service ############
#####################################################
kind: Service
apiVersion: v1
metadata:
name: my-node-exporter-svc
namespace: kube-system
labels:
app: my-node-exporter
spec:
ports:
- name: my-node-exporter
port: 8080
targetPort: metrics
protocol: TCP
selector:
app: my-node-exporter
---
#########################################################
############ Deployment ############
#########################################################
kind: Deployment
apiVersion: apps/v1
metadata:
name: my-node-exporter
namespace: kube-system
spec:
selector:
matchLabels:
app: my-node-exporter
replicas: 1
template:
metadata:
labels:
app: my-node-exporter
spec:
serviceAccount: my-node-exporter-sa
containers:
- name: my-node-exporter
image: locationofmyimagehere
args:
- "--telemetry.addr=8080"
- "--telemetry.path=/metrics"
imagePullPolicy: Always
ports:
- containerPort: 8080
volumeMounts:
- name: log-dir
mountPath: /var/log
volumes:
- name: log-dir
hostPath:
path: /var/log
Service monitor yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: my-node-exporter-service-monitor
labels:
app: my-node-exporter-service-monitor
spec:
selector:
matchLabels:
app: my-node-exporter
matchExpressions:
- {key: app, operator: Exists}
endpoints:
- port: my-node-exporter
namespaceSelector:
matchNames:
- default
- kube-system
Prometheus yaml
# Prometheus will use selected ServiceMonitor
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
name: my-node-exporter
labels:
team: frontend
spec:
serviceMonitorSelector:
matchLabels:
app: my-node-exporter
matchExpressions:
- key: app
operator: Exists
You need to explicitly tell Prometheus what metrics to collect, and where from, by first creating a Service that points to your my-node-exporter pods (if you haven't already), and then a ServiceMonitor, as described in the Prometheus Operator docs (search for the phrase "This Service object is discovered by a ServiceMonitor").
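For reference, a minimal ServiceMonitor sketch that lines up with the Service from the question; the key point is that the metadata labels must match the Prometheus object's serviceMonitorSelector and that spec.selector must match the Service's labels (names below mirror the question and are only illustrative):
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-node-exporter
  labels:
    app: my-node-exporter          # must match Prometheus spec.serviceMonitorSelector.matchLabels
spec:
  selector:
    matchLabels:
      app: my-node-exporter        # must match the labels on the my-node-exporter-svc Service
  namespaceSelector:
    matchNames:
      - kube-system                # namespace where the Service lives
  endpoints:
    - port: my-node-exporter       # named port defined in the Service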
Getting Deployment/Service/ServiceMonitor/PrometheusRule working with the Prometheus Operator requires great care.
So I created a Helm chart repo, kehao95/helm-prometheus-exporter, to install any Prometheus exporter, including your custom exporter; you can try it out.
It will create not only the exporter Deployment but also the Service/ServiceMonitor/PrometheusRule for you.
Install the chart:
helm repo add kehao95 https://kehao95.github.io/helm-prometheus-exporter/
Create a values file my-exporter.yaml for kehao95/prometheus-exporter:
exporter:
image: your-exporter
tag: latest
port: 8080
args:
- "--telemetry.addr=8080"
- "--telemetry.path=/metrics"
Install it with Helm:
helm install --namespace yourns my-exporter kehao95/prometheus-exporter -f my-exporter.yaml
Then you should see your metrics in Prometheus.
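To double-check that the chart created the discovery objects, you can list them in the namespace you installed into (these resource kinds are available once the Prometheus Operator CRDs are installed):
kubectl get servicemonitor,prometheusrule -n yourns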
The invocation of .Files.Glob below needs to take its argument from a variable supplied as a value via .Values.initDBFilesGlob. The value is getting set properly, but the if condition is not evaluating as truthy, even though .Values.initDBConfigMap is empty.
How do I pass a variable argument to .Files.Glob?
Template in question (templates/initdb-configmap.yaml from my WIP chart https://github.com/northscaler/charts/tree/support-env-specific-init/bitnami/cassandra which I'll submit to https://github.com/bitnami/charts/tree/master/bitnami/cassandra as a PR once this is fixed):
{{- $initDBFilesGlob := .Values.initDBFilesGlob -}}
# "{{ $initDBFilesGlob }}" "{{ .Values.initDBConfigMap }}"
# There should be content below this
{{- if and (.Files.Glob $initDBFilesGlob) (not .Values.initDBConfigMap) }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "cassandra.fullname" . }}-init-scripts
labels: {{- include "cassandra.labels" . | nindent 4 }}
data:
{{ (.Files.Glob $initDBFilesGlob).AsConfig | indent 2 }}
{{- end }}
File values.yaml:
dbUser:
forcePassword: true
password: cassandra
initDBFilesGlob: 'files/devops/docker-entrypoint-initdb.d/*'
Command: helm template -f values.yaml foobar /Users/matthewadams/dev/bitnami/charts/bitnami/cassandra
There are files in files/devops/docker-entrypoint-initdb.d, relative to the directory from which I'm invoking the command.
Output:
---
# Source: cassandra/templates/pdb.yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: foobar-cassandra-headless
labels:
app: cassandra
chart: cassandra-5.1.2
release: foobar
heritage: Helm
spec:
selector:
matchLabels:
app: cassandra
release: foobar
maxUnavailable: 1
---
# Source: cassandra/templates/cassandra-secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: foobar-cassandra
labels:
app: cassandra
chart: cassandra-5.1.2
release: foobar
heritage: Helm
type: Opaque
data:
cassandra-password: "Y2Fzc2FuZHJh"
---
# Source: cassandra/templates/configuration-cm.yaml
# files/conf/*
apiVersion: v1
kind: ConfigMap
# files/conf/*
metadata:
name: foobar-cassandra-configuration
labels:
app: cassandra
chart: cassandra-5.1.2
release: foobar
heritage: Helm
data:
README.md: |
Place your Cassandra configuration files here. This will override the values set in any configuration environment variable. This will not be used in case the value *existingConfiguration* is used.
More information [here](https://github.com/bitnami/bitnami-docker-cassandra#configuration)
---
# Source: cassandra/templates/headless-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: foobar-cassandra-headless
labels:
app: cassandra
chart: cassandra-5.1.2
release: foobar
heritage: Helm
spec:
clusterIP: None
publishNotReadyAddresses: true
ports:
- name: intra
port: 7000
targetPort: intra
- name: tls
port: 7001
targetPort: tls
- name: jmx
port: 7199
targetPort: jmx
- name: cql
port: 9042
targetPort: cql
- name: thrift
port: 9160
targetPort: thrift
selector:
app: cassandra
release: foobar
---
# Source: cassandra/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: foobar-cassandra
labels:
app: cassandra
chart: cassandra-5.1.2
release: foobar
heritage: Helm
annotations:
{}
spec:
type: ClusterIP
ports:
- name: cql
port: 9042
targetPort: cql
nodePort: null
- name: thrift
port: 9160
targetPort: thrift
nodePort: null
selector:
app: cassandra
release: foobar
---
# Source: cassandra/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: foobar-cassandra
labels:
app: cassandra
chart: cassandra-5.1.2
release: foobar
heritage: Helm
spec:
selector:
matchLabels:
app: cassandra
release: foobar
serviceName: foobar-cassandra-headless
replicas: 1
updateStrategy:
type: OnDelete
template:
metadata:
labels:
app: cassandra
chart: cassandra-5.1.2
release: foobar
heritage: Helm
spec:
securityContext:
fsGroup: 1001
runAsUser: 1001
containers:
- name: cassandra
command:
- bash
- -ec
# Node 0 is the password seeder
- |
if [[ $HOSTNAME =~ (.*)-0$ ]]; then
echo "Setting node as password seeder"
export CASSANDRA_PASSWORD_SEEDER=yes
else
# Only node 0 will execute the startup initdb scripts
export CASSANDRA_IGNORE_INITDB_SCRIPTS=1
fi
/entrypoint.sh /run.sh
image: docker.io/bitnami/cassandra:3.11.6-debian-10-r26
imagePullPolicy: "IfNotPresent"
env:
- name: BITNAMI_DEBUG
value: "false"
- name: CASSANDRA_CLUSTER_NAME
value: cassandra
- name: CASSANDRA_SEEDS
value: "foobar-cassandra-0.foobar-cassandra-headless.default.svc.cluster.local"
- name: CASSANDRA_PASSWORD
valueFrom:
secretKeyRef:
name: foobar-cassandra
key: cassandra-password
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: CASSANDRA_USER
value: "cassandra"
- name: CASSANDRA_NUM_TOKENS
value: "256"
- name: CASSANDRA_DATACENTER
value: dc1
- name: CASSANDRA_ENDPOINT_SNITCH
value: SimpleSnitch
- name: CASSANDRA_ENDPOINT_SNITCH
value: SimpleSnitch
- name: CASSANDRA_RACK
value: rack1
- name: CASSANDRA_ENABLE_RPC
value: "true"
livenessProbe:
exec:
command: ["/bin/sh", "-c", "nodetool status"]
initialDelaySeconds: 60
periodSeconds: 30
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
exec:
command: ["/bin/sh", "-c", "nodetool status | grep -E \"^UN\\s+${POD_IP}\""]
initialDelaySeconds: 60
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
ports:
- name: intra
containerPort: 7000
- name: tls
containerPort: 7001
- name: jmx
containerPort: 7199
- name: cql
containerPort: 9042
- name: thrift
containerPort: 9160
resources:
limits: {}
requests: {}
volumeMounts:
- name: data
mountPath: /bitnami/cassandra
- name: init-db
mountPath: /docker-entrypoint-initdb.d
- name: configurations
mountPath: /bitnami/cassandra/conf
volumes:
- name: configurations
configMap:
name: foobar-cassandra-configuration
- name: init-db
configMap:
name: foobar-cassandra-init-scripts
volumeClaimTemplates:
- metadata:
name: data
labels:
app: cassandra
release: foobar
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "8Gi"
---
# Source: cassandra/templates/initdb-configmap.yaml
# "files/devops/docker-entrypoint-initdb.d/*" ""
# There should be content below this
If I comment out the line in my values.yaml that sets initDBFilesGlob, the template renders correctly:
...
---
# Source: cassandra/templates/initdb-configmap.yaml
# "files/docker-entrypoint-initdb.d/*" ""
# There should be content below this
apiVersion: v1
kind: ConfigMap
metadata:
name: foobar-cassandra-init-scripts
labels:
app: cassandra
chart: cassandra-5.1.2
release: foobar
heritage: Helm
data:
README.md: |
You can copy here your custom `.sh` or `.cql` file so they are executed during the first boot of the image.
More info in the [bitnami-docker-cassandra](https://github.com/bitnami/bitnami-docker-cassandra#initializing-a-new-instance) repository.
I was able to accomplish this by using the printf function to initialize the variable, like this:
{{- $initDBFilesGlob := printf "%s" .Values.initDBFilesGlob -}}
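If you also want to guard against the value being unset entirely, the same trick can be combined with the default function; a sketch, assuming the chart's usual glob of files/docker-entrypoint-initdb.d/*:
{{- $initDBFilesGlob := printf "%s" (.Values.initDBFilesGlob | default "files/docker-entrypoint-initdb.d/*") -}}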