I have a tricky transformation that I need to apply to a YAML file. This is how it looks before:
Before
apiVersion: v1
items:
- apiVersion: core.k8s.com/v1alpha1
  kind: Test
  metadata:
    creationTimestamp: '2022-02-09T19:51:11Z'
    finalizers:
    - cuv.ssf.com
    generation: 1
    name: bar
    namespace: foo
    resourceVersion: '12236'
    uid: 0117657e8
  spec:
    certificateIssuer:
      acme:
        email: myemail
        provider:
          credentials: dst
          type: foo
      domain: vst
      type: bar
  status:
    conditions:
    - lastTransitionTime: '2022-02-09T19:50:12Z'
      message: test
      observedGeneration: 1
      reason: Ready
      status: 'True'
      type: Ready
    lastOperation:
      description: test
      state: Succeeded
      type: Reconcile
Before link
https://codebeautify.org/yaml-validator/y22fe4943
I need to remove all the fields under the metadata section, and the tricky part is to keep only name and namespace. In addition, the status section should be removed entirely. It should look like the following:
After
apiVersion: v1
items:
- apiVersion: core.k8s.com/v1alpha1
  kind: Test
  metadata:
    name: bar
    namespace: foo
  spec:
    certificateIssuer:
      acme:
        email: myemail
        provider:
          credentials: dst
          type: foo
      domain: vst
      type: bar
After link
https://codebeautify.org/yaml-validator/y220531ef
Using yq (https://github.com/mikefarah/yq/) version 4.19.1
Using mikefarah/yq (aka Go yq), you could do something like below, using del to delete an entry and with_entries to select known fields:
yq '.items[] |= (del(.status) | .metadata |= with_entries(select(.key == "name" or .key == "namespace")))' yaml
Starting with v4.18.1, eval is the default command, so the e flag can be omitted.
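If the result looks right, the same expression can be applied in place with the -i flag (a usage sketch, reusing the input file name yaml from the command above):
yq -i '.items[] |= (del(.status) | .metadata |= with_entries(select(.key == "name" or .key == "namespace")))' yaml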
This should work using the yq implementation from https://github.com/kislyuk/yq (not https://github.com/mikefarah/yq) together with the -y (or -Y) flag:
yq -y '.items[] |= (del(.status) | .metadata |= {name, namespace})'
apiVersion: v1
items:
- apiVersion: core.k8s.com/v1alpha1
  kind: Test
  metadata:
    name: bar
    namespace: foo
  spec:
    certificateIssuer:
      acme:
        email: myemail
        provider:
          credentials: dst
          type: foo
      domain: vst
      type: bar
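kislyuk/yq also supports editing the file in place via its -i flag; a sketch, assuming the input file is named input.yaml:
yq -y -i '.items[] |= (del(.status) | .metadata |= {name, namespace})' input.yaml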
Related
I have this Kiali ConfigMap along with different Kiali resources like Deployment, Service etc., all in a single YAML file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kiali
  namespace: istio-system
  labels:
    helm.sh/chart: kiali-server-1.55.1
    app: kiali
    app.kubernetes.io/name: kiali
    app.kubernetes.io/instance: kiali
    version: "v1.55.1"
    app.kubernetes.io/version: "v1.55.1"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/part-of: "kiali"
data:
  config.yaml: |
    external_services:
      custom_dashboards:
        enabled: true
      istio:
        root_namespace: istio-system
        config_map_name: istio-1-15-3
    auth:
      openid: {}
      openshift:
        client_id_prefix: kiali
      strategy: anonymous
    deployment:
      accessible_namespaces:
      - '**'
      additional_service_yaml: {}
      affinity:
        node: {}
        pod: {}
        pod_anti: {}
      configmap_annotations: {}
      custom_secrets: []
      host_aliases: []
      hpa:
        api_version: autoscaling/v2beta2
        spec: {}
      image_digest: ""
      image_name: quay.io/kiali/kiali
      image_pull_policy: Always
      image_pull_secrets: []
      image_version: v1.55
      ingress:
        additional_labels: {}
        class_name: nginx
        override_yaml:
          metadata: {}
      ingress_enabled: false
      instance_name: kiali
      logger:
        log_format: text
        log_level: info
        sampler_rate: "1"
        time_field_format: 2006-01-02T15:04:05Z07:00
      namespace: istio-system
      node_selector: {}
      pod_annotations: {}
      pod_labels:
        sidecar.istio.io/inject: "false"
      priority_class_name: ""
      replicas: 1
      resources:
        limits:
          memory: 1Gi
        requests:
          cpu: 10m
          memory: 64Mi
      secret_name: kiali
      service_annotations: {}
      service_type: ""
      tolerations: []
      version_label: v1.55.1
      view_only_mode: false
    ...
Into this data I have to add the following config under external_services:
data:
  config.yaml: |
    external_services:
      prometheus:
        url: "http://prometheus-prometheus.observability-prisma.svc:9090/"
      custom_dashboards:
        enabled: true
      istio:
        root_namespace: istio-system
        config_map_name: istio-1-15-3
I have tried using the following kubectl patches but none of them works.
This patch replaces the whole original config part under data:
kubectl patch configmap/kiali \
-n istio-system \
--type merge \
-p '{"data":{"config.yaml":{"external_services":{"prometheus":{"url":"http://prometheus-prometheus.observability-prisma.svc:9090/"}}}}}'
This patch fails with an invalid request error, since the config under data is actually a string:
kubectl patch cm kiali -n istio-system --type json --patch '[{ "op": "add", "path": "/data/config.yaml/external_services", "value": {"prometheus":{"url":"http://prometheus-prometheus.observability-prisma.svc:9090/"}} }]'
May I please get some assistance on how to resolve this using kubectl or a JSONPath function?
The Kiali operator stores its deployment configuration in the Kiali CR (custom resource), so creating, updating or removing a Kiali CR triggers the Kiali operator to create, update or remove a Kiali installation. We cannot manually update a resource created by the Kiali operator; after you have created your Kiali CR, you can manage your Kiali installation by using the command below:
kubectl edit kiali kiali -n istio-system
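For scripted, non-interactive changes, the same setting could be patched on the CR instead (a hedged sketch, assuming the CR is named kiali and that external_services sits under its spec, as in the Kiali CR reference):
kubectl patch kiali kiali -n istio-system --type merge \
  -p '{"spec":{"external_services":{"prometheus":{"url":"http://prometheus-prometheus.observability-prisma.svc:9090/"}}}}'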
Your Kiali CR can be validated by using the Kiali CR validation tool.
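As an aside on the original patch attempts: data."config.yaml" holds a single string value, so neither a merge patch nor a JSON patch can address keys inside it. If the ConfigMap were not operator-managed, one workaround (a sketch, assuming a recent mikefarah/yq v4, which provides the from_yaml/to_yaml operators) would be to rewrite the string client-side and re-apply:
kubectl get cm kiali -n istio-system -o yaml \
  | yq '.data."config.yaml" |= (from_yaml | .external_services.prometheus.url = "http://prometheus-prometheus.observability-prisma.svc:9090/" | to_yaml)' \
  | kubectl apply -f -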
I just want to replace a value in a multi-document .yaml file with yq, leaving the rest of the document intact. So the question is: how do I replace the value of the image key only when the adjacent name key contains node-red? I have tried:
yq '.spec.template.spec.containers[] | select(.name = "node-red") | .image |= "test"' ./test.yaml
with this yaml:
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: node-red
spec:
  ingressClassName: nginx-private
  rules:
  - host: node-red.k8s.lan
    http:
      paths:
      - path: "/"
        pathType: Prefix
        backend:
          service:
            name: node-red
            port:
              number: 1880
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: node-red
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-red
  template:
    metadata:
      labels:
        app: node-red
    spec:
      containers:
      - name: node-red
        image: "cir.domain.com/node-red:84"
the output is:
name: node-red
image: "test"
What I want is YAML output with only an updated value in the image key, like this:
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: node-red
spec:
  ingressClassName: nginx-private
  rules:
  - host: node-red.k8s.lan
    http:
      paths:
      - path: "/"
        pathType: Prefix
        backend:
          service:
            name: node-red
            port:
              number: 1880
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: node-red
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-red
  template:
    metadata:
      labels:
        app: node-red
    spec:
      containers:
      - name: node-red
        image: "test"
Keep the context by including the traversal and selection in the LHS of the assignment/update, i.e. by putting parentheses around all of it:
yq '(.spec.template.spec.containers[] | select(.name == "node-red") | .image) |= "test"'
To prevent the creation of elements previously not present in one of the documents, add filters using select accordingly, e.g.:
yq '(.spec | select(.template).template.spec.containers[] | select(.name == "node-red").image) |= "test"'
Note that in any case it should read == not = when checking for equality.
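If the output is as expected, the update can also be written back to the file with the -i flag (a usage sketch for the first command above):
yq -i '(.spec.template.spec.containers[] | select(.name == "node-red") | .image) |= "test"' ./test.yaml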
I have a YAML file like below. This YAML file needs to be patched with a new label under each ConfigMap.
apiVersion: v1
kind: ConfigMapList
items:
- apiVersion: v1
  data:
    test: 1
  kind: ConfigMap
  metadata:
    name: grafana-dashboard-apiserver1
    namespace: monitoring
- apiVersion: v1
  data:
    test: 2
  kind: ConfigMap
  metadata:
    name: grafana-dashboard-apiserver2
    namespace: monitoring
- apiVersion: v1
  data:
    test: 3
  kind: ConfigMap
  metadata:
    name: grafana-dashboard-apiserver3
    namespace: monitoring
I want to insert labels under every ConfigMap in this ConfigMapList, as below:
apiVersion: v1
kind: ConfigMapList
items:
- apiVersion: v1
  data:
    test: 1
  kind: ConfigMap
  metadata:
    name: grafana-dashboard-apiserver1
    namespace: monitoring
    labels:
      grafana_dashboard: "1"
- apiVersion: v1
  data:
    test: 2
  kind: ConfigMap
  metadata:
    name: grafana-dashboard-apiserver2
    namespace: monitoring
    labels:
      grafana_dashboard: "1"
- apiVersion: v1
  data:
    test: 3
  kind: ConfigMap
  metadata:
    name: grafana-dashboard-apiserver3
    namespace: monitoring
    labels:
      grafana_dashboard: "1"
I am not sure how to do this in bash. Please help me with how to do this in bash. Does yq help? Or Kustomize?
If you are using mikefarah/yq, you can use the following path expression to write to the metadata fields of all entries under the items array:
yq w yaml 'items[*].metadata.labels.grafana_dashboard' --tag '!!str' 1
If the YAML output produced is as expected, you can write in-place with the -i flag i.e. yq w -i '..'
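Note that yq w is mikefarah/yq v3 syntax. With v4, the equivalent would be something like the following (a sketch, reusing the same input file name as above; the quoted "1" keeps the value a string):
yq '.items[].metadata.labels.grafana_dashboard = "1"' yaml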
I am exploring yq to modify my YAML, where I want to add a new element under spec of the ImageStream with name == openshift45:
apiVersion: v1
items:
- apiVersion: image.openshift.io/v1
  kind: ImageStream
  metadata:
    annotations:
      openshift.io/generated-by: OpenShiftNewApp
    creationTimestamp: null
    labels:
      app: openshift45
      app.kubernetes.io/component: openshift45
      app.kubernetes.io/instance: openshift45
    name: nodejs-10
  spec:
    lookupPolicy:
      local: false
    tags:
    - annotations:
        openshift.io/imported-from: registry.access.redhat.com/ubi8/nodejs-10
      from:
        kind: DockerImage
        name: registry.access.redhat.com/ubi8/nodejs-10
      generation: null
      importPolicy: {}
      name: latest
      referencePolicy:
        type: ""
  status:
    dockerImageRepository: ""
- apiVersion: image.openshift.io/v1
  kind: ImageStream
  metadata:
    annotations:
      openshift.io/generated-by: OpenShiftNewApp
    creationTimestamp: null
    labels:
      app: openshift45
      app.kubernetes.io/component: openshift45
      app.kubernetes.io/instance: openshift45
    name: openshift45
  spec:
    lookupPolicy:
      local: false
  status:
    dockerImageRepository: ""
The command below returns the valid metadata element. Now I want to move up to the parent and then pick spec. Is this possible with yq - https://github.com/mikefarah/yq?
yq r openshift45.yaml --printMode pv "items(kind==ImageStream).(name==openshift45)"
returns
items.[1].metadata:
  annotations:
    openshift.io/generated-by: OpenShiftNewApp
  creationTimestamp: null
  labels:
    app: openshift45
    app.kubernetes.io/component: openshift45
    app.kubernetes.io/instance: openshift45
  name: openshift45
Expected output:
apiVersion: v1
items:
- apiVersion: image.openshift.io/v1
  kind: ImageStream
  metadata:
    annotations:
      openshift.io/generated-by: OpenShiftNewApp
    creationTimestamp: null
    labels:
      app: openshift45
      app.kubernetes.io/component: openshift45
      app.kubernetes.io/instance: openshift45
    name: nodejs-10
  spec:
    lookupPolicy:
      local: false
    tags:
    - annotations:
        openshift.io/imported-from: registry.access.redhat.com/ubi8/nodejs-10
      from:
        kind: DockerImage
        name: registry.access.redhat.com/ubi8/nodejs-10
      generation: null
      importPolicy: {}
      name: latest
      referencePolicy:
        type: ""
  status:
    dockerImageRepository: ""
- apiVersion: image.openshift.io/v1
  kind: ImageStream
  metadata:
    annotations:
      openshift.io/generated-by: OpenShiftNewApp
    creationTimestamp: null
    labels:
      app: openshift45
      app.kubernetes.io/component: openshift45
      app.kubernetes.io/instance: openshift45
    name: openshift45
  spec:
    *dockerImageRepository: <$MYREGISTRY>/<$MYNAMESPACE>/<$MYPROJECT>*
    lookupPolicy:
      local: false
  status:
    dockerImageRepository: ""
The Path Expressions in mikefarah/yq are not documented well enough to show a real example of how to use multiple conditions to get to the desired object. For the YAML in question, using one unique condition, you could do something like below. Verified with yq version 3.3.2:
yq w openshift45.yaml 'items.(metadata.name == openshift45).spec.dockerImageRepository' '<$MYREGISTRY>/<$MYNAMESPACE>/<$MYPROJECT>'
You can use the -i flag along with write to modify the YAML in-place. See Updating files in-place
If this isn't sufficient and you need multiple conditional selections to get to the desired object, I suggest raising an issue at the GitHub page - https://github.com/mikefarah/yq/issues - requesting the right syntax for it.
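For reference, in mikefarah/yq v4 (which replaced the v3 path-expression syntax) the same update could be expressed with select, e.g.:
yq '(.items[] | select(.metadata.name == "openshift45").spec.dockerImageRepository) = "<$MYREGISTRY>/<$MYNAMESPACE>/<$MYPROJECT>"' openshift45.yaml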
Currently this is not possible in yq. You will have to capture the output of the first command in a variable and pass that variable to the next one.
yq is a wrapper around jq, but it doesn't support custom functions. Custom functions would make it possible to use any value from any hierarchy of the document in one place.
Thanks to Inian for pointing out that the OP is not using the wrapper yq version. Unfortunately, the version used also doesn't allow custom functions for now.
I am creating an installation script that will create resources from YAML files†. This script will do the equivalent of this command:
oc new-app registry.access.redhat.com/rhscl/nginx-114-rhel7~http://github.com/username/repo.git
Three YAML files were created as follows:
imagestream for nginx-114-rhel7 - is-nginx.yaml
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  labels:
    build: build-repo
  name: nginx-114-rhel7
  namespace: ns
spec:
  tags:
  - annotations: null
    from:
      kind: DockerImage
      name: registry.access.redhat.com/rhscl/nginx-114-rhel7
    name: latest
    referencePolicy:
      type: Source
imagestream for repo - is-repo.yaml
apiVersion: v1
kind: ImageStream
metadata:
  labels:
    application: is-rp
  name: is-rp
  namespace: ns
buildconfig for repo (output will be imagestream for repo) - bc-repo.yaml
apiVersion: v1
kind: BuildConfig
metadata:
  labels:
    build: rp
  name: bc-rp
  namespace: ns
spec:
  output:
    to:
      kind: ImageStreamTag
      name: 'is-rp:latest'
  postCommit: {}
  resources: {}
  runPolicy: Serial
  source:
    git:
      ref: dev_1.0
      uri: 'http://github.com/username/repo.git'
    type: Git
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: 'nginx-114-rhel7:latest'
        namespace: flo
    type: Source
  successfulBuildsHistoryLimit: 5
When these commands are run one after another,
oc create -f is-nginx.yaml; oc create -f is-repo.yaml; oc create -f bc-repo.yaml; oc start-build bc/bc-rp --wait
I get this error message,
The ImageStreamTag "nginx-114-rhel7:latest" is invalid: from: Error resolving ImageStreamTag nginx-114-rhel7:latest in namespace ns: unable to find latest tagged image
But when I run the commands with a sleep before start-build, the build is triggered correctly:
oc create -f is-nginx.yaml; oc create -f is-repo.yaml; oc create -f bc-repo.yaml; sleep 5; oc start-build bc/bc-rp
How do I trigger start-build without resorting to a sleep command? oc wait seems to work only for --for=condition and --for=delete, and I do not know what value to use for --for=condition.
† - I do not see a clear guideline on creating installation scripts - with YAML or equivalent oc commands only - for deploying applications onto OpenShift.
Instead of running oc start-build, you should look into Image Change Triggers and Configuration Change Triggers.
In your build config, you can point to an ImageStreamTag to start a build:
type: "imageChange"
imageChange: {}
type: "imageChange"
imageChange:
from:
kind: "ImageStreamTag"
name: "custom-image:latest"
oc wait --for=condition=available only works when the status object includes conditions, which is not the case for imagestreams:
status:
  dockerImageRepository: image-registry.openshift-image-registry.svc:5000/test/s2i-openresty-centos7
  tags:
  - items:
    - created: "2019-11-05T11:23:45Z"
      dockerImageReference: quay.io/openresty/openresty-centos7@sha256:db224d642ad4001ab971b934a6444da16bb87ddfcc9c048bbf68febafcac52db
      generation: 2
      image: sha256:db224d642ad4001ab971b934a6444da16bb87ddfcc9c048bbf68febafcac52db
    tag: builder
  - items:
    - created: "2019-11-05T11:23:45Z"
      dockerImageReference: quay.io/openresty/openresty-centos7@sha256:082ee75ed83f161375d0d281f235b7271176b1d129f5ed9972c7d31923e08660
      generation: 2
      image: sha256:082ee75ed83f161375d0d281f235b7271176b1d129f5ed9972c7d31923e08660
    tag: runtime
Until the OpenShift CLI implements a builtin wait command for imagestreams, what I used to do is: request the imagestream object, parse the status object for the expected tag, and sleep a few seconds if it's not ready. Something like this:
until (oc get is nginx-114-rhel7 -o json || echo '{}') | jq --exit-status '[.status.tags[]? | select(.tag == "latest")] | length == 1' > /dev/null; do
  sleep 1
done
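To avoid looping forever if the tag never appears, the wait can be bounded, e.g. with coreutils timeout (a sketch; the 300-second budget is an arbitrary choice):

timeout 300 bash -c '
  until (oc get is nginx-114-rhel7 -o json || echo "{}") | jq --exit-status "[.status.tags[]? | select(.tag == \"latest\")] | length == 1" > /dev/null; do
    sleep 1
  done
'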