I have this Kiali ConfigMap, along with other Kiali resources such as the Deployment and Service, all in a single YAML file:
apiVersion: v1
kind: ConfigMap
metadata:
name: kiali
namespace: istio-system
labels:
helm.sh/chart: kiali-server-1.55.1
app: kiali
app.kubernetes.io/name: kiali
app.kubernetes.io/instance: kiali
version: "v1.55.1"
app.kubernetes.io/version: "v1.55.1"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/part-of: "kiali"
data:
config.yaml: |
external_services:
custom_dashboards:
enabled: true
istio:
root_namespace: istio-system
config_map_name: istio-1-15-3
auth:
openid: {}
openshift:
client_id_prefix: kiali
strategy: anonymous
deployment:
accessible_namespaces:
- '**'
additional_service_yaml: {}
affinity:
node: {}
pod: {}
pod_anti: {}
configmap_annotations: {}
custom_secrets: []
host_aliases: []
hpa:
api_version: autoscaling/v2beta2
spec: {}
image_digest: ""
image_name: quay.io/kiali/kiali
image_pull_policy: Always
image_pull_secrets: []
image_version: v1.55
ingress:
additional_labels: {}
class_name: nginx
override_yaml:
metadata: {}
ingress_enabled: false
instance_name: kiali
logger:
log_format: text
log_level: info
sampler_rate: "1"
time_field_format: 2006-01-02T15:04:05Z07:00
namespace: istio-system
node_selector: {}
pod_annotations: {}
pod_labels:
sidecar.istio.io/inject: "false"
priority_class_name: ""
replicas: 1
resources:
limits:
memory: 1Gi
requests:
cpu: 10m
memory: 64Mi
secret_name: kiali
service_annotations: {}
service_type: ""
tolerations: []
version_label: v1.55.1
view_only_mode: false
...
In this data I have to add the following config under external_services:
data:
config.yaml: |
external_services:
prometheus:
url: "http://prometheus-prometheus.observability-prisma.svc:9090/"
custom_dashboards:
enabled: true
istio:
root_namespace: istio-system
config_map_name: istio-1-15-3
I have tried the following kubectl patches, but none of them work.
This patch replaces the whole original config under data:
kubectl patch configmap/kiali \
-n istio-system \
--type merge \
-p '{"data":{"config.yaml":{"external_services":{"prometheus":{"url":"http://prometheus-prometheus.observability-prisma.svc:9090/"}}}}}'
This patch fails with an invalid request error, since the value under data is actually a string:
kubectl patch cm kiali -n istio-system --type json --patch '[{ "op": "add", "path": "/data/config.yaml/external_services", "value": {"prometheus":{"url":"http://prometheus-prometheus.observability-prisma.svc:9090/"}} }]'
May I please get some assistance on how I can resolve this using kubectl or a JSONPath expression?
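For context on why both attempts fail: config.yaml is a single string value under data, so no patch type can address keys inside it; a patch can only replace the entire string. For a ConfigMap that is not managed by an operator, one way to do that is to round-trip the string through yq. This is only a sketch, assuming mikefarah/yq v4 with its from_yaml/to_yaml operators is installed:
# Decode the embedded config.yaml string, set the Prometheus URL, re-encode it,
# and apply the modified ConfigMap back to the cluster.
kubectl get configmap kiali -n istio-system -o yaml \
  | yq '.data."config.yaml" |= (from_yaml
      | .external_services.prometheus.url = "http://prometheus-prometheus.observability-prisma.svc:9090/"
      | to_yaml)' \
  | kubectl apply -f -
As the answer below explains, if the ConfigMap is generated by the Kiali operator, a manual edit like this will be reverted; in that case the change belongs in the Kiali CR instead.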
The Kiali operator stores its deployment configuration in the Kiali CR (custom resource), so creating, updating, or removing a Kiali CR triggers the Kiali operator to create, update, or remove a Kiali installation. You cannot manually update a resource created by the Kiali operator; after you have created your Kiali CR, you manage your Kiali installation with the following command:
kubectl edit kiali kiali -n istio-system
Your Kiali CR can be validated by using the Kiali CR validation tool.
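As a sketch of what that looks like (assuming a Kiali CR named kiali in istio-system and the kiali.io/v1alpha1 API), the Prometheus URL goes under spec.external_services in the CR, and the operator regenerates the kiali ConfigMap from it:
# Relevant fragment of the Kiali CR; name, namespace and API version are assumptions.
apiVersion: kiali.io/v1alpha1
kind: Kiali
metadata:
  name: kiali
  namespace: istio-system
spec:
  external_services:
    prometheus:
      url: "http://prometheus-prometheus.observability-prisma.svc:9090/"
The same change can be applied non-interactively with a merge patch against the CR, e.g. kubectl patch kiali kiali -n istio-system --type merge -p '{"spec":{"external_services":{"prometheus":{"url":"http://prometheus-prometheus.observability-prisma.svc:9090/"}}}}'.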
Related
I have an Elasticsearch + Kibana cluster on Kubernetes. We want to bypass authentication so users go directly to the dashboard without having to log in.
We have managed to implement Elastic anonymous access on our Elastic nodes. Unfortunately, that is not what we want: we want users to bypass the Kibana login, so what we need is Kibana anonymous authentication.
Unfortunately we can't figure out how to implement it. We declare Kubernetes objects with YAML, such as Deployments and Services, without using ConfigMaps. To add Elastic/Kibana config, we pass it through environment variables.
For example, here is how we define the es01 Kubernetes Deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "37"
kompose.cmd: kompose convert
kompose.version: 1.26.1 (a9d05d509)
objectset.rio.cattle.io/applied: H4sIAAAAAAAA/7RVTW/jNhD9KwueWkCmJfl
objectset.rio.cattle.io/id: 3dc011c2-d20c-465b-a143-2f27f4dc464f
creationTimestamp: "2022-05-24T15:17:53Z"
generation: 37
labels:
io.kompose.service: es01
objectset.rio.cattle.io/hash: 83a41b68cabf516665877d6d90c837e124ed2029
name: es01
namespace: waked-elk-pre-prod-test
resourceVersion: "403573505"
uid: e442cf0a-8100-4af1-a9bc-ebf65907398a
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
io.kompose.service: es01
strategy:
type: Recreate
template:
metadata:
annotations:
cattle.io/timestamp: "2022-09-09T13:41:29Z"
kompose.cmd: kompose convert
kompose.version: 1.26.1 (a9d05d509)
creationTimestamp: null
labels:
io.kompose.service: es01
spec:
affinity: {}
containers:
- env:
- name: ELASTIC_PASSWORD
valueFrom:
secretKeyRef:
key: ELASTIC_PASSWORD
name: elastic-credentials
optional: false
- name: cluster.initial_master_nodes
value: es01,es02,es03
And here is the one for the Kibana node:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "41"
field.cattle.io/publicEndpoints: '[{"addresses":["10.130.10.6","10.130.10.7","10.130.10.8"],"port":80,"protocol":"HTTP","serviceName":"waked-elk-pre-prod-test:kibana","ingressName":"waked-elk-pre-prod-test:waked-kibana-ingress","hostname":"waked-kibana-pre-prod.cws.cines.fr","path":"/","allNodes":false}]'
kompose.cmd: kompose convert
kompose.version: 1.26.1 (a9d05d509)
objectset.rio.cattle.io/applied: H4sIAAAAAAAA/7ST34/.........iNhDH/5WTn1o
objectset.rio.cattle.io/id: 5b109127-cb95-4c93-857d-12399979d85a
creationTimestamp: "2022-05-19T08:37:59Z"
generation: 49
labels:
io.kompose.service: kibana
objectset.rio.cattle.io/hash: 0d2e2477ef3e7ee3c8f84b485cc594a1e59aea1d
name: kibana
namespace: waked-elk-pre-prod-test
resourceVersion: "403620874"
uid: 6f22f8b1-81da-49c0-90bf-9e773fbc051b
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
io.kompose.service: kibana
strategy:
type: Recreate
template:
metadata:
annotations:
cattle.io/timestamp: "2022-09-21T13:00:47Z"
kompose.cmd: kompose convert
kompose.version: 1.26.1 (a9d05d509)
kubectl.kubernetes.io/restartedAt: "2022-11-08T14:04:53+01:00"
creationTimestamp: null
labels:
io.kompose.service: kibana
spec:
affinity: {}
containers:
- env:
- name: xpack.security.authc.providers.anonymous.anonymous1.order
value: "0"
- name: xpack.security.authc.providers.anonymous.anonymous1.credentials.username
value: username
- name: xgrzegrgepack.security.authc.providers.anonymous.anonymous1.credentials.password
value: password
image: docker.elastic.co/kibana/kibana:8.2.0
imagePullPolicy: IfNotPresent
name: kibana
ports:
- containerPort: 5601
name: 5601tcp
protocol: TCP
resources:
limits:
memory: "1073741824"
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /usr/share/kibana/config/certs
name: certs
- mountPath: /usr/share/kibana/data
name: kibanadata
dnsPolicy: ClusterFirst
nodeName: k8-worker-cpu-3.cines.fr
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- name: certs
persistentVolumeClaim:
claimName: certs
- name: kibanadata
persistentVolumeClaim:
claimName: kibanadata
status:
availableReplicas: 1
conditions:
- lastTransitionTime: "2022-11-08T13:45:44Z"
lastUpdateTime: "2022-11-08T13:45:44Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2022-06-07T14:12:17Z"
lastUpdateTime: "2022-11-08T13:45:44Z"
message: ReplicaSet "kibana-84b65ffb69" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 49
readyReplicas: 1
replicas: 1
updatedReplicas: 1
We don't face any problem when modifying/applying the YAML, and the pod runs flawlessly. But it just doesn't work: if we try to access Kibana, we land on the login page.
Both files are a bit cropped. Feel free to ask for full file if needed.
Have a good night!
Looking for some help/guidance on writing a shell script to generate a ConfigMap for use in my cluster. I want the script to loop through a map or dictionary of values to fill out the ConfigMap.
Example:
apiVersion: v1
kind: ConfigMap
metadata:
annotations:
meta.helm.sh/release-name: prometheus-adapter
meta.helm.sh/release-namespace: prometheus
labels:
app.kubernetes.io/component: metrics
app.kubernetes.io/instance: prometheus-adapter
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/part-of: prometheus-adapter
app.kubernetes.io/version: v0.10.0
helm.sh/chart: prometheus-adapter-3.4.1
name: prometheus-adapter
namespace: prometheus
data:
config.yaml: |
rules:
- seriesQuery: 'haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="pod1"}'
metricsQuery: 'sum(rate(haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="pod1"}[2m])) by (<<.GroupBy>>)'
resources:
overrides:
namespace: {resource: "namespace"}
name:
matches: "^(.*)_total$"
# The metric we want the individual pod to match
as: "dfs_requests_per_second"
I want the shell script to add everything from - seriesQuery downward, replacing values such as proxy="value" and the [2m] metric interval wherever that variable appears.
For each key/value pair it would append the same block, properly indented. Kind of like this:
pod names: [pod1, pod2]
interval: [2m, 3m]
- seriesQuery: 'haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="pod1"}'
metricsQuery: 'sum(rate(haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="pod1"}[2m])) by (<<.GroupBy>>)'
resources:
overrides:
namespace: {resource: "namespace"}
name:
matches: "^(.*)_total$"
# The metric we want the individual pod to match
as: "dfs_requests_per_second"
- seriesQuery: 'haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="pod2"}'
metricsQuery: 'sum(rate(haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="pod2"}[3m])) by (<<.GroupBy>>)'
resources:
overrides:
namespace: {resource: "namespace"}
name:
matches: "^(.*)_total$"
# The metric we want the individual pod to match
as: "dfs_requests_per_second"
And it would append that to everything below:
apiVersion: v1
kind: ConfigMap
metadata:
annotations:
meta.helm.sh/release-name: prometheus-adapter
meta.helm.sh/release-namespace: prometheus
labels:
app.kubernetes.io/component: metrics
app.kubernetes.io/instance: prometheus-adapter
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/part-of: prometheus-adapter
app.kubernetes.io/version: v0.10.0
helm.sh/chart: prometheus-adapter-3.4.1
name: prometheus-adapter
namespace: prometheus
data:
config.yaml: |
rules:
It would be preferable to use a template engine that's YAML-aware, but provided that your pod names and intervals don't contain any problematic characters, you can generate your config file easily with bash:
#!/bin/bash
pods=( pod1 pod2 )
intervals=( 2m 3m )
cat <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
annotations:
meta.helm.sh/release-name: prometheus-adapter
meta.helm.sh/release-namespace: prometheus
labels:
app.kubernetes.io/component: metrics
app.kubernetes.io/instance: prometheus-adapter
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/part-of: prometheus-adapter
app.kubernetes.io/version: v0.10.0
helm.sh/chart: prometheus-adapter-3.4.1
name: prometheus-adapter
namespace: prometheus
data:
config.yaml: |
rules:
EOF
for i in "${!pods[@]}"
do
cat <<EOF
- seriesQuery: 'haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="${pods[i]}"}'
metricsQuery: 'sum(rate(haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="${pods[i]}"}[${intervals[i]}])) by (<<.GroupBy>>)'
resources:
overrides:
namespace: {resource: "namespace"}
name:
matches: "^(.*)_total$"
# The metric we want the individual pod to match
as: "dfs_requests_per_second"
EOF
done
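Assuming the script above is saved as generate-configmap.sh (the file name is just for illustration), its output can be reviewed and then applied directly:
# Generate the manifest, inspect it, then apply it to the cluster.
chmod +x generate-configmap.sh
./generate-configmap.sh > prometheus-adapter-cm.yaml
kubectl apply -f prometheus-adapter-cm.yaml
# Or pipe it straight in once you trust the output:
./generate-configmap.sh | kubectl apply -f -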
I just want to replace a value in a multi-document .yaml file with yq, leaving the rest of the document intact. So the question is: how do I replace the value of the image key only when the sibling name key contains node-red? I have tried:
yq '.spec.template.spec.containers[] | select(.name = "node-red") | .image |= "test"' ./test.yaml
with this yaml:
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
name: node-red
spec:
ingressClassName: nginx-private
rules:
- host: node-red.k8s.lan
http:
paths:
- path: "/"
pathType: Prefix
backend:
service:
name: node-red
port:
number: 1880
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: node-red
spec:
replicas: 1
selector:
matchLabels:
app: node-red
template:
metadata:
labels:
app: node-red
spec:
containers:
- name: node-red
image: "cir.domain.com/node-red:84"
the output is:
name: node-red
image: "test"
What I want is the full YAML output with only the value of the image key updated, like this:
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
name: node-red
spec:
ingressClassName: nginx-private
rules:
- host: node-red.k8s.lan
http:
paths:
- path: "/"
pathType: Prefix
backend:
service:
name: node-red
port:
number: 1880
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: node-red
spec:
replicas: 1
selector:
matchLabels:
app: node-red
template:
metadata:
labels:
app: node-red
spec:
containers:
- name: node-red
image: "test"
Keep the context by including the traversal and selection in the LHS of the assignment/update, i.e. by putting parentheses around all of it:
yq '(.spec.template.spec.containers[] | select(.name == "node-red") | .image) |= "test"'
In order to prevent the creation of elements previously not present in one of the documents, add filters using select accordingly, e.g.:
yq '(.spec | select(.template).template.spec.containers[] | select(.name == "node-red").image) |= "test"'
Note that in any case it should read == not = when checking for equality.
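For completeness, a small usage sketch (assuming mikefarah/yq v4): yq processes every document in the file, and the select guards keep the Ingress document untouched, so you can print the result or edit the file in place with -i:
# Print both documents, with only the Deployment's image value changed:
yq '(.spec | select(.template).template.spec.containers[] | select(.name == "node-red").image) |= "test"' test.yaml

# Or modify test.yaml in place:
yq -i '(.spec | select(.template).template.spec.containers[] | select(.name == "node-red").image) |= "test"' test.yaml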
I am exploring yq to modify my YAML, where I want to add a new element under spec of the ImageStream whose name == openshift45:
apiVersion: v1
items:
- apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
annotations:
openshift.io/generated-by: OpenShiftNewApp
creationTimestamp: null
labels:
app: openshift45
app.kubernetes.io/component: openshift45
app.kubernetes.io/instance: openshift45
name: nodejs-10
spec:
lookupPolicy:
local: false
tags:
- annotations:
openshift.io/imported-from: registry.access.redhat.com/ubi8/nodejs-10
from:
kind: DockerImage
name: registry.access.redhat.com/ubi8/nodejs-10
generation: null
importPolicy: {}
name: latest
referencePolicy:
type: ""
status:
dockerImageRepository: ""
- apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
annotations:
openshift.io/generated-by: OpenShiftNewApp
creationTimestamp: null
labels:
app: openshift45
app.kubernetes.io/component: openshift45
app.kubernetes.io/instance: openshift45
name: openshift45
spec:
lookupPolicy:
local: false
status:
dockerImageRepository: ""
The below command returns the correct metadata element. Now I want to move up to the parent and then pick spec. Is this possible with yq (https://github.com/mikefarah/yq)?
yq r openshift45.yaml --printMode pv "items(kind==ImageStream).(name==openshift45)"
returns
items.[1].metadata:
annotations:
openshift.io/generated-by: OpenShiftNewApp
creationTimestamp: null
labels:
app: openshift45
app.kubernetes.io/component: openshift45
app.kubernetes.io/instance: openshift45
name: openshift45
Expected output:
apiVersion: v1
items:
- apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
annotations:
openshift.io/generated-by: OpenShiftNewApp
creationTimestamp: null
labels:
app: openshift45
app.kubernetes.io/component: openshift45
app.kubernetes.io/instance: openshift45
name: nodejs-10
spec:
lookupPolicy:
local: false
tags:
- annotations:
openshift.io/imported-from: registry.access.redhat.com/ubi8/nodejs-10
from:
kind: DockerImage
name: registry.access.redhat.com/ubi8/nodejs-10
generation: null
importPolicy: {}
name: latest
referencePolicy:
type: ""
status:
dockerImageRepository: ""
- apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
annotations:
openshift.io/generated-by: OpenShiftNewApp
creationTimestamp: null
labels:
app: openshift45
app.kubernetes.io/component: openshift45
app.kubernetes.io/instance: openshift45
name: openshift45
spec:
*dockerImageRepository: <$MYREGISTRY>/<$MYNAMESPACE>/<$MYPROJECT>*
lookupPolicy:
local: false
status:
dockerImageRepository: ""
The path expressions in mikefarah/yq are not well documented; there is no real example of how to use multiple conditions to get to the desired object. For the YAML in question, using one unique condition, you could do something like the following (verified with yq version 3.3.2):
yq w openshift45.yaml 'items.(metadata.name == openshift45).spec.dockerImageRepository' '<$MYREGISTRY>/<$MYNAMESPACE>/<$MYPROJECT>'
You can use the -i flag along with write to modify the YAML in place; see "Updating files in-place" in the yq documentation.
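A quick sketch of that in-place variant, using the same path expression as above (yq v3 syntax):
yq w -i openshift45.yaml 'items.(metadata.name == openshift45).spec.dockerImageRepository' '<$MYREGISTRY>/<$MYNAMESPACE>/<$MYPROJECT>'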
If this isn't sufficient and you need multiple conditional selections to get to the desired object, I suggest raising an issue on the GitHub page (https://github.com/mikefarah/yq/issues) requesting the right syntax for it.
Currently this is not possible in yq. You will have to capture the output of the first command in a variable and pass that variable to the next one.
The wrapper version of yq is built on jq, but it doesn't support custom functions. Custom functions would make it possible to use a value from any level of the document's hierarchy in one place.
Thanks to Inian for pointing out that the OP is not using the wrapper version of yq. Unfortunately, the version used also doesn't allow custom functions for now.
I am creating an installation script that will create resources off of YAML files†. This script will do the equivalent of this command:
oc new-app registry.access.redhat.com/rhscl/nginx-114-rhel7~http://github.com/username/repo.git
Three YAML files were created as follows:
imagestream for nginx-114-rhel7 - is-nginx.yaml
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
labels:
build: build-repo
name: nginx-114-rhel7
namespace: ns
spec:
tags:
- annotations: null
from:
kind: DockerImage
name: registry.access.redhat.com/rhscl/nginx-114-rhel7
name: latest
referencePolicy:
type: Source
imagestream for repo - is-repo.yaml
apiVersion: v1
kind: ImageStream
metadata:
labels:
application: is-rp
name: is-rp
namespace: ns
buildconfig for repo (output will be imagestream for repo) - bc-repo.yaml
apiVersion: v1
kind: BuildConfig
metadata:
labels:
build: rp
name: bc-rp
namespace: ns
spec:
output:
to:
kind: ImageStreamTag
name: 'is-rp:latest'
postCommit: {}
resources: {}
runPolicy: Serial
source:
git:
ref: dev_1.0
uri: 'http://github.com/username/repo.git'
type: Git
strategy:
sourceStrategy:
from:
kind: ImageStreamTag
name: 'nginx-114-rhel7:latest'
namespace: flo
type: Source
successfulBuildsHistoryLimit: 5
When these commands are run one after another,
oc create -f is-nginx.yaml;oc create -f is-repo.yaml;oc create -f bc-repo.yaml;oc start-build bc/bc-rep --wait
I get this error message,
The ImageStreamTag "nginx-114-rhel7:latest" is invalid: from: Error resolving ImageStreamTag nginx-114-rhel7:latest in namespace ns: unable to find latest tagged image
But, when I run the commands with a sleep before start-build, the build is triggered correctly.
oc create -f is-nginx.yaml;oc create -f is-repo.yaml;oc create -f bc-repo.yaml;sleep 5;oc start-build bc/bc-rep
How do I trigger start-build without resorting to a sleep command? oc wait seems to work only with --for=condition and --for=delete, and I do not know what value to use for --for=condition.
† - I do not see a clear guideline on creating installation scripts (with YAML or equivalent oc commands only) for deploying applications onto OpenShift.
Instead of running oc start-build, you should look into Image Change Triggers and Configuration Change Triggers.
In your build config, you can point to an ImageStreamTag to start a build:
type: "imageChange"
imageChange: {}
type: "imageChange"
imageChange:
from:
kind: "ImageStreamTag"
name: "custom-image:latest"
oc wait --for=condition=available only works when the status object includes conditions, which is not the case for image streams:
status:
dockerImageRepository: image-registry.openshift-image-registry.svc:5000/test/s2i-openresty-centos7
tags:
- items:
- created: "2019-11-05T11:23:45Z"
dockerImageReference: quay.io/openresty/openresty-centos7@sha256:db224d642ad4001ab971b934a6444da16bb87ddfcc9c048bbf68febafcac52db
generation: 2
image: sha256:db224d642ad4001ab971b934a6444da16bb87ddfcc9c048bbf68febafcac52db
tag: builder
- items:
- created: "2019-11-05T11:23:45Z"
dockerImageReference: quay.io/openresty/openresty-centos7@sha256:082ee75ed83f161375d0d281f235b7271176b1d129f5ed9972c7d31923e08660
generation: 2
image: sha256:082ee75ed83f161375d0d281f235b7271176b1d129f5ed9972c7d31923e08660
tag: runtime
Until the OpenShift CLI implements a built-in wait command for image streams, what I do is request the image stream object, parse its status for the expected tag, and sleep a few seconds if it is not ready yet. Something like this:
# Group the fallback so jq always receives valid JSON, and let jq's exit status drive the loop.
until (oc get is nginx-114-rhel7 -o json || echo '{}') | jq --exit-status '[.status.tags[] | select(.tag == "latest")] | length == 1' >/dev/null; do
  sleep 1
done