Env var not available in script/command in a kind Job (Kubernetes) - bash

I run this Job:
kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: sample
spec:
  template:
    spec:
      containers:
      - command:
        - /bin/bash
        - -c
        - |
          env
          echo "MY_VAR : ${MY_VAR}"
          sleep 800000
        env:
        - name: MY_VAR
          value: MY_VALUE
        image: mcr.microsoft.com/azure-cli:2.0.80
        imagePullPolicy: IfNotPresent
        name: sample
      restartPolicy: Never
  backoffLimit: 4
EOF
But when I look at the logs, the value of MY_VAR is empty even though env prints it:
$ kubectl logs -f sample-7p6bp
...
MY_VAR=MY_VALUE
...
MY_VAR :
Why does this line print an empty value for ${MY_VAR}?
echo "MY_VAR : ${MY_VAR}"
UPDATE: Tried the same with a simple pod:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: sample
spec:
  containers:
  - name: sample
    imagePullPolicy: Always
    command: ["/bin/sh", "-c", "echo BEGIN ${MY_VAR} END"]
    image: radial/busyboxplus:curl
    env:
    - name: MY_VAR
      value: MY_VALUE
EOF
Same/empty result:
$ kubectl logs -f sample
BEGIN END

The reason this happens is that your local shell expands the variable ${MY_VAR} (to an empty string) before the manifest is ever sent to Kubernetes. You can disable parameter expansion inside a heredoc by quoting the terminator:
kubectl apply -f - <<'EOF'
Adding these quotes should resolve your issue.
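You can see the difference locally, no cluster needed; this is plain shell behaviour, with MY_VAR assumed to be unset in the shell you run kubectl from:
# Unquoted terminator: the local shell expands ${MY_VAR} to an empty string
# before the text ever reaches kubectl or the API server.
cat <<EOF
echo "MY_VAR : ${MY_VAR}"
EOF
# prints: echo "MY_VAR : "

# Quoted terminator: the heredoc is passed through literally, so the shell
# inside the container is the one that expands ${MY_VAR} at runtime.
cat <<'EOF'
echo "MY_VAR : ${MY_VAR}"
EOF
# prints: echo "MY_VAR : ${MY_VAR}"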

Related

Update istio-ingressgateway with yaml instead of kubectl edit

I am testing a tcp-based service from a book...
To complete this task, I need to expose port 31400...
I found that I can do this using this command: KUBE_EDITOR="nano" kubectl edit svc istio-ingressgateway -n istio-system
and manually enter this:
- name: tcp
  nodePort: 30851
  port: 31400
  protocol: TCP
  targetPort: 31400
It works as expected, but how do I do the same task using yaml and kubectl apply?
Thanks for your help,
WCDR
1 - Get the current configuration:
$ kubectl get -n istio-system service istio-ingressgateway -o yaml
The output looks like:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {...,"kind":"Service",..."app":"istio-ingressgateway"...
  ...
  labels:
    app: istio-ingressgateway
  ...
spec:
  ...
  ports:
  ...
  >>>> insert block here <<<<
  selector:
  ...
...
2 - Patch it with yq or manually...
https://github.com/mikefarah/yq
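For step 2, here is a minimal sketch with mikefarah/yq (assuming v4 syntax, where eval is the default command; the port values are the ones from the question):
# Dump the live Service, append the tcp port entry, then apply the result.
kubectl get -n istio-system service istio-ingressgateway -o yaml > istio-ingressgateway.yaml

yq -i '.spec.ports += [{"name": "tcp", "nodePort": 30851, "port": 31400, "protocol": "TCP", "targetPort": 31400}]' \
  istio-ingressgateway.yaml

kubectl apply -n istio-system -f istio-ingressgateway.yaml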
3 - Apply the change:
$ kubectl apply -n istio-system -f - <<EOF
apiVersion: v1
kind: Service
...
EOF
The output should be:
service/istio-ingressgateway configured
Enjoy...

k8s CronJob loop on list of pods

I want to run a loop over the pods in a specific namespace, but the trick is to do it in a CronJob. Is it possible inline?
kubectl get pods -n foo
The trick here is that after I get the list of pods, I need to loop over them and delete each one, one by one, with a timeout of 15 seconds between deletions. Is it possible to do this in a CronJob?
apiVersion: batch/v1
kind: CronJob
metadata:
  name: restart
  namespace: foo
spec:
  concurrencyPolicy: Forbid
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      backoffLimit: 2
      activeDeadlineSeconds: 600
      template:
        spec:
          serviceAccountName: pod-exterminator
          restartPolicy: Never
          containers:
          - name: kubectl
            image: bitnami/kubectl:1.22.3
            command:
            - 'kubectl'
            - 'get'
            - 'pods'
            - '--namespace=foo'
When running the above manifest it works, but it gets complicated once I want to run a loop. How can I do it inline?
In your case you can use something like this:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: restart
  namespace: foo
spec:
  concurrencyPolicy: Forbid
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      backoffLimit: 2
      activeDeadlineSeconds: 600
      template:
        spec:
          serviceAccountName: pod-exterminator
          restartPolicy: Never
          containers:
          - name: kubectl
            image: bitnami/kubectl:1.22.3
            command:
            - /bin/sh
            - -c
            - kubectl get pods -o name | while read -r POD; do kubectl delete "$POD"; sleep 15; done
However, do you really need to wait 15 seconds? If you want to be sure that a pod is gone before deleting the next one, you can use --wait=true, so the command becomes:
kubectl get pods -o name | while read -r POD; do kubectl delete "$POD" --wait; done
Here is something similar I did to clean up rabbitmq instances once our helm chart was deleted (the hyperkube image can run kubectl commands):
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "rabbitmq-cluster-operator.fullname" . }}-delete-instances
  namespace: {{ .Release.Namespace }}
  annotations:
    "helm.sh/hook": pre-delete
    "helm.sh/hook-weight": "1"
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded,hook-failed
  labels:
    app: {{ template "rabbitmq-cluster-operator.name" . }}
    release: {{ .Release.Name }}
spec:
  template:
    metadata:
      name: {{ template "rabbitmq-cluster-operator.fullname" . }}-delete-instances
      labels:
        app: {{ template "rabbitmq-cluster-operator.name" . }}
      annotations:
        sidecar.istio.io/inject: "false"
    spec:
      restartPolicy: Never
      serviceAccountName: {{ template "rabbitmq-cluster-operator.serviceAccountName" . }}
      containers:
      - name: kubectl
        image: "{{ .Values.global.hyperkube.image.repository }}/{{ .Values.global.hyperkube.image.name }}:{{ .Values.global.hyperkube.image.tag }}"
        imagePullPolicy: "{{ .Values.global.hyperkube.image.pullPolicy }}"
        command:
        - /bin/sh
        - -c
        - >
          kubectl get rabbitmqclusters | while read -r entry; do
            name=$(echo $entry | awk '{print $1}');
            kubectl delete rabbitmqcluster $name -n {{ .Release.Namespace }};
          done
Note this is a job but something similar can be done in a cronjob.

PostStart hook exited with 126

I need to copy some configuration files that are already present at a location B to a location A, where I have mounted a persistent volume, in the same container.
For that I tried to configure a postStart hook as follows:
lifecycle:
  postStart:
    exec:
      command:
      - "sh"
      - "-c"
      - >
        if [! -d "/opt/A/data" ] ; then
          cp -rp /opt/B/. /opt/A;
        fi;
        rm -rf /opt/B
but it exited with 126.
Any tips, please?
You should give a space after the first bracket [. The following Deployment works:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        lifecycle:
          postStart:
            exec:
              command:
              - "sh"
              - "-c"
              - >
                if [ ! -d "/suren" ] ; then
                  cp -rp /docker-entrypoint.sh /home/;
                fi;
                rm -rf /docker-entrypoint.sh
So, this nginx container starts with a docker-entrypoint.sh script by default. After the container has started, it won't find the directory /suren, so the if condition is true: it copies the script into the /home directory and then removes the script from the root.
# kubectl exec nginx-8d7cc6747-5nvwk 2> /dev/null -- ls /home/
docker-entrypoint.sh
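To see the parsing problem in isolation, here is a quick local shell check (hypothetical, run outside Kubernetes; the exact error text depends on the shell):
# Without the space, the shell looks for a command literally named "[!",
# which does not exist, so the hook script fails with a non-zero exit code.
sh -c 'if [! -d /tmp ]; then echo missing; fi'
# -> sh: [!: not found (or similar)

# With the space, "[" is the ordinary test command and the condition works.
sh -c 'if [ ! -d /tmp ]; then echo missing; fi'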
Here is the yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oracledb
  labels:
    app: oracledb
spec:
  selector:
    matchLabels:
      app: oracledb
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: oracledb
    spec:
      containers:
      - env:
        - name: DB_SID
          value: ORCLCDB
        - name: DB_PDB
          value: pdb
        - name: DB_PASSWD
          value: pwd
        image: oracledb
        imagePullPolicy: IfNotPresent
        name: oracledb
        lifecycle:
          postStart:
            exec:
              command:
              - "sh"
              - "-c"
              - >
                if [ ! -d "/opt/oracle/oradata/ORCLCDB" ] ; then
                  cp -rp /opt/oracle/files/* /opt/oracle/oradata;
                fi;
                rm -rf /opt/oracle/files/
        volumeMounts:
        - mountPath: /opt/oracle/oradata
          name: oradata
      securityContext:
        fsGroup: 54321
      terminationGracePeriodSeconds: 30
      volumes:
      - name: oradata
        persistentVolumeClaim:
          claimName: oradata

Filename of configMap shows up as env in the Pod

I have a file named config.txt, which I used to create the configmap myconfig inside my minikube cluster.
However, when I use myconfig in a Pod, the name of the file, config.txt, also shows up as part of the ENV.
How can I correct this?
> cat config.txt
var3=val3
var4=val4
> kubectl create cm myconfig --from-file=config.txt
configmap/myconfig created
> kubectl describe cm myconfig
Name: myconfig
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
config.txt:
----
var3=val3
var4=val4
Events: <none>
Pod definition
> cat nginx.yml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    envFrom:
    - configMapRef:
        name: myconfig
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
> kubectl create -f nginx.yml
pod/nginx created
Pod ENV inspection; notice the line config.txt=var3=val3.
I expected it to be just var3=val3.
> kubectl exec -it nginx -- env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=nginx
TERM=xterm
config.txt=var3=val3
var4=val4
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.96.0.1:443
NGINX_VERSION=1.19.4
NJS_VERSION=0.4.4
PKG_RELEASE=1~buster
HOME=/root
Creating the configmap like this instead will do the job:
kubectl create cm myconfig --from-env-file=config.txt
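The difference is in how the keys are generated: --from-file stores the whole file under a single key named after the file (config.txt), while --from-env-file parses each KEY=VALUE line into its own key, which envFrom then exposes as individual environment variables. A quick way to compare the two (assuming a kubectl version that supports --dry-run=client):
# Single key "config.txt" whose value is the entire file content:
kubectl create cm myconfig --from-file=config.txt --dry-run=client -o yaml

# One key per line: var3 and var4 become separate data entries:
kubectl create cm myconfig --from-env-file=config.txt --dry-run=client -o yaml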

OpenShift: Waiting for image stream to be created

I am creating an installation script that will create resources off of YAML files†. This script will do the equivalent of this command:
oc new-app registry.access.redhat.com/rhscl/nginx-114-rhel7~http://github.com/username/repo.git
Three YAML files were created as follows:
imagestream for nginx-114-rhel7 - is-nginx.yaml
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  labels:
    build: build-repo
  name: nginx-114-rhel7
  namespace: ns
spec:
  tags:
  - annotations: null
    from:
      kind: DockerImage
      name: registry.access.redhat.com/rhscl/nginx-114-rhel7
    name: latest
    referencePolicy:
      type: Source
imagestream for repo - is-repo.yaml
apiVersion: v1
kind: ImageStream
metadata:
  labels:
    application: is-rp
  name: is-rp
  namespace: ns
buildconfig for repo (output will be imagestream for repo) - bc-repo.yaml
apiVersion: v1
kind: BuildConfig
metadata:
  labels:
    build: rp
  name: bc-rp
  namespace: ns
spec:
  output:
    to:
      kind: ImageStreamTag
      name: 'is-rp:latest'
  postCommit: {}
  resources: {}
  runPolicy: Serial
  source:
    git:
      ref: dev_1.0
      uri: 'http://github.com/username/repo.git'
    type: Git
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: 'nginx-114-rhel7:latest'
      namespace: flo
    type: Source
  successfulBuildsHistoryLimit: 5
When these commands are run one after another,
oc create -f is-nginx.yaml;oc create -f is-repo.yaml;oc create -f bc-repo.yaml;oc start-build bc/bc-rep --wait
I get this error message,
The ImageStreamTag "nginx-114-rhel7:latest" is invalid: from: Error resolving ImageStreamTag nginx-114-rhel7:latest in namespace ns: unable to find latest tagged image
But, when I run the commands with a sleep before start-build, the build is triggered correctly.
oc create -f is-nginx.yaml;oc create -f is-repo.yaml;oc create -f bc-repo.yaml;sleep 5;oc start-build bc/bc-rep
How do I trigger start-build without entering a sleep command? The oc wait seems to work only for --for=condition and --for=delete. I do not know what value is to be used for --for=condition.
† - I do not see a clear guideline on creating installation scripts - with YAML or equivalent oc commands only - for deploying applications on to OpenShift.
Instead of running oc start-build, you should look into Image Change Triggers and Configuration Change Triggers.
In your build config, you can point to an ImageStreamTag to start a build, either with the default image change trigger:
type: "ImageChange"
imageChange: {}
or with one that references a specific ImageStreamTag:
type: "ImageChange"
imageChange:
  from:
    kind: "ImageStreamTag"
    name: "custom-image:latest"
oc wait --for=condition=available only works when the status object includes conditions, which is not the case for image streams:
status:
  dockerImageRepository: image-registry.openshift-image-registry.svc:5000/test/s2i-openresty-centos7
  tags:
  - items:
    - created: "2019-11-05T11:23:45Z"
      dockerImageReference: quay.io/openresty/openresty-centos7@sha256:db224d642ad4001ab971b934a6444da16bb87ddfcc9c048bbf68febafcac52db
      generation: 2
      image: sha256:db224d642ad4001ab971b934a6444da16bb87ddfcc9c048bbf68febafcac52db
    tag: builder
  - items:
    - created: "2019-11-05T11:23:45Z"
      dockerImageReference: quay.io/openresty/openresty-centos7@sha256:082ee75ed83f161375d0d281f235b7271176b1d129f5ed9972c7d31923e08660
      generation: 2
      image: sha256:082ee75ed83f161375d0d281f235b7271176b1d129f5ed9972c7d31923e08660
    tag: runtime
Until the OpenShift CLI implements a built-in wait command for image streams, what I used to do is: request the imagestream object, parse its status for the expected tag, and sleep a few seconds if it is not there yet. Something like this (note the parentheses, so that the JSON, or the '{}' fallback when the oc call fails, is what gets piped into jq):
until (oc get is nginx-114-rhel7 -o json 2>/dev/null || echo '{}') | jq --exit-status '[.status.tags[]? | select(.tag == "latest")] | length == 1' >/dev/null; do
  sleep 1
done
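Putting it together with the commands from the question, an installation script could look roughly like this (a sketch; is-nginx.yaml, is-repo.yaml and bc-repo.yaml are the files defined above, and the wait loop is the one from this answer):
#!/bin/sh
set -e

oc create -f is-nginx.yaml
oc create -f is-repo.yaml
oc create -f bc-repo.yaml

# Block until the "latest" tag of the builder imagestream has been imported.
until (oc get is nginx-114-rhel7 -n ns -o json 2>/dev/null || echo '{}') \
    | jq --exit-status '[.status.tags[]? | select(.tag == "latest")] | length == 1' >/dev/null; do
  sleep 1
done

oc start-build bc/bc-rp --wait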
