k8s CronJob loop on list of pods - bash

I want to loop over the pods in a specific namespace, but the trick is to do it in a CronJob. Is it possible inline?
kubectl get pods -n foo
The trick here is that after getting the list of pods, I need to loop over them and delete each one, one by one, with a timeout of 15 seconds between deletions. Is it possible to do this in a CronJob?
apiVersion: batch/v1
kind: CronJob
metadata:
  name: restart
  namespace: foo
spec:
  concurrencyPolicy: Forbid
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      backoffLimit: 2
      activeDeadlineSeconds: 600
      template:
        spec:
          serviceAccountName: pod-exterminator
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl:1.22.3
              command:
                - 'kubectl'
                - 'get'
                - 'pods'
                - '--namespace=foo'
Running the above works, but it gets complicated once you want to run a loop. How can I do it inline?

In your case you can use something like this:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: restart
  namespace: foo
spec:
  concurrencyPolicy: Forbid
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      backoffLimit: 2
      activeDeadlineSeconds: 600
      template:
        spec:
          serviceAccountName: pod-exterminator
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl:1.22.3
              command:
                - /bin/sh
                - -c
                - kubectl get pods -o name | while read -r POD; do kubectl delete "$POD"; sleep 15; done
However, do you really need to wait 15 seconds? If you want to be sure that the pod is gone before deleting the next one, you can use --wait=true, so the command becomes:
kubectl get pods -o name | while read -r POD; do kubectl delete "$POD" --wait; done
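Note that the pod-exterminator service account referenced above must be allowed to list and delete pods in the namespace. A minimal Role/RoleBinding sketch (the object names are assumptions, chosen to match the service account in the manifests above):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-exterminator
  namespace: foo
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-exterminator
  namespace: foo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-exterminator
subjects:
  - kind: ServiceAccount
    name: pod-exterminator
    namespace: foo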

Here is something similar I did to clean up RabbitMQ instances once our Helm chart was deleted (the hyperkube image can run kubectl commands):
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "rabbitmq-cluster-operator.fullname" . }}-delete-instances
  namespace: {{ .Release.Namespace }}
  annotations:
    "helm.sh/hook": pre-delete
    "helm.sh/hook-weight": "1"
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded,hook-failed
  labels:
    app: {{ template "rabbitmq-cluster-operator.name" . }}
    release: {{ .Release.Name }}
spec:
  template:
    metadata:
      name: {{ template "rabbitmq-cluster-operator.fullname" . }}-delete-instances
      labels:
        app: {{ template "rabbitmq-cluster-operator.name" . }}
      annotations:
        sidecar.istio.io/inject: "false"
    spec:
      restartPolicy: Never
      serviceAccountName: {{ template "rabbitmq-cluster-operator.serviceAccountName" . }}
      containers:
        - name: kubectl
          image: "{{ .Values.global.hyperkube.image.repository }}/{{ .Values.global.hyperkube.image.name }}:{{ .Values.global.hyperkube.image.tag }}"
          imagePullPolicy: "{{ .Values.global.hyperkube.image.pullPolicy }}"
          command:
            - /bin/sh
            - -c
            - >
              kubectl get rabbitmqclusters | while read -r entry; do
                name=$(echo $entry | awk '{print $1}');
                kubectl delete rabbitmqcluster $name -n {{ .Release.Namespace }};
              done
Note this is a Job, but something similar can be done in a CronJob; a sketch follows below.
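If the same cleanup should run on a schedule rather than as a Helm hook, a minimal CronJob variant (the name, schedule, image, and service account below are assumptions, not taken from the chart above) might look like:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: rabbitmq-cleanup            # hypothetical name
spec:
  schedule: "0 * * * *"             # hourly; adjust as needed
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          serviceAccountName: rabbitmq-cleanup   # needs RBAC to list/delete rabbitmqclusters
          containers:
            - name: kubectl
              image: bitnami/kubectl:1.22.3
              command:
                - /bin/sh
                - -c
                - kubectl get rabbitmqclusters -o name | while read -r RMQ; do kubectl delete "$RMQ"; done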

Related

Is there a way to give different parameters to a single job in helm charts having parallelism

This is my Job configuration with parallelism: 2:
spec:
  ttlSecondsAfterFinished: 60
  backoffLimit: 0
  parallelism: 2
Two pods for the same Job will come up. How do I give two different parameters to the two pods using Helm charts?
I have tried to do this as follows. Below is my Helm template for the Job:
{{- range $index := until (.Values.replicaCount | int) }}
apiVersion: batch/v1
kind: Job
metadata:
  name: alpine-base-{{ index $.Values.ip_range $index }}
  labels:
    app: alpine-app
    chart: alpine-chart
  annotations:
    "helm.sh/hook-delete-policy": "hook-succeeded"
spec:
  ttlSecondsAfterFinished: 60
  backoffLimit: 0
  completionMode: Indexed
  parallelism: {{ $.Values.replicaCount }}
  completions: {{ $.Values.replicaCount }}
  template:
    metadata:
      labels:
        app_type: base
    spec:
      containers:
        - name: alpine-base
          image: "{{ $.Values.global.imageRegistry }}/{{ $.Values.global.image.repository }}:{{ $.Values.global.image.tag }}"
          imagePullPolicy: "{{ $.Values.global.imagePullPolicy }}"
          volumeMounts:
            - mountPath: /scripts
              name: scripts-vol
          env:
            - name: IP
              value: {{ index $.Values.slave_producer.ip_range $index }}
            - name: SLEEP_TIME
              value: {{ $.Values.slave_producer.sleepTime }}
          command: ["/bin/bash"]
          args: ["/scripts/slave_trigger_script.sh"]
          ports:
            - containerPort: 1093
            - containerPort: 5000
            - containerPort: 3445
          resources:
            requests:
              memory: "1024Mi"
              cpu: "1024m"
            limits:
              memory: "1024Mi"
              cpu: "1024m"
      volumes:
        - name: scripts-vol
          configMap:
            name: scripts-configmap
      restartPolicy: Never
{{- end }}
But both pods get the same IP in the environment variable
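Since the Job already uses completionMode: Indexed, one possible approach (not from the original thread) is to let each pod derive its own parameter from the JOB_COMPLETION_INDEX environment variable that Kubernetes injects into containers of Indexed Jobs. The image, script, and IP_LIST value below are placeholders for illustration only:

apiVersion: batch/v1
kind: Job
metadata:
  name: alpine-base-indexed          # hypothetical name
spec:
  ttlSecondsAfterFinished: 60
  backoffLimit: 0
  completionMode: Indexed
  completions: 2
  parallelism: 2
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: alpine-base
          image: alpine:3.18                     # placeholder image
          command: ["/bin/sh", "-c"]
          args:
            - |
              # JOB_COMPLETION_INDEX is injected automatically for Indexed Jobs
              IP=$(echo "$IP_LIST" | cut -d',' -f$((JOB_COMPLETION_INDEX + 1)))
              echo "pod $JOB_COMPLETION_INDEX using IP: $IP"
          env:
            - name: IP_LIST                      # hypothetical: comma-separated list rendered from values
              value: "10.0.0.1,10.0.0.2"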

Convert helm yaml values to cli executable

How do I convert these YAML values so that they can be passed to the helm install command?
hostAliases:
  - ip: "12.20.30.40"
    hostnames:
      - "001.stg.local"
I want to execute it from the terminal like this:
helm install my-app -n my-namespace app/app1 \
--set hostAliases=[]
values.yaml
hostAliases:
  - ip: "12.20.30.40"
    hostnames:
      - "001.stg.local"
      - "002.stg.remote"
  - ip: "0.0.0.0"
    hostnames:
      - "003.stg.local"
      - "004.stg.remote"
templates/configmap.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
data:
  data: |-
    {{- range $idx, $alias := $.Values.hostAliases }}
    ip-{{ $idx }}: {{ $alias.ip }}
    {{- range $i, $n := $alias.hostnames }}
    host-{{ $idx }}-{{ $i }}: {{ $n }}
    {{- end }}
    {{- end }}
cmd
helm template --debug test .
output
---
# Source: test/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
data:
  data: |-
    ip-0: 12.20.30.40
    host-0-0: 001.stg.local
    host-0-1: 002.stg.remote
    ip-1: 0.0.0.0
    host-1-0: 003.stg.local
    host-1-1: 004.stg.remote
cmd
helm template --debug test . --set "hostAliases[0].ip=1.2.3.4" --set "hostAliases[0].hostnames={001,002}"
output
# Source: test/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
data:
  data: |-
    ip-0: 1.2.3.4
    host-0-0: 001
    host-0-1: 002
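To reproduce the full values.yaml structure from the CLI, every field has to be set explicitly with indexed --set keys; on recent Helm versions (3.10+) --set-json can express the whole list in a single flag. Both commands below are sketches based on the values above:

# plain --set, one index at a time
helm install my-app -n my-namespace app/app1 \
  --set "hostAliases[0].ip=12.20.30.40" \
  --set "hostAliases[0].hostnames={001.stg.local,002.stg.remote}" \
  --set "hostAliases[1].ip=0.0.0.0" \
  --set "hostAliases[1].hostnames={003.stg.local,004.stg.remote}"

# --set-json (Helm 3.10+), whole structure at once
helm install my-app -n my-namespace app/app1 \
  --set-json 'hostAliases=[{"ip":"12.20.30.40","hostnames":["001.stg.local","002.stg.remote"]},{"ip":"0.0.0.0","hostnames":["003.stg.local","004.stg.remote"]}]'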

Env var not available in script/command in a kind: Job (Kubernetes)

I run this Job:
kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: sample
spec:
  template:
    spec:
      containers:
        - command:
            - /bin/bash
            - -c
            - |
              env
              echo "MY_VAR : ${MY_VAR}"
              sleep 800000
          env:
            - name: MY_VAR
              value: MY_VALUE
          image: mcr.microsoft.com/azure-cli:2.0.80
          imagePullPolicy: IfNotPresent
          name: sample
      restartPolicy: Never
  backoffLimit: 4
EOF
But when I look at the log, the value of MY_VAR is empty, even though env prints it:
$ kubectl logs -f sample-7p6bp
...
MY_VAR=MY_VALUE
...
MY_VAR :
Why does this line contain an empty value for ${MY_VAR}:
echo "MY_VAR : ${MY_VAR}"
?
UPDATE: Tried the same with a simple pod:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: sample
spec:
  containers:
    - name: sample
      imagePullPolicy: Always
      command: ["/bin/sh", "-c", "echo BEGIN ${MY_VAR} END"]
      image: radial/busyboxplus:curl
      env:
        - name: MY_VAR
          value: MY_VALUE
EOF
Same/empty result:
$ kubectl logs -f sample
BEGIN END
The reason this happens is that your local shell expands the variable ${MY_VAR} before the manifest is ever sent to Kubernetes. You can disable parameter expansion inside a heredoc by quoting the terminator:
kubectl apply -f - <<'EOF'
Adding these quotes should resolve your issue.
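For illustration, the minimal pod from the update works once the terminator is quoted; alternatively, you can keep an unquoted heredoc and escape just that one expansion:

# Option 1: quote the terminator so the local shell performs no parameter expansion
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sample
spec:
  containers:
    - name: sample
      image: radial/busyboxplus:curl
      command: ["/bin/sh", "-c", "echo BEGIN ${MY_VAR} END"]
      env:
        - name: MY_VAR
          value: MY_VALUE
EOF

# Option 2: keep the unquoted heredoc but escape the dollar sign for this one variable
#   command: ["/bin/sh", "-c", "echo BEGIN \${MY_VAR} END"]

With either option, kubectl logs sample should then print BEGIN MY_VALUE END, since the container shell does the expansion at runtime.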

PostStart hook exited with 126

I need to copy some configuration files that are already present at location B to location A, where I have mounted a persistent volume, in the same container.
For that I tried to configure a postStart hook as follows:
lifecycle:
  postStart:
    exec:
      command:
        - "sh"
        - "-c"
        - >
          if [! -d "/opt/A/data" ] ; then
            cp -rp /opt/B/. /opt/A;
          fi;
          rm -rf /opt/B
But it exited with 126.
Any tips, please?
You need a space after the opening bracket [. The following Deployment works:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          lifecycle:
            postStart:
              exec:
                command:
                  - "sh"
                  - "-c"
                  - >
                    if [ ! -d "/suren" ] ; then
                      cp -rp /docker-entrypoint.sh /home/;
                    fi;
                    rm -rf /docker-entrypoint.sh
So, this nginx container ships with a docker-entrypoint.sh script by default. After the container has started, the hook won't find the directory /suren, so the if condition is true: it copies the script into the /home directory and then removes the script from the root.
# kubectl exec nginx-8d7cc6747-5nvwk 2> /dev/null -- ls /home/
docker-entrypoint.sh
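As a side note on why the original hook failed: without the space, the shell parses [! as a single command name rather than the test command [ followed by a negation, so the condition cannot run at all and the hook reports a non-zero exit (the exact code can vary by shell and image). A quick way to see the difference from any POSIX shell (the path here is just an example):

# "[!" is parsed as one word; no such command exists, so the test never runs
sh -c '[! -d /does-not-exist ]'
echo $?   # non-zero: the shell reports that "[!" was not found

# with the space, "[" is the test command and "!" negates the directory check
sh -c '[ ! -d /does-not-exist ]'
echo $?   # 0: the directory is missing, so the negated test succeeds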
Here is the YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oracledb
  labels:
    app: oracledb
spec:
  selector:
    matchLabels:
      app: oracledb
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: oracledb
    spec:
      containers:
        - env:
            - name: DB_SID
              value: ORCLCDB
            - name: DB_PDB
              value: pdb
            - name: DB_PASSWD
              value: pwd
          image: oracledb
          imagePullPolicy: IfNotPresent
          name: oracledb
          lifecycle:
            postStart:
              exec:
                command:
                  - "sh"
                  - "-c"
                  - >
                    if [ ! -d "/opt/oracle/oradata/ORCLCDB" ] ; then
                      cp -rp /opt/oracle/files/* /opt/oracle/oradata;
                    fi;
                    rm -rf /opt/oracle/files/
          volumeMounts:
            - mountPath: /opt/oracle/oradata
              name: oradata
      securityContext:
        fsGroup: 54321
      terminationGracePeriodSeconds: 30
      volumes:
        - name: oradata
          persistentVolumeClaim:
            claimName: oradata

How do you pass a variable value to .Files.Glob in a Helm chart?

The invocation of .Files.Glob below needs to take its pattern from a variable supplied as the value .Values.initDBFilesGlob. The value is set properly, but the if condition does not evaluate to truthy, even though .Values.initDBConfigMap is empty.
How do I pass a variable argument to .Files.Glob?
Template in question (templates/initdb-configmap.yaml from my WIP chart https://github.com/northscaler/charts/tree/support-env-specific-init/bitnami/cassandra which I'll submit to https://github.com/bitnami/charts/tree/master/bitnami/cassandra as a PR once this is fixed):
{{- $initDBFilesGlob := .Values.initDBFilesGlob -}}
# "{{ $initDBFilesGlob }}" "{{ .Values.initDBConfigMap }}"
# There should be content below this
{{- if and (.Files.Glob $initDBFilesGlob) (not .Values.initDBConfigMap) }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "cassandra.fullname" . }}-init-scripts
  labels: {{- include "cassandra.labels" . | nindent 4 }}
data:
{{ (.Files.Glob $initDBFilesGlob).AsConfig | indent 2 }}
{{- end }}
File values.yaml:
dbUser:
  forcePassword: true
  password: cassandra
initDBFilesGlob: 'files/devops/docker-entrypoint-initdb.d/*'
Command: helm template -f values.yaml foobar /Users/matthewadams/dev/bitnami/charts/bitnami/cassandra
There are files in files/devops/docker-entrypoint-initdb.d, relative to the directory from which I'm invoking the command.
Output:
---
# Source: cassandra/templates/pdb.yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: foobar-cassandra-headless
  labels:
    app: cassandra
    chart: cassandra-5.1.2
    release: foobar
    heritage: Helm
spec:
  selector:
    matchLabels:
      app: cassandra
      release: foobar
  maxUnavailable: 1
---
# Source: cassandra/templates/cassandra-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: foobar-cassandra
  labels:
    app: cassandra
    chart: cassandra-5.1.2
    release: foobar
    heritage: Helm
type: Opaque
data:
  cassandra-password: "Y2Fzc2FuZHJh"
---
# Source: cassandra/templates/configuration-cm.yaml
# files/conf/*
apiVersion: v1
kind: ConfigMap
# files/conf/*
metadata:
  name: foobar-cassandra-configuration
  labels:
    app: cassandra
    chart: cassandra-5.1.2
    release: foobar
    heritage: Helm
data:
  README.md: |
    Place your Cassandra configuration files here. This will override the values set in any configuration environment variable. This will not be used in case the value *existingConfiguration* is used.
    More information [here](https://github.com/bitnami/bitnami-docker-cassandra#configuration)
---
# Source: cassandra/templates/headless-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: foobar-cassandra-headless
  labels:
    app: cassandra
    chart: cassandra-5.1.2
    release: foobar
    heritage: Helm
spec:
  clusterIP: None
  publishNotReadyAddresses: true
  ports:
    - name: intra
      port: 7000
      targetPort: intra
    - name: tls
      port: 7001
      targetPort: tls
    - name: jmx
      port: 7199
      targetPort: jmx
    - name: cql
      port: 9042
      targetPort: cql
    - name: thrift
      port: 9160
      targetPort: thrift
  selector:
    app: cassandra
    release: foobar
---
# Source: cassandra/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: foobar-cassandra
  labels:
    app: cassandra
    chart: cassandra-5.1.2
    release: foobar
    heritage: Helm
  annotations:
    {}
spec:
  type: ClusterIP
  ports:
    - name: cql
      port: 9042
      targetPort: cql
      nodePort: null
    - name: thrift
      port: 9160
      targetPort: thrift
      nodePort: null
  selector:
    app: cassandra
    release: foobar
---
# Source: cassandra/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: foobar-cassandra
  labels:
    app: cassandra
    chart: cassandra-5.1.2
    release: foobar
    heritage: Helm
spec:
  selector:
    matchLabels:
      app: cassandra
      release: foobar
  serviceName: foobar-cassandra-headless
  replicas: 1
  updateStrategy:
    type: OnDelete
  template:
    metadata:
      labels:
        app: cassandra
        chart: cassandra-5.1.2
        release: foobar
        heritage: Helm
    spec:
      securityContext:
        fsGroup: 1001
        runAsUser: 1001
      containers:
        - name: cassandra
          command:
            - bash
            - -ec
            # Node 0 is the password seeder
            - |
              if [[ $HOSTNAME =~ (.*)-0$ ]]; then
                echo "Setting node as password seeder"
                export CASSANDRA_PASSWORD_SEEDER=yes
              else
                # Only node 0 will execute the startup initdb scripts
                export CASSANDRA_IGNORE_INITDB_SCRIPTS=1
              fi
              /entrypoint.sh /run.sh
          image: docker.io/bitnami/cassandra:3.11.6-debian-10-r26
          imagePullPolicy: "IfNotPresent"
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: CASSANDRA_CLUSTER_NAME
              value: cassandra
            - name: CASSANDRA_SEEDS
              value: "foobar-cassandra-0.foobar-cassandra-headless.default.svc.cluster.local"
            - name: CASSANDRA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: foobar-cassandra
                  key: cassandra-password
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: CASSANDRA_USER
              value: "cassandra"
            - name: CASSANDRA_NUM_TOKENS
              value: "256"
            - name: CASSANDRA_DATACENTER
              value: dc1
            - name: CASSANDRA_ENDPOINT_SNITCH
              value: SimpleSnitch
            - name: CASSANDRA_ENDPOINT_SNITCH
              value: SimpleSnitch
            - name: CASSANDRA_RACK
              value: rack1
            - name: CASSANDRA_ENABLE_RPC
              value: "true"
          livenessProbe:
            exec:
              command: ["/bin/sh", "-c", "nodetool status"]
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          readinessProbe:
            exec:
              command: ["/bin/sh", "-c", "nodetool status | grep -E \"^UN\\s+${POD_IP}\""]
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          ports:
            - name: intra
              containerPort: 7000
            - name: tls
              containerPort: 7001
            - name: jmx
              containerPort: 7199
            - name: cql
              containerPort: 9042
            - name: thrift
              containerPort: 9160
          resources:
            limits: {}
            requests: {}
          volumeMounts:
            - name: data
              mountPath: /bitnami/cassandra
            - name: init-db
              mountPath: /docker-entrypoint-initdb.d
            - name: configurations
              mountPath: /bitnami/cassandra/conf
      volumes:
        - name: configurations
          configMap:
            name: foobar-cassandra-configuration
        - name: init-db
          configMap:
            name: foobar-cassandra-init-scripts
  volumeClaimTemplates:
    - metadata:
        name: data
        labels:
          app: cassandra
          release: foobar
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "8Gi"
---
# Source: cassandra/templates/initdb-configmap.yaml
# "files/devops/docker-entrypoint-initdb.d/*" ""
# There should be content below this
If I comment out the line in my values.yaml that sets initDBFilesGlob, the template renders correctly:
...
---
# Source: cassandra/templates/initdb-configmap.yaml
# "files/docker-entrypoint-initdb.d/*" ""
# There should be content below this
apiVersion: v1
kind: ConfigMap
metadata:
  name: foobar-cassandra-init-scripts
  labels:
    app: cassandra
    chart: cassandra-5.1.2
    release: foobar
    heritage: Helm
data:
  README.md: |
    You can copy here your custom `.sh` or `.cql` file so they are executed during the first boot of the image.
    More info in the [bitnami-docker-cassandra](https://github.com/bitnami/bitnami-docker-cassandra#initializing-a-new-instance) repository.
I was able to accomplish this by using printf to initialize the variable, like this:
{{- $initDBFilesGlob := printf "%s" .Values.initDBFilesGlob -}}
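Plugged back into the template above, only the first line changes (a sketch of the described fix, not a verbatim excerpt from the chart):

{{- $initDBFilesGlob := printf "%s" .Values.initDBFilesGlob -}}
{{- if and (.Files.Glob $initDBFilesGlob) (not .Values.initDBConfigMap) }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "cassandra.fullname" . }}-init-scripts
  labels: {{- include "cassandra.labels" . | nindent 4 }}
data:
{{ (.Files.Glob $initDBFilesGlob).AsConfig | indent 2 }}
{{- end }}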
