sh into a Job container after it is stopped

I am backing up my PostgreSQL database using this CronJob:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: postgres-backup
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: postgres-backup
            image: postgres:10.4
            command: ["/bin/sh"]
            args: ["-c", 'echo "$PGPASS" > /root/.pgpass && chmod 600 /root/.pgpass && pg_dump -Fc -h <host> -U <user> <db> > /var/backups/backup.dump']
            env:
            - name: PGPASS
              valueFrom:
                secretKeyRef:
                  name: pgpass
                  key: pgpass
            volumeMounts:
            - mountPath: /var/backups
              name: postgres-backup-storage
          restartPolicy: Never
          volumes:
          - name: postgres-backup-storage
            hostPath:
              path: /var/volumes/postgres-backups
              type: DirectoryOrCreate
The CronJob executes successfully: the backup is made and saved inside the Job's container, but that container is stopped after the script finishes.
Of course I want to access the backup files in the container, but I can't because it is stopped/terminated.
Is there a way to execute shell commands in a container after it has terminated, so I can access the backup files saved in it?
I know that I could do that on the node, but I don't have permission to access it.
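For context, the container's shell command first writes the password to /root/.pgpass with the owner-only permissions libpq requires, then runs pg_dump. The .pgpass handling can be tried locally (a sketch with placeholder values, writing to the current directory instead of /root):

```shell
# Sketch of the job's .pgpass handling (placeholder values).
# libpq expects one line per server: host:port:database:user:password
PGPASS='db.example.com:5432:mydb:myuser:s3cret'
echo "$PGPASS" > ./.pgpass
# libpq ignores a .pgpass file that is readable by group or others,
# hence the chmod 600 in the job's command.
chmod 600 ./.pgpass
ls -l ./.pgpass
```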

confused genius gave me the great idea of creating a second, identical container to access the dump files, so this is the solution that works:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: postgres-backup
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: postgres-backup
            image: postgres:10.4
            command: ["/bin/sh"]
            args: ["-c", 'echo "$PGPASS" > /root/.pgpass && chmod 600 /root/.pgpass && pg_dump -Fc -h <host> -U <user> <db> > /var/backups/backup.dump']
            env:
            - name: PGPASS
              valueFrom:
                secretKeyRef:
                  name: dev-pgpass
                  key: pgpass
            volumeMounts:
            - mountPath: /var/backups
              name: postgres-backup-storage
          - name: postgres-restore
            image: postgres:10.4
            volumeMounts:
            - mountPath: /var/backups
              name: postgres-backup-storage
          restartPolicy: Never
          volumes:
          - name: postgres-backup-storage
            hostPath:
              # Ensure the file directory is created.
              path: /var/volumes/postgres-backups
              type: DirectoryOrCreate
After that, one just needs to sh into the "postgres-restore" container and access the dump files.
Thanks!
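One caveat worth noting: a container spec with no command runs the image's default entrypoint, and depending on the image the helper container may exit or crash instead of idling. A common workaround (a sketch, not part of the original answer) is to keep the helper container alive explicitly:

```yaml
# Sketch: keep the helper container idling so it can be exec'd into.
- name: postgres-restore
  image: postgres:10.4
  command: ["/bin/sh", "-c", "sleep infinity"]
  volumeMounts:
  - mountPath: /var/backups
    name: postgres-backup-storage
```

Then kubectl exec -it <pod-name> -c postgres-restore -- sh opens a shell with /var/backups mounted.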

Related

Run Elasticsearch 7.17 on OpenShift: error chroot: cannot change root directory to '/': Operation not permitted

When starting the Elasticsearch 7.17 cluster on OpenShift, the cluster writes the error
chroot: cannot change root directory to '/': Operation not permitted
Kibana started OK.
Code:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: elasticsearch-pp
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: NEXUS/elasticsearch:7.17.7
        env:
        - name: cluster.name
          value: k8s-logs
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: discovery.seed_hosts
          value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
        - name: cluster.initial_master_nodes
          value: "es-cluster-0,es-cluster-1,es-cluster-2"
        - name: ES_JAVA_OPTS
          value: "-Xms4012m -Xmx4012m"
        ports:
        - containerPort: 9200
          name: client
        - containerPort: 9300
          name: nodes
        volumeMounts:
        - name: data-cephfs
          mountPath: /usr/share/elasticsearch/data
      initContainers:
      - name: fix-permissions
        image: NEXUS/busybox
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        volumeMounts:
        - name: data-cephfs
          mountPath: /usr/share/elasticsearch/data
      - name: increase-vm-max-map
        image: NEXUS/busybox
        imagePullPolicy: IfNotPresent
        command: ["/bin/sh"]
        args: ["-c", "sysctl -w vm.max_map_count=262144; echo vm.max_map_count=262144 >> /etc/sysctl.conf ; sysctl -p"]
        # image: NEXUS/busybox
        # command: ["sysctl", "-w", "vm.max_map_count=262144"]
      - name: increase-fd-ulimit
        image: NEXUS/busybox
        command: ["sh", "-c", "ulimit -n 65536"]
      serviceAccount: elk-anyuid
      serviceAccountName: elk-anyuid
      restartPolicy: Always
      volumes:
      - name: data-cephfs
        persistentVolumeClaim:
          claimName: data-cephfs
I tried changing the cluster settings and disabling the initContainers, but the error persists.

How to include different script inside k8s kind Secret.stringData

I have a k8s CronJob which exports metrics periodically. There's a k8s secret.yaml, and from it I execute a script called run.sh.
Within run.sh I would like to refer to another script, and I can't seem to find the right way to access this script or specify it in cronjob.yaml.
cronjob.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: exporter
  labels:
    app: metrics-exporter
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: exporter
        spec:
          volumes:
          - name: db-dir
            emptyDir: { }
          - name: home-dir
            emptyDir: { }
          - name: run-sh
            secret:
              secretName: exporter-run-sh
          - name: clusters
            emptyDir: { }
          containers:
          - name: stats-exporter
            image: XXXX
            imagePullPolicy: IfNotPresent
            command:
            - /bin/bash
            - /home/scripts/run.sh
            resources: { }
            volumeMounts:
            - name: db-dir
              mountPath: /.db
            - name: home-dir
              mountPath: /home
            - name: run-sh
              mountPath: /home/scripts
            - name: clusters
              mountPath: /db-clusters
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
            securityContext:
              capabilities:
                drop:
                - ALL
              privileged: false
              runAsUser: 1000
              runAsNonRoot: true
              readOnlyRootFilesystem: false
              allowPrivilegeEscalation: false
          terminationGracePeriodSeconds: 30
          restartPolicy: OnFailure
Here's how, in secret.yaml, I run the script run.sh and refer to another script inside /db-clusters.
apiVersion: v1
kind: Secret
metadata:
  name: exporter-run-sh
type: Opaque
stringData:
  run.sh: |
    #!/bin/sh
    source $(dirname $0)/db-clusters/cluster1.sh
    # further work here
Here's the error message:
/home/scripts/run.sh: line 57: /home/scripts/db-clusters/cluster1.sh: No such file or directory
As per the Secret YAML, in stringData you need to use “run-sh” as the key, since you include the secret through the run-sh volume mount.
Try using run-sh in stringData as below:
apiVersion: v1
kind: Secret
metadata:
  name: exporter-run-sh
type: Opaque
stringData:
  run-sh: |
    #!/bin/sh
    source $(dirname $0)/db-clusters/cluster1.sh
    # further work here
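For the original layout, where run.sh sources a file relative to its own location, the referenced script also has to exist under the same mount. One option (a sketch, assuming the helper script can be stored alongside run.sh) is to ship both files as keys of the same Secret, so both land in /home/scripts together:

```yaml
# Sketch: two stringData keys become two files in the run-sh volume mount.
apiVersion: v1
kind: Secret
metadata:
  name: exporter-run-sh
type: Opaque
stringData:
  run.sh: |
    #!/bin/sh
    # Both keys of this Secret are mounted into the same directory.
    . "$(dirname "$0")/cluster1.sh"
    # further work here
  cluster1.sh: |
    #!/bin/sh
    # hypothetical cluster-specific settings sourced by run.sh
```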

MountVolume.SetUp failed for volume "data" : hostPath type check failed: /usr/share/elasticsearch/data is not a directory

Below is my YAML for volume mounting:
initContainers:
- name: fix-permissions
  image: busybox
  command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
  securityContext:
    privileged: true
  volumeMounts:
  - name: data
    mountPath: /usr/share/elasticsearch/data
- name: increase-vm-max-map
  image: busybox
  command: ["sysctl", "-w", "vm.max_map_count=262144"]
  securityContext:
    privileged: true
- name: increase-fd-ulimit
  image: busybox
  command: ["sh", "-c", "ulimit -n 65536"]
  securityContext:
    privileged: true
volumes:
- name: data
  hostPath:
    path: /usr/share/elasticsearch/data
    type: DirectoryOrCreate
Even after changing the type to DirectoryOrCreate, it shows the error:
MountVolume.SetUp failed for volume "data" : hostPath type check failed: /usr/share/elasticsearch/data is not a directory
How can I fix this?
You can add a volumeMounts entry inside your container:
containers:
- name: increase-fd-ulimit
  volumeMounts:
  - name: data
    mountPath: /usr/share/elasticsearch/data
and add volumeClaimTemplates to your StatefulSet:
spec:
  volumeClaimTemplates:
  - kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: data
      namespace: your_namespace
      annotations:
        volume.beta.kubernetes.io/storage-class: default
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 200Gi
      volumeMode: Filesystem

PostStart hook exited with 126

I need to copy some configuration files already present at a location B to a location A, where I have mounted a persistent volume, within the same container.
For that I tried to configure a postStart hook as follows:
lifecycle:
  postStart:
    exec:
      command:
      - "sh"
      - "-c"
      - >
        if [! -d "/opt/A/data" ] ; then
          cp -rp /opt/B/. /opt/A;
        fi;
        rm -rf /opt/B
but it exited with 126.
Any tips, please?
You should put a space after the first bracket [. The following Deployment works:
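The difference is easy to reproduce in any POSIX shell: without the space, the shell looks for a command literally named [!, which does not exist, whereas [ followed by ! negates the ordinary -d test (the paths below are placeholders):

```shell
# Broken form: "[!" is taken as one command name, which is not found, so the
# then-branch never runs even though the directory is absent.
sh -c 'if [! -d "/no/such/dir" ] ; then echo missing ; fi' 2>/dev/null
# Fixed form: "[" is the test utility and "!" negates the -d check.
sh -c 'if [ ! -d "/no/such/dir" ] ; then echo missing ; fi'   # prints: missing
```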
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        lifecycle:
          postStart:
            exec:
              command:
              - "sh"
              - "-c"
              - >
                if [ ! -d "/suren" ] ; then
                  cp -rp /docker-entrypoint.sh /home/;
                fi;
                rm -rf /docker-entrypoint.sh
So, this nginx container starts with a docker-entrypoint.sh script by default. After the container has started, it won't find the directory /suren, so the if statement evaluates to true: the script is copied into the /home directory and then removed from the root.
# kubectl exec nginx-8d7cc6747-5nvwk 2> /dev/null -- ls /home/
docker-entrypoint.sh
Here is the YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oracledb
  labels:
    app: oracledb
spec:
  selector:
    matchLabels:
      app: oracledb
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: oracledb
    spec:
      containers:
      - env:
        - name: DB_SID
          value: ORCLCDB
        - name: DB_PDB
          value: pdb
        - name: DB_PASSWD
          value: pwd
        image: oracledb
        imagePullPolicy: IfNotPresent
        name: oracledb
        lifecycle:
          postStart:
            exec:
              command:
              - "sh"
              - "-c"
              - >
                if [ ! -d "/opt/oracle/oradata/ORCLCDB" ] ; then
                  cp -rp /opt/oracle/files/* /opt/oracle/oradata;
                fi;
                rm -rf /opt/oracle/files/
        volumeMounts:
        - mountPath: /opt/oracle/oradata
          name: oradata
      securityContext:
        fsGroup: 54321
      terminationGracePeriodSeconds: 30
      volumes:
      - name: oradata
        persistentVolumeClaim:
          claimName: oradata

Elasticsearch StatefulSet does not create volumeClaims on rook-cephfs

I am trying to deploy an Elasticsearch StatefulSet and have the storage provisioned by the rook-ceph storage class.
The pod is stuck in Pending because of:
Warning FailedScheduling 87s default-scheduler 0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.
The StatefulSet looks like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: rcc
  labels:
    app: elasticsearch
    tier: backend
    type: db
spec:
  serviceName: es
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
      tier: backend
      type: db
  template:
    metadata:
      labels:
        app: elasticsearch
        tier: backend
        type: db
    spec:
      terminationGracePeriodSeconds: 300
      initContainers:
      - name: fix-the-volume-permission
        image: busybox
        command:
        - sh
        - -c
        - chown -R 1000:1000 /usr/share/elasticsearch/data
        securityContext:
          privileged: true
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      - name: increase-the-vm-max-map-count
        image: busybox
        command:
        - sysctl
        - -w
        - vm.max_map_count=262144
        securityContext:
          privileged: true
      - name: increase-the-ulimit
        image: busybox
        command:
        - sh
        - -c
        - ulimit -n 65536
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.7.1
        ports:
        - containerPort: 9200
          name: http
        - containerPort: 9300
          name: tcp
        resources:
          requests:
            memory: 4Gi
          limits:
            memory: 6Gi
        env:
        - name: cluster.name
          value: elasticsearch
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: discovery.zen.ping.unicast.hosts
          value: "elasticsearch-0.es.default.svc.cluster.local,elasticsearch-1.es.default.svc.cluster.local,elasticsearch-2.es.default.svc.cluster.local"
        - name: ES_JAVA_OPTS
          value: -Xms4g -Xmx4g
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: rook-cephfs
      resources:
        requests:
          storage: 5Gi
and the reason the PVC is not getting created is:
Normal FailedBinding 47s (x62 over 15m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
Any idea what I'm doing wrong?
After adding the rook-ceph-block storage class and changing the storageClassName to it, everything worked without any issues.
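In other words, only the storageClassName in the claim template changes (a sketch of the adjusted fragment, assuming a rook-ceph-block StorageClass exists in the cluster):

```yaml
volumeClaimTemplates:
- metadata:
    name: data
  spec:
    accessModes:
    - ReadWriteOnce
    storageClassName: rook-ceph-block  # was: rook-cephfs
    resources:
      requests:
        storage: 5Gi
```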
