PostStart hook exited with 126 - shell

I need to copy some configuration files that are already present in a location B to a location A where I have mounted a persistent volume, in the same container.
For that I tried to configure a postStart hook as follows:
lifecycle:
postStart:
exec:
command:
- "sh"
- "-c"
- >
if [! -d "/opt/A/data" ] ; then
cp -rp /opt/B/. /opt/A;
fi;
rm -rf /opt/B
but it exited with code 126.
Any tips, please?

You should add a space after the opening bracket [. The following Deployment works:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
lifecycle:
postStart:
exec:
command:
- "sh"
- "-c"
- >
if [ ! -d "/suren" ] ; then
cp -rp /docker-entrypoint.sh /home/;
fi;
rm -rf /docker-entrypoint.sh
So, this nginx container starts with a docker-entrypoint.sh script by default. After the container has started, it won't find the directory /suren, so the if statement evaluates to true, the script gets copied into the /home directory, and then the script is removed from the root.
# kubectl exec nginx-8d7cc6747-5nvwk 2> /dev/null -- ls /home/
docker-entrypoint.sh
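For completeness, here is a quick way to reproduce the difference locally. [ is an ordinary command (the test command), so without the space the shell looks up a command literally named [! and fails; this sketch assumes a POSIX sh, and the exact exit code can vary by shell:
sh -c 'if [! -d /tmp ] ; then echo missing ; fi'    # fails: the shell tries to run a command named "[!"
sh -c 'if [ ! -d /tmp ] ; then echo missing ; fi'   # works: "[" is the test command and "!" negates the check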

Here is the YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
name: oracledb
labels:
app: oracledb
spec:
selector:
matchLabels:
app: oracledb
strategy:
type: Recreate
template:
metadata:
labels:
app: oracledb
spec:
containers:
- env:
- name: DB_SID
value: ORCLCDB
- name: DB_PDB
value: pdb
- name: DB_PASSWD
value: pwd
image: oracledb
imagePullPolicy: IfNotPresent
name: oracledb
lifecycle:
postStart:
exec:
command:
- "sh"
- "-c"
- >
if [ ! -d "/opt/oracle/oradata/ORCLCDB" ] ; then
cp -rp /opt/oracle/files/* /opt/oracle/oradata;
fi;
rm -rf /opt/oracle/files/
volumeMounts:
- mountPath: /opt/oracle/oradata
name: oradata
securityContext:
fsGroup: 54321
terminationGracePeriodSeconds: 30
volumes:
- name: oradata
persistentVolumeClaim:
claimName: oradata

Related

Run elasticsearch 7.17 on Openshift error chroot: cannot change root directory to '/': Operation not permitted

When starting an Elasticsearch 7.17 cluster on OpenShift, the cluster writes an error:
chroot: cannot change root directory to '/': Operation not permitted
Kibana started OK.
Code:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: elasticsearch
namespace: elasticsearch-pp
spec:
serviceName: elasticsearch
replicas: 3
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- name: elasticsearch
image: NEXUS/elasticsearch:7.17.7
env:
- name: cluster.name
value: k8s-logs
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: discovery.seed_hosts
value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
- name: cluster.initial_master_nodes
value: "es-cluster-0,es-cluster-1,es-cluster-2"
- name: ES_JAVA_OPTS
value: "-Xms4012m -Xmx4012m"
ports:
- containerPort: 9200
name: client
- containerPort: 9300
name: nodes
volumeMounts:
- name: data-cephfs
mountPath: /usr/share/elasticsearch/data
initContainers:
- name: fix-permissions
image: NEXUS/busybox
command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
volumeMounts:
- name: data-cephfs
mountPath: /usr/share/elasticsearch/data
- name: increase-vm-max-map
image: NEXUS/busybox
imagePullPolicy: IfNotPresent
command: ["/bin/sh"]
args: ["-c", "sysctl -w vm.max_map_count=262144; echo vm.max_map_count=262144 >> /etc/sysctl.conf ; sysctl -p"]
# image: NEXUS/busybox
# command: ["sysctl", "-w", "vm.max_map_count=262144"]
- name: increase-fd-ulimit
image: NEXUS/busybox
command: ["sh", "-c", "ulimit -n 65536"]
serviceAccount: elk-anyuid
serviceAccountName: elk-anyuid
restartPolicy: Always
volumes:
- name: data-cephfs
persistentVolumeClaim:
claimName: data-cephfs
I tried changing the cluster settings and disabling the initContainers, but the error persists.

How to include different script inside k8s kind Secret.stringData

I have a k8s CronJob which exports metrics periodically; there's a k8s secret.yaml, and I execute a script from it called run.sh.
Within run.sh, I would like to refer to another script, but I can't seem to find the right way to access this script or specify it in cronjob.yaml.
cronjob.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
name: exporter
labels:
app: metrics-exporter
spec:
schedule: "* * * * *"
jobTemplate:
spec:
template:
metadata:
labels:
app: exporter
spec:
volumes:
- name: db-dir
emptyDir: { }
- name: home-dir
emptyDir: { }
- name: run-sh
secret:
secretName: exporter-run-sh
- name: clusters
emptyDir: { }
containers:
- name: stats-exporter
image: XXXX
imagePullPolicy: IfNotPresent
command:
- /bin/bash
- /home/scripts/run.sh
resources: { }
volumeMounts:
- name: db-dir
mountPath: /.db
- name: home-dir
mountPath: /home
- name: run-sh
mountPath: /home/scripts
- name: clusters
mountPath: /db-clusters
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
securityContext:
capabilities:
drop:
- ALL
privileged: false
runAsUser: 1000
runAsNonRoot: true
readOnlyRootFilesystem: false
allowPrivilegeEscalation: false
terminationGracePeriodSeconds: 30
restartPolicy: OnFailure
Here's how, in secret.yaml, I run the script run.sh and refer to another script inside /db-clusters.
apiVersion: v1
kind: Secret
metadata:
name: exporter-run-sh
type: Opaque
stringData:
run.sh: |
#!/bin/sh
source $(dirname $0)/db-clusters/cluster1.sh
# further work here
Here's the error message:
/home/scripts/run.sh: line 57: /home/scripts/db-clusters/cluster1.sh: No such file or directory
As per the secret YAML, in stringData you need to mention “run-sh”, as you need to include the secret in the run-sh volume mount.
Try using run-sh in stringData as below:
apiVersion: v1
kind: Secret
metadata:
name: exporter-run-sh
type: Opaque
stringData:
run-sh: |
#!/bin/sh
source $(dirname $0)/db-clusters/cluster1.sh
# further work here
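If run.sh needs to source a second script, another option is to ship that script as an additional key in the same Secret: every key under stringData becomes a file in the mounted volume, so both scripts would end up under /home/scripts. A minimal sketch, assuming cluster1.sh can live in the same Secret and be sourced as a sibling file (its contents here are hypothetical):
apiVersion: v1
kind: Secret
metadata:
  name: exporter-run-sh
type: Opaque
stringData:
  run.sh: |
    #!/bin/sh
    # source the sibling script from the same mounted directory
    . "$(dirname "$0")/cluster1.sh"
    # further work here
  cluster1.sh: |
    #!/bin/sh
    # hypothetical cluster-specific settings
    CLUSTER1_HOST=db-cluster-1.example.com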

Laravel application getting 502 bad gateway error on EKS

I am trying to run a legacy PHP Laravel app on my EKS cluster.
I have containerized the application via the Dockerfile below:
FROM php:7.2-fpm
RUN apt-get update -y \
&& apt-get install -y nginx
# PHP_CPPFLAGS are used by the docker-php-ext-* scripts
ENV PHP_CPPFLAGS="$PHP_CPPFLAGS -std=c++11"
RUN docker-php-ext-install pdo_mysql \
&& docker-php-ext-install opcache \
&& apt-get install libicu-dev -y \
&& docker-php-ext-configure intl \
&& docker-php-ext-install intl \
&& apt-get remove libicu-dev icu-devtools -y
RUN { \
echo 'opcache.memory_consumption=128'; \
echo 'opcache.interned_strings_buffer=8'; \
echo 'opcache.max_accelerated_files=4000'; \
echo 'opcache.revalidate_freq=2'; \
echo 'opcache.fast_shutdown=1'; \
echo 'opcache.enable_cli=1'; \
} > /usr/local/etc/php/conf.d/php-opocache-cfg.ini
COPY nginx-site.conf /etc/nginx/sites-enabled/default
COPY entrypoint.sh /etc/entrypoint.sh
COPY --chown=www-data:www-data . /var/www/mysite
RUN chmod +x /etc/entrypoint.sh
WORKDIR /var/www/mysite
EXPOSE 9000
ENTRYPOINT ["sh", "/etc/entrypoint.sh"]
And the nginx-site.conf
server {
root /var/www/mysite/web;
include /etc/nginx/default.d/*.conf;
index app.php index.php index.html index.htm;
client_max_body_size 30m;
location / {
try_files $uri $uri/ /app.php$is_args$args;
}
location ~ [^/]\.php(/|$) {
fastcgi_split_path_info ^(.+?\.php)(/.*)$;
# Mitigate https://httpoxy.org/ vulnerabilities
fastcgi_param HTTP_PROXY "";
fastcgi_pass 127.0.0.1:9000;
fastcgi_index app.php;
include fastcgi.conf;
}
}
The docker-compose.yaml
version: '3'
services:
proxy:
image: nginx:latest
ports:
- "80:80"
volumes:
- ./proxy/nginx.conf:/etc/nginx/nginx.conf
web:
image: nginx:latest
expose:
- "9000"
volumes:
- ./source:/source
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf
php:
build:
context: .
dockerfile: php/Dockerfile
volumes:
- ./source:/source
I have deployed the NGINX ingress controller provided by the official Kubernetes GitHub repository via Helm, and it looks something like this:
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
helm.sh/chart: ingress-nginx-3.35.0
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: my-ing
app.kubernetes.io/version: "0.48.1"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: my-ing-ingress-nginx-controller
namespace: ingress
spec:
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: my-ing
app.kubernetes.io/component: controller
replicas: 1
revisionHistoryLimit: 10
minReadySeconds: 0
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: my-ing
app.kubernetes.io/component: controller
spec:
dnsPolicy: ClusterFirst
containers:
- name: controller
image: "k8s.gcr.io/ingress-nginx/controller:v0.48.1#sha256:e9fb216ace49dfa4a5983b183067e97496e7a8b307d2093f4278cd550c303899"
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
args:
- /nginx-ingress-controller
- --publish-service=$(POD_NAMESPACE)/my-ing-ingress-nginx-controller
- --election-id=ingress-controller-leader
- --ingress-class=external-nginx
- --configmap=$(POD_NAMESPACE)/my-ing-ingress-nginx-controller
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
runAsUser: 101
allowPrivilegeEscalation: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: LD_PRELOAD
value: /usr/local/lib/libmimalloc.so
livenessProbe:
failureThreshold: 5
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
- name: metrics
containerPort: 10254
protocol: TCP
- name: webhook
containerPort: 8443
protocol: TCP
volumeMounts:
- name: webhook-cert
mountPath: /usr/local/certificates/
readOnly: true
resources:
requests:
cpu: 100m
memory: 90Mi
nodeSelector:
kubernetes.io/os: linux
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- ingress-nginx
topologyKey: kubernetes.io/hostname
serviceAccountName: my-ing-ingress-nginx
terminationGracePeriodSeconds: 300
volumes:
- name: webhook-cert
secret:
secretName: my-ing-ingress-nginx-admission
I have deployed my application via a YAML like the one below:
apiVersion: v1
kind: Namespace
metadata:
name: tardis
labels:
monitoring: prometheus
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: website
namespace: tardis
spec:
replicas: 2
selector:
matchLabels:
app: website
template:
metadata:
labels:
app: website
spec:
containers:
- name: website
image: image directory on ecr
ports:
- containerPort: 9000
---
apiVersion: v1
kind: Service
metadata:
namespace: tardis
name: website
spec:
type: ClusterIP
ports:
- name: http
port: 9000
selector:
app: website
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: website
namespace: tardis
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
ingressClassName: external-nginx
rules:
- host: home.tardis.kr
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: website
port:
number: 9000
When I check the logs of the pods, they seem to indicate that everything is running fine:
[27-Apr-2022 23:33:23] NOTICE: fpm is running, pid 25
[27-Apr-2022 23:33:23] NOTICE: ready to handle connections
However, when I try to access the address mentioned in my Ingress, it gives a "502 Bad Gateway nginx" error.
I am really new to DevOps and even newer to PHP or Laravel.
If there is anything wrong with the way I containerized the application, or the way I deployed it, any sort of feedback would be much appreciated!
Thank you in advance!
I think this problem had something to do with the Dockerfile. I followed the instructions and it seems to work fine now.

sh into a Job container after it is stopped

I am backing up my PostgreSQL database using this CronJob:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: postgres-backup
spec:
schedule: "0 2 * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: postgres-backup
image: postgres:10.4
command: ["/bin/sh"]
args: ["-c", 'echo "$PGPASS" > /root/.pgpass && chmod 600 /root/.pgpass && pg_dump -Fc -h <host> -U <user> <db> > /var/backups/backup.dump']
env:
- name: PGPASS
valueFrom:
secretKeyRef:
name: pgpass
key: pgpass
volumeMounts:
- mountPath: /var/backups
name: postgres-backup-storage
restartPolicy: Never
volumes:
- name: postgres-backup-storage
hostPath:
path: /var/volumes/postgres-backups
type: DirectoryOrCreate
The CronJob gets executed successfully, and the backup is made and saved in the Job's container, but this container is stopped after successful execution of the script.
Of course I want to access the backup files in the container, but I can't because it is stopped/terminated.
Is there a way to execute shell commands in a container after it is terminated, so I can access the backup files saved in it?
I know that I could do that on the node, but I don't have permission to access it.
#confused genius gave me a great idea: create another identical container to access the dump files. This is the solution that works:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: postgres-backup
spec:
schedule: "0 2 * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: postgres-backup
image: postgres:10.4
command: ["/bin/sh"]
args: ["-c", 'echo "$PGPASS" > /root/.pgpass && chmod 600 /root/.pgpass && pg_dump -Fc -h <host> -U <user> <db> > /var/backups/backup.dump']
env:
- name: PGPASS
valueFrom:
secretKeyRef:
name: dev-pgpass
key: pgpass
volumeMounts:
- mountPath: /var/backups
name: postgres-backup-storage
- name: postgres-restore
image: postgres:10.4
volumeMounts:
- mountPath: /var/backups
name: postgres-backup-storage
restartPolicy: Never
volumes:
- name: postgres-backup-storage
hostPath:
# Ensure the file directory is created.
path: /var/volumes/postgres-backups
type: DirectoryOrCreate
After that, one just needs to sh into the "postgres-restore" container and access the dump files, as shown below.
Thanks!
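For example (the pod name below is hypothetical; look it up with kubectl get pods first):
kubectl get pods
kubectl exec -it postgres-backup-27561120-xxxxx -c postgres-restore -- sh
# inside the container:
ls /var/backups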

How to access a StatefulSet from other Pods

I have a running Elasticsearch STS with a headless Service assigned to it:
svc.yaml:
kind: Service
apiVersion: v1
metadata:
name: elasticsearch
namespace: elasticsearch-namespace
labels:
app: elasticsearch
spec:
selector:
app: elasticsearch
clusterIP: None
ports:
- port: 9200
name: rest
- port: 9300
name: inter-node
stateful.yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: es-cluster
namespace: elasticsearch-namespace
spec:
serviceName: elasticsearch
replicas: 3
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
resources:
limits:
cpu: 1000m
requests:
cpu: 100m
ports:
- containerPort: 9200
name: rest
protocol: TCP
- containerPort: 9300
name: inter-node
protocol: TCP
volumeMounts:
- name: elasticsearch-persistent-storage
mountPath: /usr/share/elasticsearch/data
env:
- name: cluster.name
value: k8s-logs
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: discovery.seed_hosts
value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
- name: cluster.initial_master_nodes
value: "es-cluster-0,es-cluster-1,es-cluster-2"
- name: ES_JAVA_OPTS
value: "-Xms512m -Xmx512m"
initContainers:
- name: fix-permissions
image: busybox
command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
securityContext:
privileged: true
volumeMounts:
- name: elasticsearch-persistent-storage
mountPath: /usr/share/elasticsearch/data
- name: increase-vm-max-map
image: busybox
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
- name: increase-fd-ulimit
image: busybox
command: ["sh", "-c", "ulimit -n 65536"]
securityContext:
privileged: true
volumeClaimTemplates:
- metadata:
name: elasticsearch-persistent-storage
labels:
app: elasticsearch
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: elasticsearch-storageclass
resources:
requests:
storage: 20Gi
The question is: how do I access this STS from Pods of kind Deployment? Let's say, using this Redis Pod:
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis-ws-app
labels:
app: redis-ws-app
spec:
replicas: 1
selector:
matchLabels:
app: redis-ws-app
template:
metadata:
labels:
app: redis-ws-app
spec:
containers:
- name: redis-ws-app
image: redis:latest
command: [ "redis-server"]
ports:
- containerPort: 6379
I have been trying to create another Service that would enable me to access it from outside, but without any luck:
kind: Service
apiVersion: v1
metadata:
name: elasticsearch-tcp
namespace: elasticsearch-namespace
labels:
app: elasticsearch
spec:
selector:
app: elasticsearch
ports:
- protocol: TCP
port: 9200
targetPort: 9200
You would reach it directly by hitting the headless Service. As an example, take this StatefulSet and this Service:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
spec:
selector:
matchLabels:
app: nginx
serviceName: "nginx"
replicas: 4
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
name: web
---
kind: Service
apiVersion: v1
metadata:
name: nginx-headless
spec:
selector:
app: nginx
clusterIP: None
ports:
- port: 80
name: http
I could reach the pods of the StatefulSet through the headless Service from any pod within the cluster:
/ # curl -I nginx-headless
HTTP/1.1 200 OK
Server: nginx/1.19.0
Date: Tue, 09 Jun 2020 12:36:47 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 26 May 2020 15:00:20 GMT
Connection: keep-alive
ETag: "5ecd2f04-264"
Accept-Ranges: bytes
The peculiarity of a headless service is that it doesn't create iptables rules for that service. So, when you query that service, the request goes to kube-dns (or CoreDNS), which returns the backends rather than the IP address of the service itself. So, if you do an nslookup, for example, it will return all the backends (pods) of that service:
/ # nslookup nginx-headless
Name: nginx-headless
Address 1: 10.56.1.44
Address 2: 10.56.1.45
Address 3: 10.56.1.46
Address 4: 10.56.1.47
And it won't have any iptables rules assigned to it:
$ sudo iptables-save | grep -i nginx-headless
$
Unlike a normal service, which would return the IP address of the service itself:
/ # nslookup nginx
Name: nginx
Address 1: 10.60.15.30 nginx.default.svc.cluster.local
And it will have iptables rules assigned to it:
$ sudo iptables-save | grep -i nginx
-A KUBE-SERVICES ! -s 10.56.0.0/14 -d 10.60.15.30/32 -p tcp -m comment --comment "default/nginx: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.60.15.30/32 -p tcp -m comment --comment "default/nginx: cluster IP" -m tcp --dport 80 -j KUBE-SVC-4N57TFCL4MD7ZTDA
User #suren was right about the headless service. In my case, I was just using a wrong reference.
The Kube-DNS naming convention is
service.namespace.svc.cluster-domain.tld
and the default cluster domain is cluster.local
In my case, in order to reach the pods, one has to use:
curl -I elasticsearch.elasticsearch-namespace
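The fully qualified form works as well, and an individual StatefulSet pod can be addressed as pod.service.namespace.svc.cluster-domain. A quick sketch, assuming the default cluster.local domain:
curl -I elasticsearch.elasticsearch-namespace.svc.cluster.local:9200
curl -I es-cluster-0.elasticsearch.elasticsearch-namespace.svc.cluster.local:9200   # one specific pod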
