kubernetes redis-cluster pod-by-pod login using a shell script? - bash

Currently, I am trying to prepare a shell script that backs up the redis-cluster data from each pod running in a particular namespace.
I want to achieve it in two steps:
log in to each pod one by one, connect with redis-cli, and execute the BGSAVE command.
take a copy of the dump.rdb file from each pod and place it in a backup folder.
For the first part, I have prepared the following code snippet:
#!/bin/bash
NS=$1
for i in $(cat "${NS}_POD_LIST"); do
    echo "POD: $i"
    kubectl exec -it "$i" -n "$NS" -- bash -c "redis-cli -c BGSAVE"
    ## After BGSAVE I want to get out of redis-cli, but it gets stuck here, unable to switch to the next pod
done
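One likely culprit for the hang is the -it pair: it requests an interactive TTY and keeps stdin attached, which a scripted loop cannot satisfy cleanly. A minimal sketch of the same loop without it (reusing the "$NS"_POD_LIST file from above); redis-cli exits on its own once the command is issued:
#!/bin/bash
NS=$1
for i in $(cat "${NS}_POD_LIST"); do
    echo "POD: $i"
    # No -it: run redis-cli non-interactively; BGSAVE forks a
    # background save and the CLI returns immediately.
    kubectl exec "$i" -n "$NS" -- redis-cli -c BGSAVE
done
Since BGSAVE only starts the save, you may want to poll LASTSAVE before copying dump.rdb out.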
Now, for the second part, I want to copy the dump.rdb file from each pod to the destination folder, but this will be executed from outside the pod:
SOURCE_DIR="/redis/data"
BACKUP_DIR="/pod/backup"
for i in $(cat "${NS}_POD_LIST"); do
    echo "POD: $i"
    kubectl cp "$NS/$i:$SOURCE_DIR" "$BACKUP_DIR"
done
Please let me know how I can achieve this, starting from these code snippets.
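In case a sketch of the second part helps: kubectl cp is more predictable when copying a single file, and tagging the destination with the pod name avoids each copy overwriting the last. This assumes dump.rdb sits directly under /redis/data:
#!/bin/bash
NS=$1
SOURCE_DIR="/redis/data"
BACKUP_DIR="/pod/backup"
for i in $(cat "${NS}_POD_LIST"); do
    echo "POD: $i"
    # Copy just the dump file, naming it after the pod so
    # backups from different pods do not collide.
    kubectl cp "$NS/$i:$SOURCE_DIR/dump.rdb" "$BACKUP_DIR/${i}-dump.rdb"
done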

Related

Run a bash script file inside a node pod using kubectl exec

I have a Kubernetes cluster with one master and two worker nodes. On one of the worker nodes there is a pod running a container from a CentOS base image. The OS release details of the running container are:
NAME="CentOS Linux" VERSION="8" ID="centos"
I wrote a script (script.sh) inside this container and I want to run it from the master node using kubectl:
for i in {1..10}
do
    sleep 1 && top -p 11,12 -n1 | grep 'nginx' | awk '{print $2,"\t"$13,"\t"$7,"\t"$10}'
done
From the master node, when I executed the command below, the script started executing, but the sj.txt file contained "TERM environment variable not set." as its output instead of the process's CPU and memory details. When I ran the command with the "-it" option, I got the expected output in sj.txt.
kubectl exec nginx-as-backend-server-57fd5d8d7b-6h24c -n localenv-pp \
  -- bash -c "/tmp/bla.sh &>> ./tmp/sj.txt"
Can someone please explain why the TERM environment error is gone after using -it with kubectl exec?
I tried the following to fix the "TERM environment variable not set." error:
exporting TERM=xterm inside the bashrc file and reloading it, but I still faced the same issue.
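For what it's worth, -t allocates a pseudo-terminal (and sets TERM), which is why the error disappears with -it. If the script only needs top's output, a TTY-free alternative is top's batch mode; a sketch of the same loop, keeping the PIDs and fields from the question:
for i in {1..10}
do
    # -b (batch mode) lets top write to a pipe or file without a TTY
    sleep 1 && top -b -p 11,12 -n1 | grep 'nginx' | awk '{print $2,"\t"$13,"\t"$7,"\t"$10}'
done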

How to execute a mq script file in a Kubernetes Pod?

I have a .mqsc file with commands for creating queues (IBM MQ).
How can I run this script with kubectl?
kubectl exec -n test -it mq-0 -- /bin/bash -f create_queues.mqsc doesn't work.
log:
/bin/bash: create_queues.mqsc: No such file or directory command terminated with exit code 127
Most probably your script is not under the "/" directory in the container. You need to find its full path, and then execute the script using that path.
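A sketch of that approach: locate the file first, then feed it to runmqsc, which reads MQSC commands from stdin (the queue manager name QM1 and the final path are assumptions; substitute your own):
# Find where the script actually lives inside the container
kubectl exec -n test mq-0 -- find / -name create_queues.mqsc 2>/dev/null
# Feed it to the queue manager; runmqsc reads MQSC from stdin
kubectl exec -n test -it mq-0 -- bash -c "runmqsc QM1 < /path/to/create_queues.mqsc"
If the .mqsc file lives on your local machine instead, you can stream it over stdin with -i:
kubectl exec -i -n test mq-0 -- runmqsc QM1 < create_queues.mqsc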

Executing a kubernetes pod with the use of pod name

I am writing a shell script for executing a pod for which the syntax is:
winpty kubectl --kubeconfig="C:\kubeconfig" -n namespace exec -it podname bash
This works fine, but since the pod name is not stable and changes with every deployment, is there an alternative?
Thanks.
You can use the normal $ kubectl exec command, but resolve the changing pod name dynamically.
Assuming that you have a deployment and labeled pods (app=example), simply execute:
$ kubectl exec -it $(kubectl get pods -l app=example -o custom-columns=:metadata.name) -- bash
EDIT:
You can also execute:
POD_NAME=$(kubectl get pods -l app=example -o custom-columns=":metadata.name")
or
POD_NAME=$(kubectl get pods -l app=example -o jsonpath="{.items[0].metadata.name}")
and finally:
$ winpty kubectl exec -ti $POD_NAME -- bash
Make sure that you execute the command in the proper namespace - you can also add the -n flag to define it.
You can use the following command:
kubectl -n <namespace> exec -it deploy/<deployment-name> -- bash
Add a service to your application:
As you know, pods are ephemeral; they come in and out of existence dynamically to keep your application in line with your configuration. This behavior implements the scaling and self-healing aspects of Kubernetes.
Your application will consist of one or more pods that are accessible through a service. The service's name and address do not change, so it acts as the stable interface to reach your application.
This method works whether your application has one pod or many.
Does that help?
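If the application does not have a service yet, one can be created straight from the deployment; a sketch assuming a deployment named example serving on port 80:
# Expose the deployment's pods behind a stable Service name
kubectl expose deployment example --name=example-svc --port=80 --target-port=80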

Docker is not running my entire entrypoint.sh script

I have created a docker container to stand up Elasticsearch. Elasticsearch is started and managed by supervisor, which is also installed in my docker container. I have created an entrypoint.sh script and added the following to the end of my Dockerfile:
ENTRYPOINT ["/usr/local/startup/entrypoint.sh"]
My entrypoint.sh script looks as follows:
#!/bin/bash -x
# Start Supervisor if not already running
# Start Supervisor if not already running
if ! ps aux | grep -q "[s]upervisor"; then
    echo "Starting supervisor service"
    exec /usr/bin/supervisord -nc /etc/supervisor/supervisord.conf
else
    echo "Supervisor is currently running"
fi
echo "creating /.es_created"
touch /.es_created
exec "$@"
When I start my docker container, supervisor starts and in turn successfully starts Elasticsearch. The problem is that the script never executes the last bit, creating the .es_created file. It seems like once the
exec /usr/bin/supervisord -nc /etc/supervisor/supervisord.conf
command is executed, it just stops there. I added -x to the #!/bin/bash line so I could call docker logs on the container, and the log confirms that the last echo and touch commands are never called. I feel like I may be missing something about entrypoint scripts, which is why this is happening, but ultimately I want to be able to execute some commands after Elasticsearch has started so I can configure a proper index and insert some data.
Your guess
It seems like once the
exec /usr/bin/supervisord -nc /etc/supervisor/supervisord.conf
command is executed, it just stops there.
is correct: the exec builtin of bash has exactly these semantics: the specified program is executed and replaces the parent shell process (it is an exec system call).
So your question is actually not a Docker issue; it is rather related to Bash. For more details on the exec shell builtin, you could for example take a look at this askubuntu question, or read the corresponding doc in the Bash reference manual.
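A quick way to see these semantics in action (a standalone illustration, not from the original post):
# 'exec sleep 2' replaces the shell process, so the echo never runs:
bash -c 'exec sleep 2; echo "this line is never printed"'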
To sum up, you should try to just write
/usr/bin/supervisord -nc /etc/supervisor/supervisord.conf
If that command indeed runs in the background, it should be OK. Otherwise, you could of course append an &:
/usr/bin/supervisord -nc /etc/supervisor/supervisord.conf &
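Putting it together, a revised entrypoint.sh might look like this (a sketch; whether supervisord should be backgrounded depends on your supervisord.conf, as noted above):
#!/bin/bash -x
# Start Supervisor in the background if not already running,
# so the script can continue past this point.
if ! ps aux | grep -q "[s]upervisor"; then
    echo "Starting supervisor service"
    /usr/bin/supervisord -nc /etc/supervisor/supervisord.conf &
else
    echo "Supervisor is currently running"
fi
echo "creating /.es_created"
touch /.es_created
# Hand off to the container's CMD, if any
exec "$@"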

Docker RUN statement (modifying a file) not executed

I am experiencing strange behavior when building a Dockerfile (in https://github.com/Krijger/es-nagios-docker). Basically, I add a file in order to append its contents to a file in the image:
ADD es-command /tmp/
RUN cat tmp/es-command >> /opt/nagios/etc/objects/commands.cfg
The problem is that, while /tmp/es-command is present in the resulting image, the commands.cfg file was not changed.
As a prelude to the accepted answer: my Dockerfile extends cpuguy83/nagios, which defines /opt/nagios/etc as a volume.
Good to see the sample code, which helps find the root cause.
Your docker image comes from cpuguy83/nagios, built from this Dockerfile: https://github.com/cpuguy83/docker-nagios/blob/master/Dockerfile
There you can see that the /opt/nagios/etc directory is declared as a VOLUME:
VOLUME ["/opt/nagios/var", "/opt/nagios/etc", "/opt/nagios/libexec", "/var/log/apache2", "/usr/share/snmp/mibs"]
Changes made to a volume directory are not committed to the next image layer by your build. That is why you can see your changes when you enter the container, but they are lost when it exits.
Here is how I use it:
$ ls ./
configure.sh
commands.cfg
$ cat configure.sh
#!/bin/bash
script_path=$( cd "$( dirname "$0" )" && pwd )
cp ${script_path}/commands.cfg /opt/nagios/etc/objects/
$ docker run -d --name nagios cpuguy83/nagios
$ docker run --rm -v $(pwd):/tmp --volumes-from nagios --entrypoint /tmp/configure.sh cpuguy83/nagios
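To check that the copy actually reached the volume, you could read the file back from the running container (assuming it is still named nagios as above):
docker exec nagios cat /opt/nagios/etc/objects/commands.cfg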
