grep for pods and select each result and pass a command in loop - shell

I have a few pods running in my Kubernetes cluster. I am developing a shell script in which I want to grep for a few pods and then select each pod from the grep result to execute a command.
Let's say I grep for pods with the command:
kubectl get pods | grep test
the results are:
Test-0
Test-1
Test-2
From the result, I want to select each pod and execute a command for it in a loop.
for example:
for the first pod:
kubectl exec -it Test-0 -- mysqldump.......
After finishing the first pod, it should process the second pod, and so on.

for pod in $(kubectl get pod -o name | grep -i Test); do
    kubectl exec "$pod" -- ls -ltr
done
Replace ls -ltr with mysqldump .....
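For example, a minimal sketch that dumps a hypothetical database mydb from each matching pod into one file per pod (credentials and dump options depend on your setup):
for pod in $(kubectl get pod -o name | grep -i test); do
    # $pod is of the form pod/test-0; strip the prefix for the output file name
    kubectl exec "$pod" -- mysqldump mydb > "dump-${pod#pod/}.sql"
done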

Get the pod names and then use a for loop to execute a command in each pod:
#!/bin/bash
pods=$(kubectl get pods | awk '{print $1}' | grep -i test)
for i in $pods
do
kubectl exec -it "$i" -- echo "test"
done

Selecting your target pods using labels is less error-prone and can match multiple pods:
kubectl get pods --selector <key>=<value>,<key>=<value> --namespace <name> -oname | xargs -I{} kubectl exec -it {} --namespace <name> -- mysqldump ...
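For instance, a hedged sketch with hypothetical values (label app=test, namespace default, and ls -ltr standing in for the real command); -it is dropped because xargs does not provide a TTY:
kubectl get pods --selector app=test --namespace default -o name | xargs -I{} kubectl exec {} --namespace default -- ls -ltr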

Related

Get logs from all pods in namespace using xargs

Is there any way to get all logs from pods in a specific namespace by running a dynamic command like a combination of awk and xargs?
kubectl get pods | grep Running | awk '{print $1}' | xargs kubectl logs | grep value
I have tried the command above, but it fails as if kubectl logs is missing the pod name:
error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'.
POD or TYPE/NAME is a required argument for the logs command
See 'kubectl logs -h' for help and examples
Do you have any suggestions about how to get all logs from Running pods?
Think about what your pipeline is doing:
The kubectl logs command takes as an argument a single pod name, but through your use of xargs you're passing it multiple pod names. Make liberal use of the echo command to debug your pipelines; if I have these pods in my current namespace:
$ kubectl get pods -o custom-columns=name:.metadata.name
name
c069609c6193930cd1182e1936d8f0aebf72bc22265099c6a4af791cd2zkt8r
catalog-operator-6b8c45596c-262w9
olm-operator-56cf65dbf9-qwkjh
operatorhubio-catalog-48kgv
packageserver-54878d5cbb-flv2z
packageserver-54878d5cbb-t9tgr
Then running this command:
kubectl get pods | grep Running | awk '{print $1}' | xargs echo kubectl logs
Produces:
kubectl logs catalog-operator-6b8c45596c-262w9 olm-operator-56cf65dbf9-qwkjh operatorhubio-catalog-48kgv packageserver-54878d5cbb-flv2z packageserver-54878d5cbb-t9tgr
To do what you want, you need to arrange to call kubectl logs multiple times with a single argument. You can do that by adding -n1 to your xargs command line. Keeping the echo command, running this:
kubectl get pods | grep Running | awk '{print $1}' | xargs -n1 echo kubectl logs
Gets us:
kubectl logs catalog-operator-6b8c45596c-262w9
kubectl logs olm-operator-56cf65dbf9-qwkjh
kubectl logs operatorhubio-catalog-48kgv
kubectl logs packageserver-54878d5cbb-flv2z
kubectl logs packageserver-54878d5cbb-t9tgr
That looks more reasonable. If we drop the echo and run:
kubectl get pods | grep Running | awk '{print $1}' | xargs -n1 kubectl logs | grep value
Then you will get the result you want. You may want to add the --prefix argument to kubectl logs so that you know which pod generated the match:
kubectl get pods | grep Running | awk '{print $1}' | xargs -n1 kubectl logs --prefix | grep value
Not directly related to your question, but you can lose that grep:
kubectl get pods | awk '/Running/ {print $1}' | xargs -n1 kubectl logs --prefix | grep value
And even lose the awk:
kubectl get pods --field-selector=status.phase==Running -o name | xargs -n1 kubectl logs --prefix | grep value
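If you run this often, a small wrapper script keeps the pipeline in one place (a sketch; the script name and its two arguments are hypothetical):
#!/bin/bash
# Usage: ./grep-running-pod-logs.sh <namespace> <pattern>
ns="${1:-default}"
pattern="${2:?usage: $0 <namespace> <pattern>}"
kubectl get pods -n "$ns" --field-selector=status.phase==Running -o name \
  | xargs -n1 kubectl logs -n "$ns" --prefix \
  | grep "$pattern"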

How to let Kubernetes pod run a local script

I want to run a local script within a Kubernetes pod and then assign the output to a Linux variable.
Here is what I tried:
# if I directly run -c "netstat -pnt |grep ssh", I get output assigned to $result:
cat check_tcp_conn.sh
#!/bin/bash
result=$(kubectl exec -ti <pod_name> -- /bin/bash -c "netstat -pnt |grep ssh")
echo "result is $result"
What I want is something like this:
#script to be called:
cat netstat_tcp_conn.sh
#!/bin/bash
netstat -pnt |grep ssh
#script to call netstat_tcp_conn.sh:
cat check_tcp_conn.sh
#!/bin/bash
result=$(kubectl exec -ti <pod_name> -- /bin/bash -c "./netstat_tcp_conn.sh")
echo "result is $result"
The result showed: result is /bin/bash: ./netstat_tcp_conn.sh: No such file or directory.
How can I let the Kubernetes pod execute netstat_tcp_conn.sh, which is on my local machine?
You can use the following command to execute your script in your pod:
kubectl exec POD -- /bin/sh -c "`cat netstat_tcp_conn.sh`"
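Applied to your variable assignment, a sketch of the same idea (the pod name is a placeholder) would be:
result=$(kubectl exec <pod_name> -- /bin/sh -c "$(cat netstat_tcp_conn.sh)")
echo "result is $result"
Using $(cat ...) instead of backticks avoids nesting problems, and dropping -t keeps stray carriage returns out of the captured output.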
You can copy local files into a pod using a kubectl cp command like kubectl cp /tmp/foo <pod_name>:/tmp/.
Then you can change its permissions to make it executable and run it using kubectl exec.
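For example, a minimal sketch of that copy-and-run approach (the pod name and the /tmp path are placeholders):
pod=<pod_name>
kubectl cp netstat_tcp_conn.sh "$pod":/tmp/netstat_tcp_conn.sh
kubectl exec "$pod" -- chmod +x /tmp/netstat_tcp_conn.sh
result=$(kubectl exec "$pod" -- /tmp/netstat_tcp_conn.sh)
echo "result is $result"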

Run Linux Command inside Docker Container via command line

I have this command
clear; sudo kubectl exec -it $(kubectl get pods | grep 'app' | cut -d ' ' -f 1) ash
I will land here
/src #
I want to also run a command ls
/src # ls
Procfile composer.lock phpunit.xml server.php
app config public storage
artisan database resources tests
benu.code-workspace heroku.sh routes vendor
bootstrap package-lock.json run.sh webpack.mix.js
composer.json package.json scripts
/src #
I've tried
clear; sudo kubectl exec -it $(kubectl get pods | grep 'app' | cut -d ' ' -f 1) ash echo "ls"
and
clear; sudo kubectl exec -it $(kubectl get pods | grep 'app' | cut -d ' ' -f 1) echo "ls"
Please correct me
If you want to run the command, you want kubectl exec -it $podname -- ls. If you put echo "ls", then that is the command which runs, i.e. it prints "ls".
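So applied to your one-liner, a sketch (assuming the grep matches exactly one pod) would be:
clear; sudo kubectl exec -it "$(kubectl get pods | grep 'app' | cut -d ' ' -f 1)" -- ls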

Trying to copy files to Pods with `kubectl cp`, but getting Error: unknown flag: --all-namespaces

Trying to copy and execute a bash script in a POD (which has one container)
kubectl cp ../docker/scripts/upload_javadumps.sh ${POD}:/opt -n apm
This command works perfectly, but we have multiple namespaces, hence I wanted to use --all-namespaces as shown below,
which errors out saying Error: unknown flag: --all-namespaces.
How do I use --all-namespaces with the kubectl cp command?
kubectl cp ../docker/scripts/upload_javadumps.sh ${POD}:/opt --all-namespaces
echo "Successfully copied the upload_javadumps.sh script"```
For kubectl cp, the --all-namespaces flag doesn't exist; you can check it with kubectl cp -h.
In your case I would go with a simple bash loop like this:
for ns in namespace1 namespace2; do kubectl cp ../docker/scripts/upload_javadumps.sh ${POD}:/opt -n $ns;done
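Note that ${POD} may resolve to a different name in each namespace; a variant that looks the pod up per namespace (the label app=apm is a hypothetical selector) could look like:
for ns in namespace1 namespace2; do
    pod=$(kubectl get pods -n "$ns" -l app=apm -o name | head -n1)
    kubectl cp ../docker/scripts/upload_javadumps.sh -n "$ns" "${pod#pod/}:/opt"
done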

Injecting a script to Pod with docker returns EOF

My goal is to execute a script once on a permanently running pod in Kubernetes. The pod is called busybox-<SOME_ID> and lives in the namespace default. Therefore, I wrote this script, called scan-one-pod.sh:
#!/bin/bash
export MASTER_IP=192.168.56.102
export SCRIPT_NAME=script.sh
export POD_NAMESPACE=default
export POD_NAME=busybox
echo "echo HALLO" | ssh ubuntu#$MASTER_IP
export POD_ID=$(kubectl get po | grep busybox | sed -n '1p'|awk '{print $1}')
kubectl cp $SCRIPT_NAME $POD_NAMESPACE/$POD_ID:.
kubectl exec $POD_ID -- chmod +x $SCRIPT_NAME
export CONTAINER_ID=$(kubectl describe pod busybox | grep 'Container ID' | sed -n '1p'|awk '{print $3}')
ssh -t ubuntu@$MASTER_IP "sudo docker exec -u root $CONTAINER_ID -- ./script.sh"
The referred script script.sh has the following content:
$ kubectl exec $POD_ID -- cat script.sh
#!/bin/bash
echo "test" >> test
cp test test-is-working
However, it is not possible to run the script on the pod:
the files test and test-is-working are not created
the script scan-one-pod.sh returns just EOF:
$ ./scan-one-pod.sh
Pseudo-terminal will not be allocated because stdin is not a terminal.
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-87-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
155 packages can be updated.
72 updates are security updates.
HALLO
[sudo] password for ubuntu:
EOF
Connection to 192.168.56.102 closed.
If I execute the docker command directly, remotely on my Kubernetes controller, I get the same EOF message:
ubuntu#controller:~$ export CONTAINER_ID=$(kubectl describe pod busybox | grep 'Container ID' | sed -n '1p'|awk '{print $3}')
ubuntu#controller:~$ sudo docker exec -u root $CONTAINER_ID ./script.sh
EOF
If I execute it from my local workstation via kubectl exec I get this error:
$ kubectl exec $POD_ID ./script.sh
rpc error: code = 13 desc = invalid header field value "oci runtime error: exec failed: container_linux.go:247: starting container process caused \"no such file or directory\"\n"
I don't know, which missing file they are referring to, but the script.sh-file is present and the busybox-pod seems to be running:
$ kubectl exec $POD_ID ls script.sh
script.sh
$ kubectl get po busybox-6bdf9b5bbc-4skds
NAME READY STATUS RESTARTS AGE
busybox-6bdf9b5bbc-4skds 1/1 Running 10 12d
Question: As far as I know, EOF means End-Of-File. Which file's end it refers to would be important for me to know, and why is that a problem?
Thanks in advance, any help is appreciated :)
