Trying to copy files to pods with `kubectl cp`, but getting Error: unknown flag: --all-namespaces

I am trying to copy and execute a bash script in a pod (which has one container):
kubectl cp ../docker/scripts/upload_javadumps.sh ${POD}:/opt -n apm
This command works perfectly, but we have multiple namespaces, hence I wanted to use --all-namespaces as shown below:
kubectl cp ../docker/scripts/upload_javadumps.sh ${POD}:/opt --all-namespaces
echo "Successfully copied the upload_javadumps.sh script"
This errors out saying: Error: unknown flag: --all-namespaces
How do I use --all-namespaces with the kubectl cp command?

The --all-namespaces flag doesn't exist for kubectl cp; you can check this with kubectl cp -h.
In your case I would go with a simple bash loop like this:
for ns in namespace1 namespace2; do kubectl cp ../docker/scripts/upload_javadumps.sh ${POD}:/opt -n $ns;done
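If the pod name differs per namespace, a minimal sketch (the app=apm label used to look the pod up is an assumption) could resolve it inside the loop:
for ns in namespace1 namespace2; do
  # look up the target pod in this namespace (the label is an assumption)
  pod=$(kubectl get pods -n "$ns" -l app=apm -o jsonpath='{.items[0].metadata.name}')
  kubectl cp ../docker/scripts/upload_javadumps.sh "$pod":/opt -n "$ns"
done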

Related

Why are shell builtins not found when using Kubectl exec

I am making a bash script to copy files from a Kubernetes pod running Debian. When I include the following line:
kubectl --namespace "$namesp" exec "$pod" -c "$container" -- cd /var
it errors out:
OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: "cd": executable file not found in $PATH: unknown
command terminated with exit code 126
I also tried
kubectl --namespace "$namesp" exec "$pod" -c "$container" -- builtin
kubectl --namespace "$namesp" exec "$pod" -c "$container" -it -- cd /var
which gave the same result.
I was able to resolve the issue by changing the command to:
kubectl --namespace "$namesp" exec "$pod" -c "$container" -- /bin/bash -c "builtin"
Would love to understand why the first command(s) don't work and the latter one does. I would have thought that builtin commands are the one group of commands that would always be found, in contrast to commands that rely on the PATH environment variable.
kubectl exec runs an executable in a running container; the command has to exist as an executable inside the container.
Neither builtin nor cd is an executable in your container; they are shell builtins. Only /bin/bash is an executable.
To execute a shell builtin, you have to start the shell and pass the builtin as the command argument, as in your third example.
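For illustration, using the same variables as above (a sketch, not tied to any particular image):
# fails: cd is a shell builtin, not an executable on the container's PATH
kubectl --namespace "$namesp" exec "$pod" -c "$container" -- cd /var
# works: start a shell and let the shell run the builtin
kubectl --namespace "$namesp" exec "$pod" -c "$container" -- /bin/bash -c 'cd /var && ls'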

grep for pods and select each result and pass a command in loop

I have a few pods running in my Kubernetes cluster. I am writing a shell script in which I want to grep for certain pods and then execute a command against each pod in the grep result.
Let's say I grep for pods with the command:
kubectl get pods | grep test
the results are:
Test-0
Test-1
Test-2
From the result, I want to select each pod and execute a command for it in a loop. For example, for the first pod:
kubectl exec -it Test-0 -- mysqldump.......
After finishing the first pod, it should process the second pod, and so on.
for pod in $(kubectl get pod -o name | grep -i test); do
  kubectl exec "$pod" -- ls -ltr
done
Replace ls -ltr with mysqldump .....
Get the pod names and then use a for loop to execute the command in each pod:
#!/bin/bash
# Pod names are in the first column of "kubectl get pods" output
pods=$(kubectl get pods --no-headers | awk '{print $1}' | grep -i test)
for i in $pods
do
  kubectl exec -it "$i" -- echo "test"
done
Selecting your target pods using labels is less error-prone and supports matching multiple pods:
kubectl get pods --selector <key>=<value>,<key>=<value> --namespace <name> -oname | xargs -I{} kubectl exec -it {} --namespace <name> -- mysqldump ...
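For the mysqldump use case specifically, a minimal sketch (the app=test label and the dump options are assumptions) that writes one dump file per pod:
for pod in $(kubectl get pods --selector app=test -o name); do
  name=${pod#pod/}   # strip the "pod/" prefix for use as a file name
  kubectl exec "$name" -- mysqldump --all-databases > "${name}.sql"
done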

How to let Kubernetes pod run a local script

I want to run a local script within a Kubernetes pod and then assign the output to a Linux variable.
Here is what I tried:
# if I directly run -c "netstat -pnt |grep ssh", I get output assigned to $result:
cat check_tcp_conn.sh
#!/bin/bash
result=$(kubectl exec -ti <pod_name> -- /bin/bash -c "netstat -pnt |grep ssh")
echo "result is $result"
What I want is something like this:
#script to be called:
cat netstat_tcp_conn.sh
#!/bin/bash
netstat -pnt |grep ssh
#script to call netstat_tcp_conn.sh:
cat check_tcp_conn.sh
#!/bin/bash
result=$(kubectl exec -ti <pod_name> -- /bin/bash -c "./netstat_tcp_conn.sh")
echo "result is $result"
The result showed: result is /bin/bash: ./netstat_tcp_conn.sh: No such file or directory.
How can I let the Kubernetes pod execute netstat_tcp_conn.sh, which is on my local machine?
You can use the following command to execute your local script in your pod:
kubectl exec POD -- /bin/sh -c "`cat netstat_tcp_conn.sh`"
Alternatively, you can copy local files into the pod with kubectl cp, for example kubectl cp /tmp/foo <pod>:/tmp/.
Then you can change its permissions, make it executable, and run it with kubectl exec.
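Putting that together, a minimal sketch of the copy-and-execute approach (pod name and paths are placeholders), capturing the output locally:
kubectl cp ./netstat_tcp_conn.sh <pod_name>:/tmp/netstat_tcp_conn.sh
kubectl exec <pod_name> -- chmod +x /tmp/netstat_tcp_conn.sh
result=$(kubectl exec <pod_name> -- /tmp/netstat_tcp_conn.sh)
echo "result is $result"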

Jenkins - Assign variable inside sh script

I would like to create a variable named POD inside the script to hold the kubectl output, and then pass this variable when running kubectl port-forward pods/...
But I received the error below:
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 151: illegal string body character after dollar sign;
solution: either escape a literal dollar sign "\$5" or bracket the value expression "${5}" # line 151, column 80.
e-context ${KUBE_CLUSTER_STAGE}
Here is my script.
environment {
POD = ''
}
steps {
script {
withCredentials([file(credentialsId: 'mbtkubeconfig', variable: 'config')]){
try {
// Expose PostreSQL
sh '''#!/bin/sh
chmod ug+w ${config}
export KUBECONFIG=\${config}
kubectl config use-context ${KUBE_CLUSTER_STAGE}
kubectl config set-context --current --namespace=database
POD = `$(kubectl get po -n database --selector='role==master' -o jsonpath="{.items[0].metadata.name}")`
kubectl port-forward pods/$POD 5432:64000 & echo \$! > filename.txt
'''
When I tried it without the variable, there was no error. Here is the script that runs without any error:
sh """#!/bin/sh
chmod ug+w ${config}
export KUBECONFIG=\${config}
kubectl config use-context ${KUBE_CLUSTER_STAGE}
kubectl config set-context --current --namespace=database
kubectl get pods -n database
kubectl port-forward pods/my-postgres-postgresql-helm-0 5432:64000 & echo \$! > filename.txt
"""
When you run commands with sh, make sure you use double quotes ("""), not single quotes ('''): Groovy variables will only be resolved when using "${config}" inside a double-quoted string.
By the way, it is considered best practice to reference environment variables with the env. prefix, although it is not needed to resolve the variable. For instance, refer to your cluster stage as ${env.KUBE_CLUSTER_STAGE}.
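For illustration, a sketch of how the sh step could look once the string is triple-double-quoted, with the shell's own dollars escaped as \$ so Groovy does not try to interpolate them, and with no spaces around the shell assignment (selector, ports and file name are taken from the question):
sh """#!/bin/sh
chmod ug+w ${config}
export KUBECONFIG=\${config}
kubectl config use-context ${KUBE_CLUSTER_STAGE}
kubectl config set-context --current --namespace=database
POD=\$(kubectl get po -n database --selector='role==master' -o jsonpath="{.items[0].metadata.name}")
kubectl port-forward pods/\$POD 5432:64000 & echo \$! > filename.txt
"""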

Error from server (NotFound): pods "\nmongo-client-79667cc85d-tsg72" not found

I'm trying to make a backup of Mongo on K8s with this script:
export POD=$(kubectl get pods -l app=mongo-client -o custom-columns=:metadata.name -n espace-client)
kubectl exec "$POD" sh -c 'mongodump --archive' > ~/backup/mongo/$(date +%F).db.dump
I get this error:
Error from server (NotFound): pods "\nmongo-client-79667cc85d-tsg72" not found
When I check the pods, I can see mongo-client-79667cc85d-tsg72
When I put the name in directly instead of using the variable, it works well, so it might be because of the leading \n. How can I avoid it? How can I remove it from the name?
Your kubectl get pods command is constrained with a namespace selector -n espace-client. Your kubectl exec command also needs the namespace flag.
The output of your kubectl get pods command has a newline before the pod name because the first line of the output is the column header (which is empty in your case).
To prevent this and get only the name as output, you can suppress the column headers with the --no-headers flag:
kubectl get pods -l app=mongo-client -o custom-columns=:metadata.name -n espace-client --no-headers
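Putting both fixes together, a sketch of the backup commands (label, namespace and paths are taken from the question):
export POD=$(kubectl get pods -l app=mongo-client -n espace-client -o custom-columns=:metadata.name --no-headers)
kubectl exec -n espace-client "$POD" -- sh -c 'mongodump --archive' > ~/backup/mongo/$(date +%F).db.dump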
