Run a shell script using kubectl exec - OCI runtime exec failed: exec failed: container_linux.go:346 - shell

I am trying to run a shell script via kubectl exec.
E.g. kubectl exec -n abc podxyz -- /root/test/./generate.sh
The script runs in the podxyz container but returns the below error, breaking the rest of the flow.
"command terminated with exit code 126"]
"OCI runtime exec failed: exec failed: container_linux.go:346: starting container process caused \"no such file or directory\": unknown"]}
I have tried using /bin/sh and bash after the --, but that did not help.
Note - the above command is executed as part of another script.
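A quick way to narrow this down (a sketch; the namespace, pod, and path are simply the ones from the command above) is to check that the script exists in the container and that the interpreter named in its shebang line is actually present in the image, since this particular "no such file or directory" often refers to the interpreter (or to Windows line endings in the script) rather than to the script path itself:
kubectl exec -n abc podxyz -- ls -l /root/test/generate.sh
kubectl exec -n abc podxyz -- head -1 /root/test/generate.sh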

Related

How to pass an env variable to a kubectl exec script call?

How do I pass an environment variable into a kubectl exec command that is calling a script?
kubectl exec client -n namespace -- /mnt/script.sh
In my script.sh, I need the value of the passed variable.
I tried:
kubectl exec client -n namespace -- PASSWORD=pswd /mnt/script.sh
which errors with:
OCI runtime exec failed: exec failed: unable to start container process: exec: "PASSWORD=pswd": executable file not found in $PATH: unknown
You can use env(1):
kubectl exec client -n namespace -- \
env PASSWORD=pswd /mnt/script.sh
or explicitly wrap the command in an sh(1) invocation so a shell processes it:
kubectl exec client -n namespace -- \
sh -c 'PASSWORD=pswd /mnt/script.sh'
This comes with the usual caveats around kubectl exec: a "distroless" image may not have these standard tools; you're only modifying one replica of a Deployment; your changes will be lost as soon as the Pod is deleted, which can sometimes happen outside of your control.
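If the value should come from a variable already set in your local shell rather than a literal, a minimal variation on the env form (the LOCAL_PASSWORD name is only illustrative) is to let your local shell expand it before kubectl sends the arguments:
kubectl exec client -n namespace -- \
env PASSWORD="$LOCAL_PASSWORD" /mnt/script.sh
The double quotes make the expansion happen on your machine; kubectl passes the result through as a single argument, and env sets it only for script.sh.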

Prevent a Makefile command from printing an error after exiting a Docker container

Consider the following Makefile:
bash:
	docker run -it --rm bash:4.4
When I run the Makefile target, attach to the Docker container, trigger an error on the console, and exit, I get a "make: *** [bash] Error 127":
➜ make bash
docker run -it --rm bash:4.4
bash-4.4# peng
bash: peng: command not found
bash-4.4# exit
exit
make: *** [bash] Error 127
When I simply run the same command outside of the Makefile context, there is no error.
Is there a way I can prevent this error from being printed after exiting the Docker container? This is a minimal example - we would like to use a Makefile for running Docker-related tasks in a development setup.
Make will run the recipe you give it in a shell. If the shell exits with a non-0 error code, make thinks that the operation failed and prints that message. So, all you have to do is make sure that the recipe doesn't fail. For example:
bash:
	docker run -it --rm bash:4.4 || true
Now, if the docker command exits with a non-zero code, the || true will be run and exit with a success code.
Alternatively, you could prefix the recipe with -, which will still print a message, but make will ignore the error:
bash:
	-docker run -it --rm bash:4.4
Just be aware that if you do this, you have no way to inform make that the command you tried to run didn't succeed.
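If you still want the target to report how the container exited without failing the build, one possible sketch (the echo wording is illustrative) is to capture the shell's exit status inside the recipe, remembering that make needs $$ to pass a literal $ through to the shell:
bash:
	docker run -it --rm bash:4.4; status=$$?; echo "docker exited with status $$status"; true
Because the line ends with true, make sees a success regardless of how the container exited.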

kubectl exec: $PATH: unknown command terminated with exit code 126

Trying to exec into a container with the following command:
kubectl exec -it my-pod my-container1 -- bash
Gives error:
OCI runtime exec failed: exec failed: container_linux.go:367: starting container process caused: exec: "my-container1": executable file not found in $PATH: unknown
command terminated with exit code 126
The pod my-pod has two containers, and my-container1 uses an Alpine image with bash installed.
When trying to get a shell into the container, it is not able to find bash.
Kubectl client version: v1.17.0
Adding -c before the container name worked:
kubectl exec -it my-pod -c my-container1 -- bash
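For reference, when a pod has more than one container, the general shape of the command is (all three names below are placeholders):
kubectl exec -it <pod-name> -c <container-name> -- <command>
Without -c, kubectl treated my-container1 as the command to run, which is exactly what the "executable file not found in $PATH" message above is complaining about.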

How to execute an MQ script file in a Kubernetes Pod?

I have a .mqsc file with commands for creating queues (IBM MQ).
How do I run the script via kubectl?
kubectl exec -n test -it mq-0 -- /bin/bash -f create_queues.mqsc doesn't work.
log:
/bin/bash: create_queues.mqsc: No such file or directory
command terminated with exit code 127
Most probably your script is not under the "/" directory in the container. You need to find its full path, and after that you can execute the script.
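If the end goal is to run the MQSC commands against a queue manager, one possible sketch is to feed the file to runmqsc on stdin; the queue manager name QM1 and the /etc/mqm/create_queues.mqsc path are assumptions here, and this presumes the IBM MQ image ships runmqsc:
kubectl exec -n test -it mq-0 -- /bin/bash -c "runmqsc QM1 < /etc/mqm/create_queues.mqsc"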

Kubectl: get a shell to a running container under Windows

I'm trying to log into a running container using kubectl, according to the instructions in https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/, but I'm failing miserably:
kubectl exec -it mycontainer -- /bin/bash
Unable to use a TTY - input is not a terminal or the right kind of file
rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "exec: \"D:/Applications/Git/usr/bin/bash\": stat D:/Applications/Git/usr/bin/bash: no such file or directory"
command terminated with exit code 126
It looks like kubectl tries to exec bash on my machine, which is totally not what I want to achieve.
I can exec commands without spaces:
$ kubectl exec mycontainer 'ls'
lib
start.sh
But not with spaces:
$ kubectl exec mycontainer 'ls .'
rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "exec: \"ls .\": executable file not found in $PATH"
command terminated with exit code 126
What am I doing wrong?
I've tried this both in the MinGW Git shell and in the plain Windows console.
It seems it might be related to this GitHub issue.
One of the workarounds might be to use winpty as specified here.
winpty kubectl.exe exec -it pod-name -- sh
You can also try /bin/sh instead of /bin/bash; it worked for me, but I do not have a Windows machine to check it in the same environment as you.
The command below worked for me to launch the Windows command prompt:
kubectl exec -it mycontainer -- cmd.exe
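The D:/Applications/Git/usr/bin/bash in the original error suggests the MinGW/MSYS layer is rewriting the /bin/bash argument into a Windows path before kubectl sees it. Two workarounds sometimes used in Git Bash (both are assumptions here, not something kubectl itself provides) are disabling that path conversion for the command or doubling the leading slash so it is not treated as a POSIX path:
MSYS_NO_PATHCONV=1 kubectl exec -it mycontainer -- /bin/bash
kubectl exec -it mycontainer -- //bin/bash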
