Injecting a script to Pod with docker returns EOF - bash

My goal is to execute a script once on a permanently running pod in Kubernetes. The pod is called busybox-<SOME_ID> and lives in the default namespace. Therefore, I wrote this script, called scan-one-pod.sh:
#!/bin/bash
export MASTER_IP=192.168.56.102
export SCRIPT_NAME=script.sh
export POD_NAMESPACE=default
export POD_NAME=busybox
echo "echo HALLO" | ssh ubuntu#$MASTER_IP
export POD_ID=$(kubectl get po | grep busybox | sed -n '1p'|awk '{print $1}')
kubectl cp $SCRIPT_NAME $POD_NAMESPACE/$POD_ID:.
kubectl exec $POD_ID -- chmod +x $SCRIPT_NAME
export CONTAINER_ID=$(kubectl describe pod busybox | grep 'Container ID' | sed -n '1p'|awk '{print $3}')
ssh -t ubuntu@$MASTER_IP "sudo docker exec -u root $CONTAINER_ID -- ./script.sh"
The referenced script, script.sh, has the following content:
$ kubectl exec $POD_ID -- cat script.sh
#!/bin/bash
echo "test" >> test
cp test test-is-working
However, it is not possible to run the script on the pod:
the files test and test-is-working are not created
the script scan-one-pod.sh returns just EOF:
$ ./scan-one-pod.sh
Pseudo-terminal will not be allocated because stdin is not a terminal.
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-87-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
155 packages can be updated.
72 updates are security updates.
HALLO
[sudo] password for ubuntu:
EOF
Connection to 192.168.56.102 closed.
If I execute the docker command directly on my Kubernetes controller via SSH, I get the same EOF message:
ubuntu@controller:~$ export CONTAINER_ID=$(kubectl describe pod busybox | grep 'Container ID' | sed -n '1p'|awk '{print $3}')
ubuntu@controller:~$ sudo docker exec -u root $CONTAINER_ID ./script.sh
EOF
If I execute it from my local workstation via kubectl exec I get this error:
$ kubectl exec $POD_ID ./script.sh
rpc error: code = 13 desc = invalid header field value "oci runtime error: exec failed: container_linux.go:247: starting container process caused \"no such file or directory\"\n"
I don't know which missing file it is referring to, but the script.sh file is present and the busybox pod seems to be running:
$ kubectl exec $POD_ID ls script.sh
script.sh
$ kubectl get po busybox-6bdf9b5bbc-4skds
NAME READY STATUS RESTARTS AGE
busybox-6bdf9b5bbc-4skds 1/1 Running 10 12d
Question: As far as I know, EOF means End-Of-File. Which file's end is meant here, and why is that a problem?
Thanks in advance, any help is appreciated :)

Related

How to let Kubernetes pod run a local script

I want to run a local script inside a Kubernetes pod and then assign the output to a Linux variable.
Here is what I tried:
# if I directly run -c "netstat -pnt |grep ssh", I get output assigned to $result:
cat check_tcp_conn.sh
#!/bin/bash
result=$(kubectl exec -ti <pod_name> -- /bin/bash -c "netstat -pnt |grep ssh")
echo "result is $result"
What I want is something like this:
#script to be called:
cat netstat_tcp_conn.sh
#!/bin/bash
netstat -pnt |grep ssh
#script to call netstat_tcp_conn.sh:
cat check_tcp_conn.sh
#!/bin/bash
result=$(kubectl exec -ti <pod_name> -- /bin/bash -c "./netstat_tcp_conn.sh")
echo "result is $result"
The result showed: result is /bin/bash: ./netstat_tcp_conn.sh: No such file or directory.
How can I let Kubernetes pod execute netstat_tcp_conn.sh which is at my local machine?
You can use the following command to execute your script in your pod:
kubectl exec POD -- /bin/sh -c "`cat netstat_tcp_conn.sh`"
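Applied to the question's goal of capturing the output in a variable, that would look roughly like this (a sketch; <pod_name> is a placeholder as in the question):
# Stream the local script's content into a shell inside the pod and capture the output.
result=$(kubectl exec <pod_name> -- /bin/sh -c "$(cat netstat_tcp_conn.sh)")
echo "result is $result"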
You can copy local files into the pod with kubectl, e.g. kubectl cp /tmp/foo <pod>:/tmp/
Then you can change its permissions to make it executable and run it using kubectl exec.
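A minimal sketch of that copy-then-execute workflow (pod name and target path are placeholders, namespace default assumed):
#!/bin/bash
# Placeholder pod name; replace with your actual pod.
POD_NAME=<pod_name>
# Copy the local script into the pod.
kubectl cp netstat_tcp_conn.sh default/$POD_NAME:/tmp/netstat_tcp_conn.sh
# Make it executable inside the container, then run it and capture the output locally.
kubectl exec $POD_NAME -- chmod +x /tmp/netstat_tcp_conn.sh
result=$(kubectl exec $POD_NAME -- /bin/sh -c "/tmp/netstat_tcp_conn.sh")
echo "result is $result"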

Bash Script fails with error: OCI runtime exec failed

I am running the below script and getting an error.
#!/bin/bash
webproxy=$(sudo docker ps -a --format "{{.Names}}"|grep webproxy)
webproxycheck="curl -k -s https://localhost:\${nginx_https_port}/HealthCheckService"
if [ -n "$webproxy" ] ; then
sudo docker exec $webproxy sh -c "$webproxycheck"
fi
Here is my docker ps -a output
$sudo docker ps -a --format "{{.Names}}"|grep webproxy
webproxy-dev-01
webproxy-dev2-01
When I run the command individually, it works. For example:
$sudo docker exec webproxy-dev-01 sh -c 'curl -k -s https://localhost:${nginx_https_port}/HealthCheckService'
HEALTHCHECK_OK
$sudo docker exec webproxy-dev2-01 sh -c 'curl -k -s https://localhost:${nginx_https_port}/HealthCheckService'
HEALTHCHECK_OK
Here is the error I get.
$ sh healthcheck.sh
OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "exec: \"webproxy-dev-01\": executable file not found in $PATH": unknown
Could someone please help me with the error. Any help will be greatly appreciated.
Because the variable contains two tokens (on two separate lines), that's what it expands to. You are running
sudo docker exec webproxy-dev-01 webproxy-dev2-01 ...
which of course is an error.
It's not clear what you actually expect to happen, but if you want to loop over those values, that's
for host in $webproxy; do
sudo docker exec "$host" sh -c "$webproxycheck"
done
which will conveniently loop zero times if the variable is empty.
If you just want one value, maybe add head -n 1 to the pipe, or pass a more specific regular expression to grep so it only matches one container. (If you have control over these containers, probably run them with --name so you can unambiguously identify them.)
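For example, either of these should leave exactly one name in the variable (a sketch using the container names from the question):
# Option 1: take only the first matching name.
webproxy=$(sudo docker ps -a --format "{{.Names}}" | grep webproxy | head -n 1)
# Option 2: match one container name exactly.
webproxy=$(sudo docker ps -a --format "{{.Names}}" | grep -x 'webproxy-dev-01')
sudo docker exec "$webproxy" sh -c "$webproxycheck"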
Based on your given script, you are trying to "exec" the following
sudo docker exec webproxy-dev2-01
webproxy-dev-01 sh -c "curl -k -s https://localhost:${nginx_https_port}/HealthCheckService"
As you see, here is your error.
sudo docker exec webproxy-dev2-01
webproxy-dev-01 [...]
The problem is this line:
webproxy=$(sudo docker ps -a --format "{{.Names}}"|grep webproxy)
which results in the following (you also posted this):
webproxy-dev2-01
webproxy-dev-01
Now, the issue is that your docker exec command takes both container names (coming from the variable assignment $webproxy) and interprets the second entry (webproxy-dev-01, separated by \n) as the command to exec. That command is not valid and cannot be found: that's what the error tells you.
A workaround would be the following:
webproxy=$(sudo docker ps -a --format "{{.Names}}"| grep webproxy | head -n 1)
It only grabs the first entry of your output. You can of course adapt this to loop over all of them.
A small snippet:
#!/bin/bash
webproxy=$(sudo docker ps -a --format "{{.Names}}"| grep webproxy )
echo ${webproxy}
webproxycheck="curl -k -s https://localhost:\${nginx_https_port}/HealthCheckService"
while IFS= read -r line; do
if [ -n "$line" ] ; then
echo "sudo docker exec ${line} sh -c \"${webproxycheck}\""
fi
done <<< "$webproxy"

File not found in Docker Container using GitLab-CI

Using GitLab-CI, I am attempting to echo a secret variable into a file inside a Docker container. The file exists and the user has permission to write to it, yet I get a No such file or directory error.
$ /usr/bin/docker exec -t $CI_PROJECT_NAME ls -la /opt/application/conf/kubeadminaccount.yml
-rw-rw-r-- 1 nodeuser nodeuser 420 Aug 18 07:19 /opt/application/conf/kubeadminaccount.yml
$ /usr/bin/docker exec -t $CI_PROJECT_NAME whoami
nodeuser
$ /usr/bin/docker exec -t $CI_PROJECT_NAME echo $KUBE_ADMIN_ACCOUNT > /opt/application/conf/kubeadminaccount.yml
bash: line 69: /opt/application/conf/kubeadminaccount.yml: No such file or directory
Your redirection operator is evaluated on the host, not inside your container. Change the below
$ /usr/bin/docker exec -t $CI_PROJECT_NAME echo $KUBE_ADMIN_ACCOUNT > /opt/application/conf/kubeadminaccount.yml
to
$ /usr/bin/docker exec -t $CI_PROJECT_NAME bash -c "echo $KUBE_ADMIN_ACCOUNT > /opt/application/conf/kubeadminaccount.yml"
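To see the difference in isolation (container name and paths here are only illustrative):
# Redirection evaluated by the host shell: the file is created on the host.
docker exec mycontainer echo hello > /tmp/host-file
# Redirection evaluated inside the container: the file is created in the container.
docker exec mycontainer bash -c "echo hello > /tmp/container-file"
Note that $KUBE_ADMIN_ACCOUNT inside the double quotes is still expanded by the runner's shell before docker exec runs, which is what you want here, since the variable only exists on the host.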

Command line shortcut to connect to a docker container

Is there any shortcut command to connect to a docker container without running docker exec -it 'container_id' bash every time?
Here is a short command-line shortcut that will:
Check if a container is running
If it is running, connect to it using the docker exec -it <container> bash command:
Script docker-enter:
#!/bin/bash
name="${1?needs one argument}"
containerId=$(docker ps | awk -v app="$name:" '$2 ~ app{print $1}')
if [[ -n "$containerId" ]]; then
docker exec -it $containerId bash
else
echo "No docker container with name: $name is running"
fi
Then run it as:
docker-enter webapp
I'm using the following alias on OS X:
alias dex='function _dex(){ docker exec -i -t "$(basename $(pwd) | tr -d "[\-_]")_$1_1" /bin/bash -c "export TERM=xterm; exec bash" };_dex'
In the same directory as my docker-files, I run "dex php" to enter the PHP container.
If the random ID is inconvenient, start the container with a name, docker run --name test image, and connect using that name: docker exec -it test bash.
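For example (the image is arbitrary here):
# Start the container with a fixed name instead of relying on the random ID.
docker run -d --name test nginx
# Attach a shell using that name.
docker exec -it test bash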

Jenkins - Publish Over SSH - EXEC: STDOUT/STDERR - bash: service: command not found

I have an issue with the Publish Over SSH plugin and have already tried many possible solutions, but it still doesn't work:
using exec in pty
using bash --login
using shebang (#!/usr/bin/env bash)
Exec Script
service monitoring-daemon stop
cd /home/push/monitoring/target
rm -rf Monitoring.jar
ls -la | grep Monitoring | grep -v grep | awk '{print $9}' | xargs -I file mv file Monitoring.jar
service monitoring-daemon start
Console Output
10:26:25 SSH: EXEC: STDOUT/STDERR from command [service monitoring-daemon stop
10:26:25 cd /home/push/monitoring/target
10:26:25 rm -rf Monitoring.jar
10:26:25 ls -la | grep Monitoring | grep -v grep | awk '{print $9}' | xargs -I file mv file Monitoring.jar
10:26:25 service monitoring-daemon start] ...
10:26:25 SSH: EXEC: connected
10:26:25 bash: service: command not found
10:26:25 bash: line 4: service: command not found
10:26:25 SSH: EXEC: completed after 201 ms
10:26:25 SSH: Disconnecting configuration [*********] ...
10:26:25 ERROR: Exception when publishing, exception message [Exec exit status not zero. Status [127]]
10:26:25 Build step 'Send files or execute commands over SSH' changed build result to UNSTABLE
10:26:25 [ANALYSIS-COLLECTOR] Computing warning deltas based on reference build #48
My Jenkins uses bash as the default shell, and the remote server also uses bash.
Remote Server .bashrc
# .bashrc
alias service='/sbin/service'
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
# User specific aliases and functions
Remote Server .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
Is there any piece of configuration or step that I'm missing?
Try putting the full path for "service", i.e. replace "service" with "/usr/sbin/service".
Adding a first line like the one below might also help set your PATH:
. /etc/profile
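Putting both suggestions together, the Exec Script might look like the sketch below (the path to service is an assumption; check whether it is /sbin/service, as in the remote .bashrc, or /usr/sbin/service on your system):
. /etc/profile
# Non-interactive SSH sessions don't read .bashrc, so the alias is not available; use the full path.
/sbin/service monitoring-daemon stop
cd /home/push/monitoring/target
rm -rf Monitoring.jar
ls -la | grep Monitoring | grep -v grep | awk '{print $9}' | xargs -I file mv file Monitoring.jar
/sbin/service monitoring-daemon start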
