Bash Script fails with error: OCI runtime exec failed - bash

I am running the script below and getting an error.
#!/bin/bash
webproxy=$(sudo docker ps -a --format "{{.Names}}"|grep webproxy)
webproxycheck="curl -k -s https://localhost:\${nginx_https_port}/HealthCheckService"
if [ -n "$webproxy" ] ; then
    sudo docker exec $webproxy sh -c "$webproxycheck"
fi
Here is my docker ps -a output:
$ sudo docker ps -a --format "{{.Names}}" | grep webproxy
webproxy-dev-01
webproxy-dev2-01
When I run the command individually it works. For example:
$ sudo docker exec webproxy-dev-01 sh -c 'curl -k -s https://localhost:${nginx_https_port}/HealthCheckService'
HEALTHCHECK_OK
$ sudo docker exec webproxy-dev2-01 sh -c 'curl -k -s https://localhost:${nginx_https_port}/HealthCheckService'
HEALTHCHECK_OK
Here is the error I get:
$ sh healthcheck.sh
OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "exec: \"webproxy-dev-01\": executable file not found in $PATH": unknown
Could someone please help me with this error? Any help will be greatly appreciated.

Because the variable contains two tokens (on two separate lines), that's what the variable expands to. You are effectively running
sudo docker exec webproxy-dev-01 webproxy-dev2-01 ...
which of course is an error.
It's not clear what you actually expect to happen, but if you want to loop over those values, that's
for host in $webproxy; do
    sudo docker exec "$host" sh -c "$webproxycheck"
done
which will conveniently loop zero times if the variable is empty.
If you just want one value, maybe add head -n 1 to the pipe, or pass a more specific regular expression to grep so it only matches one container. (If you have control over these containers, probably run them with --name so you can unambiguously identify them.)
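For example, a minimal sketch using docker ps's own name filter instead of grep (the container name is taken from the question; anchoring the pattern avoids matching both containers):
webproxy=$(sudo docker ps -a --format '{{.Names}}' --filter 'name=^webproxy-dev-01$')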

Based on your given script, you are trying to "exec" the following
sudo docker exec webproxy-dev2-01
webproxy-dev-01 sh -c "curl -k -s https://localhost:${nginx_https_port}/HealthCheckService"
As you see, here is your error.
sudo docker exec webproxy-dev2-01
webproxy-dev-01 [...]
The problem is this line:
webproxy=$(sudo docker ps -a --format "{{.Names}}"|grep webproxy)
which results in the following (you also posted this):
webproxy-dev2-01
webproxy-dev-01
Now, the issue is that your docker exec command takes both container names (coming from the variable assignment $webproxy), interpreting the second entry (which is webproxy-dev-01, separated by \n) as the command to execute. That command is not valid and cannot be found: that's what the error tells you.
A workaround would be the following:
webproxy=$(sudo docker ps -a --format "{{.Names}}" | grep webproxy | head -n 1)
It only grabs the first entry of your output. You can of course adapt this to run in a loop.
A small snippet:
#!/bin/bash
webproxy=$(sudo docker ps -a --format "{{.Names}}" | grep webproxy)
echo "${webproxy}"
webproxycheck="curl -k -s https://localhost:\${nginx_https_port}/HealthCheckService"
while IFS= read -r line; do
    if [ -n "$line" ] ; then
        echo "sudo docker exec ${line} sh -c \"${webproxycheck}\""
    fi
done <<< "$webproxy"
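To actually run the health check rather than just printing the command, the echo inside the loop can be swapped for the exec itself (same assumptions as the snippet above):
while IFS= read -r line; do
    if [ -n "$line" ] ; then
        sudo docker exec "$line" sh -c "$webproxycheck"
    fi
done <<< "$webproxy"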

Related

Run shell script inside ssh session inside Jenkinsfile

I'm trying to run a complete script while the ssh session is live instead of single commands.
Here is my current code:
sh "ssh -tt -o StrictHostKeyChecking=no ubuntu#IPV4_DNS uptime"
sh "ssh -v ubuntu#IPV4_DNS docker pull X:${BUILD_NUMBER}"
sh "ssh -v ubuntu#IPV4_DNS docker rm -f test"
sh "ssh -v ubuntu#IPV4_DNS docker run --name=test -d -p 3000:3000X:${BUILD_NUMBER}"
The desired code is something like this, but the following doesn't work:
sh "ssh -tt -o StrictHostKeyChecking=no ubuntu#IPV4_DNS uptime"
sh ''' ssh -v ubuntu#IPV4_DNS docker pull X:${BUILD_NUMBER}
&& docker rm -f test && docker run --name=test -d -p 3000:3000X:${BUILD_NUMBER}
'''
ssh something here && something else && another one
runs something here in the ssh session, and something else and another one locally. You want to add quotes to pass the entire command line to ssh.
sh "ssh -tt -o StrictHostKeyChecking=no ubuntu#IPV4_DNS uptime"
sh """ssh -v ubuntu#IPV4_DNS 'docker pull X:${BUILD_NUMBER} &&
docker rm -f test &&
docker run --name=test -d -p "3000:3000X:${BUILD_NUMBER}"'
"""
I switched to triple double quotes instead of triple single quotes, assuming you want Jenkins to expand ${BUILD_NUMBER} for you.
The original question asked about Bash, but for the record, you are running sh here, not Bash. If you wanted to use Bash features in a Jenkinsfile, you can add a shebang #!/usr/bin/env bash or similar as the very first line of the command. But that's not necessary here; all these commands are simple and completely POSIX. (Maybe see also Difference between sh and bash)
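For instance, a minimal sketch of forcing Bash inside a Jenkinsfile sh step (the echo is only a placeholder; the shebang must be the very first line of the command, with no leading whitespace):
sh '''#!/usr/bin/env bash
echo "running under bash ${BASH_VERSION}"
'''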

Direct group of commands into `docker exec`

I have the following command that works fine and prints foo before returning:
docker exec -i <id> /bin/sh < echo "echo 'foo'"
I want to direct multiple commands into the container with one pipe, for example echo 'foo' and ls /. I have tried the following:
This fails because it runs the commands on the host and pipes the output into the container:
{
echo "foo"
ls /
} | docker exec -i <id> /bin/sh
This one fails, but I would like to not use an array of strings anyway:
for COMMAND in 'echo "foo"' 'ls /'
do
docker exec -i <id> /bin/sh < echo $COMMAND
done
I've also tried several other methods like piping commands into tee or echo but haven't had any luck. If you would like to know why I want to do this seemingly ridiculous thing, it's because:
This is a short script that I would like to keep all in one place
I would like to use syntax highlighting, so I don't want to store it all in a list of strings
The container has the programs the script should run and the host does not
This is an automatic process that I would like to trigger with crontab on the host
You can run a group of commands in the following fashion:
docker exec -i <id> /bin/sh -c 'echo "foo"; ls -l'
OR
docker exec -i 996eee5d121d /bin/sh -c 'echo 'foo'; ls -l'
OR
docker exec -i 996eee5d121d /bin/sh -c 'echo foo; ls -l'
If you want to run more than two commands, just append ; after each command:
docker exec -i 996eee5d121d /bin/sh -c 'echo "foo"; ls -l; ls -a'
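If the sequence should stop at the first failing command instead of always running all of them, && can be used in place of ; (a small sketch with the same container id):
docker exec -i 996eee5d121d /bin/sh -c 'echo "foo" && ls -l && ls -a'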
Use a here document.
docker run -i --rm alpine /bin/sh <<EOF
echo abc
ls /
EOF
Note the difference between quoted and unquoted here document delimiter.
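To make that concrete, here is a small sketch (hostname is used purely for illustration): with an unquoted delimiter the host shell expands substitutions before docker ever sees the text, while a quoted delimiter passes the text through literally, so the expansion happens inside the container.
docker run -i --rm alpine /bin/sh <<EOF
echo $(hostname)
EOF
docker run -i --rm alpine /bin/sh <<'EOF'
echo $(hostname)
EOF
The first prints the host's hostname, the second the container's.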
docker exec -i <id> /bin/sh < echo "echo 'foo'"
I think you meant to do:
docker exec -i <id> /bin/sh < <(echo "echo 'foo'")
which is just the same as:
docker exec -i <id> /bin/sh <<<"echo 'foo'"
Edit: there is a cool little trick. The idea is to pipe the script itself, except the first lines, to another subprocess; it's sometimes used by installer scripts:
#!/bin/sh
# output this script except first 4 lines to docker
tail -n+5 "$0" | docker run -i --rm alpine /bin/sh -x
exit # we exit original script
#!/bin/sh
# inside docker now
echo abc
ls /
Execution:
$ bash -x ./script.sh
+ tail -n+5 ./script.sh
+ docker run -i --rm alpine /bin/sh -x
+ echo abc
+ ls /
abc
bin
...
var
+ exit
In a similar fashion you could use sed or another parsing tool to extract only the relevant part between some markers, for example.
I found a gist that explained how to pipe commands into docker exec:
echo "echo foo" | docker exec -i <id> /bin/sh -
Now we need a way to pipe multiple commands. Command groups won't work because they run on the host, and semicolon-separated commands can get messy. I thought of writing a function and getting just its body; it turns out you can do that with a simple declare and sed call.
You can combine all these pieces to pipe a command into the container:
function func {
    echo "foo"
    ls /
}
declare -f func | sed '1,2d;$d' | docker exec -i <id> /bin/bash -
Syntax highlighting still works in the function and it is easy to read.
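For reference, declare -f prints the function in a canonical form, which is why sed '1,2d;$d' strips exactly the name line, the opening brace, and the closing brace (the exact output can vary slightly between bash versions):
$ declare -f func
func ()
{
    echo "foo";
    ls /
}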
If you want to use environment variables from the host in the container, you have to pass them manually to docker exec like so:
... | docker exec -i -e VAR="$VAR" <id> /bin/bash -
Edit: I'm leaving this here as a possible solution, but the accepted answer is the proper solution I am using.

Unable to run queries from a file using psql command line with docker exec

I have a bash script that should bring the postgres docker container online and then run a .sql file to create the databases, but it's throwing this error:
psql: error: provision-db.sql: No such file or directory
I have checked the path and the file exists at the same level as this bash script. Following is the content of my bash script.
#!/usr/bin/env bash
docker-compose up -d db

# Ensure the Postgres server is online and usable
until docker exec -i boohoo.postgres pg_isready --host="${POSTGRES_HOST}" --username="${POSTGRES_USER}"
do
    echo "."
    sleep 1
done

docker exec -i boohoo.postgres psql -h "${POSTGRES_HOST}" -U "${POSTGRES_USER}" -a -q -f provision-db.sql
And this is the provision-db.sql file.
DROP DATABASE "boo-hoo";
CREATE DATABASE "boo-hoo";
GRANT ALL PRIVILEGES ON DATABASE "boo-hoo" TO postgres;
This is the relevant part of docker-compose.yml:
version: '3.3'
services:
  db:
    container_name: boohoo.postgres
    hostname: postgres.boohoo
    image: postgres
    ports:
      - "15432:5432"
    environment:
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "postgres"
The short version
This works
cat provision-db.sql | docker exec -i boohoo.postgres bash -c 'psql -U ${POSTGRES_USER} -w -a -q -f -'
The long version
There are multiple things going on here.
1) Why does the following command not find provision-db.sql?
docker exec -i boohoo.postgres psql -h "${POSTGRES_HOST}" -U "${POSTGRES_USER}" -a -q -f provision-db.sql
Because provision-db.sql is on your host and not in your container. When you execute the psql command inside the container, it cannot find the file.
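An alternative to piping the file is to copy it into the container first and point psql at that path (a sketch; /tmp is an arbitrary location):
docker cp provision-db.sql boohoo.postgres:/tmp/provision-db.sql
docker exec -i boohoo.postgres bash -c 'psql -U ${POSTGRES_USER} -w -a -q -f /tmp/provision-db.sql'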
2) Why didn't my first solution work?
cat provision-db.sql | docker exec -i boohoo.postgres psql -h "${POSTGRES_HOST}" -U "${POSTGRES_USER}" -a -q -f -
This should do the trick, assuming provision-db.sql is on the host. The problem is that the variables ${POSTGRES_HOST} and ${POSTGRES_USER} get evaluated on your host machine, and I guess they are not set there. In addition, I forgot to specify the -w flag to avoid the password prompt.
3) Why does that work?
cat provision-db.sql | docker exec -i boohoo.postgres bash -c 'psql -U ${POSTGRES_USER} -w -a -q -f -'
Well, let's go through it step by step.
First, we print the content of provision-db.sql, which resides on the host machine to stdout and pipe it to the next command via |.
docker exec executes a command in the specified container (boohoo.postgres). By specifying the -i flag we allow stdin from your host to reach stdin in the container (that's important).
In the container, we execute bash -c, which is just a wrapper to avoid evaluating the bash variables on the host. We want the variables from the container, and by putting the command into single quotes we get that.
docker exec boohoo.postgres bash -c "echo $POSTGRES_USER"
evaluates the host env variable named POSTGRES_USER.
docker exec boohoo.postgres bash -c 'echo $POSTGRES_USER'
evaluates the container env variable named POSTGRES_USER.
Next we just have to get our postgres command in order.
psql -U ${POSTGRES_USER} -w -a -q -f -
-U specifies the user
-w never prompts for a password
-q runs quietly
-f - processes whatever comes from stdin
-f is an option for psql and not for docker exec, and psql is running inside the container, so it can only access the file if it is inside the container as well.
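Since psql reads from stdin by default, a here document works as well (a sketch reusing the SQL from the question; the quoted delimiter keeps the host shell from expanding anything):
docker exec -i boohoo.postgres bash -c 'psql -U ${POSTGRES_USER} -w -a -q' <<'SQL'
DROP DATABASE "boo-hoo";
CREATE DATABASE "boo-hoo";
GRANT ALL PRIVILEGES ON DATABASE "boo-hoo" TO postgres;
SQL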

Bash script to get into a running container and then run another bash script from that container

I have a shell script which runs as follows :
image_id=$(docker ps -a | grep postgres | awk -F' ' '{print $1}')
full_id=$(docker ps -a --no-trunc -q | grep $image_id)
docker exec -i -t $full_id bash
When I run this from the base Linux OS, I expect to actually enter the postgres container, which is a running container. But the shell script hangs on the 3rd line, during the docker exec step.
My end goal is to use the bash script to enter a running postgres container and run another bash script inside that container.
However, when I run the same command from the command line, it works fine and gets me into the postgres container.
Please help, I have spent hours trying to solve this with no progress.
Thanks again
Your setup is a bit more complex than it needs to be.
docker ps can filter containers directly with the --filter option:
docker ps --no-trunc --quiet --filter="ancestor=postgres"
You can also --name containers when you run them, which is less fraught with danger than the script you are attempting:
docker run --detach --name postgres_whatever postgres
docker exec -ti postgres_whatever bash
I'm not sure that your script is hanging as opposed to sitting there waiting for input: docker exec -i -t starts an interactive shell, and in a script there is nothing to type into it. Try running a command directly.
Using naming
exec_test.sh
#!/usr/bin/env bash
docker exec postgres_whatever echo "I have run the test"
When run
$ ./exec_test.sh
I have run the test
Without naming
exec_filter_test.sh
#!/usr/bin/env bash
id=$(docker ps --no-trunc --quiet --filter="ancestor=postgres")
[ -z "$id" ] && echo "no id" && exit 1
docker exec "${id}" echo "I have run the test"
When run
$ ./exec_filter_test.sh
I have run the test
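And for the stated end goal of running another bash script inside the running container, stdin can carry the script (a sketch; inner_script.sh is a hypothetical file on the host):
#!/usr/bin/env bash
id=$(docker ps --no-trunc --quiet --filter="ancestor=postgres")
[ -z "$id" ] && echo "no id" && exit 1
docker exec -i "${id}" bash < inner_script.sh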

docker run -i -t image /bin/bash - source files first

This works:
# echo 1 and exit:
$ docker run -i -t image /bin/bash -c "echo 1"
1
# exit
# echo 1 and return shell in docker container:
$ docker run -i -t image /bin/bash -c "echo 1; /bin/bash"
1
root@4c064f2554de:/#
Question: How could I source a file into the shell? (this does not work)
$ docker run -i -t image /bin/bash -c "source <(curl -Ls git.io/apeepg) && /bin/bash"
# content from http://git.io/apeepg is sourced and shell is returned
root@4c064f2554de:/#
In my case, I use a RUN source command in a Dockerfile (which will run using /bin/bash, thanks to the symlink below) to install nvm for node.js.
Here is an example.
FROM ubuntu:14.04
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
...
...
RUN source ~/.nvm/nvm.sh && nvm install 0.11.14
I wanted something similar, and expanding a bit on your idea, came up with the following:
docker run -ti --rm ubuntu \
bash -c 'exec /bin/bash --rcfile /dev/fd/1001 \
1002<&0 \
<<<$(echo PS1=it_worked: ) \
1001<&0 \
0<&1002'
--rcfile /dev/fd/1001 will use that file descriptor's contents instead of .bashrc
1002<&0 saves stdin
<<<$(echo PS1=it_worked: ) puts PS1=it_worked: on stdin
1001<&0 moves this stdin to fd 1001, which we use as rcfile
0<&1002 restores the stdin that we saved initially
You can use .bashrc in interactive containers:
RUN curl -O git.io/apeepg.sh && \
echo 'source apeepg.sh' >> ~/.bashrc
Then just run as usual with docker run -it --rm some/image bash.
Note that this will only work with interactive containers.
I don't think you can do this, at least not right now. What you could do is modify your image, and add the file you want to source, like so:
FROM image
ADD my-file /my-file
RUN ["source", "/my-file", "&&", "/bin/bash"]
