SSH and Docker Exec together are not accepting multiple commands - bash

I need to create a shell script that connects to a remote machine using SSH and then runs some commands inside a docker container that is running on that machine.
I want the command below to work, but it only executes the first command in the container:
ssh -i key user@1.1.1.1 docker exec my-container bash -c command1 && command2 && command3
So far the best solution I could come up with is this:
ssh -i key user#1.1.1.1 "docker exec my-container bash -c 'command1 && command2 && command3'"
But it only works with some commands. I can run commands like mkdir echo but I couldn't use curl with it.
ssh -i key user#1.1.1.1 "docker exec my-container bash -c 'curl --verbose --stderr stderr -X GET "http://2.2.2.2:5000/file/download" -H "Authorization: Bearer $1" > curl_out
I somehow need to make the curl command work. It succesfully expands $1 as the authorization token but curl command does not see use the headers. I couldn't get it to work.
Is there a better way of constructing this kind of nested command pipe. I have tried like 50 different combinations of quotes, different variables, trying to write the echo inside a shell script inside the container and then running it. Each solution fails upon trying to use complex commands with multiple options / arguments.

When you write this:
ssh -i key user@1.1.1.1 docker exec my-container \
bash -c command1 && command2 && command3
You are just creating a local shell command list; && joins the commands locally. That's the same thing as if you were to run, say:
date && command2 && command3
Your shell doesn't magically know that you intended to run the second two commands on the remote host. If you want to pass the entire command list to the remote host, you need to quote it.
You might be tempted to do something like this:
ssh -i key user@1.1.1.1 docker exec my-container \
bash -c "command1 && command2 && command3"
But that still won't work as intended: in this case, you are running the command docker exec my-container bash -c command1 && command2 && command3 on the remote host. That is, only command1 is being run inside the container. You need another level of quoting:
ssh -i key user@1.1.1.1 docker exec my-container \
'bash -c "command1 && command2 && command3"'


Run shell script inside ssh session inside Jenkinsfile

I'm trying to run a complete script in one live ssh session, instead of issuing single commands.
Here is my current code:
sh "ssh -tt -o StrictHostKeyChecking=no ubuntu#IPV4_DNS uptime"
sh "ssh -v ubuntu#IPV4_DNS docker pull X:${BUILD_NUMBER}"
sh "ssh -v ubuntu#IPV4_DNS docker rm -f test"
sh "ssh -v ubuntu#IPV4_DNS docker run --name=test -d -p 3000:3000X:${BUILD_NUMBER}"
The desired code is something like this, but the following doesn't work:*
sh "ssh -tt -o StrictHostKeyChecking=no ubuntu#IPV4_DNS uptime"
sh ''' ssh -v ubuntu#IPV4_DNS docker pull X:${BUILD_NUMBER}
&& docker rm -f test && docker run --name=test -d -p 3000:3000X:${BUILD_NUMBER}
'''
ssh something here && something else && another one
runs something here in the ssh session, and something else and another one locally. You want to add quotes to pass the entire command line to ssh.
sh "ssh -tt -o StrictHostKeyChecking=no ubuntu#IPV4_DNS uptime"
sh """ssh -v ubuntu#IPV4_DNS 'docker pull X:${BUILD_NUMBER} &&
docker rm -f test &&
docker run --name=test -d -p "3000:3000X:${BUILD_NUMBER}"'
"""
I switched to triple double quotes instead of triple single quotes, assuming you want Jenkins to expand ${BUILD_NUMBER} for you.
The original question asked about Bash, but for the record, you are running sh here, not Bash. If you want to use Bash features in a Jenkinsfile, you can add a shebang like #!/usr/bin/env bash as the very first line of the command. But that's not necessary here; all these commands are simple and completely POSIX. (See also: Difference between sh and bash.)
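For illustration, a minimal sketch of the shebang trick in a Jenkinsfile step (the body is only an example):
sh '''#!/usr/bin/env bash
set -euo pipefail
echo "running under bash $BASH_VERSION"
'''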

Direct group of commands into `docker exec`

I have the following command that works fine and prints foo before returning:
docker exec -i <id> /bin/sh < echo "echo 'foo'"
I want to direct multiple commands into the container with one pipe, for example echo 'foo' and ls /. I have tried the following:
This fails because it runs the commands on the host and pipes the output into the container:
{
    echo "foo"
    ls /
} | docker exec -i <id> /bin/sh
This fails because it has bad syntax. It also runs on the host:
docker exec -i <id> /bin/sh < {
    echo "foo"
    ls /
}
This one fails too, and I would prefer not to use an array of strings anyway:
for COMMAND in 'echo "foo"' 'ls /'
do
docker exec -i <id> /bin/sh < echo $COMMAND
done
I've also tried several other methods like piping commands into tee or echo but haven't had any luck. If you would like to know why I want to do this seemingly ridiculous thing, it's because:
This is a short script that I would like to keep all in one place
I would like to use syntax highlighting, so I don't want to store it all in a list of strings
The container has the programs the script should run and the host does not
This is an automatic process that I would like to trigger with crontab on the host
You can run a group of commands in the following fashion:
docker exec -i <id> /bin/sh -c 'echo "foo"; ls -l'
OR
docker exec -i 996eee5d121d /bin/sh -c 'echo 'foo'; ls -l'
OR
docker exec -i 996eee5d121d /bin/sh -c 'echo foo; ls -l'
If you want to run more than two commands, just append a ; after each command, like:
docker exec -i 996eee5d121d /bin/sh -c 'echo "foo"; ls -l; ls -a'
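Note that with ; the sequence keeps going even when an earlier command fails. If you want it to stop at the first failure, the same pattern works with && between the commands, or with a set -e at the front:
docker exec -i 996eee5d121d /bin/sh -c 'set -e; echo "foo"; ls -l; ls -a'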
Use a here document.
docker run -i --rm alpine /bin/sh <<EOF
echo abc
ls /
EOF
Note the difference between a quoted and an unquoted here-document delimiter.
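For example, with an unquoted delimiter the host shell expands variables before docker ever sees them; with a quoted delimiter the text is passed through verbatim and expansion happens inside the container:
docker run -i --rm alpine /bin/sh <<EOF
echo "$HOME"
EOF
docker run -i --rm alpine /bin/sh <<'EOF'
echo "$HOME"
EOF
The first prints the host's home directory; the second prints the container's (/root when running as root in alpine).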
docker exec -i <id> /bin/sh < echo "echo 'foo'"
I think you meant to do:
docker exec -i <id> /bin/sh < <(echo "echo 'foo'")
which is just the same as:
docker exec -i <id> /bin/sh <<<"echo 'foo'"
Edit: there is a cool little trick. The idea is to pipe the script itself, except its first lines, to another subprocess; it's sometimes used by installer scripts:
#!/bin/sh
# output this script except first 4 lines to docker
tail -n+5 "$0" | docker run -i --rm alpine /bin/sh -x
exit # we exit original script
#!/bin/sh
# inside docker now
echo abc
ls /
Execution:
$ bash -x ./script.sh
+ tail -n+5 ./script.sh
+ docker run -i --rm alpine /bin/sh -x
+ echo abc
+ ls /
abc
bin
...
var
+ exit
In a similar fashion you could use sed or another parsing tool to extract only the relevant part between some markers, for example.
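A sketch of that idea, with made-up marker names; sed prints only the lines between the two markers, which are themselves harmless comments to the container's shell:
#!/bin/sh
sed -n '/^#IN_DOCKER_BEGIN$/,/^#IN_DOCKER_END$/p' "$0" | docker run -i --rm alpine /bin/sh
exit # we exit the original script
#IN_DOCKER_BEGIN
echo abc
ls /
#IN_DOCKER_END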
I found a gist that explained how to pipe commands into docker exec:
echo "echo foo" | docker exec -i <id> /bin/sh -
Now we need a way to pipe multiple commands. Command groups won't work because they run on the host, and semicolon-separated commands can get messy. I thought of writing a function and getting just its body; it turns out you can do that with a simple declare and sed call.
You can combine all these pieces to pipe a command into the container:
function func {
    echo "foo"
    ls /
}
declare -f func | sed '1,2d;$d' | docker exec -i <id> /bin/bash -
Syntax highlighting still works in the function and it is easy to read.
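To see what the sed expression trims, this is roughly what declare -f prints (bash reformats the function, so the exact output may vary slightly between versions):
$ declare -f func
func ()
{
    echo "foo";
    ls /
}
The 1,2d deletes the name and opening-brace lines, and $d deletes the closing brace, leaving only the body.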
If you want to use environment variables from the host inside the container, you have to pass them manually in docker exec, like so:
... | docker exec -i -e VAR="$VAR" <id> /bin/bash -
Edit: I'm leaving this here as a possible solution, but the accepted answer is the proper solution I am using.

Docker run bash --init-file

I'm trying to create an alias to help debug my docker containers.
I discovered bash accepts a --init-file option which ought to let us run some commands before passing over to interactive mode.
So I thought I could do
docker-bash() {
    docker run --rm -it "$1" bash --init-file <(echo "ls; pwd")
}
But those commands don't appear to be running:
% docker-bash c7460dfcab50
root@9c6f64a9db8c:/#
Is it an escaping issue or.. what's going on?
bash --init-file <(echo "ls; pwd")
Run alone in a terminal on my host machine, it works as expected (runs the commands, then starts an interactive bash instance).
In points:
The <(...) is a bash extension called process substitution.
From the bash manual: Process substitution is supported on systems that support named pipes (FIFOs) or the /dev/fd method of naming open files.
The process substitution works like this:
bash creates a fifo in /tmp or creates a new file descriptor in /dev/fd.
The filename, either /tmp/.something or /dev/fd/<number>, is substituted for the <(...) when the command is executed.
So for example echo <(echo 1) outputs /dev/fd/63.
Docker works by creating a new environment that is separated from the host. That means that:
Processes inside docker do not inherit file descriptors from the host process:
So /dev/fd/* files are not inherited.
Processes inside docker are accessing isolated filesystem tree.
So processes can't access /tmp/* files from the host.
So, summarizing: docker run -ti --rm alpine cat <(echo 1) will not work, because the filename substituted for <(...) is not available inside the docker environment.
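You can see this directly: the substituted path is readable on the host but not from inside the container (the exact error text depends on the image):
$ cat <(echo 1)
1
$ docker run -ti --rm alpine cat <(echo 1)
cat: can't open '/dev/fd/63': No such file or directory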
An easy workaround would be to just:
docker run -ti --rm alpine sh -c 'ls; pwd; exec sh'
Or use a temporary file:
echo "ls; pwd" > /tmp/tempfile
docker run -v /tmp/tempfile:/tmp/tempfile bash bash --init-file /tmp/tempfile
For my use-case I wanted to set an alias, which won't persist if we re-exec the shell. However, aliases can be written to ~/.bashrc, which is reloaded on the subsequent exec. Ergo,
docker-bash() {
    docker run --rm -it "$1" bash -c $'set -o xtrace; echo "alias ll=\'ls -lAhtrF --color=always\'" >> ~/.bashrc; exec "$0"'
}
works. --rm should clean up any files we create anyway, if I understand correctly how docker works.
Or perhaps this is a nicer way to write it:
docker-bash() {
    read -r -d '' BASHRC << EOM
alias ll='ls -lAhtrF --color=always'
EOM
    docker run --rm -it "$1" bash -c "echo \"$BASHRC\" >> ~/.bashrc; exec \"\$0\""
}

Run inline command with pipe in docker container [duplicate]

I'm trying to run MULTIPLE commands like this.
docker run image cd /path/to/somewhere && python a.py
But this gives me a "No such file or directory" error because it is interpreted as...
"docker run image cd /path/to/somewhere" && "python a.py"
It seems that some ESCAPE characters like "" or () are needed.
So I also tried
docker run image "cd /path/to/somewhere && python a.py"
docker run image (cd /path/to/somewhere && python a.py)
but these didn't work.
I have searched the Docker Run Reference but have not found any hints about ESCAPE characters.
To run multiple commands in docker, use /bin/bash -c and separate the commands with a semicolon ;
docker run image_name /bin/bash -c "cd /path/to/somewhere; python a.py"
If you need command2 (python) to be executed if and only if command1 (cd) returned a zero (no error) exit status, use && instead of ;
docker run image_name /bin/bash -c "cd /path/to/somewhere && python a.py"
You can do this a couple of ways:
Use the -w option to change the working directory:
-w, --workdir="" Working directory inside the container
https://docs.docker.com/engine/reference/commandline/run/#set-working-directory--w
Pass the entire argument to /bin/bash:
docker run image /bin/bash -c "cd /path/to/somewhere; python a.py"
You can also pipe commands inside Docker container, bash -c "<command1> | <command2>" for example:
docker run img /bin/bash -c "ls -1 | wc -l"
Note where the pipe runs: written inside the bash -c string, it executes in the container; written outside the quotes, your local shell pipes the container's output into a local command instead.
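A minimal sketch of the difference:
docker run img /bin/bash -c "ls -1 | wc -l"
docker run img /bin/bash -c "ls -1" | wc -l
The first counts the files inside the container; in the second, wc runs on the host against the container's output.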
bash -c works well if the commands you are running are relatively simple. However, if you're trying to run a long series of commands full of control characters, it can get complex.
I successfully got around this by piping my commands into the process from the outside, i.e.
cat script.sh | docker run -i <image> /bin/bash
Just to make a proper answer out of @Eddy Hernandez's comment, which is very correct since Alpine comes with ash, not bash.
The question now refers to Starting a shell in the Docker Alpine container, which implies using sh or ash or /bin/sh or /bin/ash.
Based on the OP's question:
docker run image sh -c "cd /path/to/somewhere && python a.py"
If you want to store the result in a file outside the container, on your local machine, you can do something like this:
RES_FILE=$(readlink -f /tmp/result.txt)
docker run --rm -v ${RES_FILE}:/result.txt img bash -c "grep root /etc/passwd > /result.txt"
The result of your commands will be available in /tmp/result.txt on your local machine.
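If the container only needs to produce the output, an arguably simpler variant is to skip the volume mount and redirect the container's stdout on the host:
docker run --rm img bash -c "grep root /etc/passwd" > /tmp/result.txt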
For anyone else who came here looking to do the same with docker-compose, you just need to prepend bash -c and enclose the multiple commands in quotes, joined together with &&.
So in the OP's example: docker-compose run image bash -c "cd /path/to/somewhere && python a.py"
If you don't mind the commands running in a subshell, put a set of parentheses around them inside the shell invocation:
docker run image bash -c "(cd /path/to/somewhere && python a.py)"
TL;DR:
$ docker run --entrypoint /bin/sh image_name -c "command1 && command2 && command3"
A concern regarding the accepted answer: nobody has mentioned that docker run image_name /bin/bash -c just appends the command to the image's entrypoint. Some popular images are smart enough to process this correctly, but some are not.
Imagine the following Dockerfile:
FROM alpine
ENTRYPOINT ["echo"]
If you build it as an image named echo and run:
$ docker run echo /bin/sh -c date
Your command gets appended to the entrypoint, so the result would be echo "/bin/sh -c date".
Instead, you need to override the entrypoint:
$ docker run --entrypoint /bin/sh echo -c date
Docker run reference
In case it's not obvious, if a.py always needs to run in a particular directory, create a simple wrapper script which does the cd and then runs the script.
In your Dockerfile, replace
CMD ["python", "a.py"]
or whatever with
CMD ["/wrapper"]
and create a script wrapper in your root directory (or wherever it's convenient for you) with contents like
#!/bin/sh
set -e
cd /path/to/somewhere
python a.py
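For completeness, a sketch of how the wrapper might be wired into the image (the paths are assumptions):
COPY wrapper /wrapper
RUN chmod +x /wrapper
CMD ["/wrapper"]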
In many situations, also consider rewriting a.py so that it doesn't need a wrapper: either make it os.chdir() to where it needs to be, or have it look for its data files in a directory you configure in its environment, or similar.

Some Output Lost in Command Passed to SSH

I'm trying to use an ssh command to ssh to a server and run the useradd command I passed to it. It seems like it's running OK for the most part (no errors produced), but the hashed password in the /etc/shadow file is missing the salt (I believe that's the portion that's missing).
I'm not sure whether it's the quoting that is incorrect. Running this command manually on the server works fine, so I'm assuming it's the expansion that's messed up?
The command below is running inside a Bash script...
Command:
ssh user#$host "useradd -d /usr/local/nagios -p $(perl -e 'print crypt("mypassword", "\$6\$salt");') -g nagios nagios && chown -R nagios:nagios /usr/local/nagios"
*When I escape the double quotes inside the perl one-liner, I get the error:
Can't find string terminator '"' anywhere before EOF at -e line 1.
Usage: useradd [options] LOGIN
Any idea what I'm doing wrong here?
Instead of enclosing the entire command in double-quotes and making sure to correctly escape everything in it, it will be more robust to use single-quotes, and handle embedded single-quotes as necessary.
In fact there are no embedded single-quotes to handle,
only the embedded literal $ in the $6$salt.
ssh "user#$host" 'useradd -d /usr/local/nagios -p $(perl -e "print crypt(q{mypassword}, q{\$6\$salt});") -g nagios nagios && chown -R nagios:nagios /usr/local/nagios'
echo "useradd -d /usr/local/nagios -p $(perl -e 'print crypt("mypassword", "\$6\$salt");') -g nagios nagios && chown -R nagios:nagios /usr/local/nagios" > /tmp/tempcommand && scp /tmp/tempcommand root#server1:/tmp && ssh server1 "sh -x /tmp/tempcommand && finger nagios && rm /tmp/tempcommand"
In such cases I always prefer to have a local file on the local/remote server from which I execute the command set. Saves a lot of "quotes debugging time". What I am doing above is first to save the long one-liner to a file locally, "as is" and "as works" locally, copy it over with scp to the remote server and execute it there with the shell.
More secure way (no need to copy over the file). Again - save it locally and pass it to the remote bash with -s option :
echo "useradd -d /usr/local/nagios -p $(perl -e 'print crypt("mypassword", "\$6\$salt");') -g nagios nagios && chown -R nagios:nagios /usr/local/nagios" > /tmp/tempcommand && echo finger nagios >> /tmp/tempcommand && ssh server1 'bash -s' < /tmp/tempcommand
