How to send a command properly into docker container? - bash

I can execute the command below in my terminal successfully.
command:
gdalwarp -s_srs "+datum=WGS84 +no_defs +geoidgrids=egm96-15.gtx" -t_srs "+datum=WGS84 +no_def" input.tif output.tif
Now, I want to store this command into a variable and expand this command inside a docker container.
My script run.sh looks like the following. I first store my target command into mycommand and run the container with the command as input.
mycommand=$@;
docker run -ti --rm osgeo/gdal:ubuntu-small-latest /bin/bash -c "cd $(pwd); ${mycommand}"
And then I execute the run.sh as following.
bash run.sh gdalwarp -s_srs "+datum=WGS84 +no_defs +geoidgrids=egm96-15.gtx" -t_srs "+datum=WGS84 +no_def" input.tif output.tif
Issue:
I was hoping that everything after bash run.sh would be stored literally in the mycommand variable,
and that inside the docker container mycommand would be expanded and executed literally. But it looks like the double quotes in my original command are lost in the process.
Thank you.

You could pass the command as arguments and then invoke "$@" inside the shell. I mostly prefer single quotes.
docker run -ti --rm osgeo/gdal:ubuntu-small-latest \
/bin/bash -c 'cd '"$(pwd)"' && "$@"' -- "$@"
If you only want the cd, just let docker change the directory with -w. In Bash, $PWD is also faster than running the pwd command.
docker run ... -w "$PWD" image "$@"
Note that "$(pwd)" is not properly quoted inside the child shell - the result will undergo word splitting and filename expansion. In any case, I recommend declare -p with Bash arrays (and declare -f for functions) to transfer data between Bash shells. declare always quotes everything properly, so the child shell can import it safely.
cmd=("$@")
pwd=$PWD
work() {
    cd "$pwd"
    "${cmd[@]}"
}
docker ... bash -c "$(declare -p pwd cmd); $(declare -f work); work"
Research: when to use quoting in shell, difference between single and double quotes, word splitting expansion and how to prevent it, https://mywiki.wooledge.org/Quotes , bash arrays, https://mywiki.wooledge.org/BashFAQ/050 .
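The declare -p round trip can be tried without Docker at all; here is a minimal sketch with a plain child bash standing in for the container (the cmd, dir, and work names are just illustrative):

```shell
#!/bin/bash
# Pack an argument vector and the current directory for transfer.
cmd=(printf '%s\n' "a b" "c")
dir=$PWD

work() {
    cd "$dir"
    "${cmd[@]}"
}

# The child shell re-imports everything via the declare output, so quoting
# survives intact: the two-word argument "a b" stays one word.
bash -c "$(declare -p dir cmd); $(declare -f work); work"
```

Because declare -p emits properly quoted assignments, the child prints "a b" on one line and "c" on the next, exactly as the array was built.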


Run an arbitrary command in a docker container that runs on a remote host after sourcing some environment variables from another command

To show what I am trying to do, this is part of the bash script I have so far:
COMMAND="${@:1}"
CONTAINER_DOCKER_NAME=this-value-is-computed-prior
MY_IP=this-ip-is-computed-prior
ssh user@$MY_IP -t 'bash -c "docker exec -it $( docker ps -a -q -f name='$CONTAINER_DOCKER_NAME' | head -n 1 ) /bin/sh -c "eval $(echo export FOO=$BAR) && $COMMAND""'
So let's break down the long command:
I am ssh-ing into a host where I run bash, which fetches the correct container with docker ps; then I docker exec a shell in the container to load some environment variables that my $COMMAND needs. Important to note: $BAR should be the value of the BAR variable inside the container.
So that's what I'm trying to accomplish in theory. However, when running this, no matter how I arrange the braces, quotes, or escape characters, I always run into problems: either the shell syntax is not correct, or it does not run the correct command (especially when the command has multiple arguments), or it loads the value of $BAR from my local desktop or the remote host instead of the container.
Is this even possible at all with a single shell one-liner?
I think we can simplify your command quite a bit.
First, there's no need to use eval here, and you don't need the &&
operator, either:
/bin/sh -c "eval $(echo export FOO=$BAR) && $COMMAND"
Instead:
/bin/sh -c "FOO=$BAR $COMMAND"
That sets the environment variable FOO for the duration of
$COMMAND.
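A quick local check of the VAR=value command form, with a plain sh standing in for the container shell (FOO and myvalue are placeholders):

```shell
#!/bin/bash
# FOO exists only in the environment of the child command.
FOO=myvalue sh -c 'echo "$FOO"'    # the child sees myvalue
echo "${FOO:-not set here}"        # FOO was never set in this shell
```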
Next, you don't need this complex docker ps expression:
docker ps -a -q -f name="$CONTAINER_DOCKER_NAME"
Docker container names are unique. If you have a container name
stored in $CONTAINER_DOCKER_NAME, you can just run:
docker exec -it $CONTAINER_DOCKER_NAME ...
This simplifies the docker command down to:
docker exec -it $CONTAINER_DOCKER_NAME \
/bin/sh -c "FOO=\$BAR $COMMAND"
Note how we're escaping the $ in $BAR there, because we want that
interpreted inside the container, rather than by our current shell.
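The effect of escaping the $ can be seen with a plain bash -c standing in for the container (WHERE is a made-up variable):

```shell
#!/bin/bash
export WHERE=host
# Unescaped: $WHERE is expanded by the current (host) shell first,
# so the child runs `echo host` and its own value is never consulted.
WHERE=child bash -c "echo $WHERE"
# Escaped: \$WHERE reaches the child shell literally and expands there.
WHERE=child bash -c "echo \$WHERE"
```

The first command prints host, the second prints child.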
Now we just need to arrange to run this via ssh. There are a couple
of solutions to that. We can just make sure to protect everything on
the command line against the extra level of shell expansion, like
this:
ssh user@$MY_IP "docker exec -it $CONTAINER_DOCKER_NAME \
/bin/sh -c \"FOO=\\\$BAR $COMMAND\""
We need to wrap the entire command in double quotes, which means we
need to escape any quotes inside the command (we can't use single
quotes because we actually want to expand the variable
$CONTAINER_DOCKER_NAME locally). We're going to lose one level of
\ expansion, so our \$BAR becomes \\\$BAR.
If your command isn't interactive, you can make this a little less
hairy by piping the script to bash rather than including it on the
command line, like this:
ssh user@$MY_IP docker exec -i $CONTAINER_DOCKER_NAME /bin/sh <<EOF
FOO=\$BAR $COMMAND
EOF
That simplifies the quoting and escaping necessary to get things
passed through to the container shell.
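The same stdin trick can be tried locally, with bash -s standing in for the whole ssh/docker chain (BAR and COMMAND are placeholders):

```shell
#!/bin/bash
COMMAND='echo done'
# The here-document is expanded by the local shell: $COMMAND is spliced in
# here, while the escaped \$BAR survives and expands in the reading shell.
BAR=child-value bash -s <<EOF
FOO=\$BAR
echo "FOO is \$FOO"
$COMMAND
EOF
```

This prints "FOO is child-value" followed by "done": FOO was assigned from the child's environment, while the command text came from the caller.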
Thanks to larsks' great explanation I got it working. My final one-liner is:
ssh -i $ECS_SSH_KEY ec2-user@$EC2_IP -t "bash -c \"docker exec -it \$( docker ps -a -q -f name=$CONTAINER_DOCKER_NAME | head -n 1 ) /bin/sh -c \\\"eval \\\\\\\$(AWS_ENV_PATH=/\\\\\\\$ENVIRONMENT /bin/aws-env) && $COMMAND\\\"\""
So basically you wrap everything in double quotes, and then also use double quotes inside, because we need some variables, like $CONTAINER_DOCKER_NAME, from the host. To escape the quotes and the $ sign you use \.
But because we have multiple levels of shells (host, server, container), we also need multiple levels of escaping. The first level is just \$, which ensures that the variable (or a shell command, like docker ps) is run not on the host but on the server.
The next level of escaping is seven \ characters. Every \ escapes the character to its right, so it ends up as \\\$ at the second level (server) and \$ at the third level (container). This ensures that the variable is evaluated in the container, not on the server.
The same principle applies to the double quotes: everything between \" is run at the second level, and everything between \\\" at the third level.
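The level counting can be verified with nested bash -c calls standing in for ssh and docker exec; each shell level strips one layer of escaping (WHO is a made-up variable):

```shell
#!/bin/bash
# Level 1 (this script) parses the outer double quotes:
#   \$WHO   becomes $WHO   and expands at level 2 (the "server"),
#   \\\$WHO becomes \$WHO  and expands at level 3 (the "container").
WHO=server bash -c "echo \$WHO; WHO=container bash -c \"echo \\\$WHO\""
```

The output is server, then container: each escaped form expanded exactly at the level it was protected down to.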

Pass all args to a command called in a new shell using bash -c

I've simplified my example to the following:
file1.sh:
#!/bin/bash
bash -c "./file2.sh $@"
file2.sh:
#!/bin/bash
echo "first $1"
echo "second $2"
I expect that if I call ./file1.sh a b to get:
first a
second b
but instead I get:
first a
second
In other words, arguments after the first are not being passed through to the command I'm executing inside the new bash shell. I've tried many variations of removing and moving around the quotation marks in file1.sh, but haven't gotten this to work.
Why is this happening, and how do I get the behavior I want?
(UPDATE: I realize it seems pointless to call bash -c in this example; my actual file1.sh is a proxy script for a command that is run locally in a docker container, so it's really docker exec -i mycontainer bash -c '')
Change file1.sh to this with different quoting:
#!/bin/bash
bash -c './file2.sh "$@"' - "$@"
The hyphen populates $0, and "$@" passes all the other positional parameters into the bash -c command line.
You can also make it:
bash -c './file2.sh "$@"' "$0" "$@"
However there is no real need to use bash -c here and you can just use:
./file2.sh "$@"
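A quick way to see the role of the extra arguments; printf stands in for file2.sh here:

```shell
#!/bin/bash
# The first argument after the -c script becomes $0 of that script,
# so the hyphen is a throwaway placeholder; "$@" then fills $1, $2, ...
bash -c 'printf "%s\n" "$@"' - "first arg" "second arg"
```

Each argument comes out as its own line, spaces intact, because "$@" preserves word boundaries.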

Bash: Execute command WITH ARGUMENTS in new terminal [duplicate]

This question already has answers here:
how do i start commands in new terminals in BASH script
(2 answers)
Closed 20 days ago.
So I want to open a new terminal from bash and execute a command with arguments.
As long as I only run something like ls as the command it works fine, but when I try something like route -n, i.e. a command with arguments, it doesn't work.
The code:
gnome-terminal --window-with-profile=Bash -e whoami     # WORKS
gnome-terminal --window-with-profile=Bash -e route -n   # DOESN'T WORK
I already tried putting "" around the command and all that, but it still doesn't work.
You can start a new terminal with a command using the following:
gnome-terminal --window-with-profile=Bash -- \
bash -c "<command>"
To continue the terminal with the normal bash profile, add exec bash:
gnome-terminal --window-with-profile=Bash -- \
bash -c "<command>; exec bash"
Here's how to create a Here document and pass it as the command:
cmd="$(printf '%s\n' 'wc -w <<-EOF
First line of Here document.
Second line.
The output of this command will be '15'.
EOF' 'exec bash')"
xterm -e bash -c "${cmd}"
To open a new terminal and run an initial command with a script, add the following in a script:
nohup xterm -e bash -c "$(printf '%s\nexec bash' "$*")" &>/dev/null &
When $* is quoted, it expands the arguments to a single word, with each separated by the first character of IFS. nohup and &>/dev/null & are used only to allow the terminal to run in the background.
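The IFS joining behaviour of "$*" can be checked directly (route -n is just sample data here):

```shell
#!/bin/bash
set -- route -n
# "$*" joins all positional parameters into a single word,
# separated by the first character of IFS (a space by default).
echo "$*"
IFS=,
echo "$*"
```

With the default IFS this prints route -n; after IFS=, it prints route,-n, which is why quoting $* yields one command string suitable for bash -c.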
Try this:
gnome-terminal --window-with-profile=Bash -e 'bash -c "route -n; read"'
The final read prevents the window from closing after execution of the previous commands. It will close when you press a key.
If you want to experience headaches, you can try with more quote nesting:
gnome-terminal --window-with-profile=Bash \
-e 'bash -c "route -n; read -p '"'Press a key...'"'"'
(In the following examples there is no final read. Let’s suppose we fixed that in the profile.)
If you want to print an empty line and enjoy multi-level escaping too:
gnome-terminal --window-with-profile=Bash \
-e 'bash -c "printf \\\\n; route -n"'
The same, with another quoting style:
gnome-terminal --window-with-profile=Bash \
-e 'bash -c '\''printf "\n"; route -n'\'
Variables are expanded in double quotes, not single quotes, so if you want them expanded you need to ensure that the outermost quotes are double:
command='printf "\n"; route -n'
gnome-terminal --window-with-profile=Bash \
-e "bash -c '$command'"
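The difference is easy to demonstrate without a terminal emulator, using bash -c directly (the printf command is a stand-in):

```shell
#!/bin/bash
command='printf "%s\n" expanded'
# Outer double quotes: $command is expanded by this shell before bash -c runs.
bash -c "$command"
# Outer single quotes: the child gets the literal text $command, and since
# no such variable is exported to it, the expansion is empty.
bash -c 'echo "[$command]"'
```

The first call prints expanded; the second prints [], showing that only the double-quoted form lets the current shell substitute the variable.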
Quoting can become really complex. When you need something more advanced than a simple couple of commands, it is advisable to write an independent shell script with all the readable, parametrized code you need, save it somewhere, say /home/user/bin/mycommand, and then invoke it simply as
gnome-terminal --window-with-profile=Bash -e /home/user/bin/mycommand

Echo variable using sudo bash -c 'echo $myVariable' - bash script

I want to echo a string into the /etc/hosts file. The string is stored in a variable called $finalString.
When I run the following code the echo is empty:
finalString="Hello\nWorld"
sudo bash -c 'echo -e "$finalString"'
What am I doing wrong?
Two things are going wrong: you're not exporting the variable into the environment, so it can't be picked up by subprocesses, and you haven't told sudo to preserve the environment.
finalString="Hello\nWorld"
export finalString
sudo -E bash -c 'echo -e "$finalString"'
Alternatively, you can have the current shell substitute instead:
finalString="Hello\nWorld"
sudo bash -c 'echo -e "'"$finalString"'"'
You can do this:
bash -c "echo -e '$finalString'"
i.e using double quote to pass argument to the subshell, thus the variable ($finalString) is expanded (by the current shell) as expected.
Though I would recommend not using the -e flag with echo. Instead you can just do:
finalString="Hello
World"
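For the record, a literal newline in the variable also works with the quote-splicing style from the earlier answer; sudo is dropped here so the sketch runs anywhere:

```shell
#!/bin/bash
# The variable holds a real newline, so no echo -e is needed.
finalString="Hello
World"
# Splice the value into the child script between double quotes.
bash -c 'echo "'"$finalString"'"'
```

The child shell receives echo "Hello<newline>World" and prints both lines, with no escape processing involved.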

Simplest way to "forward" script arguments to another command

I have following script
#!/bin/bash
docker exec my_container ./bin/cli
And I have to append all arguments passed to the script to the command inside the script. So, for example, executing
./script some_command -t --option a
Should run
docker exec my_container ./bin/cli some_command -t --option a
Inside the script. I am looking for simplest/most elegant way.
"$@" represents all the arguments and supports quoted arguments too:
docker exec my_container ./bin/cli "$@"
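To see why the quoting matters, here is a sketch with shell functions standing in for docker exec and ./bin/cli (showargs and forward are made-up names):

```shell
#!/bin/bash
# A stand-in for ./bin/cli that reports how it received its arguments.
showargs() { printf '%d:%s\n' "$#" "$*"; }

forward() {
    # "$@" forwards every argument as its own word, quoted ones intact.
    showargs "$@"
}

forward some_command -t --option "a b"
```

The callee sees four arguments, with "a b" still one word; an unquoted $@ or "$*" would have split or merged them.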
