Dereference environment variable on parameter expansion in shell script - bash

I am trying to dereference the value of an environment variable using the parameter expansion $@, but it doesn't seem to work.
I need to call a shell script with certain arguments. The list of arguments contains environment variables, and those environment variables are expected to be present where the shell script is executed. I do not know the list of commands beforehand, so I expand them using $@. However, the script is not able to dereference the value of the environment variables.
A minimal setup that reproduces my problem is below.
Dockerfile
FROM alpine:3.10
ENV MY_VAR=production
WORKDIR /app
COPY run.sh .
ENTRYPOINT [ "sh", "run.sh" ]
run.sh
#!/bin/sh
echo "Value of MY_VAR is" $MY_VAR
echo "Begin"
$@
echo "Done"
I can build the image using docker build . -t env-test. When I run it using docker run env-test:latest 'echo $MY_VAR', I get the below output.
Value of MY_VAR is production
Begin
$MY_VAR
Done
While the output that I am expecting is:
Value of MY_VAR is production
Begin
production
Done
Side note: in actuality I am trying to run it using a compose file like the one below:
version: '3'
services:
  run:
    image: env-test:latest
    command: echo $$MY_VAR
but it again gives me a similar result ($$ is Compose's escape for a literal $, so the container receives echo $MY_VAR, just as in the docker run example).

Expanding on the eval approach, here is a simple bash script that will use eval to evaluate a string as a sequence of bash commands:
#!/usr/bin/env bash
echo "program args: $@"
eval "$@"
but beware, eval comes with dangers:
https://medium.com/dot-debug/the-perils-of-bash-eval-cc5f9e309cae
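Applied to the question above, this means replacing the bare $@ line in run.sh with an eval (a sketch of the modified script, not necessarily the original author's exact fix):
#!/bin/sh
echo "Value of MY_VAR is" $MY_VAR
echo "Begin"
eval "$@"
echo "Done"
Now docker run env-test:latest 'echo $MY_VAR' prints production between Begin and Done, because eval re-parses its arguments, so $MY_VAR is expanded by the shell inside the container.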

First thing: $# will just give you the number of arguments. Your run.sh should use $@ to expand the arguments themselves:
#!/bin/sh
echo "Value of MY_VAR is" $MY_VAR
echo "Begin"
$@
echo "Done"
$@ = stores all the arguments in a list of strings
$* = stores all the arguments as a single string
$# = stores the number of arguments
What does $# mean in a shell script?
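A quick way to see all three side by side (a minimal sketch; the script name args.sh is hypothetical):
#!/bin/sh
# args.sh - print each of the three expansions
echo "Number of arguments (\$#): $#"
echo "All arguments as one string (\$*): $*"
for arg in "$@"; do echo "Separate argument from \$@: $arg"; done
Running sh args.sh one "two three" reports a count of 2, the single string one two three, and then the two separate arguments one and two three.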
Second thing: when you run the command below
docker run env-test:latest 'echo $MY_VAR'
any expansion of that argument happens on the host system, not in the container (the single quotes merely suppress it here; with double quotes, $MY_VAR would be taken from the host environment).
To set the container's environment you have to pass variables with -e MY_VAR=test, not as arguments to the docker run command, which go through the host shell.
docker run -e MY_VAR=test env-test:latest
So the value of MY_VAR will be test, not production.
To debug how arguments to docker run are expanded:
export MY_VAR2="value_from_host"
Now run
docker run env-test:latest "echo $MY_VAR2"
The value printed will be value_from_host, because the double-quoted argument is expanded by the host shell before docker ever runs.
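Putting the two together (a sketch, assuming run.sh uses eval "$@" as in the earlier answer, so the argument is expanded inside the container):
docker run -e MY_VAR=test env-test:latest 'echo $MY_VAR'
This prints test: the single quotes defer expansion to the container, and -e overrides the ENV default from the Dockerfile.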

There are a few more ways to skin this particular cat:
me@computer:~$ docker run -it --rm ubuntu:20.04 bash -c 'echo $HOSTNAME'
e610946f50c1
Here, we're calling on bash inside the container to process everything inside the single quotes, so variable substitution and/or expansion isn't applied by your shell, but by the shell inside the container.
Another approach is:
me@computer:~$ cat test.sh
#!/bin/bash
echo $HOSTNAME
me@computer:~$ cat test.sh | docker run -i --rm ubuntu:20.04 bash
62ba950a60fe
In this case, cat is "pushing" the contents of the script to bash in the container, so it's functionally equivalent to my first example. The first method is "cleaner"; however, if you've got a more complex script, multi-line variables or other stuff that's difficult to put into a single command, the second method is a better choice.
Note: The hostname is different in each example, because I'm using the --rm option which discards the container once it exits. This is great when you want to run a command in a container but don't need to keep the container afterwards.
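A third variant along the same lines is a heredoc, which avoids the separate script file (a sketch; note the quoted 'EOF', which stops the host shell from expanding $HOSTNAME before the container sees it):
docker run -i --rm ubuntu:20.04 bash <<'EOF'
echo $HOSTNAME
EOF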

Related

How to send a command properly into docker container?

I can execute the command below in my terminal successfully.
command:
gdalwarp -s_srs "+datum=WGS84 +no_defs +geoidgrids=egm96-15.gtx" -t_srs "+datum=WGS84 +no_def" input.tif output.tif
Now, I want to store this command into a variable and expand this command inside a docker container.
My script run.sh looks like the following. I first store my target command into mycommand and run the container with the command as input.
mycommand=$@;
docker run -ti --rm osgeo/gdal:ubuntu-small-latest /bin/bash -c "cd $(pwd); ${mycommand}"
And then I execute the run.sh as following.
bash run.sh gdalwarp -s_srs "+datum=WGS84 +no_defs +geoidgrids=egm96-15.gtx" -t_srs "+datum=WGS84 +no_def" input.tif output.tif
Issue:
I was hoping everything after bash run.sh could be stored literally in the mycommand variable,
and that inside the docker container mycommand could be expanded and executed literally. But it looks like the double quotes in my original command get lost in the process.
Thank you.
You could pass the command as arguments and then invoke "$@" inside the shell. I prefer mostly single quotes.
docker run -ti --rm osgeo/gdal:ubuntu-small-latest \
/bin/bash -c 'cd '"$(pwd)"' && "$@"' -- "$@"
If you only want the cd, just let docker change the directory with -w. In Bash, $PWD will be faster than the pwd command.
docker run ... -w "$PWD" image "$@"
Note that "$(pwd)" is not properly quoted inside child shell - the result will undergo word splitting and filename expansion. Anyway, I recommend declare -p and Bash arrays (and declare -f for functions) to transfer data between Bash-es. declare will always properly quote all stuff, so that child shell can properly import it.
cmd=("$@")
pwd=$PWD
work() {
    cd "$pwd"
    "${cmd[@]}"
}
docker ... bash -c "$(declare -p pwd cmd); $(declare -f work); work"
Research: when to use quoting in shell, difference between single and double quotes, word splitting expansion and how to prevent it, https://mywiki.wooledge.org/Quotes , bash arrays, https://mywiki.wooledge.org/BashFAQ/050 .
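To see why declare -p is safe here, look at what it emits for an array holding the gdalwarp command (hypothetical values):
$ cmd=(gdalwarp -s_srs "+datum=WGS84 +no_defs" input.tif output.tif)
$ declare -p cmd
declare -a cmd=([0]="gdalwarp" [1]="-s_srs" [2]="+datum=WGS84 +no_defs" [3]="input.tif" [4]="output.tif")
Every element is individually quoted, so the child bash can re-create the array exactly, embedded spaces and all.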

How do I pass multiple arguments to a shell script into `kubectl exec`?

Consider the following shell script, where POD is set to the name of a Kubernetes pod.
kubectl exec -it $POD -c messenger -- bash -c "echo '$@'"
When I run this script with one argument, it works fine.
hq6:bot hqin$ ./Test.sh x
x
When I run it with two arguments, it blows up.
hq6:bot hqin$ ./Test.sh x y
y': -c: line 0: unexpected EOF while looking for matching `''
y': -c: line 1: syntax error: unexpected end of file
I suspect that something is wrong with how the arguments are passed.
How might I fix this so that arguments are expanded literally by my shell and then passed in as literals to the bash running in kubectl exec?
Note that removing the single quotes results in an output of x only.
Note also that I need the bash -c so I can eventually pass in file redirection: https://stackoverflow.com/a/49189635/391161.
I managed to work around this with the following solution:
kubectl exec -it $POD -c messenger -- bash -c "echo $*"
This appears to have the additional benefit that I can do internal redirects.
./Test.sh x y '> /tmp/X'
You're going to want something like this:
kubectl exec POD -c CONTAINER -- sh -c 'echo "$@"' -- "$@"
With this syntax, the command we're running inside the container is echo "$@". We then take the local value of "$@" and pass that as parameters to the remote shell, thus setting $@ in the remote shell.
On my local system:
bash-5.0$ ./Test.sh hello
hello
bash-5.0$ ./Test.sh hello world
hello world
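Because both "$@" expansions are quoted, an argument containing spaces arrives in the container as a single parameter (continuing the example above):
bash-5.0$ ./Test.sh 'hello   world'
hello   world
With the unquoted $* workaround from the earlier answer, the same argument would be re-split into separate words by the remote shell.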

Run an arbitrary command in a docker container that runs on a remote host after sourcing some environment variables from another command

To show what I am trying to do, this is part of the bash script I have so far:
COMMAND="${@:1}"
CONTAINER_DOCKER_NAME=this-value-is-computed-prior
MY_IP=this-ip-is-computed-prior
ssh user@$MY_IP -t 'bash -c "docker exec -it $( docker ps -a -q -f name='$CONTAINER_DOCKER_NAME' | head -n 1 ) /bin/sh -c "eval $(echo export FOO=$BAR) && $COMMAND""'
So let's break down the long command:
I am ssh-ing into a host where I run bash, which fetches the correct container with docker ps; I then use docker exec to run a shell in the container, loading some environment variables that my $COMMAND needs to work. Important to note is that $BAR should be the value of the BAR variable inside the container.
So that's what I'm trying to accomplish in theory. However, when running this, no matter how I arrange the braces, quotes or escape characters, I always run into problems: either the shell syntax is not correct, or it does not run the correct command (especially when the command has multiple arguments), or it loads the value of $BAR from my local desktop or the remote host, but not from the container.
Is this even possible at all with a single shell one-liner?
I think we can simplify your command quite a bit.
First, there's no need to use eval here, and you don't need the &&
operator, either:
/bin/sh -c "eval $(echo export FOO=$BAR) && $COMMAND"
Instead:
/bin/sh -c "FOO=$BAR $COMMAND"
That sets the environment variable FOO for the duration of
$COMMAND.
Next, you don't need this complex docker ps expression:
docker ps -a -q -f name="$CONTAINER_DOCKER_NAME"
Docker container names are unique. If you have a container name
stored in $CONTAINER_DOCKER_NAME, you can just run:
docker exec -it $CONTAINER_DOCKER_NAME ...
This simplifies the docker command down to:
docker exec -it $CONTAINER_DOCKER_NAME \
/bin/sh -c "FOO=\$BAR $COMMAND"
Note how we're escaping the $ in $BAR there, because we want that
interpreted inside the container, rather than by our current shell.
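The difference is easy to demonstrate (a sketch; alpine is just a stand-in image):
BAR=host-value
docker run --rm -e BAR=container-value alpine /bin/sh -c "echo $BAR"   # host shell expands it: prints host-value
docker run --rm -e BAR=container-value alpine /bin/sh -c "echo \$BAR"  # container shell expands it: prints container-value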
Now we just need to arrange to run this via ssh. There are a couple
of solutions to that. We can just make sure to protect everything on
the command line against the extra level of shell expansion, like
this:
ssh user@$MY_IP "docker exec -it $CONTAINER_DOCKER_NAME \
/bin/sh -c \"FOO=\\\$BAR $COMMAND\""
We need to wrap the entire command in double quotes, which means we
need to escape any quotes inside the command (we can't use single
quotes because we actually want to expand the variable
$CONTAINER_DOCKER_NAME locally). We're going to lose one level of
\ expansion, so our \$BAR becomes \\\$BAR.
If your command isn't interactive, you can make this a little less
hairy by piping the script to bash rather than including it on the
command line, like this:
ssh user@$MY_IP docker exec -i $CONTAINER_DOCKER_NAME /bin/sh <<EOF
FOO=\$BAR $COMMAND
EOF
That simplifies the quoting and escaping necessary to get things
passed through to the container shell.
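For instance, if COMMAND='printenv FOO' (a hypothetical value), the text the heredoc actually sends to the container's shell is:
FOO=$BAR printenv FOO
$COMMAND was expanded on the client, while \$BAR survived as a literal $BAR, so only the container's /bin/sh resolves it.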
Thanks to larsks' great explanation I got it working; my final one-liner is:
ssh -i $ECS_SSH_KEY ec2-user@$EC2_IP -t "bash -c \"docker exec -it \$( docker ps -a -q -f name=$CONTAINER_DOCKER_NAME | head -n 1 ) /bin/sh -c \\\"eval \\\\\\\$(AWS_ENV_PATH=/\\\\\\\$ENVIRONMENT /bin/aws-env) && $COMMAND\\\"\""
So basically you wrap everything in double quotes and then also use double quotes inside of it, because we need some variables, like $CONTAINER_DOCKER_NAME, from the host. To escape the quotes and the $ signs you use \.
But because we have multiple levels of shells (host, server, container) we also need multiple levels of escaping. The first level is just \$, which ensures the variable (or the shell command, like docker ps) is run not on the host but on the server.
The next level of escaping is seven \ characters. Every \ escapes the character to its right, so in the end it is \\\$ on the second level (server) and \$ on the third level (container). This ensures the variable is evaluated in the container, not on the server.
Same principle with the double quotes: everything between \" is run on the second level and everything between \\\" is run on the third level.

AWS_ACCESS_KEY_ID command not found when using shell to call aws cli passing environment var fail:

When I set the environment variables and call the aws cli directly in my terminal, it succeeds:
AWS_ACCESS_KEY_ID=xxx AWS_SECRET_ACCESS_KEY=xxx AWS_REGION=xxx aws elb describe-load-balancers --output json --debug
But if, in a shell script, I first set up AWS_KEY_PAIR and then execute the same command, it returns AWS_ACCESS_KEY_ID=xxx: command not found
My shell script looks like this:
function test(){
    AWS_KEY_PAIR="AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY} AWS_SECRET_ACCESS_KEY=${AWS_SECRET_KEY} AWS_REGION=${AWS_REGION}"
    ${AWS_KEY_PAIR} aws elb describe-load-balancers --output json --debug
}
Could anyone tell me why it succeeds when I run the first command directly in the terminal but fails in the shell script? Can AWS_KEY_PAIR not be set up like this? Thank you very much!
The problem lies in the way you modify the environment:
man bash: The environment for any simple command or function may be augmented temporarily by prefixing it with parameter assignments, as described above in PARAMETERS. These assignment statements affect only the environment seen by that command.
This means that if a variable exists, you can have it temporarily modified for a single command. E.g.
$ cat test.sh
#!/usr/bin/env zsh
echo $FOO
$ FOO="hello world"
$ echo $FOO
hello world
$ FOO="hello universe" ./test.sh
hello universe
$ echo $FOO
hello world
There you see that during the execution of FOO="hello universe" ./test.sh the variable FOO is temporarily modified. This is exactly what you do during execution of your command on the command line.
In your script, however, you attempt something different: you assign the assignment string to a variable and then try to "execute" that variable.
$ BAR="FOO=hello"
$ echo $BAR
FOO=hello
$ $BAR
bash: FOO=hello: command not found...
As you see, it tries to execute the command FOO=hello which is not a command but actually a string you try to execute. It is similar to typing $ "FOO=hello". So you can now imagine that
$ $BAR ./test.sh
will also not execute.
There is an evil workaround here using eval, but eval is evil.
man bash : eval [arg ...]
The args are read and concatenated together into a single command. This command is then read and executed by the shell, and its exit status is returned as the value of eval. If there are no args, or only null arguments, eval returns 0.
$ BAR="FOO=hello"
$ echo $BAR
FOO=hello
$ eval $BAR
$ echo $FOO
hello
$ FOO="hello world"
$ eval $BAR ./test.sh
hello
The latter examples are exactly what you attempt in your function test(). You assign your variable declarations to the variable AWS_KEY_PAIR and then execute aws with the modified environment ${AWS_KEY_PAIR}, but this will not work, because AWS_KEY_PAIR is nothing more than a long string. You can thus fix it by placing eval in front of it, or by typing the full key pair out as
function test(){
AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY} \
AWS_SECRET_ACCESS_KEY=${AWS_SECRET_KEY} \
AWS_REGION=${AWS_REGION} \
aws elb describe-load-balancers --output json --debug
}
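As a side note, the env utility offers a middle ground (a sketch; it shares the word-splitting caveat, so it only works while the values contain no spaces). Because env is an external command that treats leading NAME=VALUE arguments as environment assignments, the unexpanded string is no longer mistaken for a command name:
function test(){
    AWS_KEY_PAIR="AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY} AWS_SECRET_ACCESS_KEY=${AWS_SECRET_KEY} AWS_REGION=${AWS_REGION}"
    env ${AWS_KEY_PAIR} aws elb describe-load-balancers --output json --debug
}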

Pass all args to a command called in a new shell using bash -c

I've simplified my example to the following:
file1.sh:
#!/bin/bash
bash -c "./file2.sh $@"
file2.sh:
#!/bin/bash
echo "first $1"
echo "second $2"
I expect that if I call ./file1.sh a b to get:
first a
second b
but instead I get:
first a
second
In other words, my later arguments after the first one are not getting passed through to the command that I'm executing inside a new bash shell. I've tried many variations of removing and moving around the quotation marks in the file1.sh file, but haven't got this to work.
Why is this happening, and how do I get the behavior I want?
(UPDATE: I realize it seems pointless to call bash -c in this example; my actual file1.sh is a proxy script for a command that runs in a docker container, so it's really docker exec -i mycontainer bash -c '')
Change file1.sh to this with different quoting:
#!/bin/bash
bash -c './file2.sh "$@"' - "$@"
The hyphen is passed to populate $0, and "$@" is passed in to populate all the other positional parameters of the bash -c command line.
You can also make it:
bash -c './file2.sh "$@"' "$0" "$@"
However there is no real need to use bash -c here and you can just use:
./file2.sh "$@"
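With either variant, calling ./file1.sh a b now produces the expected output:
first a
second b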
