Cannot use process substitution during docker build because bash goes into POSIX mode

In a Dockerfile, I want to use process substitution:
RUN echo <(echo '$DATA:'"$DATA")
But docker build runs every RUN command with /bin/sh. Apparently being run as sh causes bash to switch to POSIX mode, which does not allow process substitution:
/bin/sh: -c: line 0: syntax error near unexpected token `('
I tried switching off POSIX mode:
RUN set +o posix && echo <(echo '$DATA:'"$DATA")
But it seems the syntax error happens even before the first command is run. Same if I replace && with ;.
Note that the command (even the one that I used as a simplified example here) contains both single and double quotes, so I can't simply prepend bash -c.
The shell being used is actually bash, but it is invoked as /bin/sh by docker:
Step 7 : RUN ls -l /bin/sh
---> Running in 93a9809e12a7
lrwxrwxrwx 1 root root 9 Dec 28 03:38 /bin/sh -> /bin/bash

If you are sure bash is present in the image being built, then you can change the shell invocation by using the SHELL instruction, which I described in another question.
You can use SHELL [ "/bin/bash", "-c" ]. Consider:
$ docker build --no-cache - < <(echo '
> FROM fedora
> RUN cat <(echo hello world)
> ')
Sending build context to Docker daemon 2.048kB
Step 1/2 : FROM fedora
---> ef49352c9c21
Step 2/2 : RUN cat <(echo hello world)
---> Running in 573730ced3a3
/bin/sh: -c: line 0: syntax error near unexpected token `('
/bin/sh: -c: line 0: `cat <(echo hello world)'
The command '/bin/sh -c cat <(echo hello world)' returned a non-zero code: 1
$ docker build --no-cache - < <(echo '
> FROM fedora
> SHELL ["/bin/bash", "-c"]
> RUN cat <(echo hello world)
> ')
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM fedora
---> ef49352c9c21
Step 2/3 : SHELL ["/bin/bash", "-c"]
---> Running in e78260e6de42
Removing intermediate container e78260e6de42
---> ff6ec782a9f6
Step 3/3 : RUN cat <(echo hello world)
---> Running in afbb42bba5b4
hello world
Removing intermediate container afbb42bba5b4
---> 25f756dcff9b
Successfully built 25f756dcff9b
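Applied to the command from the question, the Dockerfile would look something like this (just a sketch; the ENV line is a made-up placeholder for however DATA actually gets set in your build):
FROM fedora
ENV DATA=example
SHELL ["/bin/bash", "-c"]
# bash now handles the RUN line, so process substitution is available
RUN echo <(echo '$DATA:'"$DATA")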

Assuming your sh is not bash, you can't use process substitution in shell mode directly; you need to spawn a bash session (non-login, non-interactive here):
RUN [ "/bin/bash", "-c", "echo <(echo '$DATA:'\"$DATA\")" ]
Here I have used the JSON (aka exec) form so the quotes are easier to manage; you just need to escape the double quotes around $DATA as \"$DATA\" so they are not consumed by the JSON parsing beforehand.
If your sh is in fact bash, this should do:
RUN "echo <(echo '$DATA:'"$DATA")"
Also note that this just outputs a file descriptor path, so I am not really sure what you are planning to do with it.

Related

Bash script start docker container script & pass in arguments

I have a bash script that runs command-line functions, and I then need the script to run commands in a docker container, pass arguments into it, and eventually exit. However, I'm unable to get the script to pass the arguments into the docker container. How can I do this?
This is what the docker commands look like without the bash script for reference.
$ docker exec -it rti_cmd
root#29c:/data# rti
187.0.0.1:9806> run_cmd
(integer) 0
187.0.0.1:9806> exit
root#29c:/data# exit
exit
Code snippet with two variations of attempts:
#!/bin/bash
docker exec -it rti_cmd bash<< eeee
rti
run_cmd
exit
exit
eeee
#also have done without the ";"
docker exec -it rti_cmd bash /bin/sh -c
"rti;
run_cmd;
exit;
exit"
Errors:
$ chmod +x test.sh
$ ./test.sh
the input device is not a TTY
/bin/sh: /bin/sh: cannot execute binary file
./test.sh: line 17: $'rti;\nrun_cmd;\nexit;\nexit': command not found
You don't need -i (interactive) or -t (tty) if you want to run non-interactively.
docker exec rti_cmd sh -c 'rti;run_cmd'
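If you prefer to keep the heredoc style from your original script, the same idea works once -t is dropped; this is only a sketch, assuming rti accepts commands on stdin the same way it does at its interactive prompt:
# feed commands to rti via stdin; -i keeps stdin open, no -t since there is no terminal
docker exec -i rti_cmd rti << 'eeee'
run_cmd
eeee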

How can I avoid the shell script from automatically adding single quotes?

Here is my example shell script:
#!/bin/bash
#assuming this param is obtained from outside
cmd='-c "python test.py --log_level=info"'
#full cmd
docker run xxx $cmd
The expected command should be
docker run xxx -c "python test.py --log_level=info"
however, error occurs:
test.py: -c: line 0: unexpected EOF while looking for matching `"'
So I run with 'sh -x' and here is output:
+ cmd='-c "python test.py --log_level=info"'
+ docker run xxx -c '"python' test.py '--log_level=info"'
The full cmd is not what I expected. Can you help me solve this? Big thanks :)
First, make it easy for yourself by testing with a simpler command, e.g. ls -- $cmd
Second, since you're using bash and not just the posix shell, use arrays!
cmd=(-c "python test.py --log_level=info")
ls -- "${cmd[#]}"
/bin/ls: cannot access '-c': No such file or directory
/bin/ls: cannot access 'python test.py --log_level=info': No such file or directory
See?
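Applied to the command from the question, the same array technique keeps the quoted argument intact (a sketch; xxx stands for your image, as in the question):
# each array element becomes exactly one argument to docker run
cmd=(-c "python test.py --log_level=info")
docker run xxx "${cmd[@]}"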
IIUC, there is no need for double quotes in your cmd; just use cmd='-c python test.py --log_level=info'.
In docker run, the -c is an OPTION which means CPU shares (relative weight).
Maybe you want to execute docker run xxx sh -c ..., so the script should look like:
cmd='sh -c python test.py --log_level=info'
docker run xxx $cmd

How do I pass multiple arguments to a shell script into `kubectl exec`?

Consider the following shell script, where POD is set to the name of a K8 pod.
kubectl exec -it $POD -c messenger -- bash -c "echo '$@'"
When I run this script with one argument, it works fine.
hq6:bot hqin$ ./Test.sh x
x
When I run it with two arguments, it blows up.
hq6:bot hqin$ ./Test.sh x y
y': -c: line 0: unexpected EOF while looking for matching `''
y': -c: line 1: syntax error: unexpected end of file
I suspect that something is wrong with how the arguments are passed.
How might I fix this so that arguments are expanded literally by my shell and then passed in as literals to the bash running in kubectl exec?
Note that removing the single quotes results in an output of x only.
Note also that I need the bash -c so I can eventually pass in file redirection: https://stackoverflow.com/a/49189635/391161.
I managed to work around this with the following solution:
kubectl exec -it $POD -c messenger -- bash -c "echo $*"
This appears to have the additional benefit that I can do internal redirects.
./Test.sh x y '> /tmp/X'
You're going to want something like this:
kubectl exec POD -c CONTAINER -- sh -c 'echo "$@"' -- "$@"
With this syntax, the command we're running inside the container is echo "$@". We then take the local value of "$@" and pass that as parameters to the remote shell, thus setting $@ in the remote shell.
On my local system:
bash-5.0$ ./Test.sh hello
hello
bash-5.0$ ./Test.sh hello world
hello world
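For reference, a complete Test.sh along these lines might look like the following sketch (POD and the messenger container come from the question; -it is dropped because no TTY is needed for a plain echo):
#!/bin/bash
# POD is assumed to be set to the pod name, as in the question
kubectl exec "$POD" -c messenger -- sh -c 'echo "$@"' -- "$@"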

Executing 'bash -c' in 'docker exec' command

Context: I'm trying to write a shortcut for my daily use of the docker exec command. For some reason, I'm experiencing a problem where my output is sometimes broken when I'm using a bash console inside a container (history messed up, lines overwriting each other as I type, ...).
I read here that you could overcome this problem by adding some command before starting the bash console.
Here is a relevant excerpt of what my script does
#!/bin/bash
containerHash=$1
commandToRun='bash -c "stty cols $COLUMNS rows $LINES && bash -l"'
finalCommand="winpty docker exec -it $containerHash $commandToRun"
echo $finalCommand
$finalCommand
Here is the output I get:
winpty docker exec -it 0b63a bash -c "stty cols $COLUMNS rows $LINES && bash -l"
cols: -c: line 0: unexpected EOF while looking for matching `"'
cols: -c: line 1: syntax error: unexpected end of file
I read here that this had to do with parsing and expansion. However, I can't use a function or an eval command (or at least I didn't succeed in making it work).
If I execute the first output line directly in my terminal, it works without trouble.
How can I overcome this problem?
It's not Docker-related, but a Bash issue (in other words, the Docker part of the command works fine; it's just Bash complaining inside the container the same way it would complain on your host):
Minimal reproducible error
cmd='bash -c "echo hello"'
$cmd
hello": -c: line 0: unexpected EOF while looking for matching `"'
hello": -c: line 1: syntax error: unexpected end of file
Fix
cmd='bash -c "echo hello"'
eval $cmd
hello
Answer
foo='docker exec -it XXX bash -c "echo hello"'
eval $foo
This will let you execute your command echo hello in your container. Now, if you want to use dynamic variables in this command (like echo $string), you just have to swap the single quotes for double ones; to make this work you will then have to escape the inner double quotes:
foo="docker exec -it $container bash -c \"echo $variable\""
A complete example
FOO="Hello"
container=$1
bar=$2
cmd="bash -c \"echo $FOO, $bar\""
final_cmd="docker exec -it $container $cmd"
echo "running command: \"$final_cmd\""
eval $final_cmd
Let's take time to dig in:
$FOO is a static variable; in our case it works exactly like a regular variable and is only there to illustrate.
$bar is a dynamic variable which takes the second command-line argument as its value.
Because $cmd and $final_cmd use only double quotes, the variables are expanded.
Because we use eval $final_cmd, the command is interpreted correctly and bash is happy.
Finally, a usage example:
bash /tmp/dockerize.sh 5b02ab015730 world
Gives
running command: "docker exec -it 5b02ab015730 bash -c "echo Hello, world""
Hello, world

Using exec -a in Script

Hi I'm trying to run the following script. However, I get an error. Any tips?
prog1 takes an argument, in this case 1000. I am using the exec command because I want to change the program name to "/bin/grade" when executing prog1.
This is the error I am getting:
/script.sh: 2: exec: -a: not found
#! /bin/sh
exec -a "/bin/grade" ./prog1 1000 &
sleep 0.001
kill -14 $!
Run the script with bash instead of sh: put #!/bin/bash at the top. The -a flag is specific to the bash shell.
Example A:
#!/bin/sh
exec -a "/bin/bash" pwd
Returns: ./test.sh: 3: exec: -a: not found
Example B:
#!/bin/bash
exec -a "/bin/sh" pwd
Returns: /home/dev
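With that change, the script from the question becomes (a sketch, assuming ./prog1 exists as in the original):
#!/bin/bash
# exec -a works here because the script now runs under bash
exec -a "/bin/grade" ./prog1 1000 &
sleep 0.001
kill -14 $!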
