How can I script a Docker command into a 'single word' binary? Using bash script? - bash

When I install something like nmap (even from APT), I can't get it to execute correctly, so I like to go the container route. Instead of typing:
docker run --rm -it instrumentisto/nmap -A -T4 scanme.nmap.org
I figured maybe I could script it out, but nothing I've learned or found on Google, YouTube, etc. has helped so far. Can somebody lend a hand? I need to know how to get Bash to execute a command with args, invoked like:
./nmap.sh -A -T4 -Pn x.x.x.x
So far I have:
#!/bin/bash
echo docker run --rm -it instrumentisto/nmap $1 $2 $3 $4 $5
but I don't know how to get Bash to run the command instead of just echoing it. Thanks ahead!

Two solutions: create an alias, or create a script.
With an alias
The command you write is replaced with the value of the alias, so
alias nmap="docker run --rm -it instrumentisto/nmap"
nmap -A -T4 -Pn x.x.x.x
# executes docker run --rm -it instrumentisto/nmap -A -T4 -Pn x.x.x.x
Aliases are not persistent, so you will have to store the definition in a bash config file (generally ~/.bashrc).
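For example, to persist it (a minimal sketch, assuming your interactive shells read ~/.bashrc):
echo 'alias nmap="docker run --rm -it instrumentisto/nmap"' >> ~/.bashrc   # persist the alias
source ~/.bashrc   # reload the config so the alias works in the current shell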
With a script
#!/bin/bash
set -Eeuo pipefail
docker run --rm -it instrumentisto/nmap "$@"
"$@" will forward all the arguments provided to the script directly to the command. The quotes are important: if you call your script with a quoted value like ./nmap.sh "something with spaces", that's one argument, and it needs to be kept as one argument.
Bonus: With a function
Just like in the script, you need to forward the arguments when writing a function; and just like aliases, functions are not persistent, so you have to store them in your bash config:
nmap() {
docker run --rm -it instrumentisto/nmap "$@"
}
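Once the function is saved to ~/.bashrc, usage looks like this (assuming you reload the config first):
source ~/.bashrc                 # or just open a new terminal
nmap -A -T4 -Pn x.x.x.x          # now runs the containerized nmap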

Related

How to pass ALL environment variables to container with docker exec

It's possible to set one or more environment variables in the container while doing docker exec, for example:
docker exec -ti -e VAR=1 -e HOME container_name command
But I would like to pass all the shell's environment variables without explicitly specifying them individually. Essentially the equivalent of sudo -E, although it's a different thing.
According to the documentation, there is no such option. But one hack would be something like:
env > env_vars && docker exec -ti --env-file ./env_vars container_name command
Which works, but I'm looking for a simple one-step solution that doesn't involve creating a temporary file. Perhaps there's a bash trick I don't know or haven't thought of yet. Thanks.
Please note: Passing all environment variables is not recommended and defeats the purpose of container process isolation. This question is for knowledge, not about what should be done. Also, the question is specifically about running a temporary command in an existing container with docker exec, not about docker run.
With Bash, it seems process substitution works:
docker run --rm -ti --env-file <(env) alpine sh
Note that this creates a temporary FIFO file behind the scenes anyway.
Note also that this will not work properly with variables containing newlines; they are cut off at the newline. To handle those, you should do something along these lines (I tried to keep it short):
readarray -d '' -t args < <(env -0 | sed -z 's/^/--env\x00/')
docker run --rm -ti "${args[@]}" alpine sh
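To see what that pipeline actually builds, you can print the array elements, one per bracket pair (illustration only; FOO is a hypothetical variable containing a newline):
export FOO=$'a\nb'                # hypothetical variable whose value spans two lines
readarray -d '' -t args < <(env -0 | sed -z 's/^/--env\x00/')
printf '[%s]\n' "${args[@]}"
# each element prints inside its own [...]; FOO stays a single element even
# though its value spans two lines, so docker receives it intact as one --env argument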

Save output of bash command from Dockerfile after Docker container was launched

I have a Dockerfile with ubuntu image as a base.
FROM ubuntu
ARG var_name
ENV env_var_name=$var_name
ENTRYPOINT ["/bin/bash", "-c", "echo $env_var_name"]
What I expect from this:
executing a simple bash script, which takes an environment variable from user keyboard input and outputs that value after running the docker container. That part works.
(the part where I have a problem) saving the values of the environment variable to a file, so that after every run of docker run --rm -e env_var_name=%valueOfVar% IMAGE-NAME I can see the list of values entered from the keyboard.
My idea for part 2 was
docker run --rm -e env_var_name=%valueOfVar% IMAGE-NAME > /directory/tosave/values.txt. That works, but only the last value is saved, not a list of values.
How can I change the Dockerfile to save the values to a file which Docker will see, and from which it will read and output the values after running? Maybe I shouldn't use ENTRYPOINT?
I'd appreciate any possible help. I'm stuck.
To emphasize: both outputting and saving the environment variable values are expected.
Like @lojza hinted at, > overwrites files whereas >> appends to them, which is why your command is clobbering the file instead of adding to it. So you could fix it with this:
docker run --rm -e env_var_name=%valueOfVar% IMAGE-NAME >> /directory/tosave/values.txt
Or using tee(1):
docker run --rm -e env_var_name=%valueOfVar% IMAGE-NAME | tee -a /directory/tosave/values.txt
To clarify though, the docker container is not writing to values.txt; your shell is what is redirecting the output of the docker run command to the file. If you want the file to be written to by docker, you should mount a file or directory into the container using -v and redirect the output of the echo there. Here's an example:
FROM ubuntu
ARG var_name
ENV env_var_name=$var_name
ENTRYPOINT ["/bin/bash", "-c", "echo $env_var_name | tee -a /data/values.txt"]
And then run it like so:
$ docker run --rm -e env_var_name=test1 -v "$(pwd):/data:rw" IMAGE-NAME
test1
$ docker run --rm -e env_var_name=test2 -v "$(pwd):/data:rw" IMAGE-NAME
test2
$ ls -l values.txt
-rw-r--r-- 1 root root 12 May 3 15:11 values.txt
$ cat values.txt
test1
test2
One more thing worth mentioning: echo $env_var_name prints the value of the environment variable whose name is literally env_var_name. For example, if you run the container with -e env_var_name=PATH, it would print the literal string PATH and not the value of your $PATH environment variable. This does seem to be the desired outcome here, but it is worth spelling out explicitly.
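For instance (a hypothetical run of the image built above):
$ docker run --rm -e env_var_name=PATH -v "$(pwd):/data:rw" IMAGE-NAME
PATH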

Docker run bash --init-file

I'm trying to create an alias to help debug my docker containers.
I discovered bash accepts a --init-file option which ought to let us run some commands before passing over to interactive mode.
So I thought I could do
docker-bash() {
docker run --rm -it "$1" bash --init-file <(echo "ls; pwd")
}
But those commands don't appear to be running:
% docker-bash c7460dfcab50
root@9c6f64a9db8c:/#
Is it an escaping issue, or what's going on?
bash --init-file <(echo "ls; pwd")
Alone in a terminal on my host machine, it works as expected (it runs the commands, then starts an interactive bash instance).
In points:
The <(...) is a bash extension, process substitution.
From the manual: Process substitution is supported on systems that support named pipes (FIFOs) or the /dev/fd method of naming open files.
The process substitution works like this:
bash creates a fifo in /tmp or creates a new file descriptor in /dev/fd.
The filename, either /tmp/.something or /dev/fd/<number>, is substituted for <(...) when the command is executed.
So for example echo <(echo 1) outputs /dev/fd/63.
Docker works by creating a new environment that is separated from the host. That means that:
Processes inside docker do not inherit file descriptors from the host process:
So /dev/fd/* files are not inherited.
Processes inside docker access an isolated filesystem tree.
So processes can't access /tmp/* files from the host.
So, summarizing, docker run -ti --rm alpine cat <(echo 1) will not work, because the filename substituted for <(...) is not available from the docker environment.
An easy workaround would be to just:
docker run -ti --rm alpine sh -c 'ls; pwd; exec sh'
Or use a temporary file:
echo "ls; pwd" > /tmp/tempfile
docker run -v /tmp/tempfile:/tmp/tempfile bash bash --init-file /tmp/tempfile
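A slightly more robust variant of the same workaround, sketched with mktemp and a cleanup trap (the trap is my addition, not part of the original answer):
init_file=$(mktemp)                          # unique temp file on the host
trap 'rm -f "$init_file"' EXIT               # remove it when this shell exits
echo "ls; pwd" > "$init_file"
docker run --rm -it -v "$init_file:/tmp/init:ro" bash bash --init-file /tmp/init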
For my use-case I wanted to set an alias, which won't persist if we re-exec the shell. However, aliases can be written to ~/.bashrc, which will be reloaded on the subsequent exec. Ergo,
docker-bash() {
docker run --rm -it "$1" bash -c $'set -o xtrace; echo "alias ll=\'ls -lAhtrF --color=always\'" >> ~/.bashrc; exec "$0"'
}
This works. --rm should clean up any files we create anyway, if I understand properly how docker works.
Or perhaps this is a nicer way to write it:
docker-bash() {
read -r -d '' BASHRC << EOM
alias ll='ls -lAhtrF --color=always'
EOM
docker run --rm -it "$1" bash -c "echo \"$BASHRC\" >> ~/.bashrc; exec \"\$0\""
}

Source script on interactive shell inside Docker container

I want to open an interactive shell which sources a script to set up the bitbake environment in a repository that I bind mount:
docker run --rm -it \
--mount type=bind,source=$(MY_PATH),destination=/mnt/bb_repository \
my_image /bin/bash -c "cd /mnt/bb_repository/oe-core && source build/conf/set_bb_env.sh"
The problem is that the -it argument does not seem to have any effect, since the shell exits right after executing cd /mnt/bb_repository/oe-core && source build/conf/set_bb_env.sh
I also tried this:
docker run --rm -it \
--mount type=bind,source=$(MY_PATH),destination=/mnt/bb_repository \
my_image /bin/bash -c "cd /mnt/bb_repository/oe-core && source build/conf/set_bb_env.sh && bash"
This spawns an interactive shell, but none of the macros defined in set_bb_env.sh are available in it.
Would there be a way to provide a tty with the script properly sourced?
The -it flag conflicts with the command you're running, in that you're telling docker to create a pseudo-terminal (pty) and then running a command in that terminal (bash -c ...). When that command finishes, the run is done.
What some people have done to work around this is to only have exported variables in their sourced environment, and make the last command exec bash. But if you need aliases or other items that aren't inherited like that, then your options are a bit more limited.
Instead of running the source in a parent shell, you could run it in the target shell. If you modified your .bash_profile to include the following line:
[ -n "$DOCKER_LOAD_EXTRA" -a -r "$DOCKER_LOAD_EXTRA" ] && source "$DOCKER_LOAD_EXTRA”
and then had your command be:
... /bin/bash -c "cd /mnt/bb_repository/oe-core && DOCKER_LOAD_EXTRA=build/conf/set_bb_env.sh exec bash"
that may work. This tells your .bash_profile to load this file when the env variable is already set, but not otherwise. (There can also be the -e flag on the docker command line, but I think that sets it globally for the entire container, which is probably not what you want.)
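For completeness, the -e form mentioned above would look something like this (a sketch; as noted, it sets the variable for the entire container):
docker run --rm -it \
  -e DOCKER_LOAD_EXTRA=/mnt/bb_repository/oe-core/build/conf/set_bb_env.sh \
  my_image /bin/bash -l    # -l makes bash a login shell so it reads .bash_profile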

Run inline command with pipe in docker container [duplicate]

I'm trying to run MULTIPLE commands like this.
docker run image cd /path/to/somewhere && python a.py
But this gives me a "No such file or directory" error, because it is interpreted as:
"docker run image cd /path/to/somewhere" && "python a.py"
It seems that some escape characters like quotes or parentheses are needed.
So I also tried
docker run image "cd /path/to/somewhere && python a.py"
docker run image (cd /path/to/somewhere && python a.py)
but these didn't work.
I have searched the Docker Run Reference but have not found any hints about escape characters.
To run multiple commands in docker, use /bin/bash -c and a semicolon ;
docker run image_name /bin/bash -c "cd /path/to/somewhere; python a.py"
In case command2 (python) should be executed if and only if command1 (cd) returned a zero (no error) exit status, use && instead of ;:
docker run image_name /bin/bash -c "cd /path/to/somewhere && python a.py"
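To see why && matters here (an illustration with a deliberately wrong path; image_name is a placeholder):
docker run image_name /bin/bash -c "cd /nonexistent && python a.py"
# cd fails and python never runs; with ';' python would run anyway,
# from the wrong directory, producing more confusing errors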
You can do this a couple of ways:
Use the -w option to change the working directory:
-w, --workdir="" Working directory inside the container
https://docs.docker.com/engine/reference/commandline/run/#set-working-directory--w
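Applied to the question's example, that would look like:
docker run -w /path/to/somewhere image python a.py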
Pass the entire argument to /bin/bash:
docker run image /bin/bash -c "cd /path/to/somewhere; python a.py"
You can also pipe commands inside Docker container, bash -c "<command1> | <command2>" for example:
docker run img /bin/bash -c "ls -1 | wc -l"
But note that without invoking the shell inside the container, the pipe would be interpreted by your local shell instead and applied to the container's output on the local side.
bash -c works well if the commands you are running are relatively simple. However, if you're trying to run a long series of commands full of control characters, it can get complex.
I successfully got around this by piping my commands into the process from the outside, i.e.
cat script.sh | docker run -i <image> /bin/bash
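Note that -i is used without -t here, since a TTY cannot be allocated when stdin is a pipe. A hypothetical script.sh for the question's case:
#!/bin/bash
# script.sh -- the commands to run inside the container
cd /path/to/somewhere
python a.py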
Just to make a proper answer from @Eddy Hernandez's comment, which is very correct since Alpine comes with ash, not bash.
The question now refers to Starting a shell in the Docker Alpine container, which implies using sh or ash or /bin/sh or /bin/ash.
Based on the OP's question:
docker run image sh -c "cd /path/to/somewhere && python a.py"
If you want to store the result in one file outside the container, on your local machine, you can do something like this:
RES_FILE=$(readlink -f /tmp/result.txt)
docker run --rm -v ${RES_FILE}:/result.txt img bash -c "grep root /etc/passwd > /result.txt"
The result of your commands will be available in /tmp/result.txt on your local machine.
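One caveat worth adding (my note, not from the original answer): if the host file does not exist yet, Docker will create a directory at that path instead of a file, so create it first:
touch /tmp/result.txt   # ensure the bind-mount source exists as a regular file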
For anyone else who came here looking to do the same with docker-compose, you just need to prepend bash -c and enclose the multiple commands in quotes, joined together with &&.
So in the OP's example: docker-compose run image bash -c "cd /path/to/somewhere && python a.py"
If you don't mind the commands running in a subshell, put a set of parentheses around them inside a quoted shell invocation (bare parentheses would be parsed, and rejected, by your local shell):
docker run image sh -c "(cd /path/to/somewhere && python a.py)"
TL;DR;
$ docker run --entrypoint /bin/sh image_name -c "command1 && command2 && command3"
A concern regarding the accepted answer is below.
Nobody has mentioned that docker run image_name /bin/bash -c just appends a command to the entrypoint. Some popular images are smart enough to process this correctly, but some are not.
Imagine the following Dockerfile:
FROM alpine
ENTRYPOINT ["echo"]
If you build this image with the tag echo and run:
$ docker run echo /bin/sh -c date
you will get your command appended to the entrypoint, so the result is echo "/bin/sh -c date": the container just prints the literal string.
Instead, you need to override the entrypoint:
$ docker run --entrypoint /bin/sh echo -c date
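Putting it together (a sketch, assuming the Dockerfile above has been built with the tag echo):
$ docker build -t echo .
$ docker run echo /bin/sh -c date
/bin/sh -c date                      # the echo entrypoint printed its arguments
$ docker run --entrypoint /bin/sh echo -c date
# now /bin/sh actually runs 'date' and prints the current date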
Docker run reference
In case it's not obvious, if a.py always needs to run in a particular directory, create a simple wrapper script which does the cd and then runs the script.
In your Dockerfile, replace
CMD ["python", "a.py"]
or whatever with
CMD ["/wrapper"]
and create a script wrapper in your root directory (or wherever it's convenient for you) with contents like
#!/bin/sh
set -e
cd /path/to/somewhere
python a.py
In many situations, perhaps also consider rewriting a.py so that it doesn't need a wrapper. Either make it os.chdir() where it needs to be, or have it look for its data files in a directory you configure in its environment or similar.
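Another option that avoids the wrapper entirely is the Dockerfile's own WORKDIR directive (a sketch; the base image and paths are assumptions):
FROM python:3
WORKDIR /path/to/somewhere    # CMD and any subsequent RUN steps execute from here
COPY a.py .
CMD ["python", "a.py"]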
