How to run a bash terminal in a Docker container along with additional commands?

To run a bash terminal in a Docker container I can run the following:
$ docker exec -it <container> /bin/bash
However, I want to execute a command in the container automatically. For example, if I want to open a bash terminal in the container and create a file, I would expect to run something like:
docker exec -it <container> /bin/bash -c "touch foo.txt"
However, this doesn't work... Is there a simple way to achieve this? Of course, I could type the command after opening the container, but I want to open a bash terminal and run a command at the same time.

You can run your touch command and then spawn another shell (the exec bash replaces the -c shell with a fresh interactive bash, so the session stays open after touch runs):
docker exec -it <container> /bin/bash -c "touch foo.txt; exec bash"

This works perfectly fine for me:
~# docker run -tid --rm --name test ubuntu:20.04
~# docker exec -it test /bin/bash -c "touch /foo.txt"
~# docker exec -it test /bin/bash
root@b6b0efbb13be:/# ls -ltr foo.txt
-rw-r--r-- 1 root root 0 Mar 7 05:35 foo.txt
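If you need more than one setup command before the interactive shell, you can chain them the same way (a sketch against the same test container; the file names are just examples):
docker exec -it test /bin/bash -c "touch /foo.txt /bar.txt; exec bash"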

Easy solution:
docker exec -it <container> touch foo.txt
You can verify
docker exec -it <container> ls
This was tested with alpine image.
Remember that Docker images have an entrypoint and a command; here we are overriding the command of alpine's default entrypoint via docker exec.
Whether environment variables and $PATH are loaded depends on the entrypoint, so with other images you may need to write /bin/touch or /usr/bin/ls.
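For example, with the alpine image (a sketch; busybox installs both binaries under /bin):
docker exec -it <container> /bin/touch /tmp/foo.txt
docker exec -it <container> /bin/ls -l /tmp/foo.txt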
Good luck!

When you run docker exec -it <container> /bin/bash -c "touch foo.txt", the shell exits with code 0 as soon as the command finishes, which means the task is done and you are returned to your host.
When you run docker exec -it <container> /bin/bash, the bash shell is not terminated until you explicitly type exit or press CTRL+D; bash keeps running.
This is why the -c form goes into bash, runs your command (the touch), and then exits, while the plain form stays open.
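You can observe the exit code from the host (a sketch; $? holds the exit status of the last command):
docker exec <container> /bin/bash -c "touch foo.txt"
echo $?   # prints 0 once the touch has finished and the exec has returned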

Related

Running docker from bash script

I am using a tool (gatk) distributed as a docker image and am trying to use its commands in a shell script.
I run the container in detached mode:
sudo docker run --name my_container -d -v ~/test:/gatk/data -it broadinstitute/gatk:4.1.9.0
Then I run the commands from a shell script:
#!/bin/bash
docker exec my_container gatk command1
wait
docker exec my_container gatk command2
command2 needs input from command1, so I use wait, but command2 still starts before command1 has finished.
I also tried
#!/bin/bash
docker exec my_container gatk command1
docker wait my_container
docker exec my_container gatk command2
but then the script does not continue running after command1 is completed.
I managed to solve it. The problem was that when I ran docker exec I did not tell it to accept input from the shell. Adding the -i flag to docker exec solved the problem. Here is the full solution.
I start the container in detached mode:
sudo docker run --name my_container -d -v ~/test:/gatk/data -it broadinstitute/gatk:4.1.9.0
Now I can close the terminal, the docker container is up and running and I can use it in a new terminal.
I generate a bash script called myscript.sh with the following code:
#!/bin/bash
docker exec -i my_container gatk command1
wait
docker exec -i my_container gatk command2
I run the script, disown it, and close the terminal:
./myscript.sh & disown; exit
You can run both commands in a single shot:
docker run image /bin/bash -c "gatk command1 && gatk command2"
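The same chaining also works against the already-running container from the question (a sketch, assuming the image ships bash; && makes command2 start only if command1 succeeds):
docker exec my_container bash -c "gatk command1 && gatk command2"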

How to use docker container's env variable while running docker exec command

I am running a docker exec -it ... command and I need to use an environment variable of my docker container. An example:
docker exec -it container_id command_here param_1 $param_2_as_env_variable
In the case above, it pulls param_2_as_env_variable from the host machine, not from the docker container. Is it possible to use an env variable from the container itself while running the docker exec ... command from another machine?
Update: I can use the output of docker exec -it container_id printenv | grep .... But I couldn't separate the value from the key. How can I get only the value here?
Something like this could work (it assumes the container has a shell installed):
docker exec -it container_id sh -c 'command_here param_1 $param_2_as_env_variable'
For example, the following works:
docker exec -it test sh -c 'echo $HOSTNAME'
to give the host name of the container.
You need to escape the variable to pass it to the docker exec command unresolved:
Try:
docker exec -it container_id command_here param_1 \$param_2_as_env_variable
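Regarding the update about printenv: passing the variable name to printenv prints only its value, with no key (a sketch, assuming the variable is set in the container):
docker exec -it container_id printenv param_2_as_env_variable
# or strip the key from the key=value output:
docker exec -it container_id printenv | grep param_2_as_env_variable | cut -d= -f2-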

How to bash into a docker container

Trying to bash into a container and run a for loop which simply performs a command (which works on a single file, by the way). It even seems to echo the right command... what did I forget?
for pdf in *.pdf ;
do
docker run --rm -v "$(pwd):/home/docker" leofcardoso/pdf2pdfocr -g jpeg2000 -v -i '\'''$pdf''\''';
done
You can bash into a container with these commands:
To see the docker container id:
docker container ls
To enter bash inside a container:
docker exec -it CONTAINER_ID bash
First thing: you are not allocating a tty in the docker run command, and the docker container dies soon after converting the files. Here is the main process of the container:
#!/bin/bash
cd /home/docker
exec pdf2pdfocr.py "$@"
So in this case, the life of this container is the life of the exec pdf2pdfocr.py "$@" command.
As mentioned by @Fra, override the entrypoint and run the command manually.
docker run --rm -v "$(pwd):/home/docker" -it --entrypoint /bin/bash leofcardoso/pdf2pdfocr
but with the above run command the container will not do anything on its own: it will just allocate the tty and open bash. You can then convert files inside the container by running pdf2pdfocr.py -g jpeg2000 -v -i mypdf.pdf in that shell (or via docker exec).
So, if you want to run it with the overridden entrypoint non-interactively, you can try:
docker run -it --rm --entrypoint /bin/bash -v "$(pwd):/home/docker" leofcardoso/pdf2pdfocr -c "pdf2pdfocr.py -g jpeg2000 -v -i mypdf.pdf"
or with a bash script:
#!/bin/bash
for pdf in *.pdf ;
do
echo "converting $pdf"
docker run -it --rm --entrypoint /bin/bash -v "$(pwd):/home/docker" leofcardoso/pdf2pdfocr -c "pdf2pdfocr.py -g jpeg2000 -v -i $pdf"
done
But the container will die after completing the conversion.
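To avoid paying the container start-up cost for every PDF, an alternative (a sketch; the ocr container name and the tail -f /dev/null keep-alive trick are my assumptions, not part of the original answers) is to keep one long-lived container and exec each conversion into it:
docker run -d --name ocr --entrypoint tail -v "$(pwd):/home/docker" leofcardoso/pdf2pdfocr -f /dev/null
for pdf in *.pdf; do
  docker exec -w /home/docker ocr pdf2pdfocr.py -g jpeg2000 -v -i "$pdf"
done
docker rm -f ocr   # the keep-alive container never exits on its own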

Docker exec quoting variables

I'd like to know if there's a way to do this
Let's say the dockerfile contains this line, that specifies path of an executable
ENV CLI /usr/local/bin/myprogram
I'd like to be able to call this program using ENV variable name through exec command.
For example
docker exec -it <my container> 'echo something-${CLI}'
Expecting
something-/usr/local/bin/myprogram
However that returns:
OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "exec: \"${CLI} do something\": executable file not found in $PATH": unknown
OK, I found a way to do it: all you need to do is evaluate the command with bash:
docker exec -it <container id> bash -c 'echo something-${CLI}'
returns something-/usr/local/bin/myprogram
If the CLI environment variable is not already set in the container, you can also pass it in, for example:
docker exec -it -e CLI=/usr/local/bin/myprogram <container id> bash -c 'echo something-${CLI}'
See the help file:
docker exec --help
Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
Run a command in a running container
Options:
-d, --detach Detached mode: run command in the background
-e, --env list Set environment variables
....
In its original revision, docker exec -it <my container> '${CLI} do something' (with the expectation that ${CLI} will be substituted with /usr/local/bin/myprogram as the exec COMMAND and everything after passed as ARGs to /usr/local/bin/myprogram) will not work; this is clearly documented: https://docs.docker.com/engine/reference/commandline/exec/
COMMAND should be an executable, a chained or a quoted command will not work. Example:
docker exec -ti my_container "echo a && echo b" will not work, but
docker exec -ti my_container sh -c "echo a && echo b" will.
Following the documentation, this will work as expected: docker exec -ti my_container sh -c '${CLI} foo'. With single quotes, ${CLI} is expanded by the shell inside the container and the argument(s) are passed to the program set in ${CLI} (e.g. sh -c '/usr/local/bin/myprogram foo').
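The quoting difference is easy to demonstrate (a sketch; echo stands in for ${CLI}):
docker exec -ti my_container sh -c 'echo ${CLI}'   # single quotes: expanded by sh inside the container
docker exec -ti my_container sh -c "echo ${CLI}"   # double quotes: expanded by your host shell before docker runs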
Alternatively, you could set the ENTRYPOINT to your script and pass in arguments with CMD, or at the command line with docker run. For example:
Given the below directory structure:
.
├── Dockerfile
└── example.sh
The Dockerfile contents:
FROM ubuntu:18.04
COPY example.sh /bin
RUN chmod u+x /bin/example.sh
ENTRYPOINT ["/bin/example.sh"]
CMD ["bla"]
And the example.sh script contents:
#!/bin/bash
echo $1
The CMD specified in the Dockerfile after the ENTRYPOINT will be the default argument for your script and you can override the default argument on the command line (assuming that the image is built and tagged as example:0.1):
user@host> docker run --rm example:0.1
bla
user@host> docker run --rm example:0.1 "arbitrary text"
arbitrary text
Note: this is my go-to article on the differences between ENTRYPOINT and CMD in Dockerfiles: https://medium.freecodecamp.org/docker-entrypoint-cmd-dockerfile-best-practices-abc591c30e21

Docker exec Requires minimum of 2 arguments

I am using a shell script on Linux in order to execute some Docker commands :
docker exec -t -i test1 passwd
...
docker exec -t -i test2 passwd
And on the second exec command I receive the following error :
docker: "exec" requires a minimum of 2 arguments.
What am I doing wrong, or what am I missing?
Thank you in advance.
I have had the same error:
docker exec -it gallant_bose
C:\Program Files\Docker Toolbox\docker.exe: "exec" requires a minimum of 2 arguments.
See 'C:\Program Files\Docker Toolbox\docker.exe exec --help'.
Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
Run a command in a running container
The solution was to add a command, bash in my case:
$ docker exec -it gallant_bose bash
root@e747ffecc84d:/#
Best wishes!
Update
Also, for some images you need to execute docker exec -it gallant_bose /bin/bash instead.
Are you sure that test2 exists?
I don't see any error in your command. If the problem persists, can you provide the docker ps and docker images output, please?
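A quick way to check that both containers exist and are running (a sketch using docker ps name filters; multiple name filters are ORed together):
docker ps --filter "name=test1" --filter "name=test2"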
