I would like to write a bash script that automates the following:
Get inside running container
docker exec -it CONTAINER_NAME /bin/bash
Execute some commands:
cat /dev/null > /usr/local/tomcat/logs/app.log
exit
The problematic part is the docker exec: a new interactive shell is created, but the remaining commands in the script are never executed inside it.
Is there a way to solve it?
You can use a heredoc with the docker exec command:
docker exec -i CONTAINER_NAME bash <<'EOF'
cat /dev/null > /usr/local/tomcat/logs/app.log
exit
EOF
To use variables:
logname='/usr/local/tomcat/logs/app.log'
then use it like this:
docker exec -i CONTAINER_NAME bash <<EOF
cat /dev/null > "$logname"
exit
EOF
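Note the quoting difference: with <<'EOF' the body is passed verbatim, while with an unquoted EOF the host shell expands variables before the script reaches the container. If something should instead be evaluated inside the container, escape its dollar sign. A minimal sketch (the $HOSTNAME echo is purely illustrative):
docker exec -i CONTAINER_NAME bash <<EOF
cat /dev/null > "$logname"       # expanded on the host
echo "cleared on \$HOSTNAME"     # expanded inside the container
EOF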
You can simply launch a single command, but note that the redirection has to happen inside the container, so wrap it in a shell there (otherwise the > is processed by the host shell and truncates a file on the host):
docker exec container_id sh -c 'cat /dev/null > /usr/local/tomcat/logs/app.log'
Related
I am trying to write a single-line command to run a shell script that lives inside the pod.
Getting a shell for the running container:
kubectl exec -it test-pod -c test-container -- /bin/bash
Changing to the directory in the container:
cd test/bin
Running the script inside bin:
./backup.sh
How do I write all of this as a single command?
Try:
kubectl exec -it test-pod -c test-container -- sh /full/path/to/the/backup.sh
Try:
kubectl exec -it test-pod -c test-container -- /bin/bash -c "/path/to/backup-script.sh"
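If the script has to be started from its own directory, as in the cd test/bin step above, you can chain the commands inside the -c string; the paths are taken from the question:
kubectl exec -it test-pod -c test-container -- /bin/bash -c "cd test/bin && ./backup.sh"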
I have a shell script my-script.sh like:
#!/bin/bash
while true; do
    echo '1'
done
I can deploy a bash pod in Kubernetes like:
kubectl run my-shell --rm -it --image bash -- bash
Now I want to execute the script in that bash. How can I pass my-script.sh as input to bash? Something like:
kubectl run my-shell --rm -it --image bash -- /bin/bash -c < my-script.sh
Just drop the -t from kubectl run (because you're reading from stdin, not a terminal) and the -c from bash (because you're passing the script on stdin, not as an argument):
$ kubectl run my-shell --rm -i --image docker.io/bash -- bash < my-script.sh
If you don't see a command prompt, try pressing enter.
1
1
1
1
...
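The same stdin technique also works against a pod that is already running, via kubectl exec (again with -i and without -t); the pod name here is just illustrative:
kubectl exec -i my-shell -- bash < my-script.sh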
I am using a tool (gatk) distributed as a Docker image and am trying to use its commands in a shell script.
I run the container in detached mode:
sudo docker run --name my_container -d -v ~/test:/gatk/data -it broadinstitute/gatk:4.1.9.0
Then I run the commands from a shell script:
#!/bin/bash
docker exec my_container gatk command1
wait
docker exec my_container gatk command2
command2 needs the output of command1, so I use wait, but command2 is still executed before command1 has finished.
I also tried
#!/bin/bash
docker exec my_container gatk command1
docker wait my_container
docker exec my_container gatk command2
but then the script does not continue running after command1 is completed.
I managed to solve it. The problem was that when I ran docker exec I did not set it up to receive input from the shell. Adding the -i flag to docker exec solved the problem. Here is the full solution.
I start docker in detached mode
sudo docker run --name my_container -d -v ~/test:/gatk/data -it broadinstitute/gatk:4.1.9.0
Now I can close the terminal; the container stays up and running, and I can use it from a new terminal.
I create a bash script called myscript.sh with the following code:
#!/bin/bash
docker exec -i my_container gatk command1
wait
docker exec -i my_container gatk command2
I run the script, disown it, and close the terminal:
./myscript.sh & disown; exit
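If your login shell sends SIGHUP to background jobs on logout, nohup is an alternative to disown; the log path here is only an example:
nohup ./myscript.sh > myscript.log 2>&1 &
exit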
You can run both commands in a single shot:
docker run image /bin/bash -c "gatk command1 && gatk command2"
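Since the question already has the container running in detached mode, the same chaining also works through docker exec; the && ensures command2 only starts after command1 succeeds:
docker exec -i my_container /bin/bash -c "gatk command1 && gatk command2"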
To run a bash terminal in a Docker container I can run the following:
$ docker exec -it <container> /bin/bash
However, I want to execute a command in the container automatically. For example, if I want to open a bash terminal in the container and create a file I would expect to run something like:
docker exec -it <container> /bin/bash -c "touch foo.txt"
However, this doesn't work... Is there a simple way to achieve this? Of course, I could type the command after opening the container, but I want to open a bash terminal and run a command at the same time.
You can run your touch command and then exec another shell, which keeps you inside the container:
docker exec -it <container> /bin/bash -c "touch foo.txt; exec bash"
This works perfectly fine for me:
~# docker run -tid --rm --name test ubuntu:20.04
~# docker exec -it test /bin/bash -c "touch /foo.txt"
~# docker exec -it test /bin/bash
root@b6b0efbb13be:/# ls -ltr foo.txt
-rw-r--r-- 1 root root 0 Mar 7 05:35 foo.txt
Easy solution:
docker exec -it <container> touch foo.txt
You can verify it:
docker exec -it <container> ls
This was tested with the alpine image.
Remember that Docker images have an entrypoint and a command; here we are overriding the command of the default entrypoint for alpine via docker exec.
Whether environment variables and $PATH are loaded depends on the entrypoint, so with other images you may need to use full paths such as /bin/touch or /usr/bin/ls.
Good luck!
When you run docker exec -it <container> /bin/bash -c "touch foo.txt", bash executes the command, the container returns exit code 0 to signal that the task is done, and you are returned to your host.
When you run docker exec -it <container> /bin/bash, the bash shell is not terminated until you explicitly type exit or press CTRL+D; bash keeps running interactively.
This is why the -c form drops into bash, runs your command (touch), and then exits immediately.
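You can observe this from the host by checking the exit status; a quick sketch, assuming the test container created in the answer above:
docker exec test /bin/bash -c "touch /foo.txt"
echo $?    # prints 0: bash ran touch and exited immediately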
I'm trying to create an alias to help debug my docker containers.
I discovered bash accepts a --init-file option which ought to let us run some commands before passing over to interactive mode.
So I thought I could do
docker-bash() {
    docker run --rm -it "$1" bash --init-file <(echo "ls; pwd")
}
But those commands don't appear to be running:
% docker-bash c7460dfcab50
root#9c6f64a9db8c:/#
Is it an escaping issue, or... what's going on?
bash --init-file <(echo "ls; pwd")
Alone in a terminal on my host machine this works as expected (it runs the commands and starts a new bash instance).
In points:
The <(...) is a bash extension called process substitution.
From the manual: Process substitution is supported on systems that support named pipes (FIFOs) or the /dev/fd method of naming open files.
The process substitution works like this:
bash creates a FIFO in /tmp or a new file descriptor in /dev/fd.
The filename, either the /tmp path or /dev/fd/<number>, is substituted for <(...) when the command is executed.
So for example echo <(echo 1) outputs /dev/fd/63.
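You can see this on the host (the descriptor number may differ):
$ echo <(echo 1)     # prints the substituted filename, not the data
/dev/fd/63
$ cat <(echo 1)      # cat opens that file and reads the data
1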
Docker works by creating a new environment that is separated from the host. That means that:
Processes inside docker do not inherit file descriptors from the host process:
So /dev/fd/* files from the host are not available.
Processes inside docker access an isolated filesystem tree:
So processes can't access the host's /tmp/* files.
So, summarizing: docker run -ti --rm alpine cat <(echo 1) will not work, because the filename substituted for <(...) is not available inside the docker environment.
An easy workaround would be to just:
docker run -ti --rm alpine sh -c 'ls; pwd; exec sh'
Or use a temporary file:
echo "ls; pwd" > /tmp/tempfile
docker run -v /tmp/tempfile:/tmp/tempfile bash bash --init-file /tmp/tempfile
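The same temporary-file idea can be folded back into the original docker-bash helper; a sketch, assuming the target image ships bash:
docker-bash() {
    local initfile
    initfile=$(mktemp)                 # host-side init script
    echo "ls; pwd" > "$initfile"
    docker run --rm -it -v "$initfile":/tmp/init:ro "$1" bash --init-file /tmp/init
    rm -f "$initfile"
}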
For my use case I wanted to set an alias, which won't persist if we re-exec the shell. However, aliases can be written to ~/.bashrc, which will be reloaded on the subsequent exec. Ergo,
docker-bash() {
    docker run --rm -it "$1" bash -c $'set -o xtrace; echo "alias ll=\'ls -lAhtrF --color=always\'" >> ~/.bashrc; exec "$0"'
}
This works. --rm should clean up any files we create anyway, if I understand correctly how docker works.
Or perhaps this is a nicer way to write it:
docker-bash() {
    read -r -d '' BASHRC << EOM
alias ll='ls -lAhtrF --color=always'
EOM
    docker run --rm -it "$1" bash -c "echo \"$BASHRC\" >> ~/.bashrc; exec \"\$0\""
}
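Usage is then simply (the image name is illustrative; it needs bash and a readable ~/.bashrc):
docker-bash ubuntu:20.04    # the ll alias is then defined in the resulting shell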