docker run -i -t image /bin/bash - source files first - bash

This works:
# echo 1 and exit:
$ docker run -i -t image /bin/bash -c "echo 1"
1
# exit
# echo 1 and return shell in docker container:
$ docker run -i -t image /bin/bash -c "echo 1; /bin/bash"
1
root@4c064f2554de:/#
Question: How could I source a file into the shell? (this does not work)
$ docker run -i -t image /bin/bash -c "source <(curl -Ls git.io/apeepg) && /bin/bash"
# content from http://git.io/apeepg is sourced and shell is returned
root@4c064f2554de:/#

In my case, I use the RUN source command (which will run using /bin/bash) in a Dockerfile to install nvm for Node.js.
Here is an example.
FROM ubuntu:14.04
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
...
...
RUN source ~/.nvm/nvm.sh && nvm install 0.11.14

I wanted something similar, and expanding a bit on your idea, came up with the following:
docker run -ti --rm ubuntu \
bash -c 'exec /bin/bash --rcfile /dev/fd/1001 \
1002<&0 \
<<<$(echo PS1=it_worked: ) \
1001<&0 \
0<&1002'
--rcfile /dev/fd/1001 will use that file descriptor's contents instead of .bashrc
1002<&0 saves stdin
<<<$(echo PS1=it_worked: ) puts PS1=it_worked: on stdin
1001<&0 moves this stdin to fd 1001, which we use as rcfile
0<&1002 restores the stdin that we saved initially
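For comparison, a more compact sketch of the same idea (not from the original answer), relying on process substitution inside the container's outer bash:
docker run -ti --rm ubuntu \
bash -c 'bash --rcfile <(echo "PS1=it_worked: ")'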

You can use .bashrc in interactive containers:
RUN curl -O git.io/apeepg.sh && \
echo 'source apeepg.sh' >> ~/.bashrc
Then just run as usual with docker run -it --rm some/image bash.
Note that this will only work with interactive containers.

I don't think you can do this, at least not right now. What you could do is modify your image, and add the file you want to source, like so:
FROM image
ADD my-file /my-file
RUN ["source", "/my-file", "&&", "/bin/bash"]

Related

How not to terminate after carrying out commands in bash

After carrying out commands with the -c option of bash, how can I make the terminal wait for input while preserving the environment?
Like CMD /K *** or pwsh -NoExit -Command ***.
From a comment by Cyrus:
You can achieve something similar by abusing the --rcfile option:
bash --rcfile <(echo "export PS1='> ' && ls")
From bash manpage:
--rcfile file
Execute commands from file instead of the system wide initialization file /etc/bash.bashrc and the standard personal initialization file ~/.bashrc if the shell is interactive
This is the answer I was looking for. Thank you!!
As an example, I use the following method to use the latest Docker image with my preferred package repository without building a new image:
# Call bash in the container from bash
docker run --rm -it ubuntu:22.04 bash -c "bash --rcfile <(echo 'sed -i -E '\''s%^(deb(-src|)\s+)https?://(archive|security)\.ubuntu\.com/ubuntu/%\1http://mirrors.xtom.com/ubuntu/%'\'' /etc/apt/sources.list && apt update && FooBar=`date -uIs`')"
# ... from pwsh
docker run --rm -it ubuntu:22.04 bash -c "bash --rcfile <(echo 'sed -i -E '\''s%^(deb(-src|)\s+)https?://(archive|security)\.ubuntu\.com/ubuntu/%\1http://mirrors.xtom.com/ubuntu/%'\'' /etc/apt/sources.list && apt update && FooBar=``date -uIs``')"
# Call dash (BusyBox ash) in the container from bash
docker run --rm -it alpine:latest ash -c "ash -c 'export ENV=\$1;ash' -s <(echo 'sed -i -E '\''s%^https?://dl-cdn\.alpinelinux\.org/alpine/%https://ftp.udx.icscoe.jp/Linux/alpine/%'\'' /etc/apk/repositories && apk update && FooBar=`date -uIs`')"
# ... from pwsh
docker run --rm -it alpine:latest ash -c "ash -c 'export ENV=`$1;ash' -s <(echo 'sed -i -E '\''s%^https?://dl-cdn\.alpinelinux\.org/alpine/%https://ftp.udx.icscoe.jp/Linux/alpine/%'\'' /etc/apk/repositories && apk update && FooBar=``date -uIs``')"

Impossible to start a script in my Docker container

I am trying to create my own image based on CentOS.
I don't understand why, when I use the CMD instruction in my Dockerfile to execute a script at startup, it's impossible to keep my image running (it exits with Exited (0) immediately).
If I build without the CMD instruction and then connect to the container and execute sh /opt/jbossEAP/Mock/scripts/mock_start.sh, I have no issue.
I have tried to use the ENTRYPOINT instruction, but I get the same result :(
FROM centos:7
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]
RUN yum update -y
RUN mkdir -p /opt/jbossEAP/Mock/scripts/
ADD ./scripts /opt/jbossEAP/Mock/scripts/
RUN chmod +x /opt/jbossEAP/Mock/scripts/mock_start.sh
### START SCRIPT ###
CMD sh /opt/jbossEAP/Mock/scripts/mock_start.sh
mock_start.sh
#!/bin/sh
############################################
echo "hello"
I suspect your CMD or ENTRYPOINT does work, but that the container simply finishes after outputting "hello".
You can check your container's output even after it has stopped with:
docker logs <container-id>
Read https://stackoverflow.com/a/28214133/4486184 for more information on how it works and how to avoid that.
My guesses could be wrong, so please always add to your question:
how you start your Docker image
the output of docker ps -a
the relevant part of the output of docker logs <container-id>
You're right!!!
I just added the following, and now it's OK.
CMD sh /opt/jbossEAP/Mock/scripts/mock_start.sh && tail -f /dev/null
Thank you very much
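A variation on that fix keeps the long-running command inside the script itself, so the CMD stays unchanged (a sketch; in a real image the tail would typically be replaced by the actual foreground service):
#!/bin/sh
echo "hello"
# keep PID 1 alive so the container does not exit immediately
exec tail -f /dev/null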

Run inline command with pipe in docker container [duplicate]

I'm trying to run MULTIPLE commands like this.
docker run image cd /path/to/somewhere && python a.py
But this gives me a "No such file or directory" error because it is interpreted as...
"docker run image cd /path/to/somewhere" && "python a.py"
It seems that some ESCAPE characters like "" or () are needed.
So I also tried
docker run image "cd /path/to/somewhere && python a.py"
docker run image (cd /path/to/somewhere && python a.py)
but these didn't work.
I have searched the Docker Run Reference but have not found any hints about ESCAPE characters.
To run multiple commands in Docker, use /bin/bash -c and a semicolon ;
docker run image_name /bin/bash -c "cd /path/to/somewhere; python a.py"
If command2 (python) should be executed only if command1 (cd) returned a zero (no error) exit status, use && instead of ;
docker run image_name /bin/bash -c "cd /path/to/somewhere && python a.py"
You can do this a couple of ways:
Use the -w option to change the working directory:
-w, --workdir="" Working directory inside the container
https://docs.docker.com/engine/reference/commandline/run/#set-working-directory--w
Pass the entire argument to /bin/bash:
docker run image /bin/bash -c "cd /path/to/somewhere; python a.py"
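For the -w route from the first option, the equivalent invocation looks like this (a sketch using the OP's paths):
docker run -w /path/to/somewhere image python a.py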
You can also pipe commands inside a Docker container with bash -c "<command1> | <command2>", for example:
docker run img /bin/bash -c "ls -1 | wc -l"
But without invoking the shell inside the container, the pipe is interpreted by your local shell and the output ends up redirected to the local terminal.
bash -c works well if the commands you are running are relatively simple. However, if you're trying to run a long series of commands full of control characters, it can get complex.
I successfully got around this by piping my commands into the process from the outside, i.e.
cat script.sh | docker run -i <image> /bin/bash
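For example (a sketch; script.sh simply holds the commands you would otherwise pass to bash -c):
$ cat script.sh
cd /path/to/somewhere
python a.py
$ cat script.sh | docker run -i image /bin/bash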
Just to make a proper answer from @Eddy Hernandez's comment, which is very correct since Alpine comes with ash, not bash.
The question now refers to Starting a shell in the Docker Alpine container, which implies using sh, ash, /bin/sh or /bin/ash.
Based on the OP's question:
docker run image sh -c "cd /path/to/somewhere && python a.py"
If you want to store the result in a file outside the container, on your local machine, you can do something like this.
RES_FILE=$(readlink -f /tmp/result.txt)
docker run --rm -v ${RES_FILE}:/result.txt img bash -c "grep root /etc/passwd > /result.txt"
The result of your commands will be available in /tmp/result.txt on your local machine.
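One caveat (not part of the original answer): if /tmp/result.txt does not exist on the host yet, Docker creates a directory at that path rather than a file, so create the file first:
touch "${RES_FILE}"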
For anyone else who came here looking to do the same with docker-compose, you just need to prepend bash -c and enclose the multiple commands in quotes, joined together with &&.
So in the OP's example: docker-compose run image bash -c "cd /path/to/somewhere && python a.py"
If you don't mind the commands running in a subshell, just put a set of parentheses around the multiple commands inside the bash -c string:
docker run image bash -c "(cd /path/to/somewhere && python a.py)"
TL;DR;
$ docker run --entrypoint /bin/sh image_name -c "command1 && command2 && command3"
A concern regarding the accepted answer is below.
Nobody has mentioned that docker run image_name /bin/bash -c just appends a command to the entrypoint. Some popular images are smart enough to process this correctly, but some are not.
Imagine the following Dockerfile:
FROM alpine
ENTRYPOINT ["echo"]
If you try building it as echo and running:
$ docker run echo /bin/sh -c date
Your command gets appended to the entrypoint, so what actually runs is echo "/bin/sh -c date".
Instead, you need to override the entrypoint:
$ docker run --entrypoint /bin/sh echo -c date
Docker run reference
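For clarity, the two invocations then behave roughly like this (a sketch, assuming the image above was built with -t echo; the date shown is only sample output):
$ docker run echo /bin/sh -c date
/bin/sh -c date
$ docker run --entrypoint /bin/sh echo -c date
Tue Jan  2 03:04:05 UTC 2024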
In case it's not obvious, if a.py always needs to run in a particular directory, create a simple wrapper script which does the cd and then runs the script.
In your Dockerfile, replace
CMD ["python", "a.py"]
or whatever with
CMD ["/wrapper"]
and create a script wrapper in your root directory (or wherever it's convenient for you) with contents like
#!/bin/sh
set -e
cd /path/to/somewhere
python a.py
In many situations, it is also worth considering rewriting a.py so that it doesn't need a wrapper: either make it os.chdir() to where it needs to be, or have it look for its data files in a directory you configure in its environment, or similar.

How can I set Bash aliases for docker containers in Dockerfile?

I am new to Docker. I found that we can set environment variables using the ENV instruction in the Dockerfile. But how does one set Bash aliases for long commands in Dockerfile?
Basically like you always do, by adding it to the user's .bashrc file:
FROM foo
RUN echo 'alias hi="echo hello"' >> ~/.bashrc
As usual this will only work for interactive shells:
docker build -t test .
docker run -it --rm --entrypoint /bin/bash test hi
/bin/bash: hi: No such file or directory
docker run -it --rm test bash
$ hi
hello
For non-interactive shells you should create a small script and put it in your path, e.g.:
RUN echo -e '#!/bin/bash\necho hello' > /usr/bin/hi && \
chmod +x /usr/bin/hi
If your alias uses parameters (i.e. hi Jim -> hello Jim), just add "$@":
RUN echo -e '#!/bin/bash\necho hello "$@"' > /usr/bin/hi && \
chmod +x /usr/bin/hi
To create an alias of an existing command, you might also use ln -s:
ln -s $(which <existing_command>) /usr/bin/<my_command>
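For instance (a sketch; the name py is purely illustrative):
ln -s $(which python3) /usr/bin/py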
If you want to use aliases just in the Dockerfile, but not inside the container, then the shortest way is the ENV declaration:
ENV update='apt-get update -qq'
ENV install='apt-get install -qq'
RUN $update && $install apt-utils \
curl \
gnupg \
python3.6
And for use inside a container, the approach is the one already described:
RUN printf '#!/bin/bash \n $(which apt-get) install -qq $@' > /usr/bin/install
RUN chmod +x /usr/bin/install
Most of the time I use aliases just in the building stage and do not go inside containers, so the first example is quicker, clearer and simpler for everyday use.
I just added this to my app.dockerfile file:
# Set up aliases
ADD ./bashrc_alias.sh /usr/sbin/bashrc_alias.sh
ADD ./initbash_profile.sh /usr/sbin/initbash_profile
RUN chmod +x /usr/sbin/initbash_profile
RUN /bin/bash -c "/usr/sbin/initbash_profile"
And inside the initbash_profile.sh file, which just appends my custom aliases (there is no need to source the .bashrc file):
# Add the Bash aliases
cat /usr/sbin/bashrc_alias.sh >> ~/.bashrc
It worked a treat!
Another option is to just use docker exec -it <container-name> <command> from outside the container and use your own .bashrc or .bash_profile file (whatever you prefer).
E.g.,
docker exec -it docker_app_1 bash
I think the easiest way would be to mount a file into your container containing your aliases, and then specify where Bash should find it:
docker run \
-it \
--rm \
-v ~/.bash_aliases:/tmp/.bash_aliases \
[image] \
/bin/bash --init-file /tmp/.bash_aliases
Sample usage:
echo 'alias what="echo it works"' > ~/my_aliases
docker run -it --rm -v ~/my_aliases:/tmp/my_aliases ubuntu:18.04 /bin/bash --init-file /tmp/my_aliases
alias
Output:
alias what='echo it works'
what
Output:
it works
You can use ENTRYPOINT, but note that it will not work for aliases (exported functions work instead). In your Dockerfile:
ADD dev/entrypoint.sh /opt/entrypoint.sh
ENTRYPOINT ["/opt/entrypoint.sh"]
Your entrypoint.sh file:
#!/bin/bash
set -e
function dev_run()
{
    # ... your command(s) for the shortcut go here ...
    :
}
export -f dev_run
exec "$#"
Here is a Bash function to have your aliases in every container you use interactively.
ducker_it() {
docker cp ~/bin/alias.sh "$1":/tmp
docker exec -it "$1" /bin/bash -c "[[ ! -f /tmp/alias.sh.done ]] \
&& [[ -w /root/.bashrc ]] \
&& cat /tmp/alias.sh >> /root/.bashrc \
&& touch /tmp/alias.sh.done"
docker exec -it "$1" /bin/bash
}
Required step before:
grep ^alias ~/.zshrc > ~/bin/alias.sh
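Usage is then simply (a sketch; mycontainer stands for any running container name or ID):
ducker_it mycontainer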
I used some of the previous solutions, but the aliases were still not recognised.
I'm trying to set aliases and use them both within later Dockerfile steps and in the container at runtime.
RUN echo "alias model-downloader='python3 ${MODEL_DL_PATH}/downloader.py'" >> ~/.bash_aliases && \
echo "alias model-converter='python3 ${MODEL_DL_PATH}/converter.py'" >> ~/.bash_aliases && \
source ~/.bash_aliases
# Download the model
RUN model-downloader --name $MODEL_NAME -o $MODEL_DIR --precisions $MODEL_PRECISION;
The solution for me was to use ENV variables that held folder paths and then add the exact executable. I could have used ARG too, but for most of my scenarios I needed the aliases both in the build stage and later at runtime.
I used the ENV variables in conjunction with a Bash script that runs once dependencies have responded ("ponged"), sources the Bash settings, sets some more environment variables, and allows further commands to pipe through.
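A minimal sketch of that ENV-plus-explicit-executable approach (the path value is illustrative, not from the original setup):
ENV MODEL_DL_PATH=/opt/model_tools
# call the interpreter and the full script path directly; no alias is needed at build time or runtime
RUN python3 ${MODEL_DL_PATH}/downloader.py --name ${MODEL_NAME} -o ${MODEL_DIR} --precisions ${MODEL_PRECISION}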
@ErikDannenberg's answer did the trick, but in my case, some adjustments were needed.
It didn't work with aliases because apparently there's an issue with interactive shells.
I reached for his second solution, but it still didn't really work. I checked existing shell scripts in my project and noticed the shebang (first line = #!/usr/bin/env sh) differs a bit from #!/usr/bin/bash. After changing it accordingly, it started working for my t and tc "aliases", but I had to use the addendum to his second solution to get tf to work.
Here's the complete Dockerfile
FROM php:8.1.1-fpm-alpine AS build
RUN apk update && apk add git
RUN curl -sS https://getcomposer.org/installer | php && mv composer.phar /usr/local/bin/composer
RUN apk add --no-cache $PHPIZE_DEPS \
&& pecl install xdebug \
&& docker-php-ext-enable xdebug \
&& touch /usr/local/etc/php/conf.d/99-xdebug.ini \
&& echo "xdebug.mode=coverage" >> /usr/local/etc/php/conf.d/99-xdebug.ini \
&& echo -e '#!/usr/bin/env sh\nphp artisan test' > /usr/bin/t \
&& chmod +x /usr/bin/t \
&& echo -e '#!/usr/bin/env sh\nphp artisan test --coverage' > /usr/bin/tc \
&& chmod +x /usr/bin/tc \
&& echo -e '#!/usr/bin/env sh\nphp artisan test --filter "$@"' > /usr/bin/tf \
&& chmod +x /usr/bin/tf
WORKDIR /var/www

How to detect fully interactive shell in bash from docker?

I want to detect, in the entrypoint script, whether -ti has been passed to docker run.
From docker run --help, for -t and -i:
-i, --interactive=false Keep STDIN open even if not attached
-t, --tty=false Allocate a pseudo-TTY
I have tried the following, but even when tested locally (not inside Docker) it didn't work and always printed "Not interactive".
#!/bin/bash
[[ $- == *i* ]] && echo 'Interactive' || echo 'Not interactive'
entrypoint.sh:
#!/bin/bash
set -e
if [ -t 0 ] ; then
echo "(interactive shell)"
else
echo "(not interactive shell)"
fi
/bin/bash -c "$#"
Dockerfile:
FROM debian:7.8
COPY entrypoint.sh /usr/bin/entrypoint.sh
RUN chmod 755 /usr/bin/entrypoint.sh
ENTRYPOINT ["/usr/bin/entrypoint.sh"]
CMD ["/bin/bash"]
build the image:
$ docker build -t is_interactive .
run the image interactively:
$ docker run -ti --rm is_interactive "/bin/bash"
(interactive shell)
root@dd7dd9bf3f4e:/$ echo something
something
root@dd7dd9bf3f4e:/$ echo $HOME
/root
root@dd7dd9bf3f4e:/$ exit
exit
run the image not interactively:
$ docker run --rm is_interactive "echo \$HOME"
(not interactive shell)
/root
$
This stackoverflow answer helped me find [ -t 0 ].
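A quick way to see the [ -t 0 ] test in action locally, outside Docker (a minimal sketch):
$ bash -c '[ -t 0 ] && echo "stdin is a tty" || echo "stdin is not a tty"'
stdin is a tty
$ echo | bash -c '[ -t 0 ] && echo "stdin is a tty" || echo "stdin is not a tty"'
stdin is not a tty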

Resources