Run a simple shell script before running CMD command in Dockerfile - shell

I have a Dockerfile, and the last command is
CMD ["/opt/startup.sh"]
Now I have another shell script, replacevariables.sh, and I want to execute the following command in my Dockerfile:
sh replacevariables.sh ${app_dir} dev
How can I execute this command? It is a simple script which basically replaces some characters of the files in ${app_dir}. What can be the solution for this? Every piece of documentation I see only suggests running a single sh script.

You can use a Docker ENTRYPOINT to support this. Consider the following Dockerfile fragment:
COPY entrypoint.sh .
RUN chmod +x entrypoint.sh replacevariables.sh
ENTRYPOINT ["./entrypoint.sh"]
# Same as above
CMD ["/opt/startup.sh"]
The ENTRYPOINT becomes the main container process, and it gets passed the CMD as arguments. So your entrypoint can do the first-time setup, and then run the special shell command exec "$@" to replace itself with the command it was given.
#!/bin/sh
./replacevariables.sh "${app_dir}" dev
exec "$@"
Even if you're launching some alternate command in your container (docker run --rm -it yourimage bash to get a debugging shell, for example), this will only replace the "command" part, so bash becomes the "$@" in the script, and you still do the first-time setup before launching the shell.
The important caveats are that ENTRYPOINT must be the JSON-array form (CMD can be a bare string that gets wrapped in /bin/sh -c, but this setup breaks ENTRYPOINT) and you only get one ENTRYPOINT. If you already have an ENTRYPOINT (many SO questions seem to like naming an interpreter there) move it into the start of CMD (CMD ["python3", "./script.py"]).
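This wrapper pattern can be exercised locally without any Docker involved. The following is a minimal sketch; the /tmp path and the echoed setup message are illustrative only:

```shell
# Build a tiny entrypoint-style wrapper: do setup, then let
# `exec "$@"` replace the shell with whatever command it was given.
cat > /tmp/entrypoint_demo.sh <<'EOF'
#!/bin/sh
echo "first-time setup here"
exec "$@"
EOF
chmod +x /tmp/entrypoint_demo.sh

# The wrapper runs its setup, then becomes the given command:
/tmp/entrypoint_demo.sh echo "now the real command runs"
```

Because of the exec, the wrapper process is replaced by the command, so signals and the exit status go straight to it, just as they would to a container's main process.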

Related

the bashrc file is not working when I docker run --mount bashrc

I'm testing an app in Docker (a search engine), but when I use docker run, the bashrc doesn't work: if, for example, there is an alias inside bashrc, I can't use it.
The bashrc file is copied to the container, but I still can't use it.
My question is: why not? Is it only because the bashrc needs to be reloaded, or is there another reason?
sudo docker run \
--mount type=bind,source=$(pwd)/remise/bashrc,destination=/root/.bashrc,readonly \
--name="s-container" \
ubuntu /go/bin/s qewrty
If you start your container as
docker run ... image-name \
/go/bin/s qwerty
when Docker creates the container, it directly runs the command /go/bin/s qwerty; it does not invoke bash or any other shell to do it. Nothing will ever know to look for a .bashrc file.
Similarly, if your Dockerfile specifies
CMD ["/go/bin/s", "qwerty"]
it runs the command directly without a shell.
There's an alternate shell form of CMD that takes a command string, and runs it via /bin/sh -c. That does involve a shell; but it's neither an interactive nor a login shell, and it's invoked as sh, so it won't read any shell dotfiles (for the specific case where /bin/sh happens to be GNU Bash, see Bash Startup Files).
Since none of these common paths to specify the main container command will read .bashrc or other shell dotfiles, it usually doesn't make sense to try to write or inject these files. If you need to set environment variables, consider the Dockerfile ENV directive or an entrypoint wrapper script instead.
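This can be demonstrated locally, without Docker. The sketch below uses a throwaway /tmp/fakehome directory (an assumption of the demo, not from the question) to show that neither sh -c nor bash -c sources ~/.bashrc:

```shell
# Plant a variable in a fake home's .bashrc ...
mkdir -p /tmp/fakehome
echo 'FROM_BASHRC=yes' > /tmp/fakehome/.bashrc

# ... and observe that non-interactive, non-login shells never read it:
HOME=/tmp/fakehome sh -c 'echo "sh sees:   ${FROM_BASHRC:-unset}"'
HOME=/tmp/fakehome bash -c 'echo "bash sees: ${FROM_BASHRC:-unset}"'
```

Both commands print "unset", which is exactly the situation a container's main process is in.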

'docker run' ignores first command appended to ENTRYPOINT ["/bin/bash", "-c", ". foo.sh"] but not ["bash", "foo.sh"]

I am trying to run a docker image which executes a bash script and passes run-time arguments to that bash script. I have found that when I build the image using the recommended ENTRYPOINT ["/bin/bash", "-c", ". foo.sh"] entrypoint, the first argument appended to the docker run command doesn't get picked up by my script, but when I build the image with ENTRYPOINT ["bash", "foo.sh"], it does.
A toy version of the shell script looks like this:
#!/bin/bash
echo you got "$#" args
ARG1=${1:-foo}
ARG2=${2:-bar}
ARG3=${3:-1}
ARG4=${4:-$(date)}
echo "$ARG1"
echo "$ARG2"
echo "$ARG3"
echo "$ARG4"
So basically the script expects up to four command-line arguments, each with a default value.
The original Dockerfile I tried looks like this:
FROM ubuntu
COPY foo.sh foo.sh
ENTRYPOINT ["/bin/bash", "-c", ". foo.sh"]
and was based on a number of resources I found for how to properly execute a shell script using the exec form of ENTRYPOINT recommended by docker.
After building this image with docker build -t foo ., I run it with docker run -it foo first second third fourth and I get the following output:
you got 3 args
second
third
fourth
Tue Jul 2 13:14:52 UTC 2019
so clearly the first argument appended to the docker run command is dropped somewhere along the line, and the only arguments that get ingested by the shell command are the second, third, and fourth.
I spent ages trying to diagnose the issue and so far haven't figured out why this is happening. The best I've come up with is a somewhat hacky workaround, after discovering that changing the entrypoint to simply ENTRYPOINT ["bash", "foo.sh"] produces the desired results.
I would love to know the following: 1. Why does the original entrypoint drop the first run-time argument? 2. Why does the second entrypoint behave differently from the first?
When you run bash -c 'something' foo bar baz, "foo" becomes the zeroth parameter (i.e. $0), not $1.
You need to insert a dummy parameter there, perhaps
ENTRYPOINT ["/bin/bash", "-c", ". foo.sh", "bash"]
This is documented in the bash man page, in the description of the -c option.
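The behavior is easy to reproduce locally, without Docker; a small sketch:

```shell
# With bash -c, the first word after the command string fills $0,
# so only "second" and "third" become positional parameters:
bash -c 'echo "\$0=$0  \$#=$#  args: $@"' first second third
```

This prints `$0=first  $#=2  args: second third`, matching the dropped-argument symptom in the question.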
A Docker container runs a single process, specified in your case by the ENTRYPOINT setting; when that process exits the container exits. Here this means that there’s no surrounding shell environment you need to update, so there’s no need to run your script using the . built-in; and once you’re just running a simple command, there’s also no need to wrap your command in a sh -c wrapper.
That yields a Dockerfile like
FROM ubuntu
COPY foo.sh foo.sh
RUN chmod +x foo.sh # if it’s not executable already
ENTRYPOINT ["./foo.sh"]
This also avoids the issue with sh -c consuming its first argument noted in @GlennJackman's answer.
I needed to access environment variables in the container startup command. This works since Docker 1.12:
SHELL [ "/bin/sh", "-c", "exec my_app \"$@\"" ]
ENTRYPOINT
The blank ENTRYPOINT will become $0 and all other arguments will be passed as-is.
Example
FROM busybox
ENV MY_VAR=foobar
SHELL [ "/bin/sh", "-c", "printf '[%s]\n' \"${MY_VAR}\" \"$@\"" ]
ENTRYPOINT
docker run foo/bar a b c
[foobar]
[a]
[b]
[c]
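The same $0 mechanics can be checked locally with plain /bin/sh; the word after the -c string is a throwaway name, and the rest land in "$@":

```shell
# "argv0" fills $0; a, b, c become the positional parameters:
/bin/sh -c 'printf "[%s]\n" "$@"' argv0 a b c
```

This prints [a], [b], [c] on separate lines, which is why the blank ENTRYPOINT in the trick above ends up as $0 while the docker run arguments pass through intact.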

How do I Run Docker cmds Exactly Like in a Dockerfile

There seems to be a difference between how Docker runs commands in a Dockerfile versus running commands manually after starting a container. This seems to be due to the kind of shells you can start, a (I assume) non-interactive shell with a Dockerfile vs an interactive one when running something like docker run -it <some-img-id>.
How can I debug running commands in a Docker container so that it runs exactly like the commands are run from a Dockerfile? Would just adding /bin/bash --noprofile to the run cmd suffice? Or is there anything else different about the environment when started from a Dockerfile?
What you are experiencing is behavior caused by the shell. Most of us are used to the bash shell, so we would generally attempt to run commands in the following fashion:
For new container
docker run -it <imageid> bash
For existing container
docker exec -it <containerid> bash
But when we specify a command using the RUN directive inside a Dockerfile
RUN echo Testing
then it is equivalent to running /bin/sh -c 'echo Testing'. So you can expect certain differences, since the two shells are different.
In Docker 1.12 or higher you have a Dockerfile directive named SHELL, which allows you to override the default shell:
SHELL ["/bin/bash", "-c"]
RUN echo Testing
This would make the RUN command execute as bash -c 'echo Testing'. You can learn more about the SHELL directive in the Dockerfile reference.
Short answer 1:
If the Dockerfile doesn't use the USER and SHELL commands, then this:
docker run --entrypoint "/bin/sh -c" -u root <image> cmd
Short answer 2:
If you don't squash or compress the image after the build, Docker creates an image layer for each Dockerfile command. You can see them in the output of docker build at the end of each step, after --->:
Step 2/8 : WORKDIR /usr/src/app
---> 5a5964bed25d # <== THIS IS IMAGE ID OF STEP 2
Removing intermediate container b2bc9558e499
Step 3/8 : RUN something
---> f6e90f0a06e2 # <== THIS IS IMAGE ID OF STEP 3
Removing intermediate container b2bc9558e499
Look for the image id just before the RUN step you want to debug (for example, to debug step 3 above, take the step 2 image id). Then just run the command in that image:
docker run -it 5a5964bed25d cmd
Long answer 1:
When you run docker run [image] cmd, Docker in fact starts cmd in this way:
It executes the default entrypoint of the image with cmd as its argument. The entrypoint is stored in the image at build time by the ENTRYPOINT command in the Dockerfile. I.e., if cmd is my-app and the entrypoint is /bin/sh -c, it executes /bin/sh -c my-app.
It starts with the default user id of the image, which is defined by the last USER command in the Dockerfile.
It starts with the environment variables from all ENV commands in the image's Dockerfile, applied cumulatively.
When docker build runs a Dockerfile RUN step, it does exactly the same, only with the values present at that point (line) of the Dockerfile.
So to be exact, you have to take the values of the ENV commands and the last USER command before your RUN line, and use those in the docker run command.
Most common images have /bin/sh -c or /bin/bash -c as the entrypoint, and the build most likely operates as the root user. Therefore docker run --entrypoint "/bin/bash -c" -u root <image> cmd should be sufficient.

Bash brace expansion not working on Dockerfile RUN command

I'm running the following RUN command in my Dockerfile, expecting a "logs" directory to be created under each of the listed subdirectories:
RUN mkdir -p /opt/seagull/{diameter-env,h248-env,http-env,msrp-env,octcap-env,radius-env,sip-env,synchro-env,xcap-env}/logs
But when I check the image, I see a directory literally called "{diameter-env,h248-env,http-env,msrp-env,octcap-env,radius-env,sip-env,synchro-env,xcap-env}" created under /opt/seagull, instead of brace expansion taking place.
What could I be doing wrong?
You're not using brace expansion, because you're not using Bash. If you look at the documentation for the RUN command:
RUN (shell form, the command is run in a shell, which by default is /bin/sh -c on Linux or cmd /S /C on Windows)
And also:
Note: To use a different shell, other than ‘/bin/sh’, use the exec form passing in the desired shell. For example, RUN ["/bin/bash", "-c", "echo hello"]
So, just change the command to use the exec form and explicitly use a Bash shell:
RUN [ "/bin/bash", "-c", "mkdir -p /opt/seagull/{diameter-env,h248-env,http-env,msrp-env,octcap-env,radius-env,sip-env,synchro-env,xcap-env}/logs" ]
If /bin/bash is available in your image, you can change the shell that the docker build system uses to execute your RUN command, like this:
SHELL ["/bin/bash", "-c"]
Now, your RUN command should work unchanged.
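The difference is easy to check locally. This sketch shortens the question's directory list to two names and uses a temporary directory instead of /opt:

```shell
t=$(mktemp -d)

# bash performs brace expansion, so both subtrees are created:
bash -c "mkdir -p $t/seagull/{diameter-env,sip-env}/logs"
ls "$t/seagull"
```

Running the same mkdir under a strictly POSIX sh such as dash would instead create one literally-named directory containing braces, which is exactly what the question observed.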

Running a script inside a docker container using shell script

I am trying to create a shell script for setting up a Docker container. My script file looks like:
#!/bin/bash
docker run -t -i -p 5902:5902 --name "mycontainer" --privileged myImage:new /bin/bash
Running this script file will run the container in a newly invoked bash.
Now I need to run a script file (test.sh) which is already inside the container, from the above shell script (e.g. cd /path/to/test.sh && ./test.sh).
How to do that?
You can run a command in a running container using docker exec [OPTIONS] CONTAINER COMMAND [ARG...]:
docker exec mycontainer /path/to/test.sh
And to run from a bash session:
docker exec -it mycontainer /bin/bash
From there you can run your script.
Assuming that your docker container is up and running, you can run commands as:
docker exec mycontainer /bin/sh -c "cmd1;cmd2;...;cmdn"
I was searching for an answer to this same question and found that an ENTRYPOINT in the Dockerfile was the solution for me.
Dockerfile
...
ENTRYPOINT /my-script.sh ; /my-script2.sh ; /bin/bash
Now the scripts are executed when I start the container, and I get the bash prompt after the scripts have been executed.
In case you don't want (or have) a running container, you can call your script directly with the run command.
Remove the interactive tty arguments -i -t and use this:
$ docker run ubuntu:bionic /bin/bash /path/to/script.sh
This will (didn't test) also work for other scripts:
$ docker run ubuntu:bionic /usr/bin/python /path/to/script.py
This command worked for me:
cat local_file.sh | docker exec -i container_name bash
You could also mount a local directory into your Docker image and source the script in your .bashrc. Don't forget that the script has to consist of functions unless you want it to execute on every new shell. (This is outdated; see the update notice.)
I'm using this solution to be able to update the script outside of the Docker instance. This way I don't have to rerun the image if changes occur; I just open a new shell. (Got rid of reopening a shell; see the update notice.)
Here is how you bind your current directory:
docker run -it -v $PWD:/scripts $my_docker_build /bin/bash
Now your current directory is bound to /scripts of your docker instance.
(Outdated)
To save your .bashrc changes commit your working image with this command:
docker commit $container_id $my_docker_build
Update
To solve the issue to open up a new shell for every change I now do the following:
In the Dockerfile itself I add RUN echo "/scripts/bashrc" > /root/.bashrc. Inside zshrc I export the scripts directory to the path. The scripts directory now contains multiple files instead of one. Now I can directly call all scripts without having to open a sub-shell on every change.
BTW you can define the history file outside of your container too. This way it's not necessary to commit on a bash change anymore.
Thomio's answer is helpful, but it expects the script to exist inside the image. If you have a one-off script that you want to run/test inside a container (from the command line or to be useful in a script), then you can use
$ docker run ubuntu:bionic /bin/bash -c '
echo "Hello there"
echo "this could be a long script"
'
Have a look at entry points too. They let you combine multiple commands with CMD.
https://docs.docker.com/engine/reference/builder/#/entrypoint
If you want to run the same command on multiple instances you can do this :
for i in c1 dm1 dm2 ds1 ds2 gtm_m gtm_sl; do docker exec -it $i /bin/bash -c "service sshd start"; done
This is old, and I don't have enough reputation points to comment. Still, I guess it is worth sharing how one can generalize Marvin's idea to allow parameters:
docker exec -i mycontainer bash -s arg1 arg2 arg3 < mylocal.sh
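A local sketch of how the stdin-plus-arguments form behaves (the file name is illustrative): bash -s reads the script from standard input and maps the words after -s to $1, $2, and so on.

```shell
# A stand-in for mylocal.sh:
cat > /tmp/mylocal_demo.sh <<'EOF'
echo "got args: $1 $2 $3"
EOF

# bash -s: the script arrives on stdin, the arguments follow -s
bash -s one two three < /tmp/mylocal_demo.sh
```

Inside a docker exec -i ... bash -s arg1 arg2 pipeline the mechanics are the same; the container's bash simply reads the script over the exec'ed stdin.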
