Executing a shell script within docker with RUN command - bash

New to Docker, so please bear with me.
My Dockerfile contains an ENTRYPOINT:
ENV MONGOD_START "mongod --fork --logpath /var/log/mongodb.log --logappend --smallfiles"
ENTRYPOINT ["/bin/sh", "-c", "$MONGOD_START"]
I have a shell script that adds an entry to the database through a Python script and then starts the server.
The script startApp.sh:
chmod +x /addAddress.py
python /addAddress.py $1
cd /myapp/webapp
grunt serve --force
Now, none of the docker run commands below succeed in executing this script.
sudo docker run -it --privileged myApp -C /bin/bash && /myApp/webapp/startApp.sh loc
sudo docker run -it --privileged myApp /myApp/webapp/startApp.sh loc
The docker log of container is
"about to fork child process, waiting until server is ready for connections. forked process: 7 child process started successfully, parent exiting "
Also, startApp.sh executes fine when I open a bash prompt in the container and run it.
I am unable to figure out what I am doing wrong, help please.

I would suggest you create an entrypoint.sh file:
#!/bin/sh
# Initialize start DB command
# Pick from env variable MONGOD_START if it exists
# else use the default value provided in quotes
START_DB=${MONGOD_START:-"mongod --fork --logpath /var/log/mongodb.log --logappend --smallfiles"}
# This will start your DB in background
${START_DB} &
# Go to startApp directory and execute commands
chmod +x /addAddress.py
python /addAddress.py "$1"
cd /myapp/webapp
grunt serve --force
Then modify your Dockerfile by removing the last line and replacing it with the following three lines:
COPY entrypoint.sh /
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
Then rebuild your container image using
docker build -t NAME:TAG .
Now run the following command to verify that the ENTRYPOINT is /entrypoint.sh:
docker inspect NAME:TAG | less
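If you only want to check the entrypoint rather than page through the full JSON, a --format filter on docker inspect works too (a small sketch; NAME:TAG is your image as above):
docker inspect --format '{{json .Config.Entrypoint}}' NAME:TAG
# expected output once the new entrypoint is in place:
# ["/entrypoint.sh"]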

I guess (and I might be wrong, since I'm neither a MongoDB nor a Docker expert) that your combination of mongod --fork and /bin/sh -c is the culprit.
What you're essentially executing is this:
/bin/sh -c mongod --fork ...
which
- executes a shell,
- this shell executes a single command and waits for it to finish,
- this command launches MongoDB in daemon mode,
- MongoDB forks and immediately exits.
The easiest fix is probably to just use
CMD ["mongod"]
like the official MongoDB Docker image does.
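For reference, a minimal sketch of how the end of the asker's Dockerfile could look with this fix. It keeps the logging flags taken from the question (adjust them to your image) but drops --fork, so mongod stays in the foreground and keeps the container alive:
# mongod in the foreground is the container's main process
CMD ["mongod", "--logpath", "/var/log/mongodb.log", "--logappend", "--smallfiles"]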

Related

How do I Run Docker cmds Exactly Like in a Dockerfile

There seems to be a difference between how Docker runs commands in a Dockerfile versus running commands manually after starting a container. This seems to be due to the kind of shells you can start, a (I assume) non-interactive shell with a Dockerfile vs an interactive one when running something like docker run -it <some-img-id>.
How can I debug running commands in a Docker container so that it runs exactly like the commands are run from a Dockerfile? Would just adding /bin/bash --noprofile to the run cmd suffice? Or is there anything else different about the environment when started from a Dockerfile?
What you are experiencing is behavior caused by the shell. Most of us are used to using the bash shell, so we generally attempt to run commands in the fashion below.
For new container
docker run -it <imageid> bash
For existing container
docker exec -it <containerid> bash
But when we specify a command using the RUN directive inside a Dockerfile
RUN echo Testing
then it is equivalent to running /bin/sh -c 'echo Testing'. So you can expect certain differences, since the two shells are different.
In Docker 1.12 or higher there is a Dockerfile directive named SHELL, which allows you to override the default shell:
SHELL ["/bin/bash", "-c"]
RUN echo Testing
This makes the RUN command execute as bash -c 'echo Testing'. You can learn more about the SHELL directive in the Dockerfile reference.
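As a hedged illustration of the kind of difference this hides (assuming an image such as ubuntu, where /bin/sh is dash rather than bash):
# Fails the build: [[ ... ]] is a bashism that the default /bin/sh -c does not know
RUN [[ -f /etc/os-release ]] && echo "release file found"

# Works: switch the shell first, then the same line runs under bash -c
SHELL ["/bin/bash", "-c"]
RUN [[ -f /etc/os-release ]] && echo "release file found"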
Short answer 1:
If the Dockerfile doesn't use USER and SHELL commands, then this:
docker run --entrypoint "/bin/sh -c" -u root <image> cmd
Short answer 2:
If you don't squash or compress the image after the build, Docker creates an image layer for each of the Dockerfile commands. You can see them in the output of docker build at the end of each step, after --->:
Step 2/8 : WORKDIR /usr/src/app
---> 5a5964bed25d # <== THIS IS IMAGE ID OF STEP 2
Removing intermediate container b2bc9558e499
Step 3/8 : RUN something
---> f6e90f0a06e2 # <== THIS IS IMAGE ID OF STEP 3
Removing intermediate container b2bc9558e499
Look for the image id just before the RUN step you want to debug (for example, to debug step 3 above, take the step 2 image id). Then just run the command in that image:
docker run -it 5a5964bed25d cmd
Long answer 1:
When you run docker run [image] cmd Docker in fact starts the cmd in this way:
- Executes the default entrypoint of the image with the cmd as its argument. The entrypoint is stored in the image at build time by the ENTRYPOINT command in the Dockerfile. I.e. if cmd is my-app and the entrypoint is /bin/sh -c, it executes /bin/sh -c my-app.
- Starts it with the default user id of the image, which is defined by the last USER command in the Dockerfile.
- Starts it with the environment variables from all ENV commands in the image's Dockerfile, cumulatively.
When docker build runs a Dockerfile RUN, it does exactly the same, only with the values present at that point (line) of the Dockerfile.
So to be exact, you have to take the values of the ENVs and the last USER command before your RUN line, and use those in the docker run command.
Most common images have /bin/sh -c or /bin/bash -c as the entrypoint, and most likely the build operates as the root user. Therefore docker run --entrypoint "/bin/bash -c" -u root <image> cmd should be sufficient.
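For example, to re-run the step 3 command from the build output above under roughly the same conditions, you could take the step 2 image id and pass the user and any ENV values yourself. A sketch: APP_ENV=production stands in for whatever hypothetical ENV lines precede that RUN in your Dockerfile, and 'something' is the step 3 command:
docker run -it \
  --entrypoint /bin/sh \
  -u root \
  -e APP_ENV=production \
  5a5964bed25d -c 'something'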

Running shell script using Docker image

Input:
- There is Windows machine with Docker Toolbox installed.
- There is a shell script file baz.sh which calls py2dsc-deb.
Problem: py2dsc-deb is not available on Windows.
If I understand correctly, I can pull some Linux distro image from the Docker repository, create a container, and then execute the shell script file; it will run py2dsc-deb and do its job.
I have pulled:
REPOSITORY   TAG            IMAGE ID   CREATED       SIZE
debian       stretch-slim   3ad21      3 weeks ago   55.3MB
Now
How do I run my script using debian, something like: docker exec mycontainer /path/to/test.sh?
Running docker run --rm debian:stretch-slim does nothing. Isn't it supposed to run the Debian distro at the docker-machine IP?
I have tried to keep the container up using docker run -it debian:stretch-slim /bin/bash, then run the script using docker exec 1ef5b ./build.sh, but getting
$ docker exec 745 ./build.sh
rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "exec: \"./build.sh\": stat ./build.sh: no such file or directory"
Does it mean I can't run an external script and have to always put it inside the Docker container?
You can execute a bash command inside your container by typing:
docker exec -ti -u `username` `container_name` bash -c "cd /path/to/ && ./test.sh"
Let's say your container name is test_buildbox, you are root, and your script lives at /bin/test.sh. You can call this script by typing:
docker exec -ti -u root test_buildbox bash -c "cd /bin/ && ./test.sh"
Please check that you have correct line endings (<LF>) in your .sh scripts when you build the Docker image on Windows.
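If the script only exists on the Windows host, another option is to bind-mount the folder containing it into the container, so there is no "no such file or directory" error. A sketch; the /c/Users/... path is an assumption based on Docker Toolbox sharing C:\Users into its VM, so adjust it to wherever baz.sh actually lives:
docker run --rm -it \
  -v /c/Users/me/project:/work \
  -w /work \
  debian:stretch-slim \
  bash ./baz.sh
Again, make sure the script has LF line endings, or bash inside the container will choke on the trailing carriage returns.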

How to execute the Entrypoint of a Docker images at each "exec" command?

After trying to test Dockerfiles with Dockerspec, I finally had an issue I can't resolve properly.
The problem comes, I think, from Docker itself; if I understand its process correctly, an ENTRYPOINT is only executed at run, but if the container stays started and I launch an "exec" command in it, the entrypoint is not re-called.
I think this is the intended behavior.
But if the ENTRYPOINT is a "gosu" script which precedes all my commands, it's a problem...
Example
"myImage" has this Entrypoint :
gosu 1000:1000 "$#"
If I launch : docker run -it myImage id -u
The output is "1000".
If I start a container : docker run -it myImage bash
In this container, id -u outputs "1000".
But if I start a new command in this container, it starts a new shell and does not execute the ENTRYPOINT, so: docker exec CONTAINER_ID id -u
outputs "0", because the new shell is started as "root".
Is there a way to execute the entrypoint each time?
Or to re-use the open shell?
Or a better way to do that?
Or maybe I haven't understood anything? ;)
Thanks !
EDIT
After reading the solutions proposed here, I understand that the problem is not how Docker works but how Serverspec works with it; my goal is to directly test a command as a docker run argument, but Serverspec starts a container and tests commands with docker exec.
So the best solution is to find out how to get the stdout of the docker run executed by Serverspec.
But, in my personal use-case, the best solution is maybe to not use Gosu but --user flag :)
If your goal is to run docker exec with a specific user inside the container, you can use the --user option.
docker exec --user myuser container-name [... your command here]
If you want to run gosu every time, you can specify that as the command with docker exec:
docker exec container-name gosu 1000:1000 [your actual command here]
In my experience, the best way to encapsulate this into something easily re-usable is with a .sh script (or .cmd file on Windows).
Drop this into a file in your local folder... maybe gs, for example:
#! /bin/sh
docker exec container-name gosu 1000:1000 "$@"
Give it execute permissions with chmod +x gs and then run it with ./gs from the local folder.
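For example, assuming a container named container-name is running and gosu is installed in it:
./gs id -u            # should print 1000
./gs touch /tmp/file  # any command and its arguments pass through via "$@"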

Dockerfile CMD instruction will exit the container just after running it

I want to set up some configuration when my container starts, and for this I am using shell scripts. But my container exits as soon as my script ends. I have tried the -d flag / detached mode, but it still will not keep running in detached mode.
Below is my Dockerfile
FROM ubuntu:14.04
ADD shell.sh /usr/local/bin/shell.sh
RUN chmod 777 /usr/local/bin/shell.sh
CMD /usr/local/bin/shell.sh
Below is my shell script
#!/bin/bash
echo Hello-docker
Run without any flag
docker run hello-docker
This prints 'Hello-docker' on my console and exits.
Run with -itd flags
docker run -itd hello-docker
And, as my console output shows, this time it also exits soon. :(
The difference I saw is in the COMMAND section: when I run other images, the COMMAND section shows "/bin/bash" and the container continues in detached mode.
And when I run my image with the shell script, the COMMAND section shows "/bin/sh -c /usr/loca" and the container exits.
I want the container to run until I stop it manually.
EDIT:
After adding an ENTRYPOINT instruction to the Dockerfile, it does not execute my shell script :(
FROM ubuntu:14.04
ADD shell.sh /usr/local/bin/shell.sh
RUN chmod 777 /usr/local/bin/shell.sh
CMD /usr/local/bin/shell.sh
ENTRYPOINT /bin/bash
As per the Docker documentation,
CMD will be overridden when running the container with alternative arguments, so if I run the Docker image with some arguments as below, it will not execute the CMD instruction. :(
sudo docker run -it --entrypoint=/bin/bash <imagename>
A docker container will run as long as the CMD from your Dockerfile takes.
In your case your CMD consists of a shell script containing a single echo. So the container will exit after completing the echo.
You can override CMD, for example:
sudo docker run -it --entrypoint=/bin/bash <imagename>
This will start an interactive shell in your container instead of executing your CMD. Your container will exit as soon as you exit that shell.
If you want your container to remain active, you have to ensure that your CMD keeps running. For instance, by adding the line while true; do sleep 1; done to your shell.sh file, your container will print your hello message and then do nothing any more until you stop it (using docker stop in another terminal).
You can open a shell in the running container using docker exec -it <containername> bash. If you then execute command ps ax, it will show you that your shell.sh is still running inside the container.
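Putting that together, a minimal sketch of a shell.sh that does its one-time setup and then keeps a foreground process alive (the echo stands in for your real configuration steps):
#!/bin/bash
# one-time setup (replace with your real configuration steps)
echo Hello-docker

# keep the container alive so it does not exit after the setup
while true; do sleep 1; done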
Finally, with some experiments, I got my best result as below.
There is nothing wrong with my Dockerfile; as shown below, it's correct.
FROM ubuntu:14.04
ADD shell.sh /usr/local/bin/shell.sh
RUN chmod 777 /usr/local/bin/shell.sh
CMD /usr/local/bin/shell.sh
What I do to get the expected result is add one more command (/bin/bash) to my shell script file as below, and voila, everything works the way I want.
#!/bin/bash
echo "Hello-docker" > /usr/hello.txt
/bin/bash
You can also modify your first Dockerfile, replacing
CMD /usr/local/bin/shell.sh
by
CMD /usr/local/bin/shell.sh ; sleep infinity
That way, your script does not terminate, and your container stays running.
CMD bash -C '/path/to/start.sh';'bash'
Try
CMD /bin/bash -c 'MY_COMMAND_OR_SHELL_SCRIPT; /bin/bash'
Trying to add an explanation here to the answer of @lanni654321. The sh shell is the default in a Dockerfile; you must call the bash shell to start bash with .bashrc. Many commands also need RUN /bin/bash -c '...' in the same way as in the CMD above, since the sh shell is often not enough. If you add 'bash' at the end of CMD, the container will not exit, because that final bash stays open in the foreground.
See “/bin/sh: 1: MY_COMMAND: not found” for an error caused by sh and solved by bash.
I think you will usually not need this. You can just use RUN /bin/bash -c '...'; in my case, this could do anything that can be done in a base image before you go into the varying details in docker-compose to start the containers.
But none of that is needed if you just want to have a container running without exiting. Just run
docker run -dit --name MY_CONTAINER MY_IMAGE:latest
and then
docker exec -it MY_CONTAINER /bin/bash
and you should be in the bash of the container, and it should not exit.
Or if the exit happens during docker-compose, use
command: bash -c "MY_COMMAND --wait"
At the end of your start shell script, append a line such as:
tail -f /dev/null or /bin/bash
This keeps a process running after your script has finished, so the Docker container does not shut down. Don't forget to give start.sh execute permission with chmod +x.
Here is a demo:
#!/bin/bash
cp /root/supervisor/${RUN_SERVICE}.ini /etc/supervisor/conf.d/
sleep 1
service supervisor start
/bin/bash

Running a script inside a docker container using shell script

I am trying to create a shell script for setting up a docker container. My script file looks like:
#!/bin/bash
docker run -t -i -p 5902:5902 --name "mycontainer" --privileged myImage:new /bin/bash
Running this script file will run the container in a newly invoked bash.
Now I need to run a script file (test.sh), which is already inside the container, from the above shell script (e.g. cd /path/to/ && ./test.sh).
How to do that?
You can run a command in a running container using docker exec [OPTIONS] CONTAINER COMMAND [ARG...]:
docker exec mycontainer /path/to/test.sh
And to run from a bash session:
docker exec -it mycontainer /bin/bash
From there you can run your script.
Assuming that your docker container is up and running, you can run commands as:
docker exec mycontainer /bin/sh -c "cmd1;cmd2;...;cmdn"
I was searching for an answer to this same question and found that ENTRYPOINT in the Dockerfile was the solution for me.
Dockerfile
...
ENTRYPOINT /my-script.sh ; /my-script2.sh ; /bin/bash
Now the scripts are executed when I start the container, and I get the bash prompt after the scripts have been executed.
In case you don't want (or have) a running container, you can call your script directly with the run command.
Remove the interactive tty -i -t arguments and use this:
$ docker run ubuntu:bionic /bin/bash /path/to/script.sh
This should (I didn't test it) also work for other scripts:
$ docker run ubuntu:bionic /usr/bin/python /path/to/script.py
This command worked for me
cat local_file.sh | docker exec -i container_name bash
You could also mount a local directory into your docker image and source the script in your .bashrc. Don't forget the script has to consist of functions unless you want it to execute on every new shell. (This is outdated, see the update notice.)
I'm using this solution to be able to update the script outside of the docker instance. This way I don't have to rerun the image if changes occur; I just open a new shell. (Got rid of reopening a shell - see the update notice.)
Here is how you bind your current directory:
docker run -it -v $PWD:/scripts $my_docker_build /bin/bash
Now your current directory is bound to /scripts of your docker instance.
(Outdated)
To save your .bashrc changes commit your working image with this command:
docker commit $container_id $my_docker_build
Update
To solve the issue to open up a new shell for every change I now do the following:
In the Dockerfile itself I add RUN echo "/scripts/bashrc" > /root/.bashrc. Inside the zshrc I export the scripts directory to the path. The scripts directory now contains multiple files instead of one. Now I can directly call all scripts without having to open a sub shell on every change.
BTW you can define the history file outside of your container too. This way it's not necessary to commit on a bash change anymore.
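A sketch of what that could look like, assuming you keep a .bash_history file next to your scripts on the host (the paths are illustrative, and $my_docker_build is the image from the earlier example):
touch .bash_history   # make sure it exists, or Docker will create a directory instead
docker run -it \
  -v "$PWD/scripts:/scripts" \
  -v "$PWD/.bash_history:/root/.bash_history" \
  $my_docker_build /bin/bash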
Thomio's answer is helpful, but it expects the script to exist inside the image. If you have a one-off script that you want to run/test inside a container (from the command line or to be used in a script), then you can use
$ docker run ubuntu:bionic /bin/bash -c '
echo "Hello there"
echo "this could be a long script"
'
Have a look at entry points too. You will be able to use multiple CMD
https://docs.docker.com/engine/reference/builder/#/entrypoint
If you want to run the same command on multiple instances you can do this :
for i in c1 dm1 dm2 ds1 ds2 gtm_m gtm_sl; do docker exec -it $i /bin/bash -c "service sshd start"; done
This is old, and I don't have enough reputation points to comment. Still, I guess it is worth sharing how one can generalize Marvin's idea to allow parameters.
docker exec -i mycontainer bash -s arg1 arg2 arg3 < mylocal.sh
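To see how the arguments arrive, here is a tiny mylocal.sh sketch: bash -s reads the script from stdin and makes arg1 arg2 arg3 available as the positional parameters $1 $2 $3.
#!/bin/bash
# mylocal.sh - just echoes back the positional parameters it received
echo "first:  $1"
echo "second: $2"
echo "third:  $3"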
