Docker is not running my entire entrypoint.sh script - bash

I have created a docker container to stand up Elasticsearch. Elasticsearch is started and managed by supervisor, which is also installed in my docker container. I have created an entrypoint.sh script and added the following to the end of my Dockerfile:
ENTRYPOINT ["/usr/local/startup/entrypoint.sh"]
My entrypoint.sh script looks as follows:
#!/bin/bash -x
# Start Supervisor if not already running
if ! ps aux | grep -q "[s]upervisor"; then
    echo "Starting supervisor service"
    exec /usr/bin/supervisord -nc /etc/supervisor/supervisord.conf
else
    echo "Supervisor is currently running"
fi
echo "creating /.es_created"
touch /.es_created
exec "$@"
When I start my docker container, supervisor starts and in turn successfully starts elasticsearch. The problem is that it never executes the last bit of the script, creating the .es_created file. It seems like once the
exec /usr/bin/supervisord -nc /etc/supervisor/supervisord.conf
command is executed, it just stops there. I added -x to the #!/bin/bash so I could call docker logs on the container and it confirms that it never calls the last echo and touch commands. I feel like I may be missing something about entrypoint scripts which is why this is happening, but ultimately I want to be able to execute some commands after elasticsearch has started so I can configure a proper index and insert some data.

Your guess
It seems like once the
exec /usr/bin/supervisord -nc /etc/supervisor/supervisord.conf
command is executed, it just stops there.
is correct, because bash's exec builtin has exactly these semantics: the specified program is executed and replaces the parent shell process (it is an exec system call), so nothing after the exec line ever runs.
So your question is actually not a Docker issue; it is related to Bash. For more details on the exec shell builtin, you could for example take a look at this askubuntu question, or read the corresponding doc in the bash reference manual.
To sum up, you should try to just write
/usr/bin/supervisord -nc /etc/supervisor/supervisord.conf
If that command indeed runs in the background, it should be OK. Otherwise, you could of course append a &:
/usr/bin/supervisord -nc /etc/supervisor/supervisord.conf &
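Putting this together, a sketch of a corrected entrypoint.sh (the trailing wait is an addition, keeping the container alive by waiting on the backgrounded supervisord):
#!/bin/bash -x
# Start supervisor in the background so the script can continue
if ! ps aux | grep -q "[s]upervisor"; then
    echo "Starting supervisor service"
    /usr/bin/supervisord -nc /etc/supervisor/supervisord.conf &
else
    echo "Supervisor is currently running"
fi

# These lines are now reached, because the shell was not replaced
echo "creating /.es_created"
touch /.es_created

# Keep the container running by waiting on the background job
wait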

Related

Send commands directly in running process and indirectly (e. g. with tail)

I am currently building a docker project for running a Minecraft Spigot server.
To achieve this I need to be able to run commands in the running shell (when using docker run -it d3strukt0r/spigot) and indirectly with docker exec <name> console <command>. Unfortunately, I'm not too fond of the bash language.
Currently, I am able to send commands indirectly, which is great when being detached. I got this with:
_console_input="/app/input.buffer"
# Clear console buffers
true >$_console_input
# Start the main application
echo "[....] Starting Minecraft server..."
tail -f $_console_input | tee /dev/console | $(command -v java) $JAVA_OPTIONS -jar /app/spigot.jar --nogui "$@"
And when running the console command, all it does is the following:
echo "$#" >>/app/input.buffer
The code can be found here
Does someone know a way to now add the ability to enter commands directly?
USE CASE ONE: A user may run attached using docker run
docker run -it --name spigot -p 25565:25565 -e EULA=true d3strukt0r/spigot:nightly
In this case, the user should definitely be able to use the console as he is used to (when running java -jar spigot.jar).
If he has a second console open he can also send a command with:
docker exec spigot console "time set day"
USE CASE TWO: A user may run detached using docker run -d
docker run -d --name spigot -p 25565:25565 -e EULA=true d3strukt0r/spigot:nightly
In this case, the user is only able to send commands indirectly.
docker exec spigot console "time set day"
USE CASE THREE AND FOUR: Use docker-compose (look at the use case "two", it's basically the same)
You could make a script that acts like a mini-shell, reading from stdin and writing to /app/input.buffer. Set it as the container's CMD so it runs by default. Put it in the same directory as your Dockerfile and make sure it's executable.
interactive_console
#!/bin/bash
while IFS= read -rp '$ ' command; do
    printf '%s\n' "$command"
done >> /app/input.buffer
Dockerfile
COPY interactive_console /usr/bin
CMD interactive_console
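With that in place, both paths from the question should work: attached users get the mini-shell prompt, and detached users keep the existing console helper, e.g.:
# Attached: type commands at the '$ ' prompt of interactive_console
docker run -it --name spigot -p 25565:25565 -e EULA=true d3strukt0r/spigot:nightly
# Detached: send commands through the same buffer
docker exec spigot console "time set day"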

Unable to get any Docker Entrypoint from script working without continuous restarts

I'm having trouble understanding or seeing some working version of using a bash script as an Entrypoint for a Docker container. I've been trying numerous things for about 5 hours now.
Even following this official Docker blog, using a bash script as an entrypoint still doesn't work.
Dockerfile
FROM debian:stretch
COPY docker-entrypoint.sh /usr/local/bin/
RUN ln -s /usr/local/bin/docker-entrypoint.sh / # backwards compat
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["postgres"]
docker-entrypoint.sh
#!/bin/bash
set -e
if [ "$1" = 'postgres' ]; then
chown -R postgres "$PGDATA"
if [ -z "$(ls -A "$PGDATA")" ]; then
gosu postgres initdb
fi
exec gosu postgres "$#"
fi
exec "$#"
build.sh
docker build -t test .
run.sh
docker service create \
--name test \
test
Despite many efforts, I can't seem to get a Dockerfile that uses a bash script as an Entrypoint without it continuously restarting and failing repeatedly.
My understanding is that exec "$@" was supposed to keep the container from immediately exiting, but I'm not sure if that depends on some other process within the script failing.
I've tried using a docker-entrypoint.sh script that simply looked like this:
#!/bin/bash
exec "$#"
And since that also failed, I think that rules out something else going wrong inside the script being the cause of the failure.
What's also frustrating is there are no logs, either from docker service logs test or docker logs [container_id], and I can't seem to find anything useful in docker inspect [container_id].
I'm having trouble understanding everyone's confidence in exec "$@". I don't want to resort to using something like tail -f /dev/null or passing a command at docker run. I was hoping that there would be some consistent, reliable way a docker-entrypoint.sh script could be used to start services with docker run, but even with Docker's official blog and countless questions and blog posts from other sites, I can't seem to get a single example to work.
I would really appreciate some insight into what I'm missing here.
$@ expands to the command-line arguments. You are providing none, so it executes an empty string. That exits, which kills the container. Also, the exec command always ends the running script: it replaces the current shell process with the specified program, so control never returns to the script.
What I think you want to do is keep calling this script in kind of a recursive way. To actually have the script call itself, the line would be:
exec $0
$0 is the name of the bash file (or function name, if in a function). In this case it would be the name of your script.
Also, I am curious about your desire not to use tail -f /dev/null. Creating a new shell over and over, as fast as the script can go, is not more performant. I am guessing you want this script to run over and over just to check your if condition.
In that case, a while true loop would probably work.
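A minimal sketch of such a loop (the body is a placeholder for whatever check the script needs):
while true; do
    # re-run whatever check or setup the script needs
    sleep 1
done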
What you show, in principle, should work, and is one of the standard Docker patterns.
The interaction between ENTRYPOINT and CMD is pretty straightforward. If you provide both, then the main container process is whatever ENTRYPOINT (or docker run --entrypoint) specifies, and it is passed CMD (or the command at the end of docker run) as arguments. In this context, ending an entrypoint script with exec "$@" just means "replace me with the CMD as the main container process".
So, the pattern here is
Do some initial setup, like chowning a possibly-external data directory; then
exec "$#" to run whatever was passed as the command.
In your example there are a couple of things worth checking; it won't run as shown.
Whatever you provide as the ENTRYPOINT needs to obey the usual rules for executable commands: if it's a bare command, it must be in $PATH; it must have the executable bit set in its file permissions; if it's a script, its interpreter must also exist; if it's a binary, it must be statically linked or all of its shared library dependencies must already be in the image. For your script, you might need to make it executable if it isn't already:
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
The other thing with this setup is that (definitionally) if the ENTRYPOINT exits, the whole container exits, and the Bourne shell set -e directive tells the script to exit on any error. In the artifacts in the question, gosu isn't a standard part of the Debian base image, so your entrypoint will fail (and your container will exit) when it tries to run that command. (That won't affect the very simple case, though.)
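For the fuller script to work, the image needs gosu installed. A minimal sketch, assuming your base image's package repositories provide the gosu package (recent Debian releases do):
# Install gosu from the distribution repositories (assumption: package exists)
RUN apt-get update \
 && apt-get install -y --no-install-recommends gosu \
 && rm -rf /var/lib/apt/lists/*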
Finally, if you run into trouble running a container under an orchestration system like Docker Swarm or Kubernetes, one of your first steps should be to run the same container, locally, in the foreground: use docker run without the -d option and see what it prints out. For example:
% docker build .
% docker run --rm c5fb7da1c7c1
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"docker-entrypoint.sh\": executable file not found in $PATH": unknown.
ERRO[0000] error waiting for container: context canceled
% chmod +x docker-entrypoint.sh
% docker build .
% docker run --rm f5a239f2758d
/usr/local/bin/docker-entrypoint.sh: line 3: exec: postgres: not found
(Using the Dockerfile and short docker-entrypoint.sh from the question, and using the final image ID from docker build . in those docker run commands.)

Entering text into a docker container via ssh from bash file

What I am trying to do is setup a local development database and to prevent everyone having to go through all the steps I thought it would be useful to create a script.
What I have below stops once it is in the container's terminal, which looks like:
output
./dbSetup.sh
hash of container 0d1b182aa6f1
/ #
At which point I have to manually enter exit.
script
#!/bin/bash
command=$(docker ps | grep personal)
set $command
echo "hash of container ${1}"
docker exec -it ${1} sh
Is there a way I can inject a command via a script into a docker container's terminal?
In order to execute commands inside a container, you can use something like this:
docker exec -ti my_container sh -c "echo a && echo b"
More information available at: https://docs.docker.com/engine/reference/commandline/exec/
Your script finds a running Docker container and opens a shell to it. The -it makes it interactive and allocates a tty, which is why it continues to wait for input, e.g. exit. If the plan is to execute some commands to initialize a local development database, I'd recommend building an image with a Dockerfile instead. That is, once you figure out the commands to run, they would become RUN commands, and the container started by docker run would expose a local development database.
If you really want some commands to run within the shell after it is started and to maintain the session, then depending on the base image, you might be able to mount a bash profile that has the required commands, e.g. -v db_profile:/etc/profile.d, where db_profile is a folder with the shell scripts you want to run. To get them to run, you'd exec sh -l so the login startup scripts run, as sketched below.
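A sketch of that approach (db_profile and my_db_image are hypothetical names; db_profile is a local folder of *.sh scripts):
docker run -d --name devdb -v "$PWD/db_profile:/etc/profile.d" my_db_image
docker exec -it devdb sh -l   # login shell runs /etc/profile, which typically sources /etc/profile.d/*.sh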

Dockerfile CMD instruction will exit the container just after running it

I want to set up some configuration when my container starts; for this I am using shell scripts. But my container exits as soon as my script ends. I have tried the -d flag / detached mode, but it never runs in detached mode.
Below is my Dockerfile
FROM ubuntu:14.04
ADD shell.sh /usr/local/bin/shell.sh
RUN chmod 777 /usr/local/bin/shell.sh
CMD /usr/local/bin/shell.sh
Below is my shell script
#!/bin/bash
echo Hello-docker
Run without any flag
docker run hello-docker
This will print 'Hello-docker' on my console and then exit.
Run with -itd flags
docker run -itd hello-docker
and as my console output below shows, this time it also exits soon. :(
The difference I saw is in the COMMAND section: when I run other images, the COMMAND section shows "/bin/bash" and the container continues in detached mode.
And when I run my image with the shell script, the COMMAND section shows "/bin/sh -c /usr/loca" and the container exits.
I want the container to run until I stop it manually.
EDIT:
After adding ENTRYPOINT instruction in Dockerfile, this will not execute my shell script :(
FROM ubuntu:14.04
ADD shell.sh /usr/local/bin/shell.sh
RUN chmod 777 /usr/local/bin/shell.sh
CMD /usr/local/bin/shell.sh
ENTRYPOINT /bin/bash
As per docker documentation here
CMD will be overridden when running the container with alternative arguments, so if I run the docker image with some arguments as below, it will not execute the CMD instructions. :(
sudo docker run -it --entrypoint=/bin/bash <imagename>
A docker container will run as long as the CMD from your Dockerfile takes.
In your case your CMD consists of a shell script containing a single echo. So the container will exit after completing the echo.
You can override CMD, for example:
sudo docker run -it --entrypoint=/bin/bash <imagename>
This will start an interactive shell in your container instead of executing your CMD. Your container will exit as soon as you exit that shell.
If you want your container to remain active, you have to ensure that your CMD keeps running. For instance, by adding the line while true; do sleep 1; done to your shell.sh file, your container will print your hello message and then do nothing any more until you stop it (using docker stop in another terminal).
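For example, shell.sh would then look like this (a sketch of the suggestion above):
#!/bin/bash
echo Hello-docker
# Keep a foreground process running so the container stays up
while true; do sleep 1; done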
You can open a shell in the running container using docker exec -it <containername> bash. If you then execute command ps ax, it will show you that your shell.sh is still running inside the container.
Finally, after some experiments, I got my best result as below.
There is nothing wrong with my Dockerfile; as shown below, it's correct.
FROM ubuntu:14.04
ADD shell.sh /usr/local/bin/shell.sh
RUN chmod 777 /usr/local/bin/shell.sh
CMD /usr/local/bin/shell.sh
What I did to get the expected result was to add one more command (/bin/bash) to my shell script file as below, and voila, everything works.
#!/bin/bash
echo "Hello-docker" > /usr/hello.txt
/bin/bash
You can also modify your first Dockerfile, replacing
CMD /usr/local/bin/shell.sh
by
CMD /usr/local/bin/shell.sh ; sleep infinity
That way, your script does not terminate, and your container stays running.
CMD bash -C '/path/to/start.sh';'bash'
Try
CMD /bin/bash -c 'MY_COMMAND_OR_SHELL_SCRIPT; /bin/bash'
Attempting an explanation of #lanni654321's answer: the sh shell is the default in a Dockerfile. You must call bash to get a shell that loads .bashrc; many commands also need RUN /bin/bash -c '...' in the same way as in the CMD above, since sh is often not enough. If you add 'bash' at the end of the CMD, the container will not exit, because an interactive shell is kept open.
See “/bin/sh: 1: MY_COMMAND: not found” for an error caused by sh and solved by bash.
I think that you will usually not need this. You can just use RUN /bin/bash -c '...'; in my case, this could do anything that can be done in a base image before you go into the varying details of docker-compose to start the containers.
But none of that is needed if you just want a container that keeps running without exiting. Just run
docker run -dit --name MY_CONTAINER MY_IMAGE:latest
and then
docker exec -it MY_CONTAINER /bin/bash
and you should be in the bash of the container, and it should not exit.
Or if the exit happens during docker-compose, use
command: bash -c "MY_COMMAND --wait"
At the end of your start shell script, append a line such as:
tail -f /dev/null or /bin/bash
This keeps a foreground process alive after your script finishes, so the docker container does not shut down. Don't forget to give start.sh execute permission with chmod +x.
Here is a demo:
#!/bin/bash
cp /root/supervisor/${RUN_SERVICE}.ini /etc/supervisor/conf.d/
sleep 1
service supervisor start
/bin/bash

Running a script inside a docker container using shell script

I am trying to create a shell script for setting up a docker container. My script file looks like:
#!/bin/bash
docker run -t -i -p 5902:5902 --name "mycontainer" --privileged myImage:new /bin/bash
Running this script file will run the container in a newly invoked bash.
Now I need to run a script file (test.sh) which is already inside the container, from the above shell script (e.g. cd /path/to/test.sh && ./test.sh).
How to do that?
You can run a command in a running container using docker exec [OPTIONS] CONTAINER COMMAND [ARG...]:
docker exec mycontainer /path/to/test.sh
And to run from a bash session:
docker exec -it mycontainer /bin/bash
From there you can run your script.
Assuming that your docker container is up and running, you can run commands as:
docker exec mycontainer /bin/sh -c "cmd1;cmd2;...;cmdn"
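For the test.sh from the question, that could look like this (assuming the script lives in /path/to inside your container):
docker exec mycontainer /bin/sh -c "cd /path/to && ./test.sh"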
I was searching for an answer to this same question and found that ENTRYPOINT in the Dockerfile was the solution for me.
Dockerfile
...
ENTRYPOINT /my-script.sh ; /my-script2.sh ; /bin/bash
Now the scripts are executed when I start the container, and I get the bash prompt after the scripts have been executed.
In case you don't want (or have) a running container, you can call your script directly with the run command.
Remove the interactive tty arguments -i -t and use this:
$ docker run ubuntu:bionic /bin/bash /path/to/script.sh
This will (didn't test) also work for other scripts:
$ docker run ubuntu:bionic /usr/bin/python /path/to/script.py
This command worked for me
cat local_file.sh | docker exec -i container_name bash
You could also mount a local directory into your docker image and source the script in your .bashrc. Don't forget that the script has to consist of functions unless you want it to execute on every new shell. (This is outdated; see the update notice.)
I'm using this solution to be able to update the script outside of the docker instance. This way I don't have to rerun the image when changes occur; I just open a new shell. (Got rid of reopening a shell; see the update notice.)
Here is how you bind your current directory:
docker run -it -v $PWD:/scripts $my_docker_build /bin/bash
Now your current directory is bound to /scripts of your docker instance.
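Inside that shell you can then source any mounted script by hand (setup.sh is a hypothetical file name):
. /scripts/setup.sh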
(Outdated)
To save your .bashrc changes commit your working image with this command:
docker commit $container_id $my_docker_build
Update
To solve the issue to open up a new shell for every change I now do the following:
In the Dockerfile itself I add RUN echo "/scripts/bashrc" > /root/.bashrc. Inside that bashrc I export the scripts directory to the path. The scripts directory now contains multiple files instead of one. Now I can directly call all scripts without having to open a sub shell on every change.
BTW you can define the history file outside of your container too. This way it's not necessary to commit on a bash change anymore.
Thomio's answer is helpful, but it expects the script to exist inside the image. If you have a one-off script that you want to run/test inside a container (from the command line or to be useful in a script), then you can use
$ docker run ubuntu:bionic /bin/bash -c '
echo "Hello there"
echo "this could be a long script"
'
Have a look at entrypoints too; you will be able to use them together with multiple CMDs.
https://docs.docker.com/engine/reference/builder/#/entrypoint
If you want to run the same command on multiple instances you can do this :
for i in c1 dm1 dm2 ds1 ds2 gtm_m gtm_sl; do docker exec -it $i /bin/bash -c "service sshd start"; done
This is old, and I don't have enough reputation points to comment. Still, I guess it is worth sharing how one can generalize Marvin's idea to allow parameters.
docker exec -i mycontainer bash -s arg1 arg2 arg3 < mylocal.sh
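With bash -s, the arguments after it become the positional parameters of the script read from stdin, so a hypothetical mylocal.sh could use them like this:
#!/bin/bash
# $1, $2, $3 arrive from: docker exec -i mycontainer bash -s arg1 arg2 arg3 < mylocal.sh
echo "first: $1, second: $2, third: $3"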
