How to execute the Entrypoint of a Docker image at each "exec" command? - bash

After trying to test Dockerfiles with Dockerspec, I finally hit an issue I can't resolve properly.
The problem, I think, comes from Docker itself: if I understand its process correctly, an Entrypoint is only executed at "run", but if the container stays up and I launch an "exec" command in it, the Entrypoint is not called again.
I think this is the intended behavior.
But if the Entrypoint is a "gosu" script that is supposed to precede all my commands, it's a problem...
Example
"myImage" has this Entrypoint :
gosu 1000:1000 "$#"
If I launch: docker run -it myImage id -u
The output is "1000".
If I start a container: docker run -it myImage bash
In this container, id -u outputs "1000".
But if I run a new command in this container, it starts a new shell and does not execute the Entrypoint, so: docker exec CONTAINER_ID id -u
outputs "0", because the new shell is started as "root".
Is there a way to execute the Entrypoint each time?
Or to reuse the shell that is already open?
Or a better way to do that?
Or maybe I haven't understood something? ;)
Thanks!
EDIT
After reading the solutions proposed here, I understand that the problem is not how Docker works but how Serverspec works with it; my goal is to test a command directly as a docker run argument, but Serverspec starts a container and tests commands with docker exec.
So the best solution is to find out how to get the stdout of the docker run executed by Serverspec.
But in my personal use case, the best solution is maybe not to use gosu at all but the --user flag :)
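For instance, a quick sketch of that idea with a stock image (nothing here relies on a gosu Entrypoint):
docker run --rm --user 1000:1000 debian:stretch id -u   # prints 1000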

If your goal is to run docker exec as a specific user inside the container, you can use the --user option.
docker exec --user myuser container-name [... your command here]
If you want to run gosu every time, you can specify that as the command with docker exec:
docker exec container-name gosu 1000:1000 [your actual command here]
In my experience, the best way to encapsulate this into something easily re-usable is with a .sh script (or .cmd file on Windows).
Drop this into a file in your local folder... maybe gs, for example.
#! /bin/sh
docker exec container-name gosu 1000:1000 "$@"
Give it execute permissions with chmod +x gs and then run it with ./gs from the local folder.
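With that in place, a quick usage check (assuming the running container is the one named container-name above):
./gs id -u   # should print 1000, since gosu switches to UID/GID 1000 before running the command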

Related

Crontab can't execute script directly in container environment

I present to you the following dilemma: I execute my script manually and it works well; see the next line for example:
docker exec -ti backup_subversion sh -c "/tmp/my_script.sh"
But when I attempt to schedule the process, this line is just skipped.
I have tried executing just a touch command, and it too is ignored.
I have tried executing as root: same problem.
I have tried it in another Docker environment: same problem.
My OS is CentOS 7.
Here, for example, is the part of the script that fails:
#!/bin/bash
# Create a container.
docker run -d --name=backup_subversion \
-v /subversion/dump:/var/dump \
--net my_network my_server.domaine.com/subversion/billy:1.9
# I copy a script.
docker cp tools_subversion_dump.sh backup_subversion:/tmp
# This line is ignored when executed from crontab.
docker exec -ti backup_subversion sh -c "/tmp/tools_subversion_dump.sh"
Thank you in advance for your answers because it's a mystery to me.
It's probably because you used the -it options, which only apply to an interactive shell rather than a non-interactive one like the one used in scripts, as referenced in the question.
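A minimal sketch of the crontab-friendly variant (the same command, just without the TTY/interactive flags, since cron provides neither):
docker exec backup_subversion sh -c "/tmp/tools_subversion_dump.sh"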

Unable to get any Docker Entrypoint from script working without continuous restarts

I'm having trouble understanding or seeing some working version of using a bash script as an Entrypoint for a Docker container. I've been trying numerous things for about 5 hours now.
Even following this official Docker blog post, using a bash script as an entrypoint still doesn't work.
Dockerfile
FROM debian:stretch
COPY docker-entrypoint.sh /usr/local/bin/
RUN ln -s /usr/local/bin/docker-entrypoint.sh / # backwards compat
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["postgres"]
docker-entrypoint.sh
#!/bin/bash
set -e
if [ "$1" = 'postgres' ]; then
    chown -R postgres "$PGDATA"
    if [ -z "$(ls -A "$PGDATA")" ]; then
        gosu postgres initdb
    fi
    exec gosu postgres "$@"
fi
exec "$@"
build.sh
docker build -t test .
run.sh
docker service create \
--name test \
test
Despite many efforts, I can't seem to get a Dockerfile with a bash script as its Entrypoint that doesn't continuously restart and fail repeatedly.
My understanding is that exec "$@" was supposed to keep the container from immediately exiting, but I'm not sure whether that depends on some other process within the script failing.
I've tried using a docker-entrypoint.sh script that simply looked like this:
#!/bin/bash
exec "$#"
And since that also failed, I think that rules out something else going wrong inside the script being the cause of the failure.
What's also frustrating is there are no logs, either from docker service logs test or docker logs [container_id], and I can't seem to find anything useful in docker inspect [container_id].
I'm having trouble understanding everyone's confidence in exec "$@". I don't want to resort to using something like tail -f /dev/null or passing a command at docker run. I was hoping there would be some consistent, reliable way that a docker-entrypoint.sh script could be used to start services, something I could also use with docker run for other purposes, but even with Docker's official blog, countless questions here, and blog posts from other sites, I can't seem to get a single example to work.
I would really appreciate some insight into what I'm missing here.
"$@" is just the command line arguments. You are providing none, so it is executing a null string. That exits and will kill the container. However, the exec command will always exit the running script; it destroys the current shell and starts a new one, it doesn't keep it running.
What I think you want to do is keep calling this script in kind of a recursive way. To actually have the script call itself, the line would be:
exec $0
$0 is the name of the bash file (or function name, if in a function). In this case it would be the name of your script.
Also, I am curious about your desire not to use tail -f /dev/null. Creating a new shell over and over, as fast as the script can go, is not more performant. I am guessing you want this script to run over and over just to check your if condition.
In that case, a while(1) loop would probably work.
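A rough sketch of that idea (the loop body and the sleep interval here are placeholders, not taken from the question):
#!/bin/bash
while true; do
    # ... perform whatever periodic check or work is needed here ...
    sleep 5
done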
What you show, in principle, should work, and is one of the standard Docker patterns.
The interaction between ENTRYPOINT and CMD is pretty straightforward. If you provide both, then the main container process is whatever ENTRYPOINT (or docker run --entrypoint) specifies, and it is passed CMD (or the command at the end of docker run) as arguments. In this context, ending an entrypoint script with exec "$@" just means "replace me with the CMD as the main container process".
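For a quick illustration (using the image name test from the build script in the question), with ENTRYPOINT ["docker-entrypoint.sh"] and CMD ["postgres"]:
docker run test          # the entrypoint script receives "postgres" as "$1"
docker run test bash     # "bash" overrides CMD and becomes "$1" instead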
So, the pattern here is
Do some initial setup, like chowning a possibly-external data directory; then
exec "$#" to run whatever was passed as the command.
In your example there are a couple of things worth checking; it won't run as shown.
Whatever you provide as the ENTRYPOINT needs to obey the usual rules for executable commands: if it's a bare command, it must be in $PATH; it must have the executable bit set in its file permissions; if it's a script, its interpreter must also exist; if it's a binary, it must be statically linked or all of its shared library dependencies must be in the image already. For your script, you might need to make it executable if it isn't already:
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
The other thing with this setup is that (definitionally) if the ENTRYPOINT exits, the whole container exits, and the Bourne shell set -e directive tells the script to exit on any error. In the artifacts in the question, gosu isn't a standard part of the Debian base image, so your entrypoint will fail (and your container will exit) trying to run that command. (That won't affect the very simple case, though.)
Finally, if you run into trouble running a container under an orchestration system like Docker Swarm or Kubernetes, one of your first steps should be to run the same container, locally, in the foreground: use docker run without the -d option and see what it prints out. For example:
% docker build .
% docker run --rm c5fb7da1c7c1
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"docker-entrypoint.sh\": executable file not found in $PATH": unknown.
ERRO[0000] error waiting for container: context canceled
% chmod +x docker-entrypoint.sh
% docker build .
% docker run --rm f5a239f2758d
/usr/local/bin/docker-entrypoint.sh: line 3: exec: postgres: not found
(Using the Dockerfile and short docker-entrypoint.sh from the question, and using the final image ID from docker build . in those docker run commands.)

How do I Run Docker cmds Exactly Like in a Dockerfile

There seems to be a difference between how Docker runs commands in a Dockerfile versus running commands manually after starting a container. This seems to be due to the kind of shells you can start, a (I assume) non-interactive shell with a Dockerfile vs an interactive one when running something like docker run -it <some-img-id>.
How can I debug running commands in a Docker container so that it runs exactly like the commands are run from a Dockerfile? Would just adding /bin/bash --noprofile to the run cmd suffice? Or is there anything else different about the environment when started from a Dockerfile?
What you are experiencing is behavior caused by the shell. Most of us are used to using the bash shell, so generally we would attempt to run the commands in the fashion below.
For new container
docker run -it <imageid> bash
For existing container
docker exec -it <containerid> bash
But when we specify a command using the RUN directive inside a Dockerfile:
RUN echo Testing
Then it is equivalent to running /bin/sh -c 'echo Testing'. So you can expect certain differences, since the two shells are different.
In Docker 1.12 or higher there is a Dockerfile directive named SHELL that allows you to override the default shell:
SHELL ["/bin/bash", "-c"]
RUN echo Testing
This would make the RUN command be executed as bash -c 'echo Testing'. You can learn more about the SHELL directive here
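To reproduce a RUN step by hand under that same default shell, one option (a sketch; substitute the image id of the relevant build stage) is to invoke the command through /bin/sh -c yourself:
docker run --rm <imageid> /bin/sh -c 'echo Testing'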
Short answer 1:
If the Dockerfile doesn't use the USER and SHELL commands, then this:
docker run --entrypoint /bin/sh -u root <image> -c "cmd"
Short answer 2:
If you don't squash or compress the image after the build, Docker creates an image layer for each of the Dockerfile commands. You can see them in the output of docker build at the end of each step, after --->:
Step 2/8 : WORKDIR /usr/src/app
---> 5a5964bed25d # <== THIS IS IMAGE ID OF STEP 2
Removing intermediate container b2bc9558e499
Step 3/8 : RUN something
---> f6e90f0a06e2 # <== THIS IS IMAGE ID OF STEP 3
Removing intermediate container b2bc9558e499
Look for the image id just before the RUN step you want to debug (for example, if you want to debug step 3 above, take the step 2 image id). Then just run the command in that image:
docker run -it 5a5964bed25d cmd
Long answer 1:
When you run docker run [image] cmd, Docker in fact starts the cmd in this way:
Executes the default entrypoint of the image with the cmd as its argument. The entrypoint is stored in the image at build time by the ENTRYPOINT command in the Dockerfile. I.e., if cmd is my-app and the entrypoint is /bin/sh -c, it executes /bin/sh -c my-app.
Starts it with the default user id of the image, which is defined by the last USER command in the Dockerfile.
Starts it with the environment variables from all ENV commands in the image's Dockerfile, cumulatively.
When docker build runs a Dockerfile RUN step, it does exactly the same, only with the values present at that point (line) of the Dockerfile.
So to be exact, you have to take the values of the ENVs and of the last USER command before your RUN line, and use those in the docker run command.
Most common images have /bin/sh -c or /bin/bash -c as their entrypoint or default shell, and most likely the build operates as the root user. Therefore docker run --entrypoint /bin/bash -u root <image> -c "cmd" should be sufficient.
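To look those values up for an existing image, docker inspect can print the relevant config fields (a quick sketch; adjust the format string to taste):
docker inspect -f 'Entrypoint: {{.Config.Entrypoint}}  User: {{.Config.User}}  Env: {{.Config.Env}}' <image>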

use docker exec in bash script

I have a bash script that is supposed to execute other bash scripts, installed in different docker containers, using "docker exec". Although each command works correctly when started manually, the script stops after the execution of the first docker exec command.
Example:
#!/bin/bash
...
docker exec -it mysql_container /scripts/import_database.sh ## Script stops here...
docker exec -it web_container /scripts/copy_doc_root.sh
...
What am I missing? ;)
Thanks for your help!
David
Use docker exec -d since you neither want a terminal nor an interactive session.
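A sketch of the adjusted calls (note that -d detaches, so each script runs in the background inside its container; if they must run sequentially, simply drop the -it flags instead):
docker exec -d mysql_container /scripts/import_database.sh
docker exec -d web_container /scripts/copy_doc_root.sh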

Running a script inside a docker container using shell script

I am trying to create a shell script for setting up a docker container. My script file looks like:
#!/bin/bash
docker run -t -i -p 5902:5902 --name "mycontainer" --privileged myImage:new /bin/bash
Running this script file will run the container in a newly invoked bash.
Now I need to run a script file (test.sh) which is already inside the container, from the above shell script (e.g. cd /path/to && ./test.sh).
How to do that?
You can run a command in a running container using docker exec [OPTIONS] CONTAINER COMMAND [ARG...]:
docker exec mycontainer /path/to/test.sh
And to run from a bash session:
docker exec -it mycontainer /bin/bash
From there you can run your script.
Assuming that your docker container is up and running, you can run commands as:
docker exec mycontainer /bin/sh -c "cmd1;cmd2;...;cmdn"
I was searching for an answer to this same question and found that ENTRYPOINT in the Dockerfile was the solution for me.
Dockerfile
...
ENTRYPOINT /my-script.sh ; /my-script2.sh ; /bin/bash
Now the scripts are executed when I start the container, and I get the bash prompt after the scripts have been executed.
In case you don't want (or have) a running container, you can call your script directly with the run command.
Remove the interactive tty arguments -i -t and use this:
$ docker run ubuntu:bionic /bin/bash /path/to/script.sh
This will (didn't test) also work for other scripts:
$ docker run ubuntu:bionic /usr/bin/python /path/to/script.py
This command worked for me
cat local_file.sh | docker exec -i container_name bash
You could also mount a local directory into your docker image and source the script in your .bashrc. Don't forget the script has to consist of functions unless you want it to execute on every new shell. (This is outdated; see the update notice.)
I'm using this solution to be able to update the script outside of the docker instance. This way I don't have to rerun the image if changes occur; I just open a new shell. (Got rid of reopening a shell - see the update notice.)
Here is how you bind your current directory:
docker run -it -v $PWD:/scripts $my_docker_build /bin/bash
Now your current directory is bound to /scripts of your docker instance.
(Outdated)
To save your .bashrc changes commit your working image with this command:
docker commit $container_id $my_docker_build
Update
To solve the issue to open up a new shell for every change I now do the following:
In the Dockerfile itself I add RUN echo "source /scripts/bashrc" >> /root/.bashrc. Inside that bashrc file I export the scripts directory to the PATH. The scripts directory now contains multiple files instead of one. Now I can directly call all scripts without having to open a sub shell on every change.
BTW you can define the history file outside of your container too. This way it's not necessary to commit on a bash change anymore.
Thomio's answer is helpful, but it expects the script to exist inside the image. If you have a one-off script that you want to run/test inside a container (from the command line or to be useful in a script), then you can use
$ docker run ubuntu:bionic /bin/bash -c '
echo "Hello there"
echo "this could be a long script"
'
Have a look at entrypoints too. You will be able to use multiple CMDs.
https://docs.docker.com/engine/reference/builder/#/entrypoint
If you want to run the same command on multiple instances you can do this :
for i in c1 dm1 dm2 ds1 ds2 gtm_m gtm_sl; do docker exec -it $i /bin/bash -c "service sshd start"; done
This is old, and I don't have enough reputation points to comment. Still, I guess it is worth sharing how one can generalize Marvin's idea to allow parameters.
docker exec -i mycontainer bash -s arg1 arg2 arg3 < mylocal.sh
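For reference, the values after -s arrive inside the script as ordinary positional parameters. A hypothetical mylocal.sh, just to show where they land:
#!/bin/bash
echo "first: $1, second: $2, third: $3"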
