Stopping Apache2 is much slower when overriding ENTRYPOINT in Docker - performance

I'm containerizing a PHP application and would like to modify the Apache configuration based on environment variables. This is done in a script overriding the default ENTRYPOINT:
FROM php:7.2-apache
# ...
COPY prepare-docker-configs.sh .
RUN chmod +x prepare-docker-configs.sh
ENTRYPOINT ./prepare-docker-configs.sh
With this override, Apache doesn't start. apache2-foreground seems to be the missing command, so I run it at the end of prepare-docker-configs.sh:
#!/bin/bash
# ... Some config substitution
apache2-foreground
Now Apache starts and everything works as expected, but I noticed that stopping the container is much slower than before. I ran time docker-compose down for both setups:
Without my custom overridden ENTRYPOINT
real 0m2,080s
user 0m0,449s
sys 0m0,064s
With my custom script as ENTRYPOINT
real 0m12,247s
user 0m0,491s
sys 0m0,067s
So it takes about 10 seconds longer. Especially during development, where a lot of testing is done, this would waste a lot of time in total.
Why is my custom ENTRYPOINT so much slower on stopping and how could it be fixed?
I tried adding STOPSIGNAL SIGWINCH from the original Dockerfile and also running docker-php-entrypoint, but neither helps.
The docker-compose.yml file is nothing special. It just defines the service and overrides the default network because of internal conflicts:
version: '2'
services:
  app:
    build:
      context: .
      args:
        http_proxy: ${http_proxy}
    env_file: docker.env
    ports:
      - 80:80
networks:
  default:
    driver: bridge
    ipam:
      config:
        - subnet: 10.10.8.0/28
What doesn't work
Resource issues
I'm running this on my Ubuntu workstation with an SSD, a quad-core i7 and 32GB RAM. It isn't running anything large and the load is quite low, so a resource issue is very unlikely. The performance issue is also reproducible: on another Ubuntu machine with a Ryzen 5 3600 and 48GB of memory it took 11 seconds with the overridden ENTRYPOINT, and I got the same result on a Debian machine with a much slower i3.
Calling the original ENTRYPOINT
In my script, I call docker-php-entrypoint at the end, which executes the original entrypoint script from the PHP image. It doesn't start Apache successfully; I had to call apache2-foreground instead.
Starting the Apache process with the CMD statement
I added a CMD directive to my Dockerfile
ENTRYPOINT ./prepare-docker-configs.sh
CMD apache2-foreground
and an exec statement at the end of prepare-docker-configs.sh, assuming that the CMD entry gets passed through as arguments:
set -x
exec "$#"
But the container exited because nothing was passed
app_1 | + set -x
app_1 | + exec
test_app_1 exited with code 0
I tested passing the file directly
exec apache2-foreground
Now Apache is started, but it still takes 10+ seconds to stop.

A Docker container runs a single process; when you declare an ENTRYPOINT in the Dockerfile, that is the process. When you docker stop a container, it sends SIGTERM to that process (only), and if it doesn't stop itself within 10 seconds, it sends SIGKILL to forcibly kill it off. Since the container process has process ID 1, there are also some special conditions on signal handling.
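As a quick check (using the test_app_1 container name from the logs above), you can time the stop on its own and look at the exit code afterwards; 137 (128 + SIGKILL) means the process never reacted to the stop signal and was force-killed after the timeout:
time docker stop test_app_1
docker inspect -f '{{.State.ExitCode}}' test_app_1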
In your case, the bash instance running the entrypoint script is the root process, and it's running apache2-foreground as a child process. You can use the Bourne shell exec command to replace the shell with the process you're trying to run; then apache2-foreground becomes the main container process instead, and the docker stop signals go straight to that process.
A typical pattern for entrypoint scripts is to honor the Docker "command" part. This gets passed to the entrypoint as additional arguments, so your entrypoint script typically looks like
#!/bin/sh
# ... Some config substitution
exec "$#"
and then in your Dockerfile you need to provide the default command to run
# NOTE! ENTRYPOINT must be JSON-array syntax for this to work
ENTRYPOINT ["./prepare-docker-configs.sh"]
CMD apache2-foreground
Since the entrypoint script still execs the command, it replaces the shell command wrapper as process 1 and will receive the docker stop signals.
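If you want to verify that the exec worked, check what is actually running as process 1 inside the container (a sketch; ps isn't necessarily installed in the php:7.2-apache image, so reading /proc directly is a safer bet):
docker exec <container_id> cat /proc/1/comm
# should print apache2, not bash or sh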

Related

Bash script set as my container entrypoint executes 2x slower than if it is executed from within the container itself

I have been using Docker for quite some time and I have now stumbled upon a very strange issue I cannot find the explanation for.
If I run a container from the Dockerfile described below (which has the entrypoint set to /opt/discourse/entrypoint.sh), it has an execution time of +- 90 seconds. Now the strange thing is that if I enter this container (using docker exec) and execute the same bash script manually, it has an execution time of +- 45 seconds.
I have tried to see if this is caused by some caching or anything, so I went ahead and let the entrypoint bash script execute the same commands twice (and timed them twice). It added up to an execution time of 180 seconds, meaning that simply being a second run does not make it faster.
So just setting this bash script as the Docker container entrypoint causes it to run 2x slower.
What on earth could cause a bash script to execute slower if it is set as the container entrypoint? Isn't it executed in the same container anyway under the same environment? Am I missing something about Docker container entrypoints?
Any help would greatly be appreciated!
/opt/discourse/entrypoint.sh:
#!/bin/bash
# shellcheck disable=SC1091
set -o errexit
set -o nounset
set -o pipefail
# set -o xtrace # Uncomment this line for debugging purpose
# Load libraries
. /liblog.sh
info "Compiling assets, this may take a while..."
bundle exec rake assets:precompile
info "Assets compiled!"
Dockerfile:
FROM discourse/base:release
WORKDIR /var/www/discourse
RUN bundle exec rake plugin:pull_compatible_all
# copies over entrypoint.sh script etc.
ADD rootfs /
USER root
ENTRYPOINT ["/opt/discourse/entrypoint.sh"]
CMD ["/opt/discourse/run.sh"]

Unable to get any Docker Entrypoint from script working without continuous restarts

I'm having trouble understanding or seeing some working version of using a bash script as an Entrypoint for a Docker container. I've been trying numerous things for about 5 hours now.
Even the example from this official Docker blog, using a bash script as an entrypoint, still doesn't work.
Dockerfile
FROM debian:stretch
COPY docker-entrypoint.sh /usr/local/bin/
RUN ln -s /usr/local/bin/docker-entrypoint.sh / # backwards compat
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["postgres"]
docker-entrypoint.sh
#!/bin/bash
set -e
if [ "$1" = 'postgres' ]; then
chown -R postgres "$PGDATA"
if [ -z "$(ls -A "$PGDATA")" ]; then
gosu postgres initdb
fi
exec gosu postgres "$#"
fi
exec "$#"
build.sh
docker build -t test .
run.sh
docker service create \
--name test \
test
Despite many efforts, I can't seem to get Dockerfile using an Entrypoint as a bash script that doesn't continuously restart and fail repeatedly.
My understanding is that exec "$@" was supposed to keep the container from immediately exiting, but I'm not sure if that's dependent on some other process within the script failing.
I've tried using a docker-entrypoint.sh script that simply looked like this:
#!/bin/bash
exec "$#"
And since that also failed, I think that rules out something else going wrong inside the script being the cause of the failure.
What's also frustrating is there are no logs, either from docker service logs test or docker logs [container_id], and I can't seem to find anything useful in docker inspect [container_id].
I'm having trouble understanding everyone's confidence in exec "$@". I don't want to resort to something like tail -f /dev/null or passing a command at docker run. I was hoping there would be some consistent, reliable way that a docker-entrypoint.sh script could be used to start services, one that I could also use with docker run, but even with Docker's official blog and countless questions here and blog posts from other sites, I can't seem to get a single example to work.
I would really appreciate some insight into what I'm missing here.
"$@" is just a string of the command line arguments. You are providing none, so it is executing a null string. That exits and will kill the container. However, the exec command will always exit the running script - it destroys the current shell and starts a new one, it doesn't keep it running.
What I think you want to do is keep calling this script in kind of a recursive way. To actually have the script call itself, the line would be:
exec $0
$0 is the name of the bash file (or function name, if in a function). In this case it would be the name of your script.
Also, I am curious about your desire not to use tail -f /dev/null. Creating a new shell over and over as fast as the script can go is not more performant. I am guessing you want this script to run over and over just to check your if condition.
In that case, a while(1) loop would probably work.
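For completeness, a minimal sketch of that loop idea (the condition and the sleep interval are placeholders):
#!/bin/bash
while true; do
    # ... re-check your condition here ...
    sleep 5
done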
What you show, in principle, should work, and is one of the standard Docker patterns.
The interaction between ENTRYPOINT and CMD is pretty straightforward. If you provide both, then the main container process is whatever ENTRYPOINT (or docker run --entrypoint) specifies, and it is passed CMD (or the command at the end of docker run) as arguments. In this context, ending an entrypoint script with exec "$@" just means "replace me with the CMD as the main container process".
So, the pattern here is
Do some initial setup, like chowning a possibly-external data directory; then
exec "$#" to run whatever was passed as the command.
In your example there are a couple of things worth checking; it won't run as shown.
Whatever you provide as the ENTRYPOINT needs to obey the usual rules for executable commands: if it's a bare command, it must be in $PATH; it must have the executable bit set in its file permissions; if it's a script, its interpreter must also exist; if it's a binary, it must be statically linked or all of its shared library dependencies must be in the image already. For your script you might need to make it executable if it isn't already
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
The other thing with this setup is that (definitionally) if the ENTRYPOINT exits, the whole container exits, and the Bourne shell set -e directive tells the script to exit on any error. In the artifacts in the question, gosu isn't a standard part of the debian base image, so your entrypoint will fail (and your container will exit) trying to run that command. (That won't affect the very simple case though.)
Finally, if you run into trouble running a container under an orchestration system like Docker Swarm or Kubernetes, one of your first steps should be to run the same container, locally, in the foreground: use docker run without the -d option and see what it prints out. For example:
% docker build .
% docker run --rm c5fb7da1c7c1
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"docker-entrypoint.sh\": executable file not found in $PATH": unknown.
ERRO[0000] error waiting for container: context canceled
% chmod +x docker-entrypoint.sh
% docker build .
% docker run --rm f5a239f2758d
/usr/local/bin/docker-entrypoint.sh: line 3: exec: postgres: not found
(Using the Dockerfile and short docker-entrypoint.sh from the question, and using the final image ID from docker build . in those docker run commands.)

running a bash client at docker creation to set an environment variable

From examples I've seen one can set environment variables in docker-compose.yml like so:
services:
  postgres:
    image: my_node_app
    ports:
      - 8080:8080
    environment:
      APP_PASSWORD: mypassword
...
For security reasons, my use case requires me to fetch the password from a server that we have a bash client for:
#!/bin/bash
get_credential <server> <dev-environment> <role> <key>
In the Docker documentation, I found this, which says that I can pass shell environment variable values into docker-compose. So I can run the bash client to grab the passwords in the shell that creates the docker instances. However, that requires me to have my bash client outside docker and inside my maven project.
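In other words, something like this (just a sketch, reusing the placeholder arguments from above):
# on the host, before starting compose:
# export APP_PASSWORD="$(get_credential <server> <dev-environment> <role> <key>)"
# docker-compose up -d
services:
  postgres:
    image: my_node_app
    environment:
      APP_PASSWORD: ${APP_PASSWORD}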
Another way to do this would be to run/cmd/entrypoint a bash script that can set environment variable for the docker instance. Since my docker image runs node.js, currently my Dockerfile is like this:
FROM node:4-slim
MAINTAINER myself
# ... do Dockerfile stuff
# TRIAL #1: run a bash script to set the environment variable --- UNSUCCESSFUL!
COPY set_en_var.sh /
RUN chmod +x /set_en_var.sh
RUN /bin/bash /set_en_var.sh
# original entry point
#ENTRYPOINT ["node", "mynodeapp.js", "configuration.js"]
# TRIAL #2: use a bash script as entrypoint that sets
# the environment variable and runs my node app . --- UNSUCCESSFUL TOO!
ENTRYPOINT ["/entrypoint.sh"]
Here is the code for entrypoint.sh:
. mybashclient.sh
cred_str=$(get_credential <server> <dev-environment> <role> <key>)
export APP_PASSWORD=( $cred_str )
# run the original entrypoint command
node mynodeapp.js configuration.js
And here is code for my set_en_var.sh:
. mybashclient.sh
cred_str=$(get_credential <server> <dev-environment> <role> <key>)
export APP_PASSWORD=( $cred_str )
So 2 questions:
Which is a better choice, having my bash client for password live inside docker or outside docker?
If I were to have it inside docker, how can I use cmd/run/entrypoint to achieve this?
Which is a better choice, having my bash client for password live inside docker or outside docker?
Always have it inside. You don't want dependencies on the host OS; you want to avoid that situation as much as possible.
If I were to have it inside docker, how can I use cmd/run/entrypoint to achieve this?
Consider the below line of code you used
RUN /bin/bash /set_en_var.sh
This won't work at all, because it makes no lasting change to the image. You just run a bash session that sets some environment variables, then that bash exits and nothing in the image gets changed. A Dockerfile build only keeps the changes each command makes to the filesystem, and in your case, apart from that bash session, nothing changes.
Next, doing this at build time is not justified either. If you build the image with the environment variables baked into it, you defeat the purpose of having a command that fetches the latest credentials. Suppose you change the password: you would then have to rebuild the image (if it had worked at all).
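To make that concrete (the value is only a placeholder):
RUN export APP_PASSWORD=dummy   # has no effect on the final image: the shell exits when this step ends
ENV APP_PASSWORD=dummy          # does persist in the image, but is fixed at build time, which defeats the purpose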
Now your entrypoint.sh approach is the right one and it should work. You should just check what is going wrong with it. Also echo cred_str during testing to make sure you are getting the right credential details back from the command.
Lastly, you should change the line
node mynodeapp.js configuration.js
to
exec node mynodeapp.js configuration.js
This makes sure that your node process becomes PID 1.
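Putting those pieces together, entrypoint.sh would look roughly like this (a sketch; get_credential and its placeholder arguments come straight from the question):
#!/bin/bash
. mybashclient.sh
cred_str=$(get_credential <server> <dev-environment> <role> <key>)
echo "cred_str: $cred_str"   # temporary, for testing only
export APP_PASSWORD="$cred_str"
# exec so that the node process becomes PID 1
exec node mynodeapp.js configuration.js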

How do you start a Docker-ubuntu container into bash?

The answers from this question do not work.
The docker container always exits before I can attach or won't accept the -t flag. I could list all of the commands I've tried, but it's a combination of start exec attach with various -it flags and /bin/bash.
How do I start an existing container into bash? Why is this so difficult? Is this an "improper" use of Docker?
EDITS:
I created the container with docker run ubuntu. The information about the container: 60b93bda690f ubuntu "/bin/bash" About an hour ago Exited (0) 50 minutes ago ecstatic_euclid
First of all, a container is not a virtual machine. A container is an isolation environment for running a process. The life-cycle of the container is bound to the process running inside it: when the process exits, the container also exits, and the isolation environment is gone. "Attaching to" or "entering" a container actually means going inside the isolation environment of the running process, so if your process has exited, your container has also exited, and there is no container left to attach to or enter. That's why docker attach and docker exec only target a running container.
Which process gets started when you docker run is configured in the Dockerfile and baked into the docker image. Take the ubuntu image as an example: if you run docker inspect ubuntu, you'll find the following config in the output:
"Cmd": ["/bin/bash"]
which means the process started when you run docker run ubuntu is /bin/bash. But you're not running it in interactive mode and don't allocate a tty to it, so the process exits immediately and the container exits with it. That's why you have no way to enter the container again.
To start a container and enter bash, just try:
docker run -it ubuntu
Then you'll be brought into the container shell. If you open another terminal and docker ps, you'll find the container is running and you can docker attach to it or docker exec -it <container_id> bash to enter it again.
You can also refer to this link for more info.
Here is a very simple Dockerfile with instructions as comments ... launch it to spin up a running container you can exec login to
FROM ubuntu:20.04
ENV TERM linux
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y
CMD ["/bin/bash"]
# ... save this file as Dockerfile then in same dir issue following
#
# docker build --tag stens_ubuntu . # creates image stens_ubuntu
#
# docker run -d stens_ubuntu sleep infinity # launches container
#
# docker ps # show running containers
#
#
# ... find CONTAINER ID from above and put into something like this
#
# docker exec -ti $( docker ps | grep stens_ubuntu | cut -d' ' -f1 ) bash # login to running container
# docker exec -ti 3cea1993ed28 bash # login to running container using sample containerId
#
A container will exit normally when it has no work to do ... if you give it no work it will exit immediately upon launch for this reason ... typically the last command of your Dockerfile is the execution of some flavor of a server which stays alive due to an internal event loop and in so doing keeps alive its enclosing container ... short of that you can mention a server executable which has been installed into the container as the final parameter of your call to
docker run -d my-image-name my-server-executable

Reuse inherited image's CMD or ENTRYPOINT

How can I include my own shell script CMD on container start/restart/attach, without removing the CMD used by an inherited image?
I am using this, which does execute my script fine, but appears to overwrite the PHP CMD:
FROM php
COPY start.sh /usr/local/bin
CMD ["/usr/local/bin/start.sh"]
What should I do differently? I am avoiding the prospect of copy/pasting the ENTRYPOINT or CMD of the parent image, and maybe that's not a good approach.
As mentioned in the comments, there's no built-in solution to this. From the Dockerfile, you can't see the value of the current CMD or ENTRYPOINT. Having a run-parts solution is nice if you control the upstream base image and include this code there, allowing downstream components to make their changes. But with docker there's one inherent issue that will cause problems with this: containers should only run a single command, and that command needs to run in the foreground. So if the upstream image's command kicks off first, it would stay running without giving your later steps a chance to run, and you're left with the complexity of ordering the commands so that a single command does eventually run without exiting.
My personal preference is a much simpler and hardcoded option: add my own command or entrypoint, and make the last step of my command exec the upstream command. You will still need to manually identify the script name to call from the upstream Dockerfile. But now in your start.sh, you would have:
#!/bin/sh
# run various pieces of initialization code here
# ...
# kick off the upstream command:
exec /upstream-entrypoint.sh "$@"
By using an exec call, you transfer pid 1 to the upstream entrypoint so that signals get handled correctly. And the trailing "$@" passes through any command line arguments. You can use set to adjust the value of "$@" if there are some args you want to process and extract in your own start.sh script.
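A corresponding Dockerfile would be along these lines (a sketch; /upstream-entrypoint.sh and the CMD stand in for whatever the parent image actually defines, which you can look up with docker inspect or in its Dockerfile; for the php image the entrypoint is docker-php-entrypoint and the CLI variant's default CMD is php -a):
FROM php
COPY start.sh /usr/local/bin/start.sh
RUN chmod +x /usr/local/bin/start.sh
ENTRYPOINT ["/usr/local/bin/start.sh"]
# repeat the parent image's CMD so that "$@" receives it
CMD ["php", "-a"]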
If the base image is not yours, you unfortunately have to call the parent command manually.
If you own the parent image, you can try what the people at camptocamp suggest here.
They basically use a generic script as an entry point that calls run-parts on a directory. What that does is run all scripts in that directory in lexicographic order. So when you extend an image, you just have to put your new scripts in that same folder.
However, that means you'll have to maintain order by prefixing your scripts which could potentially get out of hand. (Imagine the parent image decides to add a new script later...).
Anyway, that could work.
Update #1
There is a long discussion on this docker compose issue about provisioning after container run. One suggestion is to wrap your docker run or compose command in a shell script and then run docker exec for your other commands.
If you'd like to use that approach, you basically keep the parent CMD as the run command and you place yours as a docker exec after your docker run.
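In script form, that wrapper might look roughly like this (image, container, and script names are illustrative):
#!/bin/sh
# start the container with the parent image's CMD/ENTRYPOINT untouched
docker run -d --name myapp my-image
# then run the extra initialization inside the running container
docker exec myapp /usr/local/bin/start.sh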
Using mysql image as an example
Do docker inspect mysql/mysql-server:5.7 and see that:
Config.Cmd="mysqld"
Config.Entrypoint="/entrypoint.sh"
which we put in bootstrap.sh (remember to chmod a+x):
#!/bin/bash
echo $HOSTNAME
echo "Start my initialization script..."
# docker inspect results used here
/entrypoint.sh mysqld
Dockerfile is now:
FROM mysql/mysql-server:5.7
# put our script inside the image
ADD bootstrap.sh /etc/bootstrap.sh
# set to run our script
ENTRYPOINT ["/bin/sh","-c"]
CMD ["/etc/bootstrap.sh"]
Build and run our new image:
docker build --rm -t sidazhou/tmp-mysql:5.7 .
docker run -it --rm sidazhou/tmp-mysql:5.7
Outputs:
6f5be7c6d587
Start my initialization script...
[Entrypoint] MySQL Docker Image 5.7.28-1.1.13
[Entrypoint] No password option specified for new database.
...
...
You'll see this has the same output as the original image:
docker run -it --rm mysql/mysql-server:5.7
[Entrypoint] MySQL Docker Image 5.7.28-1.1.13
[Entrypoint] No password option specified for new database.
...
...
