Gracefully stop Phusion Passenger running on apache - ruby

I have a docker container with Apache running in the foreground. On stopping the docker container, a SIGTERM is sent to all the child processes, which is Apache in our case.
Now, the problem I am facing is how to gracefully shut down Apache on receiving the SIGTERM signal.
Apache normally terminates the current requests immediately, which is the main cause of the problem. Somehow, I need to translate the SIGTERM signal into SIGWINCH, which would gracefully shut down the server.
I was thinking of writing some kind of wrapper script, but couldn't figure out how to start.
Any suggestions in this regard would be highly appreciated!
Thanks.

The Apache inside the container can be stopped gracefully by issuing the command below (change the apachectl path if needed):
docker exec -it <container id / name> /usr/local/apache2/bin/apachectl -k graceful-stop
And to your comment, if you want to see the Apache log in case it is not running in the foreground:
docker exec -it <container id / name> tail -f /usr/local/apache2/logs/error_log
UPDATE: Based on the comments.
From the Docker documentation, you can specify a timeout when stopping a container. By default, it waits only 10 seconds before killing the container.
To stop container with different timeout:
docker stop -t <time in seconds> <container id/ name>
I believe that increasing the timeout when stopping might help in your case.
UPDATE 2: Sending a custom signal, SIGWINCH in your case. Please refer here for more details.
docker kill -s SIGWINCH <apache container id / name>
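Alternatively, if you want a plain docker stop to trigger the graceful shutdown, a small wrapper entrypoint can trap SIGTERM and forward SIGWINCH to Apache. A minimal sketch, assuming the httpd binary lives at the path used above (adjust for your image), run as the container's CMD/ENTRYPOINT:
#!/bin/sh
# Start Apache in the foreground as a child of this script.
/usr/local/apache2/bin/httpd -DFOREGROUND &
APACHE_PID=$!
# Translate docker stop's SIGTERM into Apache's graceful-stop signal.
trap 'kill -WINCH "$APACHE_PID"' TERM
# The first wait returns when the trap fires; the second waits for
# Apache to finish draining in-flight requests before the script exits.
wait "$APACHE_PID"
wait "$APACHE_PID"
Combine this with a longer docker stop timeout (see above) so long-running requests have time to finish.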
UPDATE 3:
There are helpful resources on signal trapping:
https://medium.com/@gchudnov/trapping-signals-in-docker-containers-7a57fdda7d86#.qp68kskwd
http://www.techbar.me/stopping-docker-containers-gracefully/
Hope these are helpful.
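One more option worth noting: if your Docker version is new enough (STOPSIGNAL was introduced around Docker 1.9), the Dockerfile can declare the stop signal directly, so docker stop sends SIGWINCH instead of SIGTERM:
STOPSIGNAL SIGWINCH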

Related

container restarting again and again and unable to stop / remove / kill

I have a problem. When I check the list of running containers with the command:
docker ps
it shows me a running container with its id and name. I killed it with docker kill jenkins-master.1.vvafqnuu97itpn9clqgyqgqe7
After a few seconds it was started again automatically with a new container id.
I tried another command to remove it: docker container rm jenkins-master.1.vvafqnuu97itpn9clqgyqgqe7
It was removed and then started again with another container id after a few seconds.
I really can't tell what's going on...
I stopped the container first and then removed it. When I checked with docker ps after removing it, no containers were listed, and after a few seconds a container was running with some other id... This surprised me.
The container is managed by swarm mode. Swarm mode sees the difference between the current state and the target state and creates a new container to correct the difference. Try:
docker service ls
docker service rm jenkins-master
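If you are not sure which service keeps recreating the container, its swarm task labels will tell you, e.g. (this relies on the standard swarm-mode container labels; substitute your own container name):
docker inspect --format '{{ index .Config.Labels "com.docker.swarm.service.name" }}' jenkins-master.1.vvafqnuu97itpn9clqgyqgqe7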

How to determine why SIGTERM was sent to a process running inside a docker container on Mesos?

I have a docker container that I can execute fine locally, yet when run on a Mesos cluster I get SIGTERMs:
/usr/my_script.sh: line 57: 310 Killed xsltproc sort.xsl ${2} > ${2}_bat
W0703 09:09:54.465442 5074 logging.cpp:91] RAW: Received signal SIGTERM from process 2262 of user 0; exiting
I don't understand where this problem is coming from or how best to debug it. How can I find out what's killing my container?
I tried increasing the RAM available to the container to over 4 GB, yet to no avail. Furthermore, according to /usr/bin/time -v xsltproc sort.xsl offending_file.xml > sortedFile.xml the process should only consume 1 GB of RAM.
I also tried googling the error output W0703 and 5074 logging.cpp:91, to no avail. It also raises the question of why the container has no problem executing the command when run locally.
I had this same issue. I was running a docker container on Chronos and left the "command" field blank, assuming it would execute the Dockerfile's CMD when not overridden. Explicitly copying the command into the Mesos job configuration fixed the issue for me.
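For illustration, a Chronos job definition with the command spelled out explicitly might look roughly like this (field names follow Chronos's JSON job format as I recall; the image, schedule, resources and command are placeholders):
{
  "name": "xslt-sort-job",
  "schedule": "R/2017-01-01T00:00:00Z/PT24H",
  "container": {
    "type": "DOCKER",
    "image": "myregistry/myimage:latest"
  },
  "cpus": 0.5,
  "mem": 4096,
  "command": "/usr/my_script.sh arg1 input.xml"
}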

Running a docker image with cron

I am using an image from Docker Hub that uses cron to perform some actions at an interval. I have registered and pushed it as described in the documentation as a worker process (not a web process). It also requires several environment variables.
I've run it from the command line, e.g. docker run -t -e E_VAR1=VAL1 registry.heroku.com/image_name/worker, and it worked for a few days, then suddenly stopped and I had to run the command again.
Questions:
Is this the correct way to run a docker image (as a worker process) on Heroku?
Why might it stop running after a few days? Are there any logs to check?
Is there a way to restart the process automatically?
How do I properly set environment variables for the docker container on Heroku?
Thanks!
If you want this to run in the background, you should use the -d flag to run the container detached, not -t (which allocates a pseudo-TTY).
To check the logs, use docker logs [container name or id]. You can find the container's name and id using docker ps -a. That should give you an idea as to why the container stopped.
To have the container restart automatically, add the --restart always flag when you run it. Alternatively, use --restart on-failure to restart it only when it exits with a nonzero exit code.
The way you set environment variables seems fine.
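Putting those pieces together, the command from your example might become:
docker run -d --restart always -e E_VAR1=VAL1 registry.heroku.com/image_name/worker
Here -d detaches the container, --restart always brings it back if it exits or the daemon restarts, and -e sets the environment variables as before.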

Ubuntu run service in foreground

I've made a (docker) container for ddclient.
The problem is that I'm having trouble running that service in the foreground so that the docker container keeps running.
I've managed to keep the container running by adding a bash at the end of the script, but this is hackish, since the actual process it should be watching is ddclient.
Another way I found was to tail -f the log file, but if the service stops, the container will keep running instead of stopping.
Q: So is there any (easy) way to keep a service running in the foreground?
The problem with a process (any process) running in a container is signal management: you need to make sure signals like SIGTERM are properly communicated to the right process(es) in order to successfully stop/remove a container (and not leave zombie processes: see "PID 1 zombie reaping issue").
One option is at least to make your service write to a log file:
ENTRYPOINT ["/bin/sh", "-c"]
CMD yourProcess > log
That should keep it in foreground, as suggested in "How do I bring a daemon process to foreground?".
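For ddclient in particular, something along these lines may be enough to keep it in the foreground (the -foreground flag is one of ddclient's own options — verify with ddclient --help for your version):
# exec form: ddclient runs as PID 1 and receives stop signals directly
CMD ["ddclient", "-foreground", "-verbose"]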
For a service, try using phusion/baseimage-docker as a base image, which manages services (and PID 1 signal handling) properly.

Starting docker service with "sudo docker -d"

I am trying to push an image to my registry, but when I tried to do:
sudo docker push myreg:5000/image
I got an error telling me that I need to start the docker daemon with:
docker -d --insecure-registry myreg:5000
So I stopped the docker service and started it using the command above. Once I do that, the current shell window (ssh) is stuck with the docker output, and if I close it, the docker service stops.
I know this is an easy one, and I searched for hours and couldn't find anything.
Thank you
The problem is that when I run the command, I get all the docker output in the shell, and if I close it, the docker service stops. Usually -d should take care of that, but it won't work here.
I think there's some confusion here: the top-level -d flag (docker -d) starts docker in daemon mode, in the foreground. This is different from the docker run -d <image> flag, which means "start a container from <image>, in detached mode". What you're seeing on your screen is the daemon output / logs, waiting for connections from a docker client.
Back to your original issue;
The instructions to run docker -d --insecure-registry myreg:5000 could be clearer, but they illustrate that you should change the daemon options of your docker service to include the --insecure-registry myreg:5000 option.
Depending on the process manager your system uses (e.g., upstart or systemd), this means you'll have to edit the /etc/default/docker file (see the documentation), or add a "drop-in" file to override the default systemd service options; see SystemD custom daemon options
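As an illustration, a systemd drop-in for this might look like the following (the path follows the usual drop-in convention; mirror the ExecStart line of your distribution's unit file):
# /etc/systemd/system/docker.service.d/insecure-registry.conf
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// --insecure-registry myreg:5000
Then reload and restart: sudo systemctl daemon-reload && sudo systemctl restart docker. On upstart systems, adding DOCKER_OPTS="--insecure-registry myreg:5000" to /etc/default/docker and restarting the service achieves the same.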
Some notes;
The top-level -d option is deprecated in docker 1.8 in favor of the new docker daemon command
Using --insecure-registry is discouraged for security reasons as it allows both unencrypted and untrustworthy communication with the registry. It's preferable to add your CA to the trusted list of your system.
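If you go the CA route, Docker looks for registry certificates under /etc/docker/certs.d (per the registry documentation; swap in your registry host and port):
sudo mkdir -p /etc/docker/certs.d/myreg:5000
sudo cp ca.crt /etc/docker/certs.d/myreg:5000/ca.crt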
