Docker quits after script has been executed - bash

I have a docker container starting with the command:
CMD ["/bin/bash", "/usr/bin/gen_new_key.sh"]
The script looks like:
#!/bin/bash
/usr/bin/generate_signing_key -k xxxx -r eu-west-2 > /usr/local/nginx/s3_signature_key.txt
{ read -r val1
read -r val2
sed -i "s!AWS_SIGNING_KEY!'$val1'!;
s!AWS_KEY_SCOPE!'$val2'!;
" /etc/nginx/nginx.conf
} < /usr/local/nginx/s3_signature_key.txt
if [ -z "$(pgrep nginx)" ]
then
nginx -c /etc/nginx/nginx.conf
else
nginx -s reload
fi
The script works by itself, as I can see all the data in the docker layer under /var/lib/docker.
It is intended to be run by cron every 5 days, since the AWS signature key generated in the first line is only valid for 7 days. How can I prevent Docker from quitting after the script has finished, and keep the container running?

You want a container that is always ON with nginx, and to run that script every 5 days.
First, you can just run nginx using:
CMD ["nginx", "-g", "daemon off;"]
This way, the container is always ON with nginx running.
Then just run your script as a usual script via cron:
chmod +x script.sh
0 0 */5 * * script.sh
EDIT: since the script must also run the first time the container starts:
1) One solution (the pretty one) is to load a valid AWS signing key manually the first time. After that, the script will update the signing key automatically (using the cron solution presented above).
2) The other solution is to run a docker entrypoint file (which is your script):
# Your script
COPY docker-entrypoint.sh /usr/local/bin/
RUN ["chmod", "+x", "/usr/local/bin/docker-entrypoint.sh"]
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
# Define default command.
CMD ["/bin/bash"]
On your script:
service nginx start
echo "Nginx is running."
#This line will prevent the container from turning off
exec "$#";
(For more information about the reason for, and use of, the exec "$@" line, see the Docker documentation on ENTRYPOINT.)
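Putting the pieces together, the entrypoint could look something like the sketch below. The cron package name and the crontab registration are assumptions about your base image, and it reuses the gen_new_key.sh path from the question; treat it as a starting point, not a drop-in file:
#!/bin/bash
# docker-entrypoint.sh (sketch)

# generate the first signing key and patch nginx.conf
/usr/bin/gen_new_key.sh

# schedule the refresh every 5 days (assumes a cron daemon is installed in the image)
echo "0 0 */5 * * /usr/bin/gen_new_key.sh" | crontab -
cron || crond   # "cron" on Debian/Ubuntu images, "crond" on Alpine/CentOS

# hand over to the CMD (e.g. nginx -g 'daemon off;'), which keeps the container alive
exec "$@"
Note that if nginx is started by the CMD, the nginx -c branch in gen_new_key.sh becomes unnecessary; later cron runs will hit the nginx -s reload branch instead.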

Related

Docker Container Start Command Did Not Get .bashrc variables

I'm using Docker to execute a command when starting the container, but it seems the environment variable is not picked up from the .bashrc file. Please give me some advice, thanks.
In the Dockerfile I add this to .bashrc:
echo "export PYTHONPATH=$PYTHONPATH:/models/research:/models/research/slim" >> /root/.bashrc
docker-compose.yml file with:
command: ["python2", "/usr/bin/supervisord", "--nodaemon", "--configuration", "/etc/supervisor/supervisord.conf"]
PS: if I exec echo $PYTHONPATH or just exec python2 /usr/bin/supervisord -c /etc/supervisor/supervisor.conf from inside the container, there are no issues.
The System is Ubuntu 16.04
supervisor config:
[program:mosquitto-subscrible]
process_name=%(program_name)s_%(process_num)02d
command=python3 detection.py start_mosquitto_subscrible
autostart=true
autorestart=true
user=root
numprocs=1
directory=/var/www/html/detection
redirect_stderr=true
stdout_logfile=/var/www/html/detection/logs/detection.log
docker-compose.yml
version: '3'
services:
  tensorflow:
    container_name: object-detection
    build:
      context: ./tensorflow
      dockerfile: Dockerfile
    # environment:
    #   - PYTHONPATH=:/models/research:/models/research/slim
    volumes:
      - ./www:/var/www/html:cached
      - ./tensorflow/supervisor:/etc/supervisor/conf.d
    command: ['tail', '-f', '/dev/null']
    # command: ["python2", "-c", "/usr/bin/supervisord", "--nodaemon", "--configuration", "/etc/supervisor/supervisord.conf"]
In conclusion, I added a command in the Dockerfile, echo "export PYTHONPATH=$PYTHONPATH:/models/research:/models/research/slim" >> /root/.bashrc, so that /models/research can be found by Python.
There is a Python module at /models/research/object_detection.
With my supervisor setup, the command python3 detection.py start_mosquitto_subscrible can't find the object_detection module if I start supervisord from the docker-compose command instead of exec-ing it inside the container.
supervisord needs python2 to start; my code needs python3.
~/.bashrc won't run until the shell is opened interactively; that's why there are no issues when you do docker exec, which is interactive. See the first few lines of the bashrc file:
# If not running interactively, don't do anything
case $- in
*i*) ;;
*) return;;
esac
You need to comment out these lines.
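A quick way to see the difference (assuming the container is named object-detection and runs as root, as in the compose file and supervisor config above):
# non-interactive shell: ~/.bashrc is not sourced, so the variable is empty
docker exec object-detection bash -c 'echo PYTHONPATH=$PYTHONPATH'
# interactive shell: /root/.bashrc is sourced, so the value shows up
docker exec -it object-detection bash
echo $PYTHONPATH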
If you just need one environment variable, it's better to take the value of PYTHONPATH from your container and add the complete variable to your docker-compose.yml file.
command: ["python2", "/usr/bin/supervisord", "--nodaemon", "--configuration", "/etc/supervisor/supervisord.conf"]
The command you've provided is using the exec syntax. See the documentation on CMD (the same applies to RUN and ENTRYPOINT):
If you use the shell form of the CMD, then the <command> will execute
in /bin/sh -c:
FROM ubuntu
CMD echo "This is a test." | wc -
If you want to run your <command> without a shell then you must
express the command as a JSON array and give the full path to the
executable. This array form is the preferred format of CMD. Any
additional parameters must be individually expressed as strings in the
array:
FROM ubuntu
CMD ["/usr/bin/wc","--help"]
In your case, you want a bash shell to process the .bashrc file, which means you need something along the lines of:
command: ["/bin/bash", "-c", "python2 /usr/bin/supervisord --nodaemon --configuration /etc/supervisor/supervisord.conf"]
Edit: with the /root/.bashrc in ubuntu:16.04, you'll see the following at the top of the file:
# If not running interactively, don't do anything
[ -z "$PS1" ] && return
You can modify the file before this line with this sed command:
sed -i '4s;^;export PYTHONPATH=$PYTHONPATH:/models/research:/models/research/slim\n;' /root/.bashrc
I'd consider placing this in a script used to start the container instead of hacking the .bashrc, e.g. a start.sh:
#!/bin/sh
export PYTHONPATH=$PYTHONPATH:/models/research:/models/research/slim
exec python2 /usr/bin/supervisord --nodaemon --configuration /etc/supervisor/supervisord.conf
And then add that to your image with:
COPY start.sh /
RUN chmod 755 /start.sh # if your build server doesn't have this permission set
CMD [ "/start.sh" ]
Try starting docker compose with the command:
PYTHONPATH="$PYTHONPATH:/models/research:/models/research/slim" docker-compose up -d
(note that this only has an effect if docker-compose.yml actually passes the variable into the container, for example via an environment: entry)

Docker is not running my entire entrypoint.sh script

I have created a docker container to stand up Elasticsearch. Elasticsearch is being started and managed by supervisor which is also installed on my docker container. I have created an entrypoint.sh script and added the following to the end of my Dockerfile
ENTRYPOINT ["/usr/local/startup/entrypoint.sh"]
My entrypoint.sh script looks as follows:
#!/bin/bash -x
# Start Supervisor if not already running
if ! ps aux | grep -q "[s]upervisor"; then
echo "Starting supervisor service"
exec /usr/bin/supervisord -nc /etc/supervisor/supervisord.conf
else
echo "Supervisor is currently running"
fi
echo "creating /.es_created"
touch /.es_created
exec "$#"
When I start my docker container supervisor starts and in turn will successfully start elasticsearch. The problem is that it never executes the last bit of the script creating the .es_created file. It seems like once the
exec /usr/bin/supervisord -nc /etc/supervisor/supervisord.conf
command is executed, it just stops there. I added -x to the #!/bin/bash so I could call docker logs on the container and it confirms that it never calls the last echo and touch commands. I feel like I may be missing something about entrypoint scripts which is why this is happening, but ultimately I want to be able to execute some commands after elasticsearch has started so I can configure a proper index and insert some data.
Your guess
It seems like once the
exec /usr/bin/supervisord -nc /etc/supervisor/supervisord.conf
command is executed, it just stops there.
is correct, because the exec builtin of bash has the following semantics: the specified program is executed, and it replaces the parent shell process (it is an exec system call).
So your question is actually not a Docker issue; it is rather related to Bash. For more details on the exec shell builtin, you could for example take a look at this askubuntu question, or read the corresponding section of the bash reference manual.
To sum up, you should try to just write
/usr/bin/supervisord -nc /etc/supervisor/supervisord.conf
If that command indeed runs in the background, it should be OK. Otherwise, you could of course append a &:
/usr/bin/supervisord -nc /etc/supervisor/supervisord.conf &
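An alternative that keeps the exec (so supervisord stays PID 1 and receives signals cleanly) is to reorder the script so the exec line comes last; a minimal sketch:
#!/bin/bash -x
# one-off setup first
echo "creating /.es_created"
touch /.es_created

# then replace the shell with supervisord as the final, foreground process
exec /usr/bin/supervisord -nc /etc/supervisor/supervisord.conf
Any step that must run after Elasticsearch is actually up would still need to be backgrounded or handled by supervisor itself.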

Set ENV variable in container is not working, is everything under "/usr/local/bin" executed on container run?

I have the following piece of definition in a Dockerfile:
# This aims to be the default value if -e is not present on the run command
ENV HOST_IP=127.0.0.1
...
COPY /container-files/etc/php.d/zz-php.ini /etc/php5/mods-available/zz-php.ini
RUN ln -s /etc/php5/mods-available/zz-php.ini /etc/php5/apache2/conf.d/zz-php.ini
COPY /container-files/init-scripts/setup_xdebug_ip.sh /usr/local/bin/setup_xdebug_ip.sh
RUN chmod +x /usr/local/bin/setup_xdebug_ip.sh
CMD ["/usr/local/bin/setup_xdebug_ip.sh", "/usr/local/bin/setup_php_settings.sh"]
This is the relevant piece of definition at zz-php.ini:
; Xdebug
[Xdebug]
xdebug.remote_enable=true
xdebug.remote_host="192.168.3.1" => this should be overwrited by HOST_IP
xdebug.remote_port="9001"
xdebug.idekey="XDEBUG_PHPSTORM"
This is the content of the script setup_xdebug_ip.sh:
#!/usr/bin/bash
sed -i -E "s/xdebug.remote_host.*/xdebug.remote_host=$HOST_IP/" /etc/php5/apache2/conf.d/zz-php.ini
Updated the script
I have updated the script to see if that's the reason why the value isn't changed; it is still not working. See the code below:
#!/usr/bin/bash
sed -ri "s/^xdebug.remote_host\s*=.*$//g" /etc/php5/apache2/conf.d/zz-php.ini
echo "xdebug.remote_host = $HOST_IP" >> /etc/php5/apache2/conf.d/zz-php.ini
In order to build the image and run the container I follow these steps:
Build the image:
docker build -t reynierpm/dev-php55 .
Run the container:
docker run -e HOST_IP=$(hostname -I | cut -d' ' -f1) \
    --name dev-php5 \
    -it /bin/bash reynierpm/dev-php55
After the image is built and the container is running, I open a browser and point it to http://container_address/index.php (which contains phpinfo()) and I can see the value of xdebug.remote_host as 192.168.3.1 ...
Why? What is not running when the container starts? Why doesn't the value get overwritten with the value provided by -e on the run command?
UPDATE:
I've noticed that I am only copying the file and setting up its permissions, but I am not running it at all:
# Copy the script for change the xdebug.remote_host value based on HOST_IP
COPY /container-files/init-scripts/setup_xdebug_ip.sh /usr/local/bin/setup_xdebug_ip.sh
# Execute the script
RUN chmod +x /usr/local/bin/setup_xdebug_ip.sh
Could this be the issue? Is everything that I put under /usr/local/bin executed at container start? If not, that's definitely the issue, or at least I think so.
UPDATE #2:
After the suggestions from @charles-duffy I've fixed a few things, but it is still not working.
Now the Dockerfile looks like:
# This aims to be the default value if -e is not present on the run command
ENV HOST_IP=127.0.0.1
...
ADD container-files /
RUN chmod +x /usr/local/bin/setup_xdebug_ip && \
/usr/local/bin/setup_xdebug_ip && \
chmod +x /usr/local/bin/setup_php_settings && \
ln -s /etc/php5/mods-available/zz-php.ini /etc/php5/apache2/conf.d/zz-php.ini && \
ln -s /etc/php5/mods-available/zz-php-directories.ini /etc/php5/apache2/conf.d/zz-php-directories.ini && \
a2enmod rewrite
EXPOSE 80 9001
CMD ["/usr/local/bin/setup_php_settings"]
After build the image I am running the following command:
$ docker run -e HOST_IP=192.168.3.120 -p 80:80 --name php55-img-6 -it reynierpm/php5-dev-4 /bin/bash
I can see the value of xdebug.remote_host being set to 127.0.0.1, but it is not taking the value passed with -e on the run command. Why?
You're correct in that items under /usr/local/bin are not automatically executed.
The Filesystem Hierarchy Standard specifies /usr/local as a "tertiary hierarchy" with its own bin, lib, &c. subdirectories, equivalent in their intent and use to the like-named directories under / or /usr but for content installed local to the machine (in practice, this means software installed without the benefit of the local distro's packaging system).
If you want a command to be executed, you need a RUN that directly or indirectly invokes it.
As for the other matters discussed as this question has morphed, consider the following:
FROM alpine
ENV foo=bar
RUN echo $foo >/tmp/foo-value
CMD cat /tmp/foo-value; echo $foo
When invoked with:
docker run -e foo=qux
...this emits as output:
bar
qux
...because bar is the environment variable laid down by the RUN command, whereas qux is the environment variable as it exists at the CMD command's execution.
Thus, to ensure that an environment variable is honored in configuration, it must be read and applied during the CMD's execution, not during a prior RUN stage.
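Applied to the xdebug case, that means the sed has to run at container start rather than at build time; a sketch of such an entrypoint, reusing the paths from the question:
#!/bin/bash
# executed at container start, so $HOST_IP is whatever -e provided (or the ENV default)
sed -i -E "s/xdebug.remote_host.*/xdebug.remote_host=$HOST_IP/" /etc/php5/apache2/conf.d/zz-php.ini
# then hand off to the normal start script
exec /usr/local/bin/setup_php_settings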
Multiple problems with your repo:
First of all, when using CMD in a Dockerfile, the command added after the image name in docker run (here /bin/bash) will override the CMD ["/usr/local/bin/setup_php_settings"] from your Dockerfile.
Thus your setup_php_settings is never executed!
You should use ENTRYPOINT instead of CMD in your Dockerfile; a good explanation of the difference can be found in the Docker documentation on how CMD and ENTRYPOINT interact.
In conclusion, change the CMD [...] line in the Dockerfile to:
ENTRYPOINT bash -c '/usr/local/bin/setup_php_settings; bash'
then you can run your container with:
docker run -it -e HOST_IP=<your_ip_address> -e PHP_ERROR_REPORTING='E_ALL & ~E_STRICT' -p 80:80 --name dev-php5 mmi/dev-php55
No need to add /bin/bash at the end. Check out the test repo for a test setup.
Secondly, in your /usr/local/bin/setup_php_settings, you should add
a2enmod rewrite
service apache2 restart
at the end, just before
source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND
so that your new settings are applied in your web app.

docker exec is not working in cron

I have a pretty simple command which works fine standalone, as a command or in a bash script, but not when I put it in crontab:
40 05 * * * bash /root/scripts/direct.sh >> /root/cron.log
which has the following lines:
PATH=$PATH:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin
SHELL=/bin/sh PATH=/bin:/sbin:/usr/bin:/usr/sbin:/root/
# Mongo Backup
docker exec -it mongodb mongodump -d meteor -o /dump/
I tried changing the script path to /usr/bin/scripts/ with no luck.
I even tried running the command directly in cron:
26 08 * * * docker exec -it mongodb mongodump -d meteor -o /dump/ >> /root/cron.log
with no luck. Any help appreciated.
EDIT
I don't see any errors in the /root/cron.log file either.
Your docker exec command asks for a pseudo-terminal and interactive mode (the -it flags), while cron doesn't attach to any TTY.
Try changing your docker exec command to this and see if it works:
docker exec mongodb mongodump -d meteor -o /dump/
For what it's worth, I had this exact same problem. Fixing your PATH, changing permissions, and making sure you are running as the appropriate docker user are all good things, but they're not enough. It will continue failing because you are using "docker exec -it", which tells docker to use an interactive shell. Change it to "docker exec -t" and it'll work fine. There will be no log output anywhere telling you this, however. Enjoy!
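The crontab entry from the question would then become, for example (with output redirection added so errors are captured as well):
26 08 * * * docker exec -t mongodb mongodump -d meteor -o /dump/ >> /root/cron.log 2>&1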
cron debugging
1. /var/log or sendmail
As crond works as a daemon, with no way to fail visibly, execution is more important than logging. So by default, if something goes wrong, cron will send a mail to $USER@localhost reporting the script output and errors.
Have a look at /var/mail or /var/spool/mail for mails,
and at /etc/aliases to see where root's mail is sent.
2. crond and $PATH
When you run a command via cron, take care that $PATH is the user's default path and not root's default path (i.e. no */sbin or other paths reserved for superuser tools).
For this, the simplest way is to print your default path in the environment where everything runs fine:
echo $PATH
or patch your script from command line:
sed -e "2aPATH='$PATH'" -i /root/scripts/direct.sh
This will add current $PATH initializer at line 2 in your script.
Or this will wipe all other PATH= assignments from your script:
sed -e "s/PATH=[^ ]*\( \|$\)/\1/;2aPATH='$PATH'" -i /root/scripts/direct.sh
3. Force logging
Add at top of your script:
exec 1>/tmp/cronlog-$$.log
exec 2>/tmp/cronlog-$$.err
Try this:
sed -e '1a\\nexec 1>/tmp/cronlog-$$.log\nexec 2>/tmp/cronlog-$$.err' -i ~/scripts/direct.sh
The finalized script could look like:
#!/bin/bash
# uncomment two following lines to force log to /tmp
# exec 1>/tmp/cronlog-$$.log
# exec 2>/tmp/cronlog-$$.err
PATH='....' # copied from terminal console!
docker exec -it mongodb mongodump -d meteor -o /dump/
Executable flag
If you run your script by
40 05 * * * bash /root/scripts/direct.sh
no executable flag is required, but you must add it:
chmod +x ~/scripts/direct.sh
if you want to run:
40 05 * * * /root/scripts/direct.sh
1) Make sure this task is in the root user's crontab - it's probably the case, but you didn't say so explicitly.
2) cron may be unable to find bash. I would remove it and call your script directly after making it executable:
chmod 755 /root/scripts/direct.sh
and then set your crontab entry to 40 05 * * * /root/scripts/direct.sh >> /root/cron.log 2>&1
If it's still not working, then you should have some useful output in /root/cron.log
Are you sure your script is running? Add another command like touch /tmp/cronok before the docker exec call (see the sketch after this list).
Don't forget that the crontab needs a newline at the end. Use crontab -e to edit it.
Restart the cron service and check the logs (grep -i cron /var/log/syslog).
If your OS is redhat/centos/fedora, you should try with the username (root) between the frequency and the command.
Check your mails with the mail command.
Check the crontab permissions. chmod 644 /etc/crontab.
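For instance, a sketch of the instrumented script from the touch suggestion above:
#!/bin/bash
touch /tmp/cronok   # if this file shows up, cron did invoke the script
docker exec -t mongodb mongodump -d meteor -o /dump/   # -i dropped, since cron has no TTY/stdin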
Maybe you just don't want to reinvent the wheel.
Here are a few things I'd change. First, capture STDERR along with STDOUT, and remove the shell specification in cron; use #! in your script instead.
40 05 * * * /root/scripts/direct.sh &>> /root/cron.log
Next, you are setting your PATH in the reverse order, and you are missing your shebang. I have no idea why you are defining SHELL as /bin/sh when you are running bash instead of dash. Change your script to this:
#!/usr/bin/env bash
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/root
PATH=$PATH:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin
# Mongo Backup
docker exec -it mongodb mongodump -d meteor -o /dump/
See if that yields something better to work with.

Executing a shell script within docker with RUN command

New to Docker, so please bear with me.
My Dockerfile contains an ENTRYPOINT:
ENV MONGOD_START "mongod --fork --logpath /var/log/mongodb.log --logappend --smallfiles"
ENTRYPOINT ["/bin/sh", "-c", "$MONGOD_START"]
I have a shell script that adds an entry to the database through a Python script and then starts the server.
The script startApp.sh:
chmod +x /addAddress.py
python /addAddress.py $1
cd /myapp/webapp
grunt serve --force
Now, all of the run commands below are unsuccessful in executing this script.
sudo docker run -it --privileged myApp -C /bin/bash && /myApp/webapp/startApp.sh loc
sudo docker run -it --privileged myApp /myApp/webapp/startApp.sh loc
The docker log of the container is:
"about to fork child process, waiting until server is ready for connections. forked process: 7 child process started successfully, parent exiting "
Also, the startApp.sh executes fine when I open a bash prompt in docker and run it.
I am unable to figure out what I am doing wrong. Help please.
I would suggest you to create an entrypoint.sh file:
#!/bin/sh
# Initialize start DB command
# Pick from env variable MONGOD_START if it exists
# else use the default value provided in quotes
START_DB=${MONGOD_START:-"mongod --fork --logpath /var/log/mongodb.log --logappend --smallfiles"}
# This will start your DB in background
${START_DB} &
# Go to startApp directory and execute commands
chmod +x /addAddress.py
python /addAddress.py $1
cd /myapp/webapp
grunt serve --force
Then modify your Dockerfile by removing the last line and replacing it with the following 3 lines:
COPY entrypoint.sh /
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
Then rebuild your container image using
docker build -t NAME:TAG .
Now run the following command to verify that the ENTRYPOINT is /entrypoint.sh:
docker inspect NAME:TAG | less
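If you only want the entrypoint field rather than the whole JSON, docker inspect's --format flag narrows it down, e.g.:
docker inspect --format '{{json .Config.Entrypoint}}' NAME:TAG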
I guess (and I might be wrong, since I'm neither a MongoDB nor a Docker expert) that your combination of mongod --fork and /bin/sh -c is the culprit.
What you're essentially executing is this:
/bin/sh -c mongod --fork ...
which
executes a shell
this shell executes a single command and waits for it to finish
this command launches MongoDB in daemon mode
MongoDB forks into the background and the parent process exits, so the shell's single command has finished, the shell exits, and with it the container
The easiest fix is probably to just use
CMD ["mongod"]
like the official MongoDB Docker does.
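If the Python seeding and grunt serve still need to run, another option (a sketch combining the question's startApp.sh with the advice above; all paths come from the question) is to keep grunt as the foreground process:
#!/bin/sh
# start MongoDB in the background for this container only
mongod --fork --logpath /var/log/mongodb.log --logappend --smallfiles
# one-off seeding step from the question
python /addAddress.py "$1"
# grunt stays in the foreground and keeps the container running
cd /myapp/webapp
exec grunt serve --force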
