bash script not moving onto next command - bash

I have a simple bash script as follows:
#!/bin/bash
gunicorn --bind 0.0.0.0:5000 --chdir server/ run:app -w 2
supervisord -c /usr/src/server/supervisord.conf -n
The script is supposed to launch the gunicorn service and then start supervisor. However, when the script is called, only the gunicorn service is started and the supervisor service is not.
The script is being called as follows from a Dockerfile:
FROM python:3.8.1
RUN apt-get update && \
apt-get -y install netcat && \
apt-get clean
RUN apt-get install -y libpq-dev
RUN apt-get install -y redis-server
COPY . usr/src/server
WORKDIR /usr/src
RUN pip install -r server/requirements.txt
RUN chmod +x /usr/src/server/start.sh
CMD ["/usr/src/server/start.sh"]
I have tried using the && separator between the 2 commands as well, but that doesn't seem to make a difference.

The bash script will move on to the next command when the first command has finished running. supervisord will start when gunicorn exits.
To run Gunicorn in the background, pass the --daemon option.
#!/bin/bash
gunicorn --bind 0.0.0.0:5000 --chdir server/ run:app -w 2 --daemon
supervisord -c /usr/src/server/supervisord.conf -n
However, this doesn't make much sense to me. Since you're running Supervisor, why aren't you running Gunicorn from Supervisor? Starting daemons is Supervisor's job.
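For illustration, here is a minimal sketch of what a supervisord program entry for Gunicorn could look like, reusing the command from the question (the section name and the autostart/autorestart settings are assumptions, not part of the original setup):
[program:gunicorn]
; Supervisor keeps Gunicorn in the foreground and restarts it if it dies,
; so no --daemon flag is needed here.
command=gunicorn --bind 0.0.0.0:5000 --chdir /usr/src/server run:app -w 2
directory=/usr/src
autostart=true
autorestart=true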

Related

How to start cron plus a shell script in an Ubuntu Docker container

I am attempting to start cron automatically in an Ubuntu 20.10 (Groovy Gorilla) Docker container, thus far without success.
From previous searches (example), I've found a method to start cron using a Dockerfile, as follows:
# Install and enable cron
RUN apt-get install systemd -y
RUN apt-get install cron -y
RUN systemctl enable cron
# Copy cron file to the cron.d directory
COPY cronfile /etc/cron.d/cronfile
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/cronfile
# Apply cron job
RUN crontab /etc/cron.d/cronfile
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
CMD cron && tail -f /var/log/cron.log
However, I can't make this work with my server setup. I have another, later, CMD in my Dockerfile (similar to this):
CMD ["/usr/sbin/run-lamp.sh"]
and of course only the second CMD will be run. I have tried combining multiple commands:
CMD cron && tail -f /var/log/cron.log && /usr/sbin/run-lamp.sh
but this does not run run-lamp.sh. I also tried putting the commands inside run-lamp.sh, but nothing has resulted in cron starting. Having said that, it is very easy to start cron manually, by opening up a shell in the container and entering the following:
# cron
# crontab /etc/cron.d/cronfile
I am open to suggestions.
All the files I'm working with are available here:
https://github.com/Downes/gRSShopper
In particular:
Dockerfile:
https://github.com/Downes/gRSShopper/blob/master/Dockerfile
run-lamp.sh:
https://github.com/Downes/gRSShopper/blob/master/run-lamp.sh
cronfile:
https://github.com/Downes/gRSShopper/blob/master/cronfile
Thanks in advance.
First off, you don't need that tail -f /var/log/cron.log; it's useless in a container.
Secondly, tail -f is designed to only stop on a signal, and you never signal it, so it will not stop, and therefore the next command, run-lamp.sh, will not run.
Here's a minimal reproducer:
entrypoint.sh:
#!/bin/bash
touch /tmp/x
sleep 120
cronfile:
* * * * * touch /tmp/y
# An empty line is required at the end of this file for a valid cron file.
Dockerfile:
FROM ubuntu:20.10
RUN apt-get update
RUN apt-get install systemd -y
RUN apt-get install cron -y
RUN systemctl enable cron
# Copy cron file to the cron.d directory
COPY cronfile /etc/cron.d/cronfile
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/cronfile
# Apply cron job
RUN crontab /etc/cron.d/cronfile
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
CMD cron && tail -f /var/log/cron.log
COPY entrypoint.sh /entrypoint.sh
CMD /entrypoint.sh
Test command:
docker build -t lamp . \
&& docker rm -f lamp \
&& docker run -d --name lamp lamp \
&& echo waiting for cron... \
&& sleep 61 \
&& docker exec lamp ls /tmp \
&& docker exec lamp sh -c "ps -e | grep cron || echo no cron"
Result:
Sending build context to Docker daemon 71.17kB
Step 1/12 : FROM ubuntu:20.10
...
Successfully tagged lamp:latest
lamp
0fbe19e0583b178543ccf1d1108f72b7f3f6dffb664122621bc67d5939b66672
waiting for cron...
x
no cron
However, with this Dockerfile:
FROM ubuntu:20.10
RUN apt-get update
RUN apt-get install systemd -y
RUN apt-get install cron -y
RUN systemctl enable cron
# Copy cron file to the cron.d directory
COPY cronfile /etc/cron.d/cronfile
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/cronfile
# Apply cron job
RUN crontab /etc/cron.d/cronfile
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
COPY entrypoint.sh /entrypoint.sh
# Run everything in parallel with '&', even the useless tail command
CMD /entrypoint.sh & cron & tail -f /var/log/cron.log
Result:
Sending build context to Docker daemon 87.55kB
Step 1/11 : FROM ubuntu:20.10
...
Successfully tagged lamp:latest
lamp
99dca45fe135326ca96ea90fe21ff7ae23689a56fab5cf0c2ccd8252bc4be84a
waiting for cron...
x
y
10 ? 00:00:00 cron
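Applied to the question's own setup, the same pattern might look like the sketch below (an assumption, not tested against that repository: it treats run-lamp.sh as the long-running foreground process and uses the path from the question):
# Start cron in the background; keep run-lamp.sh in the foreground
# as the container's main process.
CMD cron & /usr/sbin/run-lamp.sh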

Installing OSSEC agent on a container. The OSSEC install script (install.sh) fails and loops infinitely when passing arguments via script

Basically I am going to have a whole bunch of Ubuntu containers that will have the OSSEC agent installed and will communicate with a main server. I want to automate the installation, so using the Docker RUN instruction in the Dockerfile I wrote a script that downloads the OSSEC tar file, unpacks it, cds into the directory, and runs the install script while passing an argument for each question of the installation phase:
Dockerfile:
From ubuntu
RUN apt-get update && apt-get install -y \
build-essential \
libmysqlclient-dev \
postgresql-common \
wget \
tar \
RUN wget -U ossec https://bintray.com/artifact/download/ossec/ossec-hids/ossec-hids-2.8.3.tar.gz
RUN tar -xvf ossec-hids-2.8.3.gz && \
rm -f ossec-hids-2.8.3.tar.gz && \
cd ossec-hids-2.8.3 && \
echo "en agent \n 192.168.1.50 y y y" | ./install.sh
When the arguments are echoed into the script, install.sh fails and loops over the second question infinitely. Note I have tried printf, an expect script, and the yes command, and have run the script inside the container, all with the same outcome.

Docker network does not work with bash entrypoint

First, we have a Docker network like so:
docker network create cdt-net
Then I have this bash script which will start a selenium server:
cd $(dirname "$0")
./node_modules/.bin/webdriver-manager update
./node_modules/.bin/webdriver-manager start
The above bash script is called by this Dockerfile:
FROM openjdk:latest
RUN apt-get update && \
apt-get -y install sudo
RUN mkdir -p /root/cdt-webdriver
WORKDIR /root/cdt-webdriver
COPY start-selenium-server.sh .
ENTRYPOINT ["/bin/bash", "/root/cdt-webdriver/start-selenium-server.sh"]
I would build it like so:
docker build -t cdt-selenium .
and then run it like so:
docker run --network=cdt-net --name cdt-selenium -d cdt-selenium
The problem I am having is that even though everything is clean with no errors, other processes in the same Docker network cannot talk to the Selenium server.
On the other hand, if I create a selenium server using a pre-existing image, like so:
docker run -d --network=cdt-net --name cdt-selenium selenium/standalone-firefox:3.4.0-chromium
then things are working as expected, and I can connect to the selenium server from other processes in the Docker network.
Anyone know what might be wrong with my bash script or Dockerfile? Perhaps my manually created Selenium server is not listening on the right host?
Here is the complete Dockerfile for reference:
FROM openjdk:latest
RUN apt-get update && \
apt-get -y install sudo
RUN sudo apt-get install -y curl
RUN sudo apt-get install -y apt-utils
RUN sudo apt-get -y update
RUN sudo apt-get -y upgrade
RUN sudo apt-get purge nodejs npm
RUN curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
RUN sudo apt-get install -y nodejs
RUN echo "before nodejs => $(which nodejs)"
RUN echo "before npm => $(which npm)"
RUN sudo ln -s `which nodejs` /usr/bin/node || echo "ignore error"
RUN mkdir -p /root/cdt-webdriver
WORKDIR /root/cdt-webdriver
COPY start-selenium-server.sh .
RUN rm -rf node_modules > /dev/null 2>&1
RUN npm init -f || echo "ignore non-zero exit code" > /dev/null 2>&1
RUN npm install webdriver-manager > /dev/null 2>&1
ENTRYPOINT ["/bin/bash", "/root/cdt-webdriver/start-selenium-server.sh"]
You should use -d only when your Docker image runs fine. Before that, use -it.
Change your webdriver-manager installation to a global install:
RUN npm install -g webdriver-manager > /dev/null 2>&1
ENTRYPOINT ["/bin/bash", "/root/cdt-webdriver/start-selenium-server.sh"]
Also change your start-selenium-server.sh to
webdriver-manager update
webdriver-manager start
And use the command below to run the container and check whether there are any issues:
docker run --network=cdt-net --name cdt-selenium -it cdt-selenium
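Once it is running, a quick way to confirm that other containers on the cdt-net network can reach the server is something like the following (port 4444 is the Selenium default and an assumption here, as is the /wd/hub/status endpoint of a 3.x standalone server):
# Run from any other container attached to cdt-net:
curl http://cdt-selenium:4444/wd/hub/status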

How to start multiple processes in the background with supervisor on Docker

With my Dockerfile I try to run 2 processes in the background (Tor and Polipo) with supervisor.
My Dockerfile looks like this:
# Pull base image.
FROM ubuntu:latest
# Upgrade system
RUN apt-get update && apt-get dist-upgrade -y --no-install-recommends && apt-get autoremove -y && apt-get clean
# Install TOR
RUN apt-get install -y --no-install-recommends tor tor-geoipdb torsocks && apt-get autoremove -y && apt-get clean
# INSTALL POLIPO
RUN apt-get update && apt-get install -y polipo
# INSTALL SUPERVISOR
RUN apt-get install -y supervisor
# Default ORPort
EXPOSE 9001
# Default DirPort
EXPOSE 9030
# Default SOCKS5 proxy port
EXPOSE 9050
# Default ControlPort
EXPOSE 9051
# Default polipo Port
EXPOSE 8123
RUN echo 'socksParentProxy = "localhost:9050"' >> /etc/polipo/config
RUN echo 'socksProxyType = socks5' >> /etc/polipo/config
RUN echo 'diskCacheRoot = ""' >> /etc/polipo/config
RUN echo 'ORPort 9001' >> /etc/tor/torrc
RUN echo 'ExitPolicy reject *:*' >> /etc/tor/torrc
ADD supervisor_tor.conf /etc/supervisor/conf.d/tor.conf
CMD /usr/bin/supervisord -n
and my supervisor_tor.conf looks like this:
[group:tor]
programs=polipo,tor
[program:polipo]
command=/usr/bin/polipo -c /etc/polipo/config
autostart=true
autorestart=true
[program:tor]
command=/usr/bin/tor
autostart=true
autorestart=true
redirect_stderr=true
Once my container is running, I see all the log I don't access on the bash.
How can I start 2 processes in the background with supervisor?
Thanks in advance.
I'm a little lost as I don't understand what "I see all the log I don't access on the bash" means.
However, it sounds like the problem is that you want your bash prompt back. If that's the case, just give the -d argument to docker run when starting your image. If you want to get another shell, just use docker exec, e.g.:
$ docker exec -it mycon bash
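For completeness, a detached run followed by a shell might look like this (the image and container names are placeholders, not taken from the question):
# Start the container in the background so the prompt comes back immediately;
# supervisord keeps running inside the container.
$ docker run -d --name mycon my-tor-image
$ docker exec -it mycon bash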

How to start multiple processes for a Docker container in a bash script

I found very strange behaviour when I build and run a Docker container. I would like to have a container with Cassandra and SSH.
In my Dockerfile I've got:
RUN echo "deb http://www.apache.org/dist/cassandra/debian 20x main" | sudo tee -a /etc/apt/sources.list
RUN echo "deb-src http://www.apache.org/dist/cassandra/debian 20x main" | sudo tee -a /etc/apt/sources.list
RUN gpg --keyserver pgp.mit.edu --recv-keys 4BD736A82B5C1B00
RUN apt-key add ~/.gnupg/pubring.gpg
RUN apt-get update
RUN apt-get -y install cassandra
And then for ssh
RUN apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo '{{ docker_ssh_user }}:{{docker_ssh_password}}' | chpasswd
EXPOSE 22
And I added a start script to run everything I want:
USER root
ADD start start
RUN chmod 777 start
CMD ["sh" ,"start"]
And here comes the problem. When I have a start script like the one below:
#!/bin/bash
/usr/sbin/sshd -D
/usr/sbin/cassandra -f
SSH is working well; I can do ssh root@172.17.0.x. After I log into the container I try to run cqlsh to ensure that Cassandra is working, but Cassandra is not started for some reason and I can't access cqlsh. I've also checked /var/log/cassandra/ but it was empty.
In the second scenario I change my start script to this:
#!/bin/bash
/usr/sbin/sshd -D & /usr/sbin/cassandra -f
And I again connect with ssh root@172.17.0.x, and then when I run cqlsh inside the container I can connect to cqlsh.
So I was thinking the ampersand & is doing some voodoo that makes it all work well?
Why can't I run a bash start script with one command below another?
Or am I missing something else?
Thanks for reading && helping.
Thanks to my friend, a Linux guru, we found the reason for the error.
/usr/sbin/sshd -D: when this option is specified, sshd will not detach and does not become a daemon. This allows easy monitoring of sshd.
So in the first script, sshd -D was blocking the next command from running.
In the second script I've got &, which lets sshd -D go to the background so that Cassandra can start.
Finally I've got this version of the script:
#!/bin/bash
/usr/sbin/sshd
/usr/sbin/cassandra -f
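To double-check that both services are up after the container starts, something along these lines could be used (the container name is a placeholder; the pattern mirrors the ps check used earlier on this page):
# sshd (daemonized by default, without -D) and cassandra (kept in the
# foreground by -f) should both appear:
docker exec <container> sh -c "ps -e | grep -E 'sshd|cassandra'"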

Resources