Unable to Find Entrypoint For Nextcloud (Alpine-based Version) For a Cron Container - shell

I'm using Docker with Rancher v1.6, setting up a Nextcloud stack.
I would like to use a dedicated container for running cron tasks every 15 minutes.
The "normal" Nextcloud Docker image can simply use the following:
entrypoint: |
  bash -c 'bash -s <<EOF
  trap "break;exit" SIGHUP SIGINT SIGTERM
  while /bin/true; do
    su -s "/bin/bash" -c "/usr/local/bin/php /var/www/html/cron.php" www-data
    echo $$(date) - Running cron finished
    sleep 900
  done
  EOF'
(Pulled from this GitHub post)
However, the Alpine-based image does not include bash, so this approach cannot be used.
I found this script in the list of examples:
#!/bin/sh
set -eu
exec busybox crond -f -l 0 -L /dev/stdout
However, I cannot seem to get that working from my docker-compose.yml file.
I don't want to use an external file; I'd rather keep the script entirely inside docker-compose.yml to make setup and changes a bit easier.
Thank you!
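A minimal sketch of the same loop rewritten for POSIX sh and inlined in docker-compose.yml (assumptions: the image's busybox su supports the -s flag, and the job should run as www-data with the same paths as the bash version):

entrypoint: |
  /bin/sh -c '/bin/sh -s <<EOF
  trap "exit" HUP INT TERM
  while true; do
    # run the Nextcloud background job as www-data (assumes busybox su)
    su -s /bin/sh -c "/usr/local/bin/php /var/www/html/cron.php" www-data
    echo $$(date) - Running cron finished
    sleep 900
  done
  EOF'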

Related

How to run ENTRYPOINT as root and switch to non-root to run CMD using gosu?

To connect to my container from the Azure WebApp admin I need to start an ssh server at startup. Then I need to run the web server once the db is up.
In my Dockerfile I create a dedicated non-root user to run the web server.
RUN groupadd -g 1000 wagtail && \
    useradd -u 1000 wagtail -m -d /home/wagtail -g wagtail
I copy startup-ssh.sh and startup-main.sh scripts into the container:
COPY startup-ssh.sh /app/
COPY startup-main.sh /app/
RUN chmod +x /app/startup-ssh.sh
RUN chmod +x /app/startup-main.sh
ENTRYPOINT ["/bin/bash", "-c", "/app/startup-ssh.sh"]
CMD ["/bin/bash", "-c", "/app/startup-main.sh"]
In the startup-ssh.sh I start the ssh server and then use gosu to switch user:
#!/bin/bash
# start ssh server
sed -i "s/SSH_PORT/$SSH_PORT/g" /etc/ssh/sshd_config
/usr/sbin/sshd
# restore /app directory rights
chown -R wagtail:wagtail /app
# switch to the non-root user
exec gosu wagtail "$@"
I expect the CMD's startup-main.sh script to be executed next, but I get this in the Docker Desktop logs when the container is started.
Exited(1)
Usage: gosu user-spec command [args]
   ie: gosu tianon bash
       gosu nobody:root bash -c 'whoami && id'
       gosu 1000:1 id
gosu version: 1.10 (go1.11.5 on linux/amd64; gc)
license: GPL-3 (full text at https://github.com/tianon/gosu)
I believe that Docker Desktop uses root when connecting to the container.
Maybe I'm missing something critical and/or obvious. Please point me in the right direction.
The code passed no arguments to the script. Imagine it like this:
bash -c '/app/startup-ssh.sh <NO ARGUMENTS HERE>' ignored ignored2 ignored3...
Test:
bash -c 'echo' 1            # prints an empty line: "1" becomes $0, not an argument
bash -c 'echo' 1 2          # still empty: "1" is $0 and "2" is $1, but echo itself receives nothing
bash -c 'echo $0' 1 2 3     # prints: 1
bash -c 'echo $1' 1 2 3     # prints: 2
bash -c 'echo "$@"' 1 2 3   # prints: 2 3
You want:
ENTRYPOINT ["/bin/bash", "-c", "/app/startup-ssh.sh \"$#\"", "--"]
Or, since the file has a shebang and is executable, skip the explicit shell entirely:
ENTRYPOINT ["/app/startup-ssh.sh"]

Docker run to execute script in mount without exiting container automatically?

I have a simple bash script test.sh in the root of a mounted folder:
#!/bin/bash
Rscript -e "source('/home/rstudio/mount-folder/src/controller.R')";
However, when I try to mount the folder and start the container with docker run as follows:
docker run -d -p 8000:8787 -e ROOT=true -e DISABLE_AUTH=true --name container -v mount-folder/:/home/rstudio/ image_name /home/rstudio/test.sh
the above run command starts the container, but it exits immediately.
I am looking for a docker run command that starts the container, mounts the folder, and then executes the bash script in the mount-folder without exiting the container.
(Note: I don't want to use docker exec, as it is not suitable for my use case for other reasons.)
Dockerfile:
FROM rocker/rstudio:4.0.2
# some RUN commands to install necessary R packages
EXPOSE 8787
CMD tail -f /dev/null
Other details:
The image I am using is the RStudio Server image from rocker, and the container runs on an AWS Ubuntu machine.
Edit :
I have also tried adding CMD tail -f /dev/null at the end of the Dockerfile, as suggested in http://bigdatums.net/2017/11/07/how-to-keep-docker-containers-running/, but the container still exits.
A Docker container exits as soon as its main process finishes; running detached does not change that. I think this article proposes a nice solution:
http://bigdatums.net/2017/11/07/how-to-keep-docker-containers-running/
You could add tail -f /dev/null as the last command in your bash script instead so that the script will never halt unless it is told to do so.
When you do docker run [options] image_name [cmd], the command you specify becomes the container's main command and replaces the command specified in the Dockerfile (that's why adding CMD tail -f /dev/null doesn't do anything). If you ran your container without the /home/rstudio/test.sh at the end, it should stay running.
The solution would be to update your script to add the tail command at the end.
#!/bin/bash
Rscript -e "source('/home/rstudio/mount-folder/src/controller.R')";
exec tail -f /dev/null
If you can't update that script, you could instead add it to the command being passed to the container, with something like:
docker run [options] image_name bash -c '/home/rstudio/test.sh && exec tail -f /dev/null'

Run inline command with pipe in docker container [duplicate]

I'm trying to run MULTIPLE commands like this.
docker run image cd /path/to/somewhere && python a.py
But this gives me a "No such file or directory" error, because it is interpreted as...
"docker run image cd /path/to/somewhere" && "python a.py"
It seems that some escaping, such as quotes or parentheses, is needed.
So I also tried
docker run image "cd /path/to/somewhere && python a.py"
docker run image (cd /path/to/somewhere && python a.py)
but these didn't work.
I have searched the Docker Run Reference but have not found any hints about escape characters.
To run multiple commands in docker, use /bin/bash -c with a semicolon ;
docker run image_name /bin/bash -c "cd /path/to/somewhere; python a.py"
If command2 (python) should only be executed when command1 (cd) returns a zero (success) exit status, use && instead of ;
docker run image_name /bin/bash -c "cd /path/to/somewhere && python a.py"
You can do this a couple of ways:
Use the -w option to change the working directory:
-w, --workdir="" Working directory inside the container
https://docs.docker.com/engine/reference/commandline/run/#set-working-directory--w
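For example, applied to the OP's case this avoids the shell entirely:

docker run -w /path/to/somewhere image python a.py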
Pass the entire argument to /bin/bash:
docker run image /bin/bash -c "cd /path/to/somewhere; python a.py"
You can also pipe commands inside a Docker container with bash -c "<command1> | <command2>", for example:
docker run img /bin/bash -c "ls -1 | wc -l"
Note that without invoking a shell inside the container, the pipe is interpreted by your local shell, so the second command runs on the host and the container's output is piped to your local terminal.
bash -c works well if the commands you are running are relatively simple. However, if you're trying to run a long series of commands full of control characters, it can get complex.
I successfully got around this by piping my commands into the process from the outside, i.e.
cat script.sh | docker run -i <image> /bin/bash
To turn @Eddy Hernandez's comment into a proper answer, which is very much correct since Alpine comes with ash, not bash:
The question now refers to starting a shell in a Docker Alpine container, which implies using sh, ash, /bin/sh, or /bin/ash.
Based on the OP's question:
docker run image sh -c "cd /path/to/somewhere && python a.py"
If you want to store the result in a file outside the container, on your local machine, you can do something like this:
RES_FILE=$(readlink -f /tmp/result.txt)
docker run --rm -v ${RES_FILE}:/result.txt img bash -c "grep root /etc/passwd > /result.txt"
The result of your commands will be available in /tmp/result.txt on your local machine.
For anyone else who came here looking to do the same with docker-compose, you just need to prepend bash -c and enclose the commands in quotes, joined together with &&.
So in the OP's example: docker-compose run image bash -c "cd /path/to/somewhere && python a.py"
If you don't mind the commands running in a subshell, you can put a set of parentheses around them, but they still have to be handed to a shell inside the container:
docker run image sh -c '(cd /path/to/somewhere && python a.py)'
TL;DR;
$ docker run --entrypoint /bin/sh image_name -c "command1 && command2 && command3"
A concern regarding the accepted answer is below.
Nobody has mentioned that docker run image_name /bin/bash -c just appends a command to the entrypoint. Some popular images are smart enough to process this correctly, but some are not.
Imagine the following Dockerfile:
FROM alpine
ENTRYPOINT ["echo"]
If you build it with the tag echo and run:
$ docker run echo /bin/sh -c date
your command is appended to the entrypoint, so the result is echo "/bin/sh -c date".
Instead, you need to override the entrypoint:
$ docker run --entrypoint /bin/sh echo -c date
Docker run reference
In case it's not obvious, if a.py always needs to run in a particular directory, create a simple wrapper script which does the cd and then runs the script.
In your Dockerfile, replace
CMD ["python", "a.py"]
or whatever with
CMD ["/wrapper"]
and create a script wrapper in your root directory (or wherever it's convenient for you) with contents like
#!/bin/sh
set -e
cd /path/to/somewhere
python a.py
In many situations it is also worth considering rewriting a.py so that it doesn't need a wrapper: either make it os.chdir() to where it needs to be, or have it look for its data files in a directory you configure in its environment, or similar.

How to check if docker daemon is running?

I am trying to create a bash utility script to check if the docker daemon is running on my server.
Is there a better way of checking whether the docker daemon is running on my server than something like this?
ps -ef | grep docker
root 1250 1 0 13:28 ? 00:00:04 /usr/bin/dockerd --selinux-enabled
root 1598 1250 0 13:28 ? 00:00:00 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --shim docker-containerd-shim --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/containerd --runtime docker-runc
root 10997 10916 0 19:47 pts/0 00:00:00 grep --color=auto docker
I would like to create a bash shell script that checks whether my docker daemon is running. If it is running, do nothing; if it is not, start the docker daemon.
My pseudocode is something like this. I am thinking of parsing the output of ps -ef, but I would like to know if there is a more efficient way to do it.
if(docker is not running)
run docker
end
P.S.
I am no Linux expert; I just need this utility for my own environment.
I made a little script (Mac OS X) that ensures Docker is running by checking the exit code of docker stats.
#!/bin/bash
# Open Docker only if it is not already running
if (! docker stats --no-stream ); then
  # On Mac OS this is the terminal command to launch Docker
  open /Applications/Docker.app
  # Wait until the Docker daemon is running and has completed initialisation
  while (! docker stats --no-stream ); do
    # Docker takes a few seconds to initialize
    echo "Waiting for Docker to launch..."
    sleep 1
  done
fi
# Start the container...
This works for me on Ubuntu
$ systemctl status docker
You have a utility called pgrep on almost all Linux systems.
You can just do:
pgrep -f docker > /dev/null || echo "starting docker"
Replace the echo command with your docker starting command.
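Note that pgrep -f docker matches any process whose command line merely contains "docker", which can give false positives. A stricter sketch, assuming the daemon process is named dockerd (as on current installs):

pgrep -x dockerd > /dev/null || echo "starting docker"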
if curl -s --unix-socket /var/run/docker.sock http/_ping >/dev/null 2>&1
then
echo "Running"
else
echo "Not running"
fi
Ref: Docker API v1.28
The following works on macOS, and on Windows if Git Bash is installed. On macOS, open /Applications/Docker.app starts the Docker daemon. I haven't seen anything similar for Windows, however.
## check docker is running at all
## based on https://stackoverflow.com/questions/22009364/is-there-a-try-catch-command-in-bash
{
  ## will throw an error if the docker daemon is not running and jump
  ## to the next code chunk
  docker ps -q
} || {
  echo "Docker is not running. Please start docker on your computer"
  echo "When docker has finished starting up press [ENTER] to continue"
  read
}
You can simply:
docker version > /dev/null 2>&1
The exit code of that command is stored in $?, so you can check it: if it is 0, Docker is running.
docker version exits with 1 if the daemon is not running. If other issues are encountered, such as docker not being installed at all, the exit code will vary.
But at the end of the day, if docker is installed and the daemon is running, the exit code will be 0.
The > /dev/null redirects stdout to /dev/null, and the following 2>&1 sends stderr to the same place, silencing all output regardless of the result of the execution.
You could also just check for the existence of /var/run/docker.pid.
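For example, a rough sketch along those lines (the pid file path can vary depending on how the daemon was started):

[ -f /var/run/docker.pid ] && echo "Running" || echo "Not running"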
Following @madsonic, I went for the following:
#!/bin/bash
if (! docker stats --no-stream 2>/dev/null); then
  # On Mac OS this would be the terminal command to launch Docker
  open /Applications/Docker.app
  echo -n "Waiting for Docker to launch"
  sleep 1
  # Wait until Docker daemon is running and has completed initialisation
  while (! docker stats --no-stream >/dev/null 2>&1); do
    # Docker takes a few seconds to initialize
    echo -n "."
    sleep 1
  done
fi
echo
echo "Docker started"
A function could look like this:
isRunning() {
  ps -ef | grep "[d]ocker" | awk '{print $2}'
}
I created a script to start, stop, and restart a mongodb server.
You only need to change some paths inside the scripts, and it will work for you too:
Script
I'm sure you want to start the docker daemon, so here's the command to start it before executing your docker run statement:
sudo systemctl start docker
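Putting the check and the start together, a minimal sketch for a systemd-based server (this assumes systemd and sufficient privileges):

#!/bin/sh
# Start the Docker daemon only if it is not already active.
if ! systemctl is-active --quiet docker; then
  echo "Docker daemon not running; starting it..."
  sudo systemctl start docker
fi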

Why docker exec is killing nohup process on exit?

I have a running Docker Ubuntu container with just a bash script inside. I want to start my application inside that container with docker exec, like this:
docker exec -it 0b3fc9dd35f2 ./main.sh
Inside the main script I want to run another application with nohup, as it is a long-running application:
#!/bin/bash
nohup ./java.sh &
#with this strange sleep the script is working
#sleep 1
echo `date` finish main >> /status.log
The java.sh script is as follow (for simplicity it is a dummy script):
#!/bin/bash
sleep 10
echo `date` finish java >> /status.log
The problem is that java.sh is killed immediately after docker exec returns. The question is: why?
The only solution I have found is to add a dummy sleep 1 to the first script after nohup is started. Then the second process runs fine. Do you have any ideas why that is?
[EDIT]
A second solution is to add some echo or trap command to the java.sh script just before the sleep. Then it works fine. Unfortunately I cannot use this workaround, as instead of this script I have a java process.
This is not an answer, but I still don't have the required reputation to comment.
I don't know why nohup doesn't work here, but I found a workaround that does, using your ideas:
docker exec -ti running_container bash -c 'nohup ./main.sh &> output & sleep 1'
Okay, let's combine the two answers above :D
First, rcmgleite says it exactly right: use the
-d
option to run the process detached, in the background.
And second (the most important part!): if you run a detached process, you don't need nohup!
deploy_app.sh
#!/bin/bash
cd /opt/git/app
git pull
python3 setup.py install
python3 -u webui.py >> nohup.out
Execute this inside a container
docker exec -itd container_name bash -c "/opt/scripts/deploy_app.sh"
Check it
$ docker attach container_name
$ ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 11768 1940 pts/0 Ss Aug31 0:00 /bin/bash
root 887 0.4 0.0 11632 1396 pts/1 Ss+ 02:47 0:00 /bin/bash /opt/scripts/deploy_app
root 932 31.6 0.4 235288 32332 pts/1 Sl+ 02:47 0:00 python3 -u webui.py
I know this is a late response, but I will add it here for documentation reasons.
When using nohup in bash and running it via docker exec on a container, you should use:
$ docker exec -d 0b3fc9dd35f2 /bin/bash -c "./main.sh"
The -d option means:
-d, --detach    Detached mode: run command in the background
for more information about docker exec, see:
https://docs.docker.com/engine/reference/commandline/exec/
This should do the trick.
