Docker: Cron job is not working

I am trying to run a cron job in a Docker container. I have a running container (Fedora 20).
I have also installed the cron packages in the container and explicitly run the cron daemon.
I have also checked the cron.deny file: it is empty, and there is no cron.allow file under the /etc/ directory.
Whenever I try to set a cron job using crontab -e, or to list the cron jobs using
crontab -l, I get the following error:
bash-4.2# crontab -l
You (root) are not allowed to access to (crontab) because of pam configuration.
bash-4.2# crontab -e
You (root) are not allowed to access to (crontab) because of pam configuration.
I also checked the /etc/pam.d/crond file; it has the following entries:
bash-4.2# vi /etc/pam.d/crond
#
# The PAM configuration file for the cron daemon
#
#
# No PAM authentication called, auth modules not needed
account required pam_access.so
account include password-auth
session required pam_loginuid.so
session include password-auth
auth include password-auth
Has anyone faced this issue? If yes, could you please give me some pointers on this?
Thanks in advance.

An LXC container is not a virtual machine. You'll need to explicitly run the cron daemon in the foreground. Better still, run cron from a process manager like Supervisor or runit.
Reference: Docker documentation
Traditionally a Docker container runs a single process when it is
launched, for example an Apache daemon or a SSH server daemon. Often
though you want to run more than one process in a container. There are
a number of ways you can achieve this ranging from using a simple Bash
script as the value of your container's CMD instruction to installing
a process management tool.
In this example we're going to make use of the process management
tool, Supervisor, to manage multiple processes in our container. Using
Supervisor allows us to better control, manage, and restart the
processes we want to run. To demonstrate this we're going to install
and manage both an SSH daemon and an Apache daemon.
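Applied to the cron case, a minimal supervisord.conf might look like this (a sketch; the crond path and the -n foreground flag match Fedora's cronie package and are assumptions to adapt):

[supervisord]
nodaemon=true

[program:crond]
command=/usr/sbin/crond -n
autorestart=true

The container's CMD then runs supervisord itself, e.g. /usr/bin/supervisord -c /etc/supervisord.conf, so Supervisor is PID 1 and keeps crond alive.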

You can do:
ENTRYPOINT cron -f
although remember that you can only have one ENTRYPOINT.
From the docs:
There can only be one ENTRYPOINT in a Dockerfile. If you have more
than one ENTRYPOINT, then only the last one in the Dockerfile will
have an effect.
An ENTRYPOINT helps you to configure a container that you can run as
an executable. That is, when you specify an ENTRYPOINT, then the whole
container runs as if it was just that executable.
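Putting that together, a minimal Dockerfile might look like this (a sketch assuming a Debian-based image and a crontab file named my-crontab; package and binary names differ by distribution):

FROM debian:stable-slim
# Install cron and clean up the apt cache to keep the image small.
RUN apt-get update && apt-get install -y cron && rm -rf /var/lib/apt/lists/*
# Files in /etc/cron.d must be root-owned and not group/world-writable.
COPY my-crontab /etc/cron.d/my-crontab
RUN chmod 0644 /etc/cron.d/my-crontab
# -f keeps cron in the foreground so the container stays running.
ENTRYPOINT ["cron", "-f"]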

Related

How to restart Laravel queue workers inside a docker container?

I'm working on a production docker compose to run my Laravel app. It has the following containers (amongst others):
php-fpm for the app
nginx
mysql
redis
queue workers (a copy of my php-fpm, plus supervisord).
deployment (another copy of my php-fpm, with a GitLab runner installed inside it, as well as node+npm, Composer, etc.)
When I push to my production branch, the GitLab runner inside the deployment container executes my deploy script, which builds all the things, runs composer update, etc.
Finally, my deploy script needs to restart the queue workers, which are inside the queue workers container. When everything is installed together on a VPS, this is easy: php artisan queue:restart.
But how can I get the deployment container to run that command inside the queue workers container?
Potential solutions
My research indicates basically that containers should not talk to each other, but if you must, I have found four possible solutions:
install SSH in both containers
share docker.sock with the deployment container so it can control other containers via docker
have the queue workers container monitor a directory in the filesystem; when it changes, restart the queue workers
communicate between the containers with a tiny http server in the queue workers container
I really want to avoid 1 and 2, for complexity and security reasons respectively.
I lean toward 3 but am concerned about wasteful resource usage spent monitoring the fs. Is there a really lightweight method of watching a directory with as many files as a Laravel install has?
4 seems slightly crazy but certainly do-able. Are there any really tiny, simple http servers I could install into the queue workers container that can trigger a single command when the deployment container hits an endpoint?
I'm hoping for other suggestions, or if there really is no better way than 3 or 4 above, any suggestions on how to implement either of those options.
Delete the existing containers and create new ones.
A container is fundamentally a wrapper around a single process, so this is similar to stopping the workers with Ctrl+C or kill(1), and then starting them up again. For background workers this shouldn't interrupt more than their current tasks, and Docker gives them an opportunity to finish what they're working on before they get killed.
Since the code in the Docker image is fixed, when your CI system produces a new image, you need to delete and recreate your containers anyway to run them with the new image. In your design, the "deployment" container needs access to the host's Docker socket (option #2) to be able to do anything Docker-related. I might run the actual build sequence on a different system and push images via a Docker registry, but fundamentally something needs to sudo docker-compose ... on the target system as part of the deployment process.
A simple Compose-based solution would be to give each image a unique tag, and then pass that as an environment variable:
version: '3.8'
services:
  app:
    image: registry.example.com/php-app:${TAG:-latest}
    ...
  worker:
    image: registry.example.com/php-worker:${TAG:-latest}
    ...
Then your deployment just needs to re-run docker-compose up with the new tag:
ssh root@production.example.com \
  env TAG=20210318 docker-compose up -d
and Compose will take care of recreating the things that have changed.
I believe @David Maze's answer would be the recommended way, but I decided to post what I ended up doing in case it helps anyone.
I took a different approach, because I am running my CI script inside my containers instead of using a Docker registry and having the CI script rebuild images.
I could still have given the deploy container access to docker.sock (option #2), thereby allowing my CI script to control docker (e.g. rebuild containers, etc.), but I wasn't keen on the security implications of that, so I ended up doing #3, with a simple inotifywait watching for a change in a special 'timestamp.txt' file that I modify in my CI script. Because it monitors only a single file, it's light on the CPU and is working well.
# Start watching the special directory so we know when to restart the workers.
SITE_DIR=/var/www/projectname/public_html
WATCH_DIR=/var/www/projectname/updated_at

while true
do
    inotifywait -e create -e modify "$WATCH_DIR"
    if [ $? -eq 0 ]
    then
        echo "Detected Site Code Change. Executing artisan queue:restart."
        sudo -H -u www-data php "$SITE_DIR/artisan" queue:restart
    fi
done
All the deploy script has to do to trigger a queue:restart is:
date > $WATCH_DIR/timestamp.txt

Using setuid inside a docker container

I have a container which needs to do some initialisation on startup that can only be done as root, but following good practice I don't want the container running as root.
I figured I should be able to create a script inside the container, owned by root and with the setuid bit set. The container can then be started with a non-root user, the initialisation done by executing the script, and then the container does what it needs to do.
This does not seem to work. Even though the script is owned by root and has the setuid bit set, the initialisation script runs as the non-root user.
Should this work? Is there another (better) way?
I'm running with Docker for Desktop on a mac.
The initialisation I need to do is to update /etc/hosts with a value that can only be determined at run time from inside the container: specifically, the IP address associated with host.docker.internal.
I have tried making /etc/hosts writable by the non-root user from within the Dockerfile. That doesn't work either: /etc/hosts is mounted into the container at run time, and chmod and chown in the Dockerfile seem to have no effect on the file in the running container.
To achieve your goal, you can probably tell the Dockerfile to switch to the non-root user after the installation script.
For example:
FROM ...
RUN ./install.sh
USER foo
...
This Dockerfile runs the installer as root and afterwards changes the user to the selected one.
Hope it can be useful for you!
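Note that USER only helps with work that can be done at build time, and that Linux ignores the setuid bit on interpreted scripts, which is why the approach in the question does not work. For initialisation that must happen at container start (like reading the IP of host.docker.internal), a common pattern is to start as root, do the privileged step in an entrypoint, and then drop privileges. A minimal sketch, assuming gosu is installed in the image and a non-root user named appuser exists:

#!/bin/sh
# entrypoint.sh -- runs as root, performs the privileged init, then drops privileges.
set -e

# Privileged step: make host.docker.internal resolvable via /etc/hosts.
# (The getent lookup is illustrative; adjust for your environment.)
HOST_IP="$(getent hosts host.docker.internal | awk '{ print $1 }')" || true
if [ -n "$HOST_IP" ]; then
    echo "$HOST_IP host.docker.internal" >> /etc/hosts
fi

# Drop to the unprivileged user for the actual workload.
exec gosu appuser "$@"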

Commands on hosts when docker is up?

Is there a way to run some commands on the host while my docker-compose up is running?
In my case I am building a set of containers for two projects, and some of the containers provide framework tooling, such as Phing, for migrations and other tasks. I'd like to prepare a set of aliases so I can run common tasks from the host: instead of running a docker-compose exec web command, just writing drun or something like that.
For now I came up with a build.sh where I define a set of such aliases and then run docker-compose up, but is there an easier way to do that?
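One lightweight option is a file of shell aliases that wrap docker-compose exec, sourced from the host shell. A minimal sketch (the service name web and the alias names are illustrative):

# docker-aliases.sh -- source this from the host shell, e.g. in ~/.bashrc.
alias drun='docker-compose exec web'
alias dphing='docker-compose exec web phing'

After sourcing it, drun ls runs ls inside the web container, typed from the host.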

Docker run script in host on docker-compose up

My question relates to best practices on how to run a script on a docker-compose up directive.
Currently I'm sharing a volume between host and container to allow for the script changes to be visible to both host and container.
It is similar to a watcher script polling for changes to a configuration file; the script has to act on the host when changes occur, according to predefined rules.
How could I start this script from a docker-compose up directive, or even from the Dockerfile of the service, so that whenever the container goes up the watcher picks up any changes being made and written to the shared volume?
The container in question will always run over a Debian / Ubuntu OS and should be architecture independent, meaning it should be able to run on ARM as well.
I wish to run a script on the Host, not inside the container. I need the host to change its network interface configuration to easily adapt to any environment (it is the HOST that needs to change, I repeat). This should be seamless to the user, and easily editable through a web interface running inside a CONTAINER, to adapt to new environments.
I currently do this with a script running on the host based on crontab. I just wish to know the best practices and examples of how to run a script on the HOST from INSIDE a CONTAINER, so that deployment can be as easy for the installing operator as just running docker-compose up.
I just wish to know the best practices and examples of how to run a script on the HOST from INSIDE a CONTAINER, so that deployment can be as easy for the installing operator as just running docker-compose up
It seems that there is no best practice that can be applied to your case. A workaround proposed in How to run shell script on host from docker container? is to use a client/server trick:
The host should run a small server (choose a port, and specify a request type that the server should wait for)
The container, after it starts, should send this request to that server
The host should then run the script / trigger the changes you want
This is something that might have serious security issues, so use at your own risk.
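A minimal sketch of that trick using netcat on both sides (the port 9999, the script path, and the host address are all illustrative; GNU and BSD netcat differ slightly in flags):

# host-listener.sh -- run this on the HOST. It blocks until a container
# connects, runs the privileged host-side script, then listens again.
while true
do
    nc -l -p 9999 > /dev/null   # GNU netcat; for BSD netcat use: nc -l 9999
    /usr/local/bin/reconfigure-network.sh
done

Triggering it from inside the container is then a single connection:

# 172.17.0.1 is typically the default bridge gateway (the host) on Linux;
# on Docker Desktop, host.docker.internal can be used instead.
nc 172.17.0.1 9999 < /dev/null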
The script needs to run continuously in the foreground.
In your Dockerfile, use the CMD directive and define the script as the parameter.
When using the CLI, use docker run -d IMAGE SCRIPT.
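A minimal sketch of that (the image, script name, and paths are illustrative):

FROM ubuntu:22.04
COPY watcher.sh /usr/local/bin/watcher.sh
RUN chmod +x /usr/local/bin/watcher.sh
# The watcher loops forever in the foreground, which keeps the container alive.
CMD ["/usr/local/bin/watcher.sh"]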
You can create an alias for docker-compose up. Put something like this in ~/.bash_aliases (on Ubuntu):
alias up="docker-compose up; ~/your_script.sh"
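Note that with a foreground docker-compose up, the script only runs after the stack shuts down. If the intent is to run the script while the stack is up (an assumption), a detached variant would be:

alias up="docker-compose up -d && ~/your_script.sh"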
I'm not sure whether running scripts on the host from a container is possible, but if it is, it's a severe security flaw. Containers should be isolated; that's the point of using containers.

Running docker as non-root user OR running jenkins on tomcat as root user

I am trying to build a Docker image using the docker-maven plugin, and I plan to execute the mvn command from Jenkins. I have jenkins.war deployed on a Tomcat instance rather than as a standalone app, and it runs as a non-root user.
The problem is that docker needs to be run as the root user, so the Maven commands need to run as root, and hence Jenkins/Tomcat needs to run as root, which is not good practice (although my non-root user is also a sudoer, so I guess it won't matter much).
So, bottom line, I see two solutions: either run docker as a non-root user (and I need help on how to do that)
OR
run Jenkins as root (and I'm not sure how to achieve that, as I changed the environment variable/config and it's still not switching to root).
Any advice on which solution to choose, and how to implement it?
The problem is that docker needs to be run as root user, so maven commands need to be run as root user,
No, a docker run can be done with a -u (--user) parameter in order to use a non-root user inside the container.
Either run docker as non-root user
Your user (on the host) needs to be part of the docker group. Then you can run docker commands as that user.
As commented, this is not very secure.
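For completeness, adding a host user to the docker group looks like this (the group exists after a standard Docker installation; log out and back in for it to take effect):

sudo usermod -aG docker "$USER"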
See:
"chrisfosterelli/dockerrootplease"
"Understanding how uid and gid work in Docker containers"
That last link ends with the following findings:
If there’s a known uid that the process inside the container is executing as, it could be as simple as restricting access to the host system so that the uid from the container has limited access.
The better solution is to start containers with a known uid using the --user flag (you can use a username also, but remember that it's just a friendlier way of providing a uid from the host's username system), and then limiting access to the uid on the host that you've decided the container will run as.
Because of how uids and usernames (and gids and group names) map from a container to the host, specifying the user that a containerized process runs as can make the process appear to be owned by different users inside vs outside the container.
Regarding that last point, you now have user-namespace (userns) remapping (available since Docker 1.10, but I would advise 17.06, because of issue 33844).
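As a quick illustration of the --user flag (the uid/gid values are arbitrary):

# The process inside the container runs as uid 1000 / gid 1000, not as root.
docker run --rm -u 1000:1000 alpine id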
I am also stuck on how to set up a docker build server.
Here's where I see ground truth right now...
Docker commands require root privileges
This is because if you can run arbitrary docker commands, you have the same powers as root on the host. (You can build a container running as root internally, with a filesystem mount to anywhere on the host, thus allowing any root action.)
The "docker" group is a big lie IMHO. It's effectively the same as making the members root.
The only way I can see to wrap docker with any kind of security for non-root use is to build custom bash scripts to launch very specific docker commands, then to carefully audit the security implications of those commands, then add those scripts to the sudoers file (granting passwordless sudo to non-root users).
In a world where we integrate docker into development pipelines (e.g. putting docker commands in Maven builds, or allowing developers to make arbitrary changes to the build definitions for a docker build server), I have no idea how you maintain any security.
After a lot of searching and debugging of this issue over the last week,
I found that the way to run a Maven docker container as non-root is to pass the user flag,
e.g. -u 1000.
But for this to work correctly, the user needs to exist in the image's /etc/passwd file.
To work around this, you can bind-mount the host's (Jenkins) /etc/passwd file into the container and use a non-root user.
In your system command arguments for the docker run container, add the following to mount the correct volumes into the mvn image, so that the host's non-root user is mapped inside the Maven container:
-v /share:/share \
-v /etc/passwd:/etc/passwd:ro \
-v /etc/group:/etc/group:ro \
-v "$HOME/.m2":/var/maven/.m2:z \
-w /usr/src/mymaven \
-e MAVEN_CONFIG=/var/maven/.m2 \
-e MAVEN_OPTS="-Duser.home=/var/maven"
I know this might not be the most informative answer, but it should work for running an mvn container as non-root, specifically to run otj-embedded-pg for integration tests that pass fine locally but fail on a Jenkins server.
See this link OTJ_EMBEDDED_RUN_IN_CI_SERVER
Most of the posters on that thread suggest creating a new image, but there is no need to do that: you can run the latest Maven docker image with the commands listed above, and it works as it should.
Hope this helps somebody who might get stuck on this issue and saves them a few hours of work.
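Putting the pieces together, the full command might look like this (a sketch; the image tag, source mount, and uid are illustrative assumptions):

docker run --rm -u "$(id -u):$(id -g)" \
    -v "$PWD":/usr/src/mymaven \
    -v /share:/share \
    -v /etc/passwd:/etc/passwd:ro \
    -v /etc/group:/etc/group:ro \
    -v "$HOME/.m2":/var/maven/.m2:z \
    -w /usr/src/mymaven \
    -e MAVEN_CONFIG=/var/maven/.m2 \
    -e MAVEN_OPTS="-Duser.home=/var/maven" \
    maven:3 mvn clean install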
