Commands on hosts when docker is up? - bash

Is there a way to run commands on the host while my docker-compose up is running?
In my case I am building a set of containers for 2 projects, and some of the containers provide framework tooling such as Phing for migrations and other tasks. I'd like to prepare a set of aliases so that, instead of typing a full docker-compose exec web command, I can run common tasks from the host with a short command like drun.
For now I came up with a build.sh where I would define a set of such aliases and then run docker-compose up, but is there an easier way to do that?
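To illustrate, the kind of alias build.sh would define is something like this (a rough sketch; the compose file path and the web service name are just placeholders):
# in ~/.bash_aliases or sourced from build.sh on the host
alias drun='docker-compose -f ~/src/project_a/docker-compose.yml exec web'
# usage: drun phing migrate, drun composer install, etc.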

Related

How to restart Laravel queue workers inside a docker container?

I'm working on a production docker compose to run my Laravel app. It has the following containers (amongst others):
php-fpm for the app
nginx
mysql
redis
queue workers (a copy of my php-fpm, plus supervisord).
deployment (another copy of my php-fpm, with a Gitlab runner installed inside it, as well as node+npm, composer etc)
When I push to my production branch, the GitLab runner inside the deployment container executes my deploy script, which builds all the things, runs composer update, etc.
Finally, my deploy script needs to restart the queue workers, which are inside the queue workers container. When everything is installed together on a VPS, this is easy: php artisan queue:restart.
But how can I get the deployment container to run that command inside the queue workers container?
Potential solutions
My research basically indicates that containers should not talk to each other, but if you must, I have found four possible solutions:
install SSH in both containers
share docker.sock with the deployment container so it can control other containers via docker
have the queue workers container monitor a directory in the filesystem; when it changes, restart the queue workers
communicate between the containers with a tiny http server in the queue workers container
I really want to avoid 1 and 2, for complexity and security reasons respectively.
I lean toward 3 but am concerned about wasteful resource usage spent monitoring the fs. Is there a really lightweight method of watching a directory with as many files as a Laravel install has?
4 seems slightly crazy but certainly doable. Are there any really tiny, simple HTTP servers I could install into the queue workers container that can trigger a single command when the deployment container hits an endpoint?
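To illustrate option 4, it could even be a netcat loop rather than a real HTTP server (a rough sketch; it assumes ncat is installed in the queue workers image, and the app path and the queue-workers hostname are placeholders):
# Inside the queue workers container: restart the queue whenever anything connects to port 8080.
cd /var/www/html
while true
do
    ncat -l 8080 > /dev/null
    php artisan queue:restart
done
# The deployment container can then trigger a restart after a deploy, e.g. from bash:
# echo restart > /dev/tcp/queue-workers/8080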
I'm hoping for other suggestions, or if there really is no better way than 3 or 4 above, any suggestions on how to implement either of those options.
Delete the existing containers and create new ones.
A container is fundamentally a wrapper around a single process, so this is similar to stopping the workers with Ctrl+C or kill(1), and then starting them up again. For background workers this shouldn't interrupt more than their current tasks, and Docker gives them an opportunity to finish what they're working on before they get killed.
Since the code in the Docker image is fixed, when your CI system produces a new image, you need to delete and recreate your containers anyway to run them with the new image. In your design, the "deployment" container needs access to the host's Docker socket (option #2) to be able to do anything Docker-related. I might run the actual build sequence on a different system and push images via a Docker registry, but fundamentally something needs to sudo docker-compose ... on the target system as part of the deployment process.
A simple Compose-based solution would be to give each image a unique tag, and then pass that as an environment variable:
version: '3.8'
services:
  app:
    image: registry.example.com/php-app:${TAG:-latest}
    ...
  worker:
    image: registry.example.com/php-worker:${TAG:-latest}
    ...
Then your deployment just needs to re-run docker-compose up with the new tag:
ssh root@production.example.com \
  env TAG=20210318 docker-compose up -d
and Compose will take care of recreating the things that have changed.
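On the build side, that boils down to something like this (a sketch; the registry names, tag, and build contexts are placeholders):
# On the CI machine: build and push uniquely tagged images, then deploy with that tag.
TAG=20210318
docker build -t registry.example.com/php-app:$TAG ./app
docker build -t registry.example.com/php-worker:$TAG ./worker
docker push registry.example.com/php-app:$TAG
docker push registry.example.com/php-worker:$TAG
ssh root@production.example.com env TAG=$TAG docker-compose up -d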
I believe @David Maze's answer would be the recommended way, but I decided to post what I ended up doing in case it helps anyone.
I took a different approach because I am running my CI script inside my containers instead of using a Docker registry & having the CI script rebuild images.
I could still have given the deploy container access to docker.sock (option #2), thereby allowing my CI script to control Docker (e.g. rebuild containers), but I wasn't keen on the security implications of that, so I ended up doing #3, with a simple inotifywait watching for a change to a special timestamp.txt file that my CI script modifies. Because it's monitoring only a single file it's light on the CPU and is working well.
# Start watching the special directory so we know when to restart the workers.
SITE_DIR=/var/www/projectname/public_html
WATCH_DIR=/var/www/projectname/updated_at
while true
do
    inotifywait -e create -e modify $WATCH_DIR
    if [ $? -eq 0 ]
    then
        echo "Detected Site Code Change. Executing artisan queue:restart."
        sudo -H -u www-data php $SITE_DIR/artisan queue:restart
    fi
done
All the deploy script has to do to trigger a queue:restart (the updated_at directory being shared between the deployment and queue worker containers, e.g. via a common volume) is:
date > $WATCH_DIR/timestamp.txt

Can I use a DockerFile as a script?

We would like to leverage the excellent catalogue of Dockerfiles on Docker Hub, but the team is not in a position to use Docker.
Is there any way to run a Dockerfile as if it were a shell script against a machine?
For example, if I chose to run the Docker container ruby:2.4.1-jessie against a server running only Debian Jessie, I'd expect it to ignore the FROM directive but be able to set the environment from ENV and run the RUN commands from this Dockerfile: Github docker-library/ruby:2.4.1-jessie
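What I picture is roughly the following (a very rough sketch: it only handles single-line ENV key=value and RUN directives, and ignores FROM, COPY, multi-line continuations and everything else):
# Replay ENV and RUN lines from a Dockerfile on the current machine (bash).
while read -r directive rest
do
    case "$directive" in
        ENV) export "$rest" ;;   # only the ENV key=value form works here
        RUN) eval "$rest" ;;
    esac
done < <(grep -E '^(ENV|RUN) ' Dockerfile)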
A Dockerfile is meant to be executed in an empty container, or on top of the image it builds on (via FROM). That knowledge of the environment (specifically the file system and all the installed software) is important, and running something similar outside of Docker can have side effects, because files end up in places where no files are expected.
I wouldn't recommend it.

Docker run script in host on docker-compose up

My question relates to best practices for running a script when docker-compose up is run.
Currently I'm sharing a volume between host and container to allow the script's changes to be visible to both host and container.
It is similar to a watcher script polling for changes to a configuration file. The script has to act on the host when changes occur, according to predefined rules.
How could I start this script on docker-compose up, or even from the Dockerfile of the service, so that whenever the container goes up the "watcher" can pick up any changes being made and written?
The container in question will always run over a Debian / Ubuntu OS and should be architecture independent, meaning it should be able to run on ARM as well.
To be clear: I wish to run a script on the HOST, not inside the container. I need the host to change its network interface configuration so it can easily adapt to any environment; it is the HOST that needs to change, I repeat. This should be seamless to the user, and easily editable from a web interface running inside a CONTAINER, to adapt to new environments.
I currently do this with a script running on the host based on crontab. I just wish to know the best practices and examples of how to run a script on the HOST from INSIDE a CONTAINER, so that deployment can be as easy for the installing operator as just running docker-compose up.
It seems that there is no best practice that can be applied to your case. A workaround proposed here: How to run shell script on host from docker container? is to use a client/server trick.
The host should run a small server (choose a port and specify a request type that you should be waiting for)
The container, after it starts, should send this request to that server
The host should then run the script / trigger the changes you want
This is something that might have serious security issues, so use at your own risk.
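A minimal sketch of that trick (it assumes ncat is available on the host, the default docker0 gateway address 172.17.0.1, and a placeholder script name):
# On the HOST: listen on a port and apply the network changes whenever anything connects.
while true
do
    ncat -l 9999 > /dev/null
    /usr/local/bin/apply-network-config.sh   # placeholder for the host-side script
done
# Inside the CONTAINER: trigger the host, e.g. right after the web UI saves a change (bash):
echo reload > /dev/tcp/172.17.0.1/9999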
The script needs to run continuously in the foreground.
In your Dockerfile use the CMD directive and define the script as the parameter.
When using the CLI, use docker run -d IMAGE SCRIPT
You can create an alias for docker-compose up. Put something like this in ~/.bash_aliases (in Ubuntu):
alias up="docker-compose up; ~/your_script.sh"
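Note that without -d the script only runs after docker-compose up exits; if the script should run while the containers are up, a detached variant may be closer to the intent (a sketch):
alias up="docker-compose up -d && ~/your_script.sh"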
I'm not sure if running scripts on the host from a container is possible, but if it's possible, it's a severe security flaw. Containers should be isolated, that's the point of using containers.

How to quickly switch between docker environments for development?

I have multiple projects that I need to switch in between on a regular basis. The projects are setup via docker-compose, yet some need external containers to be available.
So in order to run docker-compose up -d in a project, I have to switch to a different directory first and start some basic service containers there (shared instances of mysql, redis, and the like).
I do not want to run all the containers in parallel, and for some it is not possible as they listen on the same port.
What I also find annoying is that certain containers need a script to be run inside of them in order to function properly in development, and I find myself repeating the same commands over and over just in order to switch to a project.
I think this can be automated, I am just unsure how to tackle this problem.
How can I manage to quickly switch the docker environments? My goal is to just have a one-liner.
My current workflow involves desk.
For each project, I have initialized a desk via:
desk edit project_a
and there I run all the steps that I would have done manually, e.g.:
ponysay "INIT PROJECT A"
docker stop $(docker ps -a -q) # stopping all the running containers
cd ~/src/docker-compose/basic-services
docker-compose up -d
cd ~/src/project_a
docker-compose up -d
docker exec -it project_a_container_name /var/www/project_a/docker/scripts/dev-init.sh
and I switch between the environments via:
desk . project_a
desk . project_b
and switching projects has now become quite easy.
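For anyone who prefers not to add another tool, a plain-bash equivalent could be a function in ~/.bash_aliases (a sketch assuming the same directory layout and container names as above):
switch_project() {
    docker stop $(docker ps -a -q)                                    # stop all running containers
    (cd ~/src/docker-compose/basic-services && docker-compose up -d)  # shared services first
    (cd ~/src/"$1" && docker-compose up -d)                           # then the project itself
    docker exec -it "${1}_container_name" "/var/www/$1/docker/scripts/dev-init.sh"
}
# usage: switch_project project_a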

Docker: Cronjob is not working

I am trying to run a cron job in a Docker container. I have a running container (Fedora 20).
I have also installed the cron packages in the container and explicitly run the cron daemon.
I have also checked the cron.deny file; it is empty, and there is no file called cron.allow under the /etc/ directory.
Whenever I try to set a cron job using crontab -e or list the cron jobs using crontab -l, I get the following error.
bash-4.2# crontab -l
You (root) are not allowed to access to (crontab) because of pam configuration.
bash-4.2# crontab -e
You (root) are not allowed to access to (crontab) because of pam configuration.
I also checked the /etc/pam.d/crond file; it has the following entries:
bash-4.2# vi /etc/pam.d/crond
#
# The PAM configuration file for the cron daemon
#
#
# No PAM authentication called, auth modules not needed
account required pam_access.so
account include password-auth
session required pam_loginuid.so
session include password-auth
auth include password-auth
Has anyone faced this issue? If yes, could you please give me some pointers?
Thanks in advance.
An LXC container is not a virtual machine. You'll need to explicitly run the cron daemon in the foreground. Better still, run cron from a process manager like Supervisor or runit.
Reference: Docker documentation
Traditionally a Docker container runs a single process when it is launched, for example an Apache daemon or a SSH server daemon. Often though you want to run more than one process in a container. There are a number of ways you can achieve this ranging from using a simple Bash script as the value of your container's CMD instruction to installing a process management tool.
In this example we're going to make use of the process management tool, Supervisor, to manage multiple processes in our container. Using Supervisor allows us to better control, manage, and restart the processes we want to run. To demonstrate this we're going to install and manage both an SSH daemon and an Apache daemon.
You can do:
ENTRYPOINT cron -f
although remember that you can only have one ENTRYPOINT.
From the docs:
There can only be one ENTRYPOINT in a Dockerfile. If you have more than one ENTRYPOINT, then only the last one in the Dockerfile will have an effect.
An ENTRYPOINT helps you to configure a container that you can run as an executable. That is, when you specify an ENTRYPOINT, then the whole container runs as if it was just that executable.
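Since the question's container is Fedora, note that the daemon there comes from the cronie package and is called crond; a quick way to try the foreground approach (a sketch; the image name is a placeholder):
docker run -d --name cron-test my-fedora-image crond -n   # -n keeps crond in the foreground (Debian/Ubuntu: cron -f)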
