How to restart Laravel queue workers inside a docker container?

I'm working on a production docker compose to run my Laravel app. It has the following containers (amongst others):
php-fpm for the app
nginx
mysql
redis
queue workers (a copy of my php-fpm, plus supervisord).
deployment (another copy of my php-fpm, with a Gitlab runner installed inside it, as well as node+npm, composer etc)
When I push to my production branch, the gitlab runner inside the deployment container executes my deploy script, which builds all the things, runs composer update etc
Finally, my deploy script needs to restart the queue workers, which are inside the queue workers container. When everything is installed together on a VPS, this is easy: php artisan queue:restart.
But how can I get the deployment container to run that command inside the queue workers container?
Potential solutions
My research indicates basically that containers should not talk to each other, but if you must, I have found four possible solutions:
1. install SSH in both containers
2. share docker.sock with the deployment container so it can control other containers via docker
3. have the queue workers container monitor a directory in the filesystem; when it changes, restart the queue workers
4. communicate between the containers with a tiny http server in the queue workers container
I really want to avoid 1 and 2, for complexity and security reasons respectively.
I lean toward 3 but am concerned about wasteful resource usage spent monitoring the fs. Is there a really lightweight method of watching a directory with as many files as a Laravel install has?
4 seems slightly crazy but certainly do-able. Are there any really tiny, simple http servers I could install into the queue workers container that can trigger a single command when the deployment container hits an endpoint?
I'm hoping for other suggestions, or if there really is no better way than 3 or 4 above, any suggestions on how to implement either of those options.

Delete the existing containers and create new ones.
A container is fundamentally a wrapper around a single process, so this is similar to stopping the workers with Ctrl+C or kill(1), and then starting them up again. For background workers this shouldn't interrupt more than their current tasks, and Docker gives them an opportunity to finish what they're working on before they get killed.
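If your jobs run longer than Docker's default 10-second grace period, that window can be stretched per service in Compose. A minimal sketch, assuming the worker service is called worker and that pcntl is available so queue:work can trap SIGTERM (the timeout value is just an example):

services:
  worker:
    image: registry.example.com/php-worker:${TAG:-latest}
    # let queue:work finish the current job before Docker
    # escalates from SIGTERM to SIGKILL
    stop_grace_period: 1m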
Since the code in the Docker image is fixed, when your CI system produces a new image, you need to delete and recreate your containers anyways to run them with the new image. In your design, the "deployment" container needs access to the host's Docker socket (option #2) to be able to do anything Docker-related. I might run the actual build sequence on a different system and push images via a Docker registry, but fundamentally something needs to sudo docker-compose ... on the target system as part of the deployment process.
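For the build-elsewhere-and-push variant, the CI job would look roughly like this; a sketch only, where the Dockerfile names are assumptions and the registry/tag follow the Compose example below:

# in CI, on the build machine
TAG=20210318
docker build -t registry.example.com/php-app:$TAG -f Dockerfile.app .
docker build -t registry.example.com/php-worker:$TAG -f Dockerfile.worker .
docker push registry.example.com/php-app:$TAG
docker push registry.example.com/php-worker:$TAG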
A simple Compose-based solution would be to give each image a unique tag, and then pass that as an environment variable:
version: '3.8'
services:
  app:
    image: registry.example.com/php-app:${TAG:-latest}
    ...
  worker:
    image: registry.example.com/php-worker:${TAG:-latest}
    ...
Then your deployment just needs to re-run docker-compose up with the new tag
ssh root@production.example.com \
  env TAG=20210318 docker-compose up -d
and Compose will take care of recreating the things that have changed.
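If you would rather not pass TAG on the command line each time, Compose also reads variable substitutions from a .env file sitting next to docker-compose.yml, so the deploy step can simply rewrite that file (the value is an example):

# .env, next to docker-compose.yml on the target host
TAG=20210318

After that, a plain docker-compose up -d picks up the new tag.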

I believe @David Maze's answer would be the recommended way, but I decided to post what I ended up doing in case it helps anyone.
I took a different approach because I am running my CI script inside my containers instead of using a Docker registry & having the CI script rebuild images.
I could still have given the deploy container access to the docker.sock (option #2), thereby allowing my CI script to control docker (e.g. rebuild containers etc), but I wasn't keen on the security implications of that. So I ended up doing #3, with a simple inotifywait watching for a change to a special 'timestamp.txt' file that my CI script modifies. Because it's monitoring only a single file it's light on the CPU and is working well.
# Start watching the special directory so we know when to restart the workers.
SITE_DIR=/var/www/projectname/public_html
WATCH_DIR=/var/www/projectname/updated_at

while true
do
    inotifywait -e create -e modify "$WATCH_DIR"
    if [ $? -eq 0 ]
    then
        echo "Detected Site Code Change. Executing artisan queue:restart."
        sudo -H -u www-data php "$SITE_DIR/artisan" queue:restart
    fi
done
All the deploy script has to do to trigger a queue:restart is:
date > $WATCH_DIR/timestamp.txt
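Since the queue-workers container already runs supervisord, the watcher loop above can itself be kept alive by a supervisord program entry. A rough sketch, assuming the loop is saved as /usr/local/bin/watch-and-restart.sh (that path and the program name are invented here):

[program:queue-restart-watcher]
command=/usr/local/bin/watch-and-restart.sh
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/supervisor/queue-restart-watcher.log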

Related

Curious about the use of docker-compose and dockerfile

I'm studying docker.
As I understand it, docker-compose is a tool that conveniently runs multiple containers from one script.
First, since a Dockerfile only handles one container, is it correct to think that Docker Compose is backwards compatible with Dockerfile?
I thought docker compose could cover everything, but I saw docker compose and docker files used together.
Let's take spring boot as an example.
Can I use only one docker-compose to run the db container required for the application, build the application, check the port being used, and run the jar file?
Or do I have to separate the Dockerfiles and roles and use the two together?
When working with Docker, there are two concepts: Image and Container.
Images are like mini operating systems stored in a file which is built specifically with our application in mind. Think of it like a custom operating system which is sitting on your hard disk when your computer is switched off.
Containers are running instances of your image.
Imagine you had a shared hard disk (or even CD/DVD if you are old school) which had an operating system which can run on multiple machines. The files on the disk are the "image", and those files running on a machine are a "container".
This is how Docker works, you have files on the machine which are known as the image, and running instances of those files are referred to as the container. Images can also be uploaded and shared for other users to download and run on their machine too.
This brings us to Docker vs Docker Compose.
Docker is the underlying technology which manages (creates, stores or shares) images, and runs containers.
We provide the Dockerfile to tell Docker how to create our images. For example, we say: start from the Python 3 base image, then install these requirements, then create these folders, then switch to this user, etc... (I'm oversimplifying the actual steps, but this is just to explain the point).
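As a rough illustration of those steps, a Dockerfile might look like the sketch below (the file names, folder and user are invented for the example, not taken from a real project):

# start from the Python 3 base image
FROM python:3.9
# install these requirements
COPY requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
# create these folders and copy the app in
RUN mkdir -p /app /vol/static
COPY . /app
WORKDIR /app
# switch to this (non-root) user
RUN adduser --disabled-password --gecos "" appuser
USER appuser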
Once we have done that, we can create an image using Docker, by running docker build .. If you run that, Docker will execute each step in our Dockerfile and store the result as an image on the system.
Once the image is built, you can run it manually using something like this:
docker run <IMAGE_ID>
However, if you need to set up volumes or publish ports, you need to run it like this:
docker run -v /path/to/vol:/path/to/vol -p 8000:8000 <IMAGE_ID>
Often applications need multiple images to run. For example, you might have an application and a database, and you may also need to setup networks and shared volumes between them.
So you would need to write the above commands with the appropriate configurations and ID's for each container you want to run, every time you want to start your service...
As you might expect, this could become complex, tedious and difficult to manage very quickly...
This is where Docker Compose comes in...
Docker Compose is a tool used to define how Docker runs our images, in a yaml file which can be stored with our code and reused.
So, if we need to run our app image as a container, share port 8000 and map a volume, we can do this:
services:
  app:
    build:
      context: .
    ports:
      - 8000:8000
    volumes:
      - ./app/:/app
Then, every time we need to start our app, we just run docker-compose up, and Docker Compose will handle all the complex docker commands behind the scenes.
So basically, the purpose of Docker Compose is to configure how our running service should work together to serve our application.
When we run docker-compose build, Docker Compose will run all the necessary docker build commands, to build all images needed for our project and tag them appropriately to keep track of them in the system.
In summary, Docker is the underlying technology used to create images and run them as containers, and Docker Compose is a tool that configures how Docker should run multiple containers to serve our application.
I hope this makes sense?
I suggest you go deeper into what Docker Compose is by reading Difference between Docker Compose Vs Dockerfile.
I quote part of what is explained in the above article:
A Dockerfile is a simple text file that contains the commands a user could call to assemble an image whereas Docker Compose is a tool for defining and running multi-container Docker applications.
Docker Compose defines the services that make up your app in docker-compose.yml so they can be run together in an isolated environment. It gets an app running in one command by just running docker-compose up. Docker Compose uses the Dockerfile if you add the build command to your project's docker-compose.yml. Your Docker workflow should be to build a suitable Dockerfile for each image you wish to create, then use Compose to assemble the images using the build command.
About your question: Docker Compose is flexible enough that you can split your composition logic across multiple YAML files and combine them on the docker-compose command line as you need.
Here an example:
# Build the docker infrastructure
docker-compose \
  -f network.yaml \
  -f database.yaml \
  -f application.yaml \
  build

# Run the application
docker-compose \
  -f application.yaml \
  up
To answer your question about the Spring Boot application: you can build the complete application through Compose alone, as long as you know the sequence and the dependencies, but then the question arises whether you are using the power of Compose properly. @Antonio Petricca has already given a well-described answer about Compose.
Regarding the compose file and Dockerfile question: it depends on how you write your compose file, but technically the two are different things.
So in short:
1- Compose and Dockerfile are two different things.
2- Compose can use multiple modular files, including Dockerfiles, and that is why it is so popular; you can break the logic into multiple modules and then use Compose to build them. It also helps you debug and iterate faster.
Hope this answers your doubt.
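To make that concrete for the Spring Boot case, a single docker-compose.yml can build the jar image (via the project's Dockerfile) and start the database it depends on. A hedged sketch; the ports, credentials and database name are placeholders, and the Dockerfile itself must handle the actual jar build:

version: "3.8"
services:
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: appdb
  app:
    build:
      context: .   # uses the project's Dockerfile to build the Spring Boot image
    ports:
      - "8080:8080"
    depends_on:
      - db

Running docker-compose up --build then builds the image and starts both containers; just note that depends_on only orders startup, it does not wait for the database to actually be ready.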

Can we have more than one installation of Rundeck in a Linux server?

I have one installation of Rundeck in a Linux server and it is up & running on port 4440. But I want to have one more installation of it, running on another port. Is it possible? This question may look weird but I want to have an additional setup of Rundeck due to personal reasons.
Eagerly looking for help. Thanks in advance.
You can test your "personal instance" with a Docker container without touching the "real instance" (or use two Docker containers if you want); in both cases, you need to specify different ports (for example 4440 for the "real" instance/container and 5550 for the "test" container).
The official Docker image and instructions on how to run it are available on Docker Hub; check the "Environment variables" section to see how to specify the TCP port of each container (you also have a lot of params to test).
There are also plenty of example configurations to test (LDAP, DB backends, etc.).
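As a concrete sketch of running two instances side by side (the rundeck/rundeck image name comes from Docker Hub; the hostname, container names and host ports are placeholders):

# "real" instance on host port 4440
docker run -d --name rundeck-real \
  -e RUNDECK_GRAILS_URL=http://myserver:4440 \
  -p 4440:4440 rundeck/rundeck

# "test" instance on host port 5550 (the container still listens on 4440 internally)
docker run -d --name rundeck-test \
  -e RUNDECK_GRAILS_URL=http://myserver:5550 \
  -p 5550:4440 rundeck/rundeck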
If you use Rundeck with Docker, you must change the init.sh.
It is responsible for overwriting the configuration at each container creation, so otherwise all your configuration updates are lost.
Doing this also avoids having configuration params in clear text in your docker-compose file...
The steps are:
create a docker-compose file as described on the Rundeck Docker Hub page
map volumes on your host so you can keep Rundeck's files and directories
stop your container
comment out the config overwrite in init.sh
restart your container
You can then update Rundeck's config on the fly and just restart the Rundeck container to see the changes...

Docker run script in host on docker-compose up

My question relates to best practices on how to run a script on a docker-compose up directive.
Currently I'm sharing a volume between host and container to allow for the script changes to be visible to both host and container.
Similar to a watching script polling for changes on a configuration file, the script has to act on the host on changes, according to predefined rules.
How could I start this script on a docker-compose up directive, or even from the Dockerfile of the service, so that whenever the container goes up the "watcher" can pick up any changes being made and written?
The container in question will always run over a Debian / Ubuntu OS and should be architecture independent, meaning it should be able to run on ARM as well.
I wish to run a script on the Host, not inside the container. I need the Host to change its network interface configuration to easily adapt to any environment. The HOST needs to change, I repeat. This should be seamless to the user, and easily editable via a web interface running inside a CONTAINER to adapt to new environments.
I currently do this with a script running on the host based on crontab. I just wish to know the best practices and examples of how to run a script on the HOST from INSIDE a CONTAINER, so that the deploy can be as easy for the installing operator as just running docker-compose up.
I just wish to know the best practices and examples of how to run a script on the HOST from INSIDE a CONTAINER, so that the deploy can be as easy for the installing operator as just running docker-compose up
It seems that there is no best practice that can be applied to your case. A workaround proposed here: How to run shell script on host from docker container? is to use a client/server trick.
The host should run a small server (choose a port and specify a request type that you should be waiting for)
The container, after it starts, should send this request to that server
The host should then run the script / trigger the changes you want
This is something that might have serious security issues, so use at your own risk.
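A minimal sketch of that trick, using socat on the host and a plain TCP poke from the container (the port, script path and host alias are assumptions; restrict the listener to the Docker bridge interface and validate the input before relying on anything like this):

# on the HOST: a tiny listener that runs the script on every connection
socat TCP-LISTEN:9999,reuseaddr,fork EXEC:/usr/local/bin/reconfigure-network.sh

# inside the CONTAINER: poke the listener when the watched config changes
# (host.docker.internal needs Docker 20.10+ and, on Linux,
#  extra_hosts: ["host.docker.internal:host-gateway"] in the compose file)
echo reload | nc host.docker.internal 9999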
The script needs to run continuously in the foreground.
In your Dockerfile use the CMD directive and define the script as the parameter.
When using the cli, use docker run -d IMAGE SCRIPT
You can create an alias for docker-compose up. Put something like this in ~/.bash_aliases (in Ubuntu):
alias up="docker-compose up; ~/your_script.sh"
I'm not sure if running scripts on the host from a container is possible, but if it's possible, it's a severe security flaw. Containers should be isolated, that's the point of using containers.

DOCKER - LAMP Stack issues - Premade Image

Trying to set up a LAMP stack with Docker,
I found and tried to use https://hub.docker.com/r/linode/lamp/
But I can't find, and don't know how to access, the files linked to the domain,
or how to change the domain name from example.com and so on.
I think my real question is how do I change files in, or rebuild, an image
from other people.
First of all I want to mention I'm not a big fan of this image and approach, because it bundles multiple services into one container. I would recommend using a container for apache2, a container for mysql, etc.
But for the setup of LAMP, I'm using the documentation provided on the site.
I have a path /xx/test/index.html which contains some HTML. I will map the container's port to a host port and mount my files to the right folder in the container.
docker run -p 80:80 -t -i -v /root/test/:/var/www/example.com/public_html/ linode/lamp /bin/bash
I'm using -ti to start a bash session. In that session you start the apache2 and mysql services yourself (this is the approach of the official documentation, not mine; it's a strange approach):
root@35d00285b625:/# service apache2 start
 * Starting web server apache2 *
root@35d00285b625:/# service mysql start
 * Starting MySQL database server mysqld                    [ OK ]
 * Checking for tables which need an upgrade, are corrupt or were
   not closed cleanly.
After starting the services you can exit the container by pressing ctrl + p then ctrl + q. Now you can check your server-ip:80 to check your html code. If you want to replace example.conf you can mount your own apache2 configurations too.
If you want to change folder names inside the image, I would recommend creating your own Dockerfile which starts with:
FROM linode/lamp
RUN <your changes...>
First of all, consider using microservices in separate containers. This will provide advantages like:
Fault Containment
Ease of Upgrades
Eliminates long-term commitment to a single technology stack
Easy to scale
System resilience
...
Now, Docker was created with microservices in mind, so for your LAMP stack I recommend using Apache+PHP in one container and MySQL in another container. To make your containers communicate with each other, create a user-defined network and put both containers in it.
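For example, a user-defined network plus one container per service could look roughly like this (the network/container names, image tags, host path and password are placeholders):

docker network create lamp-net

docker run -d --name db --network lamp-net \
  -e MYSQL_ROOT_PASSWORD=secret mysql:8

# php:8-apache bundles Apache + PHP; on the shared network it can reach MySQL at hostname "db"
docker run -d --name web --network lamp-net -p 80:80 \
  -v /path/to/site:/var/www/html php:8-apache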
Now back to your question:
You have 3 options for using your custom configuration files:
Mount your configuration files when creating the container (recommended):
sudo docker run -d --name my-apache -v /path/to/custom/httpd.conf:/usr/local/apache2/conf/httpd.conf httpd
Please note this example uses the library (official) apache2 image from Docker Hub; you should consult the image creator's instructions for custom images.
You can manually edit the configuration file inside a running container and commit it as a new image.
sudo docker commit my-apache myrepository/myimagename:tag
sudo docker run -d myrepository/myimagename:tag
Create your own image via a Dockerfile, using the FROM <base image> directive.
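A small sketch of that third option, again based on the official httpd image used above (the local config file name is an example):

FROM httpd:2.4
# bake your custom Apache config into the image instead of mounting it at run time
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf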

How to quickly switch between docker environments for development?

I have multiple projects that I need to switch in between on a regular basis. The projects are setup via docker-compose, yet some need external containers to be available.
So in order to run docker-compose up -d in a project, I have to switch to a different directory first and start some basic service containers there (shared instances of mysql, redis, and the like).
I do not want to run all the containers in parallel, and for some it is not possible as they listen on the same port.
What I also find annoying is that certain containers need a script to be run inside them in order to function properly in a development environment, and I find myself repeating the same commands over and over just to switch to a project.
I think this can be automated, I am just unsure how to tackle this problem.
How can I manage to quickly switch the docker environments? My goal is to just have a one-liner.
My current workflow now involves desk.
For each project, I have initialized a desk via:
desk edit project_a
and there I run all the steps that I would have done manually, e.g.:
ponysay "INIT PROJECT A"
docker stop $(docker ps -a -q) # stopping all the running containers
cd ~/src/docker-compose/basic-services
docker-compose up -d
cd ~/src/project_a
docker-compose up -d
docker exec -it project_a_container_name /var/www/project_a/docker/scripts/dev-init.sh
and I switch between the environments via:
desk . project_a
desk . project_b
and switching projects now has become quite easy.
