keep Guard running inside a docker container - ruby

Hi everyone.
Is there a way to keep Guard running inside a Docker container?
At this point I have tried many different things, but they all seem to fail.
Originally I was running it with bundle exec guard ..., but now that I need to manage the container with Docker Cloud I can no longer pass -i to the run command, so with that approach Guard exits right after booting:
11:03:18 - INFO - Guard is now watching at '/usr/app'
11:03:19 - INFO - Bye bye...
I tried to run Guard programmatically from a Ruby file, like so:
...
guardfile = <<-EOF
...
EOF
Guard.start(guardfile_contents: guardfile)
with the same outcome.
I have also tried using the listen gem directly, but in that case the changes to files are not picked up.
Now I'm out of options. Any suggestions?
Thanks

If you cannot specify the -i option anymore, you can still get the same effect by disabling interaction inside the Guardfile:
https://github.com/guard/guard/wiki/Guardfile-DSL---Configuring-Guard#interactor
with: interactor :off
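For reference, a minimal Guardfile sketch with the interactor disabled (the rspec plugin and its watch pattern here are placeholders; use whatever guards your project actually has):

```ruby
# Disable the Pry-based interactor so Guard does not expect a TTY
interactor :off

# Example plugin block - replace with your own guards
guard :rspec, cmd: 'bundle exec rspec' do
  watch(%r{^spec/.+_spec\.rb$})
end
```

With the interactor off, Guard keeps running in the foreground, which is what a container's main process needs.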

Related

How to restart Laravel queue workers inside a docker container?

I'm working on a production docker compose to run my Laravel app. It has the following containers (amongst others):
php-fpm for the app
nginx
mysql
redis
queue workers (a copy of my php-fpm, plus supervisord).
deployment (another copy of my php-fpm, with a Gitlab runner installed inside it, as well as node+npm, composer etc)
When I push to my production branch, the GitLab runner inside the deployment container executes my deploy script, which builds everything, runs composer update, etc.
Finally, my deploy script needs to restart the queue workers, which live inside the queue workers container. When everything is installed together on a VPS this is easy: php artisan queue:restart.
But how can I get the deployment container to run that command inside the queue workers container?
Potential solutions
My research indicates that containers basically should not talk to each other, but if you must, I have found four possible solutions:
install SSH in both containers
share docker.sock with the deployment container so it can control other containers via docker
have the queue workers container monitor a directory in the filesystem; when it changes, restart the queue workers
communicate between the containers with a tiny http server in the queue workers container
I really want to avoid 1 and 2, for complexity and security reasons respectively.
I lean toward 3, but am concerned about the resources wasted on monitoring the filesystem. Is there a really lightweight way of watching a directory with as many files as a Laravel install has?
4 seems slightly crazy but certainly doable. Are there any really tiny, simple HTTP servers I could install into the queue workers container that could trigger a single command when the deployment container hits an endpoint?
I'm hoping for other suggestions, or if there really is no better way than 3 or 4 above, any suggestions on how to implement either of those options.
Delete the existing containers and create new ones.
A container is fundamentally a wrapper around a single process, so this is similar to stopping the workers with Ctrl+C or kill(1), and then starting them up again. For background workers this shouldn't interrupt more than their current tasks, and Docker gives them an opportunity to finish what they're working on before they get killed.
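As a sketch of what that looks like with Compose (assuming the workers run in a service named worker, which is an assumption here; docker stop sends SIGTERM first and SIGKILL only after the timeout):

```shell
# Give the workers 30 seconds to finish their current job before SIGKILL
docker-compose stop -t 30 worker
# Recreate the container (it picks up a newly built/pulled image)
docker-compose up -d worker
```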
Since the code in the Docker image is fixed, when your CI system produces a new image you need to delete and recreate your containers anyway to run them with the new image. In your design, the "deployment" container needs access to the host's Docker socket (option #2) to be able to do anything Docker-related. I might run the actual build sequence on a different system and push images via a Docker registry, but fundamentally something needs to run sudo docker-compose ... on the target system as part of the deployment process.
A simple Compose-based solution would be to give each image a unique tag, and then pass that as an environment variable:
version: '3.8'
services:
  app:
    image: registry.example.com/php-app:${TAG:-latest}
    ...
  worker:
    image: registry.example.com/php-worker:${TAG:-latest}
    ...
Then your deployment just needs to re-run docker-compose up with the new tag:
ssh root@production.example.com \
  env TAG=20210318 docker-compose up -d
and Compose will take care of recreating the things that have changed.
I believe @David Maze's answer is the recommended way, but I decided to post what I ended up doing in case it helps anyone.
I took a different approach because I am running my CI script inside my containers instead of using a Docker registry and having the CI script rebuild images.
I could still have given the deploy container access to docker.sock (option #2), thereby allowing my CI script to control Docker (e.g. rebuild containers), but I wasn't keen on the security implications of that. So I ended up doing #3, with a simple inotifywait watching for a change to a special timestamp.txt file that my CI script modifies. Because it's monitoring only a single file, it's light on the CPU and is working well.
# Start watching the special directory so we know when to restart the workers.
SITE_DIR=/var/www/projectname/public_html
WATCH_DIR=/var/www/projectname/updated_at

while true
do
    if inotifywait -e create -e modify "$WATCH_DIR"
    then
        echo "Detected Site Code Change. Executing artisan queue:restart."
        sudo -H -u www-data php "$SITE_DIR/artisan" queue:restart
    fi
done
All the deploy script has to do to trigger a queue:restart is:
date > $WATCH_DIR/timestamp.txt

How do I uninstall Docker packages?

I wanted to install CVAT for training an object detection AI using Docker. The install failed partway through for some reason, so CVAT wasn't installed, but all the files were still occupying space on my machine. I tried reinstalling CVAT, and the files keep adding to the occupied space. How do I remove all of these files? I am using a MacBook Pro with macOS Big Sur Beta 4.
Edit: https://github.com/opencv/cvat/blob/develop/cvat/apps/documentation/installation.md#mac-os-mojave
These are the commands I am running to install CVAT.
docker-compose build output: https://pastebin.com/7EkeQ289
docker-compose up -d output: https://pastebin.com/hF3GFDkX
docker exec -it cvat bash -ic 'python3 ~/manage.py createsuperuser' output: https://pastebin.com/Mfh8CivL
If you are trying to remove the containers, attempt the following:
1. docker ps -a - lists all containers
2. docker stop [label or SHA of the containers you want to remove]
3. docker-compose down [YAML configuration file you targeted with docker-compose up] - this should stop all containers, tear down networks, etc. that docker-compose started with 'up'
4. docker container prune - removes all stopped containers
NOTE: If you have other stopped containers that you want to keep, do not run this; remove them individually instead, as suggested in step 2 above, or see Konrad Botor's comment.
https://docs.docker.com/compose/reference/down/
https://docs.docker.com/engine/reference/commandline/container_prune/
If you want to remove the images:
docker images
docker rmi [label or SHA] (RMI is the remove image command)
https://docs.docker.com/engine/reference/commandline/images/
https://docs.docker.com/engine/reference/commandline/rmi/
To speed up this process, analyze the YAML configuration file targeted by your docker-compose build command, and/or check the documentation for the specific project (CVAT), if available, to determine which containers (software) it initializes (and how, if necessary). It might help to paste its contents into the question.
Note: what is taking up space may be volumes that are not cleaned up properly by the project's docker build scripts. See the following documentation on how to remove those:
https://docs.docker.com/engine/reference/commandline/volume_rm/
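As a rough sketch, a full teardown of a Compose-based project like CVAT usually reduces to something like this (run from the directory containing its docker-compose.yml; note that the prune command affects everything on the machine, not just CVAT):

```shell
# Stop and remove this project's containers, networks, volumes and images
docker-compose down --rmi all --volumes
# Optionally reclaim everything else Docker has left behind:
# stopped containers, dangling images, build cache, unused volumes
docker system prune --volumes
```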
I might be missing some context, as I cannot access your pastebin links (behind a firewall at the moment).

Dockerfile for a newly committed docker image

I downloaded an Ubuntu image, and after committing about 3 images from it (each image an incremental change on the one before), I now have my final image. Now I want the Dockerfile of this final image so that I can include commands like starting a service when the container starts.
I have used this command,
sudo docker commit --change='CMD service apache2 start' 172d6dc34471 server_startup
to make the Apache service start when the container is run. This starts the service in the container but doesn't go inside the container; it just starts Apache and exits back to my local environment.
I would like to know how to get the Dockerfile for my final image so that I can include startup commands.
I have also tried dockerfile from image, but it's not working. It just throws an error that a .sock file is missing. I have tried this with both the new image and the parent image that I first downloaded.
Any help is much appreciated. Thanks.
You can use
docker history --no-trunc your_image
and it will show you (in reverse order) what has been done.
It is less user-friendly than dockerfile from image, but it is basically the same thing.
I have a Python script somewhere that does this cleanly; I will check and post it.
Edit: just this should already show a lot:
docker inspect $(docker history your_image | awk 'NR>1 {print $1}')
As CMD you need to provide a non-daemonizing program that stays in the foreground. That is not accomplished by service.
You should try:
sudo docker commit --change='CMD apache2 -DFOREGROUND' 172d6dc34471 server_startup
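If the end goal is a reproducible Dockerfile rather than a chain of committed images, a minimal sketch of the equivalent (the Ubuntu version and apt-installed apache2 here are assumptions) would be:

```dockerfile
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y apache2
# Run Apache in the foreground so the container keeps running
CMD ["apache2ctl", "-D", "FOREGROUND"]
```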
Sorry for the late reply. I used the --change argument while committing the image to point to a custom script (which I added inside the container, with the list of commands to run at startup) that should run when a container is started from the image:
sudo docker commit --change='ENTRYPOINT ["/test.sh"]' containerId autostart
I didn't know much about the Dockerfile approach that everyone is suggesting, but it is also a very good solution.

How to edit files in stopped/not starting docker container

Trying to fix errors and debug problems with my application, which is split over several containers, I frequently edit files in containers:
either I am totally lazy, install nano and edit directly in the container, or
I docker cp the file out of the container, edit it, copy it back and restart the container
Those are intermediate steps before building new container content, which takes a lot longer than doing the above (which of course is only intermediate fiddling around).
Now, I frequently break the container's startup program, which in the breaking cases is either a Node script or a Python web server script; both typically fail on syntax errors.
Is there any way to save those containers? Since they do not start, I cannot docker exec into them, and thus they are lost to me. I then go the rm/rmi/build/run route after fixing the offending file in the build input.
How can I either edit files in a stopped container, copy them in, or start a shell in a stopped container - anything that allows me to fix it?
(It seems a bit like working on a remote computer and breaking the networking configuration - connection is lost "forever" this way and one has to use a fallback, if that exists.)
How to edit Docker container files from the host? looks relevant but is outdated.
I had a problem with a container which wouldn't start due to a bad config change I made.
I was able to copy the file out of the stopped container, edit it, and copy it back, something like:
docker cp docker_web_1:/etc/apache2/sites-enabled/apache2.conf .
(correct the file)
docker cp apache2.conf docker_web_1:/etc/apache2/sites-enabled/apache2.conf
Answering my own question... still hoping for a better answer from a more knowledgeable person!
There are 2 possibilities.
1) Editing the file system on the host directly. This is somewhat dangerous and has a chance of completely breaking the container, and possibly other data, depending on what goes wrong.
2) Changing the startup script to something that never fails, like starting a bash shell, doing the fixes/edits, and then changing the startup program back to the desired one (like node or whatever it was before).
More details:
1) Using
docker ps
to find the running containers or
docker ps -a
to find all containers (including stopped ones) and
docker inspect (containername)
and look for the "Id", one of the first values.
This is the part that contains implementation detail and might change, be aware that you may lose your container this way.
Go to
/var/lib/docker/aufs/diff/9bc343a9..(long container id)/
and there you will find all the files that have changed compared to the image the container is based on. You can overwrite, add or edit files.
Again, I would not recommend this.
2) As described at https://stackoverflow.com/a/32353134/586754, you can find the container's configuration JSON at a path like
/var/lib/docker/containers/9bc343a99..(long container id)/config.json
There you can change the args from e.g. "nodejs app.js" to "/bin/bash". Now restart the Docker service and start the container (you should see that it now starts up correctly). You should use
docker start -i (containername)
to make sure it does not quit straight away. You can now work with the container and/or later attach with
docker exec -ti (containername) /bin/bash
Also, docker cp is rather useful for copying files that were edited outside of the container.
Also, one should only fall back to those measures if the container is more or less "lost" anyway, so any change would be an improvement.
You can edit the container's filesystem directly, but I don't know if it is a good idea.
First you need to find the path of the directory that is used as the runtime root for the container:
Run docker container inspect id/name.
Look for the key UpperDir in the JSON output.
That is your directory.
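For example, this sketch prints just that path (the UpperDir key is only present with overlay-style storage drivers such as overlay2):

```shell
docker container inspect --format '{{ .GraphDriver.Data.UpperDir }}' my_container
```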
If you are trying to restart a stopped container, and need to alter it because of a misconfiguration that keeps it from starting, you can do the following using the docker cp command (similar to the previous suggestion). This procedure lets you remove files and make any other changes needed. With luck you can skip a lot of the steps below.
1. Use docker inspect to find the entrypoint (named Path in some versions)
2. Create a clone of the container using docker run
3. Enter the clone using docker exec -ti <clone> bash (if it's a *nix container)
4. Locate the entrypoint file by looking through the clone
5. Copy the old entrypoint script out using docker cp <clone>:<entrypoint path> ./
6. Modify, or create a new, entrypoint script, for instance
#!/bin/bash
tail -f /etc/hosts
7. Ensure the script has execution rights
8. Replace the old entrypoint using docker cp ./<script> <container>:<entrypoint path>
9. Start the old container using docker start <container>
10. Redo steps 6-9 until the container starts
11. Fix the issues in the container
12. Restore the entrypoint if needed, redoing steps 6-9 as required
13. Remove the clone if needed

God always reports Socket drbunix:///tmp/god.17165.sock already in use by another instance of god

I am using God for the first time, to monitor my resque and resque-scheduler processes. I followed the tutorial on God's home page. According to that, if there is already a watch added to God by:
sudo god -c /path/to/config.god
then after editing the watch it can be added to God again using the same command. But God does not allow me to add it, and reports that the sock is already in use; I have to manually kill the process and add the watch again. Am I missing something?
I need to add the watch again after every deployment; that is why I am trying to do this.
The page you link to does not actually support your assertion that you reload watches by using the same command that starts god, to wit:
sudo god -c /path/to/config.god
Instead it says to use:
sudo god load path/to/config.god
Specifically, the extracted parts of that page are:
STARTING AND CONTROLLING GOD
To start the god monitoring process as a daemon simply run the god executable passing in the path to the config file (you need to sudo if you're using events on Linux or want to use the setuid/setgid functionality):
$ sudo god -c /path/to/config.god
: : : : :
DYNAMICALLY LOADING CONFIG FILES INTO AN ALREADY RUNNING GOD
God allows you to load or reload configurations into an already running instance. There are a few things to consider when doing this:
Existing Watches with the same name as the incoming Watches will be overridden by the new config.
All paths must be either absolute or relative to the path from which god was started.
To load a config into a running god, issue the following command:
$ sudo god load path/to/config.god
If you're relying on the text:
Ctrl-C out of the foregrounded god instance. Notice that your current simple server will continue to run. Start god again with the same command as before.
then that only applies to a foregrounded instance of god, one run with -D. If you Ctrl-C that, god will stop (but the servers it started will continue). If your god instance is running in the background (no -D), you need to use kill to stop it in the same manner.
