Restore a whole docker-compose project from backup - bash

Assuming the project was backed up with the following script:
https://gist.githubusercontent.com/pirate/265e19a8a768a48cf12834ec87fb0eed/raw/64145b8275a081e0c3082365bb1a5835c8b01b3c/docker-compose-backup.sh
and I have a compressed tar archive with a full backup, is there any "one-liner" way to successfully restore and run the project on a clean machine?

The standard “oops I lost all of my containers” Docker restore script should be roughly
# Get a copy of the repository with docker-compose.yml
git clone git@github.com:...
# Unpack a backup specifically of the bind-mounted
# data directories
tar xzf data-backup.tar.gz
# Recreate all of the containers from scratch
docker-compose up -d --build
This requires making sure all of the data in your application is stored somewhere outside individual Docker containers. In a Docker Compose setup, that means using volumes: directives to store the data somewhere else. A typical practice is to store as much data as you can in databases, and have no persistent data at all in non-database containers. If you’re worried about losing the entire /var/lib/docker tree then prefer bind mounts to named volumes, and use whatever backup solution you normally use to back up the corresponding host directories.
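A minimal sketch of what that can look like (the service names and host paths below are illustrative, not taken from the question):
# Illustrative compose setup: all persistent data lives in bind-mounted host
# directories, so ./data is the only thing that needs backing up.
cat > docker-compose.yml <<'EOF'
version: "3.8"
services:
  db:
    image: postgres:15
    volumes:
      - ./data/postgres:/var/lib/postgresql/data   # bind mount on the host
  app:
    build: .
    depends_on:
      - db
EOF
docker-compose up -d --build
tar czf data-backup.tar.gz data/   # the whole backup is just this host tree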
The script you show tries to preserve a number of things that just don’t need to be backed up:
- If you’re preserving the database container’s data directory in a bind-mounted host directory, you don’t need to separately take a database-level backup (though it doesn’t hurt).
- docker inspect is an extremely low-level diagnostic tool and it’s usually not useful to run it; there’s nothing you can restore from it.
- You don’t need to docker save the images, because they’re in an external Docker registry (Docker Hub, AWS ECR, ...), and in any case you’ve checked their Dockerfiles into source control and can rebuild them.
- You don’t need to docker export individual containers, because they don’t keep mutable data, and you need to destroy them routinely anyway.
The one thing the script does need to do is take advantage of reasonably well-known Docker internals to back up the contents of named volumes. Manually accessing files in /var/lib/docker isn’t usually a best practice, and the actual format of the files there isn’t guaranteed. The Docker documentation discusses backing up and restoring named volumes in a more portable way (though this is a place where I find bind mounts to be much more convenient).
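For completeness, the approach in the Docker documentation boils down to running a throwaway container that mounts the named volume and tars it up (the volume and archive names below are placeholders):
# Back up a named volume without reaching into /var/lib/docker
docker run --rm -v myproject_dbdata:/volume -v "$PWD":/backup busybox \
  tar czf /backup/dbdata.tar.gz -C /volume .
# Restore it into a fresh volume on the target machine
docker run --rm -v myproject_dbdata:/volume -v "$PWD":/backup busybox \
  tar xzf /backup/dbdata.tar.gz -C /volume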

Related

Is there a difference between store and move methods in laravel file upload and when to use one over the other?

When uploading images, I realize that FIRST I can use the store method, which saves images in storage/app/public by default; then I'll have to create a symbolic link from public/storage to storage/app/public to access the image.
SECOND, I can instead use the move method and have the image saved directly in public/images.
I feel like the first way is longer for no reason. Are there scenarios where you'd use one over the other, or is it just a matter of preference?
Yes, it's better in some cases, but it might not be relevant to you. Let me explain.
The storage folder is usually considered a "shared" folder. By that I mean its contents usually should not change when you deploy your application, and most of its contents are usually even ignored in git (to prevent your uploads from ending up in your git repository).
So storing your uploads in this case inside the storage/app/public directory means the contents are not in git and the storage folder can be shared between deployments.
This is useful when you are using tools like Envoyer, Envoy or other "zero downtime" deployment tools.
Most (if not all) zero downtime deployment tools work by cloning your application to a fresh directory and running composer install and other setup commands before promoting that fresh directory to the current directory which is used by your webserver to serve your app. Changing a symlink over to a new directory is instant and thus you have zero downtime deployments since all setup (installing dependencies etc.) was done in a folder not yet serving traffic to your users.
And since each deployment starts with a fresh clone of your repository, your public and storage folders are empty again... which is not what you want, because you of course want to retain uploads between deployments. The way those deployment tools work around that is to keep the storage folder in a separate, shared location: on every deployment they clone your git repo and symlink its storage folder to that shared storage folder, so all your deployments share the same storage directory, making sure uploads (and, depending on the drivers you use, also sessions, caches, and logs) are the same for every deployment.
And from there you can use php artisan storage:link to symlink public/storage to storage/app/public so that the files are publicly accessible.
(Note: with the symlink in place it doesn't matter which path you write to, storage/app/public or public/storage, because they point to the same folder on disk.)
So this seemingly overcomplicated symlink dance is there to make deployments easier and to keep all your "storage" in a single place, the storage dir.
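A rough sketch of what those tools do under the hood (all paths, the repository URL, and the release name are hypothetical):
# One shared storage dir survives across releases; each release symlinks to it
RELEASE=/var/www/app/releases/2024-01-01-120000   # hypothetical release dir
SHARED=/var/www/app/storage                       # persists between deployments
git clone git@github.com:you/app.git "$RELEASE"   # placeholder repository
rm -rf "$RELEASE/storage"
ln -s "$SHARED" "$RELEASE/storage"                # every release shares storage
cd "$RELEASE"
composer install --no-dev
php artisan storage:link                          # public/storage -> storage/app/public
ln -sfn "$RELEASE" /var/www/app/current           # flip the symlink: zero downtime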
When you are not using those zero-downtime deployment tools this may all seem like a lot of ceremony. But even then it can still be useful to have a single place where all your app storage lives, for backups for example, instead of having to back up multiple directories.
From the Laravel documentation (https://laravel.com/docs/5.4/filesystem):
The move method may be used to rename or move an existing file to a new location.
Laravel makes it very easy to store uploaded files using the store method on an uploaded file instance.
So, use storeAs() or store() when you are working with a file that has been uploaded (i.e. within a controller), and move() only when you've already got a file on disk and want to move it from one location to another.

Update the base image of a committed docker container

I have just committed and saved a MySQL container to an image. This MySQL container was created a long time ago using Portainer, from the main upstream MySQL image, just by clicking through the Portainer web interface.
The point of the image was to take it to another server with all the history, metadata and such. I also saved the volume with the MySQL data.
I managed to replicate exactly the same environment on the new server.
But now I'm a bit concerned, as I cannot find a way to update the "base" MySQL image.
To be clear, I did not build any image with any Dockerfile. The process was exactly as I stated before: through Portainer, using the mainstream MySQL image from Docker Hub.
So, is there any way to update the MySQL part of my container? I believe there should be, because of Docker's whole layered-image philosophy.
Thanks for your time and help
You can't update the base image underneath an existing image, no matter how you created it. You need to start over from the updated base image and re-run whatever commands you originally ran to create the image. The standard docker build system will do this all for you, given a straightforward text description of what image you start FROM and what commands you need to RUN.
In the particular case of the Docker Hub database images, there's actually fairly little you can do with a derived image. These images are generally set up so that it's impossible to create a derived image with preloaded data; data is always in a volume, possibly an anonymous volume that gets automatically created, and that can't be committed. You can add files to /docker-entrypoint-initdb.d that will be executed the first time the database starts up, but once you have data in a volume, these won't be considered.
You might try running the volume you have against an unmodified current mysql image:
docker run -d -p 3306:3306 -v mysql_data:/var/lib/mysql mysql:8
If you do need to rebuild the custom image, I'd strongly encourage you to do it by writing a Dockerfile to regenerate the image, and to check that into source control. Then when you do need to update the base image (security issues happen!) you can just re-run docker build. Avoid docker commit, since it will lead to an unreproducible image and exactly the sort of question you're asking.
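A minimal sketch of that workflow (the file layout and image tag are made up for illustration):
# Keep this Dockerfile in source control; re-run the build whenever mysql:8 updates
cat > Dockerfile <<'EOF'
FROM mysql:8
# Seed scripts run only on the very first start, while the data volume is empty
COPY ./init/ /docker-entrypoint-initdb.d/
EOF
docker build -t my-mysql:latest .
docker run -d -p 3306:3306 -v mysql_data:/var/lib/mysql my-mysql:latest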

Ansible - choose the most recently updated side and synchronize the local and remote directories?

I have several workstations with a similar setup (home computer, workstation at the office) and also a server that is used as remote storage. I'm trying to use Ansible to back up and synchronize several application profile directories (the IntelliJ IDEA profile dir, my desktop environment profile dir, some applications unpacked from tar.gz distributions, and so on) between these devices. I never use all devices at the same time.
The logic for every dir should be:
- check the local directory's modification timestamp
- check the modification timestamp of the directory on the remote server
- if the local copy is older, overwrite it with the contents of the directory on the remote server; otherwise, back up the contents of the local directory to the remote directory (effectively overwriting it)
I'm going to use Ansible with the synchronize module. But implementing the logic above using when for every folder in my (rather long) list sounds like reinventing the wheel. There should be a better way to accomplish this.
This seems like a common task; maybe there is a third-party Ansible role/plugin that does it? Or maybe a separate application that could be called via the command module?
Ansible is obviously not the appropriate tool for this.
Why don't you use an online storage service with folder sync capabilities, like Dropbox?
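That said, if you do end up scripting it yourself, the newest-wins logic described in the question could be sketched in plain shell with rsync (the remote host and directory names below are hypothetical placeholders):
#!/bin/sh
# Compare the newest file on each side and sync from the newer copy to the older one.
set -eu
REMOTE_HOST="user@backup-host"
REMOTE_BASE="profiles"
DIR=".config/JetBrains"   # one of the profile directories
local_ts=$(find "$HOME/$DIR" -type f -printf '%T@\n' | sort -n | tail -1)
remote_ts=$(ssh "$REMOTE_HOST" "find $REMOTE_BASE/$DIR -type f -printf '%T@\n' | sort -n | tail -1")
if [ "${local_ts%.*}" -lt "${remote_ts%.*}" ]; then
  rsync -a --delete "$REMOTE_HOST:$REMOTE_BASE/$DIR/" "$HOME/$DIR/"   # remote is newer: pull
else
  rsync -a --delete "$HOME/$DIR/" "$REMOTE_HOST:$REMOTE_BASE/$DIR/"   # local is newer: push
fi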

Docker out of space when running bundle install

I'm trying to build an image for my app, FROM ruby:2.2.1; my app folder adds up to about 200 MB compressed.
I'm receiving a "Your disk is full" error when running bundle install. It also takes too much time to create the compressed build context. However, running df on /var/ shows more than 1 TB available; this, however, is not what bothers me.
My question is: can I ignore everything using an * in .dockerignore and then add my root project folder as a volume using docker-compose? Does this sound like a good idea?
I've also thought about:
- Moving the Dockerfile to a subfolder (but I don't think I'm able to add a parent folder as a volume using docker-compose).
- Doing a git clone in the Dockerfile, but as I already have the files on my computer this sounds like a redundant step.
Should I just figure out how to add more disk space to the Docker container? But I still don't like the time it takes to create the context.
Note: your question doesn't match your title or the first half of your post; I'll answer what you've asked.
My question is: can I ignore everything using an * in .dockerignore and then add my root project folder as a volume using docker-compose? Does this sound like a good idea?
You can add your project with a volume in docker-compose, but you lose much of the portability (your image will be incomplete if anyone else tries to use it without your volume data). You also lose the ability to do any compilation steps and may increase your container startup time as it pulls in dependencies. Lastly, if you run out of space on build, there's a good chance you'll run out of space on a run unless your volume data is a significant portion of your container size.
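Concretely, the approach being weighed looks roughly like this (the paths and service name are hypothetical):
echo '*' > .dockerignore          # send an (almost) empty build context
cat > docker-compose.yml <<'EOF'
version: "3.8"
services:
  web:
    build: .
    volumes:
      - .:/usr/src/app   # source comes from the host at run time, not from the image
EOF
docker-compose up -d --build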
If I ignore a file on .dockerignore can I use COPY on that file from Dockerfile?
No, you can't use COPY or ADD with any file that's excluded from the build context sent to the Docker daemon via .dockerignore.

Can Docker Autonomously Restart Containers and Commit Changes with New Image Tag?

I am using Docker for my deployment, and as it stands I use Docker-Compose (a .yml file) to launch ~6 containers simultaneously. Each image in the Compose file is found locally (there is no internet connection within the deployment environment).
As it stands the steps my deployment takes are as follows:
1. Run docker-compose up (launches 6 containers from local images such as image1:latest, image2:latest, etc., using the images with the "latest" tag).
2. When exited/stopped, I have 6 stopped containers. Manually restart each of the six stopped containers (docker start xxx).
3. Manually commit each restarted container (docker commit xxx).
4. Manually re-tag each of the previous-generation images incrementally (image1:latest -> image1:version1, image1:version2, etc.) and manually delete the image carrying the "latest" tag.
5. Manually tag each of the committed containers (which are now images) with the "latest" tag (image1:latest).
This process is rather hands-on, whereas our deployment requires the user's involvement to be limited to running the docker-compose up command and then shutting down/stopping Docker-Compose.
The required end goal is to have a script, or Docker, take care of these steps by itself and end up with different generations of images (image1:version1, image1:version2, image1:latest, etc.).
So, my question is, how would I go about creating a script (or have Docker do it) where the script (or Docker) can autonomously:
1. Restart the stopped containers upon stopping/exiting of Docker-Compose.
2. Commit the restarted containers.
3. Re-tag the previous "latest" images with an incremented version number (image1:version1, image1:version2, etc.), then delete the previous image1:latest image.
4. Tag the newly committed, restarted containers (which are now images) with the "latest" tag.
This is a rather lengthy and intensive question to answer, but I would appreciate any help with any of the steps required to accomplish my task. Thank you.
The watchtower project tries to address this.
https://github.com/CenturyLinkLabs/watchtower
It automatically restarts a running container when its base image is updated.
It is also intelligent: for example, when it needs to restart a container that is linked to other containers, it does so without destroying the links.
I've never tried it but worth a shot!
Let us know how it goes. I'm gonna favourite this question as it sounds like a great idea.
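For reference, watchtower's documented quick start is just a container with access to the Docker socket (the project has moved homes since the link above, so the image name below may differ):
docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower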
PS If watchtower proves a pain and you try to do this manually then ...
docker inspect
is your friend, since it gives you loads of info about containers and images, allowing you to determine their current status.
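And if you do go the manual route, the commit-and-retag steps from the question can be scripted; a rough sketch (the service/image names and the version argument are placeholders, and it assumes each Compose service is named after its image):
#!/bin/sh
# Archive the current :latest under a version tag, then snapshot each stopped
# container back to :latest. docker commit works on stopped containers, so the
# explicit restart step isn't strictly needed.
set -eu
NEW_VERSION="$1"                                # e.g. ./snapshot.sh version3
for SVC in image1 image2 image3; do             # list your six services here
  CONTAINER=$(docker-compose ps -q "$SVC")      # includes stopped containers
  docker tag "$SVC:latest" "$SVC:$NEW_VERSION"  # keep the previous generation
  docker commit "$CONTAINER" "$SVC:latest"      # new "latest" from the container
done
Re-tagging before committing also means there is nothing left to delete: the old image simply lives on under its version tag.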
