How to use ddev commands in exec-host hooks for an automated backup

I've made a custom command for my ddev project that creates a database backup with a single command (yes, I'm lazy, sorry).
I was wondering if there's some way to hook into a ddev command, e.g. ddev poweroff, to run another command or command sequence along with it.
The idea is to make a backup of all databases into a specific directory when I run ddev poweroff.
Does anyone have a clue about it?
Thanks

Sure, pre-stop exec-host hooks can invoke ddev directly. Here's an example (in .ddev/config.yaml) of a pre-stop hook that does both a snapshot and a traditional db dump:
hooks:
  pre-stop:
    - exec-host: ddev snapshot --name=$(date +%Y%m%d%H%M)
    - exec-host: mkdir -p .tarballs && ddev export-db --file=.tarballs/db.$(date +%Y%m%d%H%M).sql.gz
For more info on hooks, see DDEV hook docs.
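By the way, a custom host command like the one you describe can live in .ddev/commands/host/. Here's a minimal sketch (the command name, backup directory, and file naming are my own assumptions, adjust to taste):
.ddev/commands/host/backup:
#!/bin/bash
## Description: Export the project database to .tarballs/
## Usage: backup
## Example: "ddev backup"
mkdir -p .tarballs
ddev export-db --file=.tarballs/db.$(date +%Y%m%d%H%M).sql.gz
Make the file executable and it runs as ddev backup.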
Hope that helps!

Related

How can I add to the $PATH in the DDEV web container? (for drush, for example)

I need an additional directory to appear in my $PATH in the ddev web container, for example, /var/www/html/bin, and I need it to show up not just when I ddev ssh (which could be done with ~/.homeadditions/.bashrc) but also when I use ddev exec. This came to a head with ddev v1.17, where drush launcher was removed from the web container and so ddev exec drush and ddev drush no longer found the drush command in my nonstandard composer install.
Edited for DDEV v1.19+:
There's a new and efficient/flexible way to add to the $PATH now.
mkdir -p .ddev/homeadditions/.bashrc.d
cd .ddev/homeadditions/.bashrc.d
Then create a file named path.sh there (the name isn't important) and add something to it that changes the $PATH, for example export PATH=$PATH:/var/www/html/somewhereelse/vendor/bin
See docs.
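After a ddev restart you can sanity-check the result, assuming ddev exec picks up the .bashrc.d files as described (quote the command so $PATH expands inside the container rather than on the host):
ddev exec 'echo $PATH'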
--------- Original answer below ------------
There are at least two ways to extend the $PATH in the web container. Here we'll add /var/www/html/something to the standard $PATH.
Either of these options will work:
Mount a replacement commandline-addons.bashrc with a docker-compose.path.yaml:
.ddev/docker-compose.path.yaml:
version: '3.6'
#ddev-generated
services:
  web:
    volumes:
      - ./commandline-addons.bashrc:/etc/bashrc/commandline-addons.bashrc
.ddev/commandline-addons.bashrc:
export PATH="$PATH:/var/www/html/vendor/bin:/var/www/html/something"
Add/edit the commandline-addons.bashrc in a .ddev/web-build/Dockerfile. (If you use this option, the custom Dockerfile overrides webimage_extra_packages in .ddev/config.yaml, so you'll have to use the workarounds in the docs.)
ARG BASE_IMAGE
FROM $BASE_IMAGE
RUN echo 'export PATH="$PATH:/var/www/html/vendor/bin:/var/www/html/something"' >/etc/bashrc/commandline-addons.bashrc
Both of these options require a ddev restart after adding them.

Docker Build/Deploy using Bash Script

I have a deploy script that I'm trying to use on my server for CD, but I'm running into issues writing the bash script to complete some of the required steps, such as running npm and the migration commands.
How would I go about getting into a container's bash from this script, running the commands below, and then exiting so the script can finish bringing up the changes?
Here is the script I am trying to automate:
cd /Project
docker-compose -f docker-compose.prod.yml down
git pull
docker-compose -f docker-compose.prod.yml build
# all good until here because it opens bash and does not allow more commands to run
docker-compose -f docker-compose.prod.yml run --rm web bash
npm install # should be run inside of web bash
python manage.py migrate_all # should be run inside of web bash
exit # should be run inside of web bash
# back out of web bash
docker-compose -f docker-compose.prod.yml up -d
Typically a Docker image is self-contained, and knows how to start itself up without any user intervention. With some limited exceptions, you shouldn't ever need to docker-compose run interactive shells to do post-deploy setup, and docker exec should be reserved for emergency debugging.
You're doing two things in this script.
The first is to install Node packages. These should be encapsulated in your image; your Dockerfile will almost always look something like
FROM node
WORKDIR /app
COPY package*.json ./
RUN npm ci # <--- this line
COPY . .
CMD ["node", "index.js"]
Since the dependencies are in your image, you don't need to re-install them when the image starts up. Conversely, if you change your package.json file, re-running docker-compose build will re-run the npm install step and you'll get a clean package tree.
(There's a somewhat common setup that puts the node_modules directory into an anonymous volume, and overwrites the image's code with a bind mount. If you update your image, it will get the old node_modules directory from the anonymous volume and ignore the image updates. Delete these volumes: and use the code that's built into the image.)
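For reference, the anti-pattern being described usually looks like this in a docker-compose.yml (service name and paths here are illustrative):
services:
  web:
    build: .
    volumes:
      - .:/app              # bind mount hides the code built into the image
      - /app/node_modules   # anonymous volume pins a stale package tree
Removing both volumes: lines lets the container run the code and node_modules that were baked into the image.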
Database migrations are a little trickier since you can't run them during the image build phase. There are two good approaches to this. One is to always have the container run migrations on startup. You can use an entrypoint script like:
#!/bin/sh
python manage.py migrate_all
exec "$#"
Make this script executable and set it as the image's ENTRYPOINT, leaving the CMD as the command that actually starts the application. On every container startup it will run migrations and then run the main container command, whatever that may be.
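For concreteness, the Dockerfile wiring might look like this (entrypoint.sh is the script above; the CMD is a placeholder for whatever actually starts your app):
COPY entrypoint.sh .
RUN chmod +x entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]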
This approach doesn't necessarily work well if you have multiple replicas of the container (especially in a cluster environment like Docker Swarm or Kubernetes) or if you ever need to downgrade. In these cases it might make more sense to manually run migrations by hand. You can do that separately from the main container lifecycle with
docker-compose run web \
python manage.py migrate_all
Finally, in terms of the lifecycle you describe, Docker images are immutable: this means that it's safe to rebuild new images while the old ones are running. A minimum-downtime approach to the upgrade sequence you describe might look like:
git pull
# Build new images (includes `npm install`)
docker-compose build
# Run migrations (if required)
docker-compose run web python manage.py migrate_all
# Restart all containers
docker-compose up --force-recreate

Is it possible to have DDev automatically launch the site after starting?

Using DDEV I'd like to be able to run ddev start and have it automatically open the site in my browser after starting. I know I can run ddev launch separately, but I'd like it to happen automatically.
I tried chaining the commands but that failed, and I also looked at the post-start hook but couldn't get it to work.
Is this possible?
(edited with full recipe)
Use a post-start exec-host hook that does ddev launch. Add this to your .ddev/config.yaml:
hooks:
  post-start:
    - exec-host: ddev launch
Alternatively, you don't have to run ddev launch after ddev start at all – just run ddev launch; if the project is not running yet, a ddev start will be executed automatically as well.

How can I export a database from ddev?

ddev currently lacks an export-db command (see https://github.com/drud/ddev/issues/767)
How can I export a database?
Use the ddev export-db command. You can do many things (from ddev export-db -h):
ddev export-db --file=/tmp/db.sql.gz
ddev export-db -f /tmp/db.sql.gz
ddev export-db --gzip=false --file /tmp/db.sql
ddev export-db > /tmp/db.sql.gz
ddev export-db --gzip=false > /tmp/db.sql
ddev export-db myproject --gzip=false --file=/tmp/myproject.sql
ddev export-db someproject --gzip=false --file=/tmp/someproject.sql
In addition, don't forget about ddev snapshot, which is a great way to make a quick dump of your db, although it's not as portable as a text-based dump. (See ddev snapshot -h and ddev restore-snapshot -h.)
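For example (the snapshot name here is arbitrary):
ddev snapshot --name=before-upgrade
ddev restore-snapshot before-upgrade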
Using traditional techniques inside the container:
Because DDEV has all the familiar tools inside the container, you can also use commands like mysqldump, mysql, and psql there:
ddev ssh
mkdir /var/www/html/.tarballs
mysqldump db | gzip >/var/www/html/.tarballs/db.sql.gz
# or with explicit authentication
mysqldump -udb -pdb -hdb db | gzip >/var/www/html/.tarballs/db.sql.gz
or for Drupal/drush users:
ddev ssh
drush sql-dump --gzip >.tarballs/my-project-db.sql.gz
That places the dump in the project's .tarballs directory for later use (it's on the host).
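Later you can pull such a dump back in with ddev import-db, for example:
ddev import-db --file=.tarballs/db.sql.gz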
See database management docs for more info.
I think it's very useful to have the TYPO3 equivalent of this; thanks to Outdoorsman for the comment on the GitHub issue above.
Outdoorsman wrote:
I'm coming from the TYPO3 CMS world and also agree this would be a good thing to have. I currently use ddev ssh and
./vendor/bin/typo3cms database:export | gzip > project_name_db.sql.gz
if the typo3_console extension is installed via composer.
You could also use Drupal Console:
ddev start
ddev ssh
drupal database:dump
drupal database:restore --file db-2018-07-04-11-31-22.sql
To expand on rfay's answer: I generally prefer the drush CLI, but it's a matter of preference.
ddev start
ddev ssh
drush sql:dump --result-file=../db-export.sql

How can I find out what's going wrong with a ddev container, or see the logs?

I'm working on a project using ddev and I don't know how to troubleshoot things because they're hidden in the containers they run in. For example, I've tried ddev logs but it doesn't give me enough information.
Use ddev list and ddev describe to get the general idea of what's going on, but then ddev logs is the first line of investigation. It gets the logs of the web container (both the nginx error log and the php-fpm error log, mixed together).
Extra approaches:
You could probably (temporarily) remove any custom nginx/php/mysql configuration that you might have added to the project in the .ddev folder, as those are common culprits.
Please make sure you're using the current docker images that match the ddev version you're using. I recommend deleting any "webimage" or "dbimage" lines in your .ddev/config.yaml.
ddev logs -f will "follow" the web logs, so you can see what happens when you hit a particular URL.
ddev logs -s db (or of course ddev logs -f -s db) will show you the logs of the database container (MariaDB logs).
Use ddev ssh (for the web container) or ddev ssh -s db (for the db container) to actually go in there and look around; see the sketch after this list. The most important logs are in /var/log/ and /var/log/nginx/.
You can even use ddev logs when a container has crashed or stopped for some reason, and figure out what happened with it.
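For example, poking around the web container's logs might look like this (a quick sketch; exact file names can vary with your configuration):
ddev ssh
ls /var/log /var/log/nginx
tail -50 /var/log/nginx/error.log
exit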
Don't forget the troubleshooting section in the docs.
