How can I add to the $PATH in the DDEV web container? (for drush, for example) - ddev

I need an additional directory to appear in my $PATH in the ddev web container, for example, /var/www/html/bin, and I need it to show up not just when I ddev ssh (which could be done with ~/.homeadditions/.bashrc) but also when I use ddev exec. This came to a head with ddev v1.17, where drush launcher was removed from the web container and so ddev exec drush and ddev drush no longer found the drush command in my nonstandard composer install.

Edited for DDEV v1.19+:
There's a new and efficient/flexible way to add to the $PATH now.
mkdir -p .ddev/homeadditions/.bashrc.d
cd .ddev/homeadditions/.bashrc.d
Then create a file there named, say, path.sh (the name isn't important) and add something that changes the $PATH, for example export PATH=$PATH:/var/www/html/somewhereelse/vendor/bin
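Putting that together from the project root (a minimal sketch; the directory added to $PATH is just an example, and a ddev restart is needed because homeadditions content is copied into the container when it starts):
cat > .ddev/homeadditions/.bashrc.d/path.sh <<'EOF'
# add an extra directory to the in-container $PATH (example path)
export PATH="$PATH:/var/www/html/somewhereelse/vendor/bin"
EOF
ddev restart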
See docs.
--------- Original answer below ------------
There are at least two ways to extend the $PATH in the web container. Here we'll add /var/www/html/something to the standard $PATH.
Either of these options will work:
Mount a replacement commandline-addons.bashrc with a docker-compose.path.yaml:
.ddev/docker-compose.path.yaml:
version: '3.6'
#ddev-generated
services:
  web:
    volumes:
      - ./commandline-addons.bashrc:/etc/bashrc/commandline-addons.bashrc
.ddev/commandline-addons.bashrc:
export PATH="$PATH:/var/www/html/vendor/bin:/var/www/html/something"
Add/edit the commandline-addons.bashrc in a .ddev/web-build/Dockerfile. (If you use this option, the custom Dockerfile overrides webimage_extra_packages in .ddev/config.yaml, so you'll have to use the workarounds described in the docs.)
ARG BASE_IMAGE
FROM $BASE_IMAGE
RUN echo 'export PATH="$PATH:/var/www/html/vendor/bin:/var/www/html/something"' >/etc/bashrc/commandline-addons.bashrc
Both of these options require a ddev restart after adding them.
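A quick way to confirm the change took effect (this assumes ddev exec runs the command through bash inside the container, so the single quotes keep $PATH from being expanded on the host):
ddev exec 'echo $PATH'
ddev exec which drush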

Related

How to use ddev commands in its own exec-host hooks for an automated backup

I've made a custom command for my ddev that creates a database backup with a single command (yes, I'm lazy, sorry).
I was wondering whether there's some way to hook into a ddev command, e.g. ddev poweroff, to run another command or command sequence together with it.
The idea is to make a backup of all databases in a specific directory when I run the ddev poweroff.
Anyone have a clue about it?
Thanks
Sure, pre-stop exec-host hooks can invoke ddev directly. Here's an example of a pre-stop hook that does both a snapshot and a traditional db dump:
hooks:
  pre-stop:
    - exec-host: ddev snapshot --name=$(date +%Y%m%d%H%M)
    - exec-host: mkdir -p .tarballs && ddev export-db --file=.tarballs/db.$(date +%Y%m%d%H%M).sql.gz
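For reference, the hooks block above lives in the project's .ddev/config.yaml. A quick way to check that it fires (snapshots normally land in .ddev/db_snapshots; whether ddev poweroff also triggers per-project pre-stop hooks can depend on your DDEV version, so test that case too):
ddev stop
ls .ddev/db_snapshots .tarballs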
For more info on hooks, see DDEV hook docs.
Hope that helps!

Docker Build/Deploy using Bash Script

I have a deploy script that I am trying to use for my server for CD but I am running into issues writing the bash script to complete some of my required steps such as running npm and the migration commands.
How would I go about getting into a container bash, from this script, running the commands below and then exiting to finish pulling up the changes?
Here is the script I am trying to automate:
cd /Project
docker-compose -f docker-compose.prod.yml down
git pull
docker-compose -f docker-compose.prod.yml build
# all good until here because it opens bash and does not allow more commands to run
docker-compose -f docker-compose.prod.yml run --rm web bash
npm install # should be run inside of web bash
python manage.py migrate_all # should be run inside of web bash
exit # should be run inside of web bash
# back out of web bash
docker-compose -f docker-compose.prod.yml up -d
Typically a Docker image is self-contained, and knows how to start itself up without any user intervention. With some limited exceptions, you shouldn't ever need to docker-compose run interactive shells to do post-deploy setup, and docker exec should be reserved for emergency debugging.
You're doing two things in this script.
The first is to install Node packages. These should be encapsulated in your image; your Dockerfile will almost always look something like
FROM node
WORKDIR /app
COPY package*.json .
RUN npm ci # <--- this line
COPY . .
CMD ["node", "index.js"]
Since the dependencies are in your image, you don't need to re-install them when the image starts up. Conversely, if you change your package.json file, re-running docker-compose build will re-run the npm install step and you'll get a clean package tree.
(There's a somewhat common setup that puts the node_modules directory into an anonymous volume, and overwrites the image's code with a bind mount. If you update your image, it will get the old node_modules directory from the anonymous volume and ignore the image updates. Delete these volumes: and use the code that's built into the image.)
Database migrations are a little trickier since you can't run them during the image build phase. There are two good approaches to this. One is to always have the container run migrations on startup. You can use an entrypoint script like:
#!/bin/sh
python manage.py migrate_all
exec "$#"
Make this script executable and make it the image's ENTRYPOINT, leaving the CMD as the command that actually starts the application. On every container startup it will run migrations and then run the main container command, whatever that may be.
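A sketch of that wiring, assuming the script is saved as entrypoint.sh next to the Dockerfile (the file name and the node CMD are illustrative, carried over from the earlier Dockerfile sketch, not taken from your project):
chmod +x entrypoint.sh
Then in the Dockerfile:
COPY entrypoint.sh ./
ENTRYPOINT ["./entrypoint.sh"]
CMD ["node", "index.js"]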
This approach doesn't necessarily work well if you have multiple replicas of the container (especially in a cluster environment like Docker Swarm or Kubernetes) or if you ever need to downgrade. In these cases it might make more sense to manually run migrations by hand. You can do that separately from the main container lifecycle with
docker-compose run web \
  python manage.py migrate_all
Finally, in terms of the lifecycle you describe, Docker images are immutable: this means that it's safe to rebuild new images while the old ones are running. A minimum-downtime approach to the upgrade sequence you describe might look like:
git pull
# Build new images (includes `npm install`)
docker-compose build
# Run migrations (if required)
docker-compose run web python manage.py migrate_all
# Restart all containers
docker-compose up --force-recreate
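Put together as a single script, a sketch might look like this (the compose file name, the web service, and migrate_all are carried over from your example; set -e aborts on the first failure):
#!/bin/sh
set -e
cd /Project
git pull
# rebuild images; npm install happens here via the Dockerfile
docker-compose -f docker-compose.prod.yml build
# run migrations in a one-off container (if required)
docker-compose -f docker-compose.prod.yml run --rm web python manage.py migrate_all
# recreate running containers from the new images
docker-compose -f docker-compose.prod.yml up -d --force-recreate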

Can I add e.g. aliases only once for multiple containers with ddev?

I have several projects where I use ddev. I want to configure bash-scripts and aliases like
alias ll="ls -lh"
for all projects. How can I do this?
My ddev version is 1.14.2 and I am on a Mac with Bash 5.0.11 configured in my terminal.
I know if I use .ddev/homeadditions/.bash_aliases I have all aliases, which I configure in .bash_aliases, but I don't want to configure it again and again for each project.
DRUD did it. As of version 1.15 you can add e.g. global aliases or anything else that belongs in .bash_profile, .profile, or other dotfiles in the home directory.
You just have to move or copy the file:
~/.ddev/homeadditions/bash_aliases.example to ~/.ddev/homeadditions/.bash_aliases
and add your aliases there.
Now, if your container is already running, use ddev restart to "copy" the global alias file into the project's container.
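Concretely, that amounts to (the example file ships with DDEV; adjust the alias to taste):
cp ~/.ddev/homeadditions/bash_aliases.example ~/.ddev/homeadditions/.bash_aliases
echo 'alias ll="ls -lh"' >> ~/.ddev/homeadditions/.bash_aliases
ddev restart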
If you need functions you can use that file too, like:
function test() {
  clear
  echo "Some text"
  ls -lha
}
Thanks to @rfay

How can I make symlinks created inside Docker Linux containers visible from a Windows host (maybe involving Samba, if needed)

Question
How can I see symlinks of docker linux-containers from a windows host? (Even if I have to place an intermediate linux machine exposing the filesystem via NFS or Samba)
Context
In a DEVEL environment, I have this structure in a certain remote filesystem in a Linux within the office:
/files/repos/app-1
/files/repos/app-2
/files/repos/lib-x
/files/repos/lib-y
Both app-1 and app-2 use those libraries, which are vendored and symlinked like this:
/files/repos/app-1/vendor/my-company/lib-x => /files/repos/lib-x
/files/repos/app-1/vendor/my-company/lib-y => /files/repos/lib-y
/files/repos/app-2/vendor/my-company/lib-x => /files/repos/lib-x
/files/repos/app-2/vendor/my-company/lib-y => /files/repos/lib-y
The developers need to be in Windows.
So the developers have their IDE pointing to some mounted unit, for example Z:\ where they see all the repos and projects.
This allows us the following:
Edit any of the projects from its own folder, and run the unit tests for that project, including lib-x and lib-y.
Develop any of the libraries and have the changes reflected in the applications that depend on them (note I'm talking about DEVEL, not PRE or PROD).
From the IDE, see the "complete structure" of any of the applications (for instance app-1), including the classes of lib-x and lib-y, so autocompletion and the like work perfectly.
This has been working like this for nearly a decade and works perfectly.
Problems
The developers need the connection to the server in order to develop, and we wanted to move to local Docker containers so they can work from home.
Going to docker
We have now decided to stop using the office servers and to set up all development within Docker containers.
What does actually work
We just installed docker desktop in Windows and shared C:\repos from the host into the dockers.
We now have some devel machines FROM ubuntu:xxx and run them mounting the volumes.
We made the symlinks within the app-1 and app-2 to lib-x and lib-y from inside the linux containers.
This works perfectly, and the repositories also work fine if we run the applications in the local Docker containers.
Problem with symlinks in linux container and windows host
The problem is now the IDE: while it reads the files in C:\repos\app-1, the symlinks that were created within the Linux containers can't be seen from the host.
This makes the IDE unable to follow C:\repos\app-1\vendor\lib-x, and all the code-completion helpers are broken.
I already know Windows symlinks are not compatible with Linux symlinks.
This forces us to look for an alternate solution.
Solution we've thought of with Samba
In the old topology, a Linux server shared the filesystem via Samba, and Windows could read the symlink contents because they were resolved on the server side rather than the client side. So I thought I could run another Docker container with a Samba server, just to locally share "what the Linux side sees" back to the Windows host.
To do so, I setup this docker-compose:
version: "3.7"
services:
  samba:
    container_name: samba
    hostname: samba
    image: dperson/samba
    volumes:
      - //c/Users/xavi/Documents/repos/test_samba:/mount
    ports:
      - "139:139"
      - "445:445"
    command: samba.sh -s "test_samba;/mnt/repos/test_samba;yes;no;yes;all"
    restart: always
But this conflicts, as port 445 is already in use locally.
If I shut down the local SMB service, then on the next reboot Docker is unable to share C:\ into the containers (I was not conscious that it does this sharing via SMB; could it be switched to NFS or something similar?)
If I map to another port, like 10445:445, then the client is unable to access it, as the client-side Samba port in Windows does not seem to be configurable.
Mapping an IP
So I tried to map an IP:
version: "3.7"
services:
  samba:
    container_name: samba
    hostname: samba
    image: dperson/samba
    volumes:
      - //c/Users/xavi/Documents/repos/test_samba:/mount
    ports:
      - "139:139"
      - "192.168.4.83:445:445"
    command: samba.sh -s "test_samba;/mnt/repos/test_samba;yes;no;yes;all"
    restart: always
    networks:
      samba:
        ipv4_address: 192.168.4.83
networks:
  samba:
    ipam:
      driver: default
      config:
        - subnet: "192.168.4.0/16"
But it seems this still creates problems:
It seems the IP is only used for internal Docker networking and is not visible from the host.
It seems the original service listens not on 127.0.0.1:445 but on 0.0.0.0:445, so it still "blocks" the attempt to listen on 192.168.4.83:445.
So question
How could I make a Windows host see the resolved contents of symlinks, so that the IDE sees the vendored content that is linked from inside the Docker Linux containers?
TL;DR
Run git-bash as administrator.
Issue export MSYS=winsymlinks:nativestrict in git-bash.
From there on, ln -s works in Windows.
The links are seen from inside Docker.
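In command form, the TL;DR looks like this (run from an elevated git-bash; /c/tmp, abc and xyz are the example names used in the walkthrough below):
export MSYS=winsymlinks:nativestrict
cd /c/tmp
ln -s abc xyz   # creates a real Windows symlink that Linux containers can follow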
Details
We'll walk thru these steps:
Preparation: Prepare a temporary dir with some files within the abc directory.
See it fail: We'll try to make a symlink and see it fail.
Create symlink: We'll create the symlink in windows and see it. We'll point xyz to abc.
Run docker: We'll then run docker with ubuntu and change contents in xyz.
Check in ubuntu container: We'll see the changes also in abc from within the docker.
Check in Windows host: We'll check both abc and xyz from outside the container.
1. Preparation
In a git-bash go to /c and create a temporary dir tmp.
Inside it, create an abc dir and throw some contents there.
cd /c
mkdir tmp
cd tmp/
mkdir abc
cd abc/
echo 1111 > old_1
echo 2222 > old_2
echo 3333 > old_3
Here's a sample session:
2. See it fail
First let's try the "normal" way and see it fail.
In a git-bash, navigate to /c/tmp
Then do a symlink making xyz to point to abc: ln -s abc xyz
See it fail, by ls-ing tmp and seeing that xyz is a regular dir.
To be sure, create new content in xyz and see it's not there in abc.
Try to create the link. It will not become a symlink, but rather create a copy of the directory.
cd /c/tmp/
ln -s abc xyz
Create new_bad in xyz and see that it does not show up in abc.
cd xyz/
touch new_bad
cd ../abc/
ls -l
Clear the wrong xyz
rm -Rf xyz/
Here's a sample session:
3. Create symlink
Here comes the real stuff. The inspiration comes from @Slayvin's answer here, as well as Git Bash shell fails to create symbolic links and the official git-for-windows repo: https://github.com/git-for-windows/git/pull/156
First open a new git-bash in Administrator mode. The reason is that only admins can create links in Windows.
Once you are a CLI admin, navigate to the destination and set this environment variable:
export MSYS=winsymlinks:nativestrict
This tells the runtime subsystem of git-bash to actually use the symlinks feature. Since we are admins, it will succeed.
Then just make a "normal" symlink as you would expect: ln -s abc xyz
It works! The next move is to test it within Docker.
NOTE: As per Sebastian's answer here https://stackoverflow.com/a/40914277/1315009 you DON'T need to be administrator to create symlinks in git-bash if you have enabled developer mode. In the search bar, type "for developers" and enable it:
4. Run docker + 5. Check in docker
The bash with admin privileges is no longer needed, so we'll close it and start a "normal" bash.
In it, run an ubuntu container with Docker. Use -it to interact with the Ubuntu bash. Use winpty to allow -it to work.
Bind-mount the /c/tmp directory so both abc and xyz are reachable. I chose to mount it to /files.
From inside, cd /files and see that xyz is actually a symlink.
Create some new content in xyz
Run and see:
winpty docker run -it --rm --mount type=bind,source="c:\tmp",target=/files --name ubuntu-link ubuntu
cd /files/
ls -l
Create content:
cd xyz
echo "yeaaahh" > new_good
Check it's really a symlink by going to abc:
cd ..
cd abc/
cat new_good
Sample session:
6. Check in windows host
Step out of the container. Stay in git-bash.
Again: this git-bash does not need to be privileged. The only moment we had to be admin was to "create" the symlink in Windows.
From the unprivileged bash, explore abc as well as xyz and see that the content we created from inside Docker appears in both the original directory and in the symlink.
Sample session:
Final check
We can finally go to a classic CMD to see how it looks. It's clearly indicated that xyz is a symlink to a directory, and we can also see the target there:
Golden touch
If you have the "developer tools" activates as stated above, the only missing thing is the ENV VAR.
We can set this by editing the .bashrc at your windows home:
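For example, appending the line to the git-bash ~/.bashrc (create the file if it doesn't exist yet):
echo 'export MSYS=winsymlinks:nativestrict' >> ~/.bashrc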
By doing this we can use git-bash completely normally and create symlinks from Windows without any extra steps.
Caution
The symlinks created this way work from Windows and are seen from inside Docker, but not the opposite: if you create symlinks inside the container, they don't get created in Windows.
Therefore, in mounted volumes, always set up the symlinks from git-bash and consume them from the container. If you create them from the container, they can still be consumed from the container, but they won't be usable from Windows.
Conclusion
It can all be done with Linux-flavoured commands via git-bash. You just need to be admin (or have developer mode enabled) to create the links and tell the git-bash runtime to use that feature, and the links need to be created from Windows rather than from inside Ubuntu.
I encountered a similar problem with my setup: developing on Windows 10 (where both the IDE and Docker are running), and having the website running inside the container (Linux).
I used to work on a library that is required by the website, working on both projects in parallel. And to do so, the library directory was symlinked (in host/Windows) in the vendor path.
Something like:
+ my-website
    ↪ vendor
        ↪ company
            ↪ my-package (->symlink here)
    ↪ ...
    ↪ docker-compose.yml
+ external-packages
    ↪ company
        ↪ my-package (real files here)
But with Docker, that setup doesn't work anymore.
So the trick is to mount a volume in docker-compose like this:
volumes:
  - ./:/my-app
  - ../external-packages/company:/my-app/vendor/company
So the files in vendor are 'seen' by the web server (inside the container), and we can keep the symlink (made in Windows) between the my-package folders, so the IDE sees them as well.
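To sanity-check the mount from inside the container (the service name web here is just a placeholder for whatever your docker-compose.yml defines):
docker-compose exec web ls -l /my-app/vendor/company/my-package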
I hope this will help you.

How can I export a database from ddev?

ddev currently lacks an export-db command (see https://github.com/drud/ddev/issues/767)
How can I export a database?
Use the ddev export-db command. You can do many things (from ddev export-db -h):
ddev export-db --file=/tmp/db.sql.gz
ddev export-db -f /tmp/db.sql.gz
ddev export-db --gzip=false --file /tmp/db.sql
ddev export-db > /tmp/db.sql.gz
ddev export-db --gzip=false > /tmp/db.sql
ddev export-db myproject --gzip=false --file=/tmp/myproject.sql
ddev export-db someproject --gzip=false --file=/tmp/someproject.sql
In addition, don't forget about ddev snapshot, which is a great and quick way to make a quick dump of your db, but it's not as portable as a text-based dump. (See ddev snapshot -h and ddev restore-snapshot -h.)
Using traditional techniques inside the container:
Because DDEV has all the familiar tools inside the container, you can also use commands like mysqldump, mysql, and psql inside the container:
ddev ssh
mkdir /var/www/html/.tarballs
mysqldump db | gzip >/var/www/html/.tarballs/db.sql.gz
# or with explicit authentication
mysqldump -udb -pdb -hdb db | gzip >/var/www/html/.tarballs/db.sql.gz
or for Drupal/drush users:
ddev ssh
drush sql-dump --gzip >.tarballs/my-project-db.sql.gz
That places the dump in the project's .tarballs directory for later use (it's on the host).
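If you later need to load such a dump back in, ddev import-db can read it; for example, from the project root on the host (check ddev import-db -h for the options available in your DDEV version):
gzip -dc .tarballs/db.sql.gz | ddev import-db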
See database management docs for more info.
I think it's very useful to have the TYPO3 equivalent for this; thanks to Outdoorsman for the comment on the GitHub issue linked above.
Outdoorsman wrote:
I'm coming from the TYPO3 CMS world and also agree this would be a good thing to have. I currently use
ddev ssh and ./vendor/bin/typo3cms database:export | gzip > project_name_db.sql.gz
if the typo3_console extension is installed via composer.
Also, you could use Drupal Console:
ddev start
ddev ssh
drupal database:dump
drupal database:restore --file db-2018-07-04-11-31-22.sql
To expand on @rfay's answer: I generally prefer the Drush CLI; however, it's a matter of preference.
ddev start
ddev ssh
drush sql:dump --result-file=../db-export.sql

Resources