We are a group of developers working on multiple ddev projects. Some of these projects have a "." in their name, which now breaks the PhpStorm integration.
Is there an easy way to rename a project and let all the other developers tell ddev (after they've pulled the new ddev config.yaml) what the previous project name was, so data (like the database) can be migrated?
Please use the instructions in the DDEV FAQ "How can I change the name of a project?"
Use this process:
1. Export the database of the project: ddev export-db --file=/path/to/db.sql.gz
2. Delete the project: ddev delete (by default this makes a snapshot, which is a nice safety valve).
3. Rename the project: ddev config --project-name=<new_name>
4. ddev start
5. ddev import-db --src=/path/to/db.sql.gz
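For example, with a hypothetical project currently named "my.project" that should become "my-project", the whole sequence run from the project directory might look like this (the paths and names are placeholders):

cd ~/projects/my.project
ddev export-db --file=/tmp/my-project-db.sql.gz
ddev delete                               # makes a snapshot by default
ddev config --project-name=my-project
ddev start
ddev import-db --src=/tmp/my-project-db.sql.gz

Each of the other developers repeats the same export/import (or snapshot/restore) locally after pulling the renamed config, since these steps don't give ddev any notion of the previous project name.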
Related
I have projects on Windows, but since I upgraded Docker to work with WSL 2 I have to run ddev commands from the WSL console, and the db containers have empty databases.
One way to migrate the databases is to dump them from the old containers and import them into the new ones. But is there a way to do this automatically for all projects, or at least project by project?
Start the project in the Hyper-V Docker environment with ddev start. Once the project is running, there are two ways to carry the database over: taking a snapshot, or exporting it in SQL format, which is more portable (in case you want to set the project up somewhere other than ddev).
To take a snapshot, use the ddev snapshot command; it creates a database snapshot under the .ddev/db_snapshots folder. Copy it from there into the WSL2 project directory, under the same .ddev/db_snapshots path, and then run ddev restore-snapshot [snapshot name]. For more, see the docs: https://ddev.readthedocs.io/en/latest/users/cli-usage/#snapshotting-and-restoring-a-database
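A sketch of that snapshot route, with the snapshot name and the WSL2 path as placeholders (where your WSL2 checkout lives will differ):

# in the old (Hyper-V) environment, from the project directory
ddev snapshot
# copy the resulting snapshot into the same location in the WSL2 checkout
cp -r .ddev/db_snapshots/<snapshot-name> /path/to/wsl2/project/.ddev/db_snapshots/
# in the WSL2 environment, from the project directory
ddev restore-snapshot <snapshot-name>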
The other method is to use ddev export-db in the old project directory and then ddev import-db in the new project directory under WSL2. Export command docs: https://ddev.readthedocs.io/en/latest/users/cli-usage/#exporting-a-database Import command docs: https://ddev.readthedocs.io/en/latest/users/cli-usage/#importing-a-database
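And a sketch of the export-db/import-db route, again with placeholder paths (when handing the dump from Windows to WSL2 you would typically go through a path like /mnt/c/...):

# old environment (Hyper-V), from the project directory
ddev export-db --file=/tmp/myproject.sql.gz
# new environment (WSL2), from the project directory
ddev import-db --src=/path/to/myproject.sql.gz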
In lieu of manually setting up my dotfiles (.bashrc, .inputrc, .vim & .vimrc etc.) inside the Docker container that DDEV creates... isn't there a way to do this automatically from the ddev config? I swear I saw this somewhere (maybe a blog post?) and I've been looking through https://ddev.readthedocs.io and searching the web, but I can't find it described anywhere. Do I need to do docker cp ... or is there a ddev way?
Yes, you can provide custom in-web-container configuration using homeadditions, see the docs. You can add whatever configuration you want in your home directory either per project (by putting it in your project's .ddev/homeadditions) or globally (by putting it in the global ~/.ddev/homeadditions).
There's a blog example of doing this (from before you could do it globally in v1.15+) showing setting up oh-my-zsh, https://www.ddev.com/ddev-local/oh-my-zsh-using-custom-commands-and-other-goodies-to-add-to-ddev/
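For instance, to get the dotfiles from the question into every project's web container (assuming a ddev version with global homeadditions, i.e. v1.15+), something like this on the host should do it:

mkdir -p ~/.ddev/homeadditions
cp ~/.bashrc ~/.inputrc ~/.vimrc ~/.ddev/homeadditions/
cp -r ~/.vim ~/.ddev/homeadditions/
ddev restart    # homeadditions are copied into the container's home directory on start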
ddev originally lacked an export-db command (see https://github.com/drud/ddev/issues/767); it has since been added, as described below.
How can I export a database?
Use the ddev export-db command. You can do many things (from ddev export-db -h):
ddev export-db --file=/tmp/db.sql.gz
ddev export-db -f /tmp/db.sql.gz
ddev export-db --gzip=false --file /tmp/db.sql
ddev export-db > /tmp/db.sql.gz
ddev export-db --gzip=false > /tmp/db.sql
ddev export-db myproject --gzip=false --file=/tmp/myproject.sql
ddev export-db someproject --gzip=false --file=/tmp/someproject.sql
In addition, don't forget about ddev snapshot, which is a quick way to capture your database, though the result is not as portable as a text-based dump. (See ddev snapshot -h and ddev restore-snapshot -h.)
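A minimal snapshot round-trip, with an arbitrary snapshot name as an example:

ddev snapshot --name=before-upgrade      # stored under .ddev/db_snapshots
ddev restore-snapshot before-upgrade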
Using traditional techniques inside the container:
Because DDEV has all the familiar tools inside the container, you can also use commands like mysqldump, mysql, and psql directly:
ddev ssh
mkdir /var/www/html/.tarballs
mysqldump db | gzip >/var/www/html/.tarballs/db.sql.gz
# or with explicit authentication
mysqldump -udb -pdb -hdb db | gzip >/var/www/html/.tarballs/db.sql.gz
or for Drupal/drush users:
ddev ssh
drush sql-dump --gzip >.tarballs/my-project-db.sql.gz
That places the dump in the project's .tarballs directory for later use (it's on the host).
See database management docs for more info.
I think it's very useful to have the TYPO3 equivalent of this; thanks to Outdoorsman for the comment on the GitHub issue above.
Outdoorsman wrote:
I'm coming from the TYPO3 CMS world and also agree this would be a good thing to have. I currently use ddev ssh and ./vendor/bin/typo3cms database:export | gzip > project_name_db.sql.gz if the typo3_console extension is installed via composer.
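If typo3_console is installed, the reverse direction should work the same way; this sketch assumes its database:import command accepts SQL on stdin and that the dump file name matches the export above:

ddev ssh
gunzip -c project_name_db.sql.gz | ./vendor/bin/typo3cms database:import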
You could also use Drupal Console:
ddev start
ddev ssh
drupal database:dump
drupal database:restore --file db-2018-07-04-11-31-22.sql
To expand on rfay's answer: I generally prefer the Drush CLI; however, it's a matter of preference.
ddev start
ddev ssh
drush sql:dump --result-file=../db-export.sql
I'm working on a project using ddev and I don't know how to troubleshoot things because they're hidden in the containers they run in. For example, I've tried ddev logs but it doesn't give me enough information.
Use ddev list and ddev describe to get the general idea of what's going on, but then ddev logs is the first line of investigation. It gets the logs of the web container (both the nginx error log and the php-fpm error log, mixed together).
Extra approaches:
You could probably (temporarily) remove any custom nginx/php/mysql configuration that you might have added to the project in the .ddev folder, as those are common culprits.
Please make sure you're using the current docker images that match the ddev version you're using. I recommend deleting any "webimage" or "dbimage" lines in your .ddev/config.yaml.
ddev logs -f will "follow" the web logs, so you can see what happens when you hit a particular URL.
ddev logs -s db (or ddev logs -f -s db to follow) will show you the logs of the database container (MariaDB logs).
Use ddev ssh (for the web container) or ddev ssh -s db (for the db container) to actually go in there and look around. The most important logs are in /var/log/ and /var/log/nginx.
You can even use ddev logs when a container has crashed or stopped for some reason, and figure out what happened with it.
Don't forget the troubleshooting section in the docs.
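Putting those approaches together, a typical first pass might look like this (the exact log file names inside the container vary, so treat them as examples):

ddev describe                      # overview of the project, URLs, and services
ddev logs -f                       # follow nginx/php-fpm output while reproducing the problem
ddev ssh                           # then poke around inside the web container
ls /var/log/ /var/log/nginx/
tail -n 50 /var/log/nginx/error.log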
Hello there, I am currently developing a Laravel application. I want all my team members to work locally, so we decided to use Docker for our local development environment. I did a little research and found a project called laradock. After installing it I am supposed to go to http://localhost and the project should run. But I get this:
I am using apache2 and mysql.
tl;dr
Go to ./laradock/.env, search for APACHE_DOCUMENT_ROOT, and edit that line to this:
APACHE_DOCUMENT_ROOT=/var/www/public
Things to do after the change
For this change to take effect, you have to:
Rebuild the container: docker-compose build apache2
Restart the containers: docker-compose up
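If you prefer to script the whole change, it could look roughly like this from the directory containing laradock; the sed one-liner is just one way to make the edit, and the service names match the apache2/mysql setup from the question (-d runs them detached):

cd laradock
sed -i 's|^APACHE_DOCUMENT_ROOT=.*|APACHE_DOCUMENT_ROOT=/var/www/public|' .env
docker-compose build apache2
docker-compose up -d apache2 mysql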
Explanation
As mentioned by simonvomeyser on GitHub, this is a recent addition that has the same effect as rodion.arr's solution, but this way you can leave the original config files untouched and use the .env file to store all your project-related configuration. Obviously, since this is a Docker config change, you have to rebuild and restart the container, as rodion-arr and 9bits pointed out in the same thread.
Check your Apache configuration (in my case the [laradock_folder]/apache2/sites/default.apache.conf file).
You should have DocumentRoot /var/www/public/.
I suppose you have /var/www/ instead.
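A quick way to check which document root the generated vhost actually uses (the file path follows the laradock default mentioned above):

grep -n DocumentRoot [laradock_folder]/apache2/sites/default.apache.conf
# expected: DocumentRoot /var/www/public/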