Windows: How to migrate project databases from the Docker Hyper-V backend to the WSL2 backend (ddev)

I have projects on Windows, but after upgrading Docker to use WSL2 I have to run ddev commands from the WSL console, and the database containers come up with empty databases.
One way to migrate the databases is to dump them from the old containers and import them into the new ones. But is there a way to do this automatically for all projects, or at least project by project?

Start the project in the Hyper-V Docker environment with ddev start. Once the project is running, there are two ways to move the database: taking a snapshot, or exporting a SQL dump, which is more portable (in case you want to set the project up somewhere other than ddev).
To take a snapshot, run ddev snapshot; it writes a database snapshot under the .ddev/db_snapshots folder. Copy it from there into the WSL2 project directory, under the same .ddev/db_snapshots path, then run ddev restore-snapshot [snapshot name]. For more, see the docs: https://ddev.readthedocs.io/en/latest/users/cli-usage/#snapshotting-and-restoring-a-database
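For example, a sketch of the snapshot route (the snapshot name pre-wsl2 is just illustrative):

# In the old (Hyper-V) project directory:
ddev snapshot --name=pre-wsl2
# Copy the resulting file from .ddev/db_snapshots into the WSL2 project's
# .ddev/db_snapshots folder, then in the WSL2 project directory:
ddev restore-snapshot pre-wsl2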
The other method is to run ddev export-db in the old project directory and then ddev import-db in the new project directory under WSL2. Export command docs: https://ddev.readthedocs.io/en/latest/users/cli-usage/#exporting-a-database Import command docs: https://ddev.readthedocs.io/en/latest/users/cli-usage/#importing-a-database
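And a sketch of the export/import route. The dump path is illustrative, and the per-project loop assumes your WSL2 project directories sit under ~/projects with a dump named after each directory already copied into /tmp:

# In each old (Hyper-V) project directory:
ddev export-db --file=/tmp/mysite.sql.gz
# In the matching project directory under WSL2:
ddev start
ddev import-db --src=/tmp/mysite.sql.gz

# Rough loop to repeat that for every project under WSL2:
for d in ~/projects/*/; do
  (cd "$d" && ddev start && ddev import-db --src=/tmp/$(basename "$d").sql.gz)
done

As far as I know there is no built-in ddev command that migrates all projects in one shot, so some scripting like this is about as automatic as it gets.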

Related

Renaming a ddev project

We are a group of developers working on multiple ddev projects. Some of these projects have a "." in their name, which now breaks the PhpStorm integration.
Is there an easy way to rename a project and let all other developers tell ddev (after they have pulled the new ddev config.yaml) what the previous project name was, so data (like the database) can be migrated?
Please use the instructions in the DDEV FAQ "How can I change the name of a project?"
Use this process:
Export the database of the project: ddev export-db --file=/path/to/db.sql.gz
ddev delete. By default this makes a snapshot, which is a nice safety valve.
Rename the project: ddev config --project-name=<new_name>
ddev start
ddev import-db --src=/path/to/db.sql.gz
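For example, a worked run of that process, assuming the old name is app.backend and the new name app-backend (both names are hypothetical):

ddev export-db --file=/tmp/app-backend.sql.gz
ddev delete app.backend
ddev config --project-name=app-backend
ddev start
ddev import-db --src=/tmp/app-backend.sql.gz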

Run two different projects on the same VPS using Laradock

I am new to Docker. My team and I decided to use Docker (Laradock) to run our applications because we have several projects with different requirements.
Imagine we have two different projects and want to run them at the same time. We initialized Laradock in each project and customized the ports in each .env file so they don't conflict with each other: PMA_PORT=8082 in project 1, PMA_PORT=8085 in project 2, and so on for the other port settings.
When we run project 1 with docker-compose up -d phpmyadmin apache2 mariadb, it runs as expected. The problem is when project 1 is running in the background and we want to run project 2 in the background too. I use docker-compose up -d phpmyadmin nginx mysql in project 2. It also runs, but project 1 goes down even though we are using different ports.
This is the log output when I run that command:
Removing laradock_mysql_1
Removing laradock_nginx_1
Recreating laradock_docker-in-docker_1 ... done
Starting fa6ba29f1fc8_laradock_mysql_1 ... done
Recreating laradock_phpmyadmin_1 ... done
Recreating laradock_workspace_1 ... done
Recreating laradock_php-fpm_1 ... done
Recreating d18266c4f247_laradock_nginx_1 ... done
How can I solve this problem?
You should use a single Laradock installation outside of your project folders; then you won't need to change ports or anything else.
As the documentation says here: https://laradock.io/getting-started/#B
your folder structure should look like this:
Projects (or whatever name you want)
    laradock
    Project_1
    Project_2
Then, when you run the docker-compose up command inside that single laradock folder, both of your projects will be up and running. (The reason project 1 went down before is that both copies of Laradock used the same Compose project name, laradock, so the second docker-compose up recreated and removed the first copy's containers, as the "Removing laradock_mysql_1" lines in your log show.)
Is that what you wanted?
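If you really do need a separate Laradock copy per project, one workaround sketch is to give each copy its own Compose project name so the containers no longer collide (the names project1 and project2 are illustrative):

# In project 1's laradock folder:
COMPOSE_PROJECT_NAME=project1 docker-compose up -d phpmyadmin apache2 mariadb
# In project 2's laradock folder:
COMPOSE_PROJECT_NAME=project2 docker-compose up -d phpmyadmin nginx mysql

The published host ports still have to differ between the two, as you already arranged with PMA_PORT.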

Running tests with Visual Studio Docker compose support

I have added Docker Compose support to my project. When I debug the project it loads the docker-compose file. In the override yml I have specified a PostgreSQL image and volume, so it automatically brings up the development database. This is great because you can clone the repo and not have to install any local software apart from Docker.
The only thing that doesn't work well is running tests. When I run tests it doesn't bring up the database container; it just executes the code inside the test project, so the tester has to start the database image manually.
I feel like I am probably doing something wrong. Is there a better way to make the tests work with the Visual Studio Docker Compose support so it brings up the database automatically?
I thought about running the tests inside the Dockerfile, but I think that might get in the way of development. What is a good approach here?
I would not recommend running tests inside your Dockerfile; as you said, it would complicate your development process.
As for the database, you can run it outside of docker-compose so that it is always available in the background. Just remove the Postgres config from your docker-compose.yml and run Postgres with docker run ... instead. That way it keeps running until you stop it with docker stop ...
docker run --name dev-postgres -p 5432:5432 -v /tmp/pgdata:/var/lib/postgresql/data -e POSTGRES_PASSWORD=<PASSWORD> -d postgres
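Your tests can then reach the database on localhost:5432. A hypothetical .NET connection string for the container above (the postgres image's default database and user are both named postgres):

Host=localhost;Port=5432;Database=postgres;Username=postgres;Password=<PASSWORD>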

Using the Laradock Docker configuration for development

Hello there, I am currently developing a Laravel application. I want all my team members to work locally, so we decided to use Docker for our local development environment. I did a little research and found a project called Laradock. After installing it, I am supposed to be able to go to http://localhost and see the project running, but instead I get an error page.
I am using apache2 and mysql.
tl;dr
Go to ./laradock/.env and search for APACHE_DOCUMENT_ROOT, then edit that line to this:
APACHE_DOCUMENT_ROOT=/var/www/public
Things to do after the change
For this change to take effect, you have to:
Rebuild the container: docker-compose build apache2
Restart the containers: docker-compose up
Explanation
As mentioned by simonvomeyser on GitHub, this is a recent addition which has the same effect as rodion.arr's solution, but this way you can leave the original config files untouched and keep all your project-related configuration in the .env file. Obviously, since this is a Docker config change, you have to rebuild and restart your container, as rodion-arr and 9bits pointed out in the same thread.
Check your Apache configuration (in my case, the [laradock_folder]/apache2/sites/default.apache.conf file).
You should have DocumentRoot /var/www/public/.
I suspect you have /var/www/ instead.
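For reference, the relevant part of that vhost file should end up looking roughly like this (a sketch; the rest of the file is omitted):

<VirtualHost *:80>
  DocumentRoot /var/www/public/
</VirtualHost>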

Simple docker deployment tactics

Hey guys, I've spent the past few days really digging into Docker and I've learned a ton. I'm getting to the point where I'd like to deploy to a DigitalOcean droplet, but I'm starting to wonder about the strategy of building/deploying an image.
I have a perfect dev setup where I've created a file volume tied to my app:
docker run -d -p 80:3000 --name pug_web -v $DIR/app:/Development test_web
I'd hate to run the app in production out of the /Development folder where I'm actually building it. This is a Node.js/Express app, and I'd love to concat/minify/etc. into a local dist folder and add that build folder to a new dist-ready image.
I guess what I'm asking is: (A) can I have different Dockerfiles, one for dev and one for dist? If not, (B) can I have if statements in my Dockerfiles that would do something like: if ENV == 'dist', add /dist, etc.?
I'm struggling to figure out how to move this from a local dev environment to a tightened-up, production-ready image without any conditionals.
I do both.
My Dockerfile checks out the application code from Git. During development I mount a volume over the top of that folder with the version of the code I'm working on. When I'm ready to deploy to production, I just check the code into Git and rebuild the image.
I also have a script that is executed from the ENTRYPOINT command. The script looks at the environment variable ENV: if it is set to DEV, it starts my development server with debugging turned on; otherwise it launches the production version of the server.
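A minimal sketch of such an entrypoint script, assuming a Node.js app with npm scripts named dev and start (both names are illustrative):

#!/bin/sh
# entrypoint.sh - choose a server based on the ENV variable
if [ "$ENV" = "DEV" ]; then
  exec npm run dev   # development server with debugging on
else
  exec npm start     # production server
fi

The Dockerfile then wires it up with something like ENTRYPOINT ["/entrypoint.sh"].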
Alternatively, you can avoid using Docker in development and instead keep a Dockerfile at the root of your repo. You can then have your CI server build the image (in our case Jenkins, but Docker Hub also offers automated-build repositories that can do this for you if you're a small team or don't have access to a dedicated build server).
Then you can just pull the image and run it on your production box.
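To answer (A) directly: yes, you can keep separate Dockerfiles and pick one at build time with docker build's -f flag (the file names below are illustrative):

docker build -f Dockerfile.dev -t pug_web:dev .
docker build -f Dockerfile.dist -t pug_web:dist .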
