I am new to Docker. My team and I decided to use Docker (Laradock) to run our applications, because we have several projects with different specifications.
Imagine we have 2 different projects that we want to run at the same time. We have initialized Laradock in each project and customized the ports in each .env file so they don't conflict with each other, e.g. PMA_PORT=8082 in project 1 and PMA_PORT=8085 in project 2, and likewise for the other port settings.
When we run project 1 with docker-compose up -d phpmyadmin apache2 mariadb, it runs as expected. The problem is when project 1 is running in the background and we want to run project 2 in the background too. I run docker-compose up -d phpmyadmin nginx mysql in project 2. It also runs fine, but project 1 goes down even though we are using different ports.
This is the log output when I run that command:
Removing laradock_mysql_1
Removing laradock_nginx_1
Recreating laradock_docker-in-docker_1 ... done
Starting fa6ba29f1fc8_laradock_mysql_1 ... done
Recreating laradock_phpmyadmin_1 ... done
Recreating laradock_workspace_1 ... done
Recreating laradock_php-fpm_1 ... done
Recreating d18266c4f247_laradock_nginx_1 ... done
How can I solve this problem?
Both of your Laradock copies run under the same Compose project name (the laradock folder name), so the second docker-compose up takes over and recreates the first project's containers; that is what the Removing/Recreating lines in your log show.
You should use a single Laradock installation outside of your project folders, and then you won't need to change ports or anything else.
As the documentation says here: https://laradock.io/getting-started/#B
Your folder structure should look like this:
Projects (or whatever name you want)
    laradock
    Project_1
    Project_2
Then, when you run the docker-compose up command inside the laradock folder, both your projects will be up and running.
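For illustration, with that layout you start everything from the single shared laradock folder (hostnames below are just examples):

cd Projects/laradock
docker-compose up -d nginx mysql phpmyadmin

Each project then gets its own virtual host (for example project_1.test and project_2.test) configured under laradock/nginx/sites/, so there is only one set of containers and no port clashes.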
Is that what you wanted?
Related
I've got Laravel Sail, which as far as I know is a few containers (mysql, redis, laravel, ...). Is there an easy way to pack the whole thing up to e.g. Docker Hub and easily download it on a production server, so that when I update it on localhost and run docker push, I can just run docker pull on the server? Then everything (like new commands in the Dockerfile, apt install steps, etc.) would be updated and working exactly the way it worked on localhost.
I read the documentation, but I cannot figure out how Docker works and how to easily change the project location (e.g. I work on the project at work and sometimes at home, and it would be much easier to run docker push whenever I need to build the source code and deploy it).
I'm keeping the source code on GitHub, and that works for dev servers, but to deploy something I have to check all the dependencies, the Dockerfile, the .env file, and other things to make it work in production.
Thanks for the help!
You can use the existing docker-compose.yml and just run docker-compose up -d on production to start all containers. Just be sure to, for example, disable Xdebug on production, as it slows down every request.
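If you do want the image-based workflow from the question, a rough sketch would be (the image name myuser/myapp is a placeholder):

# on localhost: build the image and push it to Docker Hub
docker build -t myuser/myapp:latest .
docker push myuser/myapp:latest

# on the production server: pull the updated image and restart
docker pull myuser/myapp:latest
docker-compose up -d

Any changes baked into the image (Dockerfile changes, apt install steps, etc.) come along with the pull; things that live outside the image, like the .env file, still have to be managed on the server.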
I have projects on Windows, but when I upgraded Docker to work with WSL 2, I have to run ddev commands from the WSL console, and the db containers have an empty database.
One way to migrate the DBs is to dump from the old container and then import into the new container. But is there a way to do this automatically for all projects, or at least project by project?
Start the project in the Hyper-V Docker environment and bring it up with ddev start. Once the project is running, there are 2 ways to migrate its database: by taking a snapshot, or by exporting to SQL format, which is more portable (in case you want to set the project up somewhere other than ddev).
To take a snapshot, use the ddev snapshot command; it will create a DB snapshot under the .ddev/db_snapshots folder. Then copy it from there and place it in the WSL 2 project dir under the same path, .ddev/db_snapshots. After that, run ddev restore-snapshot [snapshot name]. For more, see the docs: https://ddev.readthedocs.io/en/latest/users/cli-usage/#snapshotting-and-restoring-a-database
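In other words, something like this (the snapshot name is whatever ddev generated):

# in the old (Hyper-V) project directory
ddev snapshot

# copy the new entry from .ddev/db_snapshots into the WSL 2
# project's .ddev/db_snapshots, then from the WSL 2 project:
ddev restore-snapshot [snapshot name]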
The other method is to run ddev export-db in the old project dir and then ddev import-db in the new project dir under WSL 2. Export command docs: https://ddev.readthedocs.io/en/latest/users/cli-usage/#exporting-a-database Import command docs: https://ddev.readthedocs.io/en/latest/users/cli-usage/#importing-a-database
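As a rough sketch of that route (exact flags vary between ddev versions, so check ddev export-db --help):

# in the old (Hyper-V) project directory; export-db writes a
# gzipped SQL dump to stdout by default
ddev export-db > ~/project1-db.sql.gz

# in the new (WSL 2) project directory
ddev start
ddev import-db < ~/project1-db.sql.gz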
I was instructed to create a single Dockerfile in the root of the project, but I also got a tip to use Laradock as a starting point.
How can I do this? So far, the only way I know to create a Docker environment is to run it with the docker-compose command.
No; a Dockerfile describes a single container (service) by design. Laradock provides a docker-compose file that references multiple Dockerfiles. However, you could create a smaller docker-compose file that only starts the containers you need (say, a web server with PHP, a database server, and Redis).
Laradock ships with far too many containers in its docker-compose file; that is why the tutorial tells you to specify which containers you want to run:
docker-compose up -d nginx mysql
But if you write a minimal docker-compose.yml, you can just type
docker-compose up -d
without any additional arguments.
Yes, you could cram all the required services into a single container, but that would go against what you are trying to achieve by using Docker.
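For illustration, a trimmed-down docker-compose.yml along those lines might look like this (image tags and credentials are placeholders, and nginx would still need a site config that forwards PHP requests to php-fpm):

version: "3"
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./:/var/www
  php-fpm:
    image: php:8.1-fpm
    volumes:
      - ./:/var/www
  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: secret   # placeholder
  redis:
    image: redis:alpine

Then docker-compose up -d brings up exactly these four services and nothing else.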
Hello there, we are currently developing a Laravel application. I want all my team members to work locally, so we decided to use Docker for our local development environment. I did a little research and found a project called Laradock. After installing it, I am supposed to go to http://localhost and the project should run. But I get this:
I am using apache2 and mysql
tl;dr
Go to ./laradock/.env, search for APACHE_DOCUMENT_ROOT, and edit that line to this:
APACHE_DOCUMENT_ROOT=/var/www/public
Things to do after the change
For this change to take effect, you have to:
Rebuild the container: docker-compose build apache2
Restart the containers: docker-compose up
Explanation
As mentioned by simonvomeyser on GitHub, this is a recent addition that has the same effect as rodion.arr's solution, but this way you can leave the original config files untouched and use the .env file to store all your project-related configuration. Obviously, since this is a Docker config change, you have to rebuild and restart your container, as rodion-arr and 9bits pointed out in the same thread.
Check your Apache configuration (in my case, the [laradock_folder]/apache2/sites/default.apache.conf file).
You should have DocumentRoot /var/www/public/.
I suppose you have /var/www/ there instead.
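For reference, the relevant part of that vhost file would look something like this (the server name is just an example):

<VirtualHost *:80>
    ServerName laradock.test
    DocumentRoot /var/www/public/
    <Directory "/var/www/public/">
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>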
Hey guys, so I've spent the past few days really digging into Docker and I've learned a ton. I'm getting to the point where I'd like to deploy to a DigitalOcean droplet, but I'm starting to wonder about the strategy of building/deploying an image.
I have a perfect dev setup where I've created a file volume tied to my app:
docker run -d -p 80:3000 --name pug_web -v $DIR/app:/Development test_web
I'd hate to have to run the app in production out of the /Development folder, where I'm actually building the app. This is a Node.js/Express app, and I'd love to concat/minify/etc. into a local dist folder and add that build folder to a new dist-ready image.
I guess what I'm asking is: A) can I have different Dockerfiles, one for dev and one for dist? If not, B) can I have if statements in my Dockerfiles that would do something like... if ENV == 'dist', add /dist... etc.?
I'm struggling to figure out how to move this from a local dev environment to a tightened-up, production-ready image without any conditionals.
I do both.
My Dockerfile checks out the code for the application from Git. During development, I mount a volume over the top of this folder with the version of the code I'm working on. When I'm ready to deploy to production, I just check the code into Git and rebuild the image.
I also have a script that is executed from the ENTRYPOINT command. The script looks at the environment variable "ENV"; if it is set to "DEV", it will start my development server with debugging turned on, otherwise it will launch the production version of the server.
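A minimal sketch of such an entrypoint script, with placeholder commands standing in for the actual dev/prod servers:

#!/bin/sh
# entrypoint.sh: choose the server mode from the ENV variable
if [ "$ENV" = "DEV" ]; then
    # development: start with debugging enabled (placeholder command)
    exec npm run dev
else
    # production: run the built app (placeholder command)
    exec node dist/server.js
fi

In the Dockerfile you would wire this up with ENTRYPOINT ["/entrypoint.sh"] and set the variable at run time with docker run -e ENV=DEV ....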
Alternatively, you can avoid using Docker in development and instead have a Dockerfile at the root of your repo. You can then use your CI server (in our case Jenkins, but Docker Hub also allows for automated build repositories that can do that for you, if you're a small team or don't have access to a dedicated build server).
Then you can just pull the image and run it on your production box.
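On the production box that boils down to something like this (the image name is a placeholder; the port mapping is taken from your dev command):

docker pull myteam/pug_web:latest
docker run -d -p 80:3000 --name pug_web myteam/pug_web:latest

Note there is no -v volume mount here: production runs the code that the CI build baked into the image.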