Laravel Sail Docker Hub

I've got Laravel Sail, which as I understand it is a set of containers (mysql, redis, laravel, ...). Is there an easy way to pack the whole thing up and push it to e.g. Docker Hub, then easily download it on a production server? So when I update it on localhost and run docker push, I could just run docker pull on the server and everything (like new commands in the Dockerfile, apt install steps, etc.) would be updated and work exactly as it did on localhost.
I read the documentation, but I cannot figure out how Docker works here, or how to easily change the project location (e.g. I work on the project at the office and sometimes at home, so it would be much easier to run docker push whenever I need to build the source code and deploy it).
I'm keeping the source code on GitHub, and that works for dev servers, but to deploy something I have to check all the dependencies, the Dockerfile, the .env file, and other things to make it work on production.
Thanks for the help!

You can use the existing docker-compose.yml and just run docker-compose up -d on production to start all the containers. Just be sure to, for example, disable Xdebug on production, as it slows down every request.
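If you want the push/pull workflow from the question, one rough approach (a sketch, not Sail-specific; the image name myuser/myapp is an assumption) is to build and push your application image, point the app service in docker-compose.yml at it (image: myuser/myapp:latest instead of a local build: section), and then pull on the server:

# on your workstation: bake Dockerfile changes into the image and publish it
docker build -t myuser/myapp:latest .
docker push myuser/myapp:latest

# on the production server: fetch the new image and recreate the containers
docker pull myuser/myapp:latest
docker-compose up -d

The stock services (mysql, redis, ...) already come from public images, so only your application image needs to be rebuilt and pushed.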

Related

Running tests with Visual Studio Docker compose support

I have added Docker Compose to my project. When I debug the project it loads the docker compose file. In the override yml I have specified a postgresql image and volume so it automatically brings up the development database. This is great because you can clone the repo and not have to install any local software apart from Docker.
The only thing that is not good is running tests. When I run tests it doesn't bring up the database container, it just executes the code inside the test project, so the tester has to manually start the database container.
I feel like I am probably doing something wrong. Is there a better way to make the tests work with the Visual Studio Docker Compose support so that it brings up the database automatically?
I thought about running the tests inside the Dockerfile, but I think that might get in the way of development. What is a good approach here?
I would not recommend running tests inside your Dockerfile. This will complicate your development process as you have said.
In terms of the database, you can just run it outside of docker-compose so that it is always running in the background. Just remove the postgres config from your docker-compose.yml and run postgres with docker run ... instead. This way it will always be running until you stop it with docker stop ...
docker run -d --name dev-postgres -p 5432:5432 -v /tmp/pgdata:/var/lib/postgresql/data -e POSTGRES_PASSWORD=<PASSWORD> postgres
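The --name and -p 5432:5432 flags above are additions to the original command (dev-postgres is an arbitrary name): publishing the port lets tests running on the host reach the database, and naming the container makes the stop/start cycle easy:

docker stop dev-postgres     # pause the database
docker start dev-postgres    # resume it; the data in /tmp/pgdata survives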

How can I see the changes I make to a Go app running in Heroku local?

I admit I'm a GoLang newbie. In an attempt to learn Go, I developed an app about a year ago (based on the Heroku Getting started repository) and deployed it to Heroku. I used the heroku local server to develop it locally and deployed it successfully. Now I want to make some changes but I don't have the original source, so I have cloned the app from the Heroku repository.
I have got it running locally with the following steps:
export GOPATH=~/project_path
export GOBIN=$GOPATH/bin
go get
go install
heroku local
So far, so good. The problem is that when I make a simple change to the code in main.go, it doesn't show up in the browser. I've tried running go install and restarting the server after making the change but it makes no difference.
I've noticed that the file name in the Procfile is now incorrect (go-getting-started instead of the name of my project folder) but the server still runs and changing the name doesn't make any difference, locally at least. Same goes for the Dockerfile.
What am I doing wrong please?
Every time you make a change to a Go file in the project, you need to run go install, then stop and restart the heroku local server.
You might want to just run the server yourself with PORT=5000 go run main.go so that you only have to restart one thing. Or you can check out something like https://github.com/pilu/fresh, which will listen for filesystem changes and restart your server for you.
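For reference, the full loop after each edit looks like this (assuming GOPATH and GOBIN are exported as in the question):

go install      # recompile the binary into $GOBIN
heroku local    # then stop (Ctrl+C) and restart the local server

or, bypassing heroku local during development:

PORT=5000 go run main.go    # compile and run in one step; restart after each change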

Using laradock docker configuration for developing

Hello there, we are currently developing a Laravel application. I want all my team members to work locally, so we decided to use Docker for our local development environment. I did a little research and there is a project called laradock. After installing it I am supposed to go to http://localhost and the project should run, but I get an error page instead.
I am using apache2 and mysql.
tl;dr
Go to ./laradock/.env, search for APACHE_DOCUMENT_ROOT, and edit that line to this:
APACHE_DOCUMENT_ROOT=/var/www/public
Things to do after the change
For this change to take effect, you have to:
Rebuild the container: docker-compose build apache2
Restart the containers: docker-compose up
Explanation
As mentioned by simonvomeyser on GitHub, this is a recent addition which has the same effect as rodion-arr's solution, but this way you can leave the original config files untouched and use the .env file to store all your project-related configuration. Obviously, since this is a Docker config change, you have to rebuild and restart your container, as rodion-arr and 9bits pointed out in the same thread.
Check your Apache configuration (in my case the [laradock_folder]/apache2/sites/default.apache.conf file).
You should have DocumentRoot /var/www/public/.
I suppose you have /var/www/ instead.
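For reference, the relevant part of that file should look roughly like this (a sketch; everything except the DocumentRoot line is an assumption):

<VirtualHost *:80>
    ServerName localhost
    DocumentRoot /var/www/public
</VirtualHost>

If DocumentRoot points at /var/www instead, Apache serves the repository root rather than Laravel's public/ folder, which is why the app doesn't come up.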

Laradock (container) files on Windows

I installed Docker Toolbox 1.11.2 and Laradock v2, cloned from GitHub.
Everything seems to work except the laradock_workspace_1 container. When it is generated, it does not create files on the host machine (Windows 7 64-bit). In docker-compose.yml I have tried playing with the volumes as suggested here:
### Laravel Application Code Container ######################
volumes_source:
    build: ./volumes/application
    volumes:
        - ../:/var/www/laravel
If I change the last line to ../.. and then run docker-compose up followed by docker exec -it laradock_workspace_1 ls, I can see that it is traversing the folders on the host machine. I just don't see any files.
My goal here is to make the actual Laravel code external so I can edit them on the host machine and use git.
I can use the Kitematic app to make the changes I want, but they seem to be lost if I do a docker-compose down (and I get errors about things still being in use).
I'm new to docker so any help is appreciated.
First, make sure your docker-machine is running. If it is, then follow the steps below:
Open up the VirtualBox GUI, right-click your docker VM, select Settings, then go to Shared Folders.
Change the c\users share to point at whatever folder your code lies in.
This will mount your desired folder to /c/Users in the docker-machine VM.
After this, change the docker-compose.yml in the laradock folder to this:
### Laravel Application Code Container ######################
volumes_source:
    build: ./volumes/application
    volumes:
        - /c/Users/pomodoro.xyz/code:/var/www/laravel
The logic behind this is: since we are running Docker in a VM, the docker-compose command looks for the folder in the VM, not on the Windows machine. That's why we have provided the VM path to the docker-compose file.
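To double-check that the share is actually visible inside the VM before wiring it into docker-compose.yml, you can run a command over docker-machine ssh (default is the usual VM name, an assumption here):

docker-machine ssh default ls /c/Users/pomodoro.xyz/code

If your code shows up in that listing, the volume mapping above will work.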

Simple docker deployment tactics

Hey guys, so I've spent the past few days really digging into Docker and I've learned a ton. I'm getting to the point where I'd like to deploy to a DigitalOcean droplet, but I'm starting to wonder about the strategy of building/deploying an image.
I have a perfect Dev setup where I've created a file volume tied to my app.
docker run -d -p 80:3000 --name pug_web -v $DIR/app:/Development test_web
I'd hate to have to run the app in production out of the /Development folder where I'm actually building the app. This is a nodejs/express app, and I'd love to concat/minify/etc. into a local dist folder and add that build folder to a new dist-ready image.
I guess what I'm asking is: (A) can I have different Dockerfiles, one for dev and one for dist? If not, (B) can I have if statements in my Dockerfiles that would do something like... if ENV == 'dist', add /dist... etc.
I'm struggling to figure out how to move this from a local dev environment to a tightened-up, production-ready image without any conditionals.
I do both.
My Dockerfile checks out the code for the application from Git. During development I mount a volume over the top of this folder with the version of the code I'm working on. When I'm ready to deploy to production, I just check into Git and re-build the image.
I also have a script that is executed from the ENTRYPOINT command. The script looks at the environment variable "ENV"; if it is set to "DEV" it will start my development server with debugging turned on, otherwise it will launch the production version of the server.
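As a rough sketch of what such an entrypoint script can look like (the actual start commands are assumptions; swap in whatever launches your app):

#!/bin/sh
# entrypoint.sh - pick a start mode based on the ENV variable
if [ "$ENV" = "DEV" ]; then
    exec npm run dev          # development server with debugging turned on
else
    exec node dist/server.js  # production build from the dist folder
fi

With ENTRYPOINT ["/entrypoint.sh"] in the Dockerfile, docker run -e ENV=DEV ... gets you the development behaviour and a plain docker run gets production.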
Alternatively, you can avoid using Docker in development and instead have a Dockerfile at the root of your repo. You can then have your CI server build the image (in our case Jenkins, but Docker Hub also allows for automated build repositories that can do that for you, if you're a small team or don't have access to a dedicated build server).
Then you can just pull the image and run it on your production box.
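On the production box that boils down to something like this (the image name is an assumption; the port and container name reuse the dev run command above):

docker pull myuser/pug_web:latest
docker run -d -p 80:3000 --name pug_web myuser/pug_web:latest

Note there is no -v mount here: the production image carries its own built code instead of the /Development folder.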
