Docker-based development environment for multiple projects - Windows

I am wondering about the best architecture for a Docker based development environment with a LAMP stack.
Requirements
Working on multiple projects in parallel
Most projects are using the same LAMP stack (for simplicity let's assume that all projects are sharing the same stack and configuration)
The host is running Windows + VBox + Docker Toolbox (i.e. Boot2Docker)
Current architecture
One shared development environment running multiple containers (web, db, persistent data...) with a vhost configuration per site
Using scripts / a Jenkins container to set up new projects (new DB, vhost configuration...)
Running a custom Samba container to share the data with the Windows machine (the IDE runs on Windows)
As always there are pros and cons: while this setup is quite easy to maintain, we are unable to deploy a specific project with a dedicated docker-compose.yml file, and we also lose the benefits of "microservices", such as swapping the PHP / MySQL version for a specific site.
The question is: how can we use a per-project docker-compose.yml file but still have multiple projects running simultaneously (since all projects use port 80)?
Would it be better (and is it even possible) to use random ports per project and run a proxy layer on top of those web containers?
Any other options or common design patterns for this use case?
Thanks.

The short answer is yes. Docker assigns random host ports by default if none is specified. For the mapping I would use: https://github.com/jwilder/nginx-proxy
You can have something like project1.yml, project2.yml, ..., and starting the containers would be something like:
docker-compose -f project1.yml up
However, I'm not sure why you would try to do that. You could use something like Rancher and split your development host into multiple small development environments.
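To sketch how the nginx-proxy approach can look (a minimal sketch; the network name proxy-net, the hostname project1.local and the php:7-apache image are illustrative, not from the question): one shared compose file runs the proxy, and each project's compose file joins the same external network and sets VIRTUAL_HOST so the proxy knows where to route requests.

  # proxy.yml -- one shared proxy for all projects
  # (assumes the network was created first with: docker network create proxy-net)
  version: "2"
  services:
    nginx-proxy:
      image: jwilder/nginx-proxy
      ports:
        - "80:80"
      volumes:
        - /var/run/docker.sock:/tmp/docker.sock:ro   # proxy watches containers come and go
      networks:
        - proxy-net
  networks:
    proxy-net:
      external: true

  # project1.yml -- one file per project, joining the same network
  version: "2"
  services:
    web:
      image: php:7-apache               # swap PHP version per project here
      environment:
        - VIRTUAL_HOST=project1.local   # nginx-proxy routes by this hostname
      networks:
        - proxy-net
  networks:
    proxy-net:
      external: true

Then docker-compose -f project1.yml up starts one project, and each web container can keep its internal port 80. On a Boot2Docker setup, each *.local name would also need a Windows hosts-file entry pointing at the VM's IP.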

Related

How to set up multiple Visual Studio solutions working together using Docker Compose (with debugging)

Lots of questions seem to have been asked about getting multiple projects inside a single solution working with Docker Compose, but none that address multiple solutions.
To set the scene, we have multiple .NET Core APIs each as a separate VS 2019 solution. They all need to be able to use (as a minimum) the same RabbitMQ container running locally as this deals with all of the communication between the services.
I have been able to get this setup working for a single solution by:
Adding 'Container orchestration support' for an API project.
This created a new docker-compose project in the solution I did it for.
Updating the docker-compose.yml to include both a RabbitMQ and a MongoDB image.
Now when I launch the solution, new RabbitMQ and MongoDB containers are created.
I then did exactly the same thing with another solution, and unsurprisingly it wasn't able to start because the RabbitMQ ports were already in use (i.e. it tried to create another new RabbitMQ container).
I kind of expected this but don't know the best/right way to properly configure this and any help or advice would be greatly appreciated.
I have been able to compose multiple services from multiple solutions by setting the value of context to the appropriate relative path. Using your docker-compose example and adding my-other-api-project you end up with something like:
services:
  my-api-project:
    <as you have it currently>
  my-other-api-project:
    image: ${DOCKER_REGISTRY-}my-other-api-project
    build:
      context: ../my-other-api-project/   # or whatever the relative path is to your other project
      dockerfile: my-other-api-project/Dockerfile
    ports:
      - <your port mappings>
    depends_on:
      - some-mongo
      - some-rabbit
  some-rabbit:
    <as you have it currently>
  some-mongo:
    <as you have it currently>
So I thought I would answer my own question as I think I eventually found a good (not perfect) solution. I did the following steps:
1. Created a custom docker network.
2. Created a single docker-compose.yml for my RabbitMQ, SQL Server and MongoDB containers (using my custom network).
3. Set up docker-compose container orchestration support for each service (right click on the API project and choose add container orchestration).
4. The above step creates the docker-compose project in the solution, with docker-compose.yml and docker-compose.override.yml.
5. I then edit the docker-compose.yml so that the containers use my custom docker network and also explicitly set the port numbers (so they're consistently the same); see the sketch after these steps.
6. I edited the docker-compose.override.yml environment variables so that my connection strings point to the relevant container names on my docker network (i.e. RabbitMQ, SQL Server and MongoDB) - no more need to worry about IPs, and when I set the solution to start up using the docker-compose project in debug mode, my debug containers can access those services.
7. Now I can close the VS solution, go to the command line, navigate to the solution folder and run 'docker-compose up' to start the containers.
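To make steps 1, 2 and 5 concrete, here is a minimal sketch of what the shared infrastructure compose file can look like; the network name, image tags, port numbers and password placeholder are illustrative, not the actual values from this setup:

  # shared docker-compose.yml for the infrastructure containers
  # (assumes the custom network was created first with: docker network create my-dev-net)
  version: "3.4"
  services:
    some-rabbit:
      image: rabbitmq:3-management
      ports:
        - "5672:5672"     # fixed host ports so they are consistently the same
        - "15672:15672"
      networks:
        - my-dev-net
    some-mongo:
      image: mongo:4
      ports:
        - "27017:27017"
      networks:
        - my-dev-net
    some-sql:
      image: mcr.microsoft.com/mssql/server:2019-latest
      environment:
        - ACCEPT_EULA=Y
        - SA_PASSWORD=<your password>
      ports:
        - "1433:1433"
      networks:
        - my-dev-net
  networks:
    my-dev-net:
      external: true

Each solution's docker-compose.yml then lists my-dev-net as an external network, so connection strings can use the service names (some-rabbit, some-mongo, some-sql) instead of IPs.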
I set up each VS solution as per steps 3-7 and can start up any/all services locally without needing to open VS any more (provided I don't need to debug).
When I need to debug/change a service, I stop the specific container (i.e. 'docker container stop containerId'), then open the solution in VS and start it in debug mode/make changes etc.
If I pull down changes by anyone else, I rebuild the relevant container on the command line by going to the solution folder and running 'docker-compose build'.
As a Brucie bonus I wrote a PowerShell script to start all of my containers using each docker-compose file, as well as one to build them all, so when I turn on my laptop I simply run that and my full dev environment and its 10 services are up and running.
For the most part this works great but with some caveats:
I use https and dev-certs, and sometimes things don't play well and I have to clean the certs and re-trust them, because Kestrel throws errors and expects the certificate to be trusted and to have a certain name. I'm working on improving this, but you could always not use https locally in dev.
If you're using your own NuGet server like me, you'll need to add a NuGet.config file and copy it in as part of your Dockerfiles.

Docker and casual work/dev on virtual machine and IDE

I have a general question about good practices and, let's say, the way of working between Docker and an IDE.
Right now I am learning Docker and Docker Compose, and I must admit that I like the idea of containers! I've deployed my whole Spring Boot microservices architecture in containers, and everything is working really well!
The thing is that everywhere in my properties files where I declared a localhost address, I was forced to change localhost to the custom container name, for example localhost:8888 --> naming-server:8888. That is okay for running in containers, but obviously when I try to run the services from the IDE, they fail. I like working on/optimizing/debugging microservices in the IDE, but I don't want to rebuild the image and rerun the whole docker-compose setup every time I make a tiny change.
What does it look like in real dev?
Regards!
In my day job there are at least four environments my code can run in: my desktop development environment, a developer-oriented container environment, and pre-production and production container environments. All four of these environments can have different values for things like host names. That means they must be configurable in some way.
If you've hard-coded localhost as a hostname in your application source code, it will not run in any environment other than your development system, and it needs to be changed to a configuration option.
From a pure-Docker point of view, making these configurable via environment variables is easiest (and Spring can set property values from environment variables). Spring also has the notion of a profile, which in principle matches the concept of having different settings for different environments, but injecting a whole profile configuration can be a little more complex at deployment time.
The other practice I've found helpful is to have the environment variable settings default to reasonable things for developers. The pre-production and production deployments are all heavily scripted and so there's a reasonably strong guarantee that they will have all of the correct environment variables set. If $PGHOST defaults to localhost that's right for a non-Docker developer, and all of the container-based setups can set an appropriate value for their environment at deploy time.
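A minimal sketch of that idea for the naming-server example from the question (the property key naming.server.url and the variable NAMING_SERVER_HOST are illustrative; only the ${VAR:default} placeholder syntax is standard Spring):

  # application.yml -- the placeholder falls back to localhost when the env var is unset
  naming:
    server:
      url: http://${NAMING_SERVER_HOST:localhost}:8888

  # docker-compose.yml -- only the containerized run overrides the host
  services:
    my-service:
      image: my-service                      # illustrative image name
      environment:
        - NAMING_SERVER_HOST=naming-server   # resolves to the container on the compose network

Run from the IDE, the variable is unset and the service talks to localhost; run under compose, it resolves the container name. No rebuild is needed to switch between the two.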
Even though our actual deployment system is based on containers (via Kubernetes) I do my day-to-day development in a mostly non-Docker environment. I can run an individual microservice by launching it from a shell prompt, possibly with setting some environment variables, and services have unit tests that can run just on the checked-out source tree, without needing any Docker at all. A second step is to build an image and deploy it into the development environment, and our CI system runs integration tests against the images it builds.

What are the best gitlab-ci.yml CI/CD practices and runners configs?

This is a bit theoretical, but I'll try to explain my setup as much as I can:
1 server (instance) with a self-hosted GitLab
1 server (instance) for development
1 server (instance) for production
Let's say in my gitlab I have a ReactJs project and I configured my gitlab-ci.yml as following:
Job deploy_dev: upon pushing to the dev branch, the updates are copied with rsync to /var/www/html/${CI_PROJECT_NAME} (as a deployment to the dev server)
The runner that picks up the job deploy_dev is a shared runner installed on that same dev server that I deploy to, and it picks up jobs with the tag reactjs
The question is:
If I want to deploy to production what is the best practice should I follow?
I managed to come up with a couple of options that I thought of but I don't know which one is the best practice (if any). Here is what I came up with:
Modify gitlab-ci.yml, adding a job deploy_prod with the same tag reactjs, but the script should rsync to the production server's /var/www/html/${CI_PROJECT_NAME} over SSH?
Set up another runner on the production server, let it pick up jobs with the tag reactjs-prod, and modify gitlab-ci.yml to have deploy_prod use the tag reactjs-prod?
Do you have a better way than the two mentioned above?
Last question (related):
Where is the best place to install my runners? Is what I'm doing (having my runners on my dev server) actually OK?
Please if you can explain to me the best way (that you would choose) with the reasons as in pros or cons I would be very grateful.
The best practice is to separate your CI/CD infrastructure from the infrastructure where you host your apps.
This is done to minimize the number of variables which can lead to problems with either your applications or your runners.
Consider the following scenarios when you have a runner on the same machine where you host your application: (The below scenarios can happen even if the runner and app are running in separate Docker containers. The underlying machine is still a single point of failure.)
The runner executes a CPU/RAM heavy job and takes up most of the resources on the machine. Your application starts experiencing performance problems.
The GitLab runner crashes and puts the host machine in an inoperable state (Docker panic or whatever). Your production app stops functioning.
Your app breaks the host machine (it doesn't matter how; it can happen), your CI/CD stops working, and you cannot deploy a fix to production.
Consider having a separate runner machine (or machines. Gitlab runner can scale horizontally), that is used to run your deployment jobs to both dev and production servers.
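As a sketch of what that can look like in gitlab-ci.yml, assuming the runner is a dedicated machine with SSH access to both servers (the hostnames, the deploy user and the build/ directory are illustrative):

  stages:
    - deploy

  deploy_dev:
    stage: deploy
    tags:
      - reactjs
    only:
      - dev
    script:
      # the runner pushes to the dev server instead of writing to its own disk
      - rsync -az --delete build/ deploy@dev.example.com:/var/www/html/${CI_PROJECT_NAME}

  deploy_prod:
    stage: deploy
    tags:
      - reactjs
    only:
      - master
    when: manual    # require an explicit click before touching production
    script:
      - rsync -az --delete build/ deploy@prod.example.com:/var/www/html/${CI_PROJECT_NAME}

With this layout, neither app server hosts a runner, and when: manual keeps production deployments behind a deliberate action in the GitLab UI.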
I agree with #cecunami's answer.
As an example, in our org we have a dedicated VM used only for the runner, which is explicitly monitored by our teams.
Since first creating the machine, the CPU, RAM and storage demands have grown massively, which is why the infrastructure should be separated.

Deploy Go app to Docker in Vagrant

I'm currently working on a RESTful API in Go, using Windows and GoClipse.
The testing environment consists of a few VMs managed by Vagrant. These machines contain nginx, PostgreSQL, etc. The app should be deployed into Docker on a separate VM.
There is no problem deploying the app the first time using a guide like this one: https://blog.golang.org/docker. I've read a lot of information and guides, but I am still totally confused about how to automate the deployment process and update the Go app in Docker after changes to the code. At the current stage, code changes happen very often, so deployment should be fast.
Could you please advise me on the correct way to set up some kind of local CI for this case? Which approach would be better?
Thanks a lot.
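One common pattern (a sketch, not from this thread): drive the rebuild with a minimal docker-compose.yml next to the Dockerfile from the Go guide, so that one command rebuilds the image and replaces the container; the service name and port are placeholders:

  version: "2"
  services:
    api:
      build: .          # reuses the Dockerfile from the blog.golang.org/docker guide
      ports:
        - "8080:8080"

After each code change, docker-compose up -d --build rebuilds the image and swaps the running container in one step; a Jenkins job or a simple watch script on the Docker VM can run that same command to get a basic local CI loop.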

How to easily switch between dev and prod environments

What is the best way to get dev and test browsers to resolve our production domain name to dev and test environments? Say our production domain is widgets.com. In the past, we've used internal DNS for devwidgets.com, testwidgets.com, demowidgets.com, etc. But this is proving to be a big pain. It seems better to have a hosts file or proxy server setup so each client can choose to resolve widgets.com to each pre-prod environment. Ideas? How have others solved this problem?
You can run different versions on different ports (easiest for internal and external setups) or on different CNAMEs (for external setups):
dev.widgets.com:81
dev.widgets.com:82
...
dev1.widgets.com
dev2.widgets.com
...
This means that the different environments can be configured centrally through the web server rather than having to manage lots of different host files.
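If the pre-prod environments happen to be containerized, as elsewhere on this page, the port-per-environment variant maps directly onto compose port mappings (a sketch; widgets-app is a placeholder image name):

  services:
    widgets-dev1:
      image: widgets-app    # placeholder image name
      ports:
        - "81:80"           # reached as dev.widgets.com:81
    widgets-dev2:
      image: widgets-app
      ports:
        - "82:80"           # reached as dev.widgets.com:82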
We have solved it by using internal DNS, like you said. Each developer has his own environment, so I can go to www.ordomain.com.branch2.environment10, where environment10 is my specific environment and branch2 refers to a specific checkout, in case I have multiple checkouts because I'm working on different projects simultaneously. Just the different environments may suffice for you.
In another situation I've configured a different CNAME, using dev.widgets.com to remotely reach my development environment. The disadvantage is that anyone can reach it, so you should password-protect it or use an IP filter.
I wouldn't recommend using hosts files. They are hard to maintain, and you can't reach the live environment from your development PC.
