Using Docker to share projects to work on, on Windows 10?

I'm a junior web developer working in a small web agency. We work on Windows 10, with WampServer, mainly on Prestashop, WordPress and Symfony websites. For every task I am given a "ticket", for which I develop on a new branch (if needed). When the work is done I merge my branch into the develop branch, which is hosted on a preproduction server; if it is considered OK, it is then merged into the master branch, which is the website in production, hosted on another server.
I was given the task to do some research on Docker and to find how it could improve our workflow.
But from what I've understood so far, Docker containers are just similar to virtual machines and are only useful for building isolated environments in order to test an application without having to think about dependencies.
But given the fact that we already have a preproduction server, I don't see the point of using Docker. Did I miss something?
Also, could Docker be of use for sharing projects between team members (we all work on Windows)? For example, if a developer is working on a website locally, can he create a container and its image that another developer could instantly use as-is, without any configuration, to work on his part?

But from what I've understood so far, Docker containers are just similar to virtual machines and are only useful for building isolated environments in order to test an application without having to think about dependencies.
No, Docker containers are not only useful for testing.
With a well-built Docker workflow you can achieve 100% parity between development, staging and production, provided they all use the same Docker images.
But given the fact that we already have a preproduction server, I don't see the point of using Docker. Did I miss something?
This preproduction server, normally called a staging server, should also use Docker to run the code.
Also, could Docker be of use for sharing projects between team members (we all work on Windows)? For example, if a developer is working on a website locally, can he create a container and its image that another developer could instantly use as-is, without any configuration, to work on his part?
Yes. You create base Docker images with only what is necessary to run in production, and from them you build other Docker images that add the necessary developer tools.
Production image company/app-name:
FROM php:alpine
# add your production dependencies here
Assuming one builds that image with the name company/app-name, the development image can then be built on top of it.
Development image company/app-name-dev:
FROM company/app-name
# add only developer tools here
Now the developer uses both images, company/app-name and company/app-name-dev, during development, while on the staging server only the company/app-name image is used to run the code.
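For example, assuming the two Dockerfiles above are saved as Dockerfile and Dockerfile.dev, the images could be built like this:

# build the production image, then the development image on top of it
docker build -t company/app-name -f Dockerfile .
docker build -t company/app-name-dev -f Dockerfile.dev .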
After some months of iterating on this flow you may even feel confident enough to start using company/app-name to deploy the app to production, and then you have 100% parity between development, staging and production.
Take a look at the PHP Docker Stack for some inspiration; I built it with this goal in mind at my latest job, but I ended up leaving the company while we were in the process of adopting it in development...
But don't put all the services you need into one single Docker image, because that is bad practice in Docker. Instead use one service per image: one for PHP, another for the database, another for the Nginx server, and so on. See here how several services can be composed together with Docker Compose.
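As a rough illustration of that one-service-per-image layout, a minimal docker-compose.yml might look like the following; the port, paths and credentials are example assumptions, and company/app-name-dev is the hypothetical development image from above:

version: "3"
services:
  php:
    image: company/app-name-dev   # hypothetical development image built above
    volumes:
      - ./src:/var/www/html       # mount the project source for live editing
  nginx:
    image: nginx:alpine
    ports:
      - "8080:80"                 # site available on http://localhost:8080
    depends_on:
      - php
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: secret # development-only credentials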

Related

Bridging 2 docker-compose solutions?

In my line of work, I’m working with MS Visual Studio, Docker for Windows and VS’s Docker tooling.
I'm working on a multi-container solution that has dependencies on containers with long startup times (such as Elasticsearch). I've experimented with the docker-compose 2.4 format and successfully used the depends_on with healthchecks combination to ensure the startup order of my projects. This, however, negatively impacts my development experience, as it takes a long time to start my solution (using the VS / .dcproj / docker-compose integration) or to attach the debugger to it.
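For reference, the depends_on plus healthcheck combination I'm using looks roughly like this in the 2.4 format (the image tag and timings here are illustrative):

version: "2.4"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
    healthcheck:
      # consider the container healthy once the cluster answers HTTP requests
      test: ["CMD-SHELL", "curl -fs http://localhost:9200/_cluster/health || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 30
  web:
    build: .
    depends_on:
      elasticsearch:
        condition: service_healthy  # wait for the healthcheck, not just container start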
I'm looking to optimize my development experience while retaining the guarantee that my dependencies are up and running before my under-development containers are launched.
I was wondering if it would be a good solution to have dependencies such as Elasticsearch, Consul, etc. started up as a separate docker-compose solution and have it run side by side?
Is it possible to bridge two docker-compose solutions using a Docker network, and preferably retain the Docker network DNS?
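To make the bridging idea concrete, I imagine pre-creating a shared network and attaching both compose solutions to it, something like this (the network name is just a placeholder):

docker network create shared-dev-net

# then, in each solution's docker-compose.yml (2.4 format):
networks:
  default:
    external:
      name: shared-dev-net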
Is there a better conceptual solution than this?
Best regards

Building windowsservercore Docker image from scratch

I'm new to Docker, and I work in a corporate environment that is very locked down. In short, I have Docker installed on a Windows Server 2016 machine, but I have no access to Docker Hub at all, will not be getting access in the near future, and so cannot rely upon "FROM microsoft/windowsservercore" in my Dockerfile.
Nonetheless, I need to package a .NET server application in a Docker image for evaluation purposes. Is it possible to build a windowsservercore image from scratch? I looked for the Dockerfile on GitHub for reference, and while I found the one for dotnet-framework at https://github.com/Microsoft/dotnet-framework-docker, I could not locate a windowsservercore Docker project there.
Images from Docker Hub are often maintained by a wide community and trustworthy. If you're on a closed network that doesn't allow direct access to Docker Hub and you want to run this image on the production server anyway, pull the image on your laptop, then use docker save and docker load to export the image to a tar file and import it on the production machine.
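A minimal sketch of that round trip (the tar file name is arbitrary):

# on the machine with Docker Hub access
docker pull microsoft/windowsservercore
docker save -o windowsservercore.tar microsoft/windowsservercore

# on the locked-down server, after transferring the tar file
docker load -i windowsservercore.tar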
References:
https://docs.docker.com/engine/reference/commandline/save/
https://docs.docker.com/engine/reference/commandline/load/

How to reproduce Codeship's non-Docker CI environment locally?

Is there a way to reproduce Codeship's CI environment locally when not using Codeship's Docker support?
We don't share the build VMs for our classic infrastructure publicly at the moment. I'll bring this up with our engineering team, but I can't make any promises right now about what they'll decide.
We do, however, have an SSH debug feature available that will allow you to access a build VM with your code cloned via SSH, and run and tweak commands that way. See https://codeship.com/documentation/continuous-integration/ssh-access/ for more information.

Laravel, only developing on remote server

Hello, I have been wanting to get into working with a framework, and Laravel seems like a decent one to try.
I have seen a lot of tutorials that tell you how to set up Laravel locally with Homestead or variants.
I want to install and set up Laravel on my dedicated remote server with my hosting company. From there I want to be able to work with it from my local MacBook or Mac Pro.
I have not been able to find a good tutorial to make this happen in the fashion I want to do it.
I work with PHP and related technologies daily, but I usually log in to FTP, edit files with TextWrangler, save them, and go about my day, so my methods are dated and not efficient.
One side note is that I also have a Dell PowerEdge server running CentOS and VestaCP in my office as my development server, so nothing is done locally per se (on my own computer); the question and answer will therefore apply to both my remote server and my remote-but-local development server.
Any suggestions are always welcome.
Best Regards,
Bradley
Assuming you have full root access to your remote servers, you should install Composer on them and install Laravel in whichever way suits you. Then you can edit your project files just as if you were working on them locally.
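A minimal sketch of that server-side setup, assuming root access and a placeholder project name:

# install Composer globally
curl -sS https://getcomposer.org/installer | php
mv composer.phar /usr/local/bin/composer

# scaffold a new Laravel project
composer create-project laravel/laravel myapp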
Seriously though, the biggest thing you should add to your development arsenal (in case you haven't already), and which will make your development process so much more resilient, is Git.
Set up a free Bitbucket account, get a free Git client, and learn how commits, pushes, pulls, branches and deployments work. The easiest approach to deployment is to use a service such as Envoyer.
That way you can develop and test locally (even if 'locally' is a remote machine) and not really have to worry about breaking your app by making a mistake in a controller or something on the live server.
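A bare-bones version of that cycle (the repository URL and branch name are placeholders):

# get the project and start a feature branch
git clone git@bitbucket.org:team/project.git
cd project
git checkout -b feature/my-change

# ...edit files, then record and publish the change...
git add -A
git commit -m "Describe the change"
git push -u origin feature/my-change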

Where should I install my Continuous Integration Server?

I'm the lead developer at a startup and we currently have the following setup:
- Development Server
- Staging Server
- Production Server
- Paid Subversion Hosting
- My local machine
- 2 other developers' local machines
Where is the best place to host the CI server? On an entirely new server? Or is my local machine sufficient for this?
Definitely not your local machine. I'd suggest a separate server unless you don't mind slowing down your dev server.
I say not your local machine because the last thing you want to be hindered by is builds. Nothing is more frustrating than a slow machine. And you should generally keep official builds generated on a separate server.
Generally not your local machine (when other options are available), as you mostly want to have the same "stuff" installed (or not installed) on the build server as on the production server, so that whatever runs on the build server runs in as realistic a scenario as possible.
Speaking from a .NET point of view, this means that I don't want (for example) Visual Studio running on the build server, which rules out my local machine.
It would also be a good idea to make sure someone on your team has access to the machine and can perform actions on it, which potentially rules out a hosted solution.
Aside from that, as long as it's on a box with a half-decent spec, I don't think it really matters.
I would put it on the development server, the staging server, or the paid Subversion hosting instance, if possible.