Continuous Integration workflow with docker swarm

Here's my setup; the output below is from docker-machine ls. I'm using Docker Machine to provision the swarm.
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
cluster-master * (swarm) digitalocean Running tcp://REDACTED:2376 cluster-master (master) v1.11.1
kv-store - digitalocean Running tcp://REDACTED:2376 v1.11.1
node-1 - digitalocean Running tcp://REDACTED:2376 cluster-master v1.11.1
node-2 - digitalocean Running tcp://REDACTED:2376 cluster-master v1.11.1
Right now I'm searching for a way to set up my CI/CD workflow. Here is my initial idea:
Create an automated build on Docker Hub (linked to Bitbucket)
Once changes are pushed, trigger a build on Docker Hub
Testing will be done on Docker Hub (npm test)
Create a webhook on Docker Hub that fires once the build succeeds.
The webhook will point to my own application, which will then push the changes to the swarm
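For that last step, here is a minimal sketch of the deploy script such a webhook handler might run against the standalone swarm provisioned above; the image and container names are placeholders:

# deploy.sh - hypothetical script triggered by the Docker Hub webhook
eval $(docker-machine env --swarm cluster-master)         # point the Docker client at the swarm
docker pull myrepo/myapp:latest                           # fetch the freshly built image
docker stop myapp && docker rm myapp                      # retire the old container
docker run -d --name myapp -p 80:80 myrepo/myapp:latest   # start the new one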
Questions:
Is it okay to run your testing on Docker Hub or should I rely on another service?
If I rely on another service, what is your recommended service?
My main problem is pushing the changes to the Docker swarm. Should I set up my swarm on a remote machine and host the application there?

The first part of the process all looks fine. Where it gets complicated is managing the deployed production containers.
Is it okay to run your testing on Docker Hub or should I rely on another service?
Yes, it should be fine to run tests on Docker Hub, assuming you don't need further integration tests.
I need to integrate my containers with Amazon services and have a fairly non-standard deployment, so this part of the testing has to be done on an Amazon instance.
My main problem is pushing the changes to the Docker swarm. Should I set up my swarm on a remote machine and host the application there?
If you're just using one machine, you don't need the added overhead of swarm. If you're planning to scale to a larger multi-node deployment, then yes, deploy to a remote machine, because you'll discover the gotchas around using swarm sooner.
You need to think about how you retire old versions and roll out the latest version of your containers to the swarm, which is often called scheduling.
One simple approach that can be used is:
Remove traffic from old running container
Stop old running container
Pull latest container
Start latest container
Rinse and repeat for all running containers.
In docker swarm this is done by declaring a service and then updating its image; the rollout can be watched as a set of tasks. For more detail on this process see Apply rolling update to swarm, and for how to do this on Amazon see updating docker containers in ECS.
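As a rough sketch of that declare-then-update flow (this requires swarm mode in Docker 1.12+, newer than the 1.11.1 shown above; myrepo/web is a placeholder image):

docker service create --name web --replicas 3 -p 80:80 myrepo/web:1.0
# roll the image forward one task at a time, pausing 10s between tasks
docker service update --update-parallelism 1 --update-delay 10s --image myrepo/web:1.1 web
docker service ps web    # watch old tasks shut down and new ones start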

Related

Kubernetes (microk8s) vs Traditional Docker Host Machine Architecture

I tried to find the answer in previous posts, but I did not find it!
My question may seem dumb; I'm just trying to figure it out :)
I'm new to Docker and Kubernetes, and I'm trying to understand the architecture of a Kubernetes cluster, its nodes, and its pods.
I'm using two machines with Docker installed; each machine has two containers running. I want to install MicroK8s to start playing with Kubernetes. My questions are:
Can I install it on a separate machine and connect it to my Docker host machines, so that it manages my containers there with the support of some sort of agent or services? Or must Kubernetes/MicroK8s be installed on the machines that will host the containers?
Can I add my running Docker containers directly to a pod, or must I re-create them?
Many thanks
You can play with any VM software (CPU virtualization required).
You can set up three VMs (master, node1, node2). You have to install Kubernetes in each VM; when you connect them through Calico, they communicate with each other. When you create pods with an app or a DB, you can load-balance them across node1 and node2 (or more) from the master, then create a Service to expose a route to the pods. Or, if you want to run everything on one big server, you can; horizontal versus vertical scaling is your choice.
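For the MicroK8s case specifically, joining machines into one cluster is roughly this simple (the IP and token below are placeholders printed by add-node):

# on the machine chosen as the control plane
microk8s add-node                          # prints a join command with a one-time token
# on each of the other machines
microk8s join 192.168.1.10:25000/<token>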
You can't attach a running Docker container to a pod, but you can load a Docker image from any registry.
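So the usual path is to snapshot or rebuild your containers as images and run those in the cluster; a hedged sketch with hypothetical names:

# snapshot an existing container as an image and push it to a registry
docker commit my-running-container myrepo/myapp:v1
docker push myrepo/myapp:v1
# run it in MicroK8s as a deployment and expose it
microk8s kubectl create deployment myapp --image=myrepo/myapp:v1
microk8s kubectl expose deployment myapp --port=80 --type=NodePort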

Kitematic or other GUI based options to connect to a remote docker host

I have installed CoreOS on a laptop to use it as a Docker host. I really like Kitematic on my Mac for creating and managing containers, but I don't see an option in Kitematic to connect to the remote Docker daemon on CoreOS. Are there other tools I can use to connect to a remote Docker host and manage it with a GUI rather than the command line?
I also like Kitematic a lot! As an alternative on CoreOS, you can try docker-ui and its evolution, Portainer.
They are both Docker containers that help you find and run Docker images and inspect Docker volumes, networks, and container stats.
You can also launch new containers directly through the web UI; for more information, see this good review of Portainer's possibilities.
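As a rough sketch, Portainer itself runs as a container: it can manage the local daemon through the Docker socket, or point at a remote host's exposed API (the CoreOS address below is a placeholder):

# manage the local Docker daemon
docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer
# or point Portainer at a remote Docker host that exposes its API
docker run -d -p 9000:9000 portainer/portainer -H tcp://<coreos-host>:2375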
Rancher UI from Rancher Labs may also be worth looking at. It is designed more as a Docker orchestration tool (for when you operate a Docker swarm cluster, for instance).

Docker for Windows Swarm IIS Service with Win10 Insider running but unreachable

I'm currently experimenting with swarm services on Docker for Windows. The new Windows 10 Insider build supports overlay networking for Windows containers, and I was pleased to see my IIS service actually start. The only issue I came across is that I cannot reach the service in the browser, despite trying multiple things such as different ports and networks. The command issued is as follows:
docker service create --name webfarm -p 80:80 microsoft/iis
I have also tried the --network flag to try different networks, and I have made sure to test all IP addresses visible in the docker service inspect webfarm output.
docker service ps webfarm does indicate that my service is in state RUNNING and shows no errors, so I don't know what else I can try, especially since these commands worked fine on Linux with Apache.
I was wondering if anyone has been able to successfully create a service using Windows Containers on the Windows Insider build (15046), and if so, how?
Never mind, I found that this actually is not supported yet.
The following source states:
"At the moment only DNS round robin is implemented as described in the Microsoft blog post. You cannot use to publish ports externally right now. More to come in the near future." (https://stefanscherer.github.io/docker-swarm-mode-windows10/)
And indeed, the blogposts states the following:
"Currently, Windows supports DNS Round-Robin load balancing between services. The routing mesh for Windows Docker hosts is not yet supported, but will be coming soon. Users seeking an alternative load balancing strategy today can setup an external load balancer (e.g. NGINX) and use Swarm’s publish-port mode to expose container host ports over which to load balance." (https://blogs.technet.microsoft.com/virtualization/2017/02/09/overlay-network-driver-with-support-for-docker-swarm-mode-now-available-to-windows-insiders-on-windows-10/)
I guess I'll have to wait for this feature; in the meantime I'll use the alternative.
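For reference, the publish-port mode mentioned in the quote looks roughly like this on a recent Docker release (the published port is a placeholder; an external load balancer such as NGINX would then balance across the hosts):

docker service create --name webfarm --publish mode=host,target=80,published=8080 microsoft/iis
# each node running a task now serves the container directly on host port 8080, bypassing the routing mesh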

Registering service on existing node in Consul

There is a Consul cluster in my local environment, as well as some developers' local machines. Each developer has a Tomcat server which runs some web artifacts in a Docker container, so I want to register these artifacts as services on Tomcat deploy.
Assuming that we have already registered an empty node for each developer's local machine, how can I register/deregister a new service on an existing node? Do I need a Consul agent running on every node?
I know it's possible to add a service when registering a node, but I haven't found any info about how to add services to a node dynamically. I'd prefer the HTTP API if possible (it's much easier to use on local machines).
Do I need a Consul agent running on every node?
Yes. Even though you can also register external services on a remote machine with a curl POST, service discovery will benefit from having the agent running on the nodes.
I know it's possible to add a service when registering a node, but I haven't found any info about how to add services to a node dynamically.
Registering a service is fairly easy in Consul, and you can find more details at the following link:
https://www.consul.io/intro/getting-started/services.html
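For example, registering and later deregistering a service against the local agent's HTTP API looks roughly like this (service name, ID, and port are placeholders):

# register a service with the agent on this node
curl -X PUT -d '{"ID": "web-1", "Name": "web", "Port": 8080}' \
    http://localhost:8500/v1/agent/service/register
# deregister it later by ID
curl -X PUT http://localhost:8500/v1/agent/service/deregister/web-1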
However, if you wish to give your developers better isolation, I would recommend running the Consul server/client agents in Docker and letting Registrator take care of everything.
Registrator from gliderlabs is a service registry bridge for Docker: it automatically registers and deregisters services for any Docker container by inspecting containers as they come online.
You can find more details here: https://github.com/gliderlabs/registrator
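A typical invocation, lightly adapted from the Registrator docs (the Consul address is a placeholder for your local agent):

docker run -d --name=registrator --net=host \
    -v /var/run/docker.sock:/tmp/docker.sock \
    gliderlabs/registrator:latest consul://localhost:8500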

How to deploy a Docker container on EC2 ECS from Docker Hub private repo?

I have an image in a private Docker Hub repository which I'm trying to deploy on Amazon's Elastic Container Service (ECS). There seems to be a nice web console for running a container from a public repository, but nothing for a private one. I've read and tried to understand the documentation for this, but I don't understand what it has to do with deploying my container, as it states "The Amazon ECS container agent allows container instances to connect to your cluster".
As an alternative to the web console, I see mentions of setting up a task definition. It sounds like that's the manual version of what the web console does. I suspect my best bet is this method, possibly with the help of the script here.
What is the simplest way for me to run an existing image on ECS that's hosted in a private repository?
Right, so a container instance is just the EC2 machine that happens to run the services defined in the cluster. The cluster can then connect to the EC2 machine as a container instance, but unless that EC2 machine is appropriately configured with registry credentials, it can't pull from your private repository.
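Concretely, one documented way to configure this is through the ECS agent's config file on each container instance; a sketch with placeholder credentials (the auth value is the base64 of your Docker Hub username:password, as found in ~/.dockercfg):

# /etc/ecs/ecs.config on the container instance
ECS_ENGINE_AUTH_TYPE=dockercfg
ECS_ENGINE_AUTH_DATA={"https://index.docker.io/v1/":{"auth":"<base64-credentials>","email":"user@example.com"}}

After editing the file, restart the ECS agent so it picks up the credentials; your task definition can then reference the private image by name.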
