Kitematic or other GUI-based options to connect to a remote Docker host - user-interface

I have installed CoreOS on a laptop to use it as a Docker host. I really like Kitematic on my Mac for creating and managing containers, but I don't see an option in Kitematic to connect to the remote Docker daemon on CoreOS. Are there other tools I can use to connect to a remote Docker host and manage it through a GUI rather than the command line?

I also like Kitematic a lot! As an alternative on CoreOS, you can try docker-ui and its evolution, Portainer.
They are both Docker containers that can help you find and run Docker images and inspect Docker volumes, networks, and container stats.
You can also launch new containers directly through the web UI. There is more information in this good review of Portainer's possibilities.
Rancher UI from Rancher Labs may also be worth looking at. It is designed more as a Docker orchestration tool (for when you operate a Docker Swarm cluster, for instance).
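As a minimal sketch of trying Portainer on the CoreOS host (assuming the current portainer/portainer-ce image name and that port 9000 is free; the image name has changed over the project's history):

```shell
# Run Portainer, exposing its web UI on port 9000.
# Mounting the Docker socket lets it manage the host's containers.
docker run -d \
  --name portainer \
  -p 9000:9000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  portainer/portainer-ce
```

You can then browse to port 9000 on the CoreOS machine from your Mac.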

Related

docker swarm manager on windows through a tailscale network

I want my Windows machine to be a manager node in my Docker swarm; all the compute power will be on Linux swarm nodes.
Another complication: I am using Tailscale for the network. I can't seem to configure Docker to use the Tailscale network for --listen-addr and --advertise-addr.
It seems to work OK if I don't try to force it to use Tailscale.
Any ideas?
I have tried looking for a config JSON file where I could add a Tailscale DNS entry, but I can't find it on Windows.
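For reference, the flags in question look like this (a sketch; 100.x.y.z stands in for the machine's Tailscale IPv4 address, which tailscale ip -4 prints):

```shell
# Initialize the swarm manager, binding cluster-management traffic
# to the Tailscale interface instead of the default one.
# Replace 100.x.y.z with this machine's Tailscale IPv4 address.
docker swarm init \
  --listen-addr 100.x.y.z:2377 \
  --advertise-addr 100.x.y.z:2377
```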

Access local process from local cluster

I have a local Kubernetes Cluster running under Docker Desktop on Mac. I am running another docker-related process locally on my machine (a local insecure registry). I am interested in getting a process inside the local cluster to push/pull images from the local docker registry.
How can I expose the local registry to be reachable from a pod inside the local Kubernetes cluster?
One way to do this would be to have both the Docker Desktop cluster and the Docker registry use the same Docker network. Adding the registry to an existing network is easy.
How does one add the Docker Desktop cluster to the network?
As I mentioned in the comments, I think what you're looking for is described in the documentation here. You would have to add your local insecure registry as an insecure-registries value in Docker Desktop. Then, after a restart, you should be able to use it.
Deploy a plain HTTP registry
This procedure configures Docker to entirely disregard security for your registry. This is very insecure and is not recommended. It exposes your registry to trivial man-in-the-middle (MITM) attacks. Only use this solution for isolated testing or in a tightly controlled, air-gapped environment.
Edit the daemon.json file, whose default location is /etc/docker/daemon.json on Linux or C:\ProgramData\docker\config\daemon.json on Windows Server. If you use Docker Desktop for Mac or Docker Desktop for Windows, click the Docker icon, choose Preferences (Mac) or Settings (Windows), and choose Docker Engine.
If the daemon.json file does not exist, create it. Assuming there are no other settings in the file, it should have the following contents:
{
  "insecure-registries" : ["myregistrydomain.com:5000"]
}
I also found a tutorial for this on Medium, using macOS. Take a look here.
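Once the registry is listed in insecure-registries and Docker has restarted, pushing is the usual tag-and-push; myregistrydomain.com:5000 here matches the example above:

```shell
# Tag a local image for the insecure registry, then push it.
docker tag myimage myregistrydomain.com:5000/myimage
docker push myregistrydomain.com:5000/myimage
```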
Is running the registry inside the Kubernetes cluster an option?
That way you can use a NodePort service and push images to an address like
"localhost:9000/myrepo".
This is significant because Docker allows insecure (non-SSL) connections to localhost.
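A sketch of what that service could look like, assuming a registry Deployment already runs in the cluster with the label app: registry (names are illustrative). Note that the default NodePort range is 30000-32767, so an address like localhost:9000 would require widening the API server's --service-node-port-range:

```shell
# Expose an in-cluster registry via NodePort so the host can
# push to localhost:<nodePort>/<repo>.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: registry
spec:
  type: NodePort
  selector:
    app: registry
  ports:
    - port: 5000        # registry's service port
      targetPort: 5000  # registry container listens here
      nodePort: 30500   # within the default 30000-32767 range
EOF
```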

Kubernetes (microk8s) vs Traditional Docker Host Machine Architecture

I tried to find the answer in previous posts, but I did not find it!
My question may seem dumb; I'm just trying to figure this out :)
I'm new to Docker and Kubernetes, and I'm trying to understand the architecture of a Kubernetes cluster, its nodes, and its pods.
I'm using two machines with Docker installed, and each machine has two containers running. I want to install MicroK8s to start playing with Kubernetes. My questions are:
As the image below shows: can I install it on a separate machine and connect it to my Docker host machines so that it manages my containers there with the support of some sort of agent (or maybe services)? Or must Kubernetes/MicroK8s be installed on the machine that will host the containers?
Can I add my running Docker containers directly to a pod, or must I re-create them?
Many thanks
You can play with any VM software (CPU virtualization required).
You can set up three VMs (master, node1, node2). You have to install Kubernetes in each VM. When you connect them through Calico, they can communicate with each other. When you make pods with an app or a DB, you can load-balance them to node1 and node2 (or more) from the master. Then you can create a service to expose a route to the pods. Or, if you want to run everything on one big server, you can. Horizontal or vertical scaling is your choice.
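A rough sketch of that bootstrap using kubeadm (the Calico manifest URL is the commonly published one and may change between releases; the token and hash placeholders are printed by kubeadm init):

```shell
# On the master VM: initialize the control plane with a pod CIDR
# matching Calico's default.
kubeadm init --pod-network-cidr=192.168.0.0/16

# Still on the master: install Calico so pods on different nodes
# can communicate.
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml

# On node1 and node2: join the cluster using the values kubeadm init printed.
kubeadm join <master-ip>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```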
You can't mount a running Docker container into a pod, but you can load a Docker image from any registry.

Continuous Integration workflow with docker swarm

Here's my setup; this output was taken from docker-machine ls. I'm using Docker Machine to provision the swarm.
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
cluster-master * (swarm) digitalocean Running tcp://REDACTED:2376 cluster-master (master) v1.11.1
kv-store - digitalocean Running tcp://REDACTED:2376 v1.11.1
node-1 - digitalocean Running tcp://REDACTED:2376 cluster-master v1.11.1
node-2 - digitalocean Running tcp://REDACTED:2376 cluster-master v1.11.1
Right now I'm searching for a way to set up my CI/CD workflow. Here is my initial idea:
Create an automated build on Docker Hub (Bitbucket)
Once changes are pushed, trigger a build on Docker Hub
Testing will be done on Docker Hub (npm test)
Create a webhook on Docker Hub once the build succeeds
The webhook will point to my own application, which will then push the changes to the swarm
Questions:
Is it okay to run your testing on Docker Hub, or should I rely on another service?
If I rely on another service, what would you recommend?
My main problem is pushing the changes to the Docker swarm. Should I set up my Docker swarm on a remote machine and host the application there?
The first part of the process all looks fine. Where it gets complicated is managing the deployed production containers.
Is it okay to run your testing on docker hub or should I rely on another service?
Yes, it should be fine to run tests on Docker Hub, assuming you don't need further integration tests.
I need to integrate my containers with Amazon services and have a fairly non-standard deployment, so this part of the testing has to be done on an Amazon instance.
My main problem is pushing the changes to the Docker swarm. Should I set up my Docker swarm on a remote machine and host the application there?
If you're just using one machine, you don't need the added overhead of using Swarm. If you're planning to scale to a larger multi-node deployment, then yes, deploy to a remote machine, because you'll discover the gotchas around using Swarm sooner.
You need to think about how you retire old versions and bring the latest version of your containers into the swarm, which is often called scheduling.
One simple approach that can be used is:
Remove traffic from old running container
Stop old running container
Pull latest container
Start latest container
Rinse and repeat for all running containers.
This is done in Docker Swarm by declaring a service, then updating its image, which can be watched as a task. For more information on the details of this process, see Apply rolling update to swarm, and for how to do this on Amazon, see updating docker containers in ecs.
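The remove/stop/pull/start cycle above can be sketched with the service commands (service and image names are illustrative):

```shell
# Declare the service once, with a rolling-update policy:
# replace one task at a time, waiting 10s between replacements.
docker service create \
  --name web \
  --replicas 3 \
  --update-parallelism 1 \
  --update-delay 10s \
  myrepo/web:1.0

# Later, roll out a new version; Swarm drains and replaces
# tasks according to the policy above.
docker service update --image myrepo/web:1.1 web

# Watch the tasks cycle through the update.
docker service ps web
```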

Forwarding of Docker Container running GUI on a non-GUI host

I have a small cluster of Docker nodes that I access via a gateway server I SSH into. What I would like to do is run e.g. Eclipse with a GUI on the cluster and access that GUI on my computer.
What I have found so far is this: http://fabiorehm.com/blog/2014/09/11/running-gui-apps-with-docker/
However, the problem I'm experiencing is that the host computer doesn't run any X server, since it's only a node in a cluster, so I cannot mount the required directory into the container.
Is there a way to use GUI applications in a container with this setup?
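For context, the approach in the linked post amounts to sharing the host's X socket with the container, which is exactly what is missing on an X-less node (some-gui-image is a placeholder):

```shell
# The blog-post approach: works only when the Docker host itself
# runs an X server, because it bind-mounts the host's X socket.
docker run -it \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  some-gui-image
```

Without an X server on the node, one alternative direction is to run the display server inside the container instead (e.g. a VNC server) and tunnel its port through the gateway with ssh -L.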
