Access local process from local cluster - macOS

I have a local Kubernetes cluster running under Docker Desktop on Mac. I am also running another Docker-related process locally on my machine (a local insecure registry). I would like a process inside the local cluster to be able to push/pull images from that local registry.
How can I expose the local registry so it is reachable from a pod inside the local Kubernetes cluster?
One way to do this would be to have both the Docker Desktop cluster and the Docker registry use the same Docker network. Adding the registry to an existing network is easy (a sketch follows below).
How does one add the Docker Desktop cluster to the network?
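For the easy half, attaching the registry container to an existing Docker network is a one-liner; a sketch with hypothetical names (a network k8s-net and a container named registry):

docker network connect k8s-net registry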

As I mentioned in the comments,
I think what you're looking for is covered in the documentation here. You would have to add your local insecure registry under the insecure-registries value in Docker Desktop. Then, after a restart, you should be able to use it.
Deploy a plain HTTP registry
This procedure configures Docker to entirely disregard security for your registry. This is very insecure and is not recommended. It exposes your registry to trivial man-in-the-middle (MITM) attacks. Only use this solution for isolated testing or in a tightly controlled, air-gapped environment.
Edit the daemon.json file, whose default location is /etc/docker/daemon.json on Linux or C:\ProgramData\docker\config\daemon.json on Windows Server. If you use Docker Desktop for Mac or Docker Desktop for Windows, click the Docker icon, choose Preferences (Mac) or Settings (Windows), and choose Docker Engine.
If the daemon.json file does not exist, create it. Assuming there are no other settings in the file, it should have the following contents:
{
"insecure-registries" : ["myregistrydomain.com:5000"]
}
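For the setup in the question, a minimal sketch (assuming the registry listens on port 5000 on the Mac, and relying on host.docker.internal, Docker Desktop's built-in alias for the host machine):

{
"insecure-registries" : ["host.docker.internal:5000"]
}

A pod in the Docker Desktop cluster should then be able to reference images from it, e.g.:

apiVersion: v1
kind: Pod
metadata:
  name: registry-test
spec:
  containers:
  - name: app
    image: host.docker.internal:5000/myimage:latest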
I also found a tutorial for this on Medium covering macOS. Take a look here.

Is running the registry inside the Kubernetes cluster an option?
That way you can use a NodePort service and push images to an address like
"localhost:9000/myrepo".
This is significant because Docker allows insecure (non-SSL) connections to localhost.
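A minimal sketch of such a service (hypothetical names, assuming a registry pod labelled app: registry; note the default NodePort range is 30000-32767, so a port like 9000 would require widening the API server's --service-node-port-range):

apiVersion: v1
kind: Service
metadata:
  name: registry
spec:
  type: NodePort
  selector:
    app: registry
  ports:
  - port: 5000
    targetPort: 5000
    nodePort: 32000

With the defaults you would then push to an address like localhost:32000/myrepo.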

Related

Docker swarm manager on Windows through a Tailscale network

I want my Windows machine to be a manager node in my Docker swarm; all the compute power will be on Linux swarm nodes.
Another complication: I am using Tailscale for the network. I can't seem to configure Docker to use the Tailscale network for --listen-addr and --advertise-addr.
It seems to work OK if I don't try to force it to use Tailscale.
Any ideas?
I have tried looking for a config JSON file to add a Tailscale DNS entry, but I can't find it on Windows.
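For what it's worth, a minimal sketch of pinning the swarm to the Tailscale interface (assuming the Tailscale client is installed; the address below is a placeholder):

tailscale ip -4
# prints this node's Tailscale address, e.g. 100.64.0.1
docker swarm init --advertise-addr 100.64.0.1 --listen-addr 100.64.0.1:2377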

Azure App Service Docker Linux Deployment - Where are my files?

I have an Azure CI pipeline that deploys a .NET Core API to a Linux Docker image and pushes it to our Azure Container Registry. The files are deployed to /var/lib/mycompany/app using docker-compose and a Dockerfile. This is then used as an image for an App Service which provides our API. The app starts fine and works, but if I go to Advanced Tools in the App Service and run a bash session, I can see all the log files generated by Docker, but I can't see any of the files I deployed in the locations I deployed them to. Why is this, and where can I find them? Is it an additional volume somewhere, a symbolic link, a layer in Docker I need to access by some mechanism, a host of some sort, or black magic?
Apologies for my ignorance.
All the best,
Stu.
Opening a bash session using the Advanced Tools will open the session in the underlying VM running your container. If you want to reach your container, you need to install an SSH server in it and use the SSH tab in the Advanced Tools, or the Azure CLI:
az webapp create-remote-connection --subscription <subscription-id> --resource-group <resource-group-name> -n <app-name> &
How to configure your container
How to open an SSH session
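A minimal sketch of the container-side setup (assuming a Debian-based image; App Service's documented convention is an SSH server listening on port 2222 with the root password "Docker!"):

RUN apt-get update \
 && apt-get install -y --no-install-recommends openssh-server \
 && echo "root:Docker!" | chpasswd
EXPOSE 80 2222

You would also need an sshd_config that listens on 2222 and an entrypoint that starts sshd alongside the app; the "How to configure your container" link above covers the details.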

Docker on Windows with a proxy

Hi, I'm using Docker on Windows 10 with a proxy.
Docker itself works fine with the proxy IP set correctly in the Docker settings.
I can download images through Docker.
The problem is that any container I want to run or build also needs the HTTP_PROXY and HTTPS_PROXY variables.
I can do this by adding them to build arguments, run arguments, or the Dockerfile.
However, none of these solutions is perfect, because they add machine-specific variable values to the Dockerfiles and/or the docker-compose files.
I have checked the MobyLinuxVM's values for the HTTP_PROXY and HTTPS_PROXY variables by hacking into it with this trick:
How to connect to docker VM (MobyLinux) from windows shell?
Even though these variables were displayed correctly, any image that I run or Dockerfile that I build still needs to be given these variables.
Is there a way for every container to automatically get these proxy environment variables from the Docker daemon, which already has them set?
I know Linux has this feature natively, but it seems to be missing on Windows.
This does not provide a way to set those values or to get them into a container's context, but it has stopped me from having to change my proxy settings every time I change IP addresses, and it keeps me from having to pass them to containers at runtime (builds are still a different story).
This works for me behind an NTLM-authenticating web proxy, even from home on VPN:
1) Get the IP address of the DummyDesperatePoitras virtual switch Docker for Windows creates (it starts with 169.254., which is usually a non-routable IP).
2) Install CNTLM (not perfect, as it hasn't been updated in five years) and set it to listen on that "dummy" IP address.
3) Use that "dummy" IP address as the proxy in the Docker for Windows settings.
4) Add your internal corporate DNS server's IP and the domain name to the daemon.json in the Docker for Windows settings.
Again, this works for running containers - I only have to deal with the proxy server when I run docker build, passing it along in the build args. I've not found a way around that yet.
Detailed walkthrough: https://mandie.net/2017/12/10/docker-for-windows-behind-a-corporate-web-proxy-tips-and-tricks/
My advice is to use a tool that transparently routes all your traffic to the proxy, without having to set any proxy configuration locally.
For Windows there is Proxifier. It will transparently route all the traffic from your host to the proxy.
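For completeness: newer Docker clients (17.07 and later) can inject these variables into containers and builds automatically from ~/.docker/config.json, which keeps the machine-specific values out of Dockerfiles and compose files. A minimal sketch (the proxy address is a placeholder):

{
  "proxies": {
    "default": {
      "httpProxy": "http://192.168.1.12:3128",
      "httpsProxy": "http://192.168.1.12:3128",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}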

Kitematic or other GUI-based options to connect to a remote Docker host

I have installed CoreOS on a laptop to use it as a Docker host. I really like Kitematic on my Mac for creating and managing containers. I don't see an option to connect to the remote Docker on CoreOS using Kitematic. Are there other tools I can use to connect to a remote Docker host with a GUI, rather than the command line, to manage it?
I also like Kitematic a lot! As an alternative on CoreOS, you can try docker-ui and its evolution, Portainer.
They are both Docker containers that can help you find and run Docker images and inspect Docker volumes, networks, and container stats.
You can also launch new containers directly through the web UI. There is more information in this good review of Portainer's possibilities.
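For reference, a minimal sketch of launching Portainer on the Docker host (image name and port as of the classic release; mounting the Docker socket lets it manage the local daemon):

docker run -d -p 9000:9000 \
-v /var/run/docker.sock:/var/run/docker.sock \
portainer/portainer

The UI is then available at http://<host-ip>:9000, and it can also attach to a remote daemon over tcp:// from the same UI.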
Rancher UI from Rancher Labs may also be worth looking at. It is designed more as a Docker orchestration tool (for when you operate a Docker Swarm cluster, for instance).

Unable to access MongoDB within a container within a Docker Machine instance from Windows

I am running Windows 7 on my desktop at work, and I am signed in to a regular user account on the VPN. To develop software we normally open a dev VM and work from in there; however, recently I've been assigned a task to research Docker and MongoDB. I have very limited access to what I can install on the main machine.
Here lies my problem:
Is it possible for me to connect to a MongoDB instance inside a container inside the docker machine from Windows and make changes? I would ideally like to use a GUI tool such as Mongo Management Studio to make changes to a Mongo database within a container.
By inspecting the Mongo container, it has the ports listed as: 0.0.0.0:32768 -> 27017/tcp
and docker-machine ip (vm name) returns 192.168.99.111.
I have commented out the 127.0.0.1 binding host ip within the mongod.conf file also.
From what I have researched so far, most users resolve their problem by connecting to their docker-machine IP with the port they've set with -p or been given with -P. Unfortunately for me, trying to connect with 192.168.99.111:32768 does not work.
I am pretty stumped and quite new to this environment. I am able to get inside the container with bash and manipulate the database there; however, I'm wondering if I can do this from Windows.
Thank you if anyone can help.
After reading Smutje's advice to ping the VM IP and testing it out to no avail, I attempted to find a pingable IP which would hopefully move me closer to my goal.
By running ifconfig within the Boot2Docker VM (but not inside the container), I was able to locate another IP listed under eth0. This IP looks something like 134.36.xxx.xxx and is pingable. With the Mongo container running, I can now access the database from Mongo Management Studio by connecting to 134.36.xxx.xxx:32768 and manipulate the data from there.
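For anyone hitting the same wall, a minimal sketch of the usual route (hypothetical names; publishing a fixed port avoids the random mapping you get with -P):

docker run -d --name mongo -p 27017:27017 mongo
docker-machine ip default
# e.g. 192.168.99.111
mongo --host 192.168.99.111 --port 27017

Mongo Management Studio can then use the same host/port pair.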
If you have the option of choosing the operating system for your dev VM, go with Ubuntu and set up Docker with all of the containers you want to test on that. Either way, you will need a VM for testing Docker on Windows, since it uses VirtualBox if I'm not mistaken. Instead, set up an Ubuntu VM and do all of your testing on that.
