Hi, I'm using Docker on Windows 10 with a proxy.
Docker itself works fine with the proxy IP set correctly in the docker settings.
I can download images through docker.
The problem is that any container I want to run or build also needs these HTTP_PROXY and HTTPS_PROXY variables.
I can do this by adding them to the build arguments, the run arguments, or the Dockerfile.
However, none of these solutions is ideal, because they bake machine-specific values into the Dockerfiles and/or the docker-compose files.
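For example (the proxy address and image name below are just placeholders for my actual values):
docker build --build-arg HTTP_PROXY=http://10.0.0.1:8080 --build-arg HTTPS_PROXY=http://10.0.0.1:8080 -t myimage .
docker run --rm -e HTTP_PROXY=http://10.0.0.1:8080 -e HTTPS_PROXY=http://10.0.0.1:8080 myimage
or the equivalent ENV lines baked into the Dockerfile.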
I have checked the MobyLinuxVM's values for these HTTP_PROXY and HTTPS_PROXY variables by hacking into it with this trick:
How to connect to docker VM (MobyLinux) from windows shell?
Even though these variables were displayed correctly, any image that I run or Dockerfile I build still needs to be given these variables.
Is there a way for any container to automatically get these proxy environment variables from the Docker daemon, which already has them set?
I know Linux has this feature by nature, but it seems to be missing for Windows.
This does not provide a way to set those values or to get them in a container's context, but it has stopped me from having to change my proxy settings every time I change IP addresses, and it keeps me from having to pass them to containers at runtime (builds are still a different story).
This works for me behind an NTLM-authenticating web proxy, even from home on VPN:
1) Get the IP address of the DummyDesperatePoitras virtual switch that Docker for Windows creates (it starts with 169.254., a link-local, non-routable address)
2) Install CNTLM (not perfect, as it's not been updated in 5 years) and set it to listen on that "dummy" IP address
3) Use that "dummy" IP address as the proxy in Docker for Windows settings
4) Add your internal corporate DNS server's IP and the domain name to the daemon.json in Docker for Windows settings
Again, this works for running containers - I only have to deal with the proxy server when I run docker build, passing it along in the build-args. I've not found a way around that yet.
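For the build step, this is roughly what I pass (the address and port are illustrative; use whatever IP and port CNTLM is listening on for you, and your own image name):
docker build --build-arg http_proxy=http://169.254.x.x:3128 --build-arg https_proxy=http://169.254.x.x:3128 -t myimage .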
Detailed walkthrough: https://mandie.net/2017/12/10/docker-for-windows-behind-a-corporate-web-proxy-tips-and-tricks/
My advice is to use a tool to transparently route all your traffic to the proxy, without having to set any proxy configuration locally.
For Windows there is Proxifier. It will transparently route all the traffic from your host to the proxy.
I'm using WSL2 Ubuntu with Docker CE and Docker-Compose.
I want to access the containers I'm running (mostly Apache/MySQL/Wordpress containers) from my local network (sometimes same, sometimes other machines).
For example:
PC1: 192.168.178.20
PC2: 192.168.178.21
On PC1 is Windows + WSL2-Ubuntu with all the docker containers.
I want to access the containers from the Windows browser (Chrome) on PC1, but also from the browser on PC2 (also Chrome, on a Mac).
Is this even possible? If yes, how?
I got webpack to work with hot reload from WSL2 but this seems very hard and I don't know where to start.
Is it possible to add DNS names for specific containers in my router? For example, if you request "example.test", my router forwards it to the IP of the Docker box?
There are a couple of solutions, some better than others.
Find the port number that your container is MAPPED to on your host machine (PC1) and make sure you can browse that way. Then take the same URL to PC2 and try it out and see if it works. Make sure you are using the fully qualified domain name or IP address so it resolves to PC1 (see the sketch after this list).
Find the port number that your container is EXPOSED to your host machine (PC1) and make sure you can browse that way. Repeat the process as above.
Use a reverse proxy. I am biased and will say to use Traefik because it is relatively simple to configure (compared to nginx). It is just another container. It uses RULES (a combination of URL header, port number, path, etc.) to route incoming connections to services/containers. In your case you would create a rule matching the URL header (webapp1.corp.com) and port number (80) and route it to a specific container you have running. Then, from either computer's browser, enter http://webapp1.corp.com and the connection will be routed to that specific container. This is a very simplistic answer to something that is more complicated, but you should get the gist.
You mentioned you are running multiple containers, so I recommend you use docker-compose if you aren't already using it.
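As a very rough sketch of options 1 and 2 (the host port, image, and addresses are just examples):
docker run -d -p 8080:80 wordpress
docker ps --format "{{.Names}}: {{.Ports}}"
The second command should show something like 0.0.0.0:8080->80/tcp. Then browse to http://localhost:8080 on PC1 and to http://192.168.178.20:8080 from PC2. If PC1 works but PC2 does not, the usual culprits are the Windows firewall or WSL2 not forwarding the published port to the Windows host's network interfaces.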
I have a local Kubernetes Cluster running under Docker Desktop on Mac. I am running another docker-related process locally on my machine (a local insecure registry). I am interested in getting a process inside the local cluster to push/pull images from the local docker registry.
How can I expose the local registry to be reachable from a pod inside the local Kubernetes cluster?
A way to do this would be to have both the Docker Desktop Cluster and the docker registry use the same docker network. Adding the registry to an existing network is easy.
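The registry part is straightforward, e.g. (network and container names are illustrative):
docker network create shared-net
docker network connect shared-net registry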
How does one add the Docker Desktop Cluster to the network?
As I mentioned in comments
I think what you're looking for is mentioned in the documentation here. You would have to add your local insecure registry to the insecure-registries value in Docker Desktop. Then, after a restart, you should be able to use it.
Deploy a plain HTTP registry
This procedure configures Docker to entirely disregard security for your registry. This is very insecure and is not recommended. It exposes your registry to trivial man-in-the-middle (MITM) attacks. Only use this solution for isolated testing or in a tightly controlled, air-gapped environment.
Edit the daemon.json file, whose default location is /etc/docker/daemon.json on Linux or C:\ProgramData\docker\config\daemon.json on Windows Server. If you use Docker Desktop for Mac or Docker Desktop for Windows, click the Docker icon, choose Preferences (Mac) or Settings (Windows), and choose Docker Engine.
If the daemon.json file does not exist, create it. Assuming there are no other settings in the file, it should have the following contents:
{
"insecure-registries" : ["myregistrydomain.com:5000"]
}
I also found a tutorial for that on Medium using macOS. Take a look here.
Is running the registry inside the Kubernetes cluster an option?
That way you can use a NodePort service and push images to an address like
"localhost:9000/myrepo".
This is significant because Docker allows insecure (non-SSL) connections for localhost.
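For example, assuming the NodePort is 9000 and using a made-up image name:
docker tag myapp:latest localhost:9000/myrepo/myapp:latest
docker push localhost:9000/myrepo/myapp:latest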
After a long time of using LAMP and WAMP, I've decided to try out Docker (buying new hard drives today, so why not?).
I've managed to create containers for my website and everything works fine.
Content is updated, the database is saved to a folder (so it is kind of persistent); however, I've read that it is possible to automatically start the project containers using the Docker integration inside PhpStorm.
And here are the problems:
I am using Windows 10 Professional with Hyper-V enabled
Docker running as a service
Docker in Windows using NPIPE (Named Pipes)
PhpStorm only works with tcp:// or unix:// URIs
Tried to use socat to map the pipe to TCP and failed (either the device is busy, or it is unable to send the 'send' command, or any other error, you name it)
Tried to start the Docker daemon using the configuration file with hosts set to pipes and TCP - failed again (I guess it only works for Azure)
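For reference, the hosts entry I tried looked roughly like this in C:\ProgramData\docker\config\daemon.json (values are illustrative):
{
  "hosts": ["npipe://", "tcp://127.0.0.1:2375"]
}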
Can someone give me a link to a detailed configuration of Docker on Windows, or should I just fall back to WAMP? I REALLY don't want to install VMware or VirtualBox on my machine, nor do I want to use out-of-the-box solutions for hosting a local WAMP server (XAMPP, Open Server, Denver, etc.); I just don't trust them.
Here's what we have:
1) https://www.jetbrains.com/help/phpstorm/docker.html
2) https://www.jetbrains.com/help/phpstorm/docker-2.html
3) https://confluence.jetbrains.com/display/PhpStorm/Docker+Support+in+PhpStorm
4) https://github.com/JetBrains/phpstorm-workshop - you can check out the docker branch. This project contains some examples/tutorials you can try right inside the IDE
If that doesn't help at all, please attach/describe the error message you're getting in the IDE.
I am running Windows 7 on my desktop at work and I am signed in to a regular user account on the VPN. To develop software, we normally open a dev VM and work from in there; however, I've recently been assigned a task to research Docker and MongoDB. I have very limited access to what I can install on the main machine.
Here lies my problem:
Is it possible for me to connect to a MongoDB instance inside a container inside the docker machine from Windows and make changes? I would ideally like to use a GUI tool such as Mongo Management Studio to make changes to a Mongo database within a container.
By inspecting the Mongo container, it has the ports listed as: 0.0.0.0:32768 -> 27017/tcp
and docker-machine ip (vm name) returns 192.168.99.111.
I have also commented out the 127.0.0.1 bind IP within the mongod.conf file.
From what I have researched so far, most users resolve their problem by connecting to their docker-machine IP with the port they've set with -p or been given with -P. Unfortunately for me, trying to connect with 192.168.99.111:32768 does not work.
I am pretty stumped and quite new to this environment. I am able to get inside the container with bash and manipulate the database there however I'm wondering if I can do this within Windows.
Thank you if anyone can help.
After reading Smutje's advice to ping the VM IP and testing it out to no avail, I attempted to find a pingable IP which would hopefully move me closer to my goal.
By doing "ifconfig" within the Boot2Docker VM (but not inside the container), I was able to locate another IP listed under eth0. This IP looks something like 134.36.xxx.xxx to me and is pingable. With the Mongo container running I can now access the database from within Mongo Management Studio by connecting to 134.36.xxx.xxx:32768 and manipulate the data from there.
If you have the option of choosing the operating system for your dev VM, go with Ubuntu and set up Docker with all of the containers you want to test on that. Either way, you will need a VM for testing Docker on Windows, since it uses VirtualBox if I'm not mistaken. Instead, set up an Ubuntu VM and do all of your testing on that.
I am running Docker on Debian Jessie, which is behind a corporate proxy. To be able to download Docker images, I need to add the following to my /etc/default/docker
http_proxy="http://localhost:3128/"
I can confirm that this works.
However, in order to be able to access the interwebz from within my container, I need to start all sessions with --net host and then setup these env variables:
export http_proxy=http://localhost:3128/
export https_proxy=https://localhost:3128/
export ftp_proxy=${http_proxy}
Ideally, I would like for the container to not need the host network, and not to know about the proxy (i.e. all outbound calls to port 20, 80, 443 in the container go via the host's proxy port). Is that possible?
Failing that, is it possible to have a site-wide setup which ensures that these env variables are set locally but never exported as part of an image? I know I can pass these things with --env http_proxy=... etc., but that's clunky. I want it to work for all users on the system without having to use aliases.
(Disclaimer: I asked this on https://superuser.com/posts/890196 but the home for docker questions is a little ambiguous at the moment).
See Proxy all the Containers:
The host server runs a container running a proxy (Squid, in this case) that can do transparent proxying. That container has some iptables rules that NAT traffic into the proxy server - this means the container needs to run in privileged mode.
The host server also contains (and here's the magic) ip route table entries that re-route all traffic destined for port 80 from any container except the proxy through the proxy container.
That last bit essentially means that, for port-80 traffic, the route from a container to the rest of the world goes through the proxy container - giving it the chance to NAT and transparently proxy the traffic.
https://github.com/silarsis/docker-proxy
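A rough sketch of the two pieces (the addresses, routing table number, and Squid intercept port are illustrative, not taken from the repo):
# inside the privileged proxy container: NAT incoming port-80 traffic into Squid's intercept port
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 3129
# on the host: mark port-80 traffic from every container except the proxy (172.17.0.2 here),
# then policy-route the marked traffic via the proxy container
iptables -t mangle -A PREROUTING -i docker0 -p tcp --dport 80 ! -s 172.17.0.2 -j MARK --set-mark 1
ip rule add fwmark 1 table 100
ip route add default via 172.17.0.2 dev docker0 table 100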