How can I make all my docker containers use my proxy?

I am running docker on Debian Jessie, which is behind a corporate proxy. To be able to download docker images, I need to add the following to my /etc/default/docker:
http_proxy="http://localhost:3128/"
I can confirm that this works.
However, in order to be able to access the interwebz from within my containers, I need to start them all with --net host and then set these env variables:
export http_proxy=http://localhost:3128/
export https_proxy=https://localhost:3128/
export ftp_proxy=${http_proxy}
Ideally, I would like the containers to not need the host network and not to know about the proxy (i.e. all outbound calls to ports 20, 80, and 443 in the container go via the host's proxy port). Is that possible?
Failing that, is it possible to have a site-wide setup that ensures these env variables are set locally but never exported as part of an image? I know I can pass these things with --env http_proxy=... etc., but that's clunky. I want it to work for all users on the system without having to use aliases.
(Disclaimer: I asked this on https://superuser.com/posts/890196 but the home for docker questions is a little ambiguous at the moment).
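As an aside: newer Docker clients (17.07 and later, so newer than the Jessie-era setup described above) can inject proxy variables into every container a given user starts via ~/.docker/config.json, which avoids the --env clutter. A minimal sketch; note that localhost only points at the host's squid when the container shares the host network namespace, so a routable host address is used instead (192.168.1.10 is a placeholder):

{
  "proxies": {
    "default": {
      "httpProxy": "http://192.168.1.10:3128/",
      "httpsProxy": "http://192.168.1.10:3128/",
      "ftpProxy": "http://192.168.1.10:3128/"
    }
  }
}

This is per-user rather than system-wide, so it only partly answers the "all users" requirement.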

See Proxy all the Containers:
Host server runs a container running a proxy (squid, in this case) that can do transparent proxying. That container has some iptables rules that NAT traffic into the proxy server - this means that container needs to run in privileged mode.
Host server also contains (and here's the magic) ip route table entries that re-route all traffic destined for port 80 from any container except the proxy, through the proxy container.
That last bit essentially means that for port 80 traffic, the route from container to the rest of the world goes through the proxy container - giving it the chance to NAT and transparent proxy.
https://github.com/silarsis/docker-proxy
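A hedged sketch of the host-side plumbing that description implies, assuming the proxy container sits at 192.168.222.2 on the docker bridge and runs squid in intercept mode on port 3129 (all names, addresses, and ports here are illustrative; the linked repository automates the real thing):

# Inside the privileged proxy container: hand intercepted port-80 traffic to squid
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3129

# On the host: mark port-80 traffic from every other container...
iptables -t mangle -A PREROUTING -i docker0 ! -s 192.168.222.2 -p tcp --dport 80 -j MARK --set-mark 1
# ...and route marked traffic via the proxy container
ip rule add fwmark 1 table 100
ip route add default via 192.168.222.2 table 100

The sketch is only meant to show where the NAT rules and the route table entries live.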

Related

Is there a possibility to access a docker-compose container from another machine inside the local network?

I'm using WSL2 Ubuntu with Docker CE and Docker-Compose.
I want to access the containers I'm running (mostly Apache/MySQL/Wordpress containers) from my local network (sometimes from the same machine, sometimes from other machines).
For example:
PC1: 192.168.178.20
PC2: 192.168.178.21
On PC1 is Windows + WSL2-Ubuntu with all the docker containers.
I want to access the containers from the Windows browser (Chrome), but also from the browser on PC2 (also Chrome, on a Mac).
Is this even possible? If yes, how?
I got webpack hot reload working from WSL2, but that was hard enough, and I don't know where to start here.
Is it possible to add DNS names for specific containers in my router? For example, when "example.test" is called, my router forwards it to the IP of the Docker box?
There are a couple of solutions, some better than others.
Find the port number that your container is MAPPED to on your host machine (PC1) and make sure you can browse it that way. Then take the same URL to PC2 and try it out to see if it works. Make sure you are using the fully qualified domain name or the IP address so it resolves to PC1.
Find the port number that your container is EXPOSED on to your host machine (PC1) and make sure you can browse that way. Repeat the process as above.
Use a reverse proxy. I am biased and will say to use Traefik because of its relative simplicity (compared to nginx) to configure. It is just another container. It uses rules (a combination of URL header, port number, path, etc.) to route incoming connections to services/containers. In your case you would create a rule for the URL header (webapp1.corp.com) and port number (80) and route it to a specific container you have running. Then, when either computer's browser requests http://webapp1.corp.com, the connection is routed to that specific container. This is a simplistic answer to something more complicated, but you should get the gist; a sketch follows below.
You mentioned you are running multiple containers, so I recommend you use docker-compose if you aren't already using it.
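A minimal docker-compose.yml sketch of the Traefik idea (v2 label syntax; webapp1.corp.com and the wordpress image are placeholders, and you still need LAN DNS or hosts entries pointing the name at PC1):

version: "3"
services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  webapp1:
    image: wordpress
    labels:
      - traefik.http.routers.webapp1.rule=Host(`webapp1.corp.com`)
      - traefik.http.routers.webapp1.entrypoints=web

With this up, http://webapp1.corp.com from PC1 or PC2 is routed by Traefik to the wordpress container.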

Make k8s cluster services available to local docker containers

I'm used to connecting to my cluster using telepresence and accessing cluster services locally.
Now, I need to make services in the cluster available to a group of applications that are running in docker containers locally. We can say that it's the inverse use case.
I have an app that is running in a docker container. It accesses services that are deployed using docker-compose. This has been done by using a network:
docker network create myNetwork
# Make app 1 use it
docker network connect myNetwork app1
# App 2 uses docker compose, so myNetwork is defined in it and here I just:
docker-compose up
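For reference, a sketch of how app2's docker-compose.yml can join that pre-existing network rather than creating its own (service and image names are placeholders):

version: "3"
services:
  app2:
    image: my-app2-image
networks:
  default:
    external:
      name: myNetwork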
My app1 correctly accesses the containers/services running in app2. However, I still need it to access a service from my cluster!
I've tried making a tunnel from my host to the cluster with telepresence and then accessing the service as if it were on my host. However, it doesn't seem to work. If I go into my app1 container and curl to see if the service name resolves:
curl: (6) Could not resolve host: my_cluster_service_name
Is my approach wrong? Am I missing an operation or consideration? How could I accomplish it?
Docker version: 19.03.8 (Docker for Mac)
I've found a way to solve the problem.
Instead of trying to use telepresence as in the inverse use case, the solution is a port-forward created with k9s. When creating it, it's important not to leave the default interface, which is set to localhost; put 0.0.0.0 instead to ensure it listens for traffic on all interfaces.
Then I changed my containers from the inside, making the services point to my host's IP when resolving the service names. Use whatever method fits your case best for this: since it's not a production environment, I just hardcoded my host IP manually to check that connectivity was achieved.
To point to a specific service of your cluster you need to use different ports, since they will all be mapped to your host with different port-forwards. Name resolution is no longer needed.
With this configuration, your container's request reaches your host, where the port-forward routes it into the cluster. Connectivity is OK with this setup and the problem is solved.
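The same forward can be created without k9s. A minimal sketch, assuming a cluster service named my-service on port 80; on Docker for Mac (as used here), containers can reach the host by the name host.docker.internal:

# Listen on all interfaces, not just localhost, so containers can reach the forward
kubectl port-forward --address 0.0.0.0 svc/my-service 8080:80

# From inside app1, go through the host
curl http://host.docker.internal:8080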

Bind custom IP to my Docker network's gateway to access containers from host

As far as I know, by default Docker binds to 127.0.0.1 when running docker compose with the default network settings. To access my containers, I need to map alternate ports through localhost, such as 45001:80 to reach my web server container from the host.
I would like to bind my containers to an alternate IP than 127.0.0.1 on my machine so I can use the proper ports instead of having to forward them through localhost. For example, to access my web server container, instead of going to 127.0.0.1:45001, I would bind to something like 192.168.0.1 and access it via 192.168.0.1:80. I've tried searching for an answer but can't seem to find one, and going through the Docker documentation hasn't gotten me terribly far either.
Anybody know how I would accomplish this?
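For what it's worth, Docker's published-port syntax accepts a host IP prefix, which may be the mechanism being asked for here. A sketch, assuming 192.168.0.1 is already assigned to an interface on the host (Docker only binds to addresses that exist):

docker run -d -p 192.168.0.1:80:80 nginx

or, in docker-compose form:

services:
  web:
    image: nginx
    ports:
      - "192.168.0.1:80:80"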

Docker on Windows with a proxy

Hi, I'm using Docker on Windows 10 with a proxy.
Docker itself works fine with the proxy IP set correctly in the docker settings.
I can download images through docker.
The problem is that any container I want to run or build also needs these HTTP_PROXY and HTTPS_PROXY variables.
I can do this by adding them to build arguments, run arguments, or the Dockerfile.
However, none of these solutions is perfect, because they add machine-specific variable values to the Dockerfiles and/or the docker-compose files.
I have checked the MobyLinuxVM's values for these HTTP_PROXY and HTTPS_PROXY variables by hacking into it with this trick:
How to connect to docker VM (MobyLinux) from windows shell?
Even though these variables were displayed correctly, any image that I run or Dockerfile that I build still needs to be given these variables.
Is there a way for every container to automatically get these proxy environment variables from the docker daemon, which already has them set?
I know Linux has this feature by nature, but it seems to be missing for Windows.
This does not provide a way to set those values or to get them into a container's context, but it has stopped me from having to change my proxy settings every time I change IP addresses, and it keeps me from having to pass them to containers at runtime (builds are still a different story).
This works for me behind an NTLM-authenticating web proxy, even from home on VPN:
1) Get the IP address of the DummyDesperatePoitras virtual switch Docker for Windows creates (starts with 169.254., which is usually a non-routable IP)
2) Install CNTLM (not perfect, as it hasn't been updated in 5 years) and set it to listen on that "dummy" IP address
3) Use that "dummy" IP address as the proxy in Docker for Windows settings
4) Add your internal corporate DNS server's IP and the domain name to the daemon.json in Docker for Windows settings
Again, this works for running containers - I only have to deal with the proxy server when I run docker build, passing it along in the build-args. I've not found a way around that yet.
Detailed walkthrough: https://mandie.net/2017/12/10/docker-for-windows-behind-a-corporate-web-proxy-tips-and-tricks/
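A hedged sketch of the two configuration pieces from the steps above; every address, credential, and domain name is a placeholder:

cntlm.ini (CNTLM listening on the "dummy" switch address):
Username    myuser
Domain      CORP
PassNTLMv2  0A0B0C0D...    # generate hashes with: cntlm -H
Proxy       proxy.corp.example.com:8080
Listen      169.254.255.1:3128

daemon.json (Docker for Windows settings):
{
  "dns": ["10.0.0.2"],
  "dns-search": ["corp.example.com"]
}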
My advice is to use a tool that transparently routes all your traffic to the proxy, without having to set any proxy configuration locally.
For Windows there is Proxifier. It will transparently route all the traffic from your host to the proxy.

Proxying Docker Containers as Subdomains

I'm looking to proxy docker containers as subdomains of the docker host, as below. I've seen several solutions that accomplish something similar, but none really fits our need.
Host Machine: Corporate VPS running RHEL 7.2
Host Domain: host.net (a fake name - it's behind a corporate intranet, not reachable from the public internet)
DNS Server: DNS for host.net is delegated to the host machine, so I need to run a dns server on :53 (this is new, which is why one isn't already setup)
Host IP: 172.16.10.12
Docker: v1.10
Subnet: dockernet 192.168.222.1/24
Subnet dns (docker created): dnsmasq on 192.168.122.1:53
Goal:
dnsmasq on host machine to serve host.net from 172.16.10.12
proxy all subdomains (*.host.net) to subnet dockernet so that any container joined to dockernet would be reachable by containername.host.net, containerhostname.host.net, alias1.host.net, etc.
have this happen automatically for any container that connects to dockernet
to have containers treated as hosts so we don't have to manually open up ports through docker, e.g. rediscontainer.host.net:6379
Questions / Issues:
can't start dnsmasq on the host machine because docker has already bound 192.168.122.1:53 - I believe I can configure dnsmasq not to listen on a specific IP, but I'm new to this
what's a relatively easy way to configure this? I was hoping dnsmasq and iptables could do it, but I'm not sure how to go about it, or whether those two can accomplish my goal
I assume that docker's built-in DNS for user-defined networks is the easiest way to automate container name resolution, but is there an easier way?
My apologies for any ambiguity as I'm new to dns, subnets, etc. Any help is greatly appreciated!
Eric
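For the wildcard-DNS piece of this, a minimal dnsmasq.conf sketch, assuming dnsmasq should answer only on the host's corporate-facing address and stay off the address that is already bound (proxying the ports themselves is a separate problem):

# Resolve host.net and every *.host.net name to the host
address=/host.net/172.16.10.12
# Bind only the addresses listed, instead of grabbing 0.0.0.0:53
bind-interfaces
listen-address=172.16.10.12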
I implemented such dynamic subdomains per container using nginx-proxy.
This article also explains how to achieve the same thing from an nginx base image plus docker-gen, which generates the nginx conf from docker events.
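A minimal sketch of the nginx-proxy approach from that answer (the jwilder/nginx-proxy image and the VIRTUAL_HOST variable are that project's convention; the nginx backend container is a placeholder):

# Run the proxy once; it watches the docker socket and regenerates nginx config
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy

# Any container started with VIRTUAL_HOST becomes reachable as that subdomain
docker run -d -e VIRTUAL_HOST=webapp1.host.net nginx

Note this covers HTTP traffic; the rediscontainer.host.net:6379 goal from the question would need a TCP-level proxy instead.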
