Proxying Docker Containers as Subdomains

I'm looking to proxy Docker containers as subdomains of the Docker host, as outlined below. I've seen several solutions that can accomplish something similar, but none really fits our needs.
Host Machine: Corporate VPS running RHEL 7.2
Host Domain: host.net (fakename - but it's behind a corporate intranet, not reachable from public)
DNS Server: DNS for host.net is delegated to the host machine, so I need to run a DNS server on :53 (this is new, which is why one isn't already set up)
Host IP: 172.16.10.12
Docker: v1.10
Subnet: dockernet 192.168.222.1/24
Subnet DNS (Docker-created): dnsmasq on 192.168.122.1:53
Goal:
dnsmasq on host machine to serve host.net from 172.16.10.12
proxy all subdomains (*.host.net) to the dockernet subnet so that any container joined to dockernet is reachable as containername.host.net, containerhostname.host.net, alias1.host.net, etc.
have this happen automatically for any container that connects to dockernet
to have containers treated as hosts so we don't have to manually publish ports through Docker, e.g. rediscontainer.host.net:6379
Questions / Issues:
I can't start dnsmasq on the host machine because Docker has already bound 192.168.122.1:53 - I believe I can configure dnsmasq to not listen on a specific IP (see the sketch after this list), but I'm new to this
what's a relatively easy way to configure this? I was hoping dnsmasq and iptables could do it, but I'm not sure how to go about it, or whether those two alone can accomplish my goal.
I assume that Docker's built-in DNS for user-defined networks is the easiest way to automate container name resolution, but is there an easier way?
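For the dnsmasq piece, here's the kind of config I'm imagining; this is only a sketch (the drop-in path is assumed, and the wildcard points everything at the host on the assumption that a reverse proxy there fans requests out to the containers):

# /etc/dnsmasq.d/host-net.conf (assumed path)
# bind only the addresses listed below instead of the wildcard,
# so we don't collide with the listener already on 192.168.122.1:53
bind-interfaces
listen-address=172.16.10.12
# answer host.net and every *.host.net subdomain with the host IP
address=/host.net/172.16.10.12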
My apologies for any ambiguity as I'm new to dns, subnets, etc. Any help is greatly appreciated!
Eric

I implemented such dynamic subdomains per container using nginx-proxy.
This article also explains how to achieve the same thing starting from the nginx base image, with docker-gen generating the nginx conf from Docker events.
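A minimal sketch of that setup, assuming the jwilder/nginx-proxy image and a hypothetical my-web-app image (the VIRTUAL_HOST mechanism is the one from the nginx-proxy README):

# run the proxy once; it watches the Docker socket for container events
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
# any container started with VIRTUAL_HOST set is picked up automatically
docker run -d -e VIRTUAL_HOST=app1.host.net my-web-app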

Related

Bind custom IP to my Docker network's gateway to access containers from host

As far as I know, Docker binds to 127.0.0.1 by default when running docker compose with the default network settings. To access my containers, I need to map alternate ports, such as 45001:80, to reach my web server container from the host through localhost.
I would like to bind my containers to an alternate IP than 127.0.0.1 on my machine so I can use the proper ports instead of having to forward them through localhost. For example, to access my web server container, instead of going to 127.0.0.1:45001, I would bind to something like 192.168.0.1 and access it via 192.168.0.1:80. I've tried searching for an answer to this, but I can't seem to find one, and going through the Docker documentation hasn't gotten me terribly far either.
Anybody know how I would accomplish this?
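One approach, assuming 192.168.0.1 is already assigned to an interface on the host: the -p flag accepts a host IP as a prefix, so the published port can be pinned to that address (nginx here is just a stand-in image):

# publish container port 80 on 192.168.0.1 only, keeping the standard port
docker run -d -p 192.168.0.1:80:80 nginx

The docker-compose equivalent is the same triplet in the ports list, e.g. "192.168.0.1:80:80".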

How to set up Vagrant DNS servers on macOS without changing the DNS in the network setup

I have a Vagrant machine, and this VM runs a DNS server to resolve the internal domains of each microservice instance running in a Docker container inside Vagrant. Currently, after running vagrant up, I need to put the Vagrant VM's IP address in my network configuration so that my computer resolves the development domain and I can access the application. The problem is that I work remotely and frequently need to connect to public hotspots that use network authentication; with the Vagrant DNS in my interface's configuration, I cannot connect to the hotspot without removing the Vagrant IP, and then I have to put it back some minutes later to start working.
So, the question is: is there a way to configure a virtual interface or a VPN interface that points to the Vagrant VM but does not block my network as described above?
When I was using Linux, I just put the Vagrant IP in resolv.conf and had no headaches, but as macOS does not have a resolv.conf like Linux, I could not find an easy way to deal with these problems.
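One possibility, since macOS supports per-domain resolver files under /etc/resolver (see man 5 resolver): a sketch assuming the Vagrant VM's IP is 172.28.128.3 and the development domains end in .dev.local (both values are assumptions):

# create a resolver that applies only to *.dev.local lookups
sudo mkdir -p /etc/resolver
printf 'nameserver 172.28.128.3\n' | sudo tee /etc/resolver/dev.local

Because the file is scoped to a single domain, hotspot authentication and normal DNS are unaffected.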

Consul set up without docker for production use

I am doing a POC on Consul to support service discovery and multiple microservice versions. The Consul clients and the server cluster (3 servers) are set up on Linux VMs. I followed the documentation at Consul and the setup was successful.
Here is my doubt: my setup is entirely on VMs. I've added a service definition using the HTTP API, and the same service is running on two nodes.
The services are correctly registered:
curl http://localhost:8500/v1/catalog/service/my-service
gives me the two node details.
When I do a DNS query:
dig @127.0.0.1 -p 8600 my-service.service.consul
I am able to see the expected results with the node that hosts the service. But I cannot ping the service, since the service name is not resolved.
ping -c4 my-service or ping -c4 my-service.service.consul
ping: unknown host.
If I enter a mapping for my-service in the /etc/hosts file, I can ping it, but only from the same VM; I won't be able to ping it from another VM on the same LAN or WAN.
The default port for DNS is 53, while the Consul DNS interface listens on 8600. I cannot use Docker for DNS forwarding. Is it possible I missed something here? Can a Consul DNS query work without Docker/dnsmasq or iptables updates?
To be clear, here is what I would like to have as the end result:
ping my-service
This needs to ping the nodes I have configured, in round-robin fashion.
Please bear with me if this question is basic; I've gone through each of the Consul-related questions on SO.
I've also gone through this and this, and those also say I need to do extra setup.
Wait! Please don't do this!
DO. NOT. RUN. CONSUL. AS. ROOT.
Please. You can, but don't. Instead do the following:
Run a caching or forwarding DNS server on your VMs. I'm biased toward dnsmasq because of its simplicity and stability in the common case.
Configure dnsmasq to forward the TLD .consul to the Consul agent listening on 127.0.0.1:8600 (the default); a sketch follows this list.
Update your /etc/resolv.conf file to point to 127.0.0.1 as your nameserver.
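A sketch of that forwarding rule, assuming a stock dnsmasq that reads drop-ins from /etc/dnsmasq.d/ (the file name is arbitrary):

# /etc/dnsmasq.d/10-consul
# forward every *.consul query to the local Consul agent's DNS port
server=/consul/127.0.0.1#8600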
There are a few ways of doing this, and the official docs have a write up that is worth looking into:
https://www.consul.io/docs/guides/forwarding.html
That should get you started.
This can be a pretty complicated topic, but the simplest way is to change Consul to bind to port 53 using the ports directive, and add some recursors to the Consul config so it can pass real DNS requests on to a host that has full DNS functionality. Something like these bits:
{
  "recursors": [
    "8.8.8.8",
    "8.8.4.4"
  ],
  "ports": {
    "dns": 53
  }
}
Then modify your system to use the Consul server for DNS with a nameserver entry in /etc/resolv.conf. Depending on your OS, you might be able to specify a port in the resolv.conf file and avoid having Consul need root to bind to port 53.
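For the resolv.conf piece, assuming Consul (or a forwarder in front of it) is answering on port 53 locally, the entry is just:

# /etc/resolv.conf
nameserver 127.0.0.1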
In a more complicated scenario, I know many people who use unbound or BIND to do split DNS and essentially do the opposite: route the .consul domain to the Consul cluster on a non-privileged port at the org level of their DNS infrastructure.

How can I make all my docker containers use my proxy?

I am running Docker on Debian Jessie, which is behind a corporate proxy. To be able to download Docker images, I need to add the following to my /etc/default/docker:
http_proxy="http://localhost:3128/"
I can confirm that this works.
However, in order to access the interwebz from within my containers, I need to start all sessions with --net host and then set these env variables:
export http_proxy=http://localhost:3128/
export https_proxy=https://localhost:3128/
export ftp_proxy=${http_proxy}
Ideally, I would like the containers not to need the host network and not to know about the proxy (i.e. all outbound calls to ports 20, 80, and 443 in the container go via the host's proxy port). Is that possible?
Failing that, is it possible to have a site-wide setup that ensures these env variables are set locally but never exported as part of an image? I know I can pass them with --env http_proxy=... etc., but that's clunky. I want it to work for all users on the system without having to use aliases.
(Disclaimer: I asked this on https://superuser.com/posts/890196 but the home for docker questions is a little ambiguous at the moment).
See Proxy all the Containers:
The host server runs a container with a proxy (squid, in this case) that can do transparent proxying. That container has some iptables rules that NAT traffic into the proxy server, which means the container needs to run in privileged mode.
The host server also contains (and here's the magic) ip route table entries that re-route all traffic destined for port 80 from any container except the proxy through the proxy container.
That last bit essentially means that for port 80 traffic, the route from a container to the rest of the world goes through the proxy container, giving it the chance to NAT and transparently proxy.
https://github.com/silarsis/docker-proxy
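As a rough sketch of the redirection rules involved (the 172.17.0.0/16 bridge subnet, the proxy container address 172.17.0.2, and squid's intercept port 3128 are all assumptions; the linked repo wires this up with ip route entries instead):

# on the host: skip traffic coming from the proxy itself, to avoid a loop
iptables -t nat -A PREROUTING -s 172.17.0.2 -p tcp --dport 80 -j RETURN
# redirect all other container HTTP traffic into the proxy container
iptables -t nat -A PREROUTING -s 172.17.0.0/16 -p tcp --dport 80 -j DNAT --to-destination 172.17.0.2:3128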

Docker Minecraft Host

I am trying to host Minecraft servers in docker containers on an ec2 instance, and point a different subdomain to each container, for example
a.example.com -> container 1
b.example.com -> container 2
c.example.com -> container 3
...and so on.
If these containers were running websites, I could forward the traffic with Apache, node-http-proxy, etc. But because these servers run TCP services, I cannot route the traffic that way.
Is this possible? And if so, how?
The Minecraft client has supported SRV DNS records for a while now (since 1.3.1, according to Google). I suggest you assign your Docker containers a stable set of port mappings with the -p flag, and then create an SRV record for each FQDN pointing to the same IP but a different port.
Google gives several hits on the SRV entry format; this one is from the main MCF site: http://www.minecraftforum.net/topic/1922138-using-srv-records-to-hide-ports-on-your-server-ip/
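For reference, a zone-file entry looks something like this (the 25601 port and the mc.example.com target are made up; 0 and 5 are priority and weight, and the target must have a plain A record):

; lets players type a.example.com in the client with no port
_minecraft._tcp.a.example.com. 3600 IN SRV 0 5 25601 mc.example.com.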
I have four MC servers running on the same physical host with a single IP address, each with a separate friendly entry for players to use in the Minecraft client, so none of my users needs to remember a port. It did cause confusion for a couple of my more technical players when they had a connectivity issue, tested with dig/ping, and then thought DNS resolution was broken when there was no A record to be found. Overall, I think that's a very small downside.
Doesn't HAProxy (http://haproxy.1wt.eu/) route TCP traffic?
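It does, but in plain TCP mode it never inspects the Minecraft handshake, so it can't pick a backend per subdomain; you would end up with one listener per server, something like this sketch (ports and addresses assumed):

# haproxy.cfg: one TCP listener per Minecraft container
listen mc-a
    bind *:25565
    mode tcp
    server container1 172.17.0.2:25565

which is no better than plain port mapping.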
