Docker Minecraft Host - proxy

I am trying to host Minecraft servers in docker containers on an ec2 instance, and point a different subdomain to each container, for example
a.example.com -> container 1
b.example.com -> container 2
c.example.com -> container 3
...and so on.
If these containers were running websites, I could forward the traffic with Apache, node-http-proxy, etc. But because these servers speak a raw TCP protocol rather than HTTP, I cannot route the traffic by hostname this way.
Is this possible? And if so, how?

The Minecraft client has supported SRV DNS records for a while now (since 1.3.1, according to Google). I suggest you assign your Docker containers a stable set of port mappings with the -p flag, and then create an SRV record for each FQDN pointing to the same IP but a different port.
Google gives several hits on the SRV entry format - this one is from the main MCF site: http://www.minecraftforum.net/topic/1922138-using-srv-records-to-hide-ports-on-your-server-ip/
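For example, assuming an A record already exists at mc.example.com for the host's IP, each container gets its own stable host port (the image name and all ports below are illustrative):

docker run -d -e EULA=TRUE -p 25565:25565 --name mc-a itzg/minecraft-server
docker run -d -e EULA=TRUE -p 25566:25565 --name mc-b itzg/minecraft-server

Then one SRV record per friendly name points players at the right port:

_minecraft._tcp.a.example.com. 300 IN SRV 0 5 25565 mc.example.com.
_minecraft._tcp.b.example.com. 300 IN SRV 0 5 25566 mc.example.com.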
I have four MC servers running on the same physical host with a single IP address, each with a separate friendly entry for players to use in the Minecraft client, so none of my users need to remember a port. It did cause confusion for a couple of my more technical players when they had a connectivity issue, tested with dig/ping, then thought the DNS resolution was broken when there was no A record to be found. Overall, I think that's a very small downside.

Doesn't HAProxy http://haproxy.1wt.eu/ route tcp traffic?
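It does, in TCP mode. A minimal sketch of a passthrough (ports and names are illustrative):

frontend mc_a
    bind *:25565
    mode tcp
    default_backend container1

backend container1
    mode tcp
    server mc1 127.0.0.1:25566

The catch: in plain TCP mode HAProxy never sees a hostname the way it sees an HTTP Host header, so a simple setup like this still needs one frontend port per container rather than true subdomain-based routing.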

Related

Is it possible to access a docker-compose container from another machine inside the local network?

I'm using WSL2 Ubuntu with Docker CE and Docker-Compose.
I want to access the containers I'm running (mostly Apache/MySQL/WordPress containers) from my local network (sometimes from the same machine, sometimes from other machines).
For example:
PC1: 192.168.178.20
PC2: 192.168.178.21
On PC1 is Windows + WSL2-Ubuntu with all the docker containers.
I want to access the containers from the Windows browser (Chrome) but also from the browser on PC2 (also Chrome, but on a Mac).
Is this even possible? If yes, how?
I got webpack to work with hot reload from WSL2, but this seems very hard and I don't know where to start.
Is it possible to add DNS names for specific containers in my router? For example, when "example.test" is called, my router forwards to the IP of the Docker box?
There are a couple of solutions, some better than others.
Find the port number that your container is MAPPED to on your host machine (PC1) and make sure you can browse to it that way. Then take the same URL to PC2 and try it out to see if it works. Make sure you are using the fully qualified domain name or the IP address so it resolves to PC1.
Find the port number that your container EXPOSES to your host machine (PC1) and make sure you can browse to it that way. Repeat the process as above.
Use a reverse proxy. I am biased and will say to use Traefik because of its relative simplicity (compared to nginx) to configure. It is just another container. It uses rules (a combination of URL host header, port number, path, etc.) to route incoming connections to services/containers. In your case you would create a rule matching the host header (webapp1.corp.com) and port 80 and route it to a specific container you have running. Then, from either computer's browser, entering http://webapp1.corp.com routes the connection to that specific container. This is a simplified description of a more complicated topic, but you should get the gist.
You mentioned you are running multiple containers, so I recommend you use docker-compose if you aren't already using it.
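As a sketch of the Traefik option, assuming Traefik v2 and the made-up hostname webapp1.corp.com (point that name, or a hosts-file entry on PC2, at PC1's LAN IP):

version: "3"
services:
  traefik:
    image: traefik:v2.10
    command:
      - "--providers.docker=true"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  webapp1:
    image: wordpress
    labels:
      - "traefik.http.routers.webapp1.rule=Host(`webapp1.corp.com`)"
      - "traefik.http.routers.webapp1.entrypoints=web"

With this, Traefik watches the Docker socket and routes any request whose Host header is webapp1.corp.com to the wordpress container.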

Consul setup without Docker for production use

I am doing a POC on Consul to support service discovery and multiple microservice versions. The Consul clients and server cluster (3 servers) are set up on Linux VMs. I followed the documentation at Consul and the setup was successful.
Here is my doubt. My setup is entirely on VMs. I've added a service definition using the HTTP API. The same service is running on two nodes.
The services are correctly registered:
curl http://localhost:8500/v1/catalog/service/my-service
gives me the two node details.
When I do a DNS query:
dig @127.0.0.1 -p 8600 my-service.service.consul
I am able to see the expected results with the node which hosts the service. But I cannot ping the service since the service name is not resolved.
ping -c4 my-service or ping -c4 my-service.service.consul
ping: unknown host.
If I enter a mapping for my-service in the /etc/hosts file, I can ping it, but only from the same VM. I won't be able to ping it from another VM on the same LAN or WAN.
The default port for DNS is 53, while the Consul DNS interface listens on 8600. I cannot use Docker for DNS forwarding. Is it possible I missed something here? Can Consul DNS queries work without Docker/dnsmasq or iptables updates?
To be clear, here is what I would like to have as the end result:
ping my-service
This needs to ping the nodes I have configured, in a round robin fashion.
Please bear with me if this question is basic; I've gone through each of the Consul-related questions on SO.
I've also gone through this and this, and those too say I need to do extra setup.
Wait! Please don't do this!
DO. NOT. RUN. CONSUL. AS. ROOT.
Please. You can, but don't. Instead do the following:
Run a caching or forwarding DNS server on your VMs. I'm biased toward dnsmasq because of its simplicity and stability in the common case.
Configure dnsmasq to forward the TLD .consul to the consul agent listening on 127.0.0.1:8600 (the default).
Update your /etc/resolv.conf file to point to 127.0.0.1 as your nameserver.
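For example, the dnsmasq side of that is a single line (e.g. dropped into /etc/dnsmasq.d/10-consul), which is the same approach the forwarding guide below describes:

server=/consul/127.0.0.1#8600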
There are a few ways of doing this, and the official docs have a write up that is worth looking into:
https://www.consul.io/docs/guides/forwarding.html
That should get you started.
This can be a pretty complicated topic, but the simplest way is to change Consul to bind to port 53 using the ports directive, and to add some recursors to the Consul config so it can pass real DNS requests on to a host that has full DNS functionality. Something like these bits:
{
  "recursors": [
    "8.8.8.8",
    "8.8.4.4"
  ],
  "ports": {
    "dns": 53
  }
}
Then modify your system to use the Consul server for DNS with a nameserver entry in /etc/resolv.conf. Depending on your OS, you might be able to specify a port in the resolv.conf file and avoid having Consul need root to bind to port 53.
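For example, if one of your Consul servers sits at 10.0.0.10 (an illustrative address) and is now answering on port 53, the entry is just:

nameserver 10.0.0.10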
In a more complicated scenario, I know many people who use Unbound or BIND to do split DNS and essentially do the opposite, routing the .consul domain to the Consul cluster on a non-privileged port at the org level of their DNS infrastructure.

Kibana web interface not loading

Despite ElasticSearch and Kibana both running on my production server, I'm unable to visit the GUI over the public IP: http://52.4.153.19:5601/
Localhost curls return 200, but the browser console reports timeouts after a few images are retrieved.
I've successfully installed, run, and accessed Kibana on my local machine (Windows 10) and on my staging AWS EC2 Ubuntu 14.04 environment. I'm able to access both over port 5601 on localhost, and the staging environment is also accessible over its public IP address and all of the domains pointed at it. The reverse proxy also works, and all status indicators are green on the dashboard.
I'm running Kibana 4.5, ElasticSearch 2.3.1, Apache 2.4.12
I've attached the same exact volume from the working environment to the production instance, so everything is identical on the two volumes, except that the staging environment's Apache vhost uses a subdomain while the production environment's ServerName is the base domain. Both are configured for SSL wildcards, and both are in separate availability zones at Amazon. I've tried altering the server block to use a subdomain on the production server, just to see if the domain mattered, but the error remains.
I also tried running one instance individually, in case EC2 had some kind of networking error with 0.0.0.0, but I'm unable to come to a resolution. All logs and configurations are identical between the two servers for ElasticSearch and Kibana.
I've tried deleting and re-creating the kibana index; tried alternate settings for the host, the elasticsearch URL, extending the max ping and timeout, and max retries; extended the Apache limits; and set http.cors to allow different origins. I've tried other ports, but both servers indicate that 5601 is listening in the same way.
I also had the same problem on a completely different volume that was previously attached to this instance.
The only difference I can see is that the working version pings fine while the non-working version has 100% packet loss when pinging the IP, although I can't imagine why that would be, as I'm able to reach the website on port 80 just fine. I can also access various other tools running on other ports. I assume there might be some kind of networking conflict. Any ideas?
Maybe port 5601 is blocked by a firewall.
Allow incoming connections to port 5601 by:
sudo iptables -I INPUT -p tcp --dport 5601 -j ACCEPT
For security:
Modify the above command to accept connections only from a specific address (see man iptables).
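For example, to accept connections to Kibana only from one trusted subnet (the CIDR here is illustrative):
sudo iptables -I INPUT -p tcp -s 203.0.113.0/24 --dport 5601 -j ACCEPT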
Or use the Shield plugin for Elasticsearch.
Sorry, I forgot to update this question. The answer turned out to be that I simply needed to deploy a new instance. By creating a clone of the instance, I was able to resolve the issue. I've had networking problems at AWS before with their internal DNS/IP conflicts and have had to do this in the past, and it turned out to be the quickest and cleanest solution, albeit one that doesn't provide any definitive insight into the cause.

Proxying Docker Containers as Subdomains

I'm looking to proxy docker containers as subdomains of the docker host as below. I've seen several solutions that can accomplish something similar, but none really fit our need.
Host Machine: Corporate VPS running RHEL 7.2
Host Domain: host.net (a fake name - it's behind a corporate intranet, not reachable from the public internet)
DNS Server: DNS for host.net is delegated to the host machine, so I need to run a DNS server on :53 (this is new, which is why one isn't already set up)
Host IP: 172.16.10.12
Docker: v1.10
Subnet: dockernet 192.168.222.1/24
Subnet dns (docker created): dnsmasq on 192.168.122.1:53
Goal:
dnsmasq on host machine to serve host.net from 172.16.10.12
proxy all subdomains (*.host.net) to subnet dockernet so that any container joined to dockernet would be reachable by containername.host.net, containerhostname.host.net, alias1.host.net, etc.
have this happen automatically for any container that connects to dockernet
to have containers treated as hosts so we don't have to manually open up ports through docker: ex: rediscontainer.host.net:6379
Questions / Issues:
can't start dnsmasq on the host machine because Docker has already bound 192.168.122.1:53 - I believe I can configure dnsmasq not to listen on a specific IP, but I'm new to this
what's a relatively easy way to configure this? I was hoping dnsmasq and iptables could do it (see the sketch after this list), but I'm not sure how to go about it, or whether those two alone can accomplish my goal
I assume that docker's built in dns for user defined networks is the easiest way to automate container name resolution, but is there an easier way?
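To illustrate, I imagine the host's dnsmasq config would need something like the following, though I'm not sure this is right (the addresses are the ones from my setup above):

listen-address=172.16.10.12
bind-interfaces
address=/host.net/172.16.10.12

As I understand it, listen-address plus bind-interfaces would keep dnsmasq off the already-bound 192.168.122.1:53, and address=/host.net/ would answer host.net and every *.host.net with the host's IP.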
My apologies for any ambiguity as I'm new to dns, subnets, etc. Any help is greatly appreciated!
Eric
I implemented such dynamic subdomains per container using nginx-proxy.
This article also explains how to achieve the same thing from an nginx base image plus docker-gen, which generates the nginx conf from Docker events.
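For reference, the basic nginx-proxy pattern from its README looks like this (the VIRTUAL_HOST value and the your/webapp image are placeholders):

docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
docker run -d -e VIRTUAL_HOST=app1.host.net your/webapp

nginx-proxy watches the Docker socket and regenerates its nginx config whenever a container with a VIRTUAL_HOST environment variable starts or stops.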

Proxmox external VM / CT access

I've just begun setting up Proxmox for our non-profit educational VPS service. However, the problem we're facing is a lack of IPv4 addresses available to us.
Is it possible to route a subdomain to the host server's IP address and then have that forwarded to the individual containers accordingly? For example:
ssh root@node-123.w-a-s-d.me
Will allow a client with the VM ID of 123 to access their server
And the same goes for things like: node-123.w-a-s-d.me
This would be the web address, allowing access to any application running on port 80 on that specific node.
I'm unsure how to go about this and have looked online with no luck. I hope our goal is clear. I look forward to hearing from you. Josh
Exposing SSH that way will not be easy, as you can only have one thing listening on port 22 for any given IP address. While you could assign a random port to each VPS and forward it from the primary box which holds the public IP (with the VMs behind NAT), this is not exactly the best solution.
What you may want to do instead is set up one public-facing box that people can SSH into via the public IP, and from it SSH to the private machines by their internal IPs. Alternatively, you can set that box up with OpenVPN and have it assign an internal IP address to anyone connecting through it. While OpenVPN takes more time to set up right, it can come with its own DNS, so that when connected, running ssh root@node-123.w-a-s-d.me will automatically route you to the private IP address rather than the shared public-facing one.
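If you go the gateway route, OpenSSH 7.3+ clients can automate the hop with a ~/.ssh/config entry along these lines (the gateway name is made up, and this assumes the gateway resolves node names to their internal IPs):

Host node-*.w-a-s-d.me
    ProxyJump root@gateway.w-a-s-d.me

After that, ssh root@node-123.w-a-s-d.me tunnels through the gateway transparently; the destination name is resolved on the gateway's side, so it can map to the private address.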
With HTTP this is much easier, as you can set up a proxy on the front-facing machine which then proxies requests for a given subdomain to a specific internal IP address.
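A minimal nginx server block for one node might look like this sketch (the internal address is illustrative; in practice you'd template or automate one of these per VM):

server {
    listen 80;
    server_name node-123.w-a-s-d.me;

    location / {
        proxy_pass http://192.168.1.123;
        proxy_set_header Host $host;
    }
}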
