I have a Home Assistant Docker container.
It works fine so far.
Now I'm trying to get my video surveillance into Home Assistant.
I have set up the RTSP Stream and so on.
So now to the problem:
Every time I try to open the RTSP stream from my Docker container, it needs a random high UDP port.
I only exposed 8123/TCP from Docker. I also tried exposing more via a port range and UDP, but I have other apps running on my home server and they interfere.
Now the question:
Is there a way to expose the whole container to the local network with its own IP address?
I really don't want to mess with the port problem inside my local network.
I know there is a solution out there with Hyper-V, but I really want to keep the WSL 2 backend and not switch to the legacy one.
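For reference, the usual way to give a container its own address on the LAN is a macvlan network. The sketch below is only a rough outline and assumes a native Linux host, a NIC called eth0, and a 192.168.1.0/24 network (all placeholders); under Docker Desktop/WSL 2 the engine runs inside a VM, so macvlan may not actually put the container on your physical LAN:

services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable   # assumed image; use whatever you run today
    restart: unless-stopped
    networks:
      lan:
        ipv4_address: 192.168.1.50        # the container's own LAN address (placeholder)

networks:
  lan:
    driver: macvlan
    driver_opts:
      parent: eth0                        # the host NIC facing your LAN (placeholder)
    ipam:
      config:
        - subnet: 192.168.1.0/24          # placeholder, match your LAN
          gateway: 192.168.1.1            # placeholder, your router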
I'm using docker-compose on Linux. In my compose file I have network_mode: "host" for a bunch of containers. This is convenient because my stack can access my containers as localhost:<port>.
Now I've had to run this on Windows and it doesn't work. I've read this in the docs:
The host networking driver only works on Linux hosts, and is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.
So... anyway, can I access my container's exposed ports on Windows by some other means, as localhost:<port> like I do on Linux? Or do I HAVE to map them out to some random IP and access something like 3.70.0.1:<port> on Windows?
To clarify this question and the related issue (see the comments on the question for details):
Indeed, simply specifying
ports: <host>:<container>
allows you to access your containers/services as localhost:<host> from the host on Linux/Windows (likely Mac too). That being said, specifying network_mode: host in your docker-compose.yaml service definition actually disables this on Windows (and likely Mac) systems.
A related issue I was having was that in some of my services, the address used for communication between containers was of the form localhost:<container>. If you use network_mode: host, then indeed from both the host's and the containers' perspective the services run on localhost. However, if you just use port mapping, then while the services run on localhost from the host's perspective, they run on Docker's private network from the containers' perspective, so they will expect an address of the form <service_name>:<container_port> to communicate between containers.
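A minimal sketch of what that looks like with plain port mapping; the images, service names and ports here are placeholders, not anything from the original stack:

services:
  web:
    image: nginx:alpine                  # placeholder front-end
    ports:
      - "8080:80"                        # <host>:<container>; reachable as localhost:8080 on the host
    depends_on:
      - api
  api:
    image: example/api:latest            # placeholder back-end
    ports:
      - "3000:3000"

From the host both services answer on localhost:8080 and localhost:3000, but from inside the web container the back-end has to be addressed as api:3000 (service name plus container port), because the containers share Docker's private network rather than the host's loopback.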
I'm using WSL2 Ubuntu with Docker CE and Docker-Compose.
I want to access the containers I'm running (mostly Apache/MySQL/Wordpress containers) from my local network (sometimes same, sometimes other machines).
For example:
PC1: 192.168.178.20
PC2: 192.168.178.21
On PC1 is Windows + WSL2-Ubuntu with all the docker containers.
I want to access the containers from the Windows browser (Chrome) but also from the browser on PC2 (also Chrome, on a Mac).
Is this even possible? If yes, how?
I got webpack to work with hot reload from WSL2 but this seems very hard and I don't know where to start.
Is it possible to add DNS names for specific containers in my router? For example, when someone calls "example.test", my router forwards it to the IP of the Docker box?
There are a couple of solutions, some better than others.
Find the port number that your container is MAPPED to on your host machine (PC1) and make sure you can browse to it that way. Then take the same URL to PC2 and try it out to see if it works. Make sure you are using the fully qualified domain name or IP address so it resolves to PC1.
Find the port number that your container EXPOSES and make sure you can browse that way from your host machine (PC1). Repeat the process as above.
Use a reverse proxy. I am biased and will say to use Traefik because of its relative simplicity (compared to nginx) to configure. It is just another container. It uses RULES (a combination of URL header, port number, path, etc.) to route incoming connections to services/containers. In your case you will create a rule for the URL header (webapp1.corp.com) and port number (80) and route it to a specific container you have running. Then from either computer's browser, entering http://webapp1.corp.com will route the connection to that specific container. This is a simplistic description of something more involved, but you should get the gist; a rough sketch follows below.
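Roughly, the reverse-proxy option could look like this in docker-compose; the hostname webapp1.corp.com, the traefik image tag and the nginx placeholder are all assumptions, not something from your stack:

services:
  traefik:
    image: traefik:v2.10
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  webapp1:
    image: nginx:alpine                                              # stands in for your actual app
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.webapp1.rule=Host(`webapp1.corp.com`)"
      - "traefik.http.routers.webapp1.entrypoints=web"
      - "traefik.http.services.webapp1.loadbalancer.server.port=80"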
You mentioned you are running multiple containers, so I recommend you use docker-compose if you aren't already using it.
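One WSL2-specific wrinkle worth adding, since this is Docker CE inside WSL2 rather than Docker Desktop: WSL2 sits behind its own NAT, so PC2 usually cannot reach a published port until Windows forwards it into the WSL2 VM. A rough sketch, run from an elevated Windows prompt, where 8080 and the 172.x address are placeholders (get the current WSL2 address with wsl hostname -I, since it changes across reboots):

netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=8080 connectaddress=172.20.0.2 connectport=8080
netsh advfirewall firewall add rule name="WSL2 8080" dir=in action=allow protocol=TCP localport=8080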
I would like to connect to a remote Docker Swarm (Ubuntu) from a Windows box.
On Linux it seems that you need to update the daemon.json file.
How do you achieve this in Windows?
Thanks!
The Docker engine has two parts. The daemon service (dockerd) runs on your Ubuntu box; we'll call it the "server". Then the docker CLI is what you can run from that server (docker) or from anything like your Windows machine (docker.exe); we'll call this the "client".
The client can talk to the server in two main ways: the socket and a TCP port. The socket is usually reserved for local connections (SSH into the server and the docker client defaults to using the socket file to talk to the local server) or SSH tunnels, which don't work out of the box on Windows (maybe if you try the Windows Subsystem for Linux on Windows 10).
The other connection option is TCP, which isn't enabled on the server out of the box for security reasons. It has no authentication when enabled, so you'll want to use TLS to authenticate remotely; Docker has steps for that. It's not a 3-minute solution, so many look for an easier route to solve this problem.
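For reference, on the Ubuntu side this usually ends up as something like the following in /etc/docker/daemon.json; the certificate paths are placeholders and assume you have already generated CA, server and client certificates following Docker's TLS documentation:

{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2376"],
  "tlsverify": true,
  "tlscacert": "/etc/docker/certs/ca.pem",
  "tlscert": "/etc/docker/certs/server-cert.pem",
  "tlskey": "/etc/docker/certs/server-key.pem"
}

(On a systemd-managed install, a hosts entry here can clash with the -H flag in the stock service unit, so a systemd override may be needed.) On the Windows client you then point docker.exe at the server, typically via environment variables such as DOCKER_HOST=tcp://<server-ip>:2376, DOCKER_TLS_VERIFY=1 and DOCKER_CERT_PATH set to the folder holding your client cert and key.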
The easier option for enabling TLS and the TCP port on the server is to use Docker Cloud with the "Bring Your Own Swarm" feature, which manages the certificates and security for you.
I'm using Docker for Mac Beta and it runs from Spotlight.
Is there any way to run it from the console, or force it to use a configuration file to specify the IP address of the Docker host?
Right now it changes between 192.168.64.3 and 192.168.64.5 (each start of Docker it can get a random IP).
Probably I need to configure the bridge interface?
com.docker.network.bridge.enable_ip_masquerade: true
com.docker.network.bridge.host_binding_ipv4: 0.0.0.0
Does anyone know how to do that?
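For what it's worth, those two settings are options of the bridge network driver, and on a plain Docker engine they would normally be passed when creating a user-defined network (the network name, port and nginx image below are just placeholders); note this only controls where published ports bind, and per the answer below it won't stop the Docker for Mac VM from picking a different 192.168.64.x address on each start:

docker network create -o "com.docker.network.bridge.host_binding_ipv4=0.0.0.0" -o "com.docker.network.bridge.enable_ip_masquerade=true" my-bridge
docker run -d --network my-bridge -p 8080:80 nginx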
You can connect to the Docker Alpine host via a Unix socket, but I have not been able to figure out how to bridge to the network.
The docs say:
Unfortunately, due to limitations in OSX, we're unable to route traffic to containers, and from containers back to the host.
Because of the way networking is implemented in Docker for Mac, you cannot see a docker0 interface in OSX. This interface is actually within HyperKit.
I'm in a tough spot, having created 2 different virtual machines on Azure with Windows Server 2012 R2. I'm trying to host a game server for a game which requires ports 7777 and 27015 to be open.
What I did is simple: I went into the panel, set up endpoints for 7777 and 27015, for UDP and TCP, and also added firewall exceptions for incoming/outgoing 7777 and 27015, TCP and UDP.
canyouseeme.org still apparently can't find my service and shows me the ports are not open. It does show that my remote connection port is open, though. What am I doing wrong? Is there anything more that I need to know?
Image showing forwarded ports
If you opened the ports in your firewall and on the Endpoints screen, you are probably fine to game. The problem is probably with the utility that you're testing with and not the ports themselves.
I logged onto an Azure VM that I know I can remotely connect to, tested a port that I know is open with that website, and it said it did not find it. Maybe that site is using ping, which gets stuck in Azure's load balancer. To test connectivity, try using PSPing. This will let you test connections to specific ports. https://technet.microsoft.com/en-us/sysinternals/bb896649
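For example, something along these lines from another machine attempts a TCP connect to the specific ports (the IP is a placeholder, and this only exercises TCP, not the UDP side of those ports):

psping <your-vm-public-ip>:27015
psping <your-vm-public-ip>:7777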