Is it possible to access a docker-compose container from another machine inside the local network? - Windows

I'm using WSL2 Ubuntu with Docker CE and Docker-Compose.
I want to access the containers I'm running (mostly Apache/MySQL/WordPress containers) from my local network (sometimes from the same machine, sometimes from other machines).
For example:
PC1: 192.168.178.20
PC2: 192.168.178.21
On PC1 is Windows + WSL2-Ubuntu with all the docker containers.
I want to access the containers from the Windows browser (Chrome) but also from the browser on PC2 (also Chrome, but on a Mac).
Is this even possible? If yes, how?
I got webpack with hot reload working from WSL2, but this seems very hard and I don't know where to start.
Is it possible to add DNS names for specific containers in my router? For example, when "example.test" is requested, my router forwards the request to the IP of the Docker box?

There are a couple of solutions, some better than others.
1) Find the port number that your container is MAPPED (published) to on your host machine (PC1) and make sure you can browse to it that way. Then take the same URL to PC2 and try it out and see if it works. Make sure you are using the fully qualified domain name or IP address of PC1 so the request resolves to PC1.
2) Find the port number that your container EXPOSES to your host machine (PC1) and make sure you can browse to it that way. Repeat the process as above.
3) Use a reverse proxy. I am biased and will say to use Traefik because of its relative simplicity (compared to nginx) to configure. It is just another container. It uses RULES (a combination of Host header, port number, path, etc.) to route incoming connections to services/containers. In your case you would create a rule matching the Host header (webapp1.corp.com) and port (80) and route it to a specific container you have running. Then, from either computer's browser, entering http://webapp1.corp.com routes the connection to that specific container. This is a very simplistic description of something more complicated, but you should get the gist (a minimal compose sketch for options 1 and 3 follows this answer).
You mentioned you are running multiple containers, so I recommend you use docker-compose if you aren't already using it.
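For illustration, here is a minimal docker-compose sketch combining options 1 and 3. The service names, the webapp1.corp.com hostname, and the Traefik labels are assumptions for the example, not something from the question. Published ports are reachable from PC2 via PC1's LAN IP (e.g. http://192.168.178.20:8080); with Docker CE inside WSL2 (rather than Docker Desktop) you may additionally need to forward the port from Windows to the WSL2 VM, for example with netsh interface portproxy.

version: "3.8"
services:
  wordpress:
    image: wordpress:latest
    ports:
      - "8080:80"   # published port: browse http://192.168.178.20:8080 from PC2
    labels:
      # Traefik v2 rule: route requests for webapp1.corp.com to this container
      - "traefik.http.routers.webapp1.rule=Host(`webapp1.corp.com`)"
      - "traefik.http.routers.webapp1.entrypoints=web"
  traefik:
    image: traefik:v2.10
    command:
      - "--providers.docker=true"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

For the DNS part of the question, webapp1.corp.com (or example.test) just has to resolve to PC1's IP, either through your router's local DNS entries or through the hosts file on each client machine.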

Related

Fetch data from getServerSideProps of a Next.js app when another API server is also running on localhost

According to the Next.js documentation:
You should not use fetch() to call an API route in getServerSideProps. Instead, directly import the logic used inside your API route. You may need to slightly refactor your code for this approach.
Fetching from an external API is fine!
So we cannot fetch Next.js built-in API routes in getStaticProps or getServerSideProps, but when I use another API service based on the Laravel framework as the backend server and fetch it with Axios inside getServerSideProps, I get an Error: connect ECONNREFUSED 127.0.0.1:8080 error.
It should also be noted that everything is fine if the API server is hosted outside of our development machine. In other words, the problem only occurs in the development environment, when both the Laravel backend server and the Next.js front-end server are located on localhost.
Could you help me out finding a solution for this problem?
When using localhost or 127.0.0.1 inside a docker container, that points to that docker container only, not the host computer.
There are two pretty easy solutions.
1) Create a docker network, add both containers to it, and use the container name instead of the IP (https://www.tutorialworks.com/container-networking/) - see the sketch below this answer.
2) Use host networking for this container: https://docs.docker.com/network/host/
Edit: Added a link to a tutorial on how to create and use docker networks
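As an illustration of the first option, a minimal sketch using the docker CLI. The container names and image names are assumptions for the example, and it assumes Laravel listens on port 8080 inside its container:

# create a user-defined network and attach both containers to it
docker network create appnet
docker run -d --name laravel-api --network appnet my-laravel-image
docker run -d --name nextjs-app --network appnet -p 3000:3000 my-nextjs-image
# inside the nextjs-app container, the Laravel API is now reachable by name:
#   axios.get('http://laravel-api:8080/api/whatever')

The same applies with docker-compose: services that share a compose network can reach each other by service name.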
So, as @tperamaki's answer already mentions: "When using localhost or 127.0.0.1 inside a docker container, that points to that docker container only, not the host computer."
You can use the IP of your machine on your local network. For example, 192.168.0.10:8080 instead of 127.0.0.1:8080.
But you can also connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host.
In your case just add the port where the other container is listening:
http://host.docker.internal:8080
In this section of the documentation, Networking features in Docker Desktop for Mac, they explain how to connect from a container to a service on the host. Note that a Mac is mentioned there, but I tried it on a Linux distro and it also works (this other answer mentions that it works for Windows as well).
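One caveat worth adding: on plain Docker Engine on Linux (as opposed to Docker Desktop), host.docker.internal is not defined by default. On Docker 20.10+ you can map it yourself; a minimal sketch, with the image name and ports as placeholders:

# docker run: map host.docker.internal to the host's gateway IP
docker run --add-host=host.docker.internal:host-gateway -p 3000:3000 my-nextjs-image
# docker-compose equivalent:
#   services:
#     nextjs-app:
#       extra_hosts:
#         - "host.docker.internal:host-gateway"
# then, inside getServerSideProps, fetch the Laravel API via
#   http://host.docker.internal:8080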

Cannot Access XAMPP Web Server From External Network

I currently have 4 websites running off my home desktop PC using XAMPP. They are running on ports 80, 81, 7733, and 25293.
The first three run fine when accessed from an external network, however the last (25293) won't load. (This site can't be reached. ERR_CONNECTION_FAILED)
I am port forwarding all 4 ports the exact same way, but as soon as I'm not on my local network, that page stops loading.
I attempted to open up the port in my firewall as well however that achieved nothing. What can I do to resolve this?
This might be a common issue because you are using a 5-digit port number; you may need port validation.
For example, this was a known issue for Drupal:
https://www.drupal.org/project/link/issues/182916
Are you running Linux, or Windows Server?
Do you have another computer on the same network? If so, can that computer access your webserver? Try that, because it must be your first step; then worry about being visible to the outside world.
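A quick way to run that first check, assuming the XAMPP box is a Windows desktop; the LAN IP 192.168.1.50 below is a placeholder for your own address, and 25293 is the port from the question:

# on the XAMPP machine: confirm Apache is actually listening on the port
netstat -ano | findstr :25293
# on another computer in the same LAN:
curl -v http://192.168.1.50:25293/
# if this works inside the LAN but not from outside, the problem is the router's
# port forwarding (or the ISP), not XAMPP itself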
I just saw this link and this one. Try to see if it solves your problem.

Docker on Windows with a proxy

Hi, I'm using Docker on Windows 10 with a proxy.
Docker itself works fine with the proxy IP set correctly in the docker settings.
I can download images through docker.
The problem is that any container I want to run or build also needs these HTTP_PROXY and HTTPS_PROXY variables.
I can do this by adding it to build arguments, run arguments or the docker file.
However, none of these solutions is perfect, because they add machine-specific variable values to the Dockerfiles and/or the docker-compose files.
I have checked the MobyLinuxVM's values for these HTTP_PROXY and HTTPS_PROXY variables by hacking into it with this trick:
How to connect to docker VM (MobyLinux) from windows shell?
Even though these variables were displayed correctly, any image that I run or Dockerfile I build still needs to be given these variables.
Is there a way for any container to automatically get these proxy environment variables from the docker daemon, which already has them set?
I know Linux has this feature by nature, but it seems to be missing for Windows.
This does not provide a way to set those values or to get them into a container's context, but it has stopped me from having to change my proxy settings every time I change IP addresses, and it keeps me from having to pass them to containers at runtime (builds are still a different story).
This works for me behind an NTLM-authenticating web proxy, even from home on VPN:
1) Get the IP address of the DummyDesperatePoitras virtual switch Docker for Windows creates (starts with 169.254., which is usually a non-routable IP)
2) Install CNTLM (not perfect, as it's not been updated in 5 years) and set it to listen on that "dummy" IP address
3) Use that "dummy" IP address as the proxy in Docker for Windows settings
4) Add your internal corporate DNS server's IP and the domain name to the daemon.json in Docker for Windows settings (see the sketch after this answer)
Again, this works for running containers - I only have to deal with the proxy server when I run docker build, passing it along in the build-args. I've not found a way around that yet.
Detailed walkthrough: https://mandie.net/2017/12/10/docker-for-windows-behind-a-corporate-web-proxy-tips-and-tricks/
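For step 4, a minimal daemon.json sketch; the DNS server address and search domain below are placeholders, not values from the answer:

{
  "dns": ["10.0.0.2", "8.8.8.8"],
  "dns-search": ["corp.example.com"]
}

In Docker for Windows this is edited on the daemon configuration page in Settings, which writes the daemon.json for you.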
My advice is to use a tool that transparently routes all your traffic to the proxy, without having to set any proxy configuration locally.
For Windows there is Proxifier. It will transparently route all the traffic from your host to the proxy.

How can I make all my docker containers use my proxy?

I am running docker on Debian Jessie which is behind a corporate proxy. To be able to download docker images, I need to add the following to my /etc/defaults/docker
http_proxy="http://localhost:3128/"
I can confirm that this works.
However, in order to be able to access the interwebz from within my container, I need to start all sessions with --net host and then set up these env variables:
export http_proxy=http://localhost:3128/
export https_proxy=https://localhost:3128/
export ftp_proxy=${http_proxy}
Ideally, I would like for the container to not need the host network, and not to know about the proxy (i.e. all outbound calls to port 20, 80, 443 in the container go via the host's proxy port). Is that possible?
Failing that, is it possible to have a site-wide setup which ensures that these env variables are set locally but never exported as part of an image? I know I can pass these things with --env http_proxy=... etc., but that's clunky. I want it to work for all users on the system without having to use aliases.
(Disclaimer: I asked this on https://superuser.com/posts/890196 but the home for docker questions is a little ambiguous at the moment).
See Proxy all the Containers:
Host server runs a container running a proxy (squid, in this case) that can do transparent proxying. That container has some iptables rules that NAT traffic into the proxy server - this means that container needs to run in privileged mode.
Host server also contains (and here's the magic) ip route table entries that re-route all port 80 traffic from any container except the proxy through the proxy container.
That last bit essentially means that for port 80 traffic, the route from container to the rest of the world goes through the proxy container - giving it the chance to NAT and transparent proxy.
https://github.com/silarsis/docker-proxy
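To give a feel for the approach, here is a rough sketch of the kind of NAT rule involved. This is illustrative only, not the exact rules from the linked project; the proxy container IP (172.17.0.2) and squid's transparent port (3129) are assumptions:

# squid container runs privileged, with squid listening in intercept mode on 3129;
# redirect every other container's outbound port 80 traffic into it
iptables -t nat -A PREROUTING -i docker0 -p tcp --dport 80 \
  ! -s 172.17.0.2 -j DNAT --to-destination 172.17.0.2:3129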

EC2, RHEL - No Route To Domain

This is probably incredibly simple and I'm just missing one step. The problem I was (originally) trying to solve was how to get a statically allocated hostname, one that would not change with each restart. I've done the following steps:
I have a domain registered on GoDaddy, and it points to my EIP. I use it to connect over SSH (putty) to my EC2 instance, so I know that part is working. I've opened ports 9080, 9060, 9043, and 9443 as well as SSH and FTP ports. And I've installed and started the software that uses those ports, and that stuff normally just works on a local RHEL install, so I think what's different here is the custom domain name.
I've added my EIP and fully qualified host name to my /etc/hosts file.
I've added my fully qualified host name to my /etc/hostname file and modified the /etc/rc.local script to set the hostname properly on a restart, and that works. If I execute the command hostname, it returns my fully qualified hostname, so that looks ok.
I cannot ping my server, but I think that's OK, because Amazon probably blocks pings. So I don't think that's a symptom of anything.
I cannot open a connection to http://myserver.mydomain:9080/, which normally just works. Here it just times out.
If I do a wget http://myserver.mydomain:9080 from inside the EC2 instance, it returns failed: No Route To Host
But if I do a wget against localhost instead of the fully qualified name I get what I expect as a response.
So.... routing tables? Do those need to change? And if so how?
You probably don't want to do what you did. Everything in EC2 is NAT'd, meaning that the IP assigned to your instance is a private/internal IP and the public IP is mapped to it by the routing system.
So internally, you want everything to resolve to the private IP, or you will get charged for traffic as it has to get routed to the edge and then routed back in. Using the public DNS name will resolve correctly from the default DNS servers.
If you are using RHEL, you will need to make sure both the security group and the internal firewall (iptables) have the ports opened. You could just disable the internal firewall, since it's a bit redundant with the security groups. On the other hand, it can provide some options security groups do not, if you need them.
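A quick way to check both layers, assuming an iptables-based RHEL install (newer RHEL releases use firewalld instead); port 9080 is taken from the question:

# confirm the application is listening on all interfaces, not just 127.0.0.1
sudo netstat -tlnp | grep 9080
# inspect and, if needed, open the port in the instance's own firewall
sudo iptables -L -n
sudo iptables -I INPUT -p tcp --dport 9080 -j ACCEPT
sudo service iptables save
# the matching inbound rule for 9080 still has to exist in the EC2 security group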
