Access a host from within a Docker container on Windows

I use Docker CE for Windows on latest Windows 10 and have built an image with a
script that runs a test against a web server.
(A litmus test suite for a WebDAV server to be exact, but I think the problem
is general.)
I run the web server in a PowerShell console:
> wsgidav -p 8080 -H localhost
21:04:19.107 - <13348> wsgidav INFO : Running WsgiDAV/3.0.0a3 Cheroot/6.4.0 Python/3.6.5
21:04:19.107 - <13348> wsgidav INFO : Serving on http://localhost:8080 ...
From another PowerShell console, I run my script in a Docker container (using FROM alpine).
The script starts and tries to access the endpoint, but does not succeed:
> docker pull mar10/litmus
> docker run --rm -p 8080:8080 mar10/litmus http://gateway.docker.internal:8080
-> running `basic':
0. init.................. FAIL (connection refused by `gateway.docker.internal' port 8080: Operation timed out)
So far I have tried:
using the gateway.docker.internal hostname
using -p PORT:PORT
using --net=host
restarting the Docker daemon (which, interestingly, was sometimes also necessary to fix timeouts in docker pull)
different IP addresses for the web server (127.0.0.1, localhost, 0.0.0.0, the local IP)
Nothing has worked so far (although the failure messages differ between attempts).
Maybe I just missed a working combination of the above, or some other trick?

FWIW, I was able to solve it by building the container with the --network host option and using a real IP address (instead of localhost or 0.0.0.0).
Details here: https://hub.docker.com/r/mar10/docker-litmus/
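For illustration, a rough sketch of the "real IP" part of that fix, assuming the Windows host's LAN address is 192.168.1.23 (yours will differ):
# bind wsgidav to the machine's real LAN address instead of localhost (example address)
wsgidav -p 8080 -H 192.168.1.23
# from another console, point the litmus container at that same address
docker run --rm mar10/litmus http://192.168.1.23:8080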

Related

Connect remotely to a MariaDB Docker container in its own Docker network

Hi, what I am actually trying to do is connect remotely from a MySQL client in Windows Subsystem for Linux (mysql -h 172.18.0.2 -P 3306 -u root -p). Before that I started the Docker container as follows: docker container run --name testdb --network testnetwork -p 3306:3306 -e MYSQL_ROOT_PASSWORD=mysqlRootPassword -e MYSQL_DATABASE=localtestdb -d mariadb/server.
The reason I put the container in its own network is that I also have a dockerized Spring Boot application (a GraphQL server) which is supposed to communicate with this DB. But whenever I try to connect from the mysql client built into my Windows Subsystem for Linux with the command shown above, I get the error message: ERROR 2002 (HY000): Can't connect to MySQL server on '172.18.0.2' (115).
To solve the problem on my own, I already checked whether the bind-address line in the configuration file is commented out, but that did not help. Interestingly, I previously managed to set up a Docker container with MariaDB and connect to it from the outside; now that I do exactly the same thing, with the only difference that the container is placed in its own network, it no longer works.
Hopefully there is someone out there who is able to help me with this annoying problem.
Thanks!
So far,
Daniel
//edit:
Now I have tried the advice from this topic: How to configure containers in one network to connect to each other (server -> mysql)?. Furthermore, I linked my Spring Boot (server) application to the MariaDB container with the "--link databaseContainerName" parameter.
Now I am able to start both containers without any error, but I am still not able to connect remotely to the MariaDB container, which is now running in a virtual Docker network with its own subnet.
I explored this recently. It is by design: container isolation. Usually only the main host (the httpd service, say) is accessible externally, hiding the internal connections (the hosts it talks to in order to deliver the response).
A container created in its own network is not accessible from external addresses, not even from containers on the same bridge but in another network (172.19.0.0/16).
Your container should be accessible on the Docker host address (127.0.0.1 if run locally) and the mapped port ("-p 3306:3306"), i.e. 3306. Of course this won't work if several running DB containers map to the same host port.
Isolation is done with the firewall (iptables). You can list the rules (iptables -L) from the Docker host to see this.
You can modify the firewall to allow external access to the internal networks. I used this rule:
iptables -A DOCKER -d 172.16.0.0/12 -j ACCEPT
After that, your containerized MySQL engine should be accessible at the internal address 172.18.0.2 and the original (not mapped) port 3306.
Warnings
it disables all isolation, don't use it in production;
you have to run this after every Docker start, since the rules are created/modified by Docker on the fly;
not every Docker container will respond to ping, so check it from the Docker host (the Linux subsystem in this case) first and from the Windows command line later.
I used this option (in docker.service) to make the rule permanent:
ExecStartPost=/bin/sh -c '/etc/iptables/accept172_16.sh'
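The script itself is not shown there; presumably it just re-applies the rule from above, something like this sketch:
#!/bin/sh
# re-add the ACCEPT rule for Docker's internal networks once the daemon has started
iptables -A DOCKER -d 172.16.0.0/12 -j ACCEPT
The ExecStartPost line goes into the docker.service unit (or a systemd drop-in for it), followed by a systemctl daemon-reload.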
For Docker running on an external host (shared in the LAN) you should use route add (or the hosts file on your machine or router) to route 172.x.x.x addresses to the LAN Docker host.
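On a Windows machine elsewhere in the LAN, for example, that route could be added like this (run as administrator; 192.168.1.50 stands in for the Docker host's LAN address):
rem 255.240.0.0 is the /12 netmask covering Docker's default 172.16.0.0-172.31.255.255 range
route add 172.16.0.0 mask 255.240.0.0 192.168.1.50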
Hint: use the Portainer project (with restart policy: always) to manage Docker containers. It also makes configuration errors easier to spot.

Can't connect from outside the container to ClickHouse over HTTP on macOS

I'm trying to use ClickHouse with Docker on macOS. I use the following command:
docker run -d -p 8123:8123 --rm --name some-clickhouse-server -v /my/config/path/config.xml:/etc/clickhouse-server/config.xml --ulimit nofile=262144:262144 yandex/clickhouse-server:latest
The container starts successfully, but when I try to connect to it over HTTP with curl 'http://localhost:8123' I get an error:
Failed to connect to localhost port 8123: Connection refused
When I connect to ClickHouse from clickhouse-client (also using the Docker image), everything is OK.
I also ran the clickhouse-server image in -it mode, installed curl, started the server and connected to it from inside the container; that works too.
I also tried modifying the listen_host setting in config.xml (which was copied from the Docker image), trying ::, 0.0.0.0, ::1 and 127.0.0.1,
and for every setting I tried to connect with curl to localhost, 127.0.0.1 and 0.0.0.0. None of this solved my problem.
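(For reference, the setting being edited in config.xml is the listen_host element; the "listen on all interfaces" form, the first value tried above, is the one-liner below.)
<listen_host>::</listen_host>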
Normally, Docker Desktop writes these host and container details to /etc/hosts; adding the ClickHouse service as follows resolved this issue:
127.0.0.1 localhost clickhouse-service
I was using Docker Toolbox on macOS (in conjunction with VirtualBox). I migrated to Docker Desktop and that solved my problem.

Docker on Mac is running but refusing to expose port

Mac here, running Docker Community Edition Version 17.12.0-ce-mac49 (21995).
I have Dockerized a web app with a Dockerfile like so:
FROM openjdk:8
RUN mkdir /opt/myapp
ADD build/libs/myapp.jar /opt/myapp
ADD application.yml /opt/myapp
ADD logback.groovy /opt/myapp
WORKDIR /opt/myapp
EXPOSE 9200
ENTRYPOINT ["java", "-Dspring.config=.", "-jar", "myapp.jar"]
I then build that image like so:
docker build -t myapp .
I then run a container of that image like so:
docker run -it -p 9200:9200 --net="host" --env-file ~/myapp-local.env --name myapp myapp
In the console I see the app start up without any errors, and all seems to be well. Even my metrics publishing (which publishes heartbeat and other health metrics every 20 seconds) prints to the console as I would expect. Everything seems to be fine.
Except when I go to run a curl against my app from another terminal/session:
curl -i -H "Content-Type: application/json" -X POST -d '{"username":"heyitsme","password":"12345"}' http://localhost:9200/v1/auth/signIn
curl: (7) Failed to connect to localhost port 9200: Connection refused
Now, if this were a situation where the /v1/auth/signIn path wasn't valid, or if there were something wrong with my request entity/payload, the server would pick up on it and send an error (I assure you: this exact same curl works when I run the server outside of Docker as a standalone service).
So this is definitely a situation where the curl command can't connect to localhost:9200. Again, when I run my app outside of Docker, that same curl command works perfectly, so I know my app is trying to stand up on port 9200.
Any ideas as to what could be going wrong here, or how I could begin troubleshooting?
The way you run your container has 2 conflicting parts:
-p 9200:9200 says: "publish (bind) port 9200 of the container to port 9200 of the host"
--net="host" says: "use the host's networking stack"
According to Docker for Mac - Networking docs / Known limitations, use cases, and workarounds, you should only publish a port:
I want to connect to a container from the Mac
Port forwarding works for localhost; --publish, -p, or -P all work. Ports exposed from Linux are forwarded to the Mac.
Our current recommendation is to publish a port, or to connect from another container. This is what you need to do even on Linux if the container is on an overlay network, not a bridge network, as these are not routed.
The command to run the nginx webserver shown in Getting Started is an example of this.
$ docker run -d -p 80:80 --name webserver nginx
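Applied to the command in the question, that means dropping --net="host" and keeping only the published port, roughly:
# same run command as before, just without --net="host"
docker run -it -p 9200:9200 --env-file ~/myapp-local.env --name myapp myapp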
Check that your app binds to 0.0.0.0:9200 and not to localhost:9200 or something similar.
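For a Spring Boot app like this one, that binding can be pinned down in application.yml; a sketch using the standard server properties (the values are assumptions for this app):
server:
  address: 0.0.0.0   # listen on all interfaces, not just loopback
  port: 9200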
The problem seems to be the network mode you are running the container in.
Quick test: log in to your container and run the curl command there (see the sketch below); hopefully it works. That would isolate the problem to the request not being forwarded from the host to the container.
Try running your container on the default bridge network and test again.
Refer to this blog for details on the network modes in Docker.
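A sketch of that quick test against the container from the question, assuming curl is available in the openjdk:8-based image:
# run the same request from inside the container; if this works, the app itself is fine
docker exec -it myapp curl -i -H "Content-Type: application/json" -X POST -d '{"username":"heyitsme","password":"12345"}' http://localhost:9200/v1/auth/signIn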
TL;DR: you will need to add an iptables entry to allow the traffic to enter your container.

Can't access docker container on port 80 on OSX

In my current job we have a development environment built with docker-compose.
One container is nginx, which provides routing to the other containers.
Everything seems fine and works for my colleagues on Windows and OS X. But on my system (OS X El Capitan) there is a problem accessing the nginx container on port 80.
Here is the container's setup from docker-compose.yml:
nginx:
  build: ./dockerbuild/nginx
  ports:
    - 80:80
  links:
    - php
  volumes_from:
    - app
  # ... and more
In ./dockerbuild/nginx there is nothing special, just an nginx config as we know it from everywhere.
When I run everything with docker-compose create and docker-compose start, docker ps gives me:
3b296c1e4775 docker_nginx "nginx -g 'daemon off" About an hour ago Up 47 minutes 0.0.0.0:80->80/tcp, 443/tcp docker_nginx_1
But when I try to access it, for example via curl, I get an error: curl: (7) Failed to connect to localhost port 80: Connection refused
When I run the container on port 81 instead, everything works fine.
The port really is bound by Docker:
22:47 $ sudo lsof -i -n -P | grep TCP
...
com.docke 14718 schovi 38u IPv4 0x6e9c93c51ec4b617 0t0 TCP *:80 (LISTEN)
...
The firewall in OS X is turned off and I have no other security software.
If you are using Docker for Mac:
Accessing it via localhost:80 is correct, though you still have to ensure you do not have a local apache/nginx service running. Often there are leftovers from boxen/homebrew binding that port, because that's what developers did back then :) (a quick check for this is sketched at the end of this answer).
If you are using Docker Toolbox/VirtualBox/whatever hypervisor:
You will not be able to access it via localhost, but via the docker-machine IP, so run docker-machine ip default and then use http://$ip:80 in your browser.
If that does not help:
Ensure your nginx container actually does work by connecting to it: docker exec -i -t <containerid> bash
and then run ps aux | grep nginx or, if telnet is installed, try to connect to localhost.
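A quick way to check for such a leftover listener on port 80, from the Mac itself:
# show whatever is currently listening on TCP port 80
sudo lsof -nP -iTCP:80 -sTCP:LISTEN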
Solved!
The problem was that a long, long time ago I installed pow (a super simple automated Rails server which runs applications on an app_name.local domain). This beast left behind a LaunchAgent script that updates pf to forward port 80 to the pow port.
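If you suspect something similar, the redirect can usually be spotted in pf's translation rules (a rough check; the exact output varies):
# list active NAT/redirect rules; a pow leftover shows up as an rdr rule targeting port 80
sudo pfctl -s nat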
In my current job we have a development environment made with docker-compose.
A privilege to use.
[W]hen I try to access [nginx on port 80] for example via curl I get error.
Given there's nothing preventing you from accessing Docker on your host OS, you should look at the app running inside the container and ensure it's binding to the correct host, e.g. 0.0.0.0 and not localhost.
For example, if you're running Nuxt inside a container with nuxt-ts, note that Nuxt will default to localhost, so the server is not reachable over the Docker network, whereas npx nuxt-ts -H 0.0.0.0 gets things squared away, with the container's internal server listening on the IP of the Docker network used (verify the IP with something like docker container inspect d8af01990363).

How to access webserver running on localhost from a docker container on a network?

I have the following system configuration:
Docker container running on user defined network
docker-machine (with VirtualBox on OS X, forwarding port 9000 to 9000)
Local webserver running on http://localhost:9000
I do not know how to make a basic http request against this webserver, from within my docker container.
To test this I am using:
docker exec testcontainer curl --data "foobaz=foo" http://{hostname}:9000/
where I have tried, for hostnames:
'localhost'
'127.0.0.1'
'192.168.99.100' (docker-machine IP)
Each time I receive errors or timeouts. When I run the curl command locally (not in Docker, on my host OS X machine) I am able to make the HTTP POST request successfully.
I cannot disconnect the Docker container from my user-defined network. I also cannot add my webserver to that network, as it is not running in a container. I know it is trivial to connect the other way around (curl to a webserver running in a Docker container), but that is not my use case.
How can I route that HTTP request from the Docker container, which is part of a user-defined network, to my localhost webserver?
You can do this with the actual IP address of your local computer.
So for example, if your en0 IP is 10.100.20.32 on your host OS, you can run:
docker exec testcontainer curl --data "foobaz=foo" http://10.100.20.32:9000/
which will allow you to make the HTTP requests successfully.
Note that if you are doing this from a container on the host Docker network, it is trivial, as you can directly access localhost or 0.0.0.0 without having to use the actual machine IP.
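A convenient way to look up that en0 address on the Mac, for example:
# print the IPv4 address currently assigned to en0
ipconfig getifaddr en0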
You might want to check what the IP address of the container is - you can find this out by running docker inspect.
However, if you want to access the server process running in your container via the docker-machine IP, then you should "expose" port 9000, which your contained app is listening on, in the Dockerfile. You will still need to figure out which host port the container's port 9000 is mapped to (this shows up when you list your containers with $ docker ps). You can also specify the port binding on the command line when starting your container, like this: $ docker run -p 9000:9000 <your-container>.
