Map port of Elasticsearch in Docker

I want to start an Elasticsearch container in Docker. By default, nearly everywhere I see something like:
docker run -d -p 9200:9200 docker.elastic.co/elasticsearch/elasticsearch:5.5.1
My question is: why are we mapping the port onto our host network? I understand port mapping, but I don't see the big advantage of it here.
Personally, I would always do something like this:
$ docker network create logging
20aa4c7bf2d8289d8cbd485c3e384f9371eed87204625998687c61e4bad27f14
$ docker run -d --name es --net logging docker.elastic.co/elasticsearch/elasticsearch:5.5.1
And connect to Elasticsearch by using its name (es in this case) from containers deployed in the same network. I would think my Elasticsearch is more secure in its private Docker network.
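For example, any container on the same network could reach it by name (the client image here is just an illustration; add credentials with -u if X-Pack security is enabled):
docker run --rm --net logging curlimages/curl http://es:9200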
I can see an advantage to port mapping when the containers that need to connect to Elasticsearch aren't in the same network. But are there other advantages, and why is this always shown with port mapping?

Host port mapping is about accessibility. If you are running Docker on your local machine and only want to access the app from that machine, then host port mapping is not needed.
But if you need to access the app from an external computer, i.e. anything other than your Docker host, then you do need the port mapping:
docker run -d -p 9200:9200 docker.elastic.co/elasticsearch/elasticsearch:5.5.1
This maps host port 9200 (left side) to port 9200 inside the container (right side). The listening interface is 0.0.0.0, which means all interfaces, so the service is accessible to anyone who has network access to this machine.
If you want to make it more secure, you can bind to the loopback interface instead:
docker run -d -p 127.0.0.1:9200:9200 docker.elastic.co/elasticsearch/elasticsearch:5.5.1
This listens on localhost only, so the service is reachable only from the machine itself. If you need to access it from somewhere else, you can use an SSH tunnel:
ssh -L 9200:127.0.0.1:9200 <user>@<HOSTIP>
Then, on that machine, you can access it at 127.0.0.1:9200.
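Once the tunnel is up, a quick check from that machine (Elasticsearch answers HTTP on 9200):
curl http://127.0.0.1:9200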
The next level of security comes from a firewall such as ufw, firewalld, etc.
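For example, with ufw you could allow only a trusted subnet to reach the mapped port (the subnet here is just a placeholder):
sudo ufw allow from 192.168.1.0/24 to any port 9200 proto tcp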
What you did with the network command
docker network create logging
basically creates a new network and isolates the containers in it from other Docker containers on the host. But as far as external accessibility is concerned, you still need to map the port to a host port.
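The two approaches combine naturally, based on the commands above: keep Elasticsearch on its private network for container-to-container traffic, and publish the port only on loopback for host access:
docker network create logging
docker run -d --name es --net logging -p 127.0.0.1:9200:9200 docker.elastic.co/elasticsearch/elasticsearch:5.5.1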
Hope this answers your question.

Related

Connect to a MariaDB Docker container in its own Docker network remotely

Hi, what I am actually trying to do is connect remotely from a MySQL client in Windows Subsystem for Linux (mysql -h 172.18.0.2 -P 3306 -u root -p). Before that, I started the Docker container as follows: docker container run --name testdb --network testnetwork -p 3306:3306 -e MYSQL_ROOT_PASSWORD=mysqlRootPassword -e MYSQL_DATABASE=localtestdb -d mariadb/server.
The reason I put the container in its own network is that I also have a dockerized Spring Boot application (a GraphQL server) which should communicate with this DB. But whenever I try to connect from the built-in mysql client in my Windows Subsystem for Linux with the command shown above, I get the error message: ERROR 2002 (HY000): Can't connect to MySQL server on '172.18.0.2' (115).
What I have already tried, to solve the problem on my own, is checking whether the bind-address line in the configuration file is commented out, but that didn't help. Interestingly, I previously managed to set up a MariaDB Docker container and connect to it from the outside; now, when I try exactly the same thing, with the only difference that the container is in its own network, it won't work.
Hopefully there is someone out there who is able to help me with this annoying problem.
Thanks!
So far,
Daniel
//edit:
I have now tried the advice from this topic: How to configure containers in one network to connect to each other (server -> mysql)?. Furthermore, I linked my Spring Boot (server) application to the MariaDB container with the "--link databaseContainerName" parameter.
Now I am able to start both containers without any error, but I am still not able to connect remotely to the MariaDB container, which is now running in a virtual Docker network with its own subnet.
I explored this recently: this is by design, for container isolation. Usually only the main host (e.g. an httpd service) is accessible externally, hiding the internal connections (the hosts it communicates with to deliver the response).
A container created in its own network is not accessible from external addresses, and not even from containers on the same bridge but in another network (e.g. 172.19.0.0/16).
Your container should be accessible on the Docker host address (127.0.0.1 if run locally) and the mapped port ("-p 3306:3306"), i.e. 3306. But of course this won't work if several running DB containers map to the same host port.
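A second DB container just needs a different host-side port (a sketch; the name and password are placeholders):
docker container run --name testdb2 --network testnetwork -p 3307:3306 -e MYSQL_ROOT_PASSWORD=mysqlRootPassword -d mariadb/server
mysql -h 127.0.0.1 -P 3307 -u root -p then reaches the second instance.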
Isolation is done using a firewall, iptables. You can list the rules (iptables -L) to see this, from the Docker host level.
You can modify the firewall to allow external access to internal networks. I used this rule:
iptables -A DOCKER -d 172.16.0.0/12 -j ACCEPT
After that, your containerized MySQL engine should be accessible using the internal address 172.18.0.2 and the original (not mapped) port 3306.
Warnings
it disables all isolation; don't use it in production
you have to run this after every Docker start, since the rules are created/modified by Docker on the fly
not every Docker container will respond to ping; check connectivity from the Docker host (the Linux subsystem in this case) first, and from the Windows cmd later
I used this option (in docker.service) to make the rule permanent:
ExecStartPost=/bin/sh -c '/etc/iptables/accept172_16.sh'
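The script body is not shown in the original; a minimal version matching the rule above would be:
#!/bin/sh
iptables -A DOCKER -d 172.16.0.0/12 -j ACCEPT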
For Docker on an external host (shared on the LAN), you should use route add (or the hosts file on your machine or router) to route 172.x.x.x addresses to the LAN Docker host.
Hint: use the Portainer project (with restart policy: always) to manage Docker containers. It makes config errors easier to spot, too.

How to access a port on the host machine when running docker container on MacOS with --network=host?

I have set up a couple of containers that interact with each other. The main application container runs with --network=host because it queries several MySQL containers running on different ports exposed on the host network.
I am trying to hit the application on the host but get an error:
curl: (7) Failed to connect to 0.0.0.0 port 36081: Connection refused
I am working on Docker installed on MacOS.
I have read several questions that indicate that docker on MacOS runs on a VM. But what is the workaround to access the application from the host? Any way to get the IP of the said VM?
You cannot use --network=host on Mac to connect via host ports, but binding to a host port using the -p option works.
https://docs.docker.com/docker-for-mac/networking/#/there-is-no-docker0-bridge-on-osx
I WANT TO CONNECT TO A CONTAINER FROM THE MAC
Port forwarding works for localhost; --publish, -p, or -P all work. Ports exposed from Linux are forwarded to the host.
Our current recommendation is to publish a port, or to connect from another container. This is what you need to do even on Linux if the container is on an overlay network, not a bridge network, as these are not routed.
For your use case, you need to create a Docker network and attach both the DB and application containers to it. The containers will then be able to talk to each other by name. You can also publish the application container's port so that you can access it from your host.
https://docs.docker.com/network/bridge/
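A sketch of that setup (image and container names are placeholders):
docker network create app-net
docker run -d --name db --net app-net -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
docker run -d --name app --net app-net -p 8080:8080 my-app-image
Inside the app container, the database is then reachable as db:3306.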
Instead of creating the network, attaching the containers to it, etc. manually, you can use docker-compose.
https://docs.docker.com/compose/

Localhost vs 0.0.0.0 with Docker on Mac OS

I am reading the docs here and I find myself a bit confused, since running
docker run --name some-mysql -p 3306:3306 -d mysql
or
docker run --name some-mysql -p 127.0.0.1:3306:3306 -d mysql
then running mysql --host localhost --port 3306 -u root gives me the following error:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2).
But running mysql -u root -p --host 0.0.0.0 works.
Does someone have an explanation?
With docker port forwarding, there are two network namespaces you need to keep track of. The first is inside your container. If you listen on localhost inside the container, nothing outside the container can connect to your application. That includes blocking port forwarding from the docker host and container-to-container networking. So unless your container is talking to itself, you always listen on 0.0.0.0 with the application you are running inside the container.
The second network namespace is on your docker host. When you forward a port with docker run -p 127.0.0.1:1234:5678 ... that configures a listener on the docker host interface 127.0.0.1 port 1234, and forwards it to the container namespace port 5678 (that container must be listening on 0.0.0.0). If you leave off the ip, docker will publish the port on all interfaces on the host.
So when you configure mysql to listen on 127.0.0.1, there's no way to reach it from outside of the container's networking namespace. If you need to prevent others outside of your docker host from reaching the port, configure that restriction when publishing the port on the docker run cli.
As described in the MySQL documentation (https://dev.mysql.com/doc/refman/5.7/en/connecting.html), when you connect to localhost with the client, it tries to use the Unix socket for the connection. Normally this would work fine since it's on the same host, but in Docker the socket file is not available.
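A way around the socket behaviour, besides connecting to 0.0.0.0 or 127.0.0.1 as above, is to force TCP explicitly with the client's --protocol option:
mysql --host localhost --port 3306 --protocol=TCP -u root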

access host's ssh tunnel from docker container

Using Ubuntu Trusty: there is a service running on a remote machine that I can access via port forwarding through an SSH tunnel, at localhost:9999.
I have a docker container running. I need to access that remote service via the host's tunnel, from within the container.
I tried tunneling from the container to the host with -L 9000:host-ip:9999, but then accessing the service through 127.0.0.1:9000 from within the container fails to connect. To check whether the port mapping was on, I tried
nc -luv -p 9999 # at host
nc -luv -p 9000 # at container
following this, paragraph 2, but there was no perceived communication, even when doing
nc -luv host-ip -p 9000
at the container
I also tried mapping the ports via docker run -p 9999:9000, but this reports that the bind failed because the host port is already in use (taken by the host's tunnel to the remote machine, presumably).
So my questions are
1 - How can I achieve the connection? Do I need to set up an SSH tunnel to the host, or can this be achieved with Docker port mapping alone?
2 - What's a quick way to test that the connection is up? Via bash, preferably.
Thanks.
Using your host's network as the network for your containers, via --net=host or, in docker-compose, via network_mode: host, is one option, but it has the unwanted side effects that (a) you now expose the containers' ports on your host system, and (b) you can no longer connect to those containers that are not mapped to your host network.
In your case, a quicker and cleaner solution would be to make your SSH tunnel "available" to your Docker containers (e.g. by binding ssh to the docker0 bridge) instead of exposing your Docker containers on your host network (as suggested in the accepted answer).
Setting up the tunnel:
For this to work, retrieve the ip your docker0 bridge is using via:
ifconfig
you will see something like this:
docker0 Link encap:Ethernet HWaddr 03:41:4a:26:b7:31
inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0
Now you need to tell ssh to bind to this ip to listen for traffic directed towards port 9000 via
ssh -L 172.17.0.1:9000:host-ip:9999
Without setting the bind_address, :9000 would only be available to your host's loopback interface and not per se to your docker containers.
Side note: You could also bind your tunnel to 0.0.0.0, which will make ssh listen to all interfaces.
Setting up your application:
In your containerized application use the same docker0 ip to connect to the server: 172.17.0.1:9000. Now traffic being routed through your docker0 bridge will also reach your ssh tunnel :)
For example, if you have a .NET Core application that needs to connect to a remote DB located at :9000, your ConnectionString would contain server=172.17.0.1,9000;.
Forwarding multiple connections:
When dealing with multiple outgoing connections (e.g. a Docker container that needs to connect to multiple remote DBs via tunnels), several valid techniques exist, but an easy and straightforward way is simply to create multiple tunnels listening for traffic arriving at different docker0 bridge ports.
Within your SSH tunnel command (ssh -L [bind_address:]port:host:hostport [user@]hostname), the port part of the bind_address does not have to match the hostport of the host, and can therefore be freely chosen by you. So within your Docker containers just channel the traffic to different ports of your docker0 bridge, and then create several SSH tunnel commands (one for each port you are listening to) that intercept data at these ports and forward it to the different hosts and hostports of your choice.
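For example, two tunnels for two hypothetical database hosts could look like this (all names are placeholders):
ssh -L 172.17.0.1:9001:localhost:5432 user@db1.example.com
ssh -L 172.17.0.1:9002:localhost:5432 user@db2.example.com
Your containers then connect to 172.17.0.1:9001 and 172.17.0.1:9002 respectively.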
on MacOS (tested in v19.03.2),
1) create a tunnel on host
ssh -i key.pem username@jump_server -L 3336:mysql_host:3306 -N
2) from the container, you can use host.docker.internal, docker.for.mac.localhost, or docker.for.mac.host.internal to reference the host.
example,
mysql -h host.docker.internal -P 3336 -u admin -p
Note from the official docker-for-mac docs:
I WANT TO CONNECT FROM A CONTAINER TO A SERVICE ON THE HOST
The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purposes and will not work in a production environment outside of Docker Desktop for Mac.
The gateway is also reachable as gateway.docker.internal.
I think you can do it by adding --net=host to your docker run. But see also this question: Forward host port to docker container
I'd like to share my solution to this. My case was as follows: I had a PostgreSQL SSH tunnel on my host and I needed one of my containers from the stack to connect to a database through it.
I spent hours trying to find a solution (Ubuntu + Docker 19.03) and failed. Instead of doing voodoo magic with iptables or modifying the settings of the Docker engine itself, I came up with a solution and was shocked I hadn't thought of it earlier. The most important constraint was that I didn't want to use host mode: security first.
Instead of trying to allow a container to talk to the host, I simply added another service to the stack, which creates the tunnel, so the other containers can talk to it easily, without any hacks.
After configuring a host inside my ~/.ssh/config:
Host project-postgres-tunnel
    HostName remote.server.host
    User sshuser
    Port 2200
    ForwardAgent yes
    TCPKeepAlive yes
    ConnectTimeout 5
    ServerAliveCountMax 10
    ServerAliveInterval 15
And adding a service to the stack:
postgres:
  image: cagataygurturk/docker-ssh-tunnel:0.0.1
  volumes:
    - $HOME/.ssh:/root/ssh:ro
  environment:
    TUNNEL_HOST: project-postgres-tunnel
    REMOTE_HOST: localhost
    LOCAL_PORT: 5432
    REMOTE_PORT: 5432
  # uncomment if you wish to access the tunnel on the host
  #ports:
  #  - 5432:5432
The PHP container started talking through the tunnel without any problems:
postgresql://user:password@postgres/db?serverVersion=11&charset=utf8
Just remember to put your public key inside that host if you haven't already:
ssh-copy-id project-postgres-tunnel
I'm pretty sure this will work regardless of the OS used (MacOS / Linux).
I agree with @hlobit that @B12Toaster's answer should be the accepted answer.
In case anyone hits this problem but with a slightly different SSH tunnel setup, here are my findings. In my case, instead of creating a forward tunnel from the Docker host machine to the remote machine with ssh -L, I was creating a remote forward SSH tunnel from the remote machine to the Docker host machine with ssh -R.
In this setup, by default sshd does NOT allow gateway ports, i.e. in the file /etc/ssh/sshd_config on the Docker host, GatewayPorts no should be uncommented and set to GatewayPorts yes or GatewayPorts clientspecified. I configured GatewayPorts clientspecified and set up the remote forward SSH tunnel with ssh -R 172.17.0.1:dockerHostPort:localhost:sshClientPort user@dockerHost. Remember to restart sshd after changing /etc/ssh/sshd_config (sudo systemctl restart sshd).
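The relevant line in /etc/ssh/sshd_config on the Docker host then reads:
GatewayPorts clientspecified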
Your Docker container should then be able to connect to the Docker host on 172.17.0.1:dockerHostPort, and this in turn gets tunnelled back to the SSH client's localhost:sshClientPort.
References:
https://www.ssh.com/ssh/tunneling/example
https://docs.docker.com/network/network-tutorial-host/
https://docs.docker.com/network/host/
My 2 cents for Ubuntu 18.04: a very simple answer, with no need for extra tunnels, extra containers, extra Docker options, or exposing the host.
Simply, when creating a reverse tunnel, make sure ssh binds to all interfaces, since by default it binds the reverse tunnel's ports to localhost only. For example, in PuTTY make sure the option Connection -> SSH -> Tunnels, "Remote ports do the same (SSH-2 only)", is ticked.
This is more or less equivalent to specifying the binding address 0.0.0.0 for the remote part of the tunnel (more details here):
-R [bind_address:]port:host:hostport
However, this did not work for me unless I allowed the GatewayPorts option in my sshd server configuration. Many thanks to Stefan Seidel for his great answer.
In short: (1) you bind the reverse tunnel to 0.0.0.0, and (2) you let the sshd server accept such tunnels.
Once this is done I can access my remote server from my docker containers via the docker gateway 172.17.0.1 and port bound to the host.
On my side, running Docker in Windows Subsystem for Linux (WSL v1), I couldn't use the docker0 connection approach. host.docker.internal also doesn't resolve (latest Docker version).
However, I found out I could directly use the host IP inside my Docker container.
Get your host IP (Windows cmd: ipconfig), e.g. 192.168.0.5
Bash into your container and test whether you can ping your host IP:
- docker exec -it d6b4be5b20f7 /bin/bash
- apt-get update && apt-get install iputils-ping
- ping 192.168.0.5
PING 192.168.0.5 (192.168.0.5) 56(84) bytes of data.
64 bytes from 192.168.0.5: icmp_seq=1 ttl=37 time=2.17 ms
64 bytes from 192.168.0.5: icmp_seq=2 ttl=37 time=1.44 ms
64 bytes from 192.168.0.5: icmp_seq=3 ttl=37 time=1.68 ms
Apparently, on Windows, you can directly connect from within containers to the host using its regular LAN IP.
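Any service published on the host should then be reachable the same way, e.g. a hypothetical web server on port 9000:
curl http://192.168.0.5:9000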
In case anyone needs it (like I did), the solution for Windows and WSL is the same as the one @prayagupd mentioned for macOS:
Create an SSH tunnel to your remote service with whatever tool you prefer to whatever port you prefer, for example 3300.
Then, from a Docker container, you can connect to, for example, a MySQL DB on tunnel port 3300 using the following command:
mysql -u user -p -h host.docker.internal -P 3300
An easy example to reproduce the situation and SSH into the host:
Run a container with --network="host":
docker container run --network="host" --interactive --tty --rm ubuntu bash
Now you can access your host using localhost.
Suppose your host machine is a Linux machine with a public/private key file for SSH access. Copy the contents of your private key file and recreate the key file inside the container. (This is just a demonstration; it is not a good way to handle key files.)
Now ssh into your host. Use localhost to access it.
ssh -i key_file.pem ec2-user@localhost

How to access a webserver running on localhost from a docker container on a network?

I have the following system configuration:
Docker container running on user defined network
docker-machine (with VirtualBox on OS X, forwarding port 9000 to 9000)
Local webserver running on http://localhost:9000
I do not know how to make a basic http request against this webserver, from within my docker container.
To test this I am using:
docker exec testcontainer curl --data "foobaz=foo" http://{hostname}:9000/
where I have tried, for hostnames:
'localhost'
'127.0.0.1'
'192.168.99.100' (docker-machine IP)
Each time I receive errors or timeouts. When I run the curl command locally (not in Docker, on my host OS X machine), I am able to successfully post the HTTP request.
I cannot disconnect the docker container from my user-defined network. I also cannot add my webserver to that network, as it is not running in a container. Also, I know it is trivial to connect the other way (curl to a webserver running in a docker container) but this is not my use case.
How can I successfully route that http request from the docker container which is part of a user defined network to my localhost webserver?
You can do this with the actual IP address of your local computer.
So for example, if your en0 IP is 10.100.20.32 on your host OS, you can run:
docker exec testcontainer curl --data "foobaz=foo" http://10.100.20.32:9000/
which will successfully allow you to make the http requests.
Note that if you are doing this from a container on the host docker network, this is trivial, as you can directly access localhost or 0.0.0.0 without having to use the actual machine IP.
You might want to check what the IP address of the container is - you can find this out by running docker inspect.
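For example (a sketch; testcontainer is the container name from the question):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' testcontainer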
However, if you want to access the server process running in your container via the docker-machine IP, then you should "expose" port 9000 that your contained app is listening on (using the Dockerfile). You will still need to figure out which host port container port 9000 is mapped to (this shows up when you list your containers via $ docker ps). You can also specify the port binding on the command line when starting your container, like this: $ docker run -p 9000:9000 <your-container>.
