Weblogic + Docker + Vagrant = Connection Issue

first time poster, but have been very impressed with this community. I've spent an embarrassing amount of time this week trying to resolve this issue - there doesn't seem to be much info on the net & I am stuck. Thanks in advance for any insights!
I am moving an existing WLS application into Docker. The goal is a repeatable dev environment with WLS inside a container and those containers running inside Vagrant (a custom RHEL 6.5 VirtualBox box).
I configured and started the WLS container, and I am able to access WLS services from the container on the VM. However, when I try to access the container from the host, I receive a connection timeout error.
I am running a private network 10.10.10.41 on Vagrant with port forwarding 7771:7001 - if I access that IP:Port (as I normally would when running a service within Vagrant), I get a connection refused.
I am able to run WLS "natively" from the VM and access it from the host successfully. I am also able to run Apache containers from within the VM and access them from the host successfully. So the issue appears specific to WLS running inside a container in the VM.
I turned off the firewall on the VM, which I've read is a common issue with Vagrant + Docker.
I have a whole host of information to share, but rather than drink from the firehose I will start out with a couple pieces. Happy to attach any further info as necessary. Thanks again!
Vagrantfile
config.vm.network "private_network", ip: "10.10.10.41"
config.vm.network :forwarded_port, host: 7771, guest: 7001
Dockerfile
EXPOSE 7001
docker run
docker run -d -p 7001:7001 -v /my/release:/domain/release --name "wladmin" --link wlmanaged:wlmanaged my/wladmin
Container IP
docker inspect -f '{{ .NetworkSettings.IPAddress }}' wladmin
172.17.0.13
nmap VM (localhost)
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000044s latency).
Other addresses for localhost (not scanned): 127.0.0.1
Not shown: 997 closed ports
PORT STATE SERVICE
22/tcp open ssh
25/tcp open smtp
111/tcp open rpcbind
nmap VM (Vagrant private network IP)
Nmap scan report for 10.10.10.41
Host is up (0.000053s latency).
Not shown: 998 closed ports
PORT STATE SERVICE
22/tcp open ssh
111/tcp open rpcbind
nmap WLS Docker Container
Nmap scan report for my.domain.com (172.17.0.11)
Host is up (0.000055s latency).
Not shown: 998 closed ports
PORT STATE SERVICE
7001/tcp open afs3-callback
7002/tcp open afs3-prserver

I found the root cause & wanted to share back.
It turns out that because Vagrant uses a private network adapter, the container's port mapping has to be bound to that adapter's IP:
docker run -d -p 10.10.10.41:7001:7001 -v /my/release:/domain/release --name "wladmin" --link wlmanaged:wlmanaged my/wladmin
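To verify from the host, a quick check (assuming the 10.10.10.41 private IP and 7771:7001 forward from the Vagrantfile above, and that the WLS console is deployed at /console):
# via the private network IP
curl -I http://10.10.10.41:7001/console
# or via the Vagrant forwarded port
curl -I http://localhost:7771/console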

Related

From a container running on Docker for Windows, how can I access a port on the host?

I'm running a CentOS-based container on Docker for Windows and trying to connect to an http service running on port 8545 of my host environment.
I've tried this, attempting a variety of suspected host names and IP addresses:
curl http://localhost:8545
But the error message I get is "curl: (7) Failed connect to localhost:8545; Connection refused"
How should I figure out what IP Address to use? Is there anything I need to configure as far as allowing the port number to be accessed from inside the container?
I don't think localhost works with Docker for Windows yet.
There are a few things you can try. First, you can add EXPOSE <portnumber> in the Dockerfile so the container will listen on that port. You can also use docker run with -p 8545:8545, which maps the container's port to the host's.
To get the IP address of the container you can use:
docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" containername
You can access the host using its IP, but localhost/127.0.0.1 won't work (they will resolve to the Linux VM that is part of Docker for Windows). If you use the default network settings, your host should be reachable on 10.0.75.1 from your container.
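For example, from inside the container (assuming the default Docker for Windows network settings and the port 8545 service from the question):
# inside the container: reach the host via the default NAT gateway address
curl http://10.0.75.1:8545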

access host's ssh tunnel from docker container

Using Ubuntu trusty, there is a service running on a remote machine that I can access via port forwarding through an ssh tunnel at localhost:9999.
I have a docker container running. I need to access that remote service via the host's tunnel, from within the container.
I tried tunneling from the container to the host with -L 9000:host-ip:9999, but accessing the service through 127.0.0.1:9000 from within the container fails to connect. To check whether the port mapping was up, I tried
nc -luv -p 9999 # at host
nc -luv -p 9000 # at container
following this (paragraph 2), but there was no perceived communication, even when doing
nc -luv host-ip -p 9000
at the container
I also tried mapping the ports via docker run -p 9999:9000, but this reports that the bind failed because the host port is already in use (presumably by the host's tunnel to the remote machine).
So my questions are
1 - How will I achieve the connection? Do I need to setup an ssh tunnel to the host, or can this be achieved with the docker port mapping alone?
2 - What's a quick way to test that the connection is up? Via bash, preferably.
Thanks.
Using your host's network as the network for your containers, via --net=host or in docker-compose via network_mode: host, is one option, but it has the unwanted side effects that (a) you now expose the container's ports on your host system and (b) you can no longer connect to containers that are not mapped to your host network.
In your case, a quicker and cleaner solution is to make your ssh tunnel "available" to your docker containers (e.g. by binding ssh to the docker0 bridge) instead of exposing your docker containers on your host network (as suggested in the accepted answer).
Setting up the tunnel:
For this to work, retrieve the IP your docker0 bridge is using via:
ifconfig
you will see something like this:
docker0 Link encap:Ethernet HWaddr 03:41:4a:26:b7:31
inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0
Now you need to tell ssh to bind to this IP to listen for traffic directed at port 9000:
ssh -L 172.17.0.1:9000:host-ip:9999 user@remote-host
Without setting the bind_address, :9000 would only be available to your host's loopback interface and not per se to your docker containers.
Side note: You could also bind your tunnel to 0.0.0.0, which will make ssh listen on all interfaces.
Setting up your application:
In your containerized application, use the same docker0 IP to connect to the server: 172.17.0.1:9000. Traffic routed through your docker0 bridge will now also reach your ssh tunnel :)
For example, if you have a .NET Core application that needs to connect to a remote DB located at :9000, your "ConnectionString" would contain "server=172.17.0.1,9000;".
Forwarding multiple connections:
When dealing with multiple outgoing connections (e.g. a docker container needs to connect to multiple remote DB's via tunnel), several valid techniques exist but an easy and straightforward way is to simply create multiple tunnels listening to traffic arriving at different docker0 bridge ports.
Within your ssh tunnel command (ssh -L [bind_address:]port:host:hostport [user@]hostname), the port part of the bind_address does not have to match the hostport of the host and can therefore be freely chosen. So within your docker containers, just channel the traffic to different ports of your docker0 bridge and then create several ssh tunnel commands (one for each port you are listening on) that intercept data at these ports and forward it to the different hosts and hostports of your choice.
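A sketch with two hypothetical database hosts (db1.example.com and db2.example.com, both listening on their local 5432), multiplexed over different docker0 bridge ports:
# container traffic to 172.17.0.1:9001 is forwarded to db1's local 5432
ssh -N -L 172.17.0.1:9001:localhost:5432 user@db1.example.com
# container traffic to 172.17.0.1:9002 is forwarded to db2's local 5432
ssh -N -L 172.17.0.1:9002:localhost:5432 user@db2.example.com
(-N keeps the session as a tunnel only, without running a remote command.)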
On macOS (tested with v19.03.2),
1) create a tunnel on the host
ssh -i key.pem username@jump_server -L 3336:mysql_host:3306 -N
2) from the container, you can use host.docker.internal, docker.for.mac.localhost, or docker.for.mac.host.internal to reference the host.
For example:
mysql -h host.docker.internal -P 3336 -u admin -p
note from the docker-for-mac official doc:
I WANT TO CONNECT FROM A CONTAINER TO A SERVICE ON THE HOST
The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker Desktop for Mac.
The gateway is also reachable as gateway.docker.internal.
I think you can do it by adding --net=host to your docker run. But see also this question: Forward host port to docker container
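For example, a minimal sketch of that approach (note it removes network isolation: the container shares the host's network stack):
docker run --net=host -it ubuntu bash
# inside the container, the host's loopback is your loopback:
curl http://localhost:9999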
I'd like to share my solution to this. My case was as follows: I had a PostgreSQL SSH tunnel on my host and I needed one of my containers from the stack to connect to a database through it.
I spent hours trying to find a solution (Ubuntu + Docker 19.03) and failed. Instead of doing voodoo magic with iptables or modifying the settings of the Docker engine itself, I came up with a solution and was shocked I hadn't thought of it earlier. The most important thing was that I didn't want to use host mode: security first.
Instead of trying to allow a container to talk to the host, I simply added another service to the stack which creates the tunnel, so the other containers can talk to it easily, without any hacks.
After configuring a host inside my ~/.ssh/config:
Host project-postgres-tunnel
    HostName remote.server.host
    User sshuser
    Port 2200
    ForwardAgent yes
    TCPKeepAlive yes
    ConnectTimeout 5
    ServerAliveCountMax 10
    ServerAliveInterval 15
And adding a service to the stack:
postgres:
  image: cagataygurturk/docker-ssh-tunnel:0.0.1
  volumes:
    - $HOME/.ssh:/root/ssh:ro
  environment:
    TUNNEL_HOST: project-postgres-tunnel
    REMOTE_HOST: localhost
    LOCAL_PORT: 5432
    REMOTE_PORT: 5432
  # uncomment if you wish to access the tunnel on the host
  #ports:
  #  - 5432:5432
The PHP container started talking through the tunnel without any problems:
postgresql://user:password@postgres/db?serverVersion=11&charset=utf8
Just remember to put your public key inside that host if you haven't already:
ssh-copy-id project-postgres-tunnel
I'm pretty sure this will work regardless of the OS used (MacOS / Linux).
I agree with @hlobit that @B12Toaster's answer should be the accepted answer.
In case anyone hits this problem with a slightly different SSH tunnel setup, here are my findings. In my case, instead of creating a tunnel from the Docker host machine to the remote machine using ssh -L, I was creating a remote forward SSH tunnel from the remote machine to the Docker host machine using ssh -R.
In this setup, sshd does NOT allow gateway ports by default, i.e. in the file /etc/ssh/sshd_config on the Docker host, GatewayPorts no should be uncommented and set to GatewayPorts yes or GatewayPorts clientspecified. I configured GatewayPorts clientspecified and set up the remote forward SSH tunnel with ssh -R 172.17.0.1:dockerHostPort:localhost:sshClientPort user@dockerHost. Remember to restart sshd after changing /etc/ssh/sshd_config (sudo systemctl restart sshd).
Your Docker container should be able to connect to Docker host on 172.17.0.1:dockerHostPort and this in turn gets tunnelled back to SSH client's localhost:sshClientPort.
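Put together, a sketch of that setup (dockerHostPort, sshClientPort, and dockerHost are placeholders from the description above):
# on the Docker host, in /etc/ssh/sshd_config:
#   GatewayPorts clientspecified
sudo systemctl restart sshd

# on the remote machine: bind the remote forward to the docker0 gateway address
ssh -R 172.17.0.1:dockerHostPort:localhost:sshClientPort user@dockerHost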
References:
https://www.ssh.com/ssh/tunneling/example
https://docs.docker.com/network/network-tutorial-host/
https://docs.docker.com/network/host/
My 2 cents for Ubuntu 18.04 - a very simple answer, no need for extra tunnels, extra containers, extra docker options or exposing host.
Simply, when creating a reverse tunnel, make sure ssh binds to all interfaces, as by default it binds the ports of the reverse tunnel to localhost only. For example, in PuTTY make sure the option Connection -> SSH -> Tunnels -> "Remote ports do the same (SSH-2 only)" is ticked.
This is more or less equivalent to specifying the binding address 0.0.0.0 for the remote part of the tunnel (more details here):
-R [bind_address:]port:host:hostport
However, this did not work for me unless I allowed the GatewayPorts option in my sshd server configuration. Many thanks to Stefan Seidel for his great answer.
In short: (1) you bind the reverse tunnel to 0.0.0.0, (2) you let the sshd server accept such tunnels.
Once this is done I can access my remote server from my docker containers via the docker gateway 172.17.0.1 and port bound to the host.
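In OpenSSH terms, the PuTTY option corresponds to giving the remote forward a 0.0.0.0 bind address; a sketch with placeholder ports and hostname:
# requires GatewayPorts yes (or clientspecified) in the server's sshd_config
ssh -R 0.0.0.0:9999:localhost:9999 user@docker-host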
On my side, running Docker in Windows Subsystem for Linux (WSL v1), I couldn't use the docker0 connection approach. host.docker.internal also doesn't resolve (as of the latest Docker version).
However, I found out I could directly use the host IP inside my docker container.
Get your Host IP (Windows cmd: ipconfig), e.g. 192.168.0.5
Bash into your container and test whether you can ping your host IP:
docker exec -it d6b4be5b20f7 /bin/bash
apt-get update && apt-get install iputils-ping
ping 192.168.0.5
PING 192.168.0.5 (192.168.0.5) 56(84) bytes of data.
64 bytes from 192.168.0.5: icmp_seq=1 ttl=37 time=2.17 ms
64 bytes from 192.168.0.5: icmp_seq=2 ttl=37 time=1.44 ms
64 bytes from 192.168.0.5: icmp_seq=3 ttl=37 time=1.68 ms
Apparently, in Windows, you can directly connect from within containers to the host using the host's IP.
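So any service listening on the host can be reached the same way; for example (8545 is just a stand-in for your service's port):
# inside the container: connect straight to the Windows host's LAN IP
curl http://192.168.0.5:8545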
In case anyone needs it (like I did), the solution for Windows and WSL is the same as @prayagupd mentioned for macOS:
Create an SSH tunnel to your remote service with whatever tool you prefer to whatever port you prefer, for example 3300.
Then, from the Docker container, you can connect to, for example, a MySQL DB on tunnel port 3300 using the following command:
mysql -u user -p -h host.docker.internal -P 3300
An easy example to reproduce the situation and ssh to the host:
Run a container with --network="host":
docker container run --network="host" --interactive --tty --rm ubuntu bash
Now you can access your host using localhost.
Say your host machine is a Linux machine with a public/private key pair for sshing into it. Copy the contents of your private key file and recreate the key file inside the container. (This is just a demonstration; it is not a good way to handle key files.)
Now ssh into your host, using localhost to reach it.
ssh -i key_file.pem ec2-user#localhost

How to Ssh into docker container that is running inside Vagrant?

I am running a Vagrant VM under Windows 7. The Vagrant VM is running a docker container. So the configuration is:
Windows7[Vagrant[Docker]]
I want to ssh from Windows into the Docker container.
The docker container is running sshd and I can successfully ssh from Vagrant VM to Docker container.
sudo docker ps
gives:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
64b13daab5f2 ubuntu:12.04 "/bin/bash" 14 minutes ago Up 14 minutes 0.0.0.0:49153->22/tcp thirsty_morse
From the Vagrant VM:
ssh root#localhost -p 49153
works just fine. So Vagrant VM's port 49153 is forwarded to Docker container's port 22.
I've added
config.vm.network "forwarded_port", guest:49153, host:49155
to the Vagrantfile so that localhost:49155 on Windows is forwarded to port 49153 on the Vagrant VM.
This is where things break down. When I try to ssh from Windows to localhost:49155, I get:
ssh: connect to host localhost port 49155: Connection refused
So Windows:49155 -> Vagrant:49153 is not working. I thought it might be a problem with listening on the Vagrant VM's external IP, so I installed rinetd in the Vagrant VM and configured:
bindadress bindport connectaddress connectport
0.0.0.0 49153 127.0.0.1 49153
Still no luck. What am I missing here?
OK, answering my own question. It works now. I think the most likely reason for the problem was that ports 49153/49155 and their neighbours are actually used by some Windows services by default. I changed the port mapping in the Vagrantfile to use 9090 on the Windows side and everything worked. No need for rinetd either. I've also done:
sudo docker run -v /vagrant:/opt/data -p 0.0.0.0:49153:22 -i -t ubuntu:12.04
Notice the 0.0.0.0: it may or may not be relevant but this configuration is working for me.
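For reference, the remapped forward in the Vagrantfile would look something like this (a sketch based on the ports above):
config.vm.network "forwarded_port", guest: 49153, host: 9090
After a vagrant reload, ssh root@localhost -p 9090 from Windows should then reach the container's sshd.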

Empty reply from server - can't connect to vagrant vm w/port forwarding

I'm running werkzeug (as part of a Tilestache setup) inside a Vagrant VM running Ubuntu 'precise'.
In my Vagrantfile, I have:
config.vm.network :forwarded_port, guest: 8080, host: 8080
When I start the server in the VM, I see:
* Running on http://127.0.0.1:8080/
If I curl that address from within the VM, I get the expected result. When I curl it from the host machine, I get:
curl: (52) Empty reply from server
And Chrome says "No data received."
Troubleshooting info:
The server responds to pings from the host machine
a port sniffer verifies that the port is open
running netstat -ntlp | grep 8080 in the vm shows that the server is listening on 8080
My local hostsfile doesn't have any weird conflicts
I'm also forwarding 22 => 2222, and I can ssh in with no trouble
I've disabled the firewall on the host, and i don't believe there's one on the guest (iptables and ufw are disabled, at least)
I've set auto_correct: true in case there are conflicts (there aren't)
I know I could set up a private network, but I'd like to understand why this isn't working and how to troubleshoot it.
Any other ideas?
When running a server from within a VM, start the server on 0.0.0.0 instead of 127.0.0.1.
127.0.0.1 is only accessible to the local machine, which for a VM means nothing outside of the VM can reach it! 0.0.0.0 is accessible from anywhere on the local network, which to a VM includes the host machine.
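For werkzeug that just means passing 0.0.0.0 as the host when the server starts; a minimal sketch (the myapp module and app object are hypothetical stand-ins for however your server is launched):
# hypothetical entry point: bind to all interfaces instead of loopback
python -c "from werkzeug.serving import run_simple; from myapp import app; run_simple('0.0.0.0', 8080, app)"
# from the host, via the forwarded port:
curl http://localhost:8080/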
The answer came from here: Connection Reset when port forwarding with Vagrant
(Which apparently got its answer from here: https://stackoverflow.com/a/5999945/738675)
With help from: https://serverfault.com/questions/78048/whats-the-difference-between-ip-address-0-0-0-0-and-127-0-0-1
Google-bait:
Here are the errors you might receive if this is the problem:
Chrome: "No data received"
Firefox: "The connection was reset - The connection to the server was reset while the page was loading."
Safari: "Safari can’t open the page [URL] because the server unexpectedly dropped the connection"
curl: "Empty reply from server"
In /etc/hosts inside the VM, change the line 127.0.0.1 localhost to 0.0.0.0 localhost, then restart the server.
This can also be a problem with your firewall on the vagrant machine. If you can curl the address while on the vagrant box, then check your firewalld settings or turn it off:
on CENTOS:
sudo service firewalld stop
Then you should update your firewalld settings and restart it ;)
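Rather than turning firewalld off entirely, you can open just the forwarded port; a sketch for the 8080 example above:
# permanently allow the forwarded port, then reload the rules
sudo firewall-cmd --permanent --add-port=8080/tcp
sudo firewall-cmd --reload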

Docker container - how to configure so it gets a viable IP address when running in vagrant?

Docker (www.docker.io) looks terrific. However, after installing VirtualBox, Vagrant, and finally Docker on a Mac, I'm finding it's not possible to access the service running in the Docker container from another computer (or from a terminal session on the Mac). The service I'm trying to access is Redis.
The problem appears to be that there's no route to the IP address assigned to the Docker container. In this case the container's IP is 172.16.42.2 while the Mac's IP is 196.168.0.3.
A couple notes:
It IS possible to access it - but only from within the VirtualBox session. This can be done using redis-cli -h 172.16.42.2 -p 6379.
I have added "config.vm.network :bridged" to the VagrantFile in an attempt to get the, but that didn't solve the problem.
The VM generated by Vagrant is indeed isolated; in order to access it from your host, you can allocate a private network to it.
Instead of config.vm.network :bridged, try config.vm.network :private_network, ip: "192.168.50.4". It should do the trick.
However, this will only allow you to access the VM itself, not the containers.
In order to do so, when running the container, you can add the -p option
ex: docker run -d -p 8989 base nc -lkp 8989
This will run a netcat listening on 8989 within a container and expose the port publicly. As it is also run with -d, the container will be in detached mode and the only output will be the container's ID
In order to expose the port, Docker does simple NAT. In order to find out the real port, you can do
docker port <ID of the container> 8989
Netcat will be available from the Mac at 192.168.50.4:<result>.
I just wrote a tutorial on how to use a host-only network and TCP routing to make this pretty easy. This way you don't have to map every specific port.
http://ispyker.blogspot.com/2014/04/accessing-docker-container-private.html
Important points ...
1) Add host-only network to Virtual Box
2) Tell the boot2docker VM to have an adapter on the host-only network
3) Add an IP for the new boot2docker VM host-only networking adapter
4) Route all Mac OS X traffic for the docker container subnet to that boot2docker VM host-only networking IP
Actual steps are on the blog with output so you can compare to what you see as you follow them.
I installed Tomcat from my Dockerfile and forwarded it to 6060 using Vagrant's port forwarding. These are the steps that worked for me:
vagrant provision
vagrant up
vagrant ssh
box_name$ docker run -i -t -p 8080:8080 bsb_tomcat6 /bin/bash
I was able to see Tomcat up and running on localhost:6060, since I had set up port forwarding to 6060 in my Vagrantfile.
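The Vagrantfile forward implied by that setup would be something like this (a sketch matching the 8080:8080 docker mapping above):
config.vm.network :forwarded_port, guest: 8080, host: 6060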
You can also define the PRIVATE_NETWORK and FORWARD_DOCKER_PORTS environment variables to access services that are running in docker containers:
$ vagrant halt
$ export PRIVATE_NETWORK=192.168.50.4
$ export FORWARD_DOCKER_PORTS=1
$ vagrant up
In my case I can access postgres from the Mac using
$ telnet 192.168.50.4 49154
To find out the actual application port you can use
$ sudo docker port 1854499c6547 5432
0.0.0.0:49154
