Selenium standalone server is accessible only on localhost, not on my IPs - macOS

SSS == 'Selenium Standalone Server'
I've got:
SSS installed via Homebrew
a couple of downloaded versions of SSS
macOS High Sierra with these IPs:
192.168.0.1
172.18.0.1 - IP for Docker
localhost/127.0.0.1
I have turned off my firewall
I run this server in one of the following ways:
selenium-server -port 4444
java -jar selenium-server-standalone-3.8.1.jar -port 4444
After starting, I get these logs:
2017-12-22 12:34:23.280:INFO:osjs.AbstractConnector:main: Started ServerConnector#210ab13f{HTTP/1.1,[http/1.1]}{0.0.0.0:4444}
so, as I understand it, 0.0.0.0 means it listens on all IPs
BUT
I cannot connect to this server using IP 192.168.0.1 or 172.18.0.1.
Only localhost works for connecting to this server.
netstat doesn't display port 4444 as open.
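For what it's worth, netstat on macOS takes different flags than on Linux, so the listener is worth cross-checking with lsof as well (both commands below assume the server is running):
netstat -an | grep 4444
sudo lsof -nP -iTCP:4444 -sTCP:LISTEN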
When I do the same on Ubuntu 16.04 it works great: I can create a new session using any of the addresses, including from a Docker container.
Can you tell me what I'm doing wrong or what I don't know?

Related

How to Use sshuttle on Windows WSL2

We have a Jenkins server which is accessible only from within the VPC on the cloud. On Mac and Linux I use sshuttle to make an ssh connection to the bastion instance (to act as a proxy) and open the Jenkins console in the browser. Everything works fine.
Now I'm on Windows and trying to do the same on WSL2. If I'm not mistaken, sshuttle previously didn't work on WSL1 (it failed with some error message), but I managed to run it on WSL2 without any issue. The ssh connection is established and I can access my Jenkins (using curl).
Then I tried to access my Jenkins on Windows via WSL2:
1. I found the IP address of WSL2 and the port of the ssh tunnel:
# lsof -i -n | grep ssh
sshuttle 1234 rad 5u IPv4 39270 0t0 TCP *:socks (LISTEN)
ssh 5678 rad 3u IPv4 40252 0t0 TCP 172.25.236.84:57578->bastion:ssh (ESTABLISHED)
2. I configured the network proxy settings of Firefox (v77) to use my ssh tunnel:
Manual proxy configuration
SOCKS host: 172.25.236.84
Port: 1080
SOCKS V5 (tested with V4 as well)
But loading the page fails with "The connection was reset" error on Firefox. I tested via Powershell that the SOCKS port is open and responding (using Test-NetConnection).
1. Any idea what the problem is? How to make it work?
2. If it's not gonna work, is there any other solution (e.g. Docker, etc)?
Thanks.
I'm not sure, but my guess is that sshuttle doesn't actually act as a SOCKS proxy and that's why the connection gets reset.
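For context, sshuttle normally acts as a transparent proxy that routes whole subnets over ssh rather than exposing a general-purpose SOCKS endpoint; a typical invocation (the subnet below is only an example) looks like:
sshuttle -r rad@bastion 10.0.0.0/8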
I managed to access my Jenkins on the Windows machine using an ssh SOCKS proxy: ssh -D 0.0.0.0:1080 rad@bastion, and configured Firefox to use the SOCKS proxy.
Interestingly, for this you don't even need WSL. Windows 10 ships with OpenSSH, so you can just open CMD, type ssh -D 1080 rad@bastion, and set up Firefox to use localhost as the proxy.
If there's any better solution or any comment/concern (apart from DNS over SOCKS) with this approach, please share.
Thanks.
As an alternative, on WSL(2) you can run a regular SSH tunnel.
E.g.:
ssh -N -L 127.0.0.1:5432:some_domain_to_forward:5432 user@jumpbox_ip
and then just connect to 127.0.0.1:5432
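To confirm the tunnel is actually listening inside WSL2 before pointing a client at it (assuming the usual iproute2 and netcat tools are installed):
ss -tln | grep 5432
nc -zv 127.0.0.1 5432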

Connecting to PostgreSQL installed in Docker inside Hyper-V Ubuntu from Windows 10 pgAdmin

I need help connecting to PostgreSQL, which is installed in Docker inside a Hyper-V Ubuntu 18.04 VM, from pgAdmin on Windows 10. So far I have tried the following:
Step 1: Install Postgres in Docker (Ubuntu running on Hyper-V)
sudo docker run -p 5432:5432 --name pg_test -e POSTGRES_PASSWORD=admin -d postgres
Step 2: Create a database
docker exec -it pg_test bash
psql -U postgres
create database mytestdb;
Step 3: Get the ip address
sudo docker inspect pg_test | grep IPAddress
# returned 172.17.0.2
Step 4: pg_hba.conf
host all all 0.0.0.0/0 md5
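For reference, the fields of a pg_hba.conf entry are, in order (per the PostgreSQL documentation):
# TYPE  DATABASE  USER  ADDRESS    METHOD
host    all       all   0.0.0.0/0  md5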
Step 5: When I try to connect from pgAdmin 4 on Windows, I get a connection error.
Note: I have also tried using the Ubuntu VM's IP address, but no luck.
Yours is a case where you are trying to connect to Postgres from another subnet, i.e. from the Windows subnet to the hypervisor subnet, if you are not using a bridged network.
So case 1:
If this is NAT/host-only and not bridged, then you need to make sure you are able to ping the Ubuntu server from the Windows machine.
Next, make sure the port is open on Ubuntu's end. How do you check that? Do a telnet on the port number from a Windows cmd prompt:
telnet 192.168.0.10 5432
Case 2: if you are bridged, can ping the server, and have checked that the port is open (i.e. telnet works), then you need to make sure that in postgresql.conf
listen_addresses is set to "*", which means all interfaces.
Also, at the OS level on Ubuntu, run systemctl stop firewalld to stop the firewall and then try to connect. If this works, you need to open the port in the firewall using this command:
firewall-cmd --permanent --add-port 5432/tcp
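Two caveats: rules added with --permanent only take effect after a reload, and firewalld is typical of RHEL-family distributions; on stock Ubuntu the usual front end is ufw, so the equivalent there would be:
firewall-cmd --reload
# on Ubuntu, with ufw:
sudo ufw allow 5432/tcp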
I can see from your docker run command that 5432 is already mapped, so this is more a matter of port mapping and firewalld configuration.
You may also want to check that pg_hba.conf is not restricted to local connections. That should not be the case for the Docker image, but you never know.
See: https://www.postgresql.org/docs/9.1/auth-pg-hba-conf.html
Also, watch out for a typo: POSTGRES_PASSWOR=admin is missing a D; it should be POSTGRES_PASSWORD=admin.
You don't need the container IP. Since you have mapped the container port to the host machine (Ubuntu), anyone outside just needs the Ubuntu machine's IP, and on Ubuntu itself you can use localhost.
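A quick sanity check from any machine that can reach the VM (the IP is a placeholder; pgAdmin needs the same host/port/user values):
psql -h <ubuntu-vm-ip> -p 5432 -U postgres -d mytestdb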

Setup ssh tunnel from docker container on macOS Mojave 10.14

I am having trouble setting up an ssh tunnel on my Mac. I have no problems setting up the tunnel on my Ubuntu box. This is the command I run:
ssh -nNT -L 172.18.0.1:4000:production-database-url:3306 jump-point
When I run this on my Mac, I get the following error:
bind [172.18.0.1]:4000: Can't assign requested address
channel_setup_fwd_listener_tcpip: cannot listen to port: 4000
Could not request local forwarding.
If I run without the bind_address (172.18.0.1), I am able to connect to the database via the tunnel.
If I bind to all interfaces (0.0.0.0), the tunnel opens; however, the connection to the database from inside the docker container does not work.
172.18.0.1 is the IP of docker's default bridge network gateway, not your host's IP.
You can run this command to check that.
$ docker network inspect bridge
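If you only want the gateway address, a Go-template format string can print it directly (this assumes the default single-entry IPAM config):
docker network inspect bridge --format '{{(index .IPAM.Config 0).Gateway}}'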
Docker for Mac has limitations:
There is no docker0 bridge on macOS (it lives inside the Docker VM, on both Mac and Windows)
You cannot ping containers (without shaving a bunch of yaks)
Per-container IP addressing is not possible
Also note that this means the docker run option --net=host is not supported on Mac, but maybe that's a good thing
There is a workaround
These magic addresses resolve to the host's IP from within a container
docker.for.mac.localhost (deprecated)
docker.for.mac.host.internal (deprecated)
host.docker.internal
This resolves to the gateway of the host mac
gateway.docker.internal
Use the name host.docker.internal from within the container just like you would use localhost on the mac directly.
Don't worry about the bind address for the tunnel:
ssh -nNT -L 4000:production-database-url:3306 jump-point
You didn't mention which database, but I take it from port 3306 that it is MySQL.
To connect using the mysql CLI from within a container, via the ssh tunnel on your host (which listens on port 4000), to the remote mysql database server, you can run:
mysql --host host.docker.internal --port 4000 [... other options go here]
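Before involving the mysql client, you can confirm the path end to end from inside a container (assuming nc is available in the image):
nc -zv host.docker.internal 4000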

Can't access docker container on port 80 on OSX

In my current job we have a development environment built with docker-compose.
One container is nginx, which provides routing to the other containers.
Everything seems fine and works for my colleagues on Windows and macOS. But on my system (OS X El Capitan), there is a problem accessing the nginx container on port 80.
Here is the container's setup from docker-compose.yml:
nginx:
  build: ./dockerbuild/nginx
  ports:
    - 80:80
  links:
    - php
  volumes_from:
    - app
  ... and more
In ./dockerbuild/nginx there is nothing special, just nginx config as we know it from everywhere.
When I run everything with docker-compose create and docker-compose start, docker ps gives me:
3b296c1e4775 docker_nginx "nginx -g 'daemon off" About an hour ago Up 47 minutes 0.0.0.0:80->80/tcp, 443/tcp docker_nginx_1
But when I try to access it, for example via curl, I get an error: curl: (7) Failed to connect to localhost port 80: Connection refused
If I run the container on port 81 instead, everything works fine.
The port really is bound by Docker:
22:47 $ sudo lsof -i -n -P | grep TCP
...
com.docke 14718 schovi 38u IPv4 0x6e9c93c51ec4b617 0t0 TCP *:80 (LISTEN)
...
The firewall in OS X is turned off and I have no other security software.
If you are using docker-for-mac:
Accessing it via localhost:80 is correct, though you still have to ensure you do not have a local apache/nginx service running. Often leftovers from boxen/homebrew bind that port, because that's what developers did back then :)
If you are using dockertoolbox/virtualbox/whatever hypervisor:
You will not be able to access it via localhost, but via the docker-machine IP, so run docker-machine ip default and then use http://$ip:80 in your browser.
If that does not help:
Ensure your nginx container actually does work by connecting to the container: docker exec -i -t <containerid> bash
and then running ps aux | grep nginx, or, if telnet is installed, trying to connect to localhost.
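If the image lacks telnet, you can check from the host instead (this assumes curl exists inside the image):
docker exec <containerid> curl -sI http://localhost:80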
Solved!
The problem was that a long, long time ago I installed pow (a super simple automated Rails server which runs applications on an app_name.local domain). That beast left behind a LaunchAgent script which updates pf to forward port 80 to pow's port.
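To hunt for leftovers like this, list pf's redirect rules and your LaunchAgents (rule and file names will vary; pow's rdr entry forwarded port 80):
sudo pfctl -s nat
ls ~/Library/LaunchAgents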
In my current job we have development environment made with docker-compose.
A privilege to use.
[W]hen I try to access [nginx on port 80] for example via curl I get error.
Given there's nothing preventing you from accessing Docker on your host OS, you should look at the app running inside the container to ensure it's binding to the correct host, e.g. 0.0.0.0 and not localhost.
For example, if you're running Nuxt inside a container with nuxt-ts, note that Nuxt will default to localhost, so the app is not reachable over the docker network, whereas npx nuxt-ts -H 0.0.0.0 gets things squared away, with the container's internal server listening on the IP of the docker network used (verify the IP with something like docker container inspect d8af01990363).

WebLogic + Docker + Vagrant = Connection Issue

First-time poster, but I have been very impressed with this community. I've spent an embarrassing amount of time this week trying to resolve this issue; there doesn't seem to be much info on the net and I am stuck. Thanks in advance for any insights!
I am moving an existing WLS application into Docker. The goal is to have a repeatable dev environment with WLS inside containers and those containers running inside Vagrant (a custom RHEL 6.5 VirtualBox).
I configured and started the WLS container. I am also able to access WLS services from the container on the VM. However, when I try to access the container from the host, I receive a connection timeout error.
I am running a private network (10.10.10.41) on Vagrant with port forwarding 7771:7001. If I access that IP:port (as I normally would when running a service within Vagrant), I get a connection refused.
I am able to run WLS "natively" on the VM and access it from the host successfully. I am also able to run Apache containers within the VM and access them from the host successfully. So the issue appears specific to WLS running inside a container in the VM.
I turned off the firewall on the VM, which I've read is a common issue with Vagrant + Docker.
I have a whole host of information to share, but rather than drink from the firehose I will start out with a couple of pieces. Happy to attach any further info as necessary. Thanks again!
Vagrantfile
config.vm.network "private_network", ip: "10.10.10.41"
config.vm.network :forwarded_port, host: 7771, guest: 7001
Dockerfile
EXPOSE 7001
Dockerrun
docker run -d -p 7001:7001 -v /my/release:/domain/release --name "wladmin" --link wlmanaged:wlmanaged my/wladmin
Container IP
docker inspect -f '{{ .NetworkSettings.IPAddress }}' wladmin
172.17.0.13
nmap VM (localhost)
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000044s latency).
Other addresses for localhost (not scanned): 127.0.0.1
Not shown: 997 closed ports
PORT STATE SERVICE
22/tcp open ssh
25/tcp open smtp
111/tcp open rpcbind
nmap VM (Vagrant private network IP)
Nmap scan report for 10.10.10.41
Host is up (0.000053s latency).
Not shown: 998 closed ports
PORT STATE SERVICE
22/tcp open ssh
111/tcp open rpcbind
nmap WLS Docker Container
Nmap scan report for my.domain.com (172.17.0.11)
Host is up (0.000055s latency).
Not shown: 998 closed ports
PORT STATE SERVICE
7001/tcp open afs3-callback
7002/tcp open afs3-prserver
I found the root cause and wanted to share back.
It turns out that because Vagrant has a private network adapter, we have to bind the container to that adapter using:
docker run -d -p 10.10.10.41:7001:7001 -v /my/release:/domain/release --name "wladmin" --link wlmanaged:wlmanaged my/wladmin
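With the container bound to the private-network adapter, the service should now answer on that address from the host (the /console path is WebLogic's admin console):
curl -v http://10.10.10.41:7001/console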
