Run ipython notebook from a remote server - amazon-ec2

I'm trying to run an IPython notebook on my remote server (Ubuntu 14.04 64-bit on Amazon EC2).
I can access the IPython notebook via SSH tunnelling as described in a Coderwall blog post:
remote$ ipython notebook --no-browser --port=8889
local$ ssh -N -f -L localhost:8888:localhost:8889 remote_user@remote_host
However, I can't get simple access over HTTP as described in the official docs or this tutorial:
remote$ ipython notebook --no-browser --port=8889
When I point my local browser to http://mypublicip:8889, the connection fails without any warning.

To resolve this problem, I needed to:
Run the notebook server listening on all IP addresses by adding the CLI flag --ip=*:
remote$ ipython notebook --no-browser --ip=* --port=8889
Add an inbound rule to the Amazon EC2 instance's security group so it accepts traffic on port 8889 (an AWS CLI equivalent is sketched after the table).
+-----------------+----------+------------+-----------+
| Type            | Protocol | Port Range | Source    |
+=================+==========+============+===========+
| Custom TCP Rule | TCP      | 8889       | 0.0.0.0/0 |
+-----------------+----------+------------+-----------+
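If you prefer the command line over the AWS console, roughly the same rule can be added with the AWS CLI (a sketch; the security-group id is a placeholder for your instance's group):
local$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8889 --cidr 0.0.0.0/0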
Of course, it's now better to add authentication, as the server is listening on all IP addresses.
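For example, a minimal password setup for the IPython-era notebook server could look like this (a sketch; the config path matches IPython 2.x as shipped around Ubuntu 14.04, while newer Jupyter versions use jupyter notebook password instead):
remote$ python -c "from IPython.lib import passwd; print(passwd())"
Then paste the resulting 'sha1:...' hash into ~/.ipython/profile_default/ipython_notebook_config.py as a line like:
c.NotebookApp.password = u'sha1:<hash-from-above>'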

As Monkpit wrote below, your shell might try to glob the * character; in that case write --ip=\* instead. Explicitly setting the IP address to localhost also helped:
ipython notebook --no-browser --ip=localhost --port=7777
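Going back to the globbing note above: quoting the asterisk also works and has the same effect as escaping it:
ipython notebook --no-browser --ip='*' --port=8889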

Related

Connect to Postgres running on the Windows host from WSL2

I am trying to connect to a PostgreSQL DB installed on my Windows machine from WSL2, but I'm facing issues while connecting and would appreciate help resolving this. I have already tried the options below.
Windows PostgreSQL is running fine on port 5432.
I have the following entry in the pg_hba.conf file:
host all all 0.0.0.0/0 md5
I checked the Windows host's IP address from WSL2 with the following command and then tried telnet - still no luck:
sudo cat /etc/resolv.conf | grep nameserver | awk '{ print $2 }'
172.19.240.1
telnet 172.19.240.1 5432
I defined an inbound firewall rule for port 5432 in Windows Defender Firewall with Advanced Security - still not working.
I disabled the firewall completely - same issue :(
OS: Windows 10 Home
Error message:
Trying 172.19.240.1...
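For what it's worth, once the host IP resolves, the same lookup can be reused for a direct connection attempt from WSL2 (a sketch; it assumes the postgresql-client package is installed and that a postgres role exists):
psql -h "$(grep nameserver /etc/resolv.conf | awk '{print $2}')" -p 5432 -U postgres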

How to Use sshuttle on Windows WSL2

We have a Jenkins server which is accessible only from within the VPC on the cloud. On Mac and Linux I use sshuttle to make an SSH connection to the bastion instance (to act as a proxy) and open the Jenkins console in the browser. Everything works fine.
Now I'm on Windows and trying to do the same on WSL2. If I'm not mistaken, sshuttle previously didn't work on WSL1 (it failed with some error message), but I managed to run it on WSL2 without any issue. The SSH connection is established and I can access my Jenkins (using curl).
Then I tried to access my Jenkins on Windows via WSL2:
1. I found the IP address of WSL2 and the port of the SSH tunnel:
# lsof -i -n | grep ssh
sshuttle 1234 rad 5u IPv4 39270 0t0 TCP *:socks (LISTEN)
ssh 5678 rad 3u IPv4 40252 0t0 TCP 172.25.236.84:57578->bastion:ssh (ESTABLISHED)
2. I configured the network proxy settings of Firefox (v77) to use my SSH tunnel:
Manual proxy configuration
SOCKS host: 172.25.236.84
Port: 1080
SOCKS V5 (tested with V4 as well)
But loading the page fails with "The connection was reset" error on Firefox. I tested via Powershell that the SOCKS port is open and responding (using Test-NetConnection).
1. Any idea what the problem is? How to make it work?
2. If it's not gonna work, is there any other solution (e.g. Docker, etc)?
Thanks.
I'm not sure, but my guess is that sshuttle doesn't actually act as a SOCKS proxy and that's why the connection gets reset.
I managed to access my Jenkins from the Windows machine using an SSH SOCKS proxy: ssh -D 0.0.0.0:1080 rad@bastion, and configured Firefox to use the SOCKS proxy.
Interestingly, for this you don't even need WSL. It seems Windows 10 has OpenSSH built in and you can use it. Just open CMD, type ssh -D 1080 rad@bastion, and set up Firefox to use localhost as the proxy.
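To sanity-check the SOCKS proxy outside of Firefox, curl can be pointed at it directly (a sketch; the Jenkins URL is a placeholder for your internal hostname):
curl --socks5-hostname localhost:1080 http://jenkins.internal.example/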
If there's any better solution or any comment/concern (apart from DNS over SOCKS) with this approach, please share.
Thanks.
As an alternative, on WSL(2) you can run a regular SSH tunnel.
E.g.:
ssh -N -L 127.0.0.1:5432:some_domain_to_forward:5432 user@jumpbox_ip
and then just connect to 127.0.0.1:5432

access host's ssh tunnel from docker container

Using Ubuntu Trusty, there is a service running on a remote machine that I can access via port forwarding through an SSH tunnel from localhost:9999.
I have a docker container running. I need to access that remote service via the host's tunnel, from within the container.
I tried tunnelling from the container to the host with -L 9000:host-ip:9999, but accessing the service through 127.0.0.1:9000 from within the container fails to connect. To check whether the port mapping was working, I tried
nc -luv -p 9999 # at host
nc -luv -p 9000 # at container
following this, paragraph 2, but there was no perceived communication, even when doing
nc -luv host-ip -p 9000
at the container
I also tried mapping the ports via docker run -p 9999:9000, but this reports that the bind failed because the host port is already in use (by the host's tunnel to the remote machine, presumably).
So my questions are
1 - How will I achieve the connection? Do I need to setup an ssh tunnel to the host, or can this be achieved with the docker port mapping alone?
2 - What's a quick way to test that the connection is up? Via bash, preferably.
Thanks.
Using your host's network as the network for your containers, via --net=host or in docker-compose via network_mode: host, is one option, but it has the unwanted side effects that (a) you now expose the container's ports on your host system and (b) you cannot connect anymore to those containers that are not mapped to your host network.
In your case, a quick and cleaner solution would be to make your ssh tunnel "available" to your docker containers (e.g. by binding ssh to the docker0 bridge) instead of exposing your docker containers in your host environment (as suggested in the accepted answer).
Setting up the tunnel:
For this to work, retrieve the ip your docker0 bridge is using via:
ifconfig
you will see something like this:
docker0   Link encap:Ethernet  HWaddr 03:41:4a:26:b7:31
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
Now you need to tell ssh to bind to this IP to listen for traffic directed towards port 9000, via
ssh -L 172.17.0.1:9000:host-ip:9999
Without setting the bind_address, :9000 would only be available to your host's loopback interface and not per se to your docker containers.
Side note: You could also bind your tunnel to 0.0.0.0, which will make ssh listen to all interfaces.
Setting up your application:
In your containerized application use the same docker0 ip to connect to the server: 172.17.0.1:9000. Now traffic being routed through your docker0 bridge will also reach your ssh tunnel :)
For example, if you have a .NET Core application that needs to connect to a remote DB via :9000, your "ConnectionString" would contain "server=172.17.0.1,9000;".
Forwarding multiple connections:
When dealing with multiple outgoing connections (e.g. a docker container needs to connect to multiple remote DB's via tunnel), several valid techniques exist but an easy and straightforward way is to simply create multiple tunnels listening to traffic arriving at different docker0 bridge ports.
Within your ssh tunnel command (ssh -L [bind_address:]port:host:hostport [user@]hostname), the port part of the bind_address does not have to match the hostport of the host and can therefore be freely chosen by you. So within your docker containers just channel the traffic to different ports of your docker0 bridge and then create several ssh tunnel commands (one for each port you are listening to) that intercept data at these ports and forward it to the different hosts and hostports of your choice.
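A rough sketch of that pattern (the hostnames, ports and jump host below are placeholders):
# first tunnel: containers reach 172.17.0.1:9000 -> db-one.example:5432
ssh -N -L 172.17.0.1:9000:db-one.example:5432 user@remote-host
# second tunnel: containers reach 172.17.0.1:9001 -> db-two.example:3306
ssh -N -L 172.17.0.1:9001:db-two.example:3306 user@remote-host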
On macOS (tested in v19.03.2):
1) Create a tunnel on the host:
ssh -i key.pem username@jump_server -L 3336:mysql_host:3306 -N
2) From the container, you can use host.docker.internal, docker.for.mac.localhost, or docker.for.mac.host.internal to reference the host.
For example:
mysql -h host.docker.internal -P 3336 -u admin -p
A note from the official Docker for Mac docs:
I WANT TO CONNECT FROM A CONTAINER TO A SERVICE ON THE HOST
The host has a changing IP address (or none if you have no network access).
From 18.03 onwards our recommendation is to connect to the special DNS
name host.docker.internal, which resolves to the internal IP address
used by the host. This is for development purpose and will not work in
a production environment outside of Docker Desktop for Mac.
The gateway is also reachable as gateway.docker.internal.
I think you can do it by adding --net=host to your docker run. But see also this question: Forward host port to docker container
I'd like to share my solution to this. My case was as follows: I had a PostgreSQL SSH tunnel on my host and I needed one of my containers from the stack to connect to a database through it.
I spent hours trying to find a solution (Ubuntu + Docker 19.03) and failed. Instead of doing voodoo magic with iptables or modifying the settings of the Docker engine itself, I came up with a solution and was shocked I hadn't thought of it earlier. The most important thing was that I didn't want to use host mode: security first.
Instead of trying to allow a container to talk to the host, I simply added another service to the stack which creates the tunnel, so the other containers can talk to it easily without any hacks.
After configuring a host inside my ~/.ssh/config:
Host project-postgres-tunnel
    HostName remote.server.host
    User sshuser
    Port 2200
    ForwardAgent yes
    TCPKeepAlive yes
    ConnectTimeout 5
    ServerAliveCountMax 10
    ServerAliveInterval 15
And adding a service to the stack:
postgres:
  image: cagataygurturk/docker-ssh-tunnel:0.0.1
  volumes:
    - $HOME/.ssh:/root/ssh:ro
  environment:
    TUNNEL_HOST: project-postgres-tunnel
    REMOTE_HOST: localhost
    LOCAL_PORT: 5432
    REMOTE_PORT: 5432
  # uncomment if you wish to access the tunnel on the host
  #ports:
  #  - 5432:5432
The PHP container started talking through the tunnel without any problems:
postgresql://user:password@postgres/db?serverVersion=11&charset=utf8
Just remember to put your public key inside that host if you haven't already:
ssh-copy-id project-postgres-tunnel
I'm pretty sure this will work regardless of the OS used (MacOS / Linux).
I agree with @hlobit that @B12Toaster's answer should be the accepted answer.
In case anyone hits this problem with a slightly different SSH tunnel setup, here are my findings. In my case, instead of creating a tunnel from the Docker host machine to the remote machine using ssh -L, I was creating a remote forward SSH tunnel from the remote machine to the Docker host machine using ssh -R.
In this setup, sshd does NOT allow gateway ports by default, i.e. in the file /etc/ssh/sshd_config on the Docker host, GatewayPorts no should be uncommented and set to GatewayPorts yes or GatewayPorts clientspecified. I configured GatewayPorts clientspecified and set up the remote forward SSH tunnel with ssh -R 172.17.0.1:dockerHostPort:localhost:sshClientPort user@dockerHost. Remember to restart sshd after changing /etc/ssh/sshd_config (sudo systemctl restart sshd).
Your Docker container should then be able to connect to the Docker host on 172.17.0.1:dockerHostPort, and this in turn gets tunnelled back to the SSH client's localhost:sshClientPort.
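Putting those steps together as a sketch (dockerHostPort and sshClientPort are kept as placeholders from the description above). On the Docker host:
GatewayPorts clientspecified   # in /etc/ssh/sshd_config
sudo systemctl restart sshd
Then, from the SSH client machine, open the remote forward towards the Docker host:
ssh -N -R 172.17.0.1:dockerHostPort:localhost:sshClientPort user@dockerHost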
References:
https://www.ssh.com/ssh/tunneling/example
https://docs.docker.com/network/network-tutorial-host/
https://docs.docker.com/network/host/
My 2 cents for Ubuntu 18.04 - a very simple answer, no need for extra tunnels, extra containers, extra docker options or exposing host.
Simply, when creating a reverse tunnel, make sure ssh binds to all interfaces, as by default it binds the ports of the reverse tunnel to localhost only. For example, in PuTTY make sure that the option Connection -> SSH -> Tunnels -> "Remote ports do the same (SSH-2 only)" is ticked.
This is more or less equivalent to specifying the binding address 0.0.0.0 for the remote part of the tunnel (more details here):
-R [bind_address:]port:host:hostport
However, this did not work for me unless I allowed the GatewayPorts option in my sshd server configuration. Many thanks to Stefan Seidel for his great answer.
In short: (1) you bind the reverse tunnel to 0.0.0.0, and (2) you let the sshd server accept such tunnels.
Once this is done I can access my remote server from my docker containers via the docker gateway 172.17.0.1 and port bound to the host.
On my side, running Docker in Windows Subsystem for Linux (WSL v1), I couldn't use the docker0 connection approach. host.docker.internal also doesn't resolve (latest Docker version).
However, I found out I could directly use the host IP inside my Docker container.
Get your Host IP (Windows cmd: ipconfig), e.g. 192.168.0.5
Bash into your container and test whether you can ping your host IP:
- docker exec -it d6b4be5b20f7 /bin/bash
- apt-get update && apt-get install iputils-ping
- ping 192.168.0.5
PING 192.168.0.5 (192.168.0.5) 56(84) bytes of data.
64 bytes from 192.168.0.5: icmp_seq=1 ttl=37 time=2.17 ms
64 bytes from 192.168.0.5: icmp_seq=2 ttl=37 time=1.44 ms
64 bytes from 192.168.0.5: icmp_seq=3 ttl=37 time=1.68 ms
Apparently, on Windows, you can directly connect from within containers to the host using the host's IP.
In case anyone needs it (like I did), the solution for Windows and WSL is the same as @prayagupd mentioned for macOS:
Create an SSH tunnel to your remote service with whatever tool you prefer, to whatever port you prefer, for example 3300.
Then, from the Docker container, you can connect to, for example, a MySQL DB on tunnel port 3300 using the following command:
mysql -u user -p -h host.docker.internal -P 3300
An easy example to reproduce the situation and SSH to the host:
Run a container using --network="host":
docker container run --network="host" --interactive --tty --rm ubuntu bash
Now you can access your host using localhost.
Say your host machine is a Linux machine with a public/private key pair for SSH access. Copy the contents of your private key file and recreate the key file inside the container. (However, this is just a demonstration; this is not a good way to handle key files.)
Now ssh into your host. Use localhost to access it.
ssh -i key_file.pem ec2-user#localhost

Docker running inside vagrant + remote python debugging in Pycharm

I'm running Docker on top of Vagrant and would like to debug an application remotely using PyCharm running on Windows (which runs Vagrant). Of course, the Docker host is then the Vagrant box - not the same machine PyCharm is running on.
I have to specify the certificates folder and the docker-machine executable as local files/directories. Does this mean I cannot debug applications using PyCharm in this setup?
Of course, I could SSH directly into the Docker container, but then I lose the features PyCharm gives me.
PyCharm cannot remote-debug because it cannot connect to the code running in Docker inside Vagrant.
You need to bridge the port from Docker to Vagrant first.
First, find the Vagrant IP and the Docker IP (by default the Vagrant IP is 10.0.2.2; you can see it when you run vagrant ssh).
Second, determine a port for debugging (e.g. 21000).
Run these commands in a terminal:
vagrant ssh
sudo iptables -t nat -A PREROUTING -p tcp --dport 21000 -j DNAT --to-destination 10.0.2.2:21000
sudo iptables -t nat -A POSTROUTING -j MASQUERADE
Add this code to your Python file
(replace 172.19.0.1 with your Docker IP inside Vagrant):
import pydevd
pydevd.settrace('172.19.0.1', port=21000, suspend=False)
Set a breakpoint in the code and try to debug.
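Note that for the pydevd snippet above to import inside the container, the pydevd package has to be installed there; a typical way (assuming pip is present; newer PyCharm versions ship a pydevd-pycharm package instead) is:
pip install pydevd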
It is possible, however not recommended; it has the potential to introduce a number of problem spots longer term and brings an increased security risk,
as per the Docker documentation ...
If you are okay with the security risk and if docker toolbox using boot2docker is not an option for your situation, then you will need to ensure:
Docker client/server versions are identical
Port forwarding on your local Vagrant box is set up
The TCP binding for the Docker server is added, either as a replacement for the default Unix socket binding or in addition to it (a sketch follows below).
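As a sketch of that last point (daemon flags only; where to put them depends on your init system, e.g. /etc/default/docker or a systemd drop-in, and exposing the daemon over plain TCP is exactly the security risk mentioned above):
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375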

Port 80 on Mac is already in use

We have to use port 80 for our server. But when I tried to use it on the Mac, it always said that port 80 is in use, and I don't know which program is using it.
I searched on Google, and someone said it's Apache, but what I tried didn't work. I found this: https://gist.github.com/kujohn/7209628 , but visiting our server by IP address still doesn't seem to work.
I really don't know what's going on or how I can find out which program is using port 80 and stop it.
Many thanks if anyone can help; I'm new to the Mac. Thanks.
To find out what process is using port 80:
Go to Applications.
Open Utilities.
Open Activity Monitor.
Click on the Memory tab.
Look at the ports and the processes using them. Find port 80 and select it.
Go to View on the menu bar and choose Quit Process.
This will just kill the process, it will not stop a server instance that is already running from continuing to run.
(Correction: the Ports column shows the number of open ports (and files?), not the port number)
It is not clear whether you are using a database management system, and which one, but one method that has worked for me using MAMP is as follows:
Stop the server using the sudo apachectl stop command.
Then change the port to port 80.
Then restart your servers.
Type the following in Terminal:
sudo lsof -i -n -P | grep TCP
You will get a list - e.g. Dropbox listens on 80.
You can copy the output to a text editor, etc., to search.
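To narrow the output straight to the port 80 listeners, lsof can also filter by port and connection state (same idea, just more targeted):
sudo lsof -nP -iTCP:80 -sTCP:LISTEN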
On Mac ports below 1024 can only be bound by the root user.
Try launching your server as root user (with sudo), or try to use a port above 1024.
You can also try to add root permissions to your user in /etc/sudoers
# root and users in group wheel can run anything on any machine as any user
root ALL = (ALL) ALL
%admin ALL = (ALL) ALL
your_user_here ALL = (ALL) ALL
I was having this issue: Apache was disabled via launchctl but was still tying up port 80 after launch. I could start up Apache and it would work, but after unloading it, I couldn't start up anything on port 80. I was using Python's built-in web server as an easy test. It would work on port 81, but not on port 80.
sudo python -m SimpleHTTPServer 80 -- wouldn't work
sudo python -m SimpleHTTPServer 81 -- would work
Here are the symptoms:
$ launchctl unload -w /System/Library/LaunchDaemons/org.apache.httpd.plist
/System/Library/LaunchDaemons/org.apache.httpd.plist: Could not find specified service
$ sudo lsof -i ':80'
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
Python 3353 root 3u IPv4 0xe455777a82799f05 0t0 TCP *:http (LISTEN)
The fix for me (after way too much searching) was simple:
sudo pfctl -F all
This flushed the packet filter, releasing port 80 (and others I assume 8080, 443, whatever ports apache might be tying up)
After that, and relaunching the python server, it came right up.
It might be Skype that is using port 80. If you have Skype installed and running, try changing it to use a different port in its settings.
Port numbers in the range from 0 to 1023 are classified as 'well-known' and port number 80 is reserved for HTTP. Typically you have servers listening on port 80 to handle HTTP requests.
Source:
http://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers
