How to open incoming port 50070 in the firewall (Google Compute Engine) - Hadoop

I have a single-node Hadoop installation on a Google Compute Engine instance and I want to open port 50070 on that machine to access the Hadoop dashboard. I configured a firewall rule for tcp:50070 in the Compute Engine network settings, but I am still unable to access the port from outside the network (i.e. via the internet). I ran nmap against the public IP of my GCE instance, and the result showed only the SSH port as open; all other ports are filtered.
Note: I am using the Debian 7.5 image.

Make sure your daemon is listening on port 50070. If you have more than one network in your project, make sure the port is opened on the right network. You can run the following commands to check the information about your instance and network.
lsof -i
gcutil --project=<project-id> getinstance <instance-name>
gcutil --project=<project-id> listnetworks
gcutil --project=<project-id> listfirewalls
gcutil --project=<project-id> getfirewall <firewall-name>

Check whether the IP/port is allowed in iptables.
iptables -L
will show you all the current rules.
To allow the port in iptables, you can do the following:
sudo iptables -A INPUT -p tcp -m tcp --dport 50070 -j ACCEPT
sudo iptables-save -c
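Note that rules added this way do not survive a reboot by themselves. A minimal sketch for persisting them on Debian (this assumes the iptables-persistent package, which is not mentioned in the original answer):
sudo apt-get install iptables-persistent
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'
iptables-persistent then reloads the saved rules at boot.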

Short answer
In addition to configuring the firewall rule in the GCE web console, make sure that your server is listening on 0.0.0.0 instead of 127.0.0.1.
Long answer
In the context of servers, 0.0.0.0 means all IPv4 addresses on the local machine. If a host has two IP addresses, 192.168.1.1 and 10.1.2.1, and a server running on the host listens on 0.0.0.0, it will be reachable at both of those IPs. (Source)
In contrast, 127.0.0.1 is the IP address used to establish a connection to the same machine the user is on; this address is usually referred to as localhost.
It's often used when you want a network-capable application to only serve clients on the same host. A process that is listening on 127.0.0.1 for connections will only receive local connections on that socket. (Source)
Hence, if you try to establish a connection to your server from the internet and the server on your GCE machine is listening on 127.0.0.1, then from the server's point of view no request is ever received, and the connection is refused because nothing is listening on the opened port (in your case, 50070).
I hope this answer helps to solve your problem. Best regards.
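As a quick check (a sketch, not from the original answers; the exact tool varies by distribution), you can verify which address the NameNode web UI is bound to:
sudo netstat -tlnp | grep 50070
If this shows 127.0.0.1:50070 rather than 0.0.0.0:50070, the bind address can typically be changed via the dfs.namenode.http-address property in hdfs-site.xml (dfs.http.address on older 1.x releases) and the NameNode restarted.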

Related

Enable remote access from one custom IP to Elasticsearch cluster

I have a VPS with Elasticsearch installed. The question is: how can I connect to this remote machine from my home IP? I know that with one simple line it is possible to allow all connections, but that is not secure. When I try to add my custom IP, Elasticsearch closes the localhost connection and doesn't start properly.
Thanks in advance for any advice!
First set network.host in elasticsearch.yml to the VPS public IP address, not localhost. Next you would need to open port 9200 (or whichever you are using) to your home computer's specific IP address. So, assuming your VPS is Linux, you would achieve this by whitelisting your IP address in iptables and opening this port to that IP address only.
iptables -A INPUT -p tcp -s <source> --dport 9200 -j ACCEPT
As to how secure this would be: in general, the recommendations I've seen floating around mostly agree that it's a good idea to only allow local connections to your Elasticsearch instance. If you want to allow remote connections for testing purposes, then as I've mentioned it is enough to bind your public IP instead of localhost in elasticsearch.yml and open the appropriate ports.
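For illustration, the relevant line in elasticsearch.yml would look something like this (203.0.113.10 is a placeholder public IP, not a value from the original answer):
network.host: 203.0.113.10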
Thanks again to etarhan. One important thing: please check your iptables (firewall) rules before going to production when opening a port to external IPs. If they allow any remote connection, anybody can update or delete your Elasticsearch data. I solved it by following the instructions above, opening remote connections from my home IP but blocking all others:
iptables -A INPUT -p tcp -s <source> --dport 9200 -j ACCEPT
iptables -A INPUT -p tcp --dport 9200 -j DROP
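Note that order matters here: iptables evaluates rules top to bottom, so the ACCEPT rule for your IP must come before the blanket DROP. A quick way to inspect the resulting order (this check is my addition, not part of the original answer):
sudo iptables -L INPUT -n --line-numbers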

Able to open TCP port but not listening

Using Add rule in Windows Firewall, I was able to open TCP port 15537. When I execute the command netstat -ano in a terminal window, this port is not listed. I tried to run a telnet command in a terminal window (e.g. telnet IP port) but got
Connecting To localhost...Could not open connection to the host, on port 15537: Connect failed
Then I downloaded the PortQry application and ran it from a different machine on the same network; the result I received was
"Not Listening".
I have already spent more than two days on this and asked an internal group, but could not find a solution.
Note: both machines are running Windows 10.
No solution is needed as no problem is indicated in the question. You have opened a TCP port successfully. You have not made any attempt to cause anything to listen to that TCP port.
It's not clear what results you expected, but you got the results that you should have expected. Nothing is wrong. The port is open because you opened it. Nothing is listening on that port because you didn't set anything to listen on that port.
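To see the distinction in practice, you can start a trivial listener on the opened port and re-run the checks (a sketch, assuming Python is installed on the machine; any server process would do):
python -m http.server 15537
netstat -ano | findstr 15537
With the listener running, netstat should show the port in the LISTENING state, and PortQry from the other machine should now report it as listening.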
There may also be forwarding rules involved. If the traffic is not destined for the local machine itself, the netstat command will not show the port as listening, but it can still be acted on, usually by some kind of forwarding.
I am not very familiar with Windows firewall configuration, but I know that if there is a forwarding rule in Linux, like
-p tcp -m tcp --dport 8080 -j {other forwarding chain}
we cannot see 8080 listening on this host (netstat -tunpl), but telnet host 8080 may still connect.
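A complete (hypothetical) example of such a rule, redirecting incoming traffic on port 8080 to a local service on port 3000:
sudo iptables -t nat -A PREROUTING -p tcp --dport 8080 -j REDIRECT --to-port 3000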
Use nmap instead of netstat to detect the open port:
nmap -p your_port_number your_local_ip
Run a service on that port.
For example, in my case, in order to open ports I used
"service ssh start" or "service apache2 start", which opened ports 22 and 80 respectively on my Linux machine.
Using nmap on my LAN, both ports showed as open.
Hope it helps.

access host's ssh tunnel from docker container

Using Ubuntu Trusty, there is a service running on a remote machine that I can access via port forwarding through an SSH tunnel from localhost:9999.
I have a docker container running. I need to access that remote service via the host's tunnel, from within the container.
I tried tunneling from the container to the host with -L 9000:host-ip:9999, but accessing the service through 127.0.0.1:9000 from within the container fails to connect. To check whether the port mapping was working, I tried
nc -luv -p 9999 # at host
nc -luv -p 9000 # at container
following this (paragraph 2), but there was no perceived communication, even when doing
nc -luv host-ip -p 9000
at the container
I also tried mapping the ports via docker run -p 9999:9000, but this reports that the bind failed because the host port is already in use (by the host's tunnel to the remote machine, presumably).
So my questions are
1 - How will I achieve the connection? Do I need to setup an ssh tunnel to the host, or can this be achieved with the docker port mapping alone?
2 - What's a quick way to test that the connection is up? Via bash, preferably.
Thanks.
Using your host's network as the network for your containers, via --net=host (or network_mode: host in docker-compose), is one option, but it has unwanted side effects: (a) you now expose the container's ports on your host system, and (b) you can no longer connect to those containers that are not mapped to your host network.
In your case, a quicker and cleaner solution would be to make your ssh tunnel "available" to your docker containers (e.g. by binding ssh to the docker0 bridge) instead of exposing your docker containers on your host network (as suggested in the accepted answer).
Setting up the tunnel:
For this to work, retrieve the ip your docker0 bridge is using via:
ifconfig
you will see something like this:
docker0 Link encap:Ethernet HWaddr 03:41:4a:26:b7:31
inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0
Now you need to tell ssh to bind to this IP when listening for traffic directed at port 9000:
ssh -L 172.17.0.1:9000:host-ip:9999 user@remote-host
(user@remote-host is a placeholder for the tunnel endpoint you were already using.)
Without setting the bind_address, :9000 would only be available to your host's loopback interface and not per se to your docker containers.
Side note: You could also bind your tunnel to 0.0.0.0, which will make ssh listen to all interfaces.
Setting up your application:
In your containerized application use the same docker0 ip to connect to the server: 172.17.0.1:9000. Now traffic being routed through your docker0 bridge will also reach your ssh tunnel :)
For example, if you have a .NET Core application that needs to connect to a remote DB located at :9000, your "ConnectionString" would contain "server=172.17.0.1,9000;".
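A quick way to test from inside a container that the tunnel endpoint is reachable (my addition, assuming bash is available in the container):
timeout 1 bash -c '</dev/tcp/172.17.0.1/9000' && echo "tunnel reachable"
This uses bash's built-in /dev/tcp pseudo-device, so no extra tools are needed.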
Forwarding multiple connections:
When dealing with multiple outgoing connections (e.g. a docker container needs to connect to multiple remote DBs via tunnels), several valid techniques exist, but an easy and straightforward way is to simply create multiple tunnels listening for traffic arriving at different docker0 bridge ports.
Within your ssh tunnel command (ssh -L [bind_address:]port:host:hostport [user@]hostname), the port part of the bind_address does not have to match the hostport of the host and can therefore be freely chosen by you. So within your docker containers just channel the traffic to different ports of your docker0 bridge, and then create several ssh tunnel commands (one for each port you are listening to) that intercept data at these ports and forward it to the different hosts and hostports of your choice.
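For instance (hostnames and ports below are placeholders, not values from the original answer), two databases could be reached through two tunnels bound to different docker0 ports:
ssh -N -L 172.17.0.1:9001:db1.example.com:5432 user@jump-host-1
ssh -N -L 172.17.0.1:9002:db2.example.com:3306 user@jump-host-2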
On macOS (tested with Docker v19.03.2):
1) create a tunnel on the host
ssh -i key.pem username@jump_server -L 3336:mysql_host:3306 -N
2) from the container, you can use host.docker.internal, docker.for.mac.localhost, or docker.for.mac.host.internal to reference the host.
example,
mysql -h host.docker.internal -P 3336 -u admin -p
A note from the official docker-for-mac docs:
I WANT TO CONNECT FROM A CONTAINER TO A SERVICE ON THE HOST
The host has a changing IP address (or none if you have no network access).
From 18.03 onwards our recommendation is to connect to the special DNS
name host.docker.internal, which resolves to the internal IP address
used by the host. This is for development purpose and will not work in
a production environment outside of Docker Desktop for Mac.
The gateway is also reachable as gateway.docker.internal.
I think you can do it by adding --net=host to your docker run. But see also this question: Forward host port to docker container
I'd like to share my solution to this. My case was as follows: I had a PostgreSQL SSH tunnel on my host and I needed one of the containers from my stack to connect to a database through it.
I spent hours trying to find a solution (Ubuntu + Docker 19.03) and failed. Instead of doing voodoo magic with iptables or modifying the settings of the Docker engine itself, I came up with a solution and was shocked I hadn't thought of it earlier. The most important thing was that I didn't want to use host mode: security first.
Instead of trying to allow a container to talk to the host, I simply added another service to the stack which creates the tunnel, so the other containers can talk to it easily without any hacks.
After configuring a host inside my ~/.ssh/config:
Host project-postgres-tunnel
    HostName remote.server.host
    User sshuser
    Port 2200
    ForwardAgent yes
    TCPKeepAlive yes
    ConnectTimeout 5
    ServerAliveCountMax 10
    ServerAliveInterval 15
And adding a service to the stack:
postgres:
    image: cagataygurturk/docker-ssh-tunnel:0.0.1
    volumes:
        - $HOME/.ssh:/root/ssh:ro
    environment:
        TUNNEL_HOST: project-postgres-tunnel
        REMOTE_HOST: localhost
        LOCAL_PORT: 5432
        REMOTE_PORT: 5432
    # uncomment if you wish to access the tunnel on the host
    #ports:
    #    - 5432:5432
The PHP container started talking through the tunnel without any problems:
postgresql://user:password@postgres/db?serverVersion=11&charset=utf8
Just remember to put your public key inside that host if you haven't already:
ssh-copy-id project-postgres-tunnel
I'm pretty sure this will work regardless of the OS used (MacOS / Linux).
I agree with @hlobit that @B12Toaster's answer should be the accepted answer.
In case anyone hits this problem with a slightly different SSH tunnel setup, here are my findings. In my case, instead of creating a tunnel from the Docker host machine to the remote machine using ssh -L, I was creating a remote forward SSH tunnel from the remote machine to the Docker host machine using ssh -R.
In this setup, sshd does NOT allow gateway ports by default, i.e. in the file /etc/ssh/sshd_config on the Docker host, GatewayPorts no should be uncommented and set to GatewayPorts yes or GatewayPorts clientspecified. I configured GatewayPorts clientspecified and set up the remote forward SSH tunnel with ssh -R 172.17.0.1:dockerHostPort:localhost:sshClientPort user@dockerHost. Remember to restart sshd after changing /etc/ssh/sshd_config (sudo systemctl restart sshd).
Your Docker container should be able to connect to Docker host on 172.17.0.1:dockerHostPort and this in turn gets tunnelled back to SSH client's localhost:sshClientPort.
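Putting the pieces together, a minimal sketch (hostnames and ports are placeholders):
# on the Docker host, in /etc/ssh/sshd_config:
#   GatewayPorts clientspecified
sudo systemctl restart sshd
# on the remote machine, bind the reverse tunnel to the Docker bridge IP:
ssh -N -R 172.17.0.1:9000:localhost:9000 user@docker-host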
References:
https://www.ssh.com/ssh/tunneling/example
https://docs.docker.com/network/network-tutorial-host/
https://docs.docker.com/network/host/
My 2 cents for Ubuntu 18.04 - a very simple answer: no need for extra tunnels, extra containers, extra docker options, or exposing the host.
Simply, when creating a reverse tunnel, make sure ssh binds to all interfaces; by default it binds the ports of the reverse tunnel to localhost only. For example, in PuTTY make sure the option Connection->SSH->Tunnels "Remote ports do the same (SSH-2 only)" is ticked.
This is more or less equivalent to specifying the binding address 0.0.0.0 for the remote part of the tunnel (more details here):
-R [bind_address:]port:host:hostport
However, this did not work for me until I allowed the GatewayPorts option in my sshd server configuration. Many thanks to Stefan Seidel for his great answer.
In short: (1) bind the reverse tunnel to 0.0.0.0; (2) let the sshd server accept such tunnels.
Once this is done I can access my remote server from my docker containers via the docker gateway 172.17.0.1 and port bound to the host.
On my side, running Docker in Windows Subsystem for Linux (WSL v1), I couldn't use the docker0 connection approach. host.docker.internal also doesn't resolve (latest Docker version).
However, I found out that I could directly use the host IP inside my docker container.
Get your Host IP (Windows cmd: ipconfig), e.g. 192.168.0.5
Bash into your Container and test if you can ping your host ip:
- docker exec -it d6b4be5b20f7 /bin/bash
- apt-get update && apt-get install iputils-ping
- ping 192.168.0.5
PING 192.168.0.5 (192.168.0.5) 56(84) bytes of data.
64 bytes from 192.168.0.5 : icmp_seq=1 ttl=37 time=2.17 ms
64 bytes from 192.168.0.5 : icmp_seq=2 ttl=37 time=1.44 ms
64 bytes from 192.168.0.5 : icmp_seq=3 ttl=37 time=1.68 ms
Apparently, in Windows, you can directly connect from within containers to the host using the official host ip.
In case anyone needs it (like I did), the solution for Windows and WSL is the same as @prayagupd mentioned for macOS:
Create an SSH tunnel to your remote service with whatever tool you prefer to whatever port you prefer, for example 3300.
Then, from Docker container you can connect to, for example, MySQL DB on tunnel port 3300 using following command:
mysql -u user -p -h host.docker.internal -P 3300
An easy example to reproduce the situation and ssh to host
Run a container using --network="host":
docker container run --network="host" --interactive --tty --rm ubuntu bash
Now you can access your host using localhost
Now, suppose your host machine is a Linux machine with a public-private key pair for ssh. Copy the contents of your private key file and reproduce the key file inside the container. (However, this is just a demonstration; this is not a good way to handle key files.)
Now ssh into your host, using localhost to access it:
ssh -i key_file.pem ec2-user@localhost

Can't connect to public IP for EC2 instance

I have an EC2 instance which is running with the following security groups:
HTTP - TCP - 80 - 0.0.0.0/0
Custom UDP Rule - UDP - 1194 - 0.0.0.0/0
SSH - TCP - 22 - 0.0.0.0/0
Custom TCP Rule - TCP - 943 - 0.0.0.0/0
HTTPS - TCP - 443 - 0.0.0.0/0
However, when I try to access http://{PUBLIC_IP} or https://{PUBLIC_IP} in the browser, I get a "{IP} refused to connect" error. I'm new to AWS. Am I missing something here? What should I do to debug?
One way to debug this particular class of problem is to use netcat in order to determine where the problem lies.
If you run netcat against port 80 on the public IP address of your instance and just get a hang (no output at all), then most likely your security group isn't allowing traffic through. Here is an example from an EC2 instance that is in a security group that doesn't allow port 80 traffic inbound:
% nc -v 55.35.300.45 80
<just hangs>
Whereas if the security group is changed to allow port 80, but the EC2 instance doesn't have any process listening on port 80, you'll get the following:
% nc -v 55.35.300.45 80
nc: connectx to 52.38.300.43 port 80 (tcp) failed: Connection refused
Given that your browser gave you a similar "connection refused", most likely the problem is that there is no web server running on your instance. You can verify this by ssh'ing into the instance and seeing if you can connect to port 80 there:
ssh ec2-user@55.35.300.45
% nc -v localhost 80
nc: connect to localhost port 80 (tcp) failed: Connection refused
If you get something like the above, you're definitely not running a webserver.
I'm not sure if it's too late to help, but I was stuck with a similar issue on my test server:
SG Inbound: ssh -> 22
HTTP -> 80
NACL: default allow/deny settings
But I still couldn't reach the server from my browser. Then I realized there was nothing running on the server that could serve the request, so I started the httpd server (webserver) and it worked.
sudo yum -y install httpd
sudo service httpd start
This way you can test connectivity while you are playing with SGs and NACLs. Of course it's not the only way, just an example if you're figuring out your system's networking.
Have you installed a webserver (nginx/apache) to serve your requests? If so, please share your config files, so that it will help to troubleshoot.
I think the reason is probably that you did not set up a web server on your EC2 instance, because if you try to access http://{PUBLIC_IP} or https://{PUBLIC_IP}, you need a server process in the background to serve the HTTP request, as @Niranj Rajasekaran said.
By the way, by simply pinging the {PUBLIC_IP}, you could see if your connection to your EC2 instance is normal or not.
In command prompt or terminal, type
ping {PUBLIC_IP}
In my case, the server was running but available on just 127.0.0.1 so it refused connections from external hosts. To see if this is your situation, you can run
netstat -an | grep <port number>
If it says 127.0.0.1:<port number> instead of 0.0.0.0:<port number>, you have this problem.
Usually there's a flag or an argument in your server code somewhere to set the host to 0.0.0.0:
app.run(host='0.0.0.0') # flask example
However, in my case, I had already set this so I thought that couldn't possibly be the issue, which is how I ended up on this thread, which asks more generally about the problem. Unfortunately, I was using docker, and had set 0.0.0.0 on the container but was mapping that explicitly to 127.0.0.1 on the host in the docker-compose port-mapping:
ports:
    - "127.0.0.1:<port number>:<port number>"
Changing that line to remove the host IP specification fixed the problem upon re-deploy:
ports:
    - "<port number>:<port number>"

How to configure direct http access to EC2 instance?

This is a very basic Amazon EC2 question, but I'm stumped, so here goes.
I want to launch an Amazon EC2 instance and allow access to HTTP on ports 80 and 8888 from anywhere. So far I can't even connect to those ports from the instance itself using its own IP address (though connecting via localhost works).
I configured the "default" security group for HTTP using the standard HTTP option on the management console (and also SSH).
I launched my instance in the default security group.
I connected to the instance on SSH port 22 twice; in one window I launched an HTTP server on port 80, and in the other window I verified that I could connect to HTTP via localhost.
However, when I try to access HTTP from the instance (or anywhere else) using either the public DNS or the private IP address, I get "connection refused".
What am I doing wrong, please?
Below is a console fragment showing the wget that succeeds and the two that fail, all run from the instance itself.
--2012-03-07 15:43:31-- http://localhost/
Resolving localhost... 127.0.0.1
Connecting to localhost|127.0.0.1|:80... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: /__whiff_directory_listing__ [following]
--2012-03-07 15:43:31-- http://localhost/__whiff_directory_listing__
Connecting to localhost|127.0.0.1|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: “__whiff_directory_listing__”
[ <=> ] 7,512 --.-K/s in 0.03s
2012-03-07 15:43:31 (263 KB/s) - “__whiff_directory_listing__” saved [7512]
[ec2-user@ip-10-195-205-30 tmp]$ wget http://ec2-50-17-2-174.compute-1.amazonaws.com/
--2012-03-07 15:44:17-- http://ec2-50-17-2-174.compute-1.amazonaws.com/
Resolving ec2-50-17-2-174.compute-1.amazonaws.com... 10.195.205.30
Connecting to ec2-50-17-2-174.compute-1.amazonaws.com|10.195.205.30|:80... failed: Connection refused.
[ec2-user@ip-10-195-205-30 tmp]$ wget http://10.195.205.30/
--2012-03-07 15:46:08-- http://10.195.205.30/
Connecting to 10.195.205.30:80... failed: Connection refused.
[ec2-user@ip-10-195-205-30 tmp]$
The standard TCP sockets interface requires that you bind to a particular IP address when you send or listen. There are a couple of somewhat special addresses: localhost (which you're probably familiar with), which is 127.0.0.1. There's also a special address, 0.0.0.0 or INADDR_ANY (internet protocol shorthand for ANY ADDRESS). It's a way to listen on ANY or, more commonly, ALL addresses on the host; a way to tell the kernel/stack that you're not interested in binding to a particular IP address.
So, when you're setting up a server that listens to "localhost" you're telling the service that you want to use the special reserved address that can only be reached by users of this host, and while it exists on every host, making a connection to localhost will only ever reach the host you're making the request from.
When you want a service to be reachable everywhere (on a local host, on all interfaces, etc.) you can specify 0.0.0.0.
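As a concrete illustration (a sketch, assuming Python 3 is available on the instance; not part of the original answer), the same toy web server is local-only or externally reachable depending on the bind address:
python3 -m http.server 8888 --bind 127.0.0.1   # reachable only from the instance itself
sudo python3 -m http.server 80 --bind 0.0.0.0  # listens on all interfaces (still subject to firewalls)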
(0) It's silly, but the first thing you need to do is make sure that your web server is running.
(1) You need to edit your Security Group to let incoming HTTP packets access your website. If your website is listening on port 80, you need to edit the Security Group to open access to port 80 as mentioned above. If your website is listening on some other port, then you need to edit the Security Group to access that other port.
(2) If you are running a Linux instance, the iptables firewall may be running by default. You can check that this firewall is active by running
sudo service iptables status
on the command line. If you get output, then the iptables firewall is running. If you get a message "Firewall not running", that's pretty self-explanatory. In general, the iptables firewall is running by default.
You have two options: knock out the firewall or edit the firewall's configuration to let HTTP traffic through. I opted to knock out the firewall as the simpler option (for me).
sudo service iptables stop
There is no real security risk in shutting down iptables because, if active, it merely duplicates the functionality of Amazon's firewall, which uses the Security Group to generate its configuration. We are assuming here that Amazon AWS doesn't misconfigure its firewalls - a very safe assumption.
(3) Now, you can access the URL from your browser.
(4) The Microsoft Windows Servers also run their personal firewalls by default and you'll need to fix the Windows Server's personal firewall, too.
Correction: by default, AWS does not fire up server firewalls such as iptables (CentOS) or UFW (Ubuntu) when you create new EC2 instances. That's why EC2 instances that are in the same VPC can ssh into each other, and why you can "see" the web server that you fired up from another EC2 instance in the same VPC.
Just make sure that your RESTful API is listening on all interfaces, i.e. 0.0.0.0:portID.
As you are getting connection refused (packets are being rejected), I bet it is iptables causing the problem. Try running
iptables -I INPUT -p tcp --dport 80 -j ACCEPT
iptables -I INPUT -p tcp --dport 8888 -j ACCEPT
and test the connection.
You will also need to make those rules permanent, which you can do by adding the above lines to e.g. /etc/sysconfig/iptables if you are running Red Hat.
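On Red Hat-style systems the running rules can also be written out in one step (my addition; service names vary by distribution):
sudo service iptables save    # writes the current rules to /etc/sysconfig/iptables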
Apparently I was "binding to localhost", whereas I needed to bind to 0.0.0.0 so that port 80 would answer on all incoming TCP interfaces. This is a subtlety of TCP/IP that I don't fully understand yet, but it fixed the problem.
Had to do the following:
1) Enable HTTP access in the instance config; it wasn't on by default, only SSH.
2) I was trying to run a Node.js server, so traffic to port 80 had to reach port 3000; I ran the following commands to fix that:
iptables -F
iptables -I INPUT -p tcp --dport 80 -j ACCEPT
sudo service iptables-persistent flush
Amazon support answered it and it worked instantly:
I replicated the issue on my end on a test Ubuntu instance and was able to solve it. The issue was that in order to run Tomcat on a port below 1024 in Ubuntu/Unix, the service needs root privileges which is generally not recommended as running a process on port 80 with root privileges is an unnecessary security risk.
What we recommend is to use a port redirection via iptables:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
I hope the above information helps.
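To confirm the redirect rule is in place afterwards (a quick check of my own, not from the support answer):
sudo iptables -t nat -L PREROUTING -n --line-numbers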
