Enable remote access from one custom IP to Elasticsearch cluster - elasticsearch

I have a VPS with Elasticsearch installed. The question is: how can I connect to this remote machine from my home IP? I know that a single line can allow all connections, but that is not secure. When I try to add my custom IP, Elasticsearch closes the localhost connection and doesn't start properly.
Thanks in advance for any advice!

First, set network.host in elasticsearch.yml to the VPS public IP address, not localhost. Next, you need to open port 9200 (or whichever you are using) to your home computer's specific IP address. Assuming your VPS runs Linux, you would achieve this by whitelisting your IP address in iptables and opening this port to that IP address only.
iptables -A INPUT -p tcp -s <source> --dport 9200 -j ACCEPT
As to how secure this would be: the recommendations I've seen floating around mostly agree that it's a good idea to only allow local connections to your Elasticsearch instance. If you want to allow remote connections for testing purposes, then as mentioned it is enough to bind your public IP instead of localhost in elasticsearch.yml and open the appropriate ports.
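For reference, a minimal sketch of the relevant elasticsearch.yml lines; the discovery setting is an assumption for a single-node cluster on newer Elasticsearch versions, where binding a non-loopback address triggers the production bootstrap checks:
network.host: <VPS-public-IP>   # bind to the public interface instead of localhost
http.port: 9200                 # default HTTP port; match your firewall rule
discovery.type: single-node     # assumption: single node, skips cluster bootstrap checks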

Thank you to etarhan again. One important thing: please check your iptables (firewall) rules before production when opening a port to external IPs. If they allow any remote connection, anybody can update or delete your Elasticsearch clusters. I solved it by following the instructions above; I opened the remote connection to my home IP but closed all others:
iptables -A INPUT -p tcp -s <source> --dport 9200 -j ACCEPT
iptables -A INPUT -p tcp --dport 9200 -j DROP
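One caveat worth spelling out: iptables evaluates rules in order, so the ACCEPT for your home IP must sit above the catch-all DROP. A quick way to confirm the order (assuming the default filter table):
iptables -L INPUT -n --line-numbers | grep 9200   # the ACCEPT for your IP should be listed first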

Related

IPsec - Clients cannot ping each other

I'm having a hard time finalizing a first working configuration with IPsec.
I want an IPsec server that creates a network with its clients, and I want the clients to be able to communicate with each other through the server. I'm using strongSwan on both server and clients, and I'll have a few clients with other IPsec implementations.
Problem
So the server is reachable at 10.231.0.1 for every client and the server can ping the clients. That works well. But the clients cannot reach each other.
Here is the output of tcpdump when I try to ping 10.231.0.2 from 10.231.0.3:
# tcpdump -n host 10.231.0.3
[..]
21:28:49.099653 ARP, Request who-has 10.231.0.2 tell 10.231.0.3, length 28
21:28:50.123649 ARP, Request who-has 10.231.0.2 tell 10.231.0.3, length 28
I thought of the farp plugin, mentioned here: https://wiki.strongswan.org/projects/strongswan/wiki/ForwardingAndSplitTunneling - but the ARP request is not making its way to the server; it stays local.
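For completeness, a sketch of how the farp plugin is typically enabled on the strongSwan server (assuming the plugin is packaged for your distribution; file paths vary):
# /etc/strongswan.d/charon/farp.conf - sketch, assuming the farp plugin is installed
farp {
    load = yes
}
As noted above, though, farp only helps once the ARP request actually reaches the server's interface.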
Information
Server ipsec.conf
config setup
    charondebug="ike 1, knl 1, cfg 0"
    uniqueids=no

conn ikev2-vpn
    auto=add
    compress=no
    type=tunnel
    keyexchange=ikev2
    fragmentation=yes
    forceencaps=yes
    dpdaction=clear
    dpddelay=300s
    esp=aes256-sha256-modp4096!
    ike=aes256-sha256-modp4096!
    rekey=no
    left=%any
    leftid=%any
    leftcert=server.crt
    leftsendcert=always
    leftsourceip=10.231.0.1
    leftauth=pubkey
    leftsubnet=10.231.0.0/16
    right=%any
    rightid=%any
    rightauth=pubkey
    rightsourceip=10.231.0.2-10.231.254.254
    rightsubnet=10.231.0.0/16
Client ipsec.conf
config setup
    charondebug="ike 1, knl 1, cfg 0"
    uniqueids=no

conn ikev2-vpn
    auto=route
    compress=no
    type=tunnel
    keyexchange=ikev2
    fragmentation=yes
    forceencaps=yes
    dpdaction=clear
    dpddelay=60s
    esp=aes256-sha256-modp4096!
    ike=aes256-sha256-modp4096!
    rekey=no
    right=server.url
    rightid=%any
    rightauth=pubkey
    rightsubnet=10.231.0.1/32
    left=%defaultroute
    leftid=%any
    leftauth=pubkey
    leftcert=client.crt
    leftsendcert=always
    leftsourceip=10.231.0.3
    leftsubnet=10.231.0.3/32
There should be nothing special or relevant in strongSwan's & charon's configuration files, but I can provide them if you think that could be useful.
I've taken a few shortcuts in the configuration: I'm using VirtualIP but I'm not using a DHCP plugin or anything else to distribute the IPs. I'm setting the IP address manually on the clients like so:
ip address add 10.231.0.3/16 dev eth0
And here is the routing table on the client's side (set automatically, partly by adding the IP and partly by strongSwan for table 220):
# ip route list | grep 231
10.231.0.0/16 dev eth0 proto kernel scope link src 10.231.0.3
# ip route list table 220
10.231.0.1 via 192.168.88.1 dev eth0 proto static src 10.231.0.3
I've also played with iptables and this rule
iptables -t nat -I POSTROUTING -m policy --pol ipsec --dir out -j ACCEPT
on both client and server, because I understood that could be a problem if I have MASQUERADE rules already set, but that did not change anything.
I've also set those kernel parameters through sysctl on both client and server side :
sysctl net.ipv4.conf.default.accept_redirects=0
sysctl net.ipv4.conf.default.send_redirects=0
sysctl net.ipv4.conf.default.rp_filter=0
sysctl net.ipv4.conf.eth0.accept_redirects=0
sysctl net.ipv4.conf.eth0.send_redirects=0
sysctl net.ipv4.conf.eth0.rp_filter=0
sysctl net.ipv4.conf.all.proxy_arp=1
sysctl net.ipv4.conf.eth0.proxy_arp=1
sysctl net.ipv4.ip_forward=1
Lead 1
This could be related to my subnets being declared as /32 in the clients' configurations. At first I declared the subnet as /16, but I could not connect two clients with that configuration: the second client took all the traffic for itself. So I understood I should narrow the traffic selectors, and this is how I did it, but maybe I'm wrong.
Lead 2
This could be related to my way of assigning IPs manually, and the mess it can introduce into the routing table. When I play with the routing table, manually assigning a gateway (such as the public IP of the client), the ARP requests in tcpdump disappear and I see the ICMP requests instead - but absolutely nothing on the server.
Any thoughts on what I've done wrong?
Thanks

Jelastic - Collabora Online with Next Cloud without ssl (for testing)

For testing purposes, I want to install Collabora Online in a Jelastic environment.
I'm trying to follow these basic steps: https://www.collaboraoffice.com/code/quick-tryout-nextcloud-docker/
First I configure the topology with the Docker image given in the link.
Nextcloud is installed successfully after I visit the given URL.
Then I add the variable extra_params=--o:ssl.enable=false as stated in the instructions.
Then I try to map a port by adding an Endpoint:
It maps the port 9980 to the public port 11010.
Finally, I install the Collabora app on Nextcloud and configure the Collabora server URL on the dedicated Collabora settings page:
jelastic-node-ndd.com:11010
And I get this message when trying to open an OpenOffice document:
Failed to load Collabora Online - please try again later
I don't know how to investigate. When I try to reach the Collabora server in my browser on the given port, I get a connection failure.
We think the main reason for the issue is that the port mapping doesn't work in your case.
In other words, telnet $(hostname) 11010 says "connection refused" inside the container, since the mapping works correctly only from the Internet.
This can easily be overcome by adding an External IP. In that case, in the settings of "Collabora Online" you have to specify the URL http://EXT.IP:9980 and remove the mapping.
Another way is a mapping trick. In this case, you can keep only the internal IP and create the mapping as you did.
Then, edit the mapping and make the Private Port equal to the Public Port.
Further, inside the container add a NAT rule, like:
iptables -t nat -A DOCKER ! -i docker0 -p tcp -m tcp --dport 11010 -j DNAT --to-destination 172.21.0.2:9980
where 11010 is your mapped port and 172.21.0.2 is the IP you get when running iptables -L DOCKER -vnt nat.
Thus, the DOCKER chain should look like this:
root@node210795-nextcloud-test:~# iptables -L DOCKER -vnt nat
Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
19 1140 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0
106 6360 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:9980 to:172.21.0.2:9980
55 3300 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:11031 to:172.21.0.2:9980
As a result, the Collabora Online URL in your case can be left as jelastic-node-ndd.com:11010.
Besides this, you can face the issue described here
We were able to fix this issue using the article Setting up and configuring the collabora/code Docker image (use the configuration file directly). Before copying loolwsd.xml back to the Docker container (step 3) you might need to chmod this file:
chmod 666 loolwsd.xml
Note:
It's better to specify the additional parameter --restart always in step 5 of the Quick tryout with Nextcloud docker guide.
The variable DOCKER_EXPOSE_PORT should be left intact (80).
extra_params=--o:ssl.enable=false is a variable of collabora/code, so there is no need to specify it in Variables.
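One quick hedged test for reachability is to request the server's WOPI discovery endpoint from outside (hostname and port taken from the example above):
curl -v http://jelastic-node-ndd.com:11010/hosting/discovery   # should return XML if CODE is reachable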

access host's ssh tunnel from docker container

Using Ubuntu trusty, there is a service running on a remote machine that I can access via port forwarding through an SSH tunnel at localhost:9999.
I have a docker container running. I need to access that remote service via the host's tunnel, from within the container.
I tried tunneling from the container to the host with -L 9000:host-ip:9999, but then accessing the service through 127.0.0.1:9000 from within the container fails to connect. To check whether the port mapping was on, I tried
nc -luv -p 9999 # at host
nc -luv -p 9000 # at container
following this, paragraph 2, but there was no perceived communication, even when doing
nc -luv host-ip -p 9000
at the container
I also tried mapping the ports via docker run -p 9999:9000, but this reports that the bind failed because the host port is already in use (taken by the host's tunnel to the remote machine, presumably).
So my questions are
1 - How can I achieve the connection? Do I need to set up an SSH tunnel to the host, or can this be achieved with Docker port mapping alone?
2 - What's a quick way to test that the connection is up? Via bash, preferably.
Thanks.
Using your host's network as the network for your containers, via --net=host or in docker-compose via network_mode: host, is one option, but this has the unwanted side effects that (a) you now expose the container's ports on your host system and (b) you can no longer connect to containers that are not mapped onto your host network.
In your case, a quick and cleaner solution is to make your SSH tunnel "available" to your Docker containers (e.g. by binding ssh to the docker0 bridge) instead of exposing your Docker containers in your host environment (as suggested in the accepted answer).
Setting up the tunnel:
For this to work, retrieve the IP your docker0 bridge is using via:
ifconfig
you will see something like this:
docker0 Link encap:Ethernet HWaddr 03:41:4a:26:b7:31
inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0
Now you need to tell ssh to bind to this ip to listen for traffic directed towards port 9000 via
ssh -L 172.17.0.1:9000:host-ip:9999 user@remote-host
Without setting the bind_address, :9000 would only be available to your host's loopback interface and not per se to your docker containers.
Side note: You could also bind your tunnel to 0.0.0.0, which will make ssh listen to all interfaces.
Setting up your application:
In your containerized application use the same docker0 ip to connect to the server: 172.17.0.1:9000. Now traffic being routed through your docker0 bridge will also reach your ssh tunnel :)
For example, if you have a .NET Core application that needs to connect to a remote DB located at :9000, your "ConnectionString" would contain "server=172.17.0.1,9000;".
Forwarding multiple connections:
When dealing with multiple outgoing connections (e.g. a docker container needs to connect to multiple remote DB's via tunnel), several valid techniques exist but an easy and straightforward way is to simply create multiple tunnels listening to traffic arriving at different docker0 bridge ports.
Within your SSH tunnel command (ssh -L [bind_address:]port:host:hostport [user@]hostname), the port part of the bind_address does not have to match the hostport of the host and can therefore be freely chosen by you. So within your Docker containers just channel the traffic to different ports of your docker0 bridge and then create several SSH tunnel commands (one for each port you are listening to) that intercept data at these ports and forward it to the different hosts and hostports of your choice, as in the sketch below.
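A minimal sketch of that multi-tunnel setup (the hostnames, ports and tunnel host below are placeholders, not from the original answer):
# two tunnels on different docker0 ports, forwarding to different remote DBs
ssh -N -L 172.17.0.1:9001:db1.example.com:5432 user@tunnel-host &
ssh -N -L 172.17.0.1:9002:db2.example.com:3306 user@tunnel-host &
# containers then connect to 172.17.0.1:9001 and 172.17.0.1:9002 respectively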
On macOS (tested with Docker v19.03.2):
1) create a tunnel on the host
ssh -i key.pem username@jump_server -L 3336:mysql_host:3306 -N
2) from the container, you can use host.docker.internal or docker.for.mac.localhost or docker.for.mac.host.internal to reference the host.
For example:
mysql -h host.docker.internal -P 3336 -u admin -p
A note from the official docker-for-mac docs:
I WANT TO CONNECT FROM A CONTAINER TO A SERVICE ON THE HOST
The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker Desktop for Mac.
The gateway is also reachable as gateway.docker.internal.
I think you can do it by adding --net=host to your docker run. But see also this question: Forward host port to docker container
I'd like to share my solution to this. My case was as follows: I had a PostgreSQL SSH tunnel on my host and I needed one of my containers from the stack to connect to a database through it.
I spent hours trying to find a solution (Ubuntu + Docker 19.03) and failed. Instead of doing voodoo magic with iptables or modifying the settings of the Docker engine itself, I came up with a solution and was shocked I hadn't thought of it earlier. The most important thing was that I didn't want to use host mode: security first.
Instead of trying to allow a container to talk to the host, I simply added another service to the stack which creates the tunnel, so the other containers can talk to it easily without any hacks.
After configuring a host inside my ~/.ssh/config:
Host project-postgres-tunnel
    HostName remote.server.host
    User sshuser
    Port 2200
    ForwardAgent yes
    TCPKeepAlive yes
    ConnectTimeout 5
    ServerAliveCountMax 10
    ServerAliveInterval 15
And adding a service to the stack:
postgres:
  image: cagataygurturk/docker-ssh-tunnel:0.0.1
  volumes:
    - $HOME/.ssh:/root/ssh:ro
  environment:
    TUNNEL_HOST: project-postgres-tunnel
    REMOTE_HOST: localhost
    LOCAL_PORT: 5432
    REMOTE_PORT: 5432
  # uncomment if you wish to access the tunnel on the host
  #ports:
  #  - 5432:5432
The PHP container started talking through the tunnel without any problems:
postgresql://user:password@postgres/db?serverVersion=11&charset=utf8
Just remember to put your public key inside that host if you haven't already:
ssh-copy-id project-postgres-tunnel
I'm pretty sure this will work regardless of the OS used (macOS / Linux).
I agree with @hlobit that @B12Toaster's answer should be the accepted answer.
In case anyone hits this problem with a slightly different SSH tunnel setup, here are my findings. In my case, instead of creating a tunnel from the Docker host machine to the remote machine using ssh -L, I was creating a remote-forward SSH tunnel from the remote machine to the Docker host machine using ssh -R.
In this setup, sshd does NOT allow gateway ports by default, i.e. in the file /etc/ssh/sshd_config on the Docker host, GatewayPorts no should be changed to GatewayPorts yes or GatewayPorts clientspecified. I configured GatewayPorts clientspecified and set up the remote-forward SSH tunnel with ssh -R 172.17.0.1:dockerHostPort:localhost:sshClientPort user@dockerHost. Remember to restart sshd after changing /etc/ssh/sshd_config (sudo systemctl restart sshd).
Your Docker container should then be able to connect to the Docker host on 172.17.0.1:dockerHostPort, and this in turn gets tunnelled back to the SSH client's localhost:sshClientPort.
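To answer the asker's second question for this kind of setup, a quick hedged bash check from inside a container is a zero-I/O netcat probe (the container name and port below are placeholders):
docker exec -it mycontainer sh -c 'nc -zv 172.17.0.1 5432'   # reports success if the tunnel is up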
References:
https://www.ssh.com/ssh/tunneling/example
https://docs.docker.com/network/network-tutorial-host/
https://docs.docker.com/network/host/
My 2 cents for Ubuntu 18.04 - a very simple answer, with no need for extra tunnels, extra containers, extra Docker options or exposing the host.
Simply, when creating a reverse tunnel, make sure ssh binds to all interfaces, as by default it binds the ports of the reverse tunnel to localhost only. For example, in PuTTY make sure that the option Connection->SSH->Tunnels "Remote ports do the same (SSH-2 only)" is ticked.
This is more or less equivalent to specifying the binding address 0.0.0.0 for the remote part of the tunnel (more details here):
-R [bind_address:]port:host:hostport
However, this did not work for me until I enabled the GatewayPorts option in my sshd server configuration. Many thanks to Stefan Seidel for his great answer.
In short: (1) you bind the reverse tunnel to 0.0.0.0, and (2) you let the sshd server accept such tunnels.
Once this is done I can access my remote server from my docker containers via the docker gateway 172.17.0.1 and port bound to the host.
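For the plain OpenSSH equivalent of that PuTTY setting, a sketch (ports and hosts are placeholders; GatewayPorts must be enabled in the server's sshd_config as described above):
ssh -N -R 0.0.0.0:9999:localhost:9999 user@docker-host   # reverse tunnel bound to all interfaces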
On my side, running Docker in Windows Subsystem for Linux (WSL v1), I couldn't use the docker0 connection approach. host.docker.internal also doesn't resolve (latest Docker version).
However, I found out that I could directly use the host IP inside my Docker container.
Get your host IP (Windows cmd: ipconfig), e.g. 192.168.0.5
Bash into your container and test whether you can ping your host IP:
- docker exec -it d6b4be5b20f7 /bin/bash
- apt-get update && apt-get install iputils-ping
- ping 192.168.0.5
PING 192.168.0.5 (192.168.0.5) 56(84) bytes of data.
64 bytes from 192.168.0.5: icmp_seq=1 ttl=37 time=2.17 ms
64 bytes from 192.168.0.5: icmp_seq=2 ttl=37 time=1.44 ms
64 bytes from 192.168.0.5: icmp_seq=3 ttl=37 time=1.68 ms
Apparently, on Windows, you can directly connect from within containers to the host using the host IP.
In case anyone needs it (like I did), the solution for Windows and WSL is the same as @prayagupd mentioned for macOS:
Create an SSH tunnel to your remote service with whatever tool you prefer, on whatever port you prefer, for example 3300.
Then, from the Docker container, you can connect to e.g. a MySQL DB on tunnel port 3300 using the following command:
mysql -u user -p -h host.docker.internal -P 3300
An easy example to reproduce the situation and ssh to the host
Run a container using --network="host":
docker container run --network="host" --interactive --tty --rm ubuntu bash
Now you can access your host using localhost
Now suppose your host machine is a Linux machine with a public-private key pair for ssh access. Copy the contents of your private key file and recreate the key file inside the container. (However, this is just a demonstration; it is not a good way to copy key files.)
Now ssh into your host. Use localhost to access it.
ssh -i key_file.pem ec2-user@localhost

How to open incoming port 50070 in firewall (google compute engine)

I have single-node Hadoop installed on a Google Compute Engine instance and I want to open port 50070 on that machine to access the Hadoop dashboard. I configured the firewall rule as tcp:50070 in Compute Engine networks, but I am still unable to access the port from outside the network (i.e. via the Internet). I tried nmap on the public IP of my GCE instance and got a result in which only the SSH port is open; all other ports are filtered.
Note: I am using the Debian 7.5 image.
Make sure your daemon is listening on port 50070. If you have more than one network in your project, make sure the port is opened on the right network. You can run the following commands to check information about your instance and network:
lsof -i
gcutil --project= getinstance
gcutil --project= listnetworks
gcutil --project= listfirewalls
gcutil --project= getfirewall
Check whether the IP/port is allowed in iptables:
iptables -L
will show you all the rules.
To allow port in iptables you can do the following:
sudo iptables -A INPUT -p tcp -m tcp --dport 50070 -j ACCEPT
sudo iptables-save -c
Short answer
In addition to configuring the firewall rule in the GCE web console, make sure that your server is listening on 0.0.0.0 instead of 127.0.0.1.
Long answer
In the context of servers, 0.0.0.0 means all IPv4 addresses on the local machine. If a host has two IP addresses, 192.168.1.1 and 10.1.2.1, and a server running on the host listens on 0.0.0.0, it will be reachable at both of those IPs - Source
In contrast, 127.0.0.1 is the IP address used to establish a connection to the same machine; this address is usually referred to as localhost.
It's often used when you want a network-capable application to only serve clients on the same host. A process that is listening on 127.0.0.1 for connections will only receive local connections on that socket. - Source
Hence, if you try to establish a connection to your server from the Internet while your server is listening on 127.0.0.1 on your GCE machine, then from the server's point of view no request is ever received, and consequently the Google Cloud firewall will refuse the connection because there is no server listening on the opened port (in your case, 50070).
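A quick hedged check of which address the daemon is actually bound to (the Hadoop property name varies by version; dfs.namenode.http-address is the Hadoop 2.x name):
sudo netstat -tlnp | grep 50070
# 0.0.0.0:50070   -> reachable from outside once the firewall allows it
# 127.0.0.1:50070 -> local-only; set dfs.namenode.http-address to 0.0.0.0:50070
#                    in hdfs-site.xml and restart the namenode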
I hope this answer helps to solve your problem. Best regards.

How to configure direct http access to EC2 instance?

This is a very basic Amazon EC2 question, but I'm stumped so here goes.
I want to launch an Amazon EC2 instance and allow access to HTTP on ports 80 and 8888 from anywhere. So far I can't even connect to those ports from the instance itself using its own IP address (though it will connect via localhost).
I configured the "default" security group for HTTP using the standard HTTP option on the management console (and also SSH).
I launched my instance in the default security group.
I connected to the instance over SSH (port 22) twice; in one window I launched an HTTP server on port 80, and in the other window I verified that I could connect to it via localhost.
However, when I try to access HTTP from the instance (or anywhere else) using either the public DNS or the private IP address, I get "connection refused".
What am I doing wrong, please?
Below is a console fragment showing the wget that succeeds and the two that fail, run from the instance itself.
--2012-03-07 15:43:31-- http://localhost/
Resolving localhost... 127.0.0.1
Connecting to localhost|127.0.0.1|:80... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: /__whiff_directory_listing__ [following]
--2012-03-07 15:43:31-- http://localhost/__whiff_directory_listing__
Connecting to localhost|127.0.0.1|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: “__whiff_directory_listing__”
[ <=> ] 7,512 --.-K/s in 0.03s
2012-03-07 15:43:31 (263 KB/s) - “__whiff_directory_listing__” saved [7512]
[ec2-user@ip-10-195-205-30 tmp]$ wget http://ec2-50-17-2-174.compute-1.amazonaws.com/
--2012-03-07 15:44:17-- http://ec2-50-17-2-174.compute-1.amazonaws.com/
Resolving ec2-50-17-2-174.compute-1.amazonaws.com... 10.195.205.30
Connecting to ec2-50-17-2-174.compute-1.amazonaws.com|10.195.205.30|:80... failed: Connection refused.
[ec2-user@ip-10-195-205-30 tmp]$ wget http://10.195.205.30/
--2012-03-07 15:46:08-- http://10.195.205.30/
Connecting to 10.195.205.30:80... failed: Connection refused.
[ec2-user@ip-10-195-205-30 tmp]$
The standard TCP sockets interface requires that you bind to a particular IP address when you send or listen. There are a couple of somewhat special addresses: localhost (which you're probably familiar with), i.e. 127.0.0.1, and the special address 0.0.0.0, or INADDR_ANY (Internet protocol shorthand for ANY ADDRESS). The latter is a way to listen on ANY or, more commonly, ALL addresses on the host - a way of telling the kernel/stack that you're not interested in a particular IP address.
So, when you set up a server that listens on "localhost", you're telling the service to use the special reserved address that can only be reached by users of this host; while it exists on every host, making a connection to localhost will only ever reach the host you're making the request from.
When you want a service to be reachable everywhere (on the local host, on all interfaces, etc.) you can specify 0.0.0.0.
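A tiny way to see the difference for yourself with netcat (flag syntax varies between netcat builds; this is the BSD/OpenBSD form):
nc -l 127.0.0.1 8888   # only reachable via localhost on the instance itself
nc -l 0.0.0.0 8888     # reachable on every interface the Security Group allows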
(0) It's silly, but the first thing you need to do is make sure that your web server is running.
(1) You need to edit your Security Group to let incoming HTTP packets access your website. If your website is listening on port 80, you need to edit the Security Group to open access to port 80 as mentioned above. If your website is listening on some other port, then you need to edit the Security Group to access that other port.
(2) If you are running a Linux instance, the iptables firewall may be running by default. You can check whether this firewall is active by running
sudo service iptables status
on the command line. If you get output, the iptables firewall is running. If you get the message "Firewall not running", that's pretty self-explanatory. In general, the iptables firewall runs by default.
You have two options: knock out the firewall or edit the firewall's configuration to let HTTP traffic through. I opted to knock out the firewall as the simpler option (for me).
sudo service iptables stop
There is no real security risk in shutting down iptables because, if active, it merely duplicates the functionality of Amazon's firewall, which uses the Security Group to generate its configuration. We are assuming here that Amazon AWS doesn't misconfigure its firewalls - a very safe assumption.
(3) Now, you can access the URL from your browser.
(4) Microsoft Windows servers also run their own personal firewalls by default, so you'll need to adjust the Windows Server firewall, too.
Correction: by default, AWS does not fire up server firewalls such as iptables (CentOS) or UFW (Ubuntu) when you create new EC2 instances - that's why EC2 instances in the same VPC can ssh into each other and you can "see" a web server fired up on another EC2 instance in the same VPC.
Just make sure that your RESTful API is listening on all interfaces, i.e. 0.0.0.0:portID.
As you are getting connection refused (packets are being rejected), I bet it is iptables causing the problem. Try running
iptables -I INPUT -p tcp --dport 80 -j ACCEPT
iptables -I INPUT -p tcp --dport 8888 -j ACCEPT
and test the connection.
You will also need to add those rules permanently, which you can do by adding the above lines to e.g. /etc/sysconfig/iptables if you are running Red Hat.
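On Red Hat/CentOS 6-style systems there is also a helper that writes the currently loaded rules out to /etc/sysconfig/iptables for you:
sudo service iptables save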
Apparently I was "binding to localhost" whereas I needed to bind to 0.0.0.0 to respond to port 80 for the all incoming TCP interfaces (?). This is a subtlety of TCP/IP that I don't fully understand yet, but it fixed the problem.
Had to do the following:
1) Enable HTTP access in the instance config; it wasn't on by default, only SSH.
2) I was running a nodejs server, so the port binding went 80 -> 3000; I ran the following commands to fix that:
iptables -F
iptables -I INPUT -p tcp --dport 80 -j ACCEPT
sudo service iptables-persistent flush
Amazon support answered it and it worked instantly:
I replicated the issue on my end on a test Ubuntu instance and was able to solve it. The issue was that in order to run Tomcat on a port below 1024 on Ubuntu/Unix, the service needs root privileges, which is generally not recommended, as running a process on port 80 with root privileges is an unnecessary security risk.
What we recommend is to use a port redirection via iptables:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
I hope the above information helps.
