I have an EC2 instance behind a load balancer. The security group attached to it allows inbound connections on port 6379 (both IPv4 and IPv6). I am able to connect with the Redis CLI:
redis-cli -h ec2-**-**-**-*.us-west-1.compute.amazonaws.com -p 6379
However, when I try to connect with Node.js and express-session I get a ConnectionTimeoutError on EC2, but locally it works fine:
const redisClient = createClient() // uses default port localhost:6379
redisClient.connect().catch(console.error)
If there is a race condition here, as others have mentioned, why does it happen on EC2 and not locally? Is the default localhost incorrect, since there is a load balancer in front of the instance?
Based on your comments, I'd say the problem is the load balancer. Redis speaks its own protocol (RESP) directly over TCP. An Application Load Balancer only handles HTTP/HTTPS traffic, so it cannot carry this protocol. Use a Network Load Balancer instead, with a TCP listener, and make sure your security group rule allows TCP traffic on port 6379.
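For reference, a hedged sketch of creating such a TCP listener with the AWS SDK for JavaScript v3; the ARNs below are placeholders, not real resources:

const { ElasticLoadBalancingV2Client, CreateListenerCommand } =
  require('@aws-sdk/client-elastic-load-balancing-v2')

const elb = new ElasticLoadBalancingV2Client({ region: 'us-west-1' })

// Create a plain TCP listener on the NLB that forwards port 6379
// traffic to the target group containing the Redis instance.
elb.send(new CreateListenerCommand({
  LoadBalancerArn: 'arn:aws:elasticloadbalancing:...', // placeholder NLB ARN
  Protocol: 'TCP', // NLB listeners speak plain TCP, unlike an ALB's HTTP/HTTPS
  Port: 6379,
  DefaultActions: [{
    Type: 'forward',
    TargetGroupArn: 'arn:aws:elasticloadbalancing:...', // placeholder target group ARN
  }],
}))
  .then(() => console.log('TCP listener created'))
  .catch(console.error)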
In a setup like this, the Redis client should be instantiated with an explicit host and port (this covers both IPv4 and IPv6 inbound traffic):
createClient({ socket: { host: '127.0.0.1', port: 6379 }, legacyMode: true })
Since Redis is self-hosted on EC2 with a load balancer in front of the instance, localhost may not resolve to 127.0.0.1 as the loopback address. This means that the default createClient(), with no host or port specified, might try to establish a connection to a different internal address.
(Make sure to allow all inbound TCP traffic on port 6379, or whichever port you are using.)
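For example, a minimal sketch assuming node-redis v4 with Redis running on the same instance (the host, port, and legacyMode flag below are illustrative):

const { createClient } = require('redis')

const redisClient = createClient({
  socket: {
    host: '127.0.0.1', // explicit loopback; use the Redis host's private IP if it runs elsewhere
    port: 6379,
  },
  legacyMode: true, // needed by connect-redis v6 and older when used with express-session
})

redisClient.on('error', (err) => console.error('Redis client error:', err))
redisClient.connect().catch(console.error)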
I'm setting up a new server in AWS (Ubuntu 18.04). I created the inbound rules permitting the port and attached them to the server:
Custom TCP Rule TCP 7096 0.0.0.0/0
Custom TCP Rule TCP 7096 ::/0
and I have a PHP script file that has to be accessed on that port. I used UFW and created rules permitting port 7096, and finally I disabled UFW entirely. I'm using a site that checks whether the port is open, and no matter what I do, access is always denied. I have tried many solutions found on the net, but nothing works. All other rules work fine for ports 80, 22, and 3306, but this one won't open.
Assuming your security group is configured properly (and based on the inbound rules you've defined, it probably is), it could be that your EC2 instance isn't reachable from the internet.
Is the instance in a public subnet? Does it have a public IP assigned? Are you testing connectivity using the IP or the AWS domain name? Is it in the default VPC? Or have you made your own?
There's a myriad of reasons why it might be failing, so it will be difficult to figure out without some more information.
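In the meantime, a quick way to test raw TCP reachability from an external machine is a short Node.js script like this (the hostname and port below are placeholders for your instance's):

const net = require('net')

// Attempt a plain TCP connection to the instance on the suspect port.
const socket = net.connect({ host: 'ec2-xx-xx-xx-xx.compute.amazonaws.com', port: 7096 }, () => {
  console.log('TCP connection established: the port is reachable')
  socket.end()
})

socket.setTimeout(5000, () => {
  console.error('Connection timed out: likely a firewall, routing, or listener issue')
  socket.destroy()
})

socket.on('error', (err) => console.error('Connection failed:', err.message))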
Before anything else, I have read about 30+ StackOverflow answers and none of them seem to address my particular flavour of this problem. Below I list all the answers I have already tried before asking for more advice.
I am trying to reach my EC2 instance over a socket from a different machine, using PHP's fsockopen pointed at my EC2 public IP (an Elastic fixed IP address, 54.68.166.28) and the designated port.
Behaviour: from within the instance, I can access the instance and the ChatScript application running inside it via the public IP, directly in the browser. But if I run the exact same webpage, with the exact same socket call, on an external machine targeting my instance's IP address (double-checked it is the correct one), I get a 500 Internal Server Error when connecting on port 1024 (my custom TCP connection), and another 500 on port 443 (HTTPS). On port 80 (HTTP) it hangs 20+ seconds, then gives me a 200 success status, except it does not connect properly to the application and responds with nothing.
Troubleshooting:
I have set up my security group rules to accept incoming TCP from anywhere:
HTTP (80) TCP 80 0.0.0.0/0
HTTP (80) TCP 80 ::/0
HTTPS (443) TCP 443 0.0.0.0/0
HTTPS (443) TCP 443 ::/0
Custom (1024) TCP 1024 0.0.0.0/0
Custom (1024) TCP 1024 ::/0
Outbound rules span port range 0 - 65535 with destination 0.0.0.0/0, so should work.
I SSH into the instance on port 22 every time without problems. SCP also works fine.
Checked $ sudo service httpd status: running, which is why my UI is served fine.
Checked $ sudo /sbin/iptables -L: all my policies are set to ACCEPT, with no rules defined.
Checked $ netstat --listen -p: the app I am targeting is listening on 0.0.0.0:1024.
Checked Network Utility: ports 80 and 1024 are registered as open; port 443 is not. Pinging did not work for any of them, with 100% packet loss.
Checked my instance is associated to the security group with all the permissions - it is. IP is clearly correct or I could neither ssh nor serve webpages... which I can.
I stopped and restarted the instance.
I replaced the instance.
I think this is due diligence before asking for help... now I need it!
I realised my configuration was correct: the problem was that the hosted domain I used for the GUI, like most hosted domains, does not open custom ports, so TCP on port 1024 did not work.
We did a network traffic capture while using the Discovery Node API and found that port access was attempted on 621XX ports (62111, 62112, etc.), and we were wondering whether there is a specific set of ports the Discovery service typically uses.
This information would help immensely when firewall and corporate proxy settings come in to play.
The Watson Discovery API is an HTTPS service, so it only needs TCP port 443 to work. I would suggest that the activity on ports 621** is from dynamic or private (ephemeral) ports that your app is using to make the connections. They are not ports that need to be punched through firewalls; they are merely the local ports at which HTTPS connections to the remote server's port 443 are terminated.
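To see this in action, here is a small Node.js sketch (the URL is just a placeholder): an outbound HTTPS request gets a random high local port from the OS's ephemeral range, while the remote port stays 443.

const https = require('https')

https.get('https://example.com', (res) => {
  // localPort is the ephemeral port the OS picked for this connection;
  // remotePort is the server's HTTPS port, always 443 here.
  console.log('local (ephemeral) port:', res.socket.localPort)
  console.log('remote port:', res.socket.remotePort)
  res.resume()
})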
I have a web app running on an Amazon EC2 instance on port 8080. The web app, while starting, starts a Socket.IO server listening on port 9092.
In the client file connecting to the Socket.IO server I have this:
io.connect('http://<IPADDRESS>:9092');
Unfortunately, this request is getting blocked, as shown:
I thought the problem was the inbound rules of my EC2 instance; I therefore allowed traffic for the purpose, as shown:
But the requests are still blocked...
NOTE: when my app is hosted locally, everything works fine.
So why is Amazon behaving this way, and what am I supposed to do to get around this issue?
UPDATE:
netstat -a -n | grep 9092 outputs this on the instance:
Also have a look at what Firefox shows me about the request attempt timings:
It turns out I was binding my server to the localhost address, as if it were only ever accessed from localhost.
Thanks to @robertklep's comment, I bound the server to the EC2 instance address and it's working now.
The easiest way to establish a socket connection with your server from outside of EC2 is to listen on all interfaces:
server.listen(3000, '0.0.0.0');
This is only recommended for testing and development environments. Do not use it in production.
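For completeness, a minimal sketch of the Socket.IO case above, assuming socket.io v4 (the port and the permissive CORS setting are illustrative):

const http = require('http')
const { Server } = require('socket.io')

const httpServer = http.createServer()
const io = new Server(httpServer, { cors: { origin: '*' } }) // open CORS for testing only

io.on('connection', (socket) => {
  console.log('client connected:', socket.id)
})

// '0.0.0.0' binds to all IPv4 interfaces so external clients can reach
// the server; restrict this (or front it with a proxy) in production.
httpServer.listen(9092, '0.0.0.0')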
I am trying to run a socket server on an Amazon Web Services EC2 instance. The socket server runs just fine on its own, and telnetting locally on the instance connects, but telnetting to the socket from the outside fails. I have gone into the security groups to ensure the ports I am using are open for both TCP and UDP (though the socket server is configured for TCP). Is there something else I am missing?
The server might be listening on the loopback interface, or on IPv6 only, by default. You can check by running netstat --listen -p, which shows which program is listening on which address/port. How to make the program listen on the external IPv4 interface depends on the program/programming language.
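In Node.js, for example, the difference looks like this (the ports are arbitrary):

const net = require('net')

// Reachable only from the instance itself:
net.createServer().listen(7000, '127.0.0.1')

// Reachable from any interface the security group allows:
net.createServer().listen(7001, '0.0.0.0')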