I have two Ubuntu instances on EC2 and I want to cluster them.
One IP will be referred to as X (the "inet addr" IP that ifconfig displays) and its public IP will be referred to as PX.
The other IP is Y and its public IP is PY.
So now I did the following on both machines:
installed the latest RabbitMQ
installed the management plugin
opened ports 5672 (RabbitMQ) and 15672 (management plugin)
connected to RabbitMQ with my test app
connected to the UI
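(For reference, on Ubuntu that setup looks roughly like this; the exact install steps may differ if you pulled the latest release from rabbitmq.com instead of the distro package:)
sudo apt-get update
sudo apt-get install -y rabbitmq-server           # RabbitMQ broker
sudo rabbitmq-plugins enable rabbitmq_management  # management UI on 15672
sudo ufw allow 5672/tcp                           # AMQP
sudo ufw allow 15672/tcp                          # management UI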
So now for the cluster.
I ran the following commands.
On X:
rabbitmqctl cluster_status
This returned the node name, which was 'rabbit@ip-X' (where X is the inner IP).
On Y:
rabbitmqctl stop_app
rabbitmqctl join_cluster --ram rabbit@ip-X
I got
"The nodes provided are either offline or not running"
Obviously this is the private IP, so the other instance can't connect.
How do I tell the second instance where the first is located?
EDIT
The firewall is completely off, and I have a telnet connection from one machine to the other
(to ports 5672 (RabbitMQ), 15672 (UI), and 4369 (cluster port)).
The cookie is the same on both servers (and the hash of the cookie in the logs matches).
I recorded the TCP traffic while running the join_cluster command and watched it in Wireshark. I saw the following (no ACK):
http://i.imgur.com/PLezLvQ.png
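(For anyone reproducing this, a capture like that can be taken with tcpdump, roughly as follows; the interface name eth0 is an assumption:)
sudo tcpdump -i eth0 -n -w join_cluster.pcap 'port 4369 or port 25672'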
so I disabled the firewall using
sudo ufw disable
(just for the tests) and re-ran
sudo rabbitmqctl join_cluster --ram rabbit@ip-X
and the connection was created, but it was terminated by the remote RabbitMQ,
as seen here:
http://i.imgur.com/dxJLNfH.png
and the message is still
"The nodes provided are either offline or not running"
(the remote rabbit app is definitely running)
You need to make sure the nodes can access each other. RabbitMQ uses distributed Erlang primitives for communication across the nodes, so you also have to open up a few ports in the firewall. See:
http://learnyousomeerlang.com/distribunomicon#firewalls
for details.
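For example, with ufw on Ubuntu the relevant ports can be opened like this (a sketch; 25672 is the default inter-node distribution port on recent RabbitMQ versions, so check which port range your setup actually uses):
sudo ufw allow 4369/tcp    # epmd, the Erlang port mapper daemon
sudo ufw allow 25672/tcp   # Erlang distribution (inter-node) traffic
sudo ufw allow 5672/tcp    # AMQP clients
sudo ufw allow 15672/tcp   # management UI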
You should also use the same data center for your nodes in the cluster, since RabbitMQ does not handle network partitions well. If your nodes are in different data centers, use the Shovel or Federation plugin instead of clustering to replicate data.
Edit: don't forget to use the same Erlang cookie on all nodes, see http://www.rabbitmq.com/clustering.html for details.
The issue is probably the TCP ports that need to be opened.
You should do the following:
1) Create a security group for the RabbitMQ servers (both will use it);
we will call it: rabbit-sg
2) In the security group, define the following rules:
Type             Protocol  Port range  Source
All TCP          TCP       0 - 65535   sg-xxxx (rabbit-sg)
SSH              TCP       22          0.0.0.0/0
Custom TCP Rule  TCP       4369        0.0.0.0/0
Custom TCP Rule  TCP       5672        0.0.0.0/0
Custom TCP Rule  TCP       15672       0.0.0.0/0
Custom TCP Rule  TCP       25672       0.0.0.0/0
Custom TCP Rule  TCP       35197       0.0.0.0/0
Custom TCP Rule  TCP       55672       0.0.0.0/0
3) Make sure both EC2 instances use this security group;
note that we opened all TCP traffic between the two instances.
4) Make sure the RabbitMQ (Erlang) cookie is the same on both, and reboot the joining EC2 instance
after changing it.
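For example, checking and syncing the cookie looks roughly like this (assuming the Debian/Ubuntu package layout, where the cookie lives in /var/lib/rabbitmq/.erlang.cookie):
# on the first node
sudo cat /var/lib/rabbitmq/.erlang.cookie
# paste the same value into /var/lib/rabbitmq/.erlang.cookie on the joining node,
# then restart RabbitMQ there (or reboot the instance, as above)
sudo service rabbitmq-server restart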
Related
I have an EC2 instance behind a load balancer. The security group attached to it allows inbound connections (both ipv4 and ipv6 on port 6379). I am able to connect using redis-cli:
redis-cli -h ec2-**-**-**-*.us-west-1.compute.amazonaws.com -p 6379
However, when I try to connect with Node.js and express-session, I get a ConnectionTimeoutError on EC2, but locally it works fine:
const redisClient = createClient() // uses default port localhost:6379
redisClient.connect().catch(console.error)
If there is a race condition here, like others mentioned, why does this race condition happen on EC2 and not locally? Is the default localhost incorrect since there is a load balancer in front of the instance?
Based on your comments, I'd say the problem is the load balancer. Redis communicates over a TCP-based protocol. An ALB is only for HTTP/HTTPS traffic, so it cannot handle this protocol. Use a Network Load Balancer instead, with a TCP listener. Also make sure your security group rule allows TCP traffic on port 6379.
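Once a Network Load Balancer with a TCP listener on 6379 is in place, you can check the path end to end with redis-cli (the load balancer DNS name below is just a placeholder):
redis-cli -h my-nlb-1234567890.elb.us-west-1.amazonaws.com -p 6379 ping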
The Redis client should be instantiated explicitly in a setup like this one (covers both ipv4 and ipv6 inbound traffic):
createClient({ socket: { host: '127.0.0.1', port: 6379 }, legacyMode: true })
Since redis is self-hosted on EC2 with a load balancer in front of the instance, localhost may not be mapped to 127.0.0.1 as a loopback address. This means that the default createClient(), without a host or port specified, might try to establish a connection to a different internal loopback address.
(Make sure to allow inbound traffic to TCP 6379, or whatever port you are using.)
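To confirm what address redis is actually bound to on the instance, and what localhost resolves to there, something like this helps:
sudo netstat -ntlp | grep 6379   # shows the bind address, e.g. 127.0.0.1:6379 vs 0.0.0.0:6379
getent hosts localhost           # shows what localhost resolves to on this box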
I'm setting up a new server on AWS (Ubuntu 18.04). I created the inbound rules permitting the port and attached them to the server:
Custom TCP Rule TCP 7096 0.0.0.0/0
Custom TCP Rule TCP 7096 ::/0
I have a PHP script that has to be accessed on that port. I used UFW to create rules permitting port 7096, and finally I disabled UFW altogether. I'm using a site that checks whether the port is open, and no matter what I do, access is always denied. I have tried many solutions found on the net but nothing works. All other rules work fine for ports 80, 22, and 3306, but for this one, no way.
Assuming your security group is configured properly, which it seems like it probably is based on the inbound rules you've defined, it could be that your EC2 instance isn't reachable from the internet.
Is the instance in a public subnet? Does it have a public IP assigned? Are you testing connectivity using the IP or the AWS domain name? Is it in the default VPC? Or have you made your own?
There's a myriad of reasons why it might be failing, so it will be difficult to figure out without some more information.
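A few quick checks can help narrow it down (run the first two on the instance and the last from an external machine; the URL path is a placeholder):
sudo ss -tlnp | grep 7096          # is anything actually listening on 7096?
sudo ufw status verbose            # is ufw really inactive / allowing 7096?
curl -v http://<public-ip>:7096/   # does the request get through from outside?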
Before anything else, I have read about 30+ StackOverflow answers and none of them seem to address my particular flavour of this problem. Below I list all the answers I have already tried before asking for more advice.
I am trying to access my EC2 instance via a socket in PHP from a different machine using fsockopen, pointed at my EC2 public IP (I have a fixed Elastic IP address, 54.68.166.28) and the designated port.
Behaviour: I can access the instance, and the ChatScript application running inside it, from within the instance via the public IP directly in the browser. But if I run the exact same webpage with the exact same socket call on an external machine targeting my instance's IP address (double-checked it is the correct one), I get a 500 Internal Server Error when connecting on port 1024 (for my custom TCP connection) and another 500 on port 443 (HTTPS). On port 80 (HTTP) it hangs 20+ seconds, then gives me status 200 success, except it does not connect properly to the application and responds with nothing.
Troubleshooting:
I have set up my security group rules to accept incoming TCP from anywhere:
HTTP (80) TCP 80 0.0.0.0/0
HTTP (80) TCP 80 ::/0
HTTPS (443) TCP 443 0.0.0.0/0
HTTPS (443) TCP 443 ::/0
Custom (1024) TCP 1024 0.0.0.0/0
Custom (1024) TCP 1024 ::/0
Outbound rules span port range 0 - 65535 with destination 0.0.0.0/0, so should work.
I ssh every time without problems into the instance on port 22. SCP also works fine.
Checked $ sudo service httpd status: running, which is why my UI is served fine.
Checked $ sudo /sbin/iptables -L and all my policies are set to ACCEPT with no rules.
Checked $ netstat --listen -p and the app I am targeting is listening on 0.0.0.0:1024.
Checked Network Utility and ports 80 and 1024 are registered as open. Port 443 is not. Pinging did not work for any of them, with 100% packet loss.
Checked my instance is associated to the security group with all the permissions - it is. IP is clearly correct or I could neither ssh nor serve webpages... which I can.
I stopped and restarted the instance.
I replaced the instance.
I think this is due diligence before asking for help... now I need it!
I realised my configuration was correct: the problem was that the hosted domain I used for the GUI, like most hosted domains, does not open custom ports, so the TCP connection did not work.
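One quick way to confirm that conclusion is to test the custom port directly from the external machine, bypassing the hosted domain entirely, for example:
nc -vz 54.68.166.28 1024   # succeeds when connecting straight to the instance
If that succeeds, the instance and security group are fine and the restriction is on the hosting side.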
Hello, today I configured a VPS on Google Cloud and installed the Vesta control panel, but the problem is that HTTPS does not open: neither the server IP nor the domain itself loads over HTTPS. I set up the Google Cloud firewall and opened ports 80 and 443, but the site still does not open over HTTPS, whether by domain or by server IP. Online services report that port 443 is closed, yet the server settings, the Google firewall, and iptables all say that port 443 is open (I checked port 443 with several services), and in the browser neither the server IP nor the domain opens over HTTPS. Please tell me how to open port 443?
The same applies to ports 8443 and 8080.
I am not able to comment but here are some steps that might help to isolate the issue:
Check whether the port is open, closed, or filtered using nmap:
nmap [ip_address]
Firewall rules are defined at the network level, so make sure you follow this document while creating the firewall rules to allow incoming traffic on TCP ports 80 and 443 (same for other ports). In that document, in step 11, choose "Specified protocols and ports" and enter tcp:80, tcp:443.
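Equivalently, the rule can be created from the gcloud CLI, roughly like this (the rule name and network are assumptions):
gcloud compute firewall-rules create allow-web --direction=INGRESS --network=default --allow=tcp:80,tcp:443 --source-ranges=0.0.0.0/0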
As you previously stated, you need to make sure there is no firewall running inside the VM blocking those connections.
You also need to verify whether the application running on your VPS is listening on port 443. To check this, try this command:
sudo netstat -ntlp | grep LISTEN
In the output, if you don't see your application next to the port number, check whether your VPS is correctly configured to use the ports your application needs.
I was having the same issue with Nginx, and finally found the root cause to be the firewall (GCP VM firewall) giving the rule a low priority: I had 65534 (which is about the lowest priority possible) for the "Ingress 443" rule, which blocked the incoming SSL traffic. When I set this rule's priority to 1, traffic started flowing and the issue was sorted.
What finally helped me was https://cloud.google.com/vpc/docs/using-firewalls
Thanks @Md Zubayer for the tip.
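For reference, the priority of an existing rule can also be changed from the gcloud CLI (the rule name here is a placeholder):
gcloud compute firewall-rules update allow-https-ingress --priority=1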
I am trying to run a socket server on an Amazon Web Services EC2 instance. The socket server runs just fine on its own, and telnetting locally on the instance connects, but trying to telnet to the socket from the outside fails. I have gone into the security groups to ensure the ports I am using are open for both TCP and UDP (though the socket server is configured for TCP). Is there something else I am missing?
The server might be listening on the loopback interface or on ipv6 by default. You can check that by running netstat --listen -p, which will show you which program is listening on which address/port. How to make the program listen on the external ipv4 interface depends on the program/programming language.
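To interpret the output, look at the local address column (the port below is a placeholder):
sudo netstat -ltnp | grep <your-port>
# 127.0.0.1:<port> or ::1:<port>  -> loopback only, unreachable from outside
# 0.0.0.0:<port> (or :::<port>)   -> all interfaces, reachable once the security group allows it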