RabbitMQ cluster on EC2 without Route 53

I'm trying to set up a clustered deployment of RabbitMQ in a VPC on EC2, based on the documentation here (https://www.rabbitmq.com/clustering.html) and here (https://www.rabbitmq.com/ec2.html).
We currently don't have Route 53 set up within our VPC and rely on the private IP addresses for connections between instances.
I've been trying to get the cluster working without setting up Route 53 by using the private IP address as the hostname as follows:
(Assuming private IP address is 10.0.1.33)
Alter /etc/hostname changing
ip-10-0-1-33
to
10.0.1.33
Alter /etc/hosts changing
127.0.0.1 localhost
to
127.0.0.1 localhost 10.0.1.33
Change the hostname, i.e. sudo hostname 10.0.1.33 (or reboot the instance to pick up the new hostname)
Add file rabbitmq-env.conf to /etc/rabbitmq with contents:
USE_LONGNAME=true
This seems to work and allows me to cluster the Rabbit nodes using rabbitmqctl, since Rabbit treats the private IP address as a fully qualified domain name (USE_LONGNAME is necessary, or else Rabbit just uses the part of the IP address before the first dot) and the IP addresses are resolvable in the VPC. I get nodes named rabbit@10.0.1.33 etc.
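For reference, the whole sequence as a shell sketch (10.0.1.33 is the example IP; the sed assumes the exact /etc/hosts line shown above, and clustering also assumes the Erlang cookie matches across nodes):
# set the private IP as the hostname
echo "10.0.1.33" | sudo tee /etc/hostname
sudo sed -i 's/^127\.0\.0\.1 localhost$/127.0.0.1 localhost 10.0.1.33/' /etc/hosts
sudo hostname 10.0.1.33
# make RabbitMQ treat the IP as a fully qualified name
echo "USE_LONGNAME=true" | sudo tee /etc/rabbitmq/rabbitmq-env.conf
# then, on the second node, join the first:
sudo rabbitmqctl stop_app
sudo rabbitmqctl join_cluster rabbit@10.0.1.33
sudo rabbitmqctl start_app
sudo rabbitmqctl cluster_status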
My question is - is there anything I'm missing here or is this a reasonable approach?

Related

Cannot connect to Elasticsearch EC2 port 9200 using public ip

I'm having problems trying to connect to Elasticsearch (ES) on an EC2 instance from my local Linux box via the EC2 instance's public IP, i.e. curl [PUBLIC_IP]:9200
I followed the steps in this guide: https://github.com/miztiik/elk-stack/tree/master/ElasticSearch.
My ES version is 6.8.9
Here's what's working and what's not:
On ES EC2 instance: curl localhost:9200 works
On another instance in the same VPC: curl [PUBLIC_IP]:9200 works
On my local Linux box: curl [PUBLIC_IP]:9200 doesn't work; however, telnet [PUBLIC_IP] 9200 works, i.e. it connects and gives me the escape character '^]'
My /etc/elasticsearch/elasticsearch.yml config has the following:
http.enabled: true
http.port: 9200
network.host: 0.0.0.0
http.cors.allow-origin: "*"
http.cors.enabled: true
There is only one (new) security group attached to the EC2 instance, which has the inbound rules shown in a screenshot (not reproduced here).
I also confirmed that the EC2 instance is in a public subnet i.e. connected to an internet gateway.
Thanks for any help.
Update
I also installed Apache httpd on the instance and rechecked everything. Here is the current state of things:
I can ping, telnet and connect to the web server (:80) from the outside.
I cannot connect to Elasticsearch (:9200) or Kibana (:5601) from the outside. However, I can do all of these from another instance within the VPC.
This sounds firewall related.
Check the EC2 security group and either modify the default security group or create a new one and associate it with your instance.
For a test, set the inbound rule for your port to:
0.0.0.0/0 IPv4
And set the network host as follows:
network.host: _ec2_ # if using the discovery-ec2 plugin
Otherwise:
network.host: "{elastic_ip}"
If your EC2 instance doesn't have public DNS, you will have to edit your /etc/hosts file and add the IP address of your instance.
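For the test rule above, a minimal AWS CLI sketch (the group ID is a placeholder):
# open port 9200 to the world - testing only, lock this down afterwards
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 9200 --cidr 0.0.0.0/0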
network.bind_host
This specifies which network interface(s) a node should bind to in order to listen for incoming requests. A node can bind to multiple interfaces, e.g. two network cards, or a site-local address and a local address. Defaults to network.host.
network.publish_host
The publish host is the single interface that the node advertises to other nodes in the cluster, so that those nodes can connect to it. Currently an Elasticsearch node may be bound to multiple addresses, but only publishes one.
https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html
https://discuss.elastic.co/t/elasticsearch-only-accessible-from-localhost/65782/3
https://www.elastic.co/blog/running-elasticsearch-on-aws
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/working-with-security-groups.html#describing-security-group
How do I enable remote access/request in Elasticsearch 2.0?
I had the same issue on AWS. Try using the public DNS or the private IP in lieu of the public IP to connect from another EC2 instance in the same VPC.
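For example (addresses hypothetical), from another EC2 instance in the same VPC:
# the private IP works directly
curl http://172.31.5.10:9200
# the public DNS name also works, since inside the VPC it resolves to the private IP
curl http://ec2-12-34-56-78.us-east-1.compute.amazonaws.com:9200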

Amazon aws route53, redirect subdomain to ec2 app running under specific port

I have a domain name mydomain.com registered on amazon route 53.
I have an EC2 instance on which I installed a Docker Portainer image on port 9000.
My Docker image runs perfectly under the EC2 public IP address:
http://xxx.xxx.xxx.xxx:9000
What I want now is to create a subdomain, portainer.mydomain.com, and point it to my EC2 Portainer instance.
When I try to create a new record set portainer.mydomain.com and point it to my Docker image instance, I can't specify the port value.
I know I'm missing something; I'm just getting started with DNS domains.
Route 53 is a DNS service. Its job is to resolve domain names to IP addresses. It has nothing to do with ports.
But there are some alternatives:
Add a secondary IP to the instance to host multiple websites and bind them to port 80. You add an additional IP by attaching an elastic network interface (ENI).
Add an Application Load Balancer with host-based routing (you get much more control; you can do path-based routing as well). See: Listeners for Your Application Load Balancers - Elastic Load Balancing
S3 redirection (Route 53 Record Set on Different Port)
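For illustration, creating the A record with the AWS CLI might look like this (the hosted zone ID is a placeholder); note the record carries no port, so clients still need :9000 unless an ALB or reverse proxy fronts the instance:
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "portainer.mydomain.com",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{"Value": "xxx.xxx.xxx.xxx"}]
      }
    }]
  }'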

Adding windows spot instances in domain name

I was working on Windows HPC on AWS; until now I was using on-demand instances. I connect to HPC using the hostname of my instance. Changing the hostname was also possible (I used user data to pass a PowerShell script for the hostname change). Now I want to do this with spot instances. How do I change the hostname?
Elastic IP, Route 53, or Elastic Load Balancer.
Elastic IP - You can assign an Elastic IP to the instance when it comes up and move it to the new instance as it changes.
Route 53 - You can assign the instance a CNAME and then connect to the Route 53 DNS entry.
Elastic Load Balancer - You can create an ELB, put instances behind it, and then connect to the ELB DNS name.
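For the Elastic IP option, a sketch of re-pointing the address when a replacement spot instance comes up (IDs are placeholders):
# allocate the Elastic IP once, then re-associate it with each new instance
aws ec2 associate-address \
  --instance-id i-0123456789abcdef0 \
  --allocation-id eipalloc-0123456789abcdef0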

Why I cannot connect to Kafka from outside?

I am running Kafka on an EC2 instance. An Amazon EC2 instance has two IPs: one is the internal IP and the second one is for external use.
I created a producer from my local machine, but it redirects to the internal IP and gives me a connection-unsuccessful error. Can anybody help me configure Kafka on the EC2 instance so that I can run a producer from my local machine? I have tried many combinations, but none worked.
In the Kafka FAQ (updated for new properties) you can read:
When a broker starts up, it registers its ip/port in ZK. You need to make sure the registered ip is consistent with what's listed in bootstrap.servers in the producer config. By default, the registered ip is given by InetAddress.getLocalHost.getHostAddress(). Typically, this should return the real ip of the host. However, sometimes (e.g., in EC2), the returned ip is an internal one and can't be connected to from outside. The solution is to explicitly set the host ip and port to be registered in ZK by setting the advertised.listeners property in server.properties.
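With the newer properties, a minimal server.properties sketch of that split (the public DNS name is a placeholder):
# bind on all interfaces inside the instance
listeners=PLAINTEXT://0.0.0.0:9092
# the address registered in ZooKeeper and handed out to clients
advertised.listeners=PLAINTEXT://ec2-XX-XXX-XXX-XX.compute-1.amazonaws.com:9092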
I solved this problem by setting advertised.host.name in server.properties and metadata.broker.list in producer.properties to the public IP address, and host.name to 0.0.0.0.
The easiest way to reach your Kafka server (version kafka_2.11-1.0.0) on EC2 from a consumer on an external network is to change the properties file
kafka_2.11-1.0.0/config/server.properties
and modify the following line, using your public address:
listeners=PLAINTEXT://ec2-XXX-XXX-XXX-XXX.eu-central-1.compute.amazonaws.com:9092
Verified on 2.11-2.0.0
I just did this in AWS. First get the Kafka server to listen on the correct interface/IP using host.name. For your case this would be the internal IP, not localhost, since your intent is for outside Kafka clients to connect. Any local clients will need to use that same address, not localhost.
Then set advertised.host.name to a host name, not an IP address. The trick is to get that host name to always resolve to the correct IP for both internal and external machines. I use /etc/hosts inside and DNS outside. See my full answer about Kafka and name resolution here.
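As a sketch of that setup (the hostname and IPs are hypothetical): the broker advertises a single name, which /etc/hosts resolves internally and public DNS resolves externally:
# /etc/hosts on the broker and on internal clients
10.0.1.50 kafka1.example.com
# server.properties: advertise the name, not an IP
advertised.listeners=PLAINTEXT://kafka1.example.com:9092
External clients then resolve kafka1.example.com through public DNS to the instance's public or elastic IP.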
If you want access from the LAN, change the following 2 files:
In config/server.properties:
advertised.listeners=PLAINTEXT://server.ip.in.lan:9092
In config/producer.properties:
bootstrap.servers=server.ip.in.lan:9092
In my case, the server.ip.in.lan value was 192.168.15.150
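You can then test from another LAN machine with the console producer (the topic name is a placeholder; Kafka versions of that era use --broker-list):
bin/kafka-console-producer.sh --broker-list 192.168.15.150:9092 --topic test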
Below are the steps to connect to Kafka from outside the EC2 instance.
Open Kafka server properties file on EC2.
/kafka_2.11-2.0.0/config/server.properties
Set the value of advertised.listeners to
advertised.listeners=PLAINTEXT://ec2-xx-xxx-xxx-xx.compute-1.amazonaws.com:9092
This should be the Public DNS (IPv4) of your EC2 instance.
Stop the Kafka server.
Start the Kafka server to see the above configuration changes in action.
Now you can connect to Kafka on the EC2 instance from outside or from localhost.
Tried and tested on kafka_2.11-2.0.0
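To check reachability from outside before pointing a client at it (hostname as in the placeholder above):
# reports success if the security group and listener are set up correctly
nc -vz ec2-xx-xxx-xxx-xx.compute-1.amazonaws.com 9092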
SSH to your EC2 instance, or wherever you're hosting Kafka.
sudo nano /etc/hosts
Add:
127.0.0.1 <your-host-name> localhost
In my case it's:
127.0.0.1 ec2-12-34-56-78.ap-southeast-1.compute.amazonaws.com
Save and exit.
For EC2 you should edit the /etc/hosts file to add:
XXX.XXX.XXX.XXX ip-YYY-YYY-YYY-YYY
where XXX... is your external IP and the ip-YYY-YYY-YYY-YYY is the string returned by the hostname command. You can use 127.0.0.1 instead of your external IP to communicate inside the server.
host.name is deprecated - as are advertised.host.name and advertised.port

Elasticsearch on EC2

I've spent some time now looking for information on elasticsearch.yml configurations that make my single-instance Elasticsearch (on a Windows 2012 Server EC2) accessible via the public IP, but every time I uncomment one or both of the following settings, the only thing that changes is that calling the private IP also results in an error.
network.publish_host: <public ip>
network.bind_host: <private ip>
Is this correct and are there any other settings that have to be defined? Shouldn't it run with the default values?
This is more of a general answer as to how networking works within EC2 instead of a specific answer to your question. But it should help inform how to configure your application.
EC2 has 1:1 NAT between a public and private IP address. Because of this, only the private IP address is visible to the instance directly.
If you are binding a service to a network interface, it would be the one with the private IP.
Some services do require knowledge of the external IP address in order to function properly. The only one I have run into is FTP in a passive configuration, likely due to the fact that it needs to open a separate socket for data transfer.
In the case of Elasticsearch, it appears they have a special plugin that helps configure Elasticsearch for the AWS environment: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-network.html
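If you install that plugin (in current releases it is discovery-ec2), it adds EC2-specific network.host values; a sketch based on its documentation:
# requires the discovery-ec2 plugin; bind to the instance's private address
network.host: _ec2:privateIp_
# other documented values include _ec2:publicIp_, _ec2:privateDns_, _ec2:publicDns_ and plain _ec2_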
I had the same problem.
I installed only one instance of ES on AWS EC2 and wanted to grant it public access.
On Ubuntu 16.04 this is what works for me:
in /etc/elasticsearch/elasticsearch.yml add this line:
network.host: <ec2 instance private ip>
The private IP should be something like 172.x.x.x.
Also, do not forget to allow access in the security group in your AWS console for port 9200 (the default) from the IP address you will be sending requests from.
So the difference was using the private (not the public) IP address from the AWS console.
Also note that this can be dangerous, as there is no user/password or other access control.
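Given that warning, one mitigation is to scope the security-group rule to your own address instead of 0.0.0.0/0; a sketch (the group ID is a placeholder):
# allow port 9200 only from this workstation's current public IP
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 9200 \
  --cidr "$(curl -s https://checkip.amazonaws.com)/32"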
