I have a single Kafka broker running on my local Mac on port 9092, and a topic that is produced to on my local Mac as well. I want to run a consumer on EC2 to consume that topic from the broker on my Mac. I have enabled inbound and outbound access to TCP port 9092 for all IPs (0.0.0.0/0) in the EC2 security group.
When I run the consumer command on EC2:
bin/kafka-console-consumer.sh --bootstrap-server <localmac IP>:9092 --topic <topicname> --from-beginning
I get a connection timeout error (org.apache.kafka.common.errors.TimeoutException).
What other inbound/outbound rules am I missing in the EC2 security group for EC2 to reach the broker on my local Mac?
You need to configure an advertised listener on your broker for the EC2 consumer to connect to. At the moment you're connecting to the broker on your machine's remote IP, but the broker then returns its local IP to the consumer, and the consumer uses that address for all subsequent requests.
Ref: https://rmoff.net/2018/08/02/kafka-listeners-explained/
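As a minimal sketch (assuming, purely for illustration, that the EC2 instance can reach your Mac at 203.0.113.10; substitute whatever address is actually routable from EC2), the broker's server.properties would contain something like:
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://203.0.113.10:9092
The advertised listener is the address the broker hands back to clients after the initial bootstrap connection, so it has to be reachable from the consumer's network, not just from the broker's own machine.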
Related
We have Kafka and ZooKeeper installed on a single AWS EC2 instance. We have Kafka producers and consumers running on separate EC2 instances that are in the same VPC and use the same security group as the Kafka instance. In the producer and consumer configs we use the internal IP address of the Kafka server to connect to it.
But we have noticed that we need to set the public IP address of the EC2 server as advertised.listeners for the producers and consumers to be able to connect to the Kafka server:
advertised.listeners=PLAINTEXT://PUBLIC_IP:9092
We also have to whitelist the public IP address and open traffic on port 9092 for each of our EC2 servers running producers and consumers.
We want the traffic to flow over internal IP addresses. Is there a way to avoid whitelisting the public IP addresses and opening port 9092 for every server running a producer or consumer?
If you don't want to open access to all for any of your servers, I would recommend putting a proper high-performance web server such as nginx or Apache HTTPD in front of your application servers as a reverse proxy. That way you can also add SSL encryption, and your servers stay on a private network while only the web server is exposed. It's easy to set up and there are many tutorials, such as this one: http://webapp.org.ua/sysadmin/setting-up-nginx-ssl-reverse-proxy-for-tomcat/
Because of the variable nature of the environments Kafka may need to work in, it makes sense that you have to be explicit in declaring the addresses Kafka can use. The only way to guarantee that external parts of a system can reach the broker via an IP address is to make sure you are advertising external IP addresses.
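If the goal is to keep in-VPC traffic on internal addresses while still allowing external clients, one common way to be explicit about this is to declare two named listeners, roughly as below (a sketch only; the internal IP 10.0.0.5 and the listener names are placeholders, and PUBLIC_IP stands for the instance's public address as in the question):
listeners=INTERNAL://0.0.0.0:19092,EXTERNAL://0.0.0.0:9092
advertised.listeners=INTERNAL://10.0.0.5:19092,EXTERNAL://PUBLIC_IP:9092
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL
Producers and consumers inside the VPC then bootstrap against port 19092 and are handed the internal IP, so only genuinely external clients ever see the public address.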
I'm trying to connect to MSMQ running on an AWS EC2 instance using the following connection code:
MessageQueue = new MessageQueue("FormatName:Direct=TCP:xx.xxx.xx.xxx\\private$\\TestQueue");
I've enabled all the appropriate traffic in the AWS security group and the Windows Firewall. The error I'm getting is "Remote computer is not available."
Is MSMQ meant to work between Windows machines that are not in the same domain, and without having to use HTTP?
I'm including this netstat trace because it looks like the MSMQ service is not listening on the public interface, but I'm not sure how to fix this if that is the issue.
C:\Users\Administrator>netstat -abno | findstr 1801
TCP 0.0.0.0:1801 0.0.0.0:0 LISTENING 4184
TCP [::]:1801 [::]:0 LISTENING 4184
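For what it's worth, one quick way to check from another Windows machine whether the MSMQ port is reachable at all (the cmdlet exists on Windows 8 / Server 2012 and later; the IP placeholder is the same as in the code above) is:
Test-NetConnection -ComputerName xx.xxx.xx.xxx -Port 1801
If TcpTestSucceeded comes back False, the problem is network-level (security group, firewall, or listening interface) rather than MSMQ permissions.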
Let's say Kafka is running as a single-node broker on an AWS EC2 instance. The instance has the internal private IP 10.0.0.1. I want to connect to that broker directly from the same EC2 instance and from another EC2 instance in the same VPC and subnet. The security groups allow the connection.
Which settings do I have to use to get the connection running?
I tried listeners=PLAINTEXT://0.0.0.0:9092 and advertised.listeners=PLAINTEXT://0.0.0.0:9092. With that setting I can connect to the broker locally (from the same instance the broker runs on), but I can't reach the broker from the second EC2 instance.
Does anybody have any idea?
If you are trying to connect to the Kafka instance inside AWS from one EC2 instance to another, the internal IP address should work.
The producers and consumers should use the internal private IP addresses as well, for both the broker and ZooKeeper.
Additionally, you may need to verify that iptables rules at the OS level aren't blocking the communication.
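A minimal sketch of broker settings that should work for this setup, using the private IP 10.0.0.1 from the question:
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://10.0.0.1:9092
Note that advertised.listeners must be a concrete, routable address: 0.0.0.0 is fine for listeners (bind to all interfaces) but meaningless as an advertised address, which is why the second instance could not connect.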
I have two Ubuntu instances in the EC2 and I want to cluster them.
One IP will be referred to as X (the "inet addr" IP that ifconfig displays) and its public IP will be referred to as PX.
The other IP is Y and its public IP is PY.
So now I did the following on both machines.
installed the latest RabbitMQ.
installed the management plugin.
opened ports 5672 (RabbitMQ) and 15672 (management plugin).
connected to RabbitMQ with my test app.
connected to the UI.
So now for the cluster.
I ran the following commands
on X
rabbitmqctl cluster_status
got the node name, which was 'rabbit@ip-X' (where X is the inner IP)
on Y
rabbitmqctl stop_app
rabbitmqctl join_cluster --ram rabbit@ip-X
I got
"The nodes provided are either offline or not running"
Obviously this is the private IP, so the other instance can't connect.
How do I tell the second instance where the first is located?
EDIT
The firewall is completely off; I have a telnet connection from one machine to the other
(to ports 5672 (RMQ), 15672 (UI), 4369 (cluster port)).
The cookie is the same on both servers (and the hash of the cookie in the logs is the same).
I recorded the TCP traffic while running the join_cluster command and watched it in Wireshark. I saw the following (no ACK):
http://i.imgur.com/PLezLvQ.png
so I disabled the firewall using
sudo ufw disable
(just for the tests) and re-ran
sudo rabbitmqctl join_cluster --ram rabbit@ip-XX
and the connection was created, but it was terminated by the remote RabbitMQ
here:
http://i.imgur.com/dxJLNfH.png
and the message is still
"The nodes provided are either offline or not running"
(the remote RabbitMQ app is definitely running)
You need to make sure the nodes can reach each other. RabbitMQ uses distributed Erlang primitives for communication between nodes, so you also have to open a few ports in the firewall. See:
http://learnyousomeerlang.com/distribunomicon#firewalls
for details.
You should also keep all your cluster nodes in the same data center, since RabbitMQ handles network partitions badly. If your nodes are in different data centers, you should use the shovel or federation plugin instead of clustering to replicate data.
Edit: don't forget to use the same Erlang cookie on all nodes, see http://www.rabbitmq.com/clustering.html for details.
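If you would rather open only specific ports instead of a whole range, you can pin the Erlang distribution port. A sketch in the classic /etc/rabbitmq/rabbitmq.config (Erlang-term format; the path and the port 25672 are the usual defaults, adjust to your install):
[
  {kernel, [
    {inet_dist_listen_min, 25672},
    {inet_dist_listen_max, 25672}
  ]}
].
With that in place, opening 4369 (epmd) and 25672 (distribution) between the nodes, plus 5672/15672 for clients and the UI, should be enough.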
The issue is probably TCP ports that need to be opened.
You should do the following:
1) Create a Security Group for the Rabbit Servers (both will use it)
we will call it: rabbit-sg
2) In the security group, define the following inbound rules:
Type             Protocol  Port range  Source
All TCP          TCP       0 - 65535   sg-xxxx (rabbit-sg)
SSH              TCP       22          0.0.0.0/0
Custom TCP Rule  TCP       4369        0.0.0.0/0
Custom TCP Rule  TCP       5672        0.0.0.0/0
Custom TCP Rule  TCP       15672       0.0.0.0/0
Custom TCP Rule  TCP       25672       0.0.0.0/0
Custom TCP Rule  TCP       35197       0.0.0.0/0
Custom TCP Rule  TCP       55672       0.0.0.0/0
3) Make sure both EC2 instances use this security group;
note that the first rule opens all TCP traffic between the instances.
4) Make sure the Erlang cookie is the same on both machines, and reboot the EC2 instance
after changing the cookie on the slave EC2 (see the sketch below).
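For step 4, a sketch of syncing the cookie on the second instance (the path /var/lib/rabbitmq/.erlang.cookie is the usual default on Ubuntu, and SAMECOOKIE is a placeholder for the value copied from the first node):
sudo service rabbitmq-server stop
sudo sh -c 'echo -n "SAMECOOKIE" > /var/lib/rabbitmq/.erlang.cookie'
sudo chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
sudo chmod 400 /var/lib/rabbitmq/.erlang.cookie
sudo service rabbitmq-server start
sudo rabbitmqctl stop_app
sudo rabbitmqctl join_cluster --ram rabbit@ip-X
sudo rabbitmqctl start_app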
I am trying to run a socket server on an Amazon Web Services EC2 instance. The socket server runs just fine on its own, and telnetting to it locally on the instance connects, but trying to telnet to the socket from the outside fails. I have gone into the security groups to ensure the ports I am using are open for both TCP and UDP (though the socket server is configured for TCP). Is there something else I am missing?
The server might be listening on the loopback interface or on IPv6 only by default. You can check by running netstat --listen -p, which shows which program listens on which address/port. How to make the program listen on the external IPv4 interface depends on the program/programming language.
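For example, in Python (an illustration only; the port 9000 is arbitrary), the difference is simply the address you bind to:
import socket

# 127.0.0.1 would accept only local connections; 0.0.0.0 binds all IPv4
# interfaces, so external clients can reach it (if the security group allows).
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 9000))
srv.listen(5)
conn, addr = srv.accept()
print("connection from", addr)
conn.close()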