Not able to connect to Kafka on AWS EC2

I created an Ubuntu VM on AWS EC2, and on this same VM I'm running one instance of ZooKeeper and one instance of Kafka. Both are running just fine; I was even able to create a topic. However, when I try to connect from my local machine (macOS) via the terminal, I get this message:
[Producer clientId=console-producer] Connection to node -1 (ec2-x-x-x-x.ap-southeast-2.compute.amazonaws.com/x.x.x.x:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
Inside /config/server.properties I changed the listeners and advertised.listeners properties (see below), as suggested in many posts related to my issue, but I still cannot connect to Kafka on EC2 from my local machine:
I really don't know what I'm missing here...
Kafka version: kafka_2.12-2.2.1
listeners=PLAINTEXT://PRIVATE_IP_ADDRESS:9092
advertised.listeners=PLAINTEXT://PUBLIC_IP_ADDRESS:9092

After almost three days of struggling I was able to find the problem. In case someone else has the same issue: I solved it by configuring the Security Group on AWS and opening port 9092, the port where Kafka runs by default.
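
For reference, a minimal sketch of that Security Group change using the AWS CLI (the group ID and source CIDR are placeholders, not values from the original post):

# Allow inbound TCP on 9092 (Kafka's default port) from your client's IP
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 9092 \
    --cidr 203.0.113.10/32

The same rule can be added in the console under Security Groups > Inbound rules.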


Kafka broker on EC2 is not connecting to my zookeeper on my local network

Guys, if someone has experienced this issue, please can you help me? I have been racking my brains on this with no success and have pored over as many posts as I can.
Scenario
I have ZooKeeper, two brokers, and a producer and consumer running on my local partition from different IP addresses within my subnet, and everything is perfect. My producer produces, my consumer consumes, life is happy.
I wanted to replicate this on EC2, so I spun up a Kafka broker on EC2 and want that broker to connect to my ZooKeeper, but for some reason the broker on EC2 is unable to connect.
Now, for clarity's sake:
My laptop IP: 1.1.1.1
ZooKeeper IP: z.z.z.z
Broker 1 on my laptop: b.b.b.b
So the issue is: from EC2, when I try connecting to ZooKeeper, I get an error and a timeout. I do not understand what is going on; I have opened the ports/IPs to my laptop and have these in my inbound and outbound rules.
Please can someone help? I also don't understand why the Kafka broker on EC2 is trying to connect to
z.z.z.z.us-east-2.compute.internal ...
Forgive me, but I am not sure if, or what, I need to change.
In the broker config:
I have zookeeper.connect set to z.z.z.z:2181,1.1.1.1:2181
From the EC2 terminal I can ping my laptop's public DNS, but I cannot ping the internal partitioned IP on which ZooKeeper is running; I think this may be a cause as well.
If you can, please help shed some light on this, and if you are in NY, beers on me.
Thank you !!!!
Screenshot of EC2 Kafka log (not reproduced here)
Solution
I could not find a definitive answer. I went over the docs, then took a step back and started checking the logs.
This is what I found:
SSH/SCP works from my ZooKeeper box to EC2, but not from EC2 to the ZooKeeper box.
ZooKeeper is on a Manjaro box with a dedicated IP on my subnet.
So fundamentally the issue was that I did not have a port open.
So I opened a port on my router and, very important, updated the IP address in the EC2 server.properties to my public IP with the assigned port, which I opened and forwarded to my Kali box's IP and port 2181 (I stopped Manjaro and spun ZooKeeper up on Kali).
Worked like a charm: EC2 was able to connect to ZooKeeper, all good.
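
Concretely, a minimal sketch of the relevant server.properties line on the EC2 broker, assuming the router forwards ROUTER_PUBLIC_IP:FORWARDED_PORT to the Kali box's port 2181 (both values are placeholders):

# ZooKeeper reached through the router's port forward
zookeeper.connect=ROUTER_PUBLIC_IP:FORWARDED_PORT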
Now, from my Fedora box I fired up a producer and got EC2 to consume; confirmed in the logs plus in CMAK.
But from my Ubuntu consumer, even though I can see that the consumer was able to connect, I am unable to consume messages, and the consumer group cannot be created.
So, some findings here:
Do not use Manjaro; use Kali. I am not 100% sure of the reason, but after a package update the ZooKeeper on Kali was able to connect to EC2 more easily, plus I can configure Kali more easily. I am pretty sure both are the same underneath, but I found it easier to navigate Kali than Manjaro. So I have ZooKeeper on Kali, and the more I use Kali the more I am loving it: I can run my Python socket scripts from Kali against my router and create a new subnet inside Kali, which I could not do from Manjaro; fundamentally I want to run another broker on the same partition/box.
So: Kali thumbs up, then Fedora, Linux Mint, then Ubuntu, then Manjaro. Again, I am 100% sure I am doing something very stupid, and this is no reflection on these distributions; it is just my personal opinion on using them. Plus, for some reason I get more info from sudo traceroute on Kali than on Manjaro. All in all, I am going to shut Manjaro down and not touch it; too much time wasted (IMHO), it is too GUI-driven...
On Manjaro, for some reason, the IP was set to DHCP and kept flipping, and I also found that if the IP flips I need to flush uncommitted messages on the topic and create a new cluster and then a new topic in CMAK. I am not sure why this would be the case; I am using 2.7.0.
Also, since I did not want to spend $$ on EC2, I spun up the basic Amazon Linux 2 image and then created my own AMI with Java / Python / Kafka (note: yum runs on Python 2, so you need to relink to Python 2 to do any yum updates and then link back to 3.7). All in all, I had to set the broker memory to 128M / 526M because of the obvious limitations and because I wanted to use the free tier from AWS. So I think the reason my consumer connected but failed to consume any messages was throughput and network bandwidth. Also, I think I made a mistake: being in NY, I should have used the N. Virginia region instead of Ohio.
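
For what it's worth, a hedged sketch of how that broker memory cap is typically applied (KAFKA_HEAP_OPTS is Kafka's standard environment knob; the values simply mirror the ones above):

# Constrain the broker JVM heap before starting Kafka on a small instance
export KAFKA_HEAP_OPTS="-Xms128M -Xmx526M"
bin/kafka-server-start.sh config/server.properties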
From what I have read so far, what I was trying to do with port forwarding is essentially not recommended, and the Kafka docs specifically state to do this only on a trusted network, in a VPN, etc. But I had to test to make sure it worked; just a proof of concept.
On EC2 itself the system works perfectly, soup to nuts, no issues.
So, issue resolved. I still need to check the throughput on Ubuntu for the consumer and why I could not register a consumer group, but thank you; all in all, what I wanted to test worked.

Nomad and Consul setup

Should I run Consul clients (slaves) alongside Nomad clients or inside them?
The latter might not make sense at all, but I'm asking just in case.
I brought my own Nomad cluster up with Consul clients running alongside the Nomad clients (inside the worker nodes); my deployable artifacts are Docker containers (Java Spring applications).
The issue with my current setup is that my applications can't reach the Consul clients to read configuration (none of 0.0.0.0, localhost, or the worker node IP worked).
Let's say my service exposes 8080. I configured the Docker part (in the HCL file) to use bridge as the network mode, and Nomad maps 8080 to 43210.
Everything is fine until my service tries to reach the Consul client to read configuration. Ideally, giving the Nomad worker node IP as the Consul host to Spring should suffice, but for some reason it doesn't.
I'm using the latest version of Nomad.
I configured my Nomad clients like this: https://github.com/bmd007/statefull-geofencing-faas/blob/master/infrastructure/nomad/client1.hcl
And the link below shows how I configured/ran my Consul client:
https://github.com/bmd007/statefull-geofencing-faas/blob/master/infrastructure/server2.yml
Note: if I use static port mapping and host as the network mode for Docker (in Nomad) I'll be fine, but then I can't deploy more than one instance of each application on each worker node (due to port conflicts).
Nomad jobs listen on a specific host/port pair.
You might want to ssh into the server and run docker ps to see what host/port pair the job is listening on.
a93c5cb46a3e image-name bash 2 hours ago Up 2 hours 10.0.47.2:21435->8000/tcp, 10.0.47.2:21435->8000/udp foo-bar
Additionally, you will need to ensure that the Consul Nomad job is listening on 0.0.0.0, or on the specific IP of the machine. I believe that is this config value: https://www.consul.io/docs/agent/options.html#_bind
All of those will need to match up in order for Consul to be reachable.
More generally, I might recommend: if you're going to run Consul with Nomad, you might want to switch to host networking so that you don't have to deal with the specifics of networking within a container. Additionally, you could schedule Consul as a system job so that it is automatically present on every host.
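
As an illustration only, a hedged sketch of a client agent invocation along those lines (the addresses and join target are placeholders, not values from the question):

# Bind the agent to the node's address, and serve client traffic (DNS/HTTP)
# on all interfaces so containers can reach it
consul agent -bind=10.0.47.2 -client=0.0.0.0 -retry-join=CONSUL_SERVER_IP -data-dir=/opt/consul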
So I managed to solve the issue like this (see the HCL sketch after the list):
nomad.job.group.network.mode = host
nomad.job.group.network.port: port "http" {}
nomad.job.group.task.driver = docker
nomad.job.group.task.config.network_mode = host
nomad.job.group.task.config.ports = ["http"]
nomad.job.group.task.service.connect: connect { native = true }
nomad.job.group.task.env: SERVER_PORT= "${NOMAD_PORT_http}"
nomad.job.group.task.env: SPRING_CLOUD_CONSUL_HOST = "localhost"
nomad.job.group.task.env: SPRING_CLOUD_SERVICE_REGISTRY_AUTO_REGISTRATION_ENABLED = "false"
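
Put together, a hedged HCL sketch of the settings above (job and task names are placeholders; this mirrors the dotted list rather than the exact file):

job "my-app" {
  group "app" {
    network {
      mode = "host"
      port "http" {}
    }
    task "app" {
      driver = "docker"
      config {
        network_mode = "host"
        ports        = ["http"]
      }
      service {
        connect { native = true }
      }
      env {
        SERVER_PORT              = "${NOMAD_PORT_http}"
        SPRING_CLOUD_CONSUL_HOST = "localhost"
        SPRING_CLOUD_SERVICE_REGISTRY_AUTO_REGISTRATION_ENABLED = "false"
      }
    }
  }
}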
I run the Consul agent (client) using docker-compose alongside the Nomad agent (client) with host as the network mode, exposing all the required ports.
Example of nomad job: https://github.com/bmd007/statefull-geofencing-faas/blob/master/infrastructure/nomad/location-update-publisher.hcl
Example of consul agent config (docker-compose file): https://github.com/bmd007/statefull-geofencing-faas/blob/master/infrastructure/server2.yml
Disclaimer: the LAB is part of a cluster visualization framework called LiteArch Trafik, which I created as an interesting exercise to understand Nomad and Consul.
It took me a long time to shift my mind from K8s to Nomad and Consul;
integrating them was one of the efforts I spent on over the last year.
When service resolution doesn't work, I have found it is more or less the DNS configuration on the servers.
There is a section on it in the HashiCorp documentation called DNS Forwarding:
Hashicorp DNS Forwarding
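
As a hedged illustration, with systemd-resolved the forwarding from that docs page looks roughly like this (treat the exact values as assumptions for your own setup):

# /etc/systemd/resolved.conf.d/consul.conf
[Resolve]
DNS=127.0.0.1:8600
DNSSEC=false
Domains=~consul

This forwards lookups under the consul domain to the local Consul agent's DNS interface on port 8600.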
I have created a LAB which explains how to set up Nomad and Consul.
But you can use the LAB separately.
I created the LAB after learning the hard way how to install the cluster and how to integrate Nomad and Consul.
For the LAB you need Ubuntu Multipass installed.
You execute one script and you get a fully functional cluster locally, with three servers and three nodes.
It also shows you how to install Docker and integrate the services with Consul and DNS services on Ubuntu.
After running the LAB you will get the links to Nomad, Fabio, and Consul.
Hopefully it will guide you through the learning process of Nomad and Consul.
LAB: LAB
Trafik: Trafik Visualizer

Spring Boot app cannot connect to RabbitMQ on Kubernetes cluster

I deployed a RabbitMQ server on my Kubernetes cluster and I am able to access the management UI from the browser. But my Spring Boot app cannot connect to port 5672, and I get a connection-refused error. The same code works if I replace the Kubernetes host with localhost in my application.yml properties and run a Docker image on my machine. I am not sure what I am doing wrong.
Has anyone tried this kind of setup?
Please help. Thanks!
Let's say the DNS name is rabbitmq. If you want to reach it, then you have to make sure that RabbitMQ's deployment has a Service attached with the correct ports for exposure. You would then target amqp://rabbitmq:5672.
To make sure this (or something alike) exists, you can debug k8s Services. Run kubectl get services | grep rabbitmq to make sure the Service exists. If it does, get the Service YAML by running kubectl get service rabbitmq-service-name -o yaml. Finally, check spec.ports[] for the ports that allow you to connect to the pod: search for 5672 in spec.ports[].port for AMQP. In some cases the port might have been changed; spec.ports[].port might be 3030, for instance, while spec.ports[].targetPort is 5672.
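
For concreteness, a hedged sketch of a Service that exposes both AMQP and the management UI (the name and selector are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  selector:
    app: rabbitmq
  ports:
    - name: amqp
      port: 5672        # what in-cluster clients connect to
      targetPort: 5672  # the container port on the pod
    - name: management
      port: 15672
      targetPort: 15672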
Are you exposing the TCP port of RabbitMQ outside the cluster?
Maybe only the management port has been exposed.
If you can connect to the management UI but not on port 5672, that may indicate that port 5672 is not exposed outside the cluster.
Note: if I have not understood your question correctly, please let me know.
Good luck

NiFi - connect to another instance (S2S)

I'm trying to use the SiteToSiteProvenance Reporting Task.
The objective is to send provenance data between two dockerized instances of NiFi, one at port 8080 and another at port 9090.
I've created an input port creatively called "IN" on the destination NiFi and configured the reporting task on the source NiFi accordingly (configuration screenshot not reproduced here).
However, I'm getting the following error:
Unable to refresh Remote Group's peers due to Unable to communicate with remote NiFi cluster in order to determine which nodes exist in the remote cluster
I've also exposed port 10000 in the destination Docker container.
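
For reference, the destination-side settings that usually matter for RAW site-to-site live in nifi.properties; a hedged sketch (the host value is a placeholder):

# Remote (site-to-site) input on the destination NiFi
nifi.remote.input.host=destination-nifi-hostname
nifi.remote.input.socket.port=10000
nifi.remote.input.secure=false

With the HTTP transport protocol, the reporting task instead talks to the destination's normal web port.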
As mentioned in the comments, it appears there was a networking issue between the containers.
It was finally resolved by the asker by not using containers.

Connect to RabbitMQ on EC2 from external client

Similar questions have been asked
RabbitMQ on Amazon EC2 Instance & Locally?
and
cant connect from my desktop to rabbitmq on ec2
But they get different error messages.
I have a RabbitMQ server running on my Linux EC2 instance which is set up correctly. I have created custom users and given them permissions to read/write to queues. Using a local client I am able to correctly receive messages. I have set up the security groups on EC2 so that the ports (5672/25672) are open, and I can telnet to those ports. I have also set up rabbitmq.config like this:
[
  {rabbit, [
    {tcp_listeners, [{"0.0.0.0", 5672}]},
    {loopback_users, []},
    {log_levels, [{connection, info}]}
  ]}
].
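As an aside, for readers on RabbitMQ 3.7+ using the ini-style rabbitmq.conf, a hedged sketch of the rough equivalent of the first two settings:

listeners.tcp.default = 5672
loopback_users = none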
At the moment I have a client on the server publishing to the queue.
I have another client running on a server outside of EC2 which needs to consume data from the same queue (I can't run both on EC2 as the consumer does a lot of plotting/graphical manipulation).
However, when I try to connect from the external client using some test code,
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

Connection connection = null;
try {
    ConnectionFactory factory = new ConnectionFactory();
    // user:password@host (credentials and host masked); the separator is "@"
    factory.setUri("amqp://****:****@****:5672/");
    connection = factory.newConnection();
} catch (Exception e) { // setUri and newConnection throw several checked exceptions
    e.printStackTrace();
}
I get the following error.
com.rabbitmq.client.AuthenticationFailureException: ACCESS_REFUSED -
Login was refused using authentication mechanism PLAIN. For details
see the broker logfile.
However, there is nothing in the broker logfile, as if I never tried to connect.
I've tried connecting using the individual getter/setter methods of the factory, and I've tried using different ports (along with opening them up).
I was wondering if I need to use SSL to connect to EC2, but from reading around the web it seems like it should just work; I'm not exactly sure. I cannot find any examples of people successfully achieving what I'm trying to do and documenting it.
Thanks in advance
The answer was simply that I needed to specify the host to be the same IP I use to SSH into. I was trying to use the Elastic IP / public DNS of the EC2 instance, which I thought should point to the same machine.
Although I did try many things, including setting up an SSL connection, it was not necessary.
All that is needed is:
Create a RabbitMQ user using rabbitmqctl and give it appropriate permissions (a sketch follows this list)
Open the needed ports on EC2 via the Security Groups menu (the default is 5672)
Use the client library to connect with the correct host name/username/password/port, where the host name is the same as the machine that you normally SSH into.
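
A hedged sketch of the first step (the user name and password are placeholders):

# Create the user and grant configure/write/read on the default vhost
rabbitmqctl add_user myuser mypassword
rabbitmqctl set_permissions -p / myuser ".*" ".*" ".*"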
