Unable to connect to kafka cluster from outside - apache-kafka-connect

I have created my Kubernetes cluster from https://github.com/LinkedInLearning/kafka-on-kubernetes-2899691.git, but I am unable to consume or produce messages externally, although I am able to do so from inside the cluster. Can anyone please help me connect to the Kafka cluster externally?
I have tried creating a LoadBalancer, but I am unable to consume or produce with that either.
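A LoadBalancer alone is usually not enough: the brokers also have to advertise the external address, otherwise clients are redirected back to in-cluster hostnames they cannot resolve. Below is a hedged sketch of the usual checks; the namespace, Service name, and ports are placeholders, not taken from the repo's manifests.

# find the external address the LoadBalancer was given (placeholder names)
kubectl get svc kafka-external -n kafka \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# each broker then needs a separate external listener advertising that address,
# e.g. in server.properties (ports are illustrative):
#   listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9094
#   advertised.listeners=INTERNAL://<pod-dns>:9092,EXTERNAL://<LOADBALANCER_IP>:9094
#   listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
#   inter.broker.listener.name=INTERNAL

# then test from outside the cluster
kafka-console-producer.sh --broker-list <LOADBALANCER_IP>:9094 --topic test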

Related

Not able to connect to Kafka on AWS EC2

I created an Ubuntu VM on AWS EC2, and in this same VM I'm running one instance of ZooKeeper and one instance of Kafka. ZooKeeper and Kafka are running just fine, and I was even able to create a topic. However, when I try to connect from my local machine (macOS) from the terminal, I get this message:
[Producer clientId=console-producer] Connection to node -1 (ec2-x-x-x-x.ap-southeast-2.compute.amazonaws.com/x.x.x.x:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
Inside /config/server.properties I changed the listeners and advertised.listeners properties (see below), as suggested in many topics related to my issue, but I still can't connect to Kafka on EC2 from my local machine:
I really don't know what I'm missing here...
Kafka version: kafka_2.12-2.2.1
listeners=PLAINTEXT://PRIVATE_IP_ADDRESS:9092
advertised.listeners=PLAINTEXT://PUBLIC_IP_ADDRESS:9092
After almost three days of struggling I found the problem. In case someone else has the same issue: I solved it by configuring the Security Group on AWS to allow port 9092, the port Kafka listens on by default.
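For reference, a hedged sketch of that fix with the AWS CLI (the security group ID is a placeholder), plus a quick way to verify reachability before retrying the producer:

aws ec2 authorize-security-group-ingress \
  --group-id sg-xxxxxxxx \
  --protocol tcp \
  --port 9092 \
  --cidr 0.0.0.0/0    # restrict the CIDR to your own IP range in practice

# confirm the port is reachable from the local machine, then retry the producer
nc -vz ec2-x-x-x-x.ap-southeast-2.compute.amazonaws.com 9092
kafka-console-producer.sh --broker-list ec2-x-x-x-x.ap-southeast-2.compute.amazonaws.com:9092 --topic test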

NiFi - connect to another instance (S2S)

I'm trying to use the SiteToSiteProvenance Reporting Task.
The objective is to send provenance data between two dockerized instances of NiFi, one at port 8080 and another at port 9090.
I've created an input port creatively called "IN" on the destination NiFi, and the service configuration on the source NiFi is:
However, I'm getting the following error:
Unable to refresh Remote Group's peers due to Unable to communicate with remote NiFi cluster in order to determine which nodes exist in the remote cluster
I've also exposed port 10000 on the destination Docker container.
As mentioned in the comments, it appears there was a networking issue between the containers. The asker finally resolved it by not using containers.
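For anyone who wants to keep the containers, a hedged sketch of one way to rule out the container networking issue (the network and container names are placeholders):

docker network create nifi-net
docker network connect nifi-net nifi-source
docker network connect nifi-net nifi-destination

# the S2S / reporting task URL on the source should then point at the destination
# by container name rather than localhost, e.g. http://nifi-destination:8080/nifi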

Could not establish site-to-site communication for apache nifi

I am working with two instances of nifi.
Instance-1: A secure nifi single node.
Instance-2: A secure 3-node nifi cluster on AWS.
My site to site settings have the below configurations:
Instance-1:
nifi.remote.input.host=<hostname running locally>
nifi.remote.input.secure=true
nifi.remote.input.socket.port=10443
nifi.remote.input.http.enabled=true
Instance-2:
nifi.remote.input.host=<ec2 public fqdn>.compute.amazonaws.com
nifi.remote.input.secure=true
nifi.remote.input.socket.port=10443
nifi.remote.input.http.enabled=true
My remote process group is on the locally running NiFi, and I am trying to push a flowfile from local to the AWS cluster. I am getting the error below:
Error while trying to connect RPG
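One thing worth checking with RAW-socket site-to-site is that the remote input socket port (10443 here) is actually reachable from the source instance, e.g. allowed in the AWS security group. A quick hedged connectivity check, reusing the placeholder hostname from the question:

nc -vz <ec2 public fqdn>.compute.amazonaws.com 10443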

How to access log files of neo4j deployed to aws ec2

I have deployed Neo4j to EC2 using https://github.com/neo4j-contrib/ec2neo
I am getting a 503 Service Unavailable error. How can I access the Neo4j logs on EC2? Can anybody help, please?
The steps to access the logs are given in the ec2neo Outputs:
Select the CloudFormation stack that you used to create the instance and click on the Outputs tab. It will give you the actual ssh command to use to access the EC2 instance.
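A hedged sketch of what that typically looks like, assuming an Ubuntu-based ec2neo instance and a package install of Neo4j (exact log paths vary between Neo4j versions and AMIs):

# the exact ssh command is shown verbatim in the stack's Outputs tab
ssh -i your-key.pem ubuntu@<ec2-public-dns>

# then inspect the Neo4j logs on the instance
sudo tail -n 100 /var/log/neo4j/console.log
sudo tail -n 100 /var/lib/neo4j/data/graph.db/messages.log   # older 2.x layouts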

Hadoop Dedoop Application unable to contact Hadoop Namenode : Getting "Unable to contact Namenode" error

I'm trying to use the Dedoop application that runs using Hadoop and HDFS on Amazon EC2. The Hadoop cluster is set up, and the NameNode, JobTracker, and all other daemons are running without error.
But the Dedoop.war application is not able to connect to the Hadoop NameNode after being deployed on Tomcat.
I have also checked to see if the ports are open in EC2.
Any help is appreciated.
If you're using Amazon AWS, I highly recommend using Amazon Elastic Map Reduce. Amazon takes care of setting up and provisioning the Hadoop cluster for you, including things like setting up IP addresses, NameNode, etc.
If you're setting up your own cluster on EC2, you have to be careful with public/private IP addresses. Most likely, you are pointing to the external IP addresses - can you replace them with the internal IP addresses and see if that works?
Can you post some lines of the stack trace from Tomcat's log files?
Dedoop must establish a SOCKS proxy server (similar to ssh -D port username@host) to pass connections to the Hadoop nodes on EC2. This is mainly because Hadoop resolves public IPs to EC2-internal IPs, which breaks MR job submission and HDFS management.
To this end, Tomcat must be configured to establish SSH connections. The setup procedure is described here.
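A hedged sketch of that kind of setup (the port number and host are placeholders; the linked procedure is authoritative): open a dynamic SSH tunnel to the cluster's master node, then point the JVM running Tomcat at it as a SOCKS proxy.

# dynamic (SOCKS) port forwarding to the NameNode host
ssh -D 6666 -N -f ec2-user@<namenode-public-dns>

# e.g. in Tomcat's bin/setenv.sh, route the webapp's outbound connections through it
export CATALINA_OPTS="$CATALINA_OPTS -DsocksProxyHost=localhost -DsocksProxyPort=6666"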
