Zookeeper & Kafka error KeeperErrorCode=NodeExists - windows

I have written a kafka consumer and producer that worked fine until today.
This morning, when I started zookeeper and kafka, my consumer was not able to read messages, and I found this in the zookeeper logs:
INFO Got user-level KeeperException when processing sessionid:0x151c41e62e10000
type:create cxid:0x2a zxid:0x1e txntype:-1 reqpath:n/a
Error Path:/brokers/ids
Error:KeeperErrorCode = NodeExists for /brokers/ids
(org.apache.zookeeper.server.PrepRequestProcessor)

Look up log.dirs in your server.properties file, delete all the Kafka and ZooKeeper logs from the directories it (and ZooKeeper's dataDir) point to, then restart ZooKeeper first and Kafka second. I was facing the same issue and doing this resolved it.
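As a sketch, on Windows that could look like this (the paths are assumptions; use whatever log.dirs in server.properties and dataDir in zookeeper.properties point to on your machine):
rem stop Kafka and ZooKeeper first; this wipes all local broker state
rmdir /s /q C:\kafka\kafka-logs
rmdir /s /q C:\kafka\zookeeper-data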

According to Confluent at https://groups.google.com/forum/#!topic/confluent-platform/h0gEik_Ii1E on 2016/10/08
Those are not errors, you can see the log level is INFO. It is simply
logging that Kafka tried to create a node that already exists. Totally
normal behavior for Kafka and nothing to worry about.
Is there an actual problem related to the message or is everything working correctly?

Go to the Kafka root directory, look for the logs directory there, and clear all the logs. For instance, if Kafka is installed in your Downloads folder:
cd ~/Downloads/kafka_2.13-2.6.0
rm -rf logs
That should resolve the issue.
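Since the question is about Windows, the equivalent there would be something like (the install location is an assumption):
cd %USERPROFILE%\Downloads\kafka_2.13-2.6.0
rmdir /s /q logs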

Related

running zookeeper on windows got INFO ZooKeeper audit is disabled. and ERROR Exiting JVM with code 2

I tried to run ZooKeeper on Windows using the following command:
zookeeper-server-start.bat config\zookeeper.properties
and I got this error:
INFO Reading configuration from: config\zookeeper.properties
(org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2022-03-01 00:06:37,850] WARN config\zookeeper.properties is relative. Prepend .\ to
indicate that you're sure! (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2022-03-01 00:06:37,850] ERROR Invalid config, exiting abnormally
(org.apache.zookeeper.server.quorum.QuorumPeerMain)
org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing config\zookeeper.properties
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:198)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:124)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:90)
Caused by: java.lang.IllegalArgumentException: config\zookeeper.properties file is missing
at org.apache.zookeeper.server.util.VerifyingFileFactory.doFailForNonExistingPath(VerifyingFileFactory.java:54)
at org.apache.zookeeper.server.util.VerifyingFileFactory.validate(VerifyingFileFactory.java:47)
at org.apache.zookeeper.server.util.VerifyingFileFactory.create(VerifyingFileFactory.java:39)
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:180)
... 2 more
Invalid config, exiting abnormally
[2022-03-01 00:06:37,853] INFO ZooKeeper audit is disabled.
(org.apache.zookeeper.audit.ZKAuditProvider)
[2022-03-01 00:06:37,855] ERROR Exiting JVM with code 2
(org.apache.zookeeper.util.ServiceUtils)
The error message already tells you what went wrong:
Caused by: java.lang.IllegalArgumentException: config\zookeeper.properties file is missing at org.apache.zookeeper.server.util.VerifyingFileFactory.doFailForNonExistingPath(VerifyingFileFactory.java:54)
It's raising this error because it cannot find the zookeeper.properties file when running ZooKeeper.
The Kafka .bat files for Windows sit one folder deeper, inside the windows folder, so you need to step out twice with ..\ to point to the config directory, as the layout below shows.
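For reference, the relevant part of a default Kafka distribution looks like this:
kafka_2.13-3.1.0\
    bin\
        windows\        <-- the .bat files live here
    config\
        zookeeper.properties
        server.properties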
This should work for you
kafka_2.13-3.1.0\bin\windows> .\zookeeper-server-start.bat ..\..\config\zookeeper.properties
And the same thing to start the Kafka server afterward:
kafka_2.13-3.1.0\bin\windows> .\kafka-server-start.bat ..\..\config\server.properties
These commands point to the zookeeper.properties and server.properties files inside Kafka's config directory at
kafka_2.13-3.1.0\config
This assumes you didn't modify or move Kafka's default config directory.
I'm on Windows too, and I had to change the slashes to make it work:
from:
zookeeper-server-start.bat ..\..\config\zookeeper.properties
to:
zookeeper-server-start.bat ../../config/zookeeper.properties
Hope this helps you!
This should work:
First, your working directory should be the Kafka root folder (the one just above the bin directory) for both steps:
To start zookeeper:
.\bin\windows\zookeeper-server-start.bat .\config\zookeeper.properties
To start the kafka server:
.\bin\windows\kafka-server-start.bat .\config\server.properties

Running Kafka on Windows but getting Access denied exception for logs folder

I've been trying to run both ZooKeeper and Kafka 2.13 on my local Windows machine. I have modified the server properties to point to c:/kafka/kafka-logs and the ZooKeeper data directory to point to c:/kafka/zookeeper-data, as shown below.
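In other words, the two config files contain entries along these lines:
# server.properties
log.dirs=c:/kafka/kafka-logs
# zookeeper.properties
dataDir=c:/kafka/zookeeper-data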
ZooKeeper starts without any issues, but when I attempt to start Kafka with
.\bin\windows\kafka-server-start.bat .\config\server.properties
I get the error below, an AccessDeniedException. I have already tried the following:
Deleting the kafka-logs and zookeeper-data folders and running both ZooKeeper and Kafka again - I still run into the error
Creating the kafka-logs folder before running Kafka - I still get the access denied exception
Running the command prompt as administrator before typing the commands - does not work
Could anyone give some suggestions on how to fix this?
Thank you
What Kafka version are you using? I had this issue in 3.0.0; I downgraded to 2.8.1 and the issue was resolved.
I think it is something specific to Kafka.
I reproduced the same issue with Kafka 3.0. Downgrading to 2.8.1 will help.
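If it helps, a sketch of fetching 2.8.1 from the Apache archive (the exact artifact name and URL are assumptions; check the archive listing for the current paths):
curl -O https://archive.apache.org/dist/kafka/2.8.1/kafka_2.13-2.8.1.tgz
tar -xzf kafka_2.13-2.8.1.tgz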

AccessDeniedException when deleting a topic on Windows Kafka

I just installed Kafka (from Confluent Platform) on my Windows machine. I started up ZooKeeper and Kafka, and creating topics, producing to them, and consuming from them all work. However, as soon as I delete a topic, Kafka crashes like this:
PS C:\confluent-4.1.1> .\bin\windows\kafka-topics.bat -zookeeper 127.0.0.1:2181 --topic foo --create --partitions 1 --replication-factor 1
Created topic "foo".
PS C:\confluent-4.1.1> .\bin\windows\kafka-topics.bat -zookeeper 127.0.0.1:2181 --topic foo --delete
Topic foo is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
This is the crash output:
[2018-06-08 09:44:54,185] ERROR Error while renaming dir for foo-0 in log dir C:\confluent-4.1.1\data\kafka (kafka.server.LogDirFailureChannel)
java.nio.file.AccessDeniedException: C:\confluent-4.1.1\data\kafka\foo-0 -> C:\confluent-4.1.1\data\kafka\foo-0.cf697a92ed5246c0977bf9a279f15de8-delete
at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
at java.nio.file.Files.move(Files.java:1395)
at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:697)
at kafka.log.Log$$anonfun$renameDir$1.apply$mcV$sp(Log.scala:579)
at kafka.log.Log$$anonfun$renameDir$1.apply(Log.scala:577)
at kafka.log.Log$$anonfun$renameDir$1.apply(Log.scala:577)
at kafka.log.Log.maybeHandleIOException(Log.scala:1678)
at kafka.log.Log.renameDir(Log.scala:577)
at kafka.log.LogManager.asyncDelete(LogManager.scala:828)
at kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:240)
at kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:235)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:258)
at kafka.cluster.Partition.delete(Partition.scala:235)
at kafka.server.ReplicaManager.stopReplica(ReplicaManager.scala:347)
at kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:377)
at kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:375)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.server.ReplicaManager.stopReplicas(ReplicaManager.scala:375)
at kafka.server.KafkaApis.handleStopReplicaRequest(KafkaApis.scala:205)
at kafka.server.KafkaApis.handle(KafkaApis.scala:116)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
at java.lang.Thread.run(Thread.java:748)
Suppressed: java.nio.file.AccessDeniedException: C:\confluent-4.1.1\data\kafka\foo-0 -> C:\confluent-4.1.1\data\kafka\foo-0.cf697a92ed5246c0977bf9a279f15de8-delete
at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301)
at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
at java.nio.file.Files.move(Files.java:1395)
at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:694)
... 23 more
[2018-06-08 09:44:54,187] INFO [ReplicaManager broker=0] Stopping serving replicas in dir C:\confluent-4.1.1\data\kafka (kafka.server.ReplicaManager)
[2018-06-08 09:44:54,192] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions (kafka.server.ReplicaFetcherManager)
[2018-06-08 09:44:54,193] INFO [ReplicaAlterLogDirsManager on broker 0] Removed fetcher for partitions (kafka.server.ReplicaAlterLogDirsManager)
[2018-06-08 09:44:54,195] INFO [ReplicaManager broker=0] Broker 0 stopped fetcher for partitions and stopped moving logs for partitions because they are in the failed log directory C:\confluent-4.1.1\data\kafka. (kafka.server.ReplicaManager)
[2018-06-08 09:44:54,195] INFO Stopping serving logs in dir C:\confluent-4.1.1\data\kafka (kafka.log.LogManager)
[2018-06-08 09:44:54,197] ERROR Shutdown broker because all log dirs in C:\confluent-4.1.1\data\kafka have failed (kafka.log.LogManager)
[2018-06-08 09:44:54,198] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions (kafka.server.ReplicaFetcherManager)
The user running Zookeeper and Kafka has full access rights to C:\confluent-4.1.1\data\kafka.
What am I missing?
I know I'm late to the party, but keep in mind that even if you delete your topic manually or via some Kafka UI, and you delete all the Kafka logs, Kafka may still fail to start because of the state it syncs with ZooKeeper.
So, make sure you also clean up the ZooKeeper state by deleting ZooKeeper's data.
Please know these actions are irreversible. Also, run as Administrator.
I had a similar problem, and it happens only under Windows; see KAFKA-1194, which still applies to Kafka 1.1.0.
The only workaround available is to disable the cleaner: log.cleaner.enable=false
For local development under Windows you can ignore this issue, since it does not occur on other operating systems.
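In server.properties that is a one-line change:
log.cleaner.enable=false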
I had a similar problem after deleting a topic. I had to go to the topic's location and delete it manually, and that worked:
/tmp/kafka-logs/[yourTopicName]
I am not sure if the same will work for you, as I am also new to Kafka.
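For example (a sketch; Kafka names the partition folders topicName-0, topicName-1, and so on):
rm -rf /tmp/kafka-logs/yourTopicName-*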
1- Stop the ZooKeeper and Kafka servers.
2- Go to the 'kafka-logs' folder; there you will see a list of Kafka topic folders. Delete the folder(s) with the topic name (see the sketch after this list).
3- Go to the 'zookeeper-data' folder and delete the data inside it.
4- Start the ZooKeeper and Kafka servers again.
Note: if you get a "The Cluster ID xxxxxxxxxx doesn't match stored clusterId" error, you have to delete all files in Kafka's log dir.
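A sketch of steps 2 and 3 on Windows (the folder locations are assumptions; match them to log.dirs and dataDir in your config files):
rem step 2: remove the folders for the deleted topic (one per partition)
rmdir /s /q C:\kafka\kafka-logs\yourTopicName-0
rem step 3: clear ZooKeeper's data
rmdir /s /q C:\kafka\zookeeper-data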
Problem:
I had a similar problem after deleting a topic. ZooKeeper started successfully, but while running Kafka I got the above-mentioned issue.
Analysis:
In my case, I had redirected the Kafka logs to a new folder location, C:\Tools\kafka_2.13-2.6.0\kafka-test-logs, but forgot to create the kafka-test-logs folder. Kafka then auto-created a folder from the literal path name, e.g. Toolskafka_2.13-2.6.0kafka-test-logs (backslashes act as escape characters in .properties files, so they get stripped). That is why even deleting this logs folder didn't work in my case.
Solution:
First I stopped ZooKeeper. I created the kafka-test-logs folder I had forgotten earlier, deleted the auto-created default logs folder, and then restarted ZooKeeper and the Kafka server. That worked for me.
Thank you! Cheers and happy coding.
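A sketch of the corrected setting (forward slashes sidestep the backslash-escaping issue in .properties files; the path is this answer's example):
log.dirs=C:/Tools/kafka_2.13-2.6.0/kafka-test-logs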
I was also facing the same issue and resolved it by downloading the following version of Kafka from this link:
Version 2.8.1
Then I changed the zookeeper.properties file in the config folder to
dataDir=C:/kafka/zookeeper
and the server.properties file in the config folder to
log.dirs=C:/kafka/kafka-logs
Make sure your Kafka folder is extracted and stored in the C:/ drive, or else amend the paths in the config files accordingly.

Kafka console producer Error in Hortonworks HDP 2.3 Sandbox

I have searched all over and couldn't find a fix for this error. I have checked this Stack Overflow issue, but it is not the problem in my case.
I started a ZooKeeper server. The command to start the server was:
bin/zookeeper-server-start.sh config/zookeeper.properties
Then I SSHed into the VM using Putty and started the Kafka server using:
$ bin/kafka-server-start.sh config/server.properties
Then I created a Kafka topic, and when I list topics, it appears.
Then I opened another Putty session, started kafka-console-producer.sh, typed a message (even just pressed Enter), and got a long, repetitive exception.
The configuration files zookeeper.properties, server.properties, and kafka-producer.properties are as follows (respectively).
The version of Kafka I am running is 8.2.2-something, as I saw in the kafka/libs folder.
P.S. I get no messages in the consumer.
Can anybody figure out the problem?
The tutorial I was following was http://www.bogotobogo.com/Hadoop/BigData_hadoop_Zookeeper_Kafka_single_node_single_broker_cluster.php
On the hortonworks sandbox have a look at the server configuration:
$ less /etc/kafka/conf/server.properties
In my case it said
...
listeners=PLAINTEXT://sandbox.hortonworks.com:6667
...
This means you have to use the following commands to successfully connect with the console producer:
$ cd /usr/hdp/current/kafka-broker
$ bin/kafka-console-producer.sh --topic test --broker-list sandbox.hortonworks.com:6667
It won't work if you use --broker-list 127.0.0.1:6667 or --broker-list localhost:6667. See also http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_installing_manually_book/content/configure_kafka.html
To consume the messages, use:
$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
As you mentioned in your question, you are using HDP 2.3, so when running the console producer you need to provide sandbox.hortonworks.com:6667 as the broker list.
Please use the same address when running the console consumer.
Please let me know if you still face any issue.
Within Kafka, there is an internal conversation that goes on between clients (producers and consumers) and the broker (server). During those conversations, clients often ask the server for the address of the broker that's managing a particular partition. The answer is always a fully-qualified host name. Without going into specifics: if you ever refer to a broker by an address that is not that broker's fully-qualified host name, there are situations where the Kafka implementation runs into trouble.
Another mistake that's easy to make, especially with the Sandbox, is referring to a broker by an address that's not defined in DNS. That's why every node in the cluster has to be able to address every other node by fully-qualified host name. It's also why, when accessing the sandbox from another virtual image running on the same machine, you have to add sandbox.hortonworks.com to that image's hosts file.
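For example, a hosts-file entry might look like this (the IP address is an assumption; use whatever address your sandbox VM actually has):
192.168.56.101   sandbox.hortonworks.com
On Linux the file is /etc/hosts; on Windows it is C:\Windows\System32\drivers\etc\hosts.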

Command 'ZkStartPreservingDatastore' failed for service 'zookeeper1'

I need your help. I have a question about my Hadoop cluster failing to start. The error:
Command 'ZkStartPreservingDatastore' failed for service 'zookeeper1'
Please help me. Thank you.
Hope this helps. I encountered the same issue on our production CDH4 cluster. I tried to restart the service manually, and what I noticed in the logs was the following error message:
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#193] - Too many connections from /10.0.2.133 - max is 50
So I edited /etc/zookeeper/conf/zoo.cfg and changed maxClientCnxns to 500. After that I managed to restart ZooKeeper, and the cluster came back.
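The relevant line in /etc/zookeeper/conf/zoo.cfg after the change:
maxClientCnxns=500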
