I started ZooKeeper, and after that I ran the "kafka-server-start.bat mypath\server.properties" command to start the Kafka server.
I am getting the following error in the Kafka server window:
INFO Opening socket connection to server localhost/<unresolved>:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
WARN Session 0x0 for server localhost/<unresolved>:2181, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
java.nio.channels.UnresolvedAddressException
at java.base/sun.nio.ch.Net.checkAddress(Net.java:149)
at java.base/sun.nio.ch.Net.checkAddress(Net.java:157)
at java.base/sun.nio.ch.SocketChannelImpl.checkRemote(SocketChannelImpl.java:815)
at java.base/sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:837)
at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:277)
at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:287)
at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1021)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1064)
Below are the properties in server.properties
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=C:\KafkaLog
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
confluent.support.metrics.enable=true
confluent.support.customer.id=anonymous
group.initial.rebalance.delay.ms=0
The ZooKeeper properties are:
dataDir=C:\ZookeeperLog
clientPort=2181
maxClientCnxns=0
We tried the following things to resolve the issue:
Updated the dataDir and log.dirs properties in the config files (to match the Windows platform)
Verified the ZooKeeper startup using the netstat -aon | findstr '2181' command
Updated the zookeeper.connect URL to 127.0.0.1:2181
Added the missing loopback entry below to the hosts file and restarted the system:
127.0.0.1 localhost
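For reference, the checks above can be reproduced from a Command Prompt roughly as follows (a sketch only; the port and hosts-file path are the Windows defaults assumed from the configs above):
ping -n 1 localhost
type C:\Windows\System32\drivers\etc\hosts | findstr /i "localhost"
netstat -aon | findstr "2181"
If localhost does not resolve to 127.0.0.1 in the first two checks, the missing hosts entry above is the likely culprit.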
Related
I have downloaded Confluent Kafka on a Windows machine. ZooKeeper is running successfully, but while running the Kafka server I am getting the exception below.
INFO Opening socket connection to server localhost/<unresolved>:2181. Will not attempt to
authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
WARN Session 0x0 for server localhost/<unresolved>:2181, unexpected error, closing socket
connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
java.nio.channels.UnresolvedAddressException
at java.base/sun.nio.ch.Net.checkAddress(Net.java:149)
at java.base/sun.nio.ch.Net.checkAddress(Net.java:157)
at java.base/sun.nio.ch.SocketChannelImpl.checkRemote(SocketChannelImpl.java:815)
at java.base/sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:837)
at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:277)
at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:287)
at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1021)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1064)
Below are the options I tried to solve the issue, but they didn't work.
1. Added listeners=PLAINTEXT://127.0.0.1:9092 and advertised.listeners=PLAINTEXT://127.0.0.1:9092 in the server.properties file
2. Added the localhost IP address to the hosts file
Below are the properties in the server.properties file:
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=C:\KafkaLog
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
confluent.support.metrics.enable=true
confluent.support.customer.id=anonymous
group.initial.rebalance.delay.ms=0
The ZooKeeper properties are:
dataDir=C:\ZookeeperLog
clientPort=2181
maxClientCnxns=0
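Since UnresolvedAddressException means the hostname in zookeeper.connect could not be resolved, one further change worth trying (a sketch, not a confirmed fix) is to bypass name resolution by pointing zookeeper.connect at the loopback IP instead of the localhost hostname:
zookeeper.connect=127.0.0.1:2181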
I recently installed NiFi, and it was working fine for a few days. But today, when I try to open it using run-nifi.bat, the NiFi window closes within a few seconds with the error below:
2019-04-11 23:07:40,146 WARN [NiFi Bootstrap Command Listener] org.apache.nifi.bootstrap.RunNiFi Failed to set permissions so that only the owner can read status file C:\Users\DOWNLO~1\NIFI-1~1.1-B\NIFI-1~1.1\bin\..\run\nifi.status; this may allows others to have access to the key needed to communicate with NiFi. Permissions should be changed so that only the owner can read this file
2019-04-11 23:07:40,149 INFO [NiFi Bootstrap Command Listener] org.apache.nifi.bootstrap.RunNiFi Apache NiFi now running and listening for Bootstrap requests on port 54149
2019-04-11 23:08:00,352 ERROR [NiFi logging handler] org.apache.nifi.StdErr Failed to start web server: Must configure HTTP or HTTPS connector
2019-04-11 23:08:00,352 ERROR [NiFi logging handler] org.apache.nifi.StdErr Shutting down...
2019-04-11 23:08:00,419 INFO [main] org.apache.nifi.bootstrap.RunNiFi NiFi never started. Will not restart NiFi
I did look up the "org.apache.nifi.StdErr Failed to start web server: Must configure HTTP or HTTPS connector" error, but unfortunately I can't find a similar case. I'm sure that no settings or properties have been changed since installation. Any suggestions?
I was getting the same error.
You need to check the nifi-app.log file to get more details on this type of error.
Here's what I did: remove the port information from nifi.properties for HTTPS and keep only the HTTP settings, then restart NiFi.
Keep only one of the two enabled, either HTTPS or HTTP; in my case it was https://127.0.0.1:8443/nifi/.
It works fine for me.
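For illustration, a minimal nifi.properties sketch with only the HTTP connector configured (the host and port values here are assumptions, not values taken from the question):
nifi.web.http.host=127.0.0.1
nifi.web.http.port=8080
nifi.web.https.host=
nifi.web.https.port=
The "Must configure HTTP or HTTPS connector" error is what NiFi reports when it ends up with no usable connector at all, so the key point is that exactly one of the HTTP/HTTPS property pairs should be populated.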
When I tried to stop ZooKeeper with the command "zkServer stop", I got the following result:
call "C:\Program Files\Java\jdk1.8.0_121"\bin\java "-Dzookeeper.log.dir=C:\zookeeper-3.4.10\bin\.." "-Dzookeeper.root.logger=INFO,CONSOLE" -cp "C:\zookeeper-3.4.10\bin\..\build\classes;C:\zookeeper-3.4.10\bin\..\build\lib\*;C:\zookeeper-3.4.10\bin\..\*;C:\zookeeper-3.4.10\bin\..\lib\*;C:\zookeeper-3.4.10\bin\..\conf" org.apache.zookeeper.server.quorum.QuorumPeerMain "C:\zookeeper-3.4.10\bin\..\conf\zoo.cfg" stop
Output:
2017-09-01 13:55:22,070 [myid:] - INFO [main:DatadirCleanupManager#78] - autopurge.snapRetainCount set to 3
2017-09-01 13:55:22,072 [myid:] - INFO [main:DatadirCleanupManager#79] - autopurge.purgeInterval set to 0
2017-09-01 13:55:22,072 [myid:] - INFO [main:DatadirCleanupManager#101] - Purge task is not scheduled.
2017-09-01 13:55:22,072 [myid:] - WARN [main:QuorumPeerMain#113] - Either no config or no quorum defined in config, running in standalone mode
2017-09-01 13:55:22,145 [myid:] - ERROR [main:ZooKeeperServerMain#55] - Invalid arguments, exiting abnormally
java.lang.NumberFormatException: For input string: "C:\zookeeper-3.4.10\bin\..\conf\zoo.cfg"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:580)
at java.lang.Integer.parseInt(Integer.java:615)
at org.apache.zookeeper.server.ServerConfig.parse(ServerConfig.java:59)
at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:84)
at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:53)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78)
2017-09-01 13:55:22,148 [myid:] - INFO [main:ZooKeeperServerMain#56] - Usage: ZooKeeperServerMain configfile | port datadir [ticktime] [maxcnxns]
I am sure I have started ZooKeeper, because when I try to start a new instance it shows "java.net.BindException: Address already in use: bind".
Another strange problem is that I cannot find ZooKeeper in the Windows Services list. However, when I listed all port usage in Windows PowerShell with netstat -anb, I found that port 2181 is in use:
Proto Local Address Foreign Address State
TCP 0.0.0.0:2181 0.0.0.0:0 LISTENING
[java.exe]
TCP [::1]:2181 [::1]:62268 ESTABLISHED
[java.exe]
TCP [::1]:2181 [::1]:62279 ESTABLISHED
[java.exe]
TCP [::1]:2181 [::1]:62280 ESTABLISHED
[java.exe]
TCP [::1]:2181 [::1]:62281 ESTABLISHED
[java.exe]
I was running ZooKeeper on Windows and wasn't able to stop the ZooKeeper instance listening on port 2181 using zookeeper-stop.sh, so I tried taskkill with this double-slash "//" syntax, and it worked:
1. netstat -ano | findstr :2181
TCP 0.0.0.0:2181 0.0.0.0:0 LISTENING 8876
TCP [::]:2181 [::]:0 LISTENING 8876
2. taskkill //PID 8876 //F
SUCCESS: The process with PID 8876 has been terminated.
Credit goes to: How do I kill the process currently using a port on localhost in Windows?
It looks like there is an open bug concerning the start and stop commands in ZooKeeper.
To start ZooKeeper, omit the start parameter and call bin\zkServer instead.
To stop it, if you don't see the process in Task Manager, you need to connect to the ZooKeeper server as an administrator and run the kill commands.
More details are here.
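For completeness, the kill step referred to above usually comes down to something like the following from an elevated Command Prompt (a sketch; the PID placeholder is whatever netstat reports for port 2181, as in the taskkill answer above):
netstat -ano | findstr :2181
taskkill /PID <pid-from-netstat> /F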
I have a Storm cluster with 1 Nimbus node, 4 Supervisor nodes, and 2 ZooKeeper nodes. My storm.yaml is as follows:
storm.zookeeper.servers:
- "storage14"
- "storage15"
nimbus.seeds: ["storage01"]
#storm.local.hostname: "storage05"
supervisor.supervisors:
- "storage02"
- "storage03"
- "storage04"
- "storage05"
storm.local.dir: "/tmp/storm"
worker.childopts: "-Xmx%HEAP-MEM%m -XX:+PrintGCDetails -Xloggc:artifacts/gc.log -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=artifacts/heapdump"
This storm.yaml file is used by both Nimbus and the Supervisors. When Nimbus is started, I have storm.local.hostname commented out, as shown above.
However, when starting the Supervisors on their respective nodes, I uncomment storm.local.hostname and set it to the hostname of the node on which the supervisor is being launched. For instance, if I were launching the supervisor on storage05, the storm.yaml file would have the following additional config parameter:
storm.local.hostname: "storage05"
The problem is that even though Nimbus launches successfully and I can see it in the Storm UI, some supervisors do not seem to be able to connect to Nimbus. For instance, of the 4 nodes I start supervisors on, the Storm UI often shows only 2 of them connected. However, if I SSH into these nodes and run jps, I can see that the supervisor process is running on ALL of them.
The supervisors that do end up connecting are not always the same ones, so it is definitely not a problem with those specific nodes.
Another thing to note: if I try to submit a topology to whichever nodes did get connected, it does not get registered by the cluster, and I cannot see that topology in the UI either.
What do you think might be causing this erratic behavior?
UPDATE:
The tail end of nimbus.log has the following lines:
2017-01-25 00:04:25.216 o.a.s.s.o.a.z.ClientCnxn [WARN] Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701)
at org.apache.storm.shade.org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.storm.shade.org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2017-01-25 00:04:25.317 o.a.s.s.o.a.z.ClientCnxn [INFO] Opening socket connection to server storage15/192.168.140.195:2181. Will not attempt to authenticate using SASL (unknown error)
2017-01-25 00:04:25.317 o.a.s.s.o.a.z.ClientCnxn [WARN] Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701)
at org.apache.storm.shade.org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.storm.shade.org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2017-01-25 00:04:25.686 o.a.s.s.o.a.z.ClientCnxn [INFO] Opening socket connection to server storage15/192.168.140.195:2181. Will not attempt to authenticate using SASL (unknown error)
2017-01-25 00:04:25.686 o.a.s.s.o.a.z.ClientCnxn [WARN] Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701)
at org.apache.storm.shade.org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.storm.shade.org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2017-01-25 00:04:25.787 o.a.s.s.o.a.z.ClientCnxn [INFO] Opening socket connection to server storage14/192.168.140.194:2181. Will not attempt to authenticate using SASL (unknown error)
2017-01-25 00:04:25.787 o.a.s.s.o.a.z.ClientCnxn [WARN] Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701)
at org.apache.storm.shade.org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.storm.shade.org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
Your UPDATE (the nimbus log) indicates that Nimbus cannot connect to the ZooKeeper cluster. Please check that the ZooKeeper cluster (storage14/storage15) is accessible from storage01: not only that the nodes are reachable, but also that you can telnet to the ZooKeeper servers via "telnet storage14 (and/or storage15) 2181".
Once the ZK connectivity issue is gone, please try starting the supervisors again; a quick check is sketched below.
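A minimal connectivity check from storage01 might look like this (a sketch; it assumes telnet and/or nc are available and that ZooKeeper's standard ruok four-letter command is enabled):
telnet storage14 2181
telnet storage15 2181
echo ruok | nc storage14 2181
If the server is healthy, the ruok check should answer imok; a connection refusal here matches the ConnectException in the nimbus log.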
I am using Kafka and ZooKeeper and creating a connection between them, but the connection keeps getting dropped when I try to create a new Kafka::Consumer:
ZOOKEEPER = '127.0.0.1:2181'
CLIENT_ID = '************'
TOPICS = ['*****']
#consumer = Kafka::Consumer.new(CLIENT_ID, TOPICS, zookeeper: ZOOKEEPER, logger: nil)
I also checked the ZooKeeper and Kafka log files and found that the Kafka-to-ZooKeeper connection is dropped when I try to create a new Kafka::Consumer.
Kafka Log:
...
[2016-03-04 16:14:47,553] INFO [Group Metadata Manager on Broker 0]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2016-03-04 16:16:11,419] INFO Unable to read additional data from server sessionid 0x1533ff65f850003, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2016-03-04 16:16:11,520] INFO zookeeper state changed (Disconnected) (org.I0Itec.zkclient.ZkClient)
[2016-03-04 16:16:13,128] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2016-03-04 16:16:13,129] WARN Session 0x1533ff65f850003 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
...
Zookeeper Log:
...
2016-04-04 10:30:30,577 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#839] - Client attempting to establish new session at /127.0.0.1:51152
2016-04-04 10:30:30,579 - INFO [SyncThread:0:FileTxnLog#199] - Creating new log file: log.725
2016-04-04 10:30:30,668 - INFO [SyncThread:0:ZooKeeperServer#595] - Established session 0x153df9fc2a70000 with negotiated timeout 6000 for client /127.0.0.1:51152
2016-04-04 10:30:31,714 - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor#627] - Got user-level KeeperException when processing sessionid:0x153df9fc2a70000 type:delete cxid:0x26 zxid:0x728 txntype:-1 reqpath:n/a Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election
2016-04-04 10:30:31,883 - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor#627] - Got user-level KeeperException when processing sessionid:0x153df9fc2a70000 type:create cxid:0x2d zxid:0x729 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers
...
Installed Gems
Using ione 1.2.3
Using json 1.8.3
Using thor 0.19.1
Using zookeeper 1.4.11
Using poseidon 0.0.5
Using bundler 1.11.2
Using cassandra-driver 2.1.5
Using kazoo-ruby 0.4.0
Using kafka-consumer 0.1.2
I am not able to figure out exactly where the version problem is.
I am getting this error:
~/../kazoo-ruby-0.4.0/lib/kazoo/broker.rb:83:in `from_json': Kazoo::VersionNotSupported
~/../kazoo-ruby-0.4.0/lib/kazoo/cluster.rb:38:in `block (3 levels) in brokers'
I got the solution: I was using Kafka version 0.9.0.1 or 0.8.0 beta, which was causing a version issue. I downloaded and installed the Kafka 0.8.2.2 release with Scala version 2.11, which is working for me.
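For anyone following along, the download step is roughly (a sketch; the archive URL and the choice of the 0.8.2.2 / Scala 2.11 build are assumptions based on the answer above):
curl -O https://archive.apache.org/dist/kafka/0.8.2.2/kafka_2.11-0.8.2.2.tgz
tar -xzf kafka_2.11-0.8.2.2.tgz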