I have downloaded the Kafka 2.4.0 binaries and am running them on Windows.
When I start Kafka from the command line with kafka-server-start.bat and the default server.properties file, I get the following errors:
[2020-02-04 15:37:33,775] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (host.docker.internal/10.177.172.141:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-02-04 15:37:34,931] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (host.docker.internal/10.177.172.141:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-02-04 15:37:36,122] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (host.docker.internal/10.177.172.141:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-02-04 15:37:37,364] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (host.docker.internal/10.177.172.141:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-02-04 15:37:38,692] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (host.docker.internal/10.177.172.141:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
From the ZooKeeper logs I get these errors:
[2020-02-04 15:37:23,056] INFO Client attempting to establish new session at /127.0.0.1:51457 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-02-04 15:37:23,071] INFO Established session 0x100000298ee0000 with negotiated timeout 6000 for client /127.0.0.1:51457 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-02-04 15:37:23,198] INFO Got user-level KeeperException when processing sessionid:0x100000298ee0000 type:create cxid:0x1 zxid:0x12c txntype:-1 reqpath:n/a Error Path:/consumers Error:KeeperErrorCode = NodeExists for /consumers (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-02-04 15:37:23,214] INFO Got user-level KeeperException when processing sessionid:0x100000298ee0000 type:create cxid:0x2 zxid:0x12d txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-02-04 15:37:23,230] INFO Got user-level KeeperException when processing sessionid:0x100000298ee0000 type:create cxid:0x3 zxid:0x12e txntype:-1 reqpath:n/a Error Path:/brokers/topics Error:KeeperErrorCode = NodeExists for /brokers/topics (org.apache.zookeeper.server.PrepRequestProcessor)
I have Docker installed, but it is not running. The server.properties file is untouched.
I am not able to understand or debug why it is connecting to "host.docker.internal".
Note: Kafka ran successfully once. I remember I did a force shutdown of Windows while Kafka was running; I do not know whether that could be the issue.
I tried a lower version of Kafka and I am still getting the same error.
why it is connecting to "host.docker.internal"
Most likely, your Windows hosts file has been updated with that hostname.
You need to edit your server.properties to use
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://127.0.0.1:9092
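A quick way to confirm that the hosts file is what maps your machine's name to that address is to search it for Docker entries (a sketch for a Windows command prompt; the path is the standard hosts file location):
findstr /i docker C:\Windows\System32\drivers\etc\hosts
After changing listeners/advertised.listeners, restart the broker so the new advertised address is registered in ZooKeeper.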
The Windows hosts file did not look right to me.
I uninstalled Docker.
Removed all entries for "host.docker.internal", "gateway.docker.internal" and "kubernetes.docker.internal".
Restarted my system, and Kafka ran properly.
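For reference, the section that Docker Desktop typically adds to C:\Windows\System32\drivers\etc\hosts looks roughly like this (the exact IP varies per machine; the entries below are illustrative). Removing this block, or uninstalling Docker as described above, stops the broker's hostname from resolving to host.docker.internal:
# Added by Docker Desktop
10.177.172.141 host.docker.internal
10.177.172.141 gateway.docker.internal
127.0.0.1 kubernetes.docker.internal
# End of section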
Related
Our Hadoop servers went down because of a disk-space issue. After we increased the disk space, HDFS, ZooKeeper and Kafka started working again, but HBase is not working.
It throws the exception below when restarting HBase from Ambari.
org.apache.hadoop.hbase.util.FileSystemVersionException: HBase file layout needs to be upgraded. You have version null and I want version 8. Consult http://hbase.apache.org/book.html for further information about upgrading HBase. Is your hbase.rootdir valid? If so, you may need to run 'hbase hbck -fixVersionFile'.
Based on that suggestion I ran hbase hbck -fixVersionFile as the hbase user, but then I get errors like this:
2019-12-10 19:04:59,535 INFO [ReadOnlyZKClient-slave01.testiot.cloud:2181,slave02.testiot.cloud:2181,slave03.testiot.cloud:2181#0x619bfe29-SendThread(slave02.testiot.cloud:2181)] zookeeper.ClientCnxn: Socket connection established, initiating session, client: /10.1.0.5:39250, server: slave02.testiot.cloud/10.1.0.7:2181
2019-12-10 19:04:59,560 INFO [ReadOnlyZKClient-slave01.testiot.cloud:2181,slave02.testiot.cloud:2181,slave03.testiot.cloud:2181#0x619bfe29-SendThread(slave02.testiot.cloud:2181)] zookeeper.ClientCnxn: Session establishment complete on server slave02.testiot.cloud/10.1.0.7:2181, sessionid = 0x26ef0e604f530e3, negotiated timeout = 60000
2019-12-10 19:05:03,908 INFO [main] client.RpcRetryingCallerImpl: Call exception, tries=6, retries=36, started=4163 ms ago, cancelled=false, msg=java.io.IOException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase-unsecure/master, details=
2019-12-10 19:05:07,945 INFO [main] client.RpcRetryingCallerImpl: Call exception, tries=7, retries=36, started=8200 ms ago, cancelled=false, msg=java.io.IOException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase-unsecure/master, details=
2019-12-10 19:05:17,964 INFO [main] client.RpcRetryingCallerImpl: Call exception, tries=8, retries=36, started=18219 ms ago, cancelled=false, msg=java.io.IOException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase-unsecure/master, details=
2019-12-10 19:05:28,024 INFO [main] client.RpcRetryingCallerImpl: Call exception, tries=9, retries=36, started=28279 ms ago, cancelled=false, msg=java.io.IOException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase-unsecure/master, details=
I am running a three-node cluster. When I checked hbase.rootdir, the hbase.version file was not there.
HBase version: 2.0.2
ZooKeeper version: 3.4.6
I copied the hbase.version file from another server. Now it is working fine.
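A minimal sketch of what "copied the hbase.version file from another server" can look like with plain HDFS commands (the /apps/hbase/data path is only an example; use whatever hbase.rootdir points to on each cluster, and run the put as the hbase user):
hdfs dfs -get /apps/hbase/data/hbase.version /tmp/hbase.version    # on the healthy cluster
hdfs dfs -put /tmp/hbase.version /apps/hbase/data/hbase.version    # on the broken cluster
Then restart HBase from Ambari so the master re-reads the version file.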
We are using Spark 2.1 in our Ambari cluster.
The Spark Thrift Server is not stable and keeps restarting.
From the log we can see:
ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
We found the following link that describes a solution for this problem:
https://markobigdata.com/2016/08/11/yarn-application-has-already-ended-it-might-have-been-killed-or-unable-to-launch-application-master/
But after we set the parameters as described in the article, the problem still exists.
Please advise what the solution is.
Full log:
tail -f spark-hive-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-master01.sys873dns.com.out
Spark Command: /usr/jdk64/jdk1.8.0_112/bin/java -Dhdp.version=2.6.0.3-8 -cp /usr/hdp/current/spark2-thriftserver/conf/:/usr/hdp/current/spark2-thriftserver/jars/*:/usr/hdp/current/hadoop-client/conf/ -Xmx10000m org.apache.spark.deploy.SparkSubmit --conf spark.driver.memory=50g --properties-file /usr/hdp/current/spark2-thriftserver/conf/spark-thrift-sparkconf.conf --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 --name Thrift JDBC/ODBC Server --executor-cores 7 spark-internal
========================================
Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" with specified deploy mode instead.
18/02/08 09:38:07 ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:85)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:156)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:509)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2320)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:868)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:860)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:860)
at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:47)
at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2$.main(HiveThriftServer2.scala:81)
at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.main(HiveThriftServer2.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:745)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
18/02/08 09:38:07 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
18/02/08 09:38:07 ERROR Utils: Uncaught exception in thread main
java.lang.NullPointerException
Here are the YARN logs as well:
grep -i erro yarn-yarn-resourcemanager-master01.sys873dns.com.log
2018-02-08 11:19:00,993 INFO zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(1019)) - Opening socket connection to server master01.sys873dns.com/23.1.29.61:2181. Will not attempt to authenticate using SASL (unknown error)
2018-02-08 11:19:15,767 ERROR resourcemanager.ResourceManager (LogAdapter.java:error(69)) - RECEIVED SIGNAL 15: SIGTERM
2018-02-08 11:19:27,281 INFO zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(1019)) - Opening socket connection to server master01.sys873dns.com/23.1.29.61:2181. Will not attempt to authenticate using SASL (unknown error)
2018-02-08 11:29:00,064 INFO zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(1019)) - Opening socket connection to server master01.sys873dns.com/23.1.29.61:2181. Will not attempt to authenticate using SASL (unknown error)
2018-02-08 11:29:01,839 INFO zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(1019)) - Opening socket connection to server master01.sys873dns.com/23.1.29.61:2181. Will not attempt to authenticate using SASL (unknown error)
2018-02-08 11:29:15,725 ERROR resourcemanager.ResourceManager (LogAdapter.java:error(69)) - RECEIVED SIGNAL 15: SIGTERM
2018-02-08 11:29:27,033 INFO zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(1019)) - Opening socket connection to server master03.sys873dns.com/23.1.29.63:2181. Will not attempt to authenticate using SASL (unknown error)
ons.YarnException: Unauthorized request to start container.
2018-02-08 12:56:11,144 INFO amlauncher.AMLauncher (AMLauncher.java:run(273)) - Error launching appattempt_1518089370033_0028_000008. Got exception: org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request to start container.
2018-02-08 12:59:39,822 INFO amlauncher.AMLauncher (AMLauncher.java:run(273)) - Error launching appattempt_1518089370033_0029_000002. Got exception: org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request to start container.
2018-02-08 13:00:01,671 INFO amlauncher.AMLauncher (AMLauncher.java:run(273)) - Error launching appattempt_1518089370033_0029_000004. Got exception: org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request to start container.
2018-02-08 13:00:18,062 INFO amlauncher.AMLauncher (AMLauncher.java:run(273)) - Error launching appattempt_1518089370033_0029_000006. Got exception: org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request to start container.
2018-02-08 13:00:20,245 INFO amlauncher.AMLauncher (AMLauncher.java:run(273)) - Error launching appattempt_1518089370033_0030_000003. Got exception: org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request to start container.
2018-02-08 13:00:42,100 INFO amlauncher.AMLauncher (AMLauncher.java:run(273)) - Error launching appattempt_1518089370033_0030_000006. Got exception: org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request to start container.
2018-02-08 13:00:56,310 INFO amlauncher.AMLauncher (AMLauncher.java:run(273)) - Error launching appattempt_1518089370033_0030_000008. Got exception: org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request to start container.
2018-02-08 13:00:58,511 INFO amlauncher.AMLauncher (AMLauncher.java:run(273)) - Error launching appattempt_1518089370033_0030_000010. Got exception: org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request to start container.
2018-02-08 13:00:58,537 INFO rmapp.RMAppImpl (RMAppImpl.java:transition(1063)) - Application application_1518089370033_0030 failed 10 times due to Error launching appattempt_1518089370033_0030_000010. Got exception: org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request to start container.
Last part of the log:
2018-02-08 14:14:54,410 INFO rmapp.RMAppImpl (RMAppImpl.java:handle(778)) - application_1518089370033_0050 State change from FINAL_SAVING to FAILED
2018-02-08 14:14:54,410 INFO capacity.ParentQueue (ParentQueue.java:removeApplication(385)) - Application removed - appId: application_1518089370033_0050 user: hive leaf-queue of parent: root #applications: 1
2018-02-08 14:14:54,412 INFO integration.RMRegistryOperationsService (RMRegistryOperationsService.java:onApplicationCompleted(119)) - Application application_1518089370033_0050 completed, purging application-level records
2018-02-08 14:14:54,412 INFO integration.RMRegistryOperationsService (RMRegistryOperationsService.java:purgeRecordsAsync(198)) - records under / with ID application_1518089370033_0050 and policy application: {}
2018-02-08 14:14:55,393 INFO rmcontainer.RMContainerImpl (RMContainerImpl.java:handle(422)) - container_e09_1518089370033_0049_10_000001 Container Transitioned from RUNNING to COMPLETED
2018-02-08 14:14:55,393 INFO scheduler.SchedulerNode (SchedulerNode.java:releaseContainer(220)) - Released container container_e09_1518089370033_0049_10_000001 of capacity <memory:10240, vCores:1> on host worker02.sys768.com:45454, which currently has 0 containers, <memory:0, vCores:0> used and <memory:30720, vCores:6> available, release resources=true
2018-02-08 14:14:55,393 INFO attempt.RMAppAttemptImpl (RMAppAttemptImpl.java:rememberTargetTransitionsAndStoreState(1209)) - Updating application attempt appattempt_1518089370033_0049_000010 with final state: FAILED, and exit status: -1000
2018-02-08 14:14:55,398 INFO attempt.RMAppAttemptImpl (RMAppAttemptImpl.java:handle(809)) - appattempt_1518089370033_0049_000010 State change from LAUNCHED to FINAL_SAVING
2018-02-08 14:14:55,399 INFO integration.RMRegistryOperationsService (RMRegistryOperationsService.java:onContainerFinished(144)) - Container container_e09_1518089370033_0049_10_000001 finished, purging container-level records
2018-02-08 14:14:55,400 INFO integration.RMRegistryOperationsService (RMRegistryOperationsService.java:purgeRecordsAsync(198)) - records under / with ID container_e09_1518089370033_0049_10_000001 and policy container: {}
2018-02-08 14:14:55,408 INFO resourcemanager.ApplicationMasterService (ApplicationMasterService.java:unregisterAttempt(685)) - Unregistering app attempt : appattempt_1518089370033_0049_000010
2018-02-08 14:14:55,408 INFO security.AMRMTokenSecretManager (AMRMTokenSecretManager.java:applicationMasterFinished(124)) - Application finished, removing password for appattempt_1518089370033_0049_000010
2018-02-08 14:14:55,408 INFO attempt.RMAppAttemptImpl (RMAppAttemptImpl.java:handle(809)) - appattempt_1518089370033_0049_000010 State change from FINAL_SAVING to FAILED
2018-02-08 14:14:55,408 INFO rmapp.RMAppImpl (RMAppImpl.java:transition(1330)) - The number of failed attempts is 10. The max attempts is 10
2018-02-08 14:14:55,409 INFO rmapp.RMAppImpl (RMAppImpl.java:rememberTargetTransitionsAndStoreState(1123)) - Updating application application_1518089370033_0049 with final state: FAILED
2018-02-08 14:14:55,409 INFO rmapp.RMAppImpl (RMAppImpl.java:handle(778)) - application_1518089370033_0049 State change from ACCEPTED to FINAL_SAVING
2018-02-08 14:14:55,409 INFO recovery.RMStateStore (RMStateStore.java:transition(228)) - Updating info for app: application_1518089370033_0049
2018-02-08 14:14:55,409 INFO capacity.CapacityScheduler (CapacityScheduler.java:doneApplicationAttempt(811)) - Application Attempt appattempt_1518089370033_0049_000010 is done. finalState=FAILED
2018-02-08 14:14:55,409 INFO scheduler.AppSchedulingInfo (AppSchedulingInfo.java:clearRequests(124)) - Application application_1518089370033_0049 requests cleared
2018-02-08 14:14:55,410 INFO capacity.LeafQueue (LeafQueue.java:removeApplicationAttempt(795)) - Application removed - appId: application_1518089370033_0049 user: hive queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2018-02-08 14:14:55,417 INFO rmapp.RMAppImpl (RMAppImpl.java:transition(1063)) - Application application_1518089370033_0049 failed 10 times due to AM Container for appattempt_1518089370033_0049_000010 exited with exitCode: -1000
For more detailed output, check the application tracking page: http://master02.sys768.com:8088/cluster/app/application_1518089370033_0049 Then click on links to logs of each attempt.
Diagnostics: org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-1212891131-25.1.53.61-1518077044052:blk_1073741833_1009 file=/hdp/apps/2.6.0.3-8/spark2/spark2-hdp-yarn-archive.tar.gz
Failing this attempt. Failing the application.
2018-02-08 14:14:55,418 INFO rmapp.RMAppImpl (RMAppImpl.java:handle(778)) - application_1518089370033_0049 State change from FINAL_SAVING to FAILED
2018-02-08 14:14:55,418 INFO capacity.ParentQueue (ParentQueue.java:removeApplication(385)) - Application removed - appId: application_1518089370033_0049 user: hive leaf-queue of parent: root #applications: 0
2018-02-08 14:14:55,419 INFO integration.RMRegistryOperationsService (RMRegistryOperationsService.java:onApplicationCompleted(119)) - Application application_1518089370033_0049 completed, purging application-level records
2018-02-08 14:14:55,419 INFO integration.RMRegistryOperationsService (RMRegistryOperationsService.java:purgeRecordsAsync(198)) - records under / with ID application_1518089370033_0049 and policy application: {}
[root@master02 yarn]#
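The Diagnostics line above points at a missing HDFS block for the Spark YARN archive; checking that file's health directly would look like this (a sketch, assuming a standard HDFS client on the cluster):
hdfs fsck /hdp/apps/2.6.0.3-8/spark2/spark2-hdp-yarn-archive.tar.gz -files -blocks -locations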
I am using Kafka and ZooKeeper and creating a connection between them, but the connection keeps getting dropped when I try to create a new Kafka::Consumer.
ZOOKEEPER = '127.0.0.1:2181'
CLIENT_ID = '************'
TOPICS = ['*****']
@consumer = Kafka::Consumer.new(CLIENT_ID, TOPICS, zookeeper: ZOOKEEPER, logger: nil)
I also checked the ZooKeeper and Kafka log files and found that the Kafka-to-ZooKeeper connection is dropped when I try to create a new Kafka::Consumer.
Kafka Log:
...
[2016-03-04 16:14:47,553] INFO [Group Metadata Manager on Broker 0]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2016-03-04 16:16:11,419] INFO Unable to read additional data from server sessionid 0x1533ff65f850003, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2016-03-04 16:16:11,520] INFO zookeeper state changed (Disconnected) (org.I0Itec.zkclient.ZkClient)
[2016-03-04 16:16:13,128] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2016-03-04 16:16:13,129] WARN Session 0x1533ff65f850003 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
...
Zookeeper Log:
...
2016-04-04 10:30:30,577 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#839] - Client attempting to establish new session at /127.0.0.1:51152
2016-04-04 10:30:30,579 - INFO [SyncThread:0:FileTxnLog#199] - Creating new log file: log.725
2016-04-04 10:30:30,668 - INFO [SyncThread:0:ZooKeeperServer#595] - Established session 0x153df9fc2a70000 with negotiated timeout 6000 for client /127.0.0.1:51152
2016-04-04 10:30:31,714 - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor#627] - Got user-level KeeperException when processing sessionid:0x153df9fc2a70000 type:delete cxid:0x26 zxid:0x728 txntype:-1 reqpath:n/a Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election
2016-04-04 10:30:31,883 - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor#627] - Got user-level KeeperException when processing sessionid:0x153df9fc2a70000 type:create cxid:0x2d zxid:0x729 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers
...
Installed Gems
Using ione 1.2.3
Using json 1.8.3
Using thor 0.19.1
Using zookeeper 1.4.11
Using poseidon 0.0.5
Using bundler 1.11.2
Using cassandra-driver 2.1.5
Using kazoo-ruby 0.4.0
Using kafka-consumer 0.1.2
I am not able to figure out exactly where the version problem is.
I am getting this error:
~/../kazoo-ruby-0.4.0/lib/kazoo/broker.rb:83:in `from_json': Kazoo::VersionNotSupported
~/../kazoo-ruby-0.4.0/lib/kazoo/cluster.rb:38:in `block (3 levels) in brokers'
Got the solution: I was using Kafka 0.9.0.1 / 0.8.0 beta, which was causing a version issue. I downloaded and installed the Kafka 0.8.2.2 release built for Scala 2.11, and that works for me.
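For completeness, reproducing the working setup looks roughly like this (a sketch; the download URL is assumed to be the Apache archive location for that release):
wget https://archive.apache.org/dist/kafka/0.8.2.2/kafka_2.11-0.8.2.2.tgz
tar -xzf kafka_2.11-0.8.2.2.tgz && cd kafka_2.11-0.8.2.2
bin/zookeeper-server-start.sh config/zookeeper.properties &
bin/kafka-server-start.sh config/server.properties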
While running a topology in Storm we are getting errors like this:
8983 [Thread-6] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
9144 [main] INFO backtype.storm.daemon.nimbus - Shutting down master
9199 [Thread-6-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
9241 [main] INFO backtype.storm.daemon.nimbus - Shut down master
9273 [Thread-6] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
9306 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] WARN org.apache.zookeeper.server.NIOServerCnxn - EndOfStreamException: Unable to read additional data from client sessionid 0x143af55728d0003, likely client has closed socket
9354 [main] INFO backtype.storm.daemon.supervisor - Shutting down c094c3b1-a378-4c4f-af35-9278647c217a:4beddc09-4675-4fb9-8bdc-9cf5013ce9ca
9358 [main] INFO backtype.storm.daemon.supervisor - Shut down c094c3b1-a378-4c4f-af35-9278647c217a:4beddc09-4675-4fb9-8bdc-9cf5013ce9ca
9361 [main] INFO backtype.storm.daemon.supervisor - Shutting down supervisor c094c3b1-a378-4c4f-af35-9278647c217a
9364 [Thread-5] INFO backtype.storm.event - Event manager interrupted
9369 [Thread-6] INFO backtype.storm.event - Event manager interrupted
9425 [main] INFO backtype.storm.daemon.supervisor - Shutting down supervisor 386d8d71-c9b5-4b51-bd6e-f9f605034ea0
9428 [Thread-8] INFO backtype.storm.event - Event manager interrupted
9429 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] WARN org.apache.zookeeper.server.NIOServerCnxn - EndOfStreamException: Unable to read additional data from client sessionid 0x143af55728d0007, likely client has closed socket
9429 [Thread-9] INFO backtype.storm.event - Event manager interrupted
9473 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] WARN org.apache.zookeeper.server.NIOServerCnxn - EndOfStreamException: Unable to read additional data from client sessionid 0x143af55728d0009, likely client has closed socket
9476 [main] INFO backtype.storm.testing - Shutting down in process zookeeper
9503 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] WARN org.apache.zookeeper.server.NIOServerCnxn - Ignoring exception
java.nio.channels.ClosedChannelException: null
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:211) ~[na:1.7.0_03]
at org.apache.zookeeper.server.NIOServerCnxn$Factory.run(NIOServerCnxn.java:242) ~[zookeeper-3.3.3.jar:3.3.3-1073969]
9510 [main] INFO backtype.storm.testing - Done shutting down in process zookeeper
9513 [main] INFO backtype.storm.testing - Deleting temporary path C:\Users\sowmiya\AppData\Local\Temp\c9b1bc1a-a950-4098-af77-f81a4d2b112f
9520 [main] INFO backtype.storm.testing - Deleting temporary path C:\Users\sowmiya\AppData\Local\Temp\7e75c468-18ea-4787-a4ac-496fb108db71
9527 [main] INFO backtype.storm.testing - Unable to delete file: C:\Users\sowmiya\AppData\Local\Temp\7e75c468-18ea-4787-a4ac-496fb108db71\version-2\log.1
9529 [main] INFO backtype.storm.testing - Deleting temporary path C:\Users\sowmiya\AppData\Local\Temp\fa7b3c9b-ac93-4090-b9e2-63f10019e61f
9543 [main] INFO backtype.storm.testing - Deleting temporary path C:\Users\sowmiya\AppData\Local\Temp\55f1fd11-508e-43bb-b340-0d9b79f3af33
9579 [Thread-6-EventThread] INFO com.netflix.curator.framework.state.ConnectionStateManager - State change: SUSPENDED
9580 [ConnectionStateManager-0] WARN com.netflix.curator.framework.state.ConnectionStateManager - There are no ConnectionStateListeners registered.
9583 [Thread-6-EventThread] WARN backtype.storm.cluster - Received event :disconnected::none: with disconnected Zookeeper.
11232 [Thread-6-SendThread(localhost:2000)] WARN org.apache.zookeeper.ClientCnxn - Session 0x143af55728d000b for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.7.0_03]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701) ~[na:1.7.0_03]
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119) ~[zookeeper-3.3.3.jar:3.3.3-1073969]
13992 [Thread-6-SendThread(localhost:2000)] WARN org.apache.zookeeper.ClientCnxn - Session 0x143af55728d000b for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.7.0_03]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701) ~[na:1.7.0_03]
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119)
When we try to run the topology jar file, all the processes (nimbus, zookeeper and supervisor) die. Please help us understand why this happens, and how to rectify this error so we can proceed further.
Thank you,
Sowmiya
Priya
This looks like a ZooKeeper issue: your processes are not able to connect to ZooKeeper. It is hard to say more without more information.
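One quick check is whether the in-process ZooKeeper from the log is still reachable on the port it bound (2000 in the log above); a sketch using ZooKeeper's ruok four-letter command, assuming nc is available:
echo ruok | nc localhost 2000
A healthy server replies imok; a refused connection matches the ConnectException in the log.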
I have a serious HBase crash problem. I'm using HBase 0.94.7 with one master and two region servers. The HBase master crashes regularly and I can't even get it restarted. The master logs are as follows:
DEBUG master.AssignmentManager: Handling transition=RS_ZK_REGION_CLOSED, server=master,60020,1374506461230, region=46c2333f401964bf877254be19c2cc8c
DEBUG handler.ClosedRegionHandler: Handling CLOSED event for 6423df864603aa6e8c45c726ab3ae62f
DEBUG master.AssignmentManager: Forcing OFFLINE; was=LogDetail,\x00\x00\x01\xE8\x00\x00\x01?\xF8\xB3\x8F\x17\xCE\xE2g\x84,1374498065657.6423df864603aa6e8c45c726ab3ae62f. state=CLOSED, ts=1374508769672, server=slave,60020,1374506460892
DEBUG zookeeper.ZKAssign: master:60000-0x14006f52f3f000e Creating (or updating) unassigned node for 6423df864603aa6e8c45c726ab3ae62f with OFFLINE state
FATAL master.HMaster: Unexpected state : LogDetail,\x00\x00\x01\xE8\x00\x00\x01?\xF6\xC17p&c\x8F\x14,1374498085655.c2f4143750eb1559a1dd92e937ea712d. state=PENDING_OPEN, ts=1374508769697, server=master,60020,1374506461230 .. Cannot transit it to OFFLINE.
java.lang.IllegalStateException: Unexpected state : LogDetail,\x00\x00\x01\xE8\x00\x00\x01?\xF6\xC17p&c\x8F\x14,1374498085655.c2f4143750eb1559a1dd92e937ea712d. state=PENDING_OPEN, ts=1374508769697, server=master,60020,1374506461230 .. Cannot transit it to OFFLINE.
at org.apache.hadoop.hbase.master.AssignmentManager.setOfflineInZooKeeper(AssignmentManager.java:1879)
at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1688)
at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1424)
at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1399)
at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1394)
at org.apache.hadoop.hbase.master.handler.ClosedRegionHandler.process(ClosedRegionHandler.java:105)
at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:175)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
INFO master.HMaster: Aborting
DEBUG handler.ClosedRegionHandler: Handling CLOSED event for 0710b486dcb3d51465695b51db376255
....
DEBUG master.AssignmentManager: The znode of region LogDetail,\x00\x00\x01\xE8\x00\x00\x01?\xF6\xC17p&c\x8F\x14,1374498085655.c2f4143750eb1559a1dd92e937ea712d. has been deleted.
INFO master.AssignmentManager: The master has opened the region LogDetail,\x00\x00\x01\xE8\x00\x00\x01?\xF6\xC17p&c\x8F\x14,1374498085655.c2f4143750eb1559a1dd92e937ea712d. that was online on master,60020,1374506461230
DEBUG master.AssignmentManager: Handling transition=M_ZK_REGION_OFFLINE, server=master,60000,1374508461536, region=c9cfdd360c09b292412ba5ad88815e6f
DEBUG catalog.CatalogTracker: Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker#5c061cd2
INFO client.HConnectionManager$HConnectionImplementation: Closed zookeeper sessionid=0x14006f52f3f000f
INFO zookeeper.ZooKeeper: Session: 0x14006f52f3f000f closed
INFO zookeeper.ClientCnxn: EventThread shut down
INFO master.AssignmentManager$TimerUpdater: master,60000,1374508461536.timerUpdater exiting
INFO master.SplitLogManager$TimeoutMonitor: master,60000,1374508461536.splitLogManagerTimeoutMonitor exiting
INFO master.AssignmentManager$TimeoutMonitor: master,60000,1374508461536.timeoutMonitor exiting
INFO zookeeper.ZooKeeper: Session: 0x14006f52f3f000e closed
INFO zookeeper.ClientCnxn: EventThread shut down
INFO master.HMaster: HMaster main thread exiting
ERROR master.HMasterCommandLine: Failed to start master
I also found something unusual in the ZK log:
INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /master:37856
INFO org.apache.zookeeper.server.ZooKeeperServer: Client attempting to establish new session at /master:37856
INFO org.apache.zookeeper.server.ZooKeeperServer: Established session 0x140100dda0300e1 with negotiated timeout 180000 for client /master:37856
WARN org.apache.zookeeper.server.NIOServerCnxn: caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x140100dda0300e1, likely client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:662)
INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /master:37856 which had sessionid 0x140100dda0300e1
Can anybody help me see what the problem is? Is it related to the unassigned region or something like that? I've tried bin/hbase hbck -repair and bin/hbase hbck -fix, but they don't help.
Thanks
After checking the log of my region server very carefully, I found the answer.
Cause
It turns out that the 'SNAPPY' library used to compress the HBase tables was not properly installed on the region server, and all my tables were created with this compression algorithm. When the master tried to balance regions onto that region server, it failed, and eventually the master aborted.
Solution
Install and configure Snappy on EVERY node as follows:
# install the Snappy shared library
apt-get install libsnappy1
# link it where HBase looks for native libraries (as the hbase user)
su hbase
mkdir /home/hbase/hbase-0.94.7/lib/native/Linux-amd64-64
ln -s /usr/lib/libsnappy.so.1.1.2 /home/hbase/hbase-0.94.7/lib/native/Linux-amd64-64/libsnappy.so
exit    # back to root
# make the library resolvable under the standard 64-bit library locations
ln -s /usr/lib/libsnappy.so.1.1.2 /usr/lib64/libsnappy.so.1.1.2
ln -s /usr/lib/libsnappy.so.1.1.2 /usr/lib64/libsnappy.so.1
ln -s /usr/lib/libsnappy.so.1.1.2 /usr/lib64/libsnappy.so
ln -s /usr/lib/libsnappy.so.1 /usr/lib/libsnappy.so
Now everything is OK! The regions are balanced across the region servers.
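To verify the codec on a node before letting the master assign regions again, HBase's compression test tool can be run (a minimal sketch; the file path is just a scratch location):
hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/snappy-check snappy
It succeeds only if the native Snappy library can be loaded on that node.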
Check the region server log. If the cause is a missing LZO compressor and you are using Cloudera Hadoop, you can install LZO easily by following these instructions:
http://www.cloudera.com/content/cloudera/en/documentation/cloudera-impala/v1/v1-0-1/Installing-and-Using-Impala/ciiu_lzo.html