I configured HBase today and it worked correctly at first. However, when I started it again with 'start-all.sh', I could not see 'HMaster' anywhere. jps just shows:
[root@master bin]# jps
25164 QuorumPeerMain
83447 HRegionServer
44542 NameNode
44789 DataNode
45098 SecondaryNameNode
45378 ResourceManager
45536 NodeManager
56678 <no information available>
56949 Jps
When I run 'jps' again, I get similar output (screenshot omitted). The log shows:
[root@master bin]# cd /home/hadoop/hbase-2.2.3/logs
[root@master logs]# ls
hbase-root-master-master.log hbase-root-regionserver-master.out.1
hbase-root-master-master.out hbase-root-regionserver-master.out.2
hbase-root-master-master.out.1 hbase-root-regionserver-master.out.3
hbase-root-regionserver-master.log hbase-root-regionserver-master.out.4
hbase-root-regionserver-master.out SecurityAuth.audit
[root@master logs]# tail hbase-root-master-master.log
2022-04-28 17:29:56,674 INFO [master/master:16000] zookeeper.ZooKeeper: Session: 0x100000e4a0d0020 closed
2022-04-28 17:29:56,674 INFO [master/master:16000] regionserver.HRegionServer: Exiting; stopping=master,16000,1651138191876; zookeeper connection closed.
2022-04-28 17:29:56,674 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down for session: 0x100000e4a0d0020
2022-04-28 17:29:56,674 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: HMaster Aborted
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:244)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:140)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2940)
[root@master logs]#
I solved the problem by adding the following property to the configuration file "hbase-site.xml":
<property>
<name>hbase.unsafe.stream.capability.enforce</name>
<value>false</value>
</property>
I do not know why, but it works.
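From what I have read since, this property relaxes HBase's check that the filesystem under hbase.rootdir supports stream capabilities (hflush/hsync), and the master aborts when that check fails. A minimal sketch of how I applied and verified the change (paths are from my install above; adjust to yours):
/home/hadoop/hbase-2.2.3/bin/stop-hbase.sh
# add the hbase.unsafe.stream.capability.enforce property shown above to conf/hbase-site.xml
/home/hadoop/hbase-2.2.3/bin/start-hbase.sh
jps | grep HMaster   # HMaster should now be listed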
Related
When I use start-all.cmd, the datanode, resourcemanager and nodemanager start properly, but the namenode does not!
19/11/04 22:09:14 WARN namenode.FSNamesystem: Encountered exception loading fsimage
java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:220)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1012)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:691)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:634)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:695)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:898)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:877)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1603)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1671)
19/11/04 22:09:14 INFO mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup#0.0.0.0:50070
19/11/04 22:09:14 INFO impl.MetricsSystemImpl: Stopping NameNode metrics system...
19/11/04 22:09:14 INFO impl.MetricsSystemImpl: NameNode metrics system stopped.
19/11/04 22:09:14 INFO impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
19/11/04 22:09:14 ERROR namenode.NameNode: Failed to start namenode.
java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:220)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1012)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:691)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:634)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:695)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:898)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:877)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1603)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1671)
19/11/04 22:09:14 INFO util.ExitUtil: Exiting with status 1
19/11/04 22:09:14 INFO namenode.NameNode: SHUTDOWN_MSG:
Open cmd as administrator.
Type and run stop-all.cmd
Then run hadoop namenode -format
Finally run start-all.cmd
Hope it works for you. A consolidated sketch of the sequence follows.
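Put together, in an administrator command prompt the sequence is roughly this (note that formatting the NameNode wipes the existing HDFS metadata):
stop-all.cmd
hadoop namenode -format
start-all.cmd
rem NameNode should now appear in the jps output
jps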
I've created a standalone hadoop cluster using this tutorial. Then I installed HBase over hadoop by following this tutorial.
I ran Hadoop by
cd /usr/local/hadoop/sbin/
./start-all.sh
And HBase by
cd /usr/local/hbase/bin
./start-hbase.sh
Then when I do jps, I get:
3761 Jps
835 NameNode
966 DataNode
3480 HMaster
3608 HRegionServer
1465 ResourceManager
1610 NodeManager
3418 HQuorumPeer
1150 SecondaryNameNode
But after some time it shows:
1779 SecondaryNameNode
1557 DataNode
2870 HQuorumPeer
2200 NodeManager
2061 ResourceManager
3246 Jps
1423 NameNode
So that's a pretty large indicator that something is wrong. Now, I checked the zookeeper logs in /usr/local/hbase/logs/hbase-hduser-zookeeper-stal.log and it showed:
2019-04-29 07:54:45,677 INFO [main] server.ZooKeeperServer: Server environment:java.io.tmpdir=/tmp
2019-04-29 07:54:45,677 INFO [main] server.ZooKeeperServer: Server environment:java.compiler=<NA>
2019-04-29 07:54:45,677 INFO [main] server.ZooKeeperServer: Server environment:os.name=Linux
2019-04-29 07:54:45,678 INFO [main] server.ZooKeeperServer: Server environment:os.arch=amd64
2019-04-29 07:54:45,678 INFO [main] server.ZooKeeperServer: Server environment:os.version=4.15.0-47-generic
2019-04-29 07:54:45,678 INFO [main] server.ZooKeeperServer: Server environment:user.name=hduser
2019-04-29 07:54:45,678 INFO [main] server.ZooKeeperServer: Server environment:user.home=/home/hduser
2019-04-29 07:54:45,678 INFO [main] server.ZooKeeperServer: Server environment:user.dir=/home/hduser
2019-04-29 07:54:45,782 INFO [main] server.ZooKeeperServer: tickTime set to 3000
2019-04-29 07:54:45,782 INFO [main] server.ZooKeeperServer: minSessionTimeout set to -1
2019-04-29 07:54:45,782 INFO [main] server.ZooKeeperServer: maxSessionTimeout set to 90000
2019-04-29 07:54:46,780 INFO [main] server.NIOServerCnxnFactory: binding to port 0.0.0.0/0.0.0.0:2181
which doesn't seem like any error whatsoever.
So, I checked HBase's errors in /usr/local/hbase/logs/hbase-hduser-master-stal.log and I got:
2019-04-29 07:55:11,513 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster.
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3100)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:236)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:140)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:3111)
Caused by: java.lang.NoClassDefFoundError: org/apache/htrace/SamplerBuilder
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:644)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:628)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2701)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2683)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:372)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.hadoop.hbase.util.CommonFSUtils.getRootDir(CommonFSUtils.java:362)
at org.apache.hadoop.hbase.util.CommonFSUtils.isValidWALRootDir(CommonFSUtils.java:411)
at org.apache.hadoop.hbase.util.CommonFSUtils.getWALRootDir(CommonFSUtils.java:387)
at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeFileSystem(HRegionServer.java:704)
at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:613)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:489)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3093)
... 5 more
Caused by: java.lang.ClassNotFoundException: org.apache.htrace.SamplerBuilder
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 25 more
There was a similar question, which was answered by:
The HBase 2.1.0 release uses HTrace, which is an incubating Apache
Foundation project.
There is a folder for 3rd-party libraries in HBase lib folder,
client-facing-thirdparty. You need to copy
htrace-core-3.1.0-incubating.jar from there to the HBase lib
directory. (see reference)
There is also another solution at Cloudera Community that changes a
configuration instead of adding the library manually.
The first solution includes:
The HMaster refuse to start due to the error below:
Java.lang.RuntimeException: Failed construction of Master: class
org.apache.hadoop.hbase.master.HMaster Caused by:
java.lang.ClassNotFoundException: org.apache.htrace.SamplerBuilder
This is because in HBase 2.0 we have 2 different versions of
htrace-core.x.x.x.incubating.jar:
cd /usr/local/hbase/lib/client-facing-thirdparty/:
htrace-core-3.1.0-incubating.jar
htrace-core-4.2.0-incubating.jar
Currently, only version 3.1.0 has the required class SamplerBuilder.
We need to remove version 4.2.0:
mv htrace-core-4.2.0-incubating.jar htrace-core-4.2.0-incubating.jar.bak
But when I cd into /usr/local/hbase/lib/client-facing-thirdparty and run ls -a, I get:
. audience-annotations-0.5.0.jar findbugs-annotations-1.3.9-1.jar log4j-1.2.17.jar slf4j-log4j12-1.7.25.jar
.. commons-logging-1.2.jar htrace-core4-4.2.0-incubating.jar slf4j-api-1.7.25.jar
As one can see, there is only one htrace file, not two. So I downloaded htrace-3.1.0 from here, copied it into /usr/local/hbase/lib/client-facing-thirdparty, and renamed htrace-core4-4.2.0-incubating.jar to htrace-core4-4.2.0-incubating.jar.bak. Then I restarted Hadoop and HBase. Still no change; now jps doesn't show HMaster or HRegionServer at all.
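For reference, my reading of the quoted answer is roughly the following (the downloaded-jar path is an assumption); note that it puts the 3.1.0 jar into the lib directory itself, whereas I only copied it into client-facing-thirdparty, which may be the difference:
# put the jar that contains org.apache.htrace.SamplerBuilder on HBase's classpath
cp ~/Downloads/htrace-core-3.1.0-incubating.jar /usr/local/hbase/lib/
# restart HBase so the master picks it up
/usr/local/hbase/bin/stop-hbase.sh
/usr/local/hbase/bin/start-hbase.sh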
HBase configuration files:
<configuration>
<property>
<name>hbase.unsafe.stream.capability.enforce</name>
<value>false</value>
</property>
<property>
<name>zookeeper.znode.parent</name>
<value>/hbase</value>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:9000/user/hduser/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>localhost</value>
</property>
<property>
<name>hbase.master</name>
<value>localhost:60010</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>hdfs://localhost:9000/user/hduser/zookeeper</value>
</property>
<property>
<name>hbase.tmp.dir</name>
<value>/hbase/tmp</value>
<description>Temporary directory on the local filesystem.</description>
</property>
</configuration>
And hbase-env.sh looks like:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HBASE_REGIONSERVERS=/usr/local/hbase/conf/regionservers
export HBASE_MANAGES_ZK=true
export HBASE_PID_DIR=/var/hbase/pids
export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC"
So, what should I do now? Any help is appreciated.
I am running HBase on Hadoop in standalone mode. I have successfully installed hadoop, zookeeper and hbase, but the HBase master is not starting. Below is my hbase-site.xml:
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:9000/hbase</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/home/kumar/hdata/zookeeper</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
</configuration>
I have started hadoop and zookeeper services:
start-all.sh
zkServer.sh start
start-hbase.sh
and these are the processes I get from the jps command:
2133 DataNode
1974 NameNode
2679 NodeManager
2365 SecondaryNameNode
3917 QuorumPeerMain
2527 ResourceManager
3935 Jps
The HBase shell starts successfully, but when I run any command in the shell, like 'list', I get the error below:
ERROR: KeeperErrorCode = NoNode for /hbase/master
After that I tried to run the master with the command below:
hbase master start
and I get the error below:
2018-10-15 18:51:51,380 ERROR [main] server.ZooKeeperServer: ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes
2018-10-15 18:51:51,437 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2182] server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:34034
2018-10-15 18:51:51,479 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2182] server.ServerCnxn: The list of known four letter word commands is : [{1936881266=srvr, 1937006964=stat, 2003003491=wchc, 1685417328=dump, 1668445044=crst, 1936880500=srst, 1701738089=envi, 1668247142=conf, 2003003507=wchs, 2003003504=wchp, 1668247155=cons, 1835955314=mntr, 1769173615=isro, 1920298859=ruok, 1735683435=gtmk, 1937010027=stmk}]
2018-10-15 18:51:51,479 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2182] server.ServerCnxn: The list of enabled four letter word commands is : [[wchs, stat, stmk, conf, ruok, mntr, srvr, envi, srst, isro, dump, gtmk, crst, cons]]
2018-10-15 18:51:51,479 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2182] server.NIOServerCnxn: Processing stat command from /127.0.0.1:34034
2018-10-15 18:51:51,485 INFO [Thread-2] server.NIOServerCnxn: Stat command output
2018-10-15 18:51:51,491 INFO [main] zookeeper.MiniZooKeeperCluster: Started MiniZooKeeperCluster and ran successful 'stat' on client port=2182
Could not start ZK at requested port of 2181. ZK was started at port: 2182. Aborting as clients (e.g. shell) will not be able to find this ZK quorum.
2018-10-15 18:51:51,497 ERROR [main] master.HMasterCommandLine: Master exiting
java.io.IOException: Could not start ZK at requested port of 2181. ZK was started at port: 2182. Aborting as clients (e.g. shell) will not be able to find this ZK quorum.
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:217)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:140)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2983)
2018-10-15 18:51:51,500 INFO [Thread-2] server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:34034 (no session established for client)
Also, I get no response from the local HBase web URL:
localhost:60010
HBase ships a built-in ZooKeeper instance intended only for development environments, and by default, when you start HBase with start-hbase.sh, it starts that ZooKeeper daemon too. The error occurs because you already started a standalone ZooKeeper that uses port 2181; when HBase then tries to start its built-in ZooKeeper on port 2181 as well, it fails.
If you want to use the standalone ZooKeeper, first edit hbase-env.sh and add the line export HBASE_MANAGES_ZK=false (or find the HBASE_MANAGES_ZK variable in the file and set it to false). Now when you start HBase it only starts the HBase daemons, not ZooKeeper. Remember to start the ZooKeeper daemon before HBase, as sketched below.
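A rough sketch of that sequence (the hbase-env.sh path is an assumption based on a default install; adjust to yours):
echo 'export HBASE_MANAGES_ZK=false' >> /usr/local/hbase/conf/hbase-env.sh   # tell HBase not to manage ZooKeeper
zkServer.sh start    # start the standalone ZooKeeper first, on its default client port 2181
start-hbase.sh       # HBase now connects to the running ZooKeeper instead of starting its own
jps                  # HMaster should appear, and HBase should not start a ZooKeeper of its own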
I solved this by adding this in hbase-site.xml:
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/home/hadoop/zookeeper</value>
</property>
In my case I was working with HBase 2.2.0 in standalone mode.
I'm using Hadoop 2.4.0 / HBase 0.98.0 / Hive 0.14.0
Hadoop and HBase were running fine until I restarted my HMaster. The following error appears in the hbase-hduser-master-master.log file:
2015-02-17 05:46:15,157 INFO [master:master:60000] master.TableNamespaceManager: Namespace table not found. Creating...
2015-02-17 05:46:15,193 DEBUG [master:master:60000] lock.ZKInterProcessLockBase: Acquired a lock for /hbase/table-lock/hbase:namespace/write-master:600000000000004
2015-02-17 05:46:15,212 DEBUG [master:master:60000] lock.ZKInterProcessLockBase: Released /hbase/table-lock/hbase:namespace/write-master:600000000000004
2015-02-17 05:46:15,212 FATAL [master:master:60000] master.HMaster: Master server abort: loaded coprocessors are: []
2015-02-17 05:46:15,213 FATAL [master:master:60000] master.HMaster: Unhandled exception. Starting shutdown.
org.apache.hadoop.hbase.TableExistsException: hbase:namespace
at org.apache.hadoop.hbase.master.handler.CreateTableHandler.prepare(CreateTableHandler.java:120)
at org.apache.hadoop.hbase.master.TableNamespaceManager.createNamespaceTable(TableNamespaceManager.java:232)
at org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:86)
at org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:1049)
at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:913)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:606)
at java.lang.Thread.run(Unknown Source)
2015-02-17 05:46:15,214 INFO [master:master:60000] master.HMaster: Aborting
2015-02-17 05:46:15,214 INFO [master,60000,1424180766819-BalancerChore] balancer.BalancerChore: master,60000,1424180766819-BalancerChore exiting
2015-02-17 05:46:15,215 INFO [master,60000,1424180766819-ClusterStatusChore] balancer.ClusterStatusChore: master,60000,1424180766819-ClusterStatusChore exiting
2015-02-17 05:46:15,215 INFO [CatalogJanitor-master:60000] master.CatalogJanitor: CatalogJanitor-master:60000 exiting
2015-02-17 05:46:15,216 DEBUG [master:master:60000] master.HMaster: Stopping service threads
2015-02-17 05:46:15,216 INFO [master:master:60000] ipc.RpcServer: Stopping server on 60000
2015-02-17 05:46:15,216 INFO [RpcServer.listener,port=60000] ipc.RpcServer: RpcServer.listener,port=60000: stopping
2015-02-17 05:46:15,218 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopped
2015-02-17 05:46:15,218 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopping
2015-02-17 05:46:15,218 INFO [master:master:60000.oldLogCleaner] cleaner.LogCleaner: master:master:60000.oldLogCleaner exiting
2015-02-17 05:46:15,218 INFO [master:master:60000.oldLogCleaner] master.ReplicationLogCleaner: Stopping replicationLogCleaner-0x14b97c83f580008, quorum=slave:2181,master:2181, baseZNode=/hbase
2015-02-17 05:46:15,219 INFO [master:master:60000.archivedHFileCleaner] cleaner.HFileCleaner: master:master:60000.archivedHFileCleaner exiting
2015-02-17 05:46:15,219 INFO [master:master:60000] master.HMaster: Stopping infoServer
2015-02-17 05:46:15,223 INFO [master:master:60000.oldLogCleaner] zookeeper.ZooKeeper: Session: 0x14b97c83f580008 closed
2015-02-17 05:46:15,223 INFO [master:master:60000-EventThread] zookeeper.ClientCnxn: EventThread shut down
2015-02-17 05:46:15,229 INFO [master:master:60000] mortbay.log: Stopped SelectChannelConnector#0.0.0.0:60010
2015-02-17 05:46:15,236 DEBUG [master:master:60000] catalog.CatalogTracker: Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker#19f9598
2015-02-17 05:46:15,236 INFO [master:master:60000] client.HConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x14b97c83f580007
2015-02-17 05:46:15,237 INFO [master:master:60000-EventThread] zookeeper.ClientCnxn: EventThread shut down
2015-02-17 05:46:15,238 INFO [master:master:60000] zookeeper.ZooKeeper: Session: 0x14b97c83f580007 closed
2015-02-17 05:46:15,238 INFO [master,60000,1424180766819.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor: master,60000,1424180766819.splitLogManagerTimeoutMonitor exiting
2015-02-17 05:46:15,243 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down
2015-02-17 05:46:15,243 INFO [master:master:60000] zookeeper.ZooKeeper: Session: 0x14b97c83f580006 closed
2015-02-17 05:46:15,243 INFO [master:master:60000] master.HMaster: HMaster main thread exiting
2015-02-17 05:46:15,243 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: HMaster Aborted
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:192)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:134)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2785)
What's wrong here, and what does "HMaster Aborted" mean?
For more information, this is what my hbase-site.xml looks like:
<property>
<name>hbase.rootdir</name>
<value>hdfs://master:54310/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>master,slave</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/usr/local/hbase/zookeeper</value>
</property>
I ran into this problem today! My solution is as follows (a consolidated sketch of the ZooKeeper part follows these steps):
Step 1: Stop HBase.
Step 2: Run the following command, which repairs HBase's metadata:
hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair
Step 3: Delete the HBase data in ZooKeeper (WARNING: this will lose your old data):
./opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/zookeeper/bin/zkCli.sh
You can use ls / to inspect the data in ZooKeeper, and rmr /hbase to delete HBase's data from ZooKeeper.
Step 4: Start HBase.
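A minimal sketch of the ZooKeeper part of Step 3, once zkCli.sh is running (the parenthetical notes are explanations, not part of the commands; newer ZooKeeper CLIs use deleteall instead of rmr):
ls /          (you should see /hbase among the znodes)
rmr /hbase    (deletes HBase's znode tree; destructive, HBase rebuilds it on the next start)
quit
Then start HBase again as in Step 4.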
This is based on the other answer, but clarified for a Cloudera 5.4 upgrade.
Step 1:
service hbase-regionserver stop
service hbase-master stop
Step 2:
hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair
Step 3: Delete the data in ZooKeeper (WARNING: this will lose your old data):
cd /usr/lib/zookeeper/bin/
./zkCli.sh
It opens up the zookeeper shell.
Then run:
ls /
rmr /hbase
Step 4: Start HBase:
service hbase-master restart
service hbase-regionserver restart
In today's episode of "HBase is bringing me to my wit's end", we have an issue where the HBase master starts and then very quickly dies. My master log looks like this:
2014-06-20 12:52:40,469 FATAL [master:hdev01:60000] master.HMaster: Master server abort: loaded coprocessors are: []
2014-06-20 12:52:40,470 FATAL [master:hdev01:60000] master.HMaster: Unhandled exception. Starting shutdown.
org.apache.hadoop.hbase.TableExistsException: hbase:namespace
at org.apache.hadoop.hbase.master.handler.CreateTableHandler.prepare(CreateTableHandler.java:120)
at org.apache.hadoop.hbase.master.TableNamespaceManager.createNamespaceTable(TableNamespaceManager.java:232)
at org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:86)
at org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:1062)
at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:926)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:615)
at java.lang.Thread.run(Thread.java:662)
2014-06-20 12:52:40,473 INFO [master:hdev01:60000] master.HMaster: Aborting
2014-06-20 12:52:40,473 DEBUG [master:hdev01:60000] master.HMaster: Stopping service threads
2014-06-20 12:52:40,473 INFO [master:hdev01:60000] ipc.RpcServer: Stopping server on 60000
2014-06-20 12:52:40,473 INFO [CatalogJanitor-hdev01:60000] master.CatalogJanitor: CatalogJanitor-hdev01:60000 exiting
2014-06-20 12:52:40,473 INFO [hdev01,60000,1403283149823-BalancerChore] balancer.BalancerChore: hdev01,60000,1403283149823-BalancerChore exiting
2014-06-20 12:52:40,474 INFO [RpcServer.listener,port=60000] ipc.RpcServer: RpcServer.listener,port=60000: stopping
2014-06-20 12:52:40,474 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopped
2014-06-20 12:52:40,474 INFO [master:hdev01:60000] master.HMaster: Stopping infoServer
2014-06-20 12:52:40,474 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopping
2014-06-20 12:52:40,474 INFO [master:hdev01:60000.oldLogCleaner] cleaner.LogCleaner: master:hdev01:60000.oldLogCleaner exiting
2014-06-20 12:52:40,475 INFO [hdev01,60000,1403283149823-ClusterStatusChore] balancer.ClusterStatusChore: hdev01,60000,1403283149823-ClusterStatusChore exiting
2014-06-20 12:52:40,476 INFO [master:hdev01:60000.oldLogCleaner] master.ReplicationLogCleaner: Stopping replicationLogCleaner-0x246ba2ab1e4001c, quorum=hdev02:5181,hdev01:5181,hdev03:5181, baseZNode=/hbase
2014-06-20 12:52:40,479 INFO [master:hdev01:60000] mortbay.log: Stopped SelectChannelConnector#0.0.0.0:16010
2014-06-20 12:52:40,478 INFO [master:hdev01:60000.archivedHFileCleaner] cleaner.HFileCleaner: master:hdev01:60000.archivedHFileCleaner exiting
2014-06-20 12:52:40,483 INFO [master:hdev01:60000.oldLogCleaner] zookeeper.ZooKeeper: Session: 0x246ba2ab1e4001c closed
2014-06-20 12:52:40,484 INFO [master:hdev01:60000-EventThread] zookeeper.ClientCnxn: EventThread shut down
2014-06-20 12:52:40,589 DEBUG [master:hdev01:60000] catalog.CatalogTracker: Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker#f3f348b
2014-06-20 12:52:40,591 INFO [master:hdev01:60000] client.HConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x246ba2ab1e4001b
2014-06-20 12:52:40,592 INFO [master:hdev01:60000] zookeeper.ZooKeeper: Session: 0x246ba2ab1e4001b closed
2014-06-20 12:52:40,592 INFO [master:hdev01:60000-EventThread] zookeeper.ClientCnxn: EventThread shut down
2014-06-20 12:52:40,695 INFO [hdev01,60000,1403283149823.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor: hdev01,60000,1403283149823.splitLogManagerTimeoutMonitor exiting
2014-06-20 12:52:40,696 INFO [master:hdev01:60000] zookeeper.ZooKeeper: Session: 0x246ba2ab1e4001a closed
2014-06-20 12:52:40,696 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down
2014-06-20 12:52:40,696 INFO [master:hdev01:60000] master.HMaster: HMaster main thread exiting
2014-06-20 12:52:40,697 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: HMaster Aborted
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:194)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:135)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2803)
I thought this might be some remnant of an old run, so I deleted the files in HBase's data directory, ZooKeeper's data directory and my HDFS. I still got the same error. Strangely, my HMaster popped back up again temporarily when I ran stop-hbase.sh, although there wasn't much I could do with it.
My HBase version is 0.98.3 and my Hadoop is 2.2.0. My hbase-site.xml is:
<configuration>
<property>
<name>hbase.master</name>
<value>hdev01:60000</value>
<description>The host and port that the HBase master runs at.
A value of 'local' runs the master and a regionserver
in a single process.
</description>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://hdev01:9000/hbase</value>
<description>The directory shared by region servers.</description>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
<description>The mode the cluster will be in. Possible values are
false: standalone and pseudo-distributed setups with managed
Zookeeper true: fully-distributed with unmanaged Zookeeper
Quorum (see hbase-env.sh)
</description>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>5181</value>
<description>Property from ZooKeeper's config zoo.cfg.
The port at which the clients will connect.
</description>
</property>
<property>
<name>zookeeper.session.timeout</name>
<value>10000</value>
<description></description>
</property>
<property>
<name>hbase.client.retries.number</name>
<value>10</value>
<description></description>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>hdev01,hdev02,hdev03</value>
<description>Comma separated list of servers in the ZooKeeper Quorum. For example, "host1.mydomain.com,host2.mydomain.com". By default this is set to localhost for local and pseudo-distributed modes of operation. For a fully-distributed setup, this should be set to a full list of ZooKeeper quorum servers. If
HBASE_MANAGES_ZK is set in hbase-env.sh
this is the list of servers which we will start/stop
ZooKeeper on.
</description>
</property>
</configuration>
EDIT
I attempted hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair; my error now is: HBase file layout needs to be upgraded. You have version null and I want version 8. Is your hbase.rootdir valid? If so, you may need to run 'hbase hbck -fixVersionFile'.
This is unhelpful, since without a master hbck will not actually run.
Edited edit
I nuked and restarted my DFS and then tried repairing and starting things again; I am now back where I started.
hbase:namespace is the internal namespace HBase uses for its own management tables. Try running the offline repair tool from the $HBASE_HOME directory:
./bin/hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair
su - hdfs
hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair
(Restart the HBase master. If you are still facing the issue, then do the following:)
zookeeper-client (enter)
rmr /hbase
quit
Then restart the hbase master service
@shash:
When HBase manages ZooKeeper (i.e. HBASE_MANAGES_ZK=true), the command to access and clean the HBase data is hbase zkcli. Afterwards you clean HBase's znodes using the command rmr /hbase, then you quit.
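A minimal sketch of that session when HBase manages its own ZooKeeper (the parenthetical notes are explanations, not part of the commands; as above, this deletes HBase's znodes):
hbase zkcli   (opens a ZooKeeper CLI connected to the quorum HBase is configured to use)
rmr /hbase    (removes HBase's znode tree; destructive)
quit
Then restart the HBase master so it recreates its state in ZooKeeper.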