HBase daemon crashes at start - macOS

I am trying to run HBase 0.96.1.1 for Hadoop 2 on a MacBook Air. When I run ./start-hbase.sh, it prints
starting master, logging to.....
but it crashes right after.
I checked the log file and this is the error message it spat out:
Fri Mar 28 12:49:20 PDT 2014 Starting master on ms12
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 256
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 709
virtual memory (kbytes, -v) unlimited
2014-03-28 12:49:21,203 INFO [main] util.VersionInfo: HBase 0.96.1.1-hadoop2
2014-03-28 12:49:21,203 INFO [main] util.VersionInfo: Subversion file:///home/jon/proj/hbase-svn/hbase-0.96.1.1 -r Unknown
2014-03-28 12:49:21,204 INFO [main] util.VersionInfo: Compiled by jon on Tue Dec 17 12:22:12 PST 2013
2014-03-28 12:49:21,894 INFO [main] server.ZooKeeperServer: Server environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
2014-03-28 12:49:21,894 INFO [main] server.ZooKeeperServer: Server environment:host.name=guest-wireless-nup-nat-206-117-89-004.usc.edu
2014-03-28 12:49:21,895 INFO [main] server.ZooKeeperServer: Server environment:java.version=1.6.0_65
2014-03-28 12:49:21,895 INFO [main] server.ZooKeeperServer: Server environment:java.vendor=Apple Inc.
2014-03-28 12:49:21,895 INFO [main] server.ZooKeeperServer: Server environment:java.home=/System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home
2014-03-28 12:49:21,895 INFO [main] server.ZooKeeperServer: Server environment:java.class.path=/Users/hbase/hbase-0.96.1.1-hadoop2/conf:/System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home/lib/tools.jar:/Users/hbase/hbase-0.96.1.1-hadoop2:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/activation-1.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/aopalliance-1.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/asm-3.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/avro-1.7.4.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-beanutils-1.7.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-beanutils-core-1.8.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-cli-1.2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-codec-1.7.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-collections-3.2.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-compress-1.4.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-configuration-1.6.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-daemon-1.0.13.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-digester-1.8.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-el-1.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-httpclient-3.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-io-2.4.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-lang-2.6.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-logging-1.1.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-math-2.2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/commons-net-3.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/core-3.1.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/findbugs-annotations-1.3.9-1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/gmbal-api-only-3.0.0-b023.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/grizzly-framework-2.1.2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/grizzly-http-2.1.2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/grizzly-http-server-2.1.2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/grizzly-http-servlet-2.1.2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/grizzly-rcm-2.1.2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/guava-12.0.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/guice-3.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/guice-servlet-3.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-annotations-2.2.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-auth-2.2.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-client-2.2.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-common-2.2.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-hdfs-2.2.0-tests.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-hdfs-2.2.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-mapreduce-client-app-2.2.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-mapreduce-client-common-2.2.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-mapreduce-client-core-2.2.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-mapreduce-client-jobclient-2.2.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-mapreduce-client-shuffle-2.2.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-yarn-api-2.2.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-yarn-client-2.2.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-yarn-common-2.2.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-yarn-server-common-2.2.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hadoop-yarn-server-nodemanager-2.2.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hamcrest-core-1.3.jar:/Users/hbase/hbase-0.96.1.1-hadoop
2/lib/hbase-client-0.96.1.1-hadoop2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hbase-common-0.96.1.1-hadoop2-tests.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hbase-common-0.96.1.1-hadoop2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hbase-examples-0.96.1.1-hadoop2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hbase-hadoop-compat-0.96.1.1-hadoop2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hbase-hadoop2-compat-0.96.1.1-hadoop2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hbase-it-0.96.1.1-hadoop2-tests.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hbase-it-0.96.1.1-hadoop2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hbase-prefix-tree-0.96.1.1-hadoop2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hbase-protocol-0.96.1.1-hadoop2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hbase-server-0.96.1.1-hadoop2-tests.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hbase-server-0.96.1.1-hadoop2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hbase-shell-0.96.1.1-hadoop2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hbase-testing-util-0.96.1.1-hadoop2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/hbase-thrift-0.96.1.1-hadoop2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/high-scale-lib-1.1.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/htrace-core-2.01.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/httpclient-4.1.3.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/httpcore-4.1.3.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jackson-core-asl-1.8.8.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jackson-jaxrs-1.8.8.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jackson-mapper-asl-1.8.8.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jackson-xc-1.8.8.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jamon-runtime-2.3.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jasper-compiler-5.5.23.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jasper-runtime-5.5.23.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/javax.inject-1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/javax.servlet-3.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/javax.servlet-api-3.0.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jaxb-api-2.2.2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jaxb-impl-2.2.3-1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jersey-client-1.9.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jersey-core-1.8.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jersey-grizzly2-1.9.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jersey-guice-1.9.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jersey-json-1.8.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jersey-server-1.8.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jersey-test-framework-core-1.9.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jersey-test-framework-grizzly2-1.9.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jets3t-0.6.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jettison-1.3.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jetty-6.1.26.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jetty-sslengine-6.1.26.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jetty-util-6.1.26.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jruby-complete-1.6.8.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jsch-0.1.42.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jsp-2.1-6.1.14.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jsp-api-2.1-6.1.14.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jsp-api-2.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/jsr305-1.3.9.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/junit-4.11.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/libthrift-0.9.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/log4j-1.2.17.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/management-api-3.0.0-b012.jar:/Us
ers/hbase/hbase-0.96.1.1-hadoop2/lib/metrics-core-2.1.2.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/netty-3.6.6.Final.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/paranamer-2.3.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/protobuf-java-2.5.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/servlet-api-2.5-6.1.14.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/servlet-api-2.5.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/slf4j-api-1.6.4.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/slf4j-log4j12-1.6.4.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/snappy-java-1.0.4.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/stax-api-1.0.1.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/xmlenc-0.52.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/xz-1.0.jar:/Users/hbase/hbase-0.96.1.1-hadoop2/lib/zookeeper-3.4.5.jar:
2014-03-28 12:49:21,897 INFO [main] server.ZooKeeperServer: Server environment:java.library.path=.:/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java
2014-03-28 12:49:21,898 INFO [main] server.ZooKeeperServer: Server environment:java.io.tmpdir=/var/folders/ww/vvdhqz_d2ggcht76g3fp2zh00000gn/T/
2014-03-28 12:49:21,898 INFO [main] server.ZooKeeperServer: Server environment:java.compiler=<NA>
2014-03-28 12:49:21,898 INFO [main] server.ZooKeeperServer: Server environment:os.name=Mac OS X
2014-03-28 12:49:21,898 INFO [main] server.ZooKeeperServer: Server environment:os.arch=x86_64
2014-03-28 12:49:21,898 INFO [main] server.ZooKeeperServer: Server environment:os.version=10.9.2
2014-03-28 12:49:21,898 INFO [main] server.ZooKeeperServer: Server environment:user.name=ms12
2014-03-28 12:49:21,898 INFO [main] server.ZooKeeperServer: Server environment:user.home=/Users/ms12
2014-03-28 12:49:21,898 INFO [main] server.ZooKeeperServer: Server environment:user.dir=/Users/hbase/hbase-0.96.1.1-hadoop2/bin
2014-03-28 12:49:21,921 INFO [main] server.ZooKeeperServer: Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /Users/hbase/zookeeper-storage-2/zookeeper_0/version-2 snapdir /Users/hbase/zookeeper-storage-2/zookeeper_0/version-2
2014-03-28 12:49:21,962 INFO [main] server.NIOServerCnxnFactory: binding to port 0.0.0.0/0.0.0.0:2181
2014-03-28 12:49:21,972 INFO [main] persistence.FileTxnSnapLog: Snapshotting: 0x0 to /Users/hbase/zookeeper-storage-2/zookeeper_0/version-2/snapshot.0
2014-03-28 12:49:22,269 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:53624
2014-03-28 12:49:22,278 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxn: Processing stat command from /127.0.0.1:53624
2014-03-28 12:49:22,283 INFO [Thread-3] server.NIOServerCnxn: Stat command output
2014-03-28 12:49:22,284 INFO [Thread-3] server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:53624 (no session established for client)
2014-03-28 12:49:22,287 INFO [main] zookeeper.MiniZooKeeperCluster: Started MiniZK Cluster and connect 1 ZK server on client port: 2181
2014-03-28 12:49:22,328 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster
at org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:140)
at org.apache.hadoop.hbase.LocalHBaseCluster.addMaster(LocalHBaseCluster.java:200)
at org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:150)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:177)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:134)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2779)
Caused by: java.net.UnknownHostException: No such interface $iface
at org.apache.hadoop.net.DNS.getIPs(DNS.java:183)
at org.apache.hadoop.net.DNS.getIPs(DNS.java:145)
at org.apache.hadoop.net.DNS.getHosts(DNS.java:237)
at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:344)
at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:362)
at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:341)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:414)
at org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster.<init>(HMasterCommandLine.java:256)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:137)
... 7 more
It seems that iface is a network interface on Linux systems. Does that mean this version cannot be run on a Mac?
Edit:
I also tested HBase 0.98. Same issue. The only version that works is HBase 0.94, but it is not compatible with Hadoop 2.

It sounds like you used the instructions here:
http://opentsdb.net/setup-hbase.html
but did not follow them exactly. The string $iface should never actually show up in your hbase-site.xml. It is expanded to the name of your loopback interface device when you write out your config using the exact commands given in those instructions; if you just copy-paste the config from the page it won't work. On a Mac it should resolve to lo0 for each of the properties below (see the sketch after them):
<property>
<name>hbase.zookeeper.dns.interface</name>
<value>lo0</value>
</property>
<property>
<name>hbase.regionserver.dns.interface</name>
<value>lo0</value>
</property>
<property>
<name>hbase.master.dns.interface</name>
<value>lo0</value>
</property>
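For reference, here is roughly what those instructions do. This is a minimal sketch, not the exact OpenTSDB commands; it assumes lo0 is the loopback device on your Mac (check with ifconfig) and omits the other properties the page also writes:
iface=lo0   # on Linux this would typically be lo
# the unquoted heredoc lets the shell substitute $iface, so the file that is
# written contains lo0 and never the literal string $iface
cat > conf/hbase-site.xml <<EOF
<configuration>
<property><name>hbase.zookeeper.dns.interface</name><value>$iface</value></property>
<property><name>hbase.regionserver.dns.interface</name><value>$iface</value></property>
<property><name>hbase.master.dns.interface</name><value>$iface</value></property>
</configuration>
EOF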

I had the same issue running HBase 0.98.6-hadoop2 on Ubuntu 12.04. It seems that something changed in the configuration needed for standalone mode. Try this in your hbase-site.xml configuration file:
<configuration>
<property>
<name>hbase.rootdir</name>
<value>file:///{your hbase data directory}</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>file:///{your zookeeper data directory}</value>
</property>
<property>
<name>hbase.regionserver.dns.interface</name>
<value>default</value>
</property>
<property>
<name>hbase.master.dns.interface</name>
<value>default</value>
</property>
</configuration>
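After editing hbase-site.xml, a quick sanity check that standalone mode actually came up (a sketch, assuming you run it from the HBase install directory and let HBase manage its own ZooKeeper):
./bin/start-hbase.sh
jps                                 # an HMaster process should now stay up
echo "status" | ./bin/hbase shell   # should report 1 active master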
Maybe these links can be of some help
http://hbase.apache.org/book/config.files.html#hbase_default_configurations
http://www.sujee.net/tech/articles/hadoop/hadoop-dns/

Related

NoClassDefFoundError while running HBase, no error in zookeeper

I've created a standalone Hadoop cluster using this tutorial, then installed HBase on top of Hadoop by following this tutorial.
I started Hadoop with
cd /usr/local/hadoop/sbin/
./start-all.sh
and HBase with
cd /usr/local/hbase/bin
./start-hbase.sh
Then when I run jps, I get:
3761 Jps
835 NameNode
966 DataNode
3480 HMaster
3608 HRegionServer
1465 ResourceManager
1610 NodeManager
3418 HQuorumPeer
1150 SecondaryNameNode
But after some time it shows:
1779 SecondaryNameNode
1557 DataNode
2870 HQuorumPeer
2200 NodeManager
2061 ResourceManager
3246 Jps
1423 NameNode
So that's a pretty clear indicator that something is wrong. I checked the ZooKeeper log in /usr/local/hbase/logs/hbase-hduser-zookeeper-stal.log and it showed:
2019-04-29 07:54:45,677 INFO [main] server.ZooKeeperServer: Server environment:java.io.tmpdir=/tmp
2019-04-29 07:54:45,677 INFO [main] server.ZooKeeperServer: Server environment:java.compiler=<NA>
2019-04-29 07:54:45,677 INFO [main] server.ZooKeeperServer: Server environment:os.name=Linux
2019-04-29 07:54:45,678 INFO [main] server.ZooKeeperServer: Server environment:os.arch=amd64
2019-04-29 07:54:45,678 INFO [main] server.ZooKeeperServer: Server environment:os.version=4.15.0-47-generic
2019-04-29 07:54:45,678 INFO [main] server.ZooKeeperServer: Server environment:user.name=hduser
2019-04-29 07:54:45,678 INFO [main] server.ZooKeeperServer: Server environment:user.home=/home/hduser
2019-04-29 07:54:45,678 INFO [main] server.ZooKeeperServer: Server environment:user.dir=/home/hduser
2019-04-29 07:54:45,782 INFO [main] server.ZooKeeperServer: tickTime set to 3000
2019-04-29 07:54:45,782 INFO [main] server.ZooKeeperServer: minSessionTimeout set to -1
2019-04-29 07:54:45,782 INFO [main] server.ZooKeeperServer: maxSessionTimeout set to 90000
2019-04-29 07:54:46,780 INFO [main] server.NIOServerCnxnFactory: binding to port 0.0.0.0/0.0.0.0:2181
which doesn't show any error whatsoever.
So I checked the HBase master log in /usr/local/hbase/logs/hbase-hduser-master-stal.log and got:
2019-04-29 07:55:11,513 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster.
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3100)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:236)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:140)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:3111)
Caused by: java.lang.NoClassDefFoundError: org/apache/htrace/SamplerBuilder
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:644)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:628)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2701)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2683)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:372)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.hadoop.hbase.util.CommonFSUtils.getRootDir(CommonFSUtils.java:362)
at org.apache.hadoop.hbase.util.CommonFSUtils.isValidWALRootDir(CommonFSUtils.java:411)
at org.apache.hadoop.hbase.util.CommonFSUtils.getWALRootDir(CommonFSUtils.java:387)
at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeFileSystem(HRegionServer.java:704)
at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:613)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:489)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3093)
... 5 more
Caused by: java.lang.ClassNotFoundException: org.apache.htrace.SamplerBuilder
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 25 more
There was a similar question, which was answered with:
The HBase 2.1.0 release uses HTrace, which is an incubating Apache
Foundation project.
There is a folder for 3rd-party libraries in HBase lib folder,
client-facing-thirdparty. You need to copy
htrace-core-3.1.0-incubating.jar from there to the HBase lib
directory. (see reference)
There is also another solution at Cloudera Community that changes a
configuration instead of adding the library manually.
The first solution says:
The HMaster refuses to start due to the error below:
java.lang.RuntimeException: Failed construction of Master: class
org.apache.hadoop.hbase.master.HMaster Caused by:
java.lang.ClassNotFoundException: org.apache.htrace.SamplerBuilder
This is because in HBase 2.0 there are two different versions of
htrace-core-x.x.x-incubating.jar in
/usr/local/hbase/lib/client-facing-thirdparty/:
htrace-core-3.1.0-incubating.jar
htrace-core-4.2.0-incubating.jar
Currently, only version 3.1.0 has the required class SamplerBuilder.
We need to remove version 4.2.0:
mv htrace-core-4.2.0-incubating.jar htrace-core-4.2.0-incubating.jar.bak
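In shell terms, the fix that answer describes is roughly the following. This is only a sketch; it assumes the default /usr/local/hbase path from the question, and the location of the 3.1.0 jar is a placeholder:
cd /usr/local/hbase
# the jar has to end up on the server classpath (lib/), not only in
# lib/client-facing-thirdparty/
cp /path/to/htrace-core-3.1.0-incubating.jar lib/
./bin/stop-hbase.sh && ./bin/start-hbase.sh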
But when I cd into /usr/local/hbase/lib/client-facing-thirdparty and run ls -a, I get:
. audience-annotations-0.5.0.jar findbugs-annotations-1.3.9-1.jar log4j-1.2.17.jar slf4j-log4j12-1.7.25.jar
.. commons-logging-1.2.jar htrace-core4-4.2.0-incubating.jar slf4j-api-1.7.25.jar
As one can see, there is only one htrace jar, not two. So I downloaded htrace-3.1.0 from here, copied it into /usr/local/hbase/lib/client-facing-thirdparty, and renamed htrace-core4-4.2.0-incubating.jar to htrace-core4-4.2.0-incubating.jar.bak. Then I restarted Hadoop and HBase. Still no change; jps no longer shows HMaster and HRegionServer.
HBase configuration files:
<configuration>
<property>
<name>hbase.unsafe.stream.capability.enforce</name>
<value>false</value>
</property>
<property>
<name>zookeeper.znode.parent</name>
<value>/hbase</value>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:9000/user/hduser/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>localhost</value>
</property>
<property>
<name>hbase.master</name>
<value>localhost:60010</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>hdfs://localhost:9000/user/hduser/zookeeper</value>
</property>
<property>
<name>hbase.tmp.dir</name>
<value>/hbase/tmp</value>
<description>Temporary directory on the local filesystem.</description>
</property>
</configuration>
And hbase-env.sh looks like:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HBASE_REGIONSERVERS=/usr/local/hbase/conf/regionservers
export HBASE_MANAGES_ZK=true
export HBASE_PID_DIR=/var/hbase/pids
export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC"
So, what should I do now? Any help is appreciated.

HBase setup configuration: HMaster is not running

I am trying to set up HBase in fully distributed mode, consisting of 1 master and 2 region servers. I have set HBASE_MANAGES_ZK=true in hbase-env.sh. The Hadoop cluster is running with the following layout:
Master: node-master
Regionserver1: node1
Regionserver2: node2
When I start HBase, I can see that the RegionServers start, and HQuorumPeer starts on the master as well, but HMaster does not show up.
Please find the logs as below:
Master hbase-site.xml
<configuration>
<property>
<name>hbase.master</name>
<value>nodemaster.hbasecluster.com:60000</value>
<description>The host and port that the HBase master runs at.A value of ‘local’ runs the master and a regionserver in a single </description>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://nodemaster.hbasecluster.com:9000/hbase</value>
<description>The directory shared by region servers.</description>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
<description>The mode the cluster will be in. Possible values are false: standalone and pseudo-distributed setups with managed Zookeeper true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh) </description>
</property>
<property>
<name>hbase.zookeeper.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/usr/local/zookeeper</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
<description>Property from ZooKeeper’s config zoo.cfg. The port at which the clients will connect. </description>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>nodemaster.hbasecluster.com</value>
<description>Comma separated list of servers in the ZooKeeper Quorum. </description>
</property>
<property>
<name>hbase.tmp.dir</name>
<value>/hbase/tmp</value>
<description>Temporary directory on the local filesystem.</description>
</property>
</configuration>
/etc/hosts on master
127.0.0.1 localhost
192.168.2.154 nodemaster.hbasecluster.com node-master
192.168.2.186 node1.hbasecluster.com node1
192.168.2.187 node2.hbasecluster.com node2
Logs on regionserver1
Fri Aug 17 12:32:15 IST 2018 Starting regionserver on node1.hbasecluster.com
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 15701
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 10000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 15701
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
2018-08-17 12:32:15,420 INFO [main] regionserver.HRegionServer: STARTING executorService HRegionServer
2018-08-17 12:32:15,422 INFO [main] util.VersionInfo: HBase 2.1.0
2018-08-17 12:32:15,422 INFO [main] util.VersionInfo: Source code repository git://zhangduo-Gen8/home/zhangduo/hbase/code revision=e1673bb0bbfea21d6e5dba73e013b09b8b49b89b
2018-08-17 12:32:15,422 INFO [main] util.VersionInfo: Compiled by zhangduo on Tue Jul 10 17:26:48 CST 2018
2018-08-17 12:32:15,422 INFO [main] util.VersionInfo: From source with checksum c8fb98abf2988c0490954e15806337d7
2018-08-17 12:32:15,703 INFO [main] util.ServerCommandLine: hbase.tmp.dir: /tmp/hbase-root
2018-08-17 12:32:15,703 INFO [main] util.ServerCommandLine: hbase.rootdir: hdfs://nodemaster.hbasecluster.com:9000/hbase
2018-08-17 12:32:15,703 INFO [main] util.ServerCommandLine: hbase.cluster.distributed: true
2018-08-17 12:32:15,703 INFO [main] util.ServerCommandLine: hbase.zookeeper.quorum: nodemaster.hbasecluster.com
2018-08-17 12:32:15,703 INFO [main] util.ServerCommandLine: env:HBASE_LOGFILE=hbase-root-regionserver-node1.hbasecluster.com.log
2018-08-17 12:32:15,704 INFO [main] util.ServerCommandLine: env:PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
2018-08-17 12:32:15,704 INFO [main] util.ServerCommandLine: env:JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
2018-08-17 12:32:15,704 INFO [main] util.ServerCommandLine: env:LANG=en_US.UTF-8
2018-08-17 12:32:15,704 INFO [main] util.ServerCommandLine: env:XDG_SESSION_ID=182
2018-08-17 12:32:15,704 INFO [main] util.ServerCommandLine: env:MAIL=/var/mail/root
2018-08-17 12:32:15,704 INFO [main] util.ServerCommandLine: env:LOGNAME=root
2018-08-17 12:32:15,704 INFO [main] util.ServerCommandLine: env:HBASE_REST_OPTS=
2018-08-17 12:32:15,704 INFO [main] util.ServerCommandLine: env:PWD=/root
2018-08-17 12:32:15,704 INFO [main] util.ServerCommandLine: env:HBASE_ROOT_LOGGER=INFO,RFA
2018-08-17 12:32:15,704 INFO [main] util.ServerCommandLine: env:SHELL=/bin/bash
2018-08-17 12:32:15,704 INFO [main] util.ServerCommandLine: env:HBASE_ENV_INIT=true
2018-08-17 12:32:15,704 INFO [main] util.ServerCommandLine: env:HBASE_IDENT_STRING=root
2018-08-17 12:32:15,704 INFO [main] util.ServerCommandLine: env:HBASE_ZNODE_FILE=/tmp/hbase-root-regionserver.znode
2018-08-17 12:32:15,704 INFO [main] util.ServerCommandLine: env:SSH_CLIENT=192.168.2.154 46760 22
2018-08-17 12:32:15,704 INFO [main] util.ServerCommandLine: env:HBASE_LOG_PREFIX=hbase-root-regionserver-node1.hbasecluster.com
2018-08-17 12:32:15,704 INFO [main] util.ServerCommandLine: env:HBASE_LOG_DIR=/root/install/hbase-2.1.0/bin/../logs
2018-08-17 12:32:15,704 INFO [main] util.ServerCommandLine: env:USER=root
2018-08-17 12:32:15,705 INFO [main] util.ServerCommandLine: root/install/hbase-2.1.0/bin/../lib/spymemcached-2.12.2.jar:/root/install/hbase-2.1.0/bin/../lib/validation-api-1.1.0.Final.jar:/root/install/hbase-2.1.0/bin/../lib/xmlenc-0.52.jar:/root/install/hbase-2.1.0/bin/../lib/xz-1.0.jar:/root/install/hbase-2.1.0/bin/../lib/zookeeper-3.4.10.jar:/root/install/hbase-2.1.0/bin/../lib/client-facing-thirdparty/audience-annotations-0.5.0.jar:/root/install/hbase-2.1.0/bin/../lib/client-facing-thirdparty/commons-logging-1.2.jar:/root/install/hbase-2.1.0/bin/../lib/client-facing-thirdparty/findbugs-annotations-1.3.9-1.jar:/root/install/hbase-2.1.0/bin/../lib/client-facing-thirdparty/htrace-core4-4.2.0-incubating.jar:/root/install/hbase-2.1.0/bin/../lib/client-facing-thirdparty/log4j-1.2.17.jar:/root/install/hbase-2.1.0/bin/../lib/client-facing-thirdparty/slf4j-api-1.7.25.jar:/root/install/hbase-2.1.0/bin/../lib/client-facing-thirdparty/htrace-core-3.1.0-incubating.jar:/root/install/hbase-2.1.0/bin/../lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar
2018-08-17 12:32:15,705 INFO [main] util.ServerCommandLine: env:HBASE_MANAGES_ZK=true
2018-08-17 12:32:15,705 INFO [main] util.ServerCommandLine: env:SSH_CONNECTION=192.168.2.154 46760 192.168.2.186 22
2018-08-17 12:32:15,705 INFO [main] util.ServerCommandLine: env:HBASE_AUTOSTART_FILE=/tmp/hbase-root-regionserver.autostart
2018-08-17 12:32:15,705 INFO [main] util.ServerCommandLine: env:HBASE_NICENESS=0
2018-08-17 12:32:15,705 INFO [main] util.ServerCommandLine: env:HBASE_OPTS= -XX:+UseConcMarkSweepGC -Dhbase.log.dir=/root/install/hbase-2.1.0/bin/../logs -Dhbase.log.file=hbase-root-regionserver-node1.hbasecluster.com.log -Dhbase.home.dir=/root/install/hbase-2.1.0/bin/.. -Dhbase.id.str=root -Dhbase.root.logger=INFO,RFA -Dhbase.security.logger=INFO,RFAS
2018-08-17 12:32:15,705 INFO [main] util.ServerCommandLine: env:HBASE_SECURITY_LOGGER=INFO,RFAS
2018-08-17 12:32:15,705 INFO [main] util.ServerCommandLine: env:XDG_RUNTIME_DIR=/run/user/0
2018-08-17 12:32:15,705 INFO [main] util.ServerCommandLine: env:HBASE_THRIFT_OPTS=
2018-08-17 12:32:15,705 INFO [main] util.ServerCommandLine: env:HBASE_HOME=/root/install/hbase-2.1.0/bin/..
2018-08-17 12:32:15,705 INFO [main] util.ServerCommandLine: env:SHLVL=3
2018-08-17 12:32:15,705 INFO [main] util.ServerCommandLine: env:HOME=/root
2018-08-17 12:32:15,705 INFO [main] util.ServerCommandLine: env:MALLOC_ARENA_MAX=4
2018-08-17 12:32:15,706 INFO [main] util.ServerCommandLine: vmName=OpenJDK 64-Bit Server VM, vmVendor=Oracle Corporation, vmVersion=25.171-b11
2018-08-17 12:32:15,707 INFO [main] util.ServerCommandLine: vmInputArguments=[-Dproc_regionserver, -XX:OnOutOfMemoryError=kill -9 %p, -XX:+UseConcMarkSweepGC, -Dhbase.log.dir=/root/install/hbase-2.1.0/bin/../logs, -Dhbase.log.file=hbase-root-regionserver-node1.hbasecluster.com.log, -Dhbase.home.dir=/root/install/hbase-2.1.0/bin/.., -Dhbase.id.str=root, -Dhbase.root.logger=INFO,RFA, -Dhbase.security.logger=INFO,RFAS]
2018-08-17 12:32:21,194 INFO [main] metrics.MetricRegistries: Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2018-08-17 12:32:21,245 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-08-17 12:32:21,489 INFO [main] regionserver.RSRpcServices: regionserver/node1:16020 server-side Connection retries=45
2018-08-17 12:32:21,503 INFO [main] ipc.RpcExecutor: Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=3, maxQueueLength=300, handlerCount=30
2018-08-17 12:32:21,505 INFO [main] ipc.RpcExecutor: Instantiated priority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=300, handlerCount=20
2018-08-17 12:32:21,505 INFO [main] ipc.RpcExecutor: Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=300, handlerCount=3
2018-08-17 12:32:21,639 INFO [main] ipc.RpcServerFactory: Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2018-08-17 12:32:21,832 INFO [main] io.ByteBufferPool: Created with bufferSize=64 KB and maxPoolSize=1.88 KB
2018-08-17 12:32:21,937 ERROR [main] regionserver.HRegionServer: Failed construction RegionServer
java.lang.UnsupportedOperationException: Constructor threw an exception for org.apache.hadoop.hbase.ipc.NettyRpcServer
at org.apache.hadoop.hbase.util.ReflectionUtils.instantiate(ReflectionUtils.java:66)
at org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(ReflectionUtils.java:45)
at org.apache.hadoop.hbase.ipc.RpcServerFactory.createRpcServer(RpcServerFactory.java:66)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.createRpcServer(RSRpcServices.java:1271)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:1238)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:1191)
at org.apache.hadoop.hbase.regionserver.HRegionServer.createRpcServices(HRegionServer.java:733)
at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:571)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2991)
at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:63)
at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:3009)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.util.ReflectionUtils.instantiate(ReflectionUtils.java:58)
... 17 more
Caused by: org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use
at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newIOException(Errors.java:117)
at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.bind(Socket.java:285)
at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel.doBind(AbstractEpollChannel.java:714)
at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollServerSocketChannel.doBind(EpollServerSocketChannel.java:70)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:558)
at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1283)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:501)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:486)
at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:989)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel.bind(AbstractChannel.java:254)
at org.apache.hbase.thirdparty.io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:364)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:309)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
at java.lang.Thread.run(Thread.java:748)
2018-08-17 12:32:21,940 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
java.lang.RuntimeException: Failed construction of Regionserver: class org.apache.hadoop.hbase.regionserver.HRegionServer
at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2994)
at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:63)
at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:3009)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2991)
... 5 more
Caused by: java.lang.UnsupportedOperationException: Constructor threw an exception for org.apache.hadoop.hbase.ipc.NettyRpcServer
at org.apache.hadoop.hbase.util.ReflectionUtils.instantiate(ReflectionUtils.java:66)
at org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(ReflectionUtils.java:45)
at org.apache.hadoop.hbase.ipc.RpcServerFactory.createRpcServer(RpcServerFactory.java:66)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.createRpcServer(RSRpcServices.java:1271)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:1238)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:1191)
at org.apache.hadoop.hbase.regionserver.HRegionServer.createRpcServices(HRegionServer.java:733)
at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:571)
... 10 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.util.ReflectionUtils.instantiate(ReflectionUtils.java:58)
... 17 more
Caused by: org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use
at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newIOException(Errors.java:117)
at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.bind(Socket.java:285)
at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel.doBind(AbstractEpollChannel.java:714)
at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollServerSocketChannel.doBind(EpollServerSocketChannel.java:70)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:558)
at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1283)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:501)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:486)
at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:989)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel.bind(AbstractChannel.java:254)
at org.apache.hbase.thirdparty.io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:364)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:309)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
at java.lang.Thread.run(Thread.java:748)
regionserver hbase-site.xml
<configuration>
<property>
<name>hbase.master</name>
<value>nodemaster.hbasecluster.com:60000</value>
<description>The host and port that the HBase master runs at.A value of ‘local’ runs the master and a regionserver in a single </description>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://nodemaster.hbasecluster.com:9000/hbase</value>
<description>The directory shared by region servers.</description>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
<description>The mode the cluster will be in. Possible values are false: standalone and pseudo-distributed setups with managed Zookeeper true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh) </description>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
<description>Property from ZooKeeper’s config zoo.cfg. The port at which the clients will connect. </description>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>nodemaster.hbasecluster.com</value>
<description>Property from ZooKeeper’s config zoo.cfg. The port at which the clients will connect. </description>
</property>
<property>
<name>hbase.zookeeper.distributed</name>
<value>true</value>
</property>
</configuration>
/etc/hosts file in regionserver1
127.0.0.1 localhost
192.168.2.154 nodemaster.hbasecluster.com node-master
192.168.2.186 node1.hbasecluster.com node1
192.168.2.187 node2.hbasecluster.com node2
Master node jps output:
19717 SecondaryNameNode
20441 HQuorumPeer
20781 Jps
19470 NameNode
19887 ResourceManager
regionserver jps output:
28404 NodeManager
28185 DataNode
28844 Jps
28687 HRegionServer
EDIT: I was originally running ./bin/start-hbase.sh. When I used the command ./bin/hbase-daemon.sh start master instead, I got the following error in my master logs.
2018-08-20 11:50:42,742 ERROR [main] regionserver.HRegionServer: Failed construction RegionServer
java.lang.NoClassDefFoundError: org/apache/htrace/SamplerBuilder
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:635)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:619)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.hadoop.hbase.util.CommonFSUtils.getRootDir(CommonFSUtils.java:358)
at org.apache.hadoop.hbase.util.CommonFSUtils.isValidWALRootDir(CommonFSUtils.java:407)
at org.apache.hadoop.hbase.util.CommonFSUtils.getWALRootDir(CommonFSUtils.java:383)
at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeFileSystem(HRegionServer.java:691)
at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:600)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:484)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2965)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:236)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:140)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2983)
Caused by: java.lang.ClassNotFoundException: org.apache.htrace.SamplerBuilder
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 25 more
2018-08-20 11:50:42,744 ERROR [main] master.HMasterCommandLine: Master exiting
ZooKeeper was able to create connections to the slaves, and the region servers are running on each slave.
I hope you are using bin/hbase-daemon.sh start master to start the master. If so, there should be more log lines telling you about the actual problem, marked ERROR/FATAL, just before the master shuts down, and you should also see a line like "master.HMaster: STARTING service HMaster" in the logs when the master starts up.
The log line below from the regionserver says that the regionserver port (16020) is already in use by another regionserver or application. You probably saw this while starting the regionserver a second time.
Caused by: org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use
at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newIOException(Errors.java:117)
at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.bind(Socket.java:285)
at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel.doBind(AbstractEpollChannel.java:714)
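A quick way to check both things (a sketch, assuming standard Linux tools and the log directory shown in the ServerCommandLine output above; host and path names are taken from the question):
# on the master: find the first ERROR/FATAL before the shutdown
grep -E "ERROR|FATAL" /root/install/hbase-2.1.0/logs/hbase-root-master-*.log | head
# on node1: see what is already holding the regionserver port
sudo netstat -tlnp | grep 16020        # or: sudo lsof -i :16020
jps                                    # a stale HRegionServer here would explain the bind failure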

Unable to start Hadoop (3.1.0) in pseudo-distributed mode on Ubuntu (16.04)

I am trying to follow the Getting Started guide from the Apache Hadoop website, in particular the pseudo-distributed configuration:
Getting started guide from Apache Hadoop 3.1.0
but I am unable to start the Hadoop NameNode and DataNode. Can anyone advise, even if it's just things I can run to debug/investigate further?
At the end of the logs I see an error message (not sure if it's important or a red herring).
2018-04-18 14:15:40,003 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2018-04-18 14:15:40,006 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2018-04-18 14:15:40,014 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Total number of blocks = 0
2018-04-18 14:15:40,014 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of invalid blocks = 0
2018-04-18 14:15:40,014 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of under-replicated blocks = 0
2018-04-18 14:15:40,014 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of over-replicated blocks = 0
2018-04-18 14:15:40,014 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of blocks being written = 0
2018-04-18 14:15:40,014 INFO org.apache.hadoop.hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 11 msec
2018-04-18 14:15:40,028 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2018-04-18 14:15:40,028 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting
2018-04-18 14:15:40,029 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at: localhost/127.0.0.1:9000
2018-04-18 14:15:40,031 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2018-04-18 14:15:40,031 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Initializing quota with 4 thread(s)
2018-04-18 14:15:40,033 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Quota initialization completed in 2 milliseconds name space=1 storage space=0 storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0, PROVIDED=0
2018-04-18 14:15:40,037 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
2018-04-18 14:15:40,232 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: RECEIVED SIGNAL 15: SIGTERM
2018-04-18 14:15:40,236 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: RECEIVED SIGNAL 1: SIGHUP
2018-04-18 14:15:40,236 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at c0315/127.0.1.1
I have confirmed that I can ssh to localhost without a password prompt. I have also run the following steps from the above-mentioned Apache Getting Started guide:
$ bin/hdfs namenode -format
$ sbin/start-dfs.sh
But I can't run step 3 to browse the location at http://localhost:9870/. When I run jps from the terminal prompt I just get back
14900 Jps
and I was expecting a list of my nodes.
I will attach the full logs.
Can anyone help, even just with ways to debug this, please?
Java Version,
$ java --version
java 9.0.4
Java(TM) SE Runtime Environment (build 9.0.4+11)
Java HotSpot(TM) 64-Bit Server VM (build 9.0.4+11, mixed mode)
EDIT 1: I have repeated the steps with Java 8 as well and get the same error message.
EDIT 2: Following the comment suggestions below, I have checked that I am definitely pointing at Java 8 now, and I have also commented out the localhost setting for 127.0.0.0 in the /etc/hosts file.
Ubuntu version,
$ lsb_release -a
No LSB modules are available.
Distributor ID: neon
Description: KDE neon User Edition 5.12
Release: 16.04
Codename: xenial
I have tried running a few commands. bin/hdfs version returns:
Hadoop 3.1.0
Source code repository https://github.com/apache/hadoop -r 16b70619a24cdcf5d3b0fcf4b58ca77238ccbe6d
Compiled by centos on 2018-03-30T00:00Z
Compiled with protoc 2.5.0
From source with checksum 14182d20c972b3e2105580a1ad6990
This command was run using /home/steelydan.com/roycecollige/Apps/hadoop-3.1.0/share/hadoop/common/hadoop-common-3.1.0.jar
When I try bin/hdfs groups it doesn't complete, but gives me
018-04-18 15:33:34,590 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
When I try $ bin/hdfs lsSnapshottableDir, I get
lsSnapshottableDir: Call From c0315/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
When I try $ bin/hdfs classpath, I get
/home/steelydan.com/roycecoolige/Apps/hadoop-3.1.0/etc/hadoop:/home/steelydan.com/roycecoolige/Apps/hadoop-3.1.0/share/hadoop/common/lib/*:/home/steelydan.com/roycecoolige/Apps/hadoop-3.1.0/share/hadoop/common/*:/home/steelydan.com/roycecoolige/Apps/hadoop-3.1.0/share/hadoop/hdfs:/home/steelydan.com/roycecoolige/Apps/hadoop-3.1.0/share/hadoop/hdfs/lib/*:/home/steelydan.com/roycecoolige/Apps/hadoop-3.1.0/share/hadoop/hdfs/*:/home/steelydan.com/roycecoolige/Apps/hadoop-3.1.0/share/hadoop/mapreduce/*:/home/steelydan.com/roycecoolige/Apps/hadoop-3.1.0/share/hadoop/yarn:/home/steelydan.com/roycecoolige/Apps/hadoop-3.1.0/share/hadoop/yarn/lib/*:/home/steelydan.com/roycecoolige/Apps/hadoop-3.1.0/share/hadoop/yarn/*
core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
I have not been able to figure it out (I just tried again since I miss KDE neon so much), but even though port 9000 is not in use, the OS sends a SIGTERM in my case too.
The only way I have found to solve this was to go back to stock Ubuntu, sadly.

HBase master fails to start - impl.MetricsSystemImpl: Source name ugi already exists

I have configured HBase on top of HDFS for distributed mode. I formatted the Hadoop namenode; HDFS is configured correctly and is up and running.
The HBase master is not starting. The error is "impl.MetricsSystemImpl: Source name ugi already exists". The following is the detailed error log for the HBase master (the beginning of the log is truncated):
apreduce/*:/contrib/capacity-scheduler/*.jar
2016-07-13 14:06:19,374 INFO [main] util.ServerCommandLine: env:CLASS_PATH=.
2016-07-13 14:06:19,374 INFO [main] util.ServerCommandLine: env:SSH_CONNECTION=193.60.151.202 36343 192.168.0.84 22
2016-07-13 14:06:19,374 INFO [main] util.ServerCommandLine: env:NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat
2016-07-13 14:06:19,374 INFO [main] util.ServerCommandLine: env:HADOOP_COMMON_LIB_NATIVE_DIR=/home/ubuntu/hadoop/lib/native
2016-07-13 14:06:19,374 INFO [main] util.ServerCommandLine: env:XDG_RUNTIME_DIR=/run/user/1000
2016-07-13 14:06:19,374 INFO [main] util.ServerCommandLine: env:HBASE_THRIFT_OPTS=
2016-07-13 14:06:19,374 INFO [main] util.ServerCommandLine: env:HBASE_HOME=/home/ubuntu/hbase-0.98.20-hadoop1/bin/..
2016-07-13 14:06:19,374 INFO [main] util.ServerCommandLine: env:HOME=/home/ubuntu
2016-07-13 14:06:19,374 INFO [main] util.ServerCommandLine: env:MALLOC_ARENA_MAX=4
2016-07-13 14:06:19,377 INFO [main] util.ServerCommandLine: vmName=Java HotSpot(TM) 64-Bit Server VM, vmVendor=Oracle Corporation, vmVersion=25.91-b14
2016-07-13 14:06:19,377 INFO [main] util.ServerCommandLine: vmInputArguments=[-Dproc_master, -XX:OnOutOfMemoryError=kill -9 %p, -Xmx1000m, -XX:+UseConcMarkSweepGC, -Dhbase.log.dir=/home/ubuntu/hbase-0.98.20-hadoop1/bin/../logs, -Dhbase.log.file=hbase-ubuntu-master-master.log, -Dhbase.home.dir=/home/ubuntu/hbase-0.98.20-hadoop1/bin/.., -Dhbase.id.str=ubuntu, -Dhbase.root.logger=INFO,RFA, -Djava.library.path=/home/ubuntu/hadoop/lib, -Dhbase.security.logger=INFO,RFAS]
2016-07-13 14:06:19,435 DEBUG [main] master.HMaster: master/master/192.168.0.84:60000 HConnection server-to-server retries=350
2016-07-13 14:06:19,649 INFO [main] ipc.RpcServer: master/master/192.168.0.84:60000: started 10 reader(s).
2016-07-13 14:06:19,722 INFO [main] impl.MetricsConfig: loaded properties from hadoop-metrics2-hbase.properties
2016-07-13 14:06:19,801 INFO [main] impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2016-07-13 14:06:19,803 INFO [main] impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2016-07-13 14:06:19,803 INFO [main] impl.MetricsSystemImpl: HBase metrics system started
2016-07-13 14:06:19,807 INFO [main] impl.MetricsSourceAdapter: MBean for source jvm registered.
2016-07-13 14:06:19,810 INFO [main] impl.MetricsSourceAdapter: MBean for source IPC,sub=IPC registered.
2016-07-13 14:06:19,988 INFO [main] impl.MetricsSourceAdapter: MBean for source ugi registered.
2016-07-13 14:06:19,988 WARN [main] impl.MetricsSystemImpl: Source name ugi already exists!
2016-07-13 14:06:20,188 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3119)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:193)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:135)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:127)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:3133)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:852)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:790)
Wed Jul 13 15:57:58 UTC 2016 Stopping hbase (via master)
Wed Jul 13 16:06:47 UTC 2016 Stopping hbase (via master)
Wed Jul 13 16:11:11 UTC 2016 Starting master on master
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 125284
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
Any pointer to resolve the error is much appreciated....

hadoop namenode not starting/formatting on Ubuntu

I am trying to set up a Hadoop instance on Ubuntu. The namenode is not starting up. When I run the jps command I can see everything but the namenode. Here is my hdfs-site.xml file.
<configuration>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/ac/hadoop/dfs</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/ac/hadoop/dfs</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
and here's my core-site.xml:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
The error that I got is:
ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: NameNode is not formatted.
When I formatted the namenode I got this at the prompt:
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = hanu/127.0.1.1
STARTUP_MSG: args = [–format]
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG: java = 1.8.0_31
************************************************************/
Usage: java NameNode [-format [-force ] [-nonInteractive]] | [-upgrade] | [-rollback] | [-finalize] | [-importCheckpoint] | [-recover [ -force ] ]
15/02/03 15:03:41 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hanu/127.0.1.1
I've tried to change files as per various suggestions out there but nothing is working. I think the namenode is not formatting properly.
What's wrong in my setup, and how can I get it corrected? Any help is appreciated. Thanks.
The reason you are seeing the error message is a command typo; that is why the NameNode class is showing the usage error. Your startup output shows args = [–format] with an en-dash instead of a plain hyphen, so the option was not recognized.
Make sure you type the command properly:
bin/hadoop namenode -format
and then try to start the NameNode. You could start the NameNode service in the foreground just to see if everything is working properly, and if you don't see any errors you can kill the process and start all the services using the start-all.sh script.
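A quick way to confirm the format actually succeeded before starting the daemon (a sketch, assuming the name directory from your hdfs-site.xml is the one actually in use; if it stays empty, check the format output for the directory it reports):
bin/hadoop namenode -format        # note: plain ASCII hyphen in -format
ls /home/ac/hadoop/dfs/current     # a freshly formatted name dir contains VERSION and fsimage files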
Here's how you could start the NameNode process in the foreground:
bin/hadoop namenode
Once started, these are the log messages to look for to validate a proper startup:
15/02/04 10:42:44 INFO http.HttpServer: Jetty bound to port 50070
15/02/04 10:42:44 INFO mortbay.log: jetty-6.1.26
15/02/04 10:42:45 INFO mortbay.log: Started SelectChannelConnector#0.0.0.0:50070
15/02/04 10:42:45 INFO namenode.NameNode: Web-server up at: 0.0.0.0:50070
15/02/04 10:42:45 INFO ipc.Server: IPC Server Responder: starting
15/02/04 10:42:45 INFO ipc.Server: IPC Server listener on 8020: starting
15/02/04 10:42:45 INFO ipc.Server: IPC Server handler 0 on 8020: starting
15/02/04 10:42:45 INFO ipc.Server: IPC Server handler 1 on 8020: starting
15/02/04 10:42:45 INFO ipc.Server: IPC Server handler 2 on 8020: starting
15/02/04 10:42:45 INFO ipc.Server: IPC Server handler 3 on 8020: starting
15/02/04 10:42:45 INFO ipc.Server: IPC Server handler 4 on 8020: starting
15/02/04 10:42:45 INFO ipc.Server: IPC Server handler 5 on 8020: starting
15/02/04 10:42:45 INFO ipc.Server: IPC Server handler 6 on 8020: starting
15/02/04 10:42:45 INFO ipc.Server: IPC Server handler 7 on 8020: starting
15/02/04 10:42:45 INFO ipc.Server: IPC Server handler 8 on 8020: starting
15/02/04 10:42:45 INFO ipc.Server: IPC Server handler 9 on 8020: starting
You can kill the service by sending Ctrl+C to the process.
