I'm trying to set up HBase 0.96 to run on top of my Hadoop 2.2.0 cluster. I run start-hbase.sh and the master along with the region servers start up. I can log into each region server and see the processes running. However, when I check how many region servers are up, either through the web UI or a shell command, I get a response of 0. Based on the logs it looks like the region servers are starting up but are unable to notify the master that they are running. I confirmed that the master is listening on port 60000 and that ports 60000 and 60020 are both open. I've included my hbase-site.xml file along with the logs from a region server.
<property>
<name>hbase.rootdir</name>
<value>hdfs://master:9000/hbase</value>
<description>The directory shared by RegionServers.
</description>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
<description>The mode the cluster will be in. Possible values are
false: standalone and pseudo-distributed setups with managed Zookeeper
true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)
</description>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>master</value>
</property>
<property>
<name>zookeeper.znode.parent</name>
<value>/master</value>
</property>
Log File:
2013-11-08 20:08:58,357 INFO [regionserver60020] regionserver.HRegionServer: reportForDuty to master=10.119.102.58,60000,1383941300240 with port=60020, startcode=1383941300420
2013-11-08 20:09:18,636 WARN [regionserver60020] regionserver.HRegionServer: error telling master we are up
com.google.protobuf.ServiceException: org.apache.hadoop.net.ConnectTimeoutException: 20000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connec$
at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1667)
at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1708)
at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.regionServerStartup(RegionServerStatusProtos.java:5402)
at org.apache.hadoop.hbase.regionserver.HRegionServer.reportForDuty(HRegionServer.java:1924)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:790)
at java.lang.Thread.run(Thread.java:724)
Caused by: org.apache.hadoop.net.ConnectTimeoutException: 20000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending local=/100.65.$
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:532)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:573)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:858)
at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1532)
at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1421)
at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1650)
... 5 more
2013-11-08 20:09:18,676 WARN [regionserver60020] regionserver.HRegionServer: reportForDuty failed; sleeping and then retrying.
I don't think hbase.zookeeper.quorum is set correctly, which may cause the connection timeout. If you just want to test 0.96, start it in standalone mode, and then make sure the ZooKeeper cluster is running before you switch to distributed mode.
The HRegionServer complains that it can't connect to HMaster in order to report status (up).
It's probable that the HMaster process is not running, so you may want to start it, or, if you already started it, check the master log file.
Check that your master server is listening on port 60000 using the following command:
netstat -l
tcp6       0      0 Vostro-350:60000        :::*                    LISTEN
If the server is listening on IPv6, then disable it.
To disable it, append the following to /etc/sysctl.conf:
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
After a reboot you should validate that IPv6 is really off with:
cat /proc/sys/net/ipv6/conf/all/disable_ipv6
(0 = IPv6 on; 1 = IPv6 off)
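A less invasive alternative (a sketch, not part of the steps above) is to make the HBase JVMs prefer the IPv4 stack instead of disabling IPv6 system-wide, by adding the following to conf/hbase-env.sh:
# Make the HBase JVMs prefer IPv4 so the master listens on an IPv4 socket
# instead of tcp6 (standard JVM flag; adjust if you already set HBASE_OPTS).
export HBASE_OPTS="$HBASE_OPTS -Djava.net.preferIPv4Stack=true"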
Ref link : Connecting and Persisting to HBase
Related
I am trying to get a personal HBase development environment set up. I have hdfs and yarn running, but cannot get HBase to start.
I have started Hadoop 2.7.1 by running start-dfs.sh and start-yarn.sh. I have verified these are running by testing hdfs dfs -mkdir /test and running a sample MR job bundled in the examples, and I have browsed HDFS at port 50070.
I have started zookeeper 3.4.6 on port 2181 and set its dataDir. My zoo.cfg has:
dataDir=/Users/.../tools/hd/zookeeper_data
clientPort=2181
I observe its zookeeper_server.PID file in the dataDir I chose, and when I run jps I see the following:
51074 NodeManager
50743 DataNode
50983 ResourceManager
50856 SecondaryNameNode
57848 QuorumPeerMain
58731 Jps
50653 NameNode
QuorumPeerMain above matches the PID in zookeeper_server.PID, as I would expect. Is this expectation correct? From what I have done so far, should it be expected that any more processes should be showing here?
I installed hbase-1.1.2 and configured hbase-site.xml. I set hbase.rootdir to hdfs://localhost:8200/hbase; my HDFS is running at localhost:8200. I set hbase.zookeeper.property.dataDir to my ZooKeeper's dataDir, with the expectation that HBase will use this property to find the PID of a running ZooKeeper. Is this expectation correct, or have I misunderstood? The config in hbase-site.xml is:
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:8020/hbase</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>Users/.../tools/hd/zookeeper_data</value>
</property>
When I run start-hbase.sh my server fails to start. I see this log message:
2015-09-26 19:32:43,617 ERROR [main] master.HMasterCommandLine: Master exiting
To investigate I ran hbase master start and got more detail:
2015-09-26 19:41:26,403 INFO [Thread-1] server.NIOServerCnxn: Stat command output
2015-09-26 19:41:26,405 INFO [Thread-1] server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:63334 (no session established for client)
2015-09-26 19:41:26,406 INFO [main] zookeeper.MiniZooKeeperCluster: Started MiniZooKeeperCluster and ran successful 'stat' on client port=2182
Could not start ZK at requested port of 2181. ZK was started at port: 2182. Aborting as clients (e.g. shell) will not be able to find this ZK quorum.
2015-09-26 19:41:26,406 ERROR [main] master.HMasterCommandLine: Master exiting
java.io.IOException: Could not start ZK at requested port of 2181. ZK was started at port: 2182. Aborting as clients (e.g. shell) will not be able to find this ZK quorum.
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:214)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2304)
So I have a few questions:
Should I be trying to set up a zookeeper before running HBase?
Why when I have started a zookeeper and told HBase where its dataDir is, does HBase try to start its own zookeeper?
Anything obviously stupid/misguided in the above?
The script you are using to start HBase, start-hbase.sh, will try to start the following components, in order:
zookeeper
hbase master
hbase regionserver
hbase master-backup
So you could either stop the ZooKeeper you started yourself, or start the daemons individually:
# start hbase master
bin/hbase-daemon.sh --config ${HBASE_CONF_DIR} start master
# start region server
bin/hbase-daemons.sh --config ${HBASE_CONF_DIR} --hosts ${HBASE_CONF_DIR}/regionservers start regionserver
HBase standalone starts its own ZooKeeper (if you run start-hbase.sh), but if it fails to start or keep running, the other HBase daemons it needs won't work.
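If you would rather keep the ZooKeeper you started yourself, another option (a minimal sketch based on the HBASE_MANAGES_ZK flag used elsewhere in this thread, not something spelled out above) is to tell start-hbase.sh not to manage ZooKeeper at all, in conf/hbase-env.sh:
# Skip HBase's own MiniZooKeeperCluster and use the externally managed
# ZooKeeper already running on localhost:2181.
export HBASE_MANAGES_ZK=false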
Make sure you explicitly set the properties for your interface lo0 in the hbase-site.xml file:
<property>
<name>hbase.zookeeper.dns.interface</name>
<value>lo0</value>
</property>
<property>
<name>hbase.regionserver.dns.interface</name>
<value>lo0</value>
</property>
<property>
<name>hbase.master.dns.interface</name>
<value>lo0</value>
</property>
I found that when my Wi-Fi was on, if these entries were missing, ZooKeeper failed to start.
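To confirm the interface name and the address it carries (a quick sketch, assuming a macOS/BSD-style system given the lo0 name):
# lo0 should report 127.0.0.1 as its inet address.
ifconfig lo0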
I am trying to set up an edge node to a cluster at my workplace. The cluster is CDH 5.* Hadoop YARN. It has its own internal private high-speed network. The edge node is outside the private network.
I ran the steps for the Hadoop client setup and configured core-site.xml:
sudo apt-get install hadoop-client
Since the cluster is hosted on its own private network, the IP addresses in the internal network are different:
10.100.100.1 - Namenode
10.100.100.2 - Data Node 1
10.100.100.4 - Data Node 2
10.100.100.6 - Data Node 3
To handle this, I requested the cluster admin to add the following properties to hdfs-site.xml on the namenode so that the listening ports are not open only to the internal IP range:
<property>
<name>dfs.namenode.servicerpc-bind-host</name>
<value>0.0.0.0</value>
</property>
<property>
<name>dfs.namenode.http-bind-host</name>
<value>0.0.0.0</value>
</property>
<property>
<name>dfs.namenode.https-bind-host</name>
<value>0.0.0.0</value>
</property>
<property>
<name>dfs.namenode.rpc-bind-host</name>
<value>0.0.0.0</value>
</property>
After these settings were applied and the services restarted, I am able to run the following command:
hadoop fs -ls /user/hduser/testData/XML_Flows/test/test_input/*
This works fine. But when I try to cat the file, I get the following error:
administrator#administrator-Virtual-Machine:/etc/hadoop/conf.empty$ hadoop fs -cat /user/hduser/testData/XML_Flows/test/test_input/*
15/05/04 15:39:02 WARN hdfs.BlockReaderFactory: I/O error constructing remote block reader.
org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/10.100.100.6:50010]
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:532)
at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3035)
at org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:744)
at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:659)
at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:327)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:574)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:797)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:844)
at java.io.DataInputStream.read(DataInputStream.java:100)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:78)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:52)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:112)
at org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:104)
at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:99)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:306)
at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:278)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:260)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:244)
at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:190)
at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
15/05/04 15:39:02 WARN hdfs.DFSClient: Failed to connect to /10.100.100.6:50010 for block, add to deadNodes and continue. org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/10.100.100.6:50010]
org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/10.100.100.6:50010]
Same error message is repeated multiple times.
I copied the rest of the XMLs, i.e. hdfs-site.xml, yarn-site.xml, and mapred-site.xml, from the cluster data nodes to be on the safer side, but I still got the same error. Does anyone have any idea about this error, or how to make edge nodes work with clusters running on a private network?
The username of the edge node is "administrator" whereas the cluster is configured using the "hduser" id. Could this be a problem? I have configured passwordless login between the edge node and the name node.
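One quick check (a sketch; the IP and port are taken from the timeout above) is whether the edge node can reach the datanode's data-transfer port at all, since -ls only talks to the namenode while -cat has to stream blocks from the datanodes:
# From the edge node: can we open a TCP connection to the datanode?
nc -zv -w 5 10.100.100.6 50010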
Hadoop version=2.4.1
hbase version=0.98.6
I have Hadoop up and running perfectly fine with the configuration below:
107.108.86.119 - Hadoop NameNode, SecondaryNameNode
107.109.155.100 - DataNode 1
107.109.155.102 - DataNode 2
Now I have installed HBase with the configuration below:
107.108.86.114 - HMaster, HQuorumPeer
107.109.155.100 - RegionServer 1
107.109.155.102 - RegionServer 2
When I run jps, the following processes are running:
107.109.155.102: HRegionServer, DataNode
107.109.155.100: HRegionServer, DataNode
107.108.86.119: NameNode, SecondaryNameNode
107.108.86.114: HMaster
But running status in the HBase shell shows "0 servers, 0 dead, NaN average load".
Entering a command in the HBase shell shows ERROR: java.io.IOException: Table Namespace Manager not ready yet, try again later
The logs on the regionserver show:
regionserver.HRegionServer: reportForDuty to master=localhost,60000,1415007213689 with port=60020, startcode=1415007215055
regionserver.HRegionServer: error telling master we are up
my hbase-site.xml-
<property>
<name>hbase.master</name>
<value>107.108.86.114:60000</value>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://push-mcd2:54310/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>107.108.86.114</value>
</property>
The /etc/hosts of the HMaster is:
127.0.0.1 localhost arpita-ubuntu
127.0.1.1 arpita-ubuntu
107.109.155.100 push-ws1
107.109.155.102 push-ws2
107.108.86.114 push-mcd1
107.108.86.119 push-mcd2
The slaves files are also almost identical to the one above.
conf/hbase-env.sh:
export JAVA_HOME=/usr/lib/jvm/java-6-sun-1.6.0.22
export HBASE_CLASSPATH=/home/hadoop/hadoop-0.20.2/conf
export HBASE_MANAGES_ZK=true
So what changes should I make so that HBase will run on the above cluster?
Why does your regionserver log mention that it is looking for the HBase Master on localhost?
From the information above, you have set up the Master on a node different from either regionserver; please check that your config is correct on each node.
logs on regionserver showing:
regionserver.HRegionServer: reportForDuty to master=localhost,60000,1415007213689 with port=60020, startcode=1415007215055
regionserver.HRegionServer: error telling master we are up
Also, in /etc/hosts on each node, please update the first two lines from
127.0.0.1 localhost arpita-ubuntu
127.0.1.1 arpita-ubuntu
to
127.0.0.1 localhost
<Actual_IP_Address_for_Host> arpita-ubuntu
This is necessary if you don't have automatic DNS name resolution in place.
Also, please use IPs instead of localhost in all config settings.
If you still face issues, check whether the respective ports are open or not.
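For example, a couple of quick checks (a sketch; the hostnames come from the /etc/hosts above, adjust as needed):
# On the master host (push-mcd1): is the master listening on 60000, and on
# which address? Binding only to 127.0.0.1 would match the master=localhost
# entry in the regionserver log.
netstat -tlnp | grep -E '60000|2181'
# From a regionserver (push-ws1 or push-ws2): can it reach the master?
nc -zv push-mcd1 60000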
Hope this helps you.
While running a Pig script that inserts data into HBase, I got the following error:
2013-10-25 14:57:03,147 [main-SendThread(localhost:2181)] INFO
org.apache.zookeeper.ClientCnxn - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2013-10-25 14:57:03,147 [main-SendThread(localhost:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established to localhost/127.0.0.1:2181, initiating session
2013-10-25 14:57:03,169 [main-SendThread(localhost:2181)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x41ead7bb150092, negotiated timeout = 180000
Also, when I look at the slave logs, they say the slave is unable to establish a connection to the master, but the master says the connection was established successfully. Below are the logs of both:
Master Log :
zookeeper.ZooKeeper: Initiating client connection, connectString=slave:2181,hadoop-master:2181,ubuntu:2181 sessionTimeout=180000 watcher=hconnection
13/10/24 19:51:56 INFO zookeeper.ClientCnxn: Opening socket connection to server slave:2181. Will not attempt to authenticate using SASL (unknown error)
13/10/24 19:51:56 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 104744#hadoop-master
13/10/24 19:51:56 INFO zookeeper.ClientCnxn: Socket connection established to slave:2181, initiating session
13/10/24 19:51:56 INFO zookeeper.ClientCnxn: Session establishment complete on server slave:2181, sessionid = 0x141ead77c250002, negotiated timeout = 180000
Slave Log
2013-10-24 19:51:32,174 INFO org.apache.zookeeper.server.quorum.FastLeaderElection: New election. My id = 1, proposed zxid=0x80000002a
2013-10-24 19:51:32,180 WARN org.apache.zookeeper.server.quorum.QuorumCnxManager: Cannot open channel to 0 at election address hadoop-master:3888
java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
Below are my configurations:
hbase-env.sh
export HBASE_MANAGES_ZK=true
hbase-site.xml
<configuration>
<property>
<name>hbase.master</name>
<value>hadoop-master:60000</value>
<description>
The host and port that the HBase master runs at.
A value of 'local' runs the master and a regionserver in a single process.
</description>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://hadoop-master:54310/hbase</value>
<description>
The directory shared by RegionServers.
</description>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
<description>
The mode the cluster will be in. Possible values are
false: standalone and pseudo-distributed setups with managed Zookeeper
true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)
</description>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
<description>
Property from ZooKeeper's config zoo.cfg.
The port at which the clients will connect.
</description>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>hadoop-master,slave,ubuntu</value>
<description>
Comma separated list of servers in the ZooKeeper Quorum.
For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
By default this is set to localhost for local and pseudo-distributed modes
of operation. For a fully-distributed setup, this should be set to a full
list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh
this is the list of servers which we will start/stop ZooKeeper on.
</description>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
<description>
Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
</description>
</property>
</configuration>
Entering the jps command gives the following output:
At Master:
104744 HMaster
91841 JobTracker
80184 TaskTracker
91475 DataNode
91222 NameNode
105062 HRegionServer
91747 SecondaryNameNode
104666 HQuorumPeer
At Slave:
11533 HQuorumPeer
2444 SecondaryNameNode
5970 TaskTracker
11756 HRegionServer
This is probably an issue with your HBase config not being on the classpath when running the Pig script. One thing you can try is to set the zookeeper connection settings in the Pig script to see if it connects to the correct instance. If it does, you can add the HBase config to the classpath before launching your script.
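A minimal sketch of the classpath approach (it assumes your Pig launcher honors PIG_CLASSPATH, that the hbase command is on the PATH, and uses a hypothetical conf path and script name):
# Put the HBase config and jars on Pig's classpath so HBaseStorage picks up
# the real hbase.zookeeper.quorum instead of defaulting to localhost.
export HBASE_CONF_DIR=/path/to/hbase/conf     # hypothetical path
export PIG_CLASSPATH="$HBASE_CONF_DIR:$(hbase classpath):$PIG_CLASSPATH"
pig load_into_hbase.pig                       # hypothetical script name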
I am running an HDFS instance in pseudo-distributed mode, and tried to make an HBase instance connect to it on the same server. The Hadoop logs are fine, but I constantly get the following connection failure in HBase's log:
==================================================================================
2012-05-01 10:49:07,212 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181
2012-05-01 10:49:07,213 WARN org.apache.zookeeper.ClientCnxn: Session 0x13708dc552d0001 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119)
2012-05-01 10:49:08,882 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181
2012-05-01 10:49:08,882 WARN org.apache.zookeeper.ClientCnxn: Session 0x13708dc552d0001 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119)
==================================================================================
Configuration of core-site.xml#hadoop
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
Configuration of hbase-site.xml#hbase
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:9000/hbase</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
I also tried to replace localhost with the actual IP of the server, but got the same error.
First, you need to make sure your HBase master node is running; you can use jps to check.
If it is not running, you can start it with the start-hbase.sh script or with hbase master start.
Then check its status with other commands, such as netstat -an | grep 9000.
Second, if the previous method does not work, check your firewall configuration, such as iptables and SELinux.
Use sudo iptables -L to check your iptables configuration. You can disable iptables with the sudo service iptables stop command on Red Hat-based Linux systems.
Use getenforce to check whether SELinux is in enforcing mode.
Third, check the system configuration, for example SSH, etc.
You need to replace the Hadoop core jar in $HBASE_HOME/lib/hadoop-{{version}}core.jar with the one in $HADOOP_HOME/hadoop-{{version}}core.jar.
I ran into the same problem when I tried to move to HBase 0.92 from HBase 0.90-xxx, which was working fine; I copied hbase-env.sh and hbase-site.xml from the old installation to the new one but forgot to copy the Hadoop core jar.
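A sketch of that jar swap (the jar names and paths are illustrative and depend on your Hadoop and HBase versions; double-check them before deleting anything):
# Replace the Hadoop core jar bundled with HBase with the one from the
# Hadoop installation, then restart HBase so the new jar is picked up.
rm $HBASE_HOME/lib/hadoop-*core*.jar
cp $HADOOP_HOME/hadoop-*core*.jar $HBASE_HOME/lib/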
I'm always suspicious when I see localhost in a config. Also, when you use localhost, it becomes very difficult (to impossible) to access any services of the pseudo-distributed system from a host other than the one you are running on.
You did say you tried the IP address, but you might want to make sure it's the IP address that the node really thinks it's at.
Check the ZooKeeper logs from when it starts up and see what IP address it "thinks" it's at. There should be a line like:
2012-01-31 09:32:46,083 - INFO [main:Environment#97] - Server environment:host.name=ip-10-8-127-58.ec2.internal
Then use the value of host.name as the value in ALL Hadoop, HBase, Hive, ZooKeeper, etc. config files that need a hostname (assuming they are all on the same machine, as you said you are in pseudo-distributed mode).
Also, you did not show your hbase.zookeeper.quorum setting in hbase-site.xml. That is where HBase gets its knowledge of ZooKeeper's address:
<property>
<name>hbase.zookeeper.quorum</name>
<value>ip-10-8-127-58.ec2.internal</value>
</property>
I think HBase cannot find the ZooKeeper quorum; you have to set the hbase.zookeeper.quorum property in hbase-site.xml. Also check whether the classpath is set properly; see this doc: http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/package-summary.html#classpath
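It also helps to confirm that ZooKeeper is actually listening on the port HBase expects (a quick sketch; ruok is one of ZooKeeper's built-in four-letter commands and a healthy server answers imok):
# Is anything listening on the ZooKeeper client port, and is it healthy?
netstat -ln | grep 2181
echo ruok | nc localhost 2181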