I'm struggling to set up an HBase distributed cluster with two nodes: one is my host machine and the other is a VM, connected through VirtualBox's "host-only" adapter.
My problem is that the region server (on the VM) can't connect to the HBase master running on the host machine. Although I can list, create tables, etc. in the HBase shell, the region server log on the VM ('slave') always shows
org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to connect to master. Retrying. Error was:
java.net.ConnectException: Connection refused
Previously, I successfully set up Hadoop, HDFS and MapReduce on this two-node cluster. The nodes are named 'master' and 'slave': 'master' is the master node, and both 'master' and 'slave' act as slave nodes. These names are bound to VirtualBox's vboxnet0 interface (the hostnames in /etc/hostname are different). I also specified the "slave.host.name" property for each node as 'master' and 'slave' respectively.
It seems that the HBase master on 'master' always runs under the 'localhost' hostname; from the slave machine I can't telnet to the HBase master using the 'master' hostname. So is there any way to make the HBase master use 'master' as its hostname? I've tried specifying the DNS-interface properties for ZooKeeper, the Master and the RegionServer so they use the internal interface between master and slave, but it still doesn't work at all.
The /etc/hosts file on both machines looks something like this:
127.0.0.1 localhost
127.0.0.1 ubuntu.mymachine
# For Hadoop
192.168.56.1 master
192.168.56.101 slave
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
The answer that #Infinity provided seems to apply to version ~0.9.4.
For version 1.1.4, according to the source code of
org.apache.hadoop.hbase.master.HMaster
the configuration should be:
<property>
<name>hbase.master.hostname</name>
<value>master.local</value>
<!-- master.local is the DNS name in my network pointing to hbase master -->
</property>
After setting this value, region servers were able to connect to the HBase master;
however, in my environment the region server then complained about:
com.google.protobuf.ServiceException: java.net.SocketException: Invalid argument
The problem disappeared after I installed Oracle JDK 8 instead of OpenJDK 7 on all of my nodes.
So, in conclusion, here is my solution:
1) Use a DNS name server instead of relying on /etc/hosts; HBase is very picky about hostnames and seems to require reverse DNS lookups as well as forward ones (a quick check is sketched below).
2) Upgrade the JDK to Oracle 8.
3) Use the configuration item mentioned above.
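A minimal way to sanity-check the DNS requirement from a region server (a sketch; master.local and 192.168.56.1 are just the example name and address used above, so substitute your own):
# forward lookup: the master's name must resolve to its IP
getent hosts master.local
# reverse lookup: the IP must resolve back to the same name
dig +short -x 192.168.56.1
If either command returns nothing, or the reverse lookup gives a different name, region servers are likely to have trouble reaching the master.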
My hosts file looks like this:
127.0.0.1 localhost
192.168.2.118 shashwat.machine.com shashwat
Make your hosts file look like the following:
127.0.0.1 localhost
# For Hadoop
192.168.56.1 master
192.168.56.101 slave
and in the HBase conf put the following entries:
<property>
<name>hbase.rootdir</name>
<value>hdfs://master:9000/hbase</value>
</property>
<property>
<name>hbase.master</name>
<value>master:60000</value>
<description>The host and port that the HBase master runs at.</description>
</property>
<property>
<name>hbase.regionserver.port</name>
<value>60020</value>
<description>The port the HBase RegionServer binds to.</description>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.tmp.dir</name>
<value>/home/cluster/Hadoop/hbase-0.90.4/temp</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>master</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
<description>Property from ZooKeeper's config zoo.cfg.
The port at which the clients will connect.
</description>
</property>
If you are using localhost anywhere in your configuration, remove it and replace it with "master", which is the name for the namenode in your hosts file.
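To spot any leftover localhost references quickly, a grep over the config directories works (a sketch assuming HADOOP_HOME and HBASE_HOME point at your installations):
grep -rn localhost $HADOOP_HOME/conf $HBASE_HOME/conf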
One more thing you can do:
sudo gedit /etc/hostname
This opens the hostname file; by default it contains "ubuntu", so change it to master and restart your system.
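On systemd-based Ubuntu releases you can get the same result without editing the file by hand (a sketch, optional):
sudo hostnamectl set-hostname master   # updates /etc/hostname and the running hostname
hostname                               # should now print: master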
For HBase, put these entries in the "regionservers" file inside the conf directory:
master
slave
and restart everything.
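A typical restart sequence from the master node looks roughly like this (a sketch assuming the Hadoop and HBase scripts are on your PATH and HBase manages ZooKeeper; adjust for your layout):
stop-hbase.sh
stop-all.sh      # Hadoop 0.20/1.x style; stops HDFS and MapReduce daemons
start-all.sh
start-hbase.sh
jps              # run on each node to confirm HMaster/HRegionServer came up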
There are two things that fix this class of problem for me:
1) Remove all "localhost" names, only have 127.0.0.1 pointing to the name of the hmaster node.
2) run "hostname X" on your hbase master node, to make sure the hostname matches what is in /etc/hosts.
Not being a networking expert, I can't say why this is important, but it is :)
Most of the time the error comes from ZooKeeper sending back the wrong hostname.
You can check what ZooKeeper advertises as the HBase master host.
From the ZooKeeper bin folder, run:
bin/zkCli.sh -server 127.0.0.1:2181
get /hbase/master
This should give you the HBase master address that ZooKeeper answers with, so this address must be accessible from the region servers.
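Once you know which host ZooKeeper advertises, you can confirm from a region server that the name resolves and the master RPC port is reachable (a sketch; 60000 is the classic default, newer releases use 16000):
getent hosts <master-host-from-zookeeper>
nc -zv <master-host-from-zookeeper> 60000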
Related
I have configured Hadoop to use an IP address instead of localhost, but when I started it, it still reported an error that it cannot ssh to localhost.
hdfs-site.xml:
<configuration>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///<hdfs_path>/hdfs/datanode</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///<hdfs_path>/hdfs/namenode</value>
</property>
</configuration>
core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://<IP_address>/</value>
<description>NameNode URI</description>
</property>
</configuration>
When I started Hadoop, the below error appeared:
localhost: ssh_exchange_identification: Connection closed by remote host
When I tried to allow ssh to localhost on our test server, the error disappeared. But the problem is that on our production server, ssh to localhost is not allowed. I tried to avoid using localhost and use the IP address instead, so why does Hadoop still need to ssh to localhost?
Only the start-dfs, start-yarn, or start-all shell scripts use SSH, and the SSH connection has nothing to do with the XML files.
That's not required; you can run hadoop namenode, hadoop datanode, and the YARN daemon processes directly. However, you'll still need to get a shell on each machine somehow to run those commands if they don't start at boot or ever fail to (re)start.
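For example, with a Hadoop 2.x layout each daemon can be started locally on its own machine, with no SSH involved (a sketch; the scripts live in $HADOOP_HOME/sbin):
hadoop-daemon.sh start namenode        # on the namenode machine
hadoop-daemon.sh start datanode        # on each datanode
yarn-daemon.sh start resourcemanager   # on the resourcemanager machine
yarn-daemon.sh start nodemanager       # on each nodemanager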
Hadoop version=2.4.1
hbase version=0.98.6
I have Hadoop up and running perfectly fine with the following configuration:
107.108.86.119 - hadoop namenode, SecondaryNameNode
107.109.155.100 - datanode1
107.109.155.102 - datanode2
Now I have installed HBase with the following configuration:
107.108.86.114 - hmaster, HQuorumPeer
107.109.155.100 - regionserver1
107.109.155.102 - regionserver2
When I run jps, the following processes are running:
107.109.155.102 - hregionserver, datanode
107.109.155.100 - hregionserver, datanode
107.108.86.119 - NameNode, SecondaryNameNode
107.108.86.114 - hmaster
But running status in the HBase shell shows "0 servers, 0 dead, NaN average load",
and entering a command in the HBase shell shows ERROR: java.io.IOException: Table Namespace Manager not ready yet, try again later.
The logs on the region servers show:
regionserver.HRegionServer: reportForDuty to master=localhost,60000,1415007213689 with port=60020, startcode=1415007215055
regionserver.HRegionServer: error telling master we are up
My hbase-site.xml:
<property>
<name>hbase.master</name>
<value>107.108.86.114:60000</value>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://push-mcd2:54310/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>107.108.86.114</value>
</property>
The /etc/hosts of the hmaster is:
127.0.0.1 localhost arpita-ubuntu
127.0.1.1 arpita-ubuntu
107.109.155.100 push-ws1
107.109.155.102 push-ws2
107.108.86.114 push-mcd1
107.108.86.119 push-mcd2
The slaves files are also almost identical to the one above.
conf/hbase-env.sh:
export JAVA_HOME=/usr/lib/jvm/java-6-sun-1.6.0.22
export HBASE_CLASSPATH=/home/hadoop/hadoop-0.20.2/conf
export HBASE_MANAGES_ZK=true
So what changes do I need to make so that HBase will run on this cluster?
Why does your region server log mention that it is looking for the HBase master on localhost?
From the information above, you have set up the master on a node different from either region server; please check that your config is correct on each node.
logs on regionserver showing: regionserver.HRegionServer:
reportForDuty to master=localhost,60000,1415007213689 with port=60020,
startcode=1415007215055 regionserver.HRegionServer: error telling
master we are up
Also, in /etc/hosts on each node, please update the first two lines from
127.0.0.1 localhost arpita-ubuntu
127.0.1.1 arpita-ubuntu
to
127.0.0.1 localhost
<Actual_IP_Address_for_Host> arpita-ubuntu
This is necessary if you don't have automatic dns name resolution in place.
Also please use IP instead of localhost in all config settings.
If you still face issues, check if the respective ports are open or not.
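For example, you could check on the HMaster node that the master and ZooKeeper ports are listening, and from a region server that they are reachable (a sketch using the 0.98-era defaults and the master IP from the question; adjust if yours differ):
# on the master node
netstat -tlnp | grep -E '60000|2181'
# from a region server
nc -zv 107.108.86.114 60000
nc -zv 107.108.86.114 2181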
Hope this helps you.
I am not able to set up HBase in distributed mode. It works fine when I set it up on one machine (standalone mode). My ZooKeeper, hmaster and region server start properly.
But when I go to the HBase shell and check the status, it shows 0 region servers. I am attaching the logs of my region server, plus the hosts files of my master (namenode) and slave (datanode). I have tried every permutation and combination given on Stack Overflow for changing the hosts file, but nothing worked for me.
2013-06-24 15:03:45,844 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server naresh-pc/192.168.0.108:2181. Will not attempt to authenticate using SASL (unknown error)
2013-06-24 15:03:45,845 WARN org.apache.zookeeper.ClientCnxn: Session 0x13f75807d960001 for server null, unexpected error, closing socket connection and attempting to reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:692)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
Slave /etc/hosts :
127.0.0.1 localhost
127.0.1.1 ubuntu-pc
#ip for hadoop
192.168.0.108 master
192.168.0.126 slave
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
Master /etc/hosts :
127.0.0.1 localhost
127.0.1.1 naresh-pc
#ip for hadoop
192.168.0.108 master
192.168.0.126 slave
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
hbase-site.xml :
<configuration>
<property>
<name>hbase.master</name>
<value>master:60000</value>
<description>The host and port that the HBase master runs at.
A value of 'local' runs the master and a regionserver
in a single process.
</description>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://master:54310/hbase</value>
<description>The directory shared by region servers.</description>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
<description>The mode the cluster will be in. Possible values are
false: standalone and pseudo-distributed setups with managed
Zookeeper true: fully-distributed with unmanaged Zookeeper
Quorum (see hbase-env.sh)
</description>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
<description>Property from ZooKeeper's config zoo.cfg.
The port at which the clients will connect.
</description>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>master</value>
<description>Comma separated list of servers in the ZooKeeper Quorum.
For example,
"host1.mydomain.com,host2.mydomain.com".
By default this is set to localhost for local and
pseudo-distributed modes of operation. For a
fully-distributed setup, this should be set to a full
list of ZooKeeper quorum servers. If
HBASE_MANAGES_ZK is set in hbase-env.sh
this is the list of servers which we will start/stop
ZooKeeper on.
</description>
</property>
</configuration>
Zookeeper log:
2013-06-28 18:22:26,781 WARN org.apache.zookeeper.server.NIOServerCnxn: caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x13f8ac0b91b0002, likely client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:722)
2013-06-28 18:22:26,858 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /192.168.0.108:57447 which had sessionid 0x13f8ac0b91b0002
2013-06-28 18:25:21,001 INFO org.apache.zookeeper.server.ZooKeeperServer: Expiring session 0x13f8ac0b91b0002, timeout of 180000ms exceeded
2013-06-28 18:25:21,002 INFO org.apache.zookeeper.server.PrepRequestProcessor: Processed session termination for sessionid: 0x13f8ac0b91b0002
Master Log:
2013-06-28 18:22:41,932 INFO org.apache.hadoop.hbase.master.ServerManager: Waiting for region servers count to settle; currently checked in 0, slept for 1502022 ms
2013-06-28 18:22:43,457 INFO org.apache.hadoop.hbase.master.ServerManager: Waiting for region servers count to settle; currently checked in 0, slept for 1503547 ms
Remove 127.0.1.1 from the hosts file and turn off IPv6. That should fix the problem.
Your Regionserver is looking for HMaster at naresh-pc but you do not have any such entry in your /etc/hosts file. Please make sure your configuration is proper.
Can you try all of this:
Make sure your /conf/regionservers file has just one entry: slave
Not sure what HBase version you are using, but instead of using port 54310 for the hbase.rootdir property in your hbase-site.xml, use port 9000.
Your /etc/hosts file, on BOTH master and slave, should only have these custom entries:
127.0.0.1 localhost
192.168.0.108 master
192.168.0.126 slave
I am concerned that your logs state Opening socket connection to server naresh-pc/192.168.0.108:2181
Clearly the system thinks that zookeeper is on host naresh-pc, but in your config you are setting the zookeeper quorum at host master, which HBase will bind to. That's a problem right there. In my experience, HBase is EXTREMELY fussy about host names, so make sure they are all in sync in all your configs and in your /etc/hosts file.
Also, this may be a minor issue, but it wouldn't hurt to specify the ZooKeeper data directory in your .xml file (hbase.zookeeper.property.dataDir) so that you have the minimum set of settings that should make the cluster work.
We have an 8-node cluster using CDH3u2, configured with Cloudera Manager. We have a dedicated master node running our only instance of ZooKeeper. When I configure Hive to run local Hadoop, executed from the master node, I have no problem retrieving the data from HBase. When I run distributed map/reduce via Hive, I get the following error when the slave nodes connect to ZooKeeper.
HBase is able to connect to ZooKeeper but the connection closes immediately. This could be a sign that the server has too many connections (30 is the default).
We have tried setting max connections higher (we even tried removing the limit). This is a development cluster that has very few users, I know that the problem is not that there are too many connections (I am able to connect to zookeeper from the slave nodes using ./zkCli).
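For reference, ZooKeeper's four-letter-word commands give a quick view of the live connection count (a sketch run against the ZooKeeper host from the config below; cons needs ZooKeeper 3.3+):
echo stat | nc alnnimb01.aln.experian.com 2181   # summary, including number of connections
echo cons | nc alnnimb01.aln.experian.com 2181   # list each open client connection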
Server side logs indicate that the session was terminated by the client.
Client side hadoop log says:
'Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
Any idea why I am unable to maintain a connection to ZooKeeper via Hive map/reduce?
Configs for hbase and zookeeper are:
# Autogenerated by Cloudera SCM on Wed Dec 28 08:42:23 CST 2011
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/zookeeper
clientPort=2181
maxClientCnxns=1000
minSessionTimeout=4000
maxSessionTimeout=40000
HBase Site-XML is:
<property>
<name>hbase.rootdir</name>
<value>hdfs://alnnimb01:8020/hbase</value>
<description>The directory shared by region servers. Should be fully-qualified to include the filesystem to use. E.g: hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR</description>
</property>
<property>
<name>hbase.master.port</name>
<value>60000</value>
<description>The port master should bind to.</description>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
<description>The mode the cluster will be in. Possible values are false: standalone and pseudo-distributed setups with managed Zookeeper true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)</description>
</property>
<property>
<name>hbase.master.info.port</name>
<value>60010</value>
<description>The port for the hbase master web UI Set to -1 if you do not want the info server to run.</description>
</property>
<property>
<name>zookeeper.znode.parent</name>
<value>/hbase</value>
<description>Root ZNode for HBase in ZooKeeper. All of HBase's ZooKeeper files that are configured with a relative path will go under this node. By default, all of HBase's ZooKeeper file path are configured with a relative path, so they will all go under this directory unless changed.</description>
</property>
<property>
<name>zookeeper.znode.rootserver</name>
<value>root-region-server</value>
<description>Path to ZNode holding root region location. This is written by the master and read by clients and region servers. If a relative path is given, the parent folder will be ${zookeeper.znode.parent}. By default, this means the root location is stored at /hbase/root-region-server.</description>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
<description>The ZooKeeper client port to which HBase clients will connect</description>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>alnnimb01.aln.experian.com</value>
<description>Comma separated list of servers in the ZooKeeper Quorum. For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".</description>
</property>
It turns out that the map/reduce jobs submitted by Hive try to connect to ZooKeeper at 'localhost', regardless of how zookeeper.quorum is set up in the config file. I changed /etc/hosts so that the alias 'localhost' points to the IP of my master node, and the connection to ZooKeeper is maintained. I'm still looking for a better resolution, but this will work for now.
I figured it out. It was a configuration issue (as I suspected all along). The solution was to:
-set ‘hbase.zookeeper.quorum’ within the ‘hive-site.xml’ and place it in the ‘hadoop-conf’ directory
What threw me off was that there is no 'hbase.zookeeper.quorum' in hive-default.xml. I had been playing with 'hive.zookeeper.quorum' which was not the correct configuration to change.
I'm sorry for posting a new answer. I wanted to comment on the previous answer but the commenting UI seems to have disappeared >.< ...
Anyway, I wanted to say that I am experiencing the same problem, and it is solved by doing the /etc/hosts hack, but that seems like a very dirty solution...
Did anyone figure out a way of fixing this cleanly...??
Thanks :) !
I met exactly the same problem. What I did was use the following conf to start the Hive CLI, and it works fine.
hive --hiveconf hbase.zookeeper.quorum={zk-host}
You should configure HBase to use the external ZooKeeper and replace {zk-host} with the host of ZooKeeper.
I'm still looking for how to resolve this when using jdbc to access hive.
I'm attempting to run HBase in pseudo-distributed mode. I have followed all of the steps in the tutorial.
My hbase-site.xml looks like this:
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:9000/hbase</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
My regionservers file looks like this (default):
localhost
In the logs, Zookeeper starts OK, MiniZK starts OK, then I get a BindException with this being the culprit:
Caused by: java.net.BindException: Problem binding to /192.168.0.1:0 : Cannot assign requested address
Where in the world did it get the address 192.168.0.1? And why is it trying to bind to port 0? That IP is my NAT gateway. The IP address of the machine it's on is 192.168.0.200.
I have looked in all of the config files but don't see anywhere that I would specify that address.
** UPDATE **
It looks like the problem was that HBase was trying to reverse-lookup my IP address by my hostname which-- because I'm using my router as a DNS-- resolved to ... my router.
When I add an "alias" in the /etc/hosts file to 127.0.0.1 it resolves just fine.
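Concretely, the workaround amounts to appending the machine's hostname as an alias of the loopback address (a sketch; it simply adds a line equivalent to the one described above):
echo "127.0.0.1 $(hostname)" | sudo tee -a /etc/hosts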
#arnon-rotem-gal-oz, I just installed whatever came in the HBase tarball. I'm assuming miniZK is a scaled-down version of Zookeeper? I'm not running a separate instance of it.
The code you posted did the trick to resolve the next problem that came up.
Check the zookeeper configuration file (zoo.cfg in the zookeeper/conf directory)
Also why do you have both zookeeper and miniZK?
Also (not directly related to your question), you need to tell HBase where to find ZooKeeper, e.g. by adding the following to your hbase-site.xml:
<property>
<name>hbase.zookeeper.quorum</name>
<value>localhost</value>
</property>