Node /hbase is not in ZooKeeper - hadoop

When running PredictionIO on a local single-node Hadoop (Hortonworks) installation, I get the following error:
Storage Backend Connections
Verifying Meta Data Backend
Verifying Model Data Backend
Verifying Event Data Backend
ERROR client.HConnectionManager$HConnectionImplementation - The node /hbase is not in ZooKeeper. It should have been written by the master. Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.
ERROR storage.Storage$ - Error initializing storage client for source HBASE
ERROR storage.Storage$ - org.apache.hadoop.hbase.MasterNotRunningException: The node /hbase is not in ZooKeeper. It should have been written by the master. Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.
java.util.NoSuchElementException: None.get
at scala.None$.get(Option.scala:313)
at scala.None$.get(Option.scala:311)
at io.prediction.data.storage.Storage$.sourcesToClientMeta(Storage.scala:90)
at io.prediction.data.storage.Storage$.getDataObject(Storage.scala:193)
at io.prediction.data.storage.Storage$.getDataObject(Storage.scala:178)
at io.prediction.data.storage.Storage$.getLEvents(Storage.scala:274)
at io.prediction.data.storage.Storage$.verifyAllDataObjects(Storage.scala:246)
at io.prediction.tools.Console$.status(Console.scala:1259)
at io.prediction.tools.Console$$anonfun$main$1.apply(Console.scala:646)
at io.prediction.tools.Console$$anonfun$main$1.apply(Console.scala:601)
at scala.Option.map(Option.scala:145)
at io.prediction.tools.Console$.main(Console.scala:601)
at io.prediction.tools.Console.main(Console.scala)
Unable to connect to all storage backend(s) successfully. Please refer to error message(s) above. Aborting.
How should I configure HBase + ZooKeeper on a local Hadoop installation?

In HBase's conf/hbase-site.xml, add or edit the following property:
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>YOUR ZOOKEEPER FOLDER</value>
</property>
For example:
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/home/myname/zookeeper-3.4.6</value>
</property>

Related

Hadoop: Protocol message tag had invalid wire type

I set up a Hadoop 2.6 cluster using two nodes of 8 cores each on Ubuntu 12.04. sbin/start-dfs.sh and sbin/start-yarn.sh both succeed, and jps on the master node shows the following:
22437 DataNode
22988 ResourceManager
24668 Jps
22748 SecondaryNameNode
23244 NodeManager
The jps output on the slave node is:
19693 DataNode
19966 NodeManager
I then run the PI example.
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 30 100
This gives me the following error log:
java.io.IOException: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had invalid wire type.; Host Details : local host is: "Master-R5-Node/xxx.ww.y.zz"; destination host is: "Master-R5-Node":54310;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
at org.apache.hadoop.ipc.Client.call(Client.java:1472)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752)
The problem seems to be with the HDFS file system, since the command bin/hdfs dfs -mkdir /user fails with a similar exception:
java.io.IOException: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had invalid wire type.; Host Details : local host is: "Master-R5-Node/xxx.ww.y.zz"; destination host is: "Master-R5-Node":54310;
where xxx.ww.y.zz is the IP address of Master-R5-Node.
I have checked and followed all the recommendations of the Apache ConnectionRefused wiki page and of this site.
Despite a week-long effort, I cannot get it fixed.
Thanks.
There are many possible causes for the problem I faced, but I finally fixed it with some of the following steps.
Make sure that you have the needed permissions on the /hadoop and HDFS temporary files. (You have to figure out where those live in your particular case.)
Remove the port number from fs.defaultFS in $HADOOP_CONF_DIR/core-site.xml. It should look like this:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://my.master.ip.address/</value>
<description>NameNode URI</description>
</property>
</configuration>
Add the following two properties to $HADOOP_CONF_DIR/hdfs-site.xml:
<property>
<name>dfs.datanode.use.datanode.hostname</name>
<value>false</value>
</property>
<property>
<name>dfs.namenode.datanode.registration.ip-hostname-check</name>
<value>false</value>
</property>
Voila! You should now be up and running!
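As a quick sanity check (standard Hadoop CLI commands; the directory is illustrative), you can confirm that the HDFS RPC interface accepts requests again, since both of these previously failed with the InvalidProtocolBufferException:
# Recreate and list the directory over the NameNode RPC interface
bin/hdfs dfs -mkdir -p /user
bin/hdfs dfs -ls /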

Configure edge node to launch Hadoop jobs on cluster running on a private network

I am trying to set up an edge node for a cluster at my workplace. The cluster is CDH 5.* Hadoop YARN. It has its own internal private high-speed network. The edge node is outside the private network.
I ran the steps for Hadoop client setup and configured core-site.xml:
sudo apt-get install hadoop-client
Since the cluster is hosted on its own private network, the internal IP addresses are different:
10.100.100.1 - NameNode
10.100.100.2 - Data Node 1
10.100.100.4 - Data Node 2
10.100.100.6 - Data Node 3
To handle this, I asked the cluster admin to add the following properties to hdfs-site.xml on the NameNode so that the listening ports are not open only to the internal IP range:
<property>
<name>dfs.namenode.servicerpc-bind-host</name>
<value>0.0.0.0</value>
</property>
<property>
<name>dfs.namenode.http-bind-host</name>
<value>0.0.0.0</value>
</property>
<property>
<name>dfs.namenode.https-bind-host</name>
<value>0.0.0.0</value>
</property>
<property>
<name>dfs.namenode.rpc-bind-host</name>
<value>0.0.0.0</value>
</property>
After these settings were applied and the services restarted, I am able to run the following command:
hadoop fs -ls /user/hduser/testData/XML_Flows/test/test_input/*
This works fine. But when I try to cat the file, I get the following error:
administrator#administrator-Virtual-Machine:/etc/hadoop/conf.empty$ hadoop fs -cat /user/hduser/testData/XML_Flows/test/test_input/*
15/05/04 15:39:02 WARN hdfs.BlockReaderFactory: I/O error constructing remote block reader.
org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/10.100.100.6:50010]
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:532)
at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3035)
at org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:744)
at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:659)
at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:327)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:574)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:797)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:844)
at java.io.DataInputStream.read(DataInputStream.java:100)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:78)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:52)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:112)
at org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:104)
at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:99)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:306)
at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:278)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:260)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:244)
at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:190)
at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
15/05/04 15:39:02 WARN hdfs.DFSClient: Failed to connect to /10.100.100.6:50010 for block, add to deadNodes and continue. org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/10.100.100.6:50010]
org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/10.100.100.6:50010]
The same error message is repeated multiple times.
To be on the safe side, I copied the rest of the XML files, i.e. hdfs-site.xml, yarn-site.xml, and mapred-site.xml, from the cluster data nodes, but I still get the same error. Does anyone have any idea about this error, or how to make edge nodes work with clusters running on a private network?
The username on the edge node is "administrator", whereas the cluster is configured with the "hduser" id. Could this be a problem? I have configured passwordless login between the edge node and the NameNode.
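No accepted answer appears in this thread. One client-side setting that is sometimes tried in this situation, offered here purely as a hypothesis rather than as the thread's solution, is dfs.client.use.datanode.hostname, which makes the HDFS client connect to datanodes by hostname instead of the private IPs the NameNode hands back (the hostnames must then resolve to reachable addresses on the edge node):
<!-- Edge-node hdfs-site.xml (hypothetical fix, not from the original thread) -->
<property>
<name>dfs.client.use.datanode.hostname</name>
<value>true</value>
</property>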

After changing CDH5 Kerberos authentication I am not able to access HDFS

I am trying to implement Kerberos authentication. I am using Hadoop 2.3 on CDH 5.0.1. I have made the following changes:
I added the following properties to core-site.xml:
<property>
<name>hadoop.security.authentication</name>
<value>kerberos</value>
</property>
<property>
<name>hadoop.security.authorization</name>
<value>true</value>
</property>
After restarting the daemons, when I issue the hadoop fs -ls / command, I get the following error:
ls: Failed on local exception: java.io.IOException: Server asks us to fall back to SIMPLE auth, but this client is configured to only allow secure connections.; Host Details : local host is: "cldx-xxxx-xxxx/xxx.xx.xx.xx"; destination host is: "cldx-xxxx-xxxx":8020;
Please help me out.
Thanks in advance,
Ankita Singla
There is a lot more to configuring a secure HDFS cluster than just setting hadoop.security.authentication to Kerberos. See Configuring Hadoop Security in CDH 5 for the required config settings. You'll need to create appropriate keytab files. Only after you have configured everything, and confirmed that none of the Hadoop services report any errors in their respective logs (NameNode, DataNodes on all hosts, ResourceManager, NodeManagers on all nodes, etc.), can you attempt to connect.
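For illustration, once the keytab files exist, a client must obtain a Kerberos ticket before HDFS commands will succeed; the principal name and keytab path below are assumptions for a typical CDH layout, not values from this question:
# Obtain a ticket for the HDFS principal (names/paths are illustrative)
kinit -kt /etc/hadoop/conf/hdfs.keytab hdfs/$(hostname -f)@EXAMPLE.COM
# Confirm the ticket, then retry the command that failed
klist
hadoop fs -ls /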

HBase error java.lang.RuntimeException: Unable to run quorum server

I am not able to start HBase; whenever I start it, I get only HMaster and HRegionServer in jps. HQuorumPeer keeps going missing. I checked the logs and am getting the error below:
java.lang.RuntimeException: Unable to run quorum server
at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:454)
at org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:409)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:151)
at org.apache.hadoop.hbase.zookeeper.HQuorumPeer.runZKServer(HQuorumPeer.java:80)
at org.apache.hadoop.hbase.zookeeper.HQuorumPeer.main(HQuorumPeer.java:70)
Caused by: java.io.IOException: Failed to process transaction type: 1 error: KeeperErrorCode = NoNode for /hbase
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:153)
at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:223)
at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:417)
... 4 more
Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.processTransaction(FileTxnSnapLog.java:211)
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:151)
The reason you are encountering this error could be that the data directory where ZooKeeper stores its snapshots and logs is corrupted.
To keep the HQuorumPeer daemon from dying, you need to provide a path to a new directory where ZooKeeper can store its snapshots. To do this, add the following property to hbase-site.xml:
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>location of the newly created folder</value>
<description>Property from ZooKeeper's config zoo.cfg.
The directory where the snapshot is stored.
</description>
</property>
The default path of "hbase.zookeeper.property.dataDir" is /tmp/hbase-*/zookeeper (e.g. /tmp/hbase-hadoop/zookeeper); remove it and try to start ZooKeeper again.
Removing all files from the ZooKeeper data directory solved the issue. In my case:
rm /var/lib/zookeeper/version-2/*
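A conservative sequence for applying this fix (script names are from a standard HBase tarball install; your layout may differ) is to stop HBase first so nothing is writing to the directory, clear it, and restart:
# Stop HBase (and its managed HQuorumPeer), wipe the corrupted
# ZooKeeper snapshots/logs, then bring everything back up
bin/stop-hbase.sh
rm /var/lib/zookeeper/version-2/*
bin/start-hbase.sh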

Hive/HBase Integration - Zookeeper Session Closes Immediately

We have an 8-node cluster using CDH3u2, configured with Cloudera Manager. We have a dedicated master node running our only instance of ZooKeeper. When I configure Hive to run local Hadoop, executed from the master node, I have no problem retrieving the data from HBase. When I run distributed map/reduce via Hive, I get the following error when the slave nodes connect to ZooKeeper:
HBase is able to connect to ZooKeeper but the connection closes immediately. This could be a sign that the server has too many connections (30 is the default).
We have tried setting max connections higher (we even tried removing the limit). This is a development cluster with very few users, so I know the problem is not too many connections (I am able to connect to ZooKeeper from the slave nodes using ./zkCli).
Server side logs indicate that the session was terminated by the client.
Client side hadoop log says:
Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
Any idea why I am unable to maintain a connection to ZooKeeper via Hive map/reduce?
The configs for HBase and ZooKeeper are:
# Autogenerated by Cloudera SCM on Wed Dec 28 08:42:23 CST 2011
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/zookeeper
clientPort=2181
maxClientCnxns=1000
minSessionTimeout=4000
maxSessionTimeout=40000
The hbase-site.xml is:
<property>
<name>hbase.rootdir</name>
<value>hdfs://alnnimb01:8020/hbase</value>
<description>The directory shared by region servers. Should be fully-qualified to include the filesystem to use. E.g: hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR</description>
</property>
<property>
<name>hbase.master.port</name>
<value>60000</value>
<description>The port master should bind to.</description>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
<description>The mode the cluster will be in. Possible values are false: standalone and pseudo-distributed setups with managed Zookeeper true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)</description>
</property>
<property>
<name>hbase.master.info.port</name>
<value>60010</value>
<description>The port for the hbase master web UI Set to -1 if you do not want the info server to run.</description>
</property>
<property>
<name>zookeeper.znode.parent</name>
<value>/hbase</value>
<description>Root ZNode for HBase in ZooKeeper. All of HBase's ZooKeeper files that are configured with a relative path will go under this node. By default, all of HBase's ZooKeeper file path are configured with a relative path, so they will all go under this directory unless changed.</description>
</property>
<property>
<name>zookeeper.znode.rootserver</name>
<value>root-region-server</value>
<description>Path to ZNode holding root region location. This is written by the master and read by clients and region servers. If a relative path is given, the parent folder will be ${zookeeper.znode.parent}. By default, this means the root location is stored at /hbase/root-region-server.</description>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
<description>The ZooKeeper client port to which HBase clients will connect</description>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>alnnimb01.aln.experian.com</value>
<description>Comma separated list of servers in the ZooKeeper Quorum. For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".</description>
</property>
It turns out that the map/reduce job submitted by Hive tries to connect to ZooKeeper at 'localhost', regardless of how zookeeper.quorum is set up in the config file. I changed /etc/hosts so that the alias 'localhost' points to the IP of my master node, and the connection to ZooKeeper is maintained. Still looking for a better resolution, but this will work for now.
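For illustration, the hack described above amounts to an /etc/hosts entry like the following on each slave node (the IP address is a placeholder, not a value from the original post):
# /etc/hosts on each slave: point the 'localhost' alias at the
# master node that runs ZooKeeper (IP is illustrative)
10.0.0.1 localhost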
I figured it out. It was a configuration issue (as I suspected all along). The solution was to:
-set ‘hbase.zookeeper.quorum’ within ‘hive-site.xml’ and place the file in the ‘hadoop-conf’ directory
What threw me off was that there is no 'hbase.zookeeper.quorum' in hive-default.xml. I had been playing with 'hive.zookeeper.quorum', which was not the correct configuration to change.
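As a sketch, the hive-site.xml entry described above would look something like this, reusing the quorum host from the hbase-site.xml quoted earlier in the question:
<property>
<name>hbase.zookeeper.quorum</name>
<value>alnnimb01.aln.experian.com</value>
<description>ZooKeeper quorum for the HBase storage handler; without this, Hive map/reduce tasks fall back to 'localhost'</description>
</property>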
I'm sorry for posting a new answer. I wanted to comment on the previous answer but the commenting UI seems to have disappeared >.< ...
Anyway, I wanted to say that I am experiencing the same problem, and it is solved by doing the /etc/hosts hack, but that seems like a very dirty solution...
Did anyone figure out a way of fixing this cleanly...??
Thanks :) !
I met exactly the same problem. What I did was use the following conf to start the Hive CLI, and it works fine:
hive --hiveconf hbase.zookeeper.quorum={zk-host}
You should configure HBase to use the external ZooKeeper and replace {zk-host} with the ZooKeeper host.
I'm still looking for how to resolve this when using JDBC to access Hive.
