HBase Error: zookeeper.znode.parent mismatch - hadoop

I am trying to learn Hadoop and I've reached the HBase section of the Hadoop Definitive Guide.
I tried to start HBase and got an error. Could someone give me a step-by-step guide?
opel@ubuntu:~$ zkServer.sh start
JMX enabled by default
Using config: /home/opel/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
opel@ubuntu:~$ start-hbase.sh
starting master, logging to /home/opel/hbase-0.94.20/logs/hbase-opel-master-ubuntu.out
opel@ubuntu:~$ hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.94.20, r09c60d770f2869ca315910ba0f9a5ee9797b1edc, Fri May 23 22:00:41 PDT 2014
hbase(main):001:0> status
14/06/02 22:40:44 ERROR client.HConnectionManager$HConnectionImplementation: Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.
14/06/02 22:40:45 ERROR client.HConnectionManager$HConnectionImplementation: Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.
14/06/02 22:40:47 ERROR client.HConnectionManager$HConnectionImplementation: Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.
14/06/02 22:40:49 ERROR client.HConnectionManager$HConnectionImplementation: Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.
14/06/02 22:40:51 ERROR client.HConnectionManager$HConnectionImplementation: Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.
14/06/02 22:40:55 ERROR client.HConnectionManager$HConnectionImplementation: Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.
14/06/02 22:40:59 ERROR client.HConnectionManager$HConnectionImplementation: Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.
ERROR: org.apache.hadoop.hbase.MasterNotRunningException: Retried 7 times
Here is some help for this command:
Show cluster status. Can be 'summary', 'simple', or 'detailed'. The
default is 'summary'. Examples:
hbase> status
hbase> status 'simple'
hbase> status 'summary'
hbase> status 'detailed'
Is there anything wrong?

I had the same problem. For me, the solution was to add the following property to hbase-site.xml (in my case it is located in the /usr/lib/hbase/conf directory):
<configuration>
  <property>
    <name>zookeeper.znode.parent</name>
    <value>/hbase-unsecure</value>
  </property>
</configuration>
But this is only for the standalone mode. I still have no idea how to solve this problem when using external ZooKeeper.
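When ZooKeeper is external, one common diagnostic (not part of the original answer, just a sketch) is to ask ZooKeeper which parent znode the master actually registered under, then set zookeeper.znode.parent on the client to match. This assumes zkCli.sh is in the ZooKeeper bin directory and the ensemble listens on localhost:2181:
# Connect to the external ZooKeeper ensemble (host and port are assumptions)
./zkCli.sh -server localhost:2181
# Inside the client, list the root znodes; the HBase parent is usually
# /hbase or /hbase-unsecure depending on the distribution
ls /
# Whatever parent shows up there is the value to put in
# zookeeper.znode.parent in the client's hbase-site.xml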

There won't be any problem with the configuration if you are using the Cloudera Manager VM.
The problem is that HMaster is not up. To resolve it, go to Cloudera Manager and restart the HBase services; that will fix the issue.

When I had this problem, I was able to fix it by not using ZooKeeper.
If you're running HBase in standalone mode, you don't need ZooKeeper. I was able to skip the ZooKeeper part by setting the hbase.cluster.distributed property to false:
<property>
  <name>hbase.cluster.distributed</name>
  <value>false</value>
</property>
Now I can play with HBase without ZooKeeper.
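For reference, a minimal standalone hbase-site.xml might look like the sketch below. The hbase.rootdir path is an assumption (any local directory you own works); in standalone mode HBase runs its own ZooKeeper internally, so no external quorum is needed.
<configuration>
  <!-- Local filesystem root for standalone mode; the path is only an example -->
  <property>
    <name>hbase.rootdir</name>
    <value>file:///home/opel/hbase-data</value>
  </property>
  <!-- Run master, regionserver and ZooKeeper in a single JVM -->
  <property>
    <name>hbase.cluster.distributed</name>
    <value>false</value>
  </property>
</configuration>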

In the Cloudera management page, go to Services -> hbase1 and start the service; the problem will be resolved. There is no need to set zookeeper.znode.parent to /hbase-unsecure.

This problem took me a whole night, and this is how I resolved it:
After starting Hadoop, go to http://localhost:50070/dfshealth.html#tab-datanode
You will see a list of available datanodes in a table; you just need to add it to your hbase-site.xml. For me it looked as follows:
<configuration>
  <property>
    <name>zookeeper.znode.parent</name>
    <value>127.0.0.1:50010</value>
  </property>
</configuration>

The best thing is to check your HBase logs; they will give you a clear idea of the error. In my case I was running Kafka + ZooKeeper and HBase on the same server, so whenever I tried to run the hbase shell I kept getting the same error on the console. When I checked the logs I found
port is already in use
so I just changed the value of
hbase.zookeeper.property.clientPort
in the hbase-site.xml file and everything started running.
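For illustration, the change could look like the snippet below; 2182 is an arbitrary free port chosen for this sketch, not a value from the original answer. Whatever port you pick must be the one the ZooKeeper instance HBase should talk to is actually listening on.
<!-- hbase-site.xml: move HBase off the ZooKeeper client port that is already taken -->
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2182</value>
</property>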

Open zookeeper/bin and run the command ./zkServer.sh start
After it starts successfully, run ./zkCli.sh
then execute the command get /hbase-unsecure
If it returns null, run create -s /testmaster "127.0.0.1:2222"
Also, edit hbase-site.xml by adding
<property>
  <name>zookeeper.znode.parent</name>
  <value>/testmaster</value>
</property>
PS - keep the value of the hbase.cluster.distributed property as false.
Hope this solves your error.

Related

HBase - hbase:metadata holds info about non existing RegionServer ID - "Master startup cannot progress, in holding-pattern until region onlined."

I cannot start the HBase Master because I am getting this error:
[Thread-18] master.HMaster: hbase:meta,,1.1588230740
is NOT online; state={1588230740 state=OPEN, ts=1569328636085, server=regionserver17,16020,1566375930434};
ServerCrashProcedures=true.
Master startup cannot progress, in holding-pattern until region onlined.
The HBase Master shows as active and green, but it is not actually started properly: it keeps generating those warnings in the logs, and I cannot even run list in the HBase shell because I then get the error: ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
hbase:meta is referencing the non-existing ID 1566375930434, which exists neither in the WALs nor in the zookeeper-client /hbase-unsecure/rs list.
I tried with these commands:
$ sudo -u hdfs hdfs dfs -rm -r /apps/hbase/data/WALs/
$ zookeeper-client rmr /hbase-unsecure/rs
I also tried and this:
rm -f /var/lib/ambari-metrics-collector/hbase-tmp/zookeeper/zookeeper_0/version-2/*
and restarted HBase, but I am still getting the same issue.
If anyone can give me additional advice on what to try, I'd appreciate it.
Thanks
We resolved this issue. The solution is to:
1. stop HBase
2. log in to zookeeper-client as root
3. execute the command rmr /hbase-unsecure/meta-region-server
4. start HBase
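Put together as commands, the fix above might look like the sketch below; the /hbase-unsecure parent and the zookeeper-client wrapper come from this HDP-style setup (on a plain Apache install the parent is usually /hbase and the client is bin/zkCli.sh), and the stock start/stop scripts are assumed to be on the PATH.
# 1. Stop HBase
stop-hbase.sh
# 2. Connect to ZooKeeper as root
sudo zookeeper-client
# 3. Inside the client, remove the stale meta-region-server znode
rmr /hbase-unsecure/meta-region-server
# 4. Start HBase again so the master can re-register hbase:meta
start-hbase.sh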
You may have configured ZooKeeper with an OS path. This error can happen when you start and stop many times. I ran into this case, so I configured the ZooKeeper dir with an HDFS path. This is my hbase-site.xml:
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>hdfs://master:9000/user/hdoop/zookeeper</value>
</property>
Good luck.

Yarn JobHistory Error: Failed redirect for container_1400260444475_3309_01_000001

My MR job executed successfully, but when I check its history I get this error:
Failed redirect for container_1400260444475_3309_01_000001
Failed while trying to construct the redirect url to the log server. Log Server url may not be configured Unknown container. Container either has not started or has already completed or doesn't belong to this node at all.
Also my HistoryServer is running fine.
The good thing is that I can still browse older (retired) jobs from the JobHistory UI.
It is only the recent jobs that are missing.
Do I need to change the log rolling properties or the retention period?
Thanks in advance!
Try these steps.
add to mapred-site.xml
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>hostName:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>hostName:19888</value>
</property>
add to yarn-site.xml
<property>
<name>yarn.log.server.url</name>
<value>http://<LOG_SERVER_HOSTNAME>:19888/jobhistory/logs</value>
</property>
start the history server with
$ mr-jobhistory-daemon.sh start historyserver
I got it fixed by adding the actual hostname instead of 0.0.0.0 in mapred-site.xml:
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>hostName:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>hostName:19888</value>
</property>
And double-check that the MapReduce history server is running:
service hadoop-mapreduce-historyserver status
And accessible:
curl localhost:19888 -I

zookeeper.znode.parent mismatch exception

I have installed hadoop 2.2.0 & hbase-0.94.18 on Ubuntu 12.04. When I try to run the command
create 't1','c1'
in the hbase shell, I get the following error:
ERROR client.HConnectionManager$HConnectionImplementation:
Check the value configured in 'zookeeper.znode.parent'.
There could be a mismatch with the one configured in the master.
What's wrong?
A few things, in no particular order:
To start with, let the error display continue. It will retry 7 times and then exit; before it exits, it will show the name of the exception that occurred. Try to look it up; it probably says MasterNotRunningException.
Verify that the master is indeed running with sudo jps. You should see an entry for HMaster. If not, start the hbase-master service.
Assuming you're going for pseudo-distributed mode, you may also want to check your /etc/hosts to make sure that entries point to 127.0.0.1 and not 127.0.1.1 (see the sketch after this list).
For Cloudera's installs, here is a guide on how to set up HBase in pseudo-distributed mode. It also includes instructions to install hbase-master and zookeeper correctly.
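As an illustration of the /etc/hosts point above (the hostname is a placeholder, not from the original answer), a pseudo-distributed box would typically look like:
# /etc/hosts - map the machine's hostname to 127.0.0.1, not 127.0.1.1
127.0.0.1   localhost
127.0.0.1   ubuntu    # replace "ubuntu" with your actual hostname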
Maybe you should check whether zookeeper.znode.parent in hbase-site.xml is right; its default value is /hbase.
Mine was set by default to /hbase-unsecure (hbase-site.xml)

Whirr: hadoop-proxy.sh not working

I have installed Whirr and created an EC2 cluster. The cluster is created correctly and I can ssh to the nodes and check that Hadoop is working correctly. However, whenever I try to use the hadoop-proxy.sh, I get the following message:
bind: Cannot assign requested address
And if I try to see the HDFS in a different shell (I have previously configured the HADOOP_CONF_DIR variable), I get the following error:
13/11/29 05:15:09 WARN conf.Configuration: DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
Bad connection to FS. command aborted. exception: Server IPC version 7 cannot communicate with client version 4
I have tried with different properties files when setting up the cluster, using CDH, without using it... But I am still getting the same error. This is the properties file that I am currently using to launch the cluster with Whirr:
whirr.cluster-name=otrotest
whirr.instance-templates=1 hadoop-namenode+yarn-resourcemanager+mapreduce-historyserver,2 hadoop-datanode+yarn-nodemanager
whirr.provider=aws-ec2
whirr.identity=MY_ID
whirr.credential=MY_SECRET_KEY
whirr.private-key-file=/home/hduser/.ssh/whirr_id_rsa
whirr.public-key-file=/home/hduser/.ssh/whirr_id_rsa.pub
whirr.env.MAPREDUCE_VERSION=2
whirr.env.repo=cdh4
whirr.hadoop.install-function=install_cdh_hadoop
whirr.hadoop.configure-function=configure_cdh_hadoop
whirr.mr_jobhistory.start-function=start_cdh_mr_jobhistory
whirr.yarn.configure-function=configure_cdh_yarn
whirr.yarn.start-function=start_cdh_yarn
whirr.hardware-id=t1.micro
whirr.image-id=us-west-2/ami-6aad335a
whirr.location-id=us-west-2
whirr.java.install-function=install_openjdk
whirr.java.install-function=install_oab_java
I am new to Whirr and I guess I am missing something... But I don't know how to solve this. Any help would be much appreciated. Thanks in advance.

Error occurred when using HDFS to store the data of HBase

When I set the hbase.rootdir configuration in hbase-site.xml to the local filesystem, like file://hbase_root_dir_path, HBase worked OK. But when I changed it to hdfs://localhost:9000/hbase, HBase was also OK at the beginning; after a short time (usually a few seconds), however, it stopped working. Using the jps command I found that the HMaster had stopped, and of course I could not open the localhost:60010 web page. I read the log and found something wrong like the following:
INFO org.apache.zookeeper.server.PrepRequestProcessor: Got user-level KeeperException when processing sessionid:0x13e35b26eb80001 type:delete cxid:0x13 zxid:0xc txntype:-1 reqpath:n/a Error Path:/hbase/backup-masters/localhost,35320,1366700487007 Error:KeeperErrorCode = NoNode for /hbase/backup-masters/localhost,35320,1366700487007
INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2182. Will not attempt to authenticate using SASL (unknown error)
ERROR org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed open of region=person,,1365998702159.a5af90c23325829096517fb3b15bca17., starting to roll back the global memstore size.
java.lang.IllegalStateException: Could not instantiate a region instance.
WARN org.apache.zookeeper.ClientCnxn: Session 0x13e35b26eb80002 for server null, unexpected error, closing socket connection and attempting reconnect
I use the pseudo-distributed mode of HBase on Ubuntu 12.04 LTS.
In my /etc/hosts, I have already changed the IP of the hostname to 127.0.0.1, and my Hadoop safemode status is OFF. My Hadoop version is 1.0.4 and my HBase version is 0.94.6.1 (both are the latest stable releases); the HBase Reference Guide says hbase-0.94.x works fine with hadoop-1.0.x.
I think something about HDFS causes the problem, because it really works with the local filesystem. By the way, there is an hbase-x.x.x-security release; what's the difference between it and the hbase-x.x.x release, and do I need to use the security release?
Did you set your ZooKeeper quorum? It seems ZooKeeper is trying to connect to your localhost.
Try setting the addresses of the machines you want to use with the hbase.zookeeper.quorum property in hbase-site.xml. Also, if you're not managing your own ZooKeeper instance, make sure the line export HBASE_MANAGES_ZK=true in hbase-env.sh isn't commented out.
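As a sketch of that suggestion (the hostnames are placeholders, not taken from the question), the quorum property in hbase-site.xml would look like:
<!-- comma-separated list of the hosts running ZooKeeper; replace with real hostnames -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>node1.example.com,node2.example.com,node3.example.com</value>
</property>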
