org.apache.hadoop.hbase.NotServingRegionException: Region is not online: -ROOT-,,0 - what is the reason behind this error? - hadoop

Thanks for taking an interest in my question :) Whenever I fire a query like scan, put, or create for any table in the HBase shell, I get the following error. The HBase shell still returns the listing of tables and their descriptions, so could you please help me get out of this?
Also, can you please tell me the meaning of the structure -ROOT-,,0?
Regarding versions, I am using:
HBase 0.92.1-cdh4.1.2
Hadoop 2.0.0-cdh4.1.2
ERROR: org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: Region is not online: -ROOT-,,0

I had the same error. ZooKeeper was managed by HBase
(it wasn't standalone!),
so a quick fix is:
$ hbase zkcli
zookeeper_cli> rmr /hbase/root-region-server
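If you want to confirm the znode is actually gone before restarting HBase, a quick check (assuming the default /hbase parent znode) looks like this:
$ hbase zkcli
zookeeper_cli> ls /hbase
zookeeper_cli> quit
root-region-server should no longer appear in the listing; restart HBase afterwards so the master can re-create it.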

By clearing the ZooKeeper nodes, HBase started working fine again :) What exactly I followed was (it is not recommended, and you should have your HBase and ZK shut down first):
### shut down ZK and HBase
1) for each ZK node:
su                    # log in as root
cd $ZOOKEEPER_HOME
cp data/myid myid     # back up the existing myid file to ZooKeeper's home folder
rm -rf data/*
rm -rf datalog/*
mkdir -p data
mkdir -p datalog
cp myid data/myid     # restore the myid backup so there is no need to recreate it
2) for each ZK node:
(start ZK)
3) finally
(start HBase; start commands for steps 2 and 3 are sketched below)
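On a plain tarball install (an assumption; a CDH setup would normally use its own service scripts), those start commands might look like:
$ $ZOOKEEPER_HOME/bin/zkServer.sh start      # on each ZK node
$ $HBASE_HOME/bin/start-hbase.sh             # on the HBase master, once all ZK nodes are up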
By clearing data and datalog, you should have a very clean ZooKeeper.
Hope this helps, and good luck.
Thanks

My Error was ERROR: org.apache.hadoop.hbase.NotServingRegionException: Region ROLE,,1457743249518.221f6f7fdacacbe179674267f8d06575. is not online on ddtmwutelc3ml01.azure-dev.us164.corpintra.net,16020,1459486618702
at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2898)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:947)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2235)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)
The resolution is:
Run the command below:
$ hbase zkcli
zookeeper_cli> rmr /hbase/root-region-server
Then:
stop HBase and ZooKeeper
back up mydir from /hadoop/zookeeper
delete everything from /hadoop/zookeeper
restart ZooKeeper, then HBase
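A minimal sketch of those filesystem steps, assuming /hadoop/zookeeper is your ZooKeeper data directory and that both HBase and ZooKeeper are already stopped:
$ mkdir -p /tmp/zk-backup
$ cp -a /hadoop/zookeeper/. /tmp/zk-backup/     # back up before deleting anything
$ rm -rf /hadoop/zookeeper/*
Restore whatever must be kept (for example the node's myid file) from the backup, then restart ZooKeeper and HBase.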

Related

HBase - hbase:metadata holds info about non existing RegionServer ID - "Master startup cannot progress, in holding-pattern until region onlined."

I cannot start Hbase Master because I am getting this error:
[Thread-18] master.HMaster: hbase:meta,,1.1588230740
is NOT online; state={1588230740 state=OPEN, ts=1569328636085, server=regionserver17,16020,1566375930434};
ServerCrashProcedures=true.
Master startup cannot progress, in holding-pattern until region onlined.
The HBase Master shows as active and green, but it has not actually started properly: it keeps generating those warnings in the logs, and I cannot even run list in the HBase shell, because then I get the error ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
hbase:meta is referencing a non-existing ID, 1566375930434, which exists neither in the WALs nor in the zookeeper-client /hbase-unsecure/rs listing.
I tried with these commands:
$ sudo -u hdfs hdfs dfs -rm -r /apps/hbase/data/WALs/
$ zookeeper-client rmr /hbase-unsecure/rs
I also tried this:
rm -f /var/lib/ambari-metrics-collector/hbase-tmp/zookeeper/zookeeper_0/version-2/*
and restarted HBase, but I still always have the same issue.
If anyone can give me additional advice on what to try, I would appreciate it.
Thanks
We resolved this issue.
The solution is to:
stop HBase
log in to zookeeper-client as root
execute the command rmr /hbase-unsecure/meta-region-server
start HBase (a sketch of these steps is below)
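As a quick sketch, with HBase stopped first (the /hbase-unsecure parent znode comes from this Ambari-style setup; a stock install usually uses /hbase):
$ zookeeper-client
[zk] rmr /hbase-unsecure/meta-region-server
[zk] quit
Then start HBase again and wait for the master to assign hbase:meta.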
You may have configured ZooKeeper with an OS path. This error can happen when you start and stop many times. I hit this case, so I configured the ZooKeeper dir with an HDFS path. This is my hbase-site.xml:
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>hdfs://master:9000/user/hdoop/zookeeper</value>
</property>
Good luck.

In cloudera 5.13.0 services are not starting

I mistakenly deleted the /var/log/* folder, and because of that the services installed on that specific node are not starting in Cloudera. The log files are not being generated, and there is no clear error message in Cloudera Manager. Can someone please suggest how to proceed?
Please see the attached screenshot for reference.
Thanks in advance.
You need to create the empty folders, e.g.:
sudo mkdir -p /var/log/cloudera-scm-agent
sudo mkdir -p /var/log/hadoop-hdfs
sudo mkdir -p /var/log/cloudera-scm-server
sudo mkdir -p /var/log/hadoop-mapreduce
etc., based on the services selected in Cloudera Manager, because Cloudera doesn't create the log directories (it only creates the files inside them).
You can test this by doing a global search in Cloudera Manager for /var/log; you will find a lot of log directory names. Just create them and it should work.
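A hedged sketch of creating a batch of them in one go (the exact directory list and owners depend on which services run on that node; the service users in the comments are typical defaults, not something taken from Cloudera Manager):
for d in cloudera-scm-agent cloudera-scm-server hadoop-hdfs hadoop-mapreduce; do
  sudo mkdir -p /var/log/$d
done
# then chown each directory to the user its service runs as, for example:
sudo chown cloudera-scm: /var/log/cloudera-scm-server
sudo chown hdfs:hadoop /var/log/hadoop-hdfs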

Kylin Sample Cube on Cloudera doesn't work properly

I'm trying to figure out what's going wrong with my SampleCube, but I don't know how to find a solution.
First of all, I'm using Cloudera CDH 5.8.0 with Hadoop 2.6.0. I have Hive, HBase and so on.
I had to download the binaries for CDH from Kylin's site, and...
Problems I had that were solved:
1) I had to set the KYLIN_HOME variable, because neither bin/check-env.sh nor bin/kylin.sh start worked properly. I set it with:
$ echo "export KYLIN_HOME=/home/cloudera/Kylin_Folder/apache_kylin" >> ~/.bashrc
$ source ~/.bashrc
2) I had problems with mkdir when creating the "/kylin" folder. I found a solution and tried the instruction below, and it works.
sudo -u hdfs hadoop fs -mkdir /kylin
3) And now I am trying to run the sample from Kylin's site.
But my cube has no storage at all. In the overall cube view that is what I see, and when I open the build view, the build is stuck at "#1 Step Name: Create Intermediate Flat Hive Table". When I click "Log", the log shows the failure (screenshots of the overall view and the log are in the original post).
Please help me with this; I would be grateful.
OK, then. I've just found what I had to do.
Steps:
1) Download Kylin for CDH 5.7/5.8 and extract to /opt
2) Export KYLIN_HOME in .bash_profile
3) Restart CDH
4) Run services in cloudera in order: ZooKeeper, HDFS, HBase, Hive, Hue, YARN
5) Add cloudera user to hdfs group: sudo usermod -a -G hdfs cloudera
6) Create kylin folder: sudo -u hdfs hadoop fs -mkdir /kylin
7) Change ownership: sudo -u hdfs hadoop fs -chown cloudera:supergroup /kylin
8) Change permissions: sudo -u hdfs hadoop fs -chmod go+w /kylin
9) Load sample: $KYLIN_HOME/bin/sample.sh
10) Start kylin: $KYLIN_HOME/bin/kylin.sh start
11) Navigate to: http://quickstart.cloudera:7070/kylin
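To sanity-check the result afterwards (a sketch; the host name is the Cloudera quickstart default used in step 11):
$KYLIN_HOME/bin/check-env.sh
curl -I http://quickstart.cloudera:7070/kylin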

hbase(main):007:0> create 'test', 'data' Error

I installed and configured hbase-0.94.2. While connecting to the running instance of HBase using the hbase shell command and trying to create a table named test with a single column family named data:
hbase(main):007:0> create 'test', 'data'
the shell displays an error and a stack trace.
What should I do to resolve this?
I followed this tutorial.
Actually, I resolved this problem by restarting HBase:
step1:
$cd /usr/local/hbase/bin
step2:
$./start-hbase.sh
localhost: zookeeper running as process 3669. Stop it first.
master running as process 3783. Stop it first.
localhost: regionserver running as process 3926. Stop it first.
step3:
$kill 3669
$kill 3783
$kill 3926
step4:
./start-hbase.sh
step5:
verify that it works: http://localhost:60010/
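As an alternative to killing the PIDs by hand, the bundled scripts can usually do the same thing more cleanly (a sketch assuming the same /usr/local/hbase install; stop-hbase.sh also stops the ZooKeeper instance that HBase manages):
$ cd /usr/local/hbase/bin
$ ./stop-hbase.sh
$ ./start-hbase.sh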
Posting the stack trace would be useful for us to help you out.
Also, 0.94.2 is old; you should consider moving to a 0.98.x release.
Here are my easy steps to start using HBase:
$ wget https://archive.apache.org/dist/hbase/hbase-0.98.0/hbase-0.98.0-hadoop2-bin.tar.gz
$ tar xzvf hbase-0.98.0-hadoop2-bin.tar.gz
$ export HBASE_HOME=$(pwd)/hbase-0.98.0-hadoop2
$ export PATH=$HBASE_HOME/bin:$PATH
$ start-hbase.sh
Now HBase is up and running, and you can start using the shell:
$ hbase shell

Hive failed to create /user/hive/warehouse

I just got started with Apache Hive, and I am using my local Ubuntu 12.04 box with Hive 0.10.0 and Hadoop 1.1.2.
Following the official "Getting Started" guide on the Apache website, I am now stuck at the Hadoop command for creating the Hive warehouse directory, as given in the guide:
$ $HADOOP_HOME/bin/hadoop fs -mkdir /user/hive/warehouse
The error was: mkdir: failed to create /user/hive/warehouse
Does Hive require Hadoop in a specific mode? I know I didn't have to do much to my Hadoop installation other than updating JAVA_HOME, so it is in standalone mode. I am sure Hadoop itself is working, since I can run the Pi example that comes with the Hadoop installation.
Also, the other command to create /tmp reports that the /tmp directory already exists, so it wasn't recreated, and /bin/hadoop fs -ls does list the current directory.
So, how can I get around it?
Almost all examples in the documentation have this command wrong. Just like in Unix, you need the "-p" flag to create the parent directories as well, unless you have already created them. This command will work:
$HADOOP_HOME/bin/hadoop fs -mkdir -p /user/hive/warehouse
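For completeness, the guide's full warehouse setup (including /tmp and group write permission) is roughly:
$HADOOP_HOME/bin/hadoop fs -mkdir -p /tmp
$HADOOP_HOME/bin/hadoop fs -mkdir -p /user/hive/warehouse
$HADOOP_HOME/bin/hadoop fs -chmod g+w /tmp
$HADOOP_HOME/bin/hadoop fs -chmod g+w /user/hive/warehouse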
When running Hive on a local system, just add this to ~/.hiverc:
SET hive.metastore.warehouse.dir=${env:HOME}/Documents/hive-warehouse;
You can specify any folder to use as a warehouse. Obviously, any other hive configuration method will do (hive-site.xml or hive -hiveconf, for example).
That's possibly what Ambarish Hazarnis had in mind when saying "or create the warehouse in your home directory".
This seems like a permission issue. Do you have access to the root folder /?
Try the following options:
1. Run the command as a superuser
OR
2. Create the warehouse in your home directory.
Let us know if this helps. Good luck!
When setting hadoop properties in the spark configuration, prefix them with spark.hadoop.
Therefore set
conf.set("spark.hadoop.hive.metastore.warehouse.dir","/new/location")
This works for older versions of Spark; the property changed in Spark 2.0.0.
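For Spark 2.0.0 and later, the relevant setting is spark.sql.warehouse.dir. A sketch of passing either property on the command line:
spark-shell --conf spark.hadoop.hive.metastore.warehouse.dir=/new/location    # pre-2.0
spark-shell --conf spark.sql.warehouse.dir=/new/location                      # 2.0 and later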
Adding an answer for reference for Cloudera CDH users who are seeing this same issue.
If you are using Cloudera CDH distribution, make sure you have followed these steps:
Launch Cloudera Manager (Express / Enterprise) by clicking on the desktop icon.
Open Cloudera Manager page in browser
Start all services
Cloudera has the /user/hive/warehouse folder created by default; it's just that YARN and HDFS might not be up and running to access this path.
While this is a simple permission issue that was resolved with sudo in my comment above, there are a couple of notes:
Creating it in the home directory should work as well, but then you may need to update the Hive setting for the warehouse path, which I think defaults to /user/hive/warehouse.
I ran into another error with a CREATE TABLE statement in the Hive shell; the error was something like this:
hive> CREATE TABLE pokes (foo INT, bar STRING);
FAILED: Error in metadata: MetaException(message:Got exception: java.io.FileNotFoundException File file:/user/hive/warehouse/pokes does not exist.)
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
It turned out to be another permission issue: you have to create a group called "hive", add the current user to that group, and change the ownership of /user/hive/warehouse to that group. After that, it works. Details can be found at the link below:
http://mail-archives.apache.org/mod_mbox/hive-user/201104.mbox/%3CBANLkTinq4XWjEawu6zGeyZPfDurQf+j8Bw#mail.gmail.com%3E
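A sketch of that fix on the command line (the group name and path follow the description above; adjust the user name as needed):
sudo groupadd hive
sudo usermod -a -G hive $USER
$HADOOP_HOME/bin/hadoop fs -chgrp -R hive /user/hive/warehouse
$HADOOP_HOME/bin/hadoop fs -chmod -R g+w /user/hive/warehouse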
If you are running Linux, check the data directory and its permissions (in Hadoop's core-site.xml); it looks like you have kept the default, which is /data/tmp, and in most cases that will require root permission.
Change the XML config file, delete /data/tmp, and run the filesystem format again (of course, after you have modified the core-site.xml config).
I recommend using a later version of Hive, e.g. 1.1.0; 0.10.0 is very buggy.
Run this command and then try to create the directory; it grants the user permissions under the HDFS /user directory:
hadoop fs -chmod -R 755 /user
I am using macOS with Homebrew as the package manager. I had to set the property in hive-site.xml as:
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/usr/local/Cellar/hive/2.3.1/libexec/conf/warehouse</value>
</property>
