Can't create table in hbase - hadoop

I'm new to HBase. I installed HBase on my Linux machine without installing Hadoop/HDFS, so it's a standalone HBase instance running against the local file system. I then started HBase using bin/start-hbase.sh and could see the org.apache.hadoop.hbase.master.HMaster process running with ps -ef. However, when I ran jps | grep HMaster, there was no output. I then opened ./bin/hbase shell and tried to create a table, and it gave me the following error message:
ERROR: Can't get master address from ZooKeeper; znode data == null
Can someone help me with this?
Thanks,
Gary
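For reference, the sequence described above amounts to roughly the following; the table name and column family here are just placeholders:
./bin/start-hbase.sh
./bin/hbase shell
hbase(main):001:0> create 'test', 'cf'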

Related

Error: Could not find or load main class backup

I have set up HBase on top of Hadoop on my Linux system. I created a sample table in the HBase shell and it is working fine. However, when I try to run the backup command I get an error in the terminal as follows:
> hbase backup create full hdfs://localhost:8020/data/backup
> Error: Could not find or load main class backup
OR
> hbase backup help
> Error: Could not find or load main class backup
I have installed Apache Hadoop 2.7.3 and HBase 2.1.4. The HBase build is the Apache distribution, not Cloudera or Hortonworks.
I see in the docs (http://hbase.apache.org/book.html#_backup_and_restore_commands) that the hbase backup command can be used. Please help here.
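For reference, one quick way to see whether the backup tool classes are even present in this build is to inspect the HBase classpath; this is only a hedged sketch, and the grep pattern is purely illustrative:
hbase version
hbase classpath | tr ':' '\n' | grep -i backup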

Error while working on Hive installed on the edge node

I am new to Hadoop/Hive and struggling to fix this: in a distributed Hadoop environment, where should Hive and Pig be installed, on the edge node or where Hadoop is installed?
Hadoop is installed on a separate server, say hadoopVM, with two separate data nodes, DN1 and DN2, and edge nodes from which I can submit jobs to Hadoop to load files into HDFS.
Up to here I have no issue. I am trying to install Hive on the edge node and am getting the error below.
Attached is the error I am getting on the edge node server.
It seems that the metastore service is not started. Start the service by issuing the following command in one session and keep that session open; in parallel, start another session and try to use Hive.
Active session mode:
sudo hive --service metastore
Background service mode:
If you append "&", the service will be started and keep running as a background process.
sudo hive --service metastore &
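If you want to confirm that the metastore actually came up, it listens on Thrift port 9083 by default (assuming the port has not been changed in your configuration):
netstat -lnt | grep 9083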
Alternative:
If you are still facing the problem, it may be caused by the newer version of MySQL; you can refer to my answer at the link below.
SemanticException in Hive Shell Mode

Greenplum issue - HDFS Protocol Installation for GPHDFS access to HDP 2.x cluster

I am getting an error when trying to read an external table using the GPHDFS protocol. Additionally, I am not able to access HDP 2.x files via the Greenplum cluster.
Getting this error:
devdata=# select count(*) from schema.ext_table;
ERROR: external table gphdfs protocol command ended with error. Error occurred during initialization of VM (seg5 slice1 datanode0:40001 pid=13407)
DETAIL:
java.lang.OutOfMemoryError: unable to create new native thread
Command: 'gphdfs://Authorithy/path
More symptoms
Not able to run the Hadoop list-files command as the gpadmin user on the Greenplum cluster, that is:
gpadmin$hdfs dfs -ls hdfs://namenode/file/path
We tried:
checked the settings related to the gphdfs VM parameters.
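For what it's worth, "unable to create new native thread" usually points to the OS per-user process/thread limit on the segment hosts rather than to JVM heap; a hedged first check, run as the gpadmin user on a segment host, would be:
ulimit -u
cat /proc/sys/kernel/threads-max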

configure hive with hadoop

I have configured Hadoop 2.2.0 as a single-node cluster (and was able to run the example jar).
Now I need to make Hive run queries using this Hadoop installation.
Should I set mapred.job.tracker to the yarn.resourcemanager.resource-tracker.address property?
I tried that, but I can't see the data loaded into Hive tables in HDFS.
I don't have enough reputation points to add a comment, so trying to help via an answer.
What are the daemons currently running for Hadoop? Use ps -eaf | grep "java" to check.
Do you see the JobTracker running or the ResourceManager?
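For example, on a healthy Hadoop 2.x single-node setup, jps would typically list daemons along these lines (PIDs omitted; the exact set depends on your configuration):
jps
NameNode
DataNode
ResourceManager
NodeManager
SecondaryNameNode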
Also, can you elaborate on the steps you performed to install Hive?
I have a screencast, Installing Apache Hive, that walks you through installing Hive. Next, you can follow my blog post Apache Hive - Getting Started. Hope this helps.

FAILED: Error in metadata: MetaException(message:Got exception: java.net.ConnectException Call to localhost/127.0.0.1:54310 failed

I am using Ubuntu 12.04, hadoop-0.23.5, hive-0.9.0.
I pointed my metastore_db to a separate location, $HIVE_HOME/my_db/metastore_db, in hive-site.xml.
Hadoop runs fine; jps shows ResourceManager, NameNode, DataNode, NodeManager, and SecondaryNameNode.
Hive starts up perfectly; metastore_db and derby.log are also created, and all Hive commands run successfully: I can create databases, tables, etc. But a few days later, when I run show databases or show tables, I get the error below:
FAILED: Error in metadata: MetaException(message:Got exception: java.net.ConnectException Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused) FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
I had this problem too and the accepted answer did not help me, so I will add my solution here for others:
My problem was that I had a single machine with a pseudo-distributed setup and Hive installed. It was working fine with localhost as the host name. However, when we decided to add more machines to the cluster, we also decided to give the machines proper names (machine01, machine02, etc.).
I changed all the Hadoop conf/*-site.xml files and the hive-site.xml file too, but still had the error. After exhaustive research I realized that Hive was picking up the URIs not from the *-site.xml files, but from the metastore tables in MySQL: all the Hive table metadata is saved in two tables, SDS and DBS. After changing the DB_LOCATION_URI column in DBS and the LOCATION column in SDS to point to the latest NameNode URI, I was back in business.
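A hedged sketch of that change, assuming a MySQL-backed metastore database called metastore and treating the user, old URI, and new URI purely as placeholders (back up the metastore before touching it):
mysql -u hiveuser -p metastore -e "UPDATE DBS SET DB_LOCATION_URI = REPLACE(DB_LOCATION_URI, 'hdfs://localhost:54310', 'hdfs://machine01:54310');"
mysql -u hiveuser -p metastore -e "UPDATE SDS SET LOCATION = REPLACE(LOCATION, 'hdfs://localhost:54310', 'hdfs://machine01:54310');"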
Hope this helps others.
Possible reasons for this:
If you changed your Hadoop/Hive version, you may be pointing to the previous Hadoop version (which has fs.default.name=hdfs://localhost:54310 in core-site.xml) in your hive-0.9.0/conf/hive-env.sh file.
$HADOOP_HOME may be pointing to some other location.
The specified version of Hadoop is not working.
Your NameNode may be in safe mode; run bin/hdfs dfsadmin -safemode leave or bin/hadoop dfsadmin -safemode leave.
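To confirm whether safe mode is actually the cause before leaving it, you can check its status first (assuming the hdfs binary is on your PATH):
bin/hdfs dfsadmin -safemode get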
In case of a fresh installation, the above problem can be the effect of a NameNode issue.
Try formatting the NameNode using the command:
hadoop namenode -format
1. Take your NameNode out of safe mode. Try the command below:
hadoop dfsadmin -safemode leave
2. Restart your Hadoop daemons:
sudo service hadoop-master stop
sudo service hadoop-master start
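If the hadoop-master service is not defined on your distribution, a rough equivalent using the stock Apache Hadoop 2.x scripts (paths relative to your Hadoop install) would be:
sbin/stop-yarn.sh
sbin/stop-dfs.sh
sbin/start-dfs.sh
sbin/start-yarn.sh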
