I have manually installed Hadoop and Hive on my Ubuntu 16.04 laptop. Hive was working fine and I created a few test databases (Derby metastore).
On restarting the laptop, I found that Hive was running, but running any command like SHOW DATABASES gave an error.
I followed the solutions given on the web, i.e.:
1) rename metastore_db to metastore_db.tmp
2) run schematool to generate a new metastore_db
3) remove metastore_db.tmp (not removing it gives an error when you run Hive)
Now I am able to run Hive, but on running SHOW DATABASES I see only the default database.
Is there any way to add the databases I created previously (for example /user/hive/warehouse/computersalesdb.db saved in the HDFS filesystem) to the newly generated metastore?
* UPDATE *
On further analysis I found that a metastore_db folder is being created wherever I run Hive, so this seems to be the cause of the problem. The solution is:
1) As advised in the comment by @cricket_007, keep the metastore in MySQL or whatever other RDBMS you are using.
2) Always run Hive from the same folder.
3) Set the property "javax.jdo.option.ConnectionURL" in hive-site.xml to create the metastore in a specific folder (see the snippet below).
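For reference, a minimal sketch of such a hive-site.xml property, assuming an embedded Derby metastore and a made-up absolute path /home/hadoop/hive_metastore:
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <!-- point Derby at a fixed absolute path instead of the current working directory -->
  <value>jdbc:derby:;databaseName=/home/hadoop/hive_metastore/metastore_db;create=true</value>
</property>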
Leaving this comment for the benefit of other newbies like me :D
I find myself in a bit of a 'hive' pickle here. On booting the Hive CLI from my home directory, I can access the 'fooDB' database, which I had created earlier:
hadoop#server-7:~$ hive
/usr/local/hive/hive-1.1.0-cdh5.5.2/bin/hive: line 258: no: command not found
WARNING: Hive CLI is deprecated and migration to Beeline is recommended.
hive> SHOW DATABASES;
OK
default
fooDB
Time taken: 0.717 seconds, Fetched: 2 row(s)
But when I try to boot it from any other location in my file-system, I am unable to access 'fooDB':
hadoop#server-7:~/Downloads$ hive
/usr/local/hive/hive-1.1.0-cdh5.5.2/bin/hive: line 258: no: command not found
WARNING: Hive CLI is deprecated and migration to Beeline is recommended.
hive> SHOW DATABASES;
OK
default
Time taken: 0.72 seconds, Fetched: 1 row(s)
Basically, objects created after starting the Hive CLI from one particular location in the file-system, let's say '/home/hadoop/dir1', are not accessible from any other location in the file-system via the Hive CLI, and vice-versa.
The Hive-relevant section of my .bashrc looks like this:
## HIVE VARIABLES ##
export HIVE_HOME=/usr/local/hive/hive-1.1.0-cdh5.5.2
export HIVE_CONF_DIR=$HIVE_HOME/conf
export PATH=$PATH:$HIVE_HOME/bin
So I am not really sure how to proceed here. I also tried using an alias for hive, which did not help. Any help would be appreciated. Thanks!
After scouring the web, I finally bumped into this, which was exactly what I was looking for.
Hope this helps out people coming across the above problem!
I have the NameNode service installed on a machine called hadoop.
The core-site.xml file has the fs.defaultFS (equivalent to fs.default.name) set to the following:
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop:8020</value>
</property>
I have a very simple table called test_table that currently exists in the Hive server on the HDFS. That is, it is stored under /user/hive/warehouse/test_table. It was created using a very simple command in Hive:
CREATE TABLE test_table (record_id INT);
If I attempt to load data into the table locally (that is, using LOAD DATA LOCAL), everything proceeds as expected. However, if the data is stored on the HDFS and I want to load from there, an issue occurs.
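For comparison, the local load that works looks roughly like this (a sketch with a made-up local path):
hive> LOAD DATA LOCAL INPATH '/home/haduser/test_table.csv' INTO TABLE test_table;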
I run a very simple query to attempt this load:
hive> LOAD DATA INPATH '/user/haduser/test_table.csv' INTO TABLE test_table;
Doing so leads to the following error:
FAILED: SemanticException [Error 10028]: Line 1:17 Path is not legal ''/user/haduser/test_table.csv'':
Move from: hdfs://hadoop:8020/user/haduser/test_table.csv to: hdfs://localhost:8020/user/hive/warehouse/test_table is not valid.
Please check that values for params "default.fs.name" and "hive.metastore.warehouse.dir" do not conflict.
As the error states, it is attempting to move from hdfs://hadoop:8020/user/haduser/test_table.csv to hdfs://localhost:8020/user/hive/warehouse/test_table. The first path is correct because it references hadoop:8020; the second path is incorrect, because it references localhost:8020.
The core-site.xml file clearly states to use hdfs://hadoop:8020. The hive.metastore.warehouse.dir value in hive-site.xml correctly points to /user/hive/warehouse. Thus, I doubt this error message has any true value.
How can I get the Hive server to use the correct NameNode address when creating tables?
I found that the Hive metastore tracks the location of each table. You can see that location by running the following in the Hive console.
hive> DESCRIBE EXTENDED test_table;
Thus, this issue occurs if the NameNode address in core-site.xml was changed while the metastore service was still running. To resolve it, the service should be restarted on that machine:
$ sudo service hive-metastore restart
Then the metastore will use the new fs.defaultFS for newly created tables.
Already Existing Tables
The location of tables that already exist can be corrected by running the following set of commands. These were obtained from the Cloudera documentation on configuring the Hive metastore for High Availability.
$ /usr/lib/hive/bin/metatool -listFSRoot
...
Listing FS Roots..
hdfs://localhost:8020/user/hive/warehouse
hdfs://localhost:8020/user/hive/warehouse/test.db
Correcting the NameNode location:
$ /usr/lib/hive/bin/metatool -updateLocation hdfs://hadoop:8020 hdfs://localhost:8020
Now the listed NameNode is correct.
$ /usr/lib/hive/bin/metatool -listFSRoot
...
Listing FS Roots..
hdfs://hadoop:8020/user/hive/warehouse
hdfs://hadoop:8020/user/hive/warehouse/test.db
I shut down my HDFS client while the HDFS and Hive instances were running. Now, when I log back into Hive, I can't execute any of my DDL tasks, e.g. "show tables" or "describe tablename". It gives me the error below:
ERROR exec.Task (SessionState.java:printError(401)) - FAILED: Error in metadata: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
Can anybody suggest what I need to do to get my metastore_db instantiated without recreating the tables? Otherwise, I have to duplicate the effort of creating the entire database/schema once again.
I have resolved the problem. These are the steps I followed:
1. Go to $HIVE_HOME/bin/metastore_db
2. Copy db.lck to db.lck1 and dbex.lck to dbex.lck1
3. Delete the lock entries from db.lck and dbex.lck
4. Log out from the Hive shell as well as from all running instances of HDFS
5. Re-login to HDFS and the Hive shell. If you run DDL commands, it may again give you the "Could not instantiate HiveMetaStoreClient" error
6. Copy db.lck1 back to db.lck and dbex.lck1 back to dbex.lck
7. Log out from all Hive shell and HDFS instances
8. Relogin and you should see your old tables
Note: Step 5 may seem a little weird because even after deleting the lock entries it will still give the HiveMetaStoreClient error, but it worked for me. A rough shell sketch of the file operations follows.
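This sketch covers steps 1-3 and 6, assuming the default embedded Derby metastore under $HIVE_HOME/bin/metastore_db and reading "delete the lock entries" as emptying the lock files:
$ cd $HIVE_HOME/bin/metastore_db
$ cp db.lck db.lck1 && cp dbex.lck dbex.lck1    # step 2: back up the lock files
$ > db.lck                                      # step 3: empty the lock files
$ > dbex.lck
# ... steps 4-5: log out of Hive/HDFS, then log back in ...
$ cp db.lck1 db.lck && cp dbex.lck1 dbex.lck    # step 6: restore the backed-up lock files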
Advantage: You don't have to duplicate the effort of re-creating the entire database.
Hope this helps somebody facing the same error. Please vote if you find it useful. Thanks ahead.
I was told that we generally get this exception if the Hive console was not terminated properly.
The fix:
Run the jps command, look for the "RunJar" process and kill it using the kill -9 command.
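For example (a sketch; replace the placeholder with the PID that jps prints):
$ jps | grep RunJar
$ kill -9 <pid-of-RunJar>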
See: getting error in hive
Have you copied the jar containing the JDBC driver for your metadata db into Hive's lib dir?
For instance, if you're using MySQL to hold your metadata db, you will need to copy mysql-connector-java-5.1.22-bin.jar into $HIVE_HOME/lib.
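For example, something like this (a sketch, assuming $HIVE_HOME is set and the connector jar is in the current directory):
$ cp mysql-connector-java-5.1.22-bin.jar $HIVE_HOME/lib/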
This fixed that same error for me.
I faced the same issue and resolved it by starting the metastore service. Sometimes the service gets stopped if your machine was rebooted or went down. You can start the service by running the following commands:
Login as $HIVE_USER
nohup hive --service metastore > $HIVE_LOG_DIR/hive.out 2> $HIVE_LOG_DIR/hive.log &
I had a similar problem with the Hive server and followed the steps below:
1. Go to $HIVE_HOME/bin/metastore_db
2. Copy db.lck to db.lck1 and dbex.lck to dbex.lck1
3. Delete the lock entries from db.lck and dbex.lck
4. Re-login from the Hive shell. It works.
Thanks
For instance, I use MySQL to hold the metadata db. I copied mysql-connector-java-5.1.22-bin.jar into the $HIVE_HOME/lib folder and my error was resolved.
I was also facing the same problem, and figured out that I had both hive-default.xml and hive-site.xml (created manually by me). I moved my hive-site.xml to hive-site.xml-template (as I did not need this file), then started Hive, and it worked fine.
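Roughly, something like this (a sketch, assuming $HIVE_CONF_DIR points at your Hive conf directory):
$ mv $HIVE_CONF_DIR/hive-site.xml $HIVE_CONF_DIR/hive-site.xml-template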
Cheers,
Ajmal
I have faced this issue, and in my case it occurred while running the hive command from the command line.
I resolved it by running the kinit command, as I was using kerberized Hive.
kinit -kt <your keytab file location> <kerberos principal>
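For example, with made-up keytab and principal values:
kinit -kt /etc/security/keytabs/hive.service.keytab hive/myhost.example.com@EXAMPLE.COM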
I set up an HBase cluster to store data from OpenTSDB. Recently, due to a reboot of some of the nodes, HBase lost the table "tsdb". I can still see it on HBase's master node page, but when I click on it, it gives me a TableNotFoundException:
org.apache.hadoop.hbase.TableNotFoundException: tsdb
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:952)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:818)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:782)
at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:249)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:213)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:171)
......
I entered the hbase shell, trying to locate the 'tsdb' table, but got a similar message:
hbase(main):018:0> scan 'tsdb'
ROW COLUMN+CELL
ERROR: Unknown table tsdb!
However, when I tried to re-create this table, the hbase shell told me the table already exists...
hbase(main):013:0> create 'tsdb', {NAME => 't', VERSIONS => 1, BLOOMFILTER=>'ROW'}
ERROR: Table already exists: tsdb!
And I can also list the table in hbase shell
hbase(main):001:0> list
TABLE
tsdb
tsdb-uid
2 row(s) in 0.6730 seconds
Taking a look at the log, I found this, which should be the cause of my issue:
2012-05-14 12:06:22,140 WARN org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: Encountered problems when prefetch META table:
org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for table: tsdb, row=tsdb,,99999999999999
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:157)
at org.apache.hadoop.hbase.client.MetaScanner.access$000(MetaScanner.java:52)
at org.apache.hadoop.hbase.client.MetaScanner$1.connect(MetaScanner.java:130)
at org.apache.hadoop.hbase.client.MetaScanner$1.connect(MetaScanner.java:127)
It says it cannot find the row for tsdb in .META., but there are indeed tsdb rows in .META.:
hbase(main):002:0> scan '.META.'
ROW COLUMN+CELL
tsdb,\x00\x00\x0FO\xA2\xF1\xD0\x00\x00\x01\x00\x00\x0E\x00\ column=info:regioninfo, timestamp=1336311752799, value={NAME => 'tsdb,\x00\x00\x0FO\xA2\xF1\xD0\x00\x00\x01\x00\x00\x0E\x00\x00\x02\x00\x00\x12\x00\x00\x03\x00\x00\x13\x00\x00\x
x00\x02\x00\x00\x12\x00\x00\x03\x00\x00\x13\x00\x00\x05\x00 05\x00\x001,1336311752340.7cd0d2205d9ae5fcadf843972ec74ec5.', STARTKEY => '\x00\x00\x0FO\xA2\xF1\xD0\x00\x00\x01\x00\x00\x0E\x00\x00\x02\x00\x00\x12\x00\x00\x03\x00\x00\x13\x00\
\x001,1336311752340.7cd0d2205d9ae5fcadf843972ec74ec5. x00\x05\x00\x001', ENDKEY => '\x00\x00\x10O\xA3\x8C\x80\x00\x00\x01\x00\x00\x0B\x00\x00\x02\x00\x00\x19\x00\x00\x03\x00\x00\x1A\x00\x00\x05\x00\x001', ENCODED => 7cd0d2205d9ae5f
cadf843972ec74ec5,}
tsdb,\x00\x00\x0FO\xA2\xF1\xD0\x00\x00\x01\x00\x00\x0E\x00\ column=info:server, timestamp=1337011527000, value=brycobapd01.usnycbt.amrs.bankofamerica.com:60020
x00\x02\x00\x00\x12\x00\x00\x03\x00\x00\x13\x00\x00\x05\x00
\x001,1336311752340.7cd0d2205d9ae5fcadf843972ec74ec5.
tsdb,\x00\x00\x0FO\xA2\xF1\xD0\x00\x00\x01\x00\x00\x0E\x00\ column=info:serverstartcode, timestamp=1337011527000, value=1337011518948
......
tsdb-uid,,1336081042372.a30d8074431c6a31c6a0a30e61fedefa. column=info:server, timestamp=1337011527458, value=bry200163111d.usnycbt.amrs.bankofamerica.com:60020
tsdb-uid,,1336081042372.a30d8074431c6a31c6a0a30e61fedefa. column=info:serverstartcode, timestamp=1337011527458, value=1337011519807
6 row(s) in 0.2950 seconds
Here is the result after I ran "hbck" on the cluster
ERROR: Region hdfs://slave-node-1:9000/hbase/tsdb/249438af5657bf1881a837c23997747e on HDFS, but not listed in META or deployed on any region server
ERROR: Region hdfs://slave-node-1:9000/hbase/tsdb/4f8c65fb72910870690b94848879db1c on HDFS, but not listed in META or deployed on any region server
ERROR: Region hdfs://slave-node-1:9000/hbase/tsdb/63276708b4ac9f11e241aca8b56e9def on HDFS, but not listed in META or deployed on any region server
ERROR: Region hdfs://slave-node-1:9000/hbase/tsdb/e54ee4def67d7f3b6dba75a3430e0544 on HDFS, but not listed in META or deployed on any region server
ERROR: (region tsdb,\x00\x00\x0FO\xA2\xF1\xD0\x00\x00\x01\x00\x00\x0E\x00\x00\x02\x00\x00\x12\x00\x00\x03\x00\x00\x13\x00\x00\x05\x00\x001,1336311752340.7cd0d2205d9ae5fcadf843972ec74ec5.) First region should start with an empty key. You need to create a new region and regioninfo in HDFS to plug the hole.
ERROR: Found inconsistency in table tsdb
Summary:
-ROOT- is okay.
Number of regions: 1
Deployed on: master-node,60020,1337011518948
.META. is okay.
Number of regions: 1
Deployed on: slave-node-2,60020,1337011519845
Table tsdb is inconsistent.
Number of regions: 5
Deployed on: slave-node-2,60020,1337011519845 slave-node-1,60020,1337011519807 master-node,60020,1337011518948
tsdb-uid is okay.
Number of regions: 1
Deployed on: slave-node-1,60020,1337011519807
5 inconsistencies detected.
Status: INCONSISTENT
I have run
bin/hbase hbck -fix
which unfortunately does not solve my problem
Could someone help me out with the following?
1. Is it possible to recover the table "tsdb"?
2. If 1 cannot be done, what is the suggested way to gracefully remove 'tsdb' and create a new one?
3. I'd greatly appreciate it if anybody could let me know the recommended way to reboot a node. Currently, I leave my master node always up. For the other nodes, I run these commands immediately after each reboot:
# start data node
bin/hadoop-daemon.sh start datanode
bin/hadoop-daemon.sh start jobtracker
# start hbase
bin/hbase-daemon.sh start zookeeper
bin/hbase-daemon.sh start regionserver
Many Thanks!
A bit late, but maybe it's helpful to other searchers.
Run the ZooKeeper shell hbase zkcli
In the shell run ls /hbase/table
Run rmr /hbase/table/TABLE_NAME
Restart HBase
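For the 'tsdb' table from the question, the session would look roughly like this (a sketch; the znode path can differ by version, as noted in another answer):
$ hbase zkcli
ls /hbase/table
rmr /hbase/table/tsdb
# quit the shell, then restart HBase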
I am not very sure why you are unable to scan it. However, to recreate the table, you can try this:
1) Delete all entries in the .META. table for this table manually, and
2) Delete the directory corresponding to this table from HDFS
Try creating the table again after that.
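A rough sketch of those two steps (assuming the HBase root directory on HDFS is /hbase; the row keys are the full 'tsdb,...' keys shown by scan '.META.'):
hbase(main):001:0> deleteall '.META.', '<full tsdb row key>'   # repeat for each tsdb row
$ hadoop fs -rmr /hbase/tsdb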
If you are using cdh4.3 then the path in zookeeper should be /hbase/table94/
To expand on @Devin Bayer's answer, run:
delete /hbase/table/<name_of_zombie_table>
if you find any zombie tables being maintained by ZooKeeper. For more help on this issue, you should google 'HBase zombie tables'.
Try to fix meta:
hbase hbck
hbase hbck -fixMeta
hbase hbck -fixAssignments
hbase hbck -fixReferenceFiles
and then try again.
More instructions on deleting the tables:
~/hbase-0.94.12/bin/hbase shell
> truncate 'tsdb'
> truncate 'tsdb-meta'
> truncate 'tsdb-uid'
> truncate 'tsdb-tree'
> exit
I also had to restart the tsd daemon.
I get a similar error message when I try an HBase connection from a Java client on a machine that doesn't have the TCP privileges to access the HBase machines.
The table indeed exists when I run hbase shell on the HBase machine itself.
Does OpenTSDB have all the privileges/port configuration needed to access the HBase machines?
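One quick way to check connectivity from the OpenTSDB host (a sketch with made-up hostnames; 2181 is the default ZooKeeper client port and 60020 is the RegionServer port seen in the question's output):
$ nc -zv zookeeper-host 2181
$ nc -zv regionserver-host 60020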
I face these issues at my workplace too. I usually either delete the znodes and then remove the corresponding table, or restart both the HBase HMaster and HRegionServer to get the hbck status back to OK.
It is enough to remove the specified table from your ZooKeeper path.
For example, if zookeeper.znode.parent is configured to blob in hbase-site.xml, you should start zkCli.sh on your ZooKeeper server and remove that directory with the command rmr /blob/table/tsdb.
hbase-clean.sh --cleanZk
It works well, simple enough.