java.sql.SQLException: Failed to start database 'metastore_db' error while initializing the database using Hive (hadoop)

I installed Hadoop and Hive on a 3-node cluster. I am able to log in to hive from the cluster node where Hive is running:
[root@NODE_3 hive]# hive
Logging initialized using configuration in jar:file:/usr/lib/hive/lib/hive-common-0.10.0-cdh4.2.0.jar!/hive-log4j.properties
Hive history file=/tmp/root/hive_job_log_root_201304020248_306369127.txt
hive> show tables;
OK
Time taken: 1.459 seconds
hive>
But when I try to run some Hive tests on my cluster nodes, I get the error given below.
Here it is trying to initialize the database as user=ashsshar (my username):
13/04/02 02:32:44 INFO mapred.JobClient: Cleaning up the staging area hdfs://scaj-ns/user/ashsshar/.staging/job_201304020010_0080
13/04/02 02:32:44 ERROR security.UserGroupInformation: PriviledgedActionException as:ashsshar (auth:SIMPLE) cause:java.io.IOException: javax.jdo.JDOFatalDataStoreException: Failed to create database '/var/lib/hive/metastore/metastore_db', see the next exception for details.
NestedThrowables:
java.sql.SQLException: Failed to create database '/var/lib/hive/metastore/metastore_db', see the next exception for details.
java.io.IOException: javax.jdo.JDOFatalDataStoreException: Failed to create database '/var/lib/hive/metastore/metastore_db', see the next exception for details.
NestedThrowables:
java.sql.SQLException: Failed to create database '/var/lib/hive/metastore/metastore_db', see the next exception for details.
I have tried two things:
1. Granting permissions on /var/lib/hive/metastore/metastore_db
2. Removing the lock files: rm /var/lib/hive/metastore/metastore_db/*.lck
But I am still getting the same error.

It seems to be an issue with creating the metastore. I solved it by creating a directory and pointing Hive at that directory, as follows:
Step 1: create a directory in your home directory, say hive-metastore-dir.
Step 2: as the superuser, edit hive-site.xml (it is in /usr/lib/hive/conf), changing:
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:derby:;databaseName=/var/lib/hive/metastore/metastore_db;create=true</value>
to
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:derby:;databaseName=/home/hive-metastore-dir/metastore/metastore_db;create=true</value>
Step 3: start the CLI as sudo hive and run your queries.
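The steps above can be sketched in shell. This is a self-contained demo that rewrites a scratch copy of hive-site.xml rather than the real file in /usr/lib/hive/conf, which would require sudo; the hive-metastore-dir name comes from the answer.

```shell
# Scratch copy of hive-site.xml standing in for /usr/lib/hive/conf/hive-site.xml
conf=$(mktemp -d)
cat > "$conf/hive-site.xml" <<'EOF'
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby:;databaseName=/var/lib/hive/metastore/metastore_db;create=true</value>
</property>
EOF
# Step 1: create the new metastore directory
mkdir -p "$conf/hive-metastore-dir/metastore"
# Step 2: point databaseName at the new directory
sed -i 's|/var/lib/hive/metastore/metastore_db|/home/hive-metastore-dir/metastore/metastore_db|' "$conf/hive-site.xml"
grep databaseName "$conf/hive-site.xml"
```

After the edit, the ConnectionURL value names the new directory instead of /var/lib/hive/metastore/metastore_db.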

Log in to the hive client from a directory where the user has write access. By default, Hive tries to create temporary directories both locally and in HDFS when a shell is opened.

Follow these steps if you are using CDH:
1. Copy /usr/lib/hive/conf/hive-site.xml into /usr/lib/spark/conf/
This will resolve the "metastore_db" error.
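A sketch of that copy, assuming the standard CDH paths given above. The demo uses scratch directories so it is self-contained; on a real CDH node the single sudo cp command in the comment is all that is needed.

```shell
# On a real CDH node this would be:
#   sudo cp /usr/lib/hive/conf/hive-site.xml /usr/lib/spark/conf/
# Scratch directories stand in for the two conf directories here.
hive_conf=$(mktemp -d)
spark_conf=$(mktemp -d)
echo '<configuration/>' > "$hive_conf/hive-site.xml"
cp "$hive_conf/hive-site.xml" "$spark_conf/"
ls "$spark_conf"
```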
Thanks

Related

Hive JDBC connection gives an error if MR is involved

I am working on a Hive JDBC connection in HDP 2.1.
The code works fine for queries where MapReduce is not involved, like "select * from tablename". The same code shows an error when the query has a 'where' clause or names specific columns (which runs MapReduce in the background).
I have verified the correctness of the query by executing it in the Hive CLI.
I have also verified the read/write permissions on the table for the user through which I am running the Java JDBC code.
The error is as follows:
java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:275)
at org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:355)
at com.testing.poc.hivejava.HiveJDBCTest.main(HiveJDBCTest.java:25)
Today I also got this exception when I submitted a Hive task from Java. The client output was:
hive_driver: org.apache.hive.jdbc.HiveDriver
hive_url: jdbc:hive2://10.174.242.28:10000/default
get connection success (Hive connection obtained successfully!)
java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
I tried executing the SQL directly in Hive and it worked well. Then I looked at the log in /var/log/hive/hadoop-cmf-hive-HIVESERVER2-cloud000.log.out and found the cause of the error:
Job Submission failed with exception 'org.apache.hadoop.security.AccessControlException(Permission denied: user=anonymous, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
Solution
I used the following command:
sudo -u hdfs hadoop fs -chmod -R 777 /
This solved the error!
hive_driver: org.apache.hive.jdbc.HiveDriver
hive_url: jdbc:hive2://cloud000:10000/default
get connection success (Hive connection obtained successfully!)
Heart beat
(Insert executed successfully!)
If you use beeline to execute the same queries, do you see the same behaviour as you get while running your test program?
The beeline client also uses the open source JDBC driver and connects to Hive server, which is similar to what you do in your program. HiveCLI on the other hand has Hive embedded in it and does not connect to a remote Hive server by default. You can use HiveCLI to connect to a remote Hive Server 1 but I don't believe you can use it to connect to Hive Server2 (use beeline for Hive Server 2).
For this error, you can take a look at the hive.log and hiveserver2.log on the server side to get more insight into what might have caused the MapReduce error.
Hope this helps.
Cheers,
Holman

Hive derby issue

I installed hive-0.12.0 recently, but when I run queries in the hive shell I get the error below:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
This is contained in my hive-default.xml.template:
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:derby:;databaseName=/home/hduser/hive-0.12.0/metastore/metastore_db;create=true</value>
<description>JDBC connect string for a JDBC metastore</description>
Could anyone help?
This seems to be a problem with your metastore. Since you are using the default embedded Derby metastore, a lock file will be left behind after an abnormal exit. If you remove that lock file, the issue should be solved:
rm metastore_db/*.lck
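A self-contained sketch of that cleanup, using a scratch directory in place of the real metastore_db:

```shell
# Simulate a metastore_db directory containing stale Derby lock files.
metastore_db=$(mktemp -d)
touch "$metastore_db/db.lck" "$metastore_db/dbex.lck"
# Remove all *.lck files, as in the answer above.
rm "$metastore_db"/*.lck
ls "$metastore_db"   # prints nothing: the lock files are gone
```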

java.sql.SQLException: Failed to start database '/var/lib/hive/metastore/metastore_db' in hive

I am a beginner with Hive. When I try to execute any hive command:
hive> SHOW TABLES;
it shows the error below:
FAILED: Error in metadata: javax.jdo.JDOFatalDataStoreException: Failed to start database '/var/lib/hive/metastore/metastore_db', see the next exception for details.
NestedThrowables:
java.sql.SQLException: Failed to start database '/var/lib/hive/metastore/metastore_db', see the next exception for details.
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
It looks like a Derby locking issue. You can temporarily fix it by deleting the lock file inside the directory /var/lib/hive/metastore/metastore_db, but the issue will occur again in the future:
sudo rm -rf /var/lib/hive/metastore/metastore_db/*.lck
With the default embedded Derby metastore, it is not possible to start multiple instances of Hive at the same time. Changing the Hive metastore to a MySQL or Postgres server solves this issue.
See the following cloudera documentation for changing hive metastore
http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH4/4.2.0/CDH4-Installation-Guide/cdh4ig_topic_18_4.html
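For reference, switching the metastore to MySQL amounts to pointing these hive-site.xml properties at a MySQL server. The host, database name, user, and password below are placeholders; the MySQL JDBC driver jar must also be on Hive's classpath (see the answers about mysql-connector-java below).

```xml
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://metastore-host:3306/metastore?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hiveuser</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>password</value>
</property>
```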
I encountered a similar error when I forgot about another instance of spark-shell running on the same node.
Update hive-site.xml under the ~/hive/conf folder with the name/value below and try again:
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:derby:;databaseName=/var/lib/hive/metastore/metastore_db;create=true</value>
In my case I needed to create a directory and grant proper permissions:
$ sudo mkdir /var/lib/hive/metastore/
$ sudo chown hdfs:hdfs /var/lib/hive/metastore/

FAILED: Error in metadata: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient

I shut down my HDFS client while the HDFS and Hive instances were running. Now when I log back into Hive, I can't execute any of my DDL tasks, e.g. "show tables" or "describe tablename". It gives me the error below:
ERROR exec.Task (SessionState.java:printError(401)) - FAILED: Error in metadata: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
Can anybody suggest what I need to do to get my metastore_db instantiated without recreating the tables? Otherwise, I have to duplicate the effort of creating the entire database/schema once again.
I have resolved the problem. These are the steps I followed:
1. Go to $HIVE_HOME/bin/metastore_db
2. Copy db.lck to db.lck1 and dbex.lck to dbex.lck1
3. Delete the lock entries from db.lck and dbex.lck
4. Log out of the hive shell as well as all running instances of HDFS
5. Log back in to HDFS and the hive shell. If you run DDL commands, it may again give you the "Could not instantiate HiveMetaStoreClient" error
6. Copy db.lck1 back to db.lck and dbex.lck1 back to dbex.lck
7. Log out of all hive shell and HDFS instances
8. Log back in and you should see your old tables
Note: Step 5 may seem a little weird, because even after deleting the lock entries it will still give the HiveMetaStoreClient error, but it worked for me.
Advantage: You don't have to duplicate the effort of re-creating the entire database.
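The lock-file shuffle in those steps can be simulated with plain file operations; the scratch directory below stands in for $HIVE_HOME/bin/metastore_db.

```shell
metastore_db=$(mktemp -d)   # stands in for $HIVE_HOME/bin/metastore_db
echo "lock-entry" > "$metastore_db/db.lck"
echo "lock-entry" > "$metastore_db/dbex.lck"
# Keep backup copies of the lock files
cp "$metastore_db/db.lck"   "$metastore_db/db.lck1"
cp "$metastore_db/dbex.lck" "$metastore_db/dbex.lck1"
# Delete the lock entries (truncate the originals)
: > "$metastore_db/db.lck"
: > "$metastore_db/dbex.lck"
# Later, copy the backups back over the originals
cp "$metastore_db/db.lck1"   "$metastore_db/db.lck"
cp "$metastore_db/dbex.lck1" "$metastore_db/dbex.lck"
cat "$metastore_db/db.lck"   # prints: lock-entry
```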
Hope this helps somebody facing the same error. Please vote if you find it useful. Thanks ahead.
I was told that we generally get this exception when the hive console is not terminated properly.
The fix:
Run the jps command, look for the "RunJar" process, and kill it using the kill -9 command.
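The PID extraction can be sketched like this. The jps output below is simulated so the snippet is self-contained; on a real machine you would pipe the output of jps itself and actually run the kill command.

```shell
# Simulated `jps` output; on a real node replace this with: jps
jps_output="12345 RunJar
23456 NameNode
34567 Jps"
# Pull out the RunJar PID; it would then be killed with: kill -9 "$pid"
pid=$(printf '%s\n' "$jps_output" | awk '/RunJar/ {print $1}')
echo "kill -9 $pid"
```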
See: getting error in hive
Have you copied the jar containing the JDBC driver for your metadata db into Hive's lib dir?
For instance, if you're using MySQL to hold your metadata db, you will need to copy mysql-connector-java-5.1.22-bin.jar into $HIVE_HOME/lib.
This fixed that same error for me.
I faced the same issue and resolved it by starting the metastore service. Sometimes the service stops if your machine is rebooted or goes down. You can start the service by running the following command:
Log in as $HIVE_USER, then:
nohup hive --service metastore > $HIVE_LOG_DIR/hive.out 2> $HIVE_LOG_DIR/hive.log &
I had a similar problem with the hive server and followed the steps below:
1. Go to $HIVE_HOME/bin/metastore_db
2. Copy db.lck to db.lck1 and dbex.lck to dbex.lck1
3. Delete the lock entries from db.lck and dbex.lck
4. Log back in from the hive shell. It works.
Thanks
For instance, I use MySQL to hold the metadata db. I copied mysql-connector-java-5.1.22-bin.jar into the $HIVE_HOME/lib folder, and my error was resolved.
I was also facing the same problem, and figured out that I had both hive-default.xml and hive-site.xml (created manually by me).
I moved my hive-site.xml to hive-site.xml-template (as I did not need this file), then started hive; it worked fine.
Cheers,
Ajmal
I have faced this issue; in my case it occurred while running the hive command from the command line.
I resolved it by running the kinit command, as I was using kerberized Hive.
kinit -kt <your keytab file location> <kerberos principal>

Unable to instantiate HiveMetaStoreClient

I have a 3-node cluster running Hive.
When I try to run some tests from outside the cluster, I get the error given below:
FAILED: Error in metadata: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
Logging initialized using configuration in file:/net/slc01nwj/scratch/ashsshar/view_storage/ashsshar_bda_latest_2/work/hive_scratch/conf/hive-log4j.properties
When I log in to a cluster node and execute hive, it works fine:
hive> show databases ;
OK
default
The following error is generated in the test log files:
13/04/04 03:10:49 ERROR security.UserGroupInformation: PriviledgedActionException as:ashsshar (my username) (auth:SIMPLE) cause:java.io.IOException: javax.jdo.JDOFatalDataStoreException: Failed to create database '/var/lib/hive/metastore/metastore_db', see the next exception for details.
NestedThrowables:
java.sql.SQLException: Failed to create database '/var/lib/hive/metastore/metastore_db', see the next exception for details.
My hive-site.xml file contains this connection property:
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:derby:;databaseName=/var/lib/hive/metastore/metastore_db;create=true</value>
<description>JDBC connect string for a JDBC metastore</description>
I have changed /var/lib/hive/metastore/metastore_db on my cluster node, but I am still getting the same error.
I have also tried removing all *.lck files from the above directory.
Does {username} have the permissions to create
/var/lib/hive/metastore/metastore_db ?
If it is a test cluster you could do
sudo chmod -R 777 /var/lib/hive/metastore/metastore_db
or chown it to the user running it.
Try removing the $HADOOP_HOME/build folder. I had the same problem with hive-0.10.0 and later versions. Then I tried hive-0.9.0 and got a different set of errors. Luckily I found the thread "Hive doesn't work on install". I tried the same trick and it worked for me, magically. I am using the default Derby db.
This is a permissions issue on the hive folder. Please do the following and it will work well:
Switch to the hive user (for me, hduser) and run:
sudo chmod -R 777 hive
This issue occurs due to abrupt termination of the hive shell, which leaves behind an unattended db.lck file.
To resolve this issue:
1. Browse to your metastore_db location
2. Remove the tmp, dbex.lck and db.lck files
3. Open the hive shell again. It will work.
You will see the tmp, dbex.lck and db.lck files get created once again.
It worked after I moved the metastore out of /var/lib/hive/. I did that by editing /etc/hive/conf.dist/hive-site.xml
from:
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:derby:;databaseName=/var/lib/hive/metastore/metastore_db;create=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
to:
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:derby:;databaseName=/home/prashant/hive/metastore/metastore_db;create=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
Please check whether you already have a MetaStore_db in your hadoop directory. If you have, remove it and format your HDFS again, and then try to start hive.
Yes, it's a privilege problem. Enter your hive shell with the following command:
sudo -u hdfs hive
