java.sql.SQLException: Failed to start database '/var/lib/hive/metastore/metastore_db' in hive - hadoop

I am new to Hive. When I try to execute any Hive command, e.g.:
hive> SHOW TABLES;
it shows the error below:
FAILED: Error in metadata: javax.jdo.JDOFatalDataStoreException: Failed to start database '/var/lib/hive/metastore/metastore_db', see the next exception for details.
NestedThrowables:
java.sql.SQLException: Failed to start database '/var/lib/hive/metastore/metastore_db', see the next exception for details.
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask

It looks like a Derby locking issue. You can temporarily fix it by deleting the lock files inside the directory /var/lib/hive/metastore/metastore_db, but the problem will keep coming back:
sudo rm -rf /var/lib/hive/metastore/metastore_db/*.lck
With the default embedded Derby metastore, it is not possible to run more than one Hive instance at the same time. Switching the Hive metastore to a MySQL or PostgreSQL server solves this permanently.
See the following Cloudera documentation for changing the Hive metastore:
http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH4/4.2.0/CDH4-Installation-Guide/cdh4ig_topic_18_4.html
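As a rough sketch only (javax.jdo.option.ConnectionURL, ConnectionDriverName, ConnectionUserName and ConnectionPassword are standard Hive settings, but the host, database, user and password below are placeholders you must replace, and the MySQL JDBC driver jar has to be in Hive's lib directory), a MySQL-backed metastore is configured in hive-site.xml roughly like this:
<!-- assumed example values: replace host, database, user and password with your own -->
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://dbhost:3306/metastore?createDatabaseIfNotExist=true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hiveuser</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hivepassword</value>
</property>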

I've encountered a similar error when I forgot about another instance of spark-shell running on the same node.

Update hive-site.xml under the ~/hive/conf folder with the following name/value pair and try again:
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:derby:;databaseName=/var/lib/hive/metastore/metastore_db;create=true</value>
</property>

In my case I needed to create a directory and grant proper permissions:
$ sudo mkdir /var/lib/hive/metastore/
$ sudo chown hdfs:hdfs /var/lib/hive/metastore/

Related

What is the reason for the error message "Error: FUNCTION 'NUCLEUS_ASCII' already exists. (state=X0Y68,code=30000)"?

The following error information is output:
Error: FUNCTION 'NUCLEUS_ASCII' already exists. (state=X0Y68,code=30000)
org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!
Underlying cause: java.io.IOException : Schema script failed, errorcode 2
Use --verbose for detailed stacktrace.
*** schemaTool failed ***
Hive and Hadoop daemons are running fine. I started the Hive services after the metastore services via Cygwin and deleted all the metastore db directories. After running the commands below, I am also facing the same issue mentioned above.
In Windows command prompt window:
C:\hadoop_new\db-derby-10.14.2.0\bin\startNetworkServer -h 0.0.0.0
In Cygwin terminal window:
$HIVE_HOME/bin/schematool -dbType derby -initSchema
Error:
hive > FAILED: HiveException java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
Solution:
Open the following file:
C:\hadoop_new\apache-hive-3.1.2\scripts\metastore\upgrade\derby\hive-schema-3.1.0.derby.sql
Comment out the 'NUCLEUS_ASCII' and 'NUCLEUS_MATCHES' function definitions, then rerun schematool -dbType derby -initSchema and everything goes well!
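For orientation, the statements to comment out in that script look roughly like the ones below; the exact signatures may differ between Hive versions, so match on the function names rather than copying these lines verbatim (Derby treats -- as a line comment):
-- CREATE FUNCTION "APP"."NUCLEUS_ASCII" (C CHAR(1)) RETURNS INTEGER LANGUAGE JAVA PARAMETER STYLE JAVA READS SQL DATA CALLED ON NULL INPUT EXTERNAL NAME 'org.datanucleus.store.rdbms.adapter.DerbySQLFunction.ascii' ;
-- CREATE FUNCTION "APP"."NUCLEUS_MATCHES" (TEXT VARCHAR(8000), PATTERN VARCHAR(8000)) RETURNS INTEGER LANGUAGE JAVA PARAMETER STYLE JAVA READS SQL DATA CALLED ON NULL INPUT EXTERNAL NAME 'org.datanucleus.store.rdbms.adapter.DerbySQLFunction.matches' ;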
I had a similar issue; the error occurred because I attempted to start Hive before initializing the schema. Try the following:
$ mv metastore_db metastore_db.tmp
Now run your schema setup.
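For the embedded Derby case, the schema setup is the schematool invocation shown earlier in this thread:
$HIVE_HOME/bin/schematool -dbType derby -initSchema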
I found this solution here.

Hive derby issue

I recently installed hive-0.12.0, but when I run queries in the Hive shell it shows the error below:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
This is contained in my hive-default.xml.template:
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:derby:;databaseName=/home/hduser/hive-0.12.0/metastore/metastore_db;create=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
Could anyone help?
Seems like a problem with your metastore. Since you are using the default embedded Derby metastore, a lock file is left behind after an abnormal exit. If you remove that lock file, the issue should be resolved:
rm metastore_db/*.lck

FAILED: Error in metadata: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient

I shut down my HDFS client while HDFS and Hive instances were running. Now when I log back into Hive, I can't execute any of my DDL tasks, e.g. "show tables" or "describe tablename". It gives me the error below:
ERROR exec.Task (SessionState.java:printError(401)) - FAILED: Error in metadata: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
Can anybody suggest what I need to do to get my metastore_db instantiated without recreating the tables? Otherwise I have to duplicate the effort of creating the entire database/schema once again.
I have resolved the problem. These are the steps I followed:
1. Go to $HIVE_HOME/bin/metastore_db
2. Copy db.lck to db.lck1 and dbex.lck to dbex.lck1
3. Delete the lock entries from db.lck and dbex.lck
4. Log out from the hive shell as well as from all running instances of HDFS
5. Re-login to HDFS and the hive shell. If you run DDL commands, it may again give you the "Could not instantiate HiveMetaStoreClient" error
6. Now copy db.lck1 back to db.lck and dbex.lck1 back to dbex.lck
7. Log out from all hive shell and HDFS instances
8. Re-login and you should see your old tables
Note: Step 5 may seem a little weird because even after deleting the lock entries it will still give the HiveMetaStoreClient error, but it worked for me.
Advantage: You don't have to duplicate the effort of re-creating the entire database.
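For reference, steps 1-3 and 6 amount to roughly the following shell commands (path as given in this answer; adjust it to wherever your metastore_db actually lives):
$ cd $HIVE_HOME/bin/metastore_db
$ cp db.lck db.lck1 && cp dbex.lck dbex.lck1   # back up the lock files
$ : > db.lck                                   # empty the lock entries
$ : > dbex.lck
$ cp db.lck1 db.lck && cp dbex.lck1 dbex.lck   # later, restore the originals (step 6)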
Hope this helps somebody facing the same error. Please vote if you find it useful. Thanks in advance.
I was told that we generally get this exception when the Hive console is not terminated properly.
The fix:
Run the jps command, look for the "RunJar" process, and kill it with the kill -9 command.
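For example (the process id below is just a placeholder for whatever jps reports):
$ jps | grep RunJar
12345 RunJar
$ kill -9 12345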
See: getting error in hive
Have you copied the jar containing the JDBC driver for your metadata db into Hive's lib dir?
For instance, if you're using MySQL to hold your metadata db, you will need to copy
mysql-connector-java-5.1.22-bin.jar into $HIVE_HOME/lib.
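For example (adjust the connector version and the path to wherever you downloaded the jar):
$ cp mysql-connector-java-5.1.22-bin.jar $HIVE_HOME/lib/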
This fixed that same error for me.
I faced the same issue and resolved it by starting the metastore service. The service can end up stopped if your machine is rebooted or goes down. You can start the service by running the following commands:
Login as $HIVE_USER
nohup hive --service metastore > $HIVE_LOG_DIR/hive.out 2> $HIVE_LOG_DIR/hive.log &
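Once it is up, you can check that the metastore Thrift service is listening; 9083 is the default port, assuming hive.metastore.port has not been overridden:
netstat -an | grep 9083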
I had a similar problem with the Hive server and followed the steps below:
1. Go to $HIVE_HOME/bin/metastore_db
2. Copy db.lck to db.lck1 and dbex.lck to dbex.lck1
3. Delete the lock entries from db.lck and dbex.lck
4. Re-login from the hive shell. It works.
Thanks
For instance, I use MySQL to hold the metadata db. I copied
mysql-connector-java-5.1.22-bin.jar into the $HIVE_HOME/lib folder
and my error was resolved.
I was also facing the same problem and figured out that I had both hive-default.xml and hive-site.xml (created manually by me). I moved my hive-site.xml to hive-site.xml-template (as I did not need this file), then started Hive, and it worked fine.
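In other words, from the Hive conf directory, roughly:
$ mv hive-site.xml hive-site.xml-template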
Cheers,
Ajmal
I faced this issue while running the hive command from the command line. I resolved it by running the kinit command, as I was using kerberized Hive:
kinit -kt <your keytab file location> <kerberos principal>

Unable to instantiate HiveMetaStoreClient

I have a 3-node cluster running Hive. When I try to run some tests from outside the cluster, I get the error given below:
FAILED: Error in metadata: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
Logging initialized using configuration in file:/net/slc01nwj/scratch/ashsshar/view_storage/ashsshar_bda_latest_2/work/hive_scratch/conf/hive-log4j.properties
When I log into a cluster node and execute hive, it works fine:
hive> show databases ;
OK
default
The following error is generated in the test log files:
13/04/04 03:10:49 ERROR security.UserGroupInformation: PriviledgedActionException as:ashsshar {my username }(auth:SIMPLE) cause:java.io.IOException: javax.jdo.JDOFatalDataStoreException: Failed to create database '/var/lib/hive/metastore/metastore_db', see the next exception for details.
NestedThrowables:
java.sql.SQLException: Failed to create database '/var/lib/hive/metastore/metastore_db', see the next exception for details.
My hive-site.xml file contains this connection property:
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:derby:;databaseName=/var/lib/hive/metastore/metastore_db;create=true</value>
<description>JDBC connect string for a JDBC metastore</description>
I have changed /var/lib/hive/metastore/metastore_db on my cluster node, but I am still getting the same error. I have also tried removing all *.lck files from the above directory.
Does {username} have permission to create /var/lib/hive/metastore/metastore_db?
If it is a test cluster you could do:
sudo chmod -R 777 /var/lib/hive/metastore/metastore_db
or chown it to the user running it.
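For the chown option, a sketch (assuming the service runs as the hive user; substitute whichever user actually runs Hive on your cluster):
sudo chown -R hive:hive /var/lib/hive/metastore/metastore_db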
Try removing the $HADOOP_HOME/build folder. I had the same problem with hive-0.10.0 and later versions. Then I tried hive-0.9.0 and got a different set of errors. Luckily I found the thread "Hive doesn't work on install", tried the same trick, and it worked for me like magic. I am using the default Derby db.
This is a permissions issue on the hive folder. The following works well:
switch to the hive user (for me, hduser) and run
sudo chmod -R 777 hive
This issue occurs due to abrupt termination of the Hive shell, which leaves behind an unattended db.lck file.
To resolve this issue:
browse to your metastore_db location,
remove the tmp, dbex.lck and db.lck files, and
open the Hive shell again. It will work.
You will see the tmp, dbex.lck and db.lck files get created once again.
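A minimal sketch of those steps, assuming metastore_db sits in the directory you start Hive from:
$ cd metastore_db
$ rm -rf tmp db.lck dbex.lck
$ cd .. && hive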
It worked after I moved the metastore out of /var/lib/hive/. I did that by editing /etc/hive/conf.dist/hive-site.xml
from:
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:derby:;databaseName=/var/lib/hive/metastore/metastore_db;create=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
to:
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:derby:;databaseName=/home/prashant/hive/metastore/metastore_db;create=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
Please check whether you already have a metastore_db in your Hadoop directory; if so, remove it and format your HDFS again, and then try to start Hive.
Yes, it's a privilege problem. Enter your Hive shell with the following command:
sudo -u hdfs hive

java.sql.SQLException: Failed to start database 'metastore_db' ERROR, while initializing database using hive

I installed Hadoop and Hive on a 3-node cluster. I am able to log into Hive from the cluster node where Hive is running:
[root@NODE_3 hive]# hive
Logging initialized using configuration in jar:file:/usr/lib/hive/lib/hive-common-0.10.0-cdh4.2.0.jar!/hive-log4j.properties
Hive history file=/tmp/root/hive_job_log_root_201304020248_306369127.txt
hive> show tables ;
OK
Time taken: 1.459 seconds
hive>
But when I try to run some Hive tests on my cluster nodes, I get the error given below. Here it is trying to initialize the database as user ashsshar (my username):
13/04/02 02:32:44 INFO mapred.JobClient: Cleaning up the staging area hdfs://scaj-ns/user/ashsshar/.staging/job_201304020010_0080
13/04/02 02:32:44 ERROR security.UserGroupInformation: PriviledgedActionException as:ashsshar (auth:SIMPLE) cause:java.io.IOException: javax.jdo.JDOFatalDataStoreException: Failed to create database '/var/lib/hive/metastore/metastore_db', see the next exception for details.
NestedThrowables:
java.sql.SQLException: Failed to create database '/var/lib/hive/metastore/metastore_db', see the next exception for details.
java.io.IOException: javax.jdo.JDOFatalDataStoreException: Failed to create database '/var/lib/hive/metastore/metastore_db', see the next exception for details.
NestedThrowables:
java.sql.SQLException: Failed to create database '/var/lib/hive/metastore/metastore_db', see the next exception for details.
I have tried two things:
1. Giving permissions on /var/lib/hive/metastore/metastore_db
2. Removing /var/lib/hive/metastore/metastore_db/*.lck with rm
But I am still getting the same error.
It seems to be an issue with creating the metastore. I solved this by creating a directory and pointing the connection URL at it, as follows:
Step 1: Create a directory in your home, say hive-metastore-dir.
Step 2: As the super user, edit hive-site.xml (it is in /usr/lib/hive/conf), changing
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:derby:;databaseName=/var/lib/hive/metastore/metastore_db;create=true</value>
to
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:derby:;databaseName=/home/hive-metastore-dir/metastore/metastore_db;create=true</value>
Step 3: Start the CLI as sudo hive and perform your queries.
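Concretely, step 1 might look like this (the chown is an assumption so that the user who runs Hive can write there; the path matches the ConnectionURL above):
$ sudo mkdir -p /home/hive-metastore-dir
$ sudo chown $USER /home/hive-metastore-dir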
Also, log into the Hive client from a directory where the user has write access; by default, Hive tries to create temporary directories locally and in HDFS when a shell is opened.
Follow these steps if you are using CDH:
1. Copy /usr/lib/hive/conf/hive-site.xml into /usr/lib/spark/conf/
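For example:
sudo cp /usr/lib/hive/conf/hive-site.xml /usr/lib/spark/conf/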
This will solve the "metastore_db" error.
Thanks
