HiveException java.lang.RuntimeException - hadoop

I'm getting this exception when I try to show databases.
vallabh@vallabh:~$ hive
hive> show databases;
FAILED: HiveException java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
Following is my .bashrc file:
# Set HIVE_HOME
export HIVE_HOME=/opt/apache-hive-3.0.0-bin
export HIVE_CONF_DIR=/opt/apache-hive-3.0.0-bin/conf/
export PATH=$PATH:$HIVE_HOME/bin
export CLASSPATH=$CLASSPATH:/opt/hadoop-3.0.1/lib/*:.
export CLASSPATH=$CLASSPATH:/opt/apache-hive-3.0.0-bin/lib/*:
Following is my hive-env.sh file:
# Set HADOOP_HOME to point to a specific hadoop install directory
export HADOOP_HOME=/opt/hadoop-3.0.1
# Hive Configuration Directory can be controlled by:
export HIVE_CONF_DIR=/opt/apache-hive-3.0.0-bin/conf
Following is my hive-config.sh file:
export HADOOP_HOME=/opt/hadoop-3.0.1
Following is my hive-site.xml:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?><!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software-->
<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost/metastore?createDatabaseIfNotExist=true</value>
<description>
JDBC connect string for a JDBC metastore.
To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in
the connection URL.
For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hiveuser</value>
<description>Username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>password#123</value>
<description>password to use against metastore database</description>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>true</value>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>true</value>
</property>
<property>
<name>datanucleus.autoCreateTables</name>
<value>True</value>
</property>
</configuration>
What could be the possible cause of this error?
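A common cause of this particular error is a metastore schema that was never initialized, or that does not match the Hive version. As a sketch, assuming the metastore database and the hiveuser account from the hive-site.xml above already exist in MySQL, and that a MySQL connector jar is present in $HIVE_HOME/lib, initializing the schema with schematool often resolves it:
# Create the metastore tables for Hive 3.x in the MySQL database
$HIVE_HOME/bin/schematool -dbType mysql -initSchema
# Confirm the schema version that was installed
$HIVE_HOME/bin/schematool -dbType mysql -info
If schematool reports that a schema already exists, the mismatch is more likely between the schema version and the Hive 3.0.0 binaries, in which case -upgradeSchema applies instead.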

Related

How to map hive warehouse path in hive-site.xml?

I'm new to Ubuntu, and I'm trying to install Hive 3.0.0 on my system.
I'm following a tutorial on the internet and I came across these commands:
hdfs dfs -mkdir -p /user/hive/warehouse
hdfs dfs -mkdir /tmp
These commands are used to store metadata, but I can't find those directories in my filesystem.
So my question is: what do these commands do?
And how do I map that metadata in hive-site.xml?
Following is the hive-site.xml from my tutorial.
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?><!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software-->
<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost/metastore?createDatabaseIfNotExist=true</value>
<description>
JDBC connect string for a JDBC metastore.
To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in
the connection URL.
For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hiveuser</value>
<description>Username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>password#123</value>
<description>password to use against metastore database</description>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>true</value>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>true</value>
</property>
<property>
<name>datanucleus.autoCreateTables</name>
<value>True</value>
</property>
</configuration>
I'm using MySQL to store the metadata. Please explain the necessary changes, if any.
Thanks in advance!
These commands are used to store metadata
Not quite. That's the Hive warehouse directory on HDFS. This is where the raw data lives, not the metadata - that is entirely in MySQL.
HDFS is configured by the core-site.xml of the Hadoop client libraries required by Hive, not by the hive-site.xml file.
But I can't find those directories in my filesystem.
Probably because those are HDFS paths, not places on your local filesystem.
What do these commands do?
They create the default warehouse and temp scratch directories that Hive processes need for storing data.
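If the goal is to point Hive at a non-default warehouse path, the property that maps it is hive.metastore.warehouse.dir in hive-site.xml; a minimal sketch (the value shown is just the default):
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
<description>location of default database for the warehouse</description>
</property>
The tutorial's directories are conventionally created with group write permission so Hive jobs can use them:
hdfs dfs -mkdir -p /user/hive/warehouse
hdfs dfs -mkdir -p /tmp
hdfs dfs -chmod g+w /user/hive/warehouse
hdfs dfs -chmod g+w /tmp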

Unable to upgrade hive to 2.3.2 in cloudera VM

I upgraded the Hive version in the Cloudera VM to 2.3.2. It installed successfully, and I copied the hive-site.xml file from the older /hive/conf folder to the newer conf folder; there is no problem with the metastore. However, when I execute a query like drop table table_name, it throws the exception below:
FAILED: SemanticException Unable to fetch table table_name. Invalid method name: 'get_table_req'
Below is my hive-site.xml file:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<!-- Hive Configuration can either be stored in this file or in the hadoop configuration files -->
<!-- that are implied by Hadoop setup variables. -->
<!-- Aside from Hadoop setup variables - this file is provided as a convenience so that Hive -->
<!-- users do not have to edit hadoop configuration files (that may be managed as a centralized -->
<!-- resource). -->
<!-- Hive Execution Parameters -->
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://127.0.0.1/metastore?createDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>cloudera</value>
</property>
<property>
<name>hive.hwi.war.file</name>
<value>/usr/lib/hive/lib/hive-hwi-0.8.1-cdh4.0.0.jar</value>
<description>This is the WAR file with the jsp content for Hive Web Interface</description>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>true</value>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>false</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://127.0.0.1:9083</value>
<description>IP address (or fully-qualified domain name) and port of the metastore host</description>
</property>
</configuration>
Below are my bashrc variables:
#Setting hive variables
export HIVE_HOME="/usr/lib/apache-hive-2.3.2-bin"
export PATH="$HIVE_HOME/bin:$PATH"
NOTE: I am able to create tables, but when I execute any select query it fails and throws the above exception. Where am I going wrong? Do I need to copy any other file as well? Thanks in advance.
Check your metastore schema version and upgrade it so that it matches the new Hive version.
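As a sketch, assuming a MySQL-backed metastore like the one in the hive-site.xml above, schematool can report the current schema version and upgrade it in place:
# Show the schema version recorded in the metastore database
$HIVE_HOME/bin/schematool -dbType mysql -info
# Upgrade the schema to the version expected by the Hive 2.3.2 binaries
$HIVE_HOME/bin/schematool -dbType mysql -upgradeSchema
The 'get_table_req' message is the kind of Thrift-method mismatch seen when a newer client talks to an older metastore service, so the metastore service on port 9083 also needs to be running the 2.3.2 code, not just the CLI.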

Not able to start hive

I installed Hadoop on Windows and also set up Hive. When I start Hive using hive.cmd, I get the following error
16/12/28 18:14:05 WARN conf.HiveConf: HiveConf of name hive.server2.enable.impersonation does not exist
java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
It has not created the metastore_db folder in the hive\bin path.
I also tried using schematool to initialize the schema, but it gives me "'schematool' is not recognized as an internal or external command, operable program or batch file."
My environment variables are as follows :
HIVE_BIN_PATH : C:\hadoop-2.7.1.tar\apache-hive-2.1.1-bin\bin
HIVE_HOME : C:\hadoop-2.7.1.tar\apache-hive-2.1.1-bin
HIVE_LIB : C:\hadoop-2.7.1.tar\apache-hive-2.1.1-bin\lib
PATH : C:\hadoop-2.7.1.tar\hadoop-2.7.1\bin;C:\apache\db-derby-10.12.1.1-bin\bin;C:\hadoop-2.7.1.tar\apache-hive-2.1.1-bin\bin;
Here is my hive-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:derby://localhost:1527/metastore_db;create=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>org.apache.derby.jdbc.ClientDriver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>hive.server2.enable.impersonation</name>
<value>true</value>
<description>Enable user impersonation for HiveServer2</description>
</property>
<property>
<name>hive.server2.authentication</name>
<value>NONE</value>
</property>
<property>
<name>datanucleus.autoCreateTables</name>
<value>True</value>
</property>
<property>
<name>hive.metastore.schema.verification</name>
<value>true</value>
</property>
</configuration>
I have added derby.jar, derby-client.jar, and derbytools.jar, as well as slf4j-api-1.5.8.jar, to the hive\lib folder, but it still does not work. Any pointers on this one?
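One thing worth knowing: some Hive distributions ship schematool only as a bash script (or as bin\ext\schemaTool.cmd rather than a top-level command), which would explain the "not recognized" message. As a sketch, assuming the Derby network server from db-derby-10.12.1.1-bin is already running (startNetworkServer.bat), schematool can usually be reached through hive.cmd's --service switch:
:: Initialize the Derby-backed metastore schema defined in hive-site.xml
cd %HIVE_HOME%\bin
hive --service schematool -dbType derby -initSchema
Note also that with hive.metastore.schema.verification set to true, Hive refuses to start against an uninitialized schema, which matches the SessionHiveMetaStoreClient error above.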

Hadoop - Tables not displayed in Hive

The problem I am facing is:
Every time I log in to the Hive CLI, all the created databases and tables are gone. I can see them in the warehouse directory in the Hadoop GUI, but the same is not reflected in the CLI. Please help me resolve this issue.
I am using Hadoop 1.0.4 and Hive 1.2.1.
I have configured the warehouse dir, temp dir, and Derby metastore dir in hive-site.xml as per the documentation.
properties in hive-site.xml
<property>
<name>hive.exec.scratchdir</name>
<value>/tmp/hive</value>
<description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/<username> is created, with ${hive.scratch.dir.permission}.</description>
</property>
<property>
<name>hive.exec.local.scratchdir</name>
<value>/tmp/hadoop/hive</value>
<description>Local scratch space for Hive jobs</description>
</property>
<property>
<name>hive.downloaded.resources.dir</name>
<value>/tmp/hadoop/hive</value>
<description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
<name>hive.scratch.dir.permission</name>
<value>700</value>
<description>The permission for the user specific scratch directories that get created.</description>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
<description>location of default database for the warehouse</description>
</property>
<property>
<name>hive.metastore.uris</name>
<value/>
<description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
</property>
<property>
<name>hive.metastore.connect.retries</name>
<value>3</value>
<description>Number of retries while opening a connection to metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:derby:;databaseName=/usr/hadoop/metastore_db;create=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
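A quick way to check which metastore a CLI session actually resolved (a sketch; SET prints the effective value of any configuration property) is:
hive> SET javax.jdo.option.ConnectionURL;
With an embedded Derby URL, if databaseName is relative or the configured hive-site.xml is not on the CLI's classpath, metastore_db is created fresh in whatever directory hive was launched from, which produces exactly this "tables disappear between sessions" symptom.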

Impala cannot find com.mysql.jdbc.Driver

I'm trying to set up Cloudera Impala with CDH4 in pseudo distributed mode on Red Hat 5. I have Hive using JDBC to connect to a MySQL metastore, but I'm having trouble setting up Impala with JDBC. I've been following the instructions found here: http://www.cloudera.com/content/cloudera-content/cloudera-docs/Impala/latest/Installing-and-Using-Impala/ciiu_impala_jdbc.html
I've extracted the JARs to a directory and included that directory in $CLASSPATH. I've also included /usr/lib/hive/lib in $CLASSPATH, which has mysql-connector-java-5.1.25-bin.jar.
In both my Hive and Impala conf directories, I have hive-site.xml including the following properties:
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost/metastore</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hiveuser</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>password</value>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>false</value>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>true</value>
</property>
But when I run sudo service impala-server restart, the server log has this error:
ERROR common.MetaStoreClientPool: Error initializing Hive Meta Store client
javax.jdo.JDOFatalInternalException: Error creating transactional connection factory
Which it says is caused by this:
Caused by: org.datanucleus.store.rdbms.datasource.DatastoreDriverNotFoundException: The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
at org.datanucleus.store.rdbms.datasource.dbcp.DBCPDataSourceFactory.makePooledDataSource(DBCPDataSourceFactory.java:80)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.initDataSourceTx(ConnectionFactoryImpl.java:144)
... 57 more
Is there any step I'm missing to configure Impala with JDBC?
I fixed this by copying mysql-connector-java-5.1.25-bin.jar to /var/lib/impala - the startup script was telling the classpath to look there for the connector jar for some reason.
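In shell terms the fix amounts to something like this (a sketch; the jar name and paths are the ones from the question):
sudo cp /usr/lib/hive/lib/mysql-connector-java-5.1.25-bin.jar /var/lib/impala/
sudo service impala-server restart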
