I upgraded the Hive version in the Cloudera VM to 2.3.2. It installed successfully, and I copied the hive-site.xml file from the older /hive/conf folder to the newer conf folder; there is no problem with the metastore. However, when I execute a query like 'drop table table_name', it throws the exception below:
FAILED: SemanticException Unable to fetch table table_name. Invalid method name: 'get_table_req'
Below is my hive-site.xml file:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<!-- Hive Configuration can either be stored in this file or in the hadoop configuration files -->
<!-- that are implied by Hadoop setup variables. -->
<!-- Aside from Hadoop setup variables - this file is provided as a convenience so that Hive -->
<!-- users do not have to edit hadoop configuration files (that may be managed as a centralized -->
<!-- resource). -->
<!-- Hive Execution Parameters -->
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://127.0.0.1/metastore?createDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>cloudera</value>
</property>
<property>
<name>hive.hwi.war.file</name>
<value>/usr/lib/hive/lib/hive-hwi-0.8.1-cdh4.0.0.jar</value>
<description>This is the WAR file with the jsp content for Hive Web Interface</description>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>true</value>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>false</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://127.0.0.1:9083</value>
<description>IP address (or fully-qualified domain name) and port of the metastore host</description>
</property>
</configuration>
Below are my bashrc variables:
#Setting hive variables
export HIVE_HOME="/usr/lib/apache-hive-2.3.2-bin"
export PATH="$HIVE_HOME/bin:$PATH"
NOTE: I am able to create tables, but when I execute any select query it fails and throws the above exception. Where am I going wrong? Do I need to copy any other file as well? Thanks in advance.
Check your metastore version and upgrade the metastore schema to match the Hive release you are running; an "Invalid method name" error like this typically means a newer Hive client is talking to an older metastore that does not implement the get_table_req Thrift call.
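For example, Hive ships a schematool utility that can report and upgrade the metastore schema. A minimal sketch, assuming a MySQL-backed metastore as configured above and $HIVE_HOME/bin on the PATH:
# Compare the schema version recorded in the metastore with the version Hive expects
schematool -dbType mysql -info
# Upgrade the metastore schema in place (back up the metastore database first)
schematool -dbType mysql -upgradeSchema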
Related
I'm getting this exception when I try to show databases.
vallabh#vallabh:~$ hive
hive> show databases;
FAILED: HiveException java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
Following is my .bashrc file
# Set HIVE_HOME
export HIVE_HOME=/opt/apache-hive-3.0.0-bin
export HIVE_CONF_DIR=/opt/apache-hive-3.0.0-bin/conf/
export PATH=$PATH:$HIVE_HOME/bin
export CLASSPATH=$CLASSPATH:/opt/hadoop-3.0.1/lib/*:.
export CLASSPATH=$CLASSPATH:/opt/apache-hive-3.0.0-bin/lib/*:
Following is my hive-env.sh file
# Set HADOOP_HOME to point to a specific hadoop install directory
export HADOOP_HOME=/opt/hadoop-3.0.1
# Hive Configuration Directory can be controlled by:
export HIVE_CONF_DIR=/opt/apache-hive-3.0.0-bin/conf
Following is my hive-config.sh file
export HADOOP_HOME=/opt/hadoop-3.0.1
Following is my hive-site.xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="confguration.xsl"?><!--
Licensed to the Apache Software Foundation (ASF) under one or more
contri utor license agreements. See the NOTICE fle distri uted with
this work for additional information regarding copyright ownership.
The ASF licenses this fle to You under the Apache License, Version 2.0
(the "License"); you may not use this fle except in compliance with
the License. You may o tain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required y applica le law or agreed to in writing, software-->
<confguration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jd c:mysql://localhost/metastore?createDatabaseIfNotExist=true</value>
<description>
JDBC connect string for a JDBC metastore.
To use SSL to encrypt/authenticate the connection, provide data ase-specifc SSL fag in
the connection URL.
For example, jd c:postgresql://myhost/d ?ssl=true for postgres database.
</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hiveuser</value>
<description>Username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>password#123</value>
<description>password to use against metastore database</description>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>true</value>
</property>
<property>
<name>datanucleus.fxedDatastore</name>
<value>true</value>
</property>
<property>
<name>datanucleus.autoCreateTables</name>
<value>True</value>
</property>
</confguration>
What could be the possible cause of the error?
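One common cause of the SessionHiveMetaStoreClient failure is a metastore schema that was never initialized. As a hedged sketch, assuming the MySQL-backed metastore configured above and the MySQL connector jar available on Hive's classpath:
# Create the metastore schema in the MySQL 'metastore' database
schematool -dbType mysql -initSchema
# If the schema already exists, verify its version matches the Hive release
schematool -dbType mysql -info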
I installed Hadoop on Windows and also set up Hive. When I start Hive using hive.cmd, I get the following error:
16/12/28 18:14:05 WARN conf.HiveConf: HiveConf of name hive.server2.enable.impersonation does not exist
java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
It has not created the metastore_db folder in the hive\bin path.
I also tried using the schematool to initialize the schemas. But it gives me "'schematool' is not recognized as an internal or external command,
operable program or batch file."
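The schematool wrapper in Hive's bin directory is a bash script, so Windows does not recognize it as a command. A hedged workaround sketch, assuming hive.cmd itself launches correctly and forwards the service name (worth verifying on this build), is to invoke schematool as a Hive service:
REM Run schematool through the Hive launcher instead of calling the bash script
cd %HIVE_HOME%\bin
hive --service schematool -dbType derby -initSchema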
My environment variables are as follows:
HIVE_BIN_PATH : C:\hadoop-2.7.1.tar\apache-hive-2.1.1-bin\bin
HIVE_HOME : C:\hadoop-2.7.1.tar\apache-hive-2.1.1-bin
HIVE_LIB : C:\hadoop-2.7.1.tar\apache-hive-2.1.1-bin\lib
PATH : C:\hadoop-2.7.1.tar\hadoop-2.7.1\bin;C:\apache\db-derby-10.12.1.1-bin\bin;C:\hadoop-2.7.1.tar\apache-hive-2.1.1-bin\bin;
Here is my hive-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:derby://localhost:1527/metastore_db;create=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>org.apache.derby.jdbc.ClientDriver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>hive.server2.enable.impersonation</name>
<value>true</value>
<description>Enable user impersonation for HiveServer2</description>
</property>
<property>
<name>hive.server2.authentication</name>
<value>NONE</value>
</property>
<property>
<name>datanucleus.autoCreateTables</name>
<value>True</value>
</property>
<property>
<name>hive.metastore.schema.verification</name>
<value>true</value>
</property>
</configuration>
I have added the derby.jar, derby-client.jar and derbytools.jar to the hive\lib folder, along with slf4j-api-1.5.8.jar, but it still does not work. Any pointers on this one?
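Also worth noting: since the ConnectionURL above uses the jdbc:derby://localhost:1527 client form with org.apache.derby.jdbc.ClientDriver, the Derby Network Server must already be running when Hive starts. A minimal sketch, assuming the db-derby bin directory on the PATH as shown above:
REM Start the Derby Network Server on the default port 1527
startNetworkServer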
The problem I am facing is:
Every time I log in to the Hive CLI, all the created databases and tables are gone. I can see them in the warehouse directory in the Hadoop GUI, but the same is not reflected through the CLI. Please help me resolve the issue.
I am using Hadoop 1.0.4 and Hive 1.2.1.
I have configured the warehouse dir, temp dir, and Derby metastore dir in hive-site.xml as per the documentation.
properties in hive-site.xml
<property>
<name>hive.exec.scratchdir</name>
<value>/tmp/hive</value>
<description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/<username> is created, with ${hive.scratch.dir.permission}.</description>
</property>
<property>
<name>hive.exec.local.scratchdir</name>
<value>/tmp/hadoop/hive</value>
<description>Local scratch space for Hive jobs</description>
</property>
<property>
<name>hive.downloaded.resources.dir</name>
<value>/tmp/hadoop/hive</value>
<description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
<name>hive.scratch.dir.permission</name>
<value>700</value>
<description>The permission for the user specific scratch directories that get created.</description>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
<description>location of default database for the warehouse</description>
</property>
<property>
<name>hive.metastore.uris</name>
<value/>
<description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
</property>
<property>
<name>hive.metastore.connect.retries</name>
<value>3</value>
<description>Number of retries while opening a connection to metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:derby:;databaseName=/usr/hadoop/metastore_db;create=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
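With an embedded Derby metastore, databases typically "disappear" when Hive resolves a different metastore_db than the one written earlier, most often because hive-site.xml is not on the CLI's classpath and the relative default connection URL is used instead. A quick hedged check from inside the CLI prints the connection string actually in effect:
hive> set javax.jdo.option.ConnectionURL;
If this does not print the absolute /usr/hadoop/metastore_db path configured above, the CLI is reading a different (or no) hive-site.xml.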
I am facing a problem starting the Hive Web UI. Although the hive-hwi-0.11.0.war file did exist under /usr/local/hive-0.11.0/lib/, the same error message always appeared when I tried to start HWI:
...FATAL hwi.HWIServer: HWI WAR file not found at /usr/local/hive-0.11.0/usr/local/hive-0.11.0/lib/hive-hwi-0.11.0.war
It seemed that the $HIVE_HOME path was repeated twice when the .war file was being searched, regardless of how I set the value for hive.hwi.war.file.
Values that I have tried:
setup 1: ${HIVE_HOME}/lib/hive-hwi-0.11.0.war
setup 2: /usr/local/hive-0.11.0/lib/hive-hwi-0.11.0.war
setup 3: lib/hive-hwi-0.11.0.war
BTW, I set up all the Hive configuration in $HIVE_HOME/conf/hive-site.xml. Does anyone have a solution for this issue? Thanks!
Below is my hive-site.xml:
<configuration>
<property>
<name>hive.cli.print.current.db</name>
<value>true</value>
</property>
<property>
<name>hive.cli.print.header</name>
<value>true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://client2/metastore</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>MySQL JDBC driver class</description>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
<description>location of default database for the warehouse</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
<description>user name for connecting to mysql server </description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hadoop</value>
</property>
<property>
<name>hive.metastore.schema.verification</name>
<value>false</value>
</property>
<property>
<name>hive.server2.servermode</name>
<value>thrift</value>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>false</value>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>master1</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://client2:9083</value>
</property>
<property>
<name>hive.hwi.listen.host</name>
<value>10.19.209.100</value>
<description>This is the host address the Hive Web Interface will listen on</description>
</property>
<property>
<name>hive.hwi.listen.port</name>
<value>9999</value>
<description>This is the port the Hive Web Interface will listen on</description>
</property>
<property>
<name>hive.hwi.war.file</name>
<value>/usr/local/hive-0.11.0/lib/hive-hwi-0.11.0.war</value>
<description>This is the WAR file with the jsp content for Hive Web Interface</description>
</property>
</configuration>
It appears that you're setting $HIVE_HOME and then passing the full path in hive-site.xml, which produces the doubled path you see in your error output.
Try changing the hive-site.xml file to pass just the lib location, which gets appended to the already set $HIVE_HOME path variable, as follows:
<property>
<name>hive.hwi.war.file</name>
<value>/lib/hive-hwi-0.11.0.war</value>
<description>This is the WAR file with the jsp content for Hive Web Interface</description>
</property>
Then restart Hive and try the WebUI again.
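As a hedged usage check, Hive 0.11 ships HWI as a service of the hive launcher:
# Start the Hive Web Interface, then browse to http://<hive.hwi.listen.host>:9999/hwi
hive --service hwi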
Just to add to @apesa's answer: you might need to add two more properties along with what @apesa mentioned.
<property>
<name>hive.hwi.listen.host</name>
<value>0.0.0.0</value>
<description>This is the host address the Hive Web Interface will listen on</description>
</property>
<property>
<name>hive.hwi.listen.port</name>
<value>9999</value>
<description>This is the port the Hive Web Interface will listen on</description>
</property>
hive.hwi.listen.host and hive.hwi.listen.port are optional only if things are already working with the default values.
Hope this helps!
I'm trying to set up Cloudera Impala with CDH4 in pseudo distributed mode on Red Hat 5. I have Hive using JDBC to connect to a MySQL metastore, but I'm having trouble setting up Impala with JDBC. I've been following the instructions found here: http://www.cloudera.com/content/cloudera-content/cloudera-docs/Impala/latest/Installing-and-Using-Impala/ciiu_impala_jdbc.html
I've extracted the JARs to a directory and included that directory in $CLASSPATH. I've also included /usr/lib/hive/lib in $CLASSPATH, which has mysql-connector-java-5.1.25-bin.jar.
In both my Hive and Impala conf directories, I have hive-site.xml including the following properties:
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost/metastore</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hiveuser</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>password</value>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>false</value>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>true</value>
</property>
But when I run sudo service impala-server restart, the server log has this error:
ERROR common.MetaStoreClientPool: Error initializing Hive Meta Store client
javax.jdo.JDOFatalInternalException: Error creating transactional connection factory
Which it says is caused by:
Caused by: org.datanucleus.store.rdbms.datasource.DatastoreDriverNotFoundException: The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
at org.datanucleus.store.rdbms.datasource.dbcp.DBCPDataSourceFactory.makePooledDataSource(DBCPDataSourceFactory.java:80)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.initDataSourceTx(ConnectionFactoryImpl.java:144)
... 57 more
Is there any step I'm missing to configure Impala with JDBC?
I fixed this by copying mysql-connector-java-5.1.25-bin.jar to /var/lib/impala; for some reason, the startup script sets up the classpath to look there for the connector jar.
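A hedged sketch of that fix, assuming the connector jar is in /usr/lib/hive/lib as described above:
# Copy the MySQL connector to where the Impala startup script looks for it, then restart
sudo cp /usr/lib/hive/lib/mysql-connector-java-5.1.25-bin.jar /var/lib/impala/
sudo service impala-server restart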