I would like to use Hue as a visualization interface for Hive. HiveServer2 starts fine and I can work from the command line without problems.
My Hadoop is also functional (single node running on localhost). I managed to configure the HDFS settings for Hue and I can easily browse HDFS files with the Hue interface. But my big problem for weeks has been running a Hive query from Hue (even though I configured it according to what I found on the internet). I cannot do it and I am stuck on this.
Your help will be really appreciated.
This is my hive-site.xml:
<?xml version="1.0"?>
<configuration>
<property>
<name>hive.exec.local.scratchdir</name>
<value>/tmp/hive_temp</value>
<description>Local scratch space for Hive jobs</description>
</property>
<property>
<name>hive.execution.engine</name>
<value>mr</value>
<description>Expects one of [mr, tez, spark]. Chooses execution engine. Options are: mr (Map reduce, default)</description>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost/metastore?createDatabaseIfNotExist=true&amp;useSSL=false</value>
<description>metadata is stored in a MySQL server</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
<description>Username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hivepassword</value>
<description>password to use against metastore database</description>
</property>
<property>
<name>hive.exec.scratchdir</name>
<value>/tmp/hive_tmp</value>
<description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission</description>
</property>
<property>
<name>hive.exec.scratchdir</name>
<value>/user/hive/warehouse</value>
<description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission</description>
</property>
</configuration>
And this is the Hive configuration in Hue's pseudo-distributed.ini:
# Host where HiveServer2 is running.
# If Kerberos security is enabled, use fully-qualified domain name (FQDN).
hive_server_host=localhost
# Port where HiveServer2 Thrift server runs on.
hive_server_port=10002
# Hive configuration directory, where hive-site.xml is located
hive_conf_dir=/usr/local/hive/conf
This is the error that shows up:
Error in query: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Got exception: java.net.ConnectException Call From undefined.hostname.localhost/192.168.xx.xxx to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused);
I don't understand why it connects to localhost:9000; the host in my core-site.xml is hdfs://192.168.xx.xx:9000, so why is it going to localhost:9000?
Is that the default host?
Please make sure that hive-site.xml is present in your Spark config directory /etc/spark/conf/ and that the following settings are configured.
## core-site.xml
fs.defaultFS
## Hive config
hive.metastore.uris
In hive-site.xml, you can configure them as follows. Please fill in your own Hive metastore details.
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://ip-xx.xx.xx.xx:8020</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://ip-xx.xx.xx:9083</value>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost/metastore?createDatabaseIfNotExist=true</value>
<description>metadata is stored in a MySQL server</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>MySQL JDBC driver class</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
<description>user name for connecting to mysql server </description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>123456</value>
<description>password for connecting to mysql server </description>
</property>
</configuration>
Your error is related to HDFS, not Hive or Spark SQL.
You need to ensure that HADOOP_HOME or HADOOP_CONF_DIR is correctly set up in spark-env.sh if you would like to connect to the correct Hadoop environment rather than use the defaults.
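For example, a minimal spark-env.sh sketch, assuming Hadoop is installed under /usr/local/hadoop (adjust the paths to your own layout):
# point Spark at the Hadoop install and the config directory holding core-site.xml and hdfs-site.xml
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop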
I reset the metastore in MySQL. I was still using localhost in my core-site.xml at the time I initialized the metastore, so I reset the metastore and the problem was solved.
First, go to the MySQL command line and drop the database (metastore) which you set in your hive-site.xml.
Then, change directory to $HIVE_HOME/bin and execute schematool -initSchema -dbType mysql, and the problem is solved. The error was because the metastore in MySQL was stale (I had initialized the metastore in a standalone environment and later moved to the cluster environment, but the metastore was still the old one), so I could create tables in Hive but not in Spark SQL.
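For reference, the commands are roughly the following (a sketch; the database name, user, and the createDatabaseIfNotExist=true behaviour come from the hive-site.xml shown above, so adjust them to your own setup):
# drop the stale metastore database in MySQL
mysql -u hive -p -e "DROP DATABASE metastore;"
# recreate the schema; createDatabaseIfNotExist=true in the JDBC URL recreates the empty database
cd $HIVE_HOME/bin
./schematool -initSchema -dbType mysql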
Thanks to everyone who helped me. #Ravikumar, #cricket_007
I'm trying to install Hive on Windows. I have almost completed my install, but while starting the hive command I'm getting the below error.
Error applying authorization policy on hive configuration: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
Beeline version 2.1.1 by Apache Hive
Error applying authorization policy on hive configuration: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
Connection is already closed.
Here's my hive-site.xml
<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost:3306/metastore?createDatabaseIfNotExist=true</value>
<description>metadata is stored in a MySQL server</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>MySQL JDBC driver class</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hiveusera</value>
<description>user name for connecting to mysql server </description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hivepassword</value>
<description>password for connecting to mysql server </description>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://localhost:9083</value>
<description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
</property>
</configuration>
I'm using MySQL as the backend for my Hive metastore.
I tried every solution I could find: deleting the *.lck files, copying mysql-connector-java-5.1.42-bin.jar to the lib directory.
Nothing helps.
I got the answer. I was actually missing the schema for my metastore in MySQL. I have now added it and it works fine :)
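In case it helps anyone else, a sketch of creating the missing metastore schema with Hive's schematool (run from a bash-compatible shell; whether the plain schematool script works directly on Windows depends on your environment):
cd $HIVE_HOME/bin
# create the metastore tables in the MySQL database named in the connection URL
./schematool -dbType mysql -initSchema
# verify the schema version afterwards
./schematool -dbType mysql -info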
The problem I am facing is:
Every time I log in to the Hive CLI, all the created databases & tables are gone. I can see them in the warehouse directory in the Hadoop GUI; however, the same is not reflected through the CLI. Please help me resolve the issue.
I am using Hadoop 1.0.4 & Hive 1.2.1.
I have configured the warehouse dir, temp dir, and Derby metastore dir in hive-site.xml as per the documentation.
Properties in hive-site.xml:
<property>
<name>hive.exec.scratchdir</name>
<value>/tmp/hive</value>
<description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/<username> is created, with ${hive.scratch.dir.permission}.</description>
</property>
<property>
<name>hive.exec.local.scratchdir</name>
<value>/tmp/hadoop/hive</value>
<description>Local scratch space for Hive jobs</description>
</property>
<property>
<name>hive.downloaded.resources.dir</name>
<value>/tmp/hadoop/hive</value>
<description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
<name>hive.scratch.dir.permission</name>
<value>700</value>
<description>The permission for the user specific scratch directories that get created.</description>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
<description>location of default database for the warehouse</description>
</property>
<property>
<name>hive.metastore.uris</name>
<value/>
<description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
</property>
<property>
<name>hive.metastore.connect.retries</name>
<value>3</value>
<description>Number of retries while opening a connection to metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:derby:;databaseName=/usr/hadoop/metastore_db;create=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
When I am trying to start Hive or Spark I am getting this error.
16/07/13 16:55:12 ERROR Schema: Failed initialising database.
No suitable driver found for jdbc:;derby;databaseName=metastore_db;create=true
org.datanucleus.exceptions.NucleusDataStoreException: No suitable driver found for jdbc:;derby;databaseName=metastore_db;create=true
I am not able to resolve it. Can anyone help?
It looks like the Hive lib path is not set in the spark-env.sh file. Follow these steps (a sketch of the spark-env.sh change follows them):
Copy hive-site.xml from HIVE_HOME/conf to the SPARK_HOME/conf folder.
Add the Hive lib path to the classpath in SPARK_HOME/conf/spark-env.sh.
Restart the Spark cluster for everything to take effect.
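A sketch of that spark-env.sh change, assuming Hive is installed under /usr/local/hive (adjust the path; on newer Spark versions SPARK_CLASSPATH is deprecated in favour of spark.driver.extraClassPath and spark.executor.extraClassPath in spark-defaults.conf):
# make Hive's jars visible on Spark's classpath
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/usr/local/hive/lib/*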
In order to set up MySQL as the Hive metastore, hive-site.xml should have these properties set up:
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://MYSQL_HOST:3306/hive_{version}</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>XXXXXXXX</value>
<description>Username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>XXXXXXXX</value>
<description>Password to use against metastore database</description>
</property>
If this does not resolve the error, provide more info about the steps you followed to install/configure your environment.
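If the metastore database or user does not exist in MySQL yet, the matching MySQL-side setup is roughly the following (a sketch; the database name, user, and password are placeholders, not values from the post):
mysql -u root -p
mysql> CREATE DATABASE hive_210;
mysql> CREATE USER 'hiveuser'@'localhost' IDENTIFIED BY 'XXXXXXXX';
mysql> GRANT ALL PRIVILEGES ON hive_210.* TO 'hiveuser'@'localhost';
mysql> FLUSH PRIVILEGES;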