I'm trying to install Hive on Windows. My install is almost complete, but when starting the hive command I get the error below.
Error applying authorization policy on hive configuration: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
Beeline version 2.1.1 by Apache Hive
Error applying authorization policy on hive configuration: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
Connection is already closed.
Here's my hive-site.xml
<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost:3306/metastore?createDatabaseIfNotExist=true</value>
<description>metadata is stored in a MySQL server</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>MySQL JDBC driver class</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hiveusera</value>
<description>user name for connecting to mysql server </description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hivepassword</value>
<description>password for connecting to mysql server </description>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://localhost:9083</value>
<description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
</property>
</configuration>
I'm using MySQL as the backend for my Hive metastore.
I have tried every solution I could find: deleting the *.lck files and copying mysql-connector-java-5.1.42-bin.jar into the lib directory. Nothing helped.
I found the answer: I was missing the schema for my metastore in MySQL. After creating it, everything works fine :)
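For reference, the metastore schema can be initialized with Hive's schematool; a minimal sketch, assuming the Hive bin directory is on your PATH and hive-site.xml already points at the MySQL database:
schematool -dbType mysql -initSchema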
Related
This error information shows up:
Error in query: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Got exception: java.net.ConnectException Call From undefined.hostname.localhost/192.168.xx.xxx to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused);
I don't understand why it is connecting to localhost:9000; the host in my core-site.xml is hdfs://192.168.xx.xx:9000, so why is localhost:9000 being used?
Is that the default host?
Please make sure that hive-site.xml is present in your Spark config directory (/etc/spark/conf/) and that the Hive configuration settings are in place.
## core-site.xml
fs.defaultFS
## Hive config
hive.metastore.uris
In hive-site.xml, you can configure these properties as follows. Fill in your own Hive metastore details.
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://ip-xx.xx.xx.xx:8020</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://ip-xx.xx.xx:9083</value>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost/metastore?createDatabaseIfNotExist=true</value>
<description>metadata is stored in a MySQL server</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>MySQL JDBC driver class</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
<description>user name for connecting to mysql server </description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>123456</value>
<description>password for connecting to mysql server </description>
</property>
</configuration>
Your error is related to HDFS, not Hive or SparkSQL
You need to ensure that your HADOOP_HOME or HADOOP_CONF_DIR is correctly set up in spark-env.sh if you want to connect to the correct Hadoop environment rather than using the defaults.
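For example, a minimal sketch of the relevant spark-env.sh lines (the Hadoop install path here is an assumption; adjust it to your environment):
# in $SPARK_HOME/conf/spark-env.sh -- HADOOP_HOME path is an assumption
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop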
I reset the metastore in MySQL. I had used localhost in my core-site.xml at the time I initialized the metastore, so I reset it and the problem was solved.
First, go to the mysql command line and drop the database (metastore) that you set in your hive-site.xml.
Then change directory to $HIVE_HOME/bin and execute schematool -initSchema -dbType mysql, and the problem is solved. The error was because my metastore was out of date (I had initialized it in a standby environment); when I later moved to the cluster environment the old metastore was still in place, so I could create tables in Hive but not in SparkSQL.
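The sequence looks roughly like this (a sketch; metastore is whatever database name you configured in javax.jdo.option.ConnectionURL):
mysql> DROP DATABASE metastore;
cd $HIVE_HOME/bin
./schematool -initSchema -dbType mysql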
Thanks to those who helped me: @Ravikumar, @cricket_007
When I am trying to start Hive or Spark I am getting this error.
16/07/13 16:55:12 ERROR Schema: Failed initialising database.
No suitable driver found for jdbc:;derby;databaseName=metastore_db;create=true
org.datanucleus.exceptions.NucleusDataStoreException: No suitable driver found for jdbc:;derby;databaseName=metastore_db;create=true
I am not able to resolve it. Can anyone help?
It looks like the Hive lib path is not set in the spark-env.sh file. Follow these steps:
Copy hive-site.xml from HIVE_HOME/conf to SPARK_HOME/conf folder.
Add the Hive lib path to the classpath in SPARK_HOME/conf/spark-env.sh (see the sketch after these steps).
Restart the Spark cluster for everything to take effect.
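A rough sketch of the spark-env.sh change (the Hive install path is an assumption; note that SPARK_CLASSPATH is deprecated in newer Spark releases, where spark.driver.extraClassPath serves the same purpose):
# in $SPARK_HOME/conf/spark-env.sh -- /usr/local/hive is an assumed HIVE_HOME
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/usr/local/hive/lib/*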
In order to set up MySQL as the Hive metastore, hive-site.xml should have these properties set:
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://MYSQL_HOST:3306/hive_{version}</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>XXXXXXXX</value>
<description>Username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>XXXXXXXX</value>
<description>Password to use against metastore database</description>
</property>
If this does not resolve the error, provide more info about the steps you followed to install/configure your environment.
I am running Hive 2 on my Ubuntu machine and trying to create tables both through the Hive CLI and through Beeline/JDBC.
I have no problem creating tables through the Hive CLI, but when accessing through JDBC I get a permission denied error.
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Got exception: org.apache.hadoop.security.AccessControlException Permission denied: user=hive, access=WRITE, inode="/user/hive/warehouse/page_view":cto:supergroup:drwxrwxr-x
From the exception I see it is trying to create the table in a directory which does not exist (/user/hive/warehouse/...)
My hive-default.xml has the following configuration:
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/usr/local/apache-hive-1.2.1-bin/metastore_db</value>
<description>location of default database for the warehouse</description>
</property>
<property>
<name>hive.metastore.uris</name>
<value/>
<description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
</property>
<property>
<name>hive.metastore.connect.retries</name>
<value>3</value>
<description>Number of retries while opening a connection to metastore</description>
</property>
<property>
<name>hive.metastore.failure.retries</name>
<value>1</value>
<description>Number of retries upon failure of Thrift metastore calls</description>
</property>
<property>
<name>hive.metastore.client.connect.retry.delay</name>
<value>1s</value>
<description>
Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
Number of seconds for the client to wait between consecutive connection attempts
</description>
</property>
<property>
<name>hive.metastore.client.socket.timeout</name>
<value>600s</value>
<description>
Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
MetaStore Client socket timeout in seconds
</description>
</property>
<property>
<name>hive.metastore.client.socket.lifetime</name>
<value>0s</value>
<description>
Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
MetaStore Client socket lifetime in seconds. After this time is exceeded, client
reconnects on the next MetaStore operation. A value of 0s means the connection
has an infinite lifetime.
</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>mine</value>
<description>password to use against metastore database</description>
</property>
<property>
<name>hive.metastore.ds.connection.url.hook</name>
<value/>
<description>Name of the hook to use for retrieving the JDO connection URL. If empty, the value in javax.jdo.option.ConnectionURL is used</description>
</property>
<property>
<name>javax.jdo.option.Multithreaded</name>
<value>true</value>
<description>Set this to true if multiple threads access metastore through JDO concurrently.</description>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:derby:;databaseName=/usr/local/apache-hive-1.2.1-bin/metastore_db;create=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
So why is it trying to create the metastore_db under /user/hive/warehouse?
My problem was that I did not rename hive-default.xml.template to hive-site.xml as I should have.
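For anyone else hitting this, the rename is usually just a copy; a sketch, assuming HIVE_HOME points at your Hive install:
cp $HIVE_HOME/conf/hive-default.xml.template $HIVE_HOME/conf/hive-site.xml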
I'm trying to set up Cloudera Impala with CDH4 in pseudo distributed mode on Red Hat 5. I have Hive using JDBC to connect to a MySQL metastore, but I'm having trouble setting up Impala with JDBC. I've been following the instructions found here: http://www.cloudera.com/content/cloudera-content/cloudera-docs/Impala/latest/Installing-and-Using-Impala/ciiu_impala_jdbc.html
I've extracted the JARs to a directory and included that directory in $CLASSPATH. I've also included /usr/lib/hive/lib in $CLASSPATH, which has mysql-connector-java-5.1.25-bin.jar.
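Roughly, the CLASSPATH setup looks like this (the extraction directory is a placeholder for wherever the Impala JDBC JARs were unpacked):
export CLASSPATH=$CLASSPATH:/path/to/impala-jdbc-jars/*:/usr/lib/hive/lib/*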
In both my Hive and Impala conf directories, I have hive-site.xml including the following properties:
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost/metastore</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hiveuser</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>password</value>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>false</value>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>true</value>
</property>
But when I run sudo service impala-server restart, the server log has this error:
ERROR common.MetaStoreClientPool: Error initializing Hive Meta Store client
javax.jdo.JDOFatalInternalException: Error creating transactional connection factory
Which it says is caused by this:
Caused by: org.datanucleus.store.rdbms.datasource.DatastoreDriverNotFoundException: The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
at org.datanucleus.store.rdbms.datasource.dbcp.DBCPDataSourceFactory.makePooledDataSource(DBCPDataSourceFactory.java:80)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.initDataSourceTx(ConnectionFactoryImpl.java:144)
... 57 more
Is there any step I'm missing to configure Impala with JDBC?
I fixed this by copying mysql-connector-java-5.1.25-bin.jar to /var/lib/impala - the startup script was pointing the classpath at that directory for the connector jar, for some reason.
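The copy itself was simply (the source path assumes the connector jar lives under /usr/lib/hive/lib, as in my setup):
sudo cp /usr/lib/hive/lib/mysql-connector-java-5.1.25-bin.jar /var/lib/impala/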
I am using hive-0.9.0 with mysql as a metastore.
I am getting this exception:
hive> show tables;
FAILED: Error in metadata: org.apache.thrift.transport.TTransportException:java.net.SocketTimeoutException:Read timed out
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
Error in metadata: MetaException(message:Could not connect to meta store using any of the URIs provided)
Any pointers would be helpful.
Regards
Neeraj
Did you configure the URL and credentials for the metastore correctly? Did you try restarting your metastore service with
hive --service metastore
Please check your Hive config in ${HIVE_HOME}/conf/hive-site.xml.
Hive MySQL configuration example:
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hive</value>
</property>
Go to hive/hcatalog/sbin and run:
./hcat_server.sh
This should help.