Interpreter hive not found in Zeppelin's JDBC interpreter - hadoop

I have installed Zeppelin on my CentOS system, but it is not listing Hive under the JDBC interpreter.
Hive is installed on the same system; the Hive metastore and HiveServer2 are running, and HIVE_HOME and HADOOP_HOME are set correctly.
Error in the Zeppelin editor:
paragraph_1490339323949_-1789938581's Interpreter hive not found
Error in the Zeppelin log files:
ERROR [2017-03-24 15:56:18,913] ({qtp1566723494-18} NotebookServer.java[afterStatusChange]:2018) - Error
org.apache.zeppelin.interpreter.InterpreterException: paragraph_1490346145929_-1782899327's Interpreter hive not found
at org.apache.zeppelin.notebook.Note.run(Note.java:572)
at org.apache.zeppelin.socket.NotebookServer.persistAndExecuteSingleParagraph(NotebookServer.java:1626)
at org.apache.zeppelin.socket.NotebookServer.runParagraph(NotebookServer.java:1600)
at org.apache.zeppelin.socket.NotebookServer.onMessage(NotebookServer.java:263)
at org.apache.zeppelin.socket.NotebookSocket.onWebSocketText(NotebookSocket.java:59)
at org.eclipse.jetty.websocket.common.events.JettyListenerEventDriver.onTextMessage(JettyListenerEventDriver.java:128)
at org.eclipse.jetty.websocket.common.message.SimpleTextMessage.messageComplete(SimpleTextMessage.java:69)
at org.eclipse.jetty.websocket.common.events.AbstractEventDriver.appendMessage(AbstractEventDriver.java:65)
at org.eclipse.jetty.websocket.common.events.JettyListenerEventDriver.onTextFrame(JettyListenerEventDriver.java:122)
at org.eclipse.jetty.websocket.common.events.AbstractEventDriver.incomingFrame(AbstractEventDriver.java:161)
at org.eclipse.jetty.websocket.common.WebSocketSession.incomingFrame(WebSocketSession.java:309)
at org.eclipse.jetty.websocket.common.extensions.ExtensionStack.incomingFrame(ExtensionStack.java:214)
at org.eclipse.jetty.websocket.common.Parser.notifyFrame(Parser.java:220)
at org.eclipse.jetty.websocket.common.Parser.parse(Parser.java:258)
at org.eclipse.jetty.websocket.common.io.AbstractWebSocketConnection.readParse(AbstractWebSocketConnection.java:632)
at org.eclipse.jetty.websocket.common.io.AbstractWebSocketConnection.onFillable(AbstractWebSocketConnection.java:480)
at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)
Any help will be appreciated.
Thanks.

You can resolve the above issue by:
1) setting the properties described at zeppelin.apache.org/docs/0.7.0/interpreter/hive.html on the JDBC interpreter, and
2) using %jdbc as the interpreter, e.g.:
%jdbc select current_date
Hope this helps!
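For reference, a minimal set of JDBC interpreter properties for Hive, following the Zeppelin 0.7 docs (the host, port, and credentials here are assumptions; adjust them for your cluster):
hive.driver     org.apache.hive.jdbc.HiveDriver
hive.url        jdbc:hive2://localhost:10000
hive.user       hiveuser
hive.password   hivepassword
With a property prefix like hive configured this way, a paragraph can select that connection explicitly, e.g. %jdbc(hive) show tables;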

Related

Failed to initialize schema for HiveServer2 in Apache Hive 3.0.0 on Cygwin (Windows 10)

I already had a Hadoop 3.0.0 cluster consisting of 2 machines: 1 namenode + resource manager and 1 datanode. I tried to install Apache Hive 3.0.0 by following this document.
When I ran schematool -dbType derby -initSchema --verbose on Cygwin, the following exception was thrown:
$ schematool -dbType derby -initSchema --verbose
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/C:/BigSol/apache-hive-3.0.0-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/BigSol/hadoop-3.0.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL: jdbc:derby:;databaseName=metastore_db;create=true
Metastore Connection Driver : org.apache.derby.jdbc.EmbeddedDriver
Metastore connection User: APP
Starting metastore schema initialization to 3.0.0
org.apache.hadoop.hive.metastore.HiveMetaException: Unknown version specified for initialization: 3.0.0
org.apache.hadoop.hive.metastore.HiveMetaException: Unknown version specified for initialization: 3.0.0
at org.apache.hadoop.hive.metastore.MetaStoreSchemaInfo.generateInitFileName(MetaStoreSchemaInfo.java:137)
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:580)
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:562)
at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:1445)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
*** schemaTool failed ***
Looking at the line of code that threw the exception, I found that Hive tries to find a SQL schema located at %HIVE_HOME%\scripts\metastore\upgrade\derby\hive-schema-3.0.0.derby.sql.
I suspect that Cygwin messed up the path so that Hive could not find that schema.
My questions:
How can I correct the path (or fix the problem)?
Are there batch files equivalent to the *.sh files in the %HIVE_HOME%\bin directory, as Hive 2.1.1 has?
I found the solution. After running schematool on a Linux machine and copying the metastore_db directory to the Windows machine, I managed to start HiveServer2, but the Beeline CLI said that the jar C:\cygdrive\c\BigSol\apache-hive-3.0.0-bin\lib\hive-beeline-3.1.0.jar was not found.
It turned out that Java under Cygwin parses that path incorrectly. I made a symbolic link from C:\cygdrive\c to C:\ and it worked, as sketched below.
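A sketch of that symlink workaround, run from an elevated Windows command prompt (the directory names simply mirror the mis-parsed path in the error above):
mkdir C:\cygdrive
mklink /D C:\cygdrive\c C:\
After this, the mis-parsed path C:\cygdrive\c\BigSol\... resolves to the real location C:\BigSol\..., so the Hive jars are found.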

Why does Hive return FAILED: SemanticException...Unable To Instantiate

I have installed Hive, added it to PATH and am able to open it using the hive command in Terminal.
However, when I attempt to run a basic command such as
SHOW TABLES;
I am presented with the error:
FAILED: SemanticException org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
The instructions I am following do not suggest that anything has to be instantiated.
For reference, I am using the book Hadoop: The Definitive Guide (4th Edition) and running it locally on my machine.
When I run jps, the following services are listed:
2528 DataNode
7232 RunJar
2441 NameNode
7401 Jps
2634 SecondaryNameNode
282
2842 NodeManager
2751 ResourceManager
I fixed it by removing the Derby database files (the embedded Derby metastore lives in a metastore_db directory created in whatever working directory Hive was first started from):
rm -rf $HIVE_HOME/bin/metastore_db
and then re-initializing the schema:
$HIVE_HOME/bin/schematool -initSchema -dbType derby
I was able to resolve this problem by initializing the schema. I am surprised it is not mentioned anywhere.
To initialize the schema:
Navigate to your Hive installation folder and run:
[install folder]/bin/schematool -initSchema -dbType derby
You should then see messages confirming the initialization:
Metastore Connection Driver : org.apache.derby.jdbc.EmbeddedDriver
Metastore connection User: APP
Starting metastore schema initialization to 2.3.0
Initialization script hive-schema-2.3.0.derby.sql
Initialization script completed
schemaTool completed
Start Hive
Run a basic command such as SHOW TABLES; to confirm that Hive is functioning
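As a quick smoke test after initialization (smoke_test is a hypothetical table name):
hive -e "CREATE TABLE smoke_test (id INT); SHOW TABLES; DROP TABLE smoke_test;"
If the metastore schema is healthy, all three statements complete without the SemanticException above.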

Specify a valid path to the correct hive jars using $HIVE_METASTORE_JARS or change spark.sql.hive.metastore.version to 1.2.1

When I try to run spark-submit on a jar that uses HiveContext, I get the error below.
spark-defaults.conf contains:
spark.sql.hive.metastore.version 0.14.0
spark.sql.hive.metastore.jars ----/external_jars/hive-metastore-0.14.0.jar
#spark.sql.hive.metastore.jars maven
I would like to use Hive metastore version 0.14.0. Spark and Hadoop are on different clusters.
Can anyone help me resolve this?
16/09/19 16:52:24 INFO HiveContext: default warehouse location is /apps/hive/warehouse
Exception in thread "main" java.lang.IllegalArgumentException: Builtin jars can only be used when hive execution version == hive metastore version. Execution: 1.2.1 != Metastore: 0.14.0.
Specify a vaild path to the correct hive jars using $HIVE_METASTORE_JARS or change spark.sql.hive.metastore.version to 1.2.1.
at org.apache.spark.sql.hive.HiveContext.metadataHive$lzycompute(HiveContext.scala:254)
at org.apache.spark.sql.hive.HiveContext.metadataHive(HiveContext.scala:237)
at org.apache.spark.sql.hive.HiveContext.setConf(HiveContext.scala:441)
at org.apache.spark.sql.SQLContext$$anonfun$4.apply(SQLContext.scala:272)
at org.apache.spark.sql.SQLContext$$anonfun$4.apply(SQLContext.scala:271)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.IterableLike$cla
Try setting the filesystem implementations explicitly on the Hadoop configuration in your Spark code:
import org.apache.hadoop.conf.Configuration

// Pin the hdfs:// and file:// filesystem implementations explicitly
// (assuming `spark` here is the SparkContext)
val hadoopConfig: Configuration = spark.hadoopConfiguration
hadoopConfig.set("fs.hdfs.impl", classOf[org.apache.hadoop.hdfs.DistributedFileSystem].getName)
hadoopConfig.set("fs.file.impl", classOf[org.apache.hadoop.fs.LocalFileSystem].getName)
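For the original error, note that spark.sql.hive.metastore.jars must be a JVM-style classpath containing all of the Hive 0.14 jars and their dependencies (including the matching Hadoop client jars), not a single hive-metastore jar. A minimal sketch, assuming all of those jars have been collected under /external_jars:
spark.sql.hive.metastore.version 0.14.0
spark.sql.hive.metastore.jars    /external_jars/*
Alternatively, setting spark.sql.hive.metastore.jars to maven (the commented-out line above) lets Spark download a consistent set of jars itself.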

Hive Internal Error: java.lang.ClassNotFoundException(org.apache.atlas.hive.hook.HiveHook)

I am running a Hive query through Oozie using Hue.
I am creating a table through a Hue-Oozie workflow.
My job is failing, but when I check in Hive, the table has been created.
The log shows the error below:
16157 [main] INFO org.apache.hadoop.hive.ql.hooks.ATSHook - Created ATS Hook
2015-09-24 11:05:35,801 INFO [main] hooks.ATSHook (ATSHook.java:<init>(84)) - Created ATS Hook
16159 [main] ERROR org.apache.hadoop.hive.ql.Driver - hive.exec.post.hooks Class not found:org.apache.atlas.hive.hook.HiveHook
2015-09-24 11:05:35,803 ERROR [main] ql.Driver (SessionState.java:printError(960)) - hive.exec.post.hooks Class not found:org.apache.atlas.hive.hook.HiveHook
16159 [main] ERROR org.apache.hadoop.hive.ql.Driver - FAILED: Hive Internal Error: java.lang.ClassNotFoundException(org.apache.atlas.hive.hook.HiveHook)
java.lang.ClassNotFoundException: org.apache.atlas.hive.hook.HiveHook
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
I am not able to identify the issue.
I am using HDP 2.3.1.
This error is caused by a missing Atlas jar in the Oozie share lib.
In HDP, the Atlas jars are available under /usr/hdp/2.3.0.0-2557/atlas/.
Put all the Atlas-related jars into the Oozie share lib:
hadoop fs -put /usr/hdp/2.3.0.0-2557/atlas/hook/hive/* /user/oozie/share/lib/lib200344/hive
Add export HIVE_AUX_JARS_PATH=<atlas package>/hook/hive to hive-env.sh.
Copy <atlas package>/conf/application.properties to the Hive conf directory.
Restart the Oozie services. This should solve the problem. If anybody still faces the problem, please comment here so that I can help.
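As an alternative to a full restart after copying the jars, the Oozie CLI can refresh the share lib in place; a hedged sketch (the Oozie URL is an assumption for your cluster):
oozie admin -oozie http://localhost:11000/oozie -sharelibupdate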
[Comment by Immo Huneke: when using the Hortonworks sandbox VM, I found that just putting the jar files in the share/lib folder under HDFS was enough to resolve the problem. I didn't have to update hive-env.sh or copy the application.properties file. But check the exact path of your share/lib folder by executing the command hdfs dfs -ls /user/oozie/share/lib before copying.]
hive> ADD JAR /usr/hdp//atlas/hook/hive/hive-bridge-${VERSION}.jar;
It should work after that.
Hope this helps.
It seems you have a class-not-found exception.
Have you installed the Oozie share lib? If yes, please update all the Hive-dependent jars in the share lib location and check the status.
Also check that the Hive client is available on all the nodes in the cluster and is running.
I tried every possible solution mentioned in this forum and on Stack Overflow, but none of them resolved my issue.
Finally, I resolved it by copying all the jars in /hook/hive to a lib folder (a new folder created at the job.properties level) of my Oozie workflow, as sketched below.
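A sketch of that layout (the workflow path is a hypothetical example; oozie.wf.application.path in job.properties would point at it):
hdfs dfs -mkdir -p /user/me/my-workflow/lib
hdfs dfs -put /usr/hdp/2.3.0.0-2557/atlas/hook/hive/* /user/me/my-workflow/lib/
Oozie automatically adds any jars found in the workflow's lib directory to the job's classpath.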

Integrating Pig with Hbase

I have installed hadoop-2.5.0, Pig 0.13.0 and HBase 0.98.6.1 on Linux. When I try to run a simple Pig script, this error occurs:
2014-10-14 16:01:54,891 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 2998: Unhandled internal error. org.apache.hadoop.hbase.util.Bytes.equals([BLjava/nio/ByteBuffer;)Z
Details at logfile: /home/labuser/pig_1413279561970.log
Pasted the log below...
Pig Stack Trace
ERROR 2998: Unhandled internal error. org.apache.hadoop.hbase.util.Bytes.equals([BLjava/nio/ByteBuffer;)Z
java.lang.NoSuchMethodError: org.apache.hadoop.hbase.util.Bytes.equals([BLjava/nio/ByteBuffer;)Z
at org.apache.hadoop.hbase.TableName.<init>(TableName.java:281)
at org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:344)
at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:382)
at org.apache.hadoop.hbase.TableName.<clinit>(TableName.java:82)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:190)
It seems that HBase 0.98.6.1 is not supported by Pig 0.13.0.
So how do I make it work? Or which version of HBase does support Pig 0.13.0?
The root cause has been identified as https://issues.apache.org/jira/browse/HBASE-6658, where it says the class "org.apache.hadoop.hbase.filter.WritableByteArrayComparable" was renamed.
You may need to re-compile Pig against the HBase profile you are using.
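A commonly suggested workaround (an assumption on my part, not part of the answer above) is to expose the installed HBase jars to Pig via PIG_CLASSPATH so that the classes available at runtime match the installed HBase:
export PIG_CLASSPATH="$(hbase classpath):$PIG_CLASSPATH"
pig yourscript.pig
The hbase classpath command prints the full classpath of the installed HBase, so this picks up the 0.98.6.1 jars.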
