Error while adding UDF in hive - hadoop

I have to add a UDF in Hive.
The query I am trying is:
create function strip1 as 'com.hadoopbook.hive.Strip' using jar '/home/hduser/Hadoop-tutorial/hadoop-book-master/ch17-hive/src/main/java/com/hadoopbook/hive/Strip.jar'
But I am getting an exception:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.FunctionTask. Hive warehouse is non-local, but /home/hduser/Hadoop-tutorial/hadoop-book-master/ch17-hive/src/main/java/com/hadoopbook/hive/Strip.jar specifies file on local filesystem. Resources on non-local warehouse should specify a non-local scheme/path
Can anyone tell me how to solve this?

Three options:
Copy the jar to HDFS and use that path (see the sketch after these options).
OR
As the error is telling you: in the $HIVE_HOME/conf directory, hive-default.xml and/or hive-site.xml contains the hive.metastore.warehouse.dir property. Prefix hdfs:// to that path, and restart/re-run the Hive shell/script:
<property>
<name>hive.metastore.warehouse.dir</name>
<value>hdfs://user/hive/warehouse</value>
<description>location of the warehouse directory</description>
</property>
OR
If you are running Hive queries from the Hive shell, then:
hive> set hive.metastore.warehouse.dir;
hive.metastore.warehouse.dir=/user/hive/warehouse
The above command prints the path; just prefix hdfs:// to it as below and then re-run your Hive command(s):
hive> set hive.metastore.warehouse.dir=hdfs://user/hive/warehouse;
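For option 1, a minimal sketch (the HDFS target directory and the namenode host are placeholders; adjust them for your cluster):
$ hdfs dfs -mkdir -p /user/hduser/udfs
$ hdfs dfs -put /home/hduser/Hadoop-tutorial/hadoop-book-master/ch17-hive/src/main/java/com/hadoopbook/hive/Strip.jar /user/hduser/udfs/
hive> create function strip1 as 'com.hadoopbook.hive.Strip' using jar 'hdfs://namenode/user/hduser/udfs/Strip.jar';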

You could set the configuration property hive.aux.jars.path to /home/hduser/Hadoop-tutorial/hadoop-book-master/ch17-hive/src/main/java/com/hadoopbook/hive/
and then create the Hive UDF function with the command below:
create function strip1 as 'com.hadoopbook.hive.Strip'
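One way to do that, as a sketch (this assumes the Hive CLI's --auxpath flag, which corresponds to hive.aux.jars.path, is available in your Hive version; point it at the jar itself):
$ hive --auxpath /home/hduser/Hadoop-tutorial/hadoop-book-master/ch17-hive/src/main/java/com/hadoopbook/hive/Strip.jar
hive> create function strip1 as 'com.hadoopbook.hive.Strip';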

You can first add the UDF jar from an HDFS location instead of the local directory:
hive> add jar hdfs://user/cloudera/hive/udf/Strip.jar;
and then create the Hive function as below:
hive> create function test_function as 'com.hadoopbook.hive.Strip';
Hope this helps :)

Related

how hive is running without hive-site.xml file?

I am trying to set up Hive on my local machine. I started all the Hadoop processes and set up the {hive}/bin path. On the command prompt I can run Hive commands, and create and read tables. My questions are:
1) Is hive-site.xml an optional file?
2) In the absence of the hive-site.xml file, how does Hive get information regarding the metastore and other configuration?
If you're running Hive queries from your local machine, which has Hadoop installed, hive-site.xml is not needed: you are invoking hive/bin from the Hive installation directory directly, so you don't need to tell Hive where to find Hive. Without hive-site.xml, Hive falls back to the built-in defaults (hive-default.xml), which include an embedded Derby metastore created as metastore_db in the current working directory.
If you wanted to run Hive commands from another machine while interacting with the Hive instance on your local machine, you'd need hive-site.xml.
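As a quick check of what Hive actually falls back to when no hive-site.xml is present (a sketch; the exact defaults can vary by Hive version), you can print the relevant properties from the Hive shell:
hive> set javax.jdo.option.ConnectionURL;
javax.jdo.option.ConnectionURL=jdbc:derby:;databaseName=metastore_db;create=true
hive> set hive.metastore.warehouse.dir;
hive.metastore.warehouse.dir=/user/hive/warehouse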

Hive not storing Warehouse in HDFS

I downloaded a Hive installation onto my local system and copied hive-site.xml into the Spark conf directory. I tried to create a managed table in a Hive context using the Spark shell.
I have put the following property in hive-site.xml (present in Spark's conf directory):
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
</property>
I have also set HADOOP_CONF_DIR in spark-env.sh:
export HADOOP_CONF_DIR=/opt/hadoop/conf
As per the Hive documentation, the Hive warehouse should be stored in HDFS, but the warehouse is being stored on the local drive (/user/hive/warehouse).
Please help me understand why Hive is not storing the warehouse directory in HDFS.
Define your Spark dependency using version 2.0.2:
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.0.2"
You can then use hive.metastore.warehouse.dir or spark.sql.warehouse.dir to set the Spark warehouse and point it to HDFS, where the other Hive tables live.
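For example, a sketch of pointing the warehouse at HDFS when launching the shell (the NameNode host and port are placeholders for your cluster):
$ spark-shell --conf spark.sql.warehouse.dir=hdfs://namenode:8020/user/hive/warehouse
Alternatively, give hive.metastore.warehouse.dir in Spark's hive-site.xml a fully qualified hdfs:// URI instead of a bare path.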

Hive not fully honoring fs.default.name/fs.defaultFS value in core-site.xml

I have the NameNode service installed on a machine called hadoop.
The core-site.xml file has the fs.defaultFS (equivalent to fs.default.name) set to the following:
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop:8020</value>
</property>
I have a very simple table called test_table that currently exists in the Hive server on the HDFS. That is, it is stored under /user/hive/warehouse/test_table. It was created using a very simple command in Hive:
CREATE TABLE test_table (record_id INT);
If I attempt to load data into the table locally (that is, using LOAD DATA LOCAL), everything proceeds as expected. However, if the data is stored on the HDFS and I want to load from there, an issue occurs.
I run a very simple query to attempt this load:
hive> LOAD DATA INPATH '/user/haduser/test_table.csv' INTO TABLE test_table;
Doing so leads to the following error:
FAILED: SemanticException [Error 10028]: Line 1:17 Path is not legal ''/user/haduser/test_table.csv'':
Move from: hdfs://hadoop:8020/user/haduser/test_table.csv to: hdfs://localhost:8020/user/hive/warehouse/test_table is not valid.
Please check that values for params "default.fs.name" and "hive.metastore.warehouse.dir" do not conflict.
As the error states, it is attempting to move from hdfs://hadoop:8020/user/haduser/test_table.csv to hdfs://localhost:8020/user/hive/warehouse/test_table. The first path is correct because it references hadoop:8020; the second path is incorrect, because it references localhost:8020.
The core-site.xml file clearly states to use hdfs://hadoop:8020. The hive.metastore.warehouse.dir value in hive-site.xml correctly points to /user/hive/warehouse. Thus, I doubt this error message has any true value.
How can I get the Hive server to use the correct NameNode address when creating tables?
I found that the Hive metastore tracks the location of each table. You can see that location by running the following in the Hive console:
hive> DESCRIBE EXTENDED test_table;
Thus, this issue occurs if the NameNode in core-site.xml was changed while the metastore service was still running. To resolve it, restart the service on that machine:
$ sudo service hive-metastore restart
The metastore will then use the new fs.defaultFS for newly created tables.
Already Existing Tables
The location for tables that already exist can be corrected by running the following set of commands. These were obtained from the Cloudera documentation for configuring the Hive metastore for High Availability.
$ /usr/lib/hive/bin/metatool -listFSRoot
...
Listing FS Roots..
hdfs://localhost:8020/user/hive/warehouse
hdfs://localhost:8020/user/hive/warehouse/test.db
Correcting the NameNode location:
$ /usr/lib/hive/bin/metatool -updateLocation hdfs://hadoop:8020 hdfs://localhost:8020
Now the listed NameNode is correct.
$ /usr/lib/hive/bin/metatool -listFSRoot
...
Listing FS Roots..
hdfs://hadoop:8020/user/hive/warehouse
hdfs://hadoop:8020/user/hive/warehouse/test.db

Hive doesn't show tables when started from another directory

I installed Hive (CDH4) on RHEL. Whenever I start Hive from a directory, it creates a metastore_db dir and a derby.log file in it. Is this normal behaviour? Moreover, when I create a table after starting Hive from a particular directory, I'm unable to see that table when I start Hive from a different directory.
For example,
Let's say I started Hive from my home dir, i.e. $HOME or ~, and I create a table in Hive. But when I start Hive from /path/to/my/Hive/directory and do a show tables, the table I just created doesn't show up. However, if I start Hive from my home directory again and look for tables, I'm able to see the table.
Also, if I make some changes in hive-site.xml, they are simply being ignored by Hive.
Please help me understand where I am going wrong.
You can change this and use a single metastore_db by updating the javax.jdo.option.ConnectionURL property in $HIVE_HOME/conf/hive-site.xml (or hive-default.xml) as below:
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:derby:;databaseName=/path/to/my/metastore_db;create=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
where /path/to/my/metastore_db is the location where you want to keep your metastore DB.
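A quick way to verify the change (a sketch): start Hive from two different directories and confirm that they now list the same tables.
$ cd /tmp && hive -e 'show tables;'
$ cd ~ && hive -e 'show tables;'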

Adding JAR in Hive is giving error as "Query returned non-zero code: 1, cause: /user/hive/warehouse/abc.jar does not exist."

I created a UDF and exported the jar as abc.jar.
Copied the jar to HDFS at /user/hive/warehouse.
Now I am getting the errors below:
hive> ADD JAR /user/hive/warehouse/abc.jar;
/user/hive/warehouse/abc.jar does not exist
Query returned non-zero code: 1, cause: /user/hive/warehouse/abc.jar does not exist.
hive>
When I run hadoop fs -ls on /user/hive/warehouse, I can see abc.jar at that path.
Where am I going wrong, and what is the solution?
When you add a jar from HDFS, use the following statement:
ADD jar hdfs://namenode/user/hive/warehouse/abc.jar;
You are not indicating that you are adding the jar from HDFS; that is the cause of your error.
Hope that helps
The way you are specifying the path, Hive will look for the file on the local filesystem.
Either place it there, or use hdfs:// like this:
hive> ADD JAR /user/hive/warehouse/abc.jar => local filesystem
hive> ADD JAR hdfs://namenode/user/hive/warehouse/abc.jar => in HDFS
The above options are valid for the current session only, so you need to run ADD JAR every time.
In order to add it permanently, the recommended ways are as follows:
Add it in hive-site.xml:
<property>
<name>hive.aux.jars.path</name>
<value>file://localpath/yourjar.jar</value>
</property>
Copy the JAR file into the ${HIVE_HOME}/auxlib/ folder.
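Another session-independent option is a permanent function (a sketch; it needs Hive 0.13 or later, and the function and class names here are only placeholders). A function created with USING JAR is recorded in the metastore, so the jar is re-added automatically in new sessions:
hive> CREATE FUNCTION my_strip AS 'com.example.udf.Strip' USING JAR 'hdfs://namenode/user/hive/warehouse/abc.jar';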
