Hive: No files matching path, but the file exists - hadoop

I'm having a lot of trouble getting Hive to work. I'm running CDH4.5 with YARN, all installed from Cloudera's yum repo. I followed their instructions to set up Hive, but for some reason it does not recognize legitimate files on my local file system.
[msknapp@localhost data]$ pwd
/home/msknapp/data
[msknapp@localhost data]$ ll | grep county_insurance_pp.txt
-rw-rw-rw- 1 msknapp msknapp 162537 Jan 5 14:58 county_insurance_pp.txt
[msknapp@localhost data]$ sudo -u hive hive
Logging initialized using configuration in file:/etc/hive/conf.dist/hive-log4j.properties
Hive history file=/tmp/hive/hive_job_log_9e8bf55b-7ec8-4b79-be9b-cc2200a33f91_1795256456.txt
hive> describe count_insurance;
2014-01-08 02:42:59.000 GMT Thread[main,5,main] java.io.FileNotFoundException: derby.log (Permission denied)
----------------------------------------------------------------
2014-01-08 02:42:59.443 GMT:
Booting Derby version The Apache Software Foundation - Apache Derby - 10.4.2.0 - (689064): instance a816c00e-0143-6fbb-3f3a-000007a1d270
on database directory /var/lib/hive/metastore/metastore_db
Database Class Loader started - derby.database.classpath=''
OK
fips int
st string
stfips int
name string
a int
b int
c int
d int
e int
f int
total int
Time taken: 5.195 seconds
hive> LOAD DATA LOCAL INPATH 'county_insurance_pp.txt' OVERWRITE INTO TABLE count_insurance;
FAILED: SemanticException Line 1:23 Invalid path ''county_insurance_pp.txt'': No files matching path file:/home/msknapp/data/county_insurance_pp.txt
The file I'm trying to load does exist. I get the same exception when I use an absolute path in my load statement.
On a side note, I still don't know why it keeps giving me a FileNotFoundException for the derby log with a permission warning. A long time ago I went to /var/lib/hive and did 'sudo chmod -R 777 ./*', so permissions should not be a problem.
BTW I am running Hadoop in pseudo-distributed mode, and have all three Hive daemons running locally. I used hive-server2, not hive-server1.
Somebody please let me know what I'm doing wrong here, or how to debug this.

This is Koji. I had the same problem recently.
The Hive script runs on the Hadoop server. If the file county_insurance_pp.txt does not exist on the Hadoop server, it cannot find the file.
You have to send your target file to the Hadoop server before running the script. There are two ways to handle this (see the sketch after the list):
use scp
use webhdfs (http://hadoop.apache.org/docs/r1.0.4/webhdfs.html)
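For example, here is a minimal sketch of the scp route (the hostname hive-server.example.com is only a placeholder, and /tmp is just a staging directory):
scp /home/msknapp/data/county_insurance_pp.txt msknapp@hive-server.example.com:/tmp/
# then, on that machine:
hive -e "LOAD DATA LOCAL INPATH '/tmp/county_insurance_pp.txt' OVERWRITE INTO TABLE count_insurance;"
Equivalently, you can put the file into HDFS yourself and drop the LOCAL keyword so Hive reads it from HDFS:
hdfs dfs -put /home/msknapp/data/county_insurance_pp.txt /tmp/county_insurance_pp.txt
hive -e "LOAD DATA INPATH '/tmp/county_insurance_pp.txt' OVERWRITE INTO TABLE count_insurance;"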

Related

Apache Hive Not Returning YARN Application Results Correctly

I'm running a from-scratch cluster on AWS EC2. I have an external table (partitioned) defined with data on S3. I'm able to query this table and receive results to the console with a simple select * statement:
hive> set hive.execution.engine=tez;
hive> select * from external_table where partition_1='1' and partition_2='2';
<correct results returned>
Running a query that requires Tez doesn't return the results to the console:
hive> set hive.execution.engine=tez;
hive> select count(*) from external_table where partition_1='1' and partition_2='2';
Status: Running (Executing on YARN cluster with App id application_1572972524483_0012)
OK
+------+
| _c0 |
+------+
+------+
No rows selected (8.902 seconds)
However, if I dig in the logs and on the filesystem, I can find the results from that query:
(yarn.resourcemanager.log) org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1572972524483_0022 CONTAINERID=container_1572972524483_0022_01_000002 RESOURCE=<memory:1024, vCores:1> QUEUENAME=default
(container_folder/syslog_attempt) [TezChild] |exec.FileSinkOperator|: New Final Path: FS file:/tmp/<REALLY LONG FILE PATH>/000000_0
[root #] cat /tmp/<REALLY LONG FILE PATH>/000000_0
SEQ"org.apache.hadoop.io.BytesWritableorg.apache.hadoop.io.Textl▒ꩇ1som}▒▒j¹▒ 2060
2060 is the correct count for the partition.
Now, oddly enough, I'm able to get the results from the application if I insert overwrite directory on HDFS:
hive> set hive.execution.engine=tez;
hive> INSERT OVERWRITE DIRECTORY '/tmp/local_out' select count(*) from external_table where partition_1='1' and partition_2='2';
[root #] hdfs dfs -cat /tmp/local_out/000000_0
2060
However, attempting to insert overwrite local directory fails:
hive> set hive.execution.engine=tez;
hive> INSERT OVERWRITE LOCAL DIRECTORY '/tmp/local_out' select count(*) from external_table where partition_1='1' and partition_2='2';
[root #] cat /tmp/local_out/000000_0
cat: /tmp/local_out/000000_0: No such file or directory
If I cat the container result file for this query, it's only the number, no class name or special characters:
[root #] cat /tmp/<REALLY LONG FILE PATH>/000000_0
2060
The only out-of-place log message I can find comes from the YARN ResourceManager log:
(yarn.resourcemanager.log) INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1572972524483_0023 CONTAINERID=container_1572972524483_0023_01_000004 RESOURCE=<memory:1024, vCores:1> QUEUENAME=default
(yarn.resourcemanager.log) WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root IP=NMIP OPERATION=AM Released Container TARGET=Scheduler RESULT=FAILURE DESCRIPTION=Trying to release container not owned by app or with invalid id. PERMISSIONS=Unauthorized access or invalid container APPID=application_1572972524483_0023 CONTAINERID=container_1572972524483_0023_01_000004
My guess, based on the vaguest of impressions, is that there's a character encoding problem when the results are written to the local filesystem (hence the special characters in the container result file), but it's really just a guess and I have no idea how to verify or tackle that issue. Any help is greatly appreciated!
Someone on the Apache Hive mailing list suggested this was being caused by the YARN container writing its results files to the local machine where it was running instead of HDFS. I did some digging in the source code and found that:
mapreduce.framework.name=local
which is the default in Hadoop 3.2.1, was causing the problem.
Solved with:
set mapreduce.framework.name=yarn
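To confirm which value is in effect, you can print the property from the Hive session before and after overriding it; a sketch, reusing the query from the question (before the fix this should print mapreduce.framework.name=local):
hive> set mapreduce.framework.name;
mapreduce.framework.name=local
hive> set mapreduce.framework.name=yarn;
hive> select count(*) from external_table where partition_1='1' and partition_2='2';
To make the change permanent, set the same property to yarn in mapred-site.xml rather than per session.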
How did you insert data into the Hive table? Hive can return the count(*) result from the metastore instead of running a count job, to optimise performance. Try MSCK REPAIR TABLE on this table first to let Hive know about the new external files and update the metastore accordingly.
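For reference, with the table name from the question, that would be something like the following; the ANALYZE statement (with the partition spec matching the one queried above) recomputes the statistics that a metastore-only count(*) relies on:
hive> MSCK REPAIR TABLE external_table;
hive> ANALYZE TABLE external_table PARTITION (partition_1='1', partition_2='2') COMPUTE STATISTICS;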

hive hadoop permissions not correct

I installed Apache Kylin, which requires Hadoop, Hive, HBase and Java to work. Everything is installed correctly. Now when I try to run this example, I get an error after the first command, i.e. ${KYLIN_HOME}/bin/sample.sh,
and below is the error I am getting
Loading data to table default.kylin_sales
Failed with exception Unable to move source file:/usr/lib/kylin/sample_cube/data/DEFAULT.KYLIN_SALES.csv to destination hdfs://localhost:54310/user/hive/warehouse/kylin_sales/DEFAULT.KYLIN_SALES.csv
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask
I have set 777 permissions on both of the above paths, and I am operating as root.
Check the HDFS directory permissions. If they are not like below, change them like this:
hdfs dfs -chmod g+w /user/hive/warehouse
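A quick way to check, and then fix it as the HDFS superuser (usually hdfs; that account name is an assumption about your distribution), is sketched below:
hdfs dfs -ls /user/hive
# look for group-write on the warehouse entry, e.g. drwxrwxr-x ... hive hive ... /user/hive/warehouse
sudo -u hdfs hdfs dfs -chmod g+w /user/hive/warehouse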

Not able to create new table in hive from Spark-shell

I am using a single-node setup on Red Hat and installed Hadoop, Hive, Pig, and Spark. I configured the Hive metadata in Derby and everything. I created a new folder for the Hive tables and gave it full privileges (chmod 777). Then I created one table from the Hive CLI, and I am able to select that data in spark-shell and print the values to the console. But from spark-shell / Spark SQL I am not able to create new tables. It throws the error:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:file:/2016/hive/test2 is not a directory or unable to create one)
I checked the permissions and the user (I am using the same user for the installation of Hive, Hadoop, Spark, etc.).
Is there anything else that needs to be done to get full integration of Spark and Hive?
Thanks
Check that the permissions in HDFS are correct (not just on the local filesystem):
hadoop fs -chmod -R 755 /user
If the error message persists afterwards please update the question.

Hive failed to create /user/hive/warehouse

I just get started on Apache Hive, and I am using my local Ubuntu box 12.04, with Hive 0.10.0 and Hadoop 1.1.2.
Following the official "Getting Started" guide on the Apache website, I am now stuck at the Hadoop command from the guide that creates the Hive warehouse directory:
$ $HADOOP_HOME/bin/hadoop fs -mkdir /user/hive/warehouse
the error was mkdir: failed to create /user/hive/warehouse
Does Hive require Hadoop to be in a specific mode? I know I didn't have to do much to my Hadoop installation other than update JAVA_HOME, so it is in standalone mode. I am sure Hadoop itself is working, since I can run the Pi example that comes with the Hadoop installation.
Also, the other command to create /tmp shows that the /tmp directory already exists, so it didn't recreate it, and /bin/hadoop fs -ls lists the current directory.
So, how can I get around it?
Almost all examples in the documentation have this command wrong. Just like in Unix, you will need the "-p" flag to create the parent directories as well, unless you have already created them. This command will work:
$HADOOP_HOME/bin/hadoop fs -mkdir -p /user/hive/warehouse
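For reference, the full sequence from the Getting Started guide then looks roughly like this (the guide also opens group write on both directories):
$HADOOP_HOME/bin/hadoop fs -mkdir -p /tmp
$HADOOP_HOME/bin/hadoop fs -mkdir -p /user/hive/warehouse
$HADOOP_HOME/bin/hadoop fs -chmod g+w /tmp
$HADOOP_HOME/bin/hadoop fs -chmod g+w /user/hive/warehouse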
When running Hive on a local system, just add this to ~/.hiverc:
SET hive.metastore.warehouse.dir=${env:HOME}/Documents/hive-warehouse;
You can specify any folder to use as the warehouse. Obviously, any other Hive configuration method will do (hive-site.xml or hive --hiveconf, for example).
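For instance, either of the following should have the same effect (a sketch; the warehouse path is only an example):
echo 'SET hive.metastore.warehouse.dir=${env:HOME}/Documents/hive-warehouse;' >> ~/.hiverc
# or pass it for a single invocation:
hive --hiveconf hive.metastore.warehouse.dir=$HOME/Documents/hive-warehouse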
That's possibly what Ambarish Hazarnis kept in mind when saying "or Create the warehouse in your home directory".
This seems like a permission issue. Do you have access to the root folder /?
Try the following options (a sketch of option 1 follows the list):
1. Run the command as a superuser,
OR
2. Create the warehouse in your home directory.
Let us know if this helps. Good luck!
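A sketch of option 1, assuming the HDFS superuser account is hdfs (as it is in most distributions) and that you then hand the directory over to your own user:
sudo -u hdfs hadoop fs -mkdir -p /user/hive/warehouse
sudo -u hdfs hadoop fs -chown -R $USER /user/hive/warehouse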
When setting Hadoop properties in the Spark configuration, prefix them with spark.hadoop.
Therefore set
conf.set("spark.hadoop.hive.metastore.warehouse.dir","/new/location")
This works for older versions of Spark. The property has changed in Spark 2.0.0.
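In Spark 2.0.0+ the warehouse location is read from spark.sql.warehouse.dir instead; for example, you can pass it on the command line (the path here is just an example):
spark-shell --conf spark.sql.warehouse.dir=hdfs:///user/hive/warehouse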
Adding an answer for reference for Cloudera CDH users who are seeing this same issue.
If you are using the Cloudera CDH distribution, make sure you have followed these steps:
Launch Cloudera Manager (Express / Enterprise) by clicking on the desktop icon.
Open the Cloudera Manager page in a browser
Start all services
Cloudera has the /user/hive/warehouse folder created by default. It's just that YARN and HDFS might not be up and running, so the path cannot be accessed.
While this is a simple permission issue that was resolved with sudo in my comment above, there are a couple of notes:
Creating it in the home directory should work as well, but then you may need to update the Hive setting for the warehouse path, which I think defaults to /user/hive/warehouse.
I ran into another error with a CREATE TABLE statement in the Hive shell; the error was something like this:
hive> CREATE TABLE pokes (foo INT, bar STRING);
FAILED: Error in metadata: MetaException(message:Got exception: java.io.FileNotFoundException File file:/user/hive/warehouse/pokes does not exist.)
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
It turns out to be another permission issue: you have to create a group called "hive", add the current user to that group, and change the ownership of /user/hive/warehouse to that group. After that, it works (a sketch of the commands follows the link). Details can be found at the link below:
http://mail-archives.apache.org/mod_mbox/hive-user/201104.mbox/%3CBANLkTinq4XWjEawu6zGeyZPfDurQf+j8Bw#mail.gmail.com%3E
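A sketch of those steps on the command line; since the error above refers to file:/user/hive/warehouse, these are plain filesystem commands (use the hadoop fs equivalents if your warehouse lives on HDFS):
sudo groupadd hive
sudo usermod -a -G hive $USER
sudo chgrp -R hive /user/hive/warehouse
sudo chmod -R g+w /user/hive/warehouse
# log out and back in so the new group membership takes effect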
If you are running Linux, check the data directory and its permissions (in hadoop core-site.xml); it looks like you have kept the default, which is /data/tmp, and in most cases that requires root permission.
Change the XML config file, delete /data/tmp, and run the filesystem format (of course, after you have modified the core XML config).
I recommend using a newer version of Hive, i.e. 1.1.0; 0.10.0 is very buggy.
Run this command and then try to create the directory; it grants the user full permission on the HDFS /user directory:
hadoop fs -chmod -R 755 /user
I am using MacOS and homebrew as package manager. I had to set the property in hive-site.xml as
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/usr/local/Cellar/hive/2.3.1/libexec/conf/warehouse</value>
</property>

FAILED: Error in metadata: MetaException(message:Got exception: java.net.ConnectException Call to localhost/127.0.0.1:54310 failed

I am using Ubuntu 12.04, hadoop-0.23.5, hive-0.9.0.
I specified my metastore_db separately, in some other place, $HIVE_HOME/my_db/metastore_db, in hive-site.xml.
Hadoop runs fine; jps shows ResourceManager, NameNode, DataNode, NodeManager, and SecondaryNameNode.
Hive starts perfectly; metastore_db and derby.log are also created, and all Hive commands run successfully. I can create databases, tables, etc. But a few days later, when I run show databases or show tables, I get the error below:
FAILED: Error in metadata: MetaException(message:Got exception: java.net.ConnectException Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused) FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
I had this problem too, and the accepted answer did not help me, so I will add my solution here for others:
My problem was that I had a single machine with a pseudo-distributed setup installed with Hive. It was working fine with localhost as the host name. However, when we decided to add multiple machines to the cluster, we also decided to give the machines proper names: "machine01, machine02, etc.".
I changed all the Hadoop conf/*-site.xml files and the hive-site.xml file too, but still had the error. After exhaustive research I realized that Hive was picking up the URIs not from the *-site files, but from the metastore tables in MySQL. All the Hive table metadata is saved in two tables, SDS and DBS. Upon changing the DB_LOCATION_URI column and LOCATION in the tables DBS and SDS, respectively, to point to the latest namenode URI, I was back in business.
Hope this helps others.
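For illustration, assuming a MySQL-backed metastore database named metastore reachable as user hiveuser, and that the URIs change from hdfs://localhost:54310 to hdfs://machine01:54310 (all of these names are assumptions), the updates look roughly like this; back up the metastore database first:
mysql -u hiveuser -p metastore -e "
UPDATE DBS SET DB_LOCATION_URI = REPLACE(DB_LOCATION_URI, 'hdfs://localhost:54310', 'hdfs://machine01:54310');
UPDATE SDS SET LOCATION = REPLACE(LOCATION, 'hdfs://localhost:54310', 'hdfs://machine01:54310');
"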
Reasons for this:
If you changed your Hadoop/Hive version, you may be specifying the previous Hadoop version (which has fs.default.name=hdfs://localhost:54310 in core-site.xml) in your hive-0.9.0/conf/hive-env.sh file.
$HADOOP_HOME may point to some other location.
The specified version of Hadoop is not working.
Your namenode may be in safe mode; run bin/hdfs dfsadmin -safemode leave or bin/hadoop dfsadmin -safemode leave.
In the case of a fresh installation, the above problem can be the effect of a namenode issue.
Try formatting the namenode using the command:
hadoop namenode -format
1. Take your namenode out of safe mode. Try the command below:
hadoop dfsadmin -safemode leave
2. Restart your Hadoop daemons:
sudo service hadoop-master stop
sudo service hadoop-master start
