Hive couldn't create directory in HDFS and fails to start? - hadoop

I am deploying Hive 2.3 in remote mode, with a MySQL database on another machine as the metastore.
I am about to finish the whole process and am checking whether the deployment works by running bin/hive.
Then I got this error:
Exception in thread "main" java.lang.RuntimeException: Couldn't create directory /user/hive/tmp/54de671c-0236-49e2-b967-7c3da8973f3a_resources
I know this is set by the property hive.downloaded.resources.dir in hive-site.xml, and I set it to /user/hive/tmp/${hive.session_id}_resources.
I have created /user/hive/tmp in HDFS.
I have changed the directory permissions: $ hdfs dfs -chmod -R 777 /user/hive/tmp

I recently ran into this problem too, and I have solved it. The key is that "/user/hive/tmp" is not an HDFS path; it is a path on the local filesystem. You should create "/user/hive/tmp" on the local filesystem and change its permissions. Hope this helps you solve the problem.
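For example, on the machine where you run bin/hive, a minimal sketch (assuming the same /user/hive/tmp path from the question) would be:
$ mkdir -p /user/hive/tmp       # create the directory on the local filesystem, not in HDFS
$ chmod -R 777 /user/hive/tmp   # open up permissions so the Hive session can write its *_resources directories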

Related

Spark - java IOException: Failed to create local dir in /tmp/blockmgr*

I was trying to run a long-running Spark job. After a few hours of execution, I got the exception below:
Caused by: java.io.IOException: Failed to create local dir in /tmp/blockmgr-bb765fd4-361f-4ee4-a6ef-adc547d8d838/28
I tried to get around it by checking:
Permission issues in the /tmp dir. The Spark server is not running as root, but the /tmp dir should be writable by all users.
Whether the /tmp dir has enough space.
Assuming that you are working with several nodes, you'll need to check every node that participates in the Spark operation (master/driver plus slaves/nodes/workers).
Please confirm that each worker/node has enough disk space (especially check the /tmp folder) and the right permissions.
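A quick way to spot-check both on each node (a sketch; the paths assume the default /tmp block-manager location from the error above):
$ df -h /tmp            # confirm the filesystem holding /tmp has free space
$ ls -ld /tmp           # should show drwxrwxrwt (world-writable with the sticky bit)
$ ls -ld /tmp/blockmgr-*    # check ownership of any existing block manager directories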
Edit: The answer below did not ultimately solve my case. Spark (or some of its dependencies) was able to create some of the subfolders, but not all of them, and having to create such paths by hand repeatedly would make any project unworkable. Therefore I ran Spark (PySpark in my case) as an Administrator, which solved the problem. So in the end it probably is a permission issue after all.
Original answer:
I solved the same problem I had on my local Windows machine (not a cluster). Since there was no problem with permissions, I created the dir that Spark was failing to create, i.e. I created the following folder as a local user and did not need to change any permissions on that folder.
C:\Users\<username>\AppData\Local\Temp\blockmgr-97439a5f-45b0-4257-a773-2b7650d17142
After verifying all the permissions and user access, I got the same issue when building the components in Talend Studio. It was resolved by providing the correct "/" in the Spark scratch directory (temp directory) in the Spark Configuration tab. This is required when building the jar on Windows and running it on a Linux cluster.

hive hadoop permissions not correct

I installed Apache Kylin, which requires Hadoop, Hive, HBase, and Java to work. All of these are installed correctly. Now, when I try to run this example, I get an error after the first command, i.e. ${KYLIN_HOME}/bin/sample.sh
Below is the error I am getting:
Loading data to table default.kylin_sales
Failed with exception Unable to move source file:/usr/lib/kylin/sample_cube/data/DEFAULT.KYLIN_SALES.csv to destination hdfs://localhost:54310/user/hive/warehouse/kylin_sales/DEFAULT.KYLIN_SALES.csv
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask
I have set 777 permissions on both of the above paths and I am operating as root.
Check the HDFS directory permissions. If they are not set like below, change them as follows:
hdfs dfs -chmod g+w /user/hive/warehouse
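To verify the fix, a small sketch (assuming the warehouse path from the error message above):
$ hdfs dfs -ls /user/hive                   # the warehouse entry should show group write, e.g. drwxrwxr-x
$ hdfs dfs -chmod g+w /user/hive/warehouse  # grant group write if it is missing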

Not able to create new table in hive from Spark-shell

I am using a single-node setup on Red Hat and have installed Hadoop, Hive, Pig, and Spark. I configured the Hive metadata in Derby and everything else. I created a new folder for the Hive tables and gave it full privileges (chmod 777). Then I created one table from the Hive CLI, and I am able to select that data in spark-shell and print the values to the console. But from spark-shell / Spark SQL I am not able to create new tables. It throws this error:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:file:/2016/hive/test2 is not a directory or unable to create one)
I checked the permissions and the user (I am using the same user for the installation and for Hive, Hadoop, Spark, etc.).
Is there anything that needs to be done to get full integration of Spark and Hive?
Thanks
Check that the permissions in HDFS are correct (not just on the local filesystem):
hadoop fs -chmod -R 755 /user
If the error message persists afterwards please update the question.
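Since the error message shows a file: URI, it is worth checking both places; a sketch (the /2016/hive path is taken from the error above):
$ ls -ld /2016/hive             # local filesystem permissions on the folder created for Hive tables
$ hadoop fs -ls /user           # HDFS permissions for the same user
$ hadoop fs -chmod -R 755 /user # as suggested above, if the HDFS permissions are wrong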

Hive failed to create /user/hive/warehouse

I just got started with Apache Hive, and I am using my local Ubuntu 12.04 box, with Hive 0.10.0 and Hadoop 1.1.2.
Following the official "Getting Started" guide on the Apache website, I am now stuck at the Hadoop command to create the Hive metastore directory, as given in the guide:
$ $HADOOP_HOME/bin/hadoop fs -mkdir /user/hive/warehouse
The error was: mkdir: failed to create /user/hive/warehouse
Does Hive require Hadoop in a specific mode? I know I didn't have to do much to my Hadoop installation other than update JAVA_HOME, so it is in standalone mode. I am sure Hadoop itself is working, since I can run the Pi example that comes with the Hadoop installation.
Also, the other command to create /tmp reports that the /tmp directory already exists, so it didn't recreate it, and bin/hadoop fs -ls lists the current directory.
So, how can I get around it?
Almost all examples in the documentation have this command wrong. Just like in Unix, you need the "-p" flag to create the parent directories as well, unless you have already created them. This command will work:
$HADOOP_HOME/bin/hadoop fs -mkdir -p /user/hive/warehouse
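For reference, the full set of setup commands from the Hive "Getting Started" guide, with the -p flag added, is roughly:
$ $HADOOP_HOME/bin/hadoop fs -mkdir -p /tmp
$ $HADOOP_HOME/bin/hadoop fs -mkdir -p /user/hive/warehouse
$ $HADOOP_HOME/bin/hadoop fs -chmod g+w /tmp
$ $HADOOP_HOME/bin/hadoop fs -chmod g+w /user/hive/warehouse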
When running Hive on a local system, just add this to ~/.hiverc:
SET hive.metastore.warehouse.dir=${env:HOME}/Documents/hive-warehouse;
You can specify any folder to use as the warehouse. Obviously, any other Hive configuration method will do (hive-site.xml or hive -hiveconf, for example).
That is possibly what Ambarish Hazarnis had in mind when saying "or create the warehouse in your home directory".
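For example, the same override can be passed on the command line instead of via ~/.hiverc (a sketch; the warehouse path is just an illustration):
$ hive --hiveconf hive.metastore.warehouse.dir=$HOME/Documents/hive-warehouse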
This seems like a permission issue. Do you have access to the root folder /?
Try the following options:
1. Run the command as the superuser
OR
2. Create the warehouse in your home directory.
Let us know if this helps. Good luck!
When setting Hadoop properties in the Spark configuration, prefix them with spark.hadoop.
Therefore set
conf.set("spark.hadoop.hive.metastore.warehouse.dir", "/new/location")
This works for older versions of Spark; the property changed in Spark 2.0.0.
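As a sketch, the same setting can also be passed when launching the shell; in Spark 2.0.0 and later the warehouse location is controlled by spark.sql.warehouse.dir instead:
# older Spark versions: forward the Hive property with the spark.hadoop. prefix
$ spark-shell --conf spark.hadoop.hive.metastore.warehouse.dir=/new/location
# Spark 2.0.0+: use the built-in warehouse property
$ spark-shell --conf spark.sql.warehouse.dir=/new/location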
Adding an answer for reference for Cloudera CDH users who are seeing this same issue.
If you are using the Cloudera CDH distribution, make sure you have followed these steps:
Launch Cloudera Manager (Express / Enterprise) by clicking on the desktop icon.
Open the Cloudera Manager page in a browser.
Start all services.
Cloudera creates the /user/hive/warehouse folder by default. It's just that YARN and HDFS might not be up and running, so the path cannot be accessed.
While this is a simple permission issue that was resolved with sudo in my comment above, there are a couple of notes:
Creating it in the home directory should work as well, but then you may need to update the Hive setting for the metastore path, which I think defaults to /user/hive/warehouse.
I ran into another error with the CREATE TABLE statement in the Hive shell; the error was something like this:
hive> CREATE TABLE pokes (foo INT, bar STRING);
FAILED: Error in metadata: MetaException(message:Got exception: java.io.FileNotFoundException File file:/user/hive/warehouse/pokes does not exist.)
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
It turns out to be another permission issue: you have to create a group called "hive", add the current user to that group, and change the ownership of /user/hive/warehouse to that group. After that, it works. Details can be found at the link below:
http://mail-archives.apache.org/mod_mbox/hive-user/201104.mbox/%3CBANLkTinq4XWjEawu6zGeyZPfDurQf+j8Bw#mail.gmail.com%3E
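A minimal sketch of those steps (assuming the warehouse lives on the local filesystem, as the file: URI in the error suggests, and that $USER is the account running Hive):
$ sudo groupadd hive                        # create the "hive" group
$ sudo usermod -a -G hive $USER             # add the current user (log in again for it to take effect)
$ sudo chgrp -R hive /user/hive/warehouse   # hand the warehouse over to the group
$ sudo chmod -R g+w /user/hive/warehouse    # make sure the group can write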
If you are running Linux, check the data directory and its permissions (in Hadoop's core-site.xml). It looks like you have kept the default, which is /data/tmp, and in most cases that will require root permission.
Change the XML config file, delete /data/tmp, and run the filesystem format (of course, after you have modified the core XML config).
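A rough sketch of that sequence (the property name hadoop.tmp.dir and the replacement path are assumptions; adjust them to your setup):
# 1. point hadoop.tmp.dir in core-site.xml at a directory your user can write, e.g. /home/<user>/hadoop-tmp
# 2. remove the old data directory mentioned above
$ rm -rf /data/tmp
# 3. re-format the filesystem (Hadoop 1.x syntax, matching the question)
$ $HADOOP_HOME/bin/hadoop namenode -format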
I recommend using a later version of Hive, i.e. version 1.1.0; 0.10.0 is very buggy.
Run this command and then try to create the directory; it grants full permissions to the user on the /user directory in HDFS:
hadoop fs -chmod -R 755 /user
I am using macOS and Homebrew as the package manager. I had to set the property in hive-site.xml as:
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/usr/local/Cellar/hive/2.3.1/libexec/conf/warehouse</value>
</property>

Sqoop Permission Issue when running inside Map Reduce Code

I am trying to invoke Sqoop through a map reduce program using
Sqoop.runTool(arguments,_conf);
When executing, I receive the following error
Exception in thread "main" java.lang.RuntimeException: Could not create temporary directory: /tmp/sqoop-hdfs/compile/a609226c19d65f561dd7035c00d318f6; check for a directory permissions issue on /tmp.
I have set the permissions on /tmp and its subdirectories in HDFS to 777.
I can invoke the same command fine through the command line using sudo -u hdfs sqoop ...
This is Cloudera's Hadoop distribution and I am running the job as the hdfs user.
This probably isn't the /tmp directory in HDFS, but rather the /tmp directory on the local file system. What are the permissions on that directory? (That would also explain why it works when you 'sudo' the command.)
Just clean the /tmp/sqoop-hdfs/compile folder; it works.
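A sketch of checking the local /tmp (not HDFS) and cleaning the compile directory, following both suggestions above:
$ ls -ld /tmp /tmp/sqoop-hdfs /tmp/sqoop-hdfs/compile   # local filesystem permissions, as the hdfs user
$ rm -rf /tmp/sqoop-hdfs/compile/*                      # clear out stale compile artifacts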
