I have Hadoop 1.2.1 and have installed Hive 0.14.0 on a single node.
$ hive
Logging initialized using configuration in jar:file:/usr/local/hive/lib/hive-common-0.14.0.jar!/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rwxrwxr-x
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:444)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:672)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:616)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
Caused by: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rwxrwxr-x
at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:529)
at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:478)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:430)
... 7 more
The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rwxrwxr-x.
I used hadoop fs -chmod g+w /tmp/hive, but it is not working.
Update the permissions of your /tmp/hive HDFS directory using the following command:
hadoop fs -chmod 777 /tmp/hive
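To confirm the change took effect, list the parent directory; the permissions column for /tmp/hive should now read drwxrwxrwx (owner and timestamp will differ on your cluster):
hadoop fs -ls /tmp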
If that does not help, remove /tmp/hive on both the local filesystem and HDFS:
hadoop fs -rm -r /tmp/hive;
rm -rf /tmp/hive
Only temporary files are kept in this location, so there is no problem even if we delete it; it will be re-created with the proper permissions when required.
I did a little bit of experimentation with this and thought it might be useful to someone.
When Hive 0.14.0 is started without first creating /tmp/hive in HDFS, that directory is created with mode 711:
drwx--x--x - hadoop supergroup 0 2014-12-08 18:47 /tmp/hive
If instead one creates the directory via hadoop dfs -mkdir /tmp/hive, it defaults to mode 755:
drwxr-xr-x - hadoop supergroup 0 2014-12-09 11:13 /tmp/hive
The minimum permissions required to allow Hive to start without errors are 733:
hadoop dfs -chmod 733 /tmp/hive
This results in the following, and Hive starts successfully:
drwx-wx-wx - hadoop supergroup 0 2014-12-09 11:13 /tmp/hive
This leads me to believe that Hive 0.14.0 is doing the wrong thing when it creates that directory.
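A quick way to see concretely what mode 733 allows is to probe the directory as a second user (user2 here is hypothetical; with default simple authentication, HDFS trusts the client-side OS user):
sudo -u user2 hdfs dfs -touchz /tmp/hive/probe   # succeeds: write+execute on the directory permit creating entries
sudo -u user2 hdfs dfs -ls /tmp/hive             # fails: without read permission the directory cannot be listed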
Check the value of the property below in hive-site.xml, then fix the permissions on the directory it references:
<property>
<name>hive.exec.local.scratchdir</name>
<value>/tmp/mydir</value>
<description>Local scratch space for Hive jobs</description>
</property>
hadoop fs -rmr /tmp/mydir;
hadoop fs -mkdir /tmp/mydir;
hadoop fs -chmod 777 /tmp/mydir;
hadoop fs -chmod -R 777 /tmp/mydir;
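Note that hive.exec.local.scratchdir is a path on the local filesystem, not HDFS, so if the failing directory is the local one, the equivalent local commands would be (a sketch, assuming the same /tmp/mydir value):
rm -rf /tmp/mydir
mkdir -p /tmp/mydir
chmod -R 777 /tmp/mydir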
Related
I am trying to insert data into hive table from a file in hdfs directory by the query:
$ jdbc:hive2://localhost:10000> LOAD DATA INPATH '/user/xyz/stdfiles/testtbl.txt' OVERWRITE INTO TABLE testdb.testtbl;
But the query fails with:
Error: Error while compiling statement: FAILED:
HiveAuthzPluginException Error getting permissions for
hdfs://localhost:9000/user/xyz/stdfiles/testtbl.txt: null
(state=42000,code=40000)
I have tried granting permissions with the following commands, which run without error:
$ hdfs dfs -chown -R stdfiles /user/xyz/stdfiles
$ hdfs dfs -chmod -R 777 /user/xyz/stdfiles/testtbl.txt
Checked:
$ hdfs dfs -ls /user/xyz/stdfiles
19/05/22 09:15:13 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
-rwxrwxrwx 1 stdfiles supergroup 6 2019-05-22 08:45 /user/xyz/stdfiles/testtbl.txt
Inserting the data successfully is my desired output.
Adding the following properties to the Hadoop configuration file core-site.xml worked for me :)
<property>
<name>hadoop.proxyuser.niazullah.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.niazullah.groups</name>
<value>*</value>
</property>
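Note that proxyuser settings are read by the NameNode, so after editing core-site.xml either restart Hadoop or refresh the settings in place with the standard admin commands:
hdfs dfsadmin -refreshSuperUserGroupsConfiguration
yarn rmadmin -refreshSuperUserGroupsConfiguration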
Also check the user access on HDFS:
$ hdfs dfs -ls /user
Output:
drwxr-xr-x - main supergroup 0 2019-05-22 13:22 /user/test
Where "main" is the user change it do the hive user
Hey, I am installing Hive on a Hadoop 2.0 multi-node cluster, and I am not able to create folders using these commands:
[hadoop@master ~]$ $HADOOP_HOME/bin/hadoop fs -mkdir /tmp
16/07/19 14:20:15 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hadoop@master ~]$ $HADOOP_HOME/bin/hadoop fs -mkdir -p /user/hive/warehouse
16/07/19 14:24:12 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Importantly, I am not able to find the created folders. Where do they get created? I am not sure. Please help.
JPS for Hadoop is working fine:
[hadoop@master ~]$ jps
2977 ResourceManager
2613 DataNode
3093 NodeManager
2822 SecondaryNameNode
2502 NameNode
5642 Jps
The warning you get after running the -mkdir command does not impact Hadoop functionality. It's just a warning; ignore it. See here for details.
Creating directories under the root, i.e. "/", is a one-time activity and should be done by the superuser. Once you create root-level directories like "/tmp", "/user", etc., you can create user-specific folders like "/user/hduser" and own them using commands:
sudo -u hdfs hdfs dfs -mkdir /tmp
OR
sudo -u hdfs hdfs dfs -mkdir -p /user/hive/warehouse
Once you have the main folder ready, just assign it to the user who will be using it:
sudo -u hdfs hdfs dfs -chown hduser:hadoop /user/hive/warehouse
If you want to find the files/directories created on HDFS, you have to interact with the HDFS filesystem using CLI commands only,
e.g. hdfs dfs -ls /
The data created on HDFS also has a physical location on your local filesystem, but you will not see that location as the same files and directories. Look for the dfs.namenode.name.dir and dfs.datanode.data.dir properties in hdfs-site.xml under your installation, usually located at /usr/local/hadoop/etc/hadoop/hdfs-site.xml.
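Rather than reading the XML by hand, you can also query the effective values from the running configuration:
hdfs getconf -confKey dfs.namenode.name.dir
hdfs getconf -confKey dfs.datanode.data.dir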
I'm getting the following error when executing a select count(*) from tablename query while connected through Beeline.
ERROR : Job Submission failed with exception 'org.apache.hadoop.security.AccessControlException(Permission denied
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkOwner(FSPermissionChecker.java:201)
I can execute show tables; successfully but get this error any time I execute a query. I am logged in as the hadoop user, which has access to both Hadoop and Hive.
I've granted full permissions on the folder where the tables reside:
drwxr-xr-x - hadoop supergroup 0 2015-06-03 15:44 /data1
drwxrwxrwx - hadoop hadoop 0 2015-06-05 15:23 /tmp
drwxrwxrwx - hadoop supergroup 0 2015-06-05 15:24 /user
The table is in the user directory.
Environment details:
OS: CentOS
Hadoop: HW 2.6.0
Hive: 1.2
Any help would be greatly appreciated.
Is this a Hive-managed table? In that case, could you post what you get when you run:
hadoop fs -ls /user
hadoop fs -ls /user/hive
hadoop fs -ls /user/hive/warehouse
The error suggests that you are accessing the table as a user who is not the owner, and it seems that user does not have read and execute access.
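If the listings confirm that, one possible fix (a sketch; tablename is a placeholder for your actual table directory) is to hand ownership back to the querying user:
hadoop fs -chown -R hadoop:hadoop /user/hive/warehouse/tablename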
I learned that I have to configure the NameNode and DataNode dir in hdfs-site.xml. So that's my hdfs-site.xml configuration on the NameNode:
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file://usr/local/hadoop-2.6.0/hadoop_data/hdfs/namenode</value>
</property>
<property>
<name>dfs.block.size</name>
<value>134217728</value>
</property>
</configuration>
I did almost the same on my DataNode and changed dfs.namenode to dfs.datanode.
Then I formatted the filesystem via
hadoop namenode -format
Everything seems to be finished without an error.
Then I wanted to create a directory in my HDFS filesystem by using:
hdfs dfs -mkdir test
And I got an error:
mkdir: `test': No such file or directory
What did I miss or what's the common process from formatting to creating files/directories with HDFS?
Well, it turned out to be easy:
hdfs dfs -mkdir /test
creates the directory successfully, and
hdfs dfs -put myFile /test/myFile
works as well.
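The reason the relative path failed is that HDFS resolves relative paths against the user's home directory, /user/<username>, which had not been created yet. Creating it first makes relative paths work as well:
hdfs dfs -mkdir -p /user/$(whoami)   # create the HDFS home directory
hdfs dfs -mkdir test                 # now resolves to /user/<username>/test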
Create a directory:
hdfs dfs -mkdir directoryName
Create a new file in the directory:
hdfs dfs -touchz directoryName/Newfilename
Write into the newly created file in HDFS:
nano filename
Save it with Ctrl+X, then Y.
Read the newly created file from HDFS:
nano fileName
Or:
hdfs dfs -cat directoryName/fileName
HDFS is a non-POSIX-compliant filesystem, so you can't edit files directly inside HDFS; however, you can copy a file from your local system to HDFS using the following command:
hdfs dfs -put /path/in/source/system/filename /path/in/HDFS/system/destination
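For a full edit round trip, assuming the file already exists in HDFS (the paths are illustrative):
hdfs dfs -get /path/in/HDFS/file ./localcopy     # copy out of HDFS
nano ./localcopy                                 # edit locally
hdfs dfs -put -f ./localcopy /path/in/HDFS/file  # overwrite the HDFS copy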
If you want to create multiple nested sub-directories, you should also use the -p flag:
hdfs dfs -mkdir -p /test/another_test/one_more_test
hduser@Connected:~$ hive
Logging initialized using configuration in jar:file:/usr/local/hive/lib/hive-common-0.14.0.jar!/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rwx--x--x
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:444)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:672)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:616)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
Caused by: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rwx--x--x
at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:529)
at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:478)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:430)
I have created the Hive directories in Hadoop as follows:
hadoop fs -mkdir /usr/hive/warehouse
and
set permissions for the table:
hadoop fs -chmod g+w /usr/hive/warehouse
but it's still not working. What should I do?
It looks like the HDFS directory /tmp/hive is missing or lacks the permissions needed to write files inside it. Execute the following commands to assign proper permissions.
Switch to the HDFS admin user first (the sudo -su hdfs command can be used), then execute the following commands.
hadoop fs -chmod 777 /tmp;
hadoop fs -mkdir /tmp/hive;
hadoop fs -chmod -R 777 /tmp/hive;
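Afterwards, drop back to your normal user and verify both directories before retrying Hive:
hadoop fs -ls / | grep tmp   # /tmp should show drwxrwxrwx
hadoop fs -ls /tmp           # /tmp/hive should show drwxrwxrwx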