Cloudera Installation issue (scm_prepare_node.sh: Permission denied) - hadoop

When I am trying to install Cloudera Hadoop, I get the error below during the file-copying stage:
/tmp/scm_prepare_node.BggVxw3l
bash: /tmp/scm_prepare_node.BggVxw3l/scm_prepare_node.sh: Permission denied
Can anyone help me fix this issue?
P.S.: /tmp has 777 permissions: drwxrwxrwt. 41 root root 4096 May 9 14:59 tmp

My /tmp was mounted with the noexec option. I removed noexec from the entry in /etc/fstab and restarted the machines. Now everything works fine.
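For anyone hitting the same thing, the /etc/fstab change looks roughly like this; the device name and filesystem below are illustrative, the key part is dropping the noexec flag:

```shell
# /etc/fstab -- before: noexec prevents running any script from /tmp,
# which is why scm_prepare_node.sh fails with "Permission denied"
/dev/mapper/vg0-tmp  /tmp  ext4  defaults,nosuid,nodev,noexec  0 0

# after: noexec removed so the installer's script can execute
/dev/mapper/vg0-tmp  /tmp  ext4  defaults,nosuid,nodev  0 0
```

To apply it without a reboot, you can also remount in place with `mount -o remount,exec /tmp`.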

Related

Hadoop returns permission denied

I am trying to install Hadoop (2.7) on a cluster of two machines, hmaster and hslave1. I installed Hadoop in the folder /opt/hadoop/.
I followed this tutorial, but when I run the command start-dfs.sh I get the following error:
hmaster: starting namenode, logging to /opt/hadoop/logs/hadoop-hadoop-namenode-hmaster.out
hmaster: starting datanode, logging to /opt/hadoop/logs/hadoop-hadoop-datanode-hmaster.out
hslave1: mkdir: cannot create directory '/opt/hadoop\r': Permission denied
hslave1: chown: cannot access '/opt/hadoop\r/logs': No such file or directory
/logs/hadoop-hadoop-datanode-localhost.localdomain.out
I used the command chmod 777 on the hadoop folder on hslave1, but I still get this error.
Instead of using /opt/, use /usr/local/. If you get that permission issue again, give root permissions using chmod. I have already configured Hadoop 2.7 on 5 machines. Alternatively, use "sudo chown user:user /your-log-files-directory".
It seems you have already given the master passwordless access to log in to the slave.
Make sure you are logged in with a username that exists on both servers
(hadoop in your case, as the tutorial you are following uses the 'hadoop' user).
You can edit the '/etc/sudoers' file using 'sudo', or directly type 'visudo' in the terminal, and add the following entry for the newly created user 'hadoop':
hadoop ALL = NOPASSWD: ALL
This might resolve your issue.

The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rw-rw-rw- (on Windows)

I am running Spark on Windows 7. When I use Hive, I see the following error
The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rw-rw-rw-
The permissions are set as the following
C:\tmp>ls -la
total 20
drwxr-xr-x 1 ADMIN Administ 0 Dec 10 13:06 .
drwxr-xr-x 1 ADMIN Administ 28672 Dec 10 09:53 ..
drwxr-xr-x 2 ADMIN Administ 0 Dec 10 12:22 hive
I have set "full control" to all users from Windows->properties->security->Advanced.
But I still see the same error.
I have checked a bunch of links, some say this is a bug on Spark 1.5?
First of all, make sure you are using the correct winutils for your OS. The next step is permissions.
On Windows, you need to run the following command in cmd:
D:\winutils\bin\winutils.exe chmod 777 D:\tmp\hive
I hope you have already downloaded winutils and set the HADOOP_HOME variable.
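If not, setting the variable looks roughly like this from a cmd prompt (D:\winutils is an example path matching the command above; adjust to wherever you unpacked winutils):

```shell
:: winutils.exe is expected at %HADOOP_HOME%\bin\winutils.exe
:: setx persists the variable for new cmd sessions
setx HADOOP_HOME "D:\winutils"
```

Open a fresh cmd window afterwards so the new value is visible.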
First things first: check your computer's domain. Try
c:\work\hadoop-2.2\bin\winutils.exe ls c:/tmp/hive
If this command says access denied, or "FindFileOwnerAndPermission error (1789): The trust relationship between this workstation and the primary domain failed.",
it means your computer's domain controller is not reachable. A possible reason is that you are not on the same VPN as your domain controller. Connect to the VPN and try again.
Then try the solution provided by Viktor or Nishu.
You need to set this directory's permissions on HDFS, not your local filesystem. /tmp doesn't mean C:\tmp unless you set fs.defaultFS in core-site.xml to file://c:/, which is probably a bad idea.
Check it using
hdfs dfs -ls /tmp
Set it using
hdfs dfs -chmod 777 /tmp/hive
The following solution worked on Windows for me:
First, I defined HADOOP_HOME. It is described in detail here.
Next, I did the same as Nishu Tayal, but with one difference: C:\temp\hadoop\bin\winutils.exe chmod 777 \tmp\hive
(\tmp\hive is not a local directory)
Error while starting spark-shell on a VM running on Windows:
Error msg: The root scratch dir: /tmp/hive on HDFS should be writable. Permission denied
Solution:
/tmp/hive is a temporary directory. Only temporary files are kept in this location. There is no problem even if we delete this directory; it will be created when required with the proper permissions.
Step 1) In HDFS, remove the /tmp/hive directory ==> hdfs dfs -rm -r /tmp/hive
Step 2) At the OS level too, delete the directory /tmp/hive ==> rm -rf /tmp/hive
After this, I started spark-shell and it worked fine.
This is a simple 4 step process:
For Spark 2.0+:
Download Hadoop for Windows / Winutils
Add this to your code (before SparkSession initialization):
if (getOS() == "windows") {
    System.setProperty("hadoop.home.dir", "C:/Users//winutils-master/hadoop-2.7.1");
}
Add this to your spark session (you can change it to C:/Temp instead of Desktop):
.config("hive.exec.scratchdir", "C:/Users//Desktop/tmphive")
Open cmd.exe and run:
"path\to\hadoop-2.7.1\bin\winutils.exe" chmod 777 C:\Users\\Desktop\tmphive
The main reason is that you started Spark in the wrong directory. Please create the folder D:/tmp/hive (give it full permissions) and start Spark from the D: drive:
D:\> spark-shell
Now it will work. :)
Can you please try giving 777 permissions to the folder /tmp/hive? I think Spark runs as an anonymous user (which falls into the 'other' user category), and this permission should be recursive.
I had this same issue with the 1.5.1 version of Spark for Hive, and it worked after giving 777 permissions with the command below on Linux:
chmod -R 777 /tmp/hive
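A quick sanity check of the recursive flag, using a throwaway directory name so nothing real is touched (note the capital -R: lowercase -r is parsed as a symbolic mode that removes read permission, not as the recursive option):

```shell
# Throwaway directory tree, purely for illustration
mkdir -p /tmp/hive_demo/sub
# -R (capital) applies the mode to the directory and everything under it
chmod -R 777 /tmp/hive_demo
# Verify the nested directory picked up the permissions
stat -c '%a' /tmp/hive_demo/sub   # prints 777
```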
There is a bug in the Spark Jira for this. It was resolved a few days ago. Here is the link:
https://issues.apache.org/jira/browse/SPARK-10528
The comments cover all the options, but there is no guaranteed solution.
The issue is resolved in Spark version 2.0.2 (Nov 14, 2016); use that version.
The 2.1.0 release (Dec 28, 2016) has the same issues.
Use the latest version of "winutils.exe" and try. https://github.com/steveloughran/winutils/blob/master/hadoop-2.7.1/bin/winutils.exe
I also faced this issue; it is related to the network. I installed Spark on Windows 7 within a particular domain.
The domain name can be checked at:
Start -> Computer -> Right click -> Properties -> Computer name,
domain and workgroup settings -> click on Change -> Computer Name
(tab) -> Click on Change -> Domain name.
When I run the spark-shell command on that domain's network, it works fine without any error.
On other networks I received a write-permission error.
To avoid this error, run the spark command while on the domain specified in the path above.
I was getting the same error "The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rw-rw-rw-" on Windows 7. Here is what I did to fix the issue:
I had installed Spark in C:\Program Files (x86)...; it was looking for /tmp/hive under C:, i.e., C:\tmp\hive.
I downloaded winutils.exe from https://github.com/steveloughran/winutils. I chose the same version as the Hadoop package I chose when I installed Spark, i.e., hadoop-2.7.1.
(You can find it under the bin folder, i.e., https://github.com/steveloughran/winutils/tree/master/hadoop-2.7.1/bin.)
I then used the following command to make the C:\tmp\hive folder writable:
winutils.exe chmod 777 \tmp\hive
Note: With a previous version of winutils, the chmod command also set the required permission without error, but Spark still complained that the /tmp/hive folder was not writable.
Using the correct version of winutils.exe did the trick for me. The winutils should be from the version of Hadoop that Spark was pre-built for.
Set the HADOOP_HOME environment variable so that winutils.exe sits at %HADOOP_HOME%\bin. I stored winutils.exe along with the C:\Spark\bin files, so now my SPARK_HOME and HADOOP_HOME point to the same location, C:\Spark.
Now that winutils has been added to the path, give permissions to the hive folder using winutils.exe chmod 777 C:\tmp\hive
You don't have to fix the permissions of the /tmp/hive directory yourself (as some of the answers suggest). winutils can do that for you. Download the appropriate version of winutils from https://github.com/steveloughran/winutils and move it to Spark's bin directory (e.g. C:\opt\spark\spark-2.2.0-bin-hadoop2.6\bin). That will fix it.
I was running a Spark test from IDEA, and in my case the issue was the wrong winutils.exe version. I think you need to match it to your Hadoop version. You can find winutils.exe here.
/*
Spark and hive on windows environment
Error: java.lang.RuntimeException: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rw-rw-rw-
Pre-requisites: Have winutils.exe placed in c:\winutils\bin\
Resolve as follows:
*/
C:\user>c:\Winutils\bin\winutils.exe ls
FindFileOwnerAndPermission error (1789): The trust relationship between this workstation and the primary domain failed.
// Make sure you are connected to the domain controller, in my case I had to connect using VPN
C:\user>c:\Winutils\bin\winutils.exe ls c:\user\hive
drwx------ 1 BUILTIN\Administrators PANTAIHQ\Domain Users 0 Aug 30 2017 c:\user\hive
C:\user>c:\Winutils\bin\winutils.exe chmod 777 c:\user\hive
C:\user>c:\Winutils\bin\winutils.exe ls c:\user\hive
drwxrwxrwx 1 BUILTIN\Administrators PANTAIHQ\Domain Users 0 Aug 30 2017 c:\user\hive

impala-shell query failing with Error(13)

I'm running impala-shell on a 3-node cluster. Some queries work just fine, but a few return the following error:
Create file /tmp/impala-scratch/924abcb4827fd7ba:d15cd3585951f4b2_c8e0146a-37cd-457a-96f6-ac5d933cd4da failed with errno=13 description=Error(13): Permission denied
I have checked my local directory: /tmp/impala-scratch does exist and is read-write-executable by me. Any tips would be greatly appreciated!
Okay, so I figured it out. It turns out that the old /tmp/impala-scratch had these access permissions:
drwxr-xr-x
According to:
Hiveserver2: Failed to create/change scratchdir permissions to 777: Could not create FileClient
you have to change the permissions to 777:
chmod -R 777 /tmp/impala-scratch/
And this fixed it.

Hiveserver2: Failed to create/change scratchdir permissions to 777: Could not create FileClient

I'm running a MapR Community Edition Hadoop cluster (M3).
Unfortunately, the HiveServer2 service crashes and, according to the log file in /opt/mapr/hive/hive-0.13/logs/mapr/hive.log, there's a problem with permissions on the scratch directory:
2015-02-24 21:21:08,187 WARN [main]: server.HiveServer2 (HiveServer2.java:init(74)) - Failed to create/change scratchdir permissions to 777: Could not create FileClient java.io.IOException: Could not create FileClient
I checked the settings for the scratch directory using hive -e 'set;' | grep scratch:
hive.exec.scratchdir=/user/mapr/tmp/hive/
hive.scratch.dir.permission=700
I notice that hive.scratch.dir.permission is set to 700 and the error message suggests that it wants to change this to 777. However, according to the filesystem, /mapr/my.cluster.com/user/mapr/tmp has 777 permissions and belongs to the mapr user.
mapr#hadoop01:/mapr/my.cluster.com/user/mapr/tmp$ ls -al
total 2
drwxr-xr-x 3 mapr mapr 1 Feb 22 10:39 .
drwxr-xr-x 5 mapr mapr 3 Feb 24 08:40 ..
drwxrwxrwx 56 mapr mapr 54 Feb 23 10:20 hive
Judging by the filesystem permissions, I would expect the mapr user to be able to do whatever it wants with this folder, so I don't understand the error message.
I'm curious to know if anyone's seen this before and, if so, how did you fix it?
Update:
I had a look at the source code and noticed some relevant comments just prior to the warning:
// When impersonation is enabled, we need to have "777" permission on root scratchdir, because
// query specific scratch directories under root scratchdir are created by impersonated user and
// if permissions are not "777" the query fails with permission denied error.
I set the following properties in hive-site.xml:
<property>
  <name>hive.scratch.dir.permission</name>
  <value>777</value>
</property>
<property>
  <name>hive.exec.scratchdir</name>
  <value>/tmp/hive/</value>
</property>
... and created the /tmp/hive/ folder in HDFS with 777 permissions:
mapr#hadoop01:~$ hadoop fs -ls -d /tmp/hive
drwxrwxrwx - mapr mapr 0 2015-02-27 08:38 /tmp/hive
Although this looked promising, I still got the same warning in hive.log.
Update the permissions of your /tmp/hive HDFS directory to 777:
hadoop fs -chmod 777 /tmp/hive
Or remove /tmp/hive entirely; the temporary files will be recreated as needed even after you delete the directory:
hadoop fs -rm -r /tmp/hive
rm -rf /tmp/hive

Hadoop HDFS - Cannot give +x permission to files

So, I used Cloudera's installation and management tool to get a 3-node cluster of servers up and running.
I have HDFS running and can see/create directories etc.
I went ahead and installed the Fuse plugin, which allows me to mount HDFS as a file system. Everything works fine; I can write files to the folders etc.
Problem:
When I run 'chmod 777 ./file.sh' on the mounted drive, it doesn't give any errors, but when I do an 'ls -l' it only shows:
'-rw-rw-rw- 1 root nobody 26 Oct 5 08:57 run.sh'
When I run 'sudo -u hdfs hadoop fs -chmod 777 /run.sh' it still has the same permissions. No matter what I do, I cannot get execute permission on any files.
I have disabled permissions in Cloudera Manager, and I have also chown'd the folder (and chmod -R 777 the folder too). But nothing seems to work.
Any ideas?
It seems that adding "umask=000" to the fstab mount line did the trick. (I also added exec for good measure.)
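For reference, the resulting fstab entry for a hadoop-fuse-dfs mount looks something like the sketch below; the NameNode host, port, and mount point are illustrative, and the exact option set may differ with your Fuse/CDH version:

```shell
# /etc/fstab -- HDFS mounted via hadoop-fuse-dfs; umask=000 stops the mount
# from masking permission bits, and exec allows running scripts from it.
# namenode:8020 and /mnt/hdfs are example values.
hadoop-fuse-dfs#dfs://namenode:8020  /mnt/hdfs  fuse  usetrash,rw,exec,umask=000  0 0
```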
Thanks!
