I am trying to install Hadoop (2.7) on a cluster (two machines, hmaster and hslave1). I installed Hadoop in the folder /opt/hadoop/.
I followed this tutorial, but when I run the command start-dfs.sh, I get the following error:
hmaster: starting namenode, logging to /opt/hadoop/logs/hadoop-hadoop-namenode-hmaster.out
hmaster: starting datanode, logging to /opt/hadoop/logs/hadoop-hadoop-datanode-hmaster.out
hslave1: mkdir: cannot create directory '/opt/hadoop\r': Permission denied
hslave1: chown: cannot access '/opt/hadoop\r/logs': No such file or directory
/logs/hadoop-hadoop-datanode-localhost.localdomain.out
I ran chmod 777 on the hadoop folder on hslave1, but I still get this error.
Instead of /opt/, use /usr/local/. If you get that permission issue again, grant the permissions using chmod. I have already configured Hadoop 2.7 on 5 machines. Alternatively, use "sudo chown user:user /your-log-files-directory".
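For example, on hslave1 something like this should fix the ownership (a rough sketch, assuming the daemons run as the hadoop user and the install lives in /opt/hadoop; adjust the user and path to your setup):
# give the hadoop user ownership of the install, including the logs directory
sudo chown -R hadoop:hadoop /opt/hadoop
sudo mkdir -p /opt/hadoop/logs
sudo chown hadoop:hadoop /opt/hadoop/logs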
It seems you have already given the master passwordless SSH access to log in to the slave.
Make sure you are logged in with a username that exists on both servers
('hadoop' in your case, as the tutorial you are following uses the 'hadoop' user).
You can edit the '/etc/sudoers' file using 'sudo', or simply type 'visudo' in the terminal, and add the following permission for the newly created user 'hadoop':
hadoop ALL = NOPASSWD: ALL
This might resolve your issue.
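A small sketch of how to apply and check that entry (run on the slave; 'hadoop' is the user from the tutorial):
# open the sudoers file safely (visudo validates the syntax before saving)
sudo visudo
# after adding the line above, confirm passwordless sudo works for hadoop
sudo -l -U hadoop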
Related
I followed this tutorial for installing Hadoop. Unfortunately, when I run the dfs namenode -format script, an error is printed on the console.
At the end I see this message:
dfs namenode -format
WARNING: /home/hdoop/hadoop-3.2.1/logs does not exist. Creating.
mkdir: cannot create directory ‘/home/hdoop/hadoop-3.2.1/logs’: Permission denied
ERROR: Unable to create /home/hdoop/hadoop-3.2.1/logs. Aborting.
Thank you.
Also, when I run
./start-dfs.sh
Starting namenodes on [localhost]
localhost: WARNING: /home/hdoop/hadoop-3.2.1/logs does not exist. Creating.
Starting datanodes
Starting secondary namenodes [blabla]
blabla: Warning: Permanently added 'blabla,192.168.100.10' (ECDSA) to the list of known hosts.
Change the permissions of /home/hdoop to the correct ones!
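For example (a rough sketch, assuming hdoop is the user that runs Hadoop and should own its home directory; adjust the username if yours differs):
# give the hdoop user ownership of its home directory and the Hadoop tree inside it
sudo chown -R hdoop:hdoop /home/hdoop
# verify that the logs directory can now be created under it
ls -ld /home/hdoop/hadoop-3.2.1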
I solved it with the link here.
In my configuration, I had not set JAVA_HOME in the PATH.
$ which java
$ echo $JAVA_HOME
Also, I changed the value of HADOOP_OPTS in hadoop-env.sh as shown below.
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/"
(Image: the configuration before and after the change.)
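For reference, the two related settings in hadoop-env.sh might look like this (the JDK path is only an example; point it at whatever which java resolves to on your machine):
# in $HADOOP_HOME/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64   # example path; adjust to your JDK
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/"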
Create the directory: create the "logs" directory yourself using root access. In this case the directory was logs, so create it at /home/hdoop/hadoop-3.2.1/ (i.e. "/home/{username}/{extracted hadoop directory}/").
Give access to the directory: make it accessible with sudo chmod 777 {directory location}.
In this case: sudo chmod 777 /home/hdoop/hadoop-3.2.1/logs. This worked in my case.
I solved it by giving access to that directory.
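Putting those two steps together (a sketch; chmod 777 is the quick fix from above, while chown to the hdoop user is the tidier alternative):
# create the logs directory and open it up (quick fix)
sudo mkdir -p /home/hdoop/hadoop-3.2.1/logs
sudo chmod 777 /home/hdoop/hadoop-3.2.1/logs
# or, more restrictively, hand it to the hdoop user instead
sudo chown -R hdoop:hdoop /home/hdoop/hadoop-3.2.1/logs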
I am currently working on the Hortonworks practice exam and I am getting errors I have not been able to troubleshoot.
During the first step, the prompt asks me to put the three files from the /home/horton/datasets/flightdelays directory on the local machine into the /user/horton/flightdelays directory in HDFS, and I get a permission denied error. While on the node that HDFS is installed on (root@namenode), I run the simple command:
hadoop fs -copyFromLocal /home/horton/datasets/flightdelays/flight_delays1.csv /user/horton/flightdelays
This returns the error: /home/horton/datasets/flightdelays/flight_delays1.csv: No such file or directory
When I run the exact same command from the command line on the local machine (horton@some-ip), instead of running it after SSHing onto the namenode, I get a permission denied error:
Permission denied: user=horton, access=WRITE, inode="/user/horton/flightdelays":hdfs:hdfs:drwxr-xr-x
If anyone has done this practice exam before, or knows what this error is and could lend any assistance, it would be greatly appreciated. When researching online, a lot of people run into the same permission denied issue, but I'm going to assume that on a practice exam they set up, you shouldn't need to use sudo for every command you run.
Again, any help would be fantastic. Thanks!
Try this on the CLI:
sudo -u hdfs hdfs dfs -copyFromLocal /input/file/path /hdfs/path/
Try this in your command line:
hadoop fs -put /localfile.txt /hdfs/path
The issue is that the folder you're trying to write to has ownership and permissions of hdfs:hdfs:drwxr-xr-x, meaning it is owned by the 'hdfs' user and group. Only the hdfs user has write permission on that folder; everyone else has read and execute permissions only. Writing to that folder as the 'horton' user will therefore not work.
You need to run the command as hdfs like so:
sudo -u hdfs hadoop fs -copyFromLocal /home/horton/datasets/flightdelays/flight_delays1.csv /user/horton/flightdelays
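Alternatively (just a sketch, not part of the answer above), the hdfs superuser could hand ownership of the target directory to horton once, so later exam commands can run without sudo:
# run once as the hdfs superuser, then horton can write there directly
sudo -u hdfs hdfs dfs -chown -R horton:horton /user/horton/flightdelays
hadoop fs -copyFromLocal /home/horton/datasets/flightdelays/flight_delays1.csv /user/horton/flightdelays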
When I execute the get command it says permission denied.
I tried the solution already given, but it didn't work. The following is the command and its output:
hduser#ubuntu:~$ hadoop fs -get /user/hduser/Input/pg*.txt /home/vilas/Desktop/
Warning: $HADOOP_HOME is deprecated.
get: Permission denied
Check the permissions of the /user/hduser directory; maybe hduser does not have permission to access it. If so, you can execute the following command (as the hdfs user):
hdfs dfs -chown hduser:hduser /user/hduser
More information about chown is available here.
Then try again.
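To see what you are dealing with first, a quick check (just a sketch) would be:
# show the owner and permissions of /user/hduser and of the input files
hdfs dfs -ls /user
hdfs dfs -ls /user/hduser/Input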
You must go into the Desktop directory, open a terminal there, and run the command
hadoop fs -get /user/hduser/Input/pg*.txt .
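If the permission problem is on the local side rather than in HDFS (the destination /home/vilas/Desktop belongs to the vilas user, not hduser), a quick check along these lines may help; the chmod is only an example fix:
# can hduser write into the destination directory?
ls -ld /home/vilas/Desktop
# if not, its owner can open it up, for example:
sudo chmod o+rwx /home/vilas/Desktop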
I just created an instance and deployed a cluster using bdutil. SSH works fine, as I can ssh into the instance using ./bdutil shell.
When I try to access directories such as hadoop, hdfs, etc., it throws an error:
Permission Denied
The terminal prompt appears as username@hadoop-m $. I know hadoop-m is the name of the instance, but what is the username? It shows my name, but I don't know where it got this from or what the password is.
I am using Ubuntu to ssh into the instance.
I'm not a Hadoop expert, so I can only answer a bit generally. On GCE, when you ssh in, gcloud creates a username from your Google account name. Hadoop directories such as hadoop or hdfs are probably owned by a different user. Try using sudo chmod to give your username permission to read/write the directories you need.
To elaborate on Jeff's answer, bdutil-deployed clusters set up the user hadoop as the Hadoop admin (this 'admin' user may differ on other Hadoop systems, where the Hadoop admin account may be split into separate users hdfs, yarn, mapred, etc.). Note that bdutil clusters should work without needing to deal with Hadoop admin tasks for normal jobs, but if you need to access those Hadoop directories, you can either do:
sudo su hadoop
or
sudo su
to open a shell as hadoop or root, respectively. Or, as Jeff mentions, you can use sudo chmod to grant broader access to your own username.
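For example, either of these should work (a rough sketch; the directory path is a placeholder for whichever Hadoop directory you need):
# run a single HDFS command as the hadoop admin user
sudo -u hadoop hdfs dfs -ls /
# or grant your own account read access to a local Hadoop directory (placeholder path)
sudo chmod -R a+rX /home/hadoop/hadoop-install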
I just installed Hadoop in single-node mode, but when I run it by logging in on localhost, it gives an error that it cannot make changes to files because permission is denied.
Have you followed all the steps as suggested in: http://hadoop.apache.org/common/docs/current/single_node_setup.html ?
You may want to look at this : http://getsatisfaction.com/cloudera/topics/permission_denied_error_in_desktop
Also, some more information would definitely help.
You have not given the necessary permissions. Create a different user other than root and follow this tutorial exactly: http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
The user seems to be missing permissions on the directory containing the files.
Make sure that the user you are logged in as is the owner of the Hadoop directory by running the
ls -la command.
If it is not the owner, run chown -R user:group <hadoop directory> and it will work fine.
You can also follow Michael Noll's tutorial:
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
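A concrete sketch of that ownership check and fix (the /usr/local/hadoop path and the hduser:hadoop owner are the ones used in that tutorial; substitute your own):
# check who owns the Hadoop install directory
ls -la /usr/local/hadoop
# if it is not the user you log in with, hand it over, for example:
sudo chown -R hduser:hadoop /usr/local/hadoop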