Hadoop 2.2.0 installation on Linux (NameNode not starting) - hadoop

I am trying to run a single-node Hadoop cluster on my machine with the following config:
Linux livingstream 3.2.0-29-generic #46-Ubuntu SMP Fri Jul 27 17:03:23 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
I am able to format the namenode without any problems; however, when I try to start the namenode using:
hadoop-daemon.sh start namenode
I get the following errors:
ishan@livingstream:/usr/local/hadoop$ hadoop-daemon.sh start namenode
Warning: $HADOOP_HOME is deprecated.
mkdir: cannot create directory `/var/log/hadoop/ishan': Permission denied
chown: cannot access `/var/log/hadoop/ishan': No such file or directory
mkdir: cannot create directory `/var/run/hadoop': Permission denied
starting namenode, logging to /var/log/hadoop/ishan/hadoop-ishan-namenode-livingstream.out
/usr/sbin/hadoop-daemon.sh: line 138: /var/run/hadoop/hadoop-ishan-namenode.pid: No such file or directory
/usr/sbin/hadoop-daemon.sh: line 137: /var/log/hadoop/ishan/hadoop-ishan-namenode-livingstream.out: No such file or directory
head: cannot open `/var/log/hadoop/ishan/hadoop-ishan-namenode-livingstream.out' for reading: No such file or directory
/usr/sbin/hadoop-daemon.sh: line 147: /var/log/hadoop/ishan/hadoop-ishan-namenode-livingstream.out: No such file or directory
/usr/sbin/hadoop-daemon.sh: line 148: /var/log/hadoop/ishan/hadoop-ishan-namenode-livingstream.out: No such file or directory
I did not create a separate user "hduser" for the Hadoop installation; I am using my existing username. Maybe that is why I am facing the problem.
Can someone please help me with this?
Exactly what permissions do I need to alter to get this working?
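For reference, a minimal sketch of the kind of permission fix the errors above point to, assuming the default /var/log/hadoop and /var/run/hadoop locations and the ishan user shown in the logs (adjust the group if yours differs):
sudo mkdir -p /var/log/hadoop/ishan /var/run/hadoop
sudo chown -R ishan:ishan /var/log/hadoop/ishan /var/run/hadoop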
UPDATE
After fiddling around and getting past the permission problems, I have moved on to a new set of errors, posted here: hadoop Nanenode wont start
I will forever keep you guys in mind if you can nudge me in the right direction so that I can start some real work on this.

Related

Hadoop installation issue: Permission denied

I followed this tutorial to install Hadoop. Unfortunately, when I run the namenode format command, the following error is printed at the end of the console output:
hdfs namenode -format
WARNING: /home/hdoop/hadoop-3.2.1/logs does not exist. Creating.
mkdir: cannot create directory ‘/home/hdoop/hadoop-3.2.1/logs’: Permission denied
ERROR: Unable to create /home/hdoop/hadoop-3.2.1/logs. Aborting.
Thank you.
Also, when I run
./start-dfs.sh
Starting namenodes on [localhost]
localhost: WARNING: /home/hdoop/hadoop-3.2.1/logs does not exist. Creating.
Starting datanodes
Starting secondary namenodes [blabla]
blabla: Warning: Permanently added 'blabla,192.168.100.10' (ECDSA) to the list of known hosts.
Change the permissions of /home/hdoop to the correct ones!
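For example, a sketch assuming hdoop is the user that runs Hadoop (the username is taken from the paths above):
sudo chown -R hdoop:hdoop /home/hdoop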
I solved this using the link here. In my configuration, I had not set JAVA_HOME in PATH:
$ which java
$ echo $JAVA_HOME
Also, I changed the value of HADOOP_OPTS in hadoop-env.sh as given below:
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/"
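For completeness, a minimal sketch of setting JAVA_HOME itself in ~/.bashrc or hadoop-env.sh; the JDK path below is only an assumption, so substitute the directory that `which java` ultimately resolves to:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH=$PATH:$JAVA_HOME/bin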
Solved it by creating the missing directory and giving access to it:
Create the directory: create the "logs" directory yourself with root access. In this case the directory is logs, so create it at /home/hdoop/hadoop-3.2.1/ (i.e. "/home/{username}/{extracted hadoop directory}/").
Give access to the directory: make it accessible with sudo chmod 777 {directory location}.
In this case: sudo chmod 777 /home/hdoop/hadoop-3.2.1/logs
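A less permissive alternative (assuming the hdoop user from the question) is to hand the directory to the Hadoop user instead of opening it up with chmod 777:
sudo mkdir -p /home/hdoop/hadoop-3.2.1/logs
sudo chown -R hdoop:hdoop /home/hdoop/hadoop-3.2.1/logs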

start-all.sh: command not found. How do I fix this?

I tried installing Hadoop using this tutorial: link (the video is timestamped to where the problem occurs).
However, after formatting the namenode (hdfs namenode -format) I don't get the "name" folder in /abc.
Also, start-all.sh and the other /sbin commands don't work.
P.S. I did try installing Hadoop as a single node, which didn't work, so I tried removing it and redoing everything as a two-node setup, which is why I had to reformat the namenode. I don't know if that affected this somehow.
EDIT 1: I fixed start-all.sh not being found; there was a mistake in .bashrc that I corrected.
However, I get these error messages when running start-all.sh or start-dfs.sh etc.:
hadoop@linux-virtual-machine:~$ start-dfs.sh
Starting namenodes on [localhost]
localhost: mkdir: cannot create directory ‘/usr/local/hadoop-2.10.0/logs’: Permission denied
localhost: chown: cannot access '/usr/local/hadoop-2.10.0/logs': No such file or directory
localhost: starting namenode, logging to /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-namenode-linux-virtual-machine.out
localhost: /usr/local/hadoop-2.10.0/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-namenode-linux-virtual-machine.out: No such file or directory
localhost: head: cannot open '/usr/local/hadoop-2.10.0/logs/hadoop-hadoop-namenode-linux-virtual-machine.out' for reading: No such file or directory
localhost: /usr/local/hadoop-2.10.0/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-namenode-linux-virtual-machine.out: No such file or directory
localhost: /usr/local/hadoop-2.10.0/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-namenode-linux-virtual-machine.out: No such file or directory
localhost: mkdir: cannot create directory ‘/usr/local/hadoop-2.10.0/logs’: Permission denied
localhost: chown: cannot access '/usr/local/hadoop-2.10.0/logs': No such file or directory
localhost: starting datanode, logging to /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-datanode-linux-virtual-machine.out
localhost: /usr/local/hadoop-2.10.0/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-datanode-linux-virtual-machine.out: No such file or directory
localhost: head: cannot open '/usr/local/hadoop-2.10.0/logs/hadoop-hadoop-datanode-linux-virtual-machine.out' for reading: No such file or directory
localhost: /usr/local/hadoop-2.10.0/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-datanode-linux-virtual-machine.out: No such file or directory
localhost: /usr/local/hadoop-2.10.0/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop-2.10.0/logs/hadoop-hadoop-datanode-linux-virtual-machine.out: No such file or directory
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is SHA256:a37ThJJRRW+AlDso9xrOCBHzsFCY0/OgYet7WczVbb0.
Are you sure you want to continue connecting (yes/no)? no
0.0.0.0: Host key verification failed.
EDIT 2: Fixed the above error by changing the permissions on the Hadoop folders (in my case both hadoop-2.10.0 and hadoop).
start-all.sh works perfectly, but the namenode doesn't show up.
It's not clear how you set up your PATH variable, or how the scripts are not "working". Did you chmod +x them to make them executable? Is there any log output from them at all?
The start-all script is available in the sbin directory of wherever you downloaded Hadoop, so just /path/to/sbin/start-all.sh is all you really need.
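If you want to call it without the full path, a typical ~/.bashrc addition looks like the sketch below; the install directory is taken from the error output above, so adjust it if yours differs:
export HADOOP_HOME=/usr/local/hadoop-2.10.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin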
Yes, the namenode needs to be formatted on a fresh cluster. The official Apache guide is the most up-to-date source and works fine for most people.
Otherwise, I would suggest you learn about Apache Ambari, which can automate your installation, or just use a sandbox provided by Cloudera, or one of the many Docker containers that already exist for Hadoop if you don't care about fully "installing" it.

permission denied error on hdfs while using put command

While trying to use the put command to add patternsToSkip file to hdfs, I get an error saying: Permission denied: user=root, access=WRITE, inode="/user":hdfs:hdfs:drwxr-xr-x
In the image below, you can see the sequence of commands written along with the error:
I tried accessing it as biadmin, root, and even hdfs, but with no luck! (details in the image)
Please help me fix this error. Thanks, folks.
The reason you are getting a permission issue is that you are trying to put the file directly into the /user directory in HDFS, because you are using two dots as the target in the put statement. You would need to log in as a user in the supergroup to access or copy files into that particular directory.
What I would suggest is trying the commands below to copy the file to HDFS.
Target with one dot
hadoop fs -put patternsToSkip .
OR
Giving complete target directory path
hadoop fs -put patternsToSkip /user/<instance_name>/output
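Note that both forms assume your HDFS home directory (/user/<username>) already exists. If it does not, it normally has to be created by the HDFS superuser first; a sketch, assuming an hdfs service user exists on the node (as the inode owner hdfs:hdfs in the error suggests):
sudo -u hdfs hdfs dfs -mkdir -p /user/root
sudo -u hdfs hdfs dfs -chown root:root /user/root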

Hadoop returns permission denied

I am trying to install Hadoop (2.7) on a cluster (two machines, hmaster and hslave1). I installed Hadoop in the folder /opt/hadoop/.
I followed this tutorial, but when I run the command start-dfs.sh, I get the following errors:
hmaster: starting namenode, logging to /opt/hadoop/logs/hadoop-hadoop-namenode-hmaster.out
hmaster: starting datanode, logging to /opt/hadoop/logs/hadoop-hadoop-datanode-hmaster.out
hslave1: mkdir: impossible to create the folder « /opt/hadoop\r »: Permission denied
hslave1: chown: impossible to reach « /opt/hadoop\r/logs »: no file or folder of this type
/logs/hadoop-hadoop-datanode-localhost.localdomain.out
I used the command chmod 777 on the hadoop folder on hslave1, but I still get this error.
Instead of using /opt/, use /usr/local/. If you get that permission issue again, grant the permissions with chmod as root; I have already configured Hadoop 2.7 on 5 machines. Or else use "sudo chown user:user /your log files directory".
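For example, a sketch based on the layout in the question, assuming the hadoop user from the tutorial runs the daemons on both machines:
sudo chown -R hadoop:hadoop /opt/hadoop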
It seems you have already given the master passwordless access to log in to the slave.
Make sure you are logged in with a username that exists on both servers
(hadoop in your case, as the tutorial you are following uses the 'hadoop' user).
You can edit the '/etc/sudoers' file using 'sudo' (or directly type 'visudo' in the terminal) and add the following permission for the newly created user 'hadoop':
hadoop ALL = NOPASSWD: ALL
This might resolve your issue.

hadoop creates dir that cannot be found

I use the following hadoop command to create a directory
hdfs dfs -mkdir /tmp/testing/morehere1
I get the following message:
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
Not understanding the error, I run the command again, which returns this message:
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
mkdir: `/tmp/testing/morehere2': File exists
Then, when I try to go to the directory just created, it's not there:
cd /tmp/testing/morehere2
-bash: cd: /tmp/testing/morehere2: No such file or directory
Any ideas what I am doing wrong?
hdfs dfs -mkdir /tmp/testing/morehere1
This command created a directory in HDFS. Don't worry about the log4j warning; the command created the directory successfully. That is why you got the error mkdir: `/tmp/testing/morehere2': File exists the second time you tried the command.
The following command will not work, since the directory was created not in your local filesystem but in HDFS:
cd /tmp/testing/morehere2
Use the command below to check the created directory in HDFS:
hdfs dfs -ls /tmp/testing
You should be able to see the new directory there.
About the log4j warning: you can ignore it, as it will not cause your Hadoop commands to fail. But if you want to fix it, you can add a File appender to log4j.properties.
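For example, a minimal sketch of such an appender in log4j.properties; the file path and appender names are illustrative, not from the original answer:
# log to the console and also to a local file
log4j.rootLogger=INFO, console, file
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.File=/tmp/hadoop-client.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n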
Remember that there's a difference between HDFS and your local file system. That first line that you posted creates a directory in HDFS, not on your local system, so you can't cd to it or ls it directly; if you want to access it, you have to go through Hadoop. It's also very rare to be logging to HDFS, as file appends have never been well supported. I suspect that you actually want to be creating that directory locally, and that might be part of your problem.
If your MR code was running fine previously and is now showing this log4j warning, then restart all the Hadoop daemons. It may solve your problem, as it solved mine :)
