I am trying to run Hadoop in Pseudo-Distributed mode, following this tutorial: http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html
I can ssh to my localhost and format the filesystem. However, I can't start the NameNode and DataNode daemons with this command:
sbin/start-dfs.sh
When I execute it with sudo I get:
ubuntu#ip-172-31-42-67:/usr/local/hadoop-2.6.0$ sudo sbin/start-dfs.sh
Starting namenodes on [localhost]
localhost: Permission denied (publickey).
localhost: Permission denied (publickey).
Starting secondary namenodes [0.0.0.0]
0.0.0.0: Permission denied (publickey).
and when executed without sudo:
ubuntu#ip-172-31-42-67:/usr/local/hadoop-2.6.0$ sbin/start-dfs.sh
Starting namenodes on [localhost]
localhost: mkdir: cannot create directory ‘/usr/local/hadoop-2.6.0/logs’: Permission denied
localhost: chown: cannot access ‘/usr/local/hadoop-2.6.0/logs’: No such file or directory
localhost: starting namenode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-namenode-ip-172-31-42-67.out
localhost: /usr/local/hadoop-2.6.0/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-namenode-ip-172-31-42-67.out: No such file or directory
localhost: head: cannot open ‘/usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-namenode-ip-172-31-42-67.out’ for reading: No such file or directory
localhost: /usr/local/hadoop-2.6.0/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-namenode-ip-172-31-42-67.out: No such file or directory
localhost: /usr/local/hadoop-2.6.0/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-namenode-ip-172-31-42-67.out: No such file or directory
localhost: mkdir: cannot create directory ‘/usr/local/hadoop-2.6.0/logs’: Permission denied
localhost: chown: cannot access ‘/usr/local/hadoop-2.6.0/logs’: No such file or directory
localhost: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-datanode-ip-172-31-42-67.out
localhost: /usr/local/hadoop-2.6.0/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-datanode-ip-172-31-42-67.out: No such file or directory
localhost: head: cannot open ‘/usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-datanode-ip-172-31-42-67.out’ for reading: No such file or directory
localhost: /usr/local/hadoop-2.6.0/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-datanode-ip-172-31-42-67.out: No such file or directory
localhost: /usr/local/hadoop-2.6.0/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-datanode-ip-172-31-42-67.out: No such file or directory
Starting secondary namenodes [0.0.0.0]
0.0.0.0: mkdir: cannot create directory ‘/usr/local/hadoop-2.6.0/logs’: Permission denied
0.0.0.0: chown: cannot access ‘/usr/local/hadoop-2.6.0/logs’: No such file or directory
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-secondarynamenode-ip-172-31-42-67.out
0.0.0.0: /usr/local/hadoop-2.6.0/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-secondarynamenode-ip-172-31-42-67.out: No such file or directory
0.0.0.0: head: cannot open ‘/usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-secondarynamenode-ip-172-31-42-67.out’ for reading: No such file or directory
0.0.0.0: /usr/local/hadoop-2.6.0/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-secondarynamenode-ip-172-31-42-67.out: No such file or directory
0.0.0.0: /usr/local/hadoop-2.6.0/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-secondarynamenode-ip-172-31-42-67.out: No such file or directory
I also notice now that running ls to check the contents of HDFS directories fails, like here:
ubuntu#ip-172-31-42-67:~/dir$ hdfs dfs -ls output/
ls: Call From ip-172-31-42-67.us-west-2.compute.internal/172.31.42.67 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Can anyone tell me what the problem could be?
I had the same problem, and the only solution I found was this:
https://anuragsoni.wordpress.com/2015/07/05/hadoop-start-dfs-sh-localhost-permission-denied-how-to-fix/
It suggests generating a new SSH RSA key.
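Note that running the script under sudo makes ssh connect as root, whose key is usually not in authorized_keys, which is why the publickey errors appear. For reference, the passphraseless-key setup from the single-node guide looks roughly like this, run as the user that will start the daemons (not via sudo):
# Generate a passphraseless RSA key for the user that runs start-dfs.sh
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
# Authorize it for logins to localhost
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
# "ssh localhost" should now work without a password prompt
ssh localhost exit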
The errors above suggest a permissions problem.
You have to make sure that the hadoop user has the proper privileges to /usr/local/hadoop.
For this purpose you can try:
sudo chown -R hadoop /usr/local/hadoop/
Or, much less restrictively (fine for a throwaway test box, but not recommended otherwise):
sudo chmod 777 /usr/local/hadoop/
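If you would rather not open the permissions that wide, a narrower sketch (assuming the daemons are started by the ubuntu user shown in the prompts above) is to hand the installation, or at least its logs directory, over to that user:
# Check who currently owns the installation
ls -ld /usr/local/hadoop-2.6.0
# Pre-create the logs directory the start scripts try to make
sudo mkdir -p /usr/local/hadoop-2.6.0/logs
# Give the tree to the user that actually runs the daemons
sudo chown -R ubuntu:ubuntu /usr/local/hadoop-2.6.0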
Also make sure you have done the following configuration correctly; you need to edit four .xml files.
Edit the file hadoop-2.6.0/etc/hadoop/core-site.xml and, between the <configuration> and </configuration> tags, put:
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
Edit the file hadoop-2.6.0/etc/hadoop/hdfs-site.xml and, between the <configuration> tags, put:
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
Edit the file hadoop-2.6.0/etc/hadoop/mapred-site.xml (if only mapred-site.xml.template exists, copy it to mapred-site.xml first), paste the following between the <configuration> tags, and save:
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
Edit the file hadoop-2.6.0/etc/hadoop/yarn-site.xml, paste the following between the <configuration> tags, and save:
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
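Once those four files are in place, the usual sequence from the single-node guide is to format HDFS once and then start the daemons (paths assume the hadoop-2.6.0 layout used above):
cd /usr/local/hadoop-2.6.0
# Format the filesystem (only needed the first time)
bin/hdfs namenode -format
# Start HDFS, then YARN
sbin/start-dfs.sh
sbin/start-yarn.sh
# jps should now list NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager
jps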
Related
When I start the Hadoop daemons I get the following error:
[hdp#localhost ~]$ start-all.sh
WARNING: Attempting to start all Apache Hadoop daemons as hdp in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [localhost]
localhost: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Starting datanodes
localhost: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Starting secondary namenodes [localhost.localdomain]
localhost.localdomain: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Starting resourcemanager
Starting nodemanagers
localhost: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
hduser#manoj-VirtualBox:/usr/local/hadoop$ sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
localhost: mkdir: cannot create directory ‘/tmp’: Permission denied
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-manoj-VirtualBox.out
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 165: /tmp/hadoop-hduser-namenode.pid: No such file or directory
localhost: mkdir: cannot create directory ‘/tmp’: Permission denied
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-manoj-VirtualBox.out
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 165: /tmp/hadoop-hduser-datanode.pid: No such file or directory
Starting secondary namenodes [0.0.0.0]
0.0.0.0: mkdir: cannot create directory ‘/tmp’: Permission denied
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-manoj-VirtualBox.out
0.0.0.0: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 165: /tmp/hadoop-hduser-secondarynamenode.pid: No such file or directory
starting yarn daemons
mkdir: cannot create directory ‘/tmp’: Permission denied
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-manoj-VirtualBox.out
/usr/local/hadoop/sbin/yarn-daemon.sh: line 125: /tmp/yarn-hduser-resourcemanager.pid: No such file or directory
localhost: mkdir: cannot create directory ‘/tmp’: Permission denied
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-manoj-VirtualBox.out
localhost: /usr/local/hadoop/sbin/yarn-daemon.sh: line 125: /tmp/yarn-hduser-nodemanager.pid: No such file or directory
I tried running
sudo chown -R hduser /usr/local/hadoop/
to give permissions to hduser, but the result is still the same.
I also tried running sbin/start-dfs.sh and sbin/start-yarn.sh; these hit the same permission problems.
After adding hduser to sudoers, different permission denied errors appear:
hduser#manoj-VirtualBox:/usr/local/hadoop$ sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-manoj-VirtualBox.out
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 165: /tmp/hadoop-hduser-namenode.pid: Permission denied
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-manoj-VirtualBox.out
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 165: /tmp/hadoop-hduser-datanode.pid: Permission denied
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-manoj-VirtualBox.out
0.0.0.0: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 165: /tmp/hadoop-hduser-secondarynamenode.pid: Permission denied
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-manoj-VirtualBox.out
/usr/local/hadoop/sbin/yarn-daemon.sh: line 125: /tmp/yarn-hduser-resourcemanager.pid: Permission denied
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-manoj-VirtualBox.out
localhost: /usr/local/hadoop/sbin/yarn-daemon.sh: line 125: /tmp/yarn-hduser-nodemanager.pid: Permission denied
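Every failure above is about /tmp rather than the Hadoop tree, so one thing worth checking (my guess, not something stated in the post) is that /tmp still has the standard world-writable sticky-bit permissions before re-running the scripts:
# /tmp should normally show drwxrwxrwt (mode 1777)
ls -ld /tmp
# Restore the standard permissions if they were changed
sudo chmod 1777 /tmp
# Then try again without sudo
sbin/start-dfs.sh
sbin/start-yarn.sh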
I am very confused about the Hadoop configuration in core-site.xml and hdfs-site.xml. I feel that the start-dfs.sh script does not actually use these settings. I can format the NameNode successfully as the hdfs user, but running start-dfs.sh does not start the HDFS daemons. Can anyone help me? Here is the error message:
[hdfs#I26C ~]$ start-dfs.sh
Starting namenodes on [I26C]
I26C: mkdir: cannot create directory ‘/hdfs’: Permission denied
I26C: chown: cannot access ‘/hdfs/hdfs’: No such file or directory
I26C: starting namenode, logging to /hdfs/hdfs/hadoop-hdfs-namenode-I26C.out
I26C: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 159: /hdfs/hdfs/hadoop-hdfs-namenode-I26C.out: No such file or directory
I26C: head: cannot open ‘/hdfs/hdfs/hadoop-hdfs-namenode-I26C.out’ for reading: No such file or directory
I26C: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 177: /hdfs/hdfs/hadoop-hdfs-namenode-I26C.out: No such file or directory
I26C: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 178: /hdfs/hdfs/hadoop-hdfs-namenode-I26C.out: No such file or directory
10.1.226.15: mkdir: cannot create directory ‘/hdfs’: Permission denied
10.1.226.15: chown: cannot access ‘/hdfs/hdfs’: No such file or directory
10.1.226.15: starting datanode, logging to /hdfs/hdfs/hadoop-hdfs-datanode-I26C.out
10.1.226.15: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 159: /hdfs/hdfs/hadoop-hdfs-datanode-I26C.out: No such file or directory
10.1.226.16: mkdir: cannot create directory ‘/edw/hadoop-2.7.2/logs’: Permission denied
10.1.226.16: chown: cannot access ‘/edw/hadoop-2.7.2/logs’: No such file or directory
10.1.226.16: starting datanode, logging to /edw/hadoop-2.7.2/logs/hadoop-hdfs-datanode-I26D.out
10.1.226.16: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 159: /edw/hadoop-2.7.2/logs/hadoop-hdfs-datanode-I26D.out: No such file or directory
10.1.226.15: head: cannot open ‘/hdfs/hdfs/hadoop-hdfs-datanode-I26C.out’ for reading: No such file or directory
10.1.226.15: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 177: /hdfs/hdfs/hadoop-hdfs-datanode-I26C.out: No such file or directory
10.1.226.15: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 178: /hdfs/hdfs/hadoop-hdfs-datanode-I26C.out: No such file or directory
10.1.226.16: head: cannot open ‘/edw/hadoop-2.7.2/logs/hadoop-hdfs-datanode-I26D.out’ for reading: No such file or directory
10.1.226.16: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 177: /edw/hadoop-2.7.2/logs/hadoop-hdfs-datanode-I26D.out: No such file or directory
10.1.226.16: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 178: /edw/hadoop-2.7.2/logs/hadoop-hdfs-datanode-I26D.out: No such file or directory
Starting secondary namenodes [0.0.0.0]
0.0.0.0: mkdir: cannot create directory ‘/hdfs’: Permission denied
0.0.0.0: chown: cannot access ‘/hdfs/hdfs’: No such file or directory
0.0.0.0: starting secondarynamenode, logging to /hdfs/hdfs/hadoop-hdfs-secondarynamenode-I26C.out
0.0.0.0: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 159: /hdfs/hdfs/hadoop-hdfs-secondarynamenode-I26C.out: No such file or directory
0.0.0.0: head: cannot open ‘/hdfs/hdfs/hadoop-hdfs-secondarynamenode-I26C.out’ for reading: No such file or directory
0.0.0.0: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 177: /hdfs/hdfs/hadoop-hdfs-secondarynamenode-I26C.out: No such file or directory
0.0.0.0: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 178: /hdfs/hdfs/hadoop-hdfs-secondarynamenode-I26C.out: No such file or directory
Here is the info about my deployment:
Master:
hostname: I26C
IP: 10.1.226.15
Slave:
hostname: I26D
IP: 10.1.226.16
Hadoop version: 2.7.2
OS: CentOS 7
Java: 1.8
I have created a hadoop group and four users:
groupadd hadoop
useradd -g hadoop hadoop
useradd -g hadoop hdfs
useradd -g hadoop mapred
useradd -g hadoop yarn
The HDFS NameNode and DataNode directory permissions:
drwxrwxr-x. 3 hadoop hadoop 4.0K Apr 26 15:40 hadoop-data
The core-site.xml setting:
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/edw/hadoop-data/</value>
<description>Temporary Directory.</description>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://10.1.226.15:54310</value>
</property>
</configuration>
The hdfs-site.xml setting:
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
</description>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///edw/hadoop-data/dfs/namenode</value>
<description>Determines where on the local filesystem the DFS name node should store the name table(fsimage). If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy.
</description>
</property>
<property>
<name>dfs.blocksize</name>
<value>67108864</value>
</property>
<property>
<name>dfs.namenode.handler.count</name>
<value>100</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///edw/hadoop-data/dfs/datanode</value>
<description>Determines where on the local filesystem an DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored.
</description>
</property>
</configuration>
The hdfs user doesn't have permission to write to the Hadoop folders.
Let's say you are using the hdfs user and the hadoop group to run the Hadoop setup. Then you need to run the following command:
sudo chown -R hdfs:hadoop <directory-name>
Also give the appropriate read/write/execute permissions to the user you are logged in as.
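Applied to the paths shown in this question, that would look roughly like this (an assumption on my part that both the install directory and the data directory should belong to the hadoop group):
# Let the hdfs user write logs under the installation directory
sudo chown -R hdfs:hadoop /edw/hadoop-2.7.2
# The data directories from hdfs-site.xml must be writable as well
sudo chown -R hdfs:hadoop /edw/hadoop-data
sudo chmod -R 775 /edw/hadoop-data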
I have fixed the problem, thank you guys.
A HADOOP_LOG_DIR setting in /etc/profile is not seen by hadoop-env.sh, so HADOOP_LOG_DIR is empty and start-dfs.sh falls back to the default set in hadoop-env.sh:
export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER
Since I run start-dfs.sh as the hdfs user, HADOOP_LOG_DIR resolved to /hdfs, which that user has no privilege to create.
Here is my new solution: edit ${HADOOP_HOME}/etc/hadoop/hadoop-env.sh and set HADOOP_LOG_DIR explicitly:
HADOOP_LOG_DIR="/var/log/hadoop"
export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER
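A minimal sketch of the directory side of that fix, assuming all daemon users belong to the hadoop group:
# Create the shared log root referenced by HADOOP_LOG_DIR in hadoop-env.sh
sudo mkdir -p /var/log/hadoop
# Let every daemon user in the hadoop group create its own $USER subdirectory
sudo chown root:hadoop /var/log/hadoop
sudo chmod 775 /var/log/hadoop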
I have set up Hadoop locally on my Mac. When I start DFS with the start-dfs.sh command as a separate hadoop user, I get the following error in the terminal:
0.0.0.0: mkdir: /usr/local/Cellar/hadoop/2.3.0/libexec/logs: Permission denied
Does anyone know how I can change the log directory for Hadoop? I installed Hadoop using Homebrew. Here is the full output:
bash-3.2$ start-dfs.sh
14/03/31 09:04:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: mkdir: /usr/local/Cellar/hadoop/2.3.0/libexec/logs: Permission denied
localhost: chown: /usr/local/Cellar/hadoop/2.3.0/libexec/logs: No such file or directory
localhost: starting namenode, logging to /usr/local/Cellar/hadoop/2.3.0/libexec/logs/hadoop-hadoop-namenode-mymac.local.out
localhost: /usr/local/Cellar/hadoop/2.3.0/libexec/sbin/hadoop-daemon.sh: line 151: /usr/local/Cellar/hadoop/2.3.0/libexec/logs/hadoop-hadoop-namenode-mymac.local.out: No such file or directory
localhost: head: /usr/local/Cellar/hadoop/2.3.0/libexec/logs/hadoop-hadoop-namenode-mymac.local.out: No such file or directory
localhost: /usr/local/Cellar/hadoop/2.3.0/libexec/sbin/hadoop-daemon.sh: line 166: /usr/local/Cellar/hadoop/2.3.0/libexec/logs/hadoop-hadoop-namenode-mymac.local.out: No such file or directory
localhost: /usr/local/Cellar/hadoop/2.3.0/libexec/sbin/hadoop-daemon.sh: line 167: /usr/local/Cellar/hadoop/2.3.0/libexec/logs/hadoop-hadoop-namenode-mymac.local.out: No such file or directory
localhost: mkdir: /usr/local/Cellar/hadoop/2.3.0/libexec/logs: Permission denied
localhost: chown: /usr/local/Cellar/hadoop/2.3.0/libexec/logs: No such file or directory
The error indicates a permissions problem. The hadoop user needs the proper privileges to the hadoop folder. Try running the following in Terminal:
sudo chown -R hadoop /usr/local/Cellar/hadoop/2.3.0/
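Alternatively (my suggestion, not part of the original answer), you can leave the Homebrew tree owned by root and move the logs somewhere the hadoop user already owns, using the same HADOOP_LOG_DIR setting mentioned earlier in this thread; the hadoop-env.sh path below is an assumption based on the Homebrew layout:
# 1. Create a log directory the hadoop user owns
sudo mkdir -p /usr/local/var/log/hadoop
sudo chown -R hadoop /usr/local/var/log/hadoop
# 2. In /usr/local/Cellar/hadoop/2.3.0/libexec/etc/hadoop/hadoop-env.sh, redirect the daemon logs there:
#    export HADOOP_LOG_DIR=/usr/local/var/log/hadoop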
Hi, I can't resolve my problem when running Hadoop with start-all.sh:
rochdi#127:~$ start-all.sh
/usr/local/hadoop/bin/hadoop-daemon.sh: line 62: [: localhost: integer expression expected
starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-rochdi-namenode-127.0.0.1
localhost: /usr/local/hadoop/bin/hadoop-daemon.sh: line 62: [: localhost: integer expression expected
localhost: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-rochdi-datanode-127.0.0.1
localhost: /usr/local/hadoop/bin/hadoop-daemon.sh: line 62: [: localhost: integer expression expected
localhost: starting secondarynamenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-rochdi-secondarynamenode-127.0.0.1
/usr/local/hadoop/bin/hadoop-daemon.sh: line 62: [: localhost: integer expression expected
starting jobtracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-rochdi-jobtracker-127.0.0.1
localhost: /usr/local/hadoop/bin/hadoop-daemon.sh: line 62: [: localhost: integer expression expected
localhost: starting tasktracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-rochdi-tasktracker-127.0.0.1
localhost: Error: Could not find or load main class localhost
My PATH:
rochdi#127:~$ echo "$PATH"
/usr/lib/lightdm/lightdm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/local/hadoop/bin:/usr/local/hadoop/lib
Before the error appeared, I had changed my hosts file to:
127.0.0.1 localhost
127.0.1.1 ubuntu.local ubuntu
and I configured my .bashrc file with:
export HADOOP_PREFIX=/usr/local/hadoop
export PATH=$PATH:$HADOOP_PREFIX/bin
export JAVA_HOME=/usr/lib/jvm/java-7-oracle
and the jps command shows:
rochdi#127:~$ jps
3427 Jps
Please help me.
I resolved the problem by changing my hostname, and all nodes now start, but when I stop them I get this message:
rochdi#acer:~$ jps
4605 NameNode
5084 SecondaryNameNode
5171 JobTracker
5460 Jps
5410 TaskTracker
rochdi#acer:~$ stop-all.sh
stopping jobtracker
localhost: no tasktracker to stop
stopping namenode
localhost: no datanode to stop
localhost: stopping secondarynamenode
After extracting the Hadoop tar file,
open the ~/.bashrc file and add the following at the end of the file:
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export HADOOP_INSTALL=$HADOOP_HOME
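After saving, reload the file in the current shell and confirm the variables took effect (a quick verification step I am adding here):
# Pick up the new environment variables in this session
source ~/.bashrc
# Both should point at the Hadoop install if the exports are correct
echo $HADOOP_HOME
hadoop version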
Then edit the file $HADOOP_HOME/etc/hadoop/core-site.xml, add the following configuration, and start Hadoop:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
If the problem still persists, use this link: click here
Check your hosts file and your *-site.xml files for hostnames. This error occurs when the hostnames are not defined properly.
Use the server's IP address instead of localhost in core-site.xml, and check your entries in /etc/hosts and the slaves file.
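As a concrete sketch (the hostnames and addresses below are invented for illustration), the names simply have to agree across the files:
# /etc/hosts on every node -- the cluster names must resolve to real interfaces, not 127.0.0.1
192.168.1.10   master
192.168.1.11   slave1
# etc/hadoop/slaves on the master -- one worker hostname per line
slave1
core-site.xml should then point fs.default.name at the same name (hdfs://master:9000 in this sketch) rather than at localhost.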
The OP posted that the main error was fixed by changing the hostname (answered Dec 6 '13 at 14:19). This suggests issues with the /etc/hosts file and the 'slaves' file on the master. Remember that each hostname in the cluster must match the values in those files. When an XML file is wrongly configured, it normally shows up as connectivity issues between the services' ports.
From the message "no ${SERVICE} to stop", most probably the previous start-all.sh left the Java processes orphaned. The solution is to stop each process manually, e.g.
$ kill -9 4605
and then run the start-all.sh command again.
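A small sketch of that cleanup, using the PIDs from the jps listing above:
# List the surviving Hadoop processes
jps
# Kill each leftover daemon by PID (4605 was the NameNode above)
kill -9 4605 5084 5171 5410
# Then bring everything back up cleanly
start-all.sh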
It's worth mentioning that this is an old question; Hadoop versions 2 and 3 are available now, and I strongly recommend using one of the latest versions.