Hadoop setup single node: PriviledgedActionException

I followed the single-node setup guide step by step: http://hadoop.apache.org/docs/stable/single_node_setup.html
I set up Hadoop on Red Hat using the root account.
Everything went fine, but after running ./start-all.sh the JobTracker log looks like this:
2013-06-11 08:31:00,194 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8088. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-06-11 08:31:01,195 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8088. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-06-11 08:31:01,196 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:root cause:java.net.ConnectException: Call to localhost/127.0.0.1:8088 failed on connection exception: java.net.ConnectException: Connection refused
2013-06-11 08:31:01,196 INFO org.apache.hadoop.mapred.JobTracker: Problem connecting to HDFS Namenode... re-trying
java.net.ConnectException: Call to localhost/127.0.0.1:8088 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1136)
at org.apache.hadoop.ipc.Client.call(Client.java:1112)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
at com.sun.proxy.$Proxy7.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:411)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:135)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:276)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:241)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:100)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1411)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1429)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:123)
at org.apache.hadoop.mapred.JobTracker$2.run(JobTracker.java:1908)
at org.apache.hadoop.mapred.JobTracker$2.run(JobTracker.java:1906)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.mapred.JobTracker.initializeFilesystem(JobTracker.java:1906)
at org.apache.hadoop.mapred.JobTracker.offerService(JobTracker.java:2324)
at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4792)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:692)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:511)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:481)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:453)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:579)
at org.apache.hadoop.ipc.Client$Connection.access$2100(Client.java:202)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1243)
at org.apache.hadoop.ipc.Client.call(Client.java:1087)
... 20 more
The DataNode log looks like this:
2013-06-11 08:30:25,051 INFO org.apache.hadoop.ipc.RPC: Server at localhost/127.0.0.1:8088 not available yet, Zzzzz...
2013-06-11 08:30:27,052 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8088. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-06-11 08:30:28,053 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8088. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-06-11 08:30:29,054 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8088. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-06-11 08:30:30,055 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8088. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-06-11 08:30:31,056 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8088. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-06-11 08:30:32,057 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8088. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-06-11 08:30:33,058 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8088. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-06-11 08:30:34,058 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8088. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-06-11 08:30:35,059 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8088. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-06-11 08:30:36,060 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8088. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-06-11 08:30:36,061 INFO org.apache.hadoop.ipc.RPC: Server at localhost/127.0.0.1:8088 not available yet, Zzzzz...
My configuration XML (core-site.xml, mapred-site.xml, and hdfs-site.xml, respectively):
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://127.0.0.1:8088</value>
  </property>
</configuration>
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>127.0.0.1:8089</value>
  </property>
</configuration>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
My hosts file:
192.168.1.211 kkapps # Added by NetworkManager
127.0.0.1 localhost
::1 localhost
Thanks for investigating!

Open services.msc from Run.
Find the CYGWIN sshd service. Right-click it, go to Properties and then to the Log On tab.
Ensure that "This account" is selected, with the username set to ./cyg_server (this can vary depending on what you entered while setting up ssh localhost).
Now enter the password in the fields and start the sshd service.
Then try running the code again.
This should work.
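Independent of sshd, a "Connection refused" on port 8088 means nothing is listening where fs.default.name points, i.e. the NameNode never came up. A quick way to confirm that on the same box (a sketch, assuming Hadoop 1.x; adjust the log path to your install):

jps | grep -i namenode                        # is a NameNode process running at all?
netstat -tlnp | grep 8088                     # is anything listening on the port being dialed?
tail -n 50 $HADOOP_HOME/logs/*namenode*.log   # if not, this usually says why

If the NameNode refuses to start because HDFS was never initialized, formatting it (this erases HDFS data, which is fine on a fresh setup) and restarting usually gets a single-node install going:

hadoop namenode -format
./start-all.sh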

Related

INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 0 time(s)

I configured an Apache Hadoop cluster with 1 NameNode and 2 DataNodes in VMware Workstation. The NameNode is working fine, and I set up passwordless SSH login too, but when I try to start a DataNode I get the following error.
Both DataNode logs show the retry error for the NameNode, even though I can ping and connect to the NameNode without any problem.
Below is the log from the DataNode:
2015-11-14 19:54:22,622 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = dn2.hcluster.com/192.168.155.133
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG: java = 1.8.0_65
************************************************************/
2015-11-14 19:54:23,447 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2015-11-14 19:54:23,485 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2015-11-14 19:54:23,486 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-11-14 19:54:23,486 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2015-11-14 19:54:23,876 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2015-11-14 19:54:25,720 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2015-11-14 19:54:27,723 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2015-11-14 19:54:28,726 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2015-11-14 19:54:29,729 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2015-11-14 19:54:30,733 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2015-11-14 19:54:31,753 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2015-11-14 19:54:32,755 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2015-11-14 19:54:33,758 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2015-11-14 19:54:34,762 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2015-11-14 19:54:35,764 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2015-11-14 19:54:35,922 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Call to nn1.hcluster.com/192.168.155.131:9000 failed on local exception: java.net.NoRouteToHostException: No route to host
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1150)
at org.apache.hadoop.ipc.Client.call(Client.java:1118)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
at com.sun.proxy.$Proxy4.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.checkVersion(RPC.java:422)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:414)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:392)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:374)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:453)
at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:335)
at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:300)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:385)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)
Caused by: java.net.NoRouteToHostException: No route to host
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:511)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:481)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:457)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:583)
at org.apache.hadoop.ipc.Client$Connection.access$2200(Client.java:205)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1249)
at org.apache.hadoop.ipc.Client.call(Client.java:1093)
... 16 more
2015-11-14 19:54:35,952 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at dn2.hcluster.com/192.168.155.133
************************************************************/
The NameNode and its GUI are reachable from DataNodes 1 and 2, and all three machines can communicate with each other via ping and passwordless SSH. Please help.
core-site.xml on the NameNode:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://nn01.hcluster.com:9000</value>
  </property>
</configuration>
Make sure your NameNode is running fine. Otherwise, check the machine IP and hostname in the /etc/hosts file.
Make sure you have added the hostname "nn01.hcluster.com" there.
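To verify this from a DataNode, something like the following helps (a sketch; note the DataNode logs dial nn1.hcluster.com while core-site.xml says nn01.hcluster.com, so make sure both names resolve to the NameNode, or better, use one name consistently):

getent hosts nn1.hcluster.com    # does the name resolve, and to the expected IP?
nc -zv nn1.hcluster.com 9000     # can this host actually reach the RPC port?

A NoRouteToHostException while ping works usually points at a firewall on the NameNode rather than DNS, so it is worth checking iptables there as well.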

Connection refused on hadoop fs -ls

I have been following some instructions on setting up a Vagrant single-node cluster and got through the instructions once without issue. However, I am running into several problems when trying to repeat the same steps. Now I am getting a connection refused error when trying to run hadoop fs -ls /
$ hadoop fs -ls /
Warning: $HADOOP_HOME is deprecated.
15/01/18 04:09:21 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/01/18 04:09:22 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/01/18 04:09:23 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/01/18 04:09:24 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/01/18 04:09:25 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/01/18 04:09:26 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/01/18 04:09:27 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/01/18 04:09:28 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/01/18 04:09:29 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/01/18 04:09:30 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
ls: Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused
When I run jps I get the following:
$jps
4176 JobTracker
4313 TaskTracker
3970 DataNode
4581 Jps
4094 SecondaryNameNode
I'm really at a loss as to what I have missed to cause this different behavior. Any help would be greatly appreciated.
It looks like your namenode is not up. Make sure you format the namenode (a command sketch follows this list):
Stop the cluster and stop all daemons.
Format the namenode.
Once you have formatted it, try starting the namenode first, then the other daemons.
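For Hadoop 1.x that sequence looks roughly like this (a sketch; formatting erases all HDFS data, which is normally acceptable on a fresh single-node setup):

stop-all.sh                        # stop every daemon
hadoop namenode -format            # re-initialize the namenode storage
hadoop-daemon.sh start namenode    # bring the namenode up first
start-all.sh                       # start the rest; already-running daemons are skipped
jps                                # NameNode should now appear in the list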
Looking at the daemons that are running, there is no namenode. I would suggest you go ahead and restart all the daemons. Below are the commands, which restart the daemons so the namenode will be up and running. Hope this helps!
sudo service hadoop-master stop
sudo service hadoop-master start
hadoop dfsadmin -safemode leave
sudo jps

Error when copying the file into HDFS

The Hadoop cluster started normally and jps shows the DataNodes and TaskTracker running correctly.
When I copy a file into HDFS, this is the error message I get:
hduser@nn:~$ hadoop fs -put gettysburg.txt /user/hduser/getty/gettysburg.txt
Warning: $HADOOP_HOME is deprecated.
14/08/24 21:12:50 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:51 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:52 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:53 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:54 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:55 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:56 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:57 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:58 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:59 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
Bad connection to FS. command aborted. exception: Call to nn/10.10.1.1:54310 failed on connection exception: java.net.ConnectException: Connection refused
hduser@nn:~$
I am able to SSH from the NN to the DNs and vice versa, and between the DNs.
I have changed /etc/hosts on all NNs and DNs as below:
#127.0.0.1 localhost loghost localhost.project1.ch-geni-net.emulab.net
#10.10.1.1 NN-Lan NN-0 NN
#10.10.1.2 DN1-Lan DN1-0 DN1
#10.10.1.3 DN2-Lan DN2-0 DN2
#10.10.1.5 DN4-Lan DN4-0 DN4
#10.10.1.4 DN3-Lan DN3-0 DN3
10.10.1.1 nn
10.10.1.2 dn1
10.10.1.3 dn2
10.10.1.4 dn3
10.10.1.5 dn4
My mapredsite.xml looks like this.
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://nn:54310</value>
    <description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The uri's scheme determines the config property (fs.SCHE$
  </property>
</configuration>
Configured /usr/local/hadoop/conf/masters:
hduser@nn:/usr/local/hadoop/conf$ vi masters
#localhost
nn
hduser@dn1:~$ jps
9975 DataNode
10186 Jps
10070 TaskTracker
hduser@dn1:~$
hduser@nn:~$ jps
5979 JobTracker
5891 SecondaryNameNode
6159 Jps
hduser@nn:~$
What is the problem?
Check the fs.default.name property in your core-site.xml file. The value should be hdfs://NN:port.
Check the following (see the sketch after this list):
core-site.xml - the HDFS URL mentioned - hdfs://ip:port
Format the namenode
Check whether safemode is on
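Roughly, those checks translate to the following on the NameNode (a sketch, assuming Hadoop 1.x and the conf path from the question; the format step erases HDFS data):

grep -A1 fs.default.name /usr/local/hadoop/conf/core-site.xml   # client and cluster must agree on this URI
hadoop namenode -format                                         # only if the namenode will not start at all
hadoop dfsadmin -safemode get                                   # a namenode in safemode refuses writes like -put
hadoop dfsadmin -safemode leave                                 # if the previous command reports ON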

Hadoop environment is not available until I format the namenode, every time

I am using a single-node Hadoop cluster on Ubuntu 13.10 with Hadoop 1.2.1.
I always have the same problem: whenever I restart my computer and want to get into the Hadoop environment, I log in to a terminal and type su, and I get an error saying '/bin' is not in the PATH. I run export PATH=/usr/bin:/bin and then it works.
Then, once in su, when I type hadoop fs -ls I get "Retrying connect" errors (tried 1... 2...) and it finally fails:
user@ubuntu1310:~$ su
Command 'su' is available in '/bin/su'
The command could not be located because '/bin' is not included in the PATH environment variable.
su: command not found
user@ubuntu1310:~$ export PATH=/user/bin:bin
user@ubuntu1310:~$ su
Command 'su' is available in '/bin/su'
The command could not be located because '/bin' is not included in the PATH environment variable.
su: command not found
user@ubuntu1310:~$ su
Command 'su' is available in '/bin/su'
The command could not be located because '/bin' is not included in the PATH environment variable.
su: command not found
user@ubuntu1310:~$ export PATH=/usr/bin:bin
user@ubuntu1310:~$ su
Command 'su' is available in '/bin/su'
The command could not be located because '/bin' is not included in the PATH environment variable.
su: command not found
user@ubuntu1310:~$ export PATH=/usr/bin:/bin
user@ubuntu1310:~$ su
Password:
root@ubuntu1310:/home/user# start-all.sh
starting namenode, logging to /usr/lib/hadoop/libexec/../logs/hadoop-root-namenode-ubuntu1310.out
root@localhost's password:
localhost: starting datanode, logging to /usr/lib/hadoop/libexec/../logs/hadoop-root-datanode-ubuntu1310.out
root@localhost's password:
localhost: starting secondarynamenode, logging to /usr/lib/hadoop/libexec/../logs/hadoop-root-secondarynamenode-ubuntu1310.out
starting jobtracker, logging to /usr/lib/hadoop/libexec/../logs/hadoop-root-jobtracker-ubuntu1310.out
root@localhost's password:
localhost: starting tasktracker, logging to /usr/lib/hadoop/libexec/../logs/hadoop-root-tasktracker-ubuntu1310.out
root@ubuntu1310:/home/user# hadoop fs -ls
14/01/09 05:46:56 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:46:57 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:46:58 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:46:59 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:47:00 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:47:01 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:47:02 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:47:03 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:47:04 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:47:05 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
ls: Call to localhost/127.0.0.1:8020 failed on connection exception: java.net.ConnectException: Connection refused
root@ubuntu1310:/home/user# hadoop fs -ls
14/01/09 05:54:58 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:54:59 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:55:00 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:55:01 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:55:02 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:55:03 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:55:04 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:55:05 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:55:06 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:55:07 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
ls: Call to localhost/127.0.0.1:8020 failed on connection exception: java.net.ConnectException: Connection refused
root@ubuntu1310:/home/user#
How can I avoid these two errors? It is very tedious to have to format the namenode every time.
First of all, su has nothing to do with Hadoop. Next, you are probably getting these errors because you have not specified the hadoop.tmp.dir property in your core-site.xml. The value of this property defaults to /tmp, which gets emptied at each restart. Thus you lose all the HDFS metadata and data, and have to reformat.
It is always good practice to add this property. It is also advisable to add the dfs.name.dir and dfs.data.dir properties to your hdfs-site.xml file. A sketch follows.
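For example (a sketch; the /app/hadoop/... paths are placeholders, any directory that survives reboots will do), in core-site.xml:

<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
</property>

and in hdfs-site.xml:

<property>
  <name>dfs.name.dir</name>
  <value>/app/hadoop/dfs/name</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/app/hadoop/dfs/data</value>
</property>

Create the directories, chown them to the user running Hadoop, and reformat the namenode once after changing dfs.name.dir.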
It also seems like you have not set up passwordless login while installing Hadoop; it is recommended to set that up.
For a proper installation guide you can refer to
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
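Setting up passwordless login for the user that starts the daemons is typically just (a sketch; run as that user):

ssh-keygen -t rsa -P ""                          # key with an empty passphrase
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys  # authorize it for localhost logins
chmod 600 ~/.ssh/authorized_keys
ssh localhost                                    # should now log in without a password prompt

After this, start-all.sh should stop asking for root@localhost's password at every step.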

Hadoop error dfs -copyFromLocal

While moving a file into Hadoop from the temp directory, I used the command below:
[Divya@localhost hadoop]$ bin/hadoop dfs -copyFromLocal /tmp/gutenberg /home/Divya/gutenberg
How do I resolve this error?
13/07/03 14:42:28 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/07/03 14:42:29 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/07/03 14:42:30 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/07/03 14:42:31 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/07/03 14:42:32 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/07/03 14:42:33 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/07/03 14:42:34 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/07/03 14:42:35 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/07/03 14:42:36 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/07/03 14:42:37 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
copyFromLocal: Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused
Check whether the namenode process is running on your localhost and listening on port 54310.
Check whether the Hadoop daemon services are running properly. Use the jps command as root to check whether they are running. Make sure the cluster is properly set up.
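Concretely, both checks might look like this on the machine that should host the namenode (a sketch; 54310 comes from the fs.default.name value in the error above):

jps                          # NameNode must appear in this list
netstat -tln | grep 54310    # something must be listening here, otherwise "Connection refused" is expected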
