Error when copying a file into HDFS - hadoop

The Hadoop cluster started normally and jps shows the DataNodes and TaskTrackers running correctly.
When I copy a file into HDFS, this is the error message I get:
hduser@nn:~$ hadoop fs -put gettysburg.txt /user/hduser/getty/gettysburg.txt
Warning: $HADOOP_HOME is deprecated.
14/08/24 21:12:50 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:51 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:52 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:53 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:54 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:55 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:56 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:57 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:58 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:59 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
Bad connection to FS. command aborted. exception: Call to nn/10.10.1.1:54310 failed on connection exception: java.net.ConnectException: Connection refused
hduser@nn:~$
I am able to SSH from the NN to the DNs and vice versa, and between the DNs.
I have changed /etc/hosts on all NNs and DNs as below:
#127.0.0.1 localhost loghost localhost.project1.ch-geni-net.emulab.net
#10.10.1.1 NN-Lan NN-0 NN
#10.10.1.2 DN1-Lan DN1-0 DN1
#10.10.1.3 DN2-Lan DN2-0 DN2
#10.10.1.5 DN4-Lan DN4-0 DN4
#10.10.1.4 DN3-Lan DN3-0 DN3
10.10.1.1 nn
10.10.1.2 dn1
10.10.1.3 dn2
10.10.1.4 dn3
10.10.1.5 dn4
My mapred-site.xml looks like this:
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://nn:54310</value>
<description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The uri's scheme determines the config property (fs.SCHE$
</property>
</configuration>
Configured /usr/local/hadoop/conf/masters:
hduser@nn:/usr/local/hadoop/conf$ vi masters
#localhost
nn
hduser@dn1:~$ jps
9975 DataNode
10186 Jps
10070 TaskTracker
hduser@dn1:~$
hduser@nn:~$ jps
5979 JobTracker
5891 SecondaryNameNode
6159 Jps
hduser@nn:~$
What is the problem?

Check the fs.default.name property in your core-site.xml file. The value should be hdfs://NN:port.
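For this cluster that would look like the following (a minimal sketch; the host nn and port 54310 are taken from the question's own config):
<property>
<name>fs.default.name</name>
<value>hdfs://nn:54310</value>
</property>
Also note that fs.default.name and hadoop.tmp.dir normally belong in core-site.xml rather than mapred-site.xml, so double-check which file these properties are actually in.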

Check the following (example commands below):
core-site.xml - the HDFS URL mentioned - hdfs://ip:port
Format the namenode
Check if safe mode is on
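With the Hadoop 1.x commands used elsewhere in this thread, those checks could look like this (run as hduser on the NameNode; formatting erases all HDFS metadata, so only do it on a cluster whose data is disposable):
stop-all.sh
hadoop namenode -format
start-all.sh
hadoop dfsadmin -safemode get
hadoop dfsadmin -safemode leave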

Related

Not able to run datanode in multinode hadoop cluster setup, need suggestion

I am trying to set up a multi-node Hadoop cluster, but the DataNode is failing to start; I need help with this. Below are the details. No other setup has been done apart from this. I have only one DataNode and one NameNode set up as of now.
NAMENODE setup -
CORE-SITE.xml
<property>
<name>fs.defult.name</name>
<value>hdfs://192.168.1.7:9000</value>
</property>
HDFS-SITE.XML
<property>
<name>dfs.name.dir</name>
<value>/data/namenode</value>
</property>
DATANODE setup -
CORE-SITE.xml
<property>
<name>fs.defult.name</name>
<value>hdfs://192.168.1.7:9000</value>
</property>
HDFS-SITE.XML
<property>
<name>dfs.data.dir</name>
<value>/data/datanode</value>
</property>
When I run the NameNode it runs fine; however, when I try to run the DataNode on the other machine, whose IP is 192.168.1.8, it fails and the log says:
2017-05-13 21:26:27,744 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2017-05-13 21:26:27,862 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2017-05-13 21:26:32,908 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/192.168.1.7:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-05-13 21:26:34,979 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/192.168.1.7:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-05-13 21:26:36,041 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/192.168.1.7:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-05-13 21:26:37,093 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/192.168.1.7:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-05-13 21:26:38,162 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/192.168.1.7:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-05-13 21:26:39,238 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/192.168.1.7:9000. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
and the DataNode dies.
Is there anything else to set up?
Let me know if any other details are required. Are there any other files to change? I am using CentOS 7 to set up the environment. I also formatted the NameNode more than 2-3 times, and the permissions are proper. It looks like only a connectivity issue; however, when I try to scp from master to slave (NameNode to DataNode) it works fine.
Please suggest if there is any other setup to be done to make this successful!
There is a typo in the property name of your configuration: an 'a' is missing in fs.defult.name (it should be fs.default.name).
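The corrected property, using the question's own host and port, would be:
<property>
<name>fs.default.name</name>
<value>hdfs://192.168.1.7:9000</value>
</property>
The DataNode log also shows it resolving the address as localhost/192.168.1.7, so double-check that the hostname in core-site.xml and /etc/hosts agree on both machines.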

Connection refused on hadoop fs -ls

I have been following some instructions on setting up a Vagrant single-node cluster and got through the instructions once without issue. However, I am running into several problems when trying to repeat the same instructions. Now I am getting a connection refused error when trying to run hadoop fs -ls /
$ hadoop fs -ls /
Warning: $HADOOP_HOME is deprecated.
15/01/18 04:09:21 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/01/18 04:09:22 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/01/18 04:09:23 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/01/18 04:09:24 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/01/18 04:09:25 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/01/18 04:09:26 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/01/18 04:09:27 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/01/18 04:09:28 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/01/18 04:09:29 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/01/18 04:09:30 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
ls: Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused
When I run jps I get the following:
$jps
4176 JobTracker
4313 TaskTracker
3970 DataNode
4581 Jps
4094 SecondaryNameNode
I'm really at a loss as to what I have missed to cause this different behavior. Any help would be greatly appreciated.
Looks like your namenode is not up. Make sure you format the namenode (see the command sketch below):
stop the cluster and stop all daemons
format the namenode
Once you format, try starting the namenode first, then the other daemons.
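A minimal sketch with the Hadoop 1.x scripts (again, formatting wipes all HDFS metadata, so only do this if the filesystem is disposable):
stop-all.sh
hadoop namenode -format
start-all.sh
jps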
Looking at the daemons that are running, there is no NameNode. I would suggest you go ahead and restart all the daemons. Below are the commands, which just restart the daemons; the NameNode will then be up and running. Hope this helps!
sudo service hadoop-master stop
sudo service hadoop-master start
hadoop dfsadmin -safemode leave
sudo jps

Hadoop environment is not available until I format the namenode, every time

I am using a single-node Hadoop cluster on Ubuntu 13.10 with Hadoop 1.2.1.
I always have the following problem: whenever I restart my computer and want to enter the Hadoop environment, I log in to a terminal and type su, and I get an error like "bin/su not in ...". I then fix the PATH with export PATH=/usr/bin:/bin and it works. After getting into su, when I type hadoop fs -ls I get the retrying errors ("Already tried 1... 2...") and finally it fails:
user@ubuntu1310:~$ su
Command 'su' is available in '/bin/su'
The command could not be located because '/bin' is not included in the PATH environment variable.
su: command not found
user@ubuntu1310:~$ export PATH=/user/bin:bin
user@ubuntu1310:~$ su
Command 'su' is available in '/bin/su'
The command could not be located because '/bin' is not included in the PATH environment variable.
su: command not found
user@ubuntu1310:~$ su
Command 'su' is available in '/bin/su'
The command could not be located because '/bin' is not included in the PATH environment variable.
su: command not found
user@ubuntu1310:~$ export PATH=/usr/bin:bin
user@ubuntu1310:~$ su
Command 'su' is available in '/bin/su'
The command could not be located because '/bin' is not included in the PATH environment variable.
su: command not found
user@ubuntu1310:~$ export PATH=/usr/bin:/bin
user@ubuntu1310:~$ su
Password:
root@ubuntu1310:/home/user# start-all.sh
starting namenode, logging to /usr/lib/hadoop/libexec/../logs/hadoop-root-namenode-ubuntu1310.out
root@localhost's password:
localhost: starting datanode, logging to /usr/lib/hadoop/libexec/../logs/hadoop-root-datanode-ubuntu1310.out
root@localhost's password:
localhost: starting secondarynamenode, logging to /usr/lib/hadoop/libexec/../logs/hadoop-root-secondarynamenode-ubuntu1310.out
starting jobtracker, logging to /usr/lib/hadoop/libexec/../logs/hadoop-root-jobtracker-ubuntu1310.out
root@localhost's password:
localhost: starting tasktracker, logging to /usr/lib/hadoop/libexec/../logs/hadoop-root-tasktracker-ubuntu1310.out
root@ubuntu1310:/home/user# hadoop fs -ls
14/01/09 05:46:56 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:46:57 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:46:58 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:46:59 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:47:00 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:47:01 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:47:02 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:47:03 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:47:04 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:47:05 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
ls: Call to localhost/127.0.0.1:8020 failed on connection exception: java.net.ConnectException: Connection refused
root@ubuntu1310:/home/user# hadoop fs -ls
14/01/09 05:54:58 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:54:59 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:55:00 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:55:01 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:55:02 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:55:03 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:55:04 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:55:05 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:55:06 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/01/09 05:55:07 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
ls: Call to localhost/127.0.0.1:8020 failed on connection exception: java.net.ConnectException: Connection refused
root@ubuntu1310:/home/user#
How can I avoid these two errors? It is very hard for me to format the namenode every time.
First of all, su has nothing to do with Hadoop. Next, you are probably getting these errors because you have not specified the hadoop.tmp.dir property in your core-site.xml. The value of this property defaults to /tmp, which gets emptied at each restart. Thus you lose all the HDFS metadata and data and have to reformat it.
It is always good practice to add this property. It is also advisable to add the dfs.name.dir and dfs.data.dir properties to your hdfs-site.xml file.
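For example (a sketch; the /app/hadoop paths are illustrative, borrowed from the first question in this thread, and the directories must exist and be owned by the Hadoop user):
In core-site.xml:
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
</property>
In hdfs-site.xml:
<property>
<name>dfs.name.dir</name>
<value>/app/hadoop/name</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/app/hadoop/data</value>
</property>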
It seems like you have not set up passwordless login while installing Hadoop; it is recommended to set up passwordless login (sketched below).
For a proper installation guide you can refer to
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
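The usual setup, as in the tutorial linked above, is along these lines (run as the user that starts the daemons; this generates a passphrase-less key and authorizes it for localhost):
ssh-keygen -t rsa -P ""
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh localhost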

Hadoop - Pseudo-Distributed Operation

I am trying to copy a file quangle.txt from my local system to Hadoop using the command below:
testuser@ubuntu:~/Downloads/hadoop/bin$ ./hadoop fs -copyFromLocal Desktop/quangle.txt hdfs://localhost/testuser/quangle.txt
13/11/28 06:35:50 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/11/28 06:35:51 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/11/28 06:35:52 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/11/28 06:35:53 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/11/28 06:35:54 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/11/28 06:35:55 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/11/28 06:35:56 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/11/28 06:35:57 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/11/28 06:35:58 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/11/28 06:35:59 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
copyFromLocal: Call to localhost/127.0.0.1:8020 failed on connection exception: java.net.ConnectException: Connection refused
I tried to ping 127.0.0.1 and got a response. Please advise.
Just add the correct port to the file path after localhost:
hdfs://localhost:9000/testuser/quangle.txt
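So the full command would be (assuming the NameNode is on port 9000; use whatever port your fs.default.name specifies):
./hadoop fs -copyFromLocal Desktop/quangle.txt hdfs://localhost:9000/testuser/quangle.txt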
Looks like your NameNode isn't running. Try running the jps command and see if NameNode is listed in the running services (or you might have to run ps axww | grep NameNode if the NameNode was started by/under a different user).
Does sudo netstat -atnp | grep 8020 yield any results?
If the NameNode is refusing to start, then copy your NameNode logs into your original question (or post a new question, after first searching for the error to see if someone else has had this problem).
Try running jps to see the currently running Java processes.
Are all Hadoop processes running, especially the NameNode?
If yes, you should get this output (with different process ids):
10015 JobTracker
9670 TaskTracker
9485 DataNode
10380 Jps
9574 SecondaryNameNode
9843 NameNode
I think you can use hadoop fs -put ~/Desktop/quangle.txt /testuser; after it is copied, you can look it up via hadoop fs -ls /testuser in the /testuser directory.
Create the target directory first with the command hadoop fs -mkdir testuser and then try; it worked for me that way (full sequence below).
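Putting those suggestions together, the sequence would be (the /testuser path is taken from the question; adjust to your own layout):
hadoop fs -mkdir /testuser
hadoop fs -put ~/Desktop/quangle.txt /testuser
hadoop fs -ls /testuser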
Maybe there is something wrong with your setup for pseudo-distributed mode. It should be configured in this order (a config sketch follows):
Fill in the configuration files: core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml.
Configure SSH.
Format the HDFS filesystem.
Start and stop the daemons.
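For the Hadoop 1.x setup this question is using (note the JobTracker/TaskTracker daemons), a minimal pseudo-distributed configuration along the lines of the Apache single-node guide would be:
core-site.xml:
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
hdfs-site.xml:
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
mapred-site.xml:
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>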

Hadoop setup single node: PriviledgedActionException

I followed the steps to set up Hadoop here: http://hadoop.apache.org/docs/stable/single_node_setup.html
I set up Hadoop on Red Hat with the root account.
Everything is OK, but when I run ./start-all.sh and watch the JobTracker log file, it looks like this:
2013-06-11 08:31:00,194 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8088. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-06-11 08:31:01,195 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8088. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-06-11 08:31:01,196 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:root cause:java.net.ConnectException: Call to localhost/127.0.0.1:8088 failed on connection exception: java.net.ConnectException: Connection refused
2013-06-11 08:31:01,196 INFO org.apache.hadoop.mapred.JobTracker: Problem connecting to HDFS Namenode... re-trying
java.net.ConnectException: Call to localhost/127.0.0.1:8088 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1136)
at org.apache.hadoop.ipc.Client.call(Client.java:1112)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
at com.sun.proxy.$Proxy7.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:411)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:135)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:276)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:241)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:100)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1411)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1429)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:123)
at org.apache.hadoop.mapred.JobTracker$2.run(JobTracker.java:1908)
at org.apache.hadoop.mapred.JobTracker$2.run(JobTracker.java:1906)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.mapred.JobTracker.initializeFilesystem(JobTracker.java:1906)
at org.apache.hadoop.mapred.JobTracker.offerService(JobTracker.java:2324)
at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4792)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:692)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:511)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:481)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:453)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:579)
at org.apache.hadoop.ipc.Client$Connection.access$2100(Client.java:202)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1243)
at org.apache.hadoop.ipc.Client.call(Client.java:1087)
... 20 more
Log of datanode like this:
2013-06-11 08:30:25,051 INFO org.apache.hadoop.ipc.RPC: Server at localhost/127.0.0.1:8088 not available yet, Zzzzz...
2013-06-11 08:30:27,052 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8088. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-06-11 08:30:28,053 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8088. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-06-11 08:30:29,054 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8088. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-06-11 08:30:30,055 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8088. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-06-11 08:30:31,056 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8088. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-06-11 08:30:32,057 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8088. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-06-11 08:30:33,058 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8088. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-06-11 08:30:34,058 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8088. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-06-11 08:30:35,059 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8088. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-06-11 08:30:36,060 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8088. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-06-11 08:30:36,061 INFO org.apache.hadoop.ipc.RPC: Server at localhost/127.0.0.1:8088 not available yet, Zzzzz...
My configuration XML:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://127.0.0.1:8088</value>
</property>
</configuration>
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>127.0.0.1:8089</value>
</property>
</configuration>
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
My hosts file:
192.168.1.211 kkapps # Added by NetworkManager
127.0.0.1 localhost
::1 localhost
Thanks for investigating!
Go to services.msc via Run.
Find the CYGWIN sshd service. Right-click on it, go to Properties and then to the Log On tab.
Ensure that "This account" is checked, with the username as ./cyg_server (this can vary depending on what you entered during ssh localhost setup).
Now put the password in the fields and start the sshd service.
Now try running the code.
This should work!
