I have set up Hadoop on my Mac, but when I run a job I get an error message like this:
13/02/18 04:05:52 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).
13/02/18 04:05:53 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s).
13/02/18 04:05:54 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s).
13/02/18 04:05:55 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s).
13/02/18 04:05:56 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s).
13/02/18 04:05:57 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s).
13/02/18 04:05:58 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s).
13/02/18 04:05:59 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s).
13/02/18 04:06:00 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s).
13/02/18 04:06:01 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 9 time(s).
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:546)
at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:318)
at org.apache.hadoop.examples.PiEstimator.estimate(PiEstimator.java:265)
at org.apache.hadoop.examples.PiEstimator.run(PiEstimator.java:342)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.examples.PiEstimator.main(PiEstimator.java:351)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1099)
at org.apache.hadoop.ipc.Client.call(Client.java:1075)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at $Proxy1.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:123)
at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:542)
... 17 more
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:489)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:434)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:560)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:184)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1206)
at org.apache.hadoop.ipc.Client.call(Client.java:1050)
... 31 more
Then I run jps to check the services, and the result is:
20635 Jps
20466 TaskTracker
20189 DataNode
20291 SecondaryNameNode
I don't know how to deal with this error. Could someone give me an answer?
Thanks a lot!
None of the processes that Hadoop should run are actually running, and the usual cause is an IP misconfiguration. I am not familiar with Mac OS, but on Linux and Windows we need to put the Hadoop entries for the connection in the hosts file (/etc/hosts), and I am fairly sure the same applies on a Mac. The point is that your Hadoop entry in that file must map to the actual IP of your machine, not to the local loopback address. For example:
hadoop-machine 127.0.0.1 --> (placing the loopback IP here is wrong, because Hadoop will try to connect to this IP).
Remove the 127.0.0.1 and place the actual IP of your machine in front of this entry instead. You can find the IP of your Mac easily. Here are some questions that are not directly related to Hadoop, but I think they will be helpful for you: Question 1, Question 2, Question 3
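As a concrete illustration (the hostname hadoop-machine and the interface en0 are assumptions, not from the original setup), you could find the Mac's address and check the resulting hosts entry like this:
# Find the machine's IPv4 address (en0 is usually the primary interface on a Mac):
$ ipconfig getifaddr en0
# /etc/hosts should then map the Hadoop hostname to that address, e.g.:
#   192.168.1.50   hadoop-machine
# Verify that the name resolves to the expected address:
$ ping -c 1 hadoop-machine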
This might help a bit. But first you have to remove the earlier installation using the command
rm -rf /usr/local/Cellar /usr/local/.git && brew cleanup
Then you can begin with a fresh installation of Hadoop on your Mac.
Step 1: Install Homebrew
$ ruby -e "$(curl -fsSL https://raw.github.com/mxcl/homebrew/go)"
Step 2: Install Hadoop
$ brew install hadoop
Assume that brew installs Hadoop 1.2.1
Step 3: Configure Hadoop
$ cd /usr/local/Cellar/hadoop/1.2.1/libexec
Add the following line to conf/hadoop-env.sh:
export HADOOP_OPTS="-Djava.security.krb5.realm= -Djava.security.krb5.kdc="
Add the following lines to conf/core-site.xml inside the configuration tags:
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
Add the following lines to conf/hdfs-site.xml inside the configuration tags:
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
Add the following lines to conf/mapred-site.xml inside the configuration tags:
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
Step 4: Enable SSH to localhost
Go to System Preferences > Sharing.
Make sure “Remote Login” is checked.
$ ssh-keygen -t rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
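To confirm that passwordless SSH to localhost actually works before starting the daemons (a quick check, not part of the original steps):
$ ssh localhost
$ exit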
Step 5: Format Hadoop filesystem
$ bin/hadoop namenode -format
Step 6: Start Hadoop
$ bin/start-all.sh
Make sure that all Hadoop processes are running:
$ jps
Run a Hadoop example:
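For instance, the pi estimator from the bundled examples jar (the jar name assumes the Hadoop 1.2.1 layout installed by brew; adjust the version number if yours differs):
$ bin/hadoop jar hadoop-examples-1.2.1.jar pi 10 100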
One more thing! “brew update” will update the Hadoop binaries to the most recent version (1.2.1 at present).
I had this same error: ConnectException: Connection refused in all the secondarynamenode log files.
But I also found this in the namenode's log file:
2015-10-25 16:35:15,720 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /private/tmp/hadoop-admin/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
I therefore did a
hadoop namenode -format
and the problem went away. So the error message was really just a symptom of the namenode not having started successfully.
Related
I've searched the web and Stack Overflow for a long time, but it was not useful.
I have installed Hadoop YARN 2.2.0 in a 2-node cluster setup, but something goes wrong.
When I start the Hadoop daemons using start-dfs.sh and start-yarn.sh on the master node, they run successfully on both master and slave (my master's hostname is RM and my slave's hostname is slv), and they can SSH to each other successfully. But when I want to run a job, this error appears:
14/01/02 04:22:53 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/01/02 04:22:56 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/root/QuasiMonteCarlo_1388665371850_813553673/in/part0 could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2477)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
at org.apache.hadoop.ipc.Client.call(Client.java:1347)
at org.apache.hadoop.ipc.Client.call(Client.java:1300)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:622)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1226)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1078)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
and in the datanode log I see this:
2014-01-02 04:40:31,616 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: RM/192.168.1.101:9000
2014-01-02 04:40:37,618 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: RM/192.168.1.101:9000. Already tried 0 time(s)$
2014-01-02 04:40:38,619 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: RM/192.168.1.101:9000. Already tried 1 time(s)$
2014-01-02 04:40:39,620 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: RM/192.168.1.101:9000. Already tried 2 time(s)$
2014-01-02 04:40:40,621 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: RM/192.168.1.101:9000. Already tried 3 time(s)
I checked port 9000 on the master node and the output is this:
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 10227/java
I guess the problem is that, from the slave node, when I run
telnet RM 9000
it says
Trying 192.168.1.101...
telnet: Unable to connect to remote host: Connection refused
however
telnet RM
the output is :
Trying 192.168.1.101...
Connected to RM.
Escape character is '^]'.
Ubuntu 12.04.2 LTS
RM login:
For additional information, my /etc/hosts on master and slave is as below:
127.0.0.1 RM|slv localhost
192.168.1.101 RM
192.168.1.103 slv
Can anybody suggest a solution?
Any help is really appreciated.
Thanks
I think the problem is that your master is listening on 127.0.0.1:9000, so the datanode can't connect because nothing is listening at 192.168.1.101:9000 (theoretically, a good address to listen on is 0.0.0.0:9000, since it avoids this problem, but it seems this configuration is not accepted).
It might be fixed by modifying your /etc/hosts and deleting the first line, or try first with just:
127.0.0.1 localhost
192.168.1.101 RM
192.168.1.103 slv
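After updating /etc/hosts and restarting the daemons, it is worth checking which address the NameNode is actually bound to (port 9000 comes from the question; the netstat flags assume a Linux host such as the Ubuntu box shown above):
$ sudo netstat -tlnp | grep 9000
# You want to see 192.168.1.101:9000 or 0.0.0.0:9000 here, not 127.0.0.1:9000.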
-- edit: read comments below
I had the same problem. I changed
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:8020</value>
</property>
in core-site.xml to
<property>
<name>fs.defaultFS</name>
<value>hdfs://ip-address:8020</value>
</property>
and it worked
I encountered the same issue. After running jps, I could see that the namenode and datanodes were all up, but I could not see any active nodes in the web page. Then I found that I had put "127.0.0.1 master" in /etc/hosts. After removing it, the slaves could telnet to master 9000.
My /etc/hosts looks like:
127.0.0.1 localhost
192.168.139.129 slave1
192.168.139.130 slave2
192.168.139.128 master
This is happening in pseudo-distributed as well as distributed mode.
When I try to start HBase, initially all 3 services (master, regionserver and quorumpeer) start. However, within a minute, the master stops. In the logs, this is the trace:
2013-05-06 20:10:25,525 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 0 time(s).
2013-05-06 20:10:26,528 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 1 time(s).
2013-05-06 20:10:27,530 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 2 time(s).
2013-05-06 20:10:28,533 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 3 time(s).
2013-05-06 20:10:29,535 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 4 time(s).
2013-05-06 20:10:30,538 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 5 time(s).
2013-05-06 20:10:31,540 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 6 time(s).
2013-05-06 20:10:32,543 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 7 time(s).
2013-05-06 20:10:33,544 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 8 time(s).
2013-05-06 20:10:34,547 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 9 time(s).
2013-05-06 20:10:34,550 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
java.net.ConnectException: Call to <master/master_ip>:9000 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1179)
at org.apache.hadoop.ipc.Client.call(Client.java:1155)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
at $Proxy9.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:398)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:384)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:132)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:259)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:220)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1611)
at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:68)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1645)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1627)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:183)
at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:363)
at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:86)
at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:368)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:301)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:592)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:519)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:484)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:468)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:575)
at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:212)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1292)
at org.apache.hadoop.ipc.Client.call(Client.java:1121)
... 18 more
Steps I have taken to fix this without any success
- downgraded from distributed mode to pseudo-distributed mode. Same issue.
- tried standalone mode- no luck
- used same user (hadoop) for both hadoop and hbase. Setup passwordless ssh for hadoop. - same problem.
- edited /etc/hosts file and changed localhost/servername as well as 127.0.0.1 to actual IP address referencing SO and different sources. Still same issue.
- rebooted the server
Here are the conf files.
hbase-site.xml
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://<master>:9000/hbase</value>
<description>The directory shared by regionservers.</description>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value><master></value>
</property>
<property>
<name>hbase.master</name>
<value><master>:60000</value>
<description>The host and port that the HBase master runs at.</description>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>The replication count for HLog and HFile storage. Should not be greater than HDFS datanode count.</description>
</property>
</configuration>
/etc/hosts file
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
.
What am I doing wrong here?
Hadoop Version - Hadoop 0.20.2-cdh3u5
HBase Version - Version 0.90.6-cdh3u5
Looking at your configuration file, I assume that you are using the actual hostname in your config files. If that is the case, add the hostname along with the IP of the machine to the /etc/hosts file. Also make sure it matches the hostname in your Hadoop core-site.xml. Proper name resolution is vital for HBase to function properly.
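For example (the hostname hbase-master and the address 192.168.1.10 are purely illustrative, not taken from the question), the /etc/hosts entry would look like:
192.168.1.10   hbase-master
and core-site.xml / hbase-site.xml should then refer to hbase-master consistently, rather than to localhost or 127.0.0.1.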
If you still face any problems, please follow the steps mentioned here carefully. I have tried to explain the procedure in detail, and hopefully you'll be able to make it run if you follow all the steps.
HTH
I believe you're trying to use pseudo-distributed mode. I was getting the same error until I fixed 3 things:
local /etc/hosts file
$ cat /etc/hosts
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
fe80::1%lo0 localhost
172.20.x.x my.hostname.com
instead of pointing to hostname, point to localhost in hbase-env.sh
Corrected my classpath
A. Ensure Hadoop is in classpath (via hbase-env.sh)
export JAVA_HOME=your path to java home
export HADOOP_HOME=your path to hadoop home
export HBASE_HOME=your path to hbase home
export HBASE_CLASSPATH=your path to hbase home/conf:your path to hadoop home/conf
B. When running my program, I edited the following bash script from HBase: The Definitive Guide (bin/run.sh)
$ grep -v '#' bin/run.sh
bin=`dirname "$0"`
bin=`cd "$bin">/dev/null; pwd`
echo "usage: $(basename $0) <example-name>"
exit 1;
fi
MVN="mvn"
if [ "$MAVEN_HOME" != "" ]; then
MVN=${MAVEN_HOME}/bin/mvn
fi
CLASSPATH="${HBASE_CONF_DIR}"
if [ -d "${bin}/../target/classes" ]; then
CLASSPATH=${CLASSPATH}:${bin}/../target/classes
fi
cpfile="${bin}/../target/cached_classpath.txt"
if [ ! -f "${cpfile}" ]; then
${MVN} -f "${bin}/../pom.xml" dependency:build-classpath -Dmdep.outputFile="${cpfile}" &> /dev/null
fi
CLASSPATH=`hbase classpath`:${CLASSPATH}:`cat "${cpfile}"`
JAVA_HOME=your path to java home
JAVA=$JAVA_HOME/bin/java
JAVA_HEAP_MAX=-Xmx512m
echo "Classpath is $CLASSPATH"
"$JAVA" $JAVA_HEAP_MAX -classpath "$CLASSPATH" "$#"
It's worth noting I am using a Mac. I believe these instructions will work for Linux too.
I was following a tutorial to install hadoop: http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
Now I am stuck at the "Copy local example data to HDFS" step.
The connection error I get:
12/10/26 17:29:16 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 0 time(s).
12/10/26 17:29:17 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 1 time(s).
12/10/26 17:29:18 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 2 time(s).
12/10/26 17:29:19 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 3 time(s).
12/10/26 17:29:20 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 4 time(s).
12/10/26 17:29:21 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 5 time(s).
12/10/26 17:29:22 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 6 time(s).
12/10/26 17:29:23 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 7 time(s).
12/10/26 17:29:24 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 8 time(s).
12/10/26 17:29:25 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 9 time(s).
Bad connection to FS. command aborted. exception: Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused
which is pretty much the same as this question already:
Errors while running hadoop
The point is, I have disabled IPv6, as described there and in the tutorial above, but it doesn't help. Is there something I have been missing?
EDIT:
I repeated the tutorial on a second machine with a freshly installed Ubuntu and compared it step by step. It turned out there was a bug in the .bashrc configuration of the hduser. After fixing it, everything worked fine...
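In case it helps anyone else, a quick way to sanity-check the hduser environment after editing .bashrc (the variable names follow the Michael Noll tutorial; adjust them if your setup differs):
$ su - hduser
$ echo $JAVA_HOME
$ echo $HADOOP_HOME
$ which hadoop
$ hadoop version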
I get the exact same error message if I try to run hadoop fs <anything> when the DataNode/NameNode aren't running, so I would guess the same thing is happening for you.
Type jps in your terminal. If everything is running, it should look like:
16022 DataNode
16524 Jps
15434 TaskTracker
15223 JobTracker
15810 NameNode
16229 SecondaryNameNode
I would wager that your DataNode or NameNode isn't running. If anything is missing from the jps output, start it again.
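If only one daemon is down, you can start it on its own instead of restarting everything (the hadoop-daemon.sh commands below assume a Hadoop 1.x layout, which matches the TaskTracker/JobTracker output above):
$ bin/hadoop-daemon.sh start namenode
$ bin/hadoop-daemon.sh start datanode
$ jps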
After the whole configuration, run this command:
hadoop namenode -format
and then start all services with this command:
start-all.sh
This should solve your problem.
Go to your etc/hadoop/core-site.xml and check the value of fs.default.name.
It should be as shown below.
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
</property>
After the whole configuration, run this command:
hadoop namenode -format
then start all services with this command:
start-all.sh
This should solve your problem.
Your namenode may also be in safe mode; run bin/hdfs dfsadmin -safemode leave or bin/hadoop dfsadmin -safemode leave
and then follow step 2 and step 3.
If I type http://localhost:50070 or http://localhost:9000 to see the nodes, my browser shows me nothing; I think it can't connect to the server.
I tested my hadoop with this command:
hadoop jar hadoop-*test*.jar TestDFSIO -write -nrFiles 10 -fileSize 1000
but it didn't work either; it tries to connect to the server. This is the output:
12/06/06 17:25:24 INFO mapred.FileInputFormat: nrFiles = 10
12/06/06 17:25:24 INFO mapred.FileInputFormat: fileSize (MB) = 1000
12/06/06 17:25:24 INFO mapred.FileInputFormat: bufferSize = 1000000
12/06/06 17:25:25 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).
12/06/06 17:25:26 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s).
12/06/06 17:25:27 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s).
12/06/06 17:25:28 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s).
12/06/06 17:25:29 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s).
12/06/06 17:25:30 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s).
12/06/06 17:25:31 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s).
12/06/06 17:25:32 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s).
12/06/06 17:25:33 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s).
12/06/06 17:25:34 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 9 time(s).
java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
I changed some files like this:
in conf/core-site.xml:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
in conf/hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
in conf/mapred-site.xml:
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
</configuration>
Thanks for your attention. If I run this command:
cat /etc/hosts
I see:
127.0.0.1 localhost
127.0.1.1 ubuntu.ubuntu-domain ubuntu
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
and if I run this one:
ps axww | grep hadoop
I see this result:
2170 pts/0 S+ 0:00 grep --color=auto hadoop
but to no effect. Do you have any idea how I can solve my problem?
There are a few things that you need to take care of before starting the Hadoop services.
Check what this returns:
hostname --fqdn
In your case this should be localhost.
Also comment out the IPv6 entries in /etc/hosts.
Did you format the namenode before starting HDFS?
hadoop namenode -format
How did you install Hadoop? The location of the log files will depend on that. It is usually "/var/log/hadoop/" if you have used Cloudera's distribution.
If you are a complete newbie, I suggest installing Hadoop using Cloudera SCM, which is quite easy. I have posted my approach to installing Hadoop with Cloudera's distribution.
Also
Make sure the DFS location has write permission. It usually sits at /usr/local/hadoop_store/hdfs
That is a common reason.
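A minimal sketch of fixing ownership and permissions on that directory (the path is the one mentioned above; replace $(whoami) with whichever user runs the Hadoop daemons, if that is not you):
$ sudo chown -R $(whoami) /usr/local/hadoop_store/hdfs
$ chmod -R 755 /usr/local/hadoop_store/hdfs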
I got the same problem, and this solved it:
the problem lies with the permissions given to the folders.
chmod 755 or greater for the folders
/home/username/hadoop/*
Another possibility is that the namenode is not running.
You can remove the HDFS files:
rm -rf /tmp/hadoop*
Reformat the HDFS
bin/hadoop namenode -format
And restart hadoop services
bin/start-all.sh (Hadoop 1.x)
or
sbin/start-all.sh (Hadoop 2.x)
Also edit your /etc/hosts file and change 127.0.1.1 to 127.0.0.1; proper DNS resolution is very important for Hadoop, and a bit tricky too. Also add the following property to your core-site.xml file:
<property>
<name>hadoop.tmp.dir</name>
<value>/path_to_temp_directory</value>
</property>
The default location for this property is the /tmp directory, which gets emptied after each system restart, so you lose all your HDFS data at each restart. Also add these properties to your hdfs-site.xml file:
<property>
<name>dfs.name.dir</name>
<value>/path_to_name_directory</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/path_to_data_directory</value>
</property>
I am assuming this is your first installation of Hadoop.
First, please check whether your daemons are running. To do that, use (in a terminal):
jps
If only Jps appears, that means all daemons are down. Please check the log files, especially the namenode's. The log folder is probably somewhere around /usr/lib/hadoop/logs.
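For instance, to look at the most recent NameNode log (the path and file-name pattern below are typical defaults, but yours may differ depending on how Hadoop was installed):
$ ls /usr/lib/hadoop/logs/
$ tail -n 50 /usr/lib/hadoop/logs/hadoop-*-namenode-*.log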
If you have permission problems, use this guide during the installation.
Good installation guide
I am guessing with these explanations, but these are the most common problems.
Edit your conf/core-site.xml and change localhost to 0.0.0.0. Use the conf below. That should work.
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://0.0.0.0:9000</value>
</property>
</configuration>
haduser#user-laptop:/usr/local/hadoop$ bin/hadoop dfs -copyFromLocal /tmp/input
/user/haduser/input
11/12/14 14:21:00 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 0 time(s).
11/12/14 14:21:01 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 1 time(s).
11/12/14 14:21:02 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 2 time(s).
11/12/14 14:21:03 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 3 time(s).
11/12/14 14:21:04 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 4 time(s).
11/12/14 14:21:05 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 5 time(s).
11/12/14 14:21:06 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 6 time(s).
11/12/14 14:21:07 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 7 time(s).
11/12/14 14:21:08 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 8 time(s).
11/12/14 14:21:09 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 9 time(s).
Bad connection to FS. command aborted. exception: Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused
I am getting the above errors when I try to copy files from /tmp/input to /user/haduser/input, even though the file /etc/hosts contains an entry for localhost.
When the jps command is run, the TaskTracker and the namenode are not listed.
What could be the problem? Could someone please help me with this?
I had similar issues; it turned out Hadoop was binding to IPv6.
I then added "export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true" to $HADOOP_HOME/conf/hadoop-env.sh.
Hadoop was binding to IPv6 even when I had disabled IPv6 on my system.
Once I added it to the env file, it started working fine.
Hope this helps someone.
Try to SSH to your local system using the IP, in this case:
$ ssh 127.0.0.1
Once you are able to SSH successfully, run the command below to see the list of open ports:
~$ lsof -i
Look for a listening connector with the name localhost:<PORTNAME> (LISTEN).
Copy this <PORTNAME> and use it to replace the existing port number in the value of the fs.default.name property in core-site.xml in the Hadoop conf folder.
Save core-site.xml; this should resolve the issue.
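As a narrower check than a plain lsof -i, you can filter for listening TCP sockets directly (port 9000 below is just the value used elsewhere in this thread; substitute whatever your NameNode is configured for):
$ lsof -iTCP -sTCP:LISTEN -n -P | grep java
$ lsof -iTCP:9000 -sTCP:LISTEN -n -P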
The NameNode (NN) maintains the namespace for HDFS, and it must be running for filesystem operations on HDFS. Check the logs to see why the NN hasn't started. The TaskTracker is not required for operations on HDFS; NN and DN alone are sufficient. Check the http://goo.gl/8ogSk and http://goo.gl/NIWoK tutorials on how to set up Hadoop on a single node and on multiple nodes.
All the files in bin are executables. Just copy the command and paste it into the terminal. Make sure the address is right, i.e. the user placeholder must be replaced with something real. That should do the trick.