I am getting a connection refused error even though I have the right permissions; my namenode and datanode are not starting up. The service gives the following error: (Connection failed: [Errno 111] Connection refused to 0.0.0.0:50010).
It's probably because you didn't configure your datanode (and maybe your namenode either), or there is something wrong with your configuration file.
As you can see: http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
The default value of dfs.datanode.address is 0.0.0.0:50010.
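If you need the datanode to listen on a specific address, one option (just a sketch; the value shown is simply that default, so swap in your own host and port) is to set the property explicitly in hdfs-site.xml:
<property>
    <name>dfs.datanode.address</name>
    <value>0.0.0.0:50010</value>
</property>
After changing it, restart the datanode so the new address takes effect.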
Related
I'm having an error when running yarn on a job. HDFS and Yarn both start up fine, jps shows everything normal, pseudo-distributed mode on HDFS works perfectly, and I have triple and quadruple checked my configuration files. Whenever I attempt to run Yarn, however, this happens:
INFO retry.RetryInvocationHandler: java.net.ConnectException: Call From serverA/IPaddress to serverB:30170 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused, while invoking ApplicationClientProtocolPBClientImpl.getNewApplication over null after 6 failover attempts. Trying to failover after sleeping for 44428ms.
Yarn then attempts to connect over and over again until I forcefully quit the process. Any ideas why this is happening?
Can you see the YARN web UI?
How did you start HDFS and YARN?
You can try ./sbin/start-all.sh
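For example, assuming a Hadoop 2.x layout where the start scripts live under sbin (your paths may differ), the usual sequence is roughly:
$ ./sbin/start-dfs.sh
$ ./sbin/start-yarn.sh
$ jps
jps should then list NameNode, DataNode, ResourceManager and NodeManager; if ResourceManager is missing, that would explain the refused connection on the port used by ApplicationClientProtocolPBClientImpl (normally the yarn.resourcemanager.address).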
I've recently installed Hadoop on my Windows machine. To test it out, I ran hadoop fs -ls in the command prompt, and it gives the following error:
ls: Call From DESKTOP-I1FS520/192.168.100.57 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused: no further information; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
I don't know how to fix it; any help would be appreciated.
The error indicates your namenode is not running, or the namenode address (fs.defaultFS) is misconfigured in your core-site.xml.
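For reference, a minimal core-site.xml sketch, assuming the namenode is meant to listen on localhost:9000 as the error message suggests (adjust the host and port to your setup):
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
</property>
Even with the address correct, the namenode process itself must be running before hadoop fs -ls will succeed.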
I have built a Hadoop cluster on ECS from Aliyun (Alibaba.com; it's like AWS). The OS is Ubuntu 12.04. The Hadoop version is 2.7.1.
The cluster consists of one master and two slaves.
I can start it successfully. Every node works well, and I can use SSH to access the two slave nodes from the master node.
Every node is started.
But when I run the wordcount program, something goes wrong. The error is as follows:
exception: java.net.ConnectException: Call From master/10.144.52.189 to localhost:38635 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
When I added port 38635 to /etc/ssh/sshd_config and ran the wordcount program again, the error was still there; the only difference is that port 38635 changed.
exception: java.net.ConnectException: Call From master/10.144.52.189 to localhost:46656 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
How can I fix this problem? Ports 38635 and 46656 have been added to /etc/ssh/sshd_config, yet the error keeps occurring whenever I run the wordcount program, each time with a new port in the error message.
When I am running the following query in hive:
hive> select count(*) from testsql;
I am getting the following error:
Error
FAILED: RuntimeException java.net.ConnectException: Call From impetus-1466/192.168.49.77 to impetus-1466:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
The jps output looks like this:
[impadmin#impetus-1466 hadoop-1.0.3.15]$ jps
26380 TaskTracker
26709 Jps
26230 JobTracker
25943 NameNode
I started the services with:
$ start-all.sh
$ start-dfs.sh
$ start-mapred.sh
How could this be solved?
Thanks
If you can open http://localhost:8088/cluster but can't open http://localhost:50070/, maybe the datanode didn't start up or the namenode wasn't formatted.
Also check hadoop.tmp.dir in core-site.xml; if it is not set, it defaults to /tmp, which is typically cleared on reboot, so set hadoop.tmp.dir in core-site.xml:
<property>
    <name>hadoop.tmp.dir</name>
    <value>/path/to/hadoop/tmp</value>
</property>
Then stop Hadoop, reformat with hdfs namenode -format, and restart Hadoop.
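Roughly, assuming the Hadoop bin and sbin directories are on your PATH (note that formatting wipes any existing HDFS metadata):
$ stop-dfs.sh
$ hdfs namenode -format
$ start-dfs.sh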
Similar question: http://localhost:50070 does not work HADOOP
The reason for this is that either there are no datanodes in your cluster, or the datanodes do not know their namenode. This can happen when the namenode has been formatted more than once: the cluster ID of the namenode changed, but the change was never propagated to the datanodes.
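As a quick check (the paths below reuse the placeholder hadoop.tmp.dir layout from the answer above; your actual directories depend on dfs.namenode.name.dir and dfs.datanode.data.dir), compare the clusterID recorded on both sides:
$ cat /path/to/hadoop/tmp/dfs/name/current/VERSION    # on the namenode
$ cat /path/to/hadoop/tmp/dfs/data/current/VERSION    # on each datanode
If the clusterID values differ, the datanodes will refuse to register with the namenode.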
The links below might be helpful:
Datanode not starts correctly
http://hortonworks.com/community/forums/topic/clusterid-mismatch-for-namenode-and-datanodes-in-fully-distributed-cluster/
I tried installing Hadoop on two nodes. Both nodes are up and running. The namenode runs on Ubuntu 10.10 and the datanode on Fedora 13. While copying a file from the local file system to HDFS, I encountered the following errors.
The terminal showed:
12/04/12 02:19:15 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink as 10.211.87.162:9200
12/04/12 02:19:15 INFO hdfs.DFSClient: Abandoning block blk_-1069539184735421145_1014
The namenode log file showed:
2012-10-16 16:17:56,723 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.6.2.26:50010, storageID=DS-880164535-10.18.13.10-50010-1349721715148, infoPort=50075, ipcPort=50020):DataXceiver
java.net.NoRouteToHostException: No route to host
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:282)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:103)
at java.lang.Thread.run(Thread.java:662)
The number of available datanodes is shown as 2. I've disabled the firewall and SELinux.
The following changes have also been made in hdfs-site.xml:
dfs.socket.timeout -> 360000
dfs.datanode.socket.write.timeout -> 3600000
dfs.datanode.max.xcievers -> 1048576
Both nodes run sun-java6-jdk. The datanode also has OpenJDK installed, but the path settings point to Sun Java.
Yet the same error persists.
What might be the solution?
That's because your firewall is on.
Try
sudo /etc/init.d/iptables stop
If you are on Ubuntu, do
sudo ufw disable
This should solve the issue.
The exception log mentions that the failure reason is "No route to host".
Try ping 10.6.2.26 to test your network connection.
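If ping succeeds, you could also probe the datanode port itself (50010 is the port from the log above; telnet is just one convenient tool for this):
$ telnet 10.6.2.26 50010
A "no route to host" or a hang here usually points at a firewall or routing rule rather than at Hadoop itself.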