Hadoop localhost connection failed

I am trying to run the following command:
hadoop fs -ls /data
It is giving me the following error:
Call From elkd02/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
I do not understand what is going wrong. I have checked the /etc/hosts file and it contains the following:
127.0.0.1 Localhost
127.0.1.1 elkd02
How should I resolve the issue?
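A quick first check (a sketch, assuming a default single-node setup started with Hadoop's bundled scripts; "Connection refused" on localhost:9000 almost always means no namenode is listening there):

jps                     # a NameNode entry should appear in this list
start-dfs.sh            # if it doesn't, start the HDFS daemons
ss -tln | grep 9000     # then confirm something is listening on port 9000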

Related

Hadoop not connecting to localhost

I've recently installed Hadoop on my Windows machine. To test it out, I ran hadoop fs -ls in the command prompt, and it gives the following error:
ls: Call From DESKTOP-I1FS520/192.168.100.57 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused: no further information; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
I don't know how to fix this; any help would be appreciated.
The error indicates that your namenode is not running, or that the namenode address (fs.defaultFS) is misconfigured in your core-site.xml.
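A minimal way to verify both conditions mentioned above (a sketch; hdfs getconf ships with Hadoop and prints the client-side configuration, so it works the same from a Windows command prompt):

jps                                  # is a NameNode process running?
hdfs getconf -confKey fs.defaultFS   # does the configured address match localhost:9000?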

Ambari can't start namenode with a connection exception

I have been stuck on this for a few days. I tried a lot of methods from the Internet, but none of them worked, so I registered on Stack Overflow to write my first question.
My environment is HDP 2.4 on Ubuntu 14.04 LTS.
The log output looks like this:
safemode: Call From master/9.119.131.105 to master:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
2016-04-21 15:59:57,796 - Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://master:8020 -safemode get | grep 'Safe mode is OFF'' returned 1. safemode: Call From master/9.119.131.105 to master:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
(1) At first I tried to bypass safe mode, but that isn't the key to this issue.
(2) Then I started to work on port 8020 of master. master is the namenode of my cluster, and I am sure that the hosts file is correct:
9.119.131.105 master
(3) SSH works; the four nodes in the cluster can log in to each other without passwords.
(4) I'm sure the firewall is off:
# /etc/init.d/iptables stop
# ufw status
Status: inactive
(5) I also tried to open port 8020 manually:
# iptables -A INPUT -p tcp --dport 8020 -j ACCEPT
It still didn't work.
(6) I tried:
# telnet master 8020
but:
telnet: Unable to connect to remote host: Connection refused
(7) I found that not only is port 8020 unreachable, but port 50070 doesn't work either.
(8) I'm sure the parameters in the configuration files point to 8020:
fs.defaultFS = hdfs://master:8020
dfs.namenode.rpc-address = master:8020
I hope somebody can help me; I would really appreciate it.
Thank you!
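Since both 8020 and 50070 refuse connections, the namenode process itself is almost certainly not up, whatever Ambari reports. A hedged diagnostic sketch (the log path is an assumption based on the usual HDP layout):

jps -l | grep -i namenode        # is a NameNode JVM running at all?
netstat -tlnp | grep 8020        # is anything bound to the RPC port?
tail -n 100 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-master.log   # startup errors usually name the cause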

Working with HDFS within docker container

I'm reading a book on Hadoop, which I'm running locally with Docker. I'm using the image from sequenceiq and trying to execute this command:
./bin/hadoop fs -copyFromLocal /input/docs/quangle.txt hdfs://localhost/user/4lex1v/quangle.txt
But it throws an error:
copyFromLocal: Call From e9584b413d6a/172.17.0.2 to localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
I don't have such problems running it locally. How can I fix it?
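Inside the sequenceiq image the namenode typically binds to the container's hostname rather than localhost, so addressing it explicitly often helps. A sketch (port 8020 comes from the error message; the hostname substitution assumes the command runs inside the container):

hdfs getconf -confKey fs.defaultFS   # see which address HDFS is actually configured with
./bin/hadoop fs -copyFromLocal /input/docs/quangle.txt hdfs://$(hostname):8020/user/4lex1v/quangle.txt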

How to connect client and server on localhost in Hadoop with the same user on Ubuntu

If the client and server are on the same Ubuntu machine, I am not able to connect. It gives the error:
Call From ashish/127.0.0.1 to localhost:54310 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
By server do you mean the namenode? By client do you mean a hadoop client, a datanode, or a nodemanager? Are you sure the namenode is running and exposed on localhost:54310? Could you try:
> nc -vz localhost 54310
What does your /etc/hosts look like? What do your core-site.xml and hdfs-site.xml look like? What do you get (as your hadoop user) for:
> jps -ml
What do you get for:
> sudo iptables -L
Also take a look at:
Hadoop cluster setup - java.net.ConnectException: Connection refused
Hadoop Datanodes cannot find NameNode
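One more check worth adding to the list above: confirm that the address the client calls (localhost:54310) matches what core-site.xml actually declares. A sketch (the config path is an assumption; fs.default.name is the deprecated key that older tutorials set to port 54310):

grep -A1 -E "fs.defaultFS|fs.default.name" $HADOOP_HOME/etc/hadoop/core-site.xml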

There is something wrong with my Hadoop cluster

I have built a Hadoop cluster on ECS on Aliyun from Alibaba.com (it's like AWS). The OS is Ubuntu 12.04, and the Hadoop version is 2.7.1.
The cluster consists of one master and two slaves.
I can start it successfully. Every node works well, and I can use SSH to access the two slave nodes from the master node. Every node is started.
But when I run the wordcount program, something goes wrong. The error is as follows:
exception: java.net.ConnectException: Call From master/10.144.52.189 to localhost:38635 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
When I added port 38635 to the file /etc/ssh/sshd_config, I ran the wordcount program again. The error still exists; the only difference is that port 38635 changed:
exception: java.net.ConnectException: Call From master/10.144.52.189 to localhost:46656 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
How can I fix this problem? Ports 38635 and 46656 were added to /etc/ssh/sshd_config, but the error keeps occurring with a new port in the error message every time I run the wordcount program.
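A hedged observation: /etc/ssh/sshd_config only configures the SSH daemon and has no effect on Hadoop RPC, and the changing port looks like an ephemeral port picked at random by the MapReduce ApplicationMaster. The "to localhost" part suggests some hostname resolves to a loopback address on one of the nodes. Worth checking on every node (the expected address comes from the error message):

grep 127.0 /etc/hosts    # master must not be mapped to a loopback address
getent hosts master      # should print 10.144.52.189, not 127.x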
