java.net.ConnectException: Connection refused when trying to use hdfs - hadoop

I run into a problem when I try to use a Hadoop hdfs command:
root@ec2-35-205-125-85:~# hdfs dfs -copyFromLocal ~/input/ ~/input/
copyFromLocal: Call From ip-172-32-5-110.us-west-2.compute.internal/172.32.5.110 to localhost:54310 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
This problem happens not just with -copyFromLocal but with every command that starts with hdfs, for example -ls, -mkdir, and so on.

Running the following command solved the problem:
bash /usr/local/hadoop/sbin/start-all.sh
After this, all the daemons should be running. I checked with jps, which showed the following:
2033 SecondaryNameNode
2778 Jps
2325 NodeManager
2195 ResourceManager
1691 NameNode
After that, run:
hdfs namenode -format
After that, the error message was gone.
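As a quick sanity check after starting the daemons, you can probe whether anything is actually listening on the NameNode RPC port from the error message (54310 in this question); a minimal sketch, assuming nc and ss are available on the machine:
nc -zv localhost 54310        # succeeds only if the NameNode is listening on this port
ss -tlnp | grep 54310         # shows which process owns the port, if any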

Related

Error in loading hadoop distributed file system

I installed Hadoop 3.3.4 on Ubuntu 20 and ran the command for starting Hadoop, i.e.
samar@pc:~$ $HADOOP_HOME/sbin/start-all.sh
It showed the following output:
WARNING: Attempting to start all Apache Hadoop daemons as samar in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [pc]
Starting resourcemanager
Starting nodemanagers
But when I tried to access HDFS with the command
samar@pc:~$ hdfs dfs -ls
it gave this message:
ls: Call From pc/127.0.1.1 to localhost:9000 failed on connection exception:
java.net.ConnectException: Connection refused; For more details see:
http://wiki.apache.org/hadoop/ConnectionRefused
and the output of jps was:
10485 Jps
10101 NodeManager
9946 ResourceManager
9739 SecondaryNameNode
9533 DataNode
The NameNode did not start successfully (9000 is the NameNode's service port).
Are there more logs?
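When the NameNode is missing from jps, its log usually says why it died; a minimal check, assuming the default log location under $HADOOP_HOME/logs (file names vary with user and hostname):
tail -n 50 $HADOOP_HOME/logs/hadoop-*-namenode-*.log   # look for the exception that killed the NameNode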

Hadoop not connecting to local host

I recently installed Hadoop on Windows. To test it out, I ran hadoop fs -ls in the command prompt, and it gives the following error:
ls: Call From DESKTOP-I1FS520/192.168.100.57 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused: no further information; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
I don't know how to fix this; any help would be appreciated.
The error indicates your namenode is not running, or the namenode address (fs.defaultFS) is misconfigured in your core-site.xml.
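For reference, fs.defaultFS must point at the host and port the NameNode actually binds to; a minimal single-node sketch of the core-site.xml entry (9000 is just the port this question expects):
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:9000</value>
</property>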

Connection refused even when all daemons are working on Hadoop

I am working on Hadoop 2.x. When I run jps, it shows all the daemons running correctly:
[root@localhost Flume]# jps
3521 NodeManager
3058 DataNode
3252 SecondaryNameNode
4501 Jps
3419 ResourceManager
2957 NameNode
But when I run:
hadoop dfs -ls /
it says:
ls: Call From localhost.localdomain/127.0.0.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Please help me with it.
The command you gave, hadoop dfs -ls /, is not correct; change the dfs to fs. This command can be written in two ways:
hadoop fs -ls /    # the old style, now deprecated
hdfs dfs -ls /     # the new style
Note the difference: the second word of the first command should be fs, not dfs.

How can I deal with the java.net.ConnectException in Hadoop?

I was trying to copy a local file to HDFS with this command:
bin/hadoop fs -copyFromLocal '/home/czy/IdeaProjects/HadoopInAction/FirstHadoop/src/main/resources/crossView.txt' hdfs://localhost/user/czy
It failed like this:
copyFromLocal: Call From ubuntu/127.0.1.1 to localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
But when I used the same command without the hdfs://localhost/ prefix, it worked well:
bin/hadoop fs -copyFromLocal '/home/czy/IdeaProjects/HadoopInAction/FirstHadoop/src/main/resources/crossView.txt' /user/czy
Why did this happen? Why does it say "to localhost:8020 failed" when I configured the namenode port as 9000?
Check your core-site.xml file for valid parameters, server names, and port numbers.
Check the link below; it might be useful:
https://datashine.wordpress.com/2014/09/06/java-net-connectexception-connection-refused-for-more-details-see-httpwiki-apache-orghadoopconnectionrefused/
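One way to see which address the client actually resolves (and hence why 8020, the default NameNode RPC port for a URI with no explicit port, shows up instead of 9000) is to print the effective configuration; hdfs getconf does this:
hdfs getconf -confKey fs.defaultFS   # the NameNode URI the client will use
hdfs getconf -namenodes              # the namenode host(s) from the loaded config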

failed on connection exception: java.net.ConnectException: Connection refused in hive

When I run the following query in Hive:
hive> select count(*) from testsql;
I get the following error:
FAILED: RuntimeException java.net.ConnectException: Call From impetus-1466/192.168.49.77 to impetus-1466:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
The jps output looks like:
[impadmin@impetus-1466 hadoop-1.0.3.15]$ jps
26380 TaskTracker
26709 Jps
26230 JobTracker
25943 NameNode
I started the daemons with:
$ start-all.sh
$ start-dfs.sh
$ start-mapred.sh
How can this be solved? Thanks.
If you can open http://localhost:8088/cluster but can't open http://localhost:50070/, the datanode may not have started, or the namenode may not have been formatted.
Also check hadoop.tmp.dir in core-site.xml; if it is not set, it defaults to /tmp, which is wiped on reboot, so set it explicitly:
<property>
  <name>hadoop.tmp.dir</name>
  <value>/path/to/hadoop/tmp</value>
</property>
Then stop Hadoop, reformat with hdfs namenode -format, and restart Hadoop.
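A minimal sketch of that sequence, assuming the stock scripts are on the PATH (this question's Hadoop 1.x install uses start-all.sh):
stop-all.sh              # stop all daemons
hdfs namenode -format    # destroys existing HDFS metadata -- only safe on fresh/dev clusters
start-all.sh             # bring the daemons back up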
Similar question: http://localhost:50070 does not work HADOOP
The reason for this is that either there are no datanodes in your cluster, or the datanodes do not know their namenode. This often happens when the namenode has been formatted more than once: the namenode's cluster ID changes, but the change is not propagated to the datanodes.
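One way to confirm such a mismatch is to compare the clusterID recorded in the namenode's and datanode's VERSION files; a sketch assuming a Hadoop 2.x-style storage layout under the hadoop.tmp.dir shown above (your paths may differ; on 1.x the field is namespaceID):
grep clusterID /path/to/hadoop/tmp/dfs/name/current/VERSION   # namenode's cluster ID
grep clusterID /path/to/hadoop/tmp/dfs/data/current/VERSION   # datanode's cluster ID -- must match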
The links below might be helpful:
Datanode not starts correctly
http://hortonworks.com/community/forums/topic/clusterid-mismatch-for-namenode-and-datanodes-in-fully-distributed-cluster/
