Connection refused even when all daemons are running on Hadoop

I am working on Hadoop 2.x. When I run jps, it shows all the daemons running correctly:
[root@localhost Flume]# jps
3521 NodeManager
3058 DataNode
3252 SecondaryNameNode
4501 Jps
3419 ResourceManager
2957 NameNode
But when I run:
hadoop dfs -ls /
It says:
ls: Call From localhost.localdomain/127.0.0.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Please help me with it.

The command you gave, hadoop dfs -ls /, is not correct: hadoop dfs is the deprecated form. The command can be written in two ways:
hadoop fs -ls /  /* generic filesystem shell; works with HDFS and other supported filesystems */
hdfs dfs -ls /   /* HDFS-specific shell; the recommended form when working with HDFS */
Note the difference between the commands: in the first one, the word after hadoop must be fs, not dfs.
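That said, the connection-refused message itself means that nothing accepted the connection on localhost:9000, the NameNode RPC address from the error. A minimal way to check this from the shell, assuming standard Linux tools (netstat may require the net-tools package on some distributions):
netstat -tlnp | grep 9000   # is anything listening on the NameNode port?
telnet localhost 9000       # or probe the port directly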

Related

Error in loading hadoop distributed file system

I installed Hadoop 3.3.4 on Ubuntu 20. I ran the command for starting Hadoop:
samar@pc:~$ $HADOOP_HOME/sbin/start-all.sh
It showed the following output:
WARNING: Attempting to start all Apache Hadoop daemons as samar in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [pc]
Starting resourcemanager
Starting nodemanagers
But when I tried to access HDFS with the command:
samar@pc:~$ hdfs dfs -ls
It gave this message:
ls: Call From pc/127.0.1.1 to localhost:9000 failed on connection exception:
java.net.ConnectException: Connection refused; For more details see:
http://wiki.apache.org/hadoop/ConnectionRefused
and the output of jps was:
10485 Jps
10101 NodeManager
9946 ResourceManager
9739 SecondaryNameNode
9533 DataNode
The NameNode did not start successfully; it is missing from your jps output (9000 is the NameNode's service port).
Are there more logs?
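One way to find out why the NameNode failed is to read its log; a minimal sketch, assuming the default log directory (the exact file name includes your user name and hostname):
tail -n 50 $HADOOP_HOME/logs/hadoop-*-namenode-*.log   # look for the exception near the end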

java.net.ConnectException: Connection refused when trying to use hdfs

I ran into a problem when trying to use the hdfs command:
root@ec2-35-205-125-85:~# hdfs dfs -copyFromLocal ~/input/ ~/input/
copyFromLocal: Call From ip-172-32-5-110.us-west-2.compute.internal/172.32.5.110 to localhost:54310 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
This problem happens not just with -copyFromLocal but with every command that starts with hdfs, for example -ls, -mkdir, and so on.
Running the following command solved the problem:
bash /usr/local/hadoop/sbin/start-all.sh
After this, all the daemons should be running; I checked with jps, which showed the following:
2033 SecondaryNameNode
2778 Jps
2325 NodeManager
2195 ResourceManager
1691 NameNode
After that, run:
hdfs namenode -format
And then the error message was gone.
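For reference, a minimal sketch of the usual order on a fresh setup; note that hdfs namenode -format erases HDFS metadata, so it is normally run once, before the daemons are started, and the daemons must be restarted after a format for it to take effect:
bash /usr/local/hadoop/sbin/stop-all.sh    # stop any running daemons first
hdfs namenode -format                      # wipes HDFS metadata; prompts for confirmation if already formatted
bash /usr/local/hadoop/sbin/start-all.sh   # start everything again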

access file in datanode from namenode hadoop

I have configured the NameNode on hadoopmaster and a DataNode on hadoopslave. When I run start-dfs.sh on hadoopmaster, the NameNode on hadoopmaster and the DataNode on hadoopslave both start. But when I execute hdfs dfs -ls / on hadoopmaster, I can't see the files that I put into HDFS from hadoopslave.
Note: I put a file into HDFS from hadoopslave using hdfs dfs -put /sample.txt /
Correct me if I'm wrong!
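One check worth running in this setup (an assumption, not from the thread): confirm that the DataNode on hadoopslave has actually registered with the NameNode on hadoopmaster, since HDFS is a single namespace and a file put from any node should be visible from every node:
hdfs dfsadmin -report   # lists the live DataNodes the NameNode knows about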

Hadoop -mkdir : - Could not create the Java Virtual Machine

I have configured Hadoop 1.0.4 and started the following without any issue:
1. $ start-dfs.sh: works fine
2. $ start-mapred.sh: works fine
3. $ jps (output is below)
Output:
rahul@rahul-Inspiron-N4010:/usr/local/hadoop-1.0.4/bin$ jps
6964 DataNode
7147 SecondaryNameNode
6808 NameNode
7836 Jps
7254 JobTracker
7418 TaskTracker
But I am facing an issue when issuing this command:
rahul@rahul-Inspiron-N4010:/usr/local/hadoop-1.0.4/bin$ hadoop -mkdir /user
I get the following error:
Unrecognized option: -mkdir
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
I applied the patch HDFS-1943.patch, but it was not useful.
It should be: hdfs dfs -mkdir /user
Consult the documentation at http://hadoop.apache.org/docs/r1.0.4/file_system_shell.html
The fs option is missing. Because hadoop does not recognize -mkdir as a subcommand, it hands it to the JVM as an option, which is what produces the "Could not create the Java Virtual Machine" error. It should be:
hadoop fs -mkdir /user
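A quick way to verify the directory was created, using the same Hadoop 1.x filesystem shell:
hadoop fs -mkdir /user
hadoop fs -ls /   # the new /user directory should be listed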

failed on connection exception: java.net.ConnectException: Connection refused in hive

When I run the following query in Hive:
hive> select count(*) from testsql;
I get the following error:
FAILED: RuntimeException java.net.ConnectException: Call From impetus-1466/192.168.49.77 to impetus-1466:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
The jps output looks like this:
[impadmin@impetus-1466 hadoop-1.0.3.15]$ jps
26380 TaskTracker
26709 Jps
26230 JobTracker
25943 NameNode
I started:
$ start-all.sh
$ start-dfs.sh
$ start-mapred.sh
How could this be solved?
Thanks
If you can open http://localhost:8088/cluster but can't open http://localhost:50070/, then the DataNode may not have started, or the NameNode may not have been formatted.
Also check hadoop.tmp.dir in core-site.xml. If it is not set, it defaults to /tmp, which is cleared on reboot, so set hadoop.tmp.dir in core-site.xml:
<property>
  <name>hadoop.tmp.dir</name>
  <value>/path/to/hadoop/tmp</value>
</property>
Then stop Hadoop, reformat with hdfs namenode -format, and restart Hadoop.
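As a quick check of the two web UIs mentioned above, assuming the default ports, curl can confirm whether anything is answering (a 200 status means the daemon's UI is up; 000 means the connection was refused):
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:50070/          # NameNode web UI
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8088/cluster   # ResourceManager web UI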
Similar question: http://localhost:50070 does not work HADOOP
The reason for this is that either there are no DataNodes in your cluster, or the DataNodes do not know their NameNode. This can happen when the NameNode has been formatted more than once: the NameNode's cluster ID changed, but the change was never propagated to the DataNodes.
The links below might be helpful:
Datanode not starts correctly
http://hortonworks.com/community/forums/topic/clusterid-mismatch-for-namenode-and-datanodes-in-fully-distributed-cluster/
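One way to confirm such a mismatch is to compare the IDs recorded in the VERSION files of the NameNode and DataNode storage directories; a sketch, assuming the paths configured in dfs.name.dir and dfs.data.dir (they vary by setup):
cat /path/to/dfs/name/current/VERSION   # on the NameNode: note the clusterID (namespaceID on Hadoop 1.x)
cat /path/to/dfs/data/current/VERSION   # on each DataNode: this value must match the NameNode's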
