How can I deal with the java.net.ConnectException in Hadoop?

I was trying to copy some local file to the HDFS, with this script:
bin/hadoop fs -copyFromLocal '/home/czy/IdeaProjects/HadoopInAction/FirstHadoop/src/main/resources/crossView.txt' hdfs://localhost/user/czy
It came like this:
copyFromLocal: Call From ubuntu/127.0.1.1 to localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
But when I used a script like this (without hdfs://localhost/), it worked well:
bin/hadoop fs -copyFromLocal '/home/czy/IdeaProjects/HadoopInAction/FirstHadoop/src/main/resources/crossView.txt' /user/czy
Why did this happen? Why do I see "to localhost:8020 failed" when I configured the namenode port to 9000?

Check your core-site.xml file for valid parameters: the server (host) name and port.
The link below might also be useful:
https://datashine.wordpress.com/2014/09/06/java-net-connectexception-connection-refused-for-more-details-see-httpwiki-apache-orghadoopconnectionrefused/
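The likely reason for the difference: when you pass hdfs://localhost/... without a port, the client ignores fs.defaultFS and falls back to the NameNode's default RPC port, 8020; a bare path such as /user/czy is resolved against fs.defaultFS, which you configured with port 9000. A minimal sketch of a working setup under that assumption, with core-site.xml looking roughly like this:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:9000</value>
</property>
Either spell the port out in the URI:
bin/hadoop fs -copyFromLocal '/home/czy/IdeaProjects/HadoopInAction/FirstHadoop/src/main/resources/crossView.txt' hdfs://localhost:9000/user/czy
or omit the scheme and let fs.defaultFS supply both host and port:
bin/hadoop fs -copyFromLocal '/home/czy/IdeaProjects/HadoopInAction/FirstHadoop/src/main/resources/crossView.txt' /user/czy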

Related

Hadoop not connecting to local host

I've recently installed Hadoop on my Windows machine. To test it out, I ran hadoop fs -ls in the command prompt, and it gives the following error:
ls: Call From DESKTOP-I1FS520/192.168.100.57 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused: no further information; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
I don't know how to fix this; any help would be appreciated.
The error indicates that your namenode is not running, or that the namenode address (fs.defaultFS) is misconfigured in your core-site.xml.
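A quick way to check both possibilities on Windows, assuming the JDK's jps is on your PATH and HADOOP_HOME points at your Hadoop install (both assumptions, adjust as needed):
:: a NameNode process should appear in this list
jps
:: if it does not, start the HDFS daemons
%HADOOP_HOME%\sbin\start-dfs.cmd
:: then confirm the address the client actually dials
%HADOOP_HOME%\bin\hdfs getconf -confKey fs.defaultFS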

How to figure out hdfs URI - java.io.IOException: Incomplete HDFS URI, no host

How can I figure out the URI my hdfs dfs commands are connecting to?
Is there any configuration file that stores the URI or any command that can be used to display it?
I looked into the documentation of FileSystemShell and the dfsadmin documentation without success. (Also, I do not have access to most of the dfsadmin commands.)
When I call a command with hdfs:///user/myUserName/... it throws the exception:
Exception in thread "main" java.io.IOException: Incomplete HDFS URI, no host: hdfs:///user/myUserName/test.avro
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:136)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2625)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2607)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.avro.mapred.FsInput.<init>(FsInput.java:38)
at org.apache.avro.tool.Util.openSeekableFromFS(Util.java:110)
at org.apache.avro.tool.DataFileGetSchemaTool.run(DataFileGetSchemaTool.java:47)
at org.apache.avro.tool.Main.run(Main.java:87)
at org.apache.avro.tool.Main.main(Main.java:76)
Simple commands like hdfs dfs -ls are working fine.
Using Hadoop 3.1.0.
If you're able to access the core-site.xml file, you can look for the value assigned to the property fs.defaultFS:
$ grep -A 2 defaultFS /etc/hadoop/conf/core-site.xml
<name>fs.defaultFS</name>
<value>hdfs://bigdataserver-2.internal.cloudapp.net:8020</value>
</property>
Note: I use Cloudera, and core-site.xml is where I get this detail. For plain Apache Hadoop, you might be looking at core-default.xml instead.
Check this out: https://hadoop.apache.org/docs/r2.8.5/hadoop-project-dist/hadoop-common/core-default.xml
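If reading core-site.xml directly is not an option, the hdfs getconf subcommand prints the value the client resolves. For example (the output shown just mirrors the Cloudera value above; yours will differ):
$ hdfs getconf -confKey fs.defaultFS
hdfs://bigdataserver-2.internal.cloudapp.net:8020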

java.net.ConnectException: Connection refused when trying to use hdfs

I run into a problem when I try to use the hadoop hdfs command:
root@ec2-35-205-125-85:~# hdfs dfs -copyFromLocal ~/input/ ~/input/
copyFromLocal: Call From ip-172-32-5-110.us-west-2.compute.internal/172.32.5.110 to localhost:54310 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
This problem happens not just for -copyFromLocal but for every command that starts with hdfs, for example -ls, -mkdir, and so on.
The problem was solved after running the following:
bash /usr/local/hadoop/sbin/start-all.sh
After this, all the daemons should be running. I checked with jps, which showed the following:
2033 SecondaryNameNode
2778 Jps
2325 NodeManager
2195 ResourceManager
1691 NameNode
After that run:
hdfs namenode -format
And then the error was gone.
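To double-check that the fix took, it may help to confirm that a NameNode process exists and that something is listening on the RPC port from the error message (54310 here; the netstat flags assume a Linux host):
# the NameNode should appear in the process list
jps | grep NameNode
# something should be listening on the port from the error
netstat -tlnp | grep 54310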

Working with HDFS within docker container

I'm reading a book on Hadoop, which I'm running locally with Docker. I'm using the image from sequenceiq and trying to execute this command:
./bin/hadoop fs -copyFromLocal /input/docs/quangle.txt hdfs://localhost/user/4lex1v/quangle.txt
But it throws an error:
copyFromLocal: Call From e9584b413d6a/172.17.0.2 to localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
I don't have such problems running it locally. How can I fix it?
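One way to narrow this down from inside the container, assuming the image keeps Hadoop under /usr/local/hadoop (an assumption; adjust the path to your image):
# are the HDFS daemons running inside the container?
jps
# which address is the client configured to use?
/usr/local/hadoop/bin/hdfs getconf -confKey fs.defaultFS
# if that prints the container's hostname rather than localhost, either use that
# hostname in the URI or drop the hdfs://localhost prefix and pass a bare path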

failed on connection exception: java.net.ConnectException: Connection refused in hive

When I am running the following query in hive:
hive> select count(*) from testsql;
I am getting the following error:
Error
FAILED: RuntimeException java.net.ConnectException: Call From impetus-1466/192.168.49.77 to impetus-1466:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
The jps looks like:
[impadmin@impetus-1466 hadoop-1.0.3.15]$ jps
26380 TaskTracker
26709 Jps
26230 JobTracker
25943 NameNode
I started the following:
$ start-all.sh
$ start-dfs.sh
$ start-mapred.sh
How could this be solved?
Thanks
If you can open http://localhost:8088/cluster but can't open http://localhost:50070/, then either the datanode didn't start up or the namenode wasn't formatted.
Also check hadoop.tmp.dir in core-site.xml; if it is not set, it defaults to /tmp, which is typically cleared on reboot, so set hadoop.tmp.dir explicitly in core-site.xml:
<property>
<name>hadoop.tmp.dir</name>
<value>/path/to/hadoop/tmp</value>
</property>
Then stop Hadoop, reformat with hdfs namenode -format, and restart Hadoop (sketched below).
Similar question: http://localhost:50070 does not work HADOOP
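A minimal sketch of that stop/format/start sequence, assuming Hadoop's bin and sbin directories are on your PATH (and keep in mind that formatting erases everything stored in HDFS):
# stop HDFS
stop-dfs.sh        # or stop-all.sh on older releases
# re-initialize the NameNode metadata under the new hadoop.tmp.dir
hdfs namenode -format
# start HDFS again
start-dfs.sh       # or start-all.sh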
The reason for this is that either there are no datanodes in your cluster, or the datanodes do not know their namenode. This can happen when the namenode has been formatted more than once: the namenode's cluster ID changed, but the change was not propagated to the datanodes.
The below links might be helpful:
Datanode not starts correctly
http://hortonworks.com/community/forums/topic/clusterid-mismatch-for-namenode-and-datanodes-in-fully-distributed-cluster/
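One way to confirm a cluster ID mismatch is to compare the VERSION files the NameNode and DataNode keep in their storage directories. The paths below assume the defaults under the hadoop.tmp.dir value shown earlier; adjust them to your dfs.namenode.name.dir and dfs.datanode.data.dir:
# cluster ID recorded by the NameNode
grep clusterID /path/to/hadoop/tmp/dfs/name/current/VERSION
# cluster ID recorded by the DataNode
grep clusterID /path/to/hadoop/tmp/dfs/data/current/VERSION
# if they differ, copy the NameNode's clusterID into the DataNode's VERSION file
# (or clear the data directory so the DataNode re-registers), then restart HDFS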

Resources