I am getting the error below when I try to configure the Hadoop plugin in Eclipse.
Error: call to localhost:54310 failed on connection exception: java.net.ConnectException: Connection refused: no further information
Hadoop version is 1.0.4
I have installed Hadoop on Linux and I am running Eclipse on Windows.
In the Hadoop location window, I have tried the host as both localhost and the Linux server name:
MR Master: Host: localhost and port 54311
DFS Master: Host: localhost and port 54310
MR Master: Host: <Linux server name> and port 54311
DFS Master: Host: <Linux server name> and port 54310
In my mapred-site.xml I see the entry localhost:54311.
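Since Eclipse runs on Windows while the daemons run on Linux, the addresses in the config must name the Linux host, not localhost, or the plugin cannot reach them. A sketch of the Hadoop 1.x settings, assuming a hypothetical hostname linuxserver for the Linux box:

```xml
<!-- core-site.xml: where the DFS Master (NameNode) listens -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://linuxserver:54310</value>
</property>

<!-- mapred-site.xml: where the MR Master (JobTracker) listens -->
<property>
  <name>mapred.job.tracker</name>
  <value>linuxserver:54311</value>
</property>
```

With localhost in these values, the daemons may bind to the loopback interface only, which is unreachable from the Windows machine.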
A ConnectionRefused error can mean you are trying to access a directory that you don't have permission to read or write.
This may be caused by a directory created by another user (e.g. root) that your master machine is trying to read from or write to.
It is more likely that you are trying to read input from the wrong place. Check your input directory first; if there is no problem with it, check your output directory.
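As a concrete check of the point above, a small sketch, assuming hypothetical paths /user/hduser/input and /user/hduser/output (substitute the directories your job actually uses):

```shell
#!/bin/sh
# Hypothetical job paths -- substitute the ones your job actually uses.
INPUT=/user/hduser/input
OUTPUT=/user/hduser/output

# Only run the HDFS checks when a hadoop client is on the PATH.
if command -v hadoop >/dev/null 2>&1; then
  # Show owner/group of both directories; a directory created by root
  # shows root:supergroup and is unwritable for an ordinary user.
  hadoop fs -ls -d "$INPUT" "$OUTPUT"
else
  echo "hadoop client not found; run this on a cluster node"
fi
```

If the owner is wrong, a superuser can reassign it with `hadoop fs -chown -R <user>:<group> <dir>`.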
I'm trying to set up two types of Hadoop clusters: one standalone via SSH to localhost, and the other on AWS EC2.
Both fail with a similar issue: a connection refused error.
Here are some pictures of the issues: this is the result of ssh localhost.
The next is the failed run.
This is the relevant portion of ~/.ssh/config:
I can run hadoop, hdfs, yarn, and all the other commands. But, when I actually type this and run it, it fails:
Of note, I'm following this tutorial for the AWS EC2 cluster (this command comes almost at the end): https://awstip.com/setting-up-multi-node-apache-hadoop-cluster-on-aws-ec2-from-scratch-2e9caa6881bd
It is failing on this command: scp hadoop-env.sh core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml ubuntu@ec2-54-209-221-47.compute-1.amazonaws.com:/home/ubuntu/hadoop/conf
That's not my EC2 link; it's from the example, but that's where it's failing with the same error as in the 2nd and 4th pictures.
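Since scp rides on ssh, a connection refused here almost always means sshd on the instance is unreachable from your machine. A sketch of the checks, assuming a hypothetical key file mykey.pem (the actual network commands are shown as comments because they need your instance to be up):

```shell
#!/bin/sh
# The example hostname from the tutorial -- substitute your own
# instance's public DNS name.
HOST=ec2-54-209-221-47.compute-1.amazonaws.com
USER=ubuntu

# 1. Is port 22 reachable at all? (the EC2 security group must allow
#    inbound ssh from your IP)
#      nc -zv "$HOST" 22
# 2. A verbose ssh shows exactly where the handshake stops:
#      ssh -v -i ~/.ssh/mykey.pem "$USER@$HOST"
# 3. scp uses the same transport, so once plain ssh works:
#      scp -i ~/.ssh/mykey.pem core-site.xml "$USER@$HOST:/home/ubuntu/hadoop/conf"
echo "checking $USER@$HOST"
```

If `nc` cannot reach port 22, the problem is the security group or the instance, not Hadoop.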
I have hadoop installed in pseudo-distributed mode.
When running the command
hadoop fs -ls
I am getting the following error:
ls: Call From kali/127.0.1.1 to localhost:9000 failed on connection exception:
java.net.ConnectException: Connection refused;
For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Any suggestions?
If you read the link in the error, there are two immediate points that need to be addressed.
If the error message says the remote service is on "127.0.0.1" or "localhost" that means the configuration file is telling the client that the service is on the local server. If your client is trying to talk to a remote system, then your configuration is broken.
You should treat pseudodistributed mode as a remote system, even if it is only running locally.
For HDFS, you can resolve that by putting your computer's hostname (preferably the full FQDN for your domain) as the HDFS address in core-site.xml. In your case, hdfs://kali:9000 should be enough.
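A sketch of that core-site.xml entry, assuming a Hadoop 2+ install where the property is named fs.defaultFS (the hostname kali comes from the error message; adjust it to your machine):

```xml
<!-- core-site.xml -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://kali:9000</value>
</property>
```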
Check that there isn't an entry for your hostname mapped to 127.0.0.1 or 127.0.1.1 in /etc/hosts (Ubuntu is notorious for this).
I'm not completely sure why it needs to be removed, but the general answer I can think of is that Hadoop is a distributed system, and, as mentioned, you should treat pseudo-distributed mode as if it were a remote HDFS server. Therefore, no loopback address should be mapped to your computer's hostname.
For example, remove the second line of this
127.0.0.1 localhost
127.0.1.1 kali
Or remove the hostname from this
127.0.0.1 localhost kali
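After editing /etc/hosts, you can verify that the hostname no longer resolves to a loopback address. A small sketch; the hostname kali is from the question, and the classify helper is purely for illustration:

```shell
#!/bin/sh
# classify <ip>: tell whether a resolved address is a loopback.
classify() {
  case "$1" in
    127.*) echo loopback ;;
    "")    echo unresolved ;;
    *)     echo ok ;;
  esac
}

# getent consults /etc/hosts and DNS -- the same lookup path Hadoop uses.
resolved=$(getent hosts kali 2>/dev/null | awk '{print $1}')
echo "kali -> ${resolved:-<none>} ($(classify "$resolved"))"
```

You want the output to show a real LAN address, not 127.0.0.1 or 127.0.1.1.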
Most importantly (emphasis added)
None of these are Hadoop problems, they are hadoop, host, network and firewall configuration issues
I started following an online tutorial to configure multiple nodes on my single local VM. Here is the /etc/hosts file on the master node:
127.0.0.1 localhost
192.168.96.132 hadoop
192.168.96.135 hadoop1
192.168.96.136 hadoop2
ssh:ALL:allow
sshd:ALL:allow
Here is the command that used to work: hdfs dfs -ls
Now I am seeing the error message below:
ls: Call From hadoop/192.168.96.132 to hadoop:9000 failed on connection exception:
java.net.ConnectException: Connection refused;
For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
What is wrong with my configuration? Where should I check and correct it?
Thank you very much.
First, try to ping each node:
ping hadoop
ping hadoop1
ping hadoop2
Then try to connect via ssh. The syntax is:
ssh username@hadoop
ssh username@hadoop1
ssh username@hadoop2
Then see the results to find out whether the systems are connecting or not.
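The ping checks above can be sketched as one sweep; additionally, nc confirms that something is actually listening on the NameNode RPC port 9000 (hostnames taken from the /etc/hosts in the question):

```shell
#!/bin/sh
# Ping each node once and collect up/down results.
results=""
for h in hadoop hadoop1 hadoop2; do
  if ping -c 1 -W 1 "$h" >/dev/null 2>&1; then
    results="$results $h:up"
  else
    results="$results $h:down"
  fi
done
echo "$results"

# On a reachable master, confirm the NameNode RPC port is open:
#   nc -zv hadoop 9000
```

A host that pings but refuses port 9000 means the NameNode process is down or bound to a different address.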
I am using a VM with Ambari 2.2 and HDP 2.3 and installing services through the Ambari user interface. The issue is that the NameNode does not start, and the log indicates that port 50070 is in use. I tried netstat and other tools to find out if anything is running on port 50070; nothing is. I also tried changing 50070 to 50071, but the error remains the same, except it now says port 50071 is in use. Below is the error I get in the Ambari error file:
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode'' returned 1. starting namenode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-namenode-hdp-m.out
2016-02-07 11:52:47,058 ERROR namenode.NameNode (NameNode.java:main(1712)) - Failed to start namenode.
java.net.BindException: Port in use: hdp-m.samitsolutions.com:50070
When using Ambari, I came across the same "port 50070 is in use" problem. I found it was actually caused by a mismatch of the NameNode's host, not the port. Sometimes Ambari will start the NameNode on HostB and HostC, while your configuration specifies HostA and HostC.
Such a situation can be caused by updating the wrong NameNode config when moving the NameNode.
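A quick way to confirm the mismatch on the node where Ambari tries to start the NameNode; a sketch, not Ambari's own tooling (the hdfs and ss calls are shown as comments because they need the cluster installed):

```shell
#!/bin/sh
# "Port in use" from the NameNode can really mean "cannot bind this
# address": the configured HTTP address names a different host.
this_host=$(hostname -f 2>/dev/null || uname -n)
echo "this host: $this_host"

# Compare against what HDFS is configured to bind (run on the cluster):
#   hdfs getconf -confKey dfs.namenode.http-address
# And double-check that nothing really holds the port:
#   ss -ltn | grep ':50070'
# If the configured host differs from this host, fix the NameNode host
# in Ambari rather than changing the port.
```

The BindException names hdp-m.samitsolutions.com:50070; if that is not the machine Ambari started the process on, the bind fails regardless of the port number.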
I have an Ubuntu server VM in VirtualBox (on Mac OS X), and I configured a Hadoop cluster via Docker: 1 master (172.17.0.3) and 2 slave nodes (172.17.0.4, 172.17.0.6). After running ./sbin/start-dfs.sh under the Hadoop home folder, I found the error below on the datanode machine:
Datanode denied communication with namenode because hostname cannot be
resolved (ip=172.17.0.4, hostname=172.17.0.4): DatanodeRegistration(0.0.0.0,
datanodeUuid=4c613e35-35b8-41c1-a027-28589e007e78, infoPort=50075,
ipcPort=50020, storageInfo=lv=-55;cid=CID-9bac5643-1f9f-4bc0-abba-
34dba4ddaff6;nsid=1748115706;c=0)
Because Docker does not support bidirectional name linking and, furthermore, my Docker version does not allow editing the /etc/hosts file, I used IP addresses to set the name node and slaves. The following is my slaves file:
172.17.0.4
172.17.0.6
After searching on Google and Stack Overflow, no solution worked for my problem. However, I guessed that the Hadoop NameNode regards 172.17.0.4 as a "hostname", so it reports "hostname cannot be resolved" where "hostname=172.17.0.4".
Any Suggestions?
Finally I found a solution, which confirmed my guess:
1. Upgrade Docker to 1.4.1, following the instructions from https://askubuntu.com/questions/472412/how-do-i-upgrade-docker.
2. Write the IP => hostname mappings of the master and slaves into /etc/hosts.
3. Use hostnames instead of IP addresses in the Hadoop slaves file.
4. Run ./sbin/start-dfs.sh.
5. Done!
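On a Docker new enough to support it, the /etc/hosts step can also be done at container start with --add-host instead of editing the file inside the container. A sketch with hypothetical names master/slave1/slave2 and a hypothetical image name my-hadoop-image (the docker command is shown as a comment since it needs a Docker daemon):

```shell
#!/bin/sh
# Inject ip -> hostname mappings at container start (run on the host):
#   docker run -d --hostname master \
#     --add-host slave1:172.17.0.4 \
#     --add-host slave2:172.17.0.6 \
#     my-hadoop-image

# Then list the workers by hostname, not IP, so the NameNode's reverse
# lookup of each datanode succeeds:
printf 'slave1\nslave2\n' > slaves
cat slaves
```

This matches steps 2 and 3 of the fix: every node can then resolve every other node's name, which is what the "Datanode denied communication" check requires.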