ubuntu: ssh: connect to host ubuntu port 22: Connection refused - hadoop

Ubuntu 16.04.1 LTS
Hadoop 3.3.1
I have manually downloaded and installed Java version "1.8.0_261".
When I run start-dfs.sh, I get:
hadoop@ubuntu:~/hadoop/sbin$ start-dfs.sh
Starting namenodes on [ubuntu]
ubuntu: ssh: connect to host ubuntu port 22: Connection refused
Starting datanodes
localhost: ssh: connect to host localhost port 22: Connection refused
Starting secondary namenodes [ubuntu]
ubuntu: ssh: connect to host ubuntu port 22: Connection refused
2021-06-25 17:42:19,711 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Try installing the whole SSH package:
sudo apt-get install ssh
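
If the error persists after installing the package, it is worth confirming that the SSH daemon is actually running and listening on port 22 before retrying start-dfs.sh. A minimal check, assuming a systemd-based Ubuntu and the openssh-server package:
sudo apt-get install openssh-server   # the ssh metapackage pulls this in as well
sudo systemctl enable --now ssh       # start sshd now and on every boot
sudo systemctl status ssh             # should report "active (running)"
ssh localhost                         # should now ask for a password instead of refusing the connection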

Related

An error occurring in starting hadoop in ubuntu

I installed Hadoop 3.3.4 on Ubuntu 20.04.4 LTS (Focal Fossa).
After installation and formatting the namenode, I ran the following command:
samar@pc:~/hadoop-3.3.4/sbin$ ./start-all.sh
But it gave the following output:
Starting namenodes on [localhost]
localhost: ssh: connect to host localhost port 22: Connection refused
Starting datanodes
localhost: ssh: connect to host localhost port 22: Connection refused
Starting secondary namenodes [pc]
pc: ssh: connect to host pc port 22: Connection refused
2022-08-25 13:22:46,349 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting resourcemanager
Starting nodemanagers
localhost: ssh: connect to host localhost port 22: Connection refused
ssh connection output is:

hadoop Starting namenodes on [ubuntu] ubuntu: Permission denied (publickey,password)

Ubuntu 16.04.1 LTS
Hadoop 3.3.1
When I run start-dfs.sh, I get:
hadoop@ubuntu:~/hadoop/sbin$ start-dfs.sh
Starting namenodes on [ubuntu]
ubuntu: Warning: Permanently added 'ubuntu' (ECDSA) to the list of known hosts.
ubuntu: Permission denied (publickey,password).
Starting datanodes
localhost: Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
localhost: Permission denied (publickey,password).
Starting secondary namenodes [ubuntu]
ubuntu: Permission denied (publickey,password).
2021-06-25 18:05:42,961 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
I also encountered this problem.
I resolved it using the following shell commands:
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
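
Before re-running start-dfs.sh, it is worth verifying that the key actually gives password-free access; roughly (assuming the default key path and the hostname ubuntu from the output above):
chmod 700 ~/.ssh        # sshd ignores the key if the directory permissions are too open
ssh localhost exit      # should log in and return without a password prompt
ssh ubuntu exit         # same check against the hostname Hadoop uses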

ssh connection to host port 22 connection refused

I use a VMware virtualization system, with CentOS release 7 as the operating system, and I installed Hadoop 2.7.1. After installing Hadoop I ran hdfs namenode -format, which completed successfully. But when I run ./start-all.sh it gives me errors. I tried several suggestions that I found on the internet, but the problem persists.
[root@MASTER sbin]# ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
21/06/17 19:06:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [MASTER]
root@master's password:
MASTER: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-MASTER.out
localhost: ssh: connect to host localhost port 22: Connection refused
Starting secondary namenodes [0.0.0.0]
0.0.0.0: ssh: connect to host 0.0.0.0 port 22: Connection refused
21/06/17 19:06:49 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-root-resourcemanager-MASTER.out
localhost: ssh: connect to host localhost port 22: Connection refused
Provide passwordless, key-based SSH access to all the worker nodes listed in your hosts file, including localhost. Follow the instructions in the tutorial How To Set Up SSH Keys on CentOS 7.
Finally, test password-free access with ssh localhost and ssh [yourworkernode].
Then run start-dfs.sh and, if it succeeds, run start-yarn.sh.
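
For reference, a minimal version of that key setup (a sketch assuming the root user shown in the prompt above and that sshd is already running on every node):
ssh-keygen -t rsa                 # accept the defaults, leave the passphrase empty
ssh-copy-id root@localhost        # repeat for every host named in the slaves/workers file
ssh-copy-id root@MASTER
ssh localhost exit                # both should now log in without a password prompt
ssh MASTER exit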

secondary namenode failed to start ssh: connect to host 0.0.0.7 port 22: Connection timed out

Starting secondary namenodes [7]
7: ssh: connect to host 0.0.0.7 port 22: Connection timed out
The secondary namenode did not start due to a connection timeout.
How do I fix this on Ubuntu 20.04.1 LTS?
vivek@7:~$ start-all.sh
WARNING: Attempting to start all Apache Hadoop daemons as pradeep in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [7]
7: ssh: connect to host 0.0.0.7 port 22: Connection timed out
2020-10-15 18:00:39,116 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting resourcemanager
Starting nodemanagers
vivek@7:~$ jps
6256 NodeManager
5649 NameNode
6098 ResourceManager
6586 Jps
5788 DataNode
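
Not an answer from this thread, just a diagnostic sketch: the prompt shows the machine's hostname is the bare number 7, and the error shows ssh resolving 7 to the numeric address 0.0.0.7 rather than treating it as a name. Checking how the hostname resolves (and considering a non-numeric hostname, or an explicit /etc/hosts entry) is the first thing to look at:
hostname                        # prints 7 here
getent hosts "$(hostname)"      # shows what the name actually resolves to
cat /etc/hosts                  # look for a 127.0.0.1 (or real IP) line for this hostname
ssh "$(hostname)" exit          # should connect without the timeout once resolution is fixed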

Hadoop 2.6.0 on OS-X Yosemite

I'm following this tutorial for installing Hadoop on my OS X Yosemite machine.
On starting the server I get the following message:
Starting namenodes on [localhost]
Starting secondary namenodes [0.0.0.0]
15/02/06 10:59:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/Cellar/hadoop/2.6.0/libexec/logs/yarn-sverma-resourcemanager-Ban-1sverma-m.local.out
However, on running any example, I'm getting the following exception:
java.net.ConnectException: Call From Ban-1sverma-m.local/10.177.55.82 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
at org.apache.hadoop.ipc.Client.call(Client.java:1472)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
On doing a telnet localhost 9000, I see:
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
telnet: Unable to connect to remote host
Can anyone please tell me why my server is not running?
EDIT:
This is the log inside yarn-sverma-resourcemanager-Ban-1sverma-m.local.log:
2015-02-05 16:41:07,723 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting ResourceManager
STARTUP_MSG: host = Ban-1sverma-m.local/10.177.55.82
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.6.0
Try starting the services manually and check the log files.
I have seen Connection Refused messages when the config files contain an external hostname or IP address but the clients attempt to connect to localhost, which resolves to the loopback address. This answer might help.
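
As a concrete example of what to check (a sketch, assuming HADOOP_HOME is set; with the Homebrew install above it would be /usr/local/Cellar/hadoop/2.6.0/libexec), confirm that fs.defaultFS in core-site.xml points at the host you actually connect to and that a NameNode is listening there:
cat $HADOOP_HOME/etc/hadoop/core-site.xml           # fs.defaultFS should be hdfs://localhost:9000 here
$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode   # start it manually and read the log file it names
jps                                                 # a NameNode process should now appear
telnet localhost 9000                               # should connect instead of being refused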
