I made a Hadoop image based on CentOS using a Dockerfile. There are 4 nodes. I want to configure the cluster using ssh-copy-id, but an error occurred.
ERROR: ssh: connect to host [ip] port 22: Connection refused
How can I solve this problem?
SSH follows a client-server architecture, so openssh-server has to be installed and running in the container. Once the SSH daemon is up, ssh-copy-id and the other commands should work, provided the container's IP address is routable.
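A minimal sketch of what the image needs (assuming a CentOS base; package names and init handling differ on other distributions):

yum install -y openssh-server openssh-clients
ssh-keygen -A          # generate the host keys sshd refuses to start without
/usr/sbin/sshd -D      # run the daemon; -D keeps it in the foreground as the container's main process

With sshd listening on port 22 in every container, ssh-copy-id should no longer be refused.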
I am trying to run Open MPI from a Python notebook on a Databricks cluster (Ubuntu).
The cluster has 3 nodes:
Driver node: 8 cores
Two worker nodes: 8 cores each
I found that Open MPI is already installed on Databricks.
My command:
sudo mpirun --allow-run-as-root -np 25 --hostfile MY_HOSTFILE ./MY_C_APP
I got:
ssh: connect to host DRIVER_NODE_IP port 22: No route to host
ssh: connect to host ONE_WORKER_NODE_IP port 22: No route to host
In MY_HOSTFILE, all nodes' IP addresses are listed.
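For reference, a standard Open MPI hostfile with these three nodes would look roughly like this (the IPs are placeholders from the question; slots matches the 8 cores per node):

DRIVER_NODE_IP slots=8
WORKER_NODE_1_IP slots=8
WORKER_NODE_2_IP slots=8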
I run this command from the Python notebook on Databricks:
%sh
sudo ssh -T DRIVER_NODE_IP
I got:
ssh: connect to host DRIVER_NODE_IP port 22: No route to host
The notebook runs on the driver node through a web service.
I have tried to set up SSH keys so that each node can be accessed, but the ssh-keygen shell command cannot be run from a Databricks Python notebook.
Could anybody let me know how I can work around this infrastructure problem?
Thanks.
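One note on the ssh-keygen point (a sketch, not verified on Databricks): the command normally blocks on interactive prompts, which a notebook cell cannot answer, but it can be run fully non-interactively from a %sh cell:

%sh
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa -q   # empty passphrase, quiet, no prompts

The "No route to host" errors themselves suggest network filtering between the nodes rather than a key problem, though.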
I have Ubuntu 16.04 on both my local machine and my virtual machine. I want to access the virtual machine from the local machine, and I have already changed the network adapter to a bridged connection (both IPs are in 192.168.10.x). But when I run ssh <virtual_machine_ip> from my local terminal I get the error: ssh: connect to host 192.168.10.7 port 22: Connection refused.
PS: I want to configure a single-node Hadoop cluster.
The issue has been resolved. I changed my network adapter back to NAT and used port forwarding on port 2222. Now when I run ssh -p 2222 username@127.0.0.1, I am able to connect to my guest OS.
Side note: please check that OpenSSH is installed on your guest machine.
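If the hypervisor is VirtualBox (the question doesn't name it, so this is an assumption), the forwarding rule used above can be created from the host like this:

VBoxManage modifyvm "ubuntu-vm" --natpf1 "guestssh,tcp,,2222,,22"   # host port 2222 -> guest port 22; run while the VM is powered off; "ubuntu-vm" is a placeholder VM name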
I just installed Bash on Ubuntu on Windows 10 natively. When I try to access a remote server I get: ssh: connect to host HOSTNAME port 22: Connection refused
I have tried to find solutions, but they don't work.
This is what I have tried:
https://askubuntu.com/questions/59458/error-message-sudo-unable-to-resolve-host-user/733120#733120
When I could not access /etc/hosts I tried this:
https://askubuntu.com/questions/326239/cannot-access-etc-hosts
After downloading gksudo to try to edit /etc/hosts I got this error message: (gksudo:2601): Gtk-WARNING **: cannot open display:
Are you sure everything is set up correctly?
I just tried
ssh -T git@github.com
in my Bash on Ubuntu on Windows.
and it totally works, returning:
Hi <username>! You've successfully authenticated, but GitHub does not provide shell access.
Maybe you have some settings that prevent connections? for example in ~/.bashrc?
Maybe your server listens on a different port? Use ssh -p 2222 (or whatever the port is) in that case.
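If it is a non-standard port, a ~/.ssh/config entry saves retyping it (the host name, port, and user here are placeholders):

Host myserver
    HostName example.com
    Port 2222
    User alice

After that, plain ssh myserver picks up the port automatically.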
I am trying to execute the below command in PuTTY.
ssh username@servername
I am getting the below error message:
ssh: connect to host servername port 22: Connection refused
I am trying to connect to a Windows server from a Unix server. Please help me!
The main thing is that Windows Server doesn't ship with an SSH daemon out of the box. You could install the Telnet server from the Windows components and connect from the Unix machine to the Windows machine with the telnet command, or use a third-party SSH server such as Cygwin's sshd.
More is available on Server Fault or SO. (How ironic that both topics were closed as off-topic.)
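If you take the Cygwin route, the daemon is set up roughly like this (a sketch; run from an elevated Cygwin shell):

ssh-host-config -y     # configure sshd as a Windows service, accepting the defaults
net start cygsshd      # older Cygwin installs register the service as "sshd" instead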
Edit:
I have set up a single-node cluster on two different machines. I made one the master (192.168.1.1) and the other machine the slave (192.168.1.2), and I can successfully ping between the two machines. I have made the following changes to get to a two-node cluster:
Updated /etc/hosts on both machines, and hosts.allow with:
All : Ashish-PC 192.168.1.1 : allow
All : slave 192.168.1.2 : allow
The masters file contains:
Ashish-PC
The slaves file contains:
Ashish-PC
slave
I am getting an error while copying the local host's public key to the remote host (slave) on port 22:
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@slave
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: ERROR: ssh: connect to host slave port 22: Connection timed out
The same happens when I start the DFS services on the master:
bin/start-dfs.sh
starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-Ashish-namenode-Ashish-PC.out
slave: ssh: connect to host slave port 22: Connection timed out
Ashish-PC: starting secondarynamenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-Ashish-secondarynamenode-Ashish-PC.out
slave: ssh: connect to host slave port 22: Connection timed out
I have used Cygwin, and SSH is working fine on both PCs. I went through some suggestions to change port 22 (because of an ISP problem), but I don't want to do that just for this.
Thanks in advance for your help and responses.
Allow the master to communicate through the Windows Firewall by adding sshd to the allowed programs for both private and public networks.
Make sure the sshd service is started on each node.
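A sketch of the firewall rule from an elevated Command Prompt (the rule name is arbitrary):

netsh advfirewall firewall add rule name="sshd" dir=in action=allow protocol=TCP localport=22

And to confirm the service is running on each node:

net start sshd   # the service name may be cygsshd on newer Cygwin installs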
This worked for me:
1. Open the SSH daemon configuration:
sudo vi /etc/ssh/sshd_config
2. Remove the comment markers from these lines:
#Port 22
#Protocol 2
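After saving, the daemon has to be restarted for the change to take effect (the service name depends on the distribution; on Ubuntu it is ssh):

sudo service ssh restart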