Hadoop nodes do not ask for passwords during start

When I try to SSH into localhost, I am prompted for a password. See below.

ssh connection to localhost:
[hadoop@mftrhel74 sbin]$ ssh localhost
hadoop@localhost's password:
Last login: Fri Aug 23 15:44:08 2019 from mah
--- The above shows that a passwordless connection is not set up ---
But when I try to start the Hadoop nodes as below, it doesn't prompt for a password, and the nodes do not start; I see the message below.
I think it should prompt me to enter the password for the user, just as when an SSH connection is established.
[hadoop@mftrhel74 ~]$ start-dfs.sh
Starting namenodes on [mftrhel74]
mftrhel74: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Starting datanodes
localhost: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Starting secondary namenodes [mftrhel74]
mftrhel74: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
***** I DO NOT WANT A PASSWORDLESS CONNECTION *****

I suspect you are able to log in to one of the nodes with SSH; however, you probably have not set up passwordless SSH between the nodes, so the steps you try to execute from that node will fail.
Here is some documentation explaining that you need to set up passwordless SSH, or otherwise install an Ambari client (assuming you work on HDP):
https://ambari.apache.org/1.2.2/installing-hadoop-using-ambari/content/ambari-chap1-5-2.html
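For reference, this is roughly what the passwordless-SSH setup looks like for the account that runs the start scripts; a minimal sketch, assuming the daemons connect back as the same hadoop user:
# generate a key pair with no passphrase for the hadoop user
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
# authorize that key for logins to this host (and copy it to the other nodes the same way)
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys
# this should now log in without prompting
ssh localhost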

Related

Hadoop does not run properly

I am new to Hadoop and I ran it with the steps below:
ssh-keygen
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh localhost
./start-all.sh
but I get the error below:
WARNING: Attempting to start all Apache Hadoop daemons as ... in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [localhost]
localhost: Permission denied (publickey,password).
Starting datanodes
localhost: Permission denied (publickey,password).
Starting secondary namenodes [karbasi]
karbasi: Permission denied (publickey,password).
Starting resourcemanager
Starting nodemanagers
localhost: Permission denied (publickey,password).
Please help me in solving my problem.
Permission denied issue at Hadoop Installation
In the above link, you can find the reason for the issue.
The issue occurs if there was a mistake in SSH key generation, or if the Hadoop installation was not extracted and started by the same user.
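If the ownership mismatch is the suspect, a quick check along these lines can confirm it (the install path is only an example):
# see which user owns the extracted Hadoop directory
ls -ld /usr/local/hadoop
# if it belongs to a different account, hand it over to the user that starts the daemons
sudo chown -R hadoop:hadoop /usr/local/hadoop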

Must the username of NameNode be equal to DataNode's?

I run the NameNode with user hduser@master; the DataNodes run as user1@slave1 and user1@slave2. Setting up SSH keys works fine, and I can ssh to my DataNode machines from the master.
However, when I try to run hadoop-daemons.sh for my DataNodes, it fails because it tries to ssh with the wrong user:
hduser@master:~$ hadoop-daemons.sh start datanode
hduser@slave3's password: hduser@slave1's password: hduser@slave2's password:
slave1: Permission denied (publickey,password).
slave2: Permission denied (publickey,password).
slave3: Permission denied (publickey,password).
I tried resetting the public and private keys on my master and copying the public key to the DataNodes:
$ ssh-keygen -t rsa -P ""
$ ssh-copy-id -i $HOME/.ssh/id_rsa.pub user1@slave1
But it gives me the same error.
Does the user on the NameNode need to be the same as for the DataNodes?
Answer: After resetting the VMs, creating the same user everywhere, and installing Hadoop on the DataNodes with the same user as on the NameNode, it worked. So I guess the answer is yes...
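If keeping different usernames were unavoidable, one standard OpenSSH workaround (not mentioned in the answer above) is to map the remote user per host in the master's ~/.ssh/config, so that a plain "ssh slave1" from the scripts resolves to user1@slave1:
# ~/.ssh/config in hduser's home on the master
Host slave1 slave2 slave3
    User user1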

Unable to run start-dfs.sh in a Hadoop multinode cluster

I have created a Hadoop multinode cluster and configured SSH on both the master and slave nodes; I can now connect to the slave from the master node without a password.
But when I run start-dfs.sh on the master node, I am unable to connect to the slave node; the execution stops at the line below.
log:
HNname@master:~$ start-all.sh
starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-HNname-namenode-master.out
HDnode@slave's password: master: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-HNname-datanode-master.out
I pressed Enter
slave: Connection closed by 192.168.0.2
master: starting secondarynamenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-HNname-secondarynamenode-master.out
jobtracker running as process 10396. Stop it first.
HDnode@slave's password: master: starting tasktracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-HNname-tasktracker-master.out
slave: Permission denied, please try again.
HDnode@slave's password:
After entering the slave password, the connection is closed.
I have tried the things below, with no results:
formatted the namenode on both the master and slave nodes
created a new ssh key and configured it on both nodes
overrode the default HADOOP_LOG_DIR based on this post
I think you missed this step: "Add the SSH Public Key to the authorized_keys file on your target hosts".
Just redo the passwordless SSH setup correctly. Follow this:
Generate public and private SSH keys
ssh-keygen
Copy the SSH Public Key (id_rsa.pub) to the root account on your
target hosts
.ssh/id_rsa
.ssh/id_rsa.pub
Add the SSH Public Key to the authorized_keys file on your target
hosts
cat id_rsa.pub >> authorized_keys
Depending on your version of SSH, you may need to set permissions on
the .ssh directory (to 700) and the authorized_keys file in that
directory (to 600) on the target hosts.
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
Check the connection:
ssh root@<remote.target.host>
where <remote.target.host> has the value of each host name in your cluster.
If the following warning message is displayed during your first
connection: Are you sure you want to continue connecting (yes/no)?
Enter yes.
Refer: Set Up Password-less SSH
Note: a password will not be requested if your passwordless SSH is set up properly.
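A quick non-interactive test (a standard OpenSSH option, not part of the original answer) confirms this: it fails with an error instead of prompting if key authentication is broken:
# exits non-zero rather than asking for a password when key auth does not work
ssh -o BatchMode=yes root@<remote.target.host> hostname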
Make sure to start the Hadoop services with a new user called hadoop.
Then make sure to add the public key to the slaves for that new user.
If this doesn't work, check your firewall or iptables rules.
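If you want to check the firewall, commands along these lines (distribution-dependent; shown for a typical RHEL or Ubuntu box) show whether SSH traffic is being blocked:
# list the current iptables rules
sudo iptables -L -n
# on systemd-based distributions, check whether firewalld or ufw is active
sudo systemctl status firewalld
sudo ufw status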
I hope it helps
That means you haven't created the public key properly.
Follow the sequence below (a command-level sketch follows the list):
Create a user
Give all required permissions to that user
Generate a public key as that same user
Format the NameNode
Start the Hadoop services.
Now it should not ask for a password.
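A rough command-level sketch of that sequence (the user name and install path are only examples):
# create the user and give it the Hadoop installation
sudo useradd -m hadoop
sudo chown -R hadoop:hadoop /usr/local/hadoop
# as that user, set up the key, format the NameNode, and start the services
su - hadoop
ssh-keygen -t rsa -P ""
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
hdfs namenode -format
start-dfs.sh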

hadoop2.6.0 sudo sbin/start-dfs.sh fail

I'm following the official Hadoop tutorial to run Hadoop on my machine in pseudo-distributed mode.
I can use ssh to log in to localhost without a password:
admin@mycomputer:/usr/local/hadoop/hadoop-2.6.0$ ssh localhost
Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-45-generic x86_64)
* Documentation: https://help.ubuntu.com/
4 packages can be updated.
0 updates are security updates.
Last login: Mon Feb 9 12:31:17 2015 from localhost
admin@mycomputer:~$
And I can also format the namenode without error, but I cannot start Hadoop with start-dfs.sh:
admin@mycomputer:/usr/local/hadoop/hadoop-2.6.0$ sudo sbin/start-dfs.sh
Starting namenodes on [localhost]
root@localhost's password:
localhost: Permission denied, please try again.
Why am I still asked to provide the root password when I can ssh into localhost without it?
I also tried:
sudo passwd
to reset the password, but later I encounter the same permission-denied error; it seems to me that this password is not the password for root@localhost. How can I solve this problem?
I think you didn't change the permissions on the hadoop-2.6.0 folder. Give the admin user permission to this folder and try to start it again.
Follow my blog link below: I provided detailed steps for installing on Ubuntu, enriched from another blog.
http://gubendran.blogspot.com/2015/01/install-hadoop-in-single-node-linux.html
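A sketch of the permission fix suggested above (the path matches the question; "admin" is whatever account runs the daemons):
# give the admin user ownership of the Hadoop install
sudo chown -R admin:admin /usr/local/hadoop/hadoop-2.6.0
# then start the daemons as admin, without sudo
sbin/start-dfs.sh
Note that dropping sudo also matters here: running the script under sudo makes it ssh as root@localhost, which is why the root password is requested even though admin's passwordless login works.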

how to restart hadoop cluster on emr

I have a Hadoop installation on Amazon Elastic MapReduce; whenever I try to restart the cluster I get the following error:
/stop-all.sh
no jobtracker to stop
The authenticity of host 'localhost (::1)' can't be established. RSA key fingerprint is
Are you sure you want to continue connecting (yes/no)? yes
localhost: Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
localhost: Permission denied (publickey).
no namenode to stop
localhost: Permission denied (publickey).
localhost: Permission denied (publickey).
Any idea how to restart Hadoop?
The following hack worked for me.
I replaced the "ssh" command in sbin/slaves.sh and sbin/hadoop-daemon.sh with "ssh -i ~/.ssh/keyname".
I'm using Hadoop version 2.4 and this worked for me:
export HADOOP_SSH_OPTS="-i /home/hadoop/mykey.pem"
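To make that stick for every shell and script, the export is commonly placed in hadoop-env.sh; a sketch for a typical install layout:
# etc/hadoop/hadoop-env.sh: extra options passed to ssh by the start/stop scripts
export HADOOP_SSH_OPTS="-i /home/hadoop/mykey.pem"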
For the stop-all.sh script to work, you probably need to have the same user on all the machines as the user executing the stop-all.sh script.
Moreover, it seems you do not have passwordless SSH set up from the machine where you execute stop-all.sh to the rest of the machines, which would spare you from entering the password for each machine manually. Passwords might be different for the same user on different machines; please don't forget that.
