Need password each time I start Hadoop - hadoop

I installed Hadoop 2.6.4 on an Ubuntu server, and I use SSH to log in to the Ubuntu server from my Mac. Since an RSA key is used for login, I don't have to enter any password. But when I run start-dfs.sh to start the services, I do have to enter the password for each service, as shown below:
jianrui@cloudfoundry:~$ start-dfs.sh
Starting namenodes on [localhost]
Password:
localhost: starting namenode, logging to /home/jianrui/hadoop-2.6.4/logs/hadoop-dingjianrui-namenode-cloudfoundry.out
Password:
localhost: starting datanode, logging to /home/jianrui/hadoop-2.6.4/logs/hadoop-dingjianrui-datanode-cloudfoundry.out
Starting secondary namenodes [0.0.0.0]
Password:
0.0.0.0: starting secondarynamenode, logging to /home/jianrui/hadoop-2.6.4/logs/hadoop-dingjianrui-secondarynamenode-cloudfoundry.out
dingjianrui@cloudfoundry:~$

I was able to resolve the issue using the commands below.
The following commands generate an SSH key pair. Copy the public key from id_rsa.pub into authorized_keys, and give the owner read and write permissions on the authorized_keys file:
$ ssh-keygen -t rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys
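To confirm the setup before starting the daemons again, a local login should now succeed without a password prompt (the first connection may ask you to accept the host key):
$ ssh localhost
$ exit
$ start-dfs.sh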

If you have tried all of this and still don't succeed, try the following.
$ ssh-keygen -t rsa -P ""
$ ssh-copy-id -i ~/.ssh/id_rsa [id]@[domain]
I'm using Red Hat 7.
You may use an IP address instead of a domain name. If you want to use a domain name instead, edit your /etc/hosts file, e.g.:
192.168.0.11 cluster01
The whole matter is just copying a key file to the other machine. It worked easily for me.
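As a quick sketch of that last step, assuming a user named hadoop on the cluster01 host from the example above (substitute your own account and host):
$ ssh-copy-id -i ~/.ssh/id_rsa hadoop@cluster01
$ ssh hadoop@cluster01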

Related

Hadoop Cluster - "hadoop" user ssh communication

I am setting up a Hadoop 2.7.3 cluster on EC2 servers - 1 NameNode, 1 Secondary NameNode and 2 DataNodes.
Hadoop core uses SSH to communicate with the slaves and launch the processes on the slave nodes.
Do we need to have the same SSH keys on all the nodes for the hadoop user?
What is the best practice/ideal way to copy or add the NameNode's SSH credentials to the slave nodes?
Do we need to have the same SSH keys on all the nodes for the hadoop user?
The same public key needs to be on all of the nodes.
What is the best practice/ideal way to copy or add the NameNode's SSH credentials to the slave nodes?
Per documentation:
Namenode: Password Less SSH
Password-less SSH between the name nodes and the data nodes. Let us
create a public-private key pair for this purpose on the namenode.
namenode> ssh-keygen
Use the default (/home/ubuntu/.ssh/id_rsa) for the key location and
hit enter for an empty passphrase.
Datanodes: Setup Public Key
The public key is saved in /home/ubuntu/.ssh/id_rsa.pub. We need to
copy this file from the namenode to each data node and append the
contents to /home/ubuntu/.ssh/authorized_keys on each data node.
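The copy step itself is not shown in the quoted documentation; one way to do it from the namenode is with scp, assuming the ubuntu account used throughout and the datanodes' Public DNS names (or the dnode1-3 aliases once the SSH config below is in place). This first copy will still prompt for whatever credential the datanodes currently accept:
namenode> scp ~/.ssh/id_rsa.pub ubuntu@<dnode1>:~/
namenode> scp ~/.ssh/id_rsa.pub ubuntu@<dnode2>:~/
namenode> scp ~/.ssh/id_rsa.pub ubuntu@<dnode3>:~/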
datanode1> cat id_rsa.pub >> ~/.ssh/authorized_keys
datanode2> cat id_rsa.pub >> ~/.ssh/authorized_keys
datanode3> cat id_rsa.pub >> ~/.ssh/authorized_keys
Namenode: Setup SSH Config
SSH uses a configuration file located at ~/.ssh/config for various parameters. Set it up as shown below. Again, substitute each node's Public DNS for the HostName parameter (for example, replace <nnode> with the EC2 Public DNS of the NameNode).
Host nnode
HostName <nnode>
User ubuntu
IdentityFile ~/.ssh/id_rsa
Host dnode1
HostName <dnode1>
User ubuntu
IdentityFile ~/.ssh/id_rsa
Host dnode2
HostName <dnode2>
User ubuntu
IdentityFile ~/.ssh/id_rsa
Host dnode3
HostName <dnode3>
User ubuntu
IdentityFile ~/.ssh/id_rsa
At this point, verify that password-less operation works on each node as follows (the first time, you will get a warning that the host is unknown, asking whether you want to connect to it; type yes and hit enter. This step is needed only once):
namenode> ssh nnode
namenode> ssh dnode1
namenode> ssh dnode2
namenode> ssh dnode3

Why does Hadoop ask for a password before starting any of the services?

Why is an SSH login required before starting Hadoop? And why does Hadoop ask for a password when starting any of the services?
shravilp@shravilp-HP-15-Notebook-PC:~/hadoop-2.6.3$ sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
shravilp@localhost's password:
localhost: starting namenode, logging to /home/shravilp/hadoop-
In Ubuntu, you can use the following one-time setup steps to eliminate the need to enter a password when running Hadoop commands such as start-dfs.sh and start-yarn.sh:
sudo apt-get install openssh-server openssh-client
ssh-keygen -t rsa
ssh-copy-id user@localhost
Replace user with your username. This was tested on Ubuntu 16.04.2, hadoop-2.7.3, jdk1.8.0_121.
Note: 1. When executing the "ssh-keygen -t rsa" command, you can simply press ENTER three times to accept the default values. 2. When executing the "ssh-copy-id user@localhost" command, answer "yes" to "Are you sure you want to continue connecting (yes/no)?" and then enter your password.
See this question also
It asks for a password because it uses the SSH protocol. You can avoid this by adding your public key to ~/.ssh/authorized_keys on each node to make it passwordless.
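One way to do that for every node from the machine that runs the start scripts is a short loop over ssh-copy-id; node1, node2, node3 and user below are placeholders for your own hostnames and account:
ssh-keygen -t rsa
for host in node1 node2 node3; do ssh-copy-id user@$host; done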

Hadoop - request for network LAN password when starting cluster

I can't understand what password Hadoop expects.
I configured it according to a tutorial. I do:
sudo su
#bash start-dfs.sh
And now it expects something like my LAN network password. I have no idea what I should enter.
As you can see, I run the script as root. Of course the master (from which I run the script) can SSH to the slaves as root without a password (I configured and tested that).
Disclaimer: It is possible that I got a name wrong (for example the script name), because I don't fully understand this yet. However, I am sure it was asking for something like a LAN network password.
Please help me: which password is it asking for?
Edit: I was using http://backtobazics.com/big-data/setup-multi-node-hadoop-2-6-0-cluster-with-yarn/
It seems you may not have set up passwordless SSH. Passwordless SSH is required to run the Hadoop services (daemons), so try to set up SSH among the nodes:
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys
Then test with ssh user@hostname.

Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password) during ambari hadoop installation

I am trying to deploy a Hadoop cluster using Ambari, but when I select the hostnames by FQDN and proceed to configure, I get a permission denied error for SSH.
STEPS:
1. Generated an RSA key using ssh-keygen as root.
2. Changed permissions for .ssh (700) and authorized_keys (640).
3. Appended the public key to authorized_keys.
4. Copied the public key to all the hosts' authorized_keys and changed the file permissions as above (permission commands sketched below).
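In command form, the permission changes from steps 2 and 4 amount to roughly the following on each host (640 is what was used here; 600, as in the other answers above, also works):
chmod 700 ~/.ssh
chmod 640 ~/.ssh/authorized_keys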
I can SSH without a password from the Ambari server host to all the other hosts, but Ambari is failing to do the Hadoop installation with the error below.
SSH command execution finished
host=XXX, exitcode=255
Command end time 2015-06-23 10:44:07
ERROR: Bootstrap of host XXX fails because previous action finished with non-zero exit code (255)
ERROR MESSAGE: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
STDOUT:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Please don't mark this question as a duplicate. I can see other questions with the same description, but none of them mention the Ambari SSH permission denied error.
I encountered the same problem.
ssh -i <your_keypair> root@<your_host>
I tried this but it didn't solve the issue.
Here's my solution:
host1 ip:192.168.1.21
host2 ip:192.168.1.22
host3 ip:192.168.1.23
on host1:
rm -rf /root/.ssh
ssh-keygen -t dsa
cat /root/.ssh/id_dsa.pub >> /root/.ssh/authorized_keys
scp /root/.ssh/id_dsa.pub host2:/root/
scp /root/.ssh/id_dsa.pub host3:/root/
on host2:
rm -rf /root/.ssh
ssh-keygen -t dsa
cat /root/id_dsa.pub >> /root/.ssh/authorized_keys
on host3:
rm -rf /root/.ssh
ssh-keygen -t dsa
cat /root/id_dsa.pub >> /root/.ssh/authorized_keys
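With the keys distributed, logins from host1 to the other two machines (IP addresses as listed above) should work without a password; it is worth verifying this before retrying the Ambari registration:
on host1:
ssh root@192.168.1.22
ssh root@192.168.1.23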
host1:/root/.ssh/id_dsa is the file you need.
You should be able to execute something like
ssh -i <your_keypair> root@<your_host>
from some other host. If this is not working, then you are using the wrong keypair.
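One way to check whether a private key file matches what is installed on a host is to print its public half with ssh-keygen's -y option and compare it against the entries in that host's authorized_keys:
ssh-keygen -y -f <your_keypair>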
I had exactly the same message but it turned out the problem was user error. I had been uploading the public key to the Ambari installer, rather than the private key.
Try uploading the id_rsa file itself instead of copying and pasting its content into the Ambari web app.
Doing this fixed the problem for me.

How to do passwordless SSH access to slaves when starting Hadoop services in a multinode cluster

I've installed a multinode Hadoop cluster. Now I am trying to set up passwordless SSH access to the slaves. My problem is that when I start the services from the master, it asks me for a password for every service, and it takes a long time to start. If anyone has a solution, please help me.
You have to generate an RSA key on the NameNode and copy it to all the DataNodes.
user@namenode:~> ssh-keygen -t rsa
Just press 'Enter' for any passphrase
user@namenode:~> ssh user@datanode mkdir -p .ssh
user@datanode's password:
Finally, append the namenode's new public key to user@datanode:.ssh/authorized_keys and enter the datanode's password one last time:
user@namenode:~> cat .ssh/id_rsa.pub | ssh user@datanode 'cat >> .ssh/authorized_keys'
user@datanode's password:
You can test it with:
user@namenode:~> ssh user@datanode
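If ssh-copy-id is available (it is used in the other answers above), the mkdir and append steps can be combined into a single command, which also creates the remote .ssh directory and authorized_keys file if they do not exist:
user@namenode:~> ssh-copy-id user@datanode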
