Hadoop alternate SSH key

I'm setting up a multi-node Hadoop cluster and have a shared key for passwordless SSH between nodes. I named the key file ~/.ssh/hadoop_rsa and can connect to other hosts using ssh -i ~/.ssh/hadoop_rsa host.
I need some way to tell Hadoop to use this alternate SSH key when connecting to the other nodes.

It appears that commands are run on each slave using the script:
$HADOOP_HOME/sbin/slaves.sh
That script passes the environment variable $HADOOP_SSH_OPTS to ssh when it connects to each node, so I was able to tell Hadoop to use a different key file by setting that variable like this:
export HADOOP_SSH_OPTS="-i ~/.ssh/hadoop_rsa"
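To make this persist for the start scripts, one option (a sketch, assuming a Hadoop 2.x layout where the daemon scripts source etc/hadoop/hadoop-env.sh) is to put the export there:

# $HADOOP_HOME/etc/hadoop/hadoop-env.sh
# slaves.sh passes these options to every ssh it runs
export HADOOP_SSH_OPTS="-i ~/.ssh/hadoop_rsa"
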
Thanks to Varun on the Hadoop mailing list for pointing me in the right direction.

Related

ssh key setting for hadoop connection in multi-node clusters

I know that SSH key connections are required for Hadoop operation.
Suppose there is a cluster of five machines consisting of one namenode and four datanodes.
By setting up the SSH keys, we can connect from the namenode to each datanode and vice versa.
Note that, as far as I know, the connection must work both ways for Hadoop operation; setting up only one direction (namenode to datanode, but not datanode to namenode) is not enough.
For the above scenario, if we have 50 or 100 nodes, it is very laborious to configure all the SSH keys by logging into each machine and typing the same ssh-keygen -t ... commands.
For these reasons I tried to write a shell script for it, but failed to do it in an automatic way.
My code is below.
list.txt
namenode1
datanode1
datanode2
datanode3
datanode4
datanode5
...
cat list.txt | while read server
do
    ssh $server 'ssh-keygen' < /dev/null
    while read otherserver
    do
        ssh $server 'ssh-copy-id $otherserver' < /dev/null
    done
done
However, it didn't work. The intent of the code is to iterate over all the nodes, generate a key on each one, and then copy that key to every other server using the ssh-copy-id command. But it didn't work.
So my question is: how can I write a shell script that sets up the (two-way) SSH connections automatically? It has taken me a lot of time, and I cannot find any document describing how to set up SSH between many nodes without this laborious manual work.
You only need to create a public/private key pair on the master node, then run ssh-copy-id -i ~/.ssh/id_rsa.pub $server in the loop. The master itself should be in the loop as well, and there is no need to do this in reverse from the datanodes. The key has to belong to, and be installed by, the user that runs the Hadoop cluster. After running the script, you should be able to ssh to all the nodes, as the hadoop user, without using a password.
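For example, a minimal sketch of that loop (assuming the list.txt above, run as the hadoop user on the master only; ssh-copy-id will ask for each host's password once, until the key is installed):

# generate one key pair on the master (skip if ~/.ssh/id_rsa already exists)
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
# push the public key to every node in the list, including the master itself
while read server; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub "$server"
done < list.txt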

Want to keep username@hostname of hadoop slaves different

I am setting up a hadoop-2.7.3 multi-node cluster. To add a slave node I edited the slaves file and the /etc/hosts file, and I also added the SSH keys. Now, after executing start-dfs.sh, Hadoop connects to user1@myStyle, which is me; everything is fine up to this point. But instead of connecting to the other node as user2@node1, it connects as user1@node1, which does not exist. So how can I connect as user2@node1 instead of user1@node1?
OS: Ubuntu 16.04
Hadoop version: 2.7.3
Step-1:
The slaves file must have entries in the form (one machine name per line):
machine_hostname1
machine_hostname2
...
In the above, each line represents the actual name of a machine in the cluster and must be exactly the same as specified in the /etc/hosts file.
Step-2:
Check whether you are manually able to connect to each machine by using the following command:
ssh -i ~/.ssh/<"keyfilename"> <"username">@<"publicNameOfMachine">
Don't type the quotes or angle-brackets in the above command, and replace the components with the names you have chosen.
Step-3:
If you are not able to connect manually, then either your key file is not correct, or it has not been placed in the .ssh directory on the target machine, or it does not have Linux 600 permissions.
Step-4:
Check that you have a config file on the NameNode under the .ssh directory. That file should have entries like the following 4 lines per machine:
Host <"ShortMachineName">
HostName <"MachinePublicName">
User <"username">
IdentityFile ~/.ssh/<keyfilename>
Don't type the quotes or angle-brackets in the above 4 lines, and replace the components with the names you have chosen; these 4 lines are repeated once per machine.
Make sure you are not repeating (cut-and-paste error) the username and/or machine name from one machine to the next. They must match the usernames and machine names you have actually configured.
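As an illustration only (the names are hypothetical; substitute the ones from your /etc/hosts and your actual key file), the entry for the second node in the question could look like:

Host node1
    HostName node1
    User user2
    IdentityFile ~/.ssh/id_rsa

With such an entry in place, ssh node1 (and therefore the ssh calls made by the Hadoop start scripts) logs in as user2@node1.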

OpenMPI: Simple 2-Node Setup

I'm having trouble running an OpenMPI program using only two nodes (one of the nodes is the same machine that is executing the mpiexec command and the other node is a separate machine).
I'll call the machine that is running mpiexec, master, and the other node slave.
On both master and slave, I've installed OpenMPI in my home directory under ~/mpi.
I have a file called ~/machines.txt on master.
Ideally, ~/machines.txt should contain:
master
slave
However, when I run the following on master:
mpiexec -n 2 --hostfile ~/machines.txt hostname
I get the following error:
bash: orted: command not found
But if ~/machines.txt only contains the name of the node that the command is running on, it works.
~/machines.txt:
master
Command:
mpiexec -n 2 --hostfile ~/machines.txt hostname
OUTPUT:
master
master
I've tried running the same command on slave, and changed the machines.txt file to contain only slave, and it worked too. I've made sure that my .bashrc file contains the proper paths for OpenMPI.
What am I doing wrong? In short, there is only a problem when I try to execute a program on a remote machine, but I can run mpiexec perfectly fine on the machine that is executing the command. This makes me believe that it's not a path issue. Am I missing a step in connecting both machines? I have passwordless ssh login capability from master to slave.
This error message means that you either do not have Open MPI installed on the remote machine, or you do not have your PATH set properly on the remote machine for non-interactive logins (i.e., such that it can't find the installation of Open MPI on the remote machine). "orted" is one of the helper executables that Open MPI uses to launch processes on remote nodes -- so if "orted" was not found, then it didn't even get to the point of trying to launch "hostname" on the remote node.
Note that there might be a difference between interactive and non-interactive logins in your shell startup files (e.g., in your .bashrc).
Also note that it is considerably simpler to have Open MPI installed in the same path location on all nodes -- in that way, the prefix method described below will automatically add the right PATH and LD_LIBRARY_PATH when executing on the remote nodes, and you don't have to muck with your shell startup files.
Note that there are a bunch of FAQ items about these kinds of topics on the main Open MPI web site.
Either explicitly set the absolute OpenMPI prefix with the --prefix option:
prompt> mpiexec --prefix=$HOME/mpi ...
or invoke mpiexec with the absolute path to it:
prompt> $HOME/mpi/bin/mpiexec ...
The latter option sets the prefix automatically. The prefix is then used to set PATH and LD_LIBRARY_PATH on the remote machines.
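Applied to the command from the question (a sketch, using the ~/mpi install location mentioned above):

prompt> $HOME/mpi/bin/mpiexec -n 2 --hostfile ~/machines.txt hostname

or, equivalently:

prompt> mpiexec --prefix=$HOME/mpi -n 2 --hostfile ~/machines.txt hostname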
This answer comes very late, but for Linux users: it is a bad habit to add the environment variables at the end of the ~/.bashrc file, because if you look carefully at the top you will notice an if-block that returns early in non-interactive mode, which is precisely the mode your program runs in when it is launched through ssh on the remote host. So put your environment variables at the TOP of the file, before that early return.
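A sketch of what that looks like with Ubuntu's stock ~/.bashrc (the paths assume the ~/mpi install from the question):

# ~/.bashrc
# exports placed BEFORE the interactive check, so non-interactive ssh commands see them too
export PATH="$HOME/mpi/bin:$PATH"
export LD_LIBRARY_PATH="$HOME/mpi/lib:$LD_LIBRARY_PATH"

# Ubuntu's default .bashrc stops here for non-interactive shells:
case $- in
    *i*) ;;
      *) return;;
esac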
Try editing the file /etc/environment:
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/hadoop/openmpi_install/bin"
LD_LIBRARY_PATH=/home/hadoop/openmpi_install/lib

Setup Pseudo-Distributed / Single-Node Apache Hadoop 2.2

I have installed Apache Hadoop 2.2 as a single-node cluster. When I try to execute the Giraph example, it ends with the error "LocalJobRunner, you cannot run in split master/worker mode since there is only 1 task at a time".
I went through some forums and found that I could update mapred-site.xml to allow 4 mappers. I tried that, but it still didn't help. I also came across a forum post saying that changing the single-node setup to behave as pseudo-distributed mode resolved the issue.
Can someone please let me know which config files I need to change to make a single-node setup behave as pseudo-distributed mode?
Adding to renZzz's answer, you also need to check whether you can ssh to localhost without a passphrase:
$ ssh localhost
If you cannot ssh to localhost without a passphrase, execute the following commands:
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
The following link can help you: https://hadoop.apache.org/docs/current2/hadoop-project-dist/hadoop-common/SingleNodeSetup.html
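On the config-file side of the question, a minimal sketch of the pseudo-distributed settings along the lines of the Apache single-node setup guide (the port and values here are the documented defaults; adjust them to your environment):

etc/hadoop/core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

etc/hadoop/hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

etc/hadoop/mapred-site.xml:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

etc/hadoop/yarn-site.xml:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>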
For my first setup I followed some manuals, but the best one for a single-node setup was the PDF Apache Hadoop YARN_sample. I recommend you follow that manual step by step.
First, ensure that the number of workers is one. Then, you need to configure Giraph not to split workers and master via:
giraph.SplitMasterWorker=false
You can either set it in giraph-site.xml or pass it via the command-line option:
-ca giraph.SplitMasterWorker=false
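A hedged sketch of the giraph-site.xml form, assuming it uses the usual Hadoop-style configuration XML:

<configuration>
  <property>
    <name>giraph.SplitMasterWorker</name>
    <value>false</value>
  </property>
</configuration>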
Ref:
https://www.mail-archive.com/user@giraph.apache.org/msg01631.html

50 nodes hadoop passphraseless

My question is very simple: I want to set up a 50-node Hadoop cluster. How can I set up passphraseless SSH between the 50 nodes? Doing it manually is very difficult! Thanks in advance!
You don't need to set up SSH between all the nodes; it is sufficient to have it unidirectional from the master to the slaves. (So only the master must be able to access the slaves without a password.)
The usual approach is to write a bash script that loops over your slaves file, logs into each slave, and copies the master's public key into the slave's authorized keys.
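For example (a sketch, assuming a file named slaves with one hostname per line and an existing key pair on the master; each ssh prompts for the password once, until the key is in place):

# run once on the master, as the user that runs Hadoop
for slave in $(cat slaves); do
    cat ~/.ssh/id_rsa.pub \
        | ssh "$slave" 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'
done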
You can see a small walkthrough on Praveen Sripati's blog.
However, I'm no admin, so I can't tell you whether there is a smarter way. Maybe this question is better suited to Superuser.com.
Maybe this can help:
To work seamlessly, SSH needs to be set up to allow password-less login for the hadoop user from machines in the cluster. The simplest way to achieve this is to generate a public/private key pair and place it in an NFS location that is shared across the cluster.
First, generate an RSA key pair by typing the following in the hadoop user account:
% ssh-keygen -t rsa -f ~/.ssh/id_rsa
Even though we want password-less logins, keys without passphrases are not considered good practice (it's OK to have an empty passphrase when running a local pseudodistributed cluster, as described in Appendix A), so we specify a passphrase when prompted for one. We shall use ssh-agent to avoid the need to enter a password for each connection.
The private key is in the file specified by the -f option, ~/.ssh/id_rsa, and the public key is stored in a file with the same name with .pub appended, ~/.ssh/id_rsa.pub.
Next we need to make sure that the public key is in the ~/.ssh/authorized_keys file on all the machines in the cluster that we want to connect to. If the hadoop user's home directory is an NFS filesystem, as described earlier, then the keys can be shared across the cluster by typing:
% cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
If the home directory is not shared using NFS, then the public keys will need to be shared by some other means.
Test that you can SSH from the master to a worker machine by making sure ssh-agent is running, and then run ssh-add to store your passphrase. You should be able to ssh to a worker without entering the passphrase again.
Source:
Tom White, Hadoop: The Definitive Guide, page 301
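A hedged sketch of that last test, assuming the passphrase-protected key is ~/.ssh/id_rsa and worker1 is one of the workers:

% eval "$(ssh-agent -s)"      # start ssh-agent if it is not already running
% ssh-add ~/.ssh/id_rsa       # prompts for the passphrase once and caches the key
% ssh worker1 hostname        # should now log in without asking for the passphrase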
