Passwordless SSH not establishing - hadoop

I am trying to install Hadoop on an Amazon EC2 instance running CentOS 6.5. I am connected to the instance but want to make the session use passwordless SSH. To do this I used the following commands:
ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub node01
I get an error saying: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
I tried logging in as "root" as well as "ec2-user", but it shows the same error.
Could anyone help with this?

I have created a simple scriptlet to ease this process on EC2 Ubuntu instances.
You can check it out here.
Just give it the machine names and key path, and you are done!
https://github.com/hshinde/pwless
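For reference, here is a minimal manual sketch of what such a script automates; the key path ~/mykey.pem and the host node01 are placeholders. Note that ssh-copy-id relies on password authentication, which EC2 disables by default, so push the public key over the existing .pem-authenticated connection instead:
$ cat ~/.ssh/id_rsa.pub | ssh -i ~/mykey.pem ec2-user@node01 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'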

Related

Setting up Hadoop on a RHEL machine with a security policy

I have been playing around with a Hadoop installation on CentOS for a while, but today when I shifted to RHEL I got pesky password prompts when trying to start the pseudo-distributed cluster. After hours of poking around I finally managed to get rid of them by removing the security policy I had selected during the installation of RHEL.
It looks like some aspect of the security policy was not letting me set up passwordless SSH to allow the different servers to communicate.
Going forward I would like to be able to run a cluster on machines with a security policy enabled. What changes do I need to make, or where should I start looking, to get the right set of network configurations?
I got pesky password prompts when trying to start the pseudo-distributed cluster
That's a sign you did not correctly establish a passwordless SSH key pair. Perhaps you typed a passphrase when you generated the key? Or you didn't add the public key correctly to the authorized_keys file.
This should not prompt for a password:
$ ssh localhost
And if it does, generate the keys again without a passphrase:
$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys
Also, RHEL systems need SELinux disabled for Hadoop; I believe the Cloudera and Hortonworks install guides also have you turn the firewall off.
If you want a secure cluster, you would instead install and configure MIT Kerberos or Active Directory.
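As a hedged sketch of checking and disabling SELinux on a RHEL/CentOS machine (setenforce only lasts until reboot; editing /etc/selinux/config makes the change permanent):
$ getenforce                    # prints Enforcing, Permissive, or Disabled
$ sudo setenforce 0             # switch to Permissive for the current boot
$ sudo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config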

Adding pem key to jenkins box

I have a Jenkins box and have SSHed into it; from there I want to access one of the EC2 instances in AWS. I tried ssh -i "mykeyname.pem" ec2-user@DNSname but it throws the error "Permission denied (publickey,gssapi-keyex,gssapi-with-mic)".
I have the PEM file of the EC2 instance I want to connect to. Is there any way I can SSH into the instance?
There are two possible reasons.
The default user name is not "ec2-user"
Check which image the instance you are connecting to is using. If it is not one that uses "ec2-user", change the user name in your ssh command.
Your key pair is incorrect
An EC2 instance can only be accessed with the key pair it was created with, so check the name of the key pair your instance is using.
FYI: Connecting to Your Linux Instance Using SSH
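A minimal sketch combining both checks; the key name is from the question, while the DNS name and the alternative "ubuntu" user are placeholders, since the default user depends on the AMI:
$ chmod 400 mykeyname.pem                    # ssh refuses private keys with open permissions
$ ssh -i mykeyname.pem ec2-user@<DNSname>    # Amazon Linux AMIs use "ec2-user"
$ ssh -i mykeyname.pem ubuntu@<DNSname>      # Ubuntu AMIs use "ubuntu" instead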

use ssh private key from host in vagrant guest

I want to clone a bunch of private git repositories while provisioning a Vagrant box. According to this article this should be possible using config.ssh.forward_agent = true. However, when trying to connect to GitHub via something like ssh -T git@github.com -o StrictHostKeyChecking=no, it fails with the following error:
Warning: Permanently added 'github.com,192.30.252.130' (RSA) to the list of known hosts.
Permission denied (publickey).
I cut my configuration down to the simplest possible configuration. You can find it here: https://gist.github.com/TomTasche/31f7c45fcffc2997d43a
When I do "vagrant ssh" and try the same again, a similar error occurs:
Cloning into 'private-repositories'...
Warning: Permanently added the RSA host key for IP address '192.30.252.130' to the list of known hosts.
Permission denied (publickey).
fatal: The remote end hung up unexpectedly
Edit: the configuration linked above works on a host running Ubuntu, but works on neither a Mac host nor a Windows host. My goal is to have a configuration that works on all three hosts.
Please check whether your host system has ssh-agent forwarding enabled. You can do so for example by adding this block to your ~/.ssh/config file:
Host *
ForwardAgent yes
If this is enabled, vagrant ssh (and also vagrant provision) should be able to forward your key to the guest machine.
You also might want to check, using ssh-add -l, whether your ssh-agent knows about your SSH key. If it is in the list and you have agent forwarding activated, you should succeed. Otherwise, add the key to your ssh-agent by running ssh-add <path to your key file>.
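A quick way to verify the whole chain, assuming ~/.ssh/id_rsa is a placeholder for your own key file:
$ ssh-add -l || ssh-add ~/.ssh/id_rsa    # on the host: make sure the agent holds the key
$ vagrant ssh
$ ssh-add -l                             # inside the guest: the forwarded agent should list the same key
$ ssh -T git@github.com                  # should print a greeting instead of "Permission denied"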
It sounds like you may be hitting this particular bug:
https://github.com/mitchellh/vagrant/issues/1735
(Despite it being "closed", it's actually not fixed.)
On Windows, SSH forwarding in Vagrant does not work properly by default (because of a bug in net-ssh).
However, there is a workaround, or simple hack: you can auto-copy your local SSH key to the Vagrant VM via a simple provisioning script in your Vagrantfile. Here's an example:
https://github.com/mitchellh/vagrant/issues/1735#issuecomment-25640783
Tom,
What you're doing is fairly generic in nature, and I don't think it is Vagrant-specific.
Try some of the following to track down the issue:
Edit your /etc/ssh/sshd_config
Set LogLevel DEBUG
Restart the sshd service: sudo service sshd restart or /etc/init.d/sshd restart
tail -f /var/log/auth.log -- note, the file may be something else, like /var/log/authd.log or /var/log/secure, depending on the distribution.
Watch what happens when you connect. It should give you some indication of why it's failing.
Again sorry, I'm not that familiar with Vagrant but I'm wondering if the provisioning script is running as another user, in which case the agent forwarding may not work as expected?
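For convenience, the debugging steps above as one hedged sketch (the log file path varies by distribution, as noted):
$ sudo sed -i 's/^#\?LogLevel.*/LogLevel DEBUG/' /etc/ssh/sshd_config    # turn up sshd logging
$ sudo service sshd restart
$ sudo tail -f /var/log/auth.log    # or /var/log/secure on RHEL-family systems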

Laravel "envoy run" command not working with ssh key

I am running the following command in a Laravel project folder and getting the following error:
rakib$ envoy run list --env=production
[ubuntu@54.187.123.4]: Permission denied (publickey).
But I can successfully ssh using the following command:
ssh -i ~/.ssh/sw-new.pem ubuntu@54.187.123.4
My ~/.ssh/config file content looks like:
Host 54.187.123.4
IdentityFile ~/.ssh/sw-new.pem
Can anyone suggest a possible reason for the "Permission denied" error?
It's possible that envoy is using the wrong user when attempting to ssh into the production server. Specify a user in your ~/.ssh/config file:
Host 54.187.123.4
IdentityFile ~/.ssh/sw-new.pem
User ubuntu
That should work.
As in the answer above, the same applies for an AWS user when attempting to SSH in production mode; define the config file as ~/.ssh/config:
Host ec2-52-29-45-15.eu-central-2.compute.amazonaws.com
IdentityFile /home/tux/Desktop/ssh/masterpro.pem
User ubuntu
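As a quick check (with the host and key paths above standing in for your own): once the Host entry is in place, a plain ssh with no -i flag should connect, which is also how envoy will connect:
$ ssh 54.187.123.4    # should log in as "ubuntu" using the IdentityFile from ~/.ssh/config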

how to restart hadoop cluster on emr

I have a Hadoop installation on Amazon Elastic MapReduce; whenever I try to restart the cluster I get the following error:
/stop-all.sh
no jobtracker to stop
The authenticity of host 'localhost (::1)' can't be established. RSA key fingerprint is
Are you sure you want to continue connecting (yes/no)? yes
localhost: Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
localhost: Permission denied (publickey).
no namenode to stop
localhost: Permission denied (publickey).
localhost: Permission denied (publickey).
Any idea how to restart Hadoop?
The following hack worked for me:
I replaced the "ssh" command in sbin/slaves.sh and sbin/hadoop-daemon.sh with "ssh -i ~/.ssh/keyname".
I'm using Hadoop version 2.4 and this worked for me:
export HADOOP_SSH_OPTS="-i /home/hadoop/mykey.pem"
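If that works, one way to make it persistent is to append the export to hadoop-env.sh; the conf path below is an assumption, so adjust it to your installation:
$ echo 'export HADOOP_SSH_OPTS="-i /home/hadoop/mykey.pem"' >> /home/hadoop/conf/hadoop-env.sh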
For the stop-all.sh script to work, you probably need the same user on all the machines as the user executing the stop-all.sh script.
Moreover, it seems you do not have passwordless SSH set up from the machine where you execute stop-all.sh to the rest of the machines, which is what would spare you from manually entering the password for each machine. Passwords might be different for the same user on different machines, so don't forget that.
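A minimal sketch of setting that up from the master node, where node1 and node2 are placeholder hostnames for the other machines (on EC2/EMR, where password logins are disabled, append the key over an existing key-based connection instead, as shown near the top of this page):
$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa    # key with no passphrase
$ for host in node1 node2; do ssh-copy-id -i ~/.ssh/id_rsa.pub "$host"; done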
