I am using CentOS 7 with Ambari 2.1.1 to try to set up a single-node installation on a VM. I want to do this so I can install vanilla Hadoop etc. instead of using a prepackaged VM with a modified version of Hadoop.
I am logged in as root. I have created an SSH key pair. I also ran:
"cat id_rsa.pub > authorized_keys"
"chmod 700 .ssh/"
"chmod 640 ./ssh/authorized_keys"
I have edited /etc/ssh/sshd_config to permit empty passwords, allow root login, and state where the authorized_keys file is.
Without a password I can run "ssh root@localhost" and log in fine.
I have run "ambari-server setup" successfully and logged in at localhost:8080 with user: admin, pass: admin.
In "Install Options" FQDN I typed "localhost.test" and have selected a copy of my private key for the Host Registration Information.
But no matter what I do, I am unable to get the components to install under the Confirm Hosts step, and thus can't get any further.
Can someone please point out what I am missing here?
Thanks to Yusaku on HortonWorks forum for the help.
Ok I ran:
hostname -f
and got localhost
python -c 'import socket; print socket.getfqdn()'
and got localhost.localdomain
By entering localhost.localdomain into the FQDN I was able to get the install working.
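For what it's worth, a hedged alternative (the name ambari.test below is just an example, not from the original setup) would have been to give the VM a real FQDN so that hostname -f and Ambari agree:
hostnamectl set-hostname ambari.test          # CentOS 7
echo "127.0.0.1 ambari.test" >> /etc/hosts    # make the new name resolve locally
hostname -f                                   # should now print ambari.test
python -c 'import socket; print socket.getfqdn()'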
I have a weird problem. When I run a playbook with the -u root option, it does not run on the remote machine. My colleagues are able to run it as root with the -u root option.
The weirdest thing is that I can SSH to the remote machine, I can sudo to root on it, and I am able to run playbooks there without the -u root option.
It must be some kind of configuration issue, but I have looked everywhere and can't seem to find it. Can anyone point me in the right direction?
Looking at the documentation: -u means you want to connect to the server as the root user, so you should have SSH keys configured for root, or you should add the --ask-pass flag so you can supply the root user's password (--become only escalates privileges after the connection is made).
Connection Options:
control as whom and how to connect to hosts
-u REMOTE_USER, --user=REMOTE_USER
connect as this user (default=None)
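So, for example (the playbook and inventory names below are placeholders), one of these should work, depending on whether root has your public key or only a password:
# key-based: root's authorized_keys on the remote host must contain your public key
ansible-playbook -i inventory -u root playbook.yml
# password-based: prompt for root's SSH password
ansible-playbook -i inventory -u root --ask-pass playbook.yml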
I have been playing around with a Hadoop installation on CentOS for a while, but today when I shifted to RHEL I got pesky password prompts when trying to start the pseudo-distributed cluster. After hours of poking around, I finally managed to get rid of them by removing the security policy I had selected during the RHEL installation.
Looks like some aspect of the security policy was not letting me set up password less SSH to allow the different servers to communicate.
Going forward, I would like to be able to run a cluster on machines with the security policy enabled. What changes do I need to make, or where should I start looking, to get the right set of network configurations?
I got pesky password prompts when trying to start the pseudo-distributed cluster
That's a sign that you did not correctly establish a passwordless SSH key pair. Perhaps you typed a passphrase when you generated the key? Or you didn't add the public key correctly to the authorized_keys file.
This should not prompt for a password
$ ssh localhost
And if it does, generate keys again without a password
$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys
Also, RHEL systems need SELinux disabled. I believe the Cloudera and Hortonworks install guides also have you turn the firewall off.
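As a rough sketch, and only if your security policy allows it, the usual RHEL/CentOS 7 steps from those guides look like this:
sudo setenforce 0                                                             # SELinux permissive for the running system
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config    # persist across reboots
sudo systemctl stop firewalld
sudo systemctl disable firewalld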
If you want a secure cluster, you would install and configure MIT Kerberos or Active Directory.
I have a strange problem with vagrant ssh. Similar questions, like "Vagrant asks for password after SSH key update", "(vagrant & ssh) require password", or "Vagrant ssh authentication failure", do not help me.
So, the plot.
I have a virtual machine running Ubuntu 14.04.3. All setup was made according to this article: https://blog.engineyard.com/2014/building-a-vagrant-box.
Note: I can SSH to this virtual machine using PuTTY with Vagrant's insecure_private_key (converted to *.ppk), which is located at "C:/Users/Gino/.vagrant.d/insecure_private_key". No password is prompted for.
Then I packaged this virtual machine, ran vagrant init with the package, and ran vagrant up. I got a "Warning: Authentication failure. Retrying..." error. Nevertheless I could vagrant ssh to this machine, but it asked me for a password. And if I tried to SSH to it using PuTTY with the same key (as in the first paragraph), it asked me for a password too.
I ran vagrant halt on this machine, found it in the VirtualBox VM list, and started it manually. After that I tried to SSH to this machine using PuTTY with the same key and succeeded: I could log on without any password.
Result of vagrant ssh-config, if needed:
h:\VagrantBoxes\main-server32>vagrant ssh-config
Host default
HostName 127.0.0.1
User vagrant
Port 2222
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile "C:/Users/Gino/.vagrant.d/insecure_private_key"
IdentitiesOnly yes
LogLevel FATAL
My Vagrantfile (it was generated automatically; almost nothing is there, only a line suggested in the comments was added):
Vagrant.configure(2) do |config|
config.vm.box = "vagrant-main-server32"
config.ssh.insert_key = false
end
So what's the mystery here? Why does SSH with the key work without vagrant up, but fail and prompt for a password with it?
Note: another funny thing is that it still cannot authenticate during vagrant up. But if I log in to the VM through VirtualBox while the "authentication failure" errors are appearing, the vagrant up window also succeeds in logging in, and then vagrant ssh works.
I had the same issue with Vagrant 1.8.1 on several boxes I use (e.g. geerlingguy/centos6).
I didn't have any problem with Vagrant 1.7 on those boxes.
After some research into why I could not SSH into that box, it turned out that /home/vagrant on the box had 755 permissions, and sshd refuses key authentication for a user whose home directory has those permissions.
Extract from /var/log/secure:
Jan 28 15:11:36 server sshd[11721]: Authentication refused: bad ownership or modes for directory /home/vagrant
To fix that VM, I only had to change the permissions on /home/vagrant (I did a chmod 700 on it), and now I can SSH directly into my boxes.
I don't know how to fix it permanently; I think you should modify your base box directly.
Hope this helps!
Edit: I thought it was a folder shared from the host, but it's /vagrant that is shared, not /home/vagrant.
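For clarity, the commands inside the guest (reached through the VirtualBox console, or any login that still works) are simply:
ls -ld /home/vagrant         # shows drwxr-xr-x (755) before the fix
sudo chmod 700 /home/vagrant
ls -ld /home/vagrant         # should now show drwx------ (700)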
I had this old setting at the top of ~/.ssh/config.
PubkeyAcceptedKeyTypes ssh-dss,ssh-rsa
After removing it, vagrant ssh stopped asking for a password.
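If you still need that setting for a few legacy servers, one hedged compromise (the host name below is made up) is to scope it to those hosts in ~/.ssh/config instead of setting it globally:
Host legacy-server.example.com
    PubkeyAcceptedKeyTypes +ssh-dss,ssh-rsa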
If you saved your Vagrantfile on an external hard drive and use exFAT because you are working cross-platform like me, you will also encounter this error. Since exFAT does not store permissions, SSH will always see the private key's permissions as 777, i.e. too open.
I put together this script as a workaround; it runs in PowerShell and Bash (so it is compatible with Linux, macOS and Windows):
# ssh-agent # uncomment if your ssh-agent isn't running as a service
cat V:\vm\arch_template\.vagrant\machines\default\virtualbox\private_key | ssh-add -
ssh -p 2222 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no vagrant@localhost
It requires a working ssh-agent configuration. Also pay attention to the correct port! Vagrant changes it to a different port if 2222 isn't available during vagrant up.
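To see which port Vagrant actually picked, you can ask Vagrant itself:
vagrant ssh-config | grep -i port
vagrant port    # newer Vagrant versions also have this shortcut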
I had the same issue, getting vagrant@127.0.0.1's password: when starting up Vagrant; after entering the presumed password [vagrant], I could connect to the VM. However, after reading through other solutions, I tried running ssh-agent in the same directory where the Vagrantfile was initialized, then vagrant ssh, and I am able to connect to the running instance.
I am trying to install Hadoop on an Amazon EC2 instance running CentOS 6.5. I am connected to the instance but want to set up passwordless SSH for the session. To do this I used the following commands:
ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub node01
I get an error saying: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
I tried logging in as "root" as well as "ec2-user" but it shows the same error.
Could anyone help with this?
I have created a simple scriptlet to ease this process on EC2 Ubuntu instances.
You can check it out here.
Just give the machine names and key path, you are done!
https://github.com/hshinde/pwless
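If you would rather do it by hand: the Permission denied (publickey,...) error usually means the instance only accepts the key pair it was launched with, so authenticate once with that key while appending the new public key (the ~/mykey.pem path and the ec2-user name below are assumptions):
cat ~/.ssh/id_rsa.pub | ssh -i ~/mykey.pem ec2-user@node01 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'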
I'm installing Greenplum Database on my desktop computer following the official installation guide. When I execute
# gpseginstall -f hostfile_exkeys -u gpadmin -p P#$$word
it asks me to provide cluster password access:
[root@sm403-08 greenplum-db-4.2.1.0]# gpseginstall -f hostfile_exkeys -uyang -par0306
20120506:05:59:33:012887 gpseginstall:sm403-08:root-[INFO]:-Installation Info:
link_name None
binary_path /usr/local/greenplum-db-4.2.1.0
binary_dir_location /usr/local
binary_dir_name greenplum-db-4.2.1.0
20120506:05:59:33:012887 gpseginstall:sm403-08:root-[INFO]:-check cluster password access
*** Enter password for localhost-2:
*** Enter password for localhost-2:
*** Enter password for localhost-2:
*** Enter password for localhost-2:
*** Enter password for localhost-2:
This is what my hostfile_exkeys file looks like:
localhost
localhost-1
localhost-2
since I only have one machine.
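Side note, in case it matters: for localhost-1 and localhost-2 to resolve at all on a single machine, they would typically need an /etc/hosts entry along these lines (this is my assumption, not from the guide):
127.0.0.1   localhost localhost-1 localhost-2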
A similar post on the web (http://www.topix.com/forum/com/greenplum/TSDQHMJ6M7I9D0A44) says:
"I had the same error and I discovered that it was because I had set sshd to refuse root login. You must edit your sshd configuration and permit root login for gpseginstall to work. Hope that helps!"
But I have already modified my /etc/ssh/sshd_config file to permit root login:
# Authentication:
#LoginGraceTime 2m
PermitRootLogin yes
#StrictModes yes
#MaxAuthTries 6
#MaxSessions 10
and restarted sshd:
Stopping sshd: [FAILED]
Starting sshd: [ OK ]
but nothing works; the gpseginstall program is still asking for a password.
I have tried all the passwords I can think of (root, gpadmin, my own user's password), but none of them works. What am I supposed to do to get it to work?
Update: It seems that the problem lies in installing the Greenplum community edition on a single node. Is there anyone who has some experience with this?
Thanks in advance!
It turns out that since I'm installing Greenplum Database on a single node, I don't have to do the gpseginstall step at all. It is used to install Greenplum on all segment hosts from the master host.
You need to enable password auth.
sudo nano /etc/ssh/sshd_config
PermitRootLogin yes
PasswordAuthentication yes
Then service sshd restart
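Before re-running gpseginstall, it's worth confirming that root can now log in over SSH to the names in your hostfile, for example:
ssh root@localhost-2    # should accept the root password instead of refusing the connection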
I will be glad if this helps someone who is trying to install Greenplum in cluster mode.
# steps for installing a Greenplum cluster
# first add entries for all servers and interfaces in your /etc/hosts
# gpdb01- master
# gpdb02 - secondary master
# gpdb03 , gpdb04 - data nodes
# set up ssh between all machines
ssh-keygen
ssh-copy-id gpdb02
ssh-copy-id gpdb03
ssh-copy-id gpdb04
# also add entries for the interfaces
vi /etc/hosts
172.12.13.14 gpdb01
172.12.13.14 gpdb01-1
172.12.13.14 gpdb01-2
172.12.13.15 gpdb02
172.12.13.15 gpdb02-1
172.12.13.15 gpdb02-2
172.12.13.16 gpdb03
172.12.13.16 gpdb03-1
172.12.13.16 gpdb03-2
172.12.13.17 gpdb04
172.12.13.17 gpdb04-1
172.12.13.17 gpdb04-2
# enable PermitRootLogin and PasswordAuthentication on all servers
vi /etc/ssh/sshd_config
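# (sketch) the directives to set before restarting, same as in the single-node answer above:
#   PermitRootLogin yes
#   PasswordAuthentication yes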
service sshd restart
# create your hostfile (hostfile_exkeys) listing all hosts and interfaces
gpdb01
gpdb01-1
gpdb01-2
gpdb02
gpdb02-1
gpdb02-2
gpdb03
gpdb03-1
gpdb03-2
gpdb04
gpdb04-1
gpdb04-2
# run the gpseg installer
gpseginstall -f hostfile_exkeys -u gpadmin -p P#$$word
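# optional sanity check once gpseginstall finishes (assumes gpssh from the Greenplum bin directory is on your PATH):
# every host in the file should answer without a password prompt
gpssh -f hostfile_exkeys -e 'hostname'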