Setting up Hadoop on a RHEL machine with a security policy - hadoop

I have been playing around with a Hadoop installation on CentOS for a while, but today, when I shifted to RHEL, I got pesky password prompts when trying to start the pseudo-distributed cluster. After hours of poking around I finally managed to get rid of them by removing the security policy I had selected during the RHEL installation.
It looks like some aspect of the security policy was not letting me set up passwordless SSH for the different servers to communicate.
Going forward I would like to be able to run a cluster on machines with the security policy enabled. What changes do I need to make, or where should I start looking, to get the right set of network configurations?

I got pesky password prompts when trying to start the pseudo-distributed cluster
That's a sign you did not correctly establish a passwordless SSH key pair. Perhaps you typed a passphrase when you generated the key, or you didn't add the public key correctly to the authorized_keys file.
This should not prompt for a password:
$ ssh localhost
And if it does, generate the keys again without a passphrase:
$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys
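If it still prompts after that, directory permissions are another common culprit; with sshd's default StrictModes setting, keys are ignored when your home directory or ~/.ssh is group- or world-writable. A quick check (default paths assumed):
$ chmod 700 ~/.ssh
$ ls -ld ~ ~/.ssh ~/.ssh/authorized_keys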
Also, RHEL systems need SELinux disabled. I believe the Cloudera and Hortonworks install guides also have you turn the firewall off.
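A minimal sketch of what that usually involves on RHEL/CentOS 7 (firewalld assumed; on RHEL 6 it would be the iptables service instead). Note that setenforce only switches SELinux to permissive for the running system; a full disable needs the config change plus a reboot:
$ sudo setenforce 0
$ sudo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
$ sudo systemctl stop firewalld
$ sudo systemctl disable firewalld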
If you want a secure cluster, you would install and configure MIT Kerberos or Active Directory.

Related

How to access phpMyAdmin from laptop via SSH tunnel through AWS bastion/jump server to EC2 instance using .ssh/config

I need to reach phpMyAdmin on an EC2 instance behind a bastion/jump server from my local laptop.
I'm looking to reduce these steps to a .ssh/config setup; the question is about finding the right configuration.
When connecting to EC2 without a public bastion server to jump through, this is the normal documented way, which does not work in my case because our deployment uses a public-facing bastion:
https://docs.bitnami.com/aws/faq/get-started/access-phpmyadmin/
When you need to jump through a public-facing bastion, e.g.:
Local/Laptop ------> bastion/jumpserver -----> ec2
The reference link above does not cover this workflow, and documentation for it is sparse.
Documentation on setting up the inbound/outbound rules for this capability is also sparse.
The preference is to use .ssh/config, which is set up like this:
Host bastionHostTunnel
    Hostname <publicBastionIp>
    User <bastionusername>
    ForwardAgent yes
    IdentityFile <local path to .pem file>

Host ec2Host
    Hostname <privateEC2IP>
    User <ec2 username>
    ForwardAgent yes
    IdentityFile <local path to .pem file>
    # -A Enable forwarding of the authentication agent connection
    # -W used on older machines instead of -J to bounce through
    # %h the remote hostname
    # On Windows 10 (only?) it seems you must call ssh.exe instead of plain ssh
    ProxyCommand ssh.exe -A -W %h:22 bastionHostTunnel
I obviously left out the values in <> above, but I have them, and I have verified that a similar configuration works for enabling SFTP with FileZilla.
Then, in a shell, call this to bind to port 8888 on localhost (http://127.0.0.1:8888):
ssh ec2Host -D 8888
Then I ought to be able to open a browser and go to the following to access phpMyAdmin:
http://127.0.0.1:8888/phpmyadmin
The current issue is that this process hangs and possibly refuses the connection. This points to either a bad configuration above or incorrect inbound/outbound rules for the bastion and/or the EC2 instance.
If anyone here has had a similar issue and was able to solve it, sharing how would be much appreciated. Any extra clues for debugging the overall process would also help in an answer.
I'm most curious whether it works if you specify everything on the command line. Once you determine that works, you can start refactoring to put some aspects into .ssh/config. It's usually easier for me to find errors in my configuration when everything is on the command line, and I'm not sure I see all the correct forwarding options listed there.
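As a rough sketch, that command-line-only test could look like this (placeholders match the ones in the question; it assumes an OpenSSH new enough to support -J, and you may also need ssh-add <local path to .pem file> so the bastion hop can use the key from the agent):
ssh -A -i <local path to .pem file> -J <bastionusername>@<publicBastionIp> -D 8888 <ec2 username>@<privateEC2IP>
Keep in mind that -D opens a SOCKS proxy, so the browser has to be configured to use 127.0.0.1:8888 as a SOCKS proxy; for a plain http://127.0.0.1:8888/phpmyadmin URL, a local forward such as -L 8888:127.0.0.1:80 to the EC2 host is the more direct route.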
Unless I'm very mistaken, you don't need any reference to the EC2 host in your SSH config file, because you're using the jump machine to redirect localhost traffic there; you wouldn't be able to reach the EC2 host machine directly from your local machine without going through an SSH tunnel.
There are many ways to build a tunnel, but when I do this, I use a command like ssh -L 8080:destination:80 -i <keyfile> me@jumpbox. destination must be reachable from jumpbox, which I can verify by first using ssh -i <keyfile> jumpbox and then, once on that machine, ssh destination. If there's a problem along the way, it's easier to debug these little steps (for instance, if I can't connect by manual ssh to jumpbox, then I know the tunnel will never work).
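Applied to the setup in the question, that approach would look something like this (placeholders as above; it assumes phpMyAdmin is served over plain HTTP on port 80 of the EC2 instance):
ssh -L 8888:<privateEC2IP>:80 -i <local path to .pem file> <bastionusername>@<publicBastionIp>
With that tunnel open, http://127.0.0.1:8888/phpmyadmin in a local browser is forwarded through the bastion straight to the EC2 instance, and no SOCKS proxy settings are needed.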

ssh remote access on bash Windows 10

I'd like to connect remotely to the Ubuntu bash on my Windows 10.
I've got an answer on port 22, but when it asks for a username and password, it says access denied...
I've already created a user "root" and done a "sudo passwd root".
Windows firewall is deactivated (service stopped).
Thanks!
Stop the SSH server and SSH broker services on Windows to avoid an SSH port conflict.
Make the changes below in /etc/ssh/sshd_config:
UsePrivilegeSeparation no
PasswordAuthentication yes
Then restart the SSH server with sudo service ssh restart. If you see a "could not load host key" error, create the host key as below and restart the SSH service:
sudo ssh-keygen -f /etc/ssh/ssh_host_rsa_key -b 4096 -t rsa
First, you need to stop/disable the Windows 10 SSH Server Broker services or change the OpenSSH port.
After that, modify the /etc/ssh/sshd_config:
UsePrivilegeSeparation no
PubkeyAuthentication no
PasswordAuthentication yes
I started having issues with my Bash on Ubuntu on Windows SSH connection after installing VirtualBox. I stopped the VM and uninstalled it, and still couldn't authenticate. The user 'Nobody' is correct: the best solution is either to disable the SSH broker for Windows 10 or just change the port for SSH on the Linux subsystem, which I did, and it works perfectly.
In most cases you must also add an inbound firewall rule to allow traffic on port 22. The default setup only allows inbound traffic for the Windows implementation of SSH, and therefore does not allow any traffic for the openssh-server. Just follow the instructions above, then add a rule for port 22 inbound in Windows Firewall, and you should be set.
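For reference, a rule like that can be added from an elevated command prompt along these lines (the rule name is just an example):
netsh advfirewall firewall add rule name="WSL SSH" dir=in action=allow protocol=TCP localport=22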
Since the Windows implementation doesn't provide chroot, you need to modify /etc/ssh/sshd_config:
UsePrivilegeSeparation no
You will also need to create a user, using the useradd command or similar.
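A minimal sketch of that step (the username is just an example):
sudo useradd -m -s /bin/bash sshuser
sudo passwd sshuser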

Correct steps to set up Ambari on a CentOS VM

I am using CentOS 7 with Ambari 2.1.1 to try to set up a single-node cluster on a VM. I want to do this to install vanilla Hadoop etc. instead of installing a prepackaged VM with some modified version of Hadoop.
I am logged in as root. I have created a ssh key pair. I also ran:
"cat id_rsa.pub > authorized_keys"
"chmod 700 .ssh/"
"chmod 640 ./ssh/authorized_keys"
I have edited /etc/ssh/sshd_config to permit empty passwords, allow root login, and state where the authorized_keys file is.
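For context, the directives involved would look something like this (these values are inferred from the description above, not copied from the poster's file):
PermitRootLogin yes
PermitEmptyPasswords yes
AuthorizedKeysFile .ssh/authorized_keys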
Without a password I can run "ssh root@localhost" and log in fine.
I have run "ambari-server setup" successfully and logged in at localhost:8080 with user: admin, pass: admin.
In "Install Options" FQDN I typed "localhost.test" and have selected a copy of my private key for the Host Registration Information.
But no matter what I do, I am unable to get the components to install under the Confirmed Hosts part, and thus can't get any further.
Can someone please point out what I am missing here?
Thanks to Yusaku on the Hortonworks forum for the help.
Ok I ran:
hostname -f
and got localhost
python -c 'import socket; print socket.getfqdn()'
and got localhost.localdomain
By entering localhost.localdomain into the FQDN I was able to get the install working.
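If you would rather register the host under a real FQDN instead of localhost.localdomain, a rough sketch on CentOS 7 would be (the hostname and IP below are examples, not values from the question):
sudo hostnamectl set-hostname ambari.test.local
echo "192.168.56.101 ambari.test.local ambari" | sudo tee -a /etc/hosts
hostname -f    # should now print ambari.test.local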

Passwordless SSH not establishing

I am trying to install Hadoop on an Amazon EC2 instance running CentOS 6.5. I am connected to the instance but want to set up passwordless SSH for the session. To do this I used the following commands:
ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub node01
I get an error saying : Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
I tried logging in as "root" as well as "ec2-user", but it shows the same error.
Could anyone help with this?
I have created a simple scriptlet to ease this process on EC2 Ubuntu instances.
You can check it out here.
Just give it the machine names and the key path, and you are done!
https://github.com/hshinde/pwless
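If you prefer to do it by hand, the usual pattern is to append the new public key over the existing .pem-based login (a sketch that assumes the .pem key still works and the remote user is ec2-user):
cat ~/.ssh/id_rsa.pub | ssh -i <path to .pem file> ec2-user@node01 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'
The "Permission denied (publickey,...)" error usually means the server accepts only key authentication, so ssh-copy-id cannot log in with a password to copy the key for you.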

use ssh private key from host in vagrant guest

I want to clone a bunch of private git repositories while provisioning a Vagrant box. According to this article this should be possible using config.ssh.forward_agent = true. However, when trying to connect to GitHub via something like ssh -T git@github.com -o StrictHostKeyChecking=no, it fails with the following error:
Warning: Permanently added 'github.com,192.30.252.130' (RSA) to the list of known hosts.
Permission denied (publickey).
I cut my configuration down to the simplest possible configuration. You can find it here: https://gist.github.com/TomTasche/31f7c45fcffc2997d43a
When I do "vagrant ssh" and try the same again, a similar error occurs:
Cloning into 'private-repositories'...
Warning: Permanently added the RSA host key for IP address '192.30.252.130' to the list of known hosts.
Permission denied (publickey).
fatal: The remote end hung up unexpectedly
Edit: the configuration linked above works on a host running Ubuntu, but works on neither a Mac host nor a Windows host. My goal is to have a configuration that works on all three hosts.
Please check whether your host system has ssh-agent forwarding enabled. You can do so for example by adding this block to your ~/.ssh/config file:
Host *
    ForwardAgent yes
If this is enabled vagrant ssh (and also vagrant provision) should be able to forward your key to the guest machine.
You might also want to check with ssh-add -l whether your ssh-agent knows about your SSH key. If it is in the list and you have agent forwarding activated, you should have success. Otherwise, you can add the key to your ssh-agent by running ssh-add <path to your key file>.
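A quick way to verify the whole chain from the host (the key path is an example; vagrant ssh -c runs a single command inside the guest):
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
ssh-add -l
vagrant ssh -c 'ssh-add -l && ssh -T git@github.com -o StrictHostKeyChecking=no'
If the last command lists the same key inside the guest, agent forwarding is working and the GitHub test should greet you by username.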
It sounds like you may be hitting this particular bug:
https://github.com/mitchellh/vagrant/issues/1735
(Despite it being "closed" it's actually not fixed)
On Windows, SSH forwarding in Vagrant does not work properly by default (because of a bug in net-ssh).
However, there is a workaround, or rather a simple hack: you can auto-copy your local SSH key to the Vagrant VM via a simple provisioning script in your Vagrantfile. Here's an example:
https://github.com/mitchellh/vagrant/issues/1735#issuecomment-25640783
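In the same spirit, a manual version of that hack looks roughly like this (it assumes a single machine named "default" and an id_rsa key on the host; note that it copies a private key into the VM, which is only sensible for throwaway development boxes):
vagrant ssh-config > /tmp/vagrant-ssh-config
scp -F /tmp/vagrant-ssh-config ~/.ssh/id_rsa default:.ssh/id_rsa
vagrant ssh -c 'chmod 600 ~/.ssh/id_rsa && ssh -T git@github.com -o StrictHostKeyChecking=no'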
Tom,
What you're doing is fairly generic in nature, and I don't think it is Vagrant-specific.
Try some of the following to track down the issue:
edit your /etc/ssh/sshd_config
Set LogLevel debug
Restart the sshd service sudo service sshd restart or /etc/init.d/sshd restart
tail -f /var/log/authlog -- note, the file may be something else like /var/log/authd.log or /var/log/secure or something.
Watch what happens when you connect. It should give you some indication of why it's failing.
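Condensed into commands, that debugging pass might look like this (an Ubuntu-style guest is assumed, where the service is ssh and the log is /var/log/auth.log; adjust the names for your distribution):
sudo sed -i 's/^#\?LogLevel.*/LogLevel DEBUG/' /etc/ssh/sshd_config
sudo service ssh restart
sudo tail -f /var/log/auth.log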
Again, sorry, I'm not that familiar with Vagrant, but I'm wondering if the provisioning script is running as another user, in which case agent forwarding may not work as expected?

Resources