I have a weird problem. When I run a playbook with the -u root option, it does not run on the remote machine. My colleagues are able to run it as root with the -u root option.
The weirdest thing is that I can SSH to the remote machine, I can sudo to root on the machine, and I am able to run playbooks there without the -u root option.
It must be some kind of configuration issue, but I have looked everywhere and can't seem to find it. Can anyone point me in the right direction?
Looking at the documentation: -u means you want to connect to the server as the root user, so you should have SSH keys configured, or you should add the --become flag to be able to pass the password for the root user.
Connection Options:
control as whom and how to connect to hosts
-u REMOTE_USER, --user=REMOTE_USER
connect as this user (default=None)
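For example, with SSH keys in place for root, a run might look like this (the inventory and playbook names are placeholders):
ansible-playbook -i hosts site.yml -u root
# or, if root logs in with a password rather than a key:
ansible-playbook -i hosts site.yml -u root --ask-pass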
I need to reach phpMyAdmin on an EC2 instance behind a bastion/jumpserver from my local laptop.
I am looking to reduce these steps by using .ssh/config; this question seeks the right configuration.
When connecting to EC2 without a public bastion server to jump through, this is the normal documented way, which does not work in my case because our deployment uses a public-facing bastion:
https://docs.bitnami.com/aws/faq/get-started/access-phpmyadmin/
When you need to jump through a public-facing bastion, e.g.:
Local/Laptop ------> bastion/jumpserver -----> ec2
The reference link above does not follow the same workflow, and documentation is sparse.
Documentation on setting up inbound/outbound rules for this capability is also sparse.
The preference is to use .ssh/config, which is set up like this:
Host bastionHostTunnel
  Hostname <publicBastionIp>
  User <bastionusername>
  ForwardAgent yes
  IdentityFile <local path to .pem file>

Host ec2Host
  Hostname <privateEC2IP>
  User <ec2 username>
  ForwardAgent yes
  IdentityFile <local path to .pem file>
  # -A enables forwarding of the authentication agent connection
  # -W is used on older machines instead of -J to bounce through
  # %h is the remote hostname
  # On Windows 10 (only?) it seems you must call ssh.exe instead of just ssh
  ProxyCommand ssh.exe -A -W %h:22 bastionHostTunnel
I obviously left out the values in <> above, but I have them, and I have verified that a similar configuration works for enabling SFTP with FileZilla.
Then, in a shell, call this to bind port 8888 on localhost (http://127.0.0.1:8888):
ssh ec2Host -D 8888
Then I ought to be able to open a browser and go to the following to access phpMyAdmin:
http://127.0.0.1:8888/phpmyadmin
The current issue is that this process hangs and possibly refuses the connection. This points to either a bad configuration above or incorrect inbound/outbound rules for the bastion, the EC2 instance, or both.
Has anyone here had a similar issue and been able to solve it? If you could share further, it would be much appreciated. Any extra clues on debugging the overall process would also help.
I'm most curious whether it works if you specify everything on the command line. Once you determine that works, you can start refactoring to move some aspects into .ssh/config. It's usually easier for me to find errors in my configuration when everything is on the command line, plus I don't know that I see all the correct forwarding options listed there.
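For what it's worth, a rough single-command equivalent of the config above might look like this (user names, IPs, and key handling are placeholders; it assumes the .pem key has been loaded into your agent with ssh-add so both hops can use it, and -J needs a reasonably recent OpenSSH, otherwise fall back to the ProxyCommand form):
ssh -A -J <bastionusername>@<publicBastionIp> -D 8888 <ec2 username>@<privateEC2IP>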
Unless I'm very mistaken, you don't need any reference to the EC2 host in your SSH config file, because you're using the jump machine to redirect localhost traffic there; you wouldn't be able to reach the EC2 host machine directly from your local machine without going through an SSH tunnel.
There are many ways to set up a tunnel, but when I do this, I use a command like ssh -L 8080:destination:80 -i <keyfile> me@jumpbox. destination must be reachable from jumpbox, which I can verify by first using ssh -i <keyfile> jumpbox and then, once on that machine, ssh destination. If there's a problem along the way, it's easier to debug these little steps (for instance, if I can't connect to jumpbox by manual SSH, then I know the tunnel will never work).
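A step-by-step sketch of that debugging approach (jumpbox, destination, and key.pem are placeholders):
# 1. Can I reach the bastion at all?
ssh -i key.pem me@jumpbox
# 2. From the bastion, can I reach the target host?
ssh destination
# 3. If both work, build the tunnel, then browse to http://127.0.0.1:8080/phpmyadmin
ssh -L 8080:destination:80 -i key.pem me@jumpbox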
I am new to Ansible and have started learning and working on Ansible playbooks, especially for network automation. As part of our hosting infra, in order to log in to any device, we have a default script that runs to SSH into the device, something like goto . Hence there is no need to give any username and password; it logs into the device directly.
How can we include this customization in an Ansible playbook without using any username or password?
Ansible supports using ssh keys.
Confirm that you can connect using SSH to all the nodes in your inventory using the same username. If necessary, add your public SSH key to the authorized_keys file on those systems.
Refer to documentation here
Also, it is a good idea to read the 'getting started' page
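As a rough sketch (host and user names are made up, and the ping module assumes Python on the target, so this applies to Linux-style nodes rather than network devices):
# copy your public key to each node, then test connectivity
ssh-copy-id admin@node1.example.com
ansible all -i hosts -u admin -m ping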
You will still need to supply a username that the SSH key belongs to:
Guide on Setting up an SSH key for a Linux User: Here
Once the SSH key is configured and copied over to your Ansible server:
Edit the sudoers file on the slave node and set NOPASSWD for the user; that way your user won't be prompted for a password when you are running sudo commands: Reference Here
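A minimal sketch of that sudoers entry (the username deploy is just an example; always edit with visudo):
deploy ALL=(ALL) NOPASSWD: ALL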
I am using: CentOS 7 with Ambari 2.1.1 to try and setup a single node setup on a VM. I want to do this to install vanilla hadoop etc instead of installing a prepackaged VM with some modified version of hadoop.
I am logged in as root. I have created a ssh key pair. I also ran:
"cat id_rsa.pub > authorized_keys"
"chmod 700 .ssh/"
"chmod 640 ./ssh/authorized_keys"
I have edited /etc/ssh/sshd_config to: permit empty passwords, allow root login and also to state where the authorized_keys file is.
Without a password I can run "ssh root@localhost" and log in fine.
I have run "ambari-server setup" successfully and logged in at localhost:8080 with user: admin, pass: admin.
In "Install Options" FQDN I typed "localhost.test" and have selected a copy of my private key for the Host Registration Information.
But no matter what I do, I am unable to get the components to install under the Confirmed Hosts part, and thus can't get any further.
Can someone please point out what I am missing here?
Thanks to Yusaku on HortonWorks forum for the help.
Ok I ran:
hostname -f
and got localhost
python -c 'import socket; print socket.getfqdn()'
and got localhost.localdomain
By entering localhost.localdomain into the FQDN I was able to get the install working.
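For reference, if you instead want the machine to report a proper FQDN, on CentOS 7 something like the following usually works (ambari.example.com is a placeholder; make sure /etc/hosts maps that name to the machine's IP as well):
hostnamectl set-hostname ambari.example.com
hostname -f   # re-check that it now returns the expected FQDN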
I'm trying to use Vagrant and Ansible to create a developer VM environment. I'm able to connect just fine and install packages. My issue seems to be with ssh, git, and keyfiles. My setup is unfortunately rather complicated, and I don't have the ability to change that. The git repositories are hosted on a machine that I have to connect to via a bastion host with a keyfile.
My local SSH config file has all the necessary proxy commands to make this work. I have SSH forwarding my key, because I can log into the VM manually and use git. Via Ansible, it doesn't seem to know about hosts that should be set up via the SSH config file.
I am not running the git clone as sudo, and I am using accept_hostkey. It just doesn't seem to know about the repository host at all.
I have also tried adding an ansible.cfg with the following setting:
ssh_args = -o ControlPersist=15m -F ssh.config -q
The ssh.config file is the same as my ~/.ssh/config, which happens to work when doing the git clones manually. I'm also doing this manually as the vagrant user, and I have remote_user set to vagrant in my playbook.
I'm just kind of stumped as to how this is supposed to work.
If I understand correctly, you can do the git clone manually inside your Vagrant machine?
If yes, and since you have already told us that both machines have exactly the same ~/.ssh/config file, then you can do what I did during a git clone when I got an error:
- name: Pull sources from the repository.
  git: repo='git@bitbucket.org:test/test.git' version=master dest=/var/www accept_hostkey=True force=yes recursive=no key_file=~/.ssh/id_rsa
Sometimes, explicitly defining key_file, accept_hostkey=True, and force=yes solves the problem.
On the other hand, if you want to explicitly force the ssh connection type instead of paramiko, you can set it in your ansible.cfg file, which is located at /etc/ansible/ansible.cfg:
[defaults]
transport=ssh
There is another technique that I have read about somewhere; you can also try it to teach Ansible to talk to the Git server on your behalf (again, this change is in /etc/ansible/ansible.cfg):
[ssh_connection]
ssh_args = -o ForwardAgent=yes
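One quick way to check whether the forwarded agent is actually visible on the guest (a sketch; the inventory name hosts is a placeholder) is to run an ad-hoc command:
ansible all -i hosts -m shell -a 'ssh-add -l'
# if forwarding works, this lists the same keys as ssh-add -l on your workstation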
Hope this will help you. Thanks
I'm not too familiar with Ansible, but from the docs, Ansible supports two SSH transports: OpenSSH and Paramiko (Python's SSH).
Unless you manually choose which one to use, it might choose Paramiko instead of OpenSSH.
This could explain the trouble you are having, since ssh_args is an OpenSSH-specific setting.
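If Paramiko is indeed being picked, you can force the OpenSSH transport per run (playbook and inventory names here are placeholders):
ansible-playbook -i hosts playbook.yml -c ssh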
So the issue turned out to be that I was actually running one of my git clones as root after all.
For the SSH key to be forwarded properly in that case, you have to edit /etc/sudoers (with visudo) and update env_keep so that SSH_AUTH_SOCK is preserved.
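A minimal sketch of that change (added via visudo):
Defaults env_keep += "SSH_AUTH_SOCK"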
I want to clone a bunch of private git repositories while provisioning a Vagrant box. According to this article, this should be possible using config.ssh.forward_agent = true. However, when trying to connect to GitHub via something like ssh -T git@github.com -o StrictHostKeyChecking=no, it fails with the following error:
Warning: Permanently added 'github.com,192.30.252.130' (RSA) to the list of known hosts.
Permission denied (publickey).
I cut my configuration down to the simplest possible configuration. You can find it here: https://gist.github.com/TomTasche/31f7c45fcffc2997d43a
When I do "vagrant ssh" and try the same again, a similar error occurs:
Cloning into 'private-repositories'...
Warning: Permanently added the RSA host key for IP address '192.30.252.130' to the list of known hosts.
Permission denied (publickey).
fatal: The remote end hung up unexpectedly
Edit: the configuration linked above does work on a host running Ubuntu, but works on neither a Mac host nor a Windows host. My goal is to have a configuration that works on all three hosts.
Please check whether your host system has ssh-agent forwarding enabled. You can do so for example by adding this block to your ~/.ssh/config file:
Host *
  ForwardAgent yes
If this is enabled, vagrant ssh (and also vagrant provision) should be able to forward your key to the guest machine.
You might also want to check, using ssh-add -l, whether your ssh-agent knows about your SSH key. If it is in the list and you have agent forwarding activated, you should have success. Otherwise, you can add the key to your ssh-agent by running ssh-add <path to your key file>.
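Put together, a rough way to verify this on the host and then inside the VM (the key path is a placeholder) might be:
ssh-add -l || ssh-add ~/.ssh/id_rsa   # make sure the agent holds your key
vagrant ssh
# inside the guest, this should greet you by username if forwarding worked:
ssh -T git@github.com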
It sounds like you may be hitting this particular bug:
https://github.com/mitchellh/vagrant/issues/1735
(Despite it being "closed", it's actually not fixed.)
On Windows, SSH Forwarding in Vagrant does not work properly by default (because of a bug in net-ssh).
However, there is a workaround, or rather a simple hack: you can auto-copy your local SSH key to the Vagrant VM via a simple provisioning script in your Vagrantfile. Here's an example:
https://github.com/mitchellh/vagrant/issues/1735#issuecomment-25640783
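The linked comment has the full script; the general idea, as a rough sketch rather than the exact code from that comment (it assumes the key used for GitHub is ~/.ssh/id_rsa), is something like this inside your Vagrant.configure block:
  # upload the private key into the guest and lock down its permissions
  config.vm.provision "file", source: "~/.ssh/id_rsa", destination: "/home/vagrant/.ssh/id_rsa"
  config.vm.provision "shell", privileged: false, inline: "chmod 600 /home/vagrant/.ssh/id_rsa"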
Tom,
What you're doing is fairly generic in nature, and I don't think it is Vagrant-specific.
Try some of the following to track down the issue:
Edit your /etc/ssh/sshd_config
Set LogLevel DEBUG
Restart the sshd service: sudo service sshd restart or /etc/init.d/sshd restart
tail -f /var/log/authlog (note: the file may be something else, like /var/log/authd.log or /var/log/secure)
Watch what happens when you connect. It should give you some indication of why it's failing.
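Condensed, those steps might look roughly like this (the log file name and service commands vary by distro):
sudo vi /etc/ssh/sshd_config    # set LogLevel DEBUG
sudo service sshd restart       # or /etc/init.d/sshd restart
sudo tail -f /var/log/authlog   # or /var/log/secure, /var/log/auth.log, ...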
Again, sorry, I'm not that familiar with Vagrant, but I'm wondering if the provisioning script is running as another user, in which case the agent forwarding may not work as expected.