Use SSH private key from host in Vagrant guest

I want to clone a bunch of private git repositories while provisioning a vagrant box. According to this article this should be possible using config.ssh.forward_agent = true. However, when trying to connect to GitHub via something like ssh -T git@github.com -o StrictHostKeyChecking=no it fails with the following error:
Warning: Permanently added 'github.com,192.30.252.130' (RSA) to the list of known hosts.
Permission denied (publickey).
I cut my configuration down to the simplest possible configuration. You can find it here: https://gist.github.com/TomTasche/31f7c45fcffc2997d43a
When I do "vagrant ssh" and try the same again, a similar error occurs:
Cloning into 'private-repositories'...
Warning: Permanently added the RSA host key for IP address '192.30.252.130' to the list of known hosts.
Permission denied (publickey).
fatal: The remote end hung up unexpectedly
Edit: the configuration linked above does work on a host running Ubuntu, but it works on neither a Mac host nor a Windows host. My goal is to have a configuration that works on all three of these hosts.

Please check whether your host system has ssh-agent forwarding enabled. You can do so for example by adding this block to your ~/.ssh/config file:
Host *
  ForwardAgent yes
If this is enabled, vagrant ssh (and also vagrant provision) should be able to forward your key to the guest machine.
You also might want to check with ssh-add -l whether your ssh-agent knows about your SSH key. If the key is in the list and you have agent forwarding activated, it should work. Otherwise, add the key to your ssh-agent by running ssh-add <path to your key file>.
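To verify the whole chain end to end, a quick sketch (~/.ssh/id_rsa is an assumed key path; adjust to your setup):
# on the host: list the keys your agent knows about
ssh-add -l
# if your key is missing, add it
ssh-add ~/.ssh/id_rsa
# then test the forwarded agent from inside the guest
vagrant ssh -c "ssh -T git@github.com"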

It sounds like you may be hitting this particular bug:
https://github.com/mitchellh/vagrant/issues/1735
(Despite it being "closed" it's actually not fixed)
On Windows, SSH Forwarding in Vagrant does not work properly by default (because of a bug in net-ssh).
However, there is a workaround, or rather a simple hack: you can auto-copy your local SSH key to the Vagrant VM via a simple provisioning script in your Vagrantfile. Here's an example:
https://github.com/mitchellh/vagrant/issues/1735#issuecomment-25640783
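If you would rather not rely on agent forwarding at all, a rough alternative sketch (my own variant, not the script from the linked comment; ~/.ssh/id_rsa is an assumed path, and copying a private key into a VM is a hack with obvious security implications):
# export Vagrant's SSH settings so plain scp/ssh can use them
vagrant ssh-config > /tmp/vagrant-ssh-config
# copy the host's private key into the guest ("default" is the machine name from ssh-config)
scp -F /tmp/vagrant-ssh-config ~/.ssh/id_rsa default:/home/vagrant/.ssh/id_rsa
ssh -F /tmp/vagrant-ssh-config default "chmod 600 ~/.ssh/id_rsa"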

Tom,
What you're doing is fairly generic in nature and I don't think it is Vagrant-specific.
Try some of the following to track down the issue:
Edit your /etc/ssh/sshd_config and set LogLevel DEBUG.
Restart the sshd service: sudo service sshd restart or /etc/init.d/sshd restart.
tail -f /var/log/authlog -- note, the file may be something else, like /var/log/authd.log or /var/log/secure.
Watch what happens when you connect; the log should give you some indication of why it's failing. These steps are collected in the sketch below.
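Putting those steps together as a sketch (the sed edit is a shortcut assumption on my part; the service name and log path vary by distribution, as noted above):
# turn up sshd logging
sudo sed -i 's/^#\?LogLevel.*/LogLevel DEBUG/' /etc/ssh/sshd_config
sudo service sshd restart    # or: /etc/init.d/sshd restart
# watch the auth log while reconnecting (may be auth.log, authd.log, or secure instead)
sudo tail -f /var/log/authlog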
Again sorry, I'm not that familiar with Vagrant but I'm wondering if the provisioning script is running as another user, in which case the agent forwarding may not work as expected?

Related

How to access phpMyAdmin from laptop via SSH tunnel through AWS bastion/jump server to EC2 instance using .ssh/config

Need to reach phpMyAdmin on an EC2 instance behind a bastion/jumpserver from local laptop.
Looking to reduce these steps to a .ssh/config-based setup. The question is about finding the right configuration.
When connecting to EC2 without a public bastion server to jump through, this is the normal documented way; it does not work in my case because our deployment uses a public-facing bastion:
https://docs.bitnami.com/aws/faq/get-started/access-phpmyadmin/
When you need to jump through a public-facing bastion, e.g.:
Local/Laptop ------> bastion/jumpserver -----> ec2
The reference link above does not cover this workflow, and documentation is sparse. Documentation on setting up the inbound/outbound rules needed for this is also sparse.
The preference is to use .ssh/config, which is set up like this:
Host bastionHostTunnel
  Hostname <publicBastionIp>
  User <bastionusername>
  ForwardAgent yes
  IdentityFile <local path to .pem file>

Host ec2Host
  Hostname <privateEC2IP>
  User <ec2 username>
  ForwardAgent yes
  IdentityFile <local path to .pem file>
  # -A enables forwarding of the authentication agent connection
  # -W is used on older machines instead of -J to bounce through the bastion
  # %h is the remote hostname
  # On Windows 10 (only?) it seems you must call ssh.exe instead of plain ssh
  ProxyCommand ssh.exe -A -W %h:22 bastionHostTunnel
I obviously left out the values in <> above, but I have them and have verified that a similar configuration works for SFTP with FileZilla.
Then, in a shell, call this to bind port 8888 on localhost (http://127.0.0.1:8888):
ssh ec2Host -D 8888
Then I ought to be able to open a browser and go to the following to access phpMyAdmin:
http://127.0.0.1:8888/phpmyadmin
The current issue is that this process hangs and possibly refuses the connection. That points to either a bad configuration above or incorrect inbound/outbound rules on the bastion, the EC2 instance, or both.
Has anyone had a similar issue and managed to solve it? Any extra clues for debugging the overall process would also help.
I'm most curious whether it works if you specify everything on the command line... once you determine that works, you can start refactoring some aspects into .ssh/config. It's usually easier for me to find errors in my configuration when everything is on the command line, plus I'm not sure I see all the correct forwarding options listed there.
Unless I'm very mistaken, you don't need any reference to the ec2 host in your SSH config file, because you're using the jump machine to redirect localhost traffic there; you wouldn't be able to reach the ec2 host directly from your local machine through an SSH tunnel.
There are many ways to build a tunnel, but when I do this I use a command like ssh -L 8080:destination:80 -i <keyfile> me@jumpbox. destination must be reachable from jumpbox, which I can verify by first running ssh -i <keyfile> jumpbox and then, once on that machine, ssh destination. If there's a problem along the way, it's easier to debug these little steps (for instance, if I can't connect manually to jumpbox, I know the tunnel will never work).
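One more thing worth checking, as an assumption on my part: -D opens a SOCKS proxy rather than a plain port forward, so browsing straight to http://127.0.0.1:8888 would not work with it; a fixed -L forward matches that URL better. A minimal sketch reusing the bastionHostTunnel entry from the .ssh/config above (placeholders are the asker's; port 80 on the EC2 host is assumed to serve phpMyAdmin):
# jump through the bastion and forward local port 8888 to port 80 on the EC2 host itself
ssh -i <local path to .pem file> -J bastionHostTunnel -L 8888:127.0.0.1:80 <ec2 username>@<privateEC2IP>
# then open http://127.0.0.1:8888/phpmyadmin in a browser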

What's the use of authorize and keys options in Homestead.yaml file?

I noticed that I can provision a box, and ssh to it even after commenting out both options in Homestead.yaml, as in:
# authorize: ~/.ssh/id_rsa.pub
# keys:
# - ~/.ssh/id_rsa
Are they necessary at all? I suppose they let me specify the public/private keys for vagrant ssh, but as I understand it, such a pair is generated by Vagrant anyway (see here). What is the actual need for those settings, then?
The reason I'd like to know is that I keep running into an issue where I cannot ssh into a box because vagrant up keeps hanging on homestead-7: SSH auth method: private key (as in this question). With the authorize and keys options commented out I haven't had problems with vagrant up so far.
SSH keys are used for passwordless authentication. In order to use this, you will need to run ssh-keygen and press Enter to accept all the defaults. Once the key has been generated, Homestead will use it to ssh into the VM and run the necessary commands.
If you are running Windows 10, then you will need to install an SSH client; this can be done in various ways, such as Git Bash, PuTTY, OpenSSH, or WSL. If you comment the lines out, it will likely log into the machine using the default username/password combination given to the machine.
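For reference, a minimal sketch of the key generation this answer describes (the paths are the usual defaults the Homestead.yaml entries point at):
# generate the RSA key pair; press Enter to accept the defaults
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa
# authorize: takes the public half (~/.ssh/id_rsa.pub); keys: lists the private half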

Vagrant ssh prompts for password

I have a strange problem with vagrant ssh. Similar questions, like Vagrant asks for password after SSH key update, or (vagrant & ssh) require password, or Vagrant ssh authentication failure do not help me.
So, the plot.
I have a virtual machine running Ubuntu 14.04.3. All setup was made according to this article: https://blog.engineyard.com/2014/building-a-vagrant-box.
Note: I can ssh to this virtual machine using PuTTY with Vagrant's insecure_private_key (converted to *.ppk), which is located at "C:/Users/Gino/.vagrant.d/insecure_private_key". No password is prompted.
Then I packaged this virtual machine, ran vagrant init with this package and ran vagrant up. I got a "Warning: Authentication failure. Retrying..." error. Nevertheless I could vagrant ssh to this machine, but it asked me for a password. And if I tried to ssh to it using PuTTY with the same key (as in the first paragraph), it asked for a password too.
I vagrant halted this machine, found it in VirtualBox's VM list and started it manually. After that I tried to ssh into it using PuTTY with the same key and succeeded: I could log on without any password.
Result of vagrant ssh-config, if needed:
h:\VagrantBoxes\main-server32>vagrant ssh-config
Host default
  HostName 127.0.0.1
  User vagrant
  Port 2222
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile "C:/Users/Gino/.vagrant.d/insecure_private_key"
  IdentitiesOnly yes
  LogLevel FATAL
My Vagrantfile (it was generated automatically; almost nothing is there, only a line suggested in the comments was added):
Vagrant.configure(2) do |config|
config.vm.box = "vagrant-main-server32"
config.ssh.insert_key = false
end
So what's the mystery here? Why does ssh with the key work without vagrant up, but fail and prompt for a password with it?
Note: another funny thing is that it still cannot authenticate during vagrant up. But if, while the "authentication failure" errors are appearing, I log in to the VM through VirtualBox, the login in the vagrant up window also succeeds, and then vagrant ssh works.
I had the same issue with Vagrant 1.8.1 on several boxes I use (e.g. geerlingguy/centos6). I didn't have any problem with Vagrant 1.7 on those boxes.
After some research into why I could not ssh into that box, it turned out that /home/vagrant on the box had 755 permissions, and ssh refuses authentication for a user whose home directory has those permissions.
extract of /var/log/secure:
Jan 28 15:11:36 server sshd[11721]: Authentication refused: bad ownership or modes for directory /home/vagrant
To fix that VM, I only had to change the permissions on /home/vagrant (a chmod 700 on it) and now I can ssh directly into my boxes; see the sketch below.
I don't know how to fix it permanently; I think you should modify your box directly.
Hope this helps!
edit: I thought it was a shared folder from the host, but it's /vagrant that is shared, not /home/vagrant.
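A sketch of the fix, run from the VirtualBox console since ssh itself is being refused (the .ssh tightening is an assumption on my part, the usual companion requirement of sshd's StrictModes, not something this answer states):
# inside the guest, restore the permissions sshd expects
chmod 700 /home/vagrant
chmod 700 /home/vagrant/.ssh
chmod 600 /home/vagrant/.ssh/authorized_keys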
I had this old setting at the top of ~/.ssh/config.
PubkeyAcceptedKeyTypes ssh-dss,ssh-rsa
After removing it, vagrant ssh stopped asking for a password.
If you saved your Vagrantfile on an external hard drive and use exFAT because you are working cross-platform like me, you will also encounter this error. Since exFAT does not store permissions, ssh will always think that the private key's permissions are 777, i.e. too open.
I put together this script as a workaround; it runs in PowerShell and bash (so it is compatible with Linux, Mac, and Windows):
# ssh-agent # uncomment if your ssh-agent isn't running as a service
cat V:\vm\arch_template\.vagrant\machines\default\virtualbox\private_key | ssh-add -
ssh -p 2222 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no vagrant@localhost
It requires a working ssh-agent configuration. Also pay attention to the correct port! Vagrant changes it to a different port if 2222 isn't available during vagrant up.
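If you're unsure which port Vagrant actually picked, there is a quick check (the vagrant port subcommand exists in Vagrant 1.8 and later):
# show host-to-guest port mappings, e.g. which host port forwards to guest port 22
vagrant port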
I had the same issue, getting vagrant@127.0.0.1's password: when starting up Vagrant; after entering the supposed password [vagrant], I could connect to the VM. However, after reading through other solutions, I ran ssh-agent in the same directory as the initialized Vagrantfile, then vagrant ssh, and I was able to connect to the running instance.

Vagrant, Ansible, and SSH forwarding

I'm trying to use Vagrant and Ansible to create a developer VM environment. I'm able to connect just fine and install packages. My issue seems to be with ssh, git, and keyfiles. My setup is unfortunately rather complicated, and I don't have the ability to change that. The git repositories are hosted on a machine that I have to connect to via a bastion host with a keyfile.
My local ssh config file has all the necessary proxy commands to make this work. I have SSH forwarding my key, because I can log into the VM manually and use git. Via Ansible, however, it doesn't seem to know about hosts that should be set up via the ssh config file.
I am not running the git clone as sudo, and I am using accept_hostkey. It just doesn't seem to know about the repository host at all.
I have also tried adding an ansible.cfg with the following command:
ssh_args = -o ControlPersist=15m -F ssh.config -q
The ssh.config file is the same as my ~/.ssh/config that happens to work when doing the git clones manually. I'm also doing this as the vagrant user manually, and I have remote_user set to vagrant in my playbook.
I'm just kind of stumped as to how this is supposed to work.
If I understand correctly, you can do the git clone manually inside your Vagrant machine?
If yes, then since both machines have exactly the same ~/.ssh/config file, you can do what I did when I got this error during a git clone:
- name: Pull sources from the repository.
  git: repo='git@bitbucket.org:test/test.git' version=master dest=/var/www accept_hostkey=True force=yes recursive=no key_file=~/.ssh/id_rsa
Sometimes explicitly defining the key_file, accept_hostkey=True, and force=yes solves the problem.
On the other hand, if you want to explicitly force the ssh connection type instead of paramiko, you can set it in your ansible.cfg file, which is located at /etc/ansible/ansible.cfg:
[defaults]
transport=ssh
There is another technique that I have read about somewhere: you can teach Ansible to forward your agent when it talks to the Git server on your behalf (again, this change is in /etc/ansible/ansible.cfg):
[ssh_connection]
ssh_args = -o ForwardAgent=yes
Hope this will help you. Thanks
I'm not too familiar with Ansible but from docs, Ansible supports 2 ssh transports: OpenSSH, Paramiko (Python's SSH).
Unless you manually choose which one to use, it might pick Paramiko instead of OpenSSH.
That could explain the trouble you're having, since ssh_args is an OpenSSH-specific setting.
So the issue turned out to be that I was actually running one of my git clones as root after all.
For the SSH key to be forwarded properly in that case, you have to edit /etc/sudoers (with visudo) and update env_keep so that SSH_AUTH_SOCK is preserved.
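A sketch of that sudoers change (standard sudoers syntax; edit with visudo rather than writing the file directly):
# keep the forwarded agent socket in the environment of sudo-run commands
Defaults    env_keep += "SSH_AUTH_SOCK"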

Setup passphraseless ssh to localhost on OS X

I'm trying to get Hadoop's Pseudo-Distributed Operation example (http://hadoop.apache.org/common/docs/stable/single_node_setup.html) to work on OS X Lion, but am having trouble getting ssh to work without a passphrase.
The instructions say the following:
Setup passphraseless ssh
Now check that you can ssh to the localhost without a passphrase:
$ ssh localhost
I'm getting connection refused:
archos:hadoop-0.20.203.0 travis$ ssh localhost
ssh: connect to host localhost port 22: Connection refused
If you cannot ssh to localhost without a passphrase, execute the following commands:
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
After this step I am still getting connection refused. Any ideas???
Sounds like you don't have SSH enabled; it should be in the network settings control panel somewhere.
Go to "System Preferences > Sharing > Remote Login" and there's a list of allowed users. Change it to "All Users".
That solves this problem.
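For reference, Remote Login can also be toggled from a terminal (a sketch using the stock systemsetup tool shipped with OS X; it needs admin rights):
# enable the built-in SSH server, then confirm it is on
sudo systemsetup -setremotelogin on
sudo systemsetup -getremotelogin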
Check the permissions on your .ssh directory. Some ssh implementations require that the directory be chmod 700. Otherwise, they just ignore it.
Also, check the output of
ssh -v localhost
to see how the ssh client is trying to connect. The output is very detailed, and will help you decide if it's an authentication problem.
I had the same issue.
Please check if the ssh server is running or not.
If yes, open the /etc/ssh/ssh_config and /etc/ssh/sshd_config files. The issue may be that the server is running on one port while the client is pointing at a different one.
Before that, please ensure that openssh-server and the client are installed.
I had the same problem and I solved it in the following manner:
Make sure SSH is activated.
Run ssh -v localhost (as stated by Herko).
In the output, I identified that DSA authentication is not supported:
debug1: Skipping ssh-dss key /Users/john/.ssh/id_dsa - not in PubkeyAcceptedKeyTypes
I simply regenerated an ECDSA key pair and removed the DSA key pair (see the sketch below).
After the key generation, the procedure given in the Hadoop documentation works.
Therefore, it is important to check whether the authentication method is supported by the OpenSSH configuration.
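A sketch of the regeneration steps described above (key type and paths are the usual defaults):
# generate a passphraseless ECDSA pair and authorize it for localhost logins
ssh-keygen -t ecdsa -P '' -f ~/.ssh/id_ecdsa
cat ~/.ssh/id_ecdsa.pub >> ~/.ssh/authorized_keys
# remove the DSA pair that the client was skipping
rm ~/.ssh/id_dsa ~/.ssh/id_dsa.pub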
