Using cloud-init to write_files to ~/.ssh/ breaks SSH to EC2 instance (perhaps any machine) - amazon-ec2

I'm using a CloudFormation template in YAML with embedded cloud-init UserData to set the hostname, install packages, and so on. I've found that once I include the write_files directive it breaks the default SSH key on the EC2 instance, i.e. it seems to interfere with whatever process AWS uses to manage authorized_keys; in the EC2 logs I can see fingerprints of random keys being generated, not the expected keypair.
#cloud-config
hostname: ${InstanceHostname}
fqdn: ${InstanceHostname}.${PublicDomainName}
manage_etc_hosts: true
package_update: true
package_upgrade: true
packages:
- build-essential
- git
write_files:
  - path: /home/ubuntu/.ssh/config
    permissions: '0600'
    owner: "ubuntu:ubuntu"
    content: |
      Host github.com
        IdentityFile ~/.ssh/git
      Host *.int.${PublicDomainName}
        IdentityFile ~/.ssh/default
        User ubuntu
power_state:
  timeout: 120
  message: Rebooting to ensure hostname has stuck correctly
  mode: reboot
Removing the write_files block works fine; leave it in and I cannot SSH to the host due to an SSH key mismatch.
So is it due to writing a file to ~/.ssh? Maybe ~/.ssh/authorized_keys gets deleted, or the permissions on the directory are changed?
Appending to ~/.ssh/authorized_keys with runcmd works fine, but I'd like to use the proper write_files method for larger files.

For AWS EC2 Linux instances, the SSH public key from your keypair is stored in ~/.ssh/authorized_keys. If you overwrite it with something else, make sure that you understand the implications.
The correct procedure is to append public keys from keypairs to authorized_keys AND set the correct file permissions.
If you are setting up a set of keypairs in authorized_keys, that is OK too, as long as you format the file correctly with public keys and set the file permissions correctly.
The file permissions should be 644 so that the SSH server can read them.
Another possible issue is that after changing authorized_keys you may also need to restart the SSH server, but I see that you are rebooting the server, which removes that problem.
Ubuntu example:
sudo service ssh restart
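For example, appending an extra public key for the default ubuntu user while keeping the AWS-provisioned one could look roughly like this (the staged key path /tmp/extra_key.pub is just a placeholder):
# Append the key rather than overwrite authorized_keys, then fix ownership and permissions
install -d -m 700 -o ubuntu -g ubuntu /home/ubuntu/.ssh
cat /tmp/extra_key.pub >> /home/ubuntu/.ssh/authorized_keys
chown ubuntu:ubuntu /home/ubuntu/.ssh/authorized_keys
chmod 644 /home/ubuntu/.ssh/authorized_keys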

Related

How to build Ansible Playbook without username/password

I am new to Ansible and have started learning and working on Ansible Playbooks, especially for network automation. As part of our hosting infra, in order to log in to any device we have a default script that runs to SSH into the device, something like goto . Hence there is no need to give any username and password; it logs into the device directly.
How can we include this customization in an Ansible playbook without using any username or password?
Ansible supports using ssh keys.
Confirm that you can connect using SSH to all the nodes in your inventory using the same username. If necessary, add your public SSH key to the authorized_keys file on those systems.
Refer to documentation here
Also, it is a good idea to read the 'getting started' page
You will still need to supply a username that the SSH key belongs to:
Guide on setting up an SSH key for a Linux user: Here
Once the SSH key is configured and copied over to your Ansible server:
Edit the sudoers file on the slave node and set NOPASSWD for the user; that way your user won't be prompted for a password when running sudo commands: Reference here
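Roughly, the key-based setup described above looks like this from the control node (the user deploy, host node1.example.com, and key path are placeholders, not values from your environment):
# Generate a key pair on the control node (skip if one already exists)
ssh-keygen -t ed25519 -f ~/.ssh/ansible_key -N ""
# Copy the public key to the managed node for the user Ansible will connect as
ssh-copy-id -i ~/.ssh/ansible_key.pub deploy@node1.example.com
# Confirm passwordless login works, then point Ansible at the same user and key
ssh -i ~/.ssh/ansible_key deploy@node1.example.com true
ansible all -m ping -u deploy --private-key ~/.ssh/ansible_key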

Unable to connect to AWS instance even after manually adding in public key to authorized_keys

I am unable to run an ansible-playbook or use ansible ping on an AWS instance. However, I can SSH into the instance with no problem. My hosts file is this:
[instance]
xx.xx.xxx.xxx ansible_ssh_user=ubuntu ansible_ssh_private_key_file=/home/josh/Ansible/Amazon/AWS.pem
Should I not use a direct path? I am trying to use Ansible to install Apache onto the server, with a task like service: name=apache2 state=started. In my security group in the AWS console, I allowed all incoming SSH traffic on port 22, and Ansible tries to SSH through port 22, so that should not be the problem. Is there some crucial idea behind SSHing into instances that I didn't catch onto? I tried following this post: Ansible AWS: Unable to connect to EC2 instance, but to no avail.
Make sure that inside ansible.cfg you set:
private_key_file = path of private key (server-private-key)
On the host machine, don't change the default authorized_keys file. A better way is to create a user, create a .ssh directory for that user, and then inside it create a file called authorized_keys and paste in your server public key:
~/.ssh/authorized_keys
Try: ansible-playbook yourplaybookname.yml --connection=local
Ansible defaults to SSH.
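A rough sketch of that approach, assuming a dedicated deploy user and a matching key pair (all names and paths are placeholders):
# On the managed host: create a user with its own authorized_keys
sudo adduser --disabled-password --gecos "" deploy
sudo install -d -m 700 -o deploy -g deploy /home/deploy/.ssh
cat deploy_key.pub | sudo tee /home/deploy/.ssh/authorized_keys
sudo chown deploy:deploy /home/deploy/.ssh/authorized_keys
sudo chmod 600 /home/deploy/.ssh/authorized_keys
# On the control machine: point ansible.cfg at the matching private key
cat >> ansible.cfg <<'EOF'
[defaults]
remote_user = deploy
private_key_file = ~/.ssh/deploy_key
EOF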

Add host to known_hosts file without prompt

I am trying to add a known host to the known_hosts file using ansible
vagrant@jedi:/vagrant$ ansible web -m known_hosts -a "name=web state=present"
paramiko: The authenticity of host 'web' can't be established.
The ssh-rsa key fingerprint is afb8cf4885468badb1a7b8afc16ac211.
Are you sure you want to continue connecting (yes/no)?
I keep getting prompted as above.
I thought this module took care of this? Otherwise I should do a keyscan of web and add that to the known_hosts file.
What am I doing wrong or misunderstanding?
@udondan explained it very well. Just another note:
This is the default behavior of Ansible. It will always check the host key. If Ansible has never connected to that host, this prompt will appear. That is why you aren't able to call the known_hosts module without accepting the key first.
If this is not desirable you can set host_key_checking=False in ansible.cfg.
MORE SECURE APPROACH
The best approach is to set this variable only while you're deploying new servers, export ANSIBLE_HOST_KEY_CHECKING=False, and then remove it afterwards with unset ANSIBLE_HOST_KEY_CHECKING.
Host key checking is an important security feature.
Learn more about host_key_checking.
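Put together, the temporary workflow looks something like this (site.yml is a placeholder playbook name):
# Disable host key checking only while bootstrapping brand-new servers
export ANSIBLE_HOST_KEY_CHECKING=False
ansible-playbook site.yml
# Re-enable it as soon as the hosts are known
unset ANSIBLE_HOST_KEY_CHECKING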
The module takes care of it. But there are two problems:
Since you're running the task on the target host, Ansible will first try to connect to the host before it is able to run your task.
Also, since the task runs on the target host, you will add the fingerprint to the known_hosts file on that machine, not locally.
You would need to run the task on the local machine, not on the target machine(s).
ansible localhost -m known_hosts -a "name=web state=present"
Otherwise I should do a keyscan of web and add that to the known_hosts file.
I think you need to do that anyway, since the known_hosts module expects you to pass the key. It does not auto-detect the fingerprint.
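One way to do both steps from the control machine, sketched under the assumption that web resolves and offers an RSA host key:
# Fetch the host key and hand it to the known_hosts module locally
KEY="$(ssh-keyscan -t rsa web 2>/dev/null)"
ansible localhost -m known_hosts -a "name=web key='$KEY' state=present"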

use ssh private key from host in vagrant guest

I want to clone a bunch of private git repositories while provisioning a vagrant box. According to this article this should be possible using config.ssh.forward_agent = true. However, when trying to connect to github via something like ssh -T git@github.com -o StrictHostKeyChecking=no it fails with the following error:
Warning: Permanently added 'github.com,192.30.252.130' (RSA) to the list of known hosts.
Permission denied (publickey).
I cut my configuration down to the simplest possible configuration. You can find it here: https://gist.github.com/TomTasche/31f7c45fcffc2997d43a
When I do "vagrant ssh" and try the same again, a similar error occurs:
Cloning into 'private-repositories'...
Warning: Permanently added the RSA host key for IP address '192.30.252.130' to the list of known hosts.
Permission denied (publickey).
fatal: The remote end hung up unexpectedly
Edit: the configuration linked above works on a host running Ubuntu, but works neither on a Mac host nor on a Windows host. My goal is to have a configuration that works on all three hosts.
Please check whether your host system has ssh-agent forwarding enabled. You can do so for example by adding this block to your ~/.ssh/config file:
Host *
  ForwardAgent yes
If this is enabled vagrant ssh (and also vagrant provision) should be able to forward your key to the guest machine.
You also might want to check with ssh-add -l whether your ssh-agent knows about your SSH key. If it is in the list and you have agent forwarding activated, you should succeed. Otherwise you can add the key to your ssh-agent by running ssh-add <path to your key file>.
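A quick way to check both ends, assuming the key lives at ~/.ssh/id_rsa:
# On the host: make sure the agent holds your key
ssh-add -l || ssh-add ~/.ssh/id_rsa
# Inside the guest: the forwarded agent should list the same key,
# and GitHub should greet you by username instead of "Permission denied"
vagrant ssh -c 'ssh-add -l; ssh -T git@github.com'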
It sounds like you may be hitting this particular bug:
https://github.com/mitchellh/vagrant/issues/1735
(Despite it being "closed" it's actually not fixed)
On Windows, SSH Forwarding in Vagrant does not work properly by default (because of a bug in net-ssh).
However, there is a workaround or simple hack. You can auto-copy your local SSH key to the Vagrant VM via a simple provisioning script in your VagrantFile. Here's an example:
https://github.com/mitchellh/vagrant/issues/1735#issuecomment-25640783
Tom,
What you're doing is fairly generic in nature and I don't think it is Vagrant-specific.
Try some of the following to track down the issue:
edit your /etc/ssh/sshd_config
Set LogLevel debug
Restart the sshd service sudo service sshd restart or /etc/init.d/sshd restart
tail -f /var/log/authlog -- note, the file may be something else like /var/log/authd.log or /var/log/secure or something.
Watch what happens when you connect. It should give you some indication of why it's failing.
Again sorry, I'm not that familiar with Vagrant but I'm wondering if the provisioning script is running as another user, in which case the agent forwarding may not work as expected?
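Condensed into commands, assuming an Ubuntu-style guest where the log is /var/log/auth.log:
# Turn up sshd logging, restart, and watch the log during a connection attempt
sudo sed -i 's/^#\?LogLevel.*/LogLevel DEBUG/' /etc/ssh/sshd_config
sudo service ssh restart    # or: sudo systemctl restart sshd
sudo tail -f /var/log/auth.log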

Getting permission denied when pushing to git vps server

I installed Git for Windows, created my SSH key, and uploaded the public key to my server.
I have this working on my Mac, trying to get it working on my windows machine now.
I did a:
chmod 700 ~/.ssh/
chmod 600 ~/.ssh/*
Here is an image of me running ssh -v gitserveralias.
I have a config file that has the gitserveralias and port etc.
I tried clearing out the known hosts file also.
My config looks like:
Host serveralias
  User xxx
  Hostname 123.234.452.232
  Port 22222
  IdentityFile ~/.ssh/id_rsa
  TCPKeepAlive true
  IdentitiesOnly yes
  PreferredAuthentications publickey
Again I have my setup working fine on my Mac.
Two things to check:
Do you have "PubkeyAuthentication yes" in sshd_config on your server? Try setting it.
Is there an offending key in .ssh/known_hosts? Try removing this file.
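For example (the sshd_config check runs on the server, the known_hosts cleanup on the Windows client, using the host and port from your config):
# On the server: confirm public key auth is enabled, then reload sshd
grep -E -i '^#?PubkeyAuthentication' /etc/ssh/sshd_config
sudo service ssh reload    # or: sudo systemctl reload sshd
# On the client (Git Bash): remove only the stale entry instead of the whole known_hosts file
ssh-keygen -R 123.234.452.232
ssh-keygen -R '[123.234.452.232]:22222'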
