How can I add my SSH key to a target host through Ansible / Vagrant?

I have an SSH key at /home/renz/.ssh/id_rsa.pub. I want to add it to my target host in /root/.ssh/authorized_keys through Ansible. I tried this, but it didn't work:
---
- hosts: snapzio
  tasks:
    - name: Set authorized key taken from file
      authorized_key:
        user: master
        state: present
        key: "{{ lookup('file', '/home/renz/.ssh/id_rsa.pub') }}"
        path: /root/.ssh/authorized_keys
In the first place, I cannot communicate with the host because my key is not in its authorized_keys yet. I think this approach makes sense if I want to manage many hosts, instead of manually copying and pasting the key to each one.

As others have mentioned, if the account you use with Ansible doesn't have an SSH key installed, you'll have to fall back to password authentication. Assuming InstallMyKey.yml is your playbook, you could run something like this:
ansible-playbook InstallMyKey.yml --ask-pass
You'll need to add a remote_user: root line to your YAML between the hosts: and tasks: lines, then type in the root password when prompted.
Assuming the playbook succeeds and everything else in the root SSH settings is correct, your next playbook run should use the renz SSH key and get on without a password.
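For reference, a hedged sketch of the adjusted playbook from the question with that change applied (same user/path values as in the question; run it once with --ask-pass as above):

---
- hosts: snapzio
  remote_user: root
  tasks:
    - name: Set authorized key taken from file
      authorized_key:
        user: master
        state: present
        key: "{{ lookup('file', '/home/renz/.ssh/id_rsa.pub') }}"
        path: /root/.ssh/authorized_keys

Note that the question mixes user: master with a path under /root; pick whichever account should actually hold the key.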

Related

How to add SSH fingerprints to known hosts in Ansible?

I have the following inventory file:
[all]
192.168.1.107
192.168.1.108
192.168.1.109
I want to add the fingerprints for these hosts to the known_hosts file on the local machine.
I know that I can use the ansible.builtin.known_hosts module, but based on the docs:
Name parameter must match with "hostname" or "ip" present in key
attribute.
it seems like I must already have keys generated and I must have three sets of keys - one set per host. I would like to have just one key for all my hosts.
Right now I can use this:
- name: accept new remote host ssh fingerprints at the local host
shell: "ssh-keyscan -t 'ecdsa' {{item}} >> {{ssh_dir}}known_hosts"
with_inventory_hostnames:
- all
but the problem with this approach is that it is not idempotent: if I run it three times it will add three similar lines to the known_hosts file.
Another solution would be to check the known_hosts file for the presence of a host IP and add it only if it is not present, but I could not figure out how to use variables in a when condition to check for more than one host.
So the question is: how can I add host fingerprints to the local known_hosts file, before generating a set of private/public keys, in an idempotent manner?
In my answer to "How to include all host keys from all hosts in group" I created a small Ansible lookup plugin, host_ssh_keys, to extract public SSH keys from the host inventory. Adding all hosts' public SSH keys to /etc/ssh/ssh_known_hosts is then as simple as this, thanks to Ansible's integration of loops with lookup plugins:
- name: Add public keys of all inventory hosts to known_hosts
  ansible.builtin.known_hosts:
    path: /etc/ssh/ssh_known_hosts
    name: "{{ item.host }}"
    key: "{{ item.known_hosts }}"
  with_host_ssh_keys: "{{ ansible_play_hosts }}"
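If you would rather not depend on a custom lookup plugin, a similar idempotent result can be sketched with the built-in known_hosts module and a pipe lookup around ssh-keyscan. This assumes ssh-keyscan is available on the control node and reuses the ssh_dir variable and the ecdsa key type from the question:

- hosts: localhost
  tasks:
    - name: Add host keys to the local known_hosts idempotently (sketch)
      ansible.builtin.known_hosts:
        path: "{{ ssh_dir }}known_hosts"
        name: "{{ item }}"
        # ssh-keyscan prints "<host> <type> <key>", the exact format known_hosts expects
        key: "{{ lookup('pipe', 'ssh-keyscan -t ecdsa ' ~ item) }}"
      loop: "{{ groups['all'] }}"

Because known_hosts only adds the entry when it is missing, rerunning the play does not duplicate lines.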
For public SSH keys I use this one:
- hosts: localhost
  tasks:
    - set_fact:
        linuxkey: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
      check_mode: no

- hosts: all
  tasks:
    - shell:
        cmd: "sudo su - {{ application_user }}"
        stdin: "[[ ! `grep \"{{ hostvars['localhost']['linuxkey'] }}\" ~/.ssh/authorized_keys` ]] && echo '{{ hostvars['localhost']['linuxkey'] }}' >> ~/.ssh/authorized_keys"
        warn: no
        executable: /bin/bash
      register: results
      failed_when: results.rc not in [0,1]
I think you can easily adapt it for the known_hosts file.

Connecting to a remote host with an ansible-vault encrypted private key does not work

I can SSH to a remote server if I use the Ansible command module, e.g.:
tasks:
  - name: ssh to remote machine
    command: ssh -i key ansible@172.16.2.2
However, as this will be stored in GitHub, I encrypted the private SSH key with ansible-vault.
When I rerun the same command with the vault decryption password (--ask-vault-pass), it will not connect. It's as if encrypting and then decrypting does not return the same SSH key.
What am I doing wrong here?
My legendary colleague found a solution, in case anyone else comes across the same issue:
Ansible SSH private key in source control?
You need to copy your encrypted SSH private key to another file first, which decrypts it, and then you can use the decrypted copy, e.g.:
- hosts: localhost
  gather_facts: false
  vars:
    source_key: key
    dest_key: key2
  tasks:
    - name: Install ssh key
      copy:
        src: "{{ source_key }}"
        dest: "{{ dest_key }}"
        mode: 0600
    - name: scp over the cert and key to remote server
      command: ssh -i key2 ec2-user@1.1.1.1
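As a follow-on, once the copy task has produced the decrypted key2, later plays could point the SSH connection at it via the ansible_ssh_private_key_file connection variable. A minimal sketch, with an illustrative remote_servers group and the same key2 path as above:

- hosts: remote_servers
  vars:
    # Illustrative only: use the decrypted copy written by the play above
    ansible_ssh_private_key_file: key2
  tasks:
    - name: Verify connectivity with the decrypted key
      ping: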

Ansible: How to clone a private Git repository via SSH, generating and encrypting the keys on the target

I would like to use Ansible to:
1. Generate and encrypt an SSH key pair on the target
2. Add the SSH public key to GitHub
3. Clone a private GitHub repository
I explicitly do not want to forward any SSH keys from the control node to the target (never understood the point of doing this) or even worse, copy SSH keys from the control node to the target. I also don't want to keep the private key on the target in plain text. All answers to other questions I've seen always suggest one of these three ways, all of which are bad.
Instead, I want to generate an own pair of SSH keys on each target and of course encrypt the private key so that it doesn't lie around on the target in plain text.
I've so far not managed to do this. Can anyone help?
Here is what I've tried so far (assume all variables used exist):
- name: Generate target SSH key for accessing GitHub
  command: "ssh-keygen -t ed25519 -f {{ github_ssh_key_path }} -N '{{ github_ssh_key_encryption_password }}'"
- name: Fetch local target SSH public key
  command: "cat {{ github_ssh_key_path }}.pub"
  register: ssh_pub_key
- name: Authorize target SSH public key with GitHub
  github_key:
    name: Access key for target "{{ target_serial }}"
    pubkey: "{{ ssh_pub_key.stdout }}"
    token: "{{ github_access_token }}"
- name: Clone private git repository
  git:
    repo: git@github.com:my_org/private_repo.git
    clone: yes
    dest: /path/to/private_repo
The problem with the encrypted key is that I then get a "permission denied" error from GitHub. I probably need to add the key to ssh-agent, but I haven't been able to figure out how. Or is there an alternative?
The above works fine if I do not encrypt the SSH private key, i.e. if I do
- name: Generate target SSH key for accessing GitHub
  command: "ssh-keygen -t ed25519 -f {{ github_ssh_key_path }} -N ''"
instead of the command above, but of course I don't want that.
- name: Generate target SSH key for accessing GitHub
  ansible.builtin.command:
    cmd: "ssh-keygen -t ed25519 -f {{ github_ssh_key_path }} -N '{{ github_ssh_key_encryption_password }}'"
    creates: "{{ github_ssh_key_path }}"
- name: Fetch local target SSH public key
  ansible.builtin.slurp:
    src: "{{ github_ssh_key_path }}.pub"
  register: ssh_pub_key
- name: Authorize target SSH public key with GitHub
  github_deploy_key:
    name: Access key for target "{{ target_serial }}"
    # slurp returns base64-encoded content, so decode it before handing it to GitHub
    key: "{{ ssh_pub_key.content | b64decode }}"
    owner: "{{ my_org }}"
    read_only: true
    repo: "{{ private_repo }}"
    token: "{{ github_access_token }}"
- name: Clone private git repository
  git:
    repo: "git@github.com:{{ my_org }}/{{ private_repo }}.git"
    clone: true
    dest: "/path/to/{{ private_repo }}"
- Use community.general.github_deploy_key to instantiate a deploy key with read_only: true (the default) so that the SSH key is tied only to the single GitHub repo. This way it can only be used to get (and not modify) the specific repo that you are cloning to the system anyway. If someone can read the plain-text private key on the host, they can already read the contents of the repo locally. Technically, this does potentially expose more: future versions of the repo (since the initial pull by the Ansible playbook) could be accessed directly from GitHub, and if the initial pull restricted what was pulled (e.g. depth: 1 or single-branch: true), more history or branch structure than exists on the local file system could be pulled with the key.
- Use ansible.builtin.slurp to read the contents of the public key without resorting to command (better idempotence reporting).
- Use the creates parameter of ansible.builtin.command so the key is not recreated each time.
- Use true/false to avoid Norway-problem ambiguity with yes/no.
While this does not answer the tactical question, it may answer the strategic question, depending on the risk budget.
If the private key being encrypted is a hard requirement, it looks like shelling out to ssh-add after launching ssh-agent will be required, and the exact syntax is highly host-OS dependent (e.g. how to give ssh-add the passphrase, as discussed in "How to make ssh-add read passphrase from a file?"); a rough sketch is shown after the list below. Further, the implementation details vary depending on the ongoing access you want the host to have with the key:
- Should it only be usable when this Ansible playbook unlocks the key, or should the passphrase be saved to a system keychain (highly host-OS dependent)?
- Should ssh-add implement a destination constraint (-h) when adding the key to the agent?
- Should the playbook add GitHub as a known host in the ssh config?
  - In the ansible_user's ssh config, or system-wide?
  - For public GitHub.com, or a local instance?
- Should ssh-agent start automatically (e.g. on reboot or login), or only through manual/Ansible intervention?
- Do you even want to keep the key on the server after completing the clone?
  - "No" decreases idempotency, but arguably increases security (or, alternatively, security theater).
  - If not, you will also want to state: absent the github_deploy_key, too.
It does not look like there are (yet) ansible modules to manage the remote host ssh-agent state or keys. ssh agent forwarding seems to be widely accepted by the community and accomplishes most objectives (keeping the authorized key from being persistently stored on the remote host, only allowing use of the key while the agent is forwarded, etc.), so I expect no such module will be forthcoming.
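For completeness, here is a rough, hedged sketch of that ssh-agent route, adapted to the variable names from the question. It assumes OpenSSH 8.4+ on the target (for SSH_ASKPASS_REQUIRE=force) and that github.com is already in known_hosts; treat it as a starting point rather than a drop-in solution:

- name: Clone the private repo with a temporarily unlocked ssh-agent (sketch)
  shell: |
    set -e
    eval "$(ssh-agent -s)"                 # start a throwaway agent
    trap 'ssh-agent -k' EXIT               # kill it when the task ends
    askpass="$(mktemp)"
    printf '#!/bin/sh\necho "$KEY_PASSPHRASE"\n' > "$askpass"
    chmod 700 "$askpass"
    SSH_ASKPASS="$askpass" SSH_ASKPASS_REQUIRE=force \
      ssh-add "{{ github_ssh_key_path }}"  # unlock the encrypted key via the askpass helper
    rm -f "$askpass"
    git clone git@github.com:{{ my_org }}/{{ private_repo }}.git "/path/to/{{ private_repo }}"
  args:
    executable: /bin/bash
    creates: "/path/to/{{ private_repo }}"
  environment:
    KEY_PASSPHRASE: "{{ github_ssh_key_encryption_password }}"
  no_log: true

Because the agent is killed on exit and the askpass helper is deleted, the unlocked key only exists for the duration of the task.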

How to add a sudo password to a delegated host

How do you add a sudo password to a delegated host?
e.g.:
- hosts: host1
  tasks:
    - name: Some remote command
      command: sudo some command
      register: result
      delegate_to: host2
      become: yes
I get "Incorrect sudo password" because I assume it is using the sudo pass for host1. Is there a way to make it use a different password?
It has been a while, but I was struggling with this as well and managed to solve it, so here is a summarized answer:
As pointed out correctly, the issue is that the ansible_become_password variable stays set to the value for your original host (host1) when running the delegated task on host2.
Option 1: Add become password to inventory and delegate facts
One option to solve this is to specify the become password for host2 in your inventory and secure it using Ansible Vault (as they have done here: How to specify become password for tasks delegated to localhost). Then you should be able to make Ansible use the correct sudo password via delegate_facts, as they did in Ansible delegate_to "Incorrect sudo password".
Option 2: Prompt and overwrite pass manually
If you prefer to be prompted for the second sudo password instead, you can use a vars_prompt to supply it at runtime:
- hosts: host1
  vars_prompt:
    - name: custom_become_pass
      prompt: enter the custom become password for host2
      private: yes
  tasks:
    ...
Then you can just replace the variable ansible_become_password before running your delegated tasks and it will use the correct sudo password:
tasks:
  - name: do stuff on host1
    ...
  - name: set custom become
    set_fact:
      ansible_become_password: '{{ custom_become_pass }}'
  - name: perform delegated task
    command: sudo some command
    register: result
    delegate_to: host2
    become: yes
You could try setting the ansible_become_password variable directly in the task's vars section.
Ansible doc
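A hedged sketch of that per-task approach, using an illustrative host2_sudo_password variable (ideally kept in Ansible Vault):

- name: perform delegated task with its own become password (sketch)
  command: some command
  register: result
  delegate_to: host2
  become: yes
  vars:
    # Scoped to this task only, so the become password for host1 is untouched
    ansible_become_password: "{{ host2_sudo_password }}"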

How to combine multiple vars files per host?

I have a playbook running against multiple servers. All servers require a sudo password to be specified, which is specific to each user running the playbook. When running the playbook, I can't use --ask-become-pass, because the sudo passwords on the servers differ. This is the same situation as in another question about multiple sudo passwords.
A working solution is to specify ansible_become_pass in host_vars:
# host_vars/prod01.yml
ansible_become_pass: secret_prod01_password
domain: prod01.example.com
# host_vars/prod02.yml
ansible_become_pass: secret_prod02_password
domain: prod02.example.com
Besides ansible_become_pass, there are other variables defined per host. These variables should be committed to the git repository. However, as ansible_become_pass is specific to each user running the playbook, I'd like to have a separate file (ideally, vaulted) which specifies the password per host.
I imagine the following:
# host_vars/prod01.yml: shared in git
domain: prod01.example.com
# host_vars/prod01_secret.yml: in .gitignore
ansible_become_pass: secret_prod01_password
I imagine both files to be combined by Ansible when running the playbook. Is this possible in Ansible? If so, how?
You should be able to use the include_vars task with the inventory_hostname or ansible_hostname variable. For example:
- name: Include host specific variables
  include_vars: "{{ ansible_hostname }}.yml"
- name: Include host specific secret variables
  include_vars: "{{ ansible_hostname }}_secret.yml"
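Placed in context, a hedged sketch could look like this; the host_vars/ paths and the whoami task are illustrative, and loading the files in pre_tasks ensures ansible_become_pass is available before anything escalates:

- hosts: all
  pre_tasks:
    - name: Include host specific variables
      include_vars: "host_vars/{{ inventory_hostname }}.yml"
    - name: Include host specific secret variables
      include_vars: "host_vars/{{ inventory_hostname }}_secret.yml"
  tasks:
    - name: Something that needs sudo
      command: whoami
      become: true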
An even better solution would be to address the problem of users having unique passwords on different hosts.
You could create a new group in the inventory file, maybe sudo-hosts, and put all your sudo hosts in this group. Then create a file under the group_vars directory with the name of this group. In this file, put the secrets as YAML-structured text:
sudo_hosts:
  host1:
    password: xyz
    othersecret_stuff: abc
  host2:
    ...
Then use ansible-vault to encrypt this file with ONE password and call the playbook with the --ask-vault-pass option,
and you can use your secrets with
"{{ sudo_hosts[inventory_hostname].password }}"
