Ansible authorized_key can't find key file

I am starting to use Ansible to automate the creation of users. The following code creates the user and the /home/test_user_003/.ssh/id_rsa.pub file.
But the authorized_key step fails with the error "could not find file in lookup". The file is there; I can see it.
---
- hosts: test
  become: true
  tasks:
    - name: create user
      user:
        name: test_user_003
        generate_ssh_key: yes
        group: sudo
        ssh_key_passphrase: xyz
    - name: Set authorized key
      authorized_key:
        user: test_user_003
        state: present
        key: "{{ lookup('file', '/home/test_user_003/.ssh/id_rsa.pub') }}"
(I would be interested to know why "key" uses a lookup, but that's for education only.)

You create the user on the remote host but try to look up the generated key on the local host (all lookups in Ansible are executed locally).
You may want to capture (register) the result of the user task and use its fields:
- name: create user
  user:
    name: test_user_003
    generate_ssh_key: yes
    group: sudo
    ssh_key_passphrase: xyz
  register: new_user
- name: Set authorized key
  authorized_key:
    user: test_user_003
    state: present
    key: "{{ new_user.ssh_public_key }}"

Related

Can Ansible Push SSH Pub key Even If the User's Home DIR Hasn't Been Created?

Servers are connected to the AD environment. I want to push the pub key to enable passwordless access.
However, I found my playbook failed if the user's home directory was not present. Do I have to create a separate task to create the user's home dir before kicking off the key-pushing job?
Here is my playbook:
---
- hosts: all
  become: yes
  tasks:
    - name: Set authorized key from file
      authorized_key:
        user: user1
        state: present
        manage_dir: yes
        key: "{{ lookup('file', item) }}"
      with_fileglob:
        - user1.pubkey
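In practice authorized_key fails when the user's home directory is missing: manage_dir only manages the .ssh directory under an existing home. So yes, the home directory needs to exist before the key-pushing task runs. A minimal sketch of a preceding task (the path, owner, and mode are assumptions; on AD-joined servers, having oddjob/pam_mkhomedir create the home at first login is a common alternative):

- name: Ensure the user's home directory exists
  file:
    path: /home/user1      # assumed home path for the AD user
    state: directory
    owner: user1
    group: user1           # assumes a matching group resolves via NSS
    mode: 0700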

How do you change a junos device root password with Ansible?

I am completely new to Ansible and cannot find much documentation regarding changing the root password of a Juniper device. What is the framework to do something like this?
This is what I have so far but I am not confident it is correct.
---
- vars:
    newPassword: "{{ newPassword }}"
- hosts: all
  gather_facts: no
  tasks:
    - name: Update Root user's Password
      user:
        name: root
        update_password: always
        password: newPassword
There is a good introduction in Understanding the Ansible for Junos OS Collections, Roles, and Modules, with references to the Ansible collection junipernetworks.junos.
As already mentioned in a comment, modules for certain tasks are available, including Manage local user accounts on Juniper JUNOS devices:
- name: Set user password
  junipernetworks.junos.junos_user:
    name: ansible
    role: super-user
    encrypted_password: "{{ 'my-password' | password_hash('sha512') }}"
    state: present
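Note that junos_user manages the device configuration over NETCONF, so the connection variables have to match. A sketch of the relevant group_vars (assuming NETCONF has been enabled on the device, e.g. with set system services netconf ssh):

ansible_connection: netconf
ansible_network_os: junipernetworks.junos.junos
ansible_user: ansible   # assumed: a user allowed to change the configuration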

Switching user for delegation to host outside of inventory with Ansible/awx

I am trying to do the following using Ansible 2.8.4 and awx:
Read some facts from Cisco IOS devices (works)
Put results into a local file using a template (works)
Copy/Move the resulting file to a different server
Since I have to use a different user to access IOS devices and servers, and the servers in question aren't part of the inventory used for the playbook, I am trying to achieve this using become_user and delegate_to.
The initial user (defined in the awx template) is allowed to connect to the IOS devices, while different_user can connect to servers using a ssh private key.
The playbook:
---
- name: Read Switch Infos
  hosts: all
  gather_facts: no
  tasks:
    - name: Gather IOS Facts
      ios_facts:
    - debug: var=ansible_net_version
    - name: Set Facts IOS
      set_fact:
        ios_version: "{{ ansible_net_version }}"
    - name: Create Output file
      file: path=/tmp/test state=directory mode=0755
      delegate_to: 127.0.0.1
      run_once: true
    - name: Run Template
      template:
        src: ios_firmware_check.j2
        dest: /tmp/test/output.txt
      delegate_to: 127.0.0.1
      run_once: true
    - name: Set up keys
      become: yes
      become_method: su
      become_user: different_user
      authorized_key:
        user: different_user
        state: present
        key: "{{ lookup('file', '/home/different_user/.ssh/key_file') }}"
      delegate_to: 127.0.0.1
      run_once: true
    - name: Copy to remote server
      remote_user: different_user
      copy:
        src: /tmp/test/output.txt
        dest: /tmp/test/output.txt
      delegate_to: remote.server.fqdn
      run_once: true
When run, the playbook fails in the Set up keys task trying to access the home directory with the ssh key:
TASK [Set up keys] *************************************************************
task path: /tmp/awx_2206_mz90qvh9/project/IOS/ios_version.yml:23
[WARNING]: Unable to find '/home/different_user/.ssh/key_file' in expected paths
(use -vvvvv to see paths)
File lookup using None as file
fatal: [host]: FAILED! => {
"msg": "An unhandled exception occurred while running the lookup plugin 'file'. Error was a <class 'ansible.errors.AnsibleError'>, original message: could not locate file in lookup: /home/different_user/.ssh/key_file"
}
I'm assuming my mistake is somehow related to which user is trying to access the /home/ directory on which device.
Is there a better/more elegant/working way of connecting to a different server using an ssh key to move around files?
I know one possibility would be to just scp using the shell module, but that always feels a bit hacky.
(Sort of) solved using encrypted variables in hostvars with Ansible Vault.
How to get there:
Encrypting the passwords:
This needs to be done from any command line with Ansible installed; for some reason this can't be done in Tower/AWX:
ansible-vault encrypt_string "password"
You'll be prompted for a password to encrypt/decrypt.
If you're doing this for Cisco devices, you'll want to encrypt both the ssh and the enable password using this method.
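To save a copy-paste step, encrypt_string can also emit the variable ready to drop into the inventory; the variable name below is just an example:

ansible-vault encrypt_string 'enable-secret' --name 'ansible_become_pass'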
Add encrypted passwords to inventory
For testing, I put it in hostvars for a single switch; it should be fine to put it into group_vars and use it on multiple switches as well.
ansible_ssh_pass should be the password to access the switch, ansible_become_pass is the enable password.
---
all:
  children:
    Cisco:
      children:
        switches:
switches:
  hosts:
    HOSTNAME:
      ansible_host: ip-address
      ansible_user: username
      ansible_ssh_pass: !vault |
        $ANSIBLE_VAULT;1.1;AES256
        [encrypted string]
      ansible_connection: network_cli
      ansible_network_os: ios
      ansible_become: yes
      ansible_become_method: enable
      ansible_become_pass: !vault |
        $ANSIBLE_VAULT;1.1;AES256
        [encrypted string]
Adding the vault password to Tower/AWX
Add a new credential with credential type "Vault" and the password you used earlier to encrypt the strings.
Now, all you need to do is add the credential to your job template (the template can have one "normal" credential (machine, network, etc.) and multiple vaults).
The playbook then automagically accesses the vault credential to decrypt the strings in the inventory.
Playbook to get Switch Infos and drop template file on a server
The playbook now looks something like below, and does the following:
Gather Facts on all Switches in Inventory
Write all facts into a .csv using a template, save the file on the ansible host
Copy said file to a different server using a different user
The job template is configured with the user that can access the server; the user that accesses the switches with a password is stored in the inventory, as seen above.
---
- name: Read Switch Infos
  hosts: all
  gather_facts: no
  tasks:
    - name: Create Output file
      file: path=/output/directory state=directory mode=0755
      delegate_to: 127.0.0.1
      run_once: true
    - debug:
        var: network
    - name: Gather IOS Facts
      remote_user: username
      ios_facts:
    - debug: var=ansible_net_version
    - name: Set Facts IOS
      set_fact:
        ios_version: "{{ ansible_net_version }}"
    - name: Run Template
      template:
        src: ios_firmware_check.csv.j2
        dest: /output/directory/filename.csv
      delegate_to: 127.0.0.1
      run_once: true
    - name: Create Destination folder on remote server outside inventory
      remote_user: different_username
      file: path=/destination/directory state=directory mode=0755
      delegate_to: remote.server.fqdn
      run_once: true
    - name: Copy to remote server outside inventory
      remote_user: different_username
      copy:
        src: /output/directory/filename.csv
        dest: /destination/directory/filename.csv
      delegate_to: remote.server.fqdn
      run_once: true

Creating an idempotent playbook that uses root then disables root server access

I'm provisioning a server and hardening it by disabling root access after creating an account with escalated privs. The tasks before the account creation require root, so once root has been disabled the playbook is no longer idempotent. I've discovered one way to resolve this is to use wait_for_connection with block/rescue...
- name: Secure the server
  hosts: "{{ hostvars.localhost.ipv4 }}"
  gather_facts: no
  tasks:
    - name: Block required to leverage rescue as the only way I can see of avoiding an already disabled root stopping the playbook
      block:
        - wait_for_connection:
            sleep: 2 # avoid too many requests within the timeout
            timeout: 5
        - name: Create the service account first so that the play is idempotent; we can't rely on root being enabled as it is disabled later
          user:
            name: swirb
            password: "{{ password }}"
            shell: /bin/bash
            ssh_key_file: "{{ ssh_key }}"
            groups: sudo
        - name: Add authorized keys for service account
          authorized_key:
            user: swirb
            key: "{{ item }}"
          with_file:
            - "{{ ssh_key }}"
        - name: Disallow password authentication
          lineinfile:
            dest: /etc/ssh/sshd_config
            regexp: '^[#]PasswordAuthentication'
            line: "PasswordAuthentication no"
            state: present
          notify: Restart ssh
        - name: Disable root login
          replace:
            path: /etc/ssh/sshd_config
            regexp: 'PermitRootLogin yes'
            replace: 'PermitRootLogin no'
            backup: yes
          become: yes
          notify: Restart ssh
      rescue:
        - debug:
            msg: "{{ error }}"
  handlers:
    - name: Restart ssh
      service: name=ssh state=restarted
This is fine until I install fail2ban, as wait_for_connection causes too many connections to the server, which jails the IP. So I created a task to add the IP address of the Ansible controller to jail.conf, like so...
- name: Install and configure fail2ban
  hosts: "{{ hostvars.localhost.ipv4 }}"
  gather_facts: no
  tasks:
    - name: Install fail2ban
      apt:
        name: "{{ packages }}"
      vars:
        packages:
          - fail2ban
      become: yes
    - name: Add the IP address to the whitelist, otherwise wait_for_connection triggers jailing
      lineinfile:
        dest: /etc/fail2ban/jail.conf
        regexp: '^(ignoreip = (?!.*{{ hostvars.localhost.ipv4 }}).*)'
        line: '\1 <IPv4>'
        state: present
        backrefs: true
      notify: Restart fail2ban
      become: yes
  handlers:
    - name: Restart fail2ban
      service: name=fail2ban state=restarted
      become: yes
This works, but I have to hard-wire the Ansible controller's IPv4 address. There doesn't seem to be a standard way of obtaining the IP address of the Ansible controller.
I'm also not that keen on adding the controller to every server's whitelist.
Is there a cleaner way of creating an idempotent provisioning playbook?
Otherwise, how do I get the Ansible Controller IP address?
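One way to avoid hard-wiring the address is to gather facts on localhost in a first play and reference them from the others. A sketch (assuming the controller reaches the servers from its default interface; behind NAT this fact would not match the address the servers see):

- name: Gather controller facts
  hosts: localhost
  gather_facts: yes

- name: Install and configure fail2ban
  hosts: "{{ hostvars.localhost.ipv4 }}"
  gather_facts: no
  tasks:
    - name: Whitelist the controller address
      lineinfile:
        dest: /etc/fail2ban/jail.conf
        regexp: '^(ignoreip = (?!.*{{ hostvars.localhost.ansible_default_ipv4.address }}).*)'
        line: '\1 {{ hostvars.localhost.ansible_default_ipv4.address }}'
        backrefs: true
      notify: Restart fail2ban
      become: yes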

How to create users per environment using the Ansible inventory and the "htpasswd" module

I'm a newbie in Ansible. I wrote an Ansible role that creates a user and password in "/etc/httpd/.htpasswd" like this:
- name: htpasswd
  htpasswd:
    path: /etc/httpd/.htpasswd
    name: dev
    password: dev
    group: apache
    mode: 0640
  become: true
Now I'm trying to understand how I can set the user and password placeholder variables per environment using the inventory (or any other way). For example, if I ran "ansible-playbook -i inventories/dev", the role could be set like this:
- name: htpasswd
  htpasswd:
    path: /etc/httpd/.htpasswd
    name: "{{ inventory.htpasswd.name }}"
    password: "{{ inventory.htpasswd.password }}"
    group: apache
    mode: 0640
  become: true
And in the inventory folder for each environment there would be an "htpasswd" file with name and password content like this:
name: dev
password: dev
Does Ansible have something like that? Or can someone explain the best practices?
By default, Ansible assigns each host to the all group. With the following structure you can define group vars per inventory:
inventories/dev/hosts
inventories/dev/group_vars/all.yml
inventories/staging/hosts
inventories/staging/group_vars/all.yml
In inventories/dev/group_vars/all.yml:
name: dev
password: dev
In inventories/staging/group_vars/all.yml:
name: staging
password: staging
And then in your tasks, reference the vars with their names:
- name: htpasswd
  htpasswd:
    path: /etc/httpd/.htpasswd
    name: "{{ name }}"
    password: "{{ password }}"
    group: apache
    mode: 0640
  become: true
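Selecting the environment is then just a matter of pointing -i at the right inventory directory (the playbook name here is an example):

ansible-playbook -i inventories/dev site.yml
ansible-playbook -i inventories/staging site.yml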
