I am completely new to Ansible and cannot find much documentation regarding changing the root password of a Juniper device. What is the framework to do something like this?
This is what I have so far, but I am not confident it is correct.
---
- hosts: all
  gather_facts: no
  vars:
    newPassword: "{{ newPassword }}"
  tasks:
    - name: Update Root user's Password
      user:
        name: root
        update_password: always
        password: "{{ newPassword }}"
There is a good introduction in Understanding the Ansible for Junos OS Collections, Roles, and Modules, with references to the Ansible collection junipernetworks.junos.
As already mentioned in a comment, modules for certain tasks are available, including junos_user (Manage local user accounts on Juniper JUNOS devices):
- name: Set user password
  junipernetworks.junos.junos_user:
    name: ansible
    role: super-user
    encrypted_password: "{{ 'my-password' | password_hash('sha512') }}"
    state: present
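Note that junos_user manages accounts under [edit system login], while the root password lives under [edit system root-authentication], so the original question needs a config-level change. A minimal sketch, assuming the junipernetworks.junos collection is installed and a NETCONF connection is available:

---
- hosts: all
  gather_facts: no
  connection: ansible.netcommon.netconf
  tasks:
    - name: Set the root authentication password
      junipernetworks.junos.junos_config:
        lines:
          # the hash format is an assumption; newer Junos releases accept
          # SHA-512 crypt hashes, older ones may only take MD5
          - set system root-authentication encrypted-password "{{ newPassword | password_hash('sha512') }}"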
Related
The servers are connected to the AD environment. I want to push the public key to enable passwordless access.
However, I found that my playbook fails if the user's home directory is not present. Do I have to create a separate task to create the user's home directory before kicking off the key-pushing job?
Here is my playbook:
---
- hosts: all
  become: yes
  tasks:
    - name: Set authorized key from file
      authorized_key:
        user: user1
        state: present
        manage_dir: yes
        key: "{{ lookup('file', item) }}"
      with_fileglob:
        - user1.pubkey
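For what it's worth, manage_dir only creates the ~/.ssh directory, not the home directory itself, so a preceding task does seem necessary when the home directory may be missing. A minimal sketch, assuming the AD account user1 already resolves on the host (path and ownership are illustrative):

- name: Ensure user1's home directory exists before pushing the key
  file:
    path: /home/user1
    state: directory
    owner: user1
    group: user1
    mode: '0700'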
I am trying to do the following using Ansible 2.8.4 and AWX:
Read some facts from Cisco IOS devices (works)
Put results into a local file using a template (works)
Copy/Move the resulting file to a different server
Since I have to use different users to access the IOS devices and the servers, and the servers in question aren't part of the inventory used for the playbook, I am trying to achieve this using become_user and delegate_to.
The initial user (defined in the AWX template) is allowed to connect to the IOS devices, while different_user can connect to the servers using an ssh private key.
The playbook:
---
- name: Read Switch Infos
  hosts: all
  gather_facts: no
  tasks:
    - name: Gather IOS Facts
      ios_facts:
    - debug: var=ansible_net_version
    - name: Set Facts IOS
      set_fact:
        ios_version: "{{ ansible_net_version }}"
    - name: Create Output file
      file: path=/tmp/test state=directory mode=0755
      delegate_to: 127.0.0.1
      run_once: true
    - name: Run Template
      template:
        src: ios_firmware_check.j2
        dest: /tmp/test/output.txt
      delegate_to: 127.0.0.1
      run_once: true
    - name: Set up keys
      become: yes
      become_method: su
      become_user: different_user
      authorized_key:
        user: different_user
        state: present
        key: "{{ lookup('file', '/home/different_user/.ssh/key_file') }}"
      delegate_to: 127.0.0.1
      run_once: true
    - name: Copy to remote server
      remote_user: different_user
      copy:
        src: /tmp/test/output.txt
        dest: /tmp/test/output.txt
      delegate_to: remote.server.fqdn
      run_once: true
When run, the playbook fails in the Set up keys task while trying to access the ssh key in the home directory:
TASK [Set up keys] *************************************************************
task path: /tmp/awx_2206_mz90qvh9/project/IOS/ios_version.yml:23
[WARNING]: Unable to find '/home/different_user/.ssh/key_file' in expected paths
(use -vvvvv to see paths)
File lookup using None as file
fatal: [host]: FAILED! => {
    "msg": "An unhandled exception occurred while running the lookup plugin 'file'. Error was a <class 'ansible.errors.AnsibleError'>, original message: could not locate file in lookup: /home/different_user/.ssh/key_file"
}
I'm assuming my mistake is somehow related to which user is trying to access the /home/ directory on which device.
Is there a better/more elegant/working way of connecting to a different server using an ssh key to move around files?
I know one possibility would be to just scp using the shell module, but that always feels a bit hacky.
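One alternative sometimes used for hosts outside the inventory is add_host, which registers the server in the in-memory inventory together with its own connection variables; a sketch, with the key path as a placeholder:

- name: Add the file server to the in-memory inventory
  add_host:
    name: remote.server.fqdn
    ansible_user: different_user
    ansible_ssh_private_key_file: /path/to/key_file
  run_once: true

Subsequent tasks can then use delegate_to: remote.server.fqdn and pick up those connection variables.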
(Sort of) solved using encrypted variables in host vars with Ansible Vault.
How to get there:
Encrypting the passwords:
This needs to be done from any command line with Ansible installed; for some reason it can't be done in Tower/AWX:
ansible-vault encrypt_string "password"
You'll be prompted for a password to encrypt/decrypt.
If you're doing this for Cisco devices, you'll want to encrypt both the ssh and the enable password using this method.
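encrypt_string can also label each value so the output can be pasted straight into the inventory; a sketch (the passwords are placeholders):

ansible-vault encrypt_string 'ssh-password' --name 'ansible_ssh_pass'
ansible-vault encrypt_string 'enable-password' --name 'ansible_become_pass'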
Add encrypted passwords to inventory
For testing, I put it in the host vars for a single switch; it should be fine to put it into group_vars and use it for multiple switches as well.
ansible_ssh_pass is the password used to access the switch; ansible_become_pass is the enable password.
---
all:
  children:
    Cisco:
      children:
        switches:
    switches:
      hosts:
        HOSTNAME:
          ansible_host: ip-address
          ansible_user: username
          ansible_ssh_pass: !vault |
            $ANSIBLE_VAULT;1.1;AES256
            [encrypted string]
          ansible_connection: network_cli
          ansible_network_os: ios
          ansible_become: yes
          ansible_become_method: enable
          ansible_become_pass: !vault |
            $ANSIBLE_VAULT;1.1;AES256
            [encrypted string]
Adding the vault password to Tower/AWX
Add a new credential with credential type "Vault" and the password you used earlier to encrypt the strings.
Now, all you need to do is add the credential to your job template (the template can have one "normal" credential (machine, network, etc.) and multiple vaults).
The playbook then automagically accesses the vault credential to decrypt the strings in the inventory.
Playbook to get Switch Infos and drop template file on a server
The playbook now looks something like below, and does the following:
Gather Facts on all Switches in Inventory
Write all facts into a .csv using a template, save the file on the ansible host
Copy said file to a different server using a different user
The job template is configured with the user that can access the server; the user used to access the switches with a password is stored in the inventory, as seen above.
---
- name: Read Switch Infos
  hosts: all
  gather_facts: no
  tasks:
    - name: Create Output file
      file: path=/output/directory state=directory mode=0755
      delegate_to: 127.0.0.1
      run_once: true
    - debug:
        var: network
    - name: Gather IOS Facts
      remote_user: username
      ios_facts:
    - debug: var=ansible_net_version
    - name: Set Facts IOS
      set_fact:
        ios_version: "{{ ansible_net_version }}"
    - name: Run Template
      template:
        src: ios_firmware_check.csv.j2
        dest: /output/directory/filename.csv
      delegate_to: 127.0.0.1
      run_once: true
    - name: Create Destination folder on remote server outside inventory
      remote_user: different_username
      file: path=/destination/directory state=directory mode=0755
      delegate_to: remote.server.fqdn
      run_once: true
    - name: Copy to remote server outside inventory
      remote_user: different_username
      copy:
        src: /output/directory/filename.csv
        dest: /destination/directory/filename.csv
      delegate_to: remote.server.fqdn
      run_once: true
I'm still somewhat new to Ansible so I'm sure this isn't the proper way of doing this, but it's what I've come up with considering the requirements I was given.
I have to perform tasks on a server whose credentials I don't have at hand, since they are locked in a vault. My way of working around this is to get the credentials from the vault and then delegate tasks to that server. I've accomplished this, but I'm wondering if there is a cleaner or more adequate way of doing it. So, here's my setup:
I have a playbook that just has:
---
- hosts: localhost
  roles:
    - role: get_credentials # <-- not the real role names
    - role: use_credentials
Basically, get_credentials gets some credentials from a vault and then use_credentials performs tasks, but each task has

delegate_to: protected_server
vars:
  ansible_ssh_user: "{{ user }}"
  ansible_ssh_pass: "{{ password }}"

at the end of it.
Is there a way I can delegate all the tasks in use_credentials without having to delegate each task individually?
I'd move both of your roles from the roles: section to tasks:, using include_role. Something like this:
tasks:
  - name: Get credentials
    include_role:
      name: get_credentials # I expect this one to set_fact user_from_get_credential and password_from_get_credential
  - name: Use credentials
    include_role:
      name: use_credentials
    delegate_to: protected_server
    vars:
      ansible_ssh_user: "{{ user_from_get_credential }}"
      ansible_ssh_pass: "{{ password_from_get_credential }}"
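One caveat: keywords placed directly on include_role are not inherited by the tasks inside the role. Since Ansible 2.7, include_role has an apply option for exactly this, so the second include could look like the following sketch (same hypothetical variable names as above):

- name: Use credentials
  include_role:
    name: use_credentials
    apply:
      # applied to every task inside the role
      delegate_to: protected_server
  vars:
    ansible_ssh_user: "{{ user_from_get_credential }}"
    ansible_ssh_pass: "{{ password_from_get_credential }}"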
I have an Ansible playbook to update my Debian-based servers. For simplicity and security reasons, I don't want to use a vault for the passwords, and I also don't want to store them in a publicly accessible config file. So I ask for the password for every client with
become: yes
become_method: sudo
Now, when the playbook runs, it seems the first thing Ansible does is ask for the sudo password, but I don't know for which server (the passwords are different). Is there a way to get Ansible to print the current host name before it asks for the password?
The update playbook is similar to this:
---
- hosts: all
  gather_facts: no
  vars:
    verbose: false
    log_dir: "log/dist-upgrade/{{ inventory_hostname }}"
  pre_tasks:
    - block:
        - setup:
      rescue:
        - name: "Install required python-minimal package"
          raw: "apt-get update && apt-get install -y --force-yes python-apt python-minimal"
        - setup:
  tasks:
    - name: Update packages
      apt:
        update_cache: yes
        upgrade: dist
        autoremove: yes
      register: output
    - name: Check changes
      set_fact:
        updated: true
      when: not output.stdout | search("0 upgraded, 0 newly installed")
    - name: Display changes
      debug:
        msg: "{{ output.stdout_lines }}"
      when: verbose or updated is defined
    - block:
        - name: "Create log directory"
          file:
            path: "{{ log_dir }}"
            state: directory
          changed_when: false
        - name: "Write changes to logfile"
          copy:
            content: "{{ output.stdout }}"
            dest: "{{ log_dir }}/dist-upgrade_{{ ansible_date_time.iso8601 }}.log"
          changed_when: false
      when: updated is defined
      connection: local
(source: http://www.panticz.de/Debian-Ubuntu-mass-dist-upgrade-with-Ansible)
Your above become configuration does not make Ansible ask you for a become password: it just tells it to use become with the sudo method (which will work without any password if you have passwordless sudo configured, for example).
If you are asked for a become password, it's because (it's a guess, but I'm rather confident...) you used the --ask-become-pass option when running ansible-playbook.
In this case, you are prompted only once at the beginning of the playbook run, and this default become password is used on all servers you connect to, unless you defined another one in your inventory for a specific host/group.
If you have different become passwords depending on the machine, you don't really have another option: you need to declare those passwords in your inventory (ansible-vault encryption is strongly advised) or use some other mechanism to get them from an external application (HashiCorp Vault, dynamic inventory, CyberArk...).
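For example, a per-host become password can live in host_vars, with the value produced by ansible-vault encrypt_string; a sketch (filename and host are placeholders):

# host_vars/myserver.yml
ansible_become_pass: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  [encrypted string]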
I want to use Ansible to disable SELinux on some remote servers. I don't know the full list of servers yet; it will grow over time.
It would be great if the ssh-copy-id phase were somehow integrated into the playbook; you would expect that from an automation system. I don't mind being asked for the password once per server.
From various reading, I understand I can run a local_action in my task:
---
- name: Disable SELinux
  hosts: all
  remote_user: root
  gather_facts: False
  tasks:
    - local_action: command ssh-copy-id {{ remote_user }}@{{ hostname }}
    - selinux:
        state: disabled
However:
It fails because {{ remote_user }} and {{ hostname }} are not accessible in this context.
I need to set gather_facts to False, because fact gathering is executed before the local_action.
Any idea if that's possible within Ansible playbooks?
You may try this:
- hosts: all
  gather_facts: no
  tasks:
    - set_fact:
        rem_user: "{{ ansible_user | default(lookup('env','USER')) }}"
        rem_host: "{{ ansible_host }}"
    - local_action: command ssh-copy-id {{ rem_user }}@{{ rem_host }}
    - setup:
    - selinux:
        state: disabled
Define the remote user and remote host first, then run the local action, then force fact gathering with setup.