I am building an Ansible playbook to change multiple passwords on our network. And it works! It changes all of the passwords like it is supposed to. However, the last password it changes is the ansible password, which then throws an error because the success check is done with the old Ansible password. Is there a way to tell Ansible to use a different password after changing it? Below is the entire playbook.
---
# Update local administrator on all Windows systems
- hosts: windows
  gather_facts: yes
  ignore_errors: no
  tasks:
    - name: Set Windows Administrator password
      ansible.windows.win_user:
        name: administrator
        update_password: always
        password: "{{ new_win_admin_pass }}"
    - name: Set ansible password.
      ansible.windows.win_user:
        name: ansible
        update_password: always
        password: "{{ new_ansible_pass }}"

# Update all Linux accounts.
# This must always run last; once the ansible password is changed, no further changes can occur.
- hosts: rhel
  become: yes
  gather_facts: yes
  ignore_errors: no
  tasks:
    - name: Set Workstation admin password.
      ansible.builtin.user:
        name: admin
        update_password: always
        password: "{{ new_admin_pass | password_hash('sha512') }}"
      when: inventory_hostname in groups['rhel_workstations']
    - name: Set Linux root password.
      ansible.builtin.user:
        name: root
        update_password: always
        password: "{{ new_root_pass | password_hash('sha512') }}"
    - name: Set ansible password.
      ansible.builtin.user:
        name: ansible
        update_password: always
        password: "{{ new_ansible_pw_var | password_hash('sha512') }}"
I'd try to do this with an async task and check back on the result with the new password:
- hosts: rhel
  become: yes
  tasks:
    - name: Set ansible password.
      ansible.builtin.user:
        name: ansible
        update_password: always
        password: "{{ new_ansible_pw_var | password_hash('sha512') }}"
      async: 15
      poll: 0
      register: change_ansible_password

    - name: Check ansible password change was successful
      vars:
        ansible_password: "{{ new_ansible_pw_var }}"
      async_status:
        jid: "{{ change_ansible_password.ansible_job_id }}"
      register: job_result
      until: job_result.finished
      retries: 15
      delay: 1

    - name: polite guests always clean after themselves when necessary (see doc)
      vars:
        ansible_password: "{{ new_ansible_pw_var }}"
      async_status:
        jid: "{{ change_ansible_password.ansible_job_id }}"
        mode: cleanup
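If you would rather not use async at all, another option is to switch the connection password for anything that still has to run after the change. This is only a sketch, assuming new_ansible_pw_var holds the new cleartext password:

- name: Set ansible password.
  ansible.builtin.user:
    name: ansible
    update_password: always
    password: "{{ new_ansible_pw_var | password_hash('sha512') }}"

# set_fact runs on the controller and does not need a remote connection;
# every task after this point authenticates with the new password.
- name: Use the new password for the rest of the play
  ansible.builtin.set_fact:
    ansible_password: "{{ new_ansible_pw_var }}"
  no_log: true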
My goal is to execute a shell command on different hosts with credentials given directly from the playbook. So far I have tried two things.
Attempt 1:
- name: test
  shell:
    command: "whoami"
  with_items: "{{ lookup('file', '../files/deviceList.txt').splitlines() }}"
  delegate_to: "{{ item.split(';')[0] }}"
  args:
    ansible_connection: network_cli
    ansible_network_os: ios
    ansible_user: "{{ cred_ios_r_user }}"
    ansible_password: "{{ cred_ios_r_pass }}"
Attempt 2:
- name: set default credentials
  set_fact:
    ansible_connection: network_cli
    ansible_network_os: ios
    ansible_user: "{{ cred_ios_r_user }}"
    ansible_password: "{{ cred_ios_r_pass }}"
  with_items: "{{ lookup('file', '../files/deviceList.txt').splitlines() }}"
  delegate_to: "{{ item.split(';')[0] }}"

- name: test
  shell:
    command: "whoami"
  with_items: "{{ lookup('file', '../files/deviceList.txt').splitlines() }}"
  delegate_to: "{{ item.split(';')[0] }}"
The username in the variable {{ cred_ios_r_user }} is 'User1', but when I look at the output from Ansible, it tells me it used the default SSH user named "SSH_User".
What do I need to change so that Ansible uses the given credentials?
The first thing you should check is the "remote_user" keyword of the task:
- name: DigitalOcean | Disallow root SSH access
  remote_user: root
  lineinfile:
    dest: /etc/ssh/sshd_config
    regexp: "^PermitRootLogin"
    line: "PermitRootLogin no"
    state: present
  notify: Restart ssh
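(The notify: Restart ssh above assumes a matching handler exists in the play or role; a minimal sketch, keeping in mind that the service name may be ssh or sshd depending on the distribution:)

handlers:
  - name: Restart ssh
    service:
      name: ssh
      state: restarted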
The next thing is SSH keys. By default, your connections use your own private key, but you can override it with the extra args of the ansible-playbook command (see the command-line sketch below), with variables in the inventory file, or with job vars:
vars:
  ansible_ssh_user: "root"
  ansible_ssh_private_key_file: "{{ role_path }}/files/ansible.key"
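On the command line, that override would look something like this (user and key path are illustrative):

ansible-playbook playbook.yml -u root --private-key ./files/ansible.key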
So, if you want, you can use custom keys that you keep in vars or files:
- name: DigitalOcean | Add Pub key
  remote_user: root
  authorized_key:
    user: "{{ do_user }}"
    key: "{{ lookup('file', do_key_public) }}"
    state: present
You can find more information in my auto-droplet-digitalocean role.
Your vars should be... vars available for your task, not arguments passed to the shell module. So in a nutshell:
- name: test
  vars:
    ansible_connection: network_cli
    ansible_network_os: ios
    ansible_user: "{{ cred_ios_r_user }}"
    ansible_password: "{{ cred_ios_r_pass }}"
  shell:
    cmd: "whoami"
  with_items: "{{ lookup('file', '../files/deviceList.txt').splitlines() }}"
  delegate_to: "{{ item.split(';')[0] }}"
Meanwhile, this looks like bad practice. You would normally use add_host to build an in-memory inventory with the correct hosts and credentials, and then run your task using the natural host-batch play loop. Something like:
- name: manage hosts
  hosts: localhost
  gather_facts: false
  tasks:
    - name: add hosts to inventory
      add_host:
        name: "{{ item.split(';')[0] }}"
        groups:
          - my_custom_group
        ansible_connection: network_cli
        ansible_network_os: ios
        ansible_user: "{{ cred_ios_r_user }}"
        ansible_password: "{{ cred_ios_r_pass }}"
      with_items: "{{ lookup('file', '../files/deviceList.txt').splitlines() }}"

- name: run my command on all added hosts
  hosts: my_custom_group
  gather_facts: false
  tasks:
    - name: test command
      shell: whoami
Alternatively, you can create a full inventory from scratch from your CSV file and use it directly, or use/develop an inventory plugin that reads your CSV file directly (like in this example).
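For example, a hand-written static inventory for this case could look roughly like this (hostnames, group name and credentials are illustrative; in practice you would keep the password in a vault rather than in plain text):

[ios_devices]
switch01.example.com
switch02.example.com

[ios_devices:vars]
ansible_connection=network_cli
ansible_network_os=ios
ansible_user=User1
ansible_password=ChangeMe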
With Ansible I am trying to copy a file from one host to another host. I accomplish this using the synchronize module with delegate_to. However, I believe that this is the root of my problems.
- hosts: hosts
  tasks:
    - name: Synchronization using rsync protocol (pull)
      synchronize:
        mode: pull
        src: /tmp/file.tar
        dest: /tmp
      delegate_to: 10.x.x.1
      become_user: root
      become_method: su
      become: yes
My code to set the root password:

- name: Set root password for host
  set_fact:
    ansible_become_password: password
My inventory file containing the hosts:
[hosts]
10.x.x.2
Now I'm facing the following error:
fatal: [hosts]: FAILED! => {"msg": "Incorrect su password"}
Playbook:
- name: Fetch root password for host from CA PAM
  shell: ./grabPassword.ksh
  args:
    chdir: /home/virgil
  register: password
  delegate_to: "{{ delegate_host }}"
  become_user: "{{ root_user }}"
  become_method: su

- debug:
    msg: "{{ password.stdout }}"

- name: Set root password for host
  set_fact:
    ansible_become_password: "{{ password.stdout }}"

- name: Copying tar file over (Never existed)
  synchronize:
    src: /tmp/emcgrab_Linux_v4.8.3.tar
    dest: /tmp
  delegate_to: "{{ delegate_host }}"
  register: test
  become: yes
Check whether you have also set the become password on the command line or in your configuration/inventory; a value coming from one of those places may be the one actually being used instead of the one you set with set_fact.
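For example (illustrative commands; any of these sources can supply a competing ansible_become_password):

ansible-playbook copy.yml --ask-become-pass
ansible-playbook copy.yml -e ansible_become_password=somepassword
# ...or ansible_become_password defined in the inventory or group_vars for the host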
I have some servers which I want to administer with Ansible. Currently I need to create user accounts on all of them. On some of them, some accounts are already present. I want to create the users with a default password, but if a user already exists, I don't want to change their password.
Can someone help me with this condition?
Here is my playbook:
---
- hosts: all
  become: yes
  vars:
    sudo_users:
      # password is generated through "mkpasswd" command from 'whois' package
      - login: user1
        password: hashed_password
      - login: user2
        password: hashed_password
  tasks:
    - name: Make sure we have a 'sudo' group
      group:
        name: sudo
        state: present
    - user:
        name: "{{ item.login }}"
        #password: "{{ item.password }}"
        shell: /bin/bash
        groups: "{{ item.login }},sudo"
        append: yes
      with_items: "{{ sudo_users }}"
From the docs of the user module:
update_password (added in 1.3): always / on_create
always will update passwords if they differ; on_create will only set the password for newly created users.
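Applied to the task in the question, a minimal sketch would be:

- user:
    name: "{{ item.login }}"
    password: "{{ item.password }}"
    update_password: on_create
    shell: /bin/bash
    groups: "{{ item.login }},sudo"
    append: yes
  with_items: "{{ sudo_users }}"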
I want to write an Ansible playbook where we can provide a username and Ansible will display the authorized keys for that user. The path to the authorized keys is {{ user_home_dir }}/.ssh/authorized_keys.
I tried with the shell module like below:
---
- name: Get authorized_keys
  shell: cat "{{ user_home_dir }}"/.ssh/authorized_keys
  register: read_key

- name: Prints out authorized_key
  debug: var=read_key.stdout_lines
The problem is, it will show me the file inside /home/ansible/.ssh/authorized_keys. "ansible" is the user that I am using to connect to the remote machine.
Below is vars/main.yml
---
authorized_user: username
user_home_dir: "{{ lookup('env','HOME') }}"
Any idea? FYI, I am new to Ansible and have tried this link already.
In your vars file, you have
user_home_dir: "{{ lookup('env','HOME') }}"
Thanks to Konstantin for pointing it out: all lookups are executed on the control host, so the lookup of the HOME environment variable will always resolve to the home directory of the user from which ansible is being invoked.
You could use the getent module from Ansible to retrieve a user's info. The snippet below should help:
---
- hosts: localhost
  connection: local
  remote_user: myuser
  gather_facts: no
  vars:
    username: myuser
  tasks:
    - name: get user info
      getent:
        database: passwd
        key: "{{ username }}"
      register: info

    # getent_passwd[username][4] is the user's home directory
    - shell: "echo {{ getent_passwd[username][4] }}"
The below worked. We need to have become too, otherwise we will get a permission denied error.
---
- hosts: local
  remote_user: ansible
  gather_facts: no
  become: yes
  become_method: sudo
  vars:
    username: myuser
  tasks:
    - name: get user info
      getent:
        split: ":"
        database: passwd
        key: "{{ username }}"

    - name: Get authorized_keys
      shell: cat "{{ getent_passwd[username][4] }}"/.ssh/authorized_keys
      register: read_key

    - name: Prints out authorized_key
      debug: var=read_key.stdout_lines
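If you prefer not to shell out, a module-based variant of the read step could look like this (a sketch reusing the same getent fact; slurp returns the file content base64-encoded):

- name: Get authorized_keys
  slurp:
    src: "{{ getent_passwd[username][4] }}/.ssh/authorized_keys"
  register: read_key

- name: Prints out authorized_key
  debug:
    msg: "{{ read_key.content | b64decode }}"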
I'm trying to use Ansible (version 2.1.2.0) to create named SSH access across our network of servers. Running Ansible from a jump box, I'm creating a set of users and generating a private/public key pair for each with the user module:
- user:
    name: "{{ item }}"
    shell: /bin/bash
    group: usergroup
    generate_ssh_key: yes
    ssh_key_comment: "ansible-generated for {{ item }}"
  with_items: "{{ list_of_usernames }}"
From there I would like to copy the public key across to the authorized_keys file on each remote server. I'm using the authorized_key module to fetch and copy the user's generated public key as the authorized_key on the remote servers:
- authorized_key:
    user: "{{ item }}"
    state: present
    key: "{{ lookup('file', '/home/{{ item }}/.ssh/id_rsa.pub') }}"
  with_items:
    - "{{ list_of_usernames }}"
What I'm finding is that the lookup will fail, as it is unable to access the pub file within the user's .ssh folder. It will run happily, however, if I run Ansible as the root user (which for me is not an option I'm willing to take).
fatal: [<ipAddress>]: FAILED! => {"failed": true, "msg": "{{ lookup('file', '/home/testuser/.ssh/id_rsa.pub') }}: the file_name '/home/testuser/.ssh/id_rsa.pub' does not exist, or is not readable"}
Is there a better way to do this other than running ansible as root?
Edit: Sorry, I had forgotten to mention that I'm running with become: yes.
Edit #2: Below is a condensed playbook of what I'm trying to run; of course, in this instance it'll add the authorized_key to the localhost, but it shows the error I'm facing:
---
- hosts: all
  become: yes
  vars:
    active_users: ['user1','user2']
  tasks:
    - group:
        name: "users"

    - user:
        name: "{{ item }}"
        shell: /bin/bash
        group: users
        generate_ssh_key: yes
      with_items: "{{ active_users }}"

    - authorized_key:
        user: "{{ item }}"
        state: present
        key: "{{ lookup('file', '/home/{{ item }}/.ssh/id_rsa.pub') }}"
      become: yes
      become_user: "{{ item }}"
      with_items:
        - "{{ active_users }}"
OK, the problem is with the lookup plugin.
It is executed on the Ansible control host with the permissions of the user that runs ansible-playbook, and become: yes does not elevate the plugin's permissions.
To overcome this, capture the result of the user task and use its output in further tasks:
- user:
    name: "{{ item }}"
    shell: /bin/bash
    group: docker
    generate_ssh_key: yes
    ssh_key_comment: "ansible-generated for {{ item }}"
  with_items:
    - ansible5
    - ansible6
  register: new_users
  become: yes

- debug: msg="user {{ item.item }} pubkey {{ item.ssh_public_key }}"
  with_items: "{{ new_users.results }}"
Although you will need to delegate some of these tasks, the idea is the same.
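For the original use case, the registered keys can then be fed into authorized_key, for example (a sketch; whether you delegate this task or run it against the remote servers directly depends on where the users and keys were created):

- authorized_key:
    user: "{{ item.item }}"
    state: present
    key: "{{ item.ssh_public_key }}"
  with_items: "{{ new_users.results }}"
  become: yes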
On most Linux/Unix machines only two accounts have access to /home/testuser/.ssh/id_rsa.pub: root and testuser. So if you would like to modify those files, you need to be either root or testuser.
You can use privilege escalation with become. You said you don't want to run Ansible as root, which is perfectly fine. You haven't mentioned whether the user you are running Ansible with has sudo access or not. If it does, and using sudo is fine for you, then you can simply do:
- authorized_key:
    user: "{{ item }}"
    state: present
    key: "{{ lookup('file', '/home/{{ item }}/.ssh/id_rsa.pub') }}"
  with_items:
    - "{{ list_of_usernames }}"
  become: yes
This will, by default, run these commands as root using sudo, while keeping the rest of the tasks run by the non-root user. This is also the preferred way of doing runs with Ansible.
If you don't want this, and you want to keep your user as root-free as possible, then you need to run the command as testuser (and any other user you want to modify). This would mean you still need to patch up the sudoers file to allow your ansible user to transform into any of these users (see the sketch at the end of this answer), which can also open up a few security issues, albeit not as many as being able to run anything as root. After that you can do:
- authorized_key:
    user: "{{ item }}"
    state: present
    key: "{{ lookup('file', '/home/{{ item }}/.ssh/id_rsa.pub') }}"
  with_items:
    - "{{ list_of_usernames }}"
  become: yes
  become_user: "{{ item }}"
There are some caveats to using this approach, so you might want to read the full Privilege Escalation page in Ansible's documentation, especially the section on becoming an unprivileged user, before you try this.
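For reference, the sudoers change mentioned above could be as narrow as something like this (usernames are illustrative):

# /etc/sudoers.d/ansible - allow the ansible user to become only the managed accounts
ansible ALL=(user1,user2) NOPASSWD: ALL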