With Ansible I am trying to copy a file from one host to another host. I accomplish this using the synchronize module with delegate_to; however, I believe this is the root of my problems.
- hosts: hosts
  tasks:
    - name: Synchronization using rsync protocol (pull)
      synchronize:
        mode: pull
        src: /tmp/file.tar
        dest: /tmp
      delegate_to: 10.x.x.1
      become_user: root
      become_method: su
      become: yes
My code to get the root password:
- name: Set root password for host
  set_fact:
    ansible_become_password: password
My inventory file containing the hosts:
[hosts]
10.x.x.2
Now I'm facing the following error:
fatal: [hosts]: FAILED! => {"msg": "Incorrect su password"}
Playbook:
- name: Fetch root password for host from CA PAM
  shell: ./grabPassword.ksh
  args:
    chdir: /home/virgil
  register: password
  delegate_to: "{{ delegate_host }}"
  become_user: "{{ root_user }}"
  become_method: su

- debug:
    msg: "{{ password.stdout }}"

- name: Set root password for host
  set_fact:
    ansible_become_password: "{{ password.stdout }}"

- name: Copying tar file over (Never existed)
  synchronize:
    src: /tmp/emcgrab_Linux_v4.8.3.tar
    dest: /tmp
  delegate_to: "{{ delegate_host }}"
  register: test
  become: yes
Check whether you have already set the become password on the command line (e.g. with -e or --ask-become-pass) or in your configuration; a value coming from there can take precedence over the fact you set in the play.
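For instance, extra vars passed on the command line have the highest variable precedence in Ansible, so they silently shadow a set_fact. A minimal sketch of that failure mode (the -e value is hypothetical):

```yaml
# Hypothetical invocation that would produce exactly this symptom:
#   ansible-playbook site.yml -e ansible_become_password=wrongpass
# Extra vars (-e) have the highest precedence, so the set_fact below
# is shadowed and su keeps receiving "wrongpass":
- name: Set root password for host
  set_fact:
    ansible_become_password: "{{ password.stdout }}"
```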
I am building an ansible playbook to change multiple passwords on our network. And it works! It changes all of the passwords like it is supposed to. However, the last password it changes is the ansible password, which then throws an error because it tries to do the success check using the old Ansible password. Is there a way to tell Ansible to use a different password after changing the password? Below is the entire playbook.
---
# Update local administrator on all Windows systems
- hosts: windows
  gather_facts: yes
  ignore_errors: no
  tasks:
    - name: Set Windows Administrator password
      ansible.windows.win_user:
        name: administrator
        update_password: always
        password: "{{ new_win_admin_pass }}"

    - name: Set ansible password.
      ansible.windows.win_user:
        name: ansible
        update_password: always
        password: "{{ new_ansible_pass }}"
# Update all Linux accounts.
# This must always run last; once the ansible password is changed, no further changes can occur.
- hosts: rhel
  become: yes
  gather_facts: yes
  ignore_errors: no
  tasks:
    - name: Set Workstation admin password.
      ansible.builtin.user:
        name: admin
        update_password: always
        password: "{{ new_admin_pass | password_hash('sha512') }}"

    - name: Set Linux root password.
      ansible.builtin.user:
        name: root
        update_password: always
        password: "{{ new_root_pass | password_hash('sha512') }}"

    - name: Set ansible password.
      ansible.builtin.user:
        name: ansible
        update_password: always
        password: "{{ new_ansible_pw_var | password_hash('sha512') }}"
I'd try to do this with an async task and check back on the result with the new password:
- hosts: rhel
  become: yes
  tasks:
    - name: Set ansible password.
      ansible.builtin.user:
        name: ansible
        update_password: always
        password: "{{ new_ansible_pw_var | password_hash('sha512') }}"
      async: 15
      poll: 0
      register: change_ansible_password

    - name: Check ansible password change was successful
      vars:
        ansible_password: "{{ new_ansible_pw_var }}"
      async_status:
        jid: "{{ change_ansible_password.ansible_job_id }}"
      register: job_result
      until: job_result.finished
      retries: 15
      delay: 1

    - name: polite guests always clean after themselves when necessary (see doc)
      vars:
        ansible_password: "{{ new_ansible_pw_var }}"
      async_status:
        jid: "{{ change_ansible_password.ansible_job_id }}"
        mode: cleanup
My goal is to execute a shell command on different hosts with credentials given directly from the playbook. Until now I have tried two things.
Attempt 1:
- name: test
  shell:
    cmd: "whoami"
  with_items: "{{ lookup('file', '../files/deviceList.txt').splitlines() }}"
  delegate_to: "{{ item.split(';')[0] }}"
  args:
    ansible_connection: network_cli
    ansible_network_os: ios
    ansible_user: "{{ cred_ios_r_user }}"
    ansible_password: "{{ cred_ios_r_pass }}"
Attempt 2:
- name: set default credentials
  set_fact:
    ansible_connection: network_cli
    ansible_network_os: ios
    ansible_user: "{{ cred_ios_r_user }}"
    ansible_password: "{{ cred_ios_r_pass }}"
  with_items: "{{ lookup('file', '../files/deviceList.txt').splitlines() }}"
  delegate_to: "{{ item.split(';')[0] }}"

- name: test
  shell:
    cmd: "whoami"
  with_items: "{{ lookup('file', '../files/deviceList.txt').splitlines() }}"
  delegate_to: "{{ item.split(';')[0] }}"
The username in the variable {{ cred_ios_r_user }} is 'User1'. But when I look at the output from Ansible, it shows that the default SSH user "SSH_User" was used.
What do I need to change so that Ansible uses the given credentials?
The first thing you should check is the remote_user attribute of the task:
- name: DigitalOcean | Disallow root SSH access
  remote_user: root
  lineinfile:
    dest: /etc/ssh/sshd_config
    regexp: "^PermitRootLogin"
    line: "PermitRootLogin no"
    state: present
  notify: Restart ssh
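The notify above assumes a matching handler exists in the play; a minimal sketch (handler body assumed, not from the original answer):

```yaml
handlers:
  - name: Restart ssh
    service:
      name: ssh        # typically "sshd" on RHEL-family systems
      state: restarted
```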
The next thing is SSH keys. By default, connections use your own private key, but you can override this with extra args of the ansible-playbook command, with variables in the inventory file, or with job vars:
vars:
  ansible_ssh_user: "root"
  ansible_ssh_private_key_file: "{{ role_path }}/files/ansible.key"
So, if you want, you can set custom keys you have in vars or files:
- name: DigitalOcean | Add Pub key
  remote_user: root
  authorized_key:
    user: "{{ do_user }}"
    key: "{{ lookup('file', do_key_public) }}"
    state: present
You can find more information in my auto-droplet-digitalocean-role.
Your vars should be... vars available for your task, not arguments passed to the shell module. So in a nutshell:
- name: test
  vars:
    ansible_connection: network_cli
    ansible_network_os: ios
    ansible_user: "{{ cred_ios_r_user }}"
    ansible_password: "{{ cred_ios_r_pass }}"
  shell:
    cmd: "whoami"
  with_items: "{{ lookup('file', '../files/deviceList.txt').splitlines() }}"
  delegate_to: "{{ item.split(';')[0] }}"
Meanwhile, this looks like bad practice. You would normally use add_host to build an in-memory inventory with the correct hosts and credentials, and then run your task using the natural host loop of a play. Something like:
- name: manage hosts
  hosts: localhost
  gather_facts: false
  tasks:
    - name: add hosts to inventory
      add_host:
        name: "{{ item.split(';')[0] }}"
        groups:
          - my_custom_group
        ansible_connection: network_cli
        ansible_network_os: ios
        ansible_user: "{{ cred_ios_r_user }}"
        ansible_password: "{{ cred_ios_r_pass }}"
      with_items: "{{ lookup('file', '../files/deviceList.txt').splitlines() }}"

- name: run my command on all added hosts
  hosts: my_custom_group
  gather_facts: false
  tasks:
    - name: test command
      shell: whoami
Alternatively, you can create a full inventory from scratch from your csv file and use it directly, or use/develop an inventory plugin that will read your csv file directly (like in this example).
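For reference, the item.split(';')[0] in the snippets above assumes a deviceList.txt in which each line starts with the host address before the first semicolon; a hypothetical example:

```text
# ../files/deviceList.txt (hypothetical contents; only the first field is used)
10.1.2.3;switch-access-01
10.1.2.4;switch-access-02
```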
Our security group asked for all the information we gather from the hosts we manage with Ansible Tower. I want to run the setup command and put the output into a file, in a folder I can run ansible-cmdb against. I need to do this in a playbook because we have disabled root login on the hosts and only allow public/private key authentication of the Tower user. The private key is stored in the database, so I cannot run the setup command from the CLI and impersonate the Tower user.
EDIT: I am adding my code so it can be tested elsewhere.
gather_facts.yml
---
- hosts: all
  gather_facts: true
  become: true
  tasks:
    - name: Check for temporary dir and make it if it does not exist
      file:
        path: /tmp/ansible
        state: directory
        mode: 0755

    - name: Gather facts into a file
      copy:
        content: '{"ansible_facts": {{ ansible_facts | to_json }}}'
        dest: /tmp/ansible/{{ inventory_hostname }}
cmdb_gather.yml
---
- hosts: all
  gather_facts: no
  become: true
  tasks:
    - name: Fetch fact gather files
      fetch:
        src: /tmp/ansible/{{ inventory_hostname }}
        dest: /depot/out/
        flat: yes
CLI:
ansible -i devinventory -m setup --tree out/ all
This would basically look like the following (written on the spot, not tested):
- name: make the equivalent of "ansible somehosts -m setup --tree /some/dir"
  hosts: my_hosts
  vars:
    treebase: /path/to/tree/base
  tasks:
    - name: Make sure we have a folder for tree base
      file:
        path: "{{ treebase }}"
        state: directory
      delegate_to: localhost
      run_once: true

    - name: Dump facts host by host in our treebase
      copy:
        dest: "{{ treebase }}/{{ inventory_hostname }}"
        content: '{"ansible_facts": {{ ansible_facts | to_json }}}'
      delegate_to: localhost
OK, I figured it out. While @Zeitounator had a correct answer, I had the wrong idea. Checking the documentation for ansible-cmdb, I caught that the application can use the fact cache. I included a custom ansible.cfg in the project folder and added:
[defaults]
fact_caching = jsonfile
fact_caching_connection = /depot/out
ansible-cmdb was able to parse the output correctly using:
ansible-cmdb -f /depot/out > ansible_hosts.html
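For context: with jsonfile caching enabled, a fact-gathering run writes one JSON file per host, named after the inventory hostname, into the fact_caching_connection directory, which is the layout ansible-cmdb can consume. E.g. for inventory hosts web01 and web02 (hypothetical names) you would find:

```text
/depot/out/web01
/depot/out/web02
```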
I am trying to create switch backups, and I want to create dynamic files based on the config output and the switch hostname.
As an example, the configuration of switch1 should be saved in a file named hostname1, the config of switch2 in a file named hostname2, and so on.
I am getting the hostnames of the switches from a file.
My problem is that the config of switch1 gets saved in hostname1, hostname2, etc.
How can I loop over the variables correctly to get the right config in the right file?
My current playbook looks like this:
---
- hosts: cisco
  connection: local
  gather_facts: false
  vars:
    backup_path: /etc/ansible/tests
    cli:
      host: "{{ inventory_hostname }}"
      username: test
      password: test
  tasks:
    - name: show run on switches
      ios_command:
        commands: show running-config
        provider: "{{ cli }}"
      register: config

    - name: creating folder
      file:
        path: "{{ backup_path }}"
        state: directory
      run_once: yes

    - name: get hostnames
      become: yes
      shell: cat /etc/ansible/tests/hostname_ios.txt
      register: hostnames

    - name: copy config
      copy:
        content: "{{ config.stdout[0] }}"
        dest: "{{ backup_path }}/{{ item }}.txt"
      with_together: "{{ hostnames.stdout_lines }}"
...
Rely on Ansible's native host loop instead of inventing your own.
It's as simple as:
- name: show run on switches
  ios_command:
    commands: show running-config
    provider: "{{ cli }}"
  register: config

- name: copy config
  copy:
    content: "{{ config.stdout[0] }}"
    dest: "{{ backup_path }}/{{ inventory_hostname }}.txt"
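Since each host in the play runs the copy task with its own registered config, every switch ends up with its own file named after its inventory hostname, e.g. for inventory hosts switch1 and switch2 (hypothetical names):

```text
/etc/ansible/tests/switch1.txt
/etc/ansible/tests/switch2.txt
```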
Finally got it running.
I defined the names in my inventory as follows:
test-switch1 ansible_host=ip
Changed the host var in my playbook:
vars:
  backup_path: /etc/ansible/tests
  cli:
    host: "{{ ansible_host }}"
    username: test
    password: test
And then executing my tasks:
- name: show run on switches
  ios_command:
    commands: show running-config
    provider: "{{ cli }}"
  register: config

- name: copy config
  copy:
    content: "{{ config.stdout[0] }}"
    dest: "{{ backup_path }}/{{ inventory_hostname }}.txt"
I want to write an Ansible playbook where we can provide a username and Ansible will display the authorized keys for that user. The path to the authorized keys is {{ user_home_dir }}/.ssh/authorized_keys.
I tried the shell module like below:
---
- name: Get authorized_keys
  shell: cat "{{ user_home_dir }}"/.ssh/authorized_keys
  register: read_key

- name: Prints out authorized_key
  debug: var=read_key.stdout_lines
The problem is that it shows me the file inside /home/ansible/.ssh/authorized_keys. "ansible" is the user that I am using to connect to the remote machine.
Below is vars/main.yml
---
authorized_user: username
user_home_dir: "{{ lookup('env','HOME') }}"
Any idea? FYI I am new to ansible and tried this link already.
In your vars file, you have:
user_home_dir: "{{ lookup('env','HOME') }}"
Thanks to Konstantin for pointing it out: all lookups are executed on the control host. So the lookup of env HOME will always resolve to the home directory of the user who invoked ansible.
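A quick way to see this for yourself (illustrative task, not from the question): the same lookup prints the controller's HOME even when the play targets remote hosts.

```yaml
- name: lookups are evaluated on the control node
  debug:
    msg: "{{ lookup('env', 'HOME') }}"   # the controller's HOME, not the target's
```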
You could use Ansible's getent module to retrieve a user's info. The below snippet should help:
---
- hosts: localhost
  connection: local
  remote_user: myuser
  gather_facts: no
  vars:
    username: myuser
  tasks:
    - name: get user info
      getent:
        database: passwd
        key: "{{ username }}"
      register: info

    # index 4 of the passwd entry is the user's home directory
    - shell: "echo {{ getent_passwd[username][4] }}"
The below worked. We need become too; otherwise we get a permission denied error.
---
- hosts: local
  remote_user: ansible
  gather_facts: no
  become: yes
  become_method: sudo
  vars:
    username: myuser
  tasks:
    - name: get user info
      getent:
        split: ":"
        database: passwd
        key: "{{ username }}"

    - name: Get authorized_keys
      shell: cat "{{ getent_passwd[username][4] }}"/.ssh/authorized_keys
      register: read_key

    - name: Prints out authorized_key
      debug: var=read_key.stdout_lines