I am trying to run a task executing the known_hosts module with a list of key entries. The problem is that I keep getting the following error, even though the variable has data when using debug.
The task includes an option with an undefined variable. The error was: 'item' is undefined
I have the following task, which registers a variable with the ssh-keyscan output:
- name: keyscan platform hosts
  shell: "ssh-keyscan ssh.{{ item }}"
  register: platform_ssh_host_keys
  loop:
    - "one.example.com"
    - "two.example.com"
When I run the following debug task, I get the stdout, which contains the key:
- name: debug
  debug:
    var: item.stdout
  with_items: "{{ platform_ssh_host_keys.results }}"
But as soon as I run it with known_hosts, it says item is undefined:
- name: configure known_hosts
  known_hosts:
    path: "~/.ssh/known_hosts"
    name: "ssh.{{ item.item }}"
    key: "{{ item.stdout }}"
    state: present
    loop: "{{ platform_ssh_host_keys.results }}"
I don't get how item can be defined in debug but not known_hosts task.
🤦‍♂️ Well, after I posted this I immediately realized the problem: I had loop indented as a parameter of the known_hosts module rather than as a keyword of the task, so the loop never actually ran and item was never defined. So the problem was indentation.
This fixed it all.
- name: configure known_hosts
  known_hosts:
    path: "~/.ssh/known_hosts"
    name: "ssh.{{ item.item }}"
    key: "{{ item.stdout }}"
    state: present
  loop: "{{ platform_ssh_host_keys.results }}"
Using Ansible 2.9.12
Question: How do I configure Ansible to ensure the contents of a file are equal across at least 3 hosts, when the file is present on at least one host?
Imagine there are 3 hosts.
Host 1 does not have /file.txt.
Host 2 has /file.txt with contents hello.
Host 3 has /file.txt with contents hello.
Before the play is run, I am unaware whether the file is present or not. So the file could exist on host1, host2, or host3. But the file exists on at least one of the hosts.
How would I ensure that each time Ansible runs, the files across the hosts are equal? So in the end, Host 1 has the same file with the same contents as Host 2 and Host 3.
I'd like this to be determined dynamically, instead of specifying the host names or group names, e.g. when: inventory_hostname == host1.
I am not expecting a check to see whether the contents of host 2 and 3 are equal.
I do, however, want this to be set up in an idempotent fashion.
The play below does the job, I think:
shell> cat pb.yml
- hosts: all
  tasks:
    - name: Get status.
      stat:
        path: /file.txt
      register: status

    - block:
        - name: Create dictionary status.
          set_fact:
            status: "{{ dict(keys|zip(values)) }}"
          vars:
            keys: "{{ ansible_play_hosts }}"
            values: "{{ ansible_play_hosts|
                        map('extract', hostvars, ['status','stat','exists'])|
                        list }}"

        - name: Fail. No file exists.
          fail:
            msg: No file exists
          when: status.values()|list is not any

        - name: Set reference to first host with file present.
          set_fact:
            reference: "{{ status|dict2items|
                           selectattr('value')|
                           map(attribute='key')|
                           first }}"

        - name: Fetch file.
          fetch:
            src: /file.txt
            dest: /tmp
          delegate_to: "{{ reference }}"
          run_once: true

        - name: Copy file if not exist
          copy:
            src: "/tmp/{{ reference }}/file.txt"
            dest: /file.txt
          when: not status[inventory_hostname]
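With the three example hosts above, the status dictionary would end up as {host1: false, host2: true, host3: true} (my illustration of the intermediate data, given the scenario in the question), so reference resolves to host2, the first host where the file exists.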
But this doesn't check that the existing files are in sync. It would be safer to sync all hosts, I think:
        - name: Synchronize file
          synchronize:
            src: "/tmp/{{ reference }}/file.txt"
            dest: /file.txt
          when: not status[inventory_hostname]
Q: "FATAL. could not find or access '/tmp/test-multi-01/file.txt on the Ansible controller. However, folder /tmp/test-multi-03 is present with the file.txt in it."
A: There is a problem with the fetch module when the task is delegated to another host. When the TASK [Fetch file.] is delegated to test-multi-01 which is localhost in this case changed: [test-multi-03 -> 127.0.0.1] the file will be fetched from test-multi-01 but will be stored in /tmp/test-multi-03/file.txt. The conclusion is, the fetch module ignores delegate_to when it comes to creating host-specific directories (not reported yet).
As a workaround, it's possible to set flat: true and store the files in a specific directory. For example, add the variable sync_files_dir with the directory, set flat: true in fetch, and use the directory both to fetch and to copy the file:
- hosts: all
  vars:
    sync_files_dir: /tmp/sync_files
  tasks:
    - name: Get status.
      stat:
        path: /file.txt
      register: status

    - block:
        - name: Create dir for files to be fetched and synced
          file:
            state: directory
            path: "{{ sync_files_dir }}"
          delegate_to: localhost

        - name: Create dictionary status.
          set_fact:
            status: "{{ dict(keys|zip(values)) }}"
          vars:
            keys: "{{ ansible_play_hosts }}"
            values: "{{ ansible_play_hosts|
                        map('extract', hostvars, ['status','stat','exists'])|
                        list }}"

        - debug:
            var: status

        - name: Fail. No file exists.
          fail:
            msg: No file exists
          when: status.values()|list is not any

        - name: Set reference to first host with file present.
          set_fact:
            reference: "{{ status|dict2items|
                           selectattr('value')|
                           map(attribute='key')|
                           first }}"

        - name: Fetch file.
          fetch:
            src: /file.txt
            dest: "{{ sync_files_dir }}/"
            flat: true
          delegate_to: "{{ reference }}"
          run_once: true

        - name: Copy file if not exist
          copy:
            src: "{{ sync_files_dir }}/file.txt"
            dest: /file.txt
          when: not status[inventory_hostname]
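Assuming the play above is saved as pb.yml and the three test hosts live in an inventory file named hosts (both file names are my assumption), a run would look like:
shell> ansible-playbook -i hosts pb.yml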
We can achieve this by fetching the file from the hosts where it exists. The file(s) will then be available on the control machine. However, if the source file exists on more than one node, there will be no single source of truth.
Consider an inventory:
[my_hosts]
host1
host2
host3
Then the play below can fetch the file and use it to copy to all nodes.
# Fetch the file from remote host if it exists
- hosts: my_hosts
  tasks:
    - stat:
        path: /file.txt
      register: my_file

    - fetch:
        src: /file.txt
        dest: /tmp/
      when: my_file.stat.exists

    - find:
        paths:
          - /tmp
        patterns: file.txt
        recurse: yes
      register: local_file
      delegate_to: localhost

    - copy:
        src: "{{ local_file.files[0].path }}"
        dest: /tmp
If multiple hosts had this file, it would end up in /tmp/{{ ansible_host }}. Then, as we won't have a single source of truth, our best estimate is to use the first file found and apply it to all hosts.
Well, I believe the get_url module is pretty versatile - it allows for local file paths or paths from a web server. Try it and let me know.
- name: Download files on all hosts
  hosts: all
  tasks:
    - name: Download file from a file path
      get_url:
        url: file:///tmp/file.txt
        dest: /tmp/
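One thing to keep in mind (my addition, not from the original answer): get_url runs on each target host, so a file:// URL refers to the target's own filesystem. To distribute a file that exists on only one host, you would still need to fetch it to the controller or delegate the task first.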
Edited answer:
(From documentation: For the synchronize module, the “local host” is the host the synchronize task originates on, and the “destination host” is the host synchronize is connecting to)
- name: Check that the file exists
  stat:
    path: /etc/file.txt
  register: stat_result

- name: copy the file to other hosts by delegating the task to the source host
  synchronize:
    src: path/host
    dest: path/host
  delegate_to: my_source_host
  when: stat_result.stat.exists
I'm trying to automate the creation of the smart_[diskdevice] links to
/usr/share/munin/plugins/smart_
during the installation of the munin node via Ansible.
The code here works partially, except when there is no disk device to link on the target machine. Then I get a fatal failure with
{"msg": "with_dict expects a dict"}
I've reviewed the Ansible documentation and searched the web for this problem. As I understand it, the whole file task should not be executed at all if the when condition is not met.
---
- name: Install Munin Node
  any_errors_fatal: true
  block:
    ...
    # drives config
    - file:
        src: /usr/share/munin/plugins/smart_
        dest: /etc/munin/plugins/smart_{{ item.key }}
        state: link
      with_dict: "{{ ansible_devices }}"
      when: "item.value.host.startswith('SATA')"
      notify:
        - restart munin-node
On targets with a SATA drive, the code works. Drives like "sda" are found and the links are created. Loop and other soft devices are ignored (as intended).
Only on a Raspberry Pi with no SATA drive at all do I get the fatal failure.
You are using the with_dict option to set the loop. This sets the value of the item variable for each iteration to a dictionary with two keys:
key: The name of the current key in the dict.
value: The value of the current key in the dict.
You are then running the when option, which checks the item variable on each iteration. So check whether that is the behavior you want.
Regarding your error, it is thrown because, for some reason, ansible_devices is not a dict, as the error says. And Ansible checks the validity of the with_dict type before resolving the when condition.
Check the following example:
---
- name: Diff test
  hosts: local
  connection: local
  gather_facts: no
  vars:
    dict:
      value: True
      name: "dict"
  tasks:
    - debug: var=item
      when: dict.value == False
      with_dict: '{{ dict }}'

    - debug: var=item
      when: dict.value == True
      with_dict: '{{ dict }}'

    - debug: var=item
      when: dict.value == False
      with_dict: "Not a dict"
The first two tasks will succeed because they have a valid dict in the with_dict option and a correct condition in the when option. The last one will fail because the with_dict value has the wrong type, even though the when condition resolves correctly and should, in principle, skip the task.
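If the root cause is that ansible_devices is missing or undefined on some targets, one possible guard (my suggestion, not something from the original question) is to coerce the loop source into a dict with a default, so the loop simply iterates zero times there:
- file:
    src: /usr/share/munin/plugins/smart_
    dest: /etc/munin/plugins/smart_{{ item.key }}
    state: link
  # default({}) ensures with_dict always receives a dict, even on hosts
  # where the fact is not set, so the type check cannot fail
  with_dict: "{{ ansible_devices | default({}) }}"
  when: "item.value.host.startswith('SATA')"
  notify:
    - restart munin-node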
I hope it helps.
I am trying to make a playbook that loops over the files in a directory and then uses those files in another playbook.
My playbook as it is now:
---
- name: Run playbooks for Raji's testing
  register: scripts
  roles:
    - prepare_edge.yml
    - prepare_iq.yml
    - scriptor.yml
  with_fileglob: ~/ansible/test_scripts/*
When I run this it doesn't work. I've tried register: scripts to make a variable to reference inside scriptor.yml, but again the playbook fails. Any advice or help you can provide would be much appreciated.
Thanks!
P.S. I am super new to Ansible
Here is the scriptor.yml:
---
- hosts: all
  tasks:
    - name: Create directory
      command: mkdir /some/path/
    - name: If file is a playbook
      copy:
        src: "{{ scripts }}"
        dest: /some/path/
      when: "{{ scripts }}" == "*.yml"
    - name: if file is a script
      shell: . ${{ scripts }}
      when: "{{ scripts }}" == "*.sh"
P.P.S. prepare_edge.yml and prepare_iq.yml don't reference anything and just need to be called in the loop before scriptor.yml.
Here is the error:
ERROR! 'register' is not a valid attribute for a Play
The error appears to have been in '/Users/JGrow33/ansible/raji_magic_playbook.yml': line 3, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: Run playbooks for Raji's testing
^ here
The error message you're getting is telling you that you can't use register at the play level.
You can accomplish what you're looking for by doing something like the following in your scriptor.yml file:
- hosts: all
  tasks:
    - name: Create directory
      command: mkdir /some/path/
    - name: If file is a playbook
      copy:
        src: "{{ item }}"
        dest: /some/path/
      with_fileglob: ~/ansible/test_scripts/*.yml
    - name: if file is a script
      shell: . {{ item }}
      with_fileglob: ~/ansible/test_scripts/*.sh
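One caveat (my addition, not from the original answer): with_fileglob expands the pattern on the control machine, so sourcing {{ item }} in a shell task only works as written when the play targets localhost; for remote hosts you would copy the scripts over first and execute the copied paths.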
References
How can Ansible "register" in a variable the result of including a playbook?
I'm trying to use Ansible (version 2.1.2.0) to create named ssh access across our network of servers. Running Ansible from a jump box, I'm creating a set of users and a private/public key pair for each with the user module:
- user:
    name: "{{ item }}"
    shell: /bin/bash
    group: usergroup
    generate_ssh_key: yes
    ssh_key_comment: "ansible-generated for {{ item }}"
  with_items: "{{ list_of_usernames }}"
From there I would like to copy each public key across to the authorized_keys file on the remote servers. I'm using the authorized_key module to look up and copy the user's generated public key as an authorized_key on the remote servers:
- authorized_key:
    user: "{{ item }}"
    state: present
    key: "{{ lookup('file', '/home/{{ item }}/.ssh/id_rsa.pub') }}"
  with_items:
    - "{{ list_of_usernames }}"
What I'm finding is that the lookup fails, as it is unable to access the pub file within the user's .ssh folder. It runs happily, however, if I run Ansible as the root user (which for me is not an option I'm willing to take).
fatal: [<ipAddress>]: FAILED! => {"failed": true, "msg": "{{ lookup('file', '/home/testuser/.ssh/id_rsa.pub') }}: the file_name '/home/testuser/.ssh/id_rsa.pub' does not exist, or is not readable"}
Is there a better way to do this other than running ansible as root?
Edit: Sorry, I had forgotten to mention that I'm running with become: yes.
Edit #2: Below is a condensed playbook of what I'm trying to run. Of course, in this instance it'll add the authorized_key to the localhost, but it shows the error I'm facing:
---
- hosts: all
  become: yes
  vars:
    active_users: ['user1','user2']
  tasks:
    - group:
        name: "users"
    - user:
        name: "{{ item }}"
        shell: /bin/bash
        group: users
        generate_ssh_key: yes
      with_items: "{{ active_users }}"
    - authorized_key:
        user: "{{ item }}"
        state: present
        key: "{{ lookup('file', '/home/{{ item }}/.ssh/id_rsa.pub') }}"
      become: yes
      become_user: "{{ item }}"
      with_items:
        - "{{ active_users }}"
OK, the problem is with the lookup plugin.
It is executed on the Ansible control host with the permissions of the user that runs ansible-playbook, and become: yes doesn't elevate plugins' permissions.
To overcome this, capture the result of the user task and use its output in further tasks:
- user:
    name: "{{ item }}"
    shell: /bin/bash
    group: docker
    generate_ssh_key: yes
    ssh_key_comment: "ansible-generated for {{ item }}"
  with_items:
    - ansible5
    - ansible6
  register: new_users
  become: yes

- debug: msg="user {{ item.item }} pubkey {{ item.ssh_public_key }}"
  with_items: "{{ new_users.results }}"
Although you will need to delegate some of these tasks, the idea is the same.
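For example, a follow-up task could reuse the registered key material directly, so no file lookup is needed at all (a sketch based on the registered results above, not part of the original answer):
- authorized_key:
    user: "{{ item.item }}"
    state: present
    # ssh_public_key is returned by the user module when generate_ssh_key
    # is set, so nothing has to be read from /home/<user>/.ssh on disk
    key: "{{ item.ssh_public_key }}"
  with_items: "{{ new_users.results }}"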
On most Linux/Unix machines only two accounts have access to /home/testuser/.ssh/id_rsa.pub: root and testuser. So if you would like to access those files, you need to be either root or testuser.
You can use privilege escalation with become. You said you don't want to run Ansible as root, which is perfectly fine. You haven't mentioned whether the user you are running Ansible with has sudo access or not. If it does, and using sudo is fine for you, then you can simply do:
- authorized_key:
    user: "{{ item }}"
    state: present
    key: "{{ lookup('file', '/home/{{ item }}/.ssh/id_rsa.pub') }}"
  with_items:
    - "{{ list_of_usernames }}"
  become: yes
This by default will run these commands as root using sudo, while keeping the rest of the tasks running as the non-root user. This is also the preferred way of doing runs with Ansible.
If you don't want this, and you want to keep your user as root-free as possible, then you need to run the command as testuser (and any other user you want to modify). This means you still need to patch up the sudoers file to allow your Ansible user to transform into any of these users (which can also open up a few security issues, albeit not as many as being able to run anything as root), after which you can do:
- authorized_key:
    user: "{{ item }}"
    state: present
    key: "{{ lookup('file', '/home/{{ item }}/.ssh/id_rsa.pub') }}"
  with_items:
    - "{{ list_of_usernames }}"
  become: yes
  become_user: "{{ item }}"
There are some caveats to this approach, so you might want to read the full Privilege Escalation page in Ansible's documentation, especially the section on Unprivileged Users, before you try this.
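For illustration, granting that kind of access might look like the sketch below ("deploy", "user1" and "user2" are hypothetical names for the connection user and the managed users; this is my assumption, not part of the original answer):
- name: Allow the connection user to become the managed users
  copy:
    dest: /etc/sudoers.d/deploy-become-users
    # "deploy" is a hypothetical connection user; adjust to your setup
    content: |
      deploy ALL=(user1,user2) NOPASSWD: ALL
    mode: "0440"
    # let visudo syntax-check the file before it is installed
    validate: /usr/sbin/visudo -cf %s
  become: yes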
I want to pass a variable to a notification handler, but can't find how to do it anywhere: not here on SO, not in the docs, and not in the issues in the GitHub repo. What I'm doing is deploying multiple webapps, and when the code for one of those webapps is changed, it should restart the service for that webapp.
From this SO question, I got this to work, somewhat:
- hosts: localhost
  tasks:
    - name: "task 1"
      shell: "echo {{ item }}"
      register: "task_1_output"
      with_items: [a, b]

    - name: "task 2"
      debug:
        msg: "{{ item.item }}"
      when: item.changed
      with_items: task_1_output.results
(Put it in test.yml and run it with ansible-playbook test.yml -c local.)
But this registers the result of the first task and conditionally loops over that in the second task. My problem is that it gets messy when you have two or more tasks that need to notify the second task! For example, restart the web service if either the code was updated or the configuration was changed.
AFAICT, there's no way to pass a variable to a handler. That would cleanly fix it for me. I found some issues on GitHub where other people run into the same problem, and some syntaxes are proposed, but none of them actually works.
Including a sub-playbook won't work either, because using with_items together with include was deprecated.
In my playbooks, I have a site.yml that lists the roles of a group, then in the group_vars for that group I define the list of webapps (including the versions) that should be installed. This seems correct to me, because this way I can use the same playbook for staging and production. But maybe the only solution is to define the role multiple times, and duplicate the list of roles for staging and production.
So what is the wisdom here?
Variables in Ansible are global, so there is no reason to pass a variable to a handler. If you are trying to parameterize a handler by using a variable in the handler's name, you won't be able to do that in Ansible.
What you can do is create a handler that loops over a list of services easily enough. Here is a working example that can be tested locally:
- hosts: localhost
  tasks:
    - file: >
        path=/tmp/{{ item }}
        state=directory
      register: files_created
      with_items:
        - one
        - two
      notify: some_handler
  handlers:
    - name: "some_handler"
      shell: "echo {{ item }} has changed!"
      when: item.changed
      with_items: files_created.results
I finally solved it by splitting the apps out over multiple instances of the same role. This way, the handler in the role can refer to variables that are defined as role variables.
In site.yml:
- hosts: localhost
  roles:
    - role: something
      name: a
    - role: something
      name: b
In roles/something/tasks/main.yml:
- name: do something
shell: "echo {{ name }}"
notify: something happened
- name: do something else
shell: "echo {{ name }}"
notify: something happened
In roles/something/handlers/main.yml:
- name: something happened
debug:
msg: "{{ name }}"
Seems a lot less hackish than the first solution!
To update jarv's answer above: Ansible 2.5 replaced with_items with loop. When getting results, item by itself will not work; you will need to get the attribute explicitly, e.g. item.name.
- hosts: localhost
  tasks:
    - file: >
        path=/tmp/{{ item }}
        state=directory
      register: files_created
      loop:
        - one
        - two
      notify: some_handler
  handlers:
    - name: "some_handler"
      shell: "echo {{ item.name }} has changed!"
      when: item.changed
      loop: files_created.results
I got mine to work like this - I had to add some curly brackets
tasks:
  - name: Enable security, backport and non-security upgrades
    lineinfile:
      path: /etc/apt/apt.conf.d/50unattended-upgrades
      regexp: '^[^"//"]*"\${distro_id}:\${distro_codename}-{{ item }}";'
      line: '        "${distro_id}:${distro_codename}-{{ item }}";'
      insertafter: "Unattended-Upgrade::Allowed-Origins {"
      state: present
    register: aenderung
    loop:
      - updates
      - security
      - backports
    notify: Remove commented-out lines

handlers:
  - name: Remove commented-out lines
    lineinfile:
      path: /etc/apt/apt.conf.d/50unattended-upgrades
      regexp: '^\/\/.*{{ item.item }}";.*'
      state: absent
    when: item.changed
    loop: "{{ aenderung.results }}"