In my playbook, the first task finds some files and, if any are found, registers them in a variable; the second task removes the files via a command passed to the shell. The issue is that the second task always errors, even when the variable cleanup is set to false. This is the playbook:
tasks:
  - name: Find tables
    find:
      paths: "{{ file_path }}"
      age: "1d"
      recurse: yes
      file_type: directory
    when: cleanup
    register: cleanup_files

  - name: cleanup tables
    shell: /bin/cleanup {{ item.path | basename }}
    with_items: "{{ cleanup_files.files }}"
    when: "cleanup or item is defined"
When cleanup is set to false, the first task is skipped, but the second errors with: "failed": true, "msg": "'dict object' has no attribute 'files'".
item shouldn't be defined, as the task above didn't run, so shouldn't the second task still be skipped since cleanup is set to false?
I've noticed that if I change the or to and in the second task, it skips the task fine. I'm not sure why.
Change the playbook to this code (the changes are in the second task); the logic behind the changes follows the code:
tasks:
  - name: Find tables
    find:
      paths: "/tmp"
      age: "1000d"
      recurse: yes
      file_type: directory
    when: cleanup
    register: cleanup_files

  - debug: var=cleanup_files

  - name: cleanup tables
    debug: msg="file= {{ item.path | basename }}"
    when: "cleanup_files.files is defined"
    with_items: "{{ cleanup_files.files }}"
When you execute with cleanup=false, the find task still registers its result to cleanup_files, but you will notice it has no cleanup_files.files attribute (a skipped task registers only its skip status). When you execute with cleanup=true, you get cleanup_files.files; it will be an empty list if no files meet the find criteria.
So the second task only needs to know whether cleanup_files.files is defined; if it is defined, it can proceed. If no files met the criteria, the with_items clause handles that properly (no files => no iterations).
I have added a debug task so you can run the playbook and inspect the structure of cleanup_files in both cases: cleanup=true and cleanup=false.
I think you need to change the second WHEN to
when: "cleanup and cleanup_files.files is defined"
You might also consider making cleanup a tag.
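Putting it together, a minimal sketch of the corrected second task (assuming the original /bin/cleanup command; the default([]) filter is an extra guard of mine, not part of the answer above, so the loop templates cleanly even when the find task was skipped):
- name: cleanup tables
  shell: /bin/cleanup {{ item.path | basename }}
  with_items: "{{ cleanup_files.files | default([]) }}"   # default([]) is my addition
  when: cleanup and cleanup_files.files is defined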
I am trying to make a playbook that will check whether something exists and, depending on the result, execute a command.
I have simplified the problem, but here is the gist of it.
I have a list, called "list":
sample1
sample2
sample3
sample4
I will start by checking whether a directory with each of these names exists.
- name: Status
  shell: ls -l | grep {{ item }} | grep -v grep | wc -l
  loop: "{{ list }}"
  register: status
Then I'll determine whether each folder exists or not (not sure if I need this step...):
- debug:
    msg: "{{ item.item }} exists"
  loop: "{{ status.results }}"
  when: item.stdout != "0"
  register: check

- debug:
    msg: "{{ item.item }} does not exist"
  loop: "{{ status.results }}"
  when: item.stdout == "0"
  register: check
The next step is where I am stuck... I can't really find the right syntax or way to do this. Anyway, I want to check whether each folder exists, and if it does not, I want to create it.
- name: creation
  shell: mkdir {{ item }}
  loop: "{{ list }}"
  when: check.results.item.stdout != "0"
As it needs to check every result from the list, my condition is based on check.results and not on the list defined in the loop.
I don't really know if this can be written as such.
This is a very common misunderstanding. Ansible is about describing the expected state of the remote system and is generally idempotent for most of its modules. Running the same task an infinite number of times will lead to the same result on the target. In other words, don't check if a directory exists to later give the order to create it. Just describe the expected state: the directory must exist.
This can be done in a single task with the ansible.builtin.file module
- name: Make sure needed directories exist
  ansible.builtin.file:
    path: "{{ item }}"
    state: directory
  loop: "{{ list }}"
As the task name in my example suggests, this will create the needed dirs for those that do not exist (reporting CHANGED) and leave the others alone (reporting OK).
Tip: whenever you are about to use shell, hold on a minute and check the documentation for a module doing the job. In most cases there is one. For example, you can use the ansible.builtin.find module rather than looking for files with the shell.
In case you still want to apply this (bad) solution, the answer was already in the previous version of my answer and is also (again) in your debug task. You just have to apply the same recipe.
- name: creation
  shell: mkdir {{ item.item }}
  loop: "{{ status.results }}"
  when: item.stdout != "0"
Note that this would work with the correct module as well, but it does not really make sense:
- name: Make sure missing directories are created
  ansible.builtin.file:
    path: "{{ item.item }}"
    state: directory
  loop: "{{ status.results }}"
  when: item.stdout != "0"
Both of these examples of course require running your previous task first (which should be refactored to the find module, as stated earlier).
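For completeness, a hedged sketch of what that find-based refactor could look like (the parent directory path is an assumption; note the registered structure exposes status.files rather than the status.results of the shell loop):
- name: Check which directories exist
  ansible.builtin.find:
    paths: /path/to/parent    # hypothetical parent directory
    file_type: directory
    patterns: "{{ list }}"
  register: status
Though, as stressed above, once ansible.builtin.file describes the target state, this existence check becomes unnecessary.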
I want to set a playbook-level environment variable, but only after executing a couple of tasks. I have found that I can define a playbook-level environment variable before the definition of any tasks, or task-level environment variables. But I haven't found how to set up an environment variable that can be used by all tasks following a given task.
- name: server properties
  hosts: kafka_broker
  vars:
    ansible_ssh_extra_args: "-o StrictHostKeyChecking=no"
    ansible_host_key_checking: false
    date: "{{ lookup('pipe', 'date +%Y%m%d-%H%M%S') }}"
    copy_to_dest: "/export/home/kafusr/kafka/secrets"
    server_props_loc: "/etc/kafka"
    secrets_props_loc: "{{ server_props_loc }}/secrets"
  environment:
    CONFLUENT_SECURITY_MASTER_KEY: "{{ extract_key2 }}"
  tasks:
    - name: Create a directory if it does not exist
      file:
        path: "{{ copy_to_dest }}"
        state: directory
        mode: '0755'

    - name: Find files from "{{ server_props_loc }}"
      find:
        paths: /etc/kafka/
        patterns: "server.properties*"
        # ... the rest of the task
      register: etc_kafka_server_props

    - name: Find files from "{{ secrets_props_loc }}"
      find:
        paths: /etc/kafka/secrets
        patterns: "*"
        # ... the rest of the task
      register: etc_kafka_secrets_props

    - name: Copy the files
      copy:
        src: "{{ item.path }}"
        dest: "{{ copy_to_dest }}"
        remote_src: yes
      loop: "{{ etc_kafka_server_props.files + etc_kafka_secrets_props.files }}"

    - name: set masterkey content value
      set_fact:
        contents: "{{ lookup('file', '/export/home/kafusr/kafka/secrets/masterkey.txt') }}"
        extract_key2: "{{ contents.split('\n').2.split('|').2 | trim }}"
I want to set CONFLUENT_SECURITY_MASTER_KEY after the set_fact task.
Is it possible to set a playbook-level environment variable after defining some tasks?
Thank you
UPDATE
Initially, when I executed the playbook as originally defined, I got the error
fatal: [kafkaserver1]: FAILED! => {"msg": "The field 'environment' has an invalid value,
which includes an undefined variable. The error was: 'extract_key2' is undefined"}
which was expected, as the variable extract_key2 was not set before the files were copied to the desired directory.
Following @Zeitounator's suggestion, I added a default to the environment variable's definition:
CONFLUENT_SECURITY_MASTER_KEY: "{{ extract_key2 | default('') }}"
I now get a different error:
TASK [set masterkey content value] ******************** fatal: [kafkaserver1]: FAILED! =>
{"msg": "The task includes an option with an undefined variable. The error was: 'contents' is undefined\n\n
The error appears to be in '/export/home/kafuser/tmp/so-71538207-question.yml': line 43, column 7, but may\n
be elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n
- name: set masterkey content value\n ^ here\n"}
I am getting this on all 3 brokers in the console, and I have checked that the file exists. I did a cat on the file, copying the path from the error to make sure there was no typo, and its contents are displayed on the console.
Update 2
I am trying to figure out how to use slurp to get the info, with the same approach as @Zeitounator's example using lookup.
This is what I am trying. The current definition is, of course, erroneous; I just wanted to show what I am trying to do. Can it be done with slurp, and am I on the right path?
environment:
  CONFLUENT_SECURITY_MASTER_KEY: >-
    {{
      (
        ((slurp: src: /export/home/z8tpush/kafka/secrets/masterkey.txt)['content'] | b64decode).split('\n').2.split('|').2 | trim
      )
    }}
@Zeitounator, will you be able to direct me to an example where a slurp or fetch module is used to set up an environment variable, where the value gets updated after the tasks that create the file have executed, similar to what you have shown with the lookup filter? I would really appreciate it.
Note:
Ultimately, I want to use Ansible to create a new Kafka user using Confluent's CLI commands (via the shell or command module), verify it in my directory, and once satisfied, encrypt the security.properties file using the master key and copy it to the appropriate location where Confluent is installed.
As already mentioned, you can set environment variables globally with Ansible Configuration Settings, or set the remote environment in a task.
Regarding your question
I haven't found how can I set-up an environment variable that can be used by all tasks following a task.
You can set the environment at block level, a logical group of tasks, too.
Setting the remote environment: "When you set a value with environment: at the play or block level, it is available only to tasks within the play or block that are executed by the same user."
This means you would need to define a block for the next tasks
- name: Block of next task(s)
  block:
    - name: Next task
      ...
  environment:
    CONFLUENT_SECURITY_MASTER_KEY: "{{ extract_key2 }}"
Regarding your question
Is it possible to set playbook level environment variable, but after defining some tasks?
No, not at that level in that run, as the playbook is already running.
Another option might be to distribute the tasks in question into another role, playbook, or task file and include_* it.
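As an illustration of that option (a hedged sketch: the task file name is hypothetical, and import_tasks is used because keywords on a static import are copied to every imported task, unlike include_tasks):
- name: set masterkey content value
  set_fact:
    extract_key2: "{{ lookup('file', '/export/home/kafusr/kafka/secrets/masterkey.txt').split('\n').2.split('|').2 | trim }}"

- name: Run the remaining tasks with the key in their environment
  import_tasks: post_key_tasks.yml    # hypothetical task file
  environment:
    CONFLUENT_SECURITY_MASTER_KEY: "{{ extract_key2 }}"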
You cannot set_fact a var depending on another var declared in the same set_fact task. Moreover, there is absolutely no need for set_fact here, as long as your relevant tasks can live with an empty environment var until it is fully defined. The following environment declaration (untested) should work and return the key for every task running after your file exists.
environment:
  CONFLUENT_SECURITY_MASTER_KEY: >-
    {{
      (
        (
          lookup('file', '/export/home/kafusr/kafka/secrets/masterkey.txt', errors='ignore')
          | default('')
        ).split('\n').2
        | default('')
      ).split('|').2
      | default('')
      | trim
    }}
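For the slurp follow-up in the question: slurp is a module, not a lookup, so it cannot be inlined in an environment expression like that. A hedged, untested sketch of the alternative is to register the slurp result first and attach the environment to a block of the tasks that need it:
- name: Read master key from the remote host
  ansible.builtin.slurp:
    src: /export/home/kafusr/kafka/secrets/masterkey.txt
  register: masterkey_raw

- name: Tasks that need the key
  block:
    - name: Next task
      ...
  environment:
    CONFLUENT_SECURITY_MASTER_KEY: >-
      {{ ((masterkey_raw.content | b64decode).split('\n').2).split('|').2 | trim }}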
I'm trying to automate the creation of the smart_[diskdevice] links to /usr/share/munin/plugins/smart_ during the installation of the munin node via Ansible.
The code here works, except when there is no disk device to link on the target machine; then I get a fatal failure with
{"msg": "with_dict expects a dict"}
I've reviewed the Ansible documentation and tried to search for the problem on the web. To my understanding, the whole file task should not be executed if the when statement fails.
---
- name: Install Munin Node
  any_errors_fatal: true
  block:
    ...
    # drives config
    - file:
        src: /usr/share/munin/plugins/smart_
        dest: /etc/munin/plugins/smart_{{ item.key }}
        state: link
      with_dict: "{{ ansible_devices }}"
      when: "item.value.host.startswith('SATA')"
      notify:
        - restart munin-node
On targets with a SATA drive, the code works: drives like sda are found and the links are created, while loop and other soft devices are ignored (as intended).
Only on a Raspberry Pi with no SATA drive at all do I get the fatal failure.
You are using the with_dict option to set the loop. This sets the value of the item variable for each iteration as a dictionary with two keys:
key: the name of the current key in the dict.
value: the value of that key in the dict.
You are then running the when option to check the item variable on each iteration, so verify that this is the behavior you want.
Regarding your error, it is thrown because, for some reason, ansible_devices is not a dict, as the error says, and Ansible checks the validity of the with_dict type before resolving the when condition.
Check the following example:
---
- name: Diff test
  hosts: local
  connection: local
  gather_facts: no
  vars:
    dict:
      value: True
      name: "dict"
  tasks:
    - debug: var=item
      when: dict.value == False
      with_dict: '{{ dict }}'

    - debug: var=item
      when: dict.value == True
      with_dict: '{{ dict }}'

    - debug: var=item
      when: dict.value == False
      with_dict: "Not a dict"
The first two tasks will succeed because they have a valid dict in the with_dict option and a correct condition in the when option. The last one will fail because the with_dict value has the wrong type, even though the when condition resolves correctly and should, in principle, guarantee the task is skipped.
I hope it helps.
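Building on that, one possible (untested) workaround for the Raspberry Pi case is to make both the loop source and the condition defensive; the default({}) guard and the is defined check are my additions, not part of the original task:
- file:
    src: /usr/share/munin/plugins/smart_
    dest: /etc/munin/plugins/smart_{{ item.key }}
    state: link
  with_dict: "{{ ansible_devices | default({}) }}"   # empty dict keeps with_dict happy
  when: item.value.host is defined and item.value.host.startswith('SATA')
  notify:
    - restart munin-node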
How to register a variable when using loop with stat module?
I am working on a project where I wish to compare the known checksums of a collection of files against their current state, and then take action if a change is detected (e.g. notify someone; I have not written this part yet).
If this were purely a CLI matter, I would have this sorted with some easy SH scripting.
That said, I have Ansible (2.7.5) available within my ENV and am keen to use it!
In reading the vendor documents, using the stat module felt like the "Ansible way" to go on this one.
Currently just *NIX servers (Linux, Solaris, and possibly AIX) are in scope, but eventually this might also apply to Windows, where I expect I would use win_stat instead with suitable parameters.
At present I plan to dump the results of the scan to a file (EG: CSV), which I would then iterate / match against, for the purposes of a comparison (to detect if a file has been somehow changed).
This is another part I have not written yet (the read a file and compare portions), but expect to hit those once I get this present matter sorted.
My current challenge, is that I can get "one-off" stat checks to work fine.
However, I expect to be targeting a whole directory's worth of files, and thus presumably want to:
"discover" the contents of the target directory, and retain this in memory
iterate (loop) through the list in memory
performing a stat check upon each file
retaining the checksum of each file
building some sort of dict or list?
write the collective results (or one line at a time) out to a log file of sorts (CSV.log: file_path,file_checksum)
I would welcome your feedback on what I might be missing (aside from some hair at this point).
I have tried a few different approaches to looping within the playbook (loop, with_items, etc.), however the challenge remains the same.
The stat loop runs fine, but the trailing register statement fails to commit the output to memory (resulting in a variety of "undefined variable" errors).
Am I somehow missing something in my loop definition?
Looking at the vendor docs on "Using register with a loop", it would appear I am doing this correctly (in my view anyway).
Simple "target files" I am checking against within a directory.
/tmp/app/targets/file1.txt
Some text.
/tmp/app/targets/file2.cfg
cluster=0
cluster_id=app_pool_00
/tmp/app/targets/file3.sh
#!/bin/sh
printf "Hello world\n"
exit 0
My prototyping playbook as it exists currently.
---
- name: check file integrity
  hosts: localhost
  become: no
  vars:
    TARGET: /tmp/app/targets
    LOG: /tmp/app/archive/scan_results.log
  tasks:
    - name: discover target files
      find:
        paths: "{{ TARGET }}"
        recurse: yes
        file_type: file
      register: TARGET_FILES

    - name: scan target
      stat:
        path: "{{ item.path }}"
        get_checksum: yes
      loop: "{{ TARGET_FILES.files }}"
      register: TARGET_RESULTS

    - name: DEBUG
      debug:
        var: "{{ TARGET_RESULTS }}"

    - name: write findings to log
      copy:
        content: "{{ TARGET_RESULTS.stat.path }},{{ TARGET_RESULTS.stat.checksum }}"
        dest: "{{ LOG }}"
...
My "one-off" playbook that worked.
---
- name: check file integrity
  hosts: localhost
  become: no
  vars:
    TARGET: /tmp/app/targets/file1.txt
    LOG: /tmp/app/archive/scan_results.log
  tasks:
    - name: scan target
      stat:
        path: '{{ TARGET }}'
        checksum_algorithm: sha1
        follow: no
        get_attributes: yes
        get_checksum: yes
        get_md5: no
        get_mime: yes
      register: result

    - name: write findings to log
      copy:
        content: "{{ result.stat.path }},{{ result.stat.checksum }}"
        dest: "{{ LOG }}"
...
The output was not exciting, but useful.
Would expect to build this up with multi-line output (one line per file stat checked) if I could figure out how to loop / register loop output correctly.
/tmp/app/archive/scan_results.log
/tmp/app/targets/file1.txt,8d06cea05d408d70c59b1dbc5df3bda374d869a4
You can use the set_fact module to register a variable the way you want, though I don't use it in my test below; it may be unnecessary in your case. The key point is that register on a loop stores one entry per iteration in a results list, so the follow-up task iterates over TARGET_RESULTS.results:
---
- name: check file integrity
  hosts: localhost
  vars:
    TARGET: /tmp/app/targets
    LOG: /tmp/app/archive/scan_results.log
  tasks:
    - name: 'discover target files'
      find:
        paths: "{{ TARGET }}"
        recurse: yes
        file_type: file
      register: TARGET_FILES

    - debug:
        var: TARGET_FILES

    - name: 'scan target'
      stat:
        path: "{{ item.path }}"
        get_checksum: yes
      loop: "{{ TARGET_FILES.files }}"
      register: TARGET_RESULTS

    - debug:
        var: TARGET_RESULTS

    - name: 'write findings to log'
      lineinfile:
        line: "{{ item.stat.path }},{{ item.stat.checksum }}"
        path: "{{ LOG }}"
        create: yes
      loop: '{{ TARGET_RESULTS.results }}'
Result:
# cat /tmp/app/archive/scan_results.log
/tmp/app/targets/file3.sh,bb4b0ffe4b5d26551785b250c38592b6f482cab4
/tmp/app/targets/file1.txt,8d06cea05d408d70c59b1dbc5df3bda374d869a4
/tmp/app/targets/file2.cfg,fb23292e06f91a0e0345f819fdee34fac8a53e59
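If you did want the set_fact approach mentioned at the top (a hedged sketch, not part of the tested answer above), you could accumulate the CSV lines in a fact and write the log in one shot:
- name: build findings as a list of csv lines
  set_fact:
    findings: "{{ findings | default([]) + [item.stat.path ~ ',' ~ item.stat.checksum] }}"
  loop: "{{ TARGET_RESULTS.results }}"

- name: write findings to log in one shot
  copy:
    content: "{{ findings | join('\n') }}"
    dest: "{{ LOG }}"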
Best Regards
I'm trying to write an Ansible role that moves a number of files on the remote system. I found a Stack Overflow post about how to do this, which essentially says "just use the command module with mv". I have a single task defined with a with_items statement, where each item in dirs is a dictionary with src and dest keys:
- name: Move directories
  command: mv {{ item.src }} {{ item.dest }}
  with_items: dirs
This is good and it works, but I run into problems if the destination directory already exists: I don't want to overwrite it. So I thought about running stat on each dest directory first. I wanted to update the dirs variable with the stat info, but as far as I know there isn't a good way to set or update variables once they're defined. So I used stat to get the info on each directory and saved the data with register:
- name: Check if directories already exist
  stat: path={{ item.dest }}
  with_items: dirs
  register: dirs_stat
Is there a way to tie the registered stat info to the mv commands? This would be easy if it were a single directory. The looping is what makes this tricky. Is there a way to do this without unrolling this loop into two tasks per directory?
This is not the simplest solution by any means, but if you wanted to use Ansible and not "unroll":
---
- hosts: all
  vars:
    dirs:
      - src: /home/ubuntu/src/test/src1
        dest: /home/ubuntu/src/test/dest1
      - src: /home/ubuntu/src/test/src2
        dest: /home/ubuntu/src/test/dest2
  tasks:
    - stat:
        path: "{{ item.dest }}"
      with_items: dirs
      register: dirs_stat

    - debug:
        msg: "should not copy {{ item.0.src }}"
      with_together:
        - dirs
        - dirs_stat.results
      when: item.1.stat.exists
Simply adapt the debug task to run the appropriate command task instead, and change when: to when: not item.1.stat.exists.
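For instance, a minimal sketch of that adaptation (the task name is mine; otherwise it only combines the with_together loop above with the mv command from the question):
- name: Move directories whose destination does not exist yet
  command: mv {{ item.0.src }} {{ item.0.dest }}
  with_together:
    - dirs
    - dirs_stat.results
  when: not item.1.stat.exists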
You can use the stat module in your playbook to check whether the destination exists; if it doesn't, then move:
---
- name: Demo Playbook
  hosts: all
  become: yes
  tasks:
    - name: check destination
      stat:
        path: /path/to/dest
      register: p

    - name: move file if destination does not exist
      command: mv /path/to/src /path/to/dest
      when: not p.stat.exists
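Alternatively (my suggestion, not part of the original answer), the command module's creates argument collapses this into a single task, since Ansible skips the command when the given path already exists:
- name: move file if destination does not exist
  command: mv /path/to/src /path/to/dest
  args:
    creates: /path/to/dest    # task is skipped when this path exists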