I have created a variable to disable sites in nginx within my main set of tasks. Since this is a one-time task, once domain1.com is disabled I can comment the entire line out. When I do, I receive the error:
{"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'None' has no attribute 'domain'"}
What can I do to modify my task so it only runs when there are domains listed in the variable?
nginx_sites_disabled:
#- domain: "domain1.com"
- name: Disable sites
  file:
    path: /etc/nginx/sites-enabled/{{ item.domain }}
    state: absent
  with_items: "{{ nginx_sites_disabled }}"
  notify:
    - Reload nginx
Apply the default filter within your task:
- name: Disable sites
  file:
    path: /etc/nginx/sites-enabled/{{ item.domain }}
    state: absent
  with_items: "{{ nginx_sites_disabled | default([]) }}"
  notify:
    - Reload nginx
Related answer: Ansible: apply when to complete loop
You don't need to comment out the lines when the work is done.
Like most Ansible modules, the file module is idempotent: if the desired state is absent and the file isn't there, it won't do anything.
Just leave the nginx_sites_disabled list unchanged.
By the way, if you still need nginx_sites_disabled to be an empty list, you need to write this:
---
nginx_sites_disabled: []
otherwise, nginx_sites_disabled will be equal to None. That's why you get this error.
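Note that default() alone only covers the undefined case, not a variable explicitly set to None. If you cannot change the variable definition itself, Jinja2's default filter accepts a second boolean argument that also replaces falsy values such as None. A sketch combining both:

```yaml
# Sketch: default([], true) replaces both an undefined variable and a
# None value (e.g. a list whose only entry is commented out) with []
- name: Disable sites
  file:
    path: /etc/nginx/sites-enabled/{{ item.domain }}
    state: absent
  with_items: "{{ nginx_sites_disabled | default([], true) }}"
  notify:
    - Reload nginx
```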
Related
I want to set a playbook-level environment, but only after executing a couple of tasks. I have found that I can define a playbook-level environment variable before the definition of any tasks, or task-level environment variables. But I haven't found how to set up an environment variable that can be used by all tasks following a given task.
- name: server properties
  hosts: kafka_broker
  vars:
    ansible_ssh_extra_args: "-o StrictHostKeyChecking=no"
    ansible_host_key_checking: false
    date: "{{ lookup('pipe', 'date +%Y%m%d-%H%M%S') }}"
    copy_to_dest: "/export/home/kafusr/kafka/secrets"
    server_props_loc: "/etc/kafka"
    secrets_props_loc: "{{ server_props_loc }}/secrets"
  environment:
    CONFLUENT_SECURITY_MASTER_KEY: "{{ extract_key2 }}"
  tasks:
    - name: Create a directory if it does not exist
      file:
        path: "{{ copy_to_dest }}"
        state: directory
        mode: '0755'
    - name: Find files from "{{ server_props_loc }}"
      find:
        paths: /etc/kafka/
        patterns: "server.properties*"
        # ... the rest of the task
      register: etc_kafka_server_props
    - name: Find files from "{{ secrets_props_loc }}"
      find:
        paths: /etc/kafka/secrets
        patterns: "*"
        # ... the rest of the task
      register: etc_kafka_secrets_props
    - name: Copy the files
      copy:
        src: "{{ item.path }}"
        dest: "{{ copy_to_dest }}"
        remote_src: yes
      loop: "{{ etc_kafka_server_props.files + etc_kafka_secrets_props.files }}"
    - name: set masterkey content value
      set_fact:
        contents: "{{ lookup('file', '/export/home/kafusr/kafka/secrets/masterkey.txt') }}"
        extract_key2: "{{ contents.split('\n').2.split('|').2 | trim }}"
I want to set CONFLUENT_SECURITY_MASTER_KEY after the set_fact task.
Is it possible to set a playbook-level environment variable after defining some tasks?
Thank you
UPDATE
Initially, when I was executing the playbook as originally defined, I was getting the error
fatal: [kafkaserver1]: FAILED! => {"msg": "The field 'environment' has an invalid value,
which includes an undefined variable. The error was: 'extract_key2' is undefined"}
which was expected, as the variable extract_key2 was not set before copying the files to the desired directory.
After @Zeitounator's suggestion, when I added a default to the environment variable's definition,
CONFLUENT_SECURITY_MASTER_KEY: "{{ extract_key2 | default('') }}"
I now get a different error:
TASK [set masterkey content value] ********************
fatal: [kafkaserver1]: FAILED! => {"msg": "The task includes an option with an undefined variable.
The error was: 'contents' is undefined

The error appears to be in '/export/home/kafuser/tmp/so-71538207-question.yml': line 43, column 7, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

- name: set masterkey content value
  ^ here"}
I am getting this on all 3 brokers, and I checked that the file exists.
I did a cat on that file, copying the path from the error to make sure there is no typo, and the contents of the file are displayed on the console.
Update 2
I am trying to figure out how to use slurp to get the info, with the same approach as @Zeitounator's example using lookup.
This is what I am trying. The current definition is, of course, erroneous; I just wanted to show what I am trying to do. But can it be done with slurp, and am I on the right path?
environment:
  CONFLUENT_SECURITY_MASTER_KEY: >-
    {{
      (
        ((slurp: src: /export/home/z8tpush/kafka/secrets/masterkey.txt)['content'] | b64decode).split('\n').2.split('|').2 | trim
      )
    }}
@Zeitounator - Will you be able to direct me to an example where a slurp or fetch module is used to set up an environment variable, and where the value gets updated after the tasks that create the file are executed, similar to what you have shown with the lookup filter? I would really appreciate it.
Note:
Ultimately, I want to use Ansible to create a new Kafka user using Confluent's CLI commands (via the shell or command module), verify it in my directory, and once satisfied, encrypt the security.properties file using the masterkey and copy it to the appropriate location where Confluent is installed.
As already mentioned, you can:
set environment variables globally with Ansible Configuration Settings
set the remote environment in a task
Regarding your question
I haven't found how can I set-up an environment variable that can be used by all tasks following a task.
You can also set the environment at block level, a logical grouping of tasks.
Setting the remote environment: "When you set a value with environment: at the play or block level, it is available only to tasks within the play or block that are executed by the same user."
This means you would need to define a block for the next tasks
- name: Block of next task(s)
  block:
    - name: Next task
      ...
  environment:
    CONFLUENT_SECURITY_MASTER_KEY: "{{ extract_key2 }}"
Regarding your question
Is it possible to set playbook level environment variable, but after defining some tasks?
No, not at that level in that run, as the playbook is already running.
Another option might be to move the tasks in question into another role, playbook, or task file and include_* it.
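For instance, with include_tasks and its apply option, the environment can be attached to the included tasks once the fact exists. An untested sketch; the task file name is hypothetical:

```yaml
- name: set masterkey content value
  set_fact:
    extract_key2: "{{ lookup('file', '/export/home/kafusr/kafka/secrets/masterkey.txt').split('\n').2.split('|').2 | trim }}"

- name: Run the remaining tasks with the key in their environment
  include_tasks:
    file: confluent_user_tasks.yml   # hypothetical task file
    apply:
      environment:
        CONFLUENT_SECURITY_MASTER_KEY: "{{ extract_key2 }}"
```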
You cannot set_fact a var depending on another var in the same block. Moreover, there is absolutely no need for set_fact here, as long as your relevant tasks can live with an empty environment var until it is fully defined. The following environment declaration (untested) should work and return the key for every task running after your file exists.
environment:
  CONFLUENT_SECURITY_MASTER_KEY: >-
    {{
      (
        (
          lookup('file', '/export/home/kafusr/kafka/secrets/masterkey.txt', errors='ignore')
          | default('')
        ).split('\n').2
        | default('')
      ).split('|').2
      | default('')
      | trim
    }}
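Regarding the slurp question in Update 2: slurp is a module, not a lookup, so it cannot be evaluated lazily inside an environment declaration; it has to run as a task first. An untested sketch, registering its result and setting the environment on a block of later tasks (note that, unlike lookup('file'), slurp reads the file on the remote host):

```yaml
- name: Read master key from the remote host
  slurp:
    src: /export/home/kafusr/kafka/secrets/masterkey.txt
  register: masterkey_file

- name: Tasks that need the key
  block:
    - name: Placeholder task using the key
      command: echo "key is available"
  environment:
    CONFLUENT_SECURITY_MASTER_KEY: "{{ (masterkey_file.content | b64decode).split('\n').2.split('|').2 | trim }}"
```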
Here is the problem I have. I am running the following playbook:
- name: Check for RSA-Key existence
  stat:
    path: /opt/cert/{{ item.username }}.key
  with_items: "{{ roles }}"
  register: rsa

- name: debug
  debug:
    var: item.stat.exists
  loop: "{{ rsa.results }}"

- name: Generate RSA-Key
  community.crypto.openssl_privatekey:
    path: /opt/cert/{{ item.username }}.key
    size: 2048
  when: item.stat.exists == False
  with_items:
    - "{{ roles }}"
    - "{{ rsa.results }}"
This is the error I receive:
The error was: error while evaluating conditional (item.stat.exists == False): 'dict object' has no attribute 'stat'
The debug task does not raise any error:
"item.stat.exists": true
What am I doing wrong and how can I fix my playbook to make it work?
TL;DR
Replace all your tasks with a single one:
- name: Generate RSA-Key or check they exist
  community.crypto.openssl_privatekey:
    path: /opt/cert/{{ item.username }}.key
    size: 2048
    state: present
  with_items: "{{ roles }}"
Problem with the last loop in your original example
I don't know what you are trying to do exactly when writing the following in your last task:
with_items:
  - "{{ roles }}"
  - "{{ rsa.results }}"
What I know is the actual result: you are looping over a single list made of the roles elements followed by the rsa.results elements. Since I am pretty sure no element in your roles list has a stat.exists entry, the error you get is quite expected.
Once you have looped over an original list (e.g. roles) and registered the result of the tasks (in e.g. rsa), you actually have all the information you need inside that registered var. rsa.results is a list of individual results. In each element, you will find all the keys returned by the module you ran (e.g. stat) and an item key holding the original element that was used in the loop (i.e. an entry of your original roles list).
I strongly suggest you study this yourself with close attention by looking at the entire variable to see how it is globally structured:
- name: Show my entire registered var
  debug:
    var: rsa
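For illustration, an abridged sketch of the shape you would see (the field values are hypothetical, and the real output contains many more stat fields):

```yaml
rsa:
  results:
    - item:             # the original element from the roles list
        username: alice
      stat:
        exists: true
    - item:
        username: bob
      stat:
        exists: false
```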
Once you have looked at your incoming data, it will become obvious that you should modify your last task as follows (note the item.item referencing the original element from the previous loop):
- name: Generate RSA-Key
  community.crypto.openssl_privatekey:
    path: /opt/cert/{{ item.item.username }}.key
    size: 2048
  when: not item.stat.exists # Comparing to an explicit False is bad practice; use this instead
  with_items: "{{ rsa.results }}"
Using stat here is overkill
To go further: while all of the above answers your direct question, it does not make much sense in the Ansible world. You are doing a bunch of work that Ansible is already doing behind the scenes for you.
The community.crypto.openssl_privatekey module creates keys idempotently, i.e. it will create the key only if it doesn't exist and report changed, or do nothing if the key already exists and report ok. So you can basically reduce your three-task example to a single task:
- name: Generate RSA-Key or check they exist
  community.crypto.openssl_privatekey:
    path: /opt/cert/{{ item.username }}.key
    size: 2048
    state: present # This is not necessary (default) but good practice
  with_items: "{{ roles }}"
Consider changing your var name
Last, I'd like to mention that roles is actually a reserved name in Ansible, so defining a var with that name should issue a warning in current Ansible versions and will probably become unsupported at some point.
Refs:
registering variables
registering variables with a loop
My playbook structure looks like:
- hosts: all
  name: all
  roles:
    - roles1
    - roles2
In the tasks of roles1, I define a variable like this:
---
# tasks for roles1
- name: Get the zookeeper image tag # rel3.0
  run_once: true
  shell: echo '{{ item.split(":")[-1] }}' # Here I can get the string rel3.0 normally
  with_items: "{{ ret.stdout.split('\n') }}"
  when: "'zookeeper' in item"
  register: zk_tag
ret.stdout:
Loaded image: test/old/kafka:latest
Loaded image: test/new/mysql:v5.7
Loaded image: test/old/zookeeper:rel3.0
In the tasks of roles2, I want to use the zk_tag variable:
- name: Test if the variable zk_tag can be used in roles2
  debug: var={{ zk_tag.stdout }}
Error :
The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'stdout'
I think I encountered the following 2 problems:
When a variable is registered with register under a when condition, it cannot be used in all groups. How do I solve this? How do I make this variable available to all groups?
As my title says: how do I use variables between different roles in Ansible?
You're most likely starting a new play for a new host, meaning all previously collected vars are lost.
What you can do is pass a var to another host with the add_host module.
- name: Pass variable from this play to the other host in the same play
  add_host:
    name: hostname2
    var_in_play_2: "{{ var_in_play_1 }}"
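A later play targeting that host can then read the variable directly, or from any host via hostvars. A sketch, reusing the hostname2 name from the example above:

```yaml
- hosts: hostname2
  tasks:
    - name: Show the variable passed with add_host
      debug:
        var: var_in_play_2   # also reachable as hostvars['hostname2'].var_in_play_2
```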
--- EDIT ---
It's a bit unclear. Why do you use the when statement in the first place if you want the variable to be available to every host in the play?
You might want to use the group_vars/all.yml file to place vars in.
Also, using add_host should be the way to go as how I read it. Can you post your playbook, and the outcome of your playbook on a site, e.g. pastebin?
If there is any chance the var is not defined because of a when condition, you should use a default value to force the var to be defined when using it. While you are at it, use the debug module for your tests rather than echoing something in a shell:
- name: Debug my var
  debug:
    msg: "{{ docker_exists | default(false) }}"
In my playbook, the first task will find some files and, if any are found, register them in a variable; the second task will remove the files via a command passed to the shell. The issue is that the second task always errors, even when the variable cleanup is set to false. This is the playbook:
tasks:
  - name: Find tables
    find:
      paths: "{{ file path }}"
      age: "1d"
      recurse: yes
      file_type: directory
    when: cleanup
    register: cleanup_files

  - name: cleanup tables
    shell: /bin/cleanup {{ item.path | basename }}
    with_items: "{{ cleanup_files.files }}"
    when: "cleanup or item is defined"
When cleanup is set to false, the first task is skipped, but the second errors, saying: "failed": true, "msg": "'dict object' has no attribute 'files'"}.
item would not be defined, as the task above didn't run, so shouldn't the second task still be skipped since cleanup is set to false?
I've noticed that if I change the or to and in the second task, it skips the task fine. I'm not sure why.
Change the playbook to this code (the changes are in the second task); after the code you can see the logic behind the changes:
tasks:
  - name: Find tables
    find:
      paths: "/tmp"
      age: "1000d"
      recurse: yes
      file_type: directory
    when: cleanup
    register: cleanup_files

  - debug: var=cleanup_files

  - name: cleanup tables
    debug: msg="file= {{ item.path | basename }}"
    when: "cleanup_files.files is defined"
    with_items: "{{ cleanup_files.files }}"
When you execute with cleanup=false, the find task will register its result in cleanup_files, but you will notice it doesn't have a cleanup_files.files attribute. When you execute with cleanup=true, you will get cleanup_files.files; it will be empty if no files meet the find criteria.
So the second task only needs to know whether cleanup_files.files is defined, and if it is, it can proceed to run. If no files were found to meet the criteria, the with_items clause will handle it properly (no files => no iterations).
I have added a debug task to inspect cleanup_files; you can run it and see the structure when:
cleanup=true
cleanup=false
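To illustrate, an abridged sketch of what the debug task prints in the two cases (the values are indicative, not exact output):

```yaml
# cleanup=false: the find task was skipped, so no files key is registered
cleanup_files:
  changed: false
  skipped: true
  skip_reason: "Conditional result was False"

# cleanup=true with no matches: the files key exists but is empty
cleanup_files:
  changed: false
  matched: 0
  files: []
```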
I think you need to change the second WHEN to
when: "cleanup and cleanup_files.files is defined"
You might also consider making cleanup a tag.
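A sketch of the tag idea, assuming it is combined with the special never tag so the tasks only run when explicitly requested:

```yaml
- name: Find tables
  find:
    paths: /tmp
    age: "1d"
    recurse: yes
    file_type: directory
  register: cleanup_files
  tags: [cleanup, never]
```

Run with `ansible-playbook playbook.yml --tags cleanup` to enable these tasks; without the flag they are skipped.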
# "Run application specific configuration scripts"
- include: "app_cfg/{{ app_name }}.yml"
  when: "{{ app_conf[app_name].app_cfg }}"
  ignore_errors: no
  tags:
    - conf
I thought I would be able to conditionally include application-specific playbooks simply by setting one variable to a true/false value, like so:
app_conf:
  my_app_1:
    app_cfg: no
  my_app_2:
    app_cfg: yes
Unfortunately, Ansible forces me to create the file beforehand:
ERROR: file could not read: <...>/tasks/app_cfg/app_config.yml
Is there a way I can avoid creating a bunch of empty files?
# ansible --version
ansible 1.9.2
include with when is not conditional in the common sense.
It actually includes every task inside the include file and appends the given when statement to each included task.
So it expects the include file to exist.
You can try to handle this using with_first_found and skip: true.
Something like this:
# warning, code not tested
- include: "app_cfg/{{ app_name }}.yml"
  with_first_found:
    - files:
        - "{{ app_conf[app_name].app_cfg | ternary('app_cfg/'+app_name+'.yml', 'unexisting.file') }}"
      skip: true
  tags: conf
It is supposed to supply a valid config name if app_cfg is true, and unexisting.file (which will be skipped) otherwise.
See this answer about skip option.