I have a playbook that will iterate over a set of hosts in different environments, "dev" and "prod". The environment a host is in changes the other variables it has. For example, this is in my vars/main.yml file:
---
folder_list_DEV: ["folder-1", "folder-2", "folder-3"]
folder_list_PROD: ["folder-1", "folder-2"]
The intention in my example is to create a series of folders on the target system, depending on which environment it is in. The code that I would like to work, but does not, looks like this:
- name: Create folders
  file:
    path: "/{{ item }}"
    state: directory
  with_items: "{{ folder_list_env }}"
"env" is set on execution of the playbook (-e "env=DEV").
How can I reference this "folder_list_*" variable based on the value of the "env" variable?
"{{ vars['folder_list_' + env] }}"
I want to set a playbook-level environment, but only after executing a couple of tasks. I have found that I can define a playbook-level environment variable before the definition of any tasks, or task-level environment variables. But I haven't found how to set up an environment variable that can be used by all tasks following a given task.
- name: server properties
  hosts: kafka_broker
  vars:
    ansible_ssh_extra_args: "-o StrictHostKeyChecking=no"
    ansible_host_key_checking: false
    date: "{{ lookup('pipe', 'date +%Y%m%d-%H%M%S') }}"
    copy_to_dest: "/export/home/kafusr/kafka/secrets"
    server_props_loc: "/etc/kafka"
    secrets_props_loc: "{{ server_props_loc }}/secrets"
  environment:
    CONFLUENT_SECURITY_MASTER_KEY: "{{ extract_key2 }}"
  tasks:
    - name: Create a directory if it does not exist
      file:
        path: "{{ copy_to_dest }}"
        state: directory
        mode: '0755'

    - name: Find files from "{{ server_props_loc }}"
      find:
        paths: /etc/kafka/
        patterns: "server.properties*"
        # ... the rest of the task
      register: etc_kafka_server_props

    - name: Find files from "{{ secrets_props_loc }}"
      find:
        paths: /etc/kafka/secrets
        patterns: "*"
        # ... the rest of the task
      register: etc_kafka_secrets_props

    - name: Copy the files
      copy:
        src: "{{ item.path }}"
        dest: "{{ copy_to_dest }}"
        remote_src: yes
      loop: "{{ etc_kafka_server_props.files + etc_kafka_secrets_props.files }}"

    - name: set masterkey content value
      set_fact:
        contents: "{{ lookup('file', '/export/home/kafusr/kafka/secrets/masterkey.txt') }}"
        extract_key2: "{{ contents.split('\n').2.split('|').2 | trim }}"
I want to set CONFLUENT_SECURITY_MASTER_KEY after the set_fact task.
Is it possible to set a playbook-level environment variable after defining some tasks?
Thank you
UPDATE
Initially, when I was executing the playbook as originally defined, I was getting the error
fatal: [kafkaserver1]: FAILED! => {"msg": "The field 'environment' has an invalid value,
which includes an undefined variable. The error was: 'extract_key2' is undefined"}
which was expected, as the variable extract_key2 was not set before the files were copied to the desired directory.
After @Zeitounator's suggestion, when I added a default to the environment variable's definition,
CONFLUENT_SECURITY_MASTER_KEY: "{{ extract_key2 | default('') }}"
I now get a different error:
TASK [set masterkey content value] ********************
fatal: [kafkaserver1]: FAILED! => {"msg": "The task includes an option with an undefined variable.
The error was: 'contents' is undefined\n\nThe error appears to be in
'/export/home/kafuser/tmp/so-71538207-question.yml': line 43, column 7, but may\nbe elsewhere in the file
depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: set masterkey content value\n  ^ here\n"}
I am getting this on all 3 brokers in the console, and I checked that the file exists. I did cat the file, copying the path from the error to make sure there was no typo, and its contents are displayed on the console.
Update 2
I am trying to figure out how to use slurp to get the info, with the same approach as @Zeitounator's example using lookup.
This is what I am trying. The current definition is, of course, erroneous; I just wanted to show what I am trying to do. But can it be done with slurp, and am I on the right path?
environment:
  CONFLUENT_SECURITY_MASTER_KEY: >-
    {{
      (
        ((slurp: src: /export/home/z8tpush/kafka/secrets/masterkey.txt)['content'] | b64decode).split('\n').2.split('|').2 | trim
      )
    }}
@Zeitounator - Will you be able to direct me to an example where a slurp or fetch module is used to set up an environment variable, and where the value gets updated after the tasks that create the file are executed, similar to what you have shown with the lookup filter? I would really appreciate it.
Note:
Ultimately, I want to use Ansible to create a new Kafka user using Confluent's CLI commands (using the shell or command module), verify it in my directory, and once satisfied, encrypt the security.properties file using the masterkey and copy it to the appropriate location where Confluent is installed.
As already mentioned, you can set environment variables globally with Ansible Configuration Settings, or set the remote environment in a task.
Regarding your question
I haven't found how to set up an environment variable that can be used by all tasks following a given task.
You can also set the environment at block level, for a logical group of tasks.
Setting the remote environment: "When you set a value with environment: at the play or block level, it is available only to tasks within the play or block that are executed by the same user."
This means you would need to define a block for the subsequent tasks:
- name: Block of next task(s)
  block:
    - name: Next task
      ...
  environment:
    CONFLUENT_SECURITY_MASTER_KEY: "{{ extract_key2 }}"
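Regarding the slurp part of your update: slurp is a module, not a lookup, so it cannot be called inline in a Jinja2 expression. You can, however, register its result in a task and reference it from a block-level environment. A sketch (untested, path taken from the question):

- name: Read master key file
  slurp:
    src: /export/home/kafusr/kafka/secrets/masterkey.txt
  register: masterkey_file

- name: Tasks that need the master key
  block:
    - name: Next task
      ...
  environment:
    CONFLUENT_SECURITY_MASTER_KEY: >-
      {{ (masterkey_file.content | b64decode).split('\n').2.split('|').2 | trim }}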
Regarding your question
Is it possible to set a playbook-level environment variable after defining some tasks?
No, not at that level in that run, since the playbook is already executing.
Another option might be to move the tasks in question into another role, playbook, or task file and include_* it.
You cannot set_fact a var that depends on another var declared in the same set_fact task. Moreover, there is absolutely no need for set_fact here, as long as your relevant tasks can live with an empty environment var until it is fully defined. The following environment declaration (untested) should work and return the key for every task running after your file exists.
environment:
  CONFLUENT_SECURITY_MASTER_KEY: >-
    {{
      (
        (
          lookup('file', '/export/home/kafusr/kafka/secrets/masterkey.txt', errors='ignore')
          | default('')
        ).split('\n').2
        | default('')
      ).split('|').2
      | default('')
      | trim
    }}
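If you do want to keep the set_fact approach for other purposes, a sketch (untested) that splits it into two tasks, so contents is defined before extract_key2 references it:

- name: read masterkey file content
  set_fact:
    contents: "{{ lookup('file', '/export/home/kafusr/kafka/secrets/masterkey.txt') }}"

- name: extract the key from the content
  set_fact:
    extract_key2: "{{ contents.split('\n').2.split('|').2 | trim }}"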
In my playbook I have tasks that use the hostname of a server and extrapolate data to set location and environment based on that. But some servers have unique names and I'm not sure how to set variables on those. I'd prefer not to use Ansible facts, since I would like to share the playbook with a team. One way I was thinking of is what's listed below, but I'm running into issues. Could someone please guide me?
Create vars_file inventory
---
customservers:
  customhostname1:
    env: test
    location: hawaii
  customhostname2:
    env: prod
    location: alaska
In the playbook:
---
tasks:
  - name: set hostname
    shell: echo "customhostname1"
    register: my_hostname

  - name: setting env var
    set_fact:
      env: "{{ item.value.env }}"
    when: my_hostname == "{{ item.key }}"
    with_dict: "{{ customservers }}"

  - name: outputing env var
    debug:
      msg: the output is {{ env }}
Expected output should be test.
Thank you.
In my playbook i have tasks that use the hostname of a server and
extrapolate data to set location and environment based on that.
Bad Idea.
But some servers have unique names and I'm not sure how to set variables on those
And that is why.
The second Bad Idea is to have TEST and PROD in the same inventory. That's just begging for a disaster. They should be two completely separate inventories, though perhaps under the same parent directory:
inventories/
inventories/test/
inventories/test/hosts
inventories/test/host_vars/
inventories/test/host_vars/customhostname1.yml
inventories/prod/
inventories/prod/hosts
inventories/prod/host_vars/
inventories/prod/host_vars/customhostname2.yml
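For example, inventories/test/host_vars/customhostname1.yml would then hold just that host's values (env is handled separately, as shown below):

---
location: hawaii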
So inventories/prod/hosts could look like this (I prefer the ini format):
[customservers]
customhostname2 location=alaska
Or:
[customservers]
customhostname2
[customhostname2:vars]
location=alaska
But in any case, DO NOT combine test and prod inventories.
If you still need that env variable, you can either put it in group_vars/all.yml or right in the hosts file like so:
[all:vars]
env=prod
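Or, equivalently, as a tiny per-inventory group_vars file (a sketch of inventories/prod/group_vars/all.yml):

---
env: prod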
I am trying to do a custom install of openedx, and I have a bunch of .yml files with environment variables in them, inside paths that look like this:
playbooks/roles/<component-name>/defaults/main.yml
Then, while running a playbook that installs all such components, I'm using a command like this
ansible-playbook ./openedx_native.yml -e "@roles/<component-name-1>/defaults/main.yml" -e "@roles/<component-name-2>/defaults/main.yml"
Now I want to be able to use the main.yml files from all components and there are about 20-25 of them, so I'm looking for a way to include them using a wildcard, something like this
ansible-playbook ./openedx_native.yml -e "@roles/*/defaults/main.yml"
This, of course, doesn't work and Ansible throws an error like this
ERROR! the file_name
'/var/tmp/configuration/playbooks/roles/*/defaults/main.yml' does not
exist, or is not readable
How do I achieve this? Please help!
An option would be to find the files and include_vars.
tasks:
  - command: "sh -c 'find {{ playbook_dir }}/roles/*/defaults/main.yml'"
    register: result

  - include_vars:
      file: "{{ item }}"
    loop: "{{ result.stdout_lines }}"
If you have the flexibility to change and re-arrange the environment variables and their values in group_vars/all.yml, like so:

environments:
  - { name: 'development', profile: 'small' }
  - { name: 'staging', profile: 'medium' }
  - { name: 'production', profile: 'complex' }
then you can use this variable in any task. For example, to create a folder per environment name:
- name: create folders for Environment
  file:
    path: "{{ target }}/{{ item.name }}"
    state: directory
    mode: 0755
  with_items: "{{ environments }}"
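The same list also lets you pick the profile for the current environment (a sketch, assuming env is passed in as an extra var):

- name: show profile for the current environment
  debug:
    msg: "{{ (environments | selectattr('name', 'equalto', env) | first).profile }}"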
I have a task in a playbook as shown:
- name: Create configuration.json for every analytics from the template.
  template:
    src: ./src-zip/{{ item.key }}/configuration_sample.j2
    dest: ./src-zip/{{ item.key }}/configuration.json
  with_dict: "{{ apps }}"
Now, I have variables, some common and some different, defined in a file for every run. For example, there is a file var_alerts-manager.yml for one run; for another run, I have var_abc.yml. I want to use a different file for each run: the template will use the variables defined in var_alerts-manager.yml in one run, var_abc.yml in another, and so on.
How can this be achieved in Ansible, and where should I keep these files so the task includes only the specific file for each run?
This is a scenario for which the roles mechanism evolved:
Create a my_role role in your playbook dir.
Move your task and template into that role.
Store your variables inside the vars directory of the role.
Execute the role with:
- name: Create configuration.json for every analytics from the template.
  include_role:
    name: my_role
    vars_from: "{{ item.key }}"  # or whatever key your naming is defined in
  loop: "{{ apps | dict2items }}"
You might want to replace item inside the template task with some other variable for clarity; read about loop_control.
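For instance, a hypothetical loop_control rename so the tasks can refer to app instead of item:

- name: Create configuration.json for every analytics from the template.
  include_role:
    name: my_role
    vars_from: "{{ app.key }}"
  loop: "{{ apps | dict2items }}"
  loop_control:
    loop_var: app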
In my Ansible vars file, I have a variable that will sometimes be set and other times I want to set dynamically. For example, I have an RPM that I want to install. I can manually store the location in a variable, or, if I don't have a particular one in mind, pull the latest from Jenkins. My question is: how can I check whether the variable is not defined or empty, and if so, just use the default from Jenkins (already stored in a var)?
Here is what I have in mind:
...code which gets host_vars[jenkins_rpm]
- hosts: "{{ host }}"
  tasks:
    - name: Set Facts
      set_fact:
        jenkins_rpm: "{{ hostvars['localhost']['jenkins_rpm'] }}"

    - name: If my_rpm is empty or not defined, just use the jenkins_rpm
      set_fact: my_rpm=jenkins_rpm
      when: !my_rpm | my_rpm == ""
There is a default filter for that:
- set_fact:
    my_rpm: "{{ my_rpm | default(jenkins_rpm) }}"