I have a problem using a parametrized Ansible include.
I have created the following file, named tasks/haproxy.yml:
- name: "change node state to {{state}} in haproxy"
tags:
- "haproxy-{{state}}"
become: yes
become_user: root
haproxy:
state: "{{ state }}"
wait: yes
host: "{{ inventory_hostname }}"
backend: app
socket: /var/container_data/haproxy/run/haproxy.sock
delegate_to: "{{ item }}"
with_items: "{{ groups.haproxy }}"
I am including this file in my playbook.yml, passing the value of the state parameter:
- include: tasks/haproxy.yml state=enabled
I am getting the following error:
TASK [include] *****************************************************************
included: /home/bb/tasks/haproxy.yml for 172.16.224.68, 172.16.224.69
ERROR! 'state' is undefined
state is my parameter, passed when doing the include (as described in http://docs.ansible.com/ansible/playbooks_roles.html#task-include-files-and-encouraging-reuse).
What's wrong?
I am using Ansible 2.0.2.0.
edit:
Using the alternative syntax for passing parameters
- include: tasks/haproxy.yml
  vars:
    state: enabled
gives exactly the same error message.
Resolved by removing a single leading space (!!) when using the alternative syntax (vars).
So the correct parametrized include is
- include: tasks/haproxy.yml
  vars:
    state: enabled
The vars keyword must be at the same level as the include keyword.
Otherwise it does not work, with the message ERROR! 'state' is undefined.
The shortened syntax (- include: tasks/haproxy.yml state=enabled) still does not work.
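On newer Ansible releases, where the bare include has been deprecated, the same pattern can be written with include_tasks; this is only a sketch of the equivalent, not part of the original answer, and the vars keyword must still sit at the same level as the include_tasks keyword:
- include_tasks: tasks/haproxy.yml
  vars:
    state: enabled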
---
- hosts: localhost
  gather_facts: no
  vars:
    var_folder_path: /home/play
  tasks:
    - name: Include all yaml files in directories
      include_vars:
        dir: "{{ var_folder_path }}/vars"
        extensions:
          - 'yaml'
    - name: "Print Variable Name"
      shell: echo "{{ item }}"
      loop:
        - "{{ global.globalname.property.Name }}"
        - "{{ S3.secret }}"
My var files under /home/play/vars
example_1.yaml
global:
  globalname:
    property:
      cipher: DEFAULT
      client:
        type: dynamic
      Name: test-run
example_2.yaml
gcp:
  keyname: sample-run
S3:
  secret: run
Resources: false
celery:
  resources:
    limits:
      cpu: 5
When I execute the playbook I get the error below. I am not sure why the values are not loading.
fatal: [localhost]: FAILED! => {"msg": "'dict object' has no attribute 'globalname'"}
To debug a playbook when an error occurs, it is very often a good idea to insert a debug task.
In your case, insert a debug task before the shell task and output the whole global variable:
- debug:
    var: global
As mentioned in my comment above, there are no issues with the playbooks you shared, except for the "S3" variable definition in example_2.yaml, which should be "s3" (in lower case).
One possible cause of the error you reported is that there is more than one "global" var definition in the var files at /home/play/vars, and it is overriding the global var definition in example_1.yaml.
Ansible's default merging is in ASCII order, i.e., the last definition loaded overwrites the previous ones. See how-variables-are-merged in the official Ansible documentation for more details on how variables are merged in Ansible, and update your var files accordingly.
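To make the override concrete, here is a purely hypothetical third var file (the name and contents are assumptions, not taken from the question):
# /home/play/vars/example_9.yaml (hypothetical)
global:
  globalname:
    other: value
Because include_vars loads the directory contents in alphabetical order and Ansible's default hash_behaviour is replace, this later definition of global would replace the one from example_1.yaml entirely, so global.globalname.property.Name would no longer exist and the task would fail with exactly the 'dict object' has no attribute error shown above.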
As mentioned by user #phanaz in the other answer, it is good practice to use the "debug" module to print the vars for validation in such scenarios.
I use localhost and set_fact to store variables and access them in different playbooks.
---
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: set_variables
      set_fact:
        cloudinit_fqdn: 'server1.example.com'
        additional_container_config_values:
          security.nesting: 'false'
          security.privileged: 'false'
        cloudinit_network_raw:
          version: 2
          renderer: networkd
          ethernets:
            eth0:
              dhcp4: False
              addresses: [192.168.178.35/24]
              gateway4: 192.168.178.1
              nameservers:
                addresses: [192.168.178.13]
Now I want to use the cloudinit_fqdn at import_playbook:
- name: system configuration
  import_playbook: "{{ hostvars['localhost']['cloudinit_fqdn'] }}_server_config.yml"
I tried different ways to get that variable, but I get errors like:
'ERROR! 'hostvars' is undefined'
I am also not able to access that variable with:
- debug:
    msg: '{{ vars }}'
ERROR! 'debug' is not a valid attribute for a Play
How can I use a variable at play-level?
Regarding your use case, I've set up a short test to get around the syntax errors with the variable, as well as the debug task.
---
- hosts: localhost
  become: false
  gather_facts: false
  tasks:
    - name: Set variables
      set_fact:
        example_fqdn: 'test.example.com'
    - name: Show variables
      debug:
        msg: "{{ hostvars['localhost'].example_fqdn }}"
While the example is working, adding
- name: Import playbook
  import_playbook: "{{ hostvars['localhost'].example_fqdn }}.yml"
or even a simple
- name: Import playbook
  import_playbook: "{{ example_fqdn }}.yml"
lets the playbook run fail with
ERROR! 'hostvars' is undefined
ERROR! 'example_fqdn' is undefined
since the import is done at parse (compile) time, whereas the variable only gets defined at runtime. Also not possible is
- name: Import playbook
  import_playbook: "{{ to_import }}.yml"
  vars:
    to_import: "{{ example_fqdn }}"
as the import is static, not dynamic. Importing playbooks and Re-using playbooks do not seem to work in that way.
What actually works is
- name: Import playbook
  import_playbook: test.example.com.yml
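One workaround, not from the original answer and only a sketch: variables passed on the command line with --extra-vars are already known at parse time, so a playbook name built from them can be imported (target_fqdn and site.yml are assumed names here):
- name: Import playbook
  import_playbook: "{{ target_fqdn }}_server_config.yml"
invoked as
ansible-playbook site.yml -e target_fqdn=server1.example.com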
Further Questions and Answers
Ansible: import_playbook fails with variable undefined error
Ansible: Skip import_playbook with variable definition
What's the difference between include_tasks and import_tasks
I am writing a parent Ansible role that runs another role through import_role. The idea is that this sibling role (staticdev.pyenv) only runs when an argument pyenv_python_versions is passed; otherwise it is skipped.
According to the official documentation, I tried the following approach:
parent/tasks/main.yml
---
- name: Install pyenv
  import_role:
    name: staticdev.pyenv
  vars:
    pyenv_owner: "{{ ansible_env.USER }}"
    pyenv_path: "{{ ansible_env.HOME }}/pyenv"
    pyenv_global: "{{ pyenv_global }}"
    pyenv_python_versions: "{{ pyenv_python_versions }}"
    pyenv_virtualenvs: []
  when: pyenv_python_versions
I am currently using Ansible 4.1.0 (core 2.11.1), and when I test it on Debian 11 (image: cisagov/docker-debian11-ansible:latest) it executes the role anyway, even without any value for pyenv_python_versions. The when condition is not being considered, and I also tried include_role. Complete logs can be found here.
Any idea?
UPDATE: changed the when condition to pyenv_python_versions as suggested by #lonetwin.
The problem was that the role import was replicating variables from the imported role (pyenv_global, pyenv_python_versions and pyenv_virtualenvs); in this case you solve it just by omitting the imported role's params (they will be overwritten if you create new defaults for them).
Solution:
---
- name: Install pyenv
  import_role:
    name: staticdev.pyenv
  vars:
    pyenv_owner: "{{ ansible_env.USER }}"
    pyenv_path: "{{ ansible_env.HOME }}/pyenv"
  when: pyenv_python_versions
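A minimal sketch of the "new defaults" mentioned above, assuming they live in the parent role (the file path and the values are assumptions, not taken from the imported role):
# parent/defaults/main.yml (assumed)
pyenv_global: system
pyenv_python_versions: []
pyenv_virtualenvs: []
With an empty pyenv_python_versions list, the condition when: pyenv_python_versions evaluates as falsy and the imported role is skipped; a caller opts in by overriding that default with a non-empty list of versions.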
I have Ansible 2.4.0. I am trying to configure the Jenkins proxy based on the Ansible Jenkins DevOps Roles documentation:
- hosts: master
  roles:
    - jenkins_configure_proxy:
        jenkins_home: "{{ jenkins_home }}"
        proxy_host: "{{ proxy_host }}"
        proxy_port: "{{ proxy_port }}"
  become: true
  environment: "{{proxy_env}}"
When trying to execute ansible-playbook I get:
ERROR! role definitions must contain a role name
The error appears to have been in '/Users/me/projects/jenkins/jenkins.yml': line 10, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
  roles:
    - jenkins_configure_proxy:
      ^ here
That's pretty strange, because other roles like jenkins_plugin work fine.
What am I doing wrong?
You use the role syntax without parameters to apply a role with parameters; see this example:
- roles:
    # without or with default parameters
    - jenkins_configure_proxy
    # with custom parameters
    - role: jenkins_configure_proxy
      jenkins_home: "{{ jenkins_home }}"
      proxy_host: "{{ proxy_host }}"
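Applied to the play from the question, a sketch keeping your become and environment settings could look like this:
- hosts: master
  become: true
  environment: "{{ proxy_env }}"
  roles:
    - role: jenkins_configure_proxy
      jenkins_home: "{{ jenkins_home }}"
      proxy_host: "{{ proxy_host }}"
      proxy_port: "{{ proxy_port }}"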
My folder structure:
First I'll give you this so you can see how this is laid out and reference it when reading below:
/environments
    /development
        hosts                    // Inventory file
        /group_vars
            proxies.yml
        /custom_tasks
            firewall_rules.yml   // File I'm trying to bring in
playbook.yml                     // Root playbook, just brings in the plays
rev-proxy.yml                    // Reverse-proxy playbook, included by playbook.yml
playbook.yml:
---
- include: webserver.yml
- include: rev-proxy.yml
proxies.yml just contains firewall_custom_include_file: custom_tasks/firewall_rules.yml
firewall_rules.yml:
tasks:
  - name: "Allowing traffic from webservers on 80"
    ufw: src=10.10.10.3, port=80, direction=in, rule=allow
  - name: "Allowing traffic all on 443"
    ufw: port=443, rule=allow
and finally rev-proxy.yml play:
---
- hosts: proxies
  become: yes
  roles:
    - { role: firewall }
    - { role: geerlingguy.nginx }
  pre_tasks:
    # jessie-backports for nginx-extras 1.10
    - name: "Adding jessie-backports repo"
      copy: content="deb http://ftp.debian.org/debian jessie-backports main" dest="/etc/apt/sources.list.d/jessie-backports.list"
    - name: Updating apt-cache.
      apt: update_cache="yes"
    - name: "Installing htop"
      apt:
        name: htop
        state: present
    - name: "Copying SSL certificates"
      copy: src=/vagrant/ansible/files/ssl/ dest=/etc/ssl/certs force=no
  tasks:
    - name: "Including custom firewall rules."
      include: "{{ inventory_dir }}/{{ firewall_custom_include_file }}.yml"
      when: firewall_custom_include_file is defined
  vars_files:
    - ./vars/nginx/common.yml
    - ./vars/nginx/proxy.yml
What I'm trying to do:
Using Ansible 2.2.1.0
I'm trying to include a list of tasks that will be run if a variable firewall_custom_include_file is set. The list is included relative to the inventory directory by doing "{{ inventory_dir }}/{{ firewall_custom_include_file }}.yml" - in this case that works out to /vagrant/ansible/environments/development/custom_tasks/firewall_rules.yml
Essentially the idea here is that I need to have different firewall rules be executed based on what environment I'm in, and what hosts are being provisioned.
To give a simple example: I might want to whitelist a database server IP on the production webserver, but not on the reverse proxy, and also not on my development box.
The problem:
Whenever I include firewall_rules.yml like above, it tells me:
TASK [Including custom firewall rules.] ****************************************
fatal: [proxy-1]: FAILED! => {"failed": true, "reason": "included task files must contain a list of tasks"}
I'm not sure what it's expecting. I tried taking out the tasks: at the beginning of the file, making it:
- name: "Allowing traffic from webservers on 80"
ufw: src=10.10.10.3, port=80, direction=in, rule=allow
- name: "Allowing traffic all on 443"
ufw: port=443, rule=allow
But then it gives me the error:
root@ansible-control:/vagrant/ansible# ansible-playbook -i environments/development playbook.yml
ERROR! Attempted to execute "/vagrant/ansible/environments/development/custom_tasks/firewall_rules.yml" as inventory script: problem running /vagrant/ansible/environments/development/custom_tasks/firewall_rules.yml --list ([Errno 8] Exec format error)
Attempted to read "/vagrant/ansible/environments/development/custom_tasks/firewall_rules.yml" as YAML: 'AnsibleSequence' object has no attribute 'keys'
Attempted to read "/vagrant/ansible/environments/development/custom_tasks/firewall_rules.yml" as ini file: /vagrant/ansible/environments/development/custom_tasks/firewall_rules.yml:2: Expected key=value host variable assignment, got: name:
At this point I'm not really sure what it's looking for in the included file, and I can't seem to really find clear documentation on this, or other people having this issue.
Try executing with -i environments/development/hosts instead of the directory.
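For example, assuming you start it from the project root as in your output:
ansible-playbook -i environments/development/hosts playbook.yml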
But I bet that storing task files inside the inventory is far from best practice.
You may want to define the list of custom rules as an inventory variable, e.g.:
custom_rules:
  - src: 10.10.10.3
    port: 80
    direction: in
    rule: allow
  - port: 443
    rule: allow
And instead of the include task, do something like this:
- ufw:
    port: "{{ item.port | default(omit) }}"
    rule: "{{ item.rule | default(omit) }}"
    direction: "{{ item.direction | default(omit) }}"
    src: "{{ item.src | default(omit) }}"
  with_items: "{{ custom_rules }}"
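With the layout from the question, one possible place for that variable (a sketch, assuming you keep per-environment group_vars) is environments/development/group_vars/proxies.yml:
# environments/development/group_vars/proxies.yml
custom_rules:
  - src: 10.10.10.3
    port: 80
    direction: in
    rule: allow
  - port: 443
    rule: allow
Each environment can then carry its own custom_rules list instead of its own task file.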