List extra-vars provided to the playbook - ansible

I have the following scaled-down version of a playbook, and I need to find out the list of extra vars passed to it. This is needed because sometimes a user, through a typing mistake, provides an incorrect extra-var name, and the playbook silently ignores the misspelled variable. So I need to make sure the user provides the correct extra-var name.
---
- hosts: localhost
  vars:
    restart: 'on'
  tasks:
    - name: "WHEN CONDITION CHECK"
      shell: echo reboot
      when: "'off' in restart"
Example:
#This should run
ansible-playbook playbook.yml -e restart=off
#This should get detected, like some extra var is passed.
ansible-playbook playbook.yml -e restarttt=off
Note: I already know about the "is not defined" check; I am trying to find the list of vars. Since I also want to use defaults, I cannot rely on an "is not defined" check.
I am expecting the following output:
restart
restarttt

If you run the code below
- hosts: all
  gather_facts: True
  tasks:
    - name: host
      debug: var=vars
You will find your extra vars in the JSON structure; they show up both at the 'root' of the tree and under the hostvars 'branch'.
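For instance, a minimal sketch (reusing the restart variable from the question; this task is not part of the original answer) showing that the same extra var is reachable from both places:

- debug:
    msg:
      - "root: {{ vars['restart'] | default('not set') }}"
      - "hostvars: {{ hostvars[inventory_hostname]['restart'] | default('not set') }}"

With -e restart=off both lines print off; without it, both fall back to the default marker.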
Hope it helps.

You may leverage the /proc/<process-pid>/cmdline file to read the command-line arguments passed to the ansible-playbook process.
---
- hosts: localhost
  gather_facts: False
  tasks:
    - name: "Get the command line args"
      shell: "cat -A /proc/{{ lookup('pipe', 'echo $PPID') }}/cmdline | sed -r 's/\\^@/ /g'"
      register: cmd_line
      delegate_to: localhost

    - name: "Print the arguments passed to the playbook"
      debug:
        msg: "{{ cmd_line.stdout.split(' ') | select() }}"
Example:
ansible-playbook foo.yml -i inventory.yml -e foo=123 -e 'bar=0 zoo=1'
PLAY [localhost] ********************************************************************************************************************************************************************************************************************
TASK [Get the command line args] ****************************************************************************************************************************************************************************************************
changed: [127.0.0.1 -> localhost]
TASK [Print the arguments passed to the playbook] ***********************************************************************************************************************************************************************************
ok: [127.0.0.1] => {
    "msg": [
        "/usr/bin/python3",
        "/home/p/.local/bin/ansible-playbook",
        "foo.yml",
        "-i",
        "inventory.yml",
        "-e",
        "foo=123",
        "-e",
        "bar=0",
        "zoo=1"
    ]
}
Once you have the list of args passed to ansible-playbook, you may write whatever logic is needed; for example, the sketch below extracts just the variable names.
Note: the file /proc/<pid>/cmdline contains a NUL-separated list of arguments, so you might want to write more robust parsing logic if this approach breaks at any point.
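As one hedged illustration (not from the original answer): assuming every extra var is passed inline as key=value (not as @file or inline JSON), you can strip the values off the key=value tokens and print only the names the user actually typed, misspelled or not.

- name: "Print the name of every extra var passed on the command line"
  debug:
    msg: "{{ item | regex_replace('=.*$', '') }}"
  # Keep only key=value tokens; flags like -e/-i start with '-' and are skipped.
  loop: "{{ cmd_line.stdout.split(' ') | select() | list }}"
  when:
    - "'=' in item"
    - "not item.startswith('-')"

Run against the example above, this should print foo, bar and zoo, and it would surface restarttt in the misspelling case, ready to be compared against a whitelist of supported names.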

Passing hostname to ansible playbook through extravars

I have to pass the host on which the Ansible command will be executed through extra vars.
I don't know in advance to which hosts the tasks will be applied to, and, therefore, my inventory file is currently missing the hosts: variable.
If I understood from the article "How to pass extra variables to an Ansible playbook" correctly, overwriting hosts is only possible by having already composed groups of hosts.
From the post Ansible issuing warning about localhost I gathered that referencing hosts to be managed in an Ansible inventory is a must; however, I still have doubts about it, since the usage of extra vars was not mentioned in that question.
So my question is: what can I do in order to make this playbook work?
- hosts: "{{ host }}"
  tasks:
    - name: KLIST COMMAND
      command: klist
      register: klist_result

    - name: TEST COMMAND
      ansible.builtin.shell: echo hi > /tmp/test_result.txt
... referencing hosts to be managed in an Ansible inventory is a must
Yes, that's the case. Regarding your question
What can I do in order to make this playbook work? (annot. without a "valid" inventory file)
you could try with the following workaround.
---
- hosts: localhost
  become: false
  gather_facts: false
  tasks:
    - add_host:
        hostname: "{{ target_hosts }}"
        group: dynamic

- hosts: dynamic
  become: true
  gather_facts: true
  tasks:
    - name: Show hostname
      shell:
        cmd: "hostname && who am i"
      register: result

    - name: Show result
      debug:
        var: result
A call with
ansible-playbook hosts.yml --extra-vars="target_hosts=test.example.com"
results in the following execution:
TASK [add_host] ***********
changed: [localhost]
PLAY [dynamic] ************
TASK [Show hostname] ******
changed: [test.example.com]
In any case it is recommended to check how to build your inventory.
Further Documentation
add_host module – Add a host (and alternatively a group) to the ansible-playbook in-memory inventory
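If target_hosts may hold more than one host, a hedged variation of the workaround above (assuming a comma-separated string such as target_hosts=host1,host2) is to loop over the split value:

    - add_host:
        hostname: "{{ item }}"
        group: dynamic
      # One in-memory inventory entry per comma-separated name.
      loop: "{{ target_hosts.split(',') }}"

Each iteration adds one entry to the in-memory inventory, so the dynamic group ends up containing every listed host.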

How to use a variable defined in a previous task for use in a task where the conditional omits that host?

Effectively, I have two servers, and I am trying to use the output of a command from one of them to configure the other, and vice versa. I spent a few hours reading on this and found out that the hostvars process with dummy hosts is seemingly what I want. No matter how I try to implement this process, I still get undefined variables and/or failures from the host(s) not being in the pattern for the task.
Here is the relevant block, where only hosts mux-ds1 and mux-ds2 are in the dispatchers group:
---
- name: Play that sets up the sql database during the build process on all mux dispatchers.
  hosts: mux_dispatchers
  remote_user: ansible
  vars:
    ansible_ssh_pipelining: yes
  tasks:
    - name: Check and save database master bin log file and position on mux-ds2.
      shell: sudo /usr/bin/mysql mysql -e "show master status \G" | grep -E 'File:|Position:' | cut -d{{':'}} -f2 | awk '{print $1}'
      become: yes
      become_method: sudo
      register: syncds2
      when: ( inventory_hostname == 'mux-ds2' )

    - name: Print current ds2 database master bin log file.
      debug:
        var: "syncds2.stdout_lines[0]"

    - name: Print current ds2 database master bin position.
      debug:
        var: "syncds2.stdout_lines[1]"

    - name: Add mux-ds2 some variables to a dummy host allowing us to use these variables on mux-ds1.
      add_host:
        name: "ds2_bin"
        bin_20: "{{ syncds2.stdout_lines }}"

    - debug:
        var: "{{ hostvars['ds2_bin']['bin_21'] }}"

    - name: Compare master bin variable output for ds1's database and if different, configure for it.
      shell: sudo /usr/bin/mysql mysql -e "stop slave; change master to master_log_file='"{{ hostvars['ds2_bin']['bin_21'][0] }}"', master_log_pos="{{ hostvars['ds2_bin']['bin_21'][1] }}"; start slave"
      become: yes
      become_method: sudo
      register: syncds1
      when: ( inventory_hostname == 'mux-ds1' )
Basically everything works properly up to where I try to see the value of the variable from the dummy host with the debug module, but it tells me the variable is still undefined, even though it was defined when registered. This is supposed to be the system for getting around such problems:
TASK [Print current ds2 database master bin log file.] **************************************************
ok: [mux-ds1] => {
    "syncds2.stdout_lines[0]": "VARIABLE IS NOT DEFINED!"
}
ok: [mux-ds2] => {
    "syncds2.stdout_lines[0]": "mysql-bin.000001"
}
TASK [Print current ds2 database master bin position.] **************************************************
ok: [mux-ds1] => {
    "syncds2.stdout_lines[1]": "VARIABLE IS NOT DEFINED!"
}
ok: [mux-ds2] => {
    "syncds2.stdout_lines[1]": "107"
}
The above works as I intend, and has the variables populated and referenced properly for mux-ds2.
TASK [Add mux-ds2 some variables to a dummy host allowing us to use these variables on mux-ds1.] ********
fatal: [mux-ds1]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'stdout_lines'\n\nThe error appears to be in '/home/ansible/ssn-project/playbooks/i_mux-sql-config.yml': line 143, column 8, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Add mux-ds2 some variables to a dummy host allowing us to use these variables on mux-ds1.\n ^ here\n"}
This is where the issue is: the variable seems to be magically undefined again, which is odd, given this process is designed to end-run exactly that problem. I can't even make it to the second set of debug tasks.
Note that this is ultimately for the purpose of syncing up two master/master replication MySQL databases. I'm also doing this with the shell module because the MySQL version which must be used can be no higher than 5.8, while the Ansible module requires 5.9, which is a shame. The same process will be done for mux-ds2 in reverse as well, assuming this can be made to work.
Either I'm making a mistake in this implementation which keeps it from functioning, or I'm using the wrong implementation for what I want. I've spent too much time trying to figure this out alone and would appreciate any solution that works. Thanks in advance!
It seems like you are going down a complicated route, when simple task delegation plus the special variable hostvars (to fetch a fact from a different node) should give you what you expect. (Note in passing that your add_host task sets bin_20 while the later tasks read bin_21, and that syncds2 is skipped on mux-ds1, so it has no stdout_lines there, which is exactly what the error reports.)
Here is an example focusing just on the important part, so you might want to add the become and become_method back in there:
- shell: >-
    sudo /usr/bin/mysql mysql -e "show master status \G"
    | grep -E 'File:|Position:'
    | cut -d{{':'}} -f2
    | awk '{print $1}'
  delegate_to: mux-ds2
  run_once: true
  register: syncds2

- shell: >-
    sudo /usr/bin/mysql mysql -e "stop slave;
    change master to
    master_log_file='"{{ hostvars['mux-ds2'].syncds2.stdout_lines.0 }}"',
    master_log_pos="{{ hostvars['mux-ds2'].syncds2.stdout_lines.1 }}";
    start slave"
  delegate_to: mux-ds1
  run_once: true
Here is an example, running some dummy shell tasks, given the playbook:
- hosts: node1, node2
  gather_facts: no
  tasks:
    - shell: |
        echo 'line 0'
        echo 'line 1'
      delegate_to: node2
      run_once: true
      register: master_config

    - shell: |
        echo '{{ hostvars.node2.master_config.stdout_lines.0 }}'
        echo '{{ hostvars.node2.master_config.stdout_lines.1 }}'
      delegate_to: node1
      run_once: true
      register: master_replicate_config

    - debug:
        var: master_replicate_config.stdout_lines
      delegate_to: node1
      run_once: true
This would yield:
PLAY [node1, node2] **********************************************************
TASK [shell] *****************************************************************
changed: [node1 -> node2(None)]
TASK [shell] *****************************************************************
changed: [node1]
TASK [debug] *****************************************************************
ok: [node1] =>
  master_replicate_config.stdout_lines:
  - line 0
  - line 1

Ansible: What is the variable order/precedence in case of multiple 'extra_vars' files?

In this older question about "Can extra_vars receive multiple files?", the original poster answered the question, saying that multiple vars files could be accomplished by just using multiple --extra-vars parameters.
The followup question that I have is that, in such a case, where the ansible-playbook command line has two --extra-vars parameters, each pointing to a different file, what is the order or precedence of those files?
Also, what happens if both files have the same var name (e.g., my_host) in them?
For example, say I have 2 files, extraVars1.yml and extraVars2.yml and in the ansible-playbook command line I have:
ansible-playbook... --extra-vars "@extraVars1.yml" --extra-vars "@extraVars2.yml"
and the extraVars1.yml file has:
my_host: 1.2.3.4
and the extraVars2.yml file has:
my_host: 5.6.7.8
What will the value of the my_host var be when the playbook is run?
Thanks!
Jim
According to the Ansible documentation about Using Variables and Understanding variable precedence:
extra vars (for example, -e "user=my_user") (always win precedence)
In general, Ansible gives precedence to variables that were defined more recently ...
This means the last one defined wins.
Let's run a short test with a vars.yml playbook.
---
- hosts: localhost
  become: false
  gather_facts: false
  vars:
    my_host: 9.0.0.0
  tasks:
    - name: Show value
      debug:
        msg: "{{ my_host }}"
The execution of ansible-playbook vars.yml results in the output
TASK [Show value] ***
ok: [localhost] =>
  msg: 9.0.0.0
The execution of ansible-playbook -e "@extraVars1.yml" vars.yml results in the output
TASK [Show value] ***
ok: [localhost] =>
  msg: 1.2.3.4
The execution of ansible-playbook -e "@extraVars1.yml" -e "@extraVars2.yml" vars.yml results in the output
TASK [Show value] ***
ok: [localhost] =>
  msg: 5.6.7.8
The execution of ansible-playbook -e "@extraVars2.yml" -e "@extraVars1.yml" vars.yml results in the output
TASK [Show value] ***
ok: [localhost] =>
  msg: 1.2.3.4

How to set an Ansible extra variable to answer a pause prompt in a playbook

We use an Ansible playbook to run a few commands like version_get and device_status on the target machine. We have intentionally restricted the reboot option from being executed. But occasionally we would like to automatically answer the yes prompt by setting some variable in the --extra-vars option.
A simple representation of our playbook minimized to run on localhost.
---
- hosts: localhost
  gather_facts: no
  tasks:
    - name: Confirm Execution
      pause:
        prompt: "You are about to execute a '{{cmd}}' command on the device. Enter 'yes' to proceed"
      register: pause_result
      run_once: True
      when: not (cmd|regex_search('(get|status)'))

    - meta: end_play
      when: pause_result.user_input|default('yes') != 'yes'
I know I could add reboot to the existing get|status list, but I don't want to do that, because I want users to exercise special precaution when running it. So with the current code as above, if I run reboot I'm left with a prompt like
$ ansible-playbook loc.yml -e 'cmd=reboot'
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] ***********************************************************************************************************************************************************
TASK [Confirm Execution] ***************************************************************************************************************************************************
[Confirm Execution]
You are about to execute a 'reboot' command on this device. Enter 'yes' to proceed:
I just don't know how to set a variable to automatically answer this prompt. I tried piping echo yes into the playbook and saw the error
$ echo yes | ansible-playbook loc.yml -e 'cmd=reboot'
[WARNING]: Not waiting for response to prompt as stdin is not interactive
I also tried to pass the --extra-vars as below, but none of them seemed to work:
-e 'cmd=reboot {"pause_result": "yes"}'
-e 'cmd=reboot pause_result=yes'
I would simply use another var and make the prompt conditional on it. A few lines of code being more expressive than a long speech:
The playbook
---
- hosts: localhost
  gather_facts: no
  tasks:
    - name: Confirm Execution
      pause:
        prompt: "You are about to execute a '{{cmd}}' command on the device. Enter 'yes' to proceed"
      register: pause_result
      run_once: True
      when:
        - not (cmd | regex_search('(get|status)'))
        - not (skip_confirm | default(false) | bool)

    - meta: end_play
      when: pause_result.user_input | default('yes') != 'yes'

    - name: dummy task to see if end of play was effective
      debug:
        msg: "In real world, I would play {{ cmd }}"
Example calls:
$ ansible-playbook test.yml -e cmd=reboot
$ ansible-playbook test.yml -e cmd=reboot -e skip_confirm=true
If you do not want to introduce a new var, you can use the existing one but that will still require a modification of your current playbook.
The when clause of your prompt should become:
when:
  - not (cmd | regex_search('(get|status)'))
  - pause_result is not defined
And the call:
$ ansible-playbook test.yml -e cmd=reboot -e "pause_result={user_input: yes}"

Override hosts variable of Ansible playbook from the command line

This is a fragment of a playbook that I'm using (server.yml):
- name: Determine Remote User
  hosts: web
  gather_facts: false
  roles:
    - { role: remote-user, tags: [remote-user, always] }
My hosts file has different groups of servers, e.g.
[web]
x.x.x.x
[droplets]
x.x.x.x
Now I want to execute ansible-playbook -i hosts/<env> server.yml and override hosts: web from server.yml to run this playbook for [droplets].
Can I just override it as a one-off, without editing server.yml directly?
Thanks.
I don't think Ansible provides this feature, which it should. Here's something that you can do:
hosts: "{{ variable_host | default('web') }}"
and you can pass variable_host either from the command line or from a vars file, e.g.:
ansible-playbook server.yml --extra-vars "variable_host=newtarget(s)"
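The vars-file variant of the same call uses the @file form shown in the precedence question above (the file name vars.yml is an assumption here). With a file containing:
variable_host: droplets
the call becomes:
ansible-playbook server.yml --extra-vars "@vars.yml"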
For anyone who might come looking for the solution.
Play Book
- hosts: '{{ host }}'
  tasks:
    - debug: msg="Host is {{ ansible_fqdn }}"
Inventory
[web]
x.x.x.x
[droplets]
x.x.x.x
Command: ansible-playbook deployment.yml -i hosts --extra-vars "host=droplets"
So you can specify the group name in the extra vars.
We use a simple fail task to force the user to specify the Ansible limit option, so that we don't execute on all hosts by default/accident.
The easiest way I found is this:
---
- name: Force limit
  # 'all' is okay here, because the fail task will force the user
  # to specify a limit on the command line, using -l or --limit
  hosts: 'all'
  tasks:
    - name: checking limit arg
      fail:
        msg: "you must use -l or --limit - when you really want to use all hosts, use -l 'all'"
      when: ansible_limit is not defined
      run_once: true
Now we must use the -l (= --limit option) when we run the playbook, e.g.
ansible-playbook playbook.yml -l www.example.com
Limit option docs:
Limit to one or more hosts. This is required when one wants to run a playbook against a host group, but only against one or more members of that group.
Limit to one host
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit "host1"
Limit to multiple hosts
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit "host1,host2"
Negated limit. NOTE: single quotes MUST be used to prevent bash interpolation.
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit 'all:!host1'
Limit to host group
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit 'group1'
This is a bit late, but I think you could use the --limit or -l option to limit the pattern to more specific hosts. (version 2.3.2.0)
You could have
- hosts: all (or group)
  tasks:
    - some_task
and then ansible-playbook playbook.yml -l some_more_strict_host_or_pattern
and use the --list-hosts flag to see on which hosts this configuration would be applied.
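For example (host pattern assumed), this prints the matched hosts without executing a single task:
ansible-playbook playbook.yml -l some_more_strict_host_or_pattern --list-hosts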
Another solution is to use the special variable ansible_limit, which contains the --limit CLI option for the current execution of Ansible.
- hosts: "{{ ansible_limit | default(omit) }}"
If the --limit option is omitted, then Ansible issues a warning but does nothing, since no host matched.
[WARNING]: Could not match supplied host pattern, ignoring: None
PLAY ****************************************************************
skipping: no hosts matched
I'm using another approach that doesn't need any inventory and works with this simple command:
ansible-playbook site.yml -e working_host=myhost
To perform that, you need a playbook with two plays:
- the first play runs on localhost and adds a host (from the given variable) to a known group in the in-memory inventory
- the second play runs on this known group
A working example (copy it and run it with the previous command):
- hosts: localhost
  connection: local
  tasks:
    - add_host:
        name: "{{ working_host }}"
        groups: working_group
      changed_when: false

- hosts: working_group
  gather_facts: false
  tasks:
    - debug:
        msg: "I'm on {{ ansible_host }}"
I'm using ansible 2.4.3 and 2.3.3
I changed mine to default to no host and added a check to catch it. That way the user or cron is forced to provide a single host or group, etc. I like the logic from the comment from @wallydrag. The empty_group contains no hosts in the inventory.
- hosts: "{{ variable_host | default('empty_group') }}"
Then add the check in tasks:
tasks:
  - name: Fail script if required variable_host parameter is missing
    fail:
      msg: "You have to add the --extra-vars='variable_host='"
    when: (variable_host is not defined) or (variable_host == "")
Just came across this while googling for a solution. Actually, there is one in Ansible 2.5. You can specify your inventory file with --inventory, like this: ansible --inventory configs/hosts --list-hosts all
If you want to run a task that's associated with a host, but on a different host, you should try delegate_to.
In your case, you should delegate to your localhost (the Ansible control node) and call the ansible-playbook command from there.
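A minimal sketch of that idea (the wrapper playbook, the server.yml name and the target_host variable are assumptions, not from the original answers):

- hosts: localhost
  gather_facts: false
  tasks:
    - name: Run the real playbook, limited to the host passed via extra vars
      # The -l limit restricts server.yml to the given target.
      command: "ansible-playbook server.yml -l {{ target_host }}"
      delegate_to: localhost

called with, e.g., ansible-playbook wrapper.yml -e target_host=web1.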
I am using ansible 2.5 (2.5.3 exactly), and it seems that the vars file is loaded before the hosts param is evaluated. So you can set the host in a vars.yml file and just write hosts: "{{ host_name }}" in your playbook.
For example, in my playbook.yml:
---
- hosts: "{{ host_name }}"
  become: yes
  vars_files:
    - vars/project.yml
  tasks:
    ...
And inside vars/project.yml:
---
# general
host_name: your-fancy-host-name
Here's a cool solution I came up with to safely specify hosts via the --limit option. In this example, the play will end if the playbook was executed without any hosts specified via the --limit option.
This was tested on Ansible version 2.7.10
---
- name: Playbook will fail if hosts not specified via --limit option.
  # Hosts must be set via limit.
  hosts: "{{ play_hosts }}"
  connection: local
  gather_facts: false
  tasks:
    - set_fact:
        inventory_hosts: []

    - set_fact:
        inventory_hosts: "{{ inventory_hosts + [item] }}"
      with_items: "{{ hostvars.keys() | list }}"

    - meta: end_play
      when: "(play_hosts | length) == (inventory_hosts | length)"

    - debug:
        msg: "About to execute tasks/roles for {{ inventory_hostname }}"
This worked for me, as I am using Azure DevOps to deploy an application through CI/CD pipelines. I had to make the hosts value (in the YAML file) more dynamic so that I can set its value in the release pipeline, for example:
--extra-vars "host=$(target_host)"
My ansible playbook looks like this
- name: Apply configuration to test nodes
  hosts: '{{ host }}'
