Ansible filter with json_query

I write this:
- name: test for seed
  debug:
    var: hostvars|json_query("*.ansible_host")
And it prints every host. But this does not filter hosts:
- name: test for seed
  debug:
    var: hostvars|json_query("*[?ansible_host=='192.168.56.101']")
It just prints an empty list, while I'm sure this host exists. This is the relevant inventory line:
[build-servers]
build-server ansible_host=192.168.56.101
Am I doing something wrong?

You should filter the resulting list, not the original hash: * | [?ansible_host=='192.168.56.101']
P.S. You usually don't want to use the var option of the debug module to print Jinja expressions; use msg instead.
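Putting both hints together, a minimal sketch of a working task (assuming the inventory from the question; the json_query filter requires the jmespath Python library on the controller):
- name: test for seed
  vars:
    query: "* | [?ansible_host=='192.168.56.101']"
  debug:
    msg: "{{ hostvars | json_query(query) }}"
The pipe in the JMESPath expression first flattens hostvars into a list of per-host variable dicts, then applies the filter to that list.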

How can I put the discovered values into loop variables so that they are on one line

How can I put the discovered values into loop variables so that they are on one line using an Ansible task? I now have a task like this:
- name: Updating test.conf
  lineinfile:
    path: "/root/test.conf"
    regexp: "test="
    line: "test={{ hostvars[item]['ansible_env'].SSH_CONNECTION.split(' ')[2] }}"
    state: present
  with_nested:
    - "{{ groups['app'] }}"
When the job is invoked, it should take the IP addresses of the servers in the app group and put them on a single line. Currently it performs the substitution once per host, each run replacing the previous one, so in the end there is only one address in the test parameter.
I need format after task like this:
test=1.1.1.1, 2.2.2.2
While Jinja2 is heavily inspired by Python, it does not allow all the same operations. To join a list you would have to do something like:
- debug:
    msg: "{{ myvar | join(',') }}"
  vars:
    myvar:
      - foo
      - bar
When in doubt, always use a simple playbook with a debug task to validate your Jinja code.
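For the original question, one possible sketch builds the whole list first and writes it in a single pass (this assumes facts have been gathered for every host in groups['app']; the regex that extracts the third field of SSH_CONNECTION is my own addition, not part of the answer above):
- name: Updating test.conf
  vars:
    app_ips: >-
      {{ groups['app']
         | map('extract', hostvars, ['ansible_env', 'SSH_CONNECTION'])
         | map('regex_replace', '^(\\S+)\\s+(\\S+)\\s+(\\S+).*$', '\\3')
         | join(', ') }}
  lineinfile:
    path: "/root/test.conf"
    regexp: "^test="
    line: "test={{ app_ips }}"
    state: present
  run_once: true
run_once avoids rewriting the same line once per target host, which is what caused the repeated substitution in the question.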

Ansible, counting occurrences of a word in a string

I am fairly new to Ansible, and I have been googling the particular issue described below:
- name: GETTING OUTPUT AND STORING IT INTO A VARIABLE
  connection: network_cli
  cli_command:
    command: show configuration interface ge-0/0/0 | display set | match unit
  register: A
Above, the task runs the command show configuration interface ge-0/0/0 on a Juniper router; the output will contain a number of occurrences of the keyword unit. This output is then stored in a variable A.
I want to count the number of occurrences of the keyword unit in the output and store it in a variable COUNT. How can I do that? I just need an example.
Thanks and have a good weekend!
If you have this task:
- name: get output and store
  connection: network_cli
  cli_command:
    command: show configuration interface ge-0/0/0 | display set | match unit
  register: show_config_result
Then you use a subsequent set_fact task to store the value you want in a variable:
- name: store unit count in unit_count variable
  set_fact:
    unit_count: "{{ (show_config_result.stdout_lines | length) - 1 }}"
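The line count above assumes exactly one unit per matching line plus one extra line in the output. A more direct sketch counts the occurrences themselves with the regex_findall filter (variable names taken from the answer above):
- name: store unit count in unit_count variable
  set_fact:
    unit_count: "{{ show_config_result.stdout | regex_findall('unit') | length }}"
This counts every occurrence of the word, even if several appear on the same line.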

How to use variables between different roles in ansible

My playbook structure looks like:
- hosts: all
  name: all
  roles:
    - roles1
    - roles2
In tasks of roles1, I define such a variable
---
# tasks for roles1
- name: Get the zookeeper image tag # rel3.0
  run_once: true
  shell: echo '{{ item.split(":")[-1] }}' # here the string rel3.0 is extracted as expected
  with_items: "{{ ret.stdout.split('\n') }}"
  when: "'zookeeper' in item"
  register: zk_tag
ret.stdout:
Loaded image: test/old/kafka:latest
Loaded image: test/new/mysql:v5.7
Loaded image: test/old/zookeeper:rel3.0
In tasks of roles2, I want to use the zk_tag variable
- name: Test if the variable zk_tag can be used in roles2
  debug: var={{ zk_tag.stdout }}
Error :
The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'stdout'
I think I encountered the following two problems:
When a variable is registered with register and the task has a when condition, the variable is not available on all hosts. How do I solve this and make the variable available to all groups?
And, as my title says: how do I use variables between different roles in Ansible?
You're most likely starting a new playbook for a new host, meaning all previously collected vars are lost.
What you can do is pass a var to another host with the add_host module.
- name: Pass variable from this play to the other host in the same play
  add_host:
    name: hostname2
    var_in_play_2: "{{ var_in_play_1 }}"
--- EDIT ---
It's a bit unclear. Why do you use the when statement in the first place if you want the variable to be available on every host in the play?
You might want to use the group_vars/all.yml file to place vars in.
Also, using add_host should be the way to go as how I read it. Can you post your playbook, and the outcome of your playbook on a site, e.g. pastebin?
If there is any chance the var is undefined because of a when condition, you should use a default value to force the var to be defined when using it. While you are at it, use the debug module for your tests rather than echoing something in a shell:
- name: Debug my var
  debug:
    msg: "{{ docker_exists | default(false) }}"
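For the original zk_tag problem, one possible sketch (an assumption on my part, not part of either answer) copies the registered result from the first play host onto every host with set_fact, after which any later role can read it:
- name: Share zk_tag with all hosts
  set_fact:
    zk_tag: "{{ hostvars[ansible_play_hosts | first]['zk_tag'] }}"
This relies on the register having run on the first host of the play; combine it with a default filter if the registering task may have been skipped there.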

how to declare a dictionary using another variable

I need to declare a dictionary from Ansible facts. The issue is that I need to pass a string value to create this dictionary, but when I try to pass this variable it creates a string rather than a dictionary.
I also tried using set_fact to create the variable and still could not find the solution.
- name: get part name
  set_fact:
    device_name: "nvme1n1"
- setup:
    filter: ansible_devices
  register: detail
- set_fact:
    part_dict: "{{detail.ansible_facts.ansible_devices.{{device_name}}.partitions}}"
- debug:
    var: part_dict
When I use the above code, the output is a string:
TASK [debug] *******************************************************************************************************************************
ok: [10.95.198.103] => {
"part_dict": "detail.ansible_facts.ansible_devices.\"nvme1n1\".partitions"
}
But when I hardcode the device name, I get the dictionary. How do I correct the syntax so that I get the dictionary by passing the key as a variable?
Map elements can be accessed through different syntaxes:
Option 1:
variable.element1.element2
Option 2:
variable['element1']['element2']
Mixed:
variable['element1'].element2
In your case, you simply have to slightly change the syntax so that Ansible does not get confused (it even fires an error in the test I made with Ansible 2.7.8 + Jinja2 2.10):
part_dict: "{{ detail.ansible_facts.ansible_devices[device_name].partitions }}"

Create host-specific facts with an ansible custom module

I'm writing an Ansible custom module that does the same as the uri module, with some additional features.
In particular, I would like the data returned to be available either as a global var or as a host-specific var
(where the host is the inventory host running the task),
based on a value passed to the module.
However, no matter what I do, it seems that I can only create a global variable.
Let's say that I run the playbook for N hosts and execute the custom module only once (run_once: yes).
[...]
- name: Run the custom module and generate output in custom_var
  run_once: yes
  custom_module:
    field1: ...
    field2: ...
    global_data: yes # yes, to create a global fact called custom_var
- name: DEBUG - Print the content of custom_var
  debug: msg="{{ custom_var }}"
This works fine, all N hosts are able to see the custom_var.
Is there a way that I can have custom_var defined only for the actual host that executes the task?
Strangely enough, I also tried to register the result of the first task as follows:
[...]
- name: Run the custom module and generate output in custom_var
  run_once: yes
  custom_module:
    field1: ...
    field2: ...
    global_data: no # no, so that no global fact is created
  register: register_task
- name: DEBUG - Print the content of register_task
  debug: msg="{{ register_task }}"
but it looks like that result, too, was available to all the hosts in the inventory (with the same content). Is that the expected behaviour?
And any idea how to make custom_var available only to the host that actually runs the task?
Thanks in advance!
Yes, this is expected behaviour with run_once: yes, as it overrides the host loop.
With this option set, the task is executed only once (on the first host in a batch), but the result is copied to all hosts in the same batch. So you end up with custom_var/register_task defined for every host.
If you want the opposite, you can change run_once: yes to a conditional statement like:
when: play_hosts.index(inventory_hostname) == 0
This will execute the task only on the first host but will not override the host loop (the task is still skipped for the other hosts because of the conditional). This way the registered fact will be available only on the first host.
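A minimal sketch of this pattern (module and field names are the placeholders from the question; the skipped test guards the debug on the hosts that did not run the task):
- name: Run the custom module only on the first host
  custom_module:
    field1: ...
    field2: ...
    global_data: no
  register: register_task
  when: play_hosts.index(inventory_hostname) == 0
- name: Print the result on the host that ran the task
  debug:
    msg: "{{ register_task }}"
  when: register_task is not skipped
On the other hosts, register_task still exists but only records that the task was skipped, so the fact's real content lives solely on the first host.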
