I am running an Ansible role to get a package version on 50 servers. As an example, I am looking to get output in the form below:
{
  "server1": "1.0",
  "server2": "1.0",
  ...
  "server50": "1.1"
}
I have the logic to get the version but can't find a clean way to collect one fact from each host after the role execution is finished.
The following didn't produce what I am looking for, since it runs after each host's execution, NOT once at the end. I also need the facts as a list of dictionaries.
post_tasks:
  - name: Print installed versions
    run_once: true
    copy:
      content: "{{ hostvars | to_nice_json }}"
      dest: /tmp/hostvar.json
    delegate_to: localhost
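One possible way to get exactly the mapping shown above (an untested sketch): have the role set a fact on every host, say package_version (a hypothetical name), then build the dict once from hostvars at the end of the play:

```yaml
# Untested sketch: assumes each host's role run has set a fact called
# "package_version" (hypothetical name).
post_tasks:
  - name: Write one {hostname: version} dict covering all play hosts
    run_once: true
    delegate_to: localhost
    copy:
      content: >-
        {{ dict(ansible_play_hosts
                | zip(ansible_play_hosts
                      | map('extract', hostvars, 'package_version')))
           | to_nice_json }}
      dest: /tmp/versions.json
```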
I created a Workflow job in AWX containing 2 jobs:
Job 1 uses the credentials of the Windows server from which we get the JSON file. It reads the content and puts it into a variable using set_stats.
Job 2 uses the credentials of the server to which the JSON file is uploaded. It reads the content of the variable set in Job 1's set_stats task and creates a JSON file with that content.
First job:
- name: get content
  win_shell: 'type {{ file_dir }}{{ file_name }}'
  register: content

- name: write content
  debug:
    msg: "{{ content.stdout_lines }}"
  register: result

- set_fact:
    this_local: "{{ content.stdout_lines }}"

- set_stats:
    data:
      test_stat: "{{ this_local }}"

- name: set hostname in a variable
  set_stats:
    data:
      current_hostname: "{{ ansible_hostname }}"
    per_host: no
Second job:
- name: convert to json and copy the file to destination control node.
  copy:
    content: "{{ test_stat | to_json }}"
    dest: "/tmp/{{ current_hostname }}.json"
How can I get the current_hostname so that the created JSON file is named <original_hostname>.json? In my case it's concatenating the two hosts which I passed in the first job.
In my case it's concatenating the two hosts which I passed in the first job
... which is precisely what you asked for, since you used per_host: no as a parameter to set_stats to gather the current_hostname stat globally for all hosts, and aggregate: yes is the default.
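For illustration, if a separate value per host were actually wanted, set_stats supports that directly (a minimal sketch):

```yaml
# Illustration only: keep one current_hostname stat per host instead of
# aggregating a single value across all hosts.
- set_stats:
    data:
      current_hostname: "{{ ansible_hostname }}"
    per_host: yes
    aggregate: no
```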
Anyhow, this is not exactly the intended use of set_stats and you are making this overly complicated IMO.
You don't need two jobs. In this particular case, you can delegate the write task to a Linux host in the middle of a play dedicated to Windows hosts (and one AWX job can use several credentials).
Here is an untested pseudo-playbook to give you the idea. You'll want to read the slurp module documentation; I used it to replace your shell task that reads the file (which is a bad practice).
Assuming your inventory looks something like:
---
windows_hosts:
  hosts:
    win1:
    win2:
linux_hosts:
  hosts:
    json_file_target_server:
The playbook would look like:
- name: Gather jsons from win and write to linux target
  hosts: windows_hosts
  tasks:
    - name: Get file content
      slurp:
        src: "{{ file_dir }}{{ file_name }}"
      register: json_file

    - name: Push json content to target linux
      copy:
        content: "{{ json_file.content | b64decode | to_json }}"
        dest: "/tmp/{{ inventory_hostname }}.json"
      delegate_to: json_file_target_server
I am trying to create a playbook which manages the creation of some load balancers.
The playbook takes a configuration YAML as input, which is formatted like so:
-----configuration.yml-----
virtual_servers:
  - name: "test-1.local"
    type: "standard"
    vs_port: 443
    description: ""
    monitor_interval: 30
    ssl_flag: true
(omitted)
As you can see, this defines a list of load-balancing objects with their respective specifications.
To create, for example, a monitor instance, which depends on these definitions, I wrote this task within a playbook.
-----Playbook snippet-----
...
- name: "Creator | Create new monitor"
  include_role:
    name: vs-creator
    tasks_from: pool_creator
  with_items: "{{ virtual_servers }}"
  loop_control:
    loop_var: monitor_item
...
-----Monitor Task-----
- name: "Set monitor facts - Site 1"
  set_fact:
    monitor_name: "{{ monitor_item.name }}"
    monitor_vs_port: "{{ monitor_item.vs_port }}"
    monitor_interval: "{{ monitor_item.monitor_interval }}"
    monitor_partition: "{{ hostvars['localhost']['vlan_partition'] | first }}"
...
(omitted)
- name: "Create HTTP monitor - Site 1"
  bigip_monitor_http:
    state: present
    name: "{{ monitor_name }}_{{ monitor_vs_port }}.monitor"
    partition: "{{ monitor_partition }}"
    interval: "{{ monitor_interval }}"
    timeout: "{{ (monitor_interval | int) * 3 + 1 }}"
    provider:
      server: "{{ inventory_hostname }}"
      user: "{{ username }}"
      password: "{{ password }}"
  delegate_to: localhost
  when:
    - site == 1
    - monitor_item.name | regex_search(regex_site_1) != None
...
As you can probably already see, I have a few problems with this code; the main one I would like to optimize is the following:
The creation of a load balancer (virtual_server) involves multiple tasks (creation of a monitor, pool, etc...), and I would need to treat each list element in the configuration like an object to create, with all the necessary definitions.
I would need to do this for different sites which pertain to our datacenters, for which I use regex_site_1 and site: 1 in order to select the correct one, though I realize that this is not ideal.
The script currently does that, but I believe it's not well managed, and I'm at a loss as to what approach I should take in developing this playbook. I was thinking about looping over the playbook with each element from the configuration list, but apparently this is not possible; I'm wondering if there's any way to do this, ideally with an example.
Thanks in advance for any input you might have.
If you can influence the input data, I advise turning the elements of virtual_servers into hosts.
In this case inventory will look like this:
virtual_servers:
  hosts:
    test-1.local:
      vs_port: 443
      description: ""
      monitor_interval: 30
      ssl_flag: true
And all the code will become a bliss:
- hosts: virtual_servers
  tasks:
    - name: Do something
      delegate_to: other_host
      debug: msg=done
...
Ansible will create all the loops for you for free (no need for include_role or odd loops), and most things involving variables become very easy. Each host has its own set of variables which you just ... use.
The part where 'we are doing configuration on a real host, not this virtual one' is handled by delegate_to.
This is idiomatic Ansible and it's better to follow this way. Every time you have include_role within a loop, you have almost certainly made a mistake in designing the inventory.
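Applied to the monitor example from the question, the idea would look roughly like this (an untested sketch; bigip_server is a hypothetical variable pointing at the F5 device to configure, the other names are carried over from the question):

```yaml
# Untested sketch: each virtual server is now an inventory host, so its
# settings (vs_port, monitor_interval, ...) are ordinary host variables.
- hosts: virtual_servers
  gather_facts: false
  tasks:
    - name: Create HTTP monitor
      bigip_monitor_http:
        state: present
        name: "{{ inventory_hostname }}_{{ vs_port }}.monitor"
        interval: "{{ monitor_interval }}"
        timeout: "{{ (monitor_interval | int) * 3 + 1 }}"
        provider:
          server: "{{ bigip_server }}"  # hypothetical: the F5 to configure
          user: "{{ username }}"
          password: "{{ password }}"
      delegate_to: localhost
      when: inventory_hostname is search(regex_site_1)
```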
I'm completely new to Ansible, so I'm still struggling with its way of working... I have an inventory file with several hosts sorted by environment and function:
[PRO-OSB]
host-1
host-2
[PRO-WL]
host-3
host-4
[PRO:children]
PRO-OSB
PRO-WL
But I think sometimes I might need to run playbooks with even more specificity, i.e. according to the environment, its function, the cluster of hosts, and the application running on the host. So in summary, every host must have 4 "categories": environment, function, cluster and app.
How could I achieve this without having to constantly repeat entries?
How could I achieve this without having to constantly repeat entries?
You can't. You have to declare, in each needed group, the machines that belong to it. So if a machine belongs to 4 distinct groups (not taking parent groups into account), you'll have to declare that host in the 4 relevant groups.
Ansible could have chosen the other way around (i.e. list, for each host, the groups it belongs to), but this is not the retained solution, and it would be just as verbose.
To make things easier and, IMO, a bit more secure, you can split your inventory into several per-environment inventories (prod, dev, ...) so that you remove one level of complexity inside each inventory. The downside is that you cannot target all your envs at once with such a setup.
If your inventory is big and targeting some sort of cluster/cloud environment (vsphere, aws...), dynamic inventories can help.
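Group membership can also be derived from host variables with the constructed inventory plugin, which avoids repeating hosts across many static groups. A sketch, assuming the my_* variables from the answer below are set per host and a hypothetical file name:

```yaml
# inventory/constructed.yml -- builds groups from host variables so a
# host never has to be listed in four static groups by hand.
plugin: ansible.builtin.constructed
strict: false
keyed_groups:
  - key: my_environment   # e.g. generates group "env_pro"
    prefix: env
  - key: my_function
    prefix: fnc
  - key: my_cluster
    prefix: cluster
  - key: my_app
    prefix: app
```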
Q: "Every host must have 4 "categories": environment, function, cluster and app. How could I achieve this without having to constantly repeat entries?"
A: It's possible to declare default options in a [*:vars] section and override it with the host's specific options. See How variables are merged. For example the inventory
$ cat hosts
[PRO_OSB]
test_01 my_cluster='cluster_A'
test_02
[PRO_WL]
test_03 my_cluster='cluster_A'
test_04
[PRO:children]
PRO_OSB
PRO_WL
[PRO:vars]
my_environment='default_env'
my_function='default_fnc'
my_cluster='default_cluster'
my_app='default_app'
with the playbook
- hosts: PRO
  gather_facts: false
  tasks:
    - debug:
        msg: "{{ inventory_hostname }}
              {{ my_environment }}
              {{ my_function }}
              {{ my_cluster }}
              {{ my_app }}"
gives (the my_cluster variable of test_01 and test_03 was overridden by the hosts' values)
"msg": "test_01 default_env default_fnc cluster_A default_app"
"msg": "test_02 default_env default_fnc default_cluster default_app"
"msg": "test_04 default_env default_fnc default_cluster default_app"
"msg": "test_03 default_env default_fnc cluster_A default_app"
Q: "Run playbooks specifying even more, i.e attending to the environment, its function, cluster of hosts and application."
It's possible to create dynamic groups with the add_host module and select the hosts as needed. For example, create a new group cluster_A in the first play and use it in the next one:
- hosts: all
  tasks:
    - add_host:
        name: "{{ item }}"
        group: cluster_A
      loop: "{{ hostvars |
                dict2items |
                json_query('[?value.my_cluster == `cluster_A`].key') }}"
      delegate_to: localhost
      run_once: true
- hosts: cluster_A
  tasks:
    - debug:
        var: inventory_hostname
gives
ok: [test_01] => {
"inventory_hostname": "test_01"
}
ok: [test_03] => {
"inventory_hostname": "test_03"
}
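A simpler alternative to the add_host/json_query combination above is the group_by module, where every host adds itself to a group named after its own my_cluster value (a sketch under the same inventory assumptions):

```yaml
# Sketch: each host joins a group named after its own my_cluster value,
# so no central filtering with json_query is needed.
- hosts: all
  gather_facts: false
  tasks:
    - group_by:
        key: "{{ my_cluster }}"

- hosts: cluster_A
  tasks:
    - debug:
        var: inventory_hostname
```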
For folks who might need something like this --
I ended up using this format for my inventory file when I was trying to establish multiple variables for each host for delivering files based on role or team.
inventory/support/hosts
support:
  hosts:
    qahost:
      host_role:
        - waiter
        - dishwasher
      host_teams:
        - ops
        - sales
    awshost:
      host_role:
        - waiter
      host_teams:
        - dev
    testhost1:
      host_role:
        - dishwasher
      host_teams:
        - ops
        - dev
    testhost2:
      host_role:
        - dishwasher
        - boss
      host_teams:
        - ops
        - dev
        - sales
and referenced them in this play:
- name: parse inventory file host variables
  debug:
    msg: |
      - "role attribute of host: {{ item }} is {{ hostvars[item]['host_role'] }}"
      - "team attribute of host: {{ item }} is {{ hostvars[item]['host_teams'] }}"
  with_items: "{{ inventory_hostname }}"
  when: hostvars[item]['host_teams'] is contains(team_name)
The ansible command with a custom extra var that is passed to the playbook and matched in the inventory with the when conditional:
ansible-playbook -i inventory/support/hosts -e 'team_name="sales"' playbook.yml
Given a playbook like this:
- name: "Tasks for service XYZ"
  hosts: apiservers
  roles:
    - { role: common }
Is there a way to reference the playbook's name ("Tasks for service XYZ")? (i.e. a variable)
EDIT:
My intention is to be able to reference the playbook's name in a role task, i.e. sending a msg via slack like
- name: "Send Slack notification indicating deploy has started"
  slack:
    channel: '#project-deploy'
    token: '{{ slack_token }}'
    msg: '*Deploy started* to _{{ inventory_hostname }}_ of `{{ PLAYBOOK_NAME }}` version *{{ service_version }}*'
  delegate_to: localhost
  tags: deploy
It was added in 2.8:
ansible_play_name
The name of the currently executed play. Added in 2.8.
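For example, a task can reference it directly (requires Ansible >= 2.8):

```yaml
- name: Show the name of the currently executed play
  debug:
    msg: "Running play: {{ ansible_play_name }}"
```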
No, the special variables for Ansible are documented here, and you can see that there is no variable to return the playbook name.
As mentioned in the comments, however, you can always do this:
---
- name: "{{ task_name }}"
  hosts: localhost
  vars:
    task_name: "Tasks for service XYZ"
  tasks:
    - debug:
        msg: "{{ task_name }}"
From your circumstances, it looks like you only want this for audit/notification purposes? In that case (and assuming unixy clients), using
lookup('file', '/proc/self/cmdline') | regex_replace('\u0000',' ')
will give you the entire command line that ansible-playbook was called with, parameters and all, which would include the playbook name. Depending on your circumstances, that might be a useful enough tradeoff.
I would like to use the zabbix_maintenance module.
But I want to pass the host_groups as an extra var so I can put multiple host groups in maintenance.
The problem I faced is that host_groups needs a list of items, and I can't figure out how to write the role so it will loop over a list given to it via the extra var.
I tried:
- name: maintenance
  zabbix_maintenance:
    name: Pause
    host_groups:
      - "{{ item }}"
    with_items:
      - { 'zabbix_hosts_groups' }
    state: "{{ zabbix_state }}"
    server_url: http://zabbix.XXX.com
    login_user: YYY
    login_password: XXX
    minutes: 90
    desc: "Paused-for-dep"
and running it:
ansible-playbook -i 'localhost,' --connection=local zabbix-maintenance.yml -e '{"zabbix_hosts_groups":"Test1","Test2"}' -e 'zabbix_state=present'
A syntactically correct task definition would be:
- name: maintenance
  zabbix_maintenance:
    name: Pause
    host_groups: "{{ zabbix_hosts_groups }}"
    state: "{{ zabbix_state }}"
    server_url: http://zabbix.XXX.com
    login_user: YYY
    login_password: XXX
    minutes: 90
    desc: "Paused-for-dep"
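Note that for host_groups to receive a list, zabbix_hosts_groups must actually be passed as a list, e.g. via an extra-vars file (hypothetical file name):

```yaml
# extra_vars.yml -- hypothetical vars file, passed with:
#   ansible-playbook -i 'localhost,' -c local zabbix-maintenance.yml -e @extra_vars.yml
zabbix_hosts_groups:
  - Test1
  - Test2
zabbix_state: present
```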
I don't understand the problem description though. "Jenkins"??? "How to write the role"??? Please at least learn the vocabulary required to ask a question.
I use the Ansible module zabbix_host. There is a link_templates attribute, but it first removes everything already linked to your host.
I have not managed to solve this yet, so for now I use it to add common templates, and then manually link the required templates in the Zabbix GUI.
Check this repo, maybe you'll find something helpful: igogorevi4:Ansible