Constant Date and Time - ansible

I leverage this to get the date/time without running facts in my playbooks, in order to save run time:
all.yaml
date: "{{ lookup('pipe','date \"+%Y-%m-%d-%H%M\"') }}"
I've noticed that if I reference this at the beginning of the playbook it references one time e.g. 2019-04-10-1300. If I reference it at the end of the playbook, which is 5 minutes later, the time is different e.g. 2019-04-10-1305.
I want to use this variable in a directory name, and therefore I want it to be constant at any point in the playbook's run.
./outputs/"{{ date }}"/errors.txt
AKA
./outputs/2019-04-10-1300/errors.txt
How do I get this value to be constant?
EDIT
This task gives me an error
- name: TESTS
  environment:
    execution_date: "{{ lookup('pipe','date \"+%Y%m%d-%H%M\"') }}"
  tags:
    - test
The group_var below is not callable via "environment.execution_date" or "execution_date"
all.yaml
environment:
  execution_date: "{{ lookup('pipe','date \"+%Y%m%d-%H%M\"') }}"

- name: TESTS
  debug:
    var: environment.execution_date

Sounds like you're wanting to save/recall a particular date, using it like a variable.
Probably a few ways to do this, my first thought is that you could export this as an environment variable and then recall that value:
environment:
  execution_date: "{{ lookup('pipe','date \"+%Y-%m-%d-%H%M\"') }}"
You would then use it like:
./outputs/"{{ execution_date }}"/errors.txt
Check out the documentation about this here: https://docs.ansible.com/ansible/latest/user_guide/playbooks_environment.html
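To make that concrete, here is a minimal sketch (task names assumed). Note that environment only exports variables into the shell environment of tasks running on the target; it is not a normal templatable variable, which is why referencing environment.execution_date directly fails. The lookup is also still re-evaluated per task, so reading it back with a command and registering the result is what actually freezes the value:

```yaml
- hosts: localhost
  connection: local
  environment:
    execution_date: "{{ lookup('pipe','date \"+%Y-%m-%d-%H%M\"') }}"
  tasks:
    # Commands run by tasks see the exported variable;
    # registering stdout captures it once, so it stays constant afterwards.
    - name: Read the date back from the task environment
      shell: 'echo "$execution_date"'
      register: date_out
      changed_when: false
    - debug:
        msg: "./outputs/{{ date_out.stdout }}/errors.txt"
```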

Ansible variables don't store a value; they are re-evaluated each time they are referenced. Hence your date variable always does a fresh lookup of the current time.
To store a value, and recall it later you can set a fact, e.g.
- hosts: localhost
  connection: local
  tasks:
    - set_fact:
        execution_time: "{{ lookup('pipe','date \"+%Y-%m-%d-%H%M\"') }}"
    - debug:
        msg: "{{ execution_time }}"
    - pause:
        minutes: 2

- hosts: localhost
  connection: local
  tasks:
    - debug:
        msg: "{{ execution_time }}"
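For multi-host plays, a related sketch (hedged: this relies on the documented run_once behavior that results and facts from a run_once task are applied to all hosts in the play), so every host shares one frozen timestamp:

```yaml
- hosts: all
  gather_facts: false
  tasks:
    # Evaluate the pipe lookup a single time; the resulting fact
    # is applied to every host in the play.
    - name: Pin the execution time once
      set_fact:
        execution_time: "{{ lookup('pipe','date \"+%Y-%m-%d-%H%M\"') }}"
      run_once: true
    - name: Use the same value on every host
      debug:
        msg: "./outputs/{{ execution_time }}/errors.txt"
```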

Related

'gather_facts' seems to break 'set_fact' and 'hostvars'

I am using set_fact and hostvars to pass variables between plays within a playbook. My code looks something like this:
- name: Staging play
  hosts: localhost
  gather_facts: no
  vars_prompt:
    - name: hostname
      prompt: "Enter hostname or group"
      private: no
    - name: vault
      prompt: "Enter vault name"
      private: no
    - name: input
      prompt: "Enter input for role"
      private: no
  tasks:
    - set_fact:
        target_host: "{{ hostname }}"
        target_vault: "{{ vault }}"
        for_role: "{{ input }}"

- name: Execution play
  hosts: "{{ hostvars['localhost']['target_host'] }}"
  gather_facts: no
  vars_files:
    - "vault/{{ hostvars['localhost']['target_vault'] }}.yml"
  tasks:
    - include_role:
        name: target_role
      vars:
        param: "{{ hostvars['localhost']['for_role'] }}"
This arrangement has worked without issue for months. However, our environment has changed and now I need to take a timestamp and pass that to the role as well as the other variable, so I made the following changes (denoted by comments):
- name: Staging play
  hosts: localhost
  gather_facts: yes # Changed from 'no' to 'yes'
  vars_prompt:
    - name: hostname
      prompt: "Enter hostname or group"
      private: no
    - name: vault
      prompt: "Enter vault name"
      private: no
    - name: input
      prompt: "Enter input for role"
      private: no
  tasks:
    - set_fact:
        target_host: "{{ hostname }}"
        target_vault: "{{ vault }}"
        for_role: "{{ input }}"
        current_time: "{{ ansible_date_time.iso8601 }}" # Added fact for current time

- name: Execution play
  hosts: "{{ hostvars['localhost']['target_host'] }}"
  gather_facts: no
  vars_files:
    - "vault/{{ hostvars['localhost']['target_vault'] }}.yml"
  tasks:
    - include_role:
        name: target_role
      vars:
        param: "{{ hostvars['localhost']['for_role'] }}"
        timestamp: "{{ hostvars['localhost']['current_time'] }}" # Passed current_time to Execution play via hostvars
Now, when I execute, I get an error that the vault hostvars variable is undefined in the Execution play. After some experimenting, I've found that setting gather_facts: yes in the Staging play is what triggers the issue.
However, I need gather_facts enabled in order to use ansible_date_time. I've already verified via debug that the facts are recorded properly and can be read via hostvars within the Staging play; just not in the following Execution play. After hours of research, I can't find any reason why gathering facts in the Staging play should affect hostvars for the Execution play, or any idea on how to fix it.
At the end of the day, all I need is the current time passed to the included role. Anyone who can come up with a solution that actually works in this use case wins Employee of the Month. Bonus points if you can explain the initial issue with gather_facts.
Thanks!
So, I had to reinvent the wheel a bit, but came up with a much cleaner solution. I simply created a default value for a timestamp in the role itself and added a setup call for date/time at the appropriate point, conditional on there being no existing value for the variable in question.
- name: Gather date and time.
  setup:
    gather_subset: date_time
  when: timestamp is undefined and ansible_date_time is undefined
I was able to leave gather_facts set to no in the dependent playbook but I still have no idea why setting it to yes broke anything in the first place. Any insight in this regard would be appreciated.
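For completeness, a sketch of how this can sit inside the role itself (role path and task names assumed), so callers may pass timestamp explicitly, and otherwise the role gathers the date facts and pins a value once:

```yaml
# roles/target_role/tasks/main.yml (sketch)
- name: Gather date and time only if nothing usable is defined
  setup:
    gather_subset: date_time
  when: timestamp is undefined and ansible_date_time is undefined

# set_fact stores a concrete value, so later references in the role
# all see the same timestamp rather than re-evaluating anything.
- name: Pin the timestamp
  set_fact:
    timestamp: "{{ ansible_date_time.iso8601 }}"
  when: timestamp is undefined
```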
... if you can explain the initial issue with gather_facts ... Any insight in this regard would be appreciated.
This is caused by variable precedence, and because Ansible does not "overwrite or set a new value" for a variable. So it will depend on when and where they become defined.
You may test with the following example
---
- hosts: localhost
  become: false
  gather_facts: false
  tasks:
    - name: Show Gathered Facts
      debug:
        msg: "{{ hostvars['localhost'].ansible_facts }}" # will be {} only
    - name: Gather date and time only
      setup:
        gather_subset:
          - 'date_time'
          - '!min'
    - name: Show Gathered Facts
      debug:
        msg: "{{ ansible_facts }}" # from hostvars['localhost'] again
and "try to break it" by adding
- name: Set Fact
  set_fact:
    ansible_date_time:
      date: '1970-01-01'
- name: Show Facts
  debug:
    msg: "{{ hostvars['localhost'] }}"
Just like to note that for your use case you should use
gather_subset:
  - 'date_time'
  - '!min'
since you are interested in ansible_date_time only. See What is the exact list of Ansible setup min?.
Be also aware of caching facts since "When created with set_facts’s cacheable option, variables have the high precedence in the play, but are the same as a host facts precedence when they come from the cache."

Ansible - Is it possible to loop over a list of objects in input within a playbook

I am trying to create a playbook which is managing to create some load balancers.
The playbook takes a configuration YAML in input, which is formatted like so:
-----configuration.yml-----
virtual_servers:
  - name: "test-1.local"
    type: "standard"
    vs_port: 443
    description: ""
    monitor_interval: 30
    ssl_flag: true
(omitted)
As you can see, this defines a list of load balancing objects with the relative specifications.
If I want to create for example a monitor instance, which depends on these definitions, I created this task which is defined within a playbook.
-----Playbook snippet-----
...
- name: "Creator | Create new monitor"
  include_role:
    name: vs-creator
    tasks_from: pool_creator
  with_items: "{{ virtual_servers }}"
  loop_control:
    loop_var: monitor_item
...
-----Monitor Task-----
- name: "Set monitor facts - Site 1"
set_fact:
monitor_name: "{{ monitor_item.name }}"
monitor_vs_port: "{{ monitor_item.vs_port }}"
monitor_interval: "{{ monitor_item.monitor_interval}}"
monitor_partition: "{{ hostvars['localhost']['vlan_partition'] | first }}"
...
(omissis)
- name: "Create HTTP monitor - Site 1"
bigip_monitor_http:
state: present
name: "{{ monitor_name }}_{{ monitor_vs_port }}.monitor"
partition: "{{ monitor_partition }}"
interval: "{{ monitor_interval }}"
timeout: "{{ monitor_interval | int * 3 | int + 1 | int }}"
provider:
server: "{{ inventory_hostname}}"
user: "{{ username }}"
password: "{{ password }}"
delegate_to: localhost
when:
- site: 1
- monitor_item.name | regex_search(regex_site_1) != None
...
As you can probably already see, I have a few problems with this code, the main one which I would like to optimize is the following:
The creation of a load balancer (virtual_server) involves multiple tasks (creation of a monitor, pool, etc...), and I would need to treat each list element in the configuration like an object to create, with all the necessary definitions.
I would need to do this for the different sites of our datacenters - for which I use regex_site_1 and site: 1 in order to select the correct one... though I realize that this is not ideal.
The script does this as of now, but I believe it's not well-managed, and I'm at a loss on what approach to take in developing this playbook: I was thinking about looping over the playbook with each element from the configuration list, but apparently this is not possible, and I'm wondering if there's any way to do it - if possible with an example.
Thanks in advance for any input you might have.
If you can influence the input data, I advise turning the elements of virtual_servers into hosts.
In this case inventory will look like this:
virtual_servers:
  hosts:
    test-1.local:
      vs_port: 443
      description: ""
      monitor_interval: 30
      ssl_flag: true
And all the code will become a bliss:
- hosts: virtual_servers
  tasks:
    - name: Do something
      delegate_to: other_host
      debug: msg=done
...
Ansible will create all the loops for you for free (no need for include_role or odd loops), and most things involving variables become very easy. Each host has its own set of variables which you just... use.
And the part where 'we are doing configuration on a real host, not this virtual one' is done by use of delegate_to.
This is idiomatic Ansible and it's better to follow this way. Every time you have include_role within a loop, you have almost surely made a mistake in designing the inventory.
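If the real inventory cannot be changed ahead of time, a hedged alternative sketch: build the hosts in memory from the existing configuration.yml with add_host (group and variable names taken from the question, the rest assumed):

```yaml
- hosts: localhost
  gather_facts: false
  vars_files:
    - configuration.yml
  tasks:
    # Turn each list element into an in-memory inventory host
    # carrying its own variables.
    - name: Register each virtual server as a host
      add_host:
        name: "{{ item.name }}"
        groups: virtual_servers
        vs_port: "{{ item.vs_port }}"
        monitor_interval: "{{ item.monitor_interval }}"
      loop: "{{ virtual_servers }}"

- hosts: virtual_servers
  gather_facts: false
  tasks:
    - name: Act on each virtual server's own variables
      debug:
        msg: "{{ inventory_hostname }}:{{ vs_port }}"
      delegate_to: localhost
```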

how to write a playbook to delete OpenStack volume snapshot of older than 10 days using os_volume_snapshot module

---
- name: Creating a volume snapshot
  hosts: Test-ctrl
  gather_facts: True
  tasks:
    - name: Creating snapshot of Test
      os_volume_snapshot:
        auth:
          auth_url: http://20.10.X.X:5000/v3/
          username: XXXXXXX
          password: XCXCXCXC
          project_name: test-stack
          project_domain_name: Default
          user_domain_name: Default
        state: absent
        validate_certs: False
        display_name: Test-{{ lookup('pipe','date +%Y-%m-%d-%H-%M-%S') }}
        volume: Test-1
        force: yes
How do I write a playbook to delete OpenStack volume snapshots older than 10 days?
Here is my playbook to create a snapshot. But how can it be customized to delete snapshots older than 10 or 5 days?
I also have a need to do this but, sadly, it is not possible with the os_volume_snapshot module. Neither is it possible using any of the OpenStack modules in Ansible (2.9). Also, os_volume_snapshot makes the volume parameter mandatory (which is silly, as you don't need to know the name of the original volume to delete a snapshot).
So, if you "must" use os_volume_snapshot then you're out of luck.
The built-in os_ modules are very much a "work in progress" with regard to comprehensive control of OpenStack clusters in Ansible, and they are of little use for the task you've identified.
But ... hold on ...
Like you, I need to automate this and it can be accomplished using Ansible and the official python-openstackclient module. OK - it's not "pure" Ansible (i.e. it's not using purely built-in modules) but it's a playbook that uses the built-in command module and it works.
(BTW - no warranty provided in the following - and use at your own risk)
So, here's a playbook that I'm running that deletes snapshot volumes that have reached a defined age in days. You will need to provide the following variables...
Yes, there are better ways to provide OpenStack variables (like cloud files or OS_ environment variables etc.) but I've made all those that are required explicit so it's easier to know what's actually required.
os_auth_url (i.e. "https://example.com:5000/v3")
os_username
os_password
os_project_name
os_project_domain_name (i.e. "Default")
os_user_domain_name
And
retirement_age_days. A positive number of days; all snapshot volumes that have reached that age are deleted. If 0, all snapshots are deleted.
A summary of what the playbook does: -
It uses openstack volume snapshot list to get a list of snapshot volumes in the project
It then uses openstack volume snapshot show to get information about each snapshot (i.e. its created_at date) and builds a list of volume ages (in days)
It then uses openstack volume snapshot delete to delete all volumes that are considered too old
---
- hosts: localhost
  tasks:

    # Delete all project snapshots that are too old.
    # The user is expected to define the variable 'retirement_age_days'
    # where all volumes that have reached that age are deleted.
    # If 0 all snapshots are deleted.
    #
    # The user is also required to define OpenStack variables.

    - name: Assert control variables
      assert:
        that:
          - retirement_age_days is defined
          - retirement_age_days|int >= 0
          - os_auth_url is defined
          - os_username is defined
          - os_password is defined
          - os_project_name is defined
          - os_project_domain_name is defined
          - os_user_domain_name is defined

    # Expectation here is that you have the following OpenStack information: -
    #
    # - auth_url (i.e. "https://example.com:5000/v3")
    # - username
    # - password
    # - project_name
    # - project_domain_name (i.e. "Default")
    # - user_domain_name

    # We rely on the OpenStack client - the Ansible "os_" module
    # (Ansible 2.9) does not do what we need, so we need the client.
    # It's Python so make sure it's available...
    - name: Install prerequisite Python modules
      pip:
        name:
          - python-openstackclient==5.3.1
        extra_args: --user

    - name: Set snapshot command
      set_fact:
        snapshot_cmd: openstack volume snapshot

    # To avoid cluttering the command-line we
    # define all the credential material as a map of variables
    # that we then apply as the 'environment' for each command invocation.
    - name: Define OpenStack environment
      set_fact:
        os_env:
          OS_AUTH_URL: "{{ os_auth_url }}"
          OS_USERNAME: "{{ os_username }}"
          OS_PASSWORD: "{{ os_password }}"
          OS_PROJECT_NAME: "{{ os_project_name }}"
          OS_PROJECT_DOMAIN_NAME: "{{ os_project_domain_name }}"
          OS_USER_DOMAIN_NAME: "{{ os_user_domain_name }}"

    # Get all the snapshot names in the project.
    # The result is a json structure that we parse
    # in order to get just the names...
    - name: Get snapshots
      command: "{{ snapshot_cmd }} list --format json"
      environment: "{{ os_env }}"
      changed_when: false
      register: snap_result

    - name: Collect snapshot volume names
      set_fact:
        snap_volume_names: "{{ snap_result.stdout|from_json|json_query(query)|flatten }}"
      vars:
        query: "[*].Name"

    - name: Display existing snapshot volumes
      debug:
        var: snap_volume_names

    # For each snapshot, get its 'info'.
    # The combined results are then parsed in order to
    # locate each volume's 'created_at' date.
    # We compare that to 'now' in order to build a list of
    # volume ages (in days)...
    - name: Get snapshot volume info
      command: "{{ snapshot_cmd }} show {{ item }} --format json"
      environment: "{{ os_env }}"
      changed_when: false
      register: snap_volume_info
      loop: "{{ snap_volume_names }}"

    - name: Create snapshot age list (days)
      set_fact:
        snap_volume_ages: >-
          {{
            snap_volume_ages|default([]) +
            [ ((ansible_date_time.iso8601|to_datetime(fmt_1)) -
               (item.stdout|from_json|json_query(query)|to_datetime(fmt_2))).days ]
          }}
      vars:
        query: "created_at"
        fmt_1: "%Y-%m-%dT%H:%M:%SZ"
        fmt_2: "%Y-%m-%dT%H:%M:%S.%f"
      loop: "{{ snap_volume_info.results }}"

    # Using the combined volume names and ages lists
    # iterate through the ages and delete volumes that have reached their age limit...
    - name: Delete old snapshots
      command: "{{ snapshot_cmd }} delete {{ item.1 }}"
      environment: "{{ os_env }}"
      changed_when: true
      when: item.0 >= retirement_age_days|int
      with_together:
        - "{{ snap_volume_ages|default([]) }}"
        - "{{ snap_volume_names|default([]) }}"

Ansible loop outputs the entire object

I am gathering information about AWS ec2 instances and then attempting to loop through them to output the instance_id property of the registered results.
When I run through the loop I get the expected results, but I also get the entire registered object outputted as well. It appears to flatten the object to a string and output it. What is the reason for the additional output and is there a better loop method I should use?
Thank you in advance!
---
- hosts: localhost
  gather_facts: false
  connection: local
  tasks:
    - name: get ec2 instance info
      ec2_instance_info:
        region: us-east-1
        filters:
          "tag:app": ansible
          "tag:env": dev
      register: ec2
    - debug:
        msg: "{{ item['instance_id'] }}"
      loop: "{{ ec2['instances'] }}"
FIX (by default each loop iteration prints the whole item as its label; loop_control's label replaces it with just the instance id):
- debug:
    msg: "{{ item['instance_id'] }}"
  loop: "{{ ec2['instances'] }}"
  loop_control:
    label: "{{ item.instance_id }}"
I think I found your answer, @duffney.
By the looks of things it was addressed as a bug/feature and amended:
https://github.com/ansible/ansible/issues/35493
Does that help with what you are looking for?

Set variable if empty or not defined with ansible

In my ansible vars file, I have a variable that will sometimes be set and other times I want to dynamically set it. For example, I have an RPM that I want to install. I can manually store the location in a variable, or if I don't have a particular one in mind, I want to pull the latest from Jenkins. My question is, how can I check if the variable is not defined or empty, and if so, just use the default from Jenkins (already stored in a var)?
Here is what I have in mind:
...code which gets host_vars[jenkins_rpm]
- hosts: "{{ host }}"
  tasks:
    - name: Set Facts
      set_fact:
        jenkins_rpm: "{{ hostvars['localhost']['jenkins_rpm'] }}"
    - name: If my_rpm is empty or not defined, just use the jenkins_rpm
      set_fact: my_rpm=jenkins_rpm
      when: !my_rpm | my_rpm == ""
There is a default filter for that:
- set_fact:
    my_rpm: "{{ my_rpm | default(jenkins_rpm) }}"
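One caveat: default() alone only covers the undefined case. If my_rpm may also be an empty string, Jinja2's default filter accepts a second boolean argument that treats empty (falsy) values as missing too - a small sketch:

```yaml
- set_fact:
    # With 'true' as the second argument, default() also kicks in
    # when my_rpm is defined but empty.
    my_rpm: "{{ my_rpm | default(jenkins_rpm, true) }}"
```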
