I use Ansible to create and delete AWS launch configurations, and I add a timestamp to the name.
My problem is that I can create the LC, but when it comes to deletion the timestamp changes, so the deletion playbook can't find the LC to delete.
This is how I use the timestamp variable:
I put this in a file called timestamp_lc.yml:
- set_fact: now="{{ lookup('pipe', 'date +%Y%m%d%H%M%S') }}"
and in the playbooks I call it:
- include: timestamp_lc.yml
How can I make the variable now persistent, so that Ansible does not execute the date command every time I reference it?
this is the creation task:
- name: Create launch configuration
  sudo: yes
  command: >
    aws autoscaling create-launch-configuration
    --region {{ asg.region }}
    --launch-configuration-name "{{ asg.launch_configuration.name }}_{{ now }}"
The deletion task:
- name: Delete launch configuration
  sudo: yes
  command: >
    aws autoscaling delete-launch-configuration
    --region {{ asg.region }}
    --launch-configuration-name {{ asg.launch_configuration.name }}_{{ now }}
That will happen on every execution of the playbook: you take the value from the date command and set a fact from it, so it gets a new value on every run.
One way I can think of is to save the value to a file on the target server or on the local server; I feel this would be more reliable.
---
- name: test play
  hosts: localhost
  tasks:
    - name: checking the file stats
      stat:
        path: stack_delete_info
      register: delete_file_stat

    - name: tt
      debug:
        var: delete_file_stat

    - name: test
      shell: echo "{{ lookup('pipe', 'date +%Y%m%d%H%M%S') }}" > stack_delete_info
      when: delete_file_stat.stat.exists == false
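The deletion playbook can then read that value back instead of running date again. A minimal sketch, assuming the deletion play runs where the stack_delete_info file above was written (the task names are only illustrative):

- name: read back the saved timestamp
  slurp:
    src: stack_delete_info
  register: saved_timestamp

- name: restore the fact 'now' from the file
  set_fact:
    now: "{{ (saved_timestamp.content | b64decode) | trim }}"

With now restored from the file, the delete-launch-configuration task above resolves to the same name that was used at creation time.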
How to set variables in an Ansible playbook which do not change per host?
Per S.O.P. before posting, I read the Ansible docs on Using Variables, and of course searched Stack Overflow and the Internet for possible answers. What I've seen discussed is where to define variables, but not how to set variables in a playbook so that they do not change with each host in the inventory.
I have an Ansible playbook where variables are set from Ansible facts.
The variables are used to create a string with the current date and time, which is used as the filename for a log.
e.g. HealthCheckReport-YYYY-MM-DD_HHMM.txt
A timestamped file is created, then the results from the command run for each server are written to this file.
If the time (minutes) changes while the play is still iterating through the inventory, the variable changes, throwing a "path does not exist" error for each of the remaining hosts.
The example below is an Ansible playbook which runs the nslookup command for the hosts listed in the default inventory file. It does the following:
Set and concatenate variables
Create a file with a time stamped filename (The OS is SuSe Linux)
Run the nslookup command on hosts in the inventory file
Write the command results to the time stamped file
---
- name: Output to Filename with Timestamp
  hosts: healthchecks
  connection: local
  gather_facts: yes
  strategy: linear
  order: inventory
  vars:
    report_filename_prefix: "HealthCheckResults-"
    report_date_time: "{{ ansible_date_time.date }}_{{ ansible_date_time.hour }}{{ ansible_date_time.minute }}"
    report_filename_date: "{{ report_filename_prefix }}{{ report_date_time }}.txt"
    report_path: "/reports/healthchecks/{{ report_filename_date }}"
  tasks:
    - name: Create file with timestamped filename
      delegate_to: localhost
      lineinfile:
        path: "{{ report_path }}"
        create: yes
        line: "Start: Health Check Report\n{{ report_path }}"
      run_once: true

    - name: Run nslookup command
      delegate_to: localhost
      throttle: 1
      command: nslookup {{ inventory_hostname }}
      register: nslookup_result

    - name: Append nslookup results to a file
      delegate_to: localhost
      throttle: 1
      blockinfile:
        state: present
        insertafter: EOF
        dest: "{{ report_path }}"
        marker: "- - - - - - - - - - - - - - - - - - - - -"
        block: |
          Server: {{ inventory_hostname }}
          Environment: {{ environmentz }}
          {{ nslookup_result.stdout_lines.3 }}
          {{ nslookup_result.stdout_lines.4 }}
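One common way to keep the filename stable for the whole run (a sketch, not part of the original question) is to render the date string once with set_fact and run_once, which by default applies the fact to every host in the play, instead of re-reading ansible_date_time for each host:

- name: Pin the report path once for the whole play
  set_fact:
    pinned_report_path: "/reports/healthchecks/{{ report_filename_prefix }}{{ ansible_date_time.date }}_{{ ansible_date_time.hour }}{{ ansible_date_time.minute }}.txt"
  run_once: true

The later tasks would then reference pinned_report_path (a made-up variable name), so the minute component is frozen at the moment the fact was set.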
Ansible version - 2.9
I'm facing an issue writing output to a CSV file: the output is not written consistently.
I have an inventory file with three server IPs; the playbook executes a command to check the disk space of each server and writes the output to a CSV file.
Sometimes it writes all three servers' details into the file, sometimes only one or two.
- hosts: localhost
  connection: local
  gather_facts: False
  vars:
    filext: ".csv"
  tasks:
    - name: get the username running the deploy
      local_action: command whoami
      register: username_on_the_host

    - name: get current dir
      local_action: command pwd
      register: current_dir

    - name: create dir
      file: path={{ current_dir.stdout }}/HCT state=directory

    - name: Set file path here
      set_fact:
        file_path: "{{ current_dir.stdout }}/HCT/HCT_check{{ filext }}"

    - name: Creates file
      file: path={{ file_path }} state=touch

# Writing to a csv file
- hosts:
    - masters
  become: false
  vars:
    disk_space: "Able to get disk space for the CM {{ hostname }} "
    disk_space_error: "The server {{ hostname }} is down for some reason. Please check manually."
    disk_space_run_status: "{{ disk_space }}"
    cur_date: "{{ ansible_date_time.iso8601 }}"
  tasks:
    - name: running command to get file systems which are occupied
      command: bash -c "df -h | awk '$5>20'"
      register: disk_space_output
      changed_when: false
      ignore_errors: True
      no_log: True

    - name: Log the task get list of file systems with space occupied
      lineinfile:
        dest: "{{ hostvars['localhost']['file_path'] }}"
        line: "File system occupying disk space, {{ hostname }}, {{ ip_address }}, {{ cur_date }}"
        insertafter: EOF
        state: present
      delegate_to: localhost
Please help to resolve this issue.
The issue is that the task "Log the task get list of file systems with space occupied" is executed in parallel for the 3 servers, so you're having concurrent writing problems.
One solution is to use the serial keyword at play level with a value of 1; this way, all the tasks are executed for each server, one server at a time.
- hosts:
    - masters
  become: false
  serial: 1
  vars:
    [...]
Another solution is to have the task executed for only 1 server but looping over the results of all servers by using hostvars:
- name: Log the task get list of file systems with space occupied
  lineinfile:
    dest: "{{ hostvars['localhost']['file_path'] }}"
    line: "File system occupying disk space, {{ hostvars[item].hostname }}, {{ hostvars[item].ip_address }}, {{ hostvars[item].cur_date }}"
    insertafter: EOF
    state: present
  run_once: True
  loop: "{{ ansible_play_hosts }}"  # Looping over all hosts of the play
  delegate_to: localhost
---
- name: Creating a volume snapshot
  hosts: Test-ctrl
  gather_facts: True
  tasks:
    - name: Creating snapshot of Test
      os_volume_snapshot:
        auth:
          auth_url: http://20.10.X.X:5000/v3/
          username: XXXXXXX
          password: XCXCXCXC
          project_name: test-stack
          project_domain_name: Default
          user_domain_name: Default
        state: absent
        validate_certs: False
        display_name: Test- {{ lookup('pipe','date +%Y-%m-%d-%H-%M-%S') }}
        volume: Test-1
        force: yes
How do I write a playbook to delete OpenStack volume snapshots older than 10 days?
Here is my playbook for creating a volume snapshot. How can I customize it to delete snapshots older than 10 (or 5) days?
I also have a need to do this but, sadly, it is not possible with the os_volume_snapshot module. Neither is it possible using any of the OpenStack modules in Ansible (2.9). Also the os_volume_snapshot makes the volume parameter mandatory (which is silly - as you don't need to know the name of the original volume to delete a snapshot).
So, if you "must" use the os_volume_snapshot module then you're out of luck.
The built-in os_ modules are very much a "work in progress" with regard to comprehensive control of OpenStack clusters in Ansible, and they are of little use for the task you've identified.
But ... hold on ...
Like you, I need to automate this and it can be accomplished using Ansible and the official python-openstackclient module. OK - it's not "pure" Ansible (i.e. it's not using purely built-in modules) but it's a playbook that uses the built-in command module and it works.
(BTW - no warranty provided in the following - and use at your own risk)
So, here's a playbook that I'm running that deletes snapshot volumes that have reached a defined age in days. You will need to provide the following variables...
Yes, there are better ways to provide OpenStack variables (like cloud files or OS_ environment variables etc.) but I've made all those that are required explicit so it's easier to know what's actually required.
os_auth_url (i.e. "https://example.com:5000/v3")
os_username
os_password
os_project_name
os_project_domain_name (i.e. "Default")
os_user_domain_name
And
retirement_age_days. A non-negative number of days; all snapshot volumes that have reached that age are deleted. If 0, all snapshots are deleted.
A summary of what the playbook does: -
It uses openstack volume snapshot list to get a list of snapshot volumes in the project
It then uses openstack volume snapshot show to get information about each snapshot (i.e. its created_at date) and builds a list of volume ages (in days)
It then uses openstack volume snapshot delete to delete all volumes that are considered too old
---
- hosts: localhost
  tasks:

    # Delete all project snapshots that are too old.
    # The user is expected to define the variable 'retirement_age_days'
    # where all volumes that have reached that age are deleted.
    # If 0 all snapshots are deleted.
    #
    # The user is also required to define OpenStack variables.

    - name: Assert control variables
      assert:
        that:
          - retirement_age_days is defined
          - retirement_age_days|int >= 0
          - os_auth_url is defined
          - os_username is defined
          - os_password is defined
          - os_project_name is defined
          - os_project_domain_name is defined
          - os_user_domain_name is defined

    # Expectation here is that you have the following OpenStack information: -
    #
    # - auth_url (i.e. "https://example.com:5000/v3")
    # - username
    # - password
    # - project_name
    # - project_domain_name (i.e. "Default")
    # - user_domain_name

    # We rely on the OpenStack client - the Ansible "os_" module
    # (Ansible 2.9) does not do what we need, so we need the client.
    # It's Python so make sure it's available...
    - name: Install prerequisite Python modules
      pip:
        name:
          - python-openstackclient==5.3.1
        extra_args: --user

    - name: Set snapshot command
      set_fact:
        snapshot_cmd: openstack volume snapshot

    # To avoid cluttering the command-line we
    # define all the credential material as a map of variables
    # that we then apply as the 'environment' for each command invocation.
    - name: Define OpenStack environment
      set_fact:
        os_env:
          OS_AUTH_URL: "{{ os_auth_url }}"
          OS_USERNAME: "{{ os_username }}"
          OS_PASSWORD: "{{ os_password }}"
          OS_PROJECT_NAME: "{{ os_project_name }}"
          OS_PROJECT_DOMAIN_NAME: "{{ os_project_domain_name }}"
          OS_USER_DOMAIN_NAME: "{{ os_user_domain_name }}"

    # Get all the snapshot names in the project.
    # The result is a json structure that we parse
    # in order to get just the names...
    - name: Get snapshots
      command: "{{ snapshot_cmd }} list --format json"
      environment: "{{ os_env }}"
      changed_when: false
      register: snap_result

    - name: Collect snapshot volume names
      set_fact:
        snap_volume_names: "{{ snap_result.stdout|from_json|json_query(query)|flatten }}"
      vars:
        query: "[*].Name"

    - name: Display existing snapshot volumes
      debug:
        var: snap_volume_names

    # For each snapshot, get its 'info'.
    # The combined results are then parsed in order to
    # locate each volume's 'created_at' date.
    # We compare that to 'now' in order to build a list of
    # volume ages (in days)...
    - name: Get snapshot volume info
      command: "{{ snapshot_cmd }} show {{ item }} --format json"
      environment: "{{ os_env }}"
      changed_when: false
      register: snap_volume_info
      loop: "{{ snap_volume_names }}"

    - name: Create snapshot age list (days)
      set_fact:
        snap_volume_ages: >-
          {{
            snap_volume_ages|default([]) +
            [ ((ansible_date_time.iso8601|to_datetime(fmt_1)) -
               (item.stdout|from_json|json_query(query)|to_datetime(fmt_2))).days ]
          }}
      vars:
        query: "created_at"
        fmt_1: "%Y-%m-%dT%H:%M:%SZ"
        fmt_2: "%Y-%m-%dT%H:%M:%S.%f"
      loop: "{{ snap_volume_info.results }}"

    # Using the combined volume names and ages lists,
    # iterate through the ages and delete volumes that have reached their age limit...
    - name: Delete old snapshots
      command: "{{ snapshot_cmd }} delete {{ item.1 }}"
      environment: "{{ os_env }}"
      changed_when: true
      when: item.0 >= retirement_age_days|int
      with_together:
        - "{{ snap_volume_ages|default([]) }}"
        - "{{ snap_volume_names|default([]) }}"
I want to execute a script on a remote host via Ansible and get the result file from the remote back to the control host.
I wrote a playbook like below:
---
- name: script deploy
  hosts: all
  vars:
    timestamp: "{{ lookup('pipe', 'date +%Y%m%d%H%M%S') }}"
  become: true
  tasks:
    - name: script deployment
      script: ./exe.sh {{ ansible_nodename }}_{{ timestamp }}
      args:
        chdir: /tmp
exe.sh executes successfully on the remote host and redirects its result to an output file like remote_20170806065817.data.
Script execution takes a few seconds, and I tried to fetch the result file after execution was done.
But {{ timestamp }} is re-evaluated and has changed by the time I fetch it.
So fetch cannot find the result file produced by the script.
What I want is to assign an immutable (constant) value in my playbook.
Is there any workaround?
Ansible uses lazy evaluation, so variables are evaluated at the moment of their use.
You should set a fact instead, which will be evaluated only once:
---
- name: script deploy
  hosts: all
  become: true
  tasks:
    - set_fact:
        timestamp: "{{ lookup('pipe', 'date +%Y%m%d%H%M%S') }}"
    - name: script deployment
      script: ./exe.sh {{ ansible_nodename }}_{{ timestamp }}
      args:
        chdir: /tmp
I'm creating a deployment playbook for our web services. Each web service is in its own directory, such as:
/webapps/service-one/
/webapps/service-two/
/webapps/service-three/
I want to check to see if the service directory exists, and if so, I want to run a shell script that stops the service gracefully. Currently, I am able to complete this step by using ignore_errors: yes.
- name: Stop services
  with_items: services_to_stop
  shell: "/webapps/scripts/stopService.sh {{item}}"
  ignore_errors: yes
While this works, the output is very messy if one of the directories doesn't exist or a service is being deployed for the first time. I effectively want to do something like one of these:
This:
- name: Stop services
  with_items: services_to_stop
  shell: "/webapps/scripts/stopService.sh {{item}}"
  when: shell: [ -d /webapps/{{item}} ]
or this:
- name: Stop services
  with_items: services_to_stop
  shell: "/webapps/scripts/stopService.sh {{item}}"
  stat:
    path: /webapps/{{item}}
  register: path
  when: path.stat.exists == True
I'd collect facts first and then do only necessary things.
- name: Check existing services
  stat:
    path: "/tmp/{{ item }}"
  with_items: "{{ services_to_stop }}"
  register: services_stat

- name: Stop existing services
  with_items: "{{ services_stat.results | selectattr('stat.exists') | map(attribute='item') | list }}"
  shell: "/webapps/scripts/stopService.sh {{ item }}"
Also note that bare variables in with_items don't work since Ansible 2.2, so you should template them.
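For example, the first task from the question would be written with the loop variable templated (services_to_stop is the list from the question; everything else is unchanged):

- name: Stop services
  with_items: "{{ services_to_stop }}"
  shell: "/webapps/scripts/stopService.sh {{ item }}"
  ignore_errors: yes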
This will let you get a list of existing directory names into the list variable dir_names (use recurse: no to read only the first level under webapps):
---
- hosts: localhost
  connection: local
  vars:
    dir_names: []
  tasks:
    - find:
        paths: "/webapps"
        file_type: directory
        recurse: no
      register: tmp_dirs
    - set_fact: dir_names="{{ dir_names + [item['path']] }}"
      no_log: True
      with_items:
        - "{{ tmp_dirs['files'] }}"
    - debug: var=dir_names
You can then use dir_names in your "Stop services" task via a with_items. It looks like you're intending to use only the name of the directory under "webapps" so you probably want to use the | basename jinja2 filter to get that, so something like this:
- name: Stop services
  with_items: "{{ dir_names }}"
  shell: "/webapps/scripts/stopService.sh {{ item | basename }}"