Schedule deletion of unused template - ansible

In Ansible Tower, is there a possibility to create a scheduled task that checks if a template has not been executed for one year and if so, deletes it?

The short answer is: yes, of course. The long answer is: someone has to create such a task. To do so, one may get familiar with the Ansible Tower REST API, in particular Job Templates - List Jobs for a Job Template.
For example, a call for the Jobs of a Job Template which was never executed
curl --silent --user ${ACCOUNT}:${PASSWORD} https://${TOWER_URL}/api/v2/job_templates/${ID}/jobs/ --write-out "\n%{http_code}\n" | jq .
would result in an output of
{
  "count": 0,
  "next": null,
  "previous": null,
  "results": []
}
200
A call for the Jobs of a Job Template which is executed daily would result in an output of
{
  "count": 70,
  "next": "/api/v2/job_templates/<id>/jobs/?page=2",
  "previous": null,
  "results": [
    {
      "id": <id>,
      <snip>
      "created": "2022-06-10T05:57:18.976798Z",
      "modified": "2022-06-10T05:57:19.666354Z",
      "name": "<name>",
      "description": "<description>",
      "unified_job_template": <id>,
      "launch_type": "manual",
      "status": "successful",
      "failed": false,
      "started": "2022-06-10T05:57:19.870208Z",
      "finished": "2022-06-10T05:57:33.752072Z",
      "canceled_on": null,
      "elapsed": 13.882,
      "job_explanation": "",
      "execution_node": "<executionNode>",
      "controller_node": "",
      "job_type": "run",
      "inventory": <id>,
      "project": <id>,
      "playbook": "<path>",
      "scm_branch": "",
      "forks": 0,
      "limit": "<hostgroup>",
      "verbosity": 0,
      "extra_vars": "{\"if_there_any\": \"false\"}",
      "job_tags": "check",
      "force_handlers": false,
      "skip_tags": "",
      "start_at_task": "",
      "timeout": 0,
      "use_fact_cache": false,
      "organization": <id>,
      "job_template": <id>,
      "passwords_needed_to_start": [
        "ssh_password"
      ],
      "allow_simultaneous": false,
      "artifacts": {},
      "scm_revision": "<rev>",
      "instance_group": 1,
      "diff_mode": false,
      "job_slice_number": 0,
      "job_slice_count": 1,
      "webhook_service": "",
      "webhook_credential": null,
      "webhook_guid": ""
    }
  ]
}
200
Since the goal is to execute it via Ansible Engine, as well as to schedule it via Ansible Tower, here is a sample rest.yml playbook:
---
- hosts: localhost
  become: false
  gather_facts: false
  vars:
    TOWER_API_URL: "<tower_url>/api/v2"
    FILTER: ".version"
    ID: "<id>"
  tasks:
    - name: Example REST API call
      shell:
        cmd: curl --silent -u '{{ ansible_user }}:{{ ansible_password }}' --location {{ TOWER_API_URL }}/ping | jq {{ FILTER }}
        warn: false
      register: result
      failed_when: result.rc != 0
      changed_when: false
      check_mode: false
    - name: Show result
      debug:
        msg: "{{ result.stdout }}"
    - name: List Jobs for a Job Template
      uri:
        url: "https://{{ TOWER_API_URL }}/job_templates/{{ ID }}/jobs/"
        user: "{{ ansible_user }}"
        password: "{{ ansible_password }}"
        force_basic_auth: true
        method: GET
        validate_certs: yes
        return_content: yes
        status_code: 200
        body_format: json
      check_mode: false
      register: result
    - name: Show result
      debug:
        msg: "{{ result.json.results }}" # list of jobs
which can be called from the CLI via
sshpass -p ${PASSWORD} ansible-playbook --user ${ACCOUNT} --ask-pass rest.yml
Please note that "count": 70 is greater than the length of the result set (result.json.results | length is 25) and that there is a next page ("next": "...?page=2"). The result.json.results | last therefore does not contain the most recent execution. This is because of pagination.
Depending on the setup and actual configuration of Ansible Tower, one may need to adjust the page_size. For example, to get the most recent result:
...
    url: "https://{{ TOWER_API_URL }}/job_templates/{{ ID }}/jobs/?page_size=100"
...
    msg: "{{ result.json.results | last }}"


Juniper software upgrade fails because of not enough space

I've tried to update an EX2300 switch from Juniper with the Ansible module juniper_junos_software, but every time I try it, it fails because the device doesn't have enough space. I've tried a bunch of things, from copying the image over SCP with other Ansible modules such as net_put and junipernetworks.junos.junos_scp, which I can't get to work either.
The Ansible code is:
- name: Install Junos OS
  hosts: EX
  roles:
    - Juniper.junos
  connection: local
  gather_facts: no
  vars:
    OS_version: "20.4R1.12"
    OS_package: "junos-arm-32-20.4R1.12.tgz"
    pkg_dir: "/etc/JunOS"
    log_dir: "/var/log"
    netconf_port: 830
    wait_time: 3600
  tasks:
    - name: Checking NETCONF connectivity
      wait_for:
        host: "{{ inventory_hostname }}"
        port: "{{ netconf_port }}"
        timeout: 5
    - name: Clean up the device
      juniper_junos_command:
        commands:
          - request system snapshot delete snap*
          - request system software delete jweb
        timeout: 200
      register: response
    - name: Print response from Clean up the device
      debug:
        var: response
    - name: Install Junos OS package
      juniper_junos_software:
        version: "{{ OS_version }}"
        local_package: "{{ pkg_dir }}/{{ OS_package }}"
        cleanfs: yes
        validate: no
        reboot: true
        logfile: "{{ log_dir }}/ansible.log"
      register: sw
      notify:
        - wait_reboot
    - name: Print response
      debug:
        var: response
    - name: Snapshot Slice alternate
      juniper_junos_command:
        commands: request system snapshot slice alternate
        timeout: 200
      register: response
  handlers:
    - name: wait_reboot
      wait_for:
        host: "{{ inventory_hostname }}"
        port: "{{ netconf_port }}"
        timeout: "{{ wait_time }}"
      when: not sw.check_mode
The error I get is:
"changed": true,
"check_mode": false,
"invocation": {
"module_args": {
"attempts": null,
"baud": null,
"checksum": null,
"checksum_algorithm": "md5",
"checksum_timeout": 300,
"cleanfs_timeout": 300,
"console": null,
"cs_passwd": null,
"cs_user": null,
"force_host": false,
"host": "10.15.84.100",
"issu": false,
"level": null,
"logdir": null,
"logfile": "/var/log/ansible.log",
"mode": null,
"nssu": false,
"passwd": null,
"port": 830,
"provider": null,
"ssh_config": null,
"ssh_private_key_file": "/etc/ansible/ssh-keys/id_ed25519",
"timeout": 30,
"user": "ansible",
"validate": false,
"vmhost": false
}
},
"msg": [
"Unable to install the software %s",
"\nERROR: estimate of space required: 119 Mbytes, available: 41 Mbytes\n"
]
}
"space required: 119 Mbytes, available: 41 Mbytes" indicates you need to delete some files, usually image files.

Creating IAM users with Ansible - getting the CLI credentials

I am creating users with the iam module, and I am using the access_key_state: create option.
However, I want my playbook to output the Access Key and the Secret Access Key for each user.
playbook.yml:
---
- name: "Starting the tasks: Creates IAM Policy, group, Role and User"
  hosts: localhost
  connection: local
  gather_facts: False
  vars_files:
    - vars/aws-credentials.yml
  tasks:
    - include: tasks/create-user.yml
      tags: user
    - include: tasks/create-group.yml
      tags: group
tasks/create-user.yml:
---
# Create the IAM users with Console and API access
- name: Create new IAM users with API keys and console access
  iam:
    iam_type: user
    name: "{{ item }}"
    state: present
    password: "{{ lookup('password', 'passwordfile chars=ascii_letters') }}"
    access_key_state: create
    update_password: on_create
  no_log: true
  register: newusers
  loop:
    - johna
    - mariab
    - carlosc
- name: test
  debug:
    msg: "{{ credentials.results }}"
The debug message "{{ credentials.results }}" gives me the Access Key, but not the Secret Access Key:
{
    "ansible_loop_var": "item",
    "changed": true,
    "created_keys": [],
    "failed": false,
    "groups": null,
    "invocation": {
        "module_args": {
            "access_key_ids": null,
            "access_key_state": "create",
            "aws_access_key": null,
            "aws_secret_key": null,
            "debug_botocore_endpoint_logs": false,
            "ec2_url": null,
            "groups": null,
            "iam_type": "user",
            "key_count": 1,
            "name": "carol.v",
            "new_name": null,
            "new_path": null,
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "path": "/",
            "profile": null,
            "region": null,
            "security_token": null,
            "state": "present",
            "trust_policy": null,
            "trust_policy_filepath": null,
            "update_password": "always",
            "validate_certs": true
        }
    },
    "item": "carlosc",
    "keys": {
        "AK_________FV": "Active"
    },
    "user_meta": {
        "access_keys": [
            {
                "access_key_id": "AK_________FV",
                "status": "Active"
            }
        ]
    },
    "user_name": "carlosc"
}
How to get the Secret Access Key for each user?
Update 09 May 2020: For further reference.
Bad news: it appears they are purposefully throwing the secret_access_key in the trash: https://github.com/ansible/ansible/blob/v2.9.7/lib/ansible/modules/cloud/amazon/iam.py#L238-L241
It appears the only way around that is to set key_count: 0 in your iam: task and then use the awscli or a custom Ansible module to make that same iam.create_access_key call and preserve the result:
- name: create access key for {{ item }}
  command: aws iam create-access-key --user-name {{ item }}
  environment:
    AWS_REGION: '{{ the_region_goes_here }}'
    AWS_ACCESS_KEY_ID: '{{ whatever_you_called_your_access_key }}'
    AWS_SECRET_ACCESS_KEY: '{{ your_aws_secret_access_key_name_here }}'
  register: user_keys
  with_items:
    - johna
    - mariab
    - carlosc
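To read the secrets back out of that registered result, a minimal follow-up sketch could look like this; it assumes the standard JSON that aws iam create-access-key prints on stdout, and new_access_keys is just an illustrative fact name:
- name: Collect the freshly created credentials from the command output
  set_fact:
    new_access_keys: "{{ new_access_keys | default([]) + [(item.stdout | from_json).AccessKey] }}"
  loop: "{{ user_keys.results }}"
  no_log: true   # each AccessKey dict contains the SecretAccessKey, so keep it out of the log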
Feel free to file an issue, although you'll likely have to file it against the new amazon.aws collection, since that iam.py is no longer present in the devel branch.
You can use community.aws.iam:
- name: Create IAM User with API keys
  community.aws.iam:
    iam_type: user
    name: some_dummy_user
    state: present
    access_key_state: create
  register: new_user
- debug:
    var: new_user
You'll be able to get your access and secret keys at:
new_user.user_meta.access_keys[0].access_key_id
new_user.user_meta.access_keys[0].secret_access_key
I have mine getting loaded into Secrets Manager and will eventually have them rotated with a lambda function.
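For that last step, a hedged sketch; it assumes the community.aws.secretsmanager_secret module (formerly aws_secret) and an illustrative secret naming scheme, so adapt it to your setup:
- name: Store the new key pair in AWS Secrets Manager
  community.aws.secretsmanager_secret:
    name: "iam/{{ new_user.user_name }}/access-key"   # hypothetical naming scheme
    state: present
    secret_type: string
    secret: "{{ new_user.user_meta.access_keys[0] | to_json }}"
  no_log: true   # the secret access key must not end up in the job log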

Raise an error based on results of other task

I am using the postgresql_db module that is part of Ansible, for example with something like:
- name: Database
  postgresql_db:
    name: "{{ vars[item + '_database_name_version'] }}"
    login_host: "{{ vars[item + '_database_host'] }}"
    login_password: "{{ vars[item + '_database_admin_password'] }}"
    login_user: "{{ vars[item + '_database_admin_username'] }}"
    port: "{{ vars[item + '_database_port'] }}"
    state: restore
    target: "{{ backup_restore[item]['db_tar'] }}"
  when: backup_restore[item]['db_tar'] is defined
  with_items: '{{ backup_restore }}'
  register: db_restore
When I debug the output of db_restore I see:
TASK [backup : db_restore] *****************************************************
ok: [myapp] => {
    "db_restore": {
        "changed": true,
        "msg": "All items completed",
        "results": [
            {
                "ansible_loop_var": "item",
                "changed": true,
                "cmd": "cmd: ****",
                "failed": false,
                "invocation": {
                    "module_args": {
                        "ca_cert": null,
                        "conn_limit": "",
                        "db": "myapp_0_1_0",
                        "encoding": "",
                        "lc_collate": "",
                        "lc_ctype": "",
                        "login_host": "1.1.1.2",
                        "login_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                        "login_unix_socket": "",
                        "login_user": "ansible",
                        "maintenance_db": "postgres",
                        "name": "myapp_0_1_0",
                        "owner": "",
                        "port": 5432,
                        "session_role": null,
                        "ssl_mode": "prefer",
                        "state": "restore",
                        "target": "/backup/tmp/myapp-myapp/myapp_daily/databases/PostgreSQL.sql.gz",
                        "target_opts": "",
                        "template": ""
                    }
                },
                "item": "myapp",
                "msg": "",
                "rc": 0,
                "stderr": "ERROR: relation \"myapp_table\" already exists\n",
                "stderr_lines": [
                    "ERROR: relation \"myapp_table\" already exists"
                ]
            }
        ]
    }
}
Although there is an error during the execution of this module, as visible in stderr, the Ansible postgresql_db module still returns "failed": false. So it looks like it is ignoring any errors that might occur while running the commands to restore the database.
Now I want to add a task that checks db_restore for the stderr attribute and, if it is present, raises an error so that the user is made aware of the problem.
How can I raise an error? Is there an error module?
Yes, there is a module to raise errors; it is called fail (https://docs.ansible.com/ansible/latest/modules/fail_module.html):
# Example playbook using fail and when together
- fail:
    msg: The system may not be provisioned according to the CMDB status.
  when: cmdb_status != "to-be-staged"
Another possible way is to use failed_when.
- name: Fail task when the command error output prints FAILED
  command: /usr/bin/example-command -x -y -z
  register: command_result
  failed_when: "'FAILED' in command_result.stderr"
I would recommend reading the Ansible page on error handling (https://docs.ansible.com/ansible/latest/user_guide/playbooks_error_handling.html).
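Applied to the registered db_restore loop from the question, a hedged sketch combining the two ideas could look like this; treating any non-empty stderr as fatal is an assumption, and you may want to whitelist harmless messages:
- name: Fail when any restore produced error output
  fail:
    msg: "Database restore reported errors: {{ item.stderr }}"
  when: item.stderr is defined and item.stderr | length > 0
  loop: "{{ db_restore.results }}"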
I strongly agree with Stefan Wegener's answer, but want to add another possibility.
I'm curious if your problem only occurs because of the loop you wrapped around your task.
Another possible solution would be to add the any_errors_fatal: true clause at play level, as described in the Ansible documentation:
- hosts: somehosts
  any_errors_fatal: true
  roles:
    - myrole
See https://docs.ansible.com/ansible/latest/user_guide/playbooks_error_handling.html#aborting-the-play
for reference.

How to retrieve a value from stdout of ansible registered variable?

One of my Ansible plays is using the shell module to create a Vault token. The command returns some values that I want to use in the next play.
I registered the command output in the vault_output variable. Here is the stdout from that variable:
"vault_output.stdout": {
"auth": {
"accessor": "XXXXXXXXXXXXXXXXXXX",
"client_token": "abcdefghijkl",
"entity_id": "XXXXXXXXXXXXXXXXXXX",
"lease_duration": 600,
"metadata": {
"username": "vault"
},
"policies": [
"default",
],
"renewable": true,
"token_policies": [
"default",
]
},
"data": {},
"lease_duration": 0,
"lease_id": "",
"renewable": false,
"request_id": "3470a160-3ed5-ceaa-f57b-4f3d74f6a269",
"warnings": null,
"wrap_info": null
}
}
I am looking to get the value of client_token, which should be abcdefghijkl. Can anyone help me get that value so it can be used in the next play?
I have tried using vault_output.stdout[num], vault_output.stdout_lines, vault_output.stdout.auth, and vault_output.stdout['auth'], but no luck.
Expecting Result:
"client_token": "abcdefghijkl"
Finally found an answer to this.
- set_fact:
    result: "{{ (vault_output.stdout | from_json).auth.client_token }}"
- debug:
    var: result
result: 9fa7fdd6-c8da-ac8c-b5d8-df18b17eb3f0
The output from debug: var=vault_output.stdout is actually a bit misleading here. The variable does not contain a dict object Ansible can index. You will need to first parse it with the from_json filter:
- set_fact:
    result: "{{ vault_output.stdout | from_json }}"
- debug:
    var: result.auth.client_token
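Once parsed, the token can be used directly in a follow-up task, for example with the uri module. A sketch, where vault_addr and the secret path are placeholders:
- name: Read a secret with the freshly issued client token
  uri:
    url: "{{ vault_addr }}/v1/secret/data/myapp"   # placeholder KV v2 path
    method: GET
    headers:
      X-Vault-Token: "{{ (vault_output.stdout | from_json).auth.client_token }}"
    return_content: yes
  register: vault_secret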

How can I loop items as input parameters to an Ansible role

I am trying to convert an existing Ansible playbook (for extracting the webpage content of multiple URLs in parallel) into reusable roles. I need the role to accept variables in a loop and produce the output for all the items in a single task, which my current playbook is able to do. But the current role is only able to produce the output of the last item in the loop.
I have tried registering the webpage content inside and outside the role, but to no avail. Also, looping over the response results with the same with_items as the role produces results for non-200 values.
FYI, I got the expected output by including the loop inside the role, but that defeats the purpose of maintaining a role for the GET call, because I will not need a loop every time. So I am expecting to loop over the role in testplaybook.yml.
Test-Role: main.yml
- uri:
    url: "{{ URL_Variable }}"
    method: GET
    status_code: 200
    return_content: yes
  register: response
  ignore_errors: true
testplaybook.yml:
- hosts: localhost
  gather_facts: true
  tasks:
    - name: Include roles
      include_role:
        name: Test-Role
      vars:
        URL_Variable: "http://{{ item }}:{{ hostvars[groups['group1'][0]]['port'] }}/{{ hostvars[groups['group1'][0]]['app'] }}/"
      with_items: "{{ groups['group1'] }}"
    - name: "Display content"
      debug:
        var: response.results
Expected Output:
response.results:
ok: [127.0.0.1] => (item=[0, 'item1']) => {
    "ansible_loop_var": "item",
    "item": [
        0,
        "item1"
    ],
    "response": {
        "accept_ranges": "bytes",
        "changed": false,
        "connection": "close",
        "content": "content1",
        "content_length": "719",
        "content_type": "text/html",
        "cookies": {},
        "failed": false,
        "msg": "OK (719 bytes)",
        "redirected": false,
        "server": "42",
        "status": 200,
        "url": "http://item1:port/app/"
    }
}
ok: [127.0.0.1] => (item=[1, 'item2']) => {
    "ansible_loop_var": "item",
    "item": [
        1,
        "item2"
    ],
    "response": {
        "accept_ranges": "bytes",
        "changed": false,
        "connection": "close",
        "content": "content2",
        "content_length": "719",
        "content_type": "text/html",
        "cookies": {},
        "failed": false,
        "msg": "OK (719 bytes)",
        "redirected": false,
        "server": "42",
        "status": 200,
        "url": "http://item2:port/app/"
    }
}
Try this Test-Role main.yml file:
- uri:
    url: "{{ URL_Variable }}"
    method: GET
    status_code: 200
    return_content: yes
  register: response
  ignore_errors: true
- name: Add response to responses array
  set_fact:
    responses_results: "{{ responses_results | default([]) + [{'URL': URL_Variable, 'response': response.content}] }}"
This works with include_tasks; I assume it would work with include_role as well. The variable responses_results should persist across roles, assuming it's in the same play. If it does not work, try switching your code to a single role instead, with an include_tasks.
Hope it helps.
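For completeness, on the calling side, a sketch of how testplaybook.yml could then print everything the role collected (responses_results being the fact set inside the role):
- name: Include role for every host in group1
  include_role:
    name: Test-Role
  vars:
    URL_Variable: "http://{{ item }}:{{ hostvars[groups['group1'][0]]['port'] }}/{{ hostvars[groups['group1'][0]]['app'] }}/"
  with_items: "{{ groups['group1'] }}"
- name: Display all collected responses
  debug:
    var: responses_results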
