How to use Ansible register values as role environment variables? - ansible

I am building an AWS EC2 instance using Ansible and want to set three environment variables to be used by the roles. The playbook is launched from AWX. I get the STS Assume Role values in the "Assume Credentials" task.
---
- name: Create Instance
  hosts: localhost
  gather_facts: false
  connection: local
  tasks:
    - import_tasks: usshared.yml
      when: (buildenv == "us-shared")
    - import_tasks: ossettings.yml
    - name: Assume Credentials
      sts_assume_role:
        region: "{{ target_region }}"
        role_arn: "{{ awsarnrole }}"
        role_session_name: "AWXBuildServer"
      register: assumed_role
    # - name: Assume Role Data
    #   debug:
    #     var: assumed_role
    - name: Find AMI Target
      ec2_ami_info:
        filters:
          name: "cmpc*{{ osselection }}*base*"
        owners: self
        region: "{{ target_region }}"
        aws_access_key: "{{ assumed_role.sts_creds.access_key }}"
        aws_secret_key: "{{ assumed_role.sts_creds.secret_key }}"
        security_token: "{{ assumed_role.sts_creds.session_token }}"
      register: found_base_ami
    - import_tasks: osbuildlinux.yml
      when: (osselection == "centos7") or (osselection == "rhel7")
    - import_tasks: osbuildwin.yml
      when: (osselection == "win2019")
- name: Configure New Linux Instance
  hosts: new_launch_linux
  gather_facts: true
  roles:
    - systemupdates
    - generalostasks
    - networkconfig
    - appgroup
  environment:
    AWS_ACCESS_KEY: "{{ assumed_role.sts_creds.access_key }}"
    AWS_SECRET_ACCESS_KEY: "{{ assumed_role.sts_creds.secret_key }}"
    AWS_SECURITY_TOKEN: "{{ assumed_role.sts_creds.session_token }}"
- name: Configure New Windows Instance
  hosts: new_launch_windows
  gather_facts: true
  roles:
    - systemupdates
    - generalostasks
    - networkconfig
    - winadjoin
    - appgroup
This is the assumed_role register contents:
"assumed_role": {
"changed": true,
"sts_creds": {
"access_key": "HIDDEN",
"secret_key": "HIDDEN",
"session_token": "HIDDEN",
"expiration": "2022-06-08T16:58:38+00:00"
},
"sts_user": {
"assumed_role_id": "HIDDEN:AWXBuildServer",
"arn": "arn:aws:sts::0123456789:assumed-role/sre-ec2-role-assumed/AWXBuildServer"
}
}
Now I want to use some of the register values as environment variables for roles like so:
roles:
  - systemupdates
  - generalostasks
  - networkconfig
  - appgroup
environment:
  AWS_ACCESS_KEY: "{{ assumed_role.sts_creds.access_key }}"
  AWS_SECRET_ACCESS_KEY: "{{ assumed_role.sts_creds.secret_key }}"
  AWS_SECURITY_TOKEN: "{{ assumed_role.sts_creds.session_token }}"
However, I am getting this error:
The field 'environment' has an invalid value, which includes an undefined variable. The error was: 'assumed_role' is undefined. 'assumed_role' is undefined
How do I correctly pass the register values to the environment?

OK, you have more than one play in the same playbook with different hosts, so I suggest using add_host after you have registered the result:
- name: Assume Credentials
  sts_assume_role:
    region: "{{ target_region }}"
    role_arn: "{{ awsarnrole }}"
    role_session_name: "AWXBuildServer"
  register: assumed_role
- name: add variables to dummy host
  add_host:
    name: "variable_holder"
    shared_variable: "{{ assumed_role }}"
Then, in the second play:
- name: Configure New Linux Instance
  hosts: new_launch_linux
  gather_facts: true
  vars:
    assumed_role: "{{ hostvars['variable_holder']['shared_variable'] }}"
  roles:
    - systemupdates
    - generalostasks
    - networkconfig
    - appgroup
  environment:
    AWS_ACCESS_KEY: "{{ assumed_role.sts_creds.access_key }}"
    AWS_SECRET_ACCESS_KEY: "{{ assumed_role.sts_creds.secret_key }}"
    AWS_SECURITY_TOKEN: "{{ assumed_role.sts_creds.session_token }}"
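As a side note, registered variables are kept per host for the duration of the run, so a later play can often read them straight out of hostvars for the host that ran the task (localhost here), without the dummy host. A minimal sketch, assuming the register happened on localhost earlier in the same playbook run:

- name: Configure New Linux Instance
  hosts: new_launch_linux
  gather_facts: true
  vars:
    # read the result registered by the first play on localhost
    assumed_role: "{{ hostvars['localhost']['assumed_role'] }}"
  roles:
    - systemupdates
    - generalostasks
    - networkconfig
    - appgroup
  environment:
    AWS_ACCESS_KEY: "{{ assumed_role.sts_creds.access_key }}"
    AWS_SECRET_ACCESS_KEY: "{{ assumed_role.sts_creds.secret_key }}"
    AWS_SECURITY_TOKEN: "{{ assumed_role.sts_creds.session_token }}"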

Related

Deploy multiple VMs from an OVF file

I am trying to develop the code for creating multiple VMs using the vmware_deploy_ovf module in Ansible. I've tried other solutions, but they didn't work out. Here you can see my playbook:
- hosts: localhost
  become: yes
  gather_facts: false
  vars_files:
    - vars.yml
  tasks:
    - name: deploy ovf
      vmware_deploy_ovf:
        hostname: "{{ hostname }}"
        username: "{{ username }}"
        password: "{{ password }}"
        validate_certs: "{{ validate_certs }}"
        datacenter: "{{ datacenter }}"
        name: "{{ vm_name }}"
        ovf: "{{ ovf_path }}"
        cluster: "{{ cluster }}"
        wait_for_ip_address: true
        inject_ovf_env: false
        power_on: no
        datastore: "{{ datastore }}"
        networks: "{{ vcen_network }}"
        disk_provisioning: thin
In the variables file, I set "vm_name" as a list.
vars.yml
vm_name:
  - vm-01
  - vm-02
So I ran the playbook with extra variables like this:
ansible-playbook main.yml -e "vm_name=vm-01" -e "vm_name=vm-02"
It only creates vm-02, not both. I also tried "loop" and "with_items", but they didn't work out.
Please assist, thank you.
Yeah. Don't do that. Have the hosts in your inventory, and let Ansible do its thing:
- hosts: all
  become: no
  gather_facts: false
  vars_files:
    - vars.yml
  tasks:
    - name: deploy ovf
      vmware_deploy_ovf:
        # hostname: "{{ hostname }}"   # use environment variables
        # username: "{{ username }}"   # use environment variables
        # password: "{{ password }}"   # use environment variables
        validate_certs: "{{ validate_certs }}"
        datacenter: "{{ datacenter }}"
        name: "{{ inventory_hostname }}"
        ovf: "{{ ovf_path }}"
        cluster: "{{ cluster }}"
        wait_for_ip_address: true
        inject_ovf_env: false
        power_on: no
        datastore: "{{ datastore }}"
        networks: "{{ vcen_network }}"
        disk_provisioning: thin
      delegate_to: localhost
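For completeness, a minimal static inventory that would drive the play above might look like this (file and group names are illustrative):

# inventory.ini (hypothetical)
[ovf_targets]
vm-01
vm-02

Then run it with: ansible-playbook -i inventory.ini main.yml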

Ansible with VMWare to create Dictionary using Ansible

I am trying to create a dictionary with lists of items from a VMware details collection. I was able to create the lists individually, but I'm not sure how to merge them.
- name: Gather DC info
  community.vmware.vmware_datacenter_info:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: false
  register: datacenter_infor
- name: Set DC_name variable
  set_fact:
    # dc_name: "{{ item.name }}"
    dc_name: >-
      {{ (dc_name | default([]))
         + [item.name] }}
  loop: "{{ datacenter_infor.datacenter_info }}"
- name: Gather cluster info
  vmware_cluster_info:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: false
    datacenter: "{{ item }}"
  register: cluster_info
  loop: "{{ dc_name }}"
- name: Set Host_name variable
  set_fact:
    host_name_list: >-
      {{ (host_name_list | default([]))
         + data }}
  vars:
    data: "{{ item.clusters.values() }}"
  loop: "{{ cluster_info.results }}"
This results in the output below:
TASK [Set Host_name variable] *********************************************************************************************************************************************************************************************
ok: [localhost] => (item={'changed': False, 'clusters': {'PQR-CLU01': {'hosts': [{'name': 'PQR-cn0001.myhost.com', 'folder': '/PQR/host/PQR-CLU01'}, {'name': 'PQR-cn0002.myhost.com', 'folder': '/PQR/host/PQR-CLU01'}})
ok: [localhost] => (item={'changed': False, 'clusters': {'ABC-CLU01': {'hosts': [{'name': 'ABC-cn0002.myhost.com', 'folder': '/ABC/host/ABC-CLU01'}, {'name': 'ABC-cn0001.myhost.com', 'folder': '/ABC/host/ABC-CLU01'}})
How can I create a dictionary from the above items using Ansible, like this:
{'PQR-CLU01': ['PQR-cn0001.myhost.com', 'PQR-cn0002.myhost.com'], 'ABC-CLU01': ['ABC-cn0002.myhost.com', 'ABC-cn0001.myhost.com']}
Given the data
cluster_info:
  results:
    - changed: false
      clusters:
        PQR-CLU01:
          hosts:
            - folder: /PQR/host/PQR-CLU01
              name: PQR-cn0001.myhost.com
            - folder: /PQR/host/PQR-CLU01
              name: PQR-cn0002.myhost.com
    - changed: false
      clusters:
        ABC-CLU01:
          hosts:
            - folder: /ABC/host/ABC-CLU01
              name: ABC-cn0002.myhost.com
            - folder: /ABC/host/ABC-CLU01
              name: ABC-cn0001.myhost.com
Q: "Create dictionary (below)."
cluster_dict:
  ABC-CLU01:
    - ABC-cn0002.myhost.com
    - ABC-cn0001.myhost.com
  PQR-CLU01:
    - PQR-cn0001.myhost.com
    - PQR-cn0002.myhost.com
A: Create the list of clusters
cluster_list: "{{ cluster_info.results |
                  map(attribute='clusters') |
                  map('dict2items') |
                  flatten }}"
gives
cluster_list:
  - key: PQR-CLU01
    value:
      hosts:
        - folder: /PQR/host/PQR-CLU01
          name: PQR-cn0001.myhost.com
        - folder: /PQR/host/PQR-CLU01
          name: PQR-cn0002.myhost.com
  - key: ABC-CLU01
    value:
      hosts:
        - folder: /ABC/host/ABC-CLU01
          name: ABC-cn0002.myhost.com
        - folder: /ABC/host/ABC-CLU01
          name: ABC-cn0001.myhost.com
Create a list of keys
cluster_keys: "{{ cluster_list |
                  map(attribute='key') |
                  list }}"
gives
cluster_keys:
  - PQR-CLU01
  - ABC-CLU01
Create a list of values
cluster_vals: "{{ cluster_list |
                  map(attribute='value.hosts') |
                  map('map', attribute='name') |
                  list }}"
gives
cluster_vals:
  - - PQR-cn0001.myhost.com
    - PQR-cn0002.myhost.com
  - - ABC-cn0002.myhost.com
    - ABC-cn0001.myhost.com
Create the dictionary
cluster_dict: "{{ dict(cluster_keys|zip(cluster_vals)) }}"
gives
cluster_dict:
  ABC-CLU01:
    - ABC-cn0002.myhost.com
    - ABC-cn0001.myhost.com
  PQR-CLU01:
    - PQR-cn0001.myhost.com
    - PQR-cn0002.myhost.com
Example of a complete playbook
- hosts: localhost
  vars:
    cluster_info:
      results:
        - changed: false
          clusters:
            PQR-CLU01:
              hosts:
                - folder: /PQR/host/PQR-CLU01
                  name: PQR-cn0001.myhost.com
                - folder: /PQR/host/PQR-CLU01
                  name: PQR-cn0002.myhost.com
        - changed: false
          clusters:
            ABC-CLU01:
              hosts:
                - folder: /ABC/host/ABC-CLU01
                  name: ABC-cn0002.myhost.com
                - folder: /ABC/host/ABC-CLU01
                  name: ABC-cn0001.myhost.com
    cluster_list: "{{ cluster_info.results |
                      map(attribute='clusters') |
                      map('dict2items') |
                      flatten }}"
    cluster_keys: "{{ cluster_list |
                      map(attribute='key') |
                      list }}"
    cluster_vals: "{{ cluster_list |
                      map(attribute='value.hosts') |
                      map('map', attribute='name') |
                      list }}"
    cluster_dict: "{{ dict(cluster_keys | zip(cluster_vals)) }}"
  tasks:
    - debug:
        var: cluster_dict
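If you then need to iterate over the result, dict2items turns the dictionary back into a key/value list; a small usage sketch:

- name: Show hosts per cluster
  debug:
    msg: "Cluster {{ item.key }} has hosts: {{ item.value | join(', ') }}"
  loop: "{{ cluster_dict | dict2items }}"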

Ansible Tower how to pass inventory to my playbook variables

I am setting up a VMware job in Ansible Tower to snapshot a list of VMs; ideally, this list should be generated by AWX/Tower from the vSphere dynamic inventory. The inventory is named "lab_vm" in AWX and uses either the hostname or the UUID of the VM.
How do I pass this through in my playbook variables file?
---
vars:
  vmware:
    host: '{{ lookup("env", "VMWARE_HOST") }}'
    username: '{{ lookup("env", "VMWARE_USER") }}'
    password: '{{ lookup("env", "VMWARE_PASSWORD") }}'
  vcenter_datacenter: "dc1"
  vcenter_validate_certs: false
  vm_name: "EVE-NG"
  vm_template: "Win2019-Template"
  vm_folder: "Network Labs"
My playbook:
---
- name: vm snapshot
  hosts: localhost
  become: false
  gather_facts: false
  collections:
    - community.vmware
  pre_tasks:
    - include_vars: vars.yml
  tasks:
    - name: create snapshot
      vmware_guest_snapshot:
        # hostname: "{{ host }}"
        # username: "{{ user }}"
        # password: "{{ password }}"
        datacenter: "{{ vcenter_datacenter }}"
        validate_certs: False
        name: "{{ vm_name }}"
        state: present
        snapshot_name: "Ansible Managed Snapshot"
        folder: "{{ vm_folder }}"
        description: "This snapshot is created by Ansible Playbook"
You're going about it backward. Ansible loops through the inventory for you. Use that feature, and delegate the task to localhost:
---
- name: vm snapshot
  hosts: all
  become: false
  gather_facts: false
  collections:
    - community.vmware
  pre_tasks:
    - include_vars: vars.yml
  tasks:
    - name: create snapshot
      vmware_guest_snapshot:
        datacenter: "{{ vcenter_datacenter }}"
        validate_certs: False
        name: "{{ inventory_hostname }}"
        state: present
        snapshot_name: "Ansible Managed Snapshot"
        folder: "{{ vm_folder }}"
        description: "This snapshot is created by Ansible Playbook"
      delegate_to: localhost
I've not used this particular module before, but don't you want snapshot_name to be unique for each guest?
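If uniqueness is wanted, one way is to fold the inventory hostname (and optionally a timestamp) into the name; a hedged sketch, with an illustrative format string:

        # snapshot name unique per guest and per run (format is illustrative)
        snapshot_name: "Ansible-{{ inventory_hostname }}-{{ '%Y%m%d%H%M%S' | strftime }}"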

Ansible - Working with block module

I'm starting out with Ansible and AWS.
I've created a playbook that launches a new instance and later attaches an existing volume to it:
- name: launch instance
  ec2:
    key_name: "{{ aws_vars.key_name }}"
    group: "{{ aws_vars.security_group }}"
    instance_type: "{{ aws_vars.instance_type }}"
    image: "{{ aws_vars.image }}"
    region: "{{ aws_vars.region }}"
    wait: yes
    count: 1
    instance_tags: "{{ tags }}"
    monitoring: yes
    vpc_subnet_id: "{{ subnetid }}"
    assign_public_ip: yes
  register: destination
- name: Attach volumes
  ec2_vol:
    device_name: xvdf
    instance: "{{ destination.instances[0].instance_id }}"
    region: "{{ aws_vars.region }}"
    tags: "{{ ec2_tags }}"
    id: "{{ volume_id }}"
    delete_on_termination: yes
  with_items: "{{ destination }}"
So far, so good; everything works.
I would like to add a clean-up step, so that if there is an error of any type in the later modules, I won't have any garbage instances.
I understand the idea here is to use a block; however, when I tried working with block, nothing really happens:
---
# EC2 Migrations.
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Vars
      include_vars:
        dir: files
        name: aws_vars
    - name: Create instance
      block:
        - debug:
            msg: "Launching EC2"
          notify:
            - launch_instance
            - Cloudwatch
      rescue:
        - debug:
            msg: "Rolling back"
          notify: stop_instance
  handlers:
    - name: launch_instance
      ec2:
        key_name: "{{ aws_vars.key_name }}"
        group: "{{ aws_vars.security_group }}"
        instance_type: "{{ aws_vars.instance_type }}"
        image: "{{ aws_vars.image }}"
        region: "{{ region }}"
        wait: yes
        count: 1
        monitoring: yes
        assign_public_ip: yes
      register: new_ec2
    - debug: msg="{{ new_ec2.instances[0].id }}"
    - name: stop_instance
      ec2:
        instance_type: "{{ aws_vars.instance_type }}"
        instance_ids: "{{ new_ec2.instances[0].id }}"
        state: stopped
    - name: Cloudwatch
      ec2_metric_alarm:
        state: present
        region: "{{ aws_vars.region }}"
        name: "{{ new_ec2.id }}-High-CPU"
        metric: "CPUUtilization"
        namespace: "AWS/EC2"
        statistic: Average
        comparison: ">="
        threshold: "90.0"
        period: 300
        evaluation_periods: 3
        unit: "Percent"
        description: "Instance CPU is above 90%"
        dimensions: "{'InstanceId': '{{ new_ec2.instances[0].id }}' }"
        alarm_actions: "{{ aws_vars.sns_arn }}"
        ok_actions: "{{ aws_vars.sns_arn }}"
      with_items: "{{ new_ec2 }}"
You put a debug task into the block. The debug module returns an ok status, so:
- it does not call a handler (that requires a changed status),
- the rescue section is never triggered (that requires a failed status).
Thus it is expected that "nothing really happens".
You need to put your actual tasks into the block (I leave them intact, assuming they are correct):
- name: Create instance
  block:
    - name: launch_instance
      ec2:
        key_name: "{{ aws_vars.key_name }}"
        group: "{{ aws_vars.security_group }}"
        instance_type: "{{ aws_vars.instance_type }}"
        image: "{{ aws_vars.image }}"
        region: "{{ region }}"
        wait: yes
        count: 1
        monitoring: yes
        assign_public_ip: yes
      register: new_ec2
    - debug:
        msg: "{{ new_ec2.instances[0].id }}"
    - name: Cloudwatch
      ec2_metric_alarm:
        state: present
        region: "{{ aws_vars.region }}"
        name: "{{ new_ec2.id }}-High-CPU"
        metric: "CPUUtilization"
        namespace: "AWS/EC2"
        statistic: Average
        comparison: ">="
        threshold: "90.0"
        period: 300
        evaluation_periods: 3
        unit: "Percent"
        description: "Instance CPU is above 90%"
        dimensions: "{'InstanceId': '{{ new_ec2.instances[0].id }}' }"
        alarm_actions: "{{ aws_vars.sns_arn }}"
        ok_actions: "{{ aws_vars.sns_arn }}"
      with_items: "{{ new_ec2 }}"
  rescue:
    - name: stop_instance
      ec2:
        instance_type: "{{ aws_vars.instance_type }}"
        instance_ids: "{{ new_ec2.instances[0].id }}"
        state: stopped
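As a side note, block also takes an always section that runs whether the block succeeded or failed, which suits unconditional cleanup or reporting. A minimal runnable sketch, with debug tasks standing in for real work:

- name: Create instance with cleanup
  block:
    - name: launch_instance
      debug:
        msg: "provisioning happens here"
  rescue:
    - name: stop_instance
      debug:
        msg: "rollback happens here"
  always:
    - name: report outcome
      debug:
        msg: "Provisioning attempt finished, success or failure."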

how to get the exit status of each task in ansible

I have 3 tasks in my Ansible YAML file, as below.
---
- name: Instance provisioning
  local_action:
    module: ec2
    region: "{{ vpc_region }}"
    key_name: "{{ ec2_keypair }}"
    instance_type: "{{ instance_type }}"
    image: "{{ ec2_image }}"
    zone: "{{ public_az }}"
    volumes:
      - device_name: "{{ device }}"
        volume_type: "{{ instance_volumetype }}"
        volume_size: "{{ volume }}"
        delete_on_termination: "{{ state }}"
    instance_tags:
      Name: "{{ instance_name }}_{{ release_name }}_APACHE"
      environment: "{{ env_type }}"
    vpc_subnet_id: "{{ public_id }}"
    assign_public_ip: "{{ public_ip_assign }}"
    group_id: "{{ sg_apache }},{{ sg_internal }}"
    wait: "{{ wait_type }}"
  register: ec2
- name: adding group to inventory file
  lineinfile:
    dest: "/etc/ansible/hosts"
    regexp: "^\\[{{ release_name }}\\]"
    line: "[{{ release_name }}]"
    state: present
- name: adding apache ip to hosts
  lineinfile:
    dest: "/etc/ansible/hosts"
    line: "{{ item.private_ip }} name=apache dns={{ item.public_dns_name }}"
  with_items: "{{ ec2.instances }}"
Now I want to check the exit status of each task, whether it is a success or a failure.
If any one of the tasks fails, the other tasks should not execute.
Please advise how to write such an Ansible playbook.
In your first task, you have registered the output to ec2.
Now use the fail module to stop the play if the task fails.
Ex.
  register: ec2

- fail:
  when: "ec2.rc == 1"
Here rc is the return code of the command; we are assuming 1 for failure and 0 for success.
Use the fail module after every task.
Let me know if it works for you.
Register a variable in each task and then check it in the next task. See http://docs.ansible.com/ansible/playbooks_tests.html#task-results
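A minimal sketch of that register-and-check pattern (task names and commands are illustrative):

- name: first task
  command: /bin/true
  register: first_result

- name: second task, runs only if the first succeeded
  debug:
    msg: "first task succeeded"
  when: first_result is succeeded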
This is already the default behavior in Ansible. If a task fails, the Playbook aborts and reports the failure. You don't need to build in any extra functionality around this.
Maybe playbook blocks and their error handling can help you?
Kumar, if you want to check each task's output for success or failure, do this:
---
- name: Instance provisioning
  local_action:
    module: ec2
    region: "{{ vpc_region }}"
    key_name: "{{ ec2_keypair }}"
    instance_type: "{{ instance_type }}"
    image: "{{ ec2_image }}"
    zone: "{{ public_az }}"
    volumes:
      - device_name: "{{ device }}"
        volume_type: "{{ instance_volumetype }}"
        volume_size: "{{ volume }}"
        delete_on_termination: "{{ state }}"
    instance_tags:
      Name: "{{ instance_name }}_{{ release_name }}_APACHE"
      environment: "{{ env_type }}"
    vpc_subnet_id: "{{ public_id }}"
    assign_public_ip: "{{ public_ip_assign }}"
    group_id: "{{ sg_apache }},{{ sg_internal }}"
    wait: "{{ wait_type }}"
  register: ec2
- name: adding group to inventory file
  lineinfile:
    dest: "/etc/ansible/hosts"
    regexp: "^\\[{{ release_name }}\\]"
    line: "[{{ release_name }}]"
    state: present
  when: ec2 | changed
  register: fileoutput
- name: adding apache ip to hosts
  lineinfile:
    dest: "/etc/ansible/hosts"
    line: "{{ item.private_ip }} name=apache dns={{ item.public_dns_name }}"
  with_items: "{{ ec2.instances }}"
  when: fileoutput | changed
Register a variable in each and every task: if the previous task's status changed to true, the following task will execute; otherwise it will be skipped.
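One caveat: the "ec2 | changed" filter form used above was deprecated and later removed from Ansible; on current releases, write the condition as a test instead:

  when: ec2 is changed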
