Ansible valueFrom AWS Secrets Manager

I need to set environment variables for a container in AWS Fargate. The values for those variables are stored in AWS Secrets Manager; the secret ARN is arn:aws:secretsmanager:eu-west-1:909628726468:secret:secret.automation-user-KBSm8J and it holds two key/value pairs, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
In CloudFormation the following worked perfectly:
ContainerDefinitions:
  - Name: "prowler"
    Image: !Ref Image
    Environment:
      - Name: AWS_ACCESS_KEY_ID
        Value: '{{resolve:secretsmanager:secret.automation-user:SecretString:AWS_ACCESS_KEY_ID}}'
I have to do the same with Ansible (v2.9.15) and the community.aws.ecs_taskdefinition module.
Based on the "official" example I have the following snippet:
- name: Create task definition
  ecs_taskdefinition:
    family: "{{ task_definition_name }}"
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    region: "{{ aws_region }}"
    execution_role_arn: "{{ execution_role_arn }}"
    containers:
      - name: prowler
        essential: true
        image: "{{ image }}"
        environment:
          - name: "AWS_ACCESS_KEY_ID"
            valueFrom: "arn:aws:secretsmanager:eu-west-1:909628726468:secret:secret.automation-user-KBSm8J/AWS_ACCESS_KEY_ID"
...but it doesn't work:
TASK [ansible-role-prowler-deploy : Create task definition] ********************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'value'
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1607197370.8633459-17-102108854554553/AnsiballZ_ecs_taskdefinition.py\", line 102, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1607197370.8633459-17-102108854554553/AnsiballZ_ecs_taskdefinition.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1607197370.8633459-17-102108854554553/AnsiballZ_ecs_taskdefinition.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.cloud.amazon.ecs_taskdefinition', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/usr/lib/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_ecs_taskdefinition_payload_5roid3ob/ansible_ecs_taskdefinition_payload.zip/ansible/modules/cloud/amazon/ecs_taskdefinition.py\", line 520, in <module>\n File \"/tmp/ansible_ecs_taskdefinition_payload_5roid3ob/ansible_ecs_taskdefinition_payload.zip/ansible/modules/cloud/amazon/ecs_taskdefinition.py\", line 357, in main\nKeyError: 'value'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
I tried a few variations of that syntax, but no luck.

It turned out that the environment list only accepts plain name/value pairs (hence the KeyError: 'value'); Secrets Manager references belong in the separate secrets section:
- name: Create ECS task definition
  ecs_taskdefinition:
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    region: "{{ aws_region }}"
    family: "{{ task_definition_name }}"
    execution_role_arn: "{{ execution_role_arn }}"
    containers:
      - name: prowler
        essential: true
        image: "{{ image }}"
        repositoryCredentials:
          credentialsParameter: "{{ artifactory_creds_arn }}"
        logConfiguration:
          logDriver: awslogs
          options:
            "awslogs-group": "{{ log_group_name }}"
            "awslogs-region": "{{ aws_region }}"
            "awslogs-stream-prefix": "ecs"
        secrets:
          - name: "AWS_ACCESS_KEY_ID"
            valueFrom: "{{ aws_ak_arn }}"
          - name: "AWS_SECRET_ACCESS_KEY"
            valueFrom: "{{ aws_sk_arn }}"
        environment:
          - name: "AWS_ACCOUNT_ID"
            value: "{{ aws_id }}"

Related

Error on Ansible VMware vMotion module - need to move VMs using Ansible from primary vCenter to DR site for emergency move

I am attempting to migrate VMs from one vCenter to another in the environment using Ansible.
Both vCenters are running vSphere Client version 7.0.3.01100.
ansible-helper is a RHEL 8 system with Python 3 and the required pyvmomi and vsphere-automation-sdk-python packages.
Note: the VMware snapshot and power-off tasks work; it only fails at the vMotion stage.
The user attempting the vMotion is the same (AD-controlled) account in both vCenters and has the same privileges.
Here is the playbook used to attempt the move:
---
- name: Migrate Systems to Subterra (Fail-over to DR Site)
  hosts: subterra
  gather_facts: no
  vars_files:
    - environments/vars/vault.yml
  vars:
    vcenter_hostname: "vcenter1.company.com"
    vcenter_username: "DOMAIN\\ansible_user"
    dst_vcenter: "vcenter2.company.com"
    datacenter_name: "DC1"
    datastore: "SAN_Location1"
  tasks:
    - name: Gather one specific VM
      community.vmware.vmware_vm_info:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        vm_name: "{{ inventory_hostname }}"
        validate_certs: false
      delegate_to: ansible-helper
      register: vminfo
    - name: List the VM's Information
      ansible.builtin.debug:
        var: vminfo
    - name: print keys of cluster data
      ansible.builtin.debug:
        msg: "{{ vminfo.virtual_machines | list }}"
    - name: List the VM's Folder Information
      ansible.builtin.debug:
        var: vminfo.virtual_machines[0].folder
    - name: Testing printing the json_query
      ansible.builtin.debug:
        msg: "{{ vminfo | json_query('virtual_machines[0].folder') }}"
    - name: Listing Datastore name
      ansible.builtin.debug:
        msg: "{{ vminfo | json_query('virtual_machines[0].datastore_url.name') }}"
    - name: Define VM Folder
      ansible.builtin.set_fact:
        vm_folder_loc: "{{ vminfo | json_query('virtual_machines[0].folder') }}"
    - name: Define Data Store from vm output
      ansible.builtin.set_fact:
        dest_data_store: "{{ vminfo | json_query('virtual_machines[0].datastore_url.name') }}"
    - name: List the VM's Folder Information
      ansible.builtin.debug:
        var: vm_folder_loc
    - name: Create a snapshot prior to migration
      community.vmware.vmware_guest_snapshot:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        datacenter: "{{ datacenter_name }}"
        folder: "{{ vm_folder_loc }}"
        name: "{{ inventory_hostname }}"
        state: present
        memory_dump: false
        validate_certs: false
        snapshot_name: Snapshot_Prior_to_Migration
        description: "Snapshot of system prior to migration to vcenter2"
      delegate_to: ansible-helper
    - name: PowerOff VM
      community.vmware.vmware_guest:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        name: "{{ inventory_hostname }}"
        validate_certs: false
        state: poweredoff
      delegate_to: ansible-helper
    - name: Perform vMotion of virtual machine to Subterra
      community.vmware.vmware_vmotion:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        vm_name: "{{ inventory_hostname }}"
        destination_datastore: "{{ dest_data_store }}"
        destination_datacenter: "vcenter2"
        destination_resourcepool: "Linux"
        validate_certs: false
      delegate_to: ansible-helper
    - name: PowerOn VM
      community.vmware.vmware_guest:
        hostname: "{{ dst_vcenter }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        name: "{{ inventory_hostname }}"
        validate_certs: false
        state: poweredon
      delegate_to: ansible-helper
The playbook works until it gets to the community.vmware.vmware_vmotion module, where I get the following error:
{
"exception": "Traceback (most recent call last):\r\n File \"/home/devops/.ansible/tmp/ansible-tmp-1676650446.81-32325-16791288075867/AnsiballZ_vmware_vmotion.py\", line 102, in <module>\r\n _ansiballz_main()\r\n File \"/home/devops/.ansible/tmp/ansible-tmp-1676650446.81-32325-16791288075867/AnsiballZ_vmware_vmotion.py\", line 94, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File \"/home/devops/.ansible/tmp/ansible-tmp-1676650446.81-32325-16791288075867/AnsiballZ_vmware_vmotion.py\", line 40, in invoke_module\r\n runpy.run_module(mod_name='ansible_collections.community.vmware.plugins.modules.vmware_vmotion', init_globals=None, run_name='__main__', alter_sys=True)\r\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\r\n return _run_module_code(code, init_globals, run_name, mod_spec)\r\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\r\n mod_name, mod_spec, pkg_name, script_name)\r\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_vmotion.py\", line 549, in <module>\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_vmotion.py\", line 545, in main\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_vmotion.py\", line 243, in __init__\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/module_utils/vmware.py\", line 249, in find_datastore_by_name\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/module_utils/vmware.py\", line 209, in find_object_by_name\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/module_utils/vmware.py\", line 796, in get_all_objs\r\n File \"/usr/local/lib/python3.6/site-packages/pyVmomi/VmomiSupport.py\", line 706, in <lambda>\r\n self.f(*(self.args + (obj,) + args), **kwargs)\r\n File \"/usr/local/lib/python3.6/site-packages/pyVmomi/VmomiSupport.py\", line 511, in _InvokeMethod\r\n list(map(CheckField, info.params, args))\r\n File \"/usr/local/lib/python3.6/site-packages/pyVmomi/VmomiSupport.py\", line 1098, in CheckField\r\n % (info.name, info.type.__name__, valType.__name__))\r\nTypeError: For \"container\" expected type vim.ManagedEntity, but got str\r\n",
"_ansible_no_log": false,
"_ansible_delegated_vars": {
"ansible_port": null,
"ansible_host": "192.168.28.12",
"ansible_user": "devops"
},
"module_stderr": "OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017\r\ndebug1: Reading configuration data /var/lib/awx/.ssh/config\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 58: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 32265\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\nShared connection to 192.168.28.12 closed.\r\n",
"changed": false,
"module_stdout": "Traceback (most recent call last):\r\n File \"/home/devops/.ansible/tmp/ansible-tmp-1676650446.81-32325-16791288075867/AnsiballZ_vmware_vmotion.py\", line 102, in <module>\r\n _ansiballz_main()\r\n File \"/home/devops/.ansible/tmp/ansible-tmp-1676650446.81-32325-16791288075867/AnsiballZ_vmware_vmotion.py\", line 94, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File \"/home/devops/.ansible/tmp/ansible-tmp-1676650446.81-32325-16791288075867/AnsiballZ_vmware_vmotion.py\", line 40, in invoke_module\r\n runpy.run_module(mod_name='ansible_collections.community.vmware.plugins.modules.vmware_vmotion', init_globals=None, run_name='__main__', alter_sys=True)\r\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\r\n return _run_module_code(code, init_globals, run_name, mod_spec)\r\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\r\n mod_name, mod_spec, pkg_name, script_name)\r\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_vmotion.py\", line 549, in <module>\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_vmotion.py\", line 545, in main\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_vmotion.py\", line 243, in __init__\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/module_utils/vmware.py\", line 249, in find_datastore_by_name\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/module_utils/vmware.py\", line 209, in find_object_by_name\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/module_utils/vmware.py\", line 796, in get_all_objs\r\n File \"/usr/local/lib/python3.6/site-packages/pyVmomi/VmomiSupport.py\", line 706, in <lambda>\r\n self.f(*(self.args + (obj,) + args), **kwargs)\r\n File \"/usr/local/lib/python3.6/site-packages/pyVmomi/VmomiSupport.py\", line 511, in _InvokeMethod\r\n list(map(CheckField, info.params, args))\r\n File \"/usr/local/lib/python3.6/site-packages/pyVmomi/VmomiSupport.py\", line 1098, in CheckField\r\n % (info.name, info.type.__name__, valType.__name__))\r\nTypeError: For \"container\" expected type vim.ManagedEntity, but got str\r\n",
"rc": 1,
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error"
}
Here is my ansible.cfg:
[defaults]
inventory = ./environments/test/hosts
ANSIBLE_SSH_ARGS = -C -o ControlMaster=auto -o ControlPersist=30m
ANSIBLE_PIPELINING = True
retry_files_enabled = False
host_key_checking = False
allow_world_readable_tmpfiles = True
remote_user = devops
private_key_file = ../keys/private_key_rsa
role_path = ./roles/
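No answer is recorded here, but a hedged reading of the traceback: find_datastore_by_name ends up handing pyVmomi a plain string where a vim.ManagedEntity container is expected. Two things stand out. First, destination_datacenter takes a datacenter name (something like DC1), while the playbook passes the vCenter alias "vcenter2". Second, this exact TypeError has been reported against older community.vmware releases, so upgrading the collection is worth trying. A sketch of the task with only that parameter changed (untested; it assumes the DR datacenter on vcenter2 really is named DC1):
- name: Perform vMotion of virtual machine to Subterra (sketch)
  community.vmware.vmware_vmotion:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    vm_name: "{{ inventory_hostname }}"
    destination_datastore: "{{ dest_data_store }}"
    destination_datacenter: "{{ datacenter_name }}"  # a datacenter name, not a vCenter hostname
    destination_resourcepool: "Linux"
    validate_certs: false
  delegate_to: ansible-helper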

Ansible: how to reference variables from different task files

I want to be able to reference the variable vpc_info, registered in create-vpc.yml, from create-public-subnet.yml.
/etc/ansible/roles/ec2/tasks/main.yml
# tasks file for ec2-provision
- name:
  import_tasks: create-vpc.yml
  import_tasks: create-public-subnet.yml
/etc/ansible/roles/ec2/vars/main.yml
---
# vars file for ec2-provision
################################### designate python interpreter ########################
ansible_python_interpreter: /usr/local/bin/python3.8
############################## VPC INFO #########################################
vpc_name: "My VPC"
vpc_cidr_block: "10.0.0.0/16"
aws_region: "us-east-1"
################################### VPC Subnet ###############################################
aws_zone: "us-east-1a"
# Subnets
vpc_public_subnet_cidr: "10.0.0.0/24"
# Subnet
vpc_private_subnet_cidr: "10.0.1.0/24"
create-vpc.yml
- name: Create AWS VPC
  ec2_vpc_net:
    name: "{{ vpc_name }}"
    cidr_block: "{{ vpc_cidr_block }}"
    region: "{{ aws_region }}"
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"
    state: present
  register: vpc_info
- name: Set vpc_info as fact
  set_fact:
    vpc_info_fact: "{{ vpc_info }}"
create-public-subnet.yml
- name: print vpc_info_fact
  debug:
    msg: "{{ hostvars['localhost']['vpc_info_fact'] }}"
- name: Create Public Subnet in VPC
  ec2_vpc_subnet:
    vpc_id: "{{ vpc_info['vpc']['id'] }}"
    cidr: "{{ vpc_public_subnet_cidr }}"
    region: "{{ aws_region }}"
    az: "{{ aws_zone }}"
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"
    state: present
    tags:
      Name: Public Subnet
  register: public_subnet_info
When I run ansible-playbook ec2-provision.yml, the error message is as follows:
[root@VM-0-14-centos tasks]# ansible-playbook ec2-provision.yml
[WARNING]: While constructing a mapping from /etc/ansible/roles/EC2/tasks/main.yml, line 4, column 3, found a duplicate dict key (import_tasks). Using last defined value
only.
PLAY [localhost] ************************************************************************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************************************************************
ok: [localhost]
TASK [EC2 : print vpc_info_fact] ********************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'vpc_info_fact' is undefined\n\nThe error appears to be in '/etc/ansible/roles/EC2/tasks/create-public-subnet.yml': line 3, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: print vpc_info_fact\n ^ here\n"}
PLAY RECAP ******************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Try setting the value as a fact once the variable has been registered; you can then access the corresponding fact via hostvars.
For example:
- name: Create AWS VPC
  ec2_vpc_net:
    name: "{{ vpc_name }}"
    cidr_block: "{{ vpc_cidr_block }}"
    region: "{{ aws_region }}"
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"
    state: present
  register: vpc_info
- name: Set vpc_info as fact
  set_fact: vpc_info_fact="{{ vpc_info }}"
To access it from a different file, we have the following task:
- name: Create Public Subnet in VPC
  ec2_vpc_subnet:
    vpc_id: "{{ hostvars['localhost']['vpc_info']['vpc']['id'] }}"
    cidr: "{{ vpc_public_subnet_cidr }}"
    region: "{{ aws_region }}"
    az: "{{ aws_zone }}"
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"
    state: present
    tags:
      Name: Public Subnet
  register: public_subnet_info
Your main.yml contains one task with two modules. It should be:
- name: Create VPC
  import_tasks: create-vpc.yml
- name: Create Public Subnets
  import_tasks: create-public-subnet.yml
Running the playbook prints a warning about exactly that issue:
[WARNING]: While constructing a mapping from /etc/ansible/roles/EC2/tasks/main.yml, line 4, column 3, found a duplicate dict key (import_tasks). Using last defined value
only.
Ansible cannot have more than one module per task, and import_tasks counts as a module here. When a task contains several modules, Ansible keeps only the last one (after printing that warning instead of exiting), which is why create-vpc.yml was never imported and vpc_info_fact came up undefined.
This is the root of all your issues; everything else looks fine to me.
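Also, once main.yml is split into two proper tasks as above, the registered variable is visible in every task file imported later on the same host, so the hostvars indirection isn't strictly necessary. A minimal sketch of create-public-subnet.yml referencing it directly (credentials omitted for brevity):
- name: Create Public Subnet in VPC
  ec2_vpc_subnet:
    vpc_id: "{{ vpc_info.vpc.id }}"  # registered in create-vpc.yml earlier in the same play
    cidr: "{{ vpc_public_subnet_cidr }}"
    region: "{{ aws_region }}"
    az: "{{ aws_zone }}"
    state: present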

Building a VM using Ansible Tower with the ability to create additional disks only when their variables are defined

I'm currently building Ansible image playbook templates across AWS and vSphere, and I'd like to be able to define multiple additional disks, but only when they're defined via variables.
Playbook:
---
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Launch Windows 2016 VM Instance
      vmware_guest:
        validate_certs: no
        datacenter: "{{ vm_datacenter }}"
        folder: "{{ vm_folder }}"
        name: "{{ vm_servername }}"
        state: poweredon
        template: "{{ vm_template }}"
        cluster: "{{ vm_cluster }}"
        disk:
          - size_gb: "{{ vm_disk_size0 | default(80) }}"
            type: "{{ vm_disk_type0 | default('thin') }}"
            datastore: "{{ vm_disk_datastore0 }}"
          - size_gb: "{{ vm_disk_size1 }}"
            type: "{{ vm_disk_type1 }}"
            datastore: "{{ vm_disk_datastore1 }}"
          - size_gb: "{{ vm_disk_size2 }}"
            type: "{{ vm_disk_type2 }}"
            datastore: "{{ vm_disk_datastore2 }}"
          - size_gb: "{{ vm_disk_size3 }}"
            type: "{{ vm_disk_type3 }}"
            datastore: "{{ vm_disk_datastore3 }}"
        hardware:
          memory_mb: "{{ vm_memory_mb | default(8192) }}"
          num_cpus: "{{ vm_num_cpus | default(4) }}"
        networks:
          - name: "{{ vm_network }}"
            start_connected: yes
            vlan: "{{ vm_network }}"
            device_type: vmxnet3
            type: dhcp
            domain: "{{ vm_domain }}"
        customization:
          hostname: "{{ vm_servername }}"
          orgname: Redacted
          password: "{{ winlocal_admin_pass }}"
          timezone: 255
        wait_for_ip_address: yes
      register: vm
Variables:
vm_disk_datastore0: C6200_T2_FCP_3Days
vm_disk_size0: 80
vm_disk_type0: thin
vm_disk_datastore1: "C6200_T2_FCP_3Days"
vm_disk_size1: "50"
vm_disk_type1: "thin"
vm_disk_datastore2: "C6200_T2_FCP_3Days"
vm_disk_size2: "20"
vm_disk_type2: "thin"
vm_disk_datastore3: ""
vm_disk_size3: ""
vm_disk_type3: ""
Error:
{
"_ansible_parsed": false,
"exception": "Traceback (most recent call last):\n File \"/var/lib/awx/.ansible/tmp/ansible-tmp-1568948672.23-135785453591577/AnsiballZ_vmware_guest.py\", line 113, in <module>\n _ansiballz_main()\n File \"/var/lib/awx/.ansible/tmp/ansible-tmp-1568948672.23-135785453591577/AnsiballZ_vmware_guest.py\", line 105, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/var/lib/awx/.ansible/tmp/ansible-tmp-1568948672.23-135785453591577/AnsiballZ_vmware_guest.py\", line 48, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/tmp/ansible_vmware_guest_payload_6vidiw/__main__.py\", line 2396, in <module>\n File \"/tmp/ansible_vmware_guest_payload_6vidiw/__main__.py\", line 2385, in main\n File \"/tmp/ansible_vmware_guest_payload_6vidiw/__main__.py\", line 2008, in deploy_vm\n File \"/tmp/ansible_vmware_guest_payload_6vidiw/__main__.py\", line 1690, in configure_disks\n File \"/tmp/ansible_vmware_guest_payload_6vidiw/__main__.py\", line 1608, in get_configured_disk_size\nValueError: invalid literal for int() with base 10: ''\n",
"_ansible_no_log": false,
"module_stderr": "Traceback (most recent call last):\n File \"/var/lib/awx/.ansible/tmp/ansible-tmp-1568948672.23-135785453591577/AnsiballZ_vmware_guest.py\", line 113, in <module>\n _ansiballz_main()\n File \"/var/lib/awx/.ansible/tmp/ansible-tmp-1568948672.23-135785453591577/AnsiballZ_vmware_guest.py\", line 105, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/var/lib/awx/.ansible/tmp/ansible-tmp-1568948672.23-135785453591577/AnsiballZ_vmware_guest.py\", line 48, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/tmp/ansible_vmware_guest_payload_6vidiw/__main__.py\", line 2396, in <module>\n File \"/tmp/ansible_vmware_guest_payload_6vidiw/__main__.py\", line 2385, in main\n File \"/tmp/ansible_vmware_guest_payload_6vidiw/__main__.py\", line 2008, in deploy_vm\n File \"/tmp/ansible_vmware_guest_payload_6vidiw/__main__.py\", line 1690, in configure_disks\n File \"/tmp/ansible_vmware_guest_payload_6vidiw/__main__.py\", line 1608, in get_configured_disk_size\nValueError: invalid literal for int() with base 10: ''\n",
"changed": false,
"module_stdout": "",
"rc": 1,
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error"
}
The idea being, if a variable isn't defined when the job is launched, the corresponding disk will be skipped by the vmware_guest module.
Is this the best method? Is anyone able to suggest a successful way forward?
Update: probably the best method would be to build the instance and then use vmware_guest_disk to add the additional disks, as suggested below. That module is available from Ansible 2.8.
Our Tower runs Ansible 2.7.9, so I'm going to use a more verbose method of multiple vmware_guest calls keyed off a num_disks variable:
Variables
num_disks: 0
vm_cluster: C6200_NPE_PC_ST
vm_datacenter: DC2
vm_disk_datastore0: C6200_T2_FCP_3Days
vm_disk_datastore1: ''
vm_disk_datastore2: ''
vm_disk_datastore3: ''
vm_disk_datastore4: ''
vm_disk_size0: 80
vm_disk_type0: thin
vm_disk_type1: ''
vm_disk_type2: ''
vm_disk_type3: ''
vm_disk_type4: ''
vm_domain: corp.local
vm_folder: /DC2/vm/ap-dev
vm_hostname: xyzvmserver.corp.local
vm_memory_mb: 16000
vm_network: C6200_10.110.64.0_24_VL1750
vm_num_cpus: 4
vm_servername: server05
vm_template: Windows2016_x64_AN_ESX_v1.1
Playbook:
---
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Launch Windows 2016 VM Instance - No additional Disk
      vmware_guest:
        validate_certs: no
        datacenter: "{{ vm_datacenter }}"
        folder: "{{ vm_folder }}"
        name: "{{ vm_servername }}"
        state: poweredon
        template: "{{ vm_template }}"
        cluster: "{{ vm_cluster }}"
        disk:
          - size_gb: "{{ vm_disk_size0 }}"
            type: "{{ vm_disk_type0 }}"
            datastore: "{{ vm_disk_datastore0 }}"
        hardware:
          memory_mb: "{{ vm_memory_mb | default(8192) }}"
          num_cpus: "{{ vm_num_cpus }}"
        networks:
          - name: "{{ vm_network }}"
            start_connected: yes
            vlan: "{{ vm_network }}"
            device_type: vmxnet3
            type: dhcp
            domain: "{{ vm_domain }}"
        customization:
          hostname: "{{ vm_servername }}"
          orgname: redacted
          password: "{{ winlocal_admin_pass }}"
          timezone: 255
        wait_for_ip_address: yes
      register: vm
      when: num_disks == 0
    - name: Launch Windows 2016 VM Instance 1 Additional Disk
      vmware_guest:
        validate_certs: no
        datacenter: "{{ vm_datacenter }}"
        folder: "{{ vm_folder }}"
        name: "{{ vm_servername }}"
        state: poweredon
        template: "{{ vm_template }}"
        cluster: "{{ vm_cluster }}"
        disk:
          - size_gb: "{{ vm_disk_size0 }}"
            type: "{{ vm_disk_type0 }}"
            datastore: "{{ vm_disk_datastore0 }}"
          - size_gb: "{{ vm_disk_size1 }}"
            type: "{{ vm_disk_type1 }}"
            datastore: "{{ vm_disk_datastore1 }}"
        hardware:
          memory_mb: "{{ vm_memory_mb | default(8192) }}"
          num_cpus: "{{ vm_num_cpus }}"
        networks:
          - name: "{{ vm_network }}"
            start_connected: yes
            vlan: "{{ vm_network }}"
            device_type: vmxnet3
            type: dhcp
            domain: "{{ vm_domain }}"
        customization:
          hostname: "{{ vm_servername }}"
          orgname: redacted
          password: "{{ winlocal_admin_pass }}"
          timezone: 255
        wait_for_ip_address: yes
      register: vm
      when: num_disks == 1
    - name: Launch Windows 2016 VM Instance 2 Additional Disks
      vmware_guest:
        validate_certs: no
        datacenter: "{{ vm_datacenter }}"
        folder: "{{ vm_folder }}"
        name: "{{ vm_servername }}"
        state: poweredon
        template: "{{ vm_template }}"
        cluster: "{{ vm_cluster }}"
        disk:
          - size_gb: "{{ vm_disk_size0 }}"
            type: "{{ vm_disk_type0 }}"
            datastore: "{{ vm_disk_datastore0 }}"
          - size_gb: "{{ vm_disk_size1 }}"
            type: "{{ vm_disk_type1 }}"
            datastore: "{{ vm_disk_datastore1 }}"
          - size_gb: "{{ vm_disk_size2 }}"
            type: "{{ vm_disk_type2 }}"
            datastore: "{{ vm_disk_datastore2 }}"
        hardware:
          memory_mb: "{{ vm_memory_mb | default(8192) }}"
          num_cpus: "{{ vm_num_cpus }}"
        networks:
          - name: "{{ vm_network }}"
            start_connected: yes
            vlan: "{{ vm_network }}"
            device_type: vmxnet3
            type: dhcp
            domain: "{{ vm_domain }}"
        customization:
          hostname: "{{ vm_servername }}"
          orgname: redacted
          password: "{{ winlocal_admin_pass }}"
          timezone: 255
        wait_for_ip_address: yes
      register: vm
      when: num_disks == 2
...and so on, with one task per disk count.
It looks like you should build your VM with one disk that you know for sure will always be there when you run the play, which you could make the first entry in this list:
vms:
  - vm_disk_datastore: "C6200_T2_FCP_3Days"
    vm_disk_size: "80"
    vm_disk_type: "thin"
  - vm_disk_datastore: "C6200_T2_FCP_3Days"
    vm_disk_size: "50"
    vm_disk_type: "thin"
  - vm_disk_datastore: "C6200_T2_FCP_3Days"
    vm_disk_size: "20"
    vm_disk_type: "thin"
  - vm_disk_datastore: ""
    vm_disk_size: ""
    vm_disk_type: ""
Since I don't know of a way to loop through sub-module arguments to make them present or not, build your VM with the one disk you know for sure will be there, which, as mentioned above, should be the first one in your list for consistency. Otherwise you will need to rebuild the logic to drill into the list looking for a specific value you can key on, with Jinja filters like selectattr and map.
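For example, a minimal sketch of that filtering (populated_disks is a made-up name for illustration; rejectattr is the inverse of selectattr, and the equalto test works on older Jinja2 releases):
- name: Keep only the disk entries that actually have values filled in
  set_fact:
    populated_disks: "{{ vms | rejectattr('vm_disk_size', 'equalto', '') | list }}"
The disk-adding task further down could then loop over populated_disks instead of testing each field in its when.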
- name: Launch Windows 2016 VM Instance
  vmware_guest:
    validate_certs: no
    datacenter: "{{ vm_datacenter }}"
    folder: "{{ vm_folder }}"
    name: "{{ vm_servername }}"
    state: poweredon
    template: "{{ vm_template }}"
    cluster: "{{ vm_cluster }}"
    disk:
      - size_gb: "{{ vms.0.vm_disk_size }}"
        type: "{{ vms.0.vm_disk_type }}"
        datastore: "{{ vms.0.vm_disk_datastore }}"
    hardware:
      memory_mb: "{{ vm_memory_mb | default(8192) }}"
      num_cpus: "{{ vm_num_cpus | default(4) }}"
    networks:
      - name: "{{ vm_network }}"
        start_connected: yes
        vlan: "{{ vm_network }}"
        device_type: vmxnet3
        type: dhcp
        domain: "{{ vm_domain }}"
    customization:
      hostname: "{{ vm_servername }}"
      orgname: Redacted
      password: "{{ winlocal_admin_pass }}"
      timezone: 255
    wait_for_ip_address: yes
  register: vm
Once it's built, use this different module to add the additional disks:
- name: Add disks to virtual machine
  vmware_guest_disk:
    hostname: "{{ vm_servername }}"
    datacenter: "{{ vm_datacenter }}"
    disk:
      - size_gb: "{{ item.vm_disk_size }}"
        type: "{{ item.vm_disk_type }}"
        datastore: "{{ item.vm_disk_datastore }}"
        state: present
  loop: "{{ vms }}"
  loop_control:
    label: "Disk {{ my_idx }} - {{ item.vm_disk_datastore }}"
    index_var: my_idx
  when:
    - item.vm_disk_datastore != ""
    - item.vm_disk_size != ""
    - item.vm_disk_type != ""
    - my_idx > 0
That will loop through the list of dictionaries, pulling the values by their labels, but only when they have a value (if leaving blanks in the list is something you plan on doing; if not, the when isn't even needed). This also lets you expand upward just by adding items to the list, without growing the task. If you can't rely on list order and need to key on a specific variable in the first task, make sure the second task is adjusted as well so it omits the disk already added by the first.
Also, the documentation for this module states that existing disks referenced with it will be adjusted to the given size and cannot be reduced. If you want to avoid that, add some checks comparing the currently configured disk size with the value in your variable, so you can skip the task when it would shrink a disk (or when you simply don't want an existing VM altered).
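If you want to automate that check, a hedged sketch using vmware_guest_disk_info (vmware_guest_disk_facts on older releases); the connection variables here are assumptions, since the question's variable list doesn't include vCenter credentials:
- name: Look up the VM's current disks before resizing (sketch)
  vmware_guest_disk_info:
    hostname: "{{ vcenter_hostname }}"  # assumed variable, not from the question
    username: "{{ vcenter_username }}"  # assumed variable
    password: "{{ vcenter_password }}"  # assumed variable
    datacenter: "{{ vm_datacenter }}"
    name: "{{ vm_servername }}"
    validate_certs: no
  register: current_disks
The registered current_disks.guest_disk_info then carries per-disk details such as capacity_in_kb, which a when condition can compare against item.vm_disk_size to skip any change that would shrink a disk.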

Ansible pamd: module failure

I have this playbook as part of a role that makes some changes to PAM modules:
---
- name: "{{ BANNER }} - SET MODE"
  copy:
    remote_src: True
    src: "{{ LOGIN_DEF }}"
    dest: "{{ LOGIN_DEF_BCK }}_RH7-021_{{ CK_ORA }}"
- replace:
    path: "{{ LOGIN_DEF }}"
    regexp: '{{ item.src }}'
    replace: '{{ item.dst }}'
  with_items:
    - { src: '(.*FAIL_DELAY.*)', dst: '#\1' }
- lineinfile:
    path: "{{ LOGIN_DEF }}"
    line: 'FAIL_DELAY 10'
- replace:
    path: "{{ PASSWORDAUTH }}"
    regexp: '{{ item.src }}'
    replace: '{{ item.dst }}'
  with_items:
    - { src: '^auth .* pam_faildelay.so', dst: '' }
- pamd:
    name: password-auth
    type: auth
    control: sufficient
    module_path: 'pam_unix.so'
    new_type: auth
    new_control: optional
    new_module_path: 'pam_faildelay.so'
    module_arguments:
    state: after
- replace:
    path: "{{ SYSTEMAUTH }}"
    regexp: '{{ item.src }}'
    replace: '{{ item.dst }}'
  with_items:
    - { src: '^auth .* pam_faildelay.so', dst: '' }
- pamd:
    name: system-auth
    type: auth
    control: sufficient
    module_path: 'pam_unix.so'
    new_type: auth
    new_control: optional
    new_module_path: 'pam_faildelay.so'
    module_arguments:
    state: after
- debug: msg="{{ MSG_SET }}"
When I run it I get this error:
TASK [RH7-021 : pamd] ***********************************************************************************************************************************************
fatal: [10.13.203.165]: FAILED! => {"changed": false, "module_stderr": "", "module_stdout": "\r\nTraceback (most recent call last):\r\n
File \"/home/ccansible/.ansible/tmp/ansible-tmp-1561986679.75-245340126875212/AnsiballZ_pamd.py\", line 113, in <module>\r\n _ansiballz_main()\r\n
File \"/home/ccansible/.ansible/tmp/ansible-tmp-1561986679.75-245340126875212/AnsiballZ_pamd.py\", line 105, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n
File \"/home/ccansible/.ansible/tmp/ansible-tmp-1561986679.75-245340126875212/AnsiballZ_pamd.py\", line 48, in invoke_module\r\n imp.load_module('__main__', mod, module, MOD_DESC)\r\n
File \"/tmp/ansible_pamd_payload_NpycuP/__main__.py\", line 880, in <module>\r\n File \"/tmp/ansible_pamd_payload_NpycuP/__main__.py\", line 816, in main\r\n File \"/tmp/ansible_pamd_payload_NpycuP/__main__.py\", line 458, in __init__\r\n
File \"/tmp/ansible_pamd_payload_NpycuP/__main__.py\", line 371, in rule_from_string\r\n
AttributeError: 'NoneType' object has no attribute 'group'\r\n",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
to retry, use: --limit @/home/PG005856/HARDENING/main.retry
I can't figure out what's wrong; moreover, I used the same method in another playbook and it worked fine.
The control node has this Ansible version:
ansible 2.7.6
config file = /home/PG005856/HARDENING/ansible.cfg
configured module search path = [u'/home/PG005856/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /ansible/lib/python2.7/site-packages/ansible
executable location = /ansible/bin/ansible
python version = 2.7.5 (default, Feb 20 2018, 09:19:12) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]
The target server is:
Linux rh7-test-ansible 3.10.0-693.17.1.el7.x86_64 #1 SMP Sun Jan 14 10:36:03 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.4 (Maipo)
I have read that this was a bug, but I assumed it had been resolved by Ansible 2.7.
I don't know what to do; I could achieve the same result with a few lines of sed through the shell module, but I'd like to use the pamd module.
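The AttributeError comes from rule_from_string failing to parse one of the lines in the target PAM file (the regex match returns None before .group is called), so it is worth inspecting /etc/pam.d/password-auth and system-auth for unusual lines; several such parsing bugs were fixed in later Ansible releases. As a fallback in the spirit of the sed approach, a hedged sketch using lineinfile instead of shell (the match pattern is an assumption about your password-auth contents):
- name: Insert pam_faildelay after the pam_unix auth line (fallback sketch)
  lineinfile:
    path: /etc/pam.d/password-auth
    line: 'auth        optional      pam_faildelay.so'
    insertafter: '^auth\s+sufficient\s+pam_unix\.so'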

os_user (and other os modules) Unable to establish connection error

This same Ansible playbook/role works fine with my OpenStack Liberty deployment, but with my newer Pike OpenStack (deployed using OAD) I am having an issue running any of the os_* modules. Example below:
ansible version: 2.3.2.0
cloud.yml on openstack utility container:
# Ansible managed
clouds:
  default:
    auth:
      auth_url: http://172.29.236.10:5000/v3
      project_name: admin
      tenant_name: admin
      username: admin
      password: admin
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
    interface: internal
    identity_api_version: "3"
role/task:
---
- name: Install random password generator package
  apt: name={{ item }} state=present
  with_items:
    - apg
- name: Random generate passwords
  command: apg -n {{ pass_cnt }} -M NCL -q
  register: passwdss
- name: Create users
  os_user:
    cloud: "{{ CLOUD_NAME }}"
    state: present
    name: "{{ item.0 }}"
    password: "{{ item.1 }}"
    domain: default
  with_together:
    - "{{ userid }}"
    - "{{ passwdss.stdout_lines }}"
- name: Create user environments
  os_project:
    cloud: "{{ CLOUD_NAME }}"
    state: present
    name: "{{ item }}"
    description: "{{ item }}"
    domain_id: default
    enabled: True
  with_items: "{{ tenantid }}"
- name: Assign user to specified role in designated environment
  os_user_role:
    cloud: "{{ CLOUD_NAME }}"
    user: "{{ item.0 }}"
    role: "{{ urole }}"
    project: "{{ item.1 }}"
  with_together:
    - "{{ userid }}"
    - "{{ tenantid }}"
- name: User password assignment
  debug: msg="User {{ item.0 }} was added to {{ item.2 }} project, with the assigned password of {{ item.1 }}"
  with_together:
    - userid
    - passwdss.stdout_lines
    - tenantid
The initial package-install and password-generation tasks complete without issue, but once os_user runs I get the following error:
ndebug2: Received exit status from master 0\r\nShared connection to 172.29.239.130 closed.\r\n",
"module_stdout": "\r\nTraceback (most recent call last):\r\n File \"/tmp/ansible_BXmCwn/ansible_module_os_user.py\", line 284, in <module>\r\n main()\r\n File \"/tmp/ansible_BXmCwn/ansible_module_os_user.py\", line 220, in main\r\n user = cloud.get_user(name)\r\n File \"/usr/local/lib/python2.7/dist-packages/shade/openstackcloud.py\", line 1016, in get_user\r\n return _utils._get_entity(self.search_users, name_or_id, filters)\r\n File \"/usr/local/lib/python2.7/dist-packages/shade/_utils.py\", line 220, in _get_entity\r\n entities = func(name_or_id, filters, **kwargs)\r\n File \"/usr/local/lib/python2.7/dist-packages/shade/openstackcloud.py\", line 999, in search_users\r\n users = self.list_users()\r\n File \"/usr/local/lib/python2.7/dist-packages/shade/openstackcloud.py\", line 981, in list_users\r\n data = self._identity_client.get('/users')\r\n File \"/usr/local/lib/python2.7/dist-packages/shade/openstackcloud.py\", line 419, in _identity_client\r\n 'identity', version_required=True, versions=['3', '2'])\r\n File \"/usr/local/lib/python2.7/dist-packages/shade/openstackcloud.py\", line 496, in _discover_endpoint\r\n self.keystone_session, base_url)\r\n File \"/usr/local/lib/python2.7/dist-packages/positional/__init__.py\", line 108, in inner\r\n return wrapped(*args, **kwargs)\r\n File \"/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/base.py\", line 428, in get_discovery\r\n authenticated=authenticated)\r\n File \"/usr/local/lib/python2.7/dist-packages/positional/__init__.py\", line 108, in inner\r\n return wrapped(*args, **kwargs)\r\n File \"/usr/local/lib/python2.7/dist-packages/keystoneauth1/discover.py\", line 1164, in get_discovery\r\n disc = Discover(session, url, authenticated=authenticated)\r\n File \"/usr/local/lib/python2.7/dist-packages/positional/__init__.py\", line 108, in inner\r\n return wrapped(*args, **kwargs)\r\n File \"/usr/local/lib/python2.7/dist-packages/keystoneauth1/discover.py\", line 402, in __init__\r\n authenticated=authenticated)\r\n File \"/usr/local/lib/python2.7/dist-packages/positional/__init__.py\", line 108, in inner\r\n return wrapped(*args, **kwargs)\r\n File \"/usr/local/lib/python2.7/dist-packages/keystoneauth1/discover.py\", line 101, in get_version_data\r\n resp = session.get(url, headers=headers, authenticated=authenticated)\r\n File \"/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py\", line 845, in get\r\n return self.request(url, 'GET', **kwargs)\r\n File \"/usr/local/lib/python2.7/dist-packages/positional/__init__.py\", line 108, in inner\r\n return wrapped(*args, **kwargs)\r\n File \"/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py\", line 703, in request\r\n resp = send(**kwargs)\r\n File \"/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py\", line 777, in _send_request\r\n raise exceptions.ConnectFailure(msg)\r\nkeystoneauth1.exceptions.connection.ConnectFailure: Unable to establish connection to http://10.0.11.100:5000: ('Connection aborted.', BadStatusLine(\"''\",))\r\n",
"msg": "MODULE FAILURE",
"rc": 0
}
I've tested and 10.0.11.100:5000 is available from the utility container. I'm not sure what would cause the Unable to establish connection to http://10.0.11.100:5000: ('Connection aborted.', BadStatusLine(\"''\",)).
I am also surprised that it's trying to connect to my external VIP on :5000 instead of the internal auth_url defined in the cloud.yml file: 172.29.236.10:5000.
Any ideas for what to look for would be extremely welcome.
Unless specifically told otherwise, the os_* modules appear to grab and use the public endpoints. Providing the OpenStack modules with the parameter endpoint_type: internal resolved the issue in my OAD OpenStack deployment.
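For example, a hedged sketch of the fix applied to the os_user task from the role above (only the endpoint_type line is new; on some Ansible versions the parameter is spelled interface, with endpoint_type as an alias):
- name: Create users via the internal endpoints
  os_user:
    cloud: "{{ CLOUD_NAME }}"
    endpoint_type: internal  # steer keystoneauth discovery away from the public VIP
    state: present
    name: "{{ item.0 }}"
    password: "{{ item.1 }}"
    domain: default
  with_together:
    - "{{ userid }}"
    - "{{ passwdss.stdout_lines }}"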
