os_user (and other os modules) Unable to establish connection error - ansible

This same Ansible playbook/role works fine with my OpenStack Liberty deployment, but with my newer Pike OpenStack (deployed using OAD) I am having an issue running any of the os_* modules. Example below:
ansible version: 2.3.2.0
cloud.yml on openstack utility container:
# Ansible managed
clouds:
  default:
    auth:
      auth_url: http://172.29.236.10:5000/v3
      project_name: admin
      tenant_name: admin
      username: admin
      password: admin
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
    interface: internal
    identity_api_version: "3"
role/task:
---
- name: Install random password generator package
  apt: name={{ item }} state=present
  with_items:
    - apg

- name: Random generate passwords
  command: apg -n {{ pass_cnt }} -M NCL -q
  register: passwdss

- name: Create users
  os_user:
    cloud: "{{ CLOUD_NAME }}"
    state: present
    name: "{{ item.0 }}"
    password: "{{ item.1 }}"
    domain: default
  with_together:
    - "{{ userid }}"
    - "{{ passwdss.stdout_lines }}"

- name: Create user environments
  os_project:
    cloud: "{{ CLOUD_NAME }}"
    state: present
    name: "{{ item }}"
    description: "{{ item }}"
    domain_id: default
    enabled: True
  with_items: "{{ tenantid }}"

- name: Assign user to specified role in designated environment
  os_user_role:
    cloud: "{{ CLOUD_NAME }}"
    user: "{{ item.0 }}"
    role: "{{ urole }}"
    project: "{{ item.1 }}"
  with_together:
    - "{{ userid }}"
    - "{{ tenantid }}"

- name: User password assignment
  debug: msg="User {{ item.0 }} was added to {{ item.2 }} project, with the assigned password of {{ item.1 }}"
  with_together:
    - "{{ userid }}"
    - "{{ passwdss.stdout_lines }}"
    - "{{ tenantid }}"
The initial passwd generator and password create tasks complete without issue. But once os_user is run I get the following error:
ndebug2: Received exit status from master 0\r\nShared connection to 172.29.239.130 closed.\r\n",
"module_stdout": "\r\nTraceback (most recent call last):\r\n File \"/tmp/ansible_BXmCwn/ansible_module_os_user.py\", line 284, in <module>\r\n main()\r\n File \"/tmp/ansible_BXmCwn/ansible_module_os_user.py\", line 220, in main\r\n user = cloud.get_user(name)\r\n File \"/usr/local/lib/python2.7/dist-packages/shade/openstackcloud.py\", line 1016, in get_user\r\n return _utils._get_entity(self.search_users, name_or_id, filters)\r\n File \"/usr/local/lib/python2.7/dist-packages/shade/_utils.py\", line 220, in _get_entity\r\n entities = func(name_or_id, filters, **kwargs)\r\n File \"/usr/local/lib/python2.7/dist-packages/shade/openstackcloud.py\", line 999, in search_users\r\n users = self.list_users()\r\n File \"/usr/local/lib/python2.7/dist-packages/shade/openstackcloud.py\", line 981, in list_users\r\n data = self._identity_client.get('/users')\r\n File \"/usr/local/lib/python2.7/dist-packages/shade/openstackcloud.py\", line 419, in _identity_client\r\n 'identity', version_required=True, versions=['3', '2'])\r\n File \"/usr/local/lib/python2.7/dist-packages/shade/openstackcloud.py\", line 496, in _discover_endpoint\r\n self.keystone_session, base_url)\r\n File \"/usr/local/lib/python2.7/dist-packages/positional/__init__.py\", line 108, in inner\r\n return wrapped(*args, **kwargs)\r\n File \"/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/base.py\", line 428, in get_discovery\r\n authenticated=authenticated)\r\n File \"/usr/local/lib/python2.7/dist-packages/positional/__init__.py\", line 108, in inner\r\n return wrapped(*args, **kwargs)\r\n File \"/usr/local/lib/python2.7/dist-packages/keystoneauth1/discover.py\", line 1164, in get_discovery\r\n disc = Discover(session, url, authenticated=authenticated)\r\n File \"/usr/local/lib/python2.7/dist-packages/positional/__init__.py\", line 108, in inner\r\n return wrapped(*args, **kwargs)\r\n File \"/usr/local/lib/python2.7/dist-packages/keystoneauth1/discover.py\", line 402, in __init__\r\n 
authenticated=authenticated)\r\n File \"/usr/local/lib/python2.7/dist-packages/positional/__init__.py\", line 108, in inner\r\n return wrapped(*args, **kwargs)\r\n File \"/usr/local/lib/python2.7/dist-packages/keystoneauth1/discover.py\", line 101, in get_version_data\r\n resp = session.get(url, headers=headers, authenticated=authenticated)\r\n File \"/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py\", line 845, in get\r\n return self.request(url, 'GET', **kwargs)\r\n File \"/usr/local/lib/python2.7/dist-packages/positional/__init__.py\", line 108, in inner\r\n return wrapped(*args, **kwargs)\r\n File \"/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py\", line 703, in request\r\n resp = send(**kwargs)\r\n File \"/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py\", line 777, in _send_request\r\n raise exceptions.ConnectFailure(msg)\r\nkeystoneauth1.exceptions.connection.ConnectFailure: Unable to establish connection to http://10.0.11.100:5000: ('Connection aborted.', BadStatusLine(\"''\",))\r\n",
"msg": "MODULE FAILURE",
"rc": 0
}
I've tested, and 10.0.11.100:5000 is available from the utility container. I'm not sure what would cause "Unable to establish connection to http://10.0.11.100:5000: ('Connection aborted.', BadStatusLine(\"''\",))".
I am also surprised that it's trying to connect to my external VIP on :5000 instead of the internal auth_url defined in the cloud.yml file: 172.29.236.10:5000.
Any ideas for what to look for would be extremely welcome.

Unless otherwise specified, the os_* modules appear to grab and use the public endpoints. Passing the OpenStack modules the parameter/value "endpoint_type: internal" resolved the issue in my OAD OpenStack deployment.
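As a sketch, applied to the question's "Create users" task, the fix would look like the following (endpoint_type is the module-level parameter; whether your shade/module versions also honor the interface key in clouds.yaml is worth verifying separately):

```yaml
- name: Create users
  os_user:
    cloud: "{{ CLOUD_NAME }}"
    endpoint_type: internal   # request the internal Keystone endpoint instead of the public one
    state: present
    name: "{{ item.0 }}"
    password: "{{ item.1 }}"
    domain: default
  with_together:
    - "{{ userid }}"
    - "{{ passwdss.stdout_lines }}"
```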

Related

Error on Ansible VMWARE VMOTION module - need to move VMs using ansible from primary vcenter to DR site for Emergency Move

I am attempting to migrate VMs from one vcenter to another in the environment using ansible.
Both of the vcenters are running vSphere Client version 7.0.3.01100.
ansible-helper is a RHEL 8 system with Python 3 and the required pyvmomi and vsphere-automation-sdk-python packages.
Note: the VMware snapshot and power-off tasks work; it only fails at the vMotion stage.
The user that is being used to attempt the vmotion is the same (AD Controlled) and has the same privileges.
Here is the playbook used to attempt the move:
---
- name: Migrate Systems to Subterra (Fail-over to DR Site)
  hosts: subterra
  gather_facts: no
  vars_files:
    - environments/vars/vault.yml
  vars:
    vcenter_hostname: "vcenter1.company.com"
    vcenter_username: "DOMAIN\\ansible_user"
    dst_vcenter: "vcenter2.company.com"
    datacenter_name: "DC1"
    datastore: "SAN_Location1"
  tasks:
    - name: Gather one specific VM
      community.vmware.vmware_vm_info:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        vm_name: "{{ inventory_hostname }}"
        validate_certs: false
      delegate_to: ansible-helper
      register: vminfo

    - name: List the VM's Information
      ansible.builtin.debug:
        var: vminfo

    - name: print keys of cluster data
      ansible.builtin.debug:
        msg: "{{ vminfo.virtual_machines | list }}"

    - name: List the VM's Folder Information
      ansible.builtin.debug:
        var: vminfo.virtual_machines[0].folder

    - name: Testing printing the json_query
      ansible.builtin.debug:
        msg: "{{ vminfo | json_query('virtual_machines[0].folder') }}"

    - name: Listing Datastore name
      ansible.builtin.debug:
        msg: "{{ vminfo | json_query('virtual_machines[0].datastore_url.name') }}"

    - name: Define VM Folder
      ansible.builtin.set_fact:
        vm_folder_loc: "{{ vminfo | json_query('virtual_machines[0].folder') }}"

    - name: Define Data Store from vm output
      ansible.builtin.set_fact:
        dest_data_store: "{{ vminfo | json_query('virtual_machines[0].datastore_url.name') }}"

    - name: List the VM's Folder Information
      ansible.builtin.debug:
        var: vm_folder_loc

    - name: Create a snapshot prior to migration
      community.vmware.vmware_guest_snapshot:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        datacenter: "{{ datacenter_name }}"
        folder: "{{ vm_folder_loc }}"
        name: "{{ inventory_hostname }}"
        state: present
        memory_dump: false
        validate_certs: false
        snapshot_name: Snapshot_Prior_to_Migration
        description: "Snapshot of system prior to migration to vcenter2"
      delegate_to: ansible-helper

    - name: PowerOff VM
      community.vmware.vmware_guest:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        name: "{{ inventory_hostname }}"
        validate_certs: false
        state: poweredoff
      delegate_to: ansible-helper

    - name: Perform vMotion of virtual machine to Subterra
      community.vmware.vmware_vmotion:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        vm_name: "{{ inventory_hostname }}"
        destination_datastore: "{{ dest_data_store }}"
        destination_datacenter: "vcenter2"
        destination_resourcepool: "Linux"
        validate_certs: false
      delegate_to: ansible-helper

    - name: PowerOn VM
      community.vmware.vmware_guest:
        hostname: "{{ dst_vcenter }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        name: "{{ inventory_hostname }}"
        validate_certs: false
        state: poweredon
      delegate_to: ansible-helper
The playbook works until it gets to the community.vmware.vmware_vmotion module. Then I get the following error:
{
"exception": "Traceback (most recent call last):\r\n File \"/home/devops/.ansible/tmp/ansible-tmp-1676650446.81-32325-16791288075867/AnsiballZ_vmware_vmotion.py\", line 102, in <module>\r\n _ansiballz_main()\r\n File \"/home/devops/.ansible/tmp/ansible-tmp-1676650446.81-32325-16791288075867/AnsiballZ_vmware_vmotion.py\", line 94, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File \"/home/devops/.ansible/tmp/ansible-tmp-1676650446.81-32325-16791288075867/AnsiballZ_vmware_vmotion.py\", line 40, in invoke_module\r\n runpy.run_module(mod_name='ansible_collections.community.vmware.plugins.modules.vmware_vmotion', init_globals=None, run_name='__main__', alter_sys=True)\r\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\r\n return _run_module_code(code, init_globals, run_name, mod_spec)\r\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\r\n mod_name, mod_spec, pkg_name, script_name)\r\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_vmotion.py\", line 549, in <module>\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_vmotion.py\", line 545, in main\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_vmotion.py\", line 243, in __init__\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/module_utils/vmware.py\", line 249, in find_datastore_by_name\r\n File 
\"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/module_utils/vmware.py\", line 209, in find_object_by_name\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/module_utils/vmware.py\", line 796, in get_all_objs\r\n File \"/usr/local/lib/python3.6/site-packages/pyVmomi/VmomiSupport.py\", line 706, in <lambda>\r\n self.f(*(self.args + (obj,) + args), **kwargs)\r\n File \"/usr/local/lib/python3.6/site-packages/pyVmomi/VmomiSupport.py\", line 511, in _InvokeMethod\r\n list(map(CheckField, info.params, args))\r\n File \"/usr/local/lib/python3.6/site-packages/pyVmomi/VmomiSupport.py\", line 1098, in CheckField\r\n % (info.name, info.type.__name__, valType.__name__))\r\nTypeError: For \"container\" expected type vim.ManagedEntity, but got str\r\n",
"_ansible_no_log": false,
"_ansible_delegated_vars": {
"ansible_port": null,
"ansible_host": "192.168.28.12",
"ansible_user": "devops"
},
"module_stderr": "OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017\r\ndebug1: Reading configuration data /var/lib/awx/.ssh/config\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 58: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 32265\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\nShared connection to 192.168.28.12 closed.\r\n",
"changed": false,
"module_stdout": "Traceback (most recent call last):\r\n File \"/home/devops/.ansible/tmp/ansible-tmp-1676650446.81-32325-16791288075867/AnsiballZ_vmware_vmotion.py\", line 102, in <module>\r\n _ansiballz_main()\r\n File \"/home/devops/.ansible/tmp/ansible-tmp-1676650446.81-32325-16791288075867/AnsiballZ_vmware_vmotion.py\", line 94, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File \"/home/devops/.ansible/tmp/ansible-tmp-1676650446.81-32325-16791288075867/AnsiballZ_vmware_vmotion.py\", line 40, in invoke_module\r\n runpy.run_module(mod_name='ansible_collections.community.vmware.plugins.modules.vmware_vmotion', init_globals=None, run_name='__main__', alter_sys=True)\r\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\r\n return _run_module_code(code, init_globals, run_name, mod_spec)\r\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\r\n mod_name, mod_spec, pkg_name, script_name)\r\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_vmotion.py\", line 549, in <module>\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_vmotion.py\", line 545, in main\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_vmotion.py\", line 243, in __init__\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/module_utils/vmware.py\", line 249, in find_datastore_by_name\r\n File 
\"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/module_utils/vmware.py\", line 209, in find_object_by_name\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/module_utils/vmware.py\", line 796, in get_all_objs\r\n File \"/usr/local/lib/python3.6/site-packages/pyVmomi/VmomiSupport.py\", line 706, in <lambda>\r\n self.f(*(self.args + (obj,) + args), **kwargs)\r\n File \"/usr/local/lib/python3.6/site-packages/pyVmomi/VmomiSupport.py\", line 511, in _InvokeMethod\r\n list(map(CheckField, info.params, args))\r\n File \"/usr/local/lib/python3.6/site-packages/pyVmomi/VmomiSupport.py\", line 1098, in CheckField\r\n % (info.name, info.type.__name__, valType.__name__))\r\nTypeError: For \"container\" expected type vim.ManagedEntity, but got str\r\n",
"rc": 1,
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error"
}
Here is my ansible.cfg:
[defaults]
inventory = ./environments/test/hosts
ANSIBLE_SSH_ARGS = -C -o ControlMaster=auto -o ControlPersist=30m
ANSIBLE_PIPELINING = True
retry_files_enabled = False
host_key_checking = False
allow_world_readable_tmpfiles = True
remote_user = devops
private_key_file = ../keys/private_key_rsa
role_path = ./roles/

Ansible valueFrom aws secrets manager

I need to set environment variables for a container in AWS Fargate.
The values for those variables are stored in AWS Secrets Manager. The secret ARN is arn:aws:secretsmanager:eu-west-1:909628726468:secret:secret.automation-user-KBSm8J, and it stores two key/value secrets, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
In CloudFormation the following worked perfect:
ContainerDefinitions:
  - Name: "prowler"
    Image: !Ref Image
    Environment:
      - Name: AWS_ACCESS_KEY_ID
        Value: '{{resolve:secretsmanager:secret.automation-user:SecretString:AWS_ACCESS_KEY_ID}}'
I have to do the same with Ansible (v2.9.15) and the community.aws.ecs_taskdefinition module.
Based on the "official" example, I have the following snippet:
- name: Create task definition
  ecs_taskdefinition:
    family: "{{ task_definition_name }}"
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    region: "{{ aws_region }}"
    execution_role_arn: "{{ execution_role_arn }}"
    containers:
      - name: prowler
        essential: true
        image: "{{ image }}"
        environment:
          - name: "AWS_ACCESS_KEY_ID"
            valueFrom: "arn:aws:secretsmanager:eu-west-1:909628726468:secret:secret.automation-user-KBSm8J/AWS_ACCESS_KEY_ID"
...but it doesn't work:
TASK [ansible-role-prowler-deploy : Create task definition] ********************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'value'
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1607197370.8633459-17-102108854554553/AnsiballZ_ecs_taskdefinition.py\", line 102, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1607197370.8633459-17-102108854554553/AnsiballZ_ecs_taskdefinition.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1607197370.8633459-17-102108854554553/AnsiballZ_ecs_taskdefinition.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.cloud.amazon.ecs_taskdefinition', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/usr/lib/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_ecs_taskdefinition_payload_5roid3ob/ansible_ecs_taskdefinition_payload.zip/ansible/modules/cloud/amazon/ecs_taskdefinition.py\", line 520, in <module>\n File \"/tmp/ansible_ecs_taskdefinition_payload_5roid3ob/ansible_ecs_taskdefinition_payload.zip/ansible/modules/cloud/amazon/ecs_taskdefinition.py\", line 357, in main\nKeyError: 'value'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
I tried a few variations of that syntax, but no luck.
It turned out that the secrets section should have been used:
- name: Create ECS task definition
  ecs_taskdefinition:
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    region: "{{ aws_region }}"
    family: "{{ task_definition_name }}"
    execution_role_arn: "{{ execution_role_arn }}"
    containers:
      - name: prowler
        essential: true
        image: "{{ image }}"
        repositoryCredentials:
          credentialsParameter: "{{ artifactory_creds_arn }}"
        logConfiguration:
          logDriver: awslogs
          options:
            "awslogs-group": "{{ log_group_name }}"
            "awslogs-region": "{{ aws_region }}"
            "awslogs-stream-prefix": "ecs"
        secrets:
          - name: "AWS_ACCESS_KEY_ID"
            valueFrom: "{{ aws_ak_arn }}"
          - name: "AWS_SECRET_ACCESS_KEY"
            valueFrom: "{{ aws_sk_arn }}"
        environment:
          - name: "AWS_ACCOUNT_ID"
            value: "{{ aws_id }}"

Lookup secrets from AWS secret manager | Ansible

Using Terraform, I have created secrets of the "Other" type in AWS Secrets Manager.
I need to use these secrets in my Ansible code. I found the link below, but I am unable to proceed with it.
https://docs.ansible.com/ansible/2.8/plugins/lookup/aws_secret.html
I have below Ansible code:-
database.yml
- name: Airflow | DB | Create MySQL DB
  mysql_db:
    login_user: "{{ mysql_user }}"
    # login_password: "{{ mysql_root_password }}"
    login_password: "{{ lookup('ca_dev', 'mysql_root_password') }}"
    # config_file: /etc/my.cnf
    # login_unix_socket: /var/lib/mysql/mysql.sock
    # encrypted: yes
    name: "airflow"
    state: "present"
How can I incorporate AWS Secrets Manager into my Ansible code?
Error message:
TASK [../../roles/airflow : Airflow | DB | Create MySQL DB] **************************************************************************************************************************************************************************
task path: /home/ec2-user/cng-ansible/roles/airflow/tasks/database.yml:25
The full traceback is:
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/ansible/executor/task_executor.py", line 140, in run
res = self._execute()
File "/usr/lib/python2.7/site-packages/ansible/executor/task_executor.py", line 539, in _execute
self._task.post_validate(templar=templar)
File "/usr/lib/python2.7/site-packages/ansible/playbook/task.py", line 267, in post_validate
super(Task, self).post_validate(templar)
File "/usr/lib/python2.7/site-packages/ansible/playbook/base.py", line 364, in post_validate
value = templar.template(getattr(self, name))
File "/usr/lib/python2.7/site-packages/ansible/template/__init__.py", line 540, in template
disable_lookups=disable_lookups,
File "/usr/lib/python2.7/site-packages/ansible/template/__init__.py", line 495, in template
disable_lookups=disable_lookups,
File "/usr/lib/python2.7/site-packages/ansible/template/__init__.py", line 746, in do_template
res = j2_concat(rf)
File "<template>", line 8, in root
File "/usr/lib/python2.7/site-packages/jinja2/runtime.py", line 193, in call
return __obj(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/ansible/template/__init__.py", line 631, in _lookup
instance = self._lookup_loader.get(name.lower(), loader=self._loader, templar=self)
File "/usr/lib/python2.7/site-packages/ansible/plugins/loader.py", line 381, in get
obj = getattr(self._module_cache[path], self.class_name)
AttributeError: 'module' object has no attribute 'LookupModule'
fatal: [127.0.0.1]: FAILED! => {
"msg": "Unexpected failure during module execution.",
"stdout": ""
}
RUNNING HANDLER [../../roles/airflow : restart rabbitmq-server]
task path: /home/ec2-user/cng-ansible/roles/airflow/handlers/main.yml:28
to retry, use: --limit #/home/ec2-user/cng-ansible/plays/airflow/installAirflow.retry
PLAY RECAP
127.0.0.1 : ok=39 changed=7 unreachable=0 failed=1
ansible-doc -t lookup -l output
The error {"msg": "lookup plugin (ca_dev) not found"} suggests your issue is a misuse of the lookup plugin name.
The following line:
login_password: "{{ lookup('ca_dev', 'mysql_root_password') }}"
should look something like:
login_password: "{{ lookup('aws_secret', 'mysql_root_password') }}"
ca_dev is not a valid lookup type, whereas aws_secret is.
You can see a list of supported lookup plugins for Ansible 2.8 in the Lookup Plugins section of the official documentation.
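The aws_secret lookup also accepts connection options as keyword arguments. A hedged sketch (the region value here is an assumption; it must match the region where the secret actually lives, and the secret name must match what Terraform created):

```yaml
login_password: "{{ lookup('aws_secret', 'mysql_root_password', region='eu-west-1') }}"
```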
If you are using a custom lookup plugin, or backporting a plugin from a newer version of Ansible to an older one, you must make sure it is in a directory visible to Ansible.
You can either place the file in one of the default search locations, ~/.ansible/plugins/lookup:/usr/share/ansible/plugins/lookup, or configure your ansible.cfg to look elsewhere using the lookup_plugins ini key under the defaults section.
DEFAULT_LOOKUP_PLUGIN_PATH
Description: Colon separated paths in which Ansible will search for Lookup Plugins.
Type: pathspec
Default: ~/.ansible/plugins/lookup:/usr/share/ansible/plugins/lookup
Ini Section: defaults
Ini Key: lookup_plugins
Environment: ANSIBLE_LOOKUP_PLUGINS
Documentation for this can be found in the Ansible Configuration section of the official documentation
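Put together, the ansible.cfg fragment described by the configuration reference above might look like this (the trailing custom path is a hypothetical placeholder, not something from the question):

```ini
[defaults]
; default search paths plus one custom plugin directory (placeholder path)
lookup_plugins = ~/.ansible/plugins/lookup:/usr/share/ansible/plugins/lookup:/path/to/custom/lookups
```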

How can I merge lists with ansible 2.6.3

In the past I used Ansible 2.3.1.0, and now I want to use Ansible 2.6.3.
I want to install some packages with apt. I have two source lists and want to merge them into one. In Ansible 2.3.1.0 I just used the following:
apt_packages:
  - "{{ packages_list1 + packages_list2 }}"
And I am getting the following error:
Traceback (most recent call last):\r\n File \"/tmp/ansible_k0tmag/ansible_module_apt.py\", line 1128, in <module>\r\n main()\r\n File \"/tmp/ansible_k0tmag/ansible_module_apt.py\", line 1106, in main\r\n
allow_unauthenticated=allow_unauthenticated\r\n File \"/tmp/ansible_k0tmag/ansible_module_apt.py\", line 521, in install\r\n pkgspec = expand_pkgspec_from_fnmatches(m, pkgspec, cache)\r\n File \"/tmp/ansible_k0tmag/ansible_module_apt.py\", line 439, in expand_pkgspec_from_fnmatches\r\n
pkgname_pattern, version = package_split(pkgspec_pattern)\r\n File \"/tmp/ansible_k0tmag/ansible_module_apt.py\", line 312, in package_split\r\n
parts = pkgspec.split('=', 1)\r\nAttributeError: 'list' object has no attribute 'split'\r\n", "msg": "MODULE FAILURE", "rc": 1}
Content of the role:
apt:
  state: present
  name: "{{ apt_packages }}"
  force: yes
when: apt_packages is defined
With apt_packages: "{{ packages_list1 + packages_list2 }}" (without the -), it will work.
Working sample:
- hosts: localhost
  gather_facts: no
  vars:
    pkgs1:
      - perl
      - python
    pkgs2:
      - vim
      - tar
    packages: "{{ pkgs1 + pkgs2 }}"
  tasks:
    - apt:
        name: "{{ packages }}"
        state: present
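Why the leading dash matters: in YAML, - "{{ packages_list1 + packages_list2 }}" makes the merged list the sole element of an outer list, so apt ends up handing a list to code that expects a package-name string, which is exactly the AttributeError in the traceback above. A small Python sketch of the two resulting structures:

```python
packages_list1 = ["perl", "python"]
packages_list2 = ["vim", "tar"]

# With the leading dash: the merged list becomes the only element of an outer list.
nested = [packages_list1 + packages_list2]
# apt iterates this and calls .split('=') on the inner *list*, raising
# AttributeError: 'list' object has no attribute 'split'

# Without the dash: a flat list of strings, which apt can split normally.
flat = packages_list1 + packages_list2
```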

An exception occurred during task execution in ansible playbook to launch ec2 instance

---
- hosts: localhost
  gather_facts: false
  vars:
    keypair: id_rsa
    instance_type: t2.micro
    image: ami-6f68cf0f
    region: us-west-2
  tasks:
    - name: launch ec2-instance
      ec2:
        key_name: "{{ keypair }}"
        instance_type: "{{ instance_type }}"
        image: ami-6f68cf0f
        wait: true
        group: wide-open
        region: "{{ region }}"
        aws_access_key: '************'
        aws_secret_key: '********************************'
      register: ec2
This is what I run, but it shows the following error:
Traceback (most recent call last):
File "/root/.ansible/tmp/ansible-tmp-1483922048.3-267705964376313/ec2", line 3628, in <module>
main()
File "/root/.ansible/tmp/ansible-tmp-1483922048.3-267705964376313/ec2", line 1413, in main
(instance_dict_array, new_instance_ids, changed) = create_instances(module, ec2, vpc)
File "/root/.ansible/tmp/ansible-tmp-1483922048.3-267705964376313/ec2", line 898, in create_instances
grp_details = ec2.get_all_security_groups()
File "/usr/lib/python2.6/site-packages/boto/ec2/connection.py", line 2984, in get_all_security_groups
[('item', SecurityGroup)], verb='POST')
File "/usr/lib/python2.6/site-packages/boto/connection.py", line 1186, in get_list
raise self.ResponseError(response.status, response.reason, body)
boto.exception.EC2ResponseError: EC2ResponseError: 401 Unauthorized
<?xml version="1.0" encoding="UTF-8"?>
<Response><Errors><Error><Code>AuthFailure</Code><Message>AWS was not able to validate the provided access credentials</Message></Error></Errors><RequestID>5d241900-6b6e-4398-aba3-16d5738fb6d5</RequestID></Response>
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "ec2"}, "module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1483922048.3-267705964376313/ec2\", line 3628, in <module>\n main()\n File \"/root/.ansible/tmp/ansible-tmp-1483922048.3-267705964376313/ec2\", line 1413, in main\n (instance_dict_array, new_instance_ids, changed) = create_instances(module, ec2, vpc)\n File \"/root/.ansible/tmp/ansible-tmp-1483922048.3-267705964376313/ec2\", line 898, in create_instances\n grp_details = ec2.get_all_security_groups()\n File \"/usr/lib/python2.6/site-packages/boto/ec2/connection.py\", line 2984, in get_all_security_groups\n [('item', SecurityGroup)], verb='POST')\n File \"/usr/lib/python2.6/site-packages/boto/connection.py\", line 1186, in get_list\n raise self.ResponseError(response.status, response.reason, body)\nboto.exception.EC2ResponseError: EC2ResponseError: 401 Unauthorized\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Response><Errors><Error><Code>AuthFailure</Code><Message>AWS was not able to validate the provided access credentials</Message></Error></Errors><RequestID>5d241900-6b6e-4398-aba3-16d5738fb6d5</RequestID></Response>\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
The error message you received is a fairly good one in this case:
AWS was not able to validate the provided access credentials
The credentials in question here are those for your AWS account. In particular, your aws_access_key and aws_secret_key variables.
You just need to get the right values for those variables: check what you have against what you were given by whoever issued the credentials (or against your own backup of them, if that was you).
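One way to avoid hardcoding (and mistyping) the keys in the playbook is to read them from the shell environment with the built-in env lookup. A sketch based on the question's task, assuming AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are exported before running the play:

```yaml
- name: launch ec2-instance
  ec2:
    key_name: "{{ keypair }}"
    instance_type: "{{ instance_type }}"
    image: "{{ image }}"
    wait: true
    group: wide-open
    region: "{{ region }}"
    # read credentials from the environment instead of embedding them
    aws_access_key: "{{ lookup('env', 'AWS_ACCESS_KEY_ID') }}"
    aws_secret_key: "{{ lookup('env', 'AWS_SECRET_ACCESS_KEY') }}"
  register: ec2
```

This also keeps secrets out of version control, which the masked values in the question suggest was already a concern.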
