Ansible pamd: module failure

I have this playbook as part of a role that makes some changes to PAM modules:
---
- name: "{{ BANNER }} - SET MODE"
  copy:
    remote_src: True
    src: "{{ LOGIN_DEF }}"
    dest: "{{ LOGIN_DEF_BCK }}_RH7-021_{{ CK_ORA }}"

- replace:
    path: "{{ LOGIN_DEF }}"
    regexp: '{{ item.src }}'
    replace: '{{ item.dst }}'
  with_items:
    - { src: '(.*FAIL_DELAY.*)', dst: '#\1' }

- lineinfile:
    path: "{{ LOGIN_DEF }}"
    line: 'FAIL_DELAY 10'

- replace:
    path: "{{ PASSWORDAUTH }}"
    regexp: '{{ item.src }}'
    replace: '{{ item.dst }}'
  with_items:
    - { src: '^auth .* pam_faildelay.so', dst: '' }

- pamd:
    name: password-auth
    type: auth
    control: sufficient
    module_path: 'pam_unix.so'
    new_type: auth
    new_control: optional
    new_module_path: 'pam_faildelay.so'
    module_arguments:
    state: after

- replace:
    path: "{{ SYSTEMAUTH }}"
    regexp: '{{ item.src }}'
    replace: '{{ item.dst }}'
  with_items:
    - { src: '^auth .* pam_faildelay.so', dst: '' }

- pamd:
    name: system-auth
    type: auth
    control: sufficient
    module_path: 'pam_unix.so'
    new_type: auth
    new_control: optional
    new_module_path: 'pam_faildelay.so'
    module_arguments:
    state: after

- debug: msg="{{ MSG_SET }}"
When I run it I get this error:
TASK [RH7-021 : pamd] ***********************************************************************************************************************************************
fatal: [10.13.203.165]: FAILED! => {"changed": false, "module_stderr": "", "module_stdout": "\r\nTraceback (most recent call last):\r\n
File \"/home/ccansible/.ansible/tmp/ansible-tmp-1561986679.75-245340126875212/AnsiballZ_pamd.py\", line 113, in <module>\r\n _ansiballz_main()\r\n
File \"/home/ccansible/.ansible/tmp/ansible-tmp-1561986679.75-245340126875212/AnsiballZ_pamd.py\", line 105, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n
File \"/home/ccansible/.ansible/tmp/ansible-tmp-1561986679.75-245340126875212/AnsiballZ_pamd.py\", line 48, in invoke_module\r\n imp.load_module('__main__', mod, module, MOD_DESC)\r\n
File \"/tmp/ansible_pamd_payload_NpycuP/__main__.py\", line 880, in <module>\r\n File \"/tmp/ansible_pamd_payload_NpycuP/__main__.py\", line 816, in main\r\n File \"/tmp/ansible_pamd_payload_NpycuP/__main__.py\", line 458, in __init__\r\n
File \"/tmp/ansible_pamd_payload_NpycuP/__main__.py\", line 371, in rule_from_string\r\n
AttributeError: 'NoneType' object has no attribute 'group'\r\n",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
to retry, use: --limit @/home/PG005856/HARDENING/main.retry
I can't figure out what's wrong; moreover, I used the same method in another playbook and it worked fine.
The control node has this Ansible version:
ansible 2.7.6
  config file = /home/PG005856/HARDENING/ansible.cfg
  configured module search path = [u'/home/PG005856/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /ansible/lib/python2.7/site-packages/ansible
  executable location = /ansible/bin/ansible
  python version = 2.7.5 (default, Feb 20 2018, 09:19:12) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]
The target server is:
Linux rh7-test-ansible 3.10.0-693.17.1.el7.x86_64 #1 SMP Sun Jan 14 10:36:03 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.4 (Maipo)
I have read that this was a bug, but I assumed it had been resolved by Ansible 2.7.
I don't know what else to try. I could achieve the same result with a few lines of sed via the shell module, but I'd like to use the pamd module.
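Two hedged things worth trying. First, omit the empty module_arguments: key from the pamd tasks, since an optional parameter left as a bare key arrives as null rather than an empty list (whether that contributes to this parse failure is not established). Second, since lineinfile is already used above, the insertion itself can be done with lineinfile's insertafter instead of shelling out to sed. A minimal sketch, assuming the stock RHEL 7 password-auth layout; the regexp and rule text are illustrative and should be checked against the real file:

- name: Insert pam_faildelay after pam_unix (lineinfile fallback sketch)
  lineinfile:
    path: /etc/pam.d/password-auth
    # match the existing pam_unix auth rule; adjust spacing/control to the real file
    insertafter: '^auth\s+sufficient\s+pam_unix\.so'
    line: 'auth        optional      pam_faildelay.so'

This sidesteps the pamd parser entirely, which is where the traceback (rule_from_string) says the failure happens.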

Related

Error on Ansible VMWARE VMOTION module - need to move VMs using ansible from primary vcenter to DR site for Emergency Move

I am attempting to migrate VMs from one vCenter to another in the environment using Ansible.
Both of the vCenters are running vSphere Client version 7.0.3.01100.
ansible-helper is a RHEL 8 system with Python 3 and the required pyvmomi and vsphere-automation-sdk-python packages.
Note: the VMware snapshot and power-off steps work. It just fails at the vMotion stage.
The user being used to attempt the vMotion is the same (AD-controlled) account and has the same privileges on both vCenters.
Here is the playbook used to attempt the move:
---
- name: Migrate Systems to Subterra (Fail-over to DR Site)
  hosts: subterra
  gather_facts: no
  vars_files:
    - environments/vars/vault.yml
  vars:
    vcenter_hostname: "vcenter1.company.com"
    vcenter_username: "DOMAIN\\ansible_user"
    dst_vcenter: "vcenter2.company.com"
    datacenter_name: "DC1"
    datastore: "SAN_Location1"
  tasks:
    - name: Gather one specific VM
      community.vmware.vmware_vm_info:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        vm_name: "{{ inventory_hostname }}"
        validate_certs: false
      delegate_to: ansible-helper
      register: vminfo

    - name: List the VM's Information
      ansible.builtin.debug:
        var: vminfo

    - name: print keys of cluster data
      ansible.builtin.debug:
        msg: "{{ vminfo.virtual_machines | list }}"

    - name: List the VM's Folder Information
      ansible.builtin.debug:
        var: vminfo.virtual_machines[0].folder

    - name: Testing printing the json_query
      ansible.builtin.debug:
        msg: "{{ vminfo | json_query('virtual_machines[0].folder') }}"

    - name: Listing Datastore name
      ansible.builtin.debug:
        msg: "{{ vminfo | json_query('virtual_machines[0].datastore_url.name') }}"

    - name: Define VM Folder
      ansible.builtin.set_fact:
        vm_folder_loc: "{{ vminfo | json_query('virtual_machines[0].folder') }}"

    - name: Define Data Store from vm output
      ansible.builtin.set_fact:
        dest_data_store: "{{ vminfo | json_query('virtual_machines[0].datastore_url.name') }}"

    - name: List the VM's Folder Information
      ansible.builtin.debug:
        var: vm_folder_loc

    - name: Create a snapshot prior to migration
      community.vmware.vmware_guest_snapshot:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        datacenter: "{{ datacenter_name }}"
        folder: "{{ vm_folder_loc }}"
        name: "{{ inventory_hostname }}"
        state: present
        memory_dump: false
        validate_certs: false
        snapshot_name: Snapshot_Prior_to_Migration
        description: "Snapshot of system prior to migration to vcenter2"
      delegate_to: ansible-helper

    - name: PowerOff VM
      community.vmware.vmware_guest:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        name: "{{ inventory_hostname }}"
        validate_certs: false
        state: poweredoff
      delegate_to: ansible-helper

    - name: Perform vMotion of virtual machine to Subterra
      community.vmware.vmware_vmotion:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        vm_name: "{{ inventory_hostname }}"
        destination_datastore: "{{ dest_data_store }}"
        destination_datacenter: "vcenter2"
        destination_resourcepool: "Linux"
        validate_certs: false
      delegate_to: ansible-helper

    - name: PowerOn VM
      community.vmware.vmware_guest:
        hostname: "{{ dst_vcenter }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        name: "{{ inventory_hostname }}"
        validate_certs: false
        state: poweredon
      delegate_to: ansible-helper
The playbook works until it gets to the community.vmware.vmware_vmotion module. Then I get the following error:
{
"exception": "Traceback (most recent call last):\r\n File \"/home/devops/.ansible/tmp/ansible-tmp-1676650446.81-32325-16791288075867/AnsiballZ_vmware_vmotion.py\", line 102, in <module>\r\n _ansiballz_main()\r\n File \"/home/devops/.ansible/tmp/ansible-tmp-1676650446.81-32325-16791288075867/AnsiballZ_vmware_vmotion.py\", line 94, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File \"/home/devops/.ansible/tmp/ansible-tmp-1676650446.81-32325-16791288075867/AnsiballZ_vmware_vmotion.py\", line 40, in invoke_module\r\n runpy.run_module(mod_name='ansible_collections.community.vmware.plugins.modules.vmware_vmotion', init_globals=None, run_name='__main__', alter_sys=True)\r\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\r\n return _run_module_code(code, init_globals, run_name, mod_spec)\r\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\r\n mod_name, mod_spec, pkg_name, script_name)\r\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_vmotion.py\", line 549, in <module>\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_vmotion.py\", line 545, in main\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_vmotion.py\", line 243, in __init__\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/module_utils/vmware.py\", line 249, in find_datastore_by_name\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/module_utils/vmware.py\", line 209, in find_object_by_name\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/module_utils/vmware.py\", line 796, in get_all_objs\r\n File \"/usr/local/lib/python3.6/site-packages/pyVmomi/VmomiSupport.py\", line 706, in <lambda>\r\n self.f(*(self.args + (obj,) + args), **kwargs)\r\n File \"/usr/local/lib/python3.6/site-packages/pyVmomi/VmomiSupport.py\", line 511, in _InvokeMethod\r\n list(map(CheckField, info.params, args))\r\n File \"/usr/local/lib/python3.6/site-packages/pyVmomi/VmomiSupport.py\", line 1098, in CheckField\r\n % (info.name, info.type.__name__, valType.__name__))\r\nTypeError: For \"container\" expected type vim.ManagedEntity, but got str\r\n",
"_ansible_no_log": false,
"_ansible_delegated_vars": {
"ansible_port": null,
"ansible_host": "192.168.28.12",
"ansible_user": "devops"
},
"module_stderr": "OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017\r\ndebug1: Reading configuration data /var/lib/awx/.ssh/config\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 58: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 32265\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\nShared connection to 192.168.28.12 closed.\r\n",
"changed": false,
"module_stdout": "Traceback (most recent call last):\r\n File \"/home/devops/.ansible/tmp/ansible-tmp-1676650446.81-32325-16791288075867/AnsiballZ_vmware_vmotion.py\", line 102, in <module>\r\n _ansiballz_main()\r\n File \"/home/devops/.ansible/tmp/ansible-tmp-1676650446.81-32325-16791288075867/AnsiballZ_vmware_vmotion.py\", line 94, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File \"/home/devops/.ansible/tmp/ansible-tmp-1676650446.81-32325-16791288075867/AnsiballZ_vmware_vmotion.py\", line 40, in invoke_module\r\n runpy.run_module(mod_name='ansible_collections.community.vmware.plugins.modules.vmware_vmotion', init_globals=None, run_name='__main__', alter_sys=True)\r\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\r\n return _run_module_code(code, init_globals, run_name, mod_spec)\r\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\r\n mod_name, mod_spec, pkg_name, script_name)\r\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_vmotion.py\", line 549, in <module>\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_vmotion.py\", line 545, in main\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_vmotion.py\", line 243, in __init__\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/module_utils/vmware.py\", line 249, in find_datastore_by_name\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/module_utils/vmware.py\", line 209, in find_object_by_name\r\n File \"/tmp/ansible_community.vmware.vmware_vmotion_payload_eyr21noz/ansible_community.vmware.vmware_vmotion_payload.zip/ansible_collections/community/vmware/plugins/module_utils/vmware.py\", line 796, in get_all_objs\r\n File \"/usr/local/lib/python3.6/site-packages/pyVmomi/VmomiSupport.py\", line 706, in <lambda>\r\n self.f(*(self.args + (obj,) + args), **kwargs)\r\n File \"/usr/local/lib/python3.6/site-packages/pyVmomi/VmomiSupport.py\", line 511, in _InvokeMethod\r\n list(map(CheckField, info.params, args))\r\n File \"/usr/local/lib/python3.6/site-packages/pyVmomi/VmomiSupport.py\", line 1098, in CheckField\r\n % (info.name, info.type.__name__, valType.__name__))\r\nTypeError: For \"container\" expected type vim.ManagedEntity, but got str\r\n",
"rc": 1,
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error"
}
Here is my ansible.cfg:
[defaults]
inventory = ./environments/test/hosts
ANSIBLE_SSH_ARGS = -C -o ControlMaster=auto -o ControlPersist=30m
ANSIBLE_PIPELINING = True
retry_files_enabled = False
host_key_checking = False
allow_world_readable_tmpfiles = True
remote_user = devops
private_key_file = ../keys/private_key_rsa
role_path = ./roles/
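The traceback dies in find_datastore_by_name with a str where a vim.ManagedEntity was expected, and the failing task sets destination_datacenter: "vcenter2", which looks like a vCenter hostname rather than a datacenter name (the vars define datacenter_name: "DC1" for exactly that purpose). One way to check what the destination side actually resolves, as a hedged diagnostic sketch — it assumes community.vmware.vmware_datastore_info is available, and variable names reuse the playbook's vars:

- name: Sanity-check datastore visibility on the destination vCenter (diagnostic sketch)
  community.vmware.vmware_datastore_info:
    hostname: "{{ dst_vcenter }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    datacenter: "{{ datacenter_name }}"   # a datacenter name, not a vCenter hostname
    validate_certs: false
  delegate_to: ansible-helper
  register: ds_info

- name: Show the datastore names found there
  ansible.builtin.debug:
    msg: "{{ ds_info.datastores | map(attribute='name') | list }}"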

Ansible valueFrom aws secrets manager

I need to set environment vars for a container in AWS Fargate.
The values for those vars are stored in AWS Secrets Manager; the secret ARN is arn:aws:secretsmanager:eu-west-1:909628726468:secret:secret.automation-user-KBSm8J, and it stores two key/value secrets, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
In CloudFormation the following worked perfectly:
ContainerDefinitions:
  - Name: "prowler"
    Image: !Ref Image
    Environment:
      - Name: AWS_ACCESS_KEY_ID
        Value: '{{resolve:secretsmanager:secret.automation-user:SecretString:AWS_ACCESS_KEY_ID}}'
I have to do the same with Ansible (v2.9.15) and the community.aws.ecs_taskdefinition module.
Based on the "official" example I have the following snippet:
- name: Create task definition
  ecs_taskdefinition:
    family: "{{ task_definition_name }}"
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    region: "{{ aws_region }}"
    execution_role_arn: "{{ execution_role_arn }}"
    containers:
      - name: prowler
        essential: true
        image: "{{ image }}"
        environment:
          - name: "AWS_ACCESS_KEY_ID"
            valueFrom: "arn:aws:secretsmanager:eu-west-1:909628726468:secret:secret.automation-user-KBSm8J/AWS_ACCESS_KEY_ID"
...but it doesn't work:
TASK [ansible-role-prowler-deploy : Create task definition] ********************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'value'
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1607197370.8633459-17-102108854554553/AnsiballZ_ecs_taskdefinition.py\", line 102, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1607197370.8633459-17-102108854554553/AnsiballZ_ecs_taskdefinition.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1607197370.8633459-17-102108854554553/AnsiballZ_ecs_taskdefinition.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.cloud.amazon.ecs_taskdefinition', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/usr/lib/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_ecs_taskdefinition_payload_5roid3ob/ansible_ecs_taskdefinition_payload.zip/ansible/modules/cloud/amazon/ecs_taskdefinition.py\", line 520, in <module>\n File \"/tmp/ansible_ecs_taskdefinition_payload_5roid3ob/ansible_ecs_taskdefinition_payload.zip/ansible/modules/cloud/amazon/ecs_taskdefinition.py\", line 357, in main\nKeyError: 'value'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
I tried a few variations of that syntax, but no luck.
It turned out that the secrets section should have been used instead: plain environment entries only take name/value pairs, which is presumably why the valueFrom entry above made the module raise KeyError: 'value'.
- name: Create ECS task definition
  ecs_taskdefinition:
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    region: "{{ aws_region }}"
    family: "{{ task_definition_name }}"
    execution_role_arn: "{{ execution_role_arn }}"
    containers:
      - name: prowler
        essential: true
        image: "{{ image }}"
        repositoryCredentials:
          credentialsParameter: "{{ artifactory_creds_arn }}"
        logConfiguration:
          logDriver: awslogs
          options:
            "awslogs-group": "{{ log_group_name }}"
            "awslogs-region": "{{ aws_region }}"
            "awslogs-stream-prefix": "ecs"
        secrets:
          - name: "AWS_ACCESS_KEY_ID"
            valueFrom: "{{ aws_ak_arn }}"
          - name: "AWS_SECRET_ACCESS_KEY"
            valueFrom: "{{ aws_sk_arn }}"
        environment:
          - name: "AWS_ACCOUNT_ID"
            value: "{{ aws_id }}"
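For quick reference, the shape difference that tripped this up: environment carries literal values under value, while secrets carries an ARN under valueFrom that ECS resolves at container start. A minimal sketch; the account ID is a placeholder and the variable names reuse the snippet above:

containers:
  - name: prowler
    image: "{{ image }}"
    environment:                 # literal values: name/value
      - name: AWS_ACCOUNT_ID
        value: "123456789012"    # placeholder
    secrets:                     # resolved by ECS at startup: name/valueFrom
      - name: AWS_ACCESS_KEY_ID
        valueFrom: "{{ aws_ak_arn }}"   # Secrets Manager (or SSM parameter) ARN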

Where is the syntax error in ansible-playbook?

I'm facing some (silly) problems with ansible-playbook. I don't understand the error: it is reported as a syntax error, yet another, very similar playbook executed successfully and this one does not.
ERROR! 'copy' is not a valid attribute for a Play

The error appears to have been in '/home/manu/monitorizacion/node-exporter/playbooks/deploy-node-exporter.yml': line 2, column 3, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

---
- name: Upload project directory to '{{ docker_home }}'
  ^ here

We could be wrong, but this one looks like it might be an issue with
missing quotes. Always quote template expression brackets when they
start a value. For instance:

    with_items:
      - {{ foo }}

Should be written as:

    with_items:
      - "{{ foo }}"
The thing is that I have an install.yml playbook with two include_tasks:
---
- hosts: wargame
  vars:
    project_home: ../../node-exporter/
    scripts_dir: /home/docker/node-exporter/textfile-collector/
    docker_home: /home/docker/node-exporter/
  tasks:
    - include_tasks: packages.yml
    - include_tasks: deploy-node-exporter.yml
packages.yml is fine and is executed without problems:
---
- name: Install packages required for Docker and the installation
  apt:
    update_cache: true
    name: "{{ packages }}"
  vars:
    packages:
      - lsb-core
      - apt-transport-https
      - ca-certificates
      - curl
      - gnupg2
      - python-setuptools
      - python-pip
      - git
      - smartmontools

- name: Install Docker key
  apt_key:
    url: https://download.docker.com/linux/debian/gpg

- name: Install Docker repository
  apt_repository:
    repo: "deb [arch=amd64] https://download.docker.com/linux/debian {{ ansible_lsb.codename }} stable"

- name: Update package list
  apt:
    update_cache: yes

- name: Install Docker
  apt:
    name: docker-ce

- name: Install Docker-compose
  pip:
    name: docker-compose

- name: Docker group
  group:
    name: docker
    state: present

- name: Docker user
  user:
    name: docker
    group: docker
As you can see, the last executed task is "Docker user":
ok: [188.166.52.222] => {"append": false, "changed": false, "comment": "", "group": 998, "home": "/home/docker", "move_home": false, "name": "docker", "shell": "", "state": "present", "uid": 1002}
TASK [include_tasks] *****************************************************************************************************************************
fatal: [188.166.52.222]: FAILED! => {"reason": "no action detected in task. This often indicates a misspelled module name, or incorrect module path.\n\nThe error appears to have been in '/home/manu/monitorizacion/node-exporter/playbooks/deploy-node-exporter.yml': line 24, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n mode: '0755'\n- name: Deploy node-exporter\n ^ here\n"}
And now the error is in deploy-node-exporter.yml, but the syntax is the same as in install.yml:
---
- name: Upload project directory to '{{ docker_home }}'
  copy:
    src: "{{ project_home }}"
    dest: "{{ docker_home }}"
    directory_mode: yes
    follow: yes
    owner: docker
    group: docker

- name: Fix perms for dirs
  file:
    path: "{{ item }}"
    state: touch
    mode: '0755'
  with_items:
    - "{{ docker_home }}"
    - "{{ docker_home }}/textfile-collector"
    - "{{ docker_home }}/playbooks"

- name: Give exec perms to smartmon.sh
  file:
    path: "{{ scripts_dir }}/smartmon.sh"
    state: touch
    mode: '0755'

- name: Deploy node-exporter
  docker_compose:
    project_src: "{{ docker_home }}"
    build: no
    state: present
    recreate: always
  register: output

- name: Creates cron entry for smartmon.sh
  cron:
    name: Cron job for collecting smartctl stats
    minute: "*/15"
    user: root
    job: "{{ scripts_dir }}/smartmon.sh > {{ scripts_dir }}/smartmon.prom"
Either I'm blind and can't find the syntax error, or it's an Ansible problem.
My currently installed version:
ansible 2.7.7
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0]
--- update ---
I've put everything into a single file:
---
- hosts: myServer
  vars:
    project_home: ../../node-exporter/
    scripts_dir: /home/docker/node-exporter/textfile-collector/
    docker_home: /home/docker/node-exporter/
  tasks:
    - name: Install packages required for Docker and the installation
      apt:
        update_cache: true
        name: "{{ packages }}"
      vars:
        packages:
          - lsb-core
          - apt-transport-https
          - ca-certificates
          - curl
          - gnupg2
          - python-setuptools
          - python-pip
          - git
          - smartmontools

    - name: Install Docker key
      apt_key:
        url: https://download.docker.com/linux/debian/gpg

    - name: Install Docker repository
      apt_repository:
        repo: "deb [arch=amd64] https://download.docker.com/linux/debian {{ ansible_lsb.codename }} stable"

    - name: Update package list
      apt:
        update_cache: yes

    - name: Install Docker
      apt:
        name: docker-ce

    - name: Install Docker-compose
      pip:
        name: docker-compose

    - name: Docker group
      group:
        name: docker
        state: present

    - name: Docker user
      user:
        name: docker
        group: docker

    - name: Upload project directory to '{{ docker_home }}'
      copy:
        src: "../../node-exporter/"
        dest: "/home/docker/node-exporter/"
        directory_mode: yes
        follow: yes
        owner: docker
        group: docker

    - name: Fix perms for dirs
      file:
        path: "{{ item }}"
        state: touch
        mode: "0755"
      with_items:
        - "/home/docker/node-exporter/"
        - "/home/docker/node-exporter/textfile-collector"
        - "/home/docker/node-exporter/playbooks"

    - name: Give exec perms to smartmon.sh
      file:
        path: "{{ scripts_dir }}/smartmon.sh"
        state: touch
        mode: "0755"

    - name: Creating cron entry for smartmon.sh
      cron:
        name: Cron job for collecting smartctl stats
        minute: "*/15"
        user: root
        job: "{{ scripts_dir }}/smartmon.sh > {{ scripts_dir }}/smartmon.prom"
In this file I've removed:
- name: Deploying node-exporter
  docker_compose:
    project_src: "{{ docker_home }}"
    build: no
    state: present
    recreate: always
  register: output
And now it works (without this part of the code):
┌─[root@hippi3c0w] - [~manu/monitorizacion/node-exporter/playbooks] - [sáb ago 31, 20:59]
└─[$] <git:(master*)> ansible-playbook install_new.yml --syntax-check
playbook: install_new.yml
But the problem comes back when I add this part to the YAML file, so the problem is without doubt the following part:
- name: Deploying node-exporter
  docker_compose:
    project_src: "{{ docker_home }}"
    build: no
    state: present
    recreate: always
  register: output
Do you know what the problem could be? Maybe docker_compose?
--- UPDATE ---
It was due to the Ansible version: 2.7 doesn't ship the docker_compose module. I updated to 2.8.4 and now it works properly.
PLAY RECAP ***************************************************************************************************************************************
[MyIP] : ok=16 changed=5 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
┌─[root@hippi3c0w] - [~manu/monitorizacion/node-exporter/playbooks] - [dom sep 01, 08:34]
└─[$] <git:(master*)> ansible --version; ansible-playbook install.yml --syntax-check
ansible 2.8.4
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.16+ (default, Jul 8 2019, 09:45:29) [GCC 8.3.0]
playbook: install.yml
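Since "no action detected in task" is how this Ansible version reports a module it doesn't recognize, a guard placed before the include_tasks that pulls in the docker_compose task can fail fast with a clearer message. A sketch using the built-in ansible_version variable; the minimum version string reflects docker_compose shipping with 2.8:

- name: Ensure Ansible is new enough for docker_compose (guard sketch)
  assert:
    that:
      - ansible_version.full is version('2.8', '>=')
    fail_msg: "docker_compose needs Ansible >= 2.8; found {{ ansible_version.full }}"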
Would you please be more specific about which error you want to solve?
This error:
- name: Upload project directory to '{{ docker_home }}'
  ^ here

We could be wrong, but this one looks like it might be an issue with
missing quotes. Always quote template expression brackets when they
start a value. For instance:

    with_items:
      - {{ foo }}

Should be written as:

    with_items:
      - "{{ foo }}"
Or that one?
ok: [188.166.52.222] => {"append": false, "changed": false, "comment": "", "group": 998, "home": "/home/docker", "move_home": false, "name": "docker", "shell": "", "state": "present", "uid": 1002}

TASK [include_tasks]
fatal: [188.166.52.222]: FAILED! => {"reason": "no action detected in task. This often indicates a misspelled module name, or incorrect module path.\n\nThe error appears to have been in '/home/manu/monitorizacion/node-exporter/playbooks/deploy-node-exporter.yml': line 24, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n mode: '0755'\n- name: Deploy node-exporter\n ^ here\n"}
JFYI: copy works in my Ansible version:
tasks:
  - name: Test copy
    copy:
      src: "/home/manu/test"
      dest: "/home/user"
      follow: yes
      owner: user
      group: user
      mode: 0755
└─[$] <git:(master*)> ansible-playbook test.yaml --syntax-check
playbook: test.yaml
┌─[root@hippi3c0w] - [~manu/monitorizacion/node-exporter/playbooks] - [sáb ago 31, 20:33]
└─[$] <git:(master*)> ansible-playbook test.yaml -u user -v
Using /etc/ansible/ansible.cfg as config file
/etc/ansible/hosts did not meet host_list requirements, check plugin documentation if this is unexpected
/etc/ansible/hosts did not meet script requirements, check plugin documentation if this is unexpected
PLAY [myserver] ***********************************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************************
ok: [myIP]
TASK [Test copy] *********************************************************************************************************************************
changed: [myIP] => {"changed": true, "checksum": "55ca6286e3e4f4fba5d0448333fa99fc5a404a73", "dest": "/home/user/test", "gid": 1001, "group": "user", "mode": "0755", "owner": "user", "path": "/home/user/test", "size": 3, "state": "file", "uid": 1001}
PLAY RECAP ***************************************************************************************************************************************
myIP : ok=2 changed=1 unreachable=0 failed=0

Ansible: Copying files from local to remote - "AttributeError: 'dict' object has no attribute 'endswith'"

I am trying to copy files (scripts and RPMs) stored locally to a set of servers. I can copy the files when the names are hard-coded, but not when I am using a variable.
ansible-lint comes back with no errors.
When I use variable replacement I get the error:
TASK [Copy cpu_gov.sh]
***************************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AttributeError: 'dict' object has no attribute 'endswith'
fatal: [ceph3]: FAILED! => {"msg": "Unexpected failure during module execution.", "stdout": ""}
In debug mode I can see that it is a Python error on a trailing "/". All other uses of the variable work fine; only when it is in the "src:" field does it fail.
The full traceback is:
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ansible/executor/task_executor.py", line 145, in run
    res = self._execute()
  File "/usr/lib/python2.7/site-packages/ansible/executor/task_executor.py", line 650, in _execute
    result = self._handler.run(task_vars=variables)
  File "/usr/lib/python2.7/site-packages/ansible/plugins/action/copy.py", line 461, in run
    trailing_slash = source.endswith(os.path.sep)
AttributeError: 'dict' object has no attribute 'endswith'
fatal: [ceph3]: FAILED! => {
    "msg": "Unexpected failure during module execution.",
    "stdout": ""
}
---
### Test
#
- hosts: all
  vars:
    #isdct_rpm: foobar.txt
    isdct_rpm: isdct-3.0.16-1.x86_64.rpm
    cpu_gov: cpu_gov.sh
    irq_bal: irq_balance.sh
    root_dir: /root
    bin_dir: /root/bin
    files_dir: /root/projects/ansible/bootstrap/files
  remote_user: root
  tasks:
These work just fine:
- name: ISDCT rpm exists?
  stat:
    path: "{{ root_dir }}/{{ isdct_rpm }}"
  register: isdct_rpm
  tags:
    - tools

- name: cpu_gov exists?
  stat:
    path: "{{ bin_dir }}/{{ cpu_gov }}"
  register: cpu_gov
  tags:
    - tools

- name: irq_balance exists?
  stat:
    path: "{{ bin_dir }}/{{ irq_bal }}"
  register: irq_bal
  tags:
    - tools
The first task is the failing one:
- name: Copy ISDCT rpm
  copy:
    remote_src: no
    src: "{{ isdct_rpm }}"
    dest: "{{ root_dir }}"
  when: not isdct_rpm.stat.exists
These work fine:
- name: Copy rpm
  copy:
    remote_src: no
    src: isdct-3.0.16-1.x86_64.rpm
    dest: /root
  when: not isdct_rpm.stat.exists

- name: Copy cpu_gov.sh
  copy:
    remote_src: no
    src: cpu_gov.sh
    # - fails - src: "{{ cpu_gov }}"
    dest: "{{ bin_dir }}"
  when: not cpu_gov.stat.exists

- name: Copy irq_balance.sh
  copy:
    remote_src: no
    src: irq_balance.sh
    dest: /root
  when: not irq_bal.stat.exists
You have a variable in your vars section named isdct_rpm, which is a string, but you're registering a dictionary with the same name in your "ISDCT rpm exists?" task. This overrides the string value.
Stop trying to use the same variable name for two different purposes and I suspect things will work as expected.
@larsks answered my question. I was using the same name for my variable and the register value.
This works:
#
###--- Copy the scripts over if needed
#
- name: Copy ISDCT rpm
  copy:
    remote_src: no
    src: "{{ isdct_rpm }}"
    dest: "{{ root_dir }}"
  when: not isdctrpm.stat.exists

- name: Copy cpu_gov.sh
  copy:
    remote_src: no
    #src: cpu_gov.sh
    src: "{{ cpu_gov }}"
    dest: "{{ bin_dir }}"
  when: not cpugov.stat.exists

- name: Copy irq_balance.sh
  copy:
    remote_src: no
    src: "{{ irq_bal }}"
    dest: "{{ bin_dir }}"
  when: not irqbal.stat.exists
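For this to work, the earlier stat tasks must register the renamed variables; they are not shown in the update, so here is a reconstruction under that assumption:

- name: ISDCT rpm exists?
  stat:
    path: "{{ root_dir }}/{{ isdct_rpm }}"
  register: isdctrpm    # no longer shadows the isdct_rpm string var

- name: cpu_gov exists?
  stat:
    path: "{{ bin_dir }}/{{ cpu_gov }}"
  register: cpugov

- name: irq_balance exists?
  stat:
    path: "{{ bin_dir }}/{{ irq_bal }}"
  register: irqbal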

How can I merge lists with ansible 2.6.3

In the past I used Ansible 2.3.1.0 and now want to use Ansible 2.6.3.
I want to install some packages with apt. I have two package lists and want to merge them into one. In Ansible 2.3.1.0 I just used the following:
apt_packages:
  - "{{ packages_list1 + packages_list2 }}"
And I am getting the following error:
Traceback (most recent call last):
  File "/tmp/ansible_k0tmag/ansible_module_apt.py", line 1128, in <module>
    main()
  File "/tmp/ansible_k0tmag/ansible_module_apt.py", line 1106, in main
    allow_unauthenticated=allow_unauthenticated
  File "/tmp/ansible_k0tmag/ansible_module_apt.py", line 521, in install
    pkgspec = expand_pkgspec_from_fnmatches(m, pkgspec, cache)
  File "/tmp/ansible_k0tmag/ansible_module_apt.py", line 439, in expand_pkgspec_from_fnmatches
    pkgname_pattern, version = package_split(pkgspec_pattern)
  File "/tmp/ansible_k0tmag/ansible_module_apt.py", line 312, in package_split
    parts = pkgspec.split('=', 1)
AttributeError: 'list' object has no attribute 'split'
", "msg": "MODULE FAILURE", "rc": 1}
Content of the role:
- apt:
    state: present
    name: "{{ apt_packages }}"
    force: yes
  when: apt_packages is defined
With apt_packages: "{{ packages_list1 + packages_list2 }}" (without the -), it will work. The leading - wraps the merged list in another list, so apt receives a list whose single element is itself a list, and package_split() then calls .split() on that inner list — which is exactly the AttributeError above.
Working sample:
- hosts: localhost
  gather_facts: no
  vars:
    pkgs1:
      - perl
      - python
    pkgs2:
      - vim
      - tar
    packages: "{{ pkgs1 + pkgs2 }}"
  tasks:
    - apt:
        name: "{{ packages }}"
        state: present
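If the two lists can overlap, a variation with the union filter merges and deduplicates in one step (a sketch; the package names are illustrative and the duplicate entry is deliberate to show the effect):

- hosts: localhost
  gather_facts: no
  vars:
    pkgs1:
      - perl
      - python
    pkgs2:
      - python      # duplicate on purpose
      - tar
  tasks:
    - apt:
        # union keeps a single copy of each package name
        name: "{{ pkgs1 | union(pkgs2) }}"
        state: present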
