Ansible unarchive module fails with "Unexpected error when accessing exploded file: [Errno 2] No such file or directory"

My zip file contains a folder UAT-04022021_01 which has several files.
ls output for the zip archive on my local Ansible server, and its contents, is below:
-rwxrwxr-x 1 user1 user1 171910544 Feb 4 07:02 /web/playbooks/automation/deployments/tmpfiles/04/UAT-04022021_01.zip
$ unzip -l "/web/playbooks/automation/deployments/tmpfiles/04/UAT-04022021_01.zip"
Archive: /web/playbooks/automation/deployments/tmpfiles/04/UAT-04022021_01.zip
Length Date Time Name
--------- ---------- ----- ----
0 02-04-2021 17:31 UAT-04022021_01/
144241065 02-04-2021 15:58 UAT-04022021_01/canv.ear
8342584 02-04-2021 16:00 UAT-04022021_01/canpizza.ear
10522141 02-04-2021 16:01 UAT-04022021_01/cantity.ear
8778258 02-04-2021 15:59 UAT-04022021_01/canipc.ear
--------- -------
171884048 5 files
[user1@linuxlocalhost deployments]$
Below is my playbook that extracts the zip on all remote destination hosts:
---
- name: "Play 1"
  hosts: "{{ dest_host }}"
  user: "{{ USER }}"
  tasks:

    - name: Give better permissions to the zip file
      tags: validate
      file:
        path: "{{ playbook_dir }}/tmpfiles/{{ Latest_Build_Number }}/{{ depfile | basename }}"
        mode: 0775
      delegate_to: localhost
      run_once: true

    - name: "Untar artifacts in CURRENT Folder"
      tags: validate
      unarchive:
        src: "{{ playbook_dir }}/tmpfiles/{{ Latest_Build_Number }}/{{ depfile | basename }}"
        dest: "/web/bea_apps/applications/{{ domain_home }}/{{ item }}/CURRENT/"
      with_items: "{{ CLUSTER.split(',') }}"
Output:
The full traceback is:
File "/tmp/ansible_MCNgND/ansible_module_unarchive.py", line 873, in main
res_args['changed'] = module.set_fs_attributes_if_different(file_args, res_args['changed'], expand=False)
File "/tmp/ansible_MCNgND/ansible_modlib.zip/ansible/module_utils/basic.py", line 1473, in set_fs_attributes_if_different
file_args['path'], file_args['mode'], changed, diff, expand
File "/tmp/ansible_MCNgND/ansible_modlib.zip/ansible/module_utils/basic.py", line 1204, in set_mode_if_different
path_stat = os.lstat(b_path)
failed: [myserver2] (item=VL_BATCH) => {
"changed": false,
"dest": "/web/bea_apps/applications/mydom1/VL_BATCH/CURRENT/",
"gid": 64332,
"group": "wladmin",
"handler": "ZipArchive",
"invocation": {
"module_args": {
"attributes": null,
"backup": null,
"content": null,
"creates": null,
"delimiter": null,
"dest": "/web/bea_apps/applications/mydom1/VL_BATCH/CURRENT/",
"directory_mode": null,
"exclude": [],
"extra_opts": [],
"follow": false,
"force": null,
"group": null,
"keep_newer": false,
"list_files": false,
"mode": null,
"original_basename": "UAT-04022021_01.zip",
"owner": null,
"regexp": null,
"remote_src": false,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": "/home/wladmin/.ansible/tmp/ansible-tmp-1612477399.28-110140157282432/source",
"unsafe_writes": null,
"validate_certs": true
}
},
"item": "VL_BATCH",
"mode": "0755",
"msg": "Unexpected error when accessing exploded file: [Errno 2] No such file or directory: '/web/bea_apps/applications/mydom1/VL_BATCH/CURRENT/UAT-04022021_01/'",
"owner": "wladmin",
"size": 2,
"src": "/home/wladmin/.ansible/tmp/ansible-tmp-1612477399.28-110140157282432/source",
"state": "directory",
"uid": 600000008
}
/web/bea_apps/applications/mydom1/VL_BATCH/CURRENT/ is empty and the user has permission to write to it.
I checked on the destination host and it does contain the folder /web/bea_apps/applications/mydom1/VL_BATCH/CURRENT/ but not /web/bea_apps/applications/mydom1/VL_BATCH/CURRENT/UAT-04022021_01/
It was supposed to create the folder UAT-04022021_01 under the CURRENT folder when it unzips.
After researching a bit more, I tried the suggested fix of setting the locale environment variables, but it does not help either:
- name: "Untar artifacts in CURRENT Folder"
tags: validate
unarchive:
src: "{{ playbook_dir }}/tmpfiles/{{ Latest_Build_Number }}/{{ depfile | basename }}"
dest: "/web/bea_apps/applications/{{ domain_home }}/{{ item }}/CURRENT/"
with_items: "{{ CLUSTER.split(',') }}"
environment:
LANG: C
LC_ALL: C
LC_MESSAGES: C
My source (the Ansible control node) is Linux, while the destination hosts are Solaris.
Can you please suggest what could be wrong?
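For what it's worth, the workaround I am considering (assuming Info-ZIP's unzip is available on the Solaris hosts) is to copy the zip over and extract it with the command module instead, but I would still like to understand why unarchive fails. A sketch, reusing the variables from the playbook above:
# Workaround sketch, not a confirmed fix: copy the zip to the remote Solaris
# host and extract it with the system unzip, bypassing the unarchive module's
# ZipArchive handler. Assumes /usr/bin/unzip exists on the target hosts.
- name: Copy the zip to the CURRENT folder
  copy:
    src: "{{ playbook_dir }}/tmpfiles/{{ Latest_Build_Number }}/{{ depfile | basename }}"
    dest: "/web/bea_apps/applications/{{ domain_home }}/{{ item }}/CURRENT/"
  with_items: "{{ CLUSTER.split(',') }}"

- name: Unzip with the system unzip
  command: "/usr/bin/unzip -o {{ depfile | basename }} -d ."
  args:
    chdir: "/web/bea_apps/applications/{{ domain_home }}/{{ item }}/CURRENT/"
  with_items: "{{ CLUSTER.split(',') }}"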

Related

Correct way to join a new Windows guest to a domain when deployed from a template

I am attempting to automate the deployment of a Windows guest in a VMware environment, and I have been able to do so as long as I am happy with manually adding it to the domain, which I am not. I have tried using the vmware_guest customization and it does not appear to even be applied.
Ansible 2.9
Red Hat 8
YAML:
---
- hosts: all
  gather_facts: false
  vars_files:
    - group_vars/all
    - build_info/vars
  tasks:
    - debug:
        var: "{{ dusername }}"

    - name: Clone a virtual machine from Windows template and customize
      vmware_guest:
        annotation: "This machine was built from a template through a process triggered by an ansible playbook"
        hostname: "{{ hostname }}"
        username: "{{ username }}"
        password: "{{ password }}"
        validate_certs: "{{ validate_certs }}"
        datacenter: "{{ datacenter }}"
        cluster: "{{ cluster }}"
        folder: "{{ folder }}"
        name: "{{ bname }}"
        template: "{{ template }}"
        datastore: "{{ datastore }}"
        state: poweredon
        networks:
          - name: "{{ bnetworks.netname }}"
            ip: "{{ bnetworks.ip }}"
            netmask: "{{ bnetworks.netmask }}"
            gateway: "{{ bnetworks.gateway }}"
            domain: "{{ bnetworks.domain }}"
            start_connected: yes
            dns_servers: "{{ bnetworks.dns_servers }}"
            dns_suffix: "{{ bnetworks.dns_suffix }}"
        customization:
          hostname: "{{ bname }}"
          domainadmin: "{{ dusername }}"
          domainadminpassword: "{{ dpassword }}"
          joindomain: "{{ bnetworks.domain }}"
          fullname: "{{ ladminname }}"
        wait_for_ip_address: yes
        wait_for_customization: yes
      with_dict: "{{ bnetworks }}"
      delegate_to: localhost
The result
{
"changed": true,
"instance": {
"module_hw": true,
"hw_name": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"hw_power_status": "poweredOn",
"hw_guest_full_name": "Microsoft Windows Server 2016 or later (64-bit)",
"hw_guest_id": "windows9Server64Guest",
"hw_product_uuid": "4227f81c-1b25-13fd-45f2-d9399408a5f6",
"hw_processor_count": 2,
"hw_cores_per_socket": 1,
"hw_memtotal_mb": 8192,
"hw_interfaces": [
"eth0"
],
"hw_datastores": [
"na2-tntr02-dat"
],
"hw_files": [
"[na2-tntr02-dat] ********/********.vmx",
"[na2-tntr02-dat] ********/********.nvram",
"[na2-tntr02-dat] ********/********.vmsd",
"[na2-tntr02-dat] ********/********.vmxf",
"[na2-tntr02-dat] ********/********.vmdk"
],
"hw_esxi_host": "na2-devesx01.********",
"hw_guest_ha_state": true,
"hw_is_template": false,
"hw_folder": "/NA2/vm/GLOBAL OPERATIONS/Storage",
"hw_version": "vmx-13",
"instance_uuid": "5027d0ef-7a89-73a6-1da9-6e15df083592",
"guest_tools_status": "guestToolsRunning",
"guest_tools_version": "11269",
"guest_question": null,
"guest_consolidation_needed": false,
"ipv4": "10.6.6.10",
"ipv6": null,
"annotation": "This machine was built from a template through a process triggered by an ansible playbook",
"customvalues": {},
"snapshots": [],
"current_snapshot": null,
"vnc": {},
"moid": "vm-21984",
"vimref": "vim.VirtualMachine:vm-21984",
"hw_cluster": "NA2-NonPROD",
"hw_eth0": {
"addresstype": "assigned",
"label": "Network adapter 1",
"macaddress": "00:50:56:a7:fc:87",
"ipaddresses": [
"fe80::4835:d689:f1a7:6dd4",
"10.6.6.10"
],
"macaddress_dash": "00-50-56-a7-fc-87",
"summary": "DVSwitch: 50 27 7a 36 e1 9e ae 1a-29 5a 4a 79 8d 2b 6f a3",
"portgroup_portkey": "191",
"portgroup_key": "dvportgroup-71"
}
},
"invocation": {
"module_args": {
"annotation": "This machine was built from a template through a process triggered by an ansible playbook",
"hostname": "na2-pdvcva01.********",
"username": "RP4VM#vsphere.local",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"validate_certs": false,
"datacenter": "NA2",
"cluster": "NA2-NonPROD",
"folder": "/GLOBAL OPERATIONS/Storage",
"name": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"template": "NA2_ZWindows2016STD",
"datastore": "na2-tntr02-dat",
"state": "poweredon",
"networks": [
{
"name": "FIRM_NA|PROD_ap|STORAGE_epg",
"ip": "10.6.6.10",
"netmask": "255.255.255.0",
"gateway": "10.6.6.1",
"domain": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"start_connected": true,
"dns_servers": [
"10.6.2.16",
"10.2.2.17"
],
"dns_suffix": [
"VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"asia.global-legal.com",
"emea.global-legal.com"
],
"type": "static"
}
],
"customization": {
"hostname": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"domainadmin": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"domainadminpassword": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"joindomain": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"fullname": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
},
"wait_for_ip_address": true,
"wait_for_customization": true,
"port": 443,
"is_template": false,
"customvalues": [],
"name_match": "first",
"use_instance_uuid": false,
"disk": [],
"cdrom": [],
"hardware": {},
"force": false,
"state_change_timeout": 0,
"linked_clone": false,
"vapp_properties": [],
"proxy_host": null,
"proxy_port": null,
"uuid": null,
"guest_id": null,
"esxi_hostname": null,
"snapshot_src": null,
"resource_pool": null,
"customization_spec": null,
"convert": null
}
},
"_ansible_no_log": false,
"item": {
"key": "netname",
"value": "FIRM_NA|PROD_ap|STORAGE_epg"
},
"ansible_loop_var": "item",
"_ansible_item_label": {
"key": "netname",
"value": "FIRM_NA|PROD_ap|STORAGE_epg"
},
"_ansible_delegated_vars": {}
}
However, the machine does not join the domain. I fully expect a failure, because the domain credentials supplied do not have the authority to add the machine to the domain, but the join does not even appear to be attempted.
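For reference, this is the shape of task I would expect to work, with the customization supplied once rather than per bnetworks key via with_dict. It reuses the same variables as above and is only a sketch to illustrate what I am aiming for, not something I have confirmed:
# Illustrative sketch only, not a confirmed fix: same variables as the playbook
# above, networks and customization passed directly, no with_dict loop.
- name: Clone from the Windows template and request domain join
  vmware_guest:
    hostname: "{{ hostname }}"
    username: "{{ username }}"
    password: "{{ password }}"
    validate_certs: "{{ validate_certs }}"
    datacenter: "{{ datacenter }}"
    cluster: "{{ cluster }}"
    folder: "{{ folder }}"
    name: "{{ bname }}"
    template: "{{ template }}"
    state: poweredon
    networks:
      - name: "{{ bnetworks.netname }}"
        ip: "{{ bnetworks.ip }}"
        netmask: "{{ bnetworks.netmask }}"
        gateway: "{{ bnetworks.gateway }}"
        domain: "{{ bnetworks.domain }}"
    customization:
      hostname: "{{ bname }}"
      domainadmin: "{{ dusername }}"
      domainadminpassword: "{{ dpassword }}"
      joindomain: "{{ bnetworks.domain }}"
    wait_for_customization: yes
  delegate_to: localhost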

Ansible Symbolic Link Task Role Failure

I am new to Ansible and am executing the following task:
- name: Create symbolic links
  file:
    src: "{{ item.src }}"
    dest: "{{ item.dest }}"
    owner: "{{ jboss_usr }}"
    group: "{{ jboss_grp }}"
    state: link
  with_items:
    - { src: "/apps/etc/jboss", dest: "/etc/jboss" }
    - { src: "/apps/var/log/jboss", dest: "/var/log/jboss" }
And I got the following error:
2018-12-21 21:27:23,469 p=15185 u=ex_sam | failed: [hostname.x] (item={u'dest': u'/etc/jboss', u'src': u'/apps/etc/jboss'}) => {
"changed": false,
"invocation": {
"module_args": {
"attributes": null,
"backup": null,
"content": null,
"delimiter": null,
"dest": "/etc/jboss",
"diff_peek": null,
"directory_mode": null,
"follow": true,
"force": true,
"group": "jboss",
"mode": null,
"original_basename": null,
"owner": "jboss",
"path": "/etc/jboss",
"recurse": false,
"regexp": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": "/apps/etc/jboss",
"state": "link",
"unsafe_writes": null,
"validate": null
}
},
"item": {
"dest": "/etc/jboss-as",
"src": "/apps/etc/jboss"
},
"msg": "Error while linking: [Errno 13] Permission denied",
"path": "/etc/jboss-as",
"state": "absent"
}
I am trying to find out why the symbolic link creation failed.
I read the following:
https://docs.ansible.com/ansible/latest/reference_appendices/common_return_values.html#id6
It says the "changed" attribute is a boolean indicating whether the task had to make changes.
But there are lots of null parameters in the invocation:module_args elements of the JSON.
Does that mean the values are really "null", or are they being set to a default value?
I have looked into the Ansible documentation and I am not sure whether the null values in invocation:module_args represent the outcome of trying to create the symbolic link, i.e. whether the nulls are the input to or the output of executing the task.
I think some of the nulls are defaults, but I would appreciate some helpful comments on the possible relation between the JSON returned in my Ansible error log and the actual "Error while linking: [Errno 13] Permission denied".
Thanks all for your anticipated help.
I think the problem is the permissions on /etc. So you probably need to add the option become: true to your task.
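For example, here is the task from the question with privilege escalation added (this assumes the connecting user, ex_sam in the log above, is allowed to escalate to root on the target):
# Sketch: the same task with become: true, assuming the connecting user
# is permitted to sudo to root on the target host.
- name: Create symbolic links
  become: true
  file:
    src: "{{ item.src }}"
    dest: "{{ item.dest }}"
    owner: "{{ jboss_usr }}"
    group: "{{ jboss_grp }}"
    state: link
  with_items:
    - { src: "/apps/etc/jboss", dest: "/etc/jboss" }
    - { src: "/apps/var/log/jboss", dest: "/var/log/jboss" }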

Browsing JSON variable in Ansible

For some reason, I'm not allowed to use json_query with Ansible. I'm trying to reach the stdout element in the results list of a variable registered from a shell call.
The JSON variable is saved this way:
"request": {
"changed": true,
"msg": "All items completed",
"results": [
{
"_ansible_ignore_errors": null,
"_ansible_item_result": true,
"_ansible_no_log": false,
"_ansible_parsed": true,
"changed": true,
"cmd": "echo \"****:********\" | grep -o -P '^*****:[^\\n]*$' | awk '{split($0,a,\":\"); print a[2]}'",
"delta": "0:00:00.003660",
"end": "2018-10-31 17:26:17.697864",
"failed": false,
"invocation": {
"module_args": {
"_raw_params": "echo \"**************\" | grep -o -P '^************:[^\\n]*$' | awk '{split($0,a,\":\"); print a[2]}'",
"_uses_shell": true,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"warn": true
}
},
"item": "**********:************",
"rc": 0,
"start": "2018-10-31 17:26:17.694204",
"stderr": "",
"stderr_lines": [],
"stdout": "**********",
"stdout_lines": [
"*********"
]
}
]
}
}
I'm trying to browse my stdout element this way:
- name: Tarball copy
  copy: src= "{{ '%s/%s' | format( TARBALL_DIR , request.results[0].stdout ) }}" dest= "/tmp/tarball/"
I also tried:
- name: Tarball copy
  copy: src= "{{ '%s/%s' | format( TARBALL_DIR , request.results[.stdout] ) }}" dest= "/tmp/tarball/"
- name: Tarball copy
  copy: src= "{{ '%s/%s' | format( TARBALL_DIR , item.stdout ) }}" dest= "/tmp/tarball/"
  with_items: "{{ request.results }}"
I've no idea why I'm always getting the same error:
- template error while templating string: unexpected '.'. String: {{ request.results[.stdout] }} (when trying with [.stdout])
- The task includes an option with an undefined variable (when putting the [0] index)
I finally solved my problem using:
- name: Tarball copy
  copy:
    src: "{{ '%s/%s' | format( TARBALL_DIR , request.results[0].stdout ) }}"
    dest: "/tmp/tarball/"
It seems that the inline src= and dest= forms can't accept a space after the equals sign.
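Presumably the inline key=value form would also have worked with the space removed after the equals sign, e.g.:
# Sketch of the equivalent inline (key=value) form, with no space after '=':
- name: Tarball copy
  copy: src="{{ '%s/%s' | format( TARBALL_DIR , request.results[0].stdout ) }}" dest=/tmp/tarball/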

attempting to dynamically reference jinja templates in ansible playbook

I am attempting to dynamically reference j2 templates pulled from Artifactory.
The following code works:
- name: Configure the sqoop templates
  template: src="{{ item }}" dest="{{ local_root }}/{{ FEED_NAME }}/{{ FEED_SCHEMA }}/sqoopgen_jobs/{{ item|replace(\"/tmp/tardis/\", \"\") }}" mode=0744
  with_fileglob:
    - "/tmp/{{ FEED_NAME }}/*.j2"
The following does not:
- name: Configure the sqoop templates
  template: src="{{ item }}" dest="{{ local_root }}/{{ FEED_NAME }}/{{ FEED_SCHEMA }}/sqoopgen_jobs/{{ item|replace(\"/tmp/{{ FEED_NAME }}/\", \"\") }}" mode=0744
  with_fileglob:
    - "/tmp/{{ FEED_NAME }}/*.j2"
Failing with error:
failed: [host_ip] => (item=/tmp/tardis/forecast.j2) => {"changed": true, "failed": true, "invocation": {"module_args": {"backup": false, "content": null, "delimiter": null, "dest": "/path/tardis/tardis/sqoopgen_jobs//tmp/tardis/forecast.j2", "directory_mode": null, "follow": true, "force": true, "group": null, "mode": "0744", "original_basename": "forecast.j2", "owner": null, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": "/home/jenkins/.ansible/tmp/ansible-tmp-1457719702.84-279730849155325/source", "validate": null}}, "item": "/tmp/tardis/forecast.j2", "msg": "Destination directory /path/tardis/tardis/sqoopgen_jobs//tmp/tardis does not exist"}
I'm reading this as a case of Ansible not expanding a variable within a variable, such as here.
What is the best approach to take in this situation? I want to keep the playbook as generic as possible.
ansible 2.0.1.0
Thanks.
You can not nest Jinja expressions like so:
{{ item|replace("/tmp/{{ FEED_NAME }}/", "") }}
What you want is to concatenate strings.
{{ item|replace("/tmp/" ~ FEED_NAME ~ "/", "") }}
See "Other Operators" in the Jinja2 docs:
~
Converts all operands into strings and concatenates them.
{{ "Hello " ~ name ~ "!" }} would return (assuming name is set to 'John') Hello John!.
- name: Configure the sqoop templates
  template:
    src: "{{ item }}"
    dest: "{{ local_root }}/{{ FEED_NAME }}/{{ FEED_SCHEMA }}/sqoopgen_jobs/{{ item|replace('/tmp/' ~ FEED_NAME ~ '/', '') }}"
    mode: 0744
  with_fileglob:
    - "/tmp/{{ FEED_NAME }}/*.j2"

Error when running this ansible play: "ERROR! 'unicode object' has no attribute 'comment'"

Here is my playbook:
- name: Add multiple users
  user:
    name: "{{ item[0].name }}"
    comment: "{{ item[0].comment }}"
    uid: "{{ item[0].uid }}"
    groups: "{{ item[0].groups }}"
    shell: /bin/bash
  with_nested:
    - "{{ name }}"
    - "{{ comment }}"
    - "{{ uid }}"
    - "{{ groups }}"
Here is my vars file:
---
name:
  - test1
  - test2
comment:
  - "comment1"
  - "comment2"
uid:
  - 150
  - 151
groups: "sudo, admin"
I'm not sure what is causing this, any ideas? I believe I may need to use with_subelements instead of with_nested? Am I on the right track there?
UPDATE:
I changed my code but am now experiencing the following. Updated code and error message:
- name: Add new group if it doesn't exist already
  group:
    name: "{{ group }}"
  when: group is defined

- name: Add multiple users
  user:
    name: "{{ item.0 }}"
    comment: "{{ item.1 }}"
    uid: "{{ item.2 }}"
    group: "{{ group }}"
    groups: "{{ groups }}"
    append: yes
  with_together:
    - "{{ name }}"
    - "{{ comment }}"
    - "{{ uid }}"
    - "{{ group }}"
And the variable file:
name:
  - test1
  - test2
comment:
  - "comment1"
  - "comment2"
uid:
  - 150
  - 151
group: sudo
groups:
  - admin
  - test
However, now I am receiving this error.
failed: [127.0.0.1] => (item=[u'test1', u'comment1', 150, u'sudo']) => {"failed": true, "invocation": {"module_args": {"append": true, "comment": "comment1", "createhome": true, "expires": null, "force": false, "generate_ssh_key": null, "group": "sudo", "groups": "{'ungrouped': ['127.0.0.1'], 'all': ['127.0.0.1']}", "home": null, "login_class": null, "move_home": false, "name": "test1", "non_unique": false, "password": null, "remove": false, "shell": null, "skeleton": null, "ssh_key_bits": "2048", "ssh_key_comment": "ansible-generated on ubuntu-512mb-sfo1-01", "ssh_key_file": null, "ssh_key_passphrase": null, "ssh_key_type": "rsa", "state": "present", "system": false, "uid": "150", "update_password": "always"}, "module_name": "user"}, "item": ["test1", "comment1", 150, "sudo"], "msg": "Group 'all': ['127.0.0.1']} does not exist"}
failed: [127.0.0.1] => (item=[u'test2', u'comment2', 151, None]) => {"failed": true, "invocation": {"module_args": {"append": true, "comment": "comment2", "createhome": true, "expires": null, "force": false, "generate_ssh_key": null, "group": "sudo", "groups": "{'ungrouped': ['127.0.0.1'], 'all': ['127.0.0.1']}", "home": null, "login_class": null, "move_home": false, "name": "test2", "non_unique": false, "password": null, "remove": false, "shell": null, "skeleton": null, "ssh_key_bits": "2048", "ssh_key_comment": "ansible-generated on ubuntu-512mb-sfo1-01", "ssh_key_file": null, "ssh_key_passphrase": null, "ssh_key_type": "rsa", "state": "present", "system": false, "uid": "151", "update_password": "always"}, "module_name": "user"}, "item": ["test2", "comment2", 151, null], "msg": "Group 'all': ['127.0.0.1']} does not exist"}
The problem is conflicting variable names. groups is a reserved variable and holds the groups from the inventory, and all is an automatically generated group which holds all the hosts of your inventory.
From the docs:
Even if you didn't define them yourself, Ansible provides a few variables for you automatically. The most important of these are hostvars, group_names, and groups. Users should not use these names themselves as they are reserved. environment is also reserved.
and
groups is a list of all the groups (and hosts) in the inventory. This can be used to enumerate all hosts within a group.
Simply rename your variable and it should work. In general, it's a good idea to prefix all variables of a role with the role name. This gets more important if you use 3rd-party roles, e.g. from Ansible Galaxy, just to avoid conflicts. So instead of groups you could use myrole_groups and be quite sure there will never be conflicts.
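For instance, a sketch with the conflicting names renamed (the myrole_ prefix here is just an illustration), so they no longer collide with Ansible's built-in groups variable:
# Vars file with the reserved names renamed:
myrole_names:
  - test1
  - test2
myrole_comments:
  - "comment1"
  - "comment2"
myrole_uids:
  - 150
  - 151
myrole_group: sudo
myrole_groups:
  - admin
  - test

# And the task referencing the renamed variables:
- name: Add multiple users
  user:
    name: "{{ item.0 }}"
    comment: "{{ item.1 }}"
    uid: "{{ item.2 }}"
    group: "{{ myrole_group }}"
    groups: "{{ myrole_groups }}"
    append: yes
  with_together:
    - "{{ myrole_names }}"
    - "{{ myrole_comments }}"
    - "{{ myrole_uids }}"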
