I'm currently trying to implement an Ansible playbook that simply unarchives a .tar.gz or .tgz file. What I have currently implemented does unarchive the file, but it leaves behind a .gz file instead of the directory that was archived. For example, I have testDirectory, which contains testFile. I have compressed testDirectory into both testDirectory.tar.gz and testDirectory.tgz. When I run my playbook on either file, it untars the file, leaving behind testDirectory.gz.
Here's my code:
- hosts: localhost
  tasks:
    - unarchive:
        src: '{directory}/testArchive.tar.gz'
        dest: '{directory}'
The goal is to simply unarchive the file, leaving behind {directory}/testArchive. What am I missing?
unarchive does not handle .gz files, per the documentation: a bare .gz is a single compressed file, not an archive.
There is discussion of this issue from 2017 that references a future uncompress module, which does not appear to exist, as well as a statement by the maintainer that he was abandoning maintenance of the unarchive module.
A possible workaround is:
command: gunzip filename.gz
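A minimal task sketch of that workaround (the paths are just this question's example; gunzip replaces the .gz with the decompressed file in place):

- name: Decompress a bare .gz file
  command: gunzip /tmp/testDirectory.gz
  args:
    creates: /tmp/testDirectory   # skip the task if the file was already decompressed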
So I have been trying to fix a mistake I made on all the servers by using a playbook. Basically, I launched a playbook with logrotate to fix the growing-logs problem, and among the logs is one named btmp, which I wasn't supposed to rotate but did anyway by accident; logrotate renamed it to add a date, thereby breaking the log. Now I want a playbook that will find the log named btmp in the /var/log directory and rename it back. The problem is that the file name currently differs on each server: for example, one server has btmp-20210316 and another has btmp-20210309. On the bash command line one would use the wildcard "btmp*" to get around this, but that does not appear to work in a playbook. So far I came up with this:
tasks:
  - name: stat btmp*
    stat: path=/var/log
    register: btmp_stat

  - name: Move btmp
    command: mv /var/log/btmp* /var/log/btmp
    when: btmp_stat.stat.exists
However, this results in an error saying the file was not found. So my question is: how does one get a wildcard working in a playbook, or is there an equivalent way to find all files that have "btmp" in their names and rename them? BTW, all servers are CentOS 7.
So I will add my own solution as well, even though the accepted answer's solution is better.
Make a bash script with a single line, anywhere on your Ansible VM.
The line is: mv /var/log/filename* /var/log/filename
And now create a playbook to run it on the target VM:
---
- hosts: '{{ server }}'
  remote_user: username
  become: yes
  become_method: sudo
  vars_prompt:
    - name: "server"
      prompt: "Enter server name or group"
      private: no
  tasks:
    - name: Move the script to target host VM
      copy: src=/anywhereyouwant/bashscript.sh dest=/tmp mode=0777

    - name: Execute the script
      command: sh /tmp/bashscript.sh

    - name: delete the script
      command: rm /tmp/bashscript.sh
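Worth noting: Ansible's script module can replace the copy/run/delete trio, since it transfers a local script to the target, executes it there, and leaves nothing to clean up. A minimal sketch, assuming the same script path as above:

    - name: Run the rename script on the target
      script: /anywhereyouwant/bashscript.sh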
There's more than one way to do this in Ansible, and using the shell module is certainly a viable way (note that you would need shell in place of command, as the latter does not support wildcards). I would solve the problem as follows:
First create a task to find all matching files (i.e. /var/log/btmp*) and store them in a variable for later processing - this would look like this:
- name: Find all files named /var/log/btmp*
  ansible.builtin.find:
    paths: /var/log
    patterns: 'btmp*'
  register: find_btmp
This task uses the find module to locate all files called btmp* in /var/log - the results are stored in a variable called find_btmp.
Next, create a task to copy the btmp* file to btmp. Now, you may very well have more than one file matching the above pattern, and logically you don't want to rename them all to btmp, as that would simply overwrite the file each time. Instead, let's assume you want only the newest matching file - we can use a clever Jinja2 filter to get this entry from the results of the first task:
- name: Copy the btmp* to the required filename
  ansible.builtin.copy:
    src: "{{ find_btmp.files | sort(attribute='mtime',reverse=true) | map(attribute='path') | first }}"
    dest: /var/log/btmp
    remote_src: yes
  when: find_btmp.failed == false
This task uses Ansible's copy module to copy our chosen source file to /var/log/btmp. The remote_src: yes parameter tells the copy module that the source file exists on the remote machine rather than the Ansible host itself.
We use a when clause to ensure that we don't run this copy operation if we failed to find any files.
Now let's break down that Jinja2 filter:
find_btmp.files - this is all of the files listed in our find_btmp variable
sort(attribute='mtime',reverse=true) - here we are sorting our list of files using the mtime (modification time) attribute - we're reverse sorting so that the newest entry is at the top of the list.
map(attribute='path') - we're using map to "extract" the path attribute of the files dictionary, as this is the only data we actually want to pass to the copy module - the path of the file itself
first - this selects only the first element in the list (i.e. the newest file as they were reverse sorted)
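If you want to see what that chained filter resolves to before relying on it, a quick debug sketch (same filter; the when clause uses the find module's matched count so the task skips cleanly when nothing was found):

- name: Show which file the filter selects
  ansible.builtin.debug:
    msg: "{{ find_btmp.files | sort(attribute='mtime',reverse=true) | map(attribute='path') | first }}"
  when: find_btmp.matched > 0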
Finally, you asked for a move operation - there's no native "move" module in Ansible, so you will want to remove the source file after the copy. This can be done as follows (the Jinja2 filter is the same as before):
- name: Delete the original file
  ansible.builtin.file:
    path: "{{ find_btmp.files | sort(attribute='mtime',reverse=true) | map(attribute='path') | first }}"
    state: absent
  when: find_btmp.failed == false
Again we use a when clause to ensure we don't delete anything if we didn't find it in the first place.
I have tested this on Ansible 3.1.0/ansible-base 2.10.7 - if you're running Ansible 2.9 or earlier, remove the ansible.builtin. prefix from the module names (i.e. ansible.builtin.copy becomes copy).
Hope this helps you out!
For HCL Connections, we still need WebSphere, and I want to automate this complex and slow process with Ansible. WebSphere has to be downloaded manually as different ZIP files for each component, for example:
├── CIK1VML.zip
├── CIK1WML.zip
└── CIK1XML.zip
The character after CIK1 identifies the part. On the command line, I can unzip them by replacing that part identifier with a question mark:
unzip '/cnx-smb/was/supplements/CIK1?ML.zip' -d /tmp/was-suppl-manual
I'd like to use the unarchive module because it supports features like remote_src, which would be useful for me, so I tried a simple POC playbook:
- hosts: 127.0.0.1
  connection: local
  tasks:
    - name: Unpack test
      become: yes
      unarchive:
        src: "/cnx-smb/was/supplements/CIK1?ML.zip"
        remote_src: no
        dest: "/tmp/was-extracted"
But this doesn't work:
TASK [Unpack test] **********************************************************************************************************************************************************************************************************************************************************
Wednesday 10 February 2021 16:17:25 +0000 (0:00:00.637) 0:00:00.651 ****
fatal: [127.0.0.1]: FAILED! => changed=false
msg: |-
Could not find or access '/cnx-smb/was/supplements/'CIK1?ML.zip'' on the Ansible Controller.
If you are using a module and expect the file to exist on the remote, see the remote_src option
I also tried different src paths, like /cnx-smb/was/supplements/'CIK1?ML.zip', since the unzip CLI call works only when at least the filename is quoted (or, alternatively, the entire path is). Ansible accepts the value only with just the file name quoted; '/cnx-smb/was/supplements/CIK1?ML.zip' quoted as a whole seems to be interpreted as a relative path (which obviously fails).
It seems that those multi-part zip archives aren't really "multi-part" archives as I know them from compression formats like 7zip, where we have File.partX.7z files that can only be used together; 7zip validates them and throws an error if, e.g., a part is missing.
The situation is different with these zip files. I took a look inside them and noticed that I can extract every single zip file without the others. Every zip file contains a part of the installation archive. It seems that zip itself doesn't divide a large folder into parts; it's IBM who put some folders like disk2 into a separate archive file, for whatever reason.
This means I can do the same with Ansible: just extract every single file on its own, but into the same directory:
- hosts: 127.0.0.1
  connection: local
  vars:
    base_dir: /cnx-smb/was/supplements/
  tasks:
    - name: Unpack
      become: yes
      unarchive:
        src: "{{ base_dir }}/{{ item }}"
        remote_src: no
        dest: "/tmp/was-extracted"
      with_items:
        - CIK1VML.zip
        - CIK1WML.zip
        - CIK1XML.zip
Both extracted folders (Ansible, and manual unzip with the ? placeholder) were the same size and contain the same data:
vagrant@ansible:/cnx-repo$ du -hs /tmp/was-extracted/
3.0G /tmp/was-extracted/
vagrant@ansible:/cnx-repo$ du -hs /tmp/was-suppl-manual
3.0G /tmp/was-suppl-manual
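If you'd rather not hard-code the part names, the same loop can glob them on the controller instead (remote_src: no means the src files live on the Ansible controller anyway). A sketch using with_fileglob, untested against these exact archives:

    - name: Unpack all matching parts
      become: yes
      unarchive:
        src: "{{ item }}"
        remote_src: no
        dest: "/tmp/was-extracted"
      with_fileglob:
        - "/cnx-smb/was/supplements/CIK1?ML.zip"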
I am trying to emulate a scenario of copying a local file from one directory to another directory on the same machine, but the Ansible copy module always looks for a remote server.
The code I am using:
- name: Configure Create directory
  hosts: 127.0.0.1
  connection: local
  vars:
    customer_folder: "{{ customer }}"
  tasks:
    - file:
        path: /opt/scripts/{ customer_folder }}
        state: directory
    - copy:
        src: /home/centos/absample.txt
        dest: /opt/scripts/{{ customer_folder }}
I am running this playbook like this:
ansible-playbook ab_deploy.yml --extra-vars "customer=ab"
So there are two problems I am facing:
First, it should create a directory called ab under /opt/scripts/, but it creates a folder literally named { customer_folder }} - it's not taking ab as the name of the directory.
Second, as I read in the documentation, copy only works for copying files from the local machine to a remote machine, but all I want is to copy from local to local.
How can I achieve this? It might be something silly, I am just trying things out.
Please suggest.
I solved it: I used cmd under the shell module, and then it worked.
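Worth noting: the directory problem is just the malformed Jinja2 expression - { customer_folder }} needs to be {{ customer_folder }} - and with connection: local the copy module copies local-to-local without any shell workaround. A minimal sketch of the corrected tasks, using the same paths as the question:

  tasks:
    - name: Create the customer directory
      file:
        path: /opt/scripts/{{ customer_folder }}
        state: directory
    - name: Copy the file locally
      copy:
        src: /home/centos/absample.txt
        dest: /opt/scripts/{{ customer_folder }}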
Please be informed that I'm trying to copy bulk files from my source server to the destination server using Ansible. While trying, I get an error. Please help me.
---
- name: Going to copy bulk files
  hosts: test
  vars_prompt:
    - name: copy
      prompt: Enter the Bulk File to Copy
      private: no
  tasks:
    - name: Copy bulk files
      shell: cp /tmp/guru/{{ copy }}* /ansible/sri
The shell module executes a shell command on the destination server, which explains the error message cp: cannot stat ‘/tmp/guru/a*’: No such file or directory: the source files of the cp do not exist on the destination server.
Ansible provides a lot of modules that are more appropriate to use than executing shell commands.
In your case, the copy module is the one you need: it copies files from source server to destination server. You can combine it with a with_fileglob loop:
tasks:
  - name: Copy bulk files
    copy:
      src: "{{ item }}"
      dest: /ansible/sri
    with_fileglob: "/tmp/guru/{{ copy }}*"
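For illustration, a run might look like this (the playbook file name and the a prefix are made up; with_fileglob expands the pattern on the controller, and copy pushes each matching file to the target):

$ ansible-playbook copy_bulk.yml
Enter the Bulk File to Copy: a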
I want to transfer the contents of a folder unzipped from a source, say myfolder, to a location, say dest_dir, but apparently everything I try moves/copies/generates myfolder itself inside the dest_dir location.
I tried
command: mv src dest_dir
I also tried unarchiving in the dest_dir location using,
unarchive:
src: /path/to/myfolder
dest: dest_dir
copy: no
become: yes
Apparently, for the copy module, I found that remote_src does not support recursive copying yet.
What is the correct way to go about this?
Normally, in my system, I would do mv /path/to/myfolder/* dest_dir but wildcards throw an error with Ansible.
I'm using Ansible 2.3.2.
The reason you can't do it easily in Ansible is because Ansible was not designed to do it.
Just execute the command directly with the shell module. Your requirement is not idempotent anyway:
- shell: mv /path/to/myfolder/* dest_dir
  become: yes
Pay attention to mv defaults: you might want to add -f to prevent it from prompting for confirmation.
Otherwise, play with the synchronize module, but there's no value added for a "move" operation - just complexity.