Ansible `archive` module to archive without compression

I have to archive a bunch of files and want to avoid compression to save time. This is a daily operation that archives 1 TB of data and writes it to a different drive, so "time is of the essence".
Looking at the Ansible archive module documentation, it's not clear how to build the target file without compression.
Currently, my Ansible task looks like this:
- name: Create snapshot tarball
  become: true
  archive:
    path: "{{ snapshots_path.stdout_lines }}"
    dest: "{{ backup_location }}{{ short_date.stdout }}_snapshot.tgz"
    owner: "{{ backup_user }}"
    group: "{{ backup_group }}"
Is it possible to speed up this process by telling the module to NOT compress? If yes, how?

Based on this other answer on Super User, tar does not compress files by default; gz, which is the default format of the archive module, does.
So you could try:
- name: Create snapshot tarball
  become: true
  archive:
    path: "{{ snapshots_path.stdout_lines }}"
    dest: "{{ backup_location }}{{ short_date.stdout }}_snapshot.tar"
    format: tar
    owner: "{{ backup_user }}"
    group: "{{ backup_group }}"
This is also backed up by the tar manual page:
DESCRIPTION
GNU tar is an archiving program designed to store multiple files in a
single file (an archive), and to manipulate such archives. The
archive can be either a regular file or a device (e.g. a tape drive,
hence the name of the program, which stands for tape archiver), which
can be located either on the local or on a remote machine.
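If even the uncompressed run takes long (1 TB is still a lot of I/O), the task can also be started asynchronously so the rest of the play is not blocked while the tarball is written. A minimal sketch, reusing the variables from the question; the timeout value is illustrative:

- name: Create snapshot tarball without compression
  become: true
  archive:
    path: "{{ snapshots_path.stdout_lines }}"
    dest: "{{ backup_location }}{{ short_date.stdout }}_snapshot.tar"
    format: tar
    owner: "{{ backup_user }}"
    group: "{{ backup_group }}"
  register: snapshot_job
  async: 7200   # allow up to two hours in the background (illustrative value)
  poll: 0       # do not wait here

A later task can poll snapshot_job with the async_status module if the play needs to confirm that the archive finished.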

Related

How to archive multiple folders under one folder using Ansible

I'm using the below ansible-playbook code to archive multiple folders under the IBM folder.
Below is my directory structure (absolute paths):
/app
|-- /IBM
    |-- /test
    |-- /log
    |-- /common
    |-- /api
I wish to build an archive (gz) that contains only the IBM folder, which in turn contains only the common and api folders.
Thus, I wrote the below playbook:
- name: Creating the archive
  archive:
    path:
      - /was/IBM/common
      - /was/IBM/api
    dest: /var/backup/mysetup.tar.gz
    exclude_path:
      - /was/IBM/log
      - /was/IBM/test
    format: gz
This gives me the file mysetup.tar.gz.
I want the mysetup.tar.gz file to have a folder called IBM, which should have two folders, common and api. Thus, I'm expecting the below in mysetup.tar.gz:
IBM
|--/common
|--/api
But, the mysetup.tar.gz has no IBM folder but only the common and api folders.
Can you please guide me as to how I can get the archive to have both the folders inside the IBM folder?
You need to include the whole IBM folder and then exclude the paths you do not want:
- name: Creating the archive
  archive:
    path:
      - /was/IBM
    dest: /var/backup/mysetup.tar.gz
    exclude_path:
      - /was/IBM/log
      - /was/IBM/test
    format: gz

Resolve Local Files by Playbook Directory?

I have the following Ansible role which simply does the following:
Create a temporary directory.
Download Goss, a server testing tool, into that temporary directory.
Upload a main Goss YAML file for the tests.
Upload additional directories for additional included tests.
Here are a couple places where I'm using it:
naftulikay.python-dev
naftulikay.ruby-dev
Specifically, these playbooks upload a local file named goss.yml adjacent to the playbook, and a directory goss.d, again adjacent to the playbook.
Unfortunately, it seems that Ansible logic has changed recently, causing my tests to not work as expected. My role ships with a default goss.yml, and it appears that when I set goss_file: goss.yml within my playbook, it uploads degoss/files/goss.yml instead of the Goss file adjacent to my playbook.
If I'm passing the name of a file to a role, is there a way to specify that Ansible should look up the file in the context of the playbook or the current working directory?
The actual role logic that is no longer working is this:
# deploy test files including the main and additional test files
- name: deploy test files
  copy: src={{ item }} dest={{ degoss_test_root }} mode=0644 directory_mode=0755 setype=user_tmp_t
  with_items: "{{ [goss_file] + goss_addtl_files + goss_addtl_dirs }}"
  changed_when: degoss_changed_when
I am on Ansible 2.3.2.0 and I can reproduce this across distributions (namely CentOS 7, Ubuntu 14.04, and Ubuntu 16.04).
Ansible searches for relative paths in the role's scope first, then in the playbook's scope.
For example, if you want to copy the file test.txt in role r1, the search order is this:
/path/to/playbook/roles/r1/files/test.txt
/path/to/playbook/roles/r1/test.txt
/path/to/playbook/roles/r1/tasks/files/test.txt
/path/to/playbook/roles/r1/tasks/test.txt
/path/to/playbook/files/test.txt
/path/to/playbook/test.txt
You can inspect your search_path order by calling ansible with ANSIBLE_DEBUG=1.
To answer your question, you have two options:
Use a filename that doesn't exist within the role's scope, like:
goss_file: local_goss.yml
Supply an absolute path. For example, you can use:
goss_file: '{{ playbook_dir }}/goss.yml'
Ansible doesn't apply search logic if the path is absolute.
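For example, a play that sits next to goss.yml could pass the absolute path explicitly; a sketch using the role and variable names from the question:

- hosts: all
  roles:
    - role: degoss
      goss_file: "{{ playbook_dir }}/goss.yml"

Because the value is already absolute when the role's copy task runs, Ansible uses it as-is instead of searching the role's files/ directory first.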

Ansible win_unzip module takes far too long

At our customer's site, the Ansible win_unzip module takes far too long when executed. Our code is:
- name: unzip zip package into C:\server\dlls
  win_unzip:
    src: "{{ app_path }}\\app_dll.zip"
    dest: "{{ app_path }}\\dlls"
    rm: true
This step takes more than 10 minutes. The zip file is copied with win_copy in the step directly before; the code is here:
- name: copy zip package to C:\server
  win_copy:
    src: "path2zip.zip"
    dest: "{{ app_path }}\\app_dll.zip"
The extraction successfully finishes, but it blocks our pipeline for more than 10 minutes, which isn't acceptable.
We reduced the time needed to unzip the package to nearly zero with the help of the PowerShell cmdlet Expand-Archive. Here is the code:
- name: unzip zip package into C:\server\dlls
  win_shell: "Expand-Archive {{ app_path }}\\app_dll.zip -DestinationPath {{ app_path }}\\dlls"
Our pipeline is now fast again, but it would be nice to have a fast Ansible win_unzip module!
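As a side note, the win_shell workaround can be made idempotent with a creates guard and Expand-Archive's -Force switch, so re-runs skip the extraction once a known file is present. A sketch; the marker DLL name is only a placeholder:

- name: unzip zip package into C:\server\dlls
  win_shell: "Expand-Archive -Path {{ app_path }}\\app_dll.zip -DestinationPath {{ app_path }}\\dlls -Force"
  args:
    creates: "{{ app_path }}\\dlls\\known_marker.dll"   # hypothetical file that exists only after extraction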

Have an Ansible role retrieve its files from an external location as part of its own role

So one thing we've encountered in our project is that we do not want to store our large files in our Git repo for our Ansible roles, because it slows down cloning (and our Git host limits files to 100 MB anyway).
What we've done is store our files in a separate internal location, where they can sit statically and have no size restrictions. Our roles are written so that they first pull these static files into their local files folder and then continue as normal.
i.e.
roles/foo/tasks/main.yml
- name: Create role's files directory
  file:
    path: "{{ roles_files_directory }}"
    state: directory

- name: Copy static foo to local
  get_url:
    url: "{{ foo_static_gz }}"
    dest: "{{ roles_files_directory }}/{{ foo_gz }}"

#....Do rest of the tasks...
roles/foo/vars/main.yml
roles_files_directory: "/some/path/roles/foo/files"
foo_static_gz: "https://internal.foo.tar.gz"
foo_gz: "foo.tar.gz"
The main thing I don't find really sound is the hard-coded path to the role's files directory. I would prefer to look up the path dynamically when running Ansible, but I haven't been able to find documentation on that. The issue can arise because different users may check out roles to different root paths. Does anyone know how to determine the role path dynamically, or have some other pattern that solves the overall problem?
Edit:
I discovered there's actually a {{ playbook_dir }} variable that would return "/some/path", which might be dynamic enough in this case. It still isn't safe against the situation where the role name might change, but that's a much rarer occurrence and can be handled through version control.
What about passing values from the command line?
---
- hosts: '{{ hosts }}'
  remote_user: '{{ user }}'
  tasks:
    - ...
ansible-playbook release.yml --extra-vars "hosts=vipers user=starbuck"
http://docs.ansible.com/playbooks_variables.html#passing-variables-on-the-command-line
I just want to add another possible solution: you can try adding a custom fact.
Here is a link to official documentation: http://docs.ansible.com/setup_module.html
And I found this article that might be useful: http://serverascode.com/2015/01/27/ansible-custom-facts.html
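If you go the custom-fact route, a minimal sketch (the path and fact name are illustrative, not from the question): drop a small JSON file into /etc/ansible/facts.d on the target, re-gather facts, and the value becomes available under ansible_local:

- name: Record the roles base path as a local fact
  become: true
  copy:
    dest: /etc/ansible/facts.d/roles_base.fact
    content: '{"files_directory": "/some/path/roles"}'
    mode: "0644"

- name: Re-read local facts
  setup:
    filter: ansible_local

The path can then be referenced as {{ ansible_local.roles_base.files_directory }}/foo/files in later tasks.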

How do I prevent module.run in saltstack if my file hasn't changed?

In the 2014.7 version of SaltStack, the onchanges requisite is available for states. However, that version isn't available for Windows yet, so that's right out.
And unfortunately salt doesn't use the zipfile module to extract zipfiles. So I'm trying to do this:
/path/to/nginx-1.7.4.zip:
  file.managed:
    - source: http://nginx.org/download/nginx-1.7.4.zip
    - source_hash: sha1=747987a475454d7a31d0da852fb9e4a2e80abe1d

extract_nginx:
  module.run:
    - name: extract.zipfile
    - archive: /path/to/nginx-1.7.4.zip
    - path: /path/to/extract
    - require:
      - file: /path/to/nginx-1.7.4.zip
But this tries to extract the files every time. I don't want it to do that; I only want it to extract the file if the .zip file changes, because once it's been extracted it'll be running (I've got something set up to take care of that). And once it's running, I can't overwrite nginx.exe because Windows is awesome like that.
So how can I extract the file only if it's a newer version of nginx?
I would probably use Jinja to test for the existence of a file that you know would only exist if the zip file has been extracted.
{% if not salt['file.file_exists']('/path/to/extract/known_file.txt') %}
extract_nginx:
  module.run:
    - name: extract.zipfile
    - archive: /path/to/nginx-1.7.4.zip
    - path: /path/to/extract
    - require:
      - file: /path/to/nginx-1.7.4.zip
{% endif %}
This will cause the extract_nginx state to not appear in the final rendered sls file if the zip file has been extracted.
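For reference, once a Salt release that supports onchanges is usable on Windows, the same intent can be expressed without Jinja; a sketch in which the module only runs when the file.managed state reports a change:

extract_nginx:
  module.run:
    - name: extract.zipfile
    - archive: /path/to/nginx-1.7.4.zip
    - path: /path/to/extract
    - onchanges:
      - file: /path/to/nginx-1.7.4.zip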
