Ansible: Copy contents of a directory to a location

I want to transfer the contents of a folder unzipped from a source, say myfolder, to a location, say dest_dir, but everything I try moves/copies/creates myfolder itself inside the dest_dir location.
I tried
command: mv src dest_dir
I also tried unarchiving in the dest_dir location using,
unarchive:
  src: /path/to/myfolder
  dest: dest_dir
  copy: no
become: yes
For the copy module, I found that remote_src does not support recursive copying yet.
What is the correct way to go about this?
Normally, on my system, I would do mv /path/to/myfolder/* dest_dir, but wildcards throw an error with Ansible's command module.
I'm using Ansible 2.3.2.

The reason you can't do it easily in Ansible is because Ansible was not designed to do it.
Just execute the command directly with the shell module. Your requirement is not idempotent anyway:
- shell: mv /path/to/myfolder/* dest_dir
  become: yes
Pay attention to mv defaults; you might want to add -f to prevent it from prompting for confirmation.
Otherwise, play with the synchronize module, but there's no value added for a "move" operation, just complexity.
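A quick sketch (with hypothetical paths under /tmp) of why the wildcard needs the shell module: the * is expanded by the shell, which Ansible's command module never invokes, while shell does.

```shell
# Hypothetical demo paths; the shell expands the glob before mv runs,
# so the *contents* of myfolder move, not myfolder itself
mkdir -p /tmp/glob_demo/myfolder /tmp/glob_demo/dest_dir
touch /tmp/glob_demo/myfolder/a.txt /tmp/glob_demo/myfolder/b.txt
mv /tmp/glob_demo/myfolder/* /tmp/glob_demo/dest_dir
ls /tmp/glob_demo/dest_dir   # a.txt  b.txt
```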

Related

Ansible can't copy files on remote server, but the command runs correctly if run from command line

I'm trying to move everything under /opt/* to a new location on the remote server. I've tried this using command to run rsync directly, as well as using both the copy and the synchronize Ansible modules. In all cases I get the same error message saying:
"msg": "rsync: link_stat \"/opt/*\" failed: No such file or directory
If I run the command listed in the "cmd" part of the Ansible error message directly on my remote server, it works without error. I'm not sure why Ansible is failing.
Here is the current attempt using synchronize:
- name: move /opt to new partition
  become: true
  synchronize:
    src: /opt/*
    dest: /mnt/opt/*
  delegate_to: "{{ inventory_hostname }}"
You should skip the wildcards; that is a common mistake:
UPDATE
Thanks to the user Zeitounator, I managed to do it with synchronize.
The advantage of using synchronize over the copy module is performance; it's much faster if you have a lot of files to copy.
- name: move /opt to new partition
  become: true
  synchronize:
    src: /opt/
    dest: /mnt/opt
  delegate_to: "{{ inventory_hostname }}"
So basically the initial answer was right, but you needed to delete the wildcards "*" and the trailing slash on the dest path.
Also, since this is a move rather than a copy, you should follow up by deleting the files left behind in /opt/.
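The trailing-slash rule the corrected task relies on can be sketched directly with rsync (hypothetical /tmp paths; assumes rsync is installed, as the synchronize module does):

```shell
# Hypothetical demo of rsync's trailing-slash semantics
mkdir -p /tmp/rsdemo/opt /tmp/rsdemo/mnt_opt
touch /tmp/rsdemo/opt/file.txt
# Trailing slash on src: copy the *contents* of opt into the destination
rsync -a /tmp/rsdemo/opt/ /tmp/rsdemo/mnt_opt
# Without the slash, rsync would create /tmp/rsdemo/mnt_opt/opt/file.txt instead
ls /tmp/rsdemo/mnt_opt   # file.txt
```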

Ansible Unarchive leaving behind *.gz file instead of directory

I'm currently trying to implement an Ansible playbook to simply unarchive a .tar.gz or .tgz file. What I have currently implemented does unarchive the file, but leaves behind a .gz file instead of the directory that was archived. For example, I have testDirectory that contains testFile. I have compressed testDirectory into both testDirectory.tar.gz and testDirectory.tgz. When I run my playbook on either file, it untars the file leaving behind testDirectory.gz.
Here's my code:
- hosts: localhost
  tasks:
    - unarchive:
        src: '{directory}/testArchive.tar.gz'
        dest: '{directory}'
The goal is to simply unarchive the file, leaving behind {directory}/testArchive. What am I missing?
unarchive does not handle .gz files, per the documentation.
There is discussion of this issue from 2017 that references a future uncompress module, which does not appear to exist, and a statement by the maintainer that he was abandoning maintenance of the unarchive module.
A possible workaround is: command: gunzip filename.gz
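What the gunzip workaround does, sketched with a hypothetical file under /tmp (a plain .gz holds a single compressed file, which is why unarchive has nothing to "unpack"):

```shell
# Hypothetical demo: gunzip restores the original file from a plain .gz
echo hello > /tmp/gzdemo.txt
gzip -f /tmp/gzdemo.txt          # produces /tmp/gzdemo.txt.gz
gunzip -f /tmp/gzdemo.txt.gz     # restores /tmp/gzdemo.txt
cat /tmp/gzdemo.txt              # hello
```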

Copying entire contents of folder including subfolders to destination

I am using ansible to move .js files from my local machine to an ec2 development environment and am having an issue copying the entire folder structure.
I am using the following task to move the files and seem to be running into an issue where only the files directly in the dist folder are getting copied. I need to copy the entire folder including the child files and folders to the destination folder.
- name: Copy each file over that matches the given pattern
  copy:
    src: "{{ item }}"
    dest: "/home/admin/microservice/dist"
    owner: "admin"
    group: "admin"
    force: "yes"
    recurse: "true"
    mode: 0755
  with_fileglob:
    - "/Users/myfolder/WebStormProjects/project/microservice/dist/*.js"
I need to copy the entire folder contents from the source to the destination, including subfolders and files. What can I do to fix this task to make this happen?
With the copy module, the solution to your problem would be much more complicated than you think, because:
you can't match a directory and *.js files in a single globbing operation,
even if you could, you can't use the same copy operation to copy the files as well as create the directories (notice: create a directory, not copy it, as the latter would imply copying it with all of its files).
You'd need to handle the directories and files separately (see an implementation in the first revision of this answer).
With rsync, the solution is much more concise and requires only setting appropriate filters --include='*/' --include='*.js' --exclude='*'.
The synchronize task implementing this in Ansible:
- synchronize:
    src: /source/Users/myfolder/WebStormProjects/project/microservice/dist/
    dest: /home/admin/microservice/dist/
    rsync_opts:
      - --include=*/
      - --include=*.js
      - --exclude=*
Note 1: it is important not to add quotes around the filter values in rsync_opts.
Note 2: you might still need to set the appropriate ownership and permissions.
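The same filter chain can be sketched with plain rsync to see its effect (hypothetical /tmp paths; assumes rsync is installed — in the shell, unlike in rsync_opts, quoting the filters is fine because the shell strips the quotes):

```shell
# Hypothetical demo: copy the directory tree plus *.js files, nothing else
mkdir -p /tmp/fdemo/src/sub /tmp/fdemo/dst
touch /tmp/fdemo/src/keep.js /tmp/fdemo/src/sub/nested.js /tmp/fdemo/src/skip.txt
rsync -a --include='*/' --include='*.js' --exclude='*' /tmp/fdemo/src/ /tmp/fdemo/dst/
# dst now holds keep.js and sub/nested.js; skip.txt was filtered out
```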
First, using the copy module here is not ideal, as the docs state: "The copy module recursively copy facility does not scale to lots (>hundreds) of files. For alternative, see synchronize module, which is a wrapper around rsync."
copy module documentation
synchronize module documentation
However, you can do it as below using the copy module:
copy:
  src: "{{ item }}"
  dest: /home/admin/microservice/dist
with_lines: "find /Users/myfolder/WebStormProjects/project/microservice/dist -type f -name '*.js'"
Similarly, you can try the synchronize module as below:
synchronize:
  src: "{{ item }}"
  dest: /home/admin/microservice/dist
with_lines: "find /Users/myfolder/WebStormProjects/project/microservice/dist -type f -name '*.js'"
If you want to retain the directory layout, you can do it as below:
Step 1: copy the files matching the pattern, along with their parent directory structure, into a temp directory.
Step 2: copy the temp directory to the destination. Afterwards you can delete the temp directory, or do whatever your use case requires.
- name: copy pattern files and directory into a temp directory
  shell: find . -type f -name "*.js" | cpio -pvdmB /temp/dir/
  args:
    chdir: "/Users/myfolder/WebStormProjects/project/microservice/dist/"

- name: Copy the temp directory recursively to destination directory
  copy:
    src: "/temp/dir/"
    dest: "/home/admin/microservice/dist/"
    owner: "admin"
    group: "admin"
    force: "yes"
    mode: 0755
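If cpio is not available on the host, the same "stage matching files with their relative layout" step can be sketched with GNU cp --parents instead (a swapped-in alternative to the cpio pipeline, with hypothetical /tmp paths):

```shell
# Hypothetical demo: stage only *.js files, preserving their relative paths
mkdir -p /tmp/pdemo/src/lib /tmp/pdemo/stage
touch /tmp/pdemo/src/app.js /tmp/pdemo/src/lib/util.js /tmp/pdemo/src/readme.txt
cd /tmp/pdemo/src
# GNU cp --parents recreates each file's relative path under the target dir
find . -type f -name '*.js' -exec cp --parents {} /tmp/pdemo/stage \;
# stage now holds app.js and lib/util.js, but not readme.txt
```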

Ansible copy module requires writable parent directory?

I need to set /proc/sys/net/ipv4/conf/all/forwarding to 1.
That can easily be done via command:
- name: Enable IPv4 traffic forwarding
  command: echo 1 > /proc/sys/net/ipv4/conf/all/forwarding
But that's bad practice; the task will always report "changed".
So I tried the following:
- name: Enable IPv4 traffic forwarding
  copy: content=1 dest="/proc/sys/net/ipv4/conf/all/forwarding" force=yes
Which failed with msg: "Destination /proc/sys/net/ipv4/conf/all not writable"
According to the source, it seems copy always requires the parent directory to be writable. But 1) I don't understand why, and 2) is there another "idiomatic" way to set the destination file to the required value?
While I still do not understand why copy needs to check parent directory permissions, thanks to @larsks:
the sysctl module changes both sysctl.conf and the /proc values,
and this solves my task.
- name: Enable IPv4 traffic forwarding
  copy: content=1 dest="/proc/sys/net/ipv4/conf/all/forwarding" unsafe_writes=true
This will disable Ansible's atomic-write functionality and instead write 1 to the file directly.
Atomic writes are good and useful because they mean you will never get a corrupted file that has the output of multiple processes interleaved, but /proc is a special magic thing. The classic Unix dance of writing to a temporary file until you're done, and then renaming it to the final filename you want breaks because /proc doesn't let you create random temporary files.
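The "classic Unix dance" described above can be sketched with mktemp and mv on a hypothetical /tmp path (this is the pattern that /proc refuses, because the temporary file cannot be created there):

```shell
# Hypothetical sketch of the write-then-rename dance behind atomic writes
tmp=$(mktemp /tmp/forwarding.XXXXXX)   # temp file must live on the same filesystem
echo 1 > "$tmp"
mv "$tmp" /tmp/forwarding_demo         # rename(2) is atomic: readers see old or new, never partial
cat /tmp/forwarding_demo               # 1
```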
I found a workaround for this problem:
- name: Create temp copy of mongod.conf
  copy:
    src: /etc/mongod.conf
    dest: /tmp/mongod.conf
    remote_src: yes
  diff: no
  check_mode: no
  changed_when: false

- name: Copy config file mongod.conf
  copy:
    src: "/source/of/your/mongod.conf"
    dest: "/tmp/mongod.conf"
  register: result

- name: Copy temp mongod.conf to /etc/mongod.conf
  shell: "cp --force /tmp/mongod.conf /etc/mongod.conf"
  when: result.changed

Permission change based on file extension using ansible

I want to change the permissions of files based on their extension using Ansible. For example, I have a directory test containing many shell script (.sh) and Python (.py) files. I want to change the permission of the shell scripts to 0700 and the Python files to 0644. Can you please help me with how I can achieve this? Thanks.
There are many ways to do this. This should work. Tweak it to your needs.
- file: path={{ item }} mode=0644
  with_fileglob:
    - <full_path>/*.py

- file: path={{ item }} mode=0700
  with_fileglob:
    - <full_path>/*.sh
If the files are on the remote host (with_fileglob only matches files on the controller), do this:
- shell: ls /test/*.py
  register: py_files

- file: path={{ item }} mode=0644
  with_items: "{{ py_files.stdout_lines }}"
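The effect of the file tasks above can be sketched in plain shell with hypothetical /tmp paths (using find rather than ls, since find handles subdirectories and odd filenames more robustly):

```shell
# Hypothetical demo: set permissions per extension, as the file tasks would
mkdir -p /tmp/permdemo
touch /tmp/permdemo/run.sh /tmp/permdemo/tool.py
find /tmp/permdemo -type f -name '*.py' -exec chmod 0644 {} \;
find /tmp/permdemo -type f -name '*.sh' -exec chmod 0700 {} \;
stat -c '%a' /tmp/permdemo/tool.py   # 644
stat -c '%a' /tmp/permdemo/run.sh    # 700
```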
