I have been trying to fix a mistake I made on all of our servers by using a playbook. Basically, I ran a playbook with logrotate to fix a problem with growing logs, and one of the logs in /var/log is named btmp, which I was not supposed to rotate but did anyway by accident. Logrotate appended a date to its name, breaking the log. Now I want a playbook that finds the log named btmp in the /var/log directory and renames it back. The problem is that the current file name differs on each server: for example, one server has btmp-20210316 and another has btmp-20210309. On the bash command line one would use the wildcard "btmp*" to get around this, but that does not appear to work in a playbook. So far I have come up with this:
tasks:
  - name: stat btmp*
    stat: path=/var/log
    register: btmp_stat

  - name: Move btmp
    command: mv /var/log/btmp* /var/log/btmp
    when: btmp_stat.stat.exists
However, this results in an error saying the file was not found. So my question is: how does one get a wildcard working in a playbook, or is there an equivalent way to find all files that have "btmp" in their names and rename them? All servers run CentOS 7.
I will add my own solution as well, even though the accepted answer is better.
Make a bash script with a single line, anywhere on your Ansible VM.
The line is: mv /var/log/filename* /var/log/filename
Now create a playbook to run this on the target VM:
---
- hosts: '{{ server }}'
  remote_user: username
  become: yes
  become_method: sudo

  vars_prompt:
    - name: "server"
      prompt: "Enter server name or group"
      private: no

  tasks:
    - name: Move the script to the target host VM
      copy: src=/anywhereyouwant/bashscript.sh dest=/tmp mode=0777

    - name: Execute the script
      command: sh /tmp/bashscript.sh

    - name: Delete the script
      command: rm /tmp/bashscript.sh
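For reference, invoking the playbook above could look like this (a sketch; the playbook filename is hypothetical):

ansible-playbook move_btmp.yml
Enter server name or group: webservers

The vars_prompt block asks for the target before the play starts, so no extra --limit flag is needed.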
There's more than one way to do this in Ansible, and using the shell module is certainly a viable way (though you would need shell in place of command, as the latter does not expand wildcards). For completeness, a shell-based task might look like the sketch below; after that, I'll show how I would solve the problem with native modules.
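A minimal shell-based sketch (assuming exactly one rotated file matches /var/log/btmp-* on each host):

- name: Rename the rotated btmp file back
  shell: mv /var/log/btmp-* /var/log/btmp
  args:
    removes: /var/log/btmp-*    # removes accepts a glob, so the task runs only if a rotated file exists

Note that if more than one file matches, mv will refuse to run (multiple sources require a directory destination), which is arguably a safe failure mode.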
First, create a task to find all matching files (i.e. /var/log/btmp*) and store the results in a variable for later processing - it would look like this:
- name: Find all files named /var/log/btmp*
  ansible.builtin.find:
    paths: /var/log
    patterns: 'btmp*'
  register: find_btmp
This task uses the find module to locate all files called btmp* in /var/log - the results are stored in a variable called find_btmp.
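If you're curious what the registered data looks like, an optional debug task will print it - each entry in find_btmp.files is a dictionary of file attributes (including path and mtime, which the next task relies on), and find_btmp.matched holds the match count:

- name: Inspect the find results (optional)
  ansible.builtin.debug:
    var: find_btmp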
Next, create a task to copy the btmp* file to btmp. Now you may very well have more than one file matching the above pattern, and logically you don't want to rename them all to btmp, as each copy would simply overwrite the previous one. Instead, let's assume you want only the newest matching file - we can use a clever Jinja2 filter to get this entry from the results of the first task:
- name: Copy the btmp* file to the required filename
  ansible.builtin.copy:
    src: "{{ find_btmp.files | sort(attribute='mtime', reverse=true) | map(attribute='path') | first }}"
    dest: /var/log/btmp
    remote_src: yes
  when: find_btmp.failed == false
This task uses Ansible's copy module to copy our chosen source file to /var/log/btmp. The remote_src: yes parameter tells the copy module that the source file exists on the remote machine rather than the Ansible host itself.
We use a when clause to ensure that we don't run this copy operation if we failed to find any files.
Now let's break down that Jinja2 filter:
find_btmp.files - this is all of the files listed in our find_btmp variable
sort(attribute='mtime',reverse=true) - here we are sorting our list of files using the mtime (modification time) attribute - we're reverse sorting so that the newest entry is at the top of the list.
map(attribute='path') - we use map to "extract" the path attribute from each entry in the files list, as this is the only data we actually want to pass to the copy module - the path of the file itself
first - this selects only the first element in the list (i.e. the newest file as they were reverse sorted)
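To sanity-check the filter chain before the copy runs, you could print the chosen path with an optional debug task:

- name: Preview which file will be copied (optional)
  ansible.builtin.debug:
    msg: "{{ find_btmp.files | sort(attribute='mtime', reverse=true) | map(attribute='path') | first }}"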
Finally, you asked for a move operation. There's no native "move" module in Ansible, so you will want to remove the source file after the copy - this can be done as follows (the Jinja2 filter is the same as before):
- name: Delete the original file
  ansible.builtin.file:
    path: "{{ find_btmp.files | sort(attribute='mtime', reverse=true) | map(attribute='path') | first }}"
    state: absent
  when: find_btmp.failed == false
Again we use a when clause to ensure we don't delete anything if we didn't find it in the first place.
I have tested this on Ansible 3.1.0/ansible-base 2.10.7 - if you're running Ansible 2.9 or earlier, remove the ansible.builtin. prefix from the module names (i.e. ansible.builtin.copy becomes copy).
Hope this helps you out!
Related
How to delete the oldest directory with Ansible
Suppose I have the following tree structure:
Parent Directory
-Dir2020-05-20
-Dir2020-05-21
-Dir2020-05-22
-Dir2020-05-23
Now, every time an Ansible playbook is run, it should delete the oldest directory. For example, it should delete Dir2020-05-20 in its first run, if we consider its creation date to be 2020-05-20.
The age attribute of the file module does not seem helpful, as I run this playbook at random times and I want to keep only a limited number of these directories.
Just assign dir_path to the path of your "Parent Directory" where all these directories are present:
---
- hosts: localhost
  vars:
    dir_path: "/home/harshit/ansible/test/"   # parent directory path, make sure it ends with a slash

  tasks:
    - name: Find the oldest directory
      shell:
        cmd: "ls -tr | head -n 1"              # -t sorts newest first, -r reverses, so the oldest entry comes first
        chdir: "{{ dir_path }}"
      register: dir_name_to_delete

    - name: "Delete the oldest directory: {{ dir_path }}{{ dir_name_to_delete.stdout }}"
      file:
        state: absent
        path: "{{ dir_path }}{{ dir_name_to_delete.stdout }}"
Considering that a recommended practice is to avoid the shell and command modules wherever possible, I suggest a pure Ansible solution for this case:
- name: Get directory list
  find:
    paths: "{{ target_directory }}"
    file_type: directory
  register: found_dirs

- name: Get the oldest dir
  set_fact:
    oldest_dir: "{{ found_dirs.files | sort(attribute='mtime') | first }}"

- name: Delete oldest dir
  file:
    state: absent
    path: "{{ oldest_dir.path }}"
  when:
    - found_dirs.files | count > 3
There are two ways to know how many files the find module matched: either use its matched return value, as in when: found_dirs.matched > 3, or use the count filter as above. I prefer the latter simply because I use that filter in a lot of other cases, so it's a habit.
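For illustration, the same guard written against the module's matched return value would look like this:

- name: Delete oldest dir
  file:
    state: absent
    path: "{{ oldest_dir.path }}"
  when: found_dirs.matched > 3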
For your reference, Ansible has a whole bunch of useful filters (e.g. I used count and sort here, but there are dozens of them). You don't need to memorize the filter names, of course - just keep in mind that they exist and might be useful in many cases.
I am working with Ansible and using a playbook.
In this playbook I download a file (from a web URL) and unarchive it onto the hosts (using the unarchive module), and after that I copy some files from the control machine to the hosts (using the copy module).
What is happening is that every time I use the unarchive module, Ansible overwrites the files on the hosts, although every file is the same.
How can I make it not overwrite the files when the contents are the same?
My playbook:
---
- hosts: group1
  sudo: yes

  tasks:
    - name: Download and extract Apache Tomcat
      unarchive:
        src: http://mirrors.up.pt/pub/apache/tomcat/tomcat-9/v9.0.1/bin/apache-tomcat-9.0.1.tar.gz
        dest: /opt/
        remote_src: yes

    - name: Copy file to host
      copy: src=/etc/ansible/files/myfile.xml dest=/opt/apache-tomcat-9.0.1/conf/myfile.xml
Add a creates option referencing something the unarchive places. See the unarchive module documentation (check the version against what you are using). For example:
- unarchive:
    remote_src: yes
    src: "{{ url }}"
    dest: "{{ install_dir }}/"
    creates: "{{ flagFile }}"
If the unarchive creates a /tmp/foo directory with a file named bar, then flagFile can be /tmp/foo/bar, and unarchive won't run again if it's already there.
I've handled this multiple ways, depending on the situation.
Option 1:
One of the files created by the archive has a specific name, such as foo-version_no.
In this case, I can add the option:

creates: 'foo-version_no'

If this file exists, Ansible will not extract the archive again.
Caveat:
If the extracted directory is supposed to contain 12 files, and at least 1 of them has been removed or altered, the unarchive module will not replace them.
Option 2:
If there is no version number in the file name, examine one of the files in the extract directory and assess whether it is the correct file.
Perhaps a checksum on the file, a grep for a unique parameter, or executing a command with the --version option.
Register the result into a variable:

register: correct_file

The unarchive task can then have a when clause:

unarchive:
  . . .
when: correct_file is defined and correct_file.stdout != My_Required_Version

The defined check is there because, if you determine the extracted files don't exist at all, you would skip the version check, leaving the variable correct_file undefined.
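Putting Option 2 together, a sketch might look like this (the binary path, archive path, and version variable are all hypothetical):

- name: Check the version of the extracted tool
  command: /opt/foo/bin/foo --version     # hypothetical binary shipped in the archive
  register: correct_file
  failed_when: false                      # a missing binary just means we need to extract

- name: Extract only if the version is wrong or missing
  unarchive:
    src: /tmp/foo.tar.gz                  # hypothetical archive
    dest: /opt/foo
    remote_src: yes
  when: correct_file.rc != 0 or correct_file.stdout != my_required_version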
Caveat:
See Caveat from Option 1.
Option 3:
Use an argument, or an extract command, that will not overwrite files if they have not changed (a sketch follows the caveat below).
Caveat:
There will still be a little overhead, because the extract command needs to assess each file, but it will not rewrite unchanged ones.
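With the unarchive module, one way to pass such an argument through to tar is extra_opts (a sketch; --skip-old-files needs GNU tar 1.28 or newer, and the paths are hypothetical):

- name: Extract without touching files that already exist
  unarchive:
    src: /tmp/archive.tar.gz        # hypothetical archive
    dest: /opt/
    remote_src: yes
    extra_opts:
      - --skip-old-files            # tar leaves existing files as they are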
Option 4:
Have another way to assess the quality of the files in the extracted directory, and set a variable based on that result.
A simple example would be to run a checksum on all files, then run a checksum on that output, yielding a "checksum of checksums".
Ex:

#!/bin/sh
# checksum every extracted file, then checksum that output, keeping only the numeric field
curr_sum=$( cksum /path/to/extracted/files/* | cksum | awk '{ print $1 }' )
if [ "$curr_sum" -eq "$Correct_Value" ]
then
    echo "This is OK"
    exit 0
else
    echo "This is not ok"
    exit 1
fi
In Ansible, you would run this script and register the result, then compare the exit code to the expected value:

- command: "script_name"
  register: cksum_answer
  failed_when: false        # a non-zero exit code is information here, not a failure
. . .
- unarchive:
    . . .
  when: cksum_answer.rc == 1
I have OS X "El Capitan" 10.11.6 and I am using Ansible 2.1.1.0 to run some maintenance tasks on a remote Linux server (Ubuntu 16.04 Xenial). I am trying to get the following list of folders sorted on the remote machine, so I can remove the old ones when needed:
/releases/0.0.0
/releases/0.0.1
/releases/0.0.10
/releases/1.0.0
/releases/1.0.5
/releases/2.0.0
I have been trying the find module in Ansible, but it returns an unsorted list. Is there an easy way to achieve this with Ansible?
You can sort items with the sort filter:
- hosts: localhost
  gather_facts: no
  tasks:
    - find: path="/tmp" patterns="test*"
      register: files

    - debug: msg="{{ files.files | sort(attribute='ctime') | map(attribute='path') | list }}"
Just change the sort attribute to your needs.
But beware that string sort is not numeric, so /releases/1.0.5 will go after /releases/1.0.10.
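If you need version-aware ordering inside Ansible itself, the community.general collection ships a version_sort filter; assuming that collection is installed, a sketch would be:

- debug:
    msg: "{{ files.files | map(attribute='path') | list | community.general.version_sort }}"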
Interesting solutions, thanks a lot, guys. But I think I have found the easiest way on Ubuntu: just using ls -v /releases/ will apply natural sorting to all folders:
- name: List of releases in ascending order
  command: ls -v /releases/
  register: releases

- debug: msg={{ releases.stdout_lines }}
The response is:
ok: [my.remote.com] => {
    "msg": [
        "0.0.0",
        "0.0.1",
        "0.0.10",
        "1.0.0",
        "1.0.5",
        "2.0.0"
    ]
}
If you want to find files older than a certain period, the age and age_stamp parameters of the find module can help. For example:
# Recursively find /tmp files older than 4 weeks and equal or greater than 1 megabyte
- find: paths="/tmp" age="4w" size="1m" recurse=yes
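To act on what was found, register the result and loop over the returned files (a sketch):

- name: Find /tmp files older than 4 weeks and at least 1 megabyte
  find:
    paths: /tmp
    age: 4w
    size: 1m
    recurse: yes
  register: old_files

- name: Remove the matched files
  file:
    path: "{{ item.path }}"
    state: absent
  loop: "{{ old_files.files }}"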
It sounds like what you want to do is really simple, but the standard Ansible modules don't quite have what you need.
As an alternative, you can write your own script in your favorite programming language, then use the copy module to push that script to the host and command to execute it. When done, use file to remove the script.
The downside is that the target host needs the required interpreter to run your script. For instance, if you write a Python script, the target host will need Python.
Example:
- name: Send your script to the target host
  copy: src=directory_for_scripts/my_script.sh dest=/tmp/my_script.sh

- name: Execute my script on target host
  command: /bin/bash /tmp/my_script.sh

- name: Clean up the target host by removing script
  file: path=/tmp/my_script.sh state=absent
I'm writing an Ansible playbook which will install and configure an agent of some monitoring system my company uses. One of the steps required for the successful configuration of the agent is to configure certain directives in nagios.cfg file.
The nagios.cfg can reside on two different paths based on the way it was installed (package manager / from source).
The two relevant paths are:
/usr/local/nagios/etc/nagios.cfg
/etc/nagios3/nagios.cfg
What I want Ansible to do is to find the correct path and then insert it into a variable which I'll be able to use in the following configuration steps.
I've started with this:
- stat: path=/usr/local/nagios/etc/nagios.cfg
  register: nag_conf_usrlocal
  when: nag_conf_usrlocal.stat.exists
I thought about stat'ing the file in the first location to see whether it exists there; if so, its path should go into a variable, and if not, the next path should be stat'ed and, if the file exists there, the variable should contain that path instead.
How can it be done?
What you have done is not wrong, it just needs some fixing:
- stat: path=/usr/local/nagios/etc/nagios.cfg
  register: nag_conf_usrlocal

- stat: path=/etc/nagios3/nagios.cfg
  register: nag_conf_etc
  when: not nag_conf_usrlocal.stat.exists

- set_fact:
    nagios_path: "/usr/local/nagios/etc/nagios.cfg"
  when: nag_conf_usrlocal.stat.exists

- set_fact:
    nagios_path: "/etc/nagios3/nagios.cfg"
  when: nag_conf_etc.stat is defined and nag_conf_etc.stat.exists
This is probably not the best solution, but it gives you an overview of how this could be achieved.
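A more compact variant (a sketch, assuming Ansible 2.5+ for loop) stats both candidate paths in one task and keeps the first one that exists:

- stat:
    path: "{{ item }}"
  register: nagios_stats
  loop:
    - /usr/local/nagios/etc/nagios.cfg
    - /etc/nagios3/nagios.cfg

- set_fact:
    # 'first' raises an error if neither path exists, which makes the failure explicit
    nagios_path: "{{ nagios_stats.results | selectattr('stat.exists') | map(attribute='item') | first }}"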
I'm fairly new to Ansible and I'm trying to create a role that copies a file to a remote server. The local file can have a different name every time I run the playbook, but it needs to be copied to the same name remotely, something like this:
- name: copy file
  copy:
    src: '*.txt'
    dest: /path/to/fixedname.txt
Ansible doesn't allow wildcards, so when I wrote a simple playbook with the tasks in the main playbook I could do:
- name: find the filename
  connection: local
  shell: "ls -1 files/*.txt"
  register: myfile

- name: copy file
  copy:
    src: "files/{{ item }}"
    dest: /path/to/fixedname.txt
  with_items: "{{ myfile.stdout_lines }}"
However, when I moved the tasks to a role, the first action didn't work anymore, because the relative path is relative to the role while the playbook executes in the root dir of the 'roles' directory. I could add the path to the role's files dir, but is there a more elegant way?
It looks like you need a task that looks up information locally and then uses that information as input to the copy module.
There are two ways to get local information:
use local_action. That's shorthand for running the task against 127.0.0.1 (this is what you've been using).
use a lookup. This is a plugin system specifically designed for getting information locally.
In your case, I would go for the second method, using lookup. You could set it up like this example:
vars:
  local_file_name: "{{ lookup('pipe', 'ls -1 files/*.txt') }}"

tasks:
  - name: copy file
    copy: src="{{ local_file_name }}" dest=/path/to/fixedname.txt
Or, more directly:
tasks:
  - name: copy file
    copy: src="{{ lookup('pipe', 'ls -1 files/*.txt') }}" dest=/path/to/fixedname.txt
With regard to paths:
the lookup plugin is run from the context of the task (playbook vs role). This means that it will behave differently depending on where it's used.
In the setup above, the tasks are run directly from a playbook, so the working dir will be:
/path/to/project -- this is the folder where your playbook is.
If you were to add the task to a role, the working dir would be:
/path/to/project/roles/role_name/tasks
In addition, the file and pipe plugins run from within the role/files folder if it exists:
/path/to/project/roles/role_name/files -- this means your command is ls -1 *.txt
caveat:
The plugin is evaluated every time you access the variable. This means you cannot debug the variable in your playbook and then rely on it having the same value when used later in a role!
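If you need the value to be stable, evaluate the lookup once and freeze the result with set_fact; after that, the variable holds a plain string rather than a live template:

- name: Evaluate the lookup once
  set_fact:
    local_file_name: "{{ lookup('pipe', 'ls -1 files/*.txt') }}"

- name: copy file
  copy: src="{{ local_file_name }}" dest=/path/to/fixedname.txt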
I do wonder, though, about the use case for a file that resides inside a project's Ansible folders but whose name is not known in advance. Where does such a file come from? Isn't it possible to add a layer between the generation of the file and its use in Ansible, or to have a fixed local path as a variable? Just curious ;)
Just wanted to throw in an additional answer... I have the same problem, where I build an Ansible bundle on the fly and copy artifacts (RPMs) into a role's files folder, and my RPMs have versions in the filename.
When I run the Ansible play, I want it to install all RPMs, regardless of filename.
I solved this by using the with_fileglob mechanism in Ansible:
- name: Copy RPMs
  copy: src="{{ item }}" dest="{{ rpm_cache }}"
  with_fileglob: "*.rpm"
  register: rpm_files

- name: Install RPMs
  yum: name={{ item }} state=present
  with_items: "{{ rpm_files.results | map(attribute='dest') | list }}"
I find it a little bit cleaner than the lookup mechanism.