I have OS X "El capitan" 10.11.6 and I am using Ansible 2.1.1.0 to run some maintenance tasks on a remote Linux server Ubuntu 16.04 Xenial. I am trying to get the following list of folders sorted on the remote machine (Linux), so I can remove the old ones when needed:
/releases/0.0.0
/releases/0.0.1
/releases/0.0.10
/releases/1.0.0
/releases/1.0.5
/releases/2.0.0
I have been trying with the find module in Ansible, but it returns an unsorted list. Is there an easy way to achieve this with Ansible?
You can sort items with the sort filter:
- hosts: localhost
  gather_facts: no
  tasks:
    - find: path="/tmp" patterns="test*"
      register: files
    - debug: msg="{{ files.files | sort(attribute='ctime') | map(attribute='path') | list }}"
Just change the sort attribute to suit your needs.
But beware that the sort is lexicographic, not numeric, so /releases/1.0.5 will come after /releases/1.0.10.
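For example, this quick demonstration shows the effect:
- debug:
    msg: "{{ ['/releases/1.0.5', '/releases/1.0.10'] | sort }}"
This prints ["/releases/1.0.10", "/releases/1.0.5"] - lexicographic order, not version order.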
Interesting solutions, thanks a lot guys. But I think I have found the easiest way on Ubuntu: just using ls -v /releases/ applies natural sorting to all the folders:
- name: List of releases in ascending order
  command: ls -v /releases/
  register: releases

- debug: msg={{ releases.stdout_lines }}
The response is:
ok: [my.remote.com] => {
    "msg": [
        "0.0.0",
        "0.0.1",
        "0.0.10",
        "1.0.0",
        "1.0.5",
        "2.0.0"
    ]
}
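Since the original goal was to remove the old releases, the sorted list can feed straight into a file task. A sketch, assuming you want to keep the three newest releases (the keep count of three is just an example):
- name: Remove all but the three newest releases
  file:
    path: "/releases/{{ item }}"
    state: absent
  with_items: "{{ releases.stdout_lines[:-3] }}"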
If you want to find files older than a certain period, the age and age_stamp parameters of the find module can help you. For example:
# Recursively find /tmp files older than 4 weeks and equal or greater than 1 megabyte
- find: paths="/tmp" age="4w" size="1m" recurse=yes
It sounds like what you want to do is really simple, but the standard Ansible modules don't quite have what you need.
As an alternative, you can write your own script in your favorite programming language, use the copy module to push the script to the host, use command to execute it, and when done, use file to remove the script.
The downside is that the target host will need the required interpreter to run your script. For instance, if you write a Python script, the target host will need Python.
Example:
- name: Send your script to the target host
  copy: src=directory_for_scripts/my_script.sh dest=/tmp/my_script.sh

- name: Execute my script on target host
  command: >
    /bin/bash /tmp/my_script.sh

- name: Clean up the target host by removing script
  file: path=/tmp/my_script.sh state=absent
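As a side note, Ansible's script module combines the copy and execute steps: it transfers a local script to the remote node and runs it there, so the separate copy and cleanup tasks can be dropped (the interpreter requirement on the target still applies):
- name: Run my script on the target host
  script: directory_for_scripts/my_script.sh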
I am trying to gain knowledge in Ansible and solve a few problems:
I want to save the output locally; I'm not sure if it is even possible. Can the output be saved on the server the playbook is being run from?
In the example below, I am just printing to the terminal where I am running the playbook. That is not much use when there is a large amount of data. I would like it saved to a file on the server I am running the playbook from instead.
---
- name: list os version
  hosts: test
  become: true
  tasks:
    - name: hostname
      command: hostname
      register: command_output
    - name: cat /etc/redhat-release
      command: cat redhat-release chdir=/etc
    - name: Print output to console
      debug:
        msg: "{{ command_output.stdout }}"
I really want the output to go to a file. I can't find anything about whether this is possible.
As you can read in the Ansible documentation, you can create a local configuration file ansible.cfg inside the directory where you have your playbook, and then set the proper log file option so all the playbook output is written there: Ansible output documentation
By default Ansible sends output about plays, tasks, and module arguments to your screen (STDOUT) on the control node. If you want to capture Ansible output in a log, you have three options:
To save Ansible output in a single log on the control node, set the log_path configuration file setting. You may also want to set display_args_to_stdout, which helps to differentiate similar tasks by including variable values in the Ansible output.
To save Ansible output in separate logs, one on each managed node, set the no_target_syslog and syslog_facility configuration file settings.
To save Ansible output to a secure database, use AWX or Red Hat Ansible Automation Platform. You can then review history based on hosts, projects, and particular inventories over time, using graphs and/or a REST API.
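For the first option, a minimal ansible.cfg placed next to the playbook might look like this (the log path is just an example):
[defaults]
log_path = ./ansible.log
display_args_to_stdout = True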
If you just want to write the result of a task to a file, use the copy module delegated to localhost:
---
- name: list os version
  hosts: test
  become: true
  tasks:
    - name: hostname
      command: hostname
      register: command_output
    - name: cat /etc/redhat-release
      command: cat redhat-release chdir=/etc
    - name: Create your local file on master node
      ansible.builtin.file:
        path: /your/local/file
        state: touch   # touch creates the file if missing; the default state fails on a nonexistent path
        owner: foo
        group: foo
        mode: '0644'
      delegate_to: localhost
    - name: Print output to file
      ansible.builtin.copy:
        content: "{{ command_output.stdout }}"
        dest: /your/local/file
      delegate_to: localhost
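One caveat: if the play targets more than one host, each host will overwrite the same local file. A per-host destination avoids that, for example:
- name: Print output to per-host file
  ansible.builtin.copy:
    content: "{{ command_output.stdout }}"
    dest: "/your/local/file.{{ inventory_hostname }}"
  delegate_to: localhost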
So I have been trying to fix a mistake I made on all the servers by using a playbook. Basically, I launched a playbook with logrotate to fix the growing-logs problem, and among the logs is one named btmp, which I wasn't supposed to rotate but did anyway by accident; logrotate then renamed it by appending a date, thereby breaking the log. Now I want to use a playbook that will find a log named btmp in the /var/log directory and rename it back. The problem is that the file is currently different on each server; for example, one server has btmp-20210316 and another has btmp-20210309. On the bash command line one would use the wildcard "btmp*" to bypass this problem; however, that does not appear to work in a playbook. So far I came up with this:
tasks:
  - name: stat btmp*
    stat: path=/var/log
    register: btmp_stat

  - name: Move btmp
    command: mv /var/log/btmp* /var/log/btmp
    when: btmp_stat.stat.exists
However, this results in an error saying the file was not found. So my question is: how does one get the wildcard working in a playbook, or is there an equivalent way to find all files that have "btmp" in their names and rename them? BTW, all servers are CentOS 7.
So I will add my own solution as well, even though the accepted answer's solution is better.
Make a bash script with a single line, anywhere on your Ansible VM.
The line is: mv /var/log/filename* /var/log/filename
Now create a playbook to run it on the target VM:
---
- hosts: '{{ server }}'
  remote_user: username
  become: yes
  become_method: sudo
  vars_prompt:
    - name: "server"
      prompt: "Enter server name or group"
      private: no
  tasks:
    - name: Move the script to target host VM
      copy: src=/anywhereyouwant/bashscript.sh dest=/tmp mode=0777
    - name: Execute the script
      command: sh /tmp/bashscript.sh
    - name: delete the script
      command: rm /tmp/bashscript.sh
There's more than one way to do this in Ansible, and using the shell module is certainly a viable way to do it (but you would need the shell module in place of command, as the latter does not support wildcards).
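For completeness, the shell variant of the rename might look like this (a quick sketch that assumes exactly one matching file per host):
- name: Move btmp back
  shell: mv /var/log/btmp-* /var/log/btmp
That said, I would solve the problem more robustly as follows: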
First, create a task to find all matching files (i.e. /var/log/btmp*) and store them in a variable for later processing - it would look like this:
- name: Find all files named /var/log/btmp*
  ansible.builtin.find:
    paths: /var/log
    patterns: 'btmp*'
  register: find_btmp
This task uses the find module to locate all files called btmp* in /var/log - the results are stored in a variable called find_btmp.
Next, create a task to copy the btmp* file to btmp. Now, you may very well have more than one file matching the above pattern, and logically you don't want to rename them all to btmp, as that would simply overwrite the file each time. Instead, let's assume you want only the newest matching file - we can use a clever Jinja2 filter chain to get this entry from the results of the first task:
- name: Copy the btmp* to the required filename
  ansible.builtin.copy:
    src: "{{ find_btmp.files | sort(attribute='mtime',reverse=true) | map(attribute='path') | first }}"
    dest: /var/log/btmp
    remote_src: yes
  when: find_btmp.failed == false
This task uses Ansible's copy module to copy our chosen source file to /var/log/btmp. The remote_src: yes parameter tells the copy module that the source file exists on the remote machine rather than the Ansible host itself.
We use a when clause to ensure that we don't run this copy operation if we failed to find any files.
Now let's break down that Jinja2 filter:
find_btmp.files - this is all of the files listed in our find_btmp variable
sort(attribute='mtime',reverse=true) - here we are sorting our list of files using the mtime (modification time) attribute - we're reverse sorting so that the newest entry is at the top of the list.
map(attribute='path') - we're using map to "extract" the path attribute of the files dictionary, as this is the only data we actually want to pass to the copy module - the path of the file itself
first - this selects only the first element in the list (i.e. the newest file as they were reverse sorted)
Finally, you asked for a move operation - there's no native "move" module in Ansible, so you will want to remove the source file after the copy. This can be done as follows (the Jinja2 filter is the same as before):
- name: Delete the original file
  ansible.builtin.file:
    path: "{{ find_btmp.files | sort(attribute='mtime',reverse=true) | map(attribute='path') | first }}"
    state: absent
  when: find_btmp.failed == false
Again we use a when clause to ensure we don't delete anything if we didn't find it in the first place.
I have tested this on Ansible 3.1.0/ansible-base 2.10.7 - if you're running Ansible 2.9 or earlier, remove the ansible.builtin. from the module names (i.e. ansible.builtin.copy becomes copy.)
Hope this helps you out!
On Solaris, Ansible's setup module does not gather information about installed zones. How can I extend the setup module to gather the output of zoneadm list -iv?
Create a script named /etc/ansible/facts.d/zoneadm.fact and gather whatever information you need there. It can be written in whatever you want (bash/python/etc).
When you are done, echo the result to stdout in JSON format.
Deploy that script via Ansible and make it executable.
Gather facts and notice that the new facts are present under ansible_local.zoneadm.
More info can be found here.
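A minimal sketch of such a script, assuming jq is available on the target (the JSON shape is entirely up to you):
#!/bin/bash
# /etc/ansible/facts.d/zoneadm.fact - print the zone list as JSON on stdout
zones=$(zoneadm list -iv | jq -Rs 'split("\n") | map(select(length > 0))')
echo "{\"zones\": $zones}"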
I was dealing with the same dilemma about two years ago. The previous reply is correct, but you have to deploy some things first and then rerun Ansible to pick up the new facts.
There's an option to write your own Ansible module, which returns JSON looking like:
{
    "changed": false,
    "ansible_facts": {
        "sunos": {
            "zonename": "global",
            "zones": {}
        }
    }
}
Facts returned by such a module are then merged with the ones coming from the setup module. To be more portable, it's best to put such a module into an Ansible role and include just one task calling it, plus add the always tag, so that this fact collection runs even when you choose a subset of tasks by specifying tags on the command line.
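The wiring in the role then boils down to a single task like this (the module name sunos_facts is hypothetical):
- name: gather SunOS facts
  sunos_facts:
  tags: always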
I've pushed my old role here on GitHub. It probably won't work out of the box... it was used with Ansible 1.0, but get inspired.
The reason for my misunderstanding was that I thought I had to put something on the Ansible machine which would get automatically deployed to the target system, as is done with modules. But facts gathering works differently! One has to take care that the facts-gathering scripts are already on the target system before setup starts working. I would say this is a design error in Ansible, or at least a still-unimplemented feature.
To add this missing functionality, it is necessary to write a play, which works before all other things. I came up with the following solution:
---
- name: facts deployment
  gather_facts: false
  hosts: all
  become: true
  tasks:
    - set_fact: setup_necessary=false
    - file:
        path: "/etc/ansible/facts.d"
        state: directory
        recurse: yes
      register: facts_directory
    - set_fact: setup_necessary=true
      when: facts_directory.changed

- name: solaris facts
  gather_facts: false
  hosts: solaris
  become: true
  tasks:
    - include: deploy_fact.yml
      with_items:
        - { shell: bash, file: nonglobal_zones }
        - { shell: bash, file: solaris_eeprom }

- name: setup after facts update
  gather_facts: false
  hosts: all
  tasks:
    - setup:
      when: setup_necessary
The above playbook runs all plays with gather_facts: false, to prevent any setup run before the facts have been deployed. All plays set the variable setup_necessary when any change to the target system has been made. It is not possible to use a handler for this, because handlers run at the end of a play, but not at the end of a playbook or after some plays (Ansible limitation 1).
First the directory gets created, and after that all facts files are deployed. It is necessary to put the body of the loop into a separate task file, because it is not possible to group two tasks together in a playbook (Ansible limitation 2).
The contents of the deploy_fact.yml file uses the template module to transfer the facts scripts to the target system.
---
- name: "/etc/ansible/facts.d/{{ item.file }}.fact"
  template:
    src: "{{ inventory_dir }}/facts.d/{{ item.shell }}.j2"
    dest: "/etc/ansible/facts.d/{{ item.file }}.fact"
    mode: 0755
  register: facts_file

- set_fact: setup_necessary=true
  when: facts_file.changed
The reason I use the template module is that every facts script needs some kind of error handling which takes care that proper JSON gets created in case of an error. This is my current error handling, but it can still be improved. Some people also don't like set -eu, which is somewhat a matter of taste.
#! /bin/bash
{% include item.file + '.bash' %}
set -eu
_stderr=$(mktemp)
trap 'rm -f "$_stderr"' EXIT
if _stdout=$(main 2>$_stderr); then
    if [ "$_stdout" ]; then
        echo "$_stdout"
    else
        echo null
    fi
else
    jq -Rsc "{\"ERROR\":{\"failed\":true,\"exit\":$?,\"msg\":.}}" $_stderr
fi
The template does nothing more than include the file passed into the loop via the implicit item variable. The wrapper expects the included file to define a main function. This is my main function for non-global zones:
main ()
{
    zoneadm list -i |
        grep -v global |
        jq -Rc . |
        jq -sc .
}
And this one collects the Solaris eeprom data:
main ()
{
    eeprom |
        sed 's/^\([^=]*\)=\(.*\)$/{"\1":"\2"}/' |
        sed 's/^\(.*\): data not available.$/{"\1":null}/' |
        sed 's/:"false"}$/:false}/g' |
        sed 's/:"true"}$/:true}/g' |
        sed 's/:"\([0-9][0-9]*\)"}$/:\1}/' |
        sed '/^{"boot-device"/{s/":"/":["/;s/ /","/g;s/"}$/"]}/;}' |
        jq -sc add
}
My bottom line is that it is possible to extend Ansible's facts gathering, but it is far from obvious and a bit painful, because it makes ad hoc usage of the setup module impossible. Instead of requiring the user to implement the above, Ansible should move all of it into the setup module (Ansible limitation 3).
I'm currently developing an Ansible script to build and deploy a Java project.
I can set the log_path like below:
log_path=/var/log/ansible.log
But it is hard to look up the build history.
Is it possible to append a datetime to the log file name?
for example,
ansible.20150326145515.log
I don't believe there is a built-in way to generate the date on the fly like that, but one option is to use a lookup which shells out to date. Example:
log_path="/var/log/ansible.{{ lookup('pipe', 'date +%Y%m%d%H%M%S') }}.log"
Here is an option using the ANSIBLE_LOG_PATH environment variable, thanks to a Bash shell alias:
alias ansible="ANSIBLE_LOG_PATH=ansible-\`date +%Y%m%d%H%M%S\`.log ansible"
Feel free to use an absolute path if you prefer.
I found it. Just add a task to copy (or mv) the log locally:
- name: Copy ansible.log
  connection: local
  command: mv ./logs/ansible.log ./logs/ansible.{{ lookup('pipe', 'date %Y%M%d%H%M%S') }}.log
  run_once: true
thanks to #jarv
How about this:
- shell: date +%Y%m%d%H%M%S
  register: timestamp

- debug: msg="foo.{{ timestamp.stdout }}.log"
Output:
TASK [command] *****************************************************************
changed: [blabla.example.com]
TASK [debug] *******************************************************************
ok: [blabla.example.com] => {
    "msg": "foo.20160922233847.log"
}
According to the nice folks in the #ansible freenode IRC channel, this can be accomplished with a custom callback plugin.
I haven't done it yet because I can't install the Ansible Python library on this machine. Specifically, Windows 7 can't have directory names > 260 chars in length, and pip tries to make lengthy temporary paths. But if someone gets around to it, please post it here.
A small improvement on ickhyun-kwon's answer:
- name: "common/_ansible_log_path.yml: rename ansible.log"
connection: local
shell: |
mkdir -vp {{ inventory_dir }}/logs/{{ svn_deploy.release }}/ ;
mv -vf {{ inventory_dir }}/logs/ansible.log {{ inventory_dir }}/logs/{{ svn_deploy.release }}/ansible.{{ svn_deploy.release }}.{{ lookup('pipe', 'date +%Y-%m-%d-%H%M') }}.log args:
executable: /bin/bash
chdir: "{{ inventory_dir }}"
run_once: True
ignore_errors: True
This has separate log directories per SVN release and ensures the log directory actually exists before the mv command runs.
Ansible interprets ./ as the current playbook directory, which may or may not be the root of your Ansible repository; mine live in ./playbooks/$project/$role.yml. For me, {{ inventory_dir }}/logs/ happens to correspond to the ~/ansible/log/ directory, though alternative layout configurations do not guarantee this.
I am unsure of the correct way to formally extract the absolute ansible.cfg::log_path value.
Also, the date format for the month is +%m and not %M, which is minutes.
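With both fixes applied (the missing + added and %M corrected to %m), the earlier task would read:
- name: Copy ansible.log
  connection: local
  command: mv ./logs/ansible.log ./logs/ansible.{{ lookup('pipe', 'date +%Y%m%d%H%M%S') }}.log
  run_once: true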
I have faced a similar problem while trying to set dynamic log paths for various playbooks.
A simple solution is to pass the log filename dynamically via the ANSIBLE_LOG_PATH environment variable. Check out https://docs.ansible.com/ansible/latest/reference_appendices/config.html
In this particular case, just export the environment variable when running the intended playbook in your terminal:
export ANSIBLE_LOG_PATH=ansible.`date +%s`.log; ansible-playbook test.yml
Otherwise, if the intended filename cannot be generated in the terminal, you can always use a runner playbook which runs the intended playbook from within:
---
- hosts:
    - localhost
  gather_facts: false
  ignore_errors: yes
  tasks:
    - name: set dynamic variables
      set_fact:
        task_name: dynamic_log_test
        log_dir: /path/to/log_directory/
    - name: Change the working directory and run the ansible-playbook as shell command
      shell: "export ANSIBLE_LOG_PATH={{ log_dir }}log_{{ task_name|lower }}.txt; ansible-playbook test.yml"
      register: shell_result
This should log the result of test.yml to /path/to/log_directory/log_dynamic_log_test.txt
Hope you find this helpful!
I'm fairly new to Ansible and I'm trying to create a role that copies a file to a remote server. The local file can have a different name every time I run the playbook, but it needs to be copied to the same name remotely, something like this:
- name: copy file
  copy:
    src: '*.txt'
    dest: /path/to/fixedname.txt
Ansible doesn't allow wildcards there, so when I wrote a simple playbook with the tasks in the main playbook, I could do:
- name: find the filename
  connection: local
  shell: "ls -1 files/*.txt"
  register: myfile

- name: copy file
  copy:
    src: "files/{{ item }}"
    dest: /path/to/fixedname.txt
  with_items: "{{ myfile.stdout_lines }}"
However, when I moved the tasks to a role, the first action no longer worked, because the relative path is resolved relative to the role, while the playbook executes from the directory that contains the 'roles' directory. I could add the full path to the role's files dir, but is there a more elegant way?
It looks like you need a task that looks up information locally and then uses that information as input to the copy module.
There are two ways to get local information.
use local_action:. That's shorthand for running the task against 127.0.0.1; more info can be found here. (This is what you've been using.)
use a lookup. This is a plugin system specifically designed for getting information locally. More info here.
In your case, I would go for the second method, using lookup. You could set it up like this example:
vars:
  local_file_name: "{{ lookup('pipe', 'ls -1 files/*.txt') }}"
tasks:
  - name: copy file
    copy: src="{{ local_file_name }}" dest=/path/to/fixedname.txt
Or, more directly:
tasks:
  - name: copy file
    copy: src="{{ lookup('pipe', 'ls -1 files/*.txt') }}" dest=/path/to/fixedname.txt
With regards to paths
the lookup plugin is run from the context of the task (playbook vs role). This means that it will behave differently depending on where it's used.
In the setup above, the tasks are run directly from a playbook, so the working dir will be:
/path/to/project -- this is the folder where your playbook is.
If you were to add the task to a role, the working dir would be:
/path/to/project/roles/role_name/tasks
In addition, the file and pipe plugins run from within the role/files folder if it exists:
/path/to/project/roles/role_name/files -- this means your command is ls -1 *.txt
caveat:
The plugin is called every time you access the variable. This means you cannot trust debugging the variable in your playbook, and then relying on the variable to have the same value when used later in a role!
I do wonder, though, about the use case for a file that resides inside a project's Ansible folders but whose name is not known in advance. Where does such a file come from? Isn't it possible to add a layer between the generation of the file and its use in Ansible... or to have a fixed local path as a variable? Just curious ;)
Just wanted to throw in an additional answer... I have the same problem as you, where I build an Ansible bundle on the fly and copy artifacts (RPMs) into a role's files folder, and my RPMs have versions in their filenames.
When I run the Ansible play, I want it to install all the RPMs, regardless of their filenames.
I solved this by using the with_fileglob mechanism in ansible:
- name: Copy RPMs
  copy: src="{{ item }}" dest="{{ rpm_cache }}"
  with_fileglob: "*.rpm"
  register: rpm_files

- name: Install RPMs
  yum: name={{ item }} state=present
  with_items: "{{ rpm_files.results | map(attribute='dest') | list }}"
I find it a little bit cleaner than the lookup mechanism.