I am working on an Ansible playbook (granted, I am kind of new at this). In the playbook I modify a file, and when I rerun the playbook I don't want that file to be modified again.
- name: Check if the domain config.xml has been edited
  stat: path={{ domainshome }}/{{ domain_name }}/config/config.xml
  register: config

- name: Config.xml modified
  debug: msg="The Config.xml has been modified"
  when: config.changed

- name: Edit the config.xml - remove extra file-store bad tag
  shell: "sed -i '776,780d' {{ domainshome }}/{{ domain_name }}/config/config.xml"
  when: config.changed
When I run this for the first time, it skips the step.
I need this step to run once and be skipped if the playbook is rerun.
I am trying to write a playbook that removes entries from the config file only on the first run, so that the JVM can start.
Q: "Remove entries from config file only when it’s executed for the first time."
A: It's possible to use the creates parameter of the shell module to make sure the configuration file is edited only once. For example:
- name: Edit the config.xml - remove extra file-store bad tag
  shell: "sed -i '776,780d' {{ domainshome }}/{{ domain_name }}/config/config.xml"
  args:
    creates: "{{ domainshome }}/{{ domain_name }}/config/config.xml.lock"

- name: Create lock file
  file:
    state: touch
    path: "{{ domainshome }}/{{ domain_name }}/config/config.xml.lock"
Notes
Quoting the docs for creates: "A filename, when it already exists, this step will not be run."
Adjust the path and name of the lock file to your needs.
The stat module only returns information about a file and never changes it, so the variable registered with register: config in this task will never report that the file has changed; config.changed is always false, which is why your step was skipped.
The file module with state: touch is not idempotent. Quoting the docs: "an existing file or directory will receive updated file access and modification times (similar to the way touch works from the command line)."
A better solution would be to modify the command and create the lock file together with sed, e.g. "sed -i ... && touch /path-to-lockfile/lockfile", as in the sketch below.
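A minimal sketch of that combined approach, reusing the paths and line range from the question (the lock-file name is an assumption; adjust it to your layout):

- name: Edit the config.xml - remove extra file-store bad tag
  shell: "sed -i '776,780d' {{ domainshome }}/{{ domain_name }}/config/config.xml && touch {{ domainshome }}/{{ domain_name }}/config/config.xml.lock"
  args:
    creates: "{{ domainshome }}/{{ domain_name }}/config/config.xml.lock"

Because the lock file is only created when sed succeeds, a failed edit is retried on the next run, while a successful edit is never repeated.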
Related
I would like to create a new directory with a specified mode/owner but only if it does not yet exist.
I can do it by first checking stat:
- name: Determine if exists
  stat:
    path: "{{ my_path }}"
  register: path

- name: Create path
  file:
    path: "{{ my_path }}"
    owner: someuser
    group: somegroup
    mode: 0775
    state: directory
  when: not path.stat.exists
Is it possible to do this without the extra step?
If not, is there a better way to accomplish this?
Ansible can manage the directory in question directly, always ensuring that it has the defined ownership and permissions irrespective of whether it exists or not, as in the sketch below.
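A minimal sketch, reusing the variables from the question; the file module is idempotent, so this is safe to rerun:

- name: Create path
  file:
    path: "{{ my_path }}"
    owner: someuser
    group: somegroup
    mode: 0775
    state: directory

Note that this will also correct the ownership and permissions of an already existing directory.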
If you want to avoid any chance of modifying an existing directory for some reason, the way you accomplished it using Ansible modules (requiring two tasks) is correct.
However, if you do need to accomplish this in one step, you can use the command module to run the install command to create the directory.
Example:
- name: Create path
  command:
    cmd: "install -o someuser -g somegroup -m 0775 -d {{ my_path }}"
    creates: "{{ my_path }}"
Here we are using the creates property to prevent the command from running when the path already exists.
I want to read a file with Ansible, find a specific thing in it, and store all the matches in a file on my localhost.
For example, there is a /tmp/test file on all hosts, and I want to grep for a specific thing in this file and store all the matches in my home directory.
What should I do?
There might be many ways to accomplish this. The choice of Ansible modules (or even tools) can vary.
One approach is (using only Ansible):
Slurp the remote file
Write new file with filtered content
Fetch the file to the control machine
Example:
- hosts: remote_host
  tasks:
    # Slurp the file
    - name: Get contents of file
      slurp:
        src: /tmp/test
      register: testfile

    # Filter the contents to new file
    - name: Save contents to a variable for looping
      set_fact:
        testfile_contents: "{{ testfile.content | b64decode }}"

    - name: Write a filtered file
      lineinfile:
        path: /tmp/filtered_test
        line: "{{ item }}"
        create: yes
      when: "'TEXT_YOU_WANT' in item"
      with_items: "{{ testfile_contents.split('\n') }}"

    # Fetch the file
    - name: Fetch the filtered file
      fetch:
        src: /tmp/filtered_test
        dest: /tmp/
This will fetch the file to /tmp/<ANSIBLE_HOSTNAME>/tmp/filtered_test.
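If you would rather not have the hostname-based directory tree, the fetch module also accepts flat: yes; a small sketch (the destination naming here is an assumption, adjust as needed):

- name: Fetch the filtered file to a flat destination
  fetch:
    src: /tmp/filtered_test
    dest: "/tmp/filtered_test_{{ inventory_hostname }}"
    flat: yes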
You can use the Ansible fetch module to download files from the remote system to your local system. You can then do the processing locally, as shown in this Ansible CLI example:
REMOTE=[YOUR_REMOTE_SERVER]; \
ansible -m fetch -a "src=/tmp/test dest=/tmp/ansiblefetch/" $REMOTE && \
grep "[WHAT_YOU_ARE_INTERESTED_IN]" /tmp/ansiblefetch/$REMOTE/tmp/test > /home/user/ansible_files/$REMOTE
This snippet runs the ad-hoc version of Ansible, calling the fetch module with the source file (on the remote) and the destination folder (local) as arguments. Fetch copies the file into [DEST]/[REMOTE_NAME]/[SRC], from which we then grep what we are interested in, writing the output to /home/user/ansible_files/$REMOTE.
How do I append a date/timestamp to the Ansible log file name?
Currently I have log_path=/var/ansible-playbooks/ansible.log in ansible.cfg.
Every time I run a playbook, I need the log file to be saved with a timestamp,
for example ansible-20160808142400.log.
Use ANSIBLE_LOG_PATH environment variable.
Execute playbook as follows:
ANSIBLE_LOG_PATH=/tmp/ansible_$(date "+%Y%m%d%H%M%S").log ansible-playbook myplaybook.yml
Alternatively you can write your own callback plugin that will log what you want and where you want it to.
If you're running on a UNIX-based system, you can take advantage of the behavior of inodes. Define a log path in your ansible.cfg; I created a directory in $HOME/.ansible:
log_path = $HOME/.ansible/log/ansible.log
Create a pre_tasks section in your playbooks and include the following task:
- name: Create the log file for this run
  shell: /bin/bash -l -c "mv {{ lookup('env', 'HOME') }}/.ansible/log/ansible.log {{ lookup('env', 'HOME') }}/.ansible/log/ansible.log-{{ lookup('pipe', 'date +%Y%m%d%H%M%S') }}"
  delegate_to: localhost
  become: yes
  become_user: "{{ lookup('env', 'USER') }}"
When Ansible starts running a playbook, it creates the log file and starts writing to it. The task above then renames the file to ansible.log-YYYYmmddHHMMSS, and the Ansible process keeps writing to it: even though the log file's name has changed, the inode associated with it hasn't.
Small task:
- name: Configure hosts
  template: src=host.cfg.j2 dest=/etc/shinken/hosts/{{ item.host_name }}.cfg
  with_items: shinken_hosts
  when: shinken_hosts is defined
  notify: reload config
I want to remove all other configs (files) in /etc/shinken/hosts/ configured by this task.
How can I do this?
(This is really important when I fix a typo in 'shinken_hosts' and want the old config with the misspelled name to be removed automatically.)
You might want to check this, slide 19.
This assumes that you know which files need to exist in the specific directory, and then deletes all others.
# tidy_expected: ['conf1.cfg', 'conf2.cfg']
- find: paths={{ tidy_path }}   # /etc/myapp
  register: existing

- file: path={{ item.path }} state=absent
  when: item.path | basename not in tidy_expected
  with_items: "{{ existing.files | default([]) }}"
  register: removed

- mail: body="{{ removed }}"
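Applied to the Shinken example above, a hedged sketch (the directory and variable names are taken from the question; deriving the expected file names from shinken_hosts this way is an assumption):

- name: Find existing host configs
  find:
    paths: /etc/shinken/hosts
    patterns: "*.cfg"
  register: existing_configs

- name: Remove configs not generated from shinken_hosts
  file:
    path: "{{ item.path }}"
    state: absent
  when: item.path | basename not in expected
  with_items: "{{ existing_configs.files | default([]) }}"
  vars:
    expected: "{{ shinken_hosts | default([]) | map(attribute='host_name') | map('regex_replace', '$', '.cfg') | list }}"

Any file in /etc/shinken/hosts that does not correspond to an entry in shinken_hosts, including one left over from a fixed typo, is removed.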
I'm trying to turn these lines into something I can put in an ansible playbook:
# Install Prezto files
shopt -s extglob
shopt -s nullglob
files=( "${ZDOTDIR:-$HOME}"/.zprezto/runcoms/!(README.md) )
for rcfile in "${files[@]}"; do
  [[ -f $rcfile ]] && ln -s "$rcfile" "${ZDOTDIR:-$HOME}/.${rcfile##*/}"
done
So far I've got the following:
- name: Link Prezto files
  file: src={{ item }} dest=~ state=link
  with_fileglob:
    - ~/.zprezto/runcoms/z*
I know it isn't exactly the same, but it would select the same files; the problem is that with_fileglob looks on the local machine, and I want it to look on the remote machine.
Is there any way to do this, or should I just use a shell script?
A clean Ansible way of purging unwanted files matching a glob is:
- name: List all tmp files
  find:
    paths: /tmp/foo
    patterns: "*.tmp"
  register: tmp_glob

- name: Cleanup tmp files
  file:
    path: "{{ item.path }}"
    state: absent
  with_items:
    - "{{ tmp_glob.files }}"
Bruce P's solution works, but it requires an additional file and gets a little messy. Below is a pure Ansible solution.
The first task grabs a list of filenames and stores it in files_to_copy. The second task appends each filename to the path you provide and creates the symlinks.
- name: grab file list
  shell: ls /path/to/src
  register: files_to_copy

- name: create symbolic links
  file:
    src: "/path/to/src/{{ item }}"
    dest: "/path/to/dest/{{ item }}"
    state: link
  with_items: "{{ files_to_copy.stdout_lines }}"
The file module does indeed look for files on the server where Ansible is running when using with_fileglob, etc. Since you want to work with files that exist solely on the remote machine, you could do a couple of things. One approach is to copy over a shell script in one task and then invoke it in the next task. You could even use the fact that the file was copied as a way to only run the script if it didn't already exist:
- name: Copy link script
  copy:
    src: /path/to/foo.sh
    dest: /target/path/to/foo.sh
    mode: 0755
  register: copied_script

- name: Invoke link script
  command: /target/path/to/foo.sh
  when: copied_script.changed
Another approach would be to create an entire command line that does what you want and invoke it using the shell module:
- name: Generate links
  shell: find ~/.zprezto/runcoms/z* -exec ln -s {} ~ \;
You can use with_lines to accomplish this (note that, like with_fileglob, the lines lookup runs its command on the control machine, so this suits setups where the same files exist there; see the remote-side sketch after the example):
- name: Link Prezto files
  file: src={{ item }} dest=~ state=link
  with_lines: ls ~/.zprezto/runcoms/z*
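If the glob must be evaluated on the remote machine using only Ansible modules, here is a hedged sketch with the find module, which runs on the target (the paths follow the question; ansible_env.HOME assumes fact gathering is enabled):

- name: Find Prezto runcom files on the remote machine
  find:
    paths: "{{ ansible_env.HOME }}/.zprezto/runcoms"
    patterns: "z*"
  register: runcoms

- name: Link Prezto files
  file:
    src: "{{ item.path }}"
    dest: "{{ ansible_env.HOME }}/.{{ item.path | basename }}"
    state: link
  with_items: "{{ runcoms.files }}"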