How to replace a directory with a symlink using ansible?

I would like to replace /etc/nginx/sites-enabled with a symlink to my repo. I'm trying to do this with the file module, but that doesn't work, because the file module won't remove a directory even with the force option.
- name: setup nginx sites-available symlink
  file: path=/etc/nginx/sites-available src=/repo/etc/nginx/sites-available state=link force=yes
  notify: restart nginx
I could fall back to using shell.
- name: setup nginx sites-available symlink
  shell: test -d /etc/nginx/sites-available && rm -r /etc/nginx/sites-available && ln -sT /repo/etc/nginx/sites-available /etc/nginx/sites-available
  notify: restart nginx
Is there any better way to achieve this instead of falling back to shell?

When you take your action, it's actually two things:
delete a folder
add a symlink in its place
This is probably also the cleanest way to represent it in Ansible:
tasks:
  - name: remove the folder
    file: path=/etc/nginx/sites-available state=absent
  - name: setup nginx sites-available symlink
    file: path=/etc/nginx/sites-available
          src=/repo/etc/nginx/sites-available
          state=link
          force=yes
    notify: restart nginx
But always removing and re-adding the symlink is not so nice, so a task that checks the link target might be a nice addition:
- name: check the current symlink
  stat: path=/etc/nginx/sites-available
  register: sites_available
And a 'when' condition to the delete task:
- name: remove the folder (only if it is a folder)
  file: path=/etc/nginx/sites-available state=absent
  when: sites_available.stat.isdir is defined and sites_available.stat.isdir
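One thing the snippets above leave implicit: notify: restart nginx refers to a handler that has to be defined separately. A minimal sketch of such a handler (assuming the service is simply called nginx) could look like:
handlers:
  - name: restart nginx
    service: name=nginx state=restarted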

Related

How should I install Splunk from a remote tgz package?

I'm trying to perform an additional task on the output of stdout_lines.
Here is the playbook:
- name: Change to Splunk user
  hosts:
  sudo: yes
  sudo_user: splunk
  gather_facts: true
  tasks:
    - name: Run WGET & install SPLUNK
      command: wget -O splunk-9.0.2-17e00c557dc1-Linux-x86_64.tgz https://download.splunk.com/products/splunk/releases/9.0.2/linux/splunk-9.0.2-17e00c557dc1-Linux-x86_64.tgz
    - name: run 'ls' to get SPLUNK_PACKAGE_NAME
      shell: 'ls -l'
      register: command_output
    - debug:
        var: command_output.stdout_lines
I am using wget to download Splunk on the server and I need the Splunk package name so that I can extract the file in the next task.
For that, I tried to register ls -l as command_output.
Now, I need to untar it (tar xvzf splunk_package_name.tgz -C /opt), but I don't know how I can use the stdout_lines output in my tar command.
In Ansible, your use case comes down to a single task, using the unarchive module with the remote_src parameter set to true and src set to your URL.
As described in the documentation:
If remote_src=yes and src contains ://, the remote machine will download the file from the URL first.
So, you end up with this single task:
- name: Install Splunk from remote archive
  unarchive:
    src: "https://download.splunk.com/products/splunk/releases/9.0.2\
      /linux/splunk-9.0.2-17e00c557dc1-Linux-x86_64.tgz"
    remote_src: true
    # with this, you will end up with Splunk installed in /opt/splunk
    dest: /opt

Best way to create a directory if it does not exist, but not modify it if it does

I would like to create a new directory with a specified mode/owner but only if it does not yet exist.
I can do it by first checking with stat:
- name: Determine if exists
  stat:
    path: "{{ my_path }}"
  register: path
- name: Create path
  file:
    path: "{{ my_path }}"
    owner: someuser
    group: somegroup
    mode: 0775
    state: directory
  when: not path.stat.exists
Is it possible to do this without the extra step?
If not, is there a better way to accomplish this?
Ansible can be used to manage the directory in question, always ensuring that it will have the defined ownership and permissions irrespective of whether it exists or not.
If you want to avoid any chances of modifying an existing directory for some reason, the way you accomplished it using Ansible modules (requiring two tasks) is correct.
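For reference, just letting Ansible manage the directory as described is a single file task; a minimal sketch, reusing the owner/group/mode from the question:
- name: Ensure the directory exists with the desired owner, group and mode
  file:
    path: "{{ my_path }}"
    owner: someuser
    group: somegroup
    mode: 0775
    state: directory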
However, if you do need to accomplish this in one step, you can use the command module to run the install command to create the directory.
Example:
- name: Create path
  command:
    cmd: "install -o someuser -g somegroup -m 0775 -d {{ my_path }}"
    creates: "{{ my_path }}"
Here we are using the creates property to prevent the command from running when the path already exists.

How to skip a task if a file has been modified

I am working on an Ansible playbook; granted, I am kind of new at this. Anyway, in the playbook I modify a file, and when I rerun the playbook I don't want that file to be modified again.
- name: Check if the domain config.xml has been edited
  stat: path={{ domainshome }}/{{ domain_name }}/config/config.xml
  register: config
- name: Config.xml modified
  debug: msg="The Config.xml has been modified"
  when: config.changed
- name: Edit the config.xml - remove extra file-store bad tag
  shell: "sed -i '776,780d' {{ domainshome }}/{{ domain_name }}/config/config.xml"
  when: config.changed
When I run it for the first time, it skips this step.
I need this step to run once and be skipped if the playbook is rerun.
I am trying to write the playbook so that it removes entries from the config file only when it's executed for the first time, so that the JVM can run.
Q: "Remove entries from config file only when it’s executed for the first time."
A: It's possible to use the creates parameter of the shell module to make sure the configuration file has been edited only once. For example
- name: Edit the config.xml - remove extra file-store bad tag
  shell: "sed -i '776,780d' {{ domainshome }}/{{ domain_name }}/config/config.xml"
  args:
    creates: "{{ domainshome }}/{{ domain_name }}/config/config.xml.lock"
- name: Create lock file
  file:
    state: touch
    path: "{{ domainshome }}/{{ domain_name }}/config/config.xml.lock"
Notes
Quoting: creates: A filename, when it already exists, this step will not be run.
Adjust the path and name of the lock file to your needs.
The stat module only returns information about a file and never changes it. The variable registered as register: config in that task would therefore never report that the file has changed.
The file module with state: touch is not idempotent. Quoting: an existing file or directory will receive updated file access and modification times (similar to the way touch works from the command line).
A better solution would be to create the lock file in the same command as the sed, for example "sed -i ... && touch /path-to-lockfile/lockfile".
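A minimal sketch of that combined approach, reusing the paths from the question and the lock-file name from above (both are assumptions to adapt):
- name: Edit the config.xml and create the lock file in one step
  shell: "sed -i '776,780d' {{ domainshome }}/{{ domain_name }}/config/config.xml && touch {{ domainshome }}/{{ domain_name }}/config/config.xml.lock"
  args:
    creates: "{{ domainshome }}/{{ domain_name }}/config/config.xml.lock"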

Ansible copy module requires writable parent directory?

I need to set /proc/sys/net/ipv4/conf/all/forwarding to 1.
That can easily be done via command:
- name: Enable IPv4 traffic forwarding
  command: echo 1 > /proc/sys/net/ipv4/conf/all/forwarding
But that's bad practice: the task will always report "changed".
So I tried the following:
- name: Enable IPv4 traffic forwarding
  copy: content=1 dest="/proc/sys/net/ipv4/conf/all/forwarding" force=yes
Which failed with msg: "Destination /proc/sys/net/ipv4/conf/all not writable"
According to the sources, it seems copy always requires the parent directory to be writable. But 1) I don't understand why, and 2) is there any other "idiomatic" way to set the destination file to the required value?
While I still do not understand why copy needs to check parent directory permissions, thanks to @larsks: the sysctl module changes both sysctl.conf and the /proc values, and this solves my task.
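A minimal sketch with the sysctl module (the key name is assumed to be the dotted form of the /proc/sys path, i.e. it maps to /proc/sys/net/ipv4/conf/all/forwarding):
- name: Enable IPv4 traffic forwarding
  sysctl:
    name: net.ipv4.conf.all.forwarding
    value: "1"
    sysctl_set: yes
    state: present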
Using unsafe_writes is another option:
- name: Enable IPv4 traffic forwarding
  copy: content=1 dest="/proc/sys/net/ipv4/conf/all/forwarding" unsafe_writes=true
This will disable Ansible's atomic write functionality and instead write 1 to the file directly.
Atomic writes are good and useful because they mean you will never get a corrupted file that has the output of multiple processes interleaved, but /proc is a special magic thing. The classic Unix dance of writing to a temporary file until you're done, and then renaming it to the final filename you want breaks because /proc doesn't let you create random temporary files.
I found a workaround for this problem:
- name: Create temp copy of mongod.conf
  copy:
    src: /etc/mongod.conf
    dest: /tmp/mongod.conf
    remote_src: yes
  diff: no
  check_mode: no
  changed_when: false
- name: Copy config file mongod.conf
  copy:
    src: "/source/of/your/mongod.conf"
    dest: "/tmp/mongod.conf"
  register: result
- name: Copy temp mongod.conf to /etc/mongod.conf
  shell: "cp --force /tmp/mongod.conf /etc/mongod.conf"
  when: result.changed == true

How can I check if a file has been downloaded in ansible?

I am downloading the file with wget from ansible.
- name: Download Solr
  shell: wget http://mirror.mel.bkb.net.au/pub/apache/lucene/solr/4.7.0/solr-4.7.0.zip
  args:
    chdir: "{{project_root}}/solr"
but I only want to do that if the zip file does not exist in that location. Currently the system downloads it every time.
Note: this answer covers the general question of "How can I check file existence in ansible?", not the specific case of downloading a file.
The problem with the previous answers using "command" or "shell" actions is that they won't work in --check mode. Actually, the first action will be skipped, and the next will error out on the "when: solr_exists.rc != 0" condition (because the variable is not defined).
Since Ansible 1.3, there's a more direct way to check for file existence: the "stat" module. It of course also works well as a "local_action" to check local file existence:
- local_action: stat path={{secrets_dir}}/secrets.yml
  register: secrets_exist
- fail: msg="Production credentials not found"
  when: secrets_exist.stat.exists == False
Unless you have a reason to use wget, why not use the get_url module? It will check whether the file needs to be downloaded.
---
- hosts: all
  gather_facts: no
  tasks:
    - get_url:
        url="http://mirror.mel.bkb.net.au/pub/apache/lucene/solr/4.7.0/solr-4.7.0.zip"
        dest="{{project_root}}/solr-4.7.0.zip"
NOTE: If you put the directory and not the full path in dest, ansible will still download the file to a temporary dir but do an md5 check to decide whether to copy it to the dest dir.
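For example, a sketch with only the directory in dest (the directory is assumed to already exist):
- get_url:
    url="http://mirror.mel.bkb.net.au/pub/apache/lucene/solr/4.7.0/solr-4.7.0.zip"
    dest="{{project_root}}/solr"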
And if you need to save the state of the download, you can use:
---
- hosts: all
  gather_facts: no
  tasks:
    - get_url:
        url="http://mirror.mel.bkb.net.au/pub/apache/lucene/solr/4.7.0/solr-4.7.0.zip"
        dest="{{project_root}}/solr-4.7.0.zip"
      register: get_solr
    - debug:
        msg="solr was downloaded"
      when: get_solr|changed
Many modules are already aware of the result and will be skipped if it's already there, like file or get_url. Others, like command, have a creates option, which will skip the command if that file already exists (or doesn't exist, if you use the removes option).
So you should first check whether the available modules are already smart enough. If not, I recommend the stat module. Advantage over the other solutions: no "red errors but ignored" in the output.
- name: Check MySQL data directory existence
  stat: path=/var/lib/mysql-slave
  register: mysql_slave_data_dir
- name: Stop MySQL master to copy data directory
  service: name=mysql state=stopped
  sudo: yes
  when: not mysql_slave_data_dir.stat.exists
There are at least two options here.
You can register a variable if the file exists, then use a when condition to execute the command on the condition that the file doesn't already exist:
- command: /usr/bin/test -e {{project_root}}/solr/solr-4.7.0.zip
  register: solr_zip
  ignore_errors: True
- name: Download Solr
  shell: chdir={{project_root}}/solr /usr/bin/wget http://mirror.mel.bkb.net.au/pub/apache/lucene/solr/4.7.0/solr-4.7.0.zip
  when: solr_zip|failed
You could also use the command module with the creates option:
- name: Download Solr
  command: /usr/bin/wget http://mirror.mel.bkb.net.au/pub/apache/lucene/solr/4.7.0/solr-4.7.0.zip chdir={{project_root}}/solr creates={{project_root}}/solr/solr-4.7.0.zip
This article might be useful. Out of it comes this example:
tasks:
  - shell: if [[ -f "/etc/monitrc" ]]; then /bin/true; else /bin/false; fi
    register: result
    ignore_errors: True
  - command: /bin/something
    when: result|failed
  - command: /bin/something_else
    when: result|success
  - command: /bin/still/something_else
    when: result|skipped
So basically you can do this checking by registering a variable from a command and checking its return code. (You can also do this by checking its stdout)
- name: playbook
  hosts: all
  user: <your-user>
  vars:
    project_root: /usr/local
  tasks:
    - name: Check if the solr zip exists.
      command: /usr/bin/test -e {{project_root}}/solr/solr-4.7.0.zip
      ignore_errors: True
      register: solr_exists
    - name: Download Solr
      shell: chdir={{project_root}}/solr wget http://mirror.mel.bkb.net.au/pub/apache/lucene/solr/4.7.0/solr-4.7.0.zip
      when: solr_exists.rc != 0
This basically says that if the /usr/bin/test -e {{project_root}}/solr/solr-4.7.0.zip command returns a code other than 0, meaning the file doesn't exist, then execute the Download Solr task.
Hope it helps.
My favourite is to only download the file if it is newer than the local file (which includes the case when the local file does not exist).
The -N option of wget does this: https://www.gnu.org/software/wget/manual/html_node/Time_002dStamping-Usage.html
Sadly, I don't think there is an equivalent feature in get_url, so a very small change:
- name: Download Solr
  shell: chdir={{project_root}}/solr wget -N http://<SNIPPED>/solr-4.7.0.zip
Use the creates argument
- name: Download Solr
  shell: creates={{working_directory}}/solr/solr-4.7.0.zip chdir={{working_directory}}/solr wget http://mirror.mel.bkb.net.au/pub/apache/lucene/solr/4.7.0/solr-4.7.0.zip
