Ansible copy module requires writable parent directory?

Need to set /proc/sys/net/ipv4/conf/all/forwarding to 1.
That can easily be done via the command module:
- name: Enable IPv4 traffic forwarding
  command: echo 1 > /proc/sys/net/ipv4/conf/all/forwarding
But that's bad practice: the task will always report "changed".
So I tried the following:
- name: Enable IPv4 traffic forwarding
  copy: content=1 dest="/proc/sys/net/ipv4/conf/all/forwarding" force=yes
Which failed with msg: "Destination /proc/sys/net/ipv4/conf/all not writable"
According to the sources, it seems like copy always requires the parent directory to be writable. But 1) I don't understand why, and 2) is there any other "idiomatic" way to set a destination file to the required value?

While I still do not understand why copy needs to check parent directory permissions, thanks to @larsks:
the sysctl module changes both sysctl.conf and /proc values
and this solves my task.
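A minimal sketch of that sysctl-based task (the key name mirrors the /proc path; whether you also persist it to sysctl.conf is up to the state/sysctl_file options):

- name: Enable IPv4 traffic forwarding
  sysctl:
    name: net.ipv4.conf.all.forwarding
    value: '1'
    sysctl_set: yes    # apply with sysctl -w so /proc is updated immediately
    state: present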

- name: Enable IPv4 traffic forwarding
  copy: content=1 dest="/proc/sys/net/ipv4/conf/all/forwarding" unsafe_writes=true
This will disable Ansible's atomic-write functionality and instead write 1 to the file directly.
Atomic writes are good and useful because they mean you will never get a corrupted file with the output of multiple processes interleaved, but /proc is a special, magic thing. The classic Unix dance of writing to a temporary file and then renaming it to the final filename breaks, because /proc doesn't let you create random temporary files.

I found a workaround for this problem:
- name: Create temp copy of mongod.conf
  copy:
    src: /etc/mongod.conf
    dest: /tmp/mongod.conf
    remote_src: yes
  diff: no
  check_mode: no
  changed_when: false

- name: Copy config file mongod.conf
  copy:
    src: "/source/of/your/mongod.conf"
    dest: "/tmp/mongod.conf"
  register: result

- name: Copy temp mongod.conf to /etc/mongod.conf
  shell: "cp --force /tmp/mongod.conf /etc/mongod.conf"
  when: result.changed == true

Related

Ansible can't copy files on remote server, but the command runs correctly if run from command line

I'm trying to move everything under /opt/* to a new location on the remote server. I've tried this using command to run rsync directly, as well as using both the copy and the synchronize Ansible modules. In all cases I get the same error message saying:
"msg": "rsync: link_stat \"/opt/*\" failed: No such file or directory
If I run the command listed in the "cmd" part of the Ansible error message directly on my remote server, it works without error. I'm not sure why Ansible is failing.
Here is the current attempt using synchronize:
- name: move /opt to new partition
  become: true
  synchronize:
    src: /opt/*
    dest: /mnt/opt/*
  delegate_to: "{{ inventory_hostname }}"
You should skip the wildcards; that is a common mistake.
UPDATE
Thanks to the user @Zeitounator, I managed to do it with synchronize.
The advantage of using synchronize instead of the copy module is performance; it's much faster if you have a lot of files to copy.
- name: move /opt to new partition
  become: true
  synchronize:
    src: /opt/
    dest: /mnt/opt
  delegate_to: "{{ inventory_hostname }}"
So basically the initial answer was right, but you needed to delete the wildcards "*" and the trailing slash on the dest path.
Also, you should add the deletion of the old files left under /opt/; a sketch follows.
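A hedged sketch of that cleanup, assuming you only run it after verifying the sync succeeded:

- name: Remove the old contents of /opt once the sync is verified
  become: true
  shell: rm -rf /opt/*    # destructive; double-check /mnt/opt first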

Find a file and rename it ansible playbook [duplicate]

So I have been trying to fix a mistake I made on all the servers by using a playbook. Basically, I launched a playbook with logrotate to fix the growing-logs problem, and among the logs is one named btmp, which I wasn't supposed to rotate but did anyway by accident. Logrotate renamed it by adding a date, thereby breaking the log. Now I want to use a playbook that will find the log named btmp in the /var/log directory and rename it back. The problem is that the file is currently different on each server: for example, one server has btmp-20210316 and another has btmp-20210309. On the bash command line one would use the wildcard "btmp*" to get around this, but that does not appear to work in a playbook. So far I came up with this:
tasks:
  - name: stat btmp*
    stat: path=/var/log
    register: btmp_stat

  - name: Move btmp
    command: mv /var/log/btmp* /var/log/btmp
    when: btmp_stat.stat.exists
However, this results in an error saying the file was not found. So my question is: how does one get the wildcard working in a playbook, or is there an equivalent way to find all files that have "btmp" in their names and rename them? BTW, all servers are CentOS 7.
So I will add my own solution as well, even though the accepted solution is better.
Make a bash script with a single line, anywhere on your Ansible VM.
The line is: mv /var/log/filename* /var/log/filename
And now create a playbook to operate this in target VM:
---
- hosts: '{{ server }}'
  remote_user: username
  become: yes
  become_method: sudo
  vars_prompt:
    - name: "server"
      prompt: "Enter server name or group"
      private: no
  tasks:
    - name: Move the script to target host VM
      copy: src=/anywhereyouwant/bashscript.sh dest=/tmp mode=0777
    - name: Execute the script
      command: sh /tmp/bashscript.sh
    - name: delete the script
      command: rm /tmp/bashscript.sh
There's more than one way to do this in Ansible, and using the shell module is certainly a viable way to do it (but you would need the shell module in place of command, as the latter does not support wildcards).
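If you go that route, a minimal sketch (safe only when exactly one rotated btmp-* file exists, which matches the situation described):

- name: Rename the rotated btmp back (the shell expands the glob)
  shell: mv /var/log/btmp-* /var/log/btmp

That said, I would solve the problem as follows: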
First create a task to find all matching files (i.e. /var/log/btmp*) and store them in a variable for later processing - this would look like this:
- name: Find all files named /var/log/btmp*
  ansible.builtin.find:
    paths: /var/log
    patterns: 'btmp*'
  register: find_btmp
This task uses the find module to locate all files called btmp* in /var/log - the results are stored in a variable called find_btmp.
Next create a task to copy the btmp* file to btmp. Now you may very well have more than one file matching the above pattern, and logically you don't want to rename them all to btmp, as this would simply keep overwriting the file every time. Instead, let's assume you want only the newest file that matched - we can use a clever Jinja2 filter to get this entry from the results of the first task:
- name: Copy the btmp* to the required filename
  ansible.builtin.copy:
    src: "{{ find_btmp.files | sort(attribute='mtime',reverse=true) | map(attribute='path') | first }}"
    dest: /var/log/btmp
    remote_src: yes
  when: find_btmp.failed == false
This task uses Ansible's copy module to copy our chosen source file to /var/log/btmp. The remote_src: yes parameter tells the copy module that the source file exists on the remote machine rather than the Ansible host itself.
We use a when clause to ensure that we don't run this copy operation if we failed to find any files.
Now let's break down that Jinja2 filter:
find_btmp.files - this is all of the files listed in our find_btmp variable
sort(attribute='mtime',reverse=true) - here we are sorting our list of files using the mtime (modification time) attribute - we're reverse sorting so that the newest entry is at the top of the list.
map(attribute='path') - we're using map to "extract" the path attribute of the files dictionary, as this is the only data we actually want to pass to the copy module - the path of the file itself
first - this selects only the first element in the list (i.e. the newest file as they were reverse sorted)
Finally, you asked for a move operation - there's no native "move" module in Ansible, so you will want to remove the source file after the copy. This can be done as follows (the Jinja2 filter is the same as before):
- name: Delete the original file
  ansible.builtin.file:
    path: "{{ find_btmp.files | sort(attribute='mtime',reverse=true) | map(attribute='path') | first }}"
    state: absent
  when: find_btmp.failed == false
Again we use a when clause to ensure we don't delete anything if we didn't find it in the first place.
I have tested this on Ansible 3.1.0/ansible-base 2.10.7 - if you're running Ansible 2.9 or earlier, remove the ansible.builtin. from the module names (i.e. ansible.builtin.copy becomes copy.)
Hope this helps you out!

How to run linux like cp command on same server..but copy says it does not find remote server

I am trying to emulate the scenario of copying a local file from one directory to another directory on the same machine, but the Ansible copy module always looks for a remote server.
The code I am using:
- name: Configure Create directory
  hosts: 127.0.0.1
  connection: local
  vars:
    customer_folder: "{{ customer }}"
  tasks:
    - file:
        path: /opt/scripts/{ customer_folder }}
        state: directory
    - copy:
        src: /home/centos/absample.txt
        dest: /opt/scripts/{{ customer_folder }}
I am running this playbook like:
ansible-playbook ab_deploy.yml --extra-vars "customer=ab"
So there are two problems I am facing:
First, it should create a directory called ab under /opt/scripts/, but it is creating the folder as { customer_folder }}; it's not taking ab as the name of the directory.
Second, as I read in the documentation, copy only works for copying files from local to a remote machine, but what I want is simply to copy from local to local.
How can I achieve this? It might be silly; I am just trying things out.
Please suggest.
I solved it: I used a cp command under the shell module, and then it worked.
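For reference, a hedged sketch of how the play could stay module-based; the fixed Jinja braces and remote_src are the key changes (the paths are the asker's own):

- name: Configure Create directory
  hosts: 127.0.0.1
  connection: local
  vars:
    customer_folder: "{{ customer }}"
  tasks:
    - file:
        path: "/opt/scripts/{{ customer_folder }}"
        state: directory
    - copy:
        src: /home/centos/absample.txt
        dest: "/opt/scripts/{{ customer_folder }}/"
        remote_src: yes    # treat src as already present on the target (here, the same local machine)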

Ansible - Unarchive module overwriting

I am working with Ansible and using a playbook.
In this playbook I am performing a download (from a web URL) and unarchive of a file onto the hosts (using the unarchive module), and after that I am copying some files from the control machine to the hosts (using the copy module).
What is happening is that every time I use the unarchive module, although every file is the same, Ansible is overwriting the files on the hosts.
How can I make it so that it does not overwrite if the contents are the same?
My playbook:
---
- hosts: group1
  sudo: yes
  tasks:
    - name: Download and Extract apache
      unarchive:
        src: http://mirrors.up.pt/pub/apache/tomcat/tomcat-9/v9.0.1/bin/apache-tomcat-9.0.1.tar.gz
        dest: /opt/
        remote_src: yes
    - name: Copy file to host
      copy: src=/etc/ansible/files/myfile.xml dest=/opt/apache-tomcat-9.0.1/conf/myfile.xml
Add a creates option referencing something the unarchive places.
cf. the documentation page (check the version against what you are using).
e.g.:
- unarchive:
    remote_src: yes
    src: "{{ url }}"
    dest: "{{ install_dir }}/"
    creates: "{{ flagFile }}"
If the unarchive creates a /tmp/foo directory with a file named bar, then flagFile can be /tmp/foo/bar, and unarchive won't run again if it's already there.
I've handled this multiple ways, depending on the situation.
Option 1:
One of the files created by the archive has a specific name, such as foo-version_no
In this case, I can add the option:
creates: 'foo-version_no'
If this file exists, ansible will not extract it.
Caveat:
If the extracted directory is supposed to contain 12 files, and at least 1 of them has been removed or altered, the unarchive module will not replace them.
Option 2:
If there is no version number in the file name, examine one of the files in the extracted directory and assess whether it is the correct file.
Perhaps a checksum on the file; a grep for a unique parameter; or executing a command with the '--version' option.
Register the results into a variable:
register: correct_file
The unarchive task can then have a when parameter:
- unarchive:
    . . .
    . . .
  when: (correct_file is defined and correct_file.stdout != My_Required_Version)
The "is defined" part comes first because, if you determine the extracted files don't exist at all, you would skip checking the version, so the variable correct_file would be undefined.
Caveat:
See Caveat from Option 1.
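A hedged end-to-end sketch of this option; the version command, variable names and archive details are assumptions, not values from the question:

- name: Check the installed version
  command: "{{ install_dir }}/bin/catalina.sh version"
  register: correct_file
  failed_when: false    # a missing binary just means we need to extract

- name: Extract only when the version does not match
  unarchive:
    src: "{{ url }}"
    dest: "{{ install_dir }}/"
    remote_src: yes
  when: correct_file is defined and my_required_version not in correct_file.stdout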
Option 3:
Use an argument or an extract command that will not overwrite files if they have not changed.
Caveat:
There will still be a little overhead, because the extract command needs to assess each file, but it will not create a new one.
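For instance, assuming GNU tar on the target, you could pass tar's --skip-old-files flag through unarchive's extra_opts; a hedged sketch:

- unarchive:
    src: "{{ url }}"
    dest: "{{ install_dir }}/"
    remote_src: yes
    extra_opts:
      - --skip-old-files    # GNU tar: never overwrite files that already exist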
Option 4:
Have another way to assess the quality of the files in the extracted directory, and set a variable based upon that result.
A simple example would be to run a checksum on all files, then run a checksum on that output, yielding a "checksum of checksums".
Ex:
# Checksum every extracted file, then checksum that output
curr_sum=$( cksum /path/to/extracted/files/* | cksum )
if [ "$curr_sum" = "$Correct_Value" ]
then
    echo "This is OK"
    exit 0
else
    echo "This is not ok"
    exit 1
fi
In Ansible, you would run this command and register the result, then compare the output to the pre-set value:
- shell:
    cmd: "script_name"
  register: cksum_answer
  failed_when: false    # rc 1 is meaningful here, not a task failure
. . .
- unarchive:
    . . .
    . . .
  when: cksum_answer.rc == 1
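Put together, a minimal hedged sketch of this option (the script name, url and paths are placeholders):

- name: Run the checksum-of-checksums script
  script: check_extract.sh
  register: cksum_answer
  failed_when: false

- name: Re-extract only when the check reported a mismatch
  unarchive:
    src: "{{ url }}"
    dest: "{{ install_dir }}/"
    remote_src: yes
  when: cksum_answer.rc == 1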

How can I check if file has been downloaded in ansible

I am downloading the file with wget from Ansible.
- name: Download Solr
  shell: wget http://mirror.mel.bkb.net.au/pub/apache/lucene/solr/4.7.0/solr-4.7.0.zip
  args:
    chdir: "{{ project_root }}/solr"
but I only want to do that if the zip file does not exist in that location. Currently the system is downloading it every time.
Note: this answer covers the general question of "How can I check file existence in Ansible", not the specific case of downloading a file.
The problem with the previous answers using the "command" or "shell" actions is that they won't work in --check mode. Actually, the first action will be skipped, and the next will error out on the "when: solr_exists.rc != 0" condition (due to the variable not being defined).
Since Ansible 1.3, there's a more direct way to check for file existence - using the "stat" module. It of course also works well as a "local_action" to check a local file's existence:
- local_action: stat path={{secrets_dir}}/secrets.yml
  register: secrets_exist

- fail: msg="Production credentials not found"
  when: secrets_exist.stat.exists == False
Unless you have a reason to use wget, why not use the get_url module? It will check if the file needs to be downloaded.
---
- hosts: all
  gather_facts: no
  tasks:
    - get_url:
        url: "http://mirror.mel.bkb.net.au/pub/apache/lucene/solr/4.7.0/solr-4.7.0.zip"
        dest: "{{project_root}}/solr-4.7.0.zip"
NOTE: If you put the directory and not the full path in dest, Ansible will still download the file to a temporary dir, but then do an md5 check to decide whether to copy it to the dest dir.
And if you need to save the state of the download you can use:
---
- hosts: all
  gather_facts: no
  tasks:
    - get_url:
        url: "http://mirror.mel.bkb.net.au/pub/apache/lucene/solr/4.7.0/solr-4.7.0.zip"
        dest: "{{project_root}}/solr-4.7.0.zip"
      register: get_solr

    - debug:
        msg: "solr was downloaded"
      when: get_solr|changed
Many modules are already aware of the result and will be skipped if it's already there, like file or get_url. Others like command have a creates option, which will skip the command if that file already exists (or a removes option, which skips it if the file doesn't exist); a small illustration follows.
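A hedged illustration of the creates/removes pair (the commands and path are placeholders, not from the original question):

- command: /usr/local/bin/install-thing /opt/thing
  args:
    creates: /opt/thing    # skipped if /opt/thing already exists

- command: /usr/local/bin/uninstall-thing /opt/thing
  args:
    removes: /opt/thing    # skipped if /opt/thing does not exist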
So you should first check whether the available modules are already smart enough. If not, I recommend the stat module. Advantage over the other solution: no "red errors but ignored" in the output.
- name: Check MySQL data directory existence
  stat: path=/var/lib/mysql-slave
  register: mysql_slave_data_dir

- name: Stop MySQL master to copy data directory
  service: name=mysql state=stopped
  sudo: yes
  when: not mysql_slave_data_dir.stat.exists
There are at least two options here.
You can register a variable if the file exists, then use a when condition to execute the command on the condition that the file doesn't already exist:
- command: /usr/bin/test -e {{project_root}}/solr/solr-4.7.0.zip
  register: solr_zip
  ignore_errors: True

- name: Download Solr
  shell: chdir={{project_root}}/solr /usr/bin/wget http://mirror.mel.bkb.net.au/pub/apache/lucene/solr/4.7.0/solr-4.7.0.zip
  when: solr_zip|failed
You could also use the command module with the creates option:
- name: Download Solr
  command: /usr/bin/wget http://mirror.mel.bkb.net.au/pub/apache/lucene/solr/4.7.0/solr-4.7.0.zip chdir={{project_root}}/solr creates={{project_root}}/solr/solr-4.7.0.zip
This article might be useful
Out of it comes this example:
tasks:
  - shell: if [[ -f "/etc/monitrc" ]]; then /bin/true; else /bin/false; fi
    register: result
    ignore_errors: True

  - command: /bin/something
    when: result|failed

  - command: /bin/something_else
    when: result|success

  - command: /bin/still/something_else
    when: result|skipped
So basically you can do this checking by registering a variable from a command and checking its return code. (You can also do this by checking its stdout)
- name: playbook
  hosts: all
  user: <your-user>
  vars:
    project_root: /usr/local
  tasks:
    - name: Check if the solr zip exists.
      command: /usr/bin/test -e {{project_root}}/solr/solr-4.7.0.zip
      ignore_errors: True
      register: solr_exists

    - name: Download Solr
      shell: chdir={{project_root}}/solr wget http://mirror.mel.bkb.net.au/pub/apache/lucene/solr/4.7.0/solr-4.7.0.zip
      when: solr_exists.rc != 0
This basically says that if the /usr/bin/test -e {{project_root}}/solr/solr-4.7.0.zip command returns a code that is not 0 (meaning the file doesn't exist), then execute the Download Solr task.
Hope it helps.
My favourite is to only download the file if it is newer than the local file (which includes the case when the local file does not exist).
The -N option of wget does this: https://www.gnu.org/software/wget/manual/html_node/Time_002dStamping-Usage.html
Sadly, I don't think there is an equivalent feature in get_url,
so it's a very small change:
- name: Download Solr
  shell: chdir={{project_root}}/solr wget -N http://<SNIPPED>/solr-4.7.0.zip
Use the creates argument:
- name: Download Solr
  shell: creates={{working_directory}}/solr/solr-4.7.0.zip chdir={{working_directory}}/solr wget http://mirror.mel.bkb.net.au/pub/apache/lucene/solr/4.7.0/solr-4.7.0.zip
