I want to change the permissions of files based on their extension using Ansible. For example, I have a directory test, and within this directory I have a lot of shell script (.sh) and Python (.py) files. I want to change the permissions of the shell scripts to 0700 and of the Python files to 0644. Can you please help me with how I can achieve this? Thanks.
There are many ways to do this. This should work. Tweak it to your needs.
- file: path={{ item }} mode=0644
  with_fileglob:
    - <full_path>/*.py

- file: path={{ item }} mode=0700
  with_fileglob:
    - <full_path>/*.sh
If the files are on the remote host, do this:
- shell: ls /test/*.py
  register: py_files

- file: path={{ item }} mode=0644
  with_items: "{{ py_files.stdout_lines }}"
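On more recent Ansible versions you can avoid parsing ls output by using the find module on the remote host instead. A minimal sketch (the /test path is taken from the example above):

- name: Find remote Python files
  find:
    paths: /test
    patterns: "*.py"
  register: py_files

- name: Set permissions on Python files
  file:
    path: "{{ item.path }}"
    mode: "0644"
  with_items: "{{ py_files.files }}"

The same pair of tasks can be repeated with patterns: "*.sh" and mode: "0700" for the shell scripts.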
I am trying to emulate the scenario of copying a local file from one directory to another directory on the same machine, but the Ansible copy module always looks for a remote server.
The code I am using:
- name: Configure Create directory
  hosts: 127.0.0.1
  connection: local
  vars:
    customer_folder: "{{ customer }}"
  tasks:
    - file:
        path: /opt/scripts/{ customer_folder }}
        state: directory
    - copy:
        src: /home/centos/absample.txt
        dest: /opt/scripts/{{ customer_folder }}
I am running this playbook like:
ansible-playbook ab_deploy.yml --extra-vars "customer=ab"
So there are two problems I am facing.
First, it should create a directory called ab under /opt/scripts/, but it creates the folder as { customer_folder }}; it is not taking ab as the name of the directory.
Second, as I read the documentation, copy only works for copying files from the local to the remote machine, but what I want is simply to copy from local to local.
How can I achieve this? It might be silly, I am just trying things out.
Please suggest.
I solved it: I used cp under the shell module and then it worked.
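For reference, a sketch of the play with both issues addressed: the missing brace fixed so {{ customer_folder }} resolves to ab, and copy left as-is, which works here because with connection: local the "remote" end is the same machine:

- name: Configure Create directory
  hosts: 127.0.0.1
  connection: local
  vars:
    customer_folder: "{{ customer }}"
  tasks:
    - file:
        path: "/opt/scripts/{{ customer_folder }}"
        state: directory
    - copy:
        src: /home/centos/absample.txt
        dest: "/opt/scripts/{{ customer_folder }}"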
Please be informed that I am trying to copy bulk files from my source server to the destination server using Ansible. While trying, I get an error. Please help me.
---
- name: Going to copy bulk files
  hosts: test
  vars_prompt:
    - name: copy
      prompt: Enter the Bulk File to Copy
      private: no
  tasks:
    - name: Copy bulk files
      shell: cp /tmp/guru/{{ copy }}* /ansible/sri
The shell module executes a shell command on the destination server, which explains the error message cp: cannot stat ‘/tmp/guru/a*’: No such file or directory: the source files of the cp do not exist on the destination server.
Ansible provides a lot of modules which are more appropriate to use than executing shell commands.
In your case, the copy module is the one you need: it copies files from the source server to the destination server. You can combine it with a with_fileglob loop:
tasks:
  - name: Copy bulk files
    copy:
      src: "{{ item }}"
      dest: /ansible/sri
    with_fileglob: "/tmp/guru/{{ copy }}*"
Need to set /proc/sys/net/ipv4/conf/all/forwarding to 1
That can easily be done with a shell command:
- name: Enable IPv4 traffic forwarding
  shell: echo 1 > /proc/sys/net/ipv4/conf/all/forwarding
But that's bad practice: the task will always report "changed".
So I tried the following:
- name: Enable IPv4 traffic forwarding
  copy: content=1 dest="/proc/sys/net/ipv4/conf/all/forwarding" force=yes
Which failed with msg: "Destination /proc/sys/net/ipv4/conf/all not writable"
According to the sources, it seems that copy always requires the parent directory to be writable. But 1) I don't understand why, and 2) is there any other "idiomatic" way to set the destination file to the required value?
While I still do not understand why copy needs to check the parent directory permissions, thanks to @larsks: the sysctl module changes both the sysctl.conf and the /proc values, and this solves my task.
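For reference, a minimal sketch of the sysctl-based task (on newer Ansible releases the module is ansible.posix.sysctl):

- name: Enable IPv4 traffic forwarding
  sysctl:
    name: net.ipv4.conf.all.forwarding
    value: "1"
    sysctl_set: yes
    state: present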
- name: Enable IPv4 traffic forwarding
  copy: content=1 dest="/proc/sys/net/ipv4/conf/all/forwarding" unsafe_writes=true
This will disable Ansible's atomic write functionality and instead write 1 to the file directly.
Atomic writes are good and useful because they mean you will never get a corrupted file that has the output of multiple processes interleaved, but /proc is a special magic thing. The classic Unix dance of writing to a temporary file until you're done, and then renaming it to the final filename you want breaks because /proc doesn't let you create random temporary files.
I found a workaround for this problem:
- name: Create temp copy of mongod.conf
  copy:
    src: /etc/mongod.conf
    dest: /tmp/mongod.conf
    remote_src: yes
  diff: no
  check_mode: no
  changed_when: false

- name: Copy config file mongod.conf
  copy:
    src: "/source/of/your/mongod.conf"
    dest: "/tmp/mongod.conf"
  register: result

- name: Copy temp mongod.conf to /etc/mongod.conf
  shell: "cp --force /tmp/mongod.conf /etc/mongod.conf"
  when: result.changed == true
I'm fairly new to Ansible and I'm trying to create a role that copies a file to a remote server. The local file can have a different name every time I'm running the playbook, but it needs to be copied to the same name remotely, something like this:
- name: copy file
  copy:
    src=*.txt
    dest=/path/to/fixedname.txt
Ansible doesn't allow wildcards, so when I wrote a simple playbook with the tasks in the main playbook I could do:
- name: find the filename
  connection: local
  shell: "ls -1 files/*.txt"
  register: myfile

- name: copy file
  copy:
    src="files/{{ item }}"
    dest=/path/to/fixedname.txt
  with_items:
    - myfile.stdout_lines
However, when I moved the tasks to a role, the first action didn't work anymore, because the relative path is relative to the role while the playbook executes in the root dir of the 'roles' directory. I could add the path to the role's files dir, but is there a more elegant way?
It looks like you need access to a task that looks up information locally, and then uses that information as input to the copy module.
There are two ways to get local information.
use local_action:. That's shorthand for running the task against 127.0.0.1, more info found here. (This is what you've been using.)
use a lookup. This is a plugin system specifically designed for getting information locally. More info here.
In your case, I would go for the second method, using lookup. You could set it up like this example:
vars:
  local_file_name: "{{ lookup('pipe', 'ls -1 files/*.txt') }}"

tasks:
  - name: copy file
    copy: src="{{ local_file_name }}" dest=/path/to/fixedname.txt
Or, more directly:
tasks:
  - name: copy file
    copy: src="{{ lookup('pipe', 'ls -1 files/*.txt') }}" dest=/path/to/fixedname.txt
With regards to paths
the lookup plugin is run from the context of the task (playbook vs role). This means that it will behave differently depending on where it's used.
In the setup above, the tasks are run directly from a playbook, so the working dir will be:
/path/to/project -- this is the folder where your playbook is.
If you where to add the task to a role, the working dir would be:
/path/to/project/roles/role_name/tasks
In addition, the file and pipe plugins run from within the role/files folder if it exists:
/path/to/project/roles/role_name/files -- this means your command is ls -1 *.txt
caveat:
The plugin is called every time you access the variable. This means you cannot trust debugging the variable in your playbook, and then relying on the variable to have the same value when used later in a role!
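If you need the resolved name to stay stable across the play, one option (a sketch, not part of the original answer) is to evaluate the lookup once into a fact; set_fact stores the resulting string rather than the template, so later references do not re-run the pipe command:

- name: Resolve the local file name once
  set_fact:
    local_file_name: "{{ lookup('pipe', 'ls -1 files/*.txt') }}"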
I do wonder, though, about the use case for a file that resides inside a project's Ansible folders but whose name is not known in advance. Where does such a file come from? Isn't it possible to add a layer in between the generation of the file and its use in Ansible, or to have a fixed local path as a variable? Just curious ;)
Just wanted to throw in an additional answer... I have the same problem as you, where I build an ansible bundle on the fly and copy artifacts (rpms) into a role's files folder, and my rpms have versions in the filename.
When I run the ansible play, I want it to install all rpms, regardless of filenames.
I solved this by using the with_fileglob mechanism in ansible:
- name: Copy RPMs
  copy: src="{{ item }}" dest="{{ rpm_cache }}"
  with_fileglob: "*.rpm"
  register: rpm_files

- name: Install RPMs
  yum: name={{ item }} state=present
  with_items: "{{ rpm_files.results | map(attribute='dest') | list }}"
I find it a little bit cleaner than the lookup mechanism.
I'm trying to turn these lines into something I can put in an ansible playbook:
# Install Prezto files
shopt -s extglob
shopt -s nullglob
files=( "${ZDOTDIR:-$HOME}"/.zprezto/runcoms/!(README.md) )
for rcfile in "${files[@]}"; do
  [[ -f $rcfile ]] && ln -s "$rcfile" "${ZDOTDIR:-$HOME}/.${rcfile##*/}"
done
So far I've got the following:
- name: Link Prezto files
  file: src={{ item }} dest=~ state=link
  with_fileglob:
    - ~/.zprezto/runcoms/z*
I know it isn't the same, but it would select the same files: except with_fileglob looks on the host machine, and I want it to look on the remote machine.
Is there any way to do this, or should I just use a shell script?
A clean Ansible way of purging unwanted files matching a glob is:
- name: List all tmp files
  find:
    paths: /tmp/foo
    patterns: "*.tmp"
  register: tmp_glob

- name: Cleanup tmp files
  file:
    path: "{{ item.path }}"
    state: absent
  with_items:
    - "{{ tmp_glob.files }}"
Bruce P's solution works, but it requires an additional file and gets a little messy. Below is a pure Ansible solution.
The first task grabs a list of filenames and stores it in files_to_copy. The second task appends each filename to the path you provide and creates symlinks.
- name: grab file list
  shell: ls /path/to/src
  register: files_to_copy

- name: create symbolic links
  file:
    src: "/path/to/src/{{ item }}"
    dest: "/path/to/dest/{{ item }}"
    state: link
  with_items: "{{ files_to_copy.stdout_lines }}"
When using with_fileglob and similar, the file module does indeed look for files on the server where Ansible is running. Since you want to work with files that exist solely on the remote machine, you could do a couple of things. One approach would be to copy over a shell script in one task and then invoke it in the next task. You could even use the fact that the file was copied as a way to only run the script if it didn't already exist:
- name: Copy link script
  copy: src=/path/to/foo.sh
        dest=/target/path/to/foo.sh
        mode=0755
  register: copied_script

- name: Invoke link script
  command: /target/path/to/foo.sh
  when: copied_script.changed
Another approach would be to create an entire command line that does what you want and invoke it using the shell module:
- name: Generate links
  shell: find ~/.zprezto/runcoms/z* -exec ln -s {} ~ \;
You can use with_lines to accomplish this:
- name: Link Prezto files
  file: src={{ item }} dest=~ state=link
  with_lines: ls ~/.zprezto/runcoms/z*