I have a .dsx file on the remote server which I wish to rename. I have an Ansible playbook that gets the artefact from Nexus, zips it, and then unzips it on the remote server.
That unzipped file needs to be renamed.
- unarchive:
    remote_src: yes
    src: "{{ destinationDir }}/{{ artefactid }}-{{ version }}.tar.gz"
    dest: "{{ destinationDir }}"
The filename which gets unarchived is djp-1.0.2-20200805.123-1.dsx
And I just want djp.dsx.
Actually, the filename I mentioned is just an example. The filename will change every time we deploy. Can you please suggest how I can modify the move command then?
Use the mv command to rename the file, just as you would rename a file in your terminal. As discussed in the comments:
1) set_fact a variable; item.path is the file you want to rename (you have to find the files first): - set_fact: fname: "{{ item.path | basename }}"
2) - set_fact: prefix: "{{ fname | regex_replace('(\w+)-.*', '\\1') }}"
3) - name: Rename file command: mv ./{{ fname }} ./{{ prefix }}.dsx
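Putting the steps together, a minimal sketch (the destinationDir variable and the *.dsx pattern are assumptions carried over from the question):

- name: Find the unarchived .dsx files
  find:
    paths: "{{ destinationDir }}"
    patterns: "*.dsx"
  register: dsx_files

- name: Rename each found file to its prefix
  command: mv "{{ item.path }}" "{{ destinationDir }}/{{ item.path | basename | regex_replace('(\w+)-.*', '\\1') }}.dsx"
  with_items: "{{ dsx_files.files }}"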
Below is the folder structure:
playbook
|- groups_Vars
|- host
|- roles
|  |- archive-artifact
|     |- task
|        |- main.yml
|- archive-playbook.yml
myfile
In my main.yml, I need to archive the playbook directory into playbook.tar.gz.
- archive:
    path: "<earlier path>/playbook/*"
    dest: "<earlier path>/playbook.tar.gz"
    format: gz
The folder that holds the playbook is accessible in the special variable playbook_dir.
Getting the parent directory of a file or directory in Ansible is possible via the filter dirname.
And, as pointed out in the documentation, path can be either a single element or a list of elements, so you could also have myfile included in that list.
So, to archive the playbook directory in the parent folder of the playbook directory, one could do:
- archive:
    path:
      - "{{ playbook_dir }}/*"
      - "{{ playbook_dir | dirname }}/myfile"
    dest: "{{ playbook_dir | dirname }}/playbook.tar.gz"
    format: gz
I'm trying to figure out how one would copy or write the contents of a slurped variable to a remote (preferably) file. If this is not possible, what's the cleanest way to do it in steps?
I have something like this:
- name: Load r user public key
  slurp:
    src: *path*
  register: slurped_r_key

- name: Decode r key
  set_fact:
    r_content: "{{ slurped_r_key.content | b64decode }}"
I want to get the contents of {{ r_content }} into a file on the remote machines that are part of an inventory group. If I cannot do that directly, what's the best way? Should I copy the contents to a local file and then scp the file over to the remote machines?
Thanks in advance!
To copy the variable to a file, you can try the task below:
- name: copy
  copy:
    content: "{{ r_content }}"
    dest: /tmp/testing
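As a variation, the b64decode can happen inline in the copy task, skipping the intermediate set_fact; a sketch reusing the paths from the question and answer:

- name: Write the slurped key straight to the remote file
  copy:
    content: "{{ slurped_r_key.content | b64decode }}"
    dest: /tmp/testing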
I'm trying to find files older than one day in Ansible and, after that, create a tar.gz of those files. I've tried the archive module, but it creates a tar of only the last element in the list. Is there any way to create a tar.gz including all the files?
Below is my script:
- name: Check all files
  find:
    paths: /myfiles
    file_type: file
    age: 1d
    age_stamp: mtime
  register: files
  failed_when: files.matched < 10
- name: Remove previous tarFile
  file:
    path: /tmp/test.tar.gz
    state: absent

- name: compress all the files in tar.gz
  archive:
    path: "{{ item.path }}"
    dest: /tmp/test.tar.gz
    format: gz
  with_items:
    - "{{ files.files }}"
it is creating only a tar of the last element in the list
It is creating and overwriting /tmp/test.tar.gz for each file in the loop. When you check, you see only the last one.
If you look at the archive module docs, you will see:
path: Remote absolute path, glob, or list of paths or globs for the file or files to compress or archive.
So you can provide the list of files as a value for the path parameter:
- name: compress all the files in tar.gz
  archive:
    path: "{{ files_to_archive }}"
    dest: /tmp/test.tar.gz
    format: gz
  vars:
    files_to_archive: "{{ files.files | map(attribute='path') | list }}"
I have created multiple zip files using the method below.
- name: 'Create zip archive of {{ date_input }} NMON HTML files'
  archive:
    path: /tmp/{{ inventory_hostname }}_*.html
    dest: /tmp/NMON-{{ inventory_hostname }}.zip
    format: zip
  when: option == "5"
  delegate_to: localhost
For some weird reason I'm having trouble with a simple task: copying the contents of the folder myfiles (a few files in there) to the dist/myfiles location. The task looks like this:
- name: Deploy config files like there is no tomorrow
  copy:
    src: "{{ item }}"
    dest: "/home/{{ ansible_user_id }}/dist/{{ item }}"
  with_items:
    - 'config'
    - 'myfiles/'
The myfiles folder exists under dist, and the config file is copied to the dist folder. Is this possible in Ansible, or should I copy each file separately? Am I doing it completely wrong?
Your task properly copies both the config file and myfiles on Debian and CentOS targets.
If for some reason you have a problem, you might have a look at Looping over Fileglobs.
You need to split the task into two, with the second one looking like:
- name: Deploy multiple config files
  copy:
    src: "{{ item }}"
    dest: "/home/{{ ansible_user_id }}/dist/myfiles/{{ item | basename }}"
  with_fileglob:
    - /path/to/myfiles/*
For a recursive copy, check this question on ServerFault.
Alternatively, you could use the synchronize module, but pay special attention when using become. See this question on SuperUser.
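For illustration, a minimal synchronize sketch, assuming myfiles sits next to the playbook on the control machine (that src path is an assumption):

- name: Sync the myfiles folder to the remote dist directory
  synchronize:
    src: myfiles/
    dest: "/home/{{ ansible_user_id }}/dist/myfiles/"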
I'm trying to turn these lines into something I can put in an Ansible playbook:
# Install Prezto files
shopt -s extglob
shopt -s nullglob
files=( "${ZDOTDIR:-$HOME}"/.zprezto/runcoms/!(README.md) )
for rcfile in "${files[@]}"; do
  [[ -f $rcfile ]] && ln -s "$rcfile" "${ZDOTDIR:-$HOME}/.${rcfile##*/}"
done
So far I've got the following:
- name: Link Prezto files
  file: src={{ item }} dest=~ state=link
  with_fileglob:
    - ~/.zprezto/runcoms/z*
I know it isn't the same, but it would select the same files; the catch is that with_fileglob looks on the host machine, and I want it to look on the remote machine.
Is there any way to do this, or should I just use a shell script?
A clean Ansible way of purging unwanted files matching a glob is:
- name: List all tmp files
  find:
    paths: /tmp/foo
    patterns: "*.tmp"
  register: tmp_glob

- name: Cleanup tmp files
  file:
    path: "{{ item.path }}"
    state: absent
  with_items:
    - "{{ tmp_glob.files }}"
Bruce P's solution works, but it requires an additional file and gets a little messy. Below is a pure Ansible solution.
The first task grabs a list of filenames and stores it in files_to_copy. The second task appends each filename to the path you provide and creates symlinks.
- name: grab file list
  shell: ls /path/to/src
  register: files_to_copy

- name: create symbolic links
  file:
    src: "/path/to/src/{{ item }}"
    dest: "/path/to/dest/{{ item }}"
    state: link
  with_items: "{{ files_to_copy.stdout_lines }}"
The file module does indeed look on the server where Ansible is running for files when using with_fileglob, etc. Since you want to work with files that exist solely on the remote machine, you could do a couple of things. One approach would be to copy over a shell script in one task and then invoke it in the next task. You could even use the fact that the file was copied as a way to run the script only if it didn't already exist:
- name: Copy link script
  copy:
    src: /path/to/foo.sh
    dest: /target/path/to/foo.sh
    mode: "0755"
  register: copied_script

- name: Invoke link script
  command: /target/path/to/foo.sh
  when: copied_script.changed
Another approach would be to create an entire command line that does what you want and invoke it using the shell module:
- name: Generate links
  shell: find ~/.zprezto/runcoms/z* -exec ln -s {} ~ \;
You can use with_lines to accomplish this (note that lookups such as with_lines run on the control machine, so the glob is evaluated there):
- name: Link Prezto files
  file: src={{ item }} dest=~ state=link
  with_lines: ls ~/.zprezto/runcoms/z*
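To stay entirely on the remote machine, a sketch using the find module instead (the register name and the use of ansible_env.HOME are my assumptions; the dot-prefixed dest mirrors the original shell loop):

- name: Find Prezto runcom files on the remote machine
  find:
    paths: "{{ ansible_env.HOME }}/.zprezto/runcoms"
    patterns: "z*"
  register: prezto_files

- name: Link Prezto files
  file:
    src: "{{ item.path }}"
    dest: "{{ ansible_env.HOME }}/.{{ item.path | basename }}"
    state: link
  with_items: "{{ prezto_files.files }}"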