I am trying to execute the scripts from a directory in a particular order, with an interval of 90 seconds between iterations. For instance, I want to start the scripts under /opt/scripts/, which contains SK-1.sh, SK-2.sh, PX.sh, N1.sh, and N2.sh. I want to start the SK scripts first, then PX, and finally N.
- name: execute scripts A
  shell: "ls -ltr {{ item }}"
  args:
    chdir: "/opt/scripts/"
  with_fileglob:
    - "/opt/scripts/*SK*.sh"
    - "/opt/scripts/*PX*.sh"
    - "/opt/scripts/*N*.sh"
  loop_control:
    pause: 90
But fileglob looks for those scripts only on the controller host (Ansible Tower), so this does not work. Is there some kind of filter I can use that works on the remote hosts rather than the controller host?
I am trying to avoid listing the files in a previous task and registering the items to process, since that turns it into a two-step task.
Q: "Start scripts that are SK and then PX and finally N."
A: The Ansible find module neither keeps the order of the patterns nor sorts the results. You'll have to iterate over the list of globs yourself and sort the results to achieve the required order. For example, create a file with the tasks
shell> cat find_and_execute.yml
- find:
    paths: /tmp
    patterns: "{{ eitem }}"
  register: result
- debug:
    msg: "{{ '%H:%M:%S'|strftime }} Execute {{ item }}"
  loop: "{{ result.files|map(attribute='path')|list|sort }}"
  loop_control:
    pause: 3
and use it in a playbook. For example
shell> cat pb.yml
- hosts: test_11
  tasks:
    - include_tasks: find_and_execute.yml
      loop:
        - 'SK*.sh'
        - 'PX*.sh'
        - 'N*.sh'
      loop_control:
        loop_var: eitem
gives
msg: 18:19:14 Execute /tmp/SK-1.sh
msg: 18:19:17 Execute /tmp/SK-2.sh
msg: 18:19:21 Execute /tmp/PX.sh
msg: 18:19:24 Execute /tmp/N1.sh
msg: 18:19:27 Execute /tmp/N2.sh
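In a real run, the debug task above would be replaced by the actual execution; a minimal sketch, assuming the scripts can be run with sh and using the 90-second interval from the question:
- command: "sh {{ item }}"
  loop: "{{ result.files|map(attribute='path')|list|sort }}"
  loop_control:
    pause: 90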
Fit the parameters to your needs. For comparison, a single find with all of the patterns does not keep their order:
- find:
    paths: /tmp
    patterns:
      - 'SK*.sh'
      - 'PX*.sh'
      - 'N*.sh'
  register: result
- debug:
    msg: "{{ result.files|map(attribute='path')|list }}"
gives
msg:
- /tmp/SK-2.sh
- /tmp/SK-1.sh
- /tmp/N2.sh
- /tmp/N1.sh
- /tmp/PX.sh
I am working on a small playbook that will search for files in specified directories and then delete them if they meet certain conditions. I have the following playbook so far:
- hosts:
    - localhost
  gather_facts: false
  tasks:
    - name: find logs
      find:
        paths: "{{ item.0 }}"
        file_type: file
        patterns: "{{ item.1 }}"
      register: find_logs
      with_nested:
        - ["/var/log/apache2", "/var/log/nginx"]
        - ["access.log", "error.log"]
    - debug:
        var: item
      loop:
        - "{{ find_logs }}"
This will obviously look into the /var/log/apache2 and /var/log/nginx directories and search for the access.log and error.log files. Now, following Ansible's documentation, I want to access the files return values and their paths. The issue I'm having is with the nested loop and the registered find_logs variable, which holds a list of dictionaries in its results key. If I use find_logs.results I get a list of dictionaries, and each of these dictionaries holds another list of files in its files key. How can I 'flatten' this list further to retrieve files.path for every element produced by the nested loop? To be honest, I also tried find_logs | json_query('results[*].files[*]'), but that gives me another nested list, and I can't seem to iterate over it to get what I want (the path of each file). Any idea how to make this work?
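As a side note, the json_query attempt above is close: JMESPath's flatten operator [] merges the nested projections that [*] keeps nested, so a sketch along these lines should already yield a flat list of paths:
- debug:
    msg: "{{ item }}"
  loop: "{{ find_logs | json_query('results[].files[].path') }}"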
I completely misunderstood the documentation for the find module. The nested loop can be dropped in this case and replaced with the following syntax:
- hosts:
    - localhost
  gather_facts: false
  tasks:
    - name: find logs
      find:
        paths:
          - /var/log/apache2
          - /var/log/nginx
        file_type: file
        patterns:
          - "access.log"
          - "error.log"
          - "other_vhosts_access.log"
      register: find_logs
    - name: check what was registered
      debug:
        msg: "my path -> {{ item.path }}"
      with_items:
        - "{{ find_logs.files }}"
I am currently taking two hosts and dynamically adding them to a group, followed by a synchronize task that uses with_together to walk three lists of two elements in parallel and copy the specified files between two remote servers.
Here's an example based on the idea:
---
- name: Configure Hosts for Copying
  hosts: localhost
  gather_facts: no
  tasks:
    - name: Adding given hosts to new group...
      add_host:
        name: "{{ item }}"
        groups: copy_group
      with_items:
        - ["remoteDest1", "remoteDest2"]

- name: Copy Files between servers
  hosts: copy_group
  gather_facts: no
  tasks:
    - name: Copying files...
      synchronize:
        src: "{{ item[1] }}"
        dest: "{{ item[2] }}"
      with_together:
        - ["remoteSrc1", "remoteSrc2"]
        - ["/tmp/remote/source/one/", "/tmp/remote/source/two/"]
        - ["/tmp/remote/dest/one/", "/tmp/remote/dest/two/"]
      delegate_to: "{{ item[0] }}"
Currently, it does both operations for both servers, resulting in 4 operations.
I need it to synchronize like so:
copy /tmp/remote/source/one/ from remoteSrc1 to /tmp/remote/dest/one/ on remoteDest1
copy /tmp/remote/source/two/ from remoteSrc2 to /tmp/remote/dest/two/ on remoteDest2
That would be a 1:1 ratio, essentially acting on the hosts in the same manner as with_together acts on the lists.
The hosts are obtained dynamically, so I can't just make a different play for each host.
Since synchronize is essentially a simplified front end for rsync, a simple solution that uses rsync directly would also be much appreciated.
There isn't native functionality for this, so this is how I solved it:
Given the original task, add the following two lines:
- "{{ groups['copy_group'] }}"
when: inventory_hostname == item[3]
To get:
- name: Copying files...
  synchronize:
    src: "{{ item[1] }}"
    dest: "{{ item[2] }}"
  with_together:
    - ["remoteSrc1", "remoteSrc2"]
    - ["/tmp/remote/source/one/", "/tmp/remote/source/two/"]
    - ["/tmp/remote/dest/one/", "/tmp/remote/dest/two/"]
    - "{{ groups['copy_group'] }}"
  delegate_to: "{{ item[0] }}"
  when: inventory_hostname == item[3]
Essentially, by adding the hosts as a list, they can be used in the when statement to execute the task only when the current host (inventory_hostname) matches the host currently being indexed in the list.
The result is that the task runs against each host only once, in a serial manner, paired with the other list items that share the same index.
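On Ansible 2.5 and later, the same pairing can be expressed with loop and the zip filter; a rough sketch of an equivalent task, using the same literal lists as above:
- name: Copying files...
  synchronize:
    src: "{{ item.1 }}"
    dest: "{{ item.2 }}"
  loop: "{{ ['remoteSrc1', 'remoteSrc2']
            | zip(['/tmp/remote/source/one/', '/tmp/remote/source/two/'],
                  ['/tmp/remote/dest/one/', '/tmp/remote/dest/two/'],
                  groups['copy_group']) | list }}"
  delegate_to: "{{ item.0 }}"
  when: inventory_hostname == item.3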
I am writing a playbook to locate a string pattern in a sequence of files. If I run my utility through the command module, it generates one or more strings on STDOUT. To run this across a number of systems I would like to run the command with with_items:
- command: "findstring {{ item }}"
  with_items:
    - "string1"
    - "string2"
  register: found
  failed_when: found.rc >= 2
And then iterate over the result to post-process the info:
- name: Print strings we found
  debug:
    var: item
  with_items: "{{ found.results }}"
Is there something equivalent to loop.index that can be used with results in the task above? That would allow me to do something like {{ item[INDEX].stdout }} to get the strings that were generated. I haven't been able to find an answer in the official documentation, so I thought I would post here to see what the gurus think.
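One note up front: each element of results already carries the stdout of its own iteration (plus the original loop item as item.item), so an explicit index is often unnecessary; a sketch:
- name: Print strings we found
  debug:
    msg: "{{ item.item }} -> {{ item.stdout }}"
  with_items: "{{ found.results }}"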
If you need to iterate over every line from all commands, use:
- debug:
    msg: "Do smth for line {{ item }}"
  with_items: "{{ found | json_query('results[].stdout_lines[]') }}"
This takes every element from found.results, then every element from each stdout_lines.
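If you'd rather avoid json_query (which requires the jmespath Python library on the controller), the same flattening can be done with map and flatten (Ansible 2.5+); a sketch:
- debug:
    msg: "Do smth for line {{ item }}"
  loop: "{{ found.results | map(attribute='stdout_lines') | flatten }}"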
For me this works in ansible [core 2.11.6]:
- name: Command which outputs multiple lines
  ansible.builtin.command:
    cmd: ls -l /
  register: _ls_cmd
  changed_when: no

- name: Debug to show each line
  ansible.builtin.debug:
    msg: "ITEM: {{ item }}"
  with_items: "{{ _ls_cmd.stdout_lines }}"
I have the following playbook
- hosts: all
  gather_facts: False
  tasks:
    - name: Check status of applications
      shell: somecommand
      register: result
      changed_when: False
      always_run: yes
After this task, I want to run a mail task that mails the accumulated output of the command above, registered in the variable result. Right now, when I try to do this, I get an email for every single host. Is there some way to accumulate the output across multiple hosts and register it to a variable?
You can extract result from hostvars inside a run_once task:
- hosts: mygroup
  gather_facts: false
  tasks:
    - shell: date
      register: date_res
      changed_when: false

    - debug:
        msg: "{{ ansible_play_hosts | map('extract', hostvars, 'date_res') | map(attribute='stdout') | list }}"
      run_once: yes
This will print out a list of all date_res.stdout from all hosts in the current play and run this task only once.
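The same expression can feed a single mail task for the original question; a sketch in which the recipient and subject are placeholders, and result is the registered variable from the question:
- mail:
    to: ops@example.com
    subject: "Accumulated command output from all hosts"
    body: "{{ ansible_play_hosts | map('extract', hostvars, 'result') | map(attribute='stdout') | join('\n') }}"
  run_once: yes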
While trying to copy the result of date_res.stdout to a file on the host, only a single host's data is copied; not all hosts' data is available:
- name: copy all
  copy:
    content: "{{ allhost_out.stdout }}"
    dest: "/ngs/app/user/outputsecond-{{ inventory_hostname }}.txt"
I'm creating a deployment playbook for our web services. Each web service is in its own directory, such as:
/webapps/service-one/
/webapps/service-two/
/webapps/service-three/
I want to check whether the service directory exists and, if so, run a shell script that stops the service gracefully. Currently, I can only complete this step by using ignore_errors: yes.
- name: Stop services
  with_items: services_to_stop
  shell: "/webapps/scripts/stopService.sh {{ item }}"
  ignore_errors: yes
While this works, the output is very messy if one of the directories doesn't exist or a service is being deployed for the first time. I effectively want to do something like one of these:
This:
- name: Stop services
  with_items: services_to_stop
  shell: "/webapps/scripts/stopService.sh {{ item }}"
  when: shell: [ -d /webapps/{{item}} ]
or this:
- name: Stop services
  with_items: services_to_stop
  shell: "/webapps/scripts/stopService.sh {{ item }}"
  stat:
    path: /webapps/{{item}}
  register: path
  when: path.stat.exists == True
I'd collect facts first and then do only necessary things.
- name: Check existing services
  stat:
    path: "/webapps/{{ item }}"
  with_items: "{{ services_to_stop }}"
  register: services_stat

- name: Stop existing services
  with_items: "{{ services_stat.results | selectattr('stat.exists') | map(attribute='item') | list }}"
  shell: "/webapps/scripts/stopService.sh {{ item }}"
Here selectattr('stat.exists') keeps only the results whose directory exists, and map(attribute='item') maps each result back to its original service name.
Also note that bare variables in with_items don't work since Ansible 2.2, so you should template them.
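For example, only the template markers differ:
# bare variable, no longer works:
with_items: services_to_stop
# templated:
with_items: "{{ services_to_stop }}"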
This will let you get a list of existing directory names into the list variable dir_names (use recurse: no to read only the first level under webapps):
---
- hosts: localhost
  connection: local
  vars:
    dir_names: []
  tasks:
    - find:
        paths: "/webapps"
        file_type: directory
        recurse: no
      register: tmp_dirs

    - set_fact: dir_names="{{ dir_names + [item['path']] }}"
      no_log: True
      with_items:
        - "{{ tmp_dirs['files'] }}"

    - debug: var=dir_names
You can then use dir_names in your "Stop services" task via with_items. Since you apparently only need the name of the directory under /webapps, you probably want the | basename Jinja2 filter to get that, so something like this:
- name: Stop services
  with_items: "{{ dir_names }}"
  shell: "/webapps/scripts/stopService.sh {{ item | basename }}"