Team,
I have 10 tasks and I want to run tasks 2-10 only if a condition in task 1 is met.
- name: 1Check if the node needs to be processed
  stat: /tmp/fscache-cleaned-up1.log
  register: dc_status

- name: 2Check if the node needs to be processed
  stat: /tmp/fscache-cleaned-up2.log
  register: dc_status
  failed_when: dc_status.stat.exists
..
..
..
You have to use "path" in your stat module call and "block" for combining dependent tasks, like this:
- name: Check if the node needs to be processed
  stat:
    path: /tmp/fscache-cleaned-up1.log
  register: dc_status

- name: Run this only when the log file exists
  block:
    - name: Install something
      yum:
        name:
          - somepackage
        state: present

    - name: Apply a config template
      template:
        src: templates/src.j2
        dest: /etc/foo.conf

    - name: Start a service and enable it
      service:
        name: bar
        state: started
        enabled: true
  when:
    - dc_status.stat.exists
    - dc_status.stat.is_file
Additional information: the ansible stat module and block usage documentation.
Related
I am implementing a role in ansible where I need to:
Start an application (retry max of 3 times). If the app was successfully started, it will return "OK", otherwise it will return "NOK".
If the app was not successfully started (NOK), we should try to delete an xpto directory (max 3 times, after each NOK status). After these 3 attempts, if the app still was not successfully started, we should get a fail status and abort execution.
If the app starts OK, there is no need to clean the directory, and we are ready to run the app.
We need to be aware of this:
Each time I try to start the app, if I get status "NOK" I must run a task to delete the xpto directory.
We can retry up to 3 times to start the app (and to delete the directory).
Each time we try to start the app and get NOK status, we must run the task to delete the directory.
If at any attempt of starting the app we get status OK (app started with success), we don't want to run the task to delete the directory; in this case we should move to the last task, to run the app.
The role has only these 3 tasks (start app, delete directory, run the app).
For now I have only this, where I am missing a lot of the mentioned features:
---
- name: Start app
  command: start app
  register: result
  tags: myrole

- name: Delete directory xpto if app didn't start ok
  command: rm -rf xpto
  when:
    - result.stdout is search("NOK")
  tags: myrole

- name: Run the application
  command: app run xpto
  when:
    - result.stdout is search("OK")
  tags: myrole
I have been pointed to another question with a response which allows me to implement the 3 retries with an abort option.
I am still missing the way to implement the option where the app starts ok (task1) and we proceed directly to run the app (task3), not going through task2, and I don't know where to start.
Building on the response you have been pointed to (which was inspired by a blog post on dev.to), here is an adaptation to your use case with some good practice added.
start_run_app.yml would contain the needed tasks that can be retried:
---
- name: Group of tasks to start and run the application which could fail
  block:
    - name: Increment attempts counter
      ansible.builtin.set_fact:
        attempt_number: "{{ attempt_number | d(0) | int + 1 }}"

    - name: Start the application
      ansible.builtin.command: start app
      register: result_start
      failed_when: result_start.rc != 0 or result_start.stdout is search('NOK')

    # If we get here then the application started ok above.
    # Register the run result to disambiguate a possible failure
    # in the rescue section.
    - name: Run the application
      ansible.builtin.command: app run xpto
      register: result_run

  rescue:
    - name: Fail/stop here if application started but did not run
      ansible.builtin.fail:
        msg: "Application started but did not run. Exiting"
      when:
        - result_run is defined
        - result_run is failed

    - name: Delete directory xpto since app didn't start ok
      ansible.builtin.command: rm -rf xpto

    - name: "Fail if we reached the max of {{ max_attempts | d(3) }} attempts"
      # Default will be 3 attempts if max_attempts is not passed as a parameter
      ansible.builtin.fail:
        msg: Maximum number of attempts reached
      when: attempt_number | int == max_attempts | d(3) | int

    - name: Show number of attempts
      ansible.builtin.debug:
        msg: "Group of tasks failed on attempt {{ attempt_number }}. Retrying"

    - name: Add delay if configured
      # No delay if retry_delay is not passed as a parameter
      ansible.builtin.wait_for:
        timeout: "{{ retry_delay | int }}"
      when: retry_delay is defined

    - name: Include ourselves to retry
      ansible.builtin.include_tasks: start_run_app.yml
And you can include this file like so (example for a full playbook, adapt to your exact need).
---
- name: Start and run my application
  hosts: my_hosts
  tasks:
    - name: Include retry-able set of tasks to start and run application
      ansible.builtin.include_tasks: start_run_app.yml
      vars:
        max_attempts: 6
        retry_delay: 5
Considering that none of the tasks shown has any error handling, it seems that the scripts all return exit code 0. It would be better if the script logic that prints "action terminated" also set the exit code to a custom value, which can then be used in the Ansible logic:
# changes in the shell script
...
echo "action terminated"
exit 130
...
That way, task 2 can be set up with:
- name: Task2
  command: "{{ home }}/script2.sh"
  when:
    - result.rc == 130
  tags: role1
After the execution of task 2, include an additional task that retries task 1:
- name: Task2.5
  command: "{{ home }}/script1.sh"
  register: result2
  until: result2.rc != 130
  ignore_errors: yes
  retries: 3
  delay: 5
  when:
    - result.rc == 130
  tags: role1
- name: Fail if script1 failed after all the attempts
  fail:
    msg: "script1 could not be completed"
  when:
    - result.rc == 130
    - result2.failed
  tags: role1
Note that the when condition checks whether the first attempt failed; the register on the retried task keeps track of its status in a different variable (result2), which is the one used in the evaluation of until. The fail task will be executed only if both attempts were unsuccessful.
EDIT
If changing the exit code is not possible, you need to replace that condition with a search of the output text:
- name: Task1
  command: "{{ home }}/script1.sh"
  register: result
  tags: role1

- name: Task2
  command: "{{ home }}/script2.sh"
  when:
    - result.stdout is search("action terminated")
  tags: role1

- name: Task2.5
  command: "{{ home }}/script1.sh"
  register: result2
  until: "'action terminated' not in result2.stdout"
  ignore_errors: yes
  retries: 3
  delay: 5
  when:
    - result.stdout is search("action terminated")
  tags: role1

- name: Exit the execution if script1 failed after all the attempts
  fail:
    msg: "script1 could not be completed"
  when:
    - result.stdout is search("action terminated")
    - result2.failed
  tags: role1

- name: Task3
  command: "{{ home }}/script3.sh"
  tags: role1
When a task fails on a host in Ansible, it is not triggered again by a later task. When a task fails on a host, the host is removed from the current batch, and the rest of the tasks will not run on that host. If you want to try a task n times, you need to use until with retries.
- name: Task1
  command: "{{ home }}/script1.sh"
  register: result
  tags: role1

- name: Task2
  command: "{{ home }}/script2.sh"
  when:
    - result.stdout is search("action terminated")
  tags: role1

- name: Task1 again
  command: "{{ home }}/script1.sh"
  register: result2
  until: result2.stdout is search("action terminated")
  retries: 3
  tags: role1

- name: Task2 again
  command: "{{ home }}/script2.sh"
  when:
    - result2.stdout is search("action terminated")
  tags: role1

- name: Task3
  command: "{{ home }}/script3.sh"
  when:
    - result.stdout is not search("action terminated")
  tags: role1
Otherwise, you can only trigger a task by using a handler; a playbook runs its tasks from top to bottom and never goes back to a previous task.
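For reference, a minimal sketch of that handler pattern (a sketch only; the changed_when condition and the handler name are assumptions layered on the scripts above):

- hosts: all
  tasks:
    - name: Task1
      command: "{{ home }}/script1.sh"
      register: result
      # Assumption: treat "action terminated" in the output as a change,
      # so that the handler gets notified
      changed_when: result.stdout is search("action terminated")
      notify: run cleanup script

  handlers:
    - name: run cleanup script
      command: "{{ home }}/script2.sh"

Keep in mind that handlers run once at the end of the play (or at a meta: flush_handlers task), not immediately after the notifying task.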
I have two YAML files. One is azure-pipeline.yml:
name: test-resources
trigger: none

resources:
  repositories:
    - repository: pipeline
      type: git
      name: test-templates

parameters:
  - name: whetherYesOrNo
    type: string
    default: Yes
    values:
      - Yes
      - No

extends:
  template: pipelines/ansible-playbook-deploy.yml@pipeline
  parameters:
    folderName: test-3scale
With this file, when I run the pipeline, I can choose Yes or No as options before running it.
The other one is the playbook.yml for Ansible:
- hosts: localhost
  connection: local
  become: true
  vars_files:
    - test_service.yml
    - "vars/test.yml"
  collections:
    - test_collection
  tasks:
    - name: Find out playbooks pwd
      shell: pwd
      register: playbook_path_output
      no_log: false

    - debug: var=playbook_path_output.stdout

    - name: echo something
      shell: echo 'test this out'
      register: playbook_ls_content_output
      no_log: false

    - debug: var=playbook_ls_content_output.stdout
I wish to add a condition to the playbook.yml task, so that when I choose "Yes" when running the pipeline, the task named "echo something" will run, but if I choose "No", this task will be skipped. I am really new to YAML syntax and logic. Could someone help? Many thanks!
This runs successfully on my side (I can judge the condition with no problem; the template expression is expanded at compile time):
azure-pipeline.yml
trigger: none

parameters:
  - name: whetherYesOrNo
    type: string
    default: Yes
    values:
      - Yes
      - No

extends:
  template: pipelines/ansible-playbook-deploy.yml
  parameters:
    whetherYesOrNo: ${{ parameters.whetherYesOrNo }}
ansible-playbook-deploy.yml
parameters:
  - name: whetherYesOrNo
    type: string
    default: No

steps:
  - ${{ if eq(parameters.whetherYesOrNo, 'Yes') }}:
      - task: PowerShell@2
        inputs:
          targetType: 'inline'
          script: |
            # Write your PowerShell commands here.
            Write-Host "Hello World"
(Screenshots of the repository structure and of the pipeline results for Yes and No are omitted.)
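If you would rather keep the decision inside playbook.yml itself, here is an alternative sketch. It assumes the template's ansible-playbook step passes the parameter through as an extra var; that step is not shown in the original template, so treat the invocation below as hypothetical:

# Assumed invocation somewhere in the template:
#   ansible-playbook playbook.yml -e "whetherYesOrNo=${{ parameters.whetherYesOrNo }}"

- name: echo something
  shell: echo 'test this out'
  register: playbook_ls_content_output
  no_log: false
  # The extra var arrives as a string, so compare against 'Yes'
  when: whetherYesOrNo | default('No') == 'Yes'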
I have the following two playbook / tasks files. I want to re-run the child tasks until the result is neither changed nor failed, but at most 6 times.
I have the impression that the until statement is simply ignored with the import statement.
The child file is executed only once, with no errors or failures.
When I inserted my test task directly under the until statement in the parent file, everything worked fine.
But I need to use more than one child task (run the main task and then restart).
I know that you can't use an until loop with blocks, and that include_tasks doesn't work with until either. But I read the documentation as saying that import_tasks and until should work together (https://docs.ansible.com/ansible/latest/collections/ansible/builtin/import_tasks_module.html#attributes)
Is this behavior correct / intended or am I doing something wrong? If this behavior is intended, how could I solve my problem?
playbook.yaml
- name: "make this working"
hosts: mygroup
tasks:
- name: rerun up to 6 times if not everything is ok
until: (not result_update.changed) and (not result_update.failed)
retries: 5
ansible.builtin.import_tasks: "./children.yaml"
children.yaml
- name: shell1
  ansible.windows.win_shell: echo "test1"
  register: result_update
  # changed_when: false

- name: shell2
  ansible.windows.win_shell: echo "test2"
I couldn't solve this problem with the until statement. Instead I built my own loop system, which is clearly not very Ansible-like and not an ideal solution, but it works fine (at least for my needs).
playbook.yaml
- name: "make this working"
hosts: mygroup
tasks:
- name: create variable for maximum tries
ansible.builtin.set_fact:
tries_max: 10
- name: create variable for counting tries
ansible.builtin.set_fact:
tries_counter: 1
- name: run this ({{ tries_counter }} / {{ tries_max }})
ansible.builtin.include_tasks: "./children.yaml"
children.yaml
- name: shell1
  ansible.windows.win_shell: echo "test1"
  register: result_update
  # changed_when: false

- name: shell2
  ansible.windows.win_shell: echo "test2"

- name: increase try counter by 1
  ansible.builtin.set_fact:
    tries_counter: "{{ (tries_counter | int) + 1 }}"

- name: run tasks again if not everything is ok ({{ tries_counter }} / {{ tries_max }})
  when: ((result_update.changed) or (result_update.failed)) and ((tries_counter | int) <= (tries_max | int))
  ansible.builtin.include_tasks: "./children.yaml"
I am new to ansible and have exhausted my forum searches. I cannot seem to find an answer for this issue I am having with with_items and when. The playbook as it is now will run, but it results in failed messages "src file does not exist" for every path in the list that does not exist on that machine.
Since this is being run against several machines, that's a lot of failed (red) messages that mean nothing. I thought the when statement would only run the task if the statresult existed. This is not the case.
Basically what I am trying to do is check several machines to see if these two paths exist. If they do, create a symlink for each one. All the paths to check are different. Right now I have:
---
- hosts: all
  become: yes
  gather_facts: no
  tasks:
    - name: Check that domains exist
      stat:
        path: '/path/to/the/domain/{{ item.domainpath }}'
        get_attributes: no
        get_checksum: no
        get_md5: no
        get_mime: no
      register: item.statresult
      with_items:
        - { domainpath: 'path1/', statresult: 'stat_result_path1' }
        - { domainpath: 'path2/', statresult: 'stat_result_path2' }
        - { domainpath: 'path3/', statresult: 'stat_result_path3' }

    - name: Create symlink for bin on existing domain machines
      file:
        src: '/path/to/the/domain/{{ item.srcbin }}'
        dest: /path/new/symlink_bin_link
        state: link
      with_items:
        - { srcbin: 'path1/bin/', domainexists: 'stat_result_path1.stat.exists' }
        - { srcbin: 'path2/bin/', domainexists: 'stat_result_path2.stat.exists' }
        - { srcbin: 'path3/bin/', domainexists: 'stat_result_path3.stat.exists' }
      when: item.domainexists
      ignore_errors: yes

    - name: Create symlink for config on existing domain machines
      file:
        src: '/path/to/the/domain/{{ item.srcconfig }}'
        dest: /path/new/symlink_config_link
        state: link
      with_items:
        - { srcconfig: 'path1/config/', domainexists: 'stat_result_path1.stat.exists' }
        - { srcconfig: 'path2/config/', domainexists: 'stat_result_path2.stat.exists' }
        - { srcconfig: 'path3/config/', domainexists: 'stat_result_path3.stat.exists' }
      when: item.domainexists
      ignore_errors: yes
I have to use ignore_errors because otherwise it will not go on to the second task. I have tried using when: item.domainexists == true, but that results in the task getting skipped even when it matches a path that exists.
Even if the when statement is evaluated for every with_items entry, that should not matter, because as long as it matches one, it should do the task correctly, right?
This is how your playbook should look:
---
- hosts: all
  become: yes
  gather_facts: no
  tasks:
    - name: Check that domains exist
      stat:
        path: /path/to/the/domain/{{ item }}
      loop:
        - path1
        - path2
        - path3
      register: my_stat

    - name: Ensure symlinks are created for bin on existing domain machines
      file:
        src: /path/new/symlink_bin_link
        dest: /path/to/the/domain/{{ item }}/bin
        state: link
      loop: "{{ my_stat.results | selectattr('stat.exists') | map(attribute='item') | list }}"

    - name: Ensure symlinks are created for config on existing domain machines
      file:
        src: /path/new/symlink_config_link
        dest: /path/to/the/domain/{{ item }}/config
        state: link
      loop: "{{ my_stat.results | selectattr('stat.exists') | map(attribute='item') | list }}"
Explanation:
register: item.statresult is a nonsensical construct in Ansible; you should provide the name of a variable as the parameter
that variable will contain a list of results for a task running in a loop
you should process that list (see this answer to learn about selectattr and map) to get a list of only the paths which exist
you should loop over that filtered-and-mapped list
Also: src and dest should be defined the other way round for symlinks than in your code.
Further, you can combine the last two tasks into one by adding a product filter to the iterable definition, as sketched below.
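For example, a sketch of that combined task, building on the my_stat variable registered above (item.0 is the existing domain path, item.1 is one of bin or config):

- name: Ensure symlinks are created for bin and config on existing domain machines
  file:
    src: /path/new/symlink_{{ item.1 }}_link
    dest: /path/to/the/domain/{{ item.0 }}/{{ item.1 }}
    state: link
  # product() pairs every existing domain path with both link types
  loop: "{{ my_stat.results | selectattr('stat.exists') | map(attribute='item') | product(['bin', 'config']) | list }}"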
New ansible user (v2.3) here; I pulled a playbook from GitHub but am not understanding idempotence. Once elasticsearch is installed and started with the playbook, I want to be able to just modify or add an es_config parm and rerun the playbook, expecting an updated elasticsearch.yml config file and a restarted elasticsearch. That include_role for elasticsearch, however, is skipped on rerun, and I am not sure what to modify to change that.
ansible-playbook --ask-become-pass -vv elk.yml
elk.yml
---
#
# Playbook to install the ELK stack
#
- hosts: servers
  strategy: debug
  remote_user: ansible
  become: yes
  become_user: root
  tasks:
    - debug: msg="elk main"
    - include_role:
        name: java
        name: elasticsearch
        name: kibana
        name: nginx
        name: logstash
      vars:
        es_instance_name: "node1"
        es_config: {
          discovery.zen.ping.unicast.hosts: "logstash-2, logstash-3",
          network.host: _eno1_,
          cluster.name: logstash,
          node.name: "{{ ansible_hostname }}",
          http.port: 9200,
          transport.tcp.port: 9300,
          node.data: true,
          node.master: true,
          bootstrap.memory_lock: true,
          index.number_of_shards: 3,
          index.number_of_replicas: 1
        }
        es_major_version: "6.x"
        es_version: "6.2.1"
        es_heap_size: "26g"
        es_cluster_name: "logstash"
roles/elasticsearch/tasks/main.yml
---
- name: os-specific vars
  include_vars: "{{ ansible_os_family }}.yml"
  tags:
    - always

- debug: msg="es parms"

- name: check-set-parameters
  # include: elasticsearch-parameters.yml
  include_tasks: elasticsearch-parameters.yml
  tags:
    - always

#- include: java.yml
#  when: es_java_install
#  tags:
#    - java

#- include: elasticsearch.yml
- include_tasks: elasticsearch.yml
roles/elasticsearch/tasks/elasticsearch.yml
---
- name: Include optional user and group creation.
  when: (es_user_id is defined) and (es_group_id is defined)
  include_tasks: elasticsearch-optional-user.yml

- name: Include specific Elasticsearch
  include_tasks: elasticsearch-Debian.yml
  when: ansible_os_family == 'Debian'
roles/elasticsearch/tasks/elasticsearch-Debian.yml
---
- set_fact: force_install=no

- set_fact: force_install=yes
  when: es_allow_downgrades

- name: Debian - Install apt-transport-https to support https APT downloads
  become: yes
  apt: name=apt-transport-https state=present
  when: es_use_repository

- name: Debian - Add Elasticsearch repository key
  become: yes
  apt_key: url="{{ es_apt_key }}" state=present
  when: es_use_repository and es_apt_key

- name: Debian - Add elasticsearch repository
  become: yes
  apt_repository: repo={{ item.repo }} state={{ item.state }}
  with_items:
    - { repo: "{{ es_apt_url_old }}", state: "absent" }
    - { repo: "{{ es_apt_url }}", state: "present" }
  when: es_use_repository

- name: Debian - Include versionlock
  include: elasticsearch-Debian-version-lock.yml
  when: es_version_lock

- name: Debian - Ensure elasticsearch is installed
  become: yes
  apt: name=elasticsearch{% if es_version is defined and es_version != "" %}={{$
  when: es_use_repository
  register: debian_elasticsearch_install_from_repo
  notify: restart elasticsearch

- name: Debian - Download elasticsearch from url
  get_url: url={% if es_custom_package_url is defined %}{{ es_custom_package_ur$
  when: not es_use_repository

- name: Debian - Ensure elasticsearch is installed from downloaded package
  become: yes
  apt: deb=/tmp/elasticsearch-{{ es_version }}.deb
  when: not es_use_repository
  register: elasticsearch_install_from_package
  notify: restart elasticsearch
roles/elasticsearch/tasks/elasticsearch-parameters.yml
# Check for mandatory parameters
- fail: msg="es_instance_name must be specified and cannot be blank"
when: es_instance_name is not defined or es_instance_name == ''
- fail: msg="es_proxy_port must be specified and cannot be blank when
es_proxy_$
when: (es_proxy_port is not defined or es_proxy_port == '') and
(es_proxy_hos$
- debug: msg="WARNING - It is recommended you specify the parameter
'http.port'"
when: es_config['http.port'] is not defined
- debug: msg="WARNING - It is recommended you specify the parameter
'transport.$
when: es_config['transport.tcp.port'] is not defined
- debug: msg="WARNING - It is recommended you specify the parameter
'discovery.$
when: es_config['discovery.zen.ping.unicast.hosts'] is not defined
#If the user attempts to lock memory they must specify a heap size
- fail: msg="If locking memory with bootstrap.memory_lock a heap size must
be s$
when: es_config['bootstrap.memory_lock'] is defined and
es_config['bootstrap.$
#Check if working with security we have an es_api_basic_auth_username and
es_ap$
- fail: msg="Enabling security requires an es_api_basic_auth_username and
es_ap$
when: es_enable_xpack and ("security" in es_xpack_features) and
es_api_basic_$
- set_fact: file_reserved_users={{ es_users.file.keys() | intersect
(reserved_x$
when: es_users is defined and es_users.file is defined and
(es_users.file.key$
- fail:
msg: "ERROR: INVALID CONFIG - YOU CANNOT CHANGE RESERVED USERS THROUGH THE$
when: file_reserved_users | default([]) | length > 0
- set_fact: instance_default_file={{default_file |
dirname}}/{{es_instance_name$
- set_fact: instance_init_script={{init_script | dirname
}}/{{es_instance_name}$
- set_fact: conf_dir={{ es_conf_dir }}/{{es_instance_name}}
- set_fact: m_lock_enabled={{ es_config['bootstrap.memory_lock'] is defined
and$
#TODO - if transport.host is not local maybe error on boostrap checks
#Use systemd for the following distributions:
#Ubuntu 15 and up
#Debian 8 and up
#Centos 7 and up
#Relies on elasticsearch distribution installing a serviced script to
determine$
- set_fact: use_system_d={{(ansible_distribution == 'Debian' and
ansible_distri$
- set_fact: instance_sysd_script={{sysd_script | dirname
}}/{{es_instance_name}$
when: use_system_d
#For directories we also use the {{inventory_hostname}}-{{ es_instance_name
}} $
- set_fact: instance_suffix={{inventory_hostname}}-{{ es_instance_name }}
- set_fact: pid_dir={{ es_pid_dir }}/{{instance_suffix}}
- set_fact: log_dir={{ es_log_dir }}/{{instance_suffix}}
- set_fact: data_dirs={{ es_data_dirs | append_to_list('/'+instance_suffix)
}}
It was a formatting issue with include_role. I fixed the problem with one role per include, as follows:
- include_role:
    name: java
- include_role:
    name: elasticsearch
and so forth.
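If you prefer a single task, an equivalent loop-based sketch (with_items matches the 2.3-era syntax used elsewhere here; verify that looping over include_role behaves as expected on your Ansible version):

- include_role:
    name: "{{ item }}"
  with_items:
    - java
    - elasticsearch
    - kibana
    - nginx
    - logstash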