Exit on SSH failure - Ansible

In my play I SSH to a host and execute a number of roles; however, if I fail to SSH into the instance, the next include carries on regardless. I'd like to exit/fail when build.yml fails to execute.
Here is the example.
FYI - app_ec2 creates an instance on AWS and sets the host, build.yml then applies configuration to this instance, and launch-asg.yml then uses this instance to create an AMI and then an ASGroup.
---
- hosts: localhost
  connection: local
  serial: 1
  gather_facts: true
  any_errors_fatal: true
  max_fail_percentage: 0
  vars_files:
    - "vars/security.vars"
    - "vars/{{ env }}/common.vars"
    - "vars/server.vars"
  roles:
    - app_ec2

- include: build.yml
- include: launch-asg.yml
build.yml:
- hosts: "{{ role }}"
serial: 1
gather_facts: true
sudo: yes
any_errors_fatal: true
max_fail_percentage: 0
vars_files:
- "vars/{{ env }}/common.vars"
- "vars/server.vars"
roles:
- default
- restart
- awscli
- cloudwatch-logs
- ntp
- java
- tomcat
- newrelic
- newrelic_apm
- "{{role}}"
- app_liquibase
- restart

I suggest using Ansible blocks:
tasks:
  - block:
      - debug: msg='i execute normally'
      - command: /bin/false
      - debug: msg='i never execute, cause ERROR!'
    rescue:
      - debug: msg='I caught an error'
      - command: /bin/false
      - debug: msg='I also never execute :-('
    always:
      - debug: msg="this always executes"
And you can use this workaround: run a playbook on the current host (-c local == local connection) and, inside a block, call ansible against the remote host; if the block fails, retry it in the rescue section.
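A minimal sketch of that workaround, assuming placeholder file names (wrapper.yml, inventory) and retrying exactly once:

# wrapper.yml - run with: ansible-playbook -c local wrapper.yml
- hosts: localhost
  connection: local
  tasks:
    - block:
        # first attempt: run the real play against the remote host
        - command: ansible-playbook -i inventory build.yml
      rescue:
        # the connection (or anything else) failed once - retry;
        # if this attempt also fails, the wrapper play stops here
        - command: ansible-playbook -i inventory build.yml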
PS:
Ansible playbooks should use Idempotency concept:
The concept that change commands should only be applied when they need
to be applied, and that it is better to describe the desired state of
a system than the process of how to get to that state. As an analogy,
the path from North Carolina in the United States to California
involves driving a very long way West but if I were instead in
Anchorage, Alaska, driving a long way west is no longer the right way
to get to California. Ansible’s Resources like you to say “put me in
California” and then decide how to get there. If you were already in
California, nothing needs to happen, and it will let you know it
didn’t need to change anything.
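As a concrete illustration of that idea, a play like the sketch below (module arguments are only an example) reports "changed" the first time and "ok" on every later run, because the package and service are already in the described state:

- hosts: all
  become: yes
  tasks:
    - name: ntp package present        # no change if already installed
      package:
        name: ntp
        state: present
    - name: ntp service running        # no change if already running
      service:
        name: ntp
        state: started
        enabled: yes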

Related

Restart Service on failure using Ansible

Hope someone can help and point me in the right direction, as I am new to Ansible. I have an Ansible playbook that installs Windows updates and another playbook that checks whether a specific service is running after the server has been rebooted.
Is there a way, after the reboot, to attempt to restart the service if it is not started?
These are my playbooks at the minute
Windows Update
---
- hosts: all
  gather_facts: no
  tasks:
    - name: Install all security, critical, and rollup updates
      become: True
      win_updates:
        category_names:
          - SecurityUpdates
          - CriticalUpdates
        server_selection: windows_update
        reboot: no
Service Check
---
- hosts: all
  tasks:
    - name: Active Directory Checks
      block:
        - name: Check Active Directory Domain Services are running
          become: True
          win_service:
            name: "{{ item }}"
            start_mode: auto
            state: started
          loop:
            - NTDS
            - ADWS
            - Dfs
            - DFSR
            - DNS
            - Kdc
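Building on the block/rescue pattern shown earlier, one possible sketch (the rescue section and the ad_services variable are assumptions, not part of the original playbook) would retry the whole service list if any check fails:

---
- hosts: all
  vars:
    ad_services: [NTDS, ADWS, Dfs, DFSR, DNS, Kdc]
  tasks:
    - name: Active Directory Checks
      block:
        - name: Check Active Directory Domain Services are running
          become: True
          win_service:
            name: "{{ item }}"
            start_mode: auto
            state: started
          loop: "{{ ad_services }}"
      rescue:
        # only runs if the block above failed; try once more to
        # start every service in the list (blunt but simple)
        - name: Attempt to start the services again
          become: True
          win_service:
            name: "{{ item }}"
            state: started
          loop: "{{ ad_services }}"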

How to wait for ssh to become available on a host before installing a role?

Is there a way to wait for ssh to become available on a host before installing a role? There's wait_for_connection but I only figured out how to use it with tasks.
This particular playbook spins up servers on a cloud provider before attempting to install roles, but it fails since the ssh service on the hosts isn't available yet.
How should I fix this?
---
- hosts: localhost
  connection: local
  tasks:
    - name: Deploy vultr servers
      include_tasks: create_vultr_server.yml
      loop: "{{ groups['vultr_servers'] }}"

- hosts: all
  gather_facts: no
  become: true
  tasks:
    - name: wait_for_connection # This one works
      wait_for_connection:
        delay: 5
        timeout: 600
    - name: Gather facts for first time
      setup:
    - name: Install curl
      package:
        name: "curl"
        state: present
  roles: # How to NOT install roles UNLESS the current host is available ?
    - role: apache2
      vars:
        doc_root: /var/www/example
        message: 'Hello world!'
    - common-tools
Ansible play actions start with pre_tasks, then roles, followed by tasks and finally post_tasks. Move your wait_for_connection task to the top of pre_tasks and it will block everything until the connection is available:
- hosts: all
  gather_facts: no
  become: true
  pre_tasks:
    - name: wait_for_connection # This one works
      wait_for_connection:
        delay: 5
        timeout: 600
  roles: ...
  tasks: ...
For more info on execution order, see this section in the roles documentation (the paragraph just above the notes).
Note: you probably want to move all your current example tasks into that section too, so that facts are gathered and curl is installed prior to doing anything else.
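Putting that note into practice, the second play could be reshaped roughly like this (a sketch only, reusing the tasks and role arguments from the question):

- hosts: all
  gather_facts: no
  become: true
  pre_tasks:
    - name: wait_for_connection
      wait_for_connection:
        delay: 5
        timeout: 600
    - name: Gather facts for first time
      setup:
    - name: Install curl
      package:
        name: "curl"
        state: present
  roles:
    - role: apache2
      vars:
        doc_root: /var/www/example
        message: 'Hello world!'
    - common-tools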

Ansible playbook example code fail to run

I'm starting to try Ansible, using example code from the Ansible documentation. After trying several examples, I get an error at the beginning of the code. It says:
- name: Change the hostname to Windows_Ansible
  ^ here (pointing at name)
Any advice would be appreciated.
I tried this one
https://docs.ansible.com/ansible/latest/modules/win_hostname_module.html#win-hostname-module
---
- name: Change the hostname to Windows_Ansible
  win_hostname:
    name: "Windows_Ansible"
  register: res

- name: Reboot
  win_reboot:
  when: res.reboot_required
The playbook below will change the hostname of the server. Make sure you run it on a test server so that it won't create issues. If you just want to test some playbook, use the second playbook with win_command.
---
- hosts: <remote server name which needs to be added in the inventory>
  tasks:
    - name: Change the hostname to Windows_Ansible
      win_hostname:
        name: "Windows_Ansible"
      register: res

    - name: Reboot
      win_reboot:
      when: res.reboot_required

---
- hosts: <remote server name which needs to be added in the inventory>
  tasks:
    - name: Test
      win_command: whoami
      register: res

Ansible launch ssh-copy-id before running play?

I want to use Ansible to disable SELinux on some remote servers. I don't yet know the full list of servers; it will grow from time to time.
It would be great if the ssh-copy-id phase were integrated somehow into the playbook - you would expect that from an automation system? I don't mind getting asked for the password once per server.
From various reading, I understand I can run a local_action in my task:
---
- name: Disable SELinux
  hosts: all
  remote_user: root
  gather_facts: False
  tasks:
    - local_action: command ssh-copy-id {{remote_user}}@{{hostname}}
    - selinux:
        state: disabled
However:
It fails because {{remote_user}} and {{hostname}} are not accessible in this context.
I need to set gather_facts to False, because it's executed before local_action.
Any idea if that's possible within Ansible playbooks?
You may try this:
- hosts: all
  gather_facts: no
  tasks:
    - set_fact:
        rem_user: "{{ ansible_user | default(lookup('env','USER')) }}"
        rem_host: "{{ ansible_host }}"

    - local_action: command ssh-copy-id {{ rem_user }}@{{ rem_host }}

    - setup:

    - selinux:
        state: disabled
Define the remote user and remote host first, then run the local action, then force fact gathering with setup.
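For example, assuming the play above is saved as disable-selinux.yml (the file name and the new_servers group are placeholders), newly added servers could be handled with:

# ssh-copy-id is expected to prompt once per new server
ansible-playbook -i inventory disable-selinux.yml --limit new_servers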

Ansible - playbook calls another playbook with variables, tags, and limits

I have a blue green deploy playbook. It relies on certain variables to determine which hosts the underlying roles are applied to. Here is one of the plays as an example:
- name: Remove current server from load balancer
  hosts: 'tag_Name_{{server_name}}_production'
  remote_user: ec2-user
  sudo: true
  roles:
    - remove-load-balancer
I can call this playbook with specified limits and tags and it works wonderfully - but only for one type of server. For example, this command will blue green deploy our services servers:
ansible-playbook blue.green.yml -i ec2.py -l tag_Name_services_production,tag_Name_services_production_old --skip-tags=restart,stop -e server_name=services -e core_repo=~/core
I would like to write a master blue green playbook which essentially runs several playbooks - first for the api servers and then for the services servers. I have tried using includes but cannot seem to get the syntax right - ansible either complains that my task doesn't do anything or complains that the syntax is incorrect:
- name: Blue green deploy to all production boxes.
  hosts: localhost
  tasks:
    - include: blue.green.single.yml
      hosts:
        - tag_Name_api_production
        - tag_Name_api_production_old
      vars:
        - server_name: api
      skip-tags:
        - restart
        - stop
    - include: blue.green.single.yml
      hosts:
        - tag_Name_services_production
        - tag_Name_services_production_old
      vars:
        - server_name: services
      skip-tags:
        - restart
        - stop
Ideally I'd be able to call this like so:
ansible-playbook blue.green.yml -i ec2.py -e core_repo=~/core
Has anyone done this successfully? If so - how can I accomplish this?
Would this work for your case?
- name: Blue green deploy to all production boxes.
  hosts: [tag_Name_api_production, tag_Name_api_production_old]
  tasks:
    - include: blue.green.single.yml
      vars:
        - server_name: api
      skip-tags:
        - restart
        - stop

- name: Blue green deploy to all production boxes.
  hosts: [tag_Name_services_production, tag_Name_services_production_old]
  tasks:
    - include: blue.green.single.yml
      vars:
        - server_name: services
      skip-tags:
        - restart
        - stop
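A different way to express the master playbook on Ansible 2.4 and later (not what the answer above uses; the variable handling is an assumption) is import_playbook, which lets the child playbook keep its own hosts pattern driven by server_name:

# blue.green.yml - sketch using import_playbook
- import_playbook: blue.green.single.yml
  vars:
    server_name: api

- import_playbook: blue.green.single.yml
  vars:
    server_name: services

The --skip-tags=restart,stop part would still be passed on the ansible-playbook command line, as in the original invocation.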
