How to wait for ssh to become available on a host before installing a role? - ansible

Is there a way to wait for SSH to become available on a host before installing a role? There's wait_for_connection, but I only figured out how to use it with tasks.
This particular playbook spins up servers on a cloud provider before attempting to install roles, but it fails because the SSH service on the hosts isn't available yet.
How should I fix this?
---
- hosts: localhost
  connection: local
  tasks:
    - name: Deploy vultr servers
      include_tasks: create_vultr_server.yml
      loop: "{{ groups['vultr_servers'] }}"

- hosts: all
  gather_facts: no
  become: true
  tasks:
    - name: wait_for_connection # This one works
      wait_for_connection:
        delay: 5
        timeout: 600
    - name: Gather facts for first time
      setup:
    - name: Install curl
      package:
        name: "curl"
        state: present
  roles: # How to NOT install roles UNLESS the current host is available ?
    - role: apache2
      vars:
        doc_root: /var/www/example
        message: 'Hello world!'
    - common-tools

Ansible play actions start with pre_tasks, then roles, followed by tasks and finally post_tasks. Move your wait_for_connection task to pre_tasks, where it will block everything else until a connection is available:
- hosts: all
  gather_facts: no
  become: true
  pre_tasks:
    - name: wait_for_connection # This one works
      wait_for_connection:
        delay: 5
        timeout: 600
  roles: ...
  tasks: ...
For more info on the execution order, see this section of the roles documentation (the paragraph just above the notes).
Note: you probably want to move all of your current example tasks into that section too, so that facts are gathered and curl is installed before anything else runs.
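Put together, the second play from the question might then look like this, with everything moved into pre_tasks so it runs before the roles:

- hosts: all
  gather_facts: no
  become: true
  pre_tasks:
    - name: wait_for_connection # This one works
      wait_for_connection:
        delay: 5
        timeout: 600
    - name: Gather facts for first time
      setup:
    - name: Install curl
      package:
        name: "curl"
        state: present
  roles:
    - role: apache2
      vars:
        doc_root: /var/www/example
        message: 'Hello world!'
    - common-tools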

Related

Restart Service on failure using Ansible

I hope someone can help and point me in the right direction, as I am new to Ansible. I have an Ansible playbook that installs Windows updates and another playbook that checks whether specific services are running after the server has been rebooted.
Is there a way, after the reboot, to attempt to restart a service if it is not running?
These are my playbooks at the minute
Windows Update
---
- hosts: all
  gather_facts: no
  tasks:
    - name: Install all security, critical, and rollup updates
      become: True
      win_updates:
        category_names:
          - SecurityUpdates
          - CriticalUpdates
        server_selection: windows_update
        reboot: no
Service Check
---
- hosts: all
  tasks:
    - name: Active Directory Checks
      block:
        - name: Check Active Directory Domain Services are running
          become: True
          win_service:
            name: "{{ item }}"
            start_mode: auto
            state: started
          loop:
            - NTDS
            - ADWS
            - Dfs
            - DFSR
            - DNS
            - Kdc
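If the services are sometimes not yet startable right after the reboot, one option is to let the win_service task retry for a while. A minimal sketch based on the playbook above; the retries and delay values are only illustrative:

---
- hosts: all
  tasks:
    - name: Start AD services, retrying while the server is still settling
      become: True
      win_service:
        name: "{{ item }}"
        start_mode: auto
        state: started
      register: svc_result
      until: svc_result is succeeded   # keep retrying until the service starts
      retries: 5
      delay: 30
      loop:
        - NTDS
        - ADWS
        - Dfs
        - DFSR
        - DNS
        - Kdc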

Create Ec2 instance and install Python package using Ansible Playbook

I've created an Ansible playbook to create an EC2 instance and install Python so I can connect to the server via SSH.
The playbook successfully creates the EC2 instance, but it doesn't install Python on the newly created instance; instead it installs Python on my master machine.
Can someone help me resolve this?
My Code:
- hosts: localhost
  remote_user: ubuntu
  become: yes
  tasks:
    - name: I'm going to create a Ec2 instance
      ec2:
        key_name: yahoo
        instance_type: t2.micro
        region: "ap-south-1"
        image: ami-0860c9429baba6ad2
        count: 1
        vpc_subnet_id: subnet-aa84fbe6
        assign_public_ip: yes
      tags:
        - creation

    - name: Going to Install Python
      apt:
        name: python
        state: present
      tags:
        - Web

    - name: Start the service
      service:
        name: python
        state: started
      tags:
        - start
As is shown in the fine manual, ec2: operations should be combined with add_host: in order to "pivot" off of localhost over to the newly provisioned instance, which one cannot add to an inventory file because it doesn't exist yet:
- hosts: localhost
  gather_facts: no
  tasks:
    - name: I'm going to create a Ec2 instance
      ec2:
        ...etc etc
      register: run_instances

    - name: Add new instance to host group
      add_host:
        hostname: "{{ item.public_ip }}"
        groupname: just_launched
      loop: "{{ run_instances.instances }}"

    - name: Wait for SSH to come up
      delegate_to: "{{ item.public_ip }}"
      wait_for_connection:
        delay: 60
        timeout: 320
      loop: "{{ run_instances.instances }}"

- hosts: just_launched
  # and now you can do fun things to those ec2 instances
I suspect your question about "installing python" is unrelated, but just in case: if you genuinely do need to add Python to those EC2 instances, you cannot use most Ansible modules to do that because they're written in Python. That is what the raw: module is designed to fix:
- hosts: just_launched
  # you cannot gather facts without python, either
  gather_facts: no
  tasks:
    - raw: |
        echo watch out for idempotency issues with your playbook using raw
        yum install python
    - name: NOW gather the facts
      setup:
    - command: echo and now you have working ansible modules again
I also find your invocation of service: { name: python, state: started } suspicious, but I would guess it has not run yet because of the issue in your question.
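If the Ubuntu AMI already ships with Python (so ordinary modules work), the second play from the question could simply target the new group instead of localhost. A minimal sketch; the package name python3 is an assumption, the question used python:

- hosts: just_launched
  remote_user: ubuntu
  become: yes
  tasks:
    - name: Install Python on the new instance, not on the control machine
      apt:
        name: python3        # assumption: newer Ubuntu images ship python3; the question used "python"
        state: present
        update_cache: yes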

Ansible: Trigger the task only when previous task is successful and the output is created

I am deploying a VM in Azure using Ansible and using the public IP created there in the next tasks. But the time taken to create the public IP is too long, so the subsequent task fails when it is executed. The time to create the IP also varies; it's not fixed. I want to introduce some logic so that the next task only runs once the IP has been created.
- name: Deploy Master Node
  azure_rm_virtualmachine:
    resource_group: myResourceGroup
    name: testvm10
    admin_username: chouseknecht
    admin_password: <your password here>
    image:
      offer: CentOS-CI
      publisher: OpenLogic
      sku: '7-CI'
      version: latest
Can someone assist me here? It would be greatly appreciated.
I think the wait_for module is a bad choice because, while it can test for port availability, it will often give you false positives: the port is open before the service is actually ready to accept connections.
Fortunately, the wait_for_connection module was designed for exactly the situation you are describing: it will wait until Ansible is able to successfully connect to your target.
This generally requires that you register your Azure VM with your Ansible inventory (e.g. using the add_host module). I don't use Azure, but if I were doing this with OpenStack I might write something like this:
- hosts: localhost
  gather_facts: false
  tasks:
    # This is the task that creates the vm, much like your existing task
    - os_server:
        name: larstest
        cloud: kaizen-lars
        image: 669142a3-fbda-4a83-bce8-e09371660e2c
        key_name: default
        flavor: m1.small
        security_groups: allow_ssh
        nics:
          - net-name: test_net0
        auto_ip: true
      register: myserver

    # Now we take the public ip from the previous task and use it
    # to create a new inventory entry for a host named "myserver".
    - add_host:
        name: myserver
        ansible_host: "{{ myserver.openstack.accessIPv4 }}"
        ansible_user: centos

# Now we wait for the host to finish booting. We need gather_facts: false here
# because otherwise Ansible will attempt to run the `setup` module on the target,
# which will fail if the host isn't ready yet.
- hosts: myserver
  gather_facts: false
  tasks:
    - wait_for_connection:
        delay: 10

# We could add additional tasks to the previous play, but we can also start a
# new play with implicit fact gathering.
- hosts: myserver
  tasks:
    - ...other tasks here...

exit on ssh failure

In my play I SSH to a host and execute a number of roles. However, if I fail to SSH into the instance, the next include carries on regardless; I'd like to exit/fail when the build.yml include fails to execute.
Here is the example.
FYI: app_ec2 creates an instance on AWS and sets the host, build.yml then applies configuration to this instance, and launch-asg.yml then uses this instance to create an AMI and then an Auto Scaling group.
---
- hosts: localhost
  connection: local
  serial: 1
  gather_facts: true
  any_errors_fatal: true
  max_fail_percentage: 0
  vars_files:
    - "vars/security.vars"
    - "vars/{{ env }}/common.vars"
    - "vars/server.vars"
  roles:
    - app_ec2

- include: build.yml
- include: launch-asg.yml
build.yml:
- hosts: "{{ role }}"
  serial: 1
  gather_facts: true
  sudo: yes
  any_errors_fatal: true
  max_fail_percentage: 0
  vars_files:
    - "vars/{{ env }}/common.vars"
    - "vars/server.vars"
  roles:
    - default
    - restart
    - awscli
    - cloudwatch-logs
    - ntp
    - java
    - tomcat
    - newrelic
    - newrelic_apm
    - "{{ role }}"
    - app_liquibase
    - restart
I suggest using Ansible blocks:
tasks:
  - block:
      - debug: msg='i execute normally'
      - command: /bin/false
      - debug: msg='i never execute, cause ERROR!'
    rescue:
      - debug: msg='I caught an error'
      - command: /bin/false
      - debug: msg='I also never execute :-('
    always:
      - debug: msg="this always executes"
You can also use this workaround: run a playbook on the current host (-c local, i.e. a local connection) and, inside a block, run Ansible against the remote host; if the block fails, retry.
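A rough sketch of that workaround, assuming the file names from the question; the ansible-playbook invocation, variable names and retry values are illustrative only:

- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - block:
        - name: Run the build play against the remote host, retrying on failure
          command: ansible-playbook build.yml -e "role={{ role }}"
          register: build_run
          retries: 3
          delay: 30
          until: build_run.rc == 0
      rescue:
        - name: Stop the run once the retries are exhausted
          fail:
            msg: "build.yml still failed after 3 attempts"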
PS:
Ansible playbooks should follow the idempotency concept:
The concept that change commands should only be applied when they need
to be applied, and that it is better to describe the desired state of
a system than the process of how to get to that state. As an analogy,
the path from North Carolina in the United States to California
involves driving a very long way West but if I were instead in
Anchorage, Alaska, driving a long way west is no longer the right way
to get to California. Ansible’s Resources like you to say “put me in
California” and then decide how to get there. If you were already in
California, nothing needs to happen, and it will let you know it
didn’t need to change anything.

Ansible - actions BEFORE gathering facts

Does anyone know how to do something (like waiting for a port, or for the managed node to boot) BEFORE gathering facts? I know I can turn fact gathering off with
gather_facts: no
and THEN wait for the port, but what if I need the facts while still needing to wait until the node boots up?
Gathering facts is equivalent to running the setup module. You can manually gather facts by running it. It's not documented, but simply add a task like this:
- name: Gathering facts
  setup:
In combination with gather_facts: no at the playbook level, the facts will only be fetched when the above task is executed.
Both in an example playbook:
- hosts: all
  gather_facts: no
  tasks:
    - name: Some task executed before gathering facts
      # whatever task you want to run
    - name: Gathering facts
      setup:
Something like this should work:
- hosts: my_hosts
  gather_facts: no
  tasks:
    - name: wait for SSH to respond on all hosts
      local_action: wait_for port=22 host={{ inventory_hostname }}
    - name: gather facts
      setup:
    - continue with my tasks...
The wait_for will execute locally on your ansible host, waiting for the servers to respond on port 22, then the setup module will perform fact gathering, after which you can do whatever else you need to do.
I was trying to figure out how to provision a host from EC2, wait for SSH to come up, and then run my playbook against it, which is basically the same use case as yours. I ended up with the following:
- name: Provision App Server from Amazon
  hosts: localhost
  gather_facts: False
  tasks:
    # #### call ec2 provisioning tasks here ####
    - name: Add new instance to host group
      add_host: hostname="{{ item.private_ip }}" groupname="appServer"
      with_items: ec2.instances

- name: Configure App Server
  hosts: appServer
  remote_user: ubuntu
  gather_facts: True
  tasks: ----configuration tasks here----
I think the Ansible terminology is that I have two plays in a playbook, each operating on a different group of hosts (localhost, and the appServer group).
