Ansible - actions BEFORE gathering facts

Does anyone know how to do something (like wait for a port, or for the managed node to boot) BEFORE gathering facts? I know I can turn fact gathering off with
gather_facts: no
and THEN wait for the port, but what if I need the facts and still need to wait until the node boots up?

Gathering facts is equivalent to running the setup module, so you can gather facts manually by running it yourself. Simply add a task like this:
- name: Gathering facts
  setup:
In combination with gather_facts: no at the play level, facts will only be fetched when the above task is executed.
Both in an example playbook:
- hosts: all
  gather_facts: no
  tasks:
    - name: Some task executed before gathering facts
      # whatever task you want to run
    - name: Gathering facts
      setup:
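Once that setup task has run, the gathered facts are available to every later task in the play. For example, you could append a task like this to the tasks list above (a small sketch using standard fact names):
    - name: Use a gathered fact
      debug:
        msg: "Running on {{ ansible_distribution }} {{ ansible_distribution_version }}"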

Something like this should work:
- hosts: my_hosts
  gather_facts: no
  tasks:
    - name: wait for SSH to respond on all hosts
      local_action: wait_for port=22
    - name: gather facts
      setup:
    # ... continue with my tasks ...
The wait_for task executes locally on your Ansible control host, waiting for the servers to respond on port 22; then the setup module performs fact gathering, after which you can do whatever else you need to do.
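On current Ansible versions the same play is usually written with delegate_to and YAML module arguments instead of the local_action key=value shorthand. An equivalent sketch (host group and port taken from the example above):
- hosts: my_hosts
  gather_facts: no
  tasks:
    - name: wait for SSH to respond on all hosts
      wait_for:
        port: 22
        host: "{{ ansible_host | default(inventory_hostname) }}"
      delegate_to: localhost
    - name: gather facts
      setup: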

I was trying to figure out how to provision a host on EC2, wait for SSH to come up, and then run my playbook against it, which is basically the same use case as yours. I ended up with the following:
- name: Provision App Server from Amazon
  hosts: localhost
  gather_facts: False
  tasks:
    # #### call ec2 provisioning tasks here ####
    - name: Add new instance to host group
      add_host:
        hostname: "{{ item.private_ip }}"
        groupname: appServer
      with_items: "{{ ec2.instances }}"

- name: Configure App Server
  hosts: appServer
  remote_user: ubuntu
  gather_facts: True
  tasks:
    # ---- configuration tasks here ----
I think the Ansible terminology is that I have two plays in one playbook, each operating on a different group of hosts (localhost, and the appServer group).

Related

How to wait for ssh to become available on a host before installing a role?

Is there a way to wait for SSH to become available on a host before installing a role? There's wait_for_connection, but I only figured out how to use it with tasks.
This particular playbook spins up servers on a cloud provider before attempting to install roles, but it fails since the SSH service on the hosts isn't available yet.
How should I fix this?
---
- hosts: localhost
  connection: local
  tasks:
    - name: Deploy vultr servers
      include_tasks: create_vultr_server.yml
      loop: "{{ groups['vultr_servers'] }}"

- hosts: all
  gather_facts: no
  become: true
  tasks:
    - name: wait_for_connection # This one works
      wait_for_connection:
        delay: 5
        timeout: 600
    - name: Gather facts for first time
      setup:
    - name: Install curl
      package:
        name: "curl"
        state: present
  roles: # How to NOT install roles UNLESS the current host is available ?
    - role: apache2
      vars:
        doc_root: /var/www/example
        message: 'Hello world!'
    - common-tools
Ansible play actions start with pre_tasks, then roles, followed by tasks, and finally post_tasks. Move your wait_for_connection task into pre_tasks, as the first entry, and it will block everything until a connection is available:
- hosts: all
  gather_facts: no
  become: true
  pre_tasks:
    - name: wait_for_connection # This one works
      wait_for_connection:
        delay: 5
        timeout: 600
  roles: ...
  tasks: ...
For more info on the execution order, see the roles section of the playbook documentation (the paragraph just above the notes).
Note: you probably want to move all of your current example tasks into pre_tasks too, so that facts are gathered and curl is installed before anything else runs.
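Putting that note into practice with the tasks from your question, the whole play might look like this (a sketch; the role names and variables are taken from the question as-is):
- hosts: all
  gather_facts: no
  become: true
  pre_tasks:
    - name: Wait for the hosts to come up
      wait_for_connection:
        delay: 5
        timeout: 600
    - name: Gather facts for the first time
      setup:
    - name: Install curl
      package:
        name: curl
        state: present
  roles:
    - role: apache2
      vars:
        doc_root: /var/www/example
        message: 'Hello world!'
    - common-tools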

Ansible: Trigger the task only when previous task is successful and the output is created

I am deploying a VM in Azure using Ansible and then using the public IP it creates in the subsequent tasks. But the time taken to create the public IP is too long, so when the subsequent task executes, it fails. The creation time also varies; it's not fixed. I want to introduce some logic so that the next task only runs once the IP has been created.
- name: Deploy Master Node
  azure_rm_virtualmachine:
    resource_group: myResourceGroup
    name: testvm10
    admin_username: chouseknecht
    admin_password: <your password here>
    image:
      offer: CentOS-CI
      publisher: OpenLogic
      sku: '7-CI'
      version: latest
Can someone assist me here? It would be greatly appreciated.
I think the wait_for module is a bad choice here: while it can test for port availability, it will often give you false positives, because the port is frequently open before the service is actually ready to accept connections.
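For illustration, a port-based check looks like the task below; even with search_regex matching the SSH banner, it only proves that something answered on the port, not that logins will succeed (a sketch; ansible_host is the usual inventory variable):
- name: Wait for port 22 to answer (may still pass before logins work)
  wait_for:
    host: "{{ ansible_host }}"
    port: 22
    search_regex: OpenSSH   # match the SSH banner, not just an open port
    timeout: 300
  delegate_to: localhost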
Fortunately, the wait_for_connection module was designed for exactly the situation you are describing: it will wait until Ansible is able to successfully connect to your target.
This generally requires that you register your Azure VM with your Ansible inventory (e.g. using the add_host module). I don't use Azure, but if I were doing this with OpenStack I might write something like this:
- hosts: localhost
  gather_facts: false
  tasks:
    # This is the task that creates the vm, much like your existing task
    - os_server:
        name: larstest
        cloud: kaizen-lars
        image: 669142a3-fbda-4a83-bce8-e09371660e2c
        key_name: default
        flavor: m1.small
        security_groups: allow_ssh
        nics:
          - net-name: test_net0
        auto_ip: true
      register: myserver

    # Now we take the public ip from the previous task and use it
    # to create a new inventory entry for a host named "myserver".
    - add_host:
        name: myserver
        ansible_host: "{{ myserver.openstack.accessIPv4 }}"
        ansible_user: centos

# Now we wait for the host to finish booting. We need gather_facts: false here
# because otherwise Ansible will attempt to run the `setup` module on the target,
# which will fail if the host isn't ready yet.
- hosts: myserver
  gather_facts: false
  tasks:
    - wait_for_connection:
        delay: 10

# We could add additional tasks to the previous play, but we can also start a
# new play with implicit fact gathering.
- hosts: myserver
  tasks:
    - ...other tasks here...

Ansible - prevent gathering facts for hosts that are not used

I have a playbook.yml along the lines:
- name: Play1
  hosts: vm1
  ...
  tags: pl1

- name: Play2
  hosts: vm2
  ...
  tags: pl2
Now imagine a scenario where vm1 is dead and I don't care about it for the time being, and I want to run only the 2nd play, like this:
ansible-playbook playbook.yml --tags=pl2
But now Ansible fails with an error when gathering facts for vm1. Is there a way to instruct Ansible to be smarter and ignore the other plays completely? And why does it even do that?
If you really think that's the best way to manage your servers...
Use gather_facts: false and explicitly add a setup task:
- name: Play1
  gather_facts: no
  hosts: vm1
  pre_tasks:
    - setup:
  ...
  tags: pl1
I think what you're looking for is the --limit option, which only runs tasks on the specified host(s) and/or group(s).
ansible-playbook playbook.yml --limit=vm2
https://docs.ansible.com/ansible/2.9/user_guide/intro_patterns.html
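--limit composes with --tags, so you can restrict both which plays run and which hosts they touch in a single invocation:
ansible-playbook playbook.yml --tags=pl2 --limit=vm2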

How to create a file locally with ansible templates on the development machine

I'm starting out with Ansible and I'm looking for a way to create a boilerplate project on the server and in the local environment with Ansible playbooks.
I want to use Ansible templates locally to create some generic files.
But how would I get Ansible to execute something locally?
I read something about local_action, but I guess I did not get it right.
This is for the webserver... but how do I take this and create some files locally?
- hosts: webservers
  remote_user: someuser
  tasks:
    - name: create some file
      template: src=~/workspace/ansible_templates/somefile_template.j2 dest=/etc/somefile/apps-available/someproject.ini
You can delegate tasks with the delegate_to parameter to any host you like, for example:
- name: create some file
  template: src=~/workspace/ansible_templates/somefile_template.j2 dest=/etc/somefile/apps-available/someproject.ini
  delegate_to: localhost
See Playbook Delegation in the docs.
If your playbook should generally run locally and no external hosts are involved, you can instead create a group which contains localhost and then run the playbook against this group. In your inventory:
[local]
localhost ansible_connection=local
and then in your playbook:
hosts: local
Ansible has a local_action directive to support these scenarios which avoids the localhost and/or ansible_connection workarounds and is covered in the Delegation docs.
To modify your original example to use local_action:
- name: create some file
  local_action: template src=~/workspace/ansible_templates/somefile_template.j2 dest=/etc/somefile/apps-available/someproject.ini
which looks cleaner.
If you cannot do/allow SSH to localhost, you can split the playbook into local actions and remote actions.
connection: local tells Ansible not to use SSH for a play, as shown here: https://docs.ansible.com/ansible/latest/user_guide/playbooks_delegation.html#local-playbooks
Example:
# myplaybook.yml
- hosts: remote_machines
  tasks:
    - debug: msg="do stuff in the remote machines"

- hosts: 127.0.0.1
  connection: local
  tasks:
    - debug: msg="ran in local ansible machine"

- hosts: remote_machines
  tasks:
    - debug: msg="do more stuff in remote machines"
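Run it like any ordinary playbook; only the middle play bypasses SSH (the inventory file name here is an assumption):
ansible-playbook -i inventory myplaybook.yml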

Ansible ec2 only provision required servers

I've got a basic Ansible playbook like so:
---
- name: Provision ec2 servers
  hosts: 127.0.0.1
  connection: local
  roles:
    - aws

- name: Configure {{ application_name }} servers
  hosts: webservers
  sudo: yes
  sudo_user: root
  remote_user: ubuntu
  vars:
    - setup_git_repo: no
    - update_apt_cache: yes
  vars_files:
    - env_vars/common.yml
    - env_vars/remote.yml
  roles:
    - common
    - db
    - memcached
    - web
with the following inventory:
[localhost]
127.0.0.1 ansible_python_interpreter=/usr/local/bin/python
The Provision ec2 servers task does what you'd expect. It creates an ec2 instance; it also creates a host group [webservers] and adds the created instance IP to it.
The Configure {{ application_name }} servers step then configures that server, installing everything I need.
So far so good; this all does exactly what I want, and everything seems to work.
Here's where I'm stuck. I want to be able to fire up an ec2 instance for different roles. Ideally I'd create a dbserver, a webserver, and maybe a memcached server. I'd like to be able to deploy any part(s) of this infrastructure in isolation, e.g. create and provision just the db servers.
The only ways I can think of to make this work... well, they don't work.
I tried simply declaring the host groups without hosts in the inventory:
[webservers]
[dbservers]
[memcachedservers]
but that's a syntax error.
I would be okay with explicitly provisioning each server and declaring the host group it is for, like so:
- name: Provision webservers
  hosts: webservers
  connection: local
  roles:
    - aws

- name: Provision dbservers
  hosts: dbservers
  connection: local
  roles:
    - aws

- name: Provision memcachedservers
  hosts: memcachedservers
  connection: local
  roles:
    - aws
but those groups don't exist until after the respective step is complete, so I don't think that will work either.
I've seen lots about dynamic inventories, but I haven't been able to understand how they would help me. I've also looked through countless examples of Ansible ec2 provisioning projects; they invariably either provision pre-existing ec2 instances, or create a single instance and install everything on it.
In the end I realised it made much more sense to just separate the different parts of the stack into separate playbooks, with a full-stack playbook that called each of them.
My remote hosts file stayed largely the same as above. An example of one of the playbooks for a specific part of the stack is:
---
- name: Provision ec2 apiservers
  hosts: apiservers    # important bit
  connection: local    # important bit
  vars:
    - host_group: apiservers
    - security_group: blah
  roles:
    - aws

- name: Configure {{ application_name }} apiservers
  hosts: apiservers:!127.0.0.1   # important bit
  sudo: yes
  sudo_user: root
  remote_user: ubuntu
  vars_files:
    - env_vars/common.yml
    - env_vars/remote.yml
  vars:
    - setup_git_repo: no
    - update_apt_cache: yes
  roles:
    - common
    - db
    - memcached
    - web
This means that the first play of each layer's playbook adds a new host to the apiservers group, and the second play (Configure ... apiservers) is then able to exclude localhost without getting a 'no hosts matched' error.
The wrapping playbook is dead simple, just:
---
- name: Deploy all the {{ application_name }} things!
  hosts: all

- include: webservers.yml
- include: apiservers.yml
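Note that playbook-level include was later deprecated in favor of import_playbook; on current Ansible versions the wrapper would be:
---
- import_playbook: webservers.yml
- import_playbook: apiservers.yml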
I'm very much a beginner with regard to Ansible, so please take this for what it is: some guy's attempt to find something that works. There may be better options, and this could violate best practice all over the place.
The ec2 module supports an exact_count parameter, not just count.
It will create (or terminate!) instances until the number of instances matching the specified tags (count_tag, typically mirrored in instance_tags) equals exact_count.
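A sketch of how that looks (the AMI id, key pair, and security group names are placeholders, not values from this thread):
- name: Ensure exactly two tagged webservers exist
  ec2:
    key_name: mykey                # assumption: an existing EC2 key pair
    instance_type: t2.micro
    image: ami-0123456789abcdef0   # placeholder AMI id
    group: webserver-sg            # assumption: an existing security group
    wait: yes
    exact_count: 2
    count_tag:
      role: webserver              # count instances carrying this tag...
    instance_tags:
      role: webserver              # ...and tag newly created ones the same way
  register: ec2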
