How to skip role when host is unreachable? - ansible

I have a disable_root role, which creates an admin user and disables the root user from connecting to the server via ssh.
When I rerun the playbook with this role a second time I get an unreachable error (which is OK, as I just disabled root).
I'd like to skip this role in that case and continue with the other roles (which would run as the admin user). How can I do this?
This is my playbook (disable_root uses the ansible_user: root var):
- hosts: webservers
  roles:
    - disable_root
    - common
    - nginx

The playbook should connect to the remote host as root if necessary, or as admin once this user has already been created. This can be done with remote_user (see Ansible remote_user vs ansible_user).
Let's create the playbook to disable root and enable admin
> cat play-disable-root.yml
- hosts: webservers
  remote_user: 'root'
  tasks:
    - include_role:
        name: disable_root
      when: play_disable_root|default(false)
In the first play, test whether admin can reach the remote host and set a fact accordingly, then import this playbook:
- hosts: webservers
  remote_user: 'admin'
  tasks:
    - delegate_to: localhost
      command: ping -c1 "{{ inventory_hostname }}"
      register: result
      ignore_errors: true
    - set_fact:
        play_disable_root: true
      when: result.rc != 0

- import_playbook: play-disable-root.yml
In the second play, proceed with the remaining roles:
- hosts: webservers
  remote_user: 'admin'
  roles:
    - common
    - nginx
Both the first and second play may be put into one playbook.
(code not tested)
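For reference, the combined layout could look like this (the site.yml filename is just an assumption; same code as above, still untested):
# site.yml (filename assumed)
- hosts: webservers
  remote_user: 'admin'
  tasks:
    - delegate_to: localhost
      command: ping -c1 "{{ inventory_hostname }}"
      register: result
      ignore_errors: true
    - set_fact:
        play_disable_root: true
      when: result.rc != 0

- import_playbook: play-disable-root.yml

- hosts: webservers
  remote_user: 'admin'
  roles:
    - common
    - nginx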
Update:
it is not possible to conditionally import playbooks, which is why the condition is placed on include_role inside the imported playbook instead.

Related

Launch a play in playbook based on gather facts

I have this Ansible playbook with three different plays. What I want to do is launch the two last plays based on a condition. How can I do this directly at the playbook level (not using a when clause in each role)?
- name: Base setup
  hosts: all
  roles:
    - apt
    - pip
  # !!!!! SHUT DOWN IF NOT DEBIAN DIST

- name: dbserver setup
  hosts: dbserver
  remote_user: "{{ user }}"
  become: true
  roles:
    - mariadb

- name: webserver and application setup
  hosts: webserver
  remote_user: "{{ user }}"
  become: true
  roles:
    - php
    - users
    - openssh
    - sshkey
    - gitclone
    - symfony
You could just end the play for the hosts you do not wish to continue with, with the help of a meta task in pre_tasks:
- name: dbserver setup
  hosts: dbserver
  remote_user: "{{ user }}"
  become: true
  pre_tasks:
    - meta: end_host
      when: ansible_facts.os_family != 'Debian'
  roles:
    - mariadb
And do the same for the web servers.
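For example, the same pattern applied to the webserver play from the question (untested sketch):
- name: webserver and application setup
  hosts: webserver
  remote_user: "{{ user }}"
  become: true
  pre_tasks:
    - meta: end_host
      when: ansible_facts.os_family != 'Debian'
  roles:
    - php
    - users
    - openssh
    - sshkey
    - gitclone
    - symfony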

Ansible. Reconnecting the playbook connection

A server is being created. Initially there is the root user, his password, and ssh on port 22 (the default).
There is a playbook written, for example, for a React application.
When you run the playbook, everything gets deployed, but before deploying you need to configure the server to a minimum: i.e. create a new sudo user, change the ssh port and copy the ssh key to the server. I think this is probably needed for any server.
After this setup, a yaml file appears in the host_vars directory with the variables for this server (ansible_user, ansible_sudo_pass, etc.).
For example, there are 2 roles: initial-server, deploy-react-app.
And the playbook itself (main.yml) for a specific application:
- name: Deploy
  hosts: prod
  roles:
    - role: initial-server
    - role: deploy-react-app
How can I make it so that when you run ansible-playbook main.yml, the initial-server role is executed as the root user with his password, and the deploy-react-app role as the newly created user, connecting with an ssh key rather than the root password? Or is this, in principle, not the correct approach?
Note: using dashes (-) in role names is deprecated. I fixed that in the example below.
Basically:
- name: initialize server
  hosts: prod
  remote_user: root
  roles:
    - role: initial_server

- name: deploy application
  hosts: prod
  # This prevents gathering facts twice but is not mandatory
  gather_facts: false
  remote_user: reactappuser
  roles:
    - role: deploy_react_app
You could also set ansible_user in each role's vars within a single play:
- name: init and deploy
  hosts: prod
  roles:
    - role: initial_server
      vars:
        ansible_user: root
    - role: deploy_react_app
      vars:
        ansible_user: reactappuser
There are other possibilities (using an include_role task). This really depends on your precise requirement.
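For instance, a minimal sketch of the include_role variant, switching ansible_user per task (the role and user names are carried over from above; untested):
- name: init and deploy using include_role
  hosts: prod
  tasks:
    - name: run initial_server as root
      include_role:
        name: initial_server
      vars:
        ansible_user: root

    - name: run deploy_react_app as the application user
      include_role:
        name: deploy_react_app
      vars:
        ansible_user: reactappuser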

How to wait for ssh to become available on a host before installing a role?

Is there a way to wait for ssh to become available on a host before installing a role? There's wait_for_connection, but I only figured out how to use it with tasks.
This particular playbook spins up servers on a cloud provider before attempting to install roles, but it fails since the ssh service on the hosts isn't available yet.
How should I fix this?
---
- hosts: localhost
  connection: local
  tasks:
    - name: Deploy vultr servers
      include_tasks: create_vultr_server.yml
      loop: "{{ groups['vultr_servers'] }}"

- hosts: all
  gather_facts: no
  become: true
  tasks:
    - name: wait_for_connection # This one works
      wait_for_connection:
        delay: 5
        timeout: 600
    - name: Gather facts for first time
      setup:
    - name: Install curl
      package:
        name: "curl"
        state: present
  roles: # How to NOT install roles UNLESS the current host is available ?
    - role: apache2
      vars:
        doc_root: /var/www/example
        message: 'Hello world!'
    - common-tools
Ansible play actions start with pre_tasks, then roles, followed by tasks and finally post_tasks. Move your wait_for_connection task into pre_tasks and it will block everything until a connection is available:
- hosts: all
  gather_facts: no
  become: true
  pre_tasks:
    - name: wait_for_connection # This one works
      wait_for_connection:
        delay: 5
        timeout: 600
  roles: ...
  tasks: ...
For more info on execution order, see this section in the roles documentation (the paragraph just above the notes).
Note: you probably want to move all your current example tasks into that section too, so that facts are gathered and curl is installed before anything else.
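Something along these lines, reusing the tasks from the question (untested sketch):
- hosts: all
  gather_facts: no
  become: true
  pre_tasks:
    - name: wait_for_connection
      wait_for_connection:
        delay: 5
        timeout: 600
    - name: Gather facts for first time
      setup:
    - name: Install curl
      package:
        name: "curl"
        state: present
  roles:
    - role: apache2
      vars:
        doc_root: /var/www/example
        message: 'Hello world!'
    - common-tools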

Attach elastic IPs to bastion hosts when provisioned automatically

---
- hosts: localhost
  gather_facts: False
  roles:
    - provision_ec2

# this uses a variable defined in the first role of this playbook, provision_ec2.
- hosts: "{{ hostvars['localhost'].bastion_server_group }}"
  become: yes
  become_method: sudo
  roles:
    - hosts_file
  # this won't work on bastion servers until we automate a way to connect to the newly provisioned bastion server.
  # This would require some proxy command and attaching an elastic IP, then pushing that to the ssh_config.
Because we currently do the steps described in the comments above manually every time we spin up a new bastion server, I need help automating the process of attaching the Elastic IP to the newly provisioned bastion server. I am new to YAML and Ansible; I have been learning YAML for the last few weeks.
- hosts: '{{HOST_GROUP}}'
  gather_facts: False
  roles:
    - { role: ec2_tags, when: server_type != 'bastion' }
    - { role: ec2_tag_volumes, when: server_type == 'app' or server_type == 'util' }
There is the ec2_eip module. Use it just after creating your instance with the ec2 module.
Example usage:
- ec2_eip:
    region: "{{ region }}"
    state: present
    in_vpc: yes
    device_id: "{{ ec2_result.instances[0].id }}"
    reuse_existing_ip_allowed: yes
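As a sketch only, the two modules would be chained like this; the ec2 task parameters, the placeholder variables (bastion_ami, key_name, subnet_id) and the ec2_result register name are assumptions, not from the original post:
- ec2:
    region: "{{ region }}"
    image: "{{ bastion_ami }}"        # assumed variable
    instance_type: t2.micro           # assumed value
    key_name: "{{ key_name }}"        # assumed variable
    vpc_subnet_id: "{{ subnet_id }}"  # assumed variable
    wait: yes
  register: ec2_result

- ec2_eip:
    region: "{{ region }}"
    state: present
    in_vpc: yes
    device_id: "{{ ec2_result.instances[0].id }}"
    reuse_existing_ip_allowed: yes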

Ansible launch ssh-copy-id before running play?

I want to use Ansible to disable SELinux on some remote servers. I don't know the full list of servers yet; it will grow over time.
It would be great if the ssh-copy-id phase were somehow integrated into the playbook - you would expect that from an automation system. I don't mind being asked for the password once per server.
From various reading, I understand I can run a local_action in my tasks:
---
- name: Disable SELinux
  hosts: all
  remote_user: root
  gather_facts: False
  tasks:
    - local_action: command ssh-copy-id {{remote_user}}@{{hostname}}
    - selinux:
        state: disabled
However:
It fails because {{remote_user}} and {{hostname}} are not accessible in this context.
I need to set gather_facts to False, because fact gathering is executed before the local_action.
Any idea if this is possible within Ansible playbooks?
You may try this:
- hosts: all
  gather_facts: no
  tasks:
    - set_fact:
        rem_user: "{{ ansible_user | default(lookup('env','USER')) }}"
        rem_host: "{{ ansible_host }}"
    - local_action: command ssh-copy-id {{ rem_user }}@{{ rem_host }}
    - setup:
    - selinux:
        state: disabled
Define the remote user and remote host first, then run the local action, then force fact gathering with setup.
