How to make an ansible playbook that works according to user input? - ansible

In my Ansible playbook I take two inputs from the user, and I also want to take a third input that should be optional. If the user provides a value for var3 the playbook must execute an extra task, otherwise it should not. What is the way to achieve this?
I also wanted to ask: I am using the open-source AWX UI for Ansible, so I choose the hosts to run the playbook on from the AWX inventory. Given that, what should I write in the 'hosts' field of my playbook, or can it be left alone?
- name: Updating "{{ service_name }}" server codebase and starting its service.
  hosts: all
  tasks:
    - name: Stopping nginx service
      command: sudo service nginx stop
    - name: Performing git checkout in the specified directory "{{ path }}"
      command: git checkout .
      args:
        chdir: "{{ path }}"
    - name: Running npm install in the directory "{{ path }}"
      command: npm install
      args:
        chdir: "{{ path }}/node_modules"
    - name: Restarting the "{{ service_name }}" service
      command: sudo service "{{ service_name }}" restart
    - name: Restarting the nginx service
      command: sudo service nginx restart

Who is the user in this instance? You? If you are the user, then you can run
ansible-playbook -i hosts <your-playbook> -e "service_name=<yourservice>"
to dynamically set the service_name variable at playbook execution time.
You can then add the second variable to the same -e string, but be careful with the 'optional' third variable: if your playbook references a variable that is never given a value, you will get an undefined-variable error.
EDIT: You will need to pass both the service_name and path variables when you execute the ansible-playbook command. Where is the third variable? It does not appear in your provided code sample.
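To make that third input truly optional, a common pattern is to guard the extra task with a when: condition that tests whether the variable is defined. A minimal sketch, assuming the optional variable is called var3 as in the question (the debug task is only a placeholder for whatever the extra task should do):
- name: Task that should only run when var3 is supplied
  # placeholder task; replace with the real work that depends on var3
  debug:
    msg: "var3 was provided with value {{ var3 }}"
  when: var3 is defined
You can then run ansible-playbook -i hosts <your-playbook> -e "service_name=<yourservice> path=<yourpath> var3=<value>" to trigger the task, or simply leave var3 out of -e and the guarded task is skipped.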

Related

Ansible run role based on condition

I'm using Ansible to install an agent on Linux servers. There are different install procedures based on whether the system is running systemd or initd. I created a role for each install procedure, but I want to check whether the server is running systemd or initd first and then run the corresponding role. Below is the code I have created. Will this type of conditional work this way, or am I missing the mark?
tasks:
  - name: check if running initd or systemd and run the correct role
    command: pidof systemd
    register: pid_systemd
  - name: check if running initd or systemd and run the correct role
    command: pidof /sbin/init
    register: pid_initd
  - include_role:
      name: install-appd-machine-agent-initd
    when: pid_initd.stdout == '1'
  - include_role:
      name: install-appd-machine-agent-systemd
    when: pid_systemd.stdout == '1'
Ansible collects facts about a system through the setup module (run automatically when gather_facts is enabled). Among these facts is a magic variable called ansible_service_mgr, which can be used to conditionally execute tasks.
For example, to run your roles conditionally:
tasks:
  - include_role:
      name: install-appd-machine-agent-initd
    when: ansible_service_mgr == "sysvinit"
  - include_role:
      name: install-appd-machine-agent-systemd
    when: ansible_service_mgr == "systemd"
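If you want to see what value ansible_service_mgr reports on a given host before relying on it, a quick check like the following works (a minimal sketch; the hosts pattern and task name are illustrative):
- hosts: all   # adjust the hosts pattern to the machines you want to inspect
  tasks:
    - name: Show which service manager Ansible detected
      debug:
        var: ansible_service_mgr
Facts are gathered by default, so the variable is populated unless gather_facts: no has been set for the play.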

Ansible pre-check before run playbook

Is it possible to add a condition before running a playbook that checks whether a title, a description, the environment and the version are present in the playbook?
For example my test.yml playbook:
---
#Apache servers
#Linux
#Ubuntu
#version 2.1.1
#Testing for secure the webserver
task:
xxxxxx
xxxxxx
And I would like to check that all the comments above are present before running this task!
I tried this solution:
name: run Apachetest playbook
include: test.yml
when: "{{ lookup('file', 'test.yml').split('\n')[1] == '#Apache servers' }}"
But still not working...
Comments are, well, comments. They do not impact the execution and are simply ignored. So there is no built-in way, and actually no real reason, to check whether comments are present; you would need to implement that check yourself.
To check playbooks, roles, etc. there is ansible-lint, which verifies the syntax and some best practices (e.g. whether you use command or shell for something where a dedicated module exists), but it does not verify comments (again, checking for comments makes no sense from an execution perspective, as they are ignored).
What I understand is that you want certain information to be present in your playbook. If I were you, I would either create a git hook that verifies the information is present before letting you push the code to your repository, or establish a proper review process where the reviewer only accepts a merge/pull request if the information is present.
Otherwise, here is the code, that will do what you are trying to do:
---
#Apache server
- hosts: all
  tasks:
    - name: set fact
      set_fact:
        pb: "{{ lookup('file', 'test.yml').split('\n')[1] }}"
    - name: check if we found it
      debug:
        msg: 'found'
      when: "'#Apache server' in pb"
You could use an Apache role to get Apache installed, like this:
---
- hosts: apache
  become: yes
  tasks:
    - name: install apache2
      apt: name=apache2 update_cache=yes state=latest
Have a look here: how-to-install-apache-on-ansible

How to make Ansible display the host name before asking for the sudo password?

I have an Ansible playbook to update my Debian-based servers. For simplicity and security reasons, I don't want to use a vault for the passwords and I also don't want to store them in a publicly accessible config file. So I ask for the password for every client with
become: yes
become_method: sudo
Now, when the playbook runs, it seems the first thing Ansible does is ask for the sudo password, but I don't know for which server (the passwords are different). Is there a way to get Ansible to print the current host name before it asks for the password?
The update playbook is similar to this:
---
- hosts:
    all
  gather_facts: no
  vars:
    verbose: false
    log_dir: "log/dist-upgrade/{{ inventory_hostname }}"
  pre_tasks:
    - block:
        - setup:
      rescue:
        - name: "Install required python-minimal package"
          raw: "apt-get update && apt-get install -y --force-yes python-apt python-minimal"
        - setup:
  tasks:
    - name: Update packages
      apt:
        update_cache: yes
        upgrade: dist
        autoremove: yes
      register: output
    - name: Check changes
      set_fact:
        updated: true
      when: not output.stdout | search("0 upgraded, 0 newly installed")
    - name: Display changes
      debug:
        msg: "{{ output.stdout_lines }}"
      when: verbose or updated is defined
    - block:
        - name: "Create log directory"
          file:
            path: "{{ log_dir }}"
            state: directory
          changed_when: false
        - name: "Write changes to logfile"
          copy:
            content: "{{ output.stdout }}"
            dest: "{{ log_dir }}/dist-upgrade_{{ ansible_date_time.iso8601 }}.log"
          changed_when: false
      when: updated is defined
      connection: local
(source: http://www.panticz.de/Debian-Ubuntu-mass-dist-upgrade-with-Ansible)
Your become configuration above does not make Ansible ask you for a become password: it just tells it to use become with the sudo method (which will work without any password if you have the correct configuration in place, for example).
If you are asked for a become password, it's because (it's a guess, but I'm rather confident...) you used the --ask-become-pass option when running ansible-playbook.
In that case, you are prompted only once at the beginning of the playbook run, and this default become password is used on all servers you connect to, unless you defined a different one in your inventory for a specific host/group.
If you have different become passwords depending on the machine, you don't really have another option: you need to declare those passwords in your inventory (and it is strongly advised to use ansible-vault encryption) or use some other mechanism to fetch them from an external application (HashiCorp Vault, dynamic inventory, CyberArk...).
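As a sketch of the inventory option, the become password can be set per host with the ansible_become_password variable (host names and values below are placeholders; in practice you would encrypt them with ansible-vault rather than keep them in plain text):
# example inventory; replace the hosts and vault-encrypt the values
[debian_servers]
server1.example.com ansible_become_password=secret_for_server1
server2.example.com ansible_become_password=secret_for_server2
With the passwords supplied from the inventory, --ask-become-pass is no longer needed at all, so the question of which host the prompt belongs to disappears.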

Is there a better way to conditionally add a user to a group with Ansible?

I have a playbook that provisions a host for use with Rails/rvm/passenger. I'd like to use the same playbook to set up both test and production.
In testing, the user to add to the rvm group is jenkins. The one in production is passenger. My playbook excerpt below does this based on the inventory_hostname parameter.
It seems like adding a new user:/when: block to the playbook for every testing or production host is the wrong way to go here. Should I be using an Ansible role for this?
Thanks
---
- hosts: all
  become: true
  ...
  tasks:
    - name: add jenkins user to rvm group when on testing
      user: name={{ item }}
            shell=/bin/bash
            groups=rvm
            append=yes
      with_items:
        - jenkins
      when: "'bob.mydomain' in inventory_hostname"
    - name: add passenger user to rvm group when on rails production
      user: name={{ item }}
            shell=/bin/bash
            groups=rvm
            append=yes
      with_items:
        - passenger
      when: "'alice.mydomain' in inventory_hostname"
Create an inventory file called inventories/testing
[web]
bob.mydomain

[testing:children]
web
This will control what hosts are targeted when you run your playbook against your testing environment.
Create another file called group_vars/testing
rvm_user: jenkins
This file will keep all variables required for running a playbook against the testing environment. Your production file should have the same variables, but with different values.
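For example, a sketch of the production counterparts, mirroring the testing files (alice.mydomain and the passenger user are taken from the question; adjust to your real production hosts):
# inventories/production
[web]
alice.mydomain

[production:children]
web

# group_vars/production
rvm_user: passenger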
Finally in your playbook:
---
- hosts: all
  become: true
  ...
  tasks:
    - name: add user to rvm group
      user:
        name: "{{ rvm_user }}"
        shell: "/bin/bash"
        groups: rvm
        append: yes
Now, when you want to run your playbook, you execute it like so:
ansible-playbook -i inventories/testing site.yml
Ansible will do the right thing and look for a testing file in group_vars, reading variables from there. It will ignore variables in files or folders not named after your environment, with the exception of a file called all, which is intended for common variables shared across environments.
Good luck - Ansible is an amazing tool :)

Ansible role with shared python virtual environment

I have an Ansible role with a task delegated to localhost:
- name: Test role
  hosts: my_hosts
  gather_facts: no
  tasks:
    - name: Register remote hosts
      include_role: name=register_remote_hosts
      delegate_to: localhost
The role register_remote_hosts must work for every host in my_hosts, but it must be run from the box where Ansible is invoked, which is why delegate_to is there.
The role register_remote_hosts checks for a specific application on localhost and, if it is not installed, creates a virtual environment and then installs the application into it:
- name: Check if my_app is installed system-wide
  shell: |
    my_app --version >/dev/null 2>&1
  register: my_app_cmd
  failed_when: my_app_cmd.rc not in [0, 127]
- name: Install My App
  block:
    - name: Create temporary directory for my_app
      tempfile:
        state: directory
        suffix: my_app
      register: my_app_temp
    - name: Create virtual environment
      command: virtualenv "{{ my_app_temp.path }}"
    - name: Install my_app
      pip:
        name: my_app
        state: latest
        virtualenv: "{{ my_app_temp.path }}"
        virtualenv_site_packages: yes
    - name: Set Virtual Environment variable
      set_fact:
        venv_activate: "source {{ my_app_temp.path }}/bin/activate"
  when: my_app_cmd.rc != 0
- name: Use my_app
  shell: |
    {{ venv_activate | default('echo "Using my_app from system path"') }}
    my_app --version
Everything works great, but if there are many hosts in my_hosts then a lot of venvs are created.
What would be the best approach to creating a role that reuses the same venv with my_app installed? Note that the role is included in many different playbooks, and I do not want to write an additional role to include in every playbook where the "Register remote hosts" include is used. There is, of course, also a concurrency problem of creating the venv before it is used by other playbooks.
The above solution works and I can live with it, but maybe there are nicer design patterns for such problems in Ansible.
The solution is to add run_once to the block (thanks @ssbarnea):
  when: my_app_cmd.rc != 0
  run_once: yes
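In context, run_once sits at the same level as the when: on the install block, so the block executes on a single host per play instead of once for every host in my_hosts. A shortened sketch (the pip and set_fact tasks from the question go in the same block):
- name: Install My App
  block:
    - name: Create temporary directory for my_app
      tempfile:
        state: directory
        suffix: my_app
      register: my_app_temp
    - name: Create virtual environment
      command: virtualenv "{{ my_app_temp.path }}"
    # pip install and set_fact tasks from the question go here as well
  when: my_app_cmd.rc != 0
  run_once: yes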
