I have an Ansible role with a task delegated to localhost:
- name: Test role
  hosts: my_hosts
  gather_facts: no
  tasks:
    - name: Register remote hosts
      include_role:
        name: register_remote_hosts
      delegate_to: localhost
The role register_remote_hosts must work for every host in my_hosts, but it must be run from the box where Ansible is invoked; that is why there is delegate_to.
The role checks whether a specific application is installed on localhost, and if it is not, it creates a virtual environment and installs the application into it:
- name: Check if my_app is installed system-wide
  shell: |
    my_app --version >/dev/null 2>&1
  register: my_app_cmd
  failed_when: my_app_cmd.rc not in [0, 127]

- name: Install My App
  block:
    - name: Create temporary directory for my_app
      tempfile:
        state: directory
        suffix: my_app
      register: my_app_temp

    - name: Create virtual environment
      command: virtualenv "{{ my_app_temp.path }}"

    - name: Install my_app
      pip:
        name: my_app
        state: latest
        virtualenv: "{{ my_app_temp.path }}"
        virtualenv_site_packages: yes

    - name: Set Virtual Environment variable
      set_fact:
        venv_activate: "source {{ my_app_temp.path }}/bin/activate"
  when: my_app_cmd.rc != 0
- name: Use my_app
  shell: |
    {{ venv_activate | default('echo "Using my_app from system path"') }}
    my_app --version
Everything works great, but if there are many hosts in my_hosts, a separate venv is created for every host.
What would be the best approach to create a role which reuses the same venv with my_app installed? Note that the role is included in many different playbooks, and I do not want to write an additional role that must be included in every playbook where the "Register remote hosts" role is used. There is, of course, also the concurrency problem of creating the venv before other playbooks use it.
The above solution works and I can live with it, but maybe there are nicer design patterns for such problems in Ansible.
The solution is to add run_once (thanks @ssbarnea):
when: my_app_cmd.rc != 0
run_once: yes
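Applied to the role above, the install block gets run_once, so the venv is created only once per play run; since every task is delegated to localhost anyway, all hosts can share it. Note that a fact set by set_fact in a run_once task is propagated to every host in the play, so venv_activate remains available everywhere. A sketch (abbreviated to the relevant tasks):

- name: Install My App
  block:
    - name: Create temporary directory for my_app
      tempfile:
        state: directory
        suffix: my_app
      register: my_app_temp
    - name: Set Virtual Environment variable
      set_fact:
        venv_activate: "source {{ my_app_temp.path }}/bin/activate"
  when: my_app_cmd.rc != 0
  run_once: yes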
Related
In my Ansible playbook I take two inputs from the user, and I also want to take a third input which should be optional: if the user provides a value for var3, the playbook must execute a task, otherwise it should not. What is the way to achieve this?
I also wanted to know: I am using the open-source AWX UI for Ansible and I choose the hosts to run the playbook on in the AWX inventory, so what should I write in the 'hosts' field of my playbook, or can it be left alone?
- name: Updating "{{ service_name }}" server codebase and starting its service.
  hosts: all
  tasks:
    - name: Stopping nginx service
      command: sudo service nginx stop

    - name: Performing git checkout in the specified directory "{{ path }}"
      command: git checkout .
      args:
        chdir: "{{ path }}"

    - name: Running npm install in the directory "{{ path }}"
      command: npm install
      args:
        chdir: "{{ path }}/node_modules"

    - name: Restarting the "{{ service_name }}" service
      command: sudo service "{{ service_name }}" restart

    - name: Restarting the nginx service
      command: sudo service nginx restart
Who is the user in this instance? You? If you are the user, then you can run
ansible-playbook -i hosts <your-playbook> -e "service_name=<yourservice>"
to change the service_name variable dynamically at playbook execution.
You can then add the second variable to the command as well, but be careful with the 'optional' third variable: if your playbook references a variable that you do not provide, you will get an error.
EDIT: You will need to pass both the service_name and path variables when you execute the ansible-playbook command. Where is the third variable? It doesn't appear to be in your provided code sample.
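For the optional third variable, a common pattern (a sketch; var3 is a placeholder name, assuming it is passed with -e like the others) is to guard the task with a when condition so it is simply skipped when the variable is absent:

- name: task that only runs when var3 is provided
  command: echo "{{ var3 }}"
  when: var3 is defined

With this guard, the playbook runs cleanly whether or not you add var3=<value> to the -e string.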
I have an Ansible playbook to update my Debian based servers. For simplicity and security reasons, I don't want to use a vault for the passwords, and I also don't want to store them in a publicly accessible config file. So I ask for the password for every client with
become: yes
become_method: sudo
Now, when the playbook runs, it seems the first thing Ansible does is ask for the sudo password, but I don't know for which server (the passwords are different). Is there a way to get Ansible to print the current host name before it asks for the password?
The update playbook is similar to this:
---
- hosts: all
  gather_facts: no
  vars:
    verbose: false
    log_dir: "log/dist-upgrade/{{ inventory_hostname }}"

  pre_tasks:
    - block:
        - setup:
      rescue:
        - name: "Install required python-minimal package"
          raw: "apt-get update && apt-get install -y --force-yes python-apt python-minimal"
        - setup:

  tasks:
    - name: Update packages
      apt:
        update_cache: yes
        upgrade: dist
        autoremove: yes
      register: output

    - name: Check changes
      set_fact:
        updated: true
      when: not output.stdout | search("0 upgraded, 0 newly installed")

    - name: Display changes
      debug:
        msg: "{{ output.stdout_lines }}"
      when: verbose or updated is defined

    - block:
        - name: "Create log directory"
          file:
            path: "{{ log_dir }}"
            state: directory
          changed_when: false

        - name: "Write changes to logfile"
          copy:
            content: "{{ output.stdout }}"
            dest: "{{ log_dir }}/dist-upgrade_{{ ansible_date_time.iso8601 }}.log"
          changed_when: false
      when: updated is defined
      connection: local
(source: http://www.panticz.de/Debian-Ubuntu-mass-dist-upgrade-with-Ansible)
Your become configuration above does not make Ansible ask you for a become password: it just advises it to use become with the sudo method (which can work without any password if, for example, passwordless sudo is configured).
If you are asked for a become password, it's because (it's a guess, but I'm rather confident...) you used the --ask-become-pass option when running ansible-playbook.
In this case, you are prompted only once at the beginning of the playbook run, and this default become password is used on every server you connect to, unless you have defined a different one in your inventory for a specific host or group.
If you have different become passwords depending on the machine, you don't really have another option: you need to declare those passwords in your inventory (and it is strongly advised to use ansible-vault encryption), or use some other mechanism to fetch them from an external application (HashiCorp Vault, dynamic inventory, CyberArk...).
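A minimal sketch of the inventory approach (host1/host2 are placeholder names; ansible_become_pass is the standard connection variable, and the values should of course be encrypted, e.g. with ansible-vault encrypt_string):

# host_vars/host1.yml
ansible_become_pass: "password-for-host1"

# host_vars/host2.yml
ansible_become_pass: "password-for-host2"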
A recurring theme in my Ansible playbooks is that I often must execute a command with sudo privileges (sudo: yes) because I'd like to do it for a certain user. Ideally I'd much rather use sudo to switch to that user and execute the commands normally, because then I won't have to do my usual post-command cleanup, such as chowning directories. Here's a snippet from one of my playbooks:
- name: checkout repo
  git: repo=https://github.com/some/repo.git version=master dest={{ dst }}
  sudo: yes

- name: change perms
  file: dest={{ dst }} state=directory mode=0755 owner=some_user
  sudo: yes
Ideally I could run commands or sets of commands as a different user even if it requires sudo to su to that user.
With Ansible 1.9 or later
Ansible uses the become, become_user, and become_method directives to achieve privilege escalation. You can apply them to an entire play or playbook, set them in an included playbook, or set them for a particular task.
- name: checkout repo
  git: repo=https://github.com/some/repo.git version=master dest={{ dst }}
  become: yes
  become_user: some_user
You can use become_method to specify how the privilege escalation is achieved, the default being sudo.
The directive is in effect for the scope of the block in which it is used.
See Hosts and Users for some additional examples and Become (Privilege Escalation) for more detailed documentation.
In addition to the task-scoped become and become_user directives, Ansible 1.9 added some new variables and command line options to set these values for the duration of a play in the absence of explicit directives:
- Command line options for the equivalent become/become_user directives.
- Connection-specific variables which can be set per host or group.
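For example (a sketch using the standard --become/--become-user command line flags and the ansible_become/ansible_become_user connection variables; webservers and some_user are placeholders):

# on the command line, for the duration of the run:
ansible-playbook site.yml --become --become-user=some_user

# or per host/group in the inventory:
[webservers:vars]
ansible_become=yes
ansible_become_user=some_user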
As of Ansible 2.0.2.0, the older sudo/sudo_user syntax described below still works, but the deprecation notice states, "This feature will be removed in a future release."
Previous syntax, deprecated as of Ansible 1.9 and scheduled for removal:
- name: checkout repo
  git: repo=https://github.com/some/repo.git version=master dest={{ dst }}
  sudo: yes
  sudo_user: some_user
In Ansible 2.x, you can use a block for a group of tasks:
- block:
    - name: checkout repo
      git:
        repo: https://github.com/some/repo.git
        version: master
        dest: "{{ dst }}"

    - name: change perms
      file:
        dest: "{{ dst }}"
        state: directory
        mode: 0755
        owner: some_user
  become: yes
  become_user: some_user
In Ansible >1.4 you can actually specify a remote user at the task level, which should allow you to log in as that user and execute that command without resorting to sudo. If you can't log in as that user, then the sudo_user solution will work too.
---
- hosts: webservers
  remote_user: root
  tasks:
    - name: test connection
      ping:
      remote_user: yourname
See http://docs.ansible.com/playbooks_intro.html#hosts-and-users
A solution is to use the include statement with the remote_user variable (described here: http://docs.ansible.com/playbooks_roles.html), but it has to be done at the playbook level instead of the task level.
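A rough sketch of that playbook-level approach (some_user is a placeholder and must exist on the target; the user applies to the whole play rather than to a single task):

- hosts: webservers
  remote_user: some_user
  tasks:
    - name: run as some_user without sudo
      command: whoami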
You can specify become_method to override the default method set in ansible.cfg (if any), and which can be set to one of sudo, su, pbrun, pfexec, doas, dzdo, ksu.
- name: I am confused
  command: 'whoami'
  become: true
  become_method: su
  become_user: some_user
  register: myidentity

- name: my secret identity
  debug:
    msg: '{{ myidentity.stdout }}'
Should display
TASK [my-task : my secret identity] ************************************************************
ok: [my_ansible_server] => {
    "msg": "some_user"
}
I have a playbook that provisions a host for use with Rails/rvm/passenger. I'd like to use the same playbook to set up both test and production.
In testing, the user to add to the rvm group is jenkins. The one in production is passenger. My playbook excerpt below does this based on the inventory_hostname parameter.
It seems like adding a new user:/when: block to the playbook for every testing or production host is the wrong way to go here. Should I be using an Ansible role for this?
Thanks
---
- hosts: all
  become: true
  ...
  tasks:
    - name: add jenkins user to rvm group when on testing
      user: name={{ item }}
            shell=/bin/bash
            groups=rvm
            append=yes
      with_items:
        - jenkins
      when: "'bob.mydomain' in inventory_hostname"

    - name: add passenger user to rvm group when on rails production
      user: name={{ item }}
            shell=/bin/bash
            groups=rvm
            append=yes
      with_items:
        - passenger
      when: "'alice.mydomain' in inventory_hostname"
Create an inventory file called inventories/testing
[web]
bob.mydomain

[testing:children]
web
This will control what hosts are targeted when you run your playbook against your testing environment.
Create another file called group_vars/testing
rvm_user: jenkins
This file will keep all variables required for running a playbook against the testing environment. Your production file should have the same variables, but with different values.
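For example, the production counterpart (group_vars/production, matching the passenger user from the question) would contain:

rvm_user: passenger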
Finally in your playbook:
---
- hosts: all
  become: true
  ...
  tasks:
    - name: add user to rvm group
      user:
        name: "{{ rvm_user }}"
        shell: "/bin/bash"
        groups: rvm
        append: yes
Now, when you want to run your playbook, you execute it like so:
ansible-playbook -i inventories/testing site.yml
Ansible will do the right thing and look for a testing file in group_vars and read the variables from there. It will ignore variables in a file or folder not named after your environment, with the exception of a file called all, which is intended for common variables across playbooks.
Good luck - Ansible is an amazing tool :)
my inventory file's contents -
[webservers]
x.x.x.x ansible_ssh_user=ubuntu
[dbservers]
x.x.x.x ansible_ssh_user=ubuntu
In my tasks file, which is in a common role (i.e. it will run on both hosts), I want the following task to run on the webservers host group but not on dbservers, as defined in the inventory file:
- name: Install required packages
  apt: name={{ item }} state=present
  with_items:
    - '{{ programs }}'
  become: yes
  tags: programs
Is the when keyword helpful here, or is there another way? How could I do this?
If you want to run your role on all hosts but limit a single task to the webservers group, then - like you already suggested - when is your friend.
You could define a condition like:
when: inventory_hostname in groups['webservers']
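Applied to the task from the question, that looks like this (a sketch reusing the programs variable from the question):

- name: Install required packages
  apt: name={{ item }} state=present
  with_items:
    - '{{ programs }}'
  become: yes
  tags: programs
  when: inventory_hostname in groups['webservers']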
Thank you, this helps me too.
hosts file:
[production]
host1.dns.name
[internal]
host2.dns.name
requirements.yml file:
- name: install the sphinx-search rpm from a remote repo on x86_64 - internal host
  when: inventory_hostname in groups['internal']
  yum:
    name: http://sphinxsearch.com/files/sphinx-2.2.11-1.rhel7.x86_64.rpm
    state: present

- name: install the sphinx-search rpm from a remote repo on i386 - Production
  when: inventory_hostname in groups['production']
  yum:
    name: http://sphinxsearch.com/files/sphinx-2.2.11-2.rhel6.i386.rpm
    state: present
An alternative to consider in some scenarios is -
delegate_to: hostname
There is also this example from the Ansible docs, to loop over a group: https://docs.ansible.com/ansible/latest/user_guide/playbooks_delegation.html -
- hosts: app_servers
  tasks:
    - name: gather facts from db servers
      setup:
      delegate_to: "{{ item }}"
      delegate_facts: true
      loop: "{{ groups['dbservers'] }}"