How can I get Ansible to only install MySQL if it's being run via Vagrant?

Our actual setup runs on AWS where we have RDS available, but in vagrant we naturally need to install MySQL locally. What's the normal way of skipping installation with Vagrant? My ansible file looks something like this:
---
- name: foo
  hosts: foo
  sudo: yes
  roles:
    - common-web
    - bennojoy.mysql
    - php

I would recommend having specific groups in your inventory file and running an 'install locally' playbook on the Vagrant instances. This also means you would want to run an 'install RDS config' playbook on the AWS instances, of course.
Trying to do all the things in all the places in one playbook is possible, but in my opinion it's cleaner to have different playbooks for different environments; a sketch of what that could look like follows.
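For instance (a minimal sketch with hypothetical group names and file names, not taken from your setup), the inventory carries one group per environment and each playbook targets only its own group:

# inventory/vagrant (hypothetical)
[vagrant_web]
192.168.33.10

# inventory/aws (hypothetical)
[aws_web]
web1.example.com

# local-mysql.yml -- run only against the Vagrant boxes
---
- name: foo (Vagrant)
  hosts: vagrant_web
  sudo: yes
  roles:
    - common-web
    - bennojoy.mysql
    - php

# rds-config.yml -- run only against the AWS instances, which use RDS
---
- name: foo (AWS)
  hosts: aws_web
  sudo: yes
  roles:
    - common-web
    - php

Running ansible-playbook -i inventory/vagrant local-mysql.yml then installs MySQL only on the Vagrant boxes.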

You can do this, as Vagrant always creates a directory at the root level, /vagrant.
So just check for it like this:
---
- name: foo
  hosts: foo
  sudo: yes
  pre_tasks:
    # pre_tasks run before roles, so the registered result is available to the role conditional below
    - name: Check that the /vagrant directory exists
      command: /usr/bin/test -e /vagrant
      register: dir_exists
      ignore_errors: yes   # the test returns rc 1 outside Vagrant; don't fail the play
  roles:
    - common-web
    - { role: bennojoy.mysql, when: dir_exists.rc == 0 }
    - php
Here I am supposing that "bennojoy.mysql" is your main MySQL role; please check it and let me know if it works for you. Thanks
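If you would rather avoid the external test command (which has to be allowed to fail outside Vagrant), a similar check can be done with the stat module; this is a sketch, not part of the original answer:

---
- name: foo
  hosts: foo
  sudo: yes
  pre_tasks:
    - name: Check whether the /vagrant directory exists
      stat:
        path: /vagrant
      register: vagrant_dir
  roles:
    - common-web
    - { role: bennojoy.mysql, when: vagrant_dir.stat.exists }
    - php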

Related

How can I move the centos user's homedir using ansible?

I've got a CentOS cluster where /home is going to get mounted over nfs. So I think the centos user's home should be moved somewhere that will remain local, maybe /var/lib/centos or something. But since centos is the ansible_user, I can't use:
- hosts: cluster
  become: yes
  tasks:
    - ansible.builtin.user:
        name: centos
        move_home: yes
        home: "/var/lib/centos"
as unsurprisingly it fails with
usermod: user centos is currently used by process 45028
Any semi-tidy workarounds for this, or better ideas?
I don't think you're going to be able to do this with the user module if you're connecting as the centos user. However, if you handle the individual steps yourself it should work:
---
- hosts: centos
  gather_facts: false
  become: true

  tasks:
    - command: rsync -a /home/centos/ /var/lib/centos/

    - command: sed -i 's,/home/centos,/var/lib/centos,' /etc/passwd
      args:
        warn: false

    - meta: reset_connection

    - command: rm -rf /home/centos
      args:
        warn: false
This relocates the home directory, updates /etc/passwd, and then removes the old home directory. The reset_connection is in there to force a new ssh connection: without that, Ansible will be unhappy when you remove the home directory.
In practice, you'd want to add some logic to the above playbook to make it idempotent; one possible approach is sketched below.
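One way to do that (a sketch, not from the original answer) is to check whether the old home directory still exists and skip the migration steps once it is gone:

---
- hosts: centos
  gather_facts: false
  become: true

  tasks:
    # If /home/centos is already gone, the migration has already been done
    - stat:
        path: /home/centos
      register: old_home

    - command: rsync -a /home/centos/ /var/lib/centos/
      when: old_home.stat.exists

    - command: sed -i 's,/home/centos,/var/lib/centos,' /etc/passwd
      args:
        warn: false
      when: old_home.stat.exists

    - meta: reset_connection

    - command: rm -rf /home/centos
      args:
        warn: false
      when: old_home.stat.exists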

Is there a better way to conditionally add a user to a group with Ansible?

I have a playbook that provisions a host for use with Rails/rvm/passenger. I'd like to use the same playbook to set up both test and production.
In testing, the user to add to the rvm group is jenkins. The one in production is passenger. My playbook excerpt below does this based on the inventory_hostname parameter.
It seems like adding a new user:/when: block in the playbook for every testing or production host is the wrong way to go here. Should I be using an Ansible role for this?
Thanks
---
- hosts: all
  become: true
  ...
  tasks:
    - name: add jenkins user to rvm group when on testing
      user: name={{ item }}
            shell=/bin/bash
            groups=rvm
            append=yes
      with_items:
        - jenkins
      when: "'bob.mydomain' in inventory_hostname"

    - name: add passenger user to rvm group when on rails production
      user: name={{ item }}
            shell=/bin/bash
            groups=rvm
            append=yes
      with_items:
        - passenger
      when: "'alice.mydomain' in inventory_hostname"
Create an inventory file called inventories/testing
[web]
bob.mydomain

[testing:children]
web
This will control what hosts are targeted when you run your playbook against your testing environment.
Create another file called group_vars/testing
rvm_user: jenkins
This file will keep all variables required for running a playbook against the testing environment. Your production file should have the same variables, but with different values.
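For example, the production counterpart could look like this (a sketch, assuming alice.mydomain is your production web host as in the question):

# inventories/production
[web]
alice.mydomain

[production:children]
web

# group_vars/production
rvm_user: passenger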
Finally in your playbook:
---
- hosts: all
  become: true
  ...
  tasks:
    - name: add user to rvm group
      user:
        name: "{{ rvm_user }}"
        shell: "/bin/bash"
        groups: rvm
        append: yes
Now, when you want to run your playbook, you execute it like so:
ansible-playbook -i inventories/testing site.yml
Ansible will do the right thing and look for a testing file in group_vars, reading variables from there. It will ignore variables in a file or folder not named after your environment, with the exception of a file called all, which is intended to hold common variables across playbooks.
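For the production environment you point at the other inventory in exactly the same way:

ansible-playbook -i inventories/production site.yml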
Good luck - Ansible is an amazing tool :)

How to run multiple kitchen-ansible role tests

I'm using kitchen and ansible to test-drive server configurations. Every example I can find has a .kitchen.yml file in the same folder as the ansible role. I would like to execute multiple tests but there doesn't seem to be an in-built way of doing this - kitchen test expects a single .kitchen.yml file in the folder it's run in (along with the serverspec ruby spec files and a default.yml file that wraps the actual role) e.g.
roles
  - role_1
    - tasks
        main.yml
    - test/integration/default/serverspec/localhost
        role_spec.rb
    default.yml
    .kitchen.yml
I would rather separate out the files used for testing from the files used to configure the servers and to that end I have created a suite per role and specified the provisioner playbook in the suite config:
suites:
  - name: role_1
    provisioner:
      playbook: test/integration/role_1/default.yml
  - name: role_2
    provisioner:
      playbook: test/integration/role_2/default.yml
My *_spec.rb files then have to be in a folder named test/integration/role_1/serverspec
This also allows me to run multiple role tests via a single kitchen test but I'm not sure if this is the way to be going. If I had a playbook that had multiple roles, I can't see how I can re-use the *_spec.rb files.
How is this meant to be done?
This is now available with the latest busser-ansiblespec; see:
https://github.com/neillturner/busser-ansiblespec
https://github.com/neillturner/ansible_repo
https://github.com/neillturner/kitchen-ansible
What I do with my Ansible roles is the following.
My .kitchen.yml file in the "root" of the role:
---
driver:
  name: docker
  provision_command: sed -i '/tsflags=nodocs/d' /etc/yum.conf

provisioner:
  name: ansible_playbook
  ansible_yum_repo: "http://mirror.logol.ru/epel/6/x86_64/epel-release-6-8.noarch.rpm"
  hosts: localhost
  requirements_path: requirements.yml

platforms:
  - name: centos-6.6

verifier:
  ruby_bindir: '/usr/bin'

suites:
  - name: zabbix-server-mysql
    provisioner:
      name: ansible_playbook
      playbook: test/integration/zabbix-server-mysql.yml
  - name: zabbix-server-pgsql
    provisioner:
      name: ansible_playbook
      playbook: test/integration/zabbix-server-pgsql.yml
In the "test/integration" directory I have the following setup:
./zabbix-server-mysql/serverspec/localhost/ansible-zabbix-server_spec.rb
./zabbix-server-mysql/serverspec/spec_helper.rb
./zabbix-server-mysql.yml
./zabbix-server-pgsql/serverspec/localhost/ansible-zabbix-server_spec.rb
./zabbix-server-pgsql/serverspec/spec_helper.rb
./zabbix-server-pgsql.yml
The zabbix-server-pgsql.yml and zabbix-server-mysql.yml files are the playbooks that call the role itself, like this:
- hosts: localhost
  roles:
    - role: geerlingguy.mysql
    - role: ansible-zabbix-server
      zabbix_url: zabbix.example.com
      zabbix_version: 2.4
      database_type: mysql
      database_type_long: mysql
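Only the MySQL variant is shown above; the PostgreSQL one would presumably be the same playbook with a PostgreSQL role and the database_type switched (a sketch, with geerlingguy.postgresql assumed as the database role):

- hosts: localhost
  roles:
    - role: geerlingguy.postgresql
    - role: ansible-zabbix-server
      zabbix_url: zabbix.example.com
      zabbix_version: 2.4
      database_type: pgsql
      database_type_long: postgresql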
Hope this helps you.
I don't know how to reuse the _spec.rb files, so I can't give an answer on that one. (Do want to know the answer, so I'll bookmark this page ;-))
Kind regards,
Werner

Ansible 1.9.1 'become' and sudo issue

I am trying to run an extremely simple playbook to test a new Ansible setup.
When using the 'new' Ansible Privilege Escalation config options in my ansible.cfg file:
[defaults]
host_key_checking=false
log_path=./logs/ansible.log
executable=/bin/bash
#callback_plugins=./lib/callback_plugins
######
[privilege_escalation]
become=True
become_method='sudo'
become_user='tstuser01'
become_ask_pass=False
[ssh_connection]
scp_if_ssh=True
I get the following error:
fatal: [webserver1.local] => Internal Error: this module does not support running commands via 'sudo'
FATAL: all hosts have already failed -- aborting
The playbook is also very simple:
# Checks the hosts provisioned by midrange
---
- name: Test su connecting as current user
  hosts: all
  gather_facts: no
  tasks:
    - name: "sudo to configured user -- tstuser01"
      #action: ping
      command: /usr/bin/whoami
I am not sure if there is something broken in Ansible 1.9.1 or if I am doing something wrong. Surely the 'command' module in Ansible allows running commands as sudo.
The issue is with the configuration; I took it as an example and hit the same problem. After playing around for a while I noticed that the following works:
1) deprecated sudo:
---
- hosts: all
  sudo: yes
  gather_facts: no
  tasks:
    - name: "sudo to root"
      command: /usr/bin/whoami
2) new become
---
- hosts: all
  become: yes
  become_method: sudo
  gather_facts: no
  tasks:
    - name: "sudo to root"
      command: /usr/bin/whoami
3) using ansible.cfg:
[privilege_escalation]
become = yes
become_method = sudo

and then in a playbook:

---
- hosts: all
  gather_facts: no
  tasks:
    - name: "sudo to root"
      command: /usr/bin/whoami
since you "becoming" tstuser01 (not a root like me), please play a bit, probably user name should not be quoted too:
become_user = tstuser01
at least this is the way I define remote_user in ansible.cfg and it works... My issue resolved, hope yours too
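Putting that together, a corrected [privilege_escalation] section for the configuration in the question might look like this (a sketch; note the values are unquoted):

[privilege_escalation]
become = True
become_method = sudo
become_user = tstuser01
become_ask_pass = False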
I think you should use the sudo directive in the hosts section so that subsequent tasks run with sudo privileges, unless you explicitly specify sudo: no in a task.
Here's your playbook, modified to use the sudo directive.
# Checks the hosts provisioned by midrange
---
- hosts: all
  sudo: yes
  gather_facts: no
  tasks:
    - name: "sudo to configured user -- tstuser01"
      command: /usr/bin/whoami

Ansible ec2 only provision required servers

I've got a basic Ansible playbook like so:
---
- name: Provision ec2 servers
  hosts: 127.0.0.1
  connection: local
  roles:
    - aws

- name: Configure {{ application_name }} servers
  hosts: webservers
  sudo: yes
  sudo_user: root
  remote_user: ubuntu
  vars:
    - setup_git_repo: no
    - update_apt_cache: yes
  vars_files:
    - env_vars/common.yml
    - env_vars/remote.yml
  roles:
    - common
    - db
    - memcached
    - web
with the following inventory:
[localhost]
127.0.0.1 ansible_python_interpreter=/usr/local/bin/python
The Provision ec2 servers task does what you'd expect. It creates an ec2 instance; it also creates a host group [webservers] and adds the created instance IP to it.
The Configure {{ application_name }} servers step then configures that server, installing everything I need.
So far so good, this all does exactly what I want and everything seems to work.
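For reference, the aws role pairs the ec2 module with add_host to build that in-memory webservers group; I haven't included the role itself, so this is only a sketch of the idea with placeholder values:

- name: Launch an EC2 instance
  ec2:
    key_name: mykey            # placeholder
    instance_type: t2.micro
    image: ami-123456          # placeholder AMI
    group: webservers
    region: us-east-1
    wait: yes
  register: ec2_result

- name: Add the new instance to the webservers group
  add_host:
    name: "{{ item.public_ip }}"
    groups: webservers
  with_items: "{{ ec2_result.instances }}"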
Here's where I'm stuck. I want to be able to fire up an ec2 instance for different roles. Ideally I'd create a dbserver, a webserver and maybe a memcached server. I'd like to be able to deploy any part(s) of this infrastructure in isolation, e.g. create and provision just the db servers.
The only ways I can think of to make this work... well, they don't work.
I tried simply declaring the host groups without hosts in the inventory:
[webservers]
[dbservers]
[memcachedservers]
but that's a syntax error.
I would be okay with explicitly provisioning each server and declaring the host group it is for, like so:
- name: Provision webservers
hosts: webservers
connection: local
roles:
- aws
- name: Provision dbservers
hosts: dbservers
connection: local
roles:
- aws
- name: Provision memcachedservers
hosts: memcachedservers
connection: local
roles:
- aws
but those groups don't exist until after the respective step is complete, so I don't think that will work either.
I've seen lots about dynamic inventories, but I haven't been able to understand how that would help me. I've also looked through countless examples of ansible ec2 provisioning projects, they are all invariably either provisioning pre-existing ec2 instances, or just create a single instance and install everything on it.
In the end I realised it made much more sense to just separate the different parts of the stack into separate playbooks, with a full-stack playbook that called each of them.
My remote hosts file stayed largely the same as above. An example of one of the playbooks for a specific part of the stack is:
---
- name: Provision ec2 apiservers
  hosts: apiservers #important bit
  connection: local #important bit
  vars:
    - host_group: apiservers
    - security_group: blah
  roles:
    - aws

- name: Configure {{ application_name }} apiservers
  hosts: apiservers:!127.0.0.1 #important bit
  sudo: yes
  sudo_user: root
  remote_user: ubuntu
  vars_files:
    - env_vars/common.yml
    - env_vars/remote.yml
  vars:
    - setup_git_repo: no
    - update_apt_cache: yes
  roles:
    - common
    - db
    - memcached
    - web
This means that the first play of each layer's playbook adds the new host to the apiservers group, and the second play (Configure ... apiservers) can then exclude localhost without getting a "no hosts matched" error.
The wrapping playbook is dead simple, just:
---
- name: Deploy all the {{ application_name }} things!
  hosts: all

- include: webservers.yml
- include: apiservers.yml
I'm very much a beginner with regard to Ansible, so please do take this for what it is: some guy's attempt to find something that works. There may be better options and this could violate best practice all over the place.
The ec2 module supports an "exact_count" parameter, not just a "count" parameter.
It will create (or terminate!) instances until the number of instances matching the specified tags ("instance_tags", matched via "count_tag") equals the requested count; a sketch follows.
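For example (a sketch with placeholder values), combining exact_count with count_tag makes the play converge on a fixed number of instances carrying a given tag, creating or terminating as needed:

- name: Ensure exactly two tagged web servers exist
  ec2:
    key_name: mykey            # placeholder
    instance_type: t2.micro
    image: ami-123456          # placeholder AMI
    region: us-east-1
    exact_count: 2
    count_tag:
      role: webserver
    instance_tags:
      role: webserver
    wait: yes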
