I'm using Test Kitchen and Ansible to test-drive server configurations. Every example I can find has a .kitchen.yml file in the same folder as the Ansible role. I would like to execute multiple tests, but there doesn't seem to be a built-in way of doing this: kitchen test expects a single .kitchen.yml file in the folder it's run in (along with the serverspec Ruby spec files and a default.yml file that wraps the actual role), e.g.
roles
  - role_1
      - tasks
          main.yml
      - test/integration/default/serverspec/localhost
          role_spec.rb
      default.yml
      .kitchen.yml
I would rather separate out the files used for testing from the files used to configure the servers and to that end I have created a suite per role and specified the provisioner playbook in the suite config:
suites:
  - name: role_1
    provisioner:
      playbook: test/integration/role_1/default.yml
  - name: role_2
    provisioner:
      playbook: test/integration/role_2/default.yml
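Each default.yml here is just the wrapper playbook that applies the role under test, something like this minimal sketch (the hosts value and role name are assumed from the layout above):

# test/integration/role_1/default.yml - wrapper playbook for the role under test
- hosts: localhost
  roles:
    - role_1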
My *_spec.rb files then have to be in a folder named test/integration/role_1/serverspec.
This also allows me to run multiple role tests via a single kitchen test, but I'm not sure this is the right way to go. If I had a playbook with multiple roles, I can't see how I could re-use the *_spec.rb files.
How is this meant to be done?
This is now available with the latest busser-ansiblespec; see:
https://github.com/neillturner/busser-ansiblespec
https://github.com/neillturner/ansible_repo
https://github.com/neillturner/kitchen-ansible
What I do with my Ansible roles is the following.
My .kitchen.yml file in the "root" of the role:
---
driver:
  name: docker
  provision_command: sed -i '/tsflags=nodocs/d' /etc/yum.conf

provisioner:
  name: ansible_playbook
  ansible_yum_repo: "http://mirror.logol.ru/epel/6/x86_64/epel-release-6-8.noarch.rpm"
  hosts: localhost
  requirements_path: requirements.yml

platforms:
  - name: centos-6.6

verifier:
  ruby_bindir: '/usr/bin'

suites:
  - name: zabbix-server-mysql
    provisioner:
      name: ansible_playbook
      playbook: test/integration/zabbix-server-mysql.yml
  - name: zabbix-server-pgsql
    provisioner:
      name: ansible_playbook
      playbook: test/integration/zabbix-server-pgsql.yml
In the "test/integration" directory I have the following setup:
./zabbix-server-mysql/serverspec/localhost/ansible-zabbix-server_spec.rb
./zabbix-server-mysql/serverspec/spec_helper.rb
./zabbix-server-mysql.yml
./zabbix-server-pgsql/serverspec/localhost/ansible-zabbix-server_spec.rb
./zabbix-server-pgsql/serverspec/spec_helper.rb
./zabbix-server-pgsql.yml
The zabbix-server-pgsql.yml and zabbix-server-mysql.yml files are the playbooks that call the role itself, like this:
- hosts: localhost
  roles:
    - role: geerlingguy.mysql
    - role: ansible-zabbix-server
      zabbix_url: zabbix.example.com
      zabbix_version: 2.4
      database_type: mysql
      database_type_long: mysql
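Because each suite has its own provisioner playbook, you can also converge and test them one at a time; kitchen test accepts an instance name or regex, so something like this should work:

kitchen test zabbix-server-mysql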
Hope this helps you.
I don't know how to reuse the *_spec.rb files, so I can't give an answer on that one. (Do want to know the answer, so I'll bookmark this page ;-))
Kind regards,
Werner
I need to deploy my apps to two VMs using an Ansible playbook. Each VM serves a different purpose and my app consists of several different components, so each component has its own set of tasks in the playbook.
My inventory file looks like this:
[vig]
192.168.10.11
[websvr]
192.168.10.22
Ansible playbooks only seem to have one place for declaring hosts, right at the top, and all the tasks execute against those hosts. But what I hope to achieve is:
Tasks 1 to 10 execute against the vig group
Tasks 11 to 20 execute against the websvr group
All in the same playbook, as in: ansible-playbook -i <inventory file> deploy.yml.
Is that possible? Do I have to use Ansible roles to achieve this?
Playbooks can have multiple plays (see https://docs.ansible.com/ansible/latest/user_guide/playbooks_intro.html).
Playbooks can contain multiple plays. You may have a playbook that targets first the web servers, and then the database servers. For example:
---
- hosts: webservers
  remote_user: root
  tasks:
    - name: ensure apache is at the latest version
      yum:
        name: httpd
        state: latest
    - name: write the apache config file
      template:
        src: /srv/httpd.j2
        dest: /etc/httpd.conf

- hosts: databases
  remote_user: root
  tasks:
    - name: ensure postgresql is at the latest version
      yum:
        name: postgresql
        state: latest
    - name: ensure that postgresql is started
      service:
        name: postgresql
        state: started
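Applied to the inventory in the question, the same multi-play pattern would look roughly like this (the tasks shown are placeholders for your real ones):

---
- hosts: vig
  tasks:
    # tasks 1-10 for the vig VM go here
    - name: placeholder task for the vig group
      debug:
        msg: "runs only against [vig]"

- hosts: websvr
  tasks:
    # tasks 11-20 for the web server VM go here
    - name: placeholder task for the websvr group
      debug:
        msg: "runs only against [websvr]"

Running ansible-playbook -i <inventory file> deploy.yml then executes the first play only against the vig group and the second only against websvr, so roles are not strictly required for this.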
I need to use a provisioning tool called vcommander. It looks like this tool only supports running Ansible locally on the provisioned host, and you only need to provide it with the playbook to run.
I already have working Ansible roles which include tasks and templates; for example, the ntp role looks roughly like this:
ls -1 roles/ntp/
defaults
LICENSE
meta
README.md
tasks
templates
the main task looks something like this:
cat roles/ntp/tasks/main.yml
---
- name: "Configure ntp.conf"
  template:
    src: "ntp.conf.j2"
    dest: "/etc/ntp.conf"
    mode: "0644"
  become: True

- name: Updating time zone
  shell: tzdata-update

- name: Ensure NTP-related packages are installed.
  package:
    name: ntp
    state: present

- name: Ensure NTP is running and enabled as configured.
  service:
    name: ntpd
    state: started
    enabled: yes
and the template looks like this:
cat roles/ntp/templates/ntp.conf.j2
{{ ansible_managed }}
restrict 127.0.0.1
restrict -6 ::1
server 127.127.1.0
fudge 127.127.1.0 stratum 10
driftfile /var/lib/ntp/drift
keys /etc/ntp/keys
Is there a way to include the main.yml, the ntp.conf.j2 template, and the other files from the role in a single yml file?
Please provide an example.
Are you asking for a playbook that applies your ntp role to localhost?
It would look like this:
cat playbook.yml
- hosts: localhost
  roles:
    - ntp
See docs: https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html#using-roles
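Since the tool runs Ansible locally on the provisioned host, you can also run that playbook by hand the same way for a quick check, for example (flags shown as an assumption; the tool may supply its own inventory):

ansible-playbook -i localhost, -c local playbook.yml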
I have a playbook that provisions a host for use with Rails/rvm/passenger. I'd like to use the same playbook to set up both test and production.
In testing, the user to add to the rvm group is jenkins. The one in production is passenger. My playbook excerpt below does this based on the inventory_hostname parameter.
It seems like adding a new user:/when: block in the playbook for every testing or production host is the wrong way to go here. Should I be using an Ansible role for this?
Thanks
---
- hosts: all
  become: true
  ...
  tasks:
    - name: add jenkins user to rvm group when on testing
      user: name={{ item }}
            shell=/bin/bash
            groups=rvm
            append=yes
      with_items:
        - jenkins
      when: "'bob.mydomain' in inventory_hostname"

    - name: add passenger user to rvm group when on rails production
      user: name={{ item }}
            shell=/bin/bash
            groups=rvm
            append=yes
      with_items:
        - passenger
      when: "'alice.mydomain' in inventory_hostname"
Create an inventory file called inventories/testing
[web]
bob.mydomain
[testing:children]
web
This will control what hosts are targeted when you run your playbook against your testing environment.
Create another file called group_vars/testing
rvm_user: jenkins
This file will keep all variables required for running a playbook against the testing environment. Your production file should have the same variables, but with different values.
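For illustration, the production counterparts could look like this (the host and user names are taken from the question; the layout simply mirrors the testing files above):

# inventories/production
[web]
alice.mydomain

[production:children]
web

# group_vars/production
rvm_user: passenger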
Finally in your playbook:
---
- hosts: all
  become: true
  ...
  tasks:
    - name: add user to rvm group
      user:
        name: "{{ rvm_user }}"
        shell: "/bin/bash"
        groups: rvm
        append: yes
Now, when you want to run your playbook, you execute it like so:
ansible-playbook -i inventories/testing site.yml
Ansible will do the right thing and look for a testing file in group_vars, reading variables from there. It will ignore variables in files or folders not named after your environment, with the exception of a file called all, which is intended for common variables shared by every environment.
Good luck - Ansible is an amazing tool :)
Our actual setup runs on AWS where we have RDS available, but in vagrant we naturally need to install MySQL locally. What's the normal way of skipping installation with Vagrant? My ansible file looks something like this:
---
- name: foo
  hosts: foo
  sudo: yes
  roles:
    - common-web
    - bennojoy.mysql
    - php
I would recommend having specific groups in your inventory file, and run an 'install locally' playbook on the vagrant instances. This also means you would want to run an 'install RDS config' playbook on the AWS instances of course...
Trying to do all the things in all the places in one playbook is possible, but IMO it's cleaner to have different playbooks for different environments.
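As a rough sketch of that split (the vagrant/aws group names are assumptions, not from the question):

# vagrant.yml - targets the vagrant group and installs MySQL locally
- name: foo (vagrant)
  hosts: vagrant
  sudo: yes
  roles:
    - common-web
    - bennojoy.mysql
    - php

# aws.yml - targets the aws group; RDS replaces the local MySQL role
- name: foo (aws)
  hosts: aws
  sudo: yes
  roles:
    - common-web
    - php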
You can do this, as Vagrant always creates a /vagrant directory at the root of the guest.
So just check for it like this:
---
- name: foo
  hosts: foo
  sudo: yes
  pre_tasks:
    # pre_tasks run before roles, so the registered result is available for the role condition
    - name: Check that the /vagrant directory exists
      command: /usr/bin/test -e /vagrant
      register: dir_exists
      ignore_errors: yes
  roles:
    - common-web
    - { role: bennojoy.mysql, when: dir_exists.rc == 0 }
    - php
Here I am supposing that "bennojoy.mysql" is your main MySQL role; please check it and let me know if it works for you. Thanks
I've got a basic Ansible playbook like so:
---
- name: Provision ec2 servers
  hosts: 127.0.0.1
  connection: local
  roles:
    - aws

- name: Configure {{ application_name }} servers
  hosts: webservers
  sudo: yes
  sudo_user: root
  remote_user: ubuntu
  vars:
    - setup_git_repo: no
    - update_apt_cache: yes
  vars_files:
    - env_vars/common.yml
    - env_vars/remote.yml
  roles:
    - common
    - db
    - memcached
    - web
with the following inventory:
[localhost]
127.0.0.1 ansible_python_interpreter=/usr/local/bin/python
The Provision ec2 servers task does what you'd expect. It creates an ec2 instance; it also creates a host group [webservers] and adds the created instance IP to it.
The Configure {{ application_name }} servers step then configures that server, installing everything I need.
So far so good, this all does exactly what I want and everything seems to work.
Here's where I'm stuck. I want to be able to fire up an ec2 instance for different roles. Ideally I'd create a dbserver, a webserver and maybe a memcached server. I'd like to be able to deploy any part(s) of this infrastructure in isolation, e.g. create and provision just the db servers.
The only ways I can think of to make this work... well, they don't work.
I tried simply declaring the host groups without hosts in the inventory:
[webservers]
[dbservers]
[memcachedservers]
but that's a syntax error.
I would be okay with explicitly provisioning each server and declaring the host group it is for, like so:
- name: Provision webservers
  hosts: webservers
  connection: local
  roles:
    - aws

- name: Provision dbservers
  hosts: dbservers
  connection: local
  roles:
    - aws

- name: Provision memcachedservers
  hosts: memcachedservers
  connection: local
  roles:
    - aws
but those groups don't exist until after the respective step is complete, so I don't think that will work either.
I've seen lots about dynamic inventories, but I haven't been able to understand how that would help me. I've also looked through countless examples of Ansible ec2 provisioning projects; they all invariably either provision pre-existing ec2 instances or just create a single instance and install everything on it.
In the end I realised it made much more sense to just separate the different parts of the stack into separate playbooks, with a full-stack playbook that called each of them.
My remote hosts file stayed largely the same as above. An example of one of the playbooks for a specific part of the stack is:
---
- name: Provision ec2 apiservers
  hosts: apiservers # important bit
  connection: local # important bit
  vars:
    - host_group: apiservers
    - security_group: blah
  roles:
    - aws

- name: Configure {{ application_name }} apiservers
  hosts: apiservers:!127.0.0.1 # important bit
  sudo: yes
  sudo_user: root
  remote_user: ubuntu
  vars_files:
    - env_vars/common.yml
    - env_vars/remote.yml
  vars:
    - setup_git_repo: no
    - update_apt_cache: yes
  roles:
    - common
    - db
    - memcached
    - web
This means that the first step of each layer's play adds a new host to the apiservers group, with the second step (Configure ... apiservers) then being able to exclude the localhost without getting a no hosts matching error.
The wrapping playbook is dead simple, just:
---
- name: Deploy all the {{ application_name }} things!
  hosts: all

- include: webservers.yml
- include: apiservers.yml
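Each layer can then also be deployed in isolation by running its own playbook directly, e.g. (inventory file name assumed):

ansible-playbook -i hosts apiservers.yml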
I'm very much a beginner w/regards to ansible, so please do take this for what it is, some guy's attempt to find something that works. There may be better options and this could violate best practice all over the place.
The ec2 module supports an "exact_count" parameter, not just "count".
Combined with "count_tag", it will create (or terminate!) instances until the number of instances matching the specified tags equals exact_count.
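As a sketch of how that is used with the ec2 module (the AMI, key pair and security group values here are placeholders, not from the question):

- name: ensure exactly two tagged webservers exist
  ec2:
    key_name: mykey            # placeholder key pair
    instance_type: t2.micro
    image: ami-123456          # placeholder AMI id
    region: us-east-1
    group: webserver-sg        # placeholder security group
    wait: yes
    exact_count: 2
    count_tag:
      Role: webserver
    instance_tags:
      Role: webserver
  register: ec2_webservers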