Ansible NTP server + several clients deployment

I know this is kind of a newbie question, but how should I define my hosts file if I want an NTP server and several clients, and still have the ability to do common things to all the hosts?
I have, let's say, 10 servers. One of them will be the NTP server and the rest will sync against it. I could define my hosts file like:
[ntp-server]
hostA
[ntp-slaves]
hostB
hostC
hostD
...
[my_servers]
ntp-server
ntp-slaves
So I can apply common config (ssh, iptables) to all of them, and I can have other classifications like webservers, loadbalancers or any other. So far, I can't figure out how to solve this in an "elegant" way.
Also, related to this, should I have two roles (ntp_server, ntp_client), or a single one with different behaviour depending on whether the host is in the client or server group?
Thank You

You're on the right track, but the correct syntax for my_servers is:
[my_servers:children]
ntp-server
ntp-slaves
Yes, Ansible needs a little help when building groups from other groups.
Note that you can combine [my_servers:children] (to add groups) and [my_servers] (to add hosts) in the same inventory file.
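For example, a combined inventory along these lines (host names are just placeholders) lets you target ntp-server, ntp-slaves or my_servers as needed:

[ntp-server]
hostA

[ntp-slaves]
hostB
hostC

[my_servers:children]
ntp-server
ntp-slaves

[my_servers]
# a host that should get the common config but is in neither NTP group
hostE

A play with hosts: my_servers then hits every box, while the NTP roles can still be applied per group.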

Related

Ansible: Including non-managed systems into the configuration

I'm writing an Ansible playbook that sets up systems where, in certain configurations, some systems might be provided by another organization. A very simple example:
Inventory:
An app server
A db server
Playbook:
Set up both servers, and add an application.properties file to the app server with the IP, port and user/pass of the db server.
This works so far, but then a requirement comes in: in some deployments the DB server is provided by another organization, so I can't include it in the inventory (the setup step would fail), yet I still want to generate the properties file for the app server (with db server info I get from other people).
What would be the least painful solution that covers both scenarios (my own db server and a provided db server), considering there are 6 such server types, not just 2 (so not just 2 different scenarios; there are many permutations of which servers are provided for me and which are mine)?
Edit:
To be more specific, the problem I have is that if I use vars when a system is not mine and facts when it is mine, then I have problems writing the application.properties.j2 template, since facts and vars are referenced differently. How would I use a var in the template, but fall back to a fact if the var isn't defined?
I assume you have an inventory file per deployment. So an inventory file might look like this:
[app]
an-app.server
[db]
a-db.server
[deploymentA:children]
app
db
[deploymentA:vars]
db_ip=foo
db_port=foo
db_user=foo
db_pass=foo
You would probably store the deploymentA:vars as group_vars/deploymentA, but for simplicity I just put them into the inventory.
Now the playbook:
- hosts: db
  roles:
    - setup-db-server

- hosts: app
  roles:
    - setup-app-server
In a deployment where the db server is not managed by yourself, you simply would not have a db server defined in your inventory. The group db would be empty. In that case the first play will simply print "no hosts matched" or something like that and proceed to the next play.
In a deployment where the db server is managed by you, the role setup-db-server will be run and you can use the deploymentA:vars for configuration.
In your app play the deploymentA:vars are available no matter whether the db host is yours or not, so you can roll out the application.properties file.
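A minimal application.properties.j2 under that assumption (property names are just placeholders) would reference only the group vars, which exist whether the db host is in your inventory or not:

# application.properties.j2 -- illustrative property names
db.host={{ db_ip }}
db.port={{ db_port }}
db.user={{ db_user }}
db.password={{ db_pass }}

If you do end up needing a fallback inside the template, Jinja2's default filter handles "use the var if defined, otherwise something else", e.g. {{ db_port | default(5432) }}.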

How to differentiate between staging/production with a dynamic inventory?

I'm stuck. Googled the hell out of the Web and couldn't find an answer.
I've been using Ansible for years, but always with static inventories. To differentiate between different environments like staging and production, I used different static inventory files, staging and production, respectively. When I needed to provision staging servers, I'd do:
ansible-playbook site.yml -i staging
When I wanted to do the same for production, I'd do:
ansible-playbook site.yml -i production
Both staging and production need variables with different values, so I have group_vars/staging and group_vars/production. All good and according to best practices.
Now, I need to provision EC2 instances in AWS. I'm using this AWS guide. I have a playbook with two plays. The first is run against localhost, creates/finds required EC2 instances in AWS, and populates a group with add_host. The second play uses that group to run against the EC2 instances discovered in the first play. All according to that guide.
It all works great except for one thing: I have no idea how to specify which environment to provision, and hence the required variables are not being loaded from group_vars/(staging|production). Basically, what I want is something similar to the -i (staging|production) I used all these years with static inventories, but it seems that using -i doesn't make sense now since the inventory is dynamic. I want a way to load variables from either group_vars/staging or group_vars/production based on an argument I pass to ansible-playbook when I run it.
How do I do that? What's the best practice?
While I am not sure how to do it with the Ansible EC2 module, as we don't use it to build boxes at the Ansible level, there is a simple way to get what you want with the ec2 external inventory script and simple settings in your inventories/main. What you need to do is set up ec2.py and ec2.ini inside your inventories directory so it will be used as the source of instances. Make sure to uncomment group_by_tag_keys = True inside ec2.ini.
The next step is to differentiate which instance goes where. While there are many selection methods available in ec2.py, I prefer to specifically tag each instance accordingly. So all my instances have a tag called environment which is filled in accordingly (in your case it would be either staging or production). Then all that is left is to handle it inside your inventories/main, and here is a small example of how to do that.
First you must define an empty group for each tag you want to use:
[tag_environment_staging]
[tag_environment_production]
so we can reference them later. After that, all that is left to do is specify those groups as children of the appropriate stages. Our minimal file then looks like this:
[tag_environment_staging]
[tag_environment_production]
[staging:children]
tag_environment_staging
[production:children]
tag_environment_production
And there you go. From now on, every instance pulled from EC2 via the dynamic inventory script that comes with an environment tag will be matched to the appropriate config in group_vars. All you have to remember is that when dealing with dynamic inventories you want -i to point at the inventories directory rather than a specific file for this to work right.
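Assuming a layout roughly like this (paths are illustrative), the tag groups produced by ec2.py line up directly with your existing group_vars:

inventories/
  main        # the static file with the tag_environment_* groups above
  ec2.py
  ec2.ini
group_vars/
  staging
  production

and you would run something like ansible-playbook site.yml -i inventories/ -l staging.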
I have a similar problem with dynamic inventories, but for OpenStack. The solution I've come up with so far is to use an environment variable to specify whether I want to target the staging or production environment. It should be applicable to your case as well. In our setup $OS_PROJECT_NAME is either stage or prod. In ansible.cfg set:
inventory = ./inventories/${OS_PROJECT_NAME}/openstack.py
Then we have environment specific group variables under
inventories/(stage|prod)/group_vars/
The drawback is that you have to have the inventory script in two places or have it symlinked. Beware also that group_vars found relative to the playbook directory will still override the inventory group_vars.
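The resulting tree, roughly (directory names taken from the $OS_PROJECT_NAME values above), looks like:

inventories/
  stage/
    openstack.py      # or a symlink to one shared copy
    group_vars/
      all.yml
  prod/
    openstack.py
    group_vars/
      all.yml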

How can I speed up ansible by not running entire sections too often?

Assume that a normal deployment script does a lot of things and many of them are related to preparing the OS itself.
These tasks take a LOT of time to run even if there is nothing new to do, and I want to prevent running them more often than, let's say, once a day.
I know that I can use tags to filter what I am running, but that's not the point: I need to make Ansible aware that these "sections" executed successfully one hour ago and, because of this, it should skip the entire block now.
I was expecting fact caching to do this, but somehow I wasn't able to see any real use case.
You need to figure out how to determine what "executed successfully" means. Is it just that a given playbook ran to completion? That certain roles ran to completion? That certain indicators exist that allow you to determine success?
As you mention, I don't think fact caching is going to help you here unless you want to introduce custom facts into each host (http://docs.ansible.com/ansible/playbooks_variables.html#local-facts-facts-d)
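As a sketch of that facts.d idea (file name and keys are made up), the play that finishes the expensive OS preparation could drop a small JSON file, say /etc/ansible/facts.d/os_prep.fact, onto each host:

{
  "last_run": "2016-05-01T03:00:00Z",
  "status": "ok"
}

After the next fact gathering it appears as ansible_local.os_prep, so the expensive block can be guarded with something like when: ansible_local.os_prep is not defined (or a check on its timestamp).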
I typically come up with a set of variables/facts that indicate a role has already been run. Sometimes this involves making shell calls and registering vars, looking at gathered facts, and determining whether certain files exist. My typical pattern for a role looks something like this:
# roles/my_role/tasks/main.yml
- name: load facts
  include: facts.yml

- name: run config tasks if needed
  include: config.yml
  when: fact_var1 and fact_var2 and inv_var1 and reg_var1
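The included facts.yml is then where those variables get set; a hypothetical sketch (paths and names made up) using stat and set_fact:

# roles/my_role/tasks/facts.yml
- name: check whether the app config already exists
  stat:
    path: /etc/myapp/app.conf
  register: app_conf_stat

- name: set the flag used by main.yml's when clause
  set_fact:
    reg_var1: "{{ not app_conf_stat.stat.exists }}"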
You could also dynamically write a YAML variable file that gets included in your playbooks and contains variables about the configured state of your environment. This is a more global option and doesn't really work if you need to look at the configured status of individual machines. An extension of this would be to write status variables to host_vars or group_vars inventory variable files. These would get loaded automatically on a host-by-host basis.
Unfortunately, as far as I know, fact caching only caches host-based facts such as those created by the setup module, so it wouldn't allow you to use set_fact to register a fact that, for example, a role had been completed and then conditionally check for that at the start of the role.
Instead you might want to consider using Ansible with another product to orchestrate it in a more complex fashion.

Prevent usage of wrong Ansible inventory

Imagine a server setup with Ansible, with a production and a reference system/cluster and a separate server running Ansible (with SSH keys). The different clusters are identified in two inventory files.
Every playbook usage will somehow look like ansible-playbook -i production ... or ansible-playbook -i reference....
How do I prevent an accidental usage of the production inventory?
This could happen easily, either by reusing a history entry in the shell or by copying the command from some documentation.
Some ideas:
As a start, all documentation refers to the reference inventory and also uses --check.
Use two different Ansible instances, with the common part mirrored via Git. But this results in some overhead.
Prompt once for, e.g., database passwords and use different ones on production and reference. But not all tasks/tags have such a password requirement.
Actually I'm looking for something like a master password that applies when using a specific inventory. Or a task that is always executed even if a tag is used. What is the best practice here? Do you have other ideas? Or am I somehow totally wrong here and there is a better way for my scenario?
Your production inventory could either be vaulted or even just include a vaulted file (which doesn't actually have to contain anything).
This will mean that when you attempt to run:
ansible-playbook -i production playbook.yml
It will fail with a message telling you to pass a vault password to it.
The vault password could be something simple such as "pr0duct10n", so its entire purpose is to remind people that they are about to run against production and thus to think things through (much like how many people use a sudo password).
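One way to wire that up, assuming you move each inventory into its own directory so it can carry its own group_vars (paths are illustrative):

# encrypt an (essentially empty) vars file that only the production inventory loads
ansible-vault create inventories/production/group_vars/all/vault.yml

# this now refuses to run until the vault password is supplied
ansible-playbook -i inventories/production playbook.yml --ask-vault-pass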

ansible design help (servers, teams, roles, playbooks and more)

We are trying to design an Ansible system for our crew.
We have some open questions that cause us to stop and think and maybe hear other ideas.
The details:
4 development teams.
We have CI servers, DB servers, and a personal virtual machine for each programmer.
A new programmer receives a clean VM and we would like to use Ansible to "prepare" it for him according to the team he is about to join.
We also want to use Ansible for weekly updates (when needed) on some VMs; it might be for a whole team or for all our VMs.
Team A and Team B share some of their needs (for example, they both use Django), but there are naturally applications that Team A uses and Team B does not.
What we have done:
We had old "maintenance" bash scripts that we translate to YAML scripts.
We grouped them into Ansible roles
We have an inventory file which contains group for each team and our servers:
[ALL:children]
TeamA
TeamB
...
[TeamA]
...
[TeamB]
...
[CIservers]
...
[DBservers]
...
We have a large playbook that contains all our roles (with a tag for each):
- hosts: ALL
  roles:
    - { role: x, tags: 'x' }
    - { role: y, tags: 'y' }
    ...
We invoke Ansible like this:
ansible-playbook -i inventory -t TAG1,TAG2 -l TeamA play.yml
The Problems:
We have a feeling we are not using roles as we should. We ended up with roles like "mercurial" or "eclipse" that install and configure a tool (add aliases, edit PATH, create symbolic links, etc.), plus a role for apt_packages (using the apt module to install the packages we need) and a role for pip_packages (using the pip module to install the packages we need).
Some of our roles depend on other roles (we used the meta folder to declare those dependencies). Because our playbook contains all the roles we have, when we run it without tags (on a new programmer's VM, for example) the roles that other roles depend on run twice (or more), which is a waste of time. We thought about removing the roles that others depend on from our playbook, but that is not a good solution because we would lose the ability to run those roles by themselves.
We are not sure how to continue from this point: whether to give up on role dependencies and create playbooks that implement those dependencies by specifying the roles in the right order.
Should we change our roles into something like TeamA or DBserver that unites many of our current roles? (In that case, how do we handle the tasks common to TeamA and TeamB, and how do we handle the tasks that are relevant only to TeamA?)
Well, that is about everything.
Thanks in advance!
Sorry for the late answer, and I guess your team has probably figured out the solution by now. I suspect you'll have the standard Ansible structure with group_vars, host_vars, a roles folder and a site.yml, as outlined below:
site.yml
group_vars/
host_vars/
roles/
  common/
  dbserver/
  ciserver/
I suspect your team is attempting to link everything into a single site.yml file. This is fine for the common tasks, which operate based on roles and tags. I suggest that for the edge cases you create a second or third playbook at the root level, which can be specific to a team or to a weekly deployment. This way you can still keep the common tasks in the standard structure, but you are not complicating your tasks with all the meta stuff.
site.yml // the standard ansible way
teamb.yml // we need to do something slightly different
Again, as you spot better ways of implementing a task, the playbooks can be refactored and tasks moved from the specific files to the standard roles.
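A team-specific playbook in that second file can stay very small and just compose existing roles; purely as an illustration, with role and group names borrowed from the question:

# teamb.yml - only what Team B needs beyond the common setup
- hosts: TeamB
  roles:
    - mercurial
    - eclipse
    - pip_packages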
It seems you are still trying to figure out the best way to use Ansible when you have multiple teams that work on the same setup and don't want to affect each other's tasks. Have a look at this boilerplate; it might help.
If you look in that repo, you will see there are multiple roles, and you can design the playbooks as per your requirements.
Example:
- common.yml (common to all the teams)
- or one playbook per team or project, e.g. teamname.yml or project.yml
If you use any of the above, you just need to define the proper roles in the playbook and it should associate with the right host and group vars.
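For example, a common.yml in that spirit might just map the shared roles onto every host, while the teamname.yml files layer on the team-specific ones (role names here are placeholders):

# common.yml - shared between all the teams
- hosts: all
  roles:
    - common
    - apt_packages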
