Capistrano 3 Role specific tasks - ruby

I've switched from Capistrano 2 to Capistrano 3 recently. A lot has changed, and I'm having some trouble trying to adapt the new Capistrano to what was being done with Capistrano 2 in the project I'm working on.
The biggest problem I'm facing at the moment is filtering by roles. I know you can do:
ROLES=web,worker cap production deploy
but if you have a single server with all the roles, that doesn't really seem to do anything.
With Capistrano 2 I could run:
cap worker deploy
and all worker tasks would be applied. Capistrano 2 had roles specified on the tasks, and if a role wasn't requested, the task was skipped (in most cases). That does not seem to be the case for Capistrano 3. The filter is great in a multi-server environment where you have specific servers for each role, but if servers share a role, or if there's a single server, it gets a bit weird. In the new Capistrano, tasks seem to check whether there is a host with a given role rather than checking whether the task should run based on the role. It seems to me that ROLES is intended to limit the servers rather than the tasks.
So I wonder if this is possible in Capistrano 3. Another way of viewing this is grouping tasks under a name; I would like to select which group of tasks gets executed.
I can achieve this with some tinkering: I could check whether ROLES is present and skip a task based on it, select which recipes to load depending on the role, dynamically attach tasks based on the ROLES variable, or group the tasks in role-named files and load them dynamically depending on ROLES, etc. But perhaps there's something I'm missing.
Any thoughts?

"It seems to me that ROLES is intended to limit the servers rather than the tasks."
Yes, that is exactly right. In Capistrano 3, tasks have no relation to roles whatsoever. Within a task, commands can be executed on servers that match a certain role. When you filter using ROLES, you limit the servers where the commands are run, but you don't limit the tasks themselves.
One way you could limit tasks is to define your own high-level task that invokes the tasks you want.
For example:
# In deploy.rb
task "worker" do
invoke "task1"
invoke "task2"
# etc.
end
This defines a worker task that in turn executes a specific list of tasks, which can be whatever you want. Then you can run:
cap production worker
Which will run all these worker-related tasks on your production server.
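If you also want each of those tasks to act only on the relevant servers, the on roles(...) block is how Capistrano 3 scopes the commands inside a task to matching hosts. A minimal sketch, where the task name and the restart command are illustrative assumptions:
# In deploy.rb -- hypothetical worker-only task
task :restart_workers do
  on roles(:worker) do
    # Runs only on hosts that carry the :worker role
    execute :sudo, "systemctl", "restart", "my-worker"
  end
end
Combined with the grouping approach above, invoke "restart_workers" from the worker task and the command still only touches :worker hosts, however many roles a single server carries.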

Related

run included files in parallel on the same target

My goal is to be able to configure multiple IIS websites on the same target host, in parallel.
Each website is configured by multiple tasks: creating the app pool, creating the website, assigning the app pool, creating the needed directory paths, creating virtual directories, bindings, and so on.
Since I need to create multiple websites, I need to loop over a bunch of tasks, so I put all these tasks in a separate file, and in my main playbook I created a loop that imports that file for each website.
But I end up with serial execution, and I want to be able to create the websites in parallel.
I didn't find any way to achieve this, as async is not supported on an Ansible block.
So, is this possible with Ansible?
Thanks
There is a feature called async -- I believe this is what you are looking for:
https://docs.ansible.com/ansible/latest/user_guide/playbooks_async.html
It will help you run the tasks in parallel -- ideal for creating "n" independent websites in IIS.
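A hedged sketch of that pattern, assuming a websites list variable and the community.windows.win_iis_website module (the parameter values here are illustrative, not a tested configuration): each creation is launched with poll: 0 so Ansible moves straight to the next loop item, then async_status waits for all the jobs.
- name: Create websites without waiting on each one (fire and forget)
  community.windows.win_iis_website:
    name: "{{ item.name }}"
    physical_path: "{{ item.path }}"
    state: started
  loop: "{{ websites }}"
  async: 600   # allow each creation up to 10 minutes
  poll: 0      # do not wait; start the next item immediately
  register: site_jobs

- name: Wait for all site creations to finish
  ansible.builtin.async_status:
    jid: "{{ item.ansible_job_id }}"
  loop: "{{ site_jobs.results }}"
  register: job_result
  until: job_result.finished
  retries: 60
  delay: 10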

How can I speed up ansible by not running entire sections too often?

Assume that a normal deployment script does a lot of things, and many of them are related to preparing the OS itself.
These tasks take a LOT of time to run even when there is nothing new to do, and I want to prevent running them more often than, let's say, once a day.
I know that I can use tags to filter what I am running, but that's not the point: I need to make Ansible aware that these "sections" executed successfully an hour ago so that it would skip the entire block now.
I was expecting fact caching to do this, but somehow I wasn't able to find a real use case.
You need to figure out how to determine what "executed successfully" means. Is it just that a given playbook ran to completion? That certain roles ran to completion? That certain indicators exist which allow you to determine success?
As you mention, I don't think fact caching is going to help you here unless you want to introduce custom facts into each host (http://docs.ansible.com/ansible/playbooks_variables.html#local-facts-facts-d)
I typically come up with a set of variables/facts that indicate a role has already been run. Sometimes this involves making shell calls and registering vars, or looking at gathered facts to determine whether certain files exist. My typical pattern for a role looks something like this:
roles/my_role/tasks/main.yml
- name: load facts
  include: facts.yml
- name: run config tasks if needed
  include: config.yml
  when: fact_var1 and fact_var2 and inv_var1 and reg_var1
You could also dynamically write a YAML variable file that gets included in your playbooks and contains variables about the configured state of your environment. This is a more global option and doesn't really work if you need to look at the configured status of individual machines. An extension of this would be to write status variables to host_vars or group_vars inventory variable files; these would get loaded automatically on a host-by-host basis.
Unfortunately, as far as I know, fact caching only caches host-based facts such as those created by the setup module, so it wouldn't allow you to use set_fact to register a fact that, for example, a role had been completed, and then conditionally check for that at the start of the role.
Instead you might want to consider using Ansible with another product to orchestrate it in a more complex fashion.
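Tying the local-facts link above to this pattern, here is a minimal sketch under assumptions (the facts.d path is Ansible's local-facts convention; the os_prep name and file layout are illustrative): the expensive section records its completion as a custom fact, and later runs consult that fact before doing the work again.
# Last task of the expensive section: drop a marker that becomes
# ansible_local.os_prep after the next fact gathering.
- name: mark OS preparation as done
  copy:
    dest: /etc/ansible/facts.d/os_prep.fact
    content: '{"done": true, "at": "{{ ansible_date_time.iso8601 }}"}'

# In the playbook: skip the section while the marker exists. Comparing
# the "at" timestamp against the current date would give the
# "no more than once a day" behaviour.
- name: run OS preparation only when not marked done
  include: os_prep.yml
  when: ansible_local.os_prep is not defined or not ansible_local.os_prep.done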

control ansible task file execution

My current Ansible project is set up like so:
backup-gitlab.yml
roles/
    aws_backups/
        tasks/
            main.yml
            backup-vm.yml
    gitlab/
        tasks/
            main.yml
            start.yml
            stop.yml
backup-gitlab.yml needs to do the following:
1. Invoke stop.yml on the gitlab host.
2. Invoke backup-vm.yml on a different host.
3. Invoke start.yml on the gitlab host.
The problem I'm running into is that Ansible doesn't seem to support a way of choosing which task files to run within the same role in the same playbook. Before, I was using tags to control what Ansible would do, but in this case tagging the include statements for start.yml and stop.yml doesn't work, because Ansible doesn't appear to have a way to dynamically change the applied tags once they are set on the command line.
I can't come up with an elegant way to achieve this.
Some options are:
Have each task file be contained within its own role. This is annoying because I will end up with a million roles that are not grouped in any way. It's essentially abandoning the whole 'role' concept.
Use include with hard-coded paths. This is prone to error as things move around. Also, since Ansible deprecated combining with_items with include (or any sort of dynamic looping over include), I can no longer quickly change which task files are run. Any minor change in my workflow requires lots of coding changes. I would really like to stick with using tags from the command line to control exactly what Ansible does.
Use shell scripts to invoke separate Ansible playbooks.
Use conditionals (when clauses) on every single Ansible action, and control what gets run by setting variables. While several people have recommended this on SO, it sounds awful: I would have to add the conditional to hundreds of actions, and every time I run a playbook the output would be cluttered with hundreds of 'skip' statements.
Leverage Jinja templates and ansible's local_connection to dynamically build static main.yml files with all the required task files included in the proper order (using computed relative paths). Then invoke that computed main.yml file. This is dangerous and convoluted.
Use top-level Ansible plays to invoke lower-level plays. Seems messy, and it also brings in problems when I need to pass variables between plays. Using Ansible's Python API may help with this.
Ansible strives to bring VMs into idempotent states, but that isn't very helpful and is a dated way of thinking in my opinion (I would have stuck with Chef if that were all I wanted). I want to leverage Ansible to actually do things: actively change configuration states, kick off processes, monitor events, react to events, etc. Essentially I want it to automate as much of my job as possible. The current 'role' structure (with static configurations) that Ansible recommends doesn't fit this paradigm very well, even though its use of remote command execution via SSH gets us so close to the dream.
Just use a playbook for these types of management tasks.
Granted, the skip statements do somewhat clutter the output. If you wish to fix that, you can further break down the roles into something like aws_backups-setup and aws_backups-management.
Also, the roles documentation has some information on how you can run pre_tasks and post_tasks around roles.
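A hedged sketch of the "just use a playbook" advice (the host group names are assumptions, and include_role with its tasks_from option needs a reasonably modern Ansible): a single playbook can pull individual task files out of the existing roles, in order, across different hosts.
# backup-gitlab.yml
- hosts: gitlab
  tasks:
    - name: stop gitlab before the backup
      include_role:
        name: gitlab
        tasks_from: stop

- hosts: backup_host
  tasks:
    - name: back up the gitlab VM
      include_role:
        name: aws_backups
        tasks_from: backup-vm

- hosts: gitlab
  tasks:
    - name: start gitlab again
      include_role:
        name: gitlab
        tasks_from: start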

ansible design help (servers, teams, roles, playbooks and more)

We are trying to design an Ansible system for our crew.
We have some open questions that make us stop and think, and we would like to hear other ideas.
The details:
4 development teams.
We maintain CI servers, DB servers, and a personal virtual machine for each programmer.
A new programmer receives a clean VM, and we would like to use Ansible to "prepare" it for him according to the team he is about to join.
We also want to use Ansible for weekly updates (when needed) on some VMs - it might be for a whole team or for all our VMs.
Team A and Team B share some of their needs (for example, they both use Django), but there are naturally applications that Team A uses and Team B does not.
What we have done:
We had old "maintenance" bash scripts that we translated into YAML tasks.
We grouped them into Ansible roles.
We have an inventory file which contains a group for each team and groups for our servers:
[ALL:children]
TeamA
TeamB
...
[TeamA]
...
[TeamB]
...
[CIservers]
...
[DBservers]
...
We have a large playbook that contains all our roles (with a tag for each):
- hosts: ALL
  roles:
    - { role: x, tags: 'x' }
    - { role: y, tags: 'y' }
    ...
We invoke Ansible like this:
ansible-playbook -i inventory -t TAG1,TAG2 -l TeamA play.yml
The Problems:
We have a feeling we are not using roles as we should. We ended up with roles like "mercurial" or "eclipse" that install and configure a tool (add aliases, edit PATH, create symbolic links, etc.), plus a role for apt packages (using the apt module to install the packages we need) and a role for pip packages (using the pip module to install the packages we need).
Some of our roles depend on other roles (we used the meta folder to declare those dependencies). Because our playbook contains all the roles we have, when we run it without tags (on a new programmer's VM, for example), the roles that others depend on run twice (or more), which is a waste of time. We thought about removing those base roles from the playbook, but that is not a good solution because we would lose the ability to run them by themselves.
We are not sure how to continue from this point: should we give up role dependencies and create playbooks that implement those dependencies by specifying the roles in the right order?
Or should we change our roles into something like TeamA or DBserver that unites many of our current roles? (In that case, how do we handle the tasks common to TeamA and TeamB, and how do we handle the tasks relevant only to TeamA?)
Well, that is about everything.
Thanks in advance!
Sorry for the late answer; I guess your team has probably figured out a solution by now. I suspect you'll have the standard Ansible structure with group_vars, host_vars, a roles folder, and a site.yml, as outlined below:
site.yml
group_vars/
host_vars/
roles/
    common/
    dbserver/
    ciserver/
I suspect your team is attempting to link everything into a single site.yml file. This is fine for the common tasks which operate based on roles and tags. For the edge cases, I suggest you create a second or third playbook at the root level, which can be specific to a team or to a weekly deployment. That way you still keep the common tasks in the standard structure, but you don't complicate them with all the meta stuff.
site.yml // the standard ansible way
teamb.yml // we need to do something slightly different
Again, as you spot better ways of implementing a task, the playbooks can be refactored and tasks moved from the specific files into the standard roles.
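A hedged sketch of what such a team playbook might look like (the group name echoes the inventory above; the role names are assumptions):
# teamb.yml -- the "slightly different" team-specific playbook
- hosts: TeamB
  roles:
    - common        # needs shared by every team, e.g. django
    - teamb_apps    # applications only Team B uses
It runs alongside the unchanged site.yml, e.g. ansible-playbook -i inventory teamb.yml.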
It seems you are still trying to work out the best way to use Ansible when multiple teams work on the same systems and don't want to affect each other's tasks. Have a look at this boilerplate; it might help.
If you look in that repo, you will see there are multiple roles, and you can design the playbook per your requirements.
Example:
- common.yml (this will be shared between all the teams)
- Otherwise, create one playbook per team or project: teamname.yml or project.yml
Whichever you use, you just need to define the proper role in the playbook, and it should associate with the right host and group vars.

Ansible, run task if playbook includes role

Let's imagine a playbook with the following roles: base, monitoring, nginx; and another playbook with only base and nginx.
Now I want the monitoring role to run a task only if the playbook includes the nginx role, because to monitor nginx I have to pass a slightly different configuration to the monitoring service.
How can I execute a task that depends on another role's presence?
While my workaround in the comments might have worked for you, I feel it's still not the best approach: it's not modular. For example, if you changed monitoring systems, you'd need to go into each role, check whether it has a monitoring component, and update it. Not the most optimal way.
Perhaps a better way would be to still keep a separate monitoring role, but execute specific tasks in it using playbook conditionals. For example, the nginx monitoring task would execute only when the server is part of your [webservers] group, or when a certain variable is set to a specific value, or when some other appropriate conditional is met.
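For instance, the group-membership check can use the group_names magic variable; a one-task sketch where the group and file names are illustrative:
# roles/monitoring/tasks/main.yml
- name: configure nginx monitoring on web servers only
  include: nginx_monitoring.yml
  when: "'webservers' in group_names"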
It is possible to set a fact with set_fact in the nginx role (e.g. set_fact: nginx_installed=true) and then check it in the monitoring role, executing the task only when the fact is defined and true (when: nginx_installed is defined and nginx_installed). Note that a plain set_fact variable is a host variable rather than an entry in ansible_facts, so check it directly (or pass cacheable: yes to set_fact).
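A minimal sketch of that fact-based approach, assuming the nginx role runs before the monitoring role in the play (variable and file names are illustrative):
# roles/nginx/tasks/main.yml -- leave a marker when this role runs
- name: mark that nginx is managed on this host
  set_fact:
    nginx_installed: true

# roles/monitoring/tasks/main.yml -- consume the marker
- name: push nginx-specific monitoring configuration
  template:
    src: nginx_monitor.conf.j2
    dest: /etc/monitoring/nginx.conf
  when: nginx_installed | default(false)
Note that with the role order in the question (base, monitoring, nginx) the marker would be set too late, so either reorder the roles or fall back on the group or variable conditionals described above.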
