Ansible - overriding variables of globally installed roles

I am wondering what the best practice is for customizing/overriding variables used in globally installed Ansible roles (/etc/local/ansible) that are shared across many of our playbooks.
Such roles might define variables in defaults/main.yml as well as in vars/.
After requiring a globally installed role in my local playbook, it would be natural to customize those variables. Since there is no direct access to the role's directory, is my only option to override these variables in group_vars / host_vars? Or perhaps to pass overriding vars in the playbook directly, though that does not seem like a good idea?

Consult the docs on variable precedence. In particular, they state:
Basically, anything that goes into “role defaults” (the defaults folder inside the role) is the most malleable and easily overridden. Anything in the vars directory of the role overrides previous versions of that variable in namespace. The idea here to follow is that the more explicit you get in scope, the more precedence it takes with command line -e extra vars always winning. Host and/or inventory variables can win over role defaults, but not explicit includes like the vars directory or an include_vars task.
Hence, given the precedence list, you should place each variable wherever it seems most sensible.
Note that the precedence rules differ between Ansible versions 1 and 2.
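To make the quoted rules concrete, here is a minimal sketch (role and variable names are made up): a value set in the role's defaults/ loses to inventory variables, while a value set in the role's vars/ beats them.
# roles/myrole/defaults/main.yml
myrole_port: 8080      # easily overridden, e.g. by group_vars/host_vars

# group_vars/all.yml
myrole_port: 9090      # wins over the role default above

# roles/myrole/vars/main.yml
myrole_user: app       # wins over inventory variables; only -e extra vars beat it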

Role vars are not meant to be overridden. If they were meant to be, they would be (or should be) defaults. vars can only be overridden with --extra-vars passed on the command line.
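As a quick sketch (role and variable names are hypothetical), overriding a value from a role's vars/main.yml therefore requires extra vars:
# roles/myrole/vars/main.yml contains: myrole_user: app
ansible-playbook site.yml -e "myrole_user=deploy"   # extra vars win over role vars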

You could define all the variables you want to override in a JSON/YAML file and pass it to the playbook using --extra-vars. E.g.: ansible-playbook site.yml -i inventory --extra-vars @vars.json
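A minimal sketch of such a file (the keys are just examples):
# vars.json
{
  "app_port": 9090,
  "app_user": "deploy"
}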

How to overwrite a default variable in ansible.cfg dynamically?

I have a playbook with the following task that must copy a 2 GB file from local to remote servers and extract files:
- name: Copy archived file to target server and extract
  unarchive:
    src: /path_to_source_dir/file.tar.gz
    dest: /path_to_dest_dir
This task fails because Ansible copies the file to the /home mount point on the target server, and there's not enough space there:
sftp> put /path_to_source_dir/file.tar.gz /home/my_user_name/.ansible/tmp/ansible-tmp-1551129648.53-14181330218552/source
scp: /home/my_user_name/.ansible/tmp/ansible-tmp-1551129648.53-14181330218552/source: No space left on device
The reason is that ansible.cfg has a default parameter:
remote_tmp = ~/.ansible/tmp
How can I override this parameter from the playbook (if possible) and make Ansible copy the file to the same destination directory specified in the task? So it would be like this:
remote_tmp = /path_to_dest_dir/.ansible/tmp
And the destination path is going to be different each time for a different target server!
Cleaning /home is not an option for me.
The answer here unfortunately is not very clear to me.
There are a few different ways to achieve what you are looking to do. Which one to use is a matter of preference and of your use case.
You found the first way: setting an environment variable before running the playbook. Great for a quick on-the-fly job, but remembering to do that every time you run a certain playbook is indeed annoying. A slight variation of that is to use the environment keyword to set that variable for the play. You can also set an environment variable in a role, a block or a single task. https://docs.ansible.com/ansible/devel/reference_appendices/playbooks_keywords.html?highlight=environment%20directive. Here is an example of it in use: https://docs.ansible.com/ansible/devel/reference_appendices/faq.html?highlight=environment.
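For the on-the-fly variant, a sketch of what that looks like (the path is just an example; ANSIBLE_REMOTE_TMP is the environment variable read by the default sh shell plugin):
ANSIBLE_REMOTE_TMP=/path_to_dest_dir/.ansible/tmp ansible-playbook -i inventory site.yml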
Using the environment keyword in a play et al. works well for a specific piece of automation, but what if you want Ansible to always use a different remote tmp path for specific servers? Like all variables, remote_tmp can be sourced from inventory host and group variables, not just the config file or environment variables. You need to mind your variable precedence if it is being set in different places. With this you could set remote_tmp in your inventory for a host or a group of hosts, and Ansible will always use that path for that host or the hosts in that group without having to define it in every play or role. If you need to change that path, you change it in your inventory and that changes the behavior for all playbook runs without any additional edits. Here is an example of it being used as a host variable in static inventory: https://docs.ansible.com/ansible/devel/reference_appendices/faq.html?highlight=remote_tmp#running-on-solaris
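Sketching that idea with a made-up static inventory, using the ansible_remote_tmp host variable that the shell plugin reads:
[app_servers]
server-a.example.com ansible_remote_tmp=/path_to_dest_dir/.ansible/tmp
server-b.example.com ansible_remote_tmp=/other_dest_dir/.ansible/tmp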
So while the specific issue of "dynamically" setting the remote tmp directory on a host is not a best practice topic per se, it does become an example of the best practice of making the most of variables in Ansible.
For reference, remote temporary directories are handled by the shell plugins. While many parameters are shared, there are others that are specific to the shell plugin Ansible is using. Ansible uses sh by default. https://docs.ansible.com/ansible/latest/plugins/shell/sh.html
Hope that helps. Happy automating.

How do I define variables for the current user in Ansible?

We are using Vagrant and Ansible to create standard development environments.
The Ansible playbooks, Vagrant files, etc. are in a git repository.
I've been using variable file separation to refer to variable files in the developer's home directory for some sensitive and/or user-specific information (e.g. email address).
We use the variables by doing a vars_files: as part of the playbook, but have to do it for every play.
I don't want to put it in the group_vars/all file because it would then be in the repository and not specific to the user.
I would rather not have an ignored file in the repository, because people still manage to include it and it screws everybody else up.
Is there a way of doing an equivalent of group_vars/all which can contain tasks and/or variable definitions that will automatically run whenever a playbook is run?
We use the variables by doing a vars_files: as part of the playbook, but have to do it for every play.
Nope, you don't have to: you can do it at the playbook level. (But this might be a new thing; it could have been impossible back then, I did not check.)
Is there a way of doing an equivalent of group_vars/all which can contain tasks and/or variable definitions that will automatically run whenever a playbook is run?
Automatically run/included when?! I don't think this is possible, as there would be a lot of open questions, like:
- Should this be specified on the target machine or the Ansible server?
- How do you specify for which user this should happen on which host?
- If there are tasks: do you want them to be executed on each playbook run using the given user? What about tasks which specify that they run as root (become)? What about tasks that specify a given user to be executed as? What about tasks that run as root but create a file with the owner matching the given user?
As there are no user scopes for variables and we don't really have a "user context" outlined (see the last questions), we are currently stuck with including variable files explicitly. Hence the options below:
You can keep using vars_files and specify a first-found list:
vars_files:
  - - ~/ansible_config/vars.yml
    - <default vars file somewhere on the machine>
This way the ansible executor user can redefine values...
You can use the --extra-vars @<filepath> syntax to include all variables from a file, and you can have more than one of these.
A similar thing I do is to include every variable from every yml file within my GLOBAL_INPUT_DIR (which is an environment variable that can be defined before running the bash script executing ansible-playbook, or in your bash profile or something).
EXTRA_ARGS=$(
  find "${GLOBAL_INPUT_DIR}" -iname "*.yml" \
    | while read -r line; do echo "--extra-vars @${line} "; done \
    | tr -d "\n"
)
ansible-playbook "$@" ${EXTRA_ARGS}
I usually include something like this in my setups to provide an easy way of redefining variables...
BUT: be aware that this will redefine ALL occurrences of a variable name within the playbook (but this was also true with vars_files).

How to detect an inventory environment in Ansible?

I have the requirement to skip some steps in my scripts when I run a deployment against production.
When a playbook is started, it always requires an environment (via the -i inventory option), so there should be information I could query to distinguish which steps to take.
This leads me to ask:
How can I query the environment I am running a playbook in?
As an alternative, I could provide an extra variable as a parameter like -e "env=prod". But this would be redundant, since I have specified the environment already with -i...
Another option would be to set up a group per environment, put all hosts of that environment in it, and define a group_var called env: prod. But putting all hosts in this group is overkill.
Bottom line: can I query the environment? Is there another option I'm not considering?
From Magic Variables in the Ansible documentation:
Also available, inventory_dir is the pathname of the directory holding Ansible’s inventory host file, inventory_file is the pathname and the filename pointing to the Ansible’s inventory host file.
Use string manipulation to extract the information you want from the above variable (e.g., the last segment from the path).
A filter exists to extract the last part of a pathname/filename:
managing-file-names-and-path-names
So you can use inventory_file | basename
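A minimal sketch of how that could drive the production check (the inventory file name and the task are hypothetical):
- name: Step that must not run in production
  command: /usr/local/bin/reset_test_data
  when: (inventory_file | basename) != "production"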

Combine two default Ansible host files including one being ec2.py?

I'm using Ansible in a mixed environment of AWS and non-AWS machines. I'd like to avoid passing hosts on the command line. How do I combine multiple host files in Ansible and make it the default? The current recommendation on the Ansible site is to override /etc/ansible/hosts with ec2.py, which prevents me from adding additional hosts. Thanks.
You can mix dynamic and static inventory files by creating a directory and dropping ec2.py in it, plus your INI-formatted inventory list as a separate file.
It is mentioned briefly in the docs here.
for example:
./inventory/ec2.py
./inventory/additional-hosts
ansible-playbook ... -i inventory/
Note that any file with the executable bit set will be treated as a dynamic inventory, so make sure your files have the correct permissions.
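For example, assuming the layout above:
chmod +x inventory/ec2.py             # executable: treated as a dynamic inventory script
chmod -x inventory/additional-hosts   # non-executable: parsed as a static inventory file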

Rename roles/rolename/tasks/main.yml to rolename.yml in Ansible

By default, Ansible looks for the tasks for a role in a main.yml. I have too many main.yml files and I'd like to rename this to rolename.yml or something that is more unique. How can I change Ansible's default behavior to use rolename.yml instead of tasks/main.yml?
As Bruce already pointed out this is hardcoded. But I have an issue with this behavior as well, as my IDE displays the filename in the tab and I used to have a bazillion tabs named "main.yml".
My standard setup is to have two files:
main.yml
role-name.yml
The main.yml then simply contains an include of role-name.yml. Along with this include I handle tags, because I want all my roles to be tagged with their name.
---
- include: role-name.yml
  tags: role-name
...
Unfortunately there's no way to do this. The name main.yml is hardcoded into the ansible source code. (If you really care, look for the function _resolve_main in this file.)
Role tasks will always be in the file roles/<rolename>/tasks/main.yml, variables in roles/<rolename>/vars/main.yml, etc. Because the path that each file lives in provides the full detail of the name of the role & purpose of the file, there's really no need to change the name from main.yml. You would just end up with something like roles/<rolename>/tasks/<rolename>.yml which is redundant.
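For reference, the conventional layout looks like this (the role name is just an example):
roles/
  webserver/
    tasks/main.yml
    handlers/main.yml
    defaults/main.yml
    vars/main.yml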
This is all documented in Ansible's Best Practices document.
As a workaround, one can symlink a conveniently named rolename_tasks.yml to main.yml...
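For example (role name hypothetical), keeping main.yml as a symlink so Ansible still finds it:
cd roles/webserver/tasks
mv main.yml webserver_tasks.yml
ln -s webserver_tasks.yml main.yml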
