This question already has answers here:
Ansible: Access host/group vars from within custom module
(3 answers)
Closed 7 years ago.
I'm writing a custom module in Ansible, specific to a Playbook. Is it possible to directly access a playbook variable, without needing to pass it to the task as a parameter?
It is not possible: the module is executed on the remote host, and playbook variables are not available there unless they are explicitly passed as parameters.
I had the same question a while ago and Pruce P offered an interesting workaround in his answer.
I had another idea in the meantime, though it is only theoretical and never tested: besides normal modules, Ansible has a special kind of module: action plugins. (They are not documented.) They are used exactly like modules but are executed locally. Since they run locally, they have access to Ansible's runner object, which holds all the group/host vars etc.
The cool thing is that you can call a module programmatically from an action plugin. So you could have a small wrapper action plugin which calls the actual module and passes all (required) vars to it. Since action plugins are undocumented, you need to look at the available plugins for examples. For instance, here is one which calls a module: assemble.
I have written something here which interacts with vars from the runner object.
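To make the idea concrete: in modern Ansible the runner object is gone, but the same approach works because action plugins receive task_vars, which contains the host/group variables. Below is a minimal, untested sketch; the plugin, module, and variable names (my_wrapper, my_module, my_playbook_var, my_param) are all hypothetical.

```python
# Sketch of a wrapper action plugin, e.g. saved as
# action_plugins/my_wrapper.py next to the playbook. It runs on the
# control node, reads a playbook/inventory variable from task_vars,
# and forwards it to the real module as an argument.
from ansible.plugins.action import ActionBase

class ActionModule(ActionBase):
    def run(self, tmp=None, task_vars=None):
        result = super(ActionModule, self).run(tmp, task_vars)
        args = self._task.args.copy()
        # Forward a playbook/inventory variable to the real module.
        args['my_param'] = task_vars.get('my_playbook_var')
        # Execute the actual module remotely with the augmented args.
        result.update(self._execute_module(
            module_name='my_module',
            module_args=args,
            task_vars=task_vars))
        return result
```

This can only run inside the Ansible runtime, so treat it as a starting point rather than a drop-in solution.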
Related
I'm using AWX as a task runner to run a variety of Ansible modules. Some of the Ansible modules are third-party modules whose parameters I can't control without forking the module, which is undesirable for a variety of reasons.
AWX supplies ansible_user as a variable that is used by some of the modules I'm using, and I'm trying to allow a user to override the login user for some hosts by setting another variable, user_override.
I first thought to simply add the line ansible_user: "{{ user_override | default(ansible_user) }}" to the task's parameters, which would work... but the modules in question don't accept credentials via parameters. My next thought was to add a vars: entry to the playbook and supply the override there via the same markup as above. This unfortunately results in the error recursive loop detected in template string, which has been the bane of my existence while working through this problem.
I've also tried using the if/else syntax and intermediate variables, but neither appear to solve this problem.
How can I achieve this override functionality without forking AWX or the module in question?
Mods: This is distinct from the pile of questions asking about simple variable defaulting because the existing questions aren't in the context of AWX or can be solved by simply using default() or default(lookup()).
At this point, I'm pretty sure that the thing I (the asker) was trying to do is intended to be done by just sometimes setting a value, but some challenging-to-anonymize constraints in our software make that somewhere between challenging and not an option.
A sideways solution to the particular problem of overriding the user is that AWX sets the username in remote_user as well as ansible_user, and remote_user is what the network_cli connection uses. So you can use the line `remote_user: "{{ user_override | default(ansible_user) }}"`. This doesn't help generally, but it does answer the question.
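Concretely, that override can be set at the play level; a minimal sketch, where the host group and the task are placeholders for your actual inventory and third-party module:

```yaml
# Override the connection user when user_override is defined, otherwise
# fall back to the ansible_user that AWX supplies.
- hosts: network_devices          # placeholder inventory group
  gather_facts: false
  remote_user: "{{ user_override | default(ansible_user) }}"
  tasks:
    - name: run a module that authenticates via the connection user
      ansible.builtin.ping:       # placeholder for the third-party module
```

Because the override lives on remote_user rather than in the module's parameters, the third-party module does not need to be forked.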
I wish to create an Ansible module that will read and return the version of my software installed on a system. Reading the Ansible docs, it's not clear whether the result should become part of the facts dictionary or be returned as info (i.e. whether to create a mysw_facts or a mysw_info module). But I suspect an _info module is more appropriate, based on the docs' statement to use _info for
information on individual files or programs.
Also, since the version will change when my (next) module upgrades my software, will the Ansible facts be re-detected/updated?
Ansible is primarily designed for running tasks on remote machines. I want to use it in kind of the reverse - pull data from a bunch of machines and store it locally. I love the setup module but right now I'm interested in pulling a list of users - which the setup module doesn't show me.
Also, I want to add this info into a mariadb table.
I can do this project with the shell module but, my question is, is there a better way?
Unless someone can tell me a better practice, I'll use the shell module for two parts: first, I'll simply cat /etc/passwd and register the output; then I'll use a local_action shell task that invokes a mysql command to update the table.
I'd love to both pull the data and insert/update the data into my table using standard modules. But there don't seem to be modules that will do either.
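For what it's worth, there are standard modules that come close: ansible.builtin.getent can read the passwd database, and the community.mysql collection (if installed) can write rows. A hedged sketch, with the database and table names as placeholders:

```yaml
# Pull the user list from each host with getent, then insert the rows
# into a MariaDB table from the control node.
- hosts: all
  tasks:
    - name: read users from /etc/passwd on each host
      ansible.builtin.getent:
        database: passwd          # result is exposed as getent_passwd

    - name: insert each user into a MariaDB table on the control node
      community.mysql.mysql_query:
        login_db: inventory       # placeholder database name
        query: "INSERT INTO users (host, name) VALUES (%s, %s)"
        positional_args:
          - "{{ inventory_hostname }}"
          - "{{ item }}"
      loop: "{{ getent_passwd.keys() | list }}"
      delegate_to: localhost
```

The delegate_to keeps the database write on the local machine while the fact gathering still happens per remote host.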
You could use local facts. They are described here: https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#local-facts-facts-d
If a remotely managed system has an /etc/ansible/facts.d directory, any files in this directory ending in .fact can be JSON, INI, or executable files returning JSON, and these can supply local facts in Ansible.
Then you get the local facts in JSON format with this command:
ansible <hostname> -m setup -a "filter=ansible_local"
For system-user facts you could write a shell script which returns the system users in JSON format and put it under /etc/ansible/facts.d. Each time you gather facts from this machine, you get these facts too.
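For example, the script body could be as small as this (a sketch; the exact JSON shape and the file name users.fact are up to you):

```shell
# Body of an executable /etc/ansible/facts.d/users.fact script: print
# the local user names as a JSON object so Ansible picks them up as
# local facts under ansible_local.
users_json=$(cut -d: -f1 /etc/passwd | sed 's/.*/"&"/' | paste -sd, -)
printf '{"users": [%s]}\n' "$users_json"
```

Remember to mark the file executable, otherwise Ansible will try to parse it as plain JSON/INI instead of running it.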
Assume that a normal deployment script does a lot of things and many of them are related to preparing the OS itself.
These tasks take a LOT of time to run even if there is nothing new to do, and I want to prevent running them more often than, let's say, once a day.
I know that I can use tags to filter what I am running but that's not the point: I need to make ansible aware that these "sections" executed successfully one hour ago and due to this, it would skip the entire block now.
I was expecting that caching of facts was supposed to do this, but somehow I wasn't able to find any real use case.
You need to figure out how to determine what "executed successfully" means. Is it just that a given playbook ran to completion? That certain roles ran to completion? That certain indicators exist that allow you to determine success?
As you mention, I don't think fact caching is going to help you here unless you want to introduce custom facts into each host (http://docs.ansible.com/ansible/playbooks_variables.html#local-facts-facts-d)
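If you do go the custom-facts route, the marker can be written by the role itself at the end of a successful run; a sketch, with the file and key names made up:

```yaml
# Drop a local fact on the host once the role has finished, so the
# next run can check ansible_local.my_role.configured in a when: clause.
- name: record that this role completed
  ansible.builtin.copy:
    dest: /etc/ansible/facts.d/my_role.fact
    content: '{"configured": true}'
```

On the next fact-gathering pass the value shows up automatically, so the role can skip its expensive tasks without any external cache.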
I typically come up with a set of variables/facts that indicate a role has already been run. Sometimes this involves making shell calls and registering vars, looking at gathered facts, and determining if certain files exist. My typical pattern for a role looks something like this:
roles/my_role/tasks/main.yml

- name: load facts
  include: facts.yml

- name: run config tasks if needed
  include: config.yml
  when: fact_var1 and fact_var2 and inv_var1 and reg_var1
You could also dynamically write a YAML variable file that gets included in your playbooks and contains variables about the configured state of your environment. This is a more global option and doesn't really work if you need to look at the configured status of individual machines. An extension of this would be to write status variables to host_vars or group_vars inventory variable files. These would get loaded automatically on a host-by-host basis.
Unfortunately, as far as I know, fact caching only caches host-based facts such as those created by the setup module, so it wouldn't allow you to use set_fact to record that, for example, a role had been completed and then conditionally check for that at the start of the role.
Instead you might want to consider using Ansible with another product to orchestrate it in a more complex fashion.
My current Ansible project is set up like so:

backup-gitlab.yml
roles/
    aws_backups/
        tasks/
            main.yml
            backup-vm.yml
    gitlab/
        tasks/
            main.yml
            start.yml
            stop.yml
backup-gitlab.yml needs to do the following:
Invoke stop.yml on the gitlab host.
Invoke backup-vm.yml on a different host.
Invoke start.yml on the gitlab host.
The problem I'm running into is that Ansible doesn't seem to support a way of choosing which task files to run within the same role in the same playbook. Before, I was using tags to control what Ansible would do, but in this case tagging the include statements for start.yml and stop.yml doesn't work, because Ansible doesn't appear to have a way to dynamically change which tags are applied once they are set on the command line.
I can't come up with an elegant way to achieve this.
Some options are:
Have each task file be contained within its own role. This is annoying because I will end up with a million roles that are not grouped in any way. It's essentially abandoning the whole 'role' concept.
Use include with hard coded paths. This is prone to error as things move around. Also, since Ansible deprecated combining with_items with include (or using any sort of dynamic looping with include), I can no longer quickly change up the task files being run. Any minor change in my workflow requires lots of coding changes. I would really like to stick with using tags from the command line to control exactly what Ansible does.
Use shell scripts to invoke separate Ansible playbooks.
Use conditionals (when clauses) on every single Ansible action, and control what gets run by setting variables. While several people have recommended this on SO, it sounds awful: I would have to add the conditional to hundreds of actions, and every time I run a playbook the output would be cluttered by hundreds of 'skip' statements.
Leverage Jinja templates and Ansible's local connection to dynamically build static main.yml files with all the required task files included in the proper order (using computed relative paths), then invoke that computed main.yml file. This is dangerous and convoluted.
Use top level Ansible plays to invoke lower level plays. Seems messy, also this brings in problems when I need to pass variables between plays. Using Ansible's Python Api may help this.
Ansible strives to bring VMs into idempotent states but this isn't very helpful and is a dated way of thinking in my opinion (I would have stuck with Chef if that is all I wanted). I want to leverage Ansible to actually do things such as: actively change configuration states, kick off processes, monitor events, react to events, etc. Essentially I want it to automate as much of my job as possible. The current 'role' structure (with static configurations) that Ansible recommends doesn't fit this paradigm very well even though their usage of remote command execution via SSH gets us so close to the dream.
Just use a playbook for these types of management tasks.
Granted, the skip statements do somewhat clutter the output. If you wish to fix that, you can further break down the roles into something like aws_backups-setup and aws_backups-management.
Also, the roles documentation has some information on how you can run pre_tasks and post_tasks around roles.
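For completeness, newer Ansible versions also let a plain playbook pull individual task files out of roles with include_role and its tasks_from option, which maps directly onto the layout above. A sketch; the host patterns (gitlab, backup_host) are assumptions about your inventory:

```yaml
# backup-gitlab.yml: pick specific task files out of each role,
# in the stop -> backup -> start order described above.
- hosts: gitlab
  tasks:
    - name: stop gitlab
      include_role:
        name: gitlab
        tasks_from: stop

- hosts: backup_host            # assumed inventory name for the backup target
  tasks:
    - name: back up the VM
      include_role:
        name: aws_backups
        tasks_from: backup-vm

- hosts: gitlab
  tasks:
    - name: start gitlab
      include_role:
        name: gitlab
        tasks_from: start
```

This keeps the role grouping intact while still letting one playbook orchestrate individual task files across hosts.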