Include a playbook with a variable name that is defined on another host - Ansible

I have some trouble getting my playbook to include another. I use a playbook to roll out a clean VM. After that, I'd like to run another playbook, to configure the VM in a certain way. I've added the new host to the inventory and have ssh access to it.
Our team has set up a project per servertype. I've retrieved the right path to the project in an early stage (running against localhost) and used set_fact to put it in "servertype_project".
I expect this to work (at the playbook-level, running against the new VM):
- name: "Run servertype playbook ({{ project }}) on VM"
  vars:
    project: "{{ hostvars['localhost']['servertype_project'] }}"
  include: "{{ project }}/ansible/playbook.yml"
But it fails the syntax check with this message:
ERROR! {{ hostvars['localhost']['servertype_project'] }}: 'hostvars' is undefined
I can retrieve the right string if I refer to {{ hostvars['localhost']['servertype_project'] }} from within a task, but not from the level in which I can include another playbook.
Since the value is determined at runtime, I think set_fact is the correct way to store the variable, but that one is host-specific. Is there any way I can pass it along to this host as well? Or did I miss some global-var-like-option?

You are not missing anything. This is expected. Facts are bound to hosts and hostvars is not accessible in playbook scope.
There is no way to define a variable that would be accessible in the include declaration, except for passing it as an extra-vars argument on the CLI. But if you use an in-memory inventory (which is a reasonable assumption), calling another ansible-playbook instance is out of the question.
I think you should rethink your flow and instead of including a dynamically-defined play, have statically-defined play(s) running against your in-memory inventory and include the roles conditionally.
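As a sketch (the role names and the `new_vm` group here are hypothetical; `servertype_project` is the fact from the question), statically-defined plays with conditional role includes could look like:

```yaml
# Sketch only: role names and the new_vm group are assumptions.
- name: Configure the new VM
  hosts: new_vm
  tasks:
    - name: Apply the webserver role if that servertype was detected
      include_role:
        name: webserver
      when: hostvars['localhost']['servertype_project'] is search('webserver')

    - name: Apply the database role if that servertype was detected
      include_role:
        name: database
      when: hostvars['localhost']['servertype_project'] is search('database')
```

Unlike a play-level include path, the `when:` on each task is evaluated at runtime, where `hostvars` is available.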

How can I run an ansible role locally?

I want to build a docker image locally and deploy it so it can then be pulled on the remote server I'm deploying to. To do this I first need to check out code from git to be built.
I have an existing role which installs git, sets up keys for reading from our repo etc. I want to run this role locally to check out the code I care about.
I looked at local action, delegate_to, etc but haven't figured out an easy way to do this. The best approach I could find was:
- name: check out project from git
  delegate_to: localhost
  include_role:
    name: configure_git
However, this doesn't work: I get a complaint about a syntax error on the name line. If I remove the delegate_to line it works (but runs on the wrong server). If I replace include_role with debug, it runs locally. It's almost as if Ansible explicitly refuses to run an included role locally, though I can't find that documented anywhere.
Is there a clean way to run this, or other roles, locally?
Extract from the include_role module documentation
Task-level keywords, loops, and conditionals apply only to the include_role statement itself.
To apply keywords to the tasks within the role, pass them using the apply option or use ansible.builtin.import_role instead.
Ignores some keywords, like until and retries.
I actually don't know whether the error you get is linked to delegate_to being ignored (I seriously doubt it is...). Regardless, that is not the correct way to use it here; it should be:
- name: check out project from git
  include_role:
    name: configure_git
    apply:
      delegate_to: localhost
Moreover, this is most probably a bad idea. Imagine your play targets 100 servers: the role will run one hundred times (unless you also apply run_once: true). I would run my role "normally" on localhost in a dedicated play, then do the rest of the job on my targets in the next one(s):
- name: Prepare env on localhost
  hosts: localhost
  roles:
    - role: configure_git

- name: Do the rest on other hosts
  hosts: my_group
  tasks:
    - name: Dummy task
      debug:
        msg: "Dummy"

Ansible cloudformation update stack

I'm trying to update a cloudformation stack with just a param value that I need to change via Ansible. The stack is previously created and has about 20 input params, but I just need to update the value for one. I tried the following:
- name: update
  cloudformation:
    stack_name: "{{ stack_name }}"
    state: present
    region: "{{ region }}"
    disable_rollback: false
  args:
    template_parameters:
      CreateAlarms: "{{ create_alarms }}"
When I run it, the play throws an error stating that it expects values for the other template params. The Ansible documentation (http://docs.ansible.com/ansible/latest/cloudformation_module.html) says: "If state is present, the stack does exist, and neither template nor template_url is specified, the previous template will be reused." How do I tell the cloudformation module to reuse the previous values as well? I know the AWS CLI supports this via the UsePreviousValue flag, but how do I do it with the Ansible cloudformation module?
Thanks in advance.
Author/maintainer of the current Ansible cloudformation module here. There isn't a way to reuse previous values; you must specify the parameters every time. Usually that's fine, because you have your params stored in your Ansible playbook anyhow.
If you're nervous, the values are listed in the CloudFormation console, and you can also use changesets in Ansible to make sure only the expected values are changing.
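A changeset run could be sketched like this (the `create_changeset` option of the cloudformation module previews changes instead of applying them; the parameter list shown is illustrative, and every existing parameter still has to be repeated):

```yaml
- name: preview the parameter change as a changeset
  cloudformation:
    stack_name: "{{ stack_name }}"
    state: present
    region: "{{ region }}"
    create_changeset: true
    template_parameters:
      CreateAlarms: "{{ create_alarms }}"
      # ...plus the other existing parameters for this stack
```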

How to set an Ansible tier-specific inventory variable?

Is it possible to create an Ansible inventory variable that isn't associated with an inventory host or group but the server tier the playbook is run on? I have an Ansible role that installs the libffi-dev package using APT, but I may want to install a different version of that package on each server tier. I've created a variable "libffi-dev_ver" for that purpose. I also have the inventory files "development", "staging", and "production" that correspond to each of my tiers.
My role's main task, which is called from my main site.yml playbook, checks that version variable exists prior to running the role:
# roles/misc_libs/tasks/main.yml
- include: check_vars.yml tags=misc_libs
- include: misc_libs.yml tags=misc_libs
check_vars.yml checks to ensure that the package version variable exists:
# roles/misc_libs_tasks/check_vars.yml
- name: check that required role variables are set
  fail: msg="{{ item }} is not defined"
  when: not {{ item }}
  with_items:
    - libffi-dev_ver
The misc_libs role actually uses that variable to install the package:
# roles/misc_libs/tasks/misc_libs.yml
- name: install libffi-dev
  apt: >
    pkg=libffi-dev={{ libffi-dev_ver }}
    update_cache=yes
    cache_valid_time=3600
  become: True
My development inventory file looks like this:
# development
[webservers]
web01.example.com ansible_ssh_host=<ip_address>
[dev:children]
webservers
[webservers:vars]
libffi-dev_ver="3.1-2+b2"
When I run this command:
ansible-playbook -i development site.yml -l webservers
I get this Ansible error:
fatal: [web01.example.com] => error while evaluating conditional: not libffi-dev_ver
What is the correct way to declare a package versioning variable like this in Ansible? The variable's value depends on the server tier which indicates to me that it goes in an inventory file since inventory files are server tier-specific. But all inventory variables seem to have to be associated with a host or group. I've done that but the role still doesn't see the variable. I could add a task to the role that detects the server tier and uses a "when" conditional to set the variable accordingly but that solution seems ugly because if you're installing multiple packages in a role, you'd need three conditionals for each package version variable. I've looked through the Ansible documentation and read numerous blog posts on setting up multi-tier playbooks but I don't see this particular situation addressed. What's the right way to declare a tier-specific variable like this?
The problem was that the variable 'libffi-dev_ver' I declared is actually a Jinja2 identifier that must adhere to Python 2.x naming rules. The '-' (dash) is an invalid character according to these rules. Once I changed it to an '_' (underscore), I no longer got the error.
Also, the check_vars.yml playbook is actually unnecessary. There is an Ansible configuration variable error_on_undefined_vars which will cause steps containing an undefined variable to fail. Since it's true by default, I don't need to run check_vars.yml as all variables are already being checked.
One place to declare server tier-specific variables seems to be in a file in the group_vars directory that has the same name as the group which is named after that tier in your inventory file. So in my case my 'development' inventory file contains a 'dev' child group. This group contains the web server where I want to install the libffi-dev package. Therefore, I created a file 'group_vars/dev' and declared a variable in that file called 'libffi_dev_ver' which I can reference in my misc_libs.yml playbook.
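For example, the group_vars file can be as small as this (using the version string from the inventory above):

```yaml
# group_vars/dev
libffi_dev_ver: "3.1-2+b2"
```

Any host in the 'dev' group (here, the webservers) then sees `libffi_dev_ver`.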
I don't get what you are attempting to accomplish. Why is:
# roles/misc_libs/tasks/misc_libs.yml
- name: install libffi-dev
  apt: >
    pkg=libffi-dev={{ libffi-dev_ver }}
    update_cache=yes
    cache_valid_time=3600
  become: True
  when: libffi-dev_ver is defined
not enough?

How to create dynamic variables in ansible

The real scenario: I want to get the resource ID of an SQS queue in AWS, which is returned after a playbook runs, and then use that variable in files to configure the application.
Persisting variables from one playbook to another
Checking the documentation, modules like set_fact and register are scoped to the specific host they run on. But there are many reasons to use variables from one host on another.
Alternatives I can think of:
Using the command module and echoing the variables to a file, then loading that file via a vars section or an include.
Setting environment variables and then accessing them, but this would be difficult.
So what is the solution?
If you're gathering facts, you can access hostvars via the normal jinja2 + variable lookup:
e.g.
- hosts: serverA.example.org
  gather_facts: True
  ...
  tasks:
    - set_fact:
        taco_tuesday: False
and then, if this has run, on another host:
- hosts: serverB.example.org
  ...
  tasks:
    - debug: var="{{ hostvars['serverA.example.org']['ansible_memtotal_mb'] }}"
    - debug: var="{{ hostvars['serverA.example.org']['taco_tuesday'] }}"
Keep in mind that if you have multiple Ansible control machines (where you call ansible and ansible-playbook from), you should take advantage of the fact that Ansible can store its facts/variables in a cache (currently Redis and json), that way the control machines are less likely to have different hostvars. With this, you could set your control machines to use a file in a shared folder (which has its risks -- what if two control machines are running on the same host at the same time?), or set/get facts from a Redis server.
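For example, fact caching is enabled in ansible.cfg (a sketch using the built-in redis cache plugin; the connection string follows the plugin's host:port:db format, and the values shown are assumptions):

```ini
# ansible.cfg
[defaults]
gathering = smart
fact_caching = redis
fact_caching_timeout = 86400
fact_caching_connection = localhost:6379:0
```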
For my uses of Amazon data, I prefer to just fetch the resource each time using a tag/metadata lookup. I wrote an Ansible plugin that allows me to do this a little more easily as I prefer this to thinking about hostvars and run ordering (but your mileage may vary).
You can pass variables On The Command Line: http://docs.ansible.com/ansible/playbooks_variables.html#passing-variables-on-the-command-line
ansible-playbook release.yml --extra-vars "version=1.23.45 other_variable=foo"
You can use a local connection to run a playbook, register the variable, and pass it on to another playbook:
- hosts: 127.0.0.1
  connection: local
  tasks:
    - shell: ansible-playbook -i ...
      register: sqs_id
    - shell: ansible-playbook -i ... -e "sqs_id={{ sqs_id.stdout }}"
Also delegation might be useful in this scenario:
http://docs.ansible.com/ansible/playbooks_delegation.html#delegation
Also, you can store the output in a local file and read it back (http://docs.ansible.com/ansible/playbooks_delegation.html#delegation):
- name: take a sqs id
  local_action: command cat ~/sqs_id
PS: I don't understand why you can't write one complex playbook that includes many roles sharing the variables.
You can write "common" variables to host_vars or group_vars; this way all the servers have access to them.
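For example (the file name and variable are illustrative):

```yaml
# group_vars/all.yml -- visible to every host in the inventory
sqs_queue_url: "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"
```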
Another way may be to create a custom ansible module/lookup plugin to hide all the boilerplate code and get an easy and flexible access to the variables you need.
I had a similar issue with Azure DevOps pipelines.
I created VMs with Terraform; the SSH keys and Windows username/password were generated by Terraform and stored in a Key Vault.
So I then needed to query the Key Vault before running Ansible on all the created VMs. I ended up using the Azure Python SDK to get all the secrets. I also generate an inventory file and a host_vars folder with a file for each VM.
The actual playbook is now very basic and does the job perfectly. All variables for Terraform and Ansible are in a JSON file, and the Python script is less than 30 lines.

Ansible - accessing local environment variables

I wonder if there is a way for Ansible to access local environment variables.
The documentation references accessing a variable on the target machine:
{{ lookup('env', 'SOMEVAR') }}
Is there a way to access environment variables on the source machine?
I have a Linux VM running on OSX, and for me:
lookup('env', 'HOME') returns "/Users/Gonzalo" (the HOME variable from OSX), while ansible_env.HOME returns "/root" (the HOME variable from the VM).
Worth mentioning that ansible_env.VAR fails if the variable does not exist, while lookup('env', 'VAR') does not fail.
Use ansible lookup:
- set_fact: env_var="{{ lookup('env','ENV_VAR') }}"
Those variables are on the management machine, which I suppose is the "source machine" in your case.
Check this: https://docs.ansible.com/ansible/devel/collections/ansible/builtin/env_lookup.html
Basically, if you just need to access existing variables, use the 'env' lookup plugin, as shown above for the HOME environment variable on the management machine.
Now, if you need to access it on the remote machine, you can run your Ansible script locally on that machine.
Or you could use the Ansible facts variables; if the value is not in the facts, you can run a shell command to get it.
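A sketch of that shell-command fallback (the variable name is hypothetical):

```yaml
- name: read an environment variable on the remote host
  shell: echo "$MY_REMOTE_VAR"
  register: remote_var

- name: show the value
  debug:
    msg: "{{ remote_var.stdout }}"
```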
Use delegate_to to run it on any machine you want:
- name: get running ansible user
  ansible.builtin.set_fact:
    local_ansible_user: "{{ lookup('env', 'USER') }}"
  delegate_to: localhost