Accessing Ansible variables in molecule test, TestInfra - ansible

I picked up Molecule while researching InSpec and how to use it with Ansible. I found Molecule very useful and adoptedted it. I wanted to use it in two ways:
1- When developing a role or playbook
2- After a particular playbook has been run on production.
On number 1: I found this very helpful question/response on Stack Overflow, and it helped shape my thinking. I put my variable file for the kafka role under group_vars/all as suggested in the Stack Overflow post:
kafka
├── molecule
│   └── default
│       ├── molecule.yml
│       ├── playbook.yml
│       ├── ...
│       ├── group_vars
│       │   └── all.yml
│       └── tests
│           └── test_default.py
├── tasks
│   └── main.yml
└── ....
test_default.py
import os
import testinfra.utils.ansible_runner
import pytest

testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
    os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('all')


@pytest.fixture()
def AnsibleVars(host):
    all_vars = host.ansible.get_variables()
    return all_vars


def test_hosts_file(host):
    f = host.file('/etc/hosts')
    assert f.exists
    assert f.user == 'root'
    assert f.group == 'root'


def test_downloaded_binary(host, AnsibleVars):
    # arch = host.file(AnsibleVars['kafka_archive_temp'])
    result = host.ansible('debug', 'var=kafka_archive_temp')
    arch = host.file(result['kafka_archive_temp'])
    assert arch.exists
    assert arch.is_file


def test_installation_directory(host, AnsibleVars):
    # dir = host.file(AnsibleVars['kafka_final_path'])
    result = host.ansible('debug', 'var=kafka_final_path')
    dir = host.file(result['kafka_final_path'])
    assert dir.exists
    assert dir.is_directory
    assert dir.user == AnsibleVars['kafka_user_on_os']
    assert dir.group == AnsibleVars['kafka_group_on_os']


def test_user_created(host, AnsibleVars):
    user = host.user(AnsibleVars['kafka_user_on_os'])
    assert user.name == AnsibleVars['kafka_user_on_os']
    assert user.group == AnsibleVars['kafka_group_on_os']
group_vars/all.yml
kafka_version: "2.2.1"
kafka_file_name: "kafka_2.12-{{ kafka_version }}.tgz"
kafka_user_on_os: kafka
kafka_group_on_os: kafka
kafka_zookeeper_service: zookeeper
kafka_service: kafka
kafka_log_folder: /var/log/kafka
kafka_zookeeper_port: 2181
kafka_archive_temp: "/tmp/{{ kafka_file_name }}"
kafka_final_path: "/usr/local/kafka/{{ kafka_version }}"
kafka_get_binaries_details:
  - {
      dest: "{{ kafka_archive_temp }}",
      url: "http://www-us.apache.org/dist/kafka/2.2.1/kafka_2.12-2.2.1.tgz"
    }
....
molecule verify
molecule verify
--> Validating schema /Users/joseph/Engineering/configuration-management-ansible/roles/kafka/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix
└── default
└── verify
--> Scenario: 'default'
--> Action: 'verify'
--> Executing Testinfra tests found in /Users/joseph/Engineering/configuration-management-ansible/roles/kafka/molecule/default/tests/...
============================= test session starts ==============================
platform darwin -- Python 3.7.4, pytest-5.1.2, py-1.8.0, pluggy-0.12.0
rootdir: /Users/joseph/Engineering/configuration-management-ansible/roles/kafka/molecule/default
plugins: testinfra-3.1.0
collected 8 items
tests/test_default.py ........ [100%]
============================== 8 passed in 18.34s ==============================
Verifier completed successfully.
However, the method host.ansible.get_variables() could not resolve a variable nested inside another variable, such as kafka_final_path: "/usr/local/kafka/{{ kafka_version }}".
I ended up using the following:
result = host.ansible('debug','var=kafka_final_path')
dir = host.file(result['kafka_final_path'])
to get the value of kafka_final_path.
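For what it's worth, the debug-module workaround can be wrapped into a reusable pytest fixture so that each test does not have to repeat the call. This is only a sketch, assuming it lives next to the tests above (where pytest is already imported); the fixture name resolved_var and the variant test are made up for illustration:
@pytest.fixture()
def resolved_var(host):
    # Ask the debug module for the variable so that nested Jinja templating
    # is rendered, mirroring the workaround used in the tests above.
    def _resolve(name):
        # host.ansible('debug', 'var=<name>') returns {'<name>': <rendered value>}
        return host.ansible('debug', 'var=' + name)[name]
    return _resolve


def test_installation_directory_resolved(host, resolved_var):
    # Hypothetical variant of test_installation_directory using the fixture.
    dir = host.file(resolved_var('kafka_final_path'))
    assert dir.exists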
Question 1.1: Given that a little manipulation is needed before a nested variable can be retrieved with all the required interpolation, I am wondering whether there is a better way of writing these tests?
Question 2.1: On number 2, I would like to create a different scenario for testing, for example for EC2 on AWS. In those playbooks I use external variable files passed to ansible-playbook, since they have higher precedence. In that case, how would I access the variables from those external vars_files in Testinfra?
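Regarding Question 2.1, one possible approach (only a sketch, not necessarily the best one) is to read the same external vars file directly in the test with PyYAML, which is available wherever Ansible is installed. The path vars/ec2.yml, the EXTRA_VARS_FILE environment variable and the fixture name extra_vars are assumptions made up for illustration; note that this loads the raw YAML, so values containing Jinja expressions would still need the debug-module workaround above to be rendered:
import os

import pytest
import yaml

# Hypothetical location of the external vars file passed to ansible-playbook;
# adjust the default path or the environment variable to match the real layout.
EXTRA_VARS_FILE = os.environ.get('EXTRA_VARS_FILE', 'vars/ec2.yml')


@pytest.fixture()
def extra_vars():
    # Load the external vars file so the tests can assert against the same
    # values that were handed to ansible-playbook.
    with open(EXTRA_VARS_FILE) as f:
        return yaml.safe_load(f)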

Related

Ansible - Passing a dictionary to a module parameter

I'm using the fortinet.fortios.fortios_system_global module as described here: https://docs.ansible.com/ansible/latest/collections/fortinet/fortios/fortios_system_global_module.html#ansible-collections-fortinet-fortios-fortios-system-global-module
My goal is to pass a dictionary to the system_global parameter with the allowed sub-parameters. I have the dictionary as follows for example:
forti:
  admin-concurrent: enable
  admin-console-timeout: 0
  admin-hsts-max-age: 15552000
  <more key:value>
This dictionary lives in a separate file called forti.yml.
I then use vars_files to pull this yml file into my play as follows:
vars_files:
  - /path/to/forti.yml
And then I use the system_global module:
- name: Configure system_global task
  fortios_system_global:
    access: "{{ access_token }}"
    system_global: "{{ forti }}"
However, when I run the play it throws an error like so:
"msg": "Unsupported parameters for (fortios_system_global) module: system_global.admin-concurrent, system_global.admin-console-timeout, system_global.admin-hsts-max-age,<and so on>. Supported parameters include: member_path, member_state, system_global, vdom, enable_log, access_token."
I tried putting the key/value pairs under vars: at the play level and passed them to the module the same way, and it worked.
vars:
  forti:
    admin-concurrent: enable
    admin-console-timeout: 0
    admin-hsts-max-age: 15552000
    <more key: value>
What am I missing? They're both of type dict and the data is exactly the same. Can someone please help?
You have - in the keys, and the parameters are supposed to use _, so the module is telling you that those parameters do not exist.
vars:
  forti:
    admin-concurrent: enable
    admin-console-timeout: 0
    admin-hsts-max-age: 15552000
    <more key: value>
should be
vars:
  forti:
    admin_concurrent: enable
    admin_console_timeout: 0
    admin_hsts_max_age: 15552000
    <more key: value>
Keep on automating!
Just look at module examples here: https://docs.ansible.com/ansible/latest/collections/fortinet/fortios/fortios_system_global_module.html#ansible-collections-fortinet-fortios-fortios-system-global-module
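If you would rather keep the dashed keys in forti.yml, one option is a small custom filter plugin that rewrites the keys before the dict reaches the module, similar in spirit to the apply_tags filter plugin shown in a later answer on this page. This is only a sketch; the file and the filter name dash_to_underscore are made up for illustration:
# ./filter_plugins/dash_to_underscore.py
def dash_to_underscore(src):
    # Return a copy of the dict with '-' replaced by '_' in every top-level key.
    return {k.replace('-', '_'): v for k, v in src.items()}


class FilterModule(object):
    def filters(self):
        return {'dash_to_underscore': dash_to_underscore}
The task would then pass system_global: "{{ forti | dash_to_underscore }}" instead of the raw dict.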

Ansible callback plugin: how to get play attribute values with variables expanded?

I have a play below and am trying to get the resolved value of the remote_user attribute inside the callback plugin.
- name: test play
  hosts: "{{ hosts_pattern }}"
  strategy: free
  gather_facts: no
  remote_user: "{{ my_remote_user if my_remote_user is defined else 'default_user' }}"
  tasks:
    - name: a test task
      shell: whoami && hostname
I am currently accessing the play field attribute as follows:
def v2_playbook_on_play_start(self, play):
    self._play_remote_user = play.remote_user
And I also tried saving the remote_user within v2_playbook_on_task_start to see if this does the trick, as this is where the templated task name is made available.
def v2_playbook_on_task_start(self, task, is_conditional):
    self._tasks[task._uuid].remote_user = task.remote_user
    self._tasks[task._uuid].remote_user_2 = task._get_parent_attribute('remote_user')
However, all of the cases above give me {{ my_remote_user if my_remote_user is defined else 'default_user' }} instead of the expanded/resolved value.
In general, is there a neat way to get a collection of all play attributes with resolved values as defined in the playbook?
Happily, this is much easier for action plugins.
The ActionBase class already has templar and loader properties.
One can iterate over task_vars and render everything with Templar.template:
for k in task_vars:
    new_module_args = merge_hash(
        new_module_args,
        {k: self._templar.template(task_vars.get(k, None))}
    )
and then call the module:
result = self._execute_module(
    module_name='my_module',
    task_vars=task_vars,
    module_args=new_module_args
)
I don't think there is an easy way to achieve this.
PlayContext is templated inside the task_executor, and this happens after all the callback methods have already been notified.
So you would have to use the Templar class manually (but I'm not sure you can get the correct variable context for it to work correctly).
Credit goes to Konstantin's tip to use the Templar class.
I came up with a solution for Ansible 2.3.1; I'm not entirely sure it's the optimal one, but it seems to work. This is example code:
from ansible.plugins.callback import CallbackBase
from ansible.template import Templar
from ansible.plugins.strategy import SharedPluginLoaderObj


class CallbackModule(CallbackBase):
    CALLBACK_VERSION = 2.0
    CALLBACK_TYPE = 'notification'
    CALLBACK_NAME = 'your_name'

    def __init__(self):
        super(CallbackModule, self).__init__()
        # other shenanigans

    def v2_playbook_on_start(self, playbook):
        self.playbook = playbook

    def v2_playbook_on_play_start(self, play):
        self.play = play

    def _all_vars(self, host=None, task=None):
        # host and task need to be specified in case 'magic variables'
        # (host vars, group vars, etc.) need to be loaded as well
        return self.play.get_variable_manager().get_vars(
            loader=self.playbook.get_loader(),
            play=self.play,
            host=host,
            task=task
        )

    def v2_runner_on_ok(self, result):
        templar = Templar(loader=self.playbook.get_loader(),
                          shared_loader_obj=SharedPluginLoaderObj(),
                          variables=self._all_vars(host=result._host,
                                                   task=result._task))
        remote_user = templar.template(self.play.remote_user)
        # do something with templated remote_user

Ansible Dict and Tags

I have a playbook that creates EC2 instances using a dictionary declared in vars:, then registers the IPs into a group to be used later on.
The dict looks like this:
servers:
  serv1:
    name: tag1
    type: t2.small
    region: us-west-1
    image: ami-****
  serv2:
    name: tag2
    type: t2.medium
    region: us-east-1
    image: ami-****
  serv3:
    [...]
I would like to apply tags to this playbook in the simplest way possible so I can create just some of the instances using tags. For example, running the playbook with --tags tag1,tag3 would only start the EC2 instances matching serv1 and serv3.
Applying tags to the dictionary doesn't seem possible, and I would like to avoid multiplying tasks like:
Creating EC2
Registering infos
Getting private IP from previously registered infos
Adding host to group
I already have a working loop for the case where I want to create all the EC2 instances at once. Is there any way to achieve this (without relying on --extra-vars, which would need key=value)? For example, filtering the dictionary to keep only what is tagged before running the EC2 loop?
I doubt you can do this out of the box, and I'm not sure it is a good idea at all, because tags are used to filter tasks in Ansible, so you would have to mark all tasks with tags: always.
You can accomplish this with a custom filter plugin, for example (./filter_plugins/apply_tags.py):
try:
    from __main__ import cli
except ImportError:
    cli = False


def apply_tags(src):
    if cli:
        tags = cli.options.tags.split(',')
        res = {}
        for k, v in src.iteritems():
            keep = True
            if 'name' in v:
                if v['name'] not in tags:
                    keep = False
            if keep:
                res[k] = v
        return res
    else:
        return src


class FilterModule(object):
    def filters(self):
        return {
            'apply_tags': apply_tags
        }
And in your playbook:
- debug: msg="{{ servers | apply_tags }}"
  tags: always
I found a way to match my needs without touching the rest, so I'm sharing it in case others have a similar need.
I needed to combine dictionaries depending on tags, so my "main" dictionary wouldn't be static.
The variables became:
- serv1:
    - name: tag1
      type: t2.small
      region: us-west-1
      image: ami-****
- serv2:
    - name: tag2
      type: t2.medium
      region: us-east-1
      image: ami-****
- serv3:
    [...]
So instead of duplicating my tasks, I used set_fact with tags like this:
- name: Combined dict
  # Declaring an empty list to collect the selected servers
  set_fact:
    servers: []
  tags: ['always']

- name: Add Server 1
  set_fact:
    servers: "{{ servers + serv1 }}"
  tags: ['tag1']

- name: Add Server 2
  set_fact:
    servers: "{{ servers + serv2 }}"
  tags: ['tag2']

[..]
Twenty lines instead of multiplying tasks for each server: change the vars from a dictionary to lists, add a few tags, and all good :) Now if I add a new server it will only take a few lines.

How to provide a condition within Chef recipe to see if it running under test kitchen?

I am using encrypted data bags within Chef and I want to add a condition within my Chef recipe as follows:
If (test kitchen) then
  encryptkey = data_bag_item("tokens", "encryptkey")
If (not test kitchen) then
  secret = Chef::EncryptedDataBagItem.load_secret("/etc/chef/encrypted_data_bag_secret")
  encryptkey = Chef::EncryptedDataBagItem.load("tokens", "encryptkey", secret)
I have added data_bags_path and encrypted_data_bag_secret_key_path within kitchen.yml as follows:
provisioner:
  name: chef_zero
  chef_omnibus_url: omni-url/chef/install.sh
  roles_path: 'test/integration/default/roles'
  data_bags_path: "test/integration/default/data_bags"
  encrypted_data_bag_secret_key_path: "test/integration/default/encrypted_data_bag_secret"
Use the attributes in your kitchen.yml:
suites:
  - name: default
    data_bags_path: 'databags'
    run_list:
      - recipe[x::y]
    attributes: { 'kitchen': 'true' }
Inside your recipe, put an if condition using the value of node['kitchen'].
if node['kitchen'] == 'true'
  # something
else
  # else
end
Just use data_bag_item("tokens", "encryptkey") for both. It will take care of decryption for you automatically.

How to append values in ansible variable using conditional statements

I am passing a set of values to the Ansible playbook. Using those values, I try to create a string.
For example, I pass the arguments first_nm, last_nm and nick_nm to my playbook via --extra-vars. And inside my role:
<task-name>/
  vars/main.yml
I tried to do the following:
full_name: {{first_nm}} {{last_nm}}{{'-'+nick_nm if nick_nm is defined else ''}}
My problem:
Since nick_nm is optional, when it's empty or not defined I get the full name as, for example, david john- with a - appended to the value.
So how can I avoid this append? Is there a better way to do the same?
You should also check that the string is not empty. In your setup you are only checking whether the variable is defined, and since it is, the condition evaluates to True and gives you '-' + nick_nm.
You can do it like this:
---
- hosts: localhost
  gather_facts: no
  connection: local
  vars:
    - first_nm: John
    - last_nm: Smith
    - nick_nm:
  tasks:
    - set_fact: full_name="{{first_nm}} {{last_nm}}{% if nick_nm is defined and nick_nm %}-{{nick_nm}}{%endif%}"
    - debug: var=full_name
