How to always run some Ansible roles after previous failures?

I have a set of playbooks that look like:
- name: Run test
  hosts: tester
  roles:
    - { role: setup_repos }
    - { role: setup_environment }
    - { role: install_packages }
    - { role: run_tests }
    - { role: collect_logs }
The current problem is that the first four roles are riddled with ignore_errors: true, which is not good practice: it makes the output very hard to read and to debug.
The only reason ignore_errors was abused is that we wanted to be able to run collect_logs at the end, regardless of the outcome.
How can we refactor this to remove the ignore_errors and adopt more of a fail-fast strategy?
Please note that we have lots of playbooks calling the collect_logs role, so "moving the code inside the playbook" is not really a way to reuse it.

On Ansible 2.4 or newer you can replace the roles: section with include_role or import_role tasks, which gives you the ability to use the normal logic available to tasks: blocks, handlers, ignore_errors, and so on.
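A minimal sketch of that approach (role names taken from the question), wrapping the test roles in a block and using always so collect_logs runs regardless of failures:

- name: Run test
  hosts: tester
  tasks:
    - block:
        - import_role:
            name: setup_repos
        - import_role:
            name: setup_environment
        - import_role:
            name: install_packages
        - import_role:
            name: run_tests
      always:
        # runs whether the block above succeeded or failed
        - import_role:
            name: collect_logs

Each role now fails fast, and log collection still happens on every run.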

I believe handlers and notify would help you achieve what you want.
You won't need to change your roles' behavior (although it might be a good idea).
You would notify at the end of each of your roles; the handler would only run once.
http://docs.ansible.com/ansible/latest/playbooks_intro.html#handlers-running-operations-on-change
Also, if you choose to start handling errors, you can use --force-handlers to force handler execution even when the playbook fails.
http://docs.ansible.com/ansible/latest/playbooks_error_handling.html#handlers-and-failure
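A minimal sketch of the handler approach (the trigger task and handler names are illustrative):

# at the end of each role, e.g. roles/run_tests/tasks/main.yml
- name: Trigger log collection
  command: /bin/true
  notify: collect logs

# in the playbook
handlers:
  - name: collect logs
    import_role:
      name: collect_logs

Then run with ansible-playbook run_tests.yml --force-handlers so the handler still fires after a failure.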

Related

How to access task parameters from an Ansible task in the Python module code?

Is it possible to access Ansible task parameters from the Python module code?
Specifically, I would like to check if there is a register on the task in order to return a more complete info set.
Is it possible to access task parameters from the Python code of an Ansible module?
Yes, of course. You may have a look into Developing modules and Creating a module, for example:
def run_module():
    # define available arguments/parameters a user can pass to the module
    module_args = dict(
        name=dict(type='str', required=True),
        new=dict(type='bool', required=False, default=False)
    )
Specifically, I would like to check if there is a register on the task
Please take note that Registering variables of the Return Values is done
... from the output of an Ansible task with the task keyword register.
This means the task, and therefore the module called within it, does not know whether its output will be registered, since registration happens after the module code has executed and delivered its final result.
... in order to return a more complete info set.
Therefore you need to provide another way of controlling the data structure of the result set. For example:
...
supports_check_mode=True
...
if module.check_mode:
...
Or just introduce a separate parameter on your custom module, like
verbose: True
or
verbose_level: 1 # where 0 means false or OFF, up to e.g. 4
which can be checked within the module and simply controls the verbosity of the result set.
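A minimal sketch of such a module (the verbose parameter and the details key are illustrative):

from ansible.module_utils.basic import AnsibleModule

def run_module():
    module_args = dict(
        name=dict(type='str', required=True),
        verbose=dict(type='bool', required=False, default=False)
    )
    module = AnsibleModule(argument_spec=module_args, supports_check_mode=True)

    result = dict(changed=False, name=module.params['name'])
    if module.params['verbose']:
        # enrich the result set only when the caller asked for it
        result['details'] = dict(rc=0, raw_output='...')

    module.exit_json(**result)

if __name__ == '__main__':
    run_module()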

How to optionally apply environment configuration?

I want to optionally apply a VPC configuration based on whether an environment variable is set.
Something like this:
custom:
  vpc:
    securityGroupIds:
      - ...
    subnetIds:
      - ...
functions:
  main:
    ...
    vpc: !If
      - ${env:USE_VPC}
      - ${self:custom.vpc}
      - ~
I'd also like to do similar for alerts (optionally add emails to receive alerts) and other fields too.
How can this be done?
I've tried the above configuration and a variety of others, but just receive various errors.
For example:
Configuration error:
  at 'functions.main.vpc': must have required property 'securityGroupIds'
  at 'functions.main.vpc': must have required property 'subnetIds'
  at 'functions.main.vpc': unrecognized property 'Fn::If'
Currently, the best way to achieve such behavior is to use JS/TS-based configuration instead of YAML. With TS/JS, you get full power of a programming language to shape your configuration however you want, including use of such conditional checks to exclude certain parts of the configuration. It's not documented too well, but you can use this as a starting point: https://github.com/serverless/examples/tree/v3/legacy/aws-nodejs-typescript
In general, you can do whatever you want, as long as you export a valid object (or a promise that resolves to a valid object) with serverless configuration.
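A minimal sketch of what that can look like in serverless.ts (the service name, runtime, handler, and VPC ids are placeholders):

// serverless.ts
import type { AWS } from '@serverless/typescript';

// toggle the VPC config from the environment
const useVpc = !!process.env.USE_VPC;

const vpc = {
  securityGroupIds: ['sg-placeholder'],
  subnetIds: ['subnet-placeholder'],
};

const serverlessConfiguration: AWS = {
  service: 'my-service',
  frameworkVersion: '3',
  provider: { name: 'aws', runtime: 'nodejs18.x' },
  functions: {
    main: {
      handler: 'handler.main',
      // spread the vpc block in only when USE_VPC is set
      ...(useVpc ? { vpc } : {}),
    },
  },
};

module.exports = serverlessConfiguration;

The same spread pattern works for alerts, e-mail lists, or any other optional section.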

How to execute tasks dynamically in Gradle 5?

In Gradle 5, the execute() method has been removed. What is the quickest way to migrate Gradle 4 tasks? I cannot use dependsOn because execution is dynamic, based on e.g. the environmentName or another condition:
task clearData() {
    doLast {
        if (environmentName in nonProductionEnvironments) {
            clearTask1.execute()
            clearTask2.execute()
        } else {
            throw new GradleException("Not allowed to clear data in this environment.")
        }
    }
}
I'm not familiar with the execute method on tasks, but if it must be dynamic then I suggest adding a listener somewhere, depending on what you're trying to react to.
There are:
Build listeners: https://docs.gradle.org/current/javadoc/org/gradle/BuildListener.html
Task listeners: https://docs.gradle.org/current/javadoc/org/gradle/api/execution/TaskExecutionGraph.html
There are more, but I believe one of those may solve your issue. Since dependsOn does not work for you, doing whatever work you're trying to do as a Task does not sound like the right approach.
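A minimal sketch of the task-graph approach (assuming environmentName, nonProductionEnvironments, clearTask1, and clearTask2 are already defined in the build script):

// fail early, before any task runs, if clearData is scheduled in a protected environment
gradle.taskGraph.whenReady { graph ->
    if (graph.hasTask(':clearData') && !(environmentName in nonProductionEnvironments)) {
        throw new GradleException("Not allowed to clear data in this environment.")
    }
}

task clearData {
    // declare the work as ordinary task dependencies instead of calling execute()
    dependsOn clearTask1, clearTask2
}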

How to use computed properties in GitHub Actions

I am trying to rebuild my CI/CD in the new GitHub Actions YAML format; the issue is that I can't seem to use computed values as arguments in a step.
I have tried the following
- name: Download Cache
  uses: ./.github/actions/cache
  with:
    entrypoint: restore_cache
    args: --bucket=gs://[bucket secret] --key=node-modules-cache-$(checksum package.json)-node-12.7.0
However "$(checksum package.json)" is not valid as part of an argument.
Please note this has nothing to do with whether the command checksum exists; it does exist within the container.
I'm trying to copy this kind of setup in google cloud build
- name: gcr.io/$PROJECT_ID/restore_cache
  id: restore_cache_node
  args:
    - '--bucket=gs://${_CACHE_BUCKET}'
    - '--key=node-modules-cache-$(checksum package.json)-node-${_NODE_VERSION}'
I expected to be able to use computed arguments in a similar way to other CI/CD solutions.
Is there a way to do this that I am missing? Maybe being able to use 'run:' within a docker container to run some commands.
The only solution I'm aware of at the moment is to compute the value in a previous step so that you can use it in later steps.
See this answer for a method using set-output. This is the method I would recommend for passing computed values between workflow steps.
Github Actions, how to share a calculated value between job steps?
Alternatively, you can create environment variables. Computed environment variables can also be used in later steps.
How do I set an env var with a bash expression in GitHub Actions?
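A minimal sketch of the first approach (the step id and the secret name are illustrative, and this assumes the checksum command is available to the runner shell; set-output was the current mechanism at the time of writing):

- name: Compute cache key
  id: cache-key
  run: echo "::set-output name=key::node-modules-cache-$(checksum package.json)-node-12.7.0"

- name: Download Cache
  uses: ./.github/actions/cache
  with:
    entrypoint: restore_cache
    args: --bucket=gs://${{ secrets.CACHE_BUCKET }} --key=${{ steps.cache-key.outputs.key }}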

How can I access ansible variables inside an action plugin (without providing them as arguments)

I want to write an action plugin (specifically, a variation of assert) that logs the role calling the action plugin to a file, without including the role name as an argument to the plugin.
I can see (per this question) that "{{role_name}}" is a well-defined variable. But I have no idea how to access it in Python.
I don't want to have to do:
- name: example assert
  custom_assert:
    that: 1 > 0
    msg: "Basic maths has broken"
    role: "{{role_name}}"
I've tried out the following method (based on the email exchange here)
from ansible.inventory.manager import InventoryManager
from ansible import constants as C
inventory = InventoryManager(self._loader, C.DEFAULT_HOST_LIST)
return inventory.get_host(self._connection.host).vars
But all that I can access through there is some variables set in my hosts file - not the full range of variables set with "register" or "setup" or known to Ansible for other reasons (such as role_name).
(Additionally, I would like to access the task name as well - although the 'that' and 'msg' arguments nominally include all the info I need, I foresee benefits from being able to log the task name as well.)
I think that the solution provided under Custom Ansible Callback not receiving group_vars/host_vars is what you are looking for.
Basically, you need to access the play variable manager as follows:
def v2_playbook_on_play_start(self, play):
    variable_manager = play.get_variable_manager()
    hostvars = variable_manager.get_vars()['hostvars']
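For an action plugin specifically, the same variables arrive in the run() method as task_vars, so a sketch of the custom assert could look like this (the log path is illustrative):

from ansible.plugins.action import ActionBase

class ActionModule(ActionBase):
    def run(self, tmp=None, task_vars=None):
        result = super(ActionModule, self).run(tmp, task_vars)
        # role_name is present in task_vars when the task runs inside a role
        role = task_vars.get('role_name', 'no-role')
        task_name = self._task.get_name()
        with open('/tmp/custom_assert.log', 'a') as f:
            f.write('%s: %s\n' % (role, task_name))
        return result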
