How to extend Ansible playbooks to achieve complex conditions?

I understand that Ansible is primarily a configuration tool, i.e., we create configurations in YAML files and Python scripts do the execution by looking at the configuration.
There is, however, a useful `when` attribute that helps decide which configuration to apply based on the condition given in `when`, and the ordering of configurations follows the order of the tasks;
variables and facts are also available for achieving dynamic configurations.
However, my requirement involves complex loops, conditions, and recursive processing, which is either not achievable in playbooks at all or requires creating many tasks, each guarded by conditions.
A few issues I find with Ansible playbooks are:
no if/else structures
loops have very limited functionality
variable scoping does not work as in scripting languages
And the issue with recursive tasks is, for example:
start_installation.yml installs the packages defined in a variable.
A package has dependencies, and those dependencies have dependencies of their own, i.e., recursive dependencies, so installation should be done on the dependencies first by calling start_installation.yml recursively. However, this creates problems with variable scoping: if package_to_install is 'A' when start_installation.yml starts for A, and A has a dependency 'B', then package_to_install is set to 'B' when start_installation.yml starts for B. Once the installation of B is done, the installation of A cannot proceed, because the variable's scope is not local to the called file the way it would be for a called function.
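A minimal sketch of that pattern, assuming set_fact/include_tasks are used; deps_map and requested_package are hypothetical names I'm using here for the dependency dictionary and the per-invocation input:

# start_installation.yml -- invoked recursively, once per package
# (deps_map and requested_package are hypothetical names)
- name: Remember which package this invocation should install
  set_fact:
    package_to_install: "{{ requested_package }}"

- name: Recurse into each dependency first
  include_tasks: start_installation.yml
  vars:
    requested_package: "{{ dep }}"
  loop: "{{ deps_map[package_to_install] | default([]) }}"
  loop_control:
    loop_var: dep

# By this point the recursion has overwritten package_to_install,
# because set_fact is host-global rather than local to the include.
- name: Install the package itself
  package:
    name: "{{ package_to_install }}"
    state: present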
My question is mainly: is it a correct approach to use Ansible for these tasks, or do I need to use a scripting language to do the required checks?

To answer your question:
"... is it a correct approach to use Ansible for these tasks, or do I need to use a scripting language to do the required checks?"
The correct approach is to use package – the generic OS package manager module, which lets the underlying package manager resolve the dependencies for you. If this does not work, create a Minimal, Complete, and Verifiable example.
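A minimal sketch of that suggestion (packages_to_install is a hypothetical variable); the underlying apt/yum/dnf call resolves the dependency tree itself, so no recursion is needed in the playbook:

- name: Install packages via the OS package manager
  package:
    name: "{{ packages_to_install }}"   # hypothetical list variable
    state: present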

Related

What is `ac_cv_func_malloc_0_nonnull` as provided to ./configure?

I'm cross-compiling with MinGW and got this error:
undefined reference to `rpl_realloc'
After some searching I found this can be resolved as follows in configure.ac or as environment variables set prior to calling ./mingw64-configure:
ac_cv_func_malloc_0_nonnull=yes
ac_cv_func_realloc_0_nonnull=yes
What defines these macros, and is there any documentation on the subject? I couldn't find any...
What defines these macros, and is there any documentation on the subject?
Autoconf uses the ac_cv_ prefix for its "cache variables", in which it records the results of configuration tests it has performed. In the event that the same check is requested multiple times, these allow it to use the previously-determined result instead of performing the check again.
The general naming convention for these is documented in the Autoconf manual. The particular cache variable names you ask about are documented to cache the results of Autoconf's AC_FUNC_MALLOC and AC_FUNC_REALLOC macros, respectively. That documentation also speaks to the rpl_realloc name.
It is allowed to use these variables in configure.ac to programmatically determine the results of those checks, but assigning values to those variables directly is a relatively nasty hack. In this particular case, however, the error suggests that whoever prepared the autotooling for the project you're trying to build did a sloppy job of it. If fudging the cache variables gets you a successful build and a working program, then that's a tempting and much easier alternative to actually fixing the project.
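For instance, the overrides can be passed as one-off assignments on the configure command line rather than exported into the environment (a hedged example; only the two cache variable names come from the question):

# preset the two cache variables for this single configure run
ac_cv_func_malloc_0_nonnull=yes \
ac_cv_func_realloc_0_nonnull=yes \
./mingw64-configure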

Omnet++ - Accessing parameters of a different module in initialization (.ini) file and using for loop

I need to generate Poisson arrival of traffic and thus need to set the start times of applications in clients accordingly. For this I need two things:
1. access parameters of different modules and use them as input for defining a parameter of another module
2. use a for loop to define parameters of modules
For example, the snippet below demonstrates what I am trying to do.
I have 100 clients and each client has 20 applications. I want to set the start time of the first application of the first client and want to write the rest using a loop.
// iat = interArrivalTime
**.cli[0].app[0].startTime = 1 // define this
**.cli[0].app[1].startTime = <**.cli[0].app[0].startTime> + exponential(<iat>)
**.cli[0].app[2].startTime = <**.cli[0].app[1].startTime> + exponential(<iat>)
...
**.cli[n].app[m].startTime = <**.cli[n].app[m-1].startTime> + exponential(<iat>)
I looked at the 'ned' functions but could not find any solution.
Of course I can write a script to hardcode the start times of the several clients, but the script would output a huge file that is very hard to manage if the number of clients and applications is large.
Thank You!
INI files are basically pattern matchers. Each time a module is initialized, the left side of the = sign on each line in the INI file is matched against the actual module path, beginning from the start of the INI file. On the first match from the beginning, the right side of the line is used as the value of the parameter.
In short, these are not assignment operations, but rather rules telling each module how to initialize its own parameters. For example, it is undefined in what order these lines will be used during initialization; something that appears earlier in the INI file is not necessarily used earlier during module initialization. Of course, this prevents you from referring to another module's parameter. In fact, you may not use any other parameters at all.
In short, INI files are declarative, not procedural, so cross-references, loops, and other procedural constructs cannot be used there.
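As a hedged illustration of the first-match-wins rule (the key paths come from the question; the values are invented):

**.cli[0].app[0].startTime = 1s               # matched first: only cli[0]'s app[0] gets 1s
**.cli[*].app[*].startTime = exponential(1s)  # every other app falls through to this line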
If you want to create dependencies between module parameters, you can code that in the initialize() method of your module, by explicitly initializing a parameter from the C++ code. You can access any other module's parameter using C++ APIs.
Of course, if you don't want to modify existing applications, this is not an optimal solution; however, you can create a separate module that is responsible for your 'procedural' initialization, and that separate module can run through all of your applications and set the required parameters as needed. This approach is used in several places in INET where the initialization data must be computed; one notable example is the calculation of routing table information, e.g. Ipv4FlatNetworkConfigurator.
Another approach would be to set up and configure your simulation from a scripting language like Python; however, this is not (yet) supported by OMNeT++.
Long story short, write a configurator module and do your initialization there.
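A minimal sketch of such a configurator, assuming the cli[]/app[] structure from the question and hypothetical iat/numClients/numApps parameters on the configurator module itself (OMNeT++ 5.x C++ API; the module must initialize before the apps read their startTime):

#include <cstdio>
#include <omnetpp.h>
using namespace omnetpp;

class StartTimeConfigurator : public cSimpleModule
{
  protected:
    virtual void initialize() override
    {
        // iat, numClients and numApps are assumed parameters of this module;
        // cli[], app[] and startTime are the names from the question
        double iat = par("iat").doubleValue();  // mean inter-arrival time
        int numClients = par("numClients");
        int numApps = par("numApps");
        for (int n = 0; n < numClients; n++) {
            double t = 1;  // first app's start time, as in the question
            for (int m = 0; m < numApps; m++) {
                char path[64];
                // "^" = parent module; adjust the path to your network
                snprintf(path, sizeof(path), "^.cli[%d].app[%d]", n, m);
                getModuleByPath(path)->par("startTime").setDoubleValue(t);
                t += exponential(iat);  // exponential gaps => Poisson arrivals
            }
        }
    }
};

Define_Module(StartTimeConfigurator);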

Ansible role dependencies - install but don't run (yet) - how?

I want my roles to be reusable and self-contained. To be reusable, each role does a focused piece of work following the "single level of abstraction" paradigm. This leads to many small "atomic" roles, with layers of coordination roles building on them to provide more complex abstractions.
To be self-contained, each role should declare its dependencies on other roles. I don't want such dependencies to be tightly bound to the top-level playbook (e.g. by way of an omnibus playbook/roles/requirements.yml). A role should be fully responsible for managing (declaring) the roles it depends upon. This is easily done by way of abc_role/meta/main.yml with suitable entries in the "dependencies" array.
All well and good - except that Ansible Tower not only pulls external dependent roles, e.g. from a public or private (custom) repository, IT ALSO RUNS THE ROLE. The pulling of external roles is done by Tower in a pre-job that follows all the dependencies recursively and gathers them into the playbook's "roles" directory, 100% prior to launching the job template itself. This same declaration of dependencies is also used, by Ansible proper, as a sequence of "pre-tasks" for the role - but without the benefit of any coordination by the using role. This is an unfortunate conflation of the "make available for later use" functionality and the "execute certain tasks first" functionality.
One simple reason for wanting to separate these two functionalities is so that I can have a role available (i.e. it's been installed from wherever) but only execute it conditionally based on dynamic host values. (Even where it's possible to put a role's task flow into "dependencies[]" those items don't seem to honor the "when:" conditional stanza. Thus, conditionals for dependencies are off the table.)
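For illustration, the behavior I'm after would look something like this (a hedged sketch; the role name is the one from my example below, and the variable is invented):

- name: Run the already-installed sub-role only where it is needed
  include_role:
    name: subrollio
  when: host_needs_subrollio | bool   # host_needs_subrollio is an invented variable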
As per https://github.com/ansible/ansible/issues/35905 "Allow role dependencies to not be executed" - this desire of mine has some recognized value. Unfortunately, #35905's commentary / conversation offers no direct solution or mitigation (that I could find).
But I realllly want a solution with the properties that I want. Yes, I want what I want.
So I banged my head bloody, cursed my protoplasmic ancestry, and finally derived log(deps^tags) as a power set of {42} - et voila! (See my selfie answer, below.)
Simply enhance each element of abc_role/meta/main.yml ~ "dependencies:" with "tags: [never]"
This gives me exactly what I want:
roles can declare what sub-roles they depend upon and have them made available (i.e. pulled down from wherever they come from and installed)
yet NOT compel the execution of said sub-roles in declaration sequence as "pre-tasks" of the depending role.
For the "I want to see an actual example" crowd -
===== abc_role/meta/main.yml =====
galaxy_info:
  # boring stuff elided...

dependencies:
  - src: ssh://git@repos-galore.example.com/projectum/subrollio.git
    scm: git
    tags: [never]   # <<<=== This is the key bit
I have tested this and it works well.
 _______________
< AWX 6.1.0.0 >
 ---------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
Ansible 2.8.2
EDIT: Actually, this doesn't quite work in practice.
It does work as described, but when a parent-role is selected by explicit tag (e.g. from a playbook), then the child-roles listed in the parent's dependencies array are executed, despite having a "tags: [never]" element.
I'm still trying to revive my method by fiddling with the tags or manipulating them somehow, and will update this posting when I have a definitive answer. In the meantime, I wanted to clarify the (very) serious limits of the solution I found and described - and (hope springs eternal) maybe get a better answer from our community...
RE-EDIT
After re-blooding my head through various bangings on the role dependencies array with a multitude of tag combinations plus reading through some of the Ansible source, I have given up my quest.
Over a year ago, @bcoca (one of the Ansible contributors, I think) said an "install but don't execute" meta keyword would be a good option, but there's been no traction on the feature request since then. Smells like dead fish (meaning: this isn't likely to get done).
So, it's back to the (very annoying) "bloat the playbook's requirements.yml with all its transitively-required roles" approach and then just deal with the code-maintenance craziness that entails. It's a fugly way to do things, but at least it can be made to work.
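For concreteness, that fallback looks something like this in the playbook's roles/requirements.yml (a sketch reusing the repository from my example above; the transitive dependency is invented):

- src: ssh://git@repos-galore.example.com/projectum/subrollio.git
  scm: git
  name: subrollio
# ...and every role that subrollio itself depends on, maintained by hand:
- src: ssh://git@repos-galore.example.com/projectum/sub-subrollio.git   # invented transitive dep
  scm: git
  name: sub-subrollio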

What is going on in this make file line?

OK, so I'm getting into kernel module development, and the guides all pretty much use the same basic makefile that contains this line:
make -C /lib/modules/`uname -r`/build M=$(PWD) modules
So my questions are:
Why is a makefile calling make? That seems recursive.
What is the M for? I can't find a -M flag for make in any of the man pages.
Recursive use of make is a common technique for introducing modularity into your build process. For example, in your particular case, you could support a new architecture by putting the relevant component in a folder whose name matches the uname -r output for that architecture, and you wouldn't have to change the master makefile at all. Another example, if you make one component modular, it makes it much easier to reuse in another project without making large changes to the new project's master makefile.
Just like it can be helpful to separate your code into files, modules, and classes (the latter for languages other than C, obviously), it can be helpful to separate your build process into separate modules. It's just a form of organization to make managing your projects easier. You might group related functionality into separate libraries, or plugins, and build them separately. Different individuals or teams could work on the separate components without all of them needing write access to the master makefile. You might want to build your components separately so that you can test them separately.
It's not impossible to do all of these things without recursive use of make, of course, but it's one common way of organizing things. Even if you don't use make recursively, you're still going to end up with a bunch of different component "makefiles" on a large project - they'll just be imported or included into the master makefile, rather than standing alone by themselves and being run via separate invocations of make.
Creating and maintaining a single makefile for a very large project is not a trivial matter. However, as the article Recursive make considered harmful describes, recursive use of make is not without its own problems, either.
As for your M, that's just overriding a variable at the command line. Somewhere in the makefile(s) the variable M will be used, and if you specify its value at the command line in this way, then the value you specify will override any other assignments to that variable that may occur in the makefile(s).
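A tiny sketch of that behavior (makefile contents invented; note that recipe lines must begin with a tab):

# Makefile
M = /default/path        # overridden by 'make M=...' on the command line
modules:
	@echo "building modules in $(M)"

# $ make modules                 -> building modules in /default/path
# $ make M=/tmp/mymod modules    -> building modules in /tmp/mymod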

How to avoid implicit include statements in Rhapsody code generation

I'm creating code for interfaces specified in IBM Rational Rhapsody. Rhapsody implicitly generates include statements for other data types used in my interfaces. But I would like to have more control over the include statements, so I specify them explicitly as text elements in the source artifacts of the component. Therefore I would like to prevent Rhapsody from generating the include statements itself. Is this possible?
If this can be done, it is most likely with Properties. In the Features dialog, click on Properties and filter by 'include' to see some likely candidates. Not all of the properties have descriptions of what exactly they do, so good luck.
EDIT:
I spent some time looking through the properties as well and could not find any that do what you want. It seems likely that you cannot do this with the basic version of Rhapsody. IBM does license an add-on to customize the code generation, called Rules Composer (I think); this would almost certainly allow you to customize the includes, but at quite a cost.
There are two other possible approaches. Depending on how you are customizing the include statements, you may be able to write a simple shell script, perhaps using sed, and then just run that script to update your code every time Rhapsody generates it.
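Something along these lines could strip the unwanted includes after each generation (a hedged sketch; the directory and header name are placeholders for whatever Rhapsody actually emits):

# delete the implicitly generated include from every generated header
find generated/ -name '*.h' -exec sed -i '/^#include "SomeImplicitHeader\.h"/d' {} +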
The other approach would be to use the Rhapsody API to create a plugin/tool that iterates through all the interfaces and changes the source artifacts accordingly. I have not tried this method myself but I know my coworkers have used the API to do similar things.
Finally I found the properties that let Rhapsody produce the required output: GenerateImplicitDependencies for several elements and GenerateDeclarationDependency for Type elements. Disabling these will avoid the generation of implicit include statements.
