I want my roles to be reusable and self-contained. To be reusable, each role does a focused piece of work following the "single level of abstraction" paradigm. This leads to many small "atomic" roles, with layers of coordination roles building on them to provide more complex abstractions.
To be self-contained, each role should declare its dependencies on other roles. I don't want such dependencies to be tightly bound to the top-level playbook (e.g. by way of an omnibus playbook/roles/requirements.yml). A role should be fully responsible for managing (declaring) the roles it depends upon. This is easily done by way of abc_role/meta/main.yml with suitable entries in the "dependencies" array.
All well and good - except that Ansible Tower not only pulls external dependent roles (e.g. from a public or private custom repository), IT ALSO RUNS THEM. The "pulling of external roles" happens in a Tower pre-job that follows all the dependencies recursively and gathers them into the playbook's "roles" directory, entirely before the job template itself is launched. The same declaration of dependencies is also used, by Ansible proper, as a sequence of "pre-tasks" for the role - but without the benefit of any coordination by the using role. This is an unfortunate conflation of the "make available for later use" functionality and the "execute certain tasks first" functionality.
One simple reason for wanting to separate these two functionalities is so that I can have a role available (i.e. installed from wherever it lives) but only execute it conditionally, based on dynamic host values. (Even where it's possible to put a role's task flow into "dependencies[]", those items don't seem to honor the "when:" conditional stanza. Thus, conditionals for dependencies are off the table.)
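To make the goal concrete, here is a hedged sketch of the kind of conditional execution I'm after (the role name subrollio is borrowed from my example below; the fact test is purely illustrative):

```yaml
# The sub-role has already been installed by Tower's pre-job; we decide
# at runtime, per host, whether to actually execute it.
- hosts: app_servers
  tasks:
    - name: Run subrollio only where a dynamic host value warrants it
      include_role:
        name: subrollio
      when: ansible_facts['os_family'] == 'RedHat'
```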
As per https://github.com/ansible/ansible/issues/35905 "Allow role dependencies to not be executed" - this desire of mine has some recognized value. Unfortunately, #35905's commentary / conversation offers no direct solution or mitigation (that I could find).
But I realllly want a solution with the properties that I want. Yes, I want what I want.
So I banged my head bloody, cursed my protoplasmic ancestry, and finally derived log(deps^tags) as a power set of {42} - et voila! (See my selfie answer, below.)
Simply enhance each element of the "dependencies:" array in abc_role/meta/main.yml with "tags: [never]".
This gives me exactly what I want:
- roles can declare which sub-roles they depend upon and have them made available (i.e. pulled down from wherever they come from and installed),
- yet NOT compel the execution of said sub-roles in declaration sequence as "pre-tasks" of the depending role.
For the "I want to see an actual example" crowd -
===== abc_role/meta/main.yml =====
galaxy_info:
  # boring stuff elided...

dependencies:
  - src: ssh://git@repos-galore.example.com/projectum/subrollio.git
    scm: git
    tags: [never]  # <<<=== This is the key bit
I have tested this and it works well.
 _______________
< AWX 6.1.0.0 >
 ---------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
Ansible 2.8.2
EDIT: Actually, this doesn't quite work in practice.
It does work as described, but when a parent-role is selected by explicit tag (e.g. from a playbook), then the child-roles listed in the parent's dependencies array are executed, despite having a "tags: [never]" element.
I'm still trying to revive my method by fiddling with the tags or manipulating them somehow, and will update this posting when I have a definitive answer. In the meantime, I wanted to clarify the (very) serious limits of the solution I found and described - and (hope springs eternal) maybe get a better answer from the community...
RE-EDIT
After re-blooding my head through various bangings on the role dependencies array with a multitude of tag combinations plus reading through some of the Ansible source, I have given up my quest.
Over a year ago, @bcoca (one of the Ansible contributors, I think) said an "install but don't execute" meta keyword would be a good option, but there's been no traction on the feature request since then. Smells like dead fish (meaning: this isn't likely to get done).
So, it's back to the (very annoying) "bloat the playbook's requirements.yml with all its transitively-required roles" approach and then just deal with the code-maintenance craziness that entails. It's a fugly way to do things, but at least it can be made to work.
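For completeness, the fallback means hand-listing every transitively-required role in the playbook's own requirements file. A hedged sketch (the repo URLs and role names are illustrative, echoing my earlier example):

```yaml
# playbook/roles/requirements.yml - the "omnibus" fallback: every role
# that ANY role in the dependency tree needs must be listed here by hand,
# and kept in sync as the roles evolve.
- src: ssh://git@repos-galore.example.com/projectum/subrollio.git
  scm: git
  name: subrollio
- src: ssh://git@repos-galore.example.com/projectum/another-dep.git
  scm: git
  name: another_dep
```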
I understand that Ansible is limited to being a configuration tool, i.e., we describe configurations in YAML files and Python code performs the execution by reading that configuration.
There is, however, a useful attribute, when, that helps decide which configuration to apply based on the condition it states; the order of configuration follows the order of the tasks, and variables and facts are available for achieving dynamic configurations.
However, my requirement involves complex loops, conditions, and recursive processing, which is either not achievable in playbooks at all or requires multiple tasks to be created with the conditions.
A few issues I find in using Ansible playbooks are:
- no if/else structures
- loops have very limited functionality
- variable scopes do not work like in scripting languages
And the issue with recursive tasks is, for example:
start_installation.yml installs the packages defined in a variable.
A package has dependencies, and the dependencies have their own dependencies, i.e., recursive dependencies; installation should be done on the dependencies first, recursively, by calling start_installation.yml. However, this creates problems with variable scoping: if package_to_install is 'A' when start_installation.yml starts for A, and A has dependency 'B', then package_to_install is set to 'B' when start_installation.yml starts for B. Once the installation of B is done, it can't go on to install A, because the variable scope is not local to the called file the way it would be in a function call.
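To illustrate, one hedged workaround sketch (the dependencies dict is an assumption; the variable and file names come from the question) passes the value as an include-scoped variable, so each recursion level keeps its own copy instead of clobbering the caller's:

```yaml
# start_installation.yml - sketch only; `dependencies` is assumed to be
# a dict mapping a package name to the list of packages it needs first.
- name: Install dependencies of {{ package_to_install }} first
  include_tasks: start_installation.yml
  loop: "{{ dependencies[package_to_install] | default([]) }}"
  loop_control:
    loop_var: dep
  vars:
    # Scoped to this include; the caller's package_to_install is untouched.
    package_to_install: "{{ dep }}"

- name: Install {{ package_to_install }}
  package:
    name: "{{ package_to_install }}"
    state: present
```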
My question is mainly if it is a correct approach to use Ansible in doing these tasks or do I need to use a scripting language to do the required checks?
To answer your question:
" ... it is a correct approach to use Ansible in doing these tasks or do I need to use a scripting language to do the required checks?
The correct approach is to use package - the generic OS package manager module - since the underlying package manager already resolves recursive dependencies for you. If this does not work, create a Minimal, Complete, and Verifiable example.
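For instance, a minimal sketch (the package names are illustrative):

```yaml
- hosts: all
  become: true
  tasks:
    - name: Let the OS package manager resolve dependencies recursively
      package:
        name:
          - git
          - curl
        state: present
```

The module delegates to the host's native package manager (yum, apt, ...), so the recursive dependency resolution described in the question happens there, not in the playbook.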
I'm creating code for interfaces specified in IBM Rational Rhapsody. Rhapsody implicitly generates include statements for other data types used in my interfaces. But I would like to have more control over the include statements, so I specify them explicitly as text elements in the source artifacts of the component. Therefore I would like to prevent Rhapsody from generating the include statements itself. Is this possible?
If this can be done, it is most likely with Properties. In the Features dialog click on Properties and filter by 'include' to see some likely candidates. Not all of the properties have descriptions of what exactly they do, so good luck.
EDIT:
I spent some time looking through the properties as well and could not find any that do what you want. It seems likely you cannot do this with the basic version of Rhapsody. IBM does license an add-on to customize the code generation, called Rules Composer (I think); this would almost certainly allow you to customize the includes, but at quite a cost.
There are two other possible approaches. Depending on how you are customizing the include statements you may be able to write a simple shell script, perhaps using sed, and then just run that script to update your code every time Rhapsody generates it.
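A hedged sketch of that post-processing idea (the "Auto_" header prefix is purely an assumption about what the generated includes look like; adjust the pattern to whatever Rhapsody actually emits):

```shell
#!/bin/sh
# Strip includes of implicitly generated headers (prefix is hypothetical)
# from one generated source file, printing the cleaned file to stdout.
strip_auto_includes() {
  sed '/#include "Auto_/d' "$1"
}
```

Run it over every generated file after each code generation, e.g. `for f in generated/*.h; do strip_auto_includes "$f" > "$f.tmp" && mv "$f.tmp" "$f"; done`.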
The other approach would be to use the Rhapsody API to create a plugin/tool that iterates through all the interfaces and changes the source artifacts accordingly. I have not tried this method myself but I know my coworkers have used the API to do similar things.
Finally, I found the properties that let Rhapsody produce the required output: GenerateImplicitDependencies (on several elements) and GenerateDeclarationDependency (on Type elements). Disabling these avoids the generation of implicit include statements.
I am designing a new YAML file, and I want to use the most standard style of naming. Which is it?
Hyphenated?
- job-name:
...
lower_case_with_underscores?
- job_name:
...
CamelCase?
- jobName:
...
Use the standard dictated by the surrounding software.
For example, in my current project the YAML file contains default values for Python attributes. Since the names used in YAML appear in the associated Python API, it is clear that on this particular project, the YAML names should obey the Python lower_case_with_underscores naming convention per PEP-8.
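For example, on that Python-backed project the YAML might look like this (the keys are illustrative):

```yaml
# Keys mirror the Python attribute names they map onto (PEP-8 style).
job_defaults:
  job_name: nightly-build
  max_retries: 3
  timeout_seconds: 600
```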
My next project might have a different prevailing naming convention, in which case I will use that in the associated YAML files.
Kubernetes uses camelCase: https://kubernetes.io/docs/user-guide/jobs/
apiVersion, restartPolicy
CircleCI uses snake_case: https://circleci.com/docs/1.0/configuration/
working_directory, restore_cache, store_artifacts
Jenkins uses dash-case: https://github.com/jenkinsci/yaml-project-plugin/blob/master/samples/google-cloud-storage/.jenkins.yaml
stapler-class
So it looks like projects and teams use their own conventions and there is no single definitive standard.
A less popular opinion derived from years of experience:
TL;DR
Obviously stick to a convention, but IMHO follow the one established in your project's YAML files, not the one that comes with the dependencies. I dare say the naming convention depends on too many factors to give a definitive answer, or even to describe a good practice other than "have some".
Full answer
Libraries might change over time, which leads to multiple naming conventions in one config more often than any sane programmer would like. You can't do much about it, unless you want to introduce (and later maintain) a whole new abstraction layer dedicated to just that: keeping the parameter naming convention pristine.
One example of why you would want a naming convention in your configs different from the one that came with the dependencies is searchability: e.g., if all dependencies use a parameter named request_id, naming yours request-id or requestId makes it distinct and easily searchable while not hurting how descriptive the name is.
Also, it sometimes makes sense to have multiple parameters with the same name nested in different namespaces. In that case it might be justified to invent a whole new naming convention based on some existing ones, e.g.:
order.request-id.format and
notification.request-id.format
While it probably isn't necessary for your IDE to differentiate between the two (as it's able to index parameters within the namespace), you might consider doing so anyway as a courtesy for your peers - not only other developers who could use different IDEs, but especially DevOps and admins, who usually use less specialized tools during maintenance, migrations and deployment.
Finally, another good point raised by one of my colleagues is that distinctive parameter names can easily be converted into a different convention with something as simple as a one-line awk or sed command. Doing it the other way around is obviously possible, but an order of magnitude more complicated - which often spawns debates among KISS advocates about what it really means to "keep it simple, stupid".
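As a hedged illustration (the parameter name is just the example from above), the distinctive-to-other-convention direction really is a one-liner:

```shell
#!/bin/sh
# Convert the distinctive dash-case parameter to camelCase on stdin.
to_camel() {
  sed 's/request-id/requestId/g'
}
```

Going the other way - telling your parameter apart from a dependency's identically-named one after the conventions have been merged - has no such mechanical shortcut.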
The conclusion is: do what's most sensible to you and your team.
I have about 30 projects, and 6 of them must have a special (but identical) build process. All of these projects inherit from a single parent.
I have defined the special build process in parent. It includes several plugins and lots of configuration.
The inheritance structure is like this:

- global-parent (this is where the special profile is defined)
  - a-parent
    - a-ear
    - a-war
    - a-ejb
    - a-special <--
  - b-parent
    - b-ear
    - b-war
    - b-ejb
    - b-special <--
  - c-parent
    - c-ear
    - c-war
    - c-ejb
    - c-special <--
  - etc...
So I cannot make those special projects inherit another pom.
How can I set "a flag" in those special projects' pom.xml so they always build against the special profile?
For now I've used profile/activation/file/exists and created a special empty marker file in each special project, but this is ugly.
I've also tried to use the maven-properties-plugin to set a system property as a flag, but it is still ugly.
There must be a more elegant way. Is this a bad design?
The standard way of doing this is with two levels of parent projects. I've done this with a "global-parent" and a "webapps-parent" with common configuration / profiles just for the webapp components. However, I observe that you need something like "multiple inheritance" which doesn't quite exist in Maven.
Otherwise, the "file exists" activation is acceptable, in my opinion.
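For reference, the file-based activation being endorsed looks roughly like this in global-parent's pom.xml (the marker filename is an assumption):

```xml
<profile>
  <id>special-build</id>
  <activation>
    <file>
      <!-- Each special module carries this empty marker file -->
      <exists>${basedir}/.special-build</exists>
    </file>
  </activation>
  <build>
    <!-- plugins and configuration for the special build go here -->
  </build>
</profile>
```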
Addendum
Without knowing what's exactly "special" about the special modules, it's hard to answer the somewhat subjective question "is this bad design?"
Perhaps a custom plugin that encapsulates all the other plugins would be "better" or "more Maven-ish" but at the end of the day - is your build maintainable and easy to run (i.e. svn co project; cd project; mvn package)?
If so, you've achieved your goals.
We're doing a big project on OSGi and adding some commons modules. There's some discussion about naming the artifact.
So, one possibility when naming the module is, for example, cmns-definitions (for common definitions); another is cmns-definition; still another is cmns-def. This also has some effect on the package name. Now it's xx.xxx.xxx.xxx.xxx.commons.definitions; if changed to cmns-def it would be xx.xxx.xxx.xxx.xxx.commons.def.
Inside this package will be classes like enums and other definitions to be used throughout the system.
I personally lean to cmns-definitions since there's not only 1 definition inside the package. Other people point out that java.util doesn't have only 1 utility there for example. Still, java.util is an abbreviation for me. It can mean java utility or java utilities. Same thing happens with commons-lang.
How would you name the package? Why would you choose this name?
cmns-definitions
cmns-definition
cmns-def
Bonus question: How to name something like cmns-exceptions? That's how I name it. Would you name it cmns-xcpt?
EDIT:
I'm throwing in my own thoughts on this in the hope of being either confirmed or contradicted. If you can, please do.
According to what I think, the underlying reason you name something is to make it easier to understand what's inside it. Or, according to Peter Kriens, to make it easy to remember and to enable automating processes via patterns. Both are valid arguments.
My reasoning is as follows in terms of pattern:
1) When a substantivation occurs and it's well known in the industry, follow it in your naming.
E.g.:
"features" is a case on this. We have a module called cmns-features. Does this mean we have many features on this module? No. It means "the module that implements the "features" file from Apache karaf".
"commons" is a substantivation of "common" well-accepted on the industry. It doesn't mean "many common". It means "Common code".
If I see extr-commons as a module name, I know that it contains common code for extr (in this case extraction), for example.
2) When a quantity of classes inside the module cooperate to give a distinct "one and only one" meaning to the whole, use the singular form to name it.
The majority of modules fall in here. If I name something cmns-persistence-jpa, I mean that the classes inside cooperate to provide the JPA implementation of cmns-persistence-api. I don't expect 2 implementations inside it, but rather a myriad of classes that together make one implementation. Crystal clear to me. No?
3) When a grouping of classes is done with the sole purpose of gathering classes by affinity, but the classes don't cooperate toward a single purpose, use the plural.
Here is the case for example of cmns-definitions (enums used by the whole system).
Alternatively, using an abbreviation circumvents the problem, e.g. cmns-def, which can be "expanded" by a human reader to cmns-definitions. Many people also use "xxxx-util" meaning xxxx-utilities.
Still a third option is to pack things together using a name that itself implies a plurality. The word "api" comes to mind, but any word that groups things would do, like "pack".
Supporting these cases (3) are well-known modules like commons-collections (using the plural), commons-dbcp (using an abbreviation), commons-lang (again an abbreviation), and anything that uses "api" to pack classes together by affinity.
From Apache:
- commons-collections -> many powerful data structures that accelerate development of most significant Java applications
- commons-lang -> host of helper utilities for the java.lang API
- commons-dbcp -> package of several database connection pools
'it is just a name ...'
I find in my long career that these "just names" can make a tremendous difference in productivity. I do not think it makes a difference whether you use definitions, definition, or def, as long as you're consistent and use patterns in the name that are easy to remember and can be used to automate processes. A build based on a consistent naming scheme is infinitely easier to work with than a build with "nice human display" names that are ad hoc and have no discernible pattern.
If you use patterns, names tend to become shorter. Now, people working with these names usually spend a lot of time with them, so their readability is not nearly as important as their mnemonic value. It turns out that abbreviations of 3 or 4 characters are surprisingly powerful. One of the reasons they work well is that there is only one possible abbreviation, while if you go longer there are many candidates.
Anyway, the most important part is overall consistency. Good luck.
definitions (or def or definition) is a bad name because it doesn't convey any semantics to a reader. You're in an object-oriented world (I suppose) - try to follow its conventions and principles. Modules in Maven should be named after the biggest "abstraction" they contain. "Definition" is a form, not a meaning.
Your question is similar to: "Which class name is better FileUtilities or FileUtils". Answer: none.
Basically, what you do with the definitions and exceptions is provide a kind of API for your other modules. So I propose combining the definitions and exceptions, adding the interfaces, and calling it all cmns-api. I normally prefer singular names as they are shorter, but you are free to decide, as it is just a name.