Do scheduled jobs in Tower use the extra vars set in the original template the job was based on?
Scheduled jobs have their own extra vars too, but it seems like you'd still want to use the extra vars set in the template, and I'd rather not have to duplicate them: a change would then require updating the template and every related scheduled job. It seems like extra vars could be set on a scheduled job and would take precedence over any variable in the template.
According to the Ansible Job Templates documentation, the answer appears to be yes, unless they are overridden:
Prompt for Extra Variables: If this is checked, the user is prompted for Extra Variables at job execution. The set of extra variables defaults to any Extra Variables already configured for the job template.
Also from the same document:
... passing extra variables to a job template (as you would do with a survey) can override other variables being passed from the inventory and project.
And from the Variables documentation:
Inside a template you automatically have access to all of the variables that are in scope for a host.
So if you don't prompt for user-specified variables, or otherwise override the extra vars already set, Ansible will use any extra variables that are currently set.
Tower does use the extra vars set in the underlying template, and if any extra vars are set on a scheduled job, they take precedence. Having used it a bit now, this is the behavior I'd expect.
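For intuition only (my analogy, not wording from the Tower docs): the precedence behaves like stacking --extra-vars on the ansible-playbook command line, where the later definition wins.

# Hypothetical playbook name; the first -e stands in for the template's extra vars,
# the second for the scheduled job's extra vars. The later value (2.0) wins.
ansible-playbook site.yml -e "app_version=1.0" -e "app_version=2.0"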
Related
I have the following structure:
playbook/main.yml:

- role: my-role
  vars:
    src_path: "{{ src_path_from_inventory }}"

my-role/defaults/main.yml:

src_path: /opt/src
So I'm wondering if there is any way to fall back to my role default when src_path_from_inventory is not defined in my inventory.
It seems the role default is only used if I don't specify src_path at all in the vars section of my playbook.
I tried default(omit), but it doesn't fall back to the role default; it just treats my variable as empty instead of undefined.
I tried default('') as well, but that doesn't fall back to the role default either.
EDIT (more detail):
I want to do something like this because we are deploying a big system (a reverse proxy, several frontend applications and many backend services) and we also have a lot of clients. To keep the inventory light and clean, we don't want to put anything in it that we can avoid, mainly in these two ways:
Using the same inventory value to feed many role variables, instead of having multiple variables in the inventory.
Using (or trying to use) role defaults when the default value is fine 90% of the time, instead of defining the variable in the inventory.
To go deeper into my example:
Reverse Proxy
rp_order_app_src_path: "{{ order_app_src_path_from_inventory }}"
Order App
order_app_src_path: "{{ order_app_src_path_from_inventory }}"
I want to avoid defining rp_order_app_src_path and order_app_src_path in my inventory.
I don't even want to define order_app_src_path_from_inventory 90% of the time.
I could certainly use the default() filter in my playbook, but then I would lose the default value for my most generic roles (like authentication), which other playbooks in the company can use, and it would force us to hardcode the default in each playbook. It seems to me that role defaults were made exactly for this.
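For what it's worth, one pattern that can achieve this (a sketch under my own assumptions, not something stated in the post) is to put the fallback inside the role's defaults file itself; role defaults are templated when they are used, so a default can reference an inventory variable and fall back when it is undefined:

# my-role/defaults/main.yml
# Falls back to /opt/src when src_path_from_inventory is not defined anywhere,
# so the playbook no longer needs a vars: block for src_path at all.
src_path: "{{ src_path_from_inventory | default('/opt/src') }}"

The playbook would then simply list the role without overriding src_path.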
I have a console task run through:
$schedule->command('process:job')
    ->cron('* * * * *')
    ->withoutOverlapping();
The task runs, it can invoke different services, and everything is fine. However, I have one specific task invoking a different class where the configuration is not loaded.
For specific reasons I want to read my configuration from $_ENV (it lets me iterate over key/value pairs and process certain keys based on a pattern). But here $_ENV remains empty, although I can still read configuration through config() or env().
This never happens through HTTP calls nor through manual command-line calls (I haven't been able to work out the difference between the scheduler invocation and a direct command-line invocation).
Laravel 5.6
EDIT: I'm keeping this question here because I didn't manage to find the existing relevant one: Why is my $_ENV empty?
Found my solution here: Why is my $_ENV empty?
Basically, $_ENV is not populated systematically but only if the E flag is present in your variables_order ini setting. So if you stumble on the same problem, I suggest a quick check:
var_dump(ini_get('variables_order'));
The fix is then simply to update your ini file.
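For reference, the relevant php.ini line might look like this (EGPCS is just the full set of flags; the key point is that E is present):

; php.ini -- include E so that $_ENV gets populated
variables_order = "EGPCS"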
I am having trouble getting the right variable based on a Role.
Perhaps I already have the answer, but I am not sure, and I could not find it in the documentation or in the other questions here.
TL;DR:
Do multiple roles on a variable scope act as an OR rather than an AND?
Intro
In Infrastructure I have multiple roles assigned to a machine:
WebServer
ApplicationServer
ApplA
ApplB
A variable has two values; each value is scoped to the roles WebServer and ApplA (or ApplB).
In the Process, the same combination of roles, WebServer and ApplA (or WebServer and ApplB), is used.
Problem
The variable value scoped to ApplB is used in the step scoped to ApplA.
It seems that this is because an OR is used between the roles, not an AND.
Correct?
That's right. If multiple roles are applied in the scope, the variable will have that value for each of those roles individually, not for the combination. You can combine scopes of different types (like DEV and ApplA) but not of the same type.
This section from the docs has a little more information on scope precedence and what happens if there are conflicting values.
In your deployment process, the "Deploy site" steps will run for all targets that have WebServer or ApplA. That might not be what you want.
In this case, consider dropping the WebServer role for the purposes of scoping the variables and the deployment steps, or combining it with your other roles to make them a little more specific. Instead of WebServer, ApplA, and ApplB, you could use ApplA-Web and ApplB-Web in your steps and variables.
I hope that helps!
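As a rough sketch of what that could look like (the machine, variable, and value names here are mine, purely for illustration):

Machine A roles:  ApplA-Web, ApplicationServer
Machine B roles:  ApplB-Web, ApplicationServer

Variable SitePath:
  /sites/appl-a   scoped to role ApplA-Web
  /sites/appl-b   scoped to role ApplB-Web

"Deploy site" step for App A -> scoped to role ApplA-Web only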
I'm using the $PMTargetName#numAffectedRow variable, but the target name is a parameter (set in a parameter file).
I'm trying it this way:
$PM$$SOURCE_TABLE#NumAffectedRows
It's not working :/
What you need to use here is the name of the Target Transformation, not the table name. So assuming you've got a Target Transformation named MyTargetTable and you use the Target Table Name property to set the actual table name to e.g. Customers, then:
$PMMyTargetTable#TableName should give you Customers,
and $PMMyTargetTable#NumAffectedRows should get you what you're looking for.
The variables used in pre/post-session commands need to be passed to the session from the parameter file, e.g. $PMTargetName should be used in your session, for instance as the Target Table Name. If you are doing this, then this will work: ${PMTargetName}#numAffectedRow. Adding the curly braces ensures your variable is expanded before #numAffectedRow is appended to it.
If you are not using $PMTargetName anywhere in your session, then the Integration Service will not expand it. You should declare it as a workflow variable, and since you have already defined it in the parameter file, the rest should work.
In my Oozie job.properties file I have set a parameter called "jobname". I have a fork running three shell actions. In all three shell actions I want to assign a new value to the workflow property "jobname". How can I do this?
Try setting three different parameters in job.properties (jobname1, jobname2, jobname3), one for each of the three shell actions.
Another option is to use Oozie's String EL functions to manipulate the "jobname" value at runtime.
For example, use the concat() EL function, e.g. ${concat(jobname, "first")}, to append an identifier to the jobname that differentiates each action.
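A rough sketch of what the second option could look like inside one of the three shell actions (the script name, transition targets, and the "-a" suffix are placeholders of mine, not from the question):

<action name="shell-a">
    <shell xmlns="uri:oozie:shell-action:0.2">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <exec>my_script.sh</exec>
        <!-- each of the three actions appends its own suffix to the jobname parameter -->
        <argument>${concat(jobname, "-a")}</argument>
        <file>my_script.sh</file>
    </shell>
    <ok to="join-actions"/>
    <error to="fail"/>
</action>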