Ansible Inventory plugins: the verify_file method of InventoryModule base class receives the input string modified with a path-like prefix - ansible

I have implemented an Inventory plugin according to the guide on developing dynamic inventory.
On invoking my plugin via the CLI, I am experiencing an issue with the way the inventory source is handled from the command line to the verify_file() method. In particular, I am passing a simple string to the command line but the verify_file() method is observed to receive not just that string itself, but actually a concatenation of the string plus my user home path (i.e. the path that is equivalent to "~"). For example, an input of "input_string" from the command line will provide "/home/{user}/input_string" to the verify_file() method.
This behaviour is unexpected, since the built-in host_list inventory plugin, like my custom plugin, expects a simple string (not a path), and it also appears to contain no logic for handling such a path-like input.
Clearly I could make my plugin work by adding some extra logic to handle what is occurring (see the sketch after the repro below). But I would like to understand if/how I can prevent the modification of the input from happening at all - as already seems to be the case for the built-in host_list plugin.
Here is the command I am issuing to the CLI:
ansible-inventory -i 'myUniqueInputString' --list --output ansible-inventory -vvvv
Here is an excerpt of a minimal repro:
import os

from ansible.module_utils.common.text.converters import to_bytes
from ansible.plugins.inventory import BaseInventoryPlugin


class InventoryModule(BaseInventoryPlugin):

    NAME = 'my_plugin'

    def __init__(self):
        super(InventoryModule, self).__init__()
        self.foo_stuff = {}

    def verify_file(self, input_string):
        """Check that the input value is the same as was passed via the CLI."""
        valid = False
        b_path = to_bytes(input_string, errors='surrogate_or_strict')
        if not os.path.exists(b_path) and ',' not in input_string:
            try:
                assert input_string == "myUniqueInputString"
                valid = True
            except AssertionError:
                print(f"Received this: {input_string} but was expecting this: myUniqueInputString")
        return valid
This is the relevant line from the stdout:
Received this: /home/testuser/myUniqueInputString but was expecting this: myUniqueInputString
The key evidence is the input string of "myUniqueInputString" providing "/home/testuser/myUniqueInputString" to the function. How to stop this modification of the input?
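For completeness, the workaround I alluded to would look something like the sketch below (minimal and untested, assuming the unwanted prefix is always a directory path prepended to my string). From a quick look at Ansible's InventoryManager, it also appears that any inventory source not containing a comma gets expanded to an absolute path, which would explain why host_list is unaffected: the sources it accepts always contain a comma. Still, I would prefer to prevent the modification rather than strip it off:
def verify_file(self, input_string):
    """Workaround sketch: compare only the final path component."""
    # os.path.basename() strips whatever directory prefix was
    # prepended on the way in (e.g. /home/testuser/).
    candidate = os.path.basename(input_string)
    return candidate == "myUniqueInputString" and ',' not in input_string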

Related

How to pass arguments / parameters to the npm-script using npmExecuteScripts?

I'm trying to run the npm-script from the Jenkins pipeline via the SAP Project Piper's npmExecuteScripts:
npmExecuteScripts:
  runScripts: ["testScript"]
That works! Now, I want to pass some arguments to my script.
According to the Project Piper documentation, there is a property scriptOptions, which takes care of passing arguments to the called script:
Options are passed to all runScripts calls separated by a --. ./piper npmExecuteScripts --runScripts ci-e2e --scriptOptions --tag1 will correspond to npm run ci-e2e -- --tag1
Unfortunately, I can't figure out what is the proper syntax for that command.
I've tried several combinations of using scriptOptions, e.g.:
scriptOptions: ["myArg"]
scriptOptions: ["myArg=myVal"]
and many others, but still no desired outcome!
How can I call an npm-script and pass arguments / parameters to the script using the Project Piper's npmExecuteScripts?
To solve the issue, it's important to bear in mind that, in contrast to the regular argument-value mapping via the npm_config_ prefix, SAP Project Piper's scriptOptions doesn't perform any mapping: it passes an array of argument-value pairs «as is», and that array can then be picked up via process.argv.
The Jenkins pipeline configuration:
npmExecuteScripts:
runScripts: ["testScript"]
scriptOptions: ["arg1=Val1", "arg2=Val2"]
package.json:
"scripts": {
"testScript": "node ./testScript.mjs"
}
The server-side script:
/**
 * @param {Array.<String>} args - array of console input arguments to be parsed
 */
const testScript = function testScript(args) {…};
testScript(process.argv.slice(2));
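For illustration (my own paraphrase of the documentation quoted above, not a verified command), the pipeline configuration should boil down to roughly:
npm run testScript -- arg1=Val1 arg2=Val2
# which npm expands to:
node ./testScript.mjs arg1=Val1 arg2=Val2
# so process.argv.slice(2) yields ['arg1=Val1', 'arg2=Val2']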
P.S. Just to compare, the regular way to pass an argument's value to the npm-script looks like:
npm run testScript --arg=Val
and the server-side script:
"testScript": "echo \"*** My argument's value: ${npm_config_arg} ***\""
The output:
*** My argument's value: Val ***
The npm-script engine performs an argument-value mapping under the hood by using the npm_config_-prefix.

inspec - i want to output structured data to be parsed by another function

I have an inspec test; this is great:
inspec exec scratchpad/profiles/forum_profile --reporter yaml
Trouble is, I want to run this from a script and capture the output in an array.
I cannot find the documentation that indicates which method I need to use to get the same output.
I do this
require 'inspec'

def my_func
  http_checker = Inspec::Runner.new()
  http_checker.add_target('scratchpad/profiles/forum_profile')
  http_checker.run
  puts http_checker.report
end
So the report method seems to give me loads of the equivalent data and much more - does anyone have any documentation or advice on returning the same output as the --reporter yaml response, but from a script? I want to parse the response so I can share the output with another function.
I've never touched inspec, so take the following with a grain of salt, but according to https://github.com/inspec/inspec/blob/master/lib/inspec/runner.rb#L140, you can provide a reporter option while instantiating the runner. Looking at https://github.com/inspec/inspec/blob/master/lib/inspec/reporters.rb#L11, I think it should be something like ["yaml", {}]. So, could you please try
# ...
http_checker = Inspec::Runner.new(reporter: ["yaml", {}])
# ...
(chances are it will give you the desired output)

Fail to use yaml reference in ansible inventory plugin

I would like to use this config with an inventory plugin
# test_inventory_xxx.yml
plugin: cloudscale # or openstack or ...
inventory_hostname: &inventory_hostname_value uuid
compose:
  setting_of_inventory_hostname: *inventory_hostname_value
I get no error, but the value is not set. And it is valid YAML (at least neither my checker nor I can see an error).
So I decided to simplify it by using the constructed plugin, which is standard:
# inventory_constructed.yaml
plugin: constructed
# add variables to existing inventory
keyed_groups:
  - key: from_inventory
    prefix: inventory
    parent_group: &common_parent_group test_group_1
compose:
  var_from_constructed: 1233456789
  also_from_constr: "'also'" # must be in quotes 2x!
  some_from_constr: &ref1 1234567777
  ref_from_constr: *ref1 # this works fine
  ref_to_test: *common_parent_group # <--- this line returns an error
strict: yes
Now I get the error: Could not set ref_to_test for host my_host: 'test_group_1' is undefined
But it passes when I comment out the marked line (the ref &common_parent_group is still defined, just no longer used via *common_parent_group). Why is test_group_1 undefined in one case, but not in the other?
How to reproduce: ansible all -i some_of/your_inventory -i inventory_constructed.yaml -m debug -a var=vars
What do I do wrong? Or what else is the problem?
(I thought this was a missing feature, so the original info is in https://github.com/ansible/ansible/issues/69043.)
It seems like parent_group takes a literal string while ref_to_test takes a Jinja2 expression (because it's under compose). It should fail the same way if you write
ref_to_test: test_group_1
because test_group_1 simply isn't a Jinja2 variable. You'll have to write
ref_to_test: "'test_group_1'"
just like above so Jinja2 sees 'test_group_1' which is a literal string. This also means you can't use an alias because parent_group does not evaluate its content with Jinja2 and therefore shouldn't include quotes in its content.
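Putting that together, a corrected version of the constructed inventory above (an untested sketch) drops the alias and double-quotes the composed value:
# inventory_constructed.yaml
plugin: constructed
keyed_groups:
  - key: from_inventory
    prefix: inventory
    parent_group: test_group_1    # literal string, not evaluated by Jinja2
compose:
  ref_to_test: "'test_group_1'"   # quoted twice so Jinja2 sees a string literal
strict: yes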

How to assign file content to chef node attribute

I have fingreprint.txt at the location "#{node['abc.d']}/fingreprint.txt"
The contents of the file are as below:
time="2015-03-25T17:53:12C" level=info msg="SHA1 Fingerprint=7F:D0:19:C5:80:42:66"
Now I want to retrieve the value of fingerprint and assign it to chef attribute
I am using the following ruby block
ruby_block "retrieve_fingerprint" do
block do
path="#{node['abc.d']}/fingreprint.txt"
Chef::Resource::RubyBlock.send(:include, Chef::Mixin::ShellOut)
command = 'grep -Po '(?<=Fingerprint=)[^"]*' path '
command_out = shell_out(command)
node.default['fingerprint'] = command_out.stdout
end
action :create
end
It seems not to be working because of missing escape chars in command = 'grep -Po '(?<=Fingerprint=)[^"]*' path '.
Please let me know if there is some other way of assigning file content to node attribute
Two ways to answer this: first I would do the read (IO.read) and parsing (RegExp.new and friends) in Ruby rather than shelling out to grep.
if IO.read("#{node['abc.d']}/fingreprint.txt") =~ /Fingerprint=([^"]+)/
  node.default['fingerprint'] = $1
end
Second, don't do this at all, because it probably won't behave how you expect. You would have to take into account both the two-pass loading process and the fact that default attributes are reset on every run. If you're trying to make an Ohai plugin, do that instead. If you're trying to use this data in later resources, you'll probably want to store it in a global variable and make copious use of the lazy {} helper.

Specifying a particular callback to be used in playbook

I have created different playbooks for different operations in ansible.
And I have also created different Callback Scripts for different kinds of Playbooks (and packaged them with Ansible and installed them).
The playbooks will be called from many different scripts/cron jobs.
Now, is it possible to specify a particular callback script to be called for a particular playbook? (Using a command line argument probably?)
What's happening right now is, all the Callback scripts are called for each playbook.
I cannot put the callback script relative to the location/folder of the playbook because it's already packaged inside the ansible package. Also, all the playbooks are in the same location too.
I am fine with modifying a bit of ansible source code to accommodate it too if needed.
After going through the code of Ansible, I was able to solve it with the below...
In each callback plugin, you can specify self.disabled = True and the callback won't be called at all.
Also, when calling a playbook, there's an option to pass extra arguments as key=value pairs. They become part of the playbook object in the extra_vars field.
So I did something like this in my callback_plugin.
def playbook_on_start(self):
    # self.playbook is populated in your callback plugin by Ansible.
    callback_plugins = self.playbook.extra_vars.get('callback_plugin', '')
    if callback_plugins not in ['email_reporter', 'all']:
        self.disabled = True
And while calling the playbook, I can do something like,
ansible-playbook -e callback_plugin=email_reporter
(Note: -e is the argument prefix key for extra vars.)
If with callback scripts you mean callback plugins, you could decide in those plugins if any playbook should trigger some action.
In the playbook_on_play_start method you have the name of the play, which you could use to decide if further notifications should be processed or not.
playbook_on_stats then is called at the end of the playbook.
SHOULD_TRIGGER = False

class CallbackModule(object):

    def playbook_on_play_start(self, name):
        global SHOULD_TRIGGER
        if name == "My Cool Play":
            SHOULD_TRIGGER = True

    def playbook_on_stats(self, stats):
        if SHOULD_TRIGGER:
            pass  # do something cool
Please note, playbook_on_play_start is called for every play in your playbook, so it might be called multiple times.
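On more recent Ansible releases the same idea maps onto the 2.x callback API; here is a minimal sketch, assuming your version ships ansible.plugins.callback.CallbackBase and the v2_-prefixed hooks (the plugin name below is hypothetical):
from ansible.plugins.callback import CallbackBase

class CallbackModule(CallbackBase):
    CALLBACK_VERSION = 2.0
    CALLBACK_TYPE = 'notification'
    CALLBACK_NAME = 'my_cool_play_notifier'  # hypothetical name

    def __init__(self):
        super(CallbackModule, self).__init__()
        self.should_trigger = False

    def v2_playbook_on_play_start(self, play):
        # Called once per play; remember whether the play of interest started.
        if play.get_name().strip() == "My Cool Play":
            self.should_trigger = True

    def v2_playbook_on_stats(self, stats):
        # Called once at the end of the playbook run.
        if self.should_trigger:
            pass  # do something cool here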
If you are simply running a playbook via script you can do something like this
ANSIBLE_STDOUT_CALLBACK="json" ansible-playbook -i hosts play.yml
You are setting the callback as an environment variable prior to running the ansible-playbook command.
