dynamic usage of attribute in recipe - ruby

I am trying to increment a node attribute and then use the updated value dynamically in another resource in my recipe, but I keep failing to do that.
Chef::Log.info("I am in #{cookbook_name}::#{recipe_name} and current disk count #{node[:oracle][:asm][:test]}")

bash "beforeTest" do
  code lazy{ echo #{node[:oracle][:asm][:test]} }
end

ruby_block "test current disk count" do
  block do
    node.set[:oracle][:asm][:test] = "#{node[:oracle][:asm][:test]}".to_i + 1
  end
end

bash "test" do
  code lazy{ echo #{node[:oracle][:asm][:test]} }
end
However, I'm still getting the error below:
NoMethodError ------------- undefined method `echo' for Chef::Resource::Bash
Cookbook Trace: ---------------
/var/chef/cache/cookbooks/Oracle11G/recipes/testSplit.rb:3:in `block (2 levels) in from_file'
Resource Declaration: ---------------------
# In /var/chef/cache/cookbooks/Oracle11G/recipes/testSplit.rb
1: bash "beforeTest" do
2:   code lazy{
3:     echo "#{node[:oracle][:asm][:test]}"
4:   }
5: end
Can you please explain how lazy should be used in a bash resource? If lazy is not the right approach, is there another option?

bash "beforeTest" do
code lazy { "echo #{node[:oracle][:asm][:test]}" }
end
You should quote the command so the interpolation produces a string; otherwise Ruby looks for an echo method, which does not exist in the Ruby context (hence the NoMethodError in your log).
Warning: lazy has to be for the whole resource attribute; something like this WON'T work:
bash "beforeTest" do
code "echo node asm test is: #{lazy { node[:oracle][:asm][:test]} }"
end
The lazy evaluation takes a block of Ruby code, as described here.
You may have a better result with the log resource like this:
log "print before" do
message lazy { "node asm test is #{node[:oracle][:asm][:test]}" }
end
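
Putting it together, here is a minimal sketch of the original recipe with the quoting fix applied. It is untested and keeps the node.set call from the question; on newer Chef versions node.normal would be used instead:

Chef::Log.info("I am in #{cookbook_name}::#{recipe_name} and current disk count #{node[:oracle][:asm][:test]}")

bash "beforeTest" do
  # The whole command string is built inside the lazy block at converge time
  code lazy { "echo #{node[:oracle][:asm][:test]}" }
end

ruby_block "test current disk count" do
  block do
    node.set[:oracle][:asm][:test] = node[:oracle][:asm][:test].to_i + 1
  end
end

bash "test" do
  # Re-reads the attribute after the ruby_block has incremented it
  code lazy { "echo #{node[:oracle][:asm][:test]}" }
end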

I had been racking my brain over this problem until I came up with lambda expressions. Using a lambda alone didn't help, though, so I combined a lambda with lazy evaluation. Although a lambda is already lazily evaluated, the resource where you call it is still evaluated when Chef compiles the recipe, so to keep it from being evaluated at compile time I wrapped the call in a lazy block.
The lambda expression:
app_version = lambda { `cat version` }
then the resource block:
file 'tmp/validate.version' do
  user 'user'
  group 'user_group'
  content lazy { app_version.call }
  mode '0755'
end
Hope this can help others too :) or if you have some better solution please do let me know :)
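
For what it's worth, a simpler variant that skips the lambda entirely should also work, since lazy alone defers evaluation until converge time. An untested sketch using the same paths as above:

file 'tmp/validate.version' do
  user 'user'
  group 'user_group'
  # The shell-out runs only when the resource converges
  content lazy { `cat version` }
  mode '0755'
end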

Related

How to stub_command in ChefSpec?

I have this condition in my recipe:
install_action = (::Win32::Service.exists?(windows_service['name']) ? :configure : :create)
and a ChefSpec test for it in the spec file:
#1: not working
allow_any_instance_of(Win32::Service)
  .to receive(:exists?)
  .with(windows_service[:name])
  .and_return(true)

#2: also not working
stub_command("::Win32::Service.exists?(#{windows_service[:name]})").and_return(true)
Could you please help me figure out what I have missed in the ChefSpec test, so that the return value is mocked correctly?
Thanks
This should work:
allow(::Win32::Service).to receive(:exists?).with(windows_service[:name]).and_return(true)
The point is that exists? is a class method, not an instance method; that's why allow_any_instance_of does not work. And stub_command is for shell commands, e.g. stub_command('cat file | grep "hello"').
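
For context, here is a minimal sketch of how that stub might sit in a spec. The cookbook name, platform version, and service name are placeholders, not taken from the original question:

require 'chefspec'

describe 'my_cookbook::my_recipe' do
  let(:chef_run) { ChefSpec::SoloRunner.new(platform: 'windows', version: '2019') }

  before do
    # Stub the class method directly on ::Win32::Service
    allow(::Win32::Service).to receive(:exists?)
      .with('my_service')
      .and_return(true)
  end

  it 'converges successfully' do
    expect { chef_run.converge(described_recipe) }.not_to raise_error
  end
end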

How can I run all functions in a bash script, in the order in which they are declared, without explicitly calling each function? [duplicate]

Let's say I have a script with a bunch of functions that act like test cases. For example:
#!/bin/bash
...
function testAssertEqualsWithStrings () {
  assertEquals "something" "something"
}
testAssertEqualsWithStrings     # <--- I don't want to do this

function testAssertEqualsWithIntegers () {
  assertEquals 10 10
}
testAssertEqualsWithIntegers    # <--- I don't want to do this

function testAssertEqualsWithIntegers2 () {
  assertEquals 5 $((10 - 5))
}
testAssertEqualsWithIntegers2   # <--- I don't want to do this

function testAssertEqualsWithDoubles () {
  assertEquals 5.5 5.5
}
testAssertEqualsWithDoubles     # <--- I don't want to do this
...
Is there a way I can call all of these functions in order without having to put that explicit function call underneath each test case? The idea is that the user shouldn't have to manage calling the functions; ideally, the test suite library would do it for them. I just need to know if this is even possible.
Edit: The reason I don't just use the assert methods directly is so I can have meaningful output. My current setup allows me to have output such as this:
...
LineNo 14: Passed - testAssertEqualsWithStrings
LineNo 19: Passed - testAssertEqualsWithIntegers
LineNo 24: Passed - testAssertEqualsWithIntegers2
LineNo 29: Passed - testAssertEqualsWithDoubles
LineNo 34: Passed - testAssertEqualsWithDoubles2
...
LineNo 103: testAssertEqualsWithStringsFailure: assertEquals() failed. Expected "something", but got "something else".
LineNo 108: testAssertEqualsWithIntegersFailure: assertEquals() failed. Expected "5", but got "10".
LineNo 115: testAssertEqualsWithArraysFailure: assertEquals() failed. Expected "1,2,3", but got "4,5,6".
LineNo 120: testAssertNotSameWithIntegersFailure: assertNotSame() failed. Expected not "5", but got "5".
LineNo 125: testAssertNotSameWithStringsFailure: assertNotSame() failed. Expected not "same", but got "same".
...
EDIT: Solution that worked for me
function runUnitTests () {
  testNames=$(grep "^function" "$0" | awk '{print $2}')
  testNamesArray=($testNames)
  beginUnitTests    # prints some pretty output
  for testCase in "${testNamesArray[@]}"
  do
    eval "$testCase"
  done
  endUnitTests      # prints some more pretty output
}
Then I call runUnitTests at the bottom of my test suite.
If you just want to run these functions without calling them, why declare them as functions in the first place?
Either you need them to be functions because you call them multiple times and need the calls to differ each time, in which case I don't see an issue.
Or you just want to run each body once, which is simply running the commands: remove the function declarations and call each line directly.
For this example:
#!/bin/bash
...
assertEquals "something" "something" #testAssertEqualsWithStrings
assertEquals 10 10 #testAssertEqualsWithIntegers
assertEquals 5 $((10 - 5)) #testAssertEqualsWithIntegers2
assertEquals 5.5 5.5 #testAssertEqualsWithDoubles
...

Where is the ruby function 'powershell' defined?

I am using the msutter DSC module for Puppet. While reading through the source code, I came across code like this (in dsc_configuration_provider.rb):
def create
  Puppet.debug "\n" + ps_script_content('set')
  output = powershell(ps_script_content('set'))
  Puppet.debug output
end
What file defines the powershell function or method? Is it a Ruby builtin? A Puppet builtin? Inherited from a class? I know it is being used to send text to PowerShell as a command and gather the results, but I need to see the source code to understand how to improve its error logging for my purposes, because certain PowerShell errors are being swallowed and no warnings are printed to the Puppet log.
These lines in file dsc_provider_helpers.rb may be relevant:
provider.commands :powershell =>
  if File.exists?("#{ENV['SYSTEMROOT']}\\sysnative\\WindowsPowershell\\v1.0\\powershell.exe")
    "#{ENV['SYSTEMROOT']}\\sysnative\\WindowsPowershell\\v1.0\\powershell.exe"
  elsif File.exists?("#{ENV['SYSTEMROOT']}\\system32\\WindowsPowershell\\v1.0\\powershell.exe")
    "#{ENV['SYSTEMROOT']}\\system32\\WindowsPowershell\\v1.0\\powershell.exe"
  else
    'powershell.exe'
  end
Surely this defines where the PowerShell executable is located, but it gives no indication of how it is called or how its return value is derived. Are stdout and stderr combined? Am I given the text output or just the exit code? etc.
This is core Puppet logic. When a provider has a command, like
commands :powershell => some binary
That is hooked up as a function powershell(*args).
You can see it with other providers like Chocolatey:
commands :chocolatey => chocolatey_command

def self.chocolatey_command
  if Puppet::Util::Platform.windows?
    # must determine how to get to params in ruby
    #default_location = $chocolatey::params::install_location || ENV['ALLUSERSPROFILE'] + '\chocolatey'
    chocopath = ENV['ChocolateyInstall'] ||
      ('C:\Chocolatey' if File.directory?('C:\Chocolatey')) ||
      ('C:\ProgramData\chocolatey' if File.directory?('C:\ProgramData\chocolatey')) ||
      "#{ENV['ALLUSERSPROFILE']}\chocolatey"
    chocopath += '\bin\choco.exe'
  else
    chocopath = 'choco.exe'
  end
  chocopath
end
Then other locations can just call chocolatey like a function with args:
chocolatey(*args)
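
As a rough illustration of that wiring (the type name and arguments below are invented, not taken from the msutter module), a provider that declares a command gets a generated method of the same name, which raises Puppet::ExecutionFailure when the binary exits non-zero:

# Hypothetical provider sketch; :example_task is an invented type name.
Puppet::Type.type(:example_task).provide(:powershell) do
  commands :powershell => 'powershell.exe'

  def run_script(script)
    # `powershell` here is the method generated by `commands`;
    # it returns the command output and raises Puppet::ExecutionFailure
    # on a non-zero exit status.
    powershell('-NoProfile', '-Command', script)
  end
end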

ExecJS: keeping the context between two calls

I'm currently trying to use ExecJS to run Handlebars for one of the products I work on. (Note: I know about the handlebars.rb gem, which is really cool and which I used for a while, but there are issues getting it installed on Windows, so I'm trying a homemade solution.)
One of the problems I'm having is that the JavaScript context is not kept between each "call" to ExecJS.
Here is the code where I instantiate the @js attribute:
class Context
  attr_reader :js, :partials, :helpers

  def initialize
    src = File.open(::Handlebars::Source.bundled_path, 'r').read
    @js = ExecJS.compile(src)
  end
end
And here's a test showing the issue:
let(:ctx) { Hiptest::Handlebars::Context.new }

it "does not keep context properly (or I'm using the tool wrong" do
  ctx.js.eval('my_variable = 42')
  expect(ctx.js.eval('my_variable')).to eq(42)
end
And now when I run it:
rspec spec/handlebars_spec.rb:10
I, [2015-02-21T16:57:30.485774 #35939] INFO -- : Not reporting to Code Climate because ENV['CODECLIMATE_REPO_TOKEN'] is not set.
Run options: include {:locations=>{"./spec/handlebars_spec.rb"=>[10]}}
F
Failures:
1) Hiptest::Handlebars Context does not keep context properly (or I'm using the tool wrong
Failure/Error: expect(ctx.js.eval('my_variable')).to eq(42)
ExecJS::ProgramError:
ReferenceError: Can't find variable: my_variable
Note: I got the same issue with "exec" instead of "eval".
That is a silly example. What I really want to do is run "Handlebars.registerPartial" and later on "Handlebars.compile". But when I try to use the partials in the template, it fails because the one registered previously has been lost.
Note that I've found a workaround but I find it pretty ugly :/
def register_partial(name, content)
  @partials[name] = content
end

def call(*args)
  @context.js.call([
    "(function (partials, helpers, tmpl, args) {",
    "  Object.keys(partials).forEach(function (key) {",
    "    Handlebars.registerPartial(key, partials[key]);",
    "  })",
    "  return Handlebars.compile(tmpl).apply(null, args);",
    "})"].join("\n"), @partials, @template, args)
end
Any idea on how to fix the issue?
Only the context you create when you call ExecJS.compile is preserved between evals. Anything you want preserved needs to be part of the initial compile.
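A sketch of what that means in practice, assuming the goal from the question (registering partials, then compiling): build the partial-registration JavaScript into the source string before calling ExecJS.compile, so it becomes part of the preserved context. The partial name and template below are invented for illustration:

require 'execjs'
require 'json'
require 'handlebars/source'   # provides Handlebars::Source.bundled_path

handlebars_src = File.read(::Handlebars::Source.bundled_path)

partials = { 'header' => '<h1>{{title}}</h1>' }

# Everything that must persist is baked into the compiled source.
register_src = partials.map do |name, content|
  "Handlebars.registerPartial(#{name.to_json}, #{content.to_json});"
end.join("\n")

context = ExecJS.compile([handlebars_src, register_src].join("\n"))

# The partial registered above is available in every later call.
html = context.call(
  '(function (tmpl, data) { return Handlebars.compile(tmpl)(data); })',
  '{{> header}}Body for {{title}}',
  { 'title' => 'Hello' }
)
puts html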

How to change the prompt

I am trying to configure the prompt characters in ripl, an alternative to interactive Ruby (irb). In irb, it is done using IRB.conf[:DEFAULT], but that does not seem to work with ripl, and I am having difficulty finding instructions for it. Please point me to a link with an explanation, or give a brief explanation here.
Configuring a dynamic prompt in ~/.riplrc:
# Shows current directory
Ripl.config[:prompt] = lambda { Dir.pwd + '> ' }
# Print current line number
Ripl.config[:prompt] = lambda { "ripl(#{Ripl.shell.line})> " }
# Simple string prompt
Ripl.config[:prompt] = '>>> '
Changing the prompt in the shell:
>> Ripl.shell.prompt = lambda { Dir.pwd + '> ' }
ripl loads your ~/.irbrc file, which typically contains some irb-specific options (e.g. IRB.conf[:PROMPT]). To avoid errors, you can install ripl-irb, which catches calls to the IRB constant and prints messages on how to convert irb configuration to ripl equivalents.
http://rbjl.net/44-ripl-why-should-you-use-an-irb-alternative
