Write to Chef::Log from a template script? - ruby

I work on a team that decided long ago to use Chef's template resource to generate dynamic scripts, which are then executed in an execute resource block.
I know this feature was only built to generate config files and the like, but I have to work with what I've got here.
Basically I need to know how to write to Chef::Log from a Ruby script generated by a template block. The script does not run in the same context as the cookbook that generated it, so I can't just call require 'chef/log' in the script. I also do not want to just append to chef-run.log, because that runs into timing problems.
Is there any way to accomplish this as cleanly as possible without appending to chef-run.log?
Thank you for your time.

Chef::Log is a global, so technically you can just call its methods directly from inside a template, but this is really weird and you probably shouldn't. Do whatever logging you need from the recipe code instead. If you need to log some values, compute them in the recipe and then pass them in using variables.
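For example, here is a minimal sketch of that recipe-side approach; the script path, attribute name, and template name are all hypothetical. The recipe logs the computed value itself, then hands it to the template through variables.
# Recipe code: log here, where Chef::Log is naturally available.
app_port = node['myapp']['port'] # hypothetical attribute
Chef::Log.info("Rendering generated script with port #{app_port}")

# Pass the computed value into the generated script as a template variable.
template '/usr/local/bin/generated_script.rb' do
  source 'generated_script.rb.erb'
  mode '0755'
  variables(port: app_port)
end

execute 'run generated script' do
  command '/usr/local/bin/generated_script.rb'
end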

Related

Setting up default overwritable constructors and destructors or other functions for a set of commands

alias cmd_name="source mainshell cmd_name"
My plan is to alias a single main script to a set of script names. On invocation of any of those names, the main script would be called; it can define default functions, including constructors and destructors. It then checks the target script file for a constructor definition: if one exists, it calls that constructor, otherwise it calls the default one. It then sources the script and finally calls the destructor. This also gives the script access to the default functions set up by the main script. This works fine, but these aliases can't be exported to subshells.
To add to that, I only want these default functions available to that particular aliased set of commands, and I want them destroyed once command execution is complete. That's why I can't just define them in .bash_profile, which would make them global.
command_name() {
  # constructor: define the shared helper functions and variables
  source path/to/command_name
  # destructor: unset everything the constructor defined
}
Another option I found was to create a function for each name and call my script inside it; functions are exportable too. This way I could encapsulate every command in a function with the same name and easily have setup and teardown code. The problem here is that I can't define any more functions inside that function, and doing everything inside one function would get really clumsy.
Another thought I had was symbolic links, but they seem to have a limit to how many I can create to a particular script.
What would be the best way to achieve this? Or, if this is somehow an inappropriate design, can someone please explain why?
IIUC you're trying to achieve the following:
A set of commands that necessarily take place in the context of the current shell rather than a new shell process.
These commands have a lot of common functionality that needs to be factored out of these commands.
The common functionality must not be accessible to the current shell.
In that case, the common code consists of functions and variables that you have to explicitly unset after the command has executed. Your best bet, then, is to have a function per command, have that function source the common code, and have the common code also define a cleanup function (called before you return) that unsets everything.
Notes:
You can actually declare functions inside other functions, but the nested functions will be global once they are defined - so name them uniquely and don't forget to unset them.
If the commands don't need to affect the current shell, then you can just make each one its own script file, source the common code from it, and not unset anything, since the functions die with the script's shell process.
There is generally no limit to how many symlinks you can create to a single file. The limit on symlink chains (symlink to symlink to symlink etc.) is low.

Make Dynamic Variable Available to Multiple Chef Recipes

We dynamically compute the name of a directory at run-time from some attributes:
var1 = node['foo']['bar'].to_s
var2 = node['foo']['baz'].to_s
app_dir = "/var/#{var1}/#{var2}"
Copying this code block into every recipe that needs it works. When we tried to clean this up, it bombed with "No resource, or local variable named 'app_dir'".
We have tried the following:
1) Move the block of code into attributes/default.rb
2) Move the block of code into recipes/default.rb
3) Same as 2 above, but adding require_relative 'default' in the recipes that require the variable
If you're requiring from a separate file (e.g. with require_relative), you need to make the variable something that gets exported. Try making it a constant instead (e.g. AppDir =) or a method. Local variables are not shared through require.
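A quick illustration of that distinction (file and constant names are hypothetical):
# shared.rb
app_dir = '/var/foo/bar' # local variable: invisible to the requiring file
APP_DIR = '/var/foo/bar' # constant: visible after require

# recipe.rb
require_relative 'shared'
puts APP_DIR  # => /var/foo/bar
puts app_dir  # NameError: undefined local variable or method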
There isn't really a good way to do this in Chef. Recipe and attributes files run in very different contexts, so your best bet is either to copy-paste the boilerplate (if it really is as small as your example, do that) or maybe a global helper method, but those are hard to write safely. Ping me on Slack and we can talk about the right kind of helper method for your exact situation (covering all of them isn't really practical for an SO answer).
If you need to share some code between a bunch of different recipes, you can try using a library loaded from a common cookbook dependency.
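As a sketch of that library approach, assuming hypothetical cookbook and module names: put a helper module under libraries/ in a shared cookbook, mix it into Chef::Recipe, and every cookbook that declares a dependency on the shared cookbook can then call the helper from its recipes.
# libraries/app_dir_helper.rb in the shared cookbook
module MyOrg
  module AppDirHelper
    # node is available here because recipes mix this module in
    def app_dir
      "/var/#{node['foo']['bar']}/#{node['foo']['baz']}"
    end
  end
end

# Make the helper callable from recipe code.
Chef::Recipe.send(:include, MyOrg::AppDirHelper)

# Any dependent recipe can then simply write:
#   directory app_dir do
#     recursive true
#   end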

Puppet-like DSL in Ruby

I'm implementing an internal DSL in Ruby. I provide a command-line tool to execute DSL scripts written in files (much like Puppet). At first I was going to use load() to run the scripts; the thing is, I want to be able to pass in some context before the script executes. I was hoping I could read a script in text form, treat it as a block, and have that block executed with some given context. Is something like this possible?
Or how are such things generally achieved? It can certainly be done, because Puppet does it, but before I dig through its code base, I'm trying here.
Also, are there any nice small examples of internal DSL implementations I could look at?
Please check the following links, a series of DSL articles:
http://www.ibm.com/developerworks/java/library/j-cb04046/index.html
http://deadprogrammersociety.blogspot.de/2006/11/ruby-domain-specific-languages-basics.html
http://deadprogrammersociety.blogspot.de/2006/11/ruby-domain-specific-languages-basics_08.html
http://deadprogrammersociety.blogspot.de/2006/11/ruby-domain-specific-languages-basics_19.html
http://deadprogrammersociety.blogspot.de/2006/11/ruby-domain-specific-languages-basics_27.html
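For reference, the pattern the question describes - reading a script file and evaluating it against a context object - is commonly done with instance_eval. A minimal sketch, where the Context class and its package method are hypothetical DSL vocabulary:
# A context object that defines the DSL vocabulary.
class Context
  def initialize(settings)
    @settings = settings
  end

  # A keyword available inside DSL scripts.
  def package(name)
    puts "would install #{name} on #{@settings[:host]}"
  end
end

# Evaluate the script as if its code were written inside the context
# object; passing the path and line number makes backtraces point at
# the script file itself.
def run_script(path, context)
  context.instance_eval(File.read(path), path, 1)
end

run_script('example.dsl', Context.new(host: 'web01'))
With this, a script file containing just package 'nginx' resolves against the context object's methods rather than global scope.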

How can I make an external toolbox available to a MATLAB Parallel Computing Toolbox job?

As a continuation of this question and the subsequent answer, does anyone know how to have a job created using the Parallel Computing Toolbox (using createJob and createTask) access external toolboxes? Is there a configuration parameter I can specify when creating the function to specify toolboxes that should be loaded?
According to this section of the documentation, one way you can do this is to specify either the 'PathDependencies' property or the 'FileDependencies' property of the job object so that it points to the functions you need the job's workers to be able to use.
You should be able to point the way to the KbCheck function in PsychToolbox, along with any other functions or directories needed for KbCheck to work properly. It would look something like this:
obj = createJob('PathDependencies',{'path_to_KbCheck',...
'path_to_other_PTB_functions'});
A few comments, based on my work troubleshooting this:
It appears that there are inconsistencies in how well nested functions and anonymous functions work with the Parallel Computing Toolbox. I was unable to get them to work, while others have been able to. (Also see here.) As such, I would recommend having each function stored in its own file and including those files using the PathDependencies or FileDependencies properties, as described by gnovice above.
It is very hard to troubleshoot the Parallel Computing Toolbox, as everything happens outside your view. Use breakpoints liberally in your code; the inspect command is your friend. Also note that if there is an error, task objects will contain an error parameter, which in turn will contain an ErrorMessage string and possibly an Error.causes MException object. Both of these were immensely useful in debugging.
When including Psychtoolbox, you need to do it as follows. First, create a jobStartup.m file with the following lines:
% jobStartup.m runs on each worker when the job starts.
PTB_path = '/Users/eliezerk/Documents/MATLAB/Psychtoolbox3/';
addpath( PTB_path );
cd( PTB_path );
SetupPsychtoolbox;
However, since the Parallel Computing Toolbox can't handle any graphics functionality, running SetupPsychtoolbox as-is will actually cause your thread to crash. To avoid this, you need to edit the PsychtoolboxPostInstallRoutine function, which is called at the very end of SetupPsychtoolbox. Specifically, you want to comment out the line AssertOpenGL (line 496 as of the time of this answer; this may change in future releases).

Halting and continuing embedded Ruby code

I call Ruby functions from my C++ code through the embedding API (rb_eval_string and the like). Is there any way to stop execution of the code partway through, save the local variables, and restart it from the same spot later?
If you want to store Ruby variables for use later, you want to use a feature called Marshaling. Create a class in which you can store all variables you wish to save, and use Marshal::dump to store the class into a file. The data can be reconstituted into a Ruby variable again later by using Marshal::load.
Restarting your code from a particular point might not be as easy. You can marshal classes and data but not necessarily the state of the entire Ruby interpreter itself. One possibility is to store enough state information in your marshaled data to let you re-load the data and figure out where you need to pick up.
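A minimal sketch of that marshaling round trip (the SavedState container is hypothetical):
# Container for whatever locals you want to persist.
SavedState = Struct.new(:counter, :items)
state = SavedState.new(42, %w[a b c])

# Serialize to a file...
File.open('state.dump', 'wb') { |f| Marshal.dump(state, f) }

# ...and reconstitute it later, possibly in a different process.
restored = File.open('state.dump', 'rb') { |f| Marshal.load(f) }
puts restored.counter # => 42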
