Halting and continuing embedded Ruby code - ruby

I call Ruby functions from my C++ code through the embedding commands (rb_eval and the like). Is there any way to stop the execution of the code partway, save the local variables, and restart it from the same spot later?

If you want to store Ruby variables for use later, you want to use a feature called Marshaling. Create a class in which you can store all variables you wish to save, and use Marshal::dump to store the class into a file. The data can be reconstituted into a Ruby variable again later by using Marshal::load.
Restarting your code from a particular point might not be as easy. You can marshal classes and data but not necessarily the state of the entire Ruby interpreter itself. One possibility is to store enough state information in your marshaled data to let you re-load the data and figure out where you need to pick up.
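A minimal sketch of that idea, assuming you control the code being run (the Checkpoint class, file name, and phase symbol below are illustrative, not part of the question): bundle the data plus a marker for where to resume, marshal the bundle, and branch on the marker when the program runs again.

# Bundle the data plus a resume marker, and marshal the whole bundle.
Checkpoint = Struct.new(:phase, :locals)

def save_checkpoint(phase, locals)
  File.open('checkpoint.bin', 'wb') { |f| Marshal.dump(Checkpoint.new(phase, locals), f) }
end

def load_checkpoint
  return nil unless File.exist?('checkpoint.bin')
  File.open('checkpoint.bin', 'rb') { |f| Marshal.load(f) }
end

cp = load_checkpoint
if cp && cp.phase == :after_step_one
  # resume: skip straight to step two, using the values saved in cp.locals
else
  # first run: do step one, then record where we got to
  save_checkpoint(:after_step_one, { 'x' => 42 })
end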

Related

Look up object from `inspect` value / pointer

As part of my debugging, I need to dump out a big object. Some of the data shows up like this:
#<Sketchup::Face:0x00007f9119bafea8>
This is the result of calling .inspect on the Sketchup::Face object, and Ruby calls .inspect internally when things are printed to the console.
Is it possible to then get a reference to the actual instance, using that 0x00007f9119bafea8 identifier? I don't even know exactly what this is - a pointer, instance id, something else?
Of course I can always manipulate my data before printing to console but since this is just for temporary debugging I'm hoping there's a quick and easy way.
Note: normally I would put in a binding.pry to avoid this whole business but due to Sketchup's restrictive programming environment it's not possible to use breakpoints there.
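For context, the hex value in the default inspect output is the object's memory address, and there is no supported API to dereference it directly. One hedged workaround, assuming ObjectSpace is available in SketchUp's embedded Ruby and the face is still referenced somewhere, is to scan live instances and match on the inspect string. A rough sketch (not from the thread):

# Find the live Sketchup::Face whose inspect string matches the one that was
# printed. Only works while the object is still alive (not yet garbage collected).
target = "#<Sketchup::Face:0x00007f9119bafea8>"
face = ObjectSpace.each_object(Sketchup::Face).find { |o| o.inspect == target }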

Write to Chef::Log from a template script?

I work on a team that decided long ago to use Chef's template resource to make dynamic scripts to be executed in an execute resource block.
I know this feature was only built to generate config files and the like, but I have to work with what I've got here.
Basically I need to know how to write to Chef::Log from a Ruby script generated from a template block. The script is not in the same context as the cookbook that generated it, so I can't just call require 'chef/log' in the script. I also do not want to just append to the chef-run.log because that runs into timing problems.
Is there any way to accomplish this as cleanly as possible without appending to chef-run.log?
Thank you for your time.
Chef::Log is a global, so technically you can just call its methods directly from inside a template, but this is really weird and you probably shouldn't. Do whatever logging you need from the recipe code instead. If you need to log some values, compute them in the recipe and then pass them in using variables.
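A short sketch of that pattern (the attribute, script path, and template name are hypothetical, not from the original answer): log from the recipe, compute the values there, and hand them to the template via variables instead of logging from inside the rendered script.

# Compute and log in the recipe, then pass the value into the template.
computed_value = node['my_cookbook']['setting']   # hypothetical attribute

Chef::Log.info("Rendering maintenance script with value #{computed_value}")

template '/usr/local/bin/maintenance.sh' do
  source 'maintenance.sh.erb'
  mode '0755'
  variables(setting: computed_value)
end

execute 'run maintenance script' do
  command '/usr/local/bin/maintenance.sh'
end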

Adding a "source" attribute to ruby objects using Rubinius

I'm attempting to (for fun and profit) add the ability to inspect objects in ruby and discover their source code. Not the generated bytecode, and not some decompiled version of the internal representation, but the actual source that was parsed to create that object.
I was up quite late learning about Rubinius, and while I don't have my head around it yet fully, I think I've made some progress.
I'm having trouble figuring out how to do this, though. My first approach was to simply add another instance attribute to the AST structures (for, say, a ClosedScope object). Then, somehow pull that attribute out again when the bytecode is interpreted at runtime.
Does this seem like a sound approach?
As Mr Samuel says, you can just use pry and do show-source foo. But perhaps you'd like to know how it works under the hood.
Ruby provides two things that are useful: firstly, you can get a list of all methods on an object, just by calling foo.methods. Secondly, each of those methods can report the file name and line number where it was defined (Method#source_location).
To find the entire source code for an object, we scan through all the methods and group them by where they are defined. We then scan back up each file until we see class or module or one of the other constructs Rubyists use to define methods, and then scan forward until we have identified the entire class/module definition.
As dgitized points out we often end up with multiple such definitions, if people have monkey patched core objects. By default pry only shows the module definition which contains most methods; but you can request the others with show-source -a.
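A rough sketch of that grouping step (the helper name is illustrative): Method#source_location gives the file and line for Ruby-defined methods and returns nil for methods defined in C, which are skipped here.

# Group an object's methods by the file that defines them.
def definition_files(obj)
  obj.methods.each_with_object(Hash.new { |h, k| h[k] = [] }) do |name, files|
    file, line = obj.method(name).source_location
    files[file] << [name, line] if file
  end
end

definition_files(Object.new)   # => {} — a plain object's methods are all C-defined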
Have you looked into Pry? It is a Ruby interpreter/debugger that claims to be able to do just what you've asked.
Have you tried set_trace_func? It's not Rubinius-specific, but it does what you ask and isn't based on pry or some other gem.
See http://www.ruby-doc.org/core-1.9.3/Kernel.html#method-i-set_trace_func
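A minimal example of what set_trace_func gives you (the method and output format here are illustrative):

# Print every 'call' event with its source location while tracing is active.
set_trace_func proc { |event, file, line, id, binding, klass|
  printf("%8s %s:%-3d %s#%s\n", event, file, line, klass, id) if event == 'call'
}

def greet(name)
  "hello #{name}"
end

greet('world')        # the tracer reports the call to greet, its file and line
set_trace_func(nil)   # turn tracing back off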

Saving new methods of a class using Ruby metaprogramming

I just discovered Ruby's metaprogramming (after 7 years of using Ruby, it was about time!) and I have this question:
Assuming I run a program that uses class_eval and other metaprogramming functions to add methods to a class: is there an easy way, when re-running the same program, to have these new methods already defined? Or do I have to build my own system which, every time class_eval is used, also saves the generated code to a file so it can be re-evaluated the next time I run the program?
Thanks
This is not how it's done. The proper way is to run all those calls to define_method, class_eval (and whatnot) again the next time you run the program, and so define the methods at runtime.
Imagine what would happen if generated methods persisted in your source code. Would you like your attr_accessor to replace itself with two new methods?
And what if you write such a meta-method yourself and later change it? How would all those saved, generated methods be updated?
I don't know where you read about metaprogramming, but I strongly recommend this book: Metaprogramming Ruby. It should clear your head. :)
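To make the point concrete, here is a minimal sketch of that approach (class and field names are made up): the metaprogramming simply runs again on every launch, so the generated methods always match the current definitions and nothing needs to be saved.

class Record
  FIELDS = %i[name email age]   # change this list and re-run; no saved code to update

  FIELDS.each do |field|
    define_method(field)       { instance_variable_get("@#{field}") }
    define_method("#{field}=") { |value| instance_variable_set("@#{field}", value) }
  end
end

r = Record.new
r.name = 'Ada'
r.name   # => "Ada"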
You cannot (with eval and self-assembled strings you could, but that is not really metaprogramming anymore) and should not do that; even Ruby's standard library is re-evaluated on every program launch.
Another possibility would be forking; unicorn is a good example of this. Evaluate all your method definitions, then start spawning child processes, which are copies of the "master" process. This saves you the time of re-evaluating all your code, as forking is fast by comparison.
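A rough sketch of that forking idea (Unix only; the required file is hypothetical): pay the metaprogramming cost once in the parent, and every forked child starts with those methods already defined.

# require_relative 'expensive_metaprogramming'   # runs class_eval etc. once

pids = 3.times.map do |i|
  fork do
    # each child inherits every class and method defined so far
    puts "worker #{i} ready in process #{Process.pid}"
  end
end
pids.each { |pid| Process.wait(pid) }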

How can I make an external toolbox available to a MATLAB Parallel Computing Toolbox job?

As a continuation of this question and the subsequent answer, does anyone know how to have a job created using the Parallel Computing Toolbox (using createJob and createTask) access external toolboxes? Is there a configuration parameter I can specify when creating the function to specify toolboxes that should be loaded?
According to this section of the documentation, one way you can do this is to specify either the 'PathDependencies' property or the 'FileDependencies' property of the job object so that it points to the functions you need the job's workers to be able to use.
You should be able to point the way to the KbCheck function in PsychToolbox, along with any other functions or directories needed for KbCheck to work properly. It would look something like this:
obj = createJob('PathDependencies',{'path_to_KbCheck',...
'path_to_other_PTB_functions'});
A few comments, based on my work troubleshooting this:
It appears that there are inconsistencies in how well nested functions and anonymous functions work with the Parallel Computing Toolbox. I was unable to get them to work, while others have been able to. (Also see here.) As such, I would recommend having each function stored in its own file, and including those files using the PathDependencies or FileDependencies properties, as described by gnovice above.
It is very hard to troubleshoot the Parallel Computing Toolbox, as everything happens outside your view. Use breakpoints liberally in your code; the inspect command is your friend. Also note that if there is an error, task objects will contain an error parameter, which in turn will contain an ErrorMessage string and possibly an Error.causes MException object. Both of these were immensely useful in debugging.
When including Psychtoolbox, you need to do it as follows. First, create a jobStartup.m file with the following lines:
PTB_path = '/Users/eliezerk/Documents/MATLAB/Psychtoolbox3/';
addpath( PTB_path );
cd( PTB_path );
SetupPsychtoolbox;
However, since the Parallel Computing Toolbox can't handle any graphics functionality, running SetupPsychtoolbox as-is will actually cause your thread to crash. To avoid this, you need to edit the PsychtoolboxPostInstallRoutine function, which is called at the very end of SetupPsychtoolbox. Specifically, you want to comment out the line AssertOpenGL (line 496, as of the time of this answer; this may change in future releases).
