Debug code in standalone MATLAB

Say I have some function foo that is used in a standalone application (i.e. compiled into an executable with mcc -m), which has an important intermediate result bar. Normally I don't need this intermediate result after the function completes, so it is not a return value. For development and debugging, however, it is useful to make this intermediate result accessible, which I can do by using assignin to put it in some debug workspace.
Now the problem is that assignin isn't possible in a standalone compilation, and mcc will complain with an error if there is an assignin in the code. What I'd like to do is include the assignin only when the code is run interactively, not when it is compiled as a standalone application. This would also speed things up: the standalone application doesn't need the intermediate result anyway, so it can save time and/or memory by skipping the assignin. In any other programming environment one would call this compiling in debug and release mode.
In pseudo-matlab:
function res = foo()
    bar = ...;   % some complicated formula
    if ~standalone
        assignin('debug', 'foo_bar', bar)
    end
    res = ...;   % some complicated formula involving bar
end
The problem is that I know of no way of expressing the if ~standalone. Firstly, I don't know how to test for being in standalone mode or not; more crucially, this needs to be some code construct that causes mcc to completely disregard the guarded code block and not try to compile it, because the assignin cannot be compiled in standalone mode.
As an aside, this would not just be valuable for intermediate results, but also for extra data gathering, where extra data would be calculated in the guarded block and exported by way of an assignin. Obviously such extra data should not be calculated in the standalone version as it would not serve any purpose.
Is there any such code construct in matlab that would allow this to be done, or is there a better alternative? Up to now I have just been juggling commented code, uncommenting and re-commenting the debug lines as I went along in the development process.

Instead of using assignin to populate a debug workspace, you could use a global debugging struct and stash the variables in fields of the same name. All valid variable names are also valid struct field names. You could implement this with a global variable, but it is probably better done with a persistent variable inside a function. This will work in compiled or non-compiled code.
First, have a function that defines your debugging mode.
function out = isdebugging(value)
%ISDEBUGGING Get or set the global debugging state
persistent state
if isempty(state)
    state = false;
end
switch nargin
    case 0 % Getter
        out = state;
    case 1 % Setter
        state = value;
end
end
Then, a function for stashing the debugging values that only holds on to them when debug mode is on.
function out = debugval(action, name, value)
%DEBUGVAL Stash values for debugging
persistent stash
if isempty(stash)
    stash = struct;
end
switch action
    case 'get'
        out = stash.(name);
    case 'getall'
        out = stash;
    case 'set'
        % Short-circuit when not in debugging mode so the stash never
        % accumulates values (and costs memory) in the compiled version
        if isdebugging()
            stash.(name) = value;
        end
    case 'list'
        out = fieldnames(stash);
    case 'remove'
        stash = rmfield(stash, name);
    case 'clear'
        stash = struct;
end
end
The debugging is disabled by default so it will short-circuit in the compiled version and not accumulate values. Enable it manually in your interactive Matlab session with isdebugging(true). This bypasses the issue of detecting whether you're running deployed. It also means you can enable and use it in your compiled app, if you want to test the compiled code to see how it's working in that context. You can use a GUI button or environment variable to tell the compiled app to enable debugging.
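For example, foo from the question could be instrumented like this (a minimal sketch; the formulas are stand-ins for the real computation):

function res = foo()
bar = 2 * pi;                      % stand-in for the complicated formula
debugval('set', 'foo_bar', bar);   % does nothing unless debugging is enabled
res = bar ^ 2;                     % stand-in formula involving bar
end

Then, in an interactive session:

isdebugging(true);                 % turn the debug stash on
res = foo();
bar = debugval('get', 'foo_bar');  % retrieve the stashed intermediate result
debugval('list')                   % names of everything stashed so far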
The isdebugging() call can guard other code. But I wouldn't get too carried away with using isdebugging() to guard anything besides log output or value accumulation. You don't want your debugging mechanism to have side effects on the correctness of your code.
Also have a look at Java's log4j as a model of how to incorporate run-time configurable debugging output in an application. You could apply its principles to Matlab.

Use the function isdeployed. isdeployed is true when run in the MCR, and false when run in MATLAB.
EDIT: Of course, this doesn't solve the problem of compiling. You might have to find a substitute for assignin.
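For reference, the run-time check itself is a one-liner (a sketch that stashes into the base workspace, since assignin only accepts 'base' or 'caller'; as the edit above notes, mcc may still refuse to compile the guarded call):

if ~isdeployed
    % only reached when running inside an interactive MATLAB session
    assignin('base', 'foo_bar', bar);
end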

Related

Clojure: capture runtime value of function arg, to use in REPL

Problem
My web front-end calls back-end queries with complex arguments. During development, to avoid time-consuming manual replication of those arguments, I want to capture their values in Vars, which can then be used in the REPL.
Research
This article shows that an inline def is a suitable solution, but I couldn't make it work: after a call from the front-end, the Var remained unbound.
I launched the backend with REPL through VS Code + Calva, with the following code:
(defn get-analytics-by-category [params]
  (def params params)
  ...)
And here's the evaluation of the Var in the REPL:
#object[clojure.lang.Var$Unbound 0x1298f89e "Unbound: #'urbest.db.queries/params"]
Question
Why didn't the code above bind the value of the argument to the Var? Is there another solution?
The best way I found is to use the scope-capture library. It captures all the local variables with the addition of one line in a function; with another one-liner you can then define all those variables as globals, which lets you evaluate any sub-expression in the function in the REPL using runtime values.
If you have ever spent a lot of time reproducing complex runtime values, I strongly recommend watching their 8-minute demo.
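The workflow looks roughly like this (a sketch: run-query is a hypothetical placeholder, and the execution-point id 7 is whatever spy prints in your session):

(require 'sc.api)

(defn get-analytics-by-category [params]
  (sc.api/spy                     ;; records all locals at this point
    (run-query params)))          ;; hypothetical back-end query call

;; After the front-end triggers the function, spy prints an execution-point
;; id, e.g. 7. In the REPL, turn the captured locals into global Vars:
(sc.api/defsc 7)
params ;; => the runtime value captured during the call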
My issue with the inline def was likely caused by reloading the namespace after the Var was bound to a value. After restarting VS Code and carefully doing everything again, the issue went away.
Another way to look at the runtime is to use a debugger.
It is more flexible than scope-capture but requires a bit more work to make variables available outside an execution.
VS Code's Calva extension comes with one; there are also Emacs packages.

Can proc macros determine the target of the invoking compilation?

Procedural macros live in their own crates, which are compiled for the development machine (so that they can be executed when crates that use them are compiled). Any conditional compilation directives within the procedural macro crates will accordingly be based on their compilation environment, rather than that of the invoking crate.
Of course, such macros could expand to tokens that include conditional compilation directives that will then be evaluated in the context of the invoking crate's compilation; however, this is not always possible or desirable.
Where one wishes for the expanded tokens themselves to be some function of the invoking crate's compilation environment, there is a need for the macro to determine that environment during its run-time (which is of course the invoking crate's compilation-time). Clearly a perfect use-case for the std::env module.
However, rustc doesn't set any environment variables, and Cargo sets only a limited few. In particular, some key information (such as the target architecture and operating system) is not present at all.
I appreciate that a build script in the invoking crate could set environment variables for the macro then to read, but this places an unsatisfactory burden on the invoking crate author.
Is there any way that a proc macro author can obtain runtime information about the invoking crate's compilation environment (target architecture and operating system being of most interest to me)?
I've somewhat inelegantly solved this by recursing into a second proc macro invocation, where the first invocation adds #[cfg_attr] attributes with literal boolean parameters that can then be accessed within the second invocation:
#[cfg_attr(all(target_os = "linux"), my_macro(linux = true))]
#[cfg_attr(not(target_os = "linux"), my_macro(linux = false))]
// ...
A hack, but it works.
I found another solution:
Instead of generating the code depending on a flag like that, you can generate the code for all operating systems and use #[cfg(...)] inside the quoted code.
quote! {
    #[cfg(target_os = "linux")]
    {
        // linux-specific stuff
    }
    #[cfg(not(target_os = "linux"))]
    {
        // non-linux stuff
    }
}
This is probably cleaner.

View a custom data type's values while debugging OCaml code

I have a list called list_ds of a custom-defined data structure in my OCaml source. I compiled the source for debugging and ran the debugger, halting execution at a breakpoint. Now I want to check a particular element of the data structure within the list. If I use the print list_ds command in the debugger, I see [<abstr>; <abstr>; <abstr>; ...], a list of the word <abstr>. If I use print list_ds.(0), it tells me that $1 : ds = <abstr>. But I really want to see the elements of the ds data structure at the first location in list_ds. How can I do that?
One option would be to install your own custom print function for the type. This is described in Section 16.8.8 of the OCaml Debugger Manual.
A downside of this approach is that it requires quite a bit of setup, especially since the output must be done through the Format module. You might be able to use the deriving project to speed this up. It can generate formatting functions automatically.
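A minimal sketch of such a printer, assuming a hypothetical record type standing in for your ds (the real type and fields will differ):

(* my_printer.ml *)
open Format

type ds = { id : int; label : string }  (* hypothetical stand-in for ds *)

(* a debugger printer takes a formatter followed by the value to print *)
let print_ds (fmt : formatter) (d : ds) : unit =
  fprintf fmt "{ id = %d; label = %S }" d.id d.label

Inside ocamldebug, load the compiled module and register the printer; print list_ds will then use it for each element:

load_printer "my_printer.cmo"
install_printer My_printer.print_ds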

avoiding browser calls in R

I have an elaborate script that spans multiple functions (and files). For debugging purposes I need to embed browser calls in all sorts of nooks and crannies. When I have presumably fixed something, I want to run the whole thing without debugging, i.e. skipping the browser calls, because commenting them all out would take considerable effort on my part. #mdsumner on R chat suggested running the script in non-interactive mode (i.e. using Rscript.exe on Windows), but I would benefit from having it run in my console, so that I can access, for instance, traceback. I have gone through the browser docs and can find no option that comes close to what I'm trying to achieve. Any suggestions?
Here are three possibilities:
1) Overwrite the browser command. Add this to your global workspace to turn the browser calls off:
browser <- list
and to turn it back on
rm(browser)
This is probably the easiest, but it is a bit ugly because it leaves the browser variable in the global environment. (It works because list accepts any arguments and simply ignores them, so each browser() call becomes a harmless no-op.)
The next two solutions are slightly longer, but they use options instead, so no new variables are introduced into the global environment. They are also arranged so that if no option is set, no debugging is done; you only have to set an option if you want debugging. The if solution may be faster than the expr solution, although the difference is likely not material.
2) Use expr= argument with option. Replace each browser command with:
browser(expr = isTRUE(getOption("Debug")))
and then define the "Debug" option to be TRUE to turn debugging on.
options(Debug = TRUE)
or set it to something else or remove it to turn debugging off:
options(Debug = NULL)
3) Use if with an option. Replace each browser command with:
if (isTRUE(getOption("Debug"))) browser()
and then set the Debug option or not as in the prior point.
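Putting option 3 together in a toy function (a minimal sketch; f and its body are purely illustrative):

f <- function(x) {
  y <- x^2
  if (isTRUE(getOption("Debug"))) browser()  # pauses here only when the option is set
  y + 1
}

options(Debug = TRUE)   # debugging on: f(2) drops into the browser
options(Debug = NULL)   # debugging off: f(2) runs straight through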
Alternatively, define a global logical value
debug_mode <- TRUE
and then instead of browser() use
if (debug_mode) browser()
I think this just comes down to nuanced use of a debugging function. If you want to selectively control the use of browser(), put it inside an if that lets you enable or disable debugging for the function. When you want browser to be called, make that explicit like
myfun(x, debug = TRUE)
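A minimal sketch of that pattern (myfun and its body are illustrative):

myfun <- function(x, debug = FALSE) {
  if (debug) browser()    # opt-in breakpoint
  x * 2
}

myfun(1:3)                # normal run, no pause
myfun(1:3, debug = TRUE)  # stops in the browser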

How can I make an external toolbox available to a MATLAB Parallel Computing Toolbox job?

As a continuation of this question and the subsequent answer, does anyone know how to have a job created using the Parallel Computing Toolbox (using createJob and createTask) access external toolboxes? Is there a configuration parameter I can specify when creating the job to indicate toolboxes that should be loaded?
According to this section of the documentation, one way you can do this is to specify either the 'PathDependencies' property or the 'FileDependencies' property of the job object so that it points to the functions you need the job's workers to be able to use.
You should be able to point it at the KbCheck function in Psychtoolbox, along with any other functions or directories needed for KbCheck to work properly. It would look something like this:
obj = createJob('PathDependencies', {'path_to_KbCheck', ...
                'path_to_other_PTB_functions'});
A few comments, based on my work troubleshooting this:
It appears that there are inconsistencies in how well nested functions and anonymous functions work with the Parallel Computing Toolbox. I was unable to get them to work, while others have been able to. (Also see here.) As such, I would recommend having each function stored in its own file and including those files using the PathDependencies or FileDependencies properties, as described by gnovice above.
It is very hard to troubleshoot the Parallel Computing Toolbox, as everything happens outside your view. Use breakpoints liberally in your code, and the inspect command is your friend. Also note that if there is an error, task objects will contain an Error property, which in turn will contain an ErrorMessage string and possibly an Error.causes MException object. Both of these were immensely useful in debugging.
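For example, after a job's tasks have run you can pull these out of each task (a sketch written against the old-style get/set PCT interface used elsewhere in this thread; property names may vary across releases):

tasks = get(obj, 'Tasks');                   % all tasks belonging to the job
for k = 1:numel(tasks)
    msg = get(tasks(k), 'ErrorMessage');     % empty if the task succeeded
    if ~isempty(msg)
        fprintf('Task %d failed: %s\n', k, msg);
    end
end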
When including Psychtoolbox, you need to do it as follows. First, create a jobStartup.m file with the following lines:
PTB_path = '/Users/eliezerk/Documents/MATLAB/Psychtoolbox3/';
addpath( PTB_path );
cd( PTB_path );
SetupPsychtoolbox;
However, since the Parallel Computation toolkit can't handle any graphics functionality, running SetupPsychtoolbox as-is will actually cause your thread to crash. To avoid this, you need to edit the PsychtoolboxPostInstallRoutine function, which is called at the very end of SetupPsychtoolbox. Specifically, you want to comment out the line AssertOpenGL (line 496, as of the time of this answer; this may change in future releases).
