How can I make an external toolbox available to a MATLAB Parallel Computing Toolbox job?

As a continuation of this question and the subsequent answer, does anyone know how to have a job created using the Parallel Computing Toolbox (using createJob and createTask) access external toolboxes? Is there a configuration parameter I can specify when creating the job to indicate which toolboxes should be loaded?

According to this section of the documentation, one way you can do this is to specify either the 'PathDependencies' property or the 'FileDependencies' property of the job object so that it points to the functions the job's workers need to be able to use.
You should be able to point to the KbCheck function in Psychtoolbox, along with any other functions or directories needed for KbCheck to work properly. It would look something like this:
obj = createJob('PathDependencies',{'path_to_KbCheck',...
'path_to_other_PTB_functions'});

A few comments, based on my work troubleshooting this:
It appears that there are inconsistencies in how well nested functions and anonymous functions work with the Parallel Computing Toolbox. I was unable to get them to work, while others have been able to. (Also see here.) As such, I would recommend having each function stored in its own file, and including those files using the PathDependencies or FileDependencies properties, as described by gnovice above.
It is very hard to troubleshoot the Parallel Computing Toolbox, as everything happens outside your view. Use breakpoints liberally in your code, and the inspect command is your friend. Also note that if there is an error, task objects will contain an error parameter, which in turn contains an ErrorMessage string and possibly an Error.causes MException object. Both of these were immensely useful in debugging.
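Roughly, that looks like this (myTaskFunction and someInput are just placeholders, and the property names follow the older job/task API used above):
job = createJob('PathDependencies', {'path_to_KbCheck'});
createTask(job, @myTaskFunction, 1, {someInput});   % placeholder task
submit(job);
waitForState(job, 'finished');
tasks = get(job, 'Tasks');
for k = 1:numel(tasks)
    msg = get(tasks(k), 'ErrorMessage');
    if ~isempty(msg)
        disp(msg);                      % human-readable error text
        disp(get(tasks(k), 'Error'));   % MException object, if one was captured
    end
end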
When including Psychtoolbox, you need to do it as follows. First, create a jobStartup.m file with the following lines:
PTB_path = '/Users/eliezerk/Documents/MATLAB/Psychtoolbox3/';
addpath( PTB_path );
cd( PTB_path );
SetupPsychtoolbox;
However, since the Parallel Computing Toolbox can't handle any graphics functionality, running SetupPsychtoolbox as-is will actually cause your worker to crash. To avoid this, you need to edit the PsychtoolboxPostInstallRoutine function, which is called at the very end of SetupPsychtoolbox. Specifically, you want to comment out the line AssertOpenGL (line 496, as of the time of this answer; this may change in future releases).
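If I remember right, a jobStartup.m listed in FileDependencies gets copied to each worker and run automatically when the job starts, so the whole setup can look roughly like this (the paths are placeholders):
job = createJob('FileDependencies', {'/path/to/jobStartup.m'}, ...
                'PathDependencies', {'path_to_other_PTB_functions'});
createTask(job, @myExperimentTask, 1, {});   % placeholder task
submit(job);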

Related

How to avoid implicit include statements in Rhapsody code generation

I'm creating code for interfaces specified in IBM Rational Rhapsody. Rhapsody implicitly generates include statements for other data types used in my interfaces. But I would like to have more control over the include statements, so I specify them explicitly as text elements in the source artifacts of the component. Therefore I would like to prevent Rhapsody from generating the include statements itself. Is this possible?
If this can be done, it is most likely with Properties. In the feature box, click on properties and filter by 'include' to see some likely candidates. Not all of the properties have descriptions of what exactly they do, so good luck.
EDIT:
I spent some time looking through the properties as well and could not find any that do what you want. It seems likely you cannot do this with the basic version of Rhapsody. IBM does license an add-on to customize the code generation, called Rules Composer (I think); this would almost certainly allow you to customize the includes, but at quite a cost.
There are two other possible approaches. Depending on how you are customizing the include statements, you may be able to write a simple shell script, perhaps using sed, and then just run that script to update your code every time Rhapsody generates it.
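For example, something along these lines (the include pattern and the paths are purely made up; adapt them to the includes you actually want to strip):
# delete generated includes matching a hypothetical pattern from the generated sources
sed -i '/^#include "AutoGen_/d' generated/*.h generated/*.cpp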
The other approach would be to use the Rhapsody API to create a plugin/tool that iterates through all the interfaces and changes the source artifacts accordingly. I have not tried this method myself but I know my coworkers have used the API to do similar things.
Finally I found the properties that let Rhapsody produce the required output: GenerateImplicitDependencies for several elements and GenerateDeclarationDependency for Type elements. Disabling these will avoid the generation of implicit include statements.

Run user-submitted code in Go

I am working on an application which allows users to compare the execution of different string comparison algorithms. In addition to the several algorithms that are included (Boyer-Moore, KMP, and other "traditional" ones), I want to allow users to put in their own algorithms (these could be their own algorithms or modifications to the existing ones) to compare them.
Is there some way in Go to take code from the user (for example, from an HTML textarea) and execute it?
More specifically, I want the following characteristics:
I provide a method signature and they fill in whatever they want in the method.
A crash or a syntax error in their code should not cause my whole program to crash. It should instead allow me to catch the error and display an error message.
(In this case, I am not worried about security against malicious code because users will only be executing my program on their own machines, so security is their own responsibility.)
If it is not possible to do this natively with Go, I am open to embedding one of the following languages to use for the comparison functions (in order of preference): JavaScript, Python, Ruby, C. Is there any way to do any of those?
A clear No.
But you can do fancy stuff: why not recompile the program, including the user-provided code?
Split it in two: one driver which collects the user code, recompiles the actual comparison program, executes it, and reports the outcome.
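A rough sketch of such a driver, assuming the comparison program lives in a ./worker package whose main calls a userCompare function (all names here are made up for illustration):
package main

import (
    "fmt"
    "os"
    "os/exec"
)

func main() {
    // User-submitted function body, e.g. posted from an HTML textarea.
    userCode := "return len(text) - len(pattern)"

    src := "package main\n\nfunc userCompare(text, pattern string) int {\n" +
        userCode + "\n}\n"
    if err := os.WriteFile("worker/user_compare.go", []byte(src), 0o644); err != nil {
        panic(err)
    }

    // A syntax error in the user code surfaces here as a failed build,
    // not as a crash of this driver.
    build := exec.Command("go", "build", "-o", "worker/worker", "./worker")
    if out, err := build.CombinedOutput(); err != nil {
        fmt.Printf("compilation failed:\n%s", out)
        return
    }

    // A panic in the user code only kills the separate worker process.
    out, err := exec.Command("./worker/worker").CombinedOutput()
    fmt.Printf("worker output: %s (err: %v)\n", out, err)
}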
Including interpreters for other languages can be done too, e.g. Otto is a JavaScript interpreter written in Go. (C will be hard :-)
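A minimal sketch with Otto (github.com/robertkrimen/otto); the compare function below is just a stand-in for whatever the user submits:
package main

import (
    "fmt"

    "github.com/robertkrimen/otto"
)

func main() {
    vm := otto.New()
    // User-submitted JavaScript, e.g. from an HTML textarea.
    userJS := `function compare(text, pattern) { return text.indexOf(pattern); }`
    if _, err := vm.Run(userJS); err != nil {
        fmt.Println("syntax error in user code:", err) // reported, not a crash
        return
    }
    result, err := vm.Call("compare", nil, "hello world", "world")
    if err != nil {
        fmt.Println("runtime error in user code:", err)
        return
    }
    fmt.Println("result:", result) // 6
}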
Have you considered doing something similar to the gopherjs playground? According to this, the compilation is being done client-side.

Can I debug dynamically added Ruby method?

I want to store brief snippets of code in the database (following a standard signature) and "inject" them at runtime. One way would be using eval(my_code). Is there some way to debug the injected code using breakpoints, etc? (I'm using Rubymine)
I'm aware I can just log to console, etc, but I'd prefer IDE-style debugging if possible.
Hm. Let's analyze your question. Firstly, it does not seem to have anything to do with databases: you simply store a code block in source form somewhere. It can be a file, or a database. Secondly, you don't want IDE-style "debugging", but TDD-style. (But don't concentrate on that question now.)
What you need is assertions about your code. That is, you need to describe what output your code should produce given some example inputs. And then, you need to run that code and see whether it matches those expectations. Furthermore, if you are not sure about the source of your snippets, run them in a sandbox (with $SAFE = 4). If your code fails the expectations, raise nice errors (TypeError, or even better, your own custom exception), and then you can, e.g., rescue those exceptions and fall back to some default code snippets...
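A tiny sketch of that idea (the snippet and the expected value are invented for illustration):
snippet = "lambda { |x| x * 2 }"   # imagine this was loaded from the database
fn = begin
  eval(snippet)
rescue SyntaxError, StandardError => e
  warn "snippet rejected: #{e.message}"
  nil
end
raise TypeError, "snippet does not meet expectations" unless fn && fn.call(21) == 42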
... but maybe I'm not actually answering the same question that you are asking. If that's the case, then let me share a link to the sourcify gem, which lets you get at the source, so that you can insert a breakpoint by saying e.g. require 'rdebug' in the middle of the code, or even convert the code to sexps. That's all I know.

How can one get a list of Mathematica's built-in global rewrite rules?

I understand that over a thousand built-in rewrite rules in Mathematica populate the global rules table by default. Is there any way to get Mathematica to give a full or even partial list of those rules?
The best way is to get a job at Wolfram Research.
Failing that, I think that for things not completely compiled into the kernel you can recover most of the rules/definitions. Look at
Attributes[fn]
where fn is the command that you're interested in. If it returns
{Protected, ReadProtected}
then there's something you can get a look at (although often it's just a MakeBoxes (formatting) definition or an AutoLoad/Stub type definition). To see what's there, run
Unprotect[fn];
ClearAttributes[fn, ReadProtected];
??fn
Quite often you'll have to run an example of the command to load it if it was a stub. You'll also have to dig down from the user-facing commands to the back-end implementations.
Eventually you'll most likely reach a core command that is compiled into the kernel that you can not see the details of.
I previously mentioned this in tips for creating Graph diagrams and it got a mention in What is in your Mathematica tool bag?.
A good example, with a nice bite-sized and digestible bit of code, is Experimental`AngularSlider[], mentioned in Circular/Angular slider. I'll leave it up to you to look at the code produced.
Another example is something like BoxWhiskerChart, where you need to call it once in order to load all of the code. Then you see that BoxWhiskerChart proceeds to call Charting`iBoxWhiskerChart which you'll have to unprotect to look at, etc...
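For instance, following the same recipe (untested sketch, using the context name from above):
BoxWhiskerChart[RandomReal[1, 20]];   (* run once so the implementation gets loaded *)
Unprotect[Charting`iBoxWhiskerChart];
ClearAttributes[Charting`iBoxWhiskerChart, ReadProtected];
??Charting`iBoxWhiskerChart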

Cross version line matching

I'm considering how to do automatic bug tracking and as part of that I'm wondering what is available to match source code line numbers (or more accurate numbers mapped from instruction pointers via something like addr2line) in one version of a program to the same line in another. (Assume everything is in some kind of source control and is available to my code)
The simplest approach would be to use a diff tool/lib on the files and do some math on the line number spans, however this has some limitations:
It doesn't handle cross-file motion.
It might not play well with lines that get changed.
It doesn't look at the information available in the intermediate versions.
It provides no way to manually patch up lines when the diff tool gets things wrong.
It's kinda clunky.
Before I start diving into developing something better:
What already exists to do this?
What features do similar system have that I've not thought of?
Why do you need to do this? If you use decent source version control, you should have access to old versions of the code; you can simply provide a link to those so people can see the bug in its original place. In fact, the main problem I see with this system is that the bug may have already been fixed, but your automatic line tracking code will point to a line and say there's a bug there. Seems this system would be a pain to build, and not provide a whole lot of help in practice.
My suggestion is: instead of trying to track line numbers, which as you observed can quickly get out of sync as software changes, you should decorate each assertion (or other line of interest) with a unique identifier.
Assuming you're using C, in the case of assertions, this could be as simple as changing something like assert(x == 42); to assert(("check_x", x == 42)); -- this is functionally identical, due to the semantics of the comma operator in C and the fact that a string literal will always evaluate to true.
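As a compilable illustration (the "check_x" label is of course just an example):
#include <assert.h>

void check(int x)
{
    /* The comma operator discards the string; the assertion still tests only
       x == 42, but the label shows up in the failure message and survives
       line renumbering across versions. */
    assert(("check_x", x == 42));
}

int main(void)
{
    check(42);
    return 0;
}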
Of course this means that you need to identify a priori those items that you wish to track. But given that there's no generally reliable way to match up source line numbers across versions (by which I mean that for any mechanism you could propose, I believe I could propose a situation in which that mechanism does the wrong thing) I would argue that this is the best you can do.
Another idea: If you're using C++, you can make use of RAII to track dynamic scopes very elegantly. Basically, you have a Track class whose constructor takes a string describing the scope and adds this to a global stack of currently active scopes. The Track destructor pops the top element off the stack. The final ingredient is a static function Track::getState(), which simply returns a list of all currently active scopes -- this can be called from an exception handler or other error-handling mechanism.
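A bare-bones sketch of that Track class (illustrative only, and not thread-safe):
#include <iostream>
#include <string>
#include <vector>

class Track {
public:
    explicit Track(const std::string& scope) { stack().push_back(scope); }
    ~Track() { stack().pop_back(); }

    // All currently active scopes, outermost first.
    static std::vector<std::string> getState() { return stack(); }

private:
    static std::vector<std::string>& stack() {
        static std::vector<std::string> scopes;   // one per process, not per thread
        return scopes;
    }
};

void parseHeader()
{
    Track t("parseHeader");
    for (const std::string& s : Track::getState())   // e.g. called from an error handler
        std::cerr << s << '\n';                       // prints "main", then "parseHeader"
}

int main()
{
    Track t("main");
    parseHeader();
}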
