Remote parallel kernels and LibraryLink -- how to make them work together?

Does anyone have experience using C extensions to Mathematica (LibraryLink or MathLink -- currently I'm using LibraryLink) with remote parallel kernels?
In short: How can I transparently use a LibraryLink-defined function (see CreateLibrary and LibraryFunctionLoad) in both parallel and non-parallel evaluations when the subkernels run on a remote machine?
I am looking for setup steps that will give me a libraryFun function (written in C) that can be called either normally, as libraryFun[args], or in parallel (both as Parallelize@Table[libraryFun[arg], {arg, 0, 100}] and as the equivalent ParallelTable[...]) when the subkernels run on a remote machine.
Running the main kernel remotely too might be better if I weren't having trouble with that as well.
Update
In the meantime I made some progress. I'll describe it here.
First, ParallelEvaluate will evaluate an expression in all parallel kernels. If the source files for the C extension are copied to the remote machine, we can compile them there like this:
ParallelNeeds["CCompilerDriver`"]
k1 = First@Kernels[]
ParallelEvaluate[SetDirectory["/path/to/source/files"]]
ParallelEvaluate[CreateLibrary["sourcefile", "myLibrary"]]
This needs to be done only once. I assume that the library has already been compiled on the main kernel's machine.
After this, in all subsequent sessions we can use FindLibrary on both the main and the remote machines to load the library.
myFun = LibraryFunctionLoad[FindLibrary["myLibrary"], "fun", ...]
ParallelEvaluate[myFun = LibraryFunctionLoad[FindLibrary["myLibrary"], "fun", ...]]
And here comes the trouble. Because of different paths, myFun will have different values in the main and in the parallel kernels.
So the question is: How can I ensure that the value of myFun will not accidentally get synchronized between the main and the parallel kernels?
I'll show in an isolated example how this might accidentally happen:
In[1]:= LaunchKernels[2]
Out[1]= {KernelObject[1, "local"], KernelObject[2, "local"]}
Set value of x in main kernel:
In[2]:= x = 1
Out[2]= 1
Note that it gets the same value in remote kernels too:
In[3]:= ParallelEvaluate[x]
Out[3]= {1, 1}
Set a different value for x in the parallel kernels and verify that they keep it:
In[4]:= ParallelEvaluate[x = 2]
Out[4]= {2, 2}
In[5]:= {x, ParallelEvaluate[x]}
Out[5]= {1, {2, 2}}
Now "innocently" use Parallelize on something containing x:
In[6]:= Parallelize[Table[x, {10}]]
Out[6]= {1, 1, 1, 1, 1, 1, 1, 1, 1, 1}
And see how the value of x got re-synced between the main and subkernels.
In[7]:= {x, ParallelEvaluate[x]}
Out[7]= {1, {1, 1}}
The new question is: How can I prevent a certain symbol from ever auto-syncing between the main and the subkernels?

I hope this answers your question:
For the moment, I will assume the main kernel and the parallel kernels are on the same architecture, which for me is Windows 7. First you compile a function; you can do this outside
of Mathematica using a C compiler, or directly in Mathematica:
f = Compile[{x}, x^2, CompilationTarget -> "C"]
You can see where the generated DLL is located by looking at the InputForm of f:
f // InputForm
Gives something like:
CompiledFunction[{8, 8., 5468}, {_Real}, {{3, 0, 0}, {3, 0, 1}}, {}, {0, 0, 2, 0, 0}, {{40,
56, 3, 0, 0, 3, 0, 1}, {1}}, Function[{x}, x^2], Evaluate,
LibraryFunction["C:\\Users\\arnoudb\\AppData\\Roaming\\Mathematica\\ApplicationData\\CCompilerDriver\\BuildFolder\\arnoudb2win-5184\\compiledFunction0.dll",
"compiledFunction0", {{Real, 0, "Constant"}}, Real]]
You can copy this file to a location where the parallel (possibly remote) kernel can find it, for example:
CopyFile["C:\\Users\\arnoudb\\AppData\\Roaming\\Mathematica\\ApplicationData\\CCompilerDriver\\BuildFolder\\arnoudb2win-3316\\compiledFunction1.dll",
"c:\\users\\arnoudb\\compiledFunction1.dll"]
Then you can load the library in all the parallel kernels like so:
ParallelEvaluate[
ff = LibraryFunctionLoad["C:\\users\\arnoudb\\compiledFunction1.dll",
"compiledFunction1", {Real}, Real]
]
And check if this works:
ParallelEvaluate[ff[3.4]]
Which returns {11.56,11.56} for me.
If the parallel kernel is on a different architecture, you will need to compile the C code
for that architecture (or evaluate the Compile[..., CompilationTarget->"C"] on the parallel
kernel).
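As a sketch of that last point (assuming a C compiler is installed on the remote machines and CCompilerDriver` is available there), you can let each subkernel compile its own native binary:

```mathematica
(* Load the compiler driver on every subkernel: *)
ParallelNeeds["CCompilerDriver`"]

(* Compile the same function on each subkernel, so every
   architecture gets a matching native library: *)
ParallelEvaluate[f = Compile[{x}, x^2, CompilationTarget -> "C"]]

(* Each subkernel now calls its locally compiled version: *)
ParallelEvaluate[f[3.4]]
```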

I seem to have found a solution to my question in the Update above. It appears to work, but I cannot yet confirm that it is not fragile.
The solution is to put symbols that we don't want synchronized into a separate context. Using c`x in place of x in my example prevents the value of x from being synchronized when it is used inside Parallelize. We can then add this context to $ContextPath to make the symbol easily accessible.
The most convenient way to do this is probably to put all definitions in a package that loads the library functions using LibraryFunctionLoad[FindLibrary[...], ...]. For this to work, the library must first have been compiled manually on both the local and the remote machine; the package code, however, can be exactly the same for the main and the subkernels.
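A minimal sketch of such a package (the context, file, and function names here are illustrative; it assumes myLibrary has already been compiled on each machine so that FindLibrary can locate it):

```mathematica
(* MyLibrary.m -- identical source on the main and remote machines *)
BeginPackage["MyLibrary`"]

libraryFun::usage = "libraryFun[x] calls the C function fun from myLibrary.";

Begin["`Private`"]

(* libraryFun lives in the MyLibrary` context, not Global`, so
   Parallelize will not auto-distribute its machine-specific
   LibraryFunction value: *)
libraryFun = LibraryFunctionLoad[FindLibrary["myLibrary"], "fun", {Real}, Real];

End[]
EndPackage[]
```

Loading it with Needs["MyLibrary`"] on the main kernel and ParallelNeeds["MyLibrary`"] on the subkernels then gives each kernel its own handle under the same name.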
I am still interested if someone can confirm that variables not in $Context are guaranteed not to be auto-synchronized.
Update It has been confirmed here.

Related

Mathematica Manipulate Function

Does anybody know how I can use the Manipulate command together with the Show command?
Basically, I want to display multiple functions in one coordinate system. However, I want only one of them to be "manipulated" (i.e. the others should be static).
I cannot sort out how to use Show and Manipulate together.
Thanks for your help!
HR
If you don't want the manipulated variable, a, in one of the plots, simply omit it.
Manipulate[
Show[
Plot[a x, {x, 0, 3.5}],
ListPlot[{1, 2/a, 3/a}]],
{{a, 1}, 0, 2}]

is it possible to use second argument of Dynamic in setting up control variables inside Manipulate?

I can't get the syntax to do what I want, and I am now not sure if it is even possible.
small review:
One can do this:
{Slider[Dynamic[b], {-2 Pi, 2 Pi}], Dynamic[Sin[b]]}
and now each time the slider moves, 'b' changes, and its Sin[] is automatically printed
But suppose I want to do the computation (Sin[]) directly where the slider is and only show the final result of Sin[], then I can use the second argument of Dynamic like this:
{Slider[Dynamic[b, (b = #; a = Sin[b]; #) &], {-2 Pi, 2 Pi}],
Dynamic[a]}
Now I want to use Manipulate, and do the same thing. I can do the same as the first example above like this:
Manipulate[
Sin[b],
Control[{{b, 0, "b="}, -2 Pi, 2 Pi, ControlType -> Slider}]
]
In the above, Manipulate took care of the 'Dynamic' stuff, and updated Sin[b] whenever 'b' changes.
Now, I want to see if I can do the second case using Manipulate, so I can write:
Manipulate[
a,
Control[{{b, 0, "b="}, -2 Pi, 2 Pi, ControlType -> Slider}] (*where to insert Sin[b]?*)
]
('a' has to be initialized to some value for the initial display.)
But I am not able to figure out how to use the second argument for 'b' in the above. I can't work out the syntax, and now I am not sure whether it is possible at all.
Of course, one can't just write
Manipulate[
a,
{Slider[Dynamic[b, (b = #; a = Sin[b]; #) &], {-2 Pi, 2 Pi}]}
]
This is not even valid syntax for Manipulate controls.
Question is: Is it possible to use second argument of Dynamic in setting up Manipulate controls?
The reason I am asking, is that it could make it easier to 'localize' computation right there, where the control variable changes, and only show the final result elsewhere. Like a local callback function in a way, where computation related to changes for each control sits right next to where the control is.
thanks
Update 9/16/11
Well, after a month of struggle, finally I have the Mathematica CDF up and running.
Thanks to the help of the trick shown here by Simon, and others who answered my questions while I was doing this CDF (Leonid, Heike, WReach, Belisarius and others).
The few tricks I learned here helped finish this demonstration, here is a link if someone wants to try it.
The design of this CDF is different from everything I've done before. It is based on a finite state machine. There is only ONE tracked symbol in the whole CDF. Using the second argument of Dynamic, the event name is recorded; the main expression of Manipulate, which runs the finite state machine, has 5 states it can be in, and there are 8 events. Depending on the current state and the event that just happened, it switches to a new state (or stays in the same state), and the main display is updated. No trigger is needed. The state machine runs as fast as you allow it, controlled only by the time step size.
This simplifies the logic greatly and makes it possible to build much more advanced UIs with inter-dependent logic, since everything now runs in a well-controlled way and all the logic is in one place.
Without being able to set the event using the second argument of Dynamic, this whole approach would not have been possible.
I need to write a note on this method to make it more clear.
So, just wanted to thank everyone here. I am now finishing another CDF using this method, a triple pendulum simulation.
Not all of the arguments of Manipulate after the first one have to be Control objects. You can put anything you like there, including completely customized dynamic controls. So, how about something like
Manipulate[a, {a, None}, {b, None},
Row[{"b= ",Slider[Dynamic[b, (b = #; a = Sin[b]; #)&], {-2 Pi, 2 Pi}]}]]
where the {a, None} and {b, None} ensure that the variables are local; they aren't strictly necessary.

SaveDefinitions considered dangerous

SaveDefinitions is a nice option of Manipulate. It causes Manipulate to store any definitions used for its creation inside the Manipulate panel. A Manipulate made this way can be copied to an empty notebook and will still work on its own. Additionally, your working notebook containing many such Manipulates also doesn't turn into a flurry of pink boxes with printed error messages below it upon opening. Great!
However, all this goodness has its dark side which can bite you real hard if you are not aware of it. I've had this in a notebook I had been working on for a few days, but I present you with a step-by-step toy example scenario which recreates the problem.
In this scenario you want to create a Manipulate showing a plot of a nice wavy function, so you define this (please make a window size like this, this is important):
The definition is nice, so we keep it for the next time and make it an initialization cell. Next we add the Manipulate, and execute it too.
f[x_] := x^2
Manipulate[
Plot[n f[x], {x, -3, 3}],
{n, 1, 4},
SaveDefinitions -> True
]
All works great, the Manipulate really shines, it is a good day.
Just being your paranoid self you check whether the definition is OK:
Yeah, everything still checks out. Fine. But now it occurs to you that a better wavy function would be a sine, so you change the definition, execute, and being paranoid, check:
Everything is still fine. Done with a day's hard work, you save your notebook and quit. [Quit kernel]
Next day. You start your work again. You evaluate the initialization cells in your notebook. Definition still good? Check.
Now, you scroll down to your Manipulate box (no need to re-execute thanks to the SaveDefinitions), play a little with the slider. And scroll back up.
Being the paranoid you, you once more check the definition of f:
Lo and behold, someone has changed the definition behind your back! And nothing was executed between your first and second Information (?) check, according to the In[] numbers (In[1]: def of f, In[2]: first ?, In[3]: second ?).
What happened? Well, it's the Manipulate of course. A FullForm reveals its internal structure:
Manipulate[Plot[n*f[x],{x, -3, 3}],{{n, 2.44}, 1, 4},Initialization:>{f[x_] := x^2}]
There you have the culprit. The initialization part of the box defines f again, but it's the old version because we didn't re-evaluate the Manipulate after modifying its definition. As soon as the manipulate box gets on the screen, it is evaluated and you've got your old definition back. Globally!
Of course, in this toy example it is immediately clear something strange is happening. In my case, I had a larger module in a larger notebook in which I, after some debugging, had changed a small part. It seemed to work, but the next day, the same bug that had bugged me before hit again. It took me a couple of hours before I realized that one of the several Manipulates that I used to study the problem at hand from all sides was doing this.
Clearly, I'm tempted to say, this is unwanted behavior. Now, for the obligatory question: what can we do to prevent this behind-your-back behavior of Manipulate from occurring other than re-executing every Manipulate in your notebook each time you change a definition that might be used by them?
Here is an attempt. The idea is to identify symbols with DownValues or some other ...Values inside your manipulated code, and automatically rename them using unique variables / symbols in place of them. The idea here can be executed rather elegantly with the help of cloning symbols functionality, which I find useful from time to time. The function clone below will clone a given symbol, producing a symbol with the same global definitions:
Clear[GlobalProperties];
GlobalProperties[] :=
{OwnValues, DownValues, SubValues, UpValues, NValues, FormatValues,
Options, DefaultValues, Attributes};
Clear[unique];
unique[sym_] :=
ToExpression[
ToString[Unique[sym]] <>
StringReplace[StringJoin[ToString /@ Date[]], "." :> ""]];
Attributes[clone] = {HoldAll};
clone[s_Symbol, new_Symbol: Null] :=
With[{clone = If[new === Null, unique[Unevaluated[s]], ClearAll[new]; new],
sopts = Options[Unevaluated[s]]},
With[{setProp = (#[clone] = (#[s] /. HoldPattern[s] :> clone)) &},
Map[setProp, DeleteCases[GlobalProperties[], Options]];
If[sopts =!= {}, Options[clone] = (sopts /. HoldPattern[s] :> clone)];
HoldPattern[s] :> clone]]
There are several alternatives for how to implement the function itself. One is to introduce a function with another name, taking the same arguments as Manipulate, say myManipulate. I will use another one: softly overload Manipulate via UpValues of a custom wrapper, which I will introduce and call CloneSymbols. Here is the code:
ClearAll[CloneSymbols];
CloneSymbols /:
Manipulate[args___,CloneSymbols[sd:(SaveDefinitions->True)],after:OptionsPattern[]]:=
Unevaluated[Manipulate[args, sd, after]] /.
Cases[
Hold[args],
s_Symbol /; Flatten[{DownValues[s], SubValues[s], UpValues[s]}] =!= {} :>
clone[s],
Infinity, Heads -> True];
Here is an example of use:
f[x_] := Sin[x];
g[x_] := x^2;
Note that to use the new functionality, one has to wrap the SaveDefinitions->True option in CloneSymbols wrapper:
Manipulate[Plot[ f[n g[x]], {x, -3, 3}], {n, 1, 4},
CloneSymbols[SaveDefinitions -> True]]
This will not affect the definitions of the original symbols in the code inside Manipulate, since it is their clones whose definitions have been saved and are now used in the initialization. We can look at the FullForm of this Manipulate to confirm that:
Manipulate[Plot[f$37782011751740542578125[Times[n,g$37792011751740542587890[x]]],
List[x,-3,3]],List[List[n,1.9849999999999999`],1,4],RuleDelayed[Initialization,
List[SetDelayed[f$37782011751740542578125[Pattern[x,Blank[]]],Sin[x]],
SetDelayed[g$37792011751740542587890[Pattern[x,Blank[]]],Power[x,2]]]]]
In particular, you can change the definitions of functions to say
f[x_]:=Cos[x];
g[x_]:=x;
Then move the slider of the Manipulate produced above, and then check the function definitions
?f
Global`f
f[x_]:=Cos[x]
?g
Global`g
g[x_]:=x
This Manipulate is reasonably independent of anything and can be copied and pasted safely. What happens here is the following: we first find all symbols with non-trivial DownValues, SubValues or UpValues (one can probably add OwnValues as well), and use Cases and clone to create their clones on the fly. We then replace lexically all the cloned symbols with their clones inside Manipulate, and then let Manipulate save the definitions for the clones. In this way, we make a "snapshot" of the functions involved, but do not affect the original functions in any way.
The uniqueness of the clones (symbols) has been addressed with the unique function. Note however, that while the Manipulate-s obtained in this way do not threaten the original function definitions, they will generally still depend on them, so one can not consider them totally independent of anything. One would have to walk down the dependency tree and clone all symbols there, and then reconstruct their inter-dependencies, to construct a fully standalone "snapshot" in Manipulate. This is doable but more complicated.
EDIT
Per request of @Sjoerd, I add code for the case when we do want our Manipulate-s to pick up changes to the functions, but do not want them to actively interfere and change any global definitions. I suggest a variant of a "pointer" technique: we will again replace function names with new symbols, but, rather than cloning those new symbols from our functions, we will use the Manipulate's Initialization option to simply make those symbols "pointers" to our functions, for example like Initialization :> {new1 := f, new2 := g}. Clearly, re-evaluation of such initialization code cannot harm the definitions of f or g, while at the same time our Manipulate-s become responsive to changes in those definitions.
The first thought is that we could simply replace the function names by new symbols and let the Manipulate initialization automatically do the rest. Unfortunately, in that process it walks the dependency tree, and therefore the definitions of our functions would also be included, which is what we are trying to avoid. So, instead, we will explicitly construct the Initialization option. Here is the code:
ClearAll[SavePointers];
SavePointers /:
Manipulate[args___,SavePointers[sd :(SaveDefinitions->True)],
after:OptionsPattern[]] :=
Module[{init},
With[{ptrrules =
Cases[Hold[args],
s_Symbol /; Flatten[{DownValues[s], SubValues[s], UpValues[s]}] =!= {} :>
With[{pointer = unique[Unevaluated[s]]},
pointer := s;
HoldPattern[s] :> pointer],
Infinity, Heads -> True]},
Hold[ptrrules] /.
(Verbatim[HoldPattern][lhs_] :> rhs_ ) :> (rhs := lhs) /.
Hold[defs_] :>
ReleaseHold[
Hold[Manipulate[args, Initialization :> init, after]] /.
ptrrules /. init :> defs]]]
With the same definitions as before:
ClearAll[f, g];
f[x_] := Sin[x];
g[x_] := x^2;
Here is a FullForm of produced Manipulate:
In[454]:=
FullForm[Manipulate[Plot[f[n g[x]],{x,-3,3}],{n,1,4},
SavePointers[SaveDefinitions->True]]]
Out[454]//FullForm=
Manipulate[Plot[f$3653201175165770507872[Times[n,g$3654201175165770608016[x]]],
List[x,-3,3]],List[n,1,4],RuleDelayed[Initialization,
List[SetDelayed[f$3653201175165770507872,f],SetDelayed[g$3654201175165770608016,g]]]]
The newly generated symbols serve as "pointers" to our functions. Manipulate-s constructed with this approach will be responsive to updates of our functions, and at the same time harmless to the main functions' definitions. The price to pay is that they are not self-contained and will not display correctly if the main functions are undefined. So, one can use either the CloneSymbols wrapper or SavePointers, depending on what is needed.
The answer is to use the initialization cell as the initialization for the Manipulate:
Manipulate[
Plot[n f[x], {x, -3, 3}], {n, 1, 4},
Initialization :> FrontEndTokenExecute["EvaluateInitialization"]]
You can also use DynamicModule:
DynamicModule[{f},
f[x_] := x^2;
Manipulate[Plot[n f[x], {x, -3, 3}], {n, 1, 4}]]
You do not need SaveDefinitions -> True in this case.
EDIT
In response to Sjoerd's comment. With the following simple technique you do not need to copy the definition everywhere and update all copies if you change the definition (but you still need to re-evaluate your code to get updated Manipulate):
DynamicModule[{f}, f[x_] := x^2;
list = Manipulate[Plot[n^# f[x], {x, -3, 3}], {n, 2, 4}] & /@ Range[3]];
list // Row

What are the benefits of switching from Rule and /. to OptionsPattern[] and OptionValue in a large application?

Old habits die hard, and I realise that I have been using opts___Rule pattern-matching and constructs like thisoption /. {opts} /. Options[myfunction] in the very large package that I'm currently developing. Sal Mangano's "Mathematica Cookbook" reminds me that the post-version-6 way of doing this is opts:OptionsPattern[] and OptionValue[thisoption]. The package requires version 8 anyway, but I had just never changed the way I wrote this kind of code over the years.
Is it worth refactoring all that from my pre-version-6 way of doing things? Are there performance or other benefits?
Regards
Verbeia
EDIT: Summary
A lot of good points were made in response to this question, so thank you (and plus one, of course) all. To summarise, yes, I should refactor to use OptionsPattern and OptionValue. (NB: OptionsPattern not OptionPattern as I had it before!) There are a number of reasons why:
It's a touch faster (@Sasha)
It better handles functions whose arguments must remain unevaluated (held) (@Leonid)
OptionsPattern automatically checks that you are passing a valid option to that function; FilterRules would still be needed if you are passing options on to a different function (@Leonid)
It handles RuleDelayed (:>) much better (@rcollyer)
It handles nested lists of rules without using Flatten (@Andrew)
It is a bit easier to assign multiple local variables using OptionValue /@ list instead of having multiple calls to someoptions /. {opts} /. Options[thisfunction] (came up in comments between @rcollyer and me)
EDIT: 25 July I initially thought that the one case where the /. syntax might still make sense is when you deliberately extract a default option from another function, not the one actually being called. It turns out that this is handled by using the form of OptionsPattern[] that takes a list of heads, for example: OptionsPattern[{myLineGraph, DateListPlot, myDateTicks, GraphNotesGrid}] (see the "More information" section of the documentation). I only worked this out recently.
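A hedged sketch of that form (myLineGraph and its "LineLabel" option are hypothetical; DateListPlot and its Joined option are real):

```mathematica
Options[myLineGraph] = {"LineLabel" -> None};

(* Listing several heads lets OptionValue fall back to DateListPlot's
   defaults for options that myLineGraph itself does not define: *)
myLineGraph[data_, opts : OptionsPattern[{myLineGraph, DateListPlot}]] :=
 {OptionValue["LineLabel"], OptionValue[Joined]}

myLineGraph[{1, 2, 3}]  (* Joined comes from DateListPlot's defaults *)
myLineGraph[{1, 2, 3}, Joined -> False, "LineLabel" -> "a"]
```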
It seems that relying on the pattern-matcher yields faster execution than using PatternTest, since the latter entails an invocation of the evaluator. Anyway, my timings indicate that some speed-up can be achieved, but I do not think it is so critical as to prompt refactoring.
In[7]:= f[x__, opts : OptionsPattern[NIntegrate]] := {x,
OptionValue[WorkingPrecision]}
In[8]:= f2[x__, opts___?OptionQ] := {x,
WorkingPrecision /. {opts} /. Options[NIntegrate]}
In[9]:= AbsoluteTiming[Do[f[1, 2, PrecisionGoal -> 17], {10^6}];]
Out[9]= {5.0885088, Null}
In[10]:= AbsoluteTiming[Do[f2[1, 2, PrecisionGoal -> 17], {10^6}];]
Out[10]= {8.0908090, Null}
In[11]:= f[1, 2, PrecisionGoal -> 17]
Out[11]= {1, 2, MachinePrecision}
In[12]:= f2[1, 2, PrecisionGoal -> 17]
Out[12]= {1, 2, MachinePrecision}
While several answers have stressed different aspects of old vs. new way of using options, I'd like to make a few additional observations. The newer constructs OptionValue - OptionsPattern provide more safety than OptionQ, since OptionValue inspects a list of global Options to make sure that the passed option is known to the function. The older OptionQ seems however easier to understand since it is based only on the standard pattern-matching and isn't directly related to any of the global properties. Whether or not you want this extra safety provided by any of these constructs is up to you, but my guess is that most people find it useful, especially for larger projects.
One reason why these type checks are really useful is that often options are passed as parameters by functions in a chain-like manner, filtered, etc., so without such checks some of the pattern-matching errors would be very hard to catch since they would be causing harm "far away" from the place of their origin.
In terms of the core language, the OptionValue - OptionsPattern constructs are an addition to the pattern-matcher, and perhaps the most "magical" of all its features. It was not necessary semantically, as long as one is willing to consider options as a special case of rules. Moreover, OptionValue connects the pattern-matching to Options[symbol] - a global property. So, if one insists on language purity, rules as in opts___?OptionQ seem easier to understand - one does not need anything except the standard rule-substitution semantics to understand this:
f[a_, b_, opts___?OptionQ] := Print[someOption/.Flatten[{opts}]/.Options[f]]
(recall that the OptionQ predicate was designed specifically to recognize options in older versions of Mathematica), while this:
f[a_, b_, opts:OptionsPattern[]] := Print[OptionValue[someOption]]
looks quite magical. It becomes a bit clearer when you use Trace and see that the short form of OptionValue evaluates to a longer form, but the fact that it automatically determines the enclosing function name is still remarkable.
There are a few more consequences of OptionsPattern being a part of the pattern language. One is the speed improvement discussed by @Sasha. However, speed issues are often over-emphasized (this is not to detract from his observations), and I expect this to be especially true for functions with options, since these tend to be higher-level functions with non-trivial bodies, where most of the computation time will be spent.
Another rather interesting difference arises when one needs to pass options to a function that holds its arguments. Consider the following example:
ClearAll[f, ff, fff, a, b, c, d];
Options[f] = Options[ff] = {a -> 0, c -> 0};
SetAttributes[{f, ff}, HoldAll];
f[x_, y_, opts___?OptionQ] :=
{{"Parameters:", {HoldForm[x], HoldForm[y]}}, {" options: ", {opts}}};
ff[x_, y_, opts : OptionsPattern[]] :=
{{"Parameters:", {HoldForm[x], HoldForm[y]}}, {" options: ", {opts}}};
This is ok:
In[199]:= f[Print["*"],Print["**"],a->b,c->d]
Out[199]= {{Parameters:,{Print[*],Print[**]}},{ options: ,{a->b,c->d}}}
But here our OptionQ-based function leaks evaluation as a part of pattern-matching process:
In[200]:= f[Print["*"],Print["**"],Print["***"],a->b,c->d]
During evaluation of In[200]:= ***
Out[200]= f[Print[*],Print[**],Print[***],a->b,c->d]
This is not completely trivial. What happens is that the pattern-matcher, to establish a fact of match or non-match, must evaluate the third Print, as a part of evaluation of OptionQ, since OptionQ does not hold arguments. To avoid the evaluation leak, one needs to use Function[opt,OptionQ[Unevaluated[opt]],HoldAll] in place of OptionQ. With OptionsPattern we don't have this problem, since the fact of the match can be established purely syntactically:
In[201]:= ff[Print["*"],Print["**"],a->b,c->d]
Out[201]= {{Parameters:,{Print[*],Print[**]}},{ options: ,{a->b,c->d}}}
In[202]:= ff[Print["*"],Print["**"],Print["***"],a->b,c->d]
Out[202]= ff[Print[*],Print[**],Print[***],a->b,c->d]
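For completeness, here is a sketch of the workaround mentioned above: giving the predicate the HoldAll attribute so the test itself no longer evaluates held arguments (fh is a hypothetical variant of the f defined earlier):

```mathematica
ClearAll[fh];
Options[fh] = {a -> 0, c -> 0};
SetAttributes[fh, HoldAll];

(* The held predicate receives each candidate argument unevaluated,
   so testing Print["***"] no longer triggers the side effect: *)
fh[x_, y_, opts___?(Function[opt, OptionQ[Unevaluated[opt]], HoldAll])] :=
 {{"Parameters:", {HoldForm[x], HoldForm[y]}}, {" options: ", {opts}}};

(* Fails to match, as before, but now without printing "***": *)
fh[Print["*"], Print["**"], Print["***"], a -> b, c -> d]
```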
So, to summarize: I think choosing one method over the other is largely a matter of taste; each one can be used productively, and each one can be abused. I am more inclined to use the newer way, since it provides more safety, but I do not exclude that there exist corner cases where it will surprise you, while the older method is semantically easier to understand. This is somewhat similar to the C vs. C++ comparison (if that is an appropriate one): automation and (possibly) safety versus simplicity and purity. My two cents.
A little known (but frequently useful) fact is that options are allowed to appear in nested lists:
In[1]:= MatchQ[{{a -> b}, c -> d}, OptionsPattern[]]
Out[1]= True
The options handling functions such as FilterRules know about this:
In[2]:= FilterRules[{{PlotRange -> 3}, PlotStyle -> Blue,
MaxIterations -> 5}, Options[Plot]]
Out[2]= {PlotRange -> 3, PlotStyle -> RGBColor[0, 0, 1]}
OptionValue takes it into account:
In[3]:= OptionValue[{{a -> b}, c -> d}, a]
Out[3]= b
But ReplaceAll (/.) doesn't take this into account of course:
In[4]:= a /. {{a -> b}, c -> d}
During evaluation of In[4]:= ReplaceAll::rmix: Elements of {{a->b},c->d} are a mixture of lists and nonlists. >>
Out[4]= a /. {{a -> b}, c -> d}
So, if you use OptionsPattern, you should probably also use OptionValue to ensure that you can consume the set of options the user passes in.
On the other hand, if you use ReplaceAll (/.), you should stick to opts___Rule for the same reason.
Note that opts___Rule is also a little bit too forgiving in certain (admittedly obscure) cases:
Not a valid option:
In[5]:= MatchQ[Unevaluated[Rule[a]], OptionsPattern[]]
Out[5]= False
But ___Rule lets it through:
In[6]:= MatchQ[Unevaluated[Rule[a]], ___Rule]
Out[6]= True
Update: As rcollyer pointed out, another more serious problem with ___Rule is that it misses options specified with RuleDelayed (:>). You can work around it (see rcollyer's answer), but it's another good reason to use OptionValue.
Your code itself has a subtle, but fixable flaw. The pattern opts___Rule will not match options of the form a :> b, so if you ever need to use it, you'll have to update your code. The immediate fix is to replace opts___Rule with opts:(___Rule | ___RuleDelayed) which requires more typing than OptionsPattern[]. But, for the lazy among us, OptionValue[...] requires more typing than the short form of ReplaceAll. However, I think it makes for cleaner reading code.
I find the use of OptionsPattern[] and OptionValue to be easier to read and instantly comprehend what is being done. The older form of opts___ ... and ReplaceAll was much more difficult to comprehend on a first pass read through. Add to that, the clear timing advantages, and I'd go with updating your code.
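For reference, the refactoring itself is mechanical. A hedged before/after sketch (myPlot, myPlotNew, and their "Joined" option are invented for illustration):

```mathematica
(* Old, pre-version-6 style: *)
ClearAll[myPlot];
Options[myPlot] = {"Joined" -> True};
myPlot[data_, opts___Rule] :=
 If["Joined" /. {opts} /. Options[myPlot],
  ListLinePlot[data], ListPlot[data]]

(* Refactored: OptionsPattern[] also matches :> rules and nested
   option lists, and OptionValue validates the name against Options: *)
ClearAll[myPlotNew];
Options[myPlotNew] = {"Joined" -> True};
myPlotNew[data_, OptionsPattern[]] :=
 If[OptionValue["Joined"], ListLinePlot[data], ListPlot[data]]
```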

Accidental shadowing and `Removed[symbol]`

If you evaluate the following code twice, results will be different. Can anyone explain what's going on?
findHull[points_] := Module[{},
Needs["ComputationalGeometry`"];
ConvexHull[points]
];
findHull[RandomReal[1, {10, 2}]];
Remove["Global`ConvexHull"];
findHull[RandomReal[1, {10, 2}]]
The problem is that even though the module is not evaluated until you call findHull, the symbols are resolved when you define findHull (i.e., the new downvalue for findHull is stored in terms of symbols, not text).
This means that during the first round, ConvexHull resolves to Global`ConvexHull because the Needs has not been evaluated yet.
During the second round, ComputationalGeometry` is on $ContextPath, and so ConvexHull resolves as you intended.
If you really cannot bear to load ComputationalGeometry beforehand, just refer to ConvexHull by its full name: ComputationalGeometry`ConvexHull. See also this related answer.
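That is, a minimal fix along the lines suggested:

```mathematica
(* Referring to the symbol by its fully qualified name makes the
   definition independent of when the package context is loaded: *)
findHull[points_] := Module[{},
  Needs["ComputationalGeometry`"];
  ComputationalGeometry`ConvexHull[points]
  ];
```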
HTH
Not a direct answer to the question, but a bit too large for a comment. As another alternative, a general way to delay the symbol parsing until run-time is to use Symbol["your-symbol-name"]. In your case, you can replace ConvexHull on the r.h.s. of your definition by Symbol["ConvexHull"]:
findHull[points_] :=
Module[{},
Needs["ComputationalGeometry`"];
Symbol["ConvexHull"][points]];
This solution is not very elegant though, since Symbol["ConvexHull"] will be executed afresh every time. It can also be somewhat error-prone if you do non-trivial manipulations with $ContextPath. Here is a modified version, combined with a generally useful self-redefinition trick that I use in similar cases:
Clear[findHull];
findHull[points_] :=
Module[{},
Needs["ComputationalGeometry`"];
With[{ch = Symbol["ConvexHull"]},
findHull[pts_] := ch[pts];
findHull[points]]];
For example,
findHull[RandomReal[1, {10, 2}]]
{4, 10, 9, 1, 6, 2, 5}
What happens is that the first time the function is called, the original definition with Module gets replaced by the inner one, and that happens only after the needed package has been loaded and its context placed on $ContextPath. Here we exploit the fact that Mathematica replaces an old definition with a new one when it can determine that the patterns are the same, as it can in such cases.
Other instances where the self-redefinition trick is useful are cases when, for example, a function call results in some expensive computation that we want to cache, but we are not sure whether the function will be called at all. Such a construct then caches the computed (say, symbolic) result automatically upon the first call to the function.
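The caching use of self-redefinition is often written in the compact memoization form, for example:

```mathematica
(* On the first call with a given n, fib redefines itself for that
   specific argument, so later calls are plain lookups: *)
fib[0] = 0; fib[1] = 1;
fib[n_Integer /; n > 1] := fib[n] = fib[n - 1] + fib[n - 2]

fib[30]   (* computed once; evaluates to 832040 *)
```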
