How do you debug a generic function (using debug, or mtrace in the debug package)?
As an example, I want to debug cenreg in the NADA package, specifically the method that takes a formula input.
You can retrieve the method details like this:
library(NADA)
getMethod("cenreg", c("formula", "missing", "missing"))
function (obs, censored, groups, ...)
{
.local <- function (obs, censored, groups, dist, conf.int = 0.95,
...)
{
dist = ifelse(missing(dist), "lognormal", dist)
...
}
The problem is that cenreg itself looks like this:
body(cenreg)
# standardGeneric("cenreg")
I don't know how to step through the underlying method, rather than the generic wrapper.
My first two suggestions are pretty basic: (1) wrap your function call in a try() (that frequently provides more information with S4 classes) and (2) call traceback() after the error is thrown (that can sometimes give hints to where the problem is really occurring).
Calling debug() won't help in this scenario, so you need to use trace or browser. From the debug help page:
"In order to debug S4 methods (see Methods), you need to use trace, typically
calling browser, e.g., as "
trace("plot", browser, exit=browser, signature = c("track", "missing"))
S4 classes can be hard to work with; one example of this is the comment in the debug package documentation (regarding the usage of mtrace() with S4 classes):
"I have no plans to write S4 methods, and hope not to have to
debug other people’s!"
A similar question was asked recently on R-Help. The recommendation from Duncan Murdoch:
"You can insert a call to browser() if you want to modify the source. If
you'd rather not do that, you can use trace() to set a breakpoint in it.
The new setBreakpoint() function in R 2.10.0 will also work, if you
install the package from source with the R_KEEP_PKG_SOURCE=yes
environment variable set. It allows you to set a breakpoint at a
particular line number in the source code."
I've never done this before myself (and it requires R 2.10.0), but you might try installing from source with R_KEEP_PKG_SOURCE=yes.
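For example (a sketch; the file name and line number are hypothetical and depend on how the package sources are laid out):
setBreakpoint("cenreg.R#12")   # break at line 12 of the package source file cenreg.R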
Incidentally, you can use the CRAN mirror of NADA on GitHub to browse the source.
For a long time this was a standard annoyance point for S4 method debugging. As pointed out by Charles Plessy, I worked with Michael Lawrence to add a number of features to R that are intended to make this easier.
debug, debugonce, undebug, and isdebugged all now take a signature argument suitable for specifying S4 methods. Furthermore, debugging S4 methods this way bypasses the awkward implementation detail that you previously had to deal with by hand: trace()-ing into the method with a call to browser, stepping through to the .local definition, debugging that, and then continuing.
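For the question's example, that looks like this (a sketch; the signature arguments landed in R 3.4.0, if I recall correctly, so you need at least that version):
debug(cenreg, signature = c("formula", "missing", "missing"))
cenreg(Cen(obs, cen) ~ 1)   # stops inside the method body, not in standardGeneric()
isdebugged(cenreg, signature = c("formula", "missing", "missing"))   # TRUE
undebug(cenreg, signature = c("formula", "missing", "missing"))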
In addition, I added debugcall, to which you give an actual, full call that you want to invoke. Doing so sets debugging on the first closure that will be invoked when evaluating that call and that is not an S3 or S4 standard generic. So if you are calling a non-generic, that is just the top-level function being called, but if it is a standard S3 or S4 generic, the first method that will be hit is debugged instead of the generic. A "standard S3 generic" is defined as a function whose first top-level call in the body (ignoring curly braces) is a call to UseMethod.
Note that we went back and forth on the design of this, but at the end of the day settled on debugcall not actually executing the call being debugged; instead it returns the call expression, which you can pass to eval if desired, as illustrated in ?debugcall.
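A minimal sketch of that workflow (the cenreg call is again hypothetical):
cl <- debugcall(cenreg(Cen(obs, cen) ~ 1))   # flags the method; the call is not run
eval(cl)                                     # runs the call, stopping in the method
undebugcall(cl)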
Related
I am using QTP 11.5 for automating a web application. I am trying to call an action in QTP through a driverscript as below:
RFSTestPath = "D:\vf74\D Drive\RFS Automation\"
LoadAndRunAction RFSTestPath & "LogInApplication", "Action1", oneIteration
Inside the LogInApplication(Action1) am calling a login function as:
Call fncLogInApplication(strURL, strUserName, strPassword)
Definition of fncLogInApplication is written in fncLogInApplication.vbs
When I associate the fncLogInApplication.vbs file to the driverscript, I am able to execute my code without any errors. But when I de-associate the .vbs file from the driverscript and associate it to the LogInApplication test, I get "Type mismatch: 'fncLogInApplication'".
Can anyone help me with the association, please? I want fncLogInApplication to be executed when I associate it to LogInApplication, not to the main driverscript.
Please comment back if you require any more info.
There is only one set of associated libraries that is active at any one time: that is always the outermost test's.
This means that if test A calls test B, test B will be executed with the libraries loaded based upon test A's associated libraries list, not B's.
This also means that if B depends on a library, and B associates this library, but B is called from test A (which does not associate this library), then B will fail to call (locate) the function, since the associated libraries of B are never loaded (only those from A are). (A would fail just the same, naturally.)
If you are still interested: "Type mismatch" is QTP's (or VBScript's) poor way of telling you: "The function called is not known, so I bet you instead meant an array variable dereference; but the variable you specified is Empty, so it is not an array, and thus cannot be dereferenced as an array variable, which is what I call a 'type mismatch'."
This reasoning is valid considering the syntax tree of VB/VBScript: function calls and array variable dereferences cannot be formally differentiated - syntactically, they are very similar, or identical in most cases. So be prepared to treat "Type mismatch" as the "Unknown function referenced" message that VB/VBScript never displays.
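You can reproduce this with nothing but an Empty variable (names made up):
Dim notAFunction             ' Empty: certainly not an array
MsgBox notAFunction(1, 2)    ' runtime error: Type mismatch: 'notAFunction'
' A misspelled or unknown function name produces exactly the same error,
' because the engine cannot tell the two constructs apart.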
You can, however, load the library you want in test B's code (for example, using LoadFunctionLibrary), but this also allows A to call functions from that library once B has loaded it and returned from A's call. This, and all the possible variations of this procedure, have side effects on aspects like debugging, forward references, and visibility of global variables, so I would recommend against it.
Additional notes:
There is no good reason to use CALL. Just call the sub or function.
If you call a function and use the result it returns, you must include the arguments in parentheses.
If you call a sub (or a function whose result you don't use), you must not put the arguments in parentheses. If the sub or function accepts only one argument, it might look like you are allowed to put it in parentheses, but this is not true: in that case, the argument is simply treated as a term in parentheses.
The argument "bracketing" aspects just listed can create very nasty bugs, especially if an argument is byRef, also due (but not limited) to the fact that VBScript unfortunately allows you to pass values for a byRef argument (where a variable parameter is expected). So it is generally a good idea to use parentheses only where they belong (i.e. where absolutely needed), as the sketch below shows.
I am developing a (large) package which does not load properly anymore.
This happened after I changed a single line of code.
When I attempt to load the package (with Needs), the package starts loading and then one of the SetDelayed definitions "comes alive" (i.e. is somehow evaluated), gets trapped in an error-trapping routine loaded a few lines before, and the package loading aborts.
The error-trapping routine with Abort is doing its job, except that it should not have been called in the first place, during the package-loading phase.
The error message reveals that the offending argument is in fact a pattern expression which I use on the l.h.s. of a SetDelayed definition a few lines later.
Something like this:
... some code lines
Changed line of code
g[x_?NotGoodQ]:=(Message[g::nogood, x];Abort[])
... some other code lines
g/: cccQ[g[x0_]]:=True
When I attempt to load the package, I get:
g::nogood: Argument x0_ is not good
As you see the passed argument is a pattern and it can only come from the code line above.
I tried to find the reason for this behavior, but I have been unsuccessful so far.
So I decided to use the powerful Workbench debugging tools.
I would like to see step by step (or with breakpoints) what happens when I load the package.
I am not yet too familiar with WB, but it seems that, using Debug As..., the package is first loaded and then eventually debugged with breakpoints, etc.
My problem is that the package does not even load completely! And any breakpoint set before loading the package does not seem to be effective.
So…2 questions:
can anybody please explain why these code lines "come alive" during package loading? (there are no obvious syntax errors or code fragments left in the package as far as I can see)
can anybody please explain how (and if) it is possible to examine/debug package code while it is being loaded in WB?
Thank you for any help.
Edit
In light of Leonid's answer and using his EvenQ example:
We can avoid using HoldPattern simply by defining upvalues for g BEFORE downvalues for g:
notGoodQ[x_] := EvenQ[x];
Clear[g];
g /: cccQ[g[x0_]] := True
g[x_?notGoodQ] := (Message[g::nogood, x]; Abort[])
Now
?g
Global`g
cccQ[g[x0_]]^:=True
g[x_?notGoodQ]:=(Message[g::nogood,x];Abort[])
In[6]:= cccQ[g[1]]
Out[6]= True
while
In[7]:= cccQ[g[2]]
During evaluation of In[7]:= g::nogood: -- Message text not found -- (2)
Out[7]= $Aborted
So...general rule:
When writing a function g, first define upvalues for g, then define downvalues for g; otherwise use HoldPattern.
Can you subscribe to this rule?
Leonid says that using HoldPattern might indicate improvable design. Besides the solution indicated above, how could one improve the design of the little code above, or, better, in general when dealing with upvalues?
Thank you for your help
Leaving aside the WB (which is not really needed to answer your question) - the problem seems to have a straightforward answer based only on how expressions are evaluated during assignments. Here is an example:
In[1505]:=
notGoodQ[x_]:=True;
Clear[g];
g[x_?notGoodQ]:=(Message[g::nogood,x];Abort[])
In[1509]:= g/:cccQ[g[x0_]]:=True
During evaluation of In[1509]:= g::nogood: -- Message text not found -- (x0_)
Out[1509]= $Aborted
To make it work, I deliberately made a definition for notGoodQ to always return True. Now, why was g[x0_] evaluated during the assignment through TagSetDelayed? The answer is that, while TagSetDelayed (as well as SetDelayed) in an assignment h/:f[h[elem1,...,elemn]]:=... does not apply any rules that f may have, it will evaluate h[elem1,...,elemn], as well as f. Here is an example:
In[1513]:=
ClearAll[h,f];
h[___]:=Print["Evaluated"];
In[1515]:= h/:f[h[1,2]]:=3
During evaluation of In[1515]:= Evaluated
During evaluation of In[1515]:= TagSetDelayed::tagnf: Tag h not found in f[Null]. >>
Out[1515]= $Failed
The fact that TagSetDelayed is HoldAll does not mean that it does not evaluate its arguments - it only means that the arguments arrive to it unevaluated, and whether or not they will be evaluated depends on the semantics of TagSetDelayed (which I briefly described above). The same holds for SetDelayed, so the commonly used statement that it "does not evaluate its arguments" is not literally correct. A more correct statement is that it receives the arguments unevaluated and evaluates them in a special way: it does not evaluate the r.h.s., while for the l.h.s. it evaluates the head and the elements but does not apply rules for the head. To avoid that, you may wrap things in HoldPattern, like this:
Clear[g,notGoodQ];
notGoodQ[x_]:=EvenQ[x];
g[x_?notGoodQ]:=(Message[g::nogood,x];Abort[])
g/:cccQ[HoldPattern[g[x0_]]]:=True;
This goes through. Here is some usage:
In[1527]:= cccQ[g[1]]
Out[1527]= True
In[1528]:= cccQ[g[2]]
During evaluation of In[1528]:= g::nogood: -- Message text not found -- (2)
Out[1528]= $Aborted
Note however that the need for HoldPattern inside your left-hand side when making a definition is often a sign that the expression inside your head may also evaluate during the function call, which may break your code. Here is an example of what I mean:
In[1532]:=
ClearAll[f,h];
f[x_]:=x^2;
f/:h[HoldPattern[f[y_]]]:=y^4;
This code attempts to catch cases like h[f[something]], but it will obviously fail since f[something] will evaluate before the evaluation comes to h:
In[1535]:= h[f[5]]
Out[1535]= h[25]
For me, the need for HoldPattern on the l.h.s. is a sign that I need to reconsider my design.
EDIT
Regarding debugging during loading in WB: one thing you can do (IIRC, cannot check right now) is to use good old print statements, the output of which will appear in the WB's console. Personally, I rarely feel a need for a debugger for this purpose (debugging a package while it loads).
EDIT 2
In response to the edit in the question:
Regarding the order of definitions: yes, you can do this, and it solves this particular problem. But, generally, this isn't robust, and I would not consider it a good general method. It is hard to give definite advice for the case at hand, since it is a bit out of its context, but it seems to me that the use of UpValues here is unjustified. If this is done for error handling, there are other ways to do it without using UpValues.
Generally, UpValues are used most commonly to overload some function in a safe way, without adding any rule to the function being overloaded. One piece of advice is to avoid associating UpValues with heads that also have DownValues and may evaluate - by doing this you start playing a game with the evaluator, and will eventually lose. It is safest to attach UpValues to inert symbols (heads, containers), which often represent a "type" of objects on which you want to overload a given function.
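To illustrate the last point, a sketch: attach the overload to an inert container head (gData is a made-up name) that has no DownValues, and then nothing can evaluate during the assignment - no HoldPattern needed:
ClearAll[gData, cccQ];
gData /: cccQ[gData[x0_]] := True  (* gData never evaluates, so the l.h.s. stays put *)
cccQ[gData[2]]
(* True *)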
Regarding my comment on the presence of HoldPattern indicating a bad design. There certainly are legitimate uses for HoldPattern, such as this (somewhat artificial) one:
In[25]:=
Clear[ff,a,b,c];
ff[HoldPattern[Plus[x__]]]:={x};
ff[a+b+c]
Out[27]= {a,b,c}
Here it is justified, because in many cases Plus remains unevaluated and is useful in its unevaluated form - since one can deduce that it represents a sum. We need HoldPattern here because of the way Plus is defined on a single argument, and because a pattern happens to be a single argument (even though it generally describes multiple arguments) during the definition. So we use HoldPattern here to prevent the pattern from being treated as a normal argument, but this is mostly different from the intended use cases for Plus. Whenever this is the case (we are sure that the definition will work all right for the intended use cases), HoldPattern is fine. Note, b.t.w., that this example is also fragile:
In[28]:= ff[Plus[a]]
Out[28]= ff[a]
The reason why it is still mostly OK is that normally we don't use Plus on a single argument.
But there is a second group of cases, where the structure of usually supplied arguments is the same as the structure of the patterns used for the definition. In this case, pattern evaluation during the assignment indicates that the same evaluation will happen with the actual arguments during function calls. Your usage falls into this category. My comment about a design flaw was for such cases: you can prevent the pattern from evaluating, but you will have to prevent the arguments from evaluating as well, to make this work. And pattern-matching against an incompletely evaluated expression is fragile. Also, the function should never assume extra conditions (beyond what it can type-check) for the arguments.
When debugging a function I usually use
library(debug)
mtrace(FunctionName)
FunctionName(...)
And that works quite well for me.
However, sometimes I am trying to debug a complex function that I don't know. In such cases, I find that inside that function there is another function that I would like to "go into" ("debug"), so as to better understand how the entire process works.
So one way of doing it would be to do:
library(debug)
mtrace(FunctionName)
FunctionName(...)
# when finding a function I want to debug inside the function, run again:
mtrace(FunctionName.SubFunction)
The question is - is there a better/smarter way to do interactive debugging (as I have described) that I might be missing?
P.S.: I am aware that various questions have been asked on this subject on SO (see here), yet I wasn't able to come across a similar question/solution to the one I am asking here.
Not entirely sure about the use case, but when you encounter a problem, you can call the function traceback(). That will show the path of your function call through the stack until it hit its problem. You could, if you were inclined to work your way down from the top, call debug on each of the functions given in the list before making your function call. Then you would be walking through the entire process from the beginning.
Here's an example of how you could do this in a more systematic way, by creating a function to step through it:
walk.through <- function() {
  # .Traceback holds the deparsed call stack from the most recent uncaught error
  tb <- unlist(.Traceback)
  if (is.null(tb)) stop("no traceback to use for debugging")
  # keep only the function names (the text before each "("), stored in the
  # global environment so unwalk.through() can undo the flags later
  assign("debug.fun.list", matrix(unlist(strsplit(tb, "\\(")), nrow=2)[1,], envir=.GlobalEnv)
  lapply(debug.fun.list, function(x) debug(get(x)))
  print(paste("Now debugging functions:", paste(debug.fun.list, collapse=",")))
}
unwalk.through <- function() {
  # remove the debug() flags set by walk.through() and clean up after ourselves
  lapply(debug.fun.list, function(x) undebug(get(as.character(x))))
  print(paste("Now undebugging functions:", paste(debug.fun.list, collapse=",")))
  rm(list="debug.fun.list", envir=.GlobalEnv)
}
Here's a dummy example of using it:
foo <- function(x) { print(1); bar(2) }
bar <- function(x) { x + a.variable.which.does.not.exist }
foo(2)
# now step through the functions
walk.through()
foo(2)
# undebug those functions again...
unwalk.through()
foo(2)
IMO, that doesn't seem like the most sensible thing to do. It makes more sense to simply go into the function where the problem occurs (i.e. at the lowest level) and work your way backwards.
I've already outlined the logic behind this basic routine in "favorite debugging trick".
I like options(error=recover) as detailed previously on SO. Things then stop at the point of error and one can inspect.
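A minimal sketch of what that looks like (f and g are made-up; the recover menu is abbreviated):
f <- function(x) g(x)
g <- function(x) x + nonexistent.variable
options(error = recover)
f(1)
# Error in g(x) : object 'nonexistent.variable' not found
# Enter a frame number, or 0 to exit
# 1: f(1)
# 2: g(x)
options(error = NULL)   # restore the default afterwards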
(I'm the author of the 'debug' package where 'mtrace' lives)
If the definition of 'SubFunction' lives outside 'MyFunction', then you can just mtrace 'SubFunction' and don't need to mtrace 'MyFunction'. And functions run faster if they're not 'mtrace'd, so it's good to mtrace only as little as you need to. (But you probably know those things already!)
If 'SubFunction' is only defined inside 'MyFunction', one trick that might help is to use a conditional breakpoint in 'MyFunction'. You'll need to 'mtrace( MyFunction)', then run it, and when the debugging window appears, find out what line 'SubFunction' is defined on. Say it's line 17. Then the following should work:
D(n)> bp( 1, F) # don't bother showing the window for MyFunction again
D(n)> bp( 18, { mtrace( SubFunction); FALSE})
D(n)> go()
It should be clear what this does (or it will be if you try it).
The only downsides are: the need to do it again whenever you change the code of 'MyFunction', and; the slowing-down that might occur through 'MyFunction' itself being mtraced.
You could also experiment with adding a 'debug.sub' argument to 'MyFunction', defaulting to FALSE. Then, in the code of 'MyFunction', add this line immediately after the definition of 'SubFunction':
if( debug.sub) mtrace( SubFunction)
That avoids any need to mtrace 'MyFunction' itself, but does require you to be able to change its code.
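A minimal sketch of that arrangement, using the placeholder names from above:
MyFunction <- function(x, debug.sub = FALSE) {
  SubFunction <- function(y) y + 1
  if (debug.sub) mtrace(SubFunction)   # only traced when explicitly requested
  SubFunction(x)
}
MyFunction(2, debug.sub = TRUE)   # the debugging window opens for SubFunction only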
This is a troublesome violation of type safety in my project, so I'm looking for a way to disable it. It seems that if a function takes an AnyRef (or a java.lang.Object), you can call the function with any combination of parameters, and Scala will coalesce the parameters into a Tuple object and invoke the function.
In my case the function isn't expecting a Tuple, and fails at runtime. I would expect this situation to be caught at compile time.
object WhyTuple {
def main(args: Array[String]): Unit = {
fooIt("foo", "bar")
}
def fooIt(o: AnyRef) {
println(o.toString)
}
}
Output:
(foo,bar)
No implicits or Predef at play here at all -- just good old-fashioned compiler magic. You can find it in the type checker. I can't locate it in the spec right now.
If you're motivated enough, you could add a -X option to the compiler to prevent this.
Alternatively, you could avoid writing arity-1 methods that accept a supertype of TupleN.
What about something like this:
object Qx2 {
@deprecated def callingWithATupleProducesAWarning(a: Product) = 2
def callingWithATupleProducesAWarning(a: Any) = 3
}
Tuples have the Product trait, so any call to callingWithATupleProducesAWarning that passes a tuple will produce a deprecation warning.
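For example (the exact warning text may vary with the compiler version):
Qx2.callingWithATupleProducesAWarning("foo", "bar")
// warning: method callingWithATupleProducesAWarning in object Qx2 is deprecated
// (the two arguments were adapted into a Tuple2, which extends Product)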
Edit: According to people better informed than me, the following answer is actually wrong: see this answer. Thanks Aaron Novstrup for pointing this out.
This is actually a quirk of the parser, not of the type system or the compiler. Scala allows zero- or one-arg functions to be invoked without parentheses, but not functions with more than one argument. So as Fred Haslam says, what you've written isn't an invocation with two arguments, it's an invocation with one tuple-valued argument. However, if the method did take two arguments, the invocation would be a two-arg invocation. It seems like the meaning of the code affects how it parses (which is a bit suckful).
As for what you can actually do about this, that's tricky. If the method really did require two arguments, this problem would go away (i.e. if someone then mistakenly tried to call it with one argument or with three, they'd get a compile error as you expect). Don't suppose there's some extra parameter you've been putting off adding to that method? :)
The compiler is capable of interpreting methods without round brackets, so it takes the round brackets in the fooIt call to mean a Tuple. Your call is the same as:
fooIt( ("foo","bar") )
That being said, you can make the method reject such calls, and still retrieve the value, if you use a wrapper like Some(AnyRef) or Tuple1(AnyRef).
I think the definition of (x, y) in Predef is responsible. The "-Yno-predef" compiler flag might be of some use, assuming you're willing to do the work of manually importing any implicits you otherwise need. By that I mean that you'll have to add import scala.Predef._ all over the place.
Could you also add a two-param overload, which would prevent the compiler from applying the syntactic sugar? By making the types suitably obscure, you're unlikely to get false positives. E.g.:
object WhyTuple {
...
class DummyType
def fooIt(a: DummyType, b: DummyType) {
throw new UnsupportedOperationException("Dummy function - should not be called")
}
}
original (update follows)
I'm working with a lot of anonymous functions, i.e. functions declared as part of a dictionary, aka "methods". It's getting pretty painful to debug, because I can't tell which function the errors are happening in.
Vim's backtraces look like this:
Error detected while processing function NamedFunction..2111..2105:
line 1:
E730: using List as a String
This trace shows that the error occurred in the third level down the stack, on the first line of anonymous function #2105. I.e. NamedFunction called anonymous function #2111, which called anonymous function #2105. NamedFunction is one declared through the normal function NamedFunction() ... endfunction syntax; the others were declared using code like function dict.func() ... endfunction.
So obviously I'd like to find out which function has number 2105.
Assuming that it's still in scope, it's possible to find out which Dictionary entry references it by dumping all of the dictionary variables that might contain that reference. This is sort of awkward, and it's difficult to be systematic about it, though I guess I could code up a function to search through all of the loaded dictionaries for a reference to that function, watching out for circular references. To be really thorough, though, it would have to search not only script-local and global dictionaries, but buffer-local dictionaries as well; is there a way to access another buffer's local variables?
Anyway I'm wondering if it's possible to dump the source code for the anonymous function instead. This would be a lot easier and probably more reliable.
update
I ended up asking about this a while back on the vim_use mailing list. Bram Moolenaar, aka vim's BDFL, responded by saying that "You are not supposed to use the function number." However, a suitable alternative for this functionality had not been suggested as of early September 2010, and it has not been explicitly mentioned whether this functionality will continue to work in subsequent vim releases. I've not tried to do this (or anything else, for that matter) in the recently released vim 7.3.
The :function command tries to stop you from specifying numbered functions (their name is just a number), but you can trick it using the {...} dynamic function name feature. Throw in some :verbose and you have a winner:
:verbose function {43}
function 43()
Last set from /home/peter/test.vim
1 throw "I am an exception"
endfunction
This was not at all obvious in the help docs.
I use the following workaround: I have one plugin that does some stuff like creating commands and global functions for other plugins. It also registers all plugins, so I have a large dictionary with lots of stuff related to plugins. If I see an error, I search for the function that produced it using the function findnr:
"{{{3 stuf.findf:
function s:F.stuf.findf(nr, pos, d)
    " type 2 is a Funcref: its string() form contains the function's number
    if type(a:d)==2 && string(a:d)=~#"'".a:nr."'"
        return a:pos
    elseif type(a:d)==type({})
        " recurse into nested dictionaries, extending the path in a:pos
        for [key, Value] in items(a:d)
            let pos=s:F.stuf.findf(a:nr, a:pos."/".key, Value)
            unlet Value
            if type(pos)==type("")
                return pos
            endif
        endfor
    endif
    return 0
endfunction
"{{{3 stuf.findr:
function s:F.stuf.findnr(nr)
    " search every registered plugin's functions, plus this plugin's own (s:F)
    for [key, value] in items(s:g.reg.registered)+[["load", {"F": s:F}]]
        let pos=s:F.stuf.findf(a:nr, "/".key, value.F)
        if type(pos)==type("")
            return pos
        endif
    endfor
    return 0
endfunction
Here I have this plugin's functions in the s:F.{key} dictionaries, and other plugins' functions under the s:g.reg.registered[plugname].F dictionary.