Unable to Call Function in Go debugger

I am following the "Little Go Book" by Karl Seguin, in order to learn Go.
My working environment is Visual Studio Code.
Upon debugging, when I try to call a function from the debug console, I get the following error:
"function calls not allowed without using 'call'". If I try "call fib(10)" instead, I get: Unable to eval expression: "1:6: expected 'EOF', found fib".
This is the function I am trying to evaluate:
// Fibonacci
func fib(n int) int64 {
    if n == 0 {
        return 0
    } else if n == 1 {
        return 1
    } else {
        return fib(n-1) + fib(n-2)
    }
}
If I call the function from the code itself (from main(), for instance), it works perfectly.
However, if I set a breakpoint and try to call the same function from the debugger console, I get the error below:
Eval error: function calls not allowed without using 'call'
call fib(10)
Unable to eval expression: "1:6: expected 'EOF', found fib"
Failed to eval expression: {
    "Expr": "call fib(10)",
    "Scope": {
        "goroutineID": 1,
        "frame": 0
    },
    "Cfg": {
        "followPointers": true,
        "maxVariableRecurse": 1,
        "maxStringLen": 64,
        "maxArrayValues": 64,
        "maxStructFields": -1
    }
}

Looks like function calls via Delve's 'call' are not supported yet; see the GitHub issue in the microsoft/vscode-go repo :(

The vscode-go issue 100, "debug: support function calls via delve 'call'", was just closed by PR 101 and commit 5a7752c / CL 249377:
Delve supports function calls. Even though the feature is still experimental and applies only to a limited set of functions, it is something many vscode-go users have long asked for.
Unlike JavaScript/TypeScript debuggers, Delve treats function calls specially and requires a different code path than ordinary expression evaluation.
That is because Go is a compiled language with a runtime-managed GC, so calling a function safely from the debugger is complex.
DAP and the VS Code UI do not distinguish function calls from other expression evaluation either, so we have to implement this in the same evaluateRequest context.
We use a heuristic to guess which route (function call or expression evaluation) to take, based on the evaluateRequest's expression.
This is part of the 0.17.0 milestone, which has not been released yet; for now it is available in the nightly build.
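Until that release ships, one workaround outside VS Code is Delve's own CLI, which already has the experimental call command. A rough session might look like this (the package path and breakpoint location are illustrative):

$ dlv debug .
(dlv) break main.main
(dlv) continue
(dlv) call fib(10)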

Related

How to debug with PureScript?

Issue
Following is a minimal, contrived example:
read :: FilePath -> Aff String
read f = do
  log ("File: " <> f) -- (1)
  readTextFile UTF8 f -- (2)
I would like to do some debug logging in (1), before a potential error at (2) occurs. Executing the following code in the Spago REPL works for success cases so far:
$ spago repl
> launchAff_ $ read "test/data/tree/root.txt"
File: test/data/tree/root.txt
unit
Problem: if there is an error at (2) - the file is a directory here - (1) seems not to be executed at all:
$ spago repl
> launchAff_ $ read "test/data/tree"
~/purescript-book/exercises/chapter9/.psci_modules/node_modules/Effect.Aff/foreign.js:532
throw util.fromLeft(step);
^
[Error: EISDIR: illegal operation on a directory, read] {
errno: -21,
code: 'EISDIR',
syscall: 'read'
}
The original problem is more complex, involving several layers of recursion (see e-book exercise 3), where I need logging to debug the error above.
Questions
How can I log properly here, regardless of upcoming errors?
(Optional) Is there a more sophisticated, well-established debugging alternative, such as purescript-debugger? A dedicated VS Code debug extension/functionality would be the cherry on the cake.
First of all, the symptoms you observe do not mean that the first line doesn't execute. It does always execute; you're just not seeing its output due to how the console works in the PureScript REPL: the output gets swallowed. Not the only problem with the REPL, sadly.
You can verify that the first line is always executed by replacing log with throwError and observing that the error always gets thrown. Or, alternatively, you can make the first line modify a mutable cell instead of writing to the console, and then examine the cell's contents.
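For example, here is a minimal sketch of the mutable-cell variant; the Node.FS imports mirror what read presumably already uses, and the extra Ref parameter exists purely for illustration:

import Prelude

import Effect.Aff (Aff)
import Effect.Class (liftEffect)
import Effect.Ref as Ref
import Node.Encoding (Encoding(UTF8))
import Node.FS.Aff (readTextFile)
import Node.Path (FilePath)

-- Record the file name in a Ref instead of logging, so swallowed console
-- output no longer hides whether line (1) ran.
read :: Ref.Ref (Array FilePath) -> FilePath -> Aff String
read seen f = do
  liftEffect $ Ref.modify_ (_ <> [ f ]) seen -- (1)
  readTextFile UTF8 f                        -- (2)

Even when readTextFile throws, reading the cell afterwards with Ref.read shows the recorded file name, confirming that (1) executed.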
Finally, this only happens in REPL. If you put that launchAff_ call inside main and run the program, you will always get the console output.
Now to the actual question at hand: how to debug trace.
Logging to console is fine if you can afford it, but there is a more elegant way: Debug.trace.
This function has a hidden effect - i.e. its type says it's pure, but it really produces an effect when called. This little lie lets you use trace in a pure setting and thus debug pure code. No need for Effect! This is OK as long as it's used for debugging only, but don't put it in production code.
The way it works is that it takes two parameters: the first one gets printed to the console, and the second one is a function to be called after printing; the result of the whole thing is whatever that function returns. For example:
calculateSomething :: Int -> Int -> Int
calculateSomething x y =
  trace ("x = " <> show x) \_ ->
    x + y

main :: Effect Unit
main =
  log $ show $ calculateSomething 37 5
> npx spago run
'x = 37'
42
The first parameter can be anything at all, not just a string. This lets you easily print a lot of stuff:
calculateSomething :: Int -> Int -> Int
calculateSomething x y =
  trace { x, y } \_ ->
    x + y
> npx spago run
{ x: 37, y: 5 }
42
Or, applying this to your code:
read :: FilePath -> Aff String
read f = trace ("File: " <> f) \_ -> do
  readTextFile UTF8 f
But here's a subtle detail: this tracing happens as soon as you call read, even if the resulting Aff never actually gets executed. If you need tracing to happen on effectful execution, you'll need to make the trace call part of the action, and be careful not to make it the very first action in the sequence:
read :: FilePath -> Aff String
read f = do
  pure unit
  trace ("File: " <> f) \_ -> pure unit
  readTextFile UTF8 f
It is, of course, a bit inconvenient to do this every time you need to trace in an effectful context, so there is a special function that does it for you - it's called traceM:
read :: FilePath -> Aff String
read f = do
  traceM ("File: " <> f)
  readTextFile UTF8 f
If you look at its source code, you'll see that it does exactly what I did in the example above.
The sad part is that trace won't help you in REPL when an exception happens, because it's still printing to console, so it'll still get swallowed for the same reasons.
But even when it doesn't get swallowed, the output is a bit garbled, because trace actually outputs in color (to help you make it out among other output), and PureScript REPL has a complicated relationship with color:
> calculateSomething 37 5
←[32m'x = 37'←[39m
42
In addition to Fyodor Soikin's great answer, I found a variant using the VS Code debug view.
1.) Make sure to build with sourcemaps:
spago build --purs-args "-g sourcemaps"
2.) Add debug configuration to VS Code launch.json:
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "pwa-node",
      "request": "launch",
      "name": "Launch Program",
      "skipFiles": ["<node_internals>/**"],
      "runtimeArgs": ["-e", "require('./output/Main/index.js').main()"],
      "smartStep": true // skips files without (valid) source map
    }
  ]
}
Replace "./output/Main/index.js" / .main() with the compiled .js file / function to be debugged.
3.) Set breakpoints and step through the .purs file via source map support.

Is it possible to read an SRML error message in Substrate UI, when a transaction fails?

I am not sure of the behaviour of error messages in Substrate runtimes in relation to Substrate UI, and whether they inherently cause a transaction failure or not.
For example in the democracy SRML I see the following line:
ensure!(!<Cancellations<T>>::exists(h), "cannot cancel the same proposal twice");
This is presumably a macro that ensures the transaction fails, or stops processing, if h (the proposal hash) already exists. There is clearly a message associated with this error.
Am I right to assume that the transaction fails (without the rest of the SRML code being executed) when this test fails?
If so, how do I detect the failure in Substrate UI, and possibly see the message itself?
If not, then presumably some further code is necessary in the runtime module to explicitly create an error. I have seen Err(), but not in conjunction with ensure!().
Now that https://github.com/paritytech/substrate/pull/3433 is merged, the ExtrinsicFailed event includes a DispatchError, which provides an additional error code.
There isn't much documentation available, so I will just use the system module as an example.
First you need to declare your errors with decl_error!; note that the error variants can only be a simple C-like enum:
https://github.com/paritytech/substrate/blob/5420de3face1349a97eb954ae71c5b0b940c31de/srml/system/src/lib.rs#L334
decl_error! {
    /// Error for the System module
    pub enum Error {
        BadSignature,
        BlockFull,
        RequireSignedOrigin,
        RequireRootOrigin,
        RequireNoOrigin,
    }
}
Then you need to associate the declared Error type:
https://github.com/paritytech/substrate/blob/5420de3face1349a97eb954ae71c5b0b940c31de/srml/system/src/lib.rs#L253
decl_module! {
    pub struct Module<T: Trait> for enum Call where origin: T::Origin {
        type Error = Error;
Then you can just return your Error from dispatch calls when things fail:
https://github.com/paritytech/substrate/blob/5420de3face1349a97eb954ae71c5b0b940c31de/srml/system/src/lib.rs#L543
pub fn ensure_root<OuterOrigin, AccountId>(o: OuterOrigin) -> Result<(), Error>
    where OuterOrigin: Into<Result<RawOrigin<AccountId>, OuterOrigin>>
{
    match o.into() {
        Ok(RawOrigin::Root) => Ok(()),
        _ => Err(Error::RequireRootOrigin),
    }
}
Right now you will only be able to see two numbers from the JS side: the module index and the error code. Later there could be support for including the error details in the metadata, so that the frontend can provide a better response.
Related issue:
https://github.com/paritytech/substrate/issues/2954
The ensure! macro is expanded as:
#[macro_export]
macro_rules! fail {
    ( $y:expr ) => {{
        return Err($y);
    }}
}

#[macro_export]
macro_rules! ensure {
    ( $x:expr, $y:expr ) => {{
        if !$x {
            $crate::fail!($y);
        }
    }}
}
So basically, it's just a quicker way to return Err. As of 1.0, the error message is only printed to stdout (at least in what I've tested so far); I don't know whether it will be included on-chain in the future (so that it can be viewed in the Substrate UI).
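For example, hand-expanding the democracy line from the question with the macros above (and dropping the double negation) gives roughly this early return:

// Hand-expanded form of the ensure! call from the question: if the proposal
// hash is already recorded, bail out of the dispatch with the error string.
if <Cancellations<T>>::exists(h) {
    return Err("cannot cancel the same proposal twice");
}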

Docpad: confused about extending template data

I'm totally confused about adding Mongo data to template data. I haven't even started trying to get the data from a database, as I can't get my templates to see test data (see below). This is in docpad.coffee for the moment, but ultimately g will be the output of MongoDB.
events:
  extendTemplateData: (opts) ->
    # {templateData} = opts
    getGigsData: ->
      g = { "date" : "3-4-2013", "location" : "Gent" }
      return g
    opts.templateData["getGigsData"] = getGigsData
And I hope to access it with <%= #getGigsData().date %>
Thanks so much for some guidance
I should add that this design is based on wanting to make it easy for the band to add gigs without letting them edit the page content itself, as I fear they would mess up the markup. If there are other ways to achieve this goal, I'd be pleased to hear them.
I tried this locally and hit the issue:
debug: Emitting the event: extendTemplateData
→ [2014-02-14 01:38:50.030] [/Users/balupton/Projects/docpad-extras/skeletons/so-21747504/node_modules/docpad/out/lib/docpad.js:1184] [DocPad.emitSerial]
error: Something went wrong with the action
→ [2014-02-14 01:38:50.037] [/Users/balupton/Projects/docpad-extras/skeletons/so-21747504/node_modules/docpad/out/lib/interfaces/console.js:107] [ConsoleInterface.destroyWithError]
error: An error occured:
ReferenceError: getGigsData is not defined
at Object.docpadConfig.events.extendTemplateData (/Users/balupton/Projects/docpad-extras/skeletons/so-21747504/docpad.coffee:42:44)
at ambi (/Users/balupton/Projects/docpad-extras/skeletons/so-21747504/node_modules/docpad/node_modules/ambi/out/lib/ambi.js:25:27)
at DocPad.<anonymous> (/Users/balupton/Projects/docpad-extras/skeletons/so-21747504/node_modules/docpad/out/lib/docpad.js:995:25)
at ambi (/Users/balupton/Projects/docpad-extras/skeletons/so-21747504/node_modules/docpad/node_modules/ambi/out/lib/ambi.js:23:18)
at Task.<anonymous> (/Users/balupton/Projects/docpad-extras/skeletons/so-21747504/node_modules/docpad/node_modules/event-emitter-grouped/out/lib/event-emitter-grouped.js:45:23)
at ambi (/Users/balupton/Projects/docpad-extras/skeletons/so-21747504/node_modules/docpad/node_modules/ambi/out/lib/ambi.js:23:18)
at fire (/Users/balupton/Projects/docpad-extras/skeletons/so-21747504/node_modules/docpad/node_modules/taskgroup/out/lib/taskgroup.js:163:25)
at b (domain.js:183:18)
at Domain.run (domain.js:123:23)
at Task.fire (/Users/balupton/Projects/docpad-extras/skeletons/so-21747504/node_modules/docpad/node_modules/taskgroup/out/lib/taskgroup.js:173:25)
at processImmediate [as _immediateCallback] (timers.js:330:15)
This indicates that the error is actually inside our event handler, rather than in DocPad itself: for some reason getGigsData is not being defined, despite our:
getGigsData: ->
  g = { "date" : "3-4-2013", "location" : "Gent" }
  return g
Examining the code as a CoffeeScript user, I found the issue. As a non-CoffeeScript user, you can paste the code into the compiler on the CoffeeScript website, http://coffeescript.org, to see the compiled JavaScript, which is:
({
  events: {
    extendTemplateData: function(opts) {
      ({
        getGigsData: function() {
          var g;
          g = {
            "date": "3-4-2013",
            "location": "Gent"
          };
          return g;
        }
      });
      return opts.templateData["getGigsData"] = getGigsData;
    }
  }
});
As we can see, that is definitely not what we expected: we are just defining getGigsData inside an object literal, then doing nothing with it.
The issue is that we used a colon instead of an equals sign, so getGigsData: -> instead of getGigsData = ->. This is not a CoffeeScript thing; you would have run into the same issue in JavaScript too, although there it may be a bit more obvious because of the curly braces required around object literals.
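With that fixed, the handler becomes:

events:
  extendTemplateData: (opts) ->
    # getGigsData is now a local function assigned with `=`, not an object key
    getGigsData = ->
      g = { "date": "3-4-2013", "location": "Gent" }
      return g
    opts.templateData["getGigsData"] = getGigsData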
As a side note, if you prefer to use JavaScript with DocPad for whatever reason, that is totally supported: you could use a docpad.json or docpad.js file for your DocPad configuration file. Another option is to continue using CoffeeScript and just wrap the JavaScript code in backticks; see: http://coffeescript.org/#embedded
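For example, a rough docpad.js equivalent of the fixed handler (assuming the configuration file simply exports the configuration object) might look like:

// docpad.js -- sketch of the same handler in plain JavaScript
module.exports = {
  events: {
    extendTemplateData: function (opts) {
      var getGigsData = function () {
        return { date: "3-4-2013", location: "Gent" };
      };
      opts.templateData["getGigsData"] = getGigsData;
    }
  }
};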

Scala IDE Debugger "step into" behavior

Current behavior:
Put a breakpoint on the case Twice(n) ... line.
On "step into", control goes to the x match { line.
On "step into", control goes to the def TwiceTest = { line.
On further "step into", control goes to the if (z % 2 == 0)... line.
Expected behavior:
Put a breakpoint on the case Twice(n) ... line.
On "step into", control goes to the if (z % 2 == 0)... line.
Code Snippet
object testobj extends App {
  def TwiceTest = {
    val x = Twice(21)
    x match {
      case Twice(n) => Console.println(n)
    } // prints 21
  }
  TwiceTest
}

object Twice {
  def apply(x: Int): Int = x * 2
  def unapply(z: Int): Option[Int] = {
    if (z % 2 == 0) Some(z / 2) else None
  }
}
The current behavior is irritating while debugging a Scala program with lots of nested extractors. I tried this with the new Scala debugger as well as the Java debugger, but with the same result.
Step filtering also does not help in this case.
As a workaround, I am putting a breakpoint in the unapply method and resuming from the first breakpoint. Can someone please suggest a cleaner method?
Edit 1
I am using Scala IDE (latest nightly build: 2.1.0.nightly-2_09-201208250315-529cd70).
Eclipse version: Indigo Service Release 2, build id: 20120216-1857
OS: Windows 7 ( 64 bit)
The line number information in the bytecode is wrong. It is not an issue with the IDE but with the Scala compiler: when pattern matching is compiled, synthetic code sometimes gets the wrong position information.
I assume you are using Scala 2.9.2. The next version of Scala (2.10.0) brings significant improvements to the pattern matcher, so it would be good to give it a try.

Forcing a package's function to use user-provided function

I'm running into a problem with the MNP package which I've traced to an unfortunate call to deparse (whose maximum width is limited to 500 characters).
Background (easily skippable if you're bored)
Because mnp uses a somewhat idiosyncratic syntax to allow for varying choice sets (you include cbind(choiceA,choiceB,...) in the formula definition), the left hand side of my formula call is 1700 characters or so when model.matrix.default calls deparse on it. Since deparse supports a maximum width.cutoff of 500 characters, the sapply(attr(t, "variables"), deparse, width.cutoff = 500)[-1L] line in model.matrix.default has as its first element:
[1] "cbind(plan1, plan2, plan3, plan4, plan5, plan6, plan7, plan8, plan9, plan10, plan11, plan12, plan13, plan14, plan15, plan16, plan17, plan18, plan19, plan20, plan21, plan22, plan23, plan24, plan25, plan26, plan27, plan28, plan29, plan30, plan31, plan32, plan33, plan34, plan35, plan36, plan37, plan38, plan39, plan40, plan41, plan42, plan43, plan44, plan45, plan46, plan47, plan48, plan49, plan50, plan51, plan52, plan53, plan54, plan55, plan56, plan57, plan58, plan59, plan60, plan61, plan62, plan63, "
[2] " plan64, plan65, plan66, plan67, plan68, plan69, plan70, plan71, plan72, plan73, plan74, plan75, plan76, plan77, plan78, plan79, plan80, plan81, plan82, plan83, plan84, plan85, plan86, plan87, plan88, plan89, plan90, plan91, plan92, plan93, plan94, plan95, plan96, plan97, plan98, plan99, plan100, plan101, plan102, plan103, plan104, plan105, plan106, plan107, plan108, plan109, plan110, plan111, plan112, plan113, plan114, plan115, plan116, plan117, plan118, plan119, plan120, plan121, plan122, plan123, "
[3] " plan124, plan125, plan126, plan127, plan128, plan129, plan130, plan131, plan132, plan133, plan134, plan135, plan136, plan137, plan138, plan139, plan140, plan141, plan142, plan143, plan144, plan145, plan146, plan147, plan148, plan149, plan150, plan151, plan152, plan153, plan154, plan155, plan156, plan157, plan158, plan159, plan160, plan161, plan162, plan163, plan164, plan165, plan166, plan167, plan168, plan169, plan170, plan171, plan172, plan173, plan174, plan175, plan176, plan177, plan178, plan179, "
[4] " plan180, plan181, plan182, plan183, plan184, plan185, plan186, plan187, plan188, plan189, plan190, plan191, plan192, plan193, plan194, plan195, plan196, plan197, plan198, plan199, plan200, plan201, plan202, plan203, plan204, plan205, plan206, plan207, plan208, plan209, plan210, plan211, plan212, plan213, plan214, plan215, plan216, plan217, plan218, plan219, plan220, plan221, plan222, plan223, plan224, plan225, plan226, plan227, plan228, plan229, plan230, plan231, plan232, plan233, plan234, plan235, "
[5] " plan236, plan237, plan238, plan239, plan240, plan241, plan242, plan243, plan244, plan245, plan246, plan247, plan248, plan249, plan250, plan251, plan252, plan253, plan254, plan255, plan256, plan257, plan258, plan259, plan260, plan261, plan262, plan263, plan264, plan265, plan266, plan267, plan268, plan269, plan270, plan271, plan272, plan273, plan274, plan275, plan276, plan277, plan278, plan279, plan280, plan281, plan282, plan283, plan284, plan285, plan286, plan287, plan288, plan289, plan290, plan291, "
[6] " plan292, plan293, plan294, plan295, plan296, plan297, plan298, plan299, plan300, plan301, plan302, plan303, plan304, plan305, plan306, plan307, plan308, plan309, plan310, plan311, plan312, plan313)"
When model.matrix.default tests this against the variables in the data.frame, it returns an error.
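For reference, the splitting itself is easy to reproduce in isolation (the plan1 ... plan313 names are the ones from my data):

## deparse() breaks long expressions into several strings once they exceed
## width.cutoff, which is what trips up model.matrix.default here.
long_lhs <- as.call(c(quote(cbind), lapply(paste0("plan", 1:313), as.name)))
length(deparse(long_lhs, width.cutoff = 500))  # greater than 1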
The problem
To get around this, I've written a new deparse function:
deparse <- function (expr, width.cutoff = 60L, backtick = mode(expr) %in%
    c("call", "expression", "(", "function"), control = c("keepInteger",
    "showAttributes", "keepNA"), nlines = -1L) {
  ret <- .Internal(deparse(expr, width.cutoff, backtick, .deparseOpts(control), nlines))
  paste0(ret, collapse = "")
}
However, when I run mnp again and step through, it fails with the same error for the same reason: base::deparse is being run, not my deparse.
This is somewhat surprising to me, as what I expected is more like this example, where a user-defined function temporarily masks the base function:
> print <- function() {
+ cat("user-defined print ran\n")
+ }
> print()
user-defined print ran
I realize the right way to solve this problem is to rewrite model.matrix.default, but as a tool for debugging I'm curious how to force it to use my deparse and why the anticipated (by me) behavior is not happening here.
The functions fixInNamespace and assignInNamespace are provided to allow editing of existing functions. You could try it ... but I will not, since mucking with deparse looks too dangerous:
assignInNamespace("deparse",
  function (expr, width.cutoff = 60L, backtick = mode(expr) %in%
      c("call", "expression", "(", "function"), control = c("keepInteger",
      "showAttributes", "keepNA"), nlines = -1L) {
    ret <- .Internal(deparse(expr, width.cutoff, backtick, .deparseOpts(control), nlines))
    paste0(ret, collapse = "")
  }, "base")
There is an indication on the help page that the use of such functions has restrictions, and I would not be surprised if such a core function had additional layers of protection. Since assignInNamespace works via side effect, you do not need to assign the result.
This is how packages with namespaces search for functions, as described in Section 1.6, "Package Namespaces", of Writing R Extensions:
Namespaces are sealed once they are loaded. Sealing means that imports
and exports cannot be changed and that internal variable bindings
cannot be changed. Sealing allows a simpler implementation strategy
for the namespace mechanism. Sealing also allows code analysis and
compilation tools to accurately identify the definition corresponding
to a global variable reference in a function body.
The namespace controls the search strategy for variables used by
functions in the package. If not found locally, R searches the package
namespace first, then the imports, then the base namespace and then
the normal search path.
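As a quick sketch of that search order: even with deparse masked in the global environment, a lookup performed from inside the stats namespace (where model.matrix.default lives) still resolves to base::deparse, because the base namespace is searched before the global environment.

## Mask deparse in the global environment, as in the question
deparse <- function(expr, ...) "my deparse"

## Resolve "deparse" the way code inside the stats namespace would
found <- get("deparse", envir = environment(stats::model.matrix.default))
identical(found, base::deparse)  # TRUE: package code still sees the base version
identical(found, deparse)        # FALSE: the global copy is never reached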

Resources