The debugger can be invoked programmatically by executing (break). The debugger banner then displays what caused the interrupt, the HELP line, the available restarts, some related info, and finally the source of the interrupt. For example:
debugger invoked on a SIMPLE-CONDITION in thread
#<THREAD "main thread" RUNNING {10010B0523}>:
break
Type HELP for debugger help, or (SB-EXT:EXIT) to exit from SBCL.
restarts (invokable by number or by possibly-abbreviated name):
0: [CONTINUE] Return from BREAK.
1: [ABORT ] Exit debugger, returning to top level.
#(
NODE: STATE=<NIL NIL NIL 0.0 0.0
( )> DEPTH=0)
#(
NODE: STATE=<NIL NIL NIL 0.0 0.0
((ACTIVE GATE1) (ACTIVE GATE2) (COLOR RECEIVER1 BLUE) (COLOR RECEIVER2 RED) (COLOR TRANSMITTER1 BLUE) (COLOR TRANSMITTER2 RED) (FREE ME) (LOC CONNECTOR1 AREA5) (LOC CONNECTOR2 AREA7) (LOC ME AREA5))> DEPTH=0)
(DF-BNB1 )
source: (BREAK)
0]
I don't understand the related info between the restarts and the source. Can this info be suppressed? It is sometimes many lines long in my application.
I've tried changing the debug and safety optimization settings, but to no effect.
The output you are confused by is related to the place in the code where break was invoked. When I call it from the vanilla Lisp REPL (without SLIME), it displays:
(SB-INT:SIMPLE-EVAL-IN-LEXENV (BREAK) #<NULL-LEXENV>)
However, if I do something wrong in the debugger, here's what happens:
0] q
; in: PROGN (PRINT 1)
; (PROGN Q)
;
; caught WARNING:
; undefined variable: COMMON-LISP-USER::Q
;
; compilation unit finished
; Undefined variable:
; Q
; caught 1 WARNING condition
debugger invoked on a UNBOUND-VARIABLE in thread
#<THREAD "main thread" RUNNING {10005204C3}>:
The variable Q is unbound.
Type HELP for debugger help, or (SB-EXT:EXIT) to exit from SBCL.
restarts (invokable by number or by possibly-abbreviated name):
0: [CONTINUE ] Retry using Q.
1: [USE-VALUE ] Use specified value.
2: [STORE-VALUE] Set specified value and use it.
3: [ABORT ] Reduce debugger level (to debug level 1).
4: Return from BREAK.
5: Exit debugger, returning to top level.
((LAMBDA (#:G498)) #<unused argument>)
source: (PROGN Q)
You can see that the last lines resemble the output you got, ending with the line starting with source:. The output we saw consists of three main parts:
1. Description of the condition
2. Listing of the available restarts
3. Debug REPL prompt printed by debug-loop-fun
The last output is part of the prompt and it is generated by the invocation of:
(print-frame-call *current-frame* *debug-io* :print-frame-source t)
So you can recompile the call, passing :print-frame-source nil, or try to understand why your current frame looks this way...
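If patching SBCL's internals feels too invasive, an alternative (not part of the answer above, and assuming your SBCL exports sb-ext:*debug-print-variable-alist*, which the SBCL manual documents) is to truncate what the debugger prints for each frame, so the frame's huge arguments no longer span many lines. A minimal sketch:
;; Sketch: bind printer controls around debugger output so that frame
;; arguments (the NODE structures above) are abbreviated.
;; The exact values are arbitrary; adjust to taste.
(setf sb-ext:*debug-print-variable-alist*
      '((*print-length* . 5)    ; at most 5 elements per list/vector
        (*print-level*  . 3)))  ; at most 3 levels of nesting
This does not remove the frame/source lines entirely, but it should keep them short.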
Issue
Following is a minimal, contrived example:
read :: FilePath -> Aff String
read f = do
log ("File: " <> f) -- (1)
readTextFile UTF8 f -- (2)
I would like to do some debug logging in (1), before a potential error at (2) occurs. Executing the following code in the Spago REPL works for success cases so far:
$ spago repl
> launchAff_ $ read "test/data/tree/root.txt"
File: test/data/tree/root.txt
unit
Problem: if there is an error at (2) (the file is a directory here), (1) seems not to be executed at all:
$ spago repl
> launchAff_ $ read "test/data/tree"
~/purescript-book/exercises/chapter9/.psci_modules/node_modules/Effect.Aff/foreign.js:532
throw util.fromLeft(step);
^
[Error: EISDIR: illegal operation on a directory, read] {
errno: -21,
code: 'EISDIR',
syscall: 'read'
}
The original problem is more complex, involving several layers of recursion (see E-Book exercise 3), where I need logging to debug the above error.
Questions
How can I properly log regardless of upcoming errors here?
(Optional) Is there a more sophisticated, well-established debugging alternative, such as purescript-debugger? A dedicated VS Code debug extension/functionality would be the cherry on the cake.
First of all, the symptoms you observe do not mean that the first line doesn't execute. It always executes; you're just not seeing its output because of how the console works in the PureScript REPL: the output gets swallowed. Not the only problem with the REPL, sadly.
You can verify that the first line is always executed by replacing log with throwError and observing that the error always gets thrown. Or, alternatively, you can make the first line modify a mutable cell instead of writing to the console, and then examine the cell's contents.
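For instance, here is a sketch of the mutable-cell variant. It assumes Effect.Ref and liftEffect are available (readTextFile, UTF8, FilePath and Aff come from the question's imports); the cell is created by the caller with Ref.new and inspected after the Aff has settled:
import Effect.Class (liftEffect)
import Effect.Ref as Ref

readWithMarker :: Ref.Ref String -> FilePath -> Aff String
readWithMarker cell f = do
  liftEffect $ Ref.write ("File: " <> f) cell  -- (1) record instead of logging
  readTextFile UTF8 f                          -- (2)
Even when (2) throws, a later Ref.read cell shows the recorded message, confirming that (1) ran.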
Finally, this only happens in REPL. If you put that launchAff_ call inside main and run the program, you will always get the console output.
Now to the actual question at hand: how to do debug tracing.
Logging to the console is fine if you can afford it, but there is a more elegant way: Debug.trace.
This function has a hidden effect: its type says it's pure, but it really produces an effect when called. This little lie lets you use trace in a pure setting and thus debug pure code. No need for Effect! This is fine as long as it's used for debugging only, but don't put it in production code.
It takes two parameters: the first gets printed to the console, and the second is a function to be called after printing; the result of the whole thing is whatever that function returns. For example:
calculateSomething :: Int -> Int -> Int
calculateSomething x y =
trace ("x = " <> show x) \_ ->
x + y
main :: Effect Unit
main =
log $ show $ calculateSomething 37 5
> npx spago run
'x = 37'
42
The first parameter can be anything at all, not just a string. This lets you easily print a lot of stuff:
calculateSomething :: Int -> Int -> Int
calculateSomething x y =
trace { x, y } \_ ->
x + y
> npx spago run
{ x: 37, y: 5 }
42
Or, applying this to your code:
read :: FilePath -> Aff String
read f = trace ("File: " <> f) \_ -> do
readTextFile UTF8 f
But here's a subtle detail: this tracing happens as soon as you call read, even if the resulting Aff is never actually executed. If you need tracing to happen on effectful execution, you'll need to make the trace call part of the action, and be careful not to make it the very first action in the sequence:
read :: FilePath -> Aff String
read f = do
pure unit
trace ("File: " <> f) \_ -> pure unit
readTextFile UTF8 f
It is, of course, a bit inconvenient to do this every time you need to trace in an effectful context, so there is a special function that does it for you - it's called traceM:
read :: FilePath -> Aff String
read f = do
traceM ("File: " <> f)
readTextFile UTF8 f
If you look at its source code, you'll see that it does exactly what I did in the example above.
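For reference, here is roughly what that definition looks like (a sketch that ignores the DebugWarning constraint the real library adds):
traceM :: forall m a. Monad m => a -> m Unit
traceM s = do
  pure unit
  trace s \_ -> pure unit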
The sad part is that trace won't help you in the REPL when an exception happens, because it still prints to the console, so its output will still get swallowed for the same reasons.
But even when it doesn't get swallowed, the output is a bit garbled, because trace actually outputs in color (to help you make it out among other output), and PureScript REPL has a complicated relationship with color:
> calculateSomething 37 5
←[32m'x = 37'←[39m
42
In addition to Fyodor Soikin's great answer, I found a variant using the VS Code debug view.
1.) Make sure to build with sourcemaps:
spago build --purs-args "-g sourcemaps"
2.) Add debug configuration to VS Code launch.json:
{
"version": "0.2.0",
"configurations": [
{
"type": "pwa-node",
"request": "launch",
"name": "Launch Program",
"skipFiles": ["<node_internals>/**"],
"runtimeArgs": ["-e", "require('./output/Main/index.js').main()"],
"smartStep": true // skips files without (valid) source map
}
]
}
Replace "./output/Main/index.js" / .main() with the compiled .js file / function to be debugged.
3.) Set break points and step through the .purs file via sourcemap support.
I would like to know how I can limit the maximum memory (heap) a process can use to allocate objects.
I looked into spawn_opt with max_heap_size as mentioned here, but I keep getting a badarg error. Is this even possible now? I want to avoid using a gen_server to kill the process when the heap size goes over a limit.
-module(maxMemCheck).
-export([fib/1,printfib/1]).
-export([main/0]).
printfib(N) ->
    Res = maxMemCheck:fib(N),
    io:fwrite("~w ~w~n", [N, Res]).

fib(0) -> 0;
fib(1) -> 1;
fib(N) when N > 0 -> fib(N-1) + fib(N-2).

main() ->
    spawn_opt(maxMemCheck, printfib, [10],
              [{max_heap_size, #{size => 300, kill => true, error_logger => true}}]).
This results in the error:
{"init terminating in do_boot",{badarg,[{erlang,spawn_opt,[maxMemCheck,fib,"\n",[{max_heap_size,#{error_logger=>true,kill=>true,size=>300}}]],[]},{init,start_it,1,[]},{init,start_em,1,[]}]}}
Crash dump is being written to: erl_crash.dump...done
init terminating in do_boot ()
The syntax is correct; maybe you are using an "old" Erlang version. This syntax uses a map, and the max_heap_size option that accepts one was introduced around OTP 18 or 19.
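As a quick sanity check (a sketch, not part of the answer above), you can print the OTP release and, on a new enough runtime, set the same limit from inside an already-running process via process_flag/2:
-module(otp_check).
-export([check/0]).

%% Print the OTP release, then apply the same map-based limit to the
%% calling process (process_flag(max_heap_size, ...) needs OTP 19+).
check() ->
    io:format("OTP release: ~s~n", [erlang:system_info(otp_release)]),
    process_flag(max_heap_size,
                 #{size => 300, kill => true, error_logger => true}).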
I came across the Timeout module in Ruby and wanted to test it out. I looked at the official documentation at http://ruby-doc.org/stdlib-2.1.1/libdoc/timeout/rdoc/Timeout.html
Here is the code I had:
require 'timeout'
require 'benchmark'
numbers = [*1..80]
Timeout::timeout(5) { numbers.combination(5).count }
=> 24040016
I did some benchmarking tests, and got the following.
10.828000 0.063000 10.891000 11.001676
According to the documentation, this method is supposed to raise an exception if the block is not executed within 5 seconds. If it is executed within the time frame, it returns the result of the code block.
For what it's worth, I've tried timeout with 1 second instead of 5 seconds, and I still get the result of the code block back.
Here is the official documentation
timeout(sec, klass=nil)
Performs an operation in a block, raising an error if it takes longer than sec seconds to complete.
sec: Number of seconds to wait for the block to terminate. Any number may be used,
including Floats to specify fractional seconds. A value of 0 or nil will execute the
block without any timeout.
klass: Exception Class to raise if the block fails to terminate in sec seconds. Omitting
will use the default, Timeout::Error
I am mystified as to why this doesn't work.
The problem is the way MRI (Matz's Ruby Implementation) thread scheduling works. MRI uses a GIL (Global Interpreter Lock), which in practice means only one thread is truly running at a time.
There are some exceptions, but for the majority of the time there is only one thread executing Ruby code at any one time.
Normally you do not notice this, even during heavy computations that consume 100% CPU, because the MRI keeps time-slicing the threads at regular intervals so that each thread gets a turn to run.
However, there is one case where time-slicing isn't active, and that's when a Ruby thread is executing native C code instead of Ruby code.
Now it so happens that Array#combination is implemented in pure C:
[1] pry(main)> show-source Array#combination
From: array.c (C Method):
static VALUE
rb_ary_combination(VALUE ary, VALUE num)
{
...
}
When we combine this knowledge with how Timeout.timeout is implemented, we can start to see what is happening:
[7] pry(main)> show-source Timeout#timeout
From: /opt/ruby21/lib/ruby/2.1.0/timeout.rb # line 75:
75: def timeout(sec, klass = nil) #:yield: +sec+
76: return yield(sec) if sec == nil or sec.zero?
77: message = "execution expired"
78: e = Error
79: bl = proc do |exception|
80: begin
81: x = Thread.current
82: y = Thread.start {
83: begin
84: sleep sec
85: rescue => e
86: x.raise e
87: else
88: x.raise exception, message
89: end
90: }
91: return yield(sec)
92: ensure
93: if y
94: y.kill
95: y.join # make sure y is dead.
96: end
97: end
98: end
99: ...
1xx: end
Your code running Array#combination most likely starts executing even BEFORE the timeout thread runs sleep sec on line 84. Your code is launched on line 91 through yield(sec).
This means the order of execution actually becomes:
1: [thread 1] numbers.combination(5).count
# ...some time passes while the combinations are calculated ...
2: [thread 2] sleep 5 # <- The timeout thread starts running sleep
3: [thread 1] y.kill # <- The timeout thread is instantly killed
# and never times out.
In order to make sure the timeout thread starts first you can try this, which will most likely trigger the timeout exception this time:
Timeout::timeout(5) { Thread.pass; numbers.combination(5).count }
This is because running Thread.pass allows the MRI scheduler to start and run the code on line 82 before the native combination C code executes. However, even in this case the exception won't be triggered until combination exits, because of the GIL.
There is no way around this, unfortunately. You would have to use something like JRuby instead, which has truly concurrent threads. Or you could run the combination calculation in a separate process instead of a thread.
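Here is a sketch of the process-based variant (MRI on Unix, since it relies on Kernel#fork; the helper name is made up for illustration): the heavy computation runs in a child process, the parent reads the result over a pipe, and the child is killed if the timeout fires.
require 'timeout'

def combination_count_with_timeout(numbers, k, seconds)
  reader, writer = IO.pipe
  pid = fork do                      # child: do the heavy work
    reader.close
    writer.puts numbers.combination(k).count
    writer.close
  end
  writer.close
  Timeout::timeout(seconds) { Integer(reader.gets) }  # gets blocks on I/O, which releases the GIL
ensure
  Process.kill('KILL', pid) rescue nil   # harmless if the child already exited
  Process.wait(pid) rescue nil           # reap the child either way
end

puts combination_count_with_timeout([*1..80], 5, 5)
Because the parent is blocked on pipe I/O rather than a C-level computation, the timeout thread gets scheduled and can actually raise.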
Current behavior:
Put a breakpoint on the case Twice(n) ... line.
On "step into" the control goes to x match { line
On "step into" the control goes to def TwiceTest = { line
On further "step into" the control goes to if (z % 2 == 0)... line.
Expected behavior:
Put a breakpoint on the case Twice(n) ... line.
On "step into" the control goes to if (z % 2 == 0)... line.
Code Snippet
object testobj extends App {
  def TwiceTest = {
    val x = Twice(21)
    x match {
      case Twice(n) => Console.println(n)
    } // prints 21
  }
  TwiceTest
}

object Twice {
  def apply(x: Int): Int = x * 2
  def unapply(z: Int): Option[Int] = {
    if (z % 2 == 0) Some(z / 2) else None
  }
}
The current behavior is irritating while debugging a Scala program with lots of nested extractors. I tried this with the new Scala debugger as well as the Java debugger, but with the same result.
Step Filtering also does not help in this case.
As a workaround, I am putting a breakpoint in the unapply method and resuming from the first breakpoint. Can someone please suggest a cleaner method?
Edit 1
I am using Scala IDE (latest nightly build: 2.1.0.nightly-2_09-201208250315-529cd70).
Eclipse Version: Indigo Service Release 2 Build id: 20120216-1857
OS: Windows 7 ( 64 bit)
The line number information in the bytecode is wrong. It is not an issue with the IDE but with the Scala compiler. When pattern matching is compiled, synthetic code sometimes gets the wrong position information.
I assume you are using Scala 2.9.2. In the next version of Scala (2.10.0), there are significant improvements in the pattern matcher, so it would be good to give it a try.
Is there any way to view the reduction steps in Haskell, i.e. trace the recursive function calls made? For example, Chez Scheme provides us with trace-lambda. Is there an equivalent form in Haskell?
You could try inserting Debug.Trace.trace in places you want to trace, but this has the tendency of (a) producing wildly out-of-order output, as your trace statement may belong to a thunk that isn't evaluated until far far away from the original call, and (b) changing the runtime behavior of your program, if tracing requires evaluating things that wouldn't otherwise have been evaluated (yet).
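For example, a minimal sketch of wrapping a recursive function with Debug.Trace.trace (the caveats above still apply):
import Debug.Trace (trace)

-- Each call prints "fib n" when its result is demanded; as noted above,
-- laziness can reorder this output in larger programs.
fib :: Int -> Int
fib n
  | n < 2     = trace ("fib " ++ show n) n
  | otherwise = trace ("fib " ++ show n) (fib (n - 1) + fib (n - 2))

main :: IO ()
main = print (fib 5)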
Is this for debugging? If so...
Hat modifies your source code to output a trace that can be viewed after running. The output should be pretty close to what you want; the example on their homepage is:
For example, the computation of the faulty program
main = let xs :: [Int]
xs = [4*2,5 `div` 0,5+6]
in print (head xs,last' xs)
last' (x:xs) = last' xs
last' [x] = x
gives the result
(8, No match in pattern.
and the Hat viewing tools can be used to explore its behaviour as follows:
Hat-stack
For aborted computations, that is computations that terminated with an error message or were interrupted, hat-stack shows in which function call the computation was aborted. It does so by showing a virtual stack of function calls (redexes). Thus, every function call shown on the stack caused the function call above it. The evaluation of the top stack element caused the error (or during its evaluation the computation was interrupted). The stack shown is virtual, because it does not correspond to the actual runtime stack. The actual runtime stack enables lazy evaluation whereas the virtual stack corresponds to a stack that would be used for eager (strict) evaluation.
Using the same example program as above, hat-stack shows
$ hat-stack Example
Program terminated with error:
No match in pattern.
Virtual stack trace:
(Last.hs:6) last' []
(Last.hs:6) last' [_]
(Last.hs:6) last' [_,_]
(Last.hs:4) last' [8,_,_]
(unknown) main
$
These days, GHCi (≥6.8.1) also comes with a debugger:
$ ghci -fbreak-on-exception
GHCi, version 6.10.1: http://www.haskell.org/ghc/ :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer ... linking ... done.
Loading package base ... linking ... done.
Prelude> :l Example.hs
[1 of 1] Compiling Main ( Example.hs, interpreted )
Example.hs:5:0:
Warning: Pattern match(es) are overlapped
In the definition of `last'': last' [x] = ...
Ok, modules loaded: Main.
*Main> :trace main
(8,Stopped at <exception thrown>
_exception :: e = _
[<exception thrown>] *Main> :back
Logged breakpoint at Example.hs:(5,0)-(6,12)
_result :: t
[-1: Example.hs:(5,0)-(6,12)] *Main> :hist
-1 : last' (Example.hs:(5,0)-(6,12))
-2 : last' (Example.hs:5:15-22)
-3 : last' (Example.hs:(5,0)-(6,12))
-4 : last' (Example.hs:5:15-22)
-5 : last' (Example.hs:(5,0)-(6,12))
-6 : last' (Example.hs:5:15-22)
-7 : last' (Example.hs:(5,0)-(6,12))
-8 : main (Example.hs:3:25-32)
-9 : main (Example.hs:2:17-19)
-10 : main (Example.hs:2:16-34)
-11 : main (Example.hs:3:17-23)
-12 : main (Example.hs:3:10-33)
<end of history>
[-1: Example.hs:(5,0)-(6,12)] *Main> :force _result
*** Exception: Example.hs:(5,0)-(6,12): Non-exhaustive patterns in function last'
[-1: Example.hs:(5,0)-(6,12)] *Main> :back
Logged breakpoint at Example.hs:5:15-22
_result :: t
xs :: [t]
[-2: Example.hs:5:15-22] *Main> :force xs
xs = []
While not as nice, it has the benefit of being easily available, and being usable without recompiling your code.
There's a reduction count in Hugs, if that helps?
Alternatively, could you use something like the Hugs Hood library to wrap your code, to get more detail about what it's doing at each step?
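As an illustration of that idea, here is a hedged sketch assuming the hood package's Debug.Hood.Observe API (observe and runO); exact instances and output format may differ between versions:
import Debug.Hood.Observe (observe, runO)

-- Every application of last' goes through the observed wrapper, so each
-- recursive call is recorded under the "last'" label and shown by runO.
last' :: [Int] -> Int
last' = observe "last'" go
  where
    go [x]      = x
    go (_ : xs) = last' xs
    go []       = error "last': empty list"

main :: IO ()
main = runO (print (last' [8, 5, 11]))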
Nothing of the kind is built into the Haskell standard.
I would hope that the Helium graphical interpreter would offer something like this, but the web page is silent on the topic.
A partial solution is to use vacuum to visualize data structures.
I've seen some gif animations of fold, scan and others, but I can't find them at the moment. I think Cale Gibbard made the animations.