I'm making a program that has to run for a long period of time; therefore the traceback builds up until it hits a "stack level too deep (SystemStackError)" error. So my question is: how do I clear the traceback history in my program so I don't get this SystemStackError?
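(For illustration only: the original code isn't shown, so this is a hedged sketch with a hypothetical do_one_unit_of_work helper. The error is about call-stack depth rather than an accumulating traceback history, so the usual fix is to make each iteration return instead of recurse; there is no API for "clearing" frames that are still live.)

# Hypothetical stand-in for the real per-iteration work.
def do_one_unit_of_work
  # ...
end

# Grows the stack by one frame per call and eventually raises SystemStackError:
def tick_recursive
  do_one_unit_of_work
  sleep 1
  tick_recursive   # recursion keeps every previous frame alive
end

# Keeps the stack depth constant, so it can run indefinitely:
def tick_loop
  loop do
    do_one_unit_of_work
    sleep 1
  end
end

The stack only shrinks when methods actually return (or an exception unwinds them), which is why the only real fix is restructuring the recursion into a loop like the second version.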
Related
I am using Open Cobol.
I have a program that I have been running for several weeks.
Yesterday, I got the following error:
MERRILL_MAX_AMOUNTS.COB:46: libcob: Stack overflow, possible PERFORM depth exceeded
I tried going back to other versions of the same program that worked, but I am still getting the same error. I have several other programs that run fine with no problem.
If the program ran for several weeks and then ends with this error, the program seems to be broken.
You get that error if a section/paragraph was PERFORMed and then (likely after a bunch of other statements, possibly including GO TO or PERFORMs of other sections/paragraphs) is PERFORMed again itself (recursively).
In most cases this is an error.
If the same program "worked before" and now doesn't, then its program flow has changed, likely because of different data being processed.
You could enable tracing of paragraphs and sections for this single program by compiling it with -ftrace and adjusting runtime.cfg (or export/set COB_SET_TRACE and COB_TRACE_FILE) according to the runtime documentation.
Note: The PERFORM stack checking is only enabled upon request by -fstack-check, which is auto-enabled with --debug (all runtime checks) or -g (debugging); if you don't want this you can disable it by explicitly specifying -fno-stack-check.
You can also adjust the number of iterations libcob considers "possibly safe" with -fstack-size=number; the current default of 255 is quite high, and the maximum that can be set in a current version is 512 (an artificial limit only).
In any case I highly suggest replacing the outdated OpenCOBOL (likely 1.1 from Feb 2009) with a current GnuCOBOL version (the latest is 3.1-rc1, released 19 days ago).
(Using OS X 10.9.4.) I have this cool Ruby script which scans system/firewall logs and tells me if anything odd is happening. The script runs on a 1-second loop; however, at exactly the 3852nd iteration, the script terminates with a "stack level too deep (SystemStackError)" error.
I am not new to this error; it seems to appear when a script enters a loop, and the system sandbox (probably) terminates it after a set amount of time or once a specific parameter is reached.
I attempted to bypass the error by running the script as root, however this had no effect. I have also considered programming another script to relaunch the original one when it detects its absence in the output of ps -ef, however this is a very 'clunky' method which I would prefer to avoid.
I have also conducted some research into the error on Stack Overflow, however I only found questions answered by altering the offending script, as the error in their case was due to a bug in the code, which is not the case for me.
So my question:
Is there any possible way to bypass the "stack level too deep (SystemStackError)" error?
Thanks in advance, greatly appreciated.
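(A hedged diagnostic sketch, with scan_once standing in for the real log-scanning work: log the call-stack depth on every iteration. If the printed depth climbs steadily, the 1-second "loop" is actually recursing somewhere and will deterministically die at the same iteration count each run; if it stays flat, the stack depth is not the cause and the error is coming from somewhere else.)

# Hypothetical stand-in for the real log-scanning work.
def scan_once
  # ...
end

i = 0
loop do
  i += 1
  # caller.size is the current Ruby call-stack depth at this point;
  # in this iterative version it stays flat, while a recursive "loop"
  # would show it growing by a fixed amount per iteration.
  warn "iteration #{i}: stack depth #{caller.size}"
  scan_once
  sleep 1
end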
This is my first question on Stack Overflow, since up until now I have always managed to find my answers.
So... I'm writing a debugger (for Windows, in Python, using the WinAppDbg library) that should trace the program's execution, and I have encountered some problems.
I'm setting the trap flag, so I can stop on every single step.
First problem - sometimes the execution flow goes through a Windows API, which goes into the kernel. When it returns, the trap flag is obviously off, and the execution of the thread may continue for millions of instructions without my debugger tracing every step of it.
A possible solution - before a Windows API is called, I mark the pages at the return addresses as guard pages; thus when the call returns, I get a guard page exception, set the trap flag again, and continue tracing. But this causes a different problem (I call it the "second problem").
Second problem - since I'm setting the trap flag of my main thread, all I see is a loop of that thread (I guess it's the Windows GUI loop), and the program's execution isn't advancing (for example, new threads should be created, but I don't see them).
So what am I looking for?
A debugger's source code that can handle the problems I've described.
Or better yet, a solution to my problems. What am I doing wrong?
Thank you all!
Lua offers a hook that is called BEFORE every processed line. What I need is a call AFTER a line is processed, so that I can check for encountered errors and so on. Is there a way to make that kind of call?
Otherwise things get a little bit confusing if an error is encountered at the last line of the script: I don't get any feedback.
UPDATE #1
We want to catch both Lua errors and 'our' errors asserted via the lua_error(*L) C interface, and Lua should throw the correct debug info, including the line number where the error occurred.
Using a return hook we always get error line number -1, which is not what we want. Using any combination of pcall and any hook set up after lua_error(*L), we get either line number -1 or the number of the next executed line, never the correct one.
SOLUTION
We managed to make everything work. The thing was that Lua throws a real C exception after it detects an error, so some of our 'cleaning & finalizing' C code called from the Lua operation did not execute, which messed up some flags and so on. The solution was to execute the 'cleaning code' right before calling lua_error(...). This is correct and desired Lua behavior, as we really want to stop executing the function once lua_error(...) is called; it was our mistake to expect that any code would be executed after the lua_error(...) call.
Thanks Paul Kulchenko, some of this behavior was found while trying to design a simple example script which reproduces the problem.
Try setting a return hook: it'll be called after the last line is executed.
I'm not sure a debug hook is the best solution for what you are trying to do (or you need to provide more details). If you just need to check for run-time errors, why use debug hooks at all when you can run your code with pcall and get an error message that points to the line number where the error happened (or use xpcall, which also allows you to get a stack trace)? You can combine this with debug.getinfo(func, "L") to get a table whose indexes are valid line numbers for the function.
UPDATE: Problem located in my related question - Nokogiri performance problem
I am having a serious problem with my program. After the program reaches its last statement, Aptana Studio shows that the program is still running, even after the last line was evaluated. The Ruby process (after the last line of the script) keeps running at 100% CPU usage; it ends after several seconds (maybe 15-30). I am trying to at least see where the problem is, but after a long time I am still at the beginning. So the question is: what could cause this problem, and how can I at least see where the problem is? What are my options? Some additional information:
Aptana debug mode: after the last line, this shows in the Debug window:
<terminated, exit value: 0>path/to/ruby
But the Ruby process is still running and using 100% CPU.
I was trying to use gdb to profile the Ruby process itself, but ended up with nothing using the method described here: Profiling using gdb. I am using Debian Squeeze 64-bit and I tried both versions of the script (8,12 > 16,24). When I tried to get some stack info I just got this:
Program received signal SIGSEGV, Segmentation fault.
0x00007f20539a80b8 in ?? () from /lib/libc.so.6
/home/giron/programovani/gdb_init.sh:1: Error in sourced command file:
The program being debugged was signaled while in a function called from GDB.
GDB remains in the frame where the signal was received.
To change this behavior use "set unwindonsignal on".
Evaluation of the expression containing the function
(backtrace) will be abandoned.
When the function is done executing, GDB will silently stop.
After I quit gdb, the following output shows up in the Aptana console (but this is maybe absolutely useless; gdb probably did this, I don't know):
/home/giron/Aptana Studio 3 Workspace/RedisXmlConcept/bin/main.rb: [BUG] Segmentation fault
ruby 1.9.2p290 (2011-07-09 revision 32553) [x86_64-linux]
-- control frame ----------
c:0001 p:0000 s:0002 b:0002 l:000f68 d:000f68 TOP
---------------------------
-- C level backtrace information -------------------------------------------
/home/giron/.rvm/rubies/ruby-1.9.2-p290/lib/libruby.so.1.9(rb_vm_bugreport+0x5f)[0x7f205488216f]
/home/giron/.rvm/rubies/ruby-1.9.2-p290/lib/libruby.so.1.9(+0x63274) [0x7f205476a274]
/home/giron/.rvm/rubies/ruby-1.9.2-p290/lib/libruby.so.1.9(rb_bug+0xb3) [0x7f205476a413]
/home/giron/.rvm/rubies/ruby-1.9.2-p290/lib/libruby.so.1.9(+0x10c215) [0x7f2054813215]
/lib/libpthread.so.0(+0xeff0) [0x7f20544f9ff0]
/lib/libc.so.6(+0xe40b8) [0x7f20539a80b8]
/lib/libgcc_s.so.1(_Unwind_Backtrace+0x49) [0x7f2050d5b599]
/lib/libc.so.6(backtrace+0x4e) [0x7f20539a81ae]
/home/giron/.rvm/rubies/ruby-1.9.2-p290/bin/ruby(_start+0) [0x400890]
[NOTE]
You may have encountered a bug in the Ruby interpreter or extension libraries.
Bug reports are welcome.
For details: http://www.ruby-lang.org/bugreport.html
Just to be sure that I have described the problem well, here is the last line of code (before this, Nokogiri parsing and work with the Redis database is done):
puts "End"
"End" is printed out, and after this the Ruby process consumes 100% CPU for several seconds.
This question is related to my previous one here: Nokogiri performance problem, where there are some more code snippets, but since I am focusing on a different approach here (profiling Ruby), I have created a new question.
Thank you in advance for any tips, I am pretty much clueless right now.
I was trying to use gdb to profile Ruby process itself
Don't do that. Calling backtrace may not be safe in the context you are executing in, and (apparently) causes your program to SIGSEGV.
Instead, just attach gdb to the running Ruby process and execute the "thread apply all where" command. Update your question with the output, and you may get a better answer.
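If attaching gdb is not convenient, a Ruby-level sketch (assuming MRI 1.9+, where Thread#backtrace is available) can at least show what each Ruby thread is doing: install a handler like the one below near the top of the script, then, while the process is spinning after "End", send it SIGQUIT from another terminal with kill -QUIT <pid>.

# Sketch: dump every Ruby thread's backtrace when the process receives SIGQUIT.
Signal.trap("QUIT") do
  Thread.list.each do |t|
    warn "--- #{t.inspect} ---"
    bt = t.backtrace || []
    warn bt.join("\n")
  end
end

If nothing prints, the time is probably being spent inside C code (for example the garbage collector or a native extension), where the VM cannot service the signal promptly, and attaching gdb as described above is the better tool.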