I'm trying to run my application after compiling it with AdaCore's GPS (GNAT Programming Studio).
I get the run time error
Exception name: STORAGE_ERROR
Message: EXCEPTION_STACK_OVERFLOW
I get these run time errors despite setting the stack size in the binder options using
-d65535 (task stack size) and
-D65535 (secondary stack size)
(I have also tried 65535k for both, as well as 655m.)
The application runs fine when compiled with the Aonix ObjectAda compiler. In the Aonix compiler I set:
- the stack size to 65535,
- the secondary stack size to 65535,
- and the task stack size to 46345.
My main aim is to port the application to the GNAT Ada compiler.
I notice that -d sets the task stack size and -D the secondary stack size, but I can't see where to set the main stack size. I assume this is the issue with the application, but please correct me if I am looking in the wrong direction.
Any pointers would be greatly appreciated.
Bearslumber
If the problem is indeed the main task, a workaround is to move the main procedure to the body of a helper task.
First, compile for debug (-g) (there may be other relevant options; posting wrong information is the fastest way to find them ;-) and you should get more information: the source file and line that raised the exception, or a stack trace which you can analyze via addr2line.
That should help understand why it is raising...
Are you allocating hundreds of MB on the stack? I've got away with about 200MB in the past...
Is the raise within one of the container classes or part of the RTS?
Is the message actually misleading and a new() heap allocation failed? Other things than the stack can raise Storage_Error, and I'm not clear how or if the default handler distinguishes the cause...
We can't proceed further down this path without more information: edit it into the question and I'll take another look.
Setting the stack size for the environment task is not directly possible within GNAT. It's part of gcc's interaction with the OS, and is supposed to use the system's ulimit settings for the resulting executable (on Linux; other OSes may vary)...
Unfortunately, around the gcc/GNAT 4.5 timeframe I found evidence these were being ignored, though that may have been corrected since; I haven't revisited the problem.
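A quick way to check the ulimit theory on Linux is to raise the soft stack limit in the shell before launching the program (a sketch; the 64 MiB value and the ./my_app name are placeholders for your own setup):

```shell
# Show the current soft stack limit (in KiB)
ulimit -S -s

# Raise it to 64 MiB for this shell and its child processes,
# then run the executable under the new limit
ulimit -S -s 65536
./my_app
```

If the overflow disappears with a larger limit, you have confirmed it is the environment task's stack that is too small.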
I have seen Alex's answer posted elsewhere as a viable workaround if the debug trace and ulimit settings don't provide the answer, or if you need to press on instead of spending time debugging. To keep the codebase clean, I would suggest a wrapper: simply create the necessary task and call your current main procedure from it. For Aonix you simply don't need to include the wrapper file in your build.
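A minimal sketch of such a wrapper (Real_Main stands in for your existing main procedure, and the 64 MB Storage_Size is just an example value to tune):

```ada
--  wrapper.adb: run the real main inside a task with an explicit stack size.
--  Real_Main and the 64 MB figure are assumptions for illustration.
with Real_Main;

procedure Wrapper is
   task Main_Task is
      pragma Storage_Size (64 * 1024 * 1024);
   end Main_Task;

   task body Main_Task is
   begin
      Real_Main;
   end Main_Task;
begin
   null;  --  the environment task just waits for Main_Task to complete
end Wrapper;
```

Build with Wrapper as the main unit for GNAT; for Aonix, leave wrapper.adb out of the build and keep Real_Main as the main procedure.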
Related
I am using Open Cobol.
I have a program that I have been running for several weeks.
Yesterday, I got the following error:
MERRILL_MAX_AMOUNTS.COB:46: libcob: Stack overflow, possible PERFORM depth exceeded
I tried going back to other versions of the same program that worked, but I am still getting the same error. I have several other programs that run fine with no problem.
If the program was running for several weeks and then ends with this error, the program seems to be broken.
You get that error if a section/paragraph was PERFORMed and then (likely after a bunch of other statements, possibly including GO TO or PERFORMs of other sections/paragraphs) is PERFORMed again while still active (i.e. recursively).
In most cases this is an error.
If the same program "worked before" and now doesn't, then its program flow has changed, likely because different data is being processed.
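For illustration, a hypothetical flow that would trip the check (paragraph names made up):

```cobol
       PROCEDURE DIVISION.
       MAIN-PARA.
           PERFORM PARA-A
           STOP RUN.
       PARA-A.
           PERFORM PARA-B.
       PARA-B.
      *>   PARA-A is still active here, so performing it again is
      *>   recursive and deepens the PERFORM stack on every pass:
           PERFORM PARA-A.
```

Each pass through PARA-B re-enters PARA-A before the earlier activation has finished, so the PERFORM depth grows until libcob reports the overflow.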
You could enable tracing of paragraphs and sections for this single program by compiling it with -ftrace and adjusting runtime.cfg / export/set COB_SET_TRACE and COB_TRACE_FILE according to the runtime documentation.
Note: The PERFORM stack checking is only enabled upon request by -fstack-check, which is auto-enabled with --debug (all runtime checks) or -g (debugging); if you don't want this, you can disable it by explicitly specifying -fno-stack-check.
You can also adjust the number of iterations libcob considers "possibly safe" with -fstack-size=number, the current default of 255 is quite high, the maximum that can be set in a current version is 512 (artificial limit only).
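Putting the above together, a sketch of the compile-and-trace recipe (the file and program names are taken from your message; the trace file name is just an example):

```shell
# Recompile only this program with tracing and the PERFORM stack check
cobc -x -g -ftrace MERRILL_MAX_AMOUNTS.COB

# Ask the runtime to write the trace to a file, then run
export COB_SET_TRACE=Y
export COB_TRACE_FILE=trace.log
./MERRILL_MAX_AMOUNTS
```

The tail of trace.log should then show which sections/paragraphs were being re-entered when the overflow hit.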
In any case I highly suggest replacing the outdated OpenCOBOL (likely 1.1 from Feb 2009) with a current GnuCOBOL version (latest: 3.1-rc1, released 19 days ago).
We have to solve the liars problem in Prolog, in several environments with constraints (ECLiPSe ic, ECLiPSe fd, SWI-Prolog, GNU Prolog, NaxosSolver, etc.). I have used tail recursion (I think) and as many cuts as I could think of (that way, I guess, the resolution tree doesn't get as big as it could). If requested, I can post my code.
When the number of data items reaches 10000-50000, I get a stack overflow in fd and ic in ECLiPSe, and in SWI-Prolog the program runs forever. So I would like to increase the stack size in ECLiPSe, but I cannot see how.
I tried writing this in the first line of my code:
:-set_flag(local_stack_allocated, 512).
but it says the value is out of range.
Here is what ECLiPSe says, which might be helpful:
* Overflow of the global/trail stack in spite of garbage collection!
You can use the "-g kBytes" (GLOBALSIZE) option to have a larger stack.
Peak sizes were: global stack 128832 kbytes, trail stack 5312 kbytes
First, from the error message text I'm assuming that you mean the ECLiPSe constraint logic programming system, not the Eclipse IDE.
Second, how do you start ECLiPSe? How do you load your code into ECLiPSe?
Try this (you said you are on Windows):
Open command line from the folder where your ECLiPSe source file (say, 'myprogram.ecl') exists. For instructions look at this page: http://www.techsupportalert.com/content/how-open-windows-command-prompt-any-folder.htm
In the command line put eclipse -g 512000 and press ENTER.
Load your program using [myprogram]. (put name of your ECLiPSe source file instead of 'myprogram').
Execute queries as usual.
But I suspect that your program just runs forever and eats all memory, so all this probably won't help in the end.
EDIT. Updated instructions for TkECLiPSe:
In TkECLiPSe in menu choose Tools -> TkECLiPSe Preference Editor.
In preference window find option "Global/trail stack size (in megabytes)" and set it to 512.
Save preferences and close TkECLiPSe.
Next time you run TkECLiPSe stack size will be set to 512 Mb.
I have an application that traces program execution through memory. I tried to use readelf --debug-dump=decodedline to get memory address / line number information, but the memory addresses I see often don't match up with the ones given by that dump. I wrote something to match each address with the "most recent" one appearing in the DWARF data -- this seemed to clean some things up, but I'm not sure if that's the "official" way to interpret this data.
Can someone explain the exact process to map a program address to line number using DWARF?
Have a look at the program addr2line. It can probably give you some guidance on how to do this, if not solving your problem entirely (e.g. by shelling out to it, or linking its functionality in).
Indeed, as mentioned by Phil Miller's answer, addr2line is your friend. I have a gist where I show how I get the line number in the (C++) application source code from an address obtained from a backtrace.
Following this process will not show you the process you mention, but can give you an idea of how the code gets mapped into the object code (in an executable or a library/archive). Hope it helps.
Hey guys, I have a question regarding Amzi! Prolog with Eclipse.
I'm running a .pro file which executes a breadth-first search, and if the queue gets too long,
the following error message appears:
system_error 1021 Control stack full.
Compile code or increase .cfg
parameter 'control'
So I should compile the code -- but how may I run the compiled code under Eclipse? I've tried running the project, but the listener just ends without accepting any queries.
Control stack full means one of two things:
You have a deep recursion that exhausts the control stack. In that case you need to increase the default value of 'control' in your amzi.cfg file. You may find that you have to increase 'heap', 'trail' and/or 'local' as well.
You have an error in your program causing an infinite recursion.
Running the program in the debugger will show you which case you've got. In the first case you will see it digging deeper and deeper for a solution. In the latter case you will see it chasing its tail in circles, with each recursion the same as the one before but with different variables.
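I don't have an Amzi! installation at hand, so treat this as an assumed sketch of what the amzi.cfg entries might look like (the parameter names come from the error message; the values are guesses to be tuned, and the exact key=value syntax should be checked against the Amzi! documentation):

```
control=100000
heap=200000
local=50000
trail=50000
```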
I don't know Amzi! Prolog (I have only used SICStus and SWI), and I've never used Eclipse for Prolog, but as the error message says, try compiling (instead of consulting) your code. Look under Project/Properties for build configurations (like run/debug, as it works for Java/C++). Hopefully that ".cfg parameter" can also be accessed through Project/Properties.
Is it possible for the debugger (or the CLR exception handler) to show the line where the exception happened in Release mode using the pdb?
The code, in release mode, is optimized and does not always follow the order and logic of the "original" code.
It's also surprising that the debugger can navigate through my code step by step, even in Release mode. The optimization should make the navigation very uncomfortable.
Could you please clarify those two points for me?
I'm not as familiar with how this is done with CLR, but it's probably very similar to how it's done with native code. When the compiler generates machine instructions, it adds entries to the pdb that basically say "the instruction at the current address, X, came from line 25 in foo.cpp".
The debugger knows what program address is currently executing. So it looks up some address, X, in the pdb and sees that it came from line 25 in foo.cpp. Using this, it's able to "step" through your source code.
This process is the same regardless of Debug or Release mode (provided that a pdb is generated at all in Release mode). You are right, however, that often in release mode due to optimizations the debugger won't step "linearly" through the code. It might jump around to different lines unexpectedly. This is due to the optimizer changing the order of instructions, but it doesn't change the address-to-source-line mapping, so the debugger is still able to follow it.
[#Not Sure] has it almost right. The compiler makes a best effort at identifying an appropriate line number that closely matches the current machine code instruction.
The PDB and the debugger don't know anything about optimizations; the PDB file essentially maps address locations in the machine code to source code line numbers. In optimized code, it's not always possible to match exactly an assembly instruction to a specific line of source code, so the compiler will write to the PDB the closest thing it has at hand. This might be "the source code line before", or "the source code line of the enclosing context (loop, etc)" or something else.
Regardless, the debugger essentially finds the entry in the PDB map closest (as in "before or equal") to the current IP (Instruction Pointer) and highlights that line.
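That "closest entry at or before the current address" lookup is easy to sketch (the table data here is hypothetical; real PDB/DWARF readers expose the same idea through their own APIs):

```python
import bisect

def addr_to_line(rows, addr):
    """Map an instruction address to (file, line) using the record whose
    start address is closest at-or-before addr.
    rows: list of (start_address, file, line) tuples, sorted by address."""
    starts = [r[0] for r in rows]
    i = bisect.bisect_right(starts, addr) - 1
    if i < 0:
        return None  # address precedes every known line record
    _, source_file, line = rows[i]
    return source_file, line

# Hypothetical fragment of a line table
table = [(0x1000, "foo.cpp", 25), (0x1010, "foo.cpp", 26), (0x1030, "foo.cpp", 30)]
print(addr_to_line(table, 0x1015))  # → ('foo.cpp', 26)
```

An address in the middle of an optimized instruction range resolves to the nearest preceding record, which is exactly why the highlight can land on a line that only loosely corresponds to the instruction being executed.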
Sometimes the match is not very good, and that's when you see the highlighted area jumping all over the place.
The debugger makes a best-effort guess at where the problem occurred. It is not guaranteed to be 100% accurate, and with fully optimized code, it often will be inaccurate - I've found the inaccuracies ranging anywhere from a few lines off to having an entirely wrong call stack.
How accurate the debugger is with optimized code really depends on the code itself and which optimizations you're making.
Reference the following SO question:
Display lines number in stack trace for .NET assembly in release mode