I am using version 3.12.10 of Linux. I am writing a simple module that loops through the task list and checks the stack usage of each process to see if any are in danger of overflowing the stack. To get the stack limit of the process I use:
tsk->signal->rlim[ RLIMIT_STACK ].rlim_cur
To get the memory address for the start of the stack I use:
tsk->mm->start_stack
I then subtract from it the result of this macro:
KSTK_ESP( tsk )
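Putting those pieces together, the check looks roughly like this (a sketch only: it assumes a 3.12-era kernel, skips kernel threads that have no user stack, and the function name check_stacks is mine):

#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/mm.h>

/* Sketch: walk every process and compare its user-stack usage to RLIMIT_STACK. */
static void check_stacks(void)
{
    struct task_struct *tsk;

    rcu_read_lock();
    for_each_process(tsk) {
        unsigned long limit, used;

        if (!tsk->mm)   /* kernel threads have no user stack */
            continue;

        limit = tsk->signal->rlim[RLIMIT_STACK].rlim_cur;
        used  = tsk->mm->start_stack - KSTK_ESP(tsk);

        if (used > limit)
            pr_warn("%s (pid %d): stack usage %lu exceeds limit %lu\n",
                    tsk->comm, tsk->pid, used, limit);
    }
    rcu_read_unlock();
}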
Most of the time this seems to work just fine, but on occasion I see a situation where a process appears to use more than its stack limit (usually 8 MB), yet the process continues to run and Linux itself does not report any sort of issue.
My question is, am I using the right variables to check this stack usage?
After doing more research I think I have realized that this is not a good way of determining how much stack was used. The problem arises when the kernel allocates more pages of memory to the stack for that process. Those pages may not be contiguous to the other pages. Thus the current stack pointer may be some value that would result in an invalid calculation.
The value in task->mm->stack_vm can be used to determine how much space was actually allocated to a process' stack. This is not as accurate as how much is actually used, but for my use, good enough.
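For reference, stack_vm counts pages rather than bytes, so it needs converting before comparing it with rlim_cur; roughly:

unsigned long stack_bytes = tsk->mm->stack_vm << PAGE_SHIFT;   /* pages -> bytes */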
Related
My textbook says the following:
Once the operating system decides to create a new process, it allocates space for all elements of the process image; therefore, the OS must know how much space is needed for the private user address space (programs and data) and the user stack.
As I understand it, the stack contains functions and local variables. And since much of the input into functions and the data resulting from any associated computations cannot be known at compile-time, the OS must allocate a static amount of memory to serve as the stack.
Given this, how does the OS determine at compile-time the sufficient amount of memory required by constituents of the stack? Given the dramatic variability of programs, I cannot imagine how the OS achieves this task. It would seem that if one tried to allocate a fixed amount of memory as the stack at compile-time, it would regularly result in either too much or too little memory. However, I presume that there is an effective mechanism in place to deal with this (to allocate an appropriate amount of memory as the stack); otherwise, stack overflows would be a common occurrence.
I would greatly appreciate it if someone could please take the time to clarify this concept.
I think you have never heard of stack overflow.
The short answer is that it cannot be determined at compile time. If it were possible to calculate the amount of stack memory required at compile time, there would be no such thing as a stack overflow, because the compiler would simply report an error saying that the amount of stack memory required exceeds the limit.
Consider the simple function:
int foo()
{
    return foo();   /* calls itself unconditionally; every call adds another stack frame */
}
The function will compile successfully, but calling it will result in a stack overflow.
The stack size is normally determined by the linker. Most linkers have options for setting the stack size. The stack is then created by the program loader along with the rest of the program's address space as it reads the instructions from the executable file.
I'm a computer science undergraduate taking an operating systems course. For my assignment, I am required to implement a simple thread management system.
I'm in the process of creating a struct for a TCB (a rough sketch of such a struct follows the list below). According to my lecture notes, what I could have in my TCB are:
registers,
program counter,
stack pointer,
thread ID and
process ID
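For what it's worth, a minimal TCB along those lines might look like the struct below (a sketch only; the field names, widths, and the fixed register count are illustrative assumptions, not taken from the lecture notes):

#include <stddef.h>
#include <stdint.h>

#define NUM_REGS 16                  /* assumed number of saved registers */

typedef struct tcb {
    uint32_t regs[NUM_REGS];         /* saved general-purpose registers */
    uint32_t pc;                     /* saved program counter */
    uint32_t sp;                     /* saved stack pointer */
    void    *stack_base;             /* lowest address of this thread's stack */
    size_t   stack_size;             /* bytes reserved for that stack */
    int      thread_id;
    int      process_id;
} tcb_t;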
Now according to my lecture notes, each thread should have its own stack. And my problem is this:
Just by storing the stack pointer, can I keep a unique stack per thread? If I did so, won't one thread's stack overwrite another thread's stack?
How can I prevent that? Should I limit the stack for each thread? Please tell me how this is usually done in a normal operating system.
Please help. Thanks in advance.
The OS may control stack growth by monitoring page faults from inaccessible pages located around the stack portion of the address space. This can help with detection of stack overflows by small amounts.
But if you move the stack pointer way outside the stack region of the address space and use it to access memory, you may step into the global variables or into the heap or the code or another thread's stack and corrupt whatever's there.
Threads run in the same address space for a reason: to share code and data between one another with minimal overhead. Their stacks usually aren't exempted from that sharing; each thread's stack is normally accessible to the other threads.
The OS is generally unable to do anything about preventing programs from stack overflows and corruptions and helping them to recover from those. The OS simply doesn't and can't know how an arbitrary program works and what it's supposed to do, hence it can't know when things start going wrong and what to do about them. The only thing the OS can do is just terminate a program that's doing something very wrong like trying to access inaccessible resources (memory, system registers, etc) or execute invalid or inaccessible instructions.
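To make the guard-page idea from the first paragraph concrete, here is a user-space sketch of the same principle (assuming POSIX mmap/mprotect; alloc_guarded_stack is a made-up helper, and real kernels arrange this differently):

#include <sys/mman.h>
#include <unistd.h>
#include <stddef.h>

/* Sketch: allocate 'size' usable bytes of stack with a PROT_NONE guard
   page at the low end; overrunning the stack by up to a page then hits
   the guard page and raises SIGSEGV instead of corrupting nearby memory. */
static void *alloc_guarded_stack(size_t size)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    char *base = mmap(NULL, size + page, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    if (base == MAP_FAILED)
        return NULL;
    if (mprotect(base, page, PROT_NONE) != 0) {   /* guard page at the bottom */
        munmap(base, size + page);
        return NULL;
    }
    return base + page;   /* usable stack region starts above the guard page */
}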
Would the OS warn the user before some threshold is reached, and would the application then actually crash if there is not enough memory to allocate the stack (local) variables of the current function?
Yes, you would get a Stack Overflow run-time error.
Side note: There is a popular web site named after this very error!
Stack allocation can fail and there's nothing you can do about it.
On a modern OS, a significant amount of memory will be committed for the stack to begin with (on Linux it seems to be 128k or so these days) and a (usually much larger, e.g. 8M on Linux, and usually configurable) range of virtual addresses will be reserved for stack growth. If you exceed the committed part, committing more memory could fail due to an out-of-memory condition and your program will crash with SIGSEGV. If you exceed the reserved address range, your program will definitely fail, possibly catastrophically if it ends up overwriting other data just below the stack address range.
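As a side note, the reserved figure can be read from inside the process with the standard getrlimit call; a minimal sketch (the values are the RLIMIT_STACK soft and hard limits, not the amount currently committed):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_STACK, &rl) == 0)
        printf("stack soft limit: %llu bytes, hard limit: %llu bytes\n",
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);
    return 0;
}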
The solution is not to do insane things with the stack. Even the initial committed amount on Linux (128k) is more stack space than you should ever use. Don't use call recursion unless you have a logarithmic bound on the number of call levels, don't use gigantic automatic arrays or structures (including ones that might result from user-provided VLA dimensions), and you'll be just fine.
Note that there is no portable and no future-safe way to measure current stack usage and remaining availability, so you just have to be safe about it.
Edit: One guarantee you do have about stack allocations, at least on real-world systems, (without the split-stack hack) is that stack space you've already verified you have won't magically disappear. For instance if you successfully once call c() from b() from a() from main(), and they're not using any VLA's that could vary in size, a second repetition of this same call pattern in the same instance of your program won't fail. You can also find tools to perform static analysis on some programs (ones without fancy use of function pointers and/or recursion) that will determine the maximum amount of stack space ever consumed by your program, after which you could setup to verify at program start that you can successfully use that much space before proceeding.
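On the non-portable front mentioned above: if you only care about Linux/glibc, one way to estimate the room left in the current thread's reserved stack range is the pthread_getattr_np extension (a sketch; stack_remaining is a name I made up, and the result is only a rough figure):

#define _GNU_SOURCE
#include <pthread.h>
#include <stddef.h>

/* Linux/glibc only: distance from a local variable down to the lowest
   address of the calling thread's stack range. */
static size_t stack_remaining(void)
{
    pthread_attr_t attr;
    void *stack_addr;
    size_t stack_size, remaining = 0;
    char probe;

    if (pthread_getattr_np(pthread_self(), &attr) == 0) {
        if (pthread_attr_getstack(&attr, &stack_addr, &stack_size) == 0)
            remaining = (size_t)(&probe - (char *)stack_addr);
        pthread_attr_destroy(&attr);
    }
    return remaining;
}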
Well... semantically speaking, there is no stack.
From the point of view of the language, automatic storage just works and dynamic storage may fail in well-determined ways (malloc returns NULL, new throws a std::bad_alloc).
Of course, implementations will usually bring up a stack to implement the automatic storage, and one that is limited in size at that. However this is an implementation detail, and need not be so.
For example, gcc -fsplit-stack lets you have a split stack that grows as you need. This technique is quite recent for C and C++ AFAIK, but languages with continuations (and thousands or millions of them) like Haskell have this built in, and Go made a point of it too.
Still, at some point, the memory will be exhausted if you keep hammering at it. This is actually undefined behavior, since the Standard does not attempt to deal with it at all. In this case, typically, the OS will send a signal to the program, the program will be terminated, and the stack will not be unwound.
The process would get killed by the OS if it runs out of stack space.
The exact mechanics are OS-specific. For example, running out of stack space on Linux triggers a segfault.
While the operating system may not inform you that you're out of stack space, you can check this yourself with a bit of inline assembly:
unsigned long StackSpace()
{
    unsigned long remaining = 0;

    /* Windows x86-32, MSVC inline assembly. FS:[0x08] holds the TIB
       StackLimit (the lowest committed stack address), so esp minus that
       value approximates the committed stack space still available. */
    __asm
    {
        mov eax, FS:[0x08]      ; StackLimit: lowest committed stack address
        mov ecx, esp            ; current stack pointer
        sub ecx, eax            ; remaining = esp - StackLimit
        mov remaining, ecx
    }

    return remaining;
}
You can determine the meaning of the FS:[*] offsets by referring to the Windows Thread Information Block.
Edit: corrected the subtraction; the stack limit (FS:[0x08]) should be subtracted from esp, not the other way around.
Does anybody know how much memory is allocated by default to a thread created on a Unix/Linux operating system?
For the Windows XP OS I found that it allocates a memory block of 1 MB; is that correct?
Thanks in advance.
There's not going to be a single answer to that question.
In fact there's not even a single answer on Windows. Different executables specify different stack limits. And even within a single process, individual threads can have different stack limits.
And it gets even more complicated when you factor in the differences between .NET and native executables. Rather strangely, .NET executables commit the entire stack allocation for each thread as soon as the thread starts. Native executables, on the other hand, reserve the stack allocation and then commit memory on demand using guard pages.
You can see how much space is allocated for thread stacks (measured in kbytes) with ulimit -s.
Quoting from the pthread_create(3) manpage:
On Linux/x86-32, the default stack size for a new thread is 2 megabytes. Under the NPTL threading implementation, if the RLIMIT_STACK soft resource limit at the time the program started has any value other than "unlimited", then it determines the default stack size of new threads.
Using pthread_attr_setstacksize(3), the stack size attribute can be explicitly set in the attr argument used to create a thread, in order to obtain a stack size other than the default.
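For completeness, a minimal sketch of the pthread_attr_setstacksize usage the man page describes (the 1 MiB figure and the worker function are just illustrative; build with -pthread):

#include <pthread.h>

static void *worker(void *arg)
{
    (void)arg;
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;

    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 1024 * 1024);   /* request a 1 MiB stack */

    if (pthread_create(&tid, &attr, worker, NULL) == 0)
        pthread_join(tid, NULL);

    pthread_attr_destroy(&attr);
    return 0;
}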
With reference to Stack Based Memory Allocation, it is stated that "...each thread has a reserved region of memory referred to as its stack. When a function executes, it may add some of its state data to the top of the stack; when the function exits it is responsible for removing that data from the stack" and "...that memory on the stack is automatically, and very efficiently, reclaimed when the function exits".
The first quoted sentence says the current thread is responsible, and the second quoted sentence says it's done automatically.
Question 1: Is it done automatically or by the current running thread?
Question 2: How does the deallocation of memory take place on the stack?
Question 1: by automatically (and very efficiently) they mean that just by shifting a memory pointer around (cutting the top off the stack), all memory used there is reclaimed. There is no complex garbage collection necessary.
Question 2: the stack is just a contiguous chunk of memory delimited by a start and an end pointer. Everything between the pointers belongs to the stack, everything beyond the end pointer is considered free memory. You allocate and deallocate memory by moving the end pointer (the top of the stack) around. Things are much more complicated on the heap, where memory use is fragmented.
You might understand more by looking at an example of a Call Stack (such as in C on many machines).
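As a toy illustration of that pointer movement (just the idea in plain C, not how a real call stack is implemented; the names are made up):

#include <stddef.h>

#define STACK_BYTES 1024

static unsigned char stack_mem[STACK_BYTES];
static unsigned char *top = stack_mem + STACK_BYTES;   /* stack grows downward */

/* "Allocate" n bytes by moving the top-of-stack pointer down. */
static void *stack_alloc(size_t n)
{
    if ((size_t)(top - stack_mem) < n)
        return NULL;            /* would overrun the reserved region */
    top -= n;
    return top;
}

/* "Deallocate" the most recent n bytes by moving the pointer back up. */
static void stack_free(size_t n)
{
    top += n;
}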
Question 1: Yes.
Question 2: by decreasing the stack pointer, i.e. the reverse operation of allocation.
The stack is managed by the compiler.
The heap is managed by a library.
Answer to question 1: in a JVM, heap memory is reclaimed automatically by the garbage collector, a daemon process that runs alongside the JVM; it checks all references and removes objects that are no longer reachable from the heap. The stack itself, however, is not managed by the garbage collector.
Answer to question 2: local variables and method-call frames are stored on the stack, and they are removed from the stack as soon as they go out of scope, i.e. as soon as the method returns.