Do coroutine stacks grow in Lua, Python, Ruby or any other languages? - stack-overflow

Several languages support deterministic lightweight concurrency in the form of coroutines:
Lua - coroutines
Stackless Python - tasklets
Ruby - fibers
There should be many more, but I'm not familiar with them.
Anyway, as far as I know they need many separate stacks, so I want to know how these languages handle stack growth. I ask because I read a mention of Ruby Fibers coming with a 4 KB stack - obviously a big overhead - and it being advertised as a feature that prevents stack overflow. But I don't understand why they don't just say the stacks grow automatically. It doesn't make sense that a VM - which is not restricted to the C stack - can't handle stack growth, but I can't confirm this because I don't know the internals well.
How do they handle stack growth in these kinds of micro-threads? Are there any explicit/implicit limitations? Or is it handled cleanly and automatically?

For ruby:
As per this Google Tech Talk, the Ruby VM uses a slightly hacky system: it keeps a copy of the C stack for each fiber and copies that stack onto the main stack every time it switches between fibers. This means that Ruby still restricts each fiber to a 4 KB stack, but the interpreter does not overflow if you switch between deeply nested fibers.
For Python:
Tasklets are only available in the Stackless variant. The Stackless Python VM uses heap-based stacks, so each tasklet gets its own stack on the heap and stack growth is inherently limited only by the size of the heap. For 32-bit systems this still means an effective limit of 1-4 GB.
For Lua:
Lua also uses heap-based stacks, and each coroutine gets its own stack in memory, so stack growth is again limited only by the size of the heap. For 32-bit systems this still means an effective limit of 1-4 GB.
To add a couple more to your list, C# and VB.NET both now support async/await. This is a system that allows the program to perform a time-consuming operation and have the rest of the function continue afterwards. It is implemented by replacing the original method with one that creates an object representing the method's state; that object has a single method which advances it to the next step, and which is called when you attempt to get a result and at various other internal points. Recursion depth is not affected, because the method is never more than a few frames further down the stack than you would expect.

Related

How to calculate total RSS of all thread stacks under Linux?

I have a heavily multi-threaded application under Linux consuming lots of memory, and I am trying to categorize its RSS. I found it particularly challenging to estimate the total RSS of all thread stacks in the program. I had the following ideas:
Idea 1: look into /proc/<pid>/smaps and consider the mappings used for stacks. There is information about the resident size of each mapping, but only the main thread's mapping is annotated as [stack]; the rest are indistinguishable from regular 8 MiB mappings (with the default stack size). Also, reading /proc/<pid>/smaps is pretty expensive, as it produces contention on kernel-internal VMA data structures.
Idea 2: look into /proc/<tid>/status. There is a VmStk field which should describe the stack's resident size, but it always shows the stack size of the main thread. The reason seems clear: the main thread is the only one whose stack the kernel allocates itself, while the rest of the threads get their stacks from the pthreads code, which allocates them as regular memory mappings.
Idea 3: traverse the threads from user space using pthreads facilities, retrieve the stack mapping address and stack size for each thread, and then find out how many pages are resident using mincore(2). As a possible optimization, we could skip calling mincore for sleeping threads and use a cached value for them. Unfortunately, I did not find any suitable way to iterate over pthread_t structures. Note that some of the threads come from libraries which I cannot control, so maintaining any kind of thread registry by registering threads on startup is not possible.
Idea 4: use ptrace(2) to retrieve thread registers, retrieve the stack pointers from them, and then proceed as in Idea 1. This way looks excessively hard and intrusive.
Can anybody suggest a more or less intended way to do this? Non-portable solutions are OK.
Two more ideas I got after some extra research:
Idea 5: from man 5 proc on /proc/<pid>/maps:
There are additional helpful pseudo-paths:
[stack]
The initial process's (also known as the main thread's) stack.
[stack:<tid>] (since Linux 3.4)
A thread's stack (where the <tid> is a thread ID). It corresponds to the /proc/[pid]/task/[tid]/ path.
It looks intriguing, but it seems that this logic has been reverted because it was implemented inefficiently: https://lore.kernel.org/patchwork/patch/716239/. The man page seems obsolete (at least on my Ubuntu Disco 19.04).
Idea 6: This one may actually work. There is a /proc/<tid>/syscall file which exposes a thread's stack pointer register while the thread is blocked. Considering that most of my threads are sleeping on I/O, this lets me read their rsp value, which I can then project onto /proc/<pid>/maps to find the correspondence between each thread and its stack mapping. After that I can implement Idea 3.

Is there a difference between data structure stack and hardware stack?

I've debated with my friend about the difference between the data structure stack and the hardware stack (call stack). I thought they were exactly the same, because both have 'push' and 'pop' and can only deal with the latest element. But my friend said they are not the same at all and only share the name 'stack'. He thinks so because on the call stack we can access addresses that are not the latest ones, contradicting the definition of the stack data structure. Can you give an answer to this?
Here are some differences:
Usually you can only have one hardware stack (per thread).
You can have as many software stacks as desired.
Usually hardware stack is managed by the CPU directly.
Software stack access is managed explicitly from the code.
Hardware stack is usually directly related to call stack (caller functions, their arguments).
Software stack is independent of the hardware call stack (you can push items in one function and pop them in another, independently of the hardware stack level).
Hardware stack memory is managed by the OS or CPU (might be limited).
Software stack memory is managed by the application.
Basically, both stacks have push and pop operations and thus work like a stack.
Either kind of stack can be pure or impure. Usually a hardware stack allows access to items at a relative position from the top (e.g. for arguments), while on a software stack such access would typically be restricted, essentially private.
On some embedded devices, the hardware stack might serve only for return addresses, and a software-based stack might be needed for arguments. On some devices the maximum depth can be very low.
The first is a data structure; the second is that data structure applied.
Like most applications of data structures in the real world, it's not pure and has features added for convenience or speed.

Checking a process' stack usage in Linux

I am using version 3.12.10 of Linux. I am writing a simple module that loops through the task list and checks the stack usage of each process to see if any are in danger of overflowing the stack. To get the stack limit of the process I use:
tsk->signal->rlim[ RLIMIT_STACK ].rlim_cur
To get the memory address for the start of the stack I use:
tsk->mm->start_stack
I then subtract from it the result of this macro:
KSTK_ESP( tsk )
Most of the time this seems to work just fine, but on occasion I see a situation where a process uses more than its stack limit (usually 8 MB), yet the process continues to run and Linux itself does not report any sort of issue.
My question is, am I using the right variables to check this stack usage?
After doing more research, I think I have realized that this is not a good way of determining how much stack is used. The problem arises when the kernel allocates more pages of memory to the stack for that process; those pages may not be contiguous with the other pages, so the current stack pointer may be a value that results in an invalid calculation.
The value in task->mm->stack_vm can be used to determine how much space has actually been allocated to a process' stack. This is not as accurate as how much is actually used, but for my purposes it is good enough.

Infinite Ruby Fibers?

Is it possible to create two Ruby Fibers that call each other forever? Would Ruby eventually crash with a stack overflow, or do Fibers not consume stack space?
If you write an infinite loop in any programming language, something will eventually break. I'm not familiar with Ruby Fibers, but if they are calling each other via methods, then the stack will eventually overflow.
Other things that can break in an infinite loop scenario are anything that is a limited resource, so disk space and network bandwidth are usually the next two (the network because things usually time out).
Resuming a fiber doesn't increase the stack size. If you recursed into a function each time before you resumed the other fiber, then the stack would grow until it overflowed - just as it does with ordinary infinite recursion.

How many stacks does a windows program use?

Are return addresses and data mixed/stored in the same stack, or in two different stacks? Which is the case?
They are mixed. However, it depends on the actual programming language/compiler. I can imagine a compiler allocating space for local variables on the heap and keeping a pointer to that storage on the stack.
There is one stack per thread in each process. Hence, for example, a process with 20 threads has 20 independent stacks.
As others have already pointed out, it's mostly a single, mixed stack. I'll just add one minor detail: reasonably recent processors also have a small cache of return addresses that's stored in the processor itself, and this stores only return addresses, not other data. It's mostly invisible outside of faster execution though...
It depends on the compiler, but the x86 architecture is geared towards a single stack, due to the way push and pop instructions work with a single stack pointer. The compiler would have to do more work maintaining more than one stack.
One more note: every thread in Win32 has its own stack. So when you say "Windows program", it depends on how many threads it has (and of course threads are created and exit during runtime).
