How does Golang GC reclaim complete allocations when user references don't point to the start of the allocation?

How does the Go garbage collector know the beginning address of a slice's backing store, if the slice points to the middle of the original store?
Consider the following:
array := make([]int, 1000)
slice := array[999:1000]
array = nil
We are famously warned that the GC will hang on to the entire array although the ptr part of slice points towards the end of the original array.
But where is the original address of the array kept so the GC can clear it entirely?
When the GC begins its mark phase and reaches slice, how does it mark the beginning of the array as "in use"?
The same question could be asked for other cases where the GC doesn't collect objects when the user only holds a reference to part of them, such as when the user only has a reference to one field of a struct or to part of a string.
Edit:
After some discussion I suspect that in all of these cases, when the GC reaches a user reference during the mark phase, it performs a lookup in its internal allocation block mechanisms, searching for the block that contains the referenced address, and marks that entire block as "in use".
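That intuition matches observable behaviour. Below is a minimal sketch (my own illustration, not taken from the Go runtime sources) showing that a one-element slice into the tail of an array keeps the whole backing store reachable; the exact byte counts printed will vary by Go version and platform:

package main

import (
	"fmt"
	"runtime"
)

var keep []int // package-level, so the reference outlives main's locals

func main() {
	array := make([]int, 1000000) // roughly 8 MB of backing store
	keep = array[999999:]         // slice whose data pointer is near the end
	array = nil

	runtime.GC()
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	// HeapAlloc still includes the ~8 MB: during marking the GC maps
	// keep's data pointer back to the allocation that contains it and
	// marks that entire allocation as live.
	fmt.Printf("HeapAlloc after GC: %d bytes\n", m.HeapAlloc)
}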

Related

How does Linux buddy allocator determine which size order list to insert into when freeing block?

The Binary Buddy Allocator used in Linux uses bitmaps where each bit corresponds to the state of a pair of buddy blocks (taken from this article). And the void free_page(void *addr) call doesn't take the size of the allocated block that is to be freed.
I don't understand how this algorithm determines which order list to insert a freed block into without knowing its size. Let's imagine we allocated all available memory, so all the bitmaps are zeroed. Now we call free_page() with some address, and it's not clear into which list the block should be inserted.
Of course free_page(void *addr) or the equivalent __free_page(struct page *page) do not need a second argument: their purpose is to free one page only, and the size of a single page is known. The order is always 0 in this case.
Freeing a block of pages is done through __free_pages(struct page *page, unsigned int order), which does indeed take order as second argument.
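To make that concrete, here is a tiny toy model in Go (purely illustrative, not the kernel's actual data structures): the per-order free lists only work if the caller supplies the order, which is exactly why __free_pages takes it as an argument and free_page can hard-code order 0.

package main

import "fmt"

// Toy model: one free list per order, as in a buddy allocator.
type buddy struct {
	freeLists [11][]uintptr // orders 0..10
}

// freePages needs the order to pick the right list (real code would also
// try to merge the freed block with its buddy here).
func (b *buddy) freePages(addr uintptr, order uint) {
	b.freeLists[order] = append(b.freeLists[order], addr)
}

// freePage frees exactly one page, so the order is known: it is 0.
func (b *buddy) freePage(addr uintptr) {
	b.freePages(addr, 0)
}

func main() {
	var b buddy
	b.freePage(0x1000)     // a single page
	b.freePages(0x4000, 3) // a 2^3 = 8-page block
	fmt.Println(len(b.freeLists[0]), len(b.freeLists[3])) // 1 1
}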

Faster memory allocation and freeing algorithm than multiple Free List method

We allocate and free many memory blocks. We use Memory Heap. However, heap access is costly.
For faster memory access allocation and freeing, we adopt a global Free List. As we make a multithreaded program, the Free List is protected by a Critical Section. However, Critical Section causes a bottleneck in parallelism.
To remove the Critical Section, we assign a Free List to each thread, i.e. Thread Local Storage. However, if thread T1 always allocates memory blocks and thread T2 always frees them, the Free List in thread T2 keeps growing, and there is no benefit from the Free List.
Despite the Critical Section bottleneck, we adopt Critical Sections again, with a different method. We prepare several Free Lists, each with its own Critical Section, thus Free Lists 0..N-1 and Critical Sections 0..N-1. We prepare an atomically updated integer that cycles through 0, 1, 2, ..., N-1 and back to 0. For each allocation and freeing, we read the integer as X, advance it, enter the X-th Critical Section, and access the X-th Free List. However, this is quite a bit slower than the previous method (using Thread Local Storage): the atomic operation gets slow as the number of threads grows.
Since mutating the integer non-atomically causes no corruption, we did the mutation in a non-atomic way. However, because the value is sometimes stale, different threads often end up accessing the same Critical Section and Free List. This causes the bottleneck again, though less often than in the previous method.
Instead of the shared integer, we used the thread ID hashed into the range 0..N-1, and the performance got better.
I guess there must be much better way of doing this, but I cannot find an exact one. Are there any ideas for improving what we have made?
Dealing with heap memory is a task for the OS. Nothing guarantees you can do a better/faster job than the OS does.
But there are some conditions where you can get a bit of improvement, especially when you know something about your memory usage that is unknown to the OS.
I'm writing down my untested idea here; I hope you get some benefit from it.
Let's say you have T threads, all of them reserving and freeing memory. The main goal is speed, so I'll try not to use TLS, critical sections, or atomic ops.
If (repeat: if, if, if) the app can fit into several discrete sizes of memory blocks (not random sizes, so as to avoid fragmentation and useless holes), then start by asking the OS for a number of these discrete blocks.
For example, you have an array of n1 blocks each of size size1, an array of n2 blocks each of size size2, an array of n3... and so on. Each array is two-dimensional; the second field just stores a used/free flag for each block. If your arrays are very large then it's better to use a dedicated array for the flags (because contiguous memory access is always faster).
Now, someone asks for a block of memory of size sB. A specialized function (or object or whatever) searches the array whose blocks are of size greater than or equal to sB, and then selects a block by looking at the used/free flags. Just before finishing, it sets the proper block flag to "used".
When two or more threads ask for blocks of the same size, the flag may get corrupted. Using TLS would solve this issue, and so would a critical section. I think you can set a bool flag at the beginning of the search into the flags array, making the other threads wait until the flag changes, which only happens after the block flag changes (note that in real code the check and the set of this flag would themselves have to be atomic, e.g. a test-and-set, or two threads can still slip past the check at the same time). In pseudocode:
MemoryGetter(sB)
{
    // select which array depending on 'sB'
    for (i = 0; i < numOfArrays; i++)
        if (sizeOfArr(i) >= sB) {
            arrMatch = i
            break // exit for
        }

    // wait if another thread wants a block from the same arrMatch array
    while (searching(arrMatch) == true)
        ; // wait

    // block other threads wanting a block from the same arrMatch array
    searching(arrMatch) = true

    // get the first free block
    for (i = 0; i < numOfBlocks; i++)
        if (arrOfUsed(arrMatch, i) != true) {
            selectedBlock = addressOf(....)
            // mark the block as used
            arrOfUsed(arrMatch, i) = true
            break // exit for
        }

    // allow other threads
    searching(arrMatch) = false

    return selectedBlock // NOTE: selectedBlock == NULL means no free block
}
Freeing a block is easier, just mark it as free, no thread concurrency issue.
Dealing with no free blocks is up to you (wait, use a bigger block, ask OS for more, etc).
Note that the whole memory is reserved from the OS at app start, which can be a problem.
If this idea makes your app faster, let me know. What I can say for sure is that the memory used is greater than with normal OS requests, but not by much if you choose "good" sizes, the ones your app uses most.
Some improvements can be made:
Cache the last freed block (per size) so as to avoid the search.
Start with not that many blocks, and ask the OS for more memory only when needed. Play with the number of blocks for each size depending on your app. Find the optimal case.
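For what it's worth, the hashed-ID sharding that worked best in the question maps naturally onto Go, where sync.Pool already does something similar internally (per-thread/per-P caches with a shared fallback). A rough sketch with explicitly sharded, mutex-protected free lists follows; the shard count, block size and the id parameter are illustrative choices, not a recommendation:

package main

import (
	"fmt"
	"sync"
)

const numShards = 16

type shard struct {
	mu   sync.Mutex
	free [][]byte // blocks waiting for reuse
}

type shardedPool struct {
	blockSize int
	shards    [numShards]shard
}

// Get picks a shard by hashing the caller-supplied id (e.g. a worker id),
// so callers that hash to different shards never contend on the same lock.
func (p *shardedPool) Get(id uint64) []byte {
	s := &p.shards[id%numShards]
	s.mu.Lock()
	defer s.mu.Unlock()
	if n := len(s.free); n > 0 {
		b := s.free[n-1]
		s.free = s.free[:n-1]
		return b
	}
	return make([]byte, p.blockSize) // miss: fall back to the regular allocator
}

func (p *shardedPool) Put(id uint64, b []byte) {
	s := &p.shards[id%numShards]
	s.mu.Lock()
	s.free = append(s.free, b)
	s.mu.Unlock()
}

func main() {
	p := &shardedPool{blockSize: 4096}
	b := p.Get(42) // 42 stands in for a thread/worker id
	p.Put(42, b)
	fmt.Println("reused blocks in shard:", len(p.shards[42%numShards].free))
}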

How to release memory allocated by a slice? [duplicate]

This question already has answers here:
How do you clear a slice in Go?
Cannot free memory once occupied by bytes.Buffer
package main

import (
	"fmt"
	"time"
)

func main() {
	storage := []string{}
	for i := 0; i < 50000000; i++ {
		storage = append(storage, "string string string string string string string string string string string string")
	}
	fmt.Println("done allocating, emptying")
	storage = storage[:0]
	storage = nil
	for {
		time.Sleep(1 * time.Second)
	}
}
The code above will allocate about 30 MB of memory and then won't release it. Why is that? How can I force Go to release the memory used by this slice? I sliced that slice and then nilled it.
The program I'm debugging is a simple HTTP input buffer: it appends all requests into large chunks, and sends these chunks over a channel to goroutine for processing. But problem is illustrated above - I can't get storage to release the memory and then eventually run out of memory.
Edit: as some people have pointed out regarding the similar questions: no, first, it doesn't work, and second, it isn't what I'm asking for. The slice gets emptied; the memory does not get released.
There are several things going on here.
The first thing that needs to be absorbed is that Go is a garbage-collected language; the actual algorithm of its GC is mostly irrelevant here, but one aspect of it is crucial to understand: it does not use reference counting, and hence there's no way to make the GC immediately reclaim the memory of any given value whose storage is allocated on the heap.
To recap it in simpler words, it's futile to do
s := make([]string, 10*100*100)
s = nil
as the second statement will indeed remove the sole reference to the slice's underlying memory but won't make the GC go and "mark" that memory as available for reuse.
This means two things:
You should know how the GC works. This explains how it works since v1.5 and up until now (v1.10 these days).
You should structure those of your algorithms which are memory-intensive in a way that reduces memory pressure.
The latter can be done in several ways:
Preallocate, when you have a sensible idea about how much to.
In your example, you start with a slice of length 0 and then append to it a lot. Now, almost all library code which deals with growing memory buffers—the Go runtime included—handles these allocations by 1) allocating roughly twice the memory requested, hoping to prevent several future allocations, and 2) copying the "old" contents over when it has to reallocate. This one is important: when reallocation happens, there are two memory regions in existence at once: the old one and the new one.
If you can estimate that you may need to hold N elements on average, preallocate for them using make([]T, 0, N)—more info here and here.
If you'll need to hold fewer than N elements, the tail of that buffer will be unused, and if you'll need to hold more than N, you'll need to reallocate, but on average you won't need any reallocations.
Re-use your slice(s). Say, in your case, you could "reset" the slice by reslicing it to zero length and then use it again for the next request. This is called "pooling", and in the case of mass-parallel access to such a pool, you could use sync.Pool to hold your buffers (a minimal sketch follows this list).
Limit the load on your system so that the GC is able to cope with the sustained load. A good overview of the two approaches to such limiting is this.
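Regarding the pooling point above, a minimal sync.Pool sketch might look like this; the buffer type and the 64 KiB starting capacity are illustrative choices, not requirements:

package main

import (
	"fmt"
	"sync"
)

// Pool of reusable byte buffers; New is only called when the pool is empty.
var bufPool = sync.Pool{
	New: func() interface{} {
		b := make([]byte, 0, 64*1024) // arbitrary starting capacity
		return &b
	},
}

func handle(req []byte) {
	bp := bufPool.Get().(*[]byte)
	buf := (*bp)[:0]          // reset length, keep the allocated capacity
	buf = append(buf, req...) // use the buffer for this request
	fmt.Println(len(buf))
	*bp = buf
	bufPool.Put(bp) // hand the buffer back for the next request
}

func main() {
	handle([]byte("hello"))
	handle([]byte("a longer request body"))
}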
In the program you wrote, it makes no sense to release memory because no part of the code requests it any more.
To make a valid case, you have to request new memory and release it inside the loop. Then you will observe that the memory consumption stabilizes at some point.
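One rough way to observe that (my own illustration; the sizes and output format are arbitrary):

package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	for i := 0; ; i++ {
		// Request new memory inside the loop; the reference goes out of
		// scope at the end of each iteration, so the GC can reuse it.
		storage := make([]string, 0, 1000000)
		for j := 0; j < 1000000; j++ {
			storage = append(storage, "string string string")
		}

		var m runtime.MemStats
		runtime.ReadMemStats(&m)
		fmt.Printf("iteration %d: HeapAlloc = %d MiB\n", i, m.HeapAlloc>>20)
		time.Sleep(time.Second)
	}
}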

Memory management in Forth

So I'm just learning Forth and was curious if anyone could help me understand how memory management generally works. At the moment I only have (some) experience with the C stack-vs-heap paradigm.
From what I understand, one can allocate in the Dictionary, or on the heap. Is the Dictionary faster/preferred like the stack in C? But unlike in C, there aren't scopes and automatic stack reclamation, so I'm wondering if one only uses the dictionary for global data structures (if at all).
As far as the heap goes, is it pretty much like C? Is heap management a standard (ANS) concept, or is it implementation-defined?
It is not a choice between the dictionary and a heap: the equivalent of the heap is the dictionary. However, it comes with the severe limitation that it acts more like a stack than a heap: new words are added to the end of the dictionary (allocation with ALLOT, and freeing with FORGET or FREE, but these free all newer words as well, acting more like multiple POPs).
An implementation can control the memory layout and thus implement a traditional heap (or garbage collection). An example is A FORTH implementation of the Heap Data Structure for Memory Management (1984). Another implementation is Dynamic Memory Heaps for Quartus Forth (2000).
A lot is implementation dependent or provided by extensions. For instance, the memory layout often has the two block buffers (located by BLOCK and TIB), the text input buffer, and the values and low-level/primitive functions of the language in the lowest portion; the dictionary in the middle (growing upwards); and the return stack and the parameter stack at the top [1].
The address of the first available byte above the dictionary is returned by HERE (it changes as the dictionary expands).
There is also a scratchpad area above the dictionary (address returned by PAD) for temporarily storing data. The scratchpad area can be regarded as free memory.
The preferred mode of operation is to use the stack as much as possible instead of local variables or a heap.
[1] p. 286 (about a particular edition of Forth, MMSFORTH), in the chapter "FORTH's Memory, Dictionary, and Vocabularies", Forth: A Text and a Reference. Mahlon G. Kelly and Nicholas Spies. ISBN 0-13-326349-5 / 0-13-326331-2 (pbk.). 1986, Prentice-Hall.
The fundamental question may not have been answered in a way that a new Forth user would require so I will take a run at it.
Memory in Forth can be very target dependent so I will limit the description to the simplest model, that being a flat memory space, where code and data live together happily. (as opposed to segmented memory models, or FLASH memory for code and RAM for data or other more complicated models)
The Dictionary typically starts at the bottom of memory and is allocated upwards by the Forth system. The two stacks, in a simple system would exist in high memory and typically have two CPU registers pointing to them. (Very system dependent)
At the most fundamental level, memory is allocated simply by changing the value of the dictionary pointer variable. (sometimes called DP)
The programmer does not typically access this variable directly but rather uses some higher level words to control it.
As mentioned the Forth word HERE returns the next available address in the dictionary space. What was not mentioned was that HERE is defined by fetching the value of the variable DP. (system dependency here but useful for a description)
In Forth HERE might look like this:
: HERE ( -- addr) DP @ ;
That's it.
To allocate some memory we need to move HERE upwards and we do that with the word ALLOT.
The Forth definition for ALLOT simply takes a number from the parameter stack and adds it to the value in DP. So it is nothing more than:
: ALLOT ( n --) DP +! ; \ '+!' adds n to the contents of the variable DP
ALLOT is used by the FORTH system when we create a new definition so that what we created is safely inside 'ALLOTed' memory.
Something that is not immediately obvious is that ALLOT can take a negative number, so it is possible to move the dictionary pointer up or down. So you could allocate some memory like this:
HEX 100 ALLOT
And free it up like this:
HEX -100 ALLOT
All this to say that this is the simplest form of memory management in a Forth system. An example of how this is used can be seen in the definition of the word BUFFER:
: BUFFER: ( n --) CREATE ALLOT ;
BUFFER: "creates" a new name in the dictionary (CREATE uses ALLOT to make space for the name, by the way), then ALLOTs n bytes of memory right after the name and any associated housekeeping bytes your Forth system might use.
So now to allocate a block of named memory we just type:
MARKER FOO \ mark where the memory ends right now
HEX 2000 BUFFER: IN_BUFFER
Now we have an 8K byte buffer called IN_BUFFER. If we wanted to reclaim that space in Standard Forth we could type FOO, and everything allocated in the Dictionary after FOO would be removed from the Forth system.
But if you want temporary memory space, EVERYTHING above HERE is free to use!
So you can simply point to an address and use it if you want to like this
: MYMEMORY here 200 + ; \ MYMEMORY points to un-allocated memory above HERE
\ MYMEMORY moves with HERE. be aware.
MYMEMORY HEX 1000 ERASE \ fill it with 4K bytes of zero
Forth has typically been used for high performance embedded applications where dynamic memory allocation can cause un-reliable code so static allocation using ALLOT was preferred. However bigger systems have a heap and use ALLOCATE, FREE and RESIZE much like we use malloc etc. in C.
BF
Peter Mortensen laid it out very well. I'll add a few notes that might help a C programmer some.
The stack is closest to what C terms "auto" variables, and what are commonly called local variables. You can give your stack values names in some forths, but most programmers try to write their code so that naming the values is unnecessary.
The dictionary can best be viewed as "static data" from a C programming perspective. You can reserve ranges of addresses in the dictionary, but in general you will use ALLOT and related words to create static data structures and pools which do not change size after allocation. If you want to implement a linked list that can grow in real time, you might ALLOT enough space for the link cells you will need, and write words to maintain a free list of cells you can draw from. There are naturally implementations of this sort of thing available, and writing your own is a good way to hone pointer management skills.
Heap allocation is available in many modern Forths, and the standard defines ALLOCATE, FREE and RESIZE words that work in a way analogous to malloc(), free(), and realloc() in C. Where the bytes are allocated from will vary from system to system. Check your documentation. It's generally a good idea to store the address in a variable or some other more permanent structure than the stack so that you don't inadvertently lose the pointer before you can free it.
As a side note, these words (along with the file i/o words) return a status on the stack that is non-zero if an error occurred. This convention fits nicely with the exception handling mechanism, and allows you to write code like:
variable PTR
1024 allocate throw PTR !
\ do some stuff with PTR
PTR @ free throw
0 PTR !
Or for a more complex if somewhat artificial example of allocate/free:
\ A simple 2-cell linked list implementation using allocate and free
: >link ( a -- a ) ;
: >data ( a -- a ) cell + ;
: newcons ( a -- a ) \ make a cons cell that links to the input
2 cells allocate throw tuck >link ! ;
: linkcons ( a -- a ) \ make a cons cell that gets linked by the input
0 newcons dup rot >link ! ;
: makelist ( n -- a ) \ returns the head of a list of the numbers from 0..n
0 newcons dup >r
over 0 ?do
i over >data ! linkcons ( a -- a )
loop >data ! r> ;
: walklist ( a -- )
begin dup >data ? >link @ dup 0= until drop ;
: freelist ( a -- )
begin dup >link @ swap free throw dup 0= until drop ;
: unittest 10 makelist dup walklist freelist ;
Some Forth implementations support local variables on the return stack frame and allocating memory blocks. For example in SP-Forth:
lib/ext/locals.f
lib/ext/uppercase.f
100 CONSTANT /buf
: test ( c-addr u -- ) { \ len [ /buf 1 CHARS + ] buf }
buf SWAP /buf UMIN DUP TO len CMOVE
buf len UPPERCASE
0 buf len + C! \ just for illustration
buf len TYPE
;
S" abc" test \ --> "ABC"
With Forth you enter a different world.
In a typical Forth like ciforth on Linux (and assuming 64 bits) you can configure your Forth to have a linear memory space that is as large as your swap space (e.g. 128 GB). That is yours to fill in with arrays, linked lists, pictures, whatever. You do this interactively, typically by declaring variables and including files. There are no restrictions. Forth only provides you with a HERE pointer to help you keep track of the memory you have used up. Even that you can ignore, and there is even a word in the 1994 standard that provides scratch space that floats in the free memory (PAD).
Is there something like malloc() and free()? Not necessarily. In a small kernel of a couple of dozen kilobytes, no. But you can just include a file with ALLOCATE / FREE and set aside a couple of gigabytes to use for dynamic memory.
As an example I'm currently working with tiff files. A typical 140 Mbyte picture takes a small chunk out of the dictionary advancing HERE.
Rows of pixels are transformed, decompressed, etc. For that I use dynamic memory, so I ALLOCATE space for the decompression result of a row. I have to manually FREE them again when the results have been used up by another transformation. It feels totally different from C. There is more control and more danger.
Regarding your question about scopes: in Forth, if you know the address, you can access the data structure, even if you just jotted F7FFA1003 down on a piece of paper. Trying to make programs safer through separate namespaces is not prominent in Forth style. So-called wordlists (see also VOCABULARY) provide facilities in that direction.
There's a little elephant hiding in a big FORTH memory management room, and I haven't seen too many people mention it.
The canonical FORTH has, at the very least, a non-addressable parameter stack. This is the case in all FORTH hardware implementations I'm aware of (usually originating with Chuck Moore) that have a hardware parameter stack: it's not mapped into the addressable memory space.
What does "non-addressable" mean? It means: you can't have pointers to the parameter stack, i.e. there are no means to get addresses of things on that stack. The stack is a "black box" that you can only access via the stack API (opcodes if it's a hardware stack), without bypassing it - and only that API will modify its contents.
This implies no aliasing between parameter stack and memory accesses using pointers - via @ and ! and the like. This enables efficient code generation with small effort, and indeed it makes decent generated code in FORTH systems orders of magnitude easier to obtain than with C and C++.
This of course breaks down when pointers can be obtained to the parameter stack. A well designed system would probably have guarded API for such access, since within the guards the code generator has to spill everything from registers to stack - in absence of full data flow analysis, that is.
DFA and other "expensive" optimization techniques are not of course impossible in FORTH, it's just that they are a bit larger in scope than many a practical FORTH system. They can be done very cleanly in spite of that (I'm using CFA, DFA and SSA optimizations in an in-house FORTH implementation, and the whole thing has less source code, comments included, than the utility classes in LLVM... - classes that are used all over the place, but that don't actually do anything related to compiling or code analysis).
A practical FORTH system can also place aliasing limitations on the return stack contents, namely that the return addresses themselves don't alias. That way control flow can be analyzed optimistically, only taking into account explicit stack accesses via R@, >R and R>, while letting you place addressable local variables on that stack - that's typically done when a variable is larger than a cell or two, or would be awkward to keep around on the parameter stack.
In C and C++, aliasing between automatic "local" variables and pointers is a big problem, because only large compilers with big optimizers can afford to prove lack of aliasing and forgo register reloads/spills when intervening pointer dereferences take place. Small compilers, to remain compliant and not generate broken code, have to pessimize and assume that accesses via char* alias everything, and accesses via Type* alias that type and others "like it" (e.g. derived types in C++). That char* aliases all things in C is a prime example of where you pay a big price for a feature you didn't usually intend to use.
Usually, forcing an unsigned char type for characters, and re-writing the string API using this type, lets you not use char* all over the place and lets the compiler generate much better code. Compilers of course add lots of analysis passes to minimize the fallout from this design fiasco... And all it'd take to fix in C is having a byte type that aliases every other type, and is compatible with arbitrary pointers, and has the size of the smallest addressable unit of memory. The reuse of void in void* to mean "pointer to anything" was, in hindsight, a mistake, since returning void means returning nothing, whereas pointing to void absolutely does not mean "pointing to nothing".
My idea is published at https://sites.google.com/a/wisc.edu/memorymanagement
I'm hoping to put forth code on github soon.
If you have an array (or several) with each array having a certain number of items of a certain size, you can pair a single-purpose stack to each array. The stack is initialized with the address of each array item. To allocate an array item, pop an address off the stack. To deallocate an array item, push its address onto the stack.
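That scheme is language-agnostic; here is a small sketch in Go using a stack of free slot indices instead of raw addresses (the type, names and sizes are made up for illustration):

package main

import "fmt"

// A fixed array of items plus a stack of free indices.
// Allocation pops an index off the stack; deallocation pushes it back.
type Item struct{ data [64]byte }

type ItemPool struct {
	items []Item
	free  []int // stack of indices of free slots
}

func NewItemPool(n int) *ItemPool {
	p := &ItemPool{items: make([]Item, n), free: make([]int, n)}
	for i := range p.free {
		p.free[i] = i // initially every slot is free
	}
	return p
}

func (p *ItemPool) Alloc() (*Item, int, bool) {
	n := len(p.free)
	if n == 0 {
		return nil, -1, false // pool exhausted
	}
	idx := p.free[n-1]
	p.free = p.free[:n-1]
	return &p.items[idx], idx, true
}

func (p *ItemPool) Free(idx int) {
	p.free = append(p.free, idx)
}

func main() {
	p := NewItemPool(4)
	it, idx, ok := p.Alloc()
	fmt.Println(it != nil, idx, ok)
	p.Free(idx)
}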

are array initialization operations cached as well

If you are not reading a value but assigning one, for example:
int array[] = new int[5];
for (int i = 0; i < array.length; i++) {
    array[i] = 2;
}
Does the array still come into the cache? Couldn't the CPU bring the array elements one by one into its registers, do the assignment, and afterwards write the updated value to main memory, bypassing the cache, because it's not necessary in this case?
The answer depends on the cache policy; I answered assuming write-back with write-allocate.
The array will still come into the cache, and it will make a difference. When a cache block is retrieved from memory, it covers more than just a single memory location (the actual size depends on the design of the cache). Since arrays are stored contiguously in memory, pulling in array[0] will pull in the rest of the block, which will include (at least some of) array[1], array[2], array[3] and array[4]. This means the following accesses will not have to go to main memory.
Also, after all this is done the values will NOT be written to memory immediately (under write-back); instead the CPU will keep using the cache for reads and writes until that cache block is evicted, at which point the values will be written to main memory.
Overall this is preferable to going to memory every time, because the cache is much faster and the chances are the program is going to use the memory it just set relatively soon.
If the policy is write-through with no-allocate, then the block won't be brought into the cache, and the write will go straight through to main memory.

Resources