Mutable data types that use stack allocation

Based on my earlier question, I understand the benefit of using stack allocation. Suppose I have an array of arrays. For example, A is a list of matrices and each element A[i] is a 1x3 matrix. The length of A and the dimension of A[i] are known at run time (given by the user). Each A[i] is a matrix of Float64, and this is also known at run time. However, throughout the program, I will be modifying the values of A[i] element by element. What data structure can allow me to use stack allocation? I tried StaticArrays but it doesn't allow me to modify a static array.

StaticArrays defines MArray (MVector, MMatrix) types that are fixed-size and mutable. If you use these there's a higher chance of the compiler determining that they can be stack-allocated, but it's not guaranteed. Moreover, since the pattern you're using is that you're passing the mutable state vector into a function which presumably modifies it, it's not going to be valid or helpful to stack allocate that anyway. If you're going to allocate state once and modify it throughout the program, it doesn't really matter if it is heap or stack allocated—stack allocation is only a big win for objects that are allocated, used locally and then don't escape the local scope, so they can be “freed” simply by popping the stack.
From the code snippet you showed in the linked question, the state vector is allocated in the outer function, test_for_loop, which shouldn't be a big deal since it's done once at the beginning of execution. Using a variably sized state vector to index into an array with a splat (...) might be an issue, however, and that's done in test_function. Using something with a fixed size like MVector might be better for that. It might be better still, however, to use a state tuple and return a new state tuple at the end rather than mutating one. The compiler is very good at turning that kind of thing into very efficient code because of immutability.
Note that by convention test_function should be called test_function! since it modifies its M argument and even more so if it modifies the state vector.
I would also note that this isn't a great question/answer pair since it's not standalone at all and really just a continuation of your other question. StackOverflow isn't very good for this kind of iterative question/discussion interaction, I'm afraid.

Related

Is it correct to use slice as *[]Item, because Slice is by default pointer

What is the right way to use a slice in Go? Per the Go documentation, a slice is by default a pointer, so is creating a slice as *[]Item the right way? Since slices are by default pointers, isn't this way of creating the slice making it a pointer to a pointer?
I feel the right way to create a slice is []Item or []*Item (a slice holding pointers to items).
A bit of theory
Your question, as asked, makes no sense: there's no "right" or "wrong", no "correct" or "incorrect" here: you can have a pointer to a slice, and you can have a pointer to a pointer to a slice, and you can add levels of such indirection endlessly.
What to do depends on what you need in a particular case.
To help you with the reasoning, I'll try to provide a couple of facts and draw some conclusions.
The first two things to understand about types and values in Go are:
Everything in Go, ever, always, is passed by value.
This applies to variable assignments (= and :=), to passing values in function and method calls, and to copying that happens internally, such as when reallocating a slice's backing array or rebalancing a map.
Passing by value means that actual bits of the value which is assigned are physically copied into the variable which "receives" the value.
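To make this concrete, here's a tiny demonstration (the names are made up for illustration):
package main

import "fmt"

type point struct{ x, y int }

func main() {
    a := 42
    b := a // the bits of a are physically copied into b
    b = 100
    fmt.Println(a, b) // 42 100: a is unaffected

    p := point{1, 2}
    q := p // the whole struct is copied, field by field
    q.x = 9
    fmt.Println(p, q) // {1 2} {9 2}
}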
Types in Go—both built-in and user-defined (including those defined in the standard library)—can have value semantics and reference semantics when it comes to assignment.
This one is a bit tricky, and often leads to novices incorrectly assuming that the first rule explained above does not hold.
"The trick" is that if a type contains a pointer (an adderss of a variable) or consists of a single pointer, the value of this pointer is copied when the value of the type is copied.
What does this mean?
Pretty simple: if you assign the value of a variable of type int to another variable of type int, both variables contain identical bits but they are completely independent: change the content of any of them, and another will be unaffected.
If you assign a variable containing a pointer (or consisting of a single pointer) to another one, they both, again, will contain identical bits and are independent in the sense that changing those bits in any of them will not affect the other.
But since the pointer in both these variables contains the address of the same memory location, using those pointers to modify the contents of the memory location they point at will modify the same memory.
In other words, the difference is that an int does not reference anything while a pointer naturally references another memory location—because it contains its address.
Hence, if a type contains at least a single pointer (it may do so by containing a field of another type which itself contains a pointer, and so on—to any nesting level), values of this type will have reference assignment semantics: if you assign a value to another variable, you end up with two values referencing the same memory location.
That is why maps, slices and strings have reference semantics: when you assign variables of these types both variables point to the same underlying memory location.
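Here is that reference behaviour in a short sketch:
package main

import "fmt"

func main() {
    a := []int{1, 2, 3}
    b := a // copies the slice header, not the elements
    b[0] = 99
    fmt.Println(a) // [99 2 3]: both headers point at the same backing array
}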
Let's move on to slices.
Slices vs pointers to slices
A slice, logically, is a struct of three fields: a pointer to the slice's backing array which actually contains the slice's elements, and two ints: the capacity of the slice and its length.
When you pass around and assign a slice value, these struct values are copied: a pointer and two integers.
As you can see, when you pass a slice value around the backing array is not copied—only a pointer to it.
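Conceptually, the value being copied looks like the struct below. This is only a sketch: the real declaration lives inside the Go runtime and is not spelled exactly like this.
package main

import (
    "fmt"
    "unsafe"
)

// sliceHeader approximates what a slice value carries around.
type sliceHeader struct {
    data unsafe.Pointer // address of the backing array
    len  int            // number of elements currently in use
    cap  int            // total size of the backing array
}

func main() {
    // A slice value is just these three words.
    fmt.Println(unsafe.Sizeof([]int(nil))) // 24 on 64-bit platforms
}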
Now let's consider when you want to use a plain slice or a pointer to a slice.
If you're concerned with performance (memory allocation and/or CPU cycles needed to copy memory), these concerns are unfounded: copying three integers when passing around a slice is dirt-cheap on today's hardware.
Using a pointer to a slice would make copying a tiny bit faster—a single integer rather than three—but these savings will be easily offset by two facts:
The slice's value will almost certainly end up being allocated on the heap so that the compiler can be sure it survives crossing function-call boundaries—so you will pay for using the memory manager, and the garbage collector will have more work (see the sketch after this list).
Using a level of indirection reduces data locality: accessing RAM is slow, so CPUs have caches which prefetch data at the addresses following the one at which data is being read. If the control flow immediately reads memory at another location, the prefetched data is thrown away: cache thrashing.
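You can watch the compiler make this decision with its escape-analysis output (the function names here are made up):
// Compile with: go build -gcflags=-m .
// For the pointer-returning variant the compiler reports the header
// variable itself escaping (something like "moved to heap: s").
package main

func sliceByValue() []int {
    s := make([]int, 0, 4)
    s = append(s, 1)
    return s // only the backing array needs to live on the heap
}

func sliceByPointer() *[]int {
    s := make([]int, 0, 4)
    s = append(s, 1)
    return &s // now the header variable s must escape, too
}

func main() {
    _ = sliceByValue()
    _ = sliceByPointer()
}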
OK, so is there a case when you would want a pointer to a slice?
Yes. For instance, the built-in append function could have been defined as
func append(s *[]T, vals ...T)
instead of
func append(s []T, vals ...T) []T
(N.B. the T here actually means "any type": append is a built-in, not a library function, and cannot be sensibly defined in plain pre-generics Go, so this is sort of pseudocode.)
That is, it could accept a pointer to a slice and possibly replace the slice pointed to by the pointer, so you'd call it as append(&slice, element) and not as slice = append(slice, element).
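With the generics introduced in Go 1.18 you can actually write that pointer-taking flavour yourself; here's a sketch (appendInPlace is a made-up name):
package main

import "fmt"

// appendInPlace updates the slice through the pointer, so the caller
// does not have to reassign the result as with the built-in append.
func appendInPlace[T any](s *[]T, vals ...T) {
    *s = append(*s, vals...)
}

func main() {
    nums := []int{1, 2}
    appendInPlace(&nums, 3, 4) // instead of: nums = append(nums, 3, 4)
    fmt.Println(nums)          // [1 2 3 4]
}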
But honestly, in real-world Go projects I have dealt with, the only case of using pointers to slices which I can remember was about pooling slices which are heavily reused—to save on memory reallocations. And that sole case was only due to sync.Pool keeping elements of type interface{} which may be more effective when using pointers¹.
Slices of values vs slices of pointers to values
Exactly the same logic described above applies to the reasoning about this case.
When you put a value in a slice, that value is copied. When the slice needs to grow its backing array, the array is reallocated, and reallocation means physically copying all existing elements into the new memory location.
So, two considerations:
Are the elements reasonably small, so that copying them will not strain memory and CPU resources?
(Note that "small" vs "big" also heavily depens on the frequency of such copying in a working program: copying a couple of megabytes once in a while is not a big deal; copying even tens of kilobytes in a tight time-critical loop can be a big deal.)
Is your program OK with multiple copies of the same data (for instance, values of certain types, like sync.Mutex, must not be copied after first use)?
If the answer to either question is "no", you should consider keeping pointers in the slice. But when you consider keeping pointers, also think about the data locality explained above: if a slice contains data intended for time-critical number-crunching, it's better not to have the CPU chase pointers.
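A small demonstration of both choices (the record type is made up):
package main

import "fmt"

type record struct{ n int }

func main() {
    r := record{n: 1}

    values := []record{r} // r is copied into the slice
    values[0].n = 42
    fmt.Println(r.n) // 1: the original is untouched

    pointers := []*record{&r} // only r's address is copied
    pointers[0].n = 42
    fmt.Println(r.n) // 42: the element and r are the same variable
}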
To recap: when you ask about a "correct" or "right" way of doing something, the question makes no sense without specifying the set of criteria by which we could rank all possible solutions to a problem. Still, there are considerations which must be weighed when designing the way you're going to store and manipulate data, and I have tried to explain these considerations.
In general, a rule of thumb regarding slices could be:
Slices are designed to be passed around "as is"—as values, not pointers to variables containing their values.
There are legitimate reasons to have pointers to slices, though.
Most of the time you keep values in the slice's elements, not pointers to variables with these values.
Exceptions to this general rule:
Values you intend to store in a slice occupy so much space that the envisioned pattern of using slices of them would involve excessive memory pressure.
Types of values you intend to store in a slice require that they not be copied but rather only referenced, each existing as a single instance. A good example is types containing/embedding a field of type sync.Mutex (or, actually, of any other type from the sync package except those which themselves have reference semantics, such as sync.Pool): if you lock a mutex, copy its value, and then unlock the copy, the originally locked instance won't notice, which means you have a grave bug in your code.
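That is why a slice of such values should hold pointers; a sketch with a made-up counter type:
package main

import (
    "fmt"
    "sync"
)

type counter struct {
    mu sync.Mutex
    n  int
}

func main() {
    // []*counter, not []counter: every user of an element must see
    // the one and only mutex, never a copy of it.
    counters := []*counter{{}, {}}

    var wg sync.WaitGroup
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            c := counters[0] // copies the pointer, not the mutex
            c.mu.Lock()
            c.n++
            c.mu.Unlock()
        }()
    }
    wg.Wait()
    fmt.Println(counters[0].n) // always 100
}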
A note of caution on correctness vs performance
The text above contains a lot of performance considerations.
I've presented them because Go is a reasonably low-level language: not as low-level as C, C++ or Rust, but still providing the programmer with plenty of wiggle room when performance is at stake.
Still, you should understand very well that at this point on your learning curve, correctness must be your top—if not the sole—objective: please take no offence, but if you were really after shaving CPU time off some Go code, you wouldn't have been asking this question in the first place.
In other words, please consider all of the above as a set of facts and considerations to guide you in your learning and exploration of the subject, but do not fall into the trap of trying to think about performance first. Make your programs correct and easy to read and modify.
¹ An interface value is a pair of pointers: to the variable containing the value you have put into the interface value and to a special data structure inside the Go runtime which describes the type of that variable.
So while you can put a slice value into a variable of type interface{} directly—in the sense that it's perfectly fine in the language—if the value's type is not itself a single pointer, the compiler has to heap-allocate a variable to hold a copy of your value, and store a pointer to that new variable in the interface{} value.
This is needed to uphold the "everything is always passed by value" semantics of Go assignments.
Consequently, if you put a slice value into a variable of type interface{}, you will end up with a copy of that value on the heap.
Because of this, keeping pointers to slices in data structures such as sync.Map makes the code uglier but results in less memory churn.
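The pattern looks roughly like this (a sketch; the buffer size is arbitrary):
package main

import (
    "fmt"
    "sync"
)

// The pool hands out *[]byte rather than []byte: a pointer is a single
// word, so storing it in the pool's interface{} values avoids heap-allocating
// a copy of the three-word slice header on every Put.
var bufPool = sync.Pool{
    New: func() interface{} {
        b := make([]byte, 0, 1024)
        return &b
    },
}

func main() {
    bp := bufPool.Get().(*[]byte)
    buf := append((*bp)[:0], "hello"...) // reuse the pooled backing array
    fmt.Println(string(buf))
    *bp = buf
    bufPool.Put(bp) // return the pointer, not the slice value
}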

Use big.Rat with Go to get Abs() value

I am a beginner with Go and a Java developer.
I am currently working with big.Rat.
I need to get the Abs of a Rat n for which I have to write something like
n.Abs(n) or something like big.Rat{}.Abs(n)
Why didn't Go provide something like just n.Abs()?
Or am I going wrong somewhere?
Go's big package is concerned with memory allocation when it comes to its function signatures. A big.Rat consists of two big.Int values, each of which contains a slice of machine words. Unlike an int (a native 32- or 64-bit integer), a big.Int must thus be allocated dynamically, depending on its value. For large values this means more elements in the slice.
Your proposed function signature n.Abs() would mean that a new array of the same size as n's would have to be allocated for this operation. In reality we often have the case that the original n is no longer needed, thus we can reuse its existing memory. To allow this, the Abs function takes a pointer to an existing big.Rat which might be n itself. The implementation can now reuse the memory. The caller is now in full control of what memory to use for these operations.
This might not make the nicest API for all use cases. In fact, if you just want to do a quick calculation on a few large numbers, on a computer with gigabytes of RAM, you might have preferred the n.Abs() version; but if you do numerically expensive computations with a lot of large numbers, you must be able to control your memory. Imagine doing some image manipulation on a Raspberry Pi, for example, where you are more constrained by the available memory. In this case the existing API allows you to be more efficient.
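Concretely, the caller picks the destination:
package main

import (
    "fmt"
    "math/big"
)

func main() {
    n := big.NewRat(-3, 4)

    // Reuse n's own storage: after this call, n holds |n|.
    n.Abs(n)
    fmt.Println(n) // 3/4

    // Or keep the original and write the result into a fresh Rat.
    m := big.NewRat(-1, 2)
    abs := new(big.Rat).Abs(m)
    fmt.Println(m, abs) // -1/2 1/2
}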

"cut and paste" the last k elements of std::vector efficiently?

Is it possible in C++11 to "cut and paste" the last k elements of an std::vector A to a new std::vector B in constant time?
One way would be to use B.insert(B.end(), A.end() - k, A.end()) and then call erase on A, but these are both O(k) operations.
No, vectors own their memory.
This operation is known as splice. forward_list is ridiculously slow otherwise, but it does have an O(1) splice.
Typically, the process of deciding which elements to move is already O(n), so having the splice itself take O(n) time is not a problem. The other operations being faster on vector more than make up for it.
This isn't possible in general, since (at least in the C++03 version -- there it's 23.2.4/1) the C++ standard guarantees that the memory used by a vector<T> is a single contiguous block. Thus the only way to "transfer" more than a fixed number of elements in O(1) time would be if the receiving vector were empty, and you had somehow arranged for its allocated block of memory to begin at the right place inside the first vector -- in which case the "transfer" could be argued to have taken no time at all. (Deliberately overlapping objects in this way is almost certain to constitute undefined behaviour in theory -- and in practice, it's also very fragile, since any operation that invalidates iterators to a vector<T> can also reallocate memory, thus breaking things.)
If you're prepared to sacrifice a whole bunch of portability, I've heard it's possible to play OS-level (or hardware-level) tricks with virtual memory mapping to achieve tricks like no-overhead ring buffers. Maybe these tricks could also be applied here -- but bear in mind that the assumption that the mapping of virtual to physical memory within a single process is one-to-one is very deeply ingrained, so you could be in for some surprises.

Efficiency of appending to vectors

Appending an element onto a million-element ArrayList has the cost of setting one reference now, and copying one reference in the future when the ArrayList must be resized.
As I understand it, appending an element onto a million-element PersistentVector must create a new path, which consists of 4 arrays of size 32, which means more than 120 references have to be touched.
How does Clojure manage to keep the vector overhead to "2.5 times worse" or "4 times worse" (as opposed to "60 times worse"), which has been claimed in several Clojure videos I have seen recently? Has it something to do with caching or locality of reference or something I am not aware of?
Or is it somehow possible to build a vector internally with mutation and then turn it immutable before revealing it to the outside world?
I have tagged the question scala as well, since scala.collection.immutable.Vector is basically the same thing, right?
Clojure's PersistentVectors have a special tail buffer to enable efficient operation at the end of the vector. Only after this 32-element array is filled is it added to the rest of the tree. This keeps the amortized cost low. Here is one article on the implementation. The source is also worth a read.
Regarding, "is it somehow possible to build a vector internally with mutation and then turn it immutable before revealing it to the outside world?", yes! These are known as transients in Clojure, and are used for efficient batch changes.
I can't speak about Clojure, but I can give some comments about Scala Vectors.
Persistent Scala vectors (scala.collection.immutable.Vector) are much slower than an array buffer when it comes to appending. In fact, they are 10x slower than the List prepend operation. They are 2x slower than appending to Conc-trees, which we use in Parallel Collections.
But Scala also has mutable vectors -- they're hidden in the class VectorBuilder. Appending to a mutable vector does not preserve the previous version of the vector, but mutates it in place by keeping a pointer to the rightmost leaf in the vector. So, yes -- keeping the vector mutable internally, and then returning an immutable reference, is exactly what's done in Scala collections.
The VectorBuilder is slightly faster than the ArrayBuffer, because it needs to allocate its arrays only once, whereas ArrayBuffer needs to do it twice on average (because of growing). Conc.Buffers, which we use as parallel array combiners, are twice as fast compared to VectorBuilders.
Benchmarks are here. None of the benchmarks involve any boxing; they work with reference objects to avoid any bias:
comparison of Scala List, Vector and Conc
comparison of Scala ArrayBuffer, VectorBuilder and Conc.Buffer
More collections benchmarks here.
These tests were executed using ScalaMeter.

Performance of std::vector<Test> vs std::vector<Test*>

In an std::vector of a non-POD data type, is there a difference between a vector of objects and a vector of (smart) pointers to objects? I mean a difference in the implementation of these data structures by the compiler.
E.g.:
class Test {
    std::string s;
    Test *other;
};
std::vector<Test> vt;
std::vector<Test*> vpt;
Could there be no performance difference between vt and vpt?
In other words: when I define a vector<Test>, internally will the compiler create a vector<Test*> anyway?
No, this is not allowed by the C++ standard. The following code is legal C++:
vector<Test> vt;
Test t1; t1.s = "1"; t1.other = NULL;
Test t2; t2.s = "2"; t2.other = NULL;
vt.push_back(t1);
vt.push_back(t2);
Test* pt = &vt[0];
pt++;
Test q = *pt; // q is now a copy of t2
In other words, a vector "decays" to an array (accessing it like a C array is legal), so the compiler effectively has to store the elements internally as an array, and may not just store pointers.
But beware that the array pointer is valid only as long as the vector is not reallocated (which normally only happens when the size grows beyond capacity).
In general, whatever the type being stored in the vector is, instances of that may be copied. This means that if you are storing a std::string, instances of std::string will be copied.
For example, when you push a Type into a vector, the Type instance is copied into an instance housed inside of the vector. Copying a pointer is cheap, but, as Konrad Rudolph pointed out in the comments, this should not be the only thing you consider.
For simple objects like your Test, copying is going to be so fast that it will not matter.
Additionally, with C++11, moving allows avoiding creating an extra copy if one is not necessary.
So in short: A pointer will be copied faster, but copying is not the only thing that matters. I would worry about maintainable, logical code first and performance when it becomes a problem (or the situation calls for it).
As for your question about an internal pointer vector: no, vectors are implemented as arrays that are periodically resized when necessary. You can find GNU's libstdc++ implementation of vector online.
The answer gets a lot more complicated at a lower level than C++. Pointers will of course have to be involved, since an entire program cannot fit into registers. I don't know enough about that low a level to elaborate more, though.
