When defining a function with variadic arguments of type interface{} (e.g. Printf), the arguments are apparently implicitly converted to interface instances.
Does this conversion imply memory allocation? Is this conversion fast? When concerned about code efficiency, should I avoid using variadic functions?
The best explanation I found about interface memory allocation in Go is still this article by Russ Cox, one of the core Go programmers. It's well worth reading.
http://research.swtch.com/interfaces
I picked up some of the most interesting parts:
Values stored in interfaces might be arbitrarily large, but only one
word is dedicated to holding the value in the interface structure, so
the assignment allocates a chunk of memory on the heap and records the
pointer in the one-word slot.
...
Calling fmt.Printf(), the Go compiler generates code that calls the
appropriate function pointer from the itable, passing the interface
value's data word as the function's first (in this example, only)
argument.
Go's dynamic type conversions mean that it isn't reasonable for the
compiler or linker to precompute all possible itables: there are too
many (interface type, concrete type) pairs, and most won't be needed.
Instead, the compiler generates a type description structure for each
concrete type like Binary or int or func(map[int]string). Among other
metadata, the type description structure contains a list of the
methods implemented by that type.
...
The interface runtime computes the itable by looking for each method
listed in the interface type's method table in the concrete type's
method table. The runtime caches the itable after generating it, so
that this correspondence need only be computed once.
...
If the interface type involved is empty—it has no methods—then the
itable serves no purpose except to hold the pointer to the original
type. In this case, the itable can be dropped and the value can point
at the type directly.
Because Go has the hint of static typing to go along with the dynamic method lookups, it can move the lookups back from the call sites to the point when the value is stored in the interface.
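To make that concrete, here is a minimal sketch (the Binary type mirrors the example from the article; the Stringer interface stands in for fmt.Stringer): the itable for the (Stringer, Binary) pair is resolved when b is stored into s, and each subsequent call of s.String() just jumps through the cached function pointer.

package main

import "fmt"

// Stringer is a one-method interface, analogous to fmt.Stringer.
type Stringer interface {
	String() string
}

// Binary is a concrete type with its own method set.
type Binary uint64

func (b Binary) String() string {
	return fmt.Sprintf("%b", uint64(b))
}

func main() {
	b := Binary(42)

	// The (Stringer, Binary) itable is looked up and cached here, at the
	// point where the concrete value is stored into the interface variable.
	var s Stringer = b

	// These calls dispatch through the function pointer in that itable;
	// no per-call method lookup is needed.
	fmt.Println(s.String())
	fmt.Println(s.String())
}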
Converting to an interface{} is a separate concept from variadic arguments, which are contained in a slice and can be of any type. However, these are all probably free in the sense of allocations as long as they don't escape to the heap (in the gc compiler toolchain).
The excess allocations you would see from fmt functions like Printf are going to be from reflection rather than from the use of interface{} or variadic arguments.
If you're concerned with efficiency though, avoiding indirection will always be more efficient than not, so using the correct value types will yield more efficient code. The difference can be minimal though, so benchmark the code first before concerning yourself with minor optimizations.
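If you want to measure this in your own code, a minimal benchmark sketch along these lines (placed in a _test.go file and run with go test -bench=. -benchmem) compares a variadic fmt call against a concrete-typed alternative; the exact numbers depend on your Go version and hardware.

package main

import (
	"fmt"
	"strconv"
	"testing"
)

// BenchmarkSprintf formats through a variadic ...interface{} parameter;
// the argument is converted to an interface value and formatted via reflection.
func BenchmarkSprintf(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = fmt.Sprintf("%d", i)
	}
}

// BenchmarkItoa uses a concretely typed function: no interface conversion,
// no reflection.
func BenchmarkItoa(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = strconv.Itoa(i)
	}
}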
Go passes arguments by value (copying them), so some copying happens anyway. You should avoid using interface{} where possible. In the described case, your function will need to use reflection on its arguments in order to work with them. Reflection is quite an expensive operation, which is why fmt.Printf() is relatively slow.
Related
What is the right way to use a slice in Go? As per the Go documentation, a slice is by default a pointer, so is creating a slice as *[]Item the right way? Since slices are by default pointers, isn't this way of creating a slice making it a pointer to a pointer?
I feel the right way to create a slice is []Item or []*Item (a slice holding pointers to items).
A bit of theory
Your question doesn't really make sense as stated: there's no "right" or "wrong", "correct" or "incorrect" here: you can have a pointer to a slice, and you can have a pointer to a pointer to a slice, and you can add levels of such indirection endlessly.
What to do depends on what you need in a particular case.
To help you with the reasoning, I'll try to provide a couple of facts and draw some conclusions.
The first two things to understand about types and values in Go are:
Everything in Go, ever, always, is passed by value.
This means variable assignments (= and :=), passing values to function and method calls, and copying memory which happens internally such as when reallocating backing arrays of slices or rebalancing maps.
Passing by value means that actual bits of the value which is assigned are physically copied into the variable which "receives" the value.
Types in Go—both built-in and user-defined (including those defined in the standard library)—can have value semantics and reference semantics when it comes to assignment.
This one is a bit tricky, and often leads to novices incorrectly assuming that the first rule explained above does not hold.
"The trick" is that if a type contains a pointer (an adderss of a variable) or consists of a single pointer, the value of this pointer is copied when the value of the type is copied.
What does this mean?
Pretty simple: if you assign the value of a variable of type int to another variable of type int, both variables contain identical bits but they are completely independent: change the content of any of them, and another will be unaffected.
If you assign a variable containing a pointer (or consisting of a single pointer) to another one, they both, again, will contain identical bits and are independent in the sense that changing those bits in any of them will not affect the other.
But since the pointer in both these variables contains the address of the same memory location, using those pointers to modify the contents of the memory location they point at will modify the same memory.
In other words, the difference is that an int does not reference anything while a pointer naturally references another memory location—because it contains its address.
Hence, if a type contains at least a single pointer (it may do so by containing a field of another type which itself contains a pointer, and so on—to any nesting level), values of this type will have reference assignment semantics: if you assign a value to another variable, you end up with two values referencing the same memory location.
That is why maps, slices and strings have reference semantics: when you assign variables of these types both variables point to the same underlying memory location.
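A minimal sketch of the two assignment semantics side by side:

package main

import "fmt"

func main() {
	// Value semantics: the int's bits are copied and the copies are independent.
	a := 1
	b := a
	b = 2
	fmt.Println(a, b) // 1 2

	// Reference semantics: the map value copied into n contains a pointer to
	// the same underlying hash table, so a change made through n is visible
	// through m as well.
	m := map[string]int{"x": 1}
	n := m
	n["x"] = 2
	fmt.Println(m["x"], n["x"]) // 2 2
}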
Let's move on to slices.
Slices vs pointers to slices
A slice, logically, is a struct of three fields: a pointer to the slice's backing array which actually contains the slice's elements, and two ints: the capacity of the slice and its length.
When you pass around and assign a slice value, these struct values are copied: a pointer and two integers.
As you can see, when you pass a slice value around the backing array is not copied—only a pointer to it.
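A short sketch showing what copying a slice value actually copies:

package main

import "fmt"

func main() {
	s := []int{1, 2, 3, 4, 5}

	// Assigning a slice copies only the header (pointer, length, capacity),
	// not the backing array, so both headers refer to the same elements.
	t := s
	t[0] = 42
	fmt.Println(s[0]) // 42

	// Re-slicing likewise just produces a new header over the same array.
	u := s[1:3]
	fmt.Println(u, len(u), cap(u)) // [2 3] 2 4
}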
Now let's consider when you want to use a plain slice or a pointer to a slice.
If you're concerned with performance (memory allocation and/or CPU cycles needed to copy memory), these concerns are unfounded: copying three integers when passing around a slice is dirt-cheap on today's hardware.
Using a pointer to a slice would make copying a tiny bit faster—a single integer rather than three—but these savings will be easily offset by two facts:
The slice's value will almost certainly end up being allocated on the heap so that the compiler can be sure its value will survive crossing boundaries of the function calls—so you will pay for using the memory manager and the garbage collector will have more work.
Using a level of indirection reduces data locality: accessing RAM is slow, so CPUs have caches which prefetch data at the addresses following the one at which the data is being read. If the control flow immediately reads memory at another location, the prefetched data is thrown away: cache thrashing.
OK, so is there a case when you would want a pointer to a slice?
Yes. For instance, the built-in append function could have been defined as
func append(s *[]T, elems ...T)
instead of
func append(s []T, elems ...T) []T
(N.B. the T here actually means "any type", because append is a built-in, not a library function, and cannot be sensibly defined in plain Go; so it's a sort of pseudocode.)
That is, it could accept a pointer to a slice and possibly replace the slice pointed to by the pointer, so you'd call it as append(&slice, element) and not as slice = append(slice, element).
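As an illustration of that alternative style, here is a sketch of a hypothetical appendInPlace helper (the name is made up; with Go 1.18+ generics such a function can actually be written in plain Go today):

package main

import "fmt"

// appendInPlace takes a pointer to a slice and replaces the slice through
// that pointer, so the caller does not have to reassign the result.
func appendInPlace[T any](s *[]T, elems ...T) {
	*s = append(*s, elems...)
}

func main() {
	var xs []int

	// Built-in style: the (possibly reallocated) slice header is returned
	// and must be assigned back.
	xs = append(xs, 1)

	// Pointer style: the header is updated in place through the pointer.
	appendInPlace(&xs, 2, 3)

	fmt.Println(xs) // [1 2 3]
}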
But honestly, in real-world Go projects I have dealt with, the only case of using pointers to slices which I can remember was about pooling slices which are heavily reused—to save on memory reallocations. And that sole case was only due to sync.Pool keeping elements of type interface{} which may be more effective when using pointers¹.
Slices of values vs slices of pointers to values
Exactly the same logic described above applies to the reasoning about this case.
When you put a value in a slice that value is copied. When the slice needs to grow its backing array, the array will be reallocated, and reallocation means physically copying all existing elements into the new memory location.
So, two considerations:
Are elements reasonably small so that copying them is not going to press on memory and CPU resources?
(Note that "small" vs "big" also heavily depens on the frequency of such copying in a working program: copying a couple of megabytes once in a while is not a big deal; copying even tens of kilobytes in a tight time-critical loop can be a big deal.)
Are you program OK with multiple copies of the same data (for instance, values of certain types like sync.Mutex must not be copied after first use)?
If the answer to either question is "no", you should consider keeping pointers in the slice. But when you consider keeping pointers, also think about data locality explained above: if a slice contains data intended for time-critical number-crunching, it's better not have the CPU to chase pointers.
To recap: when you ask about a "correct" or "right" way of doing something, the question has no sense without specifying the set of criteria according to which we could classify all possible solutions to a problem. Still, there are considerations which must be performed when designing the way you're going to store and manipulate data, and I have tried to explain these considerations.
In general, a rule of thumb regarding slices could be:
Slices are designed to be passed around "as is"—as values, not pointers to variables containing their values.
There are legitimate reasons to have pointers to slices, though.
Most of the time you keep values in the slice's elements, not pointers to variables with these values.
Exceptions to this general rule:
Values you intend to store in a slice occupy so much space that the envisioned pattern of using slices of them would create excessive memory pressure.
Types of values you intend to store in a slice require that they must not be copied but only referenced, each existing as a single instance. A good example is types containing/embedding a field of type sync.Mutex (or, actually, a variable of any other type from the sync package except those which themselves have reference semantics, such as sync.Pool): if you lock a mutex, copy its value and then unlock the copy, the original mutex you locked won't notice and stays locked, which means you have a grave bug in your code.
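A minimal sketch of the sync.Mutex case (the Counter type and its fields are invented for illustration):

package main

import "sync"

// Counter embeds a sync.Mutex and therefore must not be copied after first use.
type Counter struct {
	mu sync.Mutex
	n  int
}

func main() {
	// Risky: []Counter would copy Counter values whenever the slice grows or
	// when you range over it with "for _, c := range ...", and copying a
	// mutex by value is flagged by go vet's copylocks check.

	// Safer: a slice of pointers, so each Counter exists as a single instance.
	counters := []*Counter{{}, {}}

	counters[0].mu.Lock()
	counters[0].n++
	counters[0].mu.Unlock()

	_ = counters
}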
A note of caution on correctness vs performance
The text above contains a lot of performance considerations.
I've presented them because Go is a reasonably low-level language: not that low-level as C and C++ and Rust but still providing the programmer with plenty of wiggle-room to use when performance is at stake.
Still, you should understand very well that at this point on your learning curve, correctness must be your top, if not your sole, objective: please take no offence, but if you were after tuning some Go code to shave off some CPU time, you wouldn't be asking this question in the first place.
In other words, please consider all of the above as a set of facts and considerations to guide you in your learning and exploration of the subject, but do not fall into the trap of thinking about performance first. Make your programs correct and easy to read and modify.
¹ An interface value is a pair of pointers: to the variable containing the value you have put into the interface value and to a special data structure inside the Go runtime which describes the type of that variable.
So while you can put a slice value into a variable of type interface{} directly—in the sense that it's perfectly fine in the language—if the value's type is not itself a single pointer, the compiler will have to allocate on the heap a variable to contain a copy of your value there, and store a pointer to that new variable into the value of type interface{}.
This is needed to uphold the "everything is always passed by value" semantics of Go assignments.
Consequently, if you put a slice value into a variable of type interface{}, you will end up with a copy of that value on the heap.
Because of this, keeping pointers to slices in data structures such as sync.Map makes the code uglier but results in less memory churn.
My app uses a sync.Map to store open socket-connections which are accessed concurrently through multiple goroutines.
I'm wondering whether to store these connections as structs net.Conn or as references *net.Conn.
What are the benefits/drawbacks of both options, and what would be the preferred solution?
While #blackgreen is correct, I'd expand a bit on the reasoning.
The sync.Map type is explicitly defined to operate on interface{}.
Now remember that in Go, an interface is not merely an abstraction used by the type system; instead, you can have values of interface types, and the in-memory representation of such values is a struct containing two pointers—to an internal object describing the dynamic type of the value stored in the variable, and to the value itself (or a copy of it created on the heap by the runtime).
This means that if you were to store a pointer to anything in a sync.Map, any such pointer would be converted to a value of type interface{} and would occupy exactly the same space in the sync.Map.
If, instead, you store values of type net.Conn there directly, they are stored directly: net.Conn is already an interface type, so Go just copies the pair of pointers.
On the surface, this looks like both methods are on par in terms of the space used but bear with me.
To store a pointer to a net.Conn value in a container data type such as sync.Map, the program must make sure that the value is allocated on the heap (as opposed to on the stack of the currently running goroutine), and this might force the compiler to arrange for the original net.Conn value to be allocated on the heap.
In other words, storing a pointer to a variable of interface type might be (and usually will be—due to the way typical code is organized) more wasteful in terms of memory use.
Add to it that dereferencing (pointer chasing) tends to thrash the CPU cache; that's not a game changer, but it might add up to a couple of µs when you iterate over collections in tight loops.
Having said that, I would advise against outright dismissing storing pointers in containers like sync.Map: occasionally it comes in handy; for instance, to reuse arrays for slices, you usually store pointers to the first elements of such arrays.
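For the connection case, a hedged sketch of storing net.Conn values directly in a sync.Map (the connection-ID key scheme is just an assumption for illustration):

package main

import (
	"net"
	"sync"
)

func main() {
	var conns sync.Map // connection ID -> net.Conn

	c, err := net.Dial("tcp", "example.com:80")
	if err != nil {
		return // handle the error appropriately in real code
	}

	// net.Conn is itself an interface type, so storing it directly just
	// copies the two-word interface value into the map; no extra level of
	// pointer indirection is introduced.
	conns.Store("conn-1", c)

	if v, ok := conns.Load("conn-1"); ok {
		conn := v.(net.Conn) // assert back to the interface type
		_ = conn
	}
}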
I am a beginner with Go and a java developer.
I am currently working with big.Rat.
I need to get the Abs of a Rat n for which I have to write something like
n.Abs(n) or something like big.Rat{}.Abs(n)
Why didn't Go provide something like just n.Abs()?
Or am I going wrong somewhere?
Go's big package is concerned with memory allocation when it comes to its function signatures. A big.Rat consists of two big.Ints which each contain an array of uints. Unlike an int (native 32 or 64 bit integer), a big.Int must thus be allocated dynamically, depending on its value. For large values this means more elements in the array.
Your proposed function signature n.Abs() would mean that a new array of the same size as n's would have to be allocated for this operation. In reality we often have the case that the original n is no longer needed, thus we can reuse its existing memory. To allow this, the Abs function takes a pointer to an existing big.Rat which might be n itself. The implementation can now reuse the memory. The caller is now in full control of what memory to use for these operations.
This might not make the nicest API for all use cases; in fact, if you just want to do a quick calculation on a few large numbers, on a computer with gigabytes of RAM, you might have preferred the n.Abs() version. But if you do numerically expensive computations with a lot of large numbers, you must be able to control your memory. Imagine doing some image manipulation on a Raspberry Pi, for example, where you are more constrained by the available memory. In this case the existing API allows you to be more efficient.
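A short sketch of the two usage patterns this API allows:

package main

import (
	"fmt"
	"math/big"
)

func main() {
	n := big.NewRat(-3, 4)

	// Reuse n's own storage: after this call n holds |n|.
	n.Abs(n)
	fmt.Println(n) // 3/4

	m := big.NewRat(-1, 2)

	// Keep m intact and write the result into a freshly allocated receiver.
	abs := new(big.Rat).Abs(m)
	fmt.Println(m, abs) // -1/2 1/2
}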
var a = [...]int{1,2,3,4,5,6}
s1 := a[2:4:5]
Suppose s1 goes out of scope later than a. How does gc know to reclaim the memory of s1's underlying array a?
Consider the runtime representation of s1 (see the spec):
type SliceHeader struct {
Data uintptr
Len int
Cap int
}
The GC doesn't even know about the beginning of a.
Go uses a mark-and-sweep collector as its present implementation.
As per the algorithm, there is a set of root objects, and the rest forms a tree-like structure; in the case of multi-core machines the GC runs along with the program on one core.
The GC traverses the tree, and when something is no longer reachable it considers it free.
Go objects also carry metadata, as stated in this post.
An excerpt:
We needed to have some information about the objects since we didn't have headers. Mark bits are kept on the side and used for marking as well as allocation. Each word has 2 bits associated with it to tell you if it was a scalar or a pointer inside that word. It also encoded whether there were more pointers in the object so we could stop scanning objects sooner than later.
The reason Go's slices (slice headers) are structures instead of pointers to structures is documented by Russ Cox on this page, under the slice section.
This is an excerpt:
Go originally represented a slice as a pointer to the structure (slice header), but doing so meant that every slice operation allocated a new memory object. Even with a fast allocator, that creates a lot of unnecessary work for the garbage collector, and we found that, as was the case with strings, programs avoided slicing operations in favor of passing explicit indices. Removing the indirection and the allocation made slices cheap enough to avoid passing explicit indices in most cases.
The size(length) of an array is part of its type. The types [1]int and [2]int are distinct.
One thing to remember is that Go is a value-oriented language: instead of storing pointers, it stores direct values.
Arrays are values in Go: a [3]int is a single value (the array as a whole), so if you pass an array, the whole array is copied.
When you do a[1], you are accessing part of that value.
The SliceHeader's Data field says: treat this address as the base of the array, instead of a[0].
As far as I know:
When you request a[4], the address
&a[0] + sizeof(type)*4
is computed.
Now, if you are accessing elements through the slice s = a[2:4],
and you request s[1], what you are actually requesting is
&a[2] + sizeof(type)*1
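A minimal sketch demonstrating that the slice indexes into the original array's memory:

package main

import "fmt"

func main() {
	a := [...]int{1, 2, 3, 4, 5, 6}
	s := a[2:4] // the slice header's Data field points at &a[2]

	// s[1] addresses the same memory as a[3]: the base &a[2] plus one
	// element of the element size.
	fmt.Println(&s[1] == &a[3]) // true

	s[1] = 42
	fmt.Println(a[3]) // 42
}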
Background: I'm writing a toy Lisp (Scheme) interpreter in Haskell. I'm at the point where I would like to be able to compile code using LLVM. I've spent a couple days dreaming up various ways of feeding untyped Lisp values into compiled functions that expect to know the format of the data coming at them. It occurs to me that I am not the first person to need to solve this problem.
Question: What are some historically successful ways of mapping untyped data into an efficient binary format.
Addendum: In point of fact, I do know which of about a dozen different types the data is, I just don't know which one might be sent to the function at compile time. The function itself needs a way to determine what it got.
Do you mean, "I just don't know which [type] might be sent to the function at runtime"? It's not that the data isn't typed; certainly 1 and '() have different types. Rather, the data is not statically typed, i.e., it's not known at compile time what the type of a given variable will be. This is called dynamic typing.
You're right that you're not the first person to need to solve this problem. The canonical solution is to tag each runtime value with its type. For example, if you have a dozen types, number them like so:
0 = integer
1 = cons pair
2 = vector
etc.
Once you've done this, reserve the first four bits of each word for the tag. Then, every time two objects get passed in to +, first you perform a simple bit mask to verify that both objects' first four bits are 0b0000, i.e., that they are both integers. If they are not, you jump to an error message; otherwise, you proceed with the addition, and make sure that the result is also tagged accordingly.
This technique essentially makes each runtime value a manually-tagged union, which should be familiar to you if you've used C. In fact, it's also just like a Haskell data type, except that in Haskell the taggedness is much more abstract.
I'm guessing that you're familiar with pointers if you're trying to write a Scheme compiler. To avoid limiting your usable memory space, it may be more sensible to use the bottom (least significant) four bits, rather than the top ones. Better yet, because aligned dword pointers already have three meaningless bits at the bottom, you can simply co-opt those bits for your tag, as long as you dereference the actual address, rather than the tagged one.
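Here is a small sketch, in Go for concreteness, of the low-bit tagging idea; the tag values, helper names, and word type are all invented for illustration, and a real Scheme runtime would apply the same masks to raw machine words.

package main

import (
	"errors"
	"fmt"
)

// A hypothetical 3-bit tag scheme using the low bits of a word.
const (
	tagFixnum = 0b000 // small integer; payload lives in the upper bits
	tagPair   = 0b001 // pointer to a cons cell (aligned, so its low bits are free)
	tagMask   = 0b111
)

type word uintptr

// makeFixnum shifts the payload left to leave room for the tag
// (negative fixnums are ignored here for brevity).
func makeFixnum(n int) word { return word(n)<<3 | tagFixnum }

// fixnumValue shifts the tag back out to recover the payload.
func fixnumValue(w word) int { return int(w >> 3) }

// add checks both tags before operating, as described above for "+".
func add(x, y word) (word, error) {
	if x&tagMask != tagFixnum || y&tagMask != tagFixnum {
		return 0, errors.New("+: both arguments must be fixnums")
	}
	return makeFixnum(fixnumValue(x) + fixnumValue(y)), nil
}

func main() {
	a, b := makeFixnum(20), makeFixnum(22)
	sum, err := add(a, b)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(fixnumValue(sum)) // 42
}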
Does that help?
Your default solution should be a simple tagged union. If you want to narrow your typing down to more specific types, you can do it - but it won't be that "toy" any more. A thing to look at is called abstract interpretation.
There are a few successful implementations of such an optimisation, with V8 probably being the most widespread. In the Scheme world, the most aggressively optimising implementation is Stalin.