Suppose one has a big structure with a lot of member variables. Some function needs to access 4-5 of those members to do its work. Which of the scenarios below would be more cache-effective (fewer cache misses)?
1.) Pass a pointer to the structure as an argument to the function, which in turn accesses the needed members. (Assume the members are not contiguous in the structure declaration but are spread apart.)
2.) Pass the individual structure members as arguments to the function. (A concrete sketch of both scenarios follows below.)
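For concreteness, the two scenarios might look roughly like this (a hypothetical sketch; BigStruct and its members are made-up names):

struct BigStruct {
    int a;
    char pad1[200];   // unrelated members keep a, b and c far apart
    int b;
    char pad2[200];
    int c;
};

// Scenario 1: the callee chases the pointer and touches the scattered members.
int useByPointer(const BigStruct *s) {
    return s->a + s->b + s->c;
}

// Scenario 2: the caller reads the members and passes copies by value.
int useByValue(int a, int b, int c) {
    return a + b + c;
}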
And does this choice affect the performance of the code from a cache perspective in the first place?
Thanks.
-AD
Ignoring cache issues, passing a pointer will always be fastest, as there is no overhead of copying the interesting fields.
Well... if the members accessed are many cache lines apart, then it would probably help to collect them all (on the stack, or even in registers if possible) as arguments, provided the function performs many accesses. If not, the extra overhead of reading out the members and setting up the call might eat up the benefit.
I think this is a micro-optimization, and that you should profile both cases, and then document any change to the code that you do as a result of said profiling (since it won't be obvious to the casual observer, later on).
A memory access is a memory access. It doesn't matter whether it happens in the caller or the callee. Ignoring the cache, there are several reasons to pass a pointer (pass by reference).
Separation of concerns dictates that the callee should decide what it wants to access.
Passing more parameters may increase pressure on the register file and/or cause more accesses to the stack.
Passing a single argument is more readable than several. (Arguably related to separation of concerns.)
The only way to improve cache performance is to improve locality. Arrange the variables to be consecutive in the struct (or whatever) definition. Arrange the algorithm to access each structure only once. If these aren't simple changes to make, and the program is cache-bound, then improving performance will simply take that much programming effort.
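As a sketch of that first suggestion: grouping the members that are used together lets them share a cache line or two (hypothetical field names):

struct Record {
    // Hot members, grouped so the 4-5 fields the function needs
    // are likely to land on one or two cache lines:
    int hot1;
    int hot2;
    int hot3;
    int hot4;
    // Rarely used members pushed to the end:
    char description[256];
    double stats[64];
};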
Related
I am building my first production web service in Go, so I am quite new to the language and some of its concepts/patterns.
My question is related to handlers and essentially how to pull out duplicated code without degrading performance.
I have come across the pattern of wrapping either http.Handler or http.HandlerFunc to clean up code. For example, this blog post uses an adapter pattern: https://medium.com/@matryer/writing-middleware-in-golang-and-how-go-makes-it-so-much-fun-4375c1246e81
It might end up with something like this (copied from the blog post):
http.Handle("/", Adapt(indexHandler,
    AddHeader("Server", "Mine"),
    CheckAuth(providers),
    CopyMgoSession(db),
    Notify(logger),
))
which is basically a deeply nested function call.
The question I have is what happens in the stack and the performance of the service? With this pattern, every single user request will add at least 5 stack frames to the stack. Is that acceptable or will it have a negative effect on performance when traffic is high?
Chaining middleware basically just means making each handler in the chain call the next one, often conditionally on whether everything has gone well so far. In another approach, some external mechanism may call the handlers one by one.
However, it all comes down to the fact that the handlers will be called. The Handler.ServeHTTP() method looks like this:
type Handler interface {
    ServeHTTP(ResponseWriter, *Request)
}
A simple method with 2 parameters and no return values. The parameters are of type http.ResponseWriter (an interface type) and *http.Request (a pointer type).
So a call to a handler's ServeHTTP() involves two things: making a copy of its arguments (fast, since they are small), and actually making the call (the stack bookkeeping of creating a new stack frame, recording the return address, and saving used registers, then executing the called function), which is also very fast (see the quote at the end of the answer).
So should you worry about calling functions? No. Will this be less performant compared to a handler which contains everything? Yes. Is the difference significant? No. Serving an HTTP request could take hundreds of milliseconds (network latency included). Calling 10 functions in your handler will not make it noticeably slower.
If you'd worry about the performance loss due to function calls, then your app would consist of one single main() function. Obviously nobody wants that. You create functions to break down your initially large problem to smaller ones (recursively until it is "small enough" to be on its own) which you can oversee and reuse and test independently from others, and you assemble your large problem from the smaller ones. It's not really a question of performance but maintainability and reusability. Would you really want to copy that 100-line code which checks the user's identity to all your 10 different handlers?
One last thing. Should you be concerned about "consuming" the stack (resulting in a stack overflow error)? The answer is no. A goroutine starts with a small 4096 byte stack which grows and shrinks as needed without the risk of ever running out. Read more about it at Why is a Goroutine’s stack infinite? Also detailed at FAQ: Why goroutines instead of threads?
To make the stacks small, Go's run-time uses resizable, bounded stacks. A newly minted goroutine is given a few kilobytes, which is almost always enough. When it isn't, the run-time grows (and shrinks) the memory for storing the stack automatically, allowing many goroutines to live in a modest amount of memory. The CPU overhead averages about three cheap instructions per function call.
For example, an immutable CFString can store the length and the character data in the same block of memory. And, more generally, there is NSAllocateObject(), which lets you specify extra bytes to be allocated after the object’s ivars. The amount of storage is determined by the particular instance rather than being fixed by the class. This reduces memory use (one allocation instead of two) and improves locality of reference. Is there a way to do this with Swift?
A rather late reply. 😄 NSAllocateObject() is now deprecated for some reason. However, NSAllocateObject() is really a wrapper around class_createInstance, which is not deprecated. So, in principle, one could use this to allocate extra bytes for an object instance.
I can't see why this wouldn't work in Swift. But accessing the extra storage would be messy because you'd have to start fooling around with unsafe pointers and the like. Moreover, if you're not the author of the original class, then you risk conflicting with Apple's ivars, especially in cases where you might be dealing with a class cluster which could potentially have a number of different instance sizes, according to the specific concrete implementation.
I think a safer approach would be to make use of objc_setAssociatedObject and objc_getAssociatedObject, which are accessible in Swift. E.g. Is there a way to set associated objects in Swift?
This page has been quite confusing for me.
It says:
Memory management in newLISP does not rely on a garbage collection algorithm. Memory is not marked or reference-counted. Instead, a decision whether to delete a newly created memory object is made right after the memory object is created.
newLISP follows a one reference only (ORO) rule. Every memory object not referenced by a symbol is obsolete once newLISP reaches a higher evaluation level during expression evaluation. Objects in newLISP (excluding symbols and contexts) are passed by value copy to other user-defined functions. As a result, each newLISP object only requires one reference.
Further down, I see:
All lists, arrays and strings are passed in and out of built-in functions by reference.
I can't make sense of these two.
How can newLISP "not rely on a garbage collection algorithm", and yet pass things by reference?
For example, what would it do in the case of circular references?!
Is it even possible for a LISP to not use garbage collection, without making performance go down the drain? (I assume you could always pass things by value, or you could always perform a full-heap scan whenever you think it might be necessary, but either of those seems like it would badly hurt performance.)
If so, how would it deal with circular references? If not, what do they mean?
Perhaps reading http://www.newlisp.org/ExpressionEvaluation.html will help you understand the http://www.newlisp.org/MemoryManagement.html paper better. Regarding circular references: they do not exist in newLISP; there is no way to create them. The performance question is addressed in a subchapter of that memory management paper and here: http://www.newlisp.org/benchmarks/
Maybe working and experimenting with newLISP (i.e. trying to create a circular reference) will clear up most of the questions.
I've been creating a temporary object stack, mainly for heap-based STL structures which only actually have temporary lifetimes, but for any other temporary dynamically sized allocation too. The one stack serves all types, storing them in an unrolled linked list.
I've come a cropper with alignment. I can get the alignment with std::alignment_of<T>, but this isn't really great, because I need the alignment of the next type I want to allocate. Right now, I've just arbitrarily sized each object at a multiple of 16, which, as far as I know, is the maximal alignment for any x86 or x64 type. But now I have two pointers of memory overhead per object, as well as the cost of allocating them in my vector, plus the cost of rounding every size up to a multiple of 16.
On the plus side, construction and destruction are fast and reliable.
How does this compare to regular operator new/delete? And, what kind of test suites can I run? I'm pretty pleased with my current progress and don't want to find out later that it's bugged in some nasty subtle fashion, so any advice on testing the operations would be nice.
This doesn't really answer your question, but Boost recently added a memory pool library in its most recent version.
It may not be exactly what you want, but it contains a thorough treatment of alignment which might spark an idea. If the docs are not enough, there is always the source code.
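For the rounding itself, instead of a blanket multiple of 16, the usual trick is to round each allocation up to that type's actual alignment. A minimal sketch (roundUp and slotSize are made-up helper names):

#include <cstddef>
#include <type_traits>

// Round size up to the next multiple of align (align must be a power of two).
inline std::size_t roundUp(std::size_t size, std::size_t align) {
    return (size + align - 1) & ~(align - 1);
}

// Pad each slot to the specific type's alignment rather than a fixed 16.
template <typename T>
std::size_t slotSize() {
    return roundUp(sizeof(T), std::alignment_of<T>::value);
}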
Does anyone know of a GC algorithm which utilises type information to allow incremental collection, optimised collection, parallel collection, or some other nice feature?
By type information, I mean real semantics. Let me give an example: suppose we have an OO-style class with methods that maintain a list while hiding the representation. When the object becomes unreachable, the collector can just run down the list deleting all the nodes. It knows they're all unreachable now, because of encapsulation. It also knows there's no need to do a general scan of the nodes for pointers, because it knows all the nodes are the same type.
Obviously, this is a special case, easily handled with destructors in C++ (see the sketch below). The real question is whether there is a way to analyse the types used in a program and direct the collector to use the resulting information to its advantage. I guess you'd call this a type-directed garbage collector.
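For reference, here is how that list special case looks with C++ destructors: because the list is the sole owner of its nodes (thanks to encapsulation), teardown is a straight walk with no general pointer scan. A minimal sketch with made-up names:

struct Node {
    int value;
    Node *next;
};

class IntList {
public:
    void push(int v) { head_ = new Node{v, head_}; }
    ~IntList() {
        // Encapsulation guarantees the list is the only owner of its
        // nodes, so we can delete them with a simple walk; there is
        // no need to scan each node for other pointers.
        while (head_) {
            Node *next = head_->next;
            delete head_;
            head_ = next;
        }
    }
private:
    Node *head_ = nullptr;
};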
The idea of at least exploiting containers for garbage collection in some way is not new, though in Java, you cannot generally assume that a container holds the only reference to objects within it, so your approach will not work in that context.
Here are a couple of references. One is for leak detection, and the other (from my research group) is about improving cache locality.
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4814126
http://www.cs.umass.edu/~emery/pubs/06-06.pdf
You might want to visit Richard Jones's extensive garbage collection bibliography for more references, or ask the folks on gc-list.
I don't think it has anything to do with a specific algorithm.
When the GC computes the object relationship graph, the information that a collection object is solely responsible for the elements of its list is implicitly present in the graph, provided the compiler was good enough to extract it.
Whatever GC algorithm is chosen, the benefit depends more on how the compiler/runtime extracts this information.
Also, I would avoid C and C++ with GC. Because of pointer arithmetic, aliasing, and the possibility of pointing into the interior of an object (a reference to a data member or to an array element), it's incredibly hard to perform accurate garbage collection in these languages. They were not crafted for it.