Reflection.Emit Performance

Here's a simple question.
Let's say we want to unroll a looping method such as:
public int DoSum1(int n)
{
    int result = 0;
    for (int i = 1; i <= n; i++)
    {
        result += i;
    }
    return result;
}
Into a method performing simple additions only:
public int DoSum2()
{
    return 1+2+3+4+5+6+7+8+9+10+11+12+13+14+15+16+17+18+19+20;
}
http://etutorials.org/Programming/Programming+C.Sharp/Part+III+The+CLR+and+the+.NET+Framework/Chapter+18.+Attributes+and+Reflection/18.3+Reflection+Emit/
Logically, we're going to need code to create DoSum2 in IL at some point.
In this IL generation code, we will perform an actual loop with the same iteration count as the unoptimized method.
What's the point of creating a super fast dynamic method if the code required to generate it will use a similar amount of time to execute???
Perhaps you can give an example of when it's worth using Emit in a case like this?

What's the point of creating a super fast dynamic method if the code required to generate it will use a similar amount of time to execute
This isn't really specific to Reflection.Emit, but to runtime code generation in general, so I will answer accordingly.
First, I do not recommend using code generation simply to perform micro-optimizations that compilers normally perform, like loop unrolling. Let the JIT compiler do its job.
Second, you are right in that there is usually little point in generating code that will only execute once. The time required to emit and JIT compile the IL is not insubstantial. You should only bother generating code if it will be executed many times.
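To make that trade-off concrete, here's a minimal sketch (the UnrolledSums helper and its caching scheme are my own illustration, not from the question) that emits the question's DoSum2 body as straight-line IL with a DynamicMethod and caches the resulting delegate, so the one-time emit/JIT cost is amortized over many calls:
using System;
using System.Collections.Generic;
using System.Reflection.Emit;

static class UnrolledSums
{
    // Cache generated delegates so the emit + JIT cost is paid once per n
    // and amortized over many subsequent calls.
    private static readonly Dictionary<int, Func<int>> Cache = new Dictionary<int, Func<int>>();

    public static Func<int> For(int n)
    {
        if (Cache.TryGetValue(n, out var cached))
            return cached;

        var method = new DynamicMethod("DoSum2", typeof(int), Type.EmptyTypes);
        var il = method.GetILGenerator();

        // Emit "return 1 + 2 + ... + n;" as straight-line IL.
        il.Emit(OpCodes.Ldc_I4, 1);
        for (int i = 2; i <= n; i++)
        {
            il.Emit(OpCodes.Ldc_I4, i);
            il.Emit(OpCodes.Add);
        }
        il.Emit(OpCodes.Ret);

        var del = (Func<int>)method.CreateDelegate(typeof(Func<int>));
        Cache[n] = del;
        return del;
    }
}
Calling UnrolledSums.For(20)() repeatedly only pays the generation cost on the first call; if the method were only ever called once, generating it would cost far more than it saves.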
Now, there definitely are cases where runtime code generation can prove beneficial. In fact, it's a technique I leverage heavily. I work in an electronic trading environment where it is necessary to process very high volumes of dynamic data. This introduces several concerns, the most significant being memory usage and throughput.
Our trading application needs to keep a lot of data in memory, so the footprint of each record is critical. Dynamic data structures like maps/dictionaries are less efficient than "POCO" classes with optimized field layouts and, depending on the design, may require boxing some values. I avoid this overhead by generating client-side storage classes once the shape of the data is known. In effect, the memory layout is as it would have been had I known the shape of the data at compile time.
Throughput is a major issue as well; (de)serializing dynamic data often involves some additional introspection and extra layers of indirection. Need to serialize a record? OK, first you need to query what the fields are. Then, for each field, you need to determine its type, then select a serializer for that type, and then invoke the serializer. If your data structure has optional fields, you may need to do some additional pre-processing, like figuring out the size of a presence map, and which bits in the presence map correspond to which fields. If you need to process a ton of data, all that overhead becomes a real problem. I avoid this overhead by generating specialized (de)serializers on both the server side and client side. Since the serializers are generated on demand, they can know the exact shape of the data, and read/write that data as efficiently as a hand-optimized serializer. When you have a high volume of data updating at very high frequencies, this can make a huge difference.
Now, keep in mind that we're something of an edge case. Most applications do not have the aggressive memory and throughput requirements that ours has, so runtime code generation isn't necessary. You should only go that route if you really need it, and you have exhausted all other possibilities. Although it can help with performance, generated code can be very difficult to debug and maintain.

Related

Parallel counting using a functional approach and immutable data structures?

I have heard and bought the argument that mutation and state are bad for concurrency. But I struggle to understand what the correct alternatives actually are.
For example, when looking at the simplest of all tasks: counting, e.g. word counting in a large corpus of documents. Accessing and parsing the documents takes a while, so we want to do it in parallel using k threads, actors, or whatever the abstraction for parallelism is.
What would be the correct but also practical pure functional way, using immutable data structures to do this?
The general approach to analyzing data sets in a functional way is to partition the data set in some way that makes sense; for a document, you might cut it up into sections based on size, e.g. four threads means the document is sectioned into four pieces.
The thread or process then executes its algorithm on its section of the data set and generates an output. All the outputs are gathered together and then merged. For word counts, for example, each section's word counts are sorted by word, and then each list is stepped through looking for the same words. If a word occurs in more than one list, the counts are summed. In the end, a new list with the sums of all the words is output.
This approach is commonly referred to as map/reduce. The step of converting a document into word counts is a "map" and the aggregation of the outputs is a "reduce".
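Here's a minimal C# sketch of that map/reduce shape (the WordCount helper is illustrative): each document is counted independently with no shared mutable state (the map), and the per-document dictionaries are merged afterwards (the reduce).
using System;
using System.Collections.Generic;
using System.Linq;

static class WordCount
{
    public static Dictionary<string, int> Count(IEnumerable<string> documents)
    {
        // Map: count each document independently; each partial result is
        // private to the thread that produced it.
        var partials = documents
            .AsParallel()
            .Select(doc => doc
                .Split(new[] { ' ', '\t', '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries)
                .GroupBy(w => w)
                .ToDictionary(g => g.Key, g => g.Count()))
            .ToList();

        // Reduce: merge the partial counts into a single result.
        var merged = new Dictionary<string, int>();
        foreach (var partial in partials)
            foreach (var kv in partial)
            {
                merged.TryGetValue(kv.Key, out var existing);
                merged[kv.Key] = existing + kv.Value;
            }
        return merged;
    }
}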
In addition to eliminating the locking overhead needed to prevent data conflicts, a functional approach enables the compiler to optimize the code more aggressively. Not all languages and compilers do this, but because the compiler knows its variables are not going to be modified by an outside agent, it can apply transforms to the code to increase its performance.
In addition, functional programming lets systems like Spark dynamically create threads because the boundaries of change are clearly defined. That's why you can write a single function chain in Spark, and then just throw servers at it without having to change the code. Pure functional languages can do this in a general way, making every application intrinsically multi-threaded.
One of the reasons functional programming is "hot" is because of this ability to enable multiprocessing transparently and safely.
Mutation and state are bad for concurrency only if mutable state is shared between multiple threads for communication, because it's very hard to reason about impure functions and methods that silently trash some shared memory in parallel.
One possible alternative is using message passing for communication between threads/actors (as is done in Akka), and building ("reasonably pure") functional data analysis frameworks like Apache Spark on top of it. Apache Spark is known to be rather suitable for counting words in a large corpus of documents.

How to remove nested foreach loops for performance improvement

I have a performance-based question.
Is there a way to remove the nested foreach loops, replacing them with something more performant? Here is an example:
List<foo> foos = SelectAllfoos();
foreach (foo f in foos)
{
    // do something
    foreach (foo2 f2 in f.GetFoos2())
    {
        // do something
    }
    foreach (foo3 f3 in f.GetFoos3())
    {
        // do something
    }
    foreach (foo4 f4 in f.GetFoos4())
    {
        // do something
        foreach (foo4_1 f4_1 in f4.GetFoos4_1())
        {
            // do something
        }
    }
}
Obviously this is fake code I just invented for this example, but imagine you have something like that. How would you improve this method's performance?
PS: I already tried using System.Threading.Tasks.Parallel.ForEach and it improves performance, but I mean a better way to write this code.
PPS: this is written in C#, but my question has a wider scope: something useful in all languages.
Since the question is rather general and focused only on loops, which provide no information about the actual work being done, I can only provide a general answer.
The last thing you typically want to focus on are the loop mechanics themselves. These often yield little, if any, impact.
Typically if you have this kind of situation where algorithmic improvements are out (ex: sequential loops that cannot do better than linear-time complexity as they require traversing and doing something with every single element no matter what), then the two biggest improvements will often come from parallelization and memory optimization.
The latter one is unfortunately less discussed, especially in higher-level languages, but often carries just as much or more impact. It can improve execution times by orders of magnitudes, and is applicable regardless of the language. Concepts like cache efficiency are not language-dependent concepts, as the hardware remains the same no matter what programming language we use (though how we achieve it can vary considerably between languages).
Memory Access Patterns
For example, take an image-processing algorithm. A memory access pattern that visits pixels one horizontal scanline at a time in the outer loop can significantly outperform one that visits pixels one vertical column at a time, even when the two versions execute otherwise identical machine instructions with the same total instruction-level cost (though instruction costs are variable) and differ only in the order in which they access memory.
It's because, put crudely, computers fetch data from slower forms of memory into faster forms of memory in contiguous chunks (pages, cache lines). When you access pixels of an image horizontally, an adjacent, horizontal chunk of pixels might be fetched from a slower form of memory into a faster form, and you end up accessing all the neighboring pixels from the faster form of memory before moving on to the next series of pixels. When you access pixels of an image vertically, you end up loading horizontally neighboring pixels into a faster form of memory only to use one pixel from that chunk. The resulting image algorithm can be slowed down significantly by cache misses, since we're failing to use all the data available when it's loaded into a smaller but faster form of memory before it is evicted (we're basically wasting a lot of the benefits of that smaller but faster memory).
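As a minimal C# illustration of that point (hypothetical helper; assume a grayscale image stored row-major, one byte per pixel), both methods perform exactly the same additions and differ only in traversal order; the row-major version is typically far faster on large images:
static class PixelSums
{
    // Row-major (scanline) order touches memory contiguously: each cache line
    // that is fetched is fully used before moving on.
    public static long SumRowMajor(byte[] pixels, int width, int height)
    {
        long sum = 0;
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
                sum += pixels[y * width + x];
        return sum;
    }

    // Column order does the same arithmetic but strides by 'width' bytes on
    // every step, so most of each fetched cache line is wasted.
    public static long SumColumnMajor(byte[] pixels, int width, int height)
    {
        long sum = 0;
        for (int x = 0; x < width; x++)
            for (int y = 0; y < height; y++)
                sum += pixels[y * width + x];
        return sum;
    }
}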
So typically if you want to make loops go faster, and algorithmic improvements are out, you want to analyze the way that memory is being accessed and potentially change even the memory layout of the data structures involved. Computers like it when you access contiguous data close together in memory, and don't like it so much when you're accessing memory in a chaotic way that's going all over the place. They like arrays which pack their memory contents tightly together a lot more than linked structures which scatter the memory all over the place (unless the linked structures or their memory allocators are carefully designed not to do that). Speedy loops don't come from changing the mechanics of the loop so much as what the loops are doing, but deeper than algorithmic improvements and perhaps even parallelization are those memory-related optimizations coming from a data-oriented design mindset. In languages like C#, one of the techniques to get better locality of reference out of your data structures is object pooling.
Loop Tiling/Blocking
Occasionally there are opportunities where you can improve the memory access patterns by simply changing the way you loop over the data without actually changing the way the data is represented. One such example is loop tiling (aka loop blocking): https://software.intel.com/en-us/articles/how-to-use-loop-blocking-to-optimize-memory-use-on-32-bit-intel-architecture. But again, here the speedup isn't coming from optimizing how you write the loop, per se, but optimizing the way you traverse the data in a way that exploits locality of reference. It's still entirely about memory access.
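For illustration, here's a small loop-tiling sketch in C# (the block size and names are my own, not from the linked article): a tiled matrix transpose, where the naive version necessarily strides through one of the two arrays, while the tiled version keeps both the reads and the writes inside a small, cache-resident region at a time.
using System;

static class Tiling
{
    const int Block = 64;   // tile edge; tune to the cache size

    // Tiled transpose: a naive transpose reads 'src' row by row but writes
    // 'dst' column by column, striding through memory on every write.
    // Working in Block x Block tiles keeps both the reads and the writes
    // inside a small, cache-resident region at any one time.
    public static void Transpose(double[,] src, double[,] dst)
    {
        int rows = src.GetLength(0), cols = src.GetLength(1);
        for (int bi = 0; bi < rows; bi += Block)
            for (int bj = 0; bj < cols; bj += Block)
                for (int i = bi; i < Math.Min(bi + Block, rows); i++)
                    for (int j = bj; j < Math.Min(bj + Block, cols); j++)
                        dst[j, i] = src[i, j];
    }
}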
Profiling
All of these micro-level optimization techniques have a tendency to make your code harder to maintain, so they're almost always best applied in hindsight with plenty of profiling measurements in your hand. The first thing to learn about optimization in general is how to measure, to do it based on hard data rather than hunches. Beginners tend to want to optimize more, not less, because they're doing it based on guesses about what might be inefficient instead of hard data and proper measurements. It's easy to do this for glaring algorithmic bottlenecks, but anything else typically demands a profiler in your hand. A good optimizer is a sniper dispatching hotspots, not a grenadier blindly hurling grenades at anything that might slow things down. In fact, knowing how to prioritize optimizations properly and to make the proper measurements is probably even more important than understanding the inner workings of the machine. So probably beyond all this stuff, if you want to make your loops go faster, first grab a profiler and learn how to measure inefficiencies properly. The first thing to ask is not how to make things faster so much as what actually needs to be faster (and just as importantly if not more, what doesn't).

Techniques for handling arrays whose storage requirements exceed RAM

I am author of a scientific application that performs calculations on a gridded basis (think finite difference grid computation). Each grid cell is represented by a data object that holds values of state variables and cell-specific constants. Until now, all grid cell objects have been present in RAM at all times during the simulation.
I am running into situations where the people using my code wish to run it with more grid cells than they have available RAM. I am thinking about reworking my code so that information on only a subset of cells is held in RAM at any given time. Unfortunately the grids (or matrices if you prefer) are not sparse, which eliminates a whole class of possible solutions.
Question: I assume that there are libraries out in the wild designed to facilitate this type of data access (i.e. retrieve constants and variables, update variables, store for future reference, wipe memory, move on...). After several hours of searching Google and Stack Overflow, I have found relatively few libraries of this sort.
I am aware of a few options, such as this one from the HSL mathematical library: http://www.hsl.rl.ac.uk/specs/hsl_of01.pdf. I'd prefer to work with something that is open source and is written in Fortran or C. (my code is mostly Fortran 95/2003, with a little C and Python thrown in for good measure!)
I'd appreciate any suggestions regarding available libraries or advice on how to reformulate my problem. Thanks!
Bite the bullet and roll your own?
I deal with too-large data all the time, such as 30,000+ data series of half-hourly data that span decades. Because of the regularity of the data (though daylight savings changeovers are a problem), it proved quite straightforward to devise a scheme involving a random-access disc file and procedures ReadDay and WriteDay that take a series number and a day number, with further details because series start and stop at different dates. Thus, a day's data in an array might have been Array(Run,DayNum) but is now ReturnCode = ReadDay(Run,DayNum,Array) and so forth, the codes indicating presence/absence of that day's data, etc. The key is that a day's data is a convenient and (almost) regular size, and although my program allocates a buffer of one record per series, it runs in ~100MB of memory rather than GB.
Because your array is non-sparse, it is regular. Granted that a grid cell's data are of fixed size, you could devise a random-access disc file with each record holding one cell, or perhaps a row's worth of cells (or a column's worth), or some worthwhile blob size. I chose 4,096 bytes/record as that is the disc file allocation size. Let the computer's operating system and disc storage controller do whatever buffering to real memory they feel up to. Typical execution is restricted to the speed of data transfer, however, unless the local computation is heavy. Thus, I get CPU use of a few percent until data requests start being satisfied from buffers.
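Here's a minimal sketch of that record-per-cell idea, written in C# purely for illustration (a Fortran version would wrap the same offset arithmetic around READ/WRITE on a direct-access unit); the DiskBackedGrid name and the one-double-per-cell record layout are assumptions:
using System;
using System.IO;

// Each grid cell is a fixed-size record at a computable offset in a
// random-access file; the OS page cache provides the buffering.
sealed class DiskBackedGrid : IDisposable
{
    const int RecordSize = sizeof(double);   // one state variable per cell in this sketch
    readonly FileStream _file;
    readonly int _cols;

    public DiskBackedGrid(string path, int rows, int cols)
    {
        _cols = cols;
        _file = new FileStream(path, FileMode.OpenOrCreate, FileAccess.ReadWrite);
        _file.SetLength((long)rows * cols * RecordSize);
    }

    long Offset(int i, int j) => ((long)i * _cols + j) * RecordSize;

    public double Read(int i, int j)
    {
        var buffer = new byte[RecordSize];
        _file.Seek(Offset(i, j), SeekOrigin.Begin);
        _file.Read(buffer, 0, RecordSize);
        return BitConverter.ToDouble(buffer, 0);
    }

    public void Write(int i, int j, double value)
    {
        _file.Seek(Offset(i, j), SeekOrigin.Begin);
        _file.Write(BitConverter.GetBytes(value), 0, RecordSize);
    }

    public void Dispose() => _file.Dispose();
}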
Because Fortran uses the same syntax for functions as for arrays (unlike, say, Pascal), instead of declaring DIMENSION ARRAY(Big,Big) you would remove that and devise FUNCTION ARRAY(i,j), and all read references in your source file stay as they are. Alas, in the absence of a "palindromic" function declaration, assignments of values to your array will have to be done with a different syntax, and you devise a subroutine or similar. Possibly a scratchpad array could be collated, worked upon with convenient syntax, and then written back if changed.

How does SIMD behave in this case?

I am using an engine that allows SIMD code to be written, and it performs fast. But there is only one block that has all the code.
I understand that this code is run independently on each entity concurrently, but when there is only 1 thing changing, is it still faster to calculate it regardless? Is this the idea with SIMD, parallelism?
For instance:
void simdFunction()
{
    center = mesh.center(); // always the same
    vert.pos.x = center.x;  // run on each vertex
}
In this case, the center is always the same, so will it be calculated for each vertex on SIMD? If so, is this still efficient?
Basically, does being able to run this in parallel outweigh the cost of calculating it redundantly, in the general SIMD programming sense?
this code is run independently on each entity concurrently
No, that's not how SIMD works.
With SIMD, all arithmetic units are working in lock-step, performing identical operations. There's no independence whatsoever.
Generally though, you're better off computing shared constants just once, in sequential code. That way the SIMD engine will spend less time on each slice of vertices.
The exception would be if the computation is short, the SIMD engine is a co-processor (like GPGPU), and the data is already on that co-processor. In that case, computing it with SIMD might easily beat moving the data to the sequential processor and back.
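To make the earlier point concrete (compute the shared constant once, outside the vectorized loop), here's a sketch using System.Numerics in C#; the VertexOps helper and the flat float[] vertex layout are illustrative assumptions, not the question's engine:
using System.Numerics;

static class VertexOps
{
    // The shared constant is computed once in scalar code (here it's just
    // passed in), broadcast into a vector once, and then stored out a
    // SIMD-width chunk of vertices at a time.
    public static void SetAllX(float[] xs, float centerX)
    {
        var broadcast = new Vector<float>(centerX);
        int i = 0;
        for (; i <= xs.Length - Vector<float>.Count; i += Vector<float>.Count)
            broadcast.CopyTo(xs, i);
        for (; i < xs.Length; i++)       // scalar tail for the leftovers
            xs[i] = centerX;
    }
}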

Pattern name for flippable data structure?

I'm trying to think of a naming convention that accurately conveys what's going on within a class I'm designing. On a secondary note, I'm trying to decide between two almost-equivalent user APIs.
Here's the situation:
I'm building a scientific application, where one of the central data structures has three phases: 1) accumulation, 2) analysis, and 3) query execution.
In my case, it's a spatial modeling structure, internally using a KDTree to partition a collection of points in 3-dimensional space. Each point describes one or more attributes of the surrounding environment, with a certain level of confidence about the measurement itself.
After adding (a potentially large number of) measurements to the collection, the owner of the object will query it to obtain an interpolated measurement at a new data point somewhere within the applicable field.
The API will look something like this (the code is in Java, but that's not really important; the code is divided into three sections, for clarity):
// SECTION 1:
// Create the aggregation object, and get the zillion objects to insert...
ContinuousScalarField field = new ContinuousScalarField();
Collection<Measurement> measurements = getMeasurementsFromSomewhere();
// SECTION 2:
// Add all of the zillion objects to the aggregation object...
// Each measurement contains its xyz location, the quantity being measured,
// and a numeric value for the measurement. For example, something like
// "68 degrees F, plus or minus 0.5, at point 1.23, 2.34, 3.45"
for (Measurement m : measurements) {
    field.add(m);
}
// SECTION 3:
// Now the user wants to ask the model questions about the interpolated
// state of the model. For example, "what's the interpolated temperature
// at point (3, 4, 5)
Point3d p = new Point3d(3, 4, 5);
Measurement result = field.interpolateAt(p);
For my particular problem domain, it will be possible to perform a small amount of incremental work (partitioning the points into a balanced KDTree) during SECTION 2.
And there will be a small amount of work (performing some linear interpolations) that can occur during SECTION 3.
But there's a huge amount of work (constructing a kernel density estimator and performing a Fast Gauss Transform, using Taylor series and Hermite functions, but that's totally beside the point) that must be performed between sections 2 and 3.
Sometimes in the past, I've just used lazy-evaluation to construct the data structures (in this case, it'd be on the first invocation of the "interpolateAt" method), but then if the user calls the "field.add()" method again, I have to completely discard those data structures and start over from scratch.
In other projects, I've required the user to explicitly call an "object.flip()" method to switch from "append mode" into "query mode". The nice thing about a design like this is that the user has better control over the exact moment when the hard-core computation starts. But it can be a nuisance for the API consumer to keep track of the object's current mode. And besides, in the standard use case, the caller never adds another value to the collection after starting to issue queries; data aggregation almost always fully precedes query preparation.
How have you guys handled designing a data structure like this?
Do you prefer to let an object lazily perform its heavy-duty analysis, throwing away the intermediate data structures when new data comes into the collection? Or do you require the programmer to explicitly flip the data structure from append-mode into query-mode?
And do you know of any naming convention for objects like this? Is there a pattern I'm not thinking of?
ON EDIT:
There seems to be some confusion and curiosity about the class I used in my example, named "ContinuousScalarField".
You can get a pretty good idea for what I'm talking about by reading these wikipedia pages:
http://en.wikipedia.org/wiki/Scalar_field
http://en.wikipedia.org/wiki/Vector_field
Let's say you wanted to create a topographical map (this is not my exact problem, but it's conceptually very similar). So you take a thousand altitude measurements over an area of one square mile, but your survey equipment has a margin of error of plus-or-minus 10 meters in elevation.
Once you've gathered all the data points, you feed them into a model which not only interpolates the values, but also takes into account the error of each measurement.
To draw your topo map, you query the model for the elevation of each point where you want to draw a pixel.
As for the question of whether a single class should be responsible for both appending and handling queries, I'm not 100% sure, but I think so.
Here's a similar example: HashMap and TreeMap classes allow objects to be both added and queried. There aren't separate interfaces for adding and querying.
Both classes are also similar to my example, because the internal data structures have to be maintained on an ongoing basis in order to support the query mechanism. The HashMap class has to periodically allocate new memory, re-hash all objects, and move objects from the old memory to the new memory. A TreeMap has to continually maintain tree balance, using the red-black-tree data structure.
The only difference is that my class will perform optimally if it can perform all of its calculations once it knows the data set is closed.
If an object has two modes like this, I would suggest exposing two interfaces to the client. If the object is in append mode, then you make sure that the client can only ever use the IAppendable implementation. To flip to query mode, you add a method to IAppendable such as AsQueryable. To flip back, call IQueryable.AsAppendable.
You can implement IAppendable and IQueryable on the same object, and keep track of the state in the same way internally, but having two interfaces makes it clear to the client what state the object is in, and forces the client to deliberately make the (expensive) switch.
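A sketch of what that could look like (names are illustrative and renamed slightly to avoid clashing with System.Linq.IQueryable; Measurement and Point3d are the question's types):
// Only one of the two interfaces is ever handed to the client at a time.
public interface IAppendableField
{
    void Add(Measurement m);
    IQueryableField AsQueryable();     // the expensive flip happens here
}

public interface IQueryableField
{
    Measurement InterpolateAt(Point3d p);
    IAppendableField AsAppendable();   // flip back; previously built structures are discarded
}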
I generally prefer to have an explicit change, rather than lazily recomputing the result. This approach makes the performance of the utility more predictable, and it reduces the amount of work I have to do to provide a good user experience. For example, if this occurs in a UI, where do I have to worry about popping up an hourglass, etc.? Which operations are going to block for a variable amount of time, and need to be performed in a background thread?
That said, rather than explicitly changing the state of one instance, I would recommend the Builder Pattern to produce a new object. For example, you might have an aggregator object that does a small amount of work as you add each sample. Then instead of your proposed void flip() method, I'd have an Interpolator interpolator() method that gets a copy of the current aggregation and performs all your heavy-duty math. Your interpolateAt method would be on this new Interpolator object.
If your usage patterns warrant, you could do simple caching by keeping a reference to the interpolator you create, and return it to multiple callers, only clearing it when the aggregator is modified.
This separation of responsibilities can help yield more maintainable and reusable object-oriented programs. An object that can return a Measurement at a requested Point is very abstract, and perhaps a lot of clients could use your Interpolator as one strategy implementing a more general interface.
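A minimal sketch of that shape (FieldAggregator and the Interpolator constructor are hypothetical names; Measurement is the question's type):
using System.Collections.Generic;

public sealed class FieldAggregator
{
    private readonly List<Measurement> _samples = new List<Measurement>();
    private Interpolator _cached;

    public void Add(Measurement m)
    {
        _samples.Add(m);          // cheap incremental work only
        _cached = null;           // invalidate any previously built interpolator
    }

    public Interpolator GetInterpolator()
    {
        // All the heavy-duty math happens inside the Interpolator constructor,
        // once per snapshot of the data; repeat callers reuse the cached one.
        return _cached ?? (_cached = new Interpolator(_samples));
    }
}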
I think that the analogy you added is misleading. Consider an alternative analogy:
Key[] data = new Key[...];
data[idx++] = new Key(...); /* Fast! */
...
Arrays.sort(data); /* Slow! */
...
boolean contains = Arrays.binarySearch(data, datum) >= 0; /* Fast! */
This can work like a set, and actually, it gives better performance than Set implementations (which are implemented with hash tables or balanced trees).
A balanced tree can be seen as an efficient implementation of insertion sort. After every insertion, the tree is in a sorted state. The predictable time requirements of a balanced tree are due to the fact the cost of sorting is spread over each insertion, rather than happening on some queries and not others.
The rehashing of hash tables does result in less consistent performance, and because of that, hash tables aren't appropriate for certain applications (perhaps a real-time microcontroller). But even the rehashing operation depends only on the load factor of the table, not the pattern of insertion and query operations.
For your analogy to hold strictly, you would have to "sort" (do the hairy math) your aggregator with each point you add. But it sounds like that would be cost prohibitive, and that leads to the builder or factory method patterns. This makes it clear to your clients when they need to be prepared for the lengthy "sort" operation.
Your objects should have one role and responsibility. In your case, should the ContinuousScalarField be responsible for interpolating?
Perhaps you might be better off doing something like:
IInterpolator interpolator = field.GetInterpolator();
Measurement measurement = interpolator.InterpolateAt(...);
I hope this makes sense, but without fully understanding your problem domain it's hard to give you a more coherent answer.
"I've just used lazy-evaluation to construct the data structures" -- Good
"if the user calls the "field.add()" method again, I have to completely discard those data structures and start over from scratch." -- Interesting
"in the standard use case, the caller never adds another value to the collection after starting to issue queries" -- Whoops, false alarm, actually not interesting.
Since lazy eval fits your use case, stick with it. That's a very heavily used model because it is so delightfully reliable and fits most use cases very well.
The only reasons for rethinking this are (a) a use case change (mixed adding and interpolation), or (b) performance optimization.
Since use case changes are unlikely, you might consider the performance implications of breaking up interpolation. For example, during idle time, can you precompute some values? Or with each add is there a summary you can update?
Also, a highly stateful (and not very meaningful) flip method isn't so useful to clients of your class. However, breaking interpolation into two parts might still be helpful to them -- and help you with optimization and state management.
You could, for example, break interpolation into two methods.
public void interpolateAt( Point3d p );
public Measurement interpolatedMeasurement();
This borrows the relational database Open and Fetch paradigm. Opening a cursor can do a lot of preliminary work, and may start executing the query; you don't know. Fetching the first row may do all the work, or execute the prepared query, or simply fetch the first buffered row. You don't really know. You only know that it's a two-part operation. The RDBMS developers are free to optimize as they see fit.
Do you prefer to let an object lazily perform its heavy-duty analysis,
throwing away the intermediate data structures when new data comes
into the collection? Or do you require the programmer to explicitly
flip the data structure from append-mode into query-mode?
I prefer using data structures that allow me to incrementally add to them with "a little more work" per addition, and to incrementally pull the data I need with "a little more work" per extraction.
Perhaps if you do some "interpolate_at()" call in the upper-right corner of your region, you only need to do calculations involving the points in that upper-right corner,
and it doesn't hurt anything to leave the other 3 quadrants "open" to new additions.
(And so on down the recursive KDTree).
Alas, that's not always possible -- sometimes the only way to add more data is to throw away all the previous intermediate and final results, and re-calculate everything again from scratch.
The people who use the interfaces I design -- in particular, me -- are human and fallible.
So I don't like using objects where those people must remember to do things in a certain way, or else things go wrong -- because I'm always forgetting those things.
If an object must be in the "post-calculation state" before getting data out of it,
i.e. some "do_calculations()" function must be run before the interpolateAt() function gets valid data,
I much prefer letting the interpolateAt() function check if it's already in that state,
running "do_calculations()" and updating the state of the object if necessary,
and then returning the results I expected.
Sometimes I hear people describe such a step as "freezing" the data, "crystallizing" the data, "compiling" it, or "putting the data into an immutable data structure".
One example is converting a (mutable) StringBuilder or StringBuffer into an (immutable) String.
I can imagine that for some kinds of analysis, you expect to have all the data ahead of time,
and pulling out some interpolated value before all the data has been put in would give wrong results.
In that case,
I'd prefer to set things up such that the "add_data()" function fails or throws an exception
if it (incorrectly) gets called after any interpolateAt() call.
I would consider defining a lazily-evaluated "interpolated_point" object that doesn't really evaluate the data right away, but only tells the program that sometime in the future the data at that point will be required.
The collection isn't actually frozen, so it's OK to continue adding more data to it,
up until the point that something actually extracts the first real value from some "interpolated_point" object,
which internally triggers the "do_calculations()" function and freezes the object.
It might speed things up if you know not only all the data, but also all the points that need to be interpolated, all ahead of time.
Then you can throw away data that is "far away" from the interpolated points,
and only do the heavy-duty calculations in regions "near" the interpolated points.
For other kinds of analysis, you do the best you can with the data you have, but when more data comes in later, you want to use that new data in your later analysis.
If the only way to do that is to throw away all the intermediate results and recalculate everything from scratch, then that's what you have to do.
(And it's best if the object automatically handled this, rather than requiring people to remember to call some "clear_cache()" and "do_calculations()" function every time).
You could have a state variable. Have a method for starting the high-level processing, which will only work if the state is SECTION-1. It will set the state to SECTION-2, and then to SECTION-3 when it is done computing. If there's a request to the program to interpolate a given point, it will check whether the state is SECTION-3. If not, it will request the computations to begin, and then interpolate the given data.
This way, you accomplish both - the program will perform its computations at the first request to interpolate a point, but can also be requested to do so earlier. This would be convenient if you wanted to run the computations overnight, for example, without needing to request an interpolation.
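A small sketch of that state machine (names are illustrative; SECTION-1/2/3 map to Accumulating/Computing/Ready here, and Measurement/Point3d are the question's types):
public sealed class StatefulField
{
    private enum State { Accumulating, Computing, Ready }
    private State _state = State.Accumulating;

    public void Add(Measurement m)
    {
        // ... store the measurement; only meaningful while Accumulating ...
    }

    // May be called explicitly (e.g. overnight) or implicitly by the first query.
    public void BeginComputation()
    {
        if (_state != State.Accumulating) return;   // only starts from SECTION-1
        _state = State.Computing;
        // ... the heavy-duty math runs here ...
        _state = State.Ready;
    }

    public Measurement InterpolateAt(Point3d p)
    {
        if (_state != State.Ready)
            BeginComputation();
        // ... cheap per-query interpolation against the prepared structures ...
        return null;
    }
}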
