Does the hardware prefetcher help with this memory access pattern?

I have two arrays: A with N_A random integers and B with N_B random integers between 0 and (N_A - 1). I use the numbers in B as indices into A in the following loop:
for (i = 0; i < N_B; i++) {
    sum += A[B[i]];
}
Experimenting on an Intel i7-3770 with N_A = 256 million and N_B = 64 million, this loop takes only 0.62 seconds, which corresponds to a memory access latency of about 9 nanoseconds.
As this latency is too small, I was wondering if the hardware prefetcher is playing a role. Can someone offer an explanation?

The HW prefetcher can see through your first level of indirection (B[i]) since these elements are sequential. It's capable of issuing multiple prefetches ahead, so you can assume that the average access into B would hit the caches (either L1 or L2). However, there's no way the prefetcher can predict random addresses (the data stored in B) and prefetch the correct elements from A. You still have to perform a memory access on almost every access to A (disregarding occasional lucky cache hits due to reuse of lines).
The reason you see such low latency is that the accesses into A are not serialized: the CPU can access multiple elements of A simultaneously, so the time doesn't just accumulate. In fact, you're measuring memory bandwidth here (how long it takes to access 64M elements overall), not memory latency (how long it takes to access a single element).
A reasonable "snapshot" of the CPU's memory unit would show several outstanding requests: a few accesses into B[i], B[i+64], ... (the intermediate accesses simply get merged, as each request fetches a 64-byte line), all of which would probably be prefetches reflecting future values of i, intermixed with random accesses to elements of A according to the previously fetched elements of B.
To measure latency, you need each access to depend on the result of the previous one, e.g. by making the content of each element in A the index of the next access.
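A minimal sketch of that dependent-load idea (function names and the sizes used below are illustrative, not from the question): each element of A stores the index of the next element to visit, so no load can issue before the previous one completes.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <numeric>
#include <random>
#include <vector>

// Build a random permutation and link it into a single cycle, so the
// chase visits every element exactly once before repeating.
std::vector<std::uint32_t> make_cycle(std::size_t n, unsigned seed) {
    std::vector<std::uint32_t> order(n);
    std::iota(order.begin(), order.end(), 0u);
    std::shuffle(order.begin(), order.end(), std::mt19937(seed));
    std::vector<std::uint32_t> A(n);
    for (std::size_t i = 0; i < n; i++)
        A[order[i]] = order[(i + 1) % n];  // each element points at the next
    return A;
}

// Serialized pointer chase: the address of each load depends on the value
// returned by the previous load, so timing this measures latency, not BW.
std::uint32_t chase(const std::vector<std::uint32_t>& A, std::size_t steps) {
    std::uint32_t idx = 0;
    for (std::size_t i = 0; i < steps; i++)
        idx = A[idx];
    return idx;  // returning the result foils dead-code elimination
}
```

Timing `chase` over a large array and dividing by `steps` should give a per-access latency far above the 9 ns observed with the independent-load loop.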

The CPU charges ahead in the instruction stream and will juggle multiple outstanding loads at once. The stream looks like this:
load b[0]
load a[b[0]]
add
loop code
load b[1]
load a[b[1]]
add
loop code
load b[2]
load a[b[2]]
add
loop code
...
The iterations are only serialized by the loop code, which runs quickly. All loads can run concurrently. Concurrency is just limited by how many loads the CPU can handle.
I suspect you wanted to benchmark random, unpredictable, serialized memory loads. This is actually pretty hard on a modern CPU. Try to introduce an unbreakable dependency chain:
int lastLoad = 0;
for (i = 0; i < N_B; i++) {
    int load = A[B[i] + (lastLoad & 1)]; // be sure to make A one element bigger
    sum += load;
    lastLoad = load;
}
This requires the previous load to complete before the address of the next load can be computed.

Temporal locality in memory mountain

In the CS:APP textbook, the description of the memory mountain says that increasing the working-set size worsens temporal locality, but I feel like both the size and stride factors affect only spatial locality, since throughput decreases when more data is sparsely stored across the lower-level caches.
Where is temporal locality in play here? As far as I know, it means the same specific memory address is referenced again in the near future as seen in this answer: What is locality of reference?
This graph is produced by sequentially traversing fixed-size elements of an array. The stride parameter specifies the number of elements to be skipped between two sequentially accessed elements. The size parameter specifies the total size of the array (including the elements that may be skipped). The main loop of the test looks like this (you can get the code from here):
for (i = 0; i < size / sizeof(double); i += stride*4) {
    acc0 = acc0 + data[i];
    acc1 = acc1 + data[i+stride];
    acc2 = acc2 + data[i+stride*2];
    acc3 = acc3 + data[i+stride*3];
}
That loop is shown in the book in Figure 6.40. What is not shown or mentioned in the book is that this loop is executed once to warm up the cache hierarchy and then memory throughput is measured for a number of runs. The minimum memory throughput of all the runs (on the warmed up cache) is the one that is plotted.
Both the size and stride parameters together affect temporal locality (but only the stride affects spatial locality). For example, the 32k-s0 configuration has a similar temporal locality as the 64k-s1 configuration because the first access and last access to every line are interleaved by the same number of cache lines. If you hold the size at a particular value and go along the stride axis, some lines that are repeatedly accessed at a lower stride would not be accessed at higher strides, making their temporal locality essentially zero. It's possible to define temporal locality formally, but I'll not do that to answer the question. On the other hand, if you hold the stride at a particular value and go along the size axis, temporal locality for each accessed line becomes smaller with higher sizes. However, performance deteriorates not because of the uniformly lower temporal locality of each accessed line, but because of the larger working set size.
I think the size axis better illustrates the impact of the size of the working set (the amount of memory the loop is going to access during its execution) on execution time than temporal locality. To observe the impact of temporal locality on performance, the memory throughput of the first run of this loop should be compared against that of the second run of the same loop (same size and stride). Temporal locality increases by the same amount for each accessed cache line in the second run of the loop and, if the cache hierarchy is optimized for temporal locality, the throughput of the second run should be better than that of the first. In general, the throughput of each of N sequential invocations of the same loop should be plotted to see the full impact of temporal locality, where N >= 2.
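The first-run-versus-second-run comparison suggested above might be sketched like this (the loop body is the book's measurement loop; the sizes used in any actual test are up to you, and the timing mechanism here is an assumption, since the book uses its own fcyc timers):

```cpp
#include <chrono>
#include <cstddef>
#include <vector>

// Time the book's 4-way-unrolled strided sum once. Calling this twice on
// the same data contrasts a cold first run against a cache-warmed second
// run, which is exactly the temporal-locality effect discussed above.
double timed_sum(const std::vector<double>& data, std::size_t stride,
                 double* out) {
    auto t0 = std::chrono::steady_clock::now();
    double acc0 = 0, acc1 = 0, acc2 = 0, acc3 = 0;
    std::size_t n = data.size();
    for (std::size_t i = 0; i + stride * 3 < n; i += stride * 4) {
        acc0 += data[i];
        acc1 += data[i + stride];
        acc2 += data[i + stride * 2];
        acc3 += data[i + stride * 3];
    }
    *out = acc0 + acc1 + acc2 + acc3;  // keep the result observable
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}
```

With a working set that fits in some level of the cache, the second call should report a noticeably shorter time than the first; plotting run 1 through run N shows the full warming effect.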
By the way, memory mountains on other processors can be found here and here. You can create a 3D mountain plot using this or this script.

Faster memory allocation and freeing algorithm than multiple Free List method

We allocate and free many memory blocks. We use Memory Heap. However, heap access is costly.
For faster memory access allocation and freeing, we adopt a global Free List. As we make a multithreaded program, the Free List is protected by a Critical Section. However, Critical Section causes a bottleneck in parallelism.
To remove the Critical Section, we assign a Free List to each thread, i.e. Thread Local Storage. However, thread T1 always allocates memory blocks and thread T2 always frees them, so the Free List in thread T2 keeps growing, and meanwhile the Free List gives no benefit.
Despite the bottleneck of the Critical Section, we adopted the Critical Section again, with a different method. We prepare several Free Lists, each with its own Critical Section: Free Lists 0 to N-1 and Critical Sections 0 to N-1. We maintain an atomically updated integer that cycles 0, 1, 2, ... N-1, then 0, 1, 2, ... again. For each allocation and free, we read the integer value X, advance it, enter the X-th Critical Section, and access the X-th Free List. However, this is considerably slower than the previous method (Thread Local Storage); the atomic operation becomes slow as the number of threads grows.
Since mutating the integer value non-atomically causes no corruption, we did the mutation non-atomically. However, because the value is sometimes stale, different threads often end up accessing the same Critical Section and Free List. This causes the bottleneck again, though less severely than the previous method.
Instead of the integer value, we used the thread ID hashed into the range 0 to N-1, and performance improved.
I guess there must be much better way of doing this, but I cannot find an exact one. Are there any ideas for improving what we have made?
Dealing with heap memory is a task for the OS. Nothing guarantees you can do a better/faster job than the OS does.
But there are some conditions where you can get a bit of improvement, especially when you know something about your memory usage that is unknown to the OS.
I'm writing down my untested idea here; I hope you'll get some profit from it.
Let's say you have T threads, all of them reserving and freeing memory. The main goal is speed, so I'll try not to use TLS, nor critical sections, nor atomic ops.
If (repeat: if, if, if) the app can fit several discrete sizes of memory blocks (not random sizes, so as to avoid fragmentation and useless holes), then start by asking the OS for a number of these discrete blocks.
For example, you have an array of n1 blocks each of size size1, an array of n2 blocks each of size size2, an array of n3... and so on. Each array is two-dimensional; the second field just stores a used/free flag for the block. If your arrays are very large, it's better to use a dedicated array for the flags (because contiguous memory access is always faster).
Now, someone asks for a block of memory of size sB. A specialized function (or object or whatever) searches the array of blocks of size greater than or equal to sB, and selects a block by looking at the used/free flags. Just before ending this task, the proper block-flag is set to "used".
When two or more threads ask for blocks of the same size, the flag may be corrupted. Using TLS would solve this issue, and so would critical blocking. I think you can set a bool flag at the beginning of the search into the flags-array that makes the other threads wait until the flag changes, which only happens after the block-flag changes. In pseudocode:
MemoryGetter(sB)
{
    // select which array depending on 'sB'
    for (i = 0; i < numOfArrays; i++)
        if (sizeOfArr(i) >= sB)
            arrMatch = i
            break // exit for

    // wait if another thread wants a block from the same arrMatch array
    while (searching(arrMatch) == true)
        ; // wait

    // block other threads wanting a block from the same arrMatch array
    searching(arrMatch) = true

    // get the first free block
    for (i = 0; i < numOfBlocks; i++)
        if (arrOfUsed(arrMatch, i) != true)
            selectedBlock = addressOf(....)
            // mark the block as used
            arrOfUsed(arrMatch, i) = true
            break // exit for

    // allow other threads
    searching(arrMatch) = false

    return selectedBlock // NOTE: selectedBlock==NULL means no free block
}
Freeing a block is easier, just mark it as free, no thread concurrency issue.
Dealing with no free blocks is up to you (wait, use a bigger block, ask OS for more, etc).
Note that the whole memory is reserved from the OS at app start, which can be a problem.
If this idea makes your app faster, let me know. What I can say for sure is that the memory used will be greater than with normal OS requests, but not by much if you choose "good" sizes, the ones most used.
Some improvements can be done:
Cache the last freed block (per size) so as to avoid the search.
Start with not that many blocks, and ask the OS for more memory only when needed. Play with the 'number of blocks' for each size depending on your app. Find the optimal case.
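As a rough illustration, the per-size-class search above might look like this in C++ for a single size class. This is a sketch under stated assumptions: the block count and size are arbitrary, and the pseudocode's plain bool "searching" flag is not safe between threads, so this version uses an atomic test-and-set for the hand-off instead.

```cpp
#include <atomic>
#include <cstddef>

constexpr std::size_t kNumBlocks = 1024;  // assumed pool size
constexpr std::size_t kBlockSize = 64;    // assumed discrete block size

struct Pool {
    alignas(16) unsigned char blocks[kNumBlocks][kBlockSize];
    std::atomic<bool> used[kNumBlocks] = {};          // per-block used/free flags
    std::atomic_flag searching = ATOMIC_FLAG_INIT;    // guards the search phase

    void* alloc() {
        // Spin until we own the search phase; test-and-set replaces the
        // racy plain-bool flag from the pseudocode.
        while (searching.test_and_set(std::memory_order_acquire))
            ;
        void* block = nullptr;
        for (std::size_t i = 0; i < kNumBlocks; i++) {
            if (!used[i].load(std::memory_order_relaxed)) {
                used[i].store(true, std::memory_order_relaxed);
                block = blocks[i];
                break;
            }
        }
        searching.clear(std::memory_order_release);
        return block;  // nullptr means no free block
    }

    void free_block(void* p) {
        std::size_t i =
            (static_cast<unsigned char*>(p) - &blocks[0][0]) / kBlockSize;
        used[i].store(false, std::memory_order_release);  // no lock needed
    }
};
```

One pool per discrete size keeps the contention scoped to a size class, matching the answer's "one flag per flags-array" idea; freeing stays lock-free, as the answer notes.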

Why is iterating through the outer dimension with an outer loop faster than with an inner loop?

Let's consider a matrix
std::vector<std::vector<int>> matrix;
where each row has the same length. I will call each std::vector<int> a column.
Why is iterating through the outer dimension with an outer loop faster than with an inner loop?
First program: Iterating through columns first
int sum = 0;
for (int col = 0; col < matrix.size(); col++)
{
    for (int row = 0; row < matrix[0].size(); row++)
    {
        sum += matrix[col][row];
    }
}
Second program: Iterating through rows first
int sum = 0;
for (int row = 0; row < matrix[0].size(); row++) // Assuming there is at least one element in matrix
{
    for (int col = 0; col < matrix.size(); col++)
    {
        sum += matrix[col][row];
    }
}
Here are my guesses
Jumping around the memory
I have a vague intuition that jumping around in memory would take more time than reading memory that is contiguous, but I thought RAM access takes constant time. Plus, there are no moving parts in DRAM, and I don't understand why it would be faster to read two ints if they are contiguous.
Bus width
An int takes either 2 or 4 bytes (it may vary depending on the data model). With an 8-byte-wide bus, I can imagine that if the ints are contiguous in memory, then 4 ints (depending on the data model) could be sent to the processor every clock cycle, while only one int could be sent per cycle if they are not contiguous.
If this is the case, then if the matrix contained long long int, which are 8 bytes long, we would not see any difference between the two programs anymore (I haven't tested it).
Cache
I am not sure why, but I feel like the cache could be the reason for why the second program is slower. The effect with the cache might be related to the bus size argument I talked about just above. It is possible that only memory that is contiguous in the DRAM could load in the cache but I don't know why it would be the case.
Yeah, it's cache.
There's a strange coincidence1 that when programs access data in memory, they often access nearby data immediately or shortly after.
CPU designers realized this and thus design caches to load a whole chunk of memory at once.
So when you access matrix[0][0], much, if not all, of the rest of matrix[0] is pulled into cache along with the single element at matrix[0][0], whereas there's a good chance that nothing from matrix[20] made it into cache.
Note that this is dependent on your matrix consisting of contiguous arrays, at least at the last dimension. If you were using, say, linked lists, you would probably2 not see much difference, and would instead experience the slower performance regardless of access order.
The reason is that cache loads contiguous blocks. Consider if matrix[0][0] refers to the memory address 0x12340000. Accessing that would load that byte, plus the next 127 bytes into cache (exact amount depends on the cpu). So you'd have every byte from 0x12340000 to 0x1234007F in cache.
In a contiguous array, your next element at 0x12340004 is already in cache. But linked lists aren't contiguous, the next element could be virtually anywhere. If it's outside of that 0x12340000 to 0x1234007F range, you haven't gained anything.
1 It's really not that strange of a coincidence if you think about it. Using local stack variables? Accesses to the same area of memory. Iterating across a 1-dimensional array? Lots of accesses to the same area of memory. Iterating across a 2-dimensional array with the outer dimension in the outer loop and the inner arrays in the inner, nested loop? Basically iterating over a bunch of 1-dimensional arrays.
2 It's possible you might luck out and have your linked list's nodes all right next to each other, but that seems like a very unlikely scenario. And you still won't fit as many elements in cache, because the pointers to the next element take up space, and there will be an additional, small performance hit from the indirection.
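To see the effect yourself, the two traversal orders from the question can be folded into one function, a sketch under the assumption of a rectangular vector-of-vectors; timing the two calls (e.g. with <chrono>) on a matrix larger than your last-level cache should show the contiguous order winning:

```cpp
#include <cstddef>
#include <vector>

// Sum the matrix in either order. contiguous == true walks each inner
// vector front to back (cache-friendly); contiguous == false jumps
// between inner vectors on every access (cache-hostile).
long long sum_matrix(const std::vector<std::vector<int>>& m, bool contiguous) {
    long long sum = 0;
    std::size_t cols = m.size(), rows = m[0].size();
    if (contiguous) {
        for (std::size_t col = 0; col < cols; col++)
            for (std::size_t row = 0; row < rows; row++)
                sum += m[col][row];
    } else {
        for (std::size_t row = 0; row < rows; row++)
            for (std::size_t col = 0; col < cols; col++)
                sum += m[col][row];
    }
    return sum;
}
```

Both calls return the same sum, so any timing difference is attributable purely to the access order, i.e. to the cache behavior described above.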
When going column-row ([C][R]), you are counting like so: [0][0] + [0][1] + [0][2] ... and so on, so you are not switching between the inner arrays.
When going row-column, you are counting like so: [0][0] + [1][0] + [2][0] ... This way you switch between the inner arrays on every access, which takes longer in DRAM.
2D arrays are handled like so: new Array{array1, array2, array3} - arrays inside of an array. Counting down one inner array (C-R) is faster than switching arrays and counting the element at the same row position (R-C).
Each array is a sectioned-off piece of memory, so when you have 2D arrays and count (R-C), you are jumping around in the DRAM, which is slower.
It does not matter that there are no mechanical parts in DRAM; jumping around is still slower. For comparison, DRAM has no mechanical parts either, yet it is slower than SRAM (while being much denser, of course) because signals have to travel farther through the larger structure of extra transistors and capacitors.
Edit: after reading the other answer, I would add that when you iterate (C-R), the whole inner array can be loaded into cache for quick access. But when going (R-C), loading a new array into cache for each element is inefficient, or may not happen at all.

Why does the speed of memcpy() drop dramatically every 4KB?

I tested the speed of memcpy() and noticed that the speed drops dramatically at i*4KB. The result is as follows: the Y-axis is the speed (MB/second) and the X-axis is the size of the buffer for memcpy(), increasing from 1KB to 2MB. Subfigures 2 and 3 detail the parts from 1KB-150KB and 1KB-32KB.
Environment:
CPU : Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
OS : 2.6.35-22-generic #33-Ubuntu
GCC compiler flags : -O3 -msse4 -DINTEL_SSE4 -Wall -std=c99
I guess it must be related to caches, but I can't find a reason from the following cache-unfriendly cases:
Why is my program slow when looping over exactly 8192 elements?
Why is transposing a matrix of 512x512 much slower than transposing a matrix of 513x513?
The performance degradation in those two cases is caused by unfriendly loops which read scattered bytes into the cache, wasting the rest of the space of each cache line.
Here is my code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>

void memcpy_speed(unsigned long buf_size, unsigned long iters){
    struct timeval start, end;
    unsigned char * pbuff_1;
    unsigned char * pbuff_2;

    pbuff_1 = malloc(buf_size);
    pbuff_2 = malloc(buf_size);

    gettimeofday(&start, NULL);
    for(int i = 0; i < iters; ++i){
        memcpy(pbuff_2, pbuff_1, buf_size);
    }
    gettimeofday(&end, NULL);

    printf("%5.3f\n", ((buf_size*iters)/(1.024*1.024)) /
           ((end.tv_sec - start.tv_sec)*1000*1000 + (end.tv_usec - start.tv_usec)));

    free(pbuff_1);
    free(pbuff_2);
}
UPDATE
Considering suggestions from @usr, @ChrisW and @Leeor, I redid the test more precisely, and the graph below shows the results. The buffer size runs from 26KB to 38KB, and I tested every 64B (26KB, 26KB+64B, 26KB+128B, ..., 38KB). Each test loops 100,000 times and takes about 0.15 seconds. The interesting thing is that the drop occurs not only exactly at 4KB boundaries, but also at 4*i+2 KB, with a much smaller amplitude.
PS
@Leeor offered a way to eliminate the drop: adding a 2KB dummy buffer between pbuff_1 and pbuff_2. It works, but I am not sure about Leeor's explanation.
Memory is usually organized in 4k pages (although there's also support for larger sizes). The virtual address space your program sees may be contiguous, but it's not necessarily the case in physical memory. The OS, which maintains a mapping of virtual to physical addresses (in the page map) would usually try to keep the physical pages together as well but that's not always possible and they may be fractured (especially on long usage where they may be swapped occasionally).
When your memory stream crosses a 4k page boundary, the CPU needs to stop and go fetch a new translation - if it already saw the page, it may be cached in the TLB, and the access is optimized to be the fastest, but if this is the first access (or if you have too many pages for the TLBs to hold on to), the CPU will have to stall the memory access and start a page walk over the page map entries - that's relatively long as each level is in fact a memory read by itself (on virtual machines it's even longer as each level may need a full pagewalk on the host).
Your memcpy function may have another issue - when first allocating memory, the OS would just build the pages to the pagemap, but mark them as unaccessed and unmodified due to internal optimizations. The first access may not only invoke a page walk, but possibly also an assist telling the OS that the page is going to be used (and stores into, for the target buffer pages), which would take an expensive transition to some OS handler.
In order to eliminate this noise, allocate the buffers once and perform several repetitions of the copy, calculating the amortized time. That, on the other hand, would give you "warm" performance (i.e. after the caches have warmed up), so you'll see the cache sizes reflected in your graphs. If you want a "cold" effect while not suffering from paging latencies, you might want to flush the caches between iterations (just make sure you don't time that).
EDIT
Reread the question, and you seem to be doing a correct measurement. The problem with my explanation is that it should show a gradual increase after 4k*i, since on every such drop you pay the penalty again, but then should enjoy the free ride until the next 4k. It doesn't explain why there are such "spikes", after which the speed returns to normal.
I think you are facing an issue similar to the critical stride issue linked in your question: when your buffer size is a nice round 4k, both buffers align to the same sets in the cache and thrash each other. Your L1 is 32k, so it doesn't seem like an issue at first, but assuming the L1 data cache has 8 ways, it in fact wraps around to the same sets every 4k, and you have 2*4k blocks with exactly the same alignment (assuming the allocation was done contiguously), so they overlap on the same sets. It's enough that the LRU doesn't work exactly as you expect, and you'll keep having conflicts.
To check this, I'd try to malloc a dummy buffer between pbuff_1 and pbuff_2, make it 2k large, and hope that it breaks the alignment.
EDIT2:
Ok, since this works, it's time to elaborate a little. Say you assign two 4k arrays at ranges 0x1000-0x1fff and 0x2000-0x2fff. Set 0 in your L1 will contain the lines at 0x1000 and 0x2000, set 1 will contain 0x1040 and 0x2040, and so on. At these sizes you don't have any issue with thrashing yet; they can all coexist without overflowing the associativity of the cache. However, every time you perform an iteration you have a load and a store accessing the same set - I'm guessing this may cause a conflict in the HW. Worse, you'll need multiple iterations to copy a single line, meaning that you have a congestion of 8 loads + 8 stores (fewer if you vectorize, but still a lot), all directed at the same poor set. I'm pretty sure there are a bunch of collisions hiding there.
I also see that the Intel optimization guide has something to say specifically about this (see 3.6.8.2):
4-KByte memory aliasing occurs when the code accesses two different
memory locations with a 4-KByte offset between them. The 4-KByte
aliasing situation can manifest in a memory copy routine where the
addresses of the source buffer and destination buffer maintain a
constant offset and the constant offset happens to be a multiple of
the byte increment from one iteration to the next.
...
loads have to wait until stores have been retired before they can
continue. For example at offset 16, the load of the next iteration is
4-KByte aliased current iteration store, therefore the loop must wait
until the store operation completes, making the entire loop
serialized. The amount of time needed to wait decreases with larger
offset until offset of 96 resolves the issue (as there is no pending
stores by the time of the load with same address).
I expect it's because:
When the block size is a 4KB multiple, then malloc allocates new pages from the O/S.
When the block size is not a 4KB multiple, then malloc allocates a range from its (already allocated) heap.
When the pages are allocated from the O/S then they are 'cold': touching them for the first time is very expensive.
My guess is that, if you do a single memcpy before the first gettimeofday then that will 'warm' the allocated memory and you won't see this problem. Instead of doing an initial memcpy, even writing one byte into each allocated 4KB page might be enough to pre-warm the page.
Usually when I want a performance test like yours I code it as:
// Run in once to pre-warm the cache
runTest();
// Repeat
startTimer();
for (int i = count; i; --i)
runTest();
stopTimer();
// use a larger count if the duration is less than a few seconds
// repeat test 3 times to ensure that results are consistent
Since you are looping many times, I think arguments about pages not being mapped are irrelevant. In my opinion, what you are seeing is the effect of the hardware prefetcher not being willing to cross a page boundary, so as not to cause (potentially unnecessary) page faults.

are array initialization operations cached as well

If you are not reading a value but assigning one, for example:
int[] array = new int[5];
for (int i = 0; i < array.length; i++) {
    array[i] = 2;
}
does the array still come into the cache? Can't the CPU bring the array elements one by one into its registers, do the assignment, and then write the updated value to main memory, bypassing the cache since it's not necessary in this case?
The answer depends on the cache protocol; I answered assuming write-back, write-allocate.
The array will still come into the cache, and it will make a difference. When a cache block is retrieved from memory, it holds more than just a single memory location (the actual size depends on the design of the cache). Since arrays are stored contiguously in memory, pulling in array[0] will pull in the rest of the block, which will include (at least some of) array[1], array[2], array[3] and array[4]. This means the following accesses will not have to go to main memory.
Also, after all this is done, the values will NOT be written to memory immediately (under write-back); instead, the CPU keeps using the cache as the memory for reads/writes until that cache block is evicted, at which point the values are written back to main memory.
Overall this is preferable to going to memory every time because the cache is much faster and the chances are the user is going to use the memory he just set relatively soon.
If the protocol is write-through, no-allocate, then it won't bring the block into the cache and will write straight through to main memory.
