In this image, I know that the block size is 2^4 words and that there are 2^8 blocks. But how can I tell whether the cache is direct-mapped or not, and what is the total cache capacity?
I'm currently considering an n x n matrix M of 64-bit integer elements stored in main memory in row-major order. I have an L1 data cache of 16KB split into 64B blocks (no L2 or L3). My code is meant to print out each element of the matrix one at a time, traversing the matrix in either row-first or column-first order.
In the case where n = 16 (i.e. 16 x 16 matrix), I've counted 0 cache misses using both row-first order and column-first order since the matrix M fits entirely in the 16KB cache (it never needs to jump to main memory to fetch an element). How would I deal with the case of, say, n = 256 (256 x 256 matrix of 64-bit ints); i.e. when M doesn't fully fit in the cache? Do I count all the ints that don't fit as misses, or can spatial locality be leveraged somehow? Assume the cache is initially empty.
The "0 cache misses" seems to assume you start out with M already in cache. That's already a bit suspicious, but OK.
For the 256x256 case, you need to simulate how the cache behaves. You must have cache misses to bring in the missing entries. Each cache miss brings in not just the requested int, but also the 7 adjacent ints that share its 64B block (a 64B block holds eight 8-byte ints).
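If it helps to make that concrete, here is a minimal miss-counting sketch. It assumes a direct-mapped cache (the question doesn't state the associativity, so that part is an assumption; with a different associativity the column-first count changes), a 16KB cache with 64B lines, and a 256 x 256 matrix of 64-bit ints, starting from an empty cache:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define N          256
#define LINE_SIZE  64                        /* bytes per cache block        */
#define CACHE_SIZE (16 * 1024)               /* 16KB L1 data cache           */
#define NUM_SETS   (CACHE_SIZE / LINE_SIZE)  /* 256 sets, direct-mapped      */

static int64_t M[N][N];

/* Simulate one access to a direct-mapped cache; return 1 on a miss. */
static int access_cache(uintptr_t addr, uintptr_t tags[], int valid[])
{
    uintptr_t block = addr / LINE_SIZE;
    size_t    set   = block % NUM_SETS;
    if (valid[set] && tags[set] == block)
        return 0;                            /* hit  */
    valid[set] = 1;
    tags[set]  = block;
    return 1;                                /* miss */
}

static long count_misses(int row_first)
{
    uintptr_t tags[NUM_SETS];
    int       valid[NUM_SETS];
    long      misses = 0;
    memset(valid, 0, sizeof valid);          /* cache starts empty */

    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            int r = row_first ? i : j;
            int c = row_first ? j : i;
            misses += access_cache((uintptr_t)&M[r][c], tags, valid);
        }
    return misses;
}

int main(void)
{
    printf("row-first misses:    %ld\n", count_misses(1));
    printf("column-first misses: %ld\n", count_misses(0));
    return 0;
}
```

With row-first order every block's eight ints are used consecutively, so it misses once per block; with column-first order a block is typically evicted before its other ints are needed again, which is what the counter makes visible.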
Consider a 2KB direct mapped cache with blocks of size 1 word. As
always, addresses are 32 bits.
How many blocks does the cache contain? 2^7
How many bits long is each tag? (Tags are shown in pink in the class notes.) 23 bits
How many bits long is each cache index? (These are green in the notes.) 7 bits
What is the total size of the cache? (32 + 1 + 23) x 2^7 bits
What percentage of the total size is the overhead?
What is "overhead" here, and how do I work out the percentage of overhead?
Overhead is the tag size, plus any other bits the cache needs to store besides the data itself.
(E.g., an associative cache with LRU replacement would also need to store some bits recording the LRU state, to track which member of the set is next in line for eviction.)
The overhead percentage is simply overhead / total size, as the assignment says (not overhead / data).
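Using the numbers from this example (assuming they are right): each block stores 32 data bits plus 1 valid bit and 23 tag bits, so the per-block overhead is 1 + 23 = 24 bits out of 32 + 1 + 23 = 56 bits total, i.e. 24/56 ≈ 43%. The 2^7 block count cancels, so the percentage is the same per block and for the whole cache.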
I have a Fortran code to test the cache memory of my computer. The code basically compares the time needed to compute the sum of 3 matrices A, B and C of size NxN. N is given as input and all the elements of the matrices are 1.0.
The sum is done by columns and by rows to compare the Gflops (bearing in mind that Fortran stores matrices in memory by columns, not by rows, so: A(1,1), A(2,1), A(3,1), ...).
The loop is repeated several times in order to compute the average time of the sum. The result is stored back in matrix A, so A = A + B + C.
The characteristics of my cache memory are:
hw.l3cachesize: 3145728
hw.l2cachesize: 262144
hw.l1dcachesize: 32768
hw.l1icachesize: 32768
hw.cachelinesize: 64
Since the elements of the matrices are of type real*8, I am expecting to find a drop in the Gflops when the data goes beyond the L1 cache size.
Actually I would also like to know whether I should expect the change at 32 kB or at 64 kB, since the L1i is 32 kB and the L1d is also 32 kB.
For that I assume that the size stored in the cache in bytes is 3*N*N*8 (3 matrices and 8 bytes per element).
When I check my results I am not able to identify this change in Gflops.
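As a quick sanity check on where those transitions should land, here is a small sketch that plugs the question's own working-set estimate (3*N*N*8 bytes) into the cache sizes reported above and solves for N. Only the L1d matters for the data, since the L1i holds instructions, not the matrices:

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Cache sizes reported by sysctl above, in bytes. */
    const struct { const char *name; double bytes; } level[] = {
        { "L1d", 32768.0 },
        { "L2",  262144.0 },
        { "L3",  3145728.0 },
    };

    for (int i = 0; i < 3; i++) {
        /* Working set from the question: 3 matrices * N*N elements * 8 bytes.
         * Solve 3*N*N*8 = cache size for N.                                   */
        double n = sqrt(level[i].bytes / 24.0);
        printf("%-3s (%8.0f bytes) holds the three matrices up to about N = %.0f\n",
               level[i].name, level[i].bytes, n);
    }
    return 0;
}
```

Under that estimate the L1d is exhausted already around N ≈ 37, the L2 around N ≈ 105, and the L3 around N ≈ 362, so the drop you are looking for sits at fairly small N.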
Preface: There are many different design parameters that are important to a cache's overall performance. Listed below are parameters for
different direct-mapped cache designs.
Cache data size: 32 KiB
Cache block size: 2 words
Cache access time: 1 cycle
Question: Calculate the number of bits required for the cache listed above, assuming a 32-bit address. Given that total size, find the
total size of the closest direct-mapped cache with 16-word blocks of
equal size or greater. Explain why the second cache, despite its
larger data size, might provide slower performance than the first
cache.
Here's the formula:
Number of bits in a cache = 2^n x (block size + tag size + valid field size)
Here's what I got: 65536(1+14X(32X2)..
is this correct?
Using: (2^index bits) * (valid bits + tag bits + (data bits * 2^offset bits))
For the first one I get:
total bits = 2^15 * (1 + 14 + (32 * 2^1)) = 2588672 bits
For the cache with 16-word blocks I get:
total bits = 2^13 * (1 + 13 + (32 * 2^4)) = 4308992 bits
The next smallest cache with 16-word blocks and a 32-bit address works out to be 2158592 bits, which is smaller than the first cache.
I'm stuck on the same problem too but I have the answer to the first part.
To calculate the total number of bits required:
You need to convert the KB to words and get the index bits.
Use the answer from part 1 to get your tag bits.
Plug them into this formula:
(2^index bits) * (tag bits + valid bits + data size)
Hint: data size is 64 bits in this case and valid bit is 1. So just find the index and tag bits.
And I don't think your answer is right. I didn't check but I can see you are multiplying 1+14 and (32x2) instead of adding them.
I think the formula you were using is correct. According to my textbook ("Computer Organization and Design: The Hardware/Software Interface", 5th edition), the total number of bits in a direct-mapped cache is:
2^(index bits) * (block size + tag size + valid field size).
Block size was given by the question: 2 words = 64 bits
Tag size: 32 - offset bits - index bits
Valid field size is usually 1 valid bit
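For what it's worth, here is a small sketch of that formula in C. It assumes a byte-addressable 32-bit address space, 4-byte words, one valid bit per block, and that the quoted capacity is the data size only; the index and offset widths are derived from those assumptions, which is exactly where the attempts above diverge, so treat the printed numbers as one possible reading of the problem rather than the official answer:

```c
#include <stdio.h>

/* Integer log2 for power-of-two values. */
static unsigned ilog2(unsigned long x)
{
    unsigned n = 0;
    while (x > 1) { x >>= 1; n++; }
    return n;
}

/* Total bits in a direct-mapped cache:
 *   2^index_bits * (valid bit + tag bits + data bits per block)  */
static unsigned long cache_bits(unsigned long data_bytes,
                                unsigned block_words,
                                unsigned addr_bits)
{
    unsigned long block_bytes = block_words * 4UL;    /* 4-byte words         */
    unsigned long num_blocks  = data_bytes / block_bytes;
    unsigned offset_bits = ilog2(block_bytes);        /* byte offset in block */
    unsigned index_bits  = ilog2(num_blocks);
    unsigned tag_bits    = addr_bits - index_bits - offset_bits;
    return num_blocks * (1UL + tag_bits + block_bytes * 8UL);
}

int main(void)
{
    /* The two configurations discussed above. */
    printf("32 KiB data, 2-word blocks:  %lu bits\n", cache_bits(32UL * 1024, 2, 32));
    printf("32 KiB data, 16-word blocks: %lu bits\n", cache_bits(32UL * 1024, 16, 32));
    return 0;
}
```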
I've tried every kind of reasoning I could possibly come up with, but I don't really understand this plot.
It basically shows the performance of reading and writing arrays of different sizes with different strides.
I understand that for a small stride like 4 bytes I read all the cells in the cache, so I get good performance. But what happens with the 2 MB array and the 4k stride? Or the 4M array and the 4k stride? Why is the performance so bad? Finally, why is it that with the 1 MB array the performance is decent when the stride is 1/8 of the size, gets worse when it is 1/4 of the size, and then is very good at half the size?
Please help me, this thing is driving me mad.
At this link, the code: https://dl.dropboxusercontent.com/u/18373264/membench/membench.c
Your code loops for a given time interval instead of a constant number of accesses, so you're not comparing the same amount of work, and not all cache sizes/strides get the same number of repetitions (so they get a different chance at caching).
Also note that the second loop (the inner for) will probably get optimized away, since you don't use temp anywhere.
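One common way to keep such a read loop from being eliminated is to accumulate what you load into a sink the compiler cannot prove dead. A minimal sketch (the names here are made up for illustration, not taken from membench.c):

```c
#include <stddef.h>

/* Volatile sink: the store to it cannot be optimized away,
 * so the loads feeding it must actually be performed.       */
volatile long sink;

void read_array(const char *arr, size_t size, size_t stride)
{
    long sum = 0;
    for (size_t i = 0; i < size; i += stride)
        sum += arr[i];   /* the memory accesses we want to time */
    sink = sum;          /* publish once, after the timed loop  */
}
```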
EDIT:
Another effect at play here is TLB utilization:
On a 4k-page system, as you grow your stride while it is still < 4k, you get less and less utilization out of each page (finally reaching one access per page at the 4k stride), meaning growing access times, as you'll have to go to the 2nd-level TLB on each access (possibly even serializing your accesses, at least partially).
Since you normalize your iteration count by the stride size, you'll have in general (size / stride) accesses in your innermost loop, times stride repetitions outside. However, the number of unique pages you access differs: for the 2M array with a 2k stride, you'll have 1024 accesses in the inner loop but only 512 unique pages, so 512 * 2k accesses to the L2 TLB. At the 4k stride there would still be 512 unique pages, but 512 * 4k L2 TLB accesses.
For the 1M array case, you'll have 256 unique pages overall, so the 2k stride would have 256 * 2k L2 TLB accesses, and the 4k stride would again have twice as many.
This explains both why there's a gradual performance drop on each line as you approach 4k, and why each doubling of the array size doubles the time for the same stride. The smaller array sizes may still partially enjoy the L1 TLB, so you don't see the same effect there (although I'm not sure why 512k is there).
Now, once you start growing the stride above 4k, you suddenly start benefiting again, since you're actually skipping whole pages. An 8k stride would touch only every other page, taking half as many overall TLB accesses as the 4k stride for the same array size, and so on.
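To make the page accounting above concrete, here is a tiny sketch (assuming 4 KiB pages) that prints, for the 2M array from the example, how many accesses one inner pass makes and how many distinct pages it touches at each stride:

```c
#include <stdio.h>

#define PAGE_SIZE 4096UL   /* assuming 4 KiB pages */

/* Unique pages touched by one pass over `size` bytes with the given stride
 * (both in bytes, powers of two): below the page size every page in the
 * range is hit; above it, whole pages are skipped.                        */
static unsigned long pages_touched(unsigned long size, unsigned long stride)
{
    unsigned long step = stride > PAGE_SIZE ? stride : PAGE_SIZE;
    return size / step;
}

int main(void)
{
    unsigned long size = 2UL * 1024 * 1024;   /* the 2M array from the example */

    for (unsigned long stride = 1024; stride <= 16384; stride *= 2)
        printf("stride %6lu: %5lu accesses per pass, %4lu unique pages\n",
               stride, size / stride, pages_touched(size, stride));
    return 0;
}
```

At the 4k stride every access lands on a fresh page; above 4k the page count starts dropping again, which is the recovery described above.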