Computer Architecture: Cache Transfer Analysis - caching

This is a question on my exam study guide and we have not yet covered how to calculate data transfer. Any help would be greatly appreciated.
Given is an 8-way set associative level 2 data cache with a capacity of 2 MByte (1 MByte = 2^20 Byte)
and a block size of 128 Bytes. The cache is connected to the main memory by a shared 32-bit address and
data bus. The cache and the RISC CPU are connected by separate address and data buses, each with a
width of 32 bit. The CPU is executing a load word instruction.
a) How much user data is transferred from the main memory to the cache in case of a cache miss?
b) How much user data is transferred from the cache to the CPU in case of a cache miss?

You first need the address breakdown for this cache:
Number of cache blocks: 2 MB / 128 B = 16384 blocks
Number of sets: 16384 blocks / 8 ways = 2048 sets -> 11 index bits
Block (line) offset bits: log2(128 B) = 7 bits
Address width: 32 bits, so tag bits: 32 - 11 - 7 = 14 bits
So the cache line size is 128 B - a line here is the same thing as a block, which is already given, but it's good to know the breakdown above.
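If it helps, here is a minimal standalone C++ sketch (mine, not part of the original answer) that reproduces that breakdown from the given cache parameters:

#include <cstdio>

// integer log2 for exact powers of two
unsigned log2u(unsigned x) { unsigned n = 0; while (x > 1) { x >>= 1; ++n; } return n; }

int main() {
    const unsigned addr_bits   = 32;
    const unsigned cache_bytes = 2u << 20;  // 2 MByte
    const unsigned block_bytes = 128;
    const unsigned ways        = 8;

    const unsigned blocks = cache_bytes / block_bytes;    // 16384
    const unsigned sets   = blocks / ways;                // 2048
    const unsigned offset = log2u(block_bytes);           // 7
    const unsigned index  = log2u(sets);                  // 11
    const unsigned tag    = addr_bits - index - offset;   // 14

    std::printf("blocks=%u sets=%u offset=%u index=%u tag=%u\n",
                blocks, sets, offset, index, tag);
}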
a) How much user data is transferred from the main memory to the cache
in case of a cache miss?
In your problem, the L2 cache is the last-level cache before main memory. So if you miss in the L2 cache (you don't find the line you are looking for), you need to fetch the line from main memory, which means 128 B of user data will be transferred from main memory. The fact that the address bus and data bus are shared does not influence the amount of data transferred, only the number of bus cycles needed to move it.
b) How much user data is transferred from the cache to the CPU in case
of a cache miss?
If the request reached the L2 cache, that means you missed in the L1 cache. So the L2 has to transfer a full L1 cache line to L1. Assuming the L1 line size is also 128 B, 128 B of data will go from L2 to L1. The CPU will then use only a fraction of that line, namely the word requested by the load instruction that caused the miss. Whether that line is also evicted from L2 depends on the inclusion policy (inclusive vs. exclusive), which should have been stated in the problem.
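For completeness, here's a back-of-the-envelope sketch of the bus traffic implied by both answers, assuming one 4-byte transfer per cycle on each 32-bit data bus and a 128-byte L1 line (neither of which the problem states explicitly):

#include <cstdio>

int main() {
    const unsigned line_bytes = 128;  // line fetched on the L2 miss
    const unsigned bus_bytes  = 4;    // a 32-bit data bus moves 4 bytes per transfer

    std::printf("memory -> L2 : %u bytes = %u bus transfers\n",
                line_bytes, line_bytes / bus_bytes);
    std::printf("L2 -> CPU/L1 : %u bytes = %u bus transfers\n",
                line_bytes, line_bytes / bus_bytes);
}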

Related

Can all of L2/L3 cache be used by data? If so, why does the Graviton 3 bandwidth plot drop off after half the L2/L3 size, but only gradually?

Consider Graviton3, for example. It's a 64-core CPU with per-core caches of 64 KiB L1d and 1 MiB L2, and a shared L3 of 64 MiB across all cores. The RAM bandwidth per socket is 307 GB/s (source).
In this plot (source),
we see that the all-cores bandwidth drops off to roughly half when the data exceeds 4 MiB. This makes sense: 64 x 64 KiB = 4 MiB is the combined size of the L1 data caches.
But why does the next cliff begin at 32 MiB? And why is the drop-off so gradual there? The private L2 caches of the 64 cores add up to 64 MiB, the same as the shared L3 size.
It looks from the plot like they may not have tested any sizes between 32M and 64M. Looks like a straight line between those points on all 3 CPUs.
Since 64M is the total size of both L2 and L3, I'd expect a test like this to have slowed most of the way down at 64M. As Brendan says, page tables and a bit of code will take space, competing with the actual intended test data. If the benchmark loop is tight, stack won't come into play, except for interrupt handling.
Once you're evicting anything from a working set slightly larger than the cache, you often evict almost everything before getting back to it, depending on pseudo-LRU luck. I'd expect a test size of 48 or even 56 MiB to be a lot closer to the 32 MiB data point than the 64 MiB data point.
Can all of L2/L3 cache be used by data?
In theory, yes; but only if there's no "non-data" (code) in the cache, only if you count "all data" (and don't just count a process's data while ignoring things like the stack and page tables), and only if there aren't any aliasing problems.
But why does the next cliff begin at 32MB? And why is the drop-off so gradual there?
For a fully associative cache I'd expect a sudden drop-off at/near 32 MiB. However, large caches are almost never fully associative, as it costs way too much to find anything in the cache.
As associativity decreases, the chance of conflicts increases. For example, for an 8-way associative 64 MiB cache the pathological case is that everything conflicts and you're only able to effectively use 8 MiB of it.
More specifically, for a 64 MiB cache (with unknown associativity), and an "assumed Linux" environment that lacks support for cache coloring, it's reasonable to expect a smooth drop off that ends at 64 MiB.
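To put numbers on that, here is a small sketch with my own assumed line size of 64 B (the answer above doesn't state one), showing how an 8-way 64 MiB cache divides into sets and the address stride at which everything maps to the same set:

#include <cstdio>

int main() {
    const unsigned long cache_bytes = 64ul << 20;  // 64 MiB
    const unsigned      ways        = 8;
    const unsigned      line_bytes  = 64;          // assumed line size

    const unsigned long sets       = cache_bytes / (ways * line_bytes);  // 131072
    const unsigned long way_stride = sets * line_bytes;                  // 8 MiB

    std::printf("sets = %lu, one way = %lu MiB\n", sets, way_stride >> 20);
    std::printf("addresses %lu MiB apart land in the same set; only %u such "
                "lines can live in the cache at once\n", way_stride >> 20, ways);
}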
Just to be clear, on a running Graviton 3 in AWS, an lscpu gives me 32MiB for L3 and not 64 MiB.
Caches (sum of all):
L1d: 4 MiB (64 instances)
L1i: 4 MiB (64 instances)
L2: 64 MiB (64 instances)
L3: 32 MiB (1 instance)
The original question is assuming an L3 of 64 MiB across all cores.

Mapping 512KB main memory into 1KB cache homework question

I'm sorry if I made an error in posting this. Please let me know if I need to change anything.
I've received my computer architecture homework back and I missed this question. My professor's explanation didn't make sense to me, and I disagree with what he told me, so I am here asking what you guys think.
Here is the question:
A computer uses 16-bit memory addresses. Main memory is 512KB, and the cache is 1KB with 32B per block. Given each of the following mapping functions, calculate the number of bits in each field of the memory address.
Here is how I worked through the direct mapping part of the problem:
Cache memory: 1KB (2^10), 16-bit memory addresses (1 word = 2B) -> 1024B/2B = 512 words, 16 words per block (32B) -> 512/16 = 32 cache memory blocks.
Main memory: 512 KB (2^19), 16-bit memory addresses (1 word = 2B) -> 524288B/2B = 256K words, 16 words per block (32B) -> 256K/16 = 16384 or 16K main memory blocks.
I understand the word field as such: 32B per block allows for 16 16-bit words per block. This (I believe) supports that: 1 word = 16 bits = 2 B -> 32B / 2B = 16 words in each block. Since 16 = 2^4, that's 4 bits for determining which word in the block, leaving 12 bits for the tag and block bits in the memory address.
Now, in order to map 16K main memory blocks directly into 32 cache memory blocks, there will have to be 512 main memory blocks mapped to each cache memory block (16K / 32 = 512).
Here is where I am confused. Doesn't this require 9 tag bits, as 2^9 = 512 (main memory blocks possibly mapped into one cache memory block)?
For the block bits, which point to a particular block in the cache, 5 bits are required: 2^5 = 32, the number of blocks in cache memory.
That would be 4 word + 9 tag + 5 block = 18 bits in the memory address, which is more than the 16 available.
Here is my professor's answer for this question:
2^5 = 32 -> 5 Word bits
(1KB)/(32B) = 32 blocks -> 5 Block bits
16 - 5 - 5 = 6 Tag bits
I did not realize I could simply subtract the required block and word bits to get the tag bits. But it still doesn't make sense to me: 2^6 = 64 would mean only 64 main memory blocks mapped to each cache block, and 64 * 32 gives only 2048 blocks, not 16K. I can't wrap my head around this. Can someone please help?
Okay, the terminology that I learnt is slightly different, but the principle should be the same for this explanation.
A cache has multiple sets (think of each set as a cell). Each set holds either one cache line (containing one block of data) in a direct-mapped cache, or multiple cache lines (each containing one block of data) in an n-way set-associative cache.
To map main memory blocks into the cache, the main memory address (16 bits) is divided into 3 fields: tag, index bits and offset bits. A memory cell is 1 byte and a block is made up of several cells.
Offset bits are used to access the individual bytes of a memory block. Think of them as the offset on top of the block base address to get the byte you want (I assume your memory is byte-addressable rather than word-addressable, since only being able to access 2-byte words would be inflexible). Your prof/textbook calls these the word bits. Hence, if a block has 32 bytes, log2(block size) = 5 bits are needed to access the individual cells of the mapped block.
Index bits (called block bits in a direct-mapped cache, since the number of sets equals the number of blocks in the cache) identify which set / cache line / cache block the main memory block is mapped to. There are 1KB / 32B = 32 cache blocks in the cache. As direct mapping is used, each set contains only one cache block, so there are 32 sets in this cache. Thus, to select the correct set, 5 index bits are needed.
The tag is used to determine whether the data block sitting in the cache is the one we are actually looking for from main memory. Since the main memory address is 16 bits and we already know the index and offset fields, it is easy to deduce that the tag needs 16 - 5 - 5 = 6 bits. How many main memory blocks end up sharing a cache block is not really a concern here, because the block size and cache size (and hence the number of sets) are what determine the field widths. Note also that a 16-bit address can only reach 2^16 B = 64 KB = 2048 blocks of 32 B, and 2048 / 32 = 64 = 2^6 blocks per cache block, which matches the 6 tag bits; the stated 512 KB main memory simply cannot be fully addressed with 16 bits, and that inconsistency in the problem statement is where your 512-vs-64 confusion comes from.
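A small standalone sketch (not part of the original answer) that splits a 16-bit address into the tag / block (index) / word (offset) fields for this 1 KB direct-mapped cache with 32 B blocks:

#include <cstdint>
#include <cstdio>

int main() {
    const unsigned offset_bits = 5;                              // 32 B per block
    const unsigned index_bits  = 5;                              // 1 KB / 32 B = 32 blocks
    const unsigned tag_bits    = 16 - index_bits - offset_bits;  // 6

    const std::uint16_t addr = 0xBEEF;  // arbitrary example address
    const unsigned offset = addr & ((1u << offset_bits) - 1);
    const unsigned index  = (addr >> offset_bits) & ((1u << index_bits) - 1);
    const unsigned tag    = addr >> (offset_bits + index_bits);

    std::printf("addr=0x%04X -> tag=0x%02X (%u bits), block=%u, byte offset=%u\n",
                addr, tag, tag_bits, index, offset);
}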

When is it not possible to exploit spatial locality in cache?

We are given a processor whose instructions operate on 8-byte operands and whose instructions are also encoded using 8 bytes. We are using a 16 kilobyte,
4-way set associative cache that contains 1024 sets. The cache has 4 * 1024 = 4096 cache lines in total. That means each cache line is 16 KB / 4096 = 4 bytes, so each operand and each instruction needs to be stored in two cache lines, which will require 2 accesses to the cache for each load/store operation and each instruction fetch. We are told that the cache cannot exploit spatial locality, but why? What does spatial locality mean in this context?
So your machine doesn't have single byte loads/stores at all?
If every load is a full cache line (or multiple whole cache lines), then bringing a line into cache for one load will never benefit another load that's spatially nearby.
(Unless you have hardware prefetching to detect sequential accesses and start fetching adjacent cache lines...)
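A quick sketch of the arithmetic behind that, using the 4-byte line size derived in the question:

#include <cstdio>

int main() {
    const unsigned cache_bytes   = 16 * 1024;
    const unsigned ways          = 4;
    const unsigned sets          = 1024;
    const unsigned line_bytes    = cache_bytes / (ways * sets);  // 4 bytes
    const unsigned operand_bytes = 8;                            // operands and instructions

    std::printf("line size = %u bytes\n", line_bytes);
    std::printf("cache accesses per 8-byte load/store/fetch = %u\n",
                (operand_bytes + line_bytes - 1) / line_bytes);  // 2
    std::printf("neighbouring bytes brought in for free per access = %u\n",
                operand_bytes >= line_bytes ? 0u : line_bytes - operand_bytes);
}

With 4-byte lines every access already consumes whole lines, so a fill never brings in neighbouring data that a later access could reuse - that is the spatial locality this cache cannot exploit.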

Virtually addressed Cache

Relation between cache size and page size
How does the associativity and page size constrain the Cache size in virtually addressed cache architecture?
Particularly I am looking for an example on the following statement:
If C ≤ (page_size x associativity), the cache index bits come only
from the page offset (the same in the virtual address and the physical address).
Intel CPUs have used 8-way associative 32kiB L1D with 64B lines for many years, for exactly this reason. Pages are 4k, so the page offset is 12 bits, exactly the same number of bits that make up the index and offset within a cache line.
See the "L1 also uses speed tricks that wouldn't work if it was larger" paragraph in this answer for more details about how it lets the cache avoid aliasing problems like a PIPT cache, but still be as fast as a VIPT cache.
The idea is that the virtual address bits below the page offset are already physical address bits. So a VIPT cache that works this way is more like a PIPT cache with free translation of the index bits.
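As a sanity check on the Intel example above (32 KiB, 8-way, 64 B lines, 4 KiB pages), here is a small sketch of the condition:

#include <cstdio>

unsigned log2u(unsigned x) { unsigned n = 0; while (x > 1) { x >>= 1; ++n; } return n; }

int main() {
    const unsigned cache_bytes = 32 * 1024;
    const unsigned ways        = 8;
    const unsigned line_bytes  = 64;
    const unsigned page_bytes  = 4096;

    const unsigned sets              = cache_bytes / (ways * line_bytes);  // 64
    const unsigned index_plus_offset = log2u(sets) + log2u(line_bytes);    // 6 + 6 = 12
    const unsigned page_offset_bits  = log2u(page_bytes);                  // 12

    std::printf("C <= page_size * associativity? %s (%u <= %u)\n",
                cache_bytes <= page_bytes * ways ? "yes" : "no",
                cache_bytes, page_bytes * ways);
    std::printf("index+offset bits = %u, page offset bits = %u\n",
                index_plus_offset, page_offset_bits);
}

Since the index and line-offset bits fit entirely inside the 12-bit page offset, the cache can be indexed with virtual-address bits that are guaranteed to equal the physical ones.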

How much data is loaded in to the L2 and L3 caches?

If I have this class:
class MyClass {
public:   // members need to be public (or MyClass a struct) for the loop below to read them
    short a;
    short b;
    short c;
};
and I have this code performing calculations on the above:
#include <vector>

std::vector<MyClass> vec;
int sum = 0;
// ... vec is assumed to be filled in elsewhere ...
for (const auto& x : vec) {
    sum = x.a * (3 + x.b) / x.c;  // the original wrote vec.a / vec.b / vec.c, which doesn't compile
}
I understand the CPU only loads the very data it needs from the L1 cache, but when the L1 cache retrieves data from the L2 cache it loads a whole "cache line" (which could include a few bytes of data it doesn't need).
How much data does the L2 cache load from the L3 cache, and the L3 cache load from main memory? Is it defined in terms of pages and if so, how would this answer differ according to different L2/L3 cache sizes?
L2 and L3 caches also have cache lines that are smaller than a virtual memory system page. The size of L2 and L3 cache lines is greater than or equal to the L1 cache line size, not uncommonly being twice that of the L1 cache line size.
For recent x86 processors, all caches use the same 64-byte cache line size. (Early Pentium 4 processors had 64-byte L1 cache lines and 128-byte L2 cache lines.)
IBM's POWER7 uses 128-byte cache blocks in L1, L2, and L3. (However, POWER4 used 128-byte blocks in L1 and L2, but sectored 512-byte blocks in the off-chip L3. Sectored blocks provide a valid bit for subblocks. For L2 and L3 caches, sectoring allows a single coherence size to be used throughout the system.)
Using a larger cache line size in last level cache reduces tag overhead and facilitates long burst accesses between the processor and main memory (longer bursts can provide more bandwidth and facilitate more extensive error correction and DRAM chip redundancy), while allowing other levels of cache and cache coherence to use smaller chunks which reduces bandwidth use and capacity waste. (Large last level cache blocks also provide a prefetching effect whose cache polluting issues are less severe because of the relatively high capacity of last level caches. However, hardware prefetching can accomplish the same effect with less waste of cache capacity.) With a smaller cache (e.g., typical L1 cache), evictions happen more frequently so the time span in which spatial locality can be exploited is smaller (i.e., it is more likely that only data in one smaller chunk will be used before the cache line is evicted). A larger cache line also reduces the number of blocks available, in some sense reducing the capacity of the cache; this capacity reduction is particularly problematic for a small cache.
It depends somewhat on the ISA and microarchitecture of your platform. Recent x86-64 based microarchitectures use 64 byte lines in all levels of the cache hierarchy.
Typically signed shorts will require two bytes each, meaning that MyClass will need 6 bytes in addition to any class overhead. If your C++ implementation stores the vector<> contiguously like an array, you should get about 10 MyClass objects per 64-byte line. Provided the vector<> is the right length, you won't load much garbage.
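A quick standalone sketch you can run to confirm the objects-per-line estimate (padding is up to the ABI, but sizeof(MyClass) is typically 6 here):

#include <cstddef>
#include <cstdio>

class MyClass {
public:
    short a;
    short b;
    short c;
};

int main() {
    const std::size_t line_bytes = 64;  // typical x86-64 cache line size
    std::printf("sizeof(MyClass) = %zu bytes\n", sizeof(MyClass));
    std::printf("objects touching a single %zu-byte line: about %zu\n",
                line_bytes, line_bytes / sizeof(MyClass));
}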
It's wise to note that since you're accessing the elements in a very predictable pattern the hardware prefetcher should kick in and fetch a reasonable amount of data it expects to use in the future. This could potentially bring more than you need into various levels of the cache hierarchy. It will vary from chip to chip.
