When a cache is first designed, is it randomly mapped with some memory addresses, or is it empty at the beginning and filled with data from memory/lower-level caches only after a load or store instruction from the processor?
I ask because I have designed the RTL for an L1 cache. Should I leave it blank and wait for the processor to request a read/write, or pre-fill it with some memory-mapped data and then determine hit/miss accordingly?
First designed? Do you mean first powered on? The normal way would be to start out with all the tags invalid (so it doesn't matter what's in the data arrays or anywhere else).
It's easy to imagine bugs if all the data in your cache were randomly initialized: some lines would be valid, not dirty, and hold contents different from what's actually in RAM / ROM, so obviously you shouldn't do that. A hit in this out-of-sync L1 for the boot ROM code, for example, would be bad!
If any part of memory is initialized at power-on to known contents (like all-zeros), you could in theory init your cache tags and data so it's caching that memory.
If you init your cache as valid for anywhere that doesn't match what's in memory, you'd need to initialize it as dirty, which would trigger a writeback when the lines are evicted in favour of whatever the CPU actually needs, so that makes no sense.
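For what it's worth, here is a minimal C sketch of that reset behaviour (the sizes and names are illustrative, not taken from your RTL): only the valid and dirty bits need a defined reset value; the tag and data arrays can hold garbage, because a cleared valid bit means the line can never hit.

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_LINES  64   /* illustrative geometry, not from the question */
    #define LINE_BYTES 64

    struct cache_line {
        bool     valid;               /* cleared at reset: line can never hit */
        bool     dirty;
        uint32_t tag;                 /* don't-care while valid == 0 */
        uint8_t  data[LINE_BYTES];    /* don't-care while valid == 0 */
    };

    static struct cache_line l1[NUM_LINES];

    /* Model of the power-on / reset state: invalidate every line.
     * The tag and data arrays are deliberately left untouched. */
    static void cache_reset(void)
    {
        for (int i = 0; i < NUM_LINES; i++) {
            l1[i].valid = false;
            l1[i].dirty = false;
        }
    }

In RTL the equivalent is to reset only the valid/dirty flip-flops; the tag and data SRAMs don't need a reset at all.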
When we use malloc and access the memory, what kind of page attributes do the physical pages backing this address space have? Are they cacheable or non-cacheable pages?
Ordinary memory -- whether for user-space or kernel -- is pretty much always marked cacheable. Otherwise, using that memory would entail a huge performance hit.
Generally speaking, the only time you want memory to be marked non-cacheable is when the memory is actually part of an external device (i.e. a device other than a memory chip): for example, a PCI device BAR region used to implement device control registers.
Caching is good for performance since reading and writing the cache is usually much faster than reading and writing the underlying RAM. And the caching can "bundle up" reads and writes so that those operations on the RAM chip are done significantly less often. The downside is that by using it you generally give up exact control over the reading and writing of the RAM.
The main RAM usually gets read and written at "random" times as determined by the cache controller, and it typically gets read and written in large blocks called "cache lines" -- blocks of 32, 64, or 128 bytes at a time. When you write a value to cached memory, that value may not get written to the actual RAM chip until some indeterminate later time (if ever: it might get overwritten before it is ever transferred out of the cache). This is of course all hidden from you as a user of the memory -- you generally don't even need to be aware of it.
But if the memory being written to is a control register -- setting some mode or characteristic of a device, for example -- then you want the value of that register to be set exactly when you write to it, not at some indeterminate later time, and you don't want the write to that register to affect any other registers that may be located near it in the address space.
Likewise, if you read the value of a status register, it might be "volatile": i.e. its value might change between two consecutive reads of the same register, so you don't want the value cached. And reading a register might have side effects, so you only want explicit reads to access it.
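To make the register case concrete, here is a hedged C sketch with a made-up device: the register layout, base address, and bit meanings are assumptions for illustration only. volatile keeps the compiler from eliding or reordering the accesses; mapping the region non-cacheable keeps the hardware caches out of the way.

    #include <stdint.h>

    /* Hypothetical memory-mapped device registers (layout is an assumption;
     * a real driver gets this from the device datasheet and maps the
     * BAR / physical range as non-cacheable). */
    struct uart_regs {
        volatile uint32_t data;     /* reading may pop a FIFO: side effects */
        volatile uint32_t status;   /* may change between two reads */
        volatile uint32_t control;  /* a write must reach the device now */
    };

    #define UART_BASE ((struct uart_regs *)0x10000000u)  /* made-up address */

    static void uart_enable(void)
    {
        /* 'volatile' forces the compiler to emit this exact store; marking
         * the page non-cacheable keeps the hardware from parking the value
         * in a cache line instead of sending it to the device. */
        UART_BASE->control = 0x1u;
    }

    static int uart_has_data(void)
    {
        /* Each call really reads the register, so a change in device state
         * is observed rather than a stale cached value. */
        return (UART_BASE->status & 0x1u) != 0;
    }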
When we want to write a data item, the block containing the data is brought into the cache first and the data item is written into the cache. This can cause cache pollution. To avoid this, Intel has introduced non-temporal instructions.
If I'm going to be using mmap() to write data to a file and am never going to read it again, is it possible to avoid TLB entry creation for this? Is there any instruction available similar to the non-temporal instructions?
TLB entries are needed by the CPU to map from the virtual address to the physical address, so it is not possible to avoid them with mmap() or any similar API.
Even if it were possible to avoid storing the mapping in the TLB, every access to the mapped memory would need to reload the corresponding entries from the page tables, so the performance would be much worse.
Non-temporal accesses make sense only for stores, but the page table entries are read.
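Non-temporal stores can still help with the cache-pollution half of the problem, even though TLB entries are created regardless. Here is a hedged sketch of that, assuming an x86 machine with SSE2 and a POSIX system (the file name and size are made up):

    /* Hedged sketch: write an mmap()ed file with non-temporal (streaming)
     * stores so the written data bypasses the data caches. TLB entries are
     * still created for the mapping; only cache pollution is avoided. */
    #include <emmintrin.h>   /* _mm_stream_si128, _mm_set1_epi8, _mm_sfence */
    #include <fcntl.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define FILE_SIZE (1 << 20)   /* 1 MiB, illustrative */

    int main(void)
    {
        int fd = open("output.bin", O_RDWR | O_CREAT | O_TRUNC, 0644);
        if (fd < 0 || ftruncate(fd, FILE_SIZE) != 0)
            return 1;

        uint8_t *dst = mmap(NULL, FILE_SIZE, PROT_WRITE, MAP_SHARED, fd, 0);
        if (dst == MAP_FAILED)
            return 1;

        __m128i pattern = _mm_set1_epi8(0x5A);
        /* mmap() returns page-aligned memory, so 16-byte alignment holds. */
        for (size_t off = 0; off < FILE_SIZE; off += 16)
            _mm_stream_si128((__m128i *)(dst + off), pattern);

        _mm_sfence();   /* streaming stores are weakly ordered: fence before relying on them */
        munmap(dst, FILE_SIZE);
        close(fd);
        return 0;
    }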
My understanding is that the main difference between the two methods is that with "write-through" data is written to main memory through the cache immediately, while with "write-back" data is written at a later time.
We still need to wait for memory at that later time, so what is the benefit of "write-through"?
The benefit of write-through to main memory is that it simplifies the design of the computer system. With write-through, the main memory always has an up-to-date copy of the line. So when a read is done, main memory can always reply with the requested data.
If write-back is used, sometimes the up-to-date data is in a processor cache, and sometimes it is in main memory. If the data is in a processor cache, then that processor must stop main memory from replying to the read request, because the main memory might have a stale copy of the data. This is more complicated than write-through.
Also, write-through can simplify the cache coherency protocol because it doesn't need the Modify state. The Modify state records that the cache must write back the cache line before it invalidates or evicts the line. In write-through a cache line can always be invalidated without writing back since memory already has an up-to-date copy of the line.
One more thing - on a write-back architecture, software that writes to memory-mapped I/O registers must take extra steps to make sure that writes are immediately sent out of the cache. Otherwise writes are not visible outside the core until the line is read by another processor or the line is evicted.
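As a hedged illustration of such an "extra step" on x86 (only relevant if the region really did end up mapped write-back, which is unusual for device registers; the address and names below are invented for the example):

    #include <emmintrin.h>   /* _mm_clflush, _mm_sfence */
    #include <stdint.h>

    /* Hypothetical memory-mapped register, for illustration only. */
    static volatile uint32_t *const DEVICE_REG = (volatile uint32_t *)0x20000000u;

    static void write_reg_and_flush(uint32_t value)
    {
        *DEVICE_REG = value;                    /* the store may land in a cache line   */
        _mm_clflush((const void *)DEVICE_REG);  /* push the dirty line out toward the device */
        _mm_sfence();                           /* order the flush before later stores  */
    }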
Hope this article can help you: Differences between disk Cache Write-through and Write-back.
Write-through: Write is done synchronously both to the cache and to the backing store.
Write-back (or Write-behind): Writing is done only to the cache. A modified cache block is written back to the store, just before it is replaced.
Write-through: When data is updated, it is written to both the cache and the back-end storage. This mode is easy for operation but is slow in data writing because data has to be written to both the cache and the storage.
Write-back: When data is updated, it is written only to the cache. The modified data is written to the back-end storage only when data is removed from the cache. This mode has fast data write speed but data will be lost if a power failure occurs before the updated data is written to the storage.
Let's look at this with the help of an example.
Suppose we have a direct mapped cache and the write back policy is used. So we have a valid bit, a dirty bit, a tag and a data field in a cache line.
Suppose we have an operation: write A (where A is mapped to the first line of the cache).
What happens is that the data(A) from the processor gets written to the first line of the cache. The valid bit and tag bits are set. The dirty bit is set to 1.
The dirty bit simply indicates whether the cache line has been written since it was last brought into the cache.
Now suppose another operation is performed: read E (where E is also mapped to the first cache line).
Since we have a direct-mapped cache, the first line can simply be replaced by block E, which will be brought from memory. But the block last written into the line (block A) has not yet been written to memory (as indicated by the dirty bit), so the cache controller will first issue a write-back to transfer block A to memory, and then it will replace the line with block E by issuing a read operation to memory. The dirty bit is now set to 0.
So the write-back policy does not guarantee that a block will be the same in memory and in its associated cache line. However, whenever the line is about to be replaced, a write-back is performed first.
A write-through policy is just the opposite. According to this policy, memory always has up-to-date data. That is, if the cache block is written, memory is written as well (no dirty bit is needed).
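Here is a minimal C sketch of the walk-through above; the geometry (4 lines of 16 bytes), the tiny backing memory, and the addresses chosen for A and E are assumptions for illustration, not part of the original example. Writing A marks the line dirty, and reading E (which maps to the same line) forces the write-back before the refill.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Illustrative geometry: 4 direct-mapped lines of 16 bytes each. */
    #define NUM_LINES  4
    #define LINE_BYTES 16

    struct line {
        bool     valid, dirty;
        uint32_t tag;
        uint8_t  data[LINE_BYTES];
    };

    static struct line cache[NUM_LINES];
    static uint8_t     memory[1 << 16];   /* tiny stand-in for main memory */

    static uint32_t index_of(uint32_t addr) { return (addr / LINE_BYTES) % NUM_LINES; }
    static uint32_t tag_of(uint32_t addr)   { return addr / (LINE_BYTES * NUM_LINES); }

    /* Bring the block containing addr into its line, writing the old block back first if dirty. */
    static struct line *fill(uint32_t addr)
    {
        uint32_t idx = index_of(addr);
        struct line *l = &cache[idx];

        if (l->valid && l->tag != tag_of(addr)) {
            if (l->dirty) {   /* write-back of the old block before replacing it */
                uint32_t old_base = (l->tag * NUM_LINES + idx) * LINE_BYTES;
                memcpy(&memory[old_base], l->data, LINE_BYTES);
            }
            l->valid = false;
        }
        if (!l->valid) {      /* fetch the requested block from memory */
            memcpy(l->data, &memory[addr - addr % LINE_BYTES], LINE_BYTES);
            l->tag   = tag_of(addr);
            l->valid = true;
            l->dirty = false;
        }
        return l;
    }

    static void write_byte(uint32_t addr, uint8_t v)
    {
        struct line *l = fill(addr);
        l->data[addr % LINE_BYTES] = v;
        l->dirty = true;                  /* memory now holds a stale copy */
    }

    static uint8_t read_byte(uint32_t addr)
    {
        return fill(addr)->data[addr % LINE_BYTES];
    }

    int main(void)
    {
        uint32_t A = 0x00;   /* maps to line 0 */
        uint32_t E = 0x40;   /* also maps to line 0 (different tag) */

        write_byte(A, 42);
        printf("after write A: memory[A] = %u\n", (unsigned)memory[A]);  /* 0: still stale */
        (void)read_byte(E);  /* evicting A triggers the write-back first */
        printf("after read  E: memory[A] = %u\n", (unsigned)memory[A]);  /* 42 */
        return 0;
    }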
Write-back and write-through describe policies when a write hit occurs, that is when the cache has the requested information. In these examples, we assume a single processor is writing to main memory with a cache.
Write-through: The information is written to the cache and memory, and the write finishes when both have finished. This has the advantage of being simpler to implement, and the main memory is always consistent (in sync) with the cache (for the uniprocessor case - if some other device modifies main memory, then this policy is not enough), and a read miss never results in writes to main memory. The obvious disadvantage is that every write hit has to do two writes, one of which accesses slower main memory.
Write-back: The information is written to a block in the cache. The modified cache block is only written to memory when it is replaced (in effect, a lazy write). A special bit for each cache block, the dirty bit, marks whether or not the cache block has been modified while in the cache. If the dirty bit is not set, the cache block is "clean" and a write miss does not have to write the block to memory.
The advantage is that writes can occur at the speed of the cache, and if there are multiple writes to the same block, only one write to main memory is needed (when that block is eventually replaced). The disadvantages are that this protocol is harder to implement, main memory may not be consistent (not in sync) with the cache, and reads that result in replacement may cause writes of dirty blocks to main memory.
The policies for a write miss are detailed in my first link.
These protocols don't take care of the cases with multiple processors and multiple caches, as is common in modern processors. For this, more complicated cache coherence mechanisms are required. Write-through caches have simpler protocols since a write to the cache is immediately reflected in memory.
Good resources:
http://web.cs.iastate.edu/~prabhu/Tutorial/CACHE/interac.html (what my post is largely based on)
http://www.cs.cornell.edu/courses/cs3410/2013sp/lecture/18-caches3-w.pdf
Write-back is the more complex one and requires a complicated cache coherence protocol (MOESI), but it is worth it, as it makes the system fast and efficient.
The only benefit of Write-Through is that it makes the implementation extremely simple and no complicated cache coherency protocol is required.
I'm using perf as a basic event counter. I'm working on a program that suffers from data-cache store misses, at a ratio as high as 80%.
I know how caches work in principle: the cache loads from memory on various miss cases and evicts data when it pleases. What I don't understand is the difference between store misses and load misses. How does storing differ from loading? How can you miss on a store?
A load miss (as you know) refers to when the processor needs to fetch data from main memory but the data does not exist in the cache. So whenever the processor wants some data from main memory, it queries the cache; if the data is already loaded you get a load hit, and otherwise you get a load miss.
A store miss is related to when the processor wants to write newly calculated data back to main memory. When it wants to write the data back, it has to make sure that the contents of the cache and main memory are in sync with each other. That can happen under two different policies that you can find here: Writing Policies.
So no matter which policy you choose, you first need to check whether the data is already in the cache so you can store to the cache first (since it's faster), and if the data block you are looking for has been evicted from the cache, you get a store miss for that cache.
You can check the applet here, to get a better idea of what happens in different scenarios.
I'm not fully familiar with how perf defines these events, but given the common definitions I believe the load/store miss distinction is just a way to break down the overall miss counting, so that you can tell which kinds of accesses miss more often. Note that loads are usually performed speculatively (at least in modern x86 CPUs), while stores are performed much later in the pipeline, after the commit point, so even a piece of code with both loads and stores to the same region can have different miss rates.
In MESI-based cache protocols a load would hit the cache, or miss and fetch the line from the memory or next cache levels, either exclusively if it's not owned by anyone else, or in a shared state if it is. It would write the data to the caches along the way in the process.
A store would fetch a line in the same manner, but use an RFO (read-for-ownership) request which grants it exclusive ownership and the right to modify the line. The line would still get cached, but once the new data is written to it locally (usually in your L1 cache), it would become modified. The hit/miss process would look the same though.
What Saman referred to in his answer is the breakdown between reads and writes. Loads and stores (and other forms of access like code reads) all form the "read" part, and writebacks (or intentional write-throughs using special commands or memory types like uncacheable) form the "write" part.
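If you want to see the breakdown yourself, a hedged sketch: the array size below is an arbitrary "bigger than the caches" guess, and the generic perf event names in the comment may map differently (or not at all) on your particular CPU. The middle loop produces mostly load misses, the last loop mostly store (RFO) misses.

    /* Hedged demo: run it under perf to compare load vs. store misses, e.g.
     *   perf stat -e L1-dcache-load-misses,L1-dcache-store-misses ./a.out
     * (event availability and mapping vary by CPU). */
    #include <stdio.h>
    #include <stdlib.h>

    #define N (16 * 1024 * 1024)   /* 16 Mi ints (~64 MiB): assumed bigger than the caches */

    int main(void)
    {
        int *a = malloc((size_t)N * sizeof *a);
        if (!a)
            return 1;

        for (size_t i = 0; i < N; i++)   /* back the pages (this causes store misses too) */
            a[i] = 0;

        long sum = 0;
        for (size_t i = 0; i < N; i++)   /* pure loads  -> load misses          */
            sum += a[i];

        for (size_t i = 0; i < N; i++)   /* pure stores -> store (RFO) misses   */
            a[i] = (int)i;

        printf("%ld\n", sum);            /* keep the load loop from being optimized away */
        free(a);
        return 0;
    }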
I am trying to complete a simulator for a simplified MIPS computer using Java. I believe I have completed the pipeline logic needed for my assignment, but I am having a hard time understanding what the instruction and data caches are supposed to do.
The instruction cache should be direct-mapped with 4 blocks and the block size is 4 words.
So I am really confused about what the cache is doing. Is it going to memory and pulling the instruction from memory? For example, would one block hold just the add instruction?
Would it make sense to implement it as a 2 dimensional array?
First you should know the basics of a cache. You can imagine the cache as an intermediate memory which sits between the DRAM (main memory) and your processor, but is very limited in size. When you try to access a location in memory, you first look for it in the cache. If it is found (a cache hit), the processor takes the data and resumes execution; a hit is supposed to cost only a few clock cycles, say 1 or 2. If the data is not found in the cache (a cache miss), the data is fetched from main memory, filled into the cache, and fed to the processor; the processor stalls until the data arrives, which normally takes a few hundred clock cycles depending on the DRAM you are using. The amount of data fetched from DRAM on a miss is equal to the cache-line size. For the reason behind that, look up spatial locality of reference in caches.
I think this should get you a start.
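To tie this to your assignment's geometry (direct-mapped, 4 blocks of 4 words), here is a hedged sketch of the lookup, written in C for brevity; the same layout maps directly onto Java, e.g. an int[4][4] data array plus parallel tag and valid arrays, which is essentially your 2 dimensional array idea. The tiny backing memory and 32-bit byte addressing are assumptions for the example.

    /* Direct-mapped I-cache: 4 blocks of 4 words (16 bytes per block).
     * Address split: [ tag | 2-bit block index | 2-bit word offset | 2-bit byte offset ]. */
    #include <stdbool.h>
    #include <stdint.h>

    #define BLOCKS          4
    #define WORDS_PER_BLOCK 4

    static uint32_t icache_data[BLOCKS][WORDS_PER_BLOCK];  /* the "2-D array" of words */
    static uint32_t icache_tag[BLOCKS];
    static bool     icache_valid[BLOCKS];

    static uint32_t memory[1024];   /* word-addressed stand-in for main memory (4 KiB) */

    /* On a miss, the whole 4-word block is pulled from memory at once. */
    static void mem_read_block(uint32_t block_byte_addr, uint32_t out[WORDS_PER_BLOCK])
    {
        for (int w = 0; w < WORDS_PER_BLOCK; w++)
            out[w] = memory[block_byte_addr / 4 + w];
    }

    static uint32_t icache_fetch(uint32_t pc)
    {
        uint32_t word  = (pc >> 2) & 0x3;   /* which word inside the block */
        uint32_t index = (pc >> 4) & 0x3;   /* which of the 4 blocks       */
        uint32_t tag   = pc >> 6;           /* everything above the index  */

        if (!icache_valid[index] || icache_tag[index] != tag) {
            /* miss: fetch the block containing pc, then record tag/valid */
            mem_read_block(pc & ~0xFu, icache_data[index]);
            icache_tag[index]   = tag;
            icache_valid[index] = true;
        }
        return icache_data[index][word];    /* hit path: just index the arrays */
    }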