Paging and TLB

I'm really stuck on this question for my OS class. I don't want someone to just give me the answer, though; instead, could someone tell me how to work it out?
Example Question:
This system uses simple paging and a TLB.
Each memory access requires 80 ns.
A TLB access requires 10 ns.
The TLB hit rate is 80%.
Work out the actual speedup due to the TLB.
NOTE: I changed the memory access time and the TLB access time from the original question because, as I said, I don't want the answer, just a way to work it out.

In case the virtual address translation is cached in the TLB, all we need is one lookup in the TLB to get the physical address, and we are done. The interesting part is when we need to do the page-table walk. Think carefully about what the system has to do if it does not find an address in the TLB (it has already paid for a TLB lookup at that point). A memory access takes 80 ns, but how many of them do you need to actually get the physical address? Pretty much every paging architecture follows the scheme that page tables are stored in memory, and only the entry point, the address of the base of the first page table (the root), is kept in a register.
Once you have that total time, you can calculate the speed-up by comparing it to the time the same access takes without a TLB, i.e. when the translation always has to come from the page table in memory.
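
Once you see that a miss costs one or more extra memory accesses, the whole calculation fits in a few lines. Here is a minimal sketch in C using the question's illustrative numbers, assuming a single-level page table (one extra memory access per miss; the question doesn't specify the table depth, so treat that as an assumption):

```c
#include <stdio.h>

/* Effective access time (EAT) with a TLB. Assumes a single-level page
 * table, i.e. exactly one extra memory access on a TLB miss. */
static double eat_ns(double hit_rate, double tlb_ns, double mem_ns)
{
    double hit_cost  = tlb_ns + mem_ns;       /* TLB lookup + data access */
    double miss_cost = tlb_ns + 2.0 * mem_ns; /* + one page-table access  */
    return hit_rate * hit_cost + (1.0 - hit_rate) * miss_cost;
}

int main(void)
{
    double with_tlb    = eat_ns(0.8, 10.0, 80.0); /* the question's numbers */
    double without_tlb = 2.0 * 80.0;              /* walk + data, no TLB    */
    printf("EAT with TLB: %.0f ns, speedup: %.2fx\n",
           with_tlb, without_tlb / with_tlb);
    return 0;
}
```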

On a TLB hit (80% of accesses), you spend 2 ns on the TLB lookup and 20 ns to access the word in main memory, so the hit part is
0.8×(2+20)
On a TLB miss (the remaining 1−0.8 = 20%), you still spend 2 ns checking the TLB. Because the base address of the page table is in main memory, reading the page-table entry to find the desired frame costs another 20 ns, and then fetching the data from main memory costs 20 ns more, so the miss part is
0.2×(2+20+20)
Adding the two parts:
Effective access time = 0.8×(2+20) + 0.2×(2+20+20)
= 26 ns
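To get the speedup the question actually asks for, compare this with the no-TLB case, where every access costs one page-table read plus the data read: 20 + 20 = 40 ns. The speedup due to the TLB is therefore 40 / 26 ≈ 1.54×.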


Avoiding Translation Lookaside Buffer (TLB) pollution when using mmap()

When we want to write a data item, the block containing it is brought into the cache first, and the data item is then written into the cache. This can cause cache pollution. To avoid it, Intel introduced non-temporal instructions.
If I'm going to use mmap() to write data to a file and never read it again, is it possible to avoid creating TLB entries for this? Is there any instruction available similar to the non-temporal instructions?
TLB entries are needed by the CPU to map from the virtual address to the physical address, so it is not possible to avoid them with mmap() or any similar API.
Even if it were possible to avoid storing the mapping in the TLB, every access to the mapped memory would need to reload the corresponding entries from the page tables, so the performance would be much worse.
Non-temporal accesses make sense only for stores, but page-table entries are read, not written.
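
For the cache half of the question, this is roughly what a non-temporal store looks like with the x86 SSE2 compiler intrinsics (a sketch for illustration; note that the addresses written are still translated, so TLB entries get created all the same):

```c
#include <immintrin.h>
#include <stdio.h>
#include <stdlib.h>

/* Non-temporal stores on x86 (SSE2; baseline on x86-64). The stores
 * bypass the cache hierarchy and avoid cache pollution, but address
 * translation, and hence TLB usage, happens regardless. */
int main(void)
{
    enum { N = 1024 };
    int *buf = malloc(N * sizeof *buf);
    if (!buf)
        return 1;

    for (int i = 0; i < N; i++)
        _mm_stream_si32(&buf[i], i); /* non-temporal (MOVNTI) store */
    _mm_sfence();                    /* drain the write-combining buffers */

    printf("buf[42] = %d\n", buf[42]);
    free(buf);
    return 0;
}
```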

Cache miss, TLB miss and page fault

Can someone clearly explain to me the difference between a cache miss, a TLB miss, and a page fault, and how they affect the effective memory access time?
Let me explain all these things step by step.
The CPU generates the logical address, which contains the page number and the page offset.
The page number is used to index into the page table to get the corresponding page frame number, and once we have the frame in physical memory (also called main memory), we apply the page offset to get the right word of memory.
Why a TLB (Translation Lookaside Buffer)?
The thing is that the page table is stored in physical memory and can sometimes be very large. To speed up the translation of logical addresses to physical addresses, we use a TLB, which is made of expensive, faster associative memory. Instead of going to the page table first, we index into the TLB with the page number and read out the corresponding page frame number. If it is found, we completely avoid the page table (because we have both the page frame number and the page offset) and form the physical address directly.
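
To make the idea concrete, here is a toy, runnable sketch in C (the sizes, the direct mapping, and the identity page table are all illustrative assumptions, not how any particular CPU works):

```c
#include <stdio.h>
#include <stdbool.h>

/* A tiny direct-mapped TLB indexed by page number, in front of a
 * hypothetical page table that simply identity-maps every page. */

#define PAGE_BITS 12                 /* 4 KiB pages */
#define TLB_SLOTS 16

struct tlb_entry { bool valid; unsigned long vpn, pfn; };
static struct tlb_entry tlb[TLB_SLOTS];

static unsigned long page_table_walk(unsigned long vpn) { return vpn; }

static unsigned long translate(unsigned long vaddr, int *tlb_misses)
{
    unsigned long vpn = vaddr >> PAGE_BITS;
    struct tlb_entry *e = &tlb[vpn % TLB_SLOTS];

    if (!e->valid || e->vpn != vpn) {   /* TLB miss: consult the page table */
        ++*tlb_misses;
        *e = (struct tlb_entry){ true, vpn, page_table_walk(vpn) };
    }
    /* TLB hit (or freshly refilled entry): frame number + page offset */
    return (e->pfn << PAGE_BITS) | (vaddr & ((1UL << PAGE_BITS) - 1));
}

int main(void)
{
    int misses = 0;
    for (unsigned long i = 0; i < 1000; i++) /* sequential accesses reuse */
        translate(i * 64, &misses);          /* each page's entry         */
    printf("TLB misses: %d out of 1000 accesses\n", misses);
    return 0;
}
```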
TLB Miss
If we don't find the page frame number in the TLB, it is called a TLB miss; only then do we go to the page table to look up the corresponding frame number.
TLB Hit
If we find the page frame number in the TLB, it's called a TLB hit, and we don't need to go to the page table at all.
Page Fault
Occurs when the page accessed by a running program is not present in physical memory. It means the page is present in the secondary memory but not yet loaded into a frame of physical memory.
Cache Hit
Cache memory is a small memory that operates faster than physical memory, and we always check the cache before going to physical memory. If we are able to locate the corresponding word inside the cache, it's called a cache hit, and we don't even need to go to physical memory.
Cache Miss
Only when the cache is unable to find the corresponding block of memory (a block is to the cache what a page frame is to physical memory) do we have a cache miss; then we go to physical memory and do the whole process of going through the TLB or the page table.
So the flow is basically this (sketched in code after the list):
1. First go to the cache memory; if it's a cache hit, we are done.
2. If it's a cache miss, go to step 3.
3. Go to the TLB; if it's a TLB hit, go to physical memory using the physical address formed, and we are done.
4. If it's a TLB miss, go to the page table to get the frame number of your page for forming the physical address.
5. If the page is not found there, it's a page fault. Load the required page from secondary memory into a physical memory frame, using one of the page replacement algorithms if all the frames are occupied.
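
Here is the same flow as a compilable C sketch. Every helper is a deliberately trivial, hypothetical stand-in, not a real memory system; only the order of the lookups is the point:

```c
#include <stdio.h>
#include <stdbool.h>

/* Trivial stand-ins so the sketch compiles and runs. */
static bool cache_lookup(unsigned long va, int *w)     { (void)va; (void)w; return false; }
static void cache_fill(unsigned long va, int w)        { (void)va; (void)w; }
static bool tlb_lookup(unsigned long va, unsigned long *pa) { (void)va; (void)pa; return false; }
static void tlb_insert(unsigned long va, unsigned long pa) { (void)va; (void)pa; }
static bool page_table_walk(unsigned long va, unsigned long *pa) { *pa = va; return true; }
static unsigned long handle_page_fault(unsigned long va) { return va; }
static int  memory_read(unsigned long pa)              { return (int)pa; }

int data_access(unsigned long va)
{
    int word;
    if (cache_lookup(va, &word))        /* steps 1-2: cache hit? done     */
        return word;

    unsigned long pa;
    if (!tlb_lookup(va, &pa)) {         /* step 3: TLB hit? use pa        */
        if (!page_table_walk(va, &pa))  /* step 4: walk the page table    */
            pa = handle_page_fault(va); /* step 5: bring the page in      */
        tlb_insert(va, pa);
    }
    word = memory_read(pa);
    cache_fill(va, word);               /* refill the (virtual) cache     */
    return word;
}

int main(void)
{
    printf("read: %d\n", data_access(0x1234));
    return 0;
}
```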
End Note
The flow I have discussed applies to a virtually addressed cache (VIVT, faster but not sharable between processes); the flow would definitely change for a physically addressed cache (PIPT, slower but sharable between processes). Caches can be addressed in multiple ways. If you are willing to dive deeply, have a look at this and this.
This diagram might help to see what will happen when there is a hit or a miss.
Just imagine a process is running and requires a data item X.
At first the cache will be checked to see if it has the requested data item; if it is there (cache hit), it will be returned. If it is not there (cache miss), it will be loaded from main memory.
If there is a cache miss, main memory will be checked to see if there is a page containing the requested data item (page hit), and if such a page is not there (page fault), the page containing the desired item has to be brought into main memory from disk.
While processing the page fault, the TLB will be checked to see if the desired page's frame number is available there (TLB hit); otherwise (TLB miss) the OS has to consult the page table to service the page fault.
Time required to access these types of memory:
cache << main memory << disk
Cache access requires the least time, so a hit or a miss at a given level can drastically change the effective access time.
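To put rough, order-of-magnitude numbers on that (these figures are assumptions for illustration, not from the answer above): with main memory at ~100 ns and a disk access at ~10 ms, even a page-fault rate of just 1 in 100,000 accesses gives an effective access time of about 100 ns + 10⁻⁵ × 10 ms = 200 ns, so the rare disk accesses alone double the average cost.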
What causes page faults? Is it always because the memory has been moved to hard disk? Or just moved around for other applications?
Well, it depends. If your system does not support multiprogramming (in a multiprogramming system there are one or more programs loaded in main memory that are ready to execute), then the page fault definitely occurred because the memory has been moved out to the hard disk.
If your system does support multiprogramming, then it depends on whether your operating system uses global or local page replacement. With global replacement, yes, there is a chance that the memory has been moved around for other applications; with local replacement, the memory has been moved back to the hard disk. When a process incurs a page fault, a local page replacement algorithm selects for replacement some page that belongs to that same process, whereas a global replacement algorithm is free to select any page from the entire pool of frames. This distinction comes up most often when dealing with thrashing.
I am confused about the difference between TLB misses and page faults.
A TLB miss occurs when the page table entry required for translating a virtual address to a physical address is not present in the TLB (translation lookaside buffer). The TLB is like a cache, but it stores page table entries rather than data, so that on a TLB hit we can completely bypass the page table, as you can see in the diagram.
Is a page fault a crash? Or is it the same as a TLB miss?
Neither of them is a crash, as a crash is not recoverable, and it is well known that we can recover from both a page fault and a TLB miss without aborting the process.
The operating system uses virtual memory, and page tables map these virtual addresses to physical addresses. The TLB works as a cache for that mapping.
program >>> TLB >>> cache >>> RAM
A program searches for a page in the TLB; if it doesn't find that page, it's a TLB miss, and the program then looks for the page in the cache.
If the page is not in the cache, it's a cache miss, and the program looks further in RAM.
If the page is not in RAM, it's a page fault, and the program looks for the data in secondary storage.
So the typical worst-case flow would be:
Page requested >> TLB miss >> cache miss >> page fault >> look in secondary memory.

Virtual address to physical address translation in the light of cache memory

I understand how a virtual address is translated to a physical address to access main memory. I also understand how the cache memory works.
But my problem is putting the two concepts together and understanding the big picture of how a process accesses memory and what happens if we have a cache miss. So I have this drawing that will help me ask the following questions:
click to see the image (assume a one-level cache)
1. Does the process access the cache with the exact same physical address that represents the location of the byte in main memory?
2. Is the TLB actually in the first level of cache, or is it a separate memory inside the CPU chip dedicated to translation?
3. When there is a cache miss, I need to get a whole block and allocate it in the cache, but main memory is organized in frames (pages), not blocks. So is a process's page itself divided into cache blocks that can be brought into the cache in case of a miss?
4. Let's assume there is a TLB miss. Does that mean I need to go all the way to main memory and do the page walk there, or does the page walk happen in the cache?
5. Does a TLB miss guarantee that there will be a cache miss?
6. If you have any reading material that explains the big picture I am trying to understand, I would really appreciate you sharing it with me.
Thanks, and feel free to answer any single question I have asked.
Yes. The cache is not memory that can be addressed separately. Cache mapping translates a physical address into an address within the cache, but this mapping is not something a process usually controls. On some CPU architectures it is completely controlled by the hardware (e.g. Intel x86); on others the operating system is expected to program the mapping.
The TLB in the diagram you gave is for virtual-to-physical address mapping. It is probably not for the cache. Again, on some architectures the TLBs are programmed by software, whereas on others they are controlled by the hardware.
Page size and cache line size do not have to be the same, as one relates to virtual memory and the other to physical memory. When a process accesses a virtual address, that address is translated to a physical address using the TLB, taking the page size into account. Once that's done, the size of a page is of no concern: the access is for a byte/word at a physical address. If this causes a cache miss, the cache block that is read will be the block that covers the physical address being accessed.
A TLB miss requires a page translation that reads other memory. This can occur in hardware on some CPUs (such as Intel x86/x64) or needs to be handled in software. Once the page translation has completed, the TLB is reloaded with it.
TLB miss does not imply cache miss. TLB miss just means the virtual to physical address mapping was not known and required a page address translation to occur. A cache miss means the physical memory content could not be provided quickly.
To recap:
the TLB converts virtual addresses to physical addresses quickly. It exists to cache the virtual-to-physical memory mapping. It does not have anything to do with physical memory content.
the cache allows faster access to memory. It is only there to provide the content of physical memory faster.
Keep in mind that the term cache is used for lots of purposes (note the usage of "cache" when describing the TLB). TLB is a bit more specific and usually implies virtual memory translation, though that's not universal. For example, some DMA controllers have a TLB too, but that TLB is not necessarily used to translate virtual to physical addresses but rather to convert block addresses to physical addresses.

Does the address translation of paging decrease memory access performance?

When paging is enabled, some hardware is responsible for translating virtual memory addresses into physical addresses. Known translations are usually kept in some sort of cache, the translation lookaside buffer (TLB).
Assuming a memory access where the address translation is cached, is it any slower than directly accessing memory without paging enabled?
I'm wondering about the overhead of that translation, even when it's cached, since the access to that cache will probably also take some (although very short) time. Or is that time accounted for as part of the clock cycle?
(To make it clear, my question is not about page faults or cache misses of the TLB)
Like everything in life, it depends! :-)
Let's assume, for the sake of simplicity, that (a) we're talking about data rather than instructions (b) all data memory accesses hit the level 1 cache (c) the level 1 data cache is a typical set associative cache.
Each block of the data cache must be identified with an address (less the offset). If the cache uses virtual addresses, then no translation needs to take place and there is no overhead. If the cache uses physical addresses, then the address must be translated prior to the data access, adding latency to the request. Even with a small TLB, I don't think a high-performance processor could both translate the address and then complete the cache request within the same cycle. So it's fair to assume that a physically addressed cache does indeed have the overhead of address translation.
So virtually addressed caches sound like the better deal, right? Unfortunately, it's a double-edged sword. The problem is that virtual memory often allows multiple virtual addresses to map to the same physical address. If our cache holds two virtual addresses that map to a single physical address, modifying one will not be reflected in the other.
So there is an option between these two extremes. Still assuming a set-associative cache, we can use the virtual address as the index while simultaneously translating the address to a physical one. Afterwards, we use the physical address as the tag to access the data. This way we take the TLB translation off the critical path, achieving performance similar to the virtually addressed cache, and it also lets us avoid the virtual/physical aliasing problem, although it often needs a little extra help from the operating system.
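To see why this works, consider a common configuration (an assumed example, not taken from the question): 4 KiB pages leave the low 12 bits of an address untranslated, and a 32 KiB, 8-way set-associative cache with 64-byte lines has 32768 / (8 × 64) = 64 sets. The 6 set-index bits plus the 6 line-offset bits together occupy exactly those 12 untranslated bits, so the cache can select the set from the virtual address while the TLB translates the upper bits in parallel.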
So, you can see that it can be the same or slower, depending on how the cache is configured.

Does memory address translation need extra access to memory?

I've got a question about virtual memory management, more specifically, the address translation.
When an application runs, the CPU receives instructions containing virtual memory addresses, and translates them into physical addresses via the page table.
My question is, since the page table also resides in a block of memory, does that mean the CPU has to access memory twice for a single memory-access instruction? If the answer is no, then how does this actually work? Which part did I miss?
Could anyone give me some details about this?
As usual, the answer is neither yes nor no.
Worst case, you have to do a walk of the page table, which is indeed stored in (some kind of) memory. This is not necessarily only one lookup; it can be multiple lookups, see for example a two-level table (example from Wikipedia).
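
For concreteness, here is a sketch of such a two-level walk in C, loosely modelled on the classic 32-bit x86 layout. The tables, the mappings, and the use of ordinary pointers (real page-directory entries hold frame numbers and flag bits, not pointers) are all illustrative:

```c
#include <stdint.h>
#include <stdio.h>

/* Two memory reads (directory, then table) before the data access. */
#define ENTRIES 1024

static uint32_t  page_table[ENTRIES];       /* one level-2 table          */
static uint32_t *page_directory[ENTRIES];   /* level 1; the root would    */
                                            /* live in a register (CR3)   */
static uint32_t walk(uint32_t va)
{
    uint32_t *table = page_directory[va >> 22];   /* memory read #1 */
    uint32_t pte = table[(va >> 12) & 0x3FF];     /* memory read #2 */
    return (pte & ~0xFFFu) | (va & 0xFFFu);       /* frame | page offset */
}

int main(void)
{
    page_directory[0] = page_table;               /* PDE 0 -> our table  */
    page_table[5] = 5u << 12;                     /* identity-map page 5 */
    printf("va 0x%x -> pa 0x%x\n", 0x5123u, (unsigned)walk(0x5123u));
    return 0;
}
```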
However, this page table is typically accompanied by a hardware assist called the translation lookaside buffer; this is essentially a cache for the page table, and the lookup process can be seen in this image. It works just as you would expect a cache to work: if a lookup succeeds, you happily continue with the physical fetch; if it fails, you proceed to the aforementioned page walk and you update the cache afterwards.
This hardware assist is usually implemented as a CAM (Content Addressable Memory), something that's mostly used in network processing but is also very useful here. It is a memory component that does the lookup not based on an address but based on 'content', or any generic key (the keys don't have to be contiguous, incrementing numbers). In this case the key is your virtual address, and the resulting lookup yields your physical address. As this CAM is a separate component and very fast, you could state that as long as you hit it, you don't incur any extra memory overhead for virtual-to-physical address translation.
You could ask why they don't put the whole page table in a CAM. Quite simply, CAMs are both quite expensive and, more importantly, quite energy-hungry, so you don't want to make them too big (we wouldn't want a laptop that requires 1 kW to run, do we?).
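
Written out as software, a CAM lookup is morally equivalent to the following linear scan, except that the hardware performs all the comparisons in parallel in a single cycle (the entry count and fields below are made up):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* A fully associative TLB: match on the key (virtual page number),
 * return the payload (physical frame number) on a hit. */
struct cam_entry { bool valid; uint64_t vpn, pfn; };
static struct cam_entry cam[64];

static bool cam_lookup(uint64_t vpn, uint64_t *pfn)
{
    for (int i = 0; i < 64; i++) {   /* hardware checks all 64 at once */
        if (cam[i].valid && cam[i].vpn == vpn) {
            *pfn = cam[i].pfn;
            return true;             /* TLB hit */
        }
    }
    return false;                    /* TLB miss: fall back to the walk */
}

int main(void)
{
    cam[3] = (struct cam_entry){ true, 42, 7 };
    uint64_t pfn = 0;
    if (cam_lookup(42, &pfn))
        printf("hit: frame %llu\n", (unsigned long long)pfn);
    else
        printf("miss\n");
    return 0;
}
```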
Sometimes.
The MMU contains a cache of virtual-to-physical address mappings, called a TLB (Translation Lookaside Buffer).
If the page in question is not in the TLB (a TLB miss), the relevant piece of the page table must first be loaded from main memory into that cache, which requires additional memory accesses.
Finally, if the page cannot be found at all, a trap is issued to the CPU (a page fault), and the CPU has an opportunity to fix this, e.g. by allocating memory, or loading the piece from a file or from swap space.
The details of how this is done vary between architectures. On some, a TLB miss involves the CPU in configuring the TLB, though on most this is automatic (but the CPU has to flush the TLB when doing a context switch and load a new page table for, e.g., a new process).
More info e.g. here https://www.kernel.org/doc/gorman/html/understand/understand006.html
