I am not able to entirely grasp the concept of synonyms or aliasing in VIPT caches.
Consider the address split as shown in the diagram:
Here, suppose we have 2 pages with different VAs mapped to the same physical address (or frame number).
The page-number part of the VA (bits 13-39), which differs between the two, gets translated to the PFN of the PA (bits 12-35), and the PFN remains the same for both VAs since they map to the same physical frame.
Now the page-offset part (bits 0-13) of both VAs is the same, since the data they want to access within that frame is the same.
As the page-offset part of both VAs is the same, bits 5-13 will also be the same, so the index (set number) is the same, and hence there should be no aliasing, as only a single set/index maps to a given physical frame.
How is bit 12, as shown in the diagram, responsible for aliasing? I am not able to understand that.
It would be great if someone could give an example with the help of addresses.
Note: this diagram has a minor error that doesn't affect the question: 36 - 12 = 24-bit tags for 36-bit physical addresses, not 28. MIPS64 R4x00 CPUs do in fact have 40-bit virtual, 36-bit physical addresses, and 24-bit tags, according to chapters 4 and 11 of the manual.
This diagram is from http://www.cse.unsw.edu.au/~cs9242/02/lectures/03-cache/node8.html which does label it as being for MIPS R4x00.
The page offset is bits 0-11, not 0-13. Look at your bottom diagram: the page offset is the low 12 bits, so you have 4k pages (like x86 and other common architectures).
If any of the index bits come from above the page offset, VIPT no longer behaves like a PIPT with free translation for the index bits. That's the case here.
A process can have the same physical page (frame) mapped to 2 different virtual pages.
Your claim that "the page-number part of the VA (bits 13-39), which differs, gets translated to the PFN of the PA (bits 12-35), and the PFN remains the same for both VAs" is totally bogus. Translation can change bit #12. So one of the index bits really is virtual and not also physical, so two entries for the same physical line can go in different sets.
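Since the question asked for an example with addresses, here is a minimal sketch (my own numbers, not taken from the exact diagram): 4 KiB pages, 32-byte lines, and a set index drawn from VA bits 5-13, so index bits 12-13 sit above the page offset and translation is allowed to change them.

```c
/* A worked example (illustrative numbers): 4 KiB pages (offset = bits 0-11),
 * 32-byte lines (offset = bits 0-4), and a virtually indexed L1 whose set
 * index comes from VA bits 5-13.                                            */
#include <stdio.h>
#include <stdint.h>

#define LINE_BITS   5                     /* 32-byte lines                   */
#define INDEX_BITS  9                     /* set index = VA bits 5..13       */
#define INDEX_MASK  ((1u << INDEX_BITS) - 1)

static unsigned l1_set(uint64_t va) {
    return (unsigned)((va >> LINE_BITS) & INDEX_MASK);
}

int main(void) {
    /* Assume the OS has mapped both of these page-aligned VAs to the same
     * physical frame (a synonym).  They differ in VA bit 12.                */
    uint64_t va1 = 0x1000;                /* bit 12 = 1                      */
    uint64_t va2 = 0x4000;                /* bit 12 = 0                      */

    printf("VA 0x%llx -> set %u\n", (unsigned long long)va1, l1_set(va1));
    printf("VA 0x%llx -> set %u\n", (unsigned long long)va2, l1_set(va2));
    /* Prints set 128 and set 0: the same physical line can be cached in two
     * different sets at once, which is exactly the synonym/aliasing problem. */
    return 0;
}
```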
I think my main confusion is regarding the page offset range. Is it the same for both PA and VA (that is, 0-11), or is it 0-12 for VA and 0-11 for PA? Will they always be the same?
It's always the same for PA and VA. The page offset isn't marked on the VA part of your diagram, only the range of bits used as the index.
It wouldn't make sense for it to be any different: virtual and physical memory are both byte-addressable (or word-addressable). And of course a page frame (physical page) is the same size as a virtual page. Right or left shifting an address during translation from virtual to physical would make no sense.
As discussed in comments:
I did eventually find http://www.cse.unsw.edu.au/~cs9242/02/lectures/03-cache/node8.html (which includes the diagram in the question!). It says the same thing: physical tagging does solve the cache homonym problem as an alternative to flushing on context switch.
But not the synonym problem. For that, you can have the OS ensure that bit 12 of every VA = bit 12 of every PA. This is called page coloring.
Page coloring would also solve the homonym problem without the hardware doing overlapping tag bits, because it gives 1 more bit that's the same between physical and virtual address. phys idx = virt idx. (But then the HW would be relying on software to be correct, if it wanted to depend on this invariant.)
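As a rough sketch (the helper below is mine, not from the linked page), the invariant that page coloring asks the OS to maintain is easy to state in code: VA bit 12 must equal PA bit 12 for every mapping it creates, so the "virtual" index bit is also physical.

```c
/* Hedged sketch of the page-coloring invariant for a 4 KiB-page system.     */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define COLOR_BIT 12    /* the index bit that lies above the 4 KiB page offset */

static bool same_color(uint64_t va, uint64_t pa) {
    return ((va >> COLOR_BIT) & 1) == ((pa >> COLOR_BIT) & 1);
}

int main(void) {
    /* A coloring-aware page allocator would accept the first mapping and
     * pick a different frame for the second.                                */
    printf("VA 0x3000 -> PA 0x7000: %s\n", same_color(0x3000, 0x7000) ? "ok" : "color mismatch");
    printf("VA 0x3000 -> PA 0x6000: %s\n", same_color(0x3000, 0x6000) ? "ok" : "color mismatch");
    return 0;
}
```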
Another reason for having the tag overlap the index is write-back during eviction:
Outer caches are almost always PIPT, and memory itself obviously needs the physical address. So you need the physical address of a line when you send it out into the rest of the memory hierarchy.
A write-back cache needs to be able to evict dirty lines (send them to L2 or to physical RAM) long after the TLB check for the store was done. Unlike a load, you don't still have the TLB result floating around unless you stored it somewhere. (Related: How does the VIPT to PIPT conversion work on L1->L2 eviction?)
Having the tag include all the physical address bits above the page offset solves this problem: given the page-offset index bits and the tag, you can construct the full physical address.
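A minimal sketch of that reconstruction, with sizes matching the discussion above (4 KiB pages, 32-byte lines, 9-bit set index, tag = all physical bits above the page offset); the function and constant names are mine:

```c
/* Sketch: rebuild the physical line address on eviction from what the cache
 * itself stores, assuming 4 KiB pages (PAGE_SHIFT = 12), 32-byte lines, a
 * 9-bit set index (VA bits 5..13) and a tag holding PA bits 12..35.         */
#include <stdio.h>
#include <stdint.h>

#define LINE_BITS    5
#define PAGE_SHIFT  12
/* Only the index bits below the page boundary are physical "for free".      */
#define PHYS_INDEX_MASK ((1u << (PAGE_SHIFT - LINE_BITS)) - 1)

static uint64_t physical_line_address(uint32_t tag, uint32_t set)
{
    return ((uint64_t)tag << PAGE_SHIFT)                      /* PA bits 12..35 */
         | ((uint64_t)(set & PHYS_INDEX_MASK) << LINE_BITS);  /* PA bits 5..11  */
    /* The index bits above the page offset (VA bits 12..13) are ignored:
     * the tag already supplies the true physical value of those bits.       */
}

int main(void) {
    /* e.g. a dirty line with tag 0xABC sitting in set 0x1A3                 */
    printf("evict to PA 0x%llx\n",
           (unsigned long long)physical_line_address(0xABC, 0x1A3));
    return 0;
}
```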
(Another solution would be a write-through cache, so you do always have the physical address from the TLB to send with the data, even if it's not reconstructable from the cache tag+index. Or for read-only caches, e.g. instruction caches, there is no write-back; eviction = drop.)
Related
While studying virtual memory concepts, I understood that a virtual address (generated by a processor to access a memory location) contains a page number and a page offset. We use a page table to get the physical address (essentially the frame number) corresponding to this page number.
Now, if these addresses (physical/virtual) operate in terms of pages/frames, how does the processor access a cache which operates in terms of blocks/lines?
Also, if the virtual address consists of only a page number and a page offset, where do the tag bits come from which are used to check whether the cache set (specified by the index/set bits) contains the required data or not?
I figured out the answer to this question.
The same address can be used/interpreted under two different addressing schemes. (Thanks @PeterCordes for pointing this out.)
Scheme 1 (To access the TLB): PageNumber + PageOffset
Scheme 2 (To access the cache): Tag + Set/Index + Offset
Usually in VIPT caches, the page number comes from the higher-order TAG bits, and the page offset is made up of the lower-order TAG bits along with the SET and OFFSET bits. To prevent aliasing (multiple virtual addresses mapping to the same physical address), it is important that the SET/INDEX bits come entirely from the page offset. This restriction limits the size of the cache.
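A small sketch of those two views of one address; the sizes are purely illustrative assumptions of mine (4 KiB pages, 64-byte lines, 64 sets, so index + offset exactly covers the page offset):

```c
/* The same 32-bit address, carved up once for the TLB and once for a VIPT
 * L1 cache.  Sizes are illustrative: 4 KiB pages, 64-byte lines, 64 sets.   */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT   12                  /* 4 KiB pages                      */
#define OFFSET_BITS   6                  /* 64-byte cache lines              */
#define INDEX_BITS    6                  /* 64 sets                          */

int main(void) {
    uint32_t va = 0x0041A37Cu;           /* arbitrary example address        */

    /* Scheme 1: how the TLB sees it.                                        */
    uint32_t vpn         = va >> PAGE_SHIFT;
    uint32_t page_offset = va & ((1u << PAGE_SHIFT) - 1);

    /* Scheme 2: how the cache sees the very same bits.                      */
    uint32_t line_offset = va & ((1u << OFFSET_BITS) - 1);
    uint32_t set         = (va >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
    uint32_t tag_bits    = va >> (OFFSET_BITS + INDEX_BITS);

    printf("TLB view:   VPN=0x%x  page_offset=0x%x\n", vpn, page_offset);
    printf("cache view: tag=0x%x  set=%u  line_offset=%u\n",
           tag_bits, set, line_offset);
    /* Because OFFSET_BITS + INDEX_BITS == PAGE_SHIFT here, the set and line
     * offset come entirely from the page offset, so the index is the same
     * whether you use the virtual or the physical address.                  */
    return 0;
}
```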
This question comes in context of a section on virtual memory in an undergraduate computer architecture course. Neither the teaching assistants nor the professor were able to answer it sufficiently, and online resources are limited.
Question:
Suppose a processor with the following specifications:
8KB pages
32-bit virtual addresses
28-bit physical addresses
a two-level page table, with a 1KB page table at the first level, and 8KB page tables at the second level
4-byte page table entries
a 16-entry 8-way set associative TLB
in addition to the physical frame (page) number, page table entries contain a valid bit, a readable bit, a writeable bit, an executable bit, and a kernel-only bit.
Now suppose this processor has a 32KB L1 cache whose tags are computed based on physical addresses. What is the minimum associativity that cache must have to allow the appropriate cache set to be accessed before computing the physical address that corresponds to a virtual address?
Intuition:
My intuition is that if the number of indices in the cache and the number of virtual pages (aka page table entries) are evenly divisible by each other, then we could retrieve the bytes contained within the physical page directly from the cache without ever computing that physical page, thus providing a small speed-up. However, I am unsure if this is the correct intuition and definitely don't know how to follow through with it. Could someone please explain this?
Note: I have computed the number of page table entries to be 2^19, if that helps anyone.
What is the minimum associativity that cache must have to allow the appropriate cache set to be accessed before computing the physical address that corresponds to a virtual address?
They've only specified that the cache is physically tagged.
You can always build a virtually indexed cache, no minimum associativity. Even direct-mapped (1 way per set) works. See Cache Addressing Methods Confusion for details on VIPT vs. PIPT (and VIVT, and even the unusual PIVT).
For this question not to be trivial, I assume they also meant "without creating aliasing problems", so VIPT is just a speedup over PIPT (physically indexed, physically tagged). You get the benefit of allowing TLB lookup in parallel with fetching tags (and data) for the ways of the indexed set, without any downsides.
My intuition is that if the number of indices in the cache and the number of virtual pages (aka page table entries) are evenly divisible by each other, then we could retrieve the bytes contained within the physical page directly from the cache without ever computing that physical page
You need the physical address to check against the tags; remember your cache is physically tagged. (Virtually tagged caches do exist, but typically have to get flushed on context switches to a process with different page tables = different virtual address space. This used to be used for small L1 caches on old CPUs.)
Having both numbers be a power of 2 is normally assumed, so they're always evenly divisible.
Page sizes are always a power of 2 so you can split an address into page number and offset-within-page by just taking different ranges of bits in the address.
Small/fast cache sizes also always have a power of 2 number of sets so the index "function" is just taking a range of bits from the address. For a virtually-indexed cache: from the virtual address. For a physically-indexed cache: from the physical address. (Outer caches like a big shared L3 cache may have a fancier indexing function, like a hash of more address bits, to avoid aliasing for addresses offset from each other by a large power of 2.)
The cache size might not be a power of 2, but you'd do that by having a non-power-of-2 associativity (e.g. 10 or 12 ways is not rare) rather than a non-power-of-2 line size or number of sets. After indexing a set, the cache fetches the tags for all the ways of that set and compares them in parallel. (For fast L1 caches, the data selected by the line-offset bits is often fetched in parallel too; the comparators then just mux that data into the output, or raise a flag for no match.)
Requirements for VIPT without aliasing (like PIPT)
For that case, you need all index bits to come from below the page offset. They translate "for free" from virtual to physical so a VIPT cache (that indexes a set before TLB lookup) has no homonym/synonym problems. Other than performance, it's PIPT.
My detailed answer on Why is the size of L1 cache smaller than that of the L2 cache in most of the processors? includes a section on that speed hack.
Virtually indexed physically tagged cache Synonym shows a case where the cache does not have that property, and needs page coloring by the OS to avoid synonym problems.
How to compute cache bit widths for tags, indices and offsets in a set-associative cache and TLB has some more notes about cache size / associativity that give that property.
Formula:
min associativity = cache size / page size
e.g. a system with 8kiB pages needs a 32kiB L1 cache to be at least 4-way associative so that index bits only come from the low 13.
A direct-mapped cache (1 way per set) can only be as large as 1 page: byte-within-line and index bits total up to the byte-within-page offset. Every byte within a direct-mapped (1-way) cache must have a unique index:offset address, and those bits come from contiguous low bits of the full address.
To put it another way, 2^(idx_bits + within_line_bits) is the total cache size with only one way per set. 2^N is the page size, for a page offset of N (the number of byte-within-page address bits that translate for free).
The actual number of sets (in this case = lines) depends on the line size and page size. Using smaller / larger lines would just shift the divide between offset and index bits.
From there, the only way to make the cache bigger without indexing from higher address bits is to add more ways per set, not more sets.
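A tiny sketch of that formula applied to the numbers from the question above (8 KiB pages, 32 KiB physically-tagged L1); the variable names are mine:

```c
/* min associativity = cache size / page size, so that each way is at most
 * one page and all index bits come from the page offset.                    */
#include <stdio.h>

int main(void) {
    unsigned page_size  = 8  * 1024;     /* 8 KiB pages (as in the question) */
    unsigned cache_size = 32 * 1024;     /* 32 KiB L1                        */

    unsigned min_ways = cache_size / page_size;
    printf("minimum associativity = %u ways\n", min_ways);   /* 4            */

    /* Equivalent check: with that many ways, one way == one page, so the    */
    /* index + line-offset bits fit inside the 13-bit page offset.           */
    unsigned way_size = cache_size / min_ways;
    printf("way size = %u bytes (page size: %u bytes)\n", way_size, page_size);
    return 0;
}
```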
As far as my understanding goes, the CPU always generates a virtual address that is made-up of 2 parts- the page number and the page offset. The page number is used for indexing the page table (the corresponding mapping gives the starting address of the frame in the RAM). Now, please consider the following questions. Consider that the word size of the machine is 4 bytes, and the page size is equal to the frame size = 4096 bytes.
Supposing that the page number is 4 and the offset is 3. Then Page 4 in the logical memory maps to frame 8 in the virtual memory. This means that the starting address of the frame is 8.
Now, each frame will contain 4096/4 = 1024 words. Does the offset refer to a word inside the frame, since the machine will always fetch a word at a time? What I mean is: does it mean the 3rd word in frame 8?
Is the particular word given to the CPU, or the entire frame? If the former, then why does everyone talk about transfers in terms of frames and pages rather than words?
Suppose a page fault occurs. What this means is that the particular page is not in memory. Does it mean that the physical address mapped contains some other page? Does the mapping even exist in such a case, when the invalid bit is 1?
Can someone clear-up things for me? One moment I seem to get it, and the very next, I get into a maze.
The key point of paging is that it deals with "chunks" of memory.
It is a map, a function, that translates virtual addresses into physical addresses, but not on an address-by-address basis. Rather, a "chunk" of contiguous virtual addresses is translated together into another contiguous "chunk" of, now physical, addresses.
You can think of it as a "translation" or "shuffle" of "chunks" of memory.
The correct term for "chunk" is page.
If you try to do a sample mapping, you can see that each page contains a set of addresses that all have a peculiarity: their lower bits don't change when passing from virtual to physical. The upper bits, instead, are arbitrary.
This dichotomy of the address value defines the Offset and Page/Frame number.
The offset is the part of the address value that doesn't undergo any translation.
In a page of 4 KiB there are 4096 addresses, each one with its own offset, so the offset has size log2(4096) = log2(2^12) = 12 bits.
In short the page size determines the offset size.
It is necessary to break the memory into pages and not words or bytes; or, from another point of view, it is necessary to group the addresses to translate into pages.
Without pages, the metadata used for the translation, in jargon the page tables of various levels, would occupy more memory than the memory under translation!
Offsets are relative to their page/frame thanks to the way they are defined: the offset 1024 (400h in hex) in frame 8 means the address 8000h + 400h = 8400h; if the page is instead mapped to frame 12, the offset 1024 is still 1024 bytes after the beginning of the frame: 0c000h + 400h = 0c400h.
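A small sketch of that arithmetic (the helper name is mine): the offset is simply added to whichever frame base the page is currently mapped to.

```c
/* Same offset, different frames: only the frame base changes on a remap.    */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096u                  /* 4 KiB pages, as in the text      */

static uint32_t phys_addr(uint32_t frame, uint32_t offset) {
    return frame * PAGE_SIZE + offset;   /* frame base + unchanged offset    */
}

int main(void) {
    uint32_t offset = 0x400;             /* 1024                             */
    printf("frame 8:  0x%05x\n", phys_addr(8,  offset));   /* 0x08400        */
    printf("frame 12: 0x%05x\n", phys_addr(12, offset));   /* 0x0c400        */
    return 0;
}
```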
Being an address, an offset usually denotes a byte, even in architectures where bytes are not addressable. However, this is not a standard convention; to know whether an offset denotes a word or a byte (e.g. whether offset 10 of frame 0 is byte 40 or byte 10), check the architecture manual. The first sections are usually dedicated to establishing the terminology used throughout the book.
Paging happens before the CPU accesses the memory; you can think of it as a high-level process. The unit that accesses the memory/bus is mostly unaware of it, so the CPU reads the data that the instruction is telling it to read (a word, a byte, and so on).
People talk about moving a page because a page is the smallest unit that can be characterized.
You can mark a page as non present, but not a word. You can make a page read-only but not a word.
If you need to map, say 16 bytes, you still need to map a whole page since 16 bytes are not characterizable. So we might as well read a whole page.
When a page-fault occurs it means that the page accessed is, at any level in the page-tables, non present.
This may mean a wide range of things, from the fact that the Present bit has simply been toggled (with the page still there), to the fact that the page has been saved to disk and zeroed in memory.
Since the mapping function is total, meaning that every value is a valid value, the CPU needs a way to know when a value is not valid.
The Present bit does this: it tells the CPU that a translation must not be performed and that an exception must be raised instead.
The OS uses this exception to be notified when a page is needed; it doesn't need to reassign the mapping to another page or zero the memory.
When people say that a page is removed, they mean that it is removed from the mapping; all modern OSes also zero the page to prevent leaking information to other processes, though.
So if a physical frame is not mapped, it doesn't mean that another page in another process is mapping it; it simply means that that range of addresses cannot be accessed.
As said above there are a lot of reasons for an OS to do this, including protection.
You have things a bit backwards. The operating system defines a logical address space for each process. The logical address space is divided into units of memory called PAGES.
The operating system logically maps the pages of the address space to either physical page frames or secondary storage. If the operating system maps pages to secondary storage, then it is using virtual memory.
In ye olde days, all systems that did logical memory translation always did virtual memory mappings to secondary storage. That is why the terms virtual memory translation and logical memory translation are often conflated. These days it is becoming increasingly common to have logical translation without virtual memory.
All address accesses by a process are to logical addresses. The processor translates the logical address to a page frame. If a logical page exists but is mapped to secondary storage, accessing that page triggers a page fault. The operating system must handle the fault, remap the logical/virtual page to a physical page frame, load the data from secondary storage into the page frame, and restart the instruction.
Supposing that the page number is 4 and the offset is 3. Then Page 4 in the logical memory maps to frame 8 in the virtual memory. This means that the starting address of the frame is 8.
This makes no sense. A logical page is virtual when it is mapped to secondary storage. If the page number is 4, the 4th logical page can:
a) have no mapping at all (access violation)
b) map to a physical page frame
c) map to a secondary storage (virtual memory)
Now, each frame will contain 4096/4 = 1024 words. Does the offset refer to a word inside the frame, since the machine will always fetch a word at a time? What I mean is: does it mean the 3rd word in frame 8?
In nearly all (if not all) current processors there are no memory words; only bytes. The system bus fetches memory and the "word size" of the bus can be (and often is) different from the "word size" of the processor.
Is the particular word given to the CPU, or the entire frame? If the former, then why does everyone talk about transfers in terms of frames and pages rather than words?
The process sees transfers in sizes related to the instruction being executed. The operand size can be larger or smaller than the machine word. The bus transfers data to memory and that size is frequently different from the word size of the machine.
Suppose a page fault occurs. What this means is that the particular page is not in memory. Does it mean that the physical address mapped contains some other page? Does the mapping even exist in such a case, when the invalid bit is 1?
I gave the three possibilities for logical page mappings above. How those are indicated are system specific. Some systems use 2 bits to indicate a, b, or c. Others use a single bit to indicate (b) and require the operating system to determine whether it's (a) or (c).
Whether or not a page fault is triggered depends upon the state of the page table.
Generally a page fault means that the page frame is not in memory. However, it is often possible for the physical page frame to be in memory but not mapped in the page table (a soft page fault). (This occurs when the operating system has unmapped page frames to free some up but has not reallocated them.) In this case, the operating system simply needs to update the page table to point to the page frame and restart the instruction (no need to load from secondary storage).
I was reading the dinosaur book on Operating System about memory management. I assume this is one of the best books but there's something about paging written in the book which I don't get.
The book says, "A 32-bit CPU uses 32-bit addresses, meaning that a given process space can only be 2^32 bytes (4 GB). Therefore, paging lets us use physical memory that is larger than what can be addressed by the CPU's address pointer length."
I don't quite get this part, because if the CPU can only refer to 2^32 different physical addresses, then if there were 2^32 + 1 physical addresses, the last address couldn't be reached by the CPU. So how can paging help with this?
Also, earlier the book says, "Frequently, on a 32-bit CPU, each page-table entry is 4 bytes long, but that size can vary as well. A 32-bit entry can point to one of 2^32 physical page frames. If the frame size is 4 KB (2^12), then a system with 4-byte entries can address 2^44 bytes (or 16 TB) of physical memory."
I don't see how that is even possible, even in ideal/theoretical situations, because as I understand it, part of the virtual address refers to an entry of the page table while the other part refers to the offset of a particular byte in that page. So in the above-mentioned situation put forward by the book, even if the CPU could point to 2^32 different page-table entries, it wouldn't be able to read any particular byte within the corresponding page because it doesn't specify the offset.
Maybe I've misunderstood the book or there is some part that I missed out. I much appreciate your help! Thanks a lot!
It sounds like you need to burn your book. It's useless.
"[P]aging lets us use physical memory that is larger than what can be addressed by the CPU’s address pointer length" is complete nonsense (unless the book is assigning two different meanings to the term "paging," in which it is still useless).
Let's start with logical addressing. A logical address is composed of a page selector and an offset into the page. Some number (P) of bits will be assigned to the page selector and the remainder will be assigned to the offset. If pages are 2^9 bytes, there are 23 bits in the page selector and 9 bits for the byte offset within the page.
Note that the 9/23 split is arbitrary on my part. Most systems these days use larger pages, but these values have been used in the past.
The 23 bits in the page selector are indices into the process page table.
The size of entries in the page table is going to be a power of 2 (and I have never seen one less than 4). For our purposes let's say that each entry is 8 bytes long.
The bits in the page table entry are divided between those that index physical page frames and control bits. Let's make the arbitrary choice that 32 bits index page frames and 32 bits are used for control.
That means the system can theoretically MANAGE 2^32 pages that are 2^9 bytes large or a total of 2^41 bytes. If we were to increase the page size from 2^9 to 2^20, the system could theoretically MANAGE 2^52 (32+20) bytes of memory.
Note that each process can still only ACCESS 2^32 bytes. But in my 9-bit page system, 2^9 processes could each access 2^32 bytes simultaneously on a system with 2^41 physical bytes of memory (ignoring the need for a shared system address space in this gross oversimplification).
Note that if I change my page table entries to 32 bits and assign 9 of those bits to control and 23 to page frame selection, the system can only MANAGE 2^32 bytes of memory (and that was more common than managing more than 2^32 bytes).
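A tiny sketch of the MANAGE arithmetic above (the variable names are mine); it prints exponents rather than the full byte counts:

```c
/* Manageable physical memory = 2^(frame-number bits + page-offset bits).    */
#include <stdio.h>

int main(void) {
    unsigned offset_bits = 9;   /* 2^9-byte pages (the arbitrary pick above) */
    unsigned frame_bits  = 32;  /* PTE bits that index physical page frames  */

    printf("64-bit PTE example: can manage 2^%u bytes\n",
           frame_bits + offset_bits);                       /* 2^41          */
    printf("32-bit PTE example: can manage 2^%u bytes\n",
           23 + offset_bits);                               /* 2^32          */
    return 0;
}
```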
You quote: "Frequently, on a 32-bit CPU, each page-table entry is 4 bytes long, but that size can vary as well. A 32-bit entry can point to one of 2^32 physical page frames. If the frame size is 4 KB (2^12), then a system with 4-byte entries can address 2^44 bytes (or 16 TB) of physical memory."
This is theoretical BS. A system that used all 32 bits of the page table entry as an index to page frames could not function. There would have to be some control bits in the page table.
The quotes you are taking from this book are highly misleading. Few (if any) 32-bit processors could even access 2^32 bytes of memory, due to address line limitations.
While it is possible that the use of logical pages could allow a processor to manage more memory than the logical address size suggests, that was not the purpose of managing memory in pages.
The purpose of paging—which in its normal and customary usage refers to the movement of virtual memory pages between physical page frames and secondary storage—is to allow processes to access more virtual memory than there was physical memory on the system.
There is an additional system of memory management that is (thankfully) dying out: segments. Segments also provided a means for systems to manage more physical memory than the logical address space would allow.
Let's consider that the page size is equal to 1 KB. One page-table entry
takes 2 B. The table of pages takes no more than one page (so <= 1 KB).
Can we conclude that the size of operational memory is <= 512 KB?
The correct answer is no; however, I can't understand why. To me, the answer is yes. Look at my reasoning and show me where I am wrong, please.
The table contains <= 1024 B / 2 B = 512 = 2^9 entries. The page size is 1024 B = 2^10 B, so the offset takes no more than 10 bits. The page number takes <= 9 bits, because we have 512 = 2^9 entries. Hence, 9 + 10 = 19 address bits, which make it possible to address <= 2^19 B = 2^9 KB = 512 KB.
Where am I wrong?
A page table entry is 2 B, so a table entry can point to one of 2^16 physical pages. That's 2^16 kiB (because pages are 1 kiB). So a kernel can make use of up to 65536 kiB (64 MiB) of physical memory.
This assumes that the entire page table entry (PTE) is a physical page number. That can work if the position of a PTE within the page table associates it with a virtual address, i.e. 9 bits of a virtual address are used as an index into the PT, to select a PTE.
We can't assume that the machine does work this way, but similarly we can't assume that it doesn't, until we have more information.
Given just the info in the question, we can conclude:
Usable physical memory can be anything up to 65536kiB.
Up to 512 virtual pages can be mapped at once (512kiB).
Not much else! Many possibilities are open.
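A small sketch of where those two bounds come from, using only the constants given in the question (1 KiB pages, 2-byte PTEs, a one-page page table) and, for the upper bound, the assumption that the whole PTE is a frame number:

```c
/* 2^16 possible frame numbers * 1 KiB frames = 64 MiB of usable physical
 * memory; 512 PTEs in a one-page page table = 512 KiB mapped at once.       */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t page_size   = 1024;                 /* 1 KiB pages              */
    unsigned pte_bits    = 16;                   /* 2-byte PTEs              */
    uint64_t ptes_per_pt = 1024 / 2;             /* one-page page table      */

    uint64_t max_phys = (1ull << pte_bits) * page_size;
    printf("max physical memory: %llu KiB (%llu MiB)\n",
           (unsigned long long)(max_phys / 1024),
           (unsigned long long)(max_phys / (1024 * 1024)));  /* 65536 / 64   */

    printf("mappable at once: %llu KiB\n",
           (unsigned long long)(ptes_per_pt * page_size / 1024));  /* 512    */
    return 0;
}
```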
A more likely design would have a valid/invalid bit in each PTE.
If a PTE's contents (rather than its location) indicate which virtual page it maps, it would be possible to map a large virtual address space onto very few physical pages. This address space would necessarily be sparsely mapped, but the kernel could give a process the illusion of having a lot of active mappings by keeping a separate table of what the mappings should be.
So on a page fault, the kernel checks to see if the page should have been mapped. If so, it puts the mapping in the PTE and resumes the process. (This kind of thing happens in real OSes, and it's called a minor page fault.)
In this design, the kernel's use of the page table to hold a subset of the real mappings would be analogous to a software-managed TLB. (Of course, this architecture could have a non-coherent TLB that requires invalidation after every PTE modification, so this would probably perform horribly).