In the real-mode segmented memory model, a segment always begins on a paragraph boundary. A paragraph is 16 bytes in size, so the segment address is always divisible by 16. What is the reason for, or benefit of, having segments on paragraph boundaries?
It's not so much a benefit as an axiom - the 8086's real mode segmentation model is designed at the hardware level such that a segment register specifies a paragraph boundary.
The segment register supplied the upper 16 bits of a base address within the 8086's 20-bit address space; the lower 4 bits of that base address were in essence forced to zero.
The segmented architecture was one way to have the 8086's 16-bit register architecture be able to address a full megabyte(!) of address space (which requires 20 bits of addressing).
For a little more history, the next step that Intel took in the x86 architecture was to abstract the segment registers from directly defining the base address. That was 286 protected mode, where the segment register held a 'selector' that, instead of defining the bits of a physical base address, was an index into a set of descriptor tables that held information about the physical address, permissions for access to the physical memory, and other things.
The segment registers in modern 32-bit or larger x86 processors still do that. But with the address offsets being able to specify a full 32-bits of addressing (or 64-bits on the x64 processors) and page tables able to provide virtual memory semantics within the segment defined by a selector, the programming model essentially does away with having to manipulate segment registers at the application level. For the most part, the OS sets the segment registers up once, and nothing else needs to deal with them. So programmers generally don't even need to know they exist anymore.
The 8086 had 20 address lines. The segment was mapped to the top 16, leaving the bottom 4 lines, or 16 addresses, as the within-paragraph offset.
The segment register stores the address of the memory location where that segment starts, but a segment register holds only 16 bits of information. Those 16 bits are converted to 20 bits by appending 4 zero bits to the right-hand end of the address. If a segment register contains 1000H, it is left-shifted to get 10000H, which is now 20 bits.
Because the conversion appends 4 zero bits to the end of the address, every segment in memory must begin at a memory location whose last 4 bits are 0.
For example:
If we wanted a segment to start at memory location 10001H, we could not express that base address, because its last 4 bits are not 0.
Whatever value is in the segment register gets 4 zero bits appended at the right-hand end to form the 20-bit base, so there is no way to encode such a starting address.
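As a minimal sketch of that calculation (plain C, with names of my own; not from any particular toolchain):

```c
#include <stdint.h>
#include <stdio.h>

/* Real-mode physical address: shift the 16-bit segment left by 4
   (i.e. append four zero bits) and add the 16-bit offset.          */
static uint32_t phys_addr(uint16_t segment, uint16_t offset)
{
    return ((uint32_t)segment << 4) + offset;   /* 20-bit result */
}

int main(void)
{
    printf("%05X\n", phys_addr(0x1000, 0x0000)); /* 10000: 1000H shifted left */
    printf("%05X\n", phys_addr(0x1000, 0x0001)); /* 10001: reachable via the
                                                    offset, but never expressible
                                                    as a segment base            */
    return 0;
}
```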
My teacher has asked me to differentiate the maximum memory space of a 1 MB microprocessor and a 4 GB microprocessor. Does anyone know how to answer this question beyond just stating the difference in size?
https://i.stack.imgur.com/Q4Ih7.png
A 32-bit microprocessor can address up to 4 GB of memory, because its registers can contain an address that is 32 bits in size. (A 32-bit number ranges from 0 to 4,294,967,295). Each of those values can represent a unique memory location.
The 16-bit 8086, on the other hand, has 16-bit registers, which only range from 0 to 65,535. However, the 8086 has a trick up its sleeve: it can use memory segments to increase this range up to one megabyte (20 bits). There are segment registers whose values are automatically bit-shifted left by 4 and then added to the regular registers to form the final address.
For example, let's look at video mode 13h on the 8086. This is the 256-color VGA standard with a resolution of 320x200 pixels. Each pixel is represented by a single byte and the desired color is stored in that byte. The video memory is located at address 0xA0000, but since this value is greater than 16 bits, typically the programmer will load 0xA000 into a segment register like ds or es, then load 0000 into si or di. Once that is done, the program can read from [ds:si] and write to [es:di] to access the video memory. It's important to keep in mind that with this memory addressing scheme, not all combinations of segment and offset represent a unique memory location. Having es = A100/di = 0000 is the same as es=A000/di=1000.
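A small hedged check of that non-uniqueness, using the same shift-and-add rule (plain C, just for the arithmetic):

```c
#include <assert.h>
#include <stdint.h>

/* seg:off -> 20-bit linear address */
static uint32_t linear(uint16_t seg, uint16_t off)
{
    return ((uint32_t)seg << 4) + off;
}

int main(void)
{
    /* A000:1000 and A100:0000 name the same byte of mode-13h video memory. */
    assert(linear(0xA000, 0x1000) == linear(0xA100, 0x0000)); /* both 0xA1000 */
    return 0;
}
```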
I am not able to entirely grasp the concept of synonyms or aliasing in VIPT caches.
Consider the address split as shown in the diagram:
Here, suppose we have 2 pages with different VAs mapped to the same physical address (i.e., the same frame number).
The page-number part of the VA (bits 13-39), which differs between the two, gets translated to the PFN of the PA (bits 12-35), and the PFN remains the same for both VAs because they are mapped to the same physical frame.
Now the page-offset part (bits 0-13) of both VAs is the same, since the data they want to access within that particular frame is the same.
Since the page-offset part of both VAs is the same, bits 5-13 will also be the same, so the index (set number) is the same, and hence there should be no aliasing, as only a single set/index number is mapped to a physical frame number.
How is bit 12, as shown in the diagram, responsible for aliasing? I am not able to understand that.
It would be great if someone could give an example with the help of addresses.
Note: this diagram has a minor error that doesn't affect the question: 36 - 12 = 24-bit tags for 36-bit physical addresses, not 28. MIPS64 R4x00 CPUs do in fact have 40-bit virtual, 36-bit physical addresses, and 24-bit tags, according to chapters 4 and 11 of the manual.
This diagram is from http://www.cse.unsw.edu.au/~cs9242/02/lectures/03-cache/node8.html which does label it as being for MIPS R4x00.
The page offset is bits 0-11, not 0-13. Look at your bottom diagram: the page offset is the low 12 bits, so you have 4k pages (like x86 and other common architectures).
If any of the index bits come from above the page offset, VIPT no longer behaves like a PIPT with free translation for the index bits. That's the case here.
A process can have the same physical page (frame) mapped to 2 different virtual pages.
Your claim that "the pageno part of VA (bits 13-39) which are different gets translated to PFN of PA (bits 12-35) and the PFN remains same for both the VA's" is totally bogus. Translation can change bit #12. So one of the index bits really is virtual and not also physical, so two entries for the same physical line can go in different sets.
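Since you asked for an example with addresses, here is a made-up one (simplified relative to the diagram): 4 KiB pages, 32-byte lines, and a direct-mapped 8 KiB L1, so the index is bits 5-12 and only bit 12 comes from above the page offset.

```c
#include <stdint.h>
#include <stdio.h>

#define LINE_BITS  5    /* 32-byte lines                    */
#define INDEX_BITS 8    /* 256 sets -> 8 KiB direct-mapped  */

static uint32_t set_index(uint32_t va)
{
    return (va >> LINE_BITS) & ((1u << INDEX_BITS) - 1);   /* bits 5-12 */
}

int main(void)
{
    /* Two virtual mappings of the same physical frame, differing in bit 12
       of the virtual page number; the page offset (0x040) is identical.   */
    uint32_t va1 = 0x00010040;   /* bit 12 = 0 */
    uint32_t va2 = 0x00023040;   /* bit 12 = 1 */

    printf("set(va1) = 0x%02X\n", set_index(va1));   /* 0x02 */
    printf("set(va2) = 0x%02X\n", set_index(va2));   /* 0x82 */
    /* Same physical byte, two different sets: a synonym (alias). */
    return 0;
}
```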
I think my main confusion is regarding the page offset range. Is it the same for both PA and VA (that is, 0-11), or is it 0-12 for the VA and 0-11 for the PA? Will they always be the same?
It's always the same for PA and VA. The page offset isn't marked on the VA part of your diagram, only the range of bits used as the index.
It wouldn't make sense for it to be any different: virtual and physical memory are both byte-addressable (or word-addressable). And of course a page frame (physical page) is the same size as a virtual page. Right or left shifting an address during translation from virtual to physical would make no sense.
As discussed in comments:
I did eventually find http://www.cse.unsw.edu.au/~cs9242/02/lectures/03-cache/node8.html (which includes the diagram in the question!). It says the same thing: physical tagging does solve the cache homonym problem as an alternative to flushing on context switch.
But not the synonym problem. For that, you can have the OS ensure that bit 12 of every VA = bit 12 of every PA. This is called page coloring.
Page coloring would also solve the homonym problem without the hardware doing overlapping tag bits, because it gives 1 more bit that's the same between physical and virtual address. phys idx = virt idx. (But then the HW would be relying on software to be correct, if it wanted to depend on this invariant.)
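The invariant the OS would be enforcing is just that the color bit survives translation; a sketch, assuming a single color bit at bit 12 over 4 KiB pages:

```c
#include <stdbool.h>
#include <stdint.h>

#define COLOR_MASK (1u << 12)   /* one color bit above the 4 KiB page offset */

/* True if this VA->PA mapping keeps the virtually-indexed bit equal to
   the physically-indexed bit, so virt idx == phys idx.                  */
static bool coloring_ok(uint32_t va, uint32_t pa)
{
    return ((va ^ pa) & COLOR_MASK) == 0;
}
```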
Another reason for having the tag overlap the index is write-back during eviction:
Outer caches are almost always PIPT, and memory itself obviously needs the physical address. So you need the physical address of a line when you send it out the memory hierarchy.
A write-back cache needs to be able to evict dirty lines (send them to L2 or to physical RAM) long after the TLB check for the store was done. Unlike a load, you don't still have the TLB result floating around unless you stored it somewhere. (See also: How does the VIPT to PIPT conversion work on L1->L2 eviction?)
Having the tag include all the physical address bits above the page offset solves this problem: given the page-offset index bits and the tag, you can construct the full physical address.
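A sketch of that reconstruction, reusing the made-up geometry from earlier (32-byte lines, 4 KiB pages; the helper name is mine):

```c
#include <stdint.h>

#define LINE_BITS  5    /* 32-byte lines */
#define PAGE_SHIFT 12   /* 4 KiB pages   */

/* Rebuild the physical line address of a victim from what the cache itself
   stores: the physical tag and the set index.  Only the index bits below
   the page offset are needed; the index bit that overlaps the tag (bit 12
   here) is already contained in the tag.                                  */
static uint64_t victim_phys_line(uint64_t tag, uint32_t set_index)
{
    uint32_t in_page_index = set_index & ((1u << (PAGE_SHIFT - LINE_BITS)) - 1);
    return (tag << PAGE_SHIFT) | ((uint64_t)in_page_index << LINE_BITS);
}
```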
(Another solution would be a write-through cache, so you do always have the physical address from the TLB to send with the data, even if it's not reconstructable from the cache tag+index. Or for read-only caches, e.g. instruction caches, there is no write-back; eviction = drop.)
I have trouble understanding how, in say a 32-bit computer, byte addressing is achieved:
Is the RAM itself byte addressable, meaning the first byte has address 0 and the second 1, etc.? In this case, wouldn't it take 4 read cycles to read a 32-bit word and waste the width of the data bus?
Or does the RAM consist of 32-bit words, meaning address 0 points to the first 4 bytes and the next address points to bytes 5 to 8? In this case I would expect the RAM interface to make byte addressing possible (from the CPU's point of view).
Think of RAM as an 8-bit-wide structure with N entries. N is often the size quoted when referring to memory (256 MB means 256M entries, 2 GB means 2G entries, etc.; B is for bytes). When you access this memory, the smallest unit you can address is one of these entries, which is 8 bits (1 byte). Since you can only access it at byte level, we call it byte-addressable memory.
Now, coming to your question about accessing this memory: we do not just access a byte. Most of the time, memory accesses are sent through caches, which are there to reduce memory access latency. Caches store data at a higher granularity than a byte or word, normally a multiple of words. In doing so, caches exploit a property called "locality". Locality means there is a high chance that we either access this data item or a nearby data item very soon. So fetching not just the byte, but all the adjacent bytes, is not a waste. Think of it as an investment for the future; it saves you multiple data fetches that you would otherwise have done.
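To make the "byte addressable, but fetched in wider chunks" idea concrete, here is a sketch of how a byte address splits on a hypothetical 32-bit-wide memory (the names are mine):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t byte_addr = 6;                 /* byte #6                 */
    uint32_t word_addr = byte_addr >> 2;    /* which 32-bit word: 1    */
    uint32_t byte_lane = byte_addr & 0x3;   /* which byte within it: 2 */

    /* The memory system reads the whole 32-bit word (or, with a cache, the
       whole line containing it); the CPU then selects just the byte lane. */
    printf("word %u, lane %u\n", word_addr, byte_lane);
    return 0;
}
```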
Memory addresses in RAM start at address 0 and are accessed by the CPU through its registers, which may be 8 bits or 32 bits wide depending on the architecture; the value at a specific address is read or written via those registers. If you really need to understand how it works, write a couple of programs in assembly language that navigate physical memory, reading values directly using registers and register move instructions.
I was reading the dinosaur book on Operating System about memory management. I assume this is one of the best books but there's something about paging written in the book which I don't get.
The book says, "A 32-bit CPU uses 32-bit addresses, meaning that a given process space can only be 2^32 bytes (4 TB ). Therefore, paging lets us use physical memory that is larger than what can be addressed by the CPU’s address pointer length."
I don't quite get this part, because if the CPU can only refer to 2^32 different physical addresses, then if there were 2^32 + 1 physical addresses, the last address wouldn't be reachable by the CPU. So how can paging help with this?
Also, earlier the book says "Frequently, on a 32-bit CPU , each page-table entry is 4 bytes long, but that size can vary as well. A 32-bit entry can point to one of 2^32 physical page frames. If frame size is 4 KB (2^12 ), then a system with 4-byte entries can address 2^44 bytes (or 16 TB ) of physical memory."
I don't see how that is even possible in ideal/theoretical situations, because as I understand it, part of the virtual address refers to an entry of the page table while the other part refers to the offset within that page. So in the above-mentioned situation put forward by the book, even if the CPU could point to 2^32 different page-table entries, it wouldn't be able to read any particular byte within a page, because it doesn't specify the offset.
Maybe I've misunderstood the book or there is some part that I missed out. I much appreciate your help! Thanks a lot!
It sounds like you need to burn your book. It's useless.
"[P]aging lets us use physical memory that is larger than what can be addressed by the CPU’s address pointer length" is complete nonsense (unless the book is assigning two different meanings to the term "paging," in which it is still useless).
Let's start with logical addressing. A logical address is composed of a page selector and an offset into the page. Some number (P) of bits will be assigned to the page selector and the remainder will be assigned to the offset. If pages are 2^9 bytes, there are 23 bits in the page selector and 9 bits for the byte offset within the page.
Note that the 9/23 split is arbitrary on my part. Most systems these days use larger pages, but these values have been used in the past.
The 23 bits in the page selector are indices into the process page table.
The size of entries in the page table is going to be a power of 2 (and I have never seen one less than 4). For our purposes let's say that each entry is 8 bytes long.
The bits in a page table entry are divided between those that index physical page frames and control bits. Let's make the arbitrary choice that 32 bits index page frames and 32 bits are used for control.
That means the system can theoretically MANAGE 2^32 pages that are 2^9 bytes large or a total of 2^41 bytes. If we were to increase the page size from 2^9 to 2^20, the system could theoretically MANAGE 2^52 (32+20) bytes of memory.
Note that each process can still only ACCESS 2^32 bytes. But in my 9-bit page system, 2^9 processes could each access their own 2^32 bytes simultaneously on a system with 2^41 physical bytes of memory (ignoring the need for a shared system address space in this gross oversimplification).
Note that if I change my page table entries to 32 bits and assign 9 of those bits to control and 23 to page frame selection, the system can only MANAGE 2^32 bytes of memory (and that was more common than managing more than 2^32 bytes).
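A quick arithmetic check of those numbers (a sketch; the bit splits are the arbitrary ones chosen above):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    unsigned page_offset_bits = 9;    /* 2^9-byte pages                        */
    unsigned frame_index_bits = 32;   /* PTE bits that select a physical frame */

    uint64_t manageable  = (uint64_t)1 << (frame_index_bits + page_offset_bits);
    uint64_t per_process = (uint64_t)1 << 32;   /* 32-bit logical addresses */

    printf("manageable: 2^%u bytes\n", frame_index_bits + page_offset_bits); /* 2^41 */
    printf("processes fully resident at once: %llu\n",
           (unsigned long long)(manageable / per_process));                  /* 2^9 = 512 */
    return 0;
}
```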
You quote: "Frequently, on a 32-bit CPU , each page-table entry is 4 bytes long, but that size can vary as well. A 32-bit entry can point to one of 2^32 physical page frames. If frame size is 4 KB (2^12 ), then a system with 4-byte entries can address 2^44 bytes (or 16 TB ) of physical memory."
This is theoretical BS. A system that used all 32 bits of the page table entry as an index to page frames could not function. There would have to be some control bits in the page table entry.
The quotes you are taking from this book are highly misleading. Few (any?) 32-bit processors could even access 2^32 bytes of memory due to address line limitations.
While it is possible that the use of logical pages could allow a processor to manage more memory than the logical address size suggests, that was not the purpose of managing memory in pages.
The purpose of paging—which in its normal and customary usage refers to the movement of virtual memory pages between physical page frames and secondary storage—is to allow processes to access more virtual memory than there was physical memory on the system.
There is an additional system of memory management that is (thankfully) dying out: segments. Segments also provided a means for systems to manage more physical memory than the logical address space would allow.
I understand what it means to access memory such that it is aligned, but I don't understand why this is necessary. For instance, why can I access a single byte from an address 0x…1, but I cannot access a half word (two bytes) from the same address?
Again, I understand that if you have an address A and an object of size s that the access is aligned if A mod s = 0. But I just don’t understand why this is important at the hardware level.
Hardware is complex; this is a simplified explanation.
A typical modern computer might have a 32-bit data bus. This means that any fetch that the CPU needs to do will fetch all 32 bits of a particular memory address. Since the data bus can't fetch anything smaller than 32 bits, the lowest two address bits aren't even used on the address bus, so it's as if RAM is organised into a sequence of 32-bit words instead of 8-bit bytes.
When the CPU does a fetch for a single byte, the read cycle on the bus will fetch 32 bits and then the CPU will discard 24 of those bits, loading the remaining 8 bits into whatever register. If the CPU wants to fetch a 32 bit value that is not aligned on a 32-bit boundary, it has several general choices:
execute two separate read cycles on the bus to load the appropriate parts of the data word and reassemble them
read the 32-bit word at the address determined by throwing away the low two bits of the address
read some unexpected combination of bytes assembled into a 32-bit word, probably not the one you wanted
throw an exception
Various CPUs I have worked with have taken all four of those paths. In general, for maximum compatibility it is safest to align all n-bit reads to an n-bit boundary. However, you can certainly take shortcuts if you are sure that your software will run on some particular CPU family with known unaligned read behaviour. And even if unaligned reads are possible (such as on x86 family CPUs), they will be slower.
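As a sketch of the first of those options (two aligned bus reads reassembled in software; little-endian byte order is assumed, and the helper names are mine):

```c
#include <stdint.h>
#include <string.h>

/* Pretend this is the only primitive the bus offers: an aligned 32-bit
   read, with the low two address bits forced to zero.                  */
static uint32_t read_aligned32(const uint8_t *mem, uint32_t addr)
{
    uint32_t w;
    memcpy(&w, mem + (addr & ~3u), 4);
    return w;
}

/* Synthesize an unaligned 32-bit load from one or two aligned reads
   (little-endian assumed).                                             */
static uint32_t read_unaligned32(const uint8_t *mem, uint32_t addr)
{
    unsigned shift = (addr & 3u) * 8;
    uint32_t lo = read_aligned32(mem, addr);
    if (shift == 0)
        return lo;                        /* aligned: a single bus cycle */
    uint32_t hi = read_aligned32(mem, addr + 4);
    return (lo >> shift) | (hi << (32 - shift));   /* two cycles + reassembly */
}
```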
The computer always reads memory in fixed-size chunks, which are aligned.
So, if you don't align your data in memory, you will probably have to read more than once.
Example
word size is 8 bytes
your structure is also 8 bytes
if you align it, you'll have to read one chunk
if you don't align it, you'll have to read two chunks
So, it's basically done to speed things up.
The reason for all the alignment rules is the various widths of the cache lines (on the Core 2 architecture, the instruction cache has 16-byte lines, and the data cache has 64-byte lines for L1 and 128-byte lines for L2).
So if you want to store/load data that crosses a cache-line boundary, you need to load and store both cache lines, which hurts performance.
So you just don't do it because of the performance hit; it's that simple.
Try reading a serial port. The data is 8 bits wide.
Nice hardware designers ensure it lies on the least significant byte of the word.
If you have a C structure with elements that are not word aligned (say, for backward compatibility or to conserve memory),
then some members within the structure will not be word aligned.
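For instance (a sketch; the packed attribute is compiler-specific, GCC/Clang syntax shown):

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* A byte-packed layout such as a wire format or legacy record: the 32-bit
   field lands at offset 1, which is not word aligned.                     */
struct __attribute__((packed)) record {
    uint8_t  flags;
    uint32_t value;
};

int main(void)
{
    printf("offsetof(value) = %zu\n", offsetof(struct record, value)); /* 1 */
    return 0;
}
```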