I know many people have already asked similar questions on this platform, but I am still confused about this. Please clarify.
If my program initially needs only 5 page frames to hold everything in RAM, does it need only 5 page table entries? Or does it still need 2^20 page table entries?
RAM : 32-bit addressable.
Page Size : 4096 bytes.
Or, because of a process's stack and heap, is it assumed that a process can acquire up to 2^20 page frames at any time, and that growing/shrinking the page table would be a costly operation? Hence, reserving room to store a maximum of 2^20 page frame numbers for each process is a good idea.
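For what it's worth, the arithmetic behind the 2^20 figure can be written out directly. A minimal sketch in C, using only the numbers given here (32-bit addresses, 4096-byte pages); the point is that a single-level table is sized by the virtual address space, not by how many frames the process currently uses:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t vaddr_space = 1ULL << 32;   /* 32-bit virtual address space */
    uint64_t page_size   = 4096;         /* 4 KiB pages                  */
    uint64_t entries     = vaddr_space / page_size;

    /* Every virtual page needs an entry, even if it is never mapped. */
    printf("page table entries: %llu (= 2^20)\n",
           (unsigned long long)entries);

    /* A process that currently uses only 5 frames still has 2^20 entries;
     * the unused ones are simply marked not-present/invalid. */
    return 0;
}
```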
I got this question during an exam last week and I still cannot figure out how to get to the correct answer.
I think that, first, the 8 MiB block of data can at worst be spread over 2049 pages (8 MiB / 4 KiB + 1, because the data need not start at a page boundary). But that is basically all I have; I get confused and tangled up in the question every time.
How many page faults can occur at most when copying a contiguous 8 MiB block of memory (i.e., from memory to memory) under the following assumptions:
32-bit virtual address space, 4 KiB pages, three-level page tables
the 1st page table level has only 4 eight-byte entries; the 2nd and 3rd page table levels have 512 eight-byte entries in each table
there are enough free frames, so there will be no page replacement
the first-level page table is always in memory
the OS implements a very simple algorithm that allocates only one frame on each page fault
we do not take into account potential page faults caused by instruction fetching (we assume the code is already in memory)
I do not know how to calculate this. Can someone show me how to work it out, or demonstrate it for me?
Thanks.
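As a quick sanity check of the 2049 figure in the attempt above (this is only the worst-case page count for one unaligned 8 MiB block, not the full exam answer), a small C sketch:

```c
#include <stdio.h>

int main(void) {
    unsigned long block = 8UL * 1024 * 1024;   /* 8 MiB block to copy */
    unsigned long page  = 4UL * 1024;          /* 4 KiB pages         */

    /* Worst case: the block starts mid-page, so it spills onto one extra page. */
    unsigned long worst_case_pages = block / page + 1;

    printf("worst-case data pages per 8 MiB block: %lu\n", worst_case_pages); /* 2049 */

    /* Note: the copy touches both a source and a destination block, plus the
     * page-table pages themselves, which are not counted here. */
    return 0;
}
```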
Assume each process needs one page table. For a 32-bit system with 4 KB pages, if each table entry is 8 bytes and the average number of processes running in the system is 100, what is the average storage space needed for storing all the page tables in this system?
Assuming 4 GB of physical memory, you have 4 GB / 4 KB = 2^20 (about 1M) frames.
Each process has its own page table, which will contain 2^20 entries.
Each entry is 8 bytes.
2^20 * 8 B = 8 MB per process.
2^20 * 8 * 100 B for the 100 processes.
That is 800 MB of overhead.
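The same arithmetic, written out as a small C sketch (using MiB, since the 2^20-based figures are really mebibytes):

```c
#include <stdio.h>

int main(void) {
    unsigned long long entries    = 1ULL << 20;  /* 2^32 / 4 KiB virtual pages */
    unsigned long long entry_size = 8;           /* bytes per page table entry */
    unsigned long long processes  = 100;

    unsigned long long per_process = entries * entry_size;     /* 8 MiB   */
    unsigned long long total       = per_process * processes;  /* 800 MiB */

    printf("per process: %llu MiB\n", per_process >> 20);
    printf("total for %llu processes: %llu MiB\n", processes, total >> 20);
    return 0;
}
```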
I was reading here, and at problem 3, exercise 2, it states:
The top-level page table should not assume that 2nd level page tables are page-aligned. So, we store full physical addresses there. Fortunately, we do not need control bits. So, each entry is at least 44 bits (6 bytes for byte-aligned, 8 bytes for word-aligned). Each top-level page table is therefore 256*6 = 1536 bytes (256 * 8 = 2048 bytes).
So why can't the first level assume that the 2nd-level tables are page-aligned?
Since it presumes that memory is not allocated at page granularity, wouldn't that make allocation significantly more complicated?
By the way, I have tried to read the course lectures, but they are not that comprehensive.
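For reference, the sizes in the quote follow from simple arithmetic. A C sketch, assuming (as the quote implies) 44-bit physical addresses and 256 top-level entries; if 2nd-level tables were known to be page-aligned, their low address bits would be zero and a shorter frame number would be enough, but without that assumption the full address has to be stored:

```c
#include <stdio.h>

int main(void) {
    int entry_bits = 44;   /* full physical address of a 2nd-level table */
    int entries    = 256;  /* top-level entries, per the quoted solution */

    int bytes_byte_aligned = (entry_bits + 7) / 8;  /* 6 bytes */
    int bytes_word_aligned = 8;                     /* rounded up to a word */

    printf("top-level table, byte-aligned entries: %d bytes\n",
           entries * bytes_byte_aligned);  /* 256 * 6 = 1536 */
    printf("top-level table, word-aligned entries: %d bytes\n",
           entries * bytes_word_aligned);  /* 256 * 8 = 2048 */
    return 0;
}
```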
Let's consider a page size of 1 KB. One page table entry takes 2 B. The page table takes no more than one page (so <= 1 KB).
Can we conclude that the size of main memory is <= 512 KB?
The correct answer is No, but I can't understand why. To me the answer is yes; look at my reasoning and show me where I am wrong, please.
The page table contains at most 1024 B / 2 B = 512 = 2^9 entries. The page size is 1024 B = 2^10 B, so the offset takes at most 10 bits. The page number takes at most 9 bits, because we have 512 = 2^9 entries. Hence 9 + 10 = 19 bits of address, which makes it possible to address at most 2^19 B = 2^9 KB = 512 KB.
Where am I wrong?
A page table entry is 2 B, so a table entry can point to one of 2^16 physical pages. That's 2^16 KiB (because pages are 1 KiB). So a kernel can make use of up to 65536 KiB (64 MiB) of physical memory.
This assumes that the entire page table entry (PTE) is a physical page number. That can work if the position of a PTE within the page table associates it with a virtual address. i.e. 9 bits of a virtual address are used as an index into the PT, to select a PTE.
We can't assume that the machine does work this way, but similarly we can't assume that it doesn't, until we have more information.
Given just the info in the question, we can conclude:
Usable physical memory can be anything up to 65536 KiB.
Up to 512 virtual pages can be mapped at once (512 KiB).
Not much else! Many possibilities are open.
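A small C sketch of the two numbers above (2-byte PTEs naming up to 2^16 one-KiB frames, and one 1 KiB page of PTEs mapping 512 virtual pages), under the assumption that the whole PTE is a frame number:

```c
#include <stdio.h>

int main(void) {
    unsigned long page_size     = 1024;       /* 1 KiB pages                 */
    unsigned long pte_bits      = 16;         /* 2-byte page table entries   */
    unsigned long table_entries = 1024 / 2;   /* one 1 KiB page full of PTEs */

    unsigned long max_phys   = (1UL << pte_bits) * page_size;  /* 64 MiB  */
    unsigned long max_mapped = table_entries * page_size;      /* 512 KiB */

    printf("addressable physical memory: %lu KiB\n", max_phys / 1024);  /* 65536 */
    printf("virtual pages mapped at once: %lu (%lu KiB)\n",
           table_entries, max_mapped / 1024);                           /* 512 */
    return 0;
}
```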
A more likely design would have a valid/invalid bit in each PTE.
If a PTE's contents (rather than its location) indicate which virtual page it maps, it would be possible to map a large virtual address space onto very few physical pages. This address space would necessarily be sparsely mapped, but the kernel could give a process the illusion of having a lot of active mappings by keeping a separate table of what the mappings should be.
So on a page fault, the kernel checks to see if the page should have been mapped. If so, it puts the mapping in the PTE and resumes the process. (This kind of thing happens in real OSes, and it's called a minor page fault.)
In this design, the kernel's use of the page table to hold a subset of the real mappings would be analogous to a software-managed TLB. (Of course, this architecture could have a non-coherent TLB that requires invalidation after every PTE modification, so this would probably perform horribly).
We have a 1024*1024 matrix of 32-bit numbers that is going to be normalized. Suppose that the page size in virtual memory is 4 KB and we allocate 1 MB of main memory to hold the matrix while we are working. Suppose that loading a page from disk takes 10 ms.
a) Suppose that we work with the matrix one column at a time. How many page faults are caused by traversing all the matrix elements, if they are stored in virtual memory by column?
The answer is 1024, but I don't understand why.
b) What if we work by row not by column?
The answer for this is 1024 * 2 * 1024 page faults.
How do we get both of these answers? Can you explain them to me?
Since a matrix entry is 32 bits, which is 4 bytes, an entire row or column fits in 4 bytes * 1024 = 4 KB, i.e. one page. Since memory is filled using columns, we can fit exactly one column in memory.
Say we walk over the elements column by column. When we access the first entry, we see it is not present in memory, so we have to load it (i.e., a page fault). Now the entire column is in memory, so the next 1023 elements do not cause a page fault (they are all present in memory). The next fault occurs when accessing the first element of the second column. In general we have one page fault per column, which results in 1024 page faults.
Now we traverse the matrix row-wise; every time we access an element, it will not be in memory. This is clear since we have one column in memory at any time and we never access elements of the same column consecutively. So each access gives a page fault, resulting in 1024*1024 page faults.
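A tiny simulation of the simplified model this answer uses (one column per 4 KiB page, and only one column resident at a time, rather than the full 1 MB allocation); it reproduces both counts:

```c
#include <stdio.h>

#define N 1024

static long faults;
static long resident_page = -1;   /* page (= column) currently in memory */

static void touch(long row, long col) {
    long page = col;              /* column-major: one column per 4 KiB page */
    (void)row;
    if (page != resident_page) {  /* element's page not resident -> fault */
        faults++;
        resident_page = page;
    }
}

int main(void) {
    faults = 0; resident_page = -1;
    for (long col = 0; col < N; col++)        /* column-wise traversal */
        for (long row = 0; row < N; row++)
            touch(row, col);
    printf("column-wise: %ld page faults\n", faults);   /* 1024 */

    faults = 0; resident_page = -1;
    for (long row = 0; row < N; row++)        /* row-wise traversal */
        for (long col = 0; col < N; col++)
            touch(row, col);
    printf("row-wise: %ld page faults\n", faults);       /* 1048576 = 1024*1024 */
    return 0;
}
```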