I do not know how to calculate this. Can someone teach me how, or demonstrate it for me?
Thanks.
Assume each process needs one page table. For a 32-bit system with 4KB pages, if each table entry is 8 bytes and the average number of processes running in the system is 100, what's the average storage space needed to store all the page tables in this system?
Assuming 4GB physical memory, you have 4GB/4KB = 2^20 (1M) frames; the 32-bit virtual address space divides into the same number of 4KB pages, which is what actually sets the page table size.
Each process has its own page-table which will contain 2^20 entries.
Each entry is 8 bytes in size.
2^20 * 8 B = 8MB per process.
2^20 * 8 * 100 B for the 100 processes.
That is 800MB of overhead.
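As a sanity check, here is the same arithmetic as a small C program (the values are the ones from the question; nothing here is system-specific):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint64_t vaddr_space = 1ULL << 32;                /* 2^32 bytes of virtual space */
        uint64_t page_size   = 4096;                      /* 2^12 bytes per page */
        uint64_t entries     = vaddr_space / page_size;   /* 2^20 entries per table */
        uint64_t entry_size  = 8;                         /* bytes per entry */
        uint64_t per_process = entries * entry_size;      /* 8 MB per process */
        uint64_t total       = per_process * 100;         /* 100 processes */

        printf("entries per table : %llu\n", (unsigned long long)entries);
        printf("per process       : %llu MB\n", (unsigned long long)(per_process >> 20));
        printf("all 100 processes : %llu MB\n", (unsigned long long)(total >> 20));
        return 0;
    }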
I know many people have already asked similar questions on this platform, but I'm still confused about this. Please clarify.
If my program needs only 5 page frames initially to store all its things in RAM, it needs only 5 page table entries, right? Or, does it still need 2^20 page table entries?
RAM : 32-bit addressable.
Page Size : 4096 bytes.
Or, because of a process's stack and heap, is it assumed that a process can acquire up to 2^20 page frames at any time? Growing and shrinking the page table would then be a costly operation, so reserving space for a maximum of 2^20 page frame numbers for each process is a good idea.
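For what it's worth, here is a rough side-by-side in C, assuming an x86-style two-level layout (10-bit directory index, 10-bit table index, 12-bit offset, 4-byte entries) and assuming the 5 resident pages all fall under one page table. With a single-level table, the full 2^20 entries are needed regardless of how few pages are resident; with two levels, only the directory plus the page tables actually in use must be allocated:

    #include <stdio.h>

    int main(void) {
        unsigned dir_entries = 1 << 10;   /* page directory: 2^10 entries */
        unsigned tbl_entries = 1 << 10;   /* each page table: 2^10 entries */
        unsigned entry_size  = 4;         /* bytes per entry on 32-bit x86 */

        unsigned flat_bytes  = (1 << 20) * entry_size;   /* single-level table: 4 MB */
        unsigned two_level   = dir_entries * entry_size  /* 4 KB directory        */
                             + tbl_entries * entry_size; /* + 4 KB for one table  */

        printf("single-level table : %u KB\n", flat_bytes / 1024); /* 4096 KB */
        printf("two-level (5 pages): %u KB\n", two_level / 1024);  /* 8 KB */
        return 0;
    }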
A computer system has a 36-bit virtual address space with a page size of 4K (small modification for hex representation), and 4 bytes per page table entry. (example found here, 2nd problem)
    PTE -> 0x11223344 (32 bits)
    FullAddress = (PTE << 12) + PageOffset -> 0x11223344AAA (44 bits)
But the index into the page table cannot be bigger than 2^24 (36 address bits minus the 12 offset bits = 24 bits).
So, let's say there is a function f that generates the PTE address, f: {0,1}^24->{0,1}^32, which effectively allows access to 2^24 pages per process.
Bottom line, I would say that one process cannot address the full 2^44 bytes but only 2^36, and it could be potentially beneficial when there are multiple processes.
e.g., the system could allocate different 2^36-byte chunks of that memory to 2^8 processes.
Is this the potential benefit?
(This is for a single-level page table; for multilevel it grows even bigger.)
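A small C sketch of the translation described above, using the example values from the question:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint64_t pte    = 0x11223344;            /* 32-bit page table entry */
        uint64_t offset = 0xAAA;                 /* 12-bit offset within the page */
        uint64_t phys   = (pte << 12) | offset;  /* 0x11223344AAA, a 44-bit address */

        printf("physical address: 0x%llX\n", (unsigned long long)phys);
        return 0;
    }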
I guess the question is similar to: does Physical Address Extension (PAE) allow a process to utilize more than 4GB, or does it just allow a number of processes to collectively utilize more than 4GB?
I was reading the dinosaur book on Operating Systems, in the chapter about memory management. I assume this is one of the best books, but there's something about paging written in the book which I don't get.
The book says, "A 32-bit CPU uses 32-bit addresses, meaning that a given process space can only be 2^32 bytes (4 GB). Therefore, paging lets us use physical memory that is larger than what can be addressed by the CPU’s address pointer length."
I don't quite get this part, because if the CPU can only refer to 2^32 different physical addresses, then if there were 2^32 + 1 physical addresses, the last address could not be reached by the CPU. So how can paging help with this?
Also, earlier the book says "Frequently, on a 32-bit CPU , each page-table entry is 4 bytes long, but that size can vary as well. A 32-bit entry can point to one of 2^32 physical page frames. If frame size is 4 KB (2^12 ), then a system with 4-byte entries can address 2^44 bytes (or 16 TB ) of physical memory."
I don't see how that is even possible in ideal/theoretical situations, cuz as I understand it, part of the virtual address will refer to an entry of the page table while the other part of the virtual address will refer to the offset of a particular byte in that page. So in the above-mentioned situation put forward by the book, even if the CPU could point to 2^32 different page entries, it won't be able to read any particular byte within a page, cuz it doesn't specify the offset.
Maybe I've misunderstood the book or there is some part that I missed out. I much appreciate your help! Thanks a lot!
It sounds like you need to burn your book. It's useless.
"[P]aging lets us use physical memory that is larger than what can be addressed by the CPU’s address pointer length" is complete nonsense (unless the book is assigning two different meanings to the term "paging," in which it is still useless).
Let's start with logical addressing. A logical address is composed of a page selector and an offset into the page. Some number (P) of bits will be assigned to the page selector and the remainder will be assigned to the offset. If pages are 2^9 bytes, there are 23 bits in the page selector and 9 bits for the byte offset within the page.
Note that the 9/23 split is arbitrary on my part. Most systems these days use larger pages, but these values have been used in the past.
The 23 bits in the page selector are indices into the process page table.
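A minimal C sketch of that decomposition, using my arbitrary 9-bit-offset / 23-bit-selector split (the input address is just an example value):

    #include <stdio.h>
    #include <stdint.h>

    #define OFFSET_BITS 9
    #define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

    int main(void) {
        uint32_t logical  = 0x12345678;              /* example logical address */
        uint32_t selector = logical >> OFFSET_BITS;  /* index into the page table */
        uint32_t offset   = logical & OFFSET_MASK;   /* byte within the 512-byte page */

        printf("page selector: 0x%X (23 bits)\n", selector);
        printf("offset       : 0x%X (9 bits)\n", offset);
        return 0;
    }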
The size of the entries in the page table is going to be a power of 2 (and I have never seen one smaller than 4 bytes). For our purposes let's say that each entry is 8 bytes long.
The bits in a page table entry are divided between those that index physical page frames and control bits. Let's make the arbitrary choice that 32 bits index page frames and 32 bits are used for control.
That means the system can theoretically MANAGE 2^32 pages that are 2^9 bytes large or a total of 2^41 bytes. If we were to increase the page size from 2^9 to 2^20, the system could theoretically MANAGE 2^52 (32+20) bytes of memory.
Note that each process can still only ACCESS 2^32 bytes. But in my 9-bit page system, 2^9 processes could each access 2^32 bytes simultaneously on a system with 2^41 physical bytes of memory (ignoring the need for a shared system address space in this gross oversimplification).
Note that if I change my page table entries to 32 bits and assign 9 of those bits to control and 23 to page frame selection, the system can only MANAGE 2^32 bytes of memory (and that was more common than managing more than 2^32 bytes).
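The MANAGE arithmetic above as a small C sketch (the frame/control splits are my arbitrary example values, not any real processor's):

    #include <stdio.h>
    #include <stdint.h>

    /* Manageable physical memory = 2^(frame-index bits + page-offset bits). */
    static uint64_t manageable_bytes(int frame_bits, int offset_bits) {
        return (uint64_t)1 << (frame_bits + offset_bits);
    }

    int main(void) {
        /* 8-byte PTE, 32 frame-index bits, 2^9-byte pages -> 2^41 bytes */
        printf("2^41 = %llu bytes\n", (unsigned long long)manageable_bytes(32, 9));
        /* same PTE, 2^20-byte pages -> 2^52 bytes */
        printf("2^52 = %llu bytes\n", (unsigned long long)manageable_bytes(32, 20));
        /* 4-byte PTE, 23 frame-index bits, 2^9-byte pages -> 2^32 bytes */
        printf("2^32 = %llu bytes\n", (unsigned long long)manageable_bytes(23, 9));
        return 0;
    }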
You quote: "Frequently, on a 32-bit CPU , each page-table entry is 4 bytes long, but that size can vary as well. A 32-bit entry can point to one of 2^32 physical page frames. If frame size is 4 KB (2^12 ), then a system with 4-byte entries can address 2^44 bytes (or 16 TB ) of physical memory."
This is theoretical BS. A system that used all 32 bits of the page table entry as an index to page frames could not function. There would have to be some control bits in the page table entry.
The quotes you are taking from this book are highly misleading. Few (any?) 32-bit processors could even access 2^32 bytes of memory due to address line limitations.
While it is possible that the use of logical pages could allow a processor to manage more memory than the logical address size suggests, that was not the purpose of managing memory in pages.
The purpose of paging, which in its normal and customary usage refers to the movement of virtual memory pages between physical page frames and secondary storage, is to allow processes to access more virtual memory than there is physical memory on the system.
There is an additional system of memory management that is (thankfully) dying out: segments. Segments also provided a means for systems to manage more physical memory than the logical address space would allow.
I have recently started working with BigQuery, and I have come to know that it is a column-oriented database and that disk seeks are much faster in this type of database.
Can anyone explain to me how disk seeks are faster in a column-oriented database compared to a row-oriented relational DB?
The big difference is in the way the data is stored on disk.
Let's look at an (over)simplified example:
Suppose we have a table with 50 columns, some are numbers (stored binary) and others are fixed width text - with a total record size of 1024 bytes. Number of rows is around 10 million, which gives a total size of around 10GB - and we're working on a PC with 4GB of RAM. (while those tables are usually stored in separate blocks on disk, we'll assume the data is stored in one big block for simplicity).
Now suppose we want to sum all the values in a certain column (integers stored as 4 bytes in the record). To do that we have to read an integer every 1024 bytes (our record size).
The smallest amount of data that can be read from disk is a sector, usually 4kB. So for every sector read, we only get 4 of the values we need. This also means that in order to sum the whole column, we have to read the whole 10GB file.
In a column store on the other hand, data is stored in separate columns. This means that for our integer column, we have 1024 values in a 4096 byte sector instead of 4! (and sometimes those values can be further compressed) - The total data we need to read now is around 40MB instead of 10GB, and that will also stay in the disk cache for future use.
It gets even better if we look at the CPU cache (assuming data is already cached from disk): one integer every 1024 bytes is far from optimal for the CPU (L1) cache, whereas 1024 integers in one block will speed up the calculation dramatically (those will be in the L1 cache, which is around 50 times faster than a normal memory access).
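To make the two access patterns concrete, here is a toy C sketch contrasting the layouts. The sizes are scaled down from the example above, and the data is all zeros, since only the memory traffic pattern matters here:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define RECORD_SIZE 1024
    #define NUM_ROWS    (1 << 16)   /* 64K rows instead of 10M, same idea */

    int main(void) {
        /* Row store: the integer of interest sits somewhere inside each 1 KB record. */
        unsigned char *rows = calloc(NUM_ROWS, RECORD_SIZE);
        /* Column store: the same integers packed contiguously. */
        int *col = calloc(NUM_ROWS, sizeof(int));
        if (!rows || !col) return 1;

        long long sum_row = 0, sum_col = 0;
        for (int i = 0; i < NUM_ROWS; i++) {
            int v;
            memcpy(&v, rows + (size_t)i * RECORD_SIZE, sizeof v); /* 1 int per 1 KB touched */
            sum_row += v;
            sum_col += col[i];   /* 1024 ints per 4 KB touched */
        }
        printf("row-store sum: %lld, column-store sum: %lld\n", sum_row, sum_col);

        free(rows);
        free(col);
        return 0;
    }

The row-store loop drags an entire 1 KB record through the cache for every 4 useful bytes; the column-store loop touches only the bytes it actually sums.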
The "disk seek is much faster" is wrong. The real question is "how column oriented databases store data on disk?", and the answer usually is "by sequential writes only" (eg they usually don't update data in place), and that produces less disk seeks, hence the overall speed gain.
I was learning Linux memory management recently, and now I am stuck on the paging mechanism.
With Regular Paging for 32-bit processors, why do page directory entries (32 bits in total) need 20 bits to point to 2^10 page tables? I think 10 bits would be just enough, with no waste.
What is wrong with my understanding?
Thank you.
A page has a size of 4096 bytes, i.e., 2^12 bytes.
This means that pages are aligned to a multiple of 4 KB, and that the address of a page is xxxxxxxxxxxxxxxxxxxx000000000000.
In other words, a page address needs 12 bits less than the address bus size.
For 32-bit addresses, this ends up being 20 bits.
A page directory entry has 32 bits, so 2^10 of them fit into a 4 KB page.
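A small C illustration of that layout; the example entry value is made up, but the 20/12 split between address and flags is the standard x86 one:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* A 32-bit page directory entry: the upper 20 bits hold the page table's
           base address (always 4 KB aligned), the lower 12 bits are flags
           (present, read/write, etc.). */
        uint32_t pde        = 0x12345067;          /* example: base 0x12345, flags 0x067 */
        uint32_t table_base = pde & 0xFFFFF000u;   /* 20-bit base, low 12 bits zero */
        uint32_t flags      = pde & 0x00000FFFu;   /* control bits */

        printf("page table at 0x%08X, flags 0x%03X\n", table_base, flags);
        return 0;
    }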
Regular x86 uses a 2-level page table, but I think they are talking here about a one-level page table... So you have one huge structure with 2^20 entries, where each entry associates a virtual page address (the mentioned 20 bits) with a physical page address. Can you provide a link to where you found this picture?
For a 32-bit processor, it will generate 32-bit address.
If the generated address is 32 bits, then the addressable memory is 4GB (as 2^32 bytes = 4GB).
Now, the page directory and each page table reside in memory within a single page each.
Also, page size is 4kB.
Also, page directory entries always point to the start (a page boundary) of a page table, and page table entries always point to the start of a page frame.
If you divide 4G (1G = 2^30) by 4K (1K = 2^10), you get 2^20.
That is, we need 20 bits to address all of the 4KB chunks within 4GB, the maximum addressable memory.
That is why the address fields in page directory and page table entries are always 20 bits.
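Putting it together, here is how a 32-bit linear address is carved up under x86 regular paging (the address value is an arbitrary example); note that the two 20-bit base addresses come from the entries, not from the address itself:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint32_t linear = 0xDEADBEEF;              /* example linear address */
        uint32_t dir    = (linear >> 22) & 0x3FF;  /* top 10 bits: directory index */
        uint32_t table  = (linear >> 12) & 0x3FF;  /* middle 10 bits: table index */
        uint32_t offset = linear & 0xFFF;          /* bottom 12 bits: byte offset */

        printf("dir=0x%03X table=0x%03X offset=0x%03X\n", dir, table, offset);
        return 0;
    }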