Suppose the page table changes with each process. Then don't we avoid needing a TLB and memory for the page table, since we could implement it with some reasonable number of registers? But the Galvin book says (not precisely, this is my interpretation) that the page table has an entry for every page, and that there is a separate table for each process, so a pointer is used to refer to a particular table.
Am I correct (in my understanding of the book)?
If so, what is the need to change the page table on each context switch?
If you are arguing that we could use one page table for the whole system, the simple answer is that using a page table per process provides more security by enforcing memory isolation among the processes running on the same system. Since each process has its own page table, it cannot interfere with another process's memory. Page-table management cannot be achieved with registers because of the size and number of page tables: even if you added extra registers to hold the active page table, you would still need memory to store the inactive page tables, which is an equally expensive approach (regarding your first line). I suggest you spend some time understanding present hardware facilities and OS functionality, and then try to come up with innovations in design; otherwise you will remain astray from learning.
Your question title asks "does the page table change with a context switch?" Yes, the page table changes on a context switch.
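As a rough conceptual sketch (all class names here are made up; the real mechanism is a hardware register such as CR3 on x86, not a C# object), a context switch just makes the CPU point at a different, already-existing page table:

```csharp
using System.Collections.Generic;

// Each process keeps its own page table in memory; the CPU holds a single
// page-table base pointer. "Changing the page table" on a context switch
// means reloading that pointer (and flushing or re-tagging the TLB),
// not rebuilding any table.
class Process
{
    public int Pid;
    public Dictionary<long, long> PageTable = new Dictionary<long, long>(); // virtual page -> physical frame
}

class Cpu
{
    public Dictionary<long, long> PageTableBase; // stand-in for the CR3 register
}

class Scheduler
{
    public static void ContextSwitch(Cpu cpu, Process next)
    {
        cpu.PageTableBase = next.PageTable; // point the MMU at the next process's table
    }
}
```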
In studying shadow paging mechanisms, I learned of a case where a shadow page table starts out empty and only gets filled in as the guest VM accesses memory. It got me thinking about traditional page tables. When the OS is running and a page table becomes empty (perhaps when the page table's process terminates), I would think that page table gets released as a free page of memory.
Is there ever a case where an empty page table or even empty page directory table can exist during normal operations? Three cases I can think of are:
When the OS boots - but my understanding is that modern OSes like Linux start in real mode and then switch to paging mode, during which I would imagine process 1 gets its own page table with kernel mappings among other things. Is this correct?
If the last valid entry in a page table is then unmapped or swapped out - but I've also read that invalid entries can be used to store swap addresses, so I'm not sure exactly.
When a new process is spawned - although I think similar to 1), a new process is started with kernel mappings and linked library mappings, so it would already have a small page table upon starting.
UPDATE: I learned that even in the shadow page table where it starts out "empty", it still has some mappings to hypervisor memory, so even then the page tables are not truly empty.
There's no point in having an empty page table, so I'll say no.
If you mean one particular table, then leaving it empty is a waste of memory. If you have an empty page table, you can free it, and in the place that pointed to the page table, you tell the CPU that there is no page table. For example, if a level-1 page table is empty, instead of pointing to it in the level-2 page table, you can put an entry in the level-2 page table which says "there is no level-1 page table for this address".
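A toy illustration of that idea (table sizes assumed; nullable array slots stand in for present/not-present bits): once the last entry of a second-level table is unmapped, the table is freed and the directory entry is cleared.

```csharp
// A directory of references to second-level tables, where a null slot plays
// the role of a "there is no page table for this address" entry.
class TwoLevelTable
{
    const int Entries = 1024;
    readonly long?[][] directory = new long?[Entries][];

    public void Unmap(int dirIndex, int tableIndex)
    {
        long?[] table = directory[dirIndex];
        if (table == null) return;      // nothing was mapped in this range
        table[tableIndex] = null;       // clear the leaf entry

        // If the second-level table is now completely empty, free it and mark
        // the directory entry "not present" instead of keeping an empty table.
        foreach (long? entry in table)
            if (entry != null) return;
        directory[dirIndex] = null;
    }
}
```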
If you mean the entire set of page tables, so that there are no pages mapped at all, the CPU can't run any instructions without page tables (unless paging is turned off), so that's still a no. The CPU would triple-fault (on x86) and reboot.
In two-level address translation, it's said that the first-level page table (1K entries) will always be in main memory for a process.
Out of the 1K second-level page tables, only those currently in use will be in memory.
Where will we store the other second-level page tables (those not currently in use) in the absence of any secondary storage (e.g. in embedded systems)?
If we can't swap second-level page tables out of memory, is there no advantage to two-level address translation?
The advantage of a multi-level table for logical address translation without virtual memory is that the page-table size can be dynamic (even if it is not paged out). Paging the tables out is just one possible benefit (and systems with dedicated system address spaces can page their page tables without having nesting).
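To make the dynamic-size point concrete, here is a back-of-the-envelope sketch (sizes assumed from the question: 32-bit addresses, 4 KiB pages, 4-byte entries, 1K entries per table). A sparse address space only pays for the second-level tables it actually uses:

```csharp
using System;

class PageTableSize
{
    static void Main()
    {
        const long entrySize = 4;
        const long entriesPerTable = 1024;

        // Flat (single-level) table covering the whole 4 GiB space: 2^20 entries.
        long flat = (1L << 20) * entrySize;                       // 4 MiB

        // Two-level: one first-level table plus only the second-level tables
        // actually in use (say 8, for a small process).
        const long secondLevelInUse = 8;
        long twoLevel = entriesPerTable * entrySize
                      + secondLevelInUse * entriesPerTable * entrySize;

        Console.WriteLine($"flat: {flat} bytes, two-level: {twoLevel} bytes");
        // flat: 4194304 bytes, two-level: 36864 bytes
    }
}
```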
I am reading the book Understanding the Linux Kernel, and the topic of address translation confuses me a lot. The book says each linear address has three fields: Directory, Table, and Offset. The Directory field relates to the Directory Table (the Page Directory), and the Table field relates to the Page Table.
One thing it does not point out, or I may have missed, is whether each entry in these tables relates to a page, which is a group of linear addresses, or to an individual linear address.
Can someone help me?
Ok, so there are (at least) two types of page tables: single-level, and multi-level.
Single-level page tables' entries translate virtual addresses directly.
Multi-level page tables' entries can point to two different places:
They may translate virtual memory addresses directly (like single-level tables' entries).
They may point to secondary (or tertiary, etc.) page tables.
Here's an example of a multi-level page table:
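As a rough stand-in sketch (assuming the classic 32-bit x86 split of 10 + 10 + 12 bits; the helper name is made up), the Directory and Table fields index the two table levels, and each leaf entry covers a whole 4 KiB page, i.e. a group of linear addresses rather than a single address:

```csharp
static class LinearAddress
{
    // Split a 32-bit linear address into its three fields.
    public static (uint dir, uint table, uint offset) Split(uint linear)
    {
        uint dir    = linear >> 22;            // bits 31..22 -> Page Directory index
        uint table  = (linear >> 12) & 0x3FF;  // bits 21..12 -> Page Table index
        uint offset = linear & 0xFFF;          // bits 11..0  -> byte within the 4 KiB page
        return (dir, table, offset);
    }
}
```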
Remember, each page-table entry corresponds to one virtual page and records which physical frame (if any) backs it. It is the responsibility of the operating system to set up and maintain these virtual-to-physical translations (the benefits of which are outside of this particular topic).
Most paging systems also maintain a frame table that keeps track of used and unused frames. The frame table is traditionally a different data structure than the page table.
You can read more about paging tables here.
You can read about page tables here.
Warning
I wanted to add a disclaimer to warn against this database structure. I would highly advise a more streamlined approach.
My personal preference is to use a hierarchical one or two table structure to store navigation menus. For guidance on erecting a menu from a flat structure, see https://stackoverflow.com/a/444303/1778606
Answer
In my case, the answer was to cache the menu on the client. Read on if you want.
I have a three tier menu that is generated using several queries. In the application I have inherited, there are many user groups. Each user has access to different pages/parts of the web application via user groups. A single user may be in many user groups.
In their menu, each user should only be able to see and access pages their groups have access to. ie. only certain Menu Items, Menu Submenu Sections, and Menu Submenu Section Items should be visible for each user. The pivot tables control visibility and access.
To generate the menu, I need to make several queries. My sample proposed menu design is shown below.
This menu looks drastically different depending on which tier1, tier2, or tier3 items a user has access to.
Would it be appropriate to cache the entire menu in session for each user instead of generating it on each page load? Would this potentially be too much data to cache? My main concern is that it would likely cache all three tables (Tier1, Tier2, Tier3) for each active user in session. I need to access the data anyway to see if the user has permissions, but meh!
Are there any architecture designs that would help reduce the number of queries required to generate the menu? (assuming the menu is generated on each page load, as I assumed when I started writing this question)
Is this kind of menu thing normal in applications - we have like 20 groups and users are in anywhere from 2 to all 20? Any advice or comments are welcome.
(I want to minimize page load and processing time.)
Context:
I'm using c#, asp.net mvc3, oracle. Max user base estimated at ~20,000. Max active user base close to 1000. Likely active user base is 100.
When a user first logs in, load the menus permitted for that user into the cache. Then access the menus/submenus from the cache for that user. When the user logs out, clear the cache. In this scenario, each time a new user logs in, the cache is loaded only for that user.
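A minimal sketch of that suggestion (the class name and the loader delegate are hypothetical, not an existing API): build the permitted menu once at login, keep it in Session, and remove it at logout.

```csharp
using System;
using System.Web;

public static class MenuCache
{
    public static void OnLogin(HttpSessionStateBase session, int userId,
                               Func<int, object> loadMenuForUser)
    {
        // Your existing menu queries run once here, at login,
        // instead of on every page load.
        session["UserMenu"] = loadMenuForUser(userId);
    }

    public static object CurrentMenu(HttpSessionStateBase session)
    {
        return session["UserMenu"];
    }

    public static void OnLogout(HttpSessionStateBase session)
    {
        session.Remove("UserMenu");
    }
}
```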
Thanks
Instead of using session to cache this information, you could add something like AppFabric or memcached.
This has the advantage of not having to handle session in a load balanced application.
Space should not be much of an issue.
Load the entire menu once for each user and cache it on the client.
It won't need to be reloaded unless they invalidate the cache (via a reload).
A query will still be needed on each page to validate the user has permissions.
(Answer attempt #2 for my own question)
Here's an option...
Keep the Tier1, Tier2, Tier3 data table in the application data cache
Store the primary keys of the Tier1, Tier2, Tier3 tables in arrays in session for each user, ie. a Tier1Key, Tier2Key, Tier3Key int[].
This should reduce the memory footprint somewhat (a sketch of this split follows below). There will still be a bit of work generating the menu on each load. You may be able to solve this by using a partial view for the submenu and caching the partial view on the client. (This would require a delayed partial menu load via AJAX, which would happen on MenuItem click.)
You may still need a query to validate access on each page.
For client caching, see OutputCache Location=Client does not appear to work.
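Here is a rough sketch of that split (names are made up; the loader delegate stands in for your existing queries): the tier tables live once in the shared application cache, and each user's session holds only the key arrays.

```csharp
using System;
using System.Web;
using System.Web.Caching;

public static class TierCache
{
    public static T GetTier<T>(string tierName, Func<T> loadTier) where T : class
    {
        Cache cache = HttpRuntime.Cache;
        T table = cache[tierName] as T;
        if (table == null)
        {
            table = loadTier();                          // run the tier query once
            cache.Insert(tierName, table, null,
                         DateTime.UtcNow.AddHours(1),    // arbitrary absolute expiry
                         Cache.NoSlidingExpiration);
        }
        return table;
    }

    public static void StoreUserKeys(HttpSessionStateBase session,
                                     int[] tier1Keys, int[] tier2Keys, int[] tier3Keys)
    {
        // Small per-user footprint: just the key arrays, not the tables themselves.
        session["Tier1Keys"] = tier1Keys;
        session["Tier2Keys"] = tier2Keys;
        session["Tier3Keys"] = tier3Keys;
    }
}
```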
(Answer attempt for my own question)
Oracle's database change notification feature sends ROWIDs (physical row addresses) on row inserts, updates and deletes. As indicated in Oracle's documentation, this feature can be used by the application to build a middle-tier cache. But this seems contradictory once we take a detailed look at how ROWIDs work.
ROWIDs (physical row addresses) can change when various database operations are performed, as indicated by this Stack Overflow thread. In addition, as Tom mentions in this thread, clustered tables can have the same ROWIDs.
Based on the above, it doesn't seem safe to use the ROWID sent during the database change notification as the key in the application cache, right? This also raises a question: should the database change notification feature be used to build an application-server cache at all? Or is the recommendation to restart all the application server clusters (to reload/refresh the cache) whenever the tables of the cached objects undergo operations that cause ROWIDs to change? Would that be a good assumption to make for production environments?
It seems to me that none of the operations that can potentially change a ROWID would be carried out in a production environment while the application is running. Furthermore, I've seen a lot of production software that uses the ROWID across transactions (usually just for a few seconds or minutes). That software would probably fail before your cache does if the ROWID changed. So building a database cache based on change notification seems reasonable to me. Just provide a small disclaimer regarding the ROWID.
The only somewhat problematic operation is an update that moves a row to another partition. But that rarely happens, because it would defeat the purpose of the partitioning if it occurred regularly. The designer of a particular database schema will be able to tell you whether such an operation can occur and is relevant for caching. If none of the tables has ENABLE ROW MOVEMENT set, you don't even need to ask the designer.
As to duplicate ROWIDs: ROWIDs aren't unique globally, they are unique within a table. And you are given both the ROWID and the table name in the change notification. So the tuple of ROWID and table name is a perfect unique key for building a reliable cache.
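A minimal sketch of such a cache (class and method names are hypothetical; wiring HandleChange up to the actual change-notification callback is left out):

```csharp
using System;
using System.Collections.Concurrent;

public class RowCache
{
    // ROWIDs are only unique per table, so the (table name, ROWID) pair is the key.
    private readonly ConcurrentDictionary<Tuple<string, string>, object> cache =
        new ConcurrentDictionary<Tuple<string, string>, object>();

    public void Put(string table, string rowId, object row)
    {
        cache[Tuple.Create(table, rowId)] = row;
    }

    public bool TryGet(string table, string rowId, out object row)
    {
        return cache.TryGetValue(Tuple.Create(table, rowId), out row);
    }

    // Call this with the table name and ROWID reported by the notification:
    // evicting just that one entry is safe because the pair is unique.
    public void HandleChange(string table, string rowId)
    {
        object removed;
        cache.TryRemove(Tuple.Create(table, rowId), out removed);
    }
}
```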