Inverted Page Table in 64-bit address space?

In class today, our instructor was teaching page tables, and how at the hardware level a 64-bit system actually uses only 48 bits, with three levels of page tables followed by the physical address, which means up to four memory look-ups for a given piece of data.
He also mentioned that in the 32-bit world the inverted page table was a concept that mapped to the physical address space, but that look-ups were a concern since the table is stored in memory.
Earlier, he had taught hash maps, and how look-up is O(1) depending on the hash. When I looked up the inverted page table here on Wikipedia, I found that hash collisions could be an issue.
However, in a 64-bit system, the top 16 bits of the address are unused.
My question is: why can we not make a table out of those 16 bits of space, and then use the rest for the hash of the data? That way, if there are PID hash collisions, we can just map the hash to a different table, handling upwards of 16x as many hash collisions.
Is there a fundamental problem with this form of addressing for a process? It seems only three memory visits would be needed (one for the page, one for the hash, and one for the data).
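For concreteness, here is a minimal sketch of the classic hashed inverted page table the question builds on; the structure names, the hash, and the table size are all made up for illustration, and chain hops are exactly the collision cost under discussion:

    #include <stdint.h>

    /* Hypothetical inverted page table: one entry per physical frame,
     * so the entry index doubles as the frame number. */
    struct ipt_entry {
        uint64_t vpn;   /* virtual page number held by this frame */
        uint32_t pid;   /* owning process */
        int32_t  next;  /* next entry in this hash chain, -1 = end */
    };

    #define IPT_FRAMES (1u << 16)       /* toy number of frames */
    static struct ipt_entry ipt[IPT_FRAMES];
    static int32_t anchor[IPT_FRAMES];  /* hash bucket -> first entry; init to -1 */

    /* Toy hash over (pid, vpn); real designs use something stronger. */
    static uint32_t ipt_hash(uint32_t pid, uint64_t vpn)
    {
        return (uint32_t)((vpn * 2654435761u) ^ pid) % IPT_FRAMES;
    }

    /* Returns the frame number, or -1 for a page fault.  Every chain hop
     * costs one extra memory access: the collision penalty in question. */
    static int32_t ipt_lookup(uint32_t pid, uint64_t vpn)
    {
        for (int32_t i = anchor[ipt_hash(pid, vpn)]; i != -1; i = ipt[i].next)
            if (ipt[i].pid == pid && ipt[i].vpn == vpn)
                return i;
        return -1;
    }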

Related

Does use of hash tables cause memory fragmentation?

My understanding of hash tables is that they use hash functions to relate keys to locations in memory, with a total number of "buckets" pre-allocated in memory. The goal is for there to be enough buckets that I don't have to use chaining, which would slow my ideal O(1) access time down to O(1 + n/m), where n is the number of unique keys to store and m is the number of buckets.
So if I have 1000 unique items to store, I'll want no fewer than 1000 buckets, and perhaps a lot more, to minimize the probability of having to use my chained linked list. If this weren't the case, we'd expect the average hash table to have many, many collisions. Now if we've got 1000 pre-allocated buckets, that means I have 1000 bytes of allocated memory, distributed around my memory. Thus every single unique key in my hash table results in a fragment of memory, fragmenting my RAM.
Does this mean that using hash tables basically guarantees an amount of fragmentation proportional to the number of unique keys? Further, this seems to indicate that you could greatly minimize fragmentation by using some statistics to pick the number of buckets, if you know how many unique keys there are going to be. Is this the case?
"1000 bytes of allocated memory, distributed around my memory"
No, you have one array of 1000 entries (of some size which is almost certainly larger than 1 byte per entry).
If each entry is big enough to handle the non-collision case in-place, no extra dynamic allocation is required until you have a collision. (e.g. maybe you use a union and a 1-bit flag to indicate whether this entry is a stand-alone bucket or whether it's a pointer to a linked list.)
If not, then when you write an entry, space needs to be allocated for it and a pointer stored in the table array itself. (e.g. a key-value hash table with small keys but large values). An empty hash table can still be full of NULL pointers.
You might still want each entry to hold a struct of pointer and hash value (for single-member buckets). Then you can reject definitely-not-present queries without another level of indirection whenever the full hash value doesn't match the query; e.g. for a 32- or 64-bit hash, that's many more bits than the 10 bits used to index a 1024-entry table.
To reduce overall fragmentation, you can use a slab allocator or other technique for carving nodes out of a contiguous block you get from a global allocator. Having the hash table maintain its own private free-list could help with spatial locality of the linked-list nodes, so they're at least not scattered across many different virtual pages (TLB misses) and hopefully not DRAM pages (even slower cache misses).
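As a hedged illustration of the union-plus-flag and stored-hash ideas above (all names and sizes here are invented for the example):

    #include <stdint.h>

    /* Overflow node, allocated only on collision. */
    struct node {
        uint64_t key, value;
        struct node *next;
    };

    /* One slot of the table array.  The flag selects whether the slot
     * holds a single entry in place or points to an overflow chain, so
     * no dynamic allocation happens until the first collision. */
    struct slot {
        uint32_t full_hash;   /* reject misses without chasing pointers */
        uint8_t  is_chain;    /* 0 = inline entry valid, 1 = chain valid */
        union {
            struct { uint64_t key, value; } entry;
            struct node *chain;
        } u;
    };

    /* One contiguous allocation for all buckets, not one fragment per key. */
    #define NBUCKETS 1024
    static struct slot table[NBUCKETS];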

How are page tables stored in main memory?

I know that page tables are stored in memory, and that each process has its own table, but each table has as many entries as there are virtual pages in the virtual address space. So how can every process have a table, with each table residing in main memory, when the number of entries in each table is larger than the number of physical pages in main memory? Can someone explain that to me? I'm very confused.
Thanks in advance.
Typically, page tables are stored in kernel-owned physical memory. However, page tables can get awfully big, since each process has its own page table (unless the OS uses an inverted paging scheme). Even for a 32-bit address space with a typical 4KB page size, we require a 20-bit virtual page number and a 12-bit offset. A 20-bit VPN (Virtual Page Number) implies that there are 2^20 translations. Even if each translation, i.e. each page table entry, requires only 4 bytes of memory, that amounts to 4 x 2^20 = 4MB of memory, all just for address translations, which is awful.
Hence modern OSes place such large page tables in kernel virtual memory, swapping them between disk and physical memory as required. Thus the page table is virtualized the same way each page is virtualized.
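The arithmetic above, written out as a quick sanity check (assuming the 4KB pages and 4-byte entries used in the answer):

    #include <stdio.h>

    int main(void)
    {
        unsigned offset_bits  = 12;                /* 4KB page */
        unsigned vpn_bits     = 32 - offset_bits;  /* 20-bit VPN */
        unsigned long entries = 1ul << vpn_bits;   /* 2^20 translations */
        unsigned long bytes   = entries * 4;       /* 4 bytes per entry */

        printf("%lu entries, %lu MB per process\n", entries, bytes >> 20);
        /* prints: 1048576 entries, 4 MB per process */
        return 0;
    }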
I would suggest going through this wonderful and easy book to get a clear understanding of memory virtualization and paging-related concepts:
http://pages.cs.wisc.edu/~remzi/OSTEP.

Why are tries slower than hash tables when stored on disk?

I heard that tries are less efficient than hash tables for performing lookups when the data structures are stored on disk rather than in main memory. Why would this be the case?
On disk, random access is slow because, in order to read bytes at a particular location, the hard drive has to physically rotate and seek to put those bytes under the read head. A random access on disk can be millions of times slower than a comparable access to RAM.
On top of this, whenever you read data from disk, a block of memory called a page is read, not just the bytes you asked for. This means that once you read some data from disk, accessing the bytes near it will likely be very fast, because that data will have been read from the same page and loaded into RAM. Consequently, sequential access in an array on disk is fast: after the first (slow) read to fetch the first array element, the bytes for the next elements will probably already be loaded and available.
Think about what this means for tries versus linear-probing hash tables. A trie is a tree structure where lookups require following lots of pointers to nodes laid out in no particular order in memory. This means that the cost of a trie lookup will likely be one disk read per character of the string, which is terribly inefficient. On the other hand, if you have a hash table using linear probing, the cost of a lookup will (roughly) be the cost of one disk read, since after finding the initial spot in the table where the value should be, the subsequent array reads should not require further disk reads.
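A rough sketch of the linear-probing lookup described above (toy table size and hash; the point is that every probe after the first touches an adjacent slot, and hence, on disk, most likely the same already-loaded page):

    #include <stdint.h>
    #include <string.h>

    #define TABLE_SIZE (1u << 16)   /* toy size; imagine it living on disk */

    struct slot { char key[16]; uint64_t value; uint8_t used; };
    static struct slot table[TABLE_SIZE];

    static uint32_t fnv1a(const char *s)   /* any cheap hash will do here */
    {
        uint32_t h = 2166136261u;
        while (*s) { h ^= (uint8_t)*s++; h *= 16777619u; }
        return h;
    }

    /* One "random" access at table[h], then purely sequential probes. */
    static int lookup(const char *key, uint64_t *out)
    {
        uint32_t h = fnv1a(key);
        for (uint32_t i = 0; i < TABLE_SIZE; i++) {
            struct slot *s = &table[(h + i) % TABLE_SIZE];
            if (!s->used)
                return 0;            /* empty slot: key is not present */
            if (strcmp(s->key, key) == 0) {
                *out = s->value;     /* found after i sequential probes */
                return 1;
            }
        }
        return 0;
    }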
Note that not all tries and all hash tables have this property. Cache-oblivious tries are tries that are specifically constructed to minimize disk reads and can be very quick in external memory. Many hash tables, such as chained hash tables or double hashing tables, have more scattered lookup patterns and thus incur more disk reads.
Hope this helps!

In an operating system, how does the MMU search for a virtual page number as a key in the page table?

1) So let's say we have a single-level page table
2) A TLB miss happens
3) The required page table is in main memory
Question: Does the MMU always fetch the required page table into a set of registers inside it, so that a fast hardware search like the TLB's can be performed? I guess not; that would be costly hardware.
4) The MMU fetches the physical page number (I guess the MMU must store it in a format with the high n bits as the virtual page number and the low m bits as the physical page frame number. Please correct and explain if I am wrong.)
Question: I guess there has to be a key-value map with the virtual page number as key and the physical frame number as value. How does the MMU search for the key in the page table? If it is a software-style linear search, it would be very costly.
5) In hardware, the offset bits are appended to the page frame number, and finally a read occurs at the physical address.
So this question is bugging me a lot: how does the MMU perform the search for a given key (virtual page number) in the page table?
The use of registers for the page table is satisfactory if the page table is reasonably small (for example, 256 entries). Most contemporary computers, however, allow the page table to be very large (for example, 1 million entries). For these machines, the use of fast registers to implement the page table is not feasible. Rather, the page table is kept in main memory, and a page-table base register (PTBR) points to the page table. Changing page tables requires changing only this one register, substantially reducing context-switch time.

The problem with this approach is the time required to access a user memory location. If we want to access location i, we must first index into the page table, using the value in the PTBR offset by the page number for i. This task requires a memory access. It provides us with the frame number, which is combined with the page offset to produce the actual address. We can then access the desired place in memory. With this scheme, two memory accesses are needed to access a byte (one for the page-table entry, one for the byte). Thus, memory access is slowed by a factor of 2. This delay would be intolerable under most circumstances. We might as well resort to swapping!

The standard solution to this problem is to use a special, small, fast-lookup hardware cache called a translation look-aside buffer (TLB). The TLB is associative, high-speed memory. Each entry in the TLB consists of two parts: a key (or tag) and a value. When the associative memory is presented with an item, the item is compared with all keys simultaneously. If the item is found, the corresponding value field is returned. The search is fast; the hardware, however, is expensive. Typically, the number of entries in a TLB is small, often numbering between 64 and 1,024.

Source: Operating System Concepts by Silberschatz et al., page 333
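To make the quoted passage concrete: with a single-level table there is no search at all; the virtual page number is simply an index off the PTBR, and the TLB merely caches recent translations. A minimal software sketch of what the hardware does, with made-up field widths:

    #include <stdint.h>

    #define OFFSET_BITS 12                        /* assume 4KB pages */
    #define PAGE_MASK   ((1u << OFFSET_BITS) - 1)

    /* ptbr points at the page table in physical memory, one frame number
     * per entry.  Reading ptbr[vpn] is memory access #1; the returned
     * physical address is then used for memory access #2 (the byte itself).
     * A TLB hit skips access #1 by comparing the VPN against all cached
     * tags in parallel. */
    static uint64_t translate(const uint64_t *ptbr, uint64_t vaddr)
    {
        uint64_t vpn    = vaddr >> OFFSET_BITS;   /* an index, not a search key */
        uint64_t offset = vaddr & PAGE_MASK;
        return (ptbr[vpn] << OFFSET_BITS) | offset;
    }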

Space-efficient algorithm for tracking writes to 2^32 elements

This is a requirement I came across at work. We have 2^32 (4,294,967,296) contiguous integers allocated as an array in memory, whose function is to provide a mapping into another table. Some of the entries get written more often than others. We want to track the hot spots and provide an approximate histogram.
The catch is that this is going to be implemented in firmware, and not much memory can be used.
Additional information:
The mapping is from host SCSI LBAs to LBAs on the target, probably drives or flash memory.
Let's say we have 1 MB of space to handle the metadata required to track hot/cold information. How can we use this efficiently, beyond a simple bitmap that only shows whether an entry has been written or not? We could also reason mathematically about how accurate the collected data is, based on how much memory is used for tracking.
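No answer is recorded here, but as one hedged illustration of the "extended bitmap" direction the question itself suggests: with a 1 MB budget over 2^32 entries, you can afford one 8-bit saturating counter per 4096-entry region (2^20 counters x 1 byte = 1 MB), trading spatial resolution for an approximate write histogram:

    #include <stdint.h>

    #define REGION_BITS 12                          /* 4096 entries per counter */
    #define NCOUNTERS   (1u << (32 - REGION_BITS))  /* 2^20 counters = 1 MB */

    static uint8_t heat[NCOUNTERS];                 /* the whole 1 MB budget */

    /* Call on every write to the big array; saturate rather than wrap.
     * Periodically halving all counters ("aging") keeps the histogram
     * weighted toward recent writes once counters begin to saturate. */
    static void track_write(uint32_t index)
    {
        uint8_t *c = &heat[index >> REGION_BITS];
        if (*c < UINT8_MAX)
            (*c)++;
    }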
