How to view paging system (demand paging) as another layer of cache?

I tried solving the following question
Consider a machine with 128 MiB (i.e. 2^27 bytes) of main memory and an MMU which has a page size of 8 KiB (i.e. 2^13 bytes). The operating system provides demand paging to a large spinning disk.
Viewing this paging system as another layer of caching below the processor’s last-level cache (LLC), answer the following questions regarding the characteristics of this “cache”:
Line size in bytes? 2^13 (every page has 2^13 bytes)
Associativity? Fully associative
Number of lines in cache? 2^14 (Memory size / page size)
Tag size in bits? 14 (Number of lines in cache is 2^14 which gives us 14 bits for tag)
Replacement policy? I am not sure at all (maybe clock algorithm which approximates LRU)
Writeback or write-through? write back (It is not consistent with Disk at all times)
Write-allocate? yes, because after page fault we bring the page to memory for both writing and reading
Exclusivity/Inclusivity? I think non-inclusive and non-exclusive (NINE), maybe because memory mapped files are partially in memory and partially in a swap file or ELF file (program text). For example, the stack of a process is only in memory except when we run out of memory and send it to a swap file. Am I right?
I would be glad if someone checked my answers and helped me solve this correctly. Thanks! Sorry if this is not the place to ask this kind of question.

To start; your answers for line size, associativity and number of lines are right.
Tag size in bits? 14 (Number of lines in cache is 2^14 which gives us 14 bits for tag)
Tag size would be "location on disk / line size", plus some other bits (e.g. for managing the replacement policy). We don't know how big the disk is (other than "large").
However; it's possibly not unreasonable to work this out backwards - start from the assumption that the OS was designed to support a wide range of different hard disk sizes and that the tag is a nice "power of 2"; then assume that 4 bits are used for other purposes (e.g. "age" for LRU). If the tag is 32 bits (a very common "power of 2") then it would imply the OS could support a maximum disk size of 2 TiB ("(1 << (32-4)) * 8 KiB"), which is (keeping "future proofing" in mind) a little too small for an OS designed in the last 10 years or so. The next larger "power of 2" is 64 bits, which is very likely for modern hardware, but less likely for older hardware (e.g. 32-bit CPUs, smaller disks). Based on "128 MiB of RAM" I'd suspect that the hardware is very old (e.g. normal desktop/server systems started having more than 128 MiB in the late 1990s), so I'd go with "32 bit tag".
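As a sanity check of that arithmetic (the 32-bit tag and the 4 metadata bits are assumptions made in this answer, not given in the question), a few lines of C reproduce the numbers:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint64_t mem_bytes  = 1ULL << 27;               /* 128 MiB of main memory    */
        uint64_t page_bytes = 1ULL << 13;               /* 8 KiB pages = "line size" */
        uint64_t lines      = mem_bytes / page_bytes;   /* 2^14 "cache lines"        */

        /* Assumption: a 32-bit tag with 4 bits reserved for replacement
         * metadata leaves 28 bits to name a page-sized block on disk,
         * i.e. a maximum disk size of 2^28 * 8 KiB = 2 TiB. */
        unsigned tag_bits = 32, meta_bits = 4;
        uint64_t max_disk = (1ULL << (tag_bits - meta_bits)) * page_bytes;

        printf("lines = %llu, max disk = %llu GiB\n",
               (unsigned long long)lines,
               (unsigned long long)(max_disk >> 30));   /* prints 16384 and 2048 GiB */
        return 0;
    }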
Replacement policy? I am not sure at all (maybe clock algorithm which approximates LRU)
Writeback or write-through? write back (It is not consistent with Disk at all times)
Write-allocate? yes, because after page fault we bring the page to memory for both writing and reading
There isn't enough information to be sure.
A literal write-through policy would be a performance disaster (imagine having to write 8 KiB to a relatively slow disk every time anything pushed data on the stack). A write back policy would be very bad for fault tolerance (e.g. if there's a power failure you'd lose far too much data).
This alone is enough to imply that it's some kind of custom design (neither strictly write-back nor strictly write-through).
To complicate things more; an OS could take into account "eviction costs". E.g. if the data in memory is already on the disk then the page can be re-used immediately, but if the data in memory has been modified then that page would have to be saved to disk before the memory can be re-used; so if/when the OS needs to evict data from cache to make room (e.g. for more recently used data) it'd be reasonable to prefer evicting an unmodified page (which is cheaper to evict). In addition; for spinning disks, it's common for an OS to optimize disk access to minimize head movement (where the goal is to reduce seek times and improve disk IO performance).
The OS might combine all of these factors when deciding when modified data is written to disk.
Exclusivity/Inclusivity? I think non-inclusive and non-exclusive (NINE), maybe because memory mapped files are partially in memory and partially in a swap file or ELF file (program text). For example, the stack of a process is only in memory except when we run out of memory and send it to a swap file. Am I right?
If RAM is treated as a cache of disk, then the system is an example of single-level store (see https://en.wikipedia.org/wiki/Single-level_store ) and isn't a normal OS (with normal virtual memory - e.g. swap space and file systems). Typically systems that use single-level store are built around the idea of having "persistent objects" and do not have files at all. With this in mind; I don't think it's reasonable to make assumptions that would make sense for a normal operating system (e.g. assume that there are executable files, or that memory mapped files are supported, or that some part of the disk is "swap space" and another part of the disk is not).
However; I would assume that you're right about "non-inclusive and non exclusive (NINE)" - inclusive would be bad for performance (for the same reason write-through would be bad for performance) and exclusive would be very bad for fault tolerance (for the same reason that write-back is bad for fault tolerance).

Related

How does using paging method in OS justify its overhead?

So I was reading about paging in Operating Systems.
One of the biggest pros for using paging as a memory management method (that I've come across) is that it solves the external fragmentation problem (in both operative memory and storage) and allows processes to be allocated in operative memory in a non-contiguous way. However, to implement paging we would need to keep up and search through a page table which could have a large number of entries (millions in some cases), and I imagine there is a big overhead in doing so (both time and space wise).
The thing that I don't understand is why can't we just divide a program into an arbitrary number of segments each time we load it into operative memory. We could divide it in such a way that each segment "fills a hole" in operative memory if needed and therefore solve the problem of external fragmentation. Obviously the program could be loaded in a non-contiguous way and we would only need to store 2 addresses per segment (upper and lower bound) and maybe some table of segments to keep up the order.
To quote the book I'm reading(OS concepts - Abraham Silberschatz, Peter Baer Galvin, Greg Gagne, 9th edition): "Because of its advantages over earlier methods, paging in its various forms is used in most operating systems, from those for mainframes through those for smartphones".
Am I missing something here? How does using paging justify its overhead? Do we really need to keep track of each page? Do some other things go into consideration when choosing the right method used for Memory Management?
Paging is the basis for various "virtual memory tricks", like:
instead of having to wait while a file is loaded before it can be used; mark the pages as "belonging to a memory mapped file" and then pretend that the file was loaded (and fetch the pages from disk when they're actually needed, possibly with some pre-fetching strategy happening in the background). This reduces RAM consumption and can speed up things like executable start-up times.
instead of allocating actual physical RAM during things like process start-up (for its ".bss section"/uninitialized data, "not used yet" heap space, etc), and instead of literally copying all pages during fork(); use "copy on write" tricks so that you only need to create/copy the data when it's actually modified. This reduces RAM consumption and speeds everything up (especially when a lot of the memory is never modified).
if there isn't enough RAM, then you can just send pages to swap space (e.g. disk) and keep everything running. This is much faster than waiting for something to happen that will never happen (because the process crashed due to not having enough memory).
all of the things above that reduce the amount of actual RAM being used mean that you can use that RAM for file system caches instead. Significantly larger file system caches can mean significantly faster file IO (less "file cache miss", less slow disk IO). Of course if data is in file cache you can map the same pages as "copy on write" into multiple different processes too (without copying the data or using more RAM).
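For instance, the first trick in that list can be sketched on Linux roughly like this (a minimal sketch, not how any particular OS implements it; "some_big_file" is just a placeholder path):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* mmap() only sets up the mapping; nothing is read from disk until a
     * page is actually touched, at which point a page fault pulls in just
     * that one page. */
    int main(void) {
        int fd = open("some_big_file", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        fstat(fd, &st);

        const unsigned char *p = mmap(NULL, st.st_size, PROT_READ,
                                      MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* Touching one byte in the middle faults in only the page around it. */
        printf("byte at midpoint: %d\n", p[st.st_size / 2]);

        munmap((void *)p, st.st_size);
        close(fd);
        return 0;
    }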
For overhead; CPU manufacturers put a lot of effort into minimizing the overhead (using "translation look-aside buffers", pre-fetching strategies, out-of-order execution to keep CPU busy while it waits for a translation to happen, etc). In the same way, operating system developers also try to minimize the overhead (e.g. reduce the number of task switches between processes, trying to keep processes on the same core, etc). This means that the overhead is fairly small compared to the many large benefits.
Of course without paging you end up having to deal with external fragmentation, which typically devolves into wasting lots of CPU time copying large amounts of data from one piece of RAM to another to "de-fragment" RAM. On top of that, you'd need something else to ensure that different processes are isolated and that any permissions (like "this area is not executable") are/can be enforced; and this "something else" will probably add as much overhead as paging all by itself (unless you want a massive security disaster). With this in mind, even if you ignore the benefits of paging, paging is still likely to be less overhead than not using paging.
The thing that I don't understand is why can't we just divide a program into an arbitrary number of segments each time we load it into operative memory.
Dealing with different segment sizes would be painful (less "shift and mask to find the right index in a table" and more "do a linear search through all segments to find the right segment"). If you use fixed-size segments it would be much faster; but that's what paging is (a "page" is your "fixed size segment").
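As a rough illustration of that difference (a sketch with a hypothetical flat, single-level table and 4 KiB pages, not how a real MMU is organized):

    #include <stdint.h>

    #define PAGE_SHIFT 12
    #define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)

    /* Fixed-size pages: translation is a shift and a mask. */
    uint64_t translate_paged(const uint64_t *frame_base, uint64_t vaddr) {
        uint64_t page   = vaddr >> PAGE_SHIFT;      /* index into the table */
        uint64_t offset = vaddr & PAGE_MASK;
        return frame_base[page] + offset;           /* frame base + offset  */
    }

    /* Variable-size segments: translation degenerates into a search. */
    struct segment { uint64_t lo, hi, phys_base; };

    uint64_t translate_segmented(const struct segment *seg, int n, uint64_t vaddr) {
        for (int i = 0; i < n; i++)                 /* linear search        */
            if (vaddr >= seg[i].lo && vaddr < seg[i].hi)
                return seg[i].phys_base + (vaddr - seg[i].lo);
        return (uint64_t)-1;                        /* no segment: a fault  */
    }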
1: Dividing a program into arbitrary segments.
They can't quite be arbitrary; for example I might like to make a very large vector for some application:
#define N 10000000
int v[N];
....
for (i = 0; i < N; i++) {
v[i] = ...
}
The compiler really wants v to appear to occupy successive memory locations. So, your segmenter would need to be aware of these items; but it gets worse:
int *v;
v = malloc(sizeof(*v) * N);
for (i = 0; i < N; i++) {
v[i] = ...;
}
2: Justifying overhead:
Now you need to find a large chunk of physically contiguous RAM at runtime, and since you have no relocation mechanism, you have no way to move around previously allocated chunks. This is the fragmentation problem; and without a page-style mmu, it is very difficult to solve.
You can solve it by turning your compiled languages into pseudo-interpreted languages; but what has more overhead: compiling:
a = v[i];
into:
ld R0,R1+R2*4 ; + overhead of mmu.
or:
mov R0, R1+R2*4
call findseg
ld R0, R0
In the general case, the overhead, in terms of RAM, is in the order of 0.1%; for a concrete example, 10 bytes for a 4k page on an ARM64 or AMD64 architecture.
libc.so, on my linux system, is about 2M of text + 40k data; most programs only use a very small amount of this. Thanks to paging, only the bits of libc that are used need to occupy memory. On a 64G system with 32 processes, the libc savings alone swamps the page table overhead.
3: Keeping track.
There are a number of avenues to attack this from. One is multiple page sizes, which are supported on most architectures. The other is that the OS doesn't have to provide the granularity of the MMU. For instance, on a system with 4k pages, it could insist on only surrendering memory in 64K chunks, thus reducing the management overhead by a factor of 16 while modestly increasing the granularity wastage.

Is there a performance gain on x86 Windows/Linux apps in minimizing page count?

I'm writing and running software on Windows and Linux of the last five years, say, on machines of the last five years.
I believe the running processes will never get close to the amount of memory we have (we're typically using like 3GB out of 32GB) but I'm certain we'll be filling up the cache to overflowing.
I have a data structure modification which makes the data structure run a bit slower (that is, do quite a bit more pointer math) but reduces both the cache line count and page count (working set) of the application.
My understanding is that if processes' memory is being paged out to disk, reducing the page count and thus disk I/O is a huge win in terms of wall clock time and latency (e.g. to respond to network events, user events, timers, etc.) However with memory so cheap nowadays, I don't think processes using my data structure will ever page out of memory. So is there still an advantage in improving this? For example if I'm using 10,000 cache lines, does it matter whether they're spread over 157 4k pages (=643k) or 10,000 4k pages (=40MB)? Subquestion: even if you're nowhere near filling up RAM, are there cases a modern OS will page a running process out anyways? (I think Linux of the 90s might have done so to increase file caching, for instance.)
In contrast, I can also reduce cache lines, and I suspect this will probably improve wall clock time quite a bit. Just to be clear on what I'm counting, I'm counting the number of 64-byte-aligned areas of memory that I'm touching at least one byte in as the total cache lines used by this structure. Example: if hypothetically I have 40,000 16-byte structures, a computer whose processes are constantly referring to more memory than it has L1 cache will probably run everything faster if those structures are packed into 10,000 64-byte cache lines instead of each straddling two cache lines apiece for 80,000 cache lines?
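For what it's worth, here is the arithmetic behind those counts (the numbers are the ones from my example above):

    #include <stdio.h>

    int main(void) {
        long structs    = 40000, struct_bytes = 16;
        long line_bytes = 64,    page_bytes   = 4096;

        long packed_lines   = structs * struct_bytes / line_bytes;  /* 10,000 lines */
        long straddle_lines = structs * 2;       /* 80,000 if every struct spans two lines */
        long packed_pages   = (packed_lines * line_bytes + page_bytes - 1)
                              / page_bytes;      /* 157 pages (~643k)                      */

        printf("%ld lines packed, %ld lines straddling, %ld pages packed\n",
               packed_lines, straddle_lines, packed_pages);
        return 0;
    }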
The general recommendation is to minimize the number of cache lines and virtual pages that exhibit high temporal locality on the same core during the same time interval of the lifetime of a given application.
When the total number of needed physical pages is about to exceed the capacity of main memory, the OS will attempt to free up some physical memory by paging out some of the resident pages to secondary storage. This situation may result in major page faults, which can significantly impact performance. Even if you know with certainty that the system will not reach that point, there are other performance issues that may occur when unnecessarily using more virtual pages (i.e., there is wasted space in the used pages).
On Intel and AMD processors, page table entries and other paging structures are cached in hardware caches in the memory management unit to efficiently translate virtual addresses to physical addresses. These include the L1 and L2 TLBs. On an L2 TLB miss, a hardware component called the page walker is engaged to perform the required address translation. More pages means more misses. On pre-Broadwell (1) and pre-Zen microarchitectures, there can only be one outstanding page walk at any time. On later microarchitectures, there can be two. In addition, on Intel Ivy Bridge and later, it may be more difficult for the TLB prefetcher to keep up with the misses.
On Intel and AMD processors, the L1D and L2 caches are designed so that all cache lines within the same 4K page are guaranteed to be mapped to different cache sets. So if all of the cache lines of a page are used, in contrast to, for example, spreading out the cache lines in 10 different pages, conflict misses in these cache levels may be reduced. That said, on all AMD processors and on Intel pre-Haswell microarchitectures, bank conflicts are more likely to occur when accesses are more spread across cache sets.
On Intel and AMD processors, hardware data prefetchers don't work across 4K boundaries. An access pattern that could be detected by one or more prefetchers but has the accesses spread out across many pages would benefit less from hardware prefetching.
On Windows/Intel, the accessed bits of the page table entries of all present pages are reset every second by the working set manager. So when accesses are unnecessarily spread out in the virtual address space, the number of page walks that require microcode assists (to set the accessed bit) per memory access may become larger.
The same applies to minor page faults as well (on Intel and AMD). Both Windows and Linux use demand paging, i.e., an allocated virtual page is only mapped to a physical page on demand. When a virtual page that is not yet mapped to a physical page is accessed, a minor page fault occurs. Just like with the accessed bit, the number of minor page faults per access may be larger.
On Intel processors, with more pages accessed, it becomes more likely for nearby accesses on the same logical core to 4K-alias. For more information on 4K aliasing, see: L1 memory bandwidth: 50% drop in efficiency using addresses which differ by 4096+64 bytes.
There are probably other potential issues.
Subquestion: even if you're nowhere near filling up RAM, are there cases a modern OS will page a running process out anyways? (I think Linux of the 90s might have done so to increase file caching, for instance.)
On both Windows and Linux, there is a maximum working set size limit on each process. On Linux, this is called RLIMIT_RSS and is not enforced. On Windows, this limit can be either soft (the default) or hard. The limit is only enforced if it is hard (which can be specified by calling the SetProcessWorkingSetSizeEx function and passing the QUOTA_LIMITS_HARDWS_MIN_ENABLE flag). When a process reaches its hard working set limit, further memory allocation requests will be satisfied by paging out some of its pages to the page file, even if free physical memory is available.
I don't know about Linux of the 90s.
Footnotes:
(1) The Intel optimization manual mentions in Section 2.2.3 that Skylake can do two page walks in parallel compared to one in Haswell and earlier microarchitectures. To my knowledge, the manual does not clearly mention whether Broadwell can do one or two page walks in parallel. However, according to slide 10 of these Intel slides (entitled "Intel Xeon Processor D: The First Xeon processor optimized for dense solutions"), Xeon Broadwell supports two page walks. I think this also applies to all Broadwell models.

Minimizing page faults (and TLB faults) while "walking" a large graph

Problem (think of the mark phase of a GC)
I have a graph of “objects” that I need to walk, visiting all objects.
I can store in each object if it has been visited.
All the objects are stored in memory and linked together using normal pointers.
The objects are not all the same size.
Sometimes there is not enough ram in the system to hold all the objects in memory at the same time, and I wish to avoid “page thrashing”.
I also wish to avoid TLB faults
Other times, there is more than enough ram.
I do not mind writing low-level code.
I do not mind different code for windows and linux.
The code must run in “user space” without needing non-standard permissions.
I don't care about the order I visit the nodes in.
I am going to ask more detailed questions about possible solutions, linking back to this question.
Page faults aren't necessarily bad, as long as they're not stalling your progress.
This means that if you have a node Node* p with two candidate successors p->left and p->right, it can be useful to pick the nearest (in terms of distance in the address space, e.g. (char*)p->left - (char*)p) and pre-fetch the other (e.g. with PrefetchVirtualMemory).
How efficient this will be cannot be predicted; it greatly depends on your graph topology. But the prefetch is virtually free when you have enough RAM.
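A minimal sketch of that idea, assuming both successors exist (the Node layout here is hypothetical, and PrefetchVirtualMemory needs Windows 8 or later, i.e. _WIN32_WINNT >= 0x0602; on Linux, madvise(MADV_WILLNEED) plays a similar role):

    #include <windows.h>
    #include <stddef.h>

    /* Hypothetical node layout; a real graph's layout will differ. */
    typedef struct Node {
        struct Node *left, *right;
    } Node;

    /* Continue with the successor that is closest in the address space
     * (cheapest to touch next) and ask the OS to start paging in the
     * other one in the background. */
    static Node *pick_and_prefetch(Node *p) {
        ptrdiff_t dl = (char *)p->left  - (char *)p;
        ptrdiff_t dr = (char *)p->right - (char *)p;
        if (dl < 0) dl = -dl;
        if (dr < 0) dr = -dr;

        Node *near_node = (dl <= dr) ? p->left  : p->right;
        Node *far_node  = (dl <= dr) ? p->right : p->left;

        WIN32_MEMORY_RANGE_ENTRY range = { far_node, sizeof *far_node };
        PrefetchVirtualMemory(GetCurrentProcess(), 1, &range, 0);   /* a hint only */

        return near_node;
    }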
Closer to the CPU, there's cache prefetching. Same idea, different storage
Use 2M hugepages for address ranges that are full of "hot" data that the kernel can't usefully swap out any / many 4k chunks of. This will reduce TLB misses, but costs extra physical memory if there are any 4k chunks of a hugepage that aren't hot.
Linux does this transparently for anonymous pages (https://www.kernel.org/doc/Documentation/vm/transhuge.txt), but you can use madvise(MADV_HUGEPAGE) on pages you know are worth it, to encourage the kernel to defrag physical memory even if that's not the default in /sys/kernel/mm/transparent_hugepage/defrag. (You can look at /proc/PID/smaps to see how many transparent hugepages are in use for any given mapping.)
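A sketch of asking for hugepages on an anonymous region (Linux-specific; MAP_ANONYMOUS and MADV_HUGEPAGE may need _DEFAULT_SOURCE or _GNU_SOURCE depending on the libc, and the madvise call is only a hint):

    #include <stdio.h>
    #include <sys/mman.h>

    #define HOT_BYTES (64UL << 20)   /* e.g. 64 MiB of "hot" graph data */

    int main(void) {
        void *hot = mmap(NULL, HOT_BYTES, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (hot == MAP_FAILED) { perror("mmap"); return 1; }

        /* Encourage the kernel to back this range with 2M transparent
         * hugepages; harmless if THP is unavailable. */
        if (madvise(hot, HOT_BYTES, MADV_HUGEPAGE) != 0)
            perror("madvise(MADV_HUGEPAGE)");

        /* ... allocate the hot objects out of this region ... */

        munmap(hot, HOT_BYTES);
        return 0;
    }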
Based on what you posted in your answer: An ordered set of nodesToVisit would give you the most locality, but might be too expensive to maintain. Multiple accesses within the same 64-byte cache line are much cheaper than coming back to it later after it's been evicted from L3 cache and has to come from DRAM again.
If you have lots of addresses to visit in your Set, doing one pass of a radix-sort into 2M buckets would give you locality within one hugepage. 2M is also smaller than L3 cache size, so you'll probably get some cache hits when visiting multiple objects in the same cache line, even if you don't hit them back to back.
Depending on how big your Set is, throwing around that many pointers even to partial-sort them might not be worth the memory traffic that takes. But there's probably some sweet spot of taking a window of data and at least partially sorting it. Using the pointers before they are evicted from cache is nice.
SW prefetch can trigger a page-walk to avoid a TLB miss, so you could _mm_prefetch(_MM_HINT_T2) one address from the next 2M bucket before starting on the current bucket. See also Prefetching Examples?. I haven't tested this, but it might work well. It won't help with page faults: prefetch from an unmapped page won't cause a page fault, and you don't want to trigger an actual PF until you're ready to touch the page.
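Putting the bucketing and that prefetch together, an (untested) sketch along those lines; visit() and the Node type are placeholders, and qsort stands in for the radix pass:

    #include <stdint.h>
    #include <stdlib.h>
    #include <xmmintrin.h>            /* _mm_prefetch */

    typedef struct Node Node;         /* opaque here; the layout doesn't matter */
    void visit(Node *n);              /* hypothetical per-node work              */

    #define BUCKET_SHIFT 21           /* 2 MiB regions */

    static int by_bucket(const void *a, const void *b) {
        uintptr_t ka = (uintptr_t)*(Node *const *)a >> BUCKET_SHIFT;
        uintptr_t kb = (uintptr_t)*(Node *const *)b >> BUCKET_SHIFT;
        return (ka > kb) - (ka < kb);
    }

    /* Partially sort the pending pointers by 2 MiB region, then drain them
     * region by region so accesses stay within one hugepage (and largely
     * within L3).  Before starting a region, prefetch one address from the
     * next region so its page walk can overlap with the current work. */
    void visit_by_region(Node **set, size_t n) {
        qsort(set, n, sizeof *set, by_bucket);

        size_t start = 0;
        while (start < n) {
            uintptr_t cur = (uintptr_t)set[start] >> BUCKET_SHIFT;
            size_t end = start;
            while (end < n && ((uintptr_t)set[end] >> BUCKET_SHIFT) == cur)
                end++;

            if (end < n)              /* warm up the next region's TLB entry */
                _mm_prefetch((const char *)set[end], _MM_HINT_T2);

            for (size_t i = start; i < end; i++)
                visit(set[i]);

            start = end;
        }
    }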
MSalter's suggestion to ask the OS to prefetch and wire the next page is interesting (I think madvise(MADV_WILLNEED) is the Linux equivalent), but a system call will be slow for no benefit if the page was already mapped+wired into the HW page table. There's no x86 asm instruction that just asks if a page is mapped without faulting if it isn't, so I can't think of a way to efficiently choose not to call it. And BTW, I think Linux breaks up transparent hugepages into 4k regular pages for paging in/out. But don't write a big loop that just does _mm_prefetch() or madvise on all the 4k pages in a 2M block; that probably sucks. The prefetcht2 part would probably just result in excess prefetch requests being dropped.
Use perf counters to look at cache hit/miss rates. On Intel CPUs, the mem_load_retired.l1_miss and/or .l2_miss event should show you whether you're getting cache hits on accessing the Set itself, as well as on accessing dereferencing those pointers. Those counters are precise events, so they should map accurately to asm load instructions. (e.g. perf record -e mem_load_retired.l2_miss ./my_program / perf report on Linux).
We remove one item at a time from nodesToVisit
I don't know much about GC design, but can't you use a sequence number or tagged-pointer or something to avoid modifying the Set data structure itself every GC pass? If your minimum object alignment is 4 bytes, you have 2 bits to play with at the bottom of every pointer. ANDing them off before dereferencing is very cheap.
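For example, a low-bit mark could look like this (assuming at least 4-byte object alignment; a sketch, not how any particular GC does it):

    #include <stdint.h>

    typedef struct Node Node;

    #define MARK_BIT ((uintptr_t)1)   /* bit 0 is always zero if alignment >= 4 */

    static inline Node *set_mark(Node *p)   { return (Node *)((uintptr_t)p |  MARK_BIT); }
    static inline Node *clear_mark(Node *p) { return (Node *)((uintptr_t)p & ~MARK_BIT); }
    static inline int   is_marked(Node *p)  { return (int)((uintptr_t)p & MARK_BIT); }

    /* AND the tag off before dereferencing:  Node *real = clear_mark(tagged); */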
x86-64 with full 64-bit pointers currently requires the top 16 to be the sign-extension of the low 48. So you could use bits there (16 bits, or maybe just the top byte) if you re-canonicalize pointers. (redo sign extension, or just zero the high 16 bits if you want to assume user-space pointers; Linux uses a high-half kernel VM layout so user-space addresses are always in the low half of virtual address space. IDK what Windows does.)
On x86-64, you might consider using the x32 ABI (32-bit pointers in long mode) if 4GiB of address space is enough, especially if you're hitting physical memory limits and swapping. Smaller pointers mean smaller data structures, thus half the cache footprint.
Some Linux systems are built without kernel support for x32, though, only classic x86-64 and usually 32-bit mode. But if it works on your systems, consider gcc -mx32.
These are my first thoughts about a possible solution, they are clearly not optimal. I will delete this answer if someone posts a better answer.
The basic method:
Assume we have a Set<NodePointer> nodesToVisit that contains all nodes we have not yet visited.
We remove one item at a time from nodesToVisit,
and if it has not been visited before we add all “pointers to other nodes” to nodesToVisit.
Improvements:
But we can clearly do better, by ordering nodesToVisit based on address, so that we are more likely to visit nodes that are contained in pages we have recently accessed. This could be as simple as having a second Set<NodePointer> nodesToVisitLater, and putting any node that has an address a long way from the current node into it.
Or we could skip over any nodes that are contained in pages that are not resident in memory, visiting these nodes after we have visited all nodes that are currently in memory.
(The "set" could just be a stack, as visiting a node more than once is a "no-op".)
https://patents.google.com/patent/US7653797B1/en seems to be related, but I have not read it yet.
https://hosking.github.io/links/Cher+2004ASPLOS.pdf
https://people.cs.umass.edu/~emery/pubs/cramm.pdf
https://people.cs.umass.edu/~emery/pubs/f034-hertz.pdf
https://people.cs.umass.edu/~emery/pubs/04-16.pdf

Large memory block allocation and 4K blocks

Consider this quote from Mark Russinovich's books on Windows internals. This is about the large-page allocation mechanism, intended for allocating large non-paged memory blocks in physical memory
http://books.google.com/books?id=CdxMRjJksScC&pg=PA194&lpg=PA194#v=onepage
Attempts to allocate large pages may fail after the operating system has been running for an extended period, because the physical memory for each large page must occupy a significant number (see Table 10-1) of physically contiguous small pages, and this extent of physical pages must furthermore begin on a large page boundary. (For example, physical pages 0 through 511 could be used as a large page on an x64 system, as could physical pages 512 through 1,023, but pages 10 through 521 could not.) Free physical memory does become fragmented as the system runs. This is not a problem for allocations using small pages but can cause large page allocations to fail.
If I understand this correctly, he's saying that fragmentation produced by scattered 4K pages can prevent successful allocation of large 2M pages in physical memory. But why? Ordinary 4K physical pages are easily relocatable and can also be easily swapped out. In other words, if we have a physical memory region not occupied by other 2M pages, we can always "clean it up": make it available by relocating any interfering 4K pages from that physical memory region to some other location. I.e. from the "naive" point of view, 2M allocations should "always succeed", as long as we have enough free physical RAM.
What is wrong with my logic? What exactly is Mark talking about when he says that physical memory fragmentation caused by 4K pages can prevent successful allocation of large pages?
It actually worked this way in Windows XP. But the cost was too prohibitive and a design change in Vista disabled this approach. Explained well in this blog post, I'll quote the essential part:
In Windows Vista, the memory manager folks recognized that these long delays made very large pages less attractive for applications, so they changed the behavior so requests for very large pages from applications went through the "easy parts" of looking for contiguous physical memory, but gave up before the memory manager went into desperation mode, preferring instead just to fail.
He's talking about a specific problem that exists with allocating and freeing contiguous memory blocks over time, and you're describing a solution. Nothing is wrong with your logic and that's roughly what the .NET Garbage Collector does to reduce memory fragmentation. You're spot on.
If you have 10 seats per row at a baseball game and seats 2, 4, 6, and 8 are taken (fragmented), you will never be able to get 3 seats in that row for you and your friends unless you ask someone to move (compacted).
There's nothing special about the 4k blocks he's describing.

What are the differences between virtual memory and physical memory?

I am often confused with the concept of virtualization in operating systems. Considering RAM as the physical memory, why do we need the virtual memory for executing a process?
Where does this virtual memory stand when the process (program) from the external hard drive is brought to the main memory (physical memory) for execution?
Who takes care of the virtual memory and what is the size of the virtual memory?
Suppose the size of the RAM is 4 GB (i.e. addresses 0 to 2^32-1); what is the size of the virtual memory?
Softwares run on the OS on a very simple premise - they require memory. The device OS provides it in the form of RAM. The amount of memory required may vary - some softwares need huge memory, some require paltry memory. Most (if not all) users run multiple applications on the OS simultaneously, and given that memory is expensive (and device size is finite), the amount of memory available is always limited. So given that all softwares require a certain amount of RAM, and all of them can be made to run at the same time, OS has to take care of two things:
That the software always runs until user aborts it, i.e. it should not auto-abort because OS has run out of memory.
The above activity, while maintaining a respectable performance for the softwares running.
Now the main question boils down to how the memory is being managed. What exactly governs where in the memory will the data belonging to a given software reside?
Possible solution 1: Let individual softwares specify explicitly the memory address they will use in the device. Suppose Photoshop declares that it will always use memory addresses ranging from 0 to 1023 (imagine the memory as a linear array of bytes, so first byte is at location 0, 1024th byte is at location 1023) - i.e. occupying 1 GB memory. Similarly, VLC declares that it will occupy memory range 1244 to 1876, etc.
Advantages:
Every application is pre-assigned a memory slot, so when it is installed and executed, it just stores its data in that memory area, and everything works fine.
Disadvantages:
This does not scale. Theoretically, an app may require a huge amount of memory when it is doing something really heavy-duty. So to ensure that it never runs out of memory, the memory area allocated to it must always be more than or equal to that amount of memory. What if a software, whose maximal theoretical memory usage is 2 GB (hence requiring 2 GB memory allocation from RAM), is installed in a machine with only 1 GB memory? Should the software just abort on startup, saying that the available RAM is less than 2 GB? Or should it continue, and the moment the memory required exceeds 2 GB, just abort and bail out with the message that not enough memory is available?
It is not possible to prevent memory mangling. There are millions of softwares out there, even if each of them was allotted just 1 kB memory, the total memory required would exceed 16 GB, which is more than most devices offer. How can, then, different softwares be allotted memory slots that do not encroach upon each other's areas? Firstly, there is no centralized software market which can regulate that when a new software is being released, it must assign itself this much memory from this yet unoccupied area, and secondly, even if there were, it is not possible to do it because the no. of softwares is practically infinite (thus requiring infinite memory to accommodate all of them), and the total RAM available on any device is not sufficient to accommodate even a fraction of what is required, thus making inevitable the encroaching of the memory bounds of one software upon that of another. So what happens when Photoshop is assigned memory locations 1 to 1023 and VLC is assigned 1000 to 1676? What if Photoshop stores some data at location 1008, then VLC overwrites that with its own data, and later Photoshop accesses it thinking that it is the same data it had stored there previously? As you can imagine, bad things will happen.
So clearly, as you can see, this idea is rather naive.
Possible solution 2: Let's try another scheme - where OS will do majority of the memory management. Softwares, whenever they require any memory, will just request the OS, and the OS will accommodate accordingly. Say OS ensures that whenever a new process is requesting for memory, it will allocate the memory from the lowest byte address possible (as said earlier, RAM can be imagined as a linear array of bytes, so for a 4 GB RAM, the addresses range for a byte from 0 to 2^32-1) if the process is starting, else if it is a running process requesting the memory, it will allocate from the last memory location where that process still resides. Since the softwares will be emitting addresses without considering what the actual memory address is going to be where that data is stored, OS will have to maintain a mapping, per software, of the address emitted by the software to the actual physical address (Note: that is one of the two reasons we call this concept Virtual Memory. Softwares are not caring about the real memory address where their data are getting stored, they just spit out addresses on the fly, and the OS finds the right place to fit it and find it later if required).
Say the device has just been turned on, OS has just launched, right now there is no other process running (ignoring the OS, which is also a process!), and you decide to launch VLC. So VLC is allocated a part of the RAM from the lowest byte addresses. Good. Now while the video is running, you need to start your browser to view some webpage. Then you need to launch Notepad to scribble some text. And then Eclipse to do some coding.. Pretty soon your memory of 4 GB is all used up, and the RAM looks like this:
Problem 1: Now you cannot start any other process, for all RAM is used up. Thus programs have to be written keeping the maximum memory available in mind (practically even less will be available, as other softwares will be running parallelly as well!). In other words, you cannot run a high-memory consuming app in your ramshackle 1 GB PC.
Okay, so now you decide that you no longer need to keep Eclipse and Chrome open, you close them to free up some memory. The space occupied in RAM by those processes is reclaimed by OS, and it looks like this now:
Suppose that closing these two frees up 700 MB space - (400 + 300) MB. Now you need to launch Opera, which will take up 450 MB space. Well, you do have more than 450 MB space available in total, but...it is not contiguous, it is divided into individual chunks, none of which is big enough to fit 450 MB. So you hit upon a brilliant idea, let's move all the processes below to as much above as possible, which will leave the 700 MB empty space in one chunk at the bottom. This is called compaction. Great, except that...all the processes which are there are running. Moving them will mean moving the address of all their contents (remember, OS maintains a mapping of the memory spat out by the software to the actual memory address. Imagine software had spat out an address of 45 with data 123, and OS had stored it in location 2012 and created an entry in the map, mapping 45 to 2012. If the software is now moved in memory, what used to be at location 2012 will no longer be at 2012, but in a new location, and OS has to update the map accordingly to map 45 to the new address, so that the software can get the expected data (123) when it queries for memory location 45. As far as the software is concerned, all it knows is that address 45 contains the data 123!)! Imagine a process that is referencing a local variable i. By the time it is accessed again, its address has changed, and it won't be able to find it any more. The same will hold for all functions, objects, variables, basically everything has an address, and moving a process will mean changing the address of all of them. Which leads us to:
Problem 2: You cannot move a process. The values of all variables, functions and objects within that process have hardcoded values as spat out by the compiler during compilation, the process depends on them being at the same location during its lifetime, and changing them is expensive. As a result, processes leave behind big "holes" when they exit. This is called External Fragmentation.
Fine. Suppose somehow, by some miraculous manner, you do manage to move the processes up. Now there is 700 MB of free space at the bottom:
Opera smoothly fits in at the bottom. Now your RAM looks like this:
Good. Everything is looking fine. However, there is not much space left, and now you need to launch Chrome again, a known memory-hog! It needs lots of memory to start, and you have hardly any left...Except.. you now notice that some of the processes, which were initially occupying large space, now is not needing much space. May be you have stopped your video in VLC, hence it is still occupying some space, but not as much as it required while running a high resolution video. Similarly for Notepad and Photos. Your RAM now looks like this:
Holes, once again! Back to square one! Except, previously, the holes occurred due to processes terminating, now it is due to processes requiring less space than before! And you again have the same problem, the holes combined yield more space than required, but they are scattered around, not much of use in isolation. So you have to move those processes again, an expensive operation, and a very frequent one at that, since processes will frequently reduce in size over their lifetime.
Problem 3: Processes, over their lifetime, may reduce in size, leaving behind unused space, which if needed to be used, will require the expensive operation of moving many processes. This is called Internal Fragmentation.
Fine, so now, your OS does the required thing, moves processes around and starts Chrome, and after some time, your RAM looks like this:
Cool. Now suppose you again resume watching Avatar in VLC. Its memory requirement will shoot up! But...there is no space left for it to grow, as Notepad is snuggled at its bottom. So, again, all processes have to move below until VLC has found sufficient space!
Problem 4: If a process needs to grow, it will be a very expensive operation
Fine. Now suppose, Photos is being used to load some photos from an external hard disk. Accessing hard-disk takes you from the realm of caches and RAM to that of disk, which is slower by orders of magnitudes. Painfully, irrevocably, transcendentally slower. It is an I/O operation, which means it is not CPU bound (it is rather the exact opposite), which means it does not need to occupy RAM right now. However, it still occupies RAM stubbornly. If you want to launch Firefox in the meantime, you can't, because there is not much memory available, whereas if Photos was taken out of memory for the duration of its I/O bound activity, it would have freed a lot of memory, followed by (expensive) compaction, followed by Firefox fitting in.
Problem 5: I/O bound jobs keep on occupying RAM, leading to under-utilization of RAM, which could have been used by CPU bound jobs in the meantime.
So, as we can see, we have so many problems even with the approach of virtual memory.
There are two approaches to tackle these problems - paging and segmentation. Let us discuss paging. In this approach, the virtual address space of a process is mapped to the physical memory in chunks - called pages. A typical page size is 4 kB. The mapping is maintained by something called a page table; given a virtual address, all we now have to do is find out which page the address belongs to, then from the page table, find the corresponding location for that page in actual physical memory (known as frame), and given that the offset of the virtual address within the page is the same for the page as well as the frame, find out the actual address by adding that offset to the address returned by the page table. For example:
On the left is the virtual address space of a process. Say the virtual address space requires 40 units of memory. If the physical address space (on the right) had 40 units of memory as well, it would have been possible to map all locations from the left to a location on the right, and we would have been so happy. But as ill luck would have it, not only does the physical memory have less (24 here) memory units available, it has to be shared between multiple processes as well! Fine, let's see how we make do with it.
When the process starts, say a memory access request for location 35 is made. Here the page size is 8 (each page contains 8 locations, the entire virtual address space of 40 locations thus contains 5 pages). So this location belongs to page no. 4 (35/8). Within this page, this location has an offset of 3 (35%8). So this location can be specified by the tuple (pageIndex, offset) = (4,3). This is just the start, so no part of the process is stored in the actual physical memory yet. So the page table, which maintains a mapping of the pages on the left to the actual pages on the right (where they are called frames) is currently empty. So OS relinquishes the CPU, lets a device driver access the disk and fetch the page no. 4 for this process (basically a memory chunk from the program on the disk whose addresses range from 32 to 39). When it arrives, OS allocates the page somewhere in the RAM, say first frame itself, and the page table for this process takes note that page 4 maps to frame 0 in the RAM. Now the data is finally there in the physical memory. OS again queries the page table for the tuple (4,3), and this time, page table says that page 4 is already mapped to frame 0 in the RAM. So OS simply goes to the 0th frame in RAM, accesses the data at offset 3 in that frame (Take a moment to understand this. The entire page, which was fetched from disk, is moved to frame. So whatever the offset of an individual memory location in a page was, it will be the same in the frame as well, since within the page/frame, the memory unit still resides at the same place relatively!), and returns the data! Because the data was not found in memory at first query itself, but rather had to be fetched from disk to be loaded into memory, it constitutes a miss.
Fine. Now suppose, a memory access for location 28 is made. It boils down to (3,4). Page table right now has only one entry, mapping page 4 to frame 0. So this is again a miss, the process relinquishes the CPU, device driver fetches the page from disk, process regains control of CPU again, and its page table is updated. Say now the page 3 is mapped to frame 1 in the RAM. So (3,4) becomes (1,4), and the data at that location in RAM is returned. Good. In this way, suppose the next memory access is for location 8, which translates to (1,0). Page 1 is not in memory yet, the same procedure is repeated, and the page is allocated at frame 2 in RAM. Now the RAM-process mapping looks like the picture above. At this point in time, the RAM, which had only 24 units of memory available, is filled up. Suppose the next memory access request for this process is from address 30. It maps to (3,6), and page table says that page 3 is in RAM, and it maps to frame 1. Yay! So the data is fetched from RAM location (1,6), and returned. This constitutes a hit, as data required can be obtained directly from RAM, thus being very fast. Similarly, the next few access requests, say for locations 11, 32, 26, 27 all are hits, i.e. data requested by the process is found directly in the RAM without needing to look elsewhere.
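To make the translation concrete, here is a tiny model of the example so far (page size 8; the page table holds the three mappings built up above, and address 3 is exactly the access that triggers the eviction described next):

    #include <stdio.h>

    #define PAGE_SIZE  8
    #define NUM_PAGES  5          /* 40 virtual locations = pages 0..4 */
    #define NOT_MAPPED (-1)

    int main(void) {
        /* page 1 -> frame 2, page 3 -> frame 1, page 4 -> frame 0 */
        int page_table[NUM_PAGES] = { NOT_MAPPED, 2, NOT_MAPPED, 1, 0 };

        int vaddr[] = { 35, 28, 8, 30, 3 };
        for (int i = 0; i < 5; i++) {
            int page   = vaddr[i] / PAGE_SIZE;
            int offset = vaddr[i] % PAGE_SIZE;
            int frame  = page_table[page];
            if (frame == NOT_MAPPED)
                printf("vaddr %2d -> (page %d, offset %d): page fault\n",
                       vaddr[i], page, offset);
            else
                printf("vaddr %2d -> (page %d, offset %d) -> RAM address %d\n",
                       vaddr[i], page, offset, frame * PAGE_SIZE + offset);
        }
        return 0;
    }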
Now suppose a memory access request for location 3 comes. It translates to (0,3), and page table for this process, which currently has 3 entries, for pages 1, 3 and 4 says that this page is not in memory. Like previous cases, it is fetched from disk, however, unlike previous cases, RAM is filled up! So what to do now? Here lies the beauty of virtual memory, a frame from the RAM is evicted! (Various factors govern which frame is to be evicted. It may be LRU based, where the frame which was least recently accessed for a process is to be evicted. It may be first-come-first-evicted basis, where the frame which was allocated longest ago is evicted, etc.) So some frame is evicted. Say frame 1 (just randomly choosing it). However, that frame is mapped to some page! (Currently, it is mapped by the page table to page 3 of our one and only one process). So that process has to be told this tragic news, that one frame, which unfortunately belongs to you, is to be evicted from RAM to make room for another page. The process has to ensure that it updates its page table with this information, that is, removing the entry for that page-frame duo, so that the next time a request is made for that page, it rightly tells the process that this page is no longer in memory, and has to be fetched from disk. Good. So frame 1 is evicted, page 0 is brought in and placed there in the RAM, and the entry for page 3 is removed, and replaced by page 0 mapping to the same frame 1. So now our mapping looks like this (note the colour change in the second frame on the right side):
See what just happened? The process had to grow, it needed more space than the available RAM, but unlike our earlier scenario where every process in the RAM had to move to accommodate a growing process, here it happened by just one page replacement! This was made possible by the fact that the memory for a process no longer needs to be contiguous, it can reside at different places in chunks, OS maintains the information as to where they are, and when required, they are appropriately queried. Note: you might be thinking, huh, what if most of the time it is a miss, and the data has to be constantly loaded from disk into memory? Yes, theoretically, it is possible, but most compilers are designed in such a manner that follows locality of reference, i.e. if data from some memory location is used, the next data needed will be located somewhere very close, perhaps from the same page, the page which was just loaded into memory. As a result, the next miss will happen after quite some time, most of the upcoming memory requirements will be met by the page just brought in, or the pages already in memory which were recently used. The exact same principle allows us to evict the least recently used page as well, with the logic that what has not been used in a while, is not likely to be used in a while either. However, it is not always so, and in exceptional cases, yes, performance may suffer. More about it later.
Solution to Problem 4: Processes can now grow easily, if space problem is faced, all it requires is to do a simple page replacement, without moving any other process.
Solution to Problem 1: A process can access unlimited memory. When more memory than available is needed, the disk is used as backup, the new data required is loaded into memory from the disk, and the least recently used data frame (or page) is moved to disk. This can go on infinitely, and since disk space is cheap and virtually unlimited, it gives an illusion of unlimited memory. Another reason for the name Virtual Memory, it gives you illusion of memory which is not really available!
Cool. Earlier we were facing a problem where even though a process reduces in size, the empty space is difficult to be reclaimed by other processes (because it would require costly compaction). Now it is easy, when a process becomes smaller in size, many of its pages are no longer used, so when other processes need more memory, a simple LRU based eviction automatically evicts those less-used pages from RAM, and replaces them with the new pages from the other processes (and of course updating the page tables of all those processes as well as the original process which now requires less space), all these without any costly compaction operation!
Solution to Problem 3: Whenever a process reduces in size, its frames in RAM will be less used, so a simple LRU based eviction can evict those pages out and replace them with pages required by new processes, thus avoiding Internal Fragmentation without need for compaction.
As for problem 2, take a moment to understand this, the scenario itself is completely removed! There is no need to move a process to accommodate a new process, because now the entire process never needs to fit at once, only certain pages of it need to fit ad hoc, that happens by evicting frames from RAM. Everything happens in units of pages, thus there is no concept of hole now, and hence no question of anything moving! Maybe 10 pages had to be moved because of this new requirement, there are thousands of pages which are left untouched. Whereas, earlier, all processes (every bit of them) had to be moved!
Solution to Problem 2: To accommodate a new process, data from only less recently used parts of other processes have to be evicted as required, and this happens in fixed size units called pages. Thus there is no possibility of hole or External Fragmentation with this system.
Now when the process needs to do some I/O operation, it can relinquish CPU easily! OS simply evicts all its pages from the RAM (perhaps store it in some cache) while new processes occupy the RAM in the meantime. When the I/O operation is done, OS simply restores those pages to the RAM (of course by replacing the pages from some other processes, may be from the ones which replaced the original process, or may be from some which themselves need to do I/O now, and hence can relinquish the memory!)
Solution to Problem 5: When a process is doing I/O operations, it can easily give up RAM usage, which can be utilized by other processes. This leads to proper utilization of RAM.
And of course, now no process is accessing the RAM directly. Each process is accessing a virtual memory location, which is mapped to a physical RAM address and maintained by the page-table of that process. The mapping is OS-backed, OS lets the process know which frame is empty so that a new page for a process can be fitted there. Since this memory allocation is overseen by the OS itself, it can easily ensure that no process encroaches upon the contents of another process by allocating only empty frames from RAM, or upon encroaching upon the contents of another process in the RAM, communicate to the process to update its page table.
Solution to Original Problem: There is no possibility of a process accessing the contents of another process, since the entire allocation is managed by the OS itself, and every process runs in its own sandboxed virtual address space.
So paging (among other techniques), in conjunction with virtual memory, is what powers today's softwares running on OS-es! This frees the software developer from worrying about how much memory is available on the user's device, where to store the data, how to prevent other processes from corrupting their software's data, etc. However, it is, of course, not foolproof. There are flaws:
Paging is, ultimately, giving the user the illusion of infinite memory by using disk as secondary backup. Retrieving data from secondary storage to fit into memory (called page swap, and the event of not finding the desired page in RAM is called page fault) is expensive as it is an IO operation. This slows down the process. Several such page swaps happen in succession, and the process becomes painfully slow. Ever seen your software running fine and dandy, and suddenly it becomes so slow that it nearly hangs, or leaves you with no option but to restart it? Possibly too many page swaps were happening, making it slow (called thrashing).
So coming back to OP,
Why do we need the virtual memory for executing a process? - As the answer explains at length, to give softwares the illusion of the device/OS having infinite memory, so that any software, big or small, can be run, without worrying about memory allocation, or other processes corrupting its data, even when running in parallel. It is a concept, implemented in practice through various techniques, one of which, as described here, is Paging. It may also be Segmentation.
Where does this virtual memory stand when the process (program) from the external hard drive is brought to the main memory (physical memory) for the execution? - Virtual memory doesn't stand anywhere per se, it is an abstraction, always present, when the software/process/program is booted, a new page table is created for it, and it contains the mapping from the addresses spat out by that process to the actual physical address in RAM. Since the addresses spat out by the process are not real addresses, in one sense, they are, actually, what you can say, the virtual memory.
Who takes care of the virtual memory and what is the size of the virtual memory? - It is taken care of by, in tandem, the OS and the software. Imagine a function in your code (which is eventually compiled and made into the executable that spawned the process) which contains a local variable - an int i. When the code executes, i gets a memory address within the stack of the function. That function is itself stored as an object somewhere else. These addresses are compiler generated (the compiler which compiled your code into the executable) - virtual addresses. When executed, i has to reside somewhere in an actual physical address for the duration of that function at least (unless it is a static variable!), so OS maps the compiler generated virtual address of i into an actual physical address, so that whenever, within that function, some code requires the value of i, that process can query the OS for that virtual address, and OS in turn can query the physical address for the value stored, and return it.
Suppose the size of the RAM is 4 GB (i.e. addresses 0 to 2^32-1); what is the size of the virtual memory? - The size of the RAM is not related to the size of virtual memory, it depends upon the OS. For example, on 64-bit Windows the virtual address space is 16 TB on older versions (8 TB for user mode) and 256 TB on Windows 8.1 and later (128 TB for user mode); on 32-bit Windows it is only 4 GB per process. Of course, it is also limited by the disk size, since that is where the memory is backed up.
Virtual memory is, among other things, an abstraction to give the programmer the illusion of having infinite memory available on their system.
Virtual memory mappings are made to correspond to actual physical addresses. The operating system creates and deals with these mappings - utilizing the page table, among other data structures to maintain the mappings. Virtual memory mappings are always found in the page table or some similar data structure (in case of other implementations of virtual memory, we maybe shouldn't call it the "page table"). The page table is in physical memory as well - often in kernel-reserved spaces that user programs cannot write over.
Virtual memory is typically larger than physical memory - there wouldn't be much reason for virtual memory mappings if virtual memory and physical memory were the same size.
Only the needed part of a program is resident in memory, typically - this is a topic called "paging". Virtual memory and paging are tightly related, but not the same topic. There are other implementations of virtual memory, such as segmentation.
I could be assuming wrong here, but I'd bet the things you are finding hard to wrap your head around have to do with specific implementations of virtual memory, most likely paging. There is no one way to do paging - there are many implementations and the one your textbook describes is likely not the same as the one that appears in real OSes like Linux/Windows - there are probably subtle differences.
I could blab a thousand paragraphs about paging... but I think that is better left to a different question targeting specifically that topic.
I am shamelessly copying the excerpts from the man page of top
VIRT -- Virtual Image (kb)
The total amount of virtual memory used by the task. It includes all code, data and shared libraries plus pages that have been swapped out and pages that have been mapped but not used.
SWAP -- Swapped size (kb)
Memory that is not resident but is present in a task. This is memory that has been swapped out but could include additional non-resident memory. This column is calculated by subtracting physical memory from virtual memory.
See here: Physical Vs Virtual Memory
Virtual memory is stored on the hard drive and is used when the RAM is filled. Physical memory is limited to the size of the RAM chips installed in the computer. Virtual memory is limited by the size of the hard drive, so virtual memory has the capability for more storage.
Physical memory: Physical memory refers to the RAM or the primary memory in the computer. Physical memory is a volatile memory. Therefore, it requires a continuous flow of power to retain data.
Virtual memory is a logical memory. In other words, it is a memory management technique performed by the operating system. Virtual memory allows the programmer to use more memory for the programs than the available physical memory. If the physical memory is 4GB and the virtual memory is 16GB, the programmer can use the 16GB virtual memory to execute the program. Using virtual memory, he can execute complex programs that require more memory than the physical memory.
The main difference between physical and virtual memory is that the physical memory refers to the actual RAM of the system attached to the motherboard, but the virtual memory is a memory management technique that allows the users to execute programs larger than the actual physical memory.
So, any program typically needs to store/read data in memory (at physical addresses). The CPU generates an address space referred to as the logical address space, which is the set of addresses the program uses to access its memory; this logical address space is mapped onto actual addresses in physical memory.
Addresses in the logical address space are referred to as logical or virtual addresses, whereas the set of addresses that correspond to them in physical memory is referred to as the physical address space; each of these is a physical address, i.e. an actual address in memory.
Note: MMU or Memory Management Unit is responsible for the address binding that takes place.
Source: "Operating System Concepts" by Abraham Silberschatz (Chapter8 Memory Management Strategies)
