Linux memory mapping

I have a few questions about Linux memory management (assume an x86 32-bit platform).
By default, for all processes, the top 1 GB of the virtual address space is mapped to the kernel area. The kernel can additionally map memory from high memory using vmalloc. My question: what happens to the page tables of all the user processes? I assume they should be updated to reflect the kernel's new allocation (that memory may be used while the kernel runs in process context).
Can someone explain where the x86 limitation on mapping logical addresses comes from? Chapter 15 of "Linux Device Drivers" says there is a limit on mapping logical addresses, but gives no deeper explanation:
in many cases, even 32-bit processors can address more than 4 GB of physical memory. The limitation on how much memory can be directly mapped with logical addresses remains, however. Only the lowest portion of memory (up to 1 or 2 GB, depending on the hardware and the kernel configuration) has logical addresses; the rest (high memory) does not.
When does the kernel switch to its own page table (not counting boot time)? In process context and in interrupt context it uses the user-mode process's page table, and kernel threads use a process page table as well.

1.) There is only one set of 256 page tables that map the kernel's 1 GiB region. The top 256 entries of each user-space page directory point to these page tables, so if the kernel changes a virtual mapping, all user-space processes get the update as well.
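To make the arithmetic concrete, here is a minimal user-space sketch (not kernel code; it assumes the default PAGE_OFFSET of 0xC0000000) showing why the kernel's 1 GiB corresponds to exactly the top 256 page-directory entries on x86-32:

    #include <stdio.h>

    /* Assumed values: the default x86-32 configuration. */
    #define PAGE_OFFSET  0xC0000000UL  /* start of the kernel's 1 GiB */
    #define PGDIR_SHIFT  22            /* each pgd entry maps 4 MiB */
    #define PTRS_PER_PGD 1024

    int main(void)
    {
        unsigned long first = PAGE_OFFSET >> PGDIR_SHIFT;  /* = 768 */
        printf("kernel pgd entries: %lu..%d (%lu of %d)\n",
               first, PTRS_PER_PGD - 1,
               (unsigned long)PTRS_PER_PGD - first, PTRS_PER_PGD);
        /* Entries 768..1023 of every process's page directory point at the
         * same 256 kernel page tables, so a change to a kernel mapping is
         * seen by all processes at once. */
        return 0;
    }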
2.) I'm not sure which limitation you mean; can you quote some text so I can find the passage in the book?
3.) When a process, like QEMU, starts a virtual CPU with kvm, the kernel swaps out the page table of the process, even though it doesn't yield to a different process. There may be more places like this, but in general, I don't think there is such a thing as a "kernel page table". All process page tables already map kernel memory, and it would thus seem wasteful to switch them out.
"Linux Device Drivers" is a great reference, but I can also recommend "Understanding the Linux Virtual Memory Manager", and of course, the kernel's source code.

Related

Why do we have memory zones in Linux?

I was reading on a page that:
Because of hardware limitations, the kernel cannot treat all pages as identical. Some pages, because of their physical address in memory, cannot be used for certain tasks. Because of this limitation, the kernel divides pages into different zones.
I was wondering about those hardware limitations. Can somebody please explain them and give an example? Also, is there a software guide from Intel explaining this?
In addition, I read that virtual memory is divided into two parts: 1 GB for kernel space and 3 GB for user space. Why do we give 1 GB of the virtual space of every process to the kernel? How is it mapped to actual physical pages? Can somebody please point me to a clear text explaining this?
Thanks in advance.
The hardware limitations mostly concern old devices. For example, there is ZONE_DMA, which spans 0-16 MB; it is needed e.g. for older ISA devices, which are not capable of addressing above the 16 MB limit. Then you have ZONE_NORMAL, where most kernel operations take place and which is permanently mapped into the kernel's address space.
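For illustration, here is a small sketch using the classic x86-32 boundaries (16 MB for ZONE_DMA, 896 MB for the end of ZONE_NORMAL's direct map; exact values depend on the kernel configuration) to classify a physical address by zone:

    #include <stdio.h>

    #define ZONE_DMA_LIMIT    (16UL << 20)   /* 16 MB: old ISA DMA limit */
    #define ZONE_NORMAL_LIMIT (896UL << 20)  /* 896 MB: end of the direct map */

    static const char *zone_of(unsigned long phys)
    {
        if (phys < ZONE_DMA_LIMIT)    return "ZONE_DMA";
        if (phys < ZONE_NORMAL_LIMIT) return "ZONE_NORMAL";
        return "ZONE_HIGHMEM";  /* not permanently mapped by the kernel */
    }

    int main(void)
    {
        printf("%s\n", zone_of(8UL << 20));     /* ZONE_DMA */
        printf("%s\n", zone_of(512UL << 20));   /* ZONE_NORMAL */
        printf("%s\n", zone_of(1200UL << 20));  /* ZONE_HIGHMEM */
        return 0;
    }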
The 1 GB/3 GB split is simple. These are virtual addresses, so for your application memory always starts at 0x00000000, and the top 1 GB of the address space is reserved for kernel stuff. Why this is done is pretty simple: you have kernel mode and user mode, and in kernel mode you are allowed to use system calls. If the kernel memory were not mapped into your virtual address space, entering the kernel would require an address-space switch (save the current context to memory, load another context from memory; time-consuming). But because kernel-mode operations can take place in the same virtual address space, you don't need to switch address spaces to, for example, allocate new memory or do any other system call.
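A tiny sketch of that split, assuming the common 0xC0000000 boundary: every address below it is user space, and everything above it belongs to the kernel mapping present in each process (the sample addresses are illustrative):

    #include <stdio.h>

    #define PAGE_OFFSET 0xC0000000UL  /* assumed 3 GB/1 GB boundary */

    int main(void)
    {
        unsigned long addrs[] = { 0x08048000UL,   /* classic i386 text base */
                                  0xBFFF0000UL,   /* near the top of user space */
                                  0xC0100000UL }; /* inside the kernel mapping */
        for (int i = 0; i < 3; i++)
            printf("0x%08lx: %s\n", addrs[i],
                   addrs[i] < PAGE_OFFSET ? "user" : "kernel");
        return 0;
    }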
Regarding your second question, about the 1 GB kernel mapping in each process's address space: the kernel is mapped there, of course, to save time by avoiding an address-space switch. The 1 GB is for kernel functionality, so that if the kernel needs to map new memory for its own use, it can do so. Any book on Unix internals can give you the details.

How does Windows give 4GB address space each to multiple processes when the total memory it can access is also limited to 4GB

How does Windows give 4GB address space each to multiple processes when the total memory it can access is also limited to 4GB?
I found the solution to the above question in Windows Memory Management (written by Pankaj Garg).
Solution:
To achieve this Windows uses a feature of the x86 processor (386 and above) known as paging. Paging allows the software to use a different memory address (known as a logical address) than the physical memory address. The processor's paging unit translates this logical address to the physical address transparently. This allows every process in the system to have its own 4GB logical address space.
Can anyone help me understand this in simpler terms?
The basic idea is that you have limited physical RAM. Once it fills up, you start storing stuff on the hard disk instead. When a process requests data that is currently on disk, or asks for new memory, you kick out a page from RAM by transferring it to the disk, and then page in the data you actually need.
The OS maintains a data structure called a page table to keep track of which logical addresses correspond to the data currently in physical memory and where stuff is on the disk.
Each process has its own virtual address space, and operates using logical addresses within this space. The OS is responsible for translating requests for a given process and logical address into a physical address/location on disk. It is also responsible for preventing processes from accessing memory that belongs to other processes.
When a process asks for data that is not currently in physical memory, a page fault is triggered. When this occurs, the OS selects a page to move to disk (if physical memory is full). There are several page replacement algorithms for selecting the page to kick out.
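As a toy model of that mechanism (illustrative only; real kernels use more sophisticated replacement policies than this simple FIFO), the following sketch simulates residency tracking, faults, and eviction:

    #include <stdio.h>

    #define NPAGES  8   /* pages in the logical address space */
    #define NFRAMES 3   /* physical frames available */

    static int frame_of[NPAGES];           /* -1 = "on disk", not resident */
    static int fifo[NFRAMES], head = 0, used = 0;

    static void access_page(int page)
    {
        if (frame_of[page] >= 0) { printf("page %d: hit\n", page); return; }
        printf("page %d: fault", page);
        int frame;
        if (used < NFRAMES) {
            frame = used++;                 /* a free frame still exists */
        } else {
            int victim = fifo[head];        /* FIFO: evict the oldest page */
            printf(", evict page %d", victim);
            frame = frame_of[victim];
            frame_of[victim] = -1;          /* "written back to disk" */
        }
        fifo[head] = page;
        head = (head + 1) % NFRAMES;
        frame_of[page] = frame;
        printf(", load into frame %d\n", frame);
    }

    int main(void)
    {
        for (int i = 0; i < NPAGES; i++) frame_of[i] = -1;
        int refs[] = { 0, 1, 2, 0, 3, 0, 4 };  /* a sample reference string */
        for (int i = 0; i < 7; i++) access_page(refs[i]);
        return 0;
    }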
The flawed assumption is "when the total memory it can access is also limited to 4GB". It is untrue: the total memory the OS can access is not limited in that way.
There is a limit on the 32-bit addresses that 32-bit code can access: (1 << 32), which is 4 GB. However, this only caps what can be accessed simultaneously. Imagine the OS has cards A, B, ..., F and applications can access only four at a time: App1 might be seeing ABCD, App2 ABEF, App3 ABCF. The apps each see 4, but the OS manages 6.
The limit of the 32-bit flat memory model does not imply that the entire OS is subject to the same limit.
Windows uses a technique called virtual memory, in which each process has its own memory. One of the reasons this is done is security: to forbid one process from accessing the memory of another.
As you've pointed out, the assigned virtual memory can be bigger than the actual physical memory. This is where paging comes into play. My knowledge of memory management and microarchitecture is a bit rusty, so I don't want to post anything wrong, but I'd recommend reading http://en.wikipedia.org/wiki/Virtual_memory
If you are interested in more literature, I'd recommend 'Structured Computer Organization' by Tanenbaum.
Virtual address space is not RAM; it's an address space. Each page (the size of a page depends on the system) can be unmapped (the page is nowhere and not accessible; it does not exist), mapped to a file (the page is not directly accessible; its content is stored on disk), or mapped to RAM (those are the pages that you can actually access).
Pages mapped to RAM can be swappable or pinned. Pinned pages will never be swapped out to disk; swappable pages are associated with an area on disk and may be written to that area to free up the RAM they are using.
Pages mapped to RAM can also be read-only, write-only, or read-write. If they are writable, they may be directly writable or copy-on-write.
Multiple pages (both within the same address space and across separate address spaces) may be mapped identically. This is how two separate processes can access the same data in memory (possibly at different addresses in each process).
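A short POSIX sketch of that last point: after fork(), a MAP_SHARED anonymous mapping exists in two address spaces at once, so a write by the child is visible to the parent:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* One page, shared between parent and child after fork(). */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        if (fork() == 0) {                    /* child */
            strcpy(p, "written by the child");
            _exit(0);
        }
        wait(NULL);                           /* parent */
        printf("parent reads: %s\n", p);      /* sees the child's write */
        return 0;
    }

Had the region been MAP_PRIVATE instead, the child's store would have gone to a copy-on-write copy of the page and the parent would not have seen it.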
In a modern operating system each process has its own address space. On 32-bit operating systems each process has 4 GiB of address space. On 64-bit operating systems, 32-bit processes still have only 4 GiB, but 64-bit processes can have far more; the theoretical 64-bit limit is 16 EiB (2^64 bytes, i.e. 16,777,216 TiB), though current implementations expose less (48-bit addressing gives 256 TiB, for example).
The size of the address space is totally independent of both the amount of RAM and the amount of actually allocated space. You can have 100 processes, each with an enormous address space, on a machine with one gigabyte of RAM. In fact, Windows has been giving 4 GiB of address space to each process since the days when the typical machine had just a few megabytes of RAM.
Assuming the context is a 32-bit system:
In addition to http://en.wikipedia.org/wiki/Virtual_memory : the memory abstraction given by the kernel to each process is 4 GB, but a process can actually use far less than that, because in each process the kernel is also mapped into part of the address space. In general, on NT systems 2 GB of the 4 GB is used by the kernel, and on *nix systems 1 GB is used by the kernel.
I read this a long time ago during my OS course, with Windows as the case study. The numbers I give may not be accurate, but they can give you a decent idea of what happens behind the scenes. From what I can recall:
In Windows the memory model used is demand paging. On Intel a page size is 4K. Initially, when you run a program, only 4 pages of 4K each are loaded from your program, which means a total of 16K of memory is allocated. Programs may be bigger, but there is no need to load the whole program into memory at once. Some of these pages are data pages, i.e. read/writeable pages where your variables and data structures live, while the others are code pages containing the executable code, i.e. the code segment. The IP is set to the first instruction of the code segment, and the program starts its execution under the impression that 4 GB is allocated.
When further pages are needed, that is, you request more memory (data segment) or your program executes further and needs other executable instructions (code segment), Windows checks whether a sufficient amount of memory is available. If yes, these pages are loaded and mapped into the process's address space. If not enough memory is available, Windows checks which pages have not been used for quite some time (this is done for all processes, not just the calling one). When it finds such pages, it moves them to the paging file to free space in memory and loads the requested pages.
If your program calls code from some DLL that is already loaded, Windows simply maps those pages into your process's address space. There is no need to load those pages again, as they are already available in memory; this avoids duplication as well as saving space.
So in theory the processes use more memory than is available, and each can address 4 GB of memory, but in reality only a portion of each process is loaded at any one time.
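The lazy loading is easy to observe on Linux (a hedged sketch; the same principle applies on Windows): reserve a large region, touch only a few pages, and watch VmRSS in /proc/self/status stay tiny while this runs:

    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = 256UL << 20;   /* reserve 256 MiB of address space */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        long page = sysconf(_SC_PAGESIZE);
        for (int i = 0; i < 4; i++)
            p[(size_t)i * page] = 1;  /* each first touch faults in one page */

        printf("reserved %zu MiB, touched only 4 pages\n", len >> 20);
        pause();  /* keep the process alive to inspect /proc/self/status */
        return 0;
    }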

32-bit physical page table resolution

I'm running a 32-bit system in legacy mode on a 64-bit-capable (x86-64, that is) architecture. When a new process is created, the kernel has to decide where in physical memory all of the pages needed at the time of instantiation are to be allocated (assuming a single thread, this may include several memory regions, such as the stack and the heaps).
I'm assuming the kernel keeps some sort of dynamic list of the physical RAM frames that are in use, and also a static list of all the regions of physical memory that have been claimed by devices, for systems that use memory-mapped I/O. Is this correct?
In addition, I read that a 32-bit Windows system has a physical memory limit of 4 GB (presumably due to address-bus width assumptions), so even though a system may have more than 4 gigabytes of physical memory installed, a 32-bit kernel will only allocate addresses within the 4 GB range.
Specific information about low-level operating system implementation for cases like this is quite difficult to find online. Can anyone verify these statements and possibly refer me to a source with more information?
Thanks for your consideration.
When a new process is created, the kernel has to decide where in physical memory all of the pages needed at the time of instantiation are to be allocated
Why does it have to decide at process creation time? In fact, it only creates them on demand: it simply creates the PTEs (i.e. "this address range is valid"), but the pages are not backed in any way; when the process first starts executing, it immediately page-faults.
What is a page fault, though? What happens is, first the CPU reads the TLB to see if it has an address-to-frame mapping. When that fails, it walks the PTEs looking for an entry that matches. If no entry is found, or if the entry indicates that the page isn't backed, a page fault is generated. This means a CPU exception occurs and the CPU immediately jumps to a predefined address. The first thing the kernel then does is save the CPU context (i.e. the registers at the location of the fault), then dispatch to the page-fault handler.
When the page fault occurs, Mm (the Memory Manager in NT) will read the mapping in its own data structures (remember that all PE images are memory-mapped files) and determine at that time which physical frame (i.e. 'a real piece of memory') will be used.
Once the page fault is serviced, the handler restores the saved CPU context, jumps back to where the fault occurred, and retries the instruction that faulted.
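For reference, this is how the classic two-level x86-32 walk (without PAE) slices a linear address; the sketch just prints the indices the hardware would use:

    #include <stdio.h>

    int main(void)
    {
        unsigned long va = 0xC0123456UL;            /* an example address */
        unsigned long dir    = (va >> 22) & 0x3FF;  /* page-directory index */
        unsigned long table  = (va >> 12) & 0x3FF;  /* page-table index */
        unsigned long offset =  va        & 0xFFF;  /* offset within the page */
        printf("va 0x%08lx -> pgd[%lu], pt[%lu], offset 0x%03lx\n",
               va, dir, table, offset);
        /* The MMU consults the TLB first; on a miss it reads pgd[dir] to
         * find the page table, then pt[table] for the frame. A missing or
         * invalid entry raises the page-fault exception described above. */
        return 0;
    }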
You're correct that a 32-bit OS will only use 4 GB of address space (not RAM! Don't forget memory-mapped devices and files!): the processor operates in 32-bit mode and interprets the PTEs as 32-bit (and remember that AMD64 long mode adds an extra level of page tables and extends the virtual address space to 48 bits).
32-bit systems can only ever address 4 GB directly (2^32 = 4 GB). There are PAE hacks that let the system have more than 4 GB of physical RAM, but no process can ever have more than 4 GB available. Also, even if you have 4 GB of RAM, you'll never see more than about 3.5 GB actually available: some of the address space is reserved for memory-mapped hardware devices, such as your video RAM.
For one method of dealing with the physical-to-virtual memory mapping, look at the TLB.

Does the last GB in the address space of a Linux process map to the same physical memory?

I read that the first 3 GB are reserved for the process and the last GB is for the kernel. I also read that the kernel is loaded starting from the 2nd MB of the physical address space (depending on the configuration). My question is: is the mapping of that last 1 GB the same for all processes, and does it map to this same physical area of memory?
Another question: when a process switches to kernel mode (e.g., when a syscall occurs), which page tables are used, the process page tables or the kernel page tables? If kernel page tables are used, they can't access the memory locations belonging to the process. If that is the case, then there is apparently no use for a separate kernel virtual memory, since all access to kernel code and data goes through the mapping of the last 1 GB of the process address space. Please help me clarify this (any useful links would be much appreciated).
It seems you are talking about 32-bit x86 systems, right?
If I am not mistaken, the kernel can be configured not only for a 3 GB/1 GB memory split; there are other variants (e.g. 2 GB/2 GB). Still, 3 GB/1 GB is probably the most common one on x86-32.
The kernel part of the address space should be inaccessible from user space. From the kernel's point of view, yes, the mapping of the memory occupied by the kernel itself is always the same, no matter in the context of which process (or interrupt handler, or whatever else) the kernel currently operates.
As one of the consequences, if you look at the addresses of kernel symbols in /proc/kallsyms from different processes, you will see the same addresses each time. And these are exactly the addresses of the respective kernel functions, variables and others from the kernel's point of view.
So I suppose the answer to your first question is "yes", but it is probably not very useful for user-space code, as kernel-space memory is not directly accessible from there anyway.
As for the second question: if the kernel currently operates in the context of some process, it can actually access the user-space memory of that process. I can't describe it in detail, but the implementations of the kernel functions copy_from_user and copy_to_user should give you some hints; see arch/x86/lib/usercopy_32.c and arch/x86/include/asm/uaccess.h in the kernel sources. On x86-32, user-space memory is accessed in these functions directly, using the default memory mappings for the current process context. The 'magic' there is only related to optimizations and to checking the address of the memory area for validity.
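As a hedged sketch of how this looks from the kernel side (a hypothetical driver write handler, not code from the kernel sources), copy_from_user() pulls bytes straight out of the calling process's mappings, which works because the handler runs in that process's context with that process's page tables:

    #include <linux/fs.h>
    #include <linux/uaccess.h>

    static char kbuf[64];

    /* Hypothetical file_operations.write handler for illustration. */
    static ssize_t demo_write(struct file *f, const char __user *ubuf,
                              size_t len, loff_t *off)
    {
        if (len > sizeof(kbuf))
            len = sizeof(kbuf);
        /* copy_from_user returns the number of bytes it could NOT copy;
         * nonzero means part of the user range was invalid. */
        if (copy_from_user(kbuf, ubuf, len))
            return -EFAULT;
        return len;
    }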
Yes, the mapping of the kernel part of the address space is the same in all processes. Part of it maps the region of physical memory where the kernel image is loaded, but that's not the bulk of it; the remainder is used to map other physical memory locations for the kernel's runtime working set.
When a process switches to kernel mode, the page tables are not changed. The kernel part of the address space simply becomes accessible because the CPL (Current Privilege Level) is now zero.

What is the state of the art in Memory Protection?

The more I read about low-level languages like C, pointers, and memory management, the more I wonder about the current state of the art in modern operating systems and memory protection. For example, what kind of checks are in place to prevent a rogue program from trying to read as much of the address space as it can and disregarding the rules set by the operating system?
In general terms, how do these memory protection schemes work? What are their strengths and weaknesses? To put it another way, are there things that simply cannot be done anymore when running a compiled program on a modern OS, even if you have C and your own compiler with whatever tweaks you want?
The protection is enforced by the hardware (i.e., by the CPU). Applications can only express addresses as virtual addresses, and the CPU resolves the mapping of virtual to physical addresses using translation lookaside buffers. Whenever the CPU needs to resolve an address for which it has no mapping, it generates a 'page fault', which interrupts the currently running application and hands control to the operating system. The operating system is responsible for looking up its internal structures (page tables) and finding a mapping between the virtual address touched by the application and the actual physical address. Once the mapping is found, the CPU can resume the application.
The CPU instructions needed to load a mapping between a physical address and a virtual one are protected and as such can only be executed by a protected component (i.e. the OS kernel).
Overall the scheme works because:
applications cannot address physical memory
resolving mapping from virtual to physical requires protected operations
only the OS kernel is allowed to execute protected operations
The scheme fails, though, if a rogue module is loaded into the kernel, because at that protection level it can read and write any physical address.
Applications can read and write other processes' memory, but only by asking the kernel to do the operation for them (e.g. ReadProcessMemory in Win32), and such APIs are protected by access control (certain privileges are required of the caller).
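A minimal Win32 sketch of that idea (the PID and address are hypothetical placeholders, and error handling is trimmed): the read goes through a kernel service, and the access check happens when the handle is opened:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        DWORD pid = 1234;                       /* hypothetical target PID */
        LPCVOID remote_addr = (LPCVOID)0x10000; /* hypothetical address */
        char buf[16];
        SIZE_T got = 0;

        /* The access check happens here: PROCESS_VM_READ is required. */
        HANDLE h = OpenProcess(PROCESS_VM_READ, FALSE, pid);
        if (h == NULL) { printf("access denied or no such process\n"); return 1; }

        if (ReadProcessMemory(h, remote_addr, buf, sizeof(buf), &got))
            printf("read %lu bytes\n", (unsigned long)got);
        CloseHandle(h);
        return 0;
    }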
Memory protection is enforced in hardware, typically with a minimum granularity on the order of KBs.
From the Wikipedia article about memory protection:
In paging, the memory address space is divided into equal, small pieces, called pages. Using a virtual memory mechanism, each page can be made to reside in any location of the physical memory, or be flagged as being protected. Virtual memory makes it possible to have a linear virtual memory address space and to use it to access blocks fragmented over the physical memory address space.
Most computer architectures based on pages, most notably the x86 architecture, also use pages for memory protection. A page table is used for mapping virtual memory to physical memory. The page table is usually invisible to the process. Page tables make it easier to allocate new memory, as each new page can be allocated from anywhere in physical memory.
By such a design, it is impossible for an application to access a page that has not been explicitly allocated to it, simply because any memory address, even a completely random one that the application may decide to use, either points to an allocated page or generates a page fault (PF) error. Unallocated pages simply do not have any addresses from the application's point of view.
You should ask Google about segmentation faults, memory access violations, and general protection faults. These are the errors various OSes report when a program tries to access a memory address it shouldn't.
Also, Windows Vista (and 7) randomizes the addresses at which DLLs are loaded (address space layout randomization, ASLR), which means a buffer overflow can land at different addresses each time it occurs. This makes buffer overflow attacks a bit less repeatable.
So, to tie the posted answers together with your question: a program that attempts to read a memory address that is not mapped in its address space causes the processor to raise a page-fault exception, transferring execution to the operating system's (trusted) code. The kernel then checks the faulting address; if there is no mapping for it in the current process's address space, it sends the SIGSEGV (segmentation fault) signal to the process, which typically kills it (talking about Linux/Unix here; on Windows you get something along the same lines).
Note: have a look at mprotect() on Linux and POSIX operating systems; it lets you change the protection of memory pages explicitly. Functions like malloc() return memory on pages with default protection, which you can then modify. This way you can make areas of memory read-only, but only in page-sized chunks, typically 4 KB.
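A small POSIX example of that (a sketch; the commented-out write shows where the fault would occur):

    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        long page = sysconf(_SC_PAGESIZE);
        /* mmap gives us a page-aligned region, as mprotect requires. */
        char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        p[0] = 42;                        /* writable: fine */
        if (mprotect(p, page, PROT_READ) != 0) { perror("mprotect"); return 1; }
        printf("p[0] = %d (still readable)\n", p[0]);
        /* p[0] = 43;  <- this write would now raise SIGSEGV */
        return 0;
    }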
