How does the OS catch illegal memory references under a paging scheme? - memory-management

I am trying to understand how the OS catches illegal memory accesses in a system that uses paging (32-bit, x86, paging enabled).
To be more specific, let's suppose I have a tiny App which is just 1 page in size. Considering that an MS OS takes the upper half of the 'virtual memory address space' and that my tiny EXE occupies just 4 KB of the lower half of the VMAS, then:
1) How does the OS realize that an 'illegal memory reference/access' is going on when my code tries to write to a memory location outside my own EXE's 4 KB? (Obviously, that pointer wasn't obtained from a 'malloc' or similar call.)
2) How are page tables managed for that tiny EXE? Does the OS have to define all 1 M page entries (minus 1 page entry) with the 'Non-Present' attribute set and 'System' owned when that 'process' is created?
Any advice or comment is welcome.
EDIT:
Just to make things clear, the answer (compiled from all the generous contributions) is:
In order to catch an illegal reference to unallocated memory, the VMAS for the App should be marked as User & Non-Present and the rest of the VMAS should be marked as Kernel & Non-Present.
(Of course, allocated memory carries the User attribute. Note that User & Non-Present applies at 'process creation', before the first run; after that it changes to User & Present.)
That way the hardware will catch any access outside the App's boundary!
And the page fault handler will assume an illegal access, because no user code is allowed to access (read/write) a kernel page.
[VMAS= Virtual Memory Address Space]

1) How does the OS realize that an 'illegal memory reference/access' is going on when my code tries to write to a memory location outside my own EXE's 4 KB? (Obviously, that pointer wasn't obtained from a 'malloc' or similar call.)
A sequence of events has to take place. To determine whether an access is valid, the processor takes as inputs (a) the logical page being accessed, (b) the type of access, and (c) the current processor mode:
1) Is there a page table entry for the page? If not => access violation.
2) Is the page table entry marked valid? The processing here is system specific, depending upon whether the page tables can distinguish between an invalid page table entry and a valid entry that is not mapped to a page frame. In the former case => access violation. In the latter case, a page fault is triggered and the OS has to determine whether to report an access violation or load the page.
3) Does the page table entry permit the type of access for the current processor mode? If not => access violation.
If the hardware triggers an access violation exception, it switches to kernel mode and invokes the OS's access violation handler.
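To make the sequence concrete, here is a small C sketch of that decision. The pte_t layout, the access_t/result_t names and the check_access() helper are all made up for illustration; this is not how real MMU hardware or kernel code is structured.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical, simplified PTE: real x86 PTEs pack these bits differently. */
    typedef struct {
        bool valid;       /* the entry itself exists / is meaningful        */
        bool present;     /* the page is currently mapped to a frame in RAM */
        bool writable;    /* writes are allowed                             */
        bool user;        /* user-mode code may access the page             */
    } pte_t;

    typedef enum { ACCESS_READ, ACCESS_WRITE } access_t;
    typedef enum { OK, PAGE_FAULT_LOAD, ACCESS_VIOLATION } result_t;

    /* Mirrors the three questions above: entry exists? entry valid/resident?
       permissions compatible with the access type and processor mode? */
    static result_t check_access(const pte_t *pte, access_t type, bool user_mode)
    {
        if (pte == NULL)
            return ACCESS_VIOLATION;          /* 1) no page table entry at all    */
        if (!pte->valid)
            return ACCESS_VIOLATION;          /* 2) entry exists but is invalid   */
        if (user_mode && !pte->user)
            return ACCESS_VIOLATION;          /* 3) kernel-only page, user access */
        if (type == ACCESS_WRITE && !pte->writable)
            return ACCESS_VIOLATION;          /* 3) write to a read-only page     */
        if (!pte->present)
            return PAGE_FAULT_LOAD;           /* 2) valid but not resident: load  */
        return OK;
    }

    int main(void)
    {
        pte_t code_page = { .valid = true, .present = true,
                            .writable = false, .user = true };
        printf("write to code page from user mode -> %d\n",
               check_access(&code_page, ACCESS_WRITE, true));  /* ACCESS_VIOLATION */
        printf("read through a missing entry -> %d\n",
               check_access(NULL, ACCESS_READ, true));         /* ACCESS_VIOLATION */
        return 0;
    }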
2) How are page tables managed for that tiny EXE? Does the OS have to define all 1 M page entries (minus 1 page entry) with the 'Non-Present' attribute set and 'System' owned when that 'process' is created?
Operating systems provide system services for mapping memory into the process address space. Generally, the program loader reads the instructions in the EXE file and calls page mapping system services to set up the initial state of the application.
When this occurs depends upon the operating system. In eunuchs-land (i.e., Unix), a process is a clone of its parent; the running of a program takes place in an exec___ system call. Some operating systems have a background command processor that allows multiple applications to be run sequentially within a single process.
From there, it is up to the application to manage the pages mapped to its address space. That is done by calling system services. For example, "malloc" calls will cause the application to use system services to map pages.
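For example, on Linux the underlying system service is mmap(2); a minimal sketch of mapping a single anonymous page, which is roughly what malloc does behind the scenes for large allocations, might look like this:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = sysconf(_SC_PAGESIZE);          /* one page */

        /* Ask the kernel to map one anonymous, private, read-write page into
           the process address space. */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        strcpy(p, "hello");                          /* the page is now usable */
        printf("mapped one page at %p: %s\n", p, (char *)p);

        munmap(p, len);                              /* give the page back */
        return 0;
    }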
The initial state of the application is likely to have holes of invalid user addresses. In fact, the range of valid addresses is not likely to be contiguous within the logical address space.

Each page has, among others, the following attributes: Present and Read/Write.
Accessing a page that is not present, or writing a read-only page, generates a privileged event called a page fault. This event takes the form of the CPU executing a specific routine that the OS set up.
Hence the OS is informed of the event and the attempt that was made.
The structures used to implement paging are hierarchical: pages are grouped into directories, and directories into higher-level directories. On 32-bit x86 (as in the question) there are two levels; on modern 64-bit systems there are usually four.
As in a file system, only the directories needed to reach a specific page need to be created.
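To make the hierarchy concrete, here is a small C sketch of how a virtual address is split into per-level indices under 4-level x86-64 paging with 4 KiB pages (the example address is arbitrary):

    #include <stdint.h>
    #include <stdio.h>

    /* Split a 48-bit x86-64 virtual address into the four 9-bit table indices
       plus the 12-bit page offset (4 KiB pages, no large pages). */
    int main(void)
    {
        uint64_t va = 0x00007f3c2a41b123ULL;   /* arbitrary example address */

        unsigned offset = va & 0xfff;          /* bits 0-11  */
        unsigned pt     = (va >> 12) & 0x1ff;  /* bits 12-20 */
        unsigned pd     = (va >> 21) & 0x1ff;  /* bits 21-29 */
        unsigned pdpt   = (va >> 30) & 0x1ff;  /* bits 30-38 */
        unsigned pml4   = (va >> 39) & 0x1ff;  /* bits 39-47 */

        printf("PML4=%u PDPT=%u PD=%u PT=%u offset=0x%x\n",
               pml4, pdpt, pd, pt, offset);
        return 0;
    }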
A definitive source of information is the Intel manuals, specifically the third volume.
This answer intentionally uses simplified words.

How does the OS realize that an 'illegal memory reference/access' is going on when my code tries to write to a memory location outside my own EXE's 4 KB? (Obviously, that pointer wasn't obtained from a 'malloc' or similar call.)
A page fault is raised and the page fault handler gets executed. In the case of an invalid memory access, it terminates the program. In the case of an access to swapped-out memory, it restores the memory contents from disk into main memory and lets the program continue.
How are page tables managed for that tiny EXE? Does the OS have to define all 1 M page entries (minus 1 page entry) with the 'Non-Present' attribute set and 'System' owned when that 'process' is created?
On 32-bit x86 (without PAE), the paging structures have two levels: page directories and page tables. Assuming your program fits in a single page, the OS will initialise a page directory that contains only one valid entry, pointing to a page table, which in turn contains only one valid entry, pointing to the page with the needed memory.
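As an illustration of how little needs to be set up, assume (hypothetically) that the single page is loaded at virtual address 0x00400000. The index arithmetic below, a small C sketch, shows that only page directory entry 1 and, within that one page table, entry 0 need to be valid; all other entries can stay marked not-present:

    #include <stdint.h>
    #include <stdio.h>

    /* 32-bit x86 paging without PAE: 10-bit page directory index,
       10-bit page table index, 12-bit offset.  The load address
       0x00400000 is an assumption made for this example. */
    int main(void)
    {
        uint32_t va = 0x00400000u;

        unsigned pde = (va >> 22) & 0x3ff;   /* page directory entry index   */
        unsigned pte = (va >> 12) & 0x3ff;   /* page table entry index       */
        unsigned off =  va        & 0xfff;   /* offset within the 4 KiB page */

        /* Prints PDE=1 PTE=0 offset=0x0: one valid directory entry and one
           valid table entry cover the whole tiny EXE. */
        printf("PDE=%u PTE=%u offset=0x%x\n", pde, pte, off);
        return 0;
    }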

Related

What is in the PTE address field for an anonymously zero-fill-on-demand mapped page?

When a program calls mmap to allocate an anonymous page, also known as a demand-zero page, what appears in the address field of the corresponding page table entry (PTE)? I am assuming that the kernel does not create a zero-initialized page in physical memory (and enter that physical page's page number into the PTE) until the requesting process actually touches the page — hence the term demand-zero. Since it would not be a disk address, and would not be 0 (which is for unallocated pages), what value would appear there? As a different but related question, how does the kernel "know" that this page is to be handled as a demand-zero page, i.e., that the fault handler should find a physical page and initialize it with 0 rather than copy a page from disk?
I am assuming that the kernel does not create a zero-initialized page in physical memory
Indeed, this is usually the case, except in special cases: for example, if MAP_POPULATE is specified to explicitly request that the pages be populated up front (also called "pre-faulting").
what appears in the address field of the corresponding page table entry (PTE)?
Right after mmap you don't even have a PTE allocated for the page (or, in general, you don't have an entry at any page table level). As far as the CPU is concerned, the page doesn't even exist. If you were to walk the page tables you would just get to a point (at an arbitrary level) where the corresponding entry is marked as "not present".
Since it would not be a disk address, and would not be 0 (which is for unallocated pages), what value would appear there?
As far as the CPU is concerned, the page is unallocated. At the first page fault, two things can happen:
For a read page fault, the PTE is updated to point to the zero page: this is a special page that is always entirely zeroed-out and is pointed to by the PTEs of any anonymous (demand-zero) page in the system that has not been modified yet.
For a write page fault, an actual physical page will be allocated and the corresponding PTE updated to point to its physical address.
Quoting directly from the documentation:
The anonymous memory or anonymous mappings represent memory that is not backed by a filesystem. Such mappings are implicitly created for program’s stack and heap or by explicit calls to mmap(2) system call. Usually, the anonymous mappings only define virtual memory areas that the program is allowed to access. The read accesses will result in creation of a page table entry that references a special physical page filled with zeroes. When the program performs a write, a regular physical page will be allocated to hold the written data. The page will be marked dirty and if the kernel decides to repurpose it, the dirty page will be swapped out.
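The behaviour described above can be observed from user space. In this small C program the comments describe what the kernel is expected to do at each fault; the program itself only sees ordinary reads and writes:

    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = sysconf(_SC_PAGESIZE);

        /* Anonymous, private mapping: no PTE exists yet for this page. */
        unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* First read -> read page fault.  The kernel points the PTE at the
           shared zero page, so we observe 0 without any page being copied. */
        printf("first read: %u\n", p[0]);

        /* First write -> write page fault.  The kernel now allocates a real
           physical page, zero-fills it, and repoints the PTE at it. */
        p[0] = 42;
        printf("after write: %u\n", p[0]);

        munmap(p, len);
        return 0;
    }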
how does the kernel "know" that this page is to be handled as a demand-zero page, i.e., that the fault handler should find a physical page and initialize it with 0 rather than copy a page from disk?
When a page fault occurs, the kernel page fault handler (architecture-dependent) determines which VMA the faulting address belongs to and retrieves the corresponding struct vm_area_struct (which was created earlier either by the kernel itself or by an mmap syscall). This structure is then passed on to architecture-independent code (do_fault()) along with the needed fault information (struct vm_fault).
The vm_area_struct contains all the remaining information necessary to handle the fault (for example, the ->vm_file field, which is != NULL in the case of a file-backed mapping). The field ->vm_ops points to a struct vm_operations_struct which defines a set of function pointers to call on different occasions. In particular, anonymous VMAs have ->vm_ops == NULL.
For other kinds of pages, ->fault() is the function used when handling a page fault. This function knows what to check and how to actually handle the fault.
B & O also describe the VMA, but do not explain how the kernel could use the VMA to distinguish between, say, an unallocated page and an allocated page to be created and zero-initialized.
Simple: just check whether vma->vm_ops == NULL, and in that case you know the page is a demand-zero anonymous page. Then, on a page fault, act as needed (read fault -> update the PTE to point to the global zero page; write fault -> allocate a page and update the PTE).
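That decision can be sketched as follows. This is a self-contained toy model, not real kernel source: the struct definitions are drastically reduced stand-ins and the two helpers are hypothetical, but the vm_ops == NULL test mirrors the logic described above.

    #include <stdbool.h>
    #include <stdio.h>

    /* Toy stand-ins for the kernel structures named above (not the real
       definitions): just enough to show the vm_ops == NULL test. */
    struct vm_fault { unsigned long address; bool write; };
    struct vm_operations_struct { int (*fault)(struct vm_fault *); };
    struct vm_area_struct { const struct vm_operations_struct *vm_ops; };

    static int map_zero_page(struct vm_fault *vmf)      /* hypothetical helper */
    { printf("read fault at %#lx: point PTE at zero page\n", vmf->address); return 0; }

    static int alloc_and_map_page(struct vm_fault *vmf) /* hypothetical helper */
    { printf("write fault at %#lx: allocate page, update PTE\n", vmf->address); return 0; }

    static int handle_fault_sketch(struct vm_area_struct *vma, struct vm_fault *vmf)
    {
        if (vma->vm_ops == NULL)                 /* anonymous (demand-zero) VMA */
            return vmf->write ? alloc_and_map_page(vmf) : map_zero_page(vmf);
        return vma->vm_ops->fault(vmf);          /* file-backed: ask the owner  */
    }

    int main(void)
    {
        struct vm_area_struct anon = { .vm_ops = NULL };
        struct vm_fault rd = { .address = 0x1000, .write = false };
        struct vm_fault wr = { .address = 0x1000, .write = true  };
        handle_fault_sketch(&anon, &rd);
        handle_fault_sketch(&anon, &wr);
        return 0;
    }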

A Process accessing memory outside of allocated region

Assume a process is allocated a certain region of virtual memory.
How will the processor react if the process happens to access a memory region outside this allocation region?
Does the processor kill the process? Or does it raise a Fault?
Thank you in advance.
Processes are not really allocated a certain region of virtual memory. They are allocated physical frames that they access through virtual addresses. A process can generate any virtual address, but only the addresses the OS has actually mapped in its page tables are usable.
When a high-level language program is compiled, it is placed in an executable. The executable is a file format which specifies several things, among which is the virtual memory layout used by the program. When the OS launches that executable, it allocates certain physical pages to the newly created process. These pages contain the actual code. The OS needs to set up the page tables so that the virtual addresses that the process uses are translated to the right positions in memory (the right physical addresses).
When a process attempts to jump to a virtual address it shouldn't jump to, several things can happen; from the program's point of view it is undefined behavior.
As stated on osdev.org (https://wiki.osdev.org/Paging):
A page fault exception is caused when a process is seeking to access an area of virtual memory that is not mapped to any physical memory, when a write is attempted on a read-only page, when accessing a PTE or PDE with the reserved bit set, or when permissions are inadequate.
The CPU pushes an error code on the stack before firing a page fault exception. The error code must be analyzed by the exception handler to determine how to handle the exception. The bottom 3 bits of the exception code are the only ones used, bits 3-31 are reserved.
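For illustration, a tiny C helper that decodes those bottom three bits (present, write, user) might look like this; the example values are arbitrary:

    #include <stdint.h>
    #include <stdio.h>

    /* Decode the low bits of the x86 page-fault error code described above.
       Bit 0: present (0 = not-present page, 1 = protection violation)
       Bit 1: write   (0 = read access,      1 = write access)
       Bit 2: user    (0 = supervisor mode,  1 = user mode)              */
    static void decode_pf_error(uint32_t err)
    {
        printf("%s, %s, from %s mode\n",
               (err & 1) ? "protection violation" : "page not present",
               (err & 2) ? "write" : "read",
               (err & 4) ? "user" : "supervisor");
    }

    int main(void)
    {
        decode_pf_error(0x5);   /* 1 0 1: user-mode read of a present, protected page */
        decode_pf_error(0x6);   /* 1 1 0: user-mode write to a not-present page       */
        return 0;
    }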
It really depends on the language you used, and several factors come into play. For example, in assembly, if you try to jump to a random virtual address, several things can happen.
If you jump into an allocated page, then the page could contain anything. It could as well contain zeroes. If it contains zeroes, the process will keep executing those instructions until it reaches a page which isn't present in RAM and triggers a page fault. Or it could just end up executing a jmp to somewhere else in RAM and in the end trigger a page fault.
If you jump into a page which has the present bit not set (an unallocated page), the CPU will trigger a page fault immediately. Since the page is not allocated, it will not magically become allocated; the OS needs to take action. If the page was supposed to be accessible to the process, then maybe it was swapped out to the hard disk and the OS needs to swap it back into RAM. If it wasn't supposed to be accessed (like in this case), the OS needs to kill the process (and it does). The OS knows the process should not access a page by looking at the memory map it keeps for that process. It should not just blindly allocate a page to a process which jumps nowhere in particular. If the process needs more memory during execution it can ask the OS properly using system calls.
If you jump to a virtual address which, once translated by the MMU using the page tables, lands on kernel-mode (supervisor) code in RAM, the CPU will trigger a page fault whose error code has the user and present bits set (1 0 1), i.e. a protection violation from user mode rather than a not-present fault.
The OS uses only two of the x86 privilege levels (0 and 3); thus all user-mode processes run at privilege level 3. Nothing prevents one user process from accessing the memory and code of another process except the way the page tables are set up. The page tables are often not filled up completely. If you jump to a random virtual address, anything can happen: the virtual address can be translated to anything.
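From the process' point of view, the usual outcome of touching an unmapped address is a SIGSEGV delivered by the OS after its page fault handler finds no valid mapping. A small C demonstration (the bad address is arbitrary and assumed to be unmapped):

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    /* The kernel's page-fault handler finds no matching mapping for the
       access below and delivers SIGSEGV to the process. */
    static void on_segv(int sig, siginfo_t *info, void *ctx)
    {
        (void)sig; (void)ctx;
        /* fprintf is not async-signal-safe; kept simple for illustration. */
        fprintf(stderr, "SIGSEGV at address %p\n", info->si_addr);
        _exit(1);
    }

    int main(void)
    {
        struct sigaction sa = { 0 };
        sa.sa_sigaction = on_segv;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, NULL);

        volatile int *bad = (int *)0xdeadbeefu;   /* almost certainly unmapped */
        *bad = 1;                                  /* write fault -> SIGSEGV   */
        return 0;
    }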

Can someone explain the Windows ZwMapViewOfSection system call so that a noob (me) can understand?

I'm investigating a set of Windows API system calls made by a piece of malware running in a sandbox so that I can understand its malicious intent. Unfortunately, I'm struggling to understand the ZwMapViewOfSection function described in documentation: https://learn.microsoft.com/en-us/windows-hardware/drivers/ddi/content/wdm/nf-wdm-zwmapviewofsection
Now, I do understand that this function is related to the mapping of physical memory to virtual memory in a page table. Apart from that, I find the documentation arcane and not friendly to beginners. I am also confused why they are calling blocks of physical memory "sections" rather than "frames" (if that is what they are indeed referring to -- it's not clear to me). Can anyone provide a more intuitive explanation about this system call and what it does in general? Is this a common system call for programs or is it limited to malware? Thank you.
It is extremely common for normal programs to make this call (not directly, of course); every program is going to invoke it multiple times during initialization at the very least (ZwMapViewOfSection is used when setting up the memory backing for the executable code sections themselves). It is less common in ordinary program code, but not rare either, particularly if the program performs dynamic DLL loads; legitimate programs can also do memory-mapped I/O for their own reasons.
It operates on memory section objects (I've never really understood that name either), which are one part of the link between disk files and memory-mapped regions; the section is created via ZwCreateSection or opened with ZwOpenSection, and then the other part comes into play with ZwMapViewOfSection.
What part of this, exactly, is confusing you? Knowing that would make it far easier to provide an informative response.
As far as I understand it, you have to open the file and acquire a file handle, which you then map with CreateFileMapping. That calls NtCreateSection, which calls MmCreateSection. If the file is mapped for the first time, a new segment object and control area are created first; then, depending on whether the section is created for a data file, an image file or a page-file-backed section, MiCreateDataFileMap, MiCreateImageFileMap or MiCreatePagingFileMap is called.
MiCreateDataFileMap sets up the subsection object and section object. In the normal case only one subsection is created, but under some special conditions multiple subsections are used, e.g. if the file is very large. For data files, the subsection object field SubsectionBase is left blank. Instead, the SegmentPteTemplate field of the segment object is set up properly, which can be used to create the PPTEs when necessary. This defers the creation of PPTEs until a view is mapped for the first time, which avoids wasting memory when very large data files are mapped. Note that a PPTE is a PTE that is serving as a prototype PTE, while an _MMPTE_PROTOTYPE is a PTE that points to a prototype.
MiCreateImageFileMap creates the section object, loads the PE header of the specified file and verifies it; then one subsection is created for the PE header and one for each PE section. If a very small image file is mapped, only one subsection is created for the complete file. Besides the subsections, the related PPTEs for each of them are created and their page protection flags are set according to the protection settings of the related PE section. These PPTEs will be used as a template for building the real PTEs when a view is mapped and accessed.
After a section is created it can be mapped into the address space by creating a view from it. The flProtect passed to CreateFileMapping specifies the protection of the section object; all mapped views of the object must be compatible with this protection. You specify dwMaximumSizeLow and dwMaximumSizeHigh as 0 so that the maximum size of the section is set to the length of the file automatically.
You then pass the returned section object handle to MapViewOfFile, which calls NtMapViewOfSection on it, which calls MmMapViewOfSegment, which calls MmCreateMemoryArea, which is where the view is mapped into the VAD of the process with the dwDesiredAccess protection supplied to MapViewOfFile; this serves as the protection type for all PTEs that the VAD entry covers. dwNumberOfBytesToMap = 0 and dwFileOffsetLow = 0 in MapViewOfFile map the whole file.
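For reference, the ordinary user-mode path described above looks roughly like this sketch; error handling is minimal and "example.txt" is just a placeholder file name:

    #include <stdio.h>
    #include <windows.h>

    /* Minimal sketch of the user-mode sequence described above:
       CreateFileMapping -> NtCreateSection, MapViewOfFile -> NtMapViewOfSection. */
    int main(void)
    {
        HANDLE file = CreateFileA("example.txt", GENERIC_READ, FILE_SHARE_READ,
                                  NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (file == INVALID_HANDLE_VALUE) return 1;

        /* Section object; 0/0 for the maximum size means "size of the file". */
        HANDLE section = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);
        if (section == NULL) { CloseHandle(file); return 1; }

        /* View of the section; 0 bytes / offset 0 maps the whole file. */
        const char *view = MapViewOfFile(section, FILE_MAP_READ, 0, 0, 0);
        if (view != NULL) {
            printf("first byte: 0x%02x\n", (unsigned char)view[0]);
            UnmapViewOfFile(view);
        }

        CloseHandle(section);
        CloseHandle(file);
        return 0;
    }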
When a view is mapped, I believe that all of the PTEs are made to point to the prototype PTEs and are given the protection of the PPTE. For an image file, the PPTEs have already been initialised to subsection PTEs. For a data file, the PPTEs for the view need to be initialised to subsection PTEs. The VAD entry for the view is now created. The VAD entry protection isn't always reflective of the protection of the PTEs it covers, because it can cover multiple subsections and multiple blocks within those subsections.
The first time an address in the mapping is actually accessed, the subsection prototype PTE is filled in on demand with the address of the allocated physical page (filled by the I/O for that range), and the process PTE is filled in with that same address. For an image, the PPTE was already filled in when the subsections were created, along with protection information derived from the section header characteristics in the image, so the fault handler just fills in the PTE with that address and the protection information in it.
When the PTE is trimmed from the process working set, the working set manager accesses the PFN to locate the PPTE address, decreases the share count, and inserts the PPTE address into the PTE.
I'm not sure when a VAD PTE (which has a prototype bit and a prototype address of 0xFFFFFFFF0000 and is not valid) occurs. I would have thought the PPTEs are always there at their virtual address and can be pointed to as soon as the VAD entry is created.

Why there is no SIGSEGV signal on copy on write?

The copy-on-write article on Wikipedia says that copy-on-write is usually implemented by giving read-only access to the pages, so that when one is written, the page fault trap handler can map a unique physical memory page for it. So my question is why a user-level application doesn't receive a SIGSEGV signal when such a page fault happens. After all, the Wikipedia article on SIGSEGV says that SIGSEGV is the signal sent to a process when it makes an invalid memory reference, or segmentation fault. So in this case, that is, in the copy-on-write case, why is no SIGSEGV sent to the process?
I know it's been a while since this was asked, but I wanted to expand on Alexey's answer a bit.
Copy-on-write (I assume you're talking about virtual memory and not filesystems) usually works like so:
The OS knows which pages need to be copied on write (they are the pages which are private to a process). These pages are marked in hardware as read-only; however, the virtual memory map of the process has the pages marked as readable and writable. This means that the user process believes it has full access to the pages in question.
When a user process attempts to write to one of these pages, a page fault is generated because the processor recognizes that the page is read-only (based on the hardware marking described above). Page faults are sort of like segfaults, but for the kernel instead of for user processes.
This triggers the page fault handler to run within the kernel, which looks at the page in question and sees that it's a private page which has not yet been copied. The handler will create a copy of the page and mark the copy as writable.
Then the handler will replace the old page's address with the new one in the virtual-to-physical translation table and exit.
The last instruction will be retried by the user process at this point, and this time the write will succeed because the new page is writeable at both the virtual memory map (the user process' view of memory permissions) and hardware (the kernel's view of memory permissions) levels.
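A classic way to see this transparency is fork(), which marks the parent's private pages copy-on-write. In the small illustrative C program below, the child's write triggers a page fault that the kernel resolves by copying the page, so the parent's value is untouched and neither process ever sees a signal:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int *value = malloc(sizeof *value);
        *value = 1;

        pid_t pid = fork();       /* parent and child now share the page read-only */
        if (pid == 0) {           /* child */
            *value = 42;          /* write fault -> kernel silently copies the page */
            printf("child sees %d\n", *value);
            exit(0);
        }
        waitpid(pid, NULL, 0);
        printf("parent still sees %d\n", *value);   /* prints 1 */
        free(value);
        return 0;
    }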
A page fault is generated every time a segmentation fault occurs, but most page faults are handled by the kernel and are never passed to the process that caused them as segfaults. There are many reasons why a page fault might be handled at a lower level, including:
The page which was accessed was paged out to disk because it hadn't been used in a long time. The OS must bring it back into memory so the process can use it again.
The process is accessing a newly-allocated page for the first time, and the actual physical page hasn't been allocated yet. The OS must allocate a page and then insert it into the virtual-to-physical translation table before the memory can actually be used.
The OS is playing a hardware page access permissions trick to allow it to watch for accesses to a particular page. This is what happens in copy-on-write, but it can have other uses as well. Consider an OS-level virtualization technology like kvm, where writing to a memory-mapped device's location in memory in the guest OS should actually write to a file or the display in the host OS.
The main idea of COW is that COW is completely transparent to the user process as if it fully owned the memory without any sharing.

Mapping of Page allocated to user process in Kernel virtual address space

When a page is created for a process (and will be mapped into the process address space), will that page also be mapped into the kernel address space?
If not, then it won't have a kernel virtual address. Then how will the swapper find the page and swap it out, if the need arises?
If we're talking about the x86 or similar (in terms of page translation) architectures, at any given time there's one virtual address space and normally one part of it is reserved for the kernel and the other for user-mode processes.
On a context switch between two processes only the user-mode part of the virtual address space changes.
With such an organization, the kernel always has full access to the current user-mode process, because, again, there's only one current virtual address space at any moment for both the kernel and a user-mode process, it's not two, it's one. So, the kernel doesn't really have to have another, extra mapping for user-mode pages. But that's not the main point.
The main point is that the kernel keeps some sort of statistics for every page that if needed can be saved to the disk and reused elsewhere. The CPU marks each page's page table entry (PTE) as accessed when the page is first read from or written to and as dirty when it's first written to.
The kernel scans the PTEs periodically, reads the accessed and dirty markers to update said statistics, and clears accessed and dirty so it can detect a change in them later (if any). Based on these statistics it determines which pages are rarely used or long unused and can be repurposed.
If the "swapper" runs in the context of the current process and if it runs in the kernel, then in theory it has enough information from the kernel (the list of rarely used or long unused pages to save and unmap if dirty or just unmap if not dirty) and sufficient access to the pages of interest.
If the "swapper" itself runs as a user-mode process, things become more complicated because it doesn't have access to another process' pages by default and has to either create a mapping or ask the kernel do some extra work for it in the context of the process of interest.
So, finding rarely used and long unused pages and their addresses occurs in the kernel. The CPU helps by automatically marking PTEs as accessed and dirty. There may need to be an extra mapping to dirty pages if they get saved to the disk not in the context of the process that owns them.
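Here is a toy illustration of the periodic accessed/dirty scan described above; the pte_t layout and the per-page idle counters are invented for this sketch (a real kernel keeps such statistics in its own per-page structures):

    #include <stdbool.h>
    #include <stdio.h>

    /* Toy model: one scan pass over a tiny set of software "PTEs". */
    typedef struct { bool present, accessed, dirty; } pte_t;

    #define NPAGES 4

    int main(void)
    {
        pte_t pte[NPAGES] = {
            { true,  true,  false },   /* read since the last scan      */
            { true,  true,  true  },   /* written since the last scan   */
            { true,  false, false },   /* untouched since the last scan */
            { false, false, false },   /* not resident at all           */
        };
        unsigned idle_scans[NPAGES] = { 0 };

        /* Read the hardware bits, update the statistics, then clear the
           bits so the next pass can detect new activity. */
        for (int i = 0; i < NPAGES; i++) {
            if (!pte[i].present)
                continue;
            if (pte[i].accessed)
                idle_scans[i] = 0;            /* page was used recently      */
            else
                idle_scans[i]++;              /* growing candidate for reuse */
            if (pte[i].dirty)
                printf("page %d was modified; write it back before reuse\n", i);
            pte[i].accessed = false;          /* record, then clear, the     */
            pte[i].dirty = false;             /* accessed and dirty markers  */
        }

        for (int i = 0; i < NPAGES; i++)
            printf("page %d idle for %u scan(s)\n", i, idle_scans[i]);
        return 0;
    }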
