Replacing memory mappings atomically on Windows

Is there any way to atomically replace a memory mapping on Windows?
On Unix, mmap() with MAP_FIXED will atomically replace the page mapped at the requested address.
But on Windows, MapViewOfFileEx() can't be used on an address if a page is already mapped there. The existing page must first be unmapped, e.g. with UnmapViewOfFile(). This means there is a short period during which the address is unallocated, so if another thread creates a memory mapping concurrently it might be placed at this address.
Is there an interface in Windows that gets around this problem, without modifying the kernel? Perhaps using system calls directly?
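For reference, a minimal Win32 sketch of the non-atomic sequence the question describes (the helper name replace_view is made up): between UnmapViewOfFile() and MapViewOfFileEx() the target range is free, so a concurrent allocation can land there and the remap then fails.

    #include <windows.h>

    /* Sketch only: unmap-then-remap is what plain Win32 offers here, and it
       leaves a window in which the address range is unreserved. */
    void *replace_view(HANDLE mapping, void *addr, SIZE_T size)
    {
        UnmapViewOfFile(addr);               /* the range becomes free here */

        /* ---- race window: another thread may allocate at addr ---- */

        return MapViewOfFileEx(mapping,      /* returns NULL if addr was taken */
                               FILE_MAP_READ | FILE_MAP_WRITE,
                               0, 0, size, addr);
    }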

Related

memory used by operating system

I think I'm missing a fundamental concept of how the OS manages memory.
The OS is responsible for keeping track of which parts of physical memory are free.
The OS creates and manages page tables, which hold the mappings from virtual to physical addresses.
For each instruction that specifies a virtual address, the hardware reads the page table to get the corresponding physical address. One way the hardware may know the location of the current page table is by a register that the OS updates.
This makes sense for how processes access memory. However, I don't understand how the OS itself accesses memory.
Assuming it uses the same instructions, would the hardware still be translating from virtual addresses to physical? Is there, for example, a known physical location for a page table for the OS itself? I know the question is murky; I'm having trouble even understanding what to ask.
At some point there has to be a page table in a physical location. The method used for this depends upon the processor.
Let me give a simple example based upon the VAX processor. Suppose you divide the logical address space into a system range shared by all processes and a user range that is unique to each process. Then give each of those ranges its own page table.
Now you can place the user page table in the system address range of the logical address space.
If you access memory in user space, you go to the user page table, which the system finds at a logical address in system space; the processor then has to translate that address into a physical address using the page table for the system space. That is a two-level translation.
If you used logical addresses for the system-space page table as well, you'd have no way of translating those into physical addresses. Instead, the location of the system page table is defined using physical addresses.
Another approach would be to define all page tables using physical addresses.
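A toy C model of the two-level lookup may make it concrete (purely illustrative, not real VAX code; physical memory is modeled as a flat array and the 512-byte page size is the only VAX-like detail):

    #include <stdint.h>

    #define PAGE_SIZE 512u                      /* VAX-style small pages */
    #define NPAGES    1024u

    static uint8_t  phys_mem[NPAGES * PAGE_SIZE]; /* stand-in for physical memory */
    static uint32_t sys_pt_phys;    /* PHYSICAL address of the system page table */
    static uint32_t user_pt_sysva;  /* system VIRTUAL address of the user page table */

    /* System-space translation: one lookup, the table's location is physical. */
    static uint32_t sys_to_phys(uint32_t sys_va)
    {
        uint32_t *sys_pt = (uint32_t *)&phys_mem[sys_pt_phys];
        uint32_t  pfn    = sys_pt[sys_va / PAGE_SIZE];
        return pfn * PAGE_SIZE + sys_va % PAGE_SIZE;
    }

    /* User-space translation: two lookups, because the user page table is
       itself addressed with a system virtual address. */
    static uint32_t user_to_phys(uint32_t user_va)
    {
        uint32_t pte_sys_va = user_pt_sysva + (user_va / PAGE_SIZE) * sizeof(uint32_t);
        uint32_t pte_phys   = sys_to_phys(pte_sys_va);           /* 1st translation */
        uint32_t pfn        = *(uint32_t *)&phys_mem[pte_phys];  /* read the user PTE */
        return pfn * PAGE_SIZE + user_va % PAGE_SIZE;            /* 2nd translation */
    }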
I don't understand how the OS itself accesses memory.
Think of the operating system as a process. The OS basically is a process, just like other processes, but with elevated privileges. Whenever the OS wants to use some memory location, it uses page tables for virtual-to-physical address translation, just like other processes would.
Think of it this way: every process has a page table of its own, and the same goes for the OS. The OS remembers the location of these page tables in the control structures associated with each process (e.g. the PCB), and for the currently running process the address (a physical pointer) of the page table is kept in hardware (on the x86 architecture this is control register 3, CR3). Whenever the OS switches the running process, it changes the value in CR3 so that it points to the correct page table (its own, if it switches to itself).
However, this is greatly simplified in modern operating systems, where the kernel (the OS) is mapped into the address space of all processes, so that the kernel can run whenever it wants without having to switch page tables (which is costly). The kernel's pages in a process' page table are marked as privileged, so they are inaccessible to user-mode code and are only used once the kernel takes control to do some management task.
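As a rough illustration, here is what the switch might look like on x86-64 in a toy kernel (not taken from any real kernel; the struct pcb layout is invented):

    #include <stdint.h>

    struct pcb {
        uint64_t page_table_phys;  /* physical address of this process' top-level table */
        /* ... saved registers and other per-process state ... */
    };

    /* Point the MMU at the next process' page tables.  Writing CR3 also
       flushes the non-global TLB entries, which is part of why switching
       address spaces is costly. */
    static void switch_address_space(struct pcb *next)
    {
        __asm__ volatile("mov %0, %%cr3" :: "r"(next->page_table_phys) : "memory");
    }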

What happens when I change data in a shared library?

I have some shared libraries mapped into the virtual address space of my task. What happens when I change some data, for example in the .bss section? (I do it using kmap with the page's physical address as an argument.) I can see two possibilities: either the data is changed and this affects all tasks that use the library, or that particular page is copied due to COW.
I think it's neither. The .bss area is set up when the executable is loaded. Virtual memory space is allocated for it at that time, and that space won't be shared with any other task. Pages won't be allocated initially (by default, mlock* can change that); they will be faulted in (i.e. demand-zeroed) as referenced.
I think that even if the process forks before touching the memory, the new process would then just get the equivalent (same virtual memory space marked as demand-zero).
So if you already have a physical address for it, I would think the fault-in has already happened, and you won't be changing anything except the one page belonging to the current process.
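A small demo of that point, using the program's own .bss for simplicity since the mechanism is the same: once a process writes to a demand-zeroed page, the write is visible only to that process.

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static int bss_counter;            /* lives in .bss, demand-zeroed on first touch */

    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0) {                /* child: fault in and modify its own copy */
            bss_counter = 42;
            return 0;
        }
        wait(NULL);
        printf("parent still sees %d\n", bss_counter);   /* prints 0 */
        return 0;
    }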

Modifying Linux process page table for physical memory access without system call

I am developing a real-time application for Linux 3.5.7. The application needs to manage a PCI-E device.
In order to access the PCI-E card's memory spaces, I have been using mmap in combination with /dev/mem. However (please correct me if I am wrong), each time I read or write the mapped memory, a system call is required for the /dev/mem pseudo-driver to handle the memory access.
To avoid the overhead of this system call, I think it should be possible to write a kernel module so that, within e.g. an ioctl call, I can modify the process page table in order to map the physical device pages to userspace pages and avoid the system call.
Can you give me some orientation on this?
Thanks and regards
However (please correct me if I am wrong) each time I read or write the mapped memory, a system call is required
You are wrong. Once mmap() has set up the mapping, reads and writes to it are ordinary loads and stores that go through the page tables; no system call or driver code is involved per access.
it should be possible to write a kernel module so that, within e.g. a ioctl call I can modify the process page table
This is precisely what mmap() does: it modifies the process page tables so that the device's physical pages are mapped directly into your address space.
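A minimal user-space sketch of that, assuming a hypothetical BAR base address (on a real system it would come from the device's config space, e.g. via lspci): one mmap() call sets up the page tables, and every access after that is a plain load or store.

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define BAR_PHYS 0xfe000000UL      /* hypothetical physical base of the BAR */
    #define BAR_SIZE 0x1000UL

    int main(void)
    {
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0) { perror("open"); return 1; }

        /* The only system call involved in the mapping itself. */
        volatile uint32_t *regs = mmap(NULL, BAR_SIZE, PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, BAR_PHYS);
        if (regs == MAP_FAILED) { perror("mmap"); return 1; }

        /* Ordinary loads and stores from here on: no syscall per access. */
        uint32_t status = regs[0];
        regs[1] = 0x1;

        printf("status = 0x%08x\n", status);
        munmap((void *)regs, BAR_SIZE);
        close(fd);
        return 0;
    }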

Mapping of Page allocated to user process in Kernel virtual address space

When a page is created for a process (which will be mapped into the process address space), will that page also be mapped into the kernel address space?
If not, then it won't have a kernel virtual address. How, then, will the swapper find the page and swap it out if the need arises?
If we're talking about the x86 or similar (in terms of page translation) architectures, at any given time there's one virtual address space and normally one part of it is reserved for the kernel and the other for user-mode processes.
On a context switch between two processes only the user-mode part of the virtual address space changes.
With such an organization, the kernel always has full access to the current user-mode process because, again, at any moment there is only one current virtual address space for both the kernel and the user-mode process: not two, but one. So the kernel doesn't really need another, extra mapping for user-mode pages. But that's not the main point.
The main point is that the kernel keeps some sort of statistics for every page that, if needed, can be saved to disk and reused elsewhere. The CPU marks each page's page table entry (PTE) as accessed when the page is first read from or written to, and as dirty when it's first written to.
The kernel scans the PTEs periodically, reads the accessed and dirty markers to update those statistics, and clears accessed and dirty so it can detect a change in them later (if any). Based on these statistics it determines which pages are rarely used or long unused and can be repurposed.
If the "swapper" runs in the context of the current process and if it runs in the kernel, then in theory it has enough information from the kernel (the list of rarely used or long unused pages to save and unmap if dirty or just unmap if not dirty) and sufficient access to the pages of interest.
If the "swapper" itself runs as a user-mode process, things become more complicated because it doesn't have access to another process' pages by default and has to either create a mapping or ask the kernel to do some extra work for it in the context of the process of interest.
So, finding rarely used and long unused pages and their addresses occurs in the kernel. The CPU helps by automatically marking PTEs as accessed and dirty. There may need to be an extra mapping to dirty pages if they get saved to the disk not in the context of the process that owns them.
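A sketch of the scan step in C, assuming the x86 PTE layout (bit 5 = accessed, bit 6 = dirty); pte_of() is a hypothetical stand-in for however the kernel walks to a page's PTE, so this is not real Linux code:

    #include <stdbool.h>
    #include <stdint.h>

    #define PTE_ACCESSED (1ull << 5)
    #define PTE_DIRTY    (1ull << 6)

    extern uint64_t *pte_of(void *page);   /* hypothetical PTE lookup */

    /* Returns true if the page was touched since the last scan and records
       whether it must be written back before being repurposed. */
    static bool scan_page(void *page, bool *needs_writeback)
    {
        uint64_t *pte    = pte_of(page);
        bool accessed    = *pte & PTE_ACCESSED;
        *needs_writeback = *pte & PTE_DIRTY;

        /* Clear the markers so the next scan sees fresh activity only.
           A real kernel would also flush the TLB entry here. */
        *pte &= ~(PTE_ACCESSED | PTE_DIRTY);
        return accessed;
    }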

How can I access memory at known physical address inside a kernel module?

I am trying to work on this problem: a user-space program keeps polling a buffer to get requests from a kernel module, serves them, and then responds to the kernel.
I want to make the solution much faster, so instead of creating a device file and communicating through it, I allocate a memory buffer from user space and mark it as pinned, so the memory pages never get swapped out. Then user space invokes a special syscall to tell the kernel about the memory buffer, so that the kernel module can get the physical address of that buffer (because the user-space program may be context-switched out, and hence its virtual addresses mean nothing if the kernel module accesses the buffer at that time).
When the module wants to send a request, it needs to put the request into the buffer via its physical address. The question is: how can I access the buffer inside the kernel module via its physical address?
I noticed there is get_user_pages, but I don't know how to use it; or maybe there are other, better methods?
Thanks.
You are better off doing this the other way around - have the kernel allocate the buffer, then allow the userspace program to map it into its address space using mmap().
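A rough sketch of that direction as a character device's mmap handler (module boilerplate omitted, sizes are made up; remap_pfn_range() does the actual page-table work):

    #include <linux/fs.h>
    #include <linux/gfp.h>
    #include <linux/init.h>
    #include <linux/mm.h>
    #include <asm/io.h>

    #define BUF_ORDER 2                    /* hypothetical: 4 contiguous pages */

    static void *shared_buf;               /* allocated once at module init */

    static int __init mydev_alloc_buf(void)
    {
        shared_buf = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, BUF_ORDER);
        return shared_buf ? 0 : -ENOMEM;
    }

    /* Called when userspace mmap()s the device: map the kernel buffer's
       pages into the calling process' address space. */
    static int mydev_mmap(struct file *filp, struct vm_area_struct *vma)
    {
        unsigned long pfn  = virt_to_phys(shared_buf) >> PAGE_SHIFT;
        unsigned long size = vma->vm_end - vma->vm_start;

        return remap_pfn_range(vma, vma->vm_start, pfn, size, vma->vm_page_prot);
    }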
Finally I figured out how to handle this problem...
It's quite simple, though maybe not safe.
I use phys_to_virt, which calls __va(pa), to get the kernel virtual address, and with that I can access the buffer. Because the buffer is pinned, the physical pages are guaranteed to stay in place.
What's more, I don't need a special syscall to tell the kernel about the buffer. A proc file is enough, because I only need to tell the kernel once.
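For completeness, a minimal sketch of the approach described above, assuming the pinned buffer's pages fall inside the kernel's direct mapping (lowmem) so that phys_to_virt() is valid for them; buffer_phys and write_request are invented names:

    #include <linux/string.h>
    #include <linux/types.h>
    #include <asm/io.h>

    static phys_addr_t buffer_phys;   /* reported once by userspace via the proc file */

    /* Put a request into the shared buffer using only its physical address;
       no user context is needed because we go through the direct map. */
    static void write_request(const void *req, size_t len)
    {
        void *vaddr = phys_to_virt(buffer_phys);  /* physical -> kernel virtual */
        memcpy(vaddr, req, len);
    }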
