While studying how cheats and anti-cheats work, I became interested in how access to physical (not virtual) RAM addresses is implemented; for example, on Windows, MmAllocateContiguousMemory is used to read from a physical address (in some examples).
But how does access to physical memory addresses actually work? (I have not found any asm/C examples that do not use the Native API or WinAPI.) I suppose that Windows takes full control over memory and provides only wrappers for working with it, but access to memory still has to be implemented somehow, both in Windows and in other OSs. How does it work, and is it possible to read physical memory without WinAPI / Native API? (memtest manages it somehow.)
When an OS boots up, it enables virtual memory. From that point on, every access to memory (address space) goes through the MMU of the CPU.
Controlling the MMU is a privileged operation; only privileged code (read: the OS) can change the MMU configuration.
The MMU configuration controls what physical address a virtual address is mapped to.
When you ask Windows to "read" a range of physical memory, it actually sets up the MMU so that the returned virtual range maps to the given physical memory.
You then read the virtual memory as usual.
Virtual memory is not something that exists alongside physical memory; it sits on top of physical memory.
It's always there.
So a user-space program cannot access physical memory without the help of the OS; the CPU will always use virtual memory once it is enabled.
In fact, this would pose a security risk as it bypasses the security mechanism virtual memory provides on top of physical memory (memory isolation).
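For illustration, here is a minimal kernel-mode sketch (driver code, not something a user-space program can do) of what that mapping looks like; the physical address and length are placeholders:

    // Ask the memory manager to map a physical range into virtual address
    // space, then read it through an ordinary pointer.
    PHYSICAL_ADDRESS pa;
    SIZE_T length = PAGE_SIZE;
    pa.QuadPart = 0x1000;                        // placeholder physical address

    // The OS programs the page tables so that 'va' points at that physical page.
    volatile UCHAR *va = (volatile UCHAR *)MmMapIoSpace(pa, length, MmNonCached);
    if (va != NULL) {
        UCHAR firstByte = va[0];                 // an ordinary virtual-memory read
        DbgPrint("first byte: 0x%02x\n", firstByte);
        MmUnmapIoSpace((PVOID)va, length);       // tear the mapping down again
    }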
Related
I'm currently writing a driver for a PCIe device that should send data to a Linux system using DMA. As far as I understand, my PCIe device needs a DMA controller (DMA master) and my Linux system needs one too (DMA slave). Currently the PCIe device has no DMA controller and should not get one. That confuses me.
A. Is the following possible?
1. The PCIe device sends an interrupt.
2. Wait for the interrupt in the Linux driver.
3. Start a DMA transfer from the memory-mapped PCIe registers into Linux system memory.
4. Read the data from memory in userspace.
I have everything set up for this; the only thing I am missing is how to transfer the data from the PCIe registers into memory.
B. Which system call (or series of calls) do I need to use to do a DMA transfer?
C. I probably need to set up the DMA on the Linux system, but what I find points to code that assumes there is a slave, e.g. struct dma_slave_config.
The use case is collecting data from the PCIe device and making it available in memory to userspace.
Any help is much appreciated. Thanks in advance!
DMA, by definition, is completely independent of the CPU and any software (i.e. OS kernel) running on it. DMA is a way for devices to perform memory reads and writes against host memory without the involvement of the host CPU.
The way DMA usually works is something like this: software will allocate a DMA accessible region in memory and share the physical address with the device, say, by performing memory writes against the address space associated with one of the device's BARs. Then, the device will perform a DMA read or write against that block of memory. When that operation is complete, the device will issue an interrupt to the device driver so it can handle the data and/or free the memory.
If your device does not have the capability of issuing a DMA read or write against host memory, then you'll have to interact with it using the CPU only. Discrete DMA controllers have not been a thing for a very long time.
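To make the usual flow above concrete, here is a hedged Linux kernel sketch, assuming the device can act as a DMA master; the register offsets and buffer size below are invented for illustration and must be replaced with whatever your device actually defines:

    #include <linux/kernel.h>
    #include <linux/pci.h>
    #include <linux/dma-mapping.h>

    #define BUF_SIZE      4096     /* placeholder buffer size */
    #define REG_DMA_ADDR  0x00     /* placeholder register offsets */
    #define REG_DMA_LEN   0x08
    #define REG_DMA_START 0x0c

    static int start_dma(struct pci_dev *pdev)
    {
        void __iomem *regs = pci_iomap(pdev, 0, 0);   /* map BAR 0 registers */
        dma_addr_t dma_handle;
        void *cpu_buf;

        if (!regs)
            return -ENOMEM;

        /* Allocate host memory the device is allowed to read/write via DMA. */
        cpu_buf = dma_alloc_coherent(&pdev->dev, BUF_SIZE, &dma_handle, GFP_KERNEL);
        if (!cpu_buf)
            return -ENOMEM;

        /* Tell the device the bus address of the buffer and start the transfer;
         * the device raises an interrupt once it has filled the buffer. */
        iowrite32(lower_32_bits(dma_handle), regs + REG_DMA_ADDR);
        iowrite32(upper_32_bits(dma_handle), regs + REG_DMA_ADDR + 4);
        iowrite32(BUF_SIZE, regs + REG_DMA_LEN);
        iowrite32(1, regs + REG_DMA_START);
        return 0;
    }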
I am writing some proof of concept code for KVM for communication between Windows 10 and the Host Linux system.
What I have is a virtual RAM device that is actually connected to a shared memory segment on the Host. The PCIe BAR 2 is a direct mapping to this RAM.
My intent is to provide a high bandwidth low latency means of transferring data that doesn't involve other common means used (sockets, etc). ZeroCopy would be ideal.
So far I have pretty much everything working: I have written a driver that calls MmAllocateMdlForIoSpace and then maps the memory to user mode with MmMapLockedPagesSpecifyCache via a DeviceIoControl. This works perfectly; the user-mode application is able to address the shared memory and write to it.
What I am missing is the ability to use CreateFileMapping in user mode to obtain a HANDLE to a mapping of this memory. I am fairly new to Windows driver programming, and as such I am uncertain whether this is even possible. Any pointers as to the best way to achieve this would be very helpful.
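For reference, a hedged sketch of the MDL flow described above; the BAR physical address and length are placeholders, and error handling (including the __try/__except that should surround a user-mode mapping) is omitted:

    // Build an MDL for the BAR's physical range, then map it into the calling
    // user-mode process (this runs in the context of the IOCTL caller).
    MM_PHYSICAL_ADDRESS_LIST range;
    PMDL mdl = NULL;
    PVOID userVa = NULL;
    NTSTATUS status;

    range.PhysicalAddress.QuadPart = BarPhysicalAddress;    // BAR 2 base, placeholder
    range.NumberOfBytes = BarLength;                        // placeholder

    status = MmAllocateMdlForIoSpace(&range, 1, &mdl);
    if (NT_SUCCESS(status)) {
        userVa = MmMapLockedPagesSpecifyCache(mdl, UserMode, MmNonCached,
                                              NULL, FALSE, NormalPagePriority);
    }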
This looks strange, because once the MMU is enabled we operate with virtual addresses and do not use physical addresses directly.
I suppose that it is a kernel hardening measure.
Suppose that an attacker is able to corrupt a PTE.
If the physical location of the kernel is always known, then the attacker can immediately remap the page onto a suitable physical location and get code execution as a privileged user.
I think 'protection from DMA-capable devices' is not a valid answer.
If a malicious DMA-capable device has access to all of the physical memory, e.g. no protection through IOTLB, then the device can scrape memory and immediately find where the kernel is located in physical memory.
We are writing a DMA-based driver for a custom made PCI-Express device using WDF for Windows 7.
As you may know, PCI-Express bus transactions are not allowed to cross a 4k memory boundary. The custom device does not check this, and therefore we need to ensure that the driver only requests DMA transfers which are aligned to 4k memory boundaries.
The profile for the device is WdfDmaProfilePacket64.
We tried using WdfDeviceSetAlignmentRequirement(DevExt->Device, 4095), but this does not result in the DMA start address being properly aligned.
How can we configure the WDF framework so that it only requests properly aligned addresses?
You can handle this in the user-space application: allocate an aligned buffer in user space and then hand it to the kernel driver. It is not easy for a driver to align memory that has already been allocated and initialized. Even in a user-space application you have to allocate extra space and then use the aligned part of it (I know, it's not pretty; that's why I recommend solving this problem on the device side).
For example, if you use C++ for your user-space application, you can do something like this:
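(A minimal sketch of the over-allocate-and-align idea; the alignment and payload size are placeholders, and the code is plain C that compiles in a C++ program as well.)

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        size_t alignment = 4096;            /* PCIe 4k boundary */
        size_t size = 64 * 1024;            /* payload size, placeholder */
        void *raw;
        void *aligned;
        uintptr_t addr;

        /* Over-allocate so that an aligned region of 'size' bytes fits inside. */
        raw = malloc(size + alignment - 1);
        if (raw == NULL)
            return 1;

        /* Round the address up to the next 4k boundary. Hand 'aligned' to the
         * driver, but keep 'raw' around because that is what must be freed. */
        addr = ((uintptr_t)raw + alignment - 1) & ~(uintptr_t)(alignment - 1);
        aligned = (void *)addr;
        memset(aligned, 0, size);           /* touch the aligned region */

        /* ... pass 'aligned' (and 'size') to the kernel driver here ... */

        free(raw);
        return 0;
    }

The same result can be had without the manual arithmetic from _aligned_malloc() on Windows or aligned_alloc() in C11, if those are available to you.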
I am writing a kernel module in a guest operating system that will be run on a virtual machine using KVM. Here I want to allocate a memory page at a particular physical address. kmalloc() gives me memory, but at a physical address chosen by the OS.
Background: I am writing a device emulation technique in QEMU that does not cause a VM exit when the guest communicates with the device (an exit does happen, for example, with memory-mapped as well as port-mapped I/O devices). The basic idea is as follows: the guest device driver will write to a specific (guest) physical memory address. A thread in the QEMU process will poll it continuously to check for new data (through some status bits etc.) and will take action accordingly without causing an exit. Since there is no (existing) way for the guest to tell the host what address is being used by the device driver, I want a pre-specified memory page to be allocated for it.
You cannot allocate memory at a specific address; however, you can reserve certain physical addresses at boot time using reserve_bootmem(). Calling reserve_bootmem() early during boot (which, of course, requires a modified kernel) ensures that the reserved memory will not be passed on to the buddy system (i.e. alloc_pages() and higher-level friends such as kmalloc()), and you will be able to use that memory for any purpose.
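A hedged sketch of that approach; the address and size are placeholders, and reserve_bootmem() belongs to the old bootmem allocator (on newer kernels the equivalent is memblock_reserve()):

    /* Early during boot, in the (modified) kernel's setup code: keep the page
     * out of the buddy allocator. */
    #define MY_RESERVED_PHYS  0x10000000UL   /* guest-physical address, placeholder */
    #define MY_RESERVED_SIZE  0x1000UL       /* one page */

    reserve_bootmem(MY_RESERVED_PHYS, MY_RESERVED_SIZE, BOOTMEM_DEFAULT);

    /* Later, from the kernel module, obtain a virtual address for that page: */
    void __iomem *va = ioremap(MY_RESERVED_PHYS, MY_RESERVED_SIZE);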
It sounds like you should be attacking this from the other side, by having a physical memory range reserved in the memory map that the QEMU BIOS passes to the guest kernel at boot.