map a buffer allocated by another module from kernel to user space - linux-kernel

I am developing a Linux kernel driver on 3.4. The purpose of this driver is to provide a mmap interface to userspace for a buffer allocated in another kernel module, likely using kzalloc() (more details below). The pointer returned by mmap() must point to the first address of this buffer.
I get the physical address from virt_to_phys(). I pass this address, right-shifted by PAGE_SHIFT, to remap_pfn_range() in my mmap fops call.
It is working for now, but it looks to me like I am not doing things properly, because nothing ensures that my buffer starts at the beginning of a page (correct me if I am wrong). Maybe mmap()ing is not the right solution? I have already read chapter 15 of LDD3, but maybe I am missing something?
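Roughly, my mmap handler looks like this (simplified sketch; my_buf stands for the pointer I get from the other module):

    #include <linux/fs.h>
    #include <linux/mm.h>
    #include <asm/io.h>   /* virt_to_phys() */

    extern void *my_buf;  /* buffer obtained from the other module */

    static int my_mmap(struct file *filp, struct vm_area_struct *vma)
    {
        unsigned long pfn  = virt_to_phys(my_buf) >> PAGE_SHIFT;
        unsigned long size = vma->vm_end - vma->vm_start;

        /* map the pages backing my_buf into the calling process */
        return remap_pfn_range(vma, vma->vm_start, pfn, size,
                               vma->vm_page_prot);
    }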
Details:
The buffer is in fact a shared memory region allocated by the remoteproc module. This region is used within an asymmetric multiprocessing design (OMAP4). I can get this buffer thanks to the rproc_da_to_va() call. That is why there is no way to use something like get_free_pages().
Regards
Kev

Yes, you're correct: there is no guarantee that the allocated memory is at the beginning of a page. And no simple way for you to both guarantee that and to make it truly shared memory.
Obviously you could (a) copy the data from the kzalloc'd address into a newly allocated page and insert that into the mmap()ing process's virtual address space, but then it's no longer shared with the original data created by the other kernel module.
You could also (b) map the actual page allocated by the other module into the process's memory map, but it's not guaranteed to be on a page boundary, and you'd also be sharing whatever other kernel data happens to reside in that page (which is both a security issue and a potential source of kernel data corruption by the user-space process into which you're sharing the page).
I suppose you could (c) modify the memory manager to return every piece of allocated data at the beginning of a page. This would work, but then every time a driver wants to allocate 12 bytes for some small structure, it will in fact be allocating 4K bytes (or whatever your page size is). That's going to waste a huge amount of memory.
There is simply no way to trick the processor into making the memory appear to be at two different offsets within a page: the MMU remaps whole pages only, and the offset within a page passes through address translation unchanged. It's not physically possible.
Your best bet is probably to (d) modify the other driver to allocate the specific bits of data that you want to make shared in a way that ensures alignment on a page boundary (i.e. something you write to replace kzalloc).
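As a rough sketch of (d), assuming you can change the other module (the function names here are made up): handing out whole zeroed pages instead of kzalloc'd memory makes the buffer page-aligned by construction.

    #include <linux/gfp.h>
    #include <linux/mm.h>

    /* allocate 'size' bytes rounded up to whole, zeroed pages; the
     * result starts on a page boundary by construction */
    static void *alloc_page_aligned_buf(size_t size)
    {
        return (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
                                        get_order(size));
    }

    static void free_page_aligned_buf(void *buf, size_t size)
    {
        free_pages((unsigned long)buf, get_order(size));
    }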

Related

Can one remap kernel virtual address for use by kernel code

I am porting a large application to ARM32 Linux and splitting off the hardware stuff into a device driver. Nearly all of the extensive driver code uses absolute addresses to access buffers and I/O related variables and registers. I'd hate to have to change all that to pointer-relative addresses - a lot of the code is in assembler as well.
From user space it is simple to use mmap to ask for a target virtual address for physical memory (via /dev/mem) so that side poses no issue.
But how can I do something similar in kernel code? ioremap() and memremap() give you a random kernel virtual address; worse, loading a driver with insmod places both code and data (.bss) in vmalloc memory.
remap_pfn_range() can be used to map kernel memory to user space via an mmap call (and with that, ask for a given virtual address range) - but how can that be used from the kernel itself, if at all?
So, for example, say I have a buffer at physical address 0x60000000 - how can I tell the kernel to map that to a given kernel-accessible virtual address (perhaps also 0x60000000, but it could be anything as long as it's known at compile time)?
So far I have spent days surfing anything that mentions remapping but am not finding the "golden" answer. Anybody know if one exists ?
AFAIK there is no "easy" way to do that.
This document explains the memory layout of the Linux kernel, and as you can see, modules have a specific mapping space which can't be changed as long as you load your code with the init_module syscall; dynamic memory allocated using stuff like kmalloc also has a specific range.
Maybe you'll be able to hack something together to create a buffer at a known address, but if memory serves, Linux depends on the layout mentioned above for some fundamental things (page faults etc.).
OK, I have the answer and it's embarrassingly simple.
In my case I am running an STM32MP157 chip under Buildroot. It so happens that 512 MB of DRAM is placed at physical address 0xC0000000. This means kernel-space virtual address = physical address: PAGE_OFFSET and PHYS_OFFSET are both 0xC0000000, so they simply cancel out.
Right: to display a nice logo on startup, a 3 MB framebuffer is allocated in CMA memory, which starts at 0xD8000000. This is done in early kernel init and is the first thing in CMA. Later on I allocate more framebuffers via DRM, but the first one stays.
It's unused after kernel boot - except now it isn't. It's my perfect solution: I just read and write directly into 0xD8000000 to 0xD83FFFFF (the physical location and size of that framebuffer). All the variables I need at compile-time-known locations are placed into that space: directly accessible, no pointers needed. No need to modify my existing code other than telling the linker to place the variables at 0xD8000000.
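For illustration, a sketch of what that linker arrangement can look like (the section and variable names are placeholders; the exact syntax depends on your toolchain):

    #include <stdint.h>

    /*
     * Variables that must live at a compile-time-known address go into
     * a dedicated section, which the linker script then pins to the
     * old framebuffer, e.g. with a fragment like:
     *
     *   .fbmem 0xD8000000 (NOLOAD) : { *(.fbmem) }
     */
    __attribute__((section(".fbmem"))) volatile uint32_t status_word;
    __attribute__((section(".fbmem"))) volatile uint8_t  dma_buf[4096];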

what is the use of attaching static data to a program when it is loaded into main memory?

When the operating system loads a program into main memory, it also attaches the static data, along with the stack and heap memory. I googled what is present in the static data, and what I found said it contains the global variables and static variables. But I am confused, as both of these are already present in the text file of the program, so why do we add them separately?
The data in the executable is often referred to as the data segment. The CPU doesn't interact with the hard disk, only with RAM, so the data segment must be loaded into RAM before the CPU can access it. The file of the executable is not really a text file: it is an executable, so it has a different extension. "Text file" usually refers to an actual file with a .txt extension.
With that said, you also asked another question not long ago (If the amount of stack memory provided to a program is fixed then why does it grow downwards in the process architecture? Or am I getting it wrong?), so I will try to give some insight into both in this same answer.
I don't know much about caching and low-level CPU internals but, today, the CPU mostly doesn't even operate on RAM directly. It loads chunks of RAM into the cache, operates on them, and keeps RAM-cache consistency through complex mechanisms. The OS also has its role to play in RAM-cache consistency but, like I said, I am far from an expert here. Other than that, caching is mostly transparent to the OS: the CPU handles it, and the OS simply provides instructions which the CPU executes.
Today, paging is used by most OSes and implemented on most CPU architectures. With paging, every process sees a full contiguous virtual address space. The virtual address space is accessed contiguously, and the hardware MMU translates those addresses to physical ones automatically by walking the page tables. The OS is responsible for keeping the page tables consistent, and the MMU does the rest of the job (for more info read: What is paging exactly? OSDEV). If you understand paging well, things become much clearer.
For a process, there are mostly three types of memory: the stack (often called automatic storage), the heap, and the static/global data. I will attempt to give some precision on each of these to paint a global picture.
The stack is given a maximum size when the process begins. The OS handles that: it creates the page tables and places the proper address in the stack pointer register so that stack accesses reach the proper region of physical memory. The stack is automatic storage, which means it isn't handled manually by the high-level programmer. For example, in C/C++, the stack is managed by the compiler which, at the entry of a function, will create a stack frame and place offsets from the stack base pointer in the instructions. Every local variable (within a function) is accessed with a negative offset relative to the stack base pointer. What the compiler needs to do is create a stack frame of the proper size so that there is enough room for all the local variables of a particular function (for more info on the stack see: Each program allocates a fixed stack size? Who defines the amount of stack memory for each application running?).
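A tiny example of what this looks like in C (the exact offsets and registers are compiler- and architecture-dependent):

    int sum3(int a, int b, int c)
    {
        /* 'total' is automatic storage: the compiler reserves room for
         * it in this function's stack frame and addresses it with a
         * fixed offset from the frame/base pointer, e.g. [rbp-4] on
         * x86-64 */
        int total = a + b + c;
        return total;  /* the frame is reclaimed on return */
    }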
For the heap, the OS reserves a very big amount of virtual memory. Today, virtual memory is very big (2^48 bytes or more). The amount of heap available to each process is often limited only by the amount of physical memory available to back the virtual memory allocations. For example, a process could use malloc() to allocate 4KB of memory in C. The libc library, which is an implementation of the C standard library, will invoke the OS with a system call. The OS will then reserve a page of the virtual memory available for the heap and change the page tables so that accessing that portion of virtual memory translates to somewhere in RAM (probably somewhere no other process was already using).
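For a concrete picture, here is roughly what a large malloc() request boils down to on Linux (a sketch; real allocators also manage their own free lists):

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* ask the kernel directly for one anonymous, zeroed page,
         * which is essentially what malloc() does under the hood for
         * big requests */
        void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        printf("page mapped at %p\n", p);
        munmap(p, 4096);
        return 0;
    }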
The static/global data is simply placed in the executable, in the data segment. The data segment is loaded into virtual memory alongside the text segment. The code in the text segment can thus access this data, often using RIP-relative addressing.
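A small example of that last point (the assembly in the comment is typical x86-64 output, not guaranteed):

    static int counter = 7;  /* ends up in the data segment */

    int read_counter(void)
    {
        /* on x86-64 the compiler typically emits something like
         *   mov eax, DWORD PTR counter[rip]
         * i.e. code in the text segment reaches the data segment at a
         * fixed offset from the current instruction pointer */
        return counter;
    }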

Does the last GB in the address space of a linux process map to the same physical memory?

I read that the first 3 GB are reserved for the process and the last GB is for the kernel. I also read that the kernel is loaded starting from the 2nd MB of the physical address space (depending on the configuration). My question: is the mapping of that last 1 GB the same for all processes, and does it map to this physical area of memory?
Another question: when a process switches to kernel mode (e.g. when a syscall occurs), which page tables are used, the process page tables or the kernel page tables? If kernel page tables are used, then they can't access the memory locations belonging to the process. If that is the case, then there is apparently no use for kernel virtual memory, since all access to kernel code and data would be through the mapping of the last 1 GB of the process address space. Please help me clarify this (any useful links will be much appreciated).
It seems you are talking about 32-bit x86 systems, right?
If I am not mistaken, the kernel can be configured not only for a 3GB/1GB memory split; there can be other variants (e.g. 2GB/2GB). Still, 3GB/1GB is probably the most common one on x86-32.
The kernel part of the address space should be inaccessible from user space. From the kernel's point of view, yes, the mapping of the memory occupied by the kernel itself is always the same, no matter in the context of which process (or interrupt handler, or whatever else) the kernel currently operates.
As one of the consequences, if you look at the addresses of kernel symbols in /proc/kallsyms from different processes, you will see the same addresses each time. And these are exactly the addresses of the respective kernel functions, variables and others from the kernel's point of view.
So I suppose the answer to your first question is "yes", but it is probably not very useful for user-space code, as kernel-space memory is not directly accessible from there anyway.
As for the second question: if the kernel currently operates in the context of some process, it can actually access the user-space memory of that process. I can't describe it in detail, but the implementations of the kernel functions copy_from_user and copy_to_user could give you some hints; see arch/x86/lib/usercopy_32.c and arch/x86/include/asm/uaccess.h in the kernel sources. It seems that, on x86-32, the user-space memory is accessed in these functions directly, using the default memory mappings for the current process context. The 'magic' there is only related to optimizations and to checking the address of the memory area for correctness.
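To make that concrete, here is a hypothetical sketch of a driver write handler; in process context, the user pointer is reached through the calling process's own page tables:

    #include <linux/fs.h>
    #include <linux/uaccess.h>

    static ssize_t my_write(struct file *filp, const char __user *ubuf,
                            size_t len, loff_t *off)
    {
        char kbuf[64];

        if (len > sizeof(kbuf))
            len = sizeof(kbuf);
        /* copy_from_user() validates the user range, then copies
         * through the current process's memory mappings */
        if (copy_from_user(kbuf, ubuf, len))
            return -EFAULT;
        return len;
    }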
Yes, the mapping of the kernel part of the address space is the same in all processes. Part of it does map that part of the physical memory where the kernel image is loaded, but that's not the bulk of it - the remainder is used to map other physical memory locations for the kernel's runtime working set.
When a process switches to kernel mode, the page tables are not changed. The kernel part of the address space simply becomes accessible because the CPL (Current Privilege Level) is now zero.

Is there any way to convert unshared heap memory to shared memory? Primary target *nix

I was wondering whether there is any reasonably portable way to take existing, unshared heap memory and convert it into shared memory. The use case is a block of memory which is too large for me to want to copy unnecessarily (i.e. into a new shared memory segment) and which is allocated by routines out of my control. The primary target is *nix/POSIX, but I would also be interested to know if it can be done on Windows.
Many *nixes have Plan 9's procfs, which allows reading a process's memory through /proc/{pid}/mem:
http://en.wikipedia.org/wiki/Procfs
You tell the other process your PID, the size and the base address, and it can simply read the data (or mmap the region into its own address space).
EDIT: Apparently you can't open /proc/{pid}/mem without a prior ptrace(), so this is basically worthless.
Under most *nixes, ptrace(2) allows you to attach to a process and read its memory.
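Combining the two points above, a rough sketch (Linux-specific; error handling abbreviated, and you need permission to trace the target):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* read 'len' bytes at 'addr' in another process's address space */
    static int read_remote(pid_t pid, off_t addr, void *buf, size_t len)
    {
        char path[64];
        ssize_t n = -1;
        int fd;

        if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) == -1)
            return -1;
        waitpid(pid, NULL, 0);  /* wait for the target to stop */

        snprintf(path, sizeof(path), "/proc/%d/mem", (int)pid);
        fd = open(path, O_RDONLY);
        if (fd >= 0) {
            n = pread(fd, buf, len, addr);
            close(fd);
        }

        ptrace(PTRACE_DETACH, pid, NULL, NULL);
        return n == (ssize_t)len ? 0 : -1;
    }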
The ptrace method does not work on OS X; you need more magic there:
http://www.voidzone.org/readprocessmemory-and-writeprocessmemory-on-os-x/
What you need under Windows is the function ReadProcessMemory. Googling "What is ReadProcessMemory for $OSNAME" seems to return comprehensive result sets.
You can try using mmap() with the MAP_FIXED flag and the address of your unshared memory (allocated from the heap). However, if you provide mmap() with your own pointer, it must be aligned and sized according to the memory page size (which can be queried with sysconf()), since mapping is only possible for whole pages.
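A sketch of the alignment arithmetic involved (note the caveat in the comment):

    #include <stdint.h>
    #include <unistd.h>

    /* round an arbitrary heap range [p, p+len) out to whole pages,
     * since mmap() with MAP_FIXED only accepts a page-aligned address
     * and size */
    static void page_align_range(void *p, size_t len,
                                 uintptr_t *start, size_t *aligned_len)
    {
        uintptr_t pagesz = (uintptr_t)sysconf(_SC_PAGESIZE);
        uintptr_t end    = ((uintptr_t)p + len + pagesz - 1)
                           & ~(pagesz - 1);

        *start       = (uintptr_t)p & ~(pagesz - 1);  /* round down */
        *aligned_len = end - *start;
        /* caveat: MAP_FIXED replaces whatever mapping already covers
         * this range, so the existing heap contents would be lost
         * unless copied out and back in first */
    }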

Getting the lowest free virtual memory address in Windows

The title pretty much says it all: is there a way to get the lowest free virtual memory address under Windows? I should add that I am interested in this information at the beginning of the program (before any dynamic memory allocation has been done).
Why I need it: I am trying to build a malloc implementation under Windows. If it is not possible, I would have to rely on whatever VirtualAlloc() returns when given NULL as the first parameter. While you would expect it to do something sensible, like allocating memory at the bottom of what is available, there are no guarantees.
You can implement this yourself by using VirtualQuery, looking for pages that are marked as free. It would be relatively slow, though. (You will also need to consider the allocation granularity, which is different from the page size.)
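A sketch of that approach (untested; the function name is my own):

    #include <windows.h>

    /* walk the address space, returning the lowest free region that
     * can hold 'size' bytes at the required allocation granularity */
    static void *lowest_free_address(SIZE_T size)
    {
        SYSTEM_INFO si;
        MEMORY_BASIC_INFORMATION mbi;
        char *addr;

        GetSystemInfo(&si);
        addr = (char *)si.lpMinimumApplicationAddress;

        while (addr < (char *)si.lpMaximumApplicationAddress) {
            if (VirtualQuery(addr, &mbi, sizeof(mbi)) == 0)
                break;
            if (mbi.State == MEM_FREE) {
                /* reservations must start on an allocation-granularity
                 * boundary (usually 64 KiB), not just a page boundary */
                ULONG_PTR gran = si.dwAllocationGranularity;
                ULONG_PTR base = ((ULONG_PTR)mbi.BaseAddress + gran - 1)
                                 & ~(gran - 1);
                if (base + size <=
                    (ULONG_PTR)mbi.BaseAddress + mbi.RegionSize)
                    return (void *)base;
            }
            addr = (char *)mbi.BaseAddress + mbi.RegionSize;
        }
        return NULL;
    }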
I will say that unless you need contiguous blocks of memory, trying to keep everything close together is mostly meaningless: even if two pages of virtual memory are next to each other in the address space, there is no reason to assume they are close to each other in physical memory. In fact, even if they are close at some point in time, if those pages get moved to the backing store and later faulted back into memory, they will not necessarily be faulted back to the same physical pages.
The OS uses more complicated metrics than just what is the "lowest" memory address available. Specifically, VirtualAlloc allocates pages of memory, so depending on how much you're asking for, at least one page of unused address space has to be available at the starting address. So even if you think there's a "lower" address that it should have used, that address might not have been compatible with the operation that you asked for.
