I have an SoC which has both DSP and ARM cores on it, and I would like to create a section of shared memory that both my userspace software and my DSP software are able to access. What would be the best way to allocate a buffer like this in Linux?

Here is a little background. Right now what I have is a kernel module in which I use kmalloc() to get a kernel buffer; I then use the __pa() macro from asm/page.h to get the physical address of that buffer. I save this address as a sysfs entry so that my userspace code can read the physical address of the buffer. I can then write this address to the DSP so it knows where the shared memory location is, and I can also mmap /dev/mem or my own kernel module so that I can access this buffer from userspace (I could also use the read/write fileops).
For some reason I feel like this is overkill, but I cannot find a better way to do what I am trying to do.
Would it be possible to just mmap a section of memory through /dev/mem and read and write to this section? My feeling is that this would not 'lock' this section of memory away from the kernel, so the kernel could still read/write to this memory without me knowing. Is this the case? After reading the memory-management chapter of LDD3, I see that mmap creates a new VMA for the mapping. Would this lock this area of memory so that other processes could not be allocated this section of memory?
Any and all help is appreciated
Depending on the kind of DMA you're using, you need to allocate the buffer with dma_alloc_coherent(), or use standard allocations and the dma_map_* functions.
(You must not use __pa(); physical addresses are not necessarily the same as DMA bus addresses.)
To map the buffers to user space, use dma_mmap_coherent() for coherent buffers, or map the memory pages manually for streaming buffers.
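For illustration, a minimal sketch of the coherent case, assuming a platform device and a char-device mmap file operation (all names here, such as shm_probe and shm_mmap, are made up):

#include <linux/dma-mapping.h>
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/platform_device.h>

#define SHARED_BUF_SIZE (1 * 1024 * 1024)

static struct device *shm_dev;   /* saved at probe time */
static void *shm_cpu;            /* kernel virtual address of the buffer */
static dma_addr_t shm_dma;       /* bus address to hand to the DSP */

static int shm_probe(struct platform_device *pdev)
{
        shm_dev = &pdev->dev;
        /* Coherent allocation: CPU and device see a consistent buffer. */
        shm_cpu = dma_alloc_coherent(shm_dev, SHARED_BUF_SIZE,
                                     &shm_dma, GFP_KERNEL);
        return shm_cpu ? 0 : -ENOMEM;
}

static int shm_mmap(struct file *filp, struct vm_area_struct *vma)
{
        /* Map the same coherent buffer into the calling process. */
        return dma_mmap_coherent(shm_dev, vma, shm_cpu, shm_dma,
                                 vma->vm_end - vma->vm_start);
}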
For a similar requirement of mine, I reserved about 16 MB of memory towards the end of RAM and used it in both kernel and user space. Suppose you have 128 MB of RAM: you can pass mem=112M on the kernel command line from your boot loader (I am assuming you are using U-Boot). This keeps the kernel away from the last 16 MB of RAM. Now in kernel and user space you can map this area and use it as shared memory.
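For example, the user-space side might look like this (a sketch, assuming RAM starts at physical address 0, so the reserved 16 MB begins at 0x07000000):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define RESERVED_PHYS 0x07000000UL          /* 112 MB: start of the reserved 16 MB */
#define RESERVED_SIZE (16UL * 1024 * 1024)

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open"); return 1; }

    unsigned char *shm = mmap(NULL, RESERVED_SIZE, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, RESERVED_PHYS);
    if (shm == MAP_FAILED) { perror("mmap"); return 1; }

    shm[0] = 0xAB;    /* now visible to the DSP at physical 0x07000000 */
    munmap(shm, RESERVED_SIZE);
    close(fd);
    return 0;
}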
I would first like to give a brief description of the scenario that I am working on.
What I am trying to accomplish is to load image data from my user-space application and transfer it over PCIe to a custom acceleration engine located inside an FPGA board.
The specifications of my host machine are:
An Intel Xeon processor with 16 GB of RAM.
64 Bit Debian Linux with kernel version 4.18.
The FPGA is a Kintex-7 KC705 development board.
The FPGA uses a PCIe controller (bridge) for the communication between the PCIe infrastructure and the AXI interface of the FPGA.
In addition, the FPGA is equipped with a DMA engine which is supposed to read data through the PCIe controller from kernel memory and forward it to the accelerator.
Since in future implementations I would like to make multiple kernel allocations of up to 256 MB, I have configured my kernel to support CMA and the DMA Contiguous Allocator.
According to dmesg, I can verify that my system reserves the CMA area at startup.
Regarding the acceleration procedure:
The driver initially allocates 4 MB of kernel memory by using dma_alloc_coherent() with the GFP_KERNEL flag. This allocation is inside the range of the CMA.
Then, from my user-space application, I call mmap with the PROT_READ/PROT_WRITE and MAP_SHARED/MAP_LOCKED flags to map the previously allocated CMA memory and load the image data into it.
Once the image data is loaded, I forward the dma_addr_t physical address of the CMA-allocated memory to the DMA engine and start the transfer of the data to the accelerator. When the acceleration is completed, the DMA is supposed to write the processed data back to the same CMA-allocated kernel memory.
On completion, the user-space application reads the processed data from the CMA memory and saves it to a .bmp file. When I check the "processed" image, it is identical to the original one, so I suppose the processed data were never written to the CMA memory.
Is there some kind of memory protection that does not allow writing to the CMA memory when using the GFP_KERNEL flag?
An interesting fact is that when I allocate kernel memory with dma_alloc_coherent() but with either GFP_ATOMIC or GFP_DMA, the processed data are written correctly to the kernel memory, but unfortunately the allocated memory then does not fall within the CMA area.
What is wrong in my implementation?
Please let me know if you need more information!
In order to use mmap() I have adopted the debugfs file operations method.
Initially, I open a debugfs file as follows:
shared_image_data_file = open("/sys/kernel/debug/shared_image_data_mmap_value", O_RDWR);
shared_image_data_mmap_value is my debugfs file, which is created in my kernel driver, and shared_image_data_file is just an integer file descriptor.
Then, I call mmap() from userspace as follows:
kernel_address = (unsigned int *)mmap(0, (4 * MBYTE), PROT_READ | PROT_WRITE, MAP_SHARED | MAP_LOCKED, shared_image_data_file, 0);
When I call the mmap() function in user space the mmap file operation of my debugfs file executes the following function in the kernel driver:
dma_mmap_coherent(&dev->dev, vma, shared_image_data_virtual_address, shared_image_data_physical_address, length);
shared_image_data_virtual_address is a pointer of type uint64_t, while shared_image_data_physical_address is of type dma_addr_t; they were created earlier when I used the following code to allocate the memory in kernel space:
shared_image_data_virtual_address = dma_alloc_coherent(&dev->dev, 4 * MBYTE, &shared_image_data_physical_address, GFP_KERNEL);
The address that I pass to the DMA of the FPGA is the shared_image_data_physical_address.
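Putting the kernel-side pieces together, the flow is roughly this (a sketch with error handling omitted; dev, MBYTE, and the debugfs wiring are defined elsewhere in my driver):

/* At probe time: allocate 4 MB from the CMA region. */
shared_image_data_virtual_address =
        dma_alloc_coherent(&dev->dev, 4 * MBYTE,
                           &shared_image_data_physical_address, GFP_KERNEL);

/* mmap file operation of the debugfs file. */
static int shared_image_data_mmap(struct file *filp, struct vm_area_struct *vma)
{
        return dma_mmap_coherent(&dev->dev, vma,
                                 shared_image_data_virtual_address,
                                 shared_image_data_physical_address,
                                 vma->vm_end - vma->vm_start);
}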
I hope the above is helpful.
Thank you!
I've got a Xilinx Zynq 7000-based board with a peripheral in the FPGA fabric that has DMA capability (on an AXI bus). We've developed a circuit and are running Linux on the ARM cores. We're having performance problems accessing a DMA buffer from user space after it's been filled by hardware.
Summary:
We have pre-reserved at boot time a section of DRAM for use as a large DMA buffer. We're apparently using the wrong APIs to map this buffer, because it appears to be uncached, and the access speed is terrible.
Even using it as a bounce buffer is untenably slow. IIUC, ARM caches are not DMA-coherent, so I would really appreciate some insight on how to do the following:
Map a region of DRAM into the kernel virtual address space but ensure that it is cacheable.
Ensure that mapping it into userspace doesn't also have an undesirable effect, even if that requires we provide an mmap call by our own driver.
Explicitly invalidate a region of physical memory from the cache hierarchy before doing a DMA, to ensure coherency.
More info:
I've been trying to research this thoroughly before asking. Unfortunately, this being an ARM SoC/FPGA, there's very little information available on this, so I have to ask the experts directly.
Since this is an SoC, a lot of stuff is hard-coded for u-boot. For instance, the kernel and a ramdisk are loaded to specific places in DRAM before handing control over to the kernel. We've taken advantage of this to reserve a 64MB section of DRAM for a DMA buffer (it does need to be that big, which is why we pre-reserve it). There isn't any worry about conflicting memory types or the kernel stomping on this memory, because the boot parameters tell the kernel what region of DRAM it has control over.
Initially, we tried to map this physical address range into kernel space using ioremap, but that appears to mark the region uncacheable, and the access speed is horrible, even if we try to use memcpy to make it a bounce buffer. We also use /dev/mem to map this region into userspace, and I've timed memcpy at around 70 MB/s.
Based on a fair amount of searching on this topic, it appears that although half the people out there want to use ioremap like this (which is probably where we got the idea from), ioremap is not supposed to be used for this purpose, and there are DMA-related APIs that should be used instead. Unfortunately, it appears that DMA buffer allocation is totally dynamic, and I haven't figured out how to tell it, "here's a physical address already allocated; use that."
One document I looked at is this one, but it's way too x86- and PC-centric:
https://www.kernel.org/doc/Documentation/DMA-API-HOWTO.txt
And this question also comes up at the top of my searches, but there's no real answer:
get the physical address of a buffer under Linux
Looking at the standard calls, dma_set_mask_and_coherent() and family won't take a predefined address, and they want a device structure for PCI. I don't have such a structure, because this is an ARM SoC without PCI. I could manually populate one, but that smells to me like abusing the API, not using it as intended.
BTW: This is a ring buffer, where we DMA data blocks into different offsets, but we align to cache line boundaries, so there is no risk of false sharing.
Thank you a million for any help you can provide!
UPDATE: It appears that there's no such thing as a cacheable DMA buffer on ARM if you do it the normal way. Maybe if I don't make the ioremap call, the region won't be marked as uncacheable, but then I have to figure out how to do cache management on ARM, which I haven't been able to work out. One of the problems is that memcpy in userspace appears to really suck. Is there a memcpy implementation that's optimized for uncached memory I could use? Maybe I could write one. I have to figure out if this processor has NEON.
Have you tried implementing your own char device with an mmap() method remapping your buffer as cacheable (by means of remap_pfn_range())?
I believe you need a driver that implements mmap() if you want the mapping to be cached.
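A sketch of such a driver mmap() method, assuming the buffer was reserved at a known physical address (BUF_PHYS here is a made-up placeholder for wherever your 64 MB lives):

#include <linux/fs.h>
#include <linux/mm.h>

#define BUF_PHYS 0x18000000UL              /* placeholder: start of the reserved region */
#define BUF_SIZE (64UL * 1024 * 1024)

static int mydrv_mmap(struct file *filp, struct vm_area_struct *vma)
{
        unsigned long size = vma->vm_end - vma->vm_start;

        if (size > BUF_SIZE)
                return -EINVAL;

        /* Note: no pgprot_noncached() here, so the mapping keeps the
         * normal, cacheable memory attributes. */
        return remap_pfn_range(vma, vma->vm_start,
                               BUF_PHYS >> PAGE_SHIFT,
                               size, vma->vm_page_prot);
}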
We use two device drivers for this: portalmem and zynqportal. In the Connectal Project, we call the connection between user space software and FPGA logic a "portal". These drivers require dma-buf, which has been stable for us since Linux kernel version 3.8.x.
The portalmem driver provides an ioctl to allocate a reference-counted chunk of memory and returns a file descriptor associated with that memory. This driver implements dma-buf sharing. It also implements mmap() so that user-space applications can access the memory.
At allocation time, the application may choose cached or uncached mapping of the memory. On x86, the mapping is always cached. Our implementation of mmap() currently starts at line 173 of the portalmem driver. If the mapping is uncached, it modifies vma->vm_page_prot using pgprot_writecombine(), enabling buffering of writes but disabling caching.
The portalmem driver also provides an ioctl to invalidate and optionally write back data cache lines.
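For comparison, the generic in-tree way to do that kind of cache maintenance is the streaming DMA API; a sketch, where start_hw_dma() and wait_for_hw_dma_done() are hypothetical device-specific calls:

#include <linux/dma-mapping.h>

static int dma_receive(struct device *dev, void *buf, size_t len)
{
        dma_addr_t bus;

        /* Hand the buffer to the device; on ARM this performs the
         * required cache clean/invalidate for DMA_FROM_DEVICE. */
        bus = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
        if (dma_mapping_error(dev, bus))
                return -EIO;

        start_hw_dma(bus, len);          /* hypothetical device-specific call */
        wait_for_hw_dma_done();          /* hypothetical */

        /* Take the buffer back; after this the CPU sees the DMA'd data. */
        dma_unmap_single(dev, bus, len, DMA_FROM_DEVICE);
        return 0;
}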
The portalmem driver has no knowledge of the FPGA. For that, we use the zynqportal driver, which provides an ioctl for transferring a translation table to the FPGA, so that we can use logically contiguous addresses on the FPGA and translate them to the actual DMA addresses. The allocation scheme used by portalmem is designed to produce compact translation tables.
We use the same portalmem driver with pcieportal for PCI Express attached FPGAs, with no change to the user software.
The Zynq has NEON instructions, and an assembly-code implementation of memcpy using NEON instructions, with buffers aligned on cache-line boundaries (32 bytes), will achieve rates of 300 MB/s or higher.
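Not our assembly version, but the same idea can be sketched with GCC NEON intrinsics (assumes 32-byte-aligned buffers and a length that is a multiple of 32):

#include <arm_neon.h>
#include <stddef.h>
#include <stdint.h>

/* Copy len bytes (len % 32 == 0) between 32-byte-aligned buffers. */
static void neon_memcpy32(uint8_t *dst, const uint8_t *src, size_t len)
{
        size_t i;

        for (i = 0; i < len; i += 32) {
                uint8x16_t a = vld1q_u8(src + i);
                uint8x16_t b = vld1q_u8(src + i + 16);
                vst1q_u8(dst + i, a);
                vst1q_u8(dst + i + 16, b);
        }
}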
I struggled with this for some time with udmabuf and discovered the answer was as simple as adding dma-coherent; to its entry in the device tree. I saw a dramatic speedup in access time from this simple step, though I still need to add code to invalidate/flush whenever I transfer ownership from/to the device.
I was reading on a page that:
Because of hardware limitations, the kernel cannot treat all pages as identical. Some pages, because of their physical address in memory, cannot be used for certain tasks. Because of this limitation, the kernel divides pages into different zones.
I was wondering about those hardware limitations. Can somebody please explain those hardware limitations to me and give an example? Also, is there any software guide from Intel explaining this?
Also, I read that virtual memory is divided into two parts: 1 GB for kernel space and 3 GB for user space. Why do we give 1 GB of the virtual address space of every process to the kernel? How is it mapped to actual physical pages? Can somebody please point me to a clear text explaining this?
Thanks in advance.
The hardware limitations mostly concern old devices. For example, you have ZONE_DMA, which spans 0 - 16 MB. This is needed, e.g., for older ISA devices, which are not capable of addressing above the 16 MB limit. Then you have ZONE_NORMAL, where most kernel operations take place and which is permanently mapped into the kernel's address space.
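For example, a driver for such an ISA device has to ask for that zone explicitly; a sketch:

#include <linux/slab.h>

/* Force an allocation to come from ZONE_DMA (below 16 MB), so a
 * legacy ISA DMA controller can address the buffer. */
static void *alloc_isa_dma_buffer(size_t size)
{
        return kmalloc(size, GFP_KERNEL | GFP_DMA);
}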
The 1 GB and 3 GB split is simple. You are dealing with virtual addresses here, so for your application the address space always starts at 0x00000000, and the top 1 GB of it is reserved for kernel stuff. Why this is done is pretty simple: you have kernel mode and user mode, and system calls are executed in kernel mode. If the kernel were not mapped into your virtual address space, every system call would require a full address-space switch (save the current context to memory, load another context from memory: time-consuming). But since kernel-mode operations can take place in the same virtual address space, you don't need to switch address spaces to, for example, allocate new memory or do any other system call.
Regarding your second question, about the 1 GB kernel mapping in the address space of every process:
The kernel is mapped there, as noted above, to save time by avoiding an address-space switch on every system call. The 1 GB is reserved for kernel functionality, so that if the kernel needs to map new memory for its own use, it can do so. Any book on Unix internals can give you the details.
Of course, in general, the physical memory behind user-mode allocations (e.g. pointers returned by malloc()) may very well move to a different physical address at the discretion of the kernel (for example, if the memory is swapped to disk and later recalled into some other region of physical memory). The user-mode code will be unaffected, however, because on those occasions the kernel updates the page tables to point to the new location, with no change to the virtual address being used by the user.
If this (perhaps discontiguous) physical memory is to be the target of some DMA, then there are the user-mode mlock() and munlock() calls to prevent this memory from being paged out, and you can call get_user_pages() etc. to get the physical addresses to feed to the DMA hardware, and so on. That's all well and good, but that's not my case.
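For completeness, the kernel-side counterpart of that pinning looks roughly like this on recent kernels (the exact get_user_pages*() signature has changed across kernel versions, so treat this as a sketch):

#include <linux/mm.h>

/* Pin nr user pages starting at uaddr so they can neither be paged out
 * nor moved while the device DMAs into them. */
static int pin_user_buffer(unsigned long uaddr, int nr, struct page **pages)
{
        int got = get_user_pages_fast(uaddr, nr, FOLL_WRITE, pages);

        /* dma_map_page() on each pinned page would then yield the bus
         * addresses to program into the DMA engine. */
        return got;
}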
Rather, I have done some kernel-mode allocations (using __get_free_pages()) to obtain physically contiguous memory, as our PCIe card-based DMA engine does not support scatter-gather. All of this has been working fine for some time, until I encountered a motherboard on which I see hard crashes.
I have thus far not worried that the kernel might at any time simply move the physical memory behind these ranges (which would result in my DMA card writing data to the wrong locations).
But the new crashes on this motherboard have me questioning that assumption.
Can kernel memory allocated with __get_free_pages() also be moved by the kernel at any time to a different physical address range?
And if so, is there any kernel facility to prevent these moves?
I'm learning the Linux kernel internals, and while reading "Understanding the Linux Kernel", quite a few memory-related questions struck me. One of them is how the Linux kernel handles the memory mapping if, say, only 512 MB of physical memory is installed on my system.
As I read, the kernel maps the 0 (or 16) MB - 896 MB range of physical RAM at linear address 0xC0000000 and can address it directly. So, in the case described above, where I have only 512 MB:
How can the kernel map 896 MB from only 512 MB? In the scheme described, the kernel sets things up so that every process's page tables map virtual addresses from 0xC0000000 to 0xFFFFFFFF (1 GB) directly to physical addresses from 0x00000000 to 0x3FFFFFFF (1 GB). But when I have only 512 MB of physical RAM, how can I map virtual addresses 0xC0000000-0xFFFFFFFF to physical 0x00000000-0x3FFFFFFF? The point is that I have a physical range of only 0x00000000-0x20000000.
What about user mode processes in this situation?
Every article explains only the situation where you've installed 4 GB of memory, the kernel maps 1 GB into kernel space, and user processes use the remaining amount of RAM.
I would appreciate any help in improving my understanding.
Thanks!
Not all virtual (linear) addresses must be mapped to anything. If code accesses an unmapped page, a page fault is raised.
A physical page can be mapped to several virtual addresses simultaneously.
In the 4 GB of virtual memory there are two sections: 0x00000000 - 0xBFFFFFFF is process virtual memory, and 0xC0000000 - 0xFFFFFFFF is kernel virtual memory.
How can the kernel map 896 MB from only 512 MB?
It maps up to 896 MB. So, if you have only 512 MB, only 512 MB will be mapped.
If your physical memory is in 0x00000000 to 0x20000000, it will be mapped for direct kernel access to virtual addresses 0xC0000000 to 0xE0000000 (linear mapping).
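That linear relationship is why conversion between the two is simple arithmetic; conceptually, this is all the x86-32 __pa()/__va() macros do for the direct-mapped region (assuming the default 3 GB/1 GB split):

#define PAGE_OFFSET 0xC0000000UL

/* Valid only for the direct-mapped ("lowmem") region, not for highmem. */
#define __pa(vaddr) ((unsigned long)(vaddr) - PAGE_OFFSET)
#define __va(paddr) ((void *)((unsigned long)(paddr) + PAGE_OFFSET))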
What about user mode processes in this situation?
Physical memory for user processes will be mapped (not sequentially, but with an essentially random page-to-page mapping) to virtual addresses 0x0 - 0xBFFFFFFF. This will be a second mapping for pages in the 0-896 MB range. The pages will be taken from the free page lists.
Where are user mode processes in phys RAM?
Anywhere.
Every article explains only the situation where you've installed 4 GB of memory ...
No. Every article explains how the 4 GB of virtual address space is mapped. The size of virtual memory is always 4 GB (for a 32-bit machine without memory extensions like PAE/PSE etc. on x86).
As stated in section 8.1.3, Memory Zones, of the book Linux Kernel Development by Robert Love (I use the third edition), there are several zones of physical memory:
ZONE_DMA - Contains page frames of memory below 16 MB
ZONE_NORMAL - Contains page frames of memory at and above 16 MB and below 896 MB
ZONE_HIGHMEM - Contains page frames of memory at and above 896 MB
So, if you have 512 MB, your ZONE_HIGHMEM will be empty, and ZONE_NORMAL will have 496 MB of physical memory mapped.
Also, take a look at section 2.5.5.2, "Final kernel Page Table when RAM size is less than 896 MB", of the book. It covers the case when you have less than 896 MB of memory.
Also, for ARM there is some description of virtual memory layout: http://www.mjmwired.net/kernel/Documentation/arm/memory.txt
Line 63 there, "PAGE_OFFSET high_memory-1", is the direct-mapped part of memory.
The hardware provides a Memory Management Unit. It is a piece of circuitry which is able to intercept and alter any memory access. Whenever the processor accesses the RAM, e.g. to read the next instruction to execute, or as a data access triggered by an instruction, it does so at some address which is, roughly speaking, a 32-bit value. A 32-bit word can have a bit more than 4 billion distinct values, so there is an address space of 4 GB: that's the number of bytes which could have a unique address.
So the processor sends out the request to its memory subsystem, as "fetch the byte at address x and give it back to me". The request goes through the MMU, which decides what to do with the request. The MMU virtually splits the 4 GB space into pages; page size depends on the hardware you use, but typical sizes are 4 and 8 kB. The MMU uses tables which tell it what to do with accesses for each page: either the access is granted with a rewritten address (the page entry says: "yes, the page containing address x exists, it is in physical RAM at address y") or rejected, at which point the kernel is invoked to handle things further. The kernel may decide to kill the offending process, or to do some work and alter the MMU tables so that the access may be tried again, this time successfully.
This is the basis for virtual memory: from the point of view of the process, it has some RAM, but the kernel has moved it to the hard disk, into "swap space". The corresponding entry is marked as "absent" in the MMU tables. When the process accesses its data, the MMU invokes the kernel, which fetches the data from swap, puts it back at some free space in physical RAM, and alters the MMU tables to point at that space. The kernel then jumps back to the process code, right at the instruction which triggered the whole thing. The process code sees nothing of the whole business, except that the memory access took quite some time.
The MMU also handles access rights, which prevents a process from reading or writing data which belongs to other processes, or to the kernel. Each process has its own set of MMU tables, and the kernel manages those tables. Thus, each process has its own address space, as if it were alone on a machine with 4 GB of RAM, except that the process had better not access memory that it did not allocate rightfully from the kernel, because the corresponding pages are marked as absent or forbidden.
When the kernel is invoked through a system call from some process, the kernel code must run within the address space of the process; so the kernel code must be somewhere in the address space of each process (but protected: the MMU tables prevent access to the kernel memory from unprivileged user code). Since code can contain hardcoded addresses, the kernel had better be at the same address for all processes; conventionally, in Linux, that address is 0xC0000000. The MMU tables for each process map that part of the address space to whatever physical RAM blocks the kernel was actually loaded upon boot. Note that the kernel memory is never swapped out (if the code which can read back data from swap space was itself swapped out, things would turn sour quite fast).
On a PC, things can be a bit more complicated, because there are 32-bit and 64-bit modes, and segment registers, and PAE (which acts as a kind of second-level MMU with huge pages). The basic concept remains the same: each process gets its own view of a virtual 4 GB address space, and the kernel uses the MMU to map each virtual page to an appropriate physical position in RAM, or nowhere at all.
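To make the mechanism concrete, here is a toy model of the per-access lookup, using a single-level table of 4 kB pages (real MMUs use multi-level tables, but the idea is the same):

#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT 12                    /* 4 kB pages */
#define NPAGES     (1u << 20)            /* 4 GB / 4 kB page-table entries */

struct pte { uint32_t frame; bool present; };

static struct pte page_table[NPAGES];

/* Returns true and fills *phys on a hit; false means a page fault, at
 * which point the kernel would swap the page in or kill the process. */
static bool mmu_translate(uint32_t vaddr, uint32_t *phys)
{
        struct pte *e = &page_table[vaddr >> PAGE_SHIFT];

        if (!e->present)
                return false;
        *phys = (e->frame << PAGE_SHIFT) | (vaddr & ((1u << PAGE_SHIFT) - 1));
        return true;
}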
osgx has an excellent answer, but I see a comment where someone still doesn't understand.
Every article explains only the situation where you've installed 4 GB of memory, the kernel maps 1 GB into kernel space, and user processes use the remaining amount of RAM.
Here is much of the confusion: there is virtual memory and there is physical memory. Every 32-bit CPU has 4 GB of virtual address space. The Linux kernel's traditional split was 3 GB/1 GB for user memory and kernel memory, but newer options allow different partitioning.
Why distinguish between the kernel and user space? - my own question
When a task is switched in, the MMU mappings must be updated. The kernel's portion of the MMU space should remain the same for all processes, because the kernel must be able to handle interrupts and faults at any time.
How does virtual to physical mapping work? - my own question.
There are many permutations of virtual memory.
a single private mapping to a physical RAM page.
a duplicate virtual mapping to a single physical page.
a mapping that throws a SIGBUS or other error.
a mapping backed by disk/swap.
From the above list, it is easy to see why you may have more virtual address space than physical memory. In fact, the fault handler will typically inspect the process memory information to see whether a page is mapped (I mean allocated for the process) but not in memory. In this case, the fault handler will call the I/O subsystem to read in the page. When the page has been read and the MMU tables updated to point the virtual address to the new physical address, the process that caused the fault resumes.
If you understand the above, it becomes clear why you would like to have a larger virtual mapping than physical memory. It is how memory swapping is supported.
There are other uses. For instance, two processes may use the same code library. Due to linking, it may sit at different virtual addresses in the two process address spaces, but you can map those different virtual addresses to the same physical pages to save physical memory. This is also quite common for new allocations: they all point to a physical 'zero page'. When you touch/write the memory, the zero page is copied and a new physical page is allocated (COW, or copy-on-write).
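You can actually watch the zero-page behaviour from user space: a large untouched anonymous mapping consumes almost no physical memory until it is written. A sketch (Linux-specific, assumes 4 kB pages):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

static long rss_kb(void)                 /* resident set size, from /proc */
{
        long size = 0, resident = 0;
        FILE *f = fopen("/proc/self/statm", "r");
        if (f && fscanf(f, "%ld %ld", &size, &resident) != 2)
                resident = 0;
        if (f)
                fclose(f);
        return resident * 4;             /* assuming 4 kB pages */
}

int main(void)
{
        size_t len = 256UL * 1024 * 1024;
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
                return 1;

        printf("RSS before touching: %ld kB\n", rss_kb());
        memset(p, 1, len);               /* each write COWs the zero page */
        printf("RSS after touching:  %ld kB\n", rss_kb());
        return 0;
}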
It is also sometimes useful to have virtual pages aliased, with one mapping cached and another non-cached. The two mappings can then be compared to see which data is cached and which is not.
The main point is that virtual and physical are not the same! Easily stated, but often confusing when looking at the Linux VMM code.
Hi. I don't actually work on an x86 hardware platform, so there may be some technical errors in my post.
To my knowledge, the range between 0 (or 16) MB and 896 MB is only treated specially when you have more RAM than that. Say you have 1 GB of physical RAM on your board: the directly mapped part is called "low memory", and any physical RAM beyond 896 MB is called high memory (highmem).
Speaking of your question, with 512 MB of physical RAM on the board, the 896 MB boundary is never reached and there is no highmem.
The total RAM the kernel can see, and also map, is 512 MB.
Because there is a one-to-one mapping between physical memory and kernel virtual addresses, there is 512 MB of directly mapped virtual address space for the kernel. I'm really not sure whether the prior sentence is right, but it's what is in my mind.
What I mean is that if there is 512 MB, then the amount of physical RAM the kernel can manage is also 512 MB, and the kernel cannot create a direct mapping bigger than 512 MB.
Regarding user space, there is one different point: pages of a user application can be swapped out to the hard disk, but pages of the kernel cannot.
So, for user space, with the help of page tables and other related machinery, there still appears to be a full 4 GB address space.
Of course, this is virtual address space, not physical RAM space.
This is what I understand.
Thanks.
If the physical memory is less than 896 MB, then the Linux kernel maps up to that amount of physical memory linearly.
For details, see: http://learnlinuxconcepts.blogspot.in/2014/02/linux-addressing.html