I am working on an SoC whose RAM starts at 0x200M.
For some reason I need to limit DMA allocations to addresses below 0x220M, so I called dma_set_mask_and_coherent(dev, 0x21FFFFFFF).
I noticed that dev->dma_mask was not set, while dev->coherent_dma_mask was set.
Still, calling dma_alloc_coherent returns buffers above the requested limit.
How can I limit the buffer address?
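For reference, a minimal sketch of the call sequence described above (a reconstruction, not the original code; the mask value is taken from the question and the logging is purely illustrative):

#include <linux/dma-mapping.h>

/* dev is the struct device of the DMA-capable peripheral. */
u64 limit_mask = 0x21FFFFFFFULL;	/* covers addresses up to 0x220M - 1 */

if (dma_set_mask_and_coherent(dev, limit_mask))
	dev_warn(dev, "DMA mask 0x%llx rejected\n", limit_mask);

dev_info(dev, "dma_mask=%llx coherent_dma_mask=%llx\n",
	 dev->dma_mask ? *dev->dma_mask : 0ULL,
	 dev->coherent_dma_mask);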
I'm using an Orange Pi3 LTS with an Allwinner H6 ARM CPU, and I'm writing a UART driver that uses DMA for RX and TX. I allocated memory with kmalloc(), so I have both the physical address and the corresponding logical (kernel virtual) address of my buffer.

The problem is that the physical memory does not seem to be updated after I write through the logical address. My driver has an init() callback (run when the driver is attached to the kernel) and an exit() callback (run when it is detached). In init() I allocate the buffer with kmalloc(), fill it with data through the logical address (since the kernel cannot access physical memory directly), and then trigger one of the DMA channels by writing to hardware registers. The DMA engine should fetch its descriptor (a pointer) from physical RAM and transmit the data over the UART. But the physical memory apparently does not contain the new data in this init() call: the hardware registers end up with wrong data, so only the logical view of the memory was updated.

However, if in init() I only fill the descriptor data in RAM and trigger the DMA from another kernel callback (for example exit()), it works: physical RAM contains the correct data and it is sent over the UART as expected.

I don't understand this. Why, within a single driver callback such as init(), is the physical memory not updated while the write is visible through the logical address? Why doesn't the kernel update physical memory (through the MMU) directly after a write to the logical address, but only after the init() callback has returned?
As I wrote in the problem description.
I studied the Linux DMA API documentation and finally found a solution.
As was pointed out in a comment, the problem was cache coherency.
Instead of allocating the DMA buffer with kmalloc(), use dma_alloc_coherent(). It returns a kernel logical (virtual) address, and through its output argument it also returns the physical (DMA) address of a non-cached mapping.
Here is my example/test code, which works for me: physical memory is now updated immediately, together with the logical view in kernel space. It allocates 1024 bytes of RAM.
#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/printk.h>

static struct device *dev;              /* device obtained elsewhere, e.g. in the driver's probe() */
static dma_addr_t physical_address;     /* bus/DMA address handed to the hardware */
static unsigned int *logical_address;   /* kernel virtual address used by the driver */
static u64 dma_mask = DMA_BIT_MASK(32); /* must stay valid as long as the device uses it */

static void ptr_init(void)
{
	/* dma_set_mask() stores the mask through this pointer, so it must not point to a stack variable. */
	dev->dma_mask = &dma_mask;
	if (dma_set_mask_and_coherent(dev, dma_mask) != 0)
		printk("Mask not OK\n");
	else
		printk("Mask OK\n");

	/* Coherent buffer: CPU writes are immediately visible to the DMA engine, no cache maintenance needed. */
	logical_address = dma_alloc_coherent(dev, 1024, &physical_address, GFP_KERNEL);
	if (logical_address != NULL)
		printk("allocation OK\n");
	else
		printk("allocation NOT OK\n");

	printk("logical address: %p\n", logical_address);
	printk("physical address: %pad\n", &physical_address);
}
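For completeness, the matching cleanup (for example in the driver's exit path) would release the buffer with dma_free_coherent(); a minimal sketch using the same globals:

static void ptr_exit(void)
{
	if (logical_address)
		dma_free_coherent(dev, 1024, logical_address, physical_address);
}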
I'm using an ARM Cortex-A53 platform that has an ACP (Accelerator Coherency Port), and I'm trying to use a DMA engine to transfer data through the ACP.
According to the ARM TRM, if I understand it correctly, each DMA transfer is limited to 64 bytes when going through the ACP.
If so, doesn't this limitation make the DMA almost unusable? It seems pointless to set up a DMA descriptor only to transfer 64 bytes at a time.
Or should the DMA automatically split its transfer length into many ACP-sized (64-byte) transactions, without any software intervention?
Could an expert explain how the ACP and DMA work together?
Somewhere in the interface between the DMA and the ACP's AXI port, the transfer length should be automatically divided into transactions of an appropriate size. For the Cortex-A53 ACP, AXI transfers are limited to 64 B (perhaps intentionally one cache line).
From https://developer.arm.com/documentation/ddi0500/e/level-2-memory-system/acp/transfer-size-support :
x-byte INCR request characterized by: (some list of limitations)
Note the use of INCR instead of FIXED. INCR automatically increments the address according to the size of the transfer, while FIXED does not. This makes it simple for the peripheral to break a large transfer into a series of INCR transfers.
However, do note that on the Cortex-A53 the transfer size (x in the quote) is fixed at 16-byte or 64-byte aligned transfers. If the DMA issues a transfer of an inappropriate size (because it is misconfigured, or because the correct size is unsupported), the ACP responds with an AXI SLVERR. If the buffer is not appropriately aligned, I think that also causes a SLVERR.
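As an illustration, a driver could sanity-check a buffer against these constraints before handing it to the DMA engine. A minimal sketch, assuming transfers are split into 64-byte transactions as quoted above (the helper name and the exact check are made up for illustration):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define ACP_BEAT_SIZE 64u  /* maximum ACP transaction size per the Cortex-A53 TRM */

/* Hypothetical check: true if addr/len can be split cleanly into 64-byte ACP transactions. */
static bool acp_transfer_ok(uintptr_t addr, size_t len)
{
	return (addr % ACP_BEAT_SIZE == 0) && (len % ACP_BEAT_SIZE == 0);
}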
Lastly, the on-chip network must be routed to connect the DMA to the ACP at chip design time. In my experience this is more commonly done for network accelerators and FPGA fabric glue, and less often for low-speed peripherals like UART/SPI/I2C.
When using the VirtualAlloc API to allocate and commit a region of virtual memory whose size is a power-of-two multiple of the page size, such as:
void* address = VirtualAlloc(0, 0x10000, MEM_COMMIT, PAGE_READWRITE); // Get 64KB
The returned address always seems to be aligned to 64KB, not just to the page boundary, which in my case is 4KB.
The question is: is this alignment reliable and prescribed, or is it just a coincidence? The docs state that the address is guaranteed to be on a page boundary, but they do not address the behavior I'm seeing. I ask because I'd later like to take an arbitrary pointer (provided by a pool allocator that uses this chunk) and determine which 64KB chunk it belongs to with something similar to:
void* chunk = (void*)((uintptr_t)ptr & ~(uintptr_t)0xFFFF); // round down to the containing 64KB chunk
The documentation for VirtualAlloc describes the behavior for 2 scenarios: 1) Reserving memory and 2) Committing memory:
If the memory is being reserved, the specified address is rounded down to the nearest multiple of the allocation granularity.
If the memory is already reserved and is being committed, the address is rounded down to the next page boundary.
In other words, memory is allocated (reserved) in multiples of the allocation granularity and committed in multiples of the page size. If you reserve and commit memory in a single step, it will be aligned on a multiple of the allocation granularity. When committing already-reserved memory, it will be aligned on a page boundary.
To query the system's page size and allocation granularity, call GetSystemInfo. The SYSTEM_INFO structure's dwPageSize and dwAllocationGranularity fields hold the page size and allocation granularity, respectively.
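For example, a small sketch that queries both values and shows the reserve-versus-commit alignment described above (Win32, error handling omitted):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    printf("page size: %lu, allocation granularity: %lu\n",
           si.dwPageSize, si.dwAllocationGranularity);

    /* Reserving: the base address is aligned to the allocation granularity (64KB here). */
    void *chunk = VirtualAlloc(NULL, 0x10000, MEM_RESERVE, PAGE_READWRITE);

    /* Committing inside an already reserved region only needs page alignment. */
    void *page = VirtualAlloc((char *)chunk + si.dwPageSize, si.dwPageSize,
                              MEM_COMMIT, PAGE_READWRITE);

    printf("chunk: %p, committed page: %p\n", chunk, page);
    VirtualFree(chunk, 0, MEM_RELEASE);
    return 0;
}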
This is entirely normal. 64KB is the value of SYSTEM_INFO.dwAllocationGranularity. It is a simple counter-measure against address space fragmentation; 4KB pages are too small. The memory manager will still sub-divide 64KB chunks as needed if you change the page protection of individual pages within the chunk.
Use HeapAlloc() to sub-allocate. The heap manager has specific counter-measures against fragmentation.
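If you go the HeapAlloc() route, a dedicated heap for the pool could look roughly like this (sizes are illustrative):

#include <windows.h>

static void pool_demo(void)
{
    /* A private heap for the pool; the heap manager handles fragmentation internally. */
    HANDLE pool = HeapCreate(0, 0x10000, 0);   /* initial 64KB, growable */
    void  *obj  = HeapAlloc(pool, HEAP_ZERO_MEMORY, 256);
    /* ... use obj ... */
    HeapFree(pool, 0, obj);
    HeapDestroy(pool);
}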
I have a process that does a lot of lithography calculation, so I use mmap to allocate memory for a memory pool. When the process needs a large chunk of memory, it allocates one with mmap; after use, the chunk is put into the memory pool, and if a chunk of the same size is needed again, it is taken from the pool directly instead of being mapped again. (The pool is not pre-populated with all the memory the process will need at startup.) Between the mmap calls there are also ordinary allocations that do not use mmap, such as malloc() or new.
Now the question is:
If I memset() the whole chunk to zero before putting it into the memory pool, the process uses far more virtual memory, as shown below (format is "mmap(size)=virtual address"):
mmap(4198400)=0x2aaab4007000
mmap(4198400)=0x2aaab940c000
mmap(8392704)=0x2aaabd80f000
mmap(8392704)=0x2aaad6883000
mmap(67112960)=0x2aaad7084000
mmap(8392704)=0x2aaadb085000
mmap(2101248)=0x2aaadb886000
mmap(8392704)=0x2aaadba89000
mmap(67112960)=0x2aaadc28a000
mmap(2101248)=0x2aaae028b000
mmap(2101248)=0x2aaae0c8d000
mmap(2101248)=0x2aaae0e8e000
mmap(8392704)=0x2aaae108f000
mmap(8392704)=0x2aaae1890000
mmap(4198400)=0x2aaae2091000
mmap(4198400)=0x2aaae6494000
mmap(8392704)=0x2aaaea897000
mmap(8392704)=0x2aaaeb098000
mmap(2101248)=0x2aaaeb899000
mmap(8392704)=0x2aaaeba9a000
mmap(2101248)=0x2aaaeca9c000
mmap(8392704)=0x2aaaec29b000
mmap(8392704)=0x2aaaecc9d000
mmap(2101248)=0x2aaaed49e000
mmap(8392704)=0x2aaafd6a7000
mmap(2101248)=0x2aacc5f8c000
The mmap last - first = 0x2aacc5f8c000 - 0x2aaab4007000 = 8.28G
But if I don't call memset before putting the chunk in the memory pool:
mmap(4198400)=0x2aaab4007000
mmap(8392704)=0x2aaab940c000
mmap(8392704)=0x2aaad2480000
mmap(67112960)=0x2aaad2c81000
mmap(2101248)=0x2aaad6c82000
mmap(4198400)=0x2aaad6e83000
mmap(8392704)=0x2aaadb288000
mmap(8392704)=0x2aaadba89000
mmap(67112960)=0x2aaadc28a000
mmap(2101248)=0x2aaae0a8c000
mmap(2101248)=0x2aaae0c8d000
mmap(2101248)=0x2aaae0e8e000
mmap(8392704)=0x2aaae1890000
mmap(8392704)=0x2aaae108f000
mmap(4198400)=0x2aaae2091000
mmap(4198400)=0x2aaae6494000
mmap(8392704)=0x2aaaea897000
mmap(8392704)=0x2aaaeb098000
mmap(2101248)=0x2aaaeb899000
mmap(8392704)=0x2aaaeba9a000
mmap(2101248)=0x2aaaec29b000
mmap(8392704)=0x2aaaec49c000
mmap(8392704)=0x2aaaecc9d000
mmap(2101248)=0x2aaaed49e000
The mmap last - first = 0x2aaaed49e000 - 0x2aaab4007000= 916M
So the first run exceeds the available memory and the process is killed ("out of memory").
In this process, an mmap'ed chunk is often not fully used, or not used at all, even though it has been allocated. For example, before calibration the process mmaps 67112960 bytes (64 MB), then never reads or writes that region, or touches only the first 2 MB, before putting the chunk into the memory pool.
I know that mmap only returns a virtual address and that physical memory is allocated lazily, when those addresses are first read or written.
What confuses me is why the virtual address range grows so much. I'm on CentOS 5.3 with kernel 2.6.18, and I tried the process with both libhoard and glibc (ptmalloc), with the same behavior in both cases.
Has anyone run into this issue before? What could the root cause be?
Thanks.
VMAs (virtual memory areas, AKA memory mappings) do not need to be contiguous. Your first example uses ~256 MB, the second ~246 MB.
Common malloc() implementations use mmap() automatically for large allocations (usually larger than 64 KB), freeing the corresponding chunks with munmap(). So you do not need to call mmap() manually for large allocations; your malloc() library will take care of that.
When you mmap() anonymous memory, the kernel maps a copy-on-write reference to a special zero page, so no physical memory is allocated until the pages are written. Your zeroing forces the memory to really be allocated; better to just return the chunk to the allocator and request a new one when you need it.
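A small demonstration of that effect on Linux, reading the resident set size from /proc/self/statm before and after touching the mapping (the exact numbers will vary by system):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

/* Resident pages of the current process: second field of /proc/self/statm. */
static long resident_pages(void)
{
    long size = 0, resident = 0;
    FILE *f = fopen("/proc/self/statm", "r");
    if (f) {
        fscanf(f, "%ld %ld", &size, &resident);
        fclose(f);
    }
    return resident;
}

int main(void)
{
    size_t len = 64 * 1024 * 1024;
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return 1;

    printf("after mmap:   %ld resident pages\n", resident_pages());
    memset(p, 0, len);  /* writing, even zeroes, faults in real pages */
    printf("after memset: %ld resident pages\n", resident_pages());

    munmap(p, len);
    return 0;
}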
Conclusion: don't write your own memory management unless the system's has proven inadequate for your needs, and even then only use your own once you have shown it to be noticeably better under a real-life load.
I want to allocate a large DMA buffer, about 40 MB in size. When I use dma_alloc_coherent(), it fails and what I see is:
------------[ cut here ]------------
WARNING: at mm/page_alloc.c:2106 __alloc_pages_nodemask+0x1dc/0x788()
Modules linked in:
[<8004799c>] (unwind_backtrace+0x0/0xf8) from [<80078ae4>] (warn_slowpath_common+0x4c/0x64)
[<80078ae4>] (warn_slowpath_common+0x4c/0x64) from [<80078b18>] (warn_slowpath_null+0x1c/0x24)
[<80078b18>] (warn_slowpath_null+0x1c/0x24) from [<800dfbd0>] (__alloc_pages_nodemask+0x1dc/0x788)
[<800dfbd0>] (__alloc_pages_nodemask+0x1dc/0x788) from [<8004a880>] (__dma_alloc+0xa4/0x2fc)
[<8004a880>] (__dma_alloc+0xa4/0x2fc) from [<8004b0b4>] (dma_alloc_coherent+0x54/0x60)
[<8004b0b4>] (dma_alloc_coherent+0x54/0x60) from [<803ced70>] (mxc_ipu_ioctl+0x270/0x3ec)
[<803ced70>] (mxc_ipu_ioctl+0x270/0x3ec) from [<80123b78>] (do_vfs_ioctl+0x80/0x54c)
[<80123b78>] (do_vfs_ioctl+0x80/0x54c) from [<8012407c>] (sys_ioctl+0x38/0x5c)
[<8012407c>] (sys_ioctl+0x38/0x5c) from [<80041f80>] (ret_fast_syscall+0x0/0x30)
---[ end trace 4e0c10ffc7ffc0d8 ]---
I've tried different values and it looks like dma_alloc_coherent() can't allocate more than 2^25 bytes (32 MB).
How can such a large DMA buffer be allocated?
After the system has booted up, dma_alloc_coherent() is not necessarily reliable for large allocations. This is simply because non-movable pages quickly fill up physical memory, making large contiguous ranges rare. This has been a problem for a long time.
Conveniently, a recent patch set may help you out: the contiguous memory allocator (CMA), which appeared in kernel 3.5. If you're using a kernel with CMA, you should be able to pass cma=64M on your kernel command line; that much memory will then be reserved (only movable pages are placed there), and your subsequent 40 MB allocation should reliably succeed. Simples!
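With the CMA region reserved, the allocation itself stays an ordinary coherent allocation; a minimal sketch of the 40 MB request (the device pointer and function name are illustrative):

#include <linux/device.h>
#include <linux/dma-mapping.h>

#define BUF_SIZE (40 * 1024 * 1024)  /* ~40 MB, served from the cma= reservation */

static dma_addr_t buf_dma;
static void *buf_cpu;

static int alloc_big_buffer(struct device *dev)
{
	buf_cpu = dma_alloc_coherent(dev, BUF_SIZE, &buf_dma, GFP_KERNEL);
	return buf_cpu ? 0 : -ENOMEM;
}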
For more information check out this LWN article:
https://lwn.net/Articles/486301/