Let's say the address of a buffer in my kernel is 0xB0E4. Do all other systems also have a kernel buffer in same address?
Absolutely not.
A different kernel might not even have the buffer at all, much less at the same address.
If you restrict yourself to the exact same kernel binary, any dynamically created buffer could be at a different address from boot to boot.
If the buffer is static, then the offset is defined when the kernel is linked. So the same kernel binary would have the buffer at the same offset. If the kernel is not relocatable, then the address would be the same. A relocatable kernel could still change from boot to boot, though the offset from the kernel start would be the same.
A module is linked at run time when it is loaded, so a static buffer in a module will end up at a different address depending on what memory was allocated to hold it.
What you might find at the same address are memory-mapped I/O regions. On many SoC systems these sit at fixed physical addresses for a given device.
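To see the module case in practice, a minimal (hypothetical) module sketch like the one below prints the address of one of its static buffers at load time; on most systems the value changes from one insmod to the next. The names are made up, and it assumes a reasonably recent kernel where the %px format prints raw pointer values.

/*
 * Hypothetical sketch: print the address of a static buffer each time the
 * module is loaded. Module memory is allocated dynamically, so the address
 * normally differs from load to load.
 */
#include <linux/module.h>
#include <linux/init.h>

static char my_buffer[256];    /* static buffer inside the module */

static int __init addr_demo_init(void)
{
    pr_info("addr_demo: my_buffer is at %px\n", my_buffer);
    return 0;
}

static void __exit addr_demo_exit(void)
{
}

module_init(addr_demo_init);
module_exit(addr_demo_exit);
MODULE_LICENSE("GPL");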
I am porting a large application to ARM32 Linux and splitting off the hardware-specific parts into a device driver. Nearly all of the extensive driver code uses absolute addresses to access buffers and I/O-related variables and registers. I'd hate to have to change all that to pointer-relative addresses; a lot of the code is in assembler as well.
From user space it is simple to use mmap to ask for a target virtual address for physical memory (via /dev/mem) so that side poses no issue.
But how can I do something similar in kernel code? ioremap and memremap give you an arbitrary kernel virtual address; worse, loading a driver with insmod places both code and data (.bss) in vmalloc memory.
remap_pfn_range can be used to map kernel memory to user space via an mmap call (and with that, ask for a given virtual address range), but how can that be used from the kernel itself, if at all?
So for example, say I have a buffer at physical address 0x60000000: how can I tell the kernel to map that to a given kernel-accessible virtual address (perhaps also 0x60000000, but it could be anything as long as it's known at compile time)?
So far I have spent days surfing anything that mentions remapping but am not finding the "golden" answer. Does anybody know if one exists?
AFAIK there is no "easy" way to do that.
This document explains the layout of Linux kernel memory, and as you can see, modules have a specific mapping region that can't be changed as long as you load your code with the init_module syscall, and dynamic memory allocated with things like kmalloc also falls within a specific range.
Maybe you'll be able to hack something together to create a buffer at a known address, but if memory serves, Linux depends on the layout mentioned above for some fundamental things (page faults, etc.).
OK, I have the answer and it's embarrassingly simple.
In my case I am running an STM32MP157 chip under Buildroot. It so happens that 512 MB of DRAM is placed at physical address 0xC0000000, which means kernel-space virtual address = physical address: PAGE_OFFSET and PHYS_OFFSET are both 0xC0000000, so they simply cancel out.
Now, to display a nice logo on startup, a 3 MB framebuffer is allocated in CMA memory, which starts at 0xD8000000. This is done in early kernel init and is the first allocation in CMA. Later on I allocate more framebuffers via DRM, but the first one stays.
It's unused after kernel boot - except now it isn't: it's my perfect solution. I just read and write directly into 0xD8000000 to 0xD83FFFFF (the physical location and size of that framebuffer). All the variables I need at locations known at compile time are placed in that space. Directly accessible, no pointers needed. No need to modify my existing code other than telling the linker to place the variables at 0xD8000000.
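For completeness, here is a rough sketch of the linker side of that trick. The section name, variable names and the ld script fragment are illustrative assumptions, not the exact setup used above.

/* Collect the variables that need a compile-time-known location in a
 * dedicated section. Names here are made up for illustration. */
#define FIXED_VAR __attribute__((section(".fixed_vars")))

FIXED_VAR volatile unsigned int frame_count;
FIXED_VAR volatile unsigned char scratch[4096];

/* A GNU ld script fragment (passed with -T, for example) can then pin the
 * section to the reserved framebuffer region described above:
 *
 *   .fixed_vars 0xD8000000 (NOLOAD) : { *(.fixed_vars) }
 *
 * (NOLOAD) keeps the linker from emitting initial contents for the region,
 * so the variables simply overlay the reserved physical memory.
 */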
I've got a Xilinx Zynq 7000-based board with a peripheral in the FPGA fabric that has DMA capability (on an AXI bus). We've developed a circuit and are running Linux on the ARM cores. We're having performance problems accessing a DMA buffer from user space after it's been filled by hardware.
Summary:
We have pre-reserved at boot time a section of DRAM for use as a large DMA buffer. We're apparently using the wrong APIs to map this buffer, because it appears to be uncached, and the access speed is terrible.
Even using it as a bounce buffer is untenably slow. IIUC, ARM caches are not DMA-coherent, so I would really appreciate some insight on how to do the following:
Map a region of DRAM into the kernel virtual address space but ensure that it is cacheable.
Ensure that mapping it into userspace doesn't also have an undesirable effect, even if that requires we provide an mmap call by our own driver.
Explicitly invalidate a region of physical memory from the cache hierarchy before doing a DMA, to ensure coherency.
More info:
I've been trying to research this thoroughly before asking. Unfortunately, this being an ARM SoC/FPGA, there's very little information available on this, so I have to ask the experts directly.
Since this is an SoC, a lot of stuff is hard-coded for u-boot. For instance, the kernel and a ramdisk are loaded to specific places in DRAM before handing control over to the kernel. We've taken advantage of this to reserve a 64MB section of DRAM for a DMA buffer (it does need to be that big, which is why we pre-reserve it). There isn't any worry about conflicting memory types or the kernel stomping on this memory, because the boot parameters tell the kernel what region of DRAM it has control over.
Initially, we tried to map this physical address range into kernel space using ioremap, but that appears to mark the region uncacheable, and the access speed is horrible, even if we try to use memcpy to make it a bounce buffer. We use /dev/mem to map this also into userspace, and I've timed memcpy as being around 70MB/sec.
Based on a fair amount of searching on this topic, it appears that although half the people out there want to use ioremap like this (which is probably where we got the idea from), ioremap is not supposed to be used for this purpose and that there are DMA-related APIs that should be used instead. Unfortunately, it appears that DMA buffer allocation is totally dynamic, and I haven't figured out how to tell it, "here's a physical address already allocated -- use that."
One document I looked at is this one, but it's way too x86 and PC-centric:
https://www.kernel.org/doc/Documentation/DMA-API-HOWTO.txt
And this question also comes up at the top of my searches, but there's no real answer:
get the physical address of a buffer under Linux
Looking at the standard calls, dma_set_mask_and_coherent and family won't take a pre-defined address and want a device structure for PCI. I don't have such a structure, because this is an ARM SoC without PCI. I could manually populate such a structure, but that smells to me like abusing the API, not using it as intended.
BTW: This is a ring buffer, where we DMA data blocks into different offsets, but we align to cache line boundaries, so there is no risk of false sharing.
Thank you a million for any help you can provide!
UPDATE: It appears that there's no such thing as a cacheable DMA buffer on ARM if you do it the normal way. Maybe if I don't make the ioremap call, the region won't be marked as uncacheable, but then I have to do cache management on ARM myself, which I haven't figured out how to do. One of the problems is that memcpy in user space appears to really suck. Is there a memcpy implementation optimized for uncached memory that I can use? Maybe I could write one. I have to figure out whether this processor has NEON.
Have you tried implementing your own char device with an mmap() method remapping your buffer as cacheable (by means of remap_pfn_range())?
I believe you need a driver that implements mmap() if you want the mapping to be cached.
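A minimal sketch of that approach might look like the following, assuming a char device whose only job is to expose a pre-reserved buffer; BUF_PHYS and BUF_SIZE are placeholders rather than real symbols, and cache maintenance for DMA still has to be handled separately.

/*
 * Hypothetical char-device mmap() that maps a pre-reserved physical buffer
 * into user space with remap_pfn_range().
 */
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/module.h>

#define BUF_PHYS  0x38000000UL      /* example physical base (assumption) */
#define BUF_SIZE  (64UL << 20)      /* 64 MB, as in the question */

static int mybuf_mmap(struct file *filp, struct vm_area_struct *vma)
{
    unsigned long size = vma->vm_end - vma->vm_start;

    if (size > BUF_SIZE)
        return -EINVAL;

    /*
     * Leaving vma->vm_page_prot untouched gives a normal cacheable mapping.
     * For an uncached/write-combined mapping one would instead do:
     *     vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
     */
    return remap_pfn_range(vma, vma->vm_start,
                           BUF_PHYS >> PAGE_SHIFT,
                           size, vma->vm_page_prot);
}

static const struct file_operations mybuf_fops = {
    .owner = THIS_MODULE,
    .mmap  = mybuf_mmap,
};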
We use two device drivers for this: portalmem and zynqportal. In the Connectal Project, we call the connection between user space software and FPGA logic a "portal". These drivers require dma-buf, which has been stable for us since Linux kernel version 3.8.x.
The portalmem driver provides an ioctl to allocate a reference-counted chunk of memory and returns a file descriptor associated with that memory. This driver implements dma-buf sharing. It also implements mmap() so that user-space applications can access the memory.
At allocation time, the application may choose cached or uncached mapping of the memory. On x86, the mapping is always cached. Our implementation of mmap() currently starts at line 173 of the portalmem driver. If the mapping is uncached, it modifies vma->vm_page_prot using pgprot_writecombine(), enabling buffering of writes but disabling caching.
The portalmem driver also provides an ioctl to invalidate and optionally write back data cache lines.
The portalmem driver has no knowledge of the FPGA. For that, we use the zynqportal driver, which provides an ioctl for transferring a translation table to the FPGA so that we can use logically contiguous addresses on the FPGA and translate them to the actual DMA addresses. The allocation scheme used by portalmem is designed to produce compact translation tables.
We use the same portalmem driver with pcieportal for PCI Express attached FPGAs, with no change to the user software.
The Zynq has NEON instructions, and an assembly-code implementation of memcpy using NEON, with buffers aligned on cache-line boundaries (32 bytes), will achieve rates of 300 MB/s or higher.
I struggled with this for some time with udmabuf and discovered the answer was as simple as adding dma-coherent; to its entry in the device tree. I saw a dramatic speedup in access time from this simple step - though I still need to add code to invalidate/flush whenever I transfer ownership from/to the device.
I have an SoC which has both DSP and ARM cores on it, and I would like to create a section of shared memory that both my user-space software and the DSP software are able to access. What would be the best way to allocate a buffer like this in Linux? Here is a little background: right now what I have is a kernel module in which I use kmalloc() to get a kernel buffer, then use the __pa() macro from asm/page.h to get the physical address of that buffer. I save this address as a sysfs entry so that my user-space code can get the physical address of the buffer. I can then write this address to the DSP so it knows where the shared memory location is, and I can also mmap /dev/mem or my own kernel module so that I can access this buffer from user space (I could also use the read/write file operations).
For some reason I feel like this is overboard but I cannot find the best way to do what I am trying to do.
Would it be possible to just mmap a section of memory via /dev/mem and read and write to that section? My feeling is that this would not 'lock' that section of memory from the kernel, so the kernel could still read/write to it without me knowing. Is this the case? After reading the memory-management chapter of LDD3, I see that mmap creates a new VMA for the mapping. Would this lock the area of memory so that other processes would not get allocated that section?
Any and all help is appreciated
Depending on the kind of DMA you're using, you need to allocate the buffer with dma_alloc_coherent(), or use standard allocations and the dma_map_* functions.
(You must not use __pa(); physical addresses are not necessarily the same as DMA bus addresses.)
To map the buffers to user space, use dma_mmap_coherent() for coherent buffers, or map the memory pages manually for streaming buffers.
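A hedged sketch of what that can look like in a driver follows; the device pointer handling and sizes are placeholders and this is not a complete driver, just the allocation and mmap pieces.

/*
 * Sketch: allocate a coherent DMA buffer and expose it to user space from
 * the driver's mmap(). "shared_probe" and SHARED_SIZE are placeholders.
 */
#include <linux/dma-mapping.h>
#include <linux/fs.h>
#include <linux/module.h>

#define SHARED_SIZE (1 << 20)   /* 1 MB, assumption */

static void *shared_cpu;        /* kernel virtual address */
static dma_addr_t shared_dma;   /* DMA/bus address to hand to the DSP */

static int shared_probe(struct device *dev)
{
    shared_cpu = dma_alloc_coherent(dev, SHARED_SIZE, &shared_dma, GFP_KERNEL);
    return shared_cpu ? 0 : -ENOMEM;
}

static int shared_mmap(struct file *filp, struct vm_area_struct *vma)
{
    /* assumes the struct device pointer was stashed here at open() time */
    struct device *dev = filp->private_data;

    return dma_mmap_coherent(dev, vma, shared_cpu, shared_dma,
                             vma->vm_end - vma->vm_start);
}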
For a similar requirement of mine, I reserved about 16 MB of memory towards the end of RAM and used it from both kernel and user space. Suppose you have 128 MB of RAM: you can set the BOOTMEM (memory size) argument to 112 MB in your boot loader (I am assuming you are using U-Boot). This reserves the last 16 MB of RAM. Now in kernel and user space you can map this area and use it as shared memory.
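On the kernel side, a region hidden from the kernel this way can be mapped with memremap(); here is a rough sketch, where RESERVED_PHYS and RESERVED_SIZE are assumptions that must match what the boot loader actually left out (and assume RAM starts at physical address 0). User space can then map the same physical range through /dev/mem or a driver-provided mmap, as discussed above.

/* Sketch: map the reserved 16 MB at the end of a 128 MB RAM. */
#include <linux/io.h>

#define RESERVED_PHYS 0x07000000UL   /* 112 MB into RAM (assumption) */
#define RESERVED_SIZE (16UL << 20)   /* 16 MB */

static void *shared_base;

static int map_reserved(void)
{
    /* MEMREMAP_WB asks for a normal, cacheable mapping where possible. */
    shared_base = memremap(RESERVED_PHYS, RESERVED_SIZE, MEMREMAP_WB);
    return shared_base ? 0 : -ENOMEM;
}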
I am a little confused about the terms physical/logical/virtual addresses in an operating system (I use Linux, openSUSE).
Here is what I understand:
Physical Address: When the processor is in system mode, the address used by the processor is a physical address.
Logical Address: When the processor is in user mode, the address used is a logical address. These are mapped to physical addresses by adding a base register to the offset value; in a way this provides a sort of memory protection.
I have come across discussion that virtual and logical addresses/address space are the same. Is it true?
Any help is deeply appreciated.
My answer is true for Intel CPUs running on a modern Linux system, and I am speaking about user-level processes, not kernel code. Still, I think it will give you enough insight to think about the other possibilities.
Address Types
Regarding question 3:
I have come across discussion that virtual and logical addresses/address space are the same. Is it true?
As far as I know they are the same, at least in modern OS's running on top of Intel processors.
Let me try to define two notions before I explain more:
Physical Address: The address of where something is physically located in the RAM chip.
Logical/Virtual Address: The address that your program uses to reach its things. It's typically converted to a physical address later by a hardware unit (mostly, not even the CPU is really aware of this conversion).
Virtual/Logical Address
The virtual address is, well, a virtual address. The OS, along with a hardware circuit called the MMU (Memory Management Unit), deludes your program into thinking it's running alone in the system and has the whole address space (having a 32-bit system means your program will think it has 4 GB of RAM, roughly speaking).
Obviously, if you have more than one program running at a time (you always do: GUI, init process, shell, clock app, calendar, whatever), this won't work.
What happens is that the OS keeps most of your program's memory on the hard disk, and the parts it uses most are kept in RAM; but that doesn't mean they'll be at the addresses you and your program know.
Example: your process might have a variable named counter that's given the virtual address 0xff (hypothetically) and another variable named oftenNotUsed that's given the virtual address 0xaa.
If you read the assembly of your compiled code after all linking has happened, you'll see them accessed using those addresses, but the oftenNotUsed variable won't really be in RAM at 0xaa; it will be on the hard disk because the process is not using it.
Moreover, the variable counter probably won't be physically at 0xff; it'll be somewhere else in RAM. When your CPU tries to fetch what's at 0xff, the MMU and a part of the OS will do a mapping and get that variable from where it really lives in RAM; the CPU won't even notice it wasn't at 0xff.
Now what happens if your program asks for the oftenNotUsed variable? The MMU and the OS will notice this 'miss' and fetch it from the hard disk into RAM, then hand it over to the CPU as if it were at address 0xaa; this fetching means some data that was present in RAM gets sent back to the hard disk.
Now imagine this running for every process in your system. Every process thinks it has 4 GB of RAM; no one actually has that, but everything works because everyone has some parts of their program physically present in RAM while most of the program resides on the hard disk. Don't confuse this part of program memory being kept on the hard disk with the program data you can access through file operations.
Summary
Virtual address: The address you use in your programs, the address your CPU uses to fetch data. It is not real and gets translated via the MMU to some physical address; every process has its own, and its size depends on your system (32-bit Linux has a 4 GB address space).
Physical address: The address you'll never reach if you're running on top of an OS. It's where your data, regardless of its virtual address, resides in RAM. This can change if your data is sent back and forth to the hard disk to make room for other processes.
All of what I have mentioned above, although it's a simplified version of the whole concept, is what's called the memory-management part of the computer system.
Consequences of this system
Processes cannot access each other's memory. Each has its own separate virtual addresses, and every process gets a different translation to different areas, even though you may sometimes find two processes trying to access the same virtual address.
This system works well as a caching system: you typically don't use the whole 4 GB you have available, so why waste it? Let other processes share it and use it too; when your process needs more, the OS will fetch your data from the hard disk and replace another process's data, at a cost of course.
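A tiny user-space experiment illustrates the point: after fork(), parent and child print the same virtual address for a global variable, yet changing it in the child doesn't affect the parent, because the identical virtual address is backed by different physical pages (copy-on-write).

/* Demo: same virtual address, separate copies after fork(). */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int value = 10;

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {            /* child: modify its own copy */
        value = 99;
        printf("child : &value=%p value=%d\n", (void *)&value, value);
        exit(0);
    }
    wait(NULL);                /* parent: same address, unchanged value */
    printf("parent: &value=%p value=%d\n", (void *)&value, value);
    return 0;
}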
Physical Address- When the processor is in system mode, the address used by the processor is physical address.
Not necessarily true. It depends on the particular CPU. On x86 CPUs, once you've enabled page translation, all code ceases to operate with physical addresses or addresses trivially convertible into physical addresses (except SMM, AFAIK, but that's not important here).
Logical Address- When the processor is in user mode, the address used is the logical address. these are anyways mapped to some physical address by adding a base register with the offset value.
Logical addresses do not necessarily apply to the user mode exclusively. On x86 CPUs they exist in the kernel mode as well.
I have come across discussion that virtual and logical addresses/address space are the same. Is it true?
It depends on the particular CPU. x86 CPUs can be configured in such a way that segments aren't used explicitly. They are used implicitly, and their bases are always 0 (except for thread-local-storage segments). What remains when you drop the segment selector from a logical address is a 32-bit (or 64-bit) offset whose value coincides with the 32-bit (or 64-bit) virtual address. In this simplified set-up you may consider the two to be the same, or consider that logical addresses don't exist. That's not strictly true, but for most practical purposes it's a good enough approximation.
The answer below is based on the Intel x86 CPU.
Difference Between Logical and Virtual Addresses
Whenever your program is executing, the CPU generates a logical address for each instruction, consisting of a 16-bit segment selector and a 32-bit offset. The virtual (linear) address is generated from the fields of the logical address.
The segment selector is a 16-bit field, of which the upper 13 bits are an index (a pointer to the segment descriptor residing in the GDT, described below) and 1 bit is the TI field (TI = 1: refer to the LDT; TI = 0: refer to the GDT).
The segment selector, or segment identifier, refers to the code segment, data segment, stack segment, etc. Linux maintains a GDT/LDT (Global/Local Descriptor Table) which contains an 8-byte descriptor for each segment and holds the base (virtual) address of the segment.
So for each logical address, the virtual address is calculated using the steps below.
1) Examines the TI field of the Segment Selector to determine which Descriptor Table stores the Segment Descriptor. This field indicates that the Descriptor is either in the GDT (in which case the segmentation unit gets the base linear address of the GDT from the gdtr register) or in the active LDT (in which case the segmentation unit gets the base linear address of that LDT from the ldtr register).
2) Computes the address of the Segment Descriptor from the index field of the Segment Selector. The index field is multiplied by 8 (the size of a Segment Descriptor), and the result is added to the content of the gdtr or ldtr register.
3) Adds the offset of the logical address to the Base field of the Segment Descriptor, thus obtaining the linear (virtual) address.
Now it is the job of the paging unit to translate the virtual address into a physical address.
Reference: Understanding the Linux Kernel, Chapter 2, Memory Addressing.
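As a rough worked example of those three steps, with made-up numbers and with descriptor parsing simplified (real segment descriptors scatter the base address across several fields):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t selector = 0x0010;          /* index = 2, TI = 0 (GDT), RPL = 0 */
    uint32_t offset   = 0x00001234;      /* 32-bit offset of the logical address */

    uint32_t gdtr_base = 0xC1000000;     /* pretend base of the GDT (step 1) */
    uint32_t index     = selector >> 3;  /* upper 13 bits of the selector */
    uint32_t desc_addr = gdtr_base + index * 8;  /* step 2: 8-byte descriptors */

    uint32_t seg_base  = 0x00000000;     /* Linux uses base 0 for code/data segments */
    uint32_t linear    = seg_base + offset;      /* step 3: linear (virtual) address */

    printf("descriptor at %#x, linear address %#x\n",
           (unsigned)desc_addr, (unsigned)linear);
    return 0;
}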
User virtual addresses
These are the regular addresses seen by user-space programs. User addresses are either 32 or 64 bits in length, depending on the underlying hardware architecture, and each process has its own virtual address space.
Physical addresses
The addresses used between the processor and the system's memory. Physical addresses are 32- or 64-bit quantities; even 32-bit systems can use 64-bit physical addresses in some situations.
Bus addresses
The addresses used between peripheral buses and memory. Often they are the same as the physical addresses used by the processor, but that is not necessarily the case. Bus addresses are highly architecture dependent, of course.
Kernel logical addresses
These make up the normal address space of the kernel. These addresses map most or all of main memory, and are often treated as if they were physical addresses. On most architectures, logical addresses and their associated physical addresses differ only by a constant offset. Logical addresses use the hardware's native pointer size, and thus may be unable to address all of physical memory on heavily equipped 32-bit systems. Logical addresses are usually stored in variables of type unsigned long or void *. Memory returned from kmalloc has a logical address.
Kernel virtual addresses
These differ from logical addresses in that they do not necessarily have a direct mapping to physical addresses. All logical addresses are kernel virtual addresses; memory allocated by vmalloc also has a virtual address (but no direct physical mapping). The function kmap returns virtual addresses. Virtual addresses are usually stored in pointer variables.
If you have a logical address, the macro __pa() (defined in <asm/page.h>) will return its associated physical address. Physical addresses can be mapped back to logical addresses with __va(), but only for low-memory pages.
Reference: Linux Device Drivers, Third Edition (LDD3).
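As a small kernel-side sketch of the __pa()/__va() round trip for a logical (directly mapped) address such as one returned by kmalloc; the function name is made up for illustration:

#include <linux/slab.h>
#include <linux/printk.h>
#include <asm/page.h>

static void pa_va_demo(void)
{
    void *buf = kmalloc(PAGE_SIZE, GFP_KERNEL);   /* logical address */
    phys_addr_t phys;

    if (!buf)
        return;

    phys = __pa(buf);   /* logical -> physical (low memory only) */
    pr_info("buf=%px phys=%pa back=%px\n", buf, &phys, __va(phys));

    kfree(buf);
}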
Normally every address issued (on the x86 architecture) is a logical address, which is translated to a linear address via the segment tables. After the translation into a linear address, it is then translated to a physical address via the page tables.
A nice article explaining the same in depth:
http://duartes.org/gustavo/blog/post/memory-translation-and-segmentation/
Physical Address is the address that is seen by the memory unit, i.e., one loaded into memory address register.
Logical Address is the address that is generated by the CPU.
The user program can never see the real physical address. The memory-mapping unit converts the logical address to a physical address.
Logical addresses generated by a user process must be mapped to physical memory before they are used.
Logical memory is relative to the respective program, i.e. (start point of program + offset).
Virtual memory uses a page table that maps to RAM and disk. In this way the system can promise more memory to each individual process than physically exists.
In user mode (user space), all the addresses seen by a program are virtual addresses.
In kernel mode, the addresses seen by the kernel are still virtual, but the directly mapped ones are termed logical, as they are equal to physical + PAGE_OFFSET.
Physical addresses are the ones seen by the RAM.
With virtual memory, every address in a program goes through the page tables.
When you write a small program, e.g.:
#include <stdio.h>

int a = 10;

int main(void)
{
    printf("%d\n", a);
    return 0;
}
compile: > gcc -c fname.c
> ls
fname.o    // fname.o is generated
> readelf -a fname.o > readelf_obj.txt
/* readelf is a command for inspecting object files and executables (which are otherwise just 0s and 1s); the output is written to the readelf_obj.txt file */
> vim readelf_obj.txt
/* Under "section headers" you will see the .data, .text and .rodata sections of your object file. Every starting (base) address is 0000 and grows by the respective size until it reaches the value under the heading "size" ---> these are the logical addresses. */
> gcc fname.c
> ls
a.out    // your executable
> readelf -a a.out > readelf_exe.txt
> vim readelf_exe.txt
/* Here the base addresses of the sections are not zero. Each section starts at a particular address and ends at a particular address. The linker gives continuous addresses to all the sections (observe in the readelf_exe.txt file the base address and size of each section; they follow on from one another), so only the base addresses differ ---> this is called the virtual address space. */
Physical address -> memory has physical addresses. When your executable file is loaded into memory, it occupies physical addresses. The virtual addresses are mapped to physical addresses for execution.
I read that the first 3 GB are reserved for the process and the last 1 GB is for the kernel. I also read that the kernel is loaded starting from the 2nd MB of the physical address space (depending on the configuration). My question is: is the mapping of that last 1 GB the same for all processes, and does it map to this physical area of memory?
Another question: when a process switches to kernel mode (e.g., when a syscall occurs), which page tables are used, the process page tables or the kernel page tables? If kernel page tables are used, then they can't access the memory locations belonging to the process. If that is the case, then there is apparently no use for kernel virtual memory, since all access to kernel code and data would be through the mapping of the last 1 GB of the process address space. Please help me clarify this (any useful links will be much appreciated).
It seems, you are talking about 32-bit x86 systems, right?
If I am not mistaken, the kernel can be configured not only for a 3 GB/1 GB memory distribution; there can be other variants (e.g. 2 GB/2 GB). Still, 3 GB/1 GB is probably the most common one on x86-32.
The kernel part of the address space should be inaccessible from user space. From the kernel's point of view, yes, the mapping of the memory occupied by the kernel itself is always the same, no matter in the context of which process (or interrupt handler, or whatever else) the kernel currently operates.
As one of the consequences, if you look at the addresses of kernel symbols in /proc/kallsyms from different processes, you will see the same addresses each time. And these are exactly the addresses of the respective kernel functions, variables and others from the kernel's point of view.
So I suppose, the answer to your first question is "yes" but it is probably not very useful for the user-space code as the kernel space memory is not directly accessible from there anyway.
As for the second question, well, if the kernel currently operates in the context of some process, it can actually access the user-space memory of that process. I can't describe it in detail but probably the implementation of kernel functions copy_from_user and copy_to_user could give you some hints. See arch/x86/lib/usercopy_32.c and arch/x86/include/asm/uaccess.h in the kernel sources. It seems, on x86-32, the user-space memory is accessed in these functions directly, using the default memory mappings for the current process context. The 'magic' stuff there is only related to the optimizations and checking the address of the memory area for correctness.
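To make that concrete, here is a hedged sketch of a driver write() handler: because it runs in the calling process's context, the user pointer can be reached with copy_from_user(), which also validates the address range. The names and buffer size are placeholders.

#include <linux/fs.h>
#include <linux/uaccess.h>

static char kbuf[256];

static ssize_t demo_write(struct file *filp, const char __user *ubuf,
                          size_t count, loff_t *ppos)
{
    size_t n = count;

    if (n > sizeof(kbuf))
        n = sizeof(kbuf);

    /* Runs in the context of the calling process, so its user mappings
     * are visible here; copy_from_user checks the range and handles faults. */
    if (copy_from_user(kbuf, ubuf, n))
        return -EFAULT;

    return n;
}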
Yes, the mapping of the kernel part of the address space is the same in all processes. Part of it does map that part of the physical memory where the kernel image is loaded, but that's not the bulk of it - the remainder is used to map other physical memory locations for the kernel's runtime working set.
When a process switches to kernel mode, the page tables are not changed. The kernel part of the address space simply becomes accessible because the CPL (Current Privilege Level) is now zero.