In PCI configuration space, the Cache Line Size register specifies the system cache line size in units of DWORDs. This register must be implemented by master devices that can generate the Memory Write and Invalidate command.
The value in this register is also used by master devices to determine whether to use Read, Read Line, or Read Multiple commands for accessing memory.
Slave devices that want to allow memory bursting using cacheline wrap addressing mode must implement this register to know when a burst sequence wraps to the beginning of the cacheline.
PCI Express devices, however, implement this field only as a read-write field for legacy compatibility purposes; it has no effect on any PCI Express device behavior.
So how does a PCIe system implement the memory write and invalidate feature?
PCIe has a supplemental protocol called Address Translation Services (ATS); chapter 3 of that specification describes invalidation. The bottom line is that there is a MsgD Transaction Layer Packet (TLP) called Invalidate that can do this.
Note that in general, it is completely separate (protocol-wise) from the MWr TLP.
As far as I know, PCIe does not have an explicit Memory Write and Invalidate transaction. Instead, a root complex that receives a write covering an entire cache line can skip reading that cache line and invalidate it immediately.
I think in most cases you would simply generate MaxPayloadSize requests if possible, which hopefully also triggers this behaviour. If the device must know the cache line size, I would suggest designing a device-specific mechanism and configuring it from your driver, along the lines sketched below.
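For illustration, here is a hedged sketch of that kind of device-specific mechanism: a Linux driver reads the legacy Cache Line Size config register and passes the host cache line size to the device through a vendor-specific BAR register. DEV_REG_CLSIZE and mydev_tell_cacheline() are made-up names and the offset is purely illustrative; only the PCI/kernel APIs are real.

#include <linux/pci.h>
#include <linux/cache.h>
#include <linux/io.h>

#define DEV_REG_CLSIZE 0x40   /* hypothetical vendor-specific register offset in BAR0 */

static void mydev_tell_cacheline(struct pci_dev *pdev, void __iomem *bar0)
{
    u8 cls;

    /* Legacy config-space value, in DWORDs; PCIe devices themselves ignore it. */
    pci_read_config_byte(pdev, PCI_CACHE_LINE_SIZE, &cls);
    dev_dbg(&pdev->dev, "legacy Cache Line Size register: %u DWORDs\n", cls);

    /* Pass the host cache line size (in bytes) to the device via our own register. */
    iowrite32(cache_line_size(), bar0 + DEV_REG_CLSIZE);
}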
According to the following documentation
https://www.kernel.org/doc/html/latest/x86/pat.html,
Drivers wanting to export some pages to userspace do it by using mmap interface and a combination of:
pgprot_noncached()
io_remap_pfn_range() or remap_pfn_range() or vmf_insert_pfn()
Note that this set of APIs only works with IO (non RAM) regions. If driver wants to export a RAM region, it has to do set_memory_uc() or set_memory_wc() as step 0 above and also track the usage of those pages and use set_memory_wb() before the page is freed to free pool.
Why is the extra step set_memory_uc() or set_memory_wc() needed for RAM regions?
This is needed since set_memory_uc() and set_memory_wc() are specifically written to work with memory regions; the other API functions you're being told to use here are for I/O regions.
Since you want to work with page(s) in a RAM region using the API functions listed, your driver needs to mark them as uncached or write-combined first so that they can essentially be treated like I/O pages. It can then use those APIs, and must follow up with an explicit set_memory_wb() on the page(s) to restore write-back caching before it considers itself "finished" with them.
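As a concrete illustration, here is a rough sketch of an mmap handler for a single RAM page, following the steps from the PAT documentation quoted above. The buffer name and error handling are simplified, and the exact header providing set_memory_uc() varies between kernel versions (asm/set_memory.h on newer ones).

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/io.h>
#include <asm/set_memory.h>

static void *buf;   /* one page of kernel RAM, e.g. from get_zeroed_page() */

static int mydrv_mmap(struct file *filp, struct vm_area_struct *vma)
{
    unsigned long pfn = virt_to_phys(buf) >> PAGE_SHIFT;

    /* "Step 0" for RAM: switch the kernel's own mapping of the page to uncached first. */
    if (set_memory_uc((unsigned long)buf, 1))
        return -EIO;

    vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
    return remap_pfn_range(vma, vma->vm_start, pfn, PAGE_SIZE, vma->vm_page_prot);
}

/* Before freeing buf, restore write-back caching:
 *     set_memory_wb((unsigned long)buf, 1);
 */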
I am using the GCC toolchain and an ARM Cortex-M0 uC. I would like to ask if it is possible to define a region in the linker script so that reads and writes to it would call the external device driver functions for accessing its space (e.g. SPI memory). Can anyone give some hints on how to do this?
Regards, Rafal
EDIT:
Thank you for your comments and replies. My setup is:
The random-access SPI memory is connected via an SPI controller, and I use a "standard" driver to access the memory space and store/read data in it.
What I wanted to do is avoid calling the driver's functions explicitly and instead hide them behind some fixed RAM address, so that any read of that address would call the SPI memory read driver function and any write would call the SPI memory write function (the offset from the initial address would be the address of the data in the external memory). I doubt that this is at all possible on a uC without an MMU, but I think it is always worth asking someone else who might have had a similar idea.
No, this is not how it works. The Cortex-M0 has no Memory Management Unit, and is therefore unable to intercept accesses to specific memory regions.
It's not really clear what you are trying to achieve. If you have connected SPI memory external to the chip, you have to perform all the accesses using a driver; it is not possible to memory-map the SPI port abstraction.
If this is an on-chip SPI memory controller, it will have two regions in the memory map. One will be the 'memory' region, which will probably behave as read-only; the other will be the control registers for the memory controller hardware, and it is these registers that the device driver talks to. Specifically, to write to the SPI memory, you need to go through the driver.
In the extreme case (for example the Cortex-M1 for Xilinx), there will be an eXecute In Place (XIP) peripheral for the memory-map behaviour, and an SPI master device for the read/write functionality. A GPIO pin is used to multiplex the SPI EEPROM pins between 'memory mode' and 'configuration mode'.
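Since trapping accesses isn't possible here, the practical pattern is to wrap the SPI driver behind small accessor functions and call those wherever a "memory-mapped" access was wanted. A minimal sketch, assuming the existing driver exposes spi_mem_read()/spi_mem_write() (both names are illustrative):

#include <stdint.h>
#include <stddef.h>

/* assumed to be provided by the existing SPI memory driver */
extern void spi_mem_read(uint32_t addr, void *buf, size_t len);
extern void spi_mem_write(uint32_t addr, const void *buf, size_t len);

static inline uint8_t ext_read8(uint32_t offset)
{
    uint8_t v;
    spi_mem_read(offset, &v, sizeof v);    /* offset = address within the external memory */
    return v;
}

static inline void ext_write8(uint32_t offset, uint8_t v)
{
    spi_mem_write(offset, &v, sizeof v);
}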
I have managed to create a virtual IOPCIDevice which attaches to IOResources and basically does nothing. I'm able to get existing drivers to register and match to it.
However, when it comes to I/O handling, I have some trouble. I/O accesses via the functions described in the IOPCIDevice class (e.g. configRead, ioRead, configWrite, ioWrite) can be handled by my own code, but drivers that use memory mapping and IODMACommand are the problem.
There seem to be two things I need to manage: IODeviceMemory (described in IOPCIDevice) and DMA transfers.
How could I create an IODeviceMemory that ultimately points to ordinary RAM, so that when a driver tries to communicate with the PCI device it either does nothing or just moves the data to RAM, allowing my userspace client to handle this data and act as an emulated PCI device?
And could DMA commands also be directed to my userspace client, without modifying the source code of existing drivers that use IODMACommand?
Thanks!
Trapping memory accesses
So in theory, to achieve what you want, you would need to allocate a memory region, set its protection bits to read-only (or possibly to neither read nor write, if a read of the device you're simulating has side effects), and then trap any writes into your own handler function, where you'd then simulate the device register writes.
As far as I'm aware, you can do this sort of thing in macOS userspace using Mach exception handling. You'd need to set things up so that page protection fault exceptions from the process you're controlling get sent to a Mach port you control. In that port's message handler, you'd:
Check where the access was going.
If it's the device memory, suspend all the threads of the process.
Switch the thread the write is coming from to single-step, and temporarily allow writes to the memory region.
Resume the writer thread.
Trap the single-step message; your "device memory" now contains the written value.
Perform your "device's" side effects.
Turn off single-step in the writer thread.
Resume all threads.
As I said, I believe this can be done in user space processes. It's not easy, and you can cobble together the Mach calls you need to use from various obscure examples across the web. I got something similar working once, but can't seem to find that code anymore, sorry.
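For what it's worth, here is a rough sketch of the setup side of that approach, done in-process for simplicity; the mach_msg() receive loop and the single-step handling from the steps above are omitted, and every name other than the Mach APIs themselves is illustrative.

#include <mach/mach.h>
#include <mach/mach_vm.h>

static mach_port_t exc_port;
static mach_vm_address_t dev_mem;

static kern_return_t setup_fake_device_memory(mach_vm_size_t size)
{
    kern_return_t kr;

    /* Backing store for the simulated device registers. */
    kr = mach_vm_allocate(mach_task_self(), &dev_mem, size, VM_FLAGS_ANYWHERE);
    if (kr != KERN_SUCCESS) return kr;

    /* Make it read-only so writes fault and raise EXC_BAD_ACCESS. */
    kr = mach_vm_protect(mach_task_self(), dev_mem, size, FALSE, VM_PROT_READ);
    if (kr != KERN_SUCCESS) return kr;

    /* A port we own, to receive the exception messages on. */
    kr = mach_port_allocate(mach_task_self(), MACH_PORT_RIGHT_RECEIVE, &exc_port);
    if (kr != KERN_SUCCESS) return kr;
    kr = mach_port_insert_right(mach_task_self(), exc_port, exc_port,
                                MACH_MSG_TYPE_MAKE_SEND);
    if (kr != KERN_SUCCESS) return kr;

    /* Route bad-access exceptions for this task to our port. */
    return task_set_exception_ports(mach_task_self(), EXC_MASK_BAD_ACCESS,
                                    exc_port, EXCEPTION_DEFAULT, THREAD_STATE_NONE);
}

/* A mach_msg() loop on exc_port would then implement the suspend/single-step
 * dance described in the steps above. */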
… in the kernel
Now, the other problem is you're trying to do this in the kernel. I'm not aware of any public KPIs that let you do anything like what I've described above. You could start looking for hacks in the following places:
You can quite easily make IOMemoryDescriptors backed by system memory. Don't worry about the IODeviceMemory terminology: these are just IOMemoryDescriptor objects; the IODeviceMemory class is a lie. Trapping accesses is another matter entirely. In principle, you can find out what virtual memory mappings of a particular MD exist using the "reference" flag to the createMappingInTask() function, and then call the redirect() method on the returned IOMemoryMap with a NULL backing memory argument. Unfortunately, this will merely suspend any thread attempting to access the mapping. You don't get a callback when this happens.
You could dig into the guts of the Mach VM memory subsystem, which mostly lives in the osfmk/vm/ directory of the xnu source. Perhaps there's a way to set custom fault handlers for a VM region there. You're probably going to have to get dirty with private kernel APIs though.
Why?
Finally, why are you trying to do this? Take a step back: what are you ultimately trying to achieve? It doesn't seem like simulating a PCI device in this way is an end in itself, so is this really the only way to reach the greater goal you're actually after? See: XY problem.
I'm writing a kernel driver which exposes my I/O device to user space.
Using mmap, the application gets a virtual address to write into the device.
Since I want the application's writes to use big PCIe transactions, the driver maps this memory as write-combining.
Depending on the memory type (write-combining or non-cached), the application chooses the optimal method for working with the device.
However, some architectures do not support write-combining, or support it only for part of the memory space.
Hence, it is important that the kernel driver can tell the application whether or not it succeeded in mapping the memory as write-combining.
I need a generic way to check in the kernel driver whether the memory it mapped (or is going to map) is write-combining or not.
How can I do it?
Here is part of my code:
vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
if (io_remap_pfn_range(vma, vma->vm_start, pfn, PAGE_SIZE, vma->vm_page_prot))
    return -EAGAIN;
First, you can find out if an architecture supports write-combining at compile time, with the macro ARCH_HAS_IOREMAP_WC. See, for instance, here.
At run time, you can check the return values of ioremap_wc(), or of set_memory_wc() and friends, for success. For example:
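Here is a hedged sketch of both checks; the function names and parameters below (ram_buffer_is_wc, map_bar_wc, vaddr, npages, bar_phys, bar_len) are illustrative, and ARCH_HAS_IOREMAP_WC is defined on x86 while other architectures vary.

#include <linux/io.h>
#include <asm/set_memory.h>

/* Run-time check for RAM pages: set_memory_wc() returns 0 when the kernel
 * mapping was successfully switched to write-combining. */
static bool ram_buffer_is_wc(void *vaddr, int npages)
{
    return set_memory_wc((unsigned long)vaddr, npages) == 0;
}

/* Compile-time check for an I/O BAR: only use the WC variant where the
 * architecture advertises it; otherwise fall back to an uncached mapping. */
static void __iomem *map_bar_wc(resource_size_t bar_phys, resource_size_t bar_len)
{
#ifdef ARCH_HAS_IOREMAP_WC
    return ioremap_wc(bar_phys, bar_len);
#else
    return ioremap(bar_phys, bar_len);
#endif
}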
I have a process with some sensitive memory which must never be written to disk.
I also have a requirement for core dumps, to satisfy my client's first-time data capture requirements.
Does locking a page using mlock() prevent the page from appearing in a core dump?
Note, this is an embedded system and we don't actually have any swap space.
Taken from man 2 madvise:
The madvise() system call advises the kernel about how to handle
paging input/output in the address range beginning at address addr and
with size length bytes. It allows an application to tell the kernel
how it expects to use some mapped or shared memory areas, so that the
kernel can choose appropriate read-ahead and caching techniques.
This call does not influence the semantics of the application (except
in the case of MADV_DONTNEED), but may influence its performance. The
kernel is free to ignore the advice.
In particular, check the option MADV_DONTDUMP:
Exclude from a core dump those pages in the range specified by addr
and length. This is useful in applications that have large areas of
memory that are known not to be useful in a core dump. The effect of
MADV_DONTDUMP takes precedence over the bit mask that is set via the
/proc/PID/coredump_filter file (see core(5)).
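Putting that together with the question, a minimal userspace sketch might look like this (SECRET_LEN is a placeholder and error handling is omitted):

#include <sys/mman.h>

#define SECRET_LEN 4096   /* illustrative size of the sensitive region */

void *alloc_secret(void)
{
    void *p = mmap(NULL, SECRET_LEN, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    mlock(p, SECRET_LEN);                  /* keep it out of swap (moot with no swap, but harmless) */
    madvise(p, SECRET_LEN, MADV_DONTDUMP); /* keep it out of core dumps */
    return p;
}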