Can the on-chip RAM (OCRAM) of the SoM be accessed by the FPGA or a PCIe endpoint device?

I am using an NXP i.MX8M Plus SoM with a Cortex-A53 application processor running Linux and a Cortex-M7 real-time co-processor running FreeRTOS.
Can the OCRAM of the SoM be accessed directly by the FPGA / PCIe endpoint device?

Related

Sharing an I2C driver between kernel and userspace

My hardware design uses the same I2C controller for chips controlled by kernel modules (DAC and ADC in sound ASoC) and for devices I want to control from userspace (I2C port expanders -> relays). Can I use the controller in the ASoC devicetree files and use it from user-space libraries at the same time? If so, how can I guard/lock access to the controller between kernel and userspace to avoid clashing I2C transactions?
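One way to picture the user-space half of such a setup: i2c-dev transactions go through the same i2c core as in-kernel drivers, and the core holds the adapter lock for the duration of each transaction, so a single transaction cannot interleave with a kernel driver's (sequences of transactions still need their own coordination). A minimal sketch, where the bus number /dev/i2c-1, the 0x20 expander address, and the 0x01 output register are illustrative assumptions:

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/i2c.h>
    #include <linux/i2c-dev.h>

    /* Drive a hypothetical port expander at 0x20 on /dev/i2c-1; the
       register layout (0x01 = output register) is also an assumption. */
    int set_relays(unsigned char value)
    {
        int fd = open("/dev/i2c-1", O_RDWR);
        if (fd < 0)
            return -1;

        unsigned char buf[2] = { 0x01, value };
        struct i2c_msg msg = {
            .addr  = 0x20,
            .flags = 0,
            .len   = sizeof(buf),
            .buf   = buf,
        };
        struct i2c_rdwr_ioctl_data xfer = { .msgs = &msg, .nmsgs = 1 };

        /* One I2C_RDWR ioctl = one locked transaction on the adapter. */
        int ret = ioctl(fd, I2C_RDWR, &xfer);
        close(fd);
        return ret < 0 ? -1 : 0;
    }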

Linux PCIe DMA driver

I'm currently writing a driver for a PCIe device that should send data to a Linux system using DMA. As far as I understand, my PCIe device needs a DMA controller (DMA master) and so does my Linux system (DMA slave). Currently the PCIe device has no DMA controller and should not get one. That confuses me.
A. Is the following possible?
PCIe device sends an interrupt
Wait for the interrupt in the Linux driver
Start a DMA transfer from the memory-mapped PCIe registers to Linux system memory
Read the data from memory in userspace
I have everything set up for this; the only thing I'm missing is how to transfer the data from the PCIe registers to memory.
B. Which system call (or series of) do I need to call to do a DMA transfer?
C. I probably need to set up the DMA on the Linux system, but what I find points to code that assumes there is a slave, e.g. struct dma_slave_config.
The use case is collecting data from the PCIe device and making it available in memory to userspace.
Any help is much appreciated. Thanks in advance!
DMA, by definition, is completely independent of the CPU and any software (i.e. OS kernel) running on it. DMA is a way for devices to perform memory reads and writes against host memory without the involvement of the host CPU.
The way DMA usually works is something like this: software will allocate a DMA accessible region in memory and share the physical address with the device, say, by performing memory writes against the address space associated with one of the device's BARs. Then, the device will perform a DMA read or write against that block of memory. When that operation is complete, the device will issue an interrupt to the device driver so it can handle the data and/or free the memory.
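As a rough sketch of that flow (the BAR register offsets below are invented for illustration; a real device defines its own register map), a driver might allocate a coherent buffer and hand its bus address to the device like this:

    #include <linux/pci.h>
    #include <linux/io.h>
    #include <linux/dma-mapping.h>

    #define DMA_BUF_SIZE    4096
    #define REG_DMA_ADDR_LO 0x10   /* hypothetical BAR0 register offsets */
    #define REG_DMA_ADDR_HI 0x14
    #define REG_DMA_LEN     0x18

    static int start_device_dma(struct pci_dev *pdev, void __iomem *bar0)
    {
        dma_addr_t dma_handle;
        void *buf;

        /* Allocate a buffer the device is allowed to DMA into. */
        buf = dma_alloc_coherent(&pdev->dev, DMA_BUF_SIZE, &dma_handle,
                                 GFP_KERNEL);
        if (!buf)
            return -ENOMEM;

        /* Share the bus address with the device through its BAR registers;
           the device then DMAs into the buffer and raises an interrupt
           when the operation completes. */
        iowrite32(lower_32_bits(dma_handle), bar0 + REG_DMA_ADDR_LO);
        iowrite32(upper_32_bits(dma_handle), bar0 + REG_DMA_ADDR_HI);
        iowrite32(DMA_BUF_SIZE, bar0 + REG_DMA_LEN);

        return 0;
    }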
If your device does not have the capability of issuing a DMA read or write against host memory, then you'll have to interact with it using the CPU only. Discrete DMA controllers have not been a thing for a very long time.

Configure DMA in a Linux Kernel Module

For my application I would like to send some data allocated in RAM to a PWM FIFO through DMA, in kernel space.
I would like the DMA to generate an interrupt when the data vector is complete, so I can load the next vector and trigger other behavior...
I read "Linux Device Drivers", 3rd edition, from O'Reilly, but I'm a bit confused about using the DMA Engine.
Which steps do I have to follow to start a memory-to-device (PWM) DMA transaction with an interrupt callback?
EDIT 1:
I need to learn how to use the Linux DMA API for my case (memory -> PWM FIFO), in kernel space.
I have submitted a patch to improve Ethernet performance by using the DMA engine. In this patch, the driver is able to move packets from the RX FIFO to RAM (device to memory), so you can get some information about using the DMA engine in the Linux kernel from this patch: sun4i-emac.c: add dma support
Steps (sketched in code after this list):
request a DMA channel (API: dma_request_chan)
set up the DMA channel (API: dmaengine_slave_config)
map the data buffer to a DMA region (API: dma_map_single)
prepare the transfer (API: dmaengine_prep_slave_single)
submit the DMA transfer request (API: dmaengine_submit)
launch! (API: dma_async_issue_pending)
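A minimal sketch of these steps for a memory-to-device transfer; the "tx" channel name and the FIFO bus address parameter are assumptions for illustration:

    #include <linux/kernel.h>
    #include <linux/dmaengine.h>
    #include <linux/dma-mapping.h>

    /* Completion callback: runs when the transfer finishes (the
       "interrupt callback" asked about above). */
    static void xfer_done(void *param)
    {
        pr_info("DMA transfer to PWM FIFO complete\n");
    }

    static int pwm_dma_send(struct device *dev, void *buf, size_t len,
                            dma_addr_t pwm_fifo_phys)
    {
        struct dma_slave_config cfg = { };
        struct dma_async_tx_descriptor *desc;
        struct dma_chan *chan;
        dma_addr_t dma_src;
        dma_cookie_t cookie;

        chan = dma_request_chan(dev, "tx");             /* 1. request channel */
        if (IS_ERR(chan))
            return PTR_ERR(chan);

        cfg.direction = DMA_MEM_TO_DEV;                 /* 2. configure it */
        cfg.dst_addr = pwm_fifo_phys;                   /* device FIFO address */
        cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
        dmaengine_slave_config(chan, &cfg);

        dma_src = dma_map_single(dev, buf, len, DMA_TO_DEVICE); /* 3. map */
        if (dma_mapping_error(dev, dma_src))
            goto err_release;

        desc = dmaengine_prep_slave_single(chan, dma_src, len, /* 4. prepare */
                                           DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);
        if (!desc)
            goto err_unmap;

        desc->callback = xfer_done;                     /* interrupt callback */

        cookie = dmaengine_submit(desc);                /* 5. submit */
        if (dma_submit_error(cookie))
            goto err_unmap;

        dma_async_issue_pending(chan);                  /* 6. launch! */
        return 0;

    err_unmap:
        dma_unmap_single(dev, dma_src, len, DMA_TO_DEVICE);
    err_release:
        dma_release_channel(chan);
        return -EIO;
    }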

In which files is PCIe device enumeration located in the Linux kernel for an ARM-based system?

I am working on developing PCIe drivers for a custom ARM-based platform. As a starting point I have been looking into the Linux kernel 4.15.9 code, but I am unable to locate the relevant PCIe driver files. In particular, I am interested in PCIe device enumeration and configuration. Any help in this regard would be appreciated.
PCIe driver code is divided into 4 sections.
1 - PCIe subsystem code
This is the generic PCIe subsystem code, which takes care of bus scanning, MSI allocation, BAR allocation, etc.
Path - drivers/pci/*
2 - PCIe host controller IP generic code
This is specific to the host controller IP: for a given host controller in a platform, the PCIe subsystem communicates via the APIs provided by this code.
Path - drivers/pci/dwc/*
Example - DWC (Synopsys DesignWare) host
NOTE - Not all controller manufacturers have a separate folder like DWC.
3 - PCIe host controller platform-specific initialization code
This is specific to the PCIe IP as integrated in a particular SoC. Every SoC has its own chip-specific code to initialize the controller, and the APIs in this part are used by the "PCIe host controller IP generic code".
Path - drivers/pci/host/*
4 - PCIe capabilities
This code segment contains capability processing such as AER, DPC, ASPM, etc.
Path - drivers/pci/pcie/*

Memory Alignment for a DMA transaction (Windows Driver Foundation)

We are writing a DMA-based driver for a custom made PCI-Express device using WDF for Windows 7.
As you may know, PCI-Express bus transactions are not allowed to cross a 4k memory boundary. The custom device does not check this, and therefore we need to ensure that the driver only requests DMA transfers which are aligned to 4k memory boundaries.
The profile for the device is WdfDmaProfilePacket64.
We tried using WdfDeviceSetAlignmentRequirement(DevExt->Device, 4095), but this does not result in the DMA start address being properly aligned.
How can we configure the WDF framework so that it only requests properly aligned addresses?
You can handle this in the user-space application: allocate aligned memory in user space and then hand it to the kernel program. It is not easy for a driver to align memory that has already been allocated and initialized. Even in a user-space application we have to allocate extra space and then use the aligned part (I know it's not pretty; that's why I recommend solving this problem on the device side).
For example, if you use C++ for your user-space application, you can do something like this:
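The technique described above, over-allocating and rounding up to the next 4 KiB boundary, might look like this (plain C, also valid as C++; the function name and two-pointer interface are illustrative, not a standard API):

    #include <stdint.h>
    #include <stdlib.h>

    #define DMA_ALIGNMENT 4096u   /* 4 KiB boundary from the question */

    /* Returns a pointer aligned to DMA_ALIGNMENT inside an over-allocated
       block; *raw_out receives the original pointer to pass to free(). */
    void *alloc_aligned_4k(size_t size, void **raw_out)
    {
        void *raw = malloc(size + DMA_ALIGNMENT - 1);
        if (!raw)
            return NULL;
        *raw_out = raw;
        uintptr_t p = ((uintptr_t)raw + DMA_ALIGNMENT - 1)
                      & ~(uintptr_t)(DMA_ALIGNMENT - 1);
        return (void *)p;
    }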
