Are there any MIPI payload restrictions on different SoC platforms? - linux-kernel

I know that the MIPI payload should be a multiple of 8 bits (that is, each MIPI packet length should be a whole number of bytes).
But on Qualcomm SoCs, I found it is better to make sure the MIPI payload is a multiple of 8 or 16 bytes.
On other SoCs (for example MediaTek), are there any further length restrictions on the MIPI payload?
I tried this on Qualcomm SoCs: if the payload is not a multiple of 16 bytes, padding bytes (zeros) are appended to the end of each packet.
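A minimal sketch of the padding behaviour described above, assuming the controller pads every packet up to a 16-byte boundary (the alignment value is an assumption taken from the observation above; check your SoC's CSI/DSI controller documentation):

    #include <stdio.h>

    /* Assumed alignment requirement, as observed on the Qualcomm SoC above. */
    #define MIPI_PAYLOAD_ALIGN 16

    /* Round a payload length up to the next multiple of MIPI_PAYLOAD_ALIGN. */
    static unsigned int padded_payload_len(unsigned int len)
    {
        return (len + MIPI_PAYLOAD_ALIGN - 1) & ~(MIPI_PAYLOAD_ALIGN - 1U);
    }

    int main(void)
    {
        unsigned int len = 1000;                       /* example payload length   */
        unsigned int padded = padded_payload_len(len); /* 1008 with 16-byte align  */

        printf("payload %u -> padded %u (%u zero bytes appended)\n",
               len, padded, padded - len);
        return 0;
    }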

Related

ACP and DMA: how do they work?

I'm using an ARM Cortex-A53 platform. It has an ACP component, and I'm trying to use DMA to transfer data through the ACP.
According to the ARM TRM, if I understand it correctly, the DMA transfer size is limited to 64 bytes per transfer when using the ACP.
If so, does this limitation make DMA unusable? It seems wasteful to configure a DMA descriptor only to transfer 64 bytes at a time.
Or should the DMA automatically divide its transfer length into many ACP-size-limited (64-byte) packets, without any software intervention?
I need an expert to explain how the ACP and DMA work together.
Somewhere in the path from the DMA to the ACP's AXI port, the transfer length should be divided automatically into transfers of an appropriate length. For the Cortex-A53 ACP, AXI transfers are limited to 64 bytes (perhaps intentionally one cache line).
From https://developer.arm.com/documentation/ddi0500/e/level-2-memory-system/acp/transfer-size-support :
x-byte INCR request characterized by: (some list of limitations)
Note the use of INCR instead of FIXED. INCR automatically increments the address according to the size of the transfer, while FIXED does not. This makes it simple for the peripheral to break a large transfer into a series of INCR transfers.
However, do note that on the Cortex-A53, the transfer size (x in the quote) is fixed at 16- or 64-byte aligned transfers. If the DMA issues an inappropriately sized transfer (because it is misconfigured or the correct size is unsupported), the AXI port will respond with a SLVERR. If the buffer is not appropriately aligned, I think this also causes a SLVERR.
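If the DMA engine or interconnect does not do this splitting for you, software can chop a buffer into ACP-sized bursts itself. A minimal sketch, assuming a hypothetical issue_axi_burst() helper that programs one descriptor (the 64-byte limit and the alignment check follow the TRM quote above):

    #include <stddef.h>
    #include <stdint.h>

    #define ACP_BURST_BYTES 64  /* Cortex-A53 ACP limit: one 64-byte (cache-line) burst */

    /* Hypothetical helper: program one DMA descriptor for a single AXI burst. */
    extern int issue_axi_burst(uintptr_t addr, size_t len);

    /*
     * Split [addr, addr + len) into 64-byte, 64-byte-aligned bursts.
     * Returns 0 on success, -1 if the buffer is not aligned to the burst size
     * (an unaligned or odd-sized burst would be answered with SLVERR).
     */
    int acp_dma_transfer(uintptr_t addr, size_t len)
    {
        if ((addr | len) & (ACP_BURST_BYTES - 1))
            return -1;

        while (len) {
            if (issue_axi_burst(addr, ACP_BURST_BYTES))
                return -1;
            addr += ACP_BURST_BYTES;
            len  -= ACP_BURST_BYTES;
        }
        return 0;
    }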
Lastly, the on-chip network routing must support connecting the DMA to the ACP, and that is decided at chip design time. In my experience this is more commonly done for network accelerators and FPGA fabric glue, and less often for low-speed peripherals like UART/SPI/I2C.

Gianfar Linux Kernel Driver Maximum Receive/Transmit Size

I have been trying to understand the code for the gianfar Linux Ethernet driver and was having difficulty understanding fragmented pages. I understand the maximum transmission size is 9600 bytes, but does this include fragments?
Is it possible to send and receive transmissions that are larger (e.g. 14000 bytes) if they are split among multiple fragments?
Thank you in advance
9600 bytes is the jumbo frame maximum size. The maximum MTU ("jumbo MTU") is 9600 - 14 = 9586 bytes. Also, if I recall correctly, the MTU never includes the 4-byte FCS.
So, 9586 must simply be the maximum Ethernet "payload" size which can be put on the wire. It's a limitation with respect to a single Ethernet frame. If you have a larger chunk of data (a "transmission"), you might be able to slice it and produce multiple Ethernet frames from it (to be precise, multiple independent skbs), each fitting the MTU. In this case you will have multiple independent Ethernet frames to hand over to the network driver. The interconnection between these frames will only be detectable at the IP header level: if you peek at the IP header of the first frame, you will see the "more fragments" flag, indicating that the next frame contains an IP packet which is the next fragment of the original (large) chunk of data. But from the driver's point of view such frames remain independent.
However, if you mean "skb fragments" rather than "IP fragments", then putting a 14000-byte frame into multiple fragments ("data fragments") of a single skb will not help with respect to the MTU (say, you've configured the jumbo MTU on the interface), because these fragments are just smaller chunks of contiguous memory containing different parts of the same Ethernet frame. The driver simply sets up multiple descriptors pointing to these chunks of memory, and the hardware picks them up to send a single frame. If the hardware sees that the overall frame length is bigger than the maximum MTU, it may decline the transmission; the exact behaviour in that case is a topic for a separate discussion.
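To make the skb-fragment case concrete, here is a rough sketch of how a transmit path typically walks the linear part and the page fragments of one skb, mapping each to its own hardware descriptor. map_tx_descriptor() is a hypothetical stand-in for the driver's real descriptor setup; skb_headlen(), skb_shinfo(), skb_frag_address() and skb_frag_size() are standard kernel helpers:

    #include <linux/skbuff.h>

    /* Hypothetical helper: point one hardware TX descriptor at a buffer. */
    extern void map_tx_descriptor(void *buf, unsigned int len);

    /* Sketch of a transmit path: one skb, several DMA descriptors. */
    static int sketch_xmit(struct sk_buff *skb)
    {
        unsigned int i, nr_frags = skb_shinfo(skb)->nr_frags;

        /* Linear (header) part of the frame. */
        map_tx_descriptor(skb->data, skb_headlen(skb));

        /* Paged fragments: still parts of the same Ethernet frame. */
        for (i = 0; i < nr_frags; i++) {
            skb_frag_t *frag = &skb_shinfo(skb)->frags[i];

            map_tx_descriptor(skb_frag_address(frag), skb_frag_size(frag));
        }

        /* The NIC concatenates all descriptors into a single frame on the
         * wire, so the total length must still fit within the configured MTU. */
        return 0;
    }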

FTDI driver (Windows) FT_Write() issue with large (1KB) chunk - (version 2.12.16.0)

My application on the PC sends a file (2 MB) in chunks of 1 KB to an embedded device.
I use the FTDI Windows driver and the classic FT_Write() API function, as my code is cross-platform.
Note: the issues below appear when I use a 1 KB chunk size. A smaller chunk (I tried 64 bytes) works fine.
The problem is that the function returns "0 bytes sent" every couple of hundred packets and then gets stuck. I found a workaround: purging both TX and RX, followed by a ResetDevice() call, recovers the chip. It still happens every couple of hundred packets, but at least I can send the whole file (2 MB).
But when I use a USB isolator (http://www.bb-elec.com/Products/USB-Connectivity/USB-Isolators/Compact-USB-Port-Guardian.aspx),
the workaround fails.
I believe my workaround is not a graceful solution.
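For reference, the workaround described above looks roughly like this. This is only a sketch using the D2XX calls FT_Write(), FT_Purge() and FT_ResetDevice(); the chunk size and single-retry policy are my assumptions, not a recommended fix:

    #include <windows.h>
    #include "ftd2xx.h"

    #define CHUNK_SIZE 1024

    /* Send one chunk; on a stalled "0 bytes written" result, apply the
     * purge + reset workaround and retry once. */
    static FT_STATUS send_chunk(FT_HANDLE h, BYTE *buf, DWORD len)
    {
        DWORD written = 0;
        FT_STATUS st = FT_Write(h, buf, len, &written);

        if (st == FT_OK && written == 0) {
            FT_Purge(h, FT_PURGE_RX | FT_PURGE_TX);  /* drop stale FIFO contents */
            FT_ResetDevice(h);                       /* recover the chip         */
            st = FT_Write(h, buf, len, &written);
        }
        return (st == FT_OK && written == len) ? FT_OK : FT_OTHER_ERROR;
    }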
Note: I use a large chunk because of a suggestion I found in the FTDI application note below:
When writing data to an FTDI device, as much data as possible should
be buffered in the application and written to the device in a single
write function call (either WriteFile for a VCP application using the
Win32 API, FT_Write if using the D2XX classic interface or
FT_WriteFile if using the D2XX FT_W32 interface). The result of this
is that the data will be written to the device with 64 bytes per USB
packet.
Any idea what the proper fix for these issues is? Is it related to FTDI initialization? My driver version is 2.12.16.0 (3/9/2016).
I also saw the same problem of the FT_Write() API not working right if too much data was passed, while working on the library for my USB device Nusbio.
I mostly work in synchronous bit-bang mode rather than UART, but after all it is the same hardware, driver and API.
There is the USB 2.0 specification or the FTDI FT232RL specification, and then there is the reality of the electrons and bits. The expected transfer speeds never really match, at least at first. In other words, it is complicated (see more below in my referenced blog post).
In 2015 I was under the impression that, with the FTDI FT232RL chip, a size of 384 bytes worked well; the number comes from the chip datasheet (128-byte receive buffer and 256-byte transmit buffer).
Using a size of 500 bytes would still work, but above 600 bytes things would not work.
I later used the FT231X chip, which has a larger buffer (1 KB: 512-byte receive buffer and 512-byte transmit buffer), and was able to transfer 1 KB and 2 KB buffers of data with FT_Write(), therefore more than doubling my transfer speed.
But above 2 KB things would not work.
In 2016, having read everything you can read about FTDI USB 2.0 full-speed chips, I came to the conclusion that FT_Write() should support up to 64 KB (see the datasheets for the FT232RL, FT231X, FT232H, FT260 and FT4222 chips).
I also did some research on serial port communication from .NET faster than 115200 baud.
Somehow I was able to update my C# library to send data in 32 KB buffers with FT_Write(), and it is working with the FT232RL and FT231X chips, but I can't tell you what changed.
I probably did not completely understand the ins and outs of the USB 2.0 full-speed FTDI technology.
For example, let's say you are using the FT232RL and transferring 384 bytes at a time with FT_Write(). Knowing that there is at least a 1 millisecond latency in USB 2.0 full speed whatever you do, you are transferring, from a USB point of view, 384 * 1000 / 1024, that is 375 KB/s in theory (that would be the maximum). That said, what baud rate does your embedded device support, and which baud rate are you using?
The FT232RL's maximum baud rate is 900,000 baud, which would give you only 900000 / (1 + 8 + 1), about 87 KB/s.
Right away you can tell there is going to be a problem; maybe the FTDI driver takes care of it, maybe not. I can't tell.
Redo the math based on the baud rate supported by your embedded device and a 384-byte buffer sent 1000 times per second, then slow down your USB writes with a sleep() to match your baud rate (a rough sketch follows).
That is where I would start.
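A rough sketch of that pacing idea, assuming a 384-byte chunk and a 115200-baud link (both numbers are placeholders for your own setup; the only real API calls are FT_Write() and Sleep()):

    #include <windows.h>
    #include "ftd2xx.h"

    #define CHUNK_BYTES 384      /* per-call payload, as discussed above */
    #define BAUD_RATE   115200   /* placeholder: use your device's rate  */

    /* Send a buffer, pacing each FT_Write() so the USB side does not
     * outrun the UART: 10 bits per byte (start + 8 data + stop). */
    static FT_STATUS paced_send(FT_HANDLE h, BYTE *buf, DWORD total)
    {
        DWORD sent = 0, written;
        DWORD ms_per_chunk = (CHUNK_BYTES * 10UL * 1000UL) / BAUD_RATE + 1;

        while (sent < total) {
            DWORD len = (total - sent > CHUNK_BYTES) ? CHUNK_BYTES : total - sent;
            FT_STATUS st = FT_Write(h, buf + sent, len, &written);

            if (st != FT_OK)
                return st;
            sent += written;
            Sleep(ms_per_chunk);   /* crude pacing to match the baud rate */
        }
        return FT_OK;
    }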

Output from an ADC needs to be stored in memory

We want to take the output of a 16-bit analog-to-digital converter, which arrives at a rate of 10 million samples per second, and save the sequence of 16-bit output words in computer memory. How can this 16-bit binary voltage signal (0 V, 5 V) be saved in computer memory?
If an FPGA is to be used, please elaborate on the method.
1. Sample the data and feed it into a FIFO.
2. Take data from the FIFO, prepare UDP frames, and send the data over Ethernet. Note that 10 Msps * 16 bits is 160 Mbit/s of raw sample data, so a gigabit link is needed.
3. Receive the UDP packets on the PC side and put them in memory (a PC-side sketch follows).
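A minimal PC-side sketch for step 3, assuming the FPGA streams raw little-endian 16-bit samples in UDP datagrams to port 5000 (the port, packet size and sample ordering are assumptions about the FPGA design, not part of the original answer):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define PORT         5000                 /* assumed UDP port                  */
    #define SAMPLES_MAX  (10 * 1000 * 1000)   /* one second's worth at 10 Msps     */

    int main(void)
    {
        static uint16_t samples[SAMPLES_MAX]; /* destination memory buffer         */
        uint8_t pkt[1472];                    /* max UDP payload w/o fragmentation */
        size_t count = 0;

        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port = htons(PORT),
                                    .sin_addr.s_addr = htonl(INADDR_ANY) };
        bind(sock, (struct sockaddr *)&addr, sizeof(addr));

        while (count < SAMPLES_MAX) {
            ssize_t n = recv(sock, pkt, sizeof(pkt), 0);
            if (n <= 0)
                break;
            /* Copy the received 16-bit samples into the in-memory buffer. */
            size_t nsamp = (size_t)n / 2;
            if (count + nsamp > SAMPLES_MAX)
                nsamp = SAMPLES_MAX - count;
            memcpy(&samples[count], pkt, nsamp * 2);
            count += nsamp;
        }

        printf("stored %zu samples\n", count);
        close(sock);
        return 0;
    }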

When to Update ALSA Audio Driver Buffer Pointer

I am writing a USB audio playback driver using the ALSA APIs. To do this, I was trying to understand the existing audio drivers in the Linux kernel, but I am confused about when to update the kernel audio buffer pointer. The kernel puts new audio data in a ring buffer, and our driver's task is to take new data from the ring buffer, pass it over USB, and update the kernel buffer pointer.
The drivers I was looking at take care of this in the URB completion function. Say they have a predefined macro for the USB transfer size, which is around 4096 bytes in almost all cases. When the URB transfer is finished and execution reaches URB completion, they copy another 4096 bytes from the kernel buffer into the URB buffer, submit the URB again to the USB controller, and advance the kernel buffer pointer by 4096 bytes.
But what I don't understand is: how can they be so sure that by the time a URB transfer is finished, there are 4096 bytes of new data in the kernel buffer? Couldn't the amount of new data in the kernel buffer be smaller than 4096 bytes? Then why do they always advance the buffer pointer by 4096 bytes? I think there should be some way of knowing how many new bytes are in the kernel buffer, and the driver should only advance by that amount, or maybe I misunderstood something. Any suggestion or guideline is appreciated.
These USB audio drivers behave exactly like a PCI sound card, i.e., when the device needs some samples, those samples are just read from the ring buffer.
A PCI chip has no way of knowing what part of the buffer actually contains valid samples.
A buffer underrun is detected later by software (the device informs the driver about the current position with an interrupt; the interrupt handler then raises the underrun error if the position is too far ahead).
USB audio drivers use exactly the same mechanism for detecting underruns, i.e., the snd_pcm_period_elapsed() function checks whether the current position (as returned by your .pointer callback) is too far ahead.
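A rough sketch of how those pieces fit together in a driver, under the assumptions above. The per-stream state in hwptr_frames and the way it is stashed in private data are hypothetical; snd_pcm_period_elapsed(), bytes_to_frames() and the .pointer callback are the real ALSA interfaces:

    #include <sound/pcm.h>
    #include <linux/usb.h>

    /* Hypothetical per-stream state: frames consumed by the hardware so far. */
    struct my_stream {
        struct snd_pcm_substream *substream;
        snd_pcm_uframes_t hwptr_frames;
    };

    /* .pointer callback: report how far the device has read in the ring buffer. */
    static snd_pcm_uframes_t my_pcm_pointer(struct snd_pcm_substream *substream)
    {
        struct my_stream *s = snd_pcm_substream_chip(substream);

        return s->hwptr_frames % substream->runtime->buffer_size;
    }

    /* URB completion: the device consumed one URB's worth of samples, so just
     * advance the hardware position and let the ALSA core check for underruns. */
    static void my_urb_complete(struct urb *urb)
    {
        struct my_stream *s = urb->context;

        s->hwptr_frames += bytes_to_frames(s->substream->runtime,
                                           urb->actual_length);

        /* The core compares .pointer against the application pointer and
         * raises an underrun if the device ran ahead of the valid data. */
        snd_pcm_period_elapsed(s->substream);

        /* ...refill and resubmit the URB here... */
    }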
