Gianfar Linux Kernel Driver Maximum Receive/Transmit Size - linux-kernel

I have been trying to understand the code for the gianfar Linux Ethernet driver and was having difficulty understanding fragmented pages. I understand the maximum transmission size is 9600 bytes, but does this include fragments?
Is it possible to send and receive transmissions that are larger (e.g. 14000 bytes) if they are split among multiple fragments?
Thank you in advance

9600 is the jumbo frame maximum size. The maximum MTU ("jumbo MTU") is therefore 9600 - 14 = 9586 bytes (the 14 bytes being the Ethernet header). Also, if I recall correctly, the MTU never includes the 4-byte FCS.
So, 9586 must simply be the maximum Ethernet payload size which can be put on the wire. It is a limitation with respect to a single Ethernet frame. So, if you have a larger chunk of data (a "transmission"), you may be able to slice it up and produce multiple Ethernet frames from it (to be precise, multiple independent skbs), each fitting within the MTU. In this case you hand multiple independent Ethernet frames over to the network driver. The interconnection between these frames is only detectable at the IP header level: if you peek at the IP header of the first frame, you will see the "more fragments" flag, indicating that the next frame carries an IP packet which is the next fragment of the original (large) chunk of data. But from the driver's point of view such frames remain independent.
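For illustration, here is a minimal user-space sketch (assuming a buffer that starts at the IPv4 header; the function name and buffer are hypothetical) of how the "more fragments" flag and fragment offset can be read:

    #include <stdint.h>
    #include <stdio.h>
    #include <netinet/ip.h>   /* struct iphdr, IP_MF, IP_OFFMASK */
    #include <arpa/inet.h>    /* ntohs() */

    /* Inspect the fragmentation fields of an IPv4 header.
     * 'pkt' is assumed to point at the start of the IP header. */
    static void inspect_fragment(const uint8_t *pkt)
    {
        const struct iphdr *ip = (const struct iphdr *)pkt;
        uint16_t frag = ntohs(ip->frag_off);

        int more_fragments = !!(frag & IP_MF);      /* MF flag set? */
        unsigned offset = (frag & IP_OFFMASK) * 8;  /* offset in bytes */

        printf("MF=%d, fragment offset=%u bytes\n", more_fragments, offset);
    }

Every fragment except the last has MF set; the offset tells the receiving IP stack where the fragment's payload belongs in the reassembled packet.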
However, if you mean skb fragments rather than IP fragments, then putting a 14000-byte frame into multiple fragments ("data fragments") of a single skb will not help you get past the MTU (say, you have configured the jumbo MTU on the interface). These fragments are just smaller chunks of contiguous memory holding different parts of the same Ethernet frame. The driver simply creates multiple descriptors pointing at these chunks of memory, and the hardware picks them up to send a single frame. If the hardware sees that the overall frame length exceeds the maximum MTU, it may decline the transmission; the exact behaviour in that case is a topic for a separate discussion.
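As a rough, kernel-style sketch of what that descriptor setup looks like (fill_tx_descriptor() is a hypothetical helper; the real gianfar code also maps each chunk for DMA and differs in detail):

    #include <linux/skbuff.h>

    /* Hypothetical helper: point one hardware TX descriptor at a
     * chunk of memory, marking the last chunk of the frame. */
    void fill_tx_descriptor(void *addr, unsigned int len, bool last);

    static void queue_skb_for_tx(struct sk_buff *skb)
    {
        int nr_frags = skb_shinfo(skb)->nr_frags;
        int i;

        /* First descriptor: the linear (header) part of the skb. */
        fill_tx_descriptor(skb->data, skb_headlen(skb), nr_frags == 0);

        /* One more descriptor per paged fragment; the hardware
         * concatenates all of them into a single Ethernet frame. */
        for (i = 0; i < nr_frags; i++) {
            const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];

            fill_tx_descriptor(skb_frag_address(frag),
                               skb_frag_size(frag),
                               i == nr_frags - 1);
        }
    }

The key point is that all the descriptors belong to one frame, so their total length is still bounded by the MTU check, no matter how many fragments the skb carries.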

Related

STM32F411: I need to send a lot of data over USB at high speed

I'm using an STM32F411 with the USB CDC library, and the maximum speed I get with this library is ~1Mb/s.
I'm building a project with 8 microphones connected to an ADC line (this part works fine). I need a 16-bit signal, so I increase the accuracy by summing the first 16 readings from one line (the ADC only gives a 12-bit signal). In my project I need 96k 16-bit samples per line, i.e. 0.768M samples across all 8 lines. This data needs 12000Kb of space, but the STM32 has only 128KB of SRAM, so I decided to send about 120 packets of ~100Kb each in one second.
The conclusion is that I need ~11.72Mb/s to send all of it.
The problem is that I'm unable to do that, because CDC USB limits me to ~1Mb/s.
The question is how to increase the USB speed to 12Mb/s on the STM32F4. I need some pointer or library.
Or should I set up an "audio device" in CubeMX instead?
If the small "b" in your question means bytes, the answer is: it is not possible, as your micro has a full-speed (FS) USB peripheral whose maximum speed is 12Mbit per second.
If it means bits, then your ~1Mb/s speed assumption is wrong, but you will still not reach 12Mbit of payload transfer.
You may try to write your own device class (only if "b" means bits), but I'm afraid you will not find a ready-made library, and you would also need to write the device driver on the host computer.
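As a quick back-of-envelope check of the numbers (plain C, nothing board-specific; the figures are taken from the question itself):

    #include <stdio.h>

    int main(void)
    {
        /* 8 ADC lines, 96,000 16-bit samples per line per second. */
        double required_bps = 8 * 96000.0 * 16;  /* = 12.288 Mbit/s */
        double usb_fs_bps   = 12e6;              /* USB full speed, raw bit rate */

        printf("required:   %.3f Mbit/s\n", required_bps / 1e6);
        printf("USB FS raw: %.0f Mbit/s\n", usb_fs_bps / 1e6);
        /* The required ~12.3 Mbit/s already exceeds the raw FS bit
         * rate before any protocol overhead is counted. */
        return 0;
    }

So even a perfect custom class cannot carry the full data rate over FS USB.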

ACP and DMA: how do they work?

I'm using an ARM Cortex-A53 platform; it has an ACP component, and I'm trying to use DMA to transfer data through the ACP.
According to the ARM TRM, if I understand it correctly, the DMA transfer size is limited to 64 bytes per transfer when using the ACP.
If so, doesn't this limitation make DMA unusable? It seems pointless to configure a DMA descriptor only to transfer 64 bytes at a time.
Or should the DMA automatically divide its transfer length into many ACP-size-limited (64-byte) packets, without any software intervention?
I need an expert to explain how the ACP and DMA work together.
Something in the interface between the DMA and the ACP's AXI port should automatically divide the transfer length as needed into transfers of appropriate length. For the Cortex-A53 ACP, AXI transfers are limited to 64B (likely intentionally one cache line).
From https://developer.arm.com/documentation/ddi0500/e/level-2-memory-system/acp/transfer-size-support :
x byte INCR request characterized by: (a list of limitations follows)
Note the use of INCR instead of FIXED: INCR automatically increments the address according to the size of the transfer, while FIXED does not. This makes it simple for the peripheral to break a large transfer into a series of multiple INCR transfers.
However, do note that on the Cortex-A53 the transfer size (x in the quote) is fixed at 16- or 64-byte aligned transfers. If the DMA sends an inappropriately sized transfer (because it is misconfigured or the correct size is unsupported), the AXI port will emit a SLVERR. If the buffer is not appropriately aligned, I think this also causes a SLVERR.
Lastly, the on-chip network routing must support connecting the DMA to the ACP, which is fixed at chip design time. In my experience this is more commonly done for network accelerators and FPGA fabric glue, and less often for low-speed peripherals like UART/SPI/I2C.
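To make the division concrete, here is a hedged sketch of what the splitting amounts to, written as if software had to queue one ACP-sized burst at a time (queue_burst() is an invented placeholder; real DMA engines normally do this in hardware):

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    #define ACP_BURST_BYTES 64u  /* Cortex-A53 ACP: 64-byte INCR bursts */

    /* Hypothetical placeholder: program one descriptor/burst. */
    void queue_burst(uintptr_t addr, size_t len);

    /* Split a buffer into ACP-sized INCR bursts. The A53 ACP only
     * accepts 16- or 64-byte aligned bursts, so for simplicity we
     * require a 64-byte-aligned, 64-byte-multiple buffer. */
    static void split_for_acp(uintptr_t addr, size_t len)
    {
        assert(addr % ACP_BURST_BYTES == 0);
        assert(len  % ACP_BURST_BYTES == 0);

        while (len) {
            queue_burst(addr, ACP_BURST_BYTES);
            addr += ACP_BURST_BYTES;
            len  -= ACP_BURST_BYTES;
        }
    }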

Is Realterm dropping characters or am I?

I'm using a SAML21 board to accept some data over a serial connection and, at the moment, just mirror it to a serial port on a computer. The data is 6 bytes at ~250Hz (it was closer to 3kHz before). As far as I can tell I'm tracking the start and end bytes correctly, but my columnar alignment in Realterm occasionally gets out of whack.
I have Realterm set up for 6 bytes in single mode, so all columns should present the same bytes up and down. However, over time, as I increase the rate at which I mirror (I am still receiving the data at a fixed rate), the first column's byte tends to drift.
I have not used Realterm at speeds this high before, so I am not aware of its limitations.
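One way to tell whether bytes are being dropped, independent of Realterm's display, is to resynchronize on the start byte in software and count short frames. A minimal sketch (the 0xAA start marker is an assumption; a real framer would also validate the end byte):

    #include <stdint.h>
    #include <stddef.h>

    #define FRAME_LEN  6
    #define START_BYTE 0xAA   /* hypothetical start-of-frame marker */

    /* Feed received bytes one at a time; returns 1 when a complete
     * frame has been assembled into 'frame'. Resynchronizing on the
     * start byte means a dropped byte corrupts at most one frame. */
    static int framer_push(uint8_t byte, uint8_t frame[FRAME_LEN])
    {
        static size_t pos = 0;

        if (pos == 0 && byte != START_BYTE)
            return 0;                 /* wait for start of frame */

        frame[pos++] = byte;
        if (pos == FRAME_LEN) {
            pos = 0;
            return 1;                 /* caller may consume 'frame' */
        }
        return 0;
    }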

Output from an ADC needs to be stored in memory

We want to take the output of a 16-bit analog-to-digital converter, which arrives at a rate of 10 million samples per second, and save the sequence of 16-bit output words in computer memory. How can this 16-bit binary voltage signal (0V, 5V) be saved in computer memory?
If an FPGA is to be used, please elaborate on the method.
Sample the data and feed it into a FIFO.
Take the data from the FIFO, prepare UDP frames, and send the data over Ethernet. (At 10Msps x 16 bits = 160Mbit/s of raw data, this needs at least a gigabit link.)
Receive the UDP packets on the PC side and put them in memory, as sketched below.
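For the PC side, a minimal POSIX-sockets sketch (the port number, buffer size, and output file are arbitrary choices):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(5000);       /* arbitrary port */

        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind"); return 1;
        }

        FILE *out = fopen("samples.bin", "wb");   /* sink for the samples */
        if (!out) { perror("fopen"); return 1; }

        unsigned char buf[1500];                  /* one Ethernet-sized datagram */
        for (;;) {
            ssize_t n = recv(fd, buf, sizeof(buf), 0);
            if (n <= 0)
                break;
            fwrite(buf, 1, (size_t)n, out);       /* append raw 16-bit samples */
        }

        fclose(out);
        close(fd);
        return 0;
    }

In practice you would also put sequence numbers in the UDP payload so dropped packets can be detected, since UDP gives no delivery guarantee.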

How do I calculate PCIe x1, 2.0, 3.0 speeds properly?

I am honestly very lost with the speeds calculations of PCIe devices.
I can understand the 33MHz - 66MHz clocks of PCI and PCI-X devices, but PCIe confuses me.
Could anyone explain how to calculate the transfer speeds of PCIe?
To understand the table pointed to by Paebbels, you should know how PCIe transmission works. Contrary to PCI and PCI-X, PCIe is a point-to-point serial bus with lane aggregation (meaning that several serial lanes are bonded together to increase transfer bandwidth).
For PCIe 1.0, a single lane transmits a symbol on every edge of a 1.25GHz clock, which yields a transfer rate of 2.5G transfers (symbols) per second. The protocol encodes 8 bits of data in 10 symbols (8b/10b encoding) for DC balance and clock recovery. Therefore the raw transfer rate of a lane is
2.5 Gsymb/s x (8 bits / 10 symb) = 2 Gbit/s = 250 MB/s
The raw transfer rate can be multiplied by the number of lanes available to get the full link transfer rate.
Note that the useful transfer rate is actually lower than this, because data is packetized, similar to the packetization in the Ethernet protocol layers.
A more detailed explanation can be found in this Xilinx white paper.
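As a compact illustration of the arithmetic across generations (PCIe 1.0/2.0 use 8b/10b encoding at 2.5 and 5 GT/s; PCIe 3.0 moves to 128b/130b at 8 GT/s):

    #include <stdio.h>

    /* Raw per-lane payload bandwidth in MB/s for a PCIe generation. */
    static double pcie_lane_MBps(int gen)
    {
        switch (gen) {
        case 1: return 2.5e9 * (8.0 / 10.0)    / 8 / 1e6;  /* 250 MB/s  */
        case 2: return 5.0e9 * (8.0 / 10.0)    / 8 / 1e6;  /* 500 MB/s  */
        case 3: return 8.0e9 * (128.0 / 130.0) / 8 / 1e6;  /* ~985 MB/s */
        default: return 0.0;
        }
    }

    int main(void)
    {
        const int lanes[] = { 1, 4, 8, 16 };
        for (int gen = 1; gen <= 3; gen++)
            for (size_t i = 0; i < sizeof(lanes) / sizeof(lanes[0]); i++)
                printf("PCIe %d.0 x%-2d: %8.1f MB/s\n",
                       gen, lanes[i], pcie_lane_MBps(gen) * lanes[i]);
        return 0;
    }

Multiplying the per-lane rate by the lane count gives the figures commonly quoted for x4/x8/x16 links; actual throughput is further reduced by TLP/DLLP packet overhead.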
