Reading an MPEG-TS file out to a parallel port at 27 MHz - FPGA

I plan to read an MPEG-TS file from an SD card and write it to an 8-bit parallel output as an MPEG transport stream with a 27 MHz clock (a simple FPGA-based MPEG-TS tester for DVB purposes). But I can't work out the rate at which packets should be written to the output port, or how many null packets I need to insert into the output. Can you help me?
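For what it's worth, here is a minimal sketch of the arithmetic, assuming one byte is clocked out per 27 MHz cycle (so the bus carries 27 Mbyte/s = 216 Mbit/s) and that the bitrate of the recorded stream is known, e.g. from its mux rate or by measuring PCR spacing. The 31.67 Mbit/s figure is just an example DVB-T mux rate, not taken from your setup:

    #include <stdio.h>

    /* Rate sketch: one byte per 27 MHz clock cycle on the 8-bit bus
     * (an assumption), TS packets are 188 bytes, null packets use
     * PID 0x1FFF.  STREAM_BITRATE is an example mux rate, not measured. */
    #define TS_PACKET_BYTES  188
    #define OUT_CLOCK_HZ     27000000.0     /* byte clock -> 216 Mbit/s  */
    #define STREAM_BITRATE   31668449.0     /* example DVB-T mux, bit/s  */

    int main(void)
    {
        double out_pkt_rate  = OUT_CLOCK_HZ / TS_PACKET_BYTES;           /* packets/s */
        double data_pkt_rate = STREAM_BITRATE / (TS_PACKET_BYTES * 8.0); /* packets/s */
        double null_pkt_rate = out_pkt_rate - data_pkt_rate;

        printf("output:       %.0f packets/s (%.1f Mbit/s)\n",
               out_pkt_rate, OUT_CLOCK_HZ * 8.0 / 1e6);
        printf("file data:    %.0f packets/s\n", data_pkt_rate);
        printf("null packets: %.0f packets/s (~%.2f nulls per data packet)\n",
               null_pkt_rate, null_pkt_rate / data_pkt_rate);
        return 0;
    }

In other words, the port always emits about 143,617 packets per second; whatever fraction of that is not filled by packets from the file has to be padded with null packets (PID 0x1FFF).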

Related

STM32F411: I need to send a lot of data over USB at high speed

I'm using an STM32F411 with the USB CDC library, and the maximum speed for this library is ~1 Mb/s.
I'm creating a project where I have 8 microphones connected to ADC lines (this part works fine). I need a 16-bit signal, so I increase the accuracy by summing the first 16 samples from each line (the ADC gives only a 12-bit signal). In my project I need 96k 16-bit samples per line, so that's 0.768M samples for all 8 lines. This data needs 12000 Kb of space, but the STM32 has only 128 Kb of SRAM, so I decided to send about 120 chunks of 100 Kb each per second.
The conclusion is that I need ~11.72 Mb/s to send this.
The problem is that I'm unable to do that, because CDC USB limits me to ~1 Mb/s.
The question is how to increase the USB speed to 12 Mb/s on the STM32F4. I need a pointer or a library.
Or should I maybe set up an "audio device" in CubeMX?
If a lowercase b in your question means bytes, the answer is: it is not possible, as your micro has a Full-Speed USB peripheral whose maximum raw signalling rate is 12 Mbit/s.
If it means bits, your ~1 Mb/s speed assumption is wrong. But you will still not reach a 12 Mbit/s payload transfer rate.
You may try to write your own class (only if b means bits), but I am afraid you will not find a ready-made library. You would also need to write the device driver on the host computer.
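As a back-of-the-envelope check of the figures in the question (a sketch only; the ~11.72 value simply comes from counting 1 Mb as 1024*1024 bits):

    #include <stdio.h>

    /* Throughput check for the figures in the question. */
    int main(void)
    {
        double lines       = 8.0;
        double sample_rate = 96000.0;      /* samples/s per line        */
        double bits        = 16.0;

        double needed_bps  = lines * sample_rate * bits;   /* 12,288,000 bit/s      */
        double usb_fs_bps  = 12e6;                         /* FS USB raw line rate  */

        printf("needed : %.2f Mbit/s (%.2f Mib/s)\n",
               needed_bps / 1e6, needed_bps / (1024.0 * 1024.0));
        printf("FS USB : %.2f Mbit/s raw, payload is less\n", usb_fs_bps / 1e6);
        return 0;
    }

Either way, the required payload sits at or above the 12 Mbit/s raw line rate of Full-Speed USB, which is why a CDC class on this part cannot carry it.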

Convert LPCM buffer to AAC for HTTP Live Streaming

I have an application that records audio from devices into a Float32 (LPCM) buffer.
However, the LPCM needs to be encoded into an audio format (MP3, AAC) to be used as a media segment for streaming, according to the HTTP Live Streaming specification. I have found some useful resources on how to convert an LPCM file to an AAC/MP3 file, but that is not exactly what I am looking for, since I do not want to convert a file but a buffer.
What are the main differences between converting an audio file and a raw audio buffer (LPCM, Float32)? Is the latter more trivial?
My initial thought was to create a thread that would regularly fetch data from a ring buffer (where the raw audio is stored) and convert it to a valid audio format (either AAC or MP3).
Would it be more sensible to do so immediately when the AudioBuffer is captured through an AURenderCallback, thereby pruning the ring buffer?
Thanks for your help,
The Core Audio recording buffer length and the desired audio file length are rarely exactly the same. So it might be better to poll your circular/ring buffer (you know the sample rate, which tells you approximately how often) to decouple the two rates, and convert the buffer (once it is sufficiently full) to a file at a later time. You can memory-map a raw audio file to the buffer, but there may or may not be any performance difference between that and asynchronously writing a temp file.
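A rough sketch of that decoupled approach, in C, with a single-producer/single-consumer ring buffer. The producer (the AURenderCallback) and the actual AAC encoder are not shown, encode_chunk() is only a placeholder, and the 100 ms chunk size is an arbitrary choice:

    #include <pthread.h>   /* the consumer below would be started with pthread_create */
    #include <stdio.h>
    #include <unistd.h>

    /* Single-producer/single-consumer ring buffer of Float32 frames.
     * The producer (the AURenderCallback) advances wr; it is not shown.
     * encode_chunk() stands in for the real AAC/MP3 encoder. */

    #define RING_FRAMES   (44100 * 2)       /* ~2 s of mono audio         */
    #define CHUNK_FRAMES  (44100 / 10)      /* drain roughly every 100 ms */

    static float  ring[RING_FRAMES];
    static size_t rd, wr;                   /* wr is advanced by the producer */

    static size_t ring_available(void) { return (wr + RING_FRAMES - rd) % RING_FRAMES; }

    static size_t ring_read(float *dst, size_t frames)
    {
        for (size_t i = 0; i < frames; i++)
            dst[i] = ring[(rd + i) % RING_FRAMES];
        rd = (rd + frames) % RING_FRAMES;
        return frames;
    }

    static void encode_chunk(const float *pcm, size_t frames)
    {
        (void)pcm;
        printf("encoding %zu frames\n", frames);   /* stand-in for the encoder */
    }

    static void *consumer_thread(void *arg)        /* pass to pthread_create   */
    {
        (void)arg;
        float chunk[CHUNK_FRAMES];
        for (;;) {
            if (ring_available() >= CHUNK_FRAMES) {
                size_t n = ring_read(chunk, CHUNK_FRAMES);
                encode_chunk(chunk, n);            /* conversion stays off the audio thread */
            } else {
                usleep(10 * 1000);                 /* poll interval follows from the sample rate */
            }
        }
        return NULL;
    }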

Output from an ADC is needed to be stored in memory

We want to take the output of a 16-bit analog-to-digital converter, which arrives at a rate of 10 million samples per second, and save the sequence of 16-bit output words in a computer's memory. How can this 16-bit binary voltage signal (0 V, 5 V) be saved in computer memory?
If an FPGA is to be used, please elaborate on the method.
Sample the data and feed it into a FIFO.
Take the data from the FIFO, prepare UDP frames, and send them over Ethernet.
Receive the UDP packets on the PC side and put them in memory (see the receiver sketch below).
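At 10 Msps x 16 bits the stream is about 160 Mbit/s plus UDP/IP overhead, so a gigabit link is needed for step 2. A hedged sketch of step 3, the PC-side receiver (port 5000 and the one-second capture length are arbitrary choices):

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Listens on UDP port 5000 (arbitrary) and appends the 16-bit samples
     * from each datagram to a buffer in memory; here one second of data
     * (10 M samples = 20 MB) is captured and then the program exits. */
    #define PORT         5000
    #define MAX_SAMPLES  (10 * 1000 * 1000)

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = {0};
        addr.sin_family      = AF_INET;
        addr.sin_port        = htons(PORT);
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        if (bind(sock, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("bind"); return 1;
        }

        uint16_t *samples = malloc(MAX_SAMPLES * sizeof *samples);
        size_t    count   = 0;
        uint8_t   dgram[1500];                       /* one Ethernet-sized datagram */

        while (count < MAX_SAMPLES) {
            ssize_t n = recv(sock, dgram, sizeof dgram, 0);
            if (n <= 0) break;
            size_t nsamp = (size_t)n / 2;            /* two bytes per sample */
            if (count + nsamp > MAX_SAMPLES) nsamp = MAX_SAMPLES - count;
            memcpy(&samples[count], dgram, nsamp * 2);
            count += nsamp;
        }
        printf("captured %zu samples\n", count);

        free(samples);
        close(sock);
        return 0;
    }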

digital audio output - what format is it in?

My MacBook has an optical digital audio output 3.5 mm plug (see here). I'm asking here on SO because I think this is a standard digital audio output plug; the description says I should use a Toslink cable with a Toslink mini-plug adapter or a fiber-optic cable.
I was wondering: What is the format of the audio data transferred over this cable? Is it a fixed format, e.g. 44.1kHz, 16bit integer, two-channel (standard PCM like from an audio CD)? Or what formats does it allow? For example, I would like to send 96kHz (or 48kHz), 32bit float (or 24bit integer), two-channel (or 6 channels) audio data over it. How is the data encoded? How does the receiver (the DA converter) know about the format? Is there some communication back from the receiver so that the receiver tells my computer what format it would prefer? Or how do I know the maximal sample rate and the maximal bit width of a sample?
How do I do that on the software side? Is it enough to tell CoreAudio to use whatever format I like and it puts that unmodified onto the cable? At least that is my goal. So basically my main questions are: what formats are supported, and how do I make sure that the raw audio data in my application ends up on the cable in exactly that format?
Digital audio interconnects like TOSLINK use the S/PDIF protocol. The channel layout and compression status are encoded in the stream, and the sample rate is implied by the speed at which the signal is sent (!). For uncompressed streams, S/PDIF transmits 24-bit (integer) PCM data. (Lower bit depths can be transmitted as well; S/PDIF just pads them out to 24 bits anyway.) Note that, due to bandwidth constraints, compression must be used if more than two channels are being transmitted.
From the software side, on OS X, most of the properties of a digital audio output are controlled by the settings of your audio output device.
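For example, on the software side you can query (and, with AudioObjectSetPropertyData, change) the output device's nominal sample rate through the Core Audio HAL. A minimal sketch, compiled with `clang file.c -framework CoreAudio`; it only illustrates reading one property, not the full format negotiation:

    #include <CoreAudio/CoreAudio.h>
    #include <stdio.h>

    /* Reads the nominal sample rate of the default output device through
     * the Core Audio HAL.  Other device/stream properties are read (and,
     * where allowed, set) the same way. */
    int main(void)
    {
        AudioDeviceID dev = kAudioObjectUnknown;
        UInt32 size = sizeof(dev);
        AudioObjectPropertyAddress addr = {
            kAudioHardwarePropertyDefaultOutputDevice,
            kAudioObjectPropertyScopeGlobal,
            kAudioObjectPropertyElementMaster
        };
        if (AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr,
                                       0, NULL, &size, &dev) != 0)
            return 1;

        Float64 rate = 0;
        size = sizeof(rate);
        addr.mSelector = kAudioDevicePropertyNominalSampleRate;
        if (AudioObjectGetPropertyData(dev, &addr, 0, NULL, &size, &rate) != 0)
            return 2;

        printf("current nominal sample rate: %.0f Hz\n", rate);
        return 0;
    }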

MPEG2 TS end-to-end delay

I need to calculate the end-to-end delay (between the encoder and the decoder) in an MPEG-2 TS
based on timestamp information (PTS, PCR, DTS). Are those timestamps enough to calculate the delay?
These timestamps are inserted into the transport stream by the encoder and are used by the decoder, for example to sync audio and video frames and, in general, to lock to the original clock so the video is displayed correctly.
The delay between an encoder and a decoder, on the other hand, is like asking what the delay is between transmitting data at the source and receiving it at the destination. This is not determined by the data itself (i.e. the transport stream and the timestamps within it) but by the network conditions.
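For completeness, a small sketch of pulling the PCR out of a single 188-byte TS packet, which is the timing information referred to above. The field layout follows the MPEG-2 Systems spec; the packet in main() is a made-up example:

    #include <stdint.h>
    #include <stdio.h>

    /* PCR = PCR_base (33 bits, 90 kHz) * 300 + PCR_extension (9 bits),
     * giving a value in 27 MHz ticks.  Returns 0 if the packet has no PCR. */
    static int ts_get_pcr(const uint8_t *pkt, uint64_t *pcr_27mhz)
    {
        if (pkt[0] != 0x47)                 /* sync byte                     */
            return 0;
        int afc = (pkt[3] >> 4) & 0x3;      /* adaptation_field_control      */
        if (!(afc & 0x2) || pkt[4] == 0)    /* no adaptation field, or empty */
            return 0;
        if (!(pkt[5] & 0x10))               /* PCR_flag not set              */
            return 0;

        uint64_t base = ((uint64_t)pkt[6] << 25) | ((uint64_t)pkt[7] << 17) |
                        ((uint64_t)pkt[8] << 9)  | ((uint64_t)pkt[9] << 1)  |
                        (pkt[10] >> 7);
        uint32_t ext  = ((pkt[10] & 0x01) << 8) | pkt[11];

        *pcr_27mhz = base * 300 + ext;
        return 1;
    }

    int main(void)
    {
        uint8_t pkt[188] = {0x47, 0x40, 0x00, 0x20, 0x07, 0x10,
                            0x00, 0x00, 0x00, 0x01, 0x80, 0x00};   /* toy packet */
        uint64_t pcr;
        if (ts_get_pcr(pkt, &pcr))
            printf("PCR = %llu (27 MHz ticks) = %.6f s\n",
                   (unsigned long long)pcr, pcr / 27e6);
        return 0;
    }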
