How to properly use MIDIReadProc? - core-audio

Apple's docs say:
Because your MIDIReadProc callback is invoked from a separate thread,
be aware of the synchronization issues when using data provided by
this callback.
Does this mean, use @synchronized to do thread blocking for safety?
Or does this literally mean synchronization timing issues may happen?
I am currently trying to read a MIDI file, and use a MIDIReadProc to trigger the note-on / note-off of a software synth based on MIDI events. I need this to be extremely reliable and perfectly in time. Right now, I am noticing that when I consume these MIDI events and write the audio to a buffer (all done from the MIDIReadProc), the timing is extremely sloppy and does not sound right at all. So I would like to know: what is the "proper" way to consume MIDI events from a MIDIReadProc?
Also, is a MIDIReadProc the only option for consuming midi events from a midi file?
Is there another option as far as setting up a virtual endpoint that could be directly consumed by my synthesizer? If so, how does that work exactly?

If you take a function of this format to be the midiReadProc,
void midiReadProc(const MIDIPacketList *packetList,
                  void *readProcRefCon,
                  void *srcConnRefCon)
{
    const MIDIPacket *packet = &packetList->packet[0];
    UInt32 count = packetList->numPackets;
    for (UInt32 k = 0; k < count; k++) {
        Byte midiStatus  = packet->data[0];
        Byte midiChannel = midiStatus & 0x0F;
        Byte midiCommand = midiStatus >> 4;
        // parse MIDI messages, extract relevant information and pass it to the controller
        // the controller must be visible from the midiReadProc
        packet = MIDIPacketNext(packet);  // must advance inside the loop, once per packet
    }
}
the MIDI client has to be declared in the controller; interpreted MIDI events get stored into the controller from the MIDI callback and are read by the audioRenderCallback() on each audio render cycle. This way you can minimize timing imprecision to the length of the audio buffer, which you can negotiate during AudioUnit setup to be as short as the system allows for.
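One way to implement that handoff without blocking either thread is a single-producer/single-consumer ring buffer. Here is a minimal sketch of the idea, assuming one MIDI thread producing and one render thread consuming (all names are illustrative, not part of any API):

    #include <atomic>
    #include <cstdint>

    struct MidiEvent { uint8_t status, data1, data2; };

    enum { kQueueSize = 1024 };                        // power of two
    static MidiEvent gQueue[kQueueSize];
    static std::atomic<unsigned> gHead{0};             // written only by the MIDI thread
    static std::atomic<unsigned> gTail{0};             // written only by the render thread

    // MIDI thread: never blocks; drops the event if the queue is full
    static bool enqueueEvent(const MidiEvent &e) {
        unsigned head = gHead.load(std::memory_order_relaxed);
        unsigned tail = gTail.load(std::memory_order_acquire);
        if (head - tail == kQueueSize) return false;   // full
        gQueue[head & (kQueueSize - 1)] = e;
        gHead.store(head + 1, std::memory_order_release);
        return true;
    }

    // render thread: drains all pending events at the top of each render cycle
    static bool dequeueEvent(MidiEvent &out) {
        unsigned tail = gTail.load(std::memory_order_relaxed);
        unsigned head = gHead.load(std::memory_order_acquire);
        if (tail == head) return false;                // empty
        out = gQueue[tail & (kQueueSize - 1)];
        gTail.store(tail + 1, std::memory_order_release);
        return true;
    }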
A controller can be an @interface myMidiSynthController : NSViewController you define, consisting of a matrix of MIDI channels and a pre-determined maximum polyphony per channel, among other relevant data such as interface elements, phase accumulators for each active voice, the AudioComponentInstance, etc. It would be wrong to resize the controller based on the midiReadProc() input. RAM is cheap nowadays.
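For illustration only, such a fixed-size controller layout could look like the following; the fields are an assumption of what the "relevant data" might be:

    #include <AudioUnit/AudioUnit.h>

    enum { kNumMIDIChannels = 16, kMaxPolyphonyPerChannel = 16 };

    struct Voice {
        int    note;        // -1 while the voice is free
        double phase;       // phase accumulator for this voice
        double phaseStep;   // derived from note frequency and sample rate
        double amplitude;   // from note-on velocity
    };

    struct MidiSynthState {
        Voice voices[kNumMIDIChannels][kMaxPolyphonyPerChannel]; // fixed matrix, never resized
        AudioComponentInstance outputUnit;                       // set up during AudioUnit init
    };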
I'm using such MIDI callbacks for processing live input from MIDI devices. Concerning playback of MIDI files, if you want to process streams or files of arbitrary complexity, you may also run into surprises. The MIDI standard itself has timing features, which work as well as MIDI hardware allows for. Once you read an entire file into memory, you can translate your data into whatever you want and use your own code for controlling sound synthesis.
Please observe not to use any code which would block the audio render thread (i.e. inside audioRenderCallback()), or would do memory management on it.

You could use AVAudioEngine.musicSequence and prepare your audio unit graph. Then use the MusicSequence API to load your GM file. This way you don't need to do the timing yourself. Note that I have not done this myself so far, but I understand that in theory it should work like this.
After you instantiate your synthesizer audio unit, you attach and connect it to the AVAudioEngine graph.
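I have not verified this end to end either, but the classic C-level route looks roughly like the following sketch, using the MusicSequence / MusicPlayer API; the AUGraph named graph containing your synth unit and the file path are assumptions, and error handling is elided:

    #include <AudioToolbox/AudioToolbox.h>

    MusicSequence sequence = NULL;
    MusicPlayer   player   = NULL;
    NewMusicSequence(&sequence);

    CFURLRef url = CFURLCreateWithFileSystemPath(kCFAllocatorDefault,
                                                 CFSTR("/path/to/file.mid"),
                                                 kCFURLPOSIXPathStyle, false);
    MusicSequenceFileLoad(sequence, url, kMusicSequenceFile_MIDIType, 0);
    CFRelease(url);

    MusicSequenceSetAUGraph(sequence, graph);  // route the tracks to your synth unit

    NewMusicPlayer(&player);
    MusicPlayerSetSequence(player, sequence);
    MusicPlayerPreroll(player);
    MusicPlayerStart(player);                  // the API handles the timing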
Does this mean, use @synchronized to do thread blocking for safety?
The opposite of what you've said is true: you should certainly not lock in a realtime thread. The @synchronized directive will lock if the resource is already locked. You may consider using lock-free queues for realtime threads. See also Four common mistakes in audio development.
If you have to use CoreMIDI and MIDIReadProc, you can send MIDI commands to the synthesizer audio unit by calling MusicDeviceMIDIEvent right from your callback.
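A minimal sketch of that call, assuming synthUnit is your instantiated synthesizer unit; it forwards only the first message of a packet and ignores SysEx:

    #include <AudioToolbox/AudioToolbox.h>
    #include <CoreMIDI/CoreMIDI.h>

    static void forwardPacket(AudioUnit synthUnit, const MIDIPacket *packet)
    {
        // handles only the first message in the packet; real code would walk all of them
        Byte status = packet->data[0];
        Byte data1  = packet->length > 1 ? packet->data[1] : 0;
        Byte data2  = packet->length > 2 ? packet->data[2] : 0;
        MusicDeviceMIDIEvent(synthUnit, status, data1, data2, 0); // 0 = next render cycle
    }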

Related

Why does QSerialPort::writeData start writes with a single-shot timer?

I'm trying to understand Qt's serial port module and I'm not too familiar with how Qt handles asynchronous I/O. On Windows, the QSerialPort::writeData method places the data to be written in a ring buffer and then starts a single-shot QTimer to actually perform the write when its timeout signal fires:
qint64 QSerialPortPrivate::writeData(const char *data, qint64 maxSize)
{
    Q_Q(QSerialPort);
    writeBuffer.append(data, maxSize);
    if (!writeBuffer.isEmpty() && !writeStarted) {
        if (!startAsyncWriteTimer) {
            startAsyncWriteTimer = new QTimer(q);
            QObjectPrivate::connect(startAsyncWriteTimer, &QTimer::timeout, this, &QSerialPortPrivate::_q_startAsyncWrite);
            startAsyncWriteTimer->setSingleShot(true);
        }
        if (!startAsyncWriteTimer->isActive())
            startAsyncWriteTimer->start();
    }
    return maxSize;
}
The readData method doesn't use a timer in this way, instead calling ReadFileEx directly.
What does the single-shot timer accomplish versus just calling WriteFileEx?
There is a special case for a QTimer with an interval of 0: this timer will fire once control returns to the event loop. The implementation on Unix/Linux does something similar, but not using a QTimer, instead having a subclass of QSocketNotifier that will get called when the port is able to be written to. Both of these implementations mean that you will buffer the data and write it out once you get back to the main event loop.
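A standalone toy example of that special case, unrelated to QSerialPort itself: three "writes" made before control returns to the event loop are coalesced into a single flush.

    #include <QCoreApplication>
    #include <QByteArray>
    #include <QTimer>
    #include <QDebug>

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);
        QByteArray buffer;
        QTimer flushTimer;
        flushTimer.setSingleShot(true);
        flushTimer.setInterval(0);              // fires once control returns to the event loop
        QObject::connect(&flushTimer, &QTimer::timeout, [&] {
            qDebug() << "flushing" << buffer.size() << "bytes in one write";
            buffer.clear();
            app.quit();
        });
        for (int i = 0; i < 3; ++i) {           // three separate "writes"...
            buffer.append("chunk");
            if (!flushTimer.isActive())
                flushTimer.start();
        }
        return app.exec();                      // ...flushed together once we get here
    }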
There are two reasons that I can think of for doing this:
1. There is something different between the POSIX and Win32 serial APIs that requires the code to be structured this way. As far as I am aware, this is not the case.
2. What @Mike said in a comment: this will allow data to be buffered before it is written.
The buffering seems like the most likely reason for this, as doing a syscall for each piece of data that you want to write would be a rather expensive operation.

OSX: AudioUnit callback not being called enough

I've got a simple app that records audio, and processes the bytes that are coming in. I've found that I am missing quite a lot of the data that should be coming in, something like 2/3 of it.
This routine:
static OSStatus AudioCallBack(void *userData,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 nFrames,
                              AudioBufferList *ioData)
is not being called enough. What can typically cause this?
If you try to do any significant amount of processing inside an Audio Unit callback (or anything else that does Objective-C messaging, synchronization, locks, or memory management, etc.), your callback function or block might take too long, causing your app to miss some callbacks and thus some audio data. You can check for this by removing all processing inside the callback and just totaling the number of audio samples received, to make sure the right amount is coming in per second.
If this is happening, then, to prevent your callback from blocking too long, you should rearrange your processing to do all or most of it in another thread, and just quickly copy the data out of the Audio Unit callback into an array, queue, or fifo to pass to that other processing task or thread. Since you know the rate of Audio Callbacks, you can determine the correct rate to poll for the needed processing.
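A minimal sketch of both suggestions, assuming an input callback on a RemoteIO/AUHAL unit where gInputUnit and gBufferList are configured elsewhere (hypothetical names); the callback pulls the samples, totals them, and defers all real work:

    #include <AudioUnit/AudioUnit.h>
    #include <cstdint>

    static AudioUnit         gInputUnit;        // configured during setup (assumed)
    static AudioBufferList  *gBufferList;       // preallocated, large enough for nFrames
    static volatile uint64_t gSamplesReceived;  // read once per second from another thread

    static OSStatus AudioCallBack(void *userData, AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber,
                                  UInt32 nFrames, AudioBufferList *ioData)
    {
        // pull the new samples into our own preallocated buffers
        OSStatus err = AudioUnitRender(gInputUnit, ioActionFlags, inTimeStamp,
                                       inBusNumber, nFrames, gBufferList);
        if (err != noErr) return err;

        gSamplesReceived += nFrames;  // should advance by ~sampleRate per second;
                                      // if it lags, callbacks are being missed

        // copy gBufferList->mBuffers[0].mData into a lock-free fifo here;
        // do all heavy processing on another thread, never in this callback
        return noErr;
    }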

Cancel WaitCommEvent for overlapped serial I/O

I'm handling a non-standard modem via serial port in an overlapped manner. Besides reading from and writing to the telecommunication line, I have to check the control lines like CTS and DSR using the WaitCommEvent() function.
DWORD EvtMask;
/// (some scopes/levels omitted)
const BOOL syncChange = WaitCommEvent(hFile, &EvtMask, &overlapped);
if (!syncChange) {
    assert(GetLastError() == ERROR_IO_PENDING);
    /// *background activity* probably writing into EvtMask
    /// until overlapped.hEvent gets signalled
}
In (practically all) the cases where the function call indicates *background activity*, I have to wait on overlapped.hEvent to be signalled. Since I'm also waiting for events from alternative sources (like IPC caused by user input, or program termination), I use the WaitForMultipleObjects() function. But if the blocking wait finishes for reasons other than control line changes, how can I stop the background activity on EvtMask? The code I'm building on currently uses SetCommMask(hFile, 0), but I did not find a reliable reference that this is appropriate.
I also observe cases where changes to control lines are not supported properly (driver?, VM?), so I have to do a sliced wait with in-between checking.
What must be done to safely leave the scope where the variable EvtMask is declared?
The code you have is correct, and fully supported by the documentation, which clearly says:
If a process attempts to change the device handle's event mask by using the SetCommMask function while an overlapped WaitCommEvent operation is in progress, WaitCommEvent returns immediately.
I've used this fact on both "real" serial ports, and USB virtual serial port emulations, and it works reliably.
(In my particular case, I was watching for EV_TXEMPTY so that I could guarantee a minimal separation between certain transmissions on the wire)
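To make that concrete, here is a hedged sketch of leaving the scope safely; hQuitEvent (an event signalled on shutdown) is an assumption:

    #include <windows.h>
    #include <assert.h>

    void waitForCommEvent(HANDLE hFile, HANDLE hQuitEvent)
    {
        DWORD EvtMask = 0;
        OVERLAPPED ov = {};
        ov.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);

        if (!WaitCommEvent(hFile, &EvtMask, &ov)) {
            assert(GetLastError() == ERROR_IO_PENDING);
            HANDLE handles[2] = { ov.hEvent, hQuitEvent };
            if (WaitForMultipleObjects(2, handles, FALSE, INFINITE) != WAIT_OBJECT_0) {
                // leaving for another reason: force the pending operation to finish
                SetCommMask(hFile, 0);   // per the docs, WaitCommEvent returns immediately
            }
            DWORD unused;
            // after this returns, the driver no longer writes into EvtMask,
            // so it is safe to let it go out of scope
            GetOverlappedResult(hFile, &ov, &unused, TRUE);
        }
        CloseHandle(ov.hEvent);
    }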

Rs232 software flow control

I have a general question about RS232 software flow control (aka XON/XOFF).
The .NET implementation (and the native Win32 API) both define a property called WriteTimeout / ReadTimeout, which is a time in ms after which a communication is considered to be overdue.
Now my problem is this: if I send, let's say, a 5-byte string to the device, I don't see any WriteTimeout, as expected. How is this implemented? Everything I find about software flow control says that XOFF is to be sent when the receive buffer is full, and XON when it is ready to receive again.
But from the behavior I see, I would suspect that the device sends XON after it has processed the 5-byte information that I sent, thus giving Windows the information to generate the corresponding events.
So when should XON be sent on a two-wire-only RS232 implementation? Only if the buffer was full, to restart receiving? Or to signal that we are "still ready" to receive after every chunk we processed?
How to implement?
Cheers & thx in advance!
Corelgott
Send an XON any time you are ready to receive data (your receive buffer is empty or nearly so). Send an XOFF any time you cannot accept more incoming data (your receive buffer is full or nearly so). The process is documented on the Wikipedia software flow control page.
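A minimal watermark sketch of that rule; the buffer size, thresholds and the send_byte() helper are assumptions, and the hysteresis (stop high, restart low) avoids flooding the line with XON/XOFF pairs:

    #include <cstddef>

    const unsigned char XON  = 0x11;           // DC1
    const unsigned char XOFF = 0x13;           // DC3
    const size_t BUF_SIZE = 4096;
    const size_t HI_WATER = 3 * BUF_SIZE / 4;  // stop the sender here
    const size_t LO_WATER = BUF_SIZE / 4;      // restart it here

    static size_t rxCount;   // bytes currently queued in the receive buffer
    static bool   paused;    // true after we sent XOFF

    extern void send_byte(unsigned char b);    // hypothetical: writes one byte to the port

    void afterReceiving(size_t n) {            // call when n new bytes arrive
        rxCount += n;
        if (!paused && rxCount >= HI_WATER) { send_byte(XOFF); paused = true; }
    }

    void afterConsuming(size_t n) {            // call when the app drains n bytes
        rxCount -= n;
        if (paused && rxCount <= LO_WATER) { send_byte(XON); paused = false; }
    }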

Can you re-use buffers with Windows wave audio input?

I'm using the Windows multimedia APIs to record and process wave audio (waveInOpen and friends). I'd like to use a small number of buffers in a round robin fashion.
I know that you're supposed to use waveInPrepareHeader before adding a buffer to the device, and that you're supposed to call waveInUnprepareHeader after the wave device has "returned the buffer to the application" and before you deallocate it.
My question is, do I have to unprepare and re-prepare in order to re-use a buffer? Or can I just add a previously used buffer back to the device?
Also, does it matter what thread I do this on? I'm using the callback function, which seems to be called on a worker thread that belongs to the audio system. Can I call waveInUnprepareHeader, waveInPrepareHeader, and waveInAddBuffer on that thread, during the callback?
Yes, my experience has been that you need to call prepare and unprepare every time. From memory, it returns an error if you try to reuse the same one.
And you typically call the prepare and unprepare on whatever thread you are handling the callbacks on.
Alternatively, when you create the buffers, call waveInPrepareHeader once. Then you can simply set the prepared flag before you call waveInAddBuffer on a buffer that was returned from the device.
pHdr->dwFlags = WHDR_PREPARED;
You can do this on the callback thread (or in the message handler).
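A sketch of that second suggestion, with the caveat that MSDN cautions against calling most waveIn* functions from inside a low-level function callback, so doing this from the message handler mentioned above may be the safer variant; treat it as an untested experiment:

    #include <windows.h>
    #include <mmsystem.h>   // link with winmm.lib

    static void CALLBACK waveInProc(HWAVEIN hwi, UINT uMsg, DWORD_PTR dwInstance,
                                    DWORD_PTR dwParam1, DWORD_PTR dwParam2)
    {
        if (uMsg != WIM_DATA) return;
        WAVEHDR *hdr = (WAVEHDR *)dwParam1;

        // ... consume hdr->lpData / hdr->dwBytesRecorded quickly; this runs on
        // the audio system's worker thread ...

        hdr->dwFlags = WHDR_PREPARED;               // mark it prepared again, per the answer above
        hdr->dwBytesRecorded = 0;
        waveInAddBuffer(hwi, hdr, sizeof(WAVEHDR)); // hand the same buffer back to the device
    }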
