I've got a simple app that records audio, and processes the bytes that are coming in. I've found that I am missing quite a lot of the data that should be coming in, something like 2/3 of it.
This routine:
static OSStatus AudioCallBack(void                       *userData,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp       *inTimeStamp,
                              UInt32                      inBusNumber,
                              UInt32                      nFrames,
                              AudioBufferList            *ioData)
is not being called enough. What can typically cause this?
If you try to do any significant amount of processing inside an Audio Unit callback (or anything else that involves Objective-C messaging, synchronization, locks, memory management, etc.), your callback function or block might take too long, your app might miss some callbacks, and thus miss some audio data. You can check for this by removing all processing inside the callback and just totaling the number of audio samples received, to make sure the right amount is coming in per second.
If this is happening, then, to prevent your callback from blocking too long, you should rearrange your processing to do all or most of it on another thread, and just quickly copy the data out of the Audio Unit callback into an array, queue, or FIFO to pass to that other processing task or thread. Since you know the rate of audio callbacks, you can determine the correct rate at which to poll for the needed processing.
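For what it's worth, here is a minimal sketch of that counting check, using the callback signature from the question. gFrameCount and the once-per-second printout are illustrative, not taken from your code:

#include <AudioToolbox/AudioToolbox.h>
#include <stdatomic.h>
#include <stdio.h>

static _Atomic uint64_t gFrameCount = 0;   // illustrative global frame counter

static OSStatus AudioCallBack(void                       *userData,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp       *inTimeStamp,
                              UInt32                      inBusNumber,
                              UInt32                      nFrames,
                              AudioBufferList            *ioData)
{
    // Do no real work here: just tally how many frames the unit delivered.
    // (For an input callback you would normally call AudioUnitRender here to
    // pull the data; for this check, counting nFrames is enough.)
    atomic_fetch_add_explicit(&gFrameCount, nFrames, memory_order_relaxed);
    return noErr;
}

// Elsewhere (e.g. a timer on the main thread firing once per second):
static void checkThroughput(double sampleRate)
{
    uint64_t n = atomic_exchange(&gFrameCount, 0);
    printf("received %llu frames in the last second; expected ~%.0f\n",
           (unsigned long long)n, sampleRate);
}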
Related
According to Apple's docs:
Because your MIDIReadProc callback is invoked from a separate thread, be aware of the synchronization issues when using data provided by this callback.
Does this mean, use @synchronized to do thread blocking for safety?
Or does this literally mean synchronization timing issues may happen?
I am currently trying to read a MIDI file and use a MIDIReadProc to trigger the note-on / note-off of a software synth based on MIDI events. I need this to be extremely reliable and perfectly in time. Right now, I am noticing that when I consume these MIDI events and write the audio to a buffer (all done from the MIDIReadProc), the timing is extremely sloppy and does not sound right at all. So I would like to know: what is the "proper" way to consume MIDI events from a MIDIReadProc?
Also, is a MIDIReadProc the only option for consuming MIDI events from a MIDI file?
Is there another option as far as setting up a virtual endpoint that could be directly consumed by my synthesizer? If so, how does that work exactly?
If you presume a function of this format to be the midiReadProc,

void midiReadProc(const MIDIPacketList *packetList,
                  void *readProcRefCon,
                  void *srcConnRefCon)
{
    MIDIPacket *packet = (MIDIPacket *)packetList->packet;
    int count = packetList->numPackets;
    for (int k = 0; k < count; k++) {
        Byte midiStatus  = packet->data[0];
        Byte midiChannel = midiStatus & 0x0F;
        Byte midiCommand = midiStatus >> 4;
        // parse MIDI messages, extract relevant information and pass it to the controller
        // (the controller must be visible from the midiReadProc)
        packet = MIDIPacketNext(packet);   // advance to the next packet in the list
    }
}
the MIDI client has to be declared in the controller; interpreted MIDI events get stored into the controller from the MIDI callback and read by audioRenderCallback() on each audio render cycle. This way you can keep timing imprecision within the length of one audio buffer, which you can negotiate during AudioUnit setup to be as short as the system allows.
A controller can be an @interface myMidiSynthController : NSViewController you define, consisting of a matrix of MIDI channels and a pre-determined maximum polyphony per channel, among other relevant data such as interface elements, phase accumulators for each active voice, the AudioComponentInstance, etc. It would be wrong to resize the controller based on the midiReadProc() input. RAM is cheap nowadays.
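To make that hand-off concrete, here is a hedged sketch of one way the controller could own the shared state; every name below is illustrative, not from the answer. The MIDI callback only writes interpreted events into a fixed-size lock-free queue, and the render callback drains it once per render cycle:

#include <stdatomic.h>

typedef struct { unsigned char command, channel, note, velocity; } MidiEvent;

#define QUEUE_SIZE 256   // power of two, sized for the worst case; never resized

typedef struct {
    MidiEvent   events[QUEUE_SIZE];
    atomic_uint head;    // written only by midiReadProc (producer)
    atomic_uint tail;    // written only by audioRenderCallback (consumer)
} MidiEventQueue;

// Called from midiReadProc: never blocks, never allocates.
// (Overflow checking omitted for brevity.)
static void enqueueEvent(MidiEventQueue *q, MidiEvent e)
{
    unsigned h = atomic_load_explicit(&q->head, memory_order_relaxed);
    q->events[h % QUEUE_SIZE] = e;
    atomic_store_explicit(&q->head, h + 1, memory_order_release);
}

// Called from audioRenderCallback at the start of each render cycle.
static int dequeueEvent(MidiEventQueue *q, MidiEvent *out)
{
    unsigned t = atomic_load_explicit(&q->tail, memory_order_relaxed);
    if (t == atomic_load_explicit(&q->head, memory_order_acquire))
        return 0;        // nothing pending
    *out = q->events[t % QUEUE_SIZE];
    atomic_store_explicit(&q->tail, t + 1, memory_order_relaxed);
    return 1;
}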
I'm using such MIDI callbacks for processing live input from MIDI devices. Concerning playback of MIDI files, if you want to process streams or files of arbitrary complexity, you may also run into surprises. The MIDI standard itself has timing features, which work as well as MIDI hardware allows. Once you have read an entire file into memory, you can translate your data into whatever you want and use your own code for controlling sound synthesis.
Please take care not to use any code which would block the audio render thread (i.e. inside audioRenderCallback()), or do memory management on it.
You could use AVAudioEngine.musicSequence and prepare your audio unit graph. Then use the MusicSequence API to load your GM file. This way you don't need to do the timing yourself. Note that I have not done this myself so far, but I understand that in theory it should work like this.
After you instantiate your synthesizer audio unit, you attach and connect it to the AVAudioEngine graph.
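I haven't used AVAudioEngine.musicSequence either, so take this as a sketch of the closely related route I'm more familiar with: loading the file with the plain MusicSequence/MusicPlayer C API, which likewise does the timing for you. The file path and the optional AUGraph are placeholders, and error handling is omitted:

#include <AudioToolbox/AudioToolbox.h>

static OSStatus startMidiFilePlayback(void)
{
    MusicSequence sequence = NULL;
    MusicPlayer   player   = NULL;

    NewMusicSequence(&sequence);

    CFURLRef url = CFURLCreateWithFileSystemPath(kCFAllocatorDefault,
                                                 CFSTR("/path/to/song.mid"),
                                                 kCFURLPOSIXPathStyle, false);
    MusicSequenceFileLoad(sequence, url, kMusicSequenceFile_MIDIType, 0);
    CFRelease(url);

    // If your synth lives in an AUGraph, point the sequence at it so the note
    // events reach the synthesizer audio unit directly:
    // MusicSequenceSetAUGraph(sequence, graph);

    NewMusicPlayer(&player);
    MusicPlayerSetSequence(player, sequence);
    MusicPlayerPreroll(player);
    MusicPlayerStart(player);
    return noErr;
}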
Does this mean, use @synchronized to do thread blocking for safety?
The opposite of what you've said is true: you should certainly not lock in a realtime thread. The @synchronized directive will block if the resource is already locked. You may consider using lock-free queues for realtime threads. See also Four common mistakes in audio development.
If you have to use CoreMIDI and MIDIReadProc, you can send MIDI commands to the synthesizer audio unit by calling MusicDeviceMIDIEvent right from your callback.
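Here is a hedged sketch of that; it assumes you passed your synth AudioUnit as the refCon when creating the MIDI input port or destination, and the names are illustrative:

#include <AudioToolbox/AudioToolbox.h>
#include <CoreMIDI/CoreMIDI.h>

static void MyMIDIReadProc(const MIDIPacketList *packetList,
                           void *readProcRefCon, void *srcConnRefCon)
{
    AudioUnit synthUnit = (AudioUnit)readProcRefCon;   // assumed to be the synth
    const MIDIPacket *packet = &packetList->packet[0];
    for (UInt32 i = 0; i < packetList->numPackets; i++) {
        Byte status = packet->data[0];
        Byte data1  = packet->length > 1 ? packet->data[1] : 0;
        Byte data2  = packet->length > 2 ? packet->data[2] : 0;
        // Offset 0 asks the synth to play the event as soon as possible.
        MusicDeviceMIDIEvent(synthUnit, status, data1, data2, 0);
        packet = MIDIPacketNext(packet);
    }
}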
I have a program running under OpenCL where, after I perform the calculations in private memory, I would like to write them to global memory. I have no use for the results further down the road; essentially I am looking for a built-in solution to write to global memory from either __local or __private memory asynchronously.
I already tried async_work_group_copy, and I noticed that in order to ensure the data is correctly copied I have to wait for the event. On my card (an AMD HD 7970) this is the same as doing a synchronous copy directly to global memory.
Does anyone have any experience with async_work_group_copy without waiting for the event or any other viable alternative?
for (...) {
    // Calculate some results and copy them to __local array src
    event_t e = async_work_group_copy(dest, src, size, 0);
    wait_group_events(1, &e); // Can we safely skip this??
}
Here src is __local and dest is __global.
I suspect that since this function has to be executed identically by the whole work-group, skipping the wait may not work, because other work-items may not have finished. The fact that this is inside a for loop complicates things further.
I don't think there is much you have to (or can) do in this situation. I know that Intel's GPU implementation will not stall on a global write unless there's a register dependency hazard soon after the write (e.g. if the program reuses that register too soon after the write, it will stall until the dependency hazard clears). Sadly, you can't really control register allocation, or even see it.
I need a step-by-step walkthrough on how to use AudioConverterFillComplexBuffer and its callback. No, don't tell me to read the Apple docs. I do everything they say and the conversion always fails. No, don't tell me to go look for examples of AudioConverterFillComplexBuffer and its callback in use - I've duplicated about a dozen such examples both line for line and modified, and the conversion always fails. No, there isn't any problem with the input data. No, it isn't an endian issue. No, the problem isn't my version of OS X.
The problem is that I don't understand how AudioConverterFillComplexBuffer works, so I don't know what I'm doing wrong. And nothing out there is helping me understand, because it seems like nobody on Earth really understands how AudioConverterFillComplexBuffer works, either. From the people who actually use it (I spy cargo cult programming in their code) to even the authors of Learning Core Audio and/or Apple itself (http://stackoverflow.com/questions/13604612/core-audio-how-can-one-packet-one-byte-when-clearly-one-packet-4-bytes).
This isn't just a problem for me, it's a problem for anybody who wants to program high-performance audio on the Mac platform. Threadbare documentation that's apparently wrong and examples that don't work are no fun.
Once again, to be clear: I NEED A STEP BY STEP WALKTHROUGH ON HOW TO USE AudioConverterFillComplexBuffer plus its callback, and so does the entire Mac developer community.
This is a very old question but I think is still relevant. I've spent a few days fighting this and have finally achieved a successful conversion. I'm certainly no expert but I'll outline my understanding of how it works. Note I'm using Swift, which I'm also just learning.
Here are the main function arguments:
inAudioConverter: AudioConverterRef: This one is simple enough, just pass in a previously created AudioConverterRef.
inInputDataProc: AudioConverterComplexInputDataProc: The very complex callback. We'll come back to this.
inInputDataProcUserData: UnsafeMutableRawPointer?: This is a reference to whatever data you may need to provide to the callback function. It's important because even in Swift the callback can't capture context. E.g. you may need to access an AudioFileID or keep track of the number of packets read so far.
ioOutputDataPacketSize: UnsafeMutablePointer<UInt32>: This one is a little misleading. The name implies it's the packet size but reading the documentation we learn it's the total number of packets expected for the output format. You can calculate this as outPacketCount = frameCount / outStreamDescription.mFramesPerPacket.
outOutputData: UnsafeMutablePointer<AudioBufferList>: This is an audio buffer list which you need to have already initialized with enough space to hold the expected output data. The size can be calculated as byteSize = outPacketCount * outMaxPacketSize.
outPacketDescription: UnsafeMutablePointer<AudioStreamPacketDescription>?: This is optional. If you need packet descriptions, pass in a block of memory the size of outPacketCount * sizeof(AudioStreamPacketDescription).
As the converter runs it will repeatedly call the callback function to request more data to convert. The main job of the callback is simply to read the requested number of packets from the source data. The converter will then convert the packets to the output format and fill the output buffer. Here are the arguments for the callback:
inAudioConverter: AudioConverterRef: The audio converter again. You probably won't need to use this.
ioNumberDataPackets: UnsafeMutablePointer<UInt32>: The number of packets to read. After reading, you must set this to the number of packets actually read (which may be less than the number requested if we reached the end).
ioData: UnsafeMutablePointer<AudioBufferList>: An AudioBufferList which is already configured except for the actual data. You need to initialise ioData.mBuffers.mData with enough capacity to hold the expected number of packets, i.e. ioNumberDataPackets * inMaxPacketSize. Set the value of ioData.mBuffers.mDataByteSize to match.
outDataPacketDescription: UnsafeMutablePointer<UnsafeMutablePointer<AudioStreamPacketDescription>?>?: Depending on the formats used, the converter may need to keep track of packet descriptions. You need to initialise this with enough capacity to hold the expected number of packet descriptions.
inUserData: UnsafeMutableRawPointer?: The user data that you provided to the converter.
So, to start you need to:
Have sufficient information about your input and output data, namely the number of frames and maximum packet sizes.
Initialise an AudioBufferList with sufficient capacity to hold the output data.
Call AudioConverterFillComplexBuffer.
And on each run of the callback you need to:
Initialise ioData with sufficient capacity to store ioNumberDataPackets of source data.
Initialise outDataPacketDescription with sufficient capacity to store ioNumberDataPackets of AudioStreamPacketDescriptions.
Fill the buffer with source packets.
Write the packet descriptions.
Set ioNumberDataPackets to the number of packets actually read.
Return noErr if successful.
Here's an example where I read the data from an AudioFileID:
var converter: AudioConverterRef?

// User data holds an AudioFileID, input max packet size, and a count of packets read
var uData = (fRef, maxPacketSize, UnsafeMutablePointer<Int64>.allocate(capacity: 1))

err = AudioConverterNew(&inStreamDesc, &outStreamDesc, &converter)

err = AudioConverterFillComplexBuffer(converter!, { _, ioNumberDataPackets, ioData, outDataPacketDescription, inUserData in
    let uData = inUserData!.load(as: (AudioFileID, UInt32, UnsafeMutablePointer<Int64>).self)
    ioData.pointee.mBuffers.mDataByteSize = uData.1
    ioData.pointee.mBuffers.mData = UnsafeMutableRawPointer.allocate(byteCount: Int(uData.1), alignment: 1)
    outDataPacketDescription?.pointee = UnsafeMutablePointer<AudioStreamPacketDescription>.allocate(capacity: Int(ioNumberDataPackets.pointee))
    let err = AudioFileReadPacketData(uData.0, false, &ioData.pointee.mBuffers.mDataByteSize, outDataPacketDescription?.pointee, uData.2.pointee, ioNumberDataPackets, ioData.pointee.mBuffers.mData)
    uData.2.pointee += Int64(ioNumberDataPackets.pointee)
    return err
}, &uData, &numPackets, &bufferList, nil)
Again, I'm no expert, this is just what I've learned by trial and error.
I'm using the Windows multimedia APIs to record and process wave audio (waveInOpen and friends). I'd like to use a small number of buffers in a round robin fashion.
I know that you're supposed to use waveInPrepareHeader before adding a buffer to the device, and that you're supposed to call waveInUnprepareHeader after the wave device has "returned the buffer to the application" and before you deallocate it.
My question is, do I have to unprepare and re-prepare in order to re-use a buffer? Or can I just add a previously used buffer back to the device?
Also, does it matter what thread I do this on? I'm using the callback function, which seems to be called on a worker thread that belongs to the audio system. Can I call waveInUnprepareHeader, waveInPrepareHeader, and waveInAddBuffer on that thread, during the callback?
Yes, my experience has been you need to call prepare and unprepare every time. From memory, it returns an error if you try to reuse the same one.
And you typically call the prepare and unprepare on whatever thread you are handling the callbacks on.
When you create the buffers, call waveInPrepareHeader. Then you can simply set the prepared flag before you call waveInAddBuffer on a buffer that was returned from the device.
pHdr->dwFlags = WHDR_PREPARED;
You can do this on the callback thread (or in the message handler).
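For what it's worth, a minimal sketch of that pattern; the callback name and the assumption that the buffers were prepared once at startup are mine, not from the answer:

#include <windows.h>
#include <mmsystem.h>

static void CALLBACK WaveInProc(HWAVEIN hwi, UINT uMsg, DWORD_PTR dwInstance,
                                DWORD_PTR dwParam1, DWORD_PTR dwParam2)
{
    if (uMsg == WIM_DATA) {
        WAVEHDR *pHdr = (WAVEHDR *)dwParam1;
        // ... copy or process pHdr->lpData / pHdr->dwBytesRecorded here ...
        pHdr->dwFlags = WHDR_PREPARED;    // was prepared once when it was created
        pHdr->dwBytesRecorded = 0;
        waveInAddBuffer(hwi, pHdr, sizeof(WAVEHDR));
    }
}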
Context: I have a piece of code that knows the value of a waveOut handle (HWAVEOUT). However the code did not create the handle, thus the WAVEFORMATEX that was passed to waveOutOpen when creating the handle is unknown.
I want to find out the contents of that WAVEFORMATEX struct that was passed to the waveOutOpen call.
Some more details where this is used: The code runs in a hook function that's invoked instead of waveOutWrite. Thus the code knows the handle value, but does not know the details of the handle creation.
Just so that people do not need to look it up:
The signature of waveOutOpen is
MMRESULT waveOutOpen(
    LPHWAVEOUT     phwo,
    UINT           uDeviceID,
    LPWAVEFORMATEX pwfx,
    DWORD          dwCallback,
    DWORD          dwInstance,
    DWORD          fdwOpen
);
The signature of waveOutWrite is:
MMRESULT waveOutWrite(
    HWAVEOUT  hwo,
    LPWAVEHDR pwh,
    UINT      cbwh
);
Note: I am also hooking waveOutOpen, but it could already be called before I have a hook.
You can't get this information from the wave API. You'll have to get it from whoever opened the wave device.
You can get the playback rate using waveOutGetPlaybackRate(), and knowing that, you could (in theory) work out the sample size by timing how long it takes to play a buffer of known size (0 is always silence). But 8-bit stereo will end up taking the same amount of time to play back as 16-bit mono, and the same goes for 32-bit float mono versus 16-bit stereo.
I'd say that 99% of the time 16-bit stereo will be the right answer, but when you guess wrong, the result sounds really bad (and loud!), so guessing may not be a good idea.
You can also use waveOutMessage() to send custom messages to the wave driver. It's possible that there is some custom query-wave-format message, but there is no message like that defined in the standard. It's assumed that whoever opened the wave device will keep track of what format they opened it with.
You access the pwfx item of the waveOutOpen struct just as you would access any other struct.
myWaveOutOpen.pwfx.wFormatTag
Or the equivalent format in your language.
Your question is hard to understand. I'm not sure what you want...?