Context: I have a piece of code that knows the value of a waveOut handle (HWAVEOUT). However the code did not create the handle, thus the WAVEFORMATEX that was passed to waveOutOpen when creating the handle is unknown.
I want to find out the contents of that WAVEFORMATEX struct that was passed to the waveOutOpen call.
Some more details about where this is used: the code runs in a hook function that is invoked instead of waveOutWrite. Thus the code knows the handle value, but does not know the details of the handle's creation.
Just so that people do not need to look it up:
The signature of waveOutOpen is
MMRESULT waveOutOpen(
    LPHWAVEOUT      phwo,
    UINT            uDeviceID,
    LPWAVEFORMATEX  pwfx,
    DWORD           dwCallback,
    DWORD           dwInstance,
    DWORD           fdwOpen
);
The signature of waveOutWrite is:
MMRESULT waveOutWrite(
    HWAVEOUT   hwo,
    LPWAVEHDR  pwh,
    UINT       cbwh
);
Note: I am also hooking waveOutOpen, but it could already be called before I have a hook.
You can't get this information from the wave API. You'll have to get it from whoever opened the wave device.
You can get the playback rate using waveOutGetPlaybackRate(), and knowing that, you could (in theory) work out the sample size by timing how long it takes to play a buffer of known size (0 is always silence). But 8-bit stereo will end up taking the same amount of time to play back as 16-bit mono, and the same goes for float/32-bit mono versus 16-bit stereo.
I'd say that 99% of the time 16-bit stereo will be the right answer, but when you guess wrong, the result sounds really bad (and loud!), so guessing may not be a good idea.
You can also use waveOutMessage() to send custom messages to the wave driver. It's possible that there is some custom_query_wave_format message, but there is no message like that defined in the standard. It's assumed that whoever opened the wave device will keep track of what format (s)he opened it with.
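Since you say you're also hooking waveOutOpen, for any handle opened after your hook is installed you can cache the format yourself and look it up later from the waveOutWrite hook. A rough sketch follows; the cache and the Real_waveOutOpen pointer are assumptions about your hooking setup, not part of the wave API, and handles opened before the hook existed simply won't be found:

#include <windows.h>
#include <mmsystem.h>

// Hypothetical per-handle cache; real code needs locking and a proper map.
#define MAX_TRACKED 64
static struct { HWAVEOUT h; WAVEFORMATEX fmt; } g_formats[MAX_TRACKED];
static int g_count = 0;

// Assumed to be set by your hooking framework to the original waveOutOpen.
// (Newer SDK headers declare dwCallback/dwInstance as DWORD_PTR.)
static MMRESULT (WINAPI *Real_waveOutOpen)(LPHWAVEOUT, UINT, LPCWAVEFORMATEX,
                                           DWORD_PTR, DWORD_PTR, DWORD);

MMRESULT WINAPI Hook_waveOutOpen(LPHWAVEOUT phwo, UINT uDeviceID,
                                 LPCWAVEFORMATEX pwfx, DWORD_PTR dwCallback,
                                 DWORD_PTR dwInstance, DWORD fdwOpen)
{
    MMRESULT mr = Real_waveOutOpen(phwo, uDeviceID, pwfx, dwCallback, dwInstance, fdwOpen);
    if (mr == MMSYSERR_NOERROR && phwo && pwfx && g_count < MAX_TRACKED) {
        g_formats[g_count].h   = *phwo;
        g_formats[g_count].fmt = *pwfx;  // note: drops any extra bytes after cbSize
        g_count++;
    }
    return mr;
}

// Called from the waveOutWrite hook; NULL means the handle predates the hook.
const WAVEFORMATEX *LookupFormat(HWAVEOUT hwo)
{
    for (int i = 0; i < g_count; i++)
        if (g_formats[i].h == hwo)
            return &g_formats[i].fmt;
    return NULL;
}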
You access the pwfx parameter passed to waveOutOpen just as you would access any other struct pointer:
myWaveOutOpen.pwfx->wFormatTag
or the equivalent in your language.
Your question is hard to understand. I'm not sure what you want...?
I've got a simple app that records audio, and processes the bytes that are coming in. I've found that I am missing quite a lot of the data that should be coming in, something like 2/3 of it.
This routine:
static OSStatus AudioCallBack (void* userData, AudioUnitRenderActionFlags* ioActionFlags, const AudioTimeStamp* inTimeStamp, UInt32 inBusNumber, UInt32 nFrames, AudioBufferList* ioData)
is not being called enough. What can typically cause this?
If you try to do any significant amount of processing inside an Audio Unit callback (or anything else that does Objective-C messaging, synchronization, locks, memory management, etc.), your callback function or block might take too long, and your app might miss some callbacks and thus some audio data. You can check for this by removing all processing inside the callback and just totalling the number of audio samples received, to make sure the right amount is coming in per second.
If this is happening, then, to prevent your callback from blocking for too long, you should rearrange your processing to do all or most of it in another thread: just quickly copy the data out of the Audio Unit callback into an array, queue, or FIFO to pass to that other processing task or thread. Since you know the rate of audio callbacks, you can determine the correct rate at which to poll for the needed processing.
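To make the copy-out idea concrete, here is a minimal sketch in C of a single-producer/single-consumer ring buffer fed from the callback. The buffer type, sizes, and names are illustrative assumptions, not a specific Apple API, and it assumes the samples are already available in ioData (with an input callback you would first render into your own AudioBufferList):

#include <AudioToolbox/AudioToolbox.h>
#include <stdatomic.h>

// Hypothetical single-producer/single-consumer ring of mono float samples.
#define RING_CAPACITY (64 * 1024)
typedef struct {
    float          samples[RING_CAPACITY];
    _Atomic size_t writeIndex;   // advanced only by the audio thread
    _Atomic size_t readIndex;    // advanced only by the processing thread
} SampleRing;

static SampleRing gRing;

// Audio Unit callback: do nothing here except copy the data out.
static OSStatus AudioCallBack(void *userData,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 nFrames,
                              AudioBufferList *ioData)
{
    const float *in = (const float *)ioData->mBuffers[0].mData;  // assumes mono float
    size_t w = atomic_load_explicit(&gRing.writeIndex, memory_order_relaxed);
    for (UInt32 i = 0; i < nFrames; i++) {
        gRing.samples[w % RING_CAPACITY] = in[i];
        w++;
    }
    atomic_store_explicit(&gRing.writeIndex, w, memory_order_release);
    return noErr;   // no locks, no Objective-C messaging, no allocation
}

// Processing thread polls at a known rate and drains whatever has arrived.
static size_t DrainAvailableSamples(float *dst, size_t maxSamples)
{
    size_t r = atomic_load_explicit(&gRing.readIndex, memory_order_relaxed);
    size_t w = atomic_load_explicit(&gRing.writeIndex, memory_order_acquire);
    size_t n = 0;
    while (r < w && n < maxSamples) {
        dst[n++] = gRing.samples[r % RING_CAPACITY];
        r++;
    }
    atomic_store_explicit(&gRing.readIndex, r, memory_order_release);
    return n;
}

Counting the samples drained per second also gives you the sanity check mentioned above.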
According to Apple's docs:

    Because your MIDIReadProc callback is invoked from a separate thread, be aware of the synchronization issues when using data provided by this callback.
Does this mean I should use @synchronized to do thread blocking for safety?
Or does this literally mean synchronization timing issues may happen?
I am currently trying to read a MIDI file and use a MIDIReadProc to trigger the note-on/note-off of a software synth based on MIDI events. I need this to be extremely reliable and perfectly in time. Right now, I am noticing that when I consume these MIDI events and write the audio to a buffer (all done from the MIDIReadProc), the timing is extremely sloppy and does not sound right at all. So I would like to know: what is the "proper" way to consume MIDI events from a MIDIReadProc?
Also, is a MIDIReadProc the only option for consuming MIDI events from a MIDI file?
Is there another option, such as setting up a virtual endpoint that could be consumed directly by my synthesizer? If so, how does that work exactly?
If you presume a function of this form to be the MIDIReadProc,
void midiReadProc(const MIDIPacketList *packetList,
                  void *readProcRefCon,
                  void *srcConnRefCon)
{
    MIDIPacket *packet = (MIDIPacket *)packetList->packet;
    int count = packetList->numPackets;
    for (int k = 0; k < count; k++) {
        Byte midiStatus  = packet->data[0];
        Byte midiChannel = midiStatus & 0x0F;
        Byte midiCommand = midiStatus >> 4;
        // parse MIDI messages, extract relevant information and pass it to the controller
        // the controller must be visible from the midiReadProc
        packet = MIDIPacketNext(packet);  // advance inside the loop: packets are variable length
    }
}
then the MIDI client has to be declared in the controller; interpreted MIDI events get stored into the controller from the MIDI callback and are read by audioRenderCallback() on each audio render cycle. This way you can minimize timing imprecision to the length of the audio buffer, which you can negotiate during AudioUnit setup to be as short as the system allows for.
A controller can be a @interface myMidiSynthController : NSViewController you define, consisting of a matrix of MIDI channels and a pre-determined maximum polyphony per channel, among other relevant data such as interface elements, phase accumulators for each active voice, the AudioComponentInstance, etc. It would be wrong to resize the controller based on the midiReadProc() input. RAM is cheap nowadays.
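As a minimal sketch of that handoff in C (the event struct and queue below are made-up illustrations living in the controller, not CoreMIDI or AudioUnit types):

#include <stdatomic.h>
#include <MacTypes.h>

// Hypothetical interpreted MIDI event stored by the MIDIReadProc.
typedef struct { UInt8 command; UInt8 channel; UInt8 note; UInt8 velocity; } MidiEvent;

#define EVENT_QUEUE_SIZE 256
typedef struct {
    MidiEvent      events[EVENT_QUEUE_SIZE];
    _Atomic size_t head;   // written by the MIDI thread
    _Atomic size_t tail;   // read by the audio render thread
} EventQueue;

// Called from the MIDIReadProc: just enqueue, don't touch the audio state here.
static void EnqueueEvent(EventQueue *q, MidiEvent e)
{
    size_t h = atomic_load_explicit(&q->head, memory_order_relaxed);
    q->events[h % EVENT_QUEUE_SIZE] = e;
    atomic_store_explicit(&q->head, h + 1, memory_order_release);
}

// Called at the top of audioRenderCallback(): apply everything queued since the last cycle.
static void DrainEvents(EventQueue *q, void (*apply)(const MidiEvent *))
{
    size_t t = atomic_load_explicit(&q->tail, memory_order_relaxed);
    size_t h = atomic_load_explicit(&q->head, memory_order_acquire);
    while (t < h) {
        apply(&q->events[t % EVENT_QUEUE_SIZE]);
        t++;
    }
    atomic_store_explicit(&q->tail, t, memory_order_release);
}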
I'm using such MIDI callbacks for processing live input from MIDI devices. Concerning playback of MIDI files, if you want to process streams or files of arbitrary complexity, you may also run into surprises. The MIDI standard itself has timing features, which work as well as MIDI hardware allows for. Once you read an entire file into memory, you can translate your data into whatever you want and use your own code for controlling sound synthesis.
Please observe not to use any code which would block the audio render thread (i.e. inside audioRenderCallback()), or do memory management on it.
You could use AVAudioEngine.musicSequence and prepare your audio unit graph. Then use the MusicSequence API to load your GM file. This way you don't need to do the timing yourself. Note I have not done this myself so far, but I understand that in theory it should work like this.
After you instantiate your synthesizer audio unit, you attach and connect it to the AVAudioEngine graph.
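A rough sketch of the C-level MusicSequence/MusicPlayer route, as I understand it should be wired up (untested; the graph is assumed to already contain your synthesizer unit, and error handling is omitted):

#include <AudioToolbox/AudioToolbox.h>

// Sketch: load a Standard MIDI File and let MusicPlayer drive the synth audio unit,
// so the sequence timing is handled by the system rather than by a MIDIReadProc.
void PlayMidiFile(AUGraph graph, CFURLRef midiFileURL)
{
    MusicSequence sequence = NULL;
    NewMusicSequence(&sequence);
    MusicSequenceFileLoad(sequence, midiFileURL, kMusicSequenceFile_MIDIType, 0);

    // Route the sequence's tracks to the audio units in the graph (your synth).
    MusicSequenceSetAUGraph(sequence, graph);

    MusicPlayer player = NULL;
    NewMusicPlayer(&player);
    MusicPlayerSetSequence(player, sequence);
    MusicPlayerPreroll(player);
    MusicPlayerStart(player);
}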
Does this mean, use @synchronized to do thread blocking for safety?
The opposite of what you've said is true: you should certainly not lock in a realtime thread. The @synchronized directive will block if the resource is already locked. You may consider using lock-free queues for realtime threads. See also Four common mistakes in audio development.
If you have to use CoreMIDI and MIDIReadProc, you can send MIDI commands to the synthesizer audio unit by calling MusicDeviceMIDIEvent right from your callback.
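For example, a minimal sketch (assuming the synthesizer AudioUnit is passed in via readProcRefCon, and ignoring SysEx for brevity):

#include <AudioToolbox/AudioToolbox.h>
#include <CoreMIDI/CoreMIDI.h>

// Forward incoming MIDI bytes straight to the synth audio unit.
static void MyMIDIReadProc(const MIDIPacketList *packetList,
                           void *readProcRefCon,
                           void *srcConnRefCon)
{
    AudioUnit synthUnit = (AudioUnit)readProcRefCon;
    const MIDIPacket *packet = &packetList->packet[0];
    for (UInt32 i = 0; i < packetList->numPackets; i++) {
        // Note: a packet can contain several MIDI messages; this only handles the first for brevity.
        UInt32 status = packet->data[0];
        UInt32 data1  = packet->length > 1 ? packet->data[1] : 0;
        UInt32 data2  = packet->length > 2 ? packet->data[2] : 0;
        MusicDeviceMIDIEvent(synthUnit, status, data1, data2, 0 /* offset sample frame */);
        packet = MIDIPacketNext(packet);
    }
}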
I need a step-by-step walkthrough on how to use AudioConverterFillComplexBuffer and its callback. No, don't tell me to read the Apple docs. I do everything they say and the conversion always fails. No, don't tell me to go look for examples of AudioConverterFillComplexBuffer and its callback in use: I've duplicated about a dozen such examples, both line for line and modified, and the conversion always fails. No, there isn't any problem with the input data. No, it isn't an endian issue. No, the problem isn't my version of OS X.
The problem is that I don't understand how AudioConverterFillComplexBuffer works, so I don't know what I'm doing wrong. And nothing out there is helping me understand, because it seems like nobody on Earth really understands how AudioConverterFillComplexBuffer works either, from the people who actually use it (I spy cargo-cult programming in their code) to even the authors of Learning Core Audio and/or Apple itself (http://stackoverflow.com/questions/13604612/core-audio-how-can-one-packet-one-byte-when-clearly-one-packet-4-bytes).
This isn't just a problem for me, it's a problem for anybody who wants to program high-performance audio on the Mac platform. Threadbare documentation that's apparently wrong and examples that don't work are no fun.
Once again, to be clear: I NEED A STEP BY STEP WALKTHROUGH ON HOW TO USE AudioConverterFillComplexBuffer plus its callback, and so does the entire Mac developer community.
This is a very old question but I think is still relevant. I've spent a few days fighting this and have finally achieved a successful conversion. I'm certainly no expert but I'll outline my understanding of how it works. Note I'm using Swift, which I'm also just learning.
Here are the main function arguments:
inAudioConverter: AudioConverterRef: This one is simple enough, just pass in a previously created AudioConverterRef.
inInputDataProc: AudioConverterComplexInputDataProc: The very complex callback. We'll come back to this.
inInputDataProcUserData: UnsafeMutableRawPointer?: This is a reference to whatever data you may need to provide to the callback function. This is important because, even in Swift, the callback can't capture context. E.g. you may need to access an AudioFileID or keep track of the number of packets read so far.
ioOutputDataPacketSize: UnsafeMutablePointer<UInt32>: This one is a little misleading. The name implies it's the packet size but reading the documentation we learn it's the total number of packets expected for the output format. You can calculate this as outPacketCount = frameCount / outStreamDescription.mFramesPerPacket.
outOutputData: UnsafeMutablePointer<AudioBufferList>: This is an audio buffer list which you need to have already initialized with enough space to hold the expected output data. The size can be calculated as byteSize = outPacketCount * outMaxPacketSize.
outPacketDescription: UnsafeMutablePointer<AudioStreamPacketDescription>?: This is optional. If you need packet descriptions, pass in a block of memory the size of outPacketCount * sizeof(AudioStreamPacketDescription).
As the converter runs, it will repeatedly call the callback function to request more data to convert. The main job of the callback is simply to read the requested number of packets from the source data. The converter will then convert the packets to the output format and fill the output buffer. Here are the arguments for the callback:
inAudioConverter: AudioConverterRef: The audio converter again. You probably won't need to use this.
ioNumberDataPackets: UnsafeMutablePointer<UInt32>: The number of packets to read. After reading, you must set this to the number of packets actually read (which may be less than the number requested if we reached the end).
ioData: UnsafeMutablePointer<AudioBufferList>: An AudioBufferList which is already configured except for the actual data. You need to initialise ioData.mBuffers.mData with enough capacity to hold the expected number of packets, i.e. ioNumberDataPackets * inMaxPacketSize. Set the value of ioData.mBuffers.mDataByteSize to match.
outDataPacketDescription: UnsafeMutablePointer<UnsafeMutablePointer<AudioStreamPacketDescription>?>?: Depending on the formats used, the converter may need to keep track of packet descriptions. You need to initialise this with enough capacity to hold the expected number of packet descriptions.
inUserData: UnsafeMutableRawPointer?: The user data that you provided to the converter.
So, to start you need to:
Have sufficient information about your input and output data, namely the number of frames and maximum packet sizes.
Initialise an AudioBufferList with sufficient capacity to hold the output data.
Call AudioConverterFillComplexBuffer.
And on each run of the callback you need to:
Initialise ioData with sufficient capacity to store ioNumberDataPackets of source data.
Initialise outDataPacketDescription with sufficient capacity to store ioNumberDataPackets of AudioStreamPacketDescriptions.
Fill the buffer with source packets.
Write the packet descriptions.
Set ioNumberDataPackets to the number of packets actually read.
return noErr if successful.
Here's an example where I read the data from an AudioFileID:
var converter: AudioConverterRef?
// User data holds an AudioFileID, input max packet size, and a count of packets read
var uData = (fRef, maxPacketSize, UnsafeMutablePointer<Int64>.allocate(capacity: 1))

err = AudioConverterNew(&inStreamDesc, &outStreamDesc, &converter)
err = AudioConverterFillComplexBuffer(converter!, { _, ioNumberDataPackets, ioData, outDataPacketDescription, inUserData in
    let uData = inUserData!.load(as: (AudioFileID, UInt32, UnsafeMutablePointer<Int64>).self)
    ioData.pointee.mBuffers.mDataByteSize = uData.1
    ioData.pointee.mBuffers.mData = UnsafeMutableRawPointer.allocate(byteCount: Int(uData.1), alignment: 1)
    outDataPacketDescription?.pointee = UnsafeMutablePointer<AudioStreamPacketDescription>.allocate(capacity: Int(ioNumberDataPackets.pointee))
    let err = AudioFileReadPacketData(uData.0, false, &ioData.pointee.mBuffers.mDataByteSize, outDataPacketDescription?.pointee, uData.2.pointee, ioNumberDataPackets, ioData.pointee.mBuffers.mData)
    uData.2.pointee += Int64(ioNumberDataPackets.pointee)
    return err
}, &uData, &numPackets, &bufferList, nil)
Again, I'm no expert, this is just what I've learned by trial and error.
I hope this does not turn out to be a totally braindead question.
I am editing a template WDF Windows USB device driver to send formatted data to one of the device's bulk out pipes; the data has to be set up in a certain way to tell the device to read an internal register.
The problem is that I cannot get the data to go across the bus in the exact format necessary. I wrote a small test app to enumerate the device and call DeviceIoControl with the input buffer set to a struct I set up according to spec.
I have a copy of a USB bus trace for a working case (performed by a driver whose source I have no access to), and I captured a bus trace for what happens when I call the custom IOCTL in my driver. What I see go across the bus is the data structure I set up prefixed with twelve bytes of data; the data structure is correct, but I want to know what the initial twelve bytes of data are, and stop the driver from sending them.
The driver, I believe, has been written properly; I put some debug traces in the driver and it looks like the buffer retrieved by WdfRequestRetrieveInputMemory already has the 12 bytes prepended, so it seems this is happening before the request reaches the driver.
If it is useful information, the IOCTL is set up as METHOD_BUFFERED with FILE_ANY_ACCESS.
The relevant portion of the test code that sets this up is very simple:
const ULONG ulBufferSize = sizeof( CONTROL_READ_DATA );
unsigned char pBuffer[sizeof( CONTROL_READ_DATA )];
DWORD dwBytesReturned;

CONTROL_READ_DATA* readData = (CONTROL_READ_DATA*)pBuffer;
readData->field1 = data;
readData->field2 = moreData;
// ... all fields filled in...

// Send IOCTLs into camera
if( !::DeviceIoControl( hDevice,
                        IOCTL_CUSTOM_000,
                        &readData,
                        ulBufferSize,
                        &readData,
                        ulBufferSize,
                        &dwBytesReturned,
                        NULL ) )
{
    dwError = ::GetLastError();
    // Clean up here
    return dwError;
}
The data I see go across the bus is: 80FD1200 CCCCCCCC CCCCCCCC + (My data).
Does anyone have any insights?
Wow, really ridiculous error. Notice I'm passing the address of readData to DeviceIoControl, which itself is already a pointer. I can't believe I wasted so much time on this.
Thanks all!
Alignment of the data is the culprit. Check out http://msdn.microsoft.com/en-us/library/2e70t5y1(v=vs.80).aspx to set it to one.
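For example, if you go the pragma route, something like the following around the shared structure definition (the fields here are placeholders, not the real CONTROL_READ_DATA layout):

#pragma pack(push, 1)   // no padding between members
typedef struct _CONTROL_READ_DATA {
    ULONG  field1;
    USHORT field2;
    // ... remaining fields per the device spec ...
} CONTROL_READ_DATA;
#pragma pack(pop)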
This guy says yes:
http://web.tiscalinet.it/giordy/midi-tech/lowmidi.htm
Same with a really old book from 1998 (Maximum MIDI).
MSDN doesn't mention it.
I'm not getting any sound.
I fill a char buffer with status|note|velocity|status|note|velocity...
Set lpData, dwBufferLength, and dwFlags of a MIDIHDR struct
Call midiOutPrepareHeader (returns MMSYSERR_NOERROR)
Call midiOutLongMsg (returns MMSYSERR_NOERROR)
Still no sound! Spamming midiOutShortMsg is working, but will that work for slower machines? Did they change the functionality?
Thanks.
I'm an idiot! I figured it out: Microsoft GS Wavetable Synth does NOT support sending multiple short messages in midiOutLongMsg. The MIDI Mapper DOES!
midiOutShortMsg should be plenty fast, even on slow machines. MIDI interfaces themselves (hardware that is, but some software will limit themselves) run at 31,250 baud. This of course is ignoring any slow code you may have wrapped around where you call midiOutShortMsg.
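For reference, a short message packs the status and two data bytes into a single DWORD, low byte first, for example:

#include <windows.h>
#include <mmsystem.h>

// Pack and send a note-on: status 0x90 (note on, channel 0), then note and velocity.
static void SendNoteOn(HMIDIOUT hMidiOut, BYTE note, BYTE velocity)
{
    DWORD dwMsg = 0x90 | ((DWORD)note << 8) | ((DWORD)velocity << 16);
    midiOutShortMsg(hMidiOut, dwMsg);
}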
Anyway, technically you should also be able to get away with one status byte, if the following notes use the same status byte. So, if you want to do note on/off (using velocity 0 for off) and those notes are on the same channel, you could do this:
status|note|velocity|note|velocity|note|velocity|note|velocity
This is called running status.
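Putting both answers together, here is a sketch of sending such a running-status buffer with midiOutLongMsg (keeping in mind, per the answer above, that the Microsoft GS Wavetable Synth may reject buffers containing multiple messages while the MIDI Mapper accepts them):

#include <windows.h>
#include <mmsystem.h>

// Send several note-ons on channel 0 in one buffer using running status:
// one 0x90 status byte followed by note/velocity pairs.
static void SendRunningStatusChord(HMIDIOUT hMidiOut)
{
    static BYTE buffer[] = {
        0x90,        // note on, channel 0 (status byte sent once)
        60, 100,     // C4, velocity 100
        64, 100,     // E4
        67, 100      // G4
    };

    MIDIHDR hdr = {0};
    hdr.lpData         = (LPSTR)buffer;
    hdr.dwBufferLength = sizeof(buffer);
    hdr.dwFlags        = 0;

    if (midiOutPrepareHeader(hMidiOut, &hdr, sizeof(hdr)) == MMSYSERR_NOERROR) {
        midiOutLongMsg(hMidiOut, &hdr, sizeof(hdr));
        // Crude wait; a real program would use the MOM_DONE callback instead.
        while (!(hdr.dwFlags & MHDR_DONE)) Sleep(1);
        midiOutUnprepareHeader(hMidiOut, &hdr, sizeof(hdr));
    }
}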