I'm developing a music application for iOS using AVAudioPlayer, in which I want to implement an equalizer. I searched the internet for a good solution and ended up with an AUGraph configuration like this:
// multichannel mixer unit
AudioComponentDescription mixer_desc;
mixer_desc.componentType = kAudioUnitType_Mixer;
mixer_desc.componentSubType = kAudioUnitSubType_MultiChannelMixer;
mixer_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
mixer_desc.componentFlags = 0;
mixer_desc.componentFlagsMask = 0;
// iPodEQ unit
AudioComponentDescription eq_desc;
eq_desc.componentType = kAudioUnitType_Effect;
eq_desc.componentSubType = kAudioUnitSubType_AUiPodEQ;
eq_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
eq_desc.componentFlags = 0;
eq_desc.componentFlagsMask = 0;
// output unit
AudioComponentDescription output_desc;
output_desc.componentType = kAudioUnitType_Output;
output_desc.componentSubType = kAudioUnitSubType_GenericOutput;
output_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
output_desc.componentFlags = 0;
output_desc.componentFlagsMask = 0;
// create a new AUGraph
OSStatus result = NewAUGraph(&mGraph);
// Add Audio Nodes to graph
AUNode outputNode;
AUNode eqNode;
AUNode mixerNode;
AUGraphAddNode(mGraph, &mixer_desc, &mixerNode);
AUGraphAddNode(mGraph, &eq_desc, &eqNode);
AUGraphAddNode(mGraph, &output_desc, &outputNode);
// open the graph: the AudioUnits are instantiated here, but not yet initialized
result = AUGraphOpen(mGraph);
// grab the audio unit instances from the nodes
AudioUnit mEQ;
AudioUnit mMixer;
result = AUGraphNodeInfo(mGraph, mixerNode, NULL, &mMixer);
result = AUGraphNodeInfo(mGraph, eqNode, NULL, &mEQ);
// set number of input buses for the mixer Audio Unit
UInt32 numbuses = 0;
AudioUnitSetProperty ( mMixer, kAudioUnitProperty_ElementCount,
kAudioUnitScope_Input, 0, &numbuses, sizeof(numbuses));
// get the equalizer factory presets list
CFArrayRef mEQPresetsArray;
UInt32 sizeof1 = sizeof(mEQPresetsArray);
AudioUnitGetProperty(mEQ, kAudioUnitProperty_FactoryPresets,
kAudioUnitScope_Global, 0, &mEQPresetsArray, &sizeof1);
result = AUGraphConnectNodeInput(mGraph, mixerNode, 0, eqNode, 0);
result = AUGraphConnectNodeInput(mGraph, eqNode, 0, outputNode, 0);
AudioUnitSetParameter(mMixer, kMultiChannelMixerParam_Enable, kAudioUnitScope_Input, 0, 1, 0);
AUPreset *aPreset = (AUPreset*)CFArrayGetValueAtIndex(mEQPresetsArray, 7);
AudioUnitSetProperty (mEQ, kAudioUnitProperty_PresentPreset,
kAudioUnitScope_Global, 0, aPreset, sizeof(AUPreset));
AUGraphInitialize(mGraph);
AUGraphStart(mGraph);
The AUGraph is running, but the EQ isn't applied. The argument 7 in AUPreset *aPreset = (AUPreset*)CFArrayGetValueAtIndex(mEQPresetsArray, 7); is the index of the equalizer preset that should be applied (Electronic).
I got that index by logging the values of the mEQPresetsArray array:
for (int i = 0; i < CFArrayGetCount(mEQPresetsArray); i++) {
AUPreset *aPreset = (AUPreset*)CFArrayGetValueAtIndex(mEQPresetsArray, i);
NSLog(@"%d: %@", (int)aPreset->presetNumber, aPreset->presetName);
}
How can I solve my problem? I've already tried NVDSP, but I couldn't get that to work either, and I haven't found any other solution online.
Thanks in advance, Fabian.
If this is for iOS, then you need to use kAudioUnitSubType_RemoteIO instead of kAudioUnitSubType_GenericOutput.
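Applied to the code above, only the output description's subtype changes (a sketch based on the question's own code):
// output unit: RemoteIO renders to the device's hardware output
output_desc.componentType = kAudioUnitType_Output;
output_desc.componentSubType = kAudioUnitSubType_RemoteIO;
output_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
output_desc.componentFlags = 0;
output_desc.componentFlagsMask = 0;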
You cannot use AVAudioPlayer to do your EQ; you need AVPlayer.
See here for a sample project using the audio tap:
https://developer.apple.com/library/ios/samplecode/AudioTapProcessor/Introduction/Intro.html
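For orientation, here is a minimal, untested sketch of attaching an MTAudioProcessingTap to an AVPlayerItem, in the spirit of that sample project; the method name attachTapToItem: and the bare-bones process callback are illustrative, not taken from the sample:
#import <AVFoundation/AVFoundation.h>
#import <MediaToolbox/MediaToolbox.h>
static void tapProcess(MTAudioProcessingTapRef tap, CMItemCount numberFrames,
                       MTAudioProcessingTapFlags flags,
                       AudioBufferList *bufferListInOut,
                       CMItemCount *numberFramesOut,
                       MTAudioProcessingTapFlags *flagsOut)
{
    // Pull the source audio; an EQ Audio Unit would process bufferListInOut here.
    MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                       flagsOut, NULL, numberFramesOut);
}
- (void)attachTapToItem:(AVPlayerItem *)item
{
    MTAudioProcessingTapCallbacks callbacks = {
        .version = kMTAudioProcessingTapCallbacksVersion_0,
        .clientInfo = (__bridge void *)self,
        .process = tapProcess,
    };
    MTAudioProcessingTapRef tap = NULL;
    MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                               kMTAudioProcessingTapCreationFlag_PostEffects, &tap);
    // Route the item's first audio track through the tap via an audio mix.
    AVAssetTrack *track = [[item.asset tracksWithMediaType:AVMediaTypeAudio] firstObject];
    AVMutableAudioMixInputParameters *params =
        [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:track];
    params.audioTapProcessor = tap;
    AVMutableAudioMix *mix = [AVMutableAudioMix audioMix];
    mix.inputParameters = @[params];
    item.audioMix = mix;
    CFRelease(tap);
}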
I am working in Visual Studio 12 (on Windows 10) and trying to display an image buffer using StretchDIBits(), but it fails, returning 0.
I don't have any header file for that buffer, and I don't know how to draw the client area using only the buffer data. If anyone knows, please share your thoughts.
I have added a sample snippet below:
void DShow::setFramebuffer(HDC hDC){
BITMAPINFO m_bi;
DWORD result = 0;
m_bi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
m_bi.bmiHeader.biWidth = m_curResolutionWidth;
m_bi.bmiHeader.biHeight = m_curResolutionHeight;
m_bi.bmiHeader.biPlanes = (WORD)1;
m_bi.bmiHeader.biBitCount = (WORD)24;
m_bi.bmiHeader.biCompression = BI_RGB;
m_bi.bmiHeader.biSizeImage = (m_curResolutionWidth*m_curResolutionHeight)*(24/8);
m_bi.bmiHeader.biXPelsPerMeter = 0;
m_bi.bmiHeader.biYPelsPerMeter = 0;
m_bi.bmiHeader.biClrUsed = 0;
m_bi.bmiHeader.biClrImportant = 0;
result = StretchDIBits(hDC,
m_TaniaDestRect.left, m_TaniaDestRect.top,
m_TaniaDestRect.right - m_TaniaDestRect.left, m_TaniaDestRect.bottom - m_TaniaDestRect.top,
0, 0,
m_curResolutionWidth, abs(m_curResolutionHeight),
g_pBuffer, &m_bi,
DIB_RGB_COLORS, SRCCOPY);
}
I successfully managed to build a complex AUGraph that I'm able to reconfigure on the fly, and all is working well.
I'm facing a wall now with what seems a very simple task: selecting a specific output device.
I'm able to get the device UID and ID thanks to this post: AudioObjectGetPropertyData to get a list of input devices (which I modified to get output devices) and to the code below (I can't remember where I found it, unfortunately):
- (AudioDeviceID) deviceIDWithUID:(NSString *)uid
{
AudioDeviceID myDevice;
AudioValueTranslation trans;
CFStringRef myKnownUID = (__bridge CFStringRef)uid;
trans.mInputData = &myKnownUID;
trans.mInputDataSize = sizeof (CFStringRef);
trans.mOutputData = &myDevice;
trans.mOutputDataSize = sizeof(AudioDeviceID);
UInt32 size = sizeof (AudioValueTranslation);
AudioHardwareGetProperty (kAudioHardwarePropertyDeviceForUID,
&size,
&trans);
return myDevice;
}
I'm getting the AudioDeviceID from this method and storing it in an NSDictionary. I can NSLog it, and when I convert it to hexadecimal it gives me the right ID, as found in HALLab.
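(Note that AudioHardwareGetProperty is deprecated; for reference, the same lookup with the modern AudioObjectGetPropertyData API, passing the UID as qualifier data, would look roughly like this untested sketch:)
- (AudioDeviceID)deviceIDWithUID:(NSString *)uid
{
    AudioDeviceID deviceID = kAudioObjectUnknown;
    UInt32 size = sizeof(deviceID);
    CFStringRef uidString = (__bridge CFStringRef)uid;
    AudioObjectPropertyAddress address = {
        kAudioHardwarePropertyTranslateUIDToDevice,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    // The UID goes in as qualifier data; the device ID comes back as the property value.
    AudioObjectGetPropertyData(kAudioObjectSystemObject, &address,
                               sizeof(uidString), &uidString,
                               &size, &deviceID);
    return deviceID;
}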
But when I configure my unit (see code below), the graph plays only on the default device (the one selected in Sound Preferences).
AudioComponent comp = AudioComponentFindNext(NULL, &_componentDescription);
if (comp == NULL) {
printf ("Can't get output unit");
exit (-1);
}
CheckError(AudioComponentInstanceNew(comp, &_auUnit),
"Couldn't open component for output Unit");
UInt32 disableFlag = 0;
UInt32 enableFlag = 1;
AudioUnitScope outputBus = 0;
AudioUnitScope inputBus = 1;
CheckError (AudioUnitSetProperty(_auUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
outputBus,
&enableFlag,
sizeof(enableFlag)), "AudioUnitSetProperty[kAudioOutputUnitProperty_EnableIO] failed - enable Output");
CheckError (AudioUnitSetProperty(_auUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
inputBus,
&disableFlag,
sizeof(disableFlag)), "AudioUnitSetProperty[kAudioOutputUnitProperty_EnableIO] failed - disable Input");
AudioDeviceID devID = (AudioDeviceID)[[[_devices objectAtIndex:0] objectForKey:@"deviceID"] unsignedIntValue];
CheckError(AudioUnitSetProperty(_auUnit,
kAudioOutputUnitProperty_CurrentDevice,
kAudioUnitScope_Output,
0,
&devID,
sizeof(AudioDeviceID)), "AudioUnitSetProperty[kAudioOutputUnitProperty_CurrentDevice] failed");
The AUGraph is already configured with all units, nodes are connected, and it's open. What am I doing wrong?
I would be very grateful for any clue to resolve this problem.
I'm making a CoreAudio-based FLAC player, and ran into a nasty issue with AudioQueues.
I'm initializing my stuff like this (variables beginning with an underscore are instance variables):
_flacDecoder = FLAC__stream_decoder_new();
AudioStreamBasicDescription asbd = {
.mFormatID = kAudioFormatLinearPCM,
.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked,
.mSampleRate = 44100,
.mChannelsPerFrame = 2,
.mBitsPerChannel = 16,
.mBytesPerPacket = 4,
.mFramesPerPacket = 1,
.mBytesPerFrame = 4,
.mReserved = 0
};
AudioQueueNewOutput(&asbd, HandleOutputBuffer, (__bridge void *)(self), CFRunLoopGetCurrent(), kCFRunLoopDefaultMode, 0, &_audioQueue);
for (int i = 0; i < kNumberBuffers; ++i) {
AudioQueueAllocateBuffer(_audioQueue, 0x10000, &_audioQueueBuffers[i]);
}
AudioQueueSetParameter(_audioQueue, kAudioQueueParam_Volume, 1.0);
16-bit stereo PCM at 44.1 kHz, a pretty basic setup. kNumberBuffers is 3, and each buffer is 0x10000 bytes.
I populate the buffers with these callbacks:
static void HandleOutputBuffer(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer){
FLACPlayer * self = (__bridge FLACPlayer*)inUserData;
UInt32 largestBlockSizeInBytes = self->_currentStreamInfo.max_blocksize * self->_currentStreamInfo.channels * self->_currentStreamInfo.bits_per_sample/8;
inBuffer->mAudioDataByteSize = 0;
self->_buffer = inBuffer;
while(inBuffer->mAudioDataByteSize <= inBuffer->mAudioDataBytesCapacity - largestBlockSizeInBytes){
FLAC__bool result = FLAC__stream_decoder_process_single(self->_flacDecoder);
assert(result);
if(FLAC__stream_decoder_get_state(self->_flacDecoder) == FLAC__STREAM_DECODER_END_OF_STREAM){
AudioQueueStop(self->_audioQueue, false);
break;
}
}
AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}
static FLAC__StreamDecoderWriteStatus flacDecoderWriteCallback(const FLAC__StreamDecoder *decoder, const FLAC__Frame *frame, const FLAC__int32 * const buffer[], void *client_data){
FLACPlayer * self = (__bridge FLACPlayer *)client_data;
assert(frame->header.bits_per_sample == 16); // TODO
int16_t * bufferWritePosition = (int16_t*)((uint8_t*)self->_buffer->mAudioData + self->_buffer->mAudioDataByteSize);
for(int i = 0; i < frame->header.blocksize; i++){
for(int j = 0; j < frame->header.channels; j++){
*bufferWritePosition = (int16_t)buffer[j][i];
bufferWritePosition++;
}
}
int totalFramePayloadInBytes = frame->header.channels * frame->header.blocksize * frame->header.bits_per_sample/8;
self->_buffer->mAudioDataByteSize += totalFramePayloadInBytes;
return FLAC__STREAM_DECODER_WRITE_STATUS_CONTINUE;
}
static void flacDecoderMetadataCallback(const FLAC__StreamDecoder *decoder, const FLAC__StreamMetadata *metadata, void *client_data){
FLACPlayer * self = (__bridge FLACPlayer*) client_data;
if(metadata->type == FLAC__METADATA_TYPE_STREAMINFO){
self->_currentStreamInfo = metadata->data.stream_info;
}
}
Basically, when the queue requests a new buffer, I fill the buffer from the FLAC__stream_decoder, then I enqueue it, just like everyone else would do. When libFLAC tells me that I've reached the end of the file, I tell the AudioQueue to stop asynchronously, so that it stops only once it has consumed all the buffers' contents. However, instead of playing through to the end, the playback stops a tiny bit before it should. If I remove this line:
AudioQueueStop(self->_audioQueue, false);
everything works fine; the audio plays end-to-end, although my queue keeps running till the end of time. If I change that line to this:
AudioQueueStop(self->_audioQueue, true);
then the playback stops immediately/synchronously, as you'd expect from Apple's documentation:
If you pass true, stopping occurs immediately (that is,
synchronously). If you pass false, the function returns immediately,
but the audio queue does not stop until its queued buffers are played
or recorded (that is, the stop occurs asynchronously). Audio queue
callbacks are invoked as necessary until the queue actually stops.
My questions are:
- am I doing anything wrong?
- how can I play my audio until the end, and shut down the queue appropriately?
Of course, after struggling with this stuff for hours, I found the solution minutes after posting this question...
The problem was that the AudioQueue doesn't care about buffers enqueued after calling AudioQueueStop(..., false). So now I'm feeding the queue like this, and everything works like a charm:
static void HandleOutputBuffer(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer){
FLACPlayer * self = (__bridge FLACPlayer*)inUserData;
UInt32 largestBlockSizeInBytes = self->_currentStreamInfo.max_blocksize * self->_currentStreamInfo.channels * self->_currentStreamInfo.bits_per_sample/8;
inBuffer->mAudioDataByteSize = 0;
self->_buffer = inBuffer;
bool shouldStop = false;
while(inBuffer->mAudioDataByteSize <= inBuffer->mAudioDataBytesCapacity - largestBlockSizeInBytes){
FLAC__bool result = FLAC__stream_decoder_process_single(self->_flacDecoder);
assert(result);
if(FLAC__stream_decoder_get_state(self->_flacDecoder) == FLAC__STREAM_DECODER_END_OF_STREAM){
shouldStop = true;
break;
}
}
AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
if(shouldStop){
AudioQueueStop(self->_audioQueue, false);
}
}
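If you also need to know when the queue has actually finished draining (for example, to dispose of it), a property listener on kAudioQueueProperty_IsRunning can tell you. A small sketch, not part of the original fix:
static void queueRunningChanged(void *inUserData, AudioQueueRef inAQ,
                                AudioQueuePropertyID inID)
{
    UInt32 isRunning = 0;
    UInt32 size = sizeof(isRunning);
    AudioQueueGetProperty(inAQ, kAudioQueueProperty_IsRunning, &isRunning, &size);
    if (!isRunning) {
        // The queue has fully drained; it is now safe to dispose of it.
    }
}
// Registered once, right after creating the queue:
AudioQueueAddPropertyListener(_audioQueue, kAudioQueueProperty_IsRunning,
                              queueRunningChanged, (__bridge void *)self);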
I have an audio analysis app using Audio Units that works perfectly when the app is run in isolation. However, if there are other audio apps running in the background AudioUnitRender returns a -50 error.
Does anyone know a way to resolve this, so that AudioUnitRender works even when other audio apps are running?
Thanks in advance.
Audio session initialization:
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setPreferredHardwareSampleRate:sampleRate error:&err];
[session setCategory:AVAudioSessionCategoryRecord error:&err];
[session setActive:YES error:&err];
[session setMode:AVAudioSessionModeMeasurement error:&err];
[session setDelegate:listener];
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_None;
AudioSessionSetProperty (kAudioSessionProperty_OverrideAudioRoute,sizeof (audioRouteOverride),&audioRouteOverride);
I/O unit description:
OSStatus err;
AudioComponentDescription ioUnitDescription;
ioUnitDescription.componentType = kAudioUnitType_Output;
ioUnitDescription.componentSubType = kAudioUnitSubType_RemoteIO;
ioUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
ioUnitDescription.componentFlags = 0;
ioUnitDescription.componentFlagsMask = 0;
// Declare and instantiate an audio processing graph
NewAUGraph(&processingGraph);
// Add an audio unit node to the graph, then instantiate the audio unit.
/*
An AUNode is an opaque type that represents an audio unit in the context
of an audio processing graph. You receive a reference to the new audio unit
instance, in the ioUnit parameter, on output of the AUGraphNodeInfo
function call.
*/
AUNode ioNode;
AUGraphAddNode(processingGraph, &ioUnitDescription, &ioNode);
AUGraphOpen(processingGraph); // indirectly performs audio unit instantiation
// Obtain a reference to the newly-instantiated I/O unit. Each Audio Unit
// requires its own configuration.
AUGraphNodeInfo(processingGraph, ioNode, NULL, &ioUnit);
// Initialize below.
AURenderCallbackStruct callbackStruct = {0};
UInt32 enableInput;
UInt32 enableOutput;
// Enable input and disable output.
enableInput = 1; enableOutput = 0;
callbackStruct.inputProc = RenderFFTCallback;
callbackStruct.inputProcRefCon = (__bridge void*)self;
err = AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
kInputBus, &enableInput, sizeof(enableInput));
err = AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
kOutputBus, &enableOutput, sizeof(enableOutput));
err = AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Input,
kOutputBus, &callbackStruct, sizeof(callbackStruct));
// Set the stream format.
size_t bytesPerSample = [self ASBDForSoundMode];
err = AudioUnitSetProperty(ioUnit, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBus, &streamFormat, sizeof(streamFormat));
err = AudioUnitSetProperty(ioUnit, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kOutputBus, &streamFormat, sizeof(streamFormat));
// Disable system buffer allocation. We'll do it ourselves.
UInt32 flag = 1;
err = AudioUnitSetProperty(ioUnit, kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
kInputBus, &flag, sizeof(flag));}
Render callback:
RIOInterface* THIS = (__bridge_transfer RIOInterface *)inRefCon;
COMPLEX_SPLIT A = THIS->A;
void *dataBuffer = THIS->dataBuffer;
float *outputBuffer = THIS->outputBuffer;
FFTSetup fftSetup = THIS->fftSetup;
float *hammingWeights = THIS->hammingWeights;
uint32_t log2n = THIS->log2n;
uint32_t n = THIS->n;
uint32_t nOver2 = THIS->nOver2;
uint32_t stride = 1;
int bufferCapacity = THIS->bufferCapacity;
SInt16 index = THIS->index;
AudioUnit rioUnit = THIS->ioUnit;
OSStatus renderErr;
UInt32 bus1 = 1;
renderErr = AudioUnitRender(rioUnit, ioActionFlags,
inTimeStamp, bus1, inNumberFrames, THIS->bufferList);
if (renderErr < 0) {
return renderErr;
}
I discovered that this issue was occurring when another AVAudioSession was active in another app, in which case the settings of the first-initiated AVAudioSession took priority over mine. I was trying to set the sampling frequency to 22050 Hz, but if the other audio session had set it to 44100 Hz then it remained at 44100.
I resolved the issue by making my code adaptive to other settings, e.g. with respect to the buffer size, so it would still work effectively (if not optimally) with audio settings that differed from my preference.
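As an illustration of what 'adaptive' can mean here, a sketch along these lines (assuming the iOS 6+ AVAudioSession API) reads back whatever the system actually granted instead of trusting the preferred values:
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setPreferredSampleRate:22050.0 error:&err];
[session setActive:YES error:&err];
// Read back the values the system actually chose; another app's session
// may have pinned the hardware to a different rate.
double actualSampleRate = session.sampleRate;
NSTimeInterval ioDuration = session.IOBufferDuration;
UInt32 framesPerSlice = (UInt32)ceil(actualSampleRate * ioDuration);
// Size the FFT/analysis buffers from framesPerSlice rather than from constants.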
I am writing a Mac OS X application to capture some audio through the microphone with echo cancellation. I am creating an AudioUnit of type VoiceProcessingIO. I want to output the audio as signed-integer linear PCM. However, when I indicate that I want the output sample format to be signed integer, I get an "Unsupported Format" error.
How can I configure the AudioUnit to output data in signed integer format? Here is how I am configuring it right now. If I try replacing kAudioFormatFlagIsFloat with kAudioFormatFlagIsSignedInteger, then I get an error :(
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
AudioComponent comp = AudioComponentFindNext(NULL, &desc);
OSStatus status = AudioComponentInstanceNew(comp, &_audioUnit);
...
const int sampleSize = 2;
const int eight_bits_per_byte = 8;
AudioStreamBasicDescription streamFormat;
streamFormat.mSampleRate = 16000;
streamFormat.mFormatID = kAudioFormatLinearPCM;
streamFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
streamFormat.mBytesPerPacket = sampleSize;
streamFormat.mFramesPerPacket = 1;
streamFormat.mBytesPerFrame = sampleSize;
streamFormat.mChannelsPerFrame = 1;
streamFormat.mBitsPerChannel = sampleSize * eight_bits_per_byte;
status = AudioUnitSetProperty(_audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &streamFormat, sizeof(streamFormat));
// status = UnsupportedFormatError
I decided that I can't do it directly. Instead, I used an AudioConverter to convert the floating-point output to signed integer. On the bright side, AudioConverter is surprisingly easy to use!
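For reference, a minimal sketch of that conversion (assuming mono 16 kHz on both sides, matching the question's format; with equal sample rates the one-shot conversion call is sufficient):
AudioStreamBasicDescription floatFormat = {0};
floatFormat.mSampleRate       = 16000;
floatFormat.mFormatID         = kAudioFormatLinearPCM;
floatFormat.mFormatFlags      = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked;
floatFormat.mBytesPerPacket   = 4;
floatFormat.mFramesPerPacket  = 1;
floatFormat.mBytesPerFrame    = 4;
floatFormat.mChannelsPerFrame = 1;
floatFormat.mBitsPerChannel   = 32;
AudioStreamBasicDescription intFormat = floatFormat;
intFormat.mFormatFlags    = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
intFormat.mBytesPerPacket = 2;
intFormat.mBytesPerFrame  = 2;
intFormat.mBitsPerChannel = 16;
AudioConverterRef converter = NULL;
OSStatus status = AudioConverterNew(&floatFormat, &intFormat, &converter);
// Each rendered float buffer can then be converted in one call:
// AudioConverterConvertComplexBuffer(converter, inNumberFrames,
//                                    &floatBufferList, &intBufferList);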