Audio Unit File Reader with AudioFileID and Encrypted File - macOS

I am reading in and playing audio files on macOS using the Audio Unit generator AudioFilePlayer:
AudioComponentDescription fileplayercd = {0};
fileplayercd.componentType = kAudioUnitType_Generator;
fileplayercd.componentSubType = kAudioUnitSubType_AudioFilePlayer;
fileplayercd.componentManufacturer = kAudioUnitManufacturer_Apple;
AUNode fileNode;
AUGraphAddNode(graph, &fileplayercd, &fileNode);
and setting its source file with:
CFURLRef inputFileUrl = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, INPUT_FILE_LOCATION, kCFURLPOSIXPathStyle, false);
AudioFileID inputFile;
AudioFileOpenURL(inputFileUrl, kAudioFileReadPermission, 0, &inputFile);
AudioUnit fileAU;
AUGraphNodeInfo(graph, fileNode, NULL, &fileAU);
AudioUnitSetProperty(fileAU, kAudioUnitProperty_ScheduledFileIDs, kAudioUnitScope_Global, 0, &inputFile, sizeof(inputFile));
but my real audio files are all encrypted, so I cannot use a raw AudioFileID. Instead, I need to somehow extend the ID and insert decryption code before any "real" reads. Is this possible?

This can be accomplished with AudioFileOpenWithCallbacks, which takes read and get-size callbacks as parameters:
OSStatus result = AudioFileOpenWithCallbacks((__bridge void*)audioData, readProc, 0, getSizeProc, 0, 0, &inputFile);
Details on usage can be found in this related question.
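For illustration, here is a minimal sketch of the two callbacks; EncryptedAudioData and DecryptBlock() are hypothetical names standing in for your own client data and cipher routine:
typedef struct {
    const UInt8 *encryptedBytes; // raw file contents, still encrypted
    SInt64       length;         // total size in bytes
} EncryptedAudioData;

static OSStatus readProc(void *inClientData, SInt64 inPosition,
                         UInt32 requestCount, void *buffer, UInt32 *actualCount)
{
    EncryptedAudioData *data = (EncryptedAudioData *)inClientData;
    SInt64 available = data->length - inPosition;
    if (available <= 0) { *actualCount = 0; return noErr; }
    UInt32 count = (available < requestCount) ? (UInt32)available : requestCount;
    // DecryptBlock() is a placeholder for your cipher: decrypt `count` bytes
    // starting at `inPosition` directly into the caller's buffer.
    DecryptBlock(data->encryptedBytes + inPosition, buffer, count, inPosition);
    *actualCount = count;
    return noErr;
}

static SInt64 getSizeProc(void *inClientData)
{
    // Report the size of the decrypted stream.
    return ((EncryptedAudioData *)inClientData)->length;
}
The AudioFileID that comes back can then be handed to kAudioUnitProperty_ScheduledFileIDs exactly as before.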

Related

Setting sampling rate of default audio output device programmatically

I'm working on an application that plays sounds through the default audio device on a Mac. I want to change the output sampling rate and bit depth of the default output device, but it always gives me a kAudioUnitErr_PropertyNotWritable error code.
Here is my test code:
AudioStreamBasicDescription streamFormat;
AudioStreamBasicDescription newStreamFormat;
newStreamFormat.mSampleRate = 96000; // the sample rate of the audio stream
newStreamFormat.mFormatID = kAudioFormatLinearPCM; // the specific encoding type of audio stream
newStreamFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger;//kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsNonMixable;
newStreamFormat.mFramesPerPacket = 1;
newStreamFormat.mChannelsPerFrame = 1;
newStreamFormat.mBitsPerChannel = 24;
newStreamFormat.mBytesPerPacket = 2;
newStreamFormat.mBytesPerFrame = 2;
UInt32 size = sizeof(AudioStreamBasicDescription);
result = AudioUnitGetProperty(myUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &streamFormat, &size);
result = AudioOutputUnitStop(myUnit);
result = AudioUnitUninitialize(myUnit);
result = AudioUnitSetProperty(myUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &newStreamFormat, size);
result = AudioUnitInitialize(myUnit);
result = AudioOutputUnitStart(myUnit);
result = AudioUnitGetProperty(myUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &streamFormat, &size);
result = AudioUnitGetProperty(myUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &streamFormat, &size);
When I make the call to set the stream format on kAudioUnitScope_Input I don't get any error, but when I set it on kAudioUnitScope_Output it fails with the property-not-writable error.
It must be possible to do this programmatically (Audio MIDI Setup does it), but I have searched and searched and haven't been able to find any solution.
I did find this post that implies that setting the input sampling rate of the device will update the output as well. I tried this, but when I read back the property, the output doesn't match what I set on the input.
I'm pretty sure it's not the output AudioUnit's job to configure devices; it's more of an intermediary between clients and audio devices, which means AudioUnitSetProperty() is the wrong API for the job.
So if you want to configure the device, try setting kAudioDevicePropertyNominalSampleRate on it using the AudioObjectSetPropertyData() function.
Then, unless you want a gratuitous rate conversion, you probably want to make sure your audio unit input format matches the new device sample rate by doing what you're already doing: calling AudioUnitSetProperty() on the input (data going into the audio unit) scope.
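A minimal sketch of that approach, assuming the default output device is the one you want to configure:
// Find the default output device.
AudioObjectPropertyAddress addr = {
    kAudioHardwarePropertyDefaultOutputDevice,
    kAudioObjectPropertyScopeGlobal,
    kAudioObjectPropertyElementMaster
};
AudioDeviceID deviceID = kAudioObjectUnknown;
UInt32 size = sizeof(deviceID);
AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr, 0, NULL, &size, &deviceID);
// Then set the nominal sample rate directly on the device object.
addr.mSelector = kAudioDevicePropertyNominalSampleRate;
Float64 desiredRate = 96000.0;
AudioObjectSetPropertyData(deviceID, &addr, 0, NULL, sizeof(desiredRate), &desiredRate);
Note the rate has to be one the device actually supports; you can list the candidates with kAudioDevicePropertyAvailableNominalSampleRates.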

CoreAudio Output Sample Rate Discrepancy

I am using ffmpeg to acquire audio from .mov files. Looking over my settings, I am not sample-rate converting the audio buffers I am generating, so that is unlikely to account for the issues I am having. Regardless of the sample rate I set on my Built-in Output, my audio files that are at 44.1 kHz play back at the correct rate. If I play back a 48 kHz file, the file plays back slower (at 91% of the normal rate), which indicates that the true rate is 44.1 kHz. I can change my built-in output to 44.1, 48, or 96 kHz and the same phenomenon exists. I change my default output rate using the Audio MIDI Setup app. I then verify my sample rate using AudioUnitGetProperty on my outputAudioUnit. This matches the sample rate in Audio MIDI Setup.
Thoughts? I am including my audio graph code:
CheckError(NewAUGraph(&fp.graph), "Couldn't create a new AUGraph");
//varispeed node has an input callback
//the varispeed node feeds an output node which is running
//at the frequency of the system default output
AUNode outputNode;
AudioComponentDescription outputcd = [self defaultOutputComponent];
CheckError(AUGraphAddNode(fp.graph, &outputcd, &outputNode),
"AUGraphAddNode[kAudioUnitSubType_DefaultOutput] failed");
AUNode varispeedNode;
AudioComponentDescription varispeedcd = [self variSpeedComponent];
CheckError(AUGraphAddNode(fp.graph, &varispeedcd, &varispeedNode),
"AUGraphAddNode[kAudioUnitSubType_Varispeed] failed");
CheckError(AUGraphOpen(fp.graph),
"Couldn't Open AudioGraph");
CheckError(AUGraphNodeInfo(fp.graph, outputNode, NULL, &fp.outputAudioUnit),
"Couldn't Retrieve output node");
CheckError(AUGraphNodeInfo(fp.graph, varispeedNode, NULL, &fp.variSpeedAudioUnit),
"Couldn't Retrieve Varispeed Audio Unit");
AURenderCallbackStruct input;
input.inputProc = CBufferProviderCallback;
input.inputProcRefCon = &playerStruct;
CheckError(AudioUnitSetProperty(fp.variSpeedAudioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Input,
0,
&input,
sizeof(input)),
"AudioUnitSetProperty failed");
CheckError(AUGraphConnectNodeInput(fp.graph, varispeedNode, 0, outputNode, 0),
"Couldn't Connect varispeed to output");
CheckError(AUGraphInitialize(fp.graph),
"Couldn't Initialize AUGraph");
// check output sample rate
Float64 outputSampleRate = 48000.0;
UInt32 sizeOfFloat64 = sizeof(Float64);
outputSampleRate = 0.0;
CheckError(AudioUnitGetProperty(fp.outputAudioUnit,
kAudioUnitProperty_SampleRate,
kAudioUnitScope_Global,
0,
&outputSampleRate,
&sizeOfFloat64),
"Couldn't get output sampleRate");
I solved the issue. When building the audio graph, you need to specify the input sample rate of the varispeed audio unit before you connect it to an output node inside the AUGraph. See the example code at
https://developer.apple.com/library/content/samplecode/CAPlayThrough/Listings/ReadMe_txt.html
CheckError(NewAUGraph(&fp.graph), "BuildGraphError");
AUNode outputNode;
AudioComponentDescription outputcd = [self defaultOutputComponent];
CheckError(AUGraphAddNode(fp.graph, &outputcd, &outputNode),
"AUGraphAddNode[kAudioUnitSubType_DefaultOutput] failed");
AUNode varispeedNode;
AudioComponentDescription varispeedcd = [self variSpeedComponent];
CheckError(AUGraphAddNode(fp.graph, &varispeedcd, &varispeedNode),
"AUGraphAddNode[kAudioUnitSubType_Varispeed] failed");
CheckError(AUGraphOpen(fp.graph),
"Couldn't Open AudioGraph");
CheckError(AUGraphNodeInfo(fp.graph, outputNode, NULL, &fp.outputAudioUnit),
"Couldn't Retrieve File Audio Unit");
CheckError(AUGraphNodeInfo(fp.graph, varispeedNode, NULL, &fp.variSpeedAudioUnit),
"Couldn't Retrieve Varispeed Audio Unit");
AURenderCallbackStruct input;
input.inputProc = CBufferProviderCallback;
input.inputProcRefCon = &playerStruct;
CheckError(AudioUnitSetProperty(fp.variSpeedAudioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Input,
0,
&input,
sizeof(input)),
"AudioUnitSetProperty failed");
//you have to set the varispeed rate before you connect it
//see CAPlayThrough
AudioStreamBasicDescription asbd = {0};
UInt32 size;
Boolean outWritable;
//Gets the size of the Stream Format Property and if it is writable
OSStatus result = AudioUnitGetPropertyInfo(fp.variSpeedAudioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
0,
&size,
&outWritable);
//Get the current stream format of the output
result = AudioUnitGetProperty (fp.variSpeedAudioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
0,
&asbd,
&size);
asbd.mSampleRate = targetSampleRate;
//Set the varispeed input format to the output format, with the new sample rate
result = AudioUnitSetProperty (fp.variSpeedAudioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
0,
&asbd,
size);
printf("AudioUnitSetProperty result %d %d\n", result, noErr);
CheckError(AUGraphConnectNodeInput(fp.graph, varispeedNode, 0, outputNode, 0),
"Couldn't Connect varispeed to output");
CheckError(AUGraphInitialize(fp.graph),
"Couldn't Initialize AUGraph");
Float64 outputSampleRate = 48000.0;
UInt32 sizeOfFloat64 = sizeof(Float64);
outputSampleRate = 0.0;
CheckError(AudioUnitGetProperty(fp.outputAudioUnit,
kAudioUnitProperty_SampleRate,
kAudioUnitScope_Global,
0,
&outputSampleRate,
&sizeOfFloat64),
"Couldn't get output sampleRate");
NSLog(#"Output Sample Rate of the ->%f", outputSampleRate);

EXC_BAD_ACCESS in Core Audio - writing mic data to file w/ Extended Audio File Services

I am attempting to write incoming mic audio to a file. Because the audio samples are delivered 4096 frames at a time (the frame count configured for my project) in a time-critical callback, I cannot simply write the bytes to a file with AudioFileWriteBytes. I also did not wish to go through the effort and complexity of setting up my own ring buffer to store samples to write elsewhere, so I am using the Extended Audio File API for its ExtAudioFileWriteAsync function.
As instructed by the documentation, I create the ExtAudioFileRef with a CFURL and then prime it once in main with a null buffer and 0 frames:
ExtAudioFileWriteAsync(player.recordFile, 0, NULL);
There I have my code to write to this file asynchronously. I have the call nested in a dispatch queue so that it runs after the callback function exits scope (though I'm not sure that is necessary; I get this error both with and without the enclosing dispatch block). This is the callback as it is right now:
OSStatus InputRenderProc(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList * ioData)
{
MyAUGraphPlayer *player = (MyAUGraphPlayer*) inRefCon;
// rendering incoming mic samples to player->inputBuffer
OSStatus inputProcErr = noErr;
inputProcErr = AudioUnitRender(player->inputUnit,
ioActionFlags,
inTimeStamp,
inBusNumber,
inNumberFrames,
player->inputBuffer);
printf("%i", inNumberFrames);
dispatch_async(player->fileWritingQueue, ^{
ExtAudioFileWriteAsync(player->recordFile, 4096, player->inputBuffer);
});
return inputProcErr;
}
It immediately bails out with the bad-access exception on the first callback invocation. For clarity, these are the settings I have for creating the file to begin with:
// describe a PCM format for audio file
AudioStreamBasicDescription format = { 0 };
format.mBytesPerFrame = 4;
format.mBytesPerPacket = 4;
format.mChannelsPerFrame = 2;
format.mBitsPerChannel = 16;
format.mFramesPerPacket = 1;
format.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsFloat;
format.mFormatID = kAudioFormatLinearPCM;
CFURLRef myFileURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, CFSTR("./test2.wav"), kCFURLPOSIXPathStyle, false);
ExtAudioFileCreateWithURL(myFileURL,
kAudioFileWAVEType,
&format,
NULL,
kAudioFileFlags_EraseFile,
&player.recordFile);
player.fileWritingQueue = dispatch_queue_create("myQueue", NULL);
ExtAudioFileWriteAsync(player.recordFile, 0, NULL);

Format of microphone audio passed to call back in mac OS X core audio example

I need access to audio data from the microphone on a MacBook. I have an example program for recording microphone data, based on the one in "Learning Core Audio". When I run this program and break in the callback routine, I see the inBuffer pointer and the mAudioData pointer. However, I am having a heck of a time making sense of the data. I've tried casting the void* mAudioData pointer to SInt16, to SInt32, and to float, and tried a number of endian conversions, all with nonsense-looking results. What I need to know definitively is the number format for the data in the buffer. The example actually works, writing microphone data to a file which I can play, so I know that real audio is being recorded.
AudioStreamBasicDescription recordFormat;
memset(&recordFormat,0,sizeof(recordFormat));
//recordFormat.mFormatID = kAudioFormatMPEG4AAC;
recordFormat.mFormatID = kAudioFormatLinearPCM;
recordFormat.mChannelsPerFrame = 2;
recordFormat.mBitsPerChannel = 16;
recordFormat.mBytesPerPacket = recordFormat.mBytesPerFrame = recordFormat.mChannelsPerFrame * sizeof(SInt16);
recordFormat.mFramesPerPacket = 1;
MyGetDefaultInputDeviceSampleRate(&recordFormat.mSampleRate);
UInt32 propSize = sizeof(recordFormat);
CheckError(AudioFormatGetProperty(kAudioFormatProperty_FormatInfo,
0,
NULL,
&propSize,
&recordFormat),
"AudioFormatProperty failed");
//set up queue
AudioQueueRef queue = {0};
CheckError(AudioQueueNewInput(&recordFormat,
MyAQInputCallback,
&recorder,
NULL,
kCFRunLoopCommonModes,
0,
&queue),
"AudioQueueNewInput failed");
UInt32 size = sizeof(recordFormat);
CheckError(AudioQueueGetProperty(queue,
kAudioConverterCurrentOutputStreamDescription,
&recordFormat,
&size), "Couldn't get queue's format");

Trying to setup an audio unit graph with a buffer of samples as the input

I am trying to implement a simple audio unit graph that goes:
buffer of samples->low pass filter->generic output
Where the generic output would be copied into a new buffer that could then be processed further, saved to disk, etc.
All of the examples I can find online having to do with setting up an audio unit graph involve using a generator with kAudioUnitSubType_AudioFilePlayer as the input source... I am dealing with a buffer of samples already acquired, so those examples do not help... Based on looking around in the AudioUnitProperties.h file, it looks like what I should be using is kAudioUnitSubType_ScheduledSoundPlayer?
I can't seem to find much documentation on how to hook this up, so I am quite stuck and am hoping someone here can help me out.
To simplify things, I just started out by trying to get my buffer of samples to go straight to the system output, but am unable to make this work...
#import "EffectMachine.h"
#import <AudioToolbox/AudioToolbox.h>
#import "AudioHelpers.h"
#import "Buffer.h"
@interface EffectMachine ()
#property (nonatomic, strong) Buffer *buffer;
@end
typedef struct EffectMachineGraph {
AUGraph graph;
AudioUnit input;
AudioUnit lowpass;
AudioUnit output;
} EffectMachineGraph;
@implementation EffectMachine {
EffectMachineGraph machine;
}
-(instancetype)initWithBuffer:(Buffer *)buffer {
if (self = [super init]) {
self.buffer = buffer;
// buffer is a simple wrapper object that holds two properties:
// a pointer to the array of samples (as doubles) and the size (number of samples)
}
return self;
}
-(void)process {
struct EffectMachineGraph initialized = {0};
machine = initialized;
CheckError(NewAUGraph(&machine.graph),
"NewAUGraph failed");
AudioComponentDescription outputCD = {0};
outputCD.componentType = kAudioUnitType_Output;
outputCD.componentSubType = kAudioUnitSubType_DefaultOutput;
outputCD.componentManufacturer = kAudioUnitManufacturer_Apple;
AUNode outputNode;
CheckError(AUGraphAddNode(machine.graph,
&outputCD,
&outputNode),
"AUGraphAddNode[kAudioUnitSubType_GenericOutput] failed");
AudioComponentDescription inputCD = {0};
inputCD.componentType = kAudioUnitType_Generator;
inputCD.componentSubType = kAudioUnitSubType_ScheduledSoundPlayer;
inputCD.componentManufacturer = kAudioUnitManufacturer_Apple;
AUNode inputNode;
CheckError(AUGraphAddNode(machine.graph,
&inputCD,
&inputNode),
"AUGraphAddNode[kAudioUnitSubType_ScheduledSoundPlayer] failed");
CheckError(AUGraphOpen(machine.graph),
"AUGraphOpen failed");
CheckError(AUGraphNodeInfo(machine.graph,
inputNode,
NULL,
&machine.input),
"AUGraphNodeInfo failed");
CheckError(AUGraphConnectNodeInput(machine.graph,
inputNode,
0,
outputNode,
0),
"AUGraphConnectNodeInput");
CheckError(AUGraphInitialize(machine.graph),
"AUGraphInitialize failed");
// prepare input
AudioBufferList ioData = {0};
ioData.mNumberBuffers = 1;
ioData.mBuffers[0].mNumberChannels = 1;
ioData.mBuffers[0].mDataByteSize = (UInt32)(2 * self.buffer.size);
ioData.mBuffers[0].mData = self.buffer.samples;
ScheduledAudioSlice slice = {0};
AudioTimeStamp timeStamp = {0};
slice.mTimeStamp = timeStamp;
slice.mNumberFrames = (UInt32)self.buffer.size;
slice.mBufferList = &ioData;
CheckError(AudioUnitSetProperty(machine.input,
kAudioUnitProperty_ScheduleAudioSlice,
kAudioUnitScope_Global,
0,
&slice,
sizeof(slice)),
"AudioUnitSetProperty[kAudioUnitProperty_ScheduleStartTimeStamp] failed");
AudioTimeStamp startTimeStamp = {0};
startTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
startTimeStamp.mSampleTime = -1;
CheckError(AudioUnitSetProperty(machine.input,
kAudioUnitProperty_ScheduleStartTimeStamp,
kAudioUnitScope_Global,
0,
&startTimeStamp,
sizeof(startTimeStamp)),
"AudioUnitSetProperty[kAudioUnitProperty_ScheduleStartTimeStamp] failed");
CheckError(AUGraphStart(machine.graph),
"AUGraphStart failed");
// AUGraphStop(machine.graph); <-- commented out to make sure it wasn't stopping before actually finishing playing.
// AUGraphUninitialize(machine.graph);
// AUGraphClose(machine.graph);
}
Does anyone know what I am doing wrong here?
I think this is the documentation you're looking for.
To summarize: set up your AUGraph, set up your audio units and add them to the graph, then write and attach a render callback function to the first node in your graph, and run the graph. The render callback is where your app will be asked to provide buffers of samples to the AUGraph; this is where you need to read from your own buffers and fill the buffers handed to the callback. I think this is what you're missing; a rough sketch follows.
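This sketch assumes the simplified buffer-straight-to-output case; PlayerState and the mono Float32 stream format are illustrative assumptions, not code from the question:
typedef struct {
    const double *samples;  // the acquired samples (as in the Buffer wrapper)
    UInt32 count;           // total number of samples
    UInt32 position;        // next sample to hand out
} PlayerState;

static OSStatus RenderFromBuffer(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    PlayerState *state = (PlayerState *)inRefCon;
    Float32 *out = (Float32 *)ioData->mBuffers[0].mData;
    for (UInt32 i = 0; i < inNumberFrames; i++) {
        // Hand out the next sample, or silence once the buffer is drained.
        out[i] = (state->position < state->count)
               ? (Float32)state->samples[state->position++]
               : 0.0f;
    }
    return noErr;
}

// Attached to the node the samples should feed (before AUGraphInitialize):
AURenderCallbackStruct callback = { RenderFromBuffer, &playerState };
AUGraphSetNodeInputCallback(machine.graph, outputNode, 0, &callback);
With this wiring, the ScheduledSoundPlayer node isn't needed at all for the straight-to-output test.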
If you're on iOS 8, I recommend AVAudioEngine, which helps conceal some of the grungier boilerplate details of graphs and effects.
Extras:
Complete pre-iOS 8 example code on GitHub
An iOS music player app that reads audio from your MP3 library into a circular buffer and then processes it via an AUGraph (using mixer & EQ audio units). You can see how a render callback is set up to read from a buffer, etc.
Amazing Audio Engine
Novocaine Audio library
