I have an audio app that uses a multichannel mixer to play several m4a files at a time. I'm using the AudioToolbox framework to stream audio, but on iOS 9 the framework throws an exception in the mixer render callback where I stream the audio files.
Interestingly, apps compiled with the iOS 9 SDK continue to stream the same files perfectly on iOS 7/8 devices, just not on iOS 9.
I can't figure out whether Apple broke something in iOS 9 or whether we have the files encoded wrong on our end, given that they play just fine on iOS 7/8 but not on 9.
Exception:
malloc: *** error for object 0x7fac74056e08: incorrect checksum for freed object - object was probably modified after being freed.
*** set a breakpoint in malloc_error_break to debug
It works for all other formats with no exception or memory errors of any kind, but fails for the m4a format, which is very surprising.
Here is the code that loads the files; it works for WAV, AIFF, etc., but not for m4a:
- (void)loadFiles {
    AVAudioFormat *clientFormat = [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatFloat32
                                                                  sampleRate:kGraphSampleRate
                                                                    channels:1
                                                                 interleaved:NO];
    for (int i = 0; i < numFiles && i < maxBufs; i++) {
        ExtAudioFileRef xafref = 0;
        // open one of the source files
        OSStatus result = ExtAudioFileOpenURL(sourceURL[i], &xafref);
        if (result || !xafref) { break; }
        // get the file data format; this represents the file's actual data format
        AudioStreamBasicDescription fileFormat;
        UInt32 propSize = sizeof(fileFormat);
        result = ExtAudioFileGetProperty(xafref, kExtAudioFileProperty_FileDataFormat, &propSize, &fileFormat);
        if (result) { break; }
        // set the client format - this is the format we want back from ExtAudioFile and corresponds to the format
        // we will be providing to the input callback of the mixer, therefore the data type must be the same
        double rateRatio = kGraphSampleRate / fileFormat.mSampleRate;
        propSize = sizeof(AudioStreamBasicDescription);
        result = ExtAudioFileSetProperty(xafref, kExtAudioFileProperty_ClientDataFormat, propSize, clientFormat.streamDescription);
        if (result) { break; }
        // get the file's length in sample frames
        UInt64 numFrames = 0;
        propSize = sizeof(numFrames);
        result = ExtAudioFileGetProperty(xafref, kExtAudioFileProperty_FileLengthFrames, &propSize, &numFrames);
        if (result) { break; }
        if (i == metronomeBusIndex)
            numFrames = (numFrames + 6484) * 4;
        numFrames *= rateRatio; // account for any sample rate conversion
        // set up our buffer
        mSoundBuffer[i].numFrames = (UInt32)numFrames;
        mSoundBuffer[i].asbd = *(clientFormat.streamDescription);
        UInt32 samples = (UInt32)numFrames * mSoundBuffer[i].asbd.mChannelsPerFrame;
        mSoundBuffer[i].data = (Float32 *)calloc(samples, sizeof(Float32));
        mSoundBuffer[i].sampleNum = 0;
        // set up an AudioBufferList to read data into
        AudioBufferList bufList;
        bufList.mNumberBuffers = 1;
        bufList.mBuffers[0].mNumberChannels = 1;
        bufList.mBuffers[0].mData = mSoundBuffer[i].data;
        bufList.mBuffers[0].mDataByteSize = samples * sizeof(Float32);
        // perform a synchronous sequential read of the audio data out of the file into our allocated data buffer
        UInt32 numPackets = (UInt32)numFrames;
        result = ExtAudioFileRead(xafref, &numPackets, &bufList);
        if (result) {
            free(mSoundBuffer[i].data);
            mSoundBuffer[i].data = 0;
        }
        // close the file and dispose of the ExtAudioFileRef
        ExtAudioFileDispose(xafref);
    }
    // [clientFormat release];
}
If anyone could point me in the right direction: how do I go about debugging this issue? Do we need to re-encode our files in some specific way?
I tried it on the iOS 9.1 beta 3 yesterday and things seem to be back to normal.
Try it out and let us know if it works for you too.
When capturing video from the camera on an Intel Mac and using VideoToolbox to hardware-encode the raw pixel buffers into H.264 slices, I found that the encoded I-frames are not sharp, so the video looks blurry every few seconds. These are the properties I set:
self.bitrate = 1000000;
self.frameRate = 20;
int interval_second = 2;
NSDictionary *compressionProperties = @{
    (id)kVTCompressionPropertyKey_ProfileLevel: (id)kVTProfileLevel_H264_High_AutoLevel,
    (id)kVTCompressionPropertyKey_RealTime: @YES,
    (id)kVTCompressionPropertyKey_AllowFrameReordering: @NO,
    (id)kVTCompressionPropertyKey_H264EntropyMode: (id)kVTH264EntropyMode_CABAC,
    (id)kVTCompressionPropertyKey_PixelTransferProperties: @{
        (id)kVTPixelTransferPropertyKey_ScalingMode: (id)kVTScalingMode_Trim,
    },
    (id)kVTCompressionPropertyKey_AverageBitRate: @(self.bitrate),
    (id)kVTCompressionPropertyKey_ExpectedFrameRate: @(self.frameRate),
    (id)kVTCompressionPropertyKey_MaxKeyFrameInterval: @(self.frameRate * interval_second),
    (id)kVTCompressionPropertyKey_MaxKeyFrameIntervalDuration: @(interval_second),
    (id)kVTCompressionPropertyKey_DataRateLimits: @[@(self.bitrate / 8), @1.0],
};
result = VTSessionSetProperties(self.compressionSession, (CFDictionaryRef)compressionProperties);
if (result != noErr) {
    NSLog(@"VTSessionSetProperties failed: %d", (int)result);
    return;
} else {
    NSLog(@"VTSessionSetProperties succeeded");
}
These are very strange compression settings. Do you really need such a short GOP and such strict data-rate limits?
I strongly suspect you copied this code off the internet without knowing what it does. If that's the case, set interval_second = 300 and remove kVTCompressionPropertyKey_DataRateLimits completely.
I want to be able to play MIDI files that are included as resources in my app. I have a very simple function to do this, given the name of the resource (minus the .MID file extension):
MusicPlayer musicPlayer;
MusicSequence sequence;
char TmpPath[256];   // scratch buffer for the resource path (declared here for completeness)
int MusicPlaying = 0;

void PlayMusic(char *fname)
{
    OSStatus res = noErr;
    res = NewMusicPlayer(&musicPlayer);
    res = NewMusicSequence(&sequence);
    strcpy(TmpPath, "MUSIC/");
    strcat(TmpPath, fname);
    strcat(TmpPath, ".MID");
    NSString *iName = [NSString stringWithUTF8String:TmpPath];
    NSURL *url = [[NSBundle mainBundle] URLForResource:iName withExtension:nil];
    res = MusicSequenceFileLoad(sequence, (__bridge CFURLRef _Nonnull)(url), 0, kMusicSequenceLoadSMF_ChannelsToTracks);
    res = MusicPlayerSetSequence(musicPlayer, sequence);
    res = MusicPlayerStart(musicPlayer);
    if (res == noErr) MusicPlaying = 1;
}
This all works fine and dandy, takes very little code... the problem is that I can't figure out how to know when the MIDI file has finished playing. I've tried MusicPlayerIsPlaying() (it ALWAYS returns true, LONG after the file has finished). I've tried checking MusicPlayerGetTime(), but the time count keeps on going after the MIDI finishes. I can't find any way to get a notification from this or any other way to determine that the actual MIDI data has finished playing.
Any ideas?
Apple's PlaySequence example shows how to do this:
You have to determine the length of the sequence by getting the length of each track:
UInt32 ntracks = 0;
MusicTimeStamp sequenceLength = 0;
MusicSequenceGetTrackCount(sequence, &ntracks);
for (UInt32 i = 0; i < ntracks; ++i) {
    MusicTrack track;
    MusicTimeStamp trackLength = 0;
    UInt32 propsize = sizeof(trackLength);
    result = MusicSequenceGetIndTrack(sequence, i, &track);
    result = MusicTrackGetProperty(track, kSequenceTrackProperty_TrackLength,
                                   &trackLength, &propsize);
    if (trackLength > sequenceLength)
        sequenceLength = trackLength;
}
Then wait until you have reached that time:
while (1) {
    usleep(2 * 1000 * 1000);
    result = MusicPlayerGetTime(player, &time);
    if (time >= sequenceLength)
        break;
}
I need access to the audio data from the microphone on a MacBook. I have an example program for recording microphone data, based on the one in "Learning Core Audio". When I run this program and break in the callback routine, I can see the inBuffer pointer and its mAudioData pointer. However, I am having a heck of a time making sense of the data. I've tried casting the void* mAudioData pointer to SInt16, to SInt32, and to float, and tried a number of endian conversions, all with nonsense-looking results. What I need to know definitively is the number format of the data in the buffer. The example actually works, writing microphone data to a file that I can play, so I know real audio is being recorded.
AudioStreamBasicDescription recordFormat;
memset(&recordFormat, 0, sizeof(recordFormat));
//recordFormat.mFormatID = kAudioFormatMPEG4AAC;
recordFormat.mFormatID = kAudioFormatLinearPCM;
recordFormat.mChannelsPerFrame = 2;
recordFormat.mBitsPerChannel = 16;
recordFormat.mBytesPerPacket = recordFormat.mBytesPerFrame = recordFormat.mChannelsPerFrame * sizeof(SInt16);
recordFormat.mFramesPerPacket = 1;
MyGetDefaultInputDeviceSampleRate(&recordFormat.mSampleRate);

UInt32 propSize = sizeof(recordFormat);
CheckError(AudioFormatGetProperty(kAudioFormatProperty_FormatInfo,
                                  0,
                                  NULL,
                                  &propSize,
                                  &recordFormat),
           "AudioFormatGetProperty failed");

// set up queue
AudioQueueRef queue = {0};
CheckError(AudioQueueNewInput(&recordFormat,
                              MyAQInputCallback,
                              &recorder,
                              NULL,
                              kCFRunLoopCommonModes,
                              0,
                              &queue),
           "AudioQueueNewInput failed");

UInt32 size = sizeof(recordFormat);
CheckError(AudioQueueGetProperty(queue,
                                 kAudioConverterCurrentOutputStreamDescription,
                                 &recordFormat,
                                 &size),
           "Couldn't get queue's format");
I am trying to implement a simple audio unit graph that goes:
buffer of samples->low pass filter->generic output
Where the generic output would be copied into a new buffer that could then be processed further, saved to disk, etc.
All of the examples I can find online for setting up an audio unit graph involve using a generator with kAudioUnitSubType_AudioFilePlayer as the input source... I am dealing with a buffer of samples already acquired, so those examples do not help. Based on looking around in the AudioUnitProperties.h header, it looks like what I should be using is kAudioUnitSubType_ScheduledSoundPlayer?
I can't seem to find much documentation on how to hook this up, so I am quite stuck and am hoping someone here can help me out.
To simplify things, I just started out by trying to get my buffer of samples to go straight to the system output, but am unable to make this work...
#import "EffectMachine.h"
#import <AudioToolbox/AudioToolbox.h>
#import "AudioHelpers.h"
#import "Buffer.h"

@interface EffectMachine ()
@property (nonatomic, strong) Buffer *buffer;
@end

typedef struct EffectMachineGraph {
    AUGraph graph;
    AudioUnit input;
    AudioUnit lowpass;
    AudioUnit output;
} EffectMachineGraph;

@implementation EffectMachine {
    EffectMachineGraph machine;
}

- (instancetype)initWithBuffer:(Buffer *)buffer {
    if (self = [super init]) {
        self.buffer = buffer;
        // buffer is a simple wrapper object that holds two properties:
        // a pointer to the array of samples (as doubles) and the size (number of samples)
    }
    return self;
}
- (void)process {
    struct EffectMachineGraph initialized = {0};
    machine = initialized;
    CheckError(NewAUGraph(&machine.graph),
               "NewAUGraph failed");
    AudioComponentDescription outputCD = {0};
    outputCD.componentType = kAudioUnitType_Output;
    outputCD.componentSubType = kAudioUnitSubType_DefaultOutput;
    outputCD.componentManufacturer = kAudioUnitManufacturer_Apple;
    AUNode outputNode;
    CheckError(AUGraphAddNode(machine.graph,
                              &outputCD,
                              &outputNode),
               "AUGraphAddNode[kAudioUnitSubType_DefaultOutput] failed");
    AudioComponentDescription inputCD = {0};
    inputCD.componentType = kAudioUnitType_Generator;
    inputCD.componentSubType = kAudioUnitSubType_ScheduledSoundPlayer;
    inputCD.componentManufacturer = kAudioUnitManufacturer_Apple;
    AUNode inputNode;
    CheckError(AUGraphAddNode(machine.graph,
                              &inputCD,
                              &inputNode),
               "AUGraphAddNode[kAudioUnitSubType_ScheduledSoundPlayer] failed");
    CheckError(AUGraphOpen(machine.graph),
               "AUGraphOpen failed");
    CheckError(AUGraphNodeInfo(machine.graph,
                               inputNode,
                               NULL,
                               &machine.input),
               "AUGraphNodeInfo failed");
    CheckError(AUGraphConnectNodeInput(machine.graph,
                                       inputNode,
                                       0,
                                       outputNode,
                                       0),
               "AUGraphConnectNodeInput failed");
    CheckError(AUGraphInitialize(machine.graph),
               "AUGraphInitialize failed");
    // prepare input
    AudioBufferList ioData = {0};
    ioData.mNumberBuffers = 1;
    ioData.mBuffers[0].mNumberChannels = 1;
    ioData.mBuffers[0].mDataByteSize = (UInt32)(2 * self.buffer.size);
    ioData.mBuffers[0].mData = self.buffer.samples;
    ScheduledAudioSlice slice = {0};
    AudioTimeStamp timeStamp = {0};
    slice.mTimeStamp = timeStamp;
    slice.mNumberFrames = (UInt32)self.buffer.size;
    slice.mBufferList = &ioData;
    CheckError(AudioUnitSetProperty(machine.input,
                                    kAudioUnitProperty_ScheduleAudioSlice,
                                    kAudioUnitScope_Global,
                                    0,
                                    &slice,
                                    sizeof(slice)),
               "AudioUnitSetProperty[kAudioUnitProperty_ScheduleAudioSlice] failed");
    AudioTimeStamp startTimeStamp = {0};
    startTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
    startTimeStamp.mSampleTime = -1;
    CheckError(AudioUnitSetProperty(machine.input,
                                    kAudioUnitProperty_ScheduleStartTimeStamp,
                                    kAudioUnitScope_Global,
                                    0,
                                    &startTimeStamp,
                                    sizeof(startTimeStamp)),
               "AudioUnitSetProperty[kAudioUnitProperty_ScheduleStartTimeStamp] failed");
    CheckError(AUGraphStart(machine.graph),
               "AUGraphStart failed");
    // AUGraphStop(machine.graph);  <-- commented out to make sure it wasn't stopping before actually finishing playing
    // AUGraphUninitialize(machine.graph);
    // AUGraphClose(machine.graph);
}
Does anyone know what I am doing wrong here?
I think this is the documentation you're looking for.
To summarize: set up your AUGraph, set up your audio units and add them to the graph, then write and attach a render callback function to the first node in your graph. Run the graph. Note that the render callback is where your app will be asked to provide buffers of samples to the AUGraph. This is where you'll need to read from your own buffers and fill the buffers the callback supplies. I think this is what you're missing.
If you're on iOS 8, I recommend AVAudioEngine, which helps conceal some of the grungier boilerplate details of graphs and effects.
Extras:
Complete pre-iOS8 example code on github
iOS Music player app that reads audio from your MP3 library into a circular buffer and then processes it via an augraph (using a mixer & eq AU). You can see how a rendercallback is setup to read from a buffer, etc.
Amazing Audio Engine
Novocaine Audio library
I have a Remote IO application that loads and plays samples on iOS. It works fine when built with Xcode 5, with iOS 7 as the deployment target.
My application was originally built using the AudioUnitSampleType audio format and the kAudioFormatFlagsCanonical format flags. My sample files are 16-bit/44100 Hz/mono CAF files.
Now I want to run it on iOS 8.
Building the app with its original code in Xcode 6, it runs fine on an iOS 7 device but produces no sound on an iOS 8 device.
Since AudioUnitSampleType and kAudioFormatFlagsCanonical are deprecated in iOS 8, I replaced them, after some research, with float and kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved.
Now my app runs fine on iOS 8, but the sound is saturated on iOS 7.
Has anyone experienced this? Any help? Thanks, I am stuck here.
Pascal
PS: here is my sample-loading method.
#define AUDIO_SAMPLE_TYPE float
#define AUDIO_FORMAT_FLAGS kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved

- (void)load:(NSURL *)fileNameURL {
    if (frameCount > 0) {
        if (leftChannel != NULL) {
            free(leftChannel);
            leftChannel = 0;
        }
        if (rightChannel != NULL) {
            free(rightChannel);
            rightChannel = 0;
        }
    }
    soundFileURLRef = (CFURLRef)fileNameURL;

    //----------------------------------------------
    // 1. [OPEN AUDIO FILE] and associate it with the extended audio file object.
    //----------------------------------------------
    ExtAudioFileRef audioFileExtendedObject = 0;
    log_if_err(ExtAudioFileOpenURL((CFURLRef)soundFileURLRef,
                                   &audioFileExtendedObject),
               @"ExtAudioFileOpenURL failed");

    //----------------------------------------------
    // 2. [AUDIO FILE LENGTH] Get the audio file's length in frames.
    //----------------------------------------------
    UInt64 totalFramesInFile = 0;
    UInt32 frameLengthPropertySize = sizeof(totalFramesInFile);
    log_if_err(ExtAudioFileGetProperty(audioFileExtendedObject,
                                       kExtAudioFileProperty_FileLengthFrames,
                                       &frameLengthPropertySize,
                                       &totalFramesInFile),
               @"ExtAudioFileGetProperty (audio file length in frames) failed");
    frameCount = totalFramesInFile;

    //----------------------------------------------
    // 3. [AUDIO FILE FORMAT] Get the audio file's number of channels. Normally CAF.
    //----------------------------------------------
    AudioStreamBasicDescription fileAudioFormat = {0};
    UInt32 formatPropertySize = sizeof(fileAudioFormat);
    log_if_err(ExtAudioFileGetProperty(audioFileExtendedObject,
                                       kExtAudioFileProperty_FileDataFormat,
                                       &formatPropertySize,
                                       &fileAudioFormat),
               @"ExtAudioFileGetProperty (file audio format) failed");

    //----------------------------------------------
    // 4. [ALLOCATE AUDIO FILE MEMORY] Allocate memory in the soundFiles instance
    //    variable to hold the left channel, or mono, audio data
    //----------------------------------------------
    UInt32 channelCount = fileAudioFormat.mChannelsPerFrame;
    // DLog(@"fileNameURL=%@ | channelCount=%d", fileNameURL, (int)channelCount);
    if (leftChannel != NULL) {
        free(leftChannel);
        leftChannel = 0;
    }
    leftChannel = (AUDIO_SAMPLE_TYPE *)calloc(totalFramesInFile, sizeof(AUDIO_SAMPLE_TYPE));

    AudioStreamBasicDescription importFormat = {0};
    if (2 == channelCount) {
        isStereo = YES;
        if (rightChannel != NULL) {
            free(rightChannel);
            rightChannel = 0;
        }
        rightChannel = (AUDIO_SAMPLE_TYPE *)calloc(totalFramesInFile, sizeof(AUDIO_SAMPLE_TYPE));
        importFormat = stereoStreamFormat;
    } else if (1 == channelCount) {
        isStereo = NO;
        importFormat = monoStreamFormat;
    } else {
        ExtAudioFileDispose(audioFileExtendedObject);
        return;
    }

    //----------------------------------------------
    // 5. [ASSIGN THE MIXER INPUT BUS STREAM DATA FORMAT TO THE AUDIO FILE]
    //    Assign the appropriate mixer input bus stream data format to the extended audio
    //    file object. This is the format used for the audio data placed into the audio
    //    buffer in the SoundStruct data structure, which is in turn used in the
    //    inputRenderCallback callback function.
    //----------------------------------------------
    UInt32 importFormatPropertySize = (UInt32)sizeof(importFormat);
    log_if_err(ExtAudioFileSetProperty(audioFileExtendedObject,
                                       kExtAudioFileProperty_ClientDataFormat,
                                       importFormatPropertySize,
                                       &importFormat),
               @"ExtAudioFileSetProperty (client data format) failed");

    //----------------------------------------------
    // 6. [SET THE AUDIOBUFFER LIST STRUCT] which has two roles:
    //
    //    1. It gives the ExtAudioFileRead function the configuration it
    //       needs to correctly provide the data to the buffer.
    //
    //    2. It points to the soundFiles[soundFile].leftChannel buffer, so
    //       that audio data obtained from disk using the ExtAudioFileRead function
    //       goes to that buffer.
    //
    //    Allocate memory for the buffer list struct according to the number of
    //    channels it represents.
    //----------------------------------------------
    AudioBufferList *bufferList;
    bufferList = (AudioBufferList *)malloc(sizeof(AudioBufferList) + sizeof(AudioBuffer) * (channelCount - 1));
    if (NULL == bufferList) {
        NSLog(@"*** malloc failure for allocating bufferList memory");
        return;
    }

    //----------------------------------------------
    // 7. Initialize the mNumberBuffers member
    //----------------------------------------------
    bufferList->mNumberBuffers = channelCount;

    //----------------------------------------------
    // 8. Initialize the mBuffers member to 0
    //----------------------------------------------
    AudioBuffer emptyBuffer = {0};
    size_t arrayIndex;
    for (arrayIndex = 0; arrayIndex < channelCount; arrayIndex++) {
        bufferList->mBuffers[arrayIndex] = emptyBuffer;
    }

    //----------------------------------------------
    // 9. Set up the AudioBuffer structs in the buffer list
    //----------------------------------------------
    bufferList->mBuffers[0].mNumberChannels = 1;
    bufferList->mBuffers[0].mDataByteSize = totalFramesInFile * sizeof(AUDIO_SAMPLE_TYPE);
    bufferList->mBuffers[0].mData = leftChannel;
    if (channelCount == 2) {
        bufferList->mBuffers[1].mNumberChannels = 1;
        bufferList->mBuffers[1].mDataByteSize = totalFramesInFile * sizeof(AUDIO_SAMPLE_TYPE);
        bufferList->mBuffers[1].mData = rightChannel;
    }

    //----------------------------------------------
    // 10. Perform a synchronous, sequential read of the audio data out of the file and
    //     into the "soundFiles[soundFile].leftChannel" and (if stereo) ".rightChannel" members.
    //----------------------------------------------
    UInt32 numberOfPacketsToRead = (UInt32)totalFramesInFile;
    OSStatus result = ExtAudioFileRead(audioFileExtendedObject,
                                       &numberOfPacketsToRead,
                                       bufferList);
    free(bufferList);
    if (noErr != result) {
        log_if_err(result, @"ExtAudioFileRead failure");
        //
        // If reading from the file failed, then free the memory for the sound buffer.
        //
        free(leftChannel);
        leftChannel = 0;
        if (2 == channelCount) {
            free(rightChannel);
            rightChannel = 0;
        }
        frameCount = 0;
    }

    //----------------------------------------------
    // Dispose of the extended audio file object, which also
    // closes the associated file.
    //----------------------------------------------
    ExtAudioFileDispose(audioFileExtendedObject);
    return;
}