loadFinished false even for local HTML file load - macOS

We developed a ".app" installer for Mac OS X using C++/Qt. We are intermittently seeing the loadFinished signal emitted with false, even when loading an HTML file from the local file system.
I found some other posts about the loadFinished signal being emitted multiple times. In my case, the signal is emitted only once, with false.
Has anyone faced this kind of issue? Is it fine to rely on the loadProgress signal reaching the value 100 instead?
Qt - 4.8.2
Mac OS X - 10.7
Code snippet:
boost::scoped_ptr<QWebView> m_webViewP;
static const boost::filesystem::path kJSEntryPoint = boost::filesystem::path("web") / boost::filesystem::path("index.html");
QUrl fileURL(QString::fromStdString(ISettingsManager::getInstance()->getJSEntryPoint()));
Log(INFO, "Loading auth page[%s].", qPrintable(fileURL.toString()));
m_webViewP->load(fileURL);
m_webViewP->activateWindow();
m_webViewP->raise();
const std::string SettingsManagerImpl::getJSEntryPoint() const
{
boost::filesystem::path retPath;
retPath = getAppDir_();
retPath /= kJSEntryPoint;
std::string toRet = retPath.string();
convertToURL_(toRet);
return toRet;
}
void SettingsManagerImpl::convertToURL_(std::string &path) const
{
size_t index = path.find(kBackSlash);
while (index != std::string::npos)
{
path.replace(index, 1, kForwarSlash);
index = path.find(kBackSlash);
}
path = kURLPrefix + path;
}
boost::filesystem::path SettingsManagerImpl::getAppDir_() const
{
#ifdef _WIN32
TCHAR pathName[MAX_PATH];
if (::GetModuleFileName(NULL, pathName, MAX_PATH) == 0)
{
SSLog(ERROR, "Error while retrieving app path.");
return "";
}
::PathRemoveFileSpec(pathName);
return boost::filesystem::path(pathName);
#else
NSAutoreleasePool *poolP = [[NSAutoreleasePool alloc] init];
NSString *pathP = [[NSBundle mainBundle] executablePath];
pathP = [pathP stringByDeletingLastPathComponent];
boost::filesystem::path toRet([pathP cStringUsingEncoding:NSASCIIStringEncoding]);
[poolP drain];
return toRet;
#endif
}
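For diagnosis, here is a minimal sketch (mine, not from the original post) that builds the URL with QUrl::fromLocalFile, which takes a plain filesystem path and sidesteps the hand-rolled slash/prefix conversion, and logs both signals for comparison; the onLoadFinished/onLoadProgress slots are assumed to exist on the surrounding QObject:
// Sketch only: let Qt build the file:// URL and observe both signals side by side.
QUrl fileURL = QUrl::fromLocalFile(QString::fromStdString(retPath.string()));
QObject::connect(m_webViewP.get(), SIGNAL(loadFinished(bool)),
                 this, SLOT(onLoadFinished(bool)));
QObject::connect(m_webViewP.get(), SIGNAL(loadProgress(int)),
                 this, SLOT(onLoadProgress(int)));
m_webViewP->load(fileURL);
If loadProgress reaches 100 while loadFinished still reports false, the failure is more likely in how WebKit finishes the main-frame load than in how the URL was built.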

Related

Launch JVM through JNI_CreateJavaVM in OSX is console-only

I am trying to launch a JVM from my C/ObjC program, but I have some issues displaying visual elements, like for example a window, on Catalina.
The part of the program that uses non-visual items, i.e. the console and System.out, works fine. But no application window appears, although a proper Dock icon does appear; clicking it does nothing.
I have written a helper function to launch the JVM. As arguments it takes:
the location of the libjvm.dylib
the JVM options
the number of JVM options
the main arguments
the number of main arguments
the main class
It looks to me that I am missing something, but I can't figure out what. Any ideas?
Here is the code:
#include <dlfcn.h> // dlopen, dlsym
#include <stdio.h> // printf
#include <string.h> // memset
#include <jni.h>
#import <Foundation/Foundation.h> // NSAutoreleasePool
typedef jint (*JNI_CreateJavaVM_f)(JavaVM **, void **, void *);
int launchjvm(char *jvmlib, char **jvmopts, int c_jvmopts, char **args, int c_args, char *mainclass)
{
JavaVM *vm;
JNIEnv *env;
JavaVMInitArgs vm_args;
JavaVMOption options[c_jvmopts];
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
memset(&vm_args, 0, sizeof(vm_args));
vm_args.version = JNI_VERSION_1_8;
vm_args.ignoreUnrecognized = JNI_TRUE;
for (int i = 0; i < c_jvmopts; i++)
options[i].optionString = jvmopts[i];
vm_args.options = options;
vm_args.nOptions = c_jvmopts;
void *lib_handle = dlopen(jvmlib, RTLD_LOCAL | RTLD_LAZY);
if (!lib_handle)
{
printf("[%s] Unable to load library: %s\n", __FILE__, dlerror());
return 1;
}
JNI_CreateJavaVM_f JNI_CreateJavaVM = (JNI_CreateJavaVM_f)dlsym(lib_handle, "JNI_CreateJavaVM");
if (!JNI_CreateJavaVM)
{
printf("[%s] Unable to get symbol: %s\n", __FILE__, dlerror());
return 1;
}
jint res = JNI_CreateJavaVM(&vm, (void **)&env, &vm_args);
if (res != JNI_OK)
{
printf("Failed to create Java VM\n");
return res;
}
jclass cls = (*env)->FindClass(env, mainclass);
if (cls == NULL)
{
printf("Failed to find %s class\n", mainclass);
return 1;
}
jmethodID mid = (*env)->GetStaticMethodID(env, cls, "main", "([Ljava/lang/String;)V");
if (mid == NULL)
{
printf("Failed to find main function in class %s\n", mainclass);
return 2;
}
jobjectArray main_args = (*env)->NewObjectArray(env, c_args, (*env)->FindClass(env, "java/lang/String"), NULL);
for(int i = 0 ; i < c_args ; i++)
(*env)->SetObjectArrayElement(env, main_args, i, (*env)->NewStringUTF(env, args[i]));
int result = 0;
(*env)->CallStaticVoidMethod(env, cls, mid, main_args);
if ((*env)->ExceptionOccurred(env)) { // check if an exception occurred
(*env)->ExceptionDescribe(env); // print the stack trace
result = -1;
}
(*vm)->DestroyJavaVM(vm);
[pool drain];
return result;
}
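For reference, a hypothetical invocation of the helper above; every path, option, and class name below is a placeholder of mine, not something from the original post. Note that FindClass expects the class name in internal form ("com/example/Main", slashes rather than dots), so the last argument must use slashes:
// Hypothetical usage of launchjvm(); adjust the libjvm.dylib path to the installed JDK.
char *jvmopts[] = { "-Djava.class.path=/tmp/app.jar" };
char *args[] = { "hello" };
int rc = launchjvm("/Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home/jre/lib/server/libjvm.dylib",
                   jvmopts, 1, args, 1, "com/example/Main");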

NSSetUncaughtExceptionHandler doesn't work on Mac

My code is here, in myAppDelegate.m:
void catchException(NSException *e)
{
NSLog(@"Here is an exception");
}
@implementation myAppDelegate
{
NSSetUncaughtExceptionHandler(&catchException);
NSException *e = [[NSException alloc] initWithName: @"aException" reason: @"test" userInfo: nil];
@throw e;
}
However, the function catchException is never called. Can anybody tell me why?
I have found in my own OSX code that I had to use NSExceptionHandler in order to reliably catch exceptions:
#import <Cocoa/Cocoa.h>
#import <ExceptionHandling/NSExceptionHandler.h>
#import "CocoaUtil.h"
#import <execinfo.h>
// ExceptionDelegate
@interface ExceptionDelegate : NSObject
@end
// Forward declarations; the gist notes that some helper functions
// (logerr, logbareutf8, criticalAlertPanel, etc.) are defined elsewhere.
static void signalHandler(int sig, siginfo_t *info, void *context);
static void sendLogfile(void);
static ExceptionDelegate * _exceptionDelegate = nil;
int main(int argc, const char **argv) {
int retval = 1;
@autoreleasepool
{
//
// Set exception handler delegate
//
_exceptionDelegate = [[ExceptionDelegate alloc] init];
NSExceptionHandler *exceptionHandler = [NSExceptionHandler defaultExceptionHandler];
exceptionHandler.exceptionHandlingMask = NSLogAndHandleEveryExceptionMask;
exceptionHandler.delegate = _exceptionDelegate;
//
// Set signal handling
//
int signals[] = {
SIGQUIT, SIGILL, SIGTRAP, SIGABRT, SIGEMT, SIGFPE, SIGBUS, SIGSEGV,
SIGSYS, SIGPIPE, SIGALRM, SIGXCPU, SIGXFSZ
};
const unsigned numSignals = sizeof(signals) / sizeof(signals[0]);
struct sigaction sa;
sa.sa_sigaction = signalHandler;
sa.sa_flags = SA_SIGINFO;
sigemptyset(&sa.sa_mask);
for (unsigned i = 0; i < numSignals; i++)
sigaction(signals[i], &sa, NULL);
// Do work
} // @autoreleasepool
return retval;
}
static void dumpStack(void **frames, unsigned numFrames) {
char **frameStrings = backtrace_symbols(frames, numFrames);
if (frameStrings) {
for (unsigned i = 0; i < numFrames && frameStrings[i] ; i++)
logbareutf8((char *)frameStrings[i]);
free(frameStrings);
} else {
logerr(@"No frames to dump");
}
}
static void signalHandler(int sig, siginfo_t *info, void *context) {
logerr(@"Caught signal %d", sig);
const size_t maxFrames = 128;
void *frames[maxFrames];
unsigned numFrames = backtrace(frames, maxFrames);
dumpStack(frames, numFrames);
bool send = criticalAlertPanel(@"Application Error",
@"Upload logfile to support site",
@"MyApp is terminating due to signal %d", sig);
if (send)
sendLogfile();
exit(102);
}
static void sendLogfile() {
// Redacted
}
@implementation ExceptionDelegate
- (BOOL)exceptionHandler:(NSExceptionHandler *)exceptionHandler
shouldLogException:(NSException *)exception
mask:(NSUInteger)mask {
logerr(@"An unhandled %@ exception occurred: %@", [exception name], [exception reason]);
// [exception callStackReturnAddresses] will be empty, so parse the NSStackTraceKey from
// [exception userInfo].
NSString *stackTrace = [[exception userInfo] objectForKey:NSStackTraceKey];
NSScanner *scanner = [NSScanner scannerWithString:stackTrace];
NSMutableArray *addresses = [[NSMutableArray alloc] initWithCapacity:0];
NSCharacterSet *whitespace = [NSCharacterSet whitespaceAndNewlineCharacterSet];
NSString *token;
while ([scanner scanUpToCharactersFromSet:whitespace
intoString:&token]) {
[addresses addObject:token];
}
NSUInteger numFrames = [addresses count];
if (numFrames > 0) {
void **frames = (void **)malloc(sizeof(void *) * numFrames);
NSUInteger i, parsedFrames;
for (i = 0, parsedFrames = 0; i < numFrames; i++) {
NSString *address = [addresses objectAtIndex:i];
if (![CocoaUtil parseString:address toVoidPointer:&frames[parsedFrames]]) {
logerr(@"Failed to parse frame address '%@'", address);
break;
}
parsedFrames++;
}
if (parsedFrames > 0) {
logerr(@"Stack trace:");
dumpStack(frames, (unsigned)parsedFrames);
}
free(frames);
} else {
logerr(@"No addresses in stacktrace");
}
return YES;
}
- (BOOL)exceptionHandler:(NSExceptionHandler *)exceptionHandler
shouldHandleException:(NSException *)exception
mask:(NSUInteger)mask {
bool send = criticalAlertPanel(@"Application Error",
@"Upload logfile to support site",
@"MyApp is terminating due to an unhandled exception:\n\n%@",
[exception reason]);
if (send)
sendLogfile();
exit(101);
// not reached
return NO;
}
You'll need to import ExceptionHandling.framework as noted by trojanfoe.
Here is a gist which cleans up the stack-trace printing (some functions are missing):
https://gist.github.com/alfwatt/5ddfaf68e530a5a69e3d

get the bundle identifier of an application running from another user

The scenario is like this: I run an app (say myproc) as one user and then fast-user-switch to a second user.
Now, when I try to determine all processes running with a particular bundle identifier (say com.ak.myproc), I am not able to determine this for processes running under the first user.
I've tried the following, but in vain:
[NSRunningApplication runningApplicationsWithBundleIdentifier:]
[[NSWorkspace sharedWorkspace] runningApplications] and then comparing the bundle identifier of each application - the app running under the first user does not even show up in this list.
using sysctl() and then iterating through the process list - here, the pid of the first user's app does show up. After that:
When I try [NSRunningApplication runningApplicationWithProcessIdentifier:], I get nil.
When I try GetProcessForPID() followed by ProcessInformationCopyDictionary(), I get a nil dictionary.
When I try GetProcessForPID() followed by GetProcessInformation(), I do not get anything useful in ProcessInfoRec.
Can somebody please help? Thanks.
OS: Mac OS X 10.8.4
Xcode: 4.6.2
You can map process name to bundle id using NSWorkspace.
#include <sys/sysctl.h>
#include <pwd.h>
#include <errno.h> // errno, ENOMEM
#include <stdlib.h> // malloc, free
#include <stdbool.h> // bool
#include <assert.h> // assert
typedef struct kinfo_proc kinfo_proc;
static int GetBSDProcessList(kinfo_proc **procList, size_t *procCount)
// Returns a list of all BSD processes on the system. This routine
// allocates the list and puts it in *procList and a count of the
// number of entries in *procCount. You are responsible for freeing
// this list (use "free" from System framework).
// On success, the function returns 0.
// On error, the function returns a BSD errno value.
{
int err;
kinfo_proc * result;
bool done;
static const int name[] = { CTL_KERN, KERN_PROC, KERN_PROC_ALL, 0 };
// Declaring name as const requires us to cast it when passing it to
// sysctl because the prototype doesn't include the const modifier.
size_t length;
// assert( procList != NULL);
// assert(*procList == NULL);
// assert(procCount != NULL);
*procCount = 0;
// We start by calling sysctl with result == NULL and length == 0.
// That will succeed, and set length to the appropriate length.
// We then allocate a buffer of that size and call sysctl again
// with that buffer. If that succeeds, we're done. If that fails
// with ENOMEM, we have to throw away our buffer and loop. Note
// that the loop causes us to call sysctl with NULL again; this
// is necessary because the ENOMEM failure case sets length to
// the amount of data returned, not the amount of data that
// could have been returned.
result = NULL;
done = false;
do {
assert(result == NULL);
// Call sysctl with a NULL buffer.
length = 0;
err = sysctl( (int *) name, (sizeof(name) / sizeof(*name)) - 1,
NULL, &length,
NULL, 0);
if (err == -1) {
err = errno;
}
// Allocate an appropriately sized buffer based on the results
// from the previous call.
if (err == 0) {
result = malloc(length);
if (result == NULL) {
err = ENOMEM;
}
}
// Call sysctl again with the new buffer. If we get an ENOMEM
// error, toss away our buffer and start again.
if (err == 0) {
err = sysctl( (int *) name, (sizeof(name) / sizeof(*name)) - 1,
result, &length,
NULL, 0);
if (err == -1) {
err = errno;
}
if (err == 0) {
done = true;
} else if (err == ENOMEM) {
assert(result != NULL);
free(result);
result = NULL;
err = 0;
}
}
} while (err == 0 && ! done);
// Clean up and establish post conditions.
if (err != 0 && result != NULL) {
free(result);
result = NULL;
}
*procList = result;
if (err == 0) {
*procCount = length / sizeof(kinfo_proc);
}
assert( (err == 0) == (*procList != NULL) );
return err;
}
+ (NSArray*)getBSDProcessList
{
kinfo_proc *mylist =NULL;
size_t mycount = 0;
GetBSDProcessList(&mylist, &mycount);
NSMutableArray *processes = [NSMutableArray arrayWithCapacity:(int)mycount];
for (int i = 0; i < mycount; i++) {
struct kinfo_proc *currentProcess = &mylist[i];
struct passwd *user = getpwuid(currentProcess->kp_eproc.e_ucred.cr_uid);
NSMutableDictionary *entry = [NSMutableDictionary dictionaryWithCapacity:4];
NSNumber *processID = [NSNumber numberWithInt:currentProcess->kp_proc.p_pid];
NSString *processName = [NSString stringWithFormat: @"%s",currentProcess->kp_proc.p_comm];
if (processID)[entry setObject:processID forKey:@"processID"];
if (processName)[entry setObject:processName forKey:@"processName"];
if (processName)
{
NSString *bundleID = [self bundleIdentifierForApplicationName:processName];
if (bundleID)
[entry setObject:bundleID forKey:@"bundleId"];
}
if (user){
NSNumber *userID = [NSNumber numberWithUnsignedInt:currentProcess->kp_eproc.e_ucred.cr_uid];
NSString *userName = [NSString stringWithFormat: @"%s",user->pw_name];
if (userID)[entry setObject:userID forKey:@"userID"];
if (userName)[entry setObject:userName forKey:@"userName"];
}
[processes addObject:[NSDictionary dictionaryWithDictionary:entry]];
}
free(mylist);
return [NSArray arrayWithArray:processes];
}
+ (NSString *) bundleIdentifierForApplicationName:(NSString *)appName
{
NSWorkspace * workspace = [NSWorkspace sharedWorkspace];
NSString * appPath = [workspace fullPathForApplication:appName];
if (appPath) {
NSBundle * appBundle = [NSBundle bundleWithPath:appPath];
return [appBundle bundleIdentifier];
}
return nil;
}
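A hypothetical way to apply this to the original question (the class name ProcessList is my placeholder for wherever these class methods live): walk the returned dictionaries and match the bundleId key. Because sysctl sees every BSD process, this also finds processes owned by the fast-user-switched account:
// Hypothetical usage: find every process with a given bundle identifier.
NSArray *processes = [ProcessList getBSDProcessList];
for (NSDictionary *entry in processes) {
    if ([[entry objectForKey:@"bundleId"] isEqualToString:@"com.ak.myproc"]) {
        NSLog(@"Found pid %@ owned by %@",
              [entry objectForKey:@"processID"], [entry objectForKey:@"userName"]);
    }
}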

How to record and play back audio in real time on OS X

I'm trying to record sound from the microphone and play it back in real time on OS X. Eventually it will be streamed over the network, but for now I'm just trying to achieve local recording/playback.
I'm able to record sound and write to a file, which I could do with both AVCaptureSession and AVAudioRecorder. However, I'm not sure how to play back the audio as I record it. Using AVCaptureAudioDataOutput works:
self.captureSession = [[AVCaptureSession alloc] init];
AVCaptureDevice *audioCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
NSError *error = nil;
AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioCaptureDevice error:&error];
AVCaptureAudioDataOutput *audioDataOutput = [[AVCaptureAudioDataOutput alloc] init];
self.serialQueue = dispatch_queue_create("audioQueue", NULL);
[audioDataOutput setSampleBufferDelegate:self queue:self.serialQueue];
if (audioInput && [self.captureSession canAddInput:audioInput] && [self.captureSession canAddOutput:audioDataOutput]) {
[self.captureSession addInput:audioInput];
[self.captureSession addOutput:audioDataOutput];
[self.captureSession startRunning];
// Stop after arbitrary time
double delayInSeconds = 4.0;
dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(delayInSeconds * NSEC_PER_SEC));
dispatch_after(popTime, dispatch_get_main_queue(), ^(void){
[self.captureSession stopRunning];
});
} else {
NSLog(@"Couldn't add them; error = %@",error);
}
...but I'm not sure how to implement the callback:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
?
}
I've tried getting the data out of the sampleBuffer and playing it using AVAudioPlayer by copying the code from this SO answer, but that code crashes on the appendBytes:length: method.
AudioBufferList audioBufferList;
NSMutableData *data= [NSMutableData data];
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
for( int y=0; y< audioBufferList.mNumberBuffers; y++ ){
AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
Float32 *frame = (Float32*)audioBuffer.mData;
NSLog(@"Length = %i",audioBuffer.mDataByteSize);
[data appendBytes:frame length:audioBuffer.mDataByteSize]; // Crashes here
}
CFRelease(blockBuffer);
NSError *playerError;
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithData:data error:&playerError];
if(player && !playerError) {
NSLog(@"Player was valid");
[player play];
} else {
NSLog(@"Error = %@",playerError);
}
Edit: The CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer method returns an OSStatus code of -12737, which according to the documentation is kCMSampleBufferError_ArrayTooSmall.
Edit 2: Based on this mailing list response, I passed a size_t out parameter as the second parameter to ...GetAudioBufferList.... This returned 40. Right now I'm just passing in 40 as a hard-coded value, which seems to work (the OSStatus return value is 0, at least).
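For anyone following along, here is a sketch of the two-call pattern the mailing-list response describes, so the size does not have to be hard-coded; this is my reconstruction, not the original poster's code:
// First call: ask only for the required AudioBufferList size.
size_t bufferListSizeNeeded = 0;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, &bufferListSizeNeeded,
                                                        NULL, 0, NULL, NULL, 0, NULL);
// Second call: allocate that much and fetch the actual buffer list.
AudioBufferList *bufferList = malloc(bufferListSizeNeeded);
CMBlockBufferRef blockBuffer = NULL;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL,
                                                        bufferList, bufferListSizeNeeded,
                                                        NULL, NULL, 0, &blockBuffer);
// ...use bufferList, then free(bufferList) and CFRelease(blockBuffer).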
Now the player initWithData:error: method gives the error:
Error Domain=NSOSStatusErrorDomain Code=1954115647 "The operation couldn’t be completed. (OSStatus error 1954115647.)" which I'm looking into. (1954115647 is the four-character code 'typ?', i.e. kAudioFileUnsupportedFileTypeError; AVAudioPlayer expects a complete audio file, not raw sample data.)
I've done iOS programming for a long time, but I haven't used AVFoundation, CoreAudio, etc until now. It looks like there are a dozen ways to accomplish the same thing, depending on how low or high level you want to be, so any high level overviews or framework recommendations are appreciated.
Appendix
Recording to a file
Recording to a file using AVCaptureSession:
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(captureSessionStartedNotification:) name:AVCaptureSessionDidStartRunningNotification object:nil];
self.captureSession = [[AVCaptureSession alloc] init];
AVCaptureDevice *audioCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
NSError *error = nil;
AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioCaptureDevice error:&error];
AVCaptureAudioFileOutput *audioOutput = [[AVCaptureAudioFileOutput alloc] init];
if (audioInput && [self.captureSession canAddInput:audioInput] && [self.captureSession canAddOutput:audioOutput]) {
NSLog(@"Can add the inputs and outputs");
[self.captureSession addInput:audioInput];
[self.captureSession addOutput:audioOutput];
[self.captureSession startRunning];
double delayInSeconds = 5.0;
dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(delayInSeconds * NSEC_PER_SEC));
dispatch_after(popTime, dispatch_get_main_queue(), ^(void){
[self.captureSession stopRunning];
});
}
else {
NSLog(@"Error was = %@",error);
}
}
- (void)captureSessionStartedNotification:(NSNotification *)notification
{
AVCaptureSession *session = notification.object;
id audioOutput = session.outputs[0];
NSLog(@"Capture session started; notification = %@",notification);
NSLog(@"Notification audio output = %@",audioOutput);
[audioOutput startRecordingToOutputFileURL:[[self class] outputURL] outputFileType:AVFileTypeAppleM4A recordingDelegate:self];
}
+ (NSURL *)outputURL
{
NSArray *searchPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentPath = [searchPaths objectAtIndex:0];
NSString *filePath = [documentPath stringByAppendingPathComponent:@"z1.alac"];
return [NSURL fileURLWithPath:filePath];
}
Recording to a file using AVAudioRecorder:
NSDictionary *recordSettings = [NSDictionary
dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:AVAudioQualityMin],
AVEncoderAudioQualityKey,
[NSNumber numberWithInt:16],
AVEncoderBitRateKey,
[NSNumber numberWithInt: 2],
AVNumberOfChannelsKey,
[NSNumber numberWithFloat:44100.0],
AVSampleRateKey,
@(kAudioFormatAppleLossless),
AVFormatIDKey,
nil];
NSError *recorderError;
self.recorder = [[AVAudioRecorder alloc] initWithURL:[[self class] outputURL] settings:recordSettings error:&recorderError];
self.recorder.delegate = self;
if (self.recorder && !recorderError) {
NSLog(@"Success!");
[self.recorder recordForDuration:10];
} else {
NSLog(@"Failure, recorder = %@",self.recorder);
NSLog(@"Error = %@",recorderError);
}
Ok, I ended up working at a lower level than AVFoundation -- not sure if that was necessary. I read up to Chapter 5 of Learning Core Audio and went with an implementation using Audio Queues. This code was adapted from code for recording to a file and playing back a file, so there are surely some unnecessary bits I've accidentally left in. Additionally, I'm not actually re-enqueuing buffers onto the output queue (I should be; see the note after the listing), but just as a proof of concept this works. The only file is listed here, and is also on GitHub.
//
// main.m
// Recorder
//
// Created by Maximilian Tagher on 8/7/13.
// Copyright (c) 2013 Tagher. All rights reserved.
//
#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>
#import <CoreAudio/CoreAudio.h> // AudioObjectPropertyAddress, kAudioHardwareProperty... constants
#include <ctype.h> // isprint
#include <math.h> // ceil
#define kNumberRecordBuffers 3
//#define kNumberPlaybackBuffers 3
#define kPlaybackFileLocation CFSTR("/Users/Max/Music/iTunes/iTunes Media/Music/Taylor Swift/Red/02 Red.m4a")
#pragma mark - User Data Struct
// listing 4.3
struct MyRecorder;
typedef struct MyPlayer {
AudioQueueRef playerQueue;
SInt64 packetPosition;
UInt32 numPacketsToRead;
AudioStreamPacketDescription *packetDescs;
Boolean isDone;
struct MyRecorder *recorder;
} MyPlayer;
typedef struct MyRecorder {
AudioQueueRef recordQueue;
SInt64 recordPacket;
Boolean running;
MyPlayer *player;
} MyRecorder;
#pragma mark - Utility functions
// Listing 4.2
static void CheckError(OSStatus error, const char *operation) {
if (error == noErr) return;
char errorString[20];
// See if it appears to be a 4-char-code
*(UInt32 *)(errorString + 1) = CFSwapInt32HostToBig(error);
if (isprint(errorString[1]) && isprint(errorString[2])
&& isprint(errorString[3]) && isprint(errorString[4])) {
errorString[0] = errorString[5] = '\'';
errorString[6] = '\0';
} else {
// No, format it as an integer
NSLog(@"Was integer");
sprintf(errorString, "%d",(int)error);
}
fprintf(stderr, "Error: %s (%s)\n",operation,errorString);
exit(1);
}
OSStatus MyGetDefaultInputDeviceSampleRate(Float64 *outSampleRate)
{
OSStatus error;
AudioDeviceID deviceID = 0;
AudioObjectPropertyAddress propertyAddress;
UInt32 propertySize;
propertyAddress.mSelector = kAudioHardwarePropertyDefaultInputDevice;
propertyAddress.mScope = kAudioObjectPropertyScopeGlobal;
propertyAddress.mElement = 0;
propertySize = sizeof(AudioDeviceID);
error = AudioHardwareServiceGetPropertyData(kAudioObjectSystemObject,
&propertyAddress, 0, NULL,
&propertySize,
&deviceID);
if (error) return error;
propertyAddress.mSelector = kAudioDevicePropertyNominalSampleRate;
propertyAddress.mScope = kAudioObjectPropertyScopeGlobal;
propertyAddress.mElement = 0;
propertySize = sizeof(Float64);
error = AudioHardwareServiceGetPropertyData(deviceID,
&propertyAddress, 0, NULL,
&propertySize,
outSampleRate);
return error;
}
// Recorder
static void MyCopyEncoderCookieToFile(AudioQueueRef queue, AudioFileID theFile)
{
OSStatus error;
UInt32 propertySize;
error = AudioQueueGetPropertySize(queue, kAudioConverterCompressionMagicCookie, &propertySize);
if (error == noErr && propertySize > 0) {
Byte *magicCookie = (Byte *)malloc(propertySize);
CheckError(AudioQueueGetProperty(queue, kAudioQueueProperty_MagicCookie, magicCookie, &propertySize), "Couldn't get audio queue's magic cookie");
CheckError(AudioFileSetProperty(theFile, kAudioFilePropertyMagicCookieData, propertySize, magicCookie), "Couldn't set audio file's magic cookie");
free(magicCookie);
}
}
// Player
static void MyCopyEncoderCookieToQueue(AudioFileID theFile, AudioQueueRef queue)
{
UInt32 propertySize;
// Just check for presence of cookie
OSStatus result = AudioFileGetProperty(theFile, kAudioFilePropertyMagicCookieData, &propertySize, NULL);
if (result == noErr && propertySize != 0) {
Byte *magicCookie = (UInt8*)malloc(sizeof(UInt8) * propertySize);
CheckError(AudioFileGetProperty(theFile, kAudioFilePropertyMagicCookieData, &propertySize, magicCookie), "Get cookie from file failed");
CheckError(AudioQueueSetProperty(queue, kAudioQueueProperty_MagicCookie, magicCookie, propertySize), "Set cookie on file failed");
free(magicCookie);
}
}
static int MyComputeRecordBufferSize(const AudioStreamBasicDescription *format, AudioQueueRef queue, float seconds)
{
int packets, frames, bytes;
frames = (int)ceil(seconds * format->mSampleRate);
if (format->mBytesPerFrame > 0) { // Not variable
bytes = frames * format->mBytesPerFrame;
} else { // variable bytes per frame
UInt32 maxPacketSize;
if (format->mBytesPerPacket > 0) {
// Constant packet size
maxPacketSize = format->mBytesPerPacket;
} else {
// Get the largest single packet size possible
UInt32 propertySize = sizeof(maxPacketSize);
CheckError(AudioQueueGetProperty(queue, kAudioConverterPropertyMaximumOutputPacketSize, &maxPacketSize, &propertySize), "Couldn't get queue's maximum output packet size");
}
if (format->mFramesPerPacket > 0) {
packets = frames / format->mFramesPerPacket;
} else {
// Worst case scenario: 1 frame in a packet
packets = frames;
}
// Sanity check
if (packets == 0) {
packets = 1;
}
bytes = packets * maxPacketSize;
}
return bytes;
}
void CalculateBytesForPlaythrough(AudioQueueRef queue,
AudioStreamBasicDescription inDesc,
Float64 inSeconds,
UInt32 *outBufferSize,
UInt32 *outNumPackets)
{
UInt32 maxPacketSize;
UInt32 propSize = sizeof(maxPacketSize);
CheckError(AudioQueueGetProperty(queue,
kAudioQueueProperty_MaximumOutputPacketSize,
&maxPacketSize, &propSize), "Couldn't get file's max packet size");
static const int maxBufferSize = 0x10000;
static const int minBufferSize = 0x4000;
if (inDesc.mFramesPerPacket) {
Float64 numPacketsForTime = inDesc.mSampleRate / inDesc.mFramesPerPacket * inSeconds;
*outBufferSize = numPacketsForTime * maxPacketSize;
} else {
*outBufferSize = maxBufferSize > maxPacketSize ? maxBufferSize : maxPacketSize;
}
if (*outBufferSize > maxBufferSize &&
*outBufferSize > maxPacketSize) {
*outBufferSize = maxBufferSize;
} else {
if (*outBufferSize < minBufferSize) {
*outBufferSize = minBufferSize;
}
}
*outNumPackets = *outBufferSize / maxPacketSize;
}
#pragma mark - Record callback function
static void MyAQInputCallback(void *inUserData,
AudioQueueRef inQueue,
AudioQueueBufferRef inBuffer,
const AudioTimeStamp *inStartTime,
UInt32 inNumPackets,
const AudioStreamPacketDescription *inPacketDesc)
{
// NSLog(@"Input callback");
// NSLog(@"Input thread = %@",[NSThread currentThread]);
MyRecorder *recorder = (MyRecorder *)inUserData;
MyPlayer *player = recorder->player;
if (inNumPackets > 0) {
// Enqueue on the output Queue!
AudioQueueBufferRef outputBuffer;
CheckError(AudioQueueAllocateBuffer(player->playerQueue, inBuffer->mAudioDataBytesCapacity, &outputBuffer), "Input callback failed to allocate new output buffer");
memcpy(outputBuffer->mAudioData, inBuffer->mAudioData, inBuffer->mAudioDataByteSize);
outputBuffer->mAudioDataByteSize = inBuffer->mAudioDataByteSize;
// [NSData dataWithBytes:inBuffer->mAudioData length:inBuffer->mAudioDataByteSize];
// Assuming LPCM so no packet descriptions
CheckError(AudioQueueEnqueueBuffer(player->playerQueue, outputBuffer, 0, NULL), "Enqueing the buffer in input callback failed");
recorder->recordPacket += inNumPackets;
}
if (recorder->running) {
CheckError(AudioQueueEnqueueBuffer(inQueue, inBuffer, 0, NULL), "AudioQueueEnqueueBuffer failed");
}
}
static void MyAQOutputCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inCompleteAQBuffer)
{
// NSLog(@"Output thread = %@",[NSThread currentThread]);
// NSLog(@"Output callback");
MyPlayer *aqp = (MyPlayer *)inUserData;
MyRecorder *recorder = aqp->recorder;
if (aqp->isDone) return;
}
int main(int argc, const char * argv[])
{
@autoreleasepool {
MyRecorder recorder = {0};
MyPlayer player = {0};
recorder.player = &player;
player.recorder = &recorder;
AudioStreamBasicDescription recordFormat;
memset(&recordFormat, 0, sizeof(recordFormat));
recordFormat.mFormatID = kAudioFormatLinearPCM;
recordFormat.mChannelsPerFrame = 2; //stereo
// Begin my changes to make LPCM work
recordFormat.mBitsPerChannel = 16;
// Haven't checked if each of these flags is necessary, this is just what Chapter 2 used for LPCM.
recordFormat.mFormatFlags = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
// end my changes
MyGetDefaultInputDeviceSampleRate(&recordFormat.mSampleRate);
UInt32 propSize = sizeof(recordFormat);
CheckError(AudioFormatGetProperty(kAudioFormatProperty_FormatInfo,
0,
NULL,
&propSize,
&recordFormat), "AudioFormatGetProperty failed");
AudioQueueRef queue = {0};
CheckError(AudioQueueNewInput(&recordFormat, MyAQInputCallback, &recorder, NULL, NULL, 0, &queue), "AudioQueueNewInput failed");
recorder.recordQueue = queue;
// Fills in ABSD a little more
UInt32 size = sizeof(recordFormat);
CheckError(AudioQueueGetProperty(queue,
kAudioConverterCurrentOutputStreamDescription,
&recordFormat,
&size), "Couldn't get queue's format");
// MyCopyEncoderCookieToFile(queue, recorder.recordFile);
int bufferByteSize = MyComputeRecordBufferSize(&recordFormat,queue,0.5);
NSLog(@"%d",__LINE__);
// Create and Enqueue buffers
int bufferIndex;
for (bufferIndex = 0;
bufferIndex < kNumberRecordBuffers;
++bufferIndex) {
AudioQueueBufferRef buffer;
CheckError(AudioQueueAllocateBuffer(queue,
bufferByteSize,
&buffer), "AudioQueueBufferRef failed");
CheckError(AudioQueueEnqueueBuffer(queue, buffer, 0, NULL), "AudioQueueEnqueueBuffer failed");
}
// PLAYBACK SETUP
AudioQueueRef playbackQueue;
CheckError(AudioQueueNewOutput(&recordFormat,
MyAQOutputCallback,
&player, NULL, NULL, 0,
&playbackQueue), "AudioOutputNewQueue failed");
player.playerQueue = playbackQueue;
UInt32 playBufferByteSize;
CalculateBytesForPlaythrough(queue, recordFormat, 0.1, &playBufferByteSize, &player.numPacketsToRead);
bool isFormatVBR = (recordFormat.mBytesPerPacket == 0
|| recordFormat.mFramesPerPacket == 0);
if (isFormatVBR) {
NSLog(@"Not supporting VBR");
player.packetDescs = (AudioStreamPacketDescription*) malloc(sizeof(AudioStreamPacketDescription) * player.numPacketsToRead);
} else {
player.packetDescs = NULL;
}
// END PLAYBACK
recorder.running = TRUE;
player.isDone = false;
CheckError(AudioQueueStart(playbackQueue, NULL), "AudioQueueStart failed");
CheckError(AudioQueueStart(queue, NULL), "AudioQueueStart failed");
CFRunLoopRunInMode(kCFRunLoopDefaultMode, 10, TRUE);
printf("Playing through, press <return> to stop:\n");
getchar();
printf("* done *\n");
recorder.running = FALSE;
player.isDone = true;
CheckError(AudioQueueStop(playbackQueue, false), "Failed to stop playback queue");
CheckError(AudioQueueStop(queue, TRUE), "AudioQueueStop failed");
AudioQueueDispose(playbackQueue, FALSE);
AudioQueueDispose(queue, TRUE);
}
return 0;
}
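As noted before the listing, the output callback above should really reclaim or re-enqueue buffers. Since MyAQInputCallback allocates a fresh output buffer for every input buffer, one option (my sketch, not the book's code) is simply to free completed buffers in MyAQOutputCallback so they do not leak:
static void MyAQOutputCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inCompleteAQBuffer)
{
    MyPlayer *aqp = (MyPlayer *)inUserData;
    if (aqp->isDone) return;
    // Reclaim the buffer that MyAQInputCallback allocated and enqueued.
    AudioQueueFreeBuffer(inAQ, inCompleteAQBuffer);
}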

Why doesn't this simple CoreMIDI program produce MIDI output?

Here is an extremely simple CoreMIDI OS X application that sends MIDI data. The problem is that it doesn't work. It compiles fine, and runs. It reports no errors, and does not crash. The Source created becomes visible in MIDI Monitor. However, no MIDI data comes out.
Could somebody let me know what I'm doing wrong here?
#include <CoreMIDI/CoreMIDI.h>
#include <stdio.h> // printf
#include <unistd.h> // sleep
int main(int argc, char *args[])
{
MIDIClientRef theMidiClient;
MIDIEndpointRef midiOut;
MIDIPortRef outPort;
char pktBuffer[1024];
MIDIPacketList* pktList = (MIDIPacketList*) pktBuffer;
MIDIPacket *pkt;
Byte midiDataToSend[] = {0x91, 0x3c, 0x40};
int i;
MIDIClientCreate(CFSTR("Magical MIDI"), NULL, NULL,
&theMidiClient);
MIDISourceCreate(theMidiClient, CFSTR("Magical MIDI Source"),
&midiOut);
MIDIOutputPortCreate(theMidiClient, CFSTR("Magical MIDI Out Port"),
&outPort);
pkt = MIDIPacketListInit(pktList);
pkt = MIDIPacketListAdd(pktList, 1024, pkt, 0, 3, midiDataToSend);
for (i = 0; i < 100; i++) {
if (pkt == NULL || MIDISend(outPort, midiOut, pktList)) {
printf("failed to send the midi.\n");
} else {
printf("sent!\n");
}
sleep(1);
}
return 0;
}
You're calling MIDISourceCreate to create a virtual MIDI source.
This means that your source will appear in other apps' MIDI setup UI, and that those apps can choose whether or not to listen to your source. Your MIDI will not get sent to any physical MIDI ports, unless some other app happens to channel it there. It also means that your app has no choice as to where the MIDI it's sending goes. I'm assuming that's what you want.
The documentation for MIDISourceCreate says:
After creating a virtual source, use MIDIReceived to transmit MIDI messages from your virtual source to any clients connected to the virtual source.
So, do two things:
Remove the code that creates the output port. You don't need it.
change MIDISend(outPort, midiOut, pktList) to: MIDIReceived(midiOut, pktList).
That should solve your problem.
So what are output ports good for? If you wanted to direct your MIDI data to a specific destination -- maybe a physical MIDI port -- you would NOT create a virtual MIDI source. Instead:
Call MIDIOutputPortCreate() to make an output port
Use MIDIGetNumberOfDestinations() and MIDIGetDestination() to get the list of destinations and find the one you're interested in.
To send MIDI to one destination, call MIDISend(outputPort, destination, packetList).
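A compact sketch of that destination-based path (my illustration; picking destination 0 is a placeholder for real endpoint selection, and error handling is omitted):
// Send the packet list to the first destination CoreMIDI knows about.
MIDIPortRef outputPort;
MIDIOutputPortCreate(theMidiClient, CFSTR("Magical MIDI Out Port"), &outputPort);
if (MIDIGetNumberOfDestinations() > 0) {
    MIDIEndpointRef dest = MIDIGetDestination(0); // placeholder: pick the destination you want
    MIDISend(outputPort, dest, pktList);
}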
I'm just leaving this here for my own reference. It's a full example based 100% on yours, but including the other side (receiving), my bad C code and the accepted answer's corrections (of course).
#import "AppDelegate.h"
@implementation AppDelegate
@synthesize window = _window;
static MIDIClientRef theMidiClient; // assumed file-scope declaration; the original presumably declares this in AppDelegate.h
#define NSLogError(c,str) do{if (c) NSLog(@"Error (%@): %u:%@", str, (unsigned int)c,[NSError errorWithDomain:NSMachErrorDomain code:c userInfo:nil]); }while(false)
static void spit(Byte* values, int length, BOOL useHex) {
NSMutableString *thing = [@"" mutableCopy];
for (int i=0; i<length; i++) {
if (useHex)
[thing appendFormat:@"0x%X ", values[i]];
else
[thing appendFormat:@"%d ", values[i]];
}
NSLog(@"Length=%d %@", length, thing);
}
- (void) startSending {
MIDIEndpointRef midiOut;
char pktBuffer[1024];
MIDIPacketList* pktList = (MIDIPacketList*) pktBuffer;
MIDIPacket *pkt;
Byte midiDataToSend[] = {0x91, 0x3c, 0x40};
int i;
MIDISourceCreate(theMidiClient, CFSTR("Magical MIDI Source"),
&midiOut);
pkt = MIDIPacketListInit(pktList);
pkt = MIDIPacketListAdd(pktList, 1024, pkt, 0, 3, midiDataToSend);
for (i = 0; i < 100; i++) {
if (pkt == NULL || MIDIReceived(midiOut, pktList)) {
printf("failed to send the midi.\n");
} else {
printf("sent!\n");
}
sleep(1);
}
}
void ReadProc(const MIDIPacketList *packetList, void *readProcRefCon, void *srcConnRefCon)
{
const MIDIPacket *packet = &packetList->packet[0];
for (int i = 0; i < packetList->numPackets; i++)
{
NSData *data = [NSData dataWithBytes:packet->data length:packet->length];
spit((Byte*)data.bytes, data.length, YES);
packet = MIDIPacketNext(packet);
}
}
- (void) setupReceiver {
OSStatus s;
MIDIEndpointRef virtualInTemp;
NSString *inName = [NSString stringWithFormat:@"Magical MIDI Destination"];
s = MIDIDestinationCreate(theMidiClient, (__bridge CFStringRef)inName, ReadProc, (__bridge void *)self, &virtualInTemp);
NSLogError(s, @"Create virtual MIDI in");
}
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
MIDIClientCreate(CFSTR("Magical MIDI"), NULL, NULL,
&theMidiClient);
[self setupReceiver];
[self startSending];
}
@end
A little detail that others are skipping: the time parameter of MIDIPacketListAdd is important for some musical apps.
Here is an example of how you can retrieve it:
#import <mach/mach_time.h>
MIDITimeStamp midiTime = mach_absolute_time();
Source: Apple Documentation
And then, applied to the other examples here:
char pktBuffer[1024];
MIDIPacketList *pktList = (MIDIPacketList*)pktBuffer;
MIDIPacket *pktPtr = MIDIPacketListInit(pktList);
MIDITimeStamp midiTime = mach_absolute_time();
Byte midiDataToSend[] = {0x91, 0x3c, 0x40};
pktPtr = MIDIPacketListAdd(pktList, sizeof(pktBuffer), pktPtr, midiTime, sizeof(midiDataToSend), midiDataToSend); // sizeof(pktBuffer): sizeof(pktList) would be just the size of a pointer
Consider that your own MIDI-client-creating application may crash, and that the host sending MIDI can crash as well. You can handle this more easily by checking whether the client/destination already exists before creating it, treating the allocations as singletons. If your MIDI client exists but is not working, it is because you need to tell CoreMIDI what your custom-made client is capable of processing and what latency it will have, especially when the sending host uses timestamps a lot (Ableton and others do).
in your .h file
#import <CoreMIDI/CoreMIDI.h>
#import <CoreAudio/HostTime.h>
@interface YourVirtualMidiHandlerObject : NSObject
@property (assign, nonatomic) MIDIClientRef midi_client;
@property (nonatomic) MIDIEndpointRef outSrc;
@property (nonatomic) MIDIEndpointRef inSrc;
- (id)initWithVirtualSourceName:(NSString *)clientName;
@end
in your .m file
@interface YourVirtualMidiHandlerObject () {
MIDITimeStamp midiTime;
MIDIPacketList pktList;
}
@end
// Forward declarations for the C callbacks defined further below.
void midiRead(const MIDIPacketList *pktlist, void *readProcRefCon, void *srcConnRefCon);
void MidiNotifyProc(const MIDINotification *message, void *refCon);
You would prepare the initialization of your virtual client in the following way, also in your .m file:
@implementation YourVirtualMidiHandlerObject
// You can call this in dealloc, or manually elsewhere when you
// stop working with your virtual client.
-(void)teardown {
MIDIEndpointDispose(_inSrc);
MIDIEndpointDispose(_outSrc);
MIDIClientDispose(_midi_client);
}
- (id)initWithVirtualSourceName:(NSString *)clientName {
if (self = [super init]) {
OSStatus status = MIDIClientCreate((__bridge CFStringRef)clientName, (MIDINotifyProc)MidiNotifyProc, (__bridge void *)(self), &_midi_client);
BOOL isSourceLoaded = NO;
BOOL isDestinationLoaded = NO;
ItemCount sourceCount = MIDIGetNumberOfSources();
for (ItemCount i = 0; i < sourceCount; ++i) {
_outSrc = MIDIGetSource(i);
if ( _outSrc != 0 ) {
if ([[self getMidiDisplayName:_outSrc] isEqualToString:clientName] && !isSourceLoaded) {
isSourceLoaded = YES;
break; //stop looping thru sources if it is existing
}
}
}
ItemCount destinationCount = MIDIGetNumberOfDestinations();
for (ItemCount i = 0; i < destinationCount; ++i) {
_inSrc = MIDIGetDestination(i);
if (_inSrc != 0) {
if ([[self getMidiDisplayName:_inSrc] isEqualToString:clientName] && !isDestinationLoaded) {
isDestinationLoaded = YES;
break; //stop looping thru destinations if it is existing
}
}
}
if(!isSourceLoaded) {
// your custom source needs to tell CoreMIDI what it is handling
MIDISourceCreate(_midi_client, (__bridge CFStringRef)clientName, &_outSrc);
MIDIObjectSetIntegerProperty(_outSrc, kMIDIPropertyMaxTransmitChannels, 16);
MIDIObjectSetIntegerProperty(_outSrc, kMIDIPropertyTransmitsProgramChanges, 1);
MIDIObjectSetIntegerProperty(_outSrc, kMIDIPropertyTransmitsNotes, 1);
// MIDIObjectSetIntegerProperty(_outSrc, kMIDIPropertyTransmitsClock, 1);
isSourceLoaded = YES;
}
if(!isDestinationLoaded) {
// your custom destination needs to tell CoreMIDI what it is handling
MIDIDestinationCreate(_midi_client, (__bridge CFStringRef)clientName, midiRead, (__bridge void *)(self), &_inSrc);
MIDIObjectSetIntegerProperty(_inSrc, kMIDIPropertyAdvanceScheduleTimeMuSec, 1); // consider more, e.g. 14 ms, in some cases
MIDIObjectSetIntegerProperty(_inSrc, kMIDIPropertyReceivesClock, 1);
MIDIObjectSetIntegerProperty(_inSrc, kMIDIPropertyReceivesNotes, 1);
MIDIObjectSetIntegerProperty(_inSrc, kMIDIPropertyReceivesProgramChanges, 1);
MIDIObjectSetIntegerProperty(_inSrc, kMIDIPropertyMaxReceiveChannels, 16);
// MIDIObjectSetIntegerProperty(_inSrc, kMIDIPropertyReceivesMTC, 1);
// MIDIObjectSetIntegerProperty(_inSrc, kMIDIPropertyReceivesBankSelectMSB, 1);
// MIDIObjectSetIntegerProperty(_inSrc, kMIDIPropertyReceivesBankSelectLSB, 1);
// MIDIObjectSetIntegerProperty(_inSrc, kMIDIPropertySupportsMMC, 1);
isDestinationLoaded = YES;
}
if (!isDestinationLoaded || !isSourceLoaded) {
if (status != noErr ) {
NSLog(@"Failed creation of virtual Midi client \"%@\", so disposing the client!",clientName);
MIDIClientDispose(_midi_client);
}
}
}
return self;
}
// Returns the display name of a given MIDIObjectRef as an NSString
-(NSString *)getMidiDisplayName:(MIDIObjectRef)obj {
CFStringRef name = nil;
if (noErr != MIDIObjectGetStringProperty(obj, kMIDIPropertyDisplayName, &name)) return nil;
return (__bridge_transfer NSString *)name; // __bridge_transfer hands the +1 CFStringRef to ARC, avoiding a leak
}
For those of you trying to read tempo (MIDI transport) and set the properties for the virtual destination in your creation process...
Don't forget that timestamps are sent with the packets, but a packet can contain several commands of the same type, even several clock commands. When building a clock counter to derive the BPM, you will have to count at least 12 clocks before calculating; a rough sketch follows. If you go with only 3 of them, you are actually measuring your own buffer-read processing latency instead of the real timestamps.
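As an illustration of that advice (my sketch, not the answerer's code): MIDI clock runs at 24 pulses per quarter note, so averaging the pulse interval over a window and converting mach ticks to seconds yields the tempo.
#import <mach/mach_time.h>
// Estimate BPM from a window of 0xF8 clock timestamps (MIDITimeStamp is mach host time).
static double bpmFromClockTimestamps(const MIDITimeStamp *stamps, int count) {
    if (count < 13) return 0.0; // at least 12 intervals, per the advice above
    mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);
    double elapsedNanos = (double)(stamps[count - 1] - stamps[0]) * timebase.numer / timebase.denom;
    double secondsPerClock = (elapsedNanos / 1e9) / (count - 1);
    return 60.0 / (secondsPerClock * 24.0); // 24 MIDI clocks per beat
}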
Your read procedure (callback) can fill in timestamps itself when the MIDI sender fails to set them properly:
void midiRead(const MIDIPacketList * pktlist, void * readProcRefCon, void * srcConnRefCon) {
const MIDIPacket *pkt = pktlist->packet;
for ( int index = 0; index < pktlist->numPackets; index++, pkt = MIDIPacketNext(pkt) ) {
MIDITimeStamp timestamp = pkt->timeStamp;
if ( !timestamp ) timestamp = mach_absolute_time();
if ( pkt->length == 0 ) continue;
const Byte *p = &pkt->data[0];
Byte functionalDataGroup = *p & 0xF0;
// Analysing the buffered bytes in functional groups first is faster:
// e.g. 0xF0 tells you it is clock/transport MIDI stuff. Going into
// detail only after this shortens the processing, and it is easier
// to read in code.
switch (functionalDataGroup) {
case 0xF0 : { // 0xF0, not 0xF: the low nibble was masked off above
// in here read the exact Clock command
// 0xF8 = clock
}
break;
case 0x90 : { // example placeholder: the note-on group
// do other nice grouped stuff here, like reading notes
}
break;
default : break;
}
}
}
Don't forget the client needs a callback where internal notifications are handled.
void MidiNotifyProc(const MIDINotification* message, void* refCon) {
// When creation of the virtual client fails we dispose the whole client,
// meaning unless you need them you can ignore added/removed notifications.
if (message->messageID != kMIDIMsgObjectAdded &&
message->messageID != kMIDIMsgObjectRemoved) return;
// Reactions to other MIDI notifications would be triggered here.
}
Then you can send MIDI with...
-(void)sendMIDICC:(uint8_t)cc Value:(uint8_t)v ChZeroToFifteen:(uint8_t)ch {
MIDIPacket *packet = MIDIPacketListInit(&pktList);
midiTime = packet->timeStamp;
unsigned char ctrl[3] = { 0xB0 + ch, cc, v };
while (1) {
packet = MIDIPacketListAdd(&pktList, sizeof(pktList), packet, midiTime, sizeof(ctrl), ctrl);
if (packet != NULL) break;
// create an extra packet to fill when it failed before
packet = MIDIPacketListInit(&pktList);
}
// OSStatus check = // you dont need it if you don't check failing
MIDIReceived(_outSrc, &pktList);
}
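A hypothetical way to wire the class up (the source name and CC values are my placeholders):
// Create the virtual client once, then send a control change through it.
YourVirtualMidiHandlerObject *midiHandler =
    [[YourVirtualMidiHandlerObject alloc] initWithVirtualSourceName:@"My Virtual Client"];
[midiHandler sendMIDICC:7 Value:100 ChZeroToFifteen:0]; // CC 7 (volume) = 100 on MIDI channel 1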
