The scenario is like this: "I run an app (say myproc) as one user and then fast-user-switch to a second user."
Now, when I try to determine all processes running with a particular bundle identifier (say com.ak.myproc), I cannot find the processes that belong to the first user.
I've tried the following but in vain:
[NSRunningApplication runningApplicationsWithBundleIdentifier:]
[[NSWorkspace sharedWorkspace] runningApplications] and then comparing the bundle identifier of each application - the first user's app does not even show up in this list.
using sysctl() and then iterating through the process list - here the pid of the first user's app does appear, but after that:
When I try [NSRunningApplication runningApplicationWithProcessIdentifier:], I get nil.
When I try GetProcessForPID() followed by ProcessInformationCopyDictionary(), I get a nil dictionary.
When I try GetProcessForPID() followed by GetProcessInformation(), I do not get anything useful in ProcessInfoRec.
Can somebody please help? Thanks.
OS: Mac OS X 10.8.4
Xcode: 4.6.2
You can map the process name to a bundle identifier using NSWorkspace.
#include <sys/sysctl.h>
#include <pwd.h>
typedef struct kinfo_proc kinfo_proc;
static int GetBSDProcessList(kinfo_proc **procList, size_t *procCount)
// Returns a list of all BSD processes on the system. This routine
// allocates the list and puts it in *procList and a count of the
// number of entries in *procCount. You are responsible for freeing
// this list (use "free" from System framework).
// On success, the function returns 0.
// On error, the function returns a BSD errno value.
{
int err;
kinfo_proc * result;
bool done;
static const int name[] = { CTL_KERN, KERN_PROC, KERN_PROC_ALL, 0 };
// Declaring name as const requires us to cast it when passing it to
// sysctl because the prototype doesn't include the const modifier.
size_t length;
// assert( procList != NULL);
// assert(*procList == NULL);
// assert(procCount != NULL);
*procCount = 0;
// We start by calling sysctl with result == NULL and length == 0.
// That will succeed, and set length to the appropriate length.
// We then allocate a buffer of that size and call sysctl again
// with that buffer. If that succeeds, we're done. If that fails
// with ENOMEM, we have to throw away our buffer and loop. Note
// that the loop causes us to call sysctl with NULL again; this
// is necessary because the ENOMEM failure case sets length to
// the amount of data returned, not the amount of data that
// could have been returned.
result = NULL;
done = false;
do {
assert(result == NULL);
// Call sysctl with a NULL buffer.
length = 0;
err = sysctl( (int *) name, (sizeof(name) / sizeof(*name)) - 1,
NULL, &length,
NULL, 0);
if (err == -1) {
err = errno;
}
// Allocate an appropriately sized buffer based on the results
// from the previous call.
if (err == 0) {
result = malloc(length);
if (result == NULL) {
err = ENOMEM;
}
}
// Call sysctl again with the new buffer. If we get an ENOMEM
// error, toss away our buffer and start again.
if (err == 0) {
err = sysctl( (int *) name, (sizeof(name) / sizeof(*name)) - 1,
result, &length,
NULL, 0);
if (err == -1) {
err = errno;
}
if (err == 0) {
done = true;
} else if (err == ENOMEM) {
assert(result != NULL);
free(result);
result = NULL;
err = 0;
}
}
} while (err == 0 && ! done);
// Clean up and establish post conditions.
if (err != 0 && result != NULL) {
free(result);
result = NULL;
}
*procList = result;
if (err == 0) {
*procCount = length / sizeof(kinfo_proc);
}
assert( (err == 0) == (*procList != NULL) );
return err;
}
+ (NSArray*)getBSDProcessList
{
kinfo_proc *mylist =NULL;
size_t mycount = 0;
GetBSDProcessList(&mylist, &mycount);
NSMutableArray *processes = [NSMutableArray arrayWithCapacity:(int)mycount];
for (int i = 0; i < mycount; i++) {
struct kinfo_proc *currentProcess = &mylist[i];
struct passwd *user = getpwuid(currentProcess->kp_eproc.e_ucred.cr_uid);
NSMutableDictionary *entry = [NSMutableDictionary dictionaryWithCapacity:4];
NSNumber *processID = [NSNumber numberWithInt:currentProcess->kp_proc.p_pid];
NSString *processName = [NSString stringWithFormat:@"%s", currentProcess->kp_proc.p_comm]; // note: p_comm is truncated to MAXCOMLEN (16) characters
if (processID) [entry setObject:processID forKey:@"processID"];
if (processName) [entry setObject:processName forKey:@"processName"];
if (processName)
{
NSString *bundleID = [self bundleIdentifierForApplicationName:processName];
if (bundleID)
[entry setObject:bundleID forKey:@"bundleId"];
}
if (user){
NSNumber *userID = [NSNumber numberWithUnsignedInt:currentProcess->kp_eproc.e_ucred.cr_uid];
NSString *userName = [NSString stringWithFormat:@"%s", user->pw_name];
if (userID) [entry setObject:userID forKey:@"userID"];
if (userName) [entry setObject:userName forKey:@"userName"];
}
[processes addObject:[NSDictionary dictionaryWithDictionary:entry]];
}
free(mylist);
return [NSArray arrayWithArray:processes];
}
+ (NSString *) bundleIdentifierForApplicationName:(NSString *)appName
{
NSWorkspace * workspace = [NSWorkspace sharedWorkspace];
NSString * appPath = [workspace fullPathForApplication:appName];
if (appPath) {
NSBundle * appBundle = [NSBundle bundleWithPath:appPath];
return [appBundle bundleIdentifier];
}
return nil;
}
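With the two methods above in place, answering the original question is just a filter over the returned dictionaries. A minimal sketch (MyProcessLister is a placeholder for whatever class holds the methods above):
NSArray *all = [MyProcessLister getBSDProcessList];
NSPredicate *match = [NSPredicate predicateWithFormat:@"bundleId == %@", @"com.ak.myproc"];
NSArray *matches = [all filteredArrayUsingPredicate:match];
for (NSDictionary *entry in matches) {
    // Works across fast-user-switched sessions because sysctl sees every user's processes
    NSLog(@"pid %@ owned by %@", entry[@"processID"], entry[@"userName"]);
}
Entries whose name could not be resolved to a bundle simply lack the bundleId key, so the predicate skips them.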
I have thoroughly read the question and answer in this thread:
How to exclude input or output channels from an aggregate CoreAudio device?
And it appears to be missing information on the solution:
I have created an aggregate device containing multiple audio devices. When I call Core Audio to get the number of streams (using kAudioDevicePropertyStreams), the return value is always 1. I have also tried the implementation in the Core Audio utility classes, CAHALAudioDevice::GetIOProcStreamUsage, but I still cannot see how to access the sub-streams and disable/enable them as mentioned there.
What needs to be done to disable/enable sub-streams?
EDIT
Here is CAHALAudioDevice::GetIOProcStreamUsage for reference:
void CAHALAudioDevice::GetIOProcStreamUsage(AudioDeviceIOProcID inIOProcID, bool inIsInput, bool* outStreamUsage) const
{
// make an AudioHardwareIOProcStreamUsage the right size
UInt32 theNumberStreams = GetNumberStreams(inIsInput);
UInt32 theSize = SizeOf32(void*) + SizeOf32(UInt32) + (theNumberStreams * SizeOf32(UInt32));
CAAutoFree<AudioHardwareIOProcStreamUsage> theStreamUsage(theSize);
// set it up
theStreamUsage->mIOProc = reinterpret_cast<void*>(inIOProcID);
theStreamUsage->mNumberStreams = theNumberStreams;
// get the property
CAPropertyAddress theAddress(kAudioDevicePropertyIOProcStreamUsage, inIsInput ? kAudioDevicePropertyScopeInput : kAudioDevicePropertyScopeOutput);
GetPropertyData(theAddress, 0, NULL, theSize, theStreamUsage);
// fill out the return value
for(UInt32 theIndex = 0; theIndex < theNumberStreams; ++theIndex)
{
outStreamUsage[theIndex] = (theStreamUsage->mStreamIsOn[theIndex] != 0);
}
}
For reference, here is the function my program uses to accomplish the results described in the linked question:
// Tell CoreAudio which input (or output) streams we actually want to use in our device
// @param devID the CoreAudio audio device ID of the aggregate device to modify
// @param ioProc the rendering callback-function (as was passed to AudioDeviceCreateIOProcID()'s second argument)
// @param scope either kAudioObjectPropertyScopeInput or kAudioObjectPropertyScopeOutput depending on which type of channels we want to modify
// @param numValidChannels how many audio channels in the aggregate device we want to actually use
// @param rightJustify if true, we want to use the last (numValidChannels) in the device; if false we want to use the first (numValidChannels) in the device
// @returns 0 on success or -1 on error
// @note this function doesn't change the layout of the audio-sample data in the audio-render callback; rather it causes some channels of audio in the callback to become zero'd out/unused.
int SetProcStreamUsage(AudioDeviceID devID, void * ioProc, AudioObjectPropertyScope scope, int numValidChannels, bool rightJustify)
{
const AudioObjectPropertyAddress sizesAddress =
{
kAudioDevicePropertyStreamConfiguration,
scope,
kAudioObjectPropertyElementMaster
};
UInt32 streamSizesDataSize = 0;
OSStatus err = AudioObjectGetPropertyDataSize(devID, &sizesAddress, 0, NULL, &streamSizesDataSize);
if (err != noErr)
{
printf("SetProcStreamUsage(%u,%i,%i): AudioObjectGetPropertyDataSize(kAudioDevicePropertyStreamConfiguration) failed!\n", (unsigned int) devID, scope, rightJustify);
return -1; // ("AudioObjectGetPropertyDataSize(kAudioDevicePropertyStreamConfiguration) failed");
}
const AudioObjectPropertyAddress usageAddress =
{
kAudioDevicePropertyIOProcStreamUsage,
scope,
kAudioObjectPropertyElementMaster
};
UInt32 streamUsageDataSize = 0;
err = AudioObjectGetPropertyDataSize(devID, &usageAddress, 0, NULL, &streamUsageDataSize);
if (err != noErr)
{
printf("SetProcStreamUsage(%u,%i,%i): AudioObjectGetPropertyDataSize(kAudioDevicePropertyIOProcStreamUsage) failed!\n", (unsigned int) devID, scope, rightJustify);
return -1; // ("AudioObjectGetPropertyDataSize(kAudioDevicePropertyIOProcStreamUsage) failed");
}
AudioBufferList * bufList = (AudioBufferList*) malloc(streamSizesDataSize); // using malloc() because the object-size is variable
if (bufList)
{
int ret = 0; // default to success; set to -1 on any failure below
err = AudioObjectGetPropertyData(devID, &sizesAddress, 0, NULL, &streamSizesDataSize, bufList);
if (err == noErr)
{
AudioHardwareIOProcStreamUsage * streamUsage = (AudioHardwareIOProcStreamUsage *) malloc(streamUsageDataSize); // using malloc() because the object-size is variable
if (streamUsage)
{
streamUsage->mIOProc = ioProc;
err = AudioObjectGetPropertyData(devID, &usageAddress, 0, NULL, &streamUsageDataSize, streamUsage);
if (err == noErr)
{
if (bufList->mNumberBuffers == streamUsage->mNumberStreams)
{
SInt32 numChannelsLeft = numValidChannels;
if (rightJustify)
{
// We only want streams corresponding to the last (N) channels to be enabled
for (SInt32 i=streamUsage->mNumberStreams-1; i>=0; i--)
{
streamUsage->mStreamIsOn[i] = (numChannelsLeft > 0);
numChannelsLeft -= bufList->mBuffers[i].mNumberChannels;
}
}
else
{
// We only want streams corresponding to the first (N) channels to be enabled
for (UInt32 i=0; i<streamUsage->mNumberStreams; i++)
{
streamUsage->mStreamIsOn[i] = (numChannelsLeft > 0);
numChannelsLeft -= bufList->mBuffers[i].mNumberChannels;
}
}
// Now set the stream-usage per our update, above
err = AudioObjectSetPropertyData(devID, &usageAddress, 0, NULL, streamUsageDataSize, streamUsage);
if (err != noErr)
{
printf("SetProcStreamUsage(%u,%i,%i): AudioObjectSetPropertyData(kAudioDevicePropertyIOProcStreamUsage) failed!\n", (unsigned int) devID, scope, rightJustify);
ret = -1; // ("AudioObjectSetPropertyData(kAudioDevicePropertyIOProcStreamUsage) failed");
}
}
else
{
printf("SetProcStreamUsage(%u,%i,%i): #Buffers (%u) doesn't match #Streams (%u)!\n", (unsigned int) devID, scope, rightJustify, bufList->mNumberBuffers, streamUsage->mNumberStreams);
ret = -1;
}
}
else
{
printf("SetProcStreamUsage(%u,%i,%i): AudioObjectSetPropertyData(kAudioDevicePropertyIOProcStreamUsage) failed!\n", (unsigned int) devID, scope, rightJustify);
ret = -1; // ("AudioObjectGetPropertyData(kAudioDevicePropertyIOProcStreamUsage) failed");
}
free(streamUsage);
}
else ret = -1; // out of memory?
}
else
{
printf("SetProcStreamUsage(%u,%i,%i): AudioObjectGetPropertyData(kAudioDevicePropertyStreamConfiguration) failed!\n", (unsigned int) devID, scope, rightJustify);
ret = -1; // ("AudioObjectGetPropertyData(kAudioDevicePropertyStreamConfiguration) failed");
}
free(bufList);
return ret;
}
else return -1; // out of memory?
}
I'm trying to record sound from the microphone and play it back in real time on OS X. Eventually it will be streamed over the network, but for now I'm just trying to achieve local recording/playback.
I'm able to record sound and write to a file, which I could do with both AVCaptureSession and AVAudioRecorder. However, I'm not sure how to play back the audio as I record it. Using AVCaptureAudioDataOutput works:
self.captureSession = [[AVCaptureSession alloc] init];
AVCaptureDevice *audioCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
NSError *error = nil;
AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioCaptureDevice error:&error];
AVCaptureAudioDataOutput *audioDataOutput = [[AVCaptureAudioDataOutput alloc] init];
self.serialQueue = dispatch_queue_create("audioQueue", NULL);
[audioDataOutput setSampleBufferDelegate:self queue:self.serialQueue];
if (audioInput && [self.captureSession canAddInput:audioInput] && [self.captureSession canAddOutput:audioDataOutput]) {
[self.captureSession addInput:audioInput];
[self.captureSession addOutput:audioDataOutput];
[self.captureSession startRunning];
// Stop after arbitrary time
double delayInSeconds = 4.0;
dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(delayInSeconds * NSEC_PER_SEC));
dispatch_after(popTime, dispatch_get_main_queue(), ^(void){
[self.captureSession stopRunning];
});
} else {
NSLog(#"Couldn't add them; error = %#",error);
}
...but I'm not sure how to implement the callback:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
?
}
I've tried getting the data out of the sampleBuffer and playing it using AVAudioPlayer by copying the code from this SO answer, but that code crashes on the appendBytes:length: method.
AudioBufferList audioBufferList;
NSMutableData *data= [NSMutableData data];
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
for( int y=0; y< audioBufferList.mNumberBuffers; y++ ){
AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
Float32 *frame = (Float32*)audioBuffer.mData;
NSLog(#"Length = %i",audioBuffer.mDataByteSize);
[data appendBytes:frame length:audioBuffer.mDataByteSize]; // Crashes here
}
CFRelease(blockBuffer);
NSError *playerError;
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithData:data error:&playerError];
if(player && !playerError) {
NSLog(#"Player was valid");
[player play];
} else {
NSLog(#"Error = %#",playerError);
}
Edit: The CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer call returns an OSStatus code of -12737, which according to the documentation is kCMSampleBufferError_ArrayTooSmall.
Edit 2: Based on this mailing list response, I passed a size_t out parameter as the second parameter to ...GetAudioBufferList.... This returned 40. Right now I'm just passing in 40 as a hard-coded value, which seems to work (the OSStatus return value is 0, at least).
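For reference, the 40 need not stay hard-coded; the function reports the size it wants if you ask first and then allocate. A minimal, untested sketch (I haven't verified that the NULL out-parameters are accepted on the sizing call; if not, pass dummies and release the returned block buffer):
// First call: ask only for the required AudioBufferList size
size_t sizeNeeded = 0;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, &sizeNeeded, NULL, 0, NULL, NULL, 0, NULL);
// Second call: allocate that much and fetch the buffers and block buffer for real
AudioBufferList *bufferList = malloc(sizeNeeded);
CMBlockBufferRef blockBuffer = NULL;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, bufferList, sizeNeeded, NULL, NULL, 0, &blockBuffer);
// ... use bufferList, then:
CFRelease(blockBuffer);
free(bufferList);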
Now the player initWithData:error: method gives the error:
Error Domain=NSOSStatusErrorDomain Code=1954115647 "The operation couldn’t be completed. (OSStatus error 1954115647.)" which I'm looking into. (1954115647 is the four-character code 'typ?', i.e. kAudioFileUnsupportedFileTypeError: AVAudioPlayer expects data in a recognized audio file format, not a bare buffer of samples.)
I've done iOS programming for a long time, but I hadn't used AVFoundation, Core Audio, etc. until now. It looks like there are a dozen ways to accomplish the same thing, depending on how low- or high-level you want to be, so any high-level overviews or framework recommendations are appreciated.
Appendix
Recording to a file
Recording to a file using AVCaptureSession:
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(captureSessionStartedNotification:) name:AVCaptureSessionDidStartRunningNotification object:nil];
self.captureSession = [[AVCaptureSession alloc] init];
AVCaptureDevice *audioCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
NSError *error = nil;
AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioCaptureDevice error:&error];
AVCaptureAudioFileOutput *audioOutput = [[AVCaptureAudioFileOutput alloc] init];
if (audioInput && [self.captureSession canAddInput:audioInput] && [self.captureSession canAddOutput:audioOutput]) {
NSLog(#"Can add the inputs and outputs");
[self.captureSession addInput:audioInput];
[self.captureSession addOutput:audioOutput];
[self.captureSession startRunning];
double delayInSeconds = 5.0;
dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(delayInSeconds * NSEC_PER_SEC));
dispatch_after(popTime, dispatch_get_main_queue(), ^(void){
[self.captureSession stopRunning];
});
}
else {
NSLog(#"Error was = %#",error);
}
}
- (void)captureSessionStartedNotification:(NSNotification *)notification
{
AVCaptureSession *session = notification.object;
id audioOutput = session.outputs[0];
NSLog(#"Capture session started; notification = %#",notification);
NSLog(#"Notification audio output = %#",audioOutput);
[audioOutput startRecordingToOutputFileURL:[[self class] outputURL] outputFileType:AVFileTypeAppleM4A recordingDelegate:self];
}
+ (NSURL *)outputURL
{
NSArray *searchPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentPath = [searchPaths objectAtIndex:0];
NSString *filePath = [documentPath stringByAppendingPathComponent:@"z1.alac"];
return [NSURL fileURLWithPath:filePath];
}
Recording to a file using AVAudioRecorder:
NSDictionary *recordSettings = [NSDictionary
dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:AVAudioQualityMin],
AVEncoderAudioQualityKey,
[NSNumber numberWithInt:16],
AVEncoderBitRateKey,
[NSNumber numberWithInt: 2],
AVNumberOfChannelsKey,
[NSNumber numberWithFloat:44100.0],
AVSampleRateKey,
@(kAudioFormatAppleLossless),
AVFormatIDKey,
nil];
NSError *recorderError;
self.recorder = [[AVAudioRecorder alloc] initWithURL:[[self class] outputURL] settings:recordSettings error:&recorderError];
self.recorder.delegate = self;
if (self.recorder && !recorderError) {
NSLog(#"Success!");
[self.recorder recordForDuration:10];
} else {
NSLog(#"Failure, recorder = %#",self.recorder);
NSLog(#"Error = %#",recorderError);
}
OK, I ended up working at a lower level than AVFoundation; I'm not sure whether that was necessary. I read up to Chapter 5 of Learning Core Audio and went with an implementation using Audio Queues. This code was adapted from code for recording to a file and playing back a file, so there are surely some unnecessary bits I've accidentally left in. Additionally, I'm not actually re-enqueuing buffers onto the output queue (see the note after the listing), but just as a proof of concept this works. The only file is listed here, and is also on GitHub.
//
// main.m
// Recorder
//
// Created by Maximilian Tagher on 8/7/13.
// Copyright (c) 2013 Tagher. All rights reserved.
//
#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>
#define kNumberRecordBuffers 3
//#define kNumberPlaybackBuffers 3
#define kPlaybackFileLocation CFSTR("/Users/Max/Music/iTunes/iTunes Media/Music/Taylor Swift/Red/02 Red.m4a")
#pragma mark - User Data Struct
// listing 4.3
struct MyRecorder;
typedef struct MyPlayer {
AudioQueueRef playerQueue;
SInt64 packetPosition;
UInt32 numPacketsToRead;
AudioStreamPacketDescription *packetDescs;
Boolean isDone;
struct MyRecorder *recorder;
} MyPlayer;
typedef struct MyRecorder {
AudioQueueRef recordQueue;
SInt64 recordPacket;
Boolean running;
MyPlayer *player;
} MyRecorder;
#pragma mark - Utility functions
// Listing 4.2
static void CheckError(OSStatus error, const char *operation) {
if (error == noErr) return;
char errorString[20];
// See if it appears to be a 4-char-code
*(UInt32 *)(errorString + 1) = CFSwapInt32HostToBig(error);
if (isprint(errorString[1]) && isprint(errorString[2])
&& isprint(errorString[3]) && isprint(errorString[4])) {
errorString[0] = errorString[5] = '\'';
errorString[6] = '\0';
} else {
// No, format it as an integer
NSLog(#"Was integer");
sprintf(errorString, "%d",(int)error);
}
fprintf(stderr, "Error: %s (%s)\n",operation,errorString);
exit(1);
}
OSStatus MyGetDefaultInputDeviceSampleRate(Float64 *outSampleRate)
{
OSStatus error;
AudioDeviceID deviceID = 0;
AudioObjectPropertyAddress propertyAddress;
UInt32 propertySize;
propertyAddress.mSelector = kAudioHardwarePropertyDefaultInputDevice;
propertyAddress.mScope = kAudioObjectPropertyScopeGlobal;
propertyAddress.mElement = 0;
propertySize = sizeof(AudioDeviceID);
error = AudioHardwareServiceGetPropertyData(kAudioObjectSystemObject,
&propertyAddress, 0, NULL,
&propertySize,
&deviceID);
if (error) return error;
propertyAddress.mSelector = kAudioDevicePropertyNominalSampleRate;
propertyAddress.mScope = kAudioObjectPropertyScopeGlobal;
propertyAddress.mElement = 0;
propertySize = sizeof(Float64);
error = AudioHardwareServiceGetPropertyData(deviceID,
&propertyAddress, 0, NULL,
&propertySize,
outSampleRate);
return error;
}
// Recorder
static void MyCopyEncoderCookieToFile(AudioQueueRef queue, AudioFileID theFile)
{
OSStatus error;
UInt32 propertySize;
error = AudioQueueGetPropertySize(queue, kAudioConverterCompressionMagicCookie, &propertySize);
if (error == noErr && propertySize > 0) {
Byte *magicCookie = (Byte *)malloc(propertySize);
CheckError(AudioQueueGetProperty(queue, kAudioQueueProperty_MagicCookie, magicCookie, &propertySize), "Couldn't get audio queue's magic cookie");
CheckError(AudioFileSetProperty(theFile, kAudioFilePropertyMagicCookieData, propertySize, magicCookie), "Couldn't set audio file's magic cookie");
free(magicCookie);
}
}
// Player
static void MyCopyEncoderCookieToQueue(AudioFileID theFile, AudioQueueRef queue)
{
UInt32 propertySize;
// Just check for presence of cookie
OSStatus result = AudioFileGetProperty(theFile, kAudioFilePropertyMagicCookieData, &propertySize, NULL);
if (result == noErr && propertySize != 0) {
Byte *magicCookie = (UInt8*)malloc(sizeof(UInt8) * propertySize);
CheckError(AudioFileGetProperty(theFile, kAudioFilePropertyMagicCookieData, &propertySize, magicCookie), "Get cookie from file failed");
CheckError(AudioQueueSetProperty(queue, kAudioQueueProperty_MagicCookie, magicCookie, propertySize), "Set cookie on queue failed");
free(magicCookie);
}
}
static int MyComputeRecordBufferSize(const AudioStreamBasicDescription *format, AudioQueueRef queue, float seconds)
{
int packets, frames, bytes;
frames = (int)ceil(seconds * format->mSampleRate);
if (format->mBytesPerFrame > 0) { // Not variable
bytes = frames * format->mBytesPerFrame;
} else { // variable bytes per frame
UInt32 maxPacketSize;
if (format->mBytesPerPacket > 0) {
// Constant packet size
maxPacketSize = format->mBytesPerPacket;
} else {
// Get the largest single packet size possible
UInt32 propertySize = sizeof(maxPacketSize);
CheckError(AudioQueueGetProperty(queue, kAudioConverterPropertyMaximumOutputPacketSize, &maxPacketSize, &propertySize), "Couldn't get queue's maximum output packet size");
}
if (format->mFramesPerPacket > 0) {
packets = frames / format->mFramesPerPacket;
} else {
// Worst case scenario: 1 frame in a packet
packets = frames;
}
// Sanity check
if (packets == 0) {
packets = 1;
}
bytes = packets * maxPacketSize;
}
return bytes;
}
void CalculateBytesForPlaythrough(AudioQueueRef queue,
AudioStreamBasicDescription inDesc,
Float64 inSeconds,
UInt32 *outBufferSize,
UInt32 *outNumPackets)
{
UInt32 maxPacketSize;
UInt32 propSize = sizeof(maxPacketSize);
CheckError(AudioQueueGetProperty(queue,
kAudioQueueProperty_MaximumOutputPacketSize,
&maxPacketSize, &propSize), "Couldn't get file's max packet size");
static const int maxBufferSize = 0x10000;
static const int minBufferSize = 0x4000;
if (inDesc.mFramesPerPacket) {
Float64 numPacketsForTime = inDesc.mSampleRate / inDesc.mFramesPerPacket * inSeconds;
*outBufferSize = numPacketsForTime * maxPacketSize;
} else {
*outBufferSize = maxBufferSize > maxPacketSize ? maxBufferSize : maxPacketSize;
}
if (*outBufferSize > maxBufferSize &&
*outBufferSize > maxPacketSize) {
*outBufferSize = maxBufferSize;
} else {
if (*outBufferSize < minBufferSize) {
*outBufferSize = minBufferSize;
}
}
*outNumPackets = *outBufferSize / maxPacketSize;
}
#pragma mark - Record callback function
static void MyAQInputCallback(void *inUserData,
AudioQueueRef inQueue,
AudioQueueBufferRef inBuffer,
const AudioTimeStamp *inStartTime,
UInt32 inNumPackets,
const AudioStreamPacketDescription *inPacketDesc)
{
// NSLog(#"Input callback");
// NSLog(#"Input thread = %#",[NSThread currentThread]);
MyRecorder *recorder = (MyRecorder *)inUserData;
MyPlayer *player = recorder->player;
if (inNumPackets > 0) {
// Enqueue on the output Queue!
AudioQueueBufferRef outputBuffer;
CheckError(AudioQueueAllocateBuffer(player->playerQueue, inBuffer->mAudioDataBytesCapacity, &outputBuffer), "Input callback failed to allocate new output buffer");
memcpy(outputBuffer->mAudioData, inBuffer->mAudioData, inBuffer->mAudioDataByteSize);
outputBuffer->mAudioDataByteSize = inBuffer->mAudioDataByteSize;
// [NSData dataWithBytes:inBuffer->mAudioData length:inBuffer->mAudioDataByteSize];
// Assuming LPCM so no packet descriptions
CheckError(AudioQueueEnqueueBuffer(player->playerQueue, outputBuffer, 0, NULL), "Enqueuing the buffer in input callback failed");
recorder->recordPacket += inNumPackets;
}
if (recorder->running) {
CheckError(AudioQueueEnqueueBuffer(inQueue, inBuffer, 0, NULL), "AudioQueueEnqueueBuffer failed");
}
}
static void MyAQOutputCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inCompleteAQBuffer)
{
// NSLog(#"Output thread = %#",[NSThread currentThread]);
// NSLog(#"Output callback");
MyPlayer *aqp = (MyPlayer *)inUserData;
MyRecorder *recorder = aqp->recorder;
if (aqp->isDone) return;
}
int main(int argc, const char * argv[])
{
@autoreleasepool {
MyRecorder recorder = {0};
MyPlayer player = {0};
recorder.player = &player;
player.recorder = &recorder;
AudioStreamBasicDescription recordFormat;
memset(&recordFormat, 0, sizeof(recordFormat));
recordFormat.mFormatID = kAudioFormatLinearPCM;
recordFormat.mChannelsPerFrame = 2; //stereo
// Begin my changes to make LPCM work
recordFormat.mBitsPerChannel = 16;
// Haven't checked if each of these flags is necessary, this is just what Chapter 2 used for LPCM.
recordFormat.mFormatFlags = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
// end my changes
MyGetDefaultInputDeviceSampleRate(&recordFormat.mSampleRate);
UInt32 propSize = sizeof(recordFormat);
CheckError(AudioFormatGetProperty(kAudioFormatProperty_FormatInfo,
0,
NULL,
&propSize,
&recordFormat), "AudioFormatGetProperty failed");
AudioQueueRef queue = {0};
CheckError(AudioQueueNewInput(&recordFormat, MyAQInputCallback, &recorder, NULL, NULL, 0, &queue), "AudioQueueNewInput failed");
recorder.recordQueue = queue;
// Fills in ABSD a little more
UInt32 size = sizeof(recordFormat);
CheckError(AudioQueueGetProperty(queue,
kAudioConverterCurrentOutputStreamDescription,
&recordFormat,
&size), "Couldn't get queue's format");
// MyCopyEncoderCookieToFile(queue, recorder.recordFile);
int bufferByteSize = MyComputeRecordBufferSize(&recordFormat,queue,0.5);
NSLog(#"%d",__LINE__);
// Create and Enqueue buffers
int bufferIndex;
for (bufferIndex = 0;
bufferIndex < kNumberRecordBuffers;
++bufferIndex) {
AudioQueueBufferRef buffer;
CheckError(AudioQueueAllocateBuffer(queue,
bufferByteSize,
&buffer), "AudioQueueBufferRef failed");
CheckError(AudioQueueEnqueueBuffer(queue, buffer, 0, NULL), "AudioQueueEnqueueBuffer failed");
}
// PLAYBACK SETUP
AudioQueueRef playbackQueue;
CheckError(AudioQueueNewOutput(&recordFormat,
MyAQOutputCallback,
&player, NULL, NULL, 0,
&playbackQueue), "AudioOutputNewQueue failed");
player.playerQueue = playbackQueue;
UInt32 playBufferByteSize;
CalculateBytesForPlaythrough(queue, recordFormat, 0.1, &playBufferByteSize, &player.numPacketsToRead);
bool isFormatVBR = (recordFormat.mBytesPerPacket == 0
|| recordFormat.mFramesPerPacket == 0);
if (isFormatVBR) {
NSLog(#"Not supporting VBR");
player.packetDescs = (AudioStreamPacketDescription*) malloc(sizeof(AudioStreamPacketDescription) * player.numPacketsToRead);
} else {
player.packetDescs = NULL;
}
// END PLAYBACK
recorder.running = TRUE;
player.isDone = false;
CheckError(AudioQueueStart(playbackQueue, NULL), "AudioQueueStart failed");
CheckError(AudioQueueStart(queue, NULL), "AudioQueueStart failed");
CFRunLoopRunInMode(kCFRunLoopDefaultMode, 10, TRUE);
printf("Playing through, press <return> to stop:\n");
getchar();
printf("* done *\n");
recorder.running = FALSE;
player.isDone = true;
CheckError(AudioQueueStop(playbackQueue, false), "Failed to stop playback queue");
CheckError(AudioQueueStop(queue, TRUE), "AudioQueueStop failed");
AudioQueueDispose(playbackQueue, FALSE);
AudioQueueDispose(queue, TRUE);
}
return 0;
}
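One note on the listing above: because MyAQInputCallback allocates a brand-new output buffer for every captured chunk, the spent buffers handed back to MyAQOutputCallback are never reclaimed. Rather than re-enqueuing them empty, a minimal (untested) fix that matches this allocate-per-chunk design is to dispose of each buffer when it completes:
static void MyAQOutputCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inCompleteAQBuffer)
{
    MyPlayer *aqp = (MyPlayer *)inUserData;
    if (aqp->isDone) return;
    // The input callback allocates a fresh buffer per chunk, so just free this one
    CheckError(AudioQueueFreeBuffer(inAQ, inCompleteAQBuffer), "AudioQueueFreeBuffer failed");
}
A tidier design would maintain a fixed pool of playback buffers instead of allocating inside the input callback.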
I want to open one application instance only once, so I just do like
NSArray *apps = [NSRunningApplication runningApplicationsWithBundleIdentifier:@"com.my.app"]; in main.m
When I open the app from the console, the count of the apps array is 0. However, when I double-click the app, it is 1.
So can anyone tell me what the difference is between double-clicking and launching from the console? Or is there another way to check whether an instance is already running?
That call queries the workspace, and the workspace is updated with a delay:
when the app is started from the Finder, it is launched via NSWorkspace, so the workspace is updated right away;
when the app is started from the console/Xcode, it is not started via NSWorkspace, so that class returns 0 at startup. Once your process's NSApplication is up, the workspace is informed and the count becomes 1.
=> The count is always correct in - (void)applicationDidFinishLaunching:(NSNotification *)aNotification
so either wait for NSApplication to start and THEN quit (like you do now, but later); see the sketch after this list
OR
see Preventing multiple process instances on Linux for a way to do it without cocoa
OR
look at launchd, which can do this :) http://developer.apple.com/library/mac/#documentation/MacOSX/Conceptual/BPSystemStartup/Chapters/CreatingLaunchdJobs.html
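Here is the sketch for the first option: by the time applicationDidFinishLaunching: runs, the workspace list is up to date (the bundle identifier is the one from the question):
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
    NSArray *apps = [NSRunningApplication runningApplicationsWithBundleIdentifier:@"com.my.app"];
    // The list now includes this process itself, so another instance means count > 1
    if ([apps count] > 1) {
        [NSApp terminate:nil];
    }
}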
You can use the code below. GetBSDProcessList is the same QA1123 routine listed in full in the first answer above; the getBSDProcessList wrapper turns its output into an NSArray of running process names.
+ (NSArray*)getBSDProcessList
{
NSMutableArray *ret = [NSMutableArray arrayWithCapacity:1];
kinfo_proc *mylist = NULL; // GetBSDProcessList allocates this buffer for us
size_t mycount = 0;
GetBSDProcessList(&mylist, &mycount);
size_t k;
for(k = 0; k < mycount; k++) {
kinfo_proc *proc = &mylist[k];
NSString *fullName = [[self infoForPID:proc->kp_proc.p_pid] objectForKey:(id)kCFBundleNameKey];
NSLog(@"fullName %@", fullName);
if (fullName != nil)
{
[ret addObject:fullName];
}
}
free(mylist);
return ret;
}
+ (NSDictionary *)infoForPID:(pid_t)pid
{
NSDictionary *ret = nil;
ProcessSerialNumber psn = { kNoProcess, kNoProcess };
if (GetProcessForPID(pid, &psn) == noErr) {
CFDictionaryRef cfDict = ProcessInformationCopyDictionary(&psn, kProcessDictionaryIncludeAllInformationMask);
if (cfDict) { // guard against a NULL dictionary before releasing it
ret = [NSDictionary dictionaryWithDictionary:(NSDictionary *)cfDict];
CFRelease(cfDict);
}
}
return ret;
}
Take a look at Technical Q&A QA1123 (Getting List of All Processes on Mac OS X)
I want to get a list of all running processes in Mac OS X.
When I use
[myWorkspace runningApplications];
I get only a list of the current user's applications.
How can I find a list of all processes, including those owned by root or mysql?
Have a look at Technical Q&A QA1123
The GetBSDProcessList routine and its getBSDProcessList wrapper are the same as those listed in the first answer above. Each entry in the array it returns is a dictionary whose userID and userName keys tell you which user owns the process, so root-owned and mysql-owned processes are included.
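To answer the question as asked, you can then filter on the userName key. A small sketch (MyProcessLister is a placeholder for the class holding getBSDProcessList; note the MySQL daemon account is often named _mysql rather than mysql):
NSArray *all = [MyProcessLister getBSDProcessList];
NSPredicate *byOwner = [NSPredicate predicateWithFormat:@"userName IN %@", @[@"root", @"_mysql"]];
NSArray *rootAndMySQLProcesses = [all filteredArrayUsingPredicate:byOwner];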
How can I change the Mac display brightness from a Cocoa application?
CGDisplayIOServicePort is deprecated as of OS X 10.9, so you have to use IOServiceGetMatchingServices to get the service parameter for IODisplaySetFloatParameter. Here's a basic function that looks up IODisplayConnect services and changes their brightness.
- (void) setBrightnessTo: (float) level
{
io_iterator_t iterator;
kern_return_t result = IOServiceGetMatchingServices(kIOMasterPortDefault,
IOServiceMatching("IODisplayConnect"),
&iterator);
// If we were successful
if (result == kIOReturnSuccess)
{
io_object_t service;
while ((service = IOIteratorNext(iterator))) {
IODisplaySetFloatParameter(service, kNilOptions, CFSTR(kIODisplayBrightnessKey), level);
// Let the object go
IOObjectRelease(service);
// Note: returning inside the loop means only the first display found is changed
return;
}
}
}
And in Swift (via @Dov):
private func setBrightnessLevel(level: Float) {
var iterator: io_iterator_t = 0
let result = IOServiceGetMatchingServices(kIOMasterPortDefault,
IOServiceMatching("IODisplayConnect").takeUnretainedValue(),
&iterator)
if result == kIOReturnSuccess {
var service: io_object_t = 1
for ;; {
service = IOIteratorNext(iterator)
if service == 0 {
break
}
IODisplaySetFloatParameter(service, 0, kIODisplayBrightnessKey, level)
IOObjectRelease(service)
}
}
}
(code is open source of course)
Expanding on Alex's answer:
In Xcode 8 beta 3 with Swift 3, the code is a lot more streamlined.
private func setBrightness(level: Float) {
let service = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("IODisplayConnect"))
IODisplaySetFloatParameter(service, 0, kIODisplayBrightnessKey, level)
IOObjectRelease(service)
}
From Alec Jacobson's Brightness Menu source code:
- (void) set_brightness:(float) new_brightness {
CGDirectDisplayID display[kMaxDisplays];
CGDisplayCount numDisplays;
CGDisplayErr err;
err = CGGetActiveDisplayList(kMaxDisplays, display, &numDisplays);
if (err != CGDisplayNoErr)
printf("cannot get list of displays (error %d)\n",err);
for (CGDisplayCount i = 0; i < numDisplays; ++i) {
CGDirectDisplayID dspy = display[i];
CFDictionaryRef originalMode = CGDisplayCurrentMode(dspy);
if (originalMode == NULL)
continue;
io_service_t service = CGDisplayIOServicePort(dspy);
float brightness;
err= IODisplayGetFloatParameter(service, kNilOptions, kDisplayBrightness,
&brightness);
if (err != kIOReturnSuccess) {
fprintf(stderr,
"failed to get brightness of display 0x%x (error %d)",
(unsigned int)dspy, err);
continue;
}
err = IODisplaySetFloatParameter(service, kNilOptions, kDisplayBrightness,
new_brightness);
if (err != kIOReturnSuccess) {
fprintf(stderr,
"Failed to set brightness of display 0x%x (error %d)",
(unsigned int)dspy, err);
continue;
}
}
}
In Xcode 8 beta 6, this does not compile:
IODisplaySetFloatParameter(service, 0, kIODisplayBrightnessKey, level)
Cannot convert value of type 'String' to expected argument type 'CFString!'
so let's cast it:
IODisplaySetFloatParameter(service, 0, kIODisplayBrightnessKey as CFString!, level)