Is there a Snow Leopard compatible "sudden motion sensor" API available?

I have been using Unimotion in my application to read motion sensor values on Apple laptops, but have been unable to port the code to 10.6 64-bit. (I have also tried SMSLib, with no luck.)
Is there any simple 10.6-compatible SMS API?
If there is no alternative, I am also considering patching one of the libraries. Both Unimotion and SMSLib use the following call, which has been deprecated in 10.5 and removed from 10.6 64-bit:
result = IOConnectMethodStructureIStructureO(
dataPort, kernFunc, structureInputSize,
&structureOutputSize, &inputStructure,
outputStructure);
Is there any simple way to replace this with new IOKit calls?
(This post did not really get me much further.)

Is there any simple way to replace this with new IOKit calls?
That very document suggests replacements. What about this one?
kern_return_t
IOConnectCallStructMethod(
mach_port_t connection, // In
uint32_t selector, // In
const void *inputStruct, // In
size_t inputStructCnt, // In
void *outputStruct, // Out
size_t *outputStructCnt) // In/Out
As far as I can tell, there should be no difference except for the order of the arguments. That said, I've never used I/O Kit, so I could be missing some critical conceptual difference that will make this call not work as the old one did.
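For what it's worth, a mechanical translation of the deprecated call from the question would look roughly like this (an untested sketch based only on the two prototypes; the main differences are the argument order and the fact that the output size is now an in/out size_t):
size_t outputSize = structureOutputSize; // in: capacity of outputStructure, out: bytes actually returned
result = IOConnectCallStructMethod(
    dataPort,            // connection: the same io_connect_t as before
    kernFunc,            // selector: the same method index as before
    &inputStructure,     // input struct
    structureInputSize,  // input struct size
    outputStructure,     // output struct
    &outputSize);        // in/out output struct size
structureOutputSize = outputSize;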

I haven't used this in 10.6, but does this work?
http://code.google.com/p/google-mac-qtz-patches/

Related

What is the correct usage of FrameTiming and FrameTimingManager

I'm trying to log the time the GPU takes to render a frame. To do this I found that Unity implements a struct FrameTiming, and a class named FrameTimingManager.
The FrameTiming struct has a property gpuFrameTime which sounds like exactly what I need; however, the value is never set, and the documentation on it doesn't provide much help either:
public double gpuFrameTime;
Description
The GPU time for a given frame, in ms.
Looking further I found the FrameTimingManager class, which contains a static method GetGpuTimerFrequency(), whose not-so-helpful documentation states only:
Returns ulong GPU timer frequency for current platform.
Description
This returns the frequency of GPU timer on the current platform, used to interpret timing results. If the platform does not support returning this value it will return 0.
Calling this method in an update loop only ever yields 0 (on both Windows 10 running Unity 2019.3 and an Android phone running Android 10).
private void OnEnable()
{
    frameTiming = new FrameTiming();
}

private void Update()
{
    FrameTimingManager.CaptureFrameTimings();
    var result = FrameTimingManager.GetGpuTimerFrequency();
    Debug.LogFormat("result: {0}", result); // logs 0
    var gpuFrameTime = frameTiming.gpuFrameTime;
    Debug.LogFormat("gpuFrameTime: {0}", gpuFrameTime); // logs 0
}
So what's the deal here: am I using the FrameTimingManager incorrectly, or are Windows and Android not supported? (Unity mentions in the docs that not all platforms are supported, but nowhere do they give a list of supported devices...)
While grabbing documentation links for the question I stumbled across some forum posts that shed light on the issue, so I'm leaving it here for future reference.
The FrameTimingManager is indeed not supported for Windows, and only has limited support for Android devices, more specifically only for Android Vulkan devices. As explained by jwtan_Unity on the forums here (emphasis mine):
FrameTimingManager was introduced to support Dynamic Resolution. Thus, it is only supported on platforms that support Dynamic Resolution. These platforms are currently Xbox One, PS4, Nintendo Switch, iOS, macOS and tvOS (Metal only), Android (Vulkan only), Windows Standalone and UWP (DirectX 12 only).
Now, to be able to use FrameTimingManager.GetGpuTimerFrequency() we need to do something else first: take a snapshot of the current timings using FrameTimingManager.CaptureFrameTimings (this needs to be done every frame). From the docs:
This function triggers the FrameTimingManager to capture a snapshot of FrameTiming's data, that can then be accessed by the user.
The FrameTimingManager tries to capture as many frames as the platform allows but will only capture complete timings from finished and valid frames so the number of frames it captures may vary. This will also capture platform specific extended frame timing data if the platform supports more in depth data specifically available to it.
As explained by Timothyh_Unity on the forums here:
CaptureFrameTimings() - This should be called once per frame (presuming you want timing data that frame). Basically this function captures a user-facing collection of timing data.
So the total code to get the GPU frequency (on a supported device) would be:
private void Update()
{
    FrameTimingManager.CaptureFrameTimings();
    var result = FrameTimingManager.GetGpuTimerFrequency();
    Debug.LogFormat("result: {0}", result);
}
Note that all FrameTimingManager methods are static, and do not require you to instantiate a manager first.
Why none of this is properly documented by Unity beats me...

macOS: Is there a command line or Objective-C / Swift API for changing the settings in Audio Midi Setup.app?

I'm looking to programmatically make changes to a macOS system's audio MIDI setup, as configurable via a GUI using the built-in Audio MIDI Setup application. Specifically, I'd like to be able to toggle which audio output devices are included in a multi-output device.
Is there any method available for accomplishing that? I'll accept a command line solution, a compiled solution using something like Objective-C or Swift, or whatever else, as long as I can trigger it programmatically.
Yes, there is.
On the Mac there is a framework called Core Audio. The interface found in AudioHardware.h is an interface to the HAL (Hardware Abstraction Layer). This is the part responsible for managing all the lower-level audio on your Mac (interfacing with USB devices, etc.).
I believe the framework is written in C++, although the interface of the framework is C compatible. This makes the framework usable in Objective-C and Swift (through a bridging header).
To start using this framework, read AudioHardware.h in CoreAudio.framework. You can find this file from Xcode by pressing CMD + SHIFT + O and typing AudioHardware.h.
To give you a starting example (which creates a new aggregate device with no sub-devices):
// Create a CFDictionary to hold all the options associated with the to-be-created aggregate
CFMutableDictionaryRef params = CFDictionaryCreateMutable(kCFAllocatorDefault, 10, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
// Define the UID of the to-be-created aggregate
CFDictionaryAddValue(params, CFSTR(kAudioAggregateDeviceUIDKey), CFSTR("DemoAggregateUID"));
// Define the name of the to-be-created aggregate
CFDictionaryAddValue(params, CFSTR(kAudioAggregateDeviceNameKey), CFSTR("DemoAggregateName"));
// Define whether the aggregate should be a stacked aggregate (i.e. a multi-output device)
static char stacked = 0; // 0 = normal aggregate, 1 = stacked (multi-output device)
CFNumberRef cf_stacked = CFNumberCreate(kCFAllocatorDefault, kCFNumberCharType, &stacked);
CFDictionaryAddValue(params, CFSTR(kAudioAggregateDeviceIsStackedKey), cf_stacked);
// Create the actual aggregate device
AudioObjectID resulting_id = 0;
OSStatus result = AudioHardwareCreateAggregateDevice(params, &resulting_id);
// Check if we got an error.
// Note that when running this the first time all should be OK; running it a second time should
// result in an error because the device we want to create already exists.
if (result)
{
    printf("Error: %d\n", result);
}
There are some frameworks which make interfacing a bit easier by wrapping Core Audio calls. However, none of the ones I found wrap the creation and/or manipulation of aggregate devices. Still, they can be useful for finding the right devices in the system: AMCoreAudio (Swift), JACK (C & C++), libsoundio (C), RtAudio (C++).
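Since the question is specifically about a multi-output device containing particular outputs, the same call also accepts a sub-device list. Below is a rough sketch along those lines; the two device UIDs are placeholders (look up the real ones via kAudioDevicePropertyDeviceUID), and I haven't verified every detail across macOS versions, so treat it as a starting point rather than a definitive recipe:
// Build with: cc multiout.c -framework CoreAudio -framework CoreFoundation
#include <CoreAudio/CoreAudio.h>

// Creates a stacked aggregate (multi-output device) containing two existing
// output devices. "BuiltInSpeakersUID" and "USBDACUID" are placeholder UIDs.
static OSStatus create_multi_output_device(void)
{
    CFMutableDictionaryRef params = CFDictionaryCreateMutable(kCFAllocatorDefault, 0,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFDictionaryAddValue(params, CFSTR(kAudioAggregateDeviceUIDKey), CFSTR("DemoMultiOutUID"));
    CFDictionaryAddValue(params, CFSTR(kAudioAggregateDeviceNameKey), CFSTR("Demo Multi-Output"));

    // 1 = stacked aggregate, i.e. a multi-output device
    SInt32 stacked = 1;
    CFNumberRef cf_stacked = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &stacked);
    CFDictionaryAddValue(params, CFSTR(kAudioAggregateDeviceIsStackedKey), cf_stacked);
    CFRelease(cf_stacked);

    // The sub-device list is a CFArray of dictionaries, one per device,
    // each holding that device's UID under kAudioSubDeviceUIDKey.
    const char *uids[] = { "BuiltInSpeakersUID", "USBDACUID" }; // placeholders
    CFMutableArrayRef subdevices = CFArrayCreateMutable(kCFAllocatorDefault, 0, &kCFTypeArrayCallBacks);
    for (int i = 0; i < 2; i++)
    {
        CFStringRef uid = CFStringCreateWithCString(kCFAllocatorDefault, uids[i], kCFStringEncodingUTF8);
        CFMutableDictionaryRef sub = CFDictionaryCreateMutable(kCFAllocatorDefault, 0,
            &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
        CFDictionaryAddValue(sub, CFSTR(kAudioSubDeviceUIDKey), uid);
        CFArrayAppendValue(subdevices, sub);
        CFRelease(sub);
        CFRelease(uid);
    }
    CFDictionaryAddValue(params, CFSTR(kAudioAggregateDeviceSubDeviceListKey), subdevices);
    CFRelease(subdevices);

    // Create the device; an error here often just means the UID already exists.
    AudioObjectID aggregate_id = 0;
    OSStatus result = AudioHardwareCreateAggregateDevice(params, &aggregate_id);
    CFRelease(params);
    return result;
}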

OpenCL half precision extension support on Apple OS X

Does anybody know the state of half-precision floating-point support in OpenCL as implemented by Apple?
According to the OpenCL 1.1 spec, the following statement should enable half2:
#pragma OPENCL EXTENSION cl_khr_fp16 : enable
but when I come to build the kernel the compiler throws a message such as
error: variable has incomplete type 'half4' (aka 'struct __Reserved_Name__Do_
The following thread asks a similar question: OpenCL half4 type Apple OS X
But that thread is old. Can anyone tell me whether half precision is supported by Apple more recently?
When you want to know whether an extension is supported by a specific implementation (regardless of whether it's Apple's or another vendor's), just use the function
cl_int clGetPlatformInfo(cl_platform_id platform,
cl_platform_info param_name,
size_t param_value_size,
void *param_value,
size_t *param_value_size_ret)
passing the value CL_PLATFORM_EXTENSIONS for the param_name argument. It will return a space-separated list of extension names.
Note that this list contains only the extensions "supported by all devices associated with this platform".
So even if some devices on the platform support the cl_khr_fp16 extension but your device does not, it won't appear in that list.
To know the extensions available on your device, use
clGetDeviceInfo(...)
with the value CL_DEVICE_EXTENSIONS for the param_name argument.
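As a quick sanity check for cl_khr_fp16 on a particular device, something along these lines should do (a minimal sketch with almost no error handling):
// On OS X, build with: cc check_fp16.c -framework OpenCL
#include <stdio.h>
#include <string.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS ||
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS)
    {
        fprintf(stderr, "no OpenCL GPU device found\n");
        return 1;
    }

    // First ask for the size of the extension string, then fetch it.
    size_t size = 0;
    clGetDeviceInfo(device, CL_DEVICE_EXTENSIONS, 0, NULL, &size);
    char extensions[size];
    clGetDeviceInfo(device, CL_DEVICE_EXTENSIONS, size, extensions, NULL);

    printf("device extensions: %s\n", extensions);
    printf("cl_khr_fp16 is %s\n", strstr(extensions, "cl_khr_fp16") ? "supported" : "NOT supported");
    return 0;
}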
For a generic answer to OpenCL extension querying see CaptainObvious' answer above (https://stackoverflow.com/a/17425167/5394228).
I asked Apple Developer Support about this and they say that half support is available in Metal and there are no plans to add new functionality to OpenCL now. (they answered Nov 2017)

USBHIDManager HID, getReport() and setReport() On Mac Environment

We are trying to communicate with a USB HID device. This device works fine on Windows, where we can send a report and get a report back using WriteFile() and ReadFile().
On the Mac, we are trying to interface with the device using setReport() and getReport(). But getReport() does not return any data, only an error.
What is wrong in the application?
In order to make use of asynchronous behavior, the event source obtained using getAsyncEventSource must be added to a run loop.
The above note is part of the documentation comment for setReport. You might need to learn the run loop mechanism on Mac OS first.
Since it's impossible to explain the whole mechanism here, the following functions, roughly in this order, might help with your coding once you are familiar with run loops. (Try searching for "CFRunLoop" on Google.)
CFRunLoopGetCurrent();
CFRunLoopRun();
CFRunLoopAddSource(CFRunLoopRef rl, CFRunLoopSourceRef source, CFStringRef mode);
CFRunLoopStop(CFRunLoopRef rl); (I usually call this function in the callback method)
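I haven't run this against your particular device, but to make the run loop point concrete, here is a rough sketch using the newer IOHIDManager / IOHIDDevice C API (not the older interface that getAsyncEventSource belongs to); the device selection and the 64-byte report buffer are just placeholders:
// Build with: cc hid_runloop.c -framework IOKit -framework CoreFoundation
#include <IOKit/hid/IOHIDLib.h>
#include <CoreFoundation/CoreFoundation.h>
#include <stdio.h>

static uint8_t report_buffer[64]; // placeholder size; use your device's report size

static void input_report_cb(void *context, IOReturn result, void *sender,
                            IOHIDReportType type, uint32_t reportID,
                            uint8_t *report, CFIndex reportLength)
{
    printf("Got report of %ld bytes (ID %u)\n", (long)reportLength, reportID);
    CFRunLoopStop(CFRunLoopGetCurrent()); // stop once we have a report
}

int main(void)
{
    IOHIDManagerRef manager = IOHIDManagerCreate(kCFAllocatorDefault, kIOHIDOptionsTypeNone);
    IOHIDManagerSetDeviceMatching(manager, NULL); // NULL = match every HID device
    IOHIDManagerOpen(manager, kIOHIDOptionsTypeNone);

    CFSetRef devices = IOHIDManagerCopyDevices(manager);
    if (!devices || CFSetGetCount(devices) == 0)
    {
        fprintf(stderr, "No HID devices found\n");
        return 1;
    }

    // Grab the first device just for illustration; real code would match
    // on the vendor/product ID of your device instead.
    CFIndex count = CFSetGetCount(devices);
    IOHIDDeviceRef device_list[count];
    CFSetGetValues(devices, (const void **)device_list);
    IOHIDDeviceRef device = device_list[0];

    IOHIDDeviceOpen(device, kIOHIDOptionsTypeNone);
    IOHIDDeviceRegisterInputReportCallback(device, report_buffer, sizeof(report_buffer),
                                           input_report_cb, NULL);
    IOHIDDeviceScheduleWithRunLoop(device, CFRunLoopGetCurrent(), kCFRunLoopDefaultMode);

    // Without a running run loop the callback never fires -- this is the part
    // that is easy to miss when coming from WriteFile()/ReadFile() on Windows.
    CFRunLoopRun();

    IOHIDDeviceClose(device, kIOHIDOptionsTypeNone);
    CFRelease(devices);
    CFRelease(manager);
    return 0;
}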

Same C code producing different results on Mac OS X than Windows and Linux

I'm working with an older version of OpenSSL, and I'm running into some behavior that has stumped me for days when trying to work with cross-platform code.
I have code that calls OpenSSL to sign something. My code is modeled after the code in ASN1_sign, which is found in a_sign.c in OpenSSL, and it exhibits the same issues when I use it. Here is the relevant line of code (which is found and used exactly the same way in a_sign.c):
EVP_SignUpdate(&ctx,(unsigned char *)buf_in,inl);
ctx is a structure that OpenSSL uses, not relevant to this discussion
buf_in is a char* of the data that is to be signed
inl is the length of buf_in
EVP_SignUpdate can be called repeatedly in order to read in data to be signed before EVP_SignFinal is called to sign it.
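For context, the surrounding flow in those older OpenSSL versions looks roughly like the sketch below; the function name and buffer sizes are illustrative, and buf_in/inl/pkey stand in for the caller's data and private key:
#include <openssl/evp.h>

/* Rough outline of the old-style EVP_Sign* flow (OpenSSL 0.9.x / 1.0.x era). */
static int sign_buffer(const char *buf_in, unsigned int inl, EVP_PKEY *pkey,
                       unsigned char *sig, unsigned int *siglen)
{
    EVP_MD_CTX ctx;
    EVP_MD_CTX_init(&ctx);
    EVP_SignInit(&ctx, EVP_sha1());

    /* May be called repeatedly to feed the data in chunks. */
    EVP_SignUpdate(&ctx, (const unsigned char *)buf_in, inl);

    /* sig must point to at least EVP_PKEY_size(pkey) bytes. */
    int ok = EVP_SignFinal(&ctx, sig, siglen, pkey);
    EVP_MD_CTX_cleanup(&ctx);
    return ok;
}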
Everything works fine when this code is used on Ubuntu and Windows 7, both of them produce the exact same signatures given the same inputs.
On OS X, if the size of inl is less than 64 (that is, there are 64 bytes or fewer in buf_in), then it too produces the same signatures as Ubuntu and Windows. However, if the size of inl becomes greater than 64, it produces its own internally consistent signatures that differ from the other platforms. By internally consistent, I mean that the Mac will read the signatures and verify them as proper, while it will reject the signatures from Ubuntu and Windows, and vice versa.
I managed to fix this issue, and get the same signatures produced, by changing that line above to the following, which reads the buffer one byte at a time:
const unsigned char *input_it;
for (input_it = (const unsigned char *)buf_in; input_it < (const unsigned char *)buf_in + inl; input_it++) {
    EVP_SignUpdate(&ctx, input_it, 1);
}
This causes OS X to reject its own signatures of data > 64 bytes as invalid, and I tracked down a similar line elsewhere for verifying signatures that needed to be broken up in an identical manner.
This fixes the signature creation and verification, but something is still going wrong, as I'm encountering other problems, and I really don't want to go traipsing (and modifying!) much deeper into OpenSSL.
Surely I'm doing something wrong, as I'm seeing the exact same issues when I use stock ASN1_sign. Is this an issue with the way that I compiled OpenSSL? For the life of me I can't figure it out. Can anyone educate me on what bone-headed mistake I must be making?
This is likely a bug in the MacOS implementation. I recommend you file a bug by sending the above text to the developers as described at http://www.openssl.org/support/faq.html#BUILD17
There are known issues with OpenSSL on the Mac (you have to jump through a few hoops to ensure it links with the correct library instead of the system library). Did you compile it yourself? The PROBLEMS file in the distribution explains the details of the issue and suggests a few workarounds. (Or, if you are running with shared libraries, double-check that your DYLD_LIBRARY_PATH is correctly set.) No guarantee, but this looks like a likely place to start...
The most common issue when porting code between Windows and Linux is the default contents of uninitialized memory. I think Windows sets it to 0xDEADBEEF and Linux sets it to 0s.
