I have a cross-platform (Xamarin) app that does some classic Bluetooth communication and works absolutely fine on iOS8. However, after rebuilding and running it on iOS9 I can't get the NSInputStream to ever have HasBytesAvailable = true. Note: I followed all the instructions from Xamarin's website.
I tried both assigning a delegate to the InputStream and waiting on the NSRunLoop, but the stream never seems to have bytes available. The event only fires (on iOS9) when the input stream is opened; on iOS8 it fires as expected.
Here is a snippet of the code that does the reading successfully on iOS8 (delegate method):
EAsession.InputStream.Delegate = new Foo();
EAsession.InputStream.Schedule(NSRunLoop.Current,NSRunLoop.NSDefaultRunLoopMode);
EAsession.InputStream.Open();
(NSRunLoop.Current).RunUntil(NSDate.FromTimeIntervalSinceNow(2));
where Foo is a class that derives from NSObject and implements INSStreamDelegate:
public class Foo : NSObject, INSStreamDelegate
{
    [Export("stream:handleEvent:")]
    public void HandleEvent(Foundation.NSStream theStream, Foundation.NSStreamEvent streamEvent)
    {
        // Code to read bytes here
    }
}
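For reference, the body of HandleEvent is along these lines (a sketch only; the 1 KB buffer is arbitrary and the Read(byte[], nuint) overload is assumed):
if (streamEvent == NSStreamEvent.HasBytesAvailable)
{
    var input = (NSInputStream)theStream;
    var buffer = new byte[1024];
    // Read returns the number of bytes actually copied into the buffer
    nint bytesRead = input.Read(buffer, (nuint)buffer.Length);
    // process buffer[0..bytesRead) here
}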
To make sure there really are bytes sent to the iPhone 5, I modified the external Bluetooth device to simply echo back any bytes it receives.
Using either method (delegate or waiting on the NSRunLoop) on iOS8, the echo arrives immediately. However, when I target the iOS9 device I can wait forever and HasBytesAvailable always stays false.
I even tried reading regardless of HasBytesAvailable being false, but nothing is read (no big surprise there, I guess).
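For completeness, the run-loop-waiting variant I mention above looks roughly like this (a sketch; the buffer size and timeout are arbitrary):
var stream = EAsession.InputStream;
var buffer = new byte[1024];
while (!stream.HasBytesAvailable)
{
    // Pump the run loop briefly, then check again
    NSRunLoop.Current.RunUntil(NSDate.FromTimeIntervalSinceNow(0.5));
}
nint bytesRead = stream.Read(buffer, (nuint)buffer.Length);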
Moreover, I tried building with both Xcode 6.4 and Xcode 7, with the same result.
At the moment I am out of ideas so any help would be greatly appreciated!
Thanks in advance!
EDIT:
I contacted Xamarin and I am writing a test app for them to determine whether it is an Apple issue or a Xamarin issue.
Also, see the comment in this link about Bluetooth... perhaps related?
So I finally solved it.
I am not entirely sure what the problem was in the end, as I did a few things:
Update to Xamarin beta stream ( 5.9.7 build 12).
Update to the latest iOS (9.0.2)
Use Xcode 7.0
And change where I assign the delegate, open the streams and schedule them.
The code change was that instead of handling the opening, scheduling and delegate assignment separately in the receiving and sending methods, I now do it all in one place, namely right after creating the EASession:
s = new EASession(device, protocol);
output_stream = s.OutputStream;
input_stream = s.InputStream;

output_stream.Delegate = this;
output_stream.Schedule(NSRunLoop.Current, NSRunLoop.NSDefaultRunLoopMode);
output_stream.Open();

input_stream.Delegate = this;
input_stream.Schedule(NSRunLoop.Current, NSRunLoop.NSDefaultRunLoopMode);
input_stream.Open();
At the moment I am not unscheduling the input and output streams, and I am not sure if this is the right thing to do. I'll try to keep this updated as I move from working code to nice-looking code that works...
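For what it's worth, a teardown that mirrors the setup above would presumably look something like this (a sketch only; Close and Unschedule are the standard NSStream calls, but I have not verified that they are required here):
input_stream.Close();
input_stream.Unschedule(NSRunLoop.Current, NSRunLoop.NSDefaultRunLoopMode);
input_stream.Delegate = null;
output_stream.Close();
output_stream.Unschedule(NSRunLoop.Current, NSRunLoop.NSDefaultRunLoopMode);
output_stream.Delegate = null;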
Unlike what Bluecode describes, I definitely needed to use my input stream and I was always opening it, so I am not sure it was the same problem. I am still unsure which of the changes above solved the problem; it could be any one of them or a combination. I might experiment with this a bit more later, when I have more time, to see whether a single one of them was the fix, if any.
Hope this helps.
Cheers
I was able to solve this problem by opening an InputStream to the EAAccessory even though I had no intention of using it. My guess is that both an input and an output stream are required as part of the new iOS9 Bluetooth contract. This is the relevant update to my working code:
accessory.Delegate = new EAAccessoryDelegateHandler();
if (PrinterSession == null)
{
    PrinterSession = new EASession(accessory, PrintProtocol);
}
var mysession = PrinterSession;
mysession.InputStream.Open();
mysession.OutputStream.Delegate = this;
mysession.OutputStream.Schedule(NSRunLoop.Current, NSRunLoop.NSDefaultRunLoopMode);
mysession.OutputStream.Open();
Then when I close my OutputStream I close the unused InputStream immediately afterwards.
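In code, that teardown order is roughly the following (a sketch; mysession is the session from the snippet above):
mysession.OutputStream.Close();
mysession.OutputStream.Unschedule(NSRunLoop.Current, NSRunLoop.NSDefaultRunLoopMode);
// Close the input stream that was opened but never used
mysession.InputStream.Close();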
I'm trying to log the time the GPU takes to render a frame. To do this I found that Unity provides a struct FrameTiming and a class named FrameTimingManager.
The FrameTiming struct has a property gpuFrameTime which sounds like exactly what I need; however, the value is never set, and the documentation on it doesn't provide much help either:
public double gpuFrameTime;
Description
The GPU time for a given frame, in ms.
Looking further, I found the FrameTimingManager class, which contains a static method GetGpuTimerFrequency(), whose not-so-helpful documentation states only:
Returns ulong GPU timer frequency for current platform.
Description
This returns the frequency of GPU timer on the current platform, used to interpret timing results. If the platform does not support returning this value it will return 0.
Calling this method in an update loop only ever yields 0 (on both Windows 10 running Unity 2019.3 and an Android phone running Android 10):
private void OnEnable()
{
    frameTiming = new FrameTiming();
}

private void Update()
{
    FrameTimingManager.CaptureFrameTimings();
    var result = FrameTimingManager.GetGpuTimerFrequency();
    Debug.LogFormat("result: {0}", result); // logs 0
    var gpuFrameTime = frameTiming.gpuFrameTime;
    Debug.LogFormat("gpuFrameTime: {0}", gpuFrameTime); // logs 0
}
So what's the deal here: am I using the FrameTimingManager incorrectly, or are Windows and Android simply not supported (Unity mentions in the docs that not all platforms are supported, but nowhere do they give a list of supported platforms..)?
While grabbing documentation links for the question I stumbled across some forum posts that shed light on the issue, so I'm leaving this here for future reference.
The FrameTimingManager is indeed not supported on Windows, and only has limited support for Android devices; more specifically, only Android Vulkan devices. As explained by jwtan_Unity on the forums here (emphasis mine):
FrameTimingManager was introduced to support Dynamic Resolution. Thus, it is only supported on platforms that support Dynamic Resolution. These platforms are currently Xbox One, PS4, Nintendo Switch, iOS, macOS and tvOS (Metal only), Android (Vulkan only), Windows Standalone and UWP (DirectX 12 only).
Now, to be able to use FrameTimingManager.GetGpuTimerFrequency(), we need to do something else first: take a snapshot of the current timings using FrameTimingManager.CaptureFrameTimings (this needs to be done every frame). From the docs:
This function triggers the FrameTimingManager to capture a snapshot of FrameTiming's data, that can then be accessed by the user.
The FrameTimingManager tries to capture as many frames as the platform allows but will only capture complete timings from finished and valid frames so the number of frames it captures may vary. This will also capture platform specific extended frame timing data if the platform supports more in depth data specifically available to it.
As explained by Timothyh_Unity on the forums here:
CaptureFrameTimings() - This should be called once per frame(presuming you want timing data that frame). Basically this function captures a user facing collection of timing data.
So the full code to get the GPU timer frequency (on a supported device) would be:
private void Update()
{
    FrameTimingManager.CaptureFrameTimings();
    var result = FrameTimingManager.GetGpuTimerFrequency();
    Debug.LogFormat("result: {0}", result);
}
Note that all FrameTimingManager methods are static, and do not require you to instantiate a manager first.
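To then read the per-frame GPU time the original question was after, the captured timings can be copied out with FrameTimingManager.GetLatestTimings and read from a FrameTiming array (a sketch for a supported platform; the buffer size of 1 is arbitrary):
private readonly FrameTiming[] frameTimings = new FrameTiming[1];

private void Update()
{
    FrameTimingManager.CaptureFrameTimings();
    uint copied = FrameTimingManager.GetLatestTimings((uint)frameTimings.Length, frameTimings);
    if (copied > 0)
    {
        // gpuFrameTime is reported in milliseconds
        Debug.LogFormat("gpuFrameTime: {0} ms", frameTimings[0].gpuFrameTime);
    }
}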
Why none of this is properly documented by Unity beats me...
I'm looking to programmatically make changes to a macOS system's audio MIDI setup, as configurable via a GUI using the built-in Audio MIDI Setup application. Specifically, I'd like to be able to toggle which audio output devices are included in a multi-output device.
Is there any method available for accomplishing that? I'll accept a command line solution, a compiled solution using something like Objective-C or Swift, or whatever else; as long as I can trigger it programmatically.
Yes, there is.
On the Mac there is a framework called Core Audio. The interface found in AudioHardware.h is an interface to the HAL (Hardware Abstraction Layer). This is the part responsible for managing all the lower-level audio on your Mac (interfacing with USB devices, etc.).
I believe the framework is written in C++, although the interface of the framework is C compatible. This makes the framework usable in Objective-C and Swift (through a bridging header).
To get started with this framework you should read AudioHardware.h in CoreAudio.framework. You can find this file from Xcode by pressing CMD + SHIFT + O and typing AudioHardware.h.
To give you a starter example (which creates a new aggregate device with no sub-devices):
// Create a CFDictionary to hold all the options associated with the to-be-created aggregate
CFMutableDictionaryRef params = CFDictionaryCreateMutable(kCFAllocatorDefault, 10, NULL, NULL);
// Define the UID of the to-be-created aggregate
CFDictionaryAddValue(params, CFSTR(kAudioAggregateDeviceUIDKey), CFSTR("DemoAggregateUID"));
// Define the name of the to-be-created aggregate
CFDictionaryAddValue(params, CFSTR(kAudioAggregateDeviceNameKey), CFSTR("DemoAggregateName"));
// Define if the aggregate should be a stacked aggregate (ie multi-output device)
static char stacked = 0; // 0 = normal aggregate, non-zero = stacked (multi-output device)
CFNumberRef cf_stacked = CFNumberCreate(kCFAllocatorDefault, kCFNumberCharType, &stacked);
CFDictionaryAddValue(params, CFSTR(kAudioAggregateDeviceIsStackedKey), cf_stacked);
// Create the actual aggregate device
AudioObjectID resulting_id = 0;
OSStatus result = AudioHardwareCreateAggregateDevice(params, &resulting_id);
// Check if we got an error.
// Note that the first run should succeed; running it a second time should result in an error, as the device we want to create already exists.
if (result)
{
    printf("Error: %d\n", result);
}
There are some frameworks which make interfacing a bit easier by wrapping Core Audio calls. However, none of the ones I found wrap the creation and/or manipulation of aggregate devices. Still, they can be useful for finding the right devices in the system: AMCoreAudio (Swift), JACK (C & C++), libsoundio (C), RtAudio (C++).
I am attempting to implement a Bluetooth hands-free profile under OS X 10.10 using IOBluetoothHandsFreeDevice. I can connect to and control my iPhone 6 running iOS 8.2 without issues.
Every first attempt to make or receive a call (or use Siri) results in static noise being streamed from my phone to my computer instead of audio. The static doesn't appear to be completely random as it is in sync with the sound I expect to hear (e.g. ringing tone, etc). The audio from my Mac to my iPhone is however crystal clear.
After the initial static-audio call, making another call over the same connection yields 50/50 results: half the calls are perfect and the other half are static.
Here's the basic code:
IOBluetoothDevice* device = ...;
_hfDevice = [[IOBluetoothHandsFreeDevice alloc] initWithDevice:device delegate:self];
uint32_t supportedFeatures = _hfDevice.supportedFeatures;
supportedFeatures |= IOBluetoothHandsFreeDeviceFeatureEnhancedCallStatus;
supportedFeatures |= IOBluetoothHandsFreeDeviceFeatureEnhancedCallControl;
supportedFeatures |= IOBluetoothHandsFreeDeviceFeatureCLIPresentation;
[_hfDevice setSupportedFeatures:supportedFeatures];
[_hfDevice connect];
// after connected delegate method...
// dial a phone number
const NSString* phoneNumber = @"...";
[_hfDevice dialNumber:phoneNumber];
// or activate siri
const NSString* activateSiri = @"AT+BVRA=1";
[_hfDevice sendATCommand:activateSiri];
I expect that I'm possibly overlooking something, but Apple's documentation on their Bluetooth classes doesn't contain any examples of creating hands-free applications. Has anyone else had this experience?
This issue was resolved in an update to OS X.
We are trying to communicate with a USB HID device. This device works fine on Windows, where we can send a report and get a report back using WriteFile() and ReadFile().
On the Mac, we are trying to interface with the device using setReport() and getReport(), but getReport() does not return any data, only an error.
What is wrong in the application?
In order to make use of asynchronous behavior, the event source obtained using getAsyncEventSource must be added to a run loop.
The above note is part of the comment on setReport. You might need to learn the run loop mechanism on Mac OS first.
It's impossible to explain the whole mechanism here, but the following functions (roughly in this order) might help once you are familiar with run loops (try searching for "CFRunLoop" on Google):
CFRunLoopGetCurrent();
CFRunLoopRun();
CFRunLoopAddSource(CFRunLoopRef rl, CFRunLoopSourceRef source, CFStringRef mode);
CFRunLoopStop(CFRunLoopRef rl); (I usually call this function in the callback method)
I am writing an Objective-C++ framework which needs to host Audio Units. Everything works perfectly fine if I attempt to make use of Apple's default units like the DLS Synth and various effects. However, my application seems to be unable to find any third-party Audio Units (in /Library/Audio/Plug-Ins/Components).
For example, the following code snippet...
CAComponentDescription tInstrumentDesc =
CAComponentDescription('aumu','dls ','appl');
AUGraphAddNode(mGraph, &tInstrumentDesc, &mInstrumentNode);
AUGraphOpen(mGraph);
...works just fine. However, if I instead initialize tInstrumentDesc with 'aumu', 'NiMa', '-Ni-' (the description for Native Instruments' Massive Synth), then AUGraphOpen() will return the OSStatus error badComponentType and the AUGraph will fail to open. This holds true for all of my third party Audio Units.
The following code, modified from the Audacity source, sheds a little light on the problem. It loops through all of the available Audio Units of a certain type and prints out their name.
ComponentDescription d;
d.componentType = 'aumu';
d.componentSubType = 0;
d.componentManufacturer = 0;
d.componentFlags = 0;
d.componentFlagsMask = 0;

Component c = FindNextComponent(NULL, &d);
while (c != NULL)
{
    ComponentDescription found;
    Handle nameHandle = NewHandle(0);
    GetComponentInfo(c, &found, nameHandle, 0, 0);
    printf((*nameHandle) + 1);
    printf("\n");
    c = FindNextComponent(c, &d);
}
After running this code, the only output is Apple: DLSMusicDevice (which is the Audio Unit fitting the description 'aumu', 'dls ', 'appl' above).
This doesn't seem to be a problem with the units themselves, as Apple's auval tool lists my third party Units (they validate too).
I've tried running my test application with sudo, and the custom framework I'm working on is in /Library/Frameworks.
Turns out, the issue was due to compiling for 64-bit. After switching to 32-bit, everything began to work as advertised. Not much of a solution I guess, but there you have it.
To clarify, I mean changing the Xcode build setting ARCHS to "32-bit Intel" as opposed to the default "Standard 32/64-bit Intel".
First of all, I'm going to assume that you initialized mGraph by calling NewAUGraph(&mGraph) instead of just declaring it and then trying to open it. Beyond that, I suspect that the problem here is with your AU graph, not the AudioUnits themselves. To be sure, you should probably try loading the AudioUnit manually (i.e., outside of a graph) and see if you get any errors that way.