Where to initialize System Media Player - media

We want to port Cobalt release 11 to our media player, but we don't know where to initialize our media player in Cobalt.
Is CreateWebMediaPlayer (in cobalt/media/media_module_starboard.cc) a good place to call our media framework initialization?
scoped_ptr<WebMediaPlayer> CreateWebMediaPlayer(WebMediaPlayerClient* client) OVERRIDE {
  ...
  XXX_mediaplayer_initialize();  // <<<< call our media player initialization
#if defined(COBALT_MEDIA_SOURCE_2016)
  SbWindow window = kSbWindowInvalid;
  if (system_window_) {
    window = system_window_->GetSbWindow();
  }
  ...
}
Since our media player initialization takes more than one second, calling it there could delay the start of YouTube video playback by more than a second.
Please advise on a suitable place to initialize the system media framework in Cobalt release 11.

Does this need to be called only once during system startup or every time a video is going to be played?
For the first case, you can put it into your Starboard Application initialization code. For the second case, you may still "pre-warm" it in your Starboard Application code.
You should always avoid modifying Cobalt code directly.
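For illustration, here is a minimal sketch of what that pre-warm could look like in the Starboard layer, assuming your port implements the SbEventHandle entry point directly; XXX_mediaplayer_initialize() is the placeholder name from the question, and if your port wraps events in an Application class, the equivalent startup hook works just as well:

#include "starboard/event.h"

// Sketch only: pre-warm the platform media framework when the Starboard
// application starts, so the >1 s initialization is not paid on first playback.
void SbEventHandle(const SbEvent* event) {
  static bool media_initialized = false;
  switch (event->type) {
    case kSbEventTypePreload:
    case kSbEventTypeStart:
      if (!media_initialized) {
        XXX_mediaplayer_initialize();  // placeholder from the question
        media_initialized = true;
      }
      break;
    default:
      break;
  }
  // ... forward the event to the rest of your port's event handling ...
}

If the media framework allows it, kicking that call off on a worker thread from the same hook keeps the one-second cost off the startup path entirely.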

Related

What is the correct usage of FrameTiming and FrameTimingManager

I'm trying to log the time the GPU takes to render a frame. To do this, I found that Unity provides a FrameTiming struct and a class named FrameTimingManager.
The FrameTiming struct has a property gpuFrameTime, which sounds like exactly what I need; however, the value is never set, and the documentation on it doesn't provide much help either:
public double gpuFrameTime;
Description
The GPU time for a given frame, in ms.
Looking further, I found the FrameTimingManager class, which contains a static method GetGpuTimerFrequency(), whose not-so-helpful documentation states only:
Returns ulong GPU timer frequency for current platform.
Description
This returns the frequency of GPU timer on the current platform, used to interpret timing results. If the platform does not support returning this value it will return 0.
Calling this method in an update loop only ever yields 0 (both on Windows 10 running Unity 2019.3 and on an Android phone running Android 10).
private void OnEnable()
{
    frameTiming = new FrameTiming();
}

private void Update()
{
    FrameTimingManager.CaptureFrameTimings();
    var result = FrameTimingManager.GetGpuTimerFrequency();
    Debug.LogFormat("result: {0}", result); // logs 0

    var gpuFrameTime = frameTiming.gpuFrameTime;
    Debug.LogFormat("gpuFrameTime: {0}", gpuFrameTime); // logs 0
}
So what's the deal here: am I using the FrameTimingManager incorrectly, or are Windows and Android simply not supported? (Unity mentions in the docs that not all platforms are supported, but nowhere do they give a list of supported devices.)
While grabbing documentation links for the question, I stumbled across some forum posts that shed light on the issue, so I'm leaving this here for future reference.
The FrameTimingManager is indeed not supported on Windows, and it only has limited support for Android devices, specifically only Android Vulkan devices. As explained by jwtan_Unity on the forums here (emphasis mine):
FrameTimingManager was introduced to support Dynamic Resolution. Thus, it is only supported on platforms that support Dynamic Resolution. These platforms are currently Xbox One, PS4, Nintendo Switch, iOS, macOS and tvOS (Metal only), Android (Vulkan only), Windows Standalone and UWP (DirectX 12 only).
Now, to be able to use FrameTimingManager.GetGpuTimerFrequency(), we need to do something else first: take a snapshot of the current timings using FrameTimingManager.CaptureFrameTimings (this needs to be done every frame). From the docs:
This function triggers the FrameTimingManager to capture a snapshot of FrameTiming's data, that can then be accessed by the user.
The FrameTimingManager tries to capture as many frames as the platform allows but will only capture complete timings from finished and valid frames so the number of frames it captures may vary. This will also capture platform specific extended frame timing data if the platform supports more in depth data specifically available to it.
As explained by Timothyh_Unity on the forums here:
CaptureFrameTimings() - This should be called once per frame (presuming you want timing data that frame). Basically this function captures a user facing collection of timing data.
So the full code to get the GPU timer frequency (on a supported device) would be:
private void Update()
{
    FrameTimingManager.CaptureFrameTimings();
    var result = FrameTimingManager.GetGpuTimerFrequency();
    Debug.LogFormat("result: {0}", result);
}
Note that all FrameTimingManager methods are static, and do not require you to instantiate a manager first.
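To actually read gpuFrameTime (the original goal of the question), the captured snapshot also has to be copied into a FrameTiming array with FrameTimingManager.GetLatestTimings; the fields of a FrameTiming you construct yourself are never filled in. A minimal sketch for a supported platform (the class name is just illustrative):

using UnityEngine;

public class GpuFrameTimeLogger : MonoBehaviour
{
    private readonly FrameTiming[] timings = new FrameTiming[1];

    private void Update()
    {
        FrameTimingManager.CaptureFrameTimings();

        // Copies up to timings.Length of the most recent complete frames into the array.
        uint frameCount = FrameTimingManager.GetLatestTimings((uint)timings.Length, timings);
        if (frameCount > 0)
        {
            Debug.LogFormat("gpuFrameTime: {0} ms", timings[0].gpuFrameTime);
        }
    }
}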
Why none of this is properly documented by Unity beats me...

Why is WebViewControlProcess.CreateWebViewControlAsync() never completing?

I’m trying to write some Rust code that uses Windows.Web.UI.Interop.WebViewControl (which is a Universal Windows Platform out-of-process wrapper expressly designed so Win32 apps can use EdgeHTML), and it’s all compiling, but not working properly at runtime.
The relevant code boils down to this, using the winit, winapi and winrt crates:
use winit::os::windows::WindowExt;
use winit::{EventsLoop, WindowBuilder};
use winapi::winrt::roapi::{RoInitialize, RO_INIT_SINGLETHREADED};
use winapi::shared::winerror::S_OK;
use winrt::{RtDefaultConstructible, RtAsyncOperation};
use winrt::windows::foundation::Rect;
use winrt::windows::web::ui::interop::WebViewControlProcess;

fn main() {
    assert!(unsafe { RoInitialize(RO_INIT_SINGLETHREADED) } == S_OK);

    let mut events_loop = EventsLoop::new();
    let window = WindowBuilder::new()
        .build(&events_loop)
        .unwrap();

    WebViewControlProcess::new()
        .create_web_view_control_async(
            window.get_hwnd() as usize as i64,
            Rect {
                X: 0.0,
                Y: 0.0,
                Width: 800.0,
                Height: 600.0,
            },
        )
        .expect("Creation call failed")
        .blocking_get()
        .expect("Creation async task failed")
        .expect("Creation produced None");
}
The WebViewControlProcess instantiation works, and the CreateWebViewControlAsync function does seem to care about the value it received as host_window_handle (pass it 0, or one off from the actual HWND value, and it complains). Yet the IAsyncOperation stays determinedly at AsyncStatus.Started (0), and so the blocking_get() call hangs indefinitely.
A full, runnable demonstration of the issue (with a bit more instrumentation).
I get the feeling that the WebViewControlProcess is at fault: its ProcessId is stuck at 0, and it doesn’t look to have spawned any subprocess. The ProcessExited event does not seem to be fired (I attached a handler to it immediately after instantiation; is there any opportunity for it to be fired before that?). Calling Terminate() fails with E_FAIL, as one might expect in such a situation.
Have I missed some sort of initialization for using Windows.Web.UI.Interop? Or is there some other reason why it’s not working?
It turned out that the problem was threading-related: the winit crate was doing its event loop in a different thread, and I did not realise this; I had erroneously assumed winit to be a harmless abstraction, which it turned out not quite to be.
I discovered this when I tried minimising and porting a known-functioning C++ example, this time doing all the Win32 API calls manually rather than through winit, so that I could be sure the translation was faithful. I got it to work, and discovered this:
The IAsyncOperation is fulfilled in the event loop, deep inside a DispatchMessageW call. That is when the Completion handler is called. Thus, for the operation to complete, you must run an event loop on the same thread. (An event loop on another thread doesn’t do anything.) Otherwise, it stays in the Started state.
Fortunately, winit is already moving to a new event loop which operates in the same thread, with the Windows implementation having landed a few days ago; when I migrated my code to use the eventloop-2.0 branch of winit, and to using the Completed handler instead of blocking_get(), it all started working.
A note on the winrt crate’s blocking_get() call, which would normally be the obvious solution while prototyping: you can’t use it in this case because it deadlocks. It blocks until the IAsyncOperation completes, but the IAsyncOperation will not complete until you process messages in the event loop (DispatchMessageW), which will never happen because you’re blocking the thread.
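For reference, this is the shape of the same-thread message pump that the completion depends on, sketched with just the winapi crate (assuming winapi 0.3 module paths; error handling and the winrt-side wiring of a Completed handler are left out):

use std::mem;
use std::ptr;
use winapi::um::winuser::{DispatchMessageW, GetMessageW, TranslateMessage, MSG};

// Sketch: run this on the thread that called create_web_view_control_async.
// The IAsyncOperation's Completed handler is invoked from inside
// DispatchMessageW, which is why blocking that thread instead deadlocks.
unsafe fn pump_messages() {
    let mut msg: MSG = mem::zeroed();
    // GetMessageW returns 0 on WM_QUIT and -1 on error; both end the loop.
    while GetMessageW(&mut msg, ptr::null_mut(), 0, 0) > 0 {
        TranslateMessage(&msg);
        DispatchMessageW(&msg); // async completions are delivered in here
    }
}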
Try initializing WebViewControlProcess with winrt::init_apartment(); it may need a single-threaded apartment (according to this answer).
Note also this from the Microsoft Edge Developer Guide:
Lastly, power users might notice the appearance of the Desktop App Web Viewer (previously named Win32WebViewHost), an internal system app representing the Win32 WebView, in the following places:
● In the Windows 10 Action Center. The source of these notifications should be understood as from a WebView hosted from a Win32 app.
● In the device access settings UI (Settings->Privacy->Camera/Location/Microphone). Disabling any of these settings denies access from all WebViews hosted in Win32 apps.

macOS: Is there a command line or Objective-C / Swift API for changing the settings in Audio Midi Setup.app?

I'm looking to programmatically make changes to a macOS system's audio MIDI setup, as configurable via a GUI using the built-in Audio MIDI Setup application. Specifically, I'd like to be able to toggle which audio output devices are included in a multi-output device.
Is there any method available for accomplishing that? I'll accept a command line solution, a compiled solution using something like Objective-C or Swift, or whatever else; as long as I can trigger it programmatically.
Yes, there is.
On Mac there is this framework called Core Audio. The interface found in AudioHardware.h is an interface to the HAL (Hardware Abstraction Layer). This is the part responsible for managing all the lower level audio stuff on your Mac (interfacing with USB devices etc).
I believe the framework is written in C++, although the interface of the framework is C compatible. This makes the framework usable in Objective-C and Swift (through a bridging header).
To get started with this framework, begin by reading AudioHardware.h in CoreAudio.framework. You can open this file from Xcode by pressing Cmd + Shift + O and typing AudioHardware.h.
To give you a starter example (which creates a new aggregate device with no sub-devices):
// Create a CFDictionary to hold all the options associated with the to-be-created aggregate
CFMutableDictionaryRef params = CFDictionaryCreateMutable(kCFAllocatorDefault, 10, NULL, NULL);

// Define the UID of the to-be-created aggregate
CFDictionaryAddValue(params, CFSTR(kAudioAggregateDeviceUIDKey), CFSTR("DemoAggregateUID"));

// Define the name of the to-be-created aggregate
CFDictionaryAddValue(params, CFSTR(kAudioAggregateDeviceNameKey), CFSTR("DemoAggregateName"));

// Define whether the aggregate should be a stacked aggregate (i.e. multi-output device)
static char stacked = 0;  // 0 = normal aggregate, 1 = stacked aggregate (multi-output device)
CFNumberRef cf_stacked = CFNumberCreate(kCFAllocatorDefault, kCFNumberCharType, &stacked);
CFDictionaryAddValue(params, CFSTR(kAudioAggregateDeviceIsStackedKey), cf_stacked);

// Create the actual aggregate device
AudioObjectID resulting_id = 0;
OSStatus result = AudioHardwareCreateAggregateDevice(params, &resulting_id);

// Check if we got an error.
// Note that the first run should succeed; running it a second time will fail
// because a device with this UID already exists.
if (result)
{
    printf("Error: %d\n", result);
}
There are some frameworks which make interfacing a bit easier by wrapping Core Audio calls. However, none of the ones I found wrap the creation and/or manipulation of aggregate devices. Still, they can be useful for finding the right devices in the system: AMCoreAudio (Swift), JACK (C & C++), libsoundio (C), RtAudio (C++).
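Building on that example, the question's actual goal (choosing which output devices end up in the multi-output device) appears to be covered by the sub-device list keys in the same header. Here is a hedged sketch that extends the params dictionary from the block above; the device UIDs are placeholders you would have to look up (kAudioDevicePropertyDeviceUID) on your own system:

// Sketch: describe two sub-devices by UID and attach them to the creation
// dictionary. kAudioAggregateDeviceSubDeviceListKey expects a CFArray of
// CFDictionaries, each carrying the sub-device's UID under kAudioSubDeviceUIDKey.
CFStringRef sub_device_uids[2] = { CFSTR("BuiltInSpeakerUID"),   // placeholder UID
                                   CFSTR("UsbDacUID") };         // placeholder UID

CFMutableArrayRef sub_devices =
    CFArrayCreateMutable(kCFAllocatorDefault, 2, &kCFTypeArrayCallBacks);

for (int i = 0; i < 2; i++)
{
    CFMutableDictionaryRef sub =
        CFDictionaryCreateMutable(kCFAllocatorDefault, 1,
                                  &kCFTypeDictionaryKeyCallBacks,
                                  &kCFTypeDictionaryValueCallBacks);
    CFDictionaryAddValue(sub, CFSTR(kAudioSubDeviceUIDKey), sub_device_uids[i]);
    CFArrayAppendValue(sub_devices, sub);  // the array retains the dictionary
    CFRelease(sub);
}

// The params dictionary above was created without retain callbacks, so keep
// sub_devices alive until after AudioHardwareCreateAggregateDevice has run.
CFDictionaryAddValue(params, CFSTR(kAudioAggregateDeviceSubDeviceListKey), sub_devices);

To change the composition of an aggregate that already exists, the kAudioAggregateDevicePropertyFullSubDeviceList property (set through AudioObjectSetPropertyData with a CFArray of device-UID strings) looks like the relevant hook, judging by the comments in AudioHardware.h.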

NSInputStream never HasBytesAvailable on iOS9 (classic Bluetooth)

I have a cross-platform (Xamarin) app that does some classic Bluetooth communication and works absolutely fine on iOS8. However, after re-building and running it on iOS9, I can't get the NSInputStream to ever have "HasBytesAvailable" = true. Note: I followed all the instructions from Xamarin's website.
I tried both assigning a delegate to the InputStream and waiting on the NSRunLoop but the stream never seems to have bytes available. The event only fires (on iOS9) when the Input stream is opened (on iOS8 it fires as expected).
Here is a snippet of the code that does the reading successfully on iOS8 (delegate method):
EAsession.InputStream.Delegate = new Foo();
EAsession.InputStream.Schedule(NSRunLoop.Current, NSRunLoop.NSDefaultRunLoopMode);
EAsession.InputStream.Open();

(NSRunLoop.Current).RunUntil(NSDate.FromTimeIntervalSinceNow(2));
Where Foo is a class that derives from NSObject and implements INSStreamDelegate:
public class Foo : NSObject, INSStreamDelegate
{
    [Export("stream:handleEvent:")]
    public void HandleEvent(Foundation.NSStream theStream, Foundation.NSStreamEvent streamEvent)
    {
        // Code to read bytes here
    }
}
To make sure there really are bytes sent to the iPhone 5, I modified the external Bluetooth device to simply echo back any bytes it receives.
Using either method (delegate or waiting on the NSRunLoop) on iOS8, the echo arrives immediately. However, when I change the target device to iOS9, I can wait forever and HasBytesAvailable is always false.
I even tried reading regardless of HasBytesAvailable being false, but nothing is read (no big surprise there, I guess).
Moreover, I tried building with both Xcode 6.4 and Xcode 7, with the same result.
At the moment I am out of ideas so any help would be greatly appreciated!
Thanks in advance!
EDIT:
I contacted Xamarin and I am writing a test app for them to test whether it is an Apple issue or Xamarin issue.
Also, see the comment in this link about Bluetooth... perhaps related?
So I finally solved it.
I am not entirely sure what the problem ultimately was, as I did a few things:
Update to the Xamarin beta stream (5.9.7 build 12).
Update to the latest iOS (9.0.2).
Use Xcode 7.0.
Change where I assign the delegate, open the streams and schedule them.
The change to the code was that instead of handling the opening, scheduling and delegate assignment separately in the receiving and sending methods, I did it all in one place: when creating the EASession, I assigned all of the above like so:
s = new EASession(device, protocol);
output_stream = s.OutputStream;
input_stream = s.InputStream;
output_stream.Delegate = this;
output_stream.Schedule(NSRunLoop.Current, NSRunLoop.NSDefaultRunLoopMode);
output_stream.Open();
input_stream.Delegate = this;
input_stream.Schedule(NSRunLoop.Current, NSRunLoop.NSDefaultRunLoopMode);
input_stream.Open();
At the moment I am not unscheduling the input and output streams, and I am not sure if this is the right thing to do. I'll try to keep this updated as I move from working code to nice-looking code that works...
Unlike what Bluecode describes, I definitely needed to use my input stream and I was always opening it, so I am not sure whether it was the same problem. I am still unsure what the actual solution was, as it could be any one or a combination of the changes above. I might experiment with this later, when I have more time, to see which one was the single fix, if any.
Hope this helps any.
Cheers
I was able to solve this problem by opening an InputStream to the EAAccessory that I had no intention of using. My guess is that both an input and an output stream are required as part of the new iOS9 Bluetooth contract. This is the relevant update to my working code:
accessory.Delegate = new EAAccessoryDelegateHandler();

if (PrinterSession == null)
{
    PrinterSession = new EASession(accessory, PrintProtocol);
}

var mysession = PrinterSession;
mysession.InputStream.Open();

mysession.OutputStream.Delegate = this;
mysession.OutputStream.Schedule(NSRunLoop.Current, NSRunLoop.NSDefaultRunLoopMode);
mysession.OutputStream.Open();
Then when I close my OutputStream I close the unused InputStream immediately afterwards.

Obtaining supported GPUs on Cocos2d-x

I'm trying to find out which texture compression formats the GPU of the device running the game supports, so that I can use the correct compressed textures for that GPU (I don't know if this is the best way to do this, I'm open to any suggestion :) )
std::string GPUInfo::getTC()
{
    std::string TC;

    cocos2d::Configuration::getInstance()->gatherGPUInfo();

    if (cocos2d::Configuration::getInstance()->supportsPVRTC())
        TC = ".pvr.ccz";
    else if (cocos2d::Configuration::getInstance()->supportsATITC())
        TC = ".dds";
    else
        TC = ".png";

    CCLOG("Texture compression format -> %s", TC.c_str());
    return TC;
}
But this keeps causing this error:
call to OpenGL ES API with no current context (logged once per thread)
Is there another way to obtain which GPUs are supported in the current device?
You are almost there.
cocos2d::Configuration::getInstance()->gatherGPUInfo();
You don't need to call gatherGPUInfo() because it is automatically called from Director::setOpenGLView.
https://github.com/cocos2d/cocos2d-x/blob/fe4b34fcc3b6bb312bd66ca5b520630651575bc3/cocos/base/CCDirector.cpp#L361-L369
You can call supportsPVRTC() and supportsATITC() without a GL error from anywhere on the main thread, but you should call them after Cocos2d-x initialization (setOpenGLView).
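Applied to the snippet from the question, the fix is simply to drop the gatherGPUInfo() call; a sketch keeping the same extension mapping:

// Sketch of the corrected helper: no gatherGPUInfo() call, it relies on the
// values already gathered by Director::setOpenGLView during initialization.
std::string GPUInfo::getTC()
{
    std::string TC;
    auto* config = cocos2d::Configuration::getInstance();

    if (config->supportsPVRTC())
        TC = ".pvr.ccz";
    else if (config->supportsATITC())
        TC = ".dds";
    else
        TC = ".png";

    CCLOG("Texture compression format -> %s", TC.c_str());
    return TC;
}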

Resources