When registering the MinimalMediaRouteProvider/MediaRouteButton from Android's Chromecast SDK, we get a standard dialog for connecting to existing Chromecast devices. Once connected to a device, the same dialog also provides a way to set the volume using a draggable seek bar. I am having trouble synchronizing the position of this volume seek bar with the volume that is actually set on the Chromecast device.
As part of registering the MinimalMediaRouteProvider we provide a com.google.cast.MediaRouteAdapter implementation. The onSetVolume(volume) method of this interface is called when the user drags the volume seek bar described above, which gives us a good way to update the volume of the connected Chromecast channel via messageStream.setVolume(volume).
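Concretely, the handoff looks like this (a sketch built only from the identifiers mentioned above; the exact signatures in the old preview SDK are assumed):

// Inside the com.google.cast.MediaRouteAdapter implementation:
// the dialog's seek bar drives onSetVolume(), and forwarding the value
// to the message stream changes the device volume.
@Override
public void onSetVolume(double volume) {
    if (messageStream != null) {
        messageStream.setVolume(volume); // volume assumed to be 0.0-1.0
    }
}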
The problem is that once we update the volume, there is no obvious way to tell the MinimalMediaRouteProvider UI that the volume has changed so it can position the seek bar accordingly - currently it always shows the volume as 0.
What is the proper way to notify the MinimalMediaRouteProvider about the current volume level so it can update its volume UI?
Looking at the MediaRouter sample included with the v7 support library, there seems to be a way to create a MediaRouteDescriptor, update the volume there and thus communicate it back to the MediaRouteProvider, but it is not very clear how to do this in the context of Chromecast/MinimalMediaRouteProvider.
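For reference, the descriptor pattern from that sample looks roughly like this (a sketch assuming a custom android.support.v7.media.MediaRouteProvider subclass; the route id, name and volume range are placeholders, and this is the v7 framework pattern rather than something MinimalMediaRouteProvider is documented to expose):

// Uses android.support.v7.media.MediaRouteDescriptor,
// MediaRouteProviderDescriptor and MediaRouter.
// Republishing the descriptor is how the v7 framework learns the new
// volume and repositions the seek bar in the route controller dialog.
private void publishRoute(int currentVolume) {
    MediaRouteDescriptor route = new MediaRouteDescriptor.Builder(
            "chromecast-route", "Living Room")    // placeholder id/name
            .setVolumeHandling(MediaRouter.RouteInfo.PLAYBACK_VOLUME_VARIABLE)
            .setVolumeMax(20)                     // placeholder range
            .setVolume(currentVolume)             // what the seek bar should show
            .build();
    setDescriptor(new MediaRouteProviderDescriptor.Builder()
            .addRoute(route)
            .build());
}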
You can call MediaRouteStateListener.onVolumeChanged() to update the volume seek bar.
I have a more detailed answer here:
https://stackoverflow.com/a/18867128/1334870
To get the volume immediately from VideoCastManager:
mVolume = (int) (mCastManager.getVolume() * 100); // the Cast volume is a double between 0.0 and 1.0
To receive system-based volume changes in the receiver (e.g. the on-screen volume slider pop-up, or other connected devices changing the volume), you'll need to add a cast consumer in your controller activity:
private VideoCastConsumerImpl mCastConsumer = new VideoCastConsumerImpl() {
    @Override
    public void onVolumeChanged(double volume, boolean isMute) {
        // Scale the 0.0-1.0 Cast volume to the seek bar's 0-100 range.
        mVolumeSeekBar.setProgress((int) (volume * 100));
    }
};
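For the callbacks to fire, the consumer has to be registered with the manager, and unregistered so the activity isn't leaked (a sketch assuming the CastCompanionLibrary's VideoCastManager API from the linked answer):

@Override
protected void onResume() {
    super.onResume();
    mCastManager = VideoCastManager.getInstance();
    mCastManager.addVideoCastConsumer(mCastConsumer);    // start receiving onVolumeChanged()
}

@Override
protected void onPause() {
    mCastManager.removeVideoCastConsumer(mCastConsumer); // stop callbacks while backgrounded
    super.onPause();
}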
For more details:
Android Sender App Development (Aug '14)
Related
In my Xamarin Forms Android app I'm sending audio through the earpiece instead of the normal speaker. I'm doing something like:
myMediaPlayer.SetAudioAttributes(new AudioAttributes.Builder().SetLegacyStreamType(Stream.VoiceCall).Build());
The deprecated version of the above is
myMediaPlayer.SetAudioStreamType(Stream.VoiceCall);
Either way, this seems to work (though, if there is a better way, please let me know).
HOWEVER, I cannot control the volume. The sound correctly comes out of the ear speaker only, but at a constant volume, ignoring my volume key presses (though the on-screen volume meter goes up and down as I press the buttons). ... (For clarification, I'm not trying to control the volume programmatically; I simply want the device to adjust the volume as one would expect.)
Any help?
UPDATE
I've also tried this code:
myMediaPlayer.SetAudioAttributes(new AudioAttributes.Builder().SetContentType(AudioContentType.Music).SetUsage(AudioUsageKind.VoiceCommunication).Build());
I have two goals:
Play only out of the earpiece (or headphones if attached)
Be able to control the volume (which I thought would be a given)
UPDATE with Code Sample
The following will play through the earpiece, but not change volume.
public void Play(string url)
{
    var myMediaPlayer = new MediaPlayer();
    myMediaPlayer.SetDataSource(url);
    // Audio attributes / stream type must be set before Prepare().
    if (Build.VERSION.SdkInt >= BuildVersionCodes.Lollipop)
        myMediaPlayer.SetAudioAttributes(new AudioAttributes.Builder().SetContentType(AudioContentType.Music).SetUsage(AudioUsageKind.VoiceCommunication).Build());
    else
        myMediaPlayer.SetAudioStreamType(Stream.VoiceCall);
    myMediaPlayer.Prepare();
    myMediaPlayer.Start();
}
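One thing worth checking: the hardware volume keys adjust the activity's "volume control stream", which defaults to the ringer/music stream, so an activity playing on the voice-call stream has to opt in before the keys affect it. A minimal sketch of the underlying Android API in Java (matching the other Java snippets in this digest; in Xamarin.Android this is exposed as the Activity.VolumeControlStream property):

// In the Activity that hosts playback: route the hardware volume keys
// to the same stream the MediaPlayer uses. Otherwise the keys (and the
// on-screen meter) move a different stream while STREAM_VOICE_CALL stays fixed.
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setVolumeControlStream(AudioManager.STREAM_VOICE_CALL);
}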
I made my app able to select its audio output (like "system default" or the user's DAC).
But when the user chooses an output from the Sound pane in System Preferences, my app's output follows the output the user selected there.
I searched a lot and added a listener so that I can immediately switch my app's output back to the device previously selected in the app whenever the system output changes.
BUT this causes a very annoying switching delay of a few milliseconds.
I guess this is because I switch my app's output only after it has already changed to the system default.
So I wonder if I can be notified BEFORE the system default output changes.
(Like the viewWillAppear API in Cocoa.)
Thank you.
The listener I used to detect changes of the system default audio output is from the article below:
How to get notification if System Preferences Default Sound changed
thanks
More details:
I used AudioUnitSetProperty(audioOut, kAudioOutputUnitProperty_CurrentDevice, kAudioUnitScope_Output, 0, &deviceID, (UInt32)sizeof(deviceID)) to select the output device (see the Apple documentation),
and added this listener:
func addListenerBlock(listenerBlock: @escaping AudioObjectPropertyListenerBlock,
                      onAudioObjectID: AudioObjectID,
                      forPropertyAddress: inout AudioObjectPropertyAddress) {
    if kAudioHardwareNoError != AudioObjectAddPropertyListenerBlock(onAudioObjectID, &forPropertyAddress, nil, listenerBlock) {
        LOG("Error calling: AudioObjectAddPropertyListenerBlock")
    }
}

func add() {
    var propertyAddress = AudioObjectPropertyAddress(mSelector: kAudioHardwarePropertyDefaultOutputDevice,
                                                     mScope: kAudioObjectPropertyScopeGlobal,
                                                     mElement: kAudioObjectPropertyElementMaster)
    self.addListenerBlock(listenerBlock: audioObjectPropertyListenerBlock,
                          onAudioObjectID: AudioObjectID(bitPattern: kAudioObjectSystemObject),
                          forPropertyAddress: &propertyAddress)
}
kAudioUnitSubType_DefaultOutput tracks the current output device selected by the user in the Sound Preferences. To play to a specific device, use kAudioUnitSubType_HALOutput. The comments in AUComponent.h are helpful:
@enum      Apple input/output audio unit sub types (OS X)
@constant  kAudioUnitSubType_HALOutput
               - desktop only
           The audio unit that interfaces to any audio device. The user specifies which
           audio device to track. The audio unit can do input from the device as well as
           output to the device. Bus 0 is used for the output side, bus 1 is used
           to get audio input from the device.
@constant  kAudioUnitSubType_DefaultOutput
               - desktop only
           A specialisation of AUHAL that is used to track the user's selection of the
           default device as set in the Sound Prefs.
@constant  kAudioUnitSubType_SystemOutput
               - desktop only
           A specialisation of AUHAL that is used to track the user's selection of the
           device to use for sound effects, alerts and other UI sounds.
You didn't specify how you're setting up your output (AUGraph?), so the way to use kAudioUnitSubType_HALOutput varies.
I have a WPF+SharpDX Windows application that displays to the OSVR HDK via a fullscreen window on the screen that is the HDK. This setup works well, but it requires users to state which screen the HDK is on.
I would like that to be detected automatically, but I haven't seen anything in the API that says which screen is the headset.
Currently I render in a window:
var bounds = dxgiDevice.Adapter.Outputs[_selectedOutput].Description.DesktopBounds;
form.DesktopBounds = new System.Drawing.Rectangle(
bounds.X, bounds.Y, bounds.Width, bounds.Height);
And _selectedOutput is the thing I'm looking for.
I don't support direct mode at this time and I'm using Managed-OSVR. The application will run on Windows 8/8.1/10.
It's been a while since I coded anything for OSVR, but here's what I remember:
If you're running in extended mode, the OSVR is treated as a regular display. You can rearrange it as any other screen. The output location can be configured in the OSVR config file.
I used the following (Java) to retrieve the position and size to set up my window:
osvrContext.getRenderManagerConfig().getXPosition()
osvrContext.getRenderManagerConfig().getYPosition()
osvrContext.getDisplayParameters().getResolution(0).getWidth()
osvrContext.getDisplayParameters().getResolution(0).getHeight()
To clarify: I don't know if you can retrieve the id of the display in extended mode. From what I know, it's only defined as a position and size on the desktop.
I hope that helps you somewhat.
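If it helps, putting those calls together (a Java sketch, matching the snippets above; osvrContext is assumed to be an initialized OSVR context object, and the getter names are taken from this answer rather than verified against any particular binding):

// Build the HDK's rectangle from the OSVR config...
int x = osvrContext.getRenderManagerConfig().getXPosition();
int y = osvrContext.getRenderManagerConfig().getYPosition();
int w = osvrContext.getDisplayParameters().getResolution(0).getWidth();
int h = osvrContext.getDisplayParameters().getResolution(0).getHeight();

// ...then find which desktop screen contains that position; the matching
// index plays the role of _selectedOutput from the question.
java.awt.GraphicsDevice[] screens =
        java.awt.GraphicsEnvironment.getLocalGraphicsEnvironment().getScreenDevices();
for (int i = 0; i < screens.length; i++) {
    if (screens[i].getDefaultConfiguration().getBounds().contains(x, y)) {
        System.out.println("HDK appears to be screen index " + i);
    }
}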
I have a Windows 7 system, a regular monitor as the primary display (serving as a desktop, etc.), and an additional screen attached to the same graphics card.
I want to write a program that takes control of the secondary display and uses it for fullscreen OpenGL rendering. I tried to enumerate displays with EnumDisplaySettings, pick the secondary display, create a device context associated with the display, set the pixel format on the DC, and create a WGL context associated with it. I can get this far without errors, but then the call to wglMakeCurrent fails for no apparent reason (return value is 0, GetLastError() is 0, and OpenGL does not function.)
The only way I could get it to work is to extend the desktop onto the secondary display (manually, from Windows display settings), create a window and move it onto the secondary display. Which is tolerable but undesirable (I don't want the secondary display to interfere with the desktop. For example, in this setup, I can move the mouse cursor from the desktop into the secondary display.) Is there a way to avoid this?
More generally, in order to get OpenGL to work on a display, do I need (1) to have the display attached to the desktop (or "a" desktop?), and/or (2) to have a window of my own on that display?
P.S. It seems that I might be able to get this to work with a third-party library such as glfw3, but I don't want extra baggage (I don't need 90% of functionality of glfw3) and I'd prefer to get this done directly through native API calls if possible.
Unfortunately the Windows graphics driver model does not allow displays to be used independently. You will have to extend the desktop to the second display and create a fullscreen window on it. When it comes to constraining the mouse, the usual way is to hook into the system mouse events and, whenever the pointer is moved into the secondary screen, move it back to the primary screen.
I was wondering if it is possible in Cocoa/Carbon to detect whether a key combination (e.g. Ctrl + Z) comes from a Wacom button or the keyboard itself.
Thanks
best
xonic
I can only assume a Wacom tablet's driver is faking keyboard events that are bound to specific buttons. If this is the case, I don't think you'll be able to distinguish them as -pointingDeviceID, -tabletID, and friends are only valid for mouse events (which a keyboard event - faked or real - is not).
For the "Express Keys", Wacom provides custom events with the driver version 6.1+
From the Wacom developer docs:
WacomTabletDriver version 6.1.0 provides a set of Apple Events that enable applications to take control of tablet controls. There are three types of tablet controls: ExpressKeys, TouchStrip, and TouchRing. Each control has one or more functions associated with it. Do not make assumption of the number of controls of a specific tablet or the number of functions associated with a control. Always use the APIs to query for the information.
An application needs to do the following to override tablet controls:
Create a context for the tablet of interest.
Register with the distributed notification center to receive the overridden controls’ data from user actions.
Query for number of controls by control type (ExpressKeys, TouchStrip, or TouchRing).
Query for number of functions of each control.
Enumerate the functions to find out which are available for override.
Set override flag for a control function that’s available.
Handle the control data notifications to implement functionality that the application desires for the control function.
Must destroy the context upon the application’s termination or when the application is done with it.
To create an override context for a tablet, send to the Tablet Driver an Apple Event of class / type {kAECoreSuite, kAECreateElement} with the keyAEObjectClass Param of the Apple Event filled with a DescType of cContext, the keyAEInsertHere Param filled with an object specifier of the index of the tablet (cWTDTablet) and the keyASPrepositionFor Param filled with a DescType of pContextTypeBlank.
To destroy a context, send to the Tablet Driver an Apple Event of class / Type {kAECore, kAEDelete} with the keyDirect Apple Event Parameter filled with an object specifier of the context’s (cContext) uniqueID (formUniqueID).
Most of this only makes sense in the context of the documentation page, where lots of C structs and helper functions are defined for both Carbon and Cocoa. (This particular part of the docs is pretty far down.)