MediaDevices.getUserMedia throws exception on Windows when Zoom or MSTeams desktop clients are running (with camera on) - microsoft-teams

Environment
Browser: Chrome 87.0.4280.141
OS: Windows 10 Home
Zoom Version: 5.4.6(59296.1207)
I have a website that can access the user's camera and take a short video on request. I am attempting to achieve this using the MediaDevices Web API.
This is all working fine except in two scenarios. When I am in a Zoom or MS Teams meeting on my Windows laptop (with the camera on), my web app fails to capture my video. If I use the web clients for Zoom or MS Teams, it works as expected. It also works fine if I use macOS instead of my Windows laptop.
When I debug this I get the following error message thrown when trying to access userMedia.
DOMException: Could not start video source
The code that I am using to access UserMedia is the following:
return await navigator.mediaDevices.getUserMedia({video: true});
Is there anything I can do to allow me to use my webcam in the browser as well as in the MS Teams or Zoom clients?

Is there anything I can do to allow me to use my webcam in the browser as well as in the MS Teams or Zoom clients?
No, sorry to say.
The native videoconference client programs attach to your webcam, and so does the browser when you use gUM. The first process to attach wins.
A second webcam may solve your problem by letting your videoconference program use one camera and your browser use the other. If you use Chrome, pick the camera you want the browser to use from the pull-down menu on this page:
chrome://settings/content/camera
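If you only have one camera, about the best you can do is fail gracefully. Below is a minimal sketch (not from the original answer) that tries each video input in turn and reports when every device is busy; it relies on Chrome surfacing "Could not start video source" as a NotReadableError.

// Try each available camera and fall through to the next one when a
// device is already claimed by another app (e.g. the Zoom or Teams
// desktop client). Note: device ids may be blank until the user has
// granted camera permission at least once.
async function getCameraStream() {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const cameras = devices.filter(d => d.kind === 'videoinput');
  for (const camera of cameras) {
    try {
      return await navigator.mediaDevices.getUserMedia({
        video: { deviceId: { exact: camera.deviceId } }
      });
    } catch (e) {
      // NotReadableError ("Could not start video source") means the
      // device is busy; try the next camera instead of giving up.
      if (e.name !== 'NotReadableError') throw e;
    }
  }
  throw new DOMException('All cameras are in use by another application',
                         'NotReadableError');
}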

Related

How can a mobile Firefox add-on receive information from its desktop version? (if both are logged in)

I want to be able to close tabs in my mobile Firefox while using the desktop version of Firefox. I thought Tab Sync would make it work, but it doesn't, so now I'm on a quest to fix it with an extension.
I have thought about using the sync storage area (https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/API/storage/sync), but it's not supported on Android.
Do I have to build some external service, send commands from the desktop version to that service, and have the mobile Firefox extension poll it, or is there a better way to use the fact that I'm logged in on both of these devices?
Any ideas please?
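For the external-service approach the question describes, a hedged sketch is below. The endpoint https://example.com/commands and the response shape are assumptions, not a real API, and it assumes an extension with the tabs permission; whether the tabs API behaves identically on Firefox for Android should be verified.

// Mobile-side polling loop: ask a hypothetical external service which
// tab URL patterns should be closed, then close any matching tabs.
const POLL_INTERVAL_MS = 30 * 1000;

async function pollForCloseCommands() {
  const response = await fetch('https://example.com/commands');
  const { urlsToClose } = await response.json(); // e.g. ["https://news.example/*"]
  for (const pattern of urlsToClose) {
    const tabs = await browser.tabs.query({ url: pattern });
    await browser.tabs.remove(tabs.map(t => t.id));
  }
}

setInterval(pollForCloseCommands, POLL_INTERVAL_MS);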

Square POS Web API Android browser error

We're having an issue with the Square POS Web API on Android.
We are trying to implement a kiosk app and the kiosk browser is Android based.
If run in the Chrome browser, the Square app loads fine with no error; however, if loaded in another Android browser (e.g. Dolphin), it fails with the following error.
Point of Sale API must be started with startActivityForResult() in the same task. It looks like the caller either used startActivity() or used startActivityForResult() from a finished activity or with the FLAG_ACTIVITY_NEW_TASK flag.
How can we fix this so the app can be used in kiosk apps that use the Android browser?
Square POS Web API only works with Chrome intents. Therefore you must use the Chrome browser with your Android device. We are in the process of updating our documentation to reflect this information, sorry for any confusion!
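Given that answer, about all a kiosk page can do is detect up front that it is not running in Chrome and tell the operator, rather than letting the intent fail. A rough, user-agent-based sketch (the message text is just an example, and UA sniffing is only a best-effort guard):

// Warn early if we are not in Chrome on Android, since the Square
// Point of Sale Web API relies on Chrome intents.
function isChromeOnAndroid() {
  const ua = navigator.userAgent;
  return /Android/.test(ua) && /Chrome\/\d+/.test(ua) && !/OPR|SamsungBrowser/.test(ua);
}

if (!isChromeOnAndroid()) {
  alert('Please open this kiosk page in the Chrome browser to take payments.');
}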

How to make an app ready for 'ovrweb' protocol (to be viewed in Gear VR)?

I have a web app that uses ThreeJS. I am currently trying to include WebVR to be used with Gear VR.
I am aware that I need to link to that web app using the ovrweb protocol in order to open it in Gear VR. My problem is that it does not open.
Whenever I use window.location.href = "ovrweb:http://my-app-url", I am asked to attach the device to Gear VR. But once I do so, the screen remains black. I noticed that the same thing happens whenever I use some non-VR webpage as the URL (like ovrweb:https://www.google.com).
However the ovrweb protocol works fine as expected with certain URLs - such as ovrweb:https://playcanv.as/p/VNTAx5Eu/.
I am not sure what I am missing. My app has a VR button; on clicking it, the display.requestPresent API call gets fired and the screen splits into two (this works in Chrome Canary). Is there any list of requirements that my app needs to satisfy to be recognized via the ovrweb protocol? If so, what are they?
I went through the Oculus docs, but did not find anything that could help me. How do I make my app run via ovrweb protocol?
Update: I found that ThreeJS example links (such as https://threejs.org/examples/webvr_rollercoaster.html) are not working over ovrweb protocol either.
Okay, found the solution myself.
Whenever we try to use ovrweb protocol, the device will try to open the URL provided in its Carmel Developer Preview browser (different from the "internet browser" one can use when inside Oculus Home experience).
Now the Carmel Developer Preview supports only "3D" websites. Navigating to 2D websites is not currently supported in Carmel Developer Preview. (https://developer.oculus.com/documentation/vrweb/latest/concepts/carmel-launching-content/). That's why my own web app, as well as links such as google.com, appeared black.
So all that I had to do was simply trigger display.requestPresent at the very start - thereby differentiating it from a 2D website.
Now display.requestPresent does not work without some kind of user interaction (such as the click of a button). The same is true of other JS APIs (such as fullscreen), for security reasons.
However it seems like navigating to a link over ovrweb protocol and thereby viewing it in Carmel Developer Preview also satisfies the user interaction condition. Hence now my VR scene is perfectly visible in Gear VR.
This solution should also work in the ThreeJS webvr examples (including the rollercoaster example linked in OP).
All one needs to do is trigger this snippet on page load.
display.requestPresent( [ { source: canvas } ] )
  .then( function () {
    // Presenting to the WebVR display
  }, function ( e ) {
    // On error
  } );
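The snippet assumes that display and canvas already exist in scope. A hedged sketch of the wiring, assuming a three.js setup where the WebGL renderer is called renderer (that name is a placeholder for whatever your app already has):

// Obtain the VR display via the WebVR 1.1 API and present the
// three.js canvas to it on page load.
navigator.getVRDisplays().then( function ( displays ) {
  if ( !displays.length ) return;      // no Gear VR / WebVR display found
  var display = displays[ 0 ];
  var canvas = renderer.domElement;    // the three.js WebGL canvas
  display.requestPresent( [ { source: canvas } ] )
    .then( function () { /* presenting in Carmel / Gear VR */ } )
    .catch( function ( e ) { console.error( 'requestPresent failed', e ); } );
} );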

How to detect if Safari power saver mode is enabled?

I'm experiencing issues with the YouTube player failing to load when Power Saver mode is enabled in Safari 6.1 and 7 on OS X. The issue doesn't happen if the YouTube user is using the experimental HTML5 player, but that is still in beta and most people are still using the Flash player. The "disable plugins to save power" option is on by default in most new versions of Safari, and this causes the YouTube iFrame API to enter an endless loop as it tries to initialize the player.
Is there any attribute on the window or navigator objects that would possibly indicate that the power save mode is enabled so that I can warn users?
This issue is semi-intentional. The Power Saver mode in Safari deliberately stops flash content. You can read more about it in this article.
If the flash content is 'front and centre' (within a 3000 x 3000 pixel boundary starting at the top left corner of the document) it should still play. So it may help, if the youtube video is off to the side of the page, to try and centre it. Apple says content will not play if it is in the margins (see this page under the Safari Power Saver heading).
Well, I do not think there is any readable JS property that exposes that; if there were, Apple's design would be flawed, because Safari users would just get nagged to disable that mode in order to have web sites work "properly"...
What you could do, of course, is make a server call from your web site via Flash, and then try to read the changed session variable via JavaScript; if the variable never changes, you know the Flash content was blocked.
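A purely client-side variant of the same idea, as a hedged sketch: embed a tiny SWF (flash-ping.swf here is hypothetical) that calls back into JavaScript via ExternalInterface as soon as it starts running; if the callback never fires within a few seconds, assume the plugin was blocked and show a warning.

// The hypothetical SWF calls window.onFlashPing() once it is running.
// If that never happens, assume Power Saver (or click-to-play) blocked
// the plugin. The warning element id is a placeholder.
var flashStarted = false;
window.onFlashPing = function () { flashStarted = true; };

setTimeout( function () {
  if ( !flashStarted ) {
    document.getElementById( 'powersaver-warning' ).style.display = 'block';
  }
}, 3000 );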

Remote Screen Sharing in realtime like SharedView, TeamViewer

What technologies would I need to know to write an app like the now defunct Microsoft SharedView or something like TeamViewer? Any way to do it with a browser and not need a client app?
I'm a .NET developer, but I figure I'd need to know C++ or driver-level stuff?
How would you stream the users desktop to another user? How do you even capture it in realtime?
I can imagine how you could take screenshots of the desktop and transfer them, but how do you capture live video of the screen or of an application and stream it to another user?
There are many apps that do this: Skype, GotoMeeting, TeamViewer, SharedView, Citrix, logmein, etc. but I'd like to write my own.
How would I get this to work on Windows, tablets, droids, etc...?
The browser seems to be a good platform for this, but there are some limitations:
1 - Flash doesn't work at all on iOS and is not widely available on Android.
2 - WebRTC works with Chrome, Firefox and Opera on Mac/PC/Linux, and with Firefox/Chrome on Android. There are libraries for using WebRTC from an iOS native app (in Objective-C). Screen sharing, on the other hand, only works with Chrome (PC/Mac/Linux); there is work in progress in Firefox.
3 - Installation of browser plugins will be hard, if not impossible, on various platforms, but it does open some possibilities: on Chrome and Firefox you can write extensions in JavaScript. For example, a JavaScript extension can share a tab in Chrome.
Using JavaScript you can stream from a desktop to any other desktop or Android device; a sketch of the Chrome extension approach is below.
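A hedged sketch of that Chrome-extension path: with the desktopCapture permission, an extension page can ask the user to pick a screen or window and turn the result into a MediaStream for WebRTC. The peerConnection name is a placeholder for whatever connection and signaling code you already have.

// Ask the user to pick a screen or window, then turn the returned id
// into a MediaStream that can be sent to the remote viewer over WebRTC.
chrome.desktopCapture.chooseDesktopMedia(['screen', 'window'], function (streamId) {
  if (!streamId) return; // user cancelled the picker
  navigator.webkitGetUserMedia({
    audio: false,
    video: {
      mandatory: {
        chromeMediaSource: 'desktop',
        chromeMediaSourceId: streamId
      }
    }
  }, function (stream) {
    // e.g. peerConnection.addStream(stream) to send it to the viewer
  }, function (err) {
    console.error('Screen capture failed', err);
  });
});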

Resources