Screen capture of web conference camera with Teams client SDK app - microsoft-teams

I would like to add a function to Teams that captures the camera video of the Teams web conference and saves it as an image file.
Since it needs to work across devices such as PCs, tablets, and smartphones, the capture function must work with the Teams app alone, without relying on other apps.
The camera image I want to capture is not my own, but the other party's camera feed in the web conference.
Is it possible to develop such a function with the Teams client SDK?
Or do I need another method?
I looked at the Microsoft Teams SDK reference, but as far as I could see, I couldn't find a method that would allow an app to access the web conference camera video.

Related

What kind of UWP app to make for Windows camera filtering app?

The idea is to create an app that can read any local camera device, filter it, then have a new camera "virtual device" other apps can use as input.
How can this be done with UWP?
Only the first part of your requirements can be done in a UWP app: reading any local camera device and filtering it. UWP can connect to the local camera to capture photos and video, and you can also modify the media content as you want. More information is available here: Camera - UWP.
But UWP apps are not able to expose the camera to other apps as a virtual device. You would need to try other approaches, such as Win32, to achieve this.

MediaDevices.getUserMedia throws exception on Windows when Zoom or MSTeams desktop clients are running (with camera on)

Environment
Browser: Chrome 87.0.4280.141
OS: Windows 10 Home
Zoom Version: 5.4.6(59296.1207)
I have a website that can access the user's camera and take a short video on request. I am attempting to achieve this using the MediaDevices web API.
This is all working fine except in two scenarios. When I am in a Zoom or MS Teams meeting on my Windows laptop (with camera on), I have noticed that my webapp fails to capture my video. If I use the web clients for zoom or msteams then it works as expected. Also, if I use mac OS instead of my Windows laptop then this works fine.
When I debug this I get the following error message thrown when trying to access userMedia.
DOMException: Could not start video source
The code that I am using to access UserMedia is the following:
return await navigator.mediaDevices.getUserMedia({video: true});
Is there anything I can do to allow me to use my webcam in the browser as well as in the MS Teams or Zoom clients?
No, sorry to say.
The native videoconference client programs attach to your webcam, and so does the browser when you call getUserMedia. The first process to attach wins.
A second webcam may solve your problem by letting your videoconference program use one camera and your browser use the other. If you use Chrome, pick the camera you want your browser to use from the pulldown menu on this page:
chrome://settings/content/camera
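Besides the settings page, the camera can also be pinned programmatically: enumerate the available video inputs and request a specific one by deviceId, so the browser and the conferencing client each hold a different device. This is a minimal sketch; the "pick the second camera" rule is only illustrative, and a real app would let the user choose.

```javascript
// Build a getUserMedia constraints object that pins capture to one camera.
function constraintsForDevice(deviceId) {
  return { video: { deviceId: { exact: deviceId } } };
}

// Browser-only part: enumerate cameras and open one the conference
// client is (hopefully) not holding.
async function openSecondCamera() {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const cameras = devices.filter((d) => d.kind === "videoinput");
  if (cameras.length < 2) {
    throw new Error("Only one camera found - it may be held by the conference client.");
  }
  // Illustrative choice: simply take the second camera in the list.
  return navigator.mediaDevices.getUserMedia(constraintsForDevice(cameras[1].deviceId));
}
```

With `deviceId: { exact: ... }` the browser fails rather than silently falling back to the camera the conference client already owns, which makes the contention visible instead of confusing.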

square point of sale/connect api - 2 devices?

I work for a repair shop that recently decided to implement a square chip/card reader. Up to this point, we have been manually entering credit card numbers into our internal silverlight app running on a pc on our domain.
We ordered the square point of sale system that includes chip reader, ipad, stand, the whole bundle.
We looked at the square pos api, which targets iOS and Android platforms, but also has a Web api for non-native apps. The api is supposed to be able to switch control from our browser based app to the square app to allow a customer to swipe their card, and upon completion, the api switches control back to our app.
Since the system is based on web api calls, we envisioned initiating the sale from our app running on our pc by calling the api, and control would be passed to the square app on the ipad.
However, when we contacted square about this system, we were advised that our app and the square app had to be running on the same device (the ipad in this case).
But it's a web api. Geographical separation shouldn't matter.
Has anyone implemented a similar architecture with a square device? Or does anyone have a potential workaround?
The web request must be initiated from the same device because it uses native app linking/intents to switch between the browser and the Square Point of Sale app. You cannot currently initiate a transaction on a different device. The "web" in "web API" means that you are starting from a website (as opposed to a native application), not that the request travels over the web.
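To make the app-linking point concrete: the "API call" is really just a URL with a custom scheme that the device's OS routes to the locally installed Square Point of Sale app. The sketch below follows the pattern of Square's documented iOS Web API at the time; treat the scheme, field names, and version string as illustrative and verify them against Square's current documentation.

```javascript
// Sketch: build the app-switch URL for a charge. Opening this URL (e.g. via
// window.location) only works on a device where the Square POS app is
// installed - which is why a PC cannot trigger the transaction on the iPad.
function buildSquareChargeUrl(params) {
  const data = {
    amount_money: { amount: params.amountCents, currency_code: params.currency },
    callback_url: params.callbackUrl, // Square switches back here when done
    client_id: params.clientId,
    version: "1.3",                   // illustrative API version
  };
  return (
    "square-commerce-v1://payment/create?data=" +
    encodeURIComponent(JSON.stringify(data))
  );
}
```

Because the whole handoff is an OS-level URL open rather than an HTTP request, there is no network hop a remote PC could substitute for; a workaround would need something running on the iPad itself (e.g. a thin web page that builds and opens this URL).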

getUserMedia video quality

I am working on a tablet (HP) with Windows 8.1. We developed a web application, accessed from the tablet with the Chrome browser, which accesses the tablet's webcam using the getUserMedia API (the implementation is simple, based on JavaScript, similar to the one here for example: https://davidwalsh.name/demo/camera.php).
Our application will be used to take photos of identity cards, and then submit them to a servlet.
The quality of the picture taken inside the browser, using the getUserMedia API, is quite poor, and the letters on the identity cards are sometimes not easily readable in the image.
If I use the "Camera" application from Windows 8.1 on the same tablet, and take pictures of the same identity cards, in the same light conditions and from the same distance, the resulting images (JPEG) are very clear.
Why is this difference in quality? I read all about the getUserMedia API, and I tried all the available parameters (constraints, width, height, jpeg quality), but I cannot obtain a good quality image.
Why does the same camera on the same tablet produce such a quality difference in the browser versus the Windows camera application, and is there a way to obtain better quality in the browser (e.g. by developing a custom plugin)?
To answer your question "Why is this difference in quality?": in short, the browser emulates the camera feed and does image transformation under the hood so that it can send different streams to different clients. WebRTC's main focus is P2P media streaming, not taking high-quality photos.
You can use ImageCapture to access more camera properties (and to get the frame as an ImageBitmap), but browser support is still very weak.
Please read my answers below, which go into more depth on MediaCapture and ImageCapture and their use cases:
Why the difference in native camera resolution -vs- getUserMedia on iPad / iOS?
Take photo when the camera is automatically focused
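As a sketch of the ImageCapture route mentioned above: handing the video track to ImageCapture lets `takePhoto()` use the camera's still-photo pipeline instead of the downscaled video stream. Browser support varies, and the resolution values here are only an illustrative target.

```javascript
// "ideal" (rather than "exact") lets the browser fall back gracefully
// if the camera cannot deliver the requested resolution.
function highResConstraints(width, height) {
  return { video: { width: { ideal: width }, height: { ideal: height } } };
}

// Browser-only: open the camera, take a full-quality still, release the camera.
async function takeStill() {
  const stream = await navigator.mediaDevices.getUserMedia(
    highResConstraints(4096, 2160) // illustrative target resolution
  );
  const track = stream.getVideoTracks()[0];
  const imageCapture = new ImageCapture(track);
  const blob = await imageCapture.takePhoto(); // still image as a Blob
  track.stop();
  return blob; // e.g. POST to the servlet via fetch() with FormData
}
```

Where ImageCapture is unavailable, the fallback is drawing the `<video>` element to a `<canvas>` and calling `toBlob("image/jpeg", quality)`, but that is limited to the video stream's resolution, which is exactly the quality problem described above.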

CameraCaptureTask in Windows Phone 8 - auto saves to camera roll

I have an existing application developed for Windows Phone 7, which uses CameraCaptureTask.
The captured image is returned back to the app, which will be processed for grayscale conversion.
While testing the same app (same binary to be precise) in Windows Phone 8 Lumia 920, I figured that a copy of all the images captured through the CameraCaptureTask are saved in "camera roll" folder.
This is a bit annoying as the users of my app are not expecting the captured images crowding the "camera roll" folder. I looked up the documentation http://msdn.microsoft.com/en-us/library/windowsphone/develop/hh394006(v=vs.105).aspx and found the below quote,
On Windows Phone 8, if the user accepts a photo taken with the camera capture task, the photo is automatically saved to the phone’s camera roll. On previous versions of Windows Phone, the photo is not automatically saved.
So far I couldn't find a way to avoid this case in Windows Phone 8.
Is there a way to turn off this feature before calling the CameraCaptureTask's Show() method in Windows Phone 8?
No. This is a consumer feature request implemented on WP8 that's transparent to developers. The use case here is that a consumer uses the CameraCaptureTask to line up a perfect shot, then doesn't use it in the app for whatever reason and can't ever find the photo again later.
As a side-note, I actually had this happen a few times to me when using various twitter and photo editing apps and it's quite annoying.
Makes no sense. CameraCaptureTask was created to allow apps to capture photos for the app's own use, not to push them into the Camera Roll. That is what Lenses are for (and custom code can write into the camera roll as well).
It is not transparent to developers, because one of my apps has just been removed from the WP8 marketplace. They say it is because it "can cause undesired upload of an app photo to SkyDrive".
Justin, are you sure it isn't a bug? Is it going to be fixed?
This forces me to split my development in two now: WP7 and WP8. I don't want that hassle right now...
