Way to Create Virtual Desktops and Stream Their Contents - macOS

I'm looking for a way to programmatically create multiple "virtual" desktops and stream their contents to an arbitrary consumer.
Essentially, what macOS APIs are these guys https://cindori.org/vrdesktop/ using to create those virtual desktops, and how do they get a video stream of them?
Just looking for guidance to the right APIs/docs. No code required :)

You'll need to create a kernel extension that simulates a graphics adapter, essentially just rendering everything into a framebuffer. The framebuffer can then be exported as a video stream or whatever you choose.
You can find example source code to look at here:
https://github.com/tSoniq/displayx
https://github.com/andreacremaschi/Syphon-virtual-screen/tree/develop
https://github.com/mkernel/EWProxyFramebuffer
https://code.google.com/archive/p/ioproxyvideofamily/source/default/source
Please note that these projects have not been kept up to date with the latest macOS versions. Later versions of macOS introduced things such as mandatory kernel extension signing, which makes it harder for hobby developers to produce kernel extensions that can be shared for free on the internet. It also makes things harder for malware authors, which is the upside.
So take a look at these source repositories and you'll find your guidance - but don't expect them to be complete solutions.
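To make the shape of those projects concrete, here is a minimal sketch of the core technique they share: an IOFramebuffer subclass that makes WindowServer treat a block of plain memory as a display. The class name and the 1080p geometry are invented for illustration; a real kext also needs the remaining pure-virtual overrides, an OSDefineMetaClassAndStructors line in the .cpp, an Info.plist, and (on modern macOS) a signing certificate.

```cpp
// Sketch only - kernel C++ is a restricted dialect (no exceptions/RTTI).
#include <IOKit/graphics/IOFramebuffer.h>
#include <IOKit/IOBufferMemoryDescriptor.h>

class VirtualFramebuffer : public IOFramebuffer
{
    OSDeclareDefaultStructors(VirtualFramebuffer)

    IOBufferMemoryDescriptor *fVRAM;  // the fake VRAM user space reads from

public:
    IOReturn enableController() override
    {
        // No hardware to bring up: just allocate wired memory for one
        // 1920x1080 32-bit frame and let WindowServer draw into it.
        fVRAM = IOBufferMemoryDescriptor::withCapacity(1920 * 1080 * 4,
                                                       kIODirectionInOut);
        return fVRAM ? kIOReturnSuccess : kIOReturnNoMemory;
    }

    IODeviceMemory *getApertureRange(IOPixelAperture) override
    {
        // Present the buffer as if it were a hardware aperture. A userland
        // helper can map the same memory and encode it as a video stream.
        return IODeviceMemory::withRange(fVRAM->getPhysicalAddress(),
                                         fVRAM->getCapacity());
    }

    // Also required: getPixelFormats, getDisplayModeCount, getDisplayModes,
    // getInformationForDisplayMode, getPixelInformation,
    // getCurrentDisplayMode, ... (see the repositories above for complete
    // implementations).
};
```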

Related

Where can I find an example of creating a FaceTime comparable camera in OSX

Many of us are working from home more, and I envy the Windows guys who have a virtual webcam plugin for OBS (Open Broadcaster Software). OBS and the Windows plugin are open source projects. As a competent software engineer I should be able to create a plugin that works on OSX - but I am not a hardened OSX dev.
I am sure I am not googling for the correct APIs and subsystems. If someone could help me with the Apple concept map to this obscure topic, I would be grateful for a trail of crumbs leading to the OSX API calls that create a camera. I know it can be done, as SnapCam does it, but that is a closed-source app.
I am aware of the workaround for OBS that:
1) uses code injection and requires disabling security features
2) doesn't even work in current versions of OSX
3) requires yet another app running, with video previews etc.
I like the challenge of creating this plugin. I am also wise enough to try and ask for a road map if one is available.
Someone beat me to it. https://github.com/johnboiles/obs-mac-virtualcam
I thought I would search with the query "virtual camera macos site:github.com". Constraining the search to just GitHub was quite useful.
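For anyone mapping the concepts: obs-mac-virtualcam implements the camera as a CoreMediaIO DAL plug-in, a CFPlugIn bundle installed under /Library/CoreMediaIO/Plug-Ins/DAL/ that vends a CMIOHardwarePlugInInterface. A hedged sketch of just the entry point follows; GetVirtualCamPlugIn and the factory name are hypothetical, and the real work is filling in the interface table (device and stream callbacks such as DeviceStartStream and StreamCopyBufferQueue).

```cpp
#include <CoreMediaIO/CMIOHardwarePlugIn.h>

// Hypothetical: returns your filled-in CMIOHardwarePlugInInterface table.
extern CMIOHardwarePlugInRef GetVirtualCamPlugIn(void);

// CFPlugIn factory, named in the bundle's Info.plist under CFPlugInFactories.
// Host apps call it when scanning the DAL plug-in folder.
extern "C" void *VirtualCamFactory(CFAllocatorRef allocator,
                                   CFUUIDRef requestedTypeUUID)
{
    if (CFEqual(requestedTypeUUID, kCMIOHardwarePlugInTypeID))
        return GetVirtualCamPlugIn();
    return NULL;
}
```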

Virtual Driver Cam not recognized by browser

I'm playing with the "Capture Source Filter" from http://tmhare.mvps.org/downloads.htm.
After registering the .ax filter, I'm trying to understand its compatibility across applications that use video sources.
For example, Skype recognizes it while browsers (Edge, Chrome) don't.
I wonder whether it's a limitation of the approach used (a DirectShow filter) or just a matter of configuration.
The purpose of the question is to understand whether that approach is still useful or it's better to move on to Media Foundation.
I described this here: Applicability of Virtual DirectShow Sources
Your virtual camera and the applications capable of recognizing and picking it up are highlighted in green in the figure below.
... if that approach is still useful or it's better to move on Media Foundation.
Media Foundation does not even have a concept of a virtual video source, and it has no compatibility layer to connect to DirectShow video sources. In the other direction, DirectShow applications won't see virtual Media Foundation streams either (again, because no compatible concept exists in the first place).
If you want to expose your video source to all applications, you need a driver for this (see the red box in the figure above). Applications exist out there that implement this concept, though writing a new one from the ground up is nowhere near as easy as the DirectShow virtual source you referenced in your question.
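To make the split concrete, here is a sketch of how a Media Foundation application discovers cameras. A DirectShow-only virtual filter never appears in this list, because MFEnumDeviceSources only returns sources backed by a real capture driver.

```cpp
#include <mfapi.h>
#include <mfidl.h>
#include <cstdio>
#pragma comment(lib, "mf.lib")
#pragma comment(lib, "mfplat.lib")
#pragma comment(lib, "mfuuid.lib")

int main()
{
    CoInitializeEx(NULL, COINIT_APARTMENTTHREADED);
    MFStartup(MF_VERSION);

    // Ask specifically for video capture devices.
    IMFAttributes *attrs = NULL;
    MFCreateAttributes(&attrs, 1);
    attrs->SetGUID(MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE,
                   MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_GUID);

    IMFActivate **devices = NULL;
    UINT32 count = 0;
    MFEnumDeviceSources(attrs, &devices, &count);

    for (UINT32 i = 0; i < count; ++i) {
        WCHAR *name = NULL;
        UINT32 len = 0;
        devices[i]->GetAllocatedString(
            MF_DEVSOURCE_ATTRIBUTE_FRIENDLY_NAME, &name, &len);
        wprintf(L"MF sees: %s\n", name);
        CoTaskMemFree(name);
        devices[i]->Release();
    }
    CoTaskMemFree(devices);
    attrs->Release();
    MFShutdown();
    CoUninitialize();
    return 0;
}
```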
Further reading on MSDN: How to register a live media source - Media Foundation
Though it is technically possible to write a virtual driver that shows up as a capture device, policies will probably prevent this. In Media Foundation, a device must have a certificate to appear as a capture device, and so far only actual hardware devices going through the USB video class driver have been certified. Supporting a scheme through a scheme handler, or a file type with a byte-stream handler, is the way to expose a new source to applications.
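A sketch of that scheme-handler route, under the assumption that you implement a COM object exposing IMFSchemeHandler and register its CLSID for your scheme under HKLM\SOFTWARE\Microsoft\Windows Media Foundation\SchemeHandlers. Class and scheme names here are invented; the class factory and registration code are omitted.

```cpp
#include <mfidl.h>

// Skeleton of a handler that would resolve a custom URL scheme (say,
// "virtcam://") to your own IMFMediaSource. Only the Media Foundation
// surface is shown; the actual source creation is left as comments.
class CVirtSchemeHandler : public IMFSchemeHandler
{
    LONG m_refCount = 1;

public:
    // IUnknown
    STDMETHODIMP QueryInterface(REFIID riid, void **ppv)
    {
        if (riid == IID_IUnknown || riid == __uuidof(IMFSchemeHandler)) {
            *ppv = static_cast<IMFSchemeHandler *>(this);
            AddRef();
            return S_OK;
        }
        *ppv = NULL;
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef() { return InterlockedIncrement(&m_refCount); }
    STDMETHODIMP_(ULONG) Release()
    {
        ULONG n = InterlockedDecrement(&m_refCount);
        if (n == 0) delete this;
        return n;
    }

    // IMFSchemeHandler
    STDMETHODIMP BeginCreateObject(LPCWSTR pwszURL, DWORD dwFlags,
                                   IPropertyStore *pProps,
                                   IUnknown **ppCancelCookie,
                                   IMFAsyncCallback *pCallback,
                                   IUnknown *punkState)
    {
        // Create your custom IMFMediaSource for pwszURL here, then complete
        // the async call via MFCreateAsyncResult / MFInvokeCallback.
        return E_NOTIMPL;  // skeleton only
    }

    STDMETHODIMP EndCreateObject(IMFAsyncResult *pResult,
                                 MF_OBJECT_TYPE *pObjectType,
                                 IUnknown **ppObject)
    {
        *pObjectType = MF_OBJECT_MEDIASOURCE;  // we hand back a media source
        return E_NOTIMPL;
    }

    STDMETHODIMP CancelObjectCreation(IUnknown *pCancelCookie)
    {
        return E_NOTIMPL;
    }
};
```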

Low-latency audio playback from Ruby

I'm building an audio application in Ruby which needs low latency audio playback. So far, I'm using SDL, which is great for a prototype, but it's got nowhere near enough performance for what I need.
I've tried using the ruby-jack gem, but it doesn't seem complete enough to inject any audio into a playback port (and the documentation is wildly incomplete).
If it changes much, I'm on OS X (but I'd like something that's decently cross-platform), and I'm (currently) playing back small WAV files, but more formats would be better. I don't especially want to call out to a system application to do this, either.
My application's full source is available on Github; the salient features of it are in a gist, for those who want to have a look.
I'm not too certain I have the correct answer for you, but I believe it may be worth your time to look into rbSFML. It is a binding for SFML, a multimedia library that has been growing in popularity.
Go here for rbSFML
http://groogy.se/mainsite/rbsfml/
SFML main page
http://www.sfml-dev.org/
Wish I had more information for you!
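For reference, rbSFML is a thin binding over SFML's C++ audio module, so the calls it wraps look roughly like this minimal sketch ("sample.wav" is a placeholder file name; playback is non-blocking):

```cpp
#include <SFML/Audio.hpp>
#include <SFML/System.hpp>

int main()
{
    sf::SoundBuffer buffer;
    if (!buffer.loadFromFile("sample.wav"))  // SFML also reads OGG and FLAC
        return 1;

    sf::Sound sound;
    sound.setBuffer(buffer);
    sound.play();  // returns immediately; mixing happens on its own thread

    // Keep the process alive until playback finishes.
    while (sound.getStatus() == sf::Sound::Playing)
        sf::sleep(sf::milliseconds(10));

    return 0;
}
```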

Capture sound output on mac

I am trying to port my screensaver from Windows to Mac, and one of its features was reacting to system sound output. On Windows it was easy using DirectSound, but I can't find any example of capturing sound output on Mac. Is it even possible without writing something like a kernel extension? Using Flash it is also very easy; it even provides a computeSpectrum method to get raw data, or even FFT-transformed data.
All programs that I have found so far use Soundflower or their own kernel extension. But I don't think that asking users to install a separate program, or using a kernel extension, is a good way to go.
One thing you can do, considering that Soundflower is open source, is take a look at how they did it; that should at least point you in the right direction. You can't copy and paste GPL code, but you can certainly study the techniques used and create your own solution.
You won't find Apple being very helpful here. Sound capturing, in this manner, can be used for all kinds of nefarious purposes. I'm not even sure if Core Audio lets you do this without hacks. In any case, you have a working implementation of what you're trying to accomplish. I'd take advantage of it.
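If you do accept the loopback-device route (Soundflower or your own), the capture side is ordinary Core Audio HAL code. A minimal sketch, with error handling omitted, that lists the audio devices so you can find the loopback device by name and open it as an input:

```cpp
#include <CoreAudio/CoreAudio.h>
#include <cstdio>

int main()
{
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDevices,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster  // ElementMain on current SDKs
    };

    // Fetch up to 64 device IDs; size comes back as the bytes written.
    AudioDeviceID devices[64];
    UInt32 size = sizeof(devices);
    AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr,
                               0, NULL, &size, devices);
    UInt32 count = size / sizeof(AudioDeviceID);

    for (UInt32 i = 0; i < count; ++i) {
        AudioObjectPropertyAddress nameAddr = {
            kAudioObjectPropertyName,
            kAudioObjectPropertyScopeGlobal,
            kAudioObjectPropertyElementMaster
        };
        CFStringRef name = NULL;
        UInt32 nameSize = sizeof(name);
        AudioObjectGetPropertyData(devices[i], &nameAddr,
                                   0, NULL, &nameSize, &name);
        if (name) {
            char buf[256];
            CFStringGetCString(name, buf, sizeof(buf), kCFStringEncodingUTF8);
            printf("device %u: %s\n", (unsigned)devices[i], buf);
            CFRelease(name);
        }
    }
    return 0;
}
```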
I'm not on my Mac right now, but I'm pretty sure that Quartz Composer has a patch for just this thing. Depending on what language you're writing your screen saver in, it may be fairly easy for you to port your code into a QC patch. Well... it probably won't be easy, but it may be doable.

How do I read a video camera in a win32 C program

I have this garden-variety USB video camera, and it came with two mini-apps, one that just lets you see what the camera sees, and one that records to an .avi file.
But what's the API if I want to grab images from the camera in my own C program? I am making the assumptions that it's (1) possible and (2) desirable to make some call and have a 2D array of pixel information filled in.
What I really want to do is tinker with image processing algorithms, and for that I'd really like to get my code around some live data.
EDIT -
Having had a healthy exposure to Linux, I can grasp how (ideally/in theory) you could open() the device, use ioctl() to configure it, and read() the data. And I'm virtually certain that that's not how Windows is going to present the API. Not knowing what function names Windows might use for a video device API, or even if it has one, makes it difficult to look up, at least with the Win32 API search capabilities that I have at my disposal.
You'll probably need the DirectShow API, provided that's how the camera operates. If the manufacturer created their own code path, you'll need their API.
Your first step, as pointed out by ChrisBD, is to check if Windows supports your device.
If that is the case you have three possible Windows APIs for capture:
DirectShow
VFW. Has more or less been replaced by DirectShow.
Media Foundation. The newest API, intended to replace DirectShow. AFAIK not fully implemented yet and only available on Vista.
Of the three, DirectShow is the best choice. However, learning and using DirectShow is not a trivial task. An excellent example can be found here.
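As a taste of DirectShow, here is a hedged sketch of the first step every capture app performs: enumerating the video input devices. A real app then builds a filter graph and adds a sample grabber to get at the frames; error handling is mostly omitted.

```cpp
#include <dshow.h>
#include <cstdio>
#pragma comment(lib, "strmiids.lib")
#pragma comment(lib, "ole32.lib")
#pragma comment(lib, "oleaut32.lib")

int main()
{
    CoInitialize(NULL);

    ICreateDevEnum *devEnum = NULL;
    CoCreateInstance(CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER,
                     IID_ICreateDevEnum, (void **)&devEnum);
    if (!devEnum) return 1;

    IEnumMoniker *enumMoniker = NULL;
    // S_FALSE means the category is empty (no cameras attached).
    if (devEnum->CreateClassEnumerator(CLSID_VideoInputDeviceCategory,
                                       &enumMoniker, 0) == S_OK) {
        IMoniker *moniker = NULL;
        while (enumMoniker->Next(1, &moniker, NULL) == S_OK) {
            IPropertyBag *bag = NULL;
            moniker->BindToStorage(NULL, NULL, IID_IPropertyBag,
                                   (void **)&bag);
            VARIANT name;
            VariantInit(&name);
            if (bag && SUCCEEDED(bag->Read(L"FriendlyName", &name, NULL)))
                wprintf(L"camera: %s\n", name.bstrVal);
            VariantClear(&name);
            if (bag) bag->Release();
            moniker->Release();
        }
        enumMoniker->Release();
    }
    devEnum->Release();
    CoUninitialize();
    return 0;
}
```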
Another possibility is to use OpenCV. OpenCV is an image-processing library that you can also use for processing the captured images. It has an image-capture API that provides a simpler abstraction and is easier to use than the Windows APIs.
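A minimal OpenCV capture loop, which hands you exactly the 2D pixel array the question asks for (assumes OpenCV is installed and the camera is device 0):

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);  // open the default camera
    if (!cap.isOpened()) return 1;

    cv::Mat frame;
    while (cap.read(frame)) {  // frame.data points at BGR pixel rows
        // ... run your image-processing experiments on frame here ...
        cv::imshow("camera", frame);
        if (cv::waitKey(1) == 27) break;  // Esc to quit
    }
    return 0;
}
```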
The API is the way to go.
A good indication of whether the camera requires a bespoke one or not is to see if it is recognised by a PC without the manufacturer's applications installed. If Windows has the drivers built in, then you should be able to use the Windows APIs to capture the images.
Alternatively, if you know what compression codec has been used for the AVI file, you could unpack it.
Ideally you would capture the video in a native format (YUV, RGB15 or similar), as then you can work on compression as well as manipulation.
