Configuration of the recording target by using the ONVIF interface - ip-camera

I have several cameras with ONVIF Profile S, each with the latest firmware installed.
It seems there is no way to configure the recording target through the ONVIF interface on this profile. (I found the method in the core specification, but the cameras respond with errors when I call it.)
That seems strange, since it is possible to configure the recording target through the GUI and the vendors' own APIs. (I have Axis and Samsung cameras.)

Okay, I have an answer to my question.
ONVIF defined the methods for configuring storage settings in the newest version of the core specification, 2.5. Since even my newest camera with current firmware only supports 2.1, I simply do not own a camera that supports storage configuration, and I will have to wait until the vendors implement the methods from the newer specification.
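For anyone hitting the same wall: you can ask a camera up front which service versions it implements before attempting any storage call. Below is a minimal sketch, assuming libcurl and a hypothetical camera address (neither comes from this thread), that POSTs a GetServices request to the conventional /onvif/device_service endpoint; note that some cameras additionally require a WS-UsernameToken security header even for this call. On a device that only reports core specification 2.1, storage methods such as GetStorageConfigurations will keep failing with errors, exactly as described above.

```cpp
// Sketch: ask an ONVIF camera which services (and versions) it supports.
// Assumptions: libcurl is available, the device service is reachable at the
// conventional /onvif/device_service path, and GetServices needs no auth.
#include <curl/curl.h>
#include <iostream>
#include <string>

// Append each chunk of the HTTP response to a std::string.
static size_t collect(char* data, size_t size, size_t nmemb, void* out) {
    static_cast<std::string*>(out)->append(data, size * nmemb);
    return size * nmemb;
}

int main() {
    const char* endpoint = "http://192.168.0.64/onvif/device_service"; // hypothetical address
    // SOAP 1.2 envelope for GetServices from the ONVIF device management WSDL.
    const char* envelope =
        "<s:Envelope xmlns:s=\"http://www.w3.org/2003/05/soap-envelope\">"
        "<s:Body>"
        "<tds:GetServices xmlns:tds=\"http://www.onvif.org/ver10/device/wsdl\">"
        "<tds:IncludeCapability>false</tds:IncludeCapability>"
        "</tds:GetServices>"
        "</s:Body></s:Envelope>";

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    std::string response;
    curl_slist* headers = curl_slist_append(
        nullptr, "Content-Type: application/soap+xml; charset=utf-8");
    curl_easy_setopt(curl, CURLOPT_URL, endpoint);
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, envelope);
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);

    CURLcode rc = curl_easy_perform(curl);
    if (rc == CURLE_OK)
        std::cout << response << std::endl; // inspect <tds:Version> per service
    else
        std::cerr << curl_easy_strerror(rc) << std::endl;

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}
```

Each Service element in the response carries a Version with Major/Minor fields, so you can tell before deploying whether a given camera can support the storage configuration methods at all.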

Related

Apple DriverKit SDK Camera driver registration

I am new to the Apple DriverKit SDK and I am not clear on how to register my device driver so that it is available as a camera in the OS. Do I have to register a streaming function in the Start function of the IOService? I searched all over the internet for an answer but could not find one.
I need to read data from a custom USB camera and then make it available via a custom driver.
Can any of you guys help me?
Support for cameras and video capture devices is not implemented as special I/O Kit classes in macOS (and therefore not in DriverKit either), but entirely in user space, via the Core Media I/O framework. Depending on the type of device, a DriverKit component may still be necessary (e.g. PCI/Thunderbolt, which is not available directly from user space, or USB devices where the camera functionality is not cleanly isolated to a USB interface descriptor). Such a dext would expose an entirely custom API, which in turn can be used from the user-space Core Media I/O based driver.
From macOS 13 (Ventura) onwards, the Core Media I/O extensions API should be used to implement such a driver; the extension runs in its own standalone process and can be used from all apps using Core Media.
Prior to that (macOS 12 and earlier), only the so-called Device Abstraction Layer (DAL) plug-in API existed, which involved writing a dynamic library in a bundle that would be loaded on demand by any application wishing to use the device in question. Unfortunately, this raised code-signing issues: applications built with the hardened runtime and library-validation flags can only load libraries signed by Apple or by the same team that signed the application itself. This means such apps can't load third-party camera drivers. Examples of such apps are all of Apple's own, including FaceTime.
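For orientation, here is a heavily trimmed sketch of what the legacy DAL route looks like at the source level: a CFPlugIn-style bundle installed under /Library/CoreMediaIO/Plug-Ins/DAL whose Info.plist names a factory function. The function name below is hypothetical, and the real COM-style interface table (object lifecycle, device and stream callbacks) is far too large to reproduce, so only the hand-off point is shown.

```cpp
// Sketch of a legacy DAL plug-in entry point (macOS 12 and earlier).
// "MyVirtualCameraPlugInMain" is a hypothetical name; it must match the
// CFPlugInFactories entry in the bundle's Info.plist.
#include <CoreMediaIO/CMIOHardwarePlugIn.h>

extern "C" void* MyVirtualCameraPlugInMain(CFAllocatorRef allocator,
                                           CFUUIDRef requestedTypeUUID) {
    (void)allocator; // unused in this sketch
    // The host app loads the bundle on demand and asks the factory for the
    // DAL plug-in type; anything else is not ours to create.
    if (!CFEqual(requestedTypeUUID, kCMIOHardwarePlugInTypeID))
        return nullptr;
    // A real plug-in returns a COM-style object whose first member points at
    // a fully populated CMIOHardwarePlugInInterface; omitted here.
    return nullptr;
}
```

The code-signing problem described above applies to exactly this artifact: the hardened-runtime host app must be willing to load your bundle into its own process, which is what the process-isolated Core Media I/O extension model fixes.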

How to expose a virtual camera on macOS?

I want to write my own camera filters for videochat, and ideally apply them in any/all of the popular videochat applications (Zoom, Hangouts, Skype, etc.). The way I imagine this working is to write a macOS application that reads the camera feed, applies my filters, and exposes an additional virtual camera. This virtual camera could then be selected in whichever videochat application.
I've spent many hours researching how to do this and I'm still not clear if it's even possible with modern macOS APIs. There are a few similar questions on StackOverflow (e.g. here, here), but they are either unanswered or very old. I'm hoping this question will collect advice/links/ideas in the right direction for how to do this as of 2020.
Here's what I got so far:
There's a popular tool in the live streaming community called OBS Studio. It captures input from different sources (camera, desktop, etc.), has a plugin system for applying effects, and then streams the output to popular services (e.g. Twitch). However, there is no functionality to expose the stream as a virtual camera on macOS. In discussions about this (thread, thread), folks talk about a tool called Syphon and a tool called CamTwist.
Unfortunately, Syphon doesn't expose a virtual camera anymore: "SyphonInject NO LONGER WORKS IN macOS 10.14 (Mojave). Apple closed up the loophole that allows scripting additions in global directories to load into any process. Trying to inject into any process will silently fail. It will work if SIP is disabled, but that's a terrible idea and I'm not going to suggest or help anyone do that."
Fortunately, CamTwist works. I got it running on macOS Catalina, applied some of its built-in effects to my camera stream, and saw it show up as a new camera in my Hangouts settings (after restarting Chrome). This was encouraging.
Unfortunately, CamTwist is rather old and not well maintained. It uses Quartz Composer for implementing effects, but Quartz Composer was deprecated by Apple and it's probably living its last days in Catalina.
The macOS SDK used to have an API called CoreMediaIO, which might have been the way to expose a virtual camera, but this API was also deprecated. It's not clear whether there is a modern alternative, and if so, what it is.
I guess another way of asking this whole question is: how is CamTwist implemented, how come it still works in macOS Catalina, and how would you implement the same thing in 2020?
Anything that sheds some light on all of this would be highly appreciated!
I also want to create my own camera filter, like Snap Camera.
So I researched CoreMediaIO and Syphon.
Did you check this GitHub project?
https://github.com/lvsti/CoreMediaIO-DAL-Example
This repository started off as a fork of the official CoreMediaIO sample code by Apple.
The original code didn't age well, since it was last updated in 2012, so the owner of the repository changed it to compile on modern systems.
You can see that the code works on macOS 10.14 (Mojave) in the following issue:
https://github.com/lvsti/CoreMediaIO-DAL-Example/issues/4
I have not actually created the camera filter yet, because I don't know how to send images to a virtual camera built with CoreMediaIO.
I would like to know more. If you know, please tell me.
CamTwist uses CoreMediaIO. What makes you think that's deprecated? Looking at the headers in the 10.15 SDK, I see no indication that it's deprecated. There were updates as recently as 10.14.

What is the CoreMediaIO DAL virtual camera alternative in macOS Mojave?

I am trying to implement a virtual camera using a CoreMediaIO DAL plug-in, but the virtual device won't show up in Mojave in Photo Booth and other applications. Is the CoreMediaIO plug-in method deprecated in Mojave? What is the replacement?
The virtual camera I created is basically based on https://developer.apple.com/library/archive/samplecode/CoreMediaIO and some samples on GitHub.
I expected that when I open the Photo Booth application the virtual device would show up in the camera list, but it does not. (Mojave)
It turns out I had misunderstood input/output streams, coming from the DirectShow world. I kept the input stream only, ripped out everything Assistant- and kext-related to leave a pure DAL plug-in, and hooked it up to my own transport layer for the frame data. Now WebRTC, Zoom, etc. can find and use it under Mojave.

Virtual Driver Cam not recognized by browser

I'm playing with the "Capture Source Filter" from http://tmhare.mvps.org/downloads.htm.
After registering the .ax filter, I'm trying to understand its compatibility across applications that use video sources.
For example, Skype recognizes it while browsers (Edge, Chrome) don't.
I wonder if this is a limitation of the approach used (a DirectShow filter) or just a matter of configuration.
The purpose of the question is to understand whether that approach is still useful or it's better to move on to Media Foundation.
I described this here: Applicability of Virtual DirectShow Sources
Your virtual camera and the applications capable of recognizing and picking it up are highlighted in green in the figure in the linked answer.
... if that approach is still useful or it's better to move on to Media Foundation.
Media Foundation does not even have a concept of a virtual video source, and it has no compatibility layer to connect to DirectShow video sources. In the other direction, DirectShow applications obviously won't see virtual Media Foundation streams either (again, because no compatible concept exists in the first place).
If you want to expose your video source to all applications, you need a driver for this (the red box in the same figure). Applications implementing this concept exist out there, even though writing a new one from the ground up is not comparably easy to the DirectShow virtual source you referenced in your question.
Further reading on MSDN: How to register a live media source - Media Foundation
Though it is technically possible to write a virtual driver that shows up as a capture device, policies will probably prevent this. In Media Foundation, a device must have a certificate to appear as a capture device, and so far only actual hardware devices through the USB video class driver have been certified. Supporting a scheme through a scheme handler, or a file type with a byte stream handler, is the way to expose a new source to applications.
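As a concrete illustration of that registration step, here is a small sketch following the MSDN article linked above. The scheme name, CLSID, and description are made-up placeholders; the registry layout (one subkey per URL scheme under SchemeHandlers, with the handler's CLSID as the value name) is the documented one, and writing to HKLM requires an elevated process.

```cpp
// Sketch: register a hypothetical Media Foundation scheme handler so the
// source resolver will route "myscheme://..." URLs to it. Run elevated.
#include <windows.h>
#include <iostream>

int main() {
    HKEY key = nullptr;
    // One subkey per scheme (note the trailing colon in the key name).
    LONG rc = RegCreateKeyExW(
        HKEY_LOCAL_MACHINE,
        L"SOFTWARE\\Microsoft\\Windows Media Foundation\\SchemeHandlers\\myscheme:",
        0, nullptr, 0, KEY_SET_VALUE, nullptr, &key, nullptr);
    if (rc != ERROR_SUCCESS) {
        std::cerr << "RegCreateKeyExW failed: " << rc << "\n";
        return 1;
    }

    // Value name = CLSID of your IMFSchemeHandler COM class (hypothetical
    // GUID below); value data = a human-readable description.
    const wchar_t desc[] = L"My Live Source Scheme Handler";
    rc = RegSetValueExW(key, L"{11111111-2222-3333-4444-555555555555}", 0,
                        REG_SZ, reinterpret_cast<const BYTE*>(desc),
                        sizeof(desc));
    RegCloseKey(key);
    return rc == ERROR_SUCCESS ? 0 : 1;
}
```

The handler itself is a COM object implementing IMFSchemeHandler that creates your live media source; registering its CLSID for COM activation is a separate, standard step not shown here.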

How can Vimicro's AMCAP.exe capture video from cameras running different drivers?

I've been experimenting with two cameras: one is a webcam, and the other is an evaluation-kit camera that comes with its own drivers. I can run AMCAP.exe (provided by Vimicro) and it will display a live stream from the connected camera (either of the two), although each one uses a different driver.
My question is: is it safe to assume that AMCAP.exe is only running as a video-stream display program?
In that case, I assume that most camera vendors follow a common standard interface for camera drivers. Could anyone comment on this assumption or explain how AMCAP.exe is able to do that?
I've been able to use both cameras in my C++-based OpenCV applications, but I'm asking because a third company is going to provide me with a USB board-based camera (for evaluation) and they are asking about driver specs. I suggested that it should work with the Windows default driver, similar to how a webcam would, so as to avoid compatibility issues, but I'm wondering if there is a better option that I might be missing.
Note I: I've verified that a different driver is used for each camera via Start -> Devices and Printers -> USB 2.0 Camera -> Hardware tab -> USB 2.0 Camera -> Driver -> Driver Details. When the webcam is connected, it uses the Microsoft-provided driver files ksthunk.sys and usbvideo.sys. When I disconnect the webcam and connect the evaluation camera, I can verify that it uses its own (non-Windows) driver.
Note II: Vimicro's AMCAP.exe can be downloaded from: VIMICRO USB PC Camera (VC0303) - CNET Download.com
Note III: The computer is a Core i7; the OS is Windows 7 64-bit.
Any help or input on this is truly appreciated and immensely needed.
Best,
Hasan.
Camera drivers must implement the OS-defined interface, which is the same for all devices of a given category (in your case, USB cameras). Default drivers for some device classes are bundled with Windows, and it's fine to use them. However, they aren't necessarily optimal for each and every device: for cameras, a generic driver might support only a limited set of resolutions, or might not be optimal with power saving (just an example, not necessarily what really happens!). This is because Microsoft implements only the basic, necessary functionality that must be present in every device of the class and doesn't depend on the proprietary hardware of individual vendors. A dedicated camera driver should provide all the additional functionality; you're the one to decide whether that matters to you.
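This is also the direct answer to how AMCAP.exe copes with both cameras: as a DirectShow application it never talks to a driver directly; it simply enumerates whatever capture filters the installed drivers expose in the video input device category. Below is a minimal sketch of that discovery step (standard DirectShow, nothing vendor-specific); both a UVC webcam served by usbvideo.sys and an evaluation camera with a vendor driver show up the same way.

```cpp
// Sketch: enumerate DirectShow video capture devices, as AMCAP-style apps do.
// Any camera whose driver exposes a kernel-streaming capture filter appears
// here, regardless of which vendor wrote the driver.
#include <dshow.h>
#include <cstdio>

#pragma comment(lib, "strmiids.lib")
#pragma comment(lib, "ole32.lib")
#pragma comment(lib, "oleaut32.lib")

int main() {
    if (FAILED(CoInitializeEx(nullptr, COINIT_MULTITHREADED)))
        return 1;

    ICreateDevEnum* devEnum = nullptr;
    if (SUCCEEDED(CoCreateInstance(CLSID_SystemDeviceEnum, nullptr,
                                   CLSCTX_INPROC_SERVER, IID_ICreateDevEnum,
                                   reinterpret_cast<void**>(&devEnum)))) {
        IEnumMoniker* cams = nullptr;
        // Returns S_OK only if at least one device exists in the category.
        if (devEnum->CreateClassEnumerator(CLSID_VideoInputDeviceCategory,
                                           &cams, 0) == S_OK) {
            IMoniker* moniker = nullptr;
            while (cams->Next(1, &moniker, nullptr) == S_OK) {
                IPropertyBag* props = nullptr;
                if (SUCCEEDED(moniker->BindToStorage(
                        nullptr, nullptr, IID_IPropertyBag,
                        reinterpret_cast<void**>(&props)))) {
                    VARIANT name;
                    VariantInit(&name);
                    if (SUCCEEDED(props->Read(L"FriendlyName", &name, nullptr)))
                        wprintf(L"Found capture device: %s\n", name.bstrVal);
                    VariantClear(&name);
                    props->Release();
                }
                moniker->Release();
            }
            cams->Release();
        }
        devEnum->Release();
    }
    CoUninitialize();
    return 0;
}
```

If the third-party board camera is USB Video Class compliant, it will appear in this same enumeration using the in-box usbvideo.sys driver with no extra driver work, which is probably the safest spec to request from the vendor.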
