Replicate camera and microphone to use across multiple programs - ffmpeg

I'd like to use the same webcam and microphone across multiple programs, for example Google Meet and Microsoft Teams at the same time.
When one of the programs is using the webcam/mic, it locks the device so it can't be opened anywhere else. Is there a way to replicate the devices, or to free them up for shared use?
I tried via ffmpeg, specifically by trying to output to dshow, but got:
Requested output format 'dshow' is not a suitable output format
I also tried this solution: https://superuser.com/a/1531380/934167 but didn't manage to replicate the device.
If possible, I'd like to stick to ffmpeg, but if it requires third party (preferably open source) applications, I can try that out.
Edit: I should also mention that I need this on Windows/Mac; the solutions I've found so far were only relevant for Linux.
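For what it's worth, that error is expected: ffmpeg's dshow component is a capture (input) device only, so there is no dshow output to write to. Duplicating a device so several programs can open it at once needs a virtual camera/microphone driver, which ffmpeg alone doesn't provide on Windows or Mac; the usual open-source route there is OBS Studio's Virtual Camera (video only - audio needs a separate virtual cable driver, e.g. BlackHole on macOS). For contrast, here is a sketch of the kind of Linux-only setup such answers usually describe, with example device paths and assuming the v4l2loopback module is installed:

    # create a loopback device (appears as, e.g., /dev/video2)
    sudo modprobe v4l2loopback devices=1

    # mirror the real webcam into the loopback device; any number of
    # programs can then open /dev/video2 at the same time
    ffmpeg -f v4l2 -i /dev/video0 -vcodec rawvideo -pix_fmt yuv420p -f v4l2 /dev/video2

On Windows/Mac the equivalent of that loopback device has to come from a driver such as OBS's, not from ffmpeg itself.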

Related

Saving an Image from an RTSP Stream (VB6)

I am going to start off this question with: yes, I have to use Visual Basic 6.0 to program this. The choice of programming language is out of my control, and the one I have been told to use is VB6.
I am trying to create a program that can fetch data from an RTSP stream and save a single frame of the video feed to a .bmp file. I have been looking on Google and on Stack Overflow, but I haven't been able to find a solution for how to accomplish this.
One of the things I am worried about is compatibility issues. For example, one of my early searches led me to EmguCV, but I can't get it working with VB6, and honestly I never really expected to.
So are there any good libraries, or built-in features of VB6, that can help me accomplish what I am trying to do? I am just hitting my head against a brick wall here.
Try using the Windows Media Player control for the heavy lifting (streaming, video decoding, etc.). You would just have to figure out how to pause on the first frame of the stream and capture it.
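If the media control route stalls, here is an alternative sketch that sidesteps it entirely: have VB6 shell out to a command-line grabber such as ffmpeg (assumed to be installed and on the PATH; the URL and paths below are placeholders):

    ' Hypothetical helper: ffmpeg.exe decodes the RTSP stream and saves
    ' the first video frame as a .bmp. Shell returns immediately, so the
    ' file only appears once ffmpeg has exited.
    Private Sub SaveSnapshot(ByVal rtspUrl As String, ByVal outFile As String)
        Dim cmd As String
        cmd = "ffmpeg -y -rtsp_transport tcp -i """ & rtspUrl & _
              """ -frames:v 1 """ & outFile & """"
        Shell cmd, vbHide
    End Sub

    ' Usage:
    ' SaveSnapshot "rtsp://camera-host/stream", "C:\snap.bmp"

This keeps the VB6 side trivial and pushes all the codec compatibility worries onto ffmpeg.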

SmartEyeglasses and Subtitles - Accessibility

I work for a performing arts institution and have been asked to look into incorporating wearable technology into accessibility for our patrons. I am interested in finding out more information regarding the use of SmartEyeglasses for supertitles (aka, subtitles) in live or pre-recorded performance. Is it possible to program several glasses to show the user(s) the same supertitles at the same time? How does this programming process work? Can several pairs of SmartEyeglasses connect with the same host device?
Any information is very much appreciated. I look forward to hearing from you!
Your question is overly broad and liable to be closed as such, but I'll bite:
The documentation for the SDK is available here: https://developer.sony.com/develop/wearables/smarteyeglass-sdk/api-overview/ - it describes itself as being based on Android's SDK. The content of the wearable display is defined in a "card" (an Android UI concept: https://developer.android.com/training/material/lists-cards.html ), and the software runs locally on the glasses.
Subtitles for prerecorded and pre-scripted live performances could be stored in a file format like .srt ( http://www.matroska.org/technical/specs/subtitles/srt.html ), which is easy to work with and already has a large ecosystem around it: freely available tools to create the files and software libraries to read them.
Building such a system then seems simple: each performance has an .srt file stored on a webserver somewhere. The user selects the performance somehow, and you'd write software which reads the .srt file and displays text on the Card based on the current timecode, through to the end of the script.
...this approach has the advantage of keeping server-side requirements to a minimum (just a static webserver will do).
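For reference, an .srt file is nothing more than numbered cues with start and end timecodes, which maps directly onto "show this text on the Card now" (the cue text below is invented for illustration):

    1
    00:00:12,000 --> 00:00:15,300
    CARMEN: Love is a rebellious bird

    2
    00:00:16,000 --> 00:00:19,800
    that nothing can tame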
If you have more complex requirements, such as live transcribing or support for interruptions and unscripted events, then you'd have to write a custom server which pushes "live" subtitles to the glasses, presumably over TCP. This would drain the device's battery, since the Wi-Fi radio would be active for much longer. An alternative might be Bluetooth, but I don't know how you'd build a system that can handle 100+ simultaneous long-range Bluetooth connections.
A compromise is to use .srt files, but have the glasses poll the server every 30 seconds or so to check for any unscripted events. How you handle this is up to you.
(As an aside, this looks like a fun project - please contact me if you're looking to hire someone to build it :D)
Each phone can host only one SmartEyeglass, so you would need a separate host phone for each pair.

Mac OS X audio keyboard

I am creating an application that will pre-record a user's voice for each key on the keyboard; when the app is running and the user calls out '5', the system types 5 into whichever application can accept the input at that time. I am a .NET person venturing into Xcode.
I have done some research and I am pretty sure about using AV Foundation for recording the audio. The question is how to use speech recognition in OS X to identify a particular key on the keyboard. I will highly appreciate any feedback, even general advice on how to approach this project!
Thanks in advance! :)
Let me be clear first: I have never done this before, but I have a general idea of how it is done. You need to bind an audio file to a certain number/key. Whenever the user speaks into the mic, you record their voice and upload it to a server, which compares the new recording to the pre-recorded audio files the user made.
Here is an SO question that talks about audio fingerprinting:
How can I Compare 2 Audio Files Programmatically?
You can compare the audio files in PHP/Python and have the comparison return a value. For example, if the audio file a.mp3 (on the server) matches newRecorded.mp3, which the user just recorded, return "a.mp3", then just strip the ".mp3" and keep the key.
As far as recording sentences and commands goes, you might be able to do the same. I will continue to do more research on this and help you out as much as I can.
Hopefully this gives you a better idea and an easier way of doing things.
Also, there are these:
https://developer.apple.com/library/mac/documentation/cocoa/reference/ApplicationKit/Classes/NSSpeechRecognizer_Class/Reference/Reference.html
and
https://developer.apple.com/library/mac/documentation/cocoa/conceptual/speech/Articles/RecognizeSpeech.html#//apple_ref/doc/uid/20002081-BCIHEBFH
These could be really helpful and would use the built-in speech recognition.
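To make the built-in route concrete, here is a minimal, untested sketch of NSSpeechRecognizer with a fixed command list; the mapping from the recognized word to an actual key press is left as a stub, and all names are illustrative:

    // Listen for a small fixed vocabulary; the matching is done by the
    // built-in OS X recognizer, so no server round-trip is needed.
    NSSpeechRecognizer *recognizer = [[NSSpeechRecognizer alloc] init];
    [recognizer setCommands:@[@"one", @"two", @"three", @"four", @"five"]];
    [recognizer setDelegate:self];
    [recognizer startListening];

    // Delegate callback, fired when one of the commands is heard:
    - (void)speechRecognizer:(NSSpeechRecognizer *)sender
         didRecognizeCommand:(id)command
    {
        // Map the spoken word to a virtual key code and synthesize a key
        // press, e.g. via CGEventCreateKeyboardEvent/CGEventPost (not shown).
        NSLog(@"heard: %@", command);
    }

Since NSSpeechRecognizer matches against a fixed command list rather than free dictation, it fits the one-word-per-key use case well.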

Use an output device as a recording source under Vista/Win7 new sound API?

As I understand it, Vista introduced a completely rearchitectured sound input/output system to the OS. In particular, before Vista there was a single system-wide sound mixer, to which output devices could be connected. For recording, it was possible to retrieve data directly from a recording device or from this mixer.
In Vista and later, as I understand it, there is no longer a system-wide mixer. It is possible, in theory, to route some sounds to one output device and other sounds to a different output device,[1] and this requires separate mixers for each output device.
Now, I have a simple recording application that I would like to update to take advantage of this new API. In particular, I was hoping it would be possible to let the user select one of the output devices as an audio data source. My reasoning is that the OS probably mixes all the inputs into each sound device anyway, and hopefully provides a way to tap into the mixed data.
Is it possible to select an output device as an input into my recording application, and if so, how?
[1] Although I have yet to find any UI that actually lets one do this.
Loopback Recording - in the Vista+ API (WASAPI), you open the render (output) endpoint in loopback mode, and the capture stream then contains exactly what the system is mixing into that device.
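In code terms, a bare-bones C sketch (all error handling omitted; compile as a COM client, e.g. with #include <initguid.h> first or linking uuid.lib so the GUIDs resolve):

    // Capture whatever the OS is mixing into the default output device.
    #include <windows.h>
    #include <mmdeviceapi.h>
    #include <audioclient.h>

    void CaptureLoopback(void)
    {
        IMMDeviceEnumerator *enumr = NULL;
        IMMDevice *dev = NULL;
        IAudioClient *client = NULL;
        IAudioCaptureClient *cap = NULL;
        WAVEFORMATEX *fmt = NULL;

        CoInitialize(NULL);
        CoCreateInstance(&CLSID_MMDeviceEnumerator, NULL, CLSCTX_ALL,
                         &IID_IMMDeviceEnumerator, (void **)&enumr);
        // Note: eRender, not eCapture -- we open the *output* endpoint.
        enumr->lpVtbl->GetDefaultAudioEndpoint(enumr, eRender, eConsole, &dev);
        dev->lpVtbl->Activate(dev, &IID_IAudioClient, CLSCTX_ALL, NULL,
                              (void **)&client);
        client->lpVtbl->GetMixFormat(client, &fmt);
        // The LOOPBACK flag is what turns this render endpoint into a
        // capture source for our client.
        client->lpVtbl->Initialize(client, AUDCLNT_SHAREMODE_SHARED,
                                   AUDCLNT_STREAMFLAGS_LOOPBACK,
                                   10000000 /* 1 second buffer */, 0, fmt, NULL);
        client->lpVtbl->GetService(client, &IID_IAudioCaptureClient,
                                   (void **)&cap);
        client->lpVtbl->Start(client);

        for (;;) {
            UINT32 packet = 0;
            cap->lpVtbl->GetNextPacketSize(cap, &packet);
            while (packet != 0) {
                BYTE *data; UINT32 frames; DWORD flags;
                cap->lpVtbl->GetBuffer(cap, &data, &frames, &flags, NULL, NULL);
                /* ... write 'frames' frames of fmt-described audio ... */
                cap->lpVtbl->ReleaseBuffer(cap, frames);
                cap->lpVtbl->GetNextPacketSize(cap, &packet);
            }
            Sleep(10);  /* roughly half a buffer period */
        }
    }

The recording side then looks exactly like ordinary WASAPI capture; the only special ingredient is initializing the render endpoint with AUDCLNT_STREAMFLAGS_LOOPBACK.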

Converter from IP-based cameras to Flash Player

I would like to know which file formats IP-based surveillance cameras produce. I would then need to build, or use available codec source code, to convert those formats (i.e. formats other than the usual .FLV) into something Flash Player 10 can support, for my website, which runs on JBoss with Flex 3. I would like to support as many file formats as possible.
I do not want to introduce a streaming server (FMS or the open-source Red5) for various reasons.
Does anyone have any idea about this? Any help would be amazing, because I have not done anything like this before.
Thanks in advance,
Ranjith
There's no reason to expect these to be standard or consistent; they can do anything they want as long as the camera and the base station agree.
You would have to acquire the target surveillance system and read its documentation, or sniff its network traffic to see what it does. If the first brand seems too difficult, move on to another brand.
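Once you do know what a given camera emits, the conversion itself is the easy part. Assuming the footage reaches your server as ordinary files, a command-line transcoder like ffmpeg can turn almost anything into a Flash-compatible container (file names here are placeholders):

    # H.264 video + AAC audio in an FLV container plays in Flash Player 9.0.115+
    ffmpeg -i camera_clip.avi -c:v libx264 -c:a aac -f flv clip.flv

The hard, per-vendor work is only in figuring out the input side.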
