How to integrate our own player with Cobalt

From the source code of Cobalt, it can be seen that it uses the FFmpeg-related libraries (e.g. libasound/libavcodec/libavresample/libavutil...) to decode and render/play video/audio in its own player (pull mode/push mode). Since the playback code is tightly coupled, from Cobalt initialization through video decoding, and there is no unified interface for integrating another player, is there any guideline document or sample code for supporting/integrating a player other than FFmpeg with Cobalt?

The porting interface for the player is centered around SbPlayer, defined in src/starboard/player.h. Everything under src/starboard/shared/ should be considered an example, or starter code for you to use to implement SbPlayer; you may use all or none of it as is convenient for you. The key is that you implement SbPlayer and the ancillary media porting APIs, like SbMedia and SbDrm, and meet their described contracts.
Starboard (as defined in src/starboard/*.h) is the Cobalt porting interface, so you should not have to modify anything outside of your Starboard implementation in order to fully port Cobalt to a new platform. This will make later rebasing much easier, as Starboard is a version-controlled API; any other code is subject to change at any time, without warning. There are not, and never will be, any direct references from Cobalt into any Starboard implementation code without going through the Starboard API, so you can swap out any portion of it as needed for your platform.
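For orientation, here is a heavily simplified C++ sketch of what sits behind those entry points in a custom-player port. The function names (SbPlayerCreate, SbPlayerWriteSample, SbPlayerSeek, SbPlayerDestroy) are real SbPlayer entry points, but the parameter lists and the PlatformDecoder type are illustrative stand-ins; the exact, version-specific signatures are in src/starboard/player.h:

    // Schematic sketch only: stand-in types replace the real ones from
    // src/starboard/player.h, and the parameter lists are simplified. The
    // real entry points also carry codec descriptions, DRM handles, and
    // several callback/context pairs.
    #include <cstddef>
    #include <cstdint>

    // Assumption: a thin wrapper over your platform's decoder/renderer API.
    struct PlatformDecoder {
      static PlatformDecoder* Open() { return new PlatformDecoder(); }
      void WriteEncodedSample(const uint8_t*, size_t, int64_t /*pts*/) {}
      void Flush() {}
      void Close() { delete this; }
    };

    // In the real API, SbPlayer is an opaque handle declared by Starboard.
    struct SbPlayerPrivate {
      PlatformDecoder* decoder;
    };
    typedef SbPlayerPrivate* SbPlayer;

    // Cobalt demuxes the container itself; SbPlayer receives encoded
    // elementary-stream samples and is responsible for decode and render.
    SbPlayer SbPlayerCreate(/* window, codecs, callbacks, context, ... */) {
      SbPlayer player = new SbPlayerPrivate();
      player->decoder = PlatformDecoder::Open();
      // Report an "initialized" state through the player-status callback
      // here, so Cobalt knows it may begin pushing samples.
      return player;
    }

    void SbPlayerWriteSample(SbPlayer player, const uint8_t* data,
                             size_t size, int64_t pts) {
      player->decoder->WriteEncodedSample(data, size, pts);
    }

    void SbPlayerSeek(SbPlayer player /*, seek_to_time, ticket */) {
      player->decoder->Flush();  // then report a "prerolling" state
    }

    void SbPlayerDestroy(SbPlayer player) {
      player->decoder->Close();
      delete player;
    }

The contract details (preroll states, sample deallocation callbacks, end-of-stream handling) are what make a port correct, so treat player.h and its comments as the authoritative spec rather than this sketch.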

Related

Virtual Driver Cam not recognized by browser

I'm playing with the "Capture Source Filter" from http://tmhare.mvps.org/downloads.htm.
After registering the .ax filter, I'm trying to understand its compatibility across applications that use video sources.
For example, Skype recognizes it, while browsers (Edge, Chrome) don't.
I wonder if it's a limitation of the approach used (a DirectShow filter) or just a matter of configuration.
The purpose of the question is to understand whether that approach is still useful or it's better to move to Media Foundation.
I described this here: Applicability of Virtual DirectShow Sources
Your virtual camera and the applications capable to recognize and pick it up are highlighted with green on the figure below.
... if that approach is still useful or it's better to move to Media Foundation.
Media Foundation does not even have a concept of a virtual video source, and it has no compatibility layer to connect to DirectShow video sources. Obviously, in the other direction, DirectShow applications won't be able to see virtual Media Foundation streams either (again, because no compatible concept exists in the first place).
If you want to expose your video source to all applications, you need a driver for this (see the red box on the figure above). Applications exist out there that implement this concept, even though writing a new one from the ground up is not comparably easy to the DirectShow virtual source you referenced in your question.
Further reading on MSDN on Media Foundation: How to register a live media source
Though it is technically possible to write a virtual driver that shows up as a capture device, policies will probably prevent this. In Media Foundation, a device must have a certificate to appear as a capture device, and so far only actual hardware devices through the USB video class driver have been certified. Supporting a scheme through a scheme handler, or a file type with a byte stream handler, is the way to expose a new source to applications.

How to get/set the current volume level in Xamarin.Forms?

How do I get the device's current volume level and use it when playing sound inside our application? Also, how do I get notified when the sound level is changed by a hardware button or a software seekbar?
Xamarin.Forms doesn't have audio APIs yet, so you will need to implement the functionality for each platform on your own. However, there is a library called Xamarin Audio Manager that should do at least some of the things you require; take a look at it here on GitHub.
You can then use that project as a good starting point and extend it to meet all your needs. The audio APIs in both Android and iOS are relatively easy to understand.

Low-latency audio playback from Ruby

I'm building an audio application in Ruby which needs low latency audio playback. So far, I'm using SDL, which is great for a prototype, but it's got nowhere near enough performance for what I need.
I've tried using the ruby-jack gem, but it doesn't seem complete enough to inject any audio into a playback port (and the documentation is wildly incomplete).
If it changes much, I'm on OS X (but I'd like something that's decently cross-platform), and I'm (currently) playing back small WAV files, though more formats would be better. I don't especially want to call out to a system application to do this, either.
My application's full source is available on Github; the salient features of it are in a gist, for those who want to have a look.
I'm not certain I have the correct answer for you, but I believe it may be worth your time to look into rbSFML. It is a Ruby binding for SFML, a multimedia library that has been growing in popularity.
Go here for rbSFML
http://groogy.se/mainsite/rbsfml/
SFML main page
http://www.sfml-dev.org/
Wish I had more information for you!
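For reference, the playback path that rbSFML binds is very small. A C++ sketch of the underlying SFML 2.x calls ("clip.wav" is a placeholder file name):

    #include <SFML/Audio.hpp>
    #include <SFML/System.hpp>
    #include <iostream>

    int main() {
      // Load a small WAV file entirely into memory; sf::Music would
      // stream from disk instead, which suits longer files.
      sf::SoundBuffer buffer;
      if (!buffer.loadFromFile("clip.wav")) {  // placeholder file name
        std::cerr << "failed to load clip.wav\n";
        return 1;
      }

      sf::Sound sound;
      sound.setBuffer(buffer);
      sound.play();  // non-blocking: playback runs on SFML's audio thread

      // Keep the process alive until playback finishes.
      while (sound.getStatus() == sf::Sound::Playing)
        sf::sleep(sf::milliseconds(50));
      return 0;
    }

Whether rbSFML's latency is low enough for your application is something you would have to measure, but the preloaded-buffer approach (sf::SoundBuffer rather than streaming) is the lower-latency path.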

Mix 2 real webcams into a fake webcam

I need to get the streams from 2 webcams on the same computer and mix them into a fake webcam (so that I can then use the fake webcam in any software).
I have seen that CamCamX is for Mac and WebcamStudio is for Linux, but I need a solution for Windows and I can't find one, so I was thinking of writing my own small app.
I can program in C#, Java and Lazarus, but examples or libraries in any language will help anyway.
I will need to make a fake webcam that can be used as a webcam (detected on my computer as a USB webcam), and some code to grab the streams from the two real webcams and mix everything together (there will be a primary webcam that will be bigger and a secondary webcam that will be smaller, in a corner of the big image).
Can anyone help me with that?
This is not a trivial exercise but it can be done. I know because I've done it before. :)
I implemented this in C++.
What you need to do is create what's known as a shared memory server: a region of RAM that more than one process can access. Here's how to create one using named shared memory under Windows:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa366551(v=vs.85).aspx
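A minimal C++ sketch of the producer side, using the Win32 calls from that article (the mapping name and frame size are illustrative):

    #include <windows.h>
    #include <cstdio>

    // Illustrative values: pick a unique name and size it to your frames.
    static const char* kMappingName = "Local\\MyVirtualCamFrame";
    static const DWORD kFrameBytes = 640 * 480 * 4;  // 640x480 RGB32

    int main() {
      // Create a named, pagefile-backed shared memory region that the
      // capture filter can later open by name with OpenFileMapping.
      HANDLE mapping = CreateFileMappingA(
          INVALID_HANDLE_VALUE,  // backed by the system paging file
          NULL, PAGE_READWRITE,
          0, kFrameBytes,        // high/low parts of the region size
          kMappingName);
      if (!mapping) {
        std::printf("CreateFileMapping failed: %lu\n", GetLastError());
        return 1;
      }

      void* frame = MapViewOfFile(mapping, FILE_MAP_ALL_ACCESS,
                                  0, 0, kFrameBytes);
      if (!frame) {
        std::printf("MapViewOfFile failed: %lu\n", GetLastError());
        CloseHandle(mapping);
        return 1;
      }

      // ... the mixing loop writes each composed frame into `frame` ...

      UnmapViewOfFile(frame);
      CloseHandle(mapping);
      return 0;
    }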
In your application that mixes the video from the two cameras, you need to create a DirectShow rendering filter (CBaseRenderer) that writes the mixed video frame into this shared memory.
On the other end, you need to create a separate Visual Studio DLL project that will implement a DirectShow capture filter (CSource and CSourceStream) that reads the video bitmaps your main application writes into this buffer. This VS project needs to be a registerable DLL that can be called to register itself as a DirectShow capture device for Windows.
Your main application will create and maintain this shared memory buffer while it is running. If another application (like a video conferencing program) accesses the capture device, all that will come from the device is a blank buffer until your main application starts feeding real video frames into it.
Tip #1: Since this is a multi-threaded operation, you will need an event handle to signal the capture filter that a frame is ready. You will also need a mutex to control access to the buffer by the "rendering" thread in your application and the "capture" thread in the capture device.
Tip #2: You won't need to call UnmapViewOfFile or CloseHandle on the memory pointers until the rendering or capture filters are disposed.
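A sketch of the synchronization pattern Tip #1 describes, seen from the capture filter's side (the object names are illustrative; the mixing application creates the same named event and mutex and signals the event after each write):

    #include <windows.h>
    #include <cstring>

    // Illustrative names: they must match what the mixing app creates.
    static const char* kEventName = "Local\\MyVirtualCamFrameReady";
    static const char* kMutexName = "Local\\MyVirtualCamFrameLock";

    // Called from the capture filter's streaming thread (e.g. inside
    // CSourceStream::FillBuffer) to fetch the latest mixed frame.
    bool CopyLatestFrame(const void* sharedFrame, void* dest, size_t bytes) {
      HANDLE ready = OpenEventA(SYNCHRONIZE, FALSE, kEventName);
      HANDLE lock  = OpenMutexA(SYNCHRONIZE, FALSE, kMutexName);
      if (!ready || !lock) {  // producer is not running yet
        if (ready) CloseHandle(ready);
        if (lock)  CloseHandle(lock);
        return false;
      }

      // Wait briefly for the producer to signal a new frame; on timeout
      // the previous frame is simply reused.
      if (WaitForSingleObject(ready, 100 /* ms */) == WAIT_OBJECT_0) {
        WaitForSingleObject(lock, INFINITE);    // exclude the writer
        std::memcpy(dest, sharedFrame, bytes);  // copy out of shared memory
        ReleaseMutex(lock);
      }
      CloseHandle(ready);
      CloseHandle(lock);
      return true;
    }

In practice you would open these handles once when the output pin goes active and cache them, rather than reopening them for every frame as this fragment does.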
There is still a lot of code you will need to grind out beyond sketches like the one above, so this should just get you going in the right direction. Good luck!
I think your question is too far out of scope for what this site is all about. You're talking about thousands and thousands of lines of code and intimate knowledge of drivers, video decoding, mixing, etc., etc. if you're going to write this software on your own.
With that said, there probably is software for this for Windows. I'd start here:
http://alternativeto.net/software/webcamstudio/
Capture video from real webcam: Video Capture on MSDN
Fake webcam: the well known starting point is Vivek's sample/project available at http://tmhare.mvps.org/downloads.htm, see also this post "Fake" DirectShow video capture device
Getting it all together is doable, though not trivial.
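For the capture half, the usual DirectShow starting point is enumerating the video input device category. A minimal C++ sketch (error handling mostly omitted; the two real webcams, and later the registered fake one, should all show up here):

    #include <dshow.h>
    #include <cstdio>
    #pragma comment(lib, "strmiids.lib")
    #pragma comment(lib, "ole32.lib")
    #pragma comment(lib, "oleaut32.lib")

    int main() {
      CoInitialize(NULL);

      ICreateDevEnum* devEnum = NULL;
      CoCreateInstance(CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER,
                       IID_ICreateDevEnum, (void**)&devEnum);

      IEnumMoniker* enumMoniker = NULL;
      // CreateClassEnumerator returns S_FALSE if the category is empty.
      if (devEnum->CreateClassEnumerator(CLSID_VideoInputDeviceCategory,
                                         &enumMoniker, 0) == S_OK) {
        IMoniker* moniker = NULL;
        while (enumMoniker->Next(1, &moniker, NULL) == S_OK) {
          IPropertyBag* props = NULL;
          if (SUCCEEDED(moniker->BindToStorage(NULL, NULL, IID_IPropertyBag,
                                               (void**)&props))) {
            VARIANT name;
            VariantInit(&name);
            if (SUCCEEDED(props->Read(L"FriendlyName", &name, NULL)))
              wprintf(L"%s\n", name.bstrVal);
            VariantClear(&name);
            props->Release();
          }
          moniker->Release();
        }
        enumMoniker->Release();
      }
      devEnum->Release();
      CoUninitialize();
      return 0;
    }

From each selected moniker you would then build a capture graph, render the two streams into your mixing filter, and write the composed frames into the shared memory region described in the first answer.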

How do I read a video camera in a Win32 C program

I have this garden variety USB video camera, and it came with two mini-apps, one that just lets you see what the camera sees, and one that records to an .avi file.
But what's the API if I want to grab images from the camera in my own C program? I am making the assumptions that it's (1) possible and (2) desirable to make some call and have a 2D array of pixel information filled in.
What I really want to do is tinker with image processing algorithms, and for that I'd really like to get my code around some live data.
EDIT -
Having had a healthy exposure to Linux, I can grasp how (ideally/in theory) you could open() the device, use ioctl() to configure it, and read() the data. And I'm virtually certain that that's not how Windows presents the API. Not knowing what function names Windows might use for a video device API, or even whether it has one, makes it difficult to look up, at least with the Win32 API search capabilities I have at my disposal.
You'll probably need the DirectShow API, provided that's how the camera operates. If the manufacturer created their own code path, you'll need their API.
Your first step, as pointed out by ChrisBD, is to check if Windows supports your device.
If that is the case you have three possible Windows APIs for capture:
DirectShow
VfW (Video for Windows), which has more or less been replaced by DirectShow
Media Foundation, the newest API, intended to replace DirectShow; AFAIK not fully implemented yet and only available in Vista
Of the three, DirectShow is the best choice. However, learning and using DirectShow is not a trivial task. An excellent example can be found here.
Another possibility is to use OpenCV. OpenCV is an image-processing library that you can also use to process the captured frames, and it has an image capture API that provides a simpler abstraction and is easier to use than the Windows APIs.
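A short C++ sketch of that OpenCV capture path (device index 0 simply means the first camera the system reports):

    #include <opencv2/opencv.hpp>
    #include <cstdio>

    int main() {
      // Open the first capture device; OpenCV chooses a backend
      // (DirectShow or Media Foundation on Windows) behind this call.
      cv::VideoCapture cap(0);
      if (!cap.isOpened()) {
        std::printf("no capture device found\n");
        return 1;
      }

      cv::Mat frame;  // a 2D array of BGR pixels, ready for processing
      while (cap.read(frame)) {
        // ... run your image-processing algorithm on `frame` here ...
        cv::imshow("live", frame);
        if (cv::waitKey(1) == 27) break;  // Esc to quit
      }
      return 0;
    }

This matches the questioner's goal directly: cap.read() hands you exactly the 2D array of pixel data to tinker with.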
The API is the way to go.
A good indication of whether the camera requires a bespoke API is whether it is recognised by a PC without the manufacturer's applications installed. If Windows has the drivers built in, then you should be able to use the Windows APIs to capture the images.
Alternatively, if you know which compression codec has been used for the AVI file, you could unpack it.
Ideally you would capture the video in a native format (YUV, RGB15 or similar), as then you can work on compression as well as manipulation.
