I'm building an audio application in Ruby that needs low-latency audio playback. So far I'm using SDL, which is great for a prototype, but it has nowhere near the performance I need.
I've tried using the ruby-jack gem, but it doesn't seem complete enough to inject any audio into a playback port (and the documentation is wildly incomplete).
In case it matters, I'm on OS X (though I'd like something decently cross-platform). I'm currently playing back small WAV files, but support for more formats would be better. I don't especially want to shell out to a separate application for this, either.
My application's full source is available on GitHub; the salient features are in a gist, for those who want to have a look.
I'm not certain I have the correct answer for you, but I believe it may be worth your time to look into rbSFML. It is a Ruby binding for SFML, a multimedia library which has been growing in popularity.
Go here for rbSFML
http://groogy.se/mainsite/rbsfml/
SFML main page
http://www.sfml-dev.org/
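If rbSFML's coverage turns out to be spotty, the underlying C++ API is tiny anyway. Here's a minimal playback sketch, assuming SFML 2.x; the file name is just a placeholder:

```cpp
#include <SFML/Audio.hpp>
#include <chrono>
#include <thread>

int main()
{
    // Load a short WAV fully into memory; sf::Music streams from disk instead.
    sf::SoundBuffer buffer;
    if (!buffer.loadFromFile("clip.wav"))
        return 1;

    sf::Sound sound;
    sound.setBuffer(buffer);
    sound.play(); // playback runs on SFML's own audio thread

    // Keep the process alive until playback finishes.
    while (sound.getStatus() == sf::Sound::Playing)
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    return 0;
}
```

rbSFML mirrors these classes, so the Ruby version should read almost identically.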
Wish I had more information for you!
I'm looking for a way to programmatically create multiple "virtual" desktops and stream their contents wherever I like.
Essentially: which macOS APIs are these guys at https://cindori.org/vrdesktop/ using to create those virtual desktops, and how do they get a video stream of them?
Just looking for guidance to the right APIs/docs. No code required :)
You'll need to create a kernel extension that simulates a graphics adapter, essentially just rendering everything into a framebuffer. The framebuffer can then be exported as a video stream or whatever you choose.
You can find example source code to look at here:
https://github.com/tSoniq/displayx
https://github.com/andreacremaschi/Syphon-virtual-screen/tree/develop
https://github.com/mkernel/EWProxyFramebuffer
https://code.google.com/archive/p/ioproxyvideofamily/source/default/source
Note that these projects are not up to date with the latest macOS versions. Newer macOS releases have introduced things such as mandatory kernel extension signing, which makes it much harder for hobby developers to share kernel extensions for free on the internet. The upside is that it also makes things harder for malware authors.
So take a look at these source repositories and you'll find your guidance - but don't expect them to be complete solutions.
So I'm looking to build an application that can record the user's screen and stream it at the same time. I would like the application to run on both Windows and OS X. I don't have a high level of programming experience in any language, just a basic understanding of C, C++, and JS (funny how each class you take in college wants a different language). I'm also pretty well versed in HTML and CSS, but that is kind of irrelevant here.
I've been looking around, and it seems the best solution is to write the core of the program in one language and then develop the interface for each platform separately, using the appropriate languages and bindings (Objective-C and Cocoa for OS X, and so forth).
I'm open to all suggestions, this project doesn't have a deadline or anything, I'm really just intending it as a learning experience. I've never done anything with video capture and streaming before, so I'm looking for suggestions as to which road to go down language-wise for this project.
Thanks in advance :)
The simplest solution that comes to my mind is to use VLC.
This is obviously not a "language" but an application, but it supports screen capture and streaming on all of your target platforms (and more).
If this is not an option (e.g. because you don't want a separate application), you could use VLC's C API (libVLC) to acquire the screen capture and use whatever you like for streaming; a sketch follows below.
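A rough sketch of that second option, assuming libVLC 3.x (the "screen://" MRL and ":screen-fps" option come from VLC's screen-capture module; verify them against the libVLC docs for your version):

```cpp
#include <vlc/vlc.h>
#include <chrono>
#include <thread>

int main()
{
    libvlc_instance_t *vlc = libvlc_new(0, nullptr);
    if (!vlc) return 1;

    // "screen://" selects VLC's screen-capture input module.
    libvlc_media_t *media = libvlc_media_new_location(vlc, "screen://");
    libvlc_media_add_option(media, ":screen-fps=30");

    libvlc_media_player_t *player = libvlc_media_player_new_from_media(media);
    libvlc_media_release(media);

    // Plays (i.e. displays) the capture; add ":sout=..." options to stream instead.
    libvlc_media_player_play(player);
    std::this_thread::sleep_for(std::chrono::seconds(10));

    libvlc_media_player_stop(player);
    libvlc_media_player_release(player);
    libvlc_release(vlc);
    return 0;
}
```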
If you want to rely only on native functionality, I would use C/C++ for the application core and write the OS X part in ObjC/ObjC++ and Cocoa.
This question arises out of a combination of this being my first time working with video and my unfamiliarity with Macs. Basically, I'm finding it difficult to figure out how to play a video (within a QWidget, or otherwise) using any standard format, e.g. AVI, MPEG, or MOV. In particular:
QMovie::supportedFormats() gives me only .gif and .mng, but I need to use standard formats. Is there a way to increase the number of supported formats?
Phonon requires the presence of a 'backend', which the user has to provide himself. I looked to see if I could somehow do this with QuickTime, but I couldn't get the application to launch, and anyway I didn't really see how to do it. Also, Phonon looks pretty heavyweight; I'd like to avoid it if I can.
While there are plenty of AVI (et al.) players floating around on the web, I think it's unlikely I'd be able to use them: I need to start, stop, and change the playback speed of videos programmatically, i.e. from my C++ program.
I'm not sure why this should be so hard--working with images in Qt is a snap by comparison. So: What's a good way to play videos from within a C++/Qt program?
Stop what you are doing right now: Phonon is the past, Qt Mobility is the future.
After you download, compile and install Qt Mobility, check the examples: videowidget and videographicsitem, located at: qt-mobility-opensource-src-1.2.0/examples/
They pretty much answer all your questions.
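For a taste of what those examples boil down to, here's a minimal sketch assuming Qt Mobility 1.2's QtMultimediaKit module (the video path is a placeholder); note that setPlaybackRate() covers the programmatic speed control you asked about:

```cpp
#include <QApplication>
#include <QUrl>
#include <qmediaplayer.h>
#include <qmediacontent.h>
#include <qvideowidget.h>

QTM_USE_NAMESPACE // Qt Mobility classes live in the QtMobility namespace

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QMediaPlayer player;
    QVideoWidget videoWidget;
    player.setVideoOutput(&videoWidget); // render the video into this widget

    player.setMedia(QMediaContent(QUrl::fromLocalFile("/path/to/video.avi")));
    player.setPlaybackRate(1.5);         // change playback speed programmatically
    videoWidget.show();
    player.play();                       // pause() and stop() work the same way

    return app.exec();
}
```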
I am trying to port my screensaver from Windows to Mac, and one of its features was reacting to the system sound output. On Windows this was easy using DirectSound, but I can't find any example of capturing the sound output on a Mac. Is it even possible without writing something like a kernel extension? In Flash it is also very easy: it even provides a computeSpectrum method to get raw data, or even FFT-transformed data.
All the programs I have found so far use Soundflower or their own kernel extension. But I don't think that asking the user to install a separate program, or relying on a kernel extension, is a good approach.
One thing you can do, considering that Soundflower is open source, is take a look at how they did it. You can't copy and paste GPL code, but you can certainly study the techniques used and create your own solution; it should point you in the right direction.
You won't find Apple being very helpful here. Sound capturing, in this manner, can be used for all kinds of nefarious purposes. I'm not even sure if Core Audio lets you do this without hacks. In any case, you have a working implementation of what you're trying to accomplish. I'd take advantage of it.
I'm not on my Mac right now, but I'm pretty sure that Quartz Composer has a patch for just this thing. Depending on what language you're writing your screen saver in, it may be fairly easy for you to port your code into a QC patch. Well... it probably won't be easy, but it may be doable.
I have this garden-variety USB video camera, and it came with two mini-apps: one that just lets you see what the camera sees, and one that records to an .avi file.
But what's the API if I want to grab images from the camera in my own C program? I am making the assumptions that it's (1) possible and (2) desirable to make some call and have a 2D array of pixel information filled in.
What I really want to do is tinker with image processing algorithms, and for that I'd really like to get my code around some live data.
EDIT -
Having had a healthy exposure to Linux, I can grasp how (ideally/in theory) you could open() the device, use ioctl() to configure it, and read() the data; a rough sketch of that pattern follows below. I'm virtually certain that's not how Windows is going to present the API, though. Not knowing what function names Windows might use for a video device API, or even whether it has one, makes it difficult to look up, at least with the Win32 API search capabilities at my disposal.
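For reference, here's roughly what I mean, as it would look on Linux with V4L2 (simplified sketch; real code has to check capabilities first, and many drivers require mmap'd buffers instead of read()):

```cpp
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>
#include <vector>

int main()
{
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) return 1;

    // Configure the device: ask for 640x480 YUYV frames.
    v4l2_format fmt{};
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = 640;
    fmt.fmt.pix.height = 480;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) return 1;

    // Grab one raw frame: the "2D array of pixel information".
    std::vector<unsigned char> frame(fmt.fmt.pix.sizeimage);
    read(fd, frame.data(), frame.size());

    close(fd);
    return 0;
}
```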
You'll probably need the DirectShow API, provided that's how the camera operates. If the manufacturer created their own code path, you'll need their API.
Your first step, as pointed out by ChrisBD, is to check if Windows supports your device.
If that is the case you have three possible Windows APIs for capture:
DirectShow
VfW (Video for Windows): has more or less been replaced by DirectShow.
Media Foundation: the newest API, intended to replace DirectShow. AFAIK it is not fully implemented yet and is only available on Vista.
Of the three, DirectShow is the best choice. However, learning and using DirectShow is not a trivial task. An excellent example can be found here.
Another possibility is to use OpenCV. OpenCV is an image processing library, which you can also use to process the captured frames. It has a capture API that provides a simpler abstraction and is easier to use than the Windows APIs; see the sketch below.
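For instance, a minimal capture loop with OpenCV's C++ API looks something like this (device index 0 is an assumption; it is usually the first camera the OS recognises):

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0); // open the first camera known to the OS
    if (!cap.isOpened()) return 1;

    cv::Mat frame; // a 2D array of BGR pixels, as the question asks for
    while (cap.read(frame))
    {
        // ... run your image-processing algorithms on `frame` here ...
        cv::imshow("camera", frame);
        if (cv::waitKey(1) == 27) break; // Esc quits
    }
    return 0;
}
```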
Using an API is the way to go.
A good indication of whether the camera needs a bespoke API is to check whether it is recognised by a PC without the manufacturer's applications installed. If Windows has the drivers built in, then you should be able to use the Windows APIs to capture the images.
Alternatively, if you know which compression codec was used for the AVI file, you could decode it yourself.
Ideally you would capture the video in a native format (YUV, RGB15, or similar), since then you can work on compression as well as manipulation.