What is a good cross-platform audio processing library? - Windows

I'm looking for an audio processing library that I can use to do some on-the-fly audio editing in my program, e.g. turning a knob to raise the pitch of the audio file being played, without saving the change to the song file itself. I plan to make this program for Windows and Mac, so I need a cross-platform library. I don't have much money to spare, so it can't cost too much either. My program will be commercially available, if that changes anything. Thanks in advance for any help.

SoX at http://sox.sourceforge.net/
Wavesurfer at http://www.speech.kth.se/wavesurfer/
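If SoX fits, note that it also comes as a library (libsox) that exposes the same effects, including pitch shifting, through an effects chain. Below is a rough sketch of applying a pitch shift while sending the audio to the sound card rather than to a file; the file name, output device type and cent value are illustrative, error handling is omitted, and you should double-check SoX's license terms for a commercial product:

    #include <sox.h>
    #include <cassert>
    #include <cstdlib>

    int main()
    {
        assert(sox_init() == SOX_SUCCESS);

        // Decode the song; the file on disk is never modified.
        sox_format_t *in = sox_open_read("song.mp3", NULL, NULL, NULL);

        // Send the processed audio straight to the sound card instead of a file.
        // The device type is platform specific: "waveaudio" on Windows,
        // "coreaudio" on Mac (assuming SoX was built with those drivers).
        sox_format_t *out = sox_open_write("default", &in->signal, NULL,
                                           "waveaudio", NULL, NULL);

        sox_effects_chain_t *chain =
            sox_create_effects_chain(&in->encoding, &out->encoding);
        char *args[1];

        sox_effect_t *e = sox_create_effect(sox_find_effect("input"));
        args[0] = (char *)in;  sox_effect_options(e, 1, args);
        sox_add_effect(chain, e, &in->signal, &in->signal);
        free(e);

        // Shift the pitch up by 300 cents (3 semitones); in a real app this
        // value would come from your knob.
        e = sox_create_effect(sox_find_effect("pitch"));
        args[0] = (char *)"300";  sox_effect_options(e, 1, args);
        sox_add_effect(chain, e, &in->signal, &in->signal);
        free(e);

        e = sox_create_effect(sox_find_effect("output"));
        args[0] = (char *)out;  sox_effect_options(e, 1, args);
        sox_add_effect(chain, e, &in->signal, &in->signal);
        free(e);

        sox_flow_effects(chain, NULL, NULL);   // run the whole chain

        sox_delete_effects_chain(chain);
        sox_close(out);
        sox_close(in);
        sox_quit();
        return 0;
    }

Note that sox_flow_effects() runs the chain to completion, so reacting to a knob in real time takes more plumbing (processing in blocks and rebuilding the pitch effect when the value changes), but this shows the basic effect-chain idea.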

Related

Playing video in Qt (on a Mac)

This question arises out of a combination of this being my first time working with video and unfamiliarity with Macs. Basically I'm finding it difficult to figure out how to play a video (within a QWidget, or otherwise) using any standard format, e.g. avi, mpeg, mov, etc. In particular:
1. QMovie::supportedFormats() gives me only .gif and .mng, but I need to use standard formats. Is there a way to increase the number of supported formats?
2. Phonon requires the presence of a 'backend' which the user has to implement himself. I looked to see if I could somehow do this with QuickTime, but I couldn't get the application to launch, and in any case I didn't really see how to do it. Also, Phonon looks pretty heavyweight; I'd like to avoid it if I could.
3. While there are plenty of avi (et al.) players floating around on the web, I think it's unlikely I'd be able to use them: I need to start, stop, and change the playback speed of videos programmatically, i.e. through my C++ program.
I'm not sure why this should be so hard--working with images in Qt is a snap by comparison. So: What's a good way to play videos from within a C++/Qt program?
Stop what you are doing right now: Phonon is the past, Qt Mobility is the future.
After you download, compile and install Qt Mobility, check the examples: videowidget and videographicsitem, located at: qt-mobility-opensource-src-1.2.0/examples/
They pretty much answer all your questions.
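For reference, here is a rough sketch along the lines of the videowidget example, using the QMediaPlayer/QVideoWidget classes (includes shown in the later Qt 5 QtMultimedia form; under Qt Mobility the same classes live in the QtMultimediaKit module, and the file path is just a placeholder):

    #include <QApplication>
    #include <QMediaPlayer>
    #include <QVideoWidget>
    #include <QUrl>

    int main(int argc, char *argv[])
    {
        QApplication app(argc, argv);

        QMediaPlayer player;
        QVideoWidget videoWidget;
        player.setVideoOutput(&videoWidget);   // render frames into the widget

        // Placeholder path; any format the platform's media backend can decode works.
        player.setMedia(QUrl::fromLocalFile("/path/to/clip.mov"));

        player.setPlaybackRate(1.5);           // change playback speed programmatically
        videoWidget.show();
        player.play();                         // pause()/stop() are also available

        return app.exec();
    }

The decoding is delegated to the platform's native media framework, which is why the supported formats go well beyond QMovie's .gif/.mng.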

Changing output audio device of other Win32 applications?

I want to write a program that allows you to select the output audio device (based on currently connected devices) used by other applications on an individual basis. (E.g. Winamp to my headphones, VLC to my speakers, etc).
The program would (probably) be written in C++ for Vista/7. Most likely I'll try to use the Windows sound APIs, but I'm not sure where to start, or whether the whole attempt is futile (seeing the answer here made me doubtful).
I'm not new to writing code, and this isn't a "please do my homework" request, but I am new to Windows code and was having trouble finding much documentation on anything like this.
Is this possible? Where would you start with this? Do you know of any projects that have already done this? Thanks in advance.

Mix 2 real webcams into a fake webcam

I need to get the streams from 2 webcams on the same computer and mix them into a fake webcam (so I can then use the fake webcam in any software).
I have seen that camcamx is for Mac and webcamstudio is for Linux, but I need a solution for Windows and can't find one, so I was thinking of writing my own small app.
I can program in C#, Java and Lazarus, but examples, libraries or whatever in any language will help anyway.
I will need to make a fake webcam that can be used as a webcam (detected on my computer as a USB webcam), plus some code to grab the streams from the two real webcams and mix everything together (there will be a primary webcam shown bigger and a secondary webcam shown smaller, in a corner of the big image).
Can anyone help me with that?
This is not a trivial exercise but it can be done. I know because I've done it before. :)
I implemented this in C++.
What you need to do is create what's known as a shared memory server: a region of RAM that more than one process can access. Here's how to create one using Named Shared Memory under Windows:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa366551(v=vs.85).aspx
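A minimal sketch of the producer side of that shared memory region (the mapping name and frame size below are made up for illustration):

    #include <windows.h>

    // Illustrative name and size; both ends must agree on them.
    static const wchar_t kMapName[]  = L"Local\\FakeWebcamFrameBuffer";
    static const DWORD   kBufferSize = 640 * 480 * 3;   // one RGB24 frame

    HANDLE hMap         = NULL;
    BYTE  *pSharedFrame = NULL;

    bool CreateFrameBuffer()
    {
        hMap = CreateFileMappingW(INVALID_HANDLE_VALUE,   // backed by the page file
                                  NULL, PAGE_READWRITE,
                                  0, kBufferSize, kMapName);
        if (!hMap)
            return false;

        pSharedFrame = (BYTE *)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS,
                                             0, 0, kBufferSize);
        return pSharedFrame != NULL;
    }

    // The capture filter DLL attaches to the same region by name:
    //   HANDLE h = OpenFileMappingW(FILE_MAP_READ, FALSE, kMapName);
    //   BYTE *p  = (BYTE *)MapViewOfFile(h, FILE_MAP_READ, 0, 0, kBufferSize);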
In your application that mixes the video from the two cameras, you need to create a DirectShow rendering filter (CBaseRenderer) that writes the mixed video frame into this shared memory.
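Roughly, the heart of such a renderer filter is its DoRenderSample() override, which copies each mixed frame into the shared view. A sketch, assuming the mapped pointer from above and the synchronization handles from Tip #1 below already exist (class and variable names are illustrative):

    #include <streams.h>   // DirectShow base classes (CBaseRenderer, CMediaType, ...)
    #include <cguid.h>     // GUID_NULL
    #include <cstring>

    extern BYTE  *pSharedFrame;   // view of the shared memory created above
    extern HANDLE hFrameMutex;    // guards the buffer (see Tip #1 below)
    extern HANDLE hFrameReady;    // signals the capture filter that a frame is ready

    class CSharedMemRenderer : public CBaseRenderer
    {
    public:
        CSharedMemRenderer(LPUNKNOWN pUnk, HRESULT *phr)
            // GUID_NULL is a placeholder; use your filter's own CLSID here.
            : CBaseRenderer(GUID_NULL, NAME("SharedMemRenderer"), pUnk, phr) {}

        // Only accept the format both sides have agreed on (RGB24 in this sketch).
        HRESULT CheckMediaType(const CMediaType *pmt)
        {
            if (*pmt->Type() == MEDIATYPE_Video &&
                *pmt->Subtype() == MEDIASUBTYPE_RGB24)
                return S_OK;
            return E_FAIL;
        }

        // Called for every frame that reaches the filter: copy it out and wake the reader.
        HRESULT DoRenderSample(IMediaSample *pSample)
        {
            BYTE *pData = NULL;
            pSample->GetPointer(&pData);
            const long len = pSample->GetActualDataLength();

            WaitForSingleObject(hFrameMutex, INFINITE);
            memcpy(pSharedFrame, pData, len);
            ReleaseMutex(hFrameMutex);
            SetEvent(hFrameReady);
            return S_OK;
        }
    };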
On the other end, you need to create a separate Visual Studio DLL project that will implement a DirectShow capture filter (CSource and CSourceStream) that reads the video bitmaps your main application writes into this buffer. This VS project needs to be a DLL that can be registered (e.g. with regsvr32) as a DirectShow capture device for Windows.
Your main application will create and maintain this shared memory buffer while it is operating. If another application (like a video conferencing program) accesses the capture device, all that will come from the device will be a blank buffer until your main application starts feeding real video frames into it.
Tip #1: Since this is a multi-threaded operation, you will need an event handle to signal the capture filter that a frame is ready. You will also need a mutex to control access to the buffer by the "rendering" thread in your application and the "capture" thread in the capture device.
Tip #2: You won't need to call UnmapViewOfFile or CloseHandle on the memory pointers until the rendering or capture filters are disposed.
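To make Tip #1 concrete, the signaling objects can be created as named kernel objects so both processes can open them (the names are again illustrative):

    #include <windows.h>

    // Created by the main application; the capture filter opens the same objects
    // by name (OpenEventW / OpenMutexW) or simply calls the same Create* functions.
    HANDLE hFrameReady = CreateEventW(NULL, FALSE, FALSE, L"Local\\FakeWebcamFrameReady");
    HANDLE hFrameMutex = CreateMutexW(NULL, FALSE, L"Local\\FakeWebcamFrameMutex");

    // Inside the capture filter's CSourceStream::FillBuffer(), something like:
    //
    //   if (WaitForSingleObject(hFrameReady, 40 /* ms, ~25 fps */) == WAIT_OBJECT_0)
    //   {
    //       WaitForSingleObject(hFrameMutex, INFINITE);
    //       memcpy(pDestBuffer, pSharedFrame, frameSize);  // copy out of shared memory
    //       ReleaseMutex(hFrameMutex);
    //   }
    //   // otherwise deliver the previous (or a blank) frame, as described above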
There is still a lot of code you will need to grind out beyond fragments like these, so a complete example is beyond the scope of this discussion, but this should get you going in the right direction. Good luck!
I think your question is too far out of scope for what this site is all about. You're talking about thousands and thousands of lines of code and intimate knowledge of drivers, video decoding, mixing, etc., etc. if you're going to write this software on your own.
With that said, there probably is software for this for Windows. I'd start here:
http://alternativeto.net/software/webcamstudio/
Capture video from real webcam: Video Capture on MSDN
Fake webcam: the well-known starting point is Vivek's sample project available at http://tmhare.mvps.org/downloads.htm; see also the post "Fake" DirectShow video capture device.
Getting it all together is doable, though not trivial.

I want to build an OS X app that applies some simple DSP to all audio. Where do I get started?

I want to build a little utility app that sits in the OS X toolbar and allows a user to apply a little DSP (equalization, etc.) to whatever audio is playing.
E.g., a user could adjust the equalization of the overall sound, regardless of which app the audio is playing in.
What libraries or APIs will allow me to tap into the audio stream?
This is my first time programming for OS X, so any advice or help with gotchas on this topic would be appreciated!
Look into Audio Units.
AU Lab can be configured as a system-wide equalizer, when combined with Soundflower.
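If you end up building the DSP yourself rather than using AU Lab, the Core Audio C API lets you assemble a small Audio Unit graph. A rough sketch follows; the hard part, actually receiving the system audio, is assumed to be solved by something like Soundflower feeding the hypothetical callback below:

    #include <AudioToolbox/AudioToolbox.h>
    #include <AudioUnit/AudioUnit.h>
    #include <CoreFoundation/CoreFoundation.h>

    // Hypothetical callback: fill ioData with the next inNumberFrames frames of
    // system audio (e.g. read from a Soundflower-style device). Not shown here.
    static OSStatus SupplySystemAudio(void *inRefCon,
                                      AudioUnitRenderActionFlags *ioActionFlags,
                                      const AudioTimeStamp *inTimeStamp,
                                      UInt32 inBusNumber,
                                      UInt32 inNumberFrames,
                                      AudioBufferList *ioData)
    {
        // ... copy audio into ioData ...
        return noErr;
    }

    int main()
    {
        AUGraph graph;
        NewAUGraph(&graph);

        // A graphic EQ effect unit.
        AudioComponentDescription eqDesc = {0};
        eqDesc.componentType         = kAudioUnitType_Effect;
        eqDesc.componentSubType      = kAudioUnitSubType_GraphicEQ;
        eqDesc.componentManufacturer = kAudioUnitManufacturer_Apple;

        // The default hardware output.
        AudioComponentDescription outDesc = {0};
        outDesc.componentType         = kAudioUnitType_Output;
        outDesc.componentSubType      = kAudioUnitSubType_DefaultOutput;
        outDesc.componentManufacturer = kAudioUnitManufacturer_Apple;

        AUNode eqNode, outNode;
        AUGraphAddNode(graph, &eqDesc, &eqNode);
        AUGraphAddNode(graph, &outDesc, &outNode);
        AUGraphOpen(graph);

        // Audio flows: our callback -> EQ -> speakers.
        AUGraphConnectNodeInput(graph, eqNode, 0, outNode, 0);
        AURenderCallbackStruct cb = { SupplySystemAudio, NULL };
        AUGraphSetNodeInputCallback(graph, eqNode, 0, &cb);

        AUGraphInitialize(graph);
        AUGraphStart(graph);

        CFRunLoopRun();   // keep processing until the app quits
        return 0;
    }

The EQ unit's parameters can then be adjusted at runtime with AudioUnitSetParameter(), which is where your equalizer UI would hook in.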

System audio in Quartz Composer

I am building a music visualizer in Quartz Composer, and it works just fine. The problem is that the audio input comes through a microphone, so any noise I make while it's running shows up in the visualization.
What I want it to do is take only the sound that is running digitally through the system. Not a line input, but what's running through the Mixer AU for output in the system. I haven't found any way to do this except for WireTap, but I don't want a demo and I can't currently afford the full version.
Thanks in advance.
Try Cycling 74's Soundflower to route audio between applications.
You might also want to check out Kineme AudioTools, which provides more analysis capability than QC's built-in Audio Input patch.
