I have a simple audio-playing app that uses QTMovie for a few of its features. I'm also developing a little Ethernet-enabled board to stream MP3 or PCM data to.
Is there any way of 'grabbing' what QTMovie is outputting, formatting it into an array of bytes, and sending it over Ethernet to a specific IP? iTunes manages to do this with AirPlay, so it must be possible somehow.
Thanks for any answers!
There are off-the-shelf products like Rogue Amoeba's Airfoil that you might want to look at:
http://www.rogueamoeba.com/airfoil/mac/
But if you really want to get your hands dirty and develop something yourself, it looks like QTMovie just outputs to Core Audio, and you can set which device:
http://developer.apple.com/library/mac/#qa/qa1578/_index.html
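If it helps, that QA boils down to a couple of C calls. A rough sketch (my assumption about your setup): you'd get the underlying Movie from -[QTMovie quickTimeMovie] and the device UID from the Core Audio HAL:

```cpp
// Sketch of the device-routing technique from the QA above: create a
// QTAudioContext bound to a specific output device and attach it to the
// movie. "movie" and "deviceUID" are assumed to come from elsewhere.
#include <QuickTime/QuickTime.h>

OSStatus RouteMovieToDevice(Movie movie, CFStringRef deviceUID)
{
    QTAudioContextRef audioContext = NULL;

    // Create an audio context tied to the chosen output device.
    OSStatus err = QTAudioContextCreateForAudioDevice(kCFAllocatorDefault,
                                                      deviceUID,
                                                      NULL,  // options
                                                      &audioContext);
    if (err != noErr) return err;

    // Route the movie's audio to that context instead of the default output.
    err = SetMovieAudioContext(movie, audioContext);

    QTAudioContextRelease(audioContext);  // the movie retains it
    return err;
}
```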
There's a bit of existing Q&A on how programs that intercept Core Audio devices do it:
Code sample for capturing audio from a Mac in Cocoa and saving to file?
What I really want to achieve is this:
Suppose I play an audio file (using my application) which can either be streamed from the internet or accessed directly from local storage.
Now I want to configure SAPI to listen to this source instead of the microphone and convert the speech in the audio to text, like it normally does.
I do not think SAPI supports this itself.
There are some approaches you could use that are "external" to SAPI:
Get a male-to-male miniplug cable and plug your soundcard's output into your soundcard's input
Use Virtual Audio Cable, which basically achieves #1 with virtual soundcard software instead of hardware. It can be tricky at first to understand how Virtual Audio Cable works and how to use it, but it works very well once you figure it out.
Some soundcards have a built-in loopback feature, which allows you to record what the soundcard is playing instead of recording from e.g. a microphone. Here are some good info links: What U Hear and Stereo Mix. Also try Googling those terms for more info.
Only WAV seems to be supported out of the box; see here.
Quoting:
The wav file input scenario is special because it uses controlled, reproducible audio input and requires a dedicated SR engine, without interference from other applications (e.g., a shared desktop microphone). The file input scenario should use a generic SAPI audio stream connected to the input wav file and an InProc SR engine.
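In code, that scenario looks roughly like the following sketch (error handling trimmed, grammar loading omitted; the file path is a placeholder):

```cpp
// Rough sketch of the wav-file scenario quoted above: bind a generic SAPI
// stream to a wav file and feed it to an in-process (InProc) recognizer.
#include <atlbase.h>
#include <sapi.h>
#include <sphelper.h>

HRESULT RecognizeFromWav(const wchar_t *wavPath)
{
    CComPtr<ISpStream>      cpStream;
    CComPtr<ISpRecognizer>  cpRecognizer;
    CComPtr<ISpRecoContext> cpContext;

    // Wrap the wav file in a generic SAPI audio stream.
    HRESULT hr = SPBindToFile(wavPath, SPFM_OPEN_READONLY, &cpStream);
    if (FAILED(hr)) return hr;

    // Use an in-process engine, not the shared desktop recognizer.
    hr = cpRecognizer.CoCreateInstance(CLSID_SpInprocRecognizer);
    if (FAILED(hr)) return hr;

    // Point the recognizer at the file stream instead of the microphone.
    hr = cpRecognizer->SetInput(cpStream, TRUE);
    if (FAILED(hr)) return hr;

    hr = cpRecognizer->CreateRecoContext(&cpContext);
    // ... load a grammar on cpContext and handle recognition events ...
    return hr;
}
```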
I am trying to build what I think is a basic app. Well, two apps: one for Windows and one for OS X. I would like to capture the audio signal that is playing (i.e., if the user is playing music out of his/her speakers). Then take that signal and stream it out so another computer can "listen". The other computer would be running Windows or OS X.
Any ideas on how to get the audio signal?
What's the most efficient way to stream out audio without a 3rd party plugin? If there is an open-source solution out there, I would be interested.
Thanks!
Chris
On Windows XP this isn't trivial at all, because there's no way of intercepting the output signal without writing an audio filter driver (which is not something for the faint of heart).
On Windows Vista and above, you can capture the output of the audio engine by using the WASAPI APIs (built into Windows so they're free) and initializing an audio client with the AUDCLNT_STREAMFLAGS_LOOPBACK flag. This will give you a capture stream that's hooked to the output of the audio engine.
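A minimal sketch of that initialization might look like this (error handling abbreviated; a real implementation should also CoTaskMemFree the format and release the interfaces):

```cpp
// Minimal WASAPI loopback setup (assumes COM is already initialized).
#include <windows.h>
#include <mmdeviceapi.h>
#include <audioclient.h>

HRESULT InitLoopbackCapture(IAudioClient **ppClient,
                            IAudioCaptureClient **ppCapture)
{
    IMMDeviceEnumerator *pEnum = NULL;
    IMMDevice *pDevice = NULL;
    IAudioClient *pClient = NULL;
    WAVEFORMATEX *pFormat = NULL;

    HRESULT hr = CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL,
                                  CLSCTX_ALL, __uuidof(IMMDeviceEnumerator),
                                  (void**)&pEnum);
    if (FAILED(hr)) return hr;

    // Loopback capture opens the *render* endpoint, not a capture endpoint.
    hr = pEnum->GetDefaultAudioEndpoint(eRender, eConsole, &pDevice);
    if (FAILED(hr)) return hr;

    hr = pDevice->Activate(__uuidof(IAudioClient), CLSCTX_ALL, NULL,
                           (void**)&pClient);
    if (FAILED(hr)) return hr;

    hr = pClient->GetMixFormat(&pFormat);
    if (FAILED(hr)) return hr;

    // The loopback flag hooks this capture client to the engine's output.
    hr = pClient->Initialize(AUDCLNT_SHAREMODE_SHARED,
                             AUDCLNT_STREAMFLAGS_LOOPBACK,
                             10000000,  // 1-second buffer, in 100-ns units
                             0, pFormat, NULL);
    if (FAILED(hr)) return hr;

    hr = pClient->GetService(__uuidof(IAudioCaptureClient),
                             (void**)ppCapture);
    if (SUCCEEDED(hr)) *ppClient = pClient;
    return hr;
}
```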
You can then package up that audio and send it to the other machine and render it with whatever audio rendering API you want.
I don't know how to do the equivalent on OS X though :(.
Please advise a combination of server and client technologies, tools and frameworks to implement a solution that meets the following requirements?
A file server on the network has a huge library of mp3/aac/aiff/wav music files.
The desktop Cocoa application accesses audio files using URLs: rtmp, http, rtsp+rtp, ftp. How do I make a choice?
Audio content should be streamed and played with seeking (this is crucial) without downloading the entire file: QuickTime, AudioQueue, AudioFile, AudioStream, CFHTTP, all of them? How do I develop a client?
After solid research I've ended up with myriad options and articles. But it looks like half of them are quite out of date (2001–2005), and the other half are about universal code (pure C) for Mac OS X and iPhone OS.
However the main goal here is to write a Desktop music player for Mac OS 10.5.
I cannot believe that all this raw C coding is really required.
No wrappers? No handy libraries? No components?
P.S. The research has resulted in the following combination: qt_tools for hinting + DSS for RTSP streaming + QTMovie for playback + setCurrentTime: for seeking. This selection requires double the storage space (a hinted .mov version of every music file), but it works.
I am not sure, but I believe you can use [QTMovie movieWithURL:url error:&err] to stream a movie from a URL, then pass it to a QTMovieView object. QuickTime treats audio like movies, so it may work. Or it may try to load the entire file.
Have a look at the QuickTime Streaming Guide.
Did you look at VLC as a streaming solution?
How would it be possible to capture the audio programmatically? I am implementing an application that streams the desktop over the network in real time. The video part is finished; now I need to implement the audio part. I need a way to get PCM data from the sound card to feed to my encoder (implemented using Windows Media Format).
I think the answer involves the mixerOpen() and waveInOpen() functions in the Win32 API, but I am not sure exactly what I should do.
How do I open the necessary channel and read PCM data from it?
Thanks in advance.
The new Windows Vista Core Audio APIs support this explicitly (it's called loopback recording), so if you can live with a Vista-only application, this is the way to go.
See the Loopback Recording article on MSDN for instructions on how to do this.
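Assuming an IAudioClient initialized with AUDCLNT_STREAMFLAGS_LOOPBACK as that article describes, the read loop might look roughly like this (SendPcm is a hypothetical placeholder for handing data to your encoder or network sender):

```cpp
// Sketch of the read side of loopback recording: drain packets from the
// capture client and hand them off for encoding/sending.
#include <windows.h>
#include <audioclient.h>

void SendPcm(const BYTE *data, UINT32 frames, DWORD flags); // placeholder

void PumpLoopback(IAudioClient *pClient, IAudioCaptureClient *pCapture)
{
    pClient->Start();
    for (;;)  // real code needs an exit condition and its own thread
    {
        UINT32 packetFrames = 0;
        pCapture->GetNextPacketSize(&packetFrames);
        while (packetFrames != 0)
        {
            BYTE *pData = NULL;
            UINT32 frames = 0;
            DWORD flags = 0;
            pCapture->GetBuffer(&pData, &frames, &flags, NULL, NULL);

            // AUDCLNT_BUFFERFLAGS_SILENT means treat the buffer as zeros.
            SendPcm(pData, frames, flags);

            pCapture->ReleaseBuffer(frames);
            pCapture->GetNextPacketSize(&packetFrames);
        }
        Sleep(10);  // crude pacing; half the buffer period is the usual advice
    }
}
```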
I don't think there is a direct way to do this using the OS - it's a feature that may (or may not) be present on the sound card. Some sound cards have a loopback interface - Creative calls it "What U Hear". You simply select this as the input rather than the microphone, and record from it using the normal waveInOpen() that you already know about.
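The waveIn side of that is just the standard recording pattern; a rough sketch (with the loopback source already selected as the recording input in the mixer):

```cpp
// Minimal waveIn recording skeleton. Error checks are trimmed; a real app
// would rotate several buffers and clean up on shutdown.
#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

static char    g_buffer[44100 * 4];  // ~1 second of 44.1 kHz 16-bit stereo
static WAVEHDR g_hdr;

void CALLBACK WaveInProc(HWAVEIN hwi, UINT msg, DWORD_PTR inst,
                         DWORD_PTR param1, DWORD_PTR param2)
{
    if (msg == WIM_DATA)
    {
        WAVEHDR *hdr = (WAVEHDR*)param1;
        // hdr->lpData holds hdr->dwBytesRecorded bytes of PCM; feed your
        // encoder, then requeue the buffer (from outside this callback)
        // with waveInAddBuffer.
    }
}

void StartCapture()
{
    WAVEFORMATEX fmt = {0};
    fmt.wFormatTag      = WAVE_FORMAT_PCM;
    fmt.nChannels       = 2;
    fmt.nSamplesPerSec  = 44100;
    fmt.wBitsPerSample  = 16;
    fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;
    fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

    HWAVEIN hwi = NULL;
    waveInOpen(&hwi, WAVE_MAPPER, &fmt,
               (DWORD_PTR)WaveInProc, 0, CALLBACK_FUNCTION);

    g_hdr.lpData         = g_buffer;
    g_hdr.dwBufferLength = sizeof(g_buffer);
    waveInPrepareHeader(hwi, &g_hdr, sizeof(g_hdr));
    waveInAddBuffer(hwi, &g_hdr, sizeof(g_hdr));
    waveInStart(hwi);
}
```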
If the sound card doesn't have this feature then I think you're out of luck other than by doing something crazy like making your own driver. Or you could convince your users to run a cable from the speaker output to the line input :)
I'd like to pull a stream of PCM samples from a Mac's line-in or built-in mic and do a little live analysis (the exact nature doesn't pertain to this question, but it could be an FFT every so often, or some basic statistics on the sample levels, or what have you).
What's a good fit for this? Writing an AudioUnit that just passes the sound through and incidentally hands it off somewhere for analysis? Writing a JACK-aware app and figuring out how to get it to play with the JACK server? Ecasound?
This is a cheesy proof-of-concept hobby project, so simplicity of API is the driving factor (followed by reasonable choice of programming language).
The principal framework for audio development in Mac OS X is Core Audio; it's the basis for all audio I/O. There are layers on top of it like Audio Toolbox, Audio Queue Services, QuickTime, and QTKit that you can use if you want a simplified API for common tasks.
To just pull a stream of samples, you'd probably want to use Audio Queue Services; the AudioQueueNewInput function will set up recording of PCM data and pass it to a callback you supply.
On your Mac there's a set of Core Audio examples in /Developer/Examples/CoreAudio/SimpleSDK that includes an example (AQRecord, in AudioQueueTools) of the Audio Queue Services recording APIs.
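The core of that pattern looks something like this sketch (the format and buffer sizes are arbitrary choices for illustration; error handling is trimmed):

```cpp
// Minimal Audio Queue recording setup: a PCM input queue whose callback
// receives buffers of samples for analysis.
#include <AudioToolbox/AudioToolbox.h>

static void InputCallback(void *userData, AudioQueueRef queue,
                          AudioQueueBufferRef buffer,
                          const AudioTimeStamp *startTime,
                          UInt32 numPackets,
                          const AudioStreamPacketDescription *packetDesc)
{
    // buffer->mAudioData holds buffer->mAudioDataByteSize bytes of 16-bit
    // PCM here; run your FFT/statistics on it, then re-enqueue the buffer.
    AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
}

void StartRecording(void)
{
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = 44100.0;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger
                          | kLinearPCMFormatFlagIsPacked;
    fmt.mChannelsPerFrame = 1;    // mono
    fmt.mBitsPerChannel   = 16;
    fmt.mBytesPerFrame    = 2;
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerPacket   = 2;

    AudioQueueRef queue = NULL;
    AudioQueueNewInput(&fmt, InputCallback, NULL /* user data */,
                       NULL, kCFRunLoopCommonModes, 0, &queue);

    // Enqueue a few buffers so the queue always has somewhere to record.
    for (int i = 0; i < 3; i++) {
        AudioQueueBufferRef buf = NULL;
        AudioQueueAllocateBuffer(queue, 8192, &buf);
        AudioQueueEnqueueBuffer(queue, buf, 0, NULL);
    }
    AudioQueueStart(queue, NULL);
}
```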
I think PortAudio is what you need.
Reading from the mic in a console app is about a 10-line C file (see the patests in the PortAudio distribution).
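It really is about that small; a sketch in the spirit of the patests (error checks trimmed, analysis left as a placeholder):

```cpp
// Open the default input device and receive samples in a callback.
#include <portaudio.h>

static int RecordCallback(const void *input, void *output,
                          unsigned long frameCount,
                          const PaStreamCallbackTimeInfo *timeInfo,
                          PaStreamCallbackFlags statusFlags,
                          void *userData)
{
    const float *samples = (const float*)input;
    // ... analyze frameCount mono float samples here ...
    return paContinue;
}

int main(void)
{
    Pa_Initialize();
    PaStream *stream = NULL;
    Pa_OpenDefaultStream(&stream,
                         1,           // one input channel (the mic)
                         0,           // no output
                         paFloat32,   // sample format
                         44100,       // sample rate
                         256,         // frames per buffer
                         RecordCallback,
                         NULL);
    Pa_StartStream(stream);
    Pa_Sleep(5000);                   // capture for ~5 seconds
    Pa_StopStream(stream);
    Pa_CloseStream(stream);
    Pa_Terminate();
    return 0;
}
```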
Apple provides sample code for reading and writing audio data. Additionally there is a lot of good information in the Audio section of the Apple Developer site.