Programmatically stream audio in Cocoa on the Mac

How do I go about programmatically creating audio streams using Cocoa on the Mac? Say I want to make a white-noise generator using the core frameworks on Mac OS X in a Cocoa app.

One way is to use the Core Audio DefaultOutputUnit.
You can configure it with parameters such as the output sampling rate, resolution, and sample format. Then you can programmatically generate a raw sound wave and hand it to the output unit.
Take a look at the example on your machine at /Developer/Examples/CoreAudio/SimpleSDK/DefaultOutputUnit/, which uses the default output unit to play a programmatically rendered sine wave. Using that as a starting point, you can write a routine to render anything else.
The /Developer/Examples/CoreAudio/ directory also contains plenty of other Core Audio examples.
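For reference, here is a minimal sketch of that approach: a render callback that fills each buffer with random samples, attached to the default output unit. It assumes Mac OS X 10.6+ (for the AudioComponent calls; older systems use the equivalent Component Manager functions), and all error handling is omitted.

```cpp
// Sketch: white noise through the Core Audio default output unit.
// Assumes the OS X 10.6+ AudioComponent API; error checks omitted.
#include <AudioUnit/AudioUnit.h>
#include <CoreFoundation/CoreFoundation.h>
#include <cstdlib>

// Render callback: fill every buffer with random samples in [-1, 1].
static OSStatus RenderNoise(void *inRefCon,
                            AudioUnitRenderActionFlags *ioActionFlags,
                            const AudioTimeStamp *inTimeStamp,
                            UInt32 inBusNumber, UInt32 inNumberFrames,
                            AudioBufferList *ioData)
{
    for (UInt32 b = 0; b < ioData->mNumberBuffers; ++b) {
        Float32 *out = (Float32 *)ioData->mBuffers[b].mData;
        for (UInt32 i = 0; i < inNumberFrames; ++i)
            out[i] = (Float32)rand() / RAND_MAX * 2.0f - 1.0f;
    }
    return noErr;
}

int main()
{
    // Locate and instantiate the default output unit.
    AudioComponentDescription desc = {0};
    desc.componentType         = kAudioUnitType_Output;
    desc.componentSubType      = kAudioUnitSubType_DefaultOutput;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;
    AudioUnit unit;
    AudioComponentInstanceNew(AudioComponentFindNext(NULL, &desc), &unit);

    // Declare the format we render: packed native floats, mono, 44.1 kHz.
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = 44100;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kAudioFormatFlagsNativeFloatPacked;
    fmt.mChannelsPerFrame = 1;
    fmt.mBitsPerChannel   = 32;
    fmt.mBytesPerFrame    = fmt.mBytesPerPacket = 4;
    fmt.mFramesPerPacket  = 1;
    AudioUnitSetProperty(unit, kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Input, 0, &fmt, sizeof(fmt));

    // Hand the unit our render callback, then start it.
    AURenderCallbackStruct cb = { RenderNoise, NULL };
    AudioUnitSetProperty(unit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &cb, sizeof(cb));
    AudioUnitInitialize(unit);
    AudioOutputUnitStart(unit);

    CFRunLoopRunInMode(kCFRunLoopDefaultMode, 5.0, false); // hiss for 5 s
    AudioOutputUnitStop(unit);
    AudioComponentInstanceDispose(unit);
    return 0;
}
```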

Look at Audio Queue Services.
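If you take that route, here is a hedged sketch of what the output side looks like: an output queue pulling 16-bit mono PCM buffers that a callback keeps filled (with white noise here, to match the question). Error handling is omitted.

```cpp
// Sketch: noise output via Audio Queue Services; error checks omitted.
#include <AudioToolbox/AudioToolbox.h>
#include <CoreFoundation/CoreFoundation.h>
#include <cstdlib>

// Output callback: refill the finished buffer and re-enqueue it.
static void FillBuffer(void *userData, AudioQueueRef queue,
                       AudioQueueBufferRef buffer)
{
    SInt16 *samples = (SInt16 *)buffer->mAudioData;
    UInt32 count = buffer->mAudioDataBytesCapacity / sizeof(SInt16);
    for (UInt32 i = 0; i < count; ++i)
        samples[i] = (SInt16)(rand() % 65536 - 32768);
    buffer->mAudioDataByteSize = buffer->mAudioDataBytesCapacity;
    AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
}

int main()
{
    // 16-bit signed mono PCM at 44.1 kHz.
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = 44100;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger |
                            kLinearPCMFormatFlagIsPacked;
    fmt.mChannelsPerFrame = 1;
    fmt.mBitsPerChannel   = 16;
    fmt.mBytesPerFrame    = fmt.mBytesPerPacket = 2;
    fmt.mFramesPerPacket  = 1;

    AudioQueueRef queue;
    AudioQueueNewOutput(&fmt, FillBuffer, NULL, NULL, NULL, 0, &queue);

    // Prime a few buffers; the callback keeps them filled after that.
    for (int i = 0; i < 3; ++i) {
        AudioQueueBufferRef buffer;
        AudioQueueAllocateBuffer(queue, 8192, &buffer);
        FillBuffer(NULL, queue, buffer);
    }
    AudioQueueStart(queue, NULL);
    CFRunLoopRunInMode(kCFRunLoopDefaultMode, 5.0, false); // play for 5 s
    AudioQueueStop(queue, true);
    AudioQueueDispose(queue, true);
    return 0;
}
```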

Related

Send audio from QTMovie over the internet

I have a simple audio-playing app that uses QTMovie for a few of its features. I'm also developing a little ethernet-enabled board to stream MP3 or PCM data to.
Is there any way of 'grabbing' what QTMovie is outputting, formatting it into an array of bytes, and sending it over ethernet to a specific IP? Somehow iTunes manages to do this with AirPlay, so there must be a way.
Thanks for any answers!
There are off-the-shelf products like Rogue Amoeba's Airfoil that you might want to look at:
http://www.rogueamoeba.com/airfoil/mac/
But if you really want to get your hands dirty and develop something yourself, it looks like QTMovie just outputs to Core Audio, and you can set which device it uses:
http://developer.apple.com/library/mac/#qa/qa1578/_index.html
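If I'm reading QA1578 right, the routing part comes down to two QuickTime C calls. A hedged sketch, assuming you already have the underlying Movie (QTMovie exposes it via its -quickTimeMovie method) and a Core Audio device UID string:

```cpp
// Sketch: send a QuickTime movie's audio to a chosen Core Audio device.
// `deviceUID` is a placeholder; enumerate real UIDs through the Core
// Audio HAL (kAudioDevicePropertyDeviceUID). Error checks omitted.
#include <QuickTime/QuickTime.h>

static void RouteMovieAudio(Movie movie, CFStringRef deviceUID)
{
    QTAudioContextRef ctx = NULL;
    // Create an audio context bound to the output device...
    QTAudioContextCreateForAudioDevice(kCFAllocatorDefault, deviceUID,
                                       NULL, &ctx);
    // ...and attach it to the movie; its sound now renders there.
    SetMovieAudioContext(movie, ctx);
    QTAudioContextRelease(ctx);
}
```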
There's also some Q&A on how programs that intercept Core Audio devices do it:
Code sample for capturing audio from a Mac in Cocoa and saving to file?

Can I programatically save the data stream sent to the sound card as a WAV file?

In Windows XP, you can configure your sound card's properties via the software preloaded with Windows. In the recording properties, if "stereo mix" or "wave out" (or something similar) is selected as the recording device, programs that can record audio ("Sound Recorder" in Windows, for example) capture a decent-quality WAV file of the audio stream. As an example of a third-party application that does the same, I usually use GoldWave from download.com.
Well, I've had trouble getting this scenario to work on Windows Vista or later in the direct, no-bullsh*t manner described above. And it's more than just Vista+: some sound cards don't have that option at all.
I was just wondering if there is a way to run a Windows-friendly program (VB?) that takes your audio output stream and converts it (in real time, obviously) to a WAV file with the same default sampling rate as other WAV files.
Ideally, it would be cool if it worked on any operating system. Is it possible to write a web service that "listens" to your audio card like that without making the computer think it's under a virus attack or something?
Possibly related question:
How to save web audio streaming to file ( c++ / java )
I'm only aware of one sound card manufacturer that enabled that option (Creative). However, Vista and beyond support a "loopback" mode which gives you effectively the same functionality. You need to use the low-level WASAPI rendering stack, but it should work just fine.
https://github.com/rdp/virtual-audio-output-sniffer provides a DirectShow input device to capture the sum of the wave output on Vista+.
You could use low-level waveOut API injection and capture what it receives.
I have SkypeMXrecorder, a piece of software that does just that: it injects into any exe and 'sniffs' what's going from it to the sound hardware. But it seems rather complicated to implement...

Can I use OS X AUGraph from MonoMac?

I have an MP3-playing application written in C# which I would like to port to OS X.
As it uses DirectShow to play MP3, I realise that I will need to recode the audio-playback part. I found Apple's PlayFile sample, which uses AUGraph.
The Binding Cocoa section of http://www.mono-project.com/MonoMac mentions the "much simpler AudioToolBox API".
Can anyone point me at sample code for using AudioToolbox from C#, or preferably for using AUGraph from C#?
Is porting my code to MonoMac the best approach, or would I be better off taking the plunge and recoding in Objective-C?
Here are some samples of using AudioUnit with the same API that MonoMac has, except that these samples target the iPhone using MonoTouch:
https://github.com/migueldeicaza/MonoTouch.AudioUnit
Setting AudioUnit up is a little cumbersome. If all you want is to play MP3 files without doing any low-level processing or applying effects, you can use the MonoMac.AppKit.NSSound API instead.
The page you linked does say that AudioToolbox (i.e. CoreAudio) is fully bound. I don't know of any samples but it shouldn't be hard to port C code.
Alternatively you could go onto the mono-osx mailing list and request a binding of QTKit, or even do this binding yourself. I hear that the MonoMac binding generator makes it quite easy to bind Objective-C APIs.
It's going to be much quicker and easier to use your existing C# code and knowledge, even if you do have to do some bindings yourself.
AUGraph is part of Core Audio. It is used to assemble a graph of audio units and can be used for audio playback. Core Audio is a low-level framework with a C API.
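To make that concrete, here is a hedged sketch of the C API at its smallest: a graph containing just the default output node. A real player would add a file-player or converter node and wire it up with AUGraphConnectNodeInput.

```cpp
// Sketch: the smallest possible AUGraph, one default-output node.
// Error handling omitted for brevity.
#include <AudioToolbox/AudioToolbox.h>

int main()
{
    AUGraph graph;
    NewAUGraph(&graph);

    // Describe and add the default output unit as a graph node.
    AudioComponentDescription outDesc = {0};
    outDesc.componentType         = kAudioUnitType_Output;
    outDesc.componentSubType      = kAudioUnitSubType_DefaultOutput;
    outDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
    AUNode outNode;
    AUGraphAddNode(graph, &outDesc, &outNode);

    // Open instantiates the units; NodeInfo retrieves the AudioUnit
    // in case you need to configure it directly.
    AUGraphOpen(graph);
    AudioUnit outUnit;
    AUGraphNodeInfo(graph, outNode, NULL, &outUnit);

    AUGraphInitialize(graph);
    AUGraphStart(graph);
    // ... run your app's event loop, then:
    AUGraphStop(graph);
    DisposeAUGraph(graph);
    return 0;
}
```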
Maybe you can use QTKit (a cocoa wrapper around QuickTime) from Mono.
In my opinion it is always best to work with a platform's "native" technology (which, for Mac OS X, means Objective-C and Cocoa).
Apple has a nice sample that shows how to create a media player using QTKit:
http://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/QTKitApplicationTutorial/Introduction/Introduction.html#//apple_ref/doc/uid/TP40008155-CH1-SW1

Getting the speaker audio signal and then streaming it out

I am trying to build what I think is a basic app. Well, two apps: one for Windows and one for OS X. I would like to capture the audio signal that is playing (i.e. if the user is playing music out of his/her speakers). Then take that signal and stream it out so another computer can "listen". The other computer would be running Windows or OS X.
Any ideas on how to get the audio signal?
What's the most efficient way to stream out audio without a 3rd party plugin? If there is an open-source solution out there, I would be interested.
Thanks!
Chris
On Windows XP this isn't trivial at all, because there's no way of intercepting the output signal without writing an audio filter driver (which is not something for the faint of heart).
On Windows Vista and above, you can capture the output of the audio engine by using the WASAPI APIs (built into Windows so they're free) and initializing an audio client with the AUDCLNT_STREAMFLAGS_LOOPBACK flag. This will give you a capture stream that's hooked to the output of the audio engine.
You can then package up that audio and send it to the other machine and render it with whatever audio rendering API you want.
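A hedged sketch of that setup (COM error checking, the stop condition, and the actual networking are left out):

```cpp
// Sketch: WASAPI loopback capture of the default render endpoint
// (Vista and later). Link against ole32; error checks omitted.
#include <windows.h>
#include <mmdeviceapi.h>
#include <audioclient.h>

void CaptureLoopback()
{
    CoInitialize(NULL);

    IMMDeviceEnumerator *enumr = NULL;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void **)&enumr);

    // Loopback opens the *render* endpoint, not a capture device.
    IMMDevice *device = NULL;
    enumr->GetDefaultAudioEndpoint(eRender, eConsole, &device);

    IAudioClient *client = NULL;
    device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, NULL,
                     (void **)&client);

    WAVEFORMATEX *mixFormat = NULL;
    client->GetMixFormat(&mixFormat);
    client->Initialize(AUDCLNT_SHAREMODE_SHARED,
                       AUDCLNT_STREAMFLAGS_LOOPBACK,
                       10000000 /* 1 s buffer, in 100 ns units */, 0,
                       mixFormat, NULL);

    IAudioCaptureClient *capture = NULL;
    client->GetService(__uuidof(IAudioCaptureClient), (void **)&capture);
    client->Start();

    for (;;) { // a real app would have a stop condition here
        Sleep(10);
        UINT32 packetFrames = 0;
        capture->GetNextPacketSize(&packetFrames);
        while (packetFrames != 0) {
            BYTE *data = NULL;
            UINT32 frames = 0;
            DWORD flags = 0;
            capture->GetBuffer(&data, &frames, &flags, NULL, NULL);
            // ... package `frames` frames of `data` and send them out ...
            capture->ReleaseBuffer(frames);
            capture->GetNextPacketSize(&packetFrames);
        }
    }
}
```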
I don't know how to do the equivalent on OS X though :(.

Simple audio input API on a Mac?

I'd like to pull a stream of PCM samples from a Mac's line-in or built-in mic and do a little live analysis (the exact nature doesn't pertain to this question, but it could be an FFT every so often, or some basic statistics on the sample levels, or what have you).
What's a good fit for this? Writing an AudioUnit that just passes the sound through and incidentally hands it off somewhere for analysis? Writing a JACK-aware app and figuring out how to get it to play with the JACK server? Ecasound?
This is a cheesy proof-of-concept hobby project, so simplicity of API is the driving factor (followed by reasonable choice of programming language).
The principal framework for audio development in Mac OS X is Core Audio; it's the basis for all audio I/O. There are layers on top of it like Audio Toolbox, Audio Queue Services, QuickTime, and QTKit that you can use if you want a simplified API for common tasks.
To just pull a stream of samples, you'd probably want to use Audio Queue Services; the AudioQueueNewInput function will set up recording of PCM data and pass it to a callback you supply.
On your Mac there's a set of Core Audio examples in /Developer/Examples/CoreAudio/SimpleSDK, including one (AQRecord, in AudioQueueTools) that uses the Audio Queue Services recording APIs.
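A minimal sketch of that recording setup; AnalyzeSamples is a placeholder for whatever FFT or level statistics you want to run, and error handling is omitted:

```cpp
// Sketch: record 16-bit mono PCM from the default input with
// AudioQueueNewInput and hand each filled buffer to analysis code.
#include <AudioToolbox/AudioToolbox.h>
#include <CoreFoundation/CoreFoundation.h>

// Placeholder for your FFT / statistics routine.
static void AnalyzeSamples(const SInt16 *samples, UInt32 count) { /* ... */ }

// Input callback: analyze the filled buffer, then recycle it.
static void InputCallback(void *userData, AudioQueueRef queue,
                          AudioQueueBufferRef buffer,
                          const AudioTimeStamp *startTime,
                          UInt32 numPackets,
                          const AudioStreamPacketDescription *packetDesc)
{
    AnalyzeSamples((const SInt16 *)buffer->mAudioData,
                   buffer->mAudioDataByteSize / sizeof(SInt16));
    AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
}

int main()
{
    // 16-bit signed mono PCM at 44.1 kHz.
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = 44100;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger |
                            kLinearPCMFormatFlagIsPacked;
    fmt.mChannelsPerFrame = 1;
    fmt.mBitsPerChannel   = 16;
    fmt.mBytesPerFrame    = fmt.mBytesPerPacket = 2;
    fmt.mFramesPerPacket  = 1;

    AudioQueueRef queue;
    AudioQueueNewInput(&fmt, InputCallback, NULL, NULL, NULL, 0, &queue);

    for (int i = 0; i < 3; ++i) { // a few buffers keep the queue fed
        AudioQueueBufferRef buffer;
        AudioQueueAllocateBuffer(queue, 8192, &buffer);
        AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
    }
    AudioQueueStart(queue, NULL);
    CFRunLoopRunInMode(kCFRunLoopDefaultMode, 10.0, false); // record 10 s
    AudioQueueStop(queue, true);
    AudioQueueDispose(queue, true);
    return 0;
}
```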
I think PortAudio is what you need.
Reading from the mic in a console app takes about ten lines of C (see the patests directory in the PortAudio distribution).
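In that spirit, a short sketch that opens the default input and prints a per-buffer peak level (printf in an audio callback isn't real-time safe, but it's fine for a proof of concept):

```cpp
// Sketch: peak meter on the default input device via PortAudio.
// Error checks omitted for brevity.
#include <portaudio.h>
#include <cmath>
#include <cstdio>

// Callback: scan each incoming buffer for its peak sample level.
static int OnAudio(const void *input, void *output, unsigned long frames,
                   const PaStreamCallbackTimeInfo *timeInfo,
                   PaStreamCallbackFlags statusFlags, void *userData)
{
    const float *in = (const float *)input;
    float peak = 0.0f;
    for (unsigned long i = 0; i < frames; ++i) {
        float v = fabsf(in[i]);
        if (v > peak) peak = v;
    }
    printf("peak: %f\n", peak); // not RT-safe; fine for a hobby project
    return paContinue;
}

int main()
{
    Pa_Initialize();
    PaStream *stream;
    // 1 input channel, 0 output channels, float samples at 44.1 kHz.
    Pa_OpenDefaultStream(&stream, 1, 0, paFloat32, 44100, 256,
                         OnAudio, NULL);
    Pa_StartStream(stream);
    Pa_Sleep(10 * 1000); // run for ten seconds
    Pa_StopStream(stream);
    Pa_CloseStream(stream);
    Pa_Terminate();
    return 0;
}
```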
Apple provides sample code for reading and writing audio data. Additionally there is a lot of good information in the Audio section of the Apple Developer site.
