Redirecting audio output - Mac OS X

I need a method to redirect my Mac's audio output to a different computer on the same network, as the Mac doesn't have audio output. I'm on Snow Leopard, while the other computer, which has speakers attached, does not have any operating system at the moment.

Probably the simplest solution is to use Rogue Amoeba's Nicecast. It can hijack system audio and stream it over the local network, so you can pick it up on the other machine with any MP3 player that supports streams.

What's the software source of the audio? If the source is iTunes, there are a few things you can do that deal specifically with its support for AirPort Express hardware. If you don't wish to spend more money on Team Apple, the Enlightened Sound Daemon (ESD) runs on pretty much any hardware (Linux, OS X, or Windows/Cygwin).
There's a quick tutorial here. It uses Rogue Amoeba's Audio Hijack; it looks like someone's recommended another Rogue Amoeba product as well. I'm a fan of their Airfoil, as it lets me stream Pandora to the AirPort Express in the main room.

Related

Can I send sound to the speakers when headphones are connected?

I'm running a roleplaying game that involves demonic forces, so to make it a little more fun, I wrote a tiny program with EZAudio that takes the microphone input and plays it back with the pitch lowered by 400%. Overlaid with my actual voice, it sounds pretty evil (when I can focus enough to avoid the speech-jamming effect).
The problem is that I don't have a dedicated microphone, and the feedback is pretty intense when I run it with the internal microphone and the internal speakers. I do, however, have earbuds with a microphone and a 4-ring jack, which my MacBook Pro recognizes. But when I use them, the sound goes to the earbuds, which defeats the whole purpose.
Mac OS X supports multiple audio output devices. However, as far as I can tell, my 2010 MacBook Pro exposes just one, which dynamically routes sound to either the internal speakers when no headphones are connected, or to the headphones when they are. EZAudioDevice's outputDevices returns just one entry in both cases.
Is there a way I can divert the sound to my computer's internal speakers even when headphones are connected?
In Core Audio for iOS, this is not possible, as explained here. You are running Mac OS X, and I would assume the behavior is the same.
The best option is to use a mic that isn't associated with the headphones: use a USB preamp with a mic.

I want to build an OS X app that applies some simple DSP to all audio. Where do I get started?

I want to build a little utility app that sits in the OS X menu bar and allows a user to apply a little DSP (equalization, etc.) to whatever audio is playing.
E.g., a user could adjust the equalization of the overall sound, regardless of which app the audio is playing in.
What libraries or APIs will allow me to tap into the audio stream?
This is my first time programming for OS X, so any advice or help with gotchas would be appreciated!
Look into Audio Units.
AU Lab can be configured as a system-wide equalizer, when combined with Soundflower.
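If you'd rather roll your own app than configure AU Lab, the programmatic route is the same idea: instantiate one of Apple's stock effect Audio Units and push audio through it. A rough sketch (assuming the built-in GraphicEQ unit is what you want; wiring it to Soundflower and error handling are left out):

    // Sketch: instantiate Apple's stock GraphicEQ effect Audio Unit.
    // Build with: clang++ eq.cpp -framework AudioUnit -framework AudioToolbox
    #include <AudioUnit/AudioUnit.h>

    int main() {
        // Describe the effect we want: Apple's built-in graphic equalizer.
        AudioComponentDescription desc = {};
        desc.componentType = kAudioUnitType_Effect;
        desc.componentSubType = kAudioUnitSubType_GraphicEQ;
        desc.componentManufacturer = kAudioUnitManufacturer_Apple;

        // Find and instantiate the component.
        AudioComponent comp = AudioComponentFindNext(NULL, &desc);
        AudioUnit eq;
        AudioComponentInstanceNew(comp, &eq);
        AudioUnitInitialize(eq);

        // From here you would wire the unit into an AUGraph whose input is
        // the Soundflower device, and adjust bands with something like
        // AudioUnitSetParameter(eq, bandIndex, kAudioUnitScope_Global, 0, gainDb, 0);
        AudioComponentInstanceDispose(eq);
        return 0;
    }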

Getting the speaker audio signal and then streaming it out

I am trying to build what I think is a basic app. Well, two apps: one for Windows and one for OS X. I would like to capture the audio signal that is playing (i.e., if the user is playing music out of his/her speakers), then take that signal and stream it out so another computer can "listen". The other computer would be running Windows or OS X.
Any ideas on how to get the audio signal?
What's the most efficient way to stream out audio without a third-party plugin? If there is an open-source solution out there, I would be interested.
Thanks!
Chris
On Windows XP this isn't trivial at all, because there's no way of intercepting the output signal without writing an audio filter driver (which is not something for the faint of heart).
On Windows Vista and above, you can capture the output of the audio engine by using the WASAPI APIs (built into Windows, so they're free) and initializing an audio client with the AUDCLNT_STREAMFLAGS_LOOPBACK flag. This will give you a capture stream that's hooked up to the output of the audio engine.
You can then package up that audio and send it to the other machine and render it with whatever audio rendering API you want.
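Here's a rough sketch of the loopback-capture half, in case it helps (assumes Vista or later; error handling omitted, and the "send to the other machine" part is left as a comment):

    // Sketch: capture whatever is playing through the default output device
    // using WASAPI loopback (Windows Vista and later). Link with ole32.lib.
    #include <windows.h>
    #include <mmdeviceapi.h>
    #include <audioclient.h>

    int main() {
        CoInitialize(nullptr);

        // Get the default render (playback) endpoint.
        IMMDeviceEnumerator* enumerator = nullptr;
        CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                         __uuidof(IMMDeviceEnumerator), (void**)&enumerator);
        IMMDevice* device = nullptr;
        enumerator->GetDefaultAudioEndpoint(eRender, eConsole, &device);

        // Activate an audio client on it and request a loopback stream.
        IAudioClient* client = nullptr;
        device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, nullptr, (void**)&client);
        WAVEFORMATEX* format = nullptr;
        client->GetMixFormat(&format);
        client->Initialize(AUDCLNT_SHAREMODE_SHARED, AUDCLNT_STREAMFLAGS_LOOPBACK,
                           10000000 /* 1 s buffer, in 100 ns units */, 0, format, nullptr);

        IAudioCaptureClient* capture = nullptr;
        client->GetService(__uuidof(IAudioCaptureClient), (void**)&capture);
        client->Start();

        // Poll for packets; each buffer holds what the audio engine just rendered.
        for (;;) {
            UINT32 packetFrames = 0;
            capture->GetNextPacketSize(&packetFrames);
            while (packetFrames != 0) {
                BYTE* data; UINT32 frames; DWORD flags;
                capture->GetBuffer(&data, &frames, &flags, nullptr, nullptr);
                // ...package up `frames` frames of `data` and send them here...
                capture->ReleaseBuffer(frames);
                capture->GetNextPacketSize(&packetFrames);
            }
            Sleep(10); // crude pacing; a real app would wait on an event
        }
    }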
I don't know how to do the equivalent on OS X, though :(.

How do I capture the audio that is being played?

Does anyone know how to programmatically capture the sound that is being played (that is, everything that is coming from the sound card, not input devices such as a microphone)?
Assuming that you are talking about Windows, there are essentially three ways to do this.
The first is to open the audio device's main output as a recording source. This is only possible when the driver supports it, although most do these days. Common names for the virtual device are "What You Hear" or "Wave Out". You will need to use a suitable API (see WaveIn or DirectSound in MSDN) to do the capturing.
The second way is to write a filter driver that can intercept the audio stream before it reaches the physical device. Again, this technique will only work for devices that have a suitable driver topology and it's certainly not for the faint-hearted.
Neither of these first two options, then, is guaranteed to work on a PC with arbitrary hardware.
The last alternative is to use a virtual audio device, such as Virtual Audio Cable. If this device is set as the default playback device in Windows, then all well-behaved apps will play through it. You can then record from the same device to capture the summed output. As long as you have control over the device that the application you want to record uses, this option will always work.
All of these techniques have their pros and cons - it's up to you to decide which would be the most suitable for your needs.
You can use the Waveform Audio interface; there is an MSDN article on how to access it via P/Invoke.
In order to capture the sound that is being played, you just need to open the playback device instead of the microphone. Open for input, of course, not for output ;-)
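A bare-bones version with the waveIn API might look like this (a sketch only: it assumes the "Stereo Mix"/"What You Hear" device has been selected as the default recording device, and omits error handling):

    // Sketch: record from the default recording device with the waveIn API.
    // If "Stereo Mix" is the default recording device, this captures what
    // the sound card is playing. Link with winmm.lib.
    #include <windows.h>
    #include <mmsystem.h>

    int main() {
        // 16-bit, 44.1 kHz, stereo PCM.
        WAVEFORMATEX wfx = {};
        wfx.wFormatTag = WAVE_FORMAT_PCM;
        wfx.nChannels = 2;
        wfx.nSamplesPerSec = 44100;
        wfx.wBitsPerSample = 16;
        wfx.nBlockAlign = wfx.nChannels * wfx.wBitsPerSample / 8;
        wfx.nAvgBytesPerSec = wfx.nSamplesPerSec * wfx.nBlockAlign;

        HWAVEIN hwi;
        waveInOpen(&hwi, WAVE_MAPPER, &wfx, 0, 0, CALLBACK_NULL);

        // One second of audio per buffer.
        static char buffer[44100 * 4];
        WAVEHDR hdr = {};
        hdr.lpData = buffer;
        hdr.dwBufferLength = sizeof(buffer);
        waveInPrepareHeader(hwi, &hdr, sizeof(hdr));
        waveInAddBuffer(hwi, &hdr, sizeof(hdr));
        waveInStart(hwi);

        // Busy-wait until the buffer fills; a real app would use a callback.
        while (!(hdr.dwFlags & WHDR_DONE)) Sleep(50);

        // hdr.dwBytesRecorded bytes of captured PCM are now in `buffer`.
        waveInUnprepareHeader(hwi, &hdr, sizeof(hdr));
        waveInClose(hwi);
        return 0;
    }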
If you're on OS X, Audio Hijack Pro from Rogue Amoeba is probably the easiest way to go.
Anyway, why not just loop your audio back into your line-in and record that? This is a very simple solution: plug a cable into your audio output jack and your line-in jack, and start recording.
You have to enable the device "Stereo Mix". If you do this, DirectSound will find the device.

Simple audio input API on a Mac?

I'd like to pull a stream of PCM samples from a Mac's line-in or built-in mic and do a little live analysis (the exact nature doesn't pertain to this question, but it could be an FFT every so often, or some basic statistics on the sample levels, or what have you).
What's a good fit for this? Writing an AudioUnit that just passes the sound through and incidentally hands it off somewhere for analysis? Writing a JACK-aware app and figuring out how to get it to play with the JACK server? Ecasound?
This is a cheesy proof-of-concept hobby project, so simplicity of API is the driving factor (followed by reasonable choice of programming language).
The principal framework for audio development in Mac OS X is Core Audio; it's the basis for all audio I/O. There are layers on top of it like Audio Toolbox, Audio Queue Services, QuickTime, and QTKit that you can use if you want a simplified API for common tasks.
To just pull a stream of samples, you'd probably want to use Audio Queue Services; the AudioQueueNewInput function will set up recording of PCM data and pass it to a callback you supply.
On your Mac there's a set of Core Audio examples in /Developer/Examples/CoreAudio/SimpleSDK that includes a use (AQRecord in AudioQueueTools) of the Audio Queue Services recording APIs.
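For a feel of what that approach looks like, here's a minimal sketch (16-bit mono PCM from the default input device; error handling and cleanup pared down):

    // Sketch: pull PCM samples from the default input device with
    // Audio Queue Services. Build with:
    //   clang++ rec.cpp -framework AudioToolbox -framework CoreFoundation
    #include <AudioToolbox/AudioToolbox.h>
    #include <cstdio>

    // Called by the queue each time a buffer of samples is ready.
    static void InputCallback(void* user, AudioQueueRef queue,
                              AudioQueueBufferRef buffer,
                              const AudioTimeStamp*, UInt32,
                              const AudioStreamPacketDescription*) {
        const SInt16* samples = static_cast<const SInt16*>(buffer->mAudioData);
        UInt32 count = buffer->mAudioDataByteSize / sizeof(SInt16);
        // ...analyze `count` samples here (FFT, level stats, etc.)...
        printf("got %u samples, first = %d\n", count, samples[0]);
        AudioQueueEnqueueBuffer(queue, buffer, 0, NULL); // recycle the buffer
    }

    int main() {
        // Format to be delivered to the callback: 44.1 kHz, 16-bit, mono PCM.
        AudioStreamBasicDescription fmt = {};
        fmt.mSampleRate = 44100.0;
        fmt.mFormatID = kAudioFormatLinearPCM;
        fmt.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger
                         | kLinearPCMFormatFlagIsPacked;
        fmt.mChannelsPerFrame = 1;
        fmt.mBitsPerChannel = 16;
        fmt.mBytesPerFrame = fmt.mBytesPerPacket = 2;
        fmt.mFramesPerPacket = 1;

        AudioQueueRef queue;
        AudioQueueNewInput(&fmt, InputCallback, NULL, NULL, NULL, 0, &queue);

        // Give the queue a few buffers to fill (~0.25 s each).
        for (int i = 0; i < 3; ++i) {
            AudioQueueBufferRef buf;
            AudioQueueAllocateBuffer(queue, 22050, &buf);
            AudioQueueEnqueueBuffer(queue, buf, 0, NULL);
        }

        AudioQueueStart(queue, NULL);
        CFRunLoopRunInMode(kCFRunLoopDefaultMode, 10.0, false); // record ~10 s
        AudioQueueStop(queue, true);
        AudioQueueDispose(queue, true);
        return 0;
    }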
I think PortAudio is what you need.
Reading from the mic in a console app is about a 10-line C file (see the patests in the PortAudio distribution).
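Roughly along the lines of the patest examples, something like this (a sketch assuming the default input device; error checks dropped):

    // Sketch: stream samples from the default input device with PortAudio.
    // Build with: clang++ in.cpp -lportaudio
    #include <portaudio.h>
    #include <cstdio>

    // PortAudio calls this from its audio thread with each block of input.
    static int OnAudio(const void* input, void*, unsigned long frames,
                       const PaStreamCallbackTimeInfo*, PaStreamCallbackFlags,
                       void*) {
        const float* samples = static_cast<const float*>(input);
        // ...run your analysis on `frames` samples here...
        if (frames > 0) printf("first sample: %f\n", samples[0]);
        return paContinue;
    }

    int main() {
        Pa_Initialize();
        PaStream* stream;
        // 1 input channel, 0 output channels, 32-bit float samples.
        Pa_OpenDefaultStream(&stream, 1, 0, paFloat32, 44100, 256, OnAudio, NULL);
        Pa_StartStream(stream);
        Pa_Sleep(10 * 1000); // capture for ~10 seconds
        Pa_StopStream(stream);
        Pa_CloseStream(stream);
        Pa_Terminate();
        return 0;
    }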
Apple provides sample code for reading and writing audio data. Additionally there is a lot of good information in the Audio section of the Apple Developer site.