video4linux2 command for mac osx with isight - macos

So I was just wondering: is there anything like video4linux2 on a Mac with the iSight, i.e. some way to record video from the iSight via the command line? Thanks in advance!

On Linux, 'ffmpeg' uses the 'video4linux2' capture API, and on Windows the analogous API is Video for Windows. No version of this capture API exists for the Mac.
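For context, this is roughly the sort of command video4linux2 enables on Linux (and what you would use with the Linux-on-Mac alternative mentioned further down); a rough sketch, assuming the camera shows up as /dev/video0 - the device path, frame size, and frame rate are assumptions:
ffmpeg -f video4linux2 -framerate 30 -video_size 640x480 -i /dev/video0 -t 10 ~/capture.mp4
Here '-t 10' simply limits the recording to ten seconds.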
If you wish to record video from your iSight camera from the command line, use this free software instead:
Wacaw - Webcam Tools for Mac OS X
Here is an example of its usage.
Step 1) See what video hardware is present:
wacaw -L
Step 2) Capture your video to a file. On my MacBook, wacaw reports the internal iSight camera as USB device ID '2' with input ID '0'. The '--video-device' value may differ on your computer, and you may also be able to omit the '--video-input 0' part entirely:
wacaw --video --video-device 2 --video-input 0 --duration 3 --VGA ~/MyMovie
Another alternative is to install Linux and 'isight-firmware-tools' on your Mac hardware; there are details on this approach on this blog, though I have not tested them. Unlike the Wacaw method above, you would have to boot into an entirely different OS to use this second alternative.
Hope this helps!

Related

How to test custom virtual audio device with shorter feedback loop?

I'm trying to create what's essentially a Krisp clone that creates a virtual audio device that will process my audio input using RNNoise. As a starting point, I'm using this example from the CoreAudio documentation: Creating an Audio Server Driver Plug-in. The first milestone I'm trying to achieve is to simply create an "echo" virtual audio input device that will mirror the input of a real input device such as a microphone with no processing.
What I've found so far is that testing is very cumbersome - I have to install the built plugin to /Library/Audio/Plug-Ins/HAL and then reboot my Mac. Is there a way to get a shorter feedback loop that does not involve rebooting?
Disclaimer: I have 0 experience with both macOS and audio programming. I also have almost no experience with C. If there's already an app that implements what I'm trying to achieve for free, I would be more than happy to use it instead of building my own.
sudo launchctl kickstart -k system/com.apple.audio.coreaudiod does exactly what I want.
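For reference, a rough sketch of the install-and-reload cycle this enables, without rebooting (the driver bundle name 'MyDevice.driver' and its build path are assumptions):
sudo cp -R build/MyDevice.driver /Library/Audio/Plug-Ins/HAL/
sudo launchctl kickstart -k system/com.apple.audio.coreaudiod
The 'kickstart -k' line kills and restarts coreaudiod, which re-scans /Library/Audio/Plug-Ins/HAL and picks up the freshly copied plug-in.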

Hog mode / Exclusive access to audio output device with SoX

I would like to know whether SoX/libSoX offers the possibility of accessing a sound device in exclusive/hog mode. The idea is to prevent other applications from accessing the sound card / DAC being used by the app in question.
My main target is OS X Core Audio output, but I am also eager to know about Linux (OSS/ALSA).
I know this is possible in CoreAudio, because I have seen it implemented in several apps, including this open source one.
On Mac OS X at least, the answer appears to be no. In http://sourceforge.net/p/sox/code/ci/master/tree/src/coreaudio.c, SoX simply uses the default input or output device; there is no provision for hog mode (kAudioDevicePropertyHogMode).

OSX -- Output WAV independently to 12 speakers connected via USB soundcards

I wish to simultaneously play sounds through up to 12 mono speakers.
I could connect these to my MacBook using 6 USB soundcards, and use the left and right channel of each.
But how can I get the MacBook to play sound out of speaker #5, for example?
PS If anyone can see a smarter way to wire up 12 speakers to a MacBook, please do say!
You can set up an Aggregate Device (Audio MIDI Setup > Create Aggregate Device), which lets you combine multiple devices of the same model, or combine multiple inputs and outputs for apps that don't support separate input and output devices. This Apple guide shows how it works, and it is surprisingly easy to set up.
Another way to route audio to multiple channels and outputs (up to 64) is with the free app/plug-in Soundflower. You can download a compiled version, or compile the source code yourself if you need something the current compiled version doesn't do.
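Once a 12-channel aggregate device exists and is the default output, one way to address an individual speaker from the command line is SoX's remix effect; a rough sketch, assuming SoX is installed and you have a mono file tone.wav (the file name and channel count are assumptions):
play tone.wav remix 0 0 0 0 1 0 0 0 0 0 0 0
Each argument is one output channel (0 means silence), so this routes the mono input to output channel 5 only and leaves the other 11 channels silent.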

Which /dev/... (devices) are the microphone and speaker in Mac OS X?

I have an aluminium MacBook and I want to capture the microphone input in a RAW format and output RAW audio through the speakers, in a standard way, i.e. using the terminal with standard Unix commands and the standard /dev/??? devices.
So, the question/s:
- Which devices are the microphone and the speakers? They should both be under /dev/...
- Are they different for built-in versus external hardware? If so, which ones? (These should also be under /dev/...)
If you also know the Unix commands to read the microphone input and to write output to the speakers, that would be extra points! :) (I want to capture audio from the mic, modify it - I've got that part - and send the modified audio to the speakers.)
If you also know the assembly instructions for OS X, that would be perfection! But the main questions are the ones in the bulleted list.
Thanks!
None of them. Not every device gets a /dev node on Mac OS X, and audio devices are among those that don't. There is no way I'm aware of to access audio devices using only "standard" terminal commands. sox can be used if you install it, but it is not shipped with Mac OS X.
The primary supported API for accessing audio devices on Mac OS X is Core Audio. Third-party libraries such as libao are also available that expose a simpler, platform-independent interface on top of Core Audio.
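If you do install sox, a rough sketch of the mic-in, modify, speakers-out pipeline described in the question, using raw 16-bit samples on stdin/stdout (the sample rate, sample format, and the ./my_filter program are assumptions):
rec -t raw -r 44100 -e signed -b 16 -c 1 - | ./my_filter | play -t raw -r 44100 -e signed -b 16 -c 1 -
rec reads from the default input device and play writes to the default output device, so no /dev node is ever involved.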
As @duskwuff says, you probably won't have any joy trying to access sound devices via /dev on OS X.
If, as you say in your comment above, your goal is cross-platform portability then perhaps PortAudio might be a solution.
From their homepage:
PortAudio is a free, cross-platform, open-source audio I/O library. It lets you write simple audio programs in 'C' or C++ that will compile and run on many platforms including Windows, Macintosh OS X, and Unix (OSS/ALSA). It is intended to promote the exchange of audio software between developers on different platforms. Many applications use PortAudio for Audio I/O.
On OS X I believe it uses Core Audio, and on Linux OSS/ALSA.

Pyaudio for external interfaces (Mac OSX)

Using Python and PyAudio, I can't seem to record sound to a WAV file from an external audio interface (RME Fireface), but I am able to do so with the built-in mic on my iMac. I set the default device to the Fireface in System Preferences, and when I run the code, the WAV file is created but no sound comes out when I play it. The code is as given on the PyAudio webpage. Is there any way to rectify this?
A couple of shots in the dark - verify that you're opening the device correctly: it looks like the Fireface can be either half or full duplex (configurable in its preference pane?), and PyAudio apparently cares (i.e. you can't specify an output if you specify an input, or vice versa).
Another thing to check is the audio routing in /Applications/Utilities/Audio MIDI Setup.app - depending on how you have the signals coming in, you might be connecting to the wrong input without realizing it.
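One quick sanity check is to list the devices PyAudio actually sees and note the Fireface's index; a rough one-liner sketch from the shell (the exact device names shown will vary):
python -c "import pyaudio; p = pyaudio.PyAudio(); print('\n'.join('%d: %s' % (i, p.get_device_info_by_index(i)['name']) for i in range(p.get_device_count())))"
If the Fireface does show up, you can pass its index as input_device_index when opening the PyAudio stream instead of relying on the system default device.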
