Real-time equalizer for all audio on a computer - Windows

Is it possible to capture all the sound from a computer and have it pass through an equalizer before reaching the speakers?
And how would you program a band-pass filter for it?
EDIT: I'm trying to get this working on Windows (with Python? heh), but if there is a generic, cross-platform approach, that would be great.

On the GNU/Linux platform with a real-time preemption enabled kernel, you have the JACK Audio Connection Kit. Put simply, JACK lets you wire JACK-aware audio programs together, so you can capture all the sound coming from your computer.
You would then pass this captured sound into another JACK audio program which hosts your equalizer plugin. The equalizer plugin, on Linux at least, will be either a LADSPA plugin or one written for LADSPA's successor standard, LV2.
You can program a band-pass filter yourself, but it takes a solid grasp of the underlying mathematics and a good working knowledge of digital signal processing in general (IMHO). If you don't have that background, I would strongly discourage you from coding your own band-pass filter; just use one of the many freely available implementations.
http://jackaudio.org
http://ladspa.org
http://lv2plug.in
see also:
http://musicdsp.org
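
If Python happens to be the implementation language, a minimal JACK pass-through client is only a few lines with the JACK-Client Python bindings (an assumption about your stack; any JACK client API has the same shape). This is just a sketch: audio routed into the client's input port is copied to its output port, and an equalizer would do its filtering inside the process callback.

    # Minimal JACK pass-through sketch (assumes the JACK-Client Python
    # package and NumPy are installed and a JACK server is running).
    import jack

    client = jack.Client("passthrough-eq")
    inport = client.inports.register("in_1")
    outport = client.outports.register("out_1")

    @client.set_process_callback
    def process(frames):
        # Replace this plain copy with your equalizer to filter each block.
        outport.get_array()[:] = inport.get_array()

    client.activate()
    input("Running; route audio through this client, then press Enter to quit.")
    client.deactivate()

Connecting the system's outputs into this client (and its outputs on to the speakers) is then done in a JACK patchbay such as QjackCtl.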

You can implement an equalizer either with discrete bandpass filters or in the frequency domain (FFT -> equalize -> IFFT). For bandpass filters you can either combine a lowpass and a highpass filter, or use one of various common designs such as a damped resonator.
How you actually implement the above will depend on what OS, programming language, etc, you are using.
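
For example, here is a rough, cross-platform sketch in Python (assuming NumPy and SciPy are available): design a Butterworth band-pass with scipy.signal and run a buffer of samples through it.

    # Band-pass filter sketch using SciPy (assumed available).
    import numpy as np
    from scipy.signal import butter, lfilter

    def bandpass(signal, low_hz, high_hz, sample_rate, order=4):
        # Normalize the band edges to the Nyquist frequency, design a
        # Butterworth band-pass, then filter the buffer.
        nyquist = sample_rate / 2.0
        b, a = butter(order, [low_hz / nyquist, high_hz / nyquist], btype="band")
        return lfilter(b, a, signal)

    if __name__ == "__main__":
        fs = 44100
        t = np.arange(fs) / fs
        # Test signal: 100 Hz + 1 kHz + 8 kHz tones; keep only the 1 kHz band.
        x = np.sin(2*np.pi*100*t) + np.sin(2*np.pi*1000*t) + np.sin(2*np.pi*8000*t)
        y = bandpass(x, 500, 2000, fs)

A full graphic equalizer is then just a bank of such filters (or frequency-domain gains) whose outputs are scaled and summed.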

Related

Speech recognition, such as Siri

Software such as Siri takes a voice command and responds to it appropriately (98% of the time). I want to know: when we write software that takes an input stream of voice and responds to those questions,
do we need to convert the input into a human-readable language, such as English?
In nature we have many different languages, but when we speak we are really just making different sounds; the alphabet is something we created to denote those sound variations.
So, again, my question is: when we write speech recognition algorithms, do we match those sound-variation signals directly against our database, or do we first convert them into English and then check what to answer from the database?
The "noise variation signals" you are referring to are called phonemes. How a speech recognition system translates these phonemes into words depends on the type of system. Siri is not a grammar-based system, where you tell the speech recognition engine what kinds of phrases to expect based on a set of rules. Since Siri translates speech in an open context, it most likely uses some type of statistical modeling; a popular statistical model for speech recognition today is the Hidden Markov Model. While there is a database of sorts involved, it is not a simple lookup that maps groups of phonemes to words. There is a pretty good high-level description of the process and the issues with translation here.
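To make the "statistical modeling" idea concrete, here is a toy Python sketch (with made-up probabilities, nothing like the scale of Siri's real models) of the forward algorithm for a Hidden Markov Model: it scores how likely a sequence of observed acoustic symbols is under one word model, and a recognizer repeats this kind of scoring for every candidate word.

    # Toy HMM forward algorithm (illustrative only; real recognizers use
    # continuous acoustic features and vastly larger models).
    import numpy as np

    # Hypothetical 3-state word model over a tiny "phoneme" alphabet {0, 1, 2}.
    initial = np.array([1.0, 0.0, 0.0])
    transition = np.array([[0.6, 0.4, 0.0],
                           [0.0, 0.7, 0.3],
                           [0.0, 0.0, 1.0]])
    emission = np.array([[0.8, 0.1, 0.1],
                         [0.1, 0.8, 0.1],
                         [0.1, 0.1, 0.8]])

    def forward_likelihood(observations):
        # Probability of the observation sequence under the model.
        alpha = initial * emission[:, observations[0]]
        for obs in observations[1:]:
            alpha = (alpha @ transition) * emission[:, obs]
        return alpha.sum()

    print(forward_likelihood([0, 1, 2]))   # matches the model's expected order
    print(forward_likelihood([2, 1, 0]))   # scores much lower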
Apple's Siri is based on natural language understanding.
I believe Nuance is behind the scenes; refer to this article.
Nuance is a leader in speech recognition system development, and the accuracy of the Nuance Dragon engine is impressive.
The client I am working for consumes the Nuance NOD service for their IVR system, and I have tried the Nuance Dragon SDK for Android.
In my experience, if you use Nuance you do not need to worry about noise variation and similar low-level issues, but when you go for an enterprise release of your application Nuance can be costly.
If you are planning to use voice to drive your application, the Google speech API is also a good choice.
APIs like Sphinx and PocketSphinx can also help you with speech application development. All of the above take care of noise rejection and converting speech into text.
All you need to worry about is building your system to understand the semantic meaning of the recognized text; Apple presumably has a very good semantic interpreter. So give the Nuance SDK a try: it is available for Android, iOS, Windows Phone and as an HTTP client.
I hope this helps.
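If you want to try the Sphinx route quickly from Python, the SpeechRecognition package (an assumption about your stack; the file name below is hypothetical) wraps PocketSphinx and Google's web API behind one interface:

    # Sketch using the SpeechRecognition package
    # (pip install SpeechRecognition pocketsphinx).
    import speech_recognition as sr

    recognizer = sr.Recognizer()
    with sr.AudioFile("command.wav") as source:   # hypothetical recording
        audio = recognizer.record(source)

    # Offline decoding with PocketSphinx; swap in recognize_google(audio)
    # to use Google's service instead.
    try:
        print(recognizer.recognize_sphinx(audio))
    except sr.UnknownValueError:
        print("Could not understand the audio")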

Using AVFoundation / CoreMedia / Other Frameworks to Detect Beats / Onsets

(Preface: This is my first audio-related question on Stack Overflow, so I'll try to word this as best as I possibly can. Edits welcome.)
I'm creating an application that'll allow users to loop music. At the moment our prototypes allow these "loop markers" (implemented as UISliders) to snap at every second, specifying the beginning and end of a loop. Obviously, when looping music, seconds are a very crude unit, so I would like to use beats instead.
I don't want to do anything other than mark beats for the UISliders to snap to:
Feed our loadMusic method an audio file.
Run it through a library to detect beats or the intervals between them (maybe).
Feed that value into the slider's setNumberOfTickMarks: method.
Profit!
Unfortunately, most of the results I've run into via Google and SO have yielded much more advanced beat detection libraries like those that remixers would use. Overkill in my case.
Is this something that CoreMedia, AVFoundation or AudioToolbox can handle? If not, are there other libraries that can handle it? My research into Apple's documentation has only yielded relevant results... for MIDI files. But Apple's own software has features like this, such as iMovie's snap-to-beats functionality.
Any guidance, code or abstracts would be immensely helpful at this point.
EDIT: After doing a bit more digging around, it seems the correct terminology for what I'm looking for is onset detection.
Onset detection algorithms come in many flavors, from ones that look at the raw music signal to ones that use frequency-domain techniques.
If you want a quick and easy way to determine where beats are (sketched in code below):
Chop the music signal into small segments (20-50 ms chunks).
Compute the average squared energy of each chunk: sum(x_n^2) / N, where N is the number of samples per 20-50 ms chunk. A chunk whose energy jumps well above that of its neighbours usually marks a beat.
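A minimal NumPy sketch of that energy-based approach, flagging chunks whose energy jumps well above the previous one (the 1.5x threshold is an arbitrary starting point to tune):

    # Energy-based beat/onset sketch (NumPy assumed); `samples` is a mono
    # float array and `fs` its sample rate.
    import numpy as np

    def onset_times(samples, fs, frame_ms=23, jump=1.5):
        frame_len = int(fs * frame_ms / 1000)
        n_frames = len(samples) // frame_len
        frames = samples[:n_frames * frame_len].reshape(n_frames, frame_len)
        energy = (frames ** 2).mean(axis=1)            # sum(x_n^2) / N per chunk
        # An onset is a chunk whose energy jumps well above the previous one's.
        onsets = [i for i in range(1, n_frames)
                  if energy[i] > jump * energy[i - 1] + 1e-12]
        return [i * frame_len / fs for i in onsets]    # times in seconds

The returned times are what you would feed into the slider's tick marks.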
If you want more sophisticated techniques look into:
https://adamhess.github.io/Onset_Detection_Nov302011.pdf
or for hardcore treatment of it:
https://scholar.google.com/citations?view_op=view_citation&hl=en&user=PMHXcoAAAAAJ&citation_for_view=PMHXcoAAAAAJ:uJ-U7cs_P_0C

extracting a specific melody/beat/rhythm played by a specific instrument from a mixed wave (or other music format) file

Is it possible to write a program that can extract a melody/beat/rhythm provided by a specific instrument in a wave (or other music format) file made up of multiple instruments?
Which algorithms could be used for this and what programming language would be best suited to it?
This is a fascinating area. The basic mathematical tool here is the Fourier Transform. To get an idea of how it works, and how challenging it can be, take a look at the analysis of the opening chord to A Hard Day's Night.
An instrument produces a sound signature, just the same way our voices do. There are algorithms out there that can pick a single voice out of a crowd and identify that voice from its signature in a database which is used in forensics. In the exact same way, the sound signature of a single instrument can be picked out of a soundscape (such as your mixed wave) and be used to pick out a beat, or make a copy of that instrument on its own track.
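To see what that signature looks like in practice, a short NumPy/SciPy sketch (the file name is hypothetical) takes the magnitude spectrum of one note; the pattern of harmonic peaks and their relative strengths is what distinguishes one instrument from another playing the same pitch.

    # Magnitude spectrum of a short clip to inspect an instrument's
    # harmonic signature (NumPy and SciPy assumed available).
    import numpy as np
    from scipy.io import wavfile

    fs, samples = wavfile.read("single_note.wav")      # hypothetical file
    if samples.ndim > 1:                                # mix to mono if stereo
        samples = samples.mean(axis=1)
    n = int(fs * 0.1)                                   # analyze the first 100 ms
    window = samples[:n] * np.hanning(n)
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # The peaks in (freqs, spectrum) are the fundamental and its overtones.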
Obviously if you're thinking about making copies of tracks, i.e. to break down the mixed wave into a single track per instrument you're going to be looking at a lot of work. My understanding is that because of the frequency overlaps of instruments, this isn't going to be straightforward by any means... not impossible though as you've already been told.
There's quite an interesting blog post by Comparisonics about sound matching technologies which might be useful as a start for your quest for information: http://www.comparisonics.com/SearchingForSounds.html
To extract the beat or rhythm, you might not need perfect isolation of the instrument you're targeting. A general solution may be hard, but if you're trying to solve it for a particular piece, it may be possible. Try implementing a band-pass filter and see if you can tune it to select the instrument you're after.
Also, I just found this Mac product called PhotoSounder. They have a blog showing different ways it can be used, including isolating an individual instrument (with manual intervention).
Look into Karaoke machine algorithms. If they can remove voice from a song, I'm sure the same principles can be applied to extract a single instrument.
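The classic karaoke trick is to subtract one stereo channel from the other, which cancels whatever is mixed dead center (usually the vocal). A minimal sketch, assuming a 16-bit stereo WAV and SciPy (file names are hypothetical):

    # Center-channel (vocal) removal sketch; only works when the vocal is
    # panned dead center and the backing instruments are not.
    import numpy as np
    from scipy.io import wavfile

    fs, stereo = wavfile.read("song.wav")
    left = stereo[:, 0].astype(np.float64)
    right = stereo[:, 1].astype(np.float64)
    instrumental = (left - right) / 2                  # center-panned content cancels
    wavfile.write("song_novocal.wav", fs, instrumental.astype(np.int16))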
Most instruments make sound within certain frequency ranges.
If you write a tunable bandpass filter - a filter that only lets a certain frequency range through - it'll be about as close as you're likely to get. It will not be anywhere near perfect; you're asking for black magic. The only way to perfectly extract a single instrument from a track is to have an audio sample of the track without that instrument, and do a difference of the two waveforms.
C, C++, Java, C#, Python, Perl should all be able to do all of this with the right libraries. Which one is "best" depends on what you already know.
It's possible in principle, but very difficult - an open area of research, even. You may be interested in the project paper for Dancing Monkeys, a step generation program for StepMania. It does some fairly sophisticated beat detection and music analysis, which is detailed in the paper (linked near the bottom of that page).

decoding 802.11b

I have raw data grabbed from a spectrometer that was monitoring wifi (802.11b) channel 6
(two laptops in ad-hoc mode pinging each other).
I would like to decode this data in Matlab.
I have it as a complex vector of about 4.6 million complex samples.
Its spectrum looks quite nice. I am looking for a document a bit less complicated than the IEEE 802.11 standard (which I have).
I can share the measurement data with other people.
There's now a few solutions around for decoding 802.11 using Software Defined Radio (SDR) techniques. As mentioned in a previous answer there is software that is based on gnuradio - specifically there's gr-ieee802-11 and also 802.11n+. Plus the higher end SDR boards like WARP utilise FPGA based implementations of 802.11. There's also a bunch of implementations of 802.11 for Matlab available e.g. 802.11a.
If your data is really raw then you basically have to build every piece of the signal processing chain in software, which is possible but not really straightforward. Have you checked the relevant wikipedia page? You might use gnuradio instead of starting from scratch.
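As one concrete piece of that chain: the 1 Mbit/s 802.11b rate spreads each DBPSK symbol with an 11-chip Barker sequence at 11 Mchip/s, so after you have resampled to one sample per chip and found chip alignment (the hard part), despreading and differential decoding look roughly like this NumPy sketch (the bit-mapping convention is an assumption to check against the standard):

    # Rough 802.11b 1 Mbit/s despreading sketch (NumPy assumed). `chips`
    # is the complex baseband capture at 11 Msample/s, chip-aligned.
    import numpy as np

    BARKER11 = np.array([1, -1, 1, 1, -1, 1, 1, 1, -1, -1, -1], dtype=float)

    def despread_dbpsk(chips):
        # Correlate each 11-chip block with the Barker code to get one
        # complex symbol per bit period.
        n_symbols = len(chips) // 11
        blocks = chips[:n_symbols * 11].reshape(n_symbols, 11)
        symbols = blocks @ BARKER11
        # DBPSK: the information is in the phase change between symbols.
        diffs = symbols[1:] * np.conj(symbols[:-1])
        return (diffs.real < 0).astype(int)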
I have used the IEEE 802.11 standard to encode and decode data in Matlab.
Encoding data is an easy task.
Decoding is a bit more involved.
I agree with Stan, it is going to be tough doing everything yourself. You may get some ideas from the projects on CGRAN, like:
https://www.cgran.org/wiki/WifiLocalization

How a marker-based augmented reality algorithm (like ARToolkit's one) works?

For my job I've been using a Java version of ARToolkit (NyARToolkit). So far it has proven good enough for our needs, but my boss is starting to want the framework ported to other platforms such as the web (Flash, etc.) and mobile. While I suppose I could use other ports, I'm increasingly annoyed by not knowing how the kit works and, beyond that, by some of its limitations. Later I'll also need to extend the kit's abilities to add things like interaction (virtual buttons on cards, etc.), which as far as I've seen aren't supported in NyARToolkit.
So basically, I need to replace ARToolkit with a custom marker detector (and, in the case of NyARToolkit, try to get rid of JMF and use a better solution via JNI). However, I don't know how these detectors work. I know about 3D graphics and I've built a nice framework around it, but I need to know how to build the underlying tech :-).
Does anyone know any sources on how to implement a marker-based augmented reality application from scratch? When searching on Google I only find "applications" of AR, not the underlying algorithms :-/
'From scratch' is a relative term. Truly doing it from scratch, without using any pre-existing vision code, would be very painful and you wouldn't do a better job of it than the entire computer vision community.
However, if you want to do AR with existing vision code, this is more reasonable. The essential sub-tasks are:
Find the markers in your image or video.
Make sure they are the ones you want.
Figure out how they are oriented relative to the camera.
The first task is keypoint localization. Techniques for this include SIFT keypoint detection, the Harris corner detector, and others. Some of these have open source implementations - I think OpenCV has the Harris corner detector in the function GoodFeaturesToTrack.
The second task is making region descriptors. Techniques for this include SIFT descriptors, HOG descriptors, and many many others. There should be an open-source implementation of one of these somewhere.
The third task is also done by keypoint localizers. Ideally you want an affine transformation, since this will tell you how the marker is sitting in 3-space. The Harris affine detector should work for this. For more details go here: http://en.wikipedia.org/wiki/Harris_affine_region_detector
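A hedged OpenCV sketch (in Python) of those three steps for a planar marker; the calls are standard OpenCV functions, but I have swapped in ORB keypoints and descriptors for SIFT/HOG to keep it short, and the image file names are hypothetical:

    # Marker detection sketch with OpenCV: 1) find keypoints, 2) match
    # descriptors against the known marker, 3) recover the marker's
    # placement in the frame via a homography.
    import cv2
    import numpy as np

    marker = cv2.imread("marker.png", cv2.IMREAD_GRAYSCALE)
    frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create()
    kp_m, des_m = orb.detectAndCompute(marker, None)
    kp_f, des_f = orb.detectAndCompute(frame, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_m, des_f), key=lambda m: m.distance)[:50]

    src = np.float32([kp_m[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # H maps marker coordinates into the frame; decomposing it (or using
    # solvePnP with the marker's physical size) gives the camera-relative pose.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)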
