Flex4 - Microphone gain vs OS microphone input volume

I'm working on a Flex4 application that captures microphone input. Sometimes the recorded signal is too low, or too high and therefore distorted. I would like to have an initialization step before actually processing the audio, to let the user set the best possible volume. I have two solutions for that:
Have the user speak some sentences, calculate the volume level, and automatically adapt Microphone.gain in application code
Have the user speak some sentences, show them a graphical representation of the volume, and ask them to adjust their OS microphone volume until the level is acceptable
Apart from the user interaction required in solution 2, is one solution better than the other for the quality of the recorded audio, or are the OS microphone volume and the Flex microphone gain effectively the same?

Related

Multiple XAudio2 instances needed for AUDIO_STREAM_CATEGORY?

In the newer XAudio2 APIs for Windows 8 and 10, an AUDIO_STREAM_CATEGORY is passed to IXAudio2::CreateMasteringVoice.
The documentation goes on to say how these should be used for different types of audio. However, an IXAudio2 instance is only allowed one mastering voice. To do this, are completely separate IXAudio2 instances (along with all associated interfaces) required, or can categories be specified elsewhere in the audio graph by some means?
Games should categorize their music streams as AudioCategory_GameMedia so that game music mutes automatically if another application plays music in the background. Music or video applications should categorize their streams as AudioCategory_Media or AudioCategory_Movie so they will take priority over AudioCategory_GameMedia streams. Game audio for in-game cinematics or cutscenes, when the audio is premixed or for creative reasons should take priority over background audio, should also be categorized as Media or Movie.
You can create more than one IXAudio2 instance in a process, and each will have its own mastering voice. If you want to output more than one category of audio from a process, you need to create more than one IXAudio2 instance.
Generally you can get away with just one and always use AudioCategory_GameMedia.
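For illustration, a minimal sketch of that two-engine setup could look like the following (assuming the XAudio 2.8+ headers; error handling is omitted and the category choices are just examples):

    #include <xaudio2.h>

    // Sketch only: two independent XAudio2 engines, each with its own mastering
    // voice and therefore its own WASAPI stream category.
    void CreateEngines(IXAudio2** musicEngine, IXAudio2** effectsEngine)
    {
        IXAudio2MasteringVoice* musicMaster = nullptr;
        IXAudio2MasteringVoice* effectsMaster = nullptr;

        XAudio2Create(musicEngine, 0, XAUDIO2_DEFAULT_PROCESSOR);
        (*musicEngine)->CreateMasteringVoice(&musicMaster,
            XAUDIO2_DEFAULT_CHANNELS, XAUDIO2_DEFAULT_SAMPLERATE, 0, nullptr, nullptr,
            AudioCategory_GameMedia);      // background music: muted when another app plays music

        XAudio2Create(effectsEngine, 0, XAUDIO2_DEFAULT_PROCESSOR);
        (*effectsEngine)->CreateMasteringVoice(&effectsMaster,
            XAUDIO2_DEFAULT_CHANNELS, XAUDIO2_DEFAULT_SAMPLERATE, 0, nullptr, nullptr,
            AudioCategory_GameEffects);    // effects keep playing even when game music is muted

        // Source and submix voices are then created per engine as usual;
        // the two audio graphs never share voices.
    }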
I know this design is a bit of a kludge, but the category is set on the WASAPI output voice, which is where XAudio2 sends its mastering voice output. Any other design would have required annotating category data within the internal XAudio2 audio graph, which would have been quite complicated to implement for not a lot of value. We chose instead to just let applications have more than one audio graph active at once, each with its own mastering voice and therefore its own category.
How you choose to support the audio category feature of WASAPI is up to you, and of course the best user experience depends on what exactly your application actually does.

Outputting Sound to Multiple Audio Devices Simultaneously

OK, the first issue. I am trying to write a virtual soundboard that will output to multiple devices at once. I would prefer OpenAL for this, but if I have to switch over to MS libs (I'm writing this initially on Windows 7) I will.
Anyway, the idea is that you have a bunch of sound files loaded up and ready to play. You're on Skype, and someone fails in a major way, so you hit the play button on the Price is Right fail ditty. Both you and your friends hear this sound at the same time, and have a good laugh about it.
I've gotten OAL to the point where I can play on the default device, and selecting a device at this point seems rather trivial. However, from what I understand, each OAL device needs its context to be current in order for the buffer to populate/propagate properly. Which means, in a standard program, the sound would play on one device, and then the device would be switched and the sound buffered then played on the second device.
Is this possible at all, with any audio library? Would threads be involved, and would those be safe?
Then, the next problem is, in order for it to integrate seamlessly with end-user setups, it would need to be able to either output to the default recording device, or intercept the recording device, mix it with the sound, and output it as another playback device. Is either of these possible, and if both are, which is more feasible? I think it would be preferable to be able to output to the recording device itself, as then the program wouldn't have to be running in order to have the microphone still work for calls.
If I understood correctly, there are mainly two questions here.
Is it possible to play a sound on two or more audio output devices simultaneously, and how to achieve this?
Is it possible to loop back data through an audio input (recording) device so that it is played on the respective monitor, i.e. in your case sent through the audio stream of Skype to your partner?
Answer to 1: This is absolutely feasible; all independent audio outputs of your system can play sounds simultaneously. For example, some professional audio interfaces (for music production) have 8, 16, or 64 independent outputs, all of which can play sound simultaneously. That means that each output device maintains its own buffer that it consumes independently (apart from contention on any shared memory used to feed the buffers).
How?
Most audio frameworks / systems provide functions to get a "device handle", which will require you to pass a callback for feeding the buffer with samples (so does Open AL, for example). This will be called independently and asynchronously by the framework / system (ultimately the audio device driver(s)).
Since this all works asynchronously, you don't necessarily need multi-threading here. All you need to do in principle is maintain two (or more) audio output device handles, each with a separate buffer-consuming callback, to feed the two (or more) separate devices.
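To make that concrete with the library the question already uses, a rough OpenAL sketch could look like this (standard ALC/AL calls; the device names and PCM details are assumptions, and note that OpenAL plays from queued buffers rather than a callback, but the one-handle-per-device idea is the same):

    #include <AL/al.h>
    #include <AL/alc.h>

    // Sketch: one device/context/source per output. pcm is raw 16-bit mono
    // sample data, bytes its size in bytes, rate the sample rate in Hz.
    struct Output { ALCdevice* dev; ALCcontext* ctx; ALuint src; };

    Output openOutput(const char* deviceName, const void* pcm, int bytes, int rate)
    {
        Output o = {};
        o.dev = alcOpenDevice(deviceName);      // NULL selects the default device
        o.ctx = alcCreateContext(o.dev, NULL);
        alcMakeContextCurrent(o.ctx);           // buffers/sources belong to this context

        ALuint buf = 0;
        alGenBuffers(1, &buf);
        alBufferData(buf, AL_FORMAT_MONO16, pcm, bytes, rate);
        alGenSources(1, &o.src);
        alSourcei(o.src, AL_BUFFER, (ALint)buf);
        return o;
    }

    // Usage: set up both outputs, then trigger them back to back. Playback keeps
    // running on a device even after its context is no longer current, so both
    // sounds start within a few milliseconds of each other -- no threads needed.
    //
    //   Output a = openOutput("Speakers ...", pcm, bytes, 44100);
    //   Output b = openOutput("Headset ...",  pcm, bytes, 44100);
    //   alcMakeContextCurrent(a.ctx); alSourcePlay(a.src);
    //   alcMakeContextCurrent(b.ctx); alSourcePlay(b.src);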
Note: You can also play several sounds on one single device. Most devices / systems allow this kind of "resource sharing". Actually, that is one purpose sound cards are made for: to mix together all the sounds produced by the various programs (and hence take that heavy burden off the CPU). When you use one (physical) device to play several sounds, the concept is the same as with multiple devices: for each sound you get a logical device handle, only that those handles refer to several "channels" of one physical device.
What should you use?
Open AL seems a little like using heavy artillery for this simple task, I would say (since you don't want that much portability, and probably don't plan to implement your own codecs and effects ;) )
I would recommend using Qt here. It is highly portable (Win/Mac/Linux) and it has a very handy class that will do the job for you: http://qt-project.org/doc/qt-5.0/qtmultimedia/qaudiooutput.html
Check the example in the documentation to see how to play a WAV file with a couple of lines of code. To play several WAV files simultaneously, you simply have to open several QAudioOutput instances (basically, put the code from the example in a function and call it as often as you want). Note that you have to close / stop the QAudioOutput in order for the sound to stop playing.
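As a rough sketch of that (Qt 5 API; the audio format values and file name below are assumptions, and since QAudioOutput consumes raw PCM the WAV header should really be parsed or skipped first):

    #include <QAudioOutput>
    #include <QAudioDeviceInfo>
    #include <QAudioFormat>
    #include <QFile>

    // Rough sketch (Qt 5): start one QAudioOutput per physical output device.
    // The format is an assumption (16-bit stereo 44.1 kHz PCM).
    QAudioOutput* playOn(const QAudioDeviceInfo& device, const QString& path, QObject* parent)
    {
        QFile* source = new QFile(path, parent);
        source->open(QIODevice::ReadOnly);

        QAudioFormat format;
        format.setSampleRate(44100);
        format.setChannelCount(2);
        format.setSampleSize(16);
        format.setCodec("audio/pcm");
        format.setByteOrder(QAudioFormat::LittleEndian);
        format.setSampleType(QAudioFormat::SignedInt);

        QAudioOutput* out = new QAudioOutput(device, format, parent);
        out->start(source);      // asynchronous; returns immediately while audio plays
        return out;
    }

    // Usage: call once per device -- all outputs then play at the same time.
    //   for (const QAudioDeviceInfo& dev : QAudioDeviceInfo::availableDevices(QAudio::AudioOutput))
    //       playOn(dev, "fail_ditty.wav", &app);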
Answer to 2: What you want to do is called a loopback. Only a very limited number of sound cards, i.e. audio devices, provide a so-called loopback input device, which permits recording what is currently output by the main output mix of the sound card, for example. However, even if such a device is provided, it will not permit you to loop anything back into the microphone input device. The microphone input device only takes data from the microphone's A/D converter. This is deep in the hardware; you cannot mix anything in at your level there.
That said, it will be very, very hard (IMHO practically impossible) to have Skype send your sound to your conversation partner with a standard setup. The only thing I can think of would be having an audio device with loopback capabilities (or simply a physical cable connecting a monitor line out to a recording line in), and then setting Skype up to use this looped-back device as its input. However, Skype will no longer pick up your microphone, hence you won't have a conversation ;)
Note: When saying "simultaneous" playback here, we are talking about synchronizing the playback of two sounds as far as real-time perception is concerned (in the range of 10-20 ms). We are not looking at actual synchronization at the sample level, and the related clock-jitter and phase-shifting issues that come into play when sending sound to two physical devices with two independent (free-running) clocks. Thus, when the application demands in-phase signal generation on independent devices, clock recovery mechanisms are necessary, which may be provided by the drivers or OS.
Note: Virtual audio device software such as Virtual Audio Cable will provide virtual devices to achieve loopback functionality on Windows. Frameworks such as Jack Audio may achieve the same in UNIX-like environments.
There is a very easy way to output audio on two devices at the same time:
For Realtek devices you can use the audio-mixer "trick" (but this will give you a delay / echo);
For everything else (and without echo) you can use Voicemeeter (which is totally free).
I have explained BOTH solutions in this video: https://youtu.be/lpvae_2WOSQ
Best Regards

Audio: How to set the level of the default microphone?

This one makes me crazy:
On a Vista+ computer dedicated to this sound playing/recording application, I need my application to make sure the (default) microphone level is pushed to the max. How do I do that?
I found the Core Audio libraries and figured out how to get an IMMDevice for the default microphone. Now what?
Docs lead me to think that I need an ISimpleAudioVolume interface pointer from my IMMDevice, but how do I do that?
Note that I'm interested in any programmatic way to set this microphone level (whether Core Audio or anything else). Ideally system-wide, but application-wide is OK.
TIA,
The trick is that in Core Audio, recording (aka capture) and rendering devices are not considered different (as long as you don't dive too deep, of course), as opposed to older APIs such as waveXXX, where there are separate APIs for input and output devices.
Therefore, this full example by Larry Osterman that sets the speaker volume can be modified to set the microphone volume by simply changing eRender to eCapture in the enumerator call that returns the default device.
Thanks Larry!
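For reference, a minimal sketch of that eCapture variant (standard Core Audio / COM calls; error handling omitted) might look like this:

    #include <windows.h>
    #include <mmdeviceapi.h>
    #include <endpointvolume.h>

    // Sketch: push the default capture (microphone) endpoint to maximum volume.
    // Same pattern as the speaker-volume example, but with eCapture instead of eRender.
    void MaxOutDefaultMicrophone()
    {
        CoInitializeEx(nullptr, COINIT_MULTITHREADED);

        IMMDeviceEnumerator* enumerator = nullptr;
        CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                         __uuidof(IMMDeviceEnumerator), (void**)&enumerator);

        IMMDevice* mic = nullptr;
        enumerator->GetDefaultAudioEndpoint(eCapture, eConsole, &mic);

        // The endpoint (system-wide) volume lives on IAudioEndpointVolume;
        // ISimpleAudioVolume only controls a single audio session.
        IAudioEndpointVolume* volume = nullptr;
        mic->Activate(__uuidof(IAudioEndpointVolume), CLSCTX_ALL, nullptr, (void**)&volume);
        volume->SetMasterVolumeLevelScalar(1.0f, nullptr);   // 1.0 = 100%

        volume->Release();
        mic->Release();
        enumerator->Release();
        CoUninitialize();
    }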

How can the audio data being sent to the speakers be captured from an application?

Is there an API that is suitable for doing this? A possible application of this is for writing a visualiser, and to play with real time signal processing.
EDIT: The operating system in question is Windows. On Linux, a roundabout way to accomplish this is with Jack, but I'm hoping for a way to read the data in the audio buffer without having to couple apps to Jack.
EDIT: A good answer is found here.
If the sound board used for playback has a recording device/line like "Stereo Mix", "What U Hear", etc., then it is enough to write a simple recording application that can record from a specified recording device/line, and record from that "Stereo Mix" line.
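As a hedged sketch, locating such a line with the classic waveIn API could look like this (the exact device name string varies by driver and locale):

    #include <windows.h>
    #include <mmsystem.h>   // waveIn* APIs; link with winmm.lib
    #include <wchar.h>

    // Sketch: look for a loopback-style recording line such as "Stereo Mix"
    // among the installed waveIn devices.
    int FindStereoMixDevice()
    {
        UINT count = waveInGetNumDevs();
        for (UINT i = 0; i < count; ++i)
        {
            WAVEINCAPSW caps = {};
            if (waveInGetDevCapsW(i, &caps, sizeof(caps)) == MMSYSERR_NOERROR &&
                wcsstr(caps.szPname, L"Stereo Mix") != NULL)
                return (int)i;       // pass this device ID to waveInOpen
        }
        return -1;                   // the sound board does not expose such a line
    }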
The general case (for "all sound boards") will require writing a special driver. Examples of applications with such special drivers: Virtual Audio Cable (http://software.muzychenko.net/eng/vac.html); Total Recorder (http://www.totalrecorder.com).

Low level control of system speaker in Windows

Is there a way to take complete control of the motherboard speaker in Windows? So, instead of calling a function like this:
beep(durationMs, frequency);
I can use:
beepContinuous(frequency);
So all I have to specify is a frequency and it will output the correct voltage to play that frequency.
There's a good discussion of a bunch of audio API choices on this SO question. The most basic technique is to use the waveOut calls in the Waveform Audio API set.
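As a rough sketch of that waveOut route (this drives the sound card via WAVE_MAPPER, not the literal motherboard speaker), a continuous tone can be produced by looping a one-second sine buffer; the calls below are the standard winmm functions, while the beepContinuous/beepStop wrappers are hypothetical:

    #include <windows.h>
    #include <mmsystem.h>   // link with winmm.lib
    #include <cmath>
    #include <vector>

    // Sketch: fill one second of a sine wave at the requested frequency and loop
    // the buffer until stopped. For integer frequencies, one second contains a
    // whole number of cycles, so the loop point is click-free.
    static HWAVEOUT g_waveOut = nullptr;
    static WAVEHDR  g_header  = {};
    static std::vector<short> g_samples;

    void beepContinuous(int frequency)
    {
        const int rate = 44100;
        WAVEFORMATEX fmt = {};
        fmt.wFormatTag      = WAVE_FORMAT_PCM;
        fmt.nChannels       = 1;
        fmt.nSamplesPerSec  = rate;
        fmt.wBitsPerSample  = 16;
        fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;
        fmt.nAvgBytesPerSec = rate * fmt.nBlockAlign;

        g_samples.resize(rate);
        for (int i = 0; i < rate; ++i)
            g_samples[i] = (short)(3000 * std::sin(2.0 * 3.14159265 * frequency * i / rate));

        waveOutOpen(&g_waveOut, WAVE_MAPPER, &fmt, 0, 0, CALLBACK_NULL);

        g_header.lpData         = (LPSTR)g_samples.data();
        g_header.dwBufferLength = (DWORD)(g_samples.size() * sizeof(short));
        g_header.dwFlags        = WHDR_BEGINLOOP | WHDR_ENDLOOP;  // loop this buffer...
        g_header.dwLoops        = 0xFFFFFFFF;                     // ...indefinitely
        waveOutPrepareHeader(g_waveOut, &g_header, sizeof(g_header));
        waveOutWrite(g_waveOut, &g_header, sizeof(g_header));     // plays until reset
    }

    void beepStop()
    {
        waveOutReset(g_waveOut);
        waveOutUnprepareHeader(g_waveOut, &g_header, sizeof(g_header));
        waveOutClose(g_waveOut);
    }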
