I am currently developing a DirectShow renderer (an audio visualizer), but I get an ASSERT error when I quit the app, saying:
szInfo 0x000000e313f8e530 L"Executable: WinDMC.exe Pid 1fe0 Tid 4768. Module AudioVisualizer.dll, 5 objects left active! \nAt line 350 of C:\Users\Hiroyuki\source\repos\App\baseclasses\dllentry.cpp\nContinue? (Cancel to debug)"
Can anyone tell me how to find out the names of the objects that are still active, not just how many there are?
MessageBoxOtherThread() doesn't show up in the UI (I don't know why), so I captured the string that would have been shown.
The DirectShow BaseClasses don't track leaked instances for you here; the assertion is driven by just a global counter. You either have to guess, or edit the baseclasses\dllentry.cpp file and add diagnostic code to identify/log the pointers that remain active.
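One way to add that diagnostic code, assuming you build the base classes in a DEBUG configuration: wxdebug.h declares DbgDumpObjectRegister(), which writes out the name each CBaseObject registered at construction time, so calling it just before the assertion fires lists the leaked objects by name in the debug output. A minimal sketch, assuming your copy of dllentry.cpp matches the stock base classes around the line in your message:

// dllentry.cpp, DLL_PROCESS_DETACH handling (DEBUG build assumed)
if (CBaseObject::ObjectsActive()) {
    DbgDumpObjectRegister();   // logs the name of each still-active CBaseObject
    // ... the existing code that builds szInfo and calls DbgAssert follows ...
}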
In the middle of an experiment I got stuck on an issue that I hope somebody out here might know the solution to.
I am using TIM1 in PWM mode, which is supposed to run continuously in the background. Since triggering the ADC from the Timer 1 update event is not possible on the STM32F401, I used the following settings:
TIM1: Trigger Event Selection = Output Compare (OC1REF)
ADC1: External Trigger Conversion Source = Timer 1 Capture Compare 1 Event
When ADC1 senses a particular value, I need the main output to be disabled (I don't want to disable the timer), so I cleared the MOE bit in the BDTR register.
But clearing the MOE bit actually stops the ADC triggering.
What could be the reason the ADC no longer gets a proper trigger when only the main output is disabled and the timer is still running?
If this is not the proper way, what is the proper way to turn off the output alone?
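For reference, here is roughly what the configuration above looks like with the STM32F4 HAL. This is only a sketch of my setup, not a fix; htim1 and hadc1 are the usual CubeMX handle names and are assumed to already exist in the project.

#include "stm32f4xx_hal.h"

extern TIM_HandleTypeDef htim1;   // TIM1 in PWM mode, running continuously
extern ADC_HandleTypeDef hadc1;   // ADC1 triggered by TIM1 CC1 events

void configure_trigger(void)
{
    // TIM1: Trigger Event Selection = Output Compare (OC1REF)
    TIM_MasterConfigTypeDef master = {0};
    master.MasterOutputTrigger = TIM_TRGO_OC1REF;
    master.MasterSlaveMode     = TIM_MASTERSLAVEMODE_DISABLE;
    HAL_TIMEx_MasterConfigSynchronization(&htim1, &master);

    // ADC1: External Trigger Conversion Source = Timer 1 Capture Compare 1 event
    hadc1.Init.ExternalTrigConv     = ADC_EXTERNALTRIGCONV_T1_CC1;
    hadc1.Init.ExternalTrigConvEdge = ADC_EXTERNALTRIGCONVEDGE_RISING;
    HAL_ADC_Init(&hadc1);
}

void disable_outputs_only(void)
{
    __HAL_TIM_MOE_DISABLE(&htim1);   // clears MOE in TIM1->BDTR; the counter keeps running
}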
I happen to be using Python bindings here, but I suspect that the problem, and the eventual answer, are not Python-specific.
On Windows 10, using Python bindings to Windows' Core Audio library, specifically via pycaw.AudioUtilities.GetAllSessions(), I can see that there is a session with name 'conhost.exe' and pid 11512, and by elementary guesswork/experimentation I can see that this is the session that (a) governs the audio for my current process and (b) corresponds to the "Console Window Host" slider in the Windows 10 Volume Mixer. This is the one I want to manipulate from my program, and pycaw provides the bindings to do that.
So far so good. My problem is that I don't see any way of working backwards from the current process ID so that, on the next launch, my program can ask "which process corresponds to the session that governs my audio output?" Naively I expected that that session pid might appear in the current process's ancestry, but it does not:
import os, psutil
psutil.Process(os.getpid())
# prints: psutil.Process(pid=3628, name='python.exe', started='14:56:41')
psutil.Process(os.getpid()).parent()
# prints: psutil.Process(pid=16676, name='cmd.exe', started='13:00:57')
psutil.Process(os.getpid()).parent().parent()
# prints: psutil.Process(pid=1356, name='explorer.exe', started='12:21:26')
psutil.Process(os.getpid()).parent().parent().parent()
# prints nothing. It's `None`. We've reached the top.
How should I be querying the pid of whichever audio session governs the current process? (I presume the session won't always be named 'conhost.exe'; sometimes I run pythonw.exe to execute the same code without a console.)
In an app, I'm driving a laser projection device using a connected USB audio interface on macOS.
The laser device takes analog audio as an input.
As a safety feature, it would be great if I could make my app's audio output the exclusive output: any other audio from other apps or from the OS itself that is routed to the USB audio interface gets mixed with my laser control audio, which is unwanted and a potential safety hazard.
Is it possible on macOS to make my app's audio output exclusive? I know you can configure AVAudioSession on iOS to achieve this (somewhat - you can duck other apps' audio, but notification sounds will in turn duck your app), but is something like this possible on the Mac? It does not need to be AppStore compatible.
Yes, you can request that CoreAudio gives you exclusive access to an audio output device. This is called hogging the device. If you hogged all of the devices, no other application (including the system) would be able to emit any sound.
Something like this would do the trick for a single device:
#include <CoreAudio/CoreAudio.h>
#include <assert.h>
#include <unistd.h>

AudioObjectPropertyAddress HOG_MODE_PROPERTY = { kAudioDevicePropertyHogMode, kAudioObjectPropertyScopeGlobal, kAudioObjectPropertyElementMaster };
AudioDeviceID deviceId = kAudioObjectUnknown; // your audio device ID goes here
pid_t hoggingProcess = -1;                    // -1 means attempt to acquire exclusive access
UInt32 size = sizeof(pid_t);
AudioObjectSetPropertyData(deviceId, &HOG_MODE_PROPERTY, 0, NULL, size, &hoggingProcess);
assert(hoggingProcess == getpid());           // check that you have exclusive access
Hog mode works by setting an AudioObject property called kAudioDevicePropertyHogMode. The value of the property is -1 if the device is not hogged. If it is hogged the value is the process id of the hogging process.
If you jump to definition on kAudioDevicePropertyHogMode in Xcode you can read the header doc for the hog mode property. That is the best way to learn about how this property (and pretty much anything and everything else in CoreAudio) works.
For completeness, here's the header doc:
A pid_t indicating the process that currently owns exclusive access to the
AudioDevice or a value of -1 indicating that the device is currently
available to all processes. If the AudioDevice is in a non-mixable mode,
the HAL will automatically take hog mode on behalf of the first process to
start an IOProc.
Note that when setting this property, the value passed in is ignored. If
another process owns exclusive access, that remains unchanged. If the
current process owns exclusive access, it is released and made available to
all processes again. If no process has exclusive access (meaning the current
value is -1), this process gains ownership of exclusive access. On return,
the pid_t pointed to by inPropertyData will contain the new value of the
property.
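If you want to check who (if anyone) currently owns the device before toggling the property, you can read the same property back. A small sketch under the same assumptions as the snippet above (deviceId is your output device's ID):

pid_t currentOwner = -1;
UInt32 ownerSize = sizeof(currentOwner);
AudioObjectPropertyAddress hogAddress = { kAudioDevicePropertyHogMode, kAudioObjectPropertyScopeGlobal, kAudioObjectPropertyElementMaster };

OSStatus err = AudioObjectGetPropertyData(deviceId, &hogAddress, 0, NULL, &ownerSize, &currentOwner);
if (err == noErr) {
    if (currentOwner == -1) {
        // nobody has the device hogged; setting the property (as above) would acquire it
    } else if (currentOwner == getpid()) {
        // this process owns it; setting the property again would release it
    } else {
        // some other process has exclusive access
    }
}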
I have a Core Data iOS app that uses private queue concurrency for background processing. I'm getting a deadlock that makes the UI freeze up from time to time (fairly regularly, to be honest), but all the info I get from the debugger (LLDB) is that it is stuck on pthread_mutex_lock. The stack trace is no longer than that, which makes debugging nigh on impossible:
thread #1: tid = 0x2503, 0x3b5060fc libsystem_kernel.dylib`__psynch_mutexwait + 24, stop reason = signal SIGSTOP
frame #0: 0x3b5060fc libsystem_kernel.dylib`__psynch_mutexwait + 24
frame #1: 0x3b44f128 libsystem_c.dylib`pthread_mutex_lock + 392
The Xcode process pane similarly shows only those two entries on the stack.
I'm quite new to this multithreading stuff so am at a total loss where to begin with fixing the issue. Any suggestions for how to go about debugging this?
Your stack is obviously longer than two frames; you can't start a thread with pthread_mutex_lock. So the truncated stack is pretty clearly just a bug in the lldb unwinder. If you have an ADC account, please file a bug about this at bugreporter.apple.com. Also, if you're not using the most recent version of lldb you can get your hands on, you might want to try that; maybe it fixed whatever bug you are seeing. You can install multiple Xcodes side by side, so you don't have to remove the one you are currently using to try a newer one.
You might also try another tool that will give you a backtrace (e.g. the Instruments time profiler) when your app gets into this state, since it uses a different unwinder. That will at least let you see what the full backtrace is.
Is there a way to programmatically detect whether the microphone is on on Windows?
No, microphones don't tell you whether they're 'on', nor whether a particular sound channel is connected to a microphone device. The best you can do is read audio data from the input channel you suspect to be a microphone (e.g. the Windows default input device/channel) and see whether there's any signal on it.
To do that you'd have to remove any DC offset and look for any signal above a reasonable noise floor. (Be generous: many cheap audio input devices are quite noisy even when there is no signal coming in. A mid-band filter/FFT would also be useful to detect only signals in the mid-range of a voice and not low-frequency hum and transient clicks.)
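As a rough illustration of that check, here is a sketch (the capture itself, e.g. via WASAPI, is not shown, and the threshold is a guess you would tune for your hardware):

#include <cmath>
#include <vector>

// `samples` is a buffer already read from the suspected microphone channel,
// normalised to the range [-1, 1].
bool looksLikeLiveInput(const std::vector<float>& samples)
{
    if (samples.empty()) return false;

    // Estimate and remove any DC offset
    double mean = 0.0;
    for (float s : samples) mean += s;
    mean /= samples.size();

    // RMS of what remains after the DC offset is removed
    double sumSq = 0.0;
    for (float s : samples) {
        const double ac = s - mean;
        sumSq += ac * ac;
    }
    const double rms = std::sqrt(sumSq / samples.size());

    // Be generous with the noise floor: cheap inputs are noisy even when idle
    const double noiseFloor = 0.01;   // roughly -40 dBFS; adjust for your hardware
    return rms > noiseFloor;
}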
This is not tested in any way, but I would try to read some samples and see if there is any variation. If the mike is on then you should get different values from the ambient sounds. If the mike is off you should get a 0. Again this is just how I imagine things should work - I don't know if they actually work that way.
Due to a happy accident, I may have discovered that yes, there is a way to detect the presence of a connected microphone.
If your Windows "Recording devices" panel shows no microphone, then this approach (using the Microsoft Speech API) will confirm that you have no mic. If Windows thinks you do have a mic, however, this approach won't disagree.
#include <atlbase.h>   // CComPtr
#include <sapi.h>
#include <sapiddk.h>
#include <sphelper.h>  // SpGetDefaultTokenFromCategoryId

CComPtr<ISpRecognizer> m_cpEngine;
m_cpEngine.CoCreateInstance(CLSID_SpInprocRecognizer);

CComPtr<ISpObjectToken> pAudioToken;
HRESULT hr = SpGetDefaultTokenFromCategoryId(SPCAT_AUDIOIN, &pAudioToken);
if (FAILED(hr))
    ::OutputDebugStringA("no input, aka microphone, detected");
More specifically, hr will contain this result:
SPERR_NOT_FOUND 0x8004503a -2147200966
The requested data item (data key, value, etc.) was not found.