In a VB6 application I use the WaveInRecorder class to record and monitor the audio input of the sound card. The application records files of one hour each; when a recording stops, a new recording starts, and this runs 24/7. When I look at Task Manager in Windows 10, I see that the memory usage grows higher every day, so my conclusion is that there is a memory leak in my application.
I found out that it happens when I call clsRecorder.StartRecord. I have spent a couple of hours trying to find where the leak occurs, but I can't locate it.
I test it this way: every second I call clsRecorder.StartRecord, and the memory usage increases. I think something goes wrong in 'InitWaveInWnd'.
What can cause a memory leak? All objects are set to Nothing. Can it be caused by a variable? Does anybody have an idea?
To start the recording I use:
If Not clsRecorder.StartRecord("44100", Val(ini.Stereo) + 1) Then
    MsgBox "Unable to start monitoring!", vbExclamation
End If
To stop the recording I use:
clsRecorder.StopRecord
Download the WaveInRecorder class:
https://www.vbforums.com/attachment.php?attachmentid=82714&d=1298433000
I am having some issues with my virtualHBA driver on Windows Server 2016. I ran the HLK crashdump support test, and it failed in 3 runs out of 10. In those failing runs, the crash dump hangs at 0% whether it is taking a complete dump, a kernel dump, or a minidump.
By kernel debugging my code, I found that the call to ExAllocatePoolWithTag() for buffer allocation never actually returns.
Below is the statement which never returns.
pDeviceExtension->pcmdbuf = (struct mycmdrsp *)ExAllocatePoolWithTag(NonPagedPoolCacheAligned, pcmdqSignalSize, ((ULONG)'TA1'));
I searched the web for this. However, all of the pages I found focus on this function returning NULL, whereas in my case it never returns at all.
Any help on how to move forward would be highly appreciated.
Thanks in advance.
You can't allocate memory in crash dump mode. You're running at HIGH_LEVEL with interrupts disabled and so you're calling this API at the wrong IRQL.
The typical solution for a hardware adapter is to set the RequestedDumpBufferSize in the PORT_CONFIGURATION_INFORMATION structure during the normal HwFindAdapter call. Then when you're called again in crash dump mode you use the CrashDumpRegion field to get your dump buffer allocation. You then need to write your own "crash dump mode only" allocator to allocate buffers out of this memory region.
It's a huge pain, especially given that it's difficult/impossible to know how much memory you're ultimately going to need. I usually calculate some minimal configuration overhead (i.e. 1 channel, 8 I/O requests at a time, etc.) and then add in a registry configurable slush. The only benefit is that the environment is stripped down so you don't need to be in your all singing, all dancing configuration.
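To make that last step concrete, a "crash dump mode only" allocator over the pre-reserved region can be as simple as a bump allocator. The following is only a sketch under assumptions: g_DumpRegionBase and g_DumpRegionSize are hypothetical globals you would capture during the crash-dump-mode HwFindAdapter call from the dump region described above, and the 64-byte rounding is just an illustrative stand-in for cache alignment.

static unsigned char *g_DumpRegionBase;  /* hypothetical: start of the pre-reserved dump region        */
static size_t         g_DumpRegionSize;  /* hypothetical: size requested via RequestedDumpBufferSize   */
static size_t         g_DumpRegionUsed;  /* bytes handed out so far                                    */

/* Crash-dump-mode replacement for ExAllocatePoolWithTag: hand out chunks
 * of the pre-reserved region instead of touching the pool allocator. */
static void *DumpModeAlloc(size_t NumberOfBytes)
{
    void *p;

    /* Round up to 64 bytes to keep allocations cache-line aligned
     * (an assumption, roughly mirroring NonPagedPoolCacheAligned). */
    NumberOfBytes = (NumberOfBytes + 63) & ~(size_t)63;

    if (g_DumpRegionUsed + NumberOfBytes > g_DumpRegionSize)
        return NULL;  /* region exhausted; this is where the sizing slush matters */

    p = g_DumpRegionBase + g_DumpRegionUsed;
    g_DumpRegionUsed += NumberOfBytes;
    return p;
}

In the normal I/O path you keep using ExAllocatePoolWithTag; only the crash-dump path routes allocations through something like this, which is why underestimating RequestedDumpBufferSize is the main risk.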
Problem
I need a kernel thread that is able to work for prolonged periods of time without yielding, basically fully dedicating a CPU core to it on demand:
int my_kthread(void *arg)
{
    while (!kthread_should_stop()) {
        do_some_work();
        if (sleeping_enabled) msleep(1000);
        else {
            // What to do here to avoid lockup warnings
            // and ensure system stability?
        }
    }
    return 0;
}
Background
The thread is created like this when the module that I am working on is loaded:
my_task = kthread_run(&my_kthread, (void *)some_data, "My KThread");
set_cpus_allowed(my_task, *cpumask_of(10)); // Pin thread to core #10
and stopped like this when the module is unloaded:
kthread_stop(my_task);
Everything works just fine when sleeping_enabled is true.
Otherwise, soon after the thread is started, the kernel complains of the apparent lockup.
At first, I merely aimed to avoid the various warnings such as
BUG: soft lockup - CPU#10 stuck for 22s!
and
INFO: rcu_sched detected stalls on CPUs/tasks: { 10} (detected by 15, t=30567 jiffies)
since they tend to flood my console with dumps for all >20 cores, and the "lockup" is desired behavior.
I tried poking the watchdog like this:
if(sleeping_enabled) msleep(1000);
else touch_softlockup_watchdog();
in combination with (echo 1 > /sys/module/rcupdate/parameters/rcu_cpu_stall_suppress)
and pretty much got what I want (a never-sleeping thread that successfully does what I want and no spam in the console).
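Put together, the non-sleeping loop in this workaround looks roughly like the sketch below (do_some_work() and sleeping_enabled are the same placeholders as above; touch_softlockup_watchdog() is declared in <linux/nmi.h>), with RCU stall warnings suppressed separately via the rcu_cpu_stall_suppress parameter shown above:

#include <linux/kthread.h>
#include <linux/delay.h>
#include <linux/nmi.h>   /* touch_softlockup_watchdog() */

static int my_kthread(void *arg)
{
    while (!kthread_should_stop()) {
        do_some_work();                      /* placeholder for the real polling work */
        if (sleeping_enabled)
            msleep(1000);
        else
            touch_softlockup_watchdog();     /* keep the soft-lockup detector quiet */
    }
    return 0;
}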
However, not only does this "solution" feel like cheating, it seems I am completely breaking something by hogging that one core: when unloading the module via rmmod, the whole system freezes. The console starts periodically dumping soft lockups on all cores, with this call trace:
[<ffffffff810c96b0>] ? queue_stop_cpus_work+0xd0/0xd0
[<ffffffff810c9903>] cpu_stopper_thread+0xe3/0x1b0
[<ffffffff8108639a>] ? finish_task_switch+0x4a/0xf0
[<ffffffff8169e654>] ? __schedule+0x3c4/0x700
[<ffffffff81080e98>] ? __wake_up_common+0x58/0x90
[<ffffffff810c9820>] ? __stop_cpus+0x80/0x80
[<ffffffff81077e93>] kthread+0x93/0xa0
[<ffffffff816a9724>] kernel_thread_helper+0x4/0x10
[<ffffffff81077e00>] ? flush_kthread_worker+0xb0/0xb0
[<ffffffff816a9720>] ? gs_change+0x13/0x13
Meanwhile, my kernel thread continues running (as evidenced by some console messages that it prints out every now and then), so it never saw kthread_should_stop() return true.
Unloading did work correctly and stopped the thread before I switched to not sleeping at all. Now, I am unable to make iterative modifications without having to reboot.
Note that I have simplified the description here a lot. I am trying to add such a thread (to poll some hardware registers and log their changes) to a GPU driver, so there may be module-dependent reasons for the freeze on unload. However, this does not change my general question about how to best implement a thread that never sleeps.
I think this question is similar to your other question, "whole one core dedicated to single process"; please check the replies there.
I have a Core Data iOS app that uses private queue concurrency in a background process. I'm getting a deadlock that makes the UI freeze up from time to time (fairly regularly, to be honest) - but all the info I get from the debugger (LLDB) is that it is stuck on pthread_mutex_lock. The stack trace is no longer than that, which makes debugging nearly impossible:
thread #1: tid = 0x2503, 0x3b5060fc libsystem_kernel.dylib`__psynch_mutexwait + 24, stop reason = signal SIGSTOP
frame #0: 0x3b5060fc libsystem_kernel.dylib`__psynch_mutexwait + 24
frame #1: 0x3b44f128 libsystem_c.dylib`pthread_mutex_lock + 392
The Xcode process pane similarly shows only those two entries on the stack.
I'm quite new to this multithreading stuff so am at a total loss where to begin with fixing the issue. Any suggestions for how to go about debugging this?
Your stack is obviously longer than two frames; you can't start a thread with pthread_mutex_lock. So the truncation of the stack frame is pretty clearly just a bug in the lldb unwinder. If you have an ADC account, please file a bug about this at bugreporter.apple.com. Also, if you're not using the most recent version of lldb you can get your hands on, you might want to try that; maybe it fixed whatever bug you are seeing. You can install multiple versions of Xcode side by side, so you don't have to remove the one you are currently using in order to try a newer one.
You might also try another tool that will give you a backtrace (e.g. the Instruments time profiler) when your app gets into this state, since it uses a different unwinder. That will at least let you see what the full backtrace is.
Before anyone says it, I know this isn't the way it should be done, but it's the way it was done and I'm trying to support it without rewriting it all.
I can assure you this isn't the worst bit by far.
The problem occurs when the application reads an entire file into a string variable.
Normally this works OK because the files are small, but one user created a file of 107 MB and that falls over.
intFreeFile = FreeFile
Open strFilename For Binary Access Read As intFreeFile
ReadFile = String(LOF(intFreeFile), " ")
Get intFreeFile, , ReadFile
Close intFreeFile
Now, it doesn't fall over at the line
ReadFile = String(LOF(intFreeFile), " ")
but on the
Get intFreeFile, , ReadFile
So what's going on here? Surely the String call has already done the memory allocation, so why would it complain about running out of memory on the Get?
Usually reading a file involves some buffering, which takes space. I'm guessing here, but I'd look at the space needed for the byte-to-character conversion. VB6 strings are 16-bit, but (binary) files are 8-bit. You'll need 107 MB for the file content, plus 214 MB for the converted result. The string allocation only reserves the 214 MB.
You do not need that Get call at all; just remove it and read the file into the string like this instead:
ReadFile = Input(LOF(intFreeFile), intFreeFile)
I got the same error. We checked Task Manager, which was showing 100% resource usage, and found that an updater application was using too much RAM, so we killed it.
This solved the issue for me. One more thing: we also went into the system configuration settings,
START -> RUN -> MSCONFIG
went to the Startup tab, and unchecked anything that looks like an updater or some odd application that you don't use.
I hit a bug in my code which uses WSARecv and WSAGetOverlappedResult on an overlapped socket. Under heavy load, WSAGetOverlappedResult returns WSASYSCALLFAILURE ('A system call that should never fail has failed') and my TCP stream is out of sync afterwards, causing mayhem in the upper levels of my program.
So far I have not been able to isolate it to a given set of hardware or drivers. Has somebody hit this issue as well, and found a solution or workaround?
How many connections, how many pending recvs, how many outstanding sends? What does perfmon or Task Manager say about the amount of non-paged pool used? How much memory is in the box? Does it go away if you run the program on Vista or above? Do you have any LSPs installed?
You could be exhausting non-paged pool and causing a badly written driver to misbehave when it fails to allocate memory. This issue is less likely to bite on Vista or later as the amount of non-paged pool available has increased dramatically (see http://www.lenholgate.com/blog/2009/03/excellent-article-on-non-paged-pool.html for details). Alternatively you might be hitting the "locked pages" limit (you can only lock a fixed number of pages in memory on the OS and each pending I/O operation locks one or more pages depending on buffer size and allocation alignment).
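If it helps, non-paged pool usage can also be watched from code rather than perfmon; here is a rough sketch using GetPerformanceInfo from psapi.h (the infinite loop and the 5-second interval are just for illustration, and older SDKs need psapi.lib linked in):

#include <windows.h>
#include <psapi.h>   /* GetPerformanceInfo; link with psapi.lib on older SDKs */
#include <stdio.h>

int main(void)
{
    PERFORMANCE_INFORMATION pi;
    pi.cb = sizeof(pi);

    for (;;) {
        if (GetPerformanceInfo(&pi, sizeof(pi))) {
            /* KernelNonpaged is reported in pages, so scale by the page size. */
            printf("non-paged pool in use: %llu KB\n",
                   (unsigned long long)pi.KernelNonpaged * pi.PageSize / 1024);
        }
        Sleep(5000);  /* sample every 5 seconds */
    }
    return 0;
}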
It seems I have solved this issue by sleeping 1 ms and retrying the WSAGetOverlappedResult call when it reports WSASYSCALLFAILURE.
I had another issue related to overlapped events firing even though there was no data, which I also had to solve first. The test has now been running for over an hour, with a few WSASYSCALLFAILURE errors handled correctly. Hopefully the overnight test will succeed as well.
#Len: thanks again for your help.
EDIT: The overnight test was successful. My bug was caused by two interdependent issues:
Issue 1: WaitForMultipleObjects in ConnectionSet::select occasionally signals data on an empty socket, causing SocketConnection::readSync to deadlock.
Fix: Do a non-blocking read on the first byte of each packet, and reset the ConnectionSet if the socket was empty.
Issue 2: WSAGetOverlappedResult occasionally returns WSASYSCALLFAILURE, causing the TCP stream to go out of sync.
Fix: Retry WSAGetOverlappedResult after a small sleep period.
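For illustration, the retry described in the Issue 2 fix could look roughly like the sketch below; the wrapper name, the 1 ms sleep, and the retry cap are arbitrary choices, not the actual code from the commit linked beneath.

#include <winsock2.h>   /* link with ws2_32.lib */

/* Retry WSAGetOverlappedResult after a short sleep when it reports
 * WSASYSCALLFAILURE; give up on any other error. */
static BOOL GetOverlappedResultWithRetry(SOCKET s, WSAOVERLAPPED *ov,
                                         DWORD *bytes, DWORD *flags)
{
    int attempt;
    for (attempt = 0; attempt < 100; ++attempt) {     /* arbitrary retry cap */
        if (WSAGetOverlappedResult(s, ov, bytes, TRUE /* wait */, flags))
            return TRUE;
        if (WSAGetLastError() != WSASYSCALLFAILURE)
            return FALSE;                             /* different error: report it */
        Sleep(1);                                     /* back off ~1 ms and retry */
    }
    return FALSE;
}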
http://equalizer.svn.sourceforge.net/viewvc/equalizer?view=revision&revision=4649