I am using LeakCanary 2 in my app, which has two processes: the main process and a remote process.
I deliberately created a leak in the main process, but LeakCanary is not detecting it.
I'm trying to troubleshoot a hairy bug which involves corruption of a particular integer in memory. I can set a watchpoint and hope to capture a backtrace of whatever is changing this particular value.
Complicating matters, the bug only occurs in production, and only a few times per day. And the bug occurs in a Python webserver called gunicorn, which is a pre-fork server. The corruption happens in one of the worker children, not the master process.
Trouble is, gdb by default won't debug children produced by fork(). If configured to do so with set detach-on-fork off, then it might debug the worker processes, but it will also debug other subprocesses if one of the workers does a fork() and exec().
So is there a way to configure gdb to:
debug child processes produced by fork(), and
detach from a process when it does exec()?
Or perhaps there's some other approach to the issue of debugging the worker children of a pre-fork server?
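For what it's worth, here is a sketch of a gdb command file along these lines (this assumes a reasonably recent gdb with multiple-inferior support; the detach-on-exec step is shown as a manual follow-up, since I don't know of a setting that does it automatically):

    # keep debugging the children created by fork() instead of detaching
    set detach-on-fork off
    # stay with the parent as the current inferior, but resume all inferiors
    set follow-fork-mode parent
    set schedule-multiple on
    # stop whenever any inferior calls exec(); from there you can detach it:
    #   (gdb) info inferiors
    #   (gdb) inferior <N>
    #   (gdb) detach
    catch exec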
We develop a user-space process running on Linux 3.4.11 on an embedded MIPS system. The process creates multiple (>10) threads using pthreads. The process has a SIGSEGV signal handler which, among other things, generates a log message that goes to our log file. As part of this flow, it acquires a semaphore (bad, I know...).
During our testing the process appeared to hang. We're currently unable to build gdb for the target platform, so I wrote a CLI tool that uses ptrace to extract the register values and USER data using PTRACE_PEEKUSR.
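For reference, here is a stripped-down sketch of that kind of helper (the thread ID and USER-area offset come from the command line; register offsets are platform-specific, so the value passed in is up to the caller):

    /* Stripped-down sketch: attach to one thread and read a single word from
     * its USER area with PTRACE_PEEKUSER.  Register offsets are platform-
     * specific, so the offset is taken as a plain number on the command line. */
    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <tid> <user-area-offset>\n", argv[0]);
            return 1;
        }
        pid_t tid = (pid_t)atol(argv[1]);
        long off = atol(argv[2]);

        if (ptrace(PTRACE_ATTACH, tid, NULL, NULL) == -1) {
            perror("PTRACE_ATTACH");
            return 1;
        }
        waitpid(tid, NULL, 0);  /* wait for the stop; non-leader threads may need __WALL */

        errno = 0;
        long word = ptrace(PTRACE_PEEKUSER, tid, (void *)off, NULL);
        if (word == -1 && errno != 0)
            perror("PTRACE_PEEKUSER");
        else
            printf("USER[%ld] = 0x%lx\n", off, (unsigned long)word);

        ptrace(PTRACE_DETACH, tid, NULL, NULL);
        return 0;
    }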
What surprised me was that all of our threads were inside our crash handler, trying to acquire the semaphore. This (obviously?) indicates a deadlock on the semaphore, which means that a thread died while holding it. When I dug into the stacks, it seemed that almost all of the threads (except one) were in a blocking call (recv, poll, sleep) when the signal handler started running. Manual stack reconstruction on MIPS is a pain, so we have not fully done it yet. One thread appeared to be in the middle of a malloc call, which to me indicates that it crashed due to heap corruption.
A couple of things are still unclear:
1) Assuming one thread crashed in malloc, why would all other threads be running the SIGSEGV handler? As I understand it, a SIGSEGV signal is delivered to the faulting thread, no? Does it mean that each and every one of our threads crashed?
2) Looking at the sigcontext struct for MIPS, it seems it does not contain the memory address which was accessed (badaddr). Is there another place that has it? I couldn't find it anywhere, but it seemed odd to me that it would not be available.
And of course, if anyone can suggest ways to continue the analysis, it would be appreciated!
Yes, it is likely that all of your threads crashed in turn, assuming that you have captured the thread state correctly.
siginfo_t has a si_addr member, which should give you the address of the fault. Whether your kernel fills that in is a different matter.
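A minimal sketch of pulling si_addr out in a handler installed with SA_SIGINFO (illustration only: snprintf is not formally async-signal-safe, which is one more reason to prefer the out-of-process approach below):

    /* Minimal SA_SIGINFO handler that reports si_addr, then re-raises the
     * signal with the default action so the kernel still produces a core. */
    #define _POSIX_C_SOURCE 200809L
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static void segv_handler(int sig, siginfo_t *si, void *ucontext)
    {
        (void)ucontext;
        char buf[64];
        int len = snprintf(buf, sizeof(buf), "SIGSEGV: fault address %p\n", si->si_addr);
        if (len > 0)
            write(STDERR_FILENO, buf, (size_t)len);

        signal(sig, SIG_DFL);   /* restore the default action and re-raise */
        raise(sig);
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_sigaction = segv_handler;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, NULL);

        *(volatile int *)0x42 = 1;   /* deliberately fault to exercise the handler */
        return 0;
    }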
In-process crash handlers will always be unreliable. You should use an out-of-process handler, and set kernel.core_pattern to invoke it. In current kernels, it is not necessary to write the core file to disk; you can either read the core file from standard input, or just map the process memory of the zombie process (which is still available when the kernel invokes the crash handler).
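A sketch of that out-of-process direction, assuming a hypothetical helper registered through core_pattern (the path, filename, and exact behaviour here are illustrative):

    /* core-catcher.c -- sketch of an out-of-process crash handler.  Register it
     * with something like (path and arguments are illustrative):
     *   echo '|/usr/local/bin/core-catcher %p %s' > /proc/sys/kernel/core_pattern
     * The kernel then runs this program on every crash, passing %p (PID) and
     * %s (signal number) as arguments and piping the core image to stdin. */
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        const char *pid = argc > 1 ? argv[1] : "unknown";
        const char *sig = argc > 2 ? argv[2] : "unknown";

        char path[256];
        snprintf(path, sizeof(path), "/tmp/core.%s.sig%s", pid, sig);

        FILE *out = fopen(path, "w");
        if (!out)
            return 1;

        /* Copy the core image from stdin; a real handler could parse it here
         * instead, or inspect /proc/<pid>/ while the crashed process is still
         * available to the kernel. */
        char buf[65536];
        size_t n;
        while ((n = fread(buf, 1, sizeof(buf), stdin)) > 0)
            fwrite(buf, 1, n, out);

        fclose(out);
        return 0;
    }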
I'm having some problems debugging an Android app which runs a string of memory-intensive operations on bitmaps. From Google's Debugging tips, I know that
The debugger and garbage collector are currently loosely integrated. The VM guarantees that any object the debugger is aware of is not garbage collected until after the debugger disconnects. This can result in a buildup of objects over time while the debugger is connected. For example, if the debugger sees a running thread, the associated Thread object is not garbage collected even after the thread terminates.
Unfortunately, this means that while my app runs fine in release mode, any memory-intensive thread running in debug mode is ignored by the garbage collector and kept around. More and more memory is retained as more memory-intensive threads are created, until the app crashes because it fails to allocate the memory it needs.
Is there any way to explicitly tell the garbage collector that these threads should be collected, or some other way around this issue?
I eventually solved this by spawning an AsyncTask rather than a Thread. It seems that AsyncTasks are cleaned up more readily by the garbage collector, and hence I was able to run the app in debug mode without issue.
An AsyncTask is the recommended way of running background operations. Any work to be done in the background should be placed in the task's doInBackground(Params...) method. AsyncTasks are usually intended to perform actions on the UI thread after completion, but you can avoid touching the UI thread by simply leaving the onPostExecute(Result) method empty or omitting it.
On Linux I used to take it for granted that whatever resources a process allocates are released when the process terminates: memory is freed and open file descriptors are closed. No memory is leaked when I start and terminate a process in a loop.
Recently I've started working with OpenCL.
I understand that the OpenCL compiler keeps compiled kernels in a cache, so when I run a program that uses the same kernels as a previous run (or probably even the same kernels as another process), they don't need to be compiled again. I guess that cache lives on the device.
From that behaviour I suspect that allocated device memory might be cached as well (perhaps associated with some magic cookie for later reuse) if it was not released explicitly before termination.
So I pose this question to rule out any such suspicion:
kernels survive in a cache => do other memory allocations somehow survive too???
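Just to make "released explicitly" concrete, this is the kind of cleanup I mean before the process exits (a sketch; buf, kernel, program, queue, and context stand for objects created earlier with the usual clCreate* calls):

    /* Sketch of the explicit cleanup path: buf, kernel, program, queue and
     * context stand for objects created earlier with the usual clCreate* calls. */
    #include <CL/cl.h>

    void cleanup(cl_mem buf, cl_kernel kernel, cl_program program,
                 cl_command_queue queue, cl_context context)
    {
        clFinish(queue);               /* make sure nothing still uses buf */
        clReleaseMemObject(buf);       /* the device-memory allocation in question */
        clReleaseKernel(kernel);
        clReleaseProgram(program);
        clReleaseCommandQueue(queue);
        clReleaseContext(context);     /* release the context last */
    }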
My short answer would be yes, based on this tool: http://www.techpowerup.com/gpuz/
I'm investigating a memory leak on my device, and I noticed that memory is freed when my process terminates... most of the time. If you have a memory leak like mine, it may linger around even after the process has finished.
Another tool that may help is http://www.gremedy.com/download.php
but it's really buggy, so use it judiciously.
My question may seem a bit weird, but I was thinking about Windows hibernation and wondering whether there is a way to hibernate a specific process or application.
I.e., when Windows starts up after a normal shutdown/restart, it loads all startup programs normally, but in addition it restores a specific program to the state it had before the computer was shut down.
I have thought about saving the process's memory and restoring it when the computer starts up, but is there any application that does that in a Windows environment?
That cannot work. The state of a process is almost never contained in just the process itself. A GUI app creates user32 and GDI objects that are stored in a heap associated with the desktop. It makes calls to Windows that affect the window manager's state. It makes I/O calls that cause code inside drivers to run, which in turn affects allocations inside the kernel pools. Multiply the trouble by every pipe or RPC channel it opens to talk to other processes, and by shared resources like the clipboard.
Only making a snapshot of the entire operating system state works.
There are now multiple solutions for this on Linux: CRIU, CryoPID2, and BLCR.
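As a rough illustration of the CRIU workflow (the PID and image directory are placeholders, and the exact options depend on what the process has open):

    # checkpoint process 1234 into an image directory (criu stops the dumped
    # tasks by default; add --leave-running to keep them going)
    criu dump -t 1234 -D /tmp/checkpoint --shell-job

    # later (even after a reboot), restore the process from the same images
    criu restore -D /tmp/checkpoint --shell-job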
I think Docker can be used (for both Windows and Linux), but it requires pre-packaging your app in a container, which carries some overhead.