I'm using Clang to compile my project on x86_64 macOS (10.15.5 Catalina).
I want to identify exactly which file, which function, and which line causes memory leaks. I am trying to use AddressSanitizer, specifically LeakSanitizer.
Here are flags that I'm using when compiling:
-Wall -Wextra -flto -O3 -march=native -ffast-math -fsanitize=address
It compiles successfully. However, when I use the runtime flag ASAN_OPTIONS=detect_leaks=1 to enable LeakSanitizer, I see the following error:
==26454==AddressSanitizer: detect_leaks is not supported on this platform.
Abort trap: 6
What am I doing wrong? How could I fix this?
Or, is there a good alternative to Valgrind? Valgrind doesn't work for me because 1) I'm using macOS Catalina, and 2) my program runs in an infinite loop. If I'm right, Valgrind displays messages after exiting the program, so it won't work.
I would appreciate it if anyone could give me advice on this issue.
What am I doing wrong?
Nothing. The issue is that your version of Clang does not support leak detection. However, it looks like the latest version does. See this answer and this recipe.
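If you do upgrade, a quick way to confirm that leak detection works is a deliberately leaking toy program (a minimal sketch; leak.c is a hypothetical file name):

    /* leak.c -- leaks on purpose so LeakSanitizer has something to report */
    #include <stdlib.h>

    int main(void) {
        malloc(42);   /* never freed */
        return 0;
    }

Compile and run it with:

    clang -g -fsanitize=address leak.c -o leak
    ASAN_OPTIONS=detect_leaks=1 ./leak

With a Clang new enough to support leak detection on macOS, the second command should print a LeakSanitizer report at exit instead of the "not supported" abort.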
Valgrind displays messages after exiting the program, so it won't work.
You are somewhat correct: by default, Valgrind will perform leak analysis only at program exit.
There are two ways around this:
Make your program exit at some well-defined place in the execution, e.g. after performing N calculations, or drawing K frames, etc.
Make your program perform the VALGRIND_DO_LEAK_CHECK client request (see the sketch after this list).
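For the second option, here is a minimal sketch (do_work() is a hypothetical stand-in for your real loop body; the macro lives in valgrind/memcheck.h and is a no-op when the program is not running under Valgrind):

    #include <stdlib.h>
    #include <valgrind/memcheck.h>

    static void do_work(void) {   /* hypothetical per-iteration work */
        malloc(16);               /* deliberate leak so the report is non-empty */
    }

    int main(void) {
        for (unsigned long i = 1; ; i++) {
            do_work();
            if (i % 1000000 == 0)
                VALGRIND_DO_LEAK_CHECK;   /* print a leak report right now */
        }
    }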
If you want to perform the leak check only when certain conditions hold, and it's hard to detect from within the program whether those conditions are true, you can use GDB and the monitor command to ask Valgrind to perform the leak check on demand, as sketched below.
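A sketch of that workflow (./myprog is a hypothetical name):

    valgrind --vgdb=yes --vgdb-error=0 ./myprog

    # from another terminal:
    gdb ./myprog
    (gdb) target remote | vgdb
    (gdb) monitor leak_check full reachable any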
Related
I have a C application which runs on many machines, and from time to time an instance has issues and behaves weirdly. Unfortunately, this happens almost never. These prod instances are compiled with heavy optimizations (-march=XXX -Ofast) and do not include any debug symbols, so I cannot easily attach a debugger to analyze their state.
But I thought it should be possible to compile the application again with the same flags plus -g3, and then load the symbols in gdb with symbol-file application_executable_with_debug_symbols. However, if I do that, breakpoints never trigger.
Is there another way to attach a debugger to a running application and load debug symbols? Or is there something (obvious) that I'm doing wrong?
Thanks
The best practice is to build the application with debug symbols and keep the resulting binary for debugging, but run strip -g app.debug -o app.release and ship the stripped binary to production.
When you find a misbehaving instance, copy the full-debug version to the target machine and run gdb -ex "attach $PID" app.debug. Voila: you have full debug symbols.
The most likely reason that recompiling the application didn't work is that you got a new binary with different symbols (compare nm app.debug vs. nm app.release), and the most likely cause of that (if you're using GCC) is that you omitted some of the optimization flags used to build app.release, or used slightly different sources -- you must use exactly the same flags (plus -g) and exactly the same sources for any hope of success with that approach.
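To make that concrete, the workflow sketched above looks roughly like this (file names and flags are placeholders):

    gcc -march=native -Ofast -g3 main.c -o app.debug   # one build: full optimization plus symbols
    strip -g app.debug -o app.release                  # ship the stripped copy to production

    # later, on the misbehaving machine ($PID is the running app.release):
    gdb -ex "attach $PID" app.debug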
Based upon the memory requirements, I had to use the -mcmodel=large option while compiling my C program. To my surprise, the executable now takes more time to complete.
I do not understand why this is happening. How can I investigate this issue?
Is it because of page faults?
I'm trying to debug an issue on a server that I suspect is related to a buffer overflow, so I compiled my code with -fsanitize=address to enable AddressSanitizer.
It compiled, and the resulting software runs. However, I'm trying to get a core dump when AddressSanitizer detects an error, since that is pretty much the only way I can get information out of the system due to the setup.
I am calling the software with ASAN_OPTIONS=abort_on_error=1 prepended on the command line (using a shell script), and have checked that ulimit -c gives unlimited, but it just won't produce a core dump.
What am I missing?
This is on an Ubuntu 14.04 server with GCC 4.8.4.
EDIT: sysctl kernel.core_pattern gives back kernel.core_pattern = |/usr/share/apport/apport %p %s %c %P. This probably means that apport is enabled (at least in some form). However, I have been able to get proper core files on this system from asserts and SIGFPEs in the software (which is where the suspicion of array overruns comes from).
Let me guess: is this an x64 target? Coredumps are disabled there to avoid dumping the 16 TB of shadow memory (see the docs for disable_coredump here for details).
Newer versions of GCC/Clang exclude the shadow from the core by default, so that one could do something like
export ASAN_OPTIONS=abort_on_error=1:disable_coredump=0
but I'm afraid GCC 4.8 is too old for this.
As an alternative suggestion: why are backtraces not enough for you? You could use log_path or log_to_syslog to preserve them if you do not have access to the program's stderr.
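For example (the log path and program name are arbitrary; ASan appends the process ID to the log file name):

    ASAN_OPTIONS=abort_on_error=1:log_path=/tmp/asan.log ./your_program
    # reports are written to /tmp/asan.log.<pid>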
NB: I posted a suggestion to enable coredumps on all platforms.
I implemented a program that uses the mmap() system call, but a segmentation fault occurs while the process is running.
So I ran the program under gdb, but when I did, it ran fine with no segmentation fault.
I wonder whether it is possible that running under gdb can affect a segmentation fault.
Could you tell me about it?
whether it is possible that running under gdb can affect a segmentation fault.
One possibility: GDB disables address randomization (so as to make reproducing the bug easier). You can re-enable it with:
(gdb) set disable-randomization off
GDB may also affect the timing of threads, but you didn't mention threads, so that's less likely.
You are probably invoking undefined behavior somewhere in your code that breaks C or C++ rules. Try running the program under Valgrind; it should give you more information if this is the case.
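To illustrate how such a bug can hide: an out-of-bounds access only faults if it lands on an unmapped page, and which pages happen to be adjacent depends on the address-space layout, which ASLR changes from run to run. A contrived sketch:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/mman.h>

    int main(void) {
        long page = sysconf(_SC_PAGESIZE);
        char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        p[page] = 'x';   /* one byte past the mapping: undefined behavior */
        puts("no crash this time, but the bug is still there");
        return 0;
    }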
On my MacBook at work, I'm trying to use LLDB to attach to a running Ruby process.
It's normally suggested that one compile Ruby with the debug flag -ggdb(3) to use GDB, but I can't find anything equivalent for LLDB. My Google-fu is failing me, so I thought I'd ask, since this seems like an obscure request.
I would assume that all -ggdb does is produce debug information. The format for this debug information is most likely DWARF, which both GDB and LLDB understand.
If that is the case, -ggdb is a misnomer and should be fixed. But for your intents and purposes, you should be able to just compile with -ggdb and then attach with LLDB, and things should be all right.
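A quick way to test that (the file and process names are hypothetical; Clang accepts -ggdb as an alias for -g):

    clang -ggdb3 -O0 foo.c -o foo    # emits DWARF, which LLDB reads fine
    lldb -p $(pgrep -n ruby)         # attach to the newest running Ruby process

Note that attaching only helps if the Ruby binary you attach to was itself built with debug info.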