I am trying to suppress ASan issues in an external library, so I am following llvm-asan-suppressing-reports-in-external-libraries. The docs say:
If you run into an issue in external libraries, we recommend immediately reporting it to the library maintainer so that it gets addressed
Update: Here is the link to the issue: Issue 45842: AddressSanitizer: bad-free - hello world c extension - Python tracker
ASAN trace
==6968==ERROR: AddressSanitizer: attempting free on address which was not malloc()-ed: 0x01e7aceb3be0 in thread T0
#0 0x7ffec9a97f31 (D:\a\min_reprex_python_c_extension_asan\min_reprex_python_c_extension_asan\llvm\lib\clang\13.0.0\lib\windows\clang_rt.asan_dynamic-x86_64.dll+0x180037f31)
#1 0x7ffeca696030 (C:\hostedtoolcache\windows\Python\3.10.0\x64\python310.dll+0x180026030)
#2 0x7ffeca67aaaf (C:\hostedtoolcache\windows\Python\3.10.0\x64\python310.dll+0x18000aaaf)
...
#114 0x7ff72208122f (C:\hostedtoolcache\windows\Python\3.10.0\x64\python.exe+0x14000122f)
#115 0x7ffefee17973 (C:\Windows\System32\KERNEL32.DLL+0x180017973)
#116 0x7fff0071a2f0 (C:\Windows\SYSTEM32\ntdll.dll+0x18005a2f0)
Address 0x01e7aceb3be0 is a wild pointer inside of access range of size 0x000000000001.
SUMMARY: AddressSanitizer: bad-free (D:\a\min_reprex_python_c_extension_asan\min_reprex_python_c_extension_asan\llvm\lib\clang\13.0.0\lib\windows\clang_rt.asan_dynamic-x86_64.dll+0x180037f31)
==6968==ABORTING
Here is a link to the full ASan trace.
What I did so far
I created a my_asan.supp file and loaded it with ASAN_OPTIONS=suppressions=my_asan.supp, as suggested in the docs, with the following contents:
interceptor_via_fun:_PyObject_Realloc
interceptor_via_fun:realloc
interceptor_via_lib:C:/Python39/python3.dll
interceptor_via_lib:C:/s/eklang/DevOps/clang/bin/LLVM-13.0.0-win64/lib/clang/13.0.0/lib/windows/clang_rt.asan_dynamic-x86_64.dll
interceptor_via_lib:C:/Windows/System32/KERNEL32.DLL
interceptor_via_lib:C:/Windows/SYSTEM32/ntdll.dll
interceptor_via_lib:C:\Python39\python3.dll
interceptor_via_lib:C:\s\eklang\DevOps\clang\bin\LLVM-13.0.0-win64\lib\clang\13.0.0\lib\windows\clang_rt.asan_dynamic-x86_64.dll
interceptor_via_lib:C:\Windows\System32\KERNEL32.DLL
interceptor_via_lib:C:\Windows\SYSTEM32\ntdll.dll
interceptor_via_lib:clang_rt.asan_dynamic-x86_64.dll
interceptor_via_lib:ntdll
interceptor_via_lib:ntdll.dll
interceptor_via_lib:python3
interceptor_via_lib:python3.dll
interceptor_via_lib:KERNEL32
interceptor_via_lib:KERNEL32.dll
None of these seemed to work. What am I doing wrong? I tried full paths, forward slashes, backslashes, bare DLL names ...
Info
LLVM 13, Windows 10
This happens when an executable that was not compiled with ASan (in this case python.exe) loads a DLL that was compiled with it.
One needs to make sure the ASan runtime is loaded first for it to do its magic and properly intercept malloc and free. Under Linux that's easy: one would use LD_PRELOAD (there are many examples on the internet of how to do that).
On Windows, though, that does not seem possible, so a workaround is to build an executable wrapper with clang_rt.asan-preinit-x86_64.lib and clang_rt.asan-x86_64.lib linked in and call Python from there; the DLL it loads later on needs to be compiled with clang_rt.asan_dll_thunk-x86_64.lib.
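As a rough illustration, the wrapper itself can be little more than an embedded-interpreter launcher. This is only a sketch of my setup (the file name is made up, and it assumes Python 3.8+ for Py_BytesMain):

/* asan_python_wrapper.c - hypothetical sketch of such a wrapper.
 * It embeds the interpreter so that the ASan runtime (pulled in at link
 * time via clang_rt.asan-preinit-x86_64.lib and clang_rt.asan-x86_64.lib)
 * is initialized before python310.dll is loaded; the C extension itself
 * is then built against clang_rt.asan_dll_thunk-x86_64.lib. */
#include <Python.h>

int main(int argc, char **argv)
{
    /* Py_BytesMain behaves like the regular python.exe entry point
     * (available since Python 3.8). */
    return Py_BytesMain(argc, argv);
}

The wrapper then stands in for python.exe, e.g. asan_python_wrapper.exe some_test_script.py.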
The whole thing sounds quite convoluted but seems to work so far. This article helped me.
I am trying to debug a driver in UEFI firmware (OVMF) via gdb as described here:
https://github.com/tianocore/tianocore.github.io/wiki/How-to-debug-OVMF-with-QEMU-using-GDB
It works well, but I discovered that just having debug symbols for my driver is not enough. I also need debug symbols for the whole OVMF image to properly see what's going on. I have a lot of .debug files after OVMF is built with edk2, but I don't understand which ones I need to load into gdb, and what addresses I should use.
I found some instructions involving DebugPkg, but I couldn't make gdb_uefi.py work no matter what. It always failed to locate EFI_SYSTEM_TABLE_POINTER.
In the end, I ended up writing my own script, which implements a gdb command that does manage to successfully load all debug symbols. It is probably a worse solution, since it requires some setup: a "debug.log" with driver addresses must be present when loading is performed, so you need to run QEMU at least once first. But this is good enough for me.
My script can be found here:
https://github.com/artem-nefedov/uefi-gdb
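For reference, what the script ultimately does is issue one add-symbol-file command per driver, using the .text load address recovered from debug.log. Done by hand (with a made-up path and address) it would look something like:

add-symbol-file Build/OvmfX64/DEBUG_GCC5/X64/MyDriver.debug 0x000000007E21C000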
I've just compiled the BPF examples from the kernel's tools/testing/selftests/bpf and tried to load one as explained in http://cilium.readthedocs.io/en/v0.10/bpf/:
% tc filter add dev enp0s1 ingress bpf \
object-file ./net-next.git/tools/testing/selftests/bpf/sockmap_parse_prog.o \
section sk_skb1 verbose
Program section 'sk_skb1' not found in ELF file!
Error fetching program/map!
This happens on Ubuntu 16.04.3 LTS with kernel 4.4.0-98, LLVM and clang 3.8 installed from packages, and the latest iproute2 from GitHub.
I suspect I'm running into some toolchain/kernel version/features mismatch.
What am I doing wrong?
I do not know why tc complains. On my setup, with a similar command, the program loads. Still, here are some hints:
I think the problem might come, as you suggest, from some incompatibility between the kernel headers version and iproute2, such that some relocation fails to occur, although on a quick investigation I did not find exactly why it refuses to load the section. On my side I'm using clang 3.8 and the latest iproute2, but also the latest kernel (some commit close to 4.14).
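One thing worth checking is which section names actually ended up in the compiled object; either of these should list them:

llvm-objdump -h sockmap_parse_prog.o
readelf -S sockmap_parse_prog.o

If sk_skb1 does not show up there, the problem is on the compilation side rather than in tc or the kernel.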
If you manage to load the section somehow, I believe you would still encounter problems when trying to attach the program in the kernel. The feature called "direct packet access" is only present on kernels 4.7 and higher; it is what allows you to use skb->data and skb->data_end in your programs.
Then, as a side note, this program sockmap_parse_prog.c is not meant to be used with tc. It is supposed to be attached directly to a socket (search for SOCKMAP_PARSE_PROG in file test_maps.c in the same directory to see how it is loaded there). Technically this does not prevent one from attaching the program as a tc filter, but it will probably not work as expected. In particular, the value returned from the program will probably not have a meaning that the tc classifier hook will understand.
So I would advise trying with a recent kernel to see if you have more success. Alternatively, try compiling and running the examples that you can find in your own kernel sources (a one-liner for that is sketched below). Good luck!
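For example, assuming clang/LLVM are installed and you are at the top of the kernel tree, rebuilding the in-tree BPF selftests is just:

make -C tools/testing/selftests/bpf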
I'm trying to do some debugging on a server on an issue that I suspect is related to a buffer overflow, so I tried to compile my code with -fsanitize=address to enable address sanitizing.
It compiled, and the resulting software runs. However, I'm trying to get a core dump when the address sanitizer detects an error since that is pretty much the only way I can get information out of the system due to the setup.
I am calling the software with ASAN_OPTIONS=abort_on_error=1 prepended on the command line (using a shell script to do that), and have checked that ulimit -c gives unlimited as result, but it just won't produce a core dump.
What am I missing?
This is on an Ubuntu 14.04 server with GCC 4.8.4.
EDIT: sysctl kernel.core_pattern gives back kernel.core_pattern = |/usr/share/apport/apport %p %s %c %P. This probably means that apport is enabled (at least in some form). However, I have been able to get proper core files on this system from asserts and SIGFPEs in the software (which is where the suspicion of array overruns comes from).
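For comparison, a plain non-apport pattern would look something like:

kernel.core_pattern = core.%p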
Let me guess, is this an x64 target? Core dumps are disabled there to avoid dumping the 16 TB of shadow memory (see the docs for disable_coredump here for details).
Newer versions of GCC/Clang remove the shadow memory from the core by default, so one could do something like
export ASAN_OPTIONS=abort_on_error=1:disable_coredump=0
but I'm afraid 4.8 is too old for this.
As an alternative suggestion, why are backtraces not enough for you? You could use log_path or log_to_syslog to preserve them if you do not have access to the program's stderr.
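For example (the binary and path here are just placeholders):

ASAN_OPTIONS=abort_on_error=1:log_path=/var/log/my_server.asan ./my_server

ASan will then write its report to /var/log/my_server.asan.<pid>.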
NB: I posted a suggestion to enable core dumps on all platforms.
I have a mixed managed and unmanaged C++ application that's working quite well. I'm using Visual Studio 2013 to compile it, and all is well. Recently I upgraded my computer to Windows 10, and now it's not working.
If I get the executable compiled on Windows 8, it runs properly on Windows 10. It only fails if I compile it on Windows 10.
The failure is peculiar as well. I run the EXE and nothing happens. When I run it from Visual Studio it doesn't even reach the first line of main. Breakpoints are all marked as 'disabled'. When I break the running process the debugger shows an empty stack trace.
UPDATE: My hunch about DLL loading turned into a fact:
I used Process Explorer and I see the process has two threads. The one starting at !CorExeMain is stuck at !LdrLoadDll, but I can't tell which DLL that is.
OK, found the DLL that causes the problem. I've created a C++/CLI console application, used that DLL and got the same behavior. The DLL is part of the application (and part of the VS solution). It's a native C++ DLL, compiled with the same compiler and settings. This DLL references other DLLs unfortunately.
This is a generic problem called "LoaderLock". The operating system makes very strong guarantees when it calls the DllMain() entrypoint of a DLL. Strictly in loading order, they never run at the same time. There is a lock in the OS loader that ensures these promises are kept.
And a lock always has the potential to cause deadlock. It will happen when the DllMain() entrypoint does something unwise like loading a DLL itself with LoadLibrary(), or calls a function that requires the OS to have a DLL already loaded. That can't work: that DLL's DllMain() entrypoint can't be called because the loader lock is already held. The program will freeze. C++/CLI apps are prone to this problem; lots of stuff tends to happen in DllMain(), indirectly, so you can't see it in your code.
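To make that concrete, here is a sketch (not from the affected application; the DLL name is made up) of the kind of DllMain() that can deadlock on the loader lock:

#include <windows.h>

BOOL WINAPI DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved)
{
    if (fdwReason == DLL_PROCESS_ATTACH) {
        /* Unsafe: the loader lock is held while DllMain runs, so loading
         * another DLL here (or calling anything that does) can deadlock. */
        LoadLibraryW(L"some_other.dll");
    }
    return TRUE;
}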
You can only see it with the debugger. You must change its flavor: Project > Properties > Debugging > Debugger Type, change it from "Auto" to "Mixed". You'll now also see the unmanaged code that is running, including the OS loader functions, whose names start with "Ldr". Be sure to enable the Microsoft Symbol Server with Tools > Options > Debugging > Symbols. And be sure to use the Debug > Windows > Threads debugging window as well; the truly tricky loader deadlocks that don't repeat well, or appear to be affected by the OS version, are caused by another thread loading a DLL.
Diagnosing and fixing it can be difficult, be sure to reserve the time you need to dig in. If you can't make heads or tails of the stack traces then post them in your question.
Before taking @Hans Passant's advice I carefully combed through the code, ran dumpbin /dependents on the executables and DLLs, and made sure there were no custom DllMains. There were none. DLLs were indeed loaded with LoadLibrary, but that was happening long after the load-time DLLs were loaded.
So I took @Hans Passant's advice. I set up the debugger properly and checked the state of the process during the deadlock. One of the threads was stuck in LdrLoadDll.
It took a little tinkering to find the name of the DLL that was passed to LdrLoadDll. It was AVGHOOK.DLL.
I disabled AVG, and lo and behold - everything is back to normal.
This is the second time AVG has messed with me. The previous time I nearly replaced a printer until I figured out that all the PCL errors disappeared when I disabled AVG. I think I'm not going to use it any more.
I have put our comment ping-pong into full text:
#1:
Since you found that your application did not load, you first needed to check whether applications on your system (W10, VS2013) run at all.
Reply: A test console app is running fine.
#2:
If your application doesn't run, build a similar application and, step by step, put code from your app into the new app until it fails.
If the failure is caused by a DLL (which cannot be loaded, as it was in your case), remove DLLs from your app until it works. Alternatively, build a dummy console app, include DLL #1 and use some functions of that DLL. Compile, run, check. Go on until DLL #n...
Reply: faulty DLL was found.
#3:
Reference only this DLL in the test app to ensure it's this DLL only and not a combination of DLLs.
Is it a managed DLL or a native C++ DLL?
If the faulty DLL is from a foreign source: bad luck. Ask the developer for support.
If it's your own: Did you compile the faulty DLL on W10+VS2013, too, or did you copy it from your previous system? I suggest you compile this DLL again on the new system.
Reply: it's a native C++ DLL, which is part of the solution and is compiled together with the main app.
This DLL references other DLLs unfortunately.
#4:
Create a new console app that references not your faulty application DLL, but the DLLs which are referenced by YOUR DLL. This skips the intermediate step and detects whether the failure comes from the other DLLs.
The general procedure is: split up the faulty code to find out which part is causing trouble. The Romans already knew this 2000 years ago: divide et impera ;-) Though they did it in a different context...
We had the exact same problem here for several weeks! And we finally found a solution!
In our case, it was the Avast anti-virus that was corrupting the generated .exe!
The solution was to simply disable all agents while generating the release!
If you use another anti-virus, try disabling it.
I am using gdb attached to a serial port of a virtual machine to debug the Linux kernel.
I am wondering if there are any patches/plugins that can make gdb understand some of the Linux kernel's data structures and make it "thread aware".
By that I mean that under gdb I can see how many kernel threads there are, their status, and, for each thread, their stack information.
libvmi
https://github.com/libvmi/libvmi
This project describes itself as "LibVMI: Simplified Virtual Machine Introspection", which sounds really close.
This project in particular, https://github.com/Wenzel/pyvmidbg, uses libvmi and features a demo video of debugging a Windows userland application from inside it, without memory conflicts.
As of May 2019 there are two limitations, both of which could be overcome with some work: https://github.com/Wenzel/pyvmidbg/issues/24
Linux memory parsing is not yet complete
requires Xen
The developer of that project also answered further at: https://stackoverflow.com/a/56369454/895245
Implementing it with those libraries would be in my opinion the best way to achieve this goal today.
Linaro lkd-python
First, this Linaro page claims to have a working setup: https://wiki.linaro.org/LandingTeams/ST/GDB that allows you to do the usual thread operations such as thread, bt, etc., but it relies on a GDB fork; I will test it out later. In 2016, https://youtu.be/pqn5hIrz3A8 says that the implementation was in C rather than as Python scripts, unfortunately, which would have been better and avoided forking. The sketch for lkd-python can be found at: https://git.linaro.org/people/lee.jones/kieran.bingham/binutils-gdb.git/log/?h=lkd-python
Linux kernel in-tree GDB scripts + my brain
I then tried to see what I could do with the kernel in-tree Python scripts at v4.17 + some manual intervention as a prototype, but didn't quite get there yet.
I have tested using this highly automated QEMU + Buildroot setup.
First follow the procedure I described at: How to debug the Linux kernel with GDB and QEMU? to get GDB working.
Then, as described at: How to debug Linux kernel modules with QEMU? run GDB with:
gdb -ex "add-auto-load-safe-path /full/path/to/linux/kernel"
This loads the in-tree GDB Python scripts from scripts/gdb.
One of those scripts provides:
lx-ps
which lists all threads with format:
0xffff88000ed08000 1 init
0xffff88000ed08ac0 2 kthreadd
The first field is the address of the task_struct struct, so we can see the entire struct with:
p (struct task_struct)*0xffff88000ed08000
which should in theory allow us to get any information we want about the process.
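For example, individual fields can be read directly (reusing the task_struct address from the lx-ps output above):
p ((struct task_struct *)0xffff88000ed08000)->pid
p ((struct task_struct *)0xffff88000ed08000)->comm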
Now I wanted to find the PC. For ARM, I've seen: Find program counter of process in kernel and I tried:
task_pt_regs((struct thread_info *)((struct task_struct)*0xffffffc00e8f8000))->uregs[ARM_pc]
but task_pt_regs is a #define, and GDB cannot see defines without -ggdb3 (How do I print a #defined constant in GDB?), which is apparently not set here.
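One thing that might help, although I haven't verified it here, is rebuilding the kernel with macro debug information by appending -ggdb3 through KCFLAGS:
make KCFLAGS=-ggdb3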
I don't think GDB understands kernel data structures; that would make it version dependent. GDB uses ptrace to gather information on any running process.
That's all I know :(
pyvmidbg developer here.
I will add some clarifications:
Yes, the goal of the project is indeed to have a cross-platform, guest-aware GDB stub.
Most of the implementation is already done for Windows, where we are aware of processes and their threads' context.
It's possible to intercept a specific process (cmd.exe in the demo) and single-step its execution (this is limited to 1 process with 1 thread for now), as well as to attach to a new process's entrypoint.
Regarding Linux, I looked at the internals and the resources that I could find, but I'm lacking the whole picture to figure out how I can:
- intercept a task when it's being scheduled (core/sched.c:switch_to() ?)
- read the task state (the Linux equivalent of Windows's KTRAP_FRAME?)
I asked a question on SO, but nobody answered :/
Linux context switch internals: how does a process goes back to userland after the switch?
If you can help with this, I can guide you through the implementation :)
Regarding the hypervisor support, only Xen is fully supported in the Libvmi interface at the moment.
I added a section in the README to describe where we are in terms of VMI APIs with other hypervisors.
Thanks!