I am working on porting an application that uses LLVM MCJIT (RPCS3) to macOS. I have run into an issue when calling existing functions from the JIT: calling such functions results in a segfault, as an offset of 0x100000000 is added to the address of each function. llvm::ExecutionEngine::addGlobalMapping is used to add the mappings for these existing functions. To confirm this, I tried replacing the address of one of these functions with 0x1234. When calling the function from the JIT, the application now segfaults at 0x100001234 (not 0x1234 as expected). This behaved as expected (segfault at 0x1234) on Linux. Where could this offset be coming from? Is it possible to manually specify the image base in MCJIT? I am not too familiar with LLVM MCJIT internals.
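For concreteness, this is roughly what the mapping in question looks like through the LLVM C API (the C++ llvm::ExecutionEngine::addGlobalMapping named above is the underlying call; the helper and function names here are invented for illustration):

#include <llvm-c/Core.h>
#include <llvm-c/ExecutionEngine.h>

/* A host function that JITed code should be able to call. */
static int my_native_helper(void) { return 42; }

/* Map the IR declaration "my_native_helper" in `module` to the host
 * function's real address before running any JITed code. */
static void map_helper(LLVMExecutionEngineRef engine, LLVMModuleRef module)
{
    LLVMValueRef decl = LLVMGetNamedFunction(module, "my_native_helper");
    if (decl)
        LLVMAddGlobalMapping(engine, decl, (void *)&my_native_helper);
}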
I've just compiled BPF examples from kernel tools/testing/selftests/bpf and tried to load as explained in http://cilium.readthedocs.io/en/v0.10/bpf/:
% tc filter add dev enp0s1 ingress bpf \
object-file ./net-next.git/tools/testing/selftests/bpf/sockmap_parse_prog.o \
section sk_skb1 verbose
Program section 'sk_skb1' not found in ELF file!
Error fetching program/map!
This happens on Ubuntu 16.04.3 LTS with kernel 4.4.0-98, llvm and clang of version 3.8 installed from packages, iproute2 is the latest from github.
I suspect I'm running into some toolchain/kernel version/features mismatch.
What am I doing wrong?
I do not know why tc complains. On my setup, with a similar command, the program loads. Still, here are some hints:
I think the problem might come, as you suggest, from some incompatibility between the kernel headers version and iproute2, and that some relocation fails to occur, although on a quick investigation I did not find exactly why it refuses to load the section. On my side I'm using clang-3.8 and the latest iproute2, but also the latest kernel (some commit close to 4.14).
If you manage to load the section somehow, I believe you would still encounter problems when trying to attach the program in the kernel. The feature called "direct packet access" is only present on kernels 4.7 and higher. This is what allows you to use skb->data and skb->data_end in your programs.
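For reference, direct packet access is the pattern sketched below; on kernels older than 4.7 the verifier rejects any program containing it (the double cast through long is the conventional way to turn the __u32 fields into pointers):

#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/pkt_cls.h>

/* Sketch: bounds-checked direct packet access (requires kernel >= 4.7). */
static inline int parse_eth(struct __sk_buff *skb)
{
    void *data     = (void *)(long)skb->data;
    void *data_end = (void *)(long)skb->data_end;
    struct ethhdr *eth = data;

    if (data + sizeof(*eth) > data_end)  /* mandatory bounds check */
        return TC_ACT_SHOT;              /* packet too short: drop */
    return eth->h_proto;                 /* now safe to dereference */
}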
Then as a side note, this program sockmap_parse_prog.c is not meant to be used with tc. It is supposed to be attached directly to a socket (search for SOCKMAP_PARSE_PROG in file test_maps.c in the same directory to see how it is loaded there). Technically this does not prevent one from attaching the program as a tc filter, but it will probably not work as expected. In particular, the value returned from the program will probably not have a meaning that the tc classifier hook will understand.
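By contrast, a program meant for tc usually looks like the minimal sketch below, modeled on the guide linked in the question (section and function names are mine; the section name is what you pass to tc):

#include <linux/bpf.h>

#ifndef __section
# define __section(NAME) __attribute__((section(NAME), used))
#endif

/* tc locates the program by ELF section name ("classifier" here). */
__section("classifier")
int cls_main(struct __sk_buff *skb)
{
    return -1;  /* TC_ACT_UNSPEC: let the packet continue */
}

/* The loader also expects a license section. */
char __license[] __section("license") = "GPL";

You would then load it with something like: tc filter add dev enp0s1 ingress bpf object-file prog.o section classifier.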
So I would advise trying with a recent kernel, to see if you have more success. Alternatively, try compiling and running the examples that you can find in your own kernel sources. Good luck!
I have a Java program using OpenGL via JOGL, and there are some bugs that only appear on Windows that I'd like to debug. For this purpose, I tried setting up a spare computer with Windows, but encountered a strange problem when I went to debug my program:
When I run the program "normally" via Java Web Start, it works perfectly, but when I compile the program and try to run it either via the command-line java launcher or via NetBeans (which I presume does the same thing), it appears to use a different and very primitive OpenGL implementation that doesn't support programmable shading or anything.
When researching the problem, I've let myself understand that OpenGL programs running on Windows load opengl32.dll, which is apparently a common library that ships with Windows (correct me if I'm wrong) and which in turn loads the "real" OpenGL implementation and forwards OpenGL function calls to it. (It also appears to be somewhat of a misnomer, as it is in fact loaded in a 64-bit process at a base address clearly above 2^32.)
Using Process Explorer, I see that, when I run the program under Java Web Start (where it works), it loads the library ig4icd64.dll, which I assume is the actual OpenGL implementation library for the Intel GPU driver; whereas when trying to run the program via java.exe, opengl32.dll is loaded, but ig4icd64.dll is never loaded, which appears to confirm my suspicion that it's using a different OpenGL implementation.
So this leads to the main question, then: How does opengl32.dll select the OpenGL implementation to use, and how can I influence this choice to ensure the correct implementation is loaded? What means are available to debug this? (And what is different between these two contexts that causes it to choose different implementations? In both cases, 64-bit Java is used, so there should be no confusion between 32- or 64-bit implementations.)
Update: I found this page at Microsoft's site that claims that the OpenGL ICD is found by way of the OpenGLDriverName value in the HKLM/System/CurrentControlSet/Control/Class/{Adapter GUID}/0000/ registry key. That value does correctly contain ig4icd64.dll, however, and perhaps more strangely, using Process Monitor to monitor the syscalls (if that's the correct Windows terminology) of the Java process reveals that it never attempts to access that key. I can't say I know if that means that the article is incorrect, or if I'm using Process Monitor incorrectly, or if it's something else.
When researching the problem, I've let myself understand that OpenGL programs running on Windows load opengl32.dll, which is apparently a common library that ships with Windows (correct me if I'm wrong) and which in turn loads the "real" OpenGL implementation and forwards OpenGL function calls to it.
Yes, this is exactly how it works. opengl32.dll acts as a conduit between the Installable Client Driver (ICD) and the programs using OpenGL.
So this leads to the main question, then: How does opengl32.dll select the OpenGL implementation to use, and how can I influence this choice to ensure the correct implementation is loaded? What means are available to debug this?
It chooses based on the window class flags (that's not a Java class, but a set of settings for a window as part of the Windows API, see https://msdn.microsoft.com/en-us/library/windows/desktop/ms633577(v=vs.85).aspx for details), the window style flags, the pixel format set for the window, the position of the window (which determines which screen and graphics device it's on), and the context creation flags.
For example, if you were to start it as a service, there would be no graphics device to create a window on at all. If you were to start it in a remote desktop session, it would run on a headless, software-rasterizer implementation.
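One way to see which path you ended up on is to create a context by hand and ask the implementation to identify itself. A minimal sketch (plain WinAPI; link against opengl32.lib, and the window handle is assumed to come from elsewhere):

#include <windows.h>
#include <GL/gl.h>
#include <stdio.h>

/* Request a hardware-friendly pixel format on hwnd's DC, create a
 * context, and print which OpenGL implementation answered. */
static void probe_opengl(HWND hwnd)
{
    HDC hdc = GetDC(hwnd);

    PIXELFORMATDESCRIPTOR pfd = {0};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;

    int fmt = ChoosePixelFormat(hdc, &pfd);
    SetPixelFormat(hdc, fmt, &pfd);

    HGLRC ctx = wglCreateContext(hdc);
    wglMakeCurrent(hdc, ctx);

    /* "GDI Generic" as the renderer means Microsoft's software fallback
     * was chosen instead of the vendor ICD (ig4icd64.dll in your case). */
    printf("vendor:   %s\n", (const char *)glGetString(GL_VENDOR));
    printf("renderer: %s\n", (const char *)glGetString(GL_RENDERER));

    wglMakeCurrent(NULL, NULL);
    wglDeleteContext(ctx);
    ReleaseDC(hwnd, hdc);
}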
I don't know the particular details in how the CLI java interpreter differs from WebStart. But IIRC you use javaw (note the extra w) for GUI programs.
(It also appears to be somewhat of a misnomer, as it is in fact loaded in a 64-bit process at a base address clearly above 2^32.)
It's not just opengl32.dll: all Windows system DLLs are named …32 even in a 64-bit environment, and they're even located in \Windows\System32 to add to the confusion. This is for a very simple reason: source-code-level backwards compatibility when compiling for 64 bits. If all the library names had been changed to …64, then to compile programs for a 64-bit environment all the string literals and references to the libraries would have had to be renamed to …64.
If it makes you feel better about the naming, think of the …32 as a version designator, not an architecture thing: The Win32 API was developed in parallel for Windows 9x and Windows NT 3, so just in your mind let that …32 stand for "API version created for Windows NT 3.2".
People,
I have just started working on OSX 10.9, and have a crash at hand to debug.
I see OSX has LLVM and LLDB, which replace the well-known and well-documented gdb.
Anyway, I see in the crash report the precise stack trace, and that's pretty impressive from Apple.
However, I can get as far as running image lookup in lldb and printing the API name.
When I use the verbose option with image lookup it prints some extra information; however, I am still not able to view local variables in the specific API.
I tried image dump, image sym-tab, and other lldb options.
None of them seems to help. I scanned through StackOverflow to see if this is covered, but no luck yet.
Therefore I have these questions:
Can we not get a stack trace with local variable/argument values from OSX crash reports?
How do we see a function's arguments/local variables using LLDB when we have an OSX crash report handy?
I see frame variable etc. works fine when attached to a running process; however, this doesn't work when I crash the process and try to see the locals/arguments.
Please guide me.
Thank you.
The CrashReporter output consists (mainly) of a backtrace of all the threads in the program, the list of libraries/frameworks that were loaded at the time (with their load addresses and Mach-O UUIDs to identify which build of each binary was being used) and the register context for the frame that crashed.
image lookup -va in lldb will show you where the arguments/variables were located at that given pc. If a variable is stored in register rbx at that point, image lookup will say it's in rbx. Often times a variable is stored on the stack and the location will say something like rbp-40 where rbp is the frame pointer register on x86_64. In that case, it means the variable is currently available on the stack at the value of rbp minus 40.
Crash reports do not include any of the stack contents -- they intentionally don't include things that might be sensitive. For instance, your program might have a password in plaintext in a stack local variable.
If a variable you're interested in is stored in a register at the point of the crash, you can use the register context section of the crash report to figure out what value was in there.
Often it's not quite that simple. If you can read your way through assembly language, the best way to figure out what really happened is to disassemble the crashed function and use the register context information to understand why the final instruction resulted in the crash.
I encounter an access violation when calling a DLL inside LabVIEW. Let's call the DLL "extcode.dll". I don't have its code, it comes from an external manufacturer.
When I ran it in WinDbg, it stopped with the message:
(724.1200): Access violation - code c0000005 (first chance)
First chance exceptions are reported before any exception handling.
This exception may be expected and handled.
ntdll!RtlNewSecurityObjectWithMultipleInheritance+0x12a:
And call stack is:
ntdll!RtlNewSecurityObjectWithMultipleInheritance+0x12a
ntdll!MD5Final+0xedfc
ntdll!RtlFindClearBitsAndSet+0xdf4
ntdll!RtlFindClearBitsAndSet+0x3a8
ntdll!RtlFindClearBitsAndSet+0x4b9
ntdll!RtlCreateProcessParametersEx+0x829
ntdll!LdrLoadDll+0x9e
KERNELBASE!LoadLibraryExW+0x19c
KERNELBASE!LoadLibraryExA+0x51
LabVIEW!ChangeVINameWrapper+0x36f5
LabVIEW!ChangeVINameWrapper+0x3970
LabVIEW!ExtFuncDynLibWrapper+0x211
Note that the dependencies of extcode.dll are loaded before the access violation occurs.
The situation is random, but when it happens all subsequent tries lead to it.
The code is a simple LabVIEW function calling a function in the DLL, and the prototype is super simple (int function(void)), so it cannot be a misconfiguration of the call parameters, nor a pointer-arithmetic problem. I checked every combination of calling conventions and error-checking levels.
The DLL runs perfectly fine when called in other environments (.NET and C).
I found that RtlFindClearBitsAndSet is related to bit-array manipulations.
What does it make you think about? Do you think it is a problem in extcode.dll, LabVIEW, or Windows?
PS: I use LabVIEW 2010 64-bit on Windows 7 64-bit (and extcode.dll is 64-bit). I didn't manage to reproduce it on a 32-bit system.
11/18 EDIT
I ended up making a standalone exe that wraps the DLL; LabVIEW communicates with it through pipes. It works perfectly, but I still don't understand why loading a DLL into LabVIEW can crash.
If it works ok when called from C, you can quit working with Windbg because the DLL is probably ok. Something is wrong with how the DLL is being called, and once the DLL overwrites some of LabView's memory it is all over, even though it might take 1000 iterations before something actually goes kablooey.
First check your calling conventions, C or StdCall. C calling convention is the default and StdCall is almost certainly what you want. (Check the DLL header file.) LabView 2009 apparently did some auto-checking and fixing of calling conventions, but the switch to LLVM in LV 2010 has made this impossible; now it just tanks.
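For illustration, the distinction normally shows up in the DLL's header, which is where you should look (a sketch with invented names):

/* The annotation on the prototype decides the convention that the
 * Call Library node must be configured to match. */
#ifdef _WIN32
  #define EXTAPI __stdcall   /* "stdcall (WINAPI)" in LabVIEW */
#else
  #define EXTAPI             /* plain C convention */
#endif

/* Note: on 64-bit Windows there is only one calling convention, so
 * this distinction really bites in 32-bit builds. */
int EXTAPI extcode_function(void);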
If it still tanks after changing this, check your calling arguments again. What are you passing, scalars or pointer data? You cannot access memory allocated by the DLL from LabView without doing some sneaky things, although you can allocate memory (i.e. a byte array) in LabView and pass a pointer to it to the DLL for it to modify.
Also, if you are getting a pointer (such as a refnum) from an earlier call to DLL and returning it, check your pointer size. LabView's Call Library function now has a "pointer size integer" type, which generates the appropriately-sized type depending on whether it is invoked in 32-bit or 64-bit LabView. (It is always 64 bits on the wire, because that has to be defined at compile time.) The fact that your DLL works in 32 suggests this is a possibility.
Also keep in mind that C structs are often aligned by the (C) compiler. If you are passing a pointer to a struct made of a UInt8 and a UInt16, the C compiler will allocate 32 bits (or maybe even 64 bits) for this. You'll have to pad your struct (cluster) in LabView to make it match, or write a wrapper DLL to assemble the struct.
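A concrete sketch of that alignment point:

#include <stdint.h>

/* What the C compiler actually lays out for a UInt8 + UInt16 pair: */
struct sample {
    uint8_t  a;   /* offset 0                           */
                  /* 1 byte of invisible padding here   */
    uint16_t b;   /* offset 2, aligned to its own size  */
};                /* sizeof(struct sample) == 4, not 3  */

/* If the DLL really expects the unpadded 3-byte layout, its own
 * header must force packing, e.g.: */
#pragma pack(push, 1)
struct sample_packed {
    uint8_t  a;   /* offset 0              */
    uint16_t b;   /* offset 1, no padding  */
};                /* sizeof == 3           */
#pragma pack(pop)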
-Rob
An access violation (0xc0000005) will also be reported if DEP (Data Execution Prevention) is enabled on your machine and disapproves of something your binary (EXE or DLL) is trying to do. DEP is usually off by default on Windows XP, but active on Windows Vista / Windows 7.
DEP is a hardware-supported security measure designed to prevent malicious code from executing bytes that were previously considered "just some data". I've had a few run-ins with it, all of which required re-compiling the offending binaries with a recent version of Microsoft Visual Studio; this allows you to set a flag (the /NXCOMPAT linker option) which defines whether or not your binary supports DEP.
Some useful resources:
- I found this MSDN blog entry very helpful for gaining an understanding of what DEP is and does.
- You might also want to consult this technet article on how to turn DEP on and off on Windows XP.
This is hard to diagnose remotely, but here are a couple of ideas.
The fact that your function takes no arguments means that either the function is truly trivial, or there is some stored state in the DLL that takes into account previous function calls. Maybe the crash in this function is only an indicator, and you have a problem with a previous function call? Is there an initialization routine you're not calling?
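The pattern I have in mind looks like this (a hypothetical API, all names invented):

/* Many vendor DLLs keep internal state across calls; the
 * zero-argument function may only be valid after some setup call. */
int ext_init(const char *config);  /* allocates internal state      */
int ext_do_work(void);             /* the int function(void) here;
                                    * may crash if ext_init was
                                    * never called                  */
void ext_shutdown(void);           /* releases the state            */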
If you only have the problem when using 64-bit LabVIEW, my first guess would be that there's a problem with the 64-bit version of the DLL; but if you are sure you don't have any problem with the exact same calls when using the DLL in other environments, I'm stumped. One possibility is that you are using the wrong calling convention (stdcall vs. cdecl) in LabVIEW.
Have you tried importing the DLL and header using the LabVIEW import wizard? This might help avoid silly mistakes with the prototypes.
One other thing to try: right click on the DLL call, choose configure and make sure you're running in the UI thread instead of any thread. Sometimes this helps.
When working with git and cygwin under NTFS, I found that sometimes the executable bit is not set (or gets unset during checkout or some file operations). Inside cygwin, cd to the folder and do
chmod a+rwx *.dll
and check if it changes anything (and check if you want it this way!). I found this question while searching for LoadLibrary() failing with GetLastError() returning 5 (not "0xc0000005", by the way) and solved the issue with this chmod call.
Subject: PPC Assembly Language - Linux Loadable Kernel Module
Detail: How do I access the local TOC area (r2) when called from the kernel in a syscall table hook?
I have written a loadable kernel module for Linux that uses syscall table hooking to intercept system calls and log information about them before passing the call on to the original handler. This is part of a security product. My module runs well and is in production code running on a large variety of Linux kernel versions and distributions with both 32 and 64 bit kernels all running on x86 hardware.
I am trying to port this code to run on Linux for PPC processors and ran into a few problems. Using the Linux kernel source, it is easy enough to see how the system call table is implemented differently on PPC. I can replace entries in the table with function addresses from my own compiled handlers, no problem.
But, here's the issue I'm having trouble with. The PPC ABI uses this thing called a Table Of Contents (TOC) address, which is stored in the CPU's R2 register; a module's global and local data are expected to be addressed by offsets from the TOC address contained in that register. This works fine in normal cases where a function call is made, because the compiler knows to load the module's TOC address into the register before making the call (or it's already there, because normally your functions are called by your own code).
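For a concrete picture, a global-variable access compiles down to TOC-relative loads, roughly as in this sketch (the emitted ppc64 pattern is shown in the comment; exact code depends on ABI and code model):

extern long my_global;

long read_global(void)
{
    /* The compiler emits something like:
     *     addis r9, r2, my_global@toc@ha   # r2 = this module's TOC base
     *     ld    r3, my_global@toc@l(r9)
     * If r2 holds some other module's TOC when we get here, the load
     * reads from the wrong place entirely.
     */
    return my_global;
}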
However, when I place the address of my own function (from my loaded kernel module at runtime) into the system call table, the kernel calls my handler with an R2 value that is not the one my compiled C code expects, so my code gets control without being able to access its data.
Does anybody know of any example code out there showing how to handle this situation? I cannot recompile the kernel. This should be a straightforward case of runtime syscall table hooking, but I have yet to figure it out, or find any examples specific to PPC.
Ideas include:
Hand coding an assembly language stub that saves the R2 value, loads the register with my local TOC address, executes my code, then restores the old value before calling the original handler. I don't have the depth of PPC assembly experience to do this, nor am I sure it would work.
Some magic gcc option that will generate my code without using TOC. There is a documented gcc option "-mno-toc" that doesn't work on my PPC6 Linux. It looks like it may only be an option for system V.4 and embedded PowerPC.
Any help is greatly appreciated !
Thanks!
Linux has a generic syscall audit infrastructure which works on powerpc and which you can access from user space. Have you considered using that rather than writing a kernel module?
You need a stub to load r2. There are examples in the kernel source.
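To make that concrete, here is an untested sketch of the shape such a stub can take (ELFv1-flavoured; every name is invented, and the stack offsets, the dot-symbol naming, and which relocations your kernel's module loader accepts all need to be verified against your target before relying on this):

#include <linux/init.h>
#include <linux/module.h>

unsigned long my_toc;            /* our module's TOC base, captured at init */
extern void hook_stub(void);     /* the trampoline defined in asm below     */

/* Ordinary C: safe to touch module globals because the stub has put our
 * TOC into r2 by the time we get here. It can log and then forward to
 * the saved original entry -- but note that on ELFv1 the original kernel
 * handler must itself run with the kernel's TOC, so calling its raw
 * entry address needs its own descriptor or stub. */
long my_handler(long a0, long a1, long a2)
{
    return 0;  /* placeholder */
}

static int __init hook_init(void)
{
    /* module_init is called normally, so r2 is correct right now */
    asm volatile("mr %0, 2" : "=r"(my_toc));
    /* ... save the old table entry and write (unsigned long)hook_stub
     *     into the syscall table slot ... */
    return 0;
}
module_init(hook_init);
MODULE_LICENSE("GPL");

asm(
"        .text\n"
"        .globl  hook_stub\n"
"hook_stub:\n"
"        mflr    0\n"
"        std     0, 16(1)\n"        /* LR save slot in caller's frame     */
"        stdu    1, -128(1)\n"      /* our minimal stack frame            */
"        std     2, 112(1)\n"       /* stash the caller's TOC             */
         /* materialize &my_toc with absolute relocations: no TOC needed  */
"        lis     11, my_toc@highest\n"
"        ori     11, 11, my_toc@higher\n"
"        rldicr  11, 11, 32, 31\n"
"        oris    11, 11, my_toc@h\n"
"        ori     11, 11, my_toc@l\n"
"        ld      2, 0(11)\n"        /* r2 = our TOC                       */
"        bl      .my_handler\n"     /* ELFv1 text entry; plain my_handler
                                       on ELFv2                           */
"        ld      2, 112(1)\n"       /* restore the caller's TOC           */
"        addi    1, 1, 128\n"       /* pop our frame                      */
"        ld      0, 16(1)\n"
"        mtlr    0\n"
"        blr\n"
);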