Access violation inside LoadLibrary() call - winapi

I encounter an access violation when calling a DLL inside LabVIEW. Let's call the DLL "extcode.dll". I don't have its code, it comes from an external manufacturer.
Running it in Windbg, it stopped with the message:
(724.1200): Access violation - code c0000005 (first chance)
First chance exceptions are reported before any exception handling.
This exception may be expected and handled.
ntdll!RtlNewSecurityObjectWithMultipleInheritance+0x12a:
And call stack is:
ntdll!RtlNewSecurityObjectWithMultipleInheritance+0x12a
ntdll!MD5Final+0xedfc
ntdll!RtlFindClearBitsAndSet+0xdf4
ntdll!RtlFindClearBitsAndSet+0x3a8
ntdll!RtlFindClearBitsAndSet+0x4b9
ntdll!RtlCreateProcessParametersEx+0x829
ntdll!LdrLoadDll+0x9e
KERNELBASE!LoadLibraryExW+0x19c
KERNELBASE!LoadLibraryExA+0x51
LabVIEW!ChangeVINameWrapper+0x36f5
LabVIEW!ChangeVINameWrapper+0x3970
LabVIEW!ExtFuncDynLibWrapper+0x211
Note that the dependencies of extcode.dll are loaded before the access violation.
The situation is random, but once it happens, all subsequent attempts fail the same way.
The code is a simple LabVIEW function calling a function in the DLL, and the prototype is as simple as it gets (int function(void)), so it cannot be a misconfiguration of the call parameters, nor pointer arithmetic. I checked every combination of calling conventions and error-checking levels.
The DLL runs perfectly fine when called in other environments (.NET and C).
I found that RtlFindClearBitsAndSet is related to bit-array manipulation.
What does this make you think of? Do you think the problem is in extcode.dll, LabVIEW, or Windows?
PS: I use LabVIEW 2010 64-bit on Windows 7 64-bit (and extcode.dll is 64-bit). I didn't manage to reproduce it on a 32-bit system.
11/18 EDIT
I ended up making a standalone exe that wraps the DLL; LabVIEW communicates with it through pipes. It works perfectly, but I still don't understand why loading a DLL into LabVIEW can crash.

If it works ok when called from C, you can quit working with Windbg because the DLL is probably ok. Something is wrong with how the DLL is being called, and once the DLL overwrites some of LabView's memory it is all over, even though it might take 1000 iterations before something actually goes kablooey.
First, check your calling convention: C or StdCall. The C calling convention is the default, and StdCall is almost certainly what you want. (Check the DLL header file.) LabVIEW 2009 apparently did some auto-checking and fixing of calling conventions, but the switch to LLVM in LabVIEW 2010 has made this impossible; now it just tanks.
If it still tanks after changing this, check your calling arguments again. What are you passing, scalars or pointer data? You cannot access memory allocated by the DLL from LabVIEW without doing some sneaky things, although you can allocate memory (i.e. a byte array) in LabVIEW and pass a pointer to it to the DLL for it to modify.
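As an illustration of that pattern, here is a hedged sketch of what the DLL side of such a call might look like (the function name FillBuffer is hypothetical, not something from extcode.dll): the caller allocates the buffer and the DLL only writes into it.

// Hypothetical DLL-side function: the caller (e.g. a LabVIEW byte array) owns the buffer.
extern "C" __declspec(dllexport) int FillBuffer(unsigned char* buffer, int length)
{
    // Write only within the caller-supplied length; the DLL never frees this memory.
    for (int i = 0; i < length; ++i)
        buffer[i] = static_cast<unsigned char>(i);
    return length;  // number of bytes written
}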
Also, if you are getting a pointer (such as a refnum) from an earlier call to the DLL and returning it, check your pointer size. LabVIEW's Call Library Function node now has a "pointer-sized integer" type, which generates the appropriately sized type depending on whether it is invoked in 32-bit or 64-bit LabVIEW. (It is always 64 bits on the wire, because that has to be defined at compile time.) The fact that your DLL works in 32-bit suggests this is a possibility.
Also keep in mind that C structs are often aligned (padded) by the C compiler. If you are passing a pointer to a struct made of a UInt8 and a UInt16, the C compiler will allocate 32 bits (or maybe even 64 bits) for it. You'll have to pad your struct (cluster) in LabVIEW to make it match, or write a wrapper DLL to assemble the struct.
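To make the padding point concrete, here is a small sketch (type names are made up); the packed variant is the flat 3-byte layout, the unpacked one is what a typical C compiler produces by default:

#include <cstdint>
#include <cstdio>

#pragma pack(push, 1)
struct PackedPair { std::uint8_t a; std::uint16_t b; };   // 3 bytes: no padding
#pragma pack(pop)

struct PaddedPair { std::uint8_t a; std::uint16_t b; };   // usually 4 bytes: 1 padding byte after 'a'

int main()
{
    std::printf("packed=%zu padded=%zu\n", sizeof(PackedPair), sizeof(PaddedPair));
    return 0;
}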
-Rob

An access violation (0xc0000005) will also be reported if DEP (Data Execution Prevention) is enabled on your machine and disapproves of something your binary (EXE or DLL) is trying to do. DEP is usually off by default on Windows XP, but active on Windows Vista / Windows 7.
DEP is a hardware-supported security measure designed to prevent malicious code from executing bytes that were previously considered "just some data". I've had a few run-ins with it, all of which required re-compiling the offending binaries with a recent version of Microsoft Visual Studio; this allows you to set a flag which defines whether or not your binary supports DEP.
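The flag mentioned above is the /NXCOMPAT linker option in Visual Studio. As a rough sketch (not from the original answer), you can also check the machine-wide DEP policy at runtime:

#define _WIN32_WINNT 0x0601  // GetSystemDEPPolicy needs Windows XP SP3 / Vista SP1 or later
#include <windows.h>
#include <cstdio>

int main()
{
    // 0 = AlwaysOff, 1 = AlwaysOn, 2 = OptIn (typical client default), 3 = OptOut
    DEP_SYSTEM_POLICY_TYPE policy = GetSystemDEPPolicy();
    std::printf("System-wide DEP policy: %d\n", static_cast<int>(policy));
    return 0;
}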
Some useful resources:
I found this MSDN blog entry very helpful for gaining an understanding of what DEP is and does.
You might also want to consult this TechNet article on how to turn DEP on and off on Windows XP.

This is hard to diagnose remotely, but here are a couple of ideas.
The fact that your function takes no arguments means that either the function is truly trivial, or there is some stored state in the DLL that takes previous function calls into account. Maybe the crash in this function is only an indicator, and you actually have a problem with a previous function call? Is there an initialization routine you're not calling?
If you only have a problem when using 64-bit LabVIEW, my first guess would be that there's a problem with the 64-bit version of the DLL, but if you are sure you don't have any problem with the exact same calls when using the DLL in other environments, I'm stumped. One possibility is that you are using the wrong calling convention (stdcall vs. cdecl) in LabVIEW.
Have you tried importing the DLL and header using the LabVIEW Import Wizard? This might help avoid silly mistakes with the prototypes.
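For reference, a hedged sketch of the two prototypes to compare (these declarations are hypothetical; the real ones come from the manufacturer's header for extcode.dll):

// Hypothetical header excerpts; the convention declared here must match the
// "calling convention" setting of the Call Library Function node.
extern "C" __declspec(dllimport) int __cdecl   function_c(void);       // "C" convention
extern "C" __declspec(dllimport) int __stdcall function_stdcall(void); // "stdcall (WINAPI)" convention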

One other thing to try: right click on the DLL call, choose configure and make sure you're running in the UI thread instead of any thread. Sometimes this helps.

When working with git and Cygwin on NTFS, I found that sometimes the executable bit is not set (or gets unset during checkout or other file operations). Inside Cygwin, cd to the folder and run
chmod a+rwx *.dll
and check whether it changes anything (and whether you actually want the permissions this way!). I found this question while searching for LoadLibrary() failing with GetLastError() returning 5 (not 0xC0000005, by the way) and solved the issue with this chmod call.
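To make that distinction explicit, here is a small sketch (assuming a DLL name of extcode.dll): a failed load reports an error code through GetLastError(), whereas 0xC0000005 is an exception code raised while the call is executing.

#include <windows.h>
#include <cstdio>

int main()
{
    HMODULE h = LoadLibraryW(L"extcode.dll");
    if (h == nullptr)
        std::printf("LoadLibrary failed, GetLastError() = %lu\n", GetLastError());  // 5 = ERROR_ACCESS_DENIED
    else
        FreeLibrary(h);
    return 0;
}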

Related

Windows: problem closing 32-bit console application

I am working on an application that, when compiled in debug mode for Win32 (on an x64 machine), throws an exception just before exiting.
From what I was able to reconstruct, it seems that the offending line of code is this, in the xmemory0.h header:
// If the following asserts, it likely means that we are performing
// an aligned delete on memory coming from an unaligned allocation.
_STL_ASSERT(_Ptr_user[-2] == _Big_allocation_sentinel, "invalid argument");
It seems that this happens when a DLL of mine (which is part of the application) is unloaded by the OS. It doesn't seem to be directly a problem with my software, but I fear I could be the indirect cause of the problem by not correctly handling the DLL boilerplate code. For example, I am doing nothing special to handle attach/detach notifications in the DLL. What seems weird to me is that the x64 version works just fine.
Any clue from a senior system programmer that could point me in the right direction would be highly appreciated. I don't really know how to debug this; I am not an expert on Windows internals.
Sincerely,

MapViewOfFileEx(): Does porting an application to x64 require us to specify a different address (64-bit) for the argument lpBaseAddress?

I'm in the process of porting an existing win32 application to x64.
In one of the modules, I see a fixed base address passed to MapViewOfFileEx() as the lpBaseAddress argument. The value passed is 0x20000000.
In one of the porting guidelines, I read that we should stay away from such "magic numbers" while porting to x64.
But, the code using the base address 0x20000000 is a legacy one and is called from lots of other modules for shared memory allocation. So, I'm hesitant to change the value of this address while porting to x64.
I'd like to know if the code ported to x64 will work well with the same base address?
As a side note, I also see that the current (x86) code is linked with a /base option value of 0x1C000000, i.e. -base:0x1C000000.
Does this have any relation to the valid base addresses we can request from MapViewOfFileEx()?
Any insight/ideas will be greatly appreciated.
Edit:
To clarify, this question doesn't pertain to any particular address per se. What I want to know is whether a 32-bit constant address passed to MapViewOfFileEx() can be reused while porting to the x64 platform. The reference to the linker option /base was to ask whether the base address specified while linking has any relation to the lpBaseAddress we pass to MapViewOfFileEx().
This is a bit of a non-question. The real issue is why the file must be mapped at that address, and I'm having a tough time believing that changing the 'legacy' code to be more flexible is completely off the table.
Calling MapViewOfFileEx with a specific base address is really, really dangerous. There is never any guarantee that Windows will be able to honour that request: even if it fails only one time in a hundred (which is the worst kind of bug, no?), that address may already be occupied. ASLR is a case in point, or Windows might have put the heap there, or whatever.
So, tl;dr: don't do that. Just don't. Find another way.
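A minimal sketch of the flexible alternative (the handle and size are placeholders): pass NULL as lpBaseAddress and let the system pick an address, which also works unchanged in an x64 build.

#include <windows.h>

void* MapShared(HANDLE hMapping, SIZE_T bytes)
{
    // lpBaseAddress == NULL: Windows chooses a free region instead of 0x20000000.
    return MapViewOfFileEx(hMapping, FILE_MAP_ALL_ACCESS, 0, 0, bytes, nullptr);
}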

GetProcAddress fails on Win 7 even though the DLL actually exports the function (works on Win 10)

I have a 32-bit application and I have a problem with it on Windows 7 x64. I'm loading a DLL. LoadLibraryW succeeds and the subsequent call to GetProcAddress fails with the error code 127 ("procedure not found" or something like that).
The funny part is that I know for a fact the function is exported by the DLL. I made no typos in the GetProcAddress call. I can see the function with Depends.exe and DllExp.exe. The exact same application binary successfully loads the function from the exact same DLL on Windows 10 x64, but not on Windows 7 x64.
Some more details: the library is dbghelp.dll and the "missing" function is MiniDumpWriteDump.
And the fun bit: dbghelp.dll provides an API for inspecting the modules loaded into the process and for enumerating the functions exported by those modules. So, first I took the HMODULE for this problematic dbghelp.dll and ran
auto ptrSymInitialize = (decltype(&SymInitialize))GetProcAddress(hDbgHelpDll, "SymInitialize");
It worked; this function did load! Then I loaded SymEnumSymbols, wrote the enumerator callback, and finally ran the following to enumerate all the functions in this very dbghelp.dll:
ptrSymEnum(GetCurrentProcess(), 0, "dbghelp*!*", &Enumerator, nullptr);
And what do you know, MiniDumpWriteDump is, in fact, listed there. Go figure.
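(For reference, the enumerator callback mentioned above has roughly this shape; this is a sketch, not the asker's actual code.)

#include <windows.h>
#include <dbghelp.h>
#include <cstdio>

// SymEnumSymbols calls this once per symbol; returning TRUE continues the enumeration.
BOOL CALLBACK Enumerator(PSYMBOL_INFO pSymInfo, ULONG /*SymbolSize*/, PVOID /*UserContext*/)
{
    std::printf("%s\n", pSymInfo->Name);  // MiniDumpWriteDump shows up in this listing
    return TRUE;
}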
Thoughts?
I can see your intent is to use MiniDumpWriteDump. We also make minidumps in our product, and I'm the one to support this.
I would suggest against using the dbghelp.dll supplied with the OS. First, it tends to be outdated and not support the latest minidump capabilities, which you would want to have. Second, it has proven to be rather unreliable. I believe it simply lacks too many bugfixes.
What I found to work quite well is to take dbghelp.dll from the Debugging Tools for Windows package (currently part of the Windows SDK) and ship it along with our product. This way, I can be sure minidumps will have all the latest features and it works reliably on every OS. It has been some 8 years now, with OSes ranging from WinXP to Win10, and I haven't had any issues.
I'm not exactly sure which version of the SDK I used to extract the currently used dbghelp.dll; probably it was the Win7 SDK. I simply haven't had a reason to update since then. However, we do use the Debugging Tools for Windows package from the Win10 SDK on Win7 without any issues, so I guess you can use the Win10 version as well.
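A rough sketch of that approach (paths and names are illustrative, not the answerer's actual code): load the copy of dbghelp.dll sitting next to the EXE by full path, then resolve MiniDumpWriteDump from that module.

#include <windows.h>
#include <dbghelp.h>
#include <cwchar>

typedef BOOL (WINAPI *MiniDumpWriteDump_t)(
    HANDLE, DWORD, HANDLE, MINIDUMP_TYPE,
    PMINIDUMP_EXCEPTION_INFORMATION,
    PMINIDUMP_USER_STREAM_INFORMATION,
    PMINIDUMP_CALLBACK_INFORMATION);

MiniDumpWriteDump_t LoadShippedMiniDumpWriteDump()
{
    wchar_t path[MAX_PATH];
    // Build "<folder of the EXE>\dbghelp.dll" so the copy shipped with the product is used.
    DWORD len = GetModuleFileNameW(nullptr, path, MAX_PATH);
    if (len == 0 || len >= MAX_PATH)
        return nullptr;
    wchar_t* slash = wcsrchr(path, L'\\');
    if (slash == nullptr || (slash + 1 - path) + 12 > MAX_PATH)  // 12 = wcslen(L"dbghelp.dll") + 1
        return nullptr;
    wcscpy_s(slash + 1, MAX_PATH - (slash + 1 - path), L"dbghelp.dll");

    HMODULE hDbgHelp = LoadLibraryExW(path, nullptr, LOAD_WITH_ALTERED_SEARCH_PATH);
    if (hDbgHelp == nullptr)
        return nullptr;
    return reinterpret_cast<MiniDumpWriteDump_t>(
        GetProcAddress(hDbgHelp, "MiniDumpWriteDump"));
}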
that's exactly what I've been doing, and I didn't bring dbgcore.dll
This was just a plain bad idea. Microsoft makes no effort to make the DLLs that are included with the OS backwards compatible. They don't have to; in their implementation, only the interface needs to be compatible. They do take advantage of new capabilities or design changes to improve the implementation.
What you saw here is a side effect of the MinWin project. There is no reasonable way to guess where that ended up; if it happens to work now on the Win7 machine, then you got lucky. Maybe you won't be so lucky on a Win7 machine without SP1, maybe some MinWin glue DLLs are missing on a clean install, maybe the minidump itself is affected negatively in some way. Impossible to predict.
So never do this. As far as I know you should not need to do this at all; a Win7 machine already has dbghelp.dll available. Not 100% sure, it has been too long and Win7 is rapidly turning into the new XP. If you find it to be necessary, then always use the redistributable version included with the SDK's Debugging Tools for Windows. Copy it into the same folder as the EXE that needs it so you don't mess up the machine.

Is DLL loaded in kernel mode or user mode?

I was asked such a question in an interview:
In Windows, suppose there is an exe which depends on some dlls. When you start the exe, the dependent dlls will be loaded. Are these dlls loaded in kernel mode or user mode?
I am not quite sure about the question, not to mention the answer - could you help explain?
Thanks.
I'm not an expert on how Windows works internally, but from what I know the correct answer is user mode, simply because only processes belonging to your operating system are admitted into kernel space: http://en.wikibooks.org/wiki/Windows_Programming/User_Mode_vs_Kernel_Mode
Basically if it's not an OS process, it's going to be allocated in the user space.
The question is very imprecise/ambiguous. "In Windows" suggests something but isn't clear what. Likely the interviewer was referring to the Win32 subsystem - i.e. the part of Windows that you usually get to see as an end-user. The last part of the question is even more ambiguous.
Now, while process and section objects (referred to in MSDN as MMFs; loaded PE images such as .exe, .dll, and .sys files) are indeed kernel objects and require some assistance from the underlying executive (and memory manager, etc.), the respective code in the DLL (including that in DllMain) will behave exactly the same as any other user mode code when called from a user mode process. That is, each thread that is running code from the DLL will eventually transition to kernel mode to make use of OS services (opening files, loading PE files, creating events, etc.), or do some work in user mode whenever that is sufficient.
Perhaps the interviewer was even interested in the memory ranges that are sometimes referred to as "kernel space" and "user space", traditionally split at the 2 GB boundary for 32-bit. And yes, DLLs usually end up below the 2 GB boundary, i.e. in "user space", while other shared memory (memory-mapped files, MMFs) usually ends up above that boundary.
It is even possible that the interviewer fell victim to a common misunderstanding about DLLs. The DLL itself is merely a dormant piece of memory; it isn't ever running anything on its own (and yes, this is also true for DllMain). Sure, the loader will take care of all kinds of things such as relocations, but in the end nothing will run without being called explicitly or implicitly (in the context of some thread of the process loading the DLL). So for all practical purposes, the question would require you to ask a question back.
Define "in Windows".
Also "dlls loaded in kernel mode or user mode", does this refer to the code doing the loading or to the end result (i.e. where the code runs or in what memory range it gets loaded)? Parts of that code run in user mode, others in kernel mode.
I wonder whether the interviewer has a clear idea of the concepts s/he is asking about.
Let me add some more information. It seems from the comments on the other answer that people have the same misconception about drivers that exists about DLLs. Drivers are much closer to the idea of DLLs than to that of EXEs (or ultimately "processes"). The thing is that a driver doesn't do anything on its own most of the time (though it can create system threads to change that). Drivers are not processes and they do not create processes.
The answer is quite obviously user mode for anybody who does any kind of significant application development for Windows. Let me explain two things.
DLL
A dynamic link library is very similar to a regular old static link library, or .lib. When your application uses a .lib, the function definitions are pasted in at link time, just after compilation. You typically use a .lib to store APIs and to modify functions without having to rebuild the whole project: just paste a new .lib with the same name over the old one, and as long as the interface (function names and parameters) hasn't changed, it still works. Great modularity.
A .dll does exactly the same thing, but it doesn't require re-linking or any compilation. You can think of a .dll as essentially a .lib which gets compiled into a binary just like the applications that use it. Simply drop in a new .dll that shares the name and function signatures and it all just works. You can update your application simply by replacing .dlls. This is why most Windows software consists of .dlls and a few .exes.
A .dll is used in one of two ways:
Implicit linking
To link this way, if you had a DLL called userapplication.dll, you would also have a userapplication.lib which defines all the entry points in the DLL. You simply link against that library and then place the .dll in the working directory.
Explicit linking
Alternatively, you can programmatically load the .dll by first calling LoadLibrary("userapplication.dll"), which returns a handle to your .dll. Then call GetProcAddress(handle, "FunctionInUserApplicationDll"), which returns a function pointer you can use. This way your application can check things before attempting to use the DLL, as in the sketch below. C# is a little different but easier.
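A small sketch of the explicit-linking path described above (names are the hypothetical ones from this answer):

#include <windows.h>

typedef int (__cdecl *FunctionInUserApplicationDll_t)(void);

int CallThroughExplicitLink()
{
    HMODULE hDll = LoadLibraryW(L"userapplication.dll");
    if (hDll == nullptr)
        return -1;                              // the DLL could not be found or loaded
    auto fn = reinterpret_cast<FunctionInUserApplicationDll_t>(
        GetProcAddress(hDll, "FunctionInUserApplicationDll"));
    int result = (fn != nullptr) ? fn() : -1;   // check before calling, as described above
    FreeLibrary(hDll);
    return result;
}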
USER/KERNEL MODES
Windows has two major modes of execution: user mode and kernel mode (the kernel is further divided into system and sessions). In user mode, physical memory addresses are opaque: user mode makes use of virtual memory, which is mapped onto real memory. User mode drivers are, incidentally, also .dlls. A user mode application typically gets around 4 GB of virtual address space to work with. Two different applications cannot meaningfully share those addresses, because they only have meaning within the context of their own application or process. There is no way for a user mode application to know its physical memory addresses without falling back on a kernel mode driver. This is basically everything you're used to programming (unless you develop drivers).
Kernel mode is protected from user mode applications. Most hardware drivers work in the context of kernel mode, and typically all Windows APIs are broken into two categories: user and kernel. Kernel mode drivers use kernel mode APIs and do not use user mode APIs, and hence don't use .dlls (you can't even print to a console, because that is a user mode API set). Instead they use .sys files, which are drivers and work essentially the same way as .dlls do in user mode. A .sys is in PE format, so it is basically an .exe, just like a .dll is like an .exe without a main() entry point.
So from the askers perspective you have two groups
[kernel/.sys] and [user/.dll or .exe]
There really aren't .exes in the kernel, because the operating system does everything, not users. When the system or another kernel component starts something, it does so by calling the driver's DriverEntry() routine, so I guess that is like main().
So in this sense the question is quite simple.

Meaning of hex number in Windows crash dialog

Every now and then (ahem...) my code crashes on some system; quite often, my users send screenshots of Windows crash dialogs. For instance, I recently received this:
Unhandled win32 exception # 0x3a009598 in launcher2g.exe:
0xC0000005: Access violation writing location 0x00000000.
It's clear to me (due to the 0xC0000005 code as well as the written-out error message) that I'm following a null pointer somewhere in my launcher2g.exe process. What's not clear to me is the significance of the '0x3a009598' number. Is this the address in the process's address space of the assembly instruction that triggered the problem?
Under the assumption that 0x3a000000 is the position where the launcher2g.exe module was loaded into the process, I used the Visual Studio debugger to check the assembler code at 0x3a009598 but unfortunately that was just lots of 'int 3' instructions (this was a debug build, so there's lots of int 3 padding).
I always wondered how to make the most of these # 0x12345678 numbers - it would be great if somebody here could shed some light on it, or share some pointers to further explanations.
UPDATE: In case anybody finds this question in the future, here's a very interesting read I found which explains how to make sense of error messages as the one I quoted above: Finding crash information using the MAP file.
0x3a009598 would be the address of the x86 instruction that caused the crash.
The EXE typically gets loaded at its preferred load address - usually 0x00400000, IIRC. So it's probably bloody far away from 0x3a009598. Some DLL loaded by the process is probably located at that address.
Crash dumps are usually the most useful way to debug this kind of thing if you can get your users to generate and send them. You can load them with Visual Studio 2005 and up and get automatic symbol resolution of system dlls.
Next up, the .map files produced by your build process should help you determine the offending function - assuming you do manage to figure out which exe/dll module the crash was inside, and what its actual load address was.
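As a tiny worked sketch (the module base below is invented for illustration), the dialog's number becomes useful once it is turned into an offset relative to the module that actually contained the instruction:

#include <cstdint>
#include <cstdio>

int main()
{
    const std::uintptr_t crashAddress = 0x3a009598;  // number from the crash dialog
    const std::uintptr_t moduleBase   = 0x3a000000;  // hypothetical load address of the faulting module
    // Compare this offset against the symbols/addresses listed in the linker .map file.
    std::printf("offset into module: 0x%llx\n",
                static_cast<unsigned long long>(crashAddress - moduleBase));
    return 0;
}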
On XP, users can use Dr. Watson (drwtsn32.exe) to produce and send you crash dumps. On Vista and up, Windows Error Reporting writes the crash dumps to c:\users\\AppData\Local\Temp*.mdmp
