I am working with Windows 7 x64. I understand that PatchGuard is enabled and should prevent write access to the SSDT structure in ntoskrnl.exe. However, for learning purposes, I was wondering if my driver can call a function like ZwXxxx directly.
By directly I mean: obtain the kernel base, and let's say the offset to the function is 0xDeadBeef. Can I just create a typedef'd function pointer to that location and call it like that, without going through the SSDT? I know this is how I would do it in user mode; I'm not sure if it's the same case in kernel mode.
Thanks.
As you said, PatchGuard prevents SSDT modification, so reading it is fine. And if you have a function's address, you can call it. It makes no difference how you obtained that address: from the SSDT, by signature scanning, from a hardcoded value, or otherwise.
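For example, a minimal sketch inside a driver. ZwClose and the 0xDeadBeef-style offset are placeholders only, and how you obtain the kernel base (e.g. AuxKlibQueryModuleInformation plus a map file or a disassembler) is up to you:

#include <ntddk.h>

/* Prototype of the routine you want to call; ZwClose is used purely as an example. */
typedef NTSTATUS (NTAPI *PFN_ZWCLOSE)(HANDLE Handle);

/* KernelBase and Offset are assumed to have been obtained elsewhere. */
NTSTATUS CallByAddress(PVOID KernelBase, SIZE_T Offset, HANDLE Handle)
{
    PFN_ZWCLOSE pfnZwClose = (PFN_ZWCLOSE)((PUCHAR)KernelBase + Offset);

    /* Nothing in the SSDT is read or written; this is a plain indirect call. */
    return pfnZwClose(Handle);
}

As long as the address really is the start of the function and the prototype and calling convention match, the call works like any other indirect call.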
I am following the process to create a trampoline in order to hook a dll function (in my case Direct3DCreate9 from d3d9.dll) as outlined here: https://www.malwaretech.com/2015/01/inline-hooking-for-programmers-part-1.html and https://www.malwaretech.com/2015/01/inline-hooking-for-programmers-part-2.html
My code differs slightly as I am working out the offset bytes manually using a disassembler rather than using the hde32_disasm function.
Everything seems to work fine: the victim process calls my injected DLL's wrapper function, the wrapper does some work and then calls the original function (Direct3DCreate9); once the original returns, the wrapper should do some more work before returning to the victim process.
Unfortunately, when the original function is called from the hook wrapper it returns to the victim application rather than to the wrapper, so some of the wrapper's code is skipped.
Having stepped through the disassembly, it looks as though the call stack is being overwritten, so when Direct3DCreate9 returns it pops back to the victim application instead of my hook function that made the call.
I'm guessing I need to push the hook function onto the call stack manually? How would I go about this?
Other potentially pertinent information: both the victim process and the hook have been built in debug mode. Direct3DCreate9 is a __stdcall function, I am using VS2010 for the hook DLL, and the victim process was compiled with VS2015.
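For context, my setup follows the 5-byte JMP hook and trampoline pattern from those articles; below is a simplified sketch of that pattern (x86 only, and the 5-byte prologue length is an assumption here, since my real byte counts come from the disassembler). The wrapper calls the returned trampoline pointer rather than Direct3DCreate9 directly.

#include <windows.h>
#include <string.h>

#define PROLOGUE_LEN 5  /* bytes stolen from the target's prologue (assumed) */

static BYTE g_trampoline[PROLOGUE_LEN + 5];  /* stolen bytes + jmp back to target+5 */

/* Overwrite the start of target with a jmp to hook, and return a callable
   trampoline that runs the stolen prologue and jumps back into the original. */
void *InstallHook(void *target, void *hook)
{
    DWORD oldProtect;

    /* Build the trampoline: original prologue, then jmp rel32 to target+PROLOGUE_LEN. */
    memcpy(g_trampoline, target, PROLOGUE_LEN);
    g_trampoline[PROLOGUE_LEN] = 0xE9;
    *(DWORD *)&g_trampoline[PROLOGUE_LEN + 1] =
        ((DWORD)target + PROLOGUE_LEN) - ((DWORD)&g_trampoline[PROLOGUE_LEN] + 5);
    VirtualProtect(g_trampoline, sizeof(g_trampoline), PAGE_EXECUTE_READWRITE, &oldProtect);

    /* Patch the target: jmp rel32 to the hook function. */
    VirtualProtect(target, PROLOGUE_LEN, PAGE_EXECUTE_READWRITE, &oldProtect);
    *(BYTE *)target = 0xE9;
    *(DWORD *)((BYTE *)target + 1) = (DWORD)hook - ((DWORD)target + 5);
    VirtualProtect(target, PROLOGUE_LEN, oldProtect, &oldProtect);
    FlushInstructionCache(GetCurrentProcess(), target, PROLOGUE_LEN);

    return g_trampoline;  /* the wrapper calls this to reach the original function */
}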
It turns out the call stack was being nobbled by the NVIDIA graphics driver nvd3d9wrap.dll. This DLL was injecting into the d3d9 application in the same manner as I was, which led to the madness described in the original post.
The solution was to open Device Manager in Windows and disable the NVIDIA graphics driver. Thankfully my PC has an integrated graphics chip, so I am able to use that instead.
I'm in the process of porting an existing Win32 application to x64.
In one of the modules, I see a fixed based address passed to MapViewOfFileEx() as "lpBaseAddress" argument. The value passed is 0x20000000.
In one of the porting guidelines, I read that we should stay away from such "magic numbers" while porting to x64.
But, the code using the base address 0x20000000 is a legacy one and is called from lots of other modules for shared memory allocation. So, I'm hesitant to change the value of this address while porting to x64.
I'd like to know whether the code ported to x64 will work correctly with the same base address.
As a side note, I also see that the current (x86) code is linked with the /base option set to 0x1C000000, i.e. -base:0x1C000000.
Does this have any relation to the valid value of base address we can request from MapViewOfFileEx()?
Any insight/ideas will be greatly appreciated.
Edit:
To clarify, this question doesn't pertain to any addresses per se. What I want to know is whether a 32-bit constant address passed to MapViewOfFileEx() can be reused while porting to x64 platform. The reference to linker option "base" was to ask if the address specified as the base address while linking has any relation to the address lpBaseAddress we pass to MapViewOfFileEx().
This is a bit of a non-question. The real issue is why the file must be mapped at that address, and I'm having a tough time believing that changing the 'legacy' code to be more flexible is completely off the table.
Calling MapViewOfFileEx with a specific base address is really, really dangerous. There is never any guarantee that Windows will be able to honour that request: even if it fails only one time in a hundred (which is the worst kind of bug, no?), that address may already be occupied. ASLR is a case in point, or Windows might have put the heap there, or whatever.
So, tl;dr: don't do that. Just don't. Find another way.
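To make the danger concrete: when the requested address is not available, MapViewOfFileEx does not pick another one, it simply fails. So at the very least the code has to handle that case explicitly. A hedged sketch (the mapping handle, size and access flags here are placeholders):

#include <windows.h>

/* Try the historical base address first, then fall back to letting Windows choose.
   The fallback only works if consumers of the view use offsets rather than
   absolute pointers stored inside the shared memory. */
void *MapSharedRegion(HANDLE hMapping, SIZE_T viewSize)
{
    void *view = MapViewOfFileEx(hMapping, FILE_MAP_ALL_ACCESS, 0, 0, viewSize,
                                 (LPVOID)0x20000000);
    if (view == NULL)
    {
        /* The preferred address was not free (ASLR, heap, another DLL, ...). */
        view = MapViewOfFileEx(hMapping, FILE_MAP_ALL_ACCESS, 0, 0, viewSize, NULL);
    }
    return view;
}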
I'm writing a driver for Windows NT that provides Ring-0 access for a user-space application. I want to make a utility with exclusive rights to execute any user command, protected from any external harmful influence.
Searching the Internet, I found that it is necessary to hook some native kernel functions, such as NtOpenProcess, NtTerminateProcess, NtDuplicateObject, etc. I've made a working driver which protects an application, but then I realized it would be better to also protect it from external attempts to remove the driver or to prevent it from loading during OS startup, like a firewall does. I divided the task into two parts: preventing physical removal of the driver from \system32\drivers\, and preventing changes to or removal of the registry key responsible for loading the driver (HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services).
The problem is that I do not understand how to hook access to the registry key from kernel space, and I am not even sure it is possible: all the ntdll functions that work with the registry live in user space and are unavailable from kernel space. Also, any API hooks I set from user space would live in a specific process's memory context, so I would need to inject a DLL into every process, existing or new.
Is there a way to hook all NT calls in one place without injecting a DLL into every process?
You are going about this the wrong way. Registry calls are also NT syscalls and reside in the SSDT (like the other Zw* syscalls), but hooking the SSDT is bad practice; a major drawback is that it doesn't work on x64 systems because of PatchGuard. The right way is to use the OS's documented filtering mechanisms. For registry calls, that is the Configuration Manager callbacks. There are some caveats for the Windows XP version of these callbacks (some facilities are unimplemented or buggy), but XP is dead now =). They are very simple to use; you can start (and end =)) with this guide: http://msdn.microsoft.com/en-us/library/windows/hardware/ff545879(v=vs.85).aspx
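A hedged sketch of registering such a callback with CmRegisterCallbackEx (Vista and later; XP only has the older CmRegisterCallback). The altitude string and the filtering logic are placeholder assumptions; see the linked guide for the full set of notification classes:

#include <ntddk.h>

static LARGE_INTEGER g_cmCookie;

/* Called for every registry operation system-wide. Failing a pre-notification
   blocks the operation. */
static NTSTATUS RegistryCallback(PVOID CallbackContext, PVOID Argument1, PVOID Argument2)
{
    REG_NOTIFY_CLASS notifyClass = (REG_NOTIFY_CLASS)(ULONG_PTR)Argument1;

    UNREFERENCED_PARAMETER(CallbackContext);
    UNREFERENCED_PARAMETER(Argument2);

    if (notifyClass == RegNtPreDeleteKey || notifyClass == RegNtPreSetValueKey)
    {
        /* Argument2 points at REG_DELETE_KEY_INFORMATION / REG_SET_VALUE_KEY_INFORMATION.
           If the operation targets your service key, return STATUS_ACCESS_DENIED here. */
    }
    return STATUS_SUCCESS;
}

NTSTATUS RegisterRegistryFilter(PDRIVER_OBJECT DriverObject)
{
    UNICODE_STRING altitude = RTL_CONSTANT_STRING(L"360000"); /* placeholder altitude */

    return CmRegisterCallbackEx(RegistryCallback, &altitude, DriverObject,
                                NULL, &g_cmCookie, NULL);
}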
I encounter an access violation when calling a DLL inside LabVIEW. Let's call the DLL "extcode.dll". I don't have its code, it comes from an external manufacturer.
Running it under WinDbg, it stopped with the message:
(724.1200): Access violation - code c0000005 (first chance)
First chance exceptions are reported before any exception handling.
This exception may be expected and handled.
ntdll!RtlNewSecurityObjectWithMultipleInheritance+0x12a:
And call stack is:
ntdll!RtlNewSecurityObjectWithMultipleInheritance+0x12a
ntdll!MD5Final+0xedfc
ntdll!RtlFindClearBitsAndSet+0xdf4
ntdll!RtlFindClearBitsAndSet+0x3a8
ntdll!RtlFindClearBitsAndSet+0x4b9
ntdll!RtlCreateProcessParametersEx+0x829
ntdll!LdrLoadDll+0x9e
KERNELBASE!LoadLibraryExW+0x19c
KERNELBASE!LoadLibraryExA+0x51
LabVIEW!ChangeVINameWrapper+0x36f5
LabVIEW!ChangeVINameWrapper+0x3970
LabVIEW!ExtFuncDynLibWrapper+0x211
Note that dependencies of extcode.dll are loaded before access violation.
The situation is random, but when it happens all subsequent tries lead to it.
The code is a simple LabVIEW function calling a function in the DLL, and the prototype is super simple (int function(void)), so it cannot be a misconfiguration of the call parameters or pointer arithmetic. I checked every combination of calling conventions and error-checking levels.
The DLL runs perfectly fine when called in other environments (.NET and C).
I found that RtlFindClearBitsAndSet is related to bit-array manipulation.
What does this make you think? Do you think it is a problem in extcode.dll, LabVIEW, or Windows?
PS: I use LabVIEW 2010 64-bit on Windows 7 64-bit (and extcode.dll is 64-bit). I didn't manage to reproduce it on a 32-bit system.
11/18 EDIT
I ended up making a standalone EXE that wraps the DLL; LabVIEW communicates with it through pipes. It works perfectly, but I still don't understand why loading a DLL into LabVIEW can crash it.
If it works OK when called from C, you can quit working with WinDbg because the DLL is probably fine. Something is wrong with how the DLL is being called, and once the DLL overwrites some of LabVIEW's memory it is all over, even though it might take 1000 iterations before something actually goes kablooey.
First, check your calling convention, C or StdCall. The C calling convention is the default and StdCall is almost certainly what you want. (Check the DLL header file.) LabVIEW 2009 apparently did some auto-checking and fixing of calling conventions, but the switch to LLVM in LabVIEW 2010 has made this impossible; now it just tanks.
If it still tanks after changing this, check your calling arguments again. What are you passing, scalars or pointer data? You cannot access memory allocated by the DLL from LabVIEW without doing some sneaky things, although you can allocate memory (i.e. a byte array) in LabVIEW and pass a pointer to it to the DLL for it to modify.
Also, if you are getting a pointer (such as a refnum) from an earlier call to the DLL and passing it back, check your pointer size. LabVIEW's Call Library Function node now has a "pointer-sized integer" type, which generates the appropriately sized type depending on whether it is invoked in 32-bit or 64-bit LabVIEW. (It is always 64 bits on the wire, because that has to be defined at compile time.) The fact that your DLL works in 32-bit suggests this is a possibility.
Also keep in mind that C structs are often aligned by the (C) compiler. If you are passing a pointer to a struct made of a UInt8 and a UInt16, the C compiler will allocate 32 bits (or maybe even 64 bits) for it. You'll have to pad your struct (cluster) in LabVIEW to make it match, or write a wrapper DLL to assemble the struct.
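For example, a quick sketch of the layout difference (sizes are typical for MSVC defaults; the struct names are made up):

#include <stdint.h>
#include <stdio.h>

/* Default layout: 1 byte, 1 byte of padding, 2 bytes => sizeof == 4. */
struct Sample
{
    uint8_t  flags;
    uint16_t value;
};

/* Packed layout: no padding => sizeof == 3, matching a flat U8 + U16 cluster. */
#pragma pack(push, 1)
struct PackedSample
{
    uint8_t  flags;
    uint16_t value;
};
#pragma pack(pop)

int main(void)
{
    printf("default: %u, packed: %u\n",
           (unsigned)sizeof(struct Sample), (unsigned)sizeof(struct PackedSample));
    return 0;
}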
-Rob
An access violation (0xc0000005) will also be reported if DEP (Data Execution Prevention) is enabled on your machine and disapproves of something your binary (EXE or DLL) is trying to do. DEP is usually off by default on Windows XP, but active on Windows Vista / Windows 7.
DEP is a hardware-supported security measure designed to prevent malicious code from executing bytes that were previously considered "just some data"; I've had a few run-ins with it, all of which required re-compiling the offending binaries with a recent version of Microsoft Visual Studio; this allows you to set a flag which defines whether or not your binary supports DEP.
Some useful resources:
I found this MSDN blog entry very helpful for gaining an understanding of what DEP is and does.
You might also want to consult this TechNet article on how to turn DEP on and off on Windows XP.
This is hard to diagnose remotely, but here are a couple of ideas.
The fact that your function takes no arguments means that either the function is truly trivial, or there is some stored state in the DLL that takes previous function calls into account. Maybe the crash in this function is only an indicator, and you have a problem with a previous function call? Is there an initialization routine you're not calling?
If you only have a problem when using 64-bit LabVIEW, my first guess would be that there's a problem with the 64-bit version of the DLL, but if you are sure you don't have any problem with the exact same calls when using the DLL in other environments, I'm stumped. One possibility is that you are using the wrong calling convention (stdcall vs. cdecl) in LabVIEW.
Have you tried importing the DLL and header using the LabVIEW import wizard? This might help avoid silly mistakes with the prototypes.
One other thing to try: right click on the DLL call, choose configure and make sure you're running in the UI thread instead of any thread. Sometimes this helps.
When working with Git and Cygwin under NTFS, I found that sometimes the executable bit is not set (or is unset during checkout or some file operations). Inside Cygwin, cd to the folder and do
chmod a+rwx *.dll
and check whether it changes anything (and check that you actually want it this way!). I found this question while searching for LoadLibrary() failing with GetLastError() returning 5 (not "0xc0000005", by the way) and solved the issue with this chmod call.
Just curious here: is it possible to invoke a Windows Blue Screen of Death using .NET managed code under Windows XP/Vista? And if it is possible, what could the example code be?
Just for the record, this is not for any malicious purpose, I am just wondering what kind of code it would take to actually kill the operating system as specified.
The keyboard thing is probably a good option, but if you need to do it by code, continue reading...
You don't really need anything to barf, per se; all you need to do is find the KeBugCheck(Ex) function and invoke it.
http://msdn.microsoft.com/en-us/library/ms801640.aspx
http://msdn.microsoft.com/en-us/library/ms801645.aspx
For manually initiated crashes, you want to use 0xE2 (MANUALLY_INITIATED_CRASH) or 0xDEADDEAD (MANUALLY_INITIATED_CRASH1) as the bug check code. They are reserved explicitly for that use.
However, finding the function may prove to be a bit tricky. The Windows DDK may help (check Ntddk.h) - I don't have it available at the moment, and I can't seem to find decisive info right now - I think it's in ntoskrnl.exe or ntkrnlpa.exe, but I'm not sure, and don't currently have the tools to verify it.
You might find it easier to just write a simple C++ app or something that calls the function, and then just running that.
Mind you, I'm assuming that Windows doesn't block you from accessing the function from user-space (.NET might have some special provisions). I have not tested it myself.
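For what it's worth, KeBugCheck(Ex) is exported by the kernel (ntoskrnl.exe) for kernel-mode callers, so the place where calling it is genuinely trivial is inside a throwaway test driver. A minimal sketch, assuming you really do want the machine to bug-check the moment the driver loads:

#include <ntddk.h>

#define MANUALLY_INITIATED_CRASH 0xE2  /* from bugcodes.h; reserved for deliberate crashes */

/* WARNING: loading this driver immediately bug-checks the machine. */
NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    UNREFERENCED_PARAMETER(DriverObject);
    UNREFERENCED_PARAMETER(RegistryPath);

    /* The four extra parameters end up in the crash dump for whoever analyses it. */
    KeBugCheckEx(MANUALLY_INITIATED_CRASH, 0, 0, 0, 0);
    return STATUS_SUCCESS;  /* never reached */
}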
I do not know if it really works, and I am sure you need admin rights, but you could set the CrashOnCtrlScroll registry key and then use SendKeys to send CTRL+Scroll Lock+Scroll Lock.
But I believe this HAS to come from the keyboard driver, so I guess a simple SendKeys is not good enough; you would either need to somehow hook into the keyboard driver (sounds really messy) or check whether that crash-dump mechanism has an API that can be called with P/Invoke.
http://support.microsoft.com/kb/244139
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\i8042prt\Parameters
Name: CrashOnCtrlScroll
Data Type: REG_DWORD
Value: 1
Restart
I would have to say no. You'd have to P/Invoke and interact with a driver or other code that lives in kernel space. .NET code lives far removed from this area, although there has been some talk about managed drivers in future versions of Windows. Just wait a few more years and you can crash away just like our unmanaged friends.
As far as I know, a real BSOD requires a failure in kernel-mode code. Vista still has BSODs, but they're less frequent because the new driver model has fewer drivers in kernel mode. Any user-mode failure will just result in your application being killed.
You can't run managed code in kernel mode, so if you want to BSOD you need to use P/Invoke. But even this is quite difficult: you need some really fancy P/Invokes to get something in kernel mode to barf.
But among the thousands of SO users there is probably someone who has done this :-)
You could use OSR Online's tool that triggers a kernel crash. I've never tried it myself, but I imagine you could just run it via the standard .NET Process class:
http://www.osronline.com/article.cfm?article=153
I once managed to generate a BSOD on Windows XP using System.Net.Sockets in .NET 1.1 irresponsibly. I could repeat it fairly regularly, but unfortunately that was a couple of years ago and I don't remember exactly how I triggered it, or have the source code around anymore.
Try live video input using DirectShow in DirectX 8 or DirectX 9; most of the calls go to kernel-mode video drivers. I produced lots of blue screens when running a callback procedure from a live video-capture source, particularly if the callback takes a long time, which can halt the entire kernel driver.
It's possible for managed code to cause a bugcheck when it has access to faulty kernel drivers. However, it would be the kernel driver that directly causes the BSOD (for example, uffe's DirectShow BSODs, Terence Lewis's socket BSODs, or BSODs seen when using BitTorrent with certain network adapters).
Direct user-mode access to privileged low-level resources may cause a bugcheck (for example, scribbling on Device\PhysicalMemory, if it doesn't corrupt your hard disk first; Vista doesn't allow user-mode access to physical memory).
If you just want a dump file, Mendelt's suggestion of using WinDbg is a much better idea than exploiting a bug in a kernel driver. Unfortunately, the .dump command is not supported for local kernel debugging, so you would need a second PC connected over serial or 1394, or a VM connected over a virtual serial port. LiveKd may be a single-PC option, if you don't need the state of the memory dump to be completely self-consistent.
This one doesn't need any kernel-mode drivers, just SeDebugPrivilege. You can mark your process as critical with NtSetInformationProcess or RtlSetProcessIsCritical and then simply kill it. You will see the same bugcheck code as when you kill csrss.exe, because you set the same "critical" flag on your process.
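A hedged sketch of that approach. RtlSetProcessIsCritical is undocumented, so the prototype below is an assumption based on common usage, SeDebugPrivilege must already be enabled (AdjustTokenPrivileges is omitted here), and running this really does bug-check the OS:

#include <windows.h>
#include <winternl.h>

/* Assumed prototype of the undocumented ntdll export. */
typedef NTSTATUS (NTAPI *PFN_RTL_SET_PROCESS_IS_CRITICAL)(BOOLEAN NewValue,
                                                          PBOOLEAN OldValue,
                                                          BOOLEAN NeedScb);

int main(void)
{
    PFN_RTL_SET_PROCESS_IS_CRITICAL pRtlSetProcessIsCritical =
        (PFN_RTL_SET_PROCESS_IS_CRITICAL)GetProcAddress(
            GetModuleHandleW(L"ntdll.dll"), "RtlSetProcessIsCritical");

    if (pRtlSetProcessIsCritical != NULL)
    {
        pRtlSetProcessIsCritical(TRUE, NULL, FALSE);  /* mark this process critical */
        ExitProcess(0);  /* terminating a critical process triggers the bugcheck */
    }
    return 0;
}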
Unfortunately, I know how to do this as a .NET service on our server was causing a blue screen. (Note: Windows Server 2008 R2, not XP/Vista).
I could hardly believe a .NET program was the culprit, but it was. Furthermore, I've just replicated the BSOD in a virtual machine.
The offending code causes a 0x000000F4:
string name = string.Empty; // This is the cause of the problem, should check for IsNullOrWhiteSpace
bool r = false;             // whether any matching process was found and killed

// Every process name "starts with" the empty string, so this loop tries to kill
// every process it can open - including csrss.exe, which takes the machine down.
foreach (Process process in Process.GetProcesses().Where(p => p.ProcessName.StartsWith(name, StringComparison.OrdinalIgnoreCase)))
{
    Check.Logging.Write("FindAndKillProcess THIS SHOULD BLUE SCREEN " + process.ProcessName);
    process.Kill();
    r = true;
}
If anyone's wondering why I'd want to replicate the blue screen, it's nothing malicious. I've modified our logging class to take an argument telling it to write directly to disk, as the actions prior to the BSOD weren't appearing in the log despite .Flush() being called. I replicated the server crash to test the logging change. The VM duly crashed, but the logging worked.
EDIT: Killing csrss.exe appears to be what causes the blue screen. As per comments, this is likely happening in kernel code.
I found that if you run taskkill /F /IM svchost.exe as an Administrator, it tries to kill just about every service host at once.