I'm trying to write a simple debugger. For simplicity, assume it runs under Windows XP.
First I create the new process as follows:
CreateProcess(processName, // application to debug
NULL, // command line
NULL, // process security attributes
NULL, // thread security attributes
FALSE, // don't inherit handles
DEBUG_PROCESS | DEBUG_ONLY_THIS_PROCESS,
NULL, // use parent's environment
NULL, // use parent's current directory
&startInfo,
&openedProcessInfo);
And when I try to read or write something in the memory of the debugged process, I run into problems. For example:
DWORD oldProtect;
if(!VirtualProtectEx(hProcess, breakpointAddr, 1, PAGE_EXECUTE_READWRITE, &oldProtect)) {
printf("Error: %d\n", GetLastError());
}
SIZE_T bytesRead = 0;
SIZE_T bytesWritten = 0;
BYTE instruction;
BOOL isOk = ReadProcessMemory(hProcess, breakpointAddr, &instruction, 1, &bytesRead);
BYTE originalByte = instruction; // save the original byte so it can be restored later
instruction = 0xCC;              // 0xCC = INT3 software breakpoint
if(isOk && bytesRead == 1) {
isOk = WriteProcessMemory(hProcess, breakpointAddr, &instruction, 1, &bytesWritten);
if(isOk) {
isOk = FlushInstructionCache(hProcess, breakpointAddr, 1);
}
}
if(!isOk) {
printf("Error: %d\n", GetLastError());
}
It works, but not everywhere. It works when the address I want to read or write lies inside the executable module (.exe) itself.
But when I try to read or write inside a DLL (for example, at the address of the function VirtualAlloc), VirtualProtectEx returns FALSE with GetLastError() = 487 ("Attempt to access invalid address"), and ReadProcessMemory also returns FALSE with GetLastError() = 299 ("Only part of a ReadProcessMemory or WriteProcessMemory request was completed").
Debug privileges are enabled, but that has no effect.
Your code looks fine. If you're running as administrator, the most likely cause of the problem is that breakpointAddr is not a valid address in the target process; VirtualProtectEx failing with "Attempt to access invalid address" supports this conclusion.
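As a quick sanity check, you could query the page before touching it; a minimal sketch using the names from your code (hProcess, breakpointAddr):
MEMORY_BASIC_INFORMATION mbi = { 0 };
if (VirtualQueryEx(hProcess, breakpointAddr, &mbi, sizeof(mbi)) == 0)
{
    printf("VirtualQueryEx error: %d\n", GetLastError());
}
else if (mbi.State != MEM_COMMIT)
{
    // nothing is committed at this address in the debuggee, so VirtualProtectEx
    // and ReadProcessMemory are bound to fail there
    printf("State=0x%lx Protect=0x%lx AllocationBase=%p\n",
           mbi.State, mbi.Protect, mbi.AllocationBase);
}
Also keep in mind that right after CreateProcess only the executable and ntdll are mapped into the debuggee; other DLLs arrive later via LOAD_DLL_DEBUG_EVENT, so an address inside kernel32 can legitimately be invalid at that point.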
I am writing a WinDBG extension to debug a device driver, and need to call an external binary to debug the device's firmware. I would like to show the output of this binary in the WinDBG console.
My initial idea was to simply pipe the output of the binary to a buffer and print that buffer with ControlledOutput. However, I get a 'broken pipe' error when I try to read from the pipe in my extension.
Here is how I create the external process in my extension:
SECURITY_ATTRIBUTES sAttr;
HANDLE childOutRead = NULL;
HANDLE childOutWrite = NULL;
PROCESS_INFORMATION childProcInfo;
STARTUPINFO childStartInfo;
char buf[4096];
sAttr.nLength = sizeof(SECURITY_ATTRIBUTES);
sAttr.bInheritHandle = TRUE;
sAttr.lpSecurityDescriptor = NULL;
CreatePipe(&childOutRead, &childOutWrite, &sAttr, 0);
// don't inherit read end
SetHandleInformation(childOutRead, HANDLE_FLAG_INHERIT, 0);
ZeroMemory(&childProcInfo, sizeof(PROCESS_INFORMATION));
ZeroMemory(&childStartInfo, sizeof(STARTUPINFO));
childStartInfo.cb = sizeof(STARTUPINFO);
childStartInfo.hStdError = childOutWrite;
childStartInfo.hStdOutput = childOutWrite;
childStartInfo.hStdInput = GetStdHandle(STD_INPUT_HANDLE);
childStartInfo.dwFlags |= STARTF_USESTDHANDLES;
CreateProcessA(NULL, "myBinary.exe someArgs",
NULL, NULL, TRUE, 0, NULL, NULL,
&childStartInfo, &childProcInfo);
// close the handle not used in parent
CloseHandle(childOutWrite);
// read output
while (1) {
DWORD read;
BOOL r;
DWORD error;
r = ReadFile(childOutRead, buf, sizeof(buf), &read, NULL);
if (!r) {
error = GetLastError();
windbgPrintf("got error 0x%x\n", error);
break;
}
if (read == 0) break;
windbgPrint(buf, read);
}
ReadFile fails with error 0x6D, ERROR_BROKEN_PIPE. This makes me suspect that the write end of the pipe is somehow not being inherited by the child.
I have nearly identical code working in a test outside of WinDBG, so WinDBG must be doing something differently. How do I get pipes working this way inside WinDBG?
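For reference, this is roughly the standalone test (with the return values checked) that does work for me outside WinDBG; a sketch, with myBinary.exe as a placeholder:
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SECURITY_ATTRIBUTES sAttr = { sizeof(SECURITY_ATTRIBUTES), NULL, TRUE };
    HANDLE childOutRead = NULL, childOutWrite = NULL;

    if (!CreatePipe(&childOutRead, &childOutWrite, &sAttr, 0))
    {
        printf("CreatePipe failed: %lu\n", GetLastError());
        return 1;
    }
    // keep the read end out of the child
    SetHandleInformation(childOutRead, HANDLE_FLAG_INHERIT, 0);

    STARTUPINFOA si = { 0 };
    PROCESS_INFORMATION pi = { 0 };
    si.cb = sizeof(si);
    si.hStdError = childOutWrite;
    si.hStdOutput = childOutWrite;
    si.hStdInput = GetStdHandle(STD_INPUT_HANDLE);
    si.dwFlags = STARTF_USESTDHANDLES;

    char cmdLine[] = "myBinary.exe someArgs";
    if (!CreateProcessA(NULL, cmdLine, NULL, NULL, TRUE, 0, NULL, NULL, &si, &pi))
    {
        printf("CreateProcessA failed: %lu\n", GetLastError());
        return 1;
    }
    CloseHandle(childOutWrite); // only the child holds a write handle now
    CloseHandle(pi.hThread);

    char buf[4096];
    DWORD read;
    while (ReadFile(childOutRead, buf, sizeof(buf), &read, NULL) && read != 0)
        fwrite(buf, 1, read, stdout);

    CloseHandle(childOutRead);
    CloseHandle(pi.hProcess);
    return 0;
}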
I have created a device in kernel space and access it in user space using CreateFile. I am able to send IOCTLs to the driver and they are executed properly. I don't know how to trace what happens after WdfRequestComplete, and upon return I end up with error 1 (invalid function). Before this is flagged as a duplicate, please note the difference: here I wrote the driver IOCTL myself, and I am using synchronous I/O, not asynchronous.
In user space:
fd = CreateFile(dev_path,
(FILE_GENERIC_READ | FILE_GENERIC_WRITE),
(FILE_SHARE_READ | FILE_SHARE_WRITE),
NULL, OPEN_EXISTING, 0, NULL);
// ... error checking code here
DeviceIoControl(fd, // device handle
VIRTQC_CMD_MMAP, // command to send
&inputBuffer,
inputBufferLength,
&outputBuffer,
outputBufferLength,
&returnLength,
(LPOVERLAPPED)NULL); // overlapped structure not needed using sync io
and in Kernel space
status = WdfRequestRetrieveInputBuffer(Request, InputBufferLength, &inputBuffer, NULL);
if (!NT_SUCCESS(status))
{
WdfRequestComplete(Request, STATUS_INVALID_PARAMETER);
return;
}
inputVirtArg = (VirtioQCArg*)inputBuffer;
status = WdfRequestRetrieveOutputBuffer(Request, OutputBufferLength, &outputBuffer, NULL);
if (!NT_SUCCESS(status))
{
WdfRequestComplete(Request, STATUS_INVALID_PARAMETER);
return;
}
outputVirtArg = (VirtioQCArg*)outputBuffer;
switch (IoControlCode)
{
case VIRTQC_CMD_MMAP:
if (PsGetCurrentThread() == irp->Tail.Overlay.Thread)
{
status = CreateAndMapMemory(device, &(inputVirtArg), &(outputVirtArg));
outputVirtArg->flag = (!NT_SUCCESS(status)) ? 0 : 1;
}
else
status = STATUS_UNSUCCESSFUL;
break;
default:
status = STATUS_INVALID_DEVICE_REQUEST;
break;
}
WdfRequestComplete(Request, status);
Update 1:
I have tried WdfRequestCompleteWithInformation(Request, status, OutputBufferLength); but I get the same result.
Also, I notice that the addresses of inputBuffer and outputBuffer are the same.
Update 2:
I tried doing
temp = ExAllocatePoolWithTag(
NonPagedPool,
PAGE_SIZE,
MEMORY_TAG
);
// other code to
RtlCopyMemory((VirtioQCArg*)outputBuffer, temp, OutputBufferLength);
I still get error 1.
I had defined my IOCTL commands as enums in my Linux driver (which works fine), and when implementing the Windows driver I used the same enum definition.
enum
{
// Module & Execution control (driver API)
VIRTIQC_SET = 200,
VIRTIQC_UNSET,
// more cmds.....
};
In Windows, defining control codes takes a bit more work. As explained here, the CTL_CODE macro should be used to define new IOCTL control codes:
#define IOCTL_Device_Function CTL_CODE(DeviceType, Function, Method, Access)
In my case I ended up with defining this:
#define VIRTQC_MAP CTL_CODE(FILE_DEVICE_NETWORK, 0xC8, METHOD_IN_DIRECT, FILE_READ_DATA | FILE_WRITE_DATA)
#define VIRTQC_UNMAP CTL_CODE(FILE_DEVICE_NETWORK, 0xC9, METHOD_OUT_DIRECT, FILE_READ_DATA)
I know that function codes below 0x800 are reserved for Microsoft, but the device on my host requires these codes, so I'm just providing what is being asked for.
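The same definitions also have to be visible to the user-mode program (CTL_CODE is available there through winioctl.h), so that DeviceIoControl sends the new control code instead of the old bare enum value. Roughly, with the buffers from the first snippet:
#include <winioctl.h> // CTL_CODE, FILE_DEVICE_NETWORK, METHOD_IN_DIRECT (include after windows.h)
#define VIRTQC_MAP CTL_CODE(FILE_DEVICE_NETWORK, 0xC8, METHOD_IN_DIRECT, FILE_READ_DATA | FILE_WRITE_DATA)

DeviceIoControl(fd,
                VIRTQC_MAP, // CTL_CODE-built value, matching the driver's definition
                &inputBuffer, inputBufferLength,
                &outputBuffer, outputBufferLength,
                &returnLength, NULL);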
I'm using ReadProcessMemory to read a single byte out of a process I've created.
Since I'm attached as a debugger, I'm reading addresses that are being executed now (or were executed in the near past).
But on some addresses (others work fine) ReadProcessMemory fails and GetLastError() returns 299.
In the failing cases I call VirtualQueryEx: it doesn't fail, but the returned Protect is 0x1 (PAGE_NOACCESS) and the Type and BaseAddress are 0x0, while RegionSize is a normal-looking number.
If I call VirtualProtectEx in those cases, I get error 487 (Attempt to access invalid address).
I thought the address I'm trying to read might be paged out, hence the errors, but that doesn't seem right since, as mentioned, it's an address that was executed recently.
Any ideas?
You want to loop through the whole address space, calling VirtualQueryEx() to make sure the memory is valid before calling ReadProcessMemory().
You need to make sure that MEMORY_BASIC_INFORMATION.State is MEM_COMMIT in most cases.
This whole operation is easy to get wrong. Because you didn't supply any code, I'll provide a working solution that should cover 95% of situations. It's fine for bytesRead to differ from the region size; you just need to handle that case correctly. You don't need to change permissions with VirtualProtectEx in most cases, because all valid committed memory should already be readable.
#include <windows.h>

int main()
{
    // GetProcId: helper (not shown here) that resolves a PID from the executable name
    DWORD procid = GetProcId("notepad.exe");
    unsigned char* addr = 0;
    HANDLE hProc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, procid);
    MEMORY_BASIC_INFORMATION mbi;

    // walk the target's address space one region at a time
    while (VirtualQueryEx(hProc, addr, &mbi, sizeof(mbi)))
    {
        if (mbi.State == MEM_COMMIT && mbi.Protect != PAGE_NOACCESS)
        {
            char* buffer = new char[mbi.RegionSize]{ 0 };
            SIZE_T bytesRead = 0;
            if (ReadProcessMemory(hProc, addr, buffer, mbi.RegionSize, &bytesRead) && bytesRead)
            {
                // a partial read is fine: scan from buffer[0] to buffer[bytesRead - 1]
                // and ignore whatever part of the region could not be read
            }
            delete[] buffer;
        }
        addr += mbi.RegionSize;
    }
    CloseHandle(hProc);
}
Reminder: This is just a PoC to teach you the concept.
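GetProcId above is not a Win32 API; a minimal sketch of such a helper using the Toolhelp snapshot functions (assuming a non-Unicode build, since the name is passed as a char string) could look like this:
#include <windows.h>
#include <tlhelp32.h>
#include <string.h>

DWORD GetProcId(const char* procName)
{
    DWORD pid = 0;
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
    if (snap == INVALID_HANDLE_VALUE)
        return 0;

    PROCESSENTRY32 entry = { 0 };
    entry.dwSize = sizeof(entry);
    if (Process32First(snap, &entry))
    {
        do
        {
            // szExeFile is a TCHAR array; this comparison assumes an ANSI build
            if (_stricmp(entry.szExeFile, procName) == 0)
            {
                pid = entry.th32ProcessID;
                break;
            }
        } while (Process32Next(snap, &entry));
    }
    CloseHandle(snap);
    return pid;
}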
I'm writing a driver that needs to allocate a non-paged pool of memory, and this memory, for performance's sake, must be directly accessible from a user-mode program.
In the driver entry I've allocated some memory in these two ways:
pMdl = IoAllocateMdl(NULL,
4096,
FALSE,
FALSE,
NULL);
if(!pMdl) {
DbgPrintEx(DPFLTR_IHVVIDEO_ID, DPFLTR_INFO_LEVEL, "Error on IoAllocateMdl. Returning from driver early.\n");
return STATUS_INSUFFICIENT_RESOURCES;
}
MmBuildMdlForNonPagedPool(pMdl);
userMemory = (void *)MmMapLockedPagesSpecifyCache(pMdl, UserMode, MmWriteCombined, NULL, FALSE, LowPagePriority);
and
userMemory = ExAllocatePoolWithTag(
NonPagedPool,
4096,
POOL_TAG);
Now I don't want to issue a DeviceIoControl every time I need to read from or write to this memory; instead I want to do something like this:
char* sharedMem;
.....
transactionResult = DeviceIoControl ( hDevice,
(DWORD) IOCTL_MMAP,
NULL,
0,
sharedMem,
sizeof(int),
&bRetur,
NULL
);
.....
sharedMem[0]='c';
That is, use a single DeviceIoControl to get the address of the kernel memory and then use it directly, like an mmap under Linux.
Is there some way to do this in Windows?
I've done this:
hMapFile = OpenFileMapping(
FILE_MAP_ALL_ACCESS, // Read/write access
TRUE,
"Global\\SharedMemory"); // Name of mapping object
lastError = GetLastError();
if (hMapFile == NULL)
{
printf("Could not create file mapping object (%d).\n" ,GetLastError());
return 1;
}
pBuf = (char*)MapViewOfFile(hMapFile, // Handle to map object
FILE_MAP_ALL_ACCESS, // Read/write permission
0,
0,
4096);
if (pBuf == NULL)
{
printf("Could not map view of file (%d).\n", GetLastError());
CloseHandle(hMapFile);
return 1;
}
pBuf[0] = 'c';
pBuf[1] = '\n';
CloseHandle(hMapFile);
And I've created the view in the kernel like this:
RtlInitUnicodeString(&name, L"\\BaseNamedObjects\\SharedMemory");
InitializeObjectAttributes(&oa, &name, 0, 0, NULL);
ZwCreateSection(&hsection, SECTION_ALL_ACCESS, &oa, &Li, PAGE_READWRITE, SEC_COMMIT, NULL);
ZwMapViewOfSection(hsection, NtCurrentProcess(),
&userMem, 0, MEM_WIDTH, NULL,
&j, ViewShare, 0, PAGE_READWRITE);
But when I read the memory in the kernel it's empty: how can that be?
I finally understood how this needs to work.
First I've created a structure like the following.
typedef struct _MEMORY_ENTRY
{
PVOID pBuffer;
} MEMORY_ENTRY, *PMEMORY_ENTRY;
This will be used to return the virtual address from the kernel space to the user space.
In the DriverEntry I used
userMem = ExAllocatePoolWithTag(NonPagedPool,
MEM_WIDTH,
POOL_TAG );
to set up the NonPaged memory.
Then I've created an IOCTL working in DIRECT_OUT mode that does the following snippet:
...
PMDL mdl = NULL;
PVOID buffer = NULL;
MEMORY_ENTRY returnedValue;
void* UserVirtualAddress = NULL;
...
buffer = MmGetSystemAddressForMdlSafe(Irp->MdlAddress, NormalPagePriority); // safely get the system-space pointer to the IRP's output buffer
mdl = IoAllocateMdl(userMem, MEM_WIDTH, FALSE, FALSE, NULL); // allocate the memory descriptor list for the non-paged buffer
MmBuildMdlForNonPagedPool(mdl); // needed because we're describing non-paged pool memory
UserVirtualAddress = MmMapLockedPagesSpecifyCache(
mdl,
UserMode,
MmNonCached,
NULL,
FALSE,
NormalPagePriority); // Return the virtual address in the context of
// the user space program who called the IOCTL
returnedValue.pBuffer = UserVirtualAddress;
RtlCopyMemory(buffer,
&returnedValue,
sizeof(PVOID)); // I copy the virtual address in the structure that will
// be returned to the user mode program by the IRP
In the user-mode program I just needed to do this:
transactionResult = DeviceIoControl(
hDevice,
(DWORD) IOCTL_MMAP,
NULL,
0,
sharedMem,
sizeof(void*),
&bRetur,
NULL
);
In ((MEMORY_ENTRY*)sharedMem)->pBuffer we find the memory area created by the kernel, directly accessible both from kernel space and from the user-mode program.
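For example, on the user-mode side (sharedMem being the output buffer handed to DeviceIoControl above):
MEMORY_ENTRY* entry = (MEMORY_ENTRY*)sharedMem;
char* shared = (char*)entry->pBuffer;
shared[0] = 'c'; // goes straight into the driver's non-paged allocation, no further IOCTL needed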
I haven't written it above, but remember to wrap the entire MmGetSystemAddressForMdlSafe(...) -> RtlCopyMemory(...) sequence in a try/except block, because various failures there can end in a bug check, so better safe than sorry.
Anyway, if you compile this kind of code in a checked environment, the Microsoft AutocodeReview will point this out.
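A minimal sketch of that wrapper around the mapping call, using the names from the snippet above (status being the NTSTATUS that eventually goes to WdfRequestComplete):
__try
{
    UserVirtualAddress = MmMapLockedPagesSpecifyCache(
        mdl, UserMode, MmNonCached, NULL, FALSE, NormalPagePriority);
}
__except (EXCEPTION_EXECUTE_HANDLER)
{
    // mapping into user mode raises an exception on failure instead of returning NULL,
    // so catch it here and fail the request instead of risking a bug check
    status = GetExceptionCode();
    UserVirtualAddress = NULL;
}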
If someone needs more clarifications, or if I wrote something wrong just let me know and I will be happy to modify this post.
I would like to crash a running program of my choice (e.g., notepad++, becrypt, Word) for software-testing purposes.
I know how to cause a BSOD, I know how to make a program I write crash, and I know how to end a process, but I don't know how to crash an existing process!
Any help?
Well, use CreateRemoteThread on the remote process and invoke something [1] that crashes the process reliably. I'm not sure whether CreateRemoteThread guards against null pointers, but you could pass an address inside the null page to it and have the remote process execute that.
[1] a null pointer or null-page access, division by zero, invoking a privileged instruction, int3, ...
Example:
#include <stdio.h>
#include <tchar.h>
#include <Windows.h>
BOOL setCurrentPrivilege(BOOL bEnable, LPCTSTR lpszPrivilege)
{
HANDLE hToken = 0;
if(::OpenThreadToken(::GetCurrentThread(), TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, FALSE, &hToken)
|| ::OpenProcessToken(::GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &hToken))
{
TOKEN_PRIVILEGES tp;
LUID luid;
if(!::LookupPrivilegeValue(
NULL, // lookup privilege on local system
lpszPrivilege, // privilege to lookup
&luid ) ) // receives LUID of privilege
{
::CloseHandle(hToken);
return FALSE;
}
tp.PrivilegeCount = 1;
tp.Privileges[0].Luid = luid;
tp.Privileges[0].Attributes = (bEnable) ? SE_PRIVILEGE_ENABLED : 0;
// Enable the privilege or disable all privileges.
if(!::AdjustTokenPrivileges(
hToken,
FALSE,
&tp,
sizeof(TOKEN_PRIVILEGES),
(PTOKEN_PRIVILEGES) NULL,
(PDWORD) NULL)
)
{
CloseHandle(hToken);
return FALSE;
}
::CloseHandle(hToken);
}
else
{
// could not open a thread token or the process token, so nothing was adjusted
return FALSE;
}
return TRUE;
}
int killProcess(DWORD processID)
{
// enable SeDebugPrivilege before opening the target, otherwise OpenProcess
// can fail for processes we don't own
if(!setCurrentPrivilege(TRUE, SE_DEBUG_NAME))
{
_tprintf(TEXT("Could not enable debug privilege\n"));
}
HANDLE hProcess = ::OpenProcess(PROCESS_ALL_ACCESS, FALSE, processID);
if(hProcess)
{
HANDLE hThread = ::CreateRemoteThread(hProcess, NULL, 0, (LPTHREAD_START_ROUTINE)1, NULL, 0, NULL); // start address 0x1 lies inside the null page, so the new thread faults immediately
if(hThread)
{
::CloseHandle(hThread);
}
else
{
_tprintf(TEXT("Error: %d\n"), GetLastError());
::CloseHandle(hProcess);
return 1;
}
::CloseHandle(hProcess);
}
return 0;
}
int __cdecl _tmain(int argc, _TCHAR *argv[])
{
killProcess(3016);
}
Of course you'll want to adjust the PID in the call to killProcess. Compiled with the WNET DDK and tested on Windows Server 2003 R2.
The gist here is that we tell the remote process to execute code at address 0x1 ((LPTHREAD_START_ROUTINE)1), which is inside the null page but is not a null pointer (in case there are checks against that). The crud around the function, in particular setCurrentPrivilege, is used to gain full debug privileges so we can do our evil deed.
You can use a DLL injection technique to get your code into another process. Then, in your injected code, do something simple like abort() or a division by zero.
A two-step mechanism is needed:
inject into the process you want to crash (using an injection library, Detours, a hook installation, etc.). What you choose depends on the time and knowledge you have and on other preconditions (credentials, anti-injection protection, the size of the footprint you want to leave, ...); a sketch of this step is shown below
perform an invalid operation in the injected process (like int 2Eh, a division by zero, etc.)
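As a rough illustration of the first step, here is a classic LoadLibrary-based injection sketch; injectDll and crash.dll (whose DllMain would perform the invalid operation) are hypothetical names:
#include <windows.h>
#include <string.h>

// Inject dllPath into the process identified by pid; the injected DLL's DllMain
// then performs the crashing operation (abort(), division by zero, ...).
int injectDll(DWORD pid, const char* dllPath)
{
    HANDLE hProc = OpenProcess(PROCESS_CREATE_THREAD | PROCESS_QUERY_INFORMATION |
                               PROCESS_VM_OPERATION | PROCESS_VM_WRITE | PROCESS_VM_READ,
                               FALSE, pid);
    if (!hProc)
        return 1;

    SIZE_T len = strlen(dllPath) + 1;
    LPVOID remote = VirtualAllocEx(hProc, NULL, len, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (!remote || !WriteProcessMemory(hProc, remote, dllPath, len, NULL))
    {
        CloseHandle(hProc);
        return 1;
    }

    // kernel32.dll is mapped at the same base address in every process, so the
    // local address of LoadLibraryA is also valid inside the target
    LPTHREAD_START_ROUTINE pLoadLibraryA =
        (LPTHREAD_START_ROUTINE)GetProcAddress(GetModuleHandleA("kernel32.dll"), "LoadLibraryA");

    HANDLE hThread = CreateRemoteThread(hProc, NULL, 0, pLoadLibraryA, remote, 0, NULL);
    if (hThread)
    {
        WaitForSingleObject(hThread, INFINITE); // DllMain of the injected DLL runs (and crashes) here
        CloseHandle(hThread);
    }
    CloseHandle(hProc);
    return 0;
}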
Here's how to do that with the winapiexec tool:
winapiexec64.exe CreateRemoteThread ( OpenProcess 0x1F0FFF 0 1234 ) 0 0 0xDEAD 0 0 0
Replace 1234 with the process ID and run the command; the process will crash.