Would it be accurate to call the Heartbleed bug a stack overflow? - stack-overflow

Would it be accurate to call the Heartbleed bug a stack overflow? In my understanding, this is quite a typical example. Is this technically correct?

The Heartbleed bug is not a stack overflow error; it is a buffer over-read, a type of buffer overrun error. A stack overflow error happens when a program runs out of stack space. This usually results in a crash and is not directly exploitable.

A stack is a data structure with "last in, first out" as its primary characteristic. It allows a caller (a piece of a program) to "push" information onto the stack, and to "pop" off the last item pushed. For a strict stack, no other operations are allowed.
The stack is used by programs when they call subprograms (functions, methods, and subroutines are all subprograms; they have different names in different contexts). When a program calls a subprogram, a bunch of information needs to be saved so that it's available when the subprogram returns. So this "execution context" is pushed onto the stack, and then retrieved on return. This operation is so vital to computers that computer hardware supports it directly; in other words, there are machine instructions to do this so that it doesn't have to be done (more slowly) in software.
There is usually an amount of memory in the computer dedicated to this runtime stack - usually a stack for each program running, plus a few for the operating system, etc. If subroutine calls get so "deep" that the allocated stack space won't hold all the information needed for the next call, that is a stack overflow error.
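As a minimal illustration (a sketch, not code from any project discussed here): unbounded recursion keeps pushing frames until that dedicated stack space runs out, and the process crashes rather than failing gracefully.

/* Each call pushes a new frame (at least 1 KB here) onto the runtime stack.
 * With no base case, the stack space is eventually exhausted and the process
 * crashes (SIGSEGV on Linux, STATUS_STACK_OVERFLOW on Windows). */
static void recurse(unsigned long depth) {
    volatile char frame[1024];     /* make each frame visibly large */
    frame[0] = (char)depth;
    recurse(depth + 1);            /* no termination condition */
}

int main(void) {
    recurse(0);                    /* never returns normally */
    return 0;
}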
A stack overflow is not what the Heartbleed problem was about. Heartbleed allowed an external program to specify the amount of buffer space to be returned to it, and the server returned whatever happened to be in memory beyond the little bit of data that this external program actually sent.
So the real answer to the question is "no", and I cannot imagine who would have thought that this was a typical example.

Technically, yes. But not in the traditional overflow sense where you try to smash the stack and fiddle with return values and try to execute code. This was purely a "leak private data" problem.
The TLS heartbeat specification requires that a client send a chunk of random-ish data in its heartbeat packet. The server is required to return that data exactly as is to the client.
The bug is that the client basically sends two bits of data:
size_of_heartbeat (a 16-bit integer representing the heartbeat data size)
heartbeat_data (up to 64k of data)
A malicious client can LIE about the data it's sending, and say:
size_of_heartbeat = 64k
heartbeat_data = '' (1 byte)
OpenSSL failed to verify that size_of_heartbeat == actual_size(heartbeat_data) and would trust size_of_heartbeat, so basically you'd have:
-- allocate as much memory as the client claims they sent to you
-- copy the user's heartbeat packet into the response packet.
Since the client claims it sent 64k, OpenSSL dutifully allocated a 64k response buffer, but then did an unbounded memcpy() and would happily copy up to 64k of RAM past where the client's heartbeat data actually ended.
Given enough attempts at this, you could build up a pretty complete picture of what's in the server's memory, 64k at a time, and eventually be able to extract things like the server's private SSL keys, temporary data from previous users who'd passed through the encryption layers, etc.
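The vulnerable pattern can be sketched in C roughly as follows. This is a simplified illustration, not the actual OpenSSL code; the function and variable names (handle_heartbeat, payload_len, and so on) are made up.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Simplified sketch of the flaw: the server trusts the length field the
 * client sent instead of the number of bytes actually received.
 * (Real code would also have to check that received_len >= 2.) */
unsigned char *handle_heartbeat(const unsigned char *pkt, size_t received_len) {
    uint16_t payload_len = (uint16_t)((pkt[0] << 8) | pkt[1]);  /* client-supplied length */
    const unsigned char *payload = pkt + 2;                     /* client's heartbeat data */

    unsigned char *response = malloc(payload_len);  /* allocate what the client CLAIMS it sent */
    if (response == NULL)
        return NULL;

    /* BUG: no check that payload_len <= received_len - 2. If the client lied,
     * this reads past its data and copies up to 64 KB of whatever happens to
     * sit in the server's memory after the packet. The fix was essentially:
     * if (payload_len > received_len - 2) { discard the request }            */
    memcpy(response, payload, payload_len);
    return response;
}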

Related

Call to ExAllocatePoolWithTag never returns

I am having some issues with my virtual HBA driver on Windows Server 2016. I ran the HLK crashdump support test; it failed 3 times out of 10. In those failing runs, the crash dump hangs at 0% while taking a complete dump, kernel dump, or minidump.
By kernel debugging my code, I found that the call to ExAllocatePoolWithTag() for buffer allocation never actually returns.
Below is the statement which never returns.
pDeviceExtension->pcmdbuf = (struct mycmdrsp *)ExAllocatePoolWithTag(NonPagedPoolCacheAligned, pcmdqSignalSize, (ULONG)'TA1');
I searched the web regarding this; however, all of the pages I found focus on this function returning NULL, whereas in my case it never returns at all.
Any help on how to move forward would be highly appreciated.
Thanks in advance.
You can't allocate memory in crash dump mode. You're running at HIGH_LEVEL with interrupts disabled and so you're calling this API at the wrong IRQL.
The typical solution for a hardware adapter is to set the RequestedDumpBufferSize in the PORT_CONFIGURATION_INFORMATION structure during the normal HwFindAdapter call. Then when you're called again in crash dump mode you use the CrashDumpRegion field to get your dump buffer allocation. You then need to write your own "crash dump mode only" allocator to allocate buffers out of this memory region.
It's a huge pain, especially given that it's difficult/impossible to know how much memory you're ultimately going to need. I usually calculate some minimal configuration overhead (i.e. 1 channel, 8 I/O requests at a time, etc.) and then add in a registry configurable slush. The only benefit is that the environment is stripped down so you don't need to be in your all singing, all dancing configuration.
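A "crash dump mode only" allocator over a pre-reserved region is usually just a bump allocator with no locking and no freeing. A rough sketch (the type and function names here are made up; how you obtain the region itself depends on your miniport model):

#include <stddef.h>

/* Carves buffers out of the memory region the port driver handed us for dump
 * mode (the region sized via RequestedDumpBufferSize during HwFindAdapter).
 * No frees and no locking: in crash dump mode there is one CPU, no interrupts,
 * and the environment goes away once the dump is written. */
typedef struct _DUMP_ALLOCATOR {
    unsigned char *Base;     /* start of the pre-reserved dump region */
    size_t         Size;     /* total size of the region */
    size_t         Offset;   /* next free byte */
} DUMP_ALLOCATOR;

static void DumpAllocInit(DUMP_ALLOCATOR *a, void *base, size_t size) {
    a->Base = (unsigned char *)base;
    a->Size = size;
    a->Offset = 0;
}

static void *DumpAlloc(DUMP_ALLOCATOR *a, size_t bytes) {
    size_t aligned = (bytes + 15) & ~(size_t)15;   /* keep allocations 16-byte aligned */
    if (aligned < bytes || a->Offset + aligned > a->Size)
        return NULL;                               /* region exhausted: size it generously */
    void *p = a->Base + a->Offset;
    a->Offset += aligned;
    return p;
}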

get_user_pages -EFAULT error caused by VM_GROWSDOWN flag not set

I'm continuing my work on the FPGA driver.
Now I'm adding OpenCL support, so I have the following test.
It just issues NUM_OF_EXEC write and read requests on the same buffers and then waits for completion.
Each write/read request is serialized in the driver and executed sequentially as a DMA transaction. The DMA-related code can be viewed here.
So the driver takes a transaction, executes it (rsp_setup_dma and fpga_push_data_to_device), waits for an interrupt from the FPGA (fpga_int_handler), releases resources (fpga_finish_dma_write), and begins a new one. When NUM_OF_EXEC equals 1, everything seems to work, but if I increase it, the problem appears. At some point get_user_pages (in rsp_setup_dma) returns -EFAULT. Debugging the kernel, I found that the allocated vma doesn't have the VM_GROWSDOWN flag set (in find_extend_vma in mmap.c). But at this point I'm stuck, because I'm not sure I understand why this flag is needed, nor do I have any idea why it is not set. Why can get_user_pages fail with the above symptoms? How can I debug this?
On some architectures the stack grows up and on others the stack grows down. See hppa and hppa64 for the weirdos that created the need for such a flag.
So whenever you have to deal with setting up the stack for a kernel thread or process you'll have to provide the direction in which the stack grows as well.

How an assembler instruction could not read the memory it is placed at

Using some software on Windows XP that runs as a Windows service, and doing a restart from the logon screen, I see the infamous error message:
The instruction at "00x..." referenced memory at "00x...". The memory
could not be read.
I reported the problem to the developers, but looking at the message once again, I noticed that the addresses are the same. So
The instruction at "00xdf3251" referenced memory at "00xdf3251". The memory
could not be read.
Whether or not this is a bug in the program, what state of the memory, access rights, or something else prevents an instruction from reading the memory it is placed in? Is it something specific to services?
I would guess there was an attempt to execute an instruction at the address 0xdf3251 and that location wasn't backed by a readable and executable page of memory (perhaps it was completely unmapped).
If that's the case, the exception (page fault, in fact) originates from that instruction and the exception handler has its address on the stack (the location to return to, in case the exception can be somehow resolved and the faulting instruction restarted when the handler returns). And that's the first address you're seeing.
The CR2 register that the page fault handler reads, which is the second address you're seeing, also has the same address because it has to contain the address of an inaccessible memory location irrespective of whether the page fault has been caused by:
complete absence of mapping (there's no page mapped at all)
lack of write permission (the page is read-only)
lack of execute permission (the page has the no-execute bit set) OR
lack of kernel privilege (the page is marked as accessible only in the kernel)
and irrespective of whether it was during a data access or while fetching an instruction (the latter being our case).
That's how you can get the instruction and memory access addresses equal.
Most likely the code had a bug resulting in memory corruption, and some pointer (or a return address on the stack) was overwritten with a bogus value pointing to an inaccessible memory location. Then, one way or another, the CPU was directed to continue execution there (most likely via one of these instructions: jmp, call, ret). There's also a chance of a race condition somewhere.
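For example, calling through a corrupted function pointer reproduces exactly this symptom: the CPU faults while fetching an instruction, so the "instruction at" and "referenced memory at" addresses coincide. A tiny sketch (the address is the one from the error message):

int main(void) {
    void (*fn)(void) = (void (*)(void))0x00df3251;  /* bogus address */
    fn();   /* access violation on the instruction fetch at 0x00df3251 */
    return 0;
}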
This kind of crash is most typically caused by stack corruption. A very common kind is a stack buffer overflow. Write too much data in an array stored on the stack and it overwrites a function's return address with the data. When the function then returns, it jumps to the bogus return address and the program falls over because there's no code at the address. They'll have a hard time fixing the bug since there's no easy way to find out where the corruption occurred.
This is a rather infamous kind of bug; it is a major attack vector for malware, since it can commandeer a program to jump to arbitrary code using nothing but data. You ought to have a sit-down with these devs and point this out: it is a major security risk. The cure is easy enough; they should update their tools. Countermeasures against buffer overflows are built into compilers these days.
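For reference, the kind of code that causes the stack corruption described above looks roughly like this (a deliberately broken sketch with made-up names):

#include <string.h>

/* The local array is 16 bytes, but the caller passes 63 'A's, so strcpy()
 * overwrites greet()'s saved return address on the stack. When greet()
 * returns, execution jumps to whatever bytes landed there - the
 * "instruction at X referenced memory at X" crash, or, for an attacker,
 * a way to redirect the program with nothing but data. */
static void greet(const char *name) {
    char buf[16];
    strcpy(buf, name);           /* BUG: no bounds check on the copy */
}

int main(void) {
    char attacker_input[64];
    memset(attacker_input, 'A', sizeof(attacker_input) - 1);
    attacker_input[sizeof(attacker_input) - 1] = '\0';
    greet(attacker_input);       /* corrupts greet()'s stack frame */
    return 0;                    /* likely never reached */
}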

Allocating a buffer of more a page size on stack will corrupt memory?

In Windows, the stack is implemented as follows: the committed stack pages are followed by a guard page, whose protection flag is set to PAGE_GUARD. When a thread references an address in the guard page, a memory fault is raised, which makes the memory manager commit the guard page to the stack, clear the page's guard flag, and then reserve a new page as the guard page.
However, when I allocate a buffer on the stack whose size is more than one page (4 KB), the error I expected doesn't happen. Why?
Excellent question (+1).
There's a trick, and few people know about it (besides driver writers).
When you allocate a large buffer on the stack, the compiler automatically adds so-called stack probes. This is extra code (usually implemented in the CRT) that probes the allocated region page by page, in the needed order.
EDIT:
The function is _chkstk.
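For example, a function like the following sketch allocates more than one page of locals; compiling it with MSVC and looking at the generated assembly (e.g. with /FA) shows a call to the stack-probe helper in the prologue, which touches the new stack area one page at a time so each guard page is hit in order:

#include <string.h>

/* 16 KB of locals spans several 4 KB pages, so the compiler emits a stack
 * probe (_chkstk / __chkstk) before the buffer is used. The probe walks the
 * region page by page, letting the guard-page mechanism extend the stack. */
void use_big_buffer(void) {
    char big[16 * 1024];
    memset(big, 0, sizeof(big));   /* use the buffer so it is not optimized away */
}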
The fault doesn't reach your program; it is handled by the operating system. A similar thing happens when your program tries to read memory that happens to have been written out to the swap file: a trap occurs, the operating system swaps the page back in, and your program continues.

Can address space be recycled for multiple calls to MapViewOfFileEx without chance of failure?

Consider a complex, memory-hungry, multi-threaded application running within a 32-bit address space on Windows XP.
Certain operations require n large buffers of fixed size, where only one buffer needs to be accessed at a time.
The application uses a pattern where some address space the size of one buffer is reserved early and is used to contain the currently needed buffer.
This follows the sequence:
(initial run) VirtualAlloc -> VirtualFree -> MapViewOfFileEx
(buffer changes) UnmapViewOfFile -> MapViewOfFileEx
Here the pointer to the buffer location is provided by the call to VirtualAlloc and then that same location is used on each call to MapViewOfFileEx.
The problem is that Windows does not (as far as I know) provide any handshake-type operation for passing the address space between the different users.
Therefore there is a small opportunity (at each -> in my above sequence) where the memory is not locked and another thread can jump in and perform an allocation within the buffer.
The next call to MapViewOfFileEx is broken and the system can no longer guarantee that there will be a big enough space in the address space for a buffer.
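In code, the sequence described above looks roughly like the following sketch (error handling omitted); the race window sits between VirtualFree and MapViewOfFileEx:

#include <windows.h>

/* Reserve address space once to pick a stable base address, release it,
 * then map the currently needed buffer at that address. Between the
 * VirtualFree and the MapViewOfFileEx (and between UnmapViewOfFile and the
 * next MapViewOfFileEx) nothing protects the region, so another thread's
 * allocation can land inside it and break the next mapping. */
void *ReserveBufferAddress(SIZE_T size) {
    void *base = VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS);
    VirtualFree(base, 0, MEM_RELEASE);   /* address chosen, but no longer held */
    return base;
}

void *MapBufferAt(HANDLE hMapping, void *base, SIZE_T size) {
    /* Fails if anything else has been allocated inside [base, base + size). */
    return MapViewOfFileEx(hMapping, FILE_MAP_ALL_ACCESS, 0, 0, size, base);
}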
Obviously refactoring to use smaller buffers reduces the rate of failures to reallocate space.
Some use of HeapLock has had some success but this still has issues - something still manages to steal some memory from within the address space.
(We tried calling GetProcessHeaps and then using HeapLock to lock all of the heaps.)
What I'd like to know: is there any way to lock a specific block of address space that is compatible with MapViewOfFileEx?
Edit: I should add that ultimately this code lives in a library that gets called by an application outside of my control
You could brute force it; suspend every thread in the process that isn't the one performing the mapping, Unmap/Remap, unsuspend the suspended threads. It ain't elegant, but it's the only way I can think of off-hand to provide the kind of mutual exclusion you need.
Have you looked at creating your own private heap via HeapCreate? You could set the heap to your desired buffer size. The only remaining problem is then how to get MapViewOfFile to use your private heap instead of the default heap.
I'd assume that MapViewOfFile internally calls GetProcessHeap to get the default heap and then it requests a contiguous block of memory. You can surround the call to MapViewOfFile with a detour, i.e., you rewire the GetProcessHeap call by overwriting the method in memory effectively inserting a jump to your own code which can return your private heap.
Microsoft has published the Detours library, though I'm not directly familiar with it. I know that detouring is surprisingly common; security software, virus scanners, etc. all use such frameworks. It's not pretty, but it may work:
HANDLE g_hndPrivateHeap;

HANDLE WINAPI GetProcessHeapImpl() {
    return g_hndPrivateHeap;
}

struct SDetourGetProcessHeap { // object for exception safety
    SDetourGetProcessHeap() {
        // put detour in place
    }
    ~SDetourGetProcessHeap() {
        // remove detour again
    }
};

void MapFile() {
    g_hndPrivateHeap = HeapCreate( ... );
    {
        SDetourGetProcessHeap d;
        MapViewOfFile(...);
    }
}
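For what it's worth, the two placeholder comments above could be filled in with the Detours API along these lines (a sketch only; InstallDetour and RemoveDetour are made-up helpers, and this assumes the detours.h header and detours.lib from the Detours package):

#include <windows.h>
#include <detours.h>

extern HANDLE g_hndPrivateHeap;             /* from the sketch above */
HANDLE WINAPI GetProcessHeapImpl(void);     /* from the sketch above */

/* Holds a pointer to the real GetProcessHeap; DetourAttach redirects it to a
 * trampoline so the original remains callable. */
static HANDLE (WINAPI *TrueGetProcessHeap)(VOID) = GetProcessHeap;

static void InstallDetour(void) {           /* body of the constructor */
    DetourTransactionBegin();
    DetourUpdateThread(GetCurrentThread());
    DetourAttach((PVOID *)&TrueGetProcessHeap, (PVOID)GetProcessHeapImpl);
    DetourTransactionCommit();
}

static void RemoveDetour(void) {            /* body of the destructor */
    DetourTransactionBegin();
    DetourUpdateThread(GetCurrentThread());
    DetourDetach((PVOID *)&TrueGetProcessHeap, (PVOID)GetProcessHeapImpl);
    DetourTransactionCommit();
}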
These may also help:
How to replace WinAPI functions calls in the MS VC++ project with my own implementation (name and parameters set are the same)?
How can I hook Windows functions in C/C++?
http://research.microsoft.com/pubs/68568/huntusenixnt99.pdf
Imagine if I came to you with a piece of code like this:
void *foo;
foo = malloc(n);
if (foo)
    free(foo);
foo = malloc(n);
Then I came to you and said, help! foo does not have the same address on the second allocation!
I'd be crazy, right?
It seems to me like you've already demonstrated clear knowledge of why this doesn't work. There's a reason that the documentation for any API that takes an explicit address to map into lets you know that the address is just a suggestion, and it can't be guaranteed. This also goes for mmap() on POSIX.
I would suggest you write the program in such a way that a change in address doesn't matter. That is, don't store too many pointers to quantities inside the buffer, or if you do, patch them up after reallocation. Similar to the way you'd treat a buffer that you were going to pass into realloc().
Even the documentation for MapViewOfFileEx() explicitly suggests this:
While it is possible to specify an address that is safe now (not used by the operating system), there is no guarantee that the address will remain safe over time. Therefore, it is better to let the operating system choose the address. In this case, you would not store pointers in the memory mapped file, you would store offsets from the base of the file mapping so that the mapping can be used at any address.
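A small sketch of that offsets-instead-of-pointers convention (illustrative names only):

#include <stddef.h>

/* Store offsets from the view's base address inside the mapped file instead
 * of raw pointers, so the data stays valid no matter where MapViewOfFile(Ex)
 * happens to place the view this time around. */
typedef struct {
    size_t next_offset;   /* 0 means "no next node" */
    int    value;
} NodeOnDisk;

static NodeOnDisk *node_at(void *view_base, size_t offset) {
    return (NodeOnDisk *)((char *)view_base + offset);
}

static NodeOnDisk *next_node(void *view_base, const NodeOnDisk *n) {
    return n->next_offset ? node_at(view_base, n->next_offset) : NULL;
}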
Update from your comments
In that case, I suppose you could:
Not map into contiguous blocks. Perhaps you could map in chunks and write some intermediate function to decide which to read from/write to?
Try porting to 64 bit.
As the earlier post suggests, you can suspend every thread in the process while you change the memory mappings. You can use SuspendThread()/ResumeThread() for that. This has the disadvantage that your code has to know about all the other threads and hold thread handles for them.
An alternative is to use the Windows debug API to suspend all threads. If a process has a debugger attached, then every time the process faults, Windows will suspend all of the process's threads until the debugger handles the fault and resumes the process.
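A sketch of the suspend-everyone approach using the Toolhelp snapshot API, so the code doesn't need to track its own thread handles (ForEachOtherThread is a made-up helper; a robust version would re-snapshot in a loop in case threads are created meanwhile):

#include <windows.h>
#include <tlhelp32.h>

/* Suspend (or resume) every thread in this process except the caller. Wrap
 * the UnmapViewOfFile / MapViewOfFileEx pair between a suspend and a resume
 * to keep other threads from allocating into the freed address range. */
static void ForEachOtherThread(BOOL suspend) {
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
    if (snap == INVALID_HANDLE_VALUE)
        return;

    THREADENTRY32 te = { sizeof(te) };
    DWORD pid = GetCurrentProcessId();
    DWORD self = GetCurrentThreadId();

    if (Thread32First(snap, &te)) {
        do {
            if (te.th32OwnerProcessID == pid && te.th32ThreadID != self) {
                HANDLE h = OpenThread(THREAD_SUSPEND_RESUME, FALSE, te.th32ThreadID);
                if (h != NULL) {
                    if (suspend) SuspendThread(h); else ResumeThread(h);
                    CloseHandle(h);
                }
            }
        } while (Thread32Next(snap, &te));
    }
    CloseHandle(snap);
}

Calling ForEachOtherThread(TRUE) before the unmap/remap and ForEachOtherThread(FALSE) afterwards gives the mutual exclusion described in the earlier answer.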
Also see this question which is very similar, but phrased differently:
Replacing memory mappings atomically on Windows
