Verifying process integrity in memory - Windows

It looks like it's impossible to prevent determined attackers from modifying one's process code/data. I'm hoping that it's at least possible to detect such tampering.
Under Windows, is it possible to listen for DLL injection, and for WriteProcessMemory and CreateRemoteThread calls targeting the current process?
Under Linux, is it possible to listen for LD_PRELOAD and the DR rootkit?

With some really involved code you might be able to detect those...
It all depends on how determined the attacker is. If they are really determined, they will use a rootkit approach - in that case your app can do nothing about it (no detection and no stopping, as long as the attackers know what they are doing)...
Another approach could be to hash your segments in memory while running, but that amounts to snake oil, since the hashing code itself presents an entry point for circumventing the method.
Executing your code inside a self-built VM which in turn communicates with the rest of the system through a hypervisor could do the trick. The hypervisor has to be made the boot loader for the system, of course, so that the OS is just a "child" of your hypervisor. But you would have to write all of that yourself and make sure it has no exploitable weakness (I'm pretty sure no one can do that for such a complex piece of software)...
Not sure what you are up against, but as long as the HW+SW your code is running on is not directly under your full control, there is always a way to do the things you mention and, with a bit of planning, avoid detection too...
Or is this "only" about protection from software piracy/reversing?
If so, then there are some measures, even some very strong ones, though it all comes down to a balance of security versus usability...

Huh, how do you tell whether LD_PRELOAD is malicious or not? What about ptrace? /dev/[k]mem? What about when one process plants a malicious plugin or something similar in another process's config directory? What about shared-memory / IPC tampering?
First of all, this is a waste of time and complete nonsense to actually sell as a legitimate product. I'm not sure what you're trying to do, but if it's antivirus, anti-cheat for a game, or DRM, then it's futile. The only thing you can do is run the process as another user, preventing other processes from modifying it in the first place. If that's not good enough, too bad; Linux isn't a secure operating system and never will be.
In theory, it's impossible to detect a process's memory being tampered with. In practice, it depends on how you define detect, and what kind of false positives and false negatives you care about.
If you know the normal behavior of a program is not to modify itself, you know exactly what segments of memory are meant to be static, and you know that there aren't any legitimate 3rd party programs on your PC that tamper with the program, then you may be able to detect memory tampering pretty easily.
The most general solution is to hook the OS's interprocess memory-modification mechanisms, like you said. This works as long as the enemy process doesn't have enough privileges to remove your hook or to make OS calls that bypass it.
You can also just scan the entire process over and over, checksumming its memory with a secure hashing algorithm. Then again, if the enemy process has privileges to modify your scanner, you lose.
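To illustrate the self-checksumming idea, here is a rough sketch for Windows that hashes the executable sections of the main module and periodically compares them against a baseline taken at startup. The FNV-1a hash and the one-second interval are arbitrary choices for illustration, and, per the caveats above, an attacker with enough privilege can simply patch the check out:

#include <windows.h>
#include <cstdint>
#include <cstdio>

// FNV-1a: a simple non-cryptographic hash, fine for illustration only.
static uint64_t fnv1a(const unsigned char* p, size_t n) {
    uint64_t h = 1469598103934665603ULL;
    for (size_t i = 0; i < n; ++i) { h ^= p[i]; h *= 1099511628211ULL; }
    return h;
}

// Walk the PE headers of the main module and hash every executable section.
static uint64_t hash_code_sections() {
    const unsigned char* base =
        reinterpret_cast<const unsigned char*>(GetModuleHandle(nullptr));
    auto dos = reinterpret_cast<const IMAGE_DOS_HEADER*>(base);
    auto nt  = reinterpret_cast<const IMAGE_NT_HEADERS*>(base + dos->e_lfanew);
    auto sec = IMAGE_FIRST_SECTION(nt);

    uint64_t h = 0;
    for (unsigned i = 0; i < nt->FileHeader.NumberOfSections; ++i, ++sec) {
        if (sec->Characteristics & IMAGE_SCN_MEM_EXECUTE)
            h ^= fnv1a(base + sec->VirtualAddress, sec->Misc.VirtualSize);
    }
    return h;
}

int main() {
    const uint64_t baseline = hash_code_sections();  // taken at startup
    for (;;) {
        Sleep(1000);                                 // arbitrary interval
        if (hash_code_sections() != baseline) {
            std::puts("code section modified!");
            return 1;
        }
    }
}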
So yeah, if the process doesn't have the privileges to subvert your scanner, why would it have the privileges to modify the process you care about? Sounds like antivirus/anticheat/DRM to me.

Related

Does process termination automatically free all memory used? Any reason to do it explicitly?

In Windows NT and later I assume that when a process expires, either because it terminated itself or was forcefully terminated, the OS automatically reclaims all the memory used by the process. Are there any situations in which this is not true? Is there any reason to free all the memory used by a user-mode application explicitly?
Whenever a process ends, all memory pages mapped to it are returned to the available state. This could qualify as "reclaiming the memory," as you say. However, it does not do things such as running destructors (if you are using C++).
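To make that concrete, here is a small C++ sketch (the Logger type is just for illustration): the global's destructor runs on a normal return from main, but a hard exit such as std::_Exit, or TerminateProcess on Windows, skips it entirely, even though the OS still reclaims every page either way:

#include <cstdio>
#include <cstdlib>

struct Logger {                  // hypothetical global with cleanup work
    ~Logger() { std::puts("destructor ran: flushing buffers"); }
};

Logger g_logger;                 // destroyed on a normal exit from main

int main() {
    std::puts("doing work");
    // A normal return runs g_logger's destructor on the way out.
    // Uncomment the next line and the destructor is skipped entirely;
    // the OS still reclaims all memory pages in both cases.
    // std::_Exit(0);
    return 0;
}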
I highly recommend freeing all memory, not from a resources perspective, but from a development perspective. Trying to free up memory encourages you to think about the lifespan of memory, and helps you make sure you actually clean up properly.
This does not matter in the short run, but I have dealt with countless software programs which assumed that they owned the process, so didn't have to clean up after themselves. However, there are a lot of reasons to want to run a program in a sandbox. Many randomized testing scenarios can run much faster if they don't have to recreate the process every time. I've also had several programs which thought they would be standalone, only to find a desire to integrate into a larger software package. At those times, we found out all the shortcuts that had been taken with memory management.

Executing a third-party compiled program on a client's computer

I'd like to ask for your advice about improving the security of executing a compiled program on a client's computer. The idea is that we send a compiled program to a client, but the program has been written and compiled by a third party. How can we make sure that the program won't do any harm to the client's operating system while running? What would be the best way to achieve that goal without dramatically decreasing the program's performance?
UPDATE:
I assume that the third party doesn't want to harm the client's OS, but it can happen that they make some mistake or their program is infected by someone else.
The program could be compiled to either bytecode or native code; it depends on the third party.
There are two main options, depending on whether or not you trust the third party.
If you trust the third party, then you just care that the program actually came from them and that it hasn't changed in transit. Code signing is a good solution here: if the third party signs the code and you check the signature, then you can verify nothing changed in the middle and prove it was them who wrote it.
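On Windows, for instance, an Authenticode signature can be checked with the WinVerifyTrust API; a minimal sketch, with UI disabled, revocation checking skipped, and error handling reduced to a boolean for brevity:

#include <windows.h>
#include <wintrust.h>
#include <softpub.h>
#pragma comment(lib, "wintrust")

// Returns true if the file carries a valid, trusted Authenticode signature.
bool IsSigned(const wchar_t* path) {
    WINTRUST_FILE_INFO file = {};
    file.cbStruct = sizeof(file);
    file.pcwszFilePath = path;

    WINTRUST_DATA data = {};
    data.cbStruct = sizeof(data);
    data.dwUIChoice = WTD_UI_NONE;               // never prompt the user
    data.fdwRevocationChecks = WTD_REVOKE_NONE;  // skip revocation for brevity
    data.dwUnionChoice = WTD_CHOICE_FILE;
    data.pFile = &file;
    data.dwStateAction = WTD_STATEACTION_VERIFY;

    GUID policy = WINTRUST_ACTION_GENERIC_VERIFY_V2;
    LONG status = WinVerifyTrust((HWND)INVALID_HANDLE_VALUE, &policy, &data);

    data.dwStateAction = WTD_STATEACTION_CLOSE;  // release verification state
    WinVerifyTrust((HWND)INVALID_HANDLE_VALUE, &policy, &data);

    return status == ERROR_SUCCESS;              // 0 means signed and trusted
}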
If you don't trust the third party, then it is a difficult problem. The usual solution is to run code in a "sandbox", where it is allowed to perform a limited set of operations. This concept has been implemented for a number of languages - google "sandbox" and you'll find a lot about it. For Perl, see SafePerl, for Java see "Java Permissions". Variations exist for other languages too.
Depending on the language involved and what kind of permissions are required, you may be able to use the language's built-in sandboxing capabilities. For example, .NET has Code Access Security (CAS), which controls how much access a program has when it runs, and ASP.NET exposes this through configurable trust levels. Java has policy files that control the same thing.
Another method that may be helpful is to run the program under Sysinternals Process Monitor and watch every operation the program performs.
If it's developed by a third party, then it's very difficult to know exactly what it's going to do without reviewing the code. This may be more of a contractual solution: adding penalties into the contract with the third party and agreeing on their liability for any damages.
Sign it. Google for "digital signature" or "code signing".
If you have the resources, use a virtual machine. That is, usually, a pretty good sandbox for untrusted applications.
If this happens to be a Unix system, check out what you can do with chroot.
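A minimal sketch of a chroot jail, assuming you start as root; the jail path /var/jail, the uid/gid 65534 (conventionally nobody/nogroup), and the program name /untrusted_app are all placeholders:

#include <unistd.h>
#include <cstdio>

int main() {
    const char* jail_dir = "/var/jail";            // hypothetical jail path

    if (chroot(jail_dir) != 0) { perror("chroot"); return 1; }
    if (chdir("/") != 0)       { perror("chdir");  return 1; }

    // Drop root *after* the chroot, or the jail is trivially escapable.
    if (setgid(65534) != 0 || setuid(65534) != 0) {  // nobody/nogroup
        perror("drop privileges");
        return 1;
    }

    // Launch the untrusted program, now confined to the jail.
    execl("/untrusted_app", "untrusted_app", (char*)nullptr);
    perror("execl");                                 // only reached on failure
    return 1;
}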
Also, don't underestimate the value of thorough testing. You can run the app (in a non-production environment) and verify the following (escalating levels of paranoia!):
CPU/disk usage is acceptable
it doesn't talk to any networked hosts it shouldn't, i.e. no "phone home" capability
scan it with your AV program of choice
you could even hook up pSpy or something to find out more about what it's doing.
Additionally, if possible, run the application as a low-privileged user. This will offer some degree of "sandboxing", i.e. the app won't be able to interfere with other processes.
Also, don't overlook the value of legal contracts with the vendor, which may give you some kind of recompense if there is a problem. Of course, choosing a reputable vendor in the first place offers a level of assurance as well.

Fast restart technique instead of keeping the good state (availability and consistency)

How often do you solve your problems by restarting a computer, router, program, browser? Or even by reinstalling the operating system or software component?
This seems to be a common pattern: when there is a suspicion that a software component is not keeping its state the right way, you just restore the initial state by restarting the component.
I've heard that Amazon and Google run clusters of many, many nodes, and that one important property of each node is that it can restart in seconds. So if one of them fails, returning it to its initial state is just a matter of restarting it.
Are there any languages/frameworks/design patterns out there that treat this technique as a first-class citizen?
EDIT The link that describes some principles behind Amazon as well as overall principles of availability and consistency:
http://www.infoq.com/presentations/availability-consistency
This is actually very rare in the Unix/Linux world. Those OSes (and Windows too) were designed to protect themselves from badly behaved processes. I am sure Google is not relying on hard restarts to correct misbehaving software. I would say this technique should not be employed, and if someone says it's the fastest route to recovery for their software, you should look for something else!
This is common in the embedded systems world, and in telecommunications. It's much less common in the server based world.
There's a research group you might be interested in. They've been working on Recovery-Oriented Computing or "ROC". The key principle in ROC is that the cleanest, best, most reliable state that any program can be in is right after starting up. Therefore, on detecting a fault, they prefer to restart the software rather than attempt to recover from the fault.
Sounds simple enough, right? Well, most of the research has gone into implementing that idea. The reason is exactly what you and other commenters have pointed out: OS restarts are too slow to be a viable recovery method.
ROC relies on three major parts:
A method to detect faults as early as possible.
A means of isolating the faulty component while preserving the rest of the system.
Component-level restarts.
The real key difference between ROC and the typical "nightly restart" approach is that in ROC the restarts are a reaction to a detected fault. What I mean is that most software is written with some degree of error handling and recovery (throw-and-catch, logging, retry loops, etc.); a ROC program would instead detect the fault (exception) and immediately exit. Mixing up the two paradigms just leaves you with the worst of both worlds: low reliability and errors.
Microcontrollers typically have a watchdog timer, which must be reset (by a line of code) every so often or else the microcontroller will reset. This keeps the firmware from getting stuck in an endless loop, stuck waiting for input, etc.
Unused memory is sometimes set to an instruction which causes a reset, or to a jump to the same location that the microcontroller starts at when it is reset. This will reset the microcontroller if it somehow jumps to a location outside program memory.
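For readers without hardware at hand, here is a software analogue of the watchdog pattern as a minimal C++ sketch; on a real microcontroller the kick would write a hardware register, whereas here a monitor thread aborts the process, and both timeout values are arbitrary:

#include <atomic>
#include <chrono>
#include <cstdlib>
#include <thread>

std::atomic<std::chrono::steady_clock::time_point> g_last_kick{
    std::chrono::steady_clock::now()};

void kick_watchdog() {                 // call this from the main loop
    g_last_kick = std::chrono::steady_clock::now();
}

void watchdog_thread(std::chrono::milliseconds timeout) {
    for (;;) {
        std::this_thread::sleep_for(timeout / 4);
        if (std::chrono::steady_clock::now() - g_last_kick.load() > timeout)
            std::abort();              // stand-in for a hardware reset
    }
}

int main() {
    std::thread(watchdog_thread, std::chrono::milliseconds(1000)).detach();
    for (;;) {
        // ... do one unit of work ...
        kick_watchdog();               // forget this and the process dies
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
}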
Embedded systems may have a checkpoint feature where, every n ms, the current stack is saved.
The memory is non-volatile across power loss (i.e. battery-backed), so on power-up a test is made to see whether the code needs to jump back to an old checkpoint or whether it's a fresh start.
I'm going to guess that a similar technique (but more sophisticated) is used for Amazon/Google.
Though I can't think of a design pattern per se, in my experience it's a result of the "select is broken" mindset among developers.
I've seen a 50-user site cripple both SQL Server Enterprise Edition (with a 750 MB database) and a Novell server because of poor connection management coupled with excessive calls and no caching. Novell was always the culprit according to developers until we found a missing "CloseConnection" call in a core library. By then, thousands were spent, unsuccessfully, on upgrades to address that one missing line of code.
(Why they had Enterprise Edition was beyond me so don't ask!!)
If you look at scripting languages like PHP running on Apache, each invocation starts a new process. In the basic case there is no shared state between processes, and once the invocation has finished, the process is terminated.
The advantages are less onus on resource management, since resources are released when the process finishes, and less need for error handling, as the process is designed to fail fast and cannot be left in an inconsistent state.
I've seen it a few places at the application level (an app restarting itself if it bombs).
I've implemented the pattern at an application level, where a service reading from dBase files starts getting errors after a certain number of reads. It looks for a particular error that gets thrown, and if it sees that error, the service calls a console app that kills the process and restarts the service. It's kludgey, and I hate it, but for this particular situation I could find no better answer.
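Generalizing a bit, that restart-on-failure pattern can be written as a tiny supervisor process. A sketch for Windows, where worker.exe and the one-second back-off are placeholders:

#include <windows.h>
#include <cstdio>

int main() {
    for (;;) {
        STARTUPINFOA si = { sizeof(si) };
        PROCESS_INFORMATION pi = {};
        char cmd[] = "worker.exe";            // hypothetical child program

        if (!CreateProcessA(nullptr, cmd, nullptr, nullptr, FALSE,
                            0, nullptr, nullptr, &si, &pi)) {
            std::printf("failed to start child (%lu)\n", GetLastError());
            return 1;
        }
        WaitForSingleObject(pi.hProcess, INFINITE);

        DWORD code = 0;
        GetExitCodeProcess(pi.hProcess, &code);
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);

        if (code == 0) break;                 // clean exit: stop supervising
        std::printf("child failed (code %lu), restarting\n", code);
        Sleep(1000);                          // back off before the restart
    }
    return 0;
}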
And bear in mind that IIS has a built-in feature that restarts the application pool under certain conditions.
For that matter, restarting a service is an option for any service on Windows as one of the actions to take when the service fails.
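Those service recovery actions can be set in the Services snap-in or programmatically via ChangeServiceConfig2; a rough sketch where the service name MyService, the 60-second restart delay, and the one-day failure-count reset are placeholder values:

#include <windows.h>
#include <cstdio>

int main() {
    SC_HANDLE scm = OpenSCManagerW(nullptr, nullptr, SC_MANAGER_CONNECT);
    if (!scm) { std::printf("OpenSCManager failed (%lu)\n", GetLastError()); return 1; }
    SC_HANDLE svc = OpenServiceW(scm, L"MyService", SERVICE_CHANGE_CONFIG);
    if (!svc) { std::printf("OpenService failed (%lu)\n", GetLastError());
                CloseServiceHandle(scm); return 1; }

    SC_ACTION actions[1];
    actions[0].Type = SC_ACTION_RESTART;   // restart the service...
    actions[0].Delay = 60000;              // ...60 seconds after a failure

    SERVICE_FAILURE_ACTIONSW fa = {};
    fa.dwResetPeriod = 86400;              // reset the failure count daily
    fa.cActions = 1;
    fa.lpsaActions = actions;

    if (!ChangeServiceConfig2W(svc, SERVICE_CONFIG_FAILURE_ACTIONS, &fa))
        std::printf("ChangeServiceConfig2 failed (%lu)\n", GetLastError());

    CloseServiceHandle(svc);
    CloseServiceHandle(scm);
    return 0;
}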

Why does my antivirus program not detect this malicious behavior?

I wrote this C program and ran it on my Windows system. My system hung, and not even Task Manager would open. Finally, I had to reboot. This is clearly a malicious program, but my antivirus does not detect it. Why?
#include <unistd.h>

int main(void) {
    while (1)
        fork();   /* each process keeps forking: a classic fork bomb */
}
Antivirus programs don't recognize malicious behavior - they recognize patterns of known viruses that are already in the wild (file names, process names, binary signatures, etc.).
This is why they can often be subverted since they are a reactive solution to an evolving problem.
Developers don't typically use AV software due to the huge speed penalty, or at least they disable it on the filesystem subtree they work in.
But even so, that isn't the sort of pattern AV software tries to detect. AV software looks at the files you are reading and writing, at changes to system state, and for specific identified viruses or their previously identified signatures.
And how would it decide, anyway? From the point of view of a program, there is a fine line between an overloaded web server and a fork bomb.
Finally, this sort of behavior is kind of self-correcting. If we really had viruses arriving with nothing more damaging than a fork bomb we might just declare victory and say "don't run that".
BTW, did you run the fork bomb as administrator?
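For what it's worth, on Unix-like systems the classic containment for a fork bomb is a per-user process limit (what `ulimit -u` sets from the shell); a minimal sketch using setrlimit, with the limit of 100 chosen arbitrarily:

#include <sys/resource.h>
#include <cstdio>

int main() {
    rlimit lim = {};
    lim.rlim_cur = 100;          // soft limit: ~100 processes for this user
    lim.rlim_max = 100;          // hard limit
    if (setrlimit(RLIMIT_NPROC, &lim) != 0) {
        perror("setrlimit");
        return 1;
    }
    // From here on, fork() in this process (and its children) fails
    // with EAGAIN once the limit is hit, instead of hanging the machine.
    return 0;
}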
Your program is not a virus because it cannot spread itself; that is, it can't infect other files or computers.
Well, it is not malicious behavior; it looks more like a logic error in your code. I wish there were an antivirus one day that could detect the applications, drivers, MS updates, and MS products that cause BSODs =)

What exactly is the risk when using TerminateProcess?

My Win32 console application uses a third-party library. After WinMain exits, global object destruction begins, and an access violation happens somewhere deep inside. I'm really tempted to just write
TerminateProcess( GetCurrentProcess(), 0 );
somewhere near the end of WinMain. If I do this, the application ends gracefully.
But MSDN says that doing so "can compromise the state of global data maintained by dynamic-link libraries (DLLs)", which is not clear to me. I understand that if I have some global object, its destructor is not run, and I risk not finalizing a database connection or something similar. I don't have anything like that in my program.
What exactly is the risk when using TerminateProcess? How do I determine if I can use it for my purpose?
Based on the documentation for TerminateProcess and ExitProcess, it seems the primary concern is that DLLs are unloaded without a call to DllMain with the DLL_PROCESS_DETACH flag.
My two cents: the documentation is being paranoid that you will upset some critical operation which runs in DllMain with DLL_PROCESS_DETACH. Anyone who depends on that to maintain critical state is already at the mercy of Task Manager, so I don't see a huge risk in using this API.
Generally, the bad things happen when you interact with objects outside of your process. For example, say you have some shared memory used by multiple processes, which your process writes to and other processes read or write. Typically, a mutex is used to synchronize the reading and writing. If a thread in your process has acquired the mutex and is in the middle of making changes when TerminateProcess is called, the mutex is abandoned and the shared memory is potentially left in an inconsistent state.
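To make that failure mode concrete, here's a sketch of the other side on Windows; the mutex name Local\MyAppMutex is made up. If the owning process is terminated between acquiring and releasing the mutex, the next waiter gets WAIT_ABANDONED and must treat the shared data as suspect:

#include <windows.h>
#include <cstdio>

int main() {
    HANDLE mutex = CreateMutexA(nullptr, FALSE, "Local\\MyAppMutex");
    if (!mutex) return 1;

    DWORD wait = WaitForSingleObject(mutex, INFINITE);
    if (wait == WAIT_ABANDONED) {
        // The previous owner was terminated mid-update; we now own the
        // mutex, but the data it guarded may be half-written.
        std::puts("mutex abandoned: validating shared state");
    }

    // ... read/write the shared memory the mutex protects ...

    ReleaseMutex(mutex);   // never reached if TerminateProcess fires first
    CloseHandle(mutex);
    return 0;
}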
I suspect you are misusing one of the third-party libraries. DllMain is somewhat limited, so the library may have initialize and uninitialize functions that you are supposed to call.
AFAIK, if you're not doing anything "fancy" (which includes but is not limited to: creating threads, locks, DB connections, using COM objects), nothing terrible will happen. But as Earwicker says, you don't know what OS-wide stuff a DLL is doing, and you certainly don't know if that will change in the future, so relying on this is very fragile.
Aren't you curious to know why this access violation is occurring? It may well be the sign of something that became corrupted much earlier on. Please at least confirm that the bug is caused by the third-party library, e.g. by writing a program that links with the library but whose main() does nothing, and confirming that this produces the same crash.
It depends how you interpret "global data". If you take it to mean (as I normally would) data stored in the process's address space, then the advice makes no sense - we know that memory is going to disappear, so who cares what happens to that?
So it may be referring to OS-wide stuff that a DLL may have done, that persists outside the lifetime of any process. A simple example would be a temporary file that might need to be cleaned up; crash the process too many times and you'll run out of disk space, so probably best not to make a habit of it.
