I need to intercept execution in a very big application, in many places.
What programs can I use to do this? What techniques exist for this problem?
Manually reverse engineering the application and adding hooks is probably not the optimal solution, because the application is very big and parts of it may be updated over time. I think that with the right tools or good practices I could do this faster. Does anyone know how?
Can anybody help me?
Seeing as the tools part has been covered, here is something on the techniques.
Depending on what it is you need to hook and whether or not there is protection involved, there are a few methods:
Relative call/jmp patching in the virtualized binary: this is the simplest method, but also a lot of work if you can't automatically find all references to a function; it probably won't work in this case due to your criteria.
IAT/EAT hooking: this is used for imports (IAT) and exports (EAT), great if you're targeting a known imported/exported set of API functions. A good example of this can be found here or here.
Hot-patching: Windows XP SP2 introduced something called "hot-patching" (used for real-time system function updates), where all the WinAPI functions start with a 'mov edi,edi', allowing a relative jump to be patched into the free space created above every hot-patchable function (you can do it too). This is generally used against programs that checksum their IATs or have other funny forms of protection. More info can be found here and here.
Code-Caving: capturing execution flow by placing redirections in arbitrary code space. See here, here or here.
VFT/COM Redirection: basically overwriting entries in an object's virtual function table, useful for OOP/COM-based applications. See this.
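To make the last item concrete, here is a minimal sketch of VFT redirection (x64 assumed, so 'this' is passed as an ordinary first argument; the slot index and the function signature are assumptions you would first recover with a debugger):

    #include <windows.h>
    #include <cstddef>

    // Hypothetical replacement for one virtual function of a target object.
    typedef int (*TargetFn)(void* thisPtr, int flags);
    static TargetFn g_original = nullptr;

    static int HookedFn(void* thisPtr, int flags)
    {
        // inspect or alter the arguments here, then delegate to the original
        return g_original(thisPtr, flags);
    }

    // Overwrite one slot in the object's virtual function table.
    void HookVtableEntry(void* object, size_t index)
    {
        void** vtable = *reinterpret_cast<void***>(object); // first field is the vtable pointer
        DWORD oldProtect;
        VirtualProtect(&vtable[index], sizeof(void*), PAGE_READWRITE, &oldProtect);
        g_original = reinterpret_cast<TargetFn>(vtable[index]);
        vtable[index] = reinterpret_cast<void*>(&HookedFn);
        VirtualProtect(&vtable[index], sizeof(void*), oldProtect, &oldProtect);
    }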
There are a lot of 3rd-party libraries; the most famous would probably be MS Detours (a minimal example is sketched at the end of this answer). One can also look at APIHijack or a mini-hook engine.
Of course, nothing can substitute for the initial poking you'll need to do with a debugger like OllyDbg, but knowing the method you're going to use can drastically shorten the amount of time spent poking around.
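As promised, here is roughly what the library route looks like, using a Detours-style hook on MessageBoxW as a stand-in target (a sketch only, error handling omitted):

    #include <windows.h>
    #include <detours.h>   // from the Microsoft Detours package

    // Pointer to the real function; Detours redirects it through a trampoline.
    static int (WINAPI *TrueMessageBoxW)(HWND, LPCWSTR, LPCWSTR, UINT) = MessageBoxW;

    static int WINAPI HookedMessageBoxW(HWND hWnd, LPCWSTR text, LPCWSTR caption, UINT type)
    {
        // interception work goes here, then fall through to the original
        return TrueMessageBoxW(hWnd, L"Intercepted!", caption, type);
    }

    BOOL APIENTRY DllMain(HMODULE, DWORD reason, LPVOID)
    {
        if (reason == DLL_PROCESS_ATTACH) {
            DetourTransactionBegin();
            DetourUpdateThread(GetCurrentThread());
            DetourAttach(&(PVOID&)TrueMessageBoxW, HookedMessageBoxW);
            DetourTransactionCommit();
        } else if (reason == DLL_PROCESS_DETACH) {
            DetourTransactionBegin();
            DetourUpdateThread(GetCurrentThread());
            DetourDetach(&(PVOID&)TrueMessageBoxW, HookedMessageBoxW);
            DetourTransactionCommit();
        }
        return TRUE;
    }

The same pattern applies to whatever functions you end up targeting; the time-consuming part is identifying them, which is where the debugger work comes in.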
Some details on what exactly you need to do (e.g. how do you determine where to break) would be nice. Depending on your situation, something like Pin might work.
I suggest using Deviare API Hook. It's the easiest way you can do what you need. It has some COM objects that you can use to hook an application from a different process. In your process you get full parameter information and you can use it in any programming language (I'm using C# and it works like a charm).
If you need to intercept the registry API, I suggest using Deviare to debug what you need to intercept, but then you will have to make your own hooks; otherwise you'll run into performance issues.
You can do API Hooking if you are interested in intercepting method calls.
Or use a disassembler/debugger like SoftICE, OllyDbg, or W32Dasm.
I'm looking to develop a program that detects calls to certain Windows API functions and simply records the calling process, call count, and hopefully their arguments, to later mark them as benign or malicious.
The GUI program API Monitor is a good example of the functionality I'm trying to achieve. Ideally I would like to track each desired API function individually and get the caller PID and parameters when or after it is used, without user input. The program should be able to run on any Windows 7 machine, but can be limited to 32-bit applications.
I understand there are several methods of hooking a function, and from my understanding Microsoft Detours implements one of these, but I don't know if it's the one best suited to what I want to do. I've seen Detours, EasyHook, Deviare API Hook, and others mentioned in very old posts, but I have a hard time getting my head around the differences and features of each.
So my question is, given what I'm trying to do, what do you recommend and why?
For reference I'm an intermediate level programmer, but a beginner at Windows programming.
Thanks for your help
I'm part of the Nektra Deviare API Hook team. Our hooking engine is used by a large number of companies all over the world, in different types of end-user products (e.g. antivirus, data loss prevention, AI, software for handicapped users, data classification, app virtualization).
Deviare-InProc is the MS Detours replacement, and Deviare2 has all the RPC you need built in to hook another process and receive the calls in your own process.
We continuously fix all reported issues. You can verify it in our GitHub:
https://github.com/nektra/Deviare2
https://github.com/nektra/Deviare-InProc
You can see Deviare2 running in Nektra's SpyStudio API Monitor.
Detours is excellent software but very expensive (USD 10k). In addition to this, it completely lacks support. It can be compared to Deviare-InProc.
EasyHook used to be a good starting point because it was the only free option. But now the Deviare2 family is open source, and EasyHook has a lot of stability issues in real-world use.
I am interested in hooking the function which returns the contents of a directory in Windows.
I came across a tool called EasyHook; however, I saw this on their page:
Unlike what some (commercial) hooking libraries out there are advertising to boost sales, user-mode hooking can never be an option to apply additional security checks in any safe manner. If you only want to “sandbox” a dedicated process you know well about, and the process in fact doesn’t know about EasyHook, this might succeed! But, don’t ever attempt to write any security software based on user mode hooking. It won’t work, I promise you… This is also why EasyHook does not support a so called “System wide” injection, which in fact is just an illusion, because as I said, with user-mode hooks, this will always be impossible.
http://www.codeproject.com/Articles/27637/EasyHook-The-reinvention-of-Windows-API-hooking
I asked in the forum there, but it seems that no one there knows.
Why is this kind of hooking not suitable for security analysis?
Basically, I would like to change the output of the function so it returns extra, non-existing files, such that every calling process will see these changes.
(This is done for security analysis).
Thanks,
Or.
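Edit: to make it concrete, the kind of hook I have in mind is roughly the following (just a sketch; I'm assuming a user-mode detour on FindNextFileW, and the fake file name is made up):

    #include <windows.h>

    // Pointer to the real FindNextFileW, filled in by whatever hooking engine is used.
    static BOOL (WINAPI *RealFindNextFileW)(HANDLE, LPWIN32_FIND_DATAW) = FindNextFileW;
    static bool g_fakeReturned = false;   // simplification: in reality this would be tracked per search handle

    static BOOL WINAPI HookedFindNextFileW(HANDLE hFind, LPWIN32_FIND_DATAW data)
    {
        BOOL ok = RealFindNextFileW(hFind, data);
        if (!ok && GetLastError() == ERROR_NO_MORE_FILES && !g_fakeReturned) {
            // Enumeration just ended: report one additional, non-existing file.
            ZeroMemory(data, sizeof(*data));
            lstrcpyW(data->cFileName, L"decoy_document.txt");
            data->dwFileAttributes = FILE_ATTRIBUTE_NORMAL;
            g_fakeReturned = true;
            return TRUE;
        }
        return ok;
    }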
On Windows, all disk I/O ultimately happens via Win32 API calls like CreateFile, SetFilePointer, etc.
Now, is it possible to intercept these disk I/O Win32 calls and hook in your own code, at run time, for all dynamically-linked Windows applications? That is, applications that get their CreateFile functionality via a Windows DLL instead of a static C library.
Some constraints that I have are:
No source code: I won't have the source code for the processes I'd like to intercept.
Thread safety: My hook code may dynamically allocate its own memory. Further, because this memory is going to be shared with multiple intercepted processes (and their threads), I'd like to be able to serialize access to it.
Conditional delegation and overriding: In my hook code, I would like to be able to decide whether to delegate to the original Win32 API functionality, or to use my own functionality, or both. (Much like the optional invocation of the super class method in the overriding method of the subclass in C++ or Java.)
Regular user-space code: I want to be able to accomplish the above without having to write any device-driver, mainly due to the complexity involved in writing one.
If this is possible, I'd appreciate some pointers. Source code is not necessary, but is always welcome!
You may want to look into mhook if Detours isn't what you want.
Here are a couple of problems you may run into while working with hooks:
ASLR can prevent injected code from intercepting the intended calls.
If your hooks are global (using AppInit_DLLs for example), only Kernel32.dll and User32.dll are available when your DLL is loaded. If you want to target functions outside of those modules, you'll need to manually make sure they're available.
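To illustrate the second point, a global-hook DLL usually keeps its DllMain work minimal and checks that a target module is actually present before touching it (a sketch; the module and function names are examples, and the actual detour would be installed by your hooking engine of choice):

    #include <windows.h>

    static void InstallHooksIfPossible()
    {
        // Only kernel32.dll and user32.dll are guaranteed at AppInit time; check before touching others.
        HMODULE ws2 = GetModuleHandleW(L"ws2_32.dll");
        if (ws2 != nullptr) {
            FARPROC target = GetProcAddress(ws2, "send");
            (void)target;   // hand this address to the hooking engine here
        }
    }

    BOOL APIENTRY DllMain(HMODULE, DWORD reason, LPVOID)
    {
        if (reason == DLL_PROCESS_ATTACH) {
            // Heavier setup can be deferred, e.g. by also hooking LoadLibraryW
            // and re-running InstallHooksIfPossible when new modules appear.
            InstallHooksIfPossible();
        }
        return TRUE;
    }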
I suggest you start with Microsoft Detours. A free edition exists, and it's rather powerful and stable as well. For injection you will have to find which injection method will work for your target applications. I'm not sure whether you need to code those on your own or not, but a simple tool like "Extreme Injector" would serve you well for testing your approaches. And you definitely do not need any kernel-land drivers for such a simple task, in my opinion at least. In order to get full help from me and others, I'd like to see your approach first, or more constraints on the problem at hand, or where you have started so far but ran into problems. This cuts down a lot of back-and-forth and can save your time as well.
Now, if you are not familiar with Detours from Microsoft (MSFT), please go ahead and download it from the following link: http://research.microsoft.com/en-us/projects/detours/. Once you download it, you are required to compile it yourself. It's very straightforward, and it comes with a compiled HTML help file and samples. So far your profile falls under IAT (Import Address Table) and EAT (Export Address Table) hooking.
I hope this answer helps you a little bit in your approach to the solution, and if you get stuck, come back again and ask. Best of luck!
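For reference, the classic user-mode injection that tools like Extreme Injector automate boils down to the following (a sketch only, with no error handling; the DLL path is illustrative):

    #include <windows.h>

    // Load a DLL (e.g. "C:\\hooks\\hook.dll") into the process identified by pid.
    bool InjectDll(DWORD pid, const char* dllPath)
    {
        HANDLE proc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
        if (!proc) return false;

        SIZE_T len = lstrlenA(dllPath) + 1;
        LPVOID remote = VirtualAllocEx(proc, nullptr, len, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
        WriteProcessMemory(proc, remote, dllPath, len, nullptr);

        // kernel32 is mapped at the same base in every process, so this address is valid remotely.
        LPTHREAD_START_ROUTINE loadLib = reinterpret_cast<LPTHREAD_START_ROUTINE>(
            GetProcAddress(GetModuleHandleA("kernel32.dll"), "LoadLibraryA"));

        HANDLE thread = CreateRemoteThread(proc, nullptr, 0, loadLib, remote, 0, nullptr);
        WaitForSingleObject(thread, INFINITE);

        CloseHandle(thread);
        VirtualFreeEx(proc, remote, 0, MEM_RELEASE);
        CloseHandle(proc);
        return true;
    }

The injected DLL then installs its Detours hooks from DllMain (or a thread it starts), as shown in the Detours documentation samples.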
When working on a big project that requires debugging (like every project), you realize how much people love "printf" over the IDE's built-in debugger. By this I mean:
Sometimes you need to render variable values to screen (specially for interactive debugging).
Sometimes to log them in a file
Sometimes you have to change a variable's visibility (make it public) just so another class can access it (a logger or a renderer, for example).
Some other times you need to save the previous value in a member just to contrast it with the new one during debugging.
...
When a project gets huge, with a lot of people working on it, all this debugging-specific code can get messy and hard to differentiate from normal code. This can be maddening for those who have to update/change someone else's code or prepare it for a release.
How do you solve this?
It is always good to have naming standards, and I guess that debug-coding standards should be quite useful (like marking every debug variable with a _DBG suffix). But I also guess naming is just not enough. Maybe centralizing it into a friendly tracker class, or creating a robust base of macros in order to erase it all for the release. I don't know.
What design techniques, patterns and standards would you embrace if you are asked to write a debug-coding document for all others in the project to follow?
I am not talking about tools, libraries or IDE-specific commands, but for OO design decisions.
Thanks.
Don't commit debugging code, just debugging tools.
Logging, OTOH, has a natural place in exception-handling routines and such. Also, a few well-placed logging statements in a few commonly used APIs can be good for debugging.
Like one log statement to log all SQL executed from the system.
My vote would be with what you described as a friendly tracker class. This class would keep all of that centralized, and potentially even allow you to change debug/logging strategies dynamically.
I would avoid things like macros simply because that's a preprocessor trick, and not true OO. By abstracting the concept of debug/logging, you have the opportunity to do lots of things with it, including making it a no-op if needed.
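A bare-bones sketch of such a tracker (C++ here, but the idea is language-neutral; all names are illustrative):

    #include <iostream>
    #include <fstream>
    #include <string>

    // Central place for all debug output; the strategy can be swapped at run time.
    class DebugTracker {
    public:
        enum class Sink { None, Console, File };   // None = no-op, e.g. for release

        static DebugTracker& instance() { static DebugTracker t; return t; }

        void setSink(Sink s, const std::string& path = "") {
            sink_ = s;
            if (s == Sink::File) file_.open(path, std::ios::app);
        }

        void trace(const std::string& msg) {
            switch (sink_) {
                case Sink::Console: std::clog << msg << '\n'; break;
                case Sink::File:    file_ << msg << '\n';     break;
                case Sink::None:    break;   // disabled: do nothing
            }
        }

    private:
        Sink sink_ = Sink::None;
        std::ofstream file_;
    };

    // Usage:
    //   DebugTracker::instance().setSink(DebugTracker::Sink::Console);
    //   DebugTracker::instance().trace("oldValue=" + std::to_string(42));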
Logging or debugging? I believe that well-designed and properly unit-tested application should not need to be permanently instrumented for debugging. Logging on the other hand can be very useful, both in finding bugs and auditing program actions. Rather than cover a lot of information that you can get elsewhere, I would point you at logging.apache.org for either concrete implementations that you can use or a template for a reasonable design of a logging infrastructure.
I think it's particularly important to avoid using System.outs / printfs directly and instead use (even a custom) logging class. That at least gives you a centralized kill-switch for all the logging (minus the call costs in Java).
It is also useful to have that log class have info/warn/error/caveat, etc.
I would be careful about error levels, user ids, metadata, etc. since people don't always add them.
Also, one of the most common problems that I've seen is that people put temporary printfs in the code as they debug something, and then forget where they put them. I use a tool that tracks everything I do, so I can quickly identify all my recent edits since a given checkpoint and delete them. In most cases, however, you may want to impose special rules on debug code that can be checked into your source control.
In VB6 you've got
Debug.Print
which sends output to a window in the IDE. It's bearable for small projects. VB6 also has
#If <some var set in the project properties> Then
'debugging code
#End If
I have a logging class which I declare at the top with
Dim Trc as Std.Traces
and use in various places (often inside #If/#End If blocks)
Trc.Tracing = True
Trc.Tracefile = "c:\temp\app.log"
Trc.Trace 'with no argument stores date stamp
Trc.Trace "Var=" & var
Yes it does get messy, and yes I wish there was a better way.
We have begun routinely using a static class that we write trace messages to. It is very basic and still requires a call from the executing method, but it serves our purpose.
In the .NET world, there is already a fair amount of built in trace information available, so we do not need to worry about which methods are called or what the execution time is. These are more for specific events which occur in the execution of the code.
If your language does not support categorization of messages through its tracing constructs, it should be something that you add to your tracing code. Something that identifies different levels of importance and/or functional areas is a great start.
Just avoid instrumenting your code by modifying it. Learn to use a debugger. Make logging and error handling easy. Have a look at Aspect-Oriented Programming.
Debugging/logging code can indeed be intrusive. In our C++ projects, we wrap common debug/log code in macros - very much like asserts. I often find that logging is most useful in the lower-level components, so it doesn't have to go everywhere.
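Something along these lines (a sketch; the macro name and the switch are illustrative):

    #include <cstdio>

    // Debug-only logging that disappears completely from release builds.
    #ifdef ENABLE_DEBUG_LOG
        #define DBG_LOG(...) do { std::fprintf(stderr, "[DBG %s:%d] ", __FILE__, __LINE__); \
                                  std::fprintf(stderr, __VA_ARGS__);                        \
                                  std::fprintf(stderr, "\n"); } while (0)
    #else
        #define DBG_LOG(...) ((void)0)
    #endif

    // Usage: DBG_LOG("frame time = %f ms", frameMs);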
There is a lot in the other answers to both agree and disagree with :) Having debug/logging code can be a very valuable tool for troubleshooting problems. In Windows, there are many techniques - the two major ones are:
Extensive use of checked (DBG) build asserts and lots of testing on DBG builds.
The use of ETW in what we call 'fre' or 'retail' builds.
Checked builds (what most others call DEBUG builds) are very helpful for us as well. We run all our tests on both 'fre' and 'chk' builds (on x86 and AMD64 as well; all server stuff runs on Itanium too...). Some people even self-host (dogfood) on checked builds. This does two things:
Finds lots of bugs that wouldn't be found otherwise.
Quickly eliminates noisy or unnecessary asserts.
In Windows, we use Event Tracing for Windows (ETW) extensively. ETW is an efficient static logging mechanism. The NT kernel and many components are very well instrumented. ETW has a lot of advantages:
Any ETW event provider can be dynamically enabled/disabled at run time - no rebooting or process restarts required. Most ETW providers provide granular control over individual events, or groups of events.
Events from any provider (most importantly the kernel) can be merged into a single trace so all events can be correlated.
A merged trace can be copied off the box and fully processed - with symbols.
The NT kernel sample profile interrupt can generate an ETW event - this yields a very lightweight sampling profiler that can be used at any time.
On Vista and Windows Server 2008, logging an event is lock free and fully multi-core aware - a thread on each processor can independently log events with no synchronization needed between them.
This is hugely valuable for us, and can be for your Windows code as well - ETW is usable by any component, including user mode, drivers and other kernel components.
One thing we often do is write a streaming ETW consumer. Instead of putting printfs in the code, I just put ETW events at interesting places. When my component is running, I can then just run my ETW watcher at any time - the watcher receives the events and displays them, counts them, or does other interesting things with them.
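For a user-mode component, registering a provider and firing an event takes only a few lines (a sketch using the Vista+ EventRegister/EventWriteString APIs; the provider GUID below is made up and would normally come from your own instrumentation):

    #include <windows.h>
    #include <evntprov.h>
    #pragma comment(lib, "advapi32.lib")

    // Example provider GUID -- replace with your own.
    static const GUID kProviderGuid =
        { 0x12345678, 0x1234, 0x1234, { 0x12, 0x34, 0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc } };

    int main()
    {
        REGHANDLE handle = 0;
        if (EventRegister(&kProviderGuid, nullptr, nullptr, &handle) != ERROR_SUCCESS)
            return 1;

        // Cheap when no session is listening; a consumer can enable the provider at run time.
        EventWriteString(handle, /*Level*/ 4, /*Keyword*/ 0, L"MyComponent: interesting event");

        EventUnregister(handle);
        return 0;
    }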
I very much respectfully disagree with tvanfosson. Even the best code can benefit from well-implemented logging. Well-implemented static run-time logging can make finding many problems straightforward - without it, you have zero visibility into what is happening in your component. You can look at inputs, outputs and guess - that's about it.
The key here is the term 'well implemented'. Instrumentation must be in the right place. Like anything else, this requires some thought and planning. If it is not in helpful/interesting places, then it will not help you find problems in a development, testing, or deployed scenario. You can also have too much instrumentation, causing perf problems when it is on - or even off!
Of course, different software products or components will have different needs. Some things may need very little instrumentation. But a widely deployed or critical component can greatly benefit from well-engineered instrumentation.
Here is a scenario for you (note, this very well may not apply to you... :) ). Let's say you have a line-of-business app deployed on every desktop in your company - hundreds or thousands of users. What do you do when someone has a problem? Do you stop by their office and hook up a debugger? If so, how do you know what version they have? Where do you get the right symbols? How do you get the debugger on their system? What if it only happens once every few hours or days? Are you going to let the system run with the debugger connected all that time?
As you can imagine, hooking up a debugger in this scenario is disruptive.
If your component is instrumented with ETW, then you could ask your user to simply turn on tracing, continue to do his/her work, and then hit the "WTF" button when the problem happens. Even better: your app may be able to self-log - detecting problems at run time and turning on logging auto-magically. It could even send you ETW files when problems occurred.
These are just simple examples - logging can be handled in many different ways. My key recommendation here is to think about how logging might help you find, debug, and fix problems in your components at dev time, test time, and after they are deployed.
I was burnt by the same issue in just about every project I've been involved with, so now I have a habit of making extensive use of logging libraries (whatever the language/platform provides) from the start. Any Log4X port is fine for me.
Building yourself some proper debug tools can be extremely valuable. For example in a 3D environment, you might have an option to display the octree, or to render planned AI paths, or to draw waypoints that are normally invisible. You'd probably also want some on-screen display to aid with profiling too: the current framerate, count of polygons on screen, texture memory usage, and so on.
Although this takes some time and effort to do, in the long run it can save you a lot of time and frustration.
What would be the best way of inserting functionality into a binary application (3rd party, closed source)?
The target application is on OSX and seems to have been compiled using gcc 3+. I can see the listing of functions implemented in the binary and have debugged and isolated one particular function which I would like to remotely call.
Specifically, I would like to call this function - let's call it void zoomByFactor(x,y) - when I receive certain data from a complex HIDevice.
I can easily modify or inject instructions into the binary file itself (ie. the patching does not need to occur only in RAM).
What would you recommend as a way of "nicely" doing this?
Edit:
I do indeed need the entire application, so I can't ditch it and use a library. (For those who need an ethical explanation: this is a proprietary piece of CAD software whose company website hasn't been updated since 2006. I have paid for this product (quite a lot of money for what it is, really) and have project data which I cannot easily migrate away from it. The product suits me just fine as it is, but I want to use a new HID which I recently got. I've examined the internals of the application, and I'm fairly confident that I can call the correct function with the relevant data and get it to work properly.)
Here's what I've done so far, and it is quite ghetto.
I've already modified parts of the application through this process:
xxd -g 0 binary > binary.hex
cat binary.hex | awk 'substitute work' > modified.hex
xxd -r modified.hex > newbinary
chmod 777 newbinary
I'm doing this kind of jumping through hoops because the binary is almost 100 megs large.
The gist of what I'm thinking is that I'd jmp somewhere in the main application loop, launch a thread, and return to the main function.
Now, the questions are: Where can I insert the new code? Do I need to modify symbol tables? Alternatively, how could I make a dylib load automatically so that the only "hacking" I need to do is inserting a call to a normally loaded dylib into the main function?
For those interested in what I've ended up doing, here's a summary:
I've looked at several possibilities. They fall into runtime patching, and static binary file patching.
As far as file patching is concerned, I essentially tried two approaches:
modifying the assembly in the code segments (__TEXT) of the binary.
modifying the load commands in the mach header.
The first method requires there to be free space, or methods you can overwrite. It also suffers from extremely poor maintainability: any new binary will require hand-patching once again, especially if its source code has even slightly changed.
The second method was to try to add an LC_LOAD_DYLIB entry into the mach header. There aren't many Mach-O editors out there, so it's hairy, but I actually modified the structures so that my entry was visible to otool -l. However, this didn't actually work, as there was a "dyld: bad external relocation length" error at runtime. I'm assuming I need to muck around with import tables etc., and this is way too much effort to get right without an editor.
The second path was to inject code at runtime. There isn't much out there to do this, even for apps you have control over (i.e. a child application you launch). Maybe there's a way to fork() and get the initialization process launched, but I never got that far.
There is SIMBL, but this requires your app to be Cocoa, because SIMBL poses as a system-wide InputManager and selectively loads bundles. I dismissed this because my app was not Cocoa, and besides, I dislike system-wide stuff.
Next up were mach_inject and the mach_star project. There is also a newer project called PlugSuit, hosted at Google, which seems to be nothing more than a thin wrapper around mach_inject.
mach_inject provides an API to do what the name implies. I did find a problem in the code, though: on 10.5.4, the mmap call in the mach_inject.c file requires MAP_SHARED or'd with MAP_READ, or else the mmap will fail.
Aside from that, the whole thing actually works as advertised. I ended up using mach_inject_bundle to do what I had intended to do with the static addition of a dylib to the mach header: namely, launching a new thread on module init that does its dirty business.
Anyways, I've made this a wiki. Feel free to add, correct or update information. There's practically no information available on this kind of work on OSX. The more info, the better.
In Mac OS X releases prior to 10.5 you'd do this using an Input Manager extension. Input Manager was intended to handle things like input for non-Roman languages, where the extension could pop up a window to input the appropriate glyphs and then pass the completed text to the app. The application only needed to make sure it was Unicode-clean, and didn't have to worry about the exact details of every language and region.
Input Manager was wildly abused to patch all sorts of unrelated functionality into applications, and often destabilized the app. It was also becoming an attack vector for trojans, such as "Oompa-Loompa". Mac OS X 10.5 tightens restrictions on Input Managers: it won't run them in a process owned by root or wheel, nor in a process which has modified its uid. Most significantly, 10.5 won't load an Input Manager into a 64-bit process, and Apple has indicated that even 32-bit use is unsupported and will be removed in a future release.
So if you can live with the restrictions, an Input Manager can do what you want. Future MacOS releases will almost certainly introduce another (safer, more limited) way to do this, as the functionality really is needed for language input support.
I believe you could also use the DYLD_INSERT_LIBRARIES method.
This post is also related to what you were trying to do.
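A minimal sketch of the DYLD_INSERT_LIBRARIES approach (file names are examples, and it assumes the target doesn't strip DYLD_* variables): build a dylib whose constructor runs inside the target process, then launch the app with the environment variable set.

    // inject.cpp -- build with: clang++ -dynamiclib inject.cpp -o inject.dylib
    #include <cstdio>
    #include <pthread.h>

    static void* worker(void*)
    {
        // talk to the HID device here and call into the host binary as needed
        std::printf("injected thread running inside the host process\n");
        return nullptr;
    }

    // Runs automatically when dyld loads this library into the target process.
    __attribute__((constructor))
    static void on_load()
    {
        pthread_t t;
        pthread_create(&t, nullptr, worker, nullptr);
    }

    // Launch the target with:
    //   DYLD_INSERT_LIBRARIES=./inject.dylib /Applications/Target.app/Contents/MacOS/Target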
I recently took a stab at injection/overriding using the mach_star sources. I ended up writing a tutorial for it since documentation for this stuff is always so sketchy and often out of date.
http://soundly.me/osx-injection-override-tutorial-hello-world/
Interesting problem. If I understand you correctly, you'd like to add the ability to remotely call functions in a running executable.
If you don't really need the whole application, you might be able to strip out the main function and turn it into a library file that you can link against. It'll be up to you to figure out how to make sure all the required initialization occurs.
Another approach could be to act like a virus. Inject a function that handles the remote calls, probably in another thread. You'll need to launch this thread by injecting some code into the main function, or wherever else is appropriate. Most likely you'll run into major issues with initialization, thread safety, and/or maintaining proper program state.
The best option, if it's available, is to get the vendor of your application to expose a plugin API that lets you do this cleanly and reliably in a supported manner.
If you go with either hack-the-binary route, it'll be time consuming and brittle, but you'll learn a lot in the process.
On Windows, this is simple to do, is actually very widely done and is known as DLL/code injection.
There is a commercial SDK for OSX which allows doing this: Application Enhancer (free for non-commercial use).