Patching an EXE using IDA - overflow

Say there is a buggy program that contains a sprintf() and I want to change it to snprintf() so it doesn't have a buffer overflow. How do I do that in IDA?

You really don't want to make that kind of change using information from IDA Pro.
Although IDA's disassembly is relatively high quality, it's not high quality enough to support executable rewriting. Converting a call to sprintf into a call to snprintf requires pushing a new argument onto the stack. That requires introducing a new instruction, which changes the effective address of everything that follows it in the executable image. Updating those effective addresses requires extremely high-quality disassembly. In particular, you need to be able to:
Identify which addresses in the executable are data, and which ones are code
Identify which instruction operands are symbolic (address references) and which instruction operands are numeric.
IDA can't (reliably) give you that information. Also, if the executable is statically linked against the CRT, it may not contain snprintf at all, which would make performing the rewriting by hand VERY difficult.
There are a few potential workarounds. If there is sufficient padding available in (or after) the function making the call, you might be able to get away with rewriting only a single function. Alternatively, if you have access to the object files, and those object files were compiled with the /Gy switch (assuming you are using Visual Studio), then you may be able to edit the object file. However, editing the object file may still require substantial fix-ups.
Presumably, however, if you have access to the object files you probably also have access to the source. Changing the source is probably your best bet.
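
If you do have the source, the fix itself is a one-line change along these lines (a minimal sketch; the function, buffer name, and size are illustrative, not from the question):

#include <stdio.h>

void greet(const char *user_input)
{
    char buf[64];
    /* before: sprintf(buf, "Hello %s", user_input);  -- unbounded, can overflow buf */
    /* after: the output is truncated to fit buf, including the terminating NUL */
    snprintf(buf, sizeof buf, "Hello %s", user_input);
    puts(buf);
}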

Related

What is the ".scatterload" file in macOS?

I've downloaded Apple's TextEdit example app (here) and I'm a bit puzzled by one thing I see there: the TextEdit.scatterload file. It contains a list of functions and methods. My guess is that it provides information to the linker as to which functions/methods will be needed, and in what order, when the app launches, and that this is used to order the binary generated by the linker for maximum efficiency. Oddly, I seem to be unable to find any information whatsoever about this file through Google. So. First of all, is my guess as to the function of this file correct? And second, if so, can I generate a .scatterload file for my own macOS app, to make it launch faster? How would I do that? Seems like a good idea! (I am using Objective-C, but perhaps this question is not specific to that, so I'm not going to tag for it here.)
Scatter loading refers to a way of organizing the mapping of your code in memory by specifying which parts of the code must be placed near which others. The aim is to reduce page faults and improve locality of reference.
You can read about it here Improving locality of reference (HTML)
or here Improving locality of reference (PDF).
The .scatterload file is used by the linker to position code within the memory layout of the executable.
Unless your app really needs tight performance tuning, I would not encourage you to look into this.
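
If you do want to experiment, the usual mechanism on macOS is a linker order file (a sketch, assuming you use ld64's -order_file option; the file name and symbols below are purely illustrative and not taken from the TextEdit project):

TextEdit.order (one symbol per line, in the order they should be laid out):

_main
_SetUpDocumentWindow
_LoadPreferences

Then pass it to the linker through clang:

clang -o TextEdit main.o document.o prefs.o -Wl,-order_file,TextEdit.order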

LoadLibrary from offset in a file

I am writing a scriptable game engine, for which I have a large number of classes that perform various tasks. The size of the engine is growing rapidly, and so I thought of splitting the large executable up into dll modules so that only the components that the game writer actually uses can be included. When the user compiles their game (which is to say their script), I want the correct dll's to be part of the final executable. I already have quite a bit of overlay data, so I figured I might be able to store the dll's as part of this block. My question boils down to this:
Is it possible to trick LoadLibrary to start reading the file at a certain offset? That would save me from having to either extract the dll into a temporary file which is not clean, or alternatively scrapping the automatic inclusion of dll's altogether and simply instructing my users to package the dll's along with their games.
Initially I thought of going for the "load dll from memory" approach but rejected it on grounds of portability and simply because it seems like such a horrible hack.
Any thoughts?
Kind regards,
Philip Bennefall
You are trying to solve a problem that doesn't exist. Loading a DLL doesn't actually require any physical memory. Windows creates a memory-mapped file for the DLL's content. Code from the DLL only ever gets loaded when your program calls that code. Unused code doesn't require any system resources beyond reserved memory pages. You have 2 billion bytes' worth of those on a 32-bit operating system. You would have to write a lot of code to consume them all; 50 megabytes of machine code is already a very large program.
The memory mapping is also the reason you cannot make LoadLibrary() do what you want to do. There is no realistic scenario where you need to.
Look into the linker's /DELAYLOAD option to improve startup performance.
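
For example (a sketch; the module names are illustrative), with MSVC you link against delayimp.lib and name the DLL whose load should be deferred:

cl /c game.c
link game.obj physics.lib delayimp.lib /DELAYLOAD:physics.dll

physics.dll is then not loaded until the first call into it, which trims startup cost without changing your code.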
I think every solution to that task is a "horrible hack" and nothing more.
The simplest way I can see is to create your own virtual drive that presents a custom filesystem and redirects accesses from one real file (the compiled bundle of your libraries) to multiple separate DLLs, the way TrueCrypt does (it's open source). Then you can use the LoadLibrary function without changes.
But the only right way I can see is to change your approach and not do this at all. I think you need to create your own script interpreter and compiler, using structures, pointers and so on.
The main thing is that I don't understand what benefit you get from using separate libraries. Compiled code these days does not weigh much and can be packed very well. Any other resources can be loaded dynamically on first use. All you need to do is organize the working cycle of the script engine's components in the right way.

Clever uses of linker scripts?

A great comment on my answer describing how to use linker scripts to make a ctor-like function list pointed out that recent GNU ld has much improved support for grafting new sections into system linker scripts with -Wl,-T... and INSERT BEFORE/INSERT AFTER. This got me thinking about other linker script tricks.
For a network card firmware I modified the linker script to group together the runtime modules of the firmware so that they would all be in a contiguous block that could be in L1 cache without conflicts. To clean up stragglers (where I couldn't group by .o) I used section attributes on individual functions. Performance counters verified that it actually worked (reduced L1 instruction cache misses to almost nothing).
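
Roughly, the combination looked like this (a sketch; the section and function names are illustrative, not the actual firmware ones, and it assumes a recent GNU ld with INSERT support):

/* C: tag the hot functions so they all land in one section */
__attribute__((section(".hot_text")))
void rx_fast_path(void)
{
    /* ... time-critical work ... */
}

/* hot.ld, passed with -Wl,-T,hot.ld: gather those functions contiguously */
SECTIONS
{
    .hot_text : { *(.hot_text*) }
}
INSERT AFTER .text;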
What other clever things have you accomplished with linker scripts?
On a certain platform, for reasons I won't go into, I needed to have a section of the executable which I could discard after load. Unfortunately, unmapping the memory for the executable was not possible, so I was compelled to resort to linker trickery.
What I ended up doing was introducing a section of the executable which aliased the bss. That way, presuming I could sneak some code in early enough, I could copy the data out and reinitialize the bss, and, so long as my aliased section was smaller than the total bss of the executable, pay no cost for the privilege. There are a couple of problems, in that I couldn't really change the CRT at all and the earliest point I could inject code was still after TLS initialization (which used some bss), but nothing impossible to work around.
I'm still sort of surprised it worked; I would have thought that the bss was initialized by the CRT after all the program sections were loaded. I haven't tried it on any platform where I have access to the loader or CRT source.

Edit library in hex editor while preserving its integrity

I'm attempting to edit a library in a hex editor, in insert mode. The main point is to rename a few entries in it. If I make the change in "Overwrite" mode, everything works fine, but every time I try to add a few symbols to the end of a string in "Insert" mode, the library fails to load. Anything I'm missing here?
Yes, you're missing plenty. A library follows the PE/COFF format, which is quite heavy on pointers throughout the file. (E.g., towards the beginning of the file is a table which points to the location of each section in the file.)
In the case that you are editing resources, there's the potential to do it without breaking things if you make sure you correct any pointers and sizes for anything pointing to after your edits, but I doubt it'll be easy. In the case that you are editing the .text section (ie, the code), then I doubt you'll get it done, since the operands of function calls and jumps are relative locations to their position in code - you would need to update the entire code to account for edits.
One technique to overcome this is a "code cave", where you replace a piece of the existing code with an explicit JMP instruction to some empty location (you can do this at runtime, where you have the ability to allocate new memory), define some new code there which can be of arbitrary length, and then explicitly JMP back to just after where you jumped from (+5 bytes, say, for the JMP opcode and its operand).
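
As a rough illustration of the runtime variant (a sketch only; the function name is mine, and it assumes x86 with the 5 overwritten bytes forming whole instructions that you relocate into the cave):

#include <windows.h>
#include <stdint.h>
#include <string.h>

/* Overwrite 5 bytes at 'from' with a relative JMP (opcode E9) to 'to'. */
static void write_jmp(uint8_t *from, const uint8_t *to)
{
    DWORD old;
    VirtualProtect(from, 5, PAGE_EXECUTE_READWRITE, &old);
    int32_t rel = (int32_t)(to - (from + 5));   /* displacement is relative to the next instruction */
    from[0] = 0xE9;
    memcpy(from + 1, &rel, sizeof rel);
    VirtualProtect(from, 5, old, &old);
}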
Are the names you're changing them to the same length as the old names? If not, then the offsets of everything after them are shifted. And do any of the functions call one another? That could be another problem point. It'd be easier to obtain the source code (from the project's website if it's not in-house, or from the vendor if it's closed), change the names in that, and then recompile. I'm curious as to why you are changing the names anyway.
DLLs are a complex binary format (ie compiled code). The compiling process turns named function calls into hard-wired references to specific positions in the file ("offsets"). Therefore if you insert characters into the middle of the file, the offsets after that point will no longer match what is actually at the position they reference, meaning that the function calls in your library will run the wrong code (if they manage to run anything at all).
Basically, the bottom line is what you're doing is always going to break stuff. If you're unlucky, it might even break it really badly and cause serious damage.
Sure - a detailed knowledge of the format, and what has to change. If you're wondering why some of your edits cause loading to fail, you are missing that knowledge.
Libraries are intended to be written by the linker for the use of the linker. They follow a well-defined format that is intended to be easy for the linker to write and read. They don't need tolerance for human input like a compiler does.
Very simply, libraries aren't intended to be modified by hex editors. It may be possible to change entries by overwriting them with names of the same length, or that may screw up an index somewhere. If you change the length of anything, you're likely breaking pointers and metadata.
You don't give any reason for wanting to do this. If it's for fun, well, it's harder than you expected. If you have another reason, you're better off getting the source, or getting somebody who has the source to rename and rebuild.

C Runtime objects, dll boundaries

What is the best way to design a C API for DLLs which deals with the problem of passing "objects" that are C runtime dependent (FILE*, pointers returned by malloc, etc.)? For example, if two DLLs are linked with different versions of the runtime, my understanding is that you cannot pass a FILE* from one DLL to the other safely.
Is the only solution to use Windows-specific APIs (which are guaranteed to work across DLLs)? The C API already exists and is mature, but was designed mostly from a Unix point of view (and it still has to work on Unix, of course).
You asked for a C, not a C++ solution.
The usual method(s) for doing this kind of thing in C are:
Design the module's API to simply not require CRT objects. Get stuff passed across in raw C types - i.e. get the consumer to load the file and simply pass you the pointer. Or get the consumer to pass a fully qualified file name that is opened, read, and closed internally.
An approach used by other C modules (the MS cabinet SDK and parts of the OpenSSL library come to mind, IIRC) is to get the consuming application to pass pointers to functions into the initialization function. So, any API you would otherwise pass a FILE* to would, at some point during initialization, have taken a pointer to a struct of function pointers matching the signatures of fread, fopen, etc. When dealing with external FILE*s, the DLL always uses the passed-in functions rather than the CRT functions.
With some simple tricks like this you can make your C DLL's interface entirely independent of the host's CRT - or, in fact, not require the host to be written in C or C++ at all.
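
A minimal sketch of that second approach (the struct and names are illustrative, not taken from the cabinet SDK or OpenSSL):

#include <stddef.h>

/* Supplied by the host at initialization; the DLL never calls its own CRT
   for these operations on host-provided handles. */
typedef struct HostIO {
    void*  (*open)(const char* path, const char* mode);
    size_t (*read)(void* buffer, size_t size, size_t count, void* handle);
    int    (*close)(void* handle);
} HostIO;

static HostIO g_io;   /* saved copy of the host's callbacks */

/* Exported initialization function: the DLL stores the table and uses it
   for all subsequent file access. */
__declspec(dllexport) void module_init(const HostIO* io)
{
    g_io = *io;
}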
Neither existing answer is correct. Consider the following on Windows: you have two DLLs, each statically linked with a different version of the C/C++ standard library.
In this case, you should not pass pointers to structures created by the C/C++ standard library in one DLL to the other. The reason is that these structures may be different between the two C/C++ standard library implementations.
The other thing you should not do is free, from one DLL, a pointer that was allocated by new or malloc in the other. The heap manager may be implemented differently as well.
Note, you can use the pointers between the DLLs - they just point to memory. It is the free that is the issue.
Now, you may find that this works, but if it does, then you are just lucky. It is likely to cause you problems in the future.
One potential solution to your problem is dynamically linking to the CRT. For example, you could dynamically link to MSVCRT.DLL. That way your DLLs will always use the same CRT.
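
In MSVC terms that means building every module against the shared (DLL) CRT; a sketch with illustrative file names:

rem both modules use the same CRT DLL because of /MD
cl /MD /LD engine.c
cl /MD app.c engine.lib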
Note, I suggest that it is not a best practice to pass CRT data structures between DLLs. You might want to see if you can factor things better.
Note, I am not a Linux/Unix expert - but you will have the same issues on those OSes as well.
The problem with the different runtimes isn't solvable, because the FILE* struct belongs to one particular runtime on a Windows system.
But if you write a small wrapper interface you're done, and it does not really hurt:
class IFile {
public:
    virtual size_t Read(void* buffer, size_t size, size_t count) = 0;
    virtual size_t Write(const void* buffer, size_t size, size_t count) = 0;
    virtual void Destroy() = 0;   // frees the object inside the DLL that created it
protected:
    virtual ~IFile() {}
};

IFile* __stdcall IFileFactory(const char* filename, const char* mode);
This is safe to pass across DLL boundaries everywhere and does not really hurt.
P.S.: Be careful if you start throwing exceptions across DLL boundaries. This will work quite well if you fulfill certain design criteria on Windows, but will fail on some other platforms.
If the C API exists and is mature, bypassing the CRT internally by using pure Win32 API stuff gets you half the way. The other half is making sure the DLL's user uses the corresponding Win32 API functions. This will make your API less portable, in both use and documentation. Also, even if you go this way with memory allocation, where both the CRT functions and the Win32 ones deal with void*, you're still in trouble with the file stuff - Win32 API uses handles, and knows nothing about the FILE structure.
I'm not quite sure what are the limitations of the FILE*, but I assume the problem is the same as with CRT allocations across modules. MSVCRT uses Win32 internally to handle the file operations, and the underlying file handle can be used from every module within the same process. What might not work is closing a file that was opened by another module, which involves freeing the FILE structure on a possibly different CRT.
What I would do, if changing the API is still an option, is export cleanup functions for any possible "object" created within the DLL. These cleanup functions will handle the disposal of the given object in the way that corresponds to the way it was created within that DLL. This will also make the DLL absolutely portable in terms of usage. The only worry you'll have then is making sure the DLL's user does indeed use your cleanup functions rather than the regular CRT ones. This can be done using several tricks, which deserve another question...
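
For instance (a sketch; the names are illustrative and not from the question's API), each creator gets a matching cleanup function exported from the same DLL:

#include <stdio.h>

/* Both functions live in the same DLL, so the FILE* is created and
   released by the same CRT instance. */
__declspec(dllexport) FILE* engine_open_log(const char* path)
{
    return fopen(path, "w");
}

__declspec(dllexport) void engine_close_log(FILE* f)
{
    if (f) fclose(f);
}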
