Is anyone hardening their code in an attempt to detect injections? For example, if someone is trying to intercept a username/password via NSURLConnection, they could use LD_PRELOAD/DYLD_LIBRARY_PATH, provide exports for my calls into NSURLConnection, and then forward the calls to the real NSURLConnection.
Ali gave excellent information below, but I'm trying to determine what measures should be taken for a hostile environment, where a phone might be jailbroken. Most applications don't have to care, but one class of apps does - high integrity software.
If you are hardening, what method(s) are you using? Is there a standard way to detect injections on Macs and iPhones? How are you defeating framework injections?
For iOS / Cocoa Touch, loading dynamic libraries is not allowed* (except for the system frameworks). To build and distribute an application through the App Store, you can only link with static libraries and system frameworks, not with dynamic libraries.
So on iOS you can't use that for code injection, nor can you use LD_PRELOAD, of course (you don't have access to such environment variables on iOS).
Except on jailbroken iPhones, probably, but people who jailbreak their iPhone should accept that jailbreaking by definition lifts all the protections iOS provides against things such as injection (you can't remove the lock from your door to avoid having to use your key... and still expect to be protected against thieves robbing your house ;-))
That's the advantage of the sandboxing + code signing + no-dylib constraints on iOS: no code injection is possible.
(On OS X it is still possible, in particular via DYLD_INSERT_LIBRARIES, dyld's equivalent of LD_PRELOAD.)
[EDIT] Since iOS 8, iOS also allows dynamic frameworks. But as that's still sandboxed (you can only load code-signed frameworks that are inside your application bundle, and can't load frameworks that come from outside your app bundle), injection is still not possible*
*except if the user jailbreaks their phone, but that means they chose to get rid of all those protections on purpose and thus put their phone at risk; we can't crack our phone's security and still expect it to provide all the protections it did before.
This answer is specific to UNIX-like operating systems; I apologize if it doesn't make sense for your question, but I don't know your platform well. Simply put: don't create a dynamically linked executable.
There are two ways I can think of to do this. Method #2 is probably best for you. They're both similar.
Important for both: the executable must be statically linked using -static at build time.
Method 1 - static exe, manually load shared libraries by their trusted full paths
dlopen() each library you need via its full path, then get the function addresses via dlsym() at runtime and assign them to function pointers in order to use them. You'll need to do this for every external function you want to use. I believe non-reentrant functions won't like this, so for those that use static variables you'll need to use the reentrant-safe versions; these end with "_r", i.e. use strtok_r instead of strtok.
This will be difficult or simple depending on what your app does and how many functions you're using.
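A rough sketch of that pattern, assuming a Linux-style system; the library path and the zlib crc32 symbol are only placeholders for whatever you actually need (also note that glibc's dlopen() has caveats when called from a fully static executable):

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* Open the library by its trusted full path, not a bare name,
           so the result does not depend on the loader's search path. */
        void *handle = dlopen("/usr/lib/x86_64-linux-gnu/libz.so.1", RTLD_NOW);
        if (!handle) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        /* Resolve the symbol and assign it to a matching function pointer. */
        unsigned long (*zlib_crc32)(unsigned long, const unsigned char *, unsigned int);
        zlib_crc32 = (unsigned long (*)(unsigned long, const unsigned char *, unsigned int))
                         dlsym(handle, "crc32");
        if (!zlib_crc32) {
            fprintf(stderr, "dlsym: %s\n", dlerror());
            dlclose(handle);
            return 1;
        }

        printf("crc32(\"abc\") = %lx\n",
               zlib_crc32(0UL, (const unsigned char *)"abc", 3));
        dlclose(handle);
        return 0;
    }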
Method 2 - Statically link the executable, period
You can solve your subversion problem by just linking a static executable, avoiding dynamic libraries altogether. This will generate a much larger exe than the dlopen()/dlsym() method. Build using the -static compile flag and, instead of using for example gcc bah.c -o bah -lssl, use gcc -static bah.c -o bah /usr/lib/libssl.a to link the statically compiled versions of the libraries you need rather than the dynamic shared libraries. In other words, use -static and don't use -l while building.
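For illustration, a minimal program that could be built that way; the SHA256() one-shot call and the libcrypto.a path are assumptions about your setup, and a fully static OpenSSL link may need additional archives depending on the platform:

    #include <openssl/sha.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned char digest[SHA256_DIGEST_LENGTH];

        /* One-shot hash of a short message. */
        SHA256((const unsigned char *)"hello", 5, digest);

        for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
            printf("%02x", digest[i]);
        putchar('\n');
        return 0;
    }

Built, for example, as: gcc -static sha.c /usr/lib/libcrypto.a -o sha (the .a path is illustrative).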
For either method:
Once built, run file bah to confirm the executable is statically linked, or confirm by running ldd on it.
Note that you'll need statically compiled versions of all the libraries you're linking against present on your system (these files end with .a instead of .so).
Also note that upgrading system libraries will not update your executable. If there's a new security bug in OpenSSL, you'll need to get the fixed libssl.a and rebuild your executable. If you use the dlopen()/dlsym() method you won't have this problem, but you will have portability issues if symbols change between versions.
Each method has its pros and cons based on your needs.
Taking the Method 1 dlopen()/dlsym() approach makes your code more "obfuscated" and the executable smaller, but in most cases it sacrifices portability, so it probably isn't what you want. The upside is that it can benefit automatically when security bugs are fixed system-wide.
Related
Assuming you only have access to the final product (i.e. in form of the exe file), how would you go about finding out which libraries/components the developer used to create the application?
In my specific case the question is about an application developed in VC++ using a few third party components and I'm curious which those are.
But I think the question is generally valid, e.g. when you need to prove whether a developer is complying with the license requirements of a specific library.
So, what you're saying is that if I suspect that a binary is using a certain library, I could try to map the respective function calls and see if I get a result. But there is no shortcut to this and unless I am willing to try out hundreds of mappings or the dev left some information in some strings or other resources, I have little chance of finding this out. Yes?
There is a small shortcut; here's what I'd do:
Check the executable for strings and constants, and try to figure out which library they come from.
If the libraries used are open source, compile them on my own and create FLAIR signatures (IDA Pro).
Use the generated FLAIR signatures on the target executable.
In some situations that can really work like a charm and let you distinguish the application's own code from the libraries it uses.
The IDA Pro Book - Ch 12. Library Recognition Using FLIRT Signatures
If I build a DLL with the Rust language, does it require libgcc*.dll to be present at run time?
On one hand:
I've seen a post somewhere on the Internet, claiming that yes it does;
rustc.exe has libgcc_s_dw2-1.dll in its directory, and cargo.exe won't run without the dll when downloaded from the http://crates.io website;
On the other hand:
I've seen articles about building toy OS kernels in Rust, so they most certainly don't require libgcc dynamic library to be present.
So, I'm confused. What's the definite answer?
Rust provides two main toolchains for Windows: x86_64-pc-windows-gnu and x86_64-pc-windows-msvc.
The -gnu toolchain includes an MSYS environment and uses GCC's ld.exe to link object files. This toolchain requires libgcc*.dll to be present at runtime. The main advantage of this toolchain is that it allows you to link against other MSYS-provided libraries, which can make it easier to link with certain C/C++ libraries that are difficult to build under the normal Windows environment.
The -msvc toolchain uses the standard, native Windows development tools (either a Windows SDK install or a Visual Studio install). This toolchain does not use libgcc*.dll at either compile time or runtime. Since this toolchain uses the normal Windows linker, you are free to link against any normal Windows native libraries.
If you need to target 32-bit Windows, i686- variants of both of these toolchains are available.
NOTE: the answer below summarizes the situation as of Sep 2014; I don't know whether it's still current or whether things have changed for better or worse since then. But I strongly suspect things have changed, given that two years have already passed. It would be cool if somebody asked steveklabnik about it again and then updated the info below, or wrote a new, fresher answer!
Quick & raw transcript of a Rust IRC chat with steveklabnik, who gave me a kind of answer:
Hi; I have a question: if I build a DLL with Rust, does it require libgcc*.dll to be present on run time? (on Windows)
I believe that if you use the standard library, then it does require it;
IIRC we depend on one symbol from it;
but I am unsure.
How can I avoid using the standard library, or those parts of it that do? (and/or do you know which symbol exactly?)
It involves #[no_std] at your crate root; I think the unsafe guide has more.
Running nm -D | grep gcc shows me __gcc_personality_v0, and then there is this: What is __gxx_personality_v0 for?
so it looks like our stack unwinding implementation depends on that.
I seem to recall I've seen some RFCs to the effect of splitting standard library, too; are there parts I can use without pulling libgcc in?
Yes, libcore doesn't require any of that.
You give up libstd.
Also, quoting parts of the unsafe guide:
The core library (libcore) has very few dependencies and is much more portable than the standard library (libstd) itself. Additionally, the core library has most of the necessary functionality for writing idiomatic and effective Rust code. (...)
Further libraries, such as liballoc, add functionality to libcore which make other platform-specific assumptions, but continue to be more portable than the standard library itself.
And a fragment of the current docs for the unwind module:
Currently Rust uses unwind runtime provided by libgcc.
(The transcript was edited slightly for readability. Still, I'll happily delete this answer if anyone provides something better formatted and more thorough!)
I have a product whose bootloader and application are compiled using a compiler (gnuarm GCC 4.1.1) that generates "arm-elf".
The bootloader and application are segregated in different FLASH memory areas in the linker script.
The application has a feature that enables it to call the bootloader (as a simple c-function with 2 parameters).
I need to be able to upgrade existing products around the world, and I can do that safely as long as I always use the same compiler.
Now I'd like to be able to compile this product application using a new GCC version that outputs arm-eabi.
Everything will be fine for new products, where both application and bootloader are compiled using the same toolchain, but what happens with existing products?
If I flash a new application, compiled with GCC 4.6.x and arm-none-eabi, will my application still be able to call the bootloader function from the old arm-elf bootloader?
Furthermore, not directly related to the above question, can I mix object files compiled with arm-elf into a binary compiled with arm-eabi?
EDIT:
I think it's good to make clear that I am building for a bare-metal ARM7, if it makes any difference...
No. An ABI is the magic that makes binaries compatible. The Application Binary Interface determines various conventions for communicating with other libraries/applications. For example, an ABI will define the calling convention, which makes implicit assumptions about things like which registers are used for passing arguments to C functions and how to deal with excess arguments.
I don't know the exact differences between the EABI and the older ABI, but you can find some of them by reading up on EABI. Debian's page mentions that the syscall convention is different, along with some alignment changes.
Given the above, of course, you cannot mix arm-elf and arm-eabi objects.
The above answer is given on the assumption that you talk to the bootloader code in your main application. Given that the interface may be very simple (just a function call with two parameters), it's possible that it might work. It'd be an interesting experiment to try. However, it is not guaranteed to work.
Please keep in mind you do not have to use the EABI. You can generate an arm-elf toolchain with GCC 4.6 just as well as with older versions. Since you're using a binary toolchain on Windows, you may have more of a challenge. I'd suggest investigating crosstool-ng, which works quite well on Linux, and may work okay on Cygwin to build the appropriate toolchain.
There is always the option of making the call to the bootloader in inline assembly, in which case you can adhere to whatever calling standard you need :) (a rough sketch follows after the two points below).
However, besides the portability issue it introduces, this approach will also make two assumptions about your bootloader and application:
you are able to detect in your app that a particular device has a bootloader built with your non-EABI toolchain, since you can only call the older-type bootloader through the assembly code.
the two parameters you mentioned are used as primitive data by your bootloader. Should the bootloader use them as, for example, pointers to structs, then you could face issues with incorrect alignment, padding and so forth.
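As a hedged illustration of the inline-assembly route (the entry address, register usage and the ARM-state call idiom are assumptions; adjust them to match how your actual bootloader expects to be entered):

    /* Hypothetical bootloader entry address - replace with the real one. */
    #define BOOTLOADER_ENTRY 0x00000000u

    static void call_bootloader(unsigned int arg0, unsigned int arg1)
    {
        /* Pin the two parameters to r0/r1 explicitly instead of relying on
           the compiler's (EABI) calling convention. */
        register unsigned int r0 __asm__("r0") = arg0;
        register unsigned int r1 __asm__("r1") = arg1;
        register unsigned int fn __asm__("r2") = BOOTLOADER_ENTRY;

        __asm__ volatile(
            "mov lr, pc\n\t"          /* classic ARMv4T call, no blx <reg> */
            "bx  %[fn]\n\t"
            : "+r"(r0), "+r"(r1), [fn] "+r"(fn)
            :
            : "r3", "r12", "lr", "cc", "memory");
    }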
I think that this will be OK. I did a migration something like this myself; from what I remember I only ran into a problem with how division was handled.
This is the best info I can find about the differences; it suggests that if you don't have struct alignment issues, you may be OK.
Most applications created with Microsoft developer tools need some kind of runtime to be installed first.
However, most viruses never need any kind of runtime to work. They also seem to use undocumented core/kernel APIs without having import libraries, etc.
So what runtime/framework do most viruses / virus writers use?
If the runtime is statically linked in (as opposed to dynamically), then an EXE will be self-contained and you won't need a runtime DLL. However, really, you don't even need a runtime library at all if your code can do everything without calling standard library functions.
As for Windows APIs, in many cases you don't strictly need an import library either -- particularly if you load addresses dynamically via GetProcAddress. Some development tools will even let you link directly against the DLLs (and will generate method stubs or whatever for you). MS tries to ensure that names for documented API calls stay the same between versions. Undocumented functions, not so much... but then, compatibility typically isn't the foremost concern anyway when you're deliberately writing malicious software.
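A small sketch of that GetProcAddress pattern; MessageBoxW is just a stand-in for whichever export you would resolve:

    #include <windows.h>
    #include <stdio.h>

    typedef int (WINAPI *MessageBoxW_t)(HWND, LPCWSTR, LPCWSTR, UINT);

    int main(void)
    {
        HMODULE user32 = LoadLibraryW(L"user32.dll");
        if (!user32) {
            fprintf(stderr, "LoadLibraryW failed: %lu\n", GetLastError());
            return 1;
        }

        /* Resolve the export by name at runtime; no import library involved. */
        MessageBoxW_t pMessageBoxW =
            (MessageBoxW_t)GetProcAddress(user32, "MessageBoxW");
        if (pMessageBoxW)
            pMessageBoxW(NULL, L"Resolved at runtime", L"Demo", MB_OK);

        FreeLibrary(user32);
        return 0;
    }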
On Windows there are a few libraries that allow you to intercept calls to DLLs:
http://www.codeproject.com/kb/system/hooksys.aspx
Is it possible to do this on Mac OS? If so, how is it done?
The answer depends on whether you want to do this in your own application or systemwide. In your own application, it's pretty easy; the dynamic linker provides features such as DYLD_INSERT_LIBRARIES. If you're doing this for debugging/instrumentation purposes, also check out DTrace.
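For the load-into-your-own-process case, here's a hedged sketch of a dylib that uses dyld's __interpose section and gets loaded via DYLD_INSERT_LIBRARIES; the choice of open() and the build line are illustrative, and code-signing/SIP restrictions on modern macOS limit where this works:

    /* Build, e.g.:  clang -dynamiclib interpose.c -o libinterpose.dylib
       Run,   e.g.:  DYLD_INSERT_LIBRARIES=./libinterpose.dylib ./target */
    #include <sys/types.h>
    #include <fcntl.h>
    #include <stdarg.h>
    #include <stdio.h>

    static int my_open(const char *path, int flags, ...)
    {
        mode_t mode = 0;
        if (flags & O_CREAT) {           /* mode is only passed with O_CREAT */
            va_list ap;
            va_start(ap, flags);
            mode = (mode_t)va_arg(ap, int);
            va_end(ap);
        }
        fprintf(stderr, "[interpose] open(\"%s\")\n", path);
        return open(path, flags, mode);
    }

    /* dyld reads (replacement, original) pairs from the __interpose section. */
    __attribute__((used, section("__DATA,__interpose")))
    static struct { const void *replacement; const void *original; }
    interposers[] = {
        { (const void *)my_open, (const void *)open }
    };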
You can replace Objective-C method implementations with method swizzling, e.g. JRSwizzle or Apple's method_exchangeImplementations (10.5+).
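A hedged sketch of swizzling driven from plain C through the Objective-C runtime API; the choice of -[NSObject description] and the helper selector name are arbitrary, and it assumes Foundation is linked (e.g. clang demo.c -framework Foundation -o demo):

    #include <objc/runtime.h>
    #include <objc/message.h>
    #include <stdio.h>

    /* Replacement implementation; calls through to the original, which ends
       up under the "original_description" selector after the exchange. */
    static id swizzled_description(id self, SEL _cmd)
    {
        fprintf(stderr, "description intercepted\n");
        return ((id (*)(id, SEL))objc_msgSend)(self,
                            sel_registerName("original_description"));
    }

    int main(void)
    {
        Class cls = objc_getClass("NSObject");
        SEL sel = sel_registerName("description");
        Method original = class_getInstanceMethod(cls, sel);

        /* Register the replacement under a spare selector, then swap. */
        class_addMethod(cls, sel_registerName("original_description"),
                        (IMP)swizzled_description, method_getTypeEncoding(original));
        method_exchangeImplementations(original,
            class_getInstanceMethod(cls, sel_registerName("original_description")));

        /* Any -description call now hits the replacement first. */
        id obj = ((id (*)(id, SEL))objc_msgSend)((id)cls, sel_registerName("new"));
        ((id (*)(id, SEL))objc_msgSend)(obj, sel);
        return 0;
    }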
If you want to modify library behavior systemwide, you're going to need to load into other processes' address spaces.
Two loading mechanisms originally designed for other purposes (input managers and scripting additions) are commonly abused for this purpose, but I wouldn't really recommend them.
mach_inject/mach_override are an open-source set of libraries for loading code and replacing function implementations, respectively; however, you're responsible for writing your own application which uses the libraries. (Also, take a look at this answer; you need special permissions to inject code into other processes.)
Please keep in mind that application patching/code injection for non-debugging purposes is strongly discouraged by Apple and some Mac users (and developers) are extremely critical of the practice. Much of this criticism is poorly informed, but there have been a number of legitimately poorly written "plug-ins" (particularly those which patch Safari) that have been implicated in application crashes and problems. Code defensively.
(Disclaimer: I am the author of a (free) APE module and an application which uses mach_inject.)