Is modularity important in libraries?

I'm about to start writing a library that aims to be as lightweight as possible.
This library will have several modules that can act independently, but can still work together to achieve a larger goal if the user so chooses.
Should I provide a means of compiling "just part" of the library? Should I wait until it gets big enough to be "worth it?" Where do I draw that line?

If you want anyone to use your library (and like it), then yes, you need to make it more than just modular. Modularity implies that "these components are designed to be used together, and if you use them with anything else, it'll be an uphill struggle".
Each of your modules must be as easy to use from my code as they are from yours.
You need to treat each component as a separate library, not just as a separate module.
It is up to the user which libraries to pull in and how to plug them into his own code. With some kind of module system, you have already made the architectural decisions and are trying to force the user's application into your design.
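To illustrate what "separate libraries" means in practice, here is a minimal C sketch (all names are hypothetical): two components that know nothing about each other, plus optional glue shipped as its own small library.

/* json.h -- a standalone component: no includes from the rest of the library */
typedef struct json_value json_value;
json_value *json_parse(const char *text);
void        json_free(json_value *v);

/* http.h -- another standalone component: knows nothing about json.h */
typedef struct http_response http_response;
http_response *http_get(const char *url);
const char    *http_body(const http_response *r);
void           http_response_free(http_response *r);

/* http_json.h -- OPTIONAL glue, shipped as its own small library.
   Only users who want both components pull this in. */
#include "json.h"
#include "http.h"
json_value *http_get_json(const char *url);

A user who only needs HTTP links only the HTTP library; nothing drags the JSON code in.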

Related

Text user interface library for Clojure

I'm writing an application in Clojure. The internals are already written, but what I am struggling with now is the interface.
I do intend at some point to use Seesaw for a GUI interface. However, I would also like to provide a text-based interface that can be used in a terminal, for instance.
I chose to implement the text-based interface first, and I do have something that works as a menu of sorts, but I have been wondering whether there are any libraries I could drop in that would make this easier and save me from writing a bunch of tests for something that someone else may already have tested and implemented.
However, despite searching under a number of different terms, I have yet to find anything like what I'm looking for.
Charva does some of what I want, but I have no need for a full ncurses-like interface, just something that will display a list of choices I can pick from and act on. Charva also isn't a fully native Java library; it depends on an ncurses shared library, and since I want this application to be portable, that will create more headaches. Ideally I would like to find something actually written in Clojure so I don't have to write interop code, but I can if necessary.
Am I just not looking hard enough or does a CLI-based menu interface / presentation layer for Clojure just not exist?

Embed and execute native code from memory

I want to do these two things with my application (Windows only):
Allow the user to insert (with a tool) native code into my application before starting it.
Run this user-inserted code straight from memory at runtime.
Ideally, it should be easy for the user to specify this code.
I have two ideas for how to do this that I'm considering right now:
The user would embed a native DLL into the application's resources. The application would load this DLL straight from memory using techniques from this article.
Somehow copy the assembly code of a DLL method specified by the user into my application's resources, and execute that code from the heap as described in this article.
Are there any better options for doing this? If not, any thoughts on what might cause problems with those solutions?
EDIT
I specifically do not want to use the LoadLibrary* calls, as they require the DLL file to already be on the hard drive, which I'm trying to avoid. I'm also trying to make disassembling harder.
EDIT
Some more details:
The application code is under my control and is native. I simply want to provide the user with a way to embed his own customized functions after my application is compiled and deployed.
User code can have arbitrary restrictions placed on it by me; that is not a problem.
The aim is to allow third parties to statically link code into a native application.
The obvious way to do this is to supply the third parties with the application's object files and a linker. This could be wrapped up in a tool to make it easy to use.
As ever, the devil is in the detail. In addition to object files, applications contain manifests, resources, etc. You need to find a linker that you are entitled to distribute. You need to use a compiler that is compatible with said linker. And so on. But this is certainly feasible, and likely to be more reliable than trying to roll your own solution.
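As a minimal sketch of that approach (all names are hypothetical): the application declares a hook it expects the third party to provide as an object file, and the wrapped-up linker resolves it at link time.

/* app_main.c -- shipped to third parties as app_main.obj */
#include <stdio.h>

/* Resolved at link time from the object file the third party supplies. */
extern void user_hook(void);

int main(void)
{
    printf("application starting\n");
    user_hook();            /* third-party code, statically linked in */
    printf("application running\n");
    return 0;
}

/* user_hook.c -- written by the third party, compiled to user_hook.obj */
#include <stdio.h>

void user_hook(void)
{
    printf("custom code linked into the app\n");
}

The wrapper tool would then invoke the bundled linker with something like link app_main.obj user_hook.obj /OUT:app.exe, plus whatever libraries, manifests and resources the application needs.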
Your option 2 is pretty much intractable in my view. It's viable for small amounts of self-contained code, but for any serious amount of code you cannot realistically hope for success without re-inventing the wheel that is your option 1.
For example, real code is going to link to Win32 functions, and how are you going to resolve those? You'd have to invent something just like a PE import table. So why do that when DLLs already exist? And if you invented your own PE-like file format for this code, how would anyone generate it? All the standard tools are in the business of making PE-format DLLs.
As for option 1, loading a DLL from memory is not supported by Windows, so you have to do all the work the loader would do for you if it were loading from a file. But if you want to load a DLL that is not present on disk, option 1 is your only choice.
Any half-competent hacker will readily pull the DLL out of the executing process, though, so don't kid yourself that running DLLs from memory will somehow protect your code from inspection.
This is something called "application virtualization"; there are third-party tools for it, which you can find on Google.
In a simple case, you may just load the "DLL" into memory, apply relocations, set up the imports, and call the entry point.
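For a sense of what "apply relocations, set up imports" amounts to, here is a heavily abbreviated sketch of manual mapping (no error handling; the relocation and import fix-ups are only outlined in comments; TLS callbacks and per-section page protections are skipped):

#include <windows.h>
#include <string.h>

typedef BOOL (WINAPI *DllEntryFn)(HINSTANCE, DWORD, LPVOID);

void *map_dll_from_memory(unsigned char *raw)
{
    IMAGE_DOS_HEADER *dos = (IMAGE_DOS_HEADER *)raw;
    IMAGE_NT_HEADERS *nt  = (IMAGE_NT_HEADERS *)(raw + dos->e_lfanew);

    /* 1. Reserve one block for the whole image, then copy the headers and
          each section to its virtual address. */
    unsigned char *base = (unsigned char *)VirtualAlloc(NULL,
        nt->OptionalHeader.SizeOfImage,
        MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
    memcpy(base, raw, nt->OptionalHeader.SizeOfHeaders);

    IMAGE_SECTION_HEADER *sec = IMAGE_FIRST_SECTION(nt);
    WORD i;
    for (i = 0; i < nt->FileHeader.NumberOfSections; i++)
        memcpy(base + sec[i].VirtualAddress,
               raw + sec[i].PointerToRawData,
               sec[i].SizeOfRawData);

    /* 2. Apply base relocations: walk IMAGE_DIRECTORY_ENTRY_BASERELOC and
          add (base - nt->OptionalHeader.ImageBase) to every fix-up. */

    /* 3. Resolve imports: for each IMAGE_IMPORT_DESCRIPTOR, LoadLibrary the
          named DLL and patch the IAT with GetProcAddress results. */

    /* 4. Call the entry point the way the loader would. */
    DllEntryFn entry =
        (DllEntryFn)(base + nt->OptionalHeader.AddressOfEntryPoint);
    entry((HINSTANCE)base, DLL_PROCESS_ATTACH, NULL);
    return base;
}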

Why create DLLs instead of compiling everything into one big executable?

I have seen (and built myself) a lot of small products in which the same piece of software is separated into one executable and several DLLs, where those DLLs are not just shared libraries done by somebody else but libraries written exclusively for this software, by the same developer team. (I'm not talking here about large-scale products which require hundreds of DLLs and share them extensively with other products.)
I understand that separating code into several parts, each one compiling into a separate DLL, is good from the point of view of a developer. It means that:
If a developer changes one project, he has to recompile only that one and its dependents, which can be much faster.
A project can be done by a single developer in a team, while other developers will just use provided interfaces, without stepping into the code.
Auto updates of the software may sometimes be faster, with lower server impact.
But what about the end user? Isn't it just bad to deliver a piece of software composed of one EXE & several DLLs, when everything could be grouped together? After all:
The user may not even understand what those files are and why they take up space on his hard disk,
The user may want to move the program, for example to save it on a USB flash drive; having one big executable makes that easier,
Most anti-virus software will check each DLL; checking one big executable will be much faster than checking a smaller executable plus dozens of libraries,
Using DLLs makes some things slower (for example, in the .NET Framework, a "good" library must be found and checked for a valid signature),
What happens if a DLL is removed or replaced by a bad version? Does every program handle this gracefully, or does it crash without even explaining what's wrong? (See the sketch below.)
Having one big executable has some other advantages.
So isn't it better, from the end user's point of view, for small and medium-size programs to be delivered as one big executable? If so, why are there no tools that make this easy (for example, a magic tool integrated into common IDEs which compiles the whole solution into one executable, not on every build, of course, but on demand or during deployment)?
This is somewhat similar to putting all CSS or all JavaScript files into one big file for the user. Having several files is much smarter for the developer and easier to maintain, but linking each page of a website to two files instead of dozens improves performance. In the same manner, CSS sprites are awful for designers, because they require much more work, but are better from the user's point of view.
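On the question of what happens when a DLL goes missing: a program that links to a DLL implicitly is typically killed by the loader, with a generic system error, before main even runs. It can only fail gracefully if it loads optional DLLs at runtime. A minimal sketch of that defensive pattern, assuming a hypothetical plugin.dll exporting do_work:

#include <windows.h>
#include <stdio.h>

typedef int (*do_work_fn)(int);

int main(void)
{
    /* Load the optional DLL at runtime instead of at process start. */
    HMODULE mod = LoadLibraryA("plugin.dll");
    if (!mod) {
        fprintf(stderr, "plugin.dll not found; running without it\n");
        return 1;
    }

    /* Guard against a wrong or outdated version of the DLL. */
    do_work_fn do_work = (do_work_fn)GetProcAddress(mod, "do_work");
    if (!do_work) {
        fprintf(stderr, "plugin.dll is the wrong version (no do_work)\n");
        FreeLibrary(mod);
        return 1;
    }

    printf("result: %d\n", do_work(21));
    FreeLibrary(mod);
    return 0;
}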
It's a tradeoff
(You figured that out yourself already ;))
For most projects, the customer doesn't care about how many files get installed, but he cares how many features are completed in time. Making life easier for developers benefits the user, too.
Some more reasons for DLLs
Some libraries don't play well together in the same build, but can be made to behave in a DLL (e.g. one DLL may use WTL3, the other requires WTL8).
Some of the DLLs may contain components to be loaded into other executables (global hooks, shell extensions, browser add-ons).
Some of the DLLs might be 3rd party, only available as DLLs.
There may be reuse within the company: even if you see only one "public" product, it might be used in a dozen internal projects sharing that DLL.
Some of the DLLs might have been built with a different environment that's not available to all developers in the company.
Standalone EXE vs. Installed product
Many products won't work as a standalone executable anyway. They require installation, and a user who doesn't touch things he's not supposed to touch. Whether they ship one binary or many doesn't matter.
Build Time Impact
Maybe you underestimate the impact of build times, and of maintaining a stable build, for large projects. If a build takes even 5 minutes, you could euphemistically call that "making developers think ahead instead of tinkering until it seems to work", but it's a serious time eater and a serious distraction.
Build time of a single project is hard to improve. Working on VC9, build parallelization within one project is shaky, as is the incremental linker. Link times are especially hard to "optimize away" by faster machines.
Developer Independence
Another thing you might underestimate.
To use a DLL, you need a .dll and a .h.
To compile and link source code, you usually need to set up include directories, output directories, install third-party libraries, and so on. It's a pain, really.
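For example (hypothetical names), the whole contract a consumer needs can fit in a single header shipped next to the DLL:

/* mathlib.h -- everything a consumer of mathlib.dll needs. */
#ifdef MATHLIB_EXPORTS
#  define MATHLIB_API __declspec(dllexport)   /* building the DLL itself */
#else
#  define MATHLIB_API __declspec(dllimport)   /* consuming the DLL */
#endif

MATHLIB_API double mathlib_scale(double value, double factor);

The consumer includes the header and links against the import library; he never needs the build environment that produced the DLL.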
Yes, it is better IMHO - and I always use static linking for exactly the reasons you give, wherever possible. Lots of the reasons that dynamic linkage was invented for (saving memory, for example) no longer really apply. OTOH, there are architectural reasons, for example plugin architectures, why dynamic linking may be preferable to static.
I think your general point about considering carefully the final packaging of deliverables is well made. In the case of JavaScript such packaging is indeed possible, and with compression makes a significant difference.
I've done lots of projects and never met an end user who had any problem with some DLL files residing on his box.
As a developer I would say yes, it could matter. As an end user... who cares?
Yes, it may often be better from the end user's point of view. However, the benefits to the developer (and the development process) that you mention often mean that a business will prefer the cost-effective option.
It's a feature that too few users will appreciate, and that will cost a non-trivial amount to deliver.
Remember that we on StackOverflow are "above average" users. How many (non-geek) family members and friends do you have that would really value the ability to install their software to a USB stick?
The big advantages of DLLs come from the boundaries and independence they introduce.
For example, in C/C++ only exported symbols are visible across a DLL boundary. Imagine a module A with a global variable scale and a module B with another global variable scale: if you link everything together into one binary, you are headed for disaster; in this case a DLL can help you (see the sketch below).
You can also distribute those DLLs as components to customers who don't use exactly the same compiler/linker options, and this is often a good way to do cross-language interop.
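A sketch of that scale example (hypothetical code): each DLL keeps its own global, and only the functions marked for export cross the boundary.

/* module_a.c -- built as a.dll */
double scale = 2.0;   /* not exported: stays private to a.dll */
__declspec(dllexport) double a_apply(double v) { return v * scale; }

/* module_b.c -- built as b.dll */
double scale = 10.0;  /* a different 'scale'; no clash across DLL boundaries */
__declspec(dllexport) double b_apply(double v) { return v * scale; }

If the same two files were instead linked into a single executable, the duplicate definition of scale would be a link-time error, or a subtle bug if it somehow went unnoticed.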

Modular Extension

I came across the term Modular Extension as a requirement of an application I am developing. Does anybody know what a Modular Extension is all about?
In general, if something is modular it means it's independent of the rest of your application, so that you can switch it on or off as needed, or remove it entirely, without affecting other things.
If something is an extension it means it's not considered a core part of your application, but rather separate functionality that can be developed on its own. Usually, the ability to write extensions implies a relatively well thought-out design and a sophisticated API that allows outside clients to get at the relevant internals of your core application.
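One common shape for such an extension point, as a hedged C sketch (all names hypothetical): the core application defines a small API table, and every extension is a shared library exporting a single well-known entry function.

/* extension.h -- the contract between the core application and extensions. */
typedef struct core_api {
    void (*log)(const char *msg);              /* services the core exposes */
    int  (*register_command)(const char *name,
                             void (*handler)(void));
} core_api;

/* Every extension exports exactly this function. The core discovers the
   shared library at startup (e.g. via dlopen or LoadLibrary) and calls it,
   handing the extension the API table it may use. */
int extension_init(const core_api *api);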
Otherwise, though, your question is a little too generic to give a precise answer without more information.

Best way to inject functionality into a binary

What would be the best way of inserting functionality into a binary application (3rd party, closed source)?
The target application is on OSX and seems to have been compiled using gcc 3+. I can see the listing of functions implemented in the binary, and I have debugged and isolated one particular function which I would like to call remotely.
Specifically, I would like to call this function - let's call it void zoomByFactor(x,y) - when I receive certain data from a complex HIDevice.
I can easily modify or inject instructions into the binary file itself (ie. the patching does not need to occur only in RAM).
What would you recommend as a way of "nicely" doing this?
Edit:
I do indeed need the entire application, so I can't ditch it and use a library. (For those who need an ethical explanation: this is a proprietary piece of CAD software whose company website hasn't been updated since 2006. I have paid for this product (quite a lot of money for what it is, really) and have project data which I cannot easily migrate away from it. The product suits me just fine as it is, but I want to use a new HID which I recently got. I've examined the internals of the application, and I'm fairly confident that I can call the correct function with the relevant data and get it to work properly.)
Here's what I've done so far, and it is quite ghetto.
I've already modified parts of the application through this process:
xxd -g 0 binary > binary.hex
cat binary.hex | awk 'substitute work' > modified.hex
xxd -r modified.hex > newbinary
chmod 777 newbinary
I'm jumping through hoops like this because the binary is almost 100 MB.
The gist of what I'm thinking is that I'd jmp somewhere in the main application loop, launch a thread, and return to the main function.
Now, the questions are: Where can I insert the new code? Do I need to modify symbol tables? Alternatively, how could I make a dylib load automatically, so that the only "hacking" I need to do is insert a call to a normally loaded dylib into the main function?
For those interested in what I've ended up doing, here's a summary:
I've looked at several possibilities. They fall into two camps: runtime patching and static binary-file patching.
As far as file patching is concerned, I essentially tried two approaches:
modifying the assembly in the code segments (__TEXT) of the binary.
modifying the load commands in the Mach-O header.
The first method requires there to be free space, or methods you can overwrite, and it suffers from extremely poor maintainability: any new binary will need to be hand-patched all over again, especially if its source code has changed even slightly.
The second method was to try to add an LC_LOAD_DYLIB entry to the Mach-O header. There aren't many Mach-O editors out there, so it's hairy, but I actually modified the structures so that my entry was visible with otool -l. However, this didn't actually work: at runtime I got dyld: bad external relocation length. I'm assuming I need to muck around with import tables etc., and this is way too much effort to get right without an editor.
The second path was to inject code at runtime. There isn't much out there on how to do this, even for apps you have control over (i.e. a child application you launch). Maybe there's a way to fork() and hook the initialization process, but I never got that working.
There is SIMBL, but this requires your app to be Cocoa, because SIMBL poses as a system-wide InputManager and selectively loads bundles. I dismissed this because my app is not Cocoa, and besides, I dislike system-wide stuff.
Next up was mach_inject and the mach_star project. There is also a newer project called PlugSuit, hosted at Google, which seems to be nothing more than a thin wrapper around mach_inject.
mach_inject provides an API to do what the name implies. I did find a problem in the code, though: on 10.5.4, the mmap call in the mach_inject.c file requires MAP_SHARED or'd with MAP_READ, or else the mmap will fail.
Aside from that, the whole thing actually works as advertised. I ended up using mach_inject_bundle to do what I had intended to do with the static addition of a dylib to the Mach-O header: namely, launching a new thread on module init that does its dirty business.
Anyways, I've made this a wiki. Feel free to add, correct or update information. There's practically no information available on this kind of work on OSX. The more info, the better.
In MacOS X releases prior to 10.5 you'd do this using an Input Manager extension. Input Managers were intended to handle things like input for non-Roman languages, where the extension could pop up a window to input the appropriate glyphs and then pass the completed text to the app. The application only needed to make sure it was Unicode-clean and didn't have to worry about the exact details of every language and region.
Input Manager was wildly abused to patch all sorts of unrelated functionality into applications, often destabilizing the app. It was also becoming an attack vector for trojans such as "Oompa-Loompa". MacOS 10.5 tightens the restrictions on Input Managers: it won't run them in a process owned by root or wheel, nor in a process which has modified its uid. Most significantly, 10.5 won't load an Input Manager into a 64-bit process, and Apple has indicated that even 32-bit use is unsupported and will be removed in a future release.
So if you can live with the restrictions, an Input Manager can do what you want. Future MacOS releases will almost certainly introduce another (safer, more limited) way to do this, as the functionality really is needed for language input support.
I believe you could also use the DYLD_INSERT_LIBRARIES method.
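That approach avoids touching the binary entirely: dyld loads your dylib into the target process at launch and runs its constructor. A minimal sketch (the actual zoomByFactor wiring is left out, and this may not work if the target binary is restricted):

/* hook.c -- build:  cc -dynamiclib hook.c -o libhook.dylib
   launch: DYLD_INSERT_LIBRARIES=./libhook.dylib /path/to/App */
#include <stdio.h>
#include <pthread.h>

static void *worker(void *arg)
{
    /* Talk to the HID device here and call into the host app as needed. */
    (void)arg;
    return NULL;
}

/* Runs inside the target process as soon as dyld loads the dylib. */
__attribute__((constructor))
static void injected_init(void)
{
    pthread_t t;
    fprintf(stderr, "libhook loaded into process\n");
    pthread_create(&t, NULL, worker, NULL);
}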
This post is also related to what you were trying to do:
I recently took a stab at injection/overriding using the mach_star sources. I ended up writing a tutorial for it since documentation for this stuff is always so sketchy and often out of date.
http://soundly.me/osx-injection-override-tutorial-hello-world/
Interesting problem. If I understand you correctly, you'd like to add the ability to remotely call functions in a running executable.
If you don't really need the whole application, you might be able to strip out the main function and turn it into a library file that you can link against. It'll be up to you to figure out how to make sure all the required initialization occurs.
Another approach could be to act like a virus. Inject a function that handles the remote calls, probably in another thread. You'll need to launch this thread by injecting some code into the main function, or wherever else is appropriate. Most likely you'll run into major issues with initialization, thread safety, and/or maintaining proper program state.
The best option, if it's available, is to get the vendor of your application to expose a plugin API that lets you do this cleanly and reliably in a supported manner.
If you go with either hack-the-binary route, it'll be time consuming and brittle, but you'll learn a lot in the process.
On Windows this is simple to do and actually very widely done; it is known as DLL/code injection.
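For reference, the classic Windows recipe (a condensed sketch with minimal error handling): allocate memory in the target process, write the DLL's path there, and start a remote thread at LoadLibraryA.

#include <windows.h>
#include <string.h>

/* Inject dll_path into the process identified by pid. */
BOOL inject_dll(DWORD pid, const char *dll_path)
{
    HANDLE proc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
    if (!proc) return FALSE;

    /* 1. Allocate memory in the target and copy the DLL path into it. */
    SIZE_T len = strlen(dll_path) + 1;
    LPVOID remote = VirtualAllocEx(proc, NULL, len, MEM_COMMIT, PAGE_READWRITE);
    WriteProcessMemory(proc, remote, dll_path, len, NULL);

    /* 2. kernel32.dll is mapped at the same base in every process, so our
          own address for LoadLibraryA is valid in the target too. */
    LPTHREAD_START_ROUTINE load =
        (LPTHREAD_START_ROUTINE)GetProcAddress(GetModuleHandleA("kernel32.dll"),
                                               "LoadLibraryA");

    /* 3. The remote thread effectively calls LoadLibraryA(dll_path). */
    HANDLE thread = CreateRemoteThread(proc, NULL, 0, load, remote, 0, NULL);
    if (thread) { WaitForSingleObject(thread, INFINITE); CloseHandle(thread); }

    CloseHandle(proc);
    return thread != NULL;
}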
There is a commercial SDK for OSX which allows doing this: Application Enhancer (free for non-commercial use).
