Software design: how to get DLL version number - Windows

I have a DLL used in a compliance scenario (the details of which are irrelevant). The important point is that the main executable must display the DLL version number. My solution was for the DLL to have a function that returns its own version, i.e. it obtains it from its own version resource and returns it as a string.
My reviewer says that the main program should work out the DLL version number. He even gave me some code to get the DLL module handle and extract the version using that.
My question is, which is a better design and why? My feeling is that, using OO principles, I should ask the DLL for its version number. Doing it the other way means that the main program needs to know how the version information is stored and is hence more tightly coupled to the implementation.
Note that I know exactly how to extract the version information from a DLL. My question is about the best place for the code that does this.

Can you clarify the environment that you're working in? For now, since you've already mentioned getting the module handle, I'll assume you're using C++ and calling one of a handful of Win32 functions (GetModuleHandle, LoadLibrary etc).
First of all, I'd be careful about applying OO principles in too wide a context. The object-oriented paradigm helps you structure your software in a more maintainable and understandable way, but the problem you're describing sounds like it may stretch outside the boundaries of your application. If you want to get information about a separate resource, such as a DLL, you should consider using a standard approach, to ensure that your code stays decoupled from the items it needs to inspect.
If you introduce a function into the DLL to return the version number to your main application, you have created a tight coupling between your main application and any DLL that needs to supply its version information (by essentially defining a bespoke API or interface for this).
You should consider using standard, platform-wide functionality to retrieve the information instead. This will allow your application to read the version of any DLL for which it can obtain a handle.
Assuming you have an HMODULE for the dll (and you're using C++), call the following functions to get the version...
GetModuleFileNameEx (to get the full path and filename of the DLL if you don't already know this)
http://msdn.microsoft.com/en-us/library/windows/desktop/ms683198(v=vs.85).aspx
using this filename, call
GetFileVersionInfoSize (look these up on MSDN)
This will tell you some crucial information about the file's version metadata (how much info, if any, the file has). Assuming this function succeeds, call
GetFileVersionInfo
This will load all the file info metadata into a buffer, then call
VerQueryValue
Supply '\' as the lpSubBlock parameter to get the standard file info metadata (including the version number)
The above functions will allow you to write code to get the version number of any module that your code can get a handle to.
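As a reference, here is a minimal C++ sketch of that sequence (error handling trimmed; it assumes the module is loaded in your own process, so plain GetModuleFileName suffices instead of GetModuleFileNameEx, and you need to link against Version.lib):

#include <windows.h>
#include <string>
#include <vector>

std::wstring GetDllVersionString(HMODULE hModule)
{
    wchar_t path[MAX_PATH] = {};
    // GetModuleFileName is enough for a module in the current process;
    // GetModuleFileNameEx (psapi) is only needed for other processes.
    if (!GetModuleFileNameW(hModule, path, MAX_PATH))
        return L"";

    DWORD handle = 0;
    DWORD size = GetFileVersionInfoSizeW(path, &handle);
    if (size == 0)
        return L"";                       // no version resource at all

    std::vector<BYTE> buffer(size);
    if (!GetFileVersionInfoW(path, 0, size, buffer.data()))
        return L"";

    // "\" as the sub-block returns the fixed file info, including the version.
    VS_FIXEDFILEINFO* info = nullptr;
    UINT len = 0;
    if (!VerQueryValueW(buffer.data(), L"\\", reinterpret_cast<void**>(&info), &len) || !info)
        return L"";

    return std::to_wstring(HIWORD(info->dwFileVersionMS)) + L"." +
           std::to_wstring(LOWORD(info->dwFileVersionMS)) + L"." +
           std::to_wstring(HIWORD(info->dwFileVersionLS)) + L"." +
           std::to_wstring(LOWORD(info->dwFileVersionLS));
}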
Of course, if you're using C# the solution is much simpler. Hope this helps...

Related

How to call functions defined by caller of a dll from dll's source in C/C++?

I want to call functions defined by the caller of a DLL from the DLL. I am working on a C/C++ application. The main application may or may not use certain DLLs, as per the configuration provided, and both sides (main application and DLL) need to invoke each other's functions. These examples explain fairly well how to create a DLL and call its functions at runtime.
https://learn.microsoft.com/en-us/cpp/build/walkthrough-creating-and-using-a-dynamic-link-library-cpp?view=vs-2019
https://learn.microsoft.com/en-us/cpp/build/linking-an-executable-to-a-dll?view=vs-2019
However, they don't explain how I can call functions defined in the main application from the DLL.
When I try to declare the same function in the DLL's headers, it fails with an error saying the function could not be found.
How can I tell the linker that a certain function will be available at run time? In gcc there is --export-dynamic, which can be used to export the symbols so that a shared object can later use them. Is there something similar on Windows?
Edit:
The following discussion gives a possible solution; here the caller must pass the function to the DLL as an argument.
How to call a function defined in my exe inside my DLL?
This will be a little cumbersome in my situation. I am porting a Linux application to Windows, and there are many functions that the DLL calls. Passing all of them to the DLL as arguments doesn't sound like a good idea, although it is doable as a last resort. The other downside I see with this approach is that the main application would have to be changed every time a plug-in (DLL) wants to use a new function of the main application.
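One way to keep that manageable (a hedged sketch, not from the original thread; names such as HostApi, PluginInit and PluginDoWork are made up) is to pass a single table of function pointers to the DLL once, so adding a new host function only means adding a field to the shared struct:

// Shared header, included by both the EXE and the DLL.
struct HostApi {
    void (*log_message)(const char* text);
    int  (*read_config)(const char* key, char* value, int value_size);
    // add further host functions here as plug-ins need them
};

// --- DLL side ---
static const HostApi* g_host = nullptr;

// The EXE calls this once, right after LoadLibrary, to hand over its callbacks.
extern "C" __declspec(dllexport) void PluginInit(const HostApi* host)
{
    g_host = host;
}

// Any later export can call back into the EXE through the stored table.
extern "C" __declspec(dllexport) void PluginDoWork()
{
    if (g_host && g_host->log_message)
        g_host->log_message("plug-in doing some work");
}

// --- EXE side (sketch) ---
// static void LogMessage(const char* text) { /* ... */ }
// static int  ReadConfig(const char* key, char* value, int size) { /* ... */ return 0; }
// HostApi api = { &LogMessage, &ReadConfig };
// auto init = reinterpret_cast<void(*)(const HostApi*)>(GetProcAddress(hDll, "PluginInit"));
// if (init) init(&api);

For completeness, the closest Windows analogue to --export-dynamic is to mark functions in the EXE itself with __declspec(dllexport); the linker then produces an import library for the EXE that the DLL can link against. That works, but it couples the DLL build to the EXE, which is why the callback-table approach is often preferred for plug-ins.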

How to Get Some Basic Info from a DLL File?

Assume you have a .DLL file, and you want to know some basic information regarding it,
for example:
Is it a managed (.NET) or non-managed (e.g. COM) DLL?
If it's managed, what minimal version of .NET does it require?
(And if it's non-managed, is there any other minimum version that it requires? For example a minimum version of Windows, or of something else.)
Any other kind of such metadata
Please note that I am less interested in seeing the Function Headers (or Classes) that are inside.
Visual Studio's Object Browser can show the functions/classes very well,
yet I did not find a way to make Object Browser show the metadata that I mentioned above.
So maybe another utility can do it?

Delphi link to windows dll statically or dynamically

I am aware that implicitly linking to libraries at load time can lead to performance increases, so I was wondering whether it is good practice to link in this way at compile time, thus increasing executable size (admittedly this is only marginal), compared to linking explicitly at runtime. My question is: when linking against Microsoft Windows DLL files located in System32, is it 'better' to link at load time, as you can be mostly certain that the libraries will be present, or to follow the explicit approach?
The language used is Delphi (Pascal) and the library in question is WTsAPI32.dll - Terminal Services.
EDIT: As pointed out, my choice of language was incorrect and has been amended. Also, having only ever extensively linked to libraries on Unix, my comments about executable size can be omitted; I believed at the time I was in fact referring to static linking, which bundles the library code into the executable, and I now realise this is impossible when using DLL files (DUH!). Thanks all.
The two forms of DLL linking are perhaps better named implicit and explicit. Implicit linking is what you refer to as static linking, and explicit linking is what you refer to as runtime linking.
For implicit linking the linker writes entries into the import table of the executable file. This import table is metadata that is used by the loader to resolve DLL imports at module load time. A stub function is included for each implicit import that is only a few bytes in size. The executable size implications of implicit linking are negligible.
With explicit linking the imported function's address is resolved by a call to GetProcAddress. This call is made when the programmer chooses. If the DLL or the function cannot be resolved, the programmer can code fall back behaviour. There are size implications to explicit linking that I estimate to be similar to implicit linking. If the function address is evaluated once and remembered between calls then the performance characteristics are similar to implicit linking.
My advice is as follows:
Prefer implicit linking. It is more convenient to code.
If the DLL may not be present, use explicit linking.
If the DLL must be loaded using a full path, use explicit linking.
If you want to unload the DLL during program execution, use explicit linking.
You specifically mention Windows DLLs. You can safely assume that they will be present. Don't try to code to allow your program to run in case user32.dll is missing. Some functions may not be present in older versions of Windows. If you support those older versions you'll need to use explicit linking and provide a fallback. Decide which version you support and use MSDN to be sure that a function is available on your minimum supported platform.
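The same Win32 calls underlie the explicit route whatever the language; here is a minimal C++ sketch of explicit linking with a graceful fall-back (the DLL and function names are made up for illustration):

#include <windows.h>

typedef int (WINAPI *DoSomethingFn)(int);

int CallDoSomethingOrFallBack(int value)
{
    HMODULE dll = LoadLibraryW(L"optional.dll");
    if (!dll)
        return value;                        // DLL missing: degrade gracefully

    int result = value;
    auto fn = reinterpret_cast<DoSomethingFn>(GetProcAddress(dll, "DoSomething"));
    if (fn)
        result = fn(value);                  // function found: use it

    FreeLibrary(dll);
    return result;
}

In Delphi the shape is the same, just with LoadLibrary and GetProcAddress from the Windows unit and a procedural-type variable in place of the typedef.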
If your only two options are static linking and run-time dynamic linking, then the latter is the best choice for linking with Windows DLLs because it's your only choice. You cannot link statically to a DLL because DLLs are exclusively for dynamic linking; that's what the D stands for. Microsoft does not provide static libraries for the OS modules, so you cannot link to them statically.
But those typically aren't your only two options. There's a third, namely load-time dynamic linking.
In Delphi, you use load-time dynamic linking by marking a function declaration external and specifying the name of the DLL where the function resides. If you use the function, then an entry is created in your module's import table, and when the OS loads your module, it reads the table, loads the referenced DLL, looks up the address of the function, and stores the address in your program's memory image so that your program can call it directly.
You use run-time dynamic linking by declaring a function pointer, and then using LoadLibrary and GetProcAddress to look up the function's address prior to calling it. In newer Delphi versions, you can also declare a function in the same style that load-time dynamic linking uses, but then mark it with delay. In that case, the Delphi run-time library will call LoadLibrary and GetProcAddress on your behalf the first time you call the function.
The size differences are negligible. Run-time dynamic linking requires your program to contain code to load and link to libraries, but load-time dynamic linking stores more function references in the import table.
Run-time dynamic linking offers more flexibility in the face of uncertain DLL availability. With load-time dynamic linking, if a DLL is missing, or if it doesn't have all the functions mentioned in your import table, then the OS will fail to load your program — none of your code will run. With run-time dynamic linking, however, you have the opportunity to recover from the problem. You can disable certain parts of your program that the missing DLL depends on, or you can search for DLLs in non-standard places, or you can provide alternative implementations of missing functions.
If the functions you're calling are integral to your program's ability to operate, and there's ample reason to expect the functions to be present wherever your program is installed, then you should choose to link at load time. It allows you to write simpler code. You can be confident that you'll have the required functions if they are available on a certain version of Windows that you check for in your installer, or if they're provided by DLLs that you distribute with your program.
On the other hand, if the functions you're calling are optional, then you should prefer to link at run time. Use that for loading plug-ins, or for taking advantage of advanced OS features while maintaining backward compatibility. (For example, you might want to take advantage of Windows Vista theme support when it's present, but still allow your program to run on Windows XP.)
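To make the backward-compatibility case concrete, here is a small C++ sketch of run-time linking to a single optional function: GetTickCount64 exists only on Windows Vista and later, while GetTickCount is always available.

#include <windows.h>

ULONGLONG TickCountCompat()
{
    typedef ULONGLONG (WINAPI *GetTickCount64Fn)(void);

    // kernel32.dll is always loaded, so GetModuleHandle is enough here.
    HMODULE kernel32 = GetModuleHandleW(L"kernel32.dll");
    auto fn = reinterpret_cast<GetTickCount64Fn>(
        GetProcAddress(kernel32, "GetTickCount64"));

    // Fall back to the 32-bit tick count on pre-Vista systems.
    return fn ? fn() : GetTickCount();
}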
Why do you think that compile-time linking to dynamic libraries would increase EXE size? I believe you are misled by the somewhat poor choice of terms used in Windows programming since long ago. Let us instead use the relative terms "early binding" and "late binding" for the choice of who should search for procedure names: the compiler/loader, or the programmer's custom code.
Using early binding (a.k.a. static linking against a dynamic library), your EXE contains entries like these (in special tables):
DLL1 Name:
  procedure "aaaaa" into the variable $1234
  procedure "bbbbb" into the variable $5678
DLL2 Name:
  procedure "ccccc" into the variable $4567
...et cetera.
Now, when you turn this into runtime loading (dynamic linking against dynamic libraries), it would look like this:
VarH1 := SafeLoadLibrary(DLL1 Name);
if Error-Loading-DLL then do-error-handling;
Var1234 := GetProcAddress(VarH1, "aaaaa");
if Error-Searching-For-Function then do-error-handling;
Var5678 := GetProcAddress(VarH1, "bbbbb");
if Error-Searching-For-Function then do-error-handling;
...et cetera.
Obviously, in the latter case your EXE contains all the same values as in the first case, but on top of that it contains a lot of code to deal with those values that was simply absent before.
So, while the EXE size difference is not really large by today's memory sizes, it is still in favour of early binding (static linking against a dynamic library).
Then what are the benefits of late binding? For example, you can load different DLLs from different paths, determined at runtime by configuration: flexibility and avoidance of DLL Hell (funnily enough, the concept of avoiding DLL Hell works against the concept of saving space). You can make your application work with limited functionality if a DLL fails to load, whereas a statically bound EXE would simply not load at all: the graceful degradation concept. And at the very least you can give the user much better, more meaningful error messages than Windows ever could.
And a last word on where you got that concept of EXE size from. I believe you picked it up from talk about (attention!) static linking against static libraries. That is when OBJ/LIB/DCU files are not part of the distribution but are just temporary code containers that ultimately take their place inside the monolithic EXE. Then yes, your EXE has all those libraries inside itself and thus grows larger. However, that case has nothing to do with dynamic libraries (DLLs).
The wording chosen long ago overuses the static/dynamic terms for two closely related topics: how the library is loaded (compile time vs runtime) and how the functions inside the library are located, or bound (by the developer's custom code, or by some OS-provided or compiler-provided toolset before the first line of your source starts executing).
Due to that ambiguity, these close but different concepts start to overlap, and sometimes this leads to total confusion.
Now, what more static linking may give you on modern Windows versions: the WinSxS folder. Nowadays Windows tends to keep multiple versions of each system DLL, and your program may ask for a specific version of it (while in the System32 folder there would be the most recent version, which your program may not be prepared for). You can then create a special MANIFEST resource and compile it into the EXE, asking Windows to load DLLs not by name alone but by name plus version. You can replicate that functionality with dynamic loading as well, but with the Windows-provided toolset it is much easier.
Now you can decide which of those options do or do not matter for your particular case and make a somewhat better informed choice.
HTH.

Quickly testing a function that is a part of a big DLL project

I use VS2010 for C++ development, and I often end up working in some DLL project; after everything compiles nicely I would like to try running dummy data through some classes, but of course the fact that it is a DLL and not an EXE with a main makes that a no-go. So is there a simple way to do what I want, or am I cursed for eternity to copy/paste parts of a big project into a small testing one?
Of course changing the type of the project also works, but I would like to have something almost like an interactive shell for testing functions.
I know this isn't a library or anything, but if you want to run the DLL on Windows simply, without framing it into anything or writing a script, you can use rundll32.exe. It allows you to run any of the exported functions in the DLL. The syntax should be similar to:
rundll32.exe PathAndNameofDll,exportedFunctionName [ArgsToTheExportedFunction]
http://best-windows.vlaurie.com/rundll32.html is a good, simple, still-relevant tutorial on how to use this binary. It's got some cool tricks in there that may surprise you.
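One caveat worth adding: rundll32 expects the export to have a specific prototype (void CALLBACK EntryPoint(HWND, HINSTANCE, LPSTR, int)); calling an export with a different signature may appear to work but isn't reliable. A minimal sketch of a rundll32-friendly test export (the name RunTest is made up):

#include <windows.h>

// rundll32 passes everything after "dll,FunctionName" as lpszCmdLine.
extern "C" __declspec(dllexport)
void CALLBACK RunTest(HWND hwnd, HINSTANCE hinst, LPSTR lpszCmdLine, int nCmdShow)
{
    MessageBoxA(hwnd,
                (lpszCmdLine && *lpszCmdLine) ? lpszCmdLine : "(no arguments)",
                "RunTest", MB_OK);
}

// Invoked with:  rundll32.exe MyTest.dll,RunTest hello-world
// Note: on 32-bit builds the __stdcall name may be exported decorated, so a
// .def file (EXPORTS RunTest) is the safest way to get a clean export name.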
If you are wondering about the 64-bit version, it has the same name (seriously, Microsoft?); check it out here:
rundll32.exe equivalent for 64-bit DLLs
Furthermore, if you want to go low level, you could in theory use OllyDbg, which comes with a DLL loader for running DLLs you want to debug (in assembly); it lets you do the same kind of thing (call exported functions and pass arguments), but the debugger is aimed more at reverse engineering than code debugging.
I think you have basically two options.
The first is to use some sort of unit tests on the function. For C++ you can find a variety of frameworks; for one, take a look at CppUnit.
The second option is to open the DLL, get the function via the Win32 API and call it that way (this would still qualify as unit testing on some level). You could generalize this approach somewhat by creating an executable that does the above, parametrized with the required information (e.g. DLL path, function name), to achieve the "interactive shell" you mentioned -- if you decide to take this path, you can check out this CodeProject article on loading DLLs from C++.
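If you go the second route, a rough C++ sketch of such a parametrized runner might look like this (it assumes the exported test function takes no arguments and returns an int; adapt the typedef to your real signatures, and remember the export must be extern "C" or listed in a .def file so its name isn't mangled):

#include <windows.h>
#include <cstdio>

typedef int (*TestFn)(void);

int main(int argc, char** argv)
{
    if (argc < 3) {
        std::printf("usage: %s <dll-path> <function-name>\n", argv[0]);
        return 1;
    }

    HMODULE dll = LoadLibraryA(argv[1]);
    if (!dll) {
        std::printf("failed to load %s (error %lu)\n", argv[1], GetLastError());
        return 1;
    }

    TestFn fn = reinterpret_cast<TestFn>(GetProcAddress(dll, argv[2]));
    if (!fn) {
        std::printf("export %s not found\n", argv[2]);
        FreeLibrary(dll);
        return 1;
    }

    int result = fn();                       // run the function under test
    std::printf("%s returned %d\n", argv[2], result);

    FreeLibrary(dll);
    return 0;
}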
Besides the unit tests provided by CppUnit, you can still write your own small testing framework. That way you can set up your DLL projects as needed, load them, link them, whatever you want, and exercise them with whatever simple data you like.
This is valuable if you have many DLLs that depend on each other to do a certain job (legacy DLL projects in C++ tend to be hard to test, in my experience).
Once you have such a frame application, you can also look into the possibilities that CppUnit gives you and combine it with your test frame.
That way you will end up with a good set of automated tests which are still valuable unit tests. It is somewhat hard to start writing unit tests when a project already has a certain size. Having your own framework will let you write tests whenever you make a change to a DLL: just insert it into your framework, test what you expect it to do, and enhance your frame more and more.
The basic idea is to separate the test, the test runner, the test data and the asserts to be made.
I’m using python + ctypes to build quick testing routines for my DLL applications.
If you are using the extended attribute syntax, it will be easy for you.
Google for Python + ctypes + test unit and you will find several examples.
I would recommend Windows PowerShell cmdlets.
If you look at the article here - http://msdn.microsoft.com/en-us/magazine/cc163430.aspx you can see how easy it is to set up. Of course this article is mostly about testing C# code, but you can see how they talk about also being able to load any COM enabled DLL in the same way.
Here you can see how to load a COM assembly - http://blogs.technet.com/b/heyscriptingguy/archive/2009/01/26/how-do-i-use-windows-powershell-to-work-with-junk-e-mail-in-office-outlook.aspx
EDIT: I know a very successful storage virtualization software company that uses PowerShell extensively to test both its managed and unmanaged (driver) code.

In Ruby, what's the equivalent of Java's technique of limiting access to source in a cowork situation?

In Java when you compile a .java file which defines a class, it creates a .class file. If you provide these class files to your coworkers then they cannot modify your source. You can also bundle all of these class files into a jar file to package it up more neatly and distribute it as a single library.
Does Ruby have any features like these when you want to share your functionality with your coworkers but you don't want them to be able to modify the source (unless they ask you for the actual .rb source file and tell you that they want to change it)?
I believe the feature you are looking for is called "trust" (and a source code control repository). Ruby isn't compiled in the same way that Java is, so no you can't do this.
I have to say you are in a rough position, not wanting to share code with a coworker. However, given that this is an unassailable constraint, perhaps you could change the nature of the problem.
If you have a coworker that needs access to some service provided by a library of yours, perhaps you could expose it by providing a web/rest service instead of as a .rb file.
This way you can hide your code behind a web server, and if the network architecture allows for low latency when making these service calls, you can effectively achieve the same goal.
Trust is a lot easier though.
edit:
Just saw this on HN: http://blog.astrails.com/2009/5/12/ruby-http-require, which allows a Ruby file to include another file over HTTP instead of from the filesystem.
Ruby is
A dynamic, interpreted, open source programming language with a focus on simplicity and productivity.
So, like all interpreted languages, you need to give the source code to anyone who wants to execute your program/script.
By the way, searching for "compiled ruby" on Google returns quite a few results.
I don't think there is one. Ruby is purely an interpreted language, which means ruby interprets your source code directly in order to run it. Java is compiled, so there's an intermediate bytecode (the .class). You can obfuscate your ruby if you really wish, but it's probably more trouble than it's worth.
Just to make sure you realize, however, upwards of 95% of Java can be decompiled back into source using various free utilities, so in reality, Java's compilation isn't much better than distributing Ruby source.
This is not a language specific problem and one that can be managed more effectively through source control software.
There is a library called ruby2c that compiles a subset of Ruby into C code (which you can then compile into native code, if you want).
It was actually originally written as a Ruby code obfuscator (but has since been used for lots of other stuff, including Ruby Arduino development).
