msvcr100 and msvcrt - visual-studio

I noticed that a DLL compiled with the old VC6 (against msvcrt.dll) is still runnable and "callable" even from a DLL (or a program) that is linked against msvcr100.dll.
Very convenient, but do you think it's a good idea to have both runtimes loaded in the same process at the same time?

While it's not exactly a good idea to combine multiple C runtimes in one process, on Windows there is often no way around it. It should work without any problems as long as you don't pass structures implemented by the CRT (most common case: FILE*) between parts that use separate CRT implementations, at least in C. With C++, things get slightly more complex, with different exception-handling models and by virtue of C++ being C++.
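As a concrete illustration of the FILE* rule, here is a minimal sketch (the DLL and function names are hypothetical): whichever CRT creates the FILE must also close it, so the old DLL should export its own cleanup function rather than let the host call fclose.

    // legacy.dll built with VC6 against msvcrt.dll; the host links against msvcr100.dll.
    #include <cstdio>

    // Exported by legacy.dll -- the FILE* it returns belongs to *that* CRT:
    extern "C" __declspec(dllimport) FILE* legacy_open_log(const char* path);
    extern "C" __declspec(dllimport) void  legacy_close_log(FILE* f);

    void use_legacy_log()
    {
        FILE* f = legacy_open_log("app.log");
        // fclose(f);          // WRONG: hands the handle to the host's CRT (msvcr100.dll)
        legacy_close_log(f);   // correct: the CRT that opened the FILE also closes it
    }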

Related

Is it a good idea to call a DLL built with VC++ 6.0 from Visual Studio 2015

I have to deal with an ancient software module which only supports its API being called from VC++ 6.0.
Suppose I add an indirection layer, that is, wrap the module in a dynamic library written with VC++ 6.0 that forwards the API calls to the underlying module. The DLL would then be called from the VC++ tool chain provided in Visual Studio 2015.
My question is:
Is this doable in principle?
Are there any pitfalls one should pay attention to?
Can it be done in a better way?
Update: I guess that the C ABI is stable across different Windows versions as well as the corresponding supported VS versions. If the DLL is made with pure C, this might be done without too much trouble.
There are at least two issues preventing portability between versions/vendors: C++ object/class layout and the C/C++ memory allocators.
To avoid the first issue you should design the visible API as a set of C functions (like most of the Win32 API).
The memory issue can be solved by switching from new/malloc to one of the native Windows functions like LocalAlloc, HeapAlloc or CoTaskMemAlloc. Assuming your old .dll is not statically linked, it would theoretically be possible to force the newer code to use the malloc in msvcrt.dll, but that makes your build setup a bit fragile and goes against current best practices.
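A hedged sketch of both points (the names are made up): the visible API is a plain C function, and the buffer it hands out is allocated with CoTaskMemAlloc, so a caller built with any compiler/CRT can free it with CoTaskMemFree and no CRT heap crosses the boundary.

    #include <windows.h>
    #include <objbase.h>
    #include <cstring>

    extern "C" __declspec(dllexport)
    HRESULT WINAPI GetGreeting(wchar_t** out)
    {
        static const wchar_t text[] = L"hello from the old module";
        wchar_t* buf = static_cast<wchar_t*>(CoTaskMemAlloc(sizeof(text)));
        if (!buf) return E_OUTOFMEMORY;
        memcpy(buf, text, sizeof(text));
        *out = buf;
        return S_OK;
    }

    // Caller side (possibly built with VS2015):
    //   wchar_t* s = nullptr;
    //   if (SUCCEEDED(GetGreeting(&s))) { /* use s */ CoTaskMemFree(s); }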
Another alternative is for the middleman .dll to implement everything as a COM object:
EXTERN_C HRESULT WINAPI CreateMyObject(IMyObject**ppv) ...
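Filling in that one-liner, a rough sketch of what the consumer side could look like (IMyObject and its DoWork method are invented for illustration; the real interface would live in a header shared by both sides). The point is that a COM-style vtable layout is stable across MSVC versions.

    #include <windows.h>
    #include <unknwn.h>

    struct IMyObject : public IUnknown
    {
        virtual HRESULT STDMETHODCALLTYPE DoWork(int input, int* result) = 0;
    };

    // Exported by the VC++ 6.0 middleman DLL:
    extern "C" HRESULT WINAPI CreateMyObject(IMyObject** ppv);

    void Demo()
    {
        IMyObject* obj = nullptr;
        if (SUCCEEDED(CreateMyObject(&obj)))
        {
            int result = 0;
            obj->DoWork(42, &result);  // plain vtable call, no CRT types cross the boundary
            obj->Release();            // the object frees itself with its own CRT
        }
    }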

(Windows) How to make an auto-update function for a plugin?

I have a question. I'm writing a plugin (.dll) for an application (.exe), and I want to code an auto-update function for my plugin, but I ran into an issue: the update could not be applied. Because my plugin is loaded while the application is running, the file cannot be replaced at run time; the update only takes effect once the application has exited. So, how can I do it?
This is my code: http://codepad.org/4a22ccMa
Thanks!
(this answers the original question, which has been edited to something completely different)
It depends upon the operating system (which I guess might be Windows, since you speak of DLLs; on Linux you have shared objects (in ELF) with different semantics). Read Levine's book Linkers and Loaders.
You might read more about Dynamic Software Updates. This is an entire research subject with a lot of scientific literature about it. Read e.g. the paper about Kitsune: Efficient, General-purpose Dynamic Software Updating for C (at least to understand several issues in your question).
On Linux, you could rename(2) the old .so and dlopen(3) the new one (and probably dlclose the older one, but you should do that later, when no active call frame on the call stack points into the old plugin), and my manydl.c example shows that you can in practice dlopen a very large number (more than a million) of shared objects.
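A small sketch of that Linux sequence, with hypothetical file names and a hypothetical "plugin_entry" symbol (link the host with -ldl):

    #include <dlfcn.h>
    #include <cstdio>

    typedef void (*plugin_entry_t)(void);

    int main()
    {
        void* old_handle = dlopen("./plugin.so", RTLD_NOW);

        // ...later, an updated plugin has been downloaded as ./plugin.new.so...
        std::rename("./plugin.so", "./plugin.old.so");
        std::rename("./plugin.new.so", "./plugin.so");

        void* new_handle = dlopen("./plugin.so", RTLD_NOW);  // loads the new file
        if (new_handle)
        {
            plugin_entry_t entry =
                (plugin_entry_t)dlsym(new_handle, "plugin_entry");
            if (entry) entry();
        }

        // Close the old copy only once no call frame on the stack still uses it.
        if (old_handle) dlclose(old_handle);
        return 0;
    }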
On Windows (which I don't know) you probably need to dynamically load the new version of the plugin in a different file path. Probably, restarting your program after a plugin update should make things much easier.
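On the Windows side, a hedged sketch of the "different file path" idea could look like the following (UpdatePlugin and plugin_entry are made-up names): load the new DLL, rebind the entry point, then unload the old copy once nothing is executing inside it.

    #include <windows.h>

    typedef void (__cdecl *plugin_entry_t)(void);

    HMODULE        g_plugin = nullptr;
    plugin_entry_t g_entry  = nullptr;

    bool UpdatePlugin(const wchar_t* newDllPath)       // e.g. L"plugin_v2.dll"
    {
        HMODULE fresh = LoadLibraryW(newDllPath);
        if (!fresh) return false;

        plugin_entry_t entry =
            (plugin_entry_t)GetProcAddress(fresh, "plugin_entry");
        if (!entry) { FreeLibrary(fresh); return false; }

        HMODULE old = g_plugin;
        g_plugin = fresh;
        g_entry  = entry;
        if (old) FreeLibrary(old);   // safe only if no thread is still inside the old DLL
        return true;
    }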
(if you can afford that, switching to Linux might be very helpful, because I guess that it is much easier)
Notice that in some languages (and some of their implementations) replacing some code is much easier than in C or C++ on Windows. I guess that in the CLR (managed code, e.g. in C#) or in a JVM (e.g. in Java, Scala, Clojure) it should be easier. And in Common Lisp (at least with SBCL) it is quite easy (in particular since Common Lisp is a homoiconic language).
Take care of the continuation, i.e. of the call stack. You'll understand that your question (also related to orthogonal persistence & application checkpointing) is much deeper and more difficult than you have imagined. And upgrading classes and their instances is also very difficult.

How do compilation and linking happen at runtime?

In a tutorial I've encountered a new concept (for me) that I never thought was possible. Actually, I thought that compilation is an entirely pre-run-time process. This is the phrase from the tutorial: "Compile time occurs before link time (when the output of one or more compiled files are joined together) and runtime (when a program is executed). In some programming languages it may be necessary for some compilation and linking to occur at runtime".
My questions are:
Are the pre-run-time compilation and linking processes completely different from run-time compilation and linking? If yes, please explain the main differences.
How are code sections that need to be compiled (linked) during run-time marked, and where is that information kept? (This may differ from language to language; if possible, please give a specific example.)
Thank you very much for your time!
Runtime compilation
The best (most well-known) example I'm personally aware of is the just-in-time compilation used by Java. As you might know, Java code is compiled into bytecode which can be interpreted by the Java Virtual Machine. It's therefore different from, let's say, C++, which is first fully (preprocessed,) compiled (and linked) into an executable which can be run directly by the OS without any virtual machine.
The Java bytecode is instead interpreted by the VM, which maps it to processor-specific instructions. That being said, the JVM does JIT, which takes that bytecode and compiles it (during runtime) into machine code.

Here we arrive at your second question. Even in Java it can depend on which JVM you are using, but basically there are pieces of code called hotspots, the pieces of code that are run frequently and which might be compiled so the application's performance improves. This is done during runtime because the normal compiler does not have (or well might not have) all the necessary data to make a proper judgement about which pieces of code are in fact run frequently. Therefore JIT requires some kind of runtime statistics gathering, which is done in parallel with the program execution by the JVM. What kind of statistics are gathered, what can be optimised (compiled at runtime), etc. depends on the implementation (you obviously cannot do everything a normal compiler would do due to memory and time constraints - guess this partly answers the first question? You don't compile everything, and usually only a limited set of optimisations are supported in runtime compilation). You can try looking for such info, but from my experience it's usually very badly documented and hard to find (at least when it comes to official sources, not presentations/blogs etc.)
Runtime linking
The linker is a different pair of shoes. We cannot use the Java example anymore since Java doesn't really have a linker like C or C++ (instead it has a classloader which takes care of loading files and putting it all together).
Usually linking is performed by a linker after the compilation step (static linking); this has pros (no dependencies) and cons (a higher memory footprint since we cannot use a shared library, and when the library version changes you need to recompile your sources).
Runtime linking (dynamic/late linking) is actually performed by the OS, and it's the OS linker's job to first load shared libraries and then attach them to a running process. Furthermore, there are also different types of dynamic linking: explicit and implicit. This has the benefits of library sharing and of not having to recompile the source when the version number changes, but also drawbacks: what if different programs use the same library but require different versions (look up DLL hell)? So yes, those two concepts are also quite different.
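To make the explicit/implicit distinction concrete, here is a small sketch using the Win32 API (implicit linking is shown only in the comment, since it is mostly a build-time matter):

    // Implicit linking: declare the import and link against user32.lib;
    // the OS loader resolves it when the process starts:
    //   #pragma comment(lib, "user32.lib")
    //   MessageBoxW(nullptr, L"implicit", L"demo", MB_OK);

    // Explicit linking: the program loads the DLL and resolves the symbol itself.
    #include <windows.h>

    int main()
    {
        HMODULE user32 = LoadLibraryW(L"user32.dll");
        if (!user32) return 1;

        typedef int (WINAPI *MessageBoxW_t)(HWND, LPCWSTR, LPCWSTR, UINT);
        MessageBoxW_t pMessageBoxW =
            (MessageBoxW_t)GetProcAddress(user32, "MessageBoxW");
        if (pMessageBoxW)
            pMessageBoxW(nullptr, L"explicit", L"demo", MB_OK);

        FreeLibrary(user32);
        return 0;
    }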
Again, how it's all done and how it's decided what should be linked and how, is OS specific; for instance, Microsoft has the dynamic-link library concept.

Why is bytecode JIT compiled at execution time and not at installation time?

Compiling a program to bytecode instead of native code enables a certain level of portability, as long as a fitting virtual machine exists.
But I'm kinda wondering, why delay the compilation? Why not simply compile the byte code when installing an application?
And if that is done, why not adapt it for languages that directly compile to native code? Compile them to an intermediate format, distribute a "JIT" compiler with the installer and compile it on the target machine.
The only thing I can think of is runtime optimization. That's about the only major thing that can't be done at installation time. Thoughts?
Often it is precompiled. Consider, for example, precompiling .NET code with NGEN.
One reason for not precompiling everything would be extensibility. Consider those languages which allow use of reflection to load additional code at runtime.
Some JIT Compilers (Java HotSpot, for example) use type feedback based inlining. They track which types are actually used in the program, and inline function calls based on the assumption that what they saw earlier is what they will see later. In order for this to work, they need to run the program through a number of iterations of its "hot loop" in order to know what types are used.
This optimization is totally unavailable at install time.
The bytecode has been compiled just as well as the C++ code has been compiled.
Also, the JIT compilers, i.e. the .NET and Java runtimes, are massive and dynamic; and you can't foresee in a program which parts the app will use, so you need the entire runtime.
Also one has to realize that a language targeted to a virtual machine has very different design goals than a language targeted to bare metal.
Take C++ vs. Java.
C++ wouldn't work on a VM; in particular, a lot of the C++ language design is geared towards RAII.
Java wouldn't work on bare metal for many reasons; primitive types, for one.
EDIT: As delnan correctly points out, JIT and similar technologies, though hugely beneficial to bytecode performance, would likely not be available at install time. Also, compiling for a VM is very different from compiling to native code.

Creating GUI desktop applications that call into either OCaml or Haskell -- Is it a fool's errand?

In both Haskell and OCaml, it's possible to call into the language from C programs. How feasible would it be to create native applications for Windows, Mac, or Linux which made extensive use of this technique?
(I know that there are GUI libraries like wxHaskell, but suppose one wanted to just have a portion of your application logic in the foreign language.)
Or is this a terrible idea?
Well, the main risk is that while facilities exist, they're not well tested -- not a lot of apps do this. You shouldn't have much trouble calling Haskell from C; it looks pretty easy:
http://www.haskell.org/haskellwiki/Calling_Haskell_from_C
I'd say if there is some compelling reason to use C for the front end (e.g. you have a legacy app) and you really need a Haskell library, or want to use Haskell for some other reason, then, yes, go for it. The main risk is just that not a lot of people do this, so there is less documentation and there are fewer examples than for calling the other way.
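For illustration, a minimal sketch of the C side, along the lines of the wiki page above. It assumes a Haskell module containing `foreign export ccall fib :: Int -> Int` compiled with GHC; the function name fib is hypothetical here.

    #include "HsFFI.h"
    #include <stdio.h>

    extern "C" HsInt fib(HsInt n);   // normally declared by the GHC-generated *_stub.h

    int main(int argc, char* argv[])
    {
        hs_init(&argc, &argv);                       // start the Haskell runtime
        printf("fib 10 = %lld\n", (long long)fib(10));
        hs_exit();                                   // shut the runtime down
        return 0;
    }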
You can embed OCaml in C as well (see the manual), although this is not as commonly done as extending OCaml with C.
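And a hedged sketch of the C side of embedding OCaml, following the manual's "Interfacing C with OCaml" chapter. It assumes the OCaml code registered a callback with `Callback.register "fib" fib` (the name fib is hypothetical) and that the program is linked against the OCaml runtime.

    #include <caml/mlvalues.h>
    #include <caml/callback.h>
    #include <stdio.h>

    int main(int argc, char** argv)
    {
        (void)argc;
        caml_startup(argv);                          // initialise the OCaml runtime (Unix)
        const value* fib_closure = caml_named_value("fib");
        if (fib_closure)
        {
            value result = caml_callback(*fib_closure, Val_int(10));
            printf("fib 10 = %ld\n", Long_val(result));
        }
        return 0;
    }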
I believe that the best approach, even if both GUI and logic are written in the same language, is to run two processes which communicate via a human-readable, text-based protocol (a DSL of some sort). This architecture applies to your case as well.
Advantages are obvious: GUI is detachable and replaceable, automated tests are easier, logging and debugging are much easier.
I make extensive use of this by compiling Haskell shared libs that are called from outside Haskell.
Usually the tasks involved are to:
create the proper foreign export declarations
create Storable instances for any datatypes you need to marshal
create the C structures (or structures in the language you're using) to read this information
since I don't want to manually initialize the Haskell RTS, I add initialisation/termination code to the lib itself (DllMain on Windows, __attribute__((constructor)) on Unix); see the sketch after this list
since I no longer need any of them, I create a .def file to keep all the closure and RTS functions out of the export table (Windows)
use GHC to compile everything together
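For the initialisation step mentioned above, a sketch of the shim (Unix flavour shown; on Windows the same hs_init/hs_exit calls would go into DllMain). The library name is made up, and the shared lib is assumed to link in the GHC RTS.

    #include "HsFFI.h"

    static void library_init(void) __attribute__((constructor));
    static void library_exit(void) __attribute__((destructor));

    static void library_init(void)
    {
        static char   prog[]  = "libmyplugin";
        static char*  argv[]  = { prog, nullptr };
        static char** argv_p  = argv;
        static int    argc    = 1;
        hs_init(&argc, &argv_p);   // start the Haskell RTS when the .so is loaded
    }

    static void library_exit(void)
    {
        hs_exit();                 // shut the RTS down when the .so is unloaded
    }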
These tasks are rather robotic and structured, to the point that you could write something to automate them. In fact, what I use myself to do this is a tool I created which does dependency tracing on the functions you mark for export; it'll wrap them up and compile the shared lib for you, along with giving you the declarations in C/C++.
(Unfortunately, this tool is not yet on Hackage, because there is something I still need to fix and test a lot more before I'm comfortable doing so.)
The tool is available here: http://hackage.haskell.org/package/Hs2lib-0.4.8
Or is this a terrible idea?
It's not a terrible idea at all. But as Don Stewart notes, it's probably a less-trodden path. You could certainly launch your program as Haskell or OCaml, then have it do a foreign-function call right out of the starting gate—and I recommend you structure your code that way—but it doesn't change the fact that many more people call from Haskell into C than from C into Haskell. Likewise for OCaml.
