Currently our app structure is as follows:
Our C# GUI
Our managed C++ library
3rd-party unmanaged 32-bit C++ library
What we need is to make our application 64-bit while leaving the 3rd-party library 32-bit (there is no 64-bit version). The issue is that this library decodes large arrays (10-100 MB) all the time, so marshaling time is a concern.
A few options we have thought of:
Wrap the 3rd-party library in a Managed C++ ActiveX component and call it from C# - simple, but we expect heavy marshaling penalties
Use Boost.Interprocess on both sides - seems more complex, but probably faster (see the sketch below)
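For option 2, this is roughly what we imagine on the 64-bit side (a minimal sketch; the segment name and sizes are placeholders, and a 32-bit helper process hosting the library would map the same segment by name):

#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/interprocess/mapped_region.hpp>
#include <cstring>

namespace bip = boost::interprocess;

int main() {
    // Create (or replace) a named segment big enough for one decoded frame.
    bip::shared_memory_object::remove("DecoderSegment");
    bip::shared_memory_object shm(bip::create_only, "DecoderSegment",
                                  bip::read_write);
    shm.truncate(100 * 1024 * 1024);   // 100 MB, the upper bound above

    // Map it and write the encoded input straight into shared memory; the
    // 32-bit helper maps the same name and decodes in place, so the payload
    // never crosses the process boundary by copy.
    bip::mapped_region region(shm, bip::read_write);
    unsigned char* buf = static_cast<unsigned char*>(region.get_address());
    std::memset(buf, 0, region.get_size());   // placeholder for real data

    // ... signal the 32-bit worker here, e.g. with a bip::named_semaphore ...

    bip::shared_memory_object::remove("DecoderSegment");
    return 0;
}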
Any suggestions regarding which way to choose for the sake of execution speed? Are there other ways?
Related
I have to deal with one ancient software module which only supports its API being called from VC++ 6.0.
I could add an indirection layer, that is, wrap the module in a dynamic library written with VC++ 6.0 that forwards API calls to the underlying module. The DLL would then be called from code built with the VC++ toolchain provided in Visual Studio 2015.
My question is:
Is this doable in principle?
Are there any pitfalls one might want to pay attention to?
Can it be done in a better way?
Update: I guess that the C ABI is stable across different Windows versions as well as the corresponding supported VS versions. If the DLL exposes a pure C interface, this might be done without too much trouble.
There are at least two issues preventing portability between versions/vendors: C++ object/class layout and the C/C++ memory allocators.
To avoid the first issue you should design the visible API as a set of C functions (like most of the Win32 API).
The memory issue can be solved by switching from new/malloc to one of the native Windows functions like LocalAlloc, HeapAlloc or CoTaskMemAlloc. Assuming your old .dll is not statically linked, it would theoretically be possible to force newer code to use the malloc in msvcrt.dll, but that makes your build setup a bit fragile and goes against current best practices.
Another alternative is for the middleman .dll to implement everything as a COM object:
EXTERN_C HRESULT WINAPI CreateMyObject(IMyObject** ppv) ...
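For the flat-C route, here is a minimal sketch of what an exported function of the middleman DLL might look like (the function name and signature are hypothetical). Using CoTaskMemAlloc means both modules share one allocator, and the caller frees with CoTaskMemFree no matter which CRT either side links against:

#include <windows.h>
#include <objbase.h>
#include <cstring>

extern "C" __declspec(dllexport)
HRESULT WINAPI DecodeBuffer(const BYTE* input, DWORD inputLen,
                            BYTE** output, DWORD* outputLen)
{
    if (!input || !output || !outputLen)
        return E_POINTER;

    // ... call into the VC6-built module here; for the sketch we just copy.
    BYTE* result = static_cast<BYTE*>(CoTaskMemAlloc(inputLen));
    if (!result)
        return E_OUTOFMEMORY;
    std::memcpy(result, input, inputLen);

    *output = result;        // caller releases with CoTaskMemFree(*output)
    *outputLen = inputLen;
    return S_OK;
}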
I noticed that a DLL compiled with the old VC6 (linking against msvcrt.dll) is still loadable and callable even from a DLL (or a program) that is linked against msvcr100.dll.
Very convenient, but do you think it's a good idea to have both runtimes in the same process at the same time?
While it's not exactly a good idea to combine multiple C runtimes in one process, on Windows there is often no way around it. It should work without problems as long as you don't pass structures implemented by a CRT (most common case: FILE*) between parts using separate CRT implementations, at least in C. With C++, things get slightly more complex due to different exception-handling models and by virtue of C++ being C++.
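One way to stay on the safe side of that rule, sketched below with hypothetical names: make each module both allocate and free the memory that crosses its boundary, so every pointer goes back to the CRT heap it came from.

#include <cstdlib>

extern "C" {

__declspec(dllexport) char* Legacy_CreateBuffer(unsigned size)
{
    return static_cast<char*>(std::malloc(size));   // this module's CRT heap
}

__declspec(dllexport) void Legacy_FreeBuffer(char* p)
{
    std::free(p);                                   // same CRT heap
}

} // extern "C"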
Compiling a program to bytecode instead of native code enables a certain level of portability, so long as a fitting virtual machine exists.
But I'm kinda wondering, why delay the compilation? Why not simply compile the bytecode when installing an application?
And if that is done, why not apply the same idea to languages that compile directly to native code? Compile them to an intermediate format, distribute a "JIT" compiler with the installer, and compile on the target machine.
The only thing I can think of is runtime optimization. That's about the only major thing that can't be done at installation time. Thoughts?
Often it is precompiled. Consider, for example, precompiling .NET code with NGEN (for instance, by running ngen install YourApp.exe at install time).
One reason for not precompiling everything would be extensibility. Consider those languages which allow use of reflection to load additional code at runtime.
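A rough native analogue, as a sketch: code loaded with LoadLibrary at runtime is invisible to any install-time compilation pass, just as reflectively loaded assemblies are (the module and entry-point names below are hypothetical):

#include <windows.h>
#include <cstdio>

typedef int (*PluginEntry)(void);

int main() {
    // "plugin.dll" is a hypothetical module chosen only at runtime; no
    // install-time compiler could have known this process would load it.
    HMODULE mod = LoadLibraryA("plugin.dll");
    if (!mod) { std::printf("no plugin\n"); return 1; }

    PluginEntry entry = reinterpret_cast<PluginEntry>(
        GetProcAddress(mod, "PluginEntry"));
    int rc = entry ? entry() : -1;

    FreeLibrary(mod);
    return rc;
}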
Some JIT compilers (Java HotSpot, for example) use type-feedback-based inlining. They track which types are actually used in the program, and inline function calls based on the assumption that what they saw earlier is what they will see later. For this to work, they need to run the program through a number of iterations of its "hot loop" to learn which types are used.
This optimization is totally unavailable at install time.
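To make the idea concrete, here is a hand-written C++ analogue of the monomorphic inline cache that type-feedback inlining effectively produces; the types and the guard are invented for this sketch. The point is that the guess "this call site mostly sees Circle" can only come from watching the program run:

#include <cstdio>

struct Shape { virtual ~Shape() {} virtual double area() const = 0; };
struct Circle : Shape {
    double r;
    explicit Circle(double r) : r(r) {}
    double area() const override { return 3.14159265358979 * r * r; }
};

// The JIT learns that "Circle is what actually shows up here" only by
// running the hot loop; an install-time compiler never sees that profile.
double total_area(Shape* const* shapes, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        const Shape* s = shapes[i];
        // Monomorphic inline cache, written out by hand: guard + fast path.
        if (const Circle* c = dynamic_cast<const Circle*>(s)) {
            sum += 3.14159265358979 * c->r * c->r;   // inlined body
        } else {
            sum += s->area();                        // slow path: virtual call
        }
    }
    return sum;
}

int main() {
    Circle a(1.0), b(2.0);
    Shape* shapes[] = { &a, &b };
    std::printf("%f\n", total_area(shapes, 2));
    return 0;
}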
The bytecode has already been compiled, just as the C++ code has been compiled; only the final translation to the target machine remains.
Also, the JIT compilers, i.e. the .NET and Java runtimes, are massive and dynamic, and you can't foresee which parts of the runtime a program will use, so you need the entire runtime on the target machine.
Also, one has to realize that a language targeting a virtual machine has very different design goals from a language targeting bare metal.
Take C++ vs. Java.
C++ wouldn't work on a VM; in particular, a lot of the C++ language design is geared towards RAII.
Java wouldn't work on bare metal for many reasons - primitive types, for one.
EDIT: As delnan correctly points out, JIT and similar technologies, though hugely beneficial to bytecode performance, would likely not be available at install time. Also, compiling for a VM is very different from compiling to native code.
After some "games" with the Visual Studio Configuration Manager I found that every new C#/VB.NET project I create in my Visual Studio only has the 'x86' solution platform. It should be 'Any CPU', and I should be able to choose x86 or x64 if required. How do I reset these settings for all new projects?
Microsoft changed the default for EXEs to x86. Check out this post by Rick Byers:
http://blogs.msdn.com/b/rmbyers/archive/2009/06/09/anycpu-exes-are-usually-more-trouble-then-they-re-worth.aspx
Here's the list I've been using in our discussions to justify making x86 the default for EXE projects in Visual Studio:

Running in two very different modes increases product complexity and the cost of testing

* Often people don't realize the implications on native interop of architecture-neutral assemblies. It means you need to ensure that equivalent 32-bit and 64-bit versions of the native DLLs you depend on are available, and (most significantly) the appropriate one is selected automatically. This is fairly easy when calling OS APIs due to the OS re-mapping of c:\windows\system32 to c:\windows\syswow64 when running in the WOW and the extensive testing the OS team does. But many people who ship native DLLs alongside their managed app get this wrong at first and are surprised when their application blows up on 64-bit systems with an exception about their 32-bit DLL being in a bad format. Also, although it's much rarer than for native code, pointer-size bugs can still manifest in .NET (eg. assuming IntPtr is the same as Int32, or incorrect marshalling declarations when interopping with native code).
* Also, in addition to the rules you need to know to follow, there's just the issue that you've now really got twice as much code to test. Eg., there could easily be (and certainly have been many) CLR bugs that reproduce only on one architecture of the CLR, and this applies all the way across the stack (from OS, framework, 3rd-party libraries, to your code). Of course in an ideal world everyone has done a great job testing both 32-bit and 64-bit and you won't see any differences, but in practice for any large application that tends not to be the case, and (at Microsoft at least) we end up duplicating our entire test system for 32-bit and 64-bit and paying a significant ongoing cost to testing and supporting all platforms.
* [Edit: Rico - of CLR and VS performance architect fame - just posted a great blog entry on why Visual Studio will not be a pure 64-bit application anytime soon]

32-bit tends to be faster anyway

* When an application can run fine in either 32-bit or 64-bit mode, the 32-bit mode tends to be a little faster. Larger pointers mean more memory and cache consumption, and the number of bytes of CPU cache available is the same for both 32-bit and 64-bit processes. Of course the WOW layer does add some overhead, but the performance numbers I've seen indicate that in most real-world scenarios running in the WOW is faster than running as a native 64-bit process.

Some features aren't available in 64-bit

* Although we all want to have perfect parity between 32-bit and 64-bit, the reality is that we're not quite there yet. CLR v2 only supported mixed-mode debugging on x86, and although we've finally added x64 support in CLR v4, edit-and-continue still doesn't support x64. On the CLR team, we consider x64 to be a first-class citizen whenever we add new functionality, but the reality is that we've got a complicated code-base (eg. completely separate 32-bit and 64-bit JIT compilers) and we sometimes have to make trade-offs (for example, adding 64-bit EnC would have been a very significant cost to the JIT team, and we decided that their time was better spent on higher-priority features). There are other cool features outside of the CLR that are also specific to x86 - like historical debugging in VS 2010. Complicating matters here is that we haven't always done a great job with the error messages, and so sometimes people "upgrade" to a 64-bit OS and are then disgusted to see that some of their features no longer appear to work (without realizing that if they just re-targeted the WOW they'd work fine). For example, the EnC error in VS isn't very clear ("Changes to 64-bit applications are not allowed"), and has led to some confusion in practice. I believe we're doing the right thing in VS2010 and fixing that dialog to make it clear that switching your project to x86 can resolve the issue, but still there's no getting back the time people have wasted on errors like this.
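To make the pointer-size point concrete, here is a small C++ rendering of the bug class the quote warns about (the .NET analogue is assuming IntPtr fits in an Int32); this is an illustrative sketch, not code from the post:

#include <cstdint>
#include <cstdio>

int main() {
    int value = 42;
    int* p = &value;

    // Wrong: assumes a pointer fits in 32 bits. Harmless in a 32-bit
    // process; in a 64-bit build the round trip can lose the upper half.
    std::uint32_t truncated = static_cast<std::uint32_t>(
        reinterpret_cast<std::uintptr_t>(p));

    // Right: uintptr_t is defined to be wide enough for any object pointer.
    std::uintptr_t safe = reinterpret_cast<std::uintptr_t>(p);

    std::printf("truncated=0x%x safe=0x%llx\n",
                truncated, static_cast<unsigned long long>(safe));
    return 0;
}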
Are you sure this is the problem? Maybe you should just set "Show Advanced Build Configurations" as detailed here.
I have a statically linkable library of C and Fortran routines compiled and linked together using the Visual Studio 2002 C (v7.0) compiler and the Intel Fortran 9.0.018 compiler.
The C code in my library calls and links to the Microsoft C runtime (MSCRT) 2002 static libraries (single-threaded). I believe the actual version number of the 2002 CRT libraries is v7.0.
I will refer to this static library as "vs2002if9.lib"
Can I statically link to my "vs2002if9.lib" safely using any later version of Visual Studio (2003, 2005 or 2008) without any worries about how the calling program behaves regarding the C runtime calls?
Or am I creating problems by mixing version of the CRT static libs?
What if I provide my "vs2002if9.lib" to 3rd party software developers? What requirements am I imposing on them?
Mixing C runtimes hasn't worked for me in the past. The only manner in which I can see this working (maybe) is if you completely isolate the use of the stack/heap within the boundaries of the statically linked C runtime (nothing crosses boundaries via parameters - but then what value is your vs2002if9.lib providing?).
As an example, if you allocate a pointer [heap memory] within the application and pass this pointer to the library you provided, which heap manager should be used? The correct answer is the heap manager managing the pointer, but your library won't know about the other heap manager. It gets uglier if your library allocates memory for use by the application and it's the application's responsibility to free/delete using the provided pointer (bad design, yes, but still possible). Again, the wrong heap manager will be utilized.
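A common way around this, sketched here with a hypothetical function name, is the "caller allocates" convention: the library never owns memory at all, it just fills buffers the caller passes in and reports the required size when the buffer is too small.

#include <cstring>

extern "C" int Lib_GetResult(char* buffer, unsigned* size)
{
    static const char result[] = "computed";
    if (!size)
        return -1;                       // invalid arguments
    if (!buffer || *size < sizeof(result)) {
        *size = sizeof(result);          // report the required buffer size
        return 1;                        // "buffer too small": caller retries
    }
    std::memcpy(buffer, result, sizeof(result));
    *size = sizeof(result);
    return 0;                            // success
}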