We are trying to use OCCI with GCC. OCCI is compiled with the Sun Studio compiler. Is there any possibility of using OCCI with GCC instead of Sun's native compiler, CC?
You effectively cannot mix multiple C++ run-time libraries.
A C++ run-time implementation is extremely complex. This posting explains some of the complexity:
Stability of the C++ ABI: Evolution of a Programming Language
The C++ ABI
The C++ ABI includes the C ABI. In addition, it covers the following
features:
Layout of hierarchical class objects, that is, base classes and virtual base classes
Layout of pointer-to-member
Passing of hidden function parameters (for example, this)
How to call a virtual function:
  Vtable contents and layout
  Location in objects of pointers to vtables
  Finding adjustment for the this pointer
  Finding base-class offsets
Calling a function via pointer-to-member
Managing template instances
External spelling of names ("name mangling")
Construction and destruction of static objects
Throwing and catching exceptions
Some details of the standard library:
  Implementation-defined details
  typeinfo and run-time type information
  Inline function access to members
You can also add that different C++ compilers implement name mangling differently, making it effectively impossible to use OCCI directly with GCC on Solaris.
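As a hedged illustration (the function below is invented, not part of OCCI), the very same declaration gets a different external name from each compiler, so an object built with g++ can never resolve its symbols against the CC-built OCCI library:

    // Declaration as it would appear in a header:
    int connect(const char* host, int port);

    // g++ follows the Itanium C++ ABI and emits the symbol  _Z7connectPKci
    // Sun/Solaris Studio CC spells the same symbol under its own, different
    // mangling scheme, so the linker reports it as unresolved when the two
    // are mixed. You can compare the actual names with  nm  on each object file.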
You might be able to get something working, but anything you do will be extremely fragile at best. The next OS or C++ run-time update could break things, and you may not be able to fix the problem.
Unless you're writing very simple applications, I strongly recommend just using the Solaris Studio compiler - and getting familiar with the entire suite of tools, which includes performance profiling, memory checking, and even race-condition detection, most of which are in my opinion superior to the tools used with GCC.
Related
I have to deal with an ancient software module which only supports having its API called from VC++ 6.0.
Suppose I add an indirection layer, that is, I wrap the module in a dynamic library written with VC++ 6.0 that forwards API calls to the underlying module. The DLL would then be called from the VC++ toolchain provided with Visual Studio 2015.
My question is:
Is this doable in principle?
Are there any pitfalls one might want to pay attention to?
Can it be done in a better way?
Update: I guess that the C ABI is stable across different Windows versions as well as the corresponding supported VS versions. If the DLL is made with pure C, this might be done without too much trouble.
There are at least two issues preventing portability between versions/vendors: C++ object/class layout and the C/C++ memory allocators.
To avoid the first issue you should design the visible API as a set of C functions (like most of the Win32 API).
The memory issue can be solved by switching from new/malloc to one of the native Windows functions like LocalAlloc, HeapAlloc, or CoTaskMemAlloc. Assuming your old .dll is not statically linked, it would theoretically be possible to force the newer code to use the malloc in msvcrt.dll, but that makes your build setup a bit fragile and goes against current best practices.
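A minimal sketch of such a boundary (all names are illustrative, not from any real module): the VC++ 6.0 wrapper exposes plain C functions and hands out memory via CoTaskMemAlloc, so the VS 2015 caller can release it with CoTaskMemFree without either side touching the other's CRT heap:

    // wrapper.h -- shared between the VC++ 6.0 DLL and the VS 2015 client
    // (a real header would switch between dllexport and dllimport)
    #include <objbase.h>   // CoTaskMemAlloc / CoTaskMemFree

    extern "C" __declspec(dllexport)
    char* Legacy_QueryName(int id);   // allocates the result with CoTaskMemAlloc
                                      // and fills it by calling the old module

    // VS 2015 client side: release through the same OS-level allocator,
    // never with free() or delete.
    void UseIt()
    {
        char* name = Legacy_QueryName(42);
        // ... use name ...
        CoTaskMemFree(name);
    }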
Another alternative is for the middleman .dll to implement everything as a COM object:
EXTERN_C HRESULT WINAPI CreateMyObject(IMyObject**ppv) ...
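For instance, a minimal hand-rolled COM-style shape could look like the sketch below (IMyObject's method is invented). Because the caller only ever sees an HRESULT factory plus a vtable of stdcall virtual functions, the VC++ 6.0 and VS 2015 sides never have to agree on anything beyond the COM binary contract:

    #include <windows.h>
    #include <unknwn.h>

    // Invented example interface; a real one would live in a shared header or IDL.
    struct IMyObject : public IUnknown {
        virtual HRESULT STDMETHODCALLTYPE DoWork(int input, int* result) = 0;
    };

    // Factory exported by the VC++ 6.0 middleman DLL.
    EXTERN_C HRESULT WINAPI CreateMyObject(IMyObject** ppv);

    // VS 2015 caller: the object is always destroyed by the module that
    // created it, because destruction happens inside Release().
    void Use()
    {
        IMyObject* obj = NULL;
        if (SUCCEEDED(CreateMyObject(&obj))) {
            int out = 0;
            obj->DoWork(1, &out);
            obj->Release();
        }
    }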
Compiling a program to bytecode instead of native code enables a certain level of portability, as long as a suitable virtual machine exists.
But I'm kinda wondering, why delay the compilation? Why not simply compile the byte code when installing an application?
And if that is done, why not apply it to languages that compile directly to native code? Compile them to an intermediate format, distribute a "JIT" compiler with the installer, and compile them to native code on the target machine.
The only thing I can think of is runtime optimization. That's about the only major thing that can't be done at installation time. Thoughts?
Often it is precompiled. Consider, for example, precompiling .NET code with NGEN.
One reason for not precompiling everything would be extensibility. Consider those languages which allow use of reflection to load additional code at runtime.
Some JIT Compilers (Java HotSpot, for example) use type feedback based inlining. They track which types are actually used in the program, and inline function calls based on the assumption that what they saw earlier is what they will see later. In order for this to work, they need to run the program through a number of iterations of its "hot loop" in order to know what types are used.
This optimization is totally unavailable at install time.
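A hedged C++ analogy of what the JIT is doing (Shape and Circle are invented): if profiling shows that every object reaching the call site below is a Circle, HotSpot can inline Circle::area behind a cheap type guard and deoptimize if anything else ever shows up; an install-time compiler simply cannot know which subtype will dominate:

    struct Shape {
        virtual double area() const = 0;
        virtual ~Shape() {}
    };

    struct Circle : Shape {
        double r;
        explicit Circle(double radius) : r(radius) {}
        double area() const override { return 3.141592653589793 * r * r; }
    };

    double total(const Shape* const* shapes, int n)
    {
        double sum = 0.0;
        for (int i = 0; i < n; ++i)
            sum += shapes[i]->area();   // virtual call: a profiling JIT that has
                                        // only ever observed Circle here can
                                        // speculatively inline Circle::area
        return sum;
    }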
The bytecode has already been compiled, just as the C++ code has been compiled.
Also, the JIT compilers, i.e. the .NET and Java runtimes, are massive and dynamic, and you can't foresee which parts of the runtime an app will use, so you need to ship the entire runtime.
Also one has to realize that a language targeted to a virtual machine has very different design goals than a language targeted to bare metal.
Take C++ vs. Java.
C++ wouldn't work on a VM; in particular, a lot of the C++ language design is geared towards RAII.
Java wouldn't work on bare metal for many reasons; primitive types, for one.
EDIT: As delnan correctly points out, JIT and similar technologies, though hugely beneficial to bytecode performance, would likely not be available at install time. Also, compiling for a VM is very different from compiling to native code.
I want to use "printf" in driver code (DDK), therefore I've included stdio.h. But the compiler says:
error LNK2001: unresolved external symbol __imp__printf
Any ideas? I've seen somewhere that it is not possible - but that's awful; I can't believe it. Why can't I use standard C routines in kernel code?
C functions like printf come from a static cstd.lib or something, AFAIK, don't they?
Why would WDK provide me with stdio.h then?
The Windows kernel only supports part of the standard C runtime. In particular, high-level functionality — like file streams, console I/O, and networking — is not supported. Instead, you need to use native kernel APIs for similar functionality.
The reason stdio.h is included with the WDK is that some parts of the C runtime are provided for your convenience. For example, you can use memcmp (although the native RtlCompareMemory is preferred). Microsoft has not picked through the CRT headers to #ifdef out the bits and pieces that are not available in kernel mode. Once you develop some experience writing kernel drivers, you'll get the hang of what's possible in the kernel and what probably won't work.
To address your high-level question: you're probably looking for some debug/logging mechanism. You really have two options:
DbgPrintEx is the easiest to use. It's basically a drop-in for printf (although you need to be careful about certain types of string inserts when running at or above DISPATCH_LEVEL). Output goes to the debugger, or, if you like, to DbgView.
WPP is the industrial-strength option. The initial learning curve is pretty steep (although there are samples in the WDK). However, it is very flexible (e.g., you can create your own shrieks, like Print("My IP address is: %!IPV4!", ip);), and it is very fast (Microsoft ships WPP tracing in the non-debug builds of most Windows components).
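For what it's worth, a DbgPrintEx call reads almost exactly like printf; here's a rough sketch (the component ID and level shown are just common choices):

    #include <ntddk.h>

    VOID LogStatus(ULONG status)
    {
        // Output goes to the kernel debugger or DbgView. Stick to plain
        // numeric inserts if you might be at or above DISPATCH_LEVEL; some
        // string/Unicode format codes are only safe at PASSIVE_LEVEL.
        DbgPrintEx(DPFLTR_IHVDRIVER_ID, DPFLTR_INFO_LEVEL,
                   "MyDriver: operation completed with status 0x%08X\n", status);
    }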
I have read the following in Foundations of C++/CLI:
If you try to compile a native C++ application with /clr:pure, it will work only if the code being compiled has no constructs that generate machine-specific code. You can, however, link with native libraries.
What is meant by "constructs that generate machine-specific code" ? Example?
Also, the book says that this mode is not verifiably safe, so we can use pointers, for example. Now I am confused about how the compiler can be said to produce pure MSIL code while we can still use pointers! What I know is that a pointer is somehow a native concept! How can it be expressed as pure MSIL?
MSIL is quite powerful; it has no trouble with standard-compliant C++03 code. The only "constructs" I know of that it cannot deal with are the __fastcall calling convention, r-value references as implemented in the C++0x additions in VS2010, and obvious machine-specific extensions like the __asm keyword (inline assembly) and intrinsics.
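To make "constructs that generate machine-specific code" concrete, here is a hedged sketch of code /clr:pure cannot turn into MSIL (the three functions are just examples of the categories listed above):

    #include <intrin.h>   // compiler intrinsics map directly to machine instructions

    int __fastcall Add(int a, int b)     // __fastcall: a native x86 calling convention
    {
        return a + b;
    }

    unsigned __int64 ReadCycleCounter()
    {
        return __rdtsc();                // intrinsic that emits the x86 RDTSC instruction
    }

    void Pause()
    {
        __asm { pause }                  // inline assembly via the x86-only __asm keyword
    }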
Almost any native C++ you compile with /clr will, in effect, use pointers, either explicitly or compiler-generated, to implement things like C++ classes and references. MSIL has no trouble with that, but such code is not verifiable; any MSIL that uses pointers is rejected by the verifier. This is not necessarily a problem; lots of code runs in full trust. You'll only get in trouble when you try to run it in sandboxed execution environments, like a browser, a phone, or a game console. Or when your customer has a nervous system administrator.
Using /clr:pure is otherwise pretty useless. The point of using C++/CLI is its great support for interop with native code. Compiling native code to MSIL is largely a waste; the JIT optimizer can't do as effective a job as the native code optimizer.
A C++ program that uses several DLLs and Qt should be equipped with a malloc replacement (like tcmalloc) for performance problems that can be verified to be caused by the Windows malloc. On Linux there is no problem, but on Windows there are several approaches, and I find none of them appealing:
1. Put new malloc in lib and make sure to link it first (Other SO-question)
This has the disadvantage that, for example, strdup will still use the old malloc, and a subsequent free may crash the program.
2. Remove malloc from the static libcrt library with lib.exe (Chrome)
This is tested/used(?) for Chrome/Chromium, but has the disadvantage that it only works when the CRT is statically linked. Static linking has the problem that if one system library is linked dynamically against msvcrt, there may be mismatches in heap allocation/deallocation. If I understand it correctly, tcmalloc could be linked dynamically such that there is a common heap for all self-compiled DLLs (which is good).
3. Patch crt-source code (firefox)
Firefox's jemalloc apparently patches the Windows CRT source code and builds a new CRT. This again has the static/dynamic linking problem described above.
One could think of using this to generate a dynamic MSVCRT, but I think this is not possible, because the license forbids providing a patched MSVCRT with the same name.
4. Dynamically patching loaded CRT at run time
Some commercial memory allocators can do such magic. tcmalloc can do it, too, but this seems rather ugly. It had some issues, but they have been fixed. Currently, tcmalloc's run-time patching does not work under 64-bit Windows.
Are there better approaches? Any comments?
Q: A C++ program that is split across several DLLs should:
A) replace malloc?
B) ensure that allocation and de-allocation happens in the same dll module?
A: The correct answer is B. A C++ application design that incorporates multiple DLLs SHOULD ensure that a mechanism exists so that things allocated on the heap in one DLL are freed by that same DLL module.
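In practice that usually means the DLL that allocates an object also exports the matching release function, so allocation and deallocation always happen in the same module. A sketch with invented names:

    // widget.h -- shared by the DLL and its clients (invented example)
    // (a real header would switch between dllexport and dllimport)
    extern "C" {
        struct Widget;                                  // opaque to callers

        // Both functions live in the same DLL, so the memory is created and
        // destroyed by the same CRT heap.
        __declspec(dllexport) Widget* Widget_Create(void);
        __declspec(dllexport) void    Widget_Destroy(Widget* w);
    }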
Why would you split a C++ program into several DLLs anyway? By C++ program I mean that the objects and types you are dealing with are C++ templates, STL objects, classes, etc. You CAN'T pass C++ objects across DLL boundaries without either a lot of very careful design and lots of compiler-specific magic, or suffering massive duplication of object code in the various DLLs and, as a result, an application that is extremely version sensitive. Any small change to a class definition will force a rebuild of all EXEs and DLLs, removing at least one of the major benefits of a DLL approach to app development.
Either stick to a straight C interface between the app and its DLLs, suffer hell, or just compile the entire C++ app as one EXE.
It's a bold claim that a C++ program "should be equipped with a malloc replacement (like tcmalloc) for performance problems...."
"[In] 6 out of 8 popular benchmarks ... [real-sized applications] replacing back the custom allocator, in which people had invested significant amounts of time and money, ... with the system-provided dumb allocator [yielded] better performance. ... The simplest custom allocators, tuned for very special situations, are the only ones that can provide gains." --Andrei Alexandrescu
Most system allocators are about as good as a general purpose allocator can be. You can do better only if you have a very specific allocation pattern.
Typically, such special patterns apply only to a portion of the program, in which case, it's better to apply the custom allocator to the specific portion that can benefit than it is to globally replace the allocator.
C++ provides a few ways to selectively replace the allocator. For example, you can provide an allocator to an STL container or you can override new and delete on a class by class basis. Both of these give you much better control than any hack which globally replaces the allocator.
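For example, a class-scoped operator new/delete keeps the custom allocator confined to the one type that actually benefits (MyPool and g_messagePool are hypothetical):

    #include <cstddef>

    // Hypothetical pool tuned for fixed-size Message allocations.
    struct MyPool {
        void* allocate(std::size_t n);
        void  release(void* p);
    };
    extern MyPool g_messagePool;

    class Message {
    public:
        static void* operator new(std::size_t size) { return g_messagePool.allocate(size); }
        static void  operator delete(void* p)       { g_messagePool.release(p); }
        // Everything else in the program keeps using the system allocator.
    private:
        char payload[256];
    };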
Note also that replacing malloc and free will not necessarily change the allocator used by operators new and delete. While the global new operator is typically implemented using malloc, there is no requirement that it do so. So replacing malloc may not even affect most of the allocations.
If you're using C, chances are you can wrap or replace key malloc and free calls with your custom allocator just where it matters and leave the rest of the program to use the default allocator. (If that's not the case, you might want to consider some refactoring.)
System allocators have decades of development behind them. They are stable and well-tested. They perform extremely well for general cases (in terms of raw speed, thread contention, and fragmentation). They have debugging versions for leak detection and support for tracking tools. Some even improve the security of your application by providing defenses against heap buffer overrun vulnerabilities. Chances are, the libraries you want to use have been tested only with the system allocator.
Most of the techniques to replace the system allocator forfeit these benefits. In some cases, they can even increase memory demand (because they can't be shared with the DLL runtime possibly used by other processes). They also tend to be extremely fragile in the face of changes in the compiler version, runtime version, and even OS version. Using a tweaked version of the runtime prevents your users from getting benefits of runtime updates from the OS vendor. Why give all that up when you can retain those benefits by applying a custom allocator just to the exceptional part of the program that can benefit from it?
Where does your premise "A C++ program that uses several DLLs and QT should be equipped with a malloc replacement" come from?
On Windows, if all the DLLs use the shared MSVCRT, then there is no need to replace malloc. By default, Qt builds against the shared MSVCRT DLL.
One will run into problems if they:
1) mix DLLs that statically link the CRT with DLLs that use the shared VCRT,
2) AND also free memory somewhere other than where it was allocated (i.e., free memory in a statically linked DLL that was allocated by the shared VCRT, or vice versa).
Note that adding your own ref-counted wrapper around a resource can help mitigate the problems associated with resources that need to be deallocated in particular ways (i.e., a wrapper that disposes of one type of resource via a callback to the originating DLL, a different wrapper for a resource that originates from another DLL, etc.).
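One way to express such a wrapper is a shared_ptr whose deleter calls back into the DLL that handed out the resource (Legacy_CreateFoo and Legacy_FreeFoo are invented exports):

    #include <memory>

    struct Foo;                                  // opaque resource type
    extern "C" Foo* Legacy_CreateFoo(void);      // exports of the DLL that owns
    extern "C" void Legacy_FreeFoo(Foo* p);      // and must free the resource

    std::shared_ptr<Foo> MakeFoo()
    {
        // The custom deleter guarantees the resource is released by the same
        // module that allocated it, no matter where the last reference dies.
        return std::shared_ptr<Foo>(Legacy_CreateFoo(), &Legacy_FreeFoo);
    }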
nedmalloc? also NB that smplayer uses a special patch to override malloc, which may be the direction you're headed in.