I have read the following in Foundations of C++/CLI:
If you try to compile a native C++ application with /clr:pure, it will work only if the code being compiled has no constructs that generate machine-specific code. You can, however, link with native libraries.
What is meant by "constructs that generate machine-specific code"? Can you give an example?
Also, the book says that this mode is not verifiably safe, so we can use pointers, for example. Now I am confused: how can the compiler produce pure MSIL code while we can still use pointers? As far as I know, a pointer is a native concept! How can it be turned into pure MSIL?
MSIL is quite powerful; it has no trouble with standard-compliant C++03 code. The only "constructs" I know of that it cannot deal with are the __fastcall calling convention, r-value references as implemented in the C++0x additions in VS2010, and obvious machine-specific extensions like the __asm keyword (inline assembly) and intrinsics.
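To make that concrete, here is a minimal sketch (the function names are invented, and the constructs are MSVC-specific) of code that /clr:pure must reject, because these constructs can only be expressed as raw machine instructions, never as MSIL:

    // Hypothetical examples of "constructs that generate machine-specific code".
    // Both force the compiler to emit raw x86 instructions, so /clr:pure fails.
    #include <intrin.h>

    unsigned __int64 read_timestamp()
    {
        return __rdtsc();          // CPU intrinsic: emits the RDTSC instruction directly
    }

    int add_asm(int a, int b)      // inline assembly (x86 MSVC only)
    {
        int result;
        __asm {
            mov eax, a
            add eax, b
            mov result, eax
        }
        return result;
    }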
Most any native C++ you compile with /clr will in effect use pointers, either explicitly or through compiler-generated code that implements things like C++ classes and references. MSIL has no trouble with that, but the code is not verifiable: any MSIL that uses pointers is rejected by the verifier. This is not necessarily a problem; lots of code runs in full trust. You'll only get in trouble when you try to run it in sandboxed execution environments, like a browser, a phone, or a game console. Or when your customer has a nervous system administrator.
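For contrast, a minimal sketch (invented function) of ordinary pointer code: /clr:pure happily compiles this to MSIL, using IL pointer instructions, but the resulting assembly fails verification, so it only runs in full trust:

    // Plain C++: compiles to pure MSIL, yet the IL it produces uses raw
    // pointers and is therefore rejected by the verifier (not verifiably safe).
    #include <cstddef>

    void increment_all(int* data, std::size_t count)
    {
        for (std::size_t i = 0; i < count; ++i)
            ++data[i];             // pointer arithmetic: legal MSIL, unverifiable MSIL
    }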
Using /clr:pure is otherwise pretty useless. The point of using C++/CLI is its great support for interop with native code. Compiling native code to MSIL is largely a waste; the JIT optimizer can't do as effective a job as the native code optimizer.
I have to deal with an ancient software module which only supports having its API called from VC++ 6.0.
What if I add an indirection layer: that is, wrap the module in a dynamic library written with VC++ 6.0 that forwards API calls to the underlying module. The DLL would then be called from code built with the VC++ toolchain provided in Visual Studio 2015.
My question is:
Is this doable in principle?
Are there any pitfalls one might want to pay attention to?
Can it be done in a better way?
Update: I guess that the C ABI is stable across different Windows versions as well as the corresponding supported VS versions. If the DLL is made with pure C, this might be done without too much trouble.
There are at least two issues preventing portability between versions/vendors: C++ object/class layout and the C/C++ memory allocators.
To avoid the first issue you should design the visible API as a set of C functions (like most of the Win32 API).
The memory issue can be solved by switching from new/malloc to one of the native Windows functions like LocalAlloc, HeapAlloc, or CoTaskMemAlloc. Assuming your old .dll is not statically linked, it would theoretically be possible to force the newer code to use the malloc in msvcrt.dll, but that makes your build setup a bit fragile and goes against current best practices.
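Putting both points together, here is a minimal sketch (the function name and payload are invented) of what an exported wrapper function might look like: a plain C signature, returning memory that either side can free because it comes from the shared COM allocator.

    // Hypothetical wrapper export: plain C signature, memory allocated with
    // CoTaskMemAlloc so the VS2015 caller and the VC++6.0 DLL share one allocator.
    #include <windows.h>
    #include <objbase.h>
    #include <cstring>

    extern "C" __declspec(dllexport)
    char* WINAPI MyModule_GetReport(void)
    {
        const char text[] = "report data";
        char* buffer = static_cast<char*>(CoTaskMemAlloc(sizeof text));
        if (buffer)
            memcpy(buffer, text, sizeof text);
        return buffer;             // caller releases with CoTaskMemFree, never free()
    }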
Another alternative is for the middleman .dll to implement everything as a COM object:
    EXTERN_C HRESULT WINAPI CreateMyObject(IMyObject** ppv) ...
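Fleshing that fragment out, a hedged sketch of what the middleman DLL could expose; IMyObject and its method are invented names, and only the IUnknown rules and vtable layout matter for cross-compiler use:

    // Minimal COM-style sketch. The interface is pure virtual, so only the
    // vtable layout and the C ABI of the factory cross the DLL boundary.
    #include <windows.h>
    #include <unknwn.h>

    struct IMyObject : public IUnknown
    {
        virtual HRESULT STDMETHODCALLTYPE DoLegacyCall(int arg) = 0;
    };

    // Exported from the VC++6.0-built DLL; callable from VS2015 code.
    EXTERN_C HRESULT WINAPI CreateMyObject(IMyObject** ppv);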
We are trying to use OCCI with GCC. OCCI is compiled using the Sun Studio compiler. Is there any possibility of using OCCI with GCC instead of the Sun native compiler CC?
You effectively cannot mix multiple C++ run-time libraries.
A C++ run-time implementation is extremely complex. This posting explains some of the complexity:
Stability of the C++ ABI: Evolution of a Programming Language
The C++ ABI
The C++ ABI includes the C ABI. In addition, it covers the following features:
- Layout of hierarchical class objects, that is, base classes and virtual base classes
- Layout of pointer-to-member
- Passing of hidden function parameters (for example, this)
- How to call a virtual function:
  - Vtable contents and layout
  - Location in objects of pointers to vtables
  - Finding adjustment for the this pointer
  - Finding base-class offsets
  - Calling a function via pointer-to-member
- Managing template instances
- External spelling of names ("name mangling")
- Construction and destruction of static objects
- Throwing and catching exceptions
- Some details of the standard library:
  - Implementation-defined details
  - typeinfo and run-time type information
  - Inline function access to members
You can also add that different C++ compilers implement name mangling differently, making it effectively impossible to use OCCI directly with GCC on Solaris.
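As a small illustration (the function name is invented): for a declaration like the one below, GCC's Itanium C++ ABI emits the symbol _Z1fi, while Sun/Solaris Studio CC uses its own, incompatible encoding, so the linker can never match an OCCI symbol compiled by CC against a call compiled by g++.

    // The same declaration produces differently mangled symbols per compiler:
    //   g++ (Itanium C++ ABI):  _Z1fi
    //   Sun/Solaris Studio CC:  a different, incompatible encoding
    int f(int);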
You might be able to get something working, but anything you do will be extremely fragile at best. The next OS or C++ run-time update could break things, and you may not be able to fix the problem.
Unless you're writing very simple applications, I strongly recommend just using the Solaris Studio compiler and getting familiar with its entire suite of tools, which includes performance profiling, memory checking, and even race-condition detection; most of it is, in my opinion, superior to the tools used with GCC.
I have some C++/CLI code which derives from classes in the .NET System namespace.
Is there a way to reuse this code for Universal Windows Platform Apps?
I can't get a reference to the System namespace in C++, though in C# it is possible. It looks like there is only support for C++/CX code and not for managed C++/CLI.
The syntax and keywords of the C++/CX extension resemble C++/CLI a great deal. But that's where the similarity ends; they have nothing whatsoever in common. C++/CX is compiled directly to native code, just like native C++, but C++/CLI is compiled to MSIL, the intermediate language of .NET. Their syntax looks so similar because they both solve the same problem, interfacing C++ to a foreign type system: .NET's in the case of C++/CLI, WinRT's in the case of C++/CX.
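A sketch of how close the surface syntax is (Foo and Name are invented names); the two declarations look almost identical, yet one produces a .NET ref class in MSIL and the other a native WinRT (COM) class:

    // C++/CLI (/clr): Foo becomes a .NET ref class, compiled to MSIL.
    //   public ref class Foo
    //   {
    //   public:
    //       System::String^ Name();
    //   };

    // C++/CX (/ZW): same keywords, but Foo becomes a native WinRT class.
    namespace Sample
    {
        public ref class Foo sealed
        {
        public:
            Platform::String^ Name();
        };
    }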
That difference is the basic reason why you cannot use the System namespace: it is a .NET namespace. You instead use the std namespace, along with the Platform and Windows namespaces for WinRT-specific types. With the /ZW compile option in effect, the compiler cannot import .NET reference assemblies, only WinRT metadata files, the ones with the .winmd filename extension. Those are an extension of the COM .tlb type library file format, the kind you previously had to import with the #import directive.
That in itself is another major source of confusion: the internal .winmd file format was based on the format of .NET metadata, which is why most .NET decompilers can show you the content of a .winmd file. But again, the similarity is superficial; a .winmd is completely unrelated to a .NET assembly. It can contain only declarations, not code. It is best compared to a .h file you'd use in a native C++ project, or a .tlb file if you previously had exposure to COM.
Knowing how COM works can be very helpful for grokking what this is all about. It is in fact COM that lies at the core of WinRT, and it is the basic reason why your C++/CX project can easily be used by a program written in a completely different language like JavaScript or VB.NET. A WinRT app is actually an out-of-process COM server; a class library or WinRT component is an in-process COM server. COM object factories work differently here: the scope is limited to the files named in the package manifest. C++/CX is part of the language projection that hides COM, along with the C++ libraries you link that implement the Platform namespaces. WinRT would have been stillborn if programmers had to write traditional COM client code. You still can in native C++; the WRL library does little to hide the plumbing.
WinRT readily supports code written in a managed language like C# or VB.NET; the language projection is built into the framework and is highly invisible. But not C++/CLI, a structural limitation. A Store/Phone/Universal app targets a subset of the .NET Framework named .NETCore, better known these days as CoreCLR, the part that was open-sourced. It does not support module initializers, which are critical to C++/CLI.
Enough introduction; getting to the answer: no, you have no use for your C++/CLI code, and you'll have to rewrite it. You'll have a decent shot at porting the native C++ code that your C++/CLI wrapper interfaced with, as long as it observes the API limitations. You should always start there first, given that it is easy to do and instantly tells you whether your native C++ code is using verboten API functions, the kind that drain a battery too quickly or violate the sandbox restrictions.
The ref class wrappers, however, have to be significantly reworked. There is little reason to assume that will be a major obstacle; they can still be structurally similar. The biggest limitations are the lack of support for implementation inheritance, a COM restriction, and having to replace the code that used .NET Framework types with equivalent C++ code. The typical hangup is that there tends to be a lot of it; the original author will normally have favored the very convenient .NET types over the standard C++ library types. YMMV.
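A small sketch of that typical rewrite (the names are invented): a wrapper member that used to hand out a System::String^ gets reworked to build its result from a standard C++ string instead:

    // Before, in the C++/CLI wrapper:  System::String^ GetLabel();
    // After, in C++/CX, converting from std::wstring to a WinRT string:
    #include <string>

    Platform::String^ MakeLabel(const std::wstring& text)
    {
        return ref new Platform::String(text.c_str());  // copies the wide string
    }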
Compiling a program to bytecode instead of native code enables a certain level of portability, as long as a fitting virtual machine exists.
But I'm kind of wondering: why delay the compilation? Why not simply compile the bytecode when installing the application?
And if that is done, why not adopt it for languages that compile directly to native code? Compile them to an intermediate format, distribute a "JIT" compiler with the installer, and compile the code on the target machine.
The only thing I can think of is runtime optimization. That's about the only major thing that can't be done at installation time. Thoughts?
Often it is precompiled. Consider, for example, precompiling .NET code with NGEN.
One reason for not precompiling everything would be extensibility. Consider those languages which allow use of reflection to load additional code at runtime.
Some JIT compilers (Java HotSpot, for example) use type-feedback-based inlining. They track which types are actually used in the program and inline function calls based on the assumption that what they saw earlier is what they will see later. For this to work, they need to run the program through a number of iterations of its "hot loop" in order to learn which types are used.
This optimization is totally unavailable at install time.
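To illustrate the kind of call site this applies to, here is a sketch in C++ (invented types): the static type is polymorphic, but if profiling shows that every object is actually a Circle, a type-feedback JIT can inline Circle::area() behind a cheap type-check guard; an install-time compiler never sees that run-time distribution.

    #include <memory>
    #include <vector>

    struct Shape { virtual double area() const = 0; virtual ~Shape() {} };

    struct Circle : Shape {
        double r;
        explicit Circle(double radius) : r(radius) {}
        double area() const override { return 3.14159265 * r * r; }
    };

    double total_area(const std::vector<std::unique_ptr<Shape>>& shapes)
    {
        double sum = 0.0;
        for (const auto& s : shapes)
            sum += s->area();   // the virtual call a JIT can speculatively inline
        return sum;
    }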
The bytecode has been compiled, just as the C++ code has been compiled.
Also, the JIT runtimes, i.e. the .NET and Java runtimes, are massive and dynamic, and you can't foresee which parts of them a program will use, so you need to ship the entire runtime.
Also, one has to realize that a language targeted at a virtual machine has very different design goals than a language targeted at bare metal.
Take C++ vs. Java.
C++ wouldn't work on a VM; in particular, a lot of the C++ language design is geared towards RAII.
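A minimal sketch of what RAII means here: the resource is released at a precise, deterministic point, the closing brace, which a garbage-collected VM does not promise:

    #include <cstdio>

    class File {
        std::FILE* f;
    public:
        explicit File(const char* path) : f(std::fopen(path, "r")) {}
        ~File() { if (f) std::fclose(f); }  // runs exactly when File leaves scope
        std::FILE* get() const { return f; }
    };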
Java wouldn't work on bare metal for many reasons; its primitive types, for one, are specified independently of the underlying hardware.
EDIT: As delnan correctly points out, JIT and similar technologies, though hugely beneficial to bytecode performance, would likely not be available at install time. Also, compiling for a VM is very different from compiling to native code.
I'm pretty sure this is possible, but I'm not sure how to go about it. I'm very new to building with GCC in general and I have never used FreeRTOS, but I'd like to try getting the OS up and running on a TI ARM Cortex MCU, with a slight twist: I'd like to get it up and running with Pascal. I'm curious:
Is this even possible to get working? If not, the next questions are kind of moot points.
From my Delphi days, I vaguely recall the ability to access functions in C libraries. I'm wondering if I would have access to the C routines in FreeRTOS.
If I use the GCC version (preferable), would I be able to debug using OpenOCD on the target? I'm not quite sure how debug symbols work and whether they are more or less language agnostic (hopefully, in this case).
As kind of a bonus question a bit outside the scope of the original query: can I simulate FreeRTOS on an x86 processor (e.g. my development PC) for easier debugging during development? (With a Pascal program, of course.)
I haven't found any documentation on achieving this, so hopefully someone here can shed some light! Any resources would be most helpful. Like I said, I'm very new to this kind of development. I'm also open to suggestions if you think there is a better alternative.
FYI, my preferred host configuration would be something similar to:
Linux (Ubuntu/Debian)
Eclipse IDE for development, unit testing, and hopefully simulation / debugging
OpenOCD for target debugging
GNU Pascal + FreeRTOS on target
FreeRTOS is C source code, so, as you say, you would need some mechanism for linking C with your Pascal programs. Also, FreeRTOS relies on certain registers being used for things like passing a parameter into a task (as a hypothetical example, the task might always expect the parameter to be in register R0), so you would have to ensure the ABI of the C compiler and the Pascal compiler is the same, or have your task entry in C and have it call a Pascal function (very nasty). Then there is the issue of interrupts, calling inline macros, etc. I would say this would be extremely difficult to achieve.
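For what it's worth, a hedged sketch of the "task entry in C" approach mentioned above; the Pascal routine's name and signature are invented:

    // The entry point lives in C/C++, so FreeRTOS's expectations about the
    // task function are honored; it then forwards to a cdecl Pascal routine.
    #include "FreeRTOS.h"
    #include "task.h"

    extern "C" void PascalTaskBody(void* params);  // exported from the Pascal side

    extern "C" void TaskEntry(void* pvParameters)
    {
        PascalTaskBody(pvParameters);  // hand off to the Pascal implementation
        vTaskDelete(NULL);             // a FreeRTOS task must never simply return
    }

    // Creation, from the C/C++ startup code:
    //   xTaskCreate(TaskEntry, "pascal", configMINIMAL_STACK_SIZE, NULL,
    //               tskIDLE_PRIORITY + 1, NULL);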
Both GNU Pascal and Free Pascal support linking to C (gcc) and the ARM target, as well as calling Pascal code from C, etc. Writing a header and declaring the prototypes with cdecl is all there is to it.
Macros are a somewhat bigger problem. Usually I just rewrite them as inline functions (what they should have been anyway). Except for the macro/header issue, the remaining problems are mostly compiler-specific functionality (which you would also run into when porting from one C compiler to the next).
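As an example of that macro rewrite: FreeRTOS's xSemaphoreTake is a macro, so there is no symbol for Pascal to bind to; a one-line shim (the shim's name is invented) turns it into a real linkable function:

    #include "FreeRTOS.h"
    #include "semphr.h"

    // Real function wrapping the macro, giving the linker a symbol that the
    // Pascal side can declare with cdecl and call directly.
    extern "C" BaseType_t SemaphoreTakeShim(SemaphoreHandle_t sem, TickType_t ticks)
    {
        return xSemaphoreTake(sem, ticks);  // the macro expands here, in C/C++
    }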
If you prefer TP/Delphi dialect, Free Pascal is the better choice.
I run my old Delphi code fine on my SheevaPlug.
There is already an example of FreeRTOS/GCC/OpenOCD on a TI Cortex-M3 (formerly a Luminary Micro Cortex-M3; TI acquired Luminary Micro). Be aware, though, that this is a really old example, and both the Eclipse and OpenOCD versions used are out of date.
Although there is an Eclipse project provided, the project is configured as a standard make (as opposed to a managed make) project, so there is a standard makefile that can be just as easily executed from the command line as from within Eclipse.
http://www.freertos.org/portLM3Sxxxx_Eclipse.html