I am working on a Delphi 7 project with a minimalistic System.pas / SysInit.pas.
When I try to use records in my project, the compiler gives this error:
System unit out of date or corrupted: missing '#InitializeRecord'
Since I am trying to program in pure Pascal with no RTL, is there a way to manually enable/call the initialization for the records?
Thank you for your help.
The Delphi compiler relies on some "intrinsic functions" that are called by the generated code.
For instance, when you define a record in your code, the Delphi compiler will generate a call to InitializeRecord, even if you do not use any RTL. The same applies to string and dynamic array handling.
So you won't be able to bypass or ignore those functions, since the compiler itself expects them to exist.
Delphi is not meant to have its low-level RTL units stripped down. I've done that in some cases:
For our LVCL units (similar to your expectations), our enhanced RTL files can be compiled specifically to be stripped down when the LVCL conditional is defined;
For DWPL-based projects, targeting DOS with the Delphi compiler;
The TORO kernel.
FreePascal is much better at stripping down the system units. Since it even targets embedded systems, you can optionally strip out string support, the FPU, or even the whole heap manager.
I need to mix together code where one library uses Apple SIMD types such as simd_packed_float4, and another part of the code uses NEON SIMD types such as float32x4_t. I don't want to change the types used in either part, so I need to cast the data types where they interface. This is using the latest Xcode/clang where they appear not to be simple aliases of each other.
Obviously the correct C++ way is to use memcpy(), but I wondered if I had missed another, simpler way that isn't an undefined-behaviour cast or lane-by-lane copying, and is thus better for code generation across debug builds.
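For reference, here is a minimal sketch of the memcpy approach for Apple clang (the to_neon/to_simd names are mine); with C++20, std::bit_cast<float32x4_t>(v) does the same thing in one line, since both types are trivially copyable and 16 bytes wide:

#include <arm_neon.h>
#include <simd/simd.h>
#include <cstring>

// Round-trip between the two 16-byte vector types via memcpy; clang typically
// folds the copy into a plain register move at -O1 and above.
inline float32x4_t to_neon(simd_packed_float4 v) {
    float32x4_t out;
    static_assert(sizeof(out) == sizeof(v), "both types are 16 bytes");
    std::memcpy(&out, &v, sizeof out);
    return out;
}

inline simd_packed_float4 to_simd(float32x4_t v) {
    simd_packed_float4 out;
    std::memcpy(&out, &v, sizeof out);
    return out;
}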
Is it possible to enable an enhanced instruction set (SSE/AVX) for a single function or file within a Visual Studio project? I'd like to have multiple versions of a function which target different instruction sets, all within the same output binary.
It is not possible to enable a custom instruction set for a single function. However, you can enable a custom instruction set for a single translation unit, which is usually a .c/.cpp file. Note that the instruction set used for code in headers depends on how the translation unit that includes them is compiled, and may differ between .cpp files.
If you compile different .cpp files with different instruction sets, you can then link them together and the resulting binary should work. It is important, however, to ensure that calling conventions are compatible everywhere, and I think they would be, unless you use something like __vectorcall (which requires at least SSE2, by the way).
If you want to compile some functions for multiple instruction sets, you might want to look at this question. The overall technique is called "CPU dispatch".
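For illustration, a rough sketch of such a dispatcher with MSVC (the sum_* functions and file names are hypothetical; sum_sse2.cpp would be built with /arch:SSE2 and sum_avx2.cpp with /arch:AVX2):

#include <intrin.h>
#include <cstddef>

// Implemented in separately compiled translation units (hypothetical names).
float sum_sse2(const float* data, std::size_t n);
float sum_avx2(const float* data, std::size_t n);

// Runtime "CPU dispatch": pick the widest implementation the CPU supports.
// A production check would also verify OS support for YMM state (OSXSAVE/_xgetbv);
// this sketch only looks at the AVX2 feature bit.
float sum(const float* data, std::size_t n)
{
    int info[4];
    __cpuidex(info, 7, 0);                            // CPUID leaf 7, subleaf 0
    const bool has_avx2 = (info[1] & (1 << 5)) != 0;  // EBX bit 5 = AVX2
    return has_avx2 ? sum_avx2(data, n) : sum_sse2(data, n);
}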
I have worked with OpenCL on a couple of projects, but have always written the kernel as one (sometimes rather large) function. Now I am working on a more complex project and would like to share functions across several kernels.
But the examples I can find all show the kernel as a single file (very few even call secondary functions). It seems like it should be possible to use multiple files - clCreateProgramWithSource() accepts multiple strings (and combines them, I assume) - although pyopencl's Program() takes only a single source.
So I would like to hear from anyone with experience doing this:
Are there any problems associated with multiple source files?
Is the best workaround for pyopencl to simply concatenate files?
Is there any way to compile a library of functions (instead of passing in the library source with each kernel, even if not all are used)?
If it's necessary to pass in the library source every time, are unused functions discarded (no overhead)?
Any other best practices/suggestions?
Thanks.
I don't think OpenCL has a concept of multiple source files in a program - a program is one compilation unit. You can, however, use #include and pull in headers or other .cl files at compile time.
You can have multiple kernels in an OpenCL program - so, after one compilation, you can invoke any of the set of kernels compiled.
Any code not used - functions, or anything statically known to be unreachable - can be assumed to be eliminated during compilation, at some minor cost to compile time.
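For illustration, here is roughly what the multi-string route the question mentions looks like on the host side (the helper/kernel sources are placeholders of my own); the runtime simply concatenates the strings into one compilation unit, so the shared "library" code goes first:

#include <CL/cl.h>

static const char* helpers_src =
    "float square(float x) { return x * x; }\n";
static const char* kernel_src =
    "__kernel void squares(__global const float* in, __global float* out) {\n"
    "    size_t i = get_global_id(0);\n"
    "    out[i] = square(in[i]);\n"
    "}\n";

cl_program build_program(cl_context ctx, cl_device_id dev)
{
    const char* sources[] = { helpers_src, kernel_src };   // library first, kernel second
    cl_int err = CL_SUCCESS;
    cl_program prog = clCreateProgramWithSource(ctx, 2, sources, nullptr, &err);
    if (err != CL_SUCCESS) return nullptr;
    err = clBuildProgram(prog, 1, &dev, "", nullptr, nullptr);
    return (err == CL_SUCCESS) ? prog : nullptr;
}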
In OpenCL 1.2 you can compile source files separately and link the resulting objects together.
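If you do target OpenCL 1.2, the separate-compile-then-link flow looks roughly like this (a sketch with minimal error handling; the source-string parameters are placeholders):

#include <CL/cl.h>

cl_program build_linked(cl_context ctx, cl_device_id dev,
                        const char* helpers_src, const char* kernels_src)
{
    cl_int err = CL_SUCCESS;
    cl_program helpers = clCreateProgramWithSource(ctx, 1, &helpers_src, nullptr, &err);
    cl_program kernels = clCreateProgramWithSource(ctx, 1, &kernels_src, nullptr, &err);

    // Compile each source to an object, without linking.
    clCompileProgram(helpers, 1, &dev, "", 0, nullptr, nullptr, nullptr, nullptr);
    clCompileProgram(kernels, 1, &dev, "", 0, nullptr, nullptr, nullptr, nullptr);

    // Link the compiled objects into one executable program.
    cl_program objects[] = { helpers, kernels };
    cl_program linked = clLinkProgram(ctx, 1, &dev, "", 2, objects, nullptr, nullptr, &err);
    return (err == CL_SUCCESS) ? linked : nullptr;
}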
Why can't the compiler just compile my code as I type it?
From the user's point of view, it could work as smoothly as syntax colouring does today. If you stop typing for long enough (maybe a couple of seconds) the compilation (not linking) would finish, and code errors would be identified using something like syntax colouring.
It's not like my 3GHz quad core monster computer was really busy doing something else. Why not let it compile all the time?
That's exactly what the VB.NET code editor in Visual Studio does.
The advantage is much more accurate IntelliSense than C#. The disadvantage is that it wastes truly vast amounts of processor time and memory. :-(
It can. Or, to be more useful, the answer to this question depends on
What language
What degree of optimization you require
How annoyed you will be if you temporarily type something dumb, and the compiler compiles and injects the result into the binary you are debugging before you can fix it.
Some really aggressive optimizations would be very messy to redo on the fly. On the other hand, a basic compilation, if there's no need to worry about assigning offsets for x86 instructions? Sure.
Some IDEs do compile (or at least check syntax and some semantics) code as it is typed. For example, I think Eclipse does it. I think Visual Basic 6 (and maybe earlier versions) did this.
Not sure what IDE you're using, but that's how VB.NET works.
I'm not well-versed in compilers or the methods by which code is converted to IL and machine language, etc. But even so I can see how altering my program by one flow control statement can completely invalidate the work a compiler has done up to that point. By adding or changing a single line of code, entire portions of a program may become obsolete, unused, or in some other way require re-evaluation.
I think I'd rather save those CPU cycles for distributed.net or SETI@home instead of constantly recompiling my code as I alter it.
That totally depends on the language.
Languages with context-independent syntax "could" pre-compile expressions as they are typed. However, compiling a project in such languages is always fast, so why use the CPU when you can quickly batch the work once the code is ready?
Other languages, C++ infamously among them, are context-dependent. In most cases the compiler can't understand an expression without having already read all the code that comes before it. That makes them really, really hard to parse, and that's why we only now have pre-compilation error checking (in VS2010 and other recent IDEs). In this case it looks nearly impossible to implement the feature you're asking for.
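A tiny example of that context dependence (my own illustration): the same token shape parses completely differently depending on what the compiler has already seen.

struct T {};

void f()
{
    T * b;            // "T * b" declares a pointer named b, because T is a type
    int a = 2, c = 3;
    a * c;            // "a * c" is a multiplication whose result is discarded
    (void)b; (void)a; (void)c;   // silence unused-variable warnings
}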
That said, I'm not a specialist at all. That's all I know about it.
Even interpreted languages like PHP have support for this in the Komodo editor. I'm sure there's many more editors out there that support this for almost any language.
What is the best way to design a C API for DLLs which deals with the problem of passing "objects" that are C-runtime dependent (FILE*, pointers returned by malloc, etc.)? For example, if two DLLs are linked with different versions of the runtime, my understanding is that you cannot safely pass a FILE* from one DLL to the other.
Is the only solution to use Windows-dependent APIs (which are guaranteed to work across DLLs)? The C API already exists and is mature, but was designed mostly from a Unix point of view (and it still has to work on Unix, of course).
You asked for a C, not a C++ solution.
The usual method(s) for doing this kind of thing in C are:
Design the module's API to simply not require CRT objects. Pass things across in raw C types - i.e. get the consumer to load the file and simply pass you the pointer. Or get the consumer to pass a fully qualified file name that is opened, read, and closed internally.
Another approach, used by other C modules - the MS cabinet SDK and parts of the OpenSSL library come to mind, IIRC - is to have the consuming application pass pointers to functions into the initialization function. So any API you pass a FILE* to would, at some point during initialization, have taken a pointer to a struct of function pointers matching the signatures of fread, fopen, etc. When dealing with external FILE*s, the DLL always uses the passed-in functions rather than the CRT functions (a minimal sketch follows below).
With simple tricks like this you can make your C DLL's interface entirely independent of the host's CRT - or, in fact, of any requirement that the host be written in C or C++ at all.
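Here is a minimal, self-contained sketch of that callback-table idea (all names - IoCallbacks, mylib_init, the host_* wrappers - are mine, not from any real SDK); in a real project the DLL side and the host side would of course live in separate modules:

#include <cstdio>
#include <cstddef>

// The host fills this table with its own CRT functions; the DLL performs all
// FILE* work through it, so only one CRT ever touches those objects.
extern "C" {
typedef struct IoCallbacks {
    void*       (*open_file)(const char* path, const char* mode);
    std::size_t (*read_file)(void* buffer, std::size_t size, std::size_t count, void* stream);
    int         (*close_file)(void* stream);
} IoCallbacks;
}

// --- DLL side (normally exported from the DLL) ---
static IoCallbacks g_io;                                   // table used for all file I/O
extern "C" void mylib_init(const IoCallbacks* cb) { g_io = *cb; }

// --- Host side: wrap the host's CRT and register the wrappers ---
static void*       host_open(const char* p, const char* m) { return std::fopen(p, m); }
static std::size_t host_read(void* b, std::size_t s, std::size_t n, void* f) { return std::fread(b, s, n, static_cast<FILE*>(f)); }
static int         host_close(void* f) { return std::fclose(static_cast<FILE*>(f)); }

int main()
{
    IoCallbacks cb = { host_open, host_read, host_close };
    mylib_init(&cb);
    return 0;
}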
Neither existing answer is correct. Consider the following situation on Windows: you have two DLLs, each statically linked with a different version of the C/C++ standard library.
In this case, you should not pass pointers to structures created by the C/C++ standard library in one DLL to the other. The reason is that these structures may be different between the two C/C++ standard library implementations.
The other thing you should not do is free a pointer in one DLL that was allocated by new or malloc in the other. The heap manager may be implemented differently as well.
Note, you can use the pointers between the DLLs - they just point to memory. It is the free that is the issue.
Now, you may find that this works, but if it does, you are just lucky. It is likely to cause you problems in the future.
One potential solution to your problem is to link to the CRT dynamically. For example, you could dynamically link to MSVCRT.DLL. That way your DLLs will always use the same CRT.
Note, I suggest that it is not a best practice to pass CRT data structures between DLLs. You might want to see if you can factor things better.
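To illustrate keeping allocation and deallocation inside the same module, a minimal sketch (the mylib_* names are hypothetical):

#include <cstdlib>

// Whatever the DLL allocates, the DLL also frees, so both calls hit the same CRT heap.
extern "C" __declspec(dllexport) void* mylib_alloc(std::size_t size)
{
    return std::malloc(size);
}

extern "C" __declspec(dllexport) void mylib_free(void* p)
{
    std::free(p);
}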
Note, I am not a Linux/Unix expert - but you will have the same issues on those OSes as well.
The problem with the different runtimes isn't solvable, because the FILE* struct belongs to one runtime on a Windows system.
But if you write a small wrapper interface you're done, and it does not really hurt:
#include <cstddef>

class IFile {
public:
    virtual size_t Write(const void* buffer, size_t size, size_t count) = 0;
    virtual size_t Read(void* buffer, size_t size, size_t count) = 0;
    virtual void Release() = 0;   // frees the object inside the DLL that created it
protected:
    virtual ~IFile() {}
};

extern "C" IFile* __stdcall IFileFactory(const char* filename, const char* mode);
This is safe to pass across DLL boundaries everywhere and does not really hurt.
P.S.: Be careful if you start throwing exceptions across DLL boundaries. This will work quite well if you fulfil some design criteria on Windows, but will fail on some other systems.
If the C API exists and is mature, bypassing the CRT internally by using pure Win32 API stuff gets you half the way. The other half is making sure the DLL's user uses the corresponding Win32 API functions. This will make your API less portable, in both use and documentation. Also, even if you go this way with memory allocation, where both the CRT functions and the Win32 ones deal with void*, you're still in trouble with the file stuff - Win32 API uses handles, and knows nothing about the FILE structure.
I'm not quite sure what are the limitations of the FILE*, but I assume the problem is the same as with CRT allocations across modules. MSVCRT uses Win32 internally to handle the file operations, and the underlying file handle can be used from every module within the same process. What might not work is closing a file that was opened by another module, which involves freeing the FILE structure on a possibly different CRT.
What I would do, if changing the API is still an option, is export cleanup functions for any possible "object" created within the DLL. These cleanup functions will handle the disposal of the given object in the way that corresponds to the way it was created within that DLL. This will also make the DLL absolutely portable in terms of usage. The only worry you'll have then is making sure the DLL's user does indeed use your cleanup functions rather than the regular CRT ones. This can be done using several tricks, which deserve another question...
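For example (a sketch; all mylib_* names are hypothetical), the FILE* never crosses the boundary directly - only an opaque handle does - and the DLL that opened the file is also the one that closes it:

#include <cstdio>

typedef struct mylib_file mylib_file;   // opaque to users of the DLL

extern "C" mylib_file* mylib_open(const char* path, const char* mode)
{
    return reinterpret_cast<mylib_file*>(std::fopen(path, mode));
}

extern "C" void mylib_close(mylib_file* f)   // the matching cleanup export
{
    if (f) std::fclose(reinterpret_cast<FILE*>(f));
}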