Is there a real performance gain when I turn {$IMPORTEDDATA} off?

Is there a real performance gain when I turn {$IMPORTEDDATA} off?
The manual only says this: "The {$G-} directive disables creation of imported data references. Using {$G-} increases memory-access efficiency, but prevents a packaged unit where it occurs from referencing variables in other packages."
Update:
Here is more info I could find:
"The Debugging section has the new option Use imported data references (mapped to $G), which
controls the creation of imported data references (increasing memory efficiency but preventing the
access of global variables defined in other runtime packages)"

Almost never
This directive only refers to accessing global unit variables from another unit.
If you use {$G+}:
unit Unit1;
interface
var
  Global1: Integer; // <-- this is a global var in Unit1
  Form1: TForm1;    // <-- also a global var, but really a pointer
Global1 will be accessed indirectly via a pointer (if and when it is accessed from outside Unit1).
Form1 will also be accessed indirectly (i.e. the access changes from a direct pointer to an indirect pointer).
If you use {$G-}, access to the Integer global will be direct and thus slightly faster.
This will only make a difference if you use global public unit variables from another unit in time-critical code, i.e. almost never.
See this article: http://hallvards.blogspot.com/2006/09/hack13-access-globals-faster.html

Related

Golang: are global variables protected from garbage collection?

I'm fairly new to Golang. I'm working on an application that builds an in-memory object-oriented data model (basically an ORM) to support the application functionality. I realize this isn't really idiomatic Go but it makes sense in this situation.
All my core objects are allocated on the heap then stored in global (though not necessarily exported) map structures that allow the code to look them up based on database IDs. Objects that reference instances of other objects have pointer fields in their structure definitions.
I was under the impression that any data that can be reached from a global variable is protected from being garbage collected. However, I am seeing intermittent cases of pointer references apparently becoming nil over time. If I restart the application, and rebuild the object model, then try the same operation, the problem disappears.
Is GC freeing my memory out from under me? Or should I look elsewhere to understand this problem? And if the answer to my first question is yes... how can I stop this from happening?
The garbage collector does not free memory as long as it is reachable. Global or package-level variables are accessible during the whole lifetime of your app, so they can't be freed by the GC.
If you see the opposite, that is definitely a bug or mistake on your part (unless the Go runtime itself has a bug). For example, you may have a data race initializing or accessing your global variables, or you (or some library you use) may use package unsafe or the uintptr type incorrectly. For example, quoting from unsafe.Pointer:
A uintptr is an integer, not a reference. Converting a Pointer to a uintptr creates an integer value with no pointer semantics. Even if a uintptr holds the address of some object, the garbage collector will not update that uintptr's value if the object moves, nor will that uintptr keep the object from being reclaimed.
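As a sanity check for the data-race hypothesis, here is a minimal Go sketch (the Record type and the registry/put/get names are made up for illustration, not taken from the question's code) of a package-level map guarded by a sync.RWMutex. Anything reachable from such a global map is never collected; unsynchronized concurrent writes, on the other hand, are a data race that go run -race will flag and that can make lookups appear to "lose" entries.

package main

import (
	"fmt"
	"sync"
)

// Record stands in for one of the question's model objects.
type Record struct {
	ID   int64
	Name string
}

var (
	mu       sync.RWMutex
	registry = map[int64]*Record{} // reachable from a package-level variable: the GC will not free it
)

// put registers an object under its database ID.
func put(r *Record) {
	mu.Lock()
	defer mu.Unlock()
	registry[r.ID] = r
}

// get looks an object up by ID; returns nil if it was never stored.
func get(id int64) *Record {
	mu.RLock()
	defer mu.RUnlock()
	return registry[id]
}

func main() {
	put(&Record{ID: 1, Name: "example"})
	fmt.Println(get(1).Name) // the *Record stays alive as long as the map references it
}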

Does Ada deallocate memory automatically under some circumstances?

I was trying to find some information as to why the keyword new can be used to dynamically allocate objects but there is no keyword like delete that could be used to deallocate them. Going through mentions of Ada.Unchecked_Deallocation in Ada 2012 Reference Manual I found a few interesting excerpts:
Every object is finalized before being destroyed (for example, by leaving a subprogram_body containing an object_declaration, or by a call to an instance of Unchecked_Deallocation).
Each access-to-object type has an associated storage pool. The storage allocated by an allocator comes from the pool; instances of Unchecked_Deallocation return storage to the pool.
The Deallocate procedure of a user-defined storage pool object P may be called by the implementation to deallocate storage for a type T whose pool is P only at the places when an Allocate call is allowed for P, during the execution of an instance of Unchecked_Deallocation for T, or as part of the finalization of the collection of T.
If I had to guess, what that means is that it is possible for an implementation to automatically deallocate an object associated with an access value when execution leaves the scope in which the access type was declared. No need for explicit calls to Unchecked_Deallocation.
This seems to be supported by a section in Ada 95 Quality and Style Guide which states:
The unchecked storage deallocation mechanism is one method for overriding the default time at which allocated storage is reclaimed. The earliest default time is when an object is no longer accessible, for example, when control leaves the scope where an access type was declared (the exact point after this time is implementation-dependent). Any unchecked deallocation of storage performed prior to this may result in an erroneous Ada program if an attempt is made to access the object.
But the wording is rather unclear. If I were to run this code, what exactly would happen on the memory side of things?
with Ada.Text_IO; use Ada.Text_IO;
procedure Main is
   procedure Run is
      X : access Integer := new Integer'(64);
   begin
      Put (Integer'Image (X.all));
   end Run;
begin
   for I in 1 .. 16 loop
      Run;
   end loop;
end Main;

with Ada.Text_IO; use Ada.Text_IO;
procedure Main is
   procedure Outer is
      type Integer_Access is not null access Integer;
      procedure Run is
         Y : Integer_Access := new Integer'(64);
      begin
         Put (Integer'Image (Y.all));
      end Run;
   begin
      for I in 1 .. 16 loop
         Run;
      end loop;
   end Outer;
begin
   Outer;
end Main;
Is there a guaranteed memory leak or is X deallocated when Run finishes?
As outlined in Memory Management with Ada 2012, cited here, a local variable is typically allocated on the stack; its memory is automatically released when the variable's scope exits. In contrast, a dynamic variable is typically allocated on the heap; it is allocated using new, and its memory must be reclaimed, usually:
Explicitly, e.g. using an instance of Unchecked_Deallocation.
Implicitly, e.g. using a controlled type derived from Finalization; as noted here, when the scope of a controlled instance exits, automatic finalization calls Finalize, which reclaims storage in a manner suitable to the type's design.
The children of Ada.Containers use controlled types internally to encapsulate access values and manage memory automatically. For reference, compare your compiler's implementation of a particular container to the corresponding functional container cited here.
Ada offers a variety of ways to manage memory, summarized on slide 28 in the author's order of preferability:
Stack-based.
Container-based.
Finalization-based.
Subpool-based.
Manual allocate/deallocate.
In the particular case of Main, the program allocates storage for 16 instances of Integer. As noted on slide 12, "A compiler may reclaim allocated memory when the corresponding access type goes out of scope." For example, a recent version of the GNAT reference manual indicates that the following storage management implementation advice is followed:
A storage pool for an anonymous access type should be created at the point of an allocator for the type, and be reclaimed when the designated object becomes inaccessible.
Absent such an indication, the storage is not required to be reclaimed. It is typically reclaimed by the host operating system when the program exits.
Do your programs leak memory? It depends on the compiler.
AFAIK, there are only two times when a compiler is required to reclaim allocated memory:
when an access type with Storage_Size specified goes out of scope
when an instance of Ada.Unchecked_Deallocation is called with a non-null value
However, a compiler is allowed to reclaim memory in other cases. For example, a compiler may implement garbage collection, but I don't know of any that do.
FWIW, I don't know of any compiler for which your programs don't leak memory.

Do I change the value of a defglobal in a correct way?

I'm writing an app which should at some point get the value of a defglobal variable and change it. For this I do the following:
DATA_OBJECT cur_time_q;
if (!EnvGetDefglobalValue(CLIEnvironment, cur_timeq_kw, &cur_time_q)) return CUR_TIME_GLBVAR_MISSING;
uint64_t cur_time = t_left;
SetType(cur_time_q, INTEGER);
void* val = EnvAddLong(CLIEnvironment, cur_time);
SetValue(cur_time_q, val);
EnvSetDefglobalValue(CLIEnvironment, cur_timeq_kw, &cur_time_q);
I partly took this approach from "Advanced Programming Guide" and it works fine, but I have some questions:
Does EnvAddLong(...) add a value which will remain in memory until the environment is destroyed? May it consume memory and increase the execution time of other API functions like EnvRun(...) if the function with this fragment of code is called for, say, several thousand iterations?
Isn't it overkill? Should I go for something like EnvEval("(bind ...)") instead?
There's information in the CLIPS Advanced Programming Guide on how CLIPS handles garbage collection. API calls like EnvAddLong, which are used to create values to pass to other API functions, don't trigger garbage collection. Generally, API calls which cause code to execute or deallocate data structures, such as Run, Reset, Clear, and Eval, trigger garbage collection and will deallocate any transient data created by functions like EnvAddLong. So if your program design repeatedly assigns values to globals and then runs, any CLIPS data structures you allocate will eventually be freed once the data is confirmed to be garbage and is no longer referenced by any CLIPS data structures.
If you can easily construct a string to pass to the Eval function, it's often easier to do this rather than make multiple API calls to achieve the same result.
The API was overhauled in release 6.4, so many tasks such as assigning a value to a defglobal can be done with one step rather than several.
Environment *mainEnv;
CLIPSValue rv;
Defglobal *global;

mainEnv = CreateEnvironment();
Build(mainEnv,"(defglobal ?*x* = 3.1)");
Eval(mainEnv,"?*x*",&rv);
printf("%lf\n",rv.floatValue->contents);

global = FindDefglobal(mainEnv,"x");
if (global != NULL)
   {
   DefglobalSetInteger(global,343433);
   Eval(mainEnv,"(println ?*x*)",NULL);
   DefglobalGetValue(global,&rv);
   printf("%lld\n",rv.integerValue->contents);
   }

Registers usage during compilation

I found information that general purpose registers r1-r23 and r26-r28 are used by the compiler to store local variables, but do they have any other purpose? Also, which memory are these registers part of (cache/RAM)?
Finally, what does the global pointer gp in register r26 point to?
Also, which memory are these registers part of (cache/RAM)?
Registers are on-processor storage allowing fast data transfer (2 reads / 1 write per cycle). They hold values that may represent memory addresses, but, besides that, they are completely unrelated to memory or cache.
I found information that general purpose registers r1-r23 and r26-r28 are used by the compiler to store local variables, but do they have any other purpose?
Registers are used according to hardware or software conventions. Hardware conventions are tied to the instruction set architecture. For instance, the call instruction transfers control to a subroutine and stores the return address in register r31 (ra). Very nasty things are likely to happen if you overwrite the r31 register by any means without precautions. Software conventions are supposed to ensure proper behavior if used consistently within software. They indicate which registers have special uses, which need to be saved when context switching, etc. These conventions can be changed without hardware modifications, but doing so will probably require changes in several software tools (compiler, linker, loader, OS, ...).
general purpose registers r1-r23 and r26-r28 are used by the compiler to store local variables
Actually, some registers are reserved.
r1 is used by the assembler for macro expansion. (sw)
r2-r7 are used by the compiler to pass arguments to functions or get return values. (sw)
r24-r25 can only be used by exception handlers. (sw)
r26-r28 hold different pointers (global, stack, frame) that are set either by the runtime or the compiler and cannot be modified by the programmer. (sw)
r29-r31 are hw coded returns addresses for subprograms or interrupts/exceptions. (hw)
So only r8-r23 can be used by the compiler.
but do they have any other purpose?
No, and that's why they can be freely used by the compiler or programmer.
Finally, what does the global pointer in register r26 point to?
Accessing memory with loads or stores uses base-plus-offset addressing. The effective address for ldx or stx (where 'x' is b, bu, h, etc., depending on data characteristics) is computed by adding a register and a 16-bit immediate. This only allows reaching an address within +/-32k of the content of the register.
If the processor has the address of a variable in a register (for instance, the value returned by a malloc), the immediate allows a displacement to access fields in a struct, the next array value, etc.
If the variable is local or global, its address must be computed by the program. Pointer registers are used for that purpose. Local variables' addresses are computed by adding an immediate to the stack pointer (r27 or sp).
Addresses of global or static variables are computed by adding an immediate to the global pointer (r26 or gp). The content of gp corresponds to the start of the data segment; it is initialized by the loader just before program execution and must not be modified. The immediate displacement with respect to the start of the data segment is computed by the linker when it defines the memory layout.
Note that this only allows access to 64k of memory because of the 16-bit immediate width. If the global/static data exceeds this size and a variable is not within this range, a couple of instructions are required to load the 32-bit address of the variable before the data transfer. With gp this is not required, which provides faster access to global variables.

Can you "pin" an object in memory with Go?

I have a Go object whose address in memory I would like to keep constant. In C# one can pin an object's location in memory. Is there a way to do this in Go?
An object on which you keep a reference won't move. There is no handle or indirection, and the address you get is permanent.
From the documentation:
Note that, unlike in C, it's perfectly OK to return the address of a local variable; the storage associated with the variable survives after the function returns.
When you declare a variable, you can take its address using the & operator, and you can pass that address around.
tl;dr no - but it does not matter unless you're trying to do something unusual.
Worth noting that the accepted answer is partially incorrect.
There is no guarantee that objects are not moved - either on the stack or on the Go heap - but as long as you don't use unsafe this will not matter to you because the Go runtime will take care of transparently updating your pointers in case an object is moved.
If OTOH you use unsafe to obtain a uintptr, invoke raw syscalls, perform CGO calls, or otherwise expose the address (e.g. oldAddr := fmt.Sprintf("%p", &foo)), you should be aware that addresses can change and that neither the compiler nor the runtime will magically patch things for you.
While currently the standard Go compiler only moves objects on the stack (e.g. when a goroutine stack needs to be resized), there is nothing in the Go language specification that prevents a different implementation from moving objects on the Go heap.
While there is (yet) no explicit support for pinning objects on the stack or in the Go heap, there is a recommended workaround: manually allocate the memory outside of the Go heap (e.g. via mmap) and use finalizers to automatically free that allocation once all references to it are dropped. The benefit of this approach is that memory allocated manually outside of the Go heap will never be moved by the Go runtime, so its address will never change, but it will still be deallocated automatically when it is no longer needed, so it can't leak.
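A minimal sketch of that workaround, assuming a Unix-like system (the pinnedBuf and newPinnedBuf names are illustrative, not a standard API): the buffer is backed by an anonymous mmap, so it lives outside the Go heap and its address is stable, and a finalizer unmaps it once the wrapper object becomes unreachable.

package main

import (
	"fmt"
	"runtime"
	"syscall"
)

// pinnedBuf wraps memory that was allocated outside the Go heap.
type pinnedBuf struct {
	data []byte // backed by an anonymous mapping, not the Go heap
}

func newPinnedBuf(size int) (*pinnedBuf, error) {
	b, err := syscall.Mmap(-1, 0, size,
		syscall.PROT_READ|syscall.PROT_WRITE,
		syscall.MAP_ANON|syscall.MAP_PRIVATE)
	if err != nil {
		return nil, err
	}
	p := &pinnedBuf{data: b}
	// Once p is no longer referenced anywhere, release the mapping automatically.
	runtime.SetFinalizer(p, func(p *pinnedBuf) {
		_ = syscall.Munmap(p.data)
	})
	return p, nil
}

func main() {
	p, err := newPinnedBuf(4096)
	if err != nil {
		panic(err)
	}
	fmt.Printf("stable address: %p\n", &p.data[0]) // never moved by the Go runtime
}

The tradeoff is that such memory is invisible to the Go garbage collector, so values stored in it must not be the only references to objects on the Go heap.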
