Matlab: Are local functions (subfunctions) compiled together with main function or separately? - performance

I have heard that MATLAB has an automatic in-need compilation of functions which could create a lot of function-call overhead if you call a function many times like in the following code:
function output = BigFunction( args )
    for i = 1:10000000
        SmallFunction( args );
    end
end
Is it faster to call SmallFunction() if I put it in the same file as BigFunction() as a local function? And is there any good solution, other than pasting the code from SmallFunction() into BigFunction(), to optimize the performance?
Edit: It may be a false assumption that the function-call overhead is due to the in-need compilation. The question is how to cut down on the overhead without making the code look awful.

MATLAB hashes the functions it reads into memory. A function that exists in its own file is compiled only once. If you put BigFunction in BigFunction.m and SmallFunction in SmallFunction.m, you should receive the optimization benefit of having each m-file compiled once.

The answer to my first question is that a local function performs the same as a function in another file.
An idea for the second question is, if possible, to write SmallFunction() inline, which removes the function-call overhead. I found more about function-call performance in the MathWorks forum; the question and answer are pasted below:
Question:
I have 7 different types of function call:
An inline function. The body of the function is written down directly (inline).
A function is defined in a separate MATLAB file. The arguments are passed by the calling function (file-pass).
A function is defined in a separate MATLAB file. The arguments are provided by referencing global variables; only indices are provided by the calling function (file-global).
A nested function. The arguments are passed by the enclosing function (nest-pass).
A nested function. The arguments are those shared with the enclosing function; only indices are provided by the enclosing function (nest-share).
A sub function. The arguments are passed by the calling function (sub-pass).
A sub function. The arguments are provided by referencing global variables; only indices are provided by the calling function (sub-global).
I would like to know which function call provides better performance than the others in general.
The answer from MathWorks Support Team pasted here:
The ordering of performance of each function call from the fastest to the slowest tends to be as follows:
inline > file-pass = nest-pass = sub-pass > nest-share > sub-global > file-global
(A>B means A is faster than B and A=B means A is as fast as B)
First, inline is the fastest as it does not incur overhead associated with function call.
Second, when the arguments are passed to the callee function, the calling function sets up the arguments in such a way that the callee function knows where to retrieve them. This setup associated with function call in general incurs performance overhead, and therefore file-pass, nest-pass, and sub-pass are slower than inline.
Third, if the workspace is shared with nested functions and the arguments to a nested function are those shared within the workspace, rather than pass-by-value, then performance of that function call is inhibited. If MATLAB sees a shared variable within the shared workspace, it searches the workspace for the variable. On the other hand, if the arguments are passed by the calling function, then MATLAB does not have to search for them. The time taken for this search explains why nest-share is slower than file-pass, nest-pass, and sub-pass.
Finally, when a function call involves global variables, performance is even more inhibited. This is because to look for global variables, MATLAB has to expand its search space to the outside of the current workspace. Furthermore, the reason a function call involving global variables appears a lot slower than the others is that MATLAB Accelerator does not optimize such a function call. When MATLAB Accelerator is turned off with the following command,
feature accel off
the difference in performance between inline and file-global becomes less significant.
Please note that the behaviors depend largely on various factors such as operating systems, CPU architectures, MATLAB Interpreter, and what the MATLAB code is doing.
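As a rough illustration of the inline-vs-call gap, a tic/toc micro-benchmark along these lines can be used (a sketch; smallFunction is a made-up example, and absolute numbers vary with MATLAB version and JIT behavior):

```matlab
function inline_vs_call_demo()
    n = 1e7;
    x = 1:n;

    % Variant 1: call a local function on every iteration.
    tic;
    s1 = 0;
    for i = 1:n
        s1 = s1 + smallFunction(x(i));
    end
    tCall = toc;

    % Variant 2: the same work with the body written inline.
    tic;
    s2 = 0;
    for i = 1:n
        s2 = s2 + x(i) * 2;
    end
    tInline = toc;

    fprintf('call: %.3fs  inline: %.3fs\n', tCall, tInline);
end

function y = smallFunction(x)
    y = x * 2;
end
```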

What's the difference between inline and block compilation in SBCL?

Several weeks ago, SBCL 2.0.2 was released and brought the block compilation feature. I have read this article to understand what it is.
I have a question: what's the difference between (declaim (inline 'some-function)) and block compilation? Is block compilation done automatically by the compiler?
Thanks.
Inline compilation is a specific optimization technique: the body of a called function is integrated directly into the calling function - usually using its source code - and then compiled.
This means that a function might be inlined not only into one function, but into multiple functions.
Advantage: the overhead of calling a function disappears.
Disadvantage: the code size increases, and the calling function(s) need to be recompiled when the inlined function changes and we want this change to become visible. Macros have the same problem.
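For reference, a minimal inline declamation in standard Common Lisp looks like this (note that inline takes unquoted function names):

```lisp
(declaim (inline square))

(defun square (x)
  (* x x))

;; Calls to SQUARE can now be expanded in place by the compiler,
;; at the cost of recompiling callers when SQUARE changes.
(defun sum-of-squares (a b)
  (+ (square a) (square b)))
```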
Block compilation means that a bunch of code gets compiled together with different semantic constraints and that this enables the compiler to do a bunch of new optimizations.
Common Lisp has support in the standard for block compilation of single files: it allows the file compiler to assume that a file is such a block of code.
Example from the Common Lisp standard:
3.2.2.3 Semantic Constraints
A call within a file to a named function that is defined in the same file refers to that function, unless that function has been declared notinline. The consequences are unspecified if functions are redefined individually at run time or multiply defined in the same file.
This allows the code to call a global function and not use the symbol's function cell for the call. Thus this disables late binding for global function calls - in this file and for functions in this file.
It's not said how this can be achieved, but the compiler might just allocate the code somewhere and the calls just jump there.
So this part of block compilation is defined in the standard and some compilers are doing that.
Block compilation for multiple files
If the file compiler can use block compilation for one file, then what about multiple files? A few compilers can also tell the file compiler that several files make a block for compilation. CMUCL does that. SBCL was derived and simplified from CMUCL and has lacked it until now. I think Lucid Common Lisp (which is no longer actively sold) supported something like that, too.
It might be useful to add this to SBCL, too.

Simple function is not inlined

I am adding metrics calls to my Go program using Prometheus. I decided to separate all the Prometheus calls to simple function calls in a separate source file for maintainability (in case I want to move to a different metrics package). But more important it also makes it faster to write the code as the IDE will prompt with the label names as parameters to the function call. Eg something like this:
var requestCounter = promauto.NewCounterVec(prometheus.CounterOpts{}, []string{"label1", "label2"})

func incrementRequestCounter(label1, label2 string) {
	requestCounter.WithLabelValues(label1, label2).Inc()
}
Some of these functions are called often in a low-level loop, so I don't want these calls to slow down the code too much. My assumption was that such a simple line of code would be easy to inline. However, checking (with build option -gcflags -m) I found that the above single-line function is not inlined (go1.12.5 windows/amd64). Does anyone know why? And how to get around this? Note that this function is inlined:
func incrementRequestCounter(label1, label2 string) {
	requestCounter.WithLabelValues(label1, label2)
}
With further experimentation it seems that a function will not be inlined if it has more than one call to non-inlineable functions. (You can have lots of calls to inlineable functions and a function will still be inlineable.)
Just posting an answer (since nobody else has) with these points:
Benchmark before trying to optimise.
A seemingly simple function may be difficult to inline
Inlining is evolving and the above may be inlined in the future
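A self-contained way to experiment is to build a stand-in for the counter (hypothetical types and names throughout, so the file compiles without the Prometheus dependency) and inspect the compiler's decisions with `go build -gcflags=-m`:

```go
package main

import "fmt"

// counterVec is a hypothetical stand-in for a Prometheus counter vector.
type counterVec struct{ counts map[string]int }

type counter struct {
	vec *counterVec
	key string
}

// WithLabelValues mimics the shape of the real API: it selects a child
// counter by label values.
func (c *counterVec) WithLabelValues(labels ...string) *counter {
	return &counter{vec: c, key: labels[0] + "/" + labels[1]}
}

// Inc increments the selected child counter.
func (c *counter) Inc() { c.vec.counts[c.key]++ }

var requestCounter = &counterVec{counts: map[string]int{}}

// Whether this wrapper is inlined depends on the inlining budget of the
// calls in its body; check the compiler's report with: go build -gcflags=-m
func incrementRequestCounter(label1, label2 string) {
	requestCounter.WithLabelValues(label1, label2).Inc()
}

func main() {
	incrementRequestCounter("api", "get")
	fmt.Println(requestCounter.counts["api/get"]) // prints 1
}
```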

Make-array in SBCL

How does make-array work in SBCL? Are there some equivalents of new and delete operators in C++, or is it something else, perhaps assembler level?
I peeked into the source, but didn't understand anything.
When using SBCL compiled from source and an environment like Emacs/Slime, it is possible to navigate the code quite easily using M-. (meta-point). Basically, the make-array symbol is bound to multiple things: deftransform definitions and a defun. The deftransforms are used mostly for optimization, so it is better to follow the function first.
The make-array function delegates to an internal make-array% one, which is quite complex: it checks the parameters and dispatches to different specialized implementations of arrays based on those parameters: a bit-vector is implemented differently from a string, for example.
If you follow the case for simple-array, you find a function which calls allocate-vector-with-widetag, which in turn calls allocate-vector.
Now, allocate-vector is bound to several objects, multiple defoptimizers forms, a function and a define-vop form.
The function is only:
(defun allocate-vector (type length words)
  (allocate-vector type length words))
Even if it looks like a recursive call, it isn't.
The define-vop form is a way to define how to compile a call to allocate-vector. In the function, and anywhere where there is a call to allocate-vector, the compiler knows how to write the assembly that implements the built-in operation. But the function itself is defined so that there is an entry point with the same name, and a function object that wraps over that code.
define-vop relies on a Domain Specific Language in SBCL that abstracts over assembly. If you follow the definition, you can find different vops (virtual operations) for allocate-vector, like allocate-vector-on-heap and allocate-vector-on-stack.
Allocation on the heap translates into a call to calc-size-in-bytes, a call to allocation, and put-header, which most likely allocates memory and tags it (I followed the definition to src/compiler/x86-64/alloc.lisp).
How memory is allocated (and garbage collected) is another problem.
allocation emits assembly code using %alloc-tramp, which in turn executes the following:
(invoke-asm-routine 'call (if to-r11 'alloc-tramp-r11 'alloc-tramp) node)
There are apparently assembly routines called alloc-tramp-r11 and alloc-tramp, which are predefined assembly routines. A comment says:
;;; Most allocation is done by inline code with sometimes help
;;; from the C alloc() function by way of the alloc-tramp
;;; assembly routine.
There is a base of C code for the runtime, see for example /src/runtime/alloc.c.
The -tramp suffix stands for trampoline.
Have also a look at src/runtime/x86-assem.S.
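As a hands-on way to see this machinery, one can disassemble a function that calls make-array (a sketch; the exact instructions depend on the platform and SBCL version):

```lisp
(defun make-bits (n)
  (make-array n :element-type 'bit))

;; On x86-64 the disassembly shows the compiled allocation sequence,
;; possibly including a call to an alloc-tramp assembly routine.
(disassemble #'make-bits)
```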

Bash Functions Order and Timing

This should be easy to answer, but I couldn't find exactly what I was asking on google/stackoverflow.
I have a bash script with 18 functions (785 lines) - ridiculous, I know; I need to learn another language for the lengthy stuff. I have to run these functions in a particular order because functions later in the sequence use info from the database and/or text files that were modified by the preceding functions. I am pretty much done with the core functionality of all the functions individually, and I would like a function to run them all (One ring to rule them all!).
So my questions are, if I have a function like so:
function precious()
{
    rings_of   # Functions in sequence
    elves      # This function modifies the DB
    men        # This function uses the DB to modify text
    dwarves    # This function uses that modified text
}
Would variables be carried from one function to the next if declared like so? (inside of a function):
function men()
{
    ...
    frodo_sw_name=`some DB query returning the name of Frodo's sword`
    ...
}
Also, if the functions are called in a specific order, as seen above, will Bash wait for one function to finish before starting the next? - I am pretty sure the answer is yes, but I have a lot of typing to do either way, and since I couldn't find this answer quickly on the internet, I figured it might benefit others to have this answer posted as well.
Thanks!
Variables persist unless you run the function in a subshell. This would happen if you run it as part of a pipeline, or group it with (...) (you should use { ... } instead for grouping if you don't want to create a subshell).
The exception is if you explicitly declare the variables in the function with declare, typeset, or local, which makes them local to that function rather than global to the script. But you can also use the -g option to declare and typeset to declare global variables (this would obviously be inappropriate for the local declaration).
See this tutorial on variable scope in bash.
Commands are all run sequentially, unless you deliberately background them with & at the end. There's no difference between functions and other commands in this regard.
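A minimal demonstration of both points (variable visibility and sequential execution), using made-up names in the spirit of the question:

```shell
#!/bin/bash

men() {
    frodo_sw_name="Sting"   # global: still visible after the function returns
    local ring_count=1      # local: gone when the function returns
}

precious() {
    men                     # runs to completion before the next line starts
    echo "sword: $frodo_sw_name"
    echo "rings: ${ring_count:-unset}"
}

precious

# A subshell gets a copy of the variables; changes do not propagate back.
( frodo_sw_name="Glamdring" )
echo "still: $frodo_sw_name"   # prints: still: Sting
```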

How can I get the name of a calling function within a module in Mathematica?

If I write a function or module that calls another module, how can I get the name of the calling function/module? This would be helpful for debugging purposes.
The Stack function will do almost exactly what you want, giving a list of the "tags" (for your purposes, read "functions") that are in the call stack. It's not bullet-proof, because of the existence of other functions like StackBegin and StackInhibit, but those are very exotic to begin with.
In most instances, Stack will return the symbols that name the functions being evaluated. To figure out what context those symbols are from, you can use the Context function, which is about as close as you can come to figuring out what package they're a part of. This requires some care, though, as symbols can be added to packages dynamically (via Get, Import, ToExpression or Symbol) and they can be redefined or modified (with new evaluation rules, for instance) in other packages as well.
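A minimal sketch of using Stack for this (hypothetical function names; the exact form of the output varies by version):

```mathematica
caller[] := callee[]
callee[] := Print["call stack: ", Stack[]]

caller[]  (* the printed stack includes the tag caller *)
```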