How can I defeat the Go optimizer in benchmarks? - performance

In Rust, you can use the black_box function to force the compiler to assume that the function's argument is used (so it cannot optimize away the code that generates that value) and to treat its return value as opaque (preventing it from doing constant folding and other such optimizations).
Is there a similar facility in Go (to accomplish either task)?

Is there a similar facility in Go (to accomplish either task)?
No.
If you want to use a result: Assign to an exported global.
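A minimal sketch of that approach (the package name, the Sink variable, and the sum function below are illustrative, not part of the original answer):

package sink_test

import "testing"

// Sink is an exported package-level variable; assigning the benchmark's
// result to it keeps the compiler from treating the computation as dead code.
var Sink int

func sum(n int) int { // trivial stand-in for the code being benchmarked
	s := 0
	for i := 0; i < n; i++ {
		s += i
	}
	return s
}

func BenchmarkSum(b *testing.B) {
	var r int
	for i := 0; i < b.N; i++ {
		r = sum(1024)
	}
	Sink = r // publish the result so it counts as used
}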

I believe runtime.KeepAlive is recommended, as per the following GitHub issue. Unfortunately, it's unclear whether anything similar exists for function arguments, or whether KeepAlive is even guaranteed to work.
https://github.com/golang/go/issues/27400
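For comparison, here is a hedged sketch using runtime.KeepAlive; as the issue above notes, it is not clear that this is guaranteed to defeat the optimizer (the function and variable names are again illustrative):

package keepalive_test

import (
	"runtime"
	"testing"
)

func sum(n int) int {
	s := 0
	for i := 0; i < n; i++ {
		s += i
	}
	return s
}

func BenchmarkSumKeepAlive(b *testing.B) {
	var r int
	for i := 0; i < b.N; i++ {
		r = sum(1024)
	}
	// KeepAlive marks r as reachable at this point; whether that also stops
	// the compiler from eliminating the loop above is exactly what the
	// linked issue leaves open.
	runtime.KeepAlive(r)
}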

Related

Why does the first parameter of the function LdrRegisterDllNotification have to be zero?

As stated in the Microsoft docs, the Flags parameter of LdrRegisterDllNotification must be zero, but no further explanation is provided. What's the purpose of defining this parameter at all if the only accepted value is zero? What happens if a non-zero value is passed instead?
There are usually two possible reasons for a parameter that the documentation tells you to pass as zero:
The parameter is unused in all existing Windows versions but might be used for something in the future; the developers might have envisioned extra features that they did not have time to implement, etc.
The parameter is used to pass undocumented information/flags that trigger some private functionality inside the function. Windows 95, for example, supports undocumented flags in its *Alloc functions that cause them to allocate shared memory visible to all processes.
Either way, the best practice is to just follow the documentation and pass zero.

GCC syntax check ensure NULL passed as last parameter in function call with variable arguments

I want to do something similar to how, in GCC, you can do syntax checking on printf-style calls (to make sure that the argument list is actually correct for the call).
I have some functions that take a variable number of parameters. Rather than enforce which parameters are sent, I need to ensure that the last parameter passed is NULL, regardless of how many parameters are passed in.
Is there a way to get GCC to do this type of syntax check during compile time?
You probably want the sentinel function attribute, so declare your function like
void foo(int,double,...) __attribute__((sentinel));
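For a concrete illustration of what the attribute buys you, the print_all function below is invented for this example; with the attribute in place, GCC warns when a call omits the trailing NULL:

#include <stdarg.h>
#include <stddef.h>
#include <stdio.h>

/* Illustrative variadic function: prints C strings until a NULL terminator.
   The sentinel attribute tells GCC to check calls for that terminator. */
void print_all(const char *first, ...) __attribute__((sentinel));

void print_all(const char *first, ...)
{
    va_list ap;
    va_start(ap, first);
    for (const char *s = first; s != NULL; s = va_arg(ap, const char *))
        puts(s);
    va_end(ap);
}

int main(void)
{
    print_all("a", "b", "c", NULL);   /* ok: ends with NULL */
    /* print_all("a", "b", "c");         GCC reports a missing sentinel here */
    return 0;
}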
You might consider customizing your GCC with a plugin or a MELT extension to type-check your variadic functions more precisely. That is, you could extend GCC with your own attributes that perform more precise checks (or simply make additional checks based on the names of your functions).
The ex06/ example of melt-examples does a similar check for the jansson library; unfortunately, that example is incomplete as of today (October 18th, 2012), and I am still working on it.
In addition, you could define a variadic macro to call such a function by always adding a NULL e.g. something like:
#define FOO(N,D,...) foo((N),(D),##__VA_ARGS__,NULL)
Then by coding FOO(i+3,3.14,"a") you'll get foo((i+3),(3.14),"a",NULL) so you are sure that a NULL is appended.
Basile Starynkevitch is right: go with a function attribute. There are a ton of other useful function attributes, like being able to tell the compiler "If the caller doesn't check the return value of this function, it's an error."
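That particular check is the warn_unused_result attribute; a minimal sketch (the reserve_buffer function is made up for this example):

#include <stdio.h>

/* GCC warns (and errors under -Werror) if a caller ignores this return value. */
int reserve_buffer(int n) __attribute__((warn_unused_result));

int reserve_buffer(int n)
{
    return n > 0 ? 0 : -1;
}

int main(void)
{
    if (reserve_buffer(16) != 0)   /* ok: the result is checked */
        fprintf(stderr, "reserve failed\n");
    /* reserve_buffer(16);            ignoring the result draws a warning */
    return 0;
}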
You may also want to see whether splint can check this for you, but I don't think it can; I think it would have stuck in my memory.
If you haven't read over this page of GCC compiler flags, do that, too. There are a ton of handy checks in there. http://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html

How can I make an external toolbox available to a MATLAB Parallel Computing Toolbox job?

As a continuation of this question and the subsequent answer, does anyone know how to have a job created using the Parallel Computing Toolbox (using createJob and createTask) access external toolboxes? Is there a configuration parameter I can specify when creating the job to specify toolboxes that should be loaded?
According to this section of the documentation, one way you can do this is to specify either the 'PathDependencies' property or the 'FileDependencies' property of the job object so that it points to the functions you need the job's workers to be able to use.
You should be able to point the way to the KbCheck function in PsychToolbox, along with any other functions or directories needed for KbCheck to work properly. It would look something like this:
obj = createJob('PathDependencies',{'path_to_KbCheck',...
'path_to_other_PTB_functions'});
A few comments, based on my work troubleshooting this:
It appears that there are inconsistencies in how well nested functions and anonymous functions work with the Parallel Computing Toolbox. I was unable to get them to work, while others have been able to. (Also see here.) As such, I would recommend having each function stored in its own file and including those files using the PathDependencies or FileDependencies properties, as described by gnovice above.
It is very hard to troubleshoot the Parallel Computing Toolbox, as everything happens outside your view. Use breakpoints liberally in your code, and the inspect command is your friend. Also note that if there is an error, task objects will contain an error parameter, which in turn will contain an ErrorMessage string and possibly an Error.causes MException object. Both of these were immensely useful in debugging.
When including Psychtoolbox, you need to do it as follows. First, create a jobStartup.m file with the following lines:
PTB_path = '/Users/eliezerk/Documents/MATLAB/Psychtoolbox3/';
addpath( PTB_path );
cd( PTB_path );
SetupPsychtoolbox;
However, since the Parallel Computing Toolbox can't handle any graphics functionality, running SetupPsychtoolbox as-is will actually cause your thread to crash. To avoid this, you need to edit the PsychtoolboxPostInstallRoutine function, which is called at the very end of SetupPsychtoolbox. Specifically, you want to comment out the line AssertOpenGL (line 496 as of the time of this answer; this may change in future releases).

Cross version line matching

I'm considering how to do automatic bug tracking and as part of that I'm wondering what is available to match source code line numbers (or more accurate numbers mapped from instruction pointers via something like addr2line) in one version of a program to the same line in another. (Assume everything is in some kind of source control and is available to my code)
The simplest approach would be to use a diff tool/lib on the files and do some math on the line number spans, however this has some limitations:
It doesn't handle cross file motion.
It might not play well with lines that get changed
It doesn't look at the information available in the intermediate versions.
It provides no way to manually patch up lines when the diff tool gets things wrong.
It's kinda clunky
Before I start diving into developing something better:
What already exists to do this?
What features do similar system have that I've not thought of?
Why do you need to do this? If you use decent source version control, you should have access to old versions of the code, you can simply provide a link to that so people can see the bug in its original place. In fact the main problem I see with this system is that the bug may have already been fixed, but your automatic line tracking code will point to a line and say there's a bug there. Seems this system would be a pain to build, and not provide a whole lot of help in practice.
My suggestion is: instead of trying to track line numbers, which as you observed can quickly get out of sync as software changes, you should decorate each assertion (or other line of interest) with a unique identifier.
Assuming you're using C, in the case of assertions, this could be as simple as changing something like assert(x == 42); to assert(("check_x", x == 42)); -- this is functionally identical, due to the semantics of the comma operator in C and the fact that a string literal will always evaluate to true.
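A compilable sketch of that trick (the check_x tag and the surrounding code are illustrative):

#include <assert.h>

int main(void)
{
    int x = 41;
    /* The string literal always evaluates to true, so the comma expression
       has the same truth value as the comparison; when the assertion fires,
       the "check_x" tag shows up in the stringified expression that assert
       prints. */
    assert(("check_x", x == 42));
    return 0;
}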
Of course this means that you need to identify a priori those items that you wish to track. But given that there's no generally reliable way to match up source line numbers across versions (by which I mean that for any mechanism you could propose, I believe I could propose a situation in which that mechanism does the wrong thing) I would argue that this is the best you can do.
Another idea: If you're using C++, you can make use of RAII to track dynamic scopes very elegantly. Basically, you have a Track class whose constructor takes a string describing the scope and adds this to a global stack of currently active scopes. The Track destructor pops the top element off the stack. The final ingredient is a static function Track::getState(), which simply returns a list of all currently active scopes -- this can be called from an exception handler or other error-handling mechanism.
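A rough sketch of that idea, assuming a single-threaded program (the class layout and names below are mine, not from the answer):

#include <iostream>
#include <string>
#include <vector>

// RAII scope tracker: the constructor pushes a scope name onto a global
// stack and the destructor pops it (not thread-safe; a real version would
// use thread-local storage or locking).
class Track {
public:
    explicit Track(const std::string& name) { stack().push_back(name); }
    ~Track() { stack().pop_back(); }

    // Snapshot of the currently active scopes, outermost first.
    static std::vector<std::string> getState() { return stack(); }

private:
    static std::vector<std::string>& stack() {
        static std::vector<std::string> s;
        return s;
    }
};

// Error-reporting helper called at the failure site, while the scopes that
// led there are still on the stack.
void reportError(const std::string& what) {
    std::cerr << "error: " << what << " in";
    for (const auto& scope : Track::getState())
        std::cerr << ' ' << scope;
    std::cerr << '\n';
}

void parseHeader() {
    Track t("parseHeader");
    reportError("bad header");   // simulated failure
}

int main() {
    Track t("loadFile");
    parseHeader();               // prints: error: bad header in loadFile parseHeader
    return 0;
}

Note that the snapshot is taken at the point of failure; if you report via exceptions instead, capture getState() when constructing the exception, since unwinding pops the scopes before a handler runs.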

Is there any downside to redundant qualifiers? Any benefit?

For example, referencing something as System.Data.Datagrid as opposed to just Datagrid. Please provide examples and explanation. Thanks.
The benefit is that you don't need to add an import for everything you use, especially if it's the only thing you use from a particular namespace; it also prevents collisions.
The downside, of course, is that the code balloons out in size and gets harder to read the more you use specific qualifiers.
Personally I tend to use imports for most things unless I know for sure I will only be using something from a particular namespace once or twice, so it won't impact the readability of my code.
You're being very explicit about the type you're referencing, and that is a benefit. At the same time, though, you're giving up code clarity, which is clearly a downside in my case, as I want code to be readable and understandable. I go for the short version unless I have a conflict between different namespaces that can only be solved by referencing the classes explicitly, or unless I make an alias for it with the using keyword:
using Datagrid = System.Data.Datagrid;
Actually the full path is global::System.Data.DataGrid. The point of using a more qualified path is to avoid having to use additional using statements, especially if the introduction of another using will cause problems with type resolution. More fully qualified identifiers exist so that you can be explicit when you need to be explicit, but if the class's namespace is clear, then the DataGrid version is clearer to many.
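A small illustration of when the fully qualified (global::) form earns its keep; the colliding namespace below is invented for the example:

// Invented nested namespace that collides with the BCL's System namespace.
namespace MyApp.System
{
    class Console { }
}

namespace MyApp
{
    class Program
    {
        static void Main()
        {
            // Inside MyApp, plain "System.Console" resolves to
            // MyApp.System.Console (which has no WriteLine), so the
            // global:: prefix is needed to reach the real type.
            global::System.Console.WriteLine("hello");
        }
    }
}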
I generally use the shortest form available in order to keep the code as clean and readable as possible. That's what using directives are for, after all, and tooltips in the VS editor give you instant detail on the provenance of a type.
I also tend to use a namespace tag for RCWs in a COM interop layer, to call out those variables explicitly in the code (they may need special attention on lifecycle and collection), e.g.
using _Interop = Some.Interop.Namespace;
In terms of performance there is no upside/downside. Everything is resolved at compile time and the generated MSIL is identical whether you use fully-qualified names or not.
The reason why its use is prevalent in the .NET world is because of auto-generated code, such as designer markup. In that case it would be better to fully-qualify names like class names because of possible conflicts with other classes you may have in your code.
If you have a tool like ReSharper, it will actually tell you which of your fully-qualified references are unnecessary (e.g. by graying them out) so you can lop them off. If you frequently cut and paste code across your various code bases, it would be a must to fully qualify them. (Then again, why would you want to cut and paste all the time? It's a bad form of code reuse!)
I don't think there is really a downside, just readability vs. actual time spent coding. In general, if you don't have namespaces with ambiguous objects, I don't think it's really needed. Another thing to consider is the level of use. If you have one method that uses reflection and you are alright with typing System.Reflection 10 times, then it's not a big deal, but if you plan on using a namespace a lot then I would recommend a using directive.
Depending on your situation, extra qualifiers will generate a warning (if this is what you mean by redundant). If you then treat warnings as errors, that's a pretty serious downside.
I've run into this with GCC for example.
struct A {
    int A::b; // warning!
};
