Preventing "Notation Palette" from popping up - wolfram-mathematica

I have some session initialization code that loads the "Notation" package for each session. This brings up the Notation Palette. Any idea how to prevent that, or how to add code that gets rid of it automatically?
OK, belisarius's tip solves it; I need to load the Notation package as follows:
Notation`AutoLoadNotationPalette = False;
Needs["Notation`"];

From Notation.m:
If[ !ValueQ[AutoLoadNotationPalette::usage],
  AutoLoadNotationPalette::usage =
    "AutoLoadNotationPalette is a boolean variable. If False then the Notation palette will not be loaded when the Notation package is loaded. If the value is undefined or True the Notation palette will be loaded when the Notation package loads. Other package designers can set this variable outside of the Notation package through a statement similar to Notation`AutoLoadNotationPalette = False."
];
HTH!

Underscore in LHS of assignment statement in Go

What does this code snippet do?
var i int
_ = i
I understand the use of "_" as a blank identifier, but what does the second line in the above achieve?
Here is an example from the etcd GitHub repository.
The code is machine generated. The generator added the statements _ = i to avoid unused variable declarations in the case where there's nothing to marshal.
The author of the code generator probably found it easier to add the blank assignment statements than to omit the variables when not needed.
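A minimal sketch of what the generator is working around (the surrounding program is illustrative): Go refuses to compile a function with an unused local variable, and the blank assignment counts as a use.
package main

import "fmt"

func main() {
    var i int // without the next line, this fails to compile: i is declared and not used
    _ = i     // the blank assignment marks i as used, but does nothing at runtime

    fmt.Println("compiles fine")
}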
I would guess you might do this to stop Go complaining about an unused variable.
It would be better not to declare the variable at all.
Note, sometimes the underscore is used in imports so that the init() code of a package gets executed, even though no functions from that package are called directly.
This technique is often applied in image processing to register image format handlers; see the sketch after the link below.
See
A use case for importing with blank identifier in golang
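A minimal sketch of that pattern (the file name picture.png is hypothetical): the blank import exists only for its init() side effect, which registers the PNG decoder with the image package.
package main

import (
    "fmt"
    "image"
    _ "image/png" // imported only for init(), which registers the PNG format
    "os"
)

func main() {
    f, err := os.Open("picture.png") // hypothetical input file
    if err != nil {
        panic(err)
    }
    defer f.Close()

    // Without the blank import above, image.Decode would fail with
    // "image: unknown format" for PNG input.
    img, format, err := image.Decode(f)
    if err != nil {
        panic(err)
    }
    fmt.Println(format, img.Bounds())
}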

Recursive variable declaration

I have just seen this black magic in folly/ManualExecutor.h
TimePoint now_ = now_.min();
After grep'ing the whole library source code, I haven't found a definition of the variable now_ anywhere other than here. What's happening here? Is this effectively some sort of recursive variable declaration?
That code is most likely equivalent to this:
TimePoint now_ = TimePoint::min();
That means min() is a static method, and calling it through an instance is the same as calling it through the type; the instance is used only to determine the type. No black magic involved; those are just two syntaxes for doing the same thing.
As to why the code in question compiles: now_ is already declared by the left side of the line, so when it is used in the initializer on the right side, the compiler already knows its type and is able to call the static method. Trying to call a non-static method there should give an error (see the comment by @BenVoigt below).
As demonstrated by the fact that you had to write this question, the syntax in question is not the clearest. It may be tempting when the type name is long, and it is perhaps justifiable in member variable declarations with initializers (which the code in question is). In code inside functions, auto is a better way to reduce repetition.
Digging into the code shows that TimePoint is an alias for chrono::steady_clock::time_point, where min() is indeed a static method that returns the smallest possible time point:
http://en.cppreference.com/w/cpp/chrono/time_point/min
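A self-contained sketch of the two equivalent spellings (the struct and member names here are illustrative, not folly's):
#include <chrono>
#include <iostream>

using TimePoint = std::chrono::steady_clock::time_point;

struct ExecutorLike {
    // Both default member initializers call the same static function:
    TimePoint a_ = TimePoint::min(); // through the type
    TimePoint b_ = b_.min();         // through the member being declared
};

int main() {
    ExecutorLike e;
    std::cout << (e.a_ == e.b_) << '\n'; // prints 1
}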

DIA SDK how to get parent function of FuncDebugStart / FuncDebugEnd?

The documentation for SymTagFuncDebugStart and SymTagFuncDebugEnd states that calling IDiaSymbol::get_lexicalParent will return a symbol for the enclosing function. I interpret this to mean I will get an IDiaSymbol whose get_symTag method returns SymTagFunction. However, when I do this it returns the SymTagCompiland, not the function. So the documentation appears wrong, but worse, I'm not sure how to actually tie SymTagFuncDebugStart and SymTagFuncDebugEnd to the containing SymTagFunction.
Does anyone know? A few dumps suggest that SymTagFuncDebugStart and SymTagFuncDebugEnd always come immediately after the corresponding SymTagFunction when enumerating the symbols via IDiaEnumSymbols. Put another way: if IDiaSymbol::get_symIndexId returns n for the function, it returns n+1 and n+2 for the func debug start and func debug end respectively.
But I can't be sure this is always true, and it seems unreliable and hackish.
Does anyone have any suggestions on the correct way to do this?
Could you paste your code here? I guess there is something wrong in it. Calling get_lexicalParent on SymTagFuncDebugStart and SymTagFuncDebugEnd should return the symbol for the enclosing function (SymTagFunction).
I got this working eventually. The problem is that when you enumerate all the symbols in the global scope using SymTagNull, you will find the FuncDebugStart and FuncDebugEnd symbols. The lexical parent of these symbols is the global scope, because it's the "parent" in the sense that it vended you the pointers to the FuncDebugStart and FuncDebugEnd symbols.
If you get the FuncDebugStart and FuncDebugEnd by calling findChildren on an actual SymTagFunction symbol, however, then its lexical parent will in fact be the original function. So this was an issue of unclear documentation.
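A sketch of the working approach, assuming you already hold an IDiaSymbol* for a SymTagFunction (error handling abbreviated):
#include <dia2.h>
#include <atlbase.h> // CComPtr

// Ask the function itself for its FuncDebugStart child; the child's
// lexical parent is then the function, not the compiland.
void DumpFuncDebugStart(IDiaSymbol* function)
{
    CComPtr<IDiaEnumSymbols> children;
    if (FAILED(function->findChildren(SymTagFuncDebugStart, NULL, nsNone, &children)))
        return;

    CComPtr<IDiaSymbol> child;
    ULONG fetched = 0;
    while (SUCCEEDED(children->Next(1, &child, &fetched)) && fetched == 1)
    {
        CComPtr<IDiaSymbol> parent;
        if (SUCCEEDED(child->get_lexicalParent(&parent)))
        {
            DWORD tag = 0;
            parent->get_symTag(&tag); // SymTagFunction here, not SymTagCompiland
        }
        child.Release(); // release before reusing the smart pointer
    }
}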

Why does the F# debugger lie?

Can anyone explain why the VS2012 debugger shows different values for the same member of an object? (See the figure.)
http://s2.uploads.ru/jlkw0.png (Sorry for the non-English VS interface, but I think the situation is clear.)
Here is the code:
http://pastie.org/7186239
The debugging experience seems to do a poor job of identifying the correct binding for identifiers. In your example, this means that any identifier called Source is really showing the value of this.Source, rather than the corresponding property of the correct object. Note that you can get the right value by hovering over y and expanding the members (although this is obviously not a great experience).
There are even more confusing ways that this issue manifests itself:
type T() =
    member val P = 1
    member this.DoSomething() =
        let P = "test" // set breakpoint here, hover over P
        printfn "%i" this.P // set breakpoint here, hover over P

T().DoSomething()
Now, whichever instance of P you hover over, you get the wrong thing!

Does defining variables within a loop matter?

This is a piece of code I am writing.
var cList:XMLList = xml.defines.c;
var className:String;
var properties:XMLList;
var property:XML;
var i:int, l:int;
var c:XML;
for each (c in cList)
{
    className = String(c.@name);
    if (cDict[className])
    {
        throw new Error('class name has been defined: ' + className);
    }
    if (className)
    {
        cDict[className] = c;
    }
    properties = c.property;
    i = 0;
    l = properties.length();
    if (l)
    {
        propertyDict[className] = new Dictionary();
        for (; i < l; i++)
        {
            // ...
        }
    }
}
As you can see, I defined all variables outside of the loops. I am always worried that if I defined them inside the loop, it might slow down execution, though I have no proof; it's just a feeling.
I also don't like that the AS3 grammar allows using a variable name before its definition, so I always define vars at the very beginning of my functions.
Now I am worried these habits might backfire on me someday. Or is it just a matter of personal taste?
No, it doesn't matter, because the compiler uses variable hoisting; it moves all variable declarations to the top of the function.
More explanation on variables:
http://help.adobe.com/en_US/ActionScript/3.0_ProgrammingAS3/WS5b3ccc516d4fbf351e63e3d118a9b90204-7f9d.html
AS3 IDEs allow you to use variable names before the declaration, because they know that the compiler uses a mechanism called "hoisting" to move all variable definitions to the top of a function, anyway. This happens without you noticing it, so that you can conveniently keep your code more readable. Therefore, it does not really make a difference if you manually move all the definitions to the top - unless you like your code to be structured in that way.
For the same reason, variable declaration within loops does not affect performance, unless you keep those loops in separate functions - only then will it result in actual allocation of a variable.
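A tiny sketch of hoisting in action (the compiler may warn about using a variable before its declaration, but the behavior is as commented):
function hoistingDemo():void
{
    trace(n); // NaN, not an error: `n' is hoisted, so it already exists here
    var n:Number = 1;
    trace(n); // 1
}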
A lot of AS3 programmers, myself included, would consider the way you have it now to be the "right" way, and putting variables inside any block the "wrong" way. Speed does not matter in this situation; I'll try to present both sides' arguments in as unbiased a way as I can.
Put variable definitions as close as possible to the code that uses them. The motivation is simple: if a variable is used somewhere, you may want to know where it was declared. This is useful if you aren't sure of the variable's type or its modifiers. Normally the declaration is also the place to put commentary.
Put variables where they are actually scoped (the top of the function). Even a seasoned ActionScript programmer may eventually confuse him- or herself by declaring variables inside blocks, where a seemingly uninitialized variable suddenly contains a leftover value. The common case looks like this:
for (var i:int; i < x; i++) {
    // the inner loop runs only on the first pass: `j' is hoisted,
    // so it keeps the value y from then on
    for (var j:int; j < y; j++) { ... }
}
There is also a long tradition, originating in C89 (aka ANSI C), which required declarations at the top of a block and did not allow a variable to be declared in a loop header (that arrived with C99). Many modern C-like languages, C# for example, scope variables to the block in which they are declared. So, in the example above, C# would give the inner loop a fresh j on every pass of the outer loop.
Programmers with a long history in other C-like languages are therefore led to believe, when they see a variable declared inside a block, that the variable is scoped to that block. Hoisting thus feels counter-intuitive, and therefore error-prone.
