Recursive variable declaration - C++11

I have just seen this black magic in folly/ManualExecutor.h
TimePoint now_ = now_.min();
After grepping the whole library source code, I haven't found a definition of the variable now_ anywhere other than here. What's happening here? Is this effectively some sort of recursive variable declaration?

That code is most likely equal to this:
TimePoint now_ = TimePoint::min();
That means min() is a static method, and calling it through an instance is the same as calling it through the type; the instance is used only to determine the type. No black magic involved, those are just two syntaxes for doing the same thing.
As to why the code in question compiles: now_ is already declared by the left side of the line, so when it is used in the initializer on the right side, the compiler already knows its type and is able to call the static method. Trying to call a non-static method would give an error (see @BenVoigt's comment below).
As demonstrated by the fact that you had to write this question, the syntax in the question is not the clearest. It may be tempting when the type name is long, and is perhaps justifiable in member variable declarations with an initializer (which the question's code is). In code inside functions, auto is a better way to reduce repetition.

Digging into the code shows that TimePoint is an alias for chrono::steady_clock::time_point, where min() is indeed a static method; it returns a time_point with the smallest possible duration:
http://en.cppreference.com/w/cpp/chrono/time_point/min
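To make the equivalence concrete, here is a minimal sketch; the class name ManualExecutorLike is invented for illustration, and it only assumes the std::chrono alias described above:
#include <chrono>

struct ManualExecutorLike {
    using TimePoint = std::chrono::steady_clock::time_point;

    // Both initializers call the same static member function; the object on the
    // left of the dot is used only to determine the type.
    TimePoint now_   = now_.min();         // the spelling from the question
    TimePoint other_ = TimePoint::min();   // equivalent, more explicit spelling
};

int main() {
    ManualExecutorLike m;
    return m.now_ == m.other_ ? 0 : 1;     // both hold steady_clock::time_point::min()
}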

Related

DIA SDK how to get parent function of FuncDebugStart / FuncDebugEnd?

The documentation for SymTagFuncDebugStart and SymTagFuncDebugEnd states that calling IDiaSymbol::get_lexicalParent will return a symbol for the enclosing function. I interpret this as meaning I will get an IDiaSymbol whose get_symTag method returns SymTagFunction. However, when I do this it returns the SymTagCompiland, not the function. So the documentation appears wrong, but worse, I'm not sure how to actually tie the SymTagFuncDebugStart and SymTagFuncDebugEnd to the containing SymTagFunction.
Does anyone know? A few dumps suggest that SymTagFuncDebugStart and SymTagFuncDebugEnd always come immediately after the corresponding SymTagFunction when enumerating the symbols via IEnumSymbols. Or put another way, that if IDiaSymbol::get_symIndexId returns n for the function, it will return n+1 and n+2 respectively for the func debug start and func debug end.
But I can't be sure this is always true, and this seems unreliable and hackish.
Does anyone have any suggestions on the correct way to do this?
Could you paste your code here? I guess there is something wrong in your code. Calling get_lexicalParent on SymTagFuncDebugStart and SymTagFuncDebugEnd should return the symbol associated with the enclosing function (SymTagFunction).
I got this working eventually. The problem is that when you enumerate all the symbols in the global scope using SymTagNull, you will find the FuncDebugStart and FuncDebugEnd symbols. The lexical parent of these symbols is the global scope, because it's the "parent" in the sense that it vended you the pointers to the FuncDebugStart and FuncDebugEnd symbols.
If you get the FuncDebugStart and FuncDebugEnd by calling findChildren on an actual SymTagFunction symbol, however, then their lexical parent will in fact be the original function. So this was an issue of unclear documentation.
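For reference, a hedged C++ sketch of that working approach (the function name DumpFuncDebugStart is invented; it assumes pFunction is an IDiaSymbol* whose symTag is SymTagFunction, obtained elsewhere):
#include <dia2.h>
#include <atlbase.h>

void DumpFuncDebugStart(IDiaSymbol* pFunction)
{
    // Ask the function itself for its FuncDebugStart children instead of
    // enumerating the global scope with SymTagNull.
    CComPtr<IDiaEnumSymbols> pEnum;
    if (FAILED(pFunction->findChildren(SymTagFuncDebugStart, NULL, nsNone, &pEnum)))
        return;

    CComPtr<IDiaSymbol> pChild;
    ULONG fetched = 0;
    while (SUCCEEDED(pEnum->Next(1, &pChild, &fetched)) && fetched == 1)
    {
        CComPtr<IDiaSymbol> pParent;
        if (SUCCEEDED(pChild->get_lexicalParent(&pParent)) && pParent)
        {
            DWORD tag = 0;
            pParent->get_symTag(&tag);   // expected: SymTagFunction, not SymTagCompiland
        }
        pChild.Release();
    }
}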

Performance of thenComparing vs thenComparingInt - which to use?

I have a question: if I'm comparing ints, is there a performance difference in calling thenComparingInt(My::intMethod) vs thenComparing(My::intMethod)? In other words, if I'm comparing different types, both reference and primitive (e.g. String, int, etc.), part of me just wants to say comparing().thenComparing().thenComparing() etc., but should I do comparing().thenComparing().thenComparingInt() if the 3rd call was comparing an int or Integer value?
I am assuming that comparing() and thenComparing() use the compareTo method to compare any given type behind the scenes, or possibly Integer.compare for ints? I'm also assuming the answer to my original question may involve performance, in that thenComparingInt would know an int is being passed in, whereas thenComparing would have to autobox the int to an Integer and then maybe cast it to Object?
Also, another question whilst I think of it: is there a way of chaining method references, e.g. Song::getArtist::length where getArtist returns a String? The reason is I wanted to do something like this:
songlist.sort(
    Comparator.comparing((Song s) -> s.getArtist().length()));

songlist.sort(
    Comparator.comparing(Song::getArtist,
        Comparator.comparingInt(String::length)));

songlist.sort(
    Comparator.comparing(Song::getArtist, String::length));
Of the 3 examples, the top two compile but the bottom one gives a compilation error in Eclipse. I would have thought String::length was valid as the 2nd argument? But maybe not, as it's expecting a Comparator, not a function?
Question 1
I would think thenComparingInt(My::intMethod) might be better since it should avoid boxing, but you would have to try out both versions to see if it really makes a difference.
Question 2
songlist.sort(
    Comparator.comparing(Song::getArtist, String::length));
is invalid because the 2nd parameter should be a Comparator, not a method that returns an int.
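To illustrate both points, here is a hedged sketch; the Song class, its fields, and the getDuration accessor are invented for this example:
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class Song {
    private final String artist;
    private final String title;
    private final int durationSeconds;

    Song(String artist, String title, int durationSeconds) {
        this.artist = artist;
        this.title = title;
        this.durationSeconds = durationSeconds;
    }

    String getArtist() { return artist; }
    String getTitle() { return title; }
    int getDuration() { return durationSeconds; }

    public static void main(String[] args) {
        List<Song> songlist = new ArrayList<>();
        songlist.add(new Song("Abba", "Waterloo", 164));
        songlist.add(new Song("Queen", "Bohemian Rhapsody", 354));

        // thenComparingInt takes a ToIntFunction, so the int key is never boxed;
        // thenComparing(Song::getDuration) would box each duration into an Integer.
        songlist.sort(
                Comparator.comparing(Song::getArtist)
                          .thenComparing(Song::getTitle)
                          .thenComparingInt(Song::getDuration));

        // The two-argument comparing overload expects a Comparator as its second
        // argument, so String::length has to be wrapped in comparingInt.
        songlist.sort(
                Comparator.comparing(Song::getArtist,
                                     Comparator.comparingInt(String::length)));
    }
}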

Initialize member variables in a method and not the constructor

I have a public method which uses a parameter, which we will call a, that is needed only within the scope of that public method; this method calls a private method multiple times which also requires the parameter.
At present I am passing the parameter every time, but it looks weird. Is it bad practice to make this a member variable of the class, or would the uncertainty about whether it is initialized outweigh the advantage of not having to pass it?
Simplified pseudo code:
public_method(parameter a)
    do something with a
    private_method(string_a, a)
    private_method(string_b, a)
    private_method(string_c, a)

private_method(String, parameter a)
    do something with String and a
Additional information: parameter a is a read only map with over 100 entries and in reality I will be calling private_method about 50 times
I had this same problem myself.
I implemented it differently in 3 different contexts to see hands-on what the results are with 3 different strategies; see below.
Note that I am the type of programmer who makes many changes to the code, always trying to improve it. Thus I settle only for code that is amenable to change and readable, what you might call "flexible" code. I settle only for very clear code.
After experimentation, I came to these results:
1. Passing a as a parameter is perfectly OK if you have only one or two such values (a small number). Passing parameters gives very good visibility and clarity, clear passing lines, and a well-visible lifetime (initialization points, destruction points); it is amenable to change and easy to track. If the number of such values grows to >= 5-6, I switch to approach 3 below.
2. Passing values through class members did no good to the clarity of my code, and eventually I got rid of it. It makes for less clear code; the code becomes muddled. I did not like it. It had no advantages.
3. As an alternative to (1) and (2), I adopted the inner-class approach in cases where the number of such values is > 5 (which makes for too long an argument list). I pack those values into a small inner class and pass such an object by reference as an argument to all internal methods. The public function of the class usually creates an object of the inner class (I call it Impl, Ctx, or Args) and passes it down to the private functions. This combines the clarity of argument passing with brevity. It's perfect.
Good luck
Edit
Consider preparing an array of strings and using a loop rather than writing 50 almost-identical calls. Something like char *strings[] = {...} (C/C++).
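As a hedged sketch of approach 3 above (shown in C++ to match the edit; the names Ctx, process_all and process_one are invented for illustration):
#include <iostream>
#include <map>
#include <string>
#include <vector>

class Processor {
public:
    void process_all(const std::map<std::string, std::string>& lookup,
                     const std::vector<std::string>& keys) {
        Ctx ctx{lookup};                 // built once in the public method
        for (const auto& key : keys) {   // loop instead of 50 near-identical calls
            process_one(key, ctx);
        }
    }

private:
    // Small holder bundling everything the private helpers need, passed as a
    // single reference instead of a growing argument list.
    struct Ctx {
        const std::map<std::string, std::string>& lookup;
    };

    void process_one(const std::string& key, const Ctx& ctx) {
        auto it = ctx.lookup.find(key);
        if (it != ctx.lookup.end()) {
            std::cout << key << " -> " << it->second << '\n';
        }
    }
};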
This really depends on your use case. Does a represent state that your application/object cares about? Then you might want to make it a member of your object. Evaluate the big picture, and think about maintenance and extensibility when designing structures.
If your parameter a is of a class of your own, you might consider making private_method a public method of a's class.
Otherwise, I do not think this looks weird. If you only need a in just one function, making it a private variable of your class would be silly (at least to me). However, if you needed it, say, 20 times, I would do so :P Or even better, just make a an object of your own that has that certain function you need.
A method should ideally not take more than 7 parameters. Needing more than 6-7 parameters usually indicates a problem with the design (do the 7 parameters represent an object of a nested class?).
As for your question, if you want to make the parameter a member only for the sake of passing it between private methods, without the parameter having anything to do with the current state of the object (or some information about the object), then it is not recommended that you do so.
From a performance point of view (memory consumption), reference parameters can be passed around as method parameters without any significant impact on memory consumption, as they are passed by reference rather than by value (i.e. a copy of the data is not created). For a small number of parameters that can be grouped together, you can use a struct. For example, if the parameters represent the x and y coordinates of a point, pass them in a single Point structure.
Bottom line
Ask yourself this question: does the parameter that you are making a member represent any information (data) about the object? (Data can be state or unique identification information.) If the answer to this question is a clear no, then do not include the parameter as a member of the class.
More information
Limit number of parameters per method?
Parameter passing in C#

Scala coalesces multiple function call parameters into a Tuple -- can this be disabled?

This is a troublesome violation of type safety in my project, so I'm looking for a way to disable it. It seems that if a function takes an AnyRef (or a java.lang.Object), you can call the function with any combination of parameters, and Scala will coalesce the parameters into a Tuple object and invoke the function.
In my case the function isn't expecting a Tuple, and fails at runtime. I would expect this situation to be caught at compile time.
object WhyTuple {
  def main(args: Array[String]): Unit = {
    fooIt("foo", "bar")
  }
  def fooIt(o: AnyRef) {
    println(o.toString)
  }
}
Output:
(foo,bar)
No implicits or Predef at play here at all -- just good old fashioned compiler magic. You can find it in the type checker. I can't locate it in the spec right now.
If you're motivated enough, you could add a -X option to the compiler to prevent this.
Alternatively, you could avoid writing arity-1 methods that accept a supertype of TupleN.
What about something like this:
object Qx2 {
  @deprecated def callingWithATupleProducesAWarning(a: Product) = 2
  def callingWithATupleProducesAWarning(a: Any) = 3
}
Tuples have the Product trait, so any call to callingWithATupleProducesAWarning that passes a tuple will produce a deprecation warning.
Edit: According to people better informed than me, the following answer is actually wrong: see this answer. Thanks Aaron Novstrup for pointing this out.
This is actually a quirk of the parser, not of the type system or the compiler. Scala allows zero- or one-arg functions to be invoked without parentheses, but not functions with more than one argument. So as Fred Haslam says, what you've written isn't an invocation with two arguments, it's an invocation with one tuple-valued argument. However, if the method did take two arguments, the invocation would be a two-arg invocation. It seems like the meaning of the code affects how it parses (which is a bit suckful).
As for what you can actually do about this, that's tricky. If the method really did require two arguments, this problem would go away (i.e. if someone then mistakenly tried to call it with one argument or with three, they'd get a compile error as you expect). Don't suppose there's some extra parameter you've been putting off adding to that method? :)
The compiler is capable of interpreting method calls without round brackets. So it takes the round brackets in the fooIt call to mean a Tuple. Your call is the same as:
fooIt( ("foo","bar") )
That being said, you can make the method reject such a call, and still retrieve the value, if you use some wrapper like Some(AnyRef) or Tuple1(AnyRef).
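A hedged sketch of that wrapper idea (the object name WrappedArg is invented): making fooIt take a Tuple1[AnyRef] forces callers to wrap the argument explicitly, so an accidental two-argument call no longer compiles as an auto-tupled call.
object WrappedArg {
  def fooIt(o: Tuple1[AnyRef]): Unit = {
    println(o._1.toString)
  }

  def main(args: Array[String]): Unit = {
    fooIt(Tuple1("foo"))       // compiles: the wrapping is explicit
    // fooIt("foo", "bar")     // no longer compiles as a silently tupled call
  }
}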
I think the definition of (x, y) in Predef is responsible. The "-Yno-predefs" compiler flag might be of some use, assuming you're willing to do the work of manually importing any implicits you otherwise need. By that I mean that you'll have to add import scala.Predef._ all over the place.
Could you also add a two-param overload, which would prevent the compiler from applying the syntactic sugar? By making the types suitably obscure you're unlikely to get false positives. E.g.:
object WhyTuple {
  ...
  class DummyType
  def fooIt(a: DummyType, b: DummyType) {
    throw new UnsupportedOperationException("Dummy function - should not be called")
  }
}

How does Integer === 3 work?

So as I understand it, the === operator tests to see if the RHS object is a member of the LHS object. That makes sense. But how does this work in Ruby? I'm looking at the Ruby docs and I only see === defined in Object, I don't see it in Integer itself. Is it just not documented?
Integer is a class, which (at least in Ruby) means that it is just a boring old normal object like any other object, which just happens to be an instance of the Class class (instead of, say, Object or String or MyWhateverFoo).
Class in turn is a subclass of Module (although arguably it shouldn't be, because it violates the Liskov Substitution Principle, but that is a discussion for another forum, and is also a dead horse that has already been beaten many many times). And in Module#=== you will find the definition you are looking for, which Class inherits from Module and instances of Class (like Integer) understand.
Module#=== is basically defined symmetric to Object#kind_of?, it returns true if its argument is an instance of itself. So, 3 is an instance of Integer, therefore Integer === 3 returns true, just as 3.kind_of?(Integer) would.
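For instance (a quick irb-style illustration):
Integer === 3          #=> true
3.kind_of?(Integer)    #=> true
Integer === "three"    #=> false

# This is also what lets case/when dispatch on classes:
case 3
when Integer then "an integer"   # calls Integer === 3 behind the scenes
when String  then "a string"
end
#=> "an integer"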
So as I understand it, the === operator tests to see if the RHS object is a member of the LHS object.
Not necessarily. === is a method, just like any other method. It does whatever I want it to do. And in some cases the "is member of" analogy breaks down. In this case it is already pretty hard to swallow. If you are a hardcore type theory freak, then viewing a type as a set and instances of that type as members of a set is totally natural. And of course for Array and Hash the definition of "member" is also obvious.
But what about Regexp? Again, if you are a formal-languages buff and know your Chomsky backwards, then interpreting a Regexp as an infinite set of words, and Strings as members of that set, feels completely natural; but if not, then it sounds kind of weird.
So far, I have failed to come up with a concise description of precisely what === means. In fact, I haven't even come up with a good name for it. It is usually called the triple equals operator, threequals operator or case equality operator, but I strongly dislike those names, because it has absolutely nothing to do with equality.
So, what does it do? The best I have come up with is: imagine you are making a table, and one of the column headers is Integer. Would it make sense to write 3 in that column? If one of the column headers is /ab*a/, would it make sense to write 'abbbba' in that column?
Based on that definition, it could be called the subsumption operator, but that's even worse than the other examples ...
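For what it's worth, the /ab*a/ column from that analogy looks like this in code:
/ab*a/ === 'abbbba'   #=> true   ('abbbba' fits in the /ab*a/ column)
/ab*a/ === 'xyz'      #=> false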
It's defined on Module, which Class is a subclass of, which Integer is an instance of.
In other words, when you run Integer === 3, you're calling === (with the parameter 3) on the object referred to by the constant Integer, which is an instance of the class named Class. Since Class is a subclass of Module and doesn't define its own ===, you get the implementation of === defined on Module.
See the API docs for Module for more information.
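A quick irb check of that lookup chain:
Integer.class                 #=> Class
Class.superclass              #=> Module
Integer.method(:===).owner    #=> Module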
Umm, Integer is a subclass of Object.
