Singletons and Google coding style

The Google C++ style guide does not allow non-trivial static objects (and for good reason), and hence no singletons. At the same time, singletons do reflect the reality of application logic.
So what is the correct way to implement singleton functionality Google-style?
(a) have static pointers and initialize them on startup as a separate step (e.g. via a linked list of initializer/maker classes; see the sketch below)
(b) have a context holding references to all singleton-like objects and pass it with every method
(c) have the context be a member of every class
(d) something else?

The "Google C++ Style Guide" does mention "Types representing singleton objects (Registerer)"
You can see an implementation of said registerer in ronaflx/cpp-utility with "util/registerer.h" for function pointers (illustrated here), and util/singleton.h for classic singleton.
The OP points to their own project alex4747-pub/proper_singleton.
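The question is about C++, but the shape of option (a) translates to any language: keep each singleton behind a pointer that starts out null, register cheap initializer callbacks, and run them all in one explicit startup step. A minimal TypeScript sketch of that pattern (all names here are hypothetical, not from the linked projects):
type Initializer = () => void;

// Registration happens at module-load time but is trivially cheap;
// nothing heavy is constructed yet.
const initializers: Initializer[] = [];

function registerInitializer(init: Initializer): void {
  initializers.push(init);
}

// Called exactly once at the top of main(), making construction order
// an explicit startup step instead of static-initialization magic.
function initializeAll(): void {
  for (const init of initializers) init();
}

class Logger {
  private static instance: Logger | null = null; // the "static pointer"

  static init(): void {
    Logger.instance = new Logger();
  }

  static get(): Logger {
    if (Logger.instance === null) throw new Error("Logger not initialized");
    return Logger.instance;
  }

  log(message: string): void {
    console.log(message);
  }
}

registerInitializer(() => Logger.init());

// In main():
initializeAll();
Logger.get().log("started");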

Related

Near-sdk-as Contracts - Singleton style vs Bag of functions

It seems like there are two styles for writing NEAR smart contracts in AssemblyScript:
Bag of functions like Meme Museum
Singleton style like Lottery.
I was wondering under what circumstance one style is recommended over the other.
When should you use one over the other? What are the advantages/disadvantages of each style?
The big difference is in the initialization of the contract.
Bag of Functions (BoF)
For the first style, a persistent collection is usually declared at the top level. All top-level code in each file is placed into a function, and one start function calls each of them. The Wasm binary format allows specifying a start function, so any time the binary is instantiated (i.e. loaded by a runtime) that function is called before any exported functions can be called.
For persistent collections this means the collection objects are allocated, but no data is read from storage until something is accessed, including the length or size of the collection.
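For illustration, here is a minimal bag-of-functions contract in AssemblyScript-flavored TypeScript. It assumes near-sdk-as's PersistentVector; the function and collection names are made up:
import { PersistentVector } from "near-sdk-as";

// Declared at the top level, so it is constructed by the Wasm start
// function on every instantiation; construction allocates the object
// but reads nothing from storage.
const memes = new PersistentVector<string>("m");

export function addMeme(meme: string): void {
  memes.push(meme); // storage is touched only when the collection is used
}

export function getMemeCount(): i32 {
  return memes.length; // even the length comes from storage on demand
}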
Singleton
A top-level instance of the contract is declared. Then storage is checked for the "STATE" key, which contains the state of the instance. If it is present, storage is read and the instance is deserialized from it. Each "method" of the contract is then an exported function that uses the global instance, e.g. instance.method(...), passing the arguments along. If the method is decorated with @mutateState, the instance is written back to storage after the method call.
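Roughly, the glue the singleton style generates looks like this sketch (a hypothetical Counter contract; storage, @nearBindgen, and @mutateState are near-sdk-as constructs, but the wiring below is hand-written for illustration):
import { storage } from "near-sdk-as";

@nearBindgen
class Counter {
  count: i32 = 0;
  increment(): void { this.count += 1; }
  current(): i32 { return this.count; }
}

// Rehydrate the top-level instance from the "STATE" key, or start fresh.
function loadContract(): Counter {
  return storage.contains("STATE")
    ? storage.getSome<Counter>("STATE")
    : new Counter();
}

// Mutating method: the instance is serialized back after the call.
export function increment(): void {
  const contract = loadContract();
  contract.increment();
  storage.set("STATE", contract);
}

// Read-only method: storage is still read on every call, but not written.
export function current(): i32 {
  return loadContract().current();
}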
Which To Use
Singletons provide a nice interface for the contract and are generally easier to understand. However, since you read from storage on every method call, they can be more expensive.
The other advantage of BoF is that it's easier to extend, since you can export more functions from a dependency.
However, it's also possible to use a combination of both.
So really it's whatever makes sense to you.

Resharper refactoring creating static methods

When you refactor a method using ReSharper 8 and the method does not depend on instance variables of the class, a static method is created. However, an instance method could also have been created.
Is a static method created for performance reasons?
TIA.
That's right. Here’s what the MSDN documentation has to say about it:
Members that do not access instance data or call instance methods can be marked as static (Shared in Visual Basic). After you mark the methods as static, the compiler will emit nonvirtual call sites to these members. Emitting nonvirtual call sites will prevent a check at runtime for each call that makes sure that the current object pointer is non-null. This can achieve a measurable performance gain for performance-sensitive code. In some cases, the failure to access the current object instance represents a correctness issue.
Source: http://msdn.microsoft.com/en-us/library/ms245046.aspx
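For illustration, here is a sketch of the kind of member this rule targets (TypeScript for brevity, with made-up names; the nonvirtual-call-site benefit quoted above is specific to .NET):
class PriceCalculator {
  private discount = 0.1;

  // Reads instance state, so it must stay an instance method.
  applyDiscount(price: number): number {
    return price * (1 - this.discount);
  }

  // Depends only on its arguments: exactly the kind of member the
  // refactoring proposes to mark static.
  static addTax(price: number, rate: number): number {
    return price * (1 + rate);
  }
}

// No instance is needed (or null-checked) at the call site.
const total = PriceCalculator.addTax(100, 0.2);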

The Class/Object Paradox confusion

In the book The Well Grounded Rubyist (excerpt), David Black talks about the "Class/Object Chicken-and-Egg Paradox". I'm having a tough time understanding the entire concept.
Can someone explain it in better/easier/analogical/other terms?
Quote (emphasis mine):
The class Class is an instance of itself; that is, it’s a Class object. And there’s more. Remember the class Object? Well, Object is a class... but classes are objects. So, Object is an object. And Class is a class. And Object is a class, and Class is an object. Which came first? How can the class Class be created unless the class Object already exists? But how can there be a class Object (or any other class) until there’s a class Class of which there can be instances?

The best way to deal with this paradox, at least for now, is to ignore it. Ruby has to do some of this chicken-or-egg stuff in order to get the class and object system up and running—and then, the circularity and paradoxes don’t matter. In the course of programming, you just need to know that classes are objects, instances of the class called Class.

(If you want to know in brief how it works, it’s like this: every object has an internal record of what class it’s an instance of, and the internal record inside the object Class points back to Class.)
You can see the problem in this diagram: [diagram of the Ruby 1.8 class/object model, showing each class's separate instance-method and class-method inheritance chains] (source: phrogz.net)
All object instances inherit from Object. All classes are objects, and Class is a class, therefore Class is an object. However, object instances inherit from their class, and Object is an instance of the Class class, therefore Object itself gets methods from Class.
As you can see in the diagram, however, there isn't a circular lookup loop, because there are two different inheritance 'parts' to every class: the instance methods and the 'class' methods. In the end, the lookup path is sane.
N.B.: This diagram reflects Ruby 1.8, and thus does not include the core BasicObject class introduced in Ruby 1.9.
In practical terms, all you need to understand is that Object is the mother of all classes. All classes extend Object. It is this relationship that you will use in programming, understanding inheritance and so forth.
E.g., you can call hash() on any instance of any class at any time. Why? Because that method appears in the Object class, and all classes inherit it, because all classes extend Object.
As far as the idea of Class goes, this comes up much less frequently. An object of class Class is like a blueprint: it's like having the class in your hands without creating an instance of it. There's a little more to it, but it's a difficult one to describe without a lengthy example. Rest assured, when (if ever) the time comes to use it, you'll see its purpose.
All this excerpt is saying is that Object has a class of type Class, and Class is an object, so it must extend Object. It's cyclic, but it's irrelevant. The answer is buried somewhere in the interpreter.
Regarding the which-came-first criterion, there are two kinds of Ruby objects:
Built-in objects. They exist at the start of a Ruby program and can be considered to have zero creation time.
User created objects. They are created after the program starts via object creation methods (new/clone/dup, class-definition, module-definition, literal-constructs, ...). Such objects are linearly ordered by their time of creation. This order happens to inversely correspond to class-inheritance and instance-of relations.
A detailed explanation of the Ruby object model is available at www.atalon.cz.
I know that my answer comes at least 3 years late, but I have learned about Ruby quite recently and I must admit that the literature sometimes presents (in my opinion) misleading explanations, even though it is dealing with a very simple problem. Moreover, I am (and was) surprised by such appalling phrases as:
"The best way to deal with this paradox, at least for now, is to ignore it."
stated by the author D. Black and quoted in the question; this attitude seems to be pervasive. I have experienced this tendency within the physics community, but I had not suspected it had also spread through the programming one. I am not claiming that nobody understands what lurks behind it, but it is at least intriguing that the (indeed) very simple and precise answer is not given, since in this case there is one, without invoking any obscure words such as "paradox" in the explanation.
This so-called "paradox" (even though it is definitely NOT such a thing) comes from the (at least misleading) belief that "being an instance of (a subclass of)" should behave like "being an element of" (in, say, naive set theory); in other terms, that a class (in Ruby) is the set containing all the objects sharing some common property (for instance, under this naive interpretation the class String would contain all string objects). Even though this naive point of view (which I may call the "membership interpretation") can help in understanding isolated (and rather easy) classes such as String or Symbol, it produces a clear contradiction with our naive understanding of the membership relation (and also with the axiomatic one, for it contradicts von Neumann's regularity axiom of set theory, if someone knows what I am talking about): an object cannot be an element of itself, yet that is exactly what this interpretation implies for the object Class.
The previously stated problem is easily avoided if one gets rid of this misleading membership interpretation in favor of a very simply minded model, as follows.
I would guess that the following explanation is well known to experts, so I claim no originality at all, but perhaps it has not been rephrased in the (simple) terms in which I am going to present it: in some sense I am completely astonished that (apparently) nobody has stated it in these terms from the very beginning, and my intention is just to highlight what I believe is the basic underlying structure.
Let us consider a set O of (basic) objects, which consists of all the (basic) objects Ruby has, provided with a mapping f from O to O (which is more or less the .class method), satisfying that the image of the composition of f with itself has only one element.
This latter element (or object) is denoted Class (indeed, what you know to be the class Class).
I would be tempted to write this post using LaTeX code but I am not quite sure if it will be parsed and converted into typical math expressions.
The image of the mapping f is (by definition) the set of Ruby classes (e.g. String, Object, Symbol, Class, etc).
Ruby programmers (even if they do not know it) interpret the previous construction as follows: we say that an object "x is an instance of y" if and only if y = f(x). By the way, this tells you exactly that Class is an instance of Class (i.e. of itself).
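Since the author mentions being tempted to use LaTeX, the model of the last few paragraphs can be written compactly in that notation (my transcription; nothing here goes beyond what the text above states):
\[
  f : O \to O, \qquad
  \operatorname{im}(f \circ f) = \{\mathtt{Class}\}, \qquad
  x \text{ is an instance of } y \iff y = f(x).
\]
Writing $\mathtt{Class} = f(f(x))$ for some $x \in O$, we get $f(\mathtt{Class}) = f(f(f(x))) \in \operatorname{im}(f \circ f) = \{\mathtt{Class}\}$, hence $f(\mathtt{Class}) = \mathtt{Class}$: the object $\mathtt{Class}$ is an instance of itself. Since $f$ is just a mapping, not set membership, no contradiction arises.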
Now, we would need many more decorations on this simple model in order to get the full Ruby hierarchy of classes and functionality (imposing the existence of some fixed members in the image of the map f, a partial order on the image of f in order to define subclasses and then inheritance, etc.), and in particular to get the nice picture that was interestingly upvoted, but I suppose that everybody can figure this out from the primitive model I gave. (I have written some pages explaining this for myself, after having read several Ruby manuals. I can provide a copy if anybody finds it useful.)
Which came first: The class Class or the class Object?
Every class is an instance of the class Class. It follows that the class Object is an instance of the class Class. So you need the class Class to create the class Object. Therefore:
The class Class exists before the class Object.
The class Class is a subclass of the class Object. So you need the class Object from which the class Class can be created. Therefore:
The class Object exists before the class Class.
So statement-2 conflicts with statement-1 and so we have our chicken & egg dilemma. Or you can just accept it as a circular definition!
I dug into the Ruby source code and created this post on how to make sense of it.

Is the DI pattern limiting wrt expensive object creation coupled with infrequent dependency usage?

I'm having a hard time getting my head around what seems like an obvious pattern problem/limitation when it comes to typical constructor dependency injection. For example purposes, let's say I have an ASP.NET MVC3 controller that looks like:
Public Class MyController
    Inherits Controller

    Private ReadOnly mServiceA As IServiceA
    Private ReadOnly mServiceB As IServiceB
    Private ReadOnly mServiceC As IServiceC

    Public Sub New(serviceA As IServiceA, serviceB As IServiceB, serviceC As IServiceC)
        Me.mServiceA = serviceA
        Me.mServiceB = serviceB
        Me.mServiceC = serviceC
    End Sub

    Public Function ActionA() As ActionResult
        ' Do something with Me.mServiceA and Me.mServiceB
    End Function

    Public Function ActionB() As ActionResult
        ' Do something with Me.mServiceB and Me.mServiceC
    End Function
End Class
The thing I'm having a hard time getting over is the fact that the DI container was asked to instantiate all three dependencies when at any given time only a subset of the dependencies may be required by the action methods on this controller.
It seems assumed that object construction is dirt-cheap, and that there are no side effects from object construction OR all dependencies are consistently utilized. What if object construction isn't cheap, or there are side effects? For example, if constructing an IServiceA implementation involved opening a connection or allocating other significant resources, that would be completely wasted time/resources when ActionB is called.
If these action methods used a service location pattern (or another similar pattern), there would never be a chance of unnecessarily constructing an object instance that goes unused; of course, that pattern has other issues attached that make it unattractive.
Does using the canonical constructor-injection-plus-interfaces pattern of DI basically lock the developer into a "limitation" of sorts, namely that implementations of a dependency must be cheap to instantiate or the instance must be significantly utilized? I know all patterns have their pros and cons; is this just one of DI's cons? I've never seen it mentioned before, which I find curious.
If you have a lot of fields that aren't being used by every member this means that the class' cohesion is low. This is a general programming concept - Constructor Injection just makes it more visible. It's usually a pretty good indicator that the Single Responsibility Principle is being violated.
If that's the case then refactor (e.g. to Facade Services).
You don't have to worry about performance when creating object graphs.
As for side effects: (DI) constructors should be simple and free of side effects.
Generally speaking, there should be no major costs or side effects of object construction. This is a general statement that I believe applies to most (not all) objects, but it is especially true for services that you would inject via DI. In other words, a service class whose construction automatically makes a database/service call, or changes the state of your system in a way that has side effects, is (at least) a code smell.
Regarding instances that go unused: it's hard to create a system that has perfect utilization of instances within dependent classes, regardless of whether you use DI or not. I'm not sure achieving this is very important, as long as you are adhering to the Single Responsibility Principle. If you find that your class has too many services injected, or that utilization is really uneven, it might be a sign that your class is doing too much and needs to be split into two or more smaller classes with more focused responsibilities.
No, you are not tied to the limitations you have listed. As of .NET 4 you have Lazy(Of T) at your disposal, which will allow you to defer instantiation of your dependencies until required.
It is not assumed that object construction is dirt-cheap, and consequently some DI containers support Lazy(Of T) out of the box. Unity 2.0 supports lazy initialization out of the box through automatic factories, and there is a good article on MSDN by the same author about an extension supporting Lazy(Of T).
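The question's Lazy(Of T) is .NET-specific, but the mechanism is easy to sketch in TypeScript (hypothetical names throughout): the container injects a cheap wrapper, and the expensive dependency is constructed only on first use.
// A hand-rolled Lazy<T>: the factory runs at most once, on first access.
class Lazy<T> {
  private value: T | undefined;
  private created = false;

  constructor(private readonly factory: () => T) {}

  get(): T {
    if (!this.created) {
      this.value = this.factory();
      this.created = true;
    }
    return this.value as T;
  }
}

interface ServiceA { doA(): void; }

class MyController {
  // The container hands over a wrapper, not a live (expensive) instance.
  constructor(private readonly serviceA: Lazy<ServiceA>) {}

  actionA(): void {
    this.serviceA.get().doA(); // ServiceA is constructed here, on demand
  }

  actionB(): void {
    // This path never touches serviceA, so it is never constructed.
  }
}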
Isn't your controller a singleton, though? That is the normal way to do it in Java: there is only one instance created. Also, you could split the controller into multiple controllers if the roles of the actions are so distinct.

Is it normal to have a long list of arguments in the constructor of a Presenter class?

Warning: acronym overload approaching!!! I'm doing TDD and DDD with an MVP passive-view pattern and DI. I'm finding myself adding dependency after dependency to the constructor of my presenter class as I write each new test. Most are domain objects. I'm using factories for dependency injection, though I will likely be moving to an IoC container eventually.
When using constructor injection (as opposed to property injection) it's easy to see where your dependencies are. A large number of dependencies is usually an indicator that a class has too much responsibility, but in the case of a presenter, I fail to see how to avoid this.
I've thought of wrapping all the domain objects into a single "Domain" class which would act as a middle man but I have this gut feeling that I'd only be moving the problem instead of fixing it.
Am I missing something or is this unavoidable?
Often a large number of arguments to a method (constructor, function, etc.) is a code smell. It can be hard to understand what all the arguments are. This is especially the case if you have large numbers of arguments of the same type: it is very easy for them to get confused, which can introduce subtle bugs.
The refactoring is called "Introduce Parameter Object". Whether that's really a domain object or not, it is basically a data transfer object that minimizes the number of parameters passed to a method and gives them a bit more context.
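For reference, here is a minimal sketch of that refactoring in TypeScript (the presenter and repository types are made up):
// Before: many positional parameters of similar types are easy to mix up:
//   new OrderPresenter(view, customerRepo, orderRepo, ...)
// After Introduce Parameter Object, the related arguments travel together
// under one named type.
interface View { render(data: unknown): void; }
interface CustomerRepository { findById(id: string): unknown; }
interface OrderRepository { findAll(): unknown[]; }

interface PresenterDeps {
  view: View;
  customers: CustomerRepository;
  orders: OrderRepository;
}

class OrderPresenter {
  constructor(private readonly deps: PresenterDeps) {}

  refresh(): void {
    // Each dependency keeps a meaningful name at the call site.
    this.deps.view.render(this.deps.orders.findAll());
  }
}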
I only use DI on the constructor if I need something to be there from the start. Otherwise I use properties and lazy loading for the other items. For TDD/DI, as long as you can inject the item when you need it, you don't need to add it to your constructor.
I recommend always following the Law of Demeter and not following the DI myth that everything needs to be in the constructor. Misko Hevery (Agile Coach at Google) describes it well on his blog: http://misko.hevery.com/2008/10/21/dependency-injection-myth-reference-passing/
Having a Layer Supertype might not be a bad idea, but I think your code smell might be indicating something else. Geofflane mentioned the refactoring pattern Introduce Parameter Object. While it's a good pattern for this sort of problem, I'm not entirely sure it's the way to go in this situation.
Question: Why are you passing in Domain Model objects to the constructor?
There is such a thing as having too much abstraction. If there's one solid layer of code you should be able to trust, it's your Domain Model. You don't need to reference 3 IEntity objects when you're dealing with Customer, Vendor, and Product classes if those are part of your basic Domain Model and you don't necessarily need polymorphism.
My advice: Pass in application and domain services. Trust your Domain Model.
EDIT:
Re-reading the problem when it's not horribly late at night, I realize your "Domain class" is already the Introduce Parameter Object refactoring and not, in fact, a Layer Supertype, as I thought at 3AM.
I also realize that perhaps you need to reference the Model objects in the application code, outside the Presenter. Perhaps you're doing some initial setup of your Model objects before passing them in to the Presenter. If this is the case, your "Domain class" idea might be best. If there is some initial setup, when moving to an IoC, you'll want to look at something like Factory Support in Castle Windsor. (Other IoC containers have similar concepts.)
