I've read about various COM design patterns detailed in the COM Programmer's Cookbook as well as some related SO threads, notably the thread discussing composition vs. multiple inheritance. Possibly because I'm too new to both C++ and COM, I may be missing the points made in various sources, so here's my question expressed in a single sentence:
Can I extend an interface generated by MIDL for the DLL's internal use and, if so, how do I handle the diamond problem/parallel hierarchy correctly given MIDL/COM limitations?
The dirty details...
Hopefully, to help others pinpoint where my confusion may be, here are my assumptions:
1) COM does not support virtual inheritance, and allows multiple inheritance only via interfaces.
2) Even though COM cannot see it, it should not be illegal for me to use unsupported C++ inheritance as long as I do not expect it to be directly exposed by COM.
3) Because MIDL only allows single inheritance of interfaces, if I have a parallel hierarchy, I need to aggregate them for the coclass.
4) MIDL does not appear to declare the coclass itself, so I would need to write a .h file declaring the actual class and there, I may extend as needed with the understanding that COM consumers cannot use it (and that's OK).
What I want to do is have a base object (I've not decided yet whether it'll be abstract or not, though I think it will be for now) that handles most of the implementation details and delegates some specific functionality to subclasses. The client would typically use the subclasses. So,
project.idl
import "oaidl.idl"
import "ocidl.idl"
[
object,
uuid(...),
dual,
oleautomation
]
interface IBase : IDispatch {
//stuff I want to show to COM
};
[
object,
uuid(...),
dual,
oleautomation
]
interface IChild1 : IBase {
//stuff (in addition to base) I want to show to COM
};
[
object,
uuid(...),
dual,
oleautomation
]
interface IChild2 : IBase {
//stuff (in addition to base) I want to show to COM
};
[
uuid(...),
version(...),
]
library myproject {
importlib("stdole32.tlb");
interface IBase;
interface IChild1;
interface IChild2;
[
uuid(...),
]
coclass Base {
[default]interface IBase;
interface IOle*; //include other IOle* interfaces required for the functionality
};
[
uuid(...),
]
coclass Child1 {
[default]interface IChild1;
interface IOle*; //those are delegated to the base members
};
[
uuid(...),
]
coclass Child2 {
[default]interface IChild2;
interface IOle*; //those are delegated to the base members
};
};
base.h
#include "base_h.h" //interfaces generated by MIDL
// I assume I need to re-include IOle* because IBase has no relationship to them
// and C++ wouldn't know that I want the object Base to also have those
// interfaces...
class Base : public IBase,
public IOle* {
//handle all IUnknown, IDispatch and other IOle* stuff here
//as well as the common implementations as appropriate
};
child1.h
#include "base.h"
//I'm not sure if I need to re-include the IOle* interfaces...
//I also assume that by inheriting Base, Child1 also inherits its interfaces
class Child1 : public Base,
public IChild1 {
//specific details only, let base handle everything else.
};
child2.h
#include "base.h"
//I'm not sure if I need to re-include the IOle* interfaces...
class Child2 : public Base,
public IChild2 {
//specific details only, let base handle everything else.
};
Conceptually, creating a new child object would always imply the creation of a base object, because the base would be required to handle the implementation details, so I thought it's also appropriate to have the base take care of QueryInterface and reference counting. But I get confused on the following points:
1) The compiler complains about members being ambiguous due to the parallel hierarchy; IUnknown is re-implemented several times, from my custom interface and from the additional IOle* interfaces. The documentation suggests that only one implementation per object is actually needed, but I'm not clear how I'd address the compiler's complaints, and I feel that casting is somehow wrong? I also wonder if I am supposed to have all interfaces be inherited virtually, which seems to be valid for C++; COM would have no such understanding, but it shouldn't care either(?).
2) However, if I do declare all of the inherited interfaces as virtual in the .h files, the compiler then complains that inherited members are not allowed when I try to implement QueryInterface in the Base class's .cpp. I've googled that error but am not clear what it is trying to tell me here.
EDIT: I answered my own question. Intel had documentation for this error; I initially did not click on the link, assuming it might not apply to Visual Studio. I wish I had, because now I understand why I was getting this error: I was trying to do all the implementation in Base::, rather than IUnknown:: or IDispatch::. This raises a new question, which may clarify my original & main question -- how do I defer the implementation from IUnknown (and others) to Base and work only from Base, if that is even possible? It seems that if I just use IUnknown::xxx, it no longer can access Base's private members, which seems a sane thing to expect, so that may not be what I want. I tried declaring all other interfaces except the base's own as virtual, but that didn't really take. (Again, it may be my inexperience not seeing the obvious solution.)
3) Base's QueryInterface cannot cast the base to a child, which is a reasonable complaint, so I assume I have to reimplement QI for all children anyway, but I can delegate back to the base's QI once I determine the requested interface isn't the child's. Bizarrely, the compiler insists that the child classes are abstract due to missing members for IUnknown & IDispatch, but didn't the base already implement them, and so shouldn't the children have those members as well?
The various compiler errors make me worry that my inexperience with either or both of the language & framework is leading me to make flawed assumptions about how I can design the COM objects, inheritance hierarchy, and implementation details, and that I'm decidedly missing something here. Any pointers, even a slap on the head, would be much appreciated.
Thanks!
You've got the right idea here; all you're missing is tying up some loose ends in the most-derived class. As a COM developer, you expect all the AddRef/Release/QI impls on a class object to be the same; but the C++-oriented compiler doesn't know that, so it treats them as all being potentially separate. The two impls you have here are the one in Base and the ones in any interfaces you've added.
Setting the compiler straight here is pretty easy: in the most derived class, redefine all the IUnknown methods and direct them to the appropriate base class, e.g.:
class ChildX: public Base,
public IChildA
... more COM interfaces if needed ...
{
...
// Direct IUnknown methods to Base which does the refcounting for us...
STDMETHODIMP_(ULONG) AddRef() { return Base::AddRef(); }
STDMETHODIMP_(ULONG) Release() { return Base::Release(); }
... suggest implementing QI explicitly here.
};
This basically says that all methods called AddRef, regardless of how they ended up in ChildX, will get that specific implementation.
It's simplest to actually implement QI outright here, and only delegate AddRef/Release to Base. (Technically, Base can cast to Child using static_cast, but you need to put the code in a function after Child has been fully defined; it's not recommended to do this, however, since there's rarely a good reason for a base class to know about the classes that derive from it.)
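For illustration, here is a rough sketch of what that explicit QI might look like, using the Child1/IChild1/IBase names from your headers (and assuming the IID_ constants come from the MIDL-generated files):

STDMETHODIMP Child1::QueryInterface(REFIID riid, void** ppv)
{
    if (!ppv)
        return E_POINTER;
    if (riid == IID_IChild1 || riid == IID_IBase ||
        riid == IID_IDispatch || riid == IID_IUnknown)
    {
        // Casting through IChild1 gives one unambiguous path to the
        // IUnknown/IDispatch/IBase part of this object.
        IChild1* p = static_cast<IChild1*>(this);
        p->AddRef();   // lands on the AddRef redefined above, which forwards to Base
        *ppv = p;
        return S_OK;
    }
    // Anything else could optionally be delegated to the base's own interfaces:
    // return Base::QueryInterface(riid, ppv);
    *ppv = nullptr;
    return E_NOINTERFACE;
}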
Other things to watch for: make sure that Base has a virtual dtor declared - even if it's just empty - so that when Base does a 'delete this' when the ref count goes to 0, it will call the dtors in the derived classes and any resources they have allocated get cleaned up appropriately. Also, be sure to get the ref counting correct, and thread-safe if needed; check with any good intro-to-COM book (e.g. "Inside Distributed COM", which, despite the name, starts off with plain COM) to see how other folk do this.
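A minimal sketch of what that side of Base might look like (names taken from the question; the Interlocked calls give you the thread-safe count):

#include <windows.h>
#include "base_h.h"   // MIDL-generated interface declarations

class Base : public IBase /*, plus whatever IOle* interfaces you need */
{
public:
    Base() : m_refCount(0) {}
    virtual ~Base() {}                 // virtual, so 'delete this' runs derived dtors too

    STDMETHODIMP_(ULONG) AddRef()
    {
        return InterlockedIncrement(&m_refCount);      // thread-safe increment
    }

    STDMETHODIMP_(ULONG) Release()
    {
        ULONG count = InterlockedDecrement(&m_refCount);
        if (count == 0)
            delete this;                               // derived dtors run because ~Base is virtual
        return count;
    }

    // QueryInterface, IDispatch and the rest elided...

private:
    LONG m_refCount;
};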
This is a very common idiom in COM, and many frameworks use either #define macros or a more-derived template class to add in the AddRef/Release/QI (as MFC does) at the most-derived class and then delegate those to a well-known base class that handles much of the housekeeping.
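A bare-bones sketch of that template idiom (just the shape of it, not any particular framework's API):

// T is the "almost complete" class (e.g. Child1); the template pins down the
// IUnknown methods at the most-derived level and delegates the bookkeeping.
template <class T>
class ComObject : public T
{
public:
    // Qualified calls (this->Base::..., this->T::...) avoid virtual dispatch,
    // so these forwarders don't recurse into themselves.
    STDMETHODIMP_(ULONG) AddRef()  { return this->Base::AddRef(); }
    STDMETHODIMP_(ULONG) Release() { return this->Base::Release(); }

    STDMETHODIMP QueryInterface(REFIID riid, void** ppv)
    {
        return this->T::QueryInterface(riid, ppv);   // T knows its own interfaces
    }
};

// A class factory would then hand out instances as: new ComObject<Child1>();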
There are some common interfaces in Go's standard library, e.g., the io.Closer:
type Closer interface {
Close() error
}
If I wanted to define an interface in my code that has a Close method, would I embed io.Closer like this:
type example interface {
io.Closer
// ... some other functions or embedded types
}
or would I just define the function itself like:
type example interface {
Close() error
// ... some other functions or embedded types
}
Is there any best practice for this?
For such common and simple interfaces I would definitely embed the one from the standard lib (such as io.Closer, io.Reader and io.ByteReader).
But not any interface type. In general, interfaces should be defined where they are needed. Embedding an interface defined in another package (including the standard library) carries the danger of your types failing to implicitly implement it if it is changed or extended.
The "owner" (definer) of the package may change the interface (e.g. extend it with a new method) and properly update all of its own types implementing it, so the package continues to work from the outside, but obviously the package owner will not update your implementations.
For example, the reflect.Type interface type had no Type.ConvertibleTo() method back in Go 1.0; it was added in Go 1.1. The same thing may happen again: interfaces in the standard lib may get altered or extended in future Go versions, resulting in your existing code failing to implement them.
What's the difference between small, common interfaces and the "rest"? The bigger the interface, the weaker the abstraction – so goes the Go proverb. Small interfaces like io.Closer and io.Reader capture a tiny yet important piece of functionality. They are so common that "every" library tries to implement them and every utility function builds upon them. I don't ever expect them to change. If there is ever a reason to change or extend them, new interfaces will more likely be added instead. That's unlike bigger interfaces, where the abstraction is harder to capture accurately; they have a better chance of changing or evolving over time.
In Visual Studio, I've created a UML class diagram with a class that realises an interface containing an attribute and an operation as thus:
The operation is automatically replicated to the class, but not the attribute. The MSDN guidelines indicate this behaviour:
When you create a realization connector, the operations of the interface are automatically replicated in the realizing class. If you add new operations to an interface, they are replicated in its realizing classes.
However, this seems to contradict their statement just beforehand, namely:
Realization means that a class implements the attributes and operations specified by the interface.
I'm sure there must be a good technical reason for this (some OO concept like polymorphism or abstraction), but I can't think why it distinguishes between attributes and operations in this way.
Can anyone give me some insight into this, and perhaps what I should do to get round it (do I add the attributes to the class manually in UML?), as it's resulting in generated code that doesn't compile?
While I don't know for sure, I'd imagine it is because in C# interfaces cannot contain fields, only methods. Having attributes on an interface therefore doesn't make sense.
Interfaces can contain properties, but these just get compiled to PropType get_PropName() and void set_PropName(PropType value). (Fun fact, trying to declare those methods yourself will generate a compiler error.)
Unfortunately, there is not a nice "out of the box" way of defining properties in UML class diagrams, as they are a language specific feature. I think you have to define a custom stereotype and the templates to generate the code accordingly - faff.
I read through the TypeScript Coding guidelines
And I found this statement rather puzzling:
Do not use "I" as a prefix for interface names
I mean something like this wouldn't make a lot of sense without the "I" prefix
class Engine implements IEngine
Am I missing something obvious?
Another thing I didn't quite understand was this:
Classes
For consistency, do not use classes in the core compiler pipeline. Use
function closures instead.
Does that state that I shouldn't use classes at all?
Hope someone can clear it up for me :)
When a team/company ships a framework/compiler/tool-set, they already have some experience and a set of best practices. They share it as guidelines. Guidelines are recommendations. If you don't like some of them, you can disregard them.
The compiler will still compile your code.
Though when in Rome...
This is my view of why the TypeScript team recommends not I-prefixing interfaces.
Reason #1 The times of Hungarian notation have passed
The main argument from I-prefix-for-interface supporters is that the prefix is helpful for immediately grokking (peeking) whether a type is an interface. That argument is an appeal to Hungarian notation: I prefix for an interface name, C for a class, A for an abstract class, s for a string variable, c for a const variable, i for an integer variable. I agree that such name decoration can provide type information without hovering the mouse over the identifier or navigating to the type definition via a hot-key. This tiny benefit is outweighed by Hungarian notation's disadvantages and the other reasons mentioned below. Hungarian notation is not used in contemporary frameworks. C# has the I prefix (and this is the only prefix in C#) for interfaces due to historical reasons (COM). In retrospect, one of the .NET architects (Brad Abrams) thinks it would have been better not to use the I prefix. TypeScript is COM-legacy-free, so it has no I-prefix-for-interface rule.
Reason #2 I-prefix violates the encapsulation principle
Let's assume you get some black box. You get some type reference that allows you to interact with that box. You should not care whether it is an interface or a class. You just use its interface part. Demanding to know what it is (an interface, a specific implementation or an abstract class) is a violation of encapsulation.
Example: let's assume you need to fix the "API Design Myth: Interface as Contract" problem in your code, e.g. delete the ICar interface and use the Car base class instead. Then you need to perform that replacement in all consumers. The I-prefix leads to an implicit dependency of consumers on black-box implementation details.
Reason #3 Protection from bad naming
Developers are often too lazy to think properly about names. Naming is one of the Two Hard Things in Computer Science. When a developer needs to extract an interface, it is easy to just add the letter I to the class name and you get an interface name. Disallowing the I prefix for interfaces forces developers to strain their brains to choose appropriate names for interfaces. The chosen names should differ in more than a prefix and should emphasize the difference in intent.
Abstraction case: you should not define an ICar interface and an associated Car class. Car is an abstraction and it should be the one used for the contract. Implementations should have descriptive, distinctive names, e.g. SportsCar, SuvCar, HollowCar.
Good example: WpfeServerAutosuggestManager implements AutosuggestManager, FileBasedAutosuggestManager implements AutosuggestManager.
Bad example: AutosuggestManager implements IAutosuggestManager.
Reason #4 Properly chosen names vaccinate you against API Design Myth: Interface as Contract.
In my practice, I have met a lot of people who thoughtlessly duplicated the interface part of a class in a separate interface, using a Car implements ICar naming scheme. Duplicating the interface part of a class in a separate interface type does not magically convert it into an abstraction. You will still get a concrete implementation, just with a duplicated interface part. If your abstraction is not so good, duplicating the interface part will not improve it in any way. Extracting an abstraction is hard work.
NOTE: In TS you don't need a separate interface for mocking classes or overloading functionality.
Instead of creating a separate interface that describes the public members of a class, you can use TypeScript utility types. E.g. Required<T> constructs a type consisting of all public members of type T.
export class SecurityPrincipalStub implements Required<SecurityPrincipal> {
public isFeatureEnabled(entitlement: Entitlement): boolean {
return true;
}
public isWidgetEnabled(kind: string): boolean {
return true;
}
public areAdminToolsEnabled(): boolean {
return true;
}
}
If you want to construct a type excluding some public members, then you can use a combination of Omit and Exclude.
Clarification regarding the link that you reference:
This is documentation about the coding style for the TypeScript codebase itself, not a style guideline for how to implement your project.
If using the I prefix makes sense to you and your team, use it (I do).
If not, the Java style of SomeThing (interface) with SomeThingImpl (implementation) also works; by all means use that.
I find #stanislav-berkov's answer to the OP's question pretty good. I would only share my 2 cents, adding that, in the end, it is up to your Team/Department/Company/whatever to reach a common understanding and set its own rules/guidelines to follow.
Sticking to standards and/or conventions, whenever possible and desirable, is a good practice and it keeps things easier to understand. On the other hand, I do like to think we are still free to choose the way we write our code.
Thinking a bit on the emotional side of it, the way we write code, or our coding style, reflects our personality and in some cases even our mood. This is what keeps us human and not just coding machines following rules. I believe coding can be a craft, not just an industrialized process.
I personally quite like the idea of turning a noun into an adjective by adding the -able suffix. It sounds very improper, but I love it!
interface Walletable {
inPocket:boolean
cash:number
}
export class Wallet implements Walletable {
//...
}
The guidelines that are suggested in the TypeScript documentation aren't for the people who use TypeScript but rather for the people who are contributing to the TypeScript project. If you read the details at the beginning of the page, it clearly defines who should use that guideline. Here is a link to the guidelines:
Typescript guidelines
In conclusion, as a developer you can name your interfaces the way you see fit.
I'm trying out this pattern similar to other answers, but exporting a function that instantiates the concrete class as the interface type, like this:
export interface Engine {
rpm: number;
}
class EngineImpl implements Engine {
rpm: number;
constructor() {
this.rpm = 0;
}
}
export const createEngine = (): Engine => new EngineImpl();
In this case the concrete implementation is never exported.
I do like to add a Props suffix.
interface FormProps {
some: string;
}
const Form:VFC<FormProps> = (props) => {
...
}
The type being an interface is an implementation detail. Implementation details should be hidden in APIs. That is why you should avoid I.
You should avoid both prefix and suffix. These are both wrong:
ICar
CarInterface
What you should do is make a pretty name visible in the API and have the implementation detail hidden in the implementation. That is why I propose:
Car - An interface that is exposed in the API.
CarImpl - An implementation of that API, that is hidden from the consumer.
From the Go documentation on method declarations:
The receiver type must be of the form T or *T where T is a type name. T is called the receiver base type or just base type. The base type must not be a pointer or interface type and must be declared in the same package as the method.
Can anyone give me some insight on why this might be? Are there any other (statically typed) languages that would allow this? I really want to define methods on an interface so I can treat any instance of a given interface type as another. For example (stealing the example from the Wikipedia article on the Template Method Pattern) if the following was valid:
type Game interface {
PlayOneGame(playersCount int)
}
type GameImplementation interface {
InitializeGame()
MakePlay(player int)
EndOfGame() bool
PrintWinner()
}
func (game *GameImplementation) PlayOneGame(playersCount int) {
game.InitializeGame()
for j := 0; !game.EndOfGame(); j = (j + 1) % playersCount {
game.MakePlay(j)
}
game.PrintWinner()
}
I could use any instance implementing "GameImplementation" as a "Game" without any conversion:
var newGame Game
newGame = NewMonopolyGame() // implements GameImplementation
newGame.PlayOneGame(2)
UPDATE: the purpose of this was to try and achieve all the benefits of abstract base classes without all the coupling that goes with an explicit hierarchy. If I wanted to define a new behaviour PlayBestOfThreeGames, abstract base classes would require me to change the base class itself - whereas here I just define one more method on top of the GameImplementation interface
It's probably for the same reason you can't define methods on interfaces in Java.
An interface is meant to be a description of a part of, or the whole of, the external interface for a set of objects and not how they implement the underlying behavior. In Java you would probably use an abstract class if you need parts of the behavior to be pre-defined but I think the only way to do that in Go is to use functions rather than methods.
I believe that for your example the more Go idiomatic code would be something like this:
type GameImplementation interface {
InitializeGame()
MakePlay(player int)
EndOfGame() bool
PrintWinner()
}
func PlayOneGame(game GameImplementation, playersCount int) {
game.InitializeGame()
for j := 0; !game.EndOfGame(); j = (j + 1) % playersCount {
game.MakePlay(j)
}
game.PrintWinner()
}
Where PlayOneGame and any specific game implementation are probably living in different packages.
Here is some discussion on golang-nuts
In answer to your question of whether there are other statically typed languages that allow this: yes, most. Any language with multiple inheritance allows classes to have arbitrary mixes of abstract and concrete methods. Also, see Scala's traits, which are like Java's interfaces but can have concrete methods. Scala also has structural types, which are really all that Go's interfaces are.
What you're describing as an interface is really what might elsewhere be referred to as an abstract class -- that is, a class with some methods defined but not all, which must be subclassed in order to be instantiated.
However, Go doesn't have any concept of a class hierarchy -- the whole type structure is flat. Each method on a class is defined for that class specifically, not on any parent class or subclass or interface. This was a conscious design decision, not an omission.
In Go, an interface is therefore not a component of a type hierarchy (as there is no such thing). Instead, it is simply an ad-hoc specification of the set of methods which must be implemented for a given purpose. That's all. They're a stand-in for dynamic typing whereby you can declare ahead of time which functions on a given type you'll be using -- then any variable whose type satisfies those requirements can be used.
This makes it impossible to use patterns like Generics with Go, and Rob Pike has said at a conference that this might be changed in the future if someone can come up with an elegant implementation and a compelling use case. But that remains yet to be seen.
First, it's important to notice that types implement interfaces implicitly — that is, interfaces are "duck types". Any type that provides the methods required by the interface is assignable to a variable of the interface type, without any cooperation from the original type. This is different from, say, Java or C# where a class that implements an interface has to declare its intention to implement the interface, in addition to actually providing the methods.
Go also has a pretty strong tendency against "action at a distance". For example, even though methods are declared separately from types, it's illegal to declare a method in a different package from its receiver type. You can't just go adding methods to os.File.
If interfaces could provide methods (making them traits/roles) then any type that implemented an interface would gain a bunch of new methods out of nowhere. Someone reading the code and seeing those methods used would probably have a hard time figuring out where they came from.
There's a problem with fragility — change the signature of a method that's required by an interface, and a bunch of other methods appear or disappear. In the case where they disappeared, it's not obvious where they "would have" come from. If types had to declare their intention to implement an interface then breaking the contract would prompt an error (and "accidentally" implementing an interface does nothing), but when interfaces are satisfied implicitly things are trickier.
Worse, there could be name conflicts — an interface provides a method with the same name as a method provided by a type that implements that interface, or two interfaces both provide a method with the same name, and some type happens to implement both of those interfaces. Resolving that conflict is the kind of complication that Go really likes to avoid, and in a lot of cases there is no satisfying resolution.
Basically, it would be really cool if interfaces could provide methods — roles as composable units of behavior are cool, and mesh well with Go's composition-over-inheritance philosophy — but actually doing it would be too complicated and too action-at-a-distance-y for Go to contemplate.
I have found that there is generally a single type or namespace that takes in any particular enum as a parameter, and as a result I have always defined those enums there. Recently though, I had a co-worker make a big deal about how that was a stupid thing to do, and that you should always have an enum namespace at the root of your project where you define every one of your enum types.
Where is the best place to locate enum types?
Why treat enums differently to other types? Keep them in the same namespace as they're likely to be used - and assuming they're going to be used by other classes, make them top-level types in their own files.
The only type of type which I do commonly clump together is delegates - I sometimes have a Delegates.cs file with a bunch of delegates in. Less so with .NET 3.5 and Func/Action, mind you.
Also, namespaces are for separation of things that belong together logically. Not all classes belong in the same namespace just because they are classes. Likewise, not all enums belong in the same namespace just because they are enums. Put them with the code they logically belong in.
I generally try to put all my different types (classes, interfaces and enums) in their own files, regardless of how small they are. It just makes it much easier to find and manage the file they're in, especially if you don't happen to be in Visual Studio and have the "go to definition" feature available. I've found that nearly every time I've put a "simple" type like that in another class, I end up either adding on to it later on, or reusing it in a way that it no longer makes sense for it to not have its own file.
As far as which namespace, it really depends on the design of whatever you're developing. In general, I try to mimic the .NET framework's convention.
I try to put everything associated with a class in the class. That includes not just enums, but also constants. I don't want to go searching elsewhere for the file or class containing the enums. In a large app with lots of classes and folders, it wouldn't always be obvious where to put the enum file so it would be easy to find.
If the enum is used in several closely-related classes, you could create a base class so that common types like enums are shared there.
Of course, if an enum is really generic and widely used, you may want to create a separate class for them, along with other generic utilities.
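For instance, here is a small C++-flavored sketch with made-up names, where the shared enum lives in the common base so related classes and their callers all refer to one definition:

// Hypothetical example: related shapes share one enum via their base class.
class Shape
{
public:
    enum class FillStyle { Solid, Hatched, Empty };
    virtual ~Shape() {}
    virtual void Fill(FillStyle style) = 0;
};

class Circle : public Shape
{
public:
    void Fill(FillStyle style) override { /* ... */ }
};

class ColoredBox : public Shape
{
public:
    void Fill(FillStyle style) override { /* ... */ }
};

// Callers name the shared type through the base: Shape::FillStyle::Hatched.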
I think you put Enums and Constants in the class that consumes them or that uses them to control code decisions the most, and you use code completion to find them. That way you don't have to remember where they are; they are associated with the class. So for example, if I have a ColoredBox class then I don't have to think about where they are. They would be part of ColoredBox: ColoredBox.Colors.Red, ColoredBox.Colors.Blue, etc.
I think of the enum and constant as a property or description of that class.
If it used by multiple classes and no one class reigns supreme then it is appropriate to have an enum class or constants class.
This follows the rules of encapsulation, isolating properties from dissimilar classes. What if you decide to change the RGB of Red in Circle objects but
you don't want to change the red for ColoredBox objects? Encapsulating their properties enables this.
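A rough C++-flavored sketch of that idea, with names assumed from the ColoredBox example above:

// Each class owns its color enum, so Circle's notion of Red can change
// without touching ColoredBox's.
class ColoredBox
{
public:
    enum class Color { Red, Blue };
    void SetColor(Color c) { m_color = c; }
private:
    Color m_color = Color::Red;
};

class Circle
{
public:
    enum class Color { Red, Green };   // free to diverge from ColoredBox::Color
    void SetColor(Color c) { m_color = c; }
private:
    Color m_color = Color::Red;
};

// Code completion then surfaces the values right off the class:
// ColoredBox box; box.SetColor(ColoredBox::Color::Red);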
I use nested namespaces for this. I like them better than putting the enum within a class because outside of the class you have to use the full MyClass::MyEnum usage even if MyEnum is not going to clash with anything else in scope.
By using a nested namespace you can use the "using" syntax. Also I will put enums that relate to a given subsystem in their own file so you don't get dependency problems of having to include the world to use them.
So in the enum header file you get:
// MyEnumHeader.h
// Consolidated enum header file for this dll,lib,subsystem whatever.
namespace MyApp
{
namespace MyEnums
{
enum SomeEnum { EnumVal0, EnumVal1, EnumVal2 };
};
};
And then in the class header file you get:
// MyInterfaceHeader.h
// Class interfaces for the subsystem with all the expected dependencies.
#include "MyEnumHeader.h"
namespace MyApp
{
class MyInterface
{
public:
virtual void DoSomethingWithEnumParam (MyEnums::SomeEnum enumParam) = 0;
};
};
Or use as many enum header files as makes sense. I like to keep them separate from the class headers so the enums can be params elsewhere in the system without needing the class headers. Then if you want to use them elsewhere you don't have to have the encapsulating class defs as you would if the enums were declared within the classes.
And as mentioned before, in the outer code you can use the following:
using namespace MyApp::MyEnums;
What environment?
In .NET I usually create an empty class file, rename it to MyEnum or whatever to indicate it holds my enum and just declare it in there.
If my enumeration has any chance of ever being used outside the class I intend to use it in, I create a separate source file for the enum. Otherwise I will place it inside the class I intend to use it in.
Usually I find that the enum is centered around a single class -- as a MyClassOptions type of thing.
In that case, I place the enum in the same file as MyClass, but inside the namespace but outside the class.
namespace mynamespace
{
public partial class MyClass
{
}
enum MyClassOptions
{
}
}
I tend to define them where their use is evident. If I have a typedef for a struct that makes use of it for some reason...
typedef enum {
HI,
GOODBYE
} msg_type;
typedef struct {
msg_type type;
union {
int hivar;
float goodbyevar;
};
} msg;