Style consistency when subclassing from an external library in a new project? - coding-style

Keeping a consistent style is important when adding to an existing project. But what about a new project that subclasses heavily from a library whose standards differ distinctly from the project's own?
I'm working on a project with a particular coding style, specifically when it comes to how classes and the methods within those classes are named. One module of the project, however (the UI), relies heavily on an external library, and most of the classes in this module will subclass that library's classes. In this case it's Qt, but it could be anything.
In such a situation, is it better to stick to the style used by the API or use the project's style for new functions? As an example, Qt uses camelCase for its function declarations whereas this project uses UpperCase. Consider something like this:
class MyNewWidget : public QWidget {
    Q_OBJECT
public:
    void setVisible(bool visible);               // QWidget override
    void SetNewFeature(std::string feature_key); // New function
    bool visibility() const;                     // I know you can't override that, but bear with me
    std::string GetNewFeature() const;           // propertyName() vs GetPropertyName()
};
This feels along the same lines as subclassing from the STL, which uses lower_case. There are good reasons in that case to follow convention, at least in some instances, e.g. when typedef'ing iterator types or if you want your begin/end/swap functions to be usable by <algorithm> et al.
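For instance, keeping the lower_case begin()/end() convention is what lets a type drop straight into range-for and <algorithm> (a rough sketch, names hypothetical):
#include <algorithm>
#include <vector>

class FeatureList {
public:
    using iterator = std::vector<int>::iterator;              // STL-style iterator typedef
    using const_iterator = std::vector<int>::const_iterator;

    iterator begin() { return values_.begin(); }              // lower_case so range-for and
    iterator end() { return values_.end(); }                  // <algorithm> can find them

private:
    std::vector<int> values_{1, 2, 3};
};

int main() {
    FeatureList features;
    auto it = std::find(features.begin(), features.end(), 2); // works with <algorithm>
    for (int value : features) { (void)value; }                // and with range-based for
    return it != features.end() ? 0 : 1;
}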
But I'm at a wall on this one.

I would strive to maintain a single style per class/file/module where possible. The difficulty comes when you're trying to recall the name of a method, and the less you make others stop and think, the better. Of course, a good IDE/editor should help you get the casing right.
Another option is to favor composition over inheritance. This would mean writing new methods in your project's style that delegate to the Qt methods internally. It also gives you the added benefit of not being tied to the Qt API, if that's at all a concern.
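A rough sketch of what that composition can look like (hypothetical names; only the delegation idea matters):
#include <QWidget>
#include <string>
#include <utility>

// The project-style class owns a QWidget rather than inheriting from it,
// and delegates to the Qt camelCase API internally.
class NewFeaturePanel {
public:
    void SetVisible(bool visible) { widget_.setVisible(visible); } // project style outside,
    bool IsVisible() const { return widget_.isVisible(); }         // Qt style inside

    void SetNewFeature(std::string feature_key) { feature_key_ = std::move(feature_key); }
    std::string GetNewFeature() const { return feature_key_; }

private:
    QWidget widget_;            // composed, not inherited
    std::string feature_key_;
};
The trade-off is that you have to forward every Qt call you care about, but the project-facing surface stays in a single style.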

Related

How can I design classes that follow SOLID without offloading violations of SOLID somewhere else?

I have a controller that violates the Open/Closed Principle, and I'm trying to figure out how to solve this under certain conditions. Here is the implementation.
struct BasicRecording
{
    void show() {}
    void hide() {}
};

struct AdvanceRecording
{
    void show() {}
    void hide() {}
};

class NotSolidRecordingSettingsController
{
    BasicRecording br;
    AdvanceRecording ar;
public:
    void switchToAR();
    void switchToBR();
};

void NotSolidRecordingSettingsController::switchToAR()
{
    br.hide();
    ar.show();
}

void NotSolidRecordingSettingsController::switchToBR()
{
    ar.hide();
    br.show();
}
The problem here is that if I add a new recording settings type, I have to go back inside the settings controller and add it there. If I inject BasicRecording and AdvanceRecording into NotSolidRecordingSettingsController, then the object instantiating NotSolidRecordingSettingsController needs to do the instantiating of BasicRecording and AdvanceRecording, and that object then violates OCP. Somebody has to create the objects.
How can I design this to be OCP-compliant without just offloading the non-OCP part to some other class?
Is there a particular design pattern for this kind of problem?
In the SOLID principles, OCP is really a consequence of SRP -- a class should have a single responsibility, and its code should only change when its internal requirements -- the requirements associated with its one job -- change.
In particular, a class shouldn't have to change because its clients change. It may have a lot of clients, and you don't want to mess with its code every time it gets new clients or the existing clients need to do something different. Hence OCP. Note that in OCP, "closed for modification" means you don't mess with it when you change the stuff that uses it. You do modify it when its own internal requirements change. That's just maintenance.
So OCP is about how classes relate to their clients, BUT, there are classes that have no clients. Their job is not to be services, and they are not used by other classes or to implement APIs, etc. These classes do not need to worry about OCP, because they naturally have no reason to change other than a change to their internal requirements.
A great example of such a class is what is sometimes called a "composition root" (googlable). Its single responsibility is to define how the system is built from its components. This is the guy that creates all the objects and injects them wherever they're needed.
You need a composition root that injects the settings panes into the controller (and whatever triggers them, because you can't have a method called switchToAR anymore). This class' job is to define the system by creating the required objects. When the arrangement of objects needs to change, then, that is a change to its internal requirements and you can go ahead and modify its code without violating SOLID.
(or you can have all that stuff read from configuration instead, but that is just implementing the composition root in config instead of your programming language. Sometimes that is a good idea, and sometimes not)
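A rough sketch (hypothetical names) of what that can look like: the controller depends only on an abstraction, and the composition root is the one place that knows the concrete settings types.
#include <cstddef>
#include <memory>
#include <utility>
#include <vector>

struct RecordingSettings {
    virtual ~RecordingSettings() = default;
    virtual void show() = 0;
    virtual void hide() = 0;
};

struct BasicRecording : RecordingSettings {
    void show() override {}
    void hide() override {}
};

struct AdvanceRecording : RecordingSettings {
    void show() override {}
    void hide() override {}
};

// Closed for modification: adding a new settings type does not touch this class.
class RecordingSettingsController {
    std::vector<std::unique_ptr<RecordingSettings>> panes_;
public:
    explicit RecordingSettingsController(std::vector<std::unique_ptr<RecordingSettings>> panes)
        : panes_(std::move(panes)) {}

    // Switching is by identity, not by a hard-coded method per pane.
    void switchTo(std::size_t index) {
        for (std::size_t i = 0; i < panes_.size(); ++i) {
            if (i == index) panes_[i]->show(); else panes_[i]->hide();
        }
    }
};

// Composition root: its single responsibility is wiring the system together,
// so this is the one place that changes when a new settings type is added.
RecordingSettingsController buildController() {
    std::vector<std::unique_ptr<RecordingSettings>> panes;
    panes.push_back(std::make_unique<BasicRecording>());
    panes.push_back(std::make_unique<AdvanceRecording>());
    return RecordingSettingsController(std::move(panes));
}
Adding a third recording type then means adding a class and one line in buildController(); the controller itself stays closed.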

Confused about the Interface and Class coding guidelines for TypeScript

I read through the TypeScript Coding guidelines
And I found this statement rather puzzling:
Do not use "I" as a prefix for interface names
I mean something like this wouldn't make a lot of sense without the "I" prefix
class Engine implements IEngine
Am I missing something obvious?
Another thing I didn't quite understand was this:
Classes
For consistency, do not use classes in the core compiler pipeline. Use
function closures instead.
Does that state that I shouldn't use classes at all?
Hope someone can clear it up for me :)
When a team/company ships a framework/compiler/tool-set, they already have some experience and a set of best practices. They share them as guidelines. Guidelines are recommendations; if you don't like any of them, you can disregard them.
The compiler will still compile your code.
Though when in Rome...
This is my view of why the TypeScript team recommends against I-prefixing interfaces.
Reason #1: The days of Hungarian notation have passed
The main argument from I-prefix-for-interface supporters is that the prefix is helpful for immediately grokking (at a glance) whether a type is an interface. That argument is an appeal to Hungarian notation: I prefix for interface names, C for classes, A for abstract classes, s for string variables, c for const variables, i for integer variables. I agree that such name decoration can give you type information without hovering the mouse over an identifier or navigating to the type definition via a hot-key. This tiny benefit is outweighed by the disadvantages of Hungarian notation and the other reasons mentioned below. Hungarian notation is not used in contemporary frameworks. C# has the I prefix for interfaces (and it is the only prefix in C#) for historical reasons (COM). In retrospect, one of the .NET architects (Brad Abrams) thinks it would have been better not to use the I prefix. TypeScript is free of COM legacy, so it has no I-prefix-for-interfaces rule.
Reason #2: The I prefix violates the encapsulation principle
Let's assume you get some black box. You get a type reference that allows you to interact with that box. You should not care whether it is an interface or a class; you just use its interface part. Demanding to know what it is (an interface, a specific implementation, or an abstract class) is a violation of encapsulation.
Example: let's assume you need to fix API Design Myth: Interface as Contract in your code, e.g. delete the ICar interface and use a Car base class instead. Then you need to perform that replacement in all consumers. The I prefix leads to an implicit dependency of consumers on black-box implementation details.
Reason #3: Protection from bad naming
Developers are often too lazy to think properly about names. Naming is one of the Two Hard Things in Computer Science. When a developer needs to extract an interface, it is easy to just add the letter I to the class name and call that the interface name. Disallowing the I prefix for interfaces forces developers to strain their brains and choose appropriate names for interfaces. Chosen names should differ by more than a prefix; they should emphasize the difference in intent.
Abstraction case: you should not define an ICar interface and an associated Car class. Car is an abstraction and it should be the one used for the contract. Implementations should have descriptive, distinctive names, e.g. SportsCar, SuvCar, HollowCar.
Good example: WpfeServerAutosuggestManager implements AutosuggestManager, FileBasedAutosuggestManager implements AutosuggestManager.
Bad example: AutosuggestManager implements IAutosuggestManager.
Reason #4: Properly chosen names vaccinate you against the API Design Myth: Interface as Contract
In my practice, I have met a lot of people who thoughtlessly duplicated the interface part of a class in a separate interface, using a Car implements ICar naming scheme. Duplicating the interface part of a class in a separate interface type does not magically convert it into an abstraction. You still get a concrete implementation, just with a duplicated interface part. If your abstraction is not good, duplicating the interface part will not improve it. Extracting an abstraction is hard work.
NOTE: In TS you don't need a separate interface for mocking classes or overloading functionality.
Instead of creating a separate interface that describes the public members of a class, you can use TypeScript utility types, e.g. Required<T> constructs a type consisting of all the public members of type T.
export class SecurityPrincipalStub implements Required<SecurityPrincipal> {
    public isFeatureEnabled(entitlement: Entitlement): boolean {
        return true;
    }
    public isWidgetEnabled(kind: string): boolean {
        return true;
    }
    public areAdminToolsEnabled(): boolean {
        return true;
    }
}
If you want to construct a type excluding some public members, you can use a combination of Omit and Exclude.
Clarification regarding the link you reference:
That page documents the coding style for the TypeScript compiler's own code; it is not a style guideline for how to implement your project.
If using the I prefix makes sense to you and your team, use it (I do).
If not, maybe the Java style of SomeThing (interface) with SomeThingImpl (implementation) suits you better; by all means use that.
I find #stanislav-berkov's answer to the OP's question a pretty good one. I would only share my 2 cents, adding that, in the end, it is up to your Team/Department/Company/Whatever to reach a common understanding and set its own rules/guidelines to follow.
Sticking to standards and/or conventions, whenever possible and desirable, is good practice and keeps things easier to understand. On the other hand, I do like to think we are still free to choose how we write our code.
Thinking a bit on the emotional side of it, the way we write code, or our coding style, reflects our personality and in some cases even our mood. This is what keeps us humans and not just coding machines following rules. I believe coding can be a craft not just an industrialized process.
I personally quite like the idea of turning a noun into an adjective by adding the -able suffix. It sounds very improper, but I love it!
interface Walletable {
    inPocket: boolean;
    cash: number;
}

export class Wallet implements Walletable {
    //...
}
The guidelines suggested in the TypeScript documentation aren't for people who use TypeScript, but rather for people contributing to the TypeScript project itself. If you read the details at the beginning of the page, it clearly defines who should use that guideline. Here is a link to the guidelines.
Typescript guidelines
In conclusion, as a developer you can name your interfaces the way you see fit.
I'm trying out this pattern similar to other answers, but exporting a function that instantiates the concrete class as the interface type, like this:
export interface Engine {
    rpm: number;
}

class EngineImpl implements Engine {
    rpm: number;

    constructor() {
        this.rpm = 0;
    }
}
export const createEngine = (): Engine => new EngineImpl();
In this case the concrete implementation is never exported.
I do like to add a Props suffix.
interface FormProps {
    some: string;
}

const Form: VFC<FormProps> = (props) => {
    ...
}
The type being an interface is an implementation detail. Implementation details should be hidden behind APIs. That is why you should avoid the I prefix.
You should avoid both prefix and suffix. These are both wrong:
ICar
CarInterface
What you should do is make a pretty name visible in the API and keep the implementation detail hidden in the implementation. That is why I propose:
Car - An interface that is exposed in the API.
CarImpl - An implementation of that API, that is hidden from the consumer.

C++ RAII, Prototype Design Pattern, and lazy initialization working in tandem

I'm trying my best to adhere to some strict design patterns while developing my current code base, as I am hoping I will be working on it for quite a while to come and I want it to be as flexible and clean as possible. However, in trying to combine all these design patterns to solve the current problem I am facing, I am running into some issues I'm hoping someone can advise me a bit on.
I'm working on some basic home-brewed GUI widgets that are to provide some generic click/drag/duplication behavior. On the surface, the user clicks the widget, then drags it somewhere. Having dragged it far enough, the widget will 'appear' to clone itself and leave the user dragging this new copy.
The Prototype design pattern obviously enters the fray to make this duplication functionality generalizable for many types of widgets. For most objects, the story ends there: the prototype object is virtually an identical copy of the duplicated version the user ends up dragging.
However, one of the objects I want to duplicate has some fairly big resources attached to it, so I don't want to load them until the user actually decides to click and drag, and subsequently duplicate, that particular object. Enter lazy initialization. But this presents me with a bit of a conundrum. I cannot just let the prototype object clone itself, as it would need to have the big resources loaded before the user duplicates the dummy prototype version. I'm also not keen on putting logic into the object which, upon being cloned/duplicated, checks what's going on and decides whether it should load these resources. Instead, a helpful person suggested I create a kind of shell object which, when cloned, would return a more derived version containing the resources, allowing me to use both RAII and lazy initialization.
But I'm having some trouble implementing this, and I'm starting to wonder if I can even do it the way I'm thinking it should be done. Right now it looks like this:
class widgetSpawner : public widget {
public:
    widgetSpawner();
    ~widgetSpawner();

private:
    widget* mPrototypeWidget; // Blueprint with which to spawn new elements
};

class widgetAudioShell : public widget {
public:
    widgetAudioShell(std::string pathToAudioFile);
    widgetAudioShell(const widgetAudioShell& other);
    ~widgetAudioShell();

    virtual widgetAudio* clone() const { return new widgetAudio(*this); };

private:
    std::string mPathToAudioFile;
};

class widgetAudio : public widgetAudioShell {
public:
    widgetAudio(AudioEngineAudioTrack& aTrack);
    widgetAudio(const widgetAudio& other);
    widgetAudio(const widgetAudioShell& other);
    ~widgetAudio();

    virtual widgetAudio* clone() const { return new widgetAudio(*this); };

private:
    AudioEngineAudioTrack& mATrack;
};
Obviously, this is not workable, as the shell doesn't know about the class that derives from it, so it cannot return it via the clone function. However, if I keep the two independent inheritance-wise (as in, they both inherit from widget), then the compiler complains about the lack of covariance, which I think makes sense? Or maybe it's because I am again having trouble properly defining one before the other.
Essentially, widgetAudioShell needs to know about widgetAudio so it can return a 'new' copy, and widgetAudio needs to know about widgetAudioShell so it can read its member functions when being created/cloned.
If I am not mistaken, this circular dependency exists because of my preference for using references rather than pointers, and if I have to use pointers, then suddenly all my other widgets need to do the same, which I'd find quite hellish. I'm hoping someone who's had their fingers in something similar might provide some wisdom?

What does "1" in "D2D1" stands for?

D2D1 is the namespace for the Direct2D technology in Win32. However, I don't understand the etymology of this name. The D2D part most likely refers to Direct2D, but the trailing 1 puzzles me...
There are also a lot of classes with "1" as a suffix: IDWriteFactory1, IDWriteFont1, IDWriteTextLayout1, etc. — what is their purpose, and difference from the similar objects without the suffix?
COM interfaces are usually named with increasing numbers to allow for versioning. Once a COM interface has been published publicly, COM rules dictate that the interface is not allowed to change anymore, or else compatibility with existing code breaks. If new functionality needs to be added, a new interface has to be exposed. Let's take IDWriteFactory1 for example. Today, that is a first-release interface. In the future, maybe an IDWriteFactory2 will be added that extends IDWriteFactory1 with new methods. Existing code is preserved, since it doesn't know anything about IDWriteFactory2, but new code could create an IDWriteFactory1 object and query it to see if it supports IDWriteFactory2, and if so use the newer methods as needed.
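A minimal sketch of that query-for-the-newer-interface pattern (assuming an SDK where IDWriteFactory2 is declared; error handling kept to the bare minimum):
#include <windows.h>
#include <dwrite_2.h>   // declares IDWriteFactory1 and IDWriteFactory2

void useNewerMethodsIfAvailable(IDWriteFactory1* factory1)
{
    IDWriteFactory2* factory2 = nullptr;
    // Ask the existing object whether it also implements the newer interface.
    if (SUCCEEDED(factory1->QueryInterface(__uuidof(IDWriteFactory2),
                                           reinterpret_cast<void**>(&factory2))))
    {
        // IDWriteFactory2-only methods can be used here...
        factory2->Release();
    }
    // ...otherwise fall back to what IDWriteFactory1 already offers.
}
The same object keeps satisfying old clients through IDWriteFactory1, while newer clients opt in to IDWriteFactory2 when it is available.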
The naming of the D2D1 namespace is likely similar. It is probably being used for versioning purposes (Direct2D v1 ?). Maybe there will be a D2D2 (Direct2D v2) namespace added in the future for things that don't fit in the existing Direct2D architecture. Who knows for sure.

Where is the best place to locate enum types?

I have found that there is generally a single type or namespace that takes any particular enum as a parameter, and as a result I have always defined those enums there. Recently, though, a co-worker made a big deal about how that was a stupid thing to do, and said you should always have an enum namespace at the root of your project where you define every one of your enum types.
Where is the best place to locate enum types?
Why treat enums differently to other types? Keep them in the same namespace as they're likely to be used - and assuming they're going to be used by other classes, make them top-level types in their own files.
The only type of type which I do commonly clump together is delegates - I sometimes have a Delegates.cs file with a bunch of delegates in. Less so with .NET 3.5 and Func/Action, mind you.
Also, namespaces are for separation of things that belong together logically. Not all classes belong in the same namespace just because they are classes. Likewise, not all enums belong in the same namespace just because they are enums. Put them with the code they logically belong in.
I generally try to put all my different types (classes, interfaces and enums) in their own files, regardless of how small they are. It just makes it much easier to find and manage the file they're in, especially if you don't happen to be in Visual Studio and have the "go to definition" feature available. I've found that nearly every time I've put a "simple" type like that in another class, I end up either adding on to it later on, or reusing it in a way that it no longer makes sense for it to not have its own file.
As far as which namespace, it really depends on the design of whatever you're developing. In general, I try to mimic the .NET framework's convention.
I try to put everything associated with a class in the class. That includes not just enums, but also constants. I don't want to go searching elsewhere for the file or class containing the enums. In a large app with lots of classes and folders, it wouldn't always be obvious where to put the enum file so it would be easy to find.
If the enum is used in several closely related classes, you could create a base class so that common types like enums are shared there.
Of course, if an enum is really generic and widely used, you may want to create a separate class for them, along with other generic utilities.
I think you put enums and constants in the class that consumes them or uses them to control code decisions the most, and you use code completion to find them. That way you don't have to remember where they are; they are associated with the class. So, for example, if I have a ColoredBox class, then I don't have to think about where they are. They would be part of ColoredBox: ColoredBox.Colors.Red, ColoredBox.Colors.Blue, etc.
I think of the enum and constant as a property or description of that class.
If it is used by multiple classes and no one class reigns supreme, then it is appropriate to have an enum class or constants class.
This follows the rules of encapsulation, isolating properties from dissimilar classes. What if you decide to change the RGB of Red in Circle objects but don't want to change the red for ColoredBox objects? Encapsulating their properties enables this.
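In C++ terms, a rough sketch of that kind of nesting (hypothetical names and colour values):
#include <cstdint>

class ColoredBox {
public:
    enum class Color : std::uint32_t { Red = 0xFF0000, Blue = 0x0000FF };
    void setColor(Color c) { color_ = c; }
private:
    Color color_ = Color::Red;
};

class Circle {
public:
    // Tweaked independently of ColoredBox::Color without affecting it.
    enum class Color : std::uint32_t { Red = 0xEE1111, Blue = 0x1111EE };
    void setColor(Color c) { color_ = c; }
private:
    Color color_ = Color::Red;
};

// Usage: ColoredBox::Color::Red and Circle::Color::Red are distinct types and values.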
I use nested namespaces for this. I like them better than putting the enum within a class because outside of the class you have to use the full MyClass::MyEnum usage even if MyEnum is not going to clash with anything else in scope.
By using a nested namespace you can use the "using" syntax. Also I will put enums that relate to a given subsystem in their own file so you don't get dependency problems of having to include the world to use them.
So in the enum header file you get:
// MyEnumHeader.h
// Consolidated enum header file for this dll, lib, subsystem, whatever.
namespace MyApp
{
    namespace MyEnums
    {
        enum SomeEnum { EnumVal0, EnumVal1, EnumVal2 };
    }
}
And then in the class header file you get:
// MyInterfaceHeader.h
// Class interfaces for the subsystem with all the expected dependencies.
#include "MyEnumHeader.h"

namespace MyApp
{
    class MyInterface
    {
    public:
        virtual void DoSomethingWithEnumParam(MyEnums::SomeEnum enumParam) = 0;
    };
}
Or use as many enum header files as makes sense. I like to keep them separate from the class headers so the enums can be params elsewhere in the system without needing the class headers. Then if you want to use them elsewhere you don't have to have the encapsulating class defs as you would if the enums were declared within the classes.
And as mentioned before, in the outer code you can use the following:
using namespace MyApp::MyEnums;
What environment?
In .NET I usually create an empty class file, rename it to MyEnum or whatever to indicate it holds my enum and just declare it in there.
If my enumeration has any chance of ever being used outside the class I intend to use it, I create a separate source file for the enum. Otherwise I will place it inside the class I intend to use it.
Usually I find that the enum is centered around a single class -- as a MyClassOptions type of thing.
In that case, I place the enum in the same file as MyClass, inside the namespace but outside the class.
namespace mynamespace
{
    public partial class MyClass
    {
    }

    enum MyClassOptions
    {
    }
}
I tend to define them where their use is evident. If I have a typedef for a struct that makes use of it for some reason...
typedef enum {
    HI,
    GOODBYE
} msg_type;

typedef struct {
    msg_type type;
    union {
        int hivar;
        float goodbyevar;
    };
} msg;
