How far can we stretch a UML Interface? - events

First, context:
We're developing some applications using the QP framework from Quantum Leaps.
QP is basically an event manager for state machines.
Our approach:
Since we're using QP, our modules interact through events and signals (classes/structs and enums), except for a few very specific modules that interact through method calls.
Problem:
For those last modules, interfaces are easy: just a collection of all the public methods in the module. But what about the others? Can we say an interface is a composition of other classes and enumerations?
Since the modules interact by sending/receiving events, both should know what kind of data packets (events) can travel through that interface.
Can we represent an interface like this? Or should an interface only be a collection of methods?
edit 1:
Replying to the comments below: I don't mean that the interface has nested classes, but that the interface would define those classes so they could be used as events. From your answers, though, it seems using signals would be a better approach. (The ADC:: prefix and the name of the interface aren't meant to be related; that's just a bad naming choice, as the package was named ADC and so was the interface.)
edit 2:
From the comments and answers below, I had no knowledge of the signal stereotype, so updating the question, I think it would become something like this?
This solves the classes and signals problem; however, the enums remain...
My intention was to say that the interface would define these keywords, i.e. the module that implements the interface should define this enum and its values.
Is this the correct approach?

The interface in your diagram is perfectly legit. UML interfaces do not have the constraints of Java interfaces: they can have operations, attributes, and all kinds of associations (by the way, the aggregation formally does not add any semantics to associations, and you could safely consider removing the white diamonds), including composition (black diamond).
However it might not represent what you are looking for: interfaces express a contract. It means that the classes implementing it must provide those attributes and associations, and not that the classes will get all these features by a kind of “inheritance”.
Moreover, I'm not sure whether you are trying to express some complex interface, or just found no better way to represent event consumption/generation. In the latter case it would be an overcomplex and misleading approach.
For event-based design, you may be interested in using signals. Signals are like classes, but there are UML elements to express the ability to process a family of events and/or to generate events. Using signals moreover facilitates the link between sequence diagrams and class diagrams, as a message can correspond directly to an event instance.

The model per se is legit. However, it's likely not what you intended. The shared aggregation has no defined meaning. See p. 110 of UML 2.5
shared: Indicates that the Property has shared aggregation semantics. Precise semantics of shared aggregation varies by application area and modeler.
So unless you define the meaning of them the model has no defined meaning.
If you intend to inherit operations/attributes you would draw an open triangle arrow towards other classes. Enumerations should simply be an attribute (simple association with role).
P.S. As per Geert's comment I notice the nesting too. So you obviously over-define this by adding the (meaningless) shared aggregation. You could represent the nested classes by showing them inside the enlarged «interface». Or you use (as suggested by Geert) the nesting relation (personally I use that rarely, but it's an alternative).

Related

Interface attributes not automatically replicated to class in UML class diagram in Visual Studio

In Visual Studio, I've created a UML class diagram with a class that realises an interface containing an attribute and an operation, like so:
The operation is automatically replicated to the class, but not the attribute. The MSDN guidelines indicate this behaviour:
When you create a realization connector, the operations of the interface are automatically replicated in the realizing class. If you add new operations to an interface, they are replicated in its realizing classes.
However, this seems counterintuitive to their statement just beforehand, namely:
Realization means that a class implements the attributes and operations specified by the interface.
I'm sure there must be a good technical reason for this (some OO concept like polymorphism or abstraction), but I can't think why it distinguishes between attributes and operations in this way.
Can anyone give me some insight into this, and perhaps what I should do to get round it (do I add the attributes to the class manually in UML?), as it's resulting in generated code that doesn't compile?
While I don't know for sure, I'd imagine it is because in C# interfaces cannot contain fields, only methods. Having attributes on an Interface therefore doesn't make sense.
Interfaces can contain properties, but these just get compiled to PropType get_PropName() and void set_PropName(PropType value). (Fun fact, trying to declare those methods yourself will generate a compiler error.)
Unfortunately, there is not a nice "out of the box" way of defining properties in UML class diagrams, as they are a language-specific feature. I think you have to define a custom stereotype and the templates to generate the code accordingly - faff.

Why doesn't Java 8 provide the same solution for multiple class inheritance that it gave for interface default methods?

Problem:
We know that Java doesn't allow extending multiple classes because it would result in the Diamond Problem, where the compiler couldn't decide which superclass method to use. With interface default methods, the Diamond Problem was introduced in Java 8. That is because if a class implements two interfaces, each defining the same default method, and the implementing class doesn't override the common default method, the compiler couldn't decide which implementation to choose.
Solution:
Java 8 requires you to provide an implementation for a default method that is inherited from more than one interface. So if a class implemented both interfaces mentioned above, it would have to provide an implementation for the common default method. Otherwise the compiler would throw a compile-time error.
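For illustration, here is a minimal sketch of that rule (the interface and class names are invented for this example):

interface Left {
    default String greet() { return "Hello from Left"; }
}

interface Right {
    default String greet() { return "Hello from Right"; }
}

// Without the override below, compilation fails with an error along the lines of
// "class Both inherits unrelated defaults for greet() from types Left and Right".
class Both implements Left, Right {
    @Override
    public String greet() {
        // The class must pick (or combine) the inherited defaults explicitly,
        // e.g. by delegating to one of them:
        return Left.super.greet();
    }
}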
Question:
Why is this solution not applicable to multiple class inheritance, by overriding common methods in the child class?
You didn’t understand the Diamond Problem correctly (and granted, the current state of the Wikipedia article doesn’t explain it sufficiently). As shown in this graphic,
the diamond problem occurs when the same class is inherited multiple times through different inheritance paths. This isn't a problem for interfaces (and never was), as they only define a contract, and specifying the same contract multiple times makes no difference.
The main problem is not associated with the methods but the data of that super type. Should the instance state of A exist once or twice in that case? If once, C and B can have different, conflicting constraints on A’s instance state. Both classes might also assume to have full control over A’s state, i.e. not consider that other class having the same access level. If having two different A states, a widening conversion of a D reference to an A reference becomes ambiguous, as either A could be meant.
Interfaces don't have these problems, as they do not carry instance data at all. They also have (almost) no accessibility issues, as their methods are always public. Allowing default methods doesn't change this, as default methods still don't access instance variables but operate through the interface's methods only.
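A small illustration of that last point (the interface here is made up): a default method has no fields to work with, so it can only delegate to the interface's other methods.

interface Account {
    long balanceInCents();   // abstract; the state lives in the implementing class

    default boolean isOverdrawn() {
        // No instance variables are accessible here; the default method
        // can only call other methods of the interface.
        return balanceInCents() < 0;
    }
}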
Of course, there is the possibility that B and C declared default methods with identical signature, causing an ambiguity that has to be resolved in D. But this is even the case, when there is no A, i.e. no “diamond” at all. So this scenario is not a correct example of the “Diamond Problem”.
Methods introduced by interfaces may always be overridden, while methods introduced by classes could be final. This is one reason why you couldn't necessarily apply the same strategy to classes as you could to interfaces.
The conflict described as "diamond problem" can best be illustrated using a polymorphic call to method A.m() where the runtime type of the receiver has type D: Imagine D inherits two different methods both claiming to play the role of A.m() (one of them could be the original method A.m(), at least one of them is an override). Now, dynamic dispatch cannot decide which of the conflicting methods to invoke.
Aside: the distinction between the "diamond problem" and regular name clashes is particularly relevant in languages like Eiffel, where the conflict could be locally resolved from the perspective of type D, e.g., by renaming one method. This would avoid the name clash for invocations with static type D, but not for invocations with static type A.
Now, with default methods in Java 8, the JLS was amended with rules that detect any such conflicts, requiring D to resolve the conflict (many different cases exist, depending on whether or not some of the types involved are classes). I.e., the diamond problem is not "solved" in Java 8; it is just avoided by rejecting any programs that would produce it.
In theory, similar rules could have been defined in Java 1 to admit multiple inheritance for classes. It's just a decision that was made early on, that the designers of Java did not want to support multiple inheritance.
The choice to admit multiple (implementation) inheritance for default methods but not for class methods is a purely pragmatic choice, not necessitated by any theory.

Swift: Do protocols even have a real purpose?

I'm wondering why protocols are used in Swift. In every case I've had to use one so far (UICollectionViewDelegate, UICollectionViewDataSource) I've noted that they don't even have to be added to the class declaration for my code to work. All they do is make it so that your class needs to have certain methods in it in order to compile. Beats me why this is useful other than as a little post-it note to help you keep track of what your classes do.
I'm assuming I'm wrong though. Would anyone care to point out why to me please?
A protocol defines a blueprint of methods, properties, and other requirements that suit a particular task or piece of functionality. The protocol doesn’t actually provide an implementation for any of these requirements—it only describes what an implementation will look like.
So it's basically an interface, right?
You use an interface when you want to define a "contract" for your code. In most cases, the purpose of this is to enable multiple implementations of that contract. For example, you can provide a real implementation, and a fake one for testing.
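Sketched in Java for comparison, since a Swift protocol plays the same role as the interface here (all names below are invented):

// The contract: calling code depends only on this.
interface PriceSource {
    double priceFor(String productId);
}

// Real implementation used in production.
class CatalogPriceSource implements PriceSource {
    public double priceFor(String productId) {
        // ... look the price up in a catalogue or database ...
        return 9.99;
    }
}

// Fake implementation used in tests, returning predictable values.
class FixedPriceSource implements PriceSource {
    public double priceFor(String productId) {
        return 1.00;
    }
}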
Further Reading
Protocols
What is the point of an Interface?
It allows flexible linkage between parts of code. Once the protocol is defined, it can be used by code that doesn't need to know what will be done when the methods are called, or exactly what object (or struct or enum) is actually being used. An alternative approach could be setting callbacks or blocks, but by using a protocol a complete set of behaviours can be grouped and documented.
Some other code will typically create the concrete instance and pass it to the code expecting the protocol (sometimes the concrete instance will pass itself). In some cases neither the implementation nor the code using it needs to be aware of the other, and it can all be set up by some other code to keep things reusable and testable.
It might be possible to do this in some languages with duck typing, which is to say that a runtime inspection could allow an object to act in such a context without a particular declaration, but this is probably not possible to do at compile time in all cases, and it could also be error prone if worked out implicitly.

DDD Events and abstract base classes

I am currently working on implementing multiple events which share common properties and are basically the same: Templates. Our event provider applies several events like SomeTemplateAddedEvent and SomeOtherTemplateAddedEvent. More variations could possibly come later, so I was thinking about implementing a base class for these TemplateAddedEvents, since they all share common properties. But I am doubtful whether this is the right way to go, since some people prefer events to be simple classes containing every property, instead of having to dig deeper to find out what the event can provide.
I hope someone can shed some light on this subject.
Inheritance is normally used for two orthogonal reasons - to reuse functionality and to declare an "is-a" relationship between classes. It seems that you're using it for the first reason. This reason is a weaker argument because reuse can also be attained with composition. The question to ask then is whether there exists an "is-a" relationship between the events. Sometimes inheritance among events is desirable, such as when it makes sense to provide a handler for all events deriving from the base class.
Overall, I'd caution against inheritance if it is only applied to attain code reuse. If it is an appropriate statement about the domain, then it can make sense.
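For concreteness, a rough sketch of the two options in Java; only the event class names come from the question, the fields are invented:

import java.time.Instant;

// Option 1: inheritance, expressing that every "template added" event is-a
// TemplateAddedEvent, so a single handler can treat them uniformly.
abstract class TemplateAddedEvent {
    final String templateId;
    final Instant occurredAt;

    TemplateAddedEvent(String templateId, Instant occurredAt) {
        this.templateId = templateId;
        this.occurredAt = occurredAt;
    }
}

class SomeTemplateAddedEvent extends TemplateAddedEvent {
    SomeTemplateAddedEvent(String templateId, Instant occurredAt) {
        super(templateId, occurredAt);
    }
}

// Option 2: a flat, self-contained event; every property is visible at a glance,
// at the cost of some duplication across events.
class SomeOtherTemplateAddedEvent {
    final String templateId;
    final Instant occurredAt;

    SomeOtherTemplateAddedEvent(String templateId, Instant occurredAt) {
        this.templateId = templateId;
        this.occurredAt = occurredAt;
    }
}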

Separation of domain and UI layer in a composite

I'm wondering if there is a pattern for separating the domain logic of a class from the UI responsibilities of the objects in the domain layer.
Example:
// Domain classes
interface MachinePart
{
    CalculateX(in, out)
    // Where do we put these:
    // Draw(Screen) ??
    // ShowProperties(View) ??
    // ...
}

class Assembly : MachinePart
{
    CalculateX(in, out)
    subParts
}

class Pipe : MachinePart
{
    CalculateX(in, out)
    length, diameter...
}
There is an application that calculates the value X for machines assembled from many machine parts. The assembly is loaded from a file representation and is designed as a composite. Each concrete part class stores some data and implements the CalculateX(in, out) method to simulate the behaviour of the whole assembly. The application runs well, but without a GUI. To increase usability, a GUI should be developed on top of the existing implementation (changes to the existing code are allowed). The GUI should show a schematic graphical representation of the assembly and provide part-specific dialogs to edit several parameters.
To achieve these goals the application needs new functionality for each machine part: drawing a schematic representation on the screen, showing a property dialog, and other things not related to the domain of machine simulation. I can think of several solutions to implement a Draw(Screen) functionality for each part, but I am not happy with any of them.
First, I could add a Draw(Screen) method to the MachinePart interface, but this would mix up domain code with UI code, and I would have to add a lot of functionality to each machine part class, which makes my domain model hard to read and hard to understand.
Another "simple" solution is to make all parts visitable and implement the UI code in visitors, but Visitor is not one of my favorite patterns.
I could derive UI variants from each machine part class and add the UI implementation there, but I would have to check whether each part class is suited for inheritance and be careful about changes to the base classes.
My current favorite design is to create a parallel composite hierarchy, where each component stores data defining a machine part, implements the UI methods, and has a factory method which creates instances of the corresponding domain classes, so that I can "convert" a UI assembly to a domain assembly. But there are problems going back from the created domain hierarchy to the UI hierarchy, for example to show calculation results in the drawing (imagine some parts store values during the calculation that I want to show in the schematic representation after the simulation).
Maybe there are some proven patterns for such problems?
Agree with #Marjin, and to generalise his answer. What you need is Model-View-Controller of which MVP and MVVM are variants. From your comments I think you understand that, but need to understand how to implement the pattern. Without knowing your language & target architecture it's hard to give absolute specifics. Notwithstanding, I'd start with the Observer pattern (link has sample code).
The problem you're dealing with is how to provide observable access from the domain to the UI - without encumbering the domain with UI-specific code. Observer provides a means to do that. It does require domain changes, in particular to enable registration of observers and notification of changes. However there's nothing GUI-specific in that so it stays well encapsulated.
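For concreteness, a minimal Observer sketch in Java (apart from Pipe from your example, all names are invented):

import java.util.ArrayList;
import java.util.List;

// Domain-side observer contract: nothing GUI-specific in it.
interface PartObserver {
    void partChanged(Pipe part);
}

// Domain class: it only knows that observers exist, not what they do.
class Pipe {
    private double length;
    private final List<PartObserver> observers = new ArrayList<>();

    public void addObserver(PartObserver observer) {
        observers.add(observer);
    }

    public void setLength(double length) {
        this.length = length;
        for (PartObserver o : observers) {
            o.partChanged(this);   // notify without referencing any UI type
        }
    }

    public double getLength() {
        return length;
    }
}

// UI-side observer: the drawing code stays out of the domain layer.
class SchematicView implements PartObserver {
    @Override
    public void partChanged(Pipe part) {
        System.out.println("redraw pipe, length = " + part.getLength());
    }
}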
hth.
PS: If your app is a typical thin-client web app you'll need to modify the approach. And beware: lots of web app frameworks are advertised as "MVC", but the implementation is architecturally quite different to the Observer pattern.
You can take a look at the model-view-presenter (mvp) and model-view-viewmodel (mvvm) patterns.
Fowler's presentation model includes two sample applications; it also might be of interest to you.
I think that investigating these patterns will give you some ideas on how to continue. MVVM looks a lot like your current solution, so I'd start there if I were you.
Maybe a View Helper can help. It's not a C++ pattern but a Java EE one, but in your case it will definitely separate your domain objects from their presentation details...
