What's the better coding style in OOP: one method with one parameter vs. two methods without parameters?

What's the better way to write clean code from the OOP point of view: having two related methods with different names, or one common method with an extra parameter?
(Simplified) Example:
1.) public void LogError() { ... }
public void LogWarning() { ... }
VS
2.) public void Log(LogType logType) { ... } //LogType.Error vs LogType.Warning

Both are good choices. Maybe a few examples can make it clearer. Usually, I try to think about who is going to use the library (me or someone else) and what programming language I am using.
For example:
If I use a strongly typed language like Java, C#, etc., then I prefer choice 2.
If I use something else like PHP or Python, then I prefer choice 1.
If I want to provide a simplified interface for other developers who are going to use my library, then I prefer choice 1 too.
When you have a LogType enum, for example, then it really doesn't matter. Just try to think about how to describe the intent and make it clear.
Watch out for boolean parameters, which can often be confusing. For example:
public void SaveProduct(bool cache) { ... }
In those situations, choice 1 is usually better, because it can be very hard to tell at the call site what the boolean value does (how it changes the behavior). A boolean parameter also usually signals that the method is doing two different things, so there is probably a way to refactor it, for example by splitting it into two methods so that callers do not need to know about the implementation details.
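For example, the split could look roughly like this (Product, ProductService and the dictionary cache are made up purely for illustration):
using System.Collections.Generic;

// Hypothetical types, just to make the sketch self-contained:
public class Product { public int Id { get; set; } }

public class ProductService
{
    private readonly Dictionary<int, Product> cache = new Dictionary<int, Product>();

    public void SaveProduct(Product product)
    {
        // persist the product (details elided)
    }

    public void SaveAndCacheProduct(Product product)
    {
        SaveProduct(product);
        cache[product.Id] = product;
    }
}
At the call site, SaveAndCacheProduct(product) explains itself, whereas SaveProduct(product, true) forces the reader to look up what the boolean means.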

Related

When to use Encapsulate Collection?

For the Data Class smell, as Martin Fowler describes it in Refactoring, he suggests that if I have a collection field in my class I should encapsulate it.
The Encapsulate Collection (208) pattern says we should add the following methods:
get_unmodified_collection
add_item
remove_item
and remove these:
get_collection
set_collection
This makes sure that any changes to the collection have to go through the class.
Should I refactor every class which has a collection field with this pattern? Or does it depend on other factors, like frequency of usage?
I use C++ in my project now.
Any suggestion would be helpful. Thanks.
These are well-formulated questions, and my answer is:
Should I refactor every class which has a collection field with this pattern?
No, you should not refactor every class which has a collection field. Any kind of fundamentalism is a road to hell. Use common sense and do not make your design too good, just good enough.
Or does it depend on other factors, like frequency of usage?
The second question comes from a common mistake. The reason we refactor or use a design pattern is not primarily the frequency of use. We do it to make the code clearer, more maintainable, more extensible, more understandable, and sometimes (but not always!) more efficient. Everything that adds to these goals is good. Everything that does not is bad.
You might have expected a yes/no answer, but such an answer is not possible here. As said, use your common sense and measure your solution against the viewpoints mentioned above.
I generally like the idea of encapsulating collections. Also encapsulating plain Strings into named business classes. I do it almost always when the classes are meaningful in the business domain.
I would always prefer
public class People {
    private final Collection<Man> people;
    ... // useful methods
}
over the plain Collection<Man> when Man is a business class (a domain object). Or I would sometimes do it in this way:
public class People implements List<Man> {
    private final List<Man> people;
    ... // delegate methods, such as
    @Override
    public int size() {
        return people.size();
    }
    @Override
    public Man get(int index) {
        // Here might also be some manipulation with the returned data etc.
        return people.get(index);
    }
    @Override
    public boolean add(Man man) {
        // Decoration - added some validation
        if (man == null) { // e.g. man does not match some criteria
            return false;
        }
        return people.add(man);
    }
    ... // useful methods
}
Or similarly I prefer
public class StreetAddress {
    private final String value;
    public String getTextValue() { return value; }
    ...
    // later I may add more business logic, such as parsing the street address
    // to street name and house number etc.
}
over just using a plain String streetAddress - thus I keep the door open to any future change of the underlying logic and to adding useful methods.
However, I try not to over-engineer my design when it is not needed, so I am just as happy with plain collections and plain Strings when they are better suited.
I think it depends on the language you are developing with, since there are already interfaces that do just that in C# and Java, for example. In C# we have ICollection, IEnumerable, IList. In Java, Collection, List, etc.
If your language doesn't have an interface to refer to a collection regardless of its inner implementation, and you need your own abstraction of that class, then it's probably a good idea to do so. And yes, you should not let the collection be modified directly, since that completely defeats the purpose.
It would really help if you told us which language you are developing with. Granted, it is kind of a language-agnostic question, but people knowledgeable in that language could recommend its best practices and point out whether there is already a way to achieve what you need.
The motivation behind Encapsulate Collection is to reduce the coupling of the collection's owning class to its clients.
Every refactoring tries to improve the maintainability of the code, so that future changes are easier. In this case, changing the collection class from a vector to a list, for example, changes all the clients' uses of the class. If you encapsulate it with this refactoring, you can change the collection without any changes to clients. This follows one of the SOLID principles, the dependency inversion principle: depend upon abstractions, do not depend upon concretions.
You have to decide for your own code base whether this is relevant for you, meaning whether your code base is still being changed and has to be maintained (then yes, do it for every class) or not (then no, leave the code be).
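For comparison, a minimal C# sketch of the Encapsulate Collection idea (Order and OrderLine are made-up names): the list itself is never handed out, clients only get a read-only view and must go through the add/remove methods, which mirror the pattern's get_unmodified_collection, add_item and remove_item.
using System.Collections.Generic;

public class OrderLine { /* ... */ }

public class Order
{
    private readonly List<OrderLine> lines = new List<OrderLine>();

    // get_unmodified_collection: a read-only view, no way to mutate the list
    public IReadOnlyCollection<OrderLine> Lines
    {
        get { return lines.AsReadOnly(); }
    }

    // add_item: validation and invariants can live here
    public void AddLine(OrderLine line)
    {
        lines.Add(line);
    }

    // remove_item
    public void RemoveLine(OrderLine line)
    {
        lines.Remove(line);
    }
}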

Is there a good way to use polymorphism to remove this switch statement?

I've been reading on refactoring and replacing conditional statements with polymorphism. The trouble I have is that it only seems to make sense to me when you have a more complex case where, without polymorphism, you would have to repeat the same switch statements or if-elses many times. I don't see how it makes sense if you're only doing it once - you have to have that conditional somewhere, right?
As an example, I recently wrote the following class, which is responsible for reading an XML file and converting its data into the program's objects. There are two possible file formats that we support, so I simply wrote a method in the class for handling each one, and used a switch to determine which one to use:
public class ComponentXmlReader
{
    public IEnumerable<UserComponent> ImportComponentsFromXml(string path)
    {
        var xmlFile = XElement.Load(path);
        switch (xmlFile.Name.LocalName)
        {
            case "CaseDefinition":
                return ImportComponentsFromA(xmlFile);
            case "Root":
                return ImportComponentsFromB(xmlFile);
            default:
                // unknown formats have to be handled somehow
                throw new NotSupportedException(
                    "Unknown root element: " + xmlFile.Name.LocalName);
        }
    }

    private IEnumerable<UserComponent> ImportComponentsFromA(XContainer file)
    {
        //do stuff
    }

    private IEnumerable<UserComponent> ImportComponentsFromB(XContainer file)
    {
        //do stuff
    }
}
As far as I can tell, I could write a class hierarchy for this to do the parsing, but I don't see the advantage here - I'd still have to use a case-switch to determine which class to instantiate. It looks to me like it would be extra complexity for no benefit. If I was going to keep these classes around and do more things with them that depended on the file type, then it would eliminate doing the same switch in multiple places, but this is single-use. Is this right, or is there some reason or technique I'm not seeing that makes it a good idea to use a polymorphic class hierarchy to do this?
If you had, say, an abstract ComponentImporter class, with concrete subclasses FromA and FromB, you could instantiate one of each, and put it in a Map. Then you could call componentImporterMap.get(xmlFile.Name.LocalName).importComponents() and avoid the switch.
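In C#, that idea might look roughly like this (the importer class names and the dictionary registry are illustrative assumptions, not code from the question):
using System;
using System.Collections.Generic;
using System.Xml.Linq;

public abstract class ComponentImporter
{
    public abstract IEnumerable<UserComponent> ImportComponents(XContainer file);
}

public class CaseDefinitionImporter : ComponentImporter
{
    public override IEnumerable<UserComponent> ImportComponents(XContainer file)
    {
        // do stuff for format A
        throw new NotImplementedException();
    }
}

public class RootImporter : ComponentImporter
{
    public override IEnumerable<UserComponent> ImportComponents(XContainer file)
    {
        // do stuff for format B
        throw new NotImplementedException();
    }
}

public class ComponentXmlReader
{
    // The switch is replaced by a lookup keyed on the root element's local name.
    private static readonly Dictionary<string, ComponentImporter> importers =
        new Dictionary<string, ComponentImporter>
        {
            { "CaseDefinition", new CaseDefinitionImporter() },
            { "Root", new RootImporter() }
        };

    public IEnumerable<UserComponent> ImportComponentsFromXml(string path)
    {
        var xmlFile = XElement.Load(path);
        // An unknown root element fails with a KeyNotFoundException here;
        // TryGetValue could be used to give a friendlier error.
        return importers[xmlFile.Name.LocalName].ImportComponents(xmlFile);
    }
}
The conditional has not disappeared so much as it has been turned into data: adding a third format means adding a dictionary entry rather than another case.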
As with all design choices, context is key. In this case, you have what seems to be a fairly simple class handling two very similar tasks. If the two Import methods contained very little duplicate code, then including them in a single class is perhaps the best choice since, as you say, it reduces complexity.
However, it's possible you'll use this class in the future, and even add new types of imports. In that case, the class would be more reusable if it was polymorphic.
Additionally, since these methods sound very similar, you're likely to have a bunch of duplicate code, which you could keep in a base class and only put import-specific code in the child classes.
Plus, as Carl mentions, there are a number of ways to implement this logic without using a case statement.

Is there a DSL or declarative system for TPL Dataflow?

Is there any DSL or other fully- or partially-declarative mechanism for declaring TPL Dataflow flows? Or is the best (only) practice to just wire them up in code?
Failing that, is there any DSL or other fully- or partially-declarative mechanism for using any dataflow library that I could use as a model and/or source of ideas?
(I've searched without success so maybe one doesn't exist ... or maybe I didn't find it.)
Update: To answer @svick below as to why I want this and what I gain by it:
First, I just like a sparser syntax that more clearly shows the flow rather than the details. I think
downloadString => createWordList => filterWordList => findPalindromes => printPalindrome;
is preferable to
downloadString.LinkTo(createWordList);
createWordList.LinkTo(filterWordList);
filterWordList.LinkTo(findPalindromes);
findPalindromes.LinkTo(printPalindrome);
with its repeated names and extra punctuation. Similar to the way you'd rather use the dot DSL to describe a DAG than a bunch of calls to the Visio DOM API. You can imagine a syntax for network flows, as well as pipelines, such that network flows in particular would be very clear. That may not seem compelling, of course, but I like it.
Second, I think that with a DSL you might be able to persist the DSL description, e.g., as a field in a row in a database, and then instantiate it later. Though perhaps that's a different capability entirely.
Let's start with the relevant facts and work from there:
There isn't anything like this for TPL Dataflow yet.
There isn't a good way of embedding a DSL into C#. The common compilers are not extensible and it would be hard to access local variables from a string-based DSL.
There are several limitations on operators in C#, but the most significant here is that operators can't be generic. This means that the sparser syntax either wouldn't be type-safe (which is unacceptable to me) or couldn't use overloaded operators.
The IDisposable returned from LinkTo() that can be used to break the created link isn't used that often, so it doesn't have to be supported. (Or maybe the expression that sets up the flow could return a single IDisposable that breaks the whole flow?)
Because of this, I think the best that can be done is something like:
downloadString.Link(createWordList).Link(filterWordList).Link(findPalindromes);
This avoids the repetition of LinkTo(), but is not much better.
The implementation of the simple form of this is mostly trivial:
public static class DataflowLinkExtensions
{
    public static ISourceBlock<TTarget> Link<TSource, TTarget>(
        this ISourceBlock<TSource> source,
        IPropagatorBlock<TSource, TTarget> target)
    {
        source.LinkTo(
            target,
            new DataflowLinkOptions { PropagateCompletion = true });
        return target;
    }

    public static void Link<TSource>(
        this ISourceBlock<TSource> source, ITargetBlock<TSource> target)
    {
        source.LinkTo(
            target,
            new DataflowLinkOptions { PropagateCompletion = true });
    }
}
I chose to set PropagateCompletion to true, because I think that makes the most sense here. But it could also be an option of Link().
I think most of the alternative linking operators of Axum are not relevant to TPL Dataflow, but linking multiple blocks to or from the same block could be done by taking a collection or array as one of the parameters of Link():
new[] { source1, source2 }.Link(target);
source.Link(target1, target2);
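A possible sketch of such overloads, as a supplement to the DataflowLinkExtensions class above (this is an assumption about how they might look, not existing TPL Dataflow API):
using System.Collections.Generic;
using System.Threading.Tasks.Dataflow;

public static class DataflowMultiLinkExtensions
{
    // Several sources into one target. Completion is deliberately not propagated
    // here, because the target would otherwise complete as soon as the first
    // source completes.
    public static void Link<TSource>(
        this IEnumerable<ISourceBlock<TSource>> sources,
        ITargetBlock<TSource> target)
    {
        foreach (var source in sources)
            source.LinkTo(target);
    }

    // One source into several targets, each link propagating completion.
    public static void Link<TSource>(
        this ISourceBlock<TSource> source,
        params ITargetBlock<TSource>[] targets)
    {
        foreach (var target in targets)
            source.LinkTo(
                target,
                new DataflowLinkOptions { PropagateCompletion = true });
    }
}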
If Link() actually returned something that represents the whole flow (similar to Encapsulate()), you could combine this to create more complicated flows, like:
source.Link(propagator1.Link(target1), target2);

How do I refactor chained methods?

Starting with this code:
new Person("ET").WithAge(88)
How can it be refactored to:
new Person("ET", 88)
What sequence of refactorings needs to be performed to complete the transformation?
Why? Because there could be hundreds of these, and I wouldn't want to introduce errors by doing it manually.
Would you say a drawback with fluent interfaces is they can't easily be refactored?
NOTE: I want to do this automatically without hand typing the code.
Perhaps the simplest way to refactor this is to change the name "WithAge" to "InitAge", make InitAge private, then call it from your constructor instead. Then update all references of new Person(string).WithAge(int) to use the new constructor.
If WithAge is a one-liner, you can just move the code to your new constructor instead, and do away with InitAge altogether, unless having the additional method provides extra readability.
Having good unit tests will isolate where errors are introduced, if they are.
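For concreteness, a sketch of the constructor-plus-private-InitAge approach described above (the field names are assumptions; the point is that the fluent method survives only as a private helper):
public class Person
{
    private readonly string name;
    private int age;

    public Person(string name)
    {
        this.name = name;
    }

    public Person(string name, int age) : this(name)
    {
        InitAge(age);              // formerly the public WithAge(int)
    }

    private void InitAge(int age)
    {
        this.age = age;
    }
}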
Assuming that WithAge is a method on Person that returns a Person, what about something like
Person(string name, int age)
{
    this.name = name;
    this.WithAge(age);
}
Or more generalized:
Person(SomeType originalParameter, FluentParamType fluentParameter)
{
    //Original constructor stuff
    this.FluentMethod(fluentParameter);
}
And then make the FluentMethod private if you don't want it exposed, or keep it public if you want to allow both ways.
If this is C# (ideally you would tag the question with the language), the Person class needs this constructor:
public Person(string name, int age)
: this(name) { WithAge(age); }
To then change all client code to call this new constructor where appropriate, you would need to find all occurrences of the pattern:
new Person(x1).WithAge(x2)
where x1 and x2 are expressions, and replace them with:
new Person(x1, x2)
If there are other modifier methods aside from WithAge, it might get more complicated. For example:
new Person(x1).WithHair(x2).WithAge(x3)
Perhaps you'd want that to become:
new Person(x1, x3).WithHair(x2)
It all depends on whether you have an IDE that lets you define language-aware search/replace patterns like that. You can get a long way to the solution with simple textual search and replace, combined with a macro that replays a sequence of key presses.
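For the textual route, here is a crude sketch of the kind of pattern involved (it only handles arguments without nested parentheses and a single chained call; anything richer really needs an AST-aware refactoring tool):
using System.Text.RegularExpressions;

public static class ChainedCallRewriter
{
    public static string Rewrite(string sourceText)
    {
        // new Person(<args1>).WithAge(<args2>)  =>  new Person(<args1>, <args2>)
        return Regex.Replace(
            sourceText,
            @"new Person\(([^()]*)\)\.WithAge\(([^()]*)\)",
            "new Person($1, $2)");
    }
}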
Would you say a drawback with fluent interfaces is they can't easily be refactored?
Not especially - it's more that refactoring features in IDEs are either designed flexibly enough to let you creatively invent new refactorings, or else they are hard-coded for certain common cases. I'd prefer the common cases to be defined as examples that I could mutate to invent new ones.
I don't have any practical experience with that sort of thing, but if I was in your situation the place I'd go looking would be custom Eclipse refactorings (or the equivalent in Refactor! Pro for .Net if that's what you're using).
Basically what you want is a match and replace, except that your regular expressions should match abstract syntax trees rather than plain text. That's what automated refactorings are.
One risk of this refactoring is that the target version is less precise than the original. Consider:
class Person {
    public Person(String name, int age);
    public Person(String name, int numberOfChildren);
}
There is no way to tell which of these constructors the chained call to Person.WithAge should be replaced with.
So, automated support for this would have to check for such ambiguities before allowing you to proceed. If there is already a constructor with the target parameters, abort the refactoring.
Other than that it seems pretty straightforward. Give the new constructor the following content:
public Person(String name, int age) {
    this(name);
    withAge(age);
}
Then you can safely replace the original call with the new.
(There is a subtle additional risk, in that calling withAge within the constructor, i.e. on a partially constructed object, isn't quite the same as calling it after the constructor. The difference matters if you have an inheritance chain and if withAge does something non-trivial. But then that's what your unit tests are for...)
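To see what can go wrong, here is a hedged C# illustration of that inheritance-chain case (Employee and Payroll are invented names; the real classes may look nothing like this):
public class Person
{
    private readonly string name;

    public Person(string name, int age)
    {
        this.name = name;
        WithAge(age);                  // virtual call from a constructor
    }

    public virtual Person WithAge(int age) { return this; }
}

public class Payroll
{
    public void RecordAge(int age) { }
}

public class Employee : Person
{
    private readonly Payroll payroll;

    public Employee(string name, int age) : base(name, age)
    {
        payroll = new Payroll();       // assigned only after base(name, age) has run
    }

    public override Person WithAge(int age)
    {
        payroll.RecordAge(age);        // NullReferenceException during construction
        return base.WithAge(age);
    }
}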
Write unit tests for the old code.
Refactor until the tests pass again.

UI interface and TDD babysteps

OK, having tried my first TDD attempt, it's time to reflect a little and get some guidance, because it wasn't that successful for me.
The solution was partly built on an existing framework, which perhaps makes TDD less ideal. The part that gave me the biggest problem was the interaction between the view and the controller. I'll give a few simple examples and hope that someone will tell me what I could do better.
Each view's interface inherits from a base interface with these members (there are more):
public interface IView
{
    void ShowField(string fieldId);
    void HideField(string fieldId);
    void SetFieldVisibility(string fieldId, bool visible);
    void DisableField(string fieldId);
    void ShowValidationError(string fieldId);
    ...
}
The interface for a concrete view would then add members for each field, like this:
public interface IMyView : IView
{
    string Name { get; set; }
    string NameFieldID { get; }
    ...
}
What do you think of this? Is inheriting from a common interface a good or bad idea?
One of the things that gave me trouble was that I first used ShowField and HideField and then found out I would rather use SetFieldVisibility. I didn't change the outcome of the method, but I had to update my test, which it seems shouldn't be necessary. Is having multiple methods doing the same thing a bad thing? On one hand both methods are handy for different cases, but they do clutter the interface, making it more complex than it strictly has to be.
Would a design without a common interface be better? That would remove the fieldID. I don't know why, but I think the fieldID thing smells; I might be wrong.
I would only add the Show and Hide methods when needed, that is, if they would be called by the controller. This would be a less generic solution and require more code in the view, but the controller code would be a bit simpler.
So a view interface might look like this:
public interface IMyView
{
    void ShowName();
    void HideName();
    string Name { get; set; }
    int Age { get; set; }
}
What do you want to test? Whether Show* will make a widget in the UI visible? What for?
My suggestion: Don't try to figure out if a framework is working correctly. It's a waste of time. The people who developed the framework should have done that, so you're duplicating their work.
Usually, you want to know if your code does the right thing. So if you want to know if you are calling the correct methods, create mockups:
public class SomeFrameworkMockup extends SomeFramework {
    public boolean wasCalled;

    public void methodToTest() {
        wasCalled = true;
    }
}
Build the UI using the mockups.
The second thing to test is whether your algorithms work. To do that, isolate them in simple helper objects where you can call every method easily, and test them with various inputs.
Avoid the external framework during tests. It only confuses you. When you've built a working product, test that using your mouse. If you find any problems, get to the root of them, and only then start writing tests against the framework to make sure this bug doesn't appear again. But 90% of the time, these bugs will be in your code, too.
At the moment I don't really see the added value of the common interface.
I think a better solution would be to have some properties on the controller class: IsControlXYZVisible. You can then databind the visible property of the control to this property.
And your unit test will test the value of IsControlXYZVisible, which will be easier to accomplish.
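A minimal sketch of what that could look like (the controller class, property name and test are made up for illustration):
public class MyController
{
    // The view data-binds its control's Visible property to this.
    public bool IsNameVisible { get; private set; }

    public void SetAnonymousMode(bool anonymous)
    {
        IsNameVisible = !anonymous;
    }
}

// The unit test then only touches the controller, never the UI framework:
// var controller = new MyController();
// controller.SetAnonymousMode(true);
// Assert.IsFalse(controller.IsNameVisible);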
I also don't understand why you say you had a bad experience with TDD. I think your application architecture needs more work.
Your question is a little bit obscure for me, but the title itself calls for a link:
The Humble Dialog box
And when you ask if it's bad to have two functions doing the same thing, I say "Yes, it's bad".
If one is calling the other, what's the point of having two functions?
If not, you have code duplication, which is a bug waiting to sprout when you update one and not the other.
In fact, there is a valid case where you have two nearly identical functions: one that checks its arguments and one that does not, but usually only one is public and the other private...
