Custom 'image attribute in Ada? - syntax

So I have a thing.
type Thing is record
   ...elements...
end record;
I have a function which stringifies it.
function ToString (T : Thing) return String;
I would like to be able to tell Ada to use this function for Thing'image, so that users of my library don't have to think about whether they're using a builtin type or a Thing.
However, the obvious syntax:
for Thing'image use ToString;
...doesn't work.
Is there a way to do this?

I don't know why the language doesn't support this, and I don't know whether anyone has ever raised a formal proposal that it should (an Ada Issue, or AI). The somewhat-related AI12-0020 (the 20th AI for Ada 2012) includes the remark "I don't think we rejected it for technical reasons as much as importance".
You can see why the Ada Rapporteur Group might think this was relatively unimportant: you can always declare an Image function; the difference between
Pkg.Image (V);
and
Pkg.Typ’Image (V);
isn’t very large.

One common method is to create a unary + function...
function "+"(item : myType) return String;
which is syntactically very light.
Obvious disclaimer: may lead to some ambiguity when applied to numeric types (e.g. Put (+4);)
However there is still the distinction between prebuilt types and user defined types.
'Img wouldn't be usable from your clients' code anyway, unless you specified an interface that enforced this function to be present (what would happen if a client used 'Img on a private type that didn't have such a function defined?).
If you end up having to have an interface, it really doesn't matter what the function is called.
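As a rough sketch of the Image / unary "+" idea (the package name, the record contents, and the rendering chosen in the body are all made up for illustration):
package Things is
   type Thing is record
      Value : Integer;
   end record;

   function Image (T : Thing) return String;

   --  Lightweight call-site syntax: Put_Line (+V);
   function "+" (T : Thing) return String renames Image;
end Things;

package body Things is
   function Image (T : Thing) return String is
   begin
      return "Thing(" & Integer'Image (T.Value) & ")";
   end Image;
end Things;
Callers then write Put_Line (+V) or Things.Image (V); the only thing they give up is the literal 'Image spelling.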

Related

Is there a difference between fun(n::Integer) and fun(n::T) where T<:Integer in performance/code generation?

In Julia, I most often see code written like fun(n::T) where T<:Integer, when the function works for all subtypes of Integer. But sometimes, I also see fun(n::Integer), which some guides claim is equivalent to the above, whereas others say it's less efficient because Julia doesn't specialize on the specific subtype unless the subtype T is explicitly referred to.
The latter form is obviously more convenient, and I'd like to be able to use it if possible, but are the two forms equivalent? If not, what are the practical differences between them?
Yes, Bogumił Kamiński is correct in his comment: f(n::T) where T<:Integer and f(n::Integer) will behave exactly the same, with the exception that the former method will have the name T already defined in its body. Of course, in the latter case you can just explicitly assign T = typeof(n) and it'll be computed at compile time.
There are a few other cases where using a TypeVar like this is crucially important, though, and it's probably worth calling them out:
f(::Array{T}) where T<:Integer is indeed very different from f(::Array{Integer}). This is the common parametric invariance gotcha (docs and another SO question about it).
f(::Type) will generate just one specialization for all DataTypes. Because types are so important to Julia, the Type type itself is special and allows parameterization like Type{Integer} to allow you to specify just the Integer type. You can use f(::Type{T}) where T<:Integer to require Julia to specialize on the exact type of Type it gets as an argument, allowing Integer or any subtypes thereof.
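A minimal Julia sketch of the first point above (the function names are made up):
f_abs(::Array{Integer}) = "matches only arrays whose element type is exactly Integer"
f_par(::Array{T}) where {T<:Integer} = "matches Vector{Int}, Vector{UInt8}, ..."

f_par([1, 2, 3])        # works: [1, 2, 3] is a Vector{Int}
f_abs(Integer[1, 2, 3]) # works: element type is exactly Integer
# f_abs([1, 2, 3])      # MethodError: a Vector{Int} is not a Vector{Integer}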
Both definitions are equivalent. Normally you will use fun(n::Integer) form and apply fun(n::T) where T<:Integer only if you need to use specific type T directly in your code. For example consider the following definitions from Base (all following definitions are also from Base) where it has a natural use:
zero(::Type{T}) where {T<:Number} = convert(T,0)
or
(+)(x::T, y::T) where {T<:BitInteger} = add_int(x, y)
And even when you need type information, in many cases it is enough to use the typeof function. Again an example definition is:
oftype(x, y) = convert(typeof(x), y)
Even if you are using a parametric type you can often avoid using a where clause (which is a bit verbose), like in:
median(r::AbstractRange{<:Real}) = mean(r)
because you do not care about the actual value of the parameter in the body of the function.
Now, if you are a Julia user like me, the question is how to convince yourself that this works as expected. There are the following methods:
you can check that one definition overwrites the other in methods table (i.e. after evaluating both definitions only one method is present for this function);
you can check the code generated by both definitions using @code_typed, @code_warntype, @code_llvm or @code_native etc. and find out that it is the same (a sketch follows this list);
finally you can benchmark the code for performance using BenchmarkTools
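For example (h is an illustrative name; the method-overwrite warning Julia prints on the second definition confirms the two signatures are considered identical):
h(n::Integer) = n + 1
h(n::T) where {T<:Integer} = n + 1   # overwrites the method defined just above

length(methods(h))   # 1 -- only one method remains in the method table

@code_warntype h(3)  # same output as either definition alone produces
# using BenchmarkTools; @btime h(3)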
A nice diagram explaining what Julia does with your code is here: http://slides.com/valentinchuravy/julia-parallelism#/1/1 (I also recommend the whole presentation to any Julia user; it is excellent). You can see on it that Julia, after lowering the AST, applies a type-inference step to specialize the function call before the LLVM codegen step.
You can hint to the Julia compiler to avoid specialization. This is done using the @nospecialize macro since Julia 0.7 (it is only a hint, though).
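A small sketch of that hint (the function name is illustrative):
# One compiled specialization can serve all argument types here
function describe(@nospecialize(x))
    return string(typeof(x))
end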

How to Work with Ruby Duck Typing

I am learning Ruby and I'm having a major conceptual problem concerning typing. Allow me to detail why I don't understand this paradigm.
Say I am method chaining for concise code, as you do in Ruby. I have to know precisely what the return type of each method call in the chain is, otherwise I can't know what methods are available on the next link. Do I have to check the method documentation every time? I'm running into this constantly while running tutorial exercises. It seems I'm stuck with a process of reference, infer, run, fail, fix, repeat to get code running, rather than knowing precisely what I'm working with while coding. This flies in the face of Ruby's promise of intuitiveness.
Say I am using a third-party library; once again I need to know what types are allowed for the parameters, otherwise I get a failure. I can look at the code, but there may or may not be any comments or declaration of what type the method is expecting. I understand you code based on what methods are available on an object, not its type. But then I have to be sure whatever I pass as a parameter has all the methods the library expects, so I still have to do type checking. Do I have to hope and pray everything is documented properly on an interface so I know whether I'm expected to give a string, a hash, a class, etc.?
If I look at the source of a method I can get a list of methods being called and infer the type expected, but I have to perform analysis.
Ruby and duck typing: design by contract impossible?
The discussions in the preceding Stack Overflow question don't really answer anything other than "there are processes you have to follow", and those processes don't seem to be standard: everyone has a different opinion on what process to follow, and the language has zero enforcement. Method validation? Test-driven design? Documented API? Strict method naming conventions? What's the standard, who dictates it, and what do I follow? Would these guidelines solve this concern: https://stackoverflow.com/questions/616037/ruby-coding-style-guidelines? Are there editors that help?
Conceptually I don't get the advantage either. You need to know what methods are needed for any method called, so regardless you are typing when you code anything. You just aren't informing the language or anyone else explicitly, unless you decide to document it. Then you are stuck doing all type checking at runtime instead of during coding. I've done PHP and Python programming and I don't understand it there either.
What am I missing or not understanding? Please help me understand this paradigm.
This is not a Ruby specific problem, it's the same for all dynamically typed languages.
Usually there are no guidelines for how to document this either (and most of the time not really possible). See for instance map in the ruby documentation
map { |item| block } → new_ary
map → Enumerator
What is item, block and new_ary here and how are they related? There's no way to tell unless you know the implementation or can infer it from the name of the function somehow. Specifying the type is also hard since new_ary depends on what block returns, which in turn depends on the type of item, which could be different for each element in the Array.
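For example, the very same method yields quite different result types depending on the block (or its absence):
[1, 2, 3].map { |item| item * 2 }    #=> [2, 4, 6]          (Array of Integers)
[1, 2, 3].map { |item| item.to_s }   #=> ["1", "2", "3"]    (Array of Strings)
[1, 2, 3].map                        #=> #<Enumerator: [1, 2, 3]:map>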
A lot of times you also stumble across documentation that says that an argument is of type Object, which again tells you nothing since everything is an Object.
OCaml has a solution for this: it supports structural typing, so a function that needs an object with a property foo that's a string will be inferred to have a type like < foo : string; .. > instead of a concrete type. But OCaml is still statically typed.
Worth noting is that this can be a problem in statically typed languages too. Scala has very generic methods on collections, which leads to type signatures like ++[B >: A, That](that: GenTraversableOnce[B])(implicit bf: CanBuildFrom[Array[T], B, That]): That for appending two collections.
So most of the time, you will just have to learn this by heart in dynamically typed languages, and perhaps help improve the documentation of libraries you are using.
And this is why I prefer static typing ;)
Edit One thing that might make sense is to do what Scala also does. It doesn't actually show you that type signature for ++ by default, instead it shows ++[B](that: GenTraversableOnce[B]): Array[B] which is not as generic, but probably covers most of the use cases. So for Ruby's map it could have a monomorphic type signature like Array<a> -> (a -> b) -> Array<b>. It's only correct for the cases where the list only contains values of one type and the block only returns elements of one other type, but it's much easier to understand and gives a good overview of what the function does.
Yes, you seem to misunderstand the concept. It's not a replacement for static type checking. It's just different. For example, if you convert objects to JSON (for rendering them to the client), you don't care about the actual type of the object, as long as it has a #to_json method. In Java, you'd have to create an IJsonable interface. In Ruby no overhead is needed.
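A small Ruby sketch of that idea (the Invoice class and render method are made up):
require 'json'

class Invoice
  def initialize(total)
    @total = total
  end

  # The only "contract" is responding to #to_json
  def to_json(*args)
    { total: @total }.to_json(*args)
  end
end

def render(payload)
  payload.to_json
end

puts render(Invoice.new(42))   # {"total":42}
puts render([1, 2, 3])         # [1,2,3]  (Array responds to #to_json too)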
As for knowing what to pass where and what returns what: memorize this or consult docs each time. We all do that.
Just the other day, I saw a Rails programmer with 6+ years of experience complain on Twitter that he can't memorize the order of parameters to alias_method: does the new name go first or last?
This flies in the face of Ruby's promise of intuitiveness.
Not really. Maybe it's just badly written library. In core ruby everything is quite intuitive, I dare say.
Statically typed languages with their powerful IDEs have a small advantage here, because they can show you documentation right here, very quickly. This is still accessing documentation, though. Only quicker.
Consider that the design choices of strongly typed languages (C++, Java, C#, et al.) enforce strict declarations of the types passed to methods and the types returned by methods. This is because these languages were designed to validate that arguments are correct (and since these languages are compiled, much of this work can be done at compile time). But some questions can only be answered at run time, and C++ for example has RTTI (Run-Time Type Information) to examine and enforce type guarantees. As the developer, you are guided by syntax, semantics and the compiler to produce code that follows these type constraints.
Ruby gives you the flexibility to take dynamic argument types and return dynamic types. This freedom enables you to write more generic code (read Stepanov on the STL and generic programming), and gives you a rich set of introspection methods (is_a?, instance_of?, respond_to?, kind_of?, et al.) which you can use dynamically. Ruby enables you to write generic methods, but you can also explicitly enforce design by contract and handle contract failures however you choose.
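A short sketch of enforcing such a contract explicitly (the method and argument names are illustrative):
require 'stringio'

def write_all(sink, lines)
  # Explicit design-by-contract check instead of a static type
  raise ArgumentError, 'sink must respond to #puts' unless sink.respond_to?(:puts)

  lines.each { |line| sink.puts(line) }
end

write_all($stdout, %w[alpha beta])      # any IO-like object works
write_all(StringIO.new, %w[alpha beta]) # including duck-typed stand-ins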
Yes, you will need to use care when chaining methods together, but learning Ruby is not just a few new keywords. Ruby supports multiple paradigms; you can write procedural, object-oriented, generic, and functional programs. The cycle you are in right now will improve quickly as you learn more about Ruby.
Perhaps your concern stems from a bias towards strongly typed languages (C++, Java, C#, et al.). Duck typing is a different approach; you think differently. Duck typing means that if an object looks like a duck and behaves like a duck, then it is a duck. Everything (almost) is an Object in Ruby, so everything is polymorphic.
Consider templates (C++ has them, C# has them, Java is getting them, C has macros). You build an algorithm, and then have the compiler generate instances for your chosen types. You aren't doing design by contract with generics, but when you recognize their power, you write less code, and produce more.
Some of your other concerns,
third party libraries (gems) are not as hard to use as you fear
Documented API? See Rdoc and http://www.ruby-doc.org/
Rdoc documentation is (usually) provided for libraries
coding guidelines - look at the source for a couple of simple gems for starters
naming conventions - snake case and camel case are both popular
Suggestion - approach an online tutorial with an open mind, do the tutorial (http://rubymonk.com/learning/books/ is good), and you will have more focused questions.

Why isn't DRY considered a good thing for type declarations?

It seems like people who would never dare cut and paste code have no problem specifying the type of something over and over and over. Why isn't it emphasized as a good practice that type information should be declared once and only once so as to cause as little ripple effect as possible throughout the source code if the type of something is modified? For example, using pseudocode that borrows from C# and D:
MyClass<MyGenericArg> foo = new MyClass<MyGenericArg>(ctorArg);
void fun(MyClass<MyGenericArg> arg) {
    gun(arg);
}
void gun(MyClass<MyGenericArg> arg) {
    // do stuff.
}
Vs.
var foo = new MyClass<MyGenericArg>(ctorArg);
void fun(T)(T arg) {
    gun(arg);
}
void gun(T)(T arg) {
    // do stuff.
}
It seems like the second one is a lot less brittle if you change the name of MyClass, or change the type of MyGenericArg, or otherwise decide to change the type of foo.
I don't think you're going to find a lot of disagreement with your argument that the latter example is "better" for the programmer. A lot of language design features are there because they're better for the compiler implementer!
See Scala for one reification of your idea.
Other languages (such as the ML family) take type inference much further, and create a whole style of programming where the type is enormously important, much more so than in the C-like languages. (See The Little MLer for a gentle introduction.)
It isn't considered a bad thing at all. In fact, C# maintainers are already moving a bit towards reducing the tiring boilerplate with the var keyword, where
MyContainer<MyType> cont = new MyContainer<MyType>();
is exactly equivalent to
var cont = new MyContainer<MyType>();
Although you will see many people argue against var usage, which kind of shows that many people are not familiar with strongly typed languages that have type inference; type inference gets mistaken for dynamic/soft typing.
Repetition may lead to more readable code, and sometimes may be required in the general case. I've always seen the focus of DRY being more about duplicating logic than repeating literal text. Technically, you can eliminate 'var' and 'void' from your bottom code as well. Not to mention you indicate scope with indentation, why repeat yourself with braces?
Repetition can also have practical benefits: parsing by a program is easier by keeping the 'void', for example.
(However, I still strongly agree with you on prefering "var name = new Type()" over "Type name = new Type()".)
It's a bad thing. This very topic was mentioned in Google's Go language Techtalk.
Albert Einstein said, "Everything should be made as simple as possible, but not one bit simpler."
Your complaint makes no sense in the case of a dynamically typed language, so you must intend this to refer to statically typed languages. In that case, your replacement example implicitly uses generics (aka template classes), which means that any time fun or gun is used, a new definition is generated based upon the type of the argument. That could result in dozens of extra methods, regardless of the intent of the programmer. In particular, you're throwing away the benefit of compiler-checked type safety for a runtime error.
If your goal was to simply pass through the argument without checking its type, then the correct type would be Object not T.
Type declarations are intended to make the programmer's life simpler, by catching errors at compile-time, instead of failing at runtime. If you have an overly complex type definition, then you probably don't understand your data. In your example, I would have suggested adding fun and gun to MyClass, instead of defining them separately. If fun and gun don't apply to all possible template types, then they should be defined in an explicit subclass, not as separate functions that take a templated class argument.
Generics exist as a way to wrap behavior around more specific objects. List, Queue, Stack, these are fine reasons for Generics, but at the end of the day, the only thing you should be doing with a bare Generic is creating an instance of it, and calling methods on it. If you really feel the need to do more than that with a Generic, then you probably need to embed your Generic class as an instance object in a wrapper class, one that defines the behaviors you need. You do this for the same reason that you embed primitives into a class: because by themselves, numbers and strings do not convey semantic information about their contents.
Example:
What semantic information does List<(int, int, int)> convey? Just that you're working with multiple triples of integers. On the other hand, List<Color>, where a Color holds 3 integers (red, blue, green) with bounded values (0-255), conveys the intent that you're working with multiple Colors, but provides no hint as to whether the List is ordered, allows duplicates, or carries any other information about the Colors. Finally, a Palette can add those semantics for you: a Palette has a name, contains multiple Colors, but no duplicates, and order isn't important.
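A hedged C# sketch of that wrapping idea (all names are illustrative, following the Color/Palette example above; Color as a C# 9 record gives value equality so the set deduplicates correctly):
using System.Collections.Generic;

public record Color(byte Red, byte Green, byte Blue);   // byte fields bound the values to 0-255

public class Palette
{
    private readonly HashSet<Color> colors = new HashSet<Color>();

    public string Name { get; }

    public Palette(string name) => Name = name;

    // No duplicates; order is not significant (semantics the bare generic cannot express)
    public bool Add(Color color) => colors.Add(color);

    public IReadOnlyCollection<Color> Colors => colors;
}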
This has gotten a bit far afield from the original question, but what it means to me is that DRY (Don't Repeat Yourself) means specifying information once, but that specification should be as precise as is necessary.

Return concrete or abstract datatypes?

I'm in the middle of reading Code Complete, and towards the end of the book, in the chapter about refactoring, the author lists a bunch of things you should do to improve the quality of your code while refactoring.
One of his points was to always return as specific types of data as possible, especially when returning collections, iterators etc. So, as I've understood it, instead of returning, say, Collection<String>, you should return HashSet<String>, if you use that data type inside the method.
This confuses me, because it sounds like he's encouraging people to break the rule of information hiding. Now, I understand this when talking about accessors, that's a clear cut case. But, when calculating and mangling data, and the level of abstraction of the method implies no direct data structure, I find it best to return as abstract a datatype as possible, as long as the data doesn't fall apart (I wouldn't return Object instead of Iterable<String>, for example).
So, my question is: is there a deeper philosophy behind Code Complete's advice of always returning as specific a data type as possible, and allow downcasting, instead of maintaining a need-to-know-basis, that I've just not understood?
I think it is simply wrong in most cases. It has to be:
be as lenient as possible, be as specific as needed
In my opinion, you should always return List rather than LinkedList or ArrayList, because the difference is more an implementation detail than a semantic one. The guys behind the Google Collections API for Java take this one step further: they return (and expect) iterators where that's enough. But they also recommend returning ImmutableList, -Set, -Map etc. where possible, to show the caller that he doesn't have to make a defensive copy.
Besides that, I think the performance of the different list implementations isn't the bottleneck for most applications.
Most of the time one should return an interface or perhaps an abstract type that represents the value being returned. If you are returning a list of X, then use List. This ultimately provides maximum flexibility if the need arises to change the underlying list type.
Maybe later you realise that you want to return a linked list or a read-only list etc. If you use a concrete type you're stuck, and it's a pain to change. Using the interface solves this problem.
@Gishu
If your API requires that clients cast straight away most of the time, your design is flawed. Why bother returning X if clients need to cast to Y?
Can't find any evidence to substantiate my claim but the idea/guideline seems to be:
Be as lenient as possible when accepting input. Choose a generalized type over a specialized type. This means clients can use your method with different specialized types. So an IEnumerable or an IList as an input parameter would mean that the method can run off an ArrayList or a ListItemCollection. It maximizes the chance that your method is useful.
Be as strict as possible when returning values. Prefer a specialized type if possible. This means clients do not have to second-guess or jump through hoops to process the return value. Also, specialized types have greater functionality. If you choose to return an IEnumerable, the number of things the caller can do with your return value drops drastically; e.g. to get the number of elements returned, the client must either enumerate the whole sequence or downcast (say, to an ArrayList) to use the Count property. But such downcasting defeats the purpose: it works today and won't tomorrow (if you change the type of the returned object). So for all practical purposes the client can't get a count of elements easily, leading him to write mundane boilerplate code (in multiple places or as a helper method).
The summary here is it depends on the context (exceptions to most rules). E.g. if the most probable use of your return value is that clients would use the returned list to search for some element, it makes sense to return a List Implementation (type) that supports some kind of search method. Make it as easy as possible for the client to consume the return value.
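A small C# sketch of that guideline (the class and method names are made up):
using System.Collections.Generic;
using System.Linq;

public static class NameFormatter
{
    // Lenient input: any sequence of strings will do
    public static List<string> Capitalize(IEnumerable<string> names)
    {
        // Specific output: callers get Count, indexing, Sort, etc. without casting
        return names.Select(n => char.ToUpper(n[0]) + n.Substring(1)).ToList();
    }
}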
I could see how, in some cases, having a more specific data type returned could be useful. For example knowing that the return value is a LinkedList rather than just List would allow you to do a delete from the list knowing that it will be efficient.
I think that, while designing interfaces, you should design a method to return as abstract a data type as possible, while still making it clear what the method returns.
Also, I would understand it this way:
return as abstract a data type as possible = return as specific a data type as needed,
i.e. when your method is supposed to return some collection data type, return Collection rather than Object.
Tell me if I'm wrong.
A specific return type is much more valuable because it:
reduces possible performance issues with discovering functionality with casting or reflection
increases code readability
does NOT in fact, expose more than is necessary.
The return type of a function is specifically chosen to cater to ALL of its callers. It is the calling function that should USE the return variable as abstractly as possible, since the calling function knows how the data will be used.
Is it only necessary to traverse the structure? is it necessary to sort the structure? transform it? clone it? These are questions only the caller can answer, and thus can use an abstracted type. The called function MUST provide for all of these cases.
If, in fact, the most specific use case you have right now is Iterable<String>, then that's fine. But more often than not your callers will eventually need more details, so start with a specific return type; it doesn't cost anything.

Are booleans as method arguments unacceptable? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
A colleague of mine states that booleans as method arguments are not acceptable. They shall be replaced by enumerations. At first I did not see any benefit, but he gave me an example.
What's easier to understand?
file.writeData( data, true );
Or
enum WriteMode {
    Append,
    Overwrite
};
file.writeData( data, Append );
Now I got it! ;-)
This is definitely an example where an enumeration as second parameter makes the code much more readable.
So, what's your opinion on this topic?
Booleans represent "yes/no" choices. If you want to represent a "yes/no", then use a boolean; it should be self-explanatory.
But if it's a choice between two options, neither of which is clearly yes or no, then an enum can sometimes be more readable.
Enums also allow for future modifications, where you now want a third choice (or more).
Use the one that best models your problem. In the example you give, the enum is a better choice. However, there would be other times when a boolean is better. Which makes more sense to you:
lock.setIsLocked(True);
or
enum LockState { Locked, Unlocked };
lock.setLockState(Locked);
In this case, I might choose the boolean option since I think it's quite clear and unambiguous, and I'm pretty sure my lock is not going to have more than two states. Still, the second choice is valid, but unnecessarily complicated, IMHO.
To me, neither using boolean nor enumeration is a good approach. Robert C. Martin captures this very clearly in his Clean Code Tip #12: Eliminate Boolean Arguments:
Boolean arguments loudly declare that the function does more than one thing. They are confusing and should be eliminated.
If a method does more than one thing, you should rather write two different methods, for example in your case: file.append(data) and file.overwrite(data).
Using an enumeration doesn't make things clearer. It doesn't change anything, it's still a flag argument.
Remember the question Adlai Stevenson posed to Ambassador Zorin at the U.N. during the Cuban missile crisis?
"You are in the courtroom of world
opinion right now, and you can answer
yes or no. You have denied that [the missiles]
exist, and I want to know whether I
have understood you correctly.... I am
prepared to wait for my answer until
hell freezes over, if that's your
decision."
If the flag you have in your method is of such a nature that you can pin it down to a binary decision, and that decision will never turn into a three-way or n-way decision, go for boolean. Indications: your flag is called isXXX.
Don't make it boolean in case of something that is a mode switch. There is always one more mode than you thought of when writing the method in the first place.
The one-more-mode dilemma has e.g. haunted Unix, where the possible permission modes a file or directory can have today result in weird double meanings of modes depending on file type, ownership etc.
There are two reasons I've run into this being a bad thing:
Because some people will write methods like:
ProcessBatch(true, false, false, true, false, false, true);
This is obviously bad because it's too easy to mix up parameters, and you have no idea by looking at it what you're specifying. Just one bool isn't too bad though.
Because controlling program flow by a simple yes/no branch might mean you have two entirely different functions that are wrapped up into one in an awkward way. For instance:
public void Write(bool toOptical);
Really, this should be two methods
public void WriteOptical();
public void WriteMagnetic();
because the code in these might be entirely different; they might have to do all sorts of different error handling and validation, or maybe even have to format the outgoing data differently. You can't tell that just by using Write() or even Write(Enum.Optical) (though of course you could have either of those methods just call internal methods WriteOptical/Mag if you want).
I guess it just depends. I wouldn't make too big of a deal about it except for #1.
I think you almost answered this yourself. The end aim is to make the code more readable, and in this case the enum did that. IMO it's always best to look at the end aim rather than blanket rules; maybe think of it more as a guideline, i.e. enums are often more readable in code than generic bools, ints, etc., but there will always be exceptions to the rule.
Enums are better but I wouldn't call boolean params as "unacceptable". Sometimes it's just easier to throw one little boolean in and move on (think private methods etc.)
Booleans may be OK in languages that have named parameters, like Python and Objective-C, since the name can explain what the parameter does:
file.writeData(data, overwrite=true)
or:
[file writeData:data overwrite:YES]
Enums have a definite benefit, but you shouldn't just go replacing all your booleans with enums. There are many places where true/false is actually the best way to represent what is going on.
However, using booleans as method arguments is a bit suspect, simply because you can't see what they are supposed to do without digging into the code; an enum lets you see at the call site what the true/false actually means.
[Edit for the current state in 2022]
In modern C#, or other languages that support this, the nicest way to do it is with named arguments:
var worker = new BackgroundWorker(workerReportsProgress: true);
If your language doesn't allow for named arguments, then you may find properties to be a reasonable solution as well
[Original Answer from 2008 left for posterity]
Properties (especially with C#3 object initializers) or keyword arguments (a la ruby or python) are a much better way to go where you'd otherwise use a boolean argument.
C# example:
var worker = new BackgroundWorker { WorkerReportsProgress = true };
Ruby example
validates_presence_of :name, :allow_nil => true
Python example
connect_to_database( persistent=True )
The only thing I can think of where a boolean method argument is the right thing to do is in java, where you don't have either properties or keyword arguments. This is one of the reasons I hate java :-(
I would not agree that it is a good rule. Obviously, an enum makes for more explicit or verbose code in some instances, but as a rule it seems way overreaching.
First let me take your example:
The programmer's responsibility (and ability) to write good code is not really jeopardized by having a boolean parameter. In your example the programmer could have written code just as expressive by writing:
dim append as boolean = true
file.writeData( data, append );
or I prefer more general
dim shouldAppend as boolean = true
file.writeData( data, shouldAppend );
Second:
The Enum example you gave is only "better" because you are passing a CONSTANT. Most likely, in most applications, at least some if not most of the time the parameters passed to functions are VARIABLES, in which case my second example (giving the variables good names) is much better, and the Enum would have given you little benefit.
While it is true that in many cases enums are more readable and more extensible than booleans, an absolute rule that "booleans are not acceptable" is daft. It is inflexible and counter-productive; it does not leave room for human judgement. Booleans are a fundamental built-in type in most languages because they're useful. Consider applying the rule to other built-in types: saying, for instance, "never use an int as a parameter" would just be crazy.
This rule is just a question of style, not of potential for bugs or runtime performance. A better rule would be "prefer enums to booleans for reasons of readability".
Look at the .Net framework. Booleans are used as parameters on quite a few methods. The .Net API is not perfect, but I don't think that the use of boolean as parameters is a big problem. The tooltip always gives you the name of the parameter, and you can build this kind of guidance too - fill in your XML comments on the method parameters, they will come up in the tooltip.
I should also add that there is a case when you should clearly refactor booleans to an enumeration - when you have two or more booleans on your class, or in your method params, and not all states are valid (e.g. it's not valid to have them both set true).
For instance, if your class has properties like
public bool IsFoo
public bool IsBar
And it's an error to have both of them true at the same time, what you've actually got is three valid states, better expressed as something like:
enum FooBarType { IsFoo, IsBar, IsNeither };
Some rules that your colleague might be better adhering to are:
Don't be dogmatic with your design.
Choose what fits most appropriately for the users of your code.
Don't try to bash star-shaped pegs into every hole just because you like the shape this month!
A Boolean would only be acceptable if you do not intend to extend the functionality of the framework. The Enum is preferred because you can extend the enum and not break previous implementations of the function call.
The other advantage of the Enum is that it is easier to read.
If the method asks a question such as:
KeepWritingData (DataAvailable());
where
bool DataAvailable()
{
    return true; //data is ALWAYS available!
}

void KeepWritingData (bool keepGoing)
{
    if (keepGoing)
    {
        ...
    }
}
boolean method arguments seem to make absolutely perfect sense.
It depends on the method. If the method does something that is very obviously a true/false thing then it is fine, e.g. below [though I am not saying this is the best design for this method, it's just an example of where the usage is obvious].
CommentService.SetApprovalStatus(commentId, false);
However in most cases, such as the example you mention, it is better to use an enumeration. There are many examples in the .NET Framework itself where this convention is not followed, but that is because they introduced this design guideline fairly late on in the cycle.
It does make things a bit more explicit, but does start to massively extend the complexity of your interfaces - in a sheer boolean choice such as appending/overwriting it seems like overkill. If you need to add a further option (which I can't think of in this case), you can always perform a refactor (depending on the language)
Enums can certainly make the code more readable. There are still a few things to watch out for (in .net at least)
Because the underlying storage of an enum is an int, the default value will be zero, so you should make sure that 0 is a sensible default. (E.g. structs have all fields set to zero when created, so there's no way to specify a default other than 0. If you don't have a 0 value, you can't even test the enum without casting to int, which would be bad style.)
If your enums are private to your code (never exposed publicly) then you can stop reading here.
If your enums are published in any way to external code and/or are saved outside of the program, consider numbering them explicitly. The compiler automatically numbers them from 0, but if you rearrange your enums without giving them values you can end up with defects.
I can legally write
WriteMode illegalButWorks = (WriteMode)1000000;
file.Write( data, illegalButWorks );
To combat this, any code that consumes an enum that you can't be certain of (e.g. public API) needs to check if the enum is valid. You do this via
if (!Enum.IsDefined(typeof(WriteMode), userValue))
    throw new ArgumentException("userValue");
The only caveat of Enum.IsDefined is that it uses reflection and is slower. It also suffers from a versioning issue. If you need to check the enum value often, you would be better off with something like the following:
public static bool CheckWriteModeEnumValue(WriteMode writeMode)
{
    switch( writeMode )
    {
        case WriteMode.Append:
        case WriteMode.OverWrite:
            break;
        default:
            Debug.Assert(false, "The WriteMode '" + writeMode + "' is not valid.");
            return false;
    }
    return true;
}
The versioning issue is that old code may only know how to handle the 2 enums you have. If you add a third value, Enum.IsDefined will be true, but the old code can't necessarily handle it. Whoops.
There's even more fun you can do with [Flags] enums, and the validation code for that is slightly different.
I'll also note that for portability, you should call ToString() on the enum, and use Enum.Parse() when reading them back in. Both ToString() and Enum.Parse() can handle [Flags] enums as well, so there's no reason not to use them. Mind you, it's yet another pitfall, because now you can't even change the name of an enum member without possibly breaking code.
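For example, assuming the WriteMode enum from the question, the round trip looks like this:
string saved = WriteMode.Append.ToString();                      // "Append"
var restored = (WriteMode)Enum.Parse(typeof(WriteMode), saved);  // WriteMode.Append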
So, sometimes you need to weigh in all of the above when you ask yourself "can I get away with just a bool?"
IMHO it seems like an enum would be the obvious choice for any situation where more than two options are possible. But there definitely ARE situations where a boolean is all you need. In that case I would say that using an enum where a bool would work would be an example of using 7 words when 4 will do.
Booleans make sense when you have an obvious toggle which can only be one of two things (i.e. the state of a light bulb, on or off). Other than that, it's good to write it in such a way that it's obvious what you're passing - e.g. disk writes - unbuffered, line-buffered, or synchronous - should be passed as such. Even if you don't want to allow synchronous writes now (and so you're limited to two options), it's worth considering making them more verbose for the purposes of knowing what they do at first glance.
That said, you can also use False and True (boolean 0 and 1) and then if you need more values later, expand the function out to support user-defined values (say, 2 and 3), and your old 0/1 values will port over nicely, so your code ought not to break.
Sometimes it's just simpler to model different behaviour with overloads. To continue from your example would be:
file.appendData( data );
file.overwriteData( data );
This approach degrades if you have multiple parameters, each allowing a fixed set of options. For example, a method that opens a file might have several permutations of file mode (open/create), file access (read/write), and sharing mode (none/read/write). The total number of configurations is equal to the Cartesian product of the individual options. Naturally, in such cases multiple overloads are not appropriate.
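For comparison, the .NET file API handles exactly this case with one enum parameter per independent option rather than an overload per combination:
using System.IO;

// FileMode, FileAccess and FileShare are each independent choices
FileStream fs = File.Open("data.bin", FileMode.OpenOrCreate,
                          FileAccess.ReadWrite, FileShare.None);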
Enums can, in some cases make code more readable, although validating the exact enum value in some languages (C# for example) can be difficult.
Often a boolean parameter is appended to the list of parameters as a new overload. One example in .NET is:
Enum.Parse(str);
Enum.Parse(str, true); // ignore case
The latter overload became available in a later version of the .NET framework than the first.
If you know that there will only ever be two choices, a boolean might be fine. Enums are extensible in a way that won't break old code, although old libraries might not support new enum values so versioning cannot be completely disregarded.
EDIT
In newer versions of C# it's possible to use named arguments which, IMO, can make calling code clearer in the same way that enums can. Using the same example as above:
Enum.Parse(str, ignoreCase: true);
While I do agree that enums are a good way to go, in methods where you have two options (and just two options) you can also get readability without an enum, e.g.:
public void writeData(Stream data, boolean is_overwrite)
Love the enums, but booleans are useful too.
This is a late entry on an old post, and it's so far down the page that nobody will ever read it, but since nobody has said it already....
An inline comment goes a long way to solving the unexpected-bool problem. The original example is particularly heinous: imagine trying to name the variable in the function declaration! It'd be something like
void writeData( DataObject data, bool use_append_mode );
But, for the sake of example, let's say that's the declaration. Then, for an otherwise unexplained boolean argument, I put the variable name in an inline comment. Compare
file.writeData( data, true );
with
file.writeData( data, true /* use_append_mode */);
It really depends on the exact nature of the argument. If it is not a yes/no or true/false, then an enum makes it more readable. But with an enum you need to check the argument or have acceptable default behaviour, since undefined values of the underlying type can be passed.
The use of enums instead of booleans in your example does help make the method call more readable. However, this is a substitute for my favorite wish item in C#, named arguments in method calls. This syntax:
var v = CallMethod(pData = data, pFileMode = WriteMode, pIsDirty = true);
would be perfectly readable, and you could then do what a programmer should do, which is choose the most appropriate type for each parameter in the method without regard to how it looks in the IDE.
C# 3.0 allows named arguments in constructors. I don't know why they can't do this with methods as well.
Booleans can hold only the values true/false, so it is not clear what they represent. An enum can have meaningful names, e.g. OVERWRITE, APPEND, etc. So enums are better.

Resources