Why didn't C++0x deprecate implicit conversions for user-defined types, a.k.a. objects? Is there any project which actually uses this (mis)feature? Whenever I see a single-argument constructor in code I review or modify, I treat it as a bug and make it explicit. So far that has worked well and nobody has complained.
Thank you.
EDIT: Let me quote Alex Stepanov, the creator of STL:
Open your C++ book and read about the explicit keyword! Also petition your neighborhood C++ standard committee member to finally abolish implicit conversions. There is a common misconception, often propagated by people who should know better, that STL depends on implicit conversions. Not so!
Reference: A. Stepanov. C++ notes
EDIT AGAIN: No, no debate please. I am just curious whether anyone uses implicit conversions in their work. I have never seen any project which would allow implicit conversion for objects. I thought hard and couldn't come up with any hypothetical scenario where implicit conversion wouldn't become a minefield. I mean C++ single-argument conversions, not float->double or similar conversions inherited from C.
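To illustrate the kind of thing I mean, here is a minimal sketch (the Buffer class is invented for illustration):

#include <cstddef>

// A hypothetical class: a single-argument constructor is a candidate
// for implicit conversion unless it is marked explicit.
struct Buffer {
    explicit Buffer(std::size_t size) : size_(size) {}
    std::size_t size_;
};

void send(const Buffer&) { /* ... */ }

int main() {
    send(Buffer(512));  // fine: the conversion is spelled out
    // send(512);       // rejected thanks to 'explicit'; without it, this
                        // compiles and silently builds a temporary Buffer
}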
The obvious answer is that code written and working in C++03 is supposed to continue working with C++0x compilers.
For one thing, it would be a hugely breaking change to remove implicit conversion from the language - even if it were made optional and off-by-default with an implicit keyword.
I've done a search of comp.std.c++ and it doesn't seem to have been discussed at all in that group; though there have been some questions on the subject, no-one seems to have suggested going so far as removing it. I would certainly not go so far either: it's a feature I happily use on occasion, and I don't subscribe to making every possibly-converting constructor explicit either, unless it causes real bugs.
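For example, a conversion that is cheap, lossless and unsurprising can make call sites read naturally; std::string's converting constructor from const char* is the classic case (the greet function is just for illustration):

#include <iostream>
#include <string>

void greet(const std::string& name) {
    std::cout << "Hello, " << name << '\n';
}

int main() {
    greet("world");  // const char* converts to std::string implicitly; an
                     // explicit constructor would force greet(std::string("world"))
}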
I am learning Ruby and I'm having a major conceptual problem concerning typing. Allow me to detail why I don't understand this paradigm.
Say I am method chaining for concise code, as you do in Ruby. I have to know precisely what the return type of each method call in the chain is, otherwise I can't know what methods are available on the next link. Do I have to check the method documentation every time?? I'm running into this constantly while working through tutorial exercises. It seems I'm stuck with a process of reference, infer, run, fail, fix, repeat to get code running, rather than knowing precisely what I'm working with while coding. This flies in the face of Ruby's promise of intuitiveness.
Say I am using a third-party library; once again I need to know what types the parameters allow, otherwise I get a failure. I can look at the code, but there may or may not be any comments or declaration of what type a method is expecting. I understand that you code based on what methods are available on an object, not on its type. But then I have to be sure that whatever I pass as a parameter has all the methods the library expects, so I still have to do type checking. Do I have to hope and pray everything is documented properly on an interface so I know whether I'm expected to give a string, a hash, a class, etc.?
If I look at the source of a method I can get a list of methods being called and infer the type expected, but I have to perform analysis.
Ruby and duck typing: design by contract impossible?
The discussions in the preceding Stack Overflow question don't really answer anything other than "there are processes you have to follow", and those processes don't seem to be standard: everyone has a different opinion on what process to follow, and the language has zero enforcement. Method validation? Test-driven design? Documented API? Strict method naming conventions? What's the standard and who dictates it? What do I follow? Would these guidelines solve this concern: https://stackoverflow.com/questions/616037/ruby-coding-style-guidelines? Are there editors that help?
Conceptually I don't get the advantage either. You need to know what methods are expected by any method you call, so you are dealing with types whenever you code regardless. You just aren't informing the language, or anyone else, explicitly, unless you decide to document it. And then you are stuck doing all type checking at runtime instead of while coding. I've done PHP and Python programming and I don't understand it there either.
What am I missing or not understanding? Please help me understand this paradigm.
This is not a Ruby specific problem, it's the same for all dynamically typed languages.
Usually there are no guidelines for how to document this either (and most of the time it's not really possible). See for instance map in the Ruby documentation:
map { |item| block } → new_ary
map → Enumerator
What are item, block and new_ary here, and how are they related? There's no way to tell unless you know the implementation or can infer it from the name of the function somehow. Specifying the type is also hard, since new_ary depends on what block returns, which in turn depends on the type of item, which could be different for each element in the Array.
A lot of times you also stumble across documentation that says that an argument is of type Object, which again tells you nothing, since everything is an Object.
OCaml has a solution for this: it supports structural typing, so a function that needs an object with a method foo returning a string will be inferred to take a < foo : string; .. > rather than a concrete class type. But OCaml is still statically typed.
Worth noting is that this can be a problem in statically typed languages too. Scala has very generic methods on collections, which leads to type signatures like ++[B >: A, That](that: GenTraversableOnce[B])(implicit bf: CanBuildFrom[Array[T], B, That]): That for appending two collections.
So most of the time, you will just have to learn this by heart in dynamically typed languages, and perhaps help improve the documentation of libraries you are using.
And this is why I prefer static typing ;)
Edit: One thing that might make sense is to do what Scala also does. It doesn't actually show you that type signature for ++ by default; instead it shows ++[B](that: GenTraversableOnce[B]): Array[B], which is not as generic, but probably covers most of the use cases. So Ruby's map could have a monomorphic type signature like Array<a> -> (a -> b) -> Array<b>. It's only correct for the cases where the list contains values of just one type and the block only returns elements of one other type, but it's much easier to understand and gives a good overview of what the function does.
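To see what such a monomorphic signature buys you, here is roughly what it looks like written down in a statically typed language (a C++ sketch of my own, not anything from Ruby's or Scala's libraries):

#include <cstddef>
#include <vector>

// A statically typed 'map': the signature alone tells you that you pass
// a container of A and a function from A to B, and get a container of B back.
template <typename A, typename B>
std::vector<B> map(const std::vector<A>& xs, B (*f)(A)) {
    std::vector<B> ys;
    ys.reserve(xs.size());
    for (std::size_t i = 0; i < xs.size(); ++i)
        ys.push_back(f(xs[i]));
    return ys;
}

int truncate(double d) { return static_cast<int>(d); }

int main() {
    std::vector<double> ds(2, 1.5);
    std::vector<int> is = map(ds, truncate);  // Array<a> -> (a -> b) -> Array<b>
}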
Yes, you seem to misunderstand the concept. It's not a replacement for static type checking; it's just different. For example, if you convert objects to JSON (for rendering them to the client), you don't care about the actual type of the object, as long as it has a #to_json method. In Java, you'd have to create an IJsonable interface. In Ruby no such overhead is needed.
As for knowing what to pass where and what returns what: memorize this or consult docs each time. We all do that.
Just the other day, I saw a Rails programmer with 6+ years of experience complain on Twitter that he can't memorize the order of parameters to alias_method: does the new name go first or last?
This flies in the face of Ruby's promise of intuitiveness.
Not really. Maybe it's just a badly written library. In core Ruby everything is quite intuitive, I dare say.
Statically typed languages with their powerful IDEs have a small advantage here, because they can show you documentation right here, very quickly. This is still accessing documentation, though. Only quicker.
Consider that the design choices of strongly typed languages (C++, Java, C#, et al.) enforce strict declarations of the types passed to methods and the types returned by methods. This is because these languages were designed to validate that arguments are correct (and since these languages are compiled, this work can be done at compile time). But some questions can only be answered at run time, and C++ for example has RTTI (Run-Time Type Information) to examine and enforce type guarantees. But as the developer, you are guided by syntax, semantics and the compiler to produce code that follows these type constraints.
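A minimal sketch of that run-time side in C++ (the Shape and Circle classes are invented for illustration):

#include <iostream>
#include <typeinfo>

struct Shape { virtual ~Shape() {} };
struct Circle : Shape {};

void inspect(const Shape& s) {
    // typeid and dynamic_cast are the RTTI facilities: they answer
    // "what is this object really?" at run time, not at compile time.
    std::cout << typeid(s).name() << '\n';
    if (dynamic_cast<const Circle*>(&s))
        std::cout << "it's a Circle\n";
}

int main() {
    Circle c;
    inspect(c);
}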
Ruby gives you the flexibility to take dynamic argument types and return dynamic types. This freedom enables you to write more generic code (read Stepanov on the STL and generic programming), and gives you a rich set of introspection methods (is_a?, instance_of?, respond_to?, kind_of?, et al.) which you can use dynamically. Ruby enables you to write generic methods, but you can also explicitly enforce design by contract and handle contract failures however you choose.
Yes, you will need to use care when chaining methods together, but learning Ruby is not just a matter of a few new keywords. Ruby supports multiple paradigms; you can write procedural, object-oriented, generic, and functional programs. The cycle you are in right now will improve quickly as you learn more Ruby.
Perhaps your concern stems from a bias towards strongly typed languages (C++, Java, C#, et al.). Duck typing is a different approach, and you think differently. Duck typing means that if an object looks like a duck and behaves like a duck, then it is a duck. Everything (almost) is an Object in Ruby, so everything is polymorphic.
Consider templates (C++ has them, C# and Java have generics, C has macros). You build an algorithm, and then have the compiler generate instances for your chosen types. You aren't doing design by contract with generics, but when you recognize their power, you write less code and produce more.
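A sketch of that idea in C++ (the types are invented): the template never names an interface; it only requires that whatever you pass has the operations the algorithm uses, checked at compile time, which is duck typing's compile-time cousin:

#include <iostream>
#include <string>

// Compile-time duck typing: any T with a quack() member works, with
// no common base class or interface declaration required.
template <typename T>
void make_noise(const T& t) {
    std::cout << t.quack() << '\n';
}

struct Duck  { std::string quack() const { return "quack"; } };
struct Robot { std::string quack() const { return "QUACK.EXE"; } };

int main() {
    make_noise(Duck());
    make_noise(Robot());
}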
Some of your other concerns,
third party libraries (gems) are not as hard to use as you fear
Documented API? See RDoc and http://www.ruby-doc.org/
RDoc documentation is (usually) provided for libraries
coding guidelines - look at the source for a couple of simple gems for starters
naming conventions - snake_case for methods and variables, CamelCase for class names
Suggestion - approach an online tutorial with an open mind, do the tutorial (http://rubymonk.com/learning/books/ is good), and you will have more focused questions.
I'm looking at a piece of very old VB6, and have come across usages such as
Form5!ProgressBar.Max = time_max
and
Form5!ProgressBar.Value = current_time
Perusing the answer to this question here and reading this page here, I deduce that these things mean the same as
Form5.ProgressBar.Max = time_max
Form5.ProgressBar.Value = current_time
but it isn't at all clear that this is the case. Can anyone confirm or deny this, and/or point me at an explanation in words of one syllable?
Yes, Form5!ProgressBar is almost exactly equivalent to Form5.ProgressBar
As far as I can remember there is one difference: the behaviour if the Form5 object does not have a ProgressBar member (i.e. the form does not have a control called ProgressBar). The dot-notation is checked at compile time but the exclamation-mark notation is checked at run time.
Form5.ProgressBar will not compile.
Form5!ProgressBar will compile but will give an error at runtime.
IMHO the dot notation is preferred in VB6, especially when accessing controls. The exclamation mark is only supported for backward-compatibility with very old versions of VB.
The default member of a Form is (indirectly) the Controls collection.
The bang (!) syntax is used for collection access in VB, and in many cases the compiler makes use of it to early bind things that otherwise would be accessed more slowly through late binding.
Far from deprecated, it is often preferable.
However in this case since the default member of Form objects is [_Default] As Object containing a reference to a Controls As Object instance, there is no particular advantage or disadvantage to this syntax over:
Form5("ProgressBar").Value
I agree, however, that in this case it is better to access the control more directly as a member of the Form, as in:
Form5.ProgressBar.Value
Knowing the difference between these is a matter of actually knowing VB. It isn't simply syntactic though, the two "paths" do different things that get to the same result.
Hopefully this answer offers an explanation rather than merely invoking voodoo.
Coming from the discussions about the use of vendor specific attributes in another question I asked myself, "what rules should we tell people for using attributes that are not listed in the standard"?
The two attributes that are defined are [[noreturn]] and [[carries_dependency]]. The standard leaves open how compilers should react to unknown attributes; thus, by the standard they may stop with an error message. This is not what e.g. GCC does: it emits a warning and continues. This is probably the behavior to expect from the most common compilers. For this reason I would have liked to read a "should" in the standard, but we don't have it.
The paper N2553 brings up flexible attributes. It lists further attributes used by GCC (unused, weak) and MSVC (dllimport). For OpenMP, the widely supported parallelization framework, scoped attributes are suggested, e.g. omp::for(clause, clause) and omp::parallel(clause, clause). So it is very likely that we will see some vendor-specific attributes very soon after compilers support the syntax at all.
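For reference, the existing vendor spellings mentioned above look like this in code (these are GCC's and MSVC's documented syntaxes; the identifiers are invented):

// GCC:
int debug_counter __attribute__((unused));          // suppress "unused variable"
void legacy_hook() __attribute__((weak));           // weak symbol, overridable
// MSVC:
__declspec(dllimport) void imported_function(void); // imported from a DLL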
Therefore, when we now go "out in the world" and tell people about C++11, what should the advice be about using attributes?
Only use [[noreturn]] and [[carries_dependency]]
Use your compiler's old syntax instead, e.g. __attribute__((noreturn)), and define a macro when you port the code (the current situation)
Freely use those attributes your favorite compiler supports, knowing this code might not be portable to another standard-conforming compiler; because the standard allows a compiler to stop with an error, you have to consider that this will happen. This sounds a bit like advocating writing non-portable code.
Or, my guess, expect the most-used compilers to warn about unknown attributes, so you can use vendor-specific attributes, keeping in mind that in rare cases you may get problems.
Note the slight difference in the last two bullet items. While both say "use those attributes you need", item 3's message is "do not care about other compilers", while item 4 implicitly rephrases the standard's "implementation-defined behavior" as "the compiler should emit a diagnostic message".
What could be the suggestion for an upcoming Best Practice here?
The best practice — the only one that is reasonably portable in practical terms, never mind ambiguity in the Standard — is to use macros. It will be many years before we can forget about compilers that don't support attributes.
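A sketch of the usual macro approach (the macro name is mine, and the __cplusplus test is optimistic; real projects tend to use per-compiler version checks instead):

// Portable 'noreturn': prefer the standard attribute where available,
// fall back to the vendor spelling, else expand to nothing.
#if __cplusplus >= 201103L
#  define MY_NORETURN [[noreturn]]
#elif defined(__GNUC__)
#  define MY_NORETURN __attribute__((noreturn))
#elif defined(_MSC_VER)
#  define MY_NORETURN __declspec(noreturn)
#else
#  define MY_NORETURN
#endif

#include <stdexcept>

MY_NORETURN void fail(const char* msg) {
    throw std::runtime_error(msg);
}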
The number of compilers and the number of custom __keywords__ defined by those compilers will always be increasing, and it makes sense for the language to define a way to contain the damage. It doesn't need to revolutionize the way people write unportable code, or make unportable code portable (although standard attributes do that). There is a benefit simply to giving caffeine-addled compiler backend engineers a sandbox for when they want to extend the grammar.
It is a bit alarming, though, that no attribute tokens are reserved to the implementation, or to the language besides the ones currently standard. So there will be trouble when they decide to standardize more of them.
This is my attempt to start a collection of GCC special features which one does not usually encounter. This comes after #jlebedev mentioned the "Effective C++" option for g++ in another question:
-Weffc++
This option warns about C++ code which breaks some of the programming guidelines given in the books "Effective C++" and "More Effective C++" by Scott Meyers. For example, a warning will be given if a class which uses dynamically allocated memory does not define a copy constructor and an assignment operator. Note that the standard library header files do not follow these guidelines, so you may wish to use this option as an occasional test for possible problems in your own code rather than compiling with it all the time.
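For a taste of what it flags, a class like the following (invented for illustration) draws exactly the dynamic-memory warning described above:

// Compile with: g++ -Weffc++ -c widget.cpp
// -Weffc++ complains that this class owns dynamically allocated memory
// but defines neither a copy constructor nor a copy assignment operator,
// so the compiler-generated ones would copy the raw pointer and lead to
// a double delete.
class Widget {
    int* data_;
public:
    Widget() : data_(new int(0)) {}
    ~Widget() { delete data_; }
};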
What other cool features are there?
From time to time I go through the current GCC/G++ command line parameter documentation and update my compiler script to be even more paranoid about any kind of coding error. Here it is if you are interested.
Unfortunately I didn't document them, so I forgot most of them, but -pedantic, -Wall, -Wextra, -Weffc++, -Wshadow, -Wnon-virtual-dtor, -Wold-style-cast, -Woverloaded-virtual, and a few others are always useful, warning me of potentially dangerous situations. I like this aspect of customizability: it forces me to write clean, correct code. It has served me well.
However they are not without headaches, especially -Weffc++. Just a few examples:
It requires me to provide a custom copy constructor and assignment operator if there are pointer members in my class, which is useless since I use garbage collection. So I need to declare empty private versions of them (see the sketch after this list).
My NonInstantiable class (which prevents instantiation of any subclass) had to implement a dummy private friend class so G++ didn't whine about "only private constructors and no friends"
My Final<T> class (which prevents subclassing of T if T derives from it virtually) had to wrap T in a private wrapper class to declare it as a friend, since the standard flat out forbids befriending a template parameter.
G++ recognizes functions that never return a value and always throw an exception instead, and whines about them not being declared with the noreturn attribute. Hiding the throw behind always-true conditions didn't work; G++ was too clever and recognized them. It took me a while to come up with declaring a variable volatile and comparing it against its own value so I could throw that exception unmolested.
Floating point comparison warnings. Oh god. I have to work around them by writing x <= y and x >= y instead of x == y where it is acceptable.
Shadowing virtuals. Okay, this is clearly useful to prevent stupid shadowing/overloading problems in subclasses but still annoying.
No previous declaration for functions. This kinda lost its importance as soon as I started copy-pasting the function declaration right above the definition.
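For the first item in the list, the C++03-era suppression idiom looks roughly like this (the Node class is invented):

// Silencing -Weffc++ for a class whose pointer members are managed
// elsewhere (e.g. by a garbage collector): declare the copy operations
// private and leave them undefined, so accidental copies fail at
// compile (or link) time.
class Node {
    Node* next_;  // pointer member that triggers the warning
public:
    Node() : next_(0) {}
private:
    Node(const Node&);             // intentionally not implemented
    Node& operator=(const Node&);  // intentionally not implemented
};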
It might sound a bit masochist, but as a whole, these are very cool features that increased my understanding of C++ and general programming.
What other cool features does G++ have? Well, it's free and open, it's one of the most widely used and modern compilers, it consistently outperforms its competitors, it can eat almost anything people throw at it, it's available on virtually every platform, customizable to hell, continuously improved, and has a wide community - what's not to like?
A function that returns a value (for example an int) will return an indeterminate value (formally, the behavior is undefined) if a code path is followed that ends the function without a return statement. Not paying attention to this can result in exceptions and out-of-range memory writes or reads.
For example, if a function is used to obtain the index into an array, and the faulty code path is taken (the one that doesn't end with a return statement), then a garbage value will be returned which might be too big as an index into the array, resulting in all sorts of headaches as you wrongly mess up the stack or heap.
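A minimal example of such a faulty path; g++ warns about it with -Wall (specifically -Wreturn-type, "control reaches end of non-void function"):

#include <cstdio>

// BUG, on purpose: if n is negative, control falls off the end without
// a return, and the value seen by the caller is garbage.
int index_for(int n) {
    if (n >= 0)
        return n % 10;
}   // missing return on the n < 0 path

int main() {
    int arr[10] = {0};
    std::printf("%d\n", arr[index_for(-3)]);  // may index far out of bounds
}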
Often there's no need to pay any attention to implicit arguments in Scala, but sometimes it's very helpful to understand how the compiler is automatically providing them. Unfortunately, this understanding seems to be hard to obtain!
Is there a general method to discover how an implicit parameter has been provided, in a given piece of code?
Ideally, one day IDE integration would provide this information in some way, but I expect for now I'll have to dig deeper. Is there some way to ask the compiler to explain exactly which implicit definition it chooses at any given point? Can this be deciphered indirectly from other compiler output?
As an example, I'd like to know how to work out on my own where the implicit bf: CanBuildFrom[Repr, B, That] argument to TraversableLike.map comes from, without reading questions like this one on Stack Overflow!
Add the option -Xprint:typer to the scalac command line. This prints the program tree just after the typer compiler phase, with inferred types and implicit arguments filled in. It works best with a short, self-contained example. This is a really huge step towards self-reliance in Scala!
As mentioned by Randall, IntelliJ can show the in-scope implicit views, with the selected one highlighted, via CTRL-ALT-SHIFT-I. Wait a month or two and implicit arguments are likely to have similar support.
Ideally, one day IDE integration would provide this information in some way, ...
That day is today with JetBrains' IDEA. If you run the latest EAP of IDEA version 9 (9.0.3 EA #95.289) with a recent nightly release of the Scala plug-in, this capability is present. Every value expression may be selected, and a command issued that displays a pop-up showing all applicable implicit conversions, with the one the compiler will select highlighted.
And since there are apparently a few out there who don't yet know it, there is a free and open-source Community Edition of IDEA and it does support the Scala plug-in.