Related
I have accidentally discovered the Ruby idiom of ||= (...),
as in:
def app_logger
  @app_logger ||= (
    logfile = File.open(::Rails.root.join(LOG_FILE), 'a')
    logfile.sync = true
    AppLogger.new(logfile)
  )
end
I tried to use {} instead of (), but it didn't work. I thought {} was meant to enclose a block.
Is this a known idiom? Is it a good style?
I haven't found much documentation on this kind of use of parentheses. Any pointers would be helpful.
Please note this post is about the use of () this way, not the use of ||=. There are many posts about this latter idiom already.
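A minimal illustration of why the {} attempt fails (hypothetical variable names): after an assignment operator, Ruby parses { as the start of a hash literal, not a block, while parentheses simply group expressions and the group evaluates to its last expression:

x ||= { a = 1; a + 1 }   # syntax error: { here starts a hash literal
x ||= (a = 1; a + 1)     # works: the group evaluates to its last expression, 2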
A lot of things can be done in Ruby; many of them shouldn't be, and this is one of them.
Using brackets to group code when there are already other facilities is potentially confusing and almost certainly contrary to many coding style guides. If I saw this in code I was managing, I'd immediately fix it.
What's best is to use the begin/end markers to make it completely clear what's going on:
def app_logger
  @app_logger ||= begin
    logfile = File.open(::Rails.root.join(LOG_FILE), 'a')
    logfile.sync = true
    AppLogger.new(logfile)
  end
end
A lot of things in Ruby evaluate down to a single value, and the contents of (...) are apparently one of those things.
Your other example of foo ||= (a=10; a+10) being "nicer" is also rather controversial. It does two assignments and an addition, but only conditionally. It would almost always be better written long-form with begin/end to make it clear that a+10 is the result.
From a style perspective, hiding the "important" part, the a+10, at the end of a line is bad: it can be overlooked. Having it as the last line of a begin/end block makes it abundantly clear. This is also why tacking if statements onto the end of long lines is bad: it hides the fact that the line is only conditionally executed.
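For illustration, the long-form version of that example would be something like this, with the result on its own line:

foo ||= begin
  a = 10
  a + 10   # the last expression is the result, where it can't be overlooked
end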
Concern for brevity is always trumped by concern for readability. Saving a couple of bytes on disk is not going to help you one bit when, because someone misread your code, they introduce a crippling bug.
That someone could be you in the future when you get caught up in your own cleverness. It's happened to all of us at some point.
In addition to @tadman's answer, I'd counter that writing the method in a more Ruby-like way maintains the readability and keeps it nice and tight:
def app_logger
  return @app_logger if @app_logger
  logfile = File.open(::Rails.root.join(LOG_FILE), 'a')
  logfile.sync = true
  @app_logger = AppLogger.new(logfile)
end
For my intent and rationale in writing it this way, see my comment to this answer in response to @fotanus's comment.
Our brains become conditioned to see patterns as we learn to program and learn new languages. Whether the patterns are in hexcode emitted by a debugger or C, Perl, Python, Java or Ruby, we still see patterns. When we're used to seeing them we tend to want to change code to resemble those familiar constructs. That's why Perl and C programmers tend to write Java, Python and Ruby like C and Perl at first. The "beauty is in the eye of the beholder", but that same beauty is a weed when it's out of place, just as wild-flowers are out of place in formal gardens or the middle of a fairway. We should write in the vernacular of the programming language, because that is the way of speaking expected by those who live in that land.
Something to remember is, though we, personally, might be a code-studly monster who can reduce code from multiple lines down to a single character, what will keep me in a job is the ability to write code other people can understand, without necessarily having to be of the same caliber. Code crunching invariably reaches a point of diminishing returns, well before it has been reduced to its minimal size, especially when that code is running in a production environment and is being maintained by junior programmers and there is a need to extend or debug it, and the clock is ticking. Minutes = dollars where I work, and the dollars have really big multipliers, which causes the walls to leak managers like cockroaches when a bug-bomb goes off. (Oh... did I call managers cockroaches? Whatever.)
Being called the morning after an event and being thanked for writing code that made it easy for them to debug/fix or amend/extend is a whole lot better than being called at 2:45AM and asked to get online to help. I value my sleep and that of my coding-partners and push for maintainable code as our #1 priority.
That's my $0.02.
Parentheses used this way are really ugly, in my opinion. Why not delegate the logic behind creating the logger to its own method?
def app_logger
  @app_logger ||= instantiate_app_logger
end

def instantiate_app_logger
  logfile = File.open(::Rails.root.join(LOG_FILE), 'a')
  logfile.sync = true
  AppLogger.new(logfile)
end
But I am afraid it still breaks OOP a bit, as your original code does too. Why can't the AppLogger class open the file itself, given a path, and turn syncing on? That would be much cleaner.
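A sketch of that idea (hypothetical; it assumes we are free to change AppLogger's constructor to take a path instead of an already-open file):

class AppLogger
  def initialize(path)
    # the logger owns its file handling now
    @logfile = File.open(path, 'a')
    @logfile.sync = true
  end
end

def app_logger
  @app_logger ||= AppLogger.new(::Rails.root.join(LOG_FILE))
end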
It seems like people who would never dare cut and paste code have no problem specifying the type of something over and over and over. Why isn't it emphasized as a good practice that type information should be declared once and only once so as to cause as little ripple effect as possible throughout the source code if the type of something is modified? For example, using pseudocode that borrows from C# and D:
MyClass<MyGenericArg> foo = new MyClass<MyGenericArg>(ctorArg);
void fun(MyClass<MyGenericArg> arg) {
gun(arg);
}
void gun(MyClass<MyGenericArg> arg) {
// do stuff.
}
Vs.
var foo = new MyClass<MyGenericArg>(ctorArg);
void fun(T)(T arg) {
gun(arg);
}
void gun(T)(T arg) {
// do stuff.
}
It seems like the second one is a lot less brittle if you change the name of MyClass, or change the type of MyGenericArg, or otherwise decide to change the type of foo.
I don't think you're going to find a lot of disagreement with your argument that the latter example is "better" for the programmer. A lot of language design features are there because they're better for the compiler implementer!
See Scala for one reification of your idea.
Other languages (such as the ML family) take type inference much further, and create a whole style of programming where the type is enormously important, much more so than in the C-like languages. (See The Little MLer for a gentle introduction.)
It isn't considered a bad thing at all. In fact, C# maintainers are already moving a bit towards reducing the tiring boilerplate with the var keyword, where
MyContainer<MyType> cont = new MyContainer<MyType>();
is exactly equivalent to
var cont = new MyContainer<MyType>();
Although you will see many people argue against var usage, which rather suggests that many people are not familiar with strongly typed languages that have type inference; type inference is often mistaken for dynamic/soft typing.
Repetition may lead to more readable code, and sometimes may be required in the general case. I've always seen the focus of DRY being more about duplicating logic than repeating literal text. Technically, you can eliminate 'var' and 'void' from your bottom code as well. Not to mention you indicate scope with indentation, why repeat yourself with braces?
Repetition can also have practical benefits: parsing by a program is easier by keeping the 'void', for example.
(However, I still strongly agree with you on preferring "var name = new Type()" over "Type name = new Type()".)
It's a bad thing. This very topic was mentioned in Google's Go language Techtalk.
Albert Einstein said, "Everything should be made as simple as possible, but not one bit simpler."
Your complaint makes no sense in the case of a dynamically typed language, so you must intend this to refer to statically typed languages. In that case, your replacement example implicitly uses generics (a.k.a. template classes), which means that any time fun or gun is used, a new definition is generated based upon the type of the argument. That could result in dozens of extra methods, regardless of the intent of the programmer. In particular, you're throwing away the benefit of compiler-checked type-safety in exchange for a runtime error.
If your goal was to simply pass through the argument without checking its type, then the correct type would be Object not T.
Type declarations are intended to make the programmer's life simpler, by catching errors at compile-time, instead of failing at runtime. If you have an overly complex type definition, then you probably don't understand your data. In your example, I would have suggested adding fun and gun to MyClass, instead of defining them separately. If fun and gun don't apply to all possible template types, then they should be defined in an explicit subclass, not as separate functions that take a templated class argument.
Generics exist as a way to wrap behavior around more specific objects. List, Queue, Stack, these are fine reasons for Generics, but at the end of the day, the only thing you should be doing with a bare Generic is creating an instance of it, and calling methods on it. If you really feel the need to do more than that with a Generic, then you probably need to embed your Generic class as an instance object in a wrapper class, one that defines the behaviors you need. You do this for the same reason that you embed primitives into a class: because by themselves, numbers and strings do not convey semantic information about their contents.
Example:
What semantic information does List<Tuple<int, int, int>> convey? Just that you're working with multiple triples of integers. On the other hand, List<Color>, where a Color has 3 integers (red, blue, green) with bounded values (0-255), conveys the intent that you're working with multiple Colors, but provides no hint as to whether the List is ordered, allows duplicates, or any other information about the Colors. Finally, a Palette can add those semantics for you: a Palette has a name, contains multiple Colors, but no duplicates, and order isn't important.
This has gotten a bit far afield from the original question, but what it means to me is that DRY (Don't Repeat Yourself) means specifying information once, but that specification should be as precise as is necessary.
It seems to me like Ruby has a lot of syntactic flexibility, and many things may be written in several ways.
Are there any language features/syntactic sugar/coding conventions that you avoid as a Ruby programmer for clarity? I am asking about things that you choose not to use on purpose, not stuff you still have to learn.
If your answer is, "I use everything!," do you ever comment code that would be otherwise obvious if the reader knows the relevant Ruby syntax?
[I'm particularly interested in Ruby in a RoR context, but everything is welcome.]
The whole range of "$" globals (see Pickaxe2 pp333-336) mostly inherited from Perl, are pretty ghastly, although I have on occasion found myself using $: instead of $LOAD_PATH.
I generally refrain from going overboard with monkey patching because it could lead to some maintainability and readability issues. It's a great feature if used correctly, but it's easy to get carried away.
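A trivial example of the kind of thing that is easy to get carried away with (hypothetical method name):

class String
  # every String in the process now has this method, which can surprise
  # readers and collide with other libraries that also reopen String
  def shout
    upcase + '!'
  end
end

'hello'.shout # => "HELLO!"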
The for ... in ... loop. It compiles directly to obj.each (and throws a strange error message accordingly) and is totally unnecessary. I don't even see where it improves readability; if you've been around Ruby for more than a week, #each should feel natural.
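A quick comparison, for what it's worth; note that for does not even introduce a new scope, so its loop variable survives the loop:

for n in [1, 2, 3]   # compiles to a call to #each
  puts n
end
puts n               # => 3; the loop variable leaked out

[1, 2, 3].each { |x| puts x }  # x stays local to the block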
This might be obvious but I generally avoid using eval if there's any alternative.
First off: I'll break many of these rules if it's for a short one-off script, or a one-liner on the command line, or in irb. But most of my time is spent in medium sized or larger scripts or applications. So:
Avoid:
using the class << self code block for class methods. It's a cute trick, but no better than def self.foo, and less readable (especially after the first page); see the sketch after this list.
for i in collection: Use collection.each instead.
proc {...}: usually lambda {...} is better.
class variables (e.g. @@foo). They are problematic and can usually be replaced with class-level instance variables without much effort.
Anything that causes a warning, and preferably anything that causes a warning when run via the more strict ruby -w. This is especially important if you are writing a gem to be used by others.
'else' on a 'begin ... rescue ... end' block. Personal preference: it's too much of an edge case, and too few people even know it exists or how it works for it to be worth it.
ObjectSpace and GC. You probably don't need to go there. You definitely don't want to go there.
=begin and =end multi-line comments. Personal preference for line-wise comments. These just annoy the hell out of me.
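As promised above, a sketch of the class-method point (hypothetical class names): the two definitions below are equivalent, but the second never lets you lose track of the receiver:

class Foo
  class << self
    def bar
      'class method'
    end
  end
end

class Baz
  def self.bar
    'class method'
  end
end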
Use it, but sparingly or as a last resort (and comment it appropriately):
eval (or class_eval, etc.) when passing in a string. There are some metaprogramming tricks you just can't do without passing in a string, and occasionally the string version performs dramatically better (and sometimes that matters). Otherwise, I prefer to pass in blocks of actual Ruby code for my metaprogramming; eval can be avoided entirely for many metaprogramming tasks. A small illustration follows this list.
Adding or redefining methods on classes which were not created by me and may be used by code outside my control; a.k.a. monkey-patching. This rule is mostly for larger codebases and libraries; I'll gladly and quickly make an exception for small one-off scripts. I'll also make an exception for fixing buggy third-party libraries (although you may shoot yourself in the foot when you upgrade!). Selector namespaces (or something similar) would go a long way to make ruby nice in this regard. That said, sometimes it is worth the trouble. ;-)
Global variables (except for classes). I'll even pass in $stdout as a parameter to my objects or methods, rather than use them directly. It makes re-use of the code much easier and safer. Sometimes you can't avoid it (e.g. $0, $:, $$ and other environmental variables, but even then you can limit your usage).
Speaking of which, I prefer to avoid the perlish symbol globals entirely, but if they need to be used more than a little bit, then require "English".
break, redo, next, retry: often they make a block, loop, or method much more elegant than it otherwise could be. Usually they just make you scratch your head for a few minutes when you haven't seen that code for a while.
__END__ for a data-block. Excellent for a small one-file script. Unhelpful for a multi-file app.
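The promised illustration of the eval point (hypothetical method names): class_eval with a block covers most needs, while the string form evaluates real method source built at runtime, which is the kind of trick the block form can't express the same way:

String.class_eval do
  def first_char
    self[0]
  end
end

name = 'second_char'
String.class_eval <<-RUBY
  def #{name}   # the method name is decided at runtime
    self[1]
  end
RUBY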
Don't use it, but don't really avoid it either:
throw/catch
continuations
Things I use often, that others might not care for, or I don't see often:
'and' and 'or' keywords: they have different precedence from && and ||, so you need to be careful with them. I find their different precedence to be very useful (see the example after this list).
regex black magic (provided I've got some examples in unit tests for it)
HEREDOC strings
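An example of the precedence difference, which is the classic gotcha:

x = true && false   # && binds tighter than =, so x is false
y = true and false  # parsed as (y = true) and false, so y is true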
One thing I really loathe is "improper" use of {} and do ... end for blocks. I can't seem to find exactly where I learned the practice myself, but it is generally accepted to use {} for single-line blocks and do ... end for multiline blocks.
Proper use:
[1, 2, 3, 4].map {|n| n * n }.inject(1) { |n,product| n * product }
or
[1, 2, 3, 4].inject do |n, product|
  n = n * n
  product = n * product
end
Improper use:
[1,2,3,4].map do |n| n * n end.inject(1) do |n,product| n * product end
or
[1, 2, 3, 4].inject { |n, product|
  n = n * n
  product = n * product
}
All of which, of course, will execute giving 576
Avoid chaining too many method calls together. It is very common in Ruby to chain methods together.
user.friends.each {|friend| friend.invite_to_party}
This may seem OK, but it breaks the Law of Demeter:
More formally, the Law of Demeter for functions requires that a method M of an object O may only invoke the methods of the following kinds of objects:
O itself
M's parameters
any objects created/instantiated within M
O's direct component objects
The example above isn't perfect and a better solution would be something like this:
user.invite_friends_to_party
The example's problem isn't Ruby's fault, but it is very easy to produce code that breaks the Law of Demeter and becomes unreadable.
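A sketch of where that method might live (hypothetical User class with a friends association):

class User
  def invite_friends_to_party
    # the traversal now lives next to the data it walks
    friends.each { |friend| friend.invite_to_party }
  end
end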
In short, avoid features that decrease code readability. It is very important that the code you produce is easy to read.
begin nil+234 rescue '' end
The syntax above is valid, but you should never use it.
Can anyone suggest the best way to write good code that is understandable without a single line of comments?
I once had a professor when I was in college tell me that any good code should never need any comments.
Her approach was a combination of very precise logic split out into small functions with very descriptive method/property/variable names. The majority of what she presented was, in fact, extremely readable with no comments. I try to do the same with everything I write...
Read Code Complete, 2nd Edition cover to cover. Perhaps twice.
To give some specifics:
Making code readable
Eliminating code repetition
Doing design/architecture before you write code
I like to 'humanise' code, so instead of:
if (starColour.red > 200 && starColour.blue > 200 && starColour.green > 200){
doSomething();
}
I'll do this:
bool starIsBright;
starIsBright = (starColour.red > 200 && starColour.blue > 200 && starColour.green > 200);
if(starIsBright){
doSomething();
}
In some cases, yes, but in many cases, no. The "yes" part is already answered by others: keep it simple, write it nicely, give things readable names, etc. The "no" part applies when the problem you solve in code is not a code problem at all, but rather a domain-specific or business-logic problem. I've got no problem reading lousy code even if it doesn't have comments. It's annoying, but doable. But it's practically impossible to read some code without understanding why it is the way it is and what it is trying to solve. So things like:
if (starColour.red > 200 && starColour.blue > 200 && starColour.green > 200){
doSomething();
}
look nice, but could be quite meaningless in the context of what the program is actually doing. I'd rather have it like this:
// we do this according to the requirement #xxxx blah-blah..
if (starColour.red > 200 && starColour.blue > 200 && starColour.green > 200){
doSomething();
}
Well written code might eliminate the need for comments to explain what you're doing, but you'll still want comments to explain the why.
If you really want to, then you would need to be very detailed in your variable and method names.
But in my opinion, there is no good way to do this. Comments serve a serious purpose in coding, even if you are the only one coding you still sometimes need to be reminded what part of the code you're looking at.
Yes, you can write code that doesn't need comments to describe what it does, but that may not be enough.
Just because a function is very clear in explaining what it does, does not, by itself, tell you why it is doing what it does.
As in everything, moderation is a good idea. Write code that is explanatory, and write comments that explain why it is there or what assumptions are being made.
I think that the concept of Fluent Interfaces is really a good example of this.
var bob = DB.GetCustomers().FromCountry("USA").WithName("Bob")
Clean Code by Robert C. Martin contains everything you need to write clean, understandable code.
Use descriptive variable names and descriptive method names. Use whitespace.
Make your code read like normal conversation.
Contrast the use of Matchers in Junit:
assertThat(x, is(3));
assertThat(x, is(not(4)));
assertThat(responseString, either(containsString("color")).or(containsString("colour")));
assertThat(myList, hasItem("3"));
with the traditional style of assertEquals:
assertEquals(3, x);
When I look at the assertEquals statement, it is not clear which parameter is "expected" and which is "actual".
When I look at assertThat(x, is(3)) I can read that in English as "Assert that x is 3" which is very clear to me.
Another key to writing self-documenting code is to wrap any bit of logic that is not clear in a method call with a clear name.
if( (x < 3 || x > 17) && (y < 8 || y > 15) )
becomes
if( xAndYAreValid( x, y ) ) // or similar...
I'm not sure writing code that is so expressive that you don't need comments is necessarily a great goal. Seems to me like another form of overoptimization. If I were on your team, I'd be pleased to see clear, concise code with just enough comments.
In most cases, yes, you can write code that is clear enough that comments become unnecessary noise.
The biggest problem with comments is there is no way to check their accuracy. I tend to agree with Uncle Bob Martin in chapter 4 of his book, Clean Code:
The proper use of comments is to compensate for our failure to express ourselves in code. Note that I used the word failure. I meant it. Comments are always failures. We must have them because we cannot always figure out how to express ourselves without them, but their use is not a cause for celebration.
So when you find yourself in a position where you need to write a comment, think it through and see whether there isn't some way to turn the tables and express yourself in code. Every time you express yourself in code, you should pat yourself on the back. Every time you write a comment, you should grimace and feel the failure of your ability of expression.
Most comments are either needless redundancy, outright fallacy or a crutch used to explain poorly written code. I say most because there are certain scenarios where the lack of expressiveness lies with the language rather than the programmer.
For instance, the copyright and license information typically found at the beginning of a source file. As far as I'm aware, no construct exists for this in any of the popular languages. Since a simple one- or two-line comment suffices, it's unlikely that such a construct will be added.
The original need for most comments has been replaced over time by better technology or practices. Keeping a change journal or commenting out code has been supplanted by source control systems; explanatory comments in long functions can be mitigated by simply writing shorter functions; and so on.
You can usually turn your comment into a function name, something like:
if (starColourIsGreaterThanThreshold()) {
    doSomething();
}
....
private boolean starColourIsGreaterThanThreshold() {
    return starColour.red > THRESHOLD &&
           starColour.blue > THRESHOLD &&
           starColour.green > THRESHOLD;
}
I think comments should express the why, perhaps the what, but as much as possible the code should define the how (the behavior).
Someone should be able to read the code and understand what it does (the how) from the code. What may not be obvious is why you would want such behavior and what this behavior contributes to the overall requirements.
The need to comment should give you pause, though. Maybe how you are doing it is too complicated and the need to write a comment shows that.
There is a third alternative to documenting code - logging. A method that is well peppered with logging statements can do a lot to explain the why, can touch on the what and may give you a more useful artifact than well named methods and variables regarding the behavior.
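For instance, a hypothetical Ruby sketch; every name here is made up, but the log lines carry the why alongside the how:

require 'logger'

def archive_stale_orders(orders, logger = Logger.new($stdout))
  logger.info 'archiving orders idle for 90+ days, per retention policy'
  orders.each do |order|
    if order.locked?
      logger.warn "skipping order #{order.id}: locked by an open dispute"
      next
    end
    order.archive!
  end
end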
If you want to code entirely without comments and still have your code be followable, then you'll have to write a larger number of shorter methods. Methods will have to have descriptive names. Variables will also have to have descriptive names. One common approach is to give variables the names of nouns and to give methods the names of verb phrases. For example:
account.updateBalance();
child.givePacifier();
int count = question.getAnswerCount();
Use enums liberally. With an enum, you can replace most booleans and integral constants. For example:
public void dumpStackPretty(boolean allThreads) {
....
}
public void someMethod() {
dumpStackPretty(true);
}
vs
public enum WhichThreads { All, NonDaemon, None; }
public void dumpStackPretty(WhichThreads whichThreads) {
....
}
public void someMethod() {
dumpStackPretty(WhichThreads.All);
}
Descriptive names are your obvious first bet.
Secondly make sure each method does one thing and only one thing. If you have a public method that needs to do many things, split it up into several private methods and call those from the public method, in a way that makes the logic obvious.
Some time ago I had to create a method that calculated the correlation of two time series.
To calculate the correlation you also need the mean and standard deviation, so I had two private methods (well, actually, in this case they were public, since they were useful for other purposes, but assume they were private) for calculating a) the mean and b) the standard deviation.
This sort of splitting of functionality into the smallest parts that make sense is probably the most important thing for making code readable.
How do you decide where to break up methods? My rule: if the name is obvious, e.g. getAddressFromPage, the method is the right size. If you have several contenders, you are probably trying to do too much; if you can't think of a name that makes sense, your method may not "do" enough, although the latter is much less likely.
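A Ruby sketch of that decomposition (hypothetical code, using population statistics for brevity):

def mean(series)
  series.sum.to_f / series.size
end

def standard_deviation(series)
  m = mean(series)
  Math.sqrt(series.sum { |x| (x - m)**2 } / series.size)
end

def correlation(xs, ys)
  mx = mean(xs)
  my = mean(ys)
  # average product of the deviations from each mean
  covariance = xs.zip(ys).sum { |x, y| (x - mx) * (y - my) } / xs.size
  covariance / (standard_deviation(xs) * standard_deviation(ys))
end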
I don't really think comments are a good idea in most cases. Comments don't get checked by the compiler, so they often become misleading or wrong as the code changes over time. Instead, I prefer self-documenting, concise methods that don't need comments. It can be done, and I have been doing it this way for years.
Writing code without comments takes practice and discipline, but I find that the discipline pays off as the code evolves.
It may not be comments, but to help someone better understand what is going on you may need some diagrams explaining how the program should work; if a person knows the big picture, it is easier to understand the code.
But, if you are doing something complex then you may need some comments, for example, in a very math intensive program.
The other place I find comments useful and important, is to ensure that someone doesn't replace code with something that looks like it should work, but won't. In that case I leave the bad code in, and comment it out, with an explanation as to why it shouldn't be used.
So, it is possible to write code without comments, but only for a limited range of applications, unless you can record somewhere why a decision was made without calling that record a comment.
For example, a random generator can be written many ways. If you pick a particular implementation, it may be necessary to explain why you picked that particular generator: the period may be sufficient for current requirements, but the requirements may later change and your generator may no longer be adequate.
I believe it's possible, if you consider the fact that not everybody likes the same style. So in order to minimize comments, knowing your "readers" is the most important thing.
In "information systems" kind-of software, try using declarative sentence, try to approximate the code line to a line in english, and avoid "mathematical programming" (with the i,j and k for index, and the one-liners-to-do-a-lot) at all costs.
I think code can be self-documenting to a large degree, and I think it's crucial, but reading even well-written code can be like looking at cells of the human body with a microscope. It sometimes takes comments to really explain the big picture of how pieces of the system fit together, especially if it solves a really complex and difficult problem.
Think about special data structures. If all that computer scientists had ever published about data structures were well-written code, few would really understand the relative benefit of one data structure over another -- because Big-O runtime of any given operation is sometimes just not obvious from reading the code. That's where the math and amortized analysis presented in articles come in.
I've read some of the recent language vs. language questions with interest... Perl vs. Python, Python vs. Java, Can one language be better than another?
One thing I've noticed is that a lot of us have very superficial reasons for disliking languages. We notice these things at first glance and they turn us off. We shun what are probably perfectly good languages as a result of features that we'd probably learn to love or ignore in 2 seconds if we bothered.
Well, I'm as guilty as the next guy, if not more. Here goes:
Ruby: All the Ruby example code I see uses the puts command, and that's a sort of childish Yiddish anatomical term. So as a result, I can't take Ruby code seriously even though I should.
Python: The first time I saw it, I smirked at the whole significant whitespace thing. I avoided it for the next several years. Now I hardly use anything else.
Java: I don't like identifiersThatLookLikeThis. I'm not sure why exactly.
Lisp: I have trouble with all the parentheses. Things of different importance and purpose (function declarations, variable assignments, etc.) are not syntactically differentiated and I'm too lazy to learn what's what.
Fortran: uppercase everything hurts my eyes. I know modern code doesn't have to be written like that, but most example code is...
Visual Basic: it bugs me that Dim is used to declare variables, since I remember the good ol' days of GW-BASIC when it was only used to dimension arrays.
What languages did look right to me at first glance? Perl, C, QBasic, JavaScript, assembly language, BASH shell, FORTH.
Okay, now that I've aired my dirty laundry... I want to hear yours. What are your language hangups? What superficial features bother you? How have you gotten over them?
I hate Hate HATE "End Function" and "End IF" and "If... Then" parts of VB. I would much rather see a curly bracket instead.
PHP's function name inconsistencies.
// common parameters back-to-front
in_array(needle, haystack);
strpos(haystack, needle);
// _ to separate words, or not?
filesize();
file_exists();
// super globals prefix?
$GLOBALS;
$_POST;
I never really liked the keywords spelled backwards in some scripting shells
if-then-fi is bad enough, but case-in-esac is just getting silly
I just thought of another... I hate the mostly-meaningless URLs used in XML to define namespaces, e.g. xmlns="http://purl.org/rss/1.0/"
Pascal's Begin and End. Too verbose, not subject to bracket matching, and worse, there isn't a Begin for every End, e.g.:
Type foo = Record
// ...
end;
Although I'm mainly a PHP developer, I dislike languages that don't let me do enough things inline. E.g.:
$x = returnsArray();
$x[1];
instead of
returnsArray()[1];
or
// named compare, since PHP will not let you redeclare the built-in sort()
function compare($a, $b) {
    return $a < $b;
}
usort($array, 'compare');
instead of
usort($array, function($a, $b) { return $a < $b; });
I like object-oriented style. So it bugs me in Python to see len(str) to get the length of a string, or splitting strings like split(str, "|") in another language. That is fine in C; it doesn't have objects. But Python, D, etc. do have objects and use obj.method() other places. (I still think Python is a great language.)
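Ruby, for comparison, keeps both of those as methods on the object, which is the style I'm after:

"hello world".length   # => 11
"a|b|c".split("|")     # => ["a", "b", "c"]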
Inconsistency is another big one for me. I do not like inconsistent naming in the same library: length(), size(), getLength(), getlength(), toUTFindex() (why not toUtfIndex?), Constant, CONSTANT, etc.
The long names in .NET bother me sometimes. Can't they shorten DataGridViewCellContextMenuStripNeededEventArgs somehow? What about ListViewVirtualItemsSelectionRangeChangedEventArgs?
And I hate deep directory trees. If a library/project has a 5 level deep directory tree, I'm going to have trouble with it.
C and C++'s syntax is a bit quirky. They reuse operators for different things. You're probably so used to it that you don't think about it (nor do I), but consider how many meanings parentheses have:
int main() // function declaration / definition
printf("hello") // function call
(int)x // type cast
2*(7+8) // override precedence
int (*)(int) // function pointer
int x(3) // initializer
if (condition) // special part of syntax of if, while, for, switch
And if in C++ you saw
foo<bar>(baz(),baaz)
you couldn't know the meaning without the definition of foo and bar.
the < and > might be a template instantiation, or might be less-than and greater-than (unusual but legal)
the () might be a function call, or might be just surrounding the comma operator (i.e. perform baz() for its side-effects, then return baaz).
The silly thing is that other languages have copied some of these characteristics!
Java, and its checked exceptions. I left Java for a while, dwelling in the .NET world, then recently came back.
It feels like, sometimes, my throws clause is more voluminous than my method content.
There's nothing in the world I hate more than PHP.
Variables with $: that's one extra odd character for every variable.
Members are accessed with -> for no apparent reason: one extra character for every member access.
No namespaces.
Strings are concatenated with a period (.).
A freakshow of a language, really.
All the []s and #s in Objective C. Their use is so different from the underlying C's native syntax that the first time I saw them it gave the impression that all the object-orientation had been clumsily bolted on as an afterthought.
I abhor the boiler plate verbosity of Java.
writing getters and setters for properties
checked exception handling and all the verbiage that implies
long lists of imports
Those, in connection with the Java convention of using veryLongVariableNames, sometimes have me thinking I'm back in the 80's, writing IDENTIFICATION DIVISION. at the top of my programs.
Hint: If you can automate the generation of part of your code in your IDE, that's a good hint that you're producing boilerplate code. With automated tools, it's not a problem to write, but it's a hindrance every time someone has to read that code - which is more often.
While I think it goes a bit overboard on type bureaucracy, Scala has successfully addressed some of these concerns.
Coding Style inconsistencies in team projects.
I'm working on a large team project where some contributors have used 4 spaces instead of the tab character.
Working with their code can be very annoying - I like to keep my code clean and with a consistent style.
It's bad enough when you use different standards for different languages, but in a web project with HTML, CSS, Javascript, PHP and MySQL, that's 5 languages and 5 different styles, multiplied by the number of people working on the project.
I'd love to re-format my co-workers code when I need to fix something, but then the repository would think I changed every line of their code.
It irritates me sometimes how people expect there to be one language for all jobs. Depending on the task you are doing, each language has its advantages and disadvantages. I like the C-based syntax languages because it's what I'm most used to and I like the flexibility they tend to bestow on the developer. Of course, with great power comes great responsibility, and having the power to write 150 line LINQ statements doesn't mean you should.
I love the inline XML in the latest version of VB.NET although I don't like working with VB mainly because I find the IDE less helpful than the IDE for C#.
If Microsoft had to invent yet another C++-like language in C# why didn't they correct Java's mistake and implement support for RAII?
Case sensitivity.
What kinda hangover do you need to think that differentiating two identifiers solely by caSE is a great idea?
I hate semi-colons. I find they add a lot of noise and you rarely need to put two statements on a line. I prefer the style of Python and other languages... end of line is end of a statement.
Any language that can't fully decide if Arrays/Loop/string character indexes are zero based or one based.
I personally prefer zero based, but any language that mixes the two, or lets you "configure" which is used can drive you bonkers. (Apache Velocity - I'm looking in your direction!)
snip from the VTL reference (default is 1, but you can set it to 0):
# Default starting value of the loop
# counter variable reference.
directive.foreach.counter.initial.value = 1
(try merging 2 projects that used different counter schemes - ugh!)
In no particular order...
OCaml
Tuple definitions use * to separate items rather than ,. So ("Juliet", 23, true) has the type (string * int * bool).
For being such an awesome language, the documentation has this haunting comment on threads: "The threads library is implemented by time-sharing on a single processor. It will not take advantage of multi-processor machines. Using this library will therefore never make programs run faster." JoCaml doesn't fix this problem.
I've heard the Jane Street guys were working to add a concurrent GC and multi-core threads to OCaml, but I don't know how successful they've been. I can't imagine a language without multi-core threads and GC surviving very long.
No easy way to explore modules in the toplevel. Sure, you can write module q = List;; and the toplevel will happily print out the module definition, but that just seems hacky.
C#
Lousy type inference. Beyond the most trivial expressions, I have to give types to generic functions.
All the LINQ code I ever read uses method syntax, x.Where(item => ...).OrderBy(item => ...). No one ever uses expression syntax, from item in x where ... orderby ... select. Between you and me, I think expression syntax is silly, if for no other reason than that it looks "foreign" against the backdrop of all other C# and VB.NET code.
LINQ
Every other language uses the industry-standard names Map, Fold/Reduce/Inject, and Filter. LINQ has to be different and uses Select, Aggregate, and Where.
Functional Programming
Monads are mystifying. Having seen the Parser, Maybe, State, and List monads, I can understand perfectly how the code works; however, as a general design pattern, I can't seem to look at problems and say "hey, I bet a monad would fit perfectly here".
Ruby
GRRRRAAAAAAAH!!!!! I mean... seriously.
VB
Module Hangups
    Dim _juliet As String = "Too Wordy!"

    Public Property Juliet() As String
        Get
            Return _juliet
        End Get
        Set(ByVal value As String)
            _juliet = value
        End Set
    End Property
End Module
And setter declarations are the bane of my existence. Alright, so I change the data type of my property -- now I need to change the data type in my setter too? Why doesn't VB borrow from C# and simply incorporate an implicit variable called value?
.NET Framework
I personally like Java casing convention: classes are PascalCase, methods and properties are camelCase.
In C/C++, it annoys me how there are different ways of writing the same code.
e.g.
if (condition)
{
    callSomeConditionalMethod();
}
callSomeOtherMethod();
vs.
if (condition)
    callSomeConditionalMethod();
callSomeOtherMethod();
equate to the same thing, but different people have different styles. I wish the original standard had been stricter about making a decision on this, so we wouldn't have this ambiguity. It leads to arguments and disagreements in code reviews!
I found Perl's use of "defined" and "undefined" values to be so useful that I have trouble using scripting languages without it.
Perl:
($lastname, $firstname, $rest) = split(' ', $fullname);
This statement performs well no matter how many words are in $fullname. Try it in Python, and it explodes if $fullname doesn't contain exactly three words.
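For what it's worth, Ruby's parallel assignment degrades as gracefully as the Perl version: leftover variables just come back as nil:

lastname, firstname, rest = "Doe John".split(' ', 3)
lastname   # => "Doe"
firstname  # => "John"
rest       # => nil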
SQL: they say you should not use cursors, and when you do, you really understand why...
it's so heavy going!
DECLARE mycurse CURSOR LOCAL FAST_FORWARD READ_ONLY
FOR
SELECT field1, field2, fieldN FROM atable
OPEN mycurse
FETCH NEXT FROM mycurse INTO @Var1, @Var2, @VarN
WHILE @@fetch_status = 0
BEGIN
    -- do something really clever...
    FETCH NEXT FROM mycurse INTO @Var1, @Var2, @VarN
END
CLOSE mycurse
DEALLOCATE mycurse
Although I program primarily in Python, it irks me endlessly that lambda bodies must be expressions.
I'm still wrapping my brain around JavaScript, and as a whole it's mostly acceptable. But why is it so hard to create a namespace? In TCL namespaces are just ugly, but in JavaScript it's actually a rigmarole AND completely unreadable.
In SQL, how come everything is just one huge freakin' SELECT statement?
In Ruby, I very strongly dislike how methods do not require self. to be called on the current instance, but property assignments do (otherwise they clash with locals); i.e.:
def foo()
  123
end

def foo=(x)
end

def bar()
  x = foo()  # okay, same as self.foo()
  x = foo    # still calls the method, but only because no assignment to foo has been parsed yet
  foo = 123  # not okay: assigns a local variable; calling the setter requires self.foo = 123
end
To my mind, it's very inconsistent. I'd prefer either to always require self., or to have a sigil for locals.
Java's packages. I find them complex, more so because I am not a corporation.
I vastly prefer namespaces. I'll get over it, of course - I'm playing with the Android SDK, and Eclipse removes a lot of the pain. I've never had a machine that could run it interactively before, and now I do I'm very impressed.
Prolog's if-then-else syntax.
x -> y ; z
The problem is that ";" is the "or" operator, so the above looks like "x implies y or z".
Java
Generics (Java's version of templates) are limited. I cannot call methods of the type parameter and I cannot create instances of it. Generics are used by containers, but for that I could just as well use containers of Object.
No multiple inheritance. If a use of multiple inheritance does not lead to the diamond problem, it should be allowed. It should be possible to write a default implementation of interface methods. An example of the problem: the interface MouseListener has 5 methods, one for each event, and if I want to handle just one of them, I have to implement the other 4 as empty methods.
It does not allow you to choose to manually manage the memory of some objects.
The Java API uses complex combinations of classes to do simple tasks. For example, if I want to read from a file, I have to use many classes (FileReader, FileInputStream).
Python
Indentation is part of the syntax. I would prefer the word "end" to indicate the end of a block, and then the word "pass" would not be needed.
In classes, the word "self" should not be needed as an argument of methods.
C++
Headers are the worst problem. I have to list the functions in a header file and implement them in a cpp file, and a header cannot hide the dependencies of a class: if a class A uses class B privately as a field, then including the header of A pulls in the header of B too.
Strings and arrays come from C and do not carry a length field. It is difficult to control whether std::string and std::vector will use the stack or the heap, and I have to use pointers with them if I want to assign, pass, or return one without copying, because the "=" operator copies the entire structure.
I cannot control the constructor and destructor. It is difficult to create an array of objects without a default constructor, or to choose which constructor to use with if and switch statements.
In most languages, file access. VB.NET is the only language so far where file access makes any sense to me. I do not understand why if I want to check if a file exists, I should use File.exists("") or something similar instead of creating a file object (actually FileInfo in VB.NET) and asking if it exists. And then if I want to open it, I ask it to open: (assuming a FileInfo object called fi) fi.OpenRead, for example. Returns a stream. Nice. Exactly what I wanted. If I want to move a file, fi.MoveTo. I can also do fi.CopyTo. What is this nonsense about not making files full-fledged objects in most languages? Also, if I want to iterate through the files in a directory, I can just create the directory object and call .GetFiles. Or I can do .GetDirectories, and I get a whole new set of DirectoryInfo objects to play with.
Admittedly, Java has some of this file stuff, but this nonsense of having to have a whole object to tell it how to list files is just silly.
Also, I hate ::, ->, => and all other multi-character operators except for <= and >= (and maybe -- and ++).
[Disclaimer: I only have a passing familiarity with VB, so take my comments with a grain of salt]
I Hate How Every Keyword In VB Is Capitalized Like This. I saw a blog post the other week (month?) about someone who tried writing VB code without any capital letters (they did something to the compiler to let it accept VB code like that), and the language looked much nicer!
My big hangup is MATLAB's syntax. I use it, and there are things I like about it, but it has so many annoying quirks. Let's see.
Matrices are indexed with parentheses. So if you see something like Image(350,260), you have no clue from that whether we're getting an element from the Image matrix, or if we're calling some function called Image and passing arguments to it.
Scope is insane. I seem to recall that for-loop index variables stay in scope after the loop ends.
If you forget to stick a semicolon after an assignment, the value will be dumped to standard output.
You may have only one function per file. This proves very annoying for organizing one's work.
I'm sure I could come up with more if I thought about it.