Related
In Pascal it is possible to declare union types:
type
  AnimalType = (Dog, Cat);
  Animal = record
    name: string;
    case myType: AnimalType of
      Dog: (weight: Integer);
      Cat: (age: Integer);
  end;
However, it is easy to violate the contract of case:
var
  a: Animal;
begin
  a.name := 'Kittie';
  a.myType := Cat;
  a.weight := 10; // There is no weight for cats!
  writeln(a.age); // Prints 10
end.
There is a semantic error in this example, but the compiler typechecks it successfully, and there is no error at runtime either.
So, does the case block exist only for documentation purposes?
The short answer to your question is, "no, case blocks in variant records are not there only for documentation purposes". I say this because although the Pascal implementation you are using did not detect the fact that the program was accessing an inactive variant, other implementations do detect this error.
The long answer to your question is as follows.
Many people who are just learning Pascal don't realize that there are two major flavours of the Pascal language. There is ISO 7185 Standard Pascal (or just Standard Pascal for short) and there is Turbo/Borland Pascal (the most popular variant). So let me give you two answers to your question, "So, does the case block exist only for documentation purposes?"
The Standard Pascal Answer
Standard Pascal defines an error as "A violation by a program of the requirements of this International Standard that a processor is permitted to leave undetected". So yes, the program you have given does contain an error; yes, the processor (i.e. Pascal implementation) you are using does not detect it; but other implementations will detect it, so no, the case block in variant records actually has a functional purpose.
The Turbo/Borland Answer
As far as the Turbo/Borland Pascal flavours go, I don't know whether any of them detect this error, but even if none of them do, it's probably better to think of this as an error that they do not detect rather than as something for documentation purposes only. Saying something is for documentation purposes only sounds to me like saying it was never intended to be functional.
A union makes it possible to give a variable multiple types. In your example, weight and age are stored at the same location in memory.
If you want a 'typesafe union', you should use inheritance.
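For illustration, here is a minimal Object Pascal sketch of that idea (assuming Free Pascal/Delphi-style classes; the TAnimal, TDog and TCat names are hypothetical, chosen to mirror the variant record above):

program TypeSafeUnion;
{$mode objfpc}

type
  // Hypothetical class hierarchy replacing the variant record.
  TAnimal = class
    name: string;
  end;

  TDog = class(TAnimal)
    weight: Integer;  // only dogs carry a weight
  end;

  TCat = class(TAnimal)
    age: Integer;     // only cats carry an age
  end;

var
  a: TAnimal;
begin
  a := TCat.Create;
  a.name := 'Kittie';
  // a.weight := 10;       // does not compile: TAnimal has no weight field
  (a as TCat).age := 3;    // checked cast: raises an exception if a is not a TCat
  writeln((a as TCat).age);
  a.Free;
end.

Unlike the variant record, the field for the "wrong" animal simply does not exist on the base type, so the compiler rejects the mistake instead of silently aliasing weight and age in memory.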
So I have a thing.
type Thing is record
   ...elements...
end record;
I have a function which stringifies it.
function ToString(t : Thing) return String;
I would like to be able to tell Ada to use this function for Thing'image, so that users of my library don't have to think about whether they're using a builtin type or a Thing.
However, the obvious syntax:
for Thing'image use ToString;
...doesn't work.
Is there a way to do this?
I don’t know why the language doesn’t support this, and I don’t know whether anyone has ever raised a formal proposal that it should (an Ada Issue or AI). The somewhat-related AI12-0020 (the 20th AI for Ada 2012) includes the remark "I don't think we rejected it for technical reasons as much as importance”.
You can see why the Ada Rapporteur Group might think this was relatively unimportant: you can always declare an Image function; the difference between
Pkg.Image (V);
and
Pkg.Typ’Image (V);
isn’t very large.
One common method is to create a unary + function...
function "+"(item : myType) return String;
which is syntactically very light.
Obvious disclaimer: may lead to some ambiguity when applied to numeric types (e.g. Put (+4);)
However, there is still the distinction between built-in types and user-defined types.
'img wouldn't be usable from your client's code, though, unless you specified an interface that required this function to be present (what if the client called 'img on a private type that didn't have an 'img function defined?).
If you end up having to have an interface, it really doesn't matter what the function is called.
I recently had to face this problem, which is: how can I pass 1, 2, 3, 9, 38919, 0, or any other number of arguments to a function or a procedure in Pascal? I want to make a subprogram that accepts as many parameters as I want to pass it, like writeln.
writeln('Hello, ', name, '.');
writeln('You were born on ', birthDate, ', and you are ', age, ' years old.');
I searched the web for some guide or whatever, but the only related threads I found were these ones, which helped me understand my problem a bit more, but I still don't know how to do this in Pascal.
I also found this, but I'm not sure I really understood what it says. (And I don't know whether what applies to Free Pascal applies to other compilers too.)
Any ideas? :/
First of all, writeln is a language construct, not a function. You can't imitate it for your own functions. (You can reroute writeln output, though, and FPC has WriteStr, which can writeln to a string.)
The array of const syntax is more Delphi oriented. Open arrays are Delphi oriented too, but Turbo Pascal had its own form. An open array only takes a single element type, though.
But since classic Pascal has no way of doing variadic parameters, if you want this, you can't avoid using extensions.
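As an illustration of the array of const extension, here is a minimal Free Pascal sketch (the Log procedure is a hypothetical helper of my own, and a real routine would handle more TVarRec cases):

program VariadicDemo;
{$mode objfpc}

// Hypothetical helper: accepts any number of arguments of mixed types
// via "array of const". Each element arrives as a TVarRec whose VType
// field says what kind of value it holds.
procedure Log(const Args: array of const);
var
  i: Integer;
begin
  for i := Low(Args) to High(Args) do
    case Args[i].VType of
      vtInteger:    Write(Args[i].VInteger);
      vtChar:       Write(Args[i].VChar);
      vtExtended:   Write(Args[i].VExtended^:0:2);
      vtString:     Write(Args[i].VString^);
      vtAnsiString: Write(AnsiString(Args[i].VAnsiString));
    else
      Write('?');
    end;
  Writeln;
end;

begin
  Log(['Hello, ', 'world', '.']);
  Log(['You are ', 42, ' years old.']);
end.

This is not true variadic syntax like writeln's (you still need the square brackets), but in practice it gets close.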
When I asked this question, I almost always got a definite "yes, you should have coding standards".
What was the strangest coding standard rule that you were ever forced to follow?
And by strangest I mean funniest, or worst, or just plain odd.
In each answer, please mention which language, what your team size was, and which ill effects it caused you and your team.
I hate it when the use of multiple returns is banned.
Reverse indentation. For example:
for(int i = 0; i < 10; i++)
{
myFunc();
}
and:
if(something)
{
// do A
}
else
{
// do B
}
Maybe not the most outlandish one you'll get, but I really, really hate it when I have to preface database table names with 'tbl'.
Almost any kind of Hungarian notation.
The problem with Hungarian notation is that it is very often misunderstood. The original idea was to prefix the variable so that its meaning was clear. For example:
int appleCount = 0; // Number of apples.
int pearCount = 0; // Number of pears.
But most people use it to encode the type:
int iAppleCount = 0; // Number of apples.
int iPearCount = 0; // Number of pears.
This is confusing, because although both numbers are integers, everybody knows you can't compare apples with pears.
No ternary operator allowed where I currently work:
int value = (a < b) ? a : b;
... because not everyone "gets it". If you told me, "Don't use it because we've had to rewrite them when the structures get too complicated" (nested ternary operators, anyone?), then I'd understand. But when you tell me that some developers don't understand them... um... Sure.
To NEVER remove any code when making changes. We were told to comment all changes. Bear in mind we use source control. This policy didn't last long because developers were in an uproar about it and how it would make the code unreadable.
I once worked under the tyranny of the Mighty VB King.
The VB King was the pure master of MS Excel and VBA, as well as databases (hence his nickname: he played with Excel while the developers worked with compilers, and challenging him on databases could have detrimental effects on your career...).
Of course, his immense skills gave him a unique vision of development problems and project management solutions. While not exactly coding standards in the strictest sense, the VB King regularly had new ideas about "coding standards" and "best practices" he tried (and oftentimes succeeded) to impose on us. For example:
All C/C++ arrays shall start at index 1, instead of 0. Indeed, the use of 0 as first index of an array is obsolete, and has been superseded by Visual Basic 6's insightful array index management.
All functions shall return an error code: There are no exceptions in VB6, so why would we need them at all? (i.e. in C++)
Since "All functions shall return an error code" is not practical for functions returning meaningful types, all functions shall have an error code as first [in/out] parameter.
All our code will check the error codes (this led to the worst case of VBScript if-indentation I ever saw in my career... Of course, as the "else" clauses were never handled, no error was actually found until too late).
Since we're working with C++/COM, starting this very day, we will code all our DOM utility functions in Visual Basic.
ASP 115 errors are evil. For this reason, we will use On Error Resume Next in our VBScript/ASP code to avoid them.
XSL-T is an object oriented language. Use inheritance to resolve your problems (my jaw almost dropped open the day I heard this one).
Exceptions are not used, and thus should be removed. For this reason, we will uncheck the checkbox asking for destructor calls in case of exception unwinding (it took days for an expert to find the cause of all those memory leaks, and he almost went berserk when he found out they had willingly ignored (and hidden) his technical note about checking the option again, sent weeks before).
Catch all exceptions in the COM interface of our COM modules, and dispose of them silently (this way, instead of crashing, a module would only appear to be faster... Shiny!... As we used the über error handling described above, it even took us some time to understand what was really happening... You can't have both speed and correct results, can you?).
Starting today, our code base will split into four branches. We will manage their synchronization and integrate all bug corrections/evolutions by hand.
All but the C/C++ arrays, the VB DOM utility functions, and XSL-T as an OOP language were implemented despite our protests. Of course, over time, some were discovered to be, ahem, broken, and abandoned altogether.
Of course, the VB King's credibility never suffered for that: among the higher management, he remained a "top gun" technical expert...
This produced some amusing side effects, as you can see by following the link What is the best comment in source code you have ever encountered?
Back in the 80's/90's, I worked for an aircraft simulator company that used FORTRAN. Our FORTRAN compiler had a limit of 8 characters for variable names. The company's coding standards reserved the first three of them for Hungarian-notation style info. So we had to try and create meaningful variable names with just 5 characters!
I worked at a place that had a merger between 2 companies. The 'dominant' one had a major server written in K&R C (i.e. pre-ANSI). They forced the Java teams (from both offices -- probably 20 devs total) to use this format, which gleefully ignored the 2 pillars of the "brace debate" and went straight to crazy:
if ( x == y )
{
System.out.println("this is painful");
x = 0;
y++;
}
Forbidden:
while (true) {
Allowed:
for (;;) {
a friend of mine - we'll call him CodeMonkey - got his first job out of college [many years ago] doing in-house development in COBOL. His first program was rejected as 'not complying with our standards' because it used... [shudder!] nested IF statements
the coding standards banned the use of nested IF statements
now, CodeMonkey was not shy and was certain of his abilities, so he persisted in asking everyone up the chain and down the aisle why this rule existed. Most claimed they did not know, some made up stuff about 'readability', and finally one person remembered the original reason: the first version of the COBOL compiler they used had a bug and didn't handle nested IF statements correctly.
This compiler bug, of course, had been fixed for at least a decade, but no one had challenged the standards. [baaa!]
CodeMonkey was successful in getting the standards changed - eventually!
Once worked on a project where underscores were banned. And I mean totally banned. So in a C# WinForms app, whenever we added a new event handler (e.g. for a button) we'd have to rename the default method name from buttonName_Click() to something else, just to satisfy the ego of the guy that wrote the coding standards. To this day I don't know what he had against the humble underscore.
Totally useless database naming conventions.
Every table name has to start with a number. The numbers show which kind of data is in the table.
0: data that is used everywhere
1: data that is used by a certain module only
2: lookup table
3: calendar, chat and mail
4: logging
This makes it hard to find a table if you only know the first letter of its name.
Also - as this is an MSSQL database - we have to surround table names with square brackets everywhere.
-- doesn't work
select * from 0examples;
-- does work
select * from [0examples];
We were doing a C++ project and the team lead was a Pascal guy.
So we had a coding standard include file to redefine all that pesky C and C++ syntax:
#define BEGIN {
#define END }
but wait there's more!
#define ENDIF }
#define CASE switch
etc. It's hard to remember after all this time.
This took what would have been perfectly readable C++ code and made it illegible to anyone except the team lead.
We also had to use reverse Hungarian notation, i.e.
MyClass *class_pt // pt = pointer to type
UINT32 maxHops_u // u = uint32
although oddly I grew to like this.
At a former job:
"Normal" tables begin with T_
"System" tables (usually lookups) begin with TS_ (except when they don't because somebody didn't feel like it that day)
Cross-reference tables begin with TSX_
All field names begin with F_
Yes, that's right. All of the fields, in every single table. So that we can tell it's a field.
A buddy of mine encountered this rule while working at a government job. The use of ++ (pre or post) was completely banned. The reason: Different compilers might interpret it differently.
Half of the team favored four-space indentation; the other half favored two-space indentation.
As you can guess, the coding standard mandated three, so as to "offend all equally" (a direct quote).
Not being able to use Reflection as the manager claimed it involved too much 'magic'.
The very strangest one I had, and one which took me quite some time to overthrow, was when the owner of our company demanded that our new product be IE only. If it could work on FireFox, that was OK, but it had to be IE only.
This might not sound too strange, except for one little flaw. All of the software was for a bespoke server software package, running on Linux, and all client boxes that our customer was buying were Linux. Short of trying to figure out how to get Wine (in those days, very unreliable) up and running on all of these boxes and seeing if we could get IE running and training their admins how to debug Wine problems, it simply wasn't possible to meet the owner's request. The problem was that he was doing the Web design and simply didn't know how to make Web sites compliant with FireFox.
It probably won't shock you to know that our company went bankrupt.
Using generic numbered identifier names
At my current work we have two rules which are really mean:
Rule 1: Every time we create a new field in a database table, we have to add additional reserve fields for future use. These reserve fields are numbered (because no one knows which data they will hold some day). The next time we need a new field, we first look for an unused reserve field.
So we end up with customer.reserve_field_14 containing the e-mail address of the customer.
One day our boss thought about introducing reserve tables, but fortunately we could convince him not to.
Rule 2: One of our products is written in VB6, and VB6 has a limit on the total number of distinct identifier names; since the code base is very large, we constantly run into this limit. As a "solution", all local variable names are numbered:
Lvarlong1
Lvarlong2
Lvarstr1
...
Although that effectively circumvents the identifier limit, these two rules combined lead to beautiful code like this:
...
If Lvarbool1 Then
    Lvarbool2 = True
End If
If Lvarbool2 Or Lvarstr1 <> Lvarstr5 Then
    db.Execute("DELETE FROM customer WHERE " _
        & "reserve_field_12 = '" & Lvarstr1 & "'")
End If
...
You can imagine how hard it is to fix old or someone else's code...
Latest update: Now we are also using "reserve procedures" for private members:
Private Sub LSub1(Lvarlong1 As Long, Lvarstr1 As String)
    If Lvarlong1 >= 0 Then
        Lvarbool1 = LFunc1(Lvarstr1)
    Else
        Lvarbool1 = LFunc6()
    End If
    If Lvarbool1 Then
        LSub4 Lvarstr1
    End If
End Sub
EDIT: It seems that this code pattern is becoming more and more popular. See this The Daily WTF post to learn more: Astigmatism :)
Back in my C++ days we were not allowed to use ==, >=, <=, &&, etc. There were macros for this...
if (bob EQ 7 AND alice LEQ 10)
{
// blah
}
This was obviously to deal with the old "accidental assignment in a conditional" bug; however, we also had the rule "put constants before variables", so:
if (NULL EQ ptr); //ok
if (ptr EQ NULL); //not ok
Just remembered, the simplest coding standard I ever heard was "Write code as if the next maintainer is a vicious psychopath who knows where you live."
Hungarian notation in general.
I've had a lot of stupid rules, but not a lot that I considered downright strange.
The silliest was on a NASA job I worked back in the early 90's. This was a huge job, with well over 100 developers on it. The experienced developers who wrote the coding standards decided that every source file should begin with a four letter acronym, and the first letter had to stand for the group that was responsible for the file. This was probably a great idea for the old FORTRAN 77 projects they were used to.
However, this was an Ada project, with a nice hierarchical library structure, so it made no sense at all. Every directory was full of files starting with the same letter, followed by 3 more nonsense letters, an underscore, and then the part of the file name that mattered. All the Ada packages had to start with this same five-character wart. Ada "use" clauses were not allowed either (arguably a good thing under normal circumstances), so that meant any reference to any identifier that wasn't local to that source file also had to include this useless wart. There probably should have been an insurrection over this, but the entire project was staffed by junior programmers and fresh-from-college new hires (myself being the latter).
A typical assignment statement (already verbose in Ada) would end up looking something like this:
NABC_The_Package_Name.X := NABC_The_Package_Name.X +
    CXYZ_Some_Other_Package_Name.Delta_X;
Fortunately they were at least enlightened enough to allow us more than 80 columns! Still, the facility wart was hated enough that it became boilerplate code at the top of everyone's source files to use Ada "renames" to get rid of the wart. There'd be one rename for each imported ("withed") package. Like this:
package Package_Name renames NABC_Package_Name;
package Some_Other_Package_Name renames CXYZ_Some_Other_Package_Name;
--// Repeated in this vein for an average of 10 lines or so
What the more creative among us took to doing was trying to use the wart to make an actually sensible (or silly) package name. (I know what you are thinking, but expletives were not allowed and shame on you! That's disgusting). For example, I was in the Common code group, and I needed to make a package to interface with the Workstation group. After a brainstorming session with the Workstation guy, we decided to name our packages so that someone needing both would have to write:
with CANT_Interface_Package;
with WONT_Interface_Package;
When I started working at one place and started entering my code into source control, my boss suddenly came up to me and asked me to stop committing so much. He told me that a developer doing more than one commit per day is discouraged, because it litters the source control. I simply gaped at him...
Later I understood that the reason he even came up to me about it is that the SVN server would send him (and 10 more high executives) a mail for each commit someone made. And by littering the source control I guessed he meant his mailbox.
Doing all database queries via stored procedures in Sql Server 2000. From complex multi-table queries to simple ones like:
select id, name from people
The arguments in favor of procedures were:
Performance
Security
Maintainability
I know that the procedure topic is quite controversial, so feel free to score my answer negatively ;)
There must be 165 unit tests (not necessarily automated) per 1000 lines of code. That works out at one test for roughly every 6 lines.
Needless to say, some of the lines of code are quite long, and functions return this pointers to allow chaining.
We had to sort all the functions in classes alphabetically, to make them "easier to find".
Never mind that the IDE had a drop-down. That was too many clicks.
(The same tech lead wrote an app to remove all comments from our source code.)
In 1987 or so, I took a job with a company that hired me because I was one of a small handful of people who knew how to use Revelation. Revelation, if you've never heard of it, was essentially a PC-based implementation of the Pick operating system - which, if you've never heard of it, got its name from its inventor, the fabulously-named Dick Pick. Much can be said about the Pick OS, most of it good. A number of supermini vendors (Prime and MIPS, at least) used Pick, or their own custom implementations of it.
This company was a Prime shop, and for their in-house systems they used Information. (No, that was really its name: it was Prime's implementation of Pick.) They had a contract with the state to build a PC-based system, and had put about a year into their Revelation project before the guy doing all the work, who was also their MIS director, decided he couldn't do both jobs anymore and hired me.
At any rate, he'd established a number of coding standards for their Prime-based software, many of which derived from two basic conditions: 1) the use of 80-column dumb terminals, and 2) the fact that since Prime didn't have a visual editor, he'd written his own. Because of the magic portability of Pick code, he'd brought his editor down into Revelation, and had built the entire project on the PC using it.
Revelation, of course, being PC-based, had a perfectly good full-screen editor, and didn't object when you went past column 80. However, for the first several months I was there, he insisted that I use his editor and his standards.
So, the first standard was that every line of code had to be commented. Every line. No exceptions. His rationale for that was that even if your comment said exactly what you had just written in the code, having to comment it meant you at least thought about the line twice. Also, as he cheerfully pointed out, he'd added a command to the editor that formatted each line of code so that you could put an end-of-line comment.
Oh, yes. When you commented every line of code, it was with end-of-line comments. In short, the first 64 characters of each line were for code, then there was a semicolon, and then you had 15 characters to describe what your 64 characters did. In short, we were using an assembly language convention to format our Pick/Basic code. This led to things that looked like this:
EVENT.LIST[DATE.INDEX][-1] = _ ;ADD THE MOST RECENT EVENT
EVENTS[LEN(EVENTS)] ;TO THE END OF EVENT LIST
(Actually, after 20 years I have finally forgotten R/Basic's line-continuation syntax, so it may have looked different. But you get the idea.)
Additionally, whenever you had to insert multiline comments, the rule was that you use a flower box:
************************************************************************
**  IN CASE YOU NEVER HEARD OF ONE, OR COULDN'T GUESS FROM ITS NAME,  **
**  THIS IS A FLOWER BOX.                                             **
************************************************************************
Yes, those closing asterisks on each line were required. After all, if you used his editor, it was just a simple editor command to insert a flower box.
Getting him to relent and let me use Revelation's built-in editor was quite a battle. At first he was insistent, simply because those were the rules. When I objected that a) I already knew the Revelation editor, b) it was substantially more functional than his editor, and c) other Revelation developers would have the same perspective, he retorted that if I didn't train on his editor I wouldn't ever be able to work on the Prime codebase, which, as we both knew, was not going to happen as long as hell remained unfrozen over. Finally he gave in.
But the coding standards were the last to go. The flower-box comments in particular were a stupid waste of time, and he fought me tooth and nail on them, saying that if I'd just use the right editor maintaining them would be perfectly easy. (The whole thing got pretty passive-aggressive.) Finally I quietly gave in, and from then on all of the code I brought to code reviews had his precious flower-box comments.
One day, several months into the job, when I'd pretty much proven myself more than competent (especially in comparison with the remarkable parade of other coders that passed through that office while I worked there), he was looking over my shoulder as I worked, and he noticed I wasn't using flower-box comments. Oh, I said, I wrote a source-code formatter that converts my comments into your style when I print them out. It's easier than maintaining them in the editor. He opened his mouth, thought for a moment, closed it, went away, and we never talked about coding standards again. Both of our jobs got easier after that.
At my first job, all C programs, no matter how simple or complex, had only four functions. You had the main, which called the other three functions in turn. I can't remember their names, but they were something along the lines of begin(), middle(), and end(). begin() opened files and database connections, end() closed them, and middle() did everything else. Needless to say, middle() was a very long function.
And just to make things even better, all variables had to be global.
One of my proudest memories of that job is having been part of the general revolt that led to the destruction of those standards.
An externally-written C coding standard that had the rule 'don't rely on built-in operator precedence, always use brackets'.
Fair enough, the obvious intent was to ban:
a = 3 + 6 * 2;
in favour of:
a = 3 + (6 * 2);
Thing was, this was enforced by a tool that followed the C syntax rules that '=', '==', '.' and array access are operators. So code like:
a[i].x += b[i].y + d - 7;
had to be written as:
((a[i]).x) += (((b[i]).y + d) - 7);
I've read some of the recent language vs. language questions with interest... Perl vs. Python, Python vs. Java, Can one language be better than another?
One thing I've noticed is that a lot of us have very superficial reasons for disliking languages. We notice these things at first glance and they turn us off. We shun what are probably perfectly good languages as a result of features that we'd probably learn to love or ignore in 2 seconds if we bothered.
Well, I'm as guilty as the next guy, if not more. Here goes:
Ruby: All the Ruby example code I see uses the puts command, and that's a sort of childish Yiddish anatomical term. So as a result, I can't take Ruby code seriously even though I should.
Python: The first time I saw it, I smirked at the whole significant whitespace thing. I avoided it for the next several years. Now I hardly use anything else.
Java: I don't like identifiersThatLookLikeThis. I'm not sure why exactly.
Lisp: I have trouble with all the parentheses. Things of different importance and purpose (function declarations, variable assignments, etc.) are not syntactically differentiated and I'm too lazy to learn what's what.
Fortran: uppercase everything hurts my eyes. I know modern code doesn't have to be written like that, but most example code is...
Visual Basic: it bugs me that Dim is used to declare variables, since I remember the good ol' days of GW-BASIC when it was only used to dimension arrays.
What languages did look right to me at first glance? Perl, C, QBasic, JavaScript, assembly language, BASH shell, FORTH.
Okay, now that I've aired my dirty laundry... I want to hear yours. What are your language hangups? What superficial features bother you? How have you gotten over them?
I hate Hate HATE "End Function" and "End IF" and "If... Then" parts of VB. I would much rather see a curly bracket instead.
PHP's function name inconsistencies.
// common parameters back-to-front
in_array(needle, haystack);
strpos(haystack, needle);
// _ to separate words, or not?
filesize();
file_exists();
// super globals prefix?
$GLOBALS;
$_POST;
I never really liked the keywords spelled backwards in some scripting shells
if-then-fi is bad enough, but case-in-esac is just getting silly
I just thought of another... I hate the mostly-meaningless URLs used in XML to define namespaces, e.g. xmlns="http://purl.org/rss/1.0/"
Pascal's Begin and End. Too verbose, not subject to bracket matching, and worse, there isn't a Begin for every End, e.g.:
Type foo = Record
// ...
end;
Although I'm mainly a PHP developer, I dislike languages that don't let me do enough things inline. E.g.:
$x = returnsArray();
$x[1];
instead of
returnsArray()[1];
or
function sort($a, $b) {
    return $a < $b;
}
usort($array, 'sort');
instead of
usort($array, function($a, $b) { return $a < $b; });
I like object-oriented style. So it bugs me in Python to see len(str) to get the length of a string, or splitting strings like split(str, "|") in another language. That is fine in C; it doesn't have objects. But Python, D, etc. do have objects and use obj.method() other places. (I still think Python is a great language.)
Inconsistency is another big one for me. I do not like inconsistent naming in the same library: length(), size(), getLength(), getlength(), toUTFindex() (why not toUtfIndex?), Constant, CONSTANT, etc.
The long names in .NET bother me sometimes. Can't they shorten DataGridViewCellContextMenuStripNeededEventArgs somehow? What about ListViewVirtualItemsSelectionRangeChangedEventArgs?
And I hate deep directory trees. If a library/project has a 5 level deep directory tree, I'm going to have trouble with it.
C and C++'s syntax is a bit quirky. They reuse operators for different things. You're probably so used to it that you don't think about it (nor do I), but consider how many meanings parentheses have:
int main() // function declaration / definition
printf("hello") // function call
(int)x // type cast
2*(7+8) // override precedence
int (*)(int) // function pointer
int x(3) // initializer
if (condition) // special part of syntax of if, while, for, switch
And if in C++ you saw
foo<bar>(baz(),baaz)
you couldn't know the meaning without the definition of foo and bar.
the < and > might be a template instantiation, or might be less-than and greater-than (unusual but legal)
the () might be a function call, or might just be surrounding the comma operator (i.e. perform baz() for side effects, then return baaz).
The silly thing is that other languages have copied some of these characteristics!
Java, and its checked exceptions. I left Java for a while, dwelling in the .NET world, then recently came back.
It feels like, sometimes, my throws clause is more voluminous than my method content.
There's nothing in the world I hate more than PHP.
Variables with $, that's one extra odd character for every variable.
Members are accessed with -> for no apparent reason, one extra character for every member access.
A freakshow of a language, really.
No namespaces.
Strings are concatenated with a period (.).
A freakshow of a language.
All the []s and #s in Objective C. Their use is so different from the underlying C's native syntax that the first time I saw them it gave the impression that all the object-orientation had been clumsily bolted on as an afterthought.
I abhor the boilerplate verbosity of Java.
writing getters and setters for properties
checked exception handling and all the verbiage that implies
long lists of imports
Those, in connection with the Java convention of using veryLongVariableNames, sometimes have me thinking I'm back in the 80's, writing IDENTIFICATION DIVISION. at the top of my programs.
Hint: If you can automate the generation of part of your code in your IDE, that's a good hint that you're producing boilerplate code. With automated tools, it's not a problem to write, but it's a hindrance every time someone has to read that code - which is more often.
While I think it goes a bit overboard on type bureaucracy, Scala has successfully addressed some of these concerns.
Coding Style inconsistencies in team projects.
I'm working on a large team project where some contributors have used 4 spaces instead of the tab character.
Working with their code can be very annoying - I like to keep my code clean and with a consistent style.
It's bad enough when you use different standards for different languages, but in a web project with HTML, CSS, Javascript, PHP and MySQL, that's 5 languages, 5 different styles, and multiplied by the number of people working on the project.
I'd love to re-format my co-workers code when I need to fix something, but then the repository would think I changed every line of their code.
It irritates me sometimes how people expect there to be one language for all jobs. Depending on the task you are doing, each language has its advantages and disadvantages. I like the C-based syntax languages because it's what I'm most used to and I like the flexibility they tend to bestow on the developer. Of course, with great power comes great responsibility, and having the power to write 150 line LINQ statements doesn't mean you should.
I love the inline XML in the latest version of VB.NET although I don't like working with VB mainly because I find the IDE less helpful than the IDE for C#.
If Microsoft had to invent yet another C++-like language in C#, why didn't they correct Java's mistake and implement support for RAII?
Case sensitivity.
What kinda hangover do you need to think that differentiating two identifiers solely by caSE is a great idea?
I hate semi-colons. I find they add a lot of noise and you rarely need to put two statements on a line. I prefer the style of Python and other languages... end of line is end of a statement.
Any language that can't fully decide if Arrays/Loop/string character indexes are zero based or one based.
I personally prefer zero based, but any language that mixes the two, or lets you "configure" which is used can drive you bonkers. (Apache Velocity - I'm looking in your direction!)
A snippet from the VTL reference (the default is 1, but you can set it to 0):
# Default starting value of the loop
# counter variable reference.
directive.foreach.counter.initial.value = 1
(try merging 2 projects that used different counter schemes - ugh!)
In no particular order...
OCaml
Tuple definitions use * to separate items rather than ,. So, ("Juliet", 23, true) has the type (string * int * bool).
For being such an awesome language, the documentation has this haunting comment on threads: "The threads library is implemented by time-sharing on a single processor. It will not take advantage of multi-processor machines. Using this library will therefore never make programs run faster." JoCaml doesn't fix this problem.
^^^ I've heard the Jane Street guys were working to add concurrent GC and multi-core threads to OCaml, but I don't know how successful they've been. I can't imagine a language without multi-core threads and GC surviving very long.
No easy way to explore modules in the toplevel. Sure, you can write module Q = List;; and the toplevel will happily print out the module definition, but that just seems hacky.
C#
Lousy type inference. Beyond the most trivial expressions, I have to give types to generic functions.
All the LINQ code I ever read uses method syntax, x.Where(item => ...).OrderBy(item => ...). No one ever uses expression syntax, from item in x where ... orderby ... select. Between you and me, I think expression syntax is silly, if for no other reason than that it looks "foreign" against the backdrop of all other C# and VB.NET code.
LINQ
Every other language uses the industry-standard names Map, Fold/Reduce/Inject, and Filter. LINQ has to be different and uses Select, Aggregate, and Where.
Functional Programming
Monads are mystifying. Having seen the Parser monad, Maybe monad, State, and List monads, I can understand perfectly how the code works; however, as a general design pattern, I can't seem to look at problems and say "hey, I bet a monad would fit perfect here".
Ruby
GRRRRAAAAAAAH!!!!! I mean... seriously.
VB
Module Hangups
    Dim _juliet As String = "Too Wordy!"

    Public Property Juliet() As String
        Get
            Return _juliet
        End Get
        Set(ByVal value As String)
            _juliet = value
        End Set
    End Property
End Module
And setter declarations are the bane of my existence. Alright, so I change the data type of my property -- now I need to change the data type in my setter too? Why doesn't VB borrow from C# and simply incorporate an implicit variable called value?
.NET Framework
I personally like Java's casing convention: classes are PascalCase, methods and properties are camelCase.
In C/C++, it annoys me how there are different ways of writing the same code.
e.g.
if (condition)
{
callSomeConditionalMethod();
}
callSomeOtherMethod();
vs.
if (condition)
callSomeConditionalMethod();
callSomeOtherMethod();
equate to the same thing, but different people have different styles. I wish the original standard had been stricter about making a decision on this, so we wouldn't have this ambiguity. It leads to arguments and disagreements in code reviews!
I found Perl's use of "defined" and "undefined" values to be so useful that I have trouble using scripting languages without it.
Perl:
($lastname, $firstname, $rest) = split(' ', $fullname);
This statement performs well no matter how many words are in $fullname. Try it in Python, and it explodes if $fullname doesn't contain exactly three words.
SQL, they say you should not use cursors and when you do, you really understand why...
It's so heavy going!
DECLARE mycurse CURSOR LOCAL FAST_FORWARD READ_ONLY
FOR
    SELECT field1, field2, fieldN FROM atable

OPEN mycurse

FETCH NEXT FROM mycurse INTO @Var1, @Var2, @VarN

WHILE @@fetch_status = 0
BEGIN
    -- do something really clever...
    FETCH NEXT FROM mycurse INTO @Var1, @Var2, @VarN
END

CLOSE mycurse
DEALLOCATE mycurse
Although I program primarily in Python, it irks me endlessly that lambda bodies must be expressions.
I'm still wrapping my brain around JavaScript, and as a whole, it's mostly acceptable. But why is it so hard to create a namespace? In Tcl they're just ugly, but in JavaScript, it's actually a rigmarole AND completely unreadable.
In SQL, how come everything is just one huge freakin' SELECT statement?
In Ruby, I very strongly dislike how methods do not require self. to be called on current instance, but properties do (otherwise they will clash with locals); i.e.:
def foo()
  123
end

def foo=(x)
end

def bar()
  x = foo()   # okay, same as self.foo()
  x = foo     # not okay, reads unassigned local variable foo
  foo = 123   # not okay, assigns local variable foo
end
To my mind, it's very inconsistent. I'd prefer to either always require self. in all cases, or to have a sigil for locals.
Java's packages. I find them complex, more so because I am not a corporation.
I vastly prefer namespaces. I'll get over it, of course - I'm playing with the Android SDK, and Eclipse removes a lot of the pain. I've never had a machine that could run it interactively before, and now I do I'm very impressed.
Prolog's if-then-else syntax.
x -> y ; z
The problem is that ";" is the "or" operator, so the above looks like "x implies y or z".
Java
Generics (Java's version of templates) are limited. I cannot call methods of the type parameter and I cannot create instances of it. Generics are used by containers, but I could just use containers of Object instances instead.
No multiple inheritance. If a use of multiple inheritance does not lead to the diamond problem, it should be allowed. It should be possible to write a default implementation of interface methods. An example of the problem: the interface MouseListener has 5 methods, one for each event. If I want to handle just one of them, I have to implement the other 4 as empty methods.
It does not let you choose to manually manage the memory of some objects.
The Java API uses complex combinations of classes to do simple tasks. For example, if I want to read from a file, I have to use many classes (FileReader, FileInputStream).
Python
Indentation is part of the syntax. I would prefer to use the word "end" to indicate the end of a block, and then the word "pass" would not be needed.
In classes, the word "self" should not be needed as an argument of methods.
C++
Headers are the worst problem. I have to list the functions in a header file and implement them in a cpp file. A class cannot hide its dependencies: if a class A uses class B privately as a field, then including the header of A pulls in the header of B too.
Strings and arrays come from C and do not provide a length field. It is difficult to control whether std::string and std::vector will use the stack or the heap. I have to use pointers to std::string and std::vector if I want to assign one, pass it as an argument, or return it, because its "=" operator copies the entire structure.
I cannot control the constructor and destructor. It is difficult to create an array of objects without a default constructor, or to choose which constructor to use with if and switch statements.
In most languages, file access. VB.NET is the only language so far where file access makes any sense to me. I do not understand why if I want to check if a file exists, I should use File.exists("") or something similar instead of creating a file object (actually FileInfo in VB.NET) and asking if it exists. And then if I want to open it, I ask it to open: (assuming a FileInfo object called fi) fi.OpenRead, for example. Returns a stream. Nice. Exactly what I wanted. If I want to move a file, fi.MoveTo. I can also do fi.CopyTo. What is this nonsense about not making files full-fledged objects in most languages? Also, if I want to iterate through the files in a directory, I can just create the directory object and call .GetFiles. Or I can do .GetDirectories, and I get a whole new set of DirectoryInfo objects to play with.
Admittedly, Java has some of this file stuff, but this nonsense of having to have a whole object to tell it how to list files is just silly.
Also, I hate ::, ->, => and all other multi-character operators except for <= and >= (and maybe -- and ++).
[Disclaimer: I only have a passing familiarity with VB, so take my comments with a grain of salt]
I Hate How Every Keyword In VB Is Capitalized Like This. I saw a blog post the other week (month?) about someone who tried writing VB code without any capital letters (they did something to a compiler that would let them compile VB code like that), and the language looked much nicer!
My big hangup is MATLAB's syntax. I use it, and there are things I like about it, but it has so many annoying quirks. Let's see.
Matrices are indexed with parentheses. So if you see something like Image(350,260), you have no clue from that whether we're getting an element from the Image matrix, or if we're calling some function called Image and passing arguments to it.
Scope is insane. I seem to recall that for loop index variables stay in scope after the loop ends.
If you forget to stick a semicolon after an assignment, the value will be dumped to standard output.
You may only have one function per file. This proves to be very annoying for organizing one's work.
I'm sure I could come up with more if I thought about it.