Software engineering with Ada: stubs; separate and compilation units [closed] - compilation
I come from a mechanical engineering background, but I'm interested in learning good software engineering practice with Ada. I have a few queries.
Q1. If I understand correctly, one can write just a package specification (.ads) file, compile it, and then compile the main program that uses the package. Later, once one knows what to include in the package body, the body can be written and compiled, and the main program can then be run. I've tried this and would like to confirm that it is good practice.
Q2. My second question is about stubs (sub-units) and the use of SEPARATE. Say I have a main program as follows:
WITH Ada.Float_Text_IO;
WITH Ada.Text_IO;
WITH Ada.Integer_Text_IO;
PROCEDURE TEST2 IS
A,B : FLOAT;
N : INTEGER;
PROCEDURE INPUT(A,B: OUT FLOAT; N: OUT INTEGER) IS SEPARATE;
BEGIN -- main program
INPUT(A,B,N);
Ada.Float_Text_IO.Put(Item => A);
Ada.Text_IO.New_line;
Ada.Integer_Text_IO.Put(Item => N);
END TEST2;
Then I have the procedure INPUT in a separate file:
separate(TEST2)
PROCEDURE INPUT(A,B: OUT FLOAT; N: OUT INTEGER) IS
BEGIN
Ada.Float_Text_IO.Get(Item => A);
Ada.Text_IO.New_line;
Ada.Float_Text_IO.Get(Item => B);
Ada.Text_IO.New_line;
Ada.Integer_Text_IO.Get(Item => N);
END INPUT;
My questions:
a) AdaGIDE suggests that I save the INPUT procedure file as input.adb. But then, on compiling the main program test2, I get the warning:
warning: subunit "TEST2.INPUT" in file "test2-input.adb" not found
cannot generate code for file test2.adb (missing subunits)
To AdaGIDE, this is more of an error as the above warnings come before the message:
Compiling...
Done--error detected
So I renamed input.adb to test2-input.adb, as AdaGIDE suggested when compiling. Now, on compiling the main file, I get no warnings. My question is whether it's OK to write
PROCEDURE INPUT(A,B: OUT FLOAT; N: OUT INTEGER) IS
as I did in the sub-unit file test2-input.adb or is it better to write a more descriptive term like
PROCEDURE TEST2-INPUT(A,B: OUT FLOAT; N: OUT INTEGER) IS
to emphasize that procedure INPUT has a parent procedure TEST2? This thought follows from AdaGIDE hinting at the file name test2-input.adb, as I mentioned above.
b) My next question:
If I understand the compilation order correctly, I should compile the main file test2.adb first and then the stub test2-input.adb. On compiling the stub, I get the error message:
cannot generate code for file test2-input.adb (subunit)
Done--error detected
However, I can now do the binding and linking for test2.adb and run the program.
I would like to know whether I did something wrong by trying to compile the stub test2-input.adb, or whether it should not be compiled at all?
Q3. What is the use of having subunits? Is it just to break a large program into smaller parts? I know an error arises if one doesn't put any statements between BEGIN and END in the subunit, so one always has to put a statement there. If one wants to write the statements later, one can always put a NULL statement between BEGIN and END in the subunit and come back to it later. Is this how software engineering is done in practice?
Thanks a lot...
Q1: That is excellent practice.
And by treating the package specification as a specification, you can provide it to other developers so that they will know how to interface to your code.
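For instance, here is a minimal sketch (the package name Stacks and its operations are invented purely for illustration): the spec below can be compiled on its own, and a main program that WITHs it can be compiled against it before any body exists; only binding and linking have to wait for the body.
-- stacks.ads: a specification that clients can compile against
package Stacks is
   procedure Push (Item : in Integer);
   function Pop return Integer;
end Stacks;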
Q2: I believe that AdaGIDE actually uses the GNAT compiler for all compilation, so it's really GNAT that is in charge of the acceptable file names. (This can be configured, but unless you have a very compelling reason to do so, it is far simpler to go with GNAT/AdaGIDE's file naming conventions.) More pertinent to your question, though: there's no strong reason to include the parent unit as part of the separate unit's name. But see the answer to Q3...
Q3: Subunits were introduced with the first version of Ada--Ada 83--in part to help modularize code and allow for deferred development and compilation. However, Ada software development practice has pretty much abandoned the use of subunits; the procedure/function/task/etc. bodies are simply maintained in the body of the package. They are still used in some areas, such as when a platform-specific version of a subprogram may be needed, but for the most part they're rare. Keeping everything together leaves fewer files to keep track of, and keeps the implementation code of a package in one place. So I strongly recommend you simply ignore the subunit capabilities and place all your implementation code in package bodies.
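Continuing the illustrative Stacks sketch from above, the implementation would simply all live in one package body rather than being scattered across subunit files:
-- stacks.adb: the whole implementation kept together (no bounds checking in this sketch)
package body Stacks is
   Store : array (1 .. 100) of Integer;
   Top   : Natural := 0;

   procedure Push (Item : in Integer) is
   begin
      Top := Top + 1;
      Store (Top) := Item;
   end Push;

   function Pop return Integer is
   begin
      Top := Top - 1;
      return Store (Top + 1);
   end Pop;
end Stacks;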
It's pretty normal to split a problem up into component parts (packages), each supporting a different aspect. If you've learnt Ada, it'd be normal to write the specs of the packages first, argue (perhaps with yourself) why that's the right design, and then implement them. And this would be normal, I think, in any language that supports specs and bodies - for example, C.
Personally I would do check compilations as I went, just to make sure I'm not doing anything stupid.
As for separates - one (not very good) reason is to reduce clutter, to stop the unit getting too long. Another reason (for a code generator I wrote) was so that the code generator didn't need to worry about preserving developers' hand-written code in the UML model; all code bodies were separates. A third might be for environment-dependent implementation (eg, Windows vs Unix), where you'd let the compiler see a different version of the separate body for each environment (people normally use library packages for this, though).
Compilers have their own rules about file names, and what order things can be compiled in. When GNAT sees
procedure Foo is
procedure Bar is separate;
it expects to find Foo's body in a file named foo.adb and Bar's body in foo-bar.adb (you can, I believe, tell it differently - gnatmake's package Naming - but it's probably not worth the trouble). It's best to go with the flow here:
separate (Foo)
procedure Bar is
is clear enough.
You can compile foo-bar.adb, and that will do a full analysis and catch almost all errors in the code; but GNAT can't generate code for this on its own. Instead, when you compile foo.adb it includes all the separate bodies in the one generated object file. It certainly isn't wrong to do this.
With GNAT, there's no need to worry about compilation order; you can compile in any order you like. But it's best to use gnatmake and let the computer take the strain!
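To make that concrete, here's a minimal sketch using the placeholder names Foo and Bar from above (invented for illustration, not from the question); gnatmake foo then analyses both files and generates a single object for Foo:
-- foo.adb
with Ada.Text_IO;
procedure Foo is
   procedure Bar is separate;  -- stub: body supplied in foo-bar.adb
begin
   Bar;
   Ada.Text_IO.Put_Line ("back in Foo");
end Foo;

-- foo-bar.adb (a subunit sees its parent's context clause, so Ada.Text_IO is visible here)
separate (Foo)
procedure Bar is
begin
   Ada.Text_IO.Put_Line ("hello from the subunit");
end Bar;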
You can indeed work the way you describe, except of course your program won't link until all of the package bodies have some kind of implementation. For that reason, I think it is more normal to write a dummy package body with all procedures implemented as:
begin
null;
end;
And all functions implemented as something like:
begin
return The_Return_Type'First;
end;
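For example, assuming a spec that declares a procedure Calibrate and a function Read_Temperature (the package Sensors and these names are invented for illustration), a dummy body that compiles and links might look like:
-- sensors.adb: placeholder body so the program links before the real code is written
package body Sensors is
   procedure Calibrate is
   begin
      null;  -- real work deferred
   end Calibrate;

   function Read_Temperature return Float is
   begin
      return Float'First;  -- obviously-wrong placeholder value
   end Read_Temperature;
end Sensors;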
As for separates...I don't like them. For me I'd much rather be able to follow the rule that all the code for a package is in its package body. Separates are marginally acceptable if for some reason the routine is huge, but in that case a better solution is almost always to refactor your code. So any time I see one, it is a big red flag.
As for the file name thing, this is a GNAT issue, not an Ada issue. GNAT took the unusual position for a compiler that the name of the contents of a file dictates what the file itself must be named. There are probably other compilers in the world that do that, but I've yet to find one in 30 years of coding.
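For what it's worth, if the default convention ever becomes a real obstacle, GNAT's file-name mapping can be overridden in a project file. A minimal sketch, with project, unit and file names invented for illustration:
-- naming_demo.gpr: map the body of unit Test2 to a non-default file name
project Naming_Demo is
   for Main use ("my_main_file.adb");
   package Naming is
      for Body ("Test2") use "my_main_file.adb";
   end Naming;
end Naming_Demo;
Building with gnatmake -P naming_demo.gpr (or gprbuild) then accepts the non-default name, though as noted above it's usually less trouble to just follow GNAT's convention.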
Related
Need validation on a claim from Go Lang
I have lately been looking into Go (coming from a C++ background). I am reading a paper that explains the reasoning behind creating Go: https://talks.golang.org/2012/splash.article One of its claims is that handling dependencies (packages) in C and C++ is painful, and it uses the #ifndef include-guard idiom to make the point: "The intent is that the C preprocessor reads in the file but disregards the contents on the second and subsequent readings of the file..." I consulted a GCC page on the same topic, https://gcc.gnu.org/onlinedocs/cppinternals/Guard-Macros.html, which says that if the header file appears in a subsequent #include directive and FOO is defined, then it is ignored and the preprocessor "doesn't preprocess or even re-open the file a second time". Go: "reads in and disregards" vs. GCC: it doesn't preprocess or even re-open the file a second time. Isn't that a contradiction? Your thoughts are appreciated. Thanks for reading my question.
The first passage is talking about a generic compiler, which, conceptually speaking, should read the contents of the file and disregard the contents (because they are #ifdefd out). That is, roughly, what the C standard specifies a compiler should do. But practically everything in the C standard is under the "as if" rule - a compiler does not actually have to be implemented in the way suggested in the standard, so long as the end result it produces is exactly the same in every case. As such, GCC's particular implementation adds an optimization where, in cases where it can tell with certainty that the contents of the file would be disregarded, it doesn't actually read it. This is perfectly fine because it still behaves as if it has read the file but disregarded it. Note that other compilers do not necessarily do the same.
Profiling the OCaml compiler
Background information (you do not need to repeat these steps to answer the question, this just gives some background): I am trying to compile a rather large set of generated modules. These files are the output of a prototype Modelica to OCaml compiler and reflect the Modelica class structure of the Modelica Standard Library.
The main feature is the use of polymorphic, open recursion: every method takes a this argument which contains the final superclass hierarchy. So for instance the model:
model A
  type T = Real
  type S = T
end A;
is translated into:
let m_A = object
  method m_T this = m_Modelica_Real
  method m_S this = this#m_T this
end
and has to be closed before usage:
let _ = m_A#m_T m_A
This seems to postpone a lot of typechecking until the superclass hierarchy is actually fixed, which in turn makes it impossible to compile the final linkage module (try ocamlbuild Linkage.cmo after editing the comments in the corresponding file to see what I mean). Unfortunately, since the code base is rather large and uses a lot of objects, the type structure might not be the root cause after all; it might as well be some optimization or a flaw in the code generation (although I strongly suspect the typechecker).
So my question is: is there any way to profile the OCaml compiler in a way that signals when a certain phase (typechecking, intermediate code generation, optimization) is over and how long it took? Any further insights into my particular use case are also welcome.
As of right now, there isn't. You can do it yourself, though; the compiler sources are open, and you can get them and modify them to fit your needs. Depending on whether you use ocamlc or ocamlopt, you'll need to modify either driver/compile.ml or driver/optcompile.ml to add timers to the compilation process.
Fortunately, this has already been done for you here. Just compile with the option -dtimings or the environment variable OCAMLPARAM=timings=1,_.
Even more easily, you can download the opam Flambda switch:
opam switch install 4.03.0+pr132
ocamlopt -dtimings myfile.ml
Note: Flambda itself changes the compilation time (mostly what happens after typing), and its integration into the OCaml compiler is not confirmed yet.
The OCaml compiler is an ordinary OCaml program in that regard. I would use the poor man's profiler for a quick inspection, using e.g. the pmp script.
How many times does a Common Lisp compiler recompile?
While not all Common Lisp implementations do compilation to machine code, some of them do, including SBCL and CCL. In C/C++, if the source files don't change, the binary output of a C/C++ compiler will also not change, assuming the underlying system remains the same. In a Common Lisp compiler, the compilation is not under the user's direct control, unlike C/C++. My question is that if the Lisp source files haven't changed, under what circumstances will a CL compiler compile the code more than once, and why? If possible, a simple illustrative example would be helpful.
I think that the question is based on some misconceptions. The compiler doesn't compile files, and it's not something that the user has no control over. The compiler is quite readily available through the compile function. The compiler operates on code, not on files. E.g., you can type at the REPL:
CL-USER> (compile nil (list 'lambda (list 'x) (list '+ 'x 'x)))
#<FUNCTION (LAMBDA (X)) {100460E24B}>
NIL
NIL
There's no file involved at all. However, there is also a compile-file function, but notice that its description is: "compile-file transforms the contents of the file specified by input-file into implementation-dependent binary data which are placed in the file specified by output-file." The contents of the file are compiled. Then that compiled file can be loaded. (You can also load uncompiled source files, too.)
I think your question might boil down to asking under what circumstances would compile-file generate a file with different contents. I think that's really implementation dependent, and it's not really predictable. I don't know that your characterization of compilers for other languages necessarily holds either: "In C/C++, if the source files don't change, the binary output of a C/C++ compiler will also not change, assuming the underlying system remains the same." What if the compiler happens to include a timestamp into the output in some data segment? Then you'd get different binary output every time.
It's true that some common scripted compilation/build systems (e.g., make and similar) will check whether previous output can be reused based on whether the input files have changed in the meantime. That doesn't really say what the compiler does, though.
The rules are pretty much the same, but in Common Lisp it's not the practice to separate declarations from implementation, so usually you must recompile every dependency to be sure. This is a shared practical consequence of dynamic environments. Imagining there was such separation in place, the following are blatant examples (clearly not exhaustive) of changes that require recompiling specific dependent files, as the output may be different:
A changed package definition
A changed macro character, or a change in its code
A changed macro
Adding or removing an inline or notinline declaration
A change in a global type or function type declaration
A changed function used in #., defvar, defparameter, defconstant, load-time-value, an eql specializer, make-load-form generated code, defmacro et al. (e.g. setf expanders)
A change in the Lisp compiler, or in the base image
I mean, you can see it's not trivial to determine which files need to be recompiled. Sometimes the answer is "all subsequent files", e.g. changing the " (double-quote) macro character, which might affect every literal string, or the compiler evolving in a non-backwards-compatible way.
In essence, we end where we started: you can only be sure with a full recompile and by not reusing fasls across compilations. And sometimes that's faster than determining the minimum set of files that need to be recompiled.
In practice, you end up compiling single definitions a lot in development (e.g. with SLIME) and not recompiling files when there's a fasl as old as or younger than the source file. Many times, you reuse files from e.g. Quicklisp. But for testing and deployment, I advise clearing all fasls and recompiling everything.
There have been efforts to automate minimum-dependency compilation with SBCL, but I think it's too slow when you change the interim projects more often than not (it involves a lot of forking, so on Windows it's either infeasible or very slow). However, it may be a time saver for base libraries that rarely change, if at all. Another approach is to make custom base images with base libraries built in, i.e. those you always load. It'll save both compilation and load times.
What is the best way to manage a large quantity of constants
I am currently working on a very complex program that processes rows from an input table and has a huge number of possible outcomes for each record. Because of this I have a very large number of constants defined for the outcome messages. There is one success message for the record, but a multitude of possible warnings and errors. My first thought was to define all of my constants for these messages at the package body level, but then I decided to move each constant to the procedure where it is used. I'm now second guessing that decision and thinking of moving everything back to package body level. What is the best way to define this many constants? Ease of maintainability is my ultimate goal for this program since it is so complex.
I think this is a matter of taste. In my application I put all error codes into an Error-Package. All main and commonly used constants I put into a separate package (without a package body).
Again, a matter of taste, but I tend to put a list of named constants at the package spec level rather than the package body so that they can be referenced by any portion of the application. If I ever want to change the error code that c_err_for_specific_reason_x uses, it becomes a single place to do so. If I wanted to hide the codes and put them within the body, I would have a get_error_code(p_get_error_name varchar) function that did the translation based on you passing a valid constant name. I've done both on different projects, but tend towards the list over the function most times. I tend to use the function if it is a table-driven source of the data.
It ... wait for it ... depends! Since you currently define your constants in the package body, you don't need them to be publicly accessible outside the package, so defining them in a spec really doesn't buy you anything. Here is the rule I follow: define constants within the smallest scope needed.
If a constant is used only within one procedure, define it in that procedure.
If it is used within more than one procedure, define it in the body.
If it is used elsewhere by code in other packages (or non-packaged SPs), but only when using a particular package, define it in the spec of that package.
If it is used by other code for general use, put it in a separate spec of such general constants.
What are your language "hangups"? [closed]
I've read some of the recent language vs. language questions with interest... Perl vs. Python, Python vs. Java, Can one language be better than another? One thing I've noticed is that a lot of us have very superficial reasons for disliking languages. We notice these things at first glance and they turn us off. We shun what are probably perfectly good languages as a result of features that we'd probably learn to love or ignore in 2 seconds if we bothered. Well, I'm as guilty as the next guy, if not more. Here goes:
Ruby: All the Ruby example code I see uses the puts command, and that's a sort of childish Yiddish anatomical term. So as a result, I can't take Ruby code seriously even though I should.
Python: The first time I saw it, I smirked at the whole significant whitespace thing. I avoided it for the next several years. Now I hardly use anything else.
Java: I don't like identifiersThatLookLikeThis. I'm not sure why exactly.
Lisp: I have trouble with all the parentheses. Things of different importance and purpose (function declarations, variable assignments, etc.) are not syntactically differentiated and I'm too lazy to learn what's what.
Fortran: uppercase everything hurts my eyes. I know modern code doesn't have to be written like that, but most example code is...
Visual Basic: it bugs me that Dim is used to declare variables, since I remember the good ol' days of GW-BASIC when it was only used to dimension arrays.
What languages did look right to me at first glance? Perl, C, QBasic, JavaScript, assembly language, BASH shell, FORTH.
Okay, now that I've aired my dirty laundry... I want to hear yours. What are your language hangups? What superficial features bother you? How have you gotten over them?
I hate Hate HATE "End Function" and "End IF" and "If... Then" parts of VB. I would much rather see a curly bracket instead.
PHP's function name inconsistencies.
// common parameters back-to-front
in_array(needle, haystack);
strpos(haystack, needle);
// _ to separate words, or not?
filesize(); file_exists;
// super globals prefix?
$GLOBALS; $_POST;
I never really liked the keywords spelled backwards in some scripting shells if-then-fi is bad enough, but case-in-esac is just getting silly
I just thought of another... I hate the mostly-meaningless URLs used in XML to define namespaces, e.g. xmlns="http://purl.org/rss/1.0/"
Pascal's Begin and End. Too verbose, not subject to bracket matching, and worse, there isn't a Begin for every End, e.g.:
Type
  foo = Record
    // ...
  end;
Although I'm mainly a PHP developer, I dislike languages that don't let me do enough things inline. E.g.:
$x = returnsArray();
$x[1];
instead of
returnsArray()[1];
or
function sort($a, $b) {
    return $a < $b;
}
usort($array, 'sort');
instead of
usort($array, function($a, $b) { return $a < $b; });
I like object-oriented style. So it bugs me in Python to see len(str) to get the length of a string, or splitting strings like split(str, "|") in another language. That is fine in C; it doesn't have objects. But Python, D, etc. do have objects and use obj.method() other places. (I still think Python is a great language.) Inconsistency is another big one for me. I do not like inconsistent naming in the same library: length(), size(), getLength(), getlength(), toUTFindex() (why not toUtfIndex?), Constant, CONSTANT, etc. The long names in .NET bother me sometimes. Can't they shorten DataGridViewCellContextMenuStripNeededEventArgs somehow? What about ListViewVirtualItemsSelectionRangeChangedEventArgs? And I hate deep directory trees. If a library/project has a 5 level deep directory tree, I'm going to have trouble with it.
C and C++'s syntax is a bit quirky. They reuse operators for different things. You're probably so used to it that you don't think about it (nor do I), but consider how many meanings parentheses have:
int main()        // function declaration / definition
printf("hello")   // function call
(int)x            // type cast
2*(7+8)           // override precedence
int (*)(int)      // function pointer
int x(3)          // initializer
if (condition)    // special part of syntax of if, while, for, switch
And if in C++ you saw
foo<bar>(baz(), baaz)
you couldn't know the meaning without the definitions of foo and bar:
the < and > might be a template instantiation, or might be less-than and greater-than (unusual but legal)
the () might be a function call, or might be just surrounding the comma operator (i.e. perform baz() for side effects, then return baaz)
The silly thing is that other languages have copied some of these characteristics!
Java, and its checked exceptions. I left Java for a while, dwelling in the .NET world, then recently came back. It feels like, sometimes, my throws clause is more voluminous than my method content.
There's nothing in the world I hate more than php. Variables with $, that's one extra odd character for every variable. Members are accessed with -> for no apparent reason, one extra character for every member access. A freakshow of language really. No namespaces. Strings are concatenated with .. A freakshow of language.
All the []s and #s in Objective C. Their use is so different from the underlying C's native syntax that the first time I saw them it gave the impression that all the object-orientation had been clumsily bolted on as an afterthought.
I abhor the boilerplate verbosity of Java:
writing getters and setters for properties
checked exception handling and all the verbiage that implies
long lists of imports
Those, in connection with the Java convention of using veryLongVariableNames, sometimes have me thinking I'm back in the 80's, writing IDENTIFICATION DIVISION. at the top of my programs.
Hint: If you can automate the generation of part of your code in your IDE, that's a good hint that you're producing boilerplate code. With automated tools, it's not a problem to write, but it's a hindrance every time someone has to read that code - which is more often. While I think it goes a bit overboard on type bureaucracy, Scala has successfully addressed some of these concerns.
Coding Style inconsistencies in team projects. I'm working on a large team project where some contributors have used 4 spaces instead of the tab character. Working with their code can be very annoying - I like to keep my code clean and with a consistent style. It's bad enough when you use different standards for different languages, but in a web project with HTML, CSS, Javascript, PHP and MySQL, that's 5 languages, 5 different styles, and multiplied by the number of people working on the project. I'd love to re-format my co-workers code when I need to fix something, but then the repository would think I changed every line of their code.
It irritates me sometimes how people expect there to be one language for all jobs. Depending on the task you are doing, each language has its advantages and disadvantages. I like the C-based syntax languages because it's what I'm most used to and I like the flexibility they tend to bestow on the developer. Of course, with great power comes great responsibility, and having the power to write 150 line LINQ statements doesn't mean you should. I love the inline XML in the latest version of VB.NET although I don't like working with VB mainly because I find the IDE less helpful than the IDE for C#.
If Microsoft had to invent yet another C++-like language in C# why didn't they correct Java's mistake and implement support for RAII?
Case sensitivity. What kinda hangover do you need to think that differentiating two identifiers solely by caSE is a great idea?
I hate semi-colons. I find they add a lot of noise and you rarely need to put two statements on a line. I prefer the style of Python and other languages... end of line is end of a statement.
Any language that can't fully decide if Arrays/Loop/string character indexes are zero based or one based. I personally prefer zero based, but any language that mixes the two, or lets you "configure" which is used, can drive you bonkers. (Apache Velocity - I'm looking in your direction!)
snip from the VTL reference (default is 1, but you can set it to 0):
# Default starting value of the loop
# counter variable reference.
directive.foreach.counter.initial.value = 1
(try merging 2 projects that used different counter schemes - ugh!)
In no particular order...
OCaml
Tuple definitions use * to separate items rather than ,. So, ("Juliet", 23, true) has the type (string * int * bool).
For being such an awesome language, the documentation has this haunting comment on threads: "The threads library is implemented by time-sharing on a single processor. It will not take advantage of multi-processor machines. Using this library will therefore never make programs run faster." JoCaml doesn't fix this problem. ^^^ I've heard the Jane Street guys were working to add concurrent GC and multi-core threads to OCaml, but I don't know how successful they've been. I can't imagine a language without multi-core threads and GC surviving very long.
No easy way to explore modules in the toplevel. Sure, you can write module q = List;; and the toplevel will happily print out the module definition, but that just seems hacky.
C#
Lousy type inference. Beyond the most trivial expressions, I have to give types to generic functions.
All the LINQ code I ever read uses method syntax, x.Where(item => ...).OrderBy(item => ...). No one ever uses expression syntax, from item in x where ... orderby ... select. Between you and me, I think expression syntax is silly, if for no other reason than that it looks "foreign" against the backdrop of all other C# and VB.NET code.
LINQ
Every other language uses the industry-standard names Map, Fold/Reduce/Inject, and Filter. LINQ has to be different and uses Select, Aggregate, and Where.
Functional Programming
Monads are mystifying. Having seen the Parser monad, Maybe monad, State, and List monads, I can understand perfectly how the code works; however, as a general design pattern, I can't seem to look at problems and say "hey, I bet a monad would fit perfect here".
Ruby
GRRRRAAAAAAAH!!!!! I mean... seriously.
VB
Module Hangups
    Dim _juliet as String = "Too Wordy!"

    Public Property Juliet() as String
        Get
            Return _juliet
        End Get
        Set (ByVal value as String)
            _juliet = value
        End Set
    End Property
End Module
And setter declarations are the bane of my existence. Alright, so I change the data type of my property -- now I need to change the data type in my setter too? Why doesn't VB borrow from C# and simply incorporate an implicit variable called value?
.NET Framework
I personally like the Java casing convention: classes are PascalCase, methods and properties are camelCase.
In C/C++, it annoys me how there are different ways of writing the same code. e.g.
if (condition)
{
    callSomeConditionalMethod();
}
callSomeOtherMethod();
vs.
if (condition)
    callSomeConditionalMethod();
callSomeOtherMethod();
equate to the same thing, but different people have different styles. I wish the original standard was more strict about making a decision about this, so we wouldn't have this ambiguity. It leads to arguments and disagreements in code reviews!
I found Perl's use of "defined" and "undefined" values to be so useful that I have trouble using scripting languages without it. Perl:
($lastname, $firstname, $rest) = split(' ', $fullname);
This statement performs well no matter how many words are in $fullname. Try it in Python, and it explodes if $fullname doesn't contain exactly three words.
SQL: they say you should not use cursors, and when you do, you really understand why... it's so heavy going!
DECLARE mycurse CURSOR LOCAL FAST_FORWARD READ_ONLY
FOR
SELECT field1, field2, fieldN
FROM atable
OPEN mycurse
FETCH NEXT FROM mycurse INTO @Var1, @Var2, @VarN
WHILE @@fetch_status = 0
BEGIN
    -- do something really clever...
    FETCH NEXT FROM mycurse INTO @Var1, @Var2, @VarN
END
CLOSE mycurse
DEALLOCATE mycurse
Although I program primarily in Python, it irks me endlessly that lambda bodies must be expressions.
I'm still wrapping my brain around JavaScript, and as a whole, it's mostly acceptable. Why is it so hard to create a namespace? In TCL they're just ugly, but in JavaScript, it's actually a rigmarole AND completely unreadable.
In SQL, how come everything is just one, huge freekin SELECT statement?
In Ruby, I very strongly dislike how methods do not require self. to be called on the current instance, but properties do (otherwise they will clash with locals); i.e.:
def foo() 123 end
def foo=(x) end

def bar()
  x = foo()   # okay, same as self.foo()
  x = foo     # not okay, reads unassigned local variable foo
  foo = 123   # not okay, assigns local variable foo
end
To my mind, it's very inconsistent. I'd rather prefer to either always require self. in all cases, or to have a sigil for locals.
Java's packages. I find them complex, more so because I am not a corporation. I vastly prefer namespaces. I'll get over it, of course - I'm playing with the Android SDK, and Eclipse removes a lot of the pain. I've never had a machine that could run it interactively before, and now I do I'm very impressed.
Prolog's if-then-else syntax:
x -> y ; z
The problem is that ";" is the "or" operator, so the above looks like "x implies y or z".
Java
Generics (the Java version of templates) are limited. I cannot call methods of the class and I cannot create instances of the class. Generics are used by containers, but I can use containers of instances of Object.
No multiple inheritance. If a use of multiple inheritance does not lead to the diamond problem, it should be allowed. It should be possible to write a default implementation of interface methods. An example of the problem: the interface MouseListener has 5 methods, one for each event. If I want to handle just one of them, I have to implement the 4 other methods as empty methods.
It does not allow choosing to manually manage the memory of some objects.
The Java API uses complex combinations of classes to do simple tasks. For example, if I want to read from a file, I have to use many classes (FileReader, FileInputStream).
Python
Indentation is part of the syntax. I would prefer to use the word "end" to indicate the end of a block, and then the word "pass" would not be needed.
In classes, the word "self" should not be needed as an argument of functions.
C++
Headers are the worst problem. I have to list the functions in a header file and implement them in a cpp file. A class cannot hide its dependencies: if a class A uses a class B privately as a field, then including the header of A includes the header of B too.
Strings and arrays came from C; they do not provide a length field. It is difficult to control whether std::string and std::vector will use the stack or the heap. I have to use pointers with std::string and std::vector if I want to use assignment, pass one as an argument to a function, or return one, because the "=" operator copies the entire structure.
I cannot control the constructor and destructor. It is difficult to create an array of objects without a default constructor, or to choose which constructor to use with if and switch statements.
In most languages, file access. VB.NET is the only language so far where file access makes any sense to me. I do not understand why if I want to check if a file exists, I should use File.exists("") or something similar instead of creating a file object (actually FileInfo in VB.NET) and asking if it exists. And then if I want to open it, I ask it to open: (assuming a FileInfo object called fi) fi.OpenRead, for example. Returns a stream. Nice. Exactly what I wanted. If I want to move a file, fi.MoveTo. I can also do fi.CopyTo. What is this nonsense about not making files full-fledged objects in most languages? Also, if I want to iterate through the files in a directory, I can just create the directory object and call .GetFiles. Or I can do .GetDirectories, and I get a whole new set of DirectoryInfo objects to play with. Admittedly, Java has some of this file stuff, but this nonsense of having to have a whole object to tell it how to list files is just silly. Also, I hate ::, ->, => and all other multi-character operators except for <= and >= (and maybe -- and ++).
[Disclaimer: I only have a passing familiarity with VB, so take my comments with a grain of salt] I Hate How Every Keyword In VB Is Capitalized Like This. I saw a blog post the other week (month?) about someone who tried writing VB code without any capital letters (they did something to a compiler that would let them compile VB code like that), and the language looked much nicer!
My big hangup is MATLAB's syntax. I use it, and there are things I like about it, but it has so many annoying quirks. Let's see. Matrices are indexed with parentheses. So if you see something like Image(350,260), you have no clue from that whether we're getting an element from the Image matrix, or if we're calling some function called Image and passing arguments to it. Scope is insane. I seem to recall that for loop index variables stay in scope after the loop ends. If you forget to stick a semicolon after an assignment, the value will be dumped to standard output. You may have one function per file. This proves to be very annoying for organizing one's work. I'm sure I could come up with more if I thought about it.