I think I finally know what I want in a compiled programming language: a fast compiler. I get the feeling that this is a really superficial thing to care about, but some time after switching from Java to Scala for a while I realized that being able to make a small change in the code and immediately run the program is actually quite important to me. Besides Java and Go, I don't know of any languages that really value compile speed.
Delphi/Object Pascal. Make a change, press F9 and it runs; you don't even notice the compile time. A full rebuild of a fairly substantial project of ours takes on the order of 10-20 seconds, even on a fairly wimpy machine.
There's an open source variant available at www.freepascal.org. I've not messed with it, but it's reportedly just as fast; it's the design of the Pascal language that allows this.
Java isn't fast at compiling. The feature you are looking for is probably hot replacement/redeployment while coding. Eclipse recompiles just the files you changed.
You could try some interpreted languages. They usually don't require compiling at all.
I wouldn't choose a language based on compilation speed...
Java does not have the fastest compiler out there.
Pascal (and its close relatives) is designed to be fast - it can be compiled in a single pass. Objective Caml is known for its compilation speed (and there is a REPL too).
On the other hand, what you really need is a REPL, not fast recompilation and re-linking of everything. So you may want to try a language which supports incremental compilation. Clojure fits well (and it is built on top of the same JVM you're used to). Common Lisp is another option.
I'd like to add that there are official compilers for languages and unofficial ones made by different people. Because of this, performance varies from compiler to compiler.
If we're talking just about the official compilers, I'd say it's probably Fortran. It's very old, but it's still used in most science and engineering projects because it is one of the fastest languages. C and C++ probably come in tied for second because they're also used in science and engineering.
I was reading about how people were having trouble finding developers to work with COBOL when maintaining government systems that still use it. I was also reading about how Fortran, a language created two years before COBOL, is interoperable with C, C++, R, and Python with the right libraries.
This allows Fortran code to work with modern programming languages to some degree, and even lets you write code in modern languages that works alongside Fortran, making it easier for Fortran novices to work with it. Are there any particular issues that prevent COBOL from having similar interoperability with other programming languages like SQL (which is used for databases, much as COBOL often is), which would make it easier for modern programmers who might not normally learn COBOL to work with it?
Q1: Does anything prevent interoperability between modern languages and COBOL?
A1: Short answer, similar to those above: no, it is actually done quite often.
But that may depend on what the reader takes "modern language" to mean.
Even with "real" COBOL (not some "shiny" [may be read as "blending"] "managed COBOL") you are in most cases free to directly call any C functions so more or less can call anything (at least with a C wrapper) and also can call binaries as you can do on the operating system (`CALL 'SYSTEM' USING 'some-executabe-or-script "param1" "param2"' is a common extension).
For calling into any "native code" directly (like Win32 or POSIX), you obviously have to ensure you are using the correct parameter definitions, but COBOL 2002+ has things like USAGE SIGNED-LONG, USAGE POINTER and similar (the extension USAGE COMP-5 is also common here).
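To make that parameter matching concrete, here is a minimal sketch of the C side of such a call (the function and parameter names are made up for illustration; on the COBOL side the items would need matching declarations, e.g. BINARY-LONG/COMP-5 fields passed BY REFERENCE):

```c
/* Hypothetical C function a COBOL program might invoke with
 *   CALL 'add_totals' USING A, B, RESULT
 * COBOL passes items BY REFERENCE by default, so the C side receives
 * pointers; the COBOL items must use a matching USAGE (e.g. BINARY-LONG
 * or COMP-5 for a 32-bit integer). */
#include <stdint.h>

int32_t add_totals(const int32_t *a, const int32_t *b, int32_t *result)
{
    *result = *a + *b;
    return 0;   /* the int return value shows up in RETURN-CODE on the COBOL side */
}
```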
Additionally, there are often direct ways to interoperate with socket servers, HTTP(S), XML, JSON, ...; and many COBOL implementations also allow you to ASSIGN a (line-)sequential file to a pipe, so you can interact with other programs that way, too.
Q2: Are there any particular issues that prevent COBOL from having [...] interoperability with [...] SQL?
A2: No, and SQL is very commonly used directly in COBOL via EXEC SQL.
Many people will say that SQL is not a "programming language". It is a query language and may be used in different environments, including COBOL.
Depending on the environment used, EXEC SQL may be integrated directly into the COBOL environment, or handled by a pre-parser that rewrites the code into plain COBOL (normally CALLing some "native" code, see Q1).
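The same precompiler mechanism exists outside COBOL, which may make it easier to picture: PostgreSQL's ecpg, for example, rewrites EXEC SQL sections in a C source file into plain C calls into its runtime library, much as a COBOL SQL precompiler rewrites them into plain COBOL CALLs. A rough sketch (database, table and variable names are made up):

```c
/* embedded.pgc - run through the ecpg preprocessor, which rewrites the
 * EXEC SQL sections into ordinary C calls into libecpg, analogous to
 * what a COBOL SQL precompiler does with EXEC SQL ... END-EXEC. */
EXEC SQL INCLUDE sqlca;

int main(void)
{
    EXEC SQL BEGIN DECLARE SECTION;
    int customer_count;                 /* host variable (hypothetical) */
    EXEC SQL END DECLARE SECTION;

    EXEC SQL CONNECT TO testdb;
    EXEC SQL SELECT count(*) INTO :customer_count FROM customers;
    EXEC SQL DISCONNECT ALL;
    return 0;
}
```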
Q3: [... stuff] that would make it easier for modern programmers who might not normally learn COBOL to work with it?
... this is a completely different question, whatever a "modern programmer" is.
For a programmer to get to know a programming language, it all depends on the programmer and the resources (like time, manuals, tutorials, mentors) - and the will of the programmer. Many people actually don't "want" to learn COBOL (for reasons I've heard but either don't understand or disagree with); others lack some of the resources (a free compiler is available with GnuCOBOL, nearly all COBOL compilers have their manuals available online, and the ISO working group for COBOL publishes its draft standards online, too; you can often find mentors in COBOL discussion forums or mailing lists, along with many samples).
One thing that is often special with COBOL is not the language itself but the environment it is used in (a "mainframe" with a job control language, JCL, instead of a GUI to click or a shell to use) and/or the software that is actually coded in COBOL; every piece of software that is maintained over decades has "special ways" here and there, and if you get into decades-old code that wasn't actually maintained for years, you get even more trouble/fun (this is not COBOL specific, but with COBOL you may encounter such software more often).
No, there is nothing preventing interoperability.
The main reason (this is an opinion, not based on known facts) that Fortran seems to have more interop out of the box is that there was a free software GNU Fortran compiler for interested parties to work with. COBOL was very late in the game getting a viable free software compiler. That is no longer an issue with GnuCOBOL, and people are finally starting to write the code needed to catch up.
Adding to Simon's answer: a proof of concept for direct embedding is in a branch of GnuCOBOL; intrinsic functions have been added to support FUNCTION TCL, FUNCTION PYTHON, FUNCTION REXX, FUNCTION LUA and FUNCTION JVM, so far. With FUNCTION JVM, tests for Scala, Groovy, Java and Frink all worked. This allows data transfer between COBOL working storage and the other language engine using simple COBOL syntax, including setups for callbacks to and from. Those functions are embedded into the compiler and the libcob run-time when using that branch.
For other interface trials, not built into the compiler but still allowing interop, the GnuCOBOL FAQ has dozens of examples. Shakespeare? Yep. Falcon? Yep. C? Well, GnuCOBOL emits intermediate C, so that's covered in spades. There is also a C++ edition of the compiler, so C++ is covered in spades as well. JavaScript? Jsish, Duktape, SpiderMonkey and QuickJS, to name a few of the trials.
Ada, D, Vala, Genie, S-Lang, ROOT/CINT, J, Gambas, Forth, Perl, Postscript, Pure, Icon and Unicon, Nim, BaCon, SWIG (which opens up many multiples), PARI/GP, Gretl, R, Red, Ruby, Haxe/Neko, Pascal, Erlang, Elixir, SQLite, Rust, Go, more..., including a fair number of esolangs, and GNU Lightning for on the fly assembly modules. Trials documented in the GnuCOBOL FAQ.
Framework interfacing with AWT/Swing, GTK, Agar, and things like ZeroMQ, CGI and websockets has also proved successful and is in productive use, along with at least 7 EXEC SQL preprocessors successfully tested and in use.
It comes down to someone caring to try and writing some glue or properly aligning call frames. No attempt I've tried has failed to produce satisfactory results, although Perl 5 was a hair-pull of unraveling macro layers. (OK, I just lied: while attempting to embed jq, which relies on C's call-and-return-by-struct features, I would have had to leave pure COBOL interface coding, and I didn't bother with the C middleware that would have made it easy.) ;-) Will do that someday though, as jq is quite the powerful little JSON handler.
Use the search engine you mistrust the least and look for "gnu-cobol-builtin-script" and "GnuCOBOL FAQ", and visit the hits on SourceForge.
In my particular explorations I usually focus on languages with a C Application Binary Interface, but other ABIs would be along a similar vein. It only takes sitting down and writing some middleware or figuring out how to properly synch the call frames.
Are these current samples perfect? Not always, there are edge and corner cases with some datatypes and COBOL PICTURE data that would require more work, but that is all; a little bit of work and testing to smooth over the bumps. When exploring, I don't always go that far until an actual need arises. These seed work experiments are just to get some proof in the pudding, all done for the simple joy of it.
One of the lead developers for GnuCOBOL just added uni and bi-directional piping using simple filenames, which provides access to whatever the base OS offers, using basic COBOL OPEN/READ/WRITE/CLOSE (and other file IO) statements. Code was committed to trunk just a few hours before I started typing this response.
Basically, the answer to the titular question is a resounding No.
The scenario involved in the governmental systems is most likely IBM mainframe hardware with a flavor of z/OS, z/VSE, or z/VM operating system.
It somewhat depends on what is meant by interoperability in the sense that most any modern mainframe supports TCP/IP and that pretty much opens up the whole networked computing ecosystem to networked interoperability.
My guess is when all is said and done, the reason there is a problem is that the state refuses to pay a market rate for experienced mainframe developers and has kicked the maintenance can down the road as cost-saving measures.
It most likely is not a matter of there being no mainframe COBOL professionals able to make the systems work; it's most likely the state won't pay the price.
But this is speculation on my part since all I know is that the governor blames inanimate objects for appropriations and management failures within the state IT administration.
As a 40-year mainframe veteran, I'm dying to know details as to how this perfectly good technology is at fault for problems dealing with (again, I assume) unprecedented volumes of processing demand.
We found an interoperability problem between C and GnuCOBOL.
Our problem has since been addressed, so this answer is just for educational purposes, to show what kind of problems you may run into.
The problem manifests when C calls COBOL(a, b), which calls C(c), which calls COBOL(a, b).
And specifically when the number of arguments varies.
A recent change to GnuCOBOL assumed that COBOL called COBOL, so it passed metadata about the arguments in a global area. The called COBOL program then cleared out the second argument because it falsely thought it was being called with one argument. That is, the intermediate C call was transparent to COBOL.
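As a sketch of the shape of that chain (all names are hypothetical; the COBOL programs would be compiled by GnuCOBOL into C-callable modules, and a C main program normally initialises the runtime with cob_init from libcob first):

```c
/* Shape of the failing chain: C -> COBOL(a,b) -> C(c) -> COBOL(a,b).
 * COBPROG stands for a two-parameter COBOL program compiled by GnuCOBOL;
 * all names are made up for illustration. */
#include <libcob.h>

extern int COBPROG(char *a, char *b);   /* COBOL program, two parameters */

int c_middle(char *c)                   /* plain C, one parameter;
                                           CALLed from inside COBPROG    */
{
    static char a[4] = "AA", b[4] = "BB";
    (void)c;
    /* The outer COBOL CALL to this one-argument C function set the
     * runtime's global argument count to 1; this plain C call does not
     * update it, so the inner COBOL program believed it had received
     * only one argument and cleared out b. */
    return COBPROG(a, b);
}

int main(int argc, char **argv)
{
    char a[4] = "AA", b[4] = "BB";
    cob_init(argc, argv);               /* initialise the GnuCOBOL runtime */
    return COBPROG(a, b);               /* outer C -> COBOL call */
}
```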
This was done under the guise of making it more compatible with IBM mainframe COBOL, but it caused me a lot of grief. It was quickly addressed with runtime changes. I would like to see it addressed with a compile-time option:
Make the .so file a standalone .so that can be called from any language, but the programmer has to be vigilant.
Make the .so file assume it will be called from COBOL and has the additional protections afforded by mainframe COBOL.
BTW: GnuCOBOL is great and has a great community behind it. If you experience problems, report them and you will get a better response than you would from commercial products.
Compiling a program to bytecode instead of native code enables a certain level of portability, so long as a fitting virtual machine exists.
But I'm kinda wondering, why delay the compilation? Why not simply compile the byte code when installing an application?
And if that is done, why not apply the same approach to languages that compile directly to native code? Compile them to an intermediate format, distribute a "JIT" compiler with the installer and compile the code on the target machine.
The only thing I can think of is runtime optimization. That's about the only major thing that can't be done at installation time. Thoughts?
Often it is precompiled. Consider, for example, precompiling .NET code with NGEN.
One reason for not precompiling everything would be extensibility. Consider those languages which allow use of reflection to load additional code at runtime.
Some JIT compilers (Java HotSpot, for example) use type-feedback-based inlining. They track which types are actually used in the program and inline function calls based on the assumption that what they saw earlier is what they will see later. In order for this to work, they need to run the program through a number of iterations of its "hot loop" to learn which types are used.
This optimization is totally unavailable at install time.
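A rough C illustration of why this needs runtime information (the types and functions below are made up): at an indirect call site, an ahead-of-time compiler generally only knows that some function pointer will be called, so there is nothing to inline at install time, whereas a JIT that has watched the site hit the same target over and over can speculatively inline that target behind a cheap guard.

```c
/* An indirect call site of the kind JIT type feedback speculates on;
 * the function pointer stands in for a virtual/interface call. */
#include <stdio.h>

typedef struct Shape {
    double (*area)(const struct Shape *self);   /* "vtable" slot */
    double width, height;
} Shape;

static double rect_area(const Shape *s) { return s->width * s->height; }

static double total_area(const Shape **shapes, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        /* An AOT compiler generally sees only "some function pointer" here.
         * A JIT that has observed rect_area every time can inline it,
         * guarded by a check that shapes[i]->area == rect_area. */
        sum += shapes[i]->area(shapes[i]);
    }
    return sum;
}

int main(void)
{
    Shape r = { rect_area, 3.0, 4.0 };
    const Shape *shapes[] = { &r, &r, &r };
    printf("%f\n", total_area(shapes, 3));
    return 0;
}
```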
The bytecode has already been compiled, just as the C++ code has been compiled.
Also, the JIT compiler, i.e. the .NET and Java runtimes, is massive and dynamic; and you can't foresee which parts of the runtime an app will use, so you need the entire runtime.
Also one has to realize that a language targeted to a virtual machine has very different design goals than a language targeted to bare metal.
Take C++ vs. Java.
C++ wouldn't work on a VM; in particular, a lot of the C++ language design is geared towards RAII.
Java wouldn't work on bare metal for so many reasons - primitive types, for one.
EDIT: As delnan correctly points out, JIT and similar technologies, though hugely beneficial to bytecode performance, would likely not be available at install time. Also, compiling for a VM is very different from compiling to native code.
What are the best IDEs / IDE plugins / tools, etc. for programming with CUDA / MPI, etc.?
I've been working in these frameworks for a short while but feel like the IDE could be doing more heavy lifting in terms of scaling and job processing interactions.
(I usually use Eclipse or NetBeans, usually in C/C++ with occasional Java, and it's a vague question but I can't think of any more specific way to put it.)
This is not really an answer, but I feel so confined by the comment box ...
I do a fair amount of MPI programming, OpenMP too, but not CUDA and GPU stuff. I write mainly Fortran, some C++. I'm still using Emacs as my editor, and for the other things that Emacs does well. I use a separate parallel debugger (DDT, I've used TotalView in the past, more a question of which one is on the machine than which one I prefer) and a performance profiling tool called OPT (like DDT produced by Allinea Software).
I have looked, though not for a year or so, for plug-ins for NetBeans and Eclipse (former preferred, latter too Java-centric and too heavy these days) for parallel programming. What's out there is better for C++ than for Fortran. But I haven't yet come across any plug-in which has really made it far enough out of the research lab to be useful enough to make me change from the old ways.
I'll be as interested as you to see what other SOers recommend, though right now it doesn't look very promising.
I've lately been feeling the need to learn a build tool. I'm looking through Stack Overflow for recommendations and GNU Make barely gets mentioned. Instead I see Ant, Maven, CMake, SCons and many others. However, when I look at the little "rogue sources" (as in not-in-the-repo) that I sometimes have to compile, they all require the make && make install steps.
Is learning Make a worse investment of my time than learning another tool?
If so why is Make still so popular?
Make is the standard build tool for everything C/C++. Many others have stepped to the plate, but even when they were useful and successful, they never achieved the ubiquity of make.
Make is installed on virtually every Unix-like machine out there. No matter if you're working with AIX, Solaris, Irix, BSD, or Linux, if there's a compiler installed, there's also make.
Some of the "replacements" (like Automake, CMake) even create Makefiles, which are in turn executed by make.
I would definitely recommend becoming familiar with make. If handled by someone who took the time to learn about make, it is a powerful tool, which can be used in a number of ways not even necessarily related to software development.
Even if you end up using a different build tool in the end, you will be able to "recycle" the lessons learned with make, as the underlying concepts are quite similar. And the sheer number of make-built projects means that there will always be the chance that you have to figure out an existing Makefile.
One thing, though. Get it right from the beginning.
I think the reason you don't see (GNU) make mentioned is that it's often the default; if you have a GNU toolchain, you will have make already. Thus, most people that start talking about build tools, talk about something else.
In my experience, make is fine, but it can be kind of tricky to get it to do exactly what you want to. It's maybe slightly arcane, but it's proven and works.
Make is popular because it's used (mainly) for C/C++ sources in Linux/*nix projects, and is far older than any of the other tools you've mentioned, thus it has stood the test of time and is mature. Kinda like tar.
To be honest with you, I only know make. Those other tools above may be better, but so many projects just use a basic Makefile that you're best off knowing at least a little bit of it. Not only for your own projects at work but most of the open-source ones you find on the net.
It really depends how much you will use it.
If you work a lot with C/C++ make projects, then yes, I would recommend learning more about it, as a large makefile has a steeper learning curve than the other build tools you mention.
If you don't work with make, or work in other languages such as C#, Java or PHP then you'd be better off learning build tools relevant to those languages.
Like all tools, if you use it at all, you should put some time into becoming reasonably adept at it. Also, some tools (like CMake, for example) generate makefiles, and you may one day need to mess with those generated files.
GNU make has an excellent manual - it's certainly worth spending an hour or two reading it.
Make is the de facto standard on Linux systems, for example. It is a very complex tool, and also a very powerful one.
It is well suited to learn if you are developing C or C++, particularly if targeting Linux/*nix.
One of the features of make is that you can set up dependencies that determine when to rebuild a file. E.g. each C or C++ file is built into an .obj file, and in the end all .obj files are linked into an executable. Or perhaps the .obj files are first collected into a statically linked library that is then linked into another executable together with other .obj files.
Make can ensure that your compile time is as short as possible, because you can specify that a C file should only be compiled if it, or any header files it depends on, are newer than the .obj file. So any compilation or linking step is only executed if the current source files for the step are newer than the target file.
If you are developing in, for example, C#, you don't need this kind of dependency checking because all .cs files are compiled at once into a single executable.
So the conclusion is that you should use a build tool that is well suited for your choice of programming language.
Even if you end up preferring another build tool (personally I'm fond of VS... I know...) knowing make will probably prove more useful.
Make has many applications and whilst it is not always ideal for a single task, when dealing with new technologies it is stalwart and flexible.
I guess where you work is probably different, but I know that everywhere I've worked I would have been a far less valuable employee if I hadn't at least learned how to read Makefiles. Even in all Windows-VisualStudio environments, it comes up every now and then.
For instance, we just got a job that involves porting a bunch of old CX/UX code to Windows. The old code was built with makefiles. There's no way to understand their old system without knowing how to read those old makefiles.
Note: I know very little about the GCC toolchain, so this question may not make much sense.
Since GCC includes an Ada front end, and it can emit ARM code, and devKitPro is based on GCC, is it possible to use Ada instead of C/C++ for writing code on the DS?
Edit: It seems that the target that devKitARM uses is arm-eabi.
devkitPro is not a toolchain, compiler or indeed any software package. The toolchain used to target the DS is devkitARM, one of the toolchains provided by devkitPro.
It may be possible to build the Ada compiler, but I doubt very much that you'll ever manage to get anything useful running on the DS itself. devkitPro will certainly never provide an Ada compiler as part of the packages we produce.
Yes, it is possible; see my project https://github.com/Lucretia/tamp and build the cross compiler as per my script. You would then be able to target the NDS using Ada. I have built a basic RTS as well, which will provide you with local exception handling.
And @Martin Beckett, why do you think Ada is aimed squarely at DoD stuff? They dropped the mandate years ago, and Ada is easily usable for any project. You do realise that Ada is a general-purpose programming language, don't you?
(Disclaimer: I don't know Ada)
Possibly.
You might be able to build devKitPro with Ada support; however, the pre-provided binaries (at least for OS X) do not have Ada support compiled in.
However, you will probably find yourself writing tons of C "glue" code to interface with the various hardware registers and the like.
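For example, one thin piece of that glue might look like the sketch below. The register shown follows the usual libnds naming (display control at 0x04000000), but treat the exact details as illustrative; on the Ada side you would bind to the wrapper with something like pragma Import (C, Set_Display_Control, "set_display_control").

```c
/* Sketch of the kind of C "glue" an Ada program on the DS might import.
 * The register follows the usual libnds convention (display control at
 * 0x04000000); treat the exact details as illustrative. */
#include <stdint.h>

#define REG_DISPCNT (*(volatile uint32_t *)0x04000000)

/* Plain C wrapper so the Ada side can bind to it, e.g. with
 *   pragma Import (C, Set_Display_Control, "set_display_control");  */
void set_display_control(uint32_t mode)
{
    REG_DISPCNT = mode;
}
```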
One thing to consider when porting a language to the Nintendo DS is the relatively small stack it has (16 KB). There are possible workarounds, such as swapping the SRAM stack contents into DRAM (4 MB) when the stack gets full, or just keeping the whole stack in DRAM (assumed to be awfully slow).
And I second Dre on the fact that you'll have to write your own glue between the Ada library functions you'd like to use and the existing libraries on the DS (which hopefully cover most of the hardware stuff).
On a practical level, it is not possible.
On a theoretical level, you could use a custom Ada parser (I found this one on the ANTLR site, but it is quite old) to translate Ada to C/C++, and then feed that to devkitPro.
However, the effort of building such a translator would probably be equal to (if not greater than) the effort of creating the game itself.