I see many Fortran compilers available for Windows:
NAG
Absoft
Portland Group
Lahey
Intel
OpenWatcom (F77 only)
Silverfrost FTN95
Pathscale
GNU/GCC
GnuFortran
If I've missed any, feel free to edit this post.
I'd like to use at least Fortran 95, and possibly Fortran 2003. What are the differences among the compilers above?
(GCC: the generic front end; GnuFortran: the Fortran front end.)
If I've missed any, feel free to edit this post.
You definitely missed the GNU Fortran compiler (for Windows you can get binaries from this site).
I'd like to use at least Fortran 95, and possibly Fortran 2003.
Just to alert you from the very beginning: forget about full Fortran 2003 support. See the Fortran 2003 status page.
What are the differences among the compilers above?
Different prices, different platform (OS) support, different lists of extensions to the standard language, different (empty) lists of supported Fortran 2003 features, some additional features (e.g., Portland Group has a special CUDA compiler), and different optimization capabilities.
Fortran Compiler Comparisons.
As with other kinds of software, the main differences would be:
- their development status in regards to Fortran standard support
- support from the development and support teams (in this regard I was very satisfied with Intel's at my old workplace) ... their forum is a great place for seeking advice, and their staff are very helpful
- price ...
In the end, when choosing a compiler, it most often comes down to which compiler line you used on your present codebase (for example, MS Powerstation -> Digital's -> Compaq's Visual Fortran -> Intel's ...). Apart from the standard, almost all compilers come with various custom extensions of their own, graphical libraries and so on, and this is often the deciding factor.
I'd like to use at least Fortran 95, and possibly Fortran 2003. What are the differences among the compilers above?
Fortran 95 is supported by all of them, I think (not having the time to check it now, but fairly certain), and a lot of them are quite good at supporting F2008 features. However, F2008 is a minor upgrade to F2003, which was a major upgrade to the standard, so many of them still do not support all F2003 features (it will be some time yet before they do). For example, Intel has recently started claiming support for coarrays, a long-awaited feature for many users.
Therefore, at present there is no standard-compliant F2003 compiler (apart from Cray's, maybe, but that isn't available on the Windows platform), and certainly no standard-compliant F2008 compiler, since it would have to fully support both 2003 and 2008 to be able to claim that. It's a slightly complicated story to go into now.
However, in the link #kemiisto gave (and there are others as well), you can check for the individual features you need and, based on that, choose the compiler that supports your requirements.
Hope some of this makes sense.
This is a question that has been in my head for some years. I am using OCaml under Windows, and when I build each OCaml distribution version, I need a C compiler, either MSVC or MinGW, and I have to do it under Cygwin.
Once I have my OCaml in hand and need to compile my code, I also need the C linker that I used to compile OCaml itself... To me it's very strange. Why can't OCaml bootstrap itself with an older version of OCaml instead of some C compiler?
The OCaml toolchain relies on external tools to assemble and link binaries. The linker is probably more important than the assembler, as assemblers are more or less stable, but linkers are usually deeply integrated with an operating system and differ with each version. Bundling them would increase the support burden, make OCaml programs less portable, and make the whole OCaml distribution more fragile. So depending on the assembler/linker abstraction is a sort of sweet spot that minimizes dependencies and support burden and maximizes portability.
Other languages usually follow the same approach, even those that depend on LLVM, as LLVM actually uses the GNU toolchain's linker under the hood.
For building OCaml itself, the C compiler is absolutely necessary. OCaml itself is not written entirely in OCaml: the runtime, e.g., the garbage collector, is written in pure C. Also, many functions, especially those that define the system interface (e.g., the Unix module), are written in C. The sloccount tool gives a rough estimate that 15% of the OCaml source code (45,000 LOC) is written in C.
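As an illustration, here's a minimal sketch of what such a C stub looks like, in the style of the Unix module's bindings (the function name caml_demo_getpid and its OCaml-side declaration are hypothetical, made up for this example):

    /* A hypothetical C stub in the style of OCaml's Unix bindings.
     * On the OCaml side it would be declared as:
     *   external getpid : unit -> int = "caml_demo_getpid"
     */
    #include <caml/mlvalues.h>
    #include <caml/memory.h>
    #include <unistd.h>

    CAMLprim value caml_demo_getpid(value unit)
    {
        CAMLparam1(unit);
        /* Wrap the raw C result as an OCaml immediate integer. */
        CAMLreturn(Val_int(getpid()));
    }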
The OCaml bytecode interpreter is written in C - see the description in the OCaml README here.
ivg's answer says it all, but I'll just give a quick tip for Windows 10 users.
I always recommend that Windows 10 users use Ubuntu on Windows 10. You'll then have access to a fully fledged Unix environment instead of Cygwin, which includes (among other things) a built-in C toolchain.
I'd only use Windows to develop if I intend to release on Windows, which I rarely do. Even then, I'd prefer to use a cross-compiler and use Windows only for testing.
I'm an embedded software engineer working with IA-32 type processors. We are looking for a compiler toolchain, preferably free.
We used to use Mentor Graphics CodeBench Lite but it's no longer available.
We have looked at other GCC distributions, but none of them has a bare-metal implementation of glibc. None except newlib, that is, but we can't use that because of GPL and LGPL licensing issues. We're an OEM, and our customers (and we) have proprietary code.
Any suggestions welcome.
Sourcery's "lite" GPL tools are still available; it's just that Mentor likes to play hide-the-link.
If you want a lightweight C library with non-GPL licensing, you might look at Bionic from Android.
However, your concern may be mistaken. IANAL, but most C library licenses have a linking exception of some sort, which you may want to research with the help of your lawyers; their utility as system libraries would be extremely limited without one.
And actually, a quick search of the newlib licensing page (which is complicated) seems to show that more of it is under BSD-style licenses than under GPL-style ones, though care would be needed to sort it all out.
Mentor may no longer be providing a Lite edition of the IA-32 bare-metal toolchain, but I'm pretty sure it's still supported in the commercial editions, and a basic license is not that expensive.
As Chris says, the Newlib licensing page is a bit complicated -- but the gist of it is that basically all of it that you need for a bare-metal system is BSD licensed; IIRC, the parts that are GPL-licensed are clearly-delineated system-specific pieces that reference things in the Linux kernel or the like (and thus have to be GPL-licensed), and those aren't included in the bare-metal builds. I think they're even all in one or two distinct directories that you can just delete. Obviously you should do the analysis for yourself, but that's the result you should expect to find.
A shortcut that may be useful: The download page for the most recent version of CodeBench Lite for IA-32 ELF that was produced is on this page. If you download the source tarball from there, you'll get the Newlib sources that were used to build that, and there's also a .sh file in the package indicating how it was configured and built. You'll note that in the documentation (licenses are in the back of the Getting Started Guide) the Newlib binaries are simply listed as BSD-licensed, so this should show you how Mentor got a compiled library that fits that licensing description.
(Disclaimer: I used to work for Mentor until recently.)
If a particular piece of software is made to run on one platform and the programmer/company/whatever wants to port it to the other, what exactly is done? I mean, do they just rewrite the Linux- or Windows-specific references to the equivalent on the other platform? Or is an entire rewrite necessary?
Just trying to understand what makes it so cost-prohibitive that so many major vendors don't port their software to Linux (specifically thinking about Adobe)
Thanks
This is the point of a cross-platform toolkit like Qt or GTK: they provide a platform-agnostic API that delegates to whichever platform the program is compiled for.
Some companies don't use such a toolkit and write their own (for whatever reason; it could well be optimisation-related), meaning they can't just recompile their code for another OS.
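To make the toolkit point concrete, here's a minimal sketch using GTK 3's C API; the same source builds wherever GTK is available, and the toolkit does the platform-specific work underneath (the window title and program details are made up):

    #include <gtk/gtk.h>

    int main(int argc, char *argv[])
    {
        gtk_init(&argc, &argv);

        /* These calls are the same on every platform GTK supports;
           the toolkit delegates to X11/Wayland, win32 or Quartz underneath. */
        GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
        gtk_window_set_title(GTK_WINDOW(window), "Same code on every platform");
        g_signal_connect(window, "destroy", G_CALLBACK(gtk_main_quit), NULL);

        gtk_widget_show_all(window);
        gtk_main();
        return 0;
    }

On Linux this builds with something like gcc hello_gtk.c $(pkg-config --cflags --libs gtk+-3.0); the same source, rebuilt against a Windows GTK bundle, produces a native win32 window.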
There are also libraries available that ease, at least in specific areas, the porting of Windows API calls to Linux. See the Windows to Linux porting library.
In my experience, there are three main reasons why it's cost-prohibitive to take a large existing program on one platform and port it to another:
it has (not necessarily purposely) extensively used some library or API (often GUI, but there are also plenty of other things) that turns out not to exist on the other platform
it has unknowingly become riddled with dependency on nonstandard features or oddities of the compiler or other tools
it was written by somebody who didn't know that you had to use some oddball feature to get things to work on the other platform (like a Linux library that isn't sprinkled with the __declspec directives you need for a good Windows DLL; see the sketch below).
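On that last point, the usual defensive pattern is a small export macro in the library header, so the same declarations work for a Windows DLL and for a Unix shared library (the MYLIB_* names are hypothetical):

    /* mylib.h -- a hypothetical library header */
    #if defined(_WIN32)
      #if defined(MYLIB_BUILDING)   /* defined only while compiling the DLL */
        #define MYLIB_API __declspec(dllexport)
      #else
        #define MYLIB_API __declspec(dllimport)
      #endif
    #else
      /* GCC/Clang on ELF platforms: export the symbol explicitly. */
      #define MYLIB_API __attribute__((visibility("default")))
    #endif

    MYLIB_API int mylib_add(int a, int b);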
It's much easier to write a cross-platform app if you consider that a design goal from the start, and I have three specific recommendations:
Use Boost: oodles of handy things you might ordinarily get from platform-specific APIs and libraries, but by using Boost you get them cross-platform.
Do all your GUI programming using a cross-platform library. My favorite these days is Qt, but there are other worthy ones as well.
Build and test every day on both platforms; never give the code an opportunity to develop a dependency on a single platform that you discover only when it's too late.
There are many reasons why it may be very difficult to port an application to a different platform. Most often it is because some interfaces the application uses to communicate with the system are not available, and one has to either implement them oneself, port a library the application depends on, or rewrite the application so that it uses alternative functions. Most languages today are very portable across hardware architectures and operating systems, but the problem is with libraries, system calls, and potentially other interfaces the OS (or platform) provides. To be more specific:
Compilers may differ in their configuration and the standard functions they provide. On Windows the most popular C/C++ compiler is Microsoft's Visual C++, while on Unix it is gcc and llvm (in combination with a standard library such as glibc or BSD libc). They expect different flags and different forms of declaration, and they produce different file formats for executables and shared libraries. Even though C and C++ have standard libraries, these are implemented differently across platforms. There are some systems whose aim is to make compilation portable, such as Autotools, CMake and SCons.
On top of the standard libraries there are additional functions the OS provides. On Windows they are covered by the Win32 API; on Unix systems they are part of the POSIX standard, with various GNU, BSD and Linux specific extensions, and there are also plain system calls (the lowest-level interface between applications and the operating system). POSIX functions are available on Windows via systems such as Cygwin and MinGW, and Win32 API functions are available on Unix via Wine. The problem is that these implementations are not complete, and there are often minor (but important) differences (a small example follows this list).
Communication with the desktop system (in order to present a GUI) is done differently. On Linux this might be the X Window System (together with the freedesktop libraries) or Wayland, while Windows has its own systems. There are GUI libraries which try to provide an abstract interface for these, such as Qt, GTK, wxWidgets, EFL.
Other services the OS provides, such as network communication, may be implemented differently. On Windows many applications use the .NET libraries, for which there is only limited support on Unix systems. Some Unix applications rely on Linux-specific features such as systemd, /proc, KMS, cgroups, or namespaces. This limits portability even among Unix systems (Linux, the BSDs, Mac OS X, ...). Even the .NET libraries are not very compatible across versions, and they might not be available on an older version of Windows or on embedded systems. Android and iOS have entirely different interfaces.
Web applications are usually the most portable, but HTML5 is a living standard, and many interfaces may not be available yet in some browsers/web engines. This requires the use of polyfills, but it is usually much less painful than the situation with "native" applications.
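As a small example of the Win32-versus-POSIX point above: even something as trivial as sleeping for a few seconds needs conditional compilation, because the two platforms expose different calls with different units (the wrapper name portable_sleep is made up):

    #ifdef _WIN32
    #include <windows.h>
    #else
    #include <unistd.h>
    #endif

    /* portable_sleep: a made-up wrapper over two different OS interfaces. */
    void portable_sleep(unsigned seconds)
    {
    #ifdef _WIN32
        Sleep(seconds * 1000);  /* Win32 Sleep() takes milliseconds */
    #else
        sleep(seconds);         /* POSIX sleep() takes seconds */
    #endif
    }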
Because of all these limitations, porting can be pretty hard work, and sometimes it is easier to create a new application from scratch, either specifically for the other platform, using cross-platform abstraction libraries/platforms (such as Qt or Java), or as a web application (potentially bundled in something like Electron). It is a good idea to use these from the beginning, but many programmers choose not to, because such applications tend to look and behave differently from "native" applications on the platform, and they might also be slower and more restricted in how they interact with the OS.
Porting a piece of software that has not been made platform-independent up front can be an enormous task. Often the code is deeply entangled with non-portable APIs, whether third-party or just OS libraries. If the third-party vendor does not provide the API for the platform you are porting to, you are pretty much forced into a full rewrite of that functionality, or into finding another third party that is portable. That alone can be awfully costly.
Finally, porting software also means supporting it on another platform, which means hiring some specialists, and training support to answer more complex queries.
In the end, such a process can be very costly, for very little additional sales. Sadly, the decision is easy: concentrate on new functionality on your current platform that you know your customers are going to pay for.
If the software was written for a single OS, a major rework is likely. The first step is to move absolutely all platform-specific code into a single area of the code base; this area should have little or no app-specific stuff. Then rewrite this isolated portion of the code for the new target OS.
Of course, this glosses over some extremely major implications. For instance, if your first version targeted the Win32 API, then any GUI code will be heavily tied to Windows, and to maintain any hope of preserving your sanity, you will need to move all that code to a cross-platform GUI framework like Qt or GTK.
Under Mono, you can write a C# Winforms program that works on both platforms. But to make that possible, the Mono team had to write their own Winforms library that essentially duplicates all of the functions of Winforms. So there is still no free lunch.
Most software is portable to some extent. In the case of a C app, there will be a lot of #ifdefs in the platform-specific areas, apart from path changes, etc.
It is rare for the Windows and Linux versions of the same software not to share a common codebase; if they didn't, they would really only share a common name. It's always harder to maintain more codebases, but I think the actual problem with porting applications has little to do with the technical side and a lot to do with the business side. Linux has far fewer users than Windows/OS X, and most of them expect everything to be free as in beer, or simply hate commercial software on religious grounds.
When you come to think about it - most open source software is multiplatform, no matter what language was used to implement it. This speaks for itself...
P.S. Disclaimer - I'm an avid supporter of Free and Open source software, I don't want to insult anybody - I just share my perspective on the topic.
Can anyone tell me what GCC is? And what are its advantages over other compilers like Turbo C and Visual C?
The GNU Compiler Collection is an open source (GPL) compiler. It's found on a wide variety of systems, ranging from GNU/Linux to every flavor of Unix, to Windows.
GCC contains support for many languages (C, C++, Fortran, to name but a few). It's highly portable, and widely used, and tends to produce good code. It can also be used as a cross-compiler (compiling for a system other than the one running GCC).
It's the default compiler choice for most Unix-type systems because most vendors don't bother to write their own compilers anymore - GCC is just too good for general use.
Under Windows, Microsoft's own dev tools are often preferred because they get support for new technologies quicker.
In high-performance programming environments (and some embedded environments) you may want a compiler that's more highly tuned to the chip/system in question.
The GNU Compiler Collection is the set of compilers used on GNU/Linux systems. I don't know that it competes with Turbo C or Visual C, which I think only run on DOS/Windows systems.
The main advantage to a user is that GCC can be installed on (and is sometimes distributed with) nearly every GNU/Linux system and can be used to build packages that are distributed as source.
I'm sure there are advantages that programmers would recognize, but maybe that's a topic for stackoverflow.com.
[Edit]
Now that this question has been migrated, see Michael Kohne's answer for some advantages to programmers.
Big advantage of gcc over Turbo C and Visual C: it's free!*
And it's ubiquitous, especially on the various *nix environments. You can use it on Windows via either cygwin or MinGW. It compiles a truly staggering number of languages (C, C++, Ada, Java, Fortran, Objective-C), and supplies an intermediate language for Haskell.
It has been used for industrial-strength projects for decades now, so you're pretty safe with it.
*(Though, in all fairness, Microsoft does offer Visual C++ Express for free, though it is not open source.)
The real advantage of gcc over Turbo C and Visual C is its availability on platforms other than Windows, and its ability to build cross-platform binaries: you can set up a build toolchain to build Windows binaries on a Linux box with gcc, or set up a similar toolchain to build ARM binaries on an Intel box, which is definitely nice since you might not have as much power in your ARM device as you have in your development rig. With the Visual C and Turbo C compilers that's close to impossible.
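For example, the same trivial C file can be turned into binaries for several targets from one Linux box just by switching toolchains (the toolchain prefixes below are common ones, but exact names vary by distribution):

    /* hello.c
     *
     * Native build on the Linux box:          gcc hello.c -o hello
     * Windows binary (MinGW-w64 toolchain):   x86_64-w64-mingw32-gcc hello.c -o hello.exe
     * Bare-metal ARM object (arm-none-eabi):  arm-none-eabi-gcc -c hello.c -o hello.o
     */
    #include <stdio.h>

    int main(void)
    {
        printf("Hello from whichever platform this was built for\n");
        return 0;
    }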
A nice bonus to all that - gcc is open source and free.
Whereas GCC started out as the GNU C Compiler, it is now the GNU Compiler Collection, as it has compilers for C, C++, Java and Fortran, among others. Why would you want to use it? Because it's better. Better, because:
Once you have written your code using the GCC compilers, you are assured that your code will work on a lot of other OSs/platforms/architectures. You will be able to do a lot more in and with your programs than you would do in Turbo C, which is pretty much tied to Windows.
GCC is used by all of those projects out there in the wild. Having some experience with GCC is definitely a great plus when you are moving to some serious programming.
And it's just good karma :)
The GCC collection is open source. This allows it to be ported to MANY different platforms rather quickly, because so many people work on it and have seen how it works. Commercial compilers are usually closed-source, and as such only one company or consortium can do the porting, which can be time-consuming.
Note: I know very little about the GCC toolchain, so this question may not make much sense.
Since GCC includes an Ada front end, and it can emit ARM, and devKitPro is based on GCC, is it possible to use Ada instead of C/C++ for writing code on the DS?
Edit: It seems that the target that devKitARM uses is arm-eabi.
devkitPro is not a toolchain, compiler or indeed any software package. The toolchain used to target the DS is devkitARM, one of the toolchains provided by devkitPro.
It may be possible to build the Ada compiler, but I doubt very much that you'll ever manage to get anything useful running on the DS itself. devkitPro will certainly never provide an Ada compiler as part of the packages we produce.
Yes, it is possible; see my project https://github.com/Lucretia/tamp and build the cross compiler as per my script. You would then be able to target the NDS using Ada. I have built a basic RTS as well, which will provide you with local exception handling.
And #Martin Beckett, why do you think Ada is aimed squarely at DoD stuff? They dropped the mandate years ago, and Ada is easily usable for any project. You do realise that Ada is a general-purpose programming language, don't you?
(Disclaimer: I don't know Ada)
Possibly.
You might be able to build devKitPro with Ada support; however, the pre-provided binaries (at least for OS X) do not have Ada support compiled in.
However, you will probably find yourself writing tons of C "glue" code to interface with the various hardware registers and the like.
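Most of that glue boils down to volatile pointers at fixed addresses from the DS memory map. A sketch, assuming the publicly documented address of the main-engine display control register (treat both the address and the usage as illustrative and check the libnds headers before relying on them):

    #include <stdint.h>

    /* Memory-mapped I/O: the DS main-engine display control register.
       The address comes from the public DS memory map; verify it against
       the libnds headers before using it for real. */
    #define REG_DISPCNT (*(volatile uint32_t *)0x04000000)

    void set_display_mode(uint32_t mode_bits)
    {
        /* Writing here reconfigures the display hardware directly;
           mode_bits is an illustrative parameter, not a tested value. */
        REG_DISPCNT = mode_bits;
    }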
One thing to consider when porting a language to the Nintendo DS is the relatively small stack it has (16 KB). There are possible workarounds, such as swapping the SRAM stack contents into DRAM (4 MB) when the stack gets full, or just keeping the whole stack in DRAM (assumed to be awfully slow).
And I second Dre on the fact that you'll have to provide your own glue between the Ada library functions you'd like to use and the existing libraries on the DS (which hopefully cover most of the hardware stuff).
On a practical plane, it is not possible.
On a theoretical plane, you could use a custom Ada parser (I found this one on the ANTLR site, but it is quite old) to translate Ada to C/C++, and then feed that to devkitPro.
However, the effort of building such a translator is probably going to be equal to (if not greater than) that of creating the game itself.