Unlike RANDOM_NUMBER and RAND() (http://gcc.gnu.org/onlinedocs/gfortran/RANDOM_005fNUMBER.html#RANDOM_005fNUMBER), gfortran 4.8.0 has no documentation (brief or detailed) for a random number generator invoked as RANDOM(RAND). I'm using the Geany 1.23 front end for gfortran, and when I call RANDOM(RAND), "RANDOM" appears in brown while "RAND" appears in blue.
Any idea where I can find documentation for this built-in random number generator? I'm asking because, although good random number generation practice dictates the use of portable code (and I follow that), RANDOM(RAND) appears to work equally well for my application.
I have no idea about Fortran, or how to post this as a comment, but I just wanted to point out that Geany's keywords are just a simple list in a text (config) file and don't necessarily represent a full or accurate set of language/built-in/standard keywords (though they usually do). Looking at Geany's commit history, the 'random' keyword has been there since the Fortran filetype was first added, by someone who, as far as I know, neither speaks nor uses Fortran himself.
I wonder what the other major differences are between the two distribution formats of GCC, its source code and its precompiled binaries, apart from the fact that the former allows a customized installation (including a cross-compiler) while the latter does not!
I wonder what the other major differences are between the two distribution formats
Distribution format has nothing to do with your question.
The differences are exactly the same as for any other program: if you have the source code, you can debug, customize, or modify the program; if you only have the binary, you can't (except to the degree that the program's author anticipated and explicitly programmed for).
So I kind of inherited this (not really legacy) project written in Fortran. In order to make it thread-safe, I had to pass a void* pointer (called user_data, you might know the pattern) to all Fortran routines so they could pass it back to the callbacks (so the global state is now properly heap-allocated).
To my sincere surprise, this led to a complete breakdown and segfaults in the weirdest places. After all, I had only added one unchanged argument to every function?
To my sheer horror (I am not a Fortran programmer, just an average hacker with a knack for problem solving), I learned that a Fortran compiler simply ignores everything beyond column 72, probably because columns are expensive or something, without even giving a warning (well, except for some cases where a "type error" (haha, type discipline in Fortran, what a joke) was reported).
To this day I keep finding places in the code that suffer from the unintended consequences of this silent truncation.
Is there any tool out there that can check a Fortran codebase reliably for this kind of mistake?
And, as a bonus question dedicated to John Oliver: Why is a 72 column limit still a thing?
Is there any tool out there that can check a Fortran codebase reliably for this kind of mistake?
Yes, your compiler. With gfortran, this would be -Wline-truncation (included in -Wall, which you should always have enabled). With ifort, this would be -warn truncated_source. I would bet that (almost) any other compiler has an option for this as well.
The 72-column limit grew historically out of punch cards and is kept for backwards compatibility. With most compilers you can change or even disable this limit. With gfortran this would be -ffixed-line-length-<n> with an integer <n>, and -ffixed-line-length-0 to disable it.
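If you also want a standalone check you can run over the whole codebase, a small script can flag the offending lines. Here is a minimal sketch in Python, assuming fixed-form source files, column-1 comment markers (C, c, *, !), and no tab-expansion quirks; the compiler warning above remains the authoritative check:

import sys

def check_file(path, limit=72):
    """Report fixed-form Fortran lines with non-blank text beyond `limit`."""
    hits = []
    with open(path, errors="replace") as f:
        for lineno, line in enumerate(f, start=1):
            line = line.rstrip("\n")
            # Skip full-line comments (column-1 comment markers).
            if line[:1] in ("C", "c", "*", "!"):
                continue
            # Anything non-blank past the limit is silently dropped by a
            # fixed-form compiler at the default line length.
            if len(line) > limit and line[limit:].strip():
                hits.append((lineno, line[limit:].strip()))
    return hits

if __name__ == "__main__":
    for path in sys.argv[1:]:
        for lineno, tail in check_file(path):
            print(f"{path}:{lineno}: text beyond column 72: {tail!r}")

It will also flag trailing inline comments that happen to cross column 72, so treat it as a rough filter rather than a parser.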
I try to use function names that are active and descriptive, which I then document with active and descriptive text (!). This generates redundant-looking code.
Simplified (but not so unrealistic) example in python, following numpy docstring style:
import scipy.linalg


def calculate_inverse(matrix):
    """Calculate the inverse of a matrix.

    Parameters
    ----------
    matrix : ndarray
        The matrix to be inverted.

    Returns
    -------
    matrix_inv : ndarray
        The inverse of the matrix.
    """
    matrix_inv = scipy.linalg.inv(matrix)
    return matrix_inv
Specifically for Python, I have read PEP 257 and the sphinx/napoleon examples of NumPy- and Google-style docstrings. I like that I can automatically generate documentation for my functions, but what is the "best practice" for redundant examples like the one above? Should one simply not document "obvious" classes, functions, etc.? The degree of "obviousness" then of course becomes subjective ...
I have in mind open-source, distributed code. Having multiple authors suggests that the code itself should be readable (calculate_inverse(A) is better than dgetri(A)), but multiple end users would benefit from Sphinx-style documentation.
I've always followed the guideline that the code tells you what it does, the comments are added to explain why it does something.
If you can't read the code, you have no business looking at it, so having (in the extreme):
index += 1 # move to next item
is a total waste of time. So is a comment on a function called calculate_inverse(matrix) which states that it calculates the inverse of the matrix.
Whereas something like:
# Use Pythagoras theorem to find hypotenuse length.
hypo = sqrt (side1 * side1 + side2 * side2)
might be more suitable since it adds the information on where the equation came from, in case you need to investigate it further.
Comments should really be reserved for added information, such as the algorithm you use for calculating the inverse. In this case, since your algorithm is simply handing off the work to scipy, it's totally unnecessary.
If you must have a docstring here for auto-generated documentation, I certainly wouldn't be going beyond the one-liner variant for this very simple case:
"""Return the inverse of a matrix"""
"Always"? Definitively not. Comment as little as possible. Comments lie. They always lie, and if they don't, then they will be lying tomorrow. The same applies to many docs.
The only times (imo) that you should be writing comments/documentation for your code is when you are shipping a library to clients/customers or if you're in an open source project. In these cases you should also have a rigorous standard so there is never any ambiguity what should and should not be documented, and how.
In these cases you also need to have an established workflow regarding who is responsible for updating the docs, since they will get out of sync with the code all the time.
So in summary, never ever comment/document if you can help it. If you have to (because of shipping libs/doing open source), do it Properly(tm).
Clear, concise, well written, and properly placed comments are often useful. In your example, however, I think the code stands alone without the comments. It can go both ways. Comments range from needed and excellent to completely useless.
This is an important topic. You should read the chapter on comments in “Clean Code: A Handbook of Agile Software Craftsmanship,” by Robert Martin and others (2008). Chapter 4, “Comments,” starts with this assertion, “Clear and expressive code with few comments is far superior to cluttered and complex code with lots of comments. Rather than spend your time writing the comments that explain the mess you’ve made, spend it cleaning the mess.” The chapter continues with an excellent discussion on comments.
Yes, you should always document functions.
Many answers here are about commenting your code, which is very different. I am talking about docstrings, which document your interface.
Docstrings are useful because you can get interactive help in the Python interpreter. For example,
import math
help(math)
shows you the following help:
...
cos(...)
    cos(x)

    Return the cosine of x (measured in radians).

cosh(...)
    cosh(x)

    Return the hyperbolic cosine of x.
...
Note that even though cos and cosh are very familiar (and exactly mirror the functions from C's math.h), they are documented. For cos it is stated explicitly that its argument should be in radians. For your example it would be useful to know what a matrix can be. Is it an array of arrays? A tuple of tuples, or an ndarray, as you correctly wrote in its proper documentation? Will a rectangular or zero matrix be accepted?
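The docstring from the question could answer those questions directly. A sketch, with the behaviour of scipy.linalg.inv described as I understand it (check it against the actual library before copying; the scipy import from the question's example is assumed):

def calculate_inverse(matrix):
    """Calculate the inverse of a square matrix.

    Parameters
    ----------
    matrix : (N, N) array_like
        The matrix to be inverted. Nested lists or tuples are converted
        to an ndarray; the matrix must be square and non-singular.

    Returns
    -------
    matrix_inv : (N, N) ndarray
        The inverse of `matrix`.

    Raises
    ------
    LinAlgError
        If `matrix` is singular.
    """
    return scipy.linalg.inv(matrix)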
Another 'familiar' function is chdir from os, which is documented like this:
chdir(...)
    chdir(path)

    Change the current working directory to the specified path.
Frankly speaking, not all functions in the standard library modules are documented. I found an undocumented method of the class statvfs_result in os:
| __reduce__(...)
Maybe it is still a good example of why you should document: I admit that I have forgotten what __reduce__ does, so I have no idea what this method is for. The more familiar __eq__ and __ne__ are documented in that class (like x.__eq__(y) <==> x==y).
If you don't document your function, the help for your module will look like this:
calculate_inverse(matrix)
Functions will clump together more, because a docstring takes additional vertical space.
Write a docstring for a person who doesn't see your code. If the function is really simple, the docstring should be simple as well. It gives confidence that the function really is simple and that nothing unexpected will come out of it, unlike an undocumented function (if its authors didn't bother to write documentation, are they really competent and responsible enough to produce good code?).
The spirit of the PEPs and other guidelines is that code should be good for everyone.
I'm pretty sure that somebody will eventually have difficulty with what is obvious to you.
I (currently) write from my laptop, which does not have a very large screen, and have only one window open in vim, but I write in conformance with PEP 8, which says: "Limiting the required editor window width makes it possible to have several files open side-by-side, and works well when using code review tools that present the two versions in adjacent columns". PEP 257 recommends docstrings that work well with Emacs' fill-paragraph.
So, I don't know of any good example where leaving out a docstring is worthwhile. But, since PEPs and guidelines are only recommendations, you can omit a docstring if your function will not be used by many people, if you won't use it in the future, and if you don't care about writing good code (at least there).
I've written some code in C for the ATmega128 and I'd like to know how the changes I make to the code influence the program memory.
To be more specific, let's say the code is similar to this one:
d=fun1(a,b);
c=fun2(c,d);
The change I make to the code is that I call the same functions more times, e.g.:
d=fun1(a,b);
c=fun2(c,d);
h=fun1(k,l);
n=fun2(p,m);
etc...
I build the solution in Atmel Studio 6.1 and I see the changes in the program memory.
Is there any way to foresee, without building the solution, how the changes in the code will affect the program memory?
Thanks!!
Generally speaking, this is next to impossible with C/C++ (meaning the effort does not pay off).
In your simple case (the number of calls increases), you can determine the number of instructions for each call and multiply by the number of calls. This will only be correct if the compiler does not inline the calls and does not apply optimizations at a higher level.
These calculations might also become wrong if you upgrade to a newer gcc version.
So normally you only get exact numbers when you compare two builds (same compiler version, same optimisations). avr-size and avr-nm give you all the information you need, for example to compare functions by size. You can automate this task (by converting the output into .csv files) and use a spreadsheet or diff to look for changes; see the sketch below.
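A minimal sketch of that kind of automation in Python, assuming avr-nm from AVR binutils is on your PATH and using the hypothetical file name firmware.elf; it dumps symbol sizes to CSV so two builds can be diffed:

import csv
import subprocess
import sys

def symbol_sizes(elf_path):
    # `avr-nm --print-size --size-sort` prints: <address> <size> <type> <name>
    out = subprocess.run(
        ["avr-nm", "--print-size", "--size-sort", elf_path],
        check=True, capture_output=True, text=True,
    ).stdout
    sizes = {}
    for line in out.splitlines():
        parts = line.split()
        if len(parts) == 4:
            _addr, size, _type, name = parts
            sizes[name] = int(size, 16)
    return sizes

if __name__ == "__main__":
    elf = sys.argv[1] if len(sys.argv) > 1 else "firmware.elf"
    writer = csv.writer(sys.stdout)
    writer.writerow(["symbol", "size_bytes"])
    for name, size in sorted(symbol_sizes(elf).items()):
        writer.writerow([name, size])

Run it once per build, redirect the output to a .csv file, and diff the two files (or load them into a spreadsheet) to see which functions grew.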
This method normally only pays off if you have to squeeze a program into a smaller device (from 4k of flash into 2k, for example; you already have 128k of flash, which is quite a lot).
This process is frustrating, because applying the same design pattern in C with small differences can lead to different sizes: so from C/C++, you cannot really predict what is going to happen.
An example:
In "Using and Porting GCC" (2001), there is the macro SMALL_REGISTER_CLASSES, which tells the compiler to minimize the lifetime of hard registers. Its definition consists of a simple zero / non-zero expression, usually a constant.
In "GCC internals" (2011), the above macro is replaced by the following target hook:
bool TARGET_SMALL_REGISTER_CLASSES_FOR_MODE_P (enum machine_mode mode)
which is not nearly as neat as the original macro.
Note: I'm not sure what the difference is between "Using and Porting GCC" and "GCC Internals" as far as porting goes (RTL representation, Machine Descriptions, and Target Description Macros and Functions). I started by reading the first one thoroughly because that was the suggested documentation, overlooking the fact that it is actually 10 years old.
The short answer is "no".
At the start of 2001, the current release was 2.95, although 3.0 was already well into development. The current release is 4.6, with 4.7 due in a few months. That's two major release numbers, which means two large-scale rewrites of the source code, plus many, many other smaller changes that add up to a lot of code churn.
Of course, you'll find lots of details that are the same now as ever, but the old documents are not to be trusted.
The current documentation is pretty good, as far as it goes, but it's hardly comprehensive, so if you'd like to improve it as you learn more, I'm sure it'll be appreciated. ;)