How can I learn to read formulas with greek symbols? - algorithm

I suppose maybe it's because I don't know the keywords to google for, but I can't find any sources on how to read those formulas you see on wikipedia, like this for instance:
Erlang Distribution
I've searched in the math world and computer science world. It feels like it is assumed that we're supposed to understand it out of thin air. Beginner lessons seem scarce.
So far I know how sigma works, and that the upside-down shape used as the Half-Life logo is called lambda. But what the heck is it trying to say?? Why is there a semicolon in the function, etc.?
If there is a book on this stuff I'd buy it in an instant. It is probably very basic stuff but I never had experience in theoretical math or even know where to look.
Does anyone know what this subject is called, and what to google for?

Formulas with these symbols are usually statistics or probability notation.
Greek letters (e.g. θ, β) are commonly used to denote unknown parameters (population parameters).
You can find info in the Wikipedia articles "Greek letters used in mathematics, science, and engineering" and "Notation in probability and statistics".

I think the colon in "alt.: θ = 1/λ > 0, scale (real)" in the infobox on Wikipedia is just saying that there is an alternative definition, in which you specify theta rather than lambda, and in that definition what is called theta is the reciprocal of lambda in the other definition.
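For reference, here is my reading of the formula in that infobox. The Erlang density is

    f(x; k, \lambda) = \frac{\lambda^{k} x^{k-1} e^{-\lambda x}}{(k-1)!} \qquad \text{for } x \ge 0,

and the semicolon separates the argument of the function (x, the value the density is evaluated at) from its parameters (k and λ, which are held fixed for a given distribution). Under the alternative parameterization, with θ = 1/λ, the same density reads

    f(x; k, \theta) = \frac{x^{k-1} e^{-x/\theta}}{\theta^{k} (k-1)!}.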
I once complained to a much better mathematician than I was that I came unstuck with formulas containing some of the weirder Greek letters because I couldn't write them recognisably in my handwriting (which is bad enough for the Latin alphabet). He said a lot of the people he knew simply said "let x be funny-squiggle-thing" and rewrote with sensible letters. I really wish I'd thought of that.
In general, letters in weird alphabets behave pretty much like sensible letters, at least in the sort of thing you are pointing at. It's done as a sort of type-checking - usually all of the letters pinched from some particular foreign language are related in some way - e.g. all parameters. Unfortunately that doesn't hold exactly in the Wikipedia example you quote, where two of the greek letters stand for functions - one is definitely the Gamma function. I suspect the other is http://en.wikipedia.org/wiki/Digamma_function, but I'm not really sure.

Check out the resources list here: http://en.wikipedia.org/wiki/Greek_alphabet

I would say your best bet is still searching Google (or another search engine, whatever floats your boat) for the specific formula you are trying to learn. Sometimes a symbol may be used with a different meaning in different formulas.
Anyway, there is a good resource here that explains a lot of math symbols, not just the Greek ones.
Some links that may interest you: here and here.

First, find a Greek alphabet (upper and lower case) to refer to, so that you can at least call lambda by its name. No one starts out automatically knowing what the various Greek characters are, not even Greeks.
Second, read the actual article. Usually either the character is defined (as lambda happens to be in the Wikipedia page you referenced) or it's standard nomenclature (in which case you've done the right thing by looking for a basic article on the function in question -- I do this all the time, so don't feel bad). Or, as a third option, it's a crappy paper. Happens sometimes. It's kind of a pain, though, since you can't just do a text search on the lambda character in a PDF.
(Someone educate me on that if I'm wrong....)
Third, try to pick out which unfamiliar symbols are variables (like lambda) and which are operators (like sigma and its helpers). It's the operators that can sometimes cause real trouble. A variable is just a name for something, but operators come freighted with more meaning, more rules, and more syntax. It's not always obvious which symbols are operators, either.
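To make the variable/operator distinction concrete: the summation operator is really just a loop. A minimal Python sketch (the list xs is a made-up example):

    # \sum_{i=1}^{n} x_i  reads "add up the x's, for i running from 1 to n"
    xs = [2.0, 3.5, 1.5]        # x_1 .. x_n
    total = 0.0
    for x in xs:                # one loop iteration per value of i
        total += x
    print(total)                # 7.0, same as the built-in: sum(xs)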
Finally, and specifically for computer science, a good introductory book (college freshman/sophomore level) on discrete math will hopefully treat most of the basic notations and operators, to at least get your feet on the ground. Nowadays, you kids and your newfangled internet might be able to get something similar from Udacity, edX, Coursera, or Khan Academy.
Basically, it's a lot of hard work, especially on your own, but you're already doing most of the right things.


Should you always document functions, even if redundant (specifically python)?

I try to use function names that are active and descriptive, which I then document with active and descriptive text (!). This generates redundant-looking code.
Simplified (but not so unrealistic) example in python, following numpy docstring style:
import scipy.linalg

def calculate_inverse(matrix):
    """Calculate the inverse of a matrix.

    Parameters
    ----------
    matrix : ndarray
        The matrix to be inverted.

    Returns
    -------
    matrix_inv : ndarray
        The inverse of the matrix.
    """
    matrix_inv = scipy.linalg.inv(matrix)
    return matrix_inv
Specifically for python, I have read PEP-257 and the sphinx/napoleon example numpy and Google style docstrings. I like that I can automatically generate documentation for my functions, but what is the "best practice" for redundant examples like above? Should one simply not document "obvious" classes, functions, etc? The degree of "obviousness" then of course becomes subjective ...
I have in mind open-source, distributed code. Multiple authors suggests that the code itself should be readable (calculate_inverse(A) better than dgetri(A)), but multiple end-users would benefit from sphinx-style documentation.
I've always followed the guideline that the code tells you what it does, the comments are added to explain why it does something.
If you can't read the code, you have no business looking at it, so having (in the extreme):
index += 1 # move to next item
is a total waste of time. So is a comment on a function called calculate_inverse(matrix) which states that it calculates the inverse of the matrix.
Whereas something like:
# Use Pythagoras theorem to find hypotenuse length.
hypo = sqrt (side1 * side1 + side2 * side2)
might be more suitable since it adds the information on where the equation came from, in case you need to investigate it further.
Comments should really be reserved for added information, such as the algorithm you use for calculating the inverse. In this case, since your algorithm is simply handing off the work to scipy, it's totally unnecessary.
If you must have a docstring here for auto-generated documentation, I certainly wouldn't be going beyond the one-liner variant for this very simple case:
"""Return the inverse of a matrix"""
"Always"? Definitively not. Comment as little as possible. Comments lie. They always lie, and if they don't, then they will be lying tomorrow. The same applies to many docs.
The only times (imo) that you should be writing comments/documentation for your code is when you are shipping a library to clients/customers or if you're in an open source project. In these cases you should also have a rigorous standard so there is never any ambiguity what should and should not be documented, and how.
In these cases you also need to have an established workflow regarding who is responsible for updating the docs, since they will get out of sync with the code all the time.
So in summary, never ever comment/document if you can help it. If you have to (because of shipping libs/doing open source), do it Properly(tm).
Clear, concise, well written, and properly placed comments are often useful. In your example, however, I think the code stands alone without the comments. It can go both ways. Comments range from needed and excellent to completely useless.
This is an important topic. You should read the chapter on comments in “Clean Code: A Handbook of Agile Software Craftsmanship,” by Robert Martin and others (2008). Chapter 4, “Comments,” starts with this assertion, “Clear and expressive code with few comments is far superior to cluttered and complex code with lots of comments. Rather than spend your time writing the comments that explain the mess you’ve made, spend it cleaning the mess.” The chapter continues with an excellent discussion on comments.
Yes, you should always document functions.
Many answers are about commenting your code; that is very different. I am talking about docstrings, which document your interface.
Docstrings are useful because you can get interactive help in the Python interpreter. For example,
import math
help(math)
shows you the following help:
...
cos(...)
    cos(x)

    Return the cosine of x (measured in radians).

cosh(...)
    cosh(x)

    Return the hyperbolic cosine of x.
...
Note that even though cos and cosh are very familiar (and exactly repeat functions from C math.h), they are documented. For cos it is stated explicitly that its argument should be in radians. For your example it would be useful to know what a matrix could be. Is it an array of arrays? A tuple of tuples, or an ndarray, as you correctly wrote in its proper documentation? Will a rectangular or zero matrix suit?
Another 'familiar' function is chdir from os, which is documented like this:
chdir(...)
    chdir(path)

    Change the current working directory to the specified path.
Frankly speaking, not all functions in standard library modules are documented. I found an undocumented method of the class statvfs_result in os:
| __reduce__(...)
Maybe it is still a good example of why you should document. I admit that I forgot what reduce does, so I have no idea about this method. The more familiar __eq__ and __ne__ are documented in that class (like x.__eq__(y) <==> x==y).
If you don't document your function, the help for your module will look like this:
calculate_inverse(matrix)
Functions will clump together more, because a docstring takes additional vertical space.
Write a docstring for a person who doesn't see your code. If the function is really simple, the docstring should be simple as well. It will give confidence that the function really is simple, and that nothing unexpected will arise from an undocumented function (if they didn't bother to write documentation, are they competent and responsible enough to produce good code at all?).
The spirit of PEPs and other guidelines is that code should be good for all.
I'm pretty sure that somebody will someday have difficulty with what is obvious to you.
I (currently) write from my laptop with not a very large screen, and have only one window in vim, but I write in conformance with PEP 8, which says: "Limiting the required editor window width makes it possible to have several files open side-by-side, and works well when using code review tools that present the two versions in adjacent columns". PEP 257 recommends docstrings which will work well with Emacs' fill-paragraph.
So, I don't know of any good example where omitting a docstring is worthwhile. But, as PEPs and guidelines are only recommendations, you can omit a docstring if your function will not be used by many people, if you won't use it in the future, and if you don't care about writing good code (at least there).

Pseudocode interpreter?

Like lots of you guys on SO, I often write in several languages. And when it comes to planning stuff, (or even answering some SO questions), I actually think and write in some unspecified hybrid language. Although I used to be taught to do this using flow diagrams or UML-like diagrams, in retrospect, I find "my" pseudocode language has components of C, Python, Java, bash, Matlab, perl, Basic. I seem to unconsciously select the idiom best suited to expressing the concept/algorithm.
Common idioms might include Java-like braces for scope, pythonic list comprehensions or indentation, C++-like inheritance, C#-style lambdas, and Matlab-like slices and matrix operations.
I noticed that it's actually quite easy for people to recognise exactly what I'm trying to do, and quite easy for people to intelligently translate into other languages. Of course, that step involves considering the corner cases, and the moments where each language behaves idiosyncratically.
But in reality, most of these languages share a subset of keywords and library functions which generally behave identically - maths functions, type names, while/for/if etc. Clearly I'd have to exclude many 'odd' languages like lisp, APL derivatives, but...
So my questions are,
Does code already exist that recognises the programming language of a text file? (Surely this must be a less complicated task than eclipse's syntax trees or than google translate's language guessing feature, right?) In fact, does the SO syntax highlighter do anything like this?
Is it theoretically possible to create a single interpreter or compiler that recognises what language idiom you're using at any moment and (maybe "intelligently") executes or translates to a runnable form. And flags the corner cases where my syntax is ambiguous with regards to behaviour. Immediate difficulties I see include: knowing when to switch between indentation-dependent and brace-dependent modes, recognising funny operators (like *pointer vs *kwargs) and knowing when to use list vs array-like representations.
Is there any language or interpreter in existence, that can manage this kind of flexible interpreting?
Have I missed an obvious obstacle to this being possible?
edit
Thanks all for your answers and ideas. I am planning to write a constraint-based heuristic translator that could, potentially, "solve" code for the intended meaning and translate into real Python code. It will notice keywords from many common languages, and will use syntactic clues to disambiguate the human's intentions - like spacing, brackets, optional helper words like let or then, context of how variables are previously used, etc., plus knowledge of common conventions (like capital names, i for iteration, and some simplistic limited understanding of naming of variables/methods, e.g. containing the word get, asynchronous, count, last, previous, my, etc.). In real pseudocode, variable naming is as informative as the operations themselves!
Using these clues it will create assumptions as to the implementation of each operation (like 0/1 based indexing, when should exceptions be caught or ignored, what variables ought to be const/global/local, where to start and end execution, and what bits should be in separate threads, notice when numerical units match / need converting). Each assumption will have a given certainty - and the program will list the assumptions on each statement, as it coaxes what you write into something executable!
For each assumption, you can 'clarify' your code if you don't like the initial interpretation. The libraries issue is very interesting. My translator, like some IDE's, will read all definitions available from all modules, use some statistics about which classes/methods are used most frequently and in what contexts, and just guess! (adding a note to the program to say why it guessed as such...) I guess it should attempt to execute everything, and warn you about what it doesn't like. It should allow anything, but let you know what the several alternative interpretations are, if you're being ambiguous.
It will certainly be some time before it can manage such unusual examples like #Albin Sunnanbo's ImportantCustomer example. But I'll let you know how I get on!
I think that is quite useless for everything but toy examples and strict mathematical algorithms. For everything else the language is not just the language. There are lots of standard libraries and whole environments around the languages. I think I write almost as many lines of library calls as I write "actual code".
In C# you have .NET Framework, in C++ you have STL, in Java you have some Java libraries, etc.
The difference between those libraries are too big to be just syntactic nuances.
<subjective>
There have been attempts at unifying language constructs of different languages into a "unified syntax". Those were called 4GL languages and never really took off.
</subjective>
As a side note I have seen a code example about a page long that was valid as C#, Java, and JavaScript code. That can serve as an example of where it is impossible to determine the actual language used.
Edit:
Besides, the whole purpose of pseudocode is that it does not need to compile in any way. The reason you write pseudocode is to create a "sketch", however sloppy you like.
foreach c in ImportantCustomers{== OrderValue >=$1M}
SendMailInviteToSpecialEvent(c)
Now tell me what language it is and write an interpreter for that.
To detect what programming language is used: Detecting programming language from a snippet
I think it should be possible. The approach in 1. could be leveraged to do this, I think. I would try to do it iteratively: detect the syntax used in the first line/clause of code, "compile" it to intermediate form based on that detection, along with any important syntax (e.g. begin/end wrappers). Then the next line/clause etc. Basically write a parser that attempts to recognize each "chunk". Ambiguity could be flagged by the same algorithm.
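A minimal Python sketch of that chunk-by-chunk keyword scoring (the keyword tables here are illustrative stand-ins, not a real classifier):

    # Score a line of code against small per-language keyword sets and
    # pick the best match; a tie or an empty score is flagged as ambiguous.
    KEYWORDS = {
        "python": {"def", "elif", "lambda", "import", "None"},
        "java":   {"public", "static", "void", "extends", "final"},
        "c":      {"printf", "malloc", "struct", "typedef", "sizeof"},
    }

    def guess_language(line):
        tokens = set(line.replace("(", " ").replace(")", " ").split())
        scores = {lang: len(tokens & kws) for lang, kws in KEYWORDS.items()}
        best = max(scores, key=scores.get)
        if scores[best] == 0 or list(scores.values()).count(scores[best]) > 1:
            return None                 # ambiguous -- ask the user
        return best

    print(guess_language("public static void main"))   # java
    print(guess_language("def median(seq):"))          # python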
I doubt that this has been done ... seems like the cognitive load of learning to write e.g. python-compatible pseudocode would be much easier than trying to debug the cases where your interpreter fails.
a. I think the biggest problem is that most pseudocode is invalid in any language. For example, I might completely skip object initialization in a block of pseudocode because for a human reader it is almost always straightforward to infer. But for your case it might be completely invalid in the language syntax of choice, and it might be impossible to automatically determine e.g. the class of the object (it might not even exist). Etc.
b. I think the best you can hope for is an interpreter that "works" (subject to 4a) for your pseudocode only, no-one else's.
Note that I don't think that 4a,4b are necessarily obstacles to it being possible. I just think it won't be useful for any practical purpose.
Recognizing what language a program is in is really not that big a deal. Recognizing the language of a snippet is more difficult, and recognizing snippets that aren't clearly delimited (what do you do if four lines are Python and the next one is C or Java?) is going to be really difficult.
Assuming you got the lines assigned to the right language, doing any sort of compilation would require specialized compilers for all languages that would cooperate. This is a tremendous job in itself.
Moreover, when you write pseudo-code you aren't worrying about the syntax. (If you are, you're doing it wrong.) You'll wind up with code that simply can't be compiled because it's incomplete or even contradictory.
And, assuming you overcame all these obstacles, how certain would you be that the pseudo-code was being interpreted the way you were thinking?
What you would have would be a new computer language, that you would have to write correct programs in. It would be a sprawling and ambiguous language, very difficult to work with properly. It would require great care in its use. It would be almost exactly what you don't want in pseudo-code. The value of pseudo-code is that you can quickly sketch out your algorithms, without worrying about the details. That would be completely lost.
If you want an easy-to-write language, learn one. Python is a good choice. Use pseudo-code for sketching out how processing is supposed to occur, not as a compilable language.
An interesting approach would be a "type-as-you-go" pseudocode interpreter. That is, you would set the language to be used up front, and then it would attempt to convert the pseudo code to real code, in real time, as you typed. An interactive facility could be used to clarify ambiguous stuff and allow corrections. Part of the mechanism could be a library of code which the converter tried to match. Over time, it could learn and adapt its translation based on the habits of a particular user.
People who program all the time will probably prefer to just use the language in most cases. However, I could see the above being a great boon to learners, "non-programmer programmers" such as scientists, and for use in brainstorming sessions with programmers of various languages and skill levels.
-Neil
Programs interpreting human input need to be given the option of saying "I don't know." The language PL/I is a famous example of a system designed to find a reasonable interpretation of anything resembling a computer program that could cause havoc when it guessed wrong: see http://horningtales.blogspot.com/2006/10/my-first-pli-program.html
Note that in the later language C++, when it resolves possible ambiguities it limits the scope of the type coercions it tries, and that it will flag an error if there is not a unique best interpretation.
I have a feeling that the answer to 2. is NO. All I need to prove it false is a code snippet that can be interpreted in more than one way by a competent programmer.
Does code already exist that recognises the programming language of a text file?
Yes, the Unix file command.
(Surely this must be a less complicated task than eclipse's syntax trees or than google translate's language guessing feature, right?) In fact, does the SO syntax highlighter do anything like this?
As far as I can tell, SO has a one-size-fits-all syntax highlighter that tries to combine the keywords and comment syntax of every major language. Sometimes it gets it wrong:
def median(seq):
    """Returns the median of a list."""
    seq_sorted = sorted(seq)
    if len(seq) & 1:
        # For an odd-length list, return the middle item
        return seq_sorted[len(seq) // 2]
    else:
        # For an even-length list, return the mean of the 2 middle items
        return (seq_sorted[len(seq) // 2 - 1] + seq_sorted[len(seq) // 2]) / 2
Note that SO's highlighter assumes that // starts a C++-style comment, but in Python it's the integer division operator.
This is going to be a major problem if you try to combine multiple languages into one. What do you do if the same token has different meanings in different languages? Similar situations are:
Is ^ exponentiation like in BASIC, or bitwise XOR like in C?
Is || logical OR like in C, or string concatenation like in SQL?
What is 1 + "2"? Is the number converted to a string (giving "12"), or is the string converted to a number (giving 3)? (See below for how Python answers this.)
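For what it's worth, Python resolves that last ambiguity by refusing to guess:

    >>> 1 + "2"
    Traceback (most recent call last):
      ...
    TypeError: unsupported operand type(s) for +: 'int' and 'str'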
Is there any language or interpreter
in existence, that can manage this
kind of flexible interpreting?
On another forum, I heard a story of a compiler (IIRC, for FORTRAN) that would compile any program regardless of syntax errors. If you had the line
= Y + Z
The compiler would recognize that a variable was missing and automatically convert the statement to X = Y + Z, regardless of whether you had an X in your program or not.
This programmer had a convention of starting comment blocks with a line of hyphens, like this:
C ----------------------------------------
But one day, they forgot the leading C, and the compiler choked trying to add dozens of variables between what it thought was subtraction operators.
"Flexible parsing" is not always a good thing.
To create a "pseudocode interpreter," it might be necessary to design a programming language that allows user-defined extensions to its syntax. There already are several programming languages with this feature, such as Coq, Seed7, Agda, and Lever. A particularly interesting example is the Inform programming language, since its syntax is essentially "structured English."
The Coq programming language allows "syntax extensions", so the language can be extended to parse new operators:
Notation "A /\ B" := (and A B).
Similarly, the Seed7 programming language can be extended to parse "pseudocode" using "structured syntax definitions." The while loop in Seed7 is defined in this way:
syntax expr: .while.().do.().end.while is -> 25;
Alternatively, it might be possible to "train" a statistical machine translation system to translate pseudocode into a real programming language, though this would require a large corpus of parallel texts.

How to calculate indefinite integral programmatically

I remember solving a lot of indefinite integration problems. There are certain standard methods of solving them, but nevertheless there are problems which take a combination of approaches to arrive at a solution.
But how can we achieve the solution programmatically?
For instance, look at Mathematica's online Integrator app. So how do we approach writing such a program, which accepts a function as an argument and returns the indefinite integral of that function?
PS. The input function can be assumed to be continuous (i.e., it is not, for instance, sin(x)/x).
You have Risch's algorithm which is subtly undecidable (since you must decide whether two expressions are equal, akin to the ubiquitous halting problem), and really long to implement.
If you're into complicated stuff, solving an ordinary differential equation is actually not harder (and computing an indefinite integral is equivalent to solving y' = f(x)). There exists a Galois differential theory which mimics Galois theory for polynomial equations (but with Lie groups of symmetries of solutions instead of finite groups of permutations of roots). Risch's algorithm is based on it.
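In symbols, that equivalence is simply

    y' = f(x) \quad\Longleftrightarrow\quad y = \int f(x)\,dx + C,

so any machinery that solves the ODE also produces the indefinite integral.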
The algorithm you are looking for is Risch's algorithm:
http://en.wikipedia.org/wiki/Risch_algorithm
I believe it is a bit tricky to use. This book:
http://www.amazon.com/Algorithms-Computer-Algebra-Keith-Geddes/dp/0792392590
has a description of it. A 100-page description.
You keep a set of basic forms you know the integrals of (polynomials, elementary trigonometric functions, etc.) and you use them on the form of the input. This is doable if you don't need much generality: it's very easy to write a program that integrates polynomials, for example.
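For example, here is that idea for the polynomial case in Python (a minimal sketch; coefficients are ordered from the constant term up):

    def integrate_poly(coeffs):
        """Integrate c0 + c1*x + c2*x**2 + ... term by term.

        Returns the coefficients of the antiderivative (constant of
        integration fixed at 0): each c*x**k becomes (c/(k+1))*x**(k+1).
        """
        return [0.0] + [c / (k + 1) for k, c in enumerate(coeffs)]

    # The integral of 1 + 2x + x**2 is x + x**2 + x**3/3:
    print(integrate_poly([1, 2, 1]))    # [0.0, 1.0, 1.0, 0.333...]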
If you want to do it in the most general case possible, you'll have to do much of the work that computer algebra systems do. It is a lifetime's work for some people, e.g. if you look at Risch's "algorithm" posted in other answers, or symbolic integration, you can see that there are entire multi-volume books ("Manuel Bronstein, Symbolic Integration Volume I: Springer") that have been written on the topic, and very few existing computer algebra systems implement it in maximum generality.
If you really want to code it yourself, you can look at the source code of Sage or the several projects listed among its components. Of course, it's easier to use one of these programs, or, if you're writing something bigger, use one of these as libraries.
These expert systems usually have a huge collection of techniques and simply try one after another.
I'm not sure about Wolfram Mathematica, but in Maple there's a command that displays all intermediate steps. If you use it, you get as output all the techniques that were tried.
Edit:
Transforming the input should not be the really tricky part - you need to write a lexer and a parser that transform the textual input into an internal representation.
Good luck. Mathematica is a very complex piece of software, and symbolic manipulation is something it does best. If you are interested in the topic, take a look at these books:
http://www.amazon.com/Computer-Algebra-Symbolic-Computation-Elementary/dp/1568811586/ref=sr_1_3?ie=UTF8&s=books&qid=1279039619&sr=8-3-spell
Also, going to the source wouldn't hurt either. This book actually explains the inner workings of Mathematica:
http://www.amazon.com/Mathematica-Book-Fourth-Stephen-Wolfram/dp/0521643147/ref=sr_1_7?ie=UTF8&s=books&qid=1279039687&sr=1-7

Algebraic logic

Both Wolfram Alpha and Bing are now providing the ability to solve complex, algebraic logic problems (ie "solve for x, given this equation"), and not just evaluate simple arithmetic expressions (eg "what's 5+5?"). How is this done?
I can read most types of code that might get thrown at me, so it doesn't really make a difference what you use to explain and represent the algorithm. I find that bash makes really good pseudocode, not to mention it's actually functional, so that'd be ideal. Also, I'm fairly familiar with its ins and outs. Sorry to go ranting on a tangent, but it really irritates me to see people spend effort on crunching out "pseudocode" when they could be getting something 100% functional for just slightly more effort. Anyways, thanks so much in advance.
There are two main methods of solving:
Numeric methods. Numerical methods mean, basically, that the solver tries different values of x until the equation is satisfied (a sketch follows this list). More info on numerical methods.
Symbolic math. The solver manipulates the equation as a string of symbols, by a number of formal rules. It's not that different from the algebra we learn in school; the solver just knows a lot of different rules. More info on computer algebra.
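To give a feel for the numeric route, here is a minimal bisection sketch in Python (it assumes f changes sign on [lo, hi]):

    def bisect(f, lo, hi, tol=1e-12):
        """Find an x in [lo, hi] with f(x) ~ 0, given f(lo), f(hi) differ in sign."""
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if f(lo) * f(mid) <= 0:   # sign change, hence a root, is on the left
                hi = mid
            else:                     # otherwise it is on the right
                lo = mid
        return (lo + hi) / 2

    # "Solve for x, given x**3 - x - 2 == 0"
    print(bisect(lambda x: x**3 - x - 2, 1, 2))    # ~1.5213797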
Wolfram|Alpha (W|A) is based on the Mathematica kernel, combined with a natural language parser (which is also built primarily with Mathematica). They have a whole heap of curated data and associated formula that can be used once the question has been interpreted.
There's a blog post describing some of this which came out at the same time as W|A.
Finally, Bing simply uses the (non-free) API to answer questions via W|A.

Avoiding Mixup of Language Details

Today someone asked me what was wrong with their source code. It was obvious. "Use double equals in place of that single equal in that if statement. Um, I think..." As I remember some languages actually take a single equals for comparison. Since I sometimes forget or mix up the syntax details among the several languages I use, I stepped over to my laptop to try a quickie experiment.
It costs a bit of time and is a break in the flow to try "quick" experiments (though maybe the practice is good for memory.) What tips do you have for keeping straight in your mind the syntax (and other) details of multiple languages?
(And nowadays, this applies just as well to the many wiki-like markups!)
To me, the hardest part isn't the syntax - usually you get into the mode when looking at the code you're working on. The really hard part is remembering the library of the language, so you don't go reinventing the wheel over and over again. Now if only people would organize their help files so it was easy to search for particular stuff in the library.
IDEs that can draw red and yellow squiggles can help, until you develop that mental muscle memory.
One of the annoying things with Xcode (for Cocoa/Objective-C) is that you don't get said squiggles until you compile. (As opposed to Eclipse/Java, where you get live squiggles.)
In my case it's just experience. I think once you code in a language for long enough your brain seems to be able to do language-context-switching with it.
Indeed, on SO I advised avoiding if (a = b) in Java, and someone reminded me that it only compiles if a and b are boolean! Of course, the advice is good for C, C++, JavaScript, and a number of other C-like languages.
Likewise, I realized only recently that var v in JavaScript has function-level scope only, not brace-level scope.
Somehow, that's the pitfall of having similar syntaxes, but different behaviors.
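As an aside, Python sidesteps this particular pitfall, since assignment is a statement rather than an expression -- a quick illustration:

    a, b = 0, 1
    # if a = b:      # SyntaxError: Python rejects the C-style typo outright
    if a == b:       # comparison has to be spelled explicitly
        print("equal")
    # Since Python 3.8 you can assign inside an expression, but only with
    # the visually distinct walrus operator, so it can't happen by accident:
    if (n := a + b) > 0:
        print(n)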
Anecdotally, some people on the Lua mailing list complain that the language isn't C-like, with the terse and familiar curly braces, += and ++, and the bitwise operators. They say it hurts adoption of the language, because people are more familiar with C-like syntax.
That's nonsense: Basic was (and still is) widely used with its verbose syntax, and so is Pascal (Delphi). And lots of people find the Lua syntax readable and easy to learn, good for those not familiar with programming (game AI specialists, for example).
Moreover, and to the point, Lua is designed to be integrated into C/C++ programs and to be extended with C[++] functions. And people say the quite different syntaxes help with the mindset shift.
