What are some of J's unique features? - tacit programming

I come from a background of C, Fortran, Python, R, Matlab, and some Lisp - and I've read a few things on Haskell. What are some neat ideas/examples in J or other languages from the APL family that are unique and not implemented in more common languages? I'm always interested in finding out what I'm missing...

J has a very large set of operators that make it easy to gin up complex programs without having to hunt for a library. It has extremely powerful array processing capabilities, as well as iterative constructs that make explicit control structures irrelevant for most purposes -- so much so that I prefer using tensor algebra to declaring an explicit loop because it's more convenient. J runs in an interpreter, but a good J script can be just as fast as a program written in a compiled language. (When you take out explicit loops, the interpreter doesn't have to re-interpret the body of the loop on every iteration.)
Another fun feature of J is tacit programming. You can build scripts without explicit reference to input variables, which lets you express an idea purely in terms of what you intend to do. For example, I could define the average function as 'summing the terms in a list and dividing them by the number of entries in the list' like so:
(+/ % #)
or I could make a script that slices into a 2D array and only returns the averages of rows that have averages greater than 10:
(10&<#])(+/%#)"1
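For readers coming from the Python/NumPy world, here is a rough analogue of those two expressions (my own sketch, not J and not part of the original answer): the average as sum divided by count, then a filter keeping only rows whose average exceeds 10.
import numpy as np

def average(xs):
    return np.sum(xs) / len(xs)        # what (+/ % #) expresses tacitly

a = np.array([[ 1,  2,   3],
              [10, 20,  30],
              [ 5, 50, 500]])
row_means = a.mean(axis=1)             # apply the average across each row, as "1 does in J
print(a[row_means > 10])               # keep only the rows whose average is greater than 10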
There's lots of other neat stuff you can do with J; it's an executable form of mathematical notation. Ideas generalize easily, so you get a lot of benefit out of learning any one aspect of how the language works.

I think one of the most interesting aspects of J is that it is one of the very few non-von-Neumann languages that is even remotely mainstream.
Uhm. J mainstream? Well, yeah, compared to other non-von-Neumann languages it is! There are only very few non-von-Neumann languages to begin with; most of them live only inside some PhD thesis and were never actually implemented, and those that were implemented usually have a userbase of 1, if even that. Generally, they are considered successful if at least one of the users doesn't sit on the same floor as the guy who invented it.
Compared to that, J is mainstream. In particular, J is based on John Backus's FP from his seminal Turing Award lecture "Can Programming Be Liberated from the von Neumann Style?", and as far as I know it is the only working implementation of it. (I don't think Backus ever actually implemented FP himself, for example.)

This is perhaps not as unique as I make it out to be, but the top feature I can think of for J is implicit typing. It creates that nice abstraction level above execution and memory management, letting you focus on the data being processed.
Suppose you need to store a number:
var1 =: 10
And it's done. Array?
var2 =: 4 8 15 16 23 42
Done. Oh, but wait, you need to divide that by 3.7? Don't bother with casting, just go for it:
var2 % 3.7
Being rid of that necessity to cast and manipulate and allocate is a tiny blessing.

Related

Pipelining the Ziggurat Random Number Generator

I'm currently implementing a version of George Marsaglia's Ziggurat random number generator. Although it is supposedly one of the fastest ways to generate good-quality normally-distributed random numbers, it is full of loop control code (i.e. return statements in the middle of a loop, if-statements, branches, etc.) and it makes several calls to standard C functions like exp() and log(). Not to mention the infinite loop.
This makes for code that cannot be pipelined by the compiler. I feel like a basic approach, such as using the central limit theorem directly, might ultimately be faster since it can be pipelined easily. Unfortunately, it is not suitable for the tails of the Gaussian distribution, and therefore it's not acceptable for my application.
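For illustration, here is a minimal Python sketch of the central-limit-theorem approach I mean (my own example, not the Ziggurat code): summing a fixed number of uniforms is branch-free and pipelines well, but every result is confined to a bounded interval, which is exactly the tail problem.
import random

def clt_normal():
    # Sum of 12 uniforms on [0, 1) has mean 6 and variance 1,
    # so subtracting 6 gives an approximately standard normal value.
    return sum(random.random() for _ in range(12)) - 6.0

# Branch-free apart from the fixed-count loop, so it pipelines nicely,
# but every sample lies in [-6, 6]: the Gaussian tails are simply missing.
print([round(clt_normal(), 3) for _ in range(5)])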
Does anybody here have any ideas on how the control code and function calls might be reduced? I am currently using Colin Green's implementation of the algorithm, which I ported to C. My underlying uniform generator is the Tiny Mersenne Twister (so please don't tell me to use the MT as I've seen other people do; I'm already there. This discussion is about normally-distributed RNGs, not uniform RNGs).
You might take a look at my C implementation here. The main function is only 20-something lines of code, so it should be easy to unroll the loop a bit. It also gives you the choice of using integer or float compares, whichever is faster on your machine. You can plug in any back-end RNG.

Why are interval comparisons (e.g. x < variable < y) not supported in most "mainstream" languages?

This might be a very simple question, or perhaps it has been asked before, but I couldn't really find an answer after a brief search here and via Google.
So, at the risk of having missed something similar, here goes my question. Why are conditional statements that check for intervals often not supported as such, but instead have to be written as nested ifs or by using the && operator to combine the two separate conditions?
Admittedly it's not the end of the world if I have to write a couple more lines of code, but coming from a mathematics background, I find it really peculiar that modern compilers cannot do their voodoo on this relatively simple statement.
Is there a particular reason for it, from a technical or theoretical perspective?
I would say that there is no impediment from a technical point of view since some languages do support that kind of comparison:
Python (docs)
Clojure (docs), and almost any other lispy language out there
Perl 6 (docs)
And Mathematica if I remember right.
What is certain is that it makes parsing the language a bit more complex, and language designers may or may not pay that price if they don't see it as a feature that will be used very frequently.
P.S. I personally like it, but have found myself rarely using it in Python.
My two cents here: if you write a < b < c, in many languages it means ((a < b) < c), since comparison operators are usually left-associative. But a < b returns a boolean, so (for example, if a < b returns true) the subsequent reduction is (true < c), which in all but extremely rare cases is not what you want.
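A quick Python illustration of both points (my own example): the chained form desugars to a conjunction, while the explicitly parenthesized form compares a boolean with a number.
a, b, c = 1, 5, 10
print(a < b < c)     # True: the chained form means (a < b) and (b < c)
print((a < b) < c)   # also True here, but only because True == 1, and 1 < 10
print((5 < 3) < 1)   # False < 1  ->  0 < 1  ->  True, almost never what was meant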

Code generation by genetic algorithms

Evolutionary programming seems to be a great way to solve many optimization problems. The idea is very simple and the implementation poses no problems.
I was wondering if there is any way to evolutionarily create a program as a Ruby/Python script (or in any other language)?
The idea is simple:
1. Create a population of programs.
2. Perform genetic operations (roulette-wheel selection or any other selection; a minimal roulette-wheel sketch follows this list), and create new programs that inherit from the best programs, etc.
3. Repeat step 2 until a program that satisfies our condition is found.
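For reference, a minimal roulette-wheel selection routine is short enough to sketch in Python (illustrative only; the names are mine, and it assumes non-negative fitness values):
import random

def roulette_select(population, fitnesses):
    # Pick one individual with probability proportional to its fitness.
    total = sum(fitnesses)
    pick = random.uniform(0, total)
    running = 0.0
    for individual, fitness in zip(population, fitnesses):
        running += fitness
        if running >= pick:
            return individual
    return population[-1]   # guard against floating-point rounding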
But there are still a few problems:
How will chromosomes be represented? For example, should one cell of the chromosome be one line of code?
How will chromosomes be generated? If they are lines of code, how do we generate them so that they are syntactically correct, etc.?
Example of a program that could be generated:
Create a script that takes N numbers as input and returns their mean as output.
If there have been any attempts to create such algorithms, I'd be glad to see any links/sources.
If you are sure you want to do this, you want genetic programming, rather than a genetic algorithm. GP allows you to evolve tree-structured programs. What you would do is give it a bunch of primitive operations: while($register), read($register), increment($register), decrement($register), divide($result $numerator $denominator), print, and progn2 (GP-speak for "execute two commands sequentially").
You could produce something like this:
progn2(
  progn2(
    read($1)
    while($1
      progn2(
        while($1
          progn2( #add the input to the total
            increment($2)
            decrement($1)
          )
        )
        progn2( #increment number of values entered, read again
          increment($3)
          read($1)
        )
      )
    )
  )
  progn2( #calculate result
    divide($1 $2 $3)
    print($1)
  )
)
You would use, as your fitness function, how close the evolved program comes to the real solution. And therein lies the catch: you have to calculate the real answer the traditional way anyway*. You would then also need something that translates the evolved tree into code in your language of choice. Note that, since you've got a potential infinite loop in there, you'll have to cut off execution after a while (there's no way around the halting problem), and it probably won't work. Shucks. Note also that my provided code will attempt to divide by zero.
*There are ways around this, but generally not terribly far around it.
It can be done, but it works very badly for most kinds of applications.
Genetic algorithms only work when the fitness function is continuous, i.e. when you can determine which candidates in your current population are closer to the solution than others, because only then will you get improvements from one generation to the next. I learned this the hard way when I had a genetic algorithm with one strongly-weighted non-continuous component in my fitness function. It dominated all the others, and because it was non-continuous there was no gradual advancement towards greater fitness: candidates that were almost correct in that aspect were not considered any fitter than ones that were completely incorrect.
Unfortunately, program correctness is utterly non-continuous. Is a program that stops with error X on line A better than one that stops with error Y on line B? Your program could be one character away from being correct, and still abort with an error, while one that returns a constant hardcoded result can at least pass one test.
And that's not even touching on the matter of the code itself being non-continuous under modifications...
Well, this is very possible, and #Jivlain correctly points out in his (nice) answer that genetic programming is what you are looking for (and not simple genetic algorithms).
Genetic programming is a field that has not reached a broad audience yet, partially because of some of the complications #MichaelBorgwardt indicates in his answer. But those are mere complications; it is far from true that this is impossible to do. Research on the topic has been going on for more than 20 years.
John Koza is one of the leading researchers on this (have a look at his 1992 work), and he demonstrated as early as 1996 how genetic programming can in some cases outperform naive GAs on some classic computational problems (such as evolving programs for Cellular Automata synchronization).
Here's a good Genetic Programming tutorial from Koza and Poli dated 2003.
For a more recent reference you might want to have a look at A Field Guide to Genetic Programming (2008).
Since this question was asked, the field of genetic programming has advanced a bit, and there have been some additional attempts to evolve code in configurations other than the tree structures of traditional genetic programming. Here are just a few of them:
PushGP - designed with the goal of evolving modular functions like human coders use, programs in this system store all variables and code on different stacks (one for each variable type). Programs are written by pushing and popping commands and data off of the stacks.
FINCH - a system that evolves Java byte-code. This has been used to great effect to evolve game-playing agents.
Various algorithms have started evolving C++ code, often with a step in which compiler errors are corrected. This has had mixed, but not altogether unpromising results. Here's an example.
Avida - a system in which agents evolve programs (mostly boolean logic tasks) using a very simple assembly code. Based on the older (and less versatile) Tierra.
The language isn't an issue. Regardless of the language, you have to define some higher level of mutation, otherwise it will take forever to learn.
For example, since any Ruby program can be written as a text string, you could just randomly generate text strings and optimize that. It would be better to generate only legal Ruby programs; however, that would also take forever.
If you were trying to build a sorting program and you had high-level operations like "swap", "move", etc., then you would have a much higher chance of success, as in the sketch below.
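To make that concrete, here is a tiny Python sketch (my own, purely illustrative) that "evolves" a sorted list using only a high-level swap mutation and a fitness of "how many adjacent pairs are in order":
import random

def fitness(seq):
    # Number of adjacent pairs already in order; higher is better.
    return sum(a <= b for a, b in zip(seq, seq[1:]))

def mutate(seq):
    # High-level "swap" operation: exchange two random positions.
    s = list(seq)
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

random.seed(1)
current = random.sample(range(20), 20)
for _ in range(5000):
    candidate = mutate(current)
    if fitness(candidate) >= fitness(current):
        current = candidate
print(current)   # usually sorted, or very nearly so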
In theory, a bunch of monkeys banging on a typewriter for an infinite amount of time will output all the works of Shakespeare. In practice, it isn't a practical way to write literature. Just because genetic algorithms can solve optimization problems doesn't mean that it's easy or even necessarily a good way to do it.
The biggest selling point of genetic algorithms, as you say, is that they are dirt simple. They don't have the best performance or mathematical background, but even if you have no idea how to solve your problem, as long as you can define it as an optimization problem you will be able to turn it into a GA.
Programs aren't really suited for GAs, precisely because code isn't good chromosome material. I have seen someone do something similar with (simpler) machine code instead of Python (although it was more of an ecosystem simulation than a GA per se), and you might have better luck if you encode your programs using automata / LISP or something like that.
On the other hand, given how alluring GAs are and how basically everyone who looks at them asks this same question, I'm pretty sure there are already people who have tried this somewhere - I just have no idea whether any of them succeeded.
Good luck with that.
Sure, you could write a "mutation" program that reads a program and randomly adds, deletes, or changes some number of characters. Then you could compile the result and see if the output is better than the original program. (However we define and measure "better".) Of course 99.9% of the time the result would be compile errors: syntax errors, undefined variables, etc. And surely most of the rest would be wildly incorrect.
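A minimal Python sketch of that character-level mutation idea (my own example; Python's compile() stands in for "does it even compile"):
import random, string

SOURCE = "total = x + y\n"

def mutate(src, n=3):
    # Replace n random characters with random characters from a small alphabet.
    chars = list(src)
    for _ in range(n):
        i = random.randrange(len(chars))
        chars[i] = random.choice(string.ascii_letters + string.digits + "+-*/= ")
    return "".join(chars)

def parses(src):
    try:
        compile(src, "<candidate>", "exec")
        return True
    except SyntaxError:
        return False

random.seed(0)
survivors = sum(parses(mutate(SOURCE)) for _ in range(10000))
print(survivors, "of 10000 random mutants even parse")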
Try some very simple problem. Say, start with a program that reads in two numbers, adds them together, and outputs the sum. Let's say that the goal is a program that reads in three numbers and calculates the sum. Just how long and complex such a program would be of course depends on the language. Let's say we have some very high level language that lets us read or write a number with just one line of code. Then the starting program is just 4 lines:
read x
read y
total=x+y
write total
The simplest program to meet the desired goal would be something like
read x
read y
read z
total=x+y+z
write total
So through a random mutation, we have to add "read z" and "+z", a total of 9 characters including the space and the new-line. Let's make it easy on our mutation program and say it always inserts exactly 9 random characters, that they're guaranteed to be in the right places, and that it chooses from a character set of just 26 letters plus 10 digits plus 14 special characters = 50 characters. What are the odds that it will pick the correct 9 characters? 1 in 50^9 = 1 in 2.0e15. (Okay, the program would work if instead of "read z" and "+z" it inserted "read w" and "+w", but then I'm making it easy by assuming it magically inserts exactly the right number of characters and always inserts them in the right places. So I think this estimate is still generous.)
1 in 2.0e15 is a pretty small probability. Even if the program runs a thousand times a second, and you can test the output that quickly, the chance is still just 1 in 2.0e12 per second, or 1 in 5.4e8 per hour, 1 in 2.3e7 per day. Keep it running for a year and the chance of success is still only 1 in 62,000.
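For anyone who wants to check that arithmetic, the numbers above follow directly from the exact value of 50^9:
p = 50 ** 9                      # 1,953,125,000,000,000  ~ 2.0e15
per_second = p / 1000            # ~ 2.0e12  (a thousand attempts per second)
per_hour = per_second / 3600     # ~ 5.4e8
per_day = per_hour / 24          # ~ 2.3e7
per_year = per_day / 365         # ~ 62,000
print(per_second, per_hour, per_day, per_year)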
Even a moderately competent programmer should be able to make such a change in, what, ten minutes?
Note that changes must come in "packets" that are correct as a whole. That is, if a mutation generates "reax z", that's only one character away from "read z", but it would still produce compile errors, and so would fail.
Likewise adding "read z" but changing the calculation to "total=x+y+w" is not going to work. Depending on the language, you'll either get errors for the undefined variable or at best it will have some default value, like zero, and give incorrect results.
You could, I suppose, theorize incremental solutions. Maybe one mutation adds the new read statement, then a future mutation updates the calculation. But without the calculation, the additional read is worthless. How will the program be evaluated to determine that the additional read is "a step in the right direction"? The only way I see to do that is to have an intelligent being read the code after each mutation and see if the change is making progress toward the desired goal. And if you have an intelligent designer who can do that, that must mean that he knows what the desired goal is and how to achieve it. At which point, it would be far more efficient to just make the desired change rather than waiting for it to happen randomly.
And this is an exceedingly trivial program in a very easy language. Most programs are, what, hundreds or thousands of lines, all of which must work together. The odds against any random process writing a working program are astronomical.
There might be ways to do something that resembles this in some very specialized application, where you are not really making random mutations, but rather making incremental modifications to the parameters of a solution. Like, we have a formula with some constants whose values we don't know. We know what the correct results are for some small set of inputs. So we make random changes to the constants, and if the result is closer to the right answer, change from there, if not, go back to the previous value. But even at that, I think it would rarely be productive to make random changes. It would likely be more helpful to try changing the constants according to a strict formula, like start with changing by 1000's, then 100's then 10's, etc.
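A small Python sketch of that kind of parameter search (my own, with a hypothetical formula and data): fit two unknown constants against a few known input/output pairs, shrinking the step size as you go.
# Fit c0, c1 in y = c0 + c1*x to a few known points by shrinking step sizes.
points = [(0, 3.0), (1, 5.0), (2, 7.0)]   # hypothetical known inputs/outputs

def error(c0, c1):
    return sum((c0 + c1 * x - y) ** 2 for x, y in points)

c0 = c1 = 0.0
for step in (1000.0, 100.0, 10.0, 1.0, 0.1, 0.01):
    improved = True
    while improved:
        improved = False
        for dc0, dc1 in ((step, 0), (-step, 0), (0, step), (0, -step)):
            if error(c0 + dc0, c1 + dc1) < error(c0, c1):
                c0, c1 = c0 + dc0, c1 + dc1
                improved = True
print(round(c0, 2), round(c1, 2))   # moves towards c0 = 3, c1 = 2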
I just want to give you a suggestion. I don't know how successful you'd be, but perhaps you could try to evolve a Core War bot with genetic programming. Your fitness function is easy: just let the bots compete in a game. You could start with well-known bots and perhaps a few random ones, then wait and see what happens.

Program memory footprint for different interpreters/compilers

Here's an excerpt from the Wikipedia entry on K programming language:
The small size of the interpreter and compact syntax of the language makes it possible for K applications to fit entirely within the level 1 cache of the processor.
What in particular makes K programs so small? Whether one uses the ' (each) operator in K, map in a compiled functional language like Haskell, or an equivalent for loop in a compiled imperative language like C, I can't imagine the compilers generating radically different assembly code, or what happens in the interpreter's internals being very different from a for loop. Is there anything special in K that makes its runtime and programs so small?
There's a similar question on SO, but the answers there basically clarify nothing.
There are ways of generating very compact code, for example the threaded code (http://en.wikipedia.org/wiki/Threaded_code) used by Forth and the like. It is likely that K is compiled into some form of it.
I am not the author of the wikipedia statement above, just somebody who uses K extensively.
As for the code, K does not unroll loops or make other changes to the program structure that would increase its size beyond what you're expecting. The executable interpreter itself is tiny. And the programs tend to be small (though not necessarily so). It's not the execution of any particular instructions for mapping, etc. that makes it more likely the code will execute entirely within the cache.
K programs tend to be small because they are stored as small, tight bytecode, and the syntax tends to yield very small amounts of code for a given operation.
Compare this Java program:
int r=0;
for(int i=0; i<100; i++) {
r+=i;
}
Against this K program that yields the same result:
+/!100
The amount of code being executed is similar, but the storage required by the program (much less typing!) is far less. K is great for those with repetitive stress injuries.
As for the data, the encouragement to work on multiple data items with single operations tends to make access sequential, in a manner friendly to the cache, rather than random. All of this merely makes it more likely that the program will be cache-friendly.
But these are all just tendencies and best practices within the language, in combination with the K executable itself. If you link in large amounts of additional code, special-case lots of functions, and randomize your indices before accessing your data, your program will be just as unfriendly to the cache as you'd expect.
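A rough Python/NumPy illustration of that access-pattern point (NumPy standing in for K's whole-array style; this is my sketch, not K):
import numpy as np

data = np.arange(100_000, dtype=np.int64)

# Whole-array form: one operation walks the data sequentially,
# which is friendly to the cache and the prefetcher.
total = int(data.sum())

# Element-at-a-time form over randomized indices: same arithmetic,
# but the access order defeats the cache (and interpreter overhead dominates).
indices = np.random.permutation(len(data))
total_random = 0
for i in indices:
    total_random += int(data[i])

assert total == total_random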

Reasoning about performance in Haskell

The following two Haskell programs for computing the n'th term of the Fibonacci sequence have greatly different performance characteristics:
fib1 n =
  case n of
    0 -> 1
    1 -> 1
    x -> (fib1 (x-1)) + (fib1 (x-2))

fib2 n = fibArr !! n
  where fibArr = 1:1:[a + b | (a, b) <- zip fibArr (tail fibArr)]
They are very close to mathematically identical, but fib2 uses the list notation to memoize its intermediate results, while fib1 has explicit recursion. Despite the potential for the intermediate results to be cached in fib1, the execution time becomes a problem even for fib1 25, suggesting that the recursive steps are always re-evaluated. Does referential transparency contribute anything to Haskell's performance? How can I know ahead of time whether it will or won't?
This is just an example of the sort of thing I'm worried about. I'd like to hear any thoughts about overcoming the difficulty inherent in reasoning about the performance of a lazily-executed, functional programming language.
Summary: I'm accepting 3lectrologos's answer, because the point that you don't reason so much about the language's performance as about your compiler's optimizations seems to be extremely important in Haskell - more so than in any other language I'm familiar with. I'm inclined to say that the importance of the compiler is the factor that differentiates reasoning about performance in lazy, functional languages from reasoning about the performance of any other type.
Addendum: Anyone happening on this question may want to look at the slides from Johan Tibell's talk about high performance Haskell.
In your particular Fibonacci example, it's not very hard to see why the second one should run faster (although you haven't specified what f2 is).
It's mainly an algorithmic issue:
fib1 implements the purely recursive algorithm and (as far as I know) Haskell has no mechanism for "implicit memoization".
fib2 uses explicit memoization (using the fibArr list to store previously computed values).
In general, it's much harder to make performance assumptions for a lazy language like Haskell, than for an eager one. Nevertheless, if you understand the underlying mechanisms (especially for laziness) and gather some experience, you will be able to make some "predictions" about performance.
Referential transparency (potentially) increases performance in (at least) two ways:
First, you (as a programmer) can be sure that two calls to the same function will always return the same result, so you can exploit this in various cases to improve performance.
Second (and more importantly), the Haskell compiler can be sure of the above fact, and this enables many optimizations that can't be applied in impure languages (if you've ever written a compiler or have any experience with compiler optimizations, you are probably aware of the importance of this).
If you want to read more about the reasoning behind the design choices (laziness, pureness) of Haskell, I'd suggest reading this.
Reasoning about performance is generally hard in Haskell and lazy languages in general, although not impossible. Some techniques are covered in Chris Okasaki's Purely Functional Data Structures (also available online in a previous version).
Another way to ensure performance is to fix the evaluation order, either using annotations or continuation passing style. That way you get to control when things are evaluated.
In your example you might calculate the numbers "bottom up" and pass the previous two numbers along to each iteration:
fib n = fib_iter(1,1,n)
  where
    fib_iter(a,b,0) = a
    fib_iter(a,b,1) = a
    fib_iter(a,b,n) = fib_iter(a+b,a,n-1)
This results in a linear time algorithm.
Whenever you have a dynamic programming algorithm where each result relies on the N previous results, you can use this technique. Otherwise you might have to use an array or something completely different.
Your implementation of fib2 uses memoization, but each time you call fib2 it rebuilds the "whole" result. Turn on GHCi time and size profiling:
Prelude> :set +s
If it were doing memoisation "between" calls, the subsequent calls would be faster and use no memory. Call fib2 20000 twice and see for yourself.
By comparison, a more idiomatic version where you define the exact mathematical identity:
-- the infinite list of all Fibonacci numbers.
fibs = 1 : 1 : zipWith (+) fibs (tail fibs)
memoFib n = fibs !! n
actually does use memoisation, explicitly as you can see. If you run memoFib 20000 twice, you'll see the time and space taken the first time, while the second call is instantaneous and takes no memory. No magic and no implicit memoization as a comment might have hinted at.
Now about your original question: optimizing and reasoning about performance in Haskell...
I wouldn't call myself an expert in Haskell; I have only been using it for 3 years, 2 of them at my workplace, but I did have to optimize it and learn to reason somewhat about its performance.
As mentioned in another post, laziness is your friend and can help you gain performance; however, YOU have to be in control of what is lazily evaluated and what is strictly evaluated.
Check this comparison of foldl vs foldr
foldl actually stores "how" to compute the value, i.e. it is lazy. In some cases you save time and space by being lazy, as with the "infinite" fibs: the infinite fibs list doesn't generate all of the numbers, but knows how to. When you know you will need the value, you might as well just get it, "strictly" speaking... That's where strictness annotations are useful, to give you back control.
I recall reading many times that in lisp you have to "minimize" consing.
Understanding what is strictly evaluated and how to force it is important, but so is understanding how much "trashing" of memory you do. Remember that Haskell data is immutable, which means that updating a "variable" actually creates a copy with the modification. Prepending with (:) is vastly more efficient than appending with (++), because (:) does not copy memory whereas (++) does. Whenever a big atomic block is updated (even for a single char), the whole block needs to be copied to represent the "updated" version. The way you structure your data and update it can have a big impact on performance. The GHC profiler is your friend and will help you spot these issues. Sure, the garbage collector is fast, but not having it do anything is faster!
Cheers
Aside from the memoization issue, fib1 also uses non-tail-call recursion. Tail-call recursion can be refactored automatically into a simple goto and performs very well, but the recursion in fib1 cannot be optimized in this way, because you need the stack frame from each instance of fib1 in order to calculate the result. If you rewrote fib1 to pass a running total as an argument, thus allowing a tail call instead of needing to keep the stack frame for the final addition, the performance would improve immensely. But not as much as the memoized example, of course :)
Since allocation is a major cost in any functional language, an important part of understanding performance is to understand when objects are allocated, how long they live, when they die, and when they are reclaimed. To get this information you need a heap profiler. It's an essential tool, and luckily GHC ships with a good one.
For more information, read Colin Runciman's papers.
