How can I tell if my tail-recursive Scheme function is being optimized correctly

I have a Scheme function whose basic form looks like this:
(define (foo param var)
  (cond ((end-condition) (return-something))
        ((other-end-condit) (return-something-else))
        (else
         (let ((newvar (if some-condition
                           (make-some-updated var)
                           (destructive-update! var))))
           (foo param newvar)))))
I feel like this is pretty clearly something that should be optimized into iteration during compilation, but when I compile it (with Chicken) it still runs incredibly slowly. (If I understand the R5RS spec correctly: http://groups.csail.mit.edu/mac/ftpdir/scheme-reports/r5rs-html.old/r5rs_22.html, this looks like it should work.)
I wrote the exact same algorithm with a while loop in Python, and the interpreted program terminated in seconds. My compiled Scheme takes about 15 minutes, and I am positive the algorithm is the same.
I think this is a case of tail recursion not getting optimized, as I can't think what else it could possibly be, but I can't figure it out. Any ideas? For what it's worth, the var is a hash and the destructive update merely adds an element, although it also returns the updated hash to be passed in as newvar.

That function is indeed tail-recursive, so you're good there. However, tail recursion just means that stack space won't grow, not that your program is guaranteed to run fast. If you want to see if your program is really running tail-recursively, run it while watching the total memory taken by Chicken (and make sure you aren't allocating memory in make-some-updated, which you might be). If the memory grows, then Chicken isn't compiling your program correctly according to the standard.
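If you want a quick sanity check that tail calls really do run in constant space in your setup (a minimal sketch of my own, not your code):
;; This loop makes ten million tail calls and allocates nothing.
;; If Chicken handles tail calls properly, memory stays flat while it runs.
(define (count-down n)
  (if (zero? n)
      'done
      (count-down (- n 1))))

(count-down 10000000)
If memory stays flat here but balloons in your program, the growth is coming from allocation in the loop body (e.g. make-some-updated) rather than from missing tail-call optimization.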

Related

Clojure comp doesn't tail-call-optimize (can create StackOverflow exception)

I got stuck on a Clojure program handling a very large amount of data (image data). When the image was larger than 128x128, the program would crash with a StackOverflow exception. Because it worked for smaller images, I knew it wasn't an infinite loop.
There were lots of possible causes of high memory usage, so I spent time digging around. Making sure I was using lazy sequences correctly, making sure to use recur as appropriate, etc. The turning point came when I realized that this:
at clojure.core$comp$fn__5792.invoke(core.clj:2569)
at clojure.core$comp$fn__5792.invoke(core.clj:2569)
at clojure.core$comp$fn__5792.invoke(core.clj:2569)
referred to the comp function.
So I looked at where I was using it:
(defn pipe [val funcs]
  ((apply comp funcs) val))

(pipe the-image-vec
      (map
        (fn [index] (fn [image-vec] ( ... )))
        (range image-size)))
I was doing per-pixel operations, mapping a function to each pixel to process it. Interestingly, comp doesn't appear to benefit from tail-call optimization, or to do any kind of sequential application of functions. It seems that it was just composing things in the basic way, which, when there are 65k functions, understandably overflows the stack. Here's the fixed version:
(defn pipe [val funcs]
  (cond
    (= (count funcs) 0) val
    :else (recur ((first funcs) val) (rest funcs))))
recur ensures the recursion gets tail-call optimized, preventing a stack buildup.
If anybody can explain why comp works this way (or rather, doesn't work this way), I'd love to be enlightened.
First, a more straightforward MCVE:
(def fs (repeat 1e6 identity))
((apply comp fs) 99)
#<StackOverflowError...
But why does this happen? If you look at the (abridged) comp source:
(defn comp
  ([f g]
   (fn
     ([x] (f (g x)))))
  ([f g & fs]
   (reduce1 comp (list* f g fs))))
You can see that the whole thing is basically just 2 parts:
The two-argument overload, which does the main work of wrapping each composed function call in another function.
Reducing over the functions using comp.
I believe the first point is the problem. comp works by taking the list of functions and continually wrapping each set of calls in functions. Eventually, this will exhaust the stack space if you try to compose too many functions, as it ends up creating a massive function that's wrapping many other functions.
So, why can TCO not help here? Because, unlike most StackOverflowErrors, recursion is not the problem here. The recursive calls only ever reach one frame deep in the variadic case at the bottom. The problem is the building up of a massive function, which can't simply be optimized away.
Why were you able to "fix" it though? Because you have access to val, so you're able to evaluate the functions as you go instead of building up one function to call later. comp was written using a simple implementation that works fine for most cases, but fails for extreme cases like the one you presented. It's fairly trivial to write a specialized version that handles massive collections though:
(defn safe-comp [& fs]
  (fn [value]
    (reduce (fn [acc f]
              (f acc))
            value
            (reverse fs))))
Note, though, that this doesn't handle multiple arities like the core version does.
Honestly though, in 3 and a bit years of using Clojure, I've never once written (apply comp ...). While it's certainly possible you have experienced a case I've never needed to deal with, it's more likely that you're using the wrong tool for the job here. When this code is complete, post it on Code Review and we may be able to suggest better ways of accomplishing what you're trying to do.

About the speed of user-defined versus built-in procedures in Scheme (related to SICP exercise 1.23)

// My question was long, so I shortened it.
In Scheme, do user-defined procedures take more time than built-in procedures (assuming both do the same thing)?
// That is my short question.
// Below is the long version.
Exercise 1.23 (quoted below) asks: why isn't the (next) procedure twice as fast as (+ 1)?
Here are my guesses.
Reason 1: (next) contains an if (a special form), and that consumes time.
Reason 2: the extra function call consumes time.
http://community.schemewiki.org/?sicp-ex-1.23 says reason 1 is right.
I want to know whether reason 2 is also right.
So I rewrote the (next) procedure: I removed the if and checked divisibility by 2 just once before using (next), so that (next) only does + 2. Then I remeasured the time. It was faster than before, but the ratio was still not 2. So I rewrote it again, replacing the call to (next) with (+ num 2) directly. Finally the ratio became 2, or almost 2. That made me wonder why, and it is the reason I suspect reason 2. I want to know what the correct answer is.
P.S. I'm also curious why some primes are tested (much?) faster than others. It doesn't make sense to me: if a number n is prime, the process should check divisors from 2 up to sqrt(n). Yet some numbers are tested noticeably faster. Do you know why?
Exercise 1.23. The smallest-divisor procedure shown at the start of this section does lots of needless testing: After it checks to see if the number is divisible by 2 there is no point in checking to see if it is divisible by any larger even numbers. This suggests that the values used for test-divisor should not be 2, 3, 4, 5, 6, ..., but rather 2, 3, 5, 7, 9, .... To implement this change, define a procedure next that returns 3 if its input is equal to 2 and otherwise returns its input plus 2. Modify the smallest-divisor procedure to use (next test-divisor) instead of (+ test-divisor 1). With timed-prime-test incorporating this modified version of smallest-divisor, run the test for each of the 12 primes found in exercise 1.22. Since this modification halves the number of test steps, you should expect it to run about twice as fast. Is this expectation confirmed? If not, what is the observed ratio of the speeds of the two algorithms, and how do you explain the fact that it is different from 2?
The book is here:
http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-11.html#%_sec_1.2.6
Your long and short questions are actually addressing two different problems.
Exercise 1.23 (quoted below) asks: why isn't the (next) procedure twice as fast as (+ 1)?
You provide two possible reasons to explain the relative lack of speed-up (and a 30% speed gain is already a good achievement):
the apparent use of a function next instead of a simple arithmetic expression (as I understand it from your explanations), and
the inclusion of a test in that very next function.
The first reason is an illusion: (+ 1) is a function incrementing its argument, so in both cases there is a function call (although the increment function is certainly a builtin one, a point which is addressed by your other question).
The second reason is indeed relevant: any test in a block of code will introduce a potential redirection in the code flow (a jump from the current executing instruction to some other address in the program), which may incur a delay. Note that this is analogous to a function call, which will also induce an address jump, so both reasons actually resolve to only one potential cause.
Regarding your short question, builtin functions are indeed usually faster, because the compiler is able to give them special treatment in certain cases. This is because:
Knowing the semantics of the builtins, compiler designers can include special rules pertaining to the algebraic properties of these builtins, and for instance fuse successive increments into a single call, or eliminate an increment and a decrement called in sequence.
A builtin function call, when not optimised away, will be converted into a native machine-code function call, which may not have to abide by all the calling-convention rules. If your Scheme compiler produces machine code from the source, there might be only a marginal gain, but if it produces so-called bytecode, the gain can be quite substantial, since user-written functions are translated to that bytecode format and still require some form of interpretation. If you are only using an interpreter, the gain is even greater.
I believe this is highly implementation and setting dependent. In many implementations there are different kinds of optimizations and in some there are none. To get the best performance you may need to compile your code or use settings to reduce debug information / stack traces. Getting the best performance in one implementation can worsen the performance in another.
Primitive procedures are usually compiled to native code, and in some implementations, like Ikarus, they are even inlined. (When you do (map car lst), Ikarus changes it to (map (lambda (x) (car x)) lst), since car isn't a procedure.) Lambdas are supposed to be cheap. Remember that many Scheme implementations convert your code to CPS, and that means one procedure call for each expression in the body of a procedure call. It will never be as fast as machine code, since it needs to load closure variables.
To check which of the two explanations is correct for your implementation, first make next do the same as it originally did but without the if, i.e. just add to the argument; the difference is then the extra call and nothing else. Then you can inline next by writing its body directly in your procedure, substituting the arguments for the operands. If it is still slower, then it's the if. You need to run the tests several times, preferably with a large enough number of primes that a run takes a minute or so. Use time or similar in the Scheme implementations to get the differences in milliseconds. I use the Unix time command as well to see how the OS reflects on it.
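Concretely, the comparison might look something like this (a sketch with my own names; divides? and the timing harness are assumed to be defined as in SICP section 1.2.6):
;; Original: an extra call plus an if inside next.
(define (next n) (if (= n 2) 3 (+ n 2)))
(define (find-divisor/next n d)
  (cond ((> (* d d) n) n)
        ((divides? d n) d)
        (else (find-divisor/next n (next d)))))

;; Step 1: keep the call, drop the if.  The caller checks divisibility
;; by 2 once and starts d at 3, so next* never sees 2.
(define (next* n) (+ n 2))
(define (find-divisor/call n d)
  (cond ((> (* d d) n) n)
        ((divides? d n) d)
        (else (find-divisor/call n (next* d)))))

;; Step 2: inline the body; no call and no if.
(define (find-divisor/inline n d)
  (cond ((> (* d d) n) n)
        ((divides? d n) d)
        (else (find-divisor/inline n (+ d 2)))))
Timing the same primes with each of the three variants separates the cost of the call from the cost of the if.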
You should also test to see whether you get the same answer in some other implementation. It's not as if there aren't enough Scheme implementations out there, so knock yourself out! The differences between them might amaze you. I always use Racket (raco exe source to make an executable) and Ikarus. When doing a large test I include Chicken, Gambit and Chibi.

Improve performance of a ClojureScript program

I have a ClojureScript program that mainly performs math calculations on collections. It was developed in idiomatic, host-independent Clojure, so it's easy to benchmark it. To my surprise (and contrary to what the answers to Which is faster, Clojure or ClojureScript (and why)? would suggest), the same code in ClojureScript runs 5-10 times slower than its Clojure equivalent.
Here is what I did. I opened a lein repl and a browser repl at http://clojurescript.net/. Then I tried these snippets in both REPLs.
(time (dotimes [x 1000000] (+ 2 8)))
(let [coll (list 1 2 3)] (time (dotimes [x 1000000] (first coll))))
Then I opened a JavaScript console at the browser REPL and wrote a minimalist benchmark function:
function benchmark(count, fun) {
  var t0 = new Date();
  for (i = 0; i < count; i++) {
    fun();
  }
  var t1 = new Date();
  return t1.getTime() - t0.getTime();
}
Back to the browser REPL:
(defn multiply [] (* 42 1.2))
Then I tried both native JavaScript multiplication and its ClojureScript variant in the JavaScript console:
benchmark(1000000, cljs.user.multiply);
benchmark(1000000, function(){ 42 * 1.2 });
What I found:
Native JavaScript math is comparable to math in Clojure.
ClojureScript is 5-10 times slower than either of them.
Now my question is, how can I improve the performance of my ClojureScript program?
There are some approaches I've considered so far
Fall back to using mutable JavaScript arrays and objects behind the scenes. (Is this possible at all?)
Fall back to using native JavaScript math operators. (Is this possible at all?)
Use JavaScript arrays explicitly with (aget js/v 0).
Use a less ambitious implementation of Clojure-for-JavaScript, like https://github.com/chlorinejs/chlorine or https://github.com/gozala/wisp. They generate more idiomatic JavaScript, but they don't support namespaces, which I'm using a lot.
JavaScript has explicit return, so
function () { 42 * 1.2 }
does nothing; you'll want to benchmark
function () { return 42 * 1.2 }
instead. This happens to be exactly what the ClojureScript version compiles to, so there won't be any difference (in ClojureScript, basic arithmetic functions in non-higher-order usage get inlined as regular operator-based JavaScript expressions).
Now, Clojure is definitely faster than ClojureScript at this point. Part of the reason is that Clojure is still more carefully tuned than ClojureScript, although ClojureScript is improving at a pretty great pace in this department. Another part is that Clojure has a more mature JIT to take advantage of (the modern JS engines, V8 in particular, are pretty great, but not quite HotSpot-grade just yet).
The magnitude of the difference is somewhat tricky to measure, though; the fact that JITs are involved means that a loop with a body free of any side effects, such as the one in the question, will likely be optimized away, possibly even on the first run through it (through the use of on-stack replacement, used by HotSpot and I think also V8 -- I'd have to check to be sure though). So, better to benchmark something like
(def arr (long-array 1))
;;; benchmark this
(dotimes [_ 1000000]
  (aset (longs arr) 0 (inc (aget (longs arr) 0))))
(longs call to avoid reflection in Clojure; could also use ^longs hint).
Finally, it certainly is the case, in both Clojure and ClojureScript, that for certain kinds of particularly performance-sensitive code it's best to use native arrays and such. Happily, there's no problem with doing so: on the ClojureScript side, you've got array, js-obj, aget, aset, make-array, you can use :mutable metadata on fields in deftype to be able to set! them in method bodies etc.
ClojureScript math is JavaScript math. Yes, if performance is critical, use JavaScript arrays and the provided low-level operators; these are guaranteed to produce optimal code where possible (i.e. no higher-order usage). The ClojureScript persistent data structures are written this way: array mutation, arithmetic, bit twiddling.
I have a small example of efficient ClojureScript - http://github.com/swannodette/cljs-stl/blob/master/src/cljs_stl/spectral/demo.cljs that you might find useful as a guide.

How can I determine why my Racket code runs so slowly?

Just for fun, I wrote a quick Racket command-line script to parse old Unix fortune files. Fortune files are just giant text files, with a single % on a line by itself separating entries.
Just as a quick first hack, I wrote the following Racket code:
(define fortunes
  (with-input-from-file "fortunes.txt"
    (λ ()
      (regexp-split #rx"%" (port->string)))))
I thought it would run nearly instantly. Instead, it takes a very long time to run—on the order of a couple of minutes. In comparison, what I think of as equivalent Python:
with open('fortunes.txt') as f:
    fortunes = f.read().split('%')
executes immediately, with equivalent results to the Racket code.
What am I doing wrong here? Yes, there's some obvious low-hanging fruit, such as I'm sure that things would be better if I didn't slurp the whole file into RAM with port->string, but the behavior is so pathologically bad I feel as if I must be doing something stupid at a much higher level than that.
Is there a more Racket-like way to do this with equivalently better performance? Is Racket I/O really poor for some operations? Is there some way to profile my code slightly deeper than the naïve profiler in DrRacket so I can figure out what about a given line is causing a problem?
EDIT: The fortunes file I'm using is FreeBSD's as found at http://fortunes.cat-v.org/freebsd/, which weighs in at about 2 MB. The best runtime for Racket 5.1.3 x64 on OS X Lion was:
real 1m1.479s
user 0m57.400s
sys 0m0.691s
For Python 2.7.1 x64, it was:
real 0m0.057s
user 0m0.029s
sys 0m0.015s
Eli's right that the time is being spent almost entirely in regexp-split (although a full second appears to be spent in port->string), but it's not clear to me that there's a preferred yet equally simple method.
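(For reference, per-phase numbers like these can be obtained by wrapping each step in Racket's time form; a minimal sketch, not the original code:)
(define raw
  (time (with-input-from-file "fortunes.txt" (λ () (port->string)))))
(define fortunes
  (time (regexp-split #rx"%" raw)))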
Looks like most of the cost is due to running regexp-split on a string. The fastest alternative that I found was splitting a byte string, then converting the results to strings:
(map bytes->string/utf-8
     (call-with-input-file "db"
       (λ (i) (regexp-split #rx#"%" (port->bytes i)))))
With a random fortune DB of ~2MB, your code takes about 35s, and this version takes 33ms.
(I'm not sure why it takes so long on a string, yet, but it's definitely way too slow.)
EDIT: We tracked it to an efficiency bug. Rough description: when Racket does a regexp-match on a string, it will convert large parts of the string to a byte string (in UTF-8) for the search. This function is the core one that is implemented in C. regexp-split uses it repeatedly to find all of the matches, and therefore keeps re-doing this conversion. I'm looking at a way to do things better, but until it's fixed, use the above workaround.
This is now fixed in the latest Git HEAD version of Racket, see: github.com/plt/racket/commit/8eefaba. Your example now runs in 0.1 seconds for me.

Scheme Infix to Postfix

Let me establish that this is part of a class assignment, so I'm definitely not looking for a complete code answer. Essentially we need to write a converter in Scheme that takes a list representing a mathematical equation in infix format and then output a list with the equation in postfix format.
We've been provided with the algorithm to do so, simple enough. The issue is that there is a restriction against using any of the available imperative language features. I can't figure out how to do this in a purely functional manner. This is our first introduction to functional programming in my program.
I know I'm going to be using recursion to iterate over the list of items in the infix expression like such.
(define (itp ifExpr)
  (cond
    ;; do some processing here, one clause per case
    (else (itp (cdr ifExpr)))))
I have all of the processing implemented (at least as best I can without knowing how to do the rest) but the algorithm I'm using to implement this requires that operators be pushed onto a stack and used later. My question is how do I implement a stack in this function that is available to all of the recursive calls as well?
(Updated in response to the OP's comment; see the new section below the original answer.)
Use a list for the stack and make it one of the loop variables. E.g.
(let loop ((stack (list))
           ... ; other loop variables here,
               ; like e.g. what remains of the infix expression
           )
  ... ; loop body
  )
Then whenever you want to change what's on the stack at the next iteration, well, basically just do so.
(loop (cons 'foo stack) ...)
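To make the shape concrete, here is a tiny self-contained example (deliberately not the assignment itself): a named let that threads a stack through the iterations in order to reverse a list.
;; The stack is just one of the loop variables; each iteration passes
;; an updated stack to the next one.
(define (rev lst)
  (let loop ((stack '())
             (rest lst))
    (if (null? rest)
        stack
        (loop (cons (car rest) stack) (cdr rest)))))

(rev '(1 2 3))   ; => (3 2 1)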
Also note that if you need to make a bunch of "updates" in sequence, you can often model that with a let* form. This doesn't really work with vectors in Scheme (though it does work with Clojure's persistent vectors, if you care to look into them), but it does with scalar values and lists, as well as SRFI 40/41 streams.
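And a tiny illustration of that let* remark (again just an illustrative sketch): each binding sees the value produced by the previous one, so a sequence of "updates" reads top to bottom.
(let* ((stack '())
       (stack (cons 'a stack))
       (stack (cons 'b stack)))
  stack)   ; => (b a)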
In response to your comment about loops being ruled out as an "imperative" feature:
(let loop ((foo foo-val)
           (bar bar-val))
  (do-stuff))
is syntactic sugar for
(letrec ((loop (lambda (foo bar) (do-stuff))))
  (loop foo-val bar-val))
letrec then expands to a form of let which is likely to use something equivalent to a set! or local define internally, but is considered perfectly functional. You are free to use some other symbol in place of loop, by the way. Also, this kind of let is called 'named let' (or sometimes 'tagged').
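For reference, that expansion looks roughly like this (a sketch; a real implementation binds loop to an "undefined" placeholder rather than #f):
(let ((loop #f))
  (set! loop (lambda (foo bar) (do-stuff)))
  (loop foo-val bar-val))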
You will likely remember that the basic form of let:
(let ((foo foo-val)
      (bar bar-val))
  (do-stuff))
is also syntactic sugar over a clever use of lambda:
((lambda (foo bar) (do-stuff)) foo-val bar-val)
so it all boils down to procedure application, as is usual in Scheme.
Named let makes self-recursion prettier, that's all; and as I'm sure you already know, (self-) recursion with tail calls is the way to go when modelling iterative computational processes in a functional way.
Clearly this particular "loopy" construct lends itself pretty well to imperative programming too -- just use set! or data structure mutators in the loop's body if that's what you want to do -- but if you stay away from destructive function calls, there's nothing inherently imperative about looping through recursion or the tagged let itself at all. In fact, looping through recursion is one of the most basic techniques in functional programming and the whole point of this kind of homework would have to be teaching precisely that... :-)
If you really feel uncertain about whether it's ok to use it (or whether it will be clear enough that you understand the pattern involved if you just use a named let), then you could just desugar it as explained above (possibly using a local define rather than letrec).
I'm not sure I understand this all correctly, but what's wrong with this simpler solution:
First:
You test if your argument is indeed a list:
If yes: append the map of the function over the tail, (map postfixer (cdr lst)), to a list containing only the head. The map just applies the postfixer again to each successive element of the tail.
If not, just return the argument unchanged.
Three lines of Scheme in my implementation translate:
(postfixer '(= 7 (/ (+ 10 4) 2)))
To:
(7 ((10 4 +) 2 /) =)
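A minimal sketch matching that description and output (a reconstruction, not necessarily the answerer's exact three lines):
(define (postfixer expr)
  (if (list? expr)
      ;; move the head (the operator) to the end, recursing into the tail
      (append (map postfixer (cdr expr)) (list (car expr)))
      expr))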
The recursion via map needs no looping, not even tail looping, and no mutation, and it shows the functional style by applying map. Unless I'm totally misunderstanding your point here, I don't see the need for all that complexity above.
Edit: Oh, now I see: it's infix, not prefix, to postfix. Well, the same general idea applies, except you take the second element rather than the first.
