Takeuchi numbers in Clojure (performance)

When computing Takeuchi numbers, we need to figure out the number of times the function calls itself. I quickly came up with:
(def number (atom 0))

(defn tak [x y z]
  (if (<= x y)
    y
    (do
      (dosync (swap! number inc))
      (tak (tak (dec x) y z)
           (tak (dec y) z x)
           (tak (dec z) x y)))))

(defn takeuchi_number [n]
  (dosync (reset! number 0))
  (tak n 0 (inc n))
  @number)

(time (takeuchi_number 10))
; 1029803
; "Elapsed time: 11155.012266 msecs"
But the performance is really bad. How can I make it blazingly fast in Clojure?

As someone else has noted, removing the dosync seems to improve things by a factor of 10, but that isn't the whole story: once the JVM has hotspotted your code, it gets a further factor of 10 faster. This is why you should be using criterium or similar to test real-world speed...
(def number (atom 0))

(defn tak [x y z]
  (if (<= x y)
    y
    (do
      (swap! number inc)
      (tak (tak (dec x) y z)
           (tak (dec y) z x)
           (tak (dec z) x y)))))

(defn takeuchi_number [n]
  (reset! number 0)
  (tak n 0 (inc n))
  @number)
;=> (time (takeuchi_number 10))
; "Elapsed time: 450.028 msecs"
; 1029803
;=> (time (takeuchi_number 10))
; "Elapsed time: 42.008 msecs"
; 1029803
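For steadier numbers than time, here is a minimal benchmarking sketch using the criterium library mentioned above (assuming the criterium dependency is on your classpath):

(require '[criterium.core :refer [quick-bench]])
(quick-bench (takeuchi_number 10)) ; runs warm-up rounds first, then reports mean time and deviation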
The original with dosync was about 5s on my machine, so we're already two orders of magnitude up! Is this the best we can do? Let's refactor to pure functions and get away from the counter.
(defn tak [c x y z]
  (if (<= x y)
    [c y]
    (let [[a- x-] (tak 0 (dec x) y z)
          [b- y-] (tak 0 (dec y) z x)
          [c- z-] (tak 0 (dec z) x y)]
      (recur (+' 1 a- b- c- c) x- y- z-))))

(defn takeuchi_number [n]
  (tak 0 n 0 (inc n)))
;=> (time (takeuchi_number 10))
; "Elapsed time: 330.741 msecs"
; [1029803 11]
;=> (time (takeuchi_number 10))
; "Elapsed time: 137.829 msecs"
; [1029803 11]
;=> (time (takeuchi_number 10))
; "Elapsed time: 136.866 msecs"
; [1029803 11]
Not as good. The cost of holding the state in the vector and passing it around is likely the overhead. However, now that we've refactored to purity, let's take advantage of our good behaviour!
=> (def tak (memoize tak))
#'euler.tak/tak
=> (time (takeuchi_number 10))
"Elapsed time: 1.401 msecs"
[1029803 11]
A healthy 3000 or so times faster. Works for me.
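The re-def trick works because the recursive calls inside tak resolve through the var at call time, so once tak is rebound to the memoized wrapper, the inner calls hit the cache too, and results are shared across the whole recursion tree.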

A purely functional way of implementing this would be for your tak function to return a pair [result count], where result is the actual result of the tak computation and count is the number of times the function recursively called itself. But in this case, I think that would cause all sorts of painful contortions in the body of the function and wouldn't be worth it.
The usage of atom here, while idiomatic Clojure, imposes unnecessary overhead; it's really targeted at synchronizing independent updates to shared state between threads. Basically what you want is a mutable object you can pass around to recursive function calls in the same thread, with no synchronization required. An array should be sufficient for that purpose:
(defn tak [x y z ^longs counter]
  (if (<= x y)
    y
    (do
      (aset counter 0 (inc (aget counter 0)))
      (tak (tak (dec x) y z counter)
           (tak (dec y) z x counter)
           (tak (dec z) x y counter)
           counter))))

(defn takeuchi_number [n]
  (let [counter (long-array [0])]
    (tak n 0 (inc n) counter)
    (aget counter 0)))
Note that I've moved the counter definition from being a global constant to being a parameter on the helper function, to ensure that the mutable state is only used locally within that function.
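A close variant of the same single-threaded-mutation idea can be sketched with a volatile! (Clojure 1.7+), which also skips the atom's compare-and-swap loop, though unlike the long array it still boxes the count. The hyphenated name here is mine:

(defn takeuchi-number [n]
  (let [counter (volatile! 0)]
    ((fn tak [x y z]
       (if (<= x y)
         y
         (do
           (vswap! counter inc)   ; plain unsynchronized update
           (tak (tak (dec x) y z)
                (tak (dec y) z x)
                (tak (dec z) x y)))))
     n 0 (inc n))
    @counter))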

Related

SICP 1.45 - Why are these two higher order functions not equivalent?

I'm going through the exercises in SICP and am wondering if someone can explain the difference between these two seemingly equivalent functions that give different results! Is this because of rounding? I'm thinking the order of the functions shouldn't matter here, but somehow it does. Can someone explain what's going on and why they differ?
Details:
Exercise 1.45: ...saw that finding a fixed point of y => x/y does not converge, and that this can be fixed by average damping. The same method works for finding cube roots as fixed points of the average-damped y => x/y^2. Unfortunately, the process does not work for fourth roots: a single average damp is not enough to make a fixed-point search for y => x/y^3 converge.
On the other hand, if we average damp twice (i.e., use the average damp of the average damp of y => x/y^3), the fixed-point search does converge. Do some experiments to determine how many average damps are required to compute nth roots as a fixed-point search based upon repeated average damping of y => x/y^(n-1). Use this to implement a simple procedure for computing nth roots using fixed-point, average-damp, and the repeated procedure of Exercise 1.43. Assume that any arithmetic operations you need are available as primitives.
My answer (note order of repeat and average-damping):
(define (nth-root-me x n num-repetitions)
  (fixed-point (repeat (average-damping (lambda (y)
                                          (/ x (expt y (- n 1)))))
                       num-repetitions)
               1.0))
I see an alternate web solution where repeat is called directly on average damp and then that function is called with the argument
(define (nth-root-web-solution x n num-repetitions)
  (fixed-point
   ((repeat average-damping num-repetitions)
    (lambda (y) (/ x (expt y (- n 1)))))
   1.0))
Now, calling both of these, there is a difference in the answers and I can't understand why! My understanding is that the order of the functions shouldn't affect the output (they're associative, right?), but clearly it does!
> (nth-root-me 10000 4 2)
10.050110705350287
> (nth-root-web-solution 10000 4 2)
10.0
I did more tests and it's always like this: my answer is close, but the other answer is almost always closer! Can someone explain what's going on? Why aren't these equivalent? My guess is that the order of calling these functions matters, but they seem associative to me.
For example:
(repeat (average-damping (lambda (y) (/ x (expt y (- n 1)))))
        num-repetitions)

vs

((repeat average-damping num-repetitions)
 (lambda (y) (/ x (expt y (- n 1)))))
Other helper functions:

(define (fixed-point f first-guess)
  (define (close-enough? v1 v2)
    (< (abs (- v1 v2))
       tolerance))
  (let ((next-guess (f first-guess)))
    (if (close-enough? next-guess first-guess)
        next-guess
        (fixed-point f next-guess))))

(define (average-damping f)
  (lambda (x) (average x (f x))))

(define (repeat f k)
  (define (repeat-helper f k acc)
    (if (<= k 1)
        acc
        ;; compose the original function with the accumulated one
        (repeat-helper f (- k 1) (compose f acc))))
  (repeat-helper f k f))

(define (compose f g)
  (lambda (x)
    (f (g x))))
You are asking why “two seemingly equivalent functions” produce a different result, but the two functions are in effect very different.
Let’s try to simplify the problem to see why they are different. The only difference between the two functions are the two expressions:
(repeat (average-damping (lambda (y) (/ x (expt y (- n 1)))))
        num-repetitions)

((repeat average-damping num-repetitions)
 (lambda (y) (/ x (expt y (- n 1)))))
In order to simplify our discussion, we assume num-repetitions equal to 2, and a simpler function than that lambda, for instance the following function:
(define (succ x) (+ x 1))
So the two different parts are now:
(repeat (average-damping succ) 2)
and
((repeat average-damping 2) succ)
Now, for the first expression, (average-damping succ) returns a numeric function that calculates the average between a parameter and its successor:
(define h (average-damping succ))
(h 3) ; => (3 + succ(3))/2 = (3 + 4)/2 = 3.5
So, the expression (repeat (average-damping succ) 2) is equivalent to:
(lambda (x) ((compose h h) x))
which is equivalent to:
(lambda (x) (h (h x)))
Again, this is a numeric function and if we apply this function to 3, we have:
((lambda (x) (h (h x))) 3) ; => (h 3.5) => (3.5 + 4.5)/2 = 4
In the second case, instead, we have (repeat average-damping 2) that produces a completely different function:
(lambda (x) ((compose average-damping average-damping) x))
which is equivalent to:
(lambda (x) (average-damping (average-damping x)))
You can see that the result this time is a higher-order function, not a numeric one: it takes a function x and applies the average-damping transformation to it twice. Let's verify this by applying this function to succ and then applying the result to the number 3:
(define g ((lambda (x) (average-damping (average-damping x))) succ))
(g 3) ; => 3.25
The difference in the result is not due to numeric approximation, but to a different computation: first (average-damping succ) returns the function h, which computes the average between the parameter and its successor; then (average-damping h) returns a new function that computes the average between the parameter and the result of the function h. Such a function, if passed a number like 3, first calculates the average between 3 and 4, which is 3.5, then calculates the average between 3 (again the parameter), and 3.5 (the previous result), producing 3.25.
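The same contrast can be sketched in Clojure for readers following the rest of this page (average, average-damp, and repeat-fn are ad-hoc helpers mirroring the Scheme definitions above):

(defn average [a b] (/ (+ a b) 2.0))
(defn average-damp [f] (fn [x] (average x (f x))))
(defn repeat-fn [f k] (apply comp (repeat k f)))

((repeat-fn (average-damp inc) 2) 3)  ;=> 4.0   the numeric (h (h 3))
(((repeat-fn average-damp 2) inc) 3)  ;=> 3.25  the twice-damped succ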
The definition of repeat entails
((repeat f k) x) = (f (f (f (... (f x) ...))))
;                   1  2  3       k
with k nested calls to f in total. Let's write this as
= ((f^k) x)
and also define
(define (foo n) (lambda (y) (/ x (expt y (- n 1)))))
; ((foo n) y) = (/ x (expt y (- n 1)))
Then we have
(nth-root-you x n k) = (fixed-point ((average-damping (foo n))^k) 1.0)
(nth-root-web x n k) = (fixed-point ((average-damping^k) (foo n)) 1.0)
So your version makes k steps with the once-average-damped (foo n) function on each iteration step performed by fixed-point; the web solution uses the k-times-average-damped (foo n) as its iteration step. Notice that no matter how many times it is used, a once-average-damped function is still average-damped only once, and using it several times is probably only going to exacerbate the problem, not solve it.
For k == 1 the two resulting iteration step functions are of course equivalent.
In your case k == 2, and so
(your-step y) = ((average-damping (foo n))
                 ((average-damping (foo n)) y))   ; and,
(web-step y)  = ((average-damping (average-damping (foo n))) y)
Since
((average-damping f) y) = (average y (f y))
we have
(your-step y) = ((average-damping (foo n))
                 (average y ((foo n) y)))
              = (let ((z (average y ((foo n) y))))
                  (average z ((foo n) z)))

(web-step y)  = (average y ((average-damping (foo n)) y))
              = (average y (average y ((foo n) y)))
              = (+ (* 0.5 y) (* 0.5 (average y ((foo n) y))))
              = (+ (* 0.75 y) (* 0.25 ((foo n) y)))
;; and in general, for k-fold damping:
;; = (2^k - 1)/2^k * y + 1/2^k * ((foo n) y)
The difference is clear. Average damping is used to dampen the possibly erratic jumps of (foo n) at certain ys, and the higher the k the stronger the damping effect, as is clearly seen from the last formula.

Returning the sum of positive squares

I'm trying to edit the current program I have
(define (sumofnumber n)
  (if (= n 0)
      1
      (+ n (sumofnumber (modulo n 2)))))
so that it returns the sum of the first n positive squares. For example, if you input 3, the program would compute 1 + 4 + 9 to get 14. I have tried using modulo and other methods, but it always goes into an infinite loop.
The base case is incorrect (the square of zero is zero), and so is the recursive step (why are you taking the modulo?) and the actual operation (where are you squaring the value?). This is how the procedure should look:
(define (sum-of-squares n)
  (if (= n 0)
      0
      (+ (* n n)
         (sum-of-squares (- n 1)))))
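For comparison with the Clojure questions elsewhere on this page, the same recursion reads almost identically there; a minimal sketch:

(defn sum-of-squares [n]
  (if (zero? n)
    0
    (+ (* n n) (sum-of-squares (dec n)))))

(sum-of-squares 3) ;=> 14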
A definition using composition rather than recursion. Read the comments from bottom to top for the procedural logic:
(define (sum-of-squares n)
  (foldl +                            ; sum the list
         0
         (map (lambda (x) (* x x))    ; square each number in list
              (map (lambda (x) (+ x 1)) ; correct for range yielding 0...(n - 1)
                   (range n)))))      ; get a list of numbers bounded by n
I provide this because you are well on your way to understanding the idiom of recursion. Composition is another of Racket's idioms worth exploring and often covered after recursion in educational contexts.
Sometimes I find composition easier to apply to a problem than recursion. Other times, I don't.
You're not squaring anything, so there's no reason to expect that to be a sum of squares.
Write down how you got 1 + 4 + 9 with n = 3 (^ is exponentiation):
1^2 + 2^2 + 3^2
This is
(sum-of-squares 2) + 3^2
or
(sum-of-squares (- 3 1)) + 3^2
that is,
(sum-of-squares (- n 1)) + n^2
Notice that modulo does not occur anywhere, nor do you add n to anything.
(And the square of 0 is 0, not 1.)
You can break the problem into small chunks.
1. Create a list of numbers from 1 to n
2. Map a square function over list to square each number
3. Apply + to add all the numbers in squared list
(define (sum-of-number n)
  (apply + (map (lambda (x) (* x x)) (sequence->list (in-range 1 (+ n 1))))))
> (sum-of-number 3)
14
This is the perfect opportunity for using the transducers technique.
Calculating the sum of a list is a fold. Map and filter are folds, too. Composing several folds together in a nested fashion, as in (sum...(filter...(map...sqr...))), leads to multiple (here, three) list traversals.
But when the nested folds are fused, their reducing functions combine in a nested fashion, giving us a one-traversal fold instead, with the one combined reducer function:
(define (((mapping f) kons) x acc) (kons (f x) acc))              ; the "mapping" transducer
(define (((filtering p) kons) x acc) (if (p x) (kons x acc) acc)) ; the "filtering" one

(define (sum-of-positive-squares n)
  (foldl ((compose (mapping sqr)                      ; ((mapping sqr)
                   (filtering (lambda (x) (> x 0))))  ;  ((filtering {> _ 0})
          +) 0 (range (+ 1 n))))                      ;   +))

; > (sum-of-positive-squares 3)
; 14
Of course ((compose f g) x) is the same as (f (g x)). The combined / "composed" (pun intended) reducer function is created just by substituting the arguments into the definitions, as
((mapping sqr) ((filtering {> _ 0}) +))
=
( (lambda (kons)
    (lambda (x acc) (kons (sqr x) acc)))
  ((filtering {> _ 0}) +))
=
(lambda (x acc)
  ( ((filtering {> _ 0}) +)
    (sqr x) acc))
=
(lambda (x acc)
  ( ( (lambda (kons)
        (lambda (x acc) (if ({> _ 0} x) (kons x acc) acc)))
      +)
    (sqr x) acc))
=
(lambda (x acc)
  ( (lambda (x acc) (if (> x 0) (+ x acc) acc))
    (sqr x) acc))
=
(lambda (x acc)
  (let ([x (sqr x)] [acc acc])
    (if (> x 0) (+ x acc) acc)))
which looks almost as something a programmer would write. As an exercise,
((filtering {> _ 0}) ((mapping sqr) +))
=
( (lambda (kons)
    (lambda (x acc) (if ({> _ 0} x) (kons x acc) acc)))
  ((mapping sqr) +))
=
(lambda (x acc)
  (if (> x 0) (((mapping sqr) +) x acc) acc))
=
(lambda (x acc)
  (if (> x 0) (+ (sqr x) acc) acc))
So instead of writing the fused reducer function definitions ourselves, which, like every human activity, is error-prone, we can compose these reducer functions from more atomic "transformations", nay, transducers.
Works in DrRacket.
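Incidentally, Clojure ships this fused-fold idea in its core library under the same name, transducers; the whole computation above becomes one pass:

(transduce (comp (filter pos?) (map #(* % %))) + 0 (range (+ 1 3)))
;=> 14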

Mandelbrot set function does not perform as expected

Here is an example from Clojure Programming Paperback by Chas Emerick:
(import 'java.awt.image.BufferedImage
        '(java.awt Color RenderingHints))

(defn- escape
  [^double a0 ^double b0 ^long depth]
  (loop [a a0, b b0, iteration 0]
    (cond
      (< 4 (+ (* a a) (* b b))) iteration
      (>= iteration depth) -1
      :else (recur (+ a0 (- (* a a) (* b b)))
                   (+ b0 (apply * [2 a b]))
                   (inc iteration)))))
(defn mandelbrot [rmin rmax imin imax
                  & {:keys [width height depth]
                     :or {width 80 height 40 depth 1000}}]
  (let [mandelbrot-help
        (fn [^double rmin ^double rmax
             ^double imin ^double imax]
          (let [stride-w (/ (- rmax rmin) width)
                stride-h (/ (- imax imin) height)]
            (loop [x 0
                   y (dec height)
                   escapes []]
              (if (== x width)
                (if (zero? y)
                  (partition width escapes)
                  (recur 0 (dec y) escapes))
                (recur (inc x) y (conj escapes
                                       (escape (+ rmin (* x stride-w))
                                               (+ imin (* y stride-h))
                                               depth)))))))]
    (mandelbrot-help rmin rmax imin imax)))
(defn render-text
  [mandelbrot-grid]
  (doseq [row mandelbrot-grid]
    (doseq [escape-iter row]
      (print (if (neg? escape-iter)
               \*
               \space)))
    (println)))
(defn render-image
  [mandelbrot-grid]
  (let [palette (vec (for [c (range 500)]
                       (Color/getHSBColor 0.0 0.0 (/ (Math/log c) (Math/log 500)))))
        height (count mandelbrot-grid)
        width (count (first mandelbrot-grid))
        img (BufferedImage. width height BufferedImage/TYPE_INT_RGB)
        ^java.awt.Graphics2D g (.getGraphics img)]
    (doseq [[y row] (map-indexed vector mandelbrot-grid)
            [x escape-iter] (map-indexed vector row)]
      (.setColor g (if (neg? escape-iter)
                     (palette 0)
                     (palette (mod (dec (count palette)) (inc escape-iter)))))
      (.drawRect g x y 1 1))
    (.dispose g)
    img))

(do (time (mandelbrot -2.25 0.75 -1.5 1.5
                      :width 1600 :height 1200 :depth 1000))
    nil)
Everything works, except that it takes 60s on my machine versus only 8s according to the book (my laptop's results are consistently better in the book's other examples).
Is there something that I did wrong?
Where did you get that code from? It is definitely not what appears either in the book (based on my PDF copy, pages 449-452) or in the code sample on GitHub. In particular, the (apply * [2 a b]) in escape is crazy; that will never be fast (at least not without some degree of [trivial] source-level optimization, which Clojure unfortunately doesn't apply). Even weirder, that particular snippet doesn't appear anywhere in the book, nor can I find it in the history of the git repo we used to collaborate on the writing of the book.
Maybe you were just tinkering with the examples? If not, I really would like to know where the sample you have came from, as it's absolutely not representative of our intent or best practice (obviously, given the timings you're seeing).
Anyway, here's the "fast escape" function from the book / github samples repo:
(defn- escape
  [^double a0 ^double b0 depth]
  (loop [a a0
         b b0
         iteration 0]
    (cond
      (< 4 (+ (* a a) (* b b))) iteration
      (>= iteration depth) -1
      :else (recur (+ a0 (- (* a a) (* b b)))
                   (+ b0 (* 2 (* a b)))
                   (inc iteration)))))
user> (do (time (mandelbrot -2.25 0.75 -1.5 1.5
                            :width 1600 :height 1200 :depth 1000))
          nil)
"Elapsed time: 1987.460104 msecs"
Hinting the depth arg as ^long (something that I should have included in the book example) drops that to 1450ms on my laptop.
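For reference, the hinted version described above differs from the listing only in the parameter vector:

(defn- escape
  [^double a0 ^double b0 ^long depth]   ; ^long keeps the iteration/depth comparison primitive
  (loop [a a0
         b b0
         iteration 0]
    (cond
      (< 4 (+ (* a a) (* b b))) iteration
      (>= iteration depth) -1
      :else (recur (+ a0 (- (* a a) (* b b)))
                   (+ b0 (* 2 (* a b)))
                   (inc iteration)))))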

How to go about composing core functions, rather than using imperative style?

I have translated this code, the snippet below, from Python to Clojure. I replaced Python's while construct with Clojure's loop-recur here. But this doesn't look idiomatic.
(loop [d 2, [n & more] (list 256)]
  (if (> n 1)
    (recur (inc d)
           (loop [x n, sublist more]
             (if (= (rem x d) 0)
               (recur (/ x d) (conj sublist d))
               (conj sublist x))))
    (sort more)))
This routine gives me (3 3 31), that is, the prime factors of 279. For 256 it gives (2 2 2 2 2 2 2 2), that is, 2^8.
Moreover, it performs much worse for large values, say 987654123987546 instead of 279, whereas Python's counterpart works like a charm.
How do I start composing core functions rather than translating imperative code as is? And specifically, how can I improve this bit?
Thanks.
[Edited]
Here is the Python code I referred to above:
def prime_factors(n):
    factors = []
    d = 2
    while n > 1:
        while n % d == 0:
            factors.append(d)
            n /= d
        d = d + 1
    return factors
A straight translation of the Python code in Clojure would be:
(defn prime-factors [n]
  (let [n (atom n)          ;; The Python code makes use of mutability, which
        factors (atom [])   ;; isn't idiomatic in Clojure but can be emulated
        d (atom 2)]         ;; using atoms
    (loop []
      (when (< 1 @n)
        (loop []
          (when (== (rem @n @d) 0)
            (swap! factors conj @d)
            (swap! n quot @d)
            (recur)))
        (swap! d inc)
        (recur)))
    @factors))
(prime-factors 279) ;; => [3 3 31]
(prime-factors 987654123987546) ;; => [2 3 41 14389 279022459]
(time (prime-factors 987654123987546)) ;; "Elapsed time: 13993.984 msecs"
                                       ;; same performance on my machine
                                       ;; as the Rosetta Code solution
You can improve this code to make it more idiomatic:
from nested loops to a single loop (this loop, which yields @factors itself, replaces the whole body of the let above):
(loop []
  (cond
    (<= @n 1) @factors
    (not= (rem @n @d) 0) (do (swap! d inc)
                             (recur))
    :else (do (swap! factors conj @d)
              (swap! n quot @d)
              (recur))))
get rid of the atoms:
(defn prime-factors [n]
  (loop [n n
         factors []
         d 2]
    (cond
      (<= n 1) factors
      (not= (rem n d) 0) (recur n factors (inc d))
      :else (recur (quot n d) (conj factors d) d))))
replace == 0 by zero?:
(not (zero? (rem n d))) (recur n factors (inc d))
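After these refinements the behavior is unchanged; for example:

(prime-factors 279) ;=> [3 3 31]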
You can also overhaul it completely to make a lazy version of it:
(defn prime-factors [n]
  ((fn step [n d]
     (lazy-seq
      (when (< 1 n)
        (cond
          (zero? (rem n d)) (cons d (step (quot n d) d))
          ;; note: a plain recur here would target lazy-seq's internal
          ;; thunk, not step, so we call step by name
          :else (step n (inc d))))))
   n 2))
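Being lazy, this version lets you consume the first factors without computing the rest:

(take 2 (prime-factors 987654123987546)) ;=> (2 3)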
I planned to have a section on optimization here, but I'm no specialist. The only thing I can say is that you can trivially make this code faster by stopping the loop when d exceeds the square root of n:
(defn prime-factors [n]
  (if (< 1 n)
    (loop [n n
           factors []
           d 2]
      (let [q (quot n d)]
        (cond
          (< q d) (conj factors n)  ; q < d means d > sqrt(n), so the remaining n is prime
          (zero? (rem n d)) (recur q (conj factors d) d)
          :else (recur n factors (inc d)))))
    []))

(time (prime-factors 987654123987546)) ;; "Elapsed time: 7.124 msecs"
Not every loop unrolls cleanly into an elegant "functional" decomposition.
The Rosetta Code solution suggested by @edbond is pretty simple and concise; I would say it's idiomatic since no obvious "functional" solution is apparent. That solution runs noticeably faster on my machine than your Python version for 987654123987546.
More generally, if you're looking to expand your understanding of functional idioms, Bedra and Halloway's "Programming Clojure" (pp.90-95) presents an excellent comparison of different versions of the Fibonacci sequence, using loop, lazy seqs, and an elegant "functional" version. Chouser and Fogus's "Joy of Clojure" (MEAP version) also has a nice section on function composition.

Map part of the vector efficiently in Clojure

I wonder how this can be done in Clojure idiomatically and efficiently:
1) Given a vector containing n integers in it: [A0 A1 A2 A3 ... An]
2) Increase the last x items by 1 (let's say x is 100) so the vector will become: [A0 A1 A2 A3 ... (An-99 + 1) (An-98 + 1)... (An-1 + 1) (An + 1)]
One naive implementation looks like:
(defn inc-last [x nums]
  (let [n (count nums)]
    (map #(if (>= % (- n x)) (inc %2) %2)
         (range n)
         nums)))

(inc-last 2 [1 2 3 4])
;=> (1 2 4 5)
In this implementation you basically map the entire vector to a new sequence, examining each item to see whether it needs to be increased.
However, this is an O(n) operation while I only want to change the last x items in the vector. Ideally, this should be done in O(x) instead of O(n).
I am considering using some functions like split-at/concat to implement it like below:
(defn inc-last [x nums]
  (let [[nums1 nums2] (split-at x nums)]
    (concat nums1 (map inc nums2))))
However, I am not sure whether this implementation is O(n) or O(x). I am new to Clojure and not really sure what the time complexity of operations like concat/split-at is on Clojure's persistent data structures.
So my questions are:
1) What the time complexity here in second implementation?
2) If it is still O(n), is there any idiomatic and efficient implementation that takes only O(x) in Clojure for solving this problem?
Any comment is appreciated. Thanks.
Update:
noisesmith's answer told me that split-at will convert the vector into a list, a fact I hadn't realised previously. Since I will do random access on the result (calling nth after processing the vector), I would like an efficient solution (O(x) time) that keeps the result a vector rather than a list; otherwise nth will slow my program down as well.
Concat and split-at both turn the input into a seq, effectively a linked-list representation, which loses the vector's O(1) indexed access and keeps you at O(n). Here is how to do it with a vector for O(x) performance.
user> (defn inc-last-n
        [n x]
        (let [count (count x)
              update (fn [x i] (update-in x [i] inc))]
          (reduce update x (range (- count n) count))))
#'user/inc-last-n
user> (inc-last-n 3 [0 1 2 3 4 5 6])
[0 1 2 3 5 6 7]
This will fail on input that is not associative (like seq / lazy-seq) because there is no O(1) access time in non-associative types.
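Since Clojure 1.7, update-in with a single-element path can be written more directly with clojure.core/update; a minimal equivalent sketch:

(defn inc-last-n [n v]
  (let [c (count v)]
    (reduce (fn [acc i] (update acc i inc)) v (range (- c n) c))))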
inc-last below is an implementation using a transient, which gets you a modifiable "in place" vector in constant time and returns a persistent! vector, also in constant time, so the updates can be made in O(x). The original implementation used an imperative doseq loop but, as mentioned in the comments, transient operations can return a new object, so it's better to keep doing things in a functional way.
I added a doall to the call to inc-last-2 since it returns a lazy seq, whereas inc-last and inc-last-3 return vectors; the doall is needed to compare all three fairly.
According to some quick tests I made, inc-last and inc-last-3 don't actually differ much in performance, not even for huge vectors (10,000,000 elements). The inc-last-2 implementation, though, is ~100x slower even for a vector of 1000 elements when modifying only the last 10. For smaller vectors, or when the number of modified elements is close to (count nums), the difference is not that large.
(Thanks to Michał Marczyk for his useful comments)
(def x (vec (range 1000)))

(defn inc-last [n x]
  (let [x (transient x)
        l (count x)]
    (->> (range (- l n) l)
         (reduce #(assoc! %1 %2 (inc (%1 %2))) x)
         persistent!)))

(defn inc-last-2 [x nums]
  (let [n (count nums)]
    (map #(if (>= % (- n x)) (inc %2) %2)
         (range n)
         nums)))

(defn inc-last-3 [n x]
  (let [l (count x)]
    (reduce #(assoc %1 %2 (inc (%1 %2))) x (range (- l n) l))))

(time
 (dotimes [i 100]
   (inc-last 50 x)))
(time
 (dotimes [i 100]
   (doall (inc-last-2 10 x))))
(time
 (dotimes [i 100]
   (inc-last-3 50 x)))
;=> "Elapsed time: 49.7965 msecs"
;=> "Elapsed time: 1751.964501 msecs"
;=> "Elapsed time: 67.651 msecs"
