Logarithmic runtime and tail recursion

In school, I have been learning about runtime and writing more efficient algorithms using tail recursion and the like, and a little while back an assignment asked us to consider this function for calculating powers:
(define (exp x n)
  (if (zero? n) 1 (* x (exp x (- n 1)))))
and we were tasked with writing an exp function with a runtime of O(log n). This was my answer:
(define (exp x n)
  (cond
    ((zero? n) 1)
    ((= 1 n) x)
    ((even? n) (exp (* x x) (/ n 2)))
    (else (* x (exp (* x x) (/ (- n 1) 2))))))
which simply comes from x^(2n) = (x^2)^n and x^(2n+1) = x*(x^2)^n.
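For instance, (exp 2 10) reduces as (exp 4 5) → (* 4 (exp 16 2)) → (* 4 (exp 256 1)) → (* 4 256) = 1024: four calls instead of ten, since the exponent shrinks by half rather than by one.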
So I have been trying to think of a way to implement tail recursion to optimize this function even further, but I can't really think of one. Back to my question: is there any sort of rule of thumb for knowing when you can rewrite a polynomial-runtime algorithm as a logarithmic-runtime one?
I ask this because, as easy as it was to write this in such a way that its runtime is logarithmic, I never would have thought to do it without having been specifically asked to.

Regarding the first part of your question: it's relatively simple to turn the procedure into tail-recursive form; we just have to pass an additional parameter to accumulate the answer. To avoid creating an additional procedure, I'll use a named let:
(define (exp x n)
  (let loop ([x x] [n n] [acc 1])
    (cond
      ((zero? n) acc)
      ((= n 1) (* x acc))
      ((even? n) (loop (* x x) (/ n 2) acc))
      (else (loop (* x x) (/ (- n 1) 2) (* x acc))))))
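For example, (exp 2 10) still returns 1024, but now every recursive call is in tail position, so the computation runs in constant stack space.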
And for the second question: the rule of thumb would be, if there's a way to halve the size of a problem at some point when making the recursive call (in such a way that the result can be computed accordingly), that's a good sign that a logarithmic solution might exist for it. Of course, that's not always so obvious.
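As an illustration of that rule of thumb (a sketch, not part of the original question), binary search over a sorted vector works the same way: each comparison discards half of the remaining range, so the search takes O(log n) steps.
(define (binary-search vec target)
  ;; Search a sorted vector, halving the candidate range each step.
  (let loop ([lo 0] [hi (- (vector-length vec) 1)])
    (if (> lo hi)
        #f ; not found
        (let ([mid (quotient (+ lo hi) 2)])
          (cond ((= (vector-ref vec mid) target) mid)
                ((< (vector-ref vec mid) target) (loop (+ mid 1) hi))
                (else (loop lo (- mid 1))))))))
For example, (binary-search (vector 1 3 5 7 9 11) 7) returns 3 after inspecting only three elements.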

Related

Leibniz Formula in Scheme

I've spent some time looking at the questions about this on here and throughout the internet, but I can't really find anything that makes sense to me.
Basically I need help writing a function in Scheme that evaluates Leibniz's formula when you give it a value k. The value you input tells the function how many terms of the series it should compute. This is what I have so far; I'm not sure how I need to change it to make it work. Thanks!
(define (fin-alt-series k)
  (cond ((= k 1) 4)
        ((> k 1) (+ (/ (expt -1 k) (+ (* 2.0 k) 1.0))
                    (fin-alt-series (- k 1.0))))))
The base case is incorrect: Leibniz's formula is π/4 = 1 - 1/3 + 1/5 - 1/7 + ..., i.e. the sum of (-1)^k/(2k+1) for k = 0, 1, 2, ..., so the k = 0 term should be 1. And we can clean up the code a bit:
(define (fin-alt-series k)
  (cond ((= k 0) 1)
        (else
         (+ (/ (expt -1.0 k)
               (+ (* 2 k) 1))
            (fin-alt-series (- k 1))))))
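For example, (fin-alt-series 3) should return 0.7238095238095238, the sum of the first four terms 1 - 1/3 + 1/5 - 1/7.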
Even better, we can rewrite the procedure to use tail recursion; it'll be faster this way:
(define (fin-alt-series k)
  (let loop ((k k) (sum 0))
    (if (< k 0)
        sum
        (loop (- k 1)
              (+ sum (/ (expt -1.0 k) (+ (* 2 k) 1)))))))
For example:
(fin-alt-series 1000000)
=> 0.7853984133971936
(/ pi 4)
=> 0.7853981633974483
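Notice how slowly the series converges: after a million terms the result agrees with π/4 only to about the sixth decimal place.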

Not returning the answer I need

(define (checksum-2 ls)
(if (null? ls)
0
(let ([n 0])
(+ (+ n 1))(* n (car ls))(checksum-2 (cdr ls)))))
OK, I have this code. If I wrote it right, the number n is supposed to increase by one every time it goes through the list, so n (in reality) should be like 1 2 3 4, but I want n to be multiplied by the car of the list.
Everything loads, but when the answer is returned I get 0.
Thanks!
If you format your code differently, you might have an easier time seeing what is going on:
(define (checksum-2 ls)
  (if (null? ls)
      0
      (let ([n 0])
        (+ (+ n 1))
        (* n (car ls))
        (checksum-2 (cdr ls)))))
Inside the let form, the expressions are evaluated in sequence, but you're not using the result of any of them (except the last one). The results of the addition and multiplication are simply discarded.
What you need to do in this case is define a new helper function that uses an accumulator and performs the recursive call. I'm going to guess this is homework or a learning exercise, so I'm not going to give away the complete answer.
UPDATE: As a demonstration of the sort of thing you might need to do, here is a similar function in Scheme to sum the integers from 1 to n:
(define (sum n)
  (define (sum-helper n a)
    (if (<= n 0)
        a
        (sum-helper (- n 1) (+ a n))))
  (sum-helper n 0))
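For example, (sum 5) evaluates as (sum-helper 5 0) → (sum-helper 4 5) → (sum-helper 3 9) → (sum-helper 2 12) → (sum-helper 1 14) → (sum-helper 0 15) → 15, with the running total carried in the accumulator a.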
You should be able to use a similar framework to implement your checksum-2 function.

weirdness in scheme

I was trying to implement Fermat's primality test in Scheme.
I wrote a procedure fermat2 (initially called fermat1) which returns true when a^(p-1) is congruent to 1 (mod p). Every prime number p should satisfy the procedure for any a (this is Fermat's little theorem).
But when I tried to count the number of times this procedure yields true for a fixed number of trials (using the countt procedure, described in the code), I got shocking results. So I changed the procedure slightly (I don't see any logical change; maybe I'm blind) and named it fermat1 (replacing the older fermat1, which is now fermat2), and it worked: the prime numbers passed the test every time.
Why on earth is the procedure fermat2's inner test called fewer times? What is actually wrong? And if it is wrong, why don't I get an error? Instead that computation is simply skipped (I think so!).
All you have to do to understand what I'm trying to say is evaluate:
(countt fermat2 19 100)
(countt fermat1 19 100)
and see for yourself.
Code:
;;Guys this is really weird
;;I might not be able to explain this
;;just try out
;;(countt fermat2 19 100)
;;(countt fermat1 19 100)
;;compare both values ...
;;did you get any error using countt with fermat2,if yes please specify why u got error
;;if it was because of reminder procedure .. please tell your scheme version
;;created on 6 mar 2011 by fedvasu
;;using mit-scheme 9.0 (compiled from source/microcode)
;; i cant use a quote it mis idents (unfriendly stack overflow!)
;;fermat-test based on fermat(s) little theorem a^p-1 congruent to 1 (mod p) p is prime
;;see MIT-SICP,or Algorithms by Vazirani or anyother number theory book
;;this is the correct logic of fermat-test (the way it handles 0)
(define (fermat1 n)
  (define (tryout a x)
    ;; (display "I've been called\n")
    (= (remainder (fast-exp a (- x 1)) x) 1))
  ;;this exercises the algorithm
  ;;1+ to avoid 0
  (define temp (random n))
  (if (= temp 0)
      (tryout (1+ temp) n)
      (tryout temp n)))
;;old fermat-test
;;which is wrong
;;it doesnt produce any error!!
;;the inner procedure is called only selective times.. i dont know when exactly
;;uncomment the display line to see how many times tryout is called (using countt)
;;i didnt put any condition when it should be called
;;rather it should be every time fermat2 is called
;;how is it so??(is it to avoid error?)
(define (fermat2 n)
  (define (tryout a x)
    ;; (display "I've been called\n")
    (= (remainder (fast-exp a (- x 1)) x) 1))
  ;;this exercises the algorithm
  ;;1+ to avoid 0
  (tryout (1+ (random n)) n))
;;this is the dependency procedure for fermat1 and fermat2
;;this procedure calculates base^exp (exp=nexp bcoz exp is a keyword,a primitive)
;;And it is correct :)
(define (fast-exp base nexp)
  ;;this is iterative procedure where a*b^n = base^exp is constant always
  ;;A bit tricky though
  (define (logexp a b n)
    (cond ((= n 0) a) ;;only at the last stage a*b^n is not same as base^exp
          ((even? n) (logexp a (square b) (/ n 2)))
          (else (logexp (* a b) b (- n 1)))))
  (logexp 1 base nexp))
;;utility procedure which takes a procedure and its argument and an extra
;; argument times which tells number of times to call
;;returns the number of times result of applying proc on input num yielded true
;;counting the number times it yielded true
;;procedure yields true for fixed input,
;;by calling it fixed times)
;;uncommenting display line will help
(define (countt proc num times)
  (define (pcount p n t c)
    (cond ((= t 0) c)
          ((p n) ;; (display "I'm passed by fermat1\n")
           (pcount p n (- t 1) (+ c 1))) ;;increasing the count
          (else c)))
  (pcount proc num times 0))
I had real pain figuring out what it actually does. Please follow the code and tell me why these discrepancies occur.
Even (countt fermat2 19 100) called twice returns different results.
Let's fix your fermat2 since it's shorter. The definition is: "If n is a prime number and a is any positive integer less than n, then a raised to the nth power is congruent to a modulo n." That means f(a, n) = (a^n mod n == a mod n). Your code instead tests f(a, n) = (a^(n-1) mod n == 1), which is different. If we rewrite it according to the definition:
(define (fermat2 n)
  (define (tryout a x)
    (= (remainder (fast-exp a x) x)
       (remainder a x)))
  (tryout (1+ (random n)) n))
This is not correct yet: (1+ (random n)) returns numbers from 1 to n inclusive, while we need [1..n). (This off-by-one is also what made your original fermat2 misbehave: when a = n, a^(n-1) mod n is 0, the test fails, and countt stops counting.)
(define (fermat2 n)
  (define (tryout a x)
    (= (remainder (fast-exp a x) x)
       (remainder a x)))
  (tryout (+ 1 (random (- n 1))) n))
This is a correct version, but we can improve its readability. Since you're using tryout only in the scope of fermat2, there is no need for the parameter x to pass n; the latter is already visible in the scope of tryout. So the final version is:
(define (fermat n)
  (define (tryout a)
    (= (remainder (fast-exp a n) n)
       (remainder a n)))
  (tryout (+ 1 (random (- n 1)))))
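With this version, (countt fermat 19 100) should consistently return 100, since by Fermat's little theorem every a in [1..n) passes the test when n is prime.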
Update:
I said that the formula used in fermat2 was incorrect. That claim was wrong: since n is prime, a*k = b*k (mod n) implies a = b (mod n), so the two forms of the test are equivalent. The error, as Vasu pointed out, was in generating the random number for the test.

SICP exercise 1.16, where is my bug, because it looks right to me

I've just started working through this book for fun; I wish it were homework, but I could never afford to attend MIT, and there are tons of people smarter than me anyway. :p
fast-exp is supposed to find b^n, i.e. 4^2 = 16, 3^3 = 27
(define (fast-exp b n)
  (define (fast-exp-iter n-prime a)
    (cond ((= n-prime 1) a)
          ((= (remainder n-prime 2) 1) (fast-exp-iter (- n-prime 1) (* a b)))
          (else (fast-exp-iter (/ n-prime 2) (* a b b)))))
  (fast-exp-iter n 1))
fast-exp 4 2; Expected 16, Actual 2
You forgot to call fast-exp. Instead, you evaluated three separate atoms. To actually evaluate the fast-exp of 4 to the 2, you'd have to write
(fast-exp 4 2)
The solution you have written here is also incorrect, e.g. check out (fast-exp 2 6). Expected: 64, actual: 32.
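Tracing it shows the problem: (fast-exp-iter 6 1) → (fast-exp-iter 3 4) → (fast-exp-iter 2 8) → (fast-exp-iter 1 32) → 32. When the exponent is halved, the base needs to be squared for all the remaining steps, but this code keeps b fixed and only multiplies the accumulator by b*b once.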
Your solution is calculating wrong answers (see http://ideone.com/quT6A). In fact, how could you in principle write a tail-recursive fast exponentiation passing only two numbers as arguments? I don't think it's even possible, because in the middle of the computation you don't know which multiplier to use when you encounter an odd exponent.
But I can give an example of a working solution that is exactly what the SICP authors expect: an iterative process using the "invariant quantity" a * b^n, where a is initially 1.
(define (pow x y)
  (define (powi acc x y)
    (cond
      ((= y 0) acc)
      ((odd? y) (powi (* acc x) x (- y 1)))
      (else (powi acc (* x x) (/ y 2)))))
  (powi 1 x y))
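Tracing (pow 2 6): (powi 1 2 6) → (powi 1 4 3) → (powi 4 4 2) → (powi 4 16 1) → (powi 64 16 0) → 64. At every step the invariant acc * x^y = 2^6 holds, which is what makes the tail recursion work.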

two methods of composing functions, how different in efficiency?

Let f be a function that transforms one value to another; I'm writing a function that repeats the transformation n times.
I have come up with two different ways:
One is the obvious way that literally applies the function n times, so repeat(f, 4) means x → f(f(f(f(x)))).
The other way is inspired by the fast method for powering, which means dividing the problem into two problems that are half as large whenever n is even. So repeat(f, 4) means x → g(g(x)), where g(x) = f(f(x)).
At first I thought the second method wouldn't improve efficiency that much. At the end of the day, we would still need to apply f n times, wouldn't we? In the above example, g would still be translated into f ∘ f without any further simplification, right?
However, when I tried out the methods, the latter method was noticeably faster.
;; computes the composite of two functions
(define (compose f g)
  (lambda (x) (f (g x))))

;; identity function
(define (id x) x)

;; repeats the application of a function, naive way
(define (repeat1 f n)
  (define (iter k acc)
    (if (= k 0)
        acc
        (iter (- k 1) (compose f acc))))
  (iter n id))

;; repeats the application of a function, divide-and-conquer way
(define (repeat2 f n)
  (define (iter f k acc)
    (cond ((= k 0) acc)
          ((even? k) (iter (compose f f) (/ k 2) acc))
          (else (iter f (- k 1) (compose f acc)))))
  (iter f n id))

;; increment function used for testing
(define (inc x) (+ x 1))
In fact, ((repeat2 inc 1000000) 0) was much faster than ((repeat1 inc 1000000) 0). My question is: in what respect is the second method more efficient than the first? Does reusing the same function object save storage and reduce the time spent creating new objects?
After all, the application has to be repeated n times; or, to put it another way, x → ((x+1)+1) cannot be automatically reduced to x → (x+2), right?
I'm running on DrScheme 4.2.1.
Thank you very much.
You're right that both versions do the same number of calls to inc -- but there's more overhead than that in your code. Specifically, the first version creates N closures, whereas the second one creates only log(N) closures -- and if the closure creation is most of the work, then you'll see a big difference in performance.
There are three things that you can use to see this in more detail:
Use DrScheme's time special form to measure the speed. In addition to the time that it took to perform some computation, it will also tell you how much time was spent in GC. You will see that the first version is doing some GC work, while the second doesn't. (Well, it does, but it's so little that it will probably not show.)
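For example (a sketch; the exact figures depend on your machine and DrScheme version):
(time ((repeat1 inc 1000000) 0)) ; prints cpu/real/gc times, then returns 1000000
(time ((repeat2 inc 1000000) 0)) ; same result, but with far less time spent in GC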
Your inc function is doing so little that you're measuring only the looping overhead. For example, when I use this bad version:
(define (slow-inc x)
  (define (plus1 x)
    (/ (if (< (random 10) 5)
           (* (+ x 1) 2)
           (+ (* x 2) 2))
       2))
  (- (plus1 (plus1 (plus1 x))) 2))
the difference between the two uses drops from a factor of ~11 to 1.6.
Finally, try this version out:
(define (repeat3 f n)
  (lambda (x)
    (define (iter n x)
      (if (zero? n) x (iter (sub1 n) (f x))))
    (iter n x)))
It doesn't do any compositions, and it runs at roughly the same speed as your second version.
The first method essentially composes the function n times, so building the result is O(n). But the second method does not compose n times: every time repeat2's loop sees an even k, it splits k in half, so much of the time the size of the problem is halved rather than merely decreasing by 1. This gives an overall construction cost of O(log(n)) compositions.
As Martinho Fernandez suggested, the Wikipedia article on exponentiation by squaring explains it very clearly.
