Time complexity of the acc function in Scheme?

I have been trying to find a tight time-complexity bound for this function with respect to just one of its arguments. I thought it was O(p^2) (or rather Θ(p^2)), but I am not sure anymore.
(define (acc p n)
  (define (iter p n result)
    (if (< p 1)
        result
        (iter (/ p 2) (- n 1) (+ result n))))
  (iter p n 1))

#sarahamedani, why would this be O(p^2)? It looks like O(log p) to me. The runtime should be insensitive to the value of n.
You are summing a series of numbers, counting down from n. The number of times iter will iterate depends on how many times p can be halved before it becomes less than 1; for an integer p >= 1 that is floor(log2 p) + 1, i.e. the (1-based) position of the leftmost '1' bit in p. That means the number of times iter runs is proportional to log p.
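If you want to see that count directly, here is a minimal sketch (my own, not from the question; iter-count is a hypothetical helper) that strips iter down to just its termination behaviour:
(define (iter-count p)
  ; how many times iter would run before p drops below 1
  (if (< p 1)
      0
      (+ 1 (iter-count (/ p 2)))))
; (iter-count 1)   => 1
; (iter-count 8)   => 4   ; floor(log2 8) + 1
; (iter-count 100) => 7   ; floor(log2 100) + 1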

You might try to eyeball it, or approach it more systematically. Assuming we're doing this from scratch, we should try to build a recurrence relation from the function definition.
We can assume, for the moment, a very simple machine model where arithmetic operations and variable lookups are constant time.
Let iter-cost be the name of the function that counts how many steps it takes to compute iter, and let it be a function of p, since iter's termination depends only on p. Then you should be able to write an expression for iter-cost(0). Can you do that for iter-cost(1), iter-cost(2), iter-cost(3), and iter-cost(4)?
More generally, given a p greater than zero, can you express iter-cost(p)? It will be in terms of constants and a recursive call to iter-cost. If you can express it as a recurrence, then you're in a better position to express it in a closed form.
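For example (my sketch, not the answerer's), under that machine model one reasonable recurrence is iter-cost(p) = k1 for p < 1 and iter-cost(p) = k2 + iter-cost(p/2) for p >= 1, with k1 and k2 constants. Unfolding it gives iter-cost(p) = k2 * (floor(log2 p) + 1) + k1, which is Θ(log p), matching the estimate above.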

Related

How do I find Fibonacci numbers in Scheme?

(define (fib n)
  (fib-iter 1 0 n))

(define (fib-iter a b count)
  (if (= count 0)
      b
      (fib-iter (+ a b) a (- count 1))))
Just having some fun with SICP.
I completely understand the idea of the Fibonacci algorithm, but this code has me stuck.
What exactly does the last line do compared to an imperative-style loop (is this just basic recursion, or something else)?
The procedure is implementing the Fibonacci series as an iterative process. In this case, fib is the main procedure that calls fib-iter, which does the actual work by means of an iteration. Notice that count is used to control the number of iterations we want, whereas a and b are used to store the results of the Fibonacci series for n-1 and n-2 respectively. The line (fib-iter (+ a b) a (- count 1)) is advancing the iteration to the next values.
Please take the time to read about iterative vs. recursive processes in the book, also read about tail recursion - these are the concepts you need to grasp for really understanding what's happening in the example.
For comparison, let's see how the same procedures would look using a more conventional syntax (Python's):
def fib(n):
    return fib_iter(1, 0, n)

def fib_iter(a, b, count):
    while count != 0:    # same as asking `(if (= count 0) ...)`
        a, b = a + b, a  # same as passing `(+ a b) a` to recursive call
        count -= 1       # same as `(- count 1)`
    return b             # same as returning `b` at end of recursion
As you can see, the fib_iter procedure simply iterates over a range of values controlled by the count variable, assigning a and b to the next values in the series, until the required number of iterations has been completed; at that point the result is in b and is returned.
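To make the state evolution concrete, here is the sequence of calls produced by (fib 5) with the Scheme definitions above:
(fib-iter 1 0 5)
(fib-iter 1 1 4)
(fib-iter 2 1 3)
(fib-iter 3 2 2)
(fib-iter 5 3 1)
(fib-iter 8 5 0) ; count is 0, so b = 5 is returned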

Creating and solving a recurrence relation for sine approximation

In SICP, there is a problem (exercise 1.15) that says
Exercise 1.15. The sine of an angle (specified in radians) can be
computed by making use of the approximation sin x ≈ x if x is
sufficiently small, and the trigonometric identity
sin(r) = 3sin(r/3) - 4sin^3(r/3)
to reduce the size of the argument of sin. (For purposes of this
exercise an angle is considered "sufficiently small" if its
magnitude is not greater than 0.1 radians.) These ideas are incorporated
in the following procedures:
(define (cube x) (* x x x))

(define (p x) (- (* 3 x) (* 4 (cube x))))

(define (sine angle)
  (if (not (> (abs angle) 0.1))
      angle
      (p (sine (/ angle 3.0)))))
a. How many times is the procedure p applied when (sine 12.15) is evaluated?
b. What is the order of growth in space and number of steps
(as a function of a) used by the process generated by the
sine procedure when (sine a) is evaluated?
You can analyze it by running it and see that it is O(log a), where a is the input angle in radians.
However, this isn't sufficient. This should be provable via recurrence relation. I can set up a recurrence relation as such:
T(a) = 3T(a/3) - 4T(a/3)^3
Which is homogeneous:
3T(a/3) - 4T(a/3)^3 - T(a) = 0
However, it is non-linear. I am unsure how to get the characteristic equation for this so that I can solve it and prove to myself that O(log a) is more than just "intuitively true". No tutorial on the internet seems to cover this analysis, and the only thing I saw conclusively said that non-linear recurrences are pretty much impossible to solve.
Can anyone help?
Thank you!
You're confusing the computation with the amount of time the computation takes.
If θ ≤ 0.1, then T(θ) is 1. Otherwise, it is T(θ/3) + k, where k is the time it takes to do four multiplications, a subtraction, and some miscellaneous bookkeeping.
It is evident that the argument for the i-th recursion will be θ/3^i, and therefore that the recursion will continue until θ/3^i ≤ 0.1. Since the smallest value of i for which that inequality is true is ⌈log_3(θ/0.1)⌉, we can see that T(θ) = k*⌈log_3(θ/0.1)⌉, which is O(log θ). (I left out the small constant factor which differentiates the last recursion from the other ones, since it makes no difference.)
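If you want to check part (a) mechanically, here is a small sketch (my own, not part of the exercise) that counts applications of p instead of computing the sine:
(define (sine-steps angle)
  ; how many times p would be applied by (sine angle)
  (if (not (> (abs angle) 0.1))
      0
      (+ 1 (sine-steps (/ angle 3.0)))))
; (sine-steps 12.15) => 5, matching ⌈log_3(12.15/0.1)⌉ = 5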

Calculating the running time of a function

I have trouble coming up with the running time of a function that calls other functions. For example, here is a function that converts a binary tree to a list:
(define (tree->list-1 tree)
  (if (null? tree)
      '()
      (append (tree->list-1 (left-branch tree))
              (cons (entry tree)
                    (tree->list-1 (right-branch tree))))))
The explanation is T(n) = 2*T(n/2) + O(n/2), because the procedure append takes linear time.
Solving the above equation, we get T(n) = O(n * log n).
However, cons is also a procedure that combines two elements, and in this case it goes through all the entry nodes, so why don't we add another O(n) to the solution?
Thank you for any help.
Consider O(n^2), which is clearly quadratic.
Now consider O(n^2 + n). This is still quadratic, so we can reduce it to O(n^2): the + n term is not significant, since it does not change the order of growth.
The same applies here, so we can reduce O(n*log(n) + n) to O(n*log(n)). However, we may not reduce this to O(log(n)), as that would be logarithmic, which it is not.
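To put rough numbers on it (my own example): for n = 1024, n*log2(n) is 10240 while the extra n term is 1024, about a 10% addition, and that fraction only shrinks as n grows.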
If I understand correctly, you are asking about the difference between append and cons.
The time used by (cons a b) does not depend on the values of a and b. The call allocates some memory, tags it with a type tag ("pair") and stores pointers to the values a and b in the pair.
Compare this to (append xs ys). Here append needs to make a new list consisting of the elements in both xs and ys. This means that if xs is a list of n elements, then append needs to allocate n new pairs to hold the elements of xs.
In short: append needs to copy the elements in xs, and thus the time is proportional to the length of xs. The function cons takes the same constant time no matter what arguments it is called with.
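To see where that linear cost comes from, here is a textbook-style definition of append (sketched as my-append so it doesn't shadow the builtin): it allocates one new pair per element of xs, whereas cons allocates exactly one pair.
(define (my-append xs ys)
  (if (null? xs)
      ys
      (cons (car xs) (my-append (cdr xs) ys))))
; (my-append '(1 2 3) '(4 5)) => (1 2 3 4 5), allocating three new pairs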

DrRacket procedure body help (boolean-odd? x)

An iterative version of odd? for non-negative integer arguments can be written using and, or, and not. To do so, you have to take advantage of the fact that and and or are special forms that evaluate their arguments in order from left to right, exiting as soon as the value is determined. Write (boolean-odd? x) without using if or cond, but using and, or, not (boolean) instead. You may use + and -, but do not use quotient, remainder, /, etc.
A number is even if two divides it evenly, and odd if there is a remainder of one. In general, when you divide a number k by a number n, the remainder is one element of the set {0, 1, …, n-1}. You can generalize your question by asking whether, when k is divided by n, the remainder is in some privileged set of remainder values. Since this is almost certainly homework, I do not want to provide a direct answer to your question, but I'll answer this more general version, without sticking to the constraint of using only and and or.
(define (special-remainder? k n special-remainders)
  (if (< k n)
      (member k special-remainders)
      (special-remainder? (- k n) n special-remainders)))
This special-remainder? recursively divides k by n (by repeated subtraction) until a remainder less than n is found. Then that remainder is tested for its specialness. In the case that you're considering, you'll be able to eliminate special-remainders, because you don't need (member k special-remainders). Since you only have one special remainder, you can just check whether k is that special remainder.
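For instance (my example, not the answerer's), oddness is the special case n = 2 with the single special remainder 1:
; (special-remainder? 7 2 '(1)) => (1)   ; a true value: 7 is odd
; (special-remainder? 8 2 '(1)) => #f    ; 8 is even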
A positive odd number can be defined as 1 + 2n. Thus x is odd if:
x is 1, or
x is greater than 1 and x - 2 is odd.
Thus one* solution that is tail recursive/iterative looks like this:
(define (odd? x)
  (or (= ...)           ; #t if 1
      (and ...          ; #f if less than 1
           (odd? ...)))) ; recurse with 2 less
*Having played around with it, there are many ways to do this and still have it iterative and without if/cond.

Analysing the runtime efficiency of a Haskell function

I have the following solution in Haskell to Project Euler Problem 3:
isPrime :: Integer -> Bool
isPrime p = (divisors p) == [1, p]

divisors :: Integer -> [Integer]
divisors n = [d | d <- [1..n], n `mod` d == 0]

main = print (head (filter isPrime (filter ((==0) . (n `mod`)) [n-1,n-2..])))
  where n = 600851475143
However, it takes more than the minute limit given by Project Euler. So how do I analyze the time complexity of my code to determine where I need to make changes?
Note: Please do not post alternative algorithms. I want to figure those out on my own. For now I just want to analyse the code I have and look for ways to improve it. Thanks!
Two things:
Any time you see a list comprehension (as you have in divisors), or equivalently, some series of map and/or filter functions over a list (as you have in main), treat its complexity as Θ(n) just the same as you would treat a for-loop in an imperative language.
This is probably not quite the sort of advice you were expecting, but I hope it will be more helpful: Part of the purpose of Project Euler is to encourage you to think about the definitions of various mathematical concepts, and about the many different algorithms that might correctly satisfy those definitions.
Okay, that second suggestion was a bit too nebulous... What I mean is, for example, the way you've implemented isPrime is really a textbook definition:
isPrime :: Integer -> Bool
isPrime p = (divisors p) == [1, p]
-- p is prime if its only divisors are 1 and p.
Likewise, your implementation of divisors is straightforward:
divisors :: Integer -> [Integer]
divisors n = [d | d <- [1..n], n `mod` d == 0]
-- the divisors of n are the numbers between 1 and n that divide evenly into n.
These definitions both read very nicely! Algorithmically, on the other hand, they are too naïve. Let's take a simple example: what are the divisors of the number 10? [1, 2, 5, 10]. On inspection, you probably notice a couple of things:
1 and 10 form a pair (1 * 10 = 10), and 2 and 5 form a pair (2 * 5 = 10).
Aside from 10 itself, there can't be any divisors of 10 that are greater than 5.
You can probably exploit properties like these to optimize your algorithm, right? So, without looking at your code -- just using pencil and paper -- try sketching out a faster algorithm for divisors. If you've understood my hint, divisors n should run in sqrt n time. You'll find more opportunities along these lines as you continue. You might decide to redefine everything differently, in a way that doesn't use your divisors function at all...
Hope this helps give you the right mindset for tackling these problems!
Let's start from the top.
divisors :: Integer -> [Integer]
divisors n = [d | d <- [1..n], n `mod` d == 0]
For now, let's assume that certain things are cheap: incrementing numbers is O(1), doing mod operations is O(1), and comparisons with 0 are O(1). (These are false assumptions, but what the heck.) The divisors function loops over all numbers from 1 to n, and does an O(1) operation on each number, so computing the complete output is O(n). Notice that here when we say O(n), n is the input number, not the size of the input! Since it takes m=log(n) bits to store n, this function takes O(2^m) time in the size of the input to produce a complete answer. I'll use n and m consistently to mean the input number and input size below.
isPrime :: Integer -> Bool
isPrime p = (divisors p) == [1, p]
In the worst case, p is prime, which forces divisors to produce its whole output. Comparison to a list of statically-known length is O(1), so this is dominated by the call to divisors. O(n), O(2^m)
Your main function does a bunch of things at once, so let's break down subexpressions a bit.
filter ((==0) . (n `mod`))
This loops over a list, and does an O(1) operation on each element. This is O(m), where here m is the length of the input list.
filter isPrime
Loops over a list, doing O(n) work on each element, where here n is the largest number in the list. If the list happens to be n elements long (as it is in your case), this means this is O(n*n) work, or O(2^m*2^m) = O(4^m) work (as above, this analysis is for the case where it produces its entire list).
print . head
Tiny bits of work. Let's call it O(m) for the printing part.
main = print (head (filter isPrime (filter ((==0) . (n `mod`)) [n-1,n-2..])))
Considering all the subexpressions above, the filter isPrime bit is clearly the dominating factor. O(4^m), O(n^2)
Now, there's one final subtlety to consider: throughout the analysis above, I've consistently made the assumption that each function/subexpression was forced to produce its entire output. As we can see in main, this probably isn't true: we call head, which only forces a little bit of the list. However, if the input number itself isn't prime, we know for sure that we must look through at least half the list: there will certainly be no divisors between n/2 and n. So, at best, we cut our work in half -- which has no effect on the asymptotic cost.
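To get a feel for that worst-case bound (my own back-of-the-envelope arithmetic, not the answerer's): the n in the question is about 6 * 10^11, roughly a 40-bit number, so O(n^2) = O(4^m) corresponds to something on the order of 10^23 steps.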
Daniel Wagner's answer explains the general strategy of deriving bounds for the runtime complexity rather well. However, as is usually the case for general strategies, it yields too conservative bounds.
So, just for the heck of it, let's investigate this example in some more detail.
main = print (head (filter isPrime (filter ((==0) . (n `mod`)) [n-1,n-2..])))
  where n = 600851475143
(Aside: if n were prime, this would cause a runtime error when checking n `mod` 0 == 0, thus I change the list to [n, n-1 .. 2] so that the algorithm works for all n > 1.)
Let's split up the expression into its parts, so we can see and analyse each part more easily
main = print answer
  where
    n = 600851475143
    candidates = [n, n-1 .. 2]
    divisorsOfN = filter ((== 0) . (n `mod`)) candidates
    primeDivisors = filter isPrime divisorsOfN
    answer = head primeDivisors
Like Daniel, I work with the assumption that arithmetic operations, comparisons etc. are O(1) - although not true, that's a good enough approximation for all remotely reasonable inputs.
So, of the list candidates, the elements from n down to answer have to be generated, n - answer + 1 elements, for a total cost of O(n - answer + 1). For composite n, we have answer <= n/2, so that's Θ(n).
Generating the list of divisors as far as needed is then Θ(n - answer + 1) too.
For the number d(n) of divisors of n, we can use the coarse estimate d(n) <= 2√n.
All divisors >= answer of n have to be checked for primality, that's at least half of all divisors.
Since the list of divisors is lazily generated, the complexity of
isPrime :: Integer -> Bool
isPrime p = (divisors p) == [1, p]
is O(smallest prime factor of p), because as soon as the first divisor > 1 is found, the equality test is determined. For composite p, the smallest prime factor is <= √p.
We have < 2√n primality checks of complexity at worst O(√n), and one check of complexity Θ(answer), so the combined work of all prime tests carried out is O(n).
Summing up, the total work needed is O(n), since the cost of each step is O(n) at worst.
In fact, the total work done in this algorithm is Θ(n). If n is prime, generating the list of divisors as far as needed is done in O(1), but the prime test is Θ(n). If n is composite, answer <= n/2, and generating the list of divisors as far as needed is Θ(n).
If we don't consider the arithmetic operations to be O(1), we have to multiply with the complexity of an arithmetic operation on numbers the size of n, that is O(log n) bits, which, depending on the algorithms used, usually gives a factor slightly above log n and below (log n)^2.
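To relate that Θ(n) bound back to the one-minute limit (my own rough estimate): n = 600851475143 is about 6 * 10^11, so even a constant number of cheap operations per candidate means on the order of 10^12 steps, which is more than a typical machine performs in a minute; that is why constant-factor tweaks alone won't get this version under the limit.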
