Tree Recursive Fibonacci Algorithm Requires Linear Space?

I know that the Fibonacci series grows exponentially, so a recursive algorithm requires a number of steps that grows exponentially. However, SICP (2nd edition) says that a tree-recursive Fibonacci algorithm requires linear space, because we only need to keep track of the nodes above us in the tree.
I understand that the required number of steps grows proportionally to Fib(n), but I would also assume that, because the tree is expanding exponentially, the memory required would need to be exponential as well. Can someone explain why the memory required grows only linearly with n, and not exponentially?

I am guessing this is a consequence of the use of applicative order in evaluation. Given
(define (fib n)
  (cond ((= n 0) 0)
        ((= n 1) 1)
        (else (+ (fib (- n 1))
                 (fib (- n 2))))))
[from Structure and Interpretation of Computer Programs]
normal-order evaluation of (fib 5) would keep expanding until it got to primitive expressions:
(+ (+ (+ (+ (fib 1) (fib 0)) (fib 1)) (+ (fib 1) (fib 0))) (+ (+ (fib 1) (fib 0)) (fib 1)))
That would result in all the leaves of the tree being stored in memory at once, which would require space exponentially related to n.
But applicative-order evaluation should proceed differently, driving down along one branch until it reaches primitive expressions at two leaves, then ascending that branch and accumulating any side branches. This would result in a maximum-length expression for (fib 5) of:
(+ (+ (+ (+ (fib 1) (fib 0)) (fib 1)) (fib 2)) (fib 3))
This expression is much shorter than the fully expanded normal-order expression. Its length is not determined by the number of leaves in the tree, only by the depth of the tree.
This is my answer after staring at that sentence in SICP for more time than I care to admit.
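To see this concretely, here is a small sketch (my own instrumentation, not from SICP) that threads a depth argument through the tree recursion and records the maximum depth reached alongside the total number of calls:

;; counters reset to 0 before each run
(define max-depth 0)
(define call-count 0)

(define (fib-traced n depth)
  (set! call-count (+ call-count 1))
  (set! max-depth (max max-depth depth))
  (cond ((= n 0) 0)
        ((= n 1) 1)
        (else (+ (fib-traced (- n 1) (+ depth 1))
                 (fib-traced (- n 2) (+ depth 1))))))

Running (fib-traced 20 1) returns 6765; call-count ends up at 21891 (roughly proportional to fib(n), i.e. exponential in n), while max-depth is only 20. That depth, not the number of calls, is what the evaluator has to hold at any one moment, which is why the space is linear.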

You do not store the whole tree, only as many stack frames as the depth you are currently at.

The difference between normal-order evaluation and applicative-order evaluation is similar to the difference between a breadth-first search and a depth-first search.
In this case, with applicative-order evaluation, the combinations appearing as arguments are evaluated one at a time, from left to right, until no combinations remain (when the combination currently being evaluated still contains combinations inside it, the first of those is evaluated next, and so on).
This means the space first expands and then shrinks again as the first argument combination gets fully evaluated, and likewise for the second, the third, and so on. The maximum space for the whole evaluation therefore depends on the depth of the evaluation process.
Hence a tree-recursive Fibonacci algorithm requires linear space. Hope this helps.

Related

How much time (Big-O) will an algorithm take which can rule out one third of possible numbers from 1 to N in each step?

I am abstracting the problem (it has nothing to do with prime numbers).
How much time (in terms of Big-O) will it take to determine whether n is the solution?
Suppose I was able to design an algorithm that can rule out one third of the numbers from the possible answers {1, 2, ..., n} in the first step, and then successively rules out one third of the "remaining" numbers until all numbers are tested.
I have thought a lot about it but can't figure out whether it will be O(n log₃(n)) or O(log₃(n)).
It depends on the algorithm, and on the value of N. You should be able to figure out and program an algorithm that takes O(sqrt(N)) rather easily, and it's not difficult to go down to O(sqrt(N) / log N). Anything better requires some rather deep mathematics, but there are algorithms that are a lot faster for large N.
Now when you say O(N log N), please don't guess these things. O(N log N) is ridiculous: the most naive algorithm, using nothing but the definition of a prime number, is already O(N).
Theoretically, the best effort is O(log^3 N), but the corresponding algorithm is not something you could figure out easily. See http://en.wikipedia.org/wiki/AKS_primality_test
There are more practical probabilistic algorithms though.
BTW, about "ruling out one third" and so on: it does not matter whether it is log base 3 or log base 10, since O(log N) roughly means "a logarithm in any base"; they all differ from each other only by a constant factor. So the complexity of such an algorithm will be log N times the complexity of the reduction step. But the problem is that a single step will hardly take constant time, and if it doesn't, this will not help in achieving O(log N).
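To make the "ruling out one third" point concrete, here is a small sketch (in Scheme, purely for illustration; the helper name is made up) that counts how many passes the process takes when each pass keeps about two thirds of the remaining candidates:

(define (passes-needed n)
  (let loop ((remaining n) (steps 0))
    (if (<= remaining 1)
        steps
        ;; each pass keeps about two thirds of what is left
        (loop (quotient (* remaining 2) 3) (+ steps 1)))))

(passes-needed 1000000) evaluates to 32, i.e. about log base 3/2 of n. Removing a different fraction per pass only changes the constant in front of log N; the total running time is that number of passes multiplied by the cost of one pass, which is exactly the point about the reduction step above.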

Segment tree: amount of numbers smaller than x

I'm trying to solve this problem.
I found a tutorial for this problem, but I don't get how to build a segment tree that will find the amount of numbers less than x in O(log n) (x can change); the tutorial omits that part.
Can anyone explain to me how to do it?
It is pretty simple:
Store in each node a sorted array of all the numbers in the range covered by that node (O(n log n) memory and time for initialization).
To answer a query, decompose the query segment into O(log n) nodes (the same way as it is done for a standard min/max/sum segment tree) and run a binary search over the array stored in each of those nodes to find the number of elements less than x. This gives O(log^2 n) time per query. You can also achieve O(log n) using fractional cascading, but it is not necessary.
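Here is a hedged sketch of that idea in Scheme (the node representation and helper names are my own, not from the tutorial): each node stores its range and a sorted vector of the values in that range, and a query is decomposed into fully covered nodes, binary-searching each one.

;; node representation: (lo hi sorted-vector left right), covering positions [lo, hi)
(define (node-lo n) (list-ref n 0))
(define (node-hi n) (list-ref n 1))
(define (node-sorted n) (list-ref n 2))
(define (node-left n) (list-ref n 3))
(define (node-right n) (list-ref n 4))

;; merge two sorted lists
(define (merge a b)
  (cond ((null? a) b)
        ((null? b) a)
        ((<= (car a) (car b)) (cons (car a) (merge (cdr a) b)))
        (else (cons (car b) (merge a (cdr b))))))

;; build the tree over positions [lo, hi) of vec; O(n log n) time and memory
(define (build vec lo hi)
  (if (= (+ lo 1) hi)
      (list lo hi (vector (vector-ref vec lo)) #f #f)
      (let* ((mid (quotient (+ lo hi) 2))
             (left (build vec lo mid))
             (right (build vec mid hi))
             (sorted (list->vector
                      (merge (vector->list (node-sorted left))
                             (vector->list (node-sorted right))))))
        (list lo hi sorted left right))))

;; binary search: how many elements of the sorted vector are < x
(define (count-less sorted x)
  (let loop ((lo 0) (hi (vector-length sorted)))
    (if (= lo hi)
        lo
        (let ((mid (quotient (+ lo hi) 2)))
          (if (< (vector-ref sorted mid) x)
              (loop (+ mid 1) hi)
              (loop lo mid))))))

;; count elements < x among positions [l, r); O(log^2 n) per query
(define (query node l r x)
  (cond ((or (<= r (node-lo node)) (<= (node-hi node) l)) 0)
        ((and (<= l (node-lo node)) (<= (node-hi node) r))
         (count-less (node-sorted node) x))
        (else (+ (query (node-left node) l r x)
                 (query (node-right node) l r x)))))

For example, (query (build (vector 5 1 4 2 3) 0 5) 1 4 3) returns 2, since among positions 1..3 only the values 1 and 2 are less than 3.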

Complexity of iterated logarithm on base 2

Assuming iterated logarithm is defined as it is here: http://en.wikipedia.org/wiki/Iterated_logarithm
How should I go about comparing its complexity to other functions, for example lg(lg(n))? So far I've done all the comparing by calculating limits, but how do you calculate a limit involving the iterated logarithm?
I understand it grows very slowly, slower than lg(n), but is there some function that grows at the same speed as lg*(n) (where lg* is the iterated logarithm base 2), which would make it easier to compare to other functions? That way I could also compare lg*(lg(n)) to lg(lg*(n)), for example. Any tips on comparing functions to each other based on their rate of growth would be appreciated.
The iterated logarithm function log* n isn't easily compared to another function that has similar behavior, the same way that log n isn't easily compared to another function with similar behavior.
To understand the log* function, I've found it helpful to keep a few things in mind:
This function grows extremely slowly. Figure that log* of 2^(2^(2^(2^2))), a tower of five 2s, is 5, and that tower of 2s is a quantity bigger than the number of atoms in the known universe.
It plays with logarithms the way that logarithms play with division. The quotient rule for logarithms says that log (n / 2) = log n - log 2 = log n - 1, assuming our logs are base-2. One intuition for this is that, in a sense, log n represents "how many times do you have to divide n by two to drop n down to 1?," so log (n / 2) "should" be log n - 1 because we did one extra division beforehand. On the other hand, (log n) / 2 is probably a lot smaller than log n - 1. Similarly, log* n counts "how many times do you have to take the log of n before you drop n down to some constant?" so log* log n = log* n - 1 because we've taken one additional log. On the other hand, log log* n is probably much smaller than log* n - 1.
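If it helps to experiment, here is a small sketch (my own helper names) that computes log* by repeatedly taking logs, using the integer floor of log base 2 so the counts stay exact:

;; floor of log base 2 for an exact integer n >= 1 (count halvings until we reach 1)
(define (ilog2 n)
  (let loop ((n n) (k 0))
    (if (<= n 1) k (loop (quotient n 2) (+ k 1)))))

;; log*: how many times must we take the log before the value drops to 1?
(define (log-star n)
  (if (<= n 1)
      0
      (+ 1 (log-star (ilog2 n)))))

(log-star 65536) is 4 (65536 -> 16 -> 4 -> 2 -> 1), and (log-star (expt 2 65536)) is 5, matching the tower-of-2s example above.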
Hope this helps!

What is fastest way to calculate factorial in common lisp?

What is the fastest way to calculate the factorial in Common Lisp? For a start there is the tail-recursive version:
(defun factorial (n &optional (acc 1))
(if (<= n 1)
acc
(factorial (- n 1) (* acc n))))
But is it the fastest possible way?
You have implemented the naive algorithm for computing factorials. There are several algorithms with better asymptotic performance; see for example http://www.luschny.de/math/factorial/FastFactorialFunctions.htm
The fastest ones are based on the prime factorization of the factorial.
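For a flavor of why the plain accumulator loop is not the last word, here is a sketch of a simpler improvement, a balanced binary-splitting product (written here in Scheme to match the rest of this page; it translates directly to Common Lisp). This is not the prime-factorization method the link describes, but it tends to do much better than the linear accumulator for large n in implementations with fast bignum multiplication, because each multiplication gets operands of similar size:

;; product of the integers in [lo, hi], splitting the range in the middle
(define (range-product lo hi)
  (cond ((> lo hi) 1)
        ((= lo hi) lo)
        (else (let ((mid (quotient (+ lo hi) 2)))
                (* (range-product lo mid)
                   (range-product (+ mid 1) hi))))))

(define (factorial n) (range-product 1 n))

(factorial 10) is 3628800, as before; the difference only shows up once the intermediate products become large bignums.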

time complexity of the composite function in terms of n

Assuming n is a positive integer, the composite? function is defined as follows:
(define (composite? n)
(define (iter i)
(cond ((= i n) #f)
((= (remainder n i) 0) #t)
(else (iter (+ i 1)))))
(iter 2))
It seems to me that the time complexity (with a tight bound) here is O(n), or rather Θ(n). I am just eyeballing it right now: because we are adding 1 to the argument of iter every time we loop, it seems to be O(n). Is there a better explanation?
The function as written is O(n). But if you change the test (= i n) to (< n (* i i)) the time complexity drops to O(sqrt(n)), which is a considerable improvement; if n is a million, the time complexity drops from a million to a thousand. That test works because if n = pq, one of p and q must be less than the square root of n while the other is greater than the square root of n; thus, if no factor is found less than the square root of n, n cannot be composite. Newacct's answer correctly suggests that the cost of the arithmetic matters if n is large, but the cost of the arithmetic is log log n, not log n as newacct suggests.
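For concreteness, here is the variant with that change applied (same structure as the original, only the termination test differs), assuming n >= 2:

(define (composite? n)
  (define (iter i)
    (cond ((< n (* i i)) #f)              ; no divisor found up to sqrt(n), so not composite
          ((= (remainder n i) 0) #t)
          (else (iter (+ i 1)))))
  (iter 2))

(composite? 91) still returns #t (it finds the divisor 7), but for a prime n the loop now stops after about sqrt(n) iterations instead of n.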
Different people will give you different answers depending on what they assume and what they factor into the problem.
It is O(n) assuming that the equality and remainder operations you do inside each loop are O(1). It is true that the processor does these in O(1), but that only works for fixed-precision numbers. Since we are talking about asymptotic complexity, and since "asymptotic", by definition, deals with what happens when things grow without bound, we need to consider numbers that are arbitrarily big. (If the numbers in your problem were bounded, then the running time of the algorithm would also be bounded, and thus the entire algorithm would be technically O(1), obviously not what you want.)
For arbitrary-precision numbers, I would say that equality and remainder in general take time proportional to the size of the number, which is log n (unless you can optimize that away with amortized analysis somehow). So, if we consider that, the algorithm would be O(n log n). Some might consider this to be nitpicking.
