What is the fastest way to calculate the factorial in Common Lisp? For a start, there is the tail-recursive version:
(defun factorial (n &optional (acc 1))
  (if (<= n 1)
      acc
      (factorial (- n 1) (* acc n))))
But is it the fastest possible way?
You have implemented the naive algorithm for computing factorials. There are several algorithms with better asymptotic performance; see, for example, http://www.luschny.de/math/factorial/FastFactorialFunctions.htm
The fastest ones are based on the prime factorization of the factorial.
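For a sense of what a better algorithm buys you, here is a minimal sketch of a divide-and-conquer product (my own code, not the prime-factorization method, and the names range-product and fast-factorial are mine): computing n! by splitting the range 1..n in half keeps the bignum multiplications between operands of similar size, which is usually much faster than the left-to-right accumulation above for large n.
(defun range-product (lo hi)
  "Product of the integers from LO to HI, computed by splitting the range in half."
  (cond ((> lo hi) 1)
        ((= lo hi) lo)
        (t (let ((mid (floor (+ lo hi) 2)))
             (* (range-product lo mid)
                (range-product (+ mid 1) hi))))))

(defun fast-factorial (n)
  (range-product 1 n))
(fast-factorial 10) gives 3628800, the same result as the tail-recursive version; the payoff only shows up once n is in the thousands and the intermediate products become large bignums.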
I have n numbers between 0 and n^4 - 1. What is the fastest way I can sort them?
Of course, n log n is trivial, but I thought about the option of radix sort with base n; then it would be linear time. I am just not sure because of the -1.
Thanks for the help!
I think you are misunderstanding the efficiency of Radix Sort. From Wikipedia:
Radix sort complexity is O(wn) for n keys which are integers of word size w. Sometimes w is presented as a constant, which would make radix sort better (for sufficiently large n) than the best comparison-based sorting algorithms, which all perform O(n log n) comparisons to sort n keys. However, in general w cannot be considered a constant: if all n keys are distinct, then w has to be at least log n for a random-access machine to be able to store them in memory, which gives at best a time complexity O(n log n).
I personally would implement quicksort choosing an intelligent pivot. Using this method you can achieve about 1.188 n log n efficiency.
If we use radix sort in base n we get the desired linear time complexity; the -1 doesn't matter.
We represent the numbers in base n.
Then the work is at most log_n(n^4 - 1) * (n + n) <= 4 * 2n = O(n).
One n is for the n numbers, the other n is the per-pass bucket overhead (an overestimate), and log_n(n^4 - 1) is less than log_n(n^4), which is 4, so there are at most 4 passes. Overall, linear time complexity (see the sketch below).
Thanks for the help anyway! If I did something wrong, please notify me!
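For concreteness, here is a minimal sketch of the base-n radix sort described above (my own code; the function name radix-sort-base-n is made up). It assumes the keys are non-negative integers below n^4, so four stable passes over base-n digits suffice, each pass doing O(n) work on the keys plus O(n) bucket overhead.
(defun radix-sort-base-n (numbers n)
  ;; LSD radix sort with radix N: four counting passes, one per base-n digit.
  (let ((keys (coerce numbers 'vector)))
    (dotimes (pass 4 (coerce keys 'list))
      (let ((buckets (make-array n :initial-element nil))
            (divisor (expt n pass)))
        ;; distribute the keys by their current base-n digit
        (loop for x across keys
              do (push x (aref buckets (mod (floor x divisor) n))))
        ;; collect the buckets in order, keeping each one stable
        (let ((i 0))
          (dotimes (b n)
            (dolist (x (nreverse (aref buckets b)))
              (setf (aref keys i) x)
              (incf i))))))))

;; (radix-sort-base-n '(15 203 7 0 255 42) 4)   ; all keys below 4^4 = 256
;; => (0 7 15 42 203 255)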
I am currently working on some problems from my textbook about Big-O notation and how functions can dominate each other.
These are the functions that I am looking at from my book.
1. n²
2. n² + 1000n
3. n (if n is odd); n³ (if n is even)
4. n (if n ≤ 100); n³ (if n > 100)
I am trying to figure out which functions #1 dominates. I know that both #1 and #2 simplify to n², so it does not dominate #2. However, the piecewise functions (#3 and #4) are giving me problems: #1 dominates each of them only under one condition, and under the other condition #1 is dominated instead. So does this mean that, since it is not always dominating, it doesn't technically count as dominating at all? Does #1 not dominate any of these functions, or does it dominate #3 for all odd numbers and #4 for all numbers ≤ 100? The way I see it, #1 does not dominate #2, only dominates #3 for odd numbers, and only dominates #4 for numbers ≤ 100. Am I on the right track?
Thanks for any help anyone can provide. I'm having a real tough time trying to reason this out to myself.
I am not sure what "dominates" means in your case. Let's say "f(n) dominates g(n)" translates to f(n) ∈ O(g(n)), where O(g(n)) is the worst-case complexity.
So we should calculate the worst case complexity first:
#1: n² is in Θ(n²)
#2: n² + 1000n is also in Θ(n²)
#3: n (if n is odd), n³ (if n is even) is in Θ(n³) (just picking the worst case, which occurs in 50% of all cases for random choices of n)
#4: n (if n ≤ 100), n³ (if n > 100) is also in Θ(n³), since Big-O depends on asymptotics (large values of n).
Now we can compare the worst case complexities and see #1 dominates only #2.
Maybe you want to use the average case instead of the worst case, but that could change things only for #3.
After calculating (n³ + n) / 2 we notice that even the average case of #3 is in Θ(n³).
If you look at the best case you get the first change, but again only for #3: there the best case is Θ(n), so #3 is dominated by #1.
Notice that the best case of #4 is not Θ(n), since the complexity holds only for n → ∞, so we ignore all cases of n < c₀ where c₀ is a constant.
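As a quick numeric sanity check (my own illustration, not part of the answer above; the names f1 through f4 are made up), comparing the four functions at a large even n shows the bounded ratio for #2 and the unbounded ratios for #3 and #4 that the Θ classes above describe.
(defun f1 (n) (* n n))                      ; #1: n^2
(defun f2 (n) (+ (* n n) (* 1000 n)))       ; #2: n^2 + 1000n
(defun f3 (n) (if (oddp n) n (* n n n)))    ; #3: n if odd, n^3 if even
(defun f4 (n) (if (<= n 100) n (* n n n)))  ; #4: n if n <= 100, n^3 otherwise

;; At n = 1000 (even and > 100):
;; (/ (f2 1000) (f1 1000)) => 2     -- bounded ratio: #1 and #2 are both Theta(n^2)
;; (/ (f3 1000) (f1 1000)) => 1000  -- ratio grows with n: worst case of #3 is Theta(n^3)
;; (/ (f4 1000) (f1 1000)) => 1000  -- likewise for #4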
I know that the Fibonacci sequence grows exponentially, and therefore a tree-recursive algorithm needs a number of steps that grows exponentially; however, SICP (2nd edition) says that a tree-recursive Fibonacci algorithm requires only linear space, because we only need to keep track of the nodes above us in the tree.
I understand that the required number of steps grows linearly with Fib(n), but I would also assume that, because the tree expands exponentially, the memory required would need to be exponential as well. Can someone explain why the memory required only grows linearly with n, and not exponentially?
I am guessing this is a consequence of the use of applicative order in evaluation. Given
(define (fib n)
  (cond ((= n 0) 0)
        ((= n 1) 1)
        (else (+ (fib (- n 1))
                 (fib (- n 2))))))
[from Structure and Interpretation of Computer Programs]
normal-order evaluation of (fib 5) would keep expanding until it got to primitive expressions:
(+ (+ (+ (+ (fib 1) (fib 0)) (fib 1)) (+ (fib 1) (fib 0))) (+ (+ (fib 1) (fib 0)) (fib 1)))
That would result in all the leaves of the tree being stored in memory, which would require space that grows exponentially with n.
But applicative-order evaluation should proceed differently, driving down to primitive expressions along one branch to two leaves, and then ascending the branch and accumulating any side branches. This would result in a maximum length expression for (fib 5) of:
(+ (+ (+ (+ (fib 1) (fib 0)) (fib 1)) (fib 2)) (fib 3))
This expression is much shorter than the expression used in normal-order evaluation. The length of this expression is not affected by the number of leaves in the tree, only the depth of the tree.
This is my answer after staring at that sentence in SICP for more time than I care to admit.
You do not store the whole tree, only as many stack frames as the depth you are currently at.
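To make that concrete, here is a small instrumented version (my own sketch in Common Lisp, not from SICP; fib-stats is a made-up name) that counts both the total number of calls and the maximum recursion depth. The call count grows exponentially, but the depth, which is what bounds the space, grows only linearly with n.
(defun fib-stats (n)
  (let ((calls 0) (max-depth 0))
    (labels ((fib (n depth)
               (incf calls)
               (setf max-depth (max max-depth depth))
               (if (< n 2)
                   n
                   (+ (fib (- n 1) (1+ depth))
                      (fib (- n 2) (1+ depth))))))
      (values (fib n 0) calls max-depth))))

;; (fib-stats 20) => 6765, with 21891 calls but a maximum depth of only 19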
The difference between normal-order evaluation and applicative-order evaluation is similar to the difference between a depth-first search and a breadth-first search.
In applicative-order evaluation, which is the case here, the combinations supplied as operands are evaluated one by one, from left to right, until no combination remains (while a combination is being evaluated, if it still contains combinations inside it, the first of those is evaluated next, and so on).
This means the space first expands and then shrinks as the first combination gets evaluated, and likewise for the second, the third, and so on. So the maximum space for the whole evaluation depends on the depth of the evaluation process.
Hence a tree-recursive Fibonacci algorithm requires linear space. Hope it helps.
What is the standard way of writing "the big-O of the greatest of m and n"?
It can be written as
O(m+n)
It might not look the same at first, but it is, since
max(m, n) <= m+n <= 2max(m, n)
If you want, you can also just write O(max(m, n)).
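A typical place where this shows up (my own example, not from the answer above; merge-sorted is a made-up name, chosen to avoid shadowing Common Lisp's built-in merge): merging two sorted lists of lengths m and n touches each element once, so it runs in O(m + n) time, which by the inequality above is the same as O(max(m, n)).
(defun merge-sorted (xs ys)
  ;; Merge two already-sorted lists of numbers into one sorted list.
  (cond ((null xs) ys)
        ((null ys) xs)
        ((<= (first xs) (first ys))
         (cons (first xs) (merge-sorted (rest xs) ys)))
        (t
         (cons (first ys) (merge-sorted xs (rest ys))))))

;; (merge-sorted '(1 4 9) '(2 3 10 20)) => (1 2 3 4 9 10 20)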
Assuming n is a positive integer, the composite? procedure performs as follows:
(define (composite? n)
  (define (iter i)
    (cond ((= i n) #f)
          ((= (remainder n i) 0) #t)
          (else (iter (+ i 1)))))
  (iter 2))
It seems to me that the time complexity (with a tight bound) here is O(n), or rather Θ(n). I am just eyeballing it right now. Because we are adding 1 to the argument of iter every time through the loop, it seems to be O(n). Is there a better explanation?
The function as written is O(n). But if you change the test (= i n) to (< n (* i i)), the time complexity drops to O(sqrt(n)), which is a considerable improvement; if n is a million, the number of iterations drops from about a million to about a thousand.
That test works because if n = pq, one of p and q must be at most the square root of n while the other is at least the square root of n; thus, if no factor is found that is at most the square root of n, n cannot be composite.
Newacct's answer correctly suggests that the cost of the arithmetic matters if n is large, but the cost of the arithmetic is log log n, not log n as newacct suggests.
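A minimal sketch of the suggested change (written here in Common Lisp rather than the question's Scheme, and not the answerer's actual code; composite-p is my own name): stop the trial division as soon as i*i exceeds n.
(defun composite-p (n)
  ;; Trial division up to sqrt(n): O(sqrt(n)) iterations instead of O(n).
  (loop for i from 2
        until (< n (* i i))           ; no divisor <= sqrt(n) found => not composite
        when (zerop (rem n i)) return t
        finally (return nil)))

;; (composite-p 91) => T    (7 * 13)
;; (composite-p 97) => NIL  (prime)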
Different people will give you different answers depending on what they assume and what they factor into the problem.
It is O(n) assuming that the equality and remainder operations you do inside each loop are O(1). It is true that the processor does these in O(1), but that only works for fixed-precision numbers. Since we are talking about asymptotic complexity, and since "asymptotic", by definition, deals with what happens when things grow without bound, we need to consider numbers that are arbitrarily big. (If the numbers in your problem were bounded, then the running time of the algorithm would also be bounded, and thus the entire algorithm would be technically O(1), obviously not what you want.)
For arbitrary-precision numbers, I would say that equality and remainder in general take time proportional to the size of the number, which is log n (unless you can optimize that away in amortized analysis somehow). So, if we take that into account, the algorithm would be O(n log n). Some might consider this to be nitpicky.