Algorithm to beat Strassen's Algorithm

Professor Caesar wishes to develop a matrix-multiplication algorithm that is asymptotically faster than Strassen's algorithm. His algorithm will use the divide-and-conquer method, dividing each matrix into pieces of size n/4 x n/4, and the divide and combine steps together will take Theta(n^2) time.

You don't really specify what the question is here, but I assume it is to show that this straightforward scheme does not run faster than Strassen's algorithm.
Say you divide your matrices into blocks, each of dimension (n/k) x (n/k) (in your question, k is 4). Then each matrix has k^2 blocks, and there are k^3 block multiplications (each block in the first matrix is multiplied by k blocks in the second matrix). Consequently, the complexity recurrence is
T(n) = k^3 T(n/k) + Θ(n^2).
By case 1 of the Master theorem, this implies
T(n) = Θ(n^(log_k(k^3))) = Θ(n^3).
This is the same as ordinary matrix multiplication. It does not beat Strassen, obviously.
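To see the recurrence concretely, here is a small Python sketch (an illustration with a made-up helper name, not Professor Caesar's code) that counts the scalar multiplications performed by the n/4 blocking scheme; the count grows exactly like n^3:

# Sketch: the k = 4 blocking scheme does k^3 = 64 recursive block products per
# level; the Theta(n^2) combine work is ignored since it doesn't change the count.
def block_mults(n, k=4):
    if n <= 1:
        return 1                        # base case: one scalar multiplication
    return k**3 * block_mults(n // k, k)

for n in [4, 16, 64, 256]:
    print(n, block_mults(n), n**3)      # the two counts agree: Theta(n^3) growth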


Iterative Fibonacci complexity, cannot understand why it is O(n^2)

Please don't confuse this question with recursive Fibonacci, which has complexity O(2^n).
This is the iterative Fibonacci code I use:
def f(n):
    a, b = 0, 1
    for i in range(0, n):
        a, b = b, a + b
    return a
I tried to find the complexity and got T(n) = n * 4 + 4 = 4n + 4, but the timing graph I get is not linear at all and looks more like n^2. For example:
print(timerf(250000)/timerf(50000))
This gives a result of around 25.
I plotted a figure, and it suggests that the iterative Fibonacci method has complexity n^2. How can this be explained?
Iterative method complexity is O(n)*cost_of_addition
Usually people assume cost_of_addition to be a constant, but in the case of Fibonacci numbers we quickly outgrow this assumption.
Since F(n) grows exponentially, the number of digits in it is O(n). So the resulting complexity is O(n^2).
Maybe the reason is that addition of integers does not take constant time but linear time: O(number of bits).
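A quick way to convince yourself of this (a hedged sketch relying on Python's arbitrary-precision integers) is to look at how many bits F(n) has: the bit count grows linearly in n, so each addition near the end of the loop costs O(n) bit operations, and the whole loop costs O(n^2):

def f(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The bit length of F(n) roughly doubles when n doubles, i.e. it is Theta(n).
for n in [1000, 2000, 4000]:
    print(n, f(n).bit_length())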

Prove 3-Way Quicksort Big-O Bound

For 3-way Quicksort (dual-pivot quicksort), how would I go about finding the Big-O bound? Could anyone show me how to derive it?
There's a subtle difference between finding the complexity of an algorithm and proving it.
To find the complexity of this algorithm, you can do as amit said in the other answer: you know that, on average, you split your problem of size n into three smaller problems of size n/3, so after log_3(n) steps on average you reach problems of size 1. With experience, you will start getting a feeling for this approach and be able to deduce the complexity of algorithms just by thinking about them in terms of the subproblems involved.
To prove that this algorithm runs in O(n log n) in the average case, you can use the Master Theorem. To use it, you have to write the recurrence giving the time spent sorting your array. As we said, sorting an array of size n can be decomposed into sorting three arrays of size n/3 plus the time spent building them. This can be written as follows:
T(n) = 3T(n/3) + f(n)
Where T(n) is a function giving the resolution "time" for an input of size n (actually the number of elementary operations needed), and f(n) gives the "time" needed to split the problem into subproblems.
For 3-way quicksort, f(n) = c*n, because you go through the array, check where to place each item and eventually make a swap. This places us in Case 2 of the Master Theorem, which states that if f(n) = Θ(n^(log_b(a)) * log^k(n)) for some k >= 0 (in our case k = 0), then
T(n) = Θ(n^(log_b(a)) * log^(k+1)(n))
As a = 3 and b = 3 (we read these off the recurrence T(n) = aT(n/b) + f(n)), this simplifies to
T(n) = O(n log n)
And that's a proof.
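For reference, here is a minimal dual-pivot (3-way split) quicksort sketch in Python. It is a list-copying illustration of the 3T(n/3) + f(n) recurrence above, not a tuned in-place implementation:

import random

def quicksort3(xs):
    # Pick two random pivots p <= q and split the remaining items three ways.
    if len(xs) <= 1:
        return xs
    i, j = random.sample(range(len(xs)), 2)
    p, q = sorted((xs[i], xs[j]))
    rest = [x for k, x in enumerate(xs) if k != i and k != j]
    low = [x for x in rest if x < p]            # strictly below both pivots
    mid = [x for x in rest if p <= x <= q]      # between the pivots
    high = [x for x in rest if x > q]           # strictly above both pivots
    return quicksort3(low) + [p] + quicksort3(mid) + [q] + quicksort3(high)

print(quicksort3([5, 3, 8, 1, 9, 2, 7]))        # [1, 2, 3, 5, 7, 8, 9]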
Well, the same proof actually holds.
Each iteration splits the array into 3 sublists; on average, each of these sublists has size n/3.
Thus the number of iterations needed is log_3(n), because you need to find how many times you can do (((n/3)/3)/3)... until you get down to one. This gives you the formula:
n/(3^i) = 1
which is satisfied for i = log_3(n).
Each level still goes over all the input (split across the different sublists), same as in quicksort, which gives you O(n*log_3(n)).
Since log_3(n) = log(n)/log(3) = log(n) * CONSTANT, you get that the run time is O(nlogn) on average.
Note that even if you take a more pessimistic approach to estimating the sublist sizes, by taking the minimum of the uniform distribution, you still get a first sublist of size n/4, a second sublist of size n/2, and a last sublist of size n/4 (the minimum and maximum of the uniform distribution), which again decays in log_k(n) iterations (with a different k > 2) and again yields O(nlogn) overall.
Formally, the proof will be something like:
Each iteration takes at most c_1 * n ops to run, for each n > N_1, for some constants c_1, N_1. (Definition of big O notation, and the claim that each iteration is O(n) excluding recursion. Convince yourself why this is true. Note that here "iteration" means all the work done by the algorithm at a certain "level" of the recursion, not a single recursive invocation.)
As seen above, you have log_3(n) = log(n)/log(3) iterations in the average case (taking the optimistic version here; the same principles apply to the pessimistic one).
Now, we get that the running time T(n) of the algorithm is:
for each n > N_1:
T(n) <= c_1 * n * log(n)/log(3)
T(n) <= c_1 * nlogn
By definition of big O notation, it means T(n) is in O(nlogn) with M = c_1 and N = N_1.
QED

Big O, what is the complexity of summing a series of n numbers?

I always thought the complexity of:
1 + 2 + 3 + ... + n is O(n), and summing two n by n matrices would be O(n^2).
But today I read in a textbook, "by the formula for the sum of the first n integers, this is n(n+1)/2", which is (1/2)n^2 + (1/2)n, and thus O(n^2).
What am I missing here?
The big O notation can be used to determine the growth rate of any function.
In this case, it seems the book is not talking about the time complexity of computing the value, but about the value itself. And n(n+1)/2 is O(n^2).
You are confusing complexity of runtime and the size (complexity) of the result.
The running time of summing, one after the other, the first n consecutive numbers is indeed O(n).1
But the complexity of the result, that is, the size of "sum from 1 to n" = n(n + 1)/2, is O(n^2).
1 But for arbitrarily large numbers this is simplistic since adding large numbers takes longer than adding small numbers. For a precise runtime analysis, you indeed have to consider the size of the result. However, this isn’t usually relevant in programming, nor even in purely theoretical computer science. In both domains, summing numbers is usually considered an O(1) operation unless explicitly required otherwise by the domain (i.e. when implementing an operation for a bignum library).
n(n+1)/2 is the quick way to sum a consecutive sequence of N integers (starting from 1). I think you're confusing an algorithm with big-oh notation!
If you thought of it as a function, then the big-oh complexity of this function is O(1):
public int sum_of_first_n_integers(int n) {
    return (n * (n + 1)) / 2;
}
The naive implementation would have big-oh complexity of O(n).
public int sum_of_first_n_integers(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i++) {
        sum += i;
    }
    return sum;
}
Even just looking at each cell of a single n-by-n matrix is O(n^2), since the matrix has n^2 cells.
There really isn't a complexity of a problem, but rather a complexity of an algorithm.
In your case, if you choose to iterate through all the numbers, the complexity is, indeed, O(n).
But that's not the most efficient algorithm. A more efficient one is to apply the formula n*(n+1)/2, which takes constant time, and thus the complexity is O(1).
So my guess is that this is actually a reference to Cracking the Coding Interview, which has this paragraph on a StringBuffer implementation:
On each concatenation, a new copy of the string is created, and the two strings are copied over, character by character. The first iteration requires us to copy x characters. The second iteration requires copying 2x characters. The third iteration requires 3x, and so on. The total time therefore is O(x + 2x + ... + nx). This reduces to O(xn²). (Why isn't it O(xnⁿ)? Because 1 + 2 + ... + n equals n(n+1)/2 or, O(n²).)
For whatever reason I found this a little confusing on my first read-through, too. The important bit to see is that n is multiplying n, in other words that n² is happening, and that dominates. Treating the segment length x as a constant, O(xn²) is just O(n²): the x is sort of a red herring.
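As a hedged illustration of the quoted passage (counting copied characters rather than timing them, since CPython sometimes optimizes repeated concatenation in place; the helper name is my own), the totals below track x * n(n+1)/2, i.e. O(xn²):

def chars_copied(n, x):
    # Rebuild the string from scratch on every concatenation and count
    # how many characters get written in total.
    result, copied = "", 0
    piece = "a" * x
    for _ in range(n):
        result = result + piece         # the new string has len(result) + x chars
        copied += len(result)
    return copied

for n in [10, 100, 1000]:
    print(n, chars_copied(n, x=5), 5 * n * (n + 1) // 2)   # the two columns match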
You have a formula that doesn't depend on the number of numbers being added, so it's a constant-time algorithm, or O(1).
If you add each number one at a time, then it's indeed O(n). The formula is a shortcut; it's a different, more efficient algorithm. The shortcut works when the numbers being added are all 1..n. If you have a non-contiguous sequence of numbers, then the shortcut formula doesn't work and you'll have to go back to the one-by-one algorithm.
None of this applies to the matrix of numbers, though. To add two matrices, it's still O(n^2) because you're adding n^2 distinct pairs of numbers to get a matrix of n^2 results.
There's a difference between summing N arbitrary integers and summing N that are all in a row. For 1+2+3+4+...+N, you can take advantage of the fact that they can be divided into pairs with a common sum, e.g. 1+N = 2+(N-1) = 3+(N-2) = ... = N + 1. So that's N+1, N/2 times. (If there's an odd number, one of them will be unpaired, but with a little effort you can see that the same formula holds in that case.)
That is not O(N^2), though. It's just a formula whose value is roughly N^2/2; evaluating it takes O(1) time. O(N^2) would mean (roughly) that the number of steps to calculate it grows like N^2 for large N. In this case, the number of steps is the same regardless of N.
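A tiny sanity check of the pairing argument (illustration only, with a made-up helper name):

def pair_sum(n):
    # Each of the n/2 pairs (1, n), (2, n-1), ... sums to n + 1.
    return n * (n + 1) // 2

assert pair_sum(10) == sum(range(1, 11)) == 55
assert pair_sum(7) == sum(range(1, 8)) == 28    # odd n: the formula still holds
print("pairing formula checked for n = 7 and n = 10")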
Adding the first n numbers:
Consider the algorithm:
Series_Add(n)
    return n*(n+1)/2
This algorithm indeed runs in O(|n|^2), where |n| is the length (in bits) of n and not its magnitude, simply because multiplication of two numbers, one of k bits and the other of l bits, runs in O(k*l) time.
Careful
Considering this algorithm:
Series_Add_pseudo(n):
    sum = 0
    for i = 1 to n:
        sum += i
    return sum
which is the naive approach, you can assume that this algorithm runs in linear time or generally in polynomial time. This is not the case.
The input representation (length) of n is O(log n) bits (for any base other than unary), and the algorithm, although it runs linearly in the magnitude of n, runs exponentially (2^(log n) = n) in the length of the input.
This is actually the pseudo-polynomial algorithm case. It appears to be polynomial, but it is not.
You can even try it in Python (or any programming language) with a medium-length number, say 200 bits.
Applying the first algorithm, the result comes in a split second; applying the second, you would have to wait a century...
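Here is a rough Python demonstration of that difference (timings are machine-dependent; the exact numbers are not the point):

import time

def series_add(n):
    return n * (n + 1) // 2             # cost is polynomial in the bit-length of n

def series_add_pseudo(n):
    total = 0
    for i in range(1, n + 1):           # n iterations: exponential in the bit-length
        total += i
    return total

n = 1 << 200                            # a 201-bit number
start = time.perf_counter()
print(series_add(n))                    # returns essentially instantly
print("closed form took", time.perf_counter() - start, "seconds")
# series_add_pseudo(n) would need about 2^200 iterations and will never finish.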
1+2+3+...+n is always less than n+n+n+...+n (n times), and you can rewrite n+n+...+n as n*n.
f(n) = O(g(n)) if there exists a positive integer n0 and a positive constant c, such that f(n) ≤ c * g(n) ∀ n ≥ n0
Since Big-Oh represents an upper bound of the function, here the function f(n) is the sum of the natural numbers up to n.
Now, talking about time complexity: for small numbers, the addition should be a constant amount of work. But the size of n could be humongous; you can't deny that possibility.
Adding integers can take a linear amount of time when n is really large. So you can say that addition is an O(n) operation and you're adding n items, so that alone would make it O(n^2). Of course, it will not always take n^2 time, but it's the worst case when n is really large (upper bound, remember?).
Now, let's say you directly try to compute it using n(n+1)/2. Just one multiplication and one division; this should be a constant operation, no?
No.
Using a natural size metric of number of digits, the time complexity of multiplying two n-digit numbers using long multiplication is Θ(n^2). When implemented in software, long multiplication algorithms must deal with overflow during additions, which can be expensive. (Wikipedia)
That again leaves us with O(n^2).
It's equivalent to O(n^2), because it is equivalent to (n^2 + n)/2, and in Big-O you ignore constant factors; so even though the squared n is divided by 2, you still have quadratic growth.
Think about O(n) and O(n/2): we similarly don't distinguish the two. O(n/2) is just O(n) with a smaller constant, but the growth rate is still linear.
What that means is that as n increases, if you were to plot the number of operations on a graph, you would see an n^2 curve appear.
You can see that already:
when n = 2 you get 3
when n = 3 you get 6
when n = 4 you get 10
when n = 5 you get 15
when n = 6 you get 21
And if you plot these values, you see that the curve is similar to that of n^2: you get a smaller number at each y, but the shape is the same. Thus we say that the magnitude is the same, because the number of operations grows like n^2 as n gets bigger.
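Reproducing that table in a small sketch makes the comparison explicit: the running total is the triangular number n(n+1)/2, which grows like n^2/2:

for n in range(2, 7):
    ops = sum(range(1, n + 1))          # 3, 6, 10, 15, 21 -- the values listed above
    print(n, ops, n * n / 2)            # n^2 / 2 shown for comparison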
The sum of a series of n natural numbers can be found in two ways. The first way is by adding all the numbers in a loop. In this case the algorithm is linear and the code looks like this:
int sum = 0;
for (int i = 1; i <= n; i++) {
    sum += i;
}
return sum;
This is analogous to 1+2+3+4+......+n. In this case the complexity of the algorithm is measured by the number of times the addition operation is performed, which is O(n).
The second way of finding the sum of the first n natural numbers is the direct formula n*(n+1)/2. This formula uses multiplication instead of repeated addition. The multiplication operation does not have linear time complexity: there are various multiplication algorithms with time complexity ranging from roughly O(N^1.45) to O(N^2), so in practice the cost of multiplication depends on the processor's architecture. For analysis purposes, however, the time complexity of multiplication is taken here as O(N^2), so when one uses the second way to find the sum, the time complexity will be O(N^2).
Here the multiplication operation is not the same as the addition operation. Anybody with some knowledge of computer organisation can easily understand the internal working of the multiplication and addition operations: the multiplication circuit is more complex than the adder circuit and requires much more time to compute its result. So the time complexity of summing the series can't be considered constant.

Determining complexity of an integer factorization algorithm

I'm starting to study computational complexity, Big-Oh notation and the like, and I was tasked to write an integer factorization algorithm and determine its complexity. I've written the algorithm and it is working, but I'm having trouble calculating the complexity. The pseudocode is as follows:
DEF fact (INT n)
BEGIN
    INT i
    FOR (i -> 2 TO i <= n / i STEP 1)
    DO
        WHILE ((n MOD i) = 0)
        DO
            PRINT("%int X", i)
            n -> n / i
        DONE
    DONE
    IF (n > 1)
    THEN
        PRINT("%int", n)
END
What I attempted to do, I think, is extremely wrong:
f(x) = n-1 + n-1 + 1 + 1 = 2n
so
f(n) = O(n)
Which I think is wrong, because factorization algorithms are supposed to be computationally hard; they can't even be polynomial. So what do you suggest to help me? Maybe I'm just too tired at this time of the night and I'm screwing this all up :(
Thank you in advance.
This phenomenon is called pseudopolynomiality: a complexity that seems to be polynomial, but really isn't. If you ask whether a certain complexity (here, n) is polynomial or not, you must look at how the complexity relates to the size of the input. In most cases, such as sorting (which e.g. merge sort can solve in O(n lg n)), n describes the size of the input (the number of elements). In this case, however, n does not describe the size of the input; it is the input value. What, then, is the size of n? A natural choice would be the number of bits in n, which is approximately lg n. So let w = lg n be the size of n. Now we see that O(n) = O(2^(lg n)) = O(2^w) - in other words, exponential in the input size w.
(Note that O(n) = O(2^(lg n)) = O(2^w) is always true; the question is whether the input size is described by n or by w = lg n. Also, if n describes the number of elements in a list, one should strictly speaking count the bits of every single element in the list in order to get the total input size; however, one usually assumes that in lists, all numbers are bounded in size (to e.g. 32 bits)).
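To make that concrete, here is a runnable Python version of the pseudocode above (a sketch, not a serious factoring routine). For a prime n it performs about sqrt(n) trial divisions, i.e. roughly 2^(w/2) steps for a w-bit input, which is still exponential in the input size w:

def fact(n):
    # Trial division up to sqrt(n), collecting prime factors.
    factors = []
    i = 2
    while i * i <= n:                   # same bound as "i <= n / i" in the pseudocode
        while n % i == 0:
            factors.append(i)
            n //= i
        i += 1
    if n > 1:
        factors.append(n)               # whatever is left is prime
    return factors

print(fact(360))                        # [2, 2, 2, 3, 3, 5]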
Use the fact that your algorithm is recursive: if f(x) is the number of operations taken to factor x, and n is the first factor that is found, then f(x) = (n-1) + f(x/n). The worst case for any factoring algorithm is a prime number, for which the complexity of your algorithm is O(n).
Factoring algorithms are 'hard' mainly because they are used on obscenely large numbers.
In big-O notation, n is the size of input, not the input itself (as in your case). The size of the input is lg(n) bits. So basically your algorithm is exponential.

Complexity of recursive factorial program

What's the complexity of a recursive program to find factorial of a number n? My hunch is that it might be O(n).
If you take multiplication as O(1), then yes, O(N) is correct. However, note that multiplying two numbers of arbitrary length x is not O(1) on finite hardware -- as x tends to infinity, the time needed for multiplication grows (e.g. if you use Karatsuba multiplication, it's O(x ** 1.585)).
You can theoretically do better for sufficiently huge numbers with Schönhage-Strassen, but I confess I have no real-world experience with that one. x, the "length" or "number of digits" (in whatever base; it doesn't matter for big-O anyway) of N, grows as O(log N), of course.
If you mean to limit your question to factorials of numbers short enough to be multiplied in O(1), then there's no way N can "tend to infinity" and therefore big-O notation is inappropriate.
Assuming you're talking about the most naive factorial algorithm ever:
factorial (n):
    if (n = 0) then return 1
    otherwise return n * factorial(n-1)
Yes, the algorithm is linear, running in O(n) time. This is the case because it executes once every time it decrements the value n, and it decrements the value n until it reaches 0, meaning the function is called recursively n times. This assumes, of course, that both decrementing and multiplication are constant-time operations.
Of course, if you implement factorial some other way (for example, using addition recursively instead of multiplication), you can end up with a much more time-complex algorithm. I wouldn't advise using such an algorithm, though.
When you express the complexity of an algorithm, it is always as a function of the input size. It is only valid to assume that multiplication is an O(1) operation if the numbers that you are multiplying are of fixed size. For example, if you wanted to determine the complexity of an algorithm that computes matrix products, you might assume that the individual components of the matrices were of fixed size. Then it would be valid to assume that multiplication of two individual matrix components was O(1), and you would compute the complexity according to the number of entries in each matrix.
However, when you want to figure out the complexity of an algorithm to compute N! you have to assume that N can be arbitrarily large, so it is not valid to assume that multiplication is an O(1) operation.
If you want to multiply an n-bit number with an m-bit number the naive algorithm (the kind you do by hand) takes time O(mn), but there are faster algorithms.
If you want to analyze the complexity of the easy algorithm for computing N!
factorial(N)
    f = 1
    for i = 2 to N
        f = f * i
    return f
then at the k-th step in the for loop, you are multiplying (k-1)! by k. The number of bits used to represent (k-1)! is O(k log k) and the number of bits used to represent k is O(log k). So the time required to multiply (k-1)! and k is O(k (log k)^2) (assuming you use the naive multiplication algorithm). Then the total amount of time taken by the algorithm is the sum of the time taken at each step:
sum k = 1 to N [k (log k)^2] <= (log N)^2 * (sum k = 1 to N [k]) =
O(N^2 (log N)^2)
You could improve this performance by using a faster multiplication algorithm, like Schönhage-Strassen, which takes time O(n*log(n)*log(log(n))) for two n-bit numbers.
The other way to improve performance is to use a better algorithm to compute N!. The fastest one that I know of first computes the prime factorization of N! and then multiplies all the prime factors.
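A small check of the bit-count claim above (a sketch relying on Python's big integers, not a proof): the number of bits in k! is on the order of k log k.

import math

f = 1
for k in range(2, 10001):
    f *= k                              # f is now 10000!

# Both values are on the order of 10^5 and agree up to a constant factor.
print(f.bit_length(), round(10000 * math.log2(10000)))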
To derive the time complexity of the recursive factorial, consider the code:
factorial (n) {
    if (n = 0)
        return 1
    else
        return n * factorial(n-1)
}
So the time complexity satisfies the recurrence
T(n) = T(n-1) + 3    (3 because each call performs three constant-time operations:
                      checking the value of n, the subtraction, and the multiplication)
     = T(n-2) + 6    (second recursive call)
     = T(n-3) + 9    (third recursive call)
     .
     .
     .
     = T(n-k) + 3k
until k = n. Then
     = T(n-n) + 3n
     = T(0) + 3n
     = 1 + 3n
To express this in Big-Oh notation: T(n) is directly proportional to n, therefore the time complexity of the recursive factorial is O(n).
Although no extra data structures are allocated, the recursion goes n levels deep, so the call stack makes the space complexity O(n).
