Integer multiplication algorithm using a divide and conquer approach? - algorithm

As homework, I should implement integer multiplication on numbers of 1000 digits using a divide and conquer approach that works below O(n). What algorithm should I look into?

The Schönhage–Strassen algorithm is one of the fastest multiplication algorithms known. It takes O(n log n log log n) time.
Fürer's algorithm is the fastest large-number multiplication algorithm known so far and takes O(n log n · 2^(O(log* n))) time.
I don't think any multiplication algorithm could take less than O(n) time, since it has to at least read all n digits of the input. That's simply not possible.

Take a look at the Karatsuba algorithm. It multiplies two n-digit numbers in O(n^log2(3)) ≈ O(n^1.585) time, and it involves a recursion step which you can easily model with divide-and-conquer.
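Here is a minimal sketch of Karatsuba multiplication in Python on non-negative integers, just to show the divide-and-conquer shape; the base-case threshold and function name are illustrative, not from the original answer:

```python
# Minimal Karatsuba sketch for non-negative Python ints (illustrative only).
def karatsuba(x, y):
    # Fall back to the builtin operator for small operands.
    if x < 2**32 or y < 2**32:
        return x * y

    # Split both numbers around half the bit length of the larger one.
    m = max(x.bit_length(), y.bit_length()) // 2
    x_hi, x_lo = x >> m, x & ((1 << m) - 1)
    y_hi, y_lo = y >> m, y & ((1 << m) - 1)

    # Three recursive multiplications instead of four.
    a = karatsuba(x_hi, y_hi)                         # high * high
    b = karatsuba(x_lo, y_lo)                         # low * low
    c = karatsuba(x_hi + x_lo, y_hi + y_lo) - a - b   # cross terms

    return (a << (2 * m)) + (c << m) + b

if __name__ == "__main__":
    import random
    x = random.getrandbits(4000)   # roughly 1200 decimal digits
    y = random.getrandbits(4000)
    assert karatsuba(x, y) == x * y
```

Each call replaces one multiplication of n-digit numbers by three multiplications of roughly n/2-digit numbers, which is where the O(n^1.585) bound comes from.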

Related

What is the time complexity to calculate 2^5000?

I approached it by recursion, but that leads to O(N), where N is the exponent. Is there any way to reduce this time complexity?
I think you are interested in the general approach, not only in this particular example.
You can calculate the N-th integer power using O(log N) multiplications with the exponentiation-by-squaring approach.
But note that the number 2^N consists of about N binary digits (bits), so simply writing it to memory is already an O(N) operation.
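A minimal sketch of exponentiation by squaring, assuming an integer base and a non-negative integer exponent; the function name is illustrative:

```python
# Exponentiation by squaring: O(log N) multiplications.
def int_pow(base, exp):
    result = 1
    while exp > 0:
        if exp & 1:          # lowest bit of the exponent is set
            result *= base
        base *= base         # square the base
        exp >>= 1            # move to the next bit of the exponent
    return result

# About 13 loop iterations for the exponent 5000, although each multiplication
# on such large numbers is of course not constant time.
assert int_pow(2, 5000) == 2**5000
```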

Is an algorithm with a worst-case time complexity of O(n) always faster than an algorithm with a worst-case time complexity of O(n^2)?

This question has appeared in my algorithms class. Here's my thought:
I think the answer is no, an algorithm with worst-case time complexity of O(n) is not always faster than an algorithm with worst-case time complexity of O(n^2).
For example, suppose we have total-time functions S(n) = 99999999n and T(n) = n^2. Then clearly S(n) = O(n) and T(n) = O(n^2), but T(n) is faster than S(n) for all n < 99999999.
Is this reasoning valid? I'm slightly worried that, while this is a counterexample, it might be a counterexample to the wrong idea.
Thanks so much!
Big-O notation says nothing about the speed of an algorithm for any given input; it describes how the time increases with the number of elements. If your algorithm executes in constant time, but that time is 100 billion years, then it's certainly slower than many linear, quadratic and even exponential algorithms for large ranges of inputs.
But that's probably not really what the question is asking. The question is asking whether an algorithm A1 with worst-case complexity O(N) is always faster than an algorithm A2 with worst-case complexity O(N^2); and by faster it probably refers to the complexity itself. In which case you only need a counter-example, e.g.:
A1 has normal complexity O(log n) but worst-case complexity O(n^2).
A2 has normal complexity O(n) and worst-case complexity O(n).
In this example, A1 is normally faster (i.e. scales better) than A2 even though it has a greater worst-case complexity.
Since the question says always, it is enough to find a single counterexample to prove that the answer is no.
Here is an example for O(n^2) and O(n log n), but the same is true for O(n^2) and O(n).
One simple example is bubble sort, where you keep comparing adjacent pairs until the array is sorted. Bubble sort is O(n^2) in the worst case.
If you run bubble sort (with the usual early-exit check) on an already sorted array, it will be faster than other algorithms of time complexity O(n log n).
You're talking about worst-case complexity here, and for some algorithms the worst case never happens in practical applications.
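As a concrete illustration of the bubble-sort point above, here is a sketch with the usual early-exit flag (an assumption on my part; the answer does not spell out the implementation), which finishes in a single O(n) pass on already sorted input:

```python
# Bubble sort with an early-exit flag (illustrative sketch).
def bubble_sort(a):
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:       # no swaps means the array is already sorted
            break
    return a

assert bubble_sort([1, 2, 3, 4, 5]) == [1, 2, 3, 4, 5]   # one pass, O(n)
assert bubble_sort([5, 3, 1, 4, 2]) == [1, 2, 3, 4, 5]   # general case, O(n^2)
```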
Saying that an algorithm runs faster than another means it runs faster for all input data and all input sizes. So the answer to your question is obviously no, because worst-case time complexity is not an exact measure of the running time; it measures the order of growth of the number of operations in the worst case.
In practice, the running time depends on the implementation, and is not only about the number of operations. For example, one has to care about memory allocation, cache efficiency, and spatial/temporal locality. And obviously, one of the most important things is the input data.
If you want examples of an algorithm that runs faster than another while having a higher worst-case complexity, look at the sorting algorithms and how their running time depends on the input.
You are correct in every sense: you provide a counterexample to the statement. If it is for an exam, then, period, it should earn you full marks.
Still, for a better understanding of big-O notation and complexity, I will share my own reasoning below. I also suggest you always picture the following graph when you are confused, especially the O(n) and O(n^2) lines:
Big-O notation
My own reasoning when I first learnt computational complexity was this:
Big-O notation says that for sufficiently large input, where "sufficiently large" depends on the exact formulas (in the graph, around n = 20 when comparing the O(n) and O(n^2) curves), a higher-order algorithm will always be slower than a lower-order one.
That means that for small input there is no guarantee a higher-order-complexity algorithm will run slower than a lower-order one.
But big-O notation does tell you one thing: as the input size keeps increasing, you eventually reach a "sufficient" size, and beyond that point a higher-order-complexity algorithm will always be slower. Such a "sufficient" size is guaranteed to exist.
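To make the "sufficient size" concrete, here is a small sketch reusing the S(n) and T(n) from the question; the sample points are my own choice:

```python
# S(n) = 99999999 * n is O(n); T(n) = n**2 is O(n^2).
def S(n):
    return 99999999 * n

def T(n):
    return n ** 2

# The "worse" big-O is actually cheaper until n reaches the constant factor.
for n in (10, 10**4, 10**7, 10**8, 10**9):
    print(n, T(n) < S(n))
# Prints True for the first three values and False for the last two: once
# n passes about 10**8, the O(n^2) algorithm stays slower forever.
```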
Worst-case complexity
While big-O notation provides an upper bound on the running time of an algorithm, depending on the structure of the input and the implementation, an algorithm generally has a best-case, an average-case and a worst-case complexity.
The famous example is sorting algorithm: QuickSort vs MergeSort!
QuickSort, with a worst case of O(n^2)
MergeSort, with a worst case of O(n lg n)
However, in practice quicksort is usually faster than merge sort!
So, if your question is about worst-case complexity, quicksort vs. merge sort may be the best counterexample I can think of (because both of them are common and famous).
Therefore, combining the two parts: whether you look at input size, input structure, or algorithm implementation, the answer to your question is no.
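To make quicksort's O(n^2) worst case concrete, here is a small sketch (my own illustration, not from the answer) that counts element-vs-pivot comparisons for a first-element-pivot quicksort on shuffled versus already sorted input:

```python
# Comparison-counting sketch of first-element-pivot quicksort (illustrative).
import random
import sys

def quicksort_comparisons(a):
    """Count the element-vs-pivot comparisons needed to sort the list a."""
    if len(a) <= 1:
        return 0
    pivot = a[0]
    rest = a[1:]
    less = [x for x in rest if x < pivot]
    greater = [x for x in rest if x >= pivot]
    return len(rest) + quicksort_comparisons(less) + quicksort_comparisons(greater)

sys.setrecursionlimit(10_000)                   # sorted input recurses ~n deep
n = 2000
shuffled = random.sample(range(n), n)
print(quicksort_comparisons(shuffled))          # on the order of n*log n (tens of thousands)
print(quicksort_comparisons(list(range(n))))    # exactly n*(n-1)/2 = 1,999,000
```

With a randomized pivot (or median-of-three), the sorted-input worst case above essentially never occurs, which is why quicksort does so well in practice despite the O(n^2) bound.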

Does an algorithm exist that finds the product of two n*n matrices in fewer than n^3 iterations?

I read that there is an algorithm that can calculate the product of two matrices in about n^2.3 complexity, but I was unable to find the algorithm.
There have been several algorithms found for matrix multiplication with a big O less than n^3. But here's one of the problems with making conclusions based on big O notation. It only gives the limiting behaviour as n goes to infinity. In this case a more useful metric is the total time complexity which includes the coefficients and lower order terms.
For the general algorithm the time complexity could be An^3 + Bn^2 +...
For the case of the Coppersmith-Winograd algorithm the coefficient for the n^2.375477 term is so large that for all practical purposes the general algorithm with O(n^3) complexity is faster.
This is also true for the Strassen algorithm if it's applied all the way down to single elements. However,
there is a paper which claims that a hybrid algorithm, which uses the Strassen algorithm on matrix blocks down to some size limit and then switches to the O(n^3) algorithm, is faster for large matrices.
So although there exist algorithms with a smaller time complexity, the only one I'm aware of that is useful in practice is the Strassen algorithm, and only for large matrices (whatever "large" means).
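Here is a rough sketch of that hybrid idea, assuming NumPy arrays whose dimensions are powers of two; LEAF_SIZE and the function name are illustrative, and since NumPy's @ is already highly optimized this Python version only shows the block recursion rather than beating the library routine:

```python
# Strassen recursion with a classical fallback on small blocks (illustrative).
import numpy as np

LEAF_SIZE = 64  # below this size, fall back to ordinary multiplication

def strassen(A, B):
    n = A.shape[0]
    if n <= LEAF_SIZE:
        return A @ B  # classical O(n^3) multiplication on small blocks

    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]

    # Seven block multiplications instead of eight.
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)

    C = np.empty_like(A)
    C[:k, :k] = M1 + M4 - M5 + M7
    C[:k, k:] = M3 + M5
    C[k:, :k] = M2 + M4
    C[k:, k:] = M1 - M2 + M3 + M6
    return C

A = np.random.rand(512, 512)
B = np.random.rand(512, 512)
assert np.allclose(strassen(A, B), A @ B)
```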
Edit: Wikipedia actually has a nice summary of the algorithms for matrix multiplication. Here is a plot from that same link showing the reduction in omega for the different algorithms vs. the year they were discovered.
https://en.wikipedia.org/wiki/Matrix_multiplication#mediaviewer/File:Bound_on_matrix_multiplication_omega_over_time.svg
The Strassen algorithm is able to multiply matrices with an asymptotic complexity smaller than O(n^3).
The Coppersmith–Winograd algorithm calculates the product of two NxN matrices in O(n^2.375477) asymptotic time.

Is O(log n) always faster than O(n)

If there are 2 algorithms that calculate the same result with different complexities, will O(log n) always be faster? If so, please explain. BTW this is not an assignment question.
No. If one algorithm runs in N/100 and the other one in (log N)*100, then the second one will be slower for smaller input sizes. Asymptotic complexities are about the behavior of the running time as the input sizes go to infinity.
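A quick numeric check of that example (the cost-function names are mine, not from the answer):

```python
# Compare the two running-time formulas from the answer above.
from math import log2

def cost_linear(N):
    return N / 100            # the O(N) algorithm

def cost_log(N):
    return 100 * log2(N)      # the O(log N) algorithm

for N in (100, 10_000, 100_000, 200_000, 1_000_000):
    print(N, cost_linear(N) < cost_log(N))
# The linear-time formula stays cheaper up to roughly N = 174,000; beyond that
# the O(log N) algorithm wins, which is exactly the asymptotic point.
```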
No, it will not always be faster. BUT, as the problem size grows larger and larger, eventually you will always reach a point where the O(log n) algorithm is faster than the O(n) one.
In real-world situations, usually the point where the O(log n) algorithm would overtake the O(n) algorithm would come very quickly. There is a big difference between O(log n) and O(n), just like there is a big difference between O(n) and O(n^2).
If you ever have the chance to read Programming Pearls by Jon Bentley, there is an awesome chapter in there where he pits a O(n) algorithm against a O(n^2) one, doing everything possible to give O(n^2) the advantage. (He codes the O(n^2) algorithm in C on an Alpha, and the O(n) algorithm in interpreted BASIC on an old Z80 or something, running at about 1MHz.) It is surprising how fast the O(n) algorithm overtakes the O(n^2) one.
Occasionally, though, you may find a very complex algorithm which has complexity just slightly better than a simpler one. In such a case, don't blindly choose the algorithm with a better big-O -- you may find that it is only faster on extremely large problems.
For an input of size n, an O(n) algorithm will perform a number of steps proportional to n, while an O(log(n)) algorithm will perform roughly log(n) steps.
Clearly log(n) is much smaller than n, hence the algorithm of complexity O(log(n)) is better, since it will be much faster for large inputs.
More Answers from stackoverflow

Alternatives for Gaussian elimination transform and conquer algorithm

The Gaussian elimination algorithm, as a transform-and-conquer technique, has O(n^3) complexity. Is there any technique that gives this algorithm a more efficient complexity?
There are algorithms for matrix inversion with better asymptotic complexity, e.g., the Strassen algorithm with complexity O(n^2.807) and the Coppersmith–Winograd algorithm with complexity O(n^2.376).
(Note that the complexities of matrix multiplication and matrix inversion are the same.)
It depends on which complexity you measure:
Number of multiplications: No, by changing the technique you can only worsen the complexity of Gaussian elimination.
Number of time steps: Yes, parallel implementation of the row operations reduces time complexity to O(n).
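For reference, here is a minimal sketch of Gaussian elimination with partial pivoting, to make the O(n^3) count concrete; the two inner loops are the row operations that the parallel answer above refers to (the function name and test matrix are illustrative):

```python
# Gaussian elimination with partial pivoting: three nested loops, O(n^3).
import numpy as np

def gaussian_elimination(A, b):
    """Solve A x = b by forward elimination and back substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n):                          # n pivot columns
        p = k + np.argmax(np.abs(A[k:, k]))     # partial pivoting
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):               # up to n-1 rows below the pivot
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]            # up to n column updates per row
            b[i] -= m * b[k]
    # Back substitution is only O(n^2) compared to the elimination above.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.random.rand(5, 5) + 5 * np.eye(5)   # well-conditioned test matrix
b = np.random.rand(5)
assert np.allclose(gaussian_elimination(A, b), np.linalg.solve(A, b))
```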
