Possible Duplicate: Fast convolution algorithm
I have two arrays a and b, each of length N. I want to calculate the result array as
res[i+j] += a[i]*b[j]
Is it possible to calculate this using the FFT or something similar in time faster than N^2? I saw this question already, 1D Fast Convolution without FFT, but I am not sure how to do it using the FFT.
E.g.: A=[1,2,3], B=[2,4,6]
res[3] = A[1]*B[2] + A[2]*B[1]
Thanks in advance
From what I understand, you want the FFT algorithm. Here you have an implementation of this algorithm, and also a good explanation of how to implement the FFT algorithm.
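To make the idea concrete, here is a minimal sketch of FFT-based convolution using NumPy (the use of NumPy and the name fft_convolve are my own choices, not the linked implementation). Zero-pad both arrays to the full output length, multiply their transforms pointwise, and invert; the whole thing runs in O(N log N) instead of the O(N^2) double loop.

```python
import numpy as np

def fft_convolve(a, b):
    """Full linear convolution of a and b via the FFT: res[i+j] += a[i]*b[j]."""
    n = len(a) + len(b) - 1            # length of the full convolution
    fa = np.fft.rfft(a, n)             # zero-padded forward transforms
    fb = np.fft.rfft(b, n)
    res = np.fft.irfft(fa * fb, n)     # pointwise product, then inverse FFT
    return np.rint(res).astype(int)    # round back to integers for integer input

print(fft_convolve([1, 2, 3], [2, 4, 6]))   # [ 2  8 20 24 18]
print(np.convolve([1, 2, 3], [2, 4, 6]))    # same result, for comparison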
I am stuck on a homework question. The question is as follows.
Consider four programs - A, B, C, and D - that have the following performances.
A: O(log n)
B: O(n)
C: O(n^2)
D: O(2^n)
If each program requires 10 seconds to solve a problem of size 1000, estimate the time required by each program when the size of its problem increases to 2000.
I am pretty sure that O(n) would just double to 20 seconds, since we are doubling the size and this would correspond to a loop in Java that iterates n times; doubling n would double the running time. But I am completely lost on the other three: A, C, and D.
I am not looking for direct answers to this question, but rather for someone to dumb down the way I can arrive at the answer. Maybe by explaining what each of these Big O notations is actually doing on the back end. If I understood the way that the algorithm is calculated and where all the elements fit into some sort of equation to solve for time, that would be awesome. Thank you in advance.
I have spent weeks combing through the textbook, but it is all written in a very complicated manner that I am having a hard time digesting. Videos online haven't been much help either.
Let's take an example (one that is not in your list): O(n^3).
The ratio between the sizes of your problems is 2000/1000 = 2. Big-O notation gives you an estimate: if a problem of size n takes about n^3 steps, then a problem of size 2n takes about (2n)^3 = 8n^3 steps, i.e. 8 times as long as the original task. So an O(n^3) program that needed 10 seconds at size 1000 would need roughly 80 seconds at size 2000.
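You can turn that scaling rule into a few lines of code as a sanity check (the function name estimate_time and the growth functions below are my own illustrative choices, not part of the homework): estimated new time = old time × f(new n) / f(old n).

```python
import math

def estimate_time(old_time, old_n, new_n, growth):
    """Scale a measured running time by the ratio of the growth function."""
    return old_time * growth(new_n) / growth(old_n)

# The cubic example from above: 10 s at n=1000 -> about 80 s at n=2000.
print(estimate_time(10, 1000, 2000, lambda n: n**3))           # 80.0

# The same rule works for any growth function, e.g. n*log(n):
print(estimate_time(10, 1000, 2000, lambda n: n * math.log(n)))
```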
I hope that would help.
Possible Duplicate: Example of O(n!)?
I can't seem to find any examples of algorithms that use O(n!) time complexity, and I can't seem to comprehend how it works. Please help.
A trivial example is the random sort algorithm. It randomly shuffles its input until it gets it sorted.
Of course, it has strictly no use in the real world, but still, it is O(n!).
EDIT: As pointed out in the comments, this is actually the average-case performance of this algorithm. The best case is O(n), which happens when the algorithm finds the right permutation right away and only has to verify it, and the worst case is unbounded, since you have no guarantee that the right permutation will ever come up.
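For completeness, here is a minimal sketch of that random sort (often called bogosort); the function names are my own:

```python
import random

def is_sorted(xs):
    """Check whether the list is in non-decreasing order: O(n) per check."""
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

def bogosort(xs):
    """Shuffle until sorted. Each shuffle hits the right order with
    probability about 1/n!, so the expected number of shuffles is about n!."""
    xs = list(xs)
    while not is_sorted(xs):
        random.shuffle(xs)
    return xs

print(bogosort([3, 1, 2]))   # [1, 2, 3] ... eventually
```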
Possible Duplicate: Efficient Algorithms for Computing a matrix times its transpose
Is there any way to compute the product of a matrix and its transpose that is faster than the normal, O(n^3) way? I have a matrix of 1000 rows and 1000 columns. If I assume n=1000, then I need to find the product of the matrix and its transpose in something around O(n^2) or O(n^2 log n) time. Is it possible?
Yes: there are faster algorithms for general matrix multiplication, like the Strassen algorithm, which runs in roughly O(N^2.8).
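To illustrate the idea, here is a minimal sketch of Strassen's scheme for square matrices whose size is a power of two (the function name strassen and the LEAF_SIZE cutoff are my own choices; a production implementation would handle padding and use a tuned library). It splits each matrix into four blocks and gets by with 7 recursive block products instead of 8, which is where the ~N^2.8 exponent comes from. Computing the matrix times its transpose is then just strassen(A, A.T).

```python
import numpy as np

LEAF_SIZE = 64  # below this size, fall back to ordinary multiplication

def strassen(A, B):
    """Multiply two n x n matrices (n a power of two) with Strassen's scheme."""
    n = A.shape[0]
    if n <= LEAF_SIZE:
        return A @ B                      # ordinary multiply for small blocks
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Seven block products instead of eight:
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.block([[C11, C12], [C21, C22]])

A = np.random.rand(1024, 1024)
print(np.allclose(strassen(A, A.T), A @ A.T))   # True, up to floating-point error
```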
I was reading R. G. Dromey's book How to Solve It by Computer, and in chapter 3 I found this problem: "Design and implement an algorithm to iteratively compute the reciprocal of a number." I am totally confused about how to do that: just before this he was teaching how to compute square roots, and then he suddenly comes up with this question. What's the connection?
And what would be the algorithm for this? Plus why do we need this when we can directly find the reciprocal of a number?
Iteratively computing a function like this usually means using some numerical-analysis method, like Newton-Raphson (http://en.wikipedia.org/wiki/Newton%27s_method) or binary search.
Such a method, and numerical analysis in general (http://en.wikipedia.org/wiki/Numerical_analysis), lets you approximate a root of a function f(x) without having an explicit formula for the solution.
As an example, you can calculate the root of f(x) = 5*x^2 + sqrt(x) + ln(x), where it is difficult to find a solution formula.
Plus why do we need this when we can directly find the reciprocal of a number?
Imagine that you need to calculate the reciprocal of a number on a machine where you cannot perform division, only addition, subtraction and multiplication. How do you do it? You use numerical analysis :)
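As a concrete sketch (my own illustration of the Newton-Raphson idea, not the book's solution): applying Newton's method to f(x) = 1/x - a gives the update x = x*(2 - a*x), which converges to 1/a using only multiplication and subtraction.

```python
def reciprocal(a, iterations=40):
    """Approximate 1/a (for a > 0) without ever dividing.
    Newton's method on f(x) = 1/x - a gives the update x = x*(2 - a*x)."""
    x = 1.0
    while a * x >= 2:      # scale the initial guess into (0, 2/a) so it converges
        x *= 0.5
    for _ in range(iterations):
        x = x * (2 - a * x)
    return x

print(reciprocal(8.0))    # 0.125
print(reciprocal(0.2))    # ~5.0
```

The square-root connection in the book is the same: Newton's iteration for square roots is another instance of this approximation scheme.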
Possible Duplicate: How to find a binary logarithm very fast? (O(1) at best)
How does the log function work? How is the log of a with base b calculated?
There are many algorithms for doing this. One of my favorites (WARNING: shameless plug) is one based on the Fibonacci numbers. The code contains a pretty elaborate comment that goes into how the math works. Given a and a^b, it runs in O(lg b) time and O(1) space.
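Since the linked code is not reproduced here, below is my own rough reconstruction of that Fibonacci-number idea (the function name and details are assumptions, not the original code). It climbs up through powers a^F(k) using the Fibonacci recurrence until it passes x = a^b, then walks back down, greedily subtracting Fibonacci-sized pieces from the exponent; because Fibonacci numbers grow exponentially, both phases take O(lg b) steps.

```python
def fib_log(a, x):
    """Given integers a >= 2 and x = a**b (b >= 0), return b.
    Uses O(lg b) multiplications and a constant number of variables."""
    if x == 1:
        return 0
    # Phase 1: climb through consecutive Fibonacci exponents F(k) while
    # keeping the matching powers of a, until a**F(k) reaches x.
    f1, f2 = 1, 1          # F(1), F(2)
    p1, p2 = a, a          # a**F(1), a**F(2)
    while p2 < x:
        f1, f2 = f2, f1 + f2
        p1, p2 = p2, p1 * p2
    # Phase 2: walk back down, greedily taking Fibonacci-sized chunks of the
    # exponent (every positive integer is a sum of Fibonacci numbers).
    b, acc = 0, 1
    while f2 >= 1:
        if acc * p2 <= x:          # does the exponent still have room for F(k)?
            acc *= p2
            b += f2
        # Step one Fibonacci index down; the matching power of a is recovered
        # by an exact division of exact powers.
        f1, f2 = f2 - f1, f1
        p1, p2 = p2 // p1, p1
    return b

print(fib_log(3, 3 ** 20))   # 20
print(fib_log(2, 2 ** 63))   # 63
```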
Hope this helps!