I successfully computed the 2D DCT of an image using the classic algorithm, and also as a combination of 1D transforms. Those two methods have time complexities of O(n^4) and O(n^3) respectively.
Applied to an actual image, it takes a very long time to compute: about 7 minutes for a 512 x 512 image using the O(n^3) version.
Are there any other algorithms for computing the DCT with a lower time complexity?
How does MATLAB do it so quickly, though?
There are two common approaches for a fast DCT.
DCT from DFT
A 1D DCT can be derived from a 1D DFT in O(n), so once an FFT algorithm is applied you get O(n·log(n)) for 1D and O(n^2·log(n)) for 2D. For more info see:
I am looking for a simple algorithm for fast DCT and IDCT of matrix [NxM]
This approach is the more widely used one, as it is a bit easier to implement. There are several ways to derive the DCT from the DFT; some use the same array size, while others use a double-size DFT.
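To make approach #1 concrete, here is a minimal NumPy sketch of the double-size variant (my own illustration, not MATLAB's actual code): mirror the signal to length 2N, take an FFT, and peel off the DCT-II with a twiddle factor; the 2D transform is just the 1D transform applied to all rows and then all columns. This FFT route is essentially how MATLAB gets its speed.

```python
import numpy as np

def dct_ii_via_fft(x):
    """Unnormalized 1D DCT-II, X_k = sum_n x_n cos(pi*(2n+1)*k/(2N)),
    computed through a double-length FFT."""
    N = len(x)
    y = np.concatenate([x, x[::-1]])      # even-symmetric extension, length 2N
    Y = np.fft.fft(y)[:N]                 # only the first N bins are needed
    k = np.arange(N)
    return 0.5 * (np.exp(-1j * np.pi * k / (2 * N)) * Y).real

def dct_2d(img):
    """2D DCT: 1D DCT on every row, then on every column -> O(n^2 log n)."""
    tmp = np.apply_along_axis(dct_ii_via_fft, 1, img)
    return np.apply_along_axis(dct_ii_via_fft, 0, tmp)

# sanity check against the direct O(N^2) definition
x = np.random.rand(8)
N = len(x)
direct = [sum(x[n] * np.cos(np.pi * (2 * n + 1) * k / (2 * N)) for n in range(N))
          for k in range(N)]
assert np.allclose(dct_ii_via_fft(x), direct)
```

On a 512 x 512 image this runs in milliseconds rather than minutes, since each of the 1024 row/column transforms costs O(n log n) instead of O(n^2).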
Fast DCT
There are also fast DCT formulations out there, but they are not commonly used because they are not very well known and not well documented online. A more important point is that the recursive breakdown involves both the DCT and the DST, and the split is usually into 3 terms instead of 2, which makes the implementation much harder. You also need a fast DST implementation, which is analogous to the DCT: it likewise breaks down into 3 terms and uses both DCT and DST. The bright side is that it does not involve the complex domain, but as you can imagine, much more code is needed in comparison to #1.
From a quick search I found this:
The fast DCT-IV/DST-IV computation via the MDCT
But finding relevant info about fast DCT in the real domain is a problem, because most articles are either hard-wired (constant-n) implementations or use approach #1. And when you do find something, it usually contains errors and does not work. Your best bet for this approach is to find some old paper or book on computer graphics or discrete math.
Given an invertible matrix M over the rationals Q, the inverse matrix M^(-1) is again a matrix over Q. Are there (efficient) libraries to compute the inverse precisely?
I am aware of high-performance linear algebra libraries such as BLAS/LAPACK, but these libraries are based on floating point arithmetic and are thus not suitable for computing precise (analytical) solutions.
Motivation: I want to compute the absorption probabilities of a large absorbing Markov chain using its fundamental matrix. I would like to do so precisely.
Details: By large, I mean a 1000x1000 matrix in the best case, and a several million dimensional matrix in the worst case. The further I can scale things the better. (I realize that the worst case is likely far out of reach.)
You can use the Eigen matrix library, which with little effort works on arbitrary scalar types. There is an example in the documentation of how to use it with GMP's mpq_class: http://eigen.tuxfamily.org/dox/TopicCustomizing_CustomScalar.html
Of course, as @btilly noted, most of the time you should not compute the inverse at all, but instead compute a matrix decomposition and use that to solve equation systems. For rational numbers you can use any LU decomposition, or, if the matrix is symmetric, the LDLt decomposition. See here for a catalogue of decompositions.
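If you just want to see the pattern before wiring up Eigen, here is a small pure-Python sketch of the same idea, with fractions.Fraction standing in for GMP's mpq_class; it solves (I - Q)x = r exactly rather than forming an inverse, as suggested above. The 2x2 chain here is made-up toy data:

```python
from fractions import Fraction

def solve_exact(A, b):
    """Solve A x = b exactly over the rationals by Gauss-Jordan elimination.
    A is a list of rows; all arithmetic stays in Fraction, so no rounding."""
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)  # any nonzero pivot is exact
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# absorption probabilities x = (I - Q)^{-1} r for a toy absorbing chain
Q = [[Fraction(1, 2), Fraction(1, 4)],
     [Fraction(1, 3), Fraction(1, 3)]]
r = [Fraction(1, 4), Fraction(1, 3)]  # transition probabilities into the absorbing state
I_minus_Q = [[(1 if i == j else 0) - Q[i][j] for j in range(2)] for i in range(2)]
print(solve_exact(I_minus_Q, r))      # [Fraction(1, 1), Fraction(1, 1)]
```

With a single absorbing state the probabilities come out exactly 1, which is a handy sanity check; floating point would only give you something close to 1.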
What is the best algorithm to perform parallel multiplication of large integers?
I think your best choice is Fast Fourier multiplication. You can use parallel algorithms for computing the FFT and its inverse, and the element-wise operations can be parallelized as well.
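Here is a rough serial sketch of the idea: multiplying two digit sequences is a convolution, which the FFT computes in O(n log n). The two forward FFTs, the pointwise products, and the inverse FFT are the parts that parallelize naturally. Note that production big-integer libraries use an exact number-theoretic transform rather than this floating-point FFT, which only stays rounding-safe for moderate sizes:

```python
import numpy as np

def bigmul(a, b, base=10):
    """Multiply two little-endian digit lists via FFT-based convolution."""
    n = 1
    while n < len(a) + len(b):
        n *= 2                      # pad to a power-of-two FFT length
    fa, fb = np.fft.fft(a, n), np.fft.fft(b, n)
    conv = np.rint(np.fft.ifft(fa * fb).real).astype(np.int64)
    carry, out = 0, []
    for v in conv:                  # propagate carries to get proper digits
        carry, digit = divmod(int(v) + carry, base)
        out.append(digit)
    while len(out) > 1 and out[-1] == 0:
        out.pop()                   # strip leading zeros
    return out

print(bigmul([3, 2, 1], [6, 5, 4]))  # 123 * 456 = 56088 -> [8, 8, 0, 6, 5]
```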
I am looking for an efficient algorithm to find the largest eigenpair of a small, general (non-square, non-sparse, non-symmetric) complex matrix A of size m x n. By small I mean m and n are typically between 4 and 64, and usually around 16, but with m not equal to n.
This problem is straightforward to solve with the general LAPACK SVD algorithms, i.e. gesvd or gesdd. However, as I am solving millions of these problems and only require the largest eigenpair, I am looking for a more efficient algorithm. Additionally, in my application the eigenvectors will generally be similar for all cases. This led me to investigate Arnoldi-iteration-based methods, but I have neither found a good library nor an algorithm that applies to my small, general, complex matrix. Is there an appropriate algorithm and/or library?
Rayleigh quotient iteration has cubic convergence, but each step requires an LU or QR decomposition of your matrix, so you may also want to implement the power method and see how the two compare.
http://en.wikipedia.org/wiki/Rayleigh_quotient_iteration
Following @rchilton's comment, you can apply this to A*A, where A* denotes the conjugate transpose of A.
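To make that concrete, here is a small NumPy sketch (an illustration, not a tuned routine) of the power method applied to A*A, which converges to the largest singular value and the corresponding singular vectors; since your eigenvectors are similar across problems, warm-starting v0 with the previous solution instead of a random vector should cut the iteration count further:

```python
import numpy as np

def largest_singular_pair(A, v0=None, iters=100, tol=1e-12):
    """Largest singular triple (sigma, u, v) of a small complex m x n matrix,
    via power iteration on A*A (applied as two matrix-vector products)."""
    m, n = A.shape
    v = v0 if v0 is not None else np.random.randn(n) + 1j * np.random.randn(n)
    v /= np.linalg.norm(v)
    sigma = 0.0
    for _ in range(iters):
        w = A.conj().T @ (A @ v)                # one power step on A*A
        sigma_new = np.sqrt(np.linalg.norm(w))  # eigenvalue of A*A is sigma^2
        v = w / np.linalg.norm(w)
        if abs(sigma_new - sigma) < tol * sigma_new:
            break
        sigma = sigma_new
    u = A @ v
    sigma = np.linalg.norm(u)
    return sigma, u / sigma, v

A = np.random.randn(16, 12) + 1j * np.random.randn(16, 12)
s, u, v = largest_singular_pair(A)
print(s, np.linalg.svd(A, compute_uv=False)[0])  # the two should agree
```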
The idea of looking for the largest eigenpair is analogous to finding a large power of the matrix, as the lower-frequency modes get damped out during the iteration. The Lanczos algorithm is one of a few such algorithms that rely on the so-called Ritz eigenvectors during the decomposition. From Wikipedia:
The Lanczos algorithm is an iterative algorithm ... that is an adaptation of power methods to find eigenvalues and eigenvectors of a square matrix or the singular value decomposition of a rectangular matrix. It is particularly useful for finding decompositions of very large sparse matrices. In latent semantic indexing, for instance, matrices relating millions of documents to hundreds of thousands of terms must be reduced to singular-value form.
The technique works even if the system is not sparse, but if it is large and dense it has the advantage that it doesn't all have to be stored in memory at the same time.
How does it work?
The power method for finding the largest eigenvalue of a matrix A can be summarized by noting that if x_{0} is a random vector and x_{n+1} = A x_{n}, then in the large-n limit, x_{n} / ||x_{n}|| approaches the normalized eigenvector corresponding to the largest eigenvalue.
Non-square matrices?
Noting that your system is not a square matrix, I'm pretty sure that the SVD problem can be decomposed into separate linear algebra problems where the Lanczos algorithm would apply. A good place to ask such questions would be over at https://math.stackexchange.com/.
I've been reading a lot about the Fast Fourier Transform and am trying to understand the low-level aspect of it. Unfortunately, Google and Wikipedia are not helping much at all, and I have about 5 different algorithm books open that aren't helping much either.
I'm trying to find the FFT of something simple like the vector [1,0,0,0]. Sure, I could just plug it into MATLAB, but that won't help me understand what's going on underneath. Also, when I say I want to find the FFT of a vector, is that the same as saying I want to find the DFT of a vector, just with a more efficient algorithm?
You're right, "the" Fast Fourier transform is just a name for any algorithm that computes the discrete Fourier transform in O(n log n) time, and there are several such algorithms.
Here's the simplest explanation of the DFT and FFT as I think of them, and also examples for small N, which may help. (Note that there are alternative interpretations, and other algorithms.)
Discrete Fourier transform
Given N numbers f_0, f_1, f_2, …, f_{N-1}, the DFT gives a different set of N numbers.
Specifically: Let ω be a primitive N-th root of 1 (either in the complex numbers or in some finite field), which means that ω^N = 1 but no smaller power is 1. You can think of the f_k's as the coefficients of a polynomial P(x) = ∑ f_k x^k. The N new numbers F_0, F_1, …, F_{N-1} that the DFT gives are the results of evaluating the polynomial at powers of ω. That is, for each n from 0 to N-1, the new number F_n is P(ω^n) = ∑_{0≤k≤N-1} f_k ω^{nk}.
[The reason for choosing ω is that the inverse DFT has a nice form, very similar to the DFT itself.]
Note that finding these F's naively takes O(N^2) operations. But we can exploit the special structure that comes from the ω's we chose, and that allows us to do it in O(N log N). Any such algorithm is called the fast Fourier transform.
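That definition translates directly into a few lines of code. Here is the naive O(N^2) DFT in NumPy (a sketch using the common ω = e^{-2πi/N} convention, which is also what np.fft.fft uses), checked against the library FFT:

```python
import numpy as np

def dft_naive(f):
    """O(N^2) DFT: evaluate P(x) = sum_k f_k x^k at the powers of w."""
    N = len(f)
    w = np.exp(-2j * np.pi / N)    # a primitive N-th root of 1
    return np.array([sum(f[k] * w**(n * k) for k in range(N))
                     for n in range(N)])

f = [1, 0, 0, 0]
print(dft_naive(f))     # [1, 1, 1, 1]
print(np.fft.fft(f))    # the FFT gives the same numbers in O(N log N)
```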
Fast Fourier Transform
So here's one way of doing the FFT. I'll replace N with 2N to simplify notation. We have f_0, f_1, f_2, …, f_{2N-1}, and we want to compute P(ω^0), P(ω^1), …, P(ω^{2N-1}). Splitting P into its even-indexed and odd-indexed coefficients, we can write
P(x) = Q(x^2) + x·R(x^2), with
Q(y) = f_0 + f_2·y + … + f_{2N-2}·y^{N-1}
R(y) = f_1 + f_3·y + … + f_{2N-1}·y^{N-1}
Now here's the beauty of the thing. Since ω^2 is a primitive N-th root of 1, Q and R only need to be evaluated at the N points (ω^2)^0, …, (ω^2)^{N-1}. Moreover, the value at ω^{k+N} is very simply related to the value at ω^k: since ω^N = -1,
P(ω^k) = Q(ω^{2k}) + ω^k·R(ω^{2k}) and P(ω^{k+N}) = Q(ω^{2k}) - ω^k·R(ω^{2k}). So the evaluations of Q and R at those N points are enough.
This means that your original problem, of evaluating the 2N-term polynomial P at the 2N points ω^0 to ω^{2N-1}, has been reduced to two problems of evaluating the N-term polynomials Q and R at N points each. So the running time satisfies T(2N) = 2T(N) + O(N) and all that, which gives T(N) = O(N log N).
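The recursion above translates directly into code. Here is a minimal, unoptimized radix-2 sketch for power-of-two lengths, using the e^{-2πi/N} root and checked against NumPy's FFT:

```python
import numpy as np

def fft_recursive(f):
    """Radix-2 decimation-in-time FFT, following the derivation above.
    The length of f must be a power of two."""
    N = len(f)
    if N == 1:
        return list(f)
    Q = fft_recursive(f[0::2])      # even-indexed coefficients
    R = fft_recursive(f[1::2])      # odd-indexed coefficients
    w = np.exp(-2j * np.pi / N)     # a primitive N-th root of 1
    out = [0] * N
    for k in range(N // 2):
        t = w**k * R[k]
        out[k] = Q[k] + t           # P(w^k)       = Q(w^{2k}) + w^k R(w^{2k})
        out[k + N // 2] = Q[k] - t  # P(w^{k+N/2}) = Q(w^{2k}) - w^k R(w^{2k})
    return out

print(fft_recursive([1, 0, 0, 0]))  # [1, 1, 1, 1], the example from the question
print(np.fft.fft([1, 0, 0, 0]))     # same result from the library FFT
```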
Examples of DFT
Note that other definitions include factors of 1/N or 1/√N.
For N=2, ω=-1, and the Fourier transform of (a,b) is (a+b, a-b).
For N=3, ω is a primitive complex cube root of 1, and the Fourier transform of (a,b,c) is (a+b+c, a+bω+cω^2, a+bω^2+cω), since ω^4 = ω.
For N=4 and ω=i, the Fourier transform of (a,b,c,d) is (a+b+c+d, a+bi-c-di, a-b+c-d, a-bi-c+di). In particular, the example in your question: the DFT of (1,0,0,0) gives (1,1,1,1), which is perhaps not very illuminating.
The FFT is just an efficient implementation of the DFT. The results should be identical for both, but in general the FFT will be much faster. Make sure you understand how the DFT works first, since it is much simpler and much easier to grasp.
When you understand the DFT, then move on to the FFT. Note that although the general principle is the same, there are many different implementations and variations of the FFT, e.g. decimation-in-time vs. decimation-in-frequency, radix-2 vs. other radices and mixed radix, complex-to-complex vs. real-to-complex, etc.
A good practical book to read on the subject is Fast Fourier Transform and Its Applications by E. Brigham.
Yes, the FFT is merely an efficient DFT algorithm. Understanding the FFT itself might take some time unless you've already studied complex numbers and the continuous Fourier transform; but it is basically a change of basis to a basis derived from periodic functions.
(If you want to learn more about Fourier analysis, I recommend the book Fourier Analysis and Its Applications by Gerald B. Folland)
I'm also new to Fourier transforms and I found this online book very helpful:
The Scientist and Engineer's Guide to Digital Signal Processing
The link takes you to the chapter on the Discrete Fourier Transform. This chapter explains the differences between all the Fourier transforms, as well as where you'd use each one, and gives pseudocode showing how to calculate the Discrete Fourier Transform.
If you seek a plain-English explanation of the DFT, and a bit of the FFT as well, instead of academic gobbledygook, then you should read this: http://blogs.zynaptiq.com/bernsee/dft-a-pied/
I couldn't have explained it better myself.