How to use Halide to implement a 1D convolution operation

Related

Cheaply compute QR factorization in Julia?

In Julia, I would like to compute the QR factorization of a matrix inside a program. However, calling the qr() function is relatively expensive compared to the cost of the rest of my program. Is there any way to compute a matrix's QR factorization more cheaply than simply calling qr()? I also want to avoid or minimize storing and allocating arrays whenever possible.

'Squaring a polynomial' versus 'Multiplying the polynomial with itself using FFT'

The fastest known algorithms for polynomial multiplication are based on the Fast Fourier Transform (FFT).
In the special case of multiplying a polynomial with itself, I am interested in knowing whether any squaring algorithm performs better than the FFT. I couldn't find any resource that deals with this aspect.
The Number Theoretic Transform (NTT) is faster than the FFT.
Why? Because it uses only integer modular arithmetic over a suitable ring instead of floating-point complex numbers, while the relevant properties stay the same (the NTT is a form of the DFT). So if your polynomial coefficients are integers, use the NTT, which is faster; if they are floating-point, you need the FFT.
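As a concrete illustration of the modular-arithmetic point, here is a minimal sketch of an iterative NTT in Python. The modulus 998244353 (with primitive root 3) is a common illustrative choice, and the function name ntt is made up for this example:

```python
# Minimal iterative NTT sketch over Z/p: everything is exact integer
# arithmetic, no floating-point rounding. MOD = 119 * 2^23 + 1 is prime and
# 3 is a primitive root, so transform lengths up to 2^23 are supported.
MOD = 998244353
ROOT = 3

def ntt(a, invert=False):
    """Transform a list whose length is a power of two (works on a copy)."""
    a = a[:]
    n = len(a)
    # Bit-reversal permutation.
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    # Butterfly passes, doubling the transform length each time.
    length = 2
    while length <= n:
        w = pow(ROOT, (MOD - 1) // length, MOD)
        if invert:
            w = pow(w, MOD - 2, MOD)          # modular inverse of the root
        for start in range(0, n, length):
            wn = 1
            for k in range(start, start + length // 2):
                u = a[k]
                v = a[k + length // 2] * wn % MOD
                a[k] = (u + v) % MOD
                a[k + length // 2] = (u - v) % MOD
                wn = wn * w % MOD
        length <<= 1
    if invert:
        inv_n = pow(n, MOD - 2, MOD)          # divide by n on the way back
        a = [x * inv_n % MOD for x in a]
    return a

# Squaring 1 + 2x + 3x^2 exactly: one forward transform, one inverse.
p = [1, 2, 3]
size = 8                                      # next power of two >= 2*len(p) - 1
A = ntt(p + [0] * (size - len(p)))
print(ntt([x * x % MOD for x in A], invert=True)[:5])   # [1, 4, 10, 12, 9]
```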
FFT-based squaring is faster than FFT-based multiplication of a polynomial by itself.
Why? Because squaring needs only one forward NTT/FFT and one inverse NTT/iFFT, while a general product needs two forward NTTs/FFTs and one inverse NTT/iFFT, so you save one transformation; the rest is the same.
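To make the transform count visible, here is a small sketch using NumPy's floating-point FFT (an NTT would slot in the same way and keep everything exact); the function names are invented for the example:

```python
# Squaring via FFT: 1 forward transform + 1 inverse, versus 2 forward + 1
# inverse for a general product. Coefficients are lowest degree first.
import numpy as np

def fft_square(p):
    n = 2 * len(p) - 1                  # number of coefficients in p^2
    size = 1 << (n - 1).bit_length()    # round up to a power of two
    P = np.fft.rfft(p, size)            # the single forward transform
    return np.rint(np.fft.irfft(P * P, size)[:n]).astype(int)

def fft_multiply(a, b):
    n = len(a) + len(b) - 1
    size = 1 << (n - 1).bit_length()
    A = np.fft.rfft(a, size)            # first forward transform
    B = np.fft.rfft(b, size)            # second forward transform
    return np.rint(np.fft.irfft(A * B, size)[:n]).astype(int)

p = [1, 2, 3]                           # 1 + 2x + 3x^2
print(fft_square(p))                    # [ 1  4 10 12  9]
print(fft_multiply(p, p))               # same result, one transform more
```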
For small enough polynomials, plain schoolbook squaring without the FFT is fastest, because the overhead of the transforms dominates at small sizes.
For more info see:
Fast bignum square computation
It's not the same problem, but it is very similar, since bignum data words play the same role as your polynomial coefficients, so most of it applies to your problem too.

dgemm or dgemv for Matrix Multiplication?

I know DGEMV is for matrix-vector products, but which is more efficient: using DGEMM directly for the matrix multiplication, or using DGEMV to perform it by multiplying matrix A with each individual column of matrix B?
If you make repeated calls to DGEMV, you will not benefit from cache tiling and re-use, which are the biggest advantages good DGEMM implementations have. DGEMM is vastly more efficient than multiple calls to DGEMV.
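A rough sketch of that comparison via SciPy's BLAS wrappers (the timing numbers will vary with your BLAS build, and scipy is an assumed dependency here):

```python
# One DGEMM call versus a column-by-column DGEMV loop: the loop re-reads A
# from memory for every column and gets none of DGEMM's cache blocking.
import time
import numpy as np
from scipy.linalg import blas

n = 1000
A = np.asfortranarray(np.random.rand(n, n))   # BLAS expects column-major
B = np.asfortranarray(np.random.rand(n, n))

t0 = time.perf_counter()
C_gemm = blas.dgemm(1.0, A, B)                # single tiled, blocked call
t1 = time.perf_counter()

C_gemv = np.empty((n, n), order='F')
for j in range(n):
    C_gemv[:, j] = blas.dgemv(1.0, A, B[:, j])   # one matrix-vector product per column
t2 = time.perf_counter()

print(f"dgemm: {t1 - t0:.3f} s, dgemv loop: {t2 - t1:.3f} s")
print("max difference:", np.abs(C_gemm - C_gemv).max())
```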

Is Fast Fourier transform algorithm appropriate for image gradient computation?

I have a matrix of size m×n and a filter [-1 0 1] with which I need to perform convolution. I'm able to do this in O(n^2) steps, but on further googling the fast Fourier transform keeps popping up everywhere. I would like to know whether the FFT is appropriate for this problem. The matrix contains random integers only, but would it make a difference if the values were floating-point? Is the FFT meant for a problem like this?
If your filter has only two nonzero elements, computing the convolution directly from the definition takes only O(n*m) steps, which is the size of your data. The FFT won't help you in that case: a 2D FFT would take something like O(n*m*(log n + log m)).
To sum up: when you have a simple, localized filter, the best way to perform the convolution is to compute the sum directly, as sketched below. When you need to compute convolutions or correlations with bigger kernels (think correlation with another image) or perform more complex operations, the FFT can help you.
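For example, a minimal NumPy sketch of the direct computation for the [-1 0 1] filter (written here as the central difference; flipping the kernel for true convolution only changes the sign):

```python
# Horizontal gradient with the [-1 0 1] filter: each output pixel touches
# only two inputs, so the whole pass is O(n*m) with no transforms at all.
import numpy as np

def gradient_x(img):
    padded = np.pad(img, ((0, 0), (1, 1)))    # zero-pad the left/right borders
    return padded[:, 2:] - padded[:, :-2]     # right neighbor minus left neighbor

img = np.arange(12, dtype=float).reshape(3, 4)
print(gradient_x(img))
```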

Parallel multiplication of large integers

What is the best algorithm to perform parallel multiplication of large integers?
I think your best choice is FFT-based multiplication. You can use parallel algorithms for computing the FFT and its inverse, and the element-wise additions can also be parallelized.
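Here is a serial sketch of the FFT approach for nonnegative integers (digit arrays as polynomial coefficients, then carry propagation); a parallel version would distribute the transforms and the carry pass, and a production implementation would use an NTT or careful error bounds instead of this float FFT:

```python
# Big-integer product via FFT: treat base-10 digits as polynomial
# coefficients, convolve with one product in the frequency domain, then
# propagate carries. Float rounding limits this to moderate sizes.
import numpy as np

def fft_bigmul(x, y, base=10):
    a = [int(d) for d in str(x)[::-1]]        # least significant digit first
    b = [int(d) for d in str(y)[::-1]]
    n = len(a) + len(b) - 1
    size = 1 << (n - 1).bit_length()
    conv = np.rint(np.fft.irfft(np.fft.rfft(a, size) * np.fft.rfft(b, size), size))
    digits, carry = [], 0
    for c in conv[:n]:                        # carry pass (serial in this sketch)
        carry, d = divmod(int(c) + carry, base)
        digits.append(d)
    while carry:
        carry, d = divmod(carry, base)
        digits.append(d)
    return int(''.join(map(str, digits[::-1])))

x, y = 12345678901234567890, 98765432109876543210
assert fft_bigmul(x, y) == x * y              # matches Python's exact bignum product
```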
