I want to ask a few questions.
URL of the video:
https://www.youtube.com/watch?v=9SwuRRe-2Jk&lc=UgyiumSTV11t3SQGNU94AaABAg
1. Why does the code
double[] x = new double[] {1, -2, 3, 4, 5, -6, 7, 8};
put a minus sign only on -2 and -6, while the other numbers don't have one?
2. I don't understand the results shown in the video at this URL. Can you explain them?
Thank you.
Why does the code double[] x = new double[] {1, -2, 3, 4, 5, -6, 7, 8}; put a minus sign only on -2 and -6 while the other numbers don't have one?
This is only a test vector, which would be replaced by whatever signal you want to transform using the FFT. Such signals can contain all sorts of values, so a test vector that includes somewhat arbitrary positive and negative numbers is reasonable.
I don't know much about the results in this URL of the video. Can you explain them?
A full explanation of the theory behind the Discrete Fourier Transform (and the efficient Fast Fourier Transform algorithm to compute it) is beyond the scope of this post. However, from the test performed there are a few things that could and should easily be noticed:
Computing the inverse transform on the FFT of the input gives you back the original input
Computing the FFT of a real signal gives you a signal with Hermitian symmetry: the first line is purely real, as is line N/2+1 (in this case, where N=4, the third line is purely real), and the remaining lines pair up in complex-conjugate symmetry (in this case, where N=4, lines 2 and 4)
Computing the FFT of a complex signal does not give you a signal with Hermitian symmetry
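Both observations are easy to reproduce with NumPy's FFT (a quick sketch, not the Java code from the video):

```python
import numpy as np

x = np.array([1, -2, 3, 4, 5, -6, 7, 8], dtype=float)   # the test vector
N = len(x)
X = np.fft.fft(x)

# (1) the inverse transform recovers the original input
roundtrip = np.fft.ifft(X).real

# (2) Hermitian symmetry of a real signal's FFT:
#     X[0] and X[N/2] are purely real, and X[k] == conj(X[N-k])
```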
I want to sample K items from a stream of N items that I see one at a time. I don't know how big N is until the last item turns up, and I want the space consumption to depend on K rather than N.
So far I've described a reservoir sampling problem. The major ask though is that I'd like the samples to be 'evenly spaced', or at least more evenly spaced than reservoir sampling manages. This is vague; one formalization would be that the sample indices are a low-discrepancy sequence, but I'm not particularly tied to that.
I'd also like the process to be random and every possible sample to have a non-zero probability of appearing, but I'm not particularly tied to this either.
My intuition is that this is a feasible problem, and the algorithm I imagine preferentially drops samples from the 'highest density' part of the reservoir in order to make space for samples from the incoming stream. It also seems like a common enough problem that someone should have written a paper on it, but Googling combinations of 'evenly spaced', 'reservoir', 'quasirandom', and 'sampling' hasn't gotten me anywhere.
edit #1: An example might help.
Example
Suppose K=3, and I get items 0, 1, 2, 3, 4, 5, ....
After 3 items, the sample would be [0, 1, 2], with spaces of {1}
After 6 items, I'd like to most frequently get [0, 2, 4] with its spaces of {2}, but commonly getting samples like [0, 3, 5] or [0, 2, 5] with spaces of {2, 3} would be good too.
After 9 items, I'd like to most frequently get [0, 4, 8] with its spaces of {4}, but commonly getting samples like [0, 4, 7] with spaces of {4, 3} would be good too.
edit #2: I've learnt a lesson here about providing lots of context when requesting answers. David and Matt's answers are promising, but in case anyone sees this and has a perfect solution, here's some more information:
Context
I have hundreds of low-res videos streaming through a GPU. Each stream is up to 10,000 frames long, and - depending on application - I want to sample 10 to 1000 frames from each. Once a stream is finished and I've got a sample, it's used to train a machine learning algorithm, then thrown away. Another stream is started in its place. The GPU's memory is 10 gigabytes, and a 'good' set of reservoirs occupies a few gigabytes in the current application and plausibly close to the entire memory in future applications.
If space isn't at a premium, I'd oversample using the uniform random reservoir algorithm by some constant factor (e.g., if you need k items, sample 10k) and remember the index that each sampled item appeared at. At the end, use dynamic programming to choose k indexes to maximize (e.g.) the sum of the logs of the gaps between consecutive chosen indexes.
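The dynamic-programming step described above might be sketched like this (`choose_spaced` is a hypothetical name; it picks k of the sorted sample indices maximizing the sum of log-gaps, in O(k·n^2)):

```python
import math

def choose_spaced(indices, k):
    """From sorted, distinct sample indices, keep k that maximize
    the sum of logs of gaps between consecutive kept indices."""
    n = len(indices)
    NEG = float('-inf')
    # best[j][i]: best score using j+1 chosen indices, last one at position i
    best = [[NEG] * n for _ in range(k)]
    back = [[-1] * n for _ in range(k)]
    for i in range(n):
        best[0][i] = 0.0           # a single chosen index has no gap yet
    for j in range(1, k):
        for i in range(n):
            for p in range(i):
                if best[j - 1][p] == NEG:
                    continue
                s = best[j - 1][p] + math.log(indices[i] - indices[p])
                if s > best[j][i]:
                    best[j][i] = s
                    back[j][i] = p
    i = max(range(n), key=lambda t: best[k - 1][t])
    chosen = []
    for j in range(k - 1, -1, -1):   # walk the back-pointers
        chosen.append(indices[i])
        i = back[j][i]
    return chosen[::-1]
```

For example, from oversampled indices [0, 1, 2, 50, 98, 99] with k=3, the most even choice is [0, 50, 99].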
Here's an algorithm that doesn't require much extra memory. Hopefully it meets your quality requirements.
The high-level idea is to divide the input into k segments and choose one element uniformly at random from each segment. Given the memory constraint, we can't make the segments as even as we would like, but they'll be within a factor of two.
The simple version of this algorithm (which uses 2k reservoir slots and may return a sample of any size between k and 2k) starts by reading the first k elements, then proceeds in rounds. In round r (counting from zero), we read k·2^r elements, using the standard reservoir algorithm to choose one random sample from each segment of length 2^r. At the end of each round, we append these samples to the existing reservoir and do the following compression step. For each pair of consecutive elements, choose one uniformly at random to retain and discard the other.
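A minimal Python sketch of this simple (2k-slot) version; `spaced_reservoir` is a hypothetical name, and it returns between k and 2k samples:

```python
import random

def spaced_reservoir(stream, k):
    """One uniform sample per segment, doubling the segment length
    each round and compressing consecutive pairs at random."""
    it = iter(stream)
    reservoir = []
    for _ in range(k):                     # the first k elements
        try:
            reservoir.append(next(it))
        except StopIteration:
            return reservoir
    seg_len = 1                            # 2^r in round r
    while True:
        new, done = [], False
        for _ in range(k):                 # k segments per round
            sample, n = None, 0
            for _ in range(seg_len):
                try:
                    x = next(it)
                except StopIteration:
                    done = True
                    break
                n += 1
                if random.randrange(n) == 0:   # standard reservoir step
                    sample = x
            if sample is not None:
                new.append(sample)
            if done:
                break
        reservoir.extend(new)
        if done:
            return reservoir
        # compression: keep one of each consecutive pair at random
        reservoir = [random.choice(pair)
                     for pair in zip(reservoir[0::2], reservoir[1::2])]
        seg_len *= 2
```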
The complicated version of this algorithm uses k slots and returns a sample of size k by interleaving the round sampling step with compression. Rather than write a formal description, I'll demonstrate it, since I think that will be easier to understand.
Let k = 8. We pick up after 32 elements have been read. I use the notation [a-b] to mean a random element whose index is between a and b inclusive. The reservoir looks like this:
[0-3] [4-7] [8-11] [12-15] [16-19] [20-23] [24-27] [28-31]
Before we process the next element (32), we have to make room. This means merging [0-3] and [4-7] into [0-7].
[0-7] [32] [8-11] [12-15] [16-19] [20-23] [24-27] [28-31]
We merge the next few elements into [32].
[0-7] [32-39] [8-11] [12-15] [16-19] [20-23] [24-27] [28-31]
Element 40 requires another merge, this time of [16-19] and [20-23]. In general, we do merges in a low-discrepancy order.
[0-7] [32-39] [8-11] [12-15] [16-23] [40] [24-27] [28-31]
Keep going.
[0-7] [32-39] [8-11] [12-15] [16-23] [40-47] [24-27] [28-31]
At the end of the round, the reservoir looks like this.
[0-7] [32-39] [8-15] [48-55] [16-23] [40-47] [24-31] [56-63]
We use standard techniques from FFT to undo the butterfly permutation of the new samples and move them to the end.
[0-7] [8-15] [16-23] [24-31] [32-39] [40-47] [48-55] [56-63]
Then we start the next round.
Perhaps the simplest way to do reservoir sampling is to associate a random score with each sample, and then use a heap to remember the k samples with the highest scores.
This corresponds to applying a threshold operation to white noise, where the threshold value is chosen to admit the correct number of samples. Every sample has the same chance of being included in the output set, exactly as if k samples were selected uniformly.
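A minimal sketch of this score-and-heap formulation, with independent (white-noise) scores:

```python
import heapq
import random

def reservoir_by_scores(stream, k):
    """Keep the k items with the highest random scores.  Every item
    gets an independent score, so every k-subset is equally likely."""
    heap = []                      # min-heap of (score, item)
    for item in stream:
        score = random.random()    # white noise: independent scores
        if len(heap) < k:
            heapq.heappush(heap, (score, item))
        elif score > heap[0][0]:
            heapq.heapreplace(heap, (score, item))
    return [item for _, item in heap]
```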
If you sample blue noise instead of white noise to produce your scores, however, then applying a threshold operation will produce a low-discrepancy sequence and the samples in your output set will be more evenly spaced. This effect occurs because, while white noise samples are all independent, blue noise samples are temporally anti-correlated.
This technique is used to create pleasing halftone patterns (google Blue Noise Mask).
Theoretically, it works for any final sampling ratio, but realistically it's limited by numeric precision. I think it has a good chance of working OK for your range of 1-100, but I'd be more comfortable with 1-20.
There are many ways to generate blue noise, but probably your best choices are to apply a high-pass filter to white noise or to construct an approximation directly from 1D Perlin noise.
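As a rough illustration of the high-pass-filter route, one could subtract a moving average from white noise and then threshold the scores (a crude sketch, not a tuned blue-noise generator):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 1000, 100
white = rng.standard_normal(n)

# crude high-pass filter: subtract a local moving average
window = 9
smooth = np.convolve(white, np.ones(window) / window, mode='same')
blue = white - smooth

# threshold: keep the k indices with the highest scores
keep = np.sort(np.argsort(blue)[-k:])
```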
I was reading Neural Network with Few Multiplications and I'm having trouble understanding how Binary or Ternary Connect eliminate the need for multiplication.
They explain that by stochastically sampling the weights from [-1, 0, 1], we eliminate the need to multiply and Wx can be calculated using only sign changes. However, even with weights strictly -1, 0, and 1, how can I change the signs of x without multiplication?
E.g., W = [0, 1, -1] and x = [0.3, 0.2, 0.4]. Wouldn't I still need to multiply W and x to get [0, 0.2, -0.4]? Or is there some other way to change the sign more efficiently than multiplication?
Yes. All the general-purpose processors I know of since the "early days" (say, 1970) have a machine operation to take the magnitude of one number, the sign of another, and return the result. The data transfer happens in parallel: the arithmetic part of the operation is a single machine cycle.
Many high-level languages have this capability as a built-in function. It often comes under a name such as "copy_sign".
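For weights restricted to {-1, 0, 1}, the elementwise product reduces to copying, zeroing, or negating; a sketch (`ternary_apply` is an illustrative name, not from the paper):

```python
def ternary_apply(W, x):
    """Elementwise W*x for weights in {-1, 0, 1}: copy, zero, or
    negate each value.  Negation is a sign-bit flip, not a multiply."""
    out = []
    for w, xi in zip(W, x):
        if w == 0:
            out.append(0.0)
        elif w > 0:
            out.append(xi)
        else:
            out.append(-xi)
    return out

# the questioner's example: W = [0, 1, -1], x = [0.3, 0.2, 0.4]
```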
I read now tons of different explanations of the gaussian blur and I am really confused.
I roughly understand how the gaussian blur works.
http://en.wikipedia.org/wiki/Gaussian_blur
I understood that we choose 3*sigma as the maximum size for our mask because beyond that the values get really small.
But my three questions are:
How do I create a gaussian mask with the sigma only?
If I understood it correctly, the mask gives me the weights. Then I place the mask on the top-left pixel, multiply the weights with each of the pixel values under the mask, and move the mask to the next pixel. I do this for all pixels. Is this correct?
I also know that 1D masks are faster, so I create a mask for x and a mask for y. Let's say my mask looks like this (3x3):
1 2 1
2 4 2
1 2 1
How would my x and y mask look like?
1- A solution to create a gaussian mask is to setup an N by N matrix, with N=3*sigma (or less if you want a coarser solution), and fill each entry (i,j) with exp(-((i-N/2)^2 + (j-N/2)^2)/(2*sigma^2)). As a comment mentioned, taking N=3*sigma just means that you truncate your gaussian at a "sufficiently small" threshold.
2- Yes, you understood correctly. A small detail is that you'll need to normalize by the sum of your weights (i.e., divide the result of what you said by the sum of all the elements of your matrix). The other option is to build your matrix already normalized, so that you don't need to perform this normalization at the end (the normalized Gaussian formula becomes exp(-((i-N/2)^2 + (j-N/2)^2)/(2*sigma^2))/(2*pi*sigma^2))
3- In your specific case, the 1D version is [1 2 1] (i.e., both your x and y masks), since you can obtain the matrix you gave from the multiplication transpose([1 2 1]) * [1 2 1]. In general, you can directly build these 1D Gaussians using the 1D Gaussian formula, which is similar to the one above: exp(-((i-N/2)^2)/(2*sigma^2)) (or the normalized version exp(-((i-N/2)^2)/(2*sigma^2)) / (sigma*sqrt(2*pi)))
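Putting points 1-3 together in code might look like this (a NumPy sketch; the function names are hypothetical, and the odd mask size of about 2*(3*sigma)+1 follows the truncation convention above):

```python
import numpy as np

def gaussian_mask_1d(sigma, N=None):
    if N is None:
        N = 2 * int(3 * sigma) + 1        # truncate at ~3*sigma, odd size
    c = (N - 1) / 2
    i = np.arange(N)
    g = np.exp(-((i - c) ** 2) / (2 * sigma ** 2))
    return g / g.sum()                    # normalize: weights sum to 1

def gaussian_mask_2d(sigma, N=None):
    g = gaussian_mask_1d(sigma, N)
    return np.outer(g, g)                 # separability: 2D = outer(1D, 1D)
```

The outer-product construction is exactly the separability used in point 3: convolving with the 2D mask equals convolving with the 1D mask along x and then along y.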
I have lots of large (around 5000 x 5000) matrices that I need to invert in Matlab. I actually need the inverse, so I can't use mldivide instead, which is a lot faster for solving Ax=b for just one b.
My matrices are coming from a problem that means they have some nice properties. First off, their determinant is 1 so they're definitely invertible. They aren't diagonalizable, though, or I would try to diagonalize them, invert them, and then put them back. Their entries are all real numbers (actually rational).
I'm using Matlab for getting these matrices and for this stuff I need to do with their inverses, so I would prefer a way to speed Matlab up. But if there is another language I can use that'll be faster, then please let me know. I don't know a lot of other languages (a little bit of C and a little bit of Java), so if it's really complicated in some other language, then I might not be able to use it. Please go ahead and suggest it, though, in case.
I actually need the inverse, so I can't use mldivide instead,...
That's not true, because you can still use mldivide to get the inverse. Note that A^(-1) = A^(-1) * I, i.e., the inverse is the solution X of the system A*X = I. In MATLAB, this is equivalent to
invA = A\speye(size(A));
On my machine, this takes about 10.5 seconds for a 5000x5000 matrix. Note that MATLAB does have an inv function to compute the inverse of a matrix. Although this will take about the same amount of time, it is less efficient in terms of numerical accuracy (more info in the link).
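The same idea works outside MATLAB; a NumPy sketch (with a random stand-in matrix, since the asker's matrices aren't available): solve A*X = I rather than forming the inverse explicitly.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))      # stand-in for the real matrices

# analogue of invA = A\speye(size(A)): solve A*X = I
invA = np.linalg.solve(A, np.eye(A.shape[0]))

# sanity check that the result behaves like the inverse
residual = np.linalg.norm(A @ invA - np.eye(A.shape[0]))
```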
First off, their determinant is 1 so they're definitely invertible
Rather than det(A) = 1, it is the condition number of your matrix that dictates how accurate or stable the inverse will be. Note that det(A) is the product of the eigenvalues, λ_1 · λ_2 · … · λ_n. So just setting λ_1 = M, λ_n = 1/M, and λ_i = 1 for all other i will give you det(A) = 1. However, as M → ∞, cond(A) = M^2 → ∞ and λ_n → 0, meaning your matrix is approaching singularity and there will be large numerical errors in computing the inverse.
My matrices are coming from a problem that means they have some nice properties.
Of course, there are other more efficient algorithms that can be employed if your matrix is sparse or has other favorable properties. But without any additional info on your specific problem, there is nothing more that can be said.
I would prefer a way to speed Matlab up
MATLAB uses Gaussian elimination to compute the inverse of a general matrix (full rank, non-sparse, without any special properties) using mldivide, and this is Θ(n^3), where n is the size of the matrix. So, in your case, n = 5000 and there are about 1.25 × 10^11 floating-point operations. So on a reasonable machine with about 10 Gflops of computational power, you're going to require at least 12.5 seconds to compute the inverse, and there is no way out of this unless you exploit the "special properties" (if they're exploitable)
Inverting an arbitrary 5000 x 5000 matrix is not computationally easy no matter what language you are using. I would recommend looking into approximations. If your matrices are low rank, you might want to try a low-rank approximation M = USV'
Here are some more ideas from math-overflow:
https://mathoverflow.net/search?q=matrix+inversion+approximation
First suppose the eigenvalues are all 1. Let A be the Jordan canonical form of your matrix. Then you can compute A^{-1} using only matrix multiplication and addition by
A^{-1} = I + (I-A) + (I-A)^2 + ... + (I-A)^k
where k < dim(A). Why does this work? Because generating functions are awesome. Recall the expansion
(1-x)^{-1} = 1/(1-x) = 1 + x + x^2 + ...
This means that we can invert (1-x) using an infinite sum. You want to invert a matrix A, so you want to take
A = I - X
Solving for X gives X = I-A. Therefore by substitution, we have
A^{-1} = (I - (I-A))^{-1} = I + (I-A) + (I-A)^2 + ...
Here I've just used the identity matrix I in place of the number 1. Now we have the problem of convergence to deal with, but this isn't actually a problem. By the assumption that A is in Jordan form and has all eigenvalues equal to 1, we know that A is upper triangular with all 1s on the diagonal. Therefore I-A is upper triangular with all 0s on the diagonal. Therefore all eigenvalues of I-A are 0, so its characteristic polynomial is x^dim(A) and its minimal polynomial is x^{k+1} for some k < dim(A). Since a matrix satisfies its minimal (and characteristic) polynomial, this means that (I-A)^{k+1} = 0. Therefore the above series is finite, with the largest nonzero term being (I-A)^k. So it converges.
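A small NumPy sketch of this series for an upper-triangular matrix with unit diagonal (so I-A is nilpotent and the sum terminates):

```python
import numpy as np

n = 6
rng = np.random.default_rng(1)
# upper triangular with unit diagonal: all eigenvalues equal 1
A = np.eye(n) + np.triu(rng.standard_normal((n, n)), k=1)

Nmat = np.eye(n) - A         # strictly upper triangular, so nilpotent
invA = np.eye(n)
term = np.eye(n)
for _ in range(n - 1):       # (I-A)^n = 0, so the series terminates
    term = term @ Nmat
    invA = invA + term

residual = np.linalg.norm(A @ invA - np.eye(n))
```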
Now, for the general case, put your matrix into Jordan form, so that you have a block triangular matrix, e.g.:
A 0 0
0 B 0
0 0 C
Where each block has a single value along the diagonal. If that value is a for A, then use the above trick to invert 1/a * A, and then multiply the a back through. Since the full matrix is block triangular the inverse will be
A^{-1} 0 0
0 B^{-1} 0
0 0 C^{-1}
There is nothing special about having three blocks, so this works no matter how many you have.
Note that this trick works whenever you have a matrix in Jordan form. The computation of the inverse in this case will be very fast in Matlab because it only involves matrix multiplication, and you can even use tricks to speed that up since you only need powers of a single matrix. This may not help you, though, if it's really costly to get the matrix into Jordan form.
I am trying to do some image processing and I would like to apply the LoG (Laplacian of Gaussian) kernel. I know the formula, which is:
But I didn't understand how to obtain the kernel matrix from this formula. From what I have read, I have an n x n matrix and I apply this formula to every cell in that matrix, but what should the starting values within that matrix be in the first place?
Also, I have the same question about the Laplacian filter. I know the formula, which is:
and also, from what I have read, the 3 x 3 filter should be the matrix:
x = [1 1 1; 1 -4 1; 1 1 1]
but can you please tell me how to apply the formula in order to obtain the matrix, or at least point me to a tutorial on how to do this?
Basically, we are just going from continuous space to discrete space. The first derivative in continuous time (space) is analogous to the first difference in discrete time (space). To compute the first difference of a discrete-time signal, you convolve [1 -1] over the signal. To compute the second difference, you convolve a signal with [1 -2 1] (which is [1 -1] convolved with itself, or equivalently, convolving the signal with [1 -1] twice).
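These one-dimensional differences are easy to check numerically (a NumPy sketch; I have verified the outputs below):

```python
import numpy as np

x = np.array([0.0, 1.0, 4.0, 9.0, 16.0])      # x[n] = n^2

first_diff = np.convolve(x, [1, -1], mode='valid')      # -> [1, 3, 5, 7]
second_diff = np.convolve(x, [1, -2, 1], mode='valid')  # -> [2, 2, 2]

# [1, -2, 1] really is [1, -1] convolved with itself
kernel = np.convolve([1, -1], [1, -1])                  # -> [1, -2, 1]
```

As expected for x[n] = n^2, the first differences are the odd numbers and the second difference is the constant 2.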
To calculate the second difference in two dimensions, you convolve the input image with the matrix you mentioned in your question. That means that you take the 3-by-3 mask (i.e., the matrix you mentioned), multiply all nine numbers with nine pixels in the image, and sum the products to get one output pixel. Then you shift the mask to the right and do it again. Each shift produces one output pixel. You do that across the entire image.
To get the mask for a Gaussian filter, just sample the two-dimensional Gaussian function for any arbitrary sigma.
This may help: convolution matrix, Gaussian filter