This is a follow-up to another stack overflow question, here:
3D Correspondences from fundamental matrix
Just like in that question, I am trying to get a camera matrix from a fundamental matrix, the ultimate goal being 3D reconstruction from 2D points. The answer given there is good, and correct; I just don't understand it. It says, quote, "If you have access to Hartley and Zisserman's textbook, you can check section 9.5.3 where you will find what you need." He also provides a link to source code.
Now, here's what section 9.5.3 of the book, among other things, says:
Result 9.12. A non-zero matrix F is the fundamental matrix corresponding to a pair of camera matrices P and P' if and only if P'^T F P is skew-symmetric.
That, to me, is gibberish. (I looked up skew-symmetric - it means the inverse is its negative. I have no idea how that is relevant to anything.)
Now, here is the source code given (source):
[U,S,V] = svd(F);
e = U(:,3);
P = [-vgg_contreps(e)*F e];
This is also a mystery.
So what I want to know is, how does one explain the other? Getting that code from that statement seems like black magic. How would I, or anyone, figure out that "A non-zero matrix F is the fundamental matrix corresponding to a pair of camera matrices P and P' if and only if P'^T F P is skew-symmetric" means what the code is telling you to do, which is basically: 'Take the singular value decomposition of F. Take the first matrix. Take the third column of that. Perform some weird re-arrangement of its values. That's your answer.' How would I have come up with this code on my own?
Can someone explain to me the section 9.5.3 and this code in plain English?
Aha, that "P'^T F P" is actually something I have also wondered about and could not find the answer to in the literature. However, this is what I figured out:
The 4x4 skew-symmetric matrix you are mentioning is not just any matrix. It is actually the dual Plücker Matrix of the baseline (see also https://en.wikipedia.org/wiki/Pl%C3%BCcker_matrix). In other words, it only gives you the line on which the camera centers are located, which is not useful for reconstruction tasks as such.
The condition you mention is identical to the more popularized fact that the fundamental matrix for views 1 & 0 is the negative transpose of the fundamental matrix for views 0 & 1 (I use MATLAB/Octave syntax below).
Consider first that the fundamental matrix maps a point x0 in one image to a line l1 in the other
l1=F*x0
Next, consider that the transpose of the projection matrix back-projects a line l1 in the image to a plane E in space
E=P1'*l1
(I find this beautifully simple and understated in most geometry / computer vision classes)
Now, I will use a geometric argument: Two lines are corresponding epipolar lines iff they correspond to the same epipolar plane i.e. the back-projection of either line gives the same epipolar plane. Algebraically:
E=P0'*l0
E=P1'*l1
thus (the important equation)
P0'*l0=P1'*l1
Now we are almost there. Let's assume we have a 3D point X and its two projections
x0=P0*X
x1=P1*X
and the epipolar lines
l1=F*x0
l0=-F'*x1
We can just put that into the important equation and we have for all X
P0'*-F'*P1*X=P1'*F*P0*X
and finally
P0'*-F'*P1=P1'*F*P0
As you can see, the left-hand side is the negative transpose of the right-hand side. So this matrix is a skew-symmetric 4x4 matrix.
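The derivation above can also be checked numerically. Here is a small numpy sketch (my own, not part of the original answer): build an arbitrary camera pair, form the fundamental matrix from it, and verify that P1'*F*P0 is skew-symmetric:

```python
import numpy as np

def skew(v):
    """3x3 cross-product matrix [v]_x, so that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

rng = np.random.default_rng(0)
P0 = np.hstack([np.eye(3), np.zeros((3, 1))])   # canonical camera [I | 0]
A = rng.standard_normal((3, 3))
a = rng.standard_normal(3)
P1 = np.hstack([A, a.reshape(3, 1)])            # second camera [A | a]

F = skew(a) @ A       # fundamental matrix of the pair (x1' * F * x0 = 0)
M = P1.T @ F @ P0     # the 4x4 matrix from Result 9.12

assert np.allclose(M, -M.T)   # skew-symmetric, as the derivation predicts
```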
I also published these thoughts in Section II B (towards the end of the paragraph) in the following paper. It should also explain why this matrix is a representation of the baseline.
Aichert, André, et al. "Epipolar consistency in transmission imaging." IEEE Transactions on Medical Imaging 34.11 (2015): 2205-2219.
https://www.ncbi.nlm.nih.gov/pubmed/25915956
Final note to @john ktejik: skew-symmetry means that a matrix is identical to its negative transpose (transpose, NOT inverse).
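For completeness, here is a hedged numpy translation of the MATLAB snippet from the question, under the assumption that vgg_contreps(e) builds the 3x3 cross-product matrix [e]_x. It recovers a canonical camera pair from F (Hartley & Zisserman give P = [I|0], P' = [[e']_x F | e']; the sign in front of the skew term only changes F by scale) and checks that the recovered pair reproduces F:

```python
import numpy as np

def skew(v):
    """3x3 cross-product matrix [v]_x."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def camera_from_fundamental(F):
    """P1 = [-[e]_x F | e], where e = U(:,3) is the left null vector of F
    (the epipole in the second image) -- mirroring the MATLAB snippet."""
    U, S, Vt = np.linalg.svd(F)
    e = U[:, 2]                    # left singular vector of the zero singular value
    return np.hstack([-skew(e) @ F, e.reshape(3, 1)])

# build a rank-2 F from a known camera pair to test the round trip
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
a = rng.standard_normal(3)
F = skew(a) @ A                    # F for the pair P0 = [I|0], P1 = [A|a]

P1 = camera_from_fundamental(F)
# the pair (P0 = [I|0], P1 = [B|b]) induces the fundamental matrix [b]_x B;
# here it matches F (since |e| = 1 and e' * F is essentially 0)
F_check = skew(P1[:, 3]) @ P1[:, :3]
assert np.allclose(F_check, F)
```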
If this question is more suited for the Mathematics stack exchange, please let me know.
I know the basic idea of vector images: images are represented by mathematical equations, rather than a bitmap, which makes them infinitely scalable.
Computing this seems straightforward for something like a straight line, but for a curve that is more complex, such as:
https://www.wolframalpha.com/input/?i=zoidberg-like+curve
I am wondering how would the program even begin to produce that equation as the output.
Does it break the image into little curve segments and try to approximate each one? What if a large, multi-segment part of an image could be represented efficiently using only one equation, but because the computer only "sees" one segment at a time, it doesn't realize this? Would the computer test every combination of segments?
I was just curious and wondering if anyone could provide a high-level description of the basic process.
As an example, consider an image represented by equation (1):
y = abs(x)
It could also be represented as (2):
y = -x on (-inf, 0)
y = x on [0, inf)
But you would only be able to figure out that it could be represented as (1) if you knew what the entire image looked like. If you were scanning from left to right and trying to represent the image as an equation, then you would end up with (2).
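To illustrate that scanning idea concretely (a toy sketch of my own, not how any particular tracer works): sample y = abs(x) and greedily grow straight segments, splitting whenever the next point breaks colinearity. A left-to-right scan recovers exactly the two pieces of representation (2), never the single equation (1):

```python
import numpy as np

# sample y = abs(x) on [-2, 2]
xs = np.linspace(-2.0, 2.0, 41)
ys = np.abs(xs)

# greedy segmentation: extend the current segment until a least-squares
# line no longer fits all of its points, then start a new segment
segments, start = [], 0
for i in range(2, len(xs) + 1):
    slope, intercept = np.polyfit(xs[start:i], ys[start:i], 1)
    residual = np.max(np.abs(slope * xs[start:i] + intercept - ys[start:i]))
    if residual > 1e-6:                       # the new point broke linearity
        segments.append((xs[start], xs[i - 2]))
        start = i - 2
segments.append((xs[start], xs[-1]))

# result: two straight runs, y = -x on [-2, 0] and y = x on [0, 2]
```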
While I was reading Colah's blog, I came across the GRU diagram and its equations.
In the diagram we can clearly see that z_t goes to ~h_t and not to r_t. But the equations say otherwise: isn't it supposed to be z_t*h_{t-1} and not r_t*h_{t-1}?
Please correct me if I’m wrong.
I see this is somewhat old; however, if you still haven't figured it out and care, or for any other person who ends up here: the figure and the equations are consistent.

Note that the operator (x) in the diagram (the pink circle with an X in it) is the Hadamard product, an element-wise multiplication between two tensors of the same size. In the equations this operator is written as * (it is also commonly represented by a circle with a dot at its center).

~h_t is the output of the tanh operator. The tanh operator receives a linear combination of the input at time t, x_t, and the result of the Hadamard product between r_t and h_{t-1}. Note that r_t should have already been updated by passing the linear combination of x_t and h_{t-1} through a sigmoid. I hope the rest is clear.
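To make the notation concrete, here is a minimal numpy sketch of a single GRU step (the weights are random and for illustration only; the layout follows the standard GRU equations, with * as the Hadamard product):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 4, 3
# random weight matrices, for illustration only
Wz, Wr, Wh = (rng.standard_normal((n_hid, n_in)) for _ in range(3))
Uz, Ur, Uh = (rng.standard_normal((n_hid, n_hid)) for _ in range(3))

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x_t, h_prev):
    z_t = sigmoid(Wz @ x_t + Uz @ h_prev)    # update gate
    r_t = sigmoid(Wr @ x_t + Ur @ h_prev)    # reset gate
    # the Hadamard product r_t * h_prev feeds the tanh, exactly as in the figure
    h_tilde = np.tanh(Wh @ x_t + Uh @ (r_t * h_prev))
    # z_t gates between the old state and the candidate ~h_t
    return (1.0 - z_t) * h_prev + z_t * h_tilde

h_t = gru_step(rng.standard_normal(n_in), np.zeros(n_hid))
```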
I have been trying to implement a locally-weighted logistic regression algorithm in Ruby. As far as I know, no library currently exists for this algorithm, and there is very little information available, so it's been difficult.
My main resource has been the dissertation of Dr. Kan Deng, in which he described the algorithm in what I feel is pretty light detail. My work so far on the library is here.
I've run into trouble when trying to calculate B (beta). From what I understand, B is a (1+d x 1) vector that represents the local weighting for a particular point. After that, pi (the probability of a positive output) for that point is the sigmoid function applied using the B for that point. To get B, apply the Newton-Raphson update iteratively a certain number of times, probably no more than ten.
Equation 4-4 on page 66, the Newton-Raphson algorithm itself, doesn't make sense to me. Based on my understanding of what X and W are, (x.transpose * w * x).inverse * x.transpose * w should be a (1+d x N) matrix, which doesn't match up with B, which is (1+d x 1). The only way that would work, then, is if e were a (N x 1) vector.
At the top of page 67, under the picture, though, Dr. Deng just says that e is a ratio, which doesn't make sense to me. Is e Euler's Constant, and it just so happens that that ratio is always 2.718:1, or is it something else? Either way, the explanation doesn't seem to suggest, to me, that it's a vector, which leaves me confused.
The use of pi' is also confusing to me. Equation 4-5, the derivative of the sigmoid function w.r.t. B, gives a constant multiplied by a vector, or a vector. From my understanding, though, pi' is just supposed to be a number, to be multiplied by w and form the diagonal of the weight algorithm W.
So, my two main questions here are, what is e on page 67 and is that the 1xN matrix I need, and how does pi' in equation 4-5 end up a number?
I realize that this is a difficult question to answer, so if there is a good answer then I will come back in a few days and give it a fifty point bounty. I would send an e-mail to Dr. Deng, but I haven't been able to find out what happened to him after 1997.
If anyone has any experience with this algorithm or knows of any other resources, any help would be much appreciated!
As far as I can see, this is just a version of Logistic regression in which the terms in the log-likelihood function have a multiplicative weight depending on their distance from the point you are trying to classify. I would start by getting familiar with an explanation of logistic regression, such as http://czep.net/stat/mlelr.pdf. The "e" you mention seems to be totally unconnected with Euler's constant - I think he is using e for error.
If you can call Java from Ruby, you may be able to make use of the logistic classifier in Weka described at http://weka.sourceforge.net/doc.stable/weka/classifiers/functions/Logistic.html - this says "Although original Logistic Regression does not deal with instance weights, we modify the algorithm a little bit to handle the instance weights." If nothing else, you could download it and look at its source code. If you do this, note that it is a fairly sophisticated approach - for instance, they check beforehand to see if all the points actually lie pretty much in some subspace of the input space, and project down a few dimensions if they do.
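Following the interpretation above (e is the error/residual vector y - pi, not Euler's constant), the Newton-Raphson update for logistic regression with instance weights can be sketched as below. This is a generic weighted-IRLS sketch under my own assumptions, not Dr. Deng's exact algorithm; w holds the locality weights for the query point:

```python
import numpy as np

def weighted_logistic_newton(X, y, w, iters=10):
    """Fit beta for logistic regression with per-instance weights w
    via Newton-Raphson. X is (N, 1+d) with a leading column of ones."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        pi = 1.0 / (1.0 + np.exp(-X @ beta))   # predicted probabilities
        e = y - pi                              # the (N,) residual ("error") vector
        # W has w_i * pi_i * (1 - pi_i) on its diagonal (pi' = pi*(1-pi) is a
        # number per instance, which is how it can form a diagonal matrix)
        W = np.diag(w * pi * (1.0 - pi))
        # Newton step: beta += (X'WX)^-1 X'(w * e)
        beta = beta + np.linalg.solve(X.T @ W @ X, X.T @ (w * e))
    return beta

# tiny smoke test on noisy 1-D data with uniform weights
rng = np.random.default_rng(0)
x = rng.standard_normal(200)
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-(0.5 + 2.0 * x)))).astype(float)
X = np.column_stack([np.ones_like(x), x])
beta = weighted_logistic_newton(X, y, np.ones_like(x))
```

Note how the dimensions work out: (X'WX)^-1 X'W is (1+d x N), and multiplying it by the (N x 1) residual e yields the (1+d x 1) step for B, which matches the question's analysis.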
I am trying to apply the Random Projections method to a very sparse dataset. I found papers and tutorials about the Johnson-Lindenstrauss method, but every one of them is full of equations that give me no meaningful explanation. For example, this document on Johnson-Lindenstrauss
Unfortunately, from this document I can get no idea about the implementation steps of the algorithm. It's a long shot, but is there anyone who can give me the plain-English version, or very simple pseudocode, of the algorithm? Or where can I start digging into these equations? Any suggestions?
For example, what I understand from the algorithm by reading this paper concerning Johnson-Lindenstrauss is that:
Assume we have an AxB matrix, where A is the number of samples and B is the number of dimensions, e.g. 100x5000. And I want to reduce its dimension to 500, which will produce a 100x500 matrix.
As far as I understand: first, I need to construct a 100x500 matrix and fill the entries randomly with +1 and -1 (each with 50% probability).
Edit:
Okay, I think I started to get it. So we have a matrix A which is mxn. We want to reduce it to E which is mxk.
What we need to do is construct a matrix R of dimension nxk and fill it with 0, +1 or -1, with probabilities 2/3, 1/6 and 1/6 respectively.
After constructing this R, we simply do the matrix multiplication A*R to find our reduced matrix E. But we don't need a full matrix multiplication, because wherever an element of R is 0 there is nothing to compute, so we simply skip it. If we encounter a +1, we just add the corresponding column, and if it's -1, we subtract it. So we end up using summation rather than multiplication to find E. And that is what makes this method very fast.
It turns out to be a very neat algorithm, although I feel too stupid to have come up with the idea on my own.
You have the idea right. However, as I understand random projection, the rows of your matrix R should have unit length. I believe that's approximately what normalizing by 1/sqrt(k) is for: to compensate for the fact that they're not unit vectors.
It isn't a true projection, but it's nearly one: R's rows aren't orthonormal, but within a much higher-dimensional space they very nearly are. In fact, the dot product of any two of those vectors will be pretty close to 0. This is why it is generally a good approximation of actually finding a proper basis for the projection.
The mapping from high-dimensional data A to low-dimensional data E is given in the statement of theorem 1.1 in the latter paper - it is simply a scalar multiplication followed by a matrix multiplication. The data vectors are the rows of the matrices A and E. As the author points out in section 7.1, you don't need to use a full matrix multiplication algorithm.
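The steps described above (scalar multiplication followed by a matrix multiplication with a sparse +1/0/-1 matrix) can be sketched in a few lines of numpy; the function name and the seed handling are mine:

```python
import numpy as np

def sparse_random_projection(A, k, seed=0):
    """Reduce (m, n) data A to (m, k) with a sparse random projection."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    # entries are +1, 0, -1 with probabilities 1/6, 2/3, 1/6
    R = rng.choice([1.0, 0.0, -1.0], size=(n, k), p=[1/6, 2/3, 1/6])
    # sqrt(3) restores unit variance of the sparse entries,
    # 1/sqrt(k) makes the projection approximately norm-preserving
    return np.sqrt(3.0 / k) * (A @ R)

A = np.random.default_rng(1).standard_normal((100, 5000))
E = sparse_random_projection(A, 500)   # 100x5000 -> 100x500
```

A full dense multiplication is used here for brevity; as the answer notes, in a real implementation the zero entries of R (two thirds of them) let you replace most of the work with simple additions and subtractions.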
If your dataset is sparse, then sparse random projections will not work well.
You have a few options here:
Option A:
Step 1. Apply a structured dense random projection (the so-called fast Hadamard transform is typically used). This is a special projection which is very fast to compute but otherwise has the properties of a normal dense random projection.
Step 2. Apply a sparse projection to the "densified" data (sparse random projections are useful for dense data only).
Option B:
Apply SVD to the sparse data. If the data is sparse but has some structure, SVD is better. Random projection preserves the distances between all points; SVD better preserves the distances between dense regions, which in practice is more meaningful. People also use random projections to compute the SVD of huge datasets. Random projection gives you efficiency, but not necessarily the best quality of embedding in a low dimension.
If your data has no structure, then use random projections.
Option C:
For data points for which SVD has little error, use SVD; for the rest of the points use Random Projection
Option D:
Use a random projection based on the data points themselves.
This makes it very easy to understand what is going on. It looks something like this:

create an n by k scores matrix (n = number of data points, k = new dimension)
for j from 0 to k-1 do   # generate k random projection vectors
    randomized_combination = feature vector of zeros (length = number of features)
    sample_point_ids = select a sample of point ids
    for each point_id in sample_point_ids do
        random_sign = +1/-1 with prob. 1/2
        randomized_combination += random_sign * feature_vector[point_id]   # a vector operation
    normalize the randomized_combination
    # note that a normal random projection vector would instead be:
    # randomized_combination = [+/-1, +/-1, ...] (one +/-1 per feature; if you want it
    # sparse, randomly set a fraction to 0; it is also good to normalize by length)
    # to project the data points onto this random feature, just do:
    for each point_id in dataset do
        scores[point_id, j] = dot_product(feature_vector[point_id], randomized_combination)
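A runnable Python version of that pseudocode might look like this (a sketch; the sample size, normalization, and function name are my own choices):

```python
import numpy as np

def data_based_random_projection(X, k, sample_size=10, seed=0):
    """Project (n, d) data X to (n, k) scores using random +/- combinations
    of sampled data points as the projection directions."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    scores = np.zeros((n, k))
    for j in range(k):
        ids = rng.choice(n, size=sample_size, replace=False)
        signs = rng.choice([-1.0, 1.0], size=sample_size)
        combo = signs @ X[ids]            # random signed combination of sample points
        combo /= np.linalg.norm(combo)    # normalize the projection direction
        scores[:, j] = X @ combo          # dot product with every data point
    return scores

X = np.random.default_rng(2).standard_normal((200, 50))
S = data_based_random_projection(X, k=20)
```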
If you are still looking to solve this problem, write a message here, I can give you more pseudocode.
The way to think about it is that a random projection is just a random pattern, and the dot product between a data point and the pattern (i.e. projecting the data point) gives you the overlap between them. So if two data points overlap with many of the same random patterns, those points are similar. Therefore, random projections preserve similarity while using less space, but they also add random fluctuations to the pairwise similarities. What the JL lemma tells you is that to make the fluctuations on the order of 0.1 (eps), you need about 100*log(n) dimensions.
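For a concrete version of that bound, the commonly used constant-explicit form of the JL lemma (the same form scikit-learn's johnson_lindenstrauss_min_dim implements) is:

```python
import math

def jl_min_dim(n, eps):
    """Minimum target dimension k so that a random projection preserves all
    pairwise distances among n points within a factor (1 +/- eps), w.h.p."""
    return math.ceil(4 * math.log(n) / (eps**2 / 2 - eps**3 / 3))

# eps = 0.1 gives the ~100*log(n) scaling mentioned above
k = jl_min_dim(10000, 0.1)
```

Note the 1/eps^2 scaling: halving the allowed distortion roughly quadruples the required dimension, and k does not depend on the original dimensionality at all.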
Good Luck!
RandPro is an R package that performs random projection using the Johnson-Lindenstrauss lemma.
I'm cross-posting this from math.stackexchange.com because I'm not getting any feedback and it's a time-sensitive question for me.
My question pertains to linear separability with hyperplanes in a support vector machine.
According to Wikipedia:
...formally, a support vector machine constructs a hyperplane or set of hyperplanes in a high- or infinite-dimensional space, which can be used for classification, regression, or other tasks. Intuitively, a good separation is achieved by the hyperplane that has the largest distance to the nearest training data points of any class (the so-called functional margin), since in general the larger the margin the lower the generalization error of the classifier.
The linear separation of classes by hyperplanes intuitively makes sense to me, and I think I understand linear separability for two-dimensional geometry. However, I'm implementing an SVM using a popular SVM library (libSVM), and when messing around with the numbers I fail to understand how an SVM can create a curve between classes, or enclose central points in category 1 within a circular curve when they are surrounded by points in category 2, if a hyperplane in an n-dimensional space V is a "flat" subset of dimension n - 1 (for two-dimensional space, a 1D line).
Here is what I mean:
That's not a hyperplane. That's circular. How does this work? Or are there more dimensions inside the SVM than the two input features?
This example application can be downloaded here.
Edit:
Thanks for your comprehensive answers. So the SVM can separate weird data well by using a kernel function. Would it help to linearize the data before sending it to the SVM? For example, one of my input features (a numeric value) has a turning point (e.g. 0) where it neatly fits into category 1, but above and below zero it fits into category 2. Now, because I know this, would it help classification to send the SVM the absolute value of this feature?
As mokus explained, support vector machines use a kernel function to implicitly map data into a feature space where they are linearly separable:
Different kernel functions are used for various kinds of data. Note that an extra dimension (feature) is added by the transformation in the picture, although this feature is never materialized in memory.
(Illustration from Chris Thornton, U. Sussex.)
Check out this YouTube video that illustrates an example of linearly inseparable points that become separable by a plane when mapped to a higher dimension.
I am not intimately familiar with SVMs, but from what I recall from my studies they are often used with a "kernel function" - essentially, a replacement for the standard inner product that effectively non-linearizes the space. It's loosely equivalent to applying a nonlinear transformation from your space into some "working space" where the linear classifier is applied, and then pulling the results back into your original space, where the linear subspaces the classifier works with are no longer linear.
The wikipedia article does mention this in the subsection "Non-linear classification", with a link to http://en.wikipedia.org/wiki/Kernel_trick which explains the technique more generally.
This is done by applying what is known as the kernel trick (http://en.wikipedia.org/wiki/Kernel_trick).
What basically happens is that if something is not linearly separable in the existing input space (2-D in your case), it is projected to a higher dimension where it would be separable. A kernel function (which can be non-linear) is applied to modify your feature space. All computations are then performed in this feature space (which can possibly be of infinite dimensions too).
Each point in your input is transformed using this kernel function, and all further computations are performed as if this were your original input space. Thus, your points may become separable in a higher (possibly infinite) dimension, and the linear hyperplane in those higher dimensions might not be linear in the original dimensions.
For a simple example, consider the example of XOR. If you plot Input1 on X-Axis, and Input2 on Y-Axis, then the output classes will be:
Class 0: (0,0), (1,1)
Class 1: (0,1), (1,0)
As you can observe, it's not linearly separable in 2-D. But if I take these ordered pairs into 3-D (by just moving one point into 3-D), say:
Class 0: (0,0,1), (1,1,0)
Class 1: (0,1,0), (1,0,0)
Now you can easily observe that there is a plane in 3-D to separate these two classes linearly.
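You can verify that claim with a couple of lines of numpy (the plane below, x + y + 2z = 1.5, is one I found by inspection; any plane with the two classes on opposite sides works):

```python
import numpy as np

class0 = np.array([[0, 0, 1], [1, 1, 0]])   # lifted XOR points, class 0
class1 = np.array([[0, 1, 0], [1, 0, 0]])   # lifted XOR points, class 1
w, b = np.array([1.0, 1.0, 2.0]), -1.5      # the plane x + y + 2z = 1.5

# all of class 0 lies on the positive side, all of class 1 on the negative side
assert np.all(class0 @ w + b > 0)
assert np.all(class1 @ w + b < 0)
```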
Thus if you project your inputs to a sufficiently large dimension (possibly infinite), then you'll be able to separate your classes linearly in that dimension.
One important point to notice here (and maybe I'll answer your other question too) is that you don't have to make a kernel function yourself (like I made one above). The good thing is that the kernel function automatically takes care of your input and figures out how to "linearize" it.
For the SVM example in the question, given in 2-D space, let x1, x2 be the two axes. You can take a transformation function F = x1^2 + x2^2 and transform this problem into a 1-D space problem. If you look carefully you can see that in the transformed space, you can easily separate the points linearly (thresholds on the F axis). Here the transformed space was [F] (one-dimensional). In most cases, you would be increasing the dimensionality to get linearly separable hyperplanes.
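Here is a tiny numpy demonstration of exactly that transformation (the radii and point counts are made up): points inside a circle and points in a surrounding ring are not linearly separable in (x1, x2), but a single threshold on F = x1^2 + x2^2 separates them:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
r = np.concatenate([rng.uniform(0.0, 1.0, 100),    # class 1: inner disk
                    rng.uniform(2.0, 3.0, 100)])   # class 2: outer ring
x1, x2 = r * np.cos(theta), r * np.sin(theta)

F = x1**2 + x2**2    # the 1-D transformed feature (equal to r^2)
# in F-space the classes separate with a single threshold, e.g. F = 2
assert F[:100].max() < 2.0 < F[100:].min()
```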
SVM clustering
HTH
My answer to a previous question might shed some light on what is happening in this case. The example I give is very contrived and not really what happens in an SVM, but it should give you come intuition.