Projective transformation fitting - algorithm

Given a set of points in 3D ( X = (x1, x2, x3), Y = (y1, y2, y3) ), how can I fit a transformation from X to Y?
As far as I know, this is called a projective transformation.
Here is an example of X and Y (figure omitted):
The blue and red lines in X are parallel, but they are not parallel in Y.

Projective transformations in 3D have an associated 4x4 matrix (up to multiplication by a nonzero constant). You can find the matrix with least-squares fitting.

Well, I found some useful information:
This transformation is non-linear, and it is not possible to represent a non-linear transformation with a matrix. There are tricks such as homogeneous coordinates, but they do not make all non-linear transformations representable by matrices.
However, approximating a non-linear function by a linear one is possible.

So, the task is to find best fitting linear transformation, right?
There is a simple solution using linear regression.
Say the transformation matrix is named A and has dimensions 3x3. And say you have N vectors (points) in 3D before and after the transformation - so you have matrices X and Y of 3 rows and N columns. Then the transformation is:
Y = A X + B
where B is a vector of length 3 and specifies the shift. You can rewrite the matrix multiplication using indices:
y[i,j] = sum(k=1..3)(a[i,k] * x[k,j]) + b[i]
for i = 1..3 and j = 1 .. N. So, you have 12 unknown variables (a, b), and 3 * N equations. For N >= 4, you simply find the best solution using linear regression.
For example, in R it is very easy:
# input data
X = matrix(c(c(0, 0, 0), c(1, 0, 0), c(0, 1, 0), c(0, 1, 1)), nrow = 3)
Y = matrix(c(c(1, 0, 1), c(2, 0, 1), c(1, 1, 1), c(1, 1, 2)), nrow = 3)
# expected transformation: A is identity matrix, b is [1, 0, 1]
N = dim(Y)[2]
# transform data for regression
a1 = rbind(t(X), matrix(rep(0, 3*2*N), ncol = 3))
a2 = rbind(matrix(rep(0, 3*N), ncol = 3), t(X), matrix(rep(0, 3*N), ncol = 3))
a3 = rbind(matrix(rep(0, 3*2*N), ncol = 3), t(X))
b1 = rep(1:0, c(N, 2*N))
b2 = rep(c(0, 1, 0), each = N)
b3 = rep(0:1, c(2*N, N))
y = as.vector(t(Y))
# do the regression
summary(lm(y ~ 0 + a1 + a2 + a3 + b1 + b2 + b3))
And the output is:
[...]
Coefficients:
Estimate Std. Error t value Pr(>|t|)
a11 1.000e+00 NA NA NA
a12 -2.220e-16 NA NA NA
a13 -3.612e-32 NA NA NA
a21 7.850e-17 NA NA NA
a22 1.000e+00 NA NA NA
a23 -1.743e-32 NA NA NA
a31 0.000e+00 NA NA NA
a32 0.000e+00 NA NA NA
a33 1.000e+00 NA NA NA
b1 1.000e+00 NA NA NA
b2 -7.850e-17 NA NA NA
b3 1.000e+00 NA NA NA
Residual standard error: NaN on 0 degrees of freedom
Multiple R-squared: 1, Adjusted R-squared: NaN
F-statistic: NaN on 12 and 0 DF, p-value: NA
as expected. (With N = 4, the 3N = 12 equations exactly determine the 12 unknowns, which is why the residual degrees of freedom are 0 and the standard errors are NA.)
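For reference, the same fit can be sketched in Python with numpy. The homogeneous-coordinate packaging below (a row of ones appended to X, so the shift b becomes the last column of an augmented matrix [A | b]) is my own reformulation of the block regression above; the data are the example points.

```python
import numpy as np

X = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 1, 1]], dtype=float).T  # 3 x N input points
Y = np.array([[1, 0, 1], [2, 0, 1], [1, 1, 1], [1, 1, 2]], dtype=float).T  # 3 x N transformed points

# Append a row of ones so the shift b becomes the last column of [A | b]
Xh = np.vstack([X, np.ones(X.shape[1])])         # 4 x N
M, *_ = np.linalg.lstsq(Xh.T, Y.T, rcond=None)   # least squares for Xh.T @ M = Y.T
M = M.T                                          # 3 x 4 matrix [A | b]
A, b = M[:, :3], M[:, 3]
```

For this data the system is exactly determined, so A comes out as the identity and b as (1, 0, 1), matching the R result.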

Related

Formula to get next question in quiz based on previous statistics

My goal is to dynamically determine what question should come next in a quiz, using statistics of previous answers.
So, I have:
Question with a difficulty field (1-100)
Maximum score you can get in a question (let it be 256)
Score the user has reached in a question (x out of max)
I want to somehow combine these parameters in a formula to choose the most suitable next question for the user.
How can I do it?
My idea was to give the user a question of median difficulty first, then check: if the user scored less than 50% of the maximum, get questions at the 25th percentile of difficulty, else at the 75th percentile. Then repeat this scheme on a smaller interval (25-50 percentile or 50-75 percentile, and so on).
Let's assume that the player has a fixed function score = f(difficulty) that gives for each difficulty the expected score percentage. Once we know this function, we can invert it and find the difficulty level that will give us the expected score we want.
However, the function is not known. But we have samples of this function in the form of our previous questions. So, we can fit a function to these samples. If you have knowledge about the form of the dependence, you can include that knowledge in the shape of your fitted function. I will simply assume a truncated linear function:
score = f(difficulty) = max(0, min(m * difficulty + n, 1))
The two parameters that we need to find are m and n. If we remove all sample questions where the user scored 100% or 0%, we can ignore the truncation. Then, we have a list of samples that form a linear system of equations:
score1 = m * difficulty1 + n
score2 = m * difficulty2 + n
score3 = m * difficulty3 + n
...
This system will usually not have a solution. So, we can solve for a least-squares solution. To do this, we will incrementally build a 2x2 matrix A and a 2-dimensional vector b that represent the system A * x = b. We will start with the zero matrix and the zero vector. For each question, we will update:
/ A11 A12 \ += / difficulty * difficulty difficulty \
\ A21 A22 / \ difficulty 1 /
/ b1 \ += / difficulty * score \
\ b2 / \ score /
Once we have added at least two questions, we can solve:
m = (A12 * b2 - A22 * b1) / (A12 * A12 - A11 * A22)
n = (A12 * b1 - A11 * b2) / (A12 * A12 - A11 * A22)
And we can find the difficulty for an expected score of P as:
difficulty = (P - n) / m
Let's do an example. The following table contains a few questions and the state of the function after adding the question.
diff score | A11 A12 A22 b1 b2 | m n
--------------+----------------------------+-------------
70 0.3 | 4900 70 1 21 0.3 |
50 0.4 | 7400 120 2 41 0.7 | -0.005 0.65
40 0.5 | 9000 160 3 61 1.2 | -0.006 0.74
35 0.7 | 10225 195 4 85.5 1.9 | -0.010 0.96
Here is the fitted function together with the sample questions (figure omitted).
And if we want to find the difficulty for an expected score of e.g. 75%, we get:
difficulty(0.75) = 21.009
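The incremental procedure above can be sketched in Python; this is a plain translation of the update and solve formulas, run on the table's four sample questions.

```python
# Incremental least-squares fit of score = m * difficulty + n,
# accumulating the 2x2 normal-equation matrix A and vector b.
A11 = A12 = A22 = b1 = b2 = 0.0
for difficulty, score in [(70, 0.3), (50, 0.4), (40, 0.5), (35, 0.7)]:
    A11 += difficulty * difficulty
    A12 += difficulty
    A22 += 1.0
    b1 += difficulty * score
    b2 += score

det = A12 * A12 - A11 * A22
m = (A12 * b2 - A22 * b1) / det
n = (A12 * b1 - A11 * b2) / det

def difficulty_for(expected_score):
    """Invert the fitted line to pick the next question's difficulty."""
    return (expected_score - n) / m

print(difficulty_for(0.75))  # ≈ 21.009, matching the worked example
```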

Bilinear image interpolation / scaling - A calculation example

I would like to ask you about some bilinear interpolation / scaling details. Let's assume that we have this matrix:
|100 | 50 |
|70 | 20 |
This is a 2 x 2 grayscale image. Now, I would like to scale it by a factor of two, and my matrix looks like this:
| 100 | f1 | 50 | f2 |
| f3 | f4 | f5 | f6 |
| 70 | f7 | 20 | f8 |
so if we would like to calculate f4, the calculation is defined as
f1 = 100 + 0.5(50 - 100) = 75
f7 = 70 + 0.5(20 - 70) = 45
and now finally:
f4 = 75 + 0.5(45 - 75) = 60
However, I can't really understand what calculations are proper for f3 or f1.
Do we do the bilinear scaling in each direction separately? That would mean:
f3 = 100 + 0.5(70 - 100) = 85
f1 = 100 + 0.5(50 - 100) = 75
Also, how should I treat f2, f6 and f8? Are those points simply copied, as in the nearest-neighbor algorithm?
I would like to point you to this very insightful graphic from Wikipedia that illustrates how to do bilinear interpolation for one point:
Source: Wikipedia
As you can see, the four red points are what is known. These points are known beforehand, and P is the point we wish to interpolate. As such, we have to do two steps (as you have indicated in your post). To handle the x coordinate (horizontal), we calculate the interpolated value row-wise for the top row of red points and for the bottom row, giving the two blue points R1 and R2. To handle the y coordinate (vertical), we use the two blue points and interpolate vertically to get the final point P.
When you resize an image, imagine that the image is a 3D signal f, even though we don't visually see it that way. Each point in the matrix is in fact a 3D coordinate: the column location is the x value, the row location is the y value, and the z value is the grayscale value of the matrix itself. Therefore, z = f(x,y) is the value of the matrix at location (x,y). In our case, because you're dealing with images, x and y are integers going from 1 up to the number of columns or rows, depending on which dimension you're looking at.
Therefore, given the coordinate (x,y) you want to interpolate at, and given the red coordinates in the image above, which we call x1, y1, x2, y2 as per the diagram - specifically following the diagram's convention and how images are accessed: x1 = 1, x2 = 2, y1 = 2, y2 = 1 - the blue points R1 and R2 are computed via 1D interpolation along x, each within a single row:
R1 = f(x1,y1) + (x - x1)/(x2 - x1)*(f(x2,y1) - f(x1,y1))
R2 = f(x1,y2) + (x - x1)/(x2 - x1)*(f(x2,y2) - f(x1,y2))
It's important to note that (x - x1) / (x2 - x1) is a weight / proportion of how much of a mix the output consists of between the two values seen at f(x1,y1) and f(x2,y1) for R1 or f(x1,y2) and f(x2,y2) for R2. Specifically, x1 is the starting point and (x2 - x1) is the difference in x values. You can verify that substituting x1 as x gives us 0 while x2 as x gives us 1. This weight fluctuates between [0,1] which is required for the calculations to work.
It should be noted that the origin of the image is at the top-left corner, and so (1,1) is at the top-left corner. Once you find R1 and R2, we can find P by interpolating row wise:
P = R2 + (y - y2)/(y1 - y2)*(R1 - R2)
Again, (y - y2) / (y1 - y2) denotes the proportion / mix of how much R1 and R2 contribute to the final output P. As such, you calculated f4 correctly because you used four known points: the top left is 100, the top right is 50, the bottom left is 70 and the bottom right is 20. Specifically, computing f4 means (x,y) = (1.5,1.5), because we're halfway in between the known points due to the fact that you're scaling the image by two. If you plug these values into the above computation, you will get the value of 60 as you expected. The weights for both calculations also come out to 0.5, which is what you got in your calculations and what we expect.
If you compute f1, this corresponds to (x,y) = (1.5,1), and if you substitute this into the above equation, you will see that (y - y2)/(y1 - y2) gives 0, so what is computed is just R2, corresponding to the linear interpolation along the top row only. Similarly, computing f7 means interpolating at (x,y) = (1.5,2). In this case, (y - y2)/(y1 - y2) is 1, so P = R2 + (R1 - R2), which simplifies to R1 and is the linear interpolation along the bottom row only.
Now there are the cases of f3 and f5. These correspond to (x,y) = (1,1.5) and (x,y) = (2,1.5) respectively. Substituting these values into R1, R2 and P for both cases gives:
f3
R1 = f(1,2) + (1 - 1)/(2 - 1)*(f(2,2) - f(1,2)) = f(1,2)
R2 = f(1,1) + (1 - 1)/(2 - 1)*(f(2,1) - f(1,1)) = f(1,1)
P = R2 + 0.5*(R1 - R2) = f(1,1) + 0.5*(f(1,2) - f(1,1))
P = 100 + 0.5*(70 - 100) = 85
f5
R1 = f(1,2) + (2 - 1)/(2 - 1)*(f(2,2) - f(1,2)) = f(2,2)
R2 = f(1,1) + (2 - 1)/(2 - 1)*(f(2,1) - f(1,1)) = f(2,1)
P = R2 + 0.5*(R1 - R2) = f(2,1) + 0.5*(f(2,2) - f(2,1))
P = 50 + 0.5*(20 - 50) = 35
So what does this tell us? It means that we are interpolating along the y direction only. This becomes apparent when we examine the calculations of P for f3 and f5 more thoroughly: only values along the vertical direction are involved.
As such, if you want a definitive answer, f1 and f7 are found by interpolating along the x / column direction only along the same row. f3 and f5 are found by interpolating y / row direction along the same column. f4 uses a mixture of f1 and f7 to compute the final value as you have already seen.
To answer your final question, f2, f6 and f8 are filled in based on personal preference. These values are considered to be out of bounds, with the x and y values both being 2.5 and that's outside of our [1,2] grid for (x,y). In MATLAB, the default implementation of this is to fill any values outside of the defined boundaries to be not-a-number (NaN), but sometimes, people extrapolate using linear interpolation, copy the border values, or perform some elaborate padding like symmetric or circular padding. It depends on what situation you're in, but there is no correct and definitive answer on how to fill in f2, f6 and f8 - it all depends on your application and what makes the most sense to you.
As a bonus, we can verify that my calculations are correct in MATLAB. We first define a grid of (x,y) points in the [1,2] range, then resize the image so that it's twice as large where we specify a resolution of 0.5 per point rather than 1. I'm going to call your defined matrix A:
A = [100 50; 70 20]; %// Define original matrix
[X,Y] = meshgrid(1:2,1:2); %// Define original grid of points
[X2,Y2] = meshgrid(1:0.5:2.5,1:0.5:2.5); %// Define expanded grid of points
B = interp2(X,Y,A,X2,Y2,'linear'); %// Perform bilinear interpolation
The original (x,y) grid of points looks like:
>> X
X =
1 2
1 2
>> Y
Y =
1 1
2 2
The expanded grid, which doubles the size of the matrix, looks like:
>> X2
X2 =
1.0000 1.5000 2.0000 2.5000
1.0000 1.5000 2.0000 2.5000
1.0000 1.5000 2.0000 2.5000
1.0000 1.5000 2.0000 2.5000
>> Y2
Y2 =
1.0000 1.0000 1.0000 1.0000
1.5000 1.5000 1.5000 1.5000
2.0000 2.0000 2.0000 2.0000
2.5000 2.5000 2.5000 2.5000
B is the output using X and Y as the original grid of points and X2 and Y2 as the points we want to interpolate at.
We get:
>> B
B =
100 75 50 NaN
85 60 35 NaN
70 45 20 NaN
NaN NaN NaN NaN
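For comparison, here is a small Python sketch of the same computation. The hand-rolled `bilinear` helper below is my own stand-in for MATLAB's interp2; like interp2's default, out-of-bounds samples become NaN.

```python
import numpy as np

def bilinear(img, x, y):
    """Interpolate img at (x, y) using 1-based coordinates; NaN out of bounds."""
    h, w = img.shape
    if not (1 <= x <= w and 1 <= y <= h):
        return float("nan")  # mimic interp2's default fill value
    x1, y1 = int(np.floor(x)), int(np.floor(y))
    x2, y2 = min(x1 + 1, w), min(y1 + 1, h)
    tx, ty = x - x1, y - y1
    # Interpolate along x on the two bracketing rows, then along y.
    r_top = img[y1 - 1, x1 - 1] + tx * (img[y1 - 1, x2 - 1] - img[y1 - 1, x1 - 1])
    r_bot = img[y2 - 1, x1 - 1] + tx * (img[y2 - 1, x2 - 1] - img[y2 - 1, x1 - 1])
    return r_top + ty * (r_bot - r_top)

A = np.array([[100.0, 50.0], [70.0, 20.0]])
xs = np.arange(1, 3, 0.5)  # 1, 1.5, 2, 2.5
B = np.array([[bilinear(A, x, y) for x in xs] for y in xs])
print(B)  # same values (and NaN pattern) as the MATLAB output above
```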

A feature ranking algorithm

If I have the following partitions or subsets with the corresponding scores:
{X1,X2} with score C1
{X2,X3} with score C2
{X3,X4} with score C3
{X4,X1} with score C4
I want to write an algorithm that will rank the Xs based on the scores of the subsets they appear in.
One way, for example, would be to do the following:
X1 = (C1 + C4)/2
X2 = (C1 + C2)/2
X3 = (C2 + C3)/2
X4 = (C3 + C4)/2
and then sort the results.
Is there a more efficient way, or a better idea, to do the ranking?
If you assume that the score of a set is the sum of the scores of its objects, you can write your equations in matrix form as:
C = M * X
where C is a vector of length 4 with components C1, C2, C3, C4, and M is the matrix (in your case; as I understand it, this may vary):
1 1 0 0
0 1 1 0
0 0 1 1
1 0 0 1
and X is the unknown. You can then use Gaussian elimination to determine X and then get the ranking as you suggested.
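A Python sketch of this idea: note that the example M above happens to be singular (row1 - row2 + row3 - row4 = 0), so a least-squares solver is safer than plain Gaussian elimination. The scores C1..C4 below are made-up values, chosen so that the system is consistent.

```python
import numpy as np

M = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 1]], dtype=float)
C = np.array([3.0, 5.0, 7.0, 5.0])  # made-up scores; consistent since C1 - C2 + C3 - C4 = 0

# For a rank-deficient M, lstsq returns the minimum-norm solution,
# which here still satisfies M @ X = C exactly.
X, *_ = np.linalg.lstsq(M, C, rcond=None)
ranking = np.argsort(-X)  # feature indices, highest score first
print(X, ranking)
```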

Determine elements of a matrix when sum of rows and columns are given

There is a 4x4 matrix with all 4 diagonal elements zero. All other elements are non-negative integers. The sums of all 4 rows and 4 columns are known individually. Is it possible to determine the remaining 12 elements of the matrix? E.g.
0 1 1 0 sum=2
2 0 0 1 sum=3
4 1 0 0 sum=5
0 1 6 0 sum=7
sum=6 sum=3 sum=7 sum=1
Any guidance will be very helpful.
Thanks
The matrix is
0 a12 a13 a14
a21 0 a23 a24
a31 a32 0 a34
a41 a42 a43 0
The problem is to solve a set of linear equations:
a12 + a13 + a14 = c1
a21 + a23 + a24 = c2
and so on. We have 12 variables and 8 equations (4 for the rows and 4 for the columns). To pin down 12 variables we would generally need 12 independent equations, and here there are only 8 (at most 7 of them independent, since the row sums and the column sums both add up to the same total). So the system cannot have a unique solution: if it is consistent, it has infinitely many solutions.
The matrix is
0 a12 a13 a14
a21 0 a23 a24
a31 a32 0 a34
a41 a42 a43 0
The problem is to solve a set of linear equations:
a12 + a13 + a14 = r1
a21 + a23 + a24 = r2
a31 + a32 + a34 = r3
a41 + a42 + a43 = r4
a21 + a31 + a41 = c1
a12 + a32 + a42 = c2
a13 + a23 + a43 = c3
a14 + a24 + a34 = c4
Thus you need to solve an equation of the form Ax = b, with A consisting of only 0 and 1 coefficients. Use Gaussian elimination and the Euclidean algorithm to find integer matrices S, D, T such that D is in diagonal form and SDT = A. If you do not know how to do this, search the web for the Smith normal form algorithm.
Then
SDTx = Ax = b
Thus
DTx = S^(-1)Ax = S^(-1)b
Since D is in diagonal form you can check if you can solve
Dy = S^(-1)b
for y. You also find a basis for the homogeneous solution space. This in turn can be used to cut down the complexity of the search for the non-negative solutions of the original equation.
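Before reaching for the full Smith normal form machinery, it can help to confirm numerically that the system is underdetermined. A Python sketch for the example above (the variable ordering is my own choice):

```python
import numpy as np

# Columns correspond to the off-diagonal entries in row-major order:
# a12,a13,a14, a21,a23,a24, a31,a32,a34, a41,a42,a43
off_diag = [(i, j) for i in range(4) for j in range(4) if i != j]
A = np.zeros((8, 12))
for col, (i, j) in enumerate(off_diag):
    A[i, col] = 1      # row-sum equation for row i
    A[4 + j, col] = 1  # column-sum equation for column j
b = np.array([2, 3, 5, 7, 6, 3, 7, 1], dtype=float)  # the example's sums

print(np.linalg.matrix_rank(A))  # 7: fewer than 12 unknowns, so no unique solution

# The example's own matrix is one of infinitely many solutions:
vec = np.array([1, 1, 0, 2, 0, 1, 4, 1, 0, 0, 1, 6], dtype=float)
print(np.allclose(A @ vec, b))  # True
```

The rank of 7 (not 8) reflects the dependence noted above: the row sums and column sums share the same total.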

Method for interpolating value at the midpoint of a square, given the values, first and second derivatives at the corners?

All the practical examples of spatial interpolation I've been able to find work by sampling additional surrounding points to estimate the derivatives. But is there a simpler method if the derivatives are already known—and if you only need the value (and derivatives) for the single point at the center of the known points?
To illustrate: suppose for each of the points (1, 1), (-1, 1), (-1, -1), and (1, -1) you know f(x, y), f'(x), f''(x), f'(y), and f''(y) — and you want interpolated values at (0, 0) for f(x, y), f'(x), f''(x), f'(y), and f''(y).
First of all, the problem as posed does not quite make sense. In multivariable calculus we don't have derivatives, we have partial derivatives. Lots of them.
Suppose you have the value, first partial derivatives and second partial derivatives at the corners. So at each corner we know the value, the partial by x, the partial by y, the second partial by x by x, the second partial by x by y, and the second partial by y by y. We have 6 pieces of data per corner, for 24 pieces of data total.
Next, we try to fit this to an appropriate polynomial with 24 terms. That would be a0 + a1 x + a2 y + a3 x^2 + a4 x y + a5 y^2 + a6 x^3 + a7 x^2 y + a8 x y^2 + a9 y^3 + a10 x^4 + a11 x^3 y + a12 x^2 y^2 + a13 x y^3 + a14 y^4 + a15 x^5 + a16 x^4 y + a17 x^3 y^2 + a18 x^2 y^3 + a19 x y^4 + a20 y^5 + a21 x^6 + a22 x^3 y^3 + a23 y^6. (I had to leave out most of the 6th-power terms because I was hitting that 24 limit.)
If you calculate that out, matching up all of those values against all of those points you get 24 equations in 24 variables. Solve and you get all of the coefficients to use. Plug in the value (0, 0) and you have your interpolation.
Straightforward, tedious, and not for the faint of heart.
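A Python sketch of this procedure, under stated assumptions: the 24-monomial basis below is one particular choice (all terms of degree <= 5 plus x^6, x^3 y^3, y^6), and such a system can be rank-deficient - for instance (x^2 - 1)^3 has value and all five listed derivatives equal to zero at every corner - so least squares is used rather than a direct solve.

```python
import numpy as np

# Assumed 24-monomial basis: all terms of degree <= 5 (21 monomials)
# plus x^6, x^3*y^3 and y^6 to reach the 24-condition budget.
terms = [(i, d - i) for d in range(6) for i in range(d + 1)]
terms += [(6, 0), (3, 3), (0, 6)]

def condition_rows(x, y):
    """Rows for f, f_x, f_y, f_xx, f_xy, f_yy at (x, y); one column per monomial."""
    rows = []
    for dx, dy in [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]:
        row = []
        for i, j in terms:
            ci = i * (i - 1) if dx == 2 else (i if dx == 1 else 1)
            cj = j * (j - 1) if dy == 2 else (j if dy == 1 else 1)
            row.append(ci * cj * x ** max(i - dx, 0) * y ** max(j - dy, 0))
        rows.append(row)
    return rows

corners = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
M = np.array([r for (x, y) in corners for r in condition_rows(x, y)], dtype=float)  # 24 x 24

# Exercise the solve with data taken from a polynomial inside the basis:
c_true = np.zeros(24)
c_true[[0, 1, 2, 4]] = [2.0, 1.0, -1.0, 0.5]  # p(x,y) = 2 + y - x + 0.5*x*y
b = M @ c_true
c, *_ = np.linalg.lstsq(M, b, rcond=None)
center_value = c[0]  # all non-constant monomials vanish at (0, 0)
```

The interpolated value at the midpoint is just the constant coefficient c[0], since every other monomial vanishes at the origin; when the 24x24 matrix is rank-deficient that value is not uniquely determined, which is worth keeping in mind with this approach.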
