I've been looking around the net for ages trying to find out how to derive the 2D transformation matrices for the above functions. I couldn't find it in my college notes, and it was a past exam question, so I was wondering if anybody could help for revision purposes? Cheers.
A transformation matrix is simply shorthand for applying a function to the x and y values of a point, independently. In the case of translation, x' = 1*x + 0*y + dx*1 and y' = 0*x + 1*y + dy*1. The matrix representation of these two equations is as follows:
[ 1 0 dx ]   [ x ]   [ x' ]
[ 0 1 dy ] * [ y ] = [ y' ]
[ 0 0 1  ]   [ 1 ]   [ 1  ]
The other matrices can be derived similarly: simply determine what x' and y' should be, in terms of x, y, and 1.
See Wikipedia, for instance.
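For concreteness, here is a short NumPy sketch (mine, not from the original thread) that builds the translation, rotation, and scaling matrices in homogeneous coordinates exactly this way and applies them to a point:

import numpy as np

def translation(dx, dy):
    # x' = x + dx, y' = y + dy
    return np.array([[1, 0, dx],
                     [0, 1, dy],
                     [0, 0,  1]], dtype=float)

def rotation(theta):
    # x' = x*cos(theta) - y*sin(theta), y' = x*sin(theta) + y*cos(theta)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0],
                     [s,  c, 0],
                     [0,  0, 1]])

def scaling(sx, sy):
    # x' = sx*x, y' = sy*y
    return np.array([[sx,  0, 0],
                     [ 0, sy, 0],
                     [ 0,  0, 1]], dtype=float)

p = np.array([2.0, 3.0, 1.0])      # point (2, 3) in homogeneous form
print(translation(5, -1) @ p)      # [7. 2. 1.]
print(rotation(np.pi / 2) @ p)     # [-3. 2. 1.] (up to rounding)
print(scaling(2, 4) @ p)           # [4. 12. 1.]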
How to create a set of turtles whose locations are distributed with density rising from the edge of the environment to the middle?
You can use something like this in your setup procedure:
let center-x (max-pxcor + min-pxcor) / 2
let center-y (max-pycor + min-pycor) / 2
let std-dev 5  ; change this to vary how clumped the turtles are
crt 100
[
  set xcor random-normal center-x std-dev
  set ycor random-normal center-y std-dev
]
That will work if you have world-wrapping on. If world-wrapping is off, you would have to add some code to check that the values of xcor and ycor produced by random-normal are inside the world (e.g., that the turtle's new xcor is between min-pxcor and max-pxcor); otherwise the code will sometimes try to put the new turtle outside the space, which is an error. A sketch of that check follows.
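The check itself is just "redraw until the value lands inside the bounds". The idea is not NetLogo-specific, so here is a minimal sketch in Python (the bounds and names are illustrative):

import random

def bounded_normal(mean, std_dev, lo, hi):
    # Redraw from a normal distribution until the value is within [lo, hi]
    while True:
        value = random.gauss(mean, std_dev)
        if lo <= value <= hi:
            return value

# e.g., an xcor for a world whose x-coordinates span -16..16
x = bounded_normal(0, 5, -16, 16)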
You could also use a triangular distribution that varies the density of turtles linearly from a peak at the center of the space to zero at the edge.
let center-x (max-pxcor + min-pxcor) / 2
let center-y (max-pycor + min-pycor) / 2
crt 100
[
  set xcor random-triangular min-pxcor center-x max-pxcor
  set ycor random-triangular min-pycor center-y max-pycor
]
NetLogo does not have this triangular distribution built-in, so you need to add this procedure to your code:
to-report random-triangular [a-min a-mode a-max]
  ; Return a random value from a triangular distribution
  ; Method from https://en.wikipedia.org/wiki/Triangular_distribution#Generating_Triangular-distributed_random_variates
  ; Obtained 2015-11-27
  if (a-min > a-mode) or (a-mode > a-max) or (a-min >= a-max)
    [ error (word "random-triangular received illegal parameters (min, mode, max): " a-min " " a-mode " " a-max) ]
  let a-rand random-float 1.0
  let F (a-mode - a-min) / (a-max - a-min)
  ifelse a-rand < F
    [ report a-min + sqrt (a-rand * (a-max - a-min) * (a-mode - a-min)) ]
    [ report a-max - sqrt ((1 - a-rand) * (a-max - a-min) * (a-max - a-mode)) ]
end
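If you want to sanity-check the procedure outside NetLogo, here is the same inverse-CDF method as a small Python sketch (numpy.random.triangular implements the identical distribution, so either can serve as a reference):

import math
import random

def random_triangular(a_min, a_mode, a_max):
    # Inverse-CDF sampling, same method as the NetLogo reporter above
    u = random.random()
    f = (a_mode - a_min) / (a_max - a_min)
    if u < f:
        return a_min + math.sqrt(u * (a_max - a_min) * (a_mode - a_min))
    return a_max - math.sqrt((1 - u) * (a_max - a_min) * (a_max - a_mode))

samples = [random_triangular(-16, 0, 16) for _ in range(10000)]
# The sample mean should be near (min + mode + max) / 3 = 0
print(sum(samples) / len(samples))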
Here is the Newton's method code from the Wikipedia page (in Julia):
x0 = 1                    # The initial guess
f(x) = x^2 - 2            # The function whose root we are trying to find
fprime(x) = 2x            # The derivative of the function
tolerance = 1e-7          # 7 digit accuracy is desired
epsilon = 1e-14           # Do not divide by a number smaller than this
maxIterations = 20        # Do not allow the iterations to continue indefinitely
solutionFound = false     # Have not converged to a solution yet

for i = 1:maxIterations
    y = f(x0)
    yprime = fprime(x0)
    if abs(yprime) < epsilon          # Stop if the denominator is too small
        break
    end
    global x1 = x0 - y/yprime         # Do Newton's computation
    if abs(x1 - x0) <= tolerance      # Stop when the result is within the desired tolerance
        global solutionFound = true
        break
    end
    global x0 = x1                    # Update x0 to start the process again
end

if solutionFound
    println("Solution: ", x1)         # x1 is a solution within tolerance and maximum number of iterations
else
    println("Did not converge")       # Newton's method did not converge
end
When I implemented this, I found cases where I need to apply a new initial guess:
When the functions (i.e. f, fprime) give an Infinity or NaN result (e.g. in C# this happens for 1/x when x=0, or √x when x=-1, ...)
When abs(yprime) < epsilon
When x0 is too large relative to y/yprime (e.g. x0 = 1e99 but y/yprime = 1e25; floating-point rounding then makes x1 = x0, which is mathematically wrong and makes the algorithm lead nowhere)
My app allows the user to input the math function and the initial guess (e.g. the initial guess for x can be 1e308; the function can be 9=√(-81+x), 45=InverseSin(x), 3=√(x-1e99), ...).
So when the initial guess is bad, my app automatically applies a new initial guess in the hope that it can produce a result.
My current solution: the initial guess is an array of values:
double[] arrInitialGuess =
{
    [User's initial guess], 0, 1, -1, 2, -2, ... (you know, factorial n!) ..., 7.257416E+306, -7.257416E+306,
};
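A sketch of the retry loop this implies (in Python for brevity; the newton helper, the guess list, and the residual threshold mirror the description above and are illustrative, not a fixed recipe):

import math

def newton(f, fprime, x0, tol=1e-7, eps=1e-14, max_iter=20):
    # Plain Newton's method; returns a root, or None when a new guess is needed
    for _ in range(max_iter):
        y, yp = f(x0), fprime(x0)
        if not (math.isfinite(y) and math.isfinite(yp)):   # case 1: Inf/NaN
            return None
        if abs(yp) < eps:                                  # case 2: tiny derivative
            return None
        x1 = x0 - y / yp
        if abs(x1 - x0) <= tol:
            # case 3 guard: with a huge x0 the step can round away to nothing,
            # so only accept if the residual is genuinely small (threshold illustrative)
            return x1 if abs(f(x1)) < 1e-6 else None
        x0 = x1
    return None

def solve_with_fallbacks(f, fprime, user_guess, fallbacks=(0, 1, -1, 2, -2, 6, -6)):
    for guess in (user_guess, *fallbacks):
        root = newton(f, fprime, guess)
        if root is not None:
            return root
    return None

# Example: root of x^2 - 2 starting from a hopeless user guess
print(solve_with_fallbacks(lambda x: x * x - 2, lambda x: 2 * x, 1e308))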
I have the following questions:
1. Is the big number (e.g. 7.257416E+306) even needed? I see that in x1 = x0 - y/yprime, if the initial guess x0 is too big compared to y/yprime, the iteration programmatically leads nowhere. If the big number is pointless, what is the cap for the initial guess (e.g. 1e17)?
2. What is better for the array of initial guesses: factorial n! {±1, ±2, ±6, ...}, powers of two {±2^0, ±2^1, ±2^2, ...}, or powers of ten {±1e0, ±1e1, ±1e2, ...}?
3. If my predefined-array-of-initial-guesses method is not good, is there a better way to get a new initial guess for Newton's method (e.g. an algorithm to compute the next initial guess)?
Update:
Change of thought: the predefined array of initial guesses doesn't work.
For example, I have the formula 8 = 3/x => y = 8 - 3/x, which gives this graph
In this case, I can only find the solution when the initial guess is in the range [0.1; 0.7], so a predefined initial-guess array like {0, 1, 2, ..., Inf} does me no good and just wastes my precious resources.
So my new thought is: steer the next initial guess based on the graph. The idea: compare the last guess with the current guess to see whether the value of y is heading toward 0, and increase or decrease the next initial guess accordingly to steer y toward 0. I still keep the predefined-initial-guess idea in reserve for the case where the guesses all give Infinity.
Update 2:
New thought: pick the new initial guess in a range [x0; x1] where (see the sketch after this list):
- There is no error between x0 and x1 (e.g. no divide-by-zero when applying any value in [x0; x1]), so I can form the line AB with A(x0, y0) and B(x1, y1).
- y0 and y1 have different signs: (y0 > 0 && y1 < 0) || (y0 < 0 && y1 > 0), so the line AB cuts the x axis, which makes it very likely there is a y = 0 somewhere between y0 and y1, if the graph isn't too weird.
- Then narrow the range [x0; x1] as much as possible and run a few initial guesses inside it.
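A minimal sketch of that bracketing idea (Python; the scan grid, the bisection step, and the hand-off to Newton are all illustrative choices, not a fixed recipe):

import math

def find_bracket(f, lo, hi, steps=1000):
    # Scan [lo, hi] for adjacent points where f is finite and changes sign
    step = (hi - lo) / steps
    prev_x, prev_y = None, None
    x = lo
    while x <= hi:
        try:
            y = f(x)
        except (ZeroDivisionError, ValueError):
            y = float("nan")                # treat evaluation errors like Inf/NaN
        if math.isfinite(y):
            if prev_y is not None and (prev_y < 0) != (y < 0):
                return prev_x, x            # sign change: a root likely lies inside
            prev_x, prev_y = x, y
        else:
            prev_x, prev_y = None, None     # reset across error/Inf regions
        x += step
    return None

def narrowed_guess(f, bracket, rounds=20):
    # Bisection to shrink the bracket; the midpoint becomes the Newton guess
    a, b = bracket
    for _ in range(rounds):
        m = (a + b) / 2
        if (f(a) < 0) != (f(m) < 0):
            b = m
        else:
            a = m
    return (a + b) / 2

f = lambda x: 8 - 3 / x                 # the example from the update: 8 = 3/x
bracket = find_bracket(f, 0.01, 10)
print(narrowed_guess(f, bracket))       # ~0.375, i.e. x = 3/8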
Good evening everybody.
I have been using neural networks quite often already, so I thought it was time to face the underlying theory.
As a result, I have spent quite a lot of hours on my C++ implementation of a neural network from scratch. Still, I do not get any useful output.
My issue is a clean, object-oriented, efficient implementation, especially what I have to backpropagate from one Layer class to the next. I am aware that I'm skipping the full calculation/forwarding of the Jacobian matrices, but from my understanding this isn't necessary, since most of the entries will cancel out.
I have a softmax class of size n:
Forward pass: it takes an input vector input of length n and produces an output vector output of length n:
sum = 0; for (int i = 0; i < n; i++) sum += e^input[i]
output[i] = e^input[i] / sum
Backward pass: it takes a feedback vector target of size n, the target values.
I do not have weights or biases in my softmax class, so I just calculate the feedback vector feedback of size n:
feedback[i] = output[i] - target[i]
That is what I return from my softmax layer.
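For comparison, a standard NumPy version of those two passes (a sketch, not the poster's C++; the output - target gradient is the usual softmax-plus-cross-entropy simplification that the description above assumes):

import numpy as np

def softmax_forward(x):
    # Shift by max(x) for numerical stability; mathematically the same result
    e = np.exp(x - np.max(x))
    return e / e.sum()

def softmax_backward(output, target):
    # Gradient of the cross-entropy loss w.r.t. the softmax *input*,
    # after the usual softmax/cross-entropy cancellation
    return output - target

x = np.array([1.0, 2.0, 3.0])
out = softmax_forward(x)
print(out)                                            # sums to 1
print(softmax_backward(out, np.array([0.0, 0.0, 1.0])))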
I have a fully connected class: m -> n
Forward pass: it takes an input vector input of size m.
I calculate the net activity vector net of size n, and an output vector output of size n:
net[i] = b[i];
for (int j = 0; j < m; j++) net[i] += w[i][j] * input[j]
output[i] = 1 / (1 + e^-net[i])
Backward pass: it takes a feedback vector of size n from the following layer.
b'[i] = b[i] + feedback[i] * 1 * learningRate
w'[i][j] = w[i][j] + feedback[i] * input[j] * learningRate
The new feedback array of size m:
feedback'[i] = 0;
for (int j = 0; j < n; j++) feedback'[i] += feedback[j] * weights[i][j] * (output[j] * (1 - output[j]))
Of course, the feedback from one fully connected layer will be passed to the next, and so on.
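For reference, here is a textbook-style NumPy sketch of one sigmoid fully connected layer (my own sketch, assuming the incoming feedback is the gradient of the loss w.r.t. this layer's output; note that it applies the sigmoid derivative before updating w and b, and that gradient descent subtracts the update, which are two places where implementations commonly diverge):

import numpy as np

class FullyConnected:
    def __init__(self, m, n, lr=0.1):
        rng = np.random.default_rng(0)
        self.w = rng.normal(0, 0.1, size=(n, m))  # w[i][j]: input j -> output i
        self.b = np.zeros(n)
        self.lr = lr

    def forward(self, x):
        self.x = x
        self.out = 1.0 / (1.0 + np.exp(-(self.w @ x + self.b)))  # sigmoid(net)
        return self.out

    def backward(self, feedback):
        # Convert dL/d(output) into dL/d(net) via the sigmoid derivative
        d_net = feedback * self.out * (1.0 - self.out)
        d_input = self.w.T @ d_net                   # feedback for the previous layer
        self.w -= self.lr * np.outer(d_net, self.x)  # gradient *descent*: subtract
        self.b -= self.lr * d_net
        return d_input

layer = FullyConnected(m=3, n=2)
out = layer.forward(np.array([0.5, -0.2, 0.8]))
grad_from_next = np.array([0.2, -0.1])    # e.g. dL/d(output) from the next layer
prev_feedback = layer.backward(grad_from_next)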
I've been reading a few articles and found this one quite nice:
https://www.ics.uci.edu/~pjsadows/notes.pdf
I feel like my implementation should be identical to what I read in such papers, but even after a small number of training examples (~100), my network's output gets close to a constant, basically as if it depended only on the biases.
So could someone please give me a hint as to whether I'm wrong in my theoretical understanding, or whether I just have some issues with my implementation?
I am implementing Szudzik's pairing function in MATLAB, where I pair two values coming from two different matrices X and Y into a unique value using the function CantorPairing2D(X,Y). After this I reverse the process to check its invertibility, using the function InverseCantorPairing2(X). But I get an unusual problem: when I check the functions on small matrices of size, say, 10*10, they work fine, but for my code I have to use 256*256 matrices A and B, and then the code goes wrong. What happens is a bit strange: when I invert the process, the values in matrix A are the same as the values of B in some places, for instance A(1,1)=B(1,1) and A(1,2)=B(1,2). Can somebody help?
VRNEW = CantorPairing2D(VRPRO, BLOCK3);

function [ Z ] = CantorPairing2D( X,Y )
    [a,~] = size(X);
    Z = zeros(a,a);
    for i = 1:a
        for j = 1:a
            if X(i,j) ~= max(X(i,j), Y(i,j))
                Z(i,j) = X(i,j) + Y(i,j)^2;
            else
                Z(i,j) = X(i,j)^2 + X(i,j) + Y(i,j);
            end
        end
    end
    Z = Z./1000;
end
function [ A,B ] = InverseCantorPairing2( X )
    [a,~] = size(X);
    Rfinal = X.*1000;
    A = zeros(a,a);
    B = zeros(a,a);
    for i = 1:a
        for j = 1:a
            if Rfinal(i,j) - floor(sqrt(Rfinal(i,j)))^2 < floor(sqrt(Rfinal(i,j)))
                T = floor(sqrt(Rfinal(i,j)));
                B(i,j) = T;
                A(i,j) = Rfinal(i,j) - T^2;
            else
                T = floor( (-1 + sqrt(1 + 4*Rfinal(i,j)))/2 );
                A(i,j) = T;
                B(i,j) = Rfinal(i,j) - T^2 - T;
            end
        end
    end
end
Example: if

A = 45 16  7 17
     7 22 11 25
    11 12  9 17
     2 11  3  5

B = 0 0 0 1
    0 0 0 1
    1 1 1 1
    1 3 0 0

then after pairing I get

C = 2.0700 0.2720 0.0560 0.3070
    1.4060 0.5060 0.1320 0.6510
    0.1330 0.1570 0.0910 0.3070
    0.0070 0.1350 0.0120 0.0300

After the inverse pairing I should get the same A and the same B. But for bigger matrices it gives unusual behaviour, because some elements of A are the same as B.
If possible, a counterexample where your code fails would help immensely.
I managed to reproduce your code's behaviour, and I have rewritten your code in a vectorised fashion. You should still get the bug, but hopefully this is a first step toward uncovering the underlying logic and finding the bug itself.
I am not familiar with the specific algorithm, but I observe a discrepancy in the CantorPairing definition.
For elements where Y = X, your if statement is false, since X = max(X,X); so for those elements Z is X^2+X+Y, but by hypothesis X = Y, therefore you have:
X^2+X+X = X^2+2*X
Now, if we perturb the equation slightly and suppose Y = X + 10*eps, your if statement is true (since Y > X) and Z is X + Y^2; since X ~= Y we can approximate this to X + X^2.
Therefore your equation is very sensitive to numerical approximation (and you definitely have a discontinuity in Z). Again, I am not familiar with the algorithm and it may very well be the behaviour you want, but it is unlikely, so I am pointing it out.
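For reference, the standard Szudzik pairing function round-trips exactly on non-negative integers; here is a small Python sketch (mine, not from the question) that you can use to cross-check the MATLAB version:

import math

def szudzik_pair(x, y):
    # x < y  ->  y^2 + x ; otherwise  x^2 + x + y   (non-negative integers)
    return y * y + x if x < y else x * x + x + y

def szudzik_unpair(z):
    t = math.isqrt(z)            # exact integer square root, no floating point
    if z - t * t < t:
        return z - t * t, t      # came from the x < y branch
    return t, z - t * t - t      # came from the x >= y branch

for x in range(50):
    for y in range(50):
        assert szudzik_unpair(szudzik_pair(x, y)) == (x, y)
print("round trip OK, including all x == y cases")

Worth noting: this stays in exact integer arithmetic; any scaling that leaves the integers (such as a divide-then-multiply round trip) reintroduces the floating-point sensitivity described above.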
The following is my version of your code. I report it also because I hope it will be pedagogical in getting you acquainted with logical indexing and vectorised code (which is the idiomatic form for MATLAB, let alone much faster than nested for loops).
function [ Z ] = CantorPairing2D( X,Y )
    [a,~] = size(X);
    Z = zeros(a,a);
    firstConditionIndeces = Y > X;  % if Y > X then X is not the max of X and Y
    % elements on which to apply the first equation
    Z(firstConditionIndeces) = X(firstConditionIndeces) + Y(firstConditionIndeces).^2;
    % the remaining elements get the second equation
    Z(~firstConditionIndeces) = X(~firstConditionIndeces).^2 + X(~firstConditionIndeces) + Y(~firstConditionIndeces);
    Z = Z./1000;
end

function [ A,B ] = InverseCantorPairing2( X )
    [a,~] = size(X);
    Rfinal = X.*1000;
    A = zeros(a,a);
    B = zeros(a,a);
    T = zeros(a,a);
    % condition deciding which update to apply (note the elementwise .^)
    indecesToWhichApplyFstFcn = Rfinal - floor(sqrt(Rfinal)).^2 < floor(sqrt(Rfinal));
    % elements on which to apply the first update
    T(indecesToWhichApplyFstFcn) = floor(sqrt(Rfinal(indecesToWhichApplyFstFcn)));
    B(indecesToWhichApplyFstFcn) = T(indecesToWhichApplyFstFcn);
    A(indecesToWhichApplyFstFcn) = Rfinal(indecesToWhichApplyFstFcn) - T(indecesToWhichApplyFstFcn).^2;
    % the remaining elements get the second update (T must be set before use)
    T(~indecesToWhichApplyFstFcn) = floor( (-1 + sqrt(1 + 4*Rfinal(~indecesToWhichApplyFstFcn)))/2 );
    A(~indecesToWhichApplyFstFcn) = T(~indecesToWhichApplyFstFcn);
    B(~indecesToWhichApplyFstFcn) = Rfinal(~indecesToWhichApplyFstFcn) - T(~indecesToWhichApplyFstFcn).^2 - T(~indecesToWhichApplyFstFcn);
end
My data lives in 128 dimensions. I'm trying to reduce it to 3 dimensions to visualize it while preserving the Euclidean distance, so that distance represents the similarity between two data points.
Original data X: 5 * 128 (5 data points)
[[ -4.46e-02 1.57e-01 2.17e-01 1.24e-01 6.01e-02 7.61e-02
6.38e-02 -1.05e-01 -2.55e-02 5.99e-02 -8.38e-02 5.93e-02
-1.58e-01 -1.05e-01 1.31e-01 -5.33e-02 -4.18e-02 9.32e-02
-1.62e-02 -9.19e-02 -1.30e-01 8.56e-02 -6.13e-02 3.78e-02
7.84e-02 -9.74e-02 -9.42e-02 7.47e-02 -4.65e-02 7.36e-03
-9.19e-04 1.37e-01 -8.52e-02 9.27e-02 6.50e-02 -2.61e-02
7.21e-02 -1.83e-01 -2.49e-02 -9.85e-03 1.57e-01 -7.98e-02
1.50e-01 -1.40e-01 -2.39e-02 4.19e-02 6.98e-02 -1.27e-02
-7.56e-02 4.44e-02 1.86e-01 -2.22e-03 -1.79e-02 -3.90e-02
7.72e-02 4.47e-02 -8.15e-02 -4.31e-02 -6.52e-03 7.73e-02
-1.37e-02 5.78e-02 -1.25e-01 -1.58e-01 1.37e-01 9.34e-02
-6.07e-03 -1.69e-01 -2.12e-01 2.14e-01 -4.05e-02 1.29e-01
4.42e-02 1.71e-01 -2.13e-02 8.00e-03 7.17e-02 4.57e-03
-6.55e-03 -1.66e-01 3.73e-02 1.01e-01 -1.26e-03 1.96e-02
5.44e-02 -1.04e-01 -5.32e-02 -1.57e-02 -6.31e-02 1.89e-01
2.43e-02 1.59e-02 9.13e-03 -4.41e-02 -5.96e-03 1.03e-01
4.33e-02 -3.94e-02 7.85e-02 3.61e-02 -2.32e-02 3.69e-03
-9.57e-03 -1.47e-02 2.61e-02 -4.15e-04 1.41e-02 -4.22e-02
-7.42e-02 1.07e-01 9.08e-03 3.45e-02 6.41e-02 -5.37e-02
1.57e-02 -1.91e-01 8.21e-02 3.31e-02 3.57e-02 1.37e-02
1.56e-01 6.25e-02 4.54e-02 -1.07e-02 1.08e-01 2.69e-02
9.57e-02 -1.24e-01]
...
]
Original distance matrix dist:
dist = DataArray(squareform(pdist(X, 'euclidean')))
[[ 0. , 0.67, 0.62, 0.7 , 0.67],
[ 0.67, 0. , 0.48, 0.76, 0.46],
[ 0.62, 0.48, 0. , 0.7 , 0.48],
[ 0.7 , 0.76, 0.7 , 0. , 0.6 ],
[ 0.67, 0.46, 0.48, 0.6 , 0. ]]
T-SNE:
from sklearn.manifold import TSNE
model = TSNE(n_components=3, random_state=0)
x_tsne = model.fit_transform(x)
x_tsne:
[[ 1.78e-04 4.02e-05 1.01e-04]
[ 2.25e-04 1.90e-04 -1.00e-04]
[ 9.43e-05 -1.72e-05 -1.21e-05]
[ 4.02e-05 1.36e-05 1.49e-04]
[ 7.44e-05 1.08e-05 4.45e-05]]
dist_tsne:
[[ 0.00e+00, 2.55e-04, 1.52e-04, 1.49e-04, 1.22e-04],
[ 2.55e-04, 0.00e+00, 2.60e-04, 3.57e-04, 2.75e-04],
[ 1.52e-04, 2.60e-04, 0.00e+00, 1.72e-04, 6.62e-05],
[ 1.49e-04, 3.57e-04, 1.72e-04, 0.00e+00, 1.10e-04],
[ 1.22e-04, 2.75e-04, 6.62e-05, 1.10e-04, 0.00e+00]]
Comparing dist and dist_tsne, I noticed that the values are not the same, and they are not even proportional. How can I preserve the Euclidean distance while reducing the dimension?
That's theoretically impossible in general.
Your original data lives in many more dimensions, and you can't throw some of them away while retaining all pairwise distances.
An example:
Imagine the 3 points of an equilateral triangle (in 2D space)
Every pair of points has the same distance
Try to map this to a 1-dimensional sequence (a number line)
It's not possible to keep all the pairwise distances
The task of t-SNE and friends is to map these points to some lower-dimensional space while preserving the distances visually, so that we humans can grasp information hidden in many dimensions.
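A quick numerical illustration of the triangle example (my own sketch; metric MDS is used here because, unlike t-SNE, it explicitly minimizes distance distortion, and its residual stress_ shows that even the best 1D embedding cannot be exact):

import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

# Equilateral triangle in 2D: all three pairwise distances equal 1
tri = np.array([[0.0, 0.0],
                [1.0, 0.0],
                [0.5, np.sqrt(3) / 2]])
D = squareform(pdist(tri))

# The best distance-preserving 1D embedding still has nonzero stress:
# on a line, the outer pair is always farther apart than the inner pairs.
mds = MDS(n_components=1, dissimilarity="precomputed", random_state=0)
mds.fit(D)
print(mds.stress_)   # > 0: the distances cannot all be preserved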