Finding the point which minimizes the distance given a point in space and constraints? - algorithm

I have a question regarding an algorithm:
We have a fixed point in 2D space, let's call it S(x,y), and two links L1 and L2 of known lengths. The two links are connected at a common joint called E(x,y), and the other end point of L2 is a point we call F(x,y).
So L1 has end points S and E, whereas L2 has end points E and F.
Given a point P(x,y) in space, how can I find the coordinates of F(x,y) closest to P? I also want to find the angles θ1 and θ2 that take the links L1 and L2 to that point.
See this image for a graphical representation of the problem I have right now: http://postimage.org/image/qlekcv1qz/
So I have formulated this as an optimization problem, where the objective function is:
* arg min |P - F|
subject to the constraints θ1 ∈ [0, π] and θ2 ∈ [0, π/2].
So we have,
* xE = xS + L1 * cos θ1 and yE = yS + L1 * sin θ1
* xF = xE + L2 * cos(θ1 + θ2) and yF = yE + L2 * sin(θ1 + θ2)
Here the lengths are L1 = 105 and L2 = 113.7, and point S is the origin, i.e. xS = 0 and yS = 0.
Can you give a hint on how to code up my function, or suggest any optimization method that gives me the values of θ1 and θ2 such that the distance between point F and point P is minimized?

So if I understand correctly, your description is equivalent to having two rigid rods of lengths L1 and L2, with one end of L1 fixed at S, the other end connected to L2 by a flexible joint (at some as-yet-undetermined point E), and you want to get the other end of L2 (point F) as close to some point P as possible. If this is the case, then:
If |L1-L2| < |P-S| < |L1+L2| then F = P
If |L1-L2| > |P-S| then F = S + (P-S)*|L1-L2|/|P-S|
If |P-S| > |L1+L2| then F = S + (P-S)*|L1+L2|/|P-S|
Is that what you want?
See this image: http://postimage.org/image/l1ktt0qtb/
If point P is closer to point S than the distance |L1-L2| (assuming the lengths are unequal), then point F cannot 'reach' point P, even with the angle at E bent to 180 degrees. The closest you can get is somewhere on the circle with radius |L1-L2| and centre S. In this case the best F is given by the vector with direction (P-S) and magnitude |L1-L2|: my case 2 above and Figure A below. Note that if L1 = L2 this case never occurs.
If point P is further from point S than the distance |L1+L2|, then point F cannot 'reach' point P, even with the angle at E straightened to 0 degrees. The closest you can get is somewhere on the circle with radius |L1+L2| and centre S. In this case the best F is given by the vector with direction (P-S) and magnitude |L1+L2|: my case 3 above and Figure B below.
If point P is between the two limiting circles, then there will be two solutions (one as shown in Figure C below, and the other with L1 and L2 reflected in the mirror line formed by the vector P-S). In this case the 'best' F is equal to point P.
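In code, those three cases might look like this (a minimal numpy sketch; the function name is mine, not part of the original question):
import numpy as np

def closest_reachable_point(S, P, L1, L2):
    S, P = np.asarray(S, dtype=float), np.asarray(P, dtype=float)
    d = np.linalg.norm(P - S)
    r_min, r_max = abs(L1 - L2), L1 + L2
    if d < 1e-12:
        return S + np.array([r_min, 0.0])   # P coincides with S: any direction on the inner circle works
    if d < r_min:
        return S + (P - S) * r_min / d      # case 2: clamp to the inner circle
    if d > r_max:
        return S + (P - S) * r_max / d      # case 3: clamp to the outer circle
    return P                                # case 1: P is reachable, so F = P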
If you want to know the angles θ1 and θ2, then that is a different question (I see you have added that now).
Use the cosine rule for triangles with no right angle.
The rule is
C = acos[(a^2 + b^2 - c^2)/(2ab)]
where a triangle has sides of lengths a, b, and c, and C is the angle between sides a and b. You are trying to produce a triangle with sides L1, L2, and d = |S-P|, which is possible as long as no single length is greater than the sum of the other two.
By substituting L1, L2, and d for a, b, and c appropriately, you can solve for each of the internal angles A, B, and C. Then you can use these angles, plus the angle between the vector P-S and the horizontal (call that D, perhaps), to calculate your θ1 and θ2, as sketched below.
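A rough Python sketch of that angle recovery, assuming S is the origin and using the question's L1 = 105 and L2 = 113.7 (the helper name is mine, it picks one of the two elbow solutions, and the question's θ2 ∈ [0, π/2] limit is not enforced):
import numpy as np

def link_angles(F, L1=105.0, L2=113.7):
    x, y = F
    d = np.hypot(x, y)
    d = np.clip(d, abs(L1 - L2), L1 + L2)            # keep F at a reachable radius
    # Cosine rule: interior angles of the triangle (S, E, F) at S and at E.
    angle_S = np.arccos((L1**2 + d**2 - L2**2) / (2 * L1 * d))
    angle_E = np.arccos((L1**2 + L2**2 - d**2) / (2 * L1 * L2))
    D = np.arctan2(y, x)                             # angle of P - S from the horizontal
    theta1 = D - angle_S                             # use D + angle_S for the mirrored solution
    theta2 = np.pi - angle_E                         # straight arm: theta2 = 0; fully folded: theta2 = pi
    return theta1, theta2
Feeding this the F returned by the clamping sketch above gives the angles for the reachable point nearest to P.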

Related

Problem implementing Attentive Pooling Network for Question Answering

I'm following this paper to implement an Attentive Pooling Network to build a Question Answering system. In chapter 2.1, it describes the CNN layer:
where q_emb is a question where each token (word) has been embedded using word2vec. q_emb has shape (d, M). d is the dimension of the word embedding and M the length of the question. In a similar way, a_emb is the embedding of the answer with shape (d, L).
My question is: how is the convolution done, and how is it possible that W_1 and b_1 are the same for both operations? In my opinion, at least b_1 should have a different dimension in each case (and it should be a matrix, not a vector...).
At the moment I've implemented this operation in PyTorch:
### Input is a tensor of shape (batch_size, 1, M or L, d*k)
conv2 = nn.Conv2d(1, c, (d*k, 1))
I find that the authors of the paper are trusting the readers to assume/figure out a lot of things here. From what I read, here is what I could gather:
W1 should be a 1 x dk matrix, because that is the only shape that would make sense in order to get Q as a c x M matrix.
Assuming this, b1 need not be a matrix. From the above you get a c x 1 x M tensor, which can easily be reshaped to a c x M matrix, and b1 can be a c x 1 vector which is broadcast and added to the rest of the matrix.
Since c, d, and k are hyperparameters, you can easily have the same W1 and b1 for both Q and A.
This is what I think so far; I will re-read and edit in case anything's amiss.
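To make the shapes concrete, here is a small PyTorch sketch under those assumptions (the sizes d, k, c, M, L below are made-up placeholders, not values from the paper):
import torch
import torch.nn as nn

d, k, c, M, L = 50, 3, 4, 20, 30      # placeholder hyperparameters
dk = d * k

# One Conv2d whose (dk, 1) kernel plays the role of W1 and whose bias plays the role of b1.
conv = nn.Conv2d(in_channels=1, out_channels=c, kernel_size=(dk, 1))

q_emb = torch.randn(1, 1, dk, M)      # question: each column is a dk-dimensional context vector
a_emb = torch.randn(1, 1, dk, L)      # answer: same feature dimension dk, different length

Q = conv(q_emb).squeeze(2)            # (1, c, M): the bias is a length-c vector broadcast over columns
A = conv(a_emb).squeeze(2)            # (1, c, L): the very same W1 and b1 work because dk is shared
print(Q.shape, A.shape)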

Calculate the displacement coordinates of a semi-articulated truck

As shown in the image below, I'm creating a program that will make a 2D animation of a truck that is made up of two articulated parts.
The truck pulls the trailer.
The trailer moves according to the docking axis on the truck.
Then, when the truck turns, the trailer should gradually align itself with the new angle of the truck, as it does in real life.
I would like to know if there is any formula or algorithm that does this calculation in an easy way.
I've already looked at inverse kinematics equations, but I think that for just two parts it shouldn't be so complex.
Can anybody help me?
Let A be the midpoint under the front axle, B be the midpoint under the middle axle, and C be the midpoint under the rear axle. For simplicity, assume that the hitch is at point B. These are all functions of time t, for example A(t) = (a_x(t), a_y(t)).
The trick is this. B is moving directly towards A with the component of A's velocity in that direction. In symbols, dB/dt = ((dA/dt) . u_AB) u_AB, where u_AB = (A-B)/||A-B|| and . is the dot product. Similarly, dC/dt = ((dB/dt) . u_BC) u_BC, where u_BC = (B-C)/||B-C||.
This turns into a non-linear first-order system in 6 variables. This can be solved with normal techniques, such as https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods.
UPDATE: Added code
Here is a Python implementation. You can replace it with https://rosettacode.org/wiki/Runge-Kutta_method for your favorite language and your favorite linear algebra library. Or even hand-roll that.
For my example I started with A at (1, 1), B at (2, 1) and C at (2, 2). Then pulled A to the origin in steps of size 0.01. That can be altered to anything that you want.
#! /usr/bin/env python

import numpy

# Runge-Kutta method.
def RK4(f):
    return lambda t, y, dt: (
            lambda dy1: (
                lambda dy2: (
                    lambda dy3: (
                        lambda dy4: (dy1 + 2*dy2 + 2*dy3 + dy4)/6
                    )( dt * f( t + dt  , y + dy3   ) )
                )( dt * f( t + dt/2, y + dy2/2 ) )
            )( dt * f( t + dt/2, y + dy1/2 ) )
        )( dt * f( t       , y         ) )

# da is a function giving the velocity of A at time t.
# The other three arguments are the initial positions of the three points.
def calculate_dy (da, A0, B0, C0):
    l_ab = float(numpy.linalg.norm(A0 - B0))
    l_bc = float(numpy.linalg.norm(B0 - C0))

    # t is time, y = [A, B, C]
    def update (t, y):
        (A, B, C) = y
        dA = da(t)

        ab_unit = (A - B) / float(numpy.linalg.norm(A-B))
        # The first term is the force.  The second is a correction to cause
        # roundoff errors in length to be self-correcting.
        dB = (dA.dot(ab_unit) + float(numpy.linalg.norm(A-B))/l_ab - l_ab) * ab_unit

        bc_unit = (B - C) / float(numpy.linalg.norm(B-C))
        # The first term is the force.  The second is a correction to cause
        # roundoff errors in length to be self-correcting.
        dC = (dB.dot(bc_unit) + float(numpy.linalg.norm(B-C))/l_bc - l_bc) * bc_unit

        return numpy.array([dA, dB, dC])

    return RK4(update)

A0 = numpy.array([1.0, 1.0])
B0 = numpy.array([2.0, 1.0])
C0 = numpy.array([2.0, 2.0])
dy = calculate_dy(lambda t: numpy.array([-1.0, -1.0]), A0, B0, C0)

t, y, dt = 0., numpy.array([A0, B0, C0]), .02
while t <= 1.01:
    print( (t, y) )
    t, y = t + dt, y + dy( t, y, dt )
From the answers I saw, I realized that the solution is not that simple and will have to be solved by an inverse kinematics algorithm.
This site is an example, and it is just a start, although it still does not solve everything, since point C is fixed there, whereas in the case of the truck it should move.
Based on this Analytic Two-Bone IK in 2D article, I made a fully functional model in Geogebra, where the core consists of two simple mathematical equations.

SICP - Which functions converge to fixed points?

In chapter 1, on fixed points, the book says we can find fixed points of certain functions by applying the function repeatedly:
f(x), f(f(x)), f(f(f(x))), ...
What are those functions?
It doesn't work for y = 2y, but when I rewrite it as y = y/2 it works.
Does y need to get smaller every time? Or are there general attributes a function must have so that its fixed point can be found by this method? What conditions should it satisfy for this to work?
According to the Banach fixed-point theorem, the iteration converges to a fixed point if the mapping (function) is a contraction. That means, for example, that the iteration for y = 2x does not converge (except when you start exactly at its fixed point 0), while the one for y = 0.999x does. In general, if f maps [a, b] to [a, b], then |f(x) - f(y)| should be at most c * |x - y| for some 0 <= c < 1 (for all x, y in [a, b]).
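A rough illustration in plain Python (rather than the book's Scheme), with an arbitrary tolerance:
import math

def fixed_point(f, guess, tol=1e-6, max_iter=1000):
    """Iterate x, f(x), f(f(x)), ... until successive values are within tol."""
    for _ in range(max_iter):
        nxt = f(guess)
        if abs(nxt - guess) < tol:
            return nxt
        guess = nxt
    raise ValueError("iteration did not converge")

print(fixed_point(math.cos, 1.0))          # ~0.739, cos is a contraction near its fixed point
print(fixed_point(lambda y: y / 2, 1.0))   # ~0, since |f'(y)| = 1/2 < 1
# fixed_point(lambda y: 2 * y, 1.0)        # raises: 2y is not a contraction, the iterates blow up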
Say you have:
f(x) = sin(x)
then x = 0 is a fixed point of the function since:
f(0) = sin(0) = 0
f(f(0)) = sin(sin(0)) = sin(0) = 0
Not every point along the x-axis is a fixed point of sin; only 0 is.
Different functions have different fixed points, if they have any at all. You can find more about fixed points of functions on Wikipedia.

Simplifying recursive mean calculation

If we have
Ei = mean [abs (Hi - p) for p in Pi]
H = mean [H0, H1, ... Hi, ... Hn]
P = concat [P0, P1, ... Pi, ... Pn]
then does there exist a more efficient way to compute
E = mean [abs (H - p) for p in P]
in terms of H, P, and the Eis and His, given that H, E, and P go on to be used as Hi, Ei, and Pi for some i, at a higher recursive level?
If we store the length of Pi as Li at each stage, then we can let
L = sum [L0, L1, ... Li, ... Ln]
allowing us to perform the somewhat easier calculation
E = sum [abs (H - p) for p in P] / L
but the use of the abs function seems to severely restrict the kinds of algebraic manipulations we can use to simplify the numerator.
No. Imagine you have just two groups, where one group has H1 = 1 and the other has H2 = 2. Imagine that every p in P1 is either 0 or 2, and every p in P2 is either 1 or 3. Then you always have E1 = 1 and E2 = 1, regardless of the actual values in P1 and P2.
However, if all p in P1 are 2 and all p in P2 are 1, then E is minimized (specifically 0.5), because H = 1.5. Or all p in P1 could be 0 and all p in P2 could be 3, in which case E is maximized (specifically 1.5). And you can get any value of E in between 0.5 and 1.5 depending on the distribution of the p. If you don't actually go and look at all the individual p, there is no way to tell what exact value of E you will get between 0.5 and 1.5. So you can't do better than O(n) time to compute E, where n is the total size of P, which is the same running time as computing E directly from its definition.
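A quick Python check of that counterexample (the concrete group contents are made up, but they satisfy the constraints above):
from statistics import mean

H1, H2 = 1, 2
H = mean([H1, H2])                        # 1.5

def mad(h, ps):                           # mean absolute deviation of ps from h
    return mean(abs(h - p) for p in ps)

for P1, P2 in [([2, 2], [1, 1]), ([0, 0], [3, 3]), ([0, 2], [1, 3])]:
    print(mad(H1, P1), mad(H2, P2), mad(H, P1 + P2))
# E1 and E2 are 1.0 in every case, yet E comes out as 0.5, 1.5, and 1.0 respectively.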

Best way to do an iteration scheme

I hope this hasn't been asked before; if so, I apologize.
EDIT: For clarity, the following notation will be used: boldface uppercase for matrices, boldface lowercase for vectors, and italics for scalars.
Suppose x0 is a vector, A and B are matrix functions, and f is a vector function.
I'm looking for the best way to do the following iteration scheme in Mathematica:
A0 = A(x0), B0 = B(x0), f0 = f(x0)
x1 = Inverse(A0)(B0.x0 + f0)
A1 = A(x1), B1 = B(x1), f1 = f(x1)
x2 = Inverse(A1)(B1.x1 + f1)
...
I know that a for-loop can do the trick, but I'm not quite familiar with Mathematica, and I'm concerned that this may not be the most efficient way to do it. This is a justified concern, as I would like to define a function u(N) := xN and use it in further calculations.
I guess my questions are:
What's the most efficient way to program the scheme?
Is RecurrenceTable a way to go?
EDIT
It was a bit more complicated than I thought. I'm providing more details in order to obtain a more thorough response.
Before doing the recurrence, I'm having problems understanding how to program the functions A, B and f.
Matrices A and B are functions of the time step dt = 1/T and the space step dx = 1/M, where T and M are the number of points in the {0 < x < 1, 0 < t} region. This is also true for the vector function f.
The dependence of A, B and f on x is rather tricky:
A and B are upper and lower triangular matrices (like a tridiagonal matrix; I suppose we can call them multidiagonal), with defined constant values on their diagonals.
Given a point 0 < xs < 1, I need to determine its representative xn in the mesh (the closest one), and then substitute the nth row of A and B with the function v(x) (transposed, of course), and the nth row of f with the function w(x).
Summarizing, A = A(dt, dx, xs, x). The same is true for B and f.
Then I need to do the loop mentioned above, to define u(x) = step[T].
Hope I've explained myself.
I'm not sure if it's the best method, but I'd just use plain old memoization. You can represent an individual step as
xstep[x_] := Inverse[A[x]].(B[x].x + f[x])
and then
u[0] = x0
u[n_] := u[n] = xstep[u[n-1]]
If you know how many values you need in advance, and it's advantageous to precompute them all for some reason (e.g. you want to open a file, use its contents to calculate xN, and then free the memory), you could use NestList. Instead of the previous two lines, you'd do
xlist = NestList[xstep, x0, 10];
u[n_] := xlist[[n]]
This will break if n > 10, of course (obviously, change 10 to suit your actual requirements).
Of course, it may be worth looking at your specific functions to see if you can make some algebraic simplifications.
I would probably write a function that accepts A0, B0, x0, and f0, and then returns A1, B1, x1, and f1 - say
step[A0_?MatrixQ, B0_?MatrixQ, x0_?VectorQ, f0_?VectorQ] := Module[...]
I would then Nest that function. It's hard to be more precise without more precise information.
Also, if your procedure is numerical, then you certainly don't want to compute Inverse[A0], as this is not a numerically stable operation. Rather, you should write
A0.x1 == B0.x0+f0
and then use a numerically stable solver to find x1. Of course, Mathematica's LinearSolve provides such an algorithm.
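If it helps to see the idea outside Mathematica, here is a rough numpy sketch of the solve-based iteration; A, B, and f below are placeholder callables standing in for your actual matrix and vector functions:
import numpy as np

def iterate(A, B, f, x0, steps):
    """Compute x_{k+1} by solving A(x_k) x_{k+1} = B(x_k) x_k + f(x_k), avoiding explicit inverses."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = np.linalg.solve(A(x), B(x) @ x + f(x))
    return x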
