How to determine if a point lies OVER a triangle in 3D - algorithm

I need an example of a fast algorithm to determine whether a point lies over a triangle in 3D, i.e., whether the projection of the point onto the plane containing the triangle is inside that triangle.
I need to calculate the distance between a point and a triangle: between the point and the triangle's face if its projection lies inside the triangle, or between the point and an edge of the triangle if its projection lies outside it.
I hope I made it clear enough. I found some examples for 2D using barycentric coordinates but can't find any for 3D. Is there a faster way than projecting the point onto the plane, mapping the projected point and the triangle to 2D, and solving the standard "point in triangle" problem?

If the triangle's vertices are A, B, C and the point is P, then begin by finding the triangle's normal N. For this just compute N = (B-A) X (C-A), where X is the vector cross product.
For the moment, assume P lies on the same side of ABC as its normal.
Consider the 3d pyramid with faces ABC, ABP, BCP, CAP. The projection of P onto ABC is inside it if and only if the dihedral angles between ABC and each of the other 3 triangles are all less than 90 degrees. In turn, these angles are equal to the angle between N and the respective outward-facing triangle normal! So our algorithm is this:
Let N = (B-A) X (C-A), N1 = (B-A) X (P-A), N2 = (C-B) X (P-B), N3 = (A-C) X (P-C)
return N1 * N >= 0 and N2 * N >= 0 and N3 * N >= 0;
The stars are dot products.
We still need to consider the case where P lies on the opposite side of ABC as its normal. Interestingly, in this case the vectors N1, N2, N3 now point into the pyramid, where in the above case they point outward. This cancels the opposing normal, and the algorithm above still provides the right answer. (Don't you love it when that happens?)
Cross products in 3d each require 6 multiplies and 3 subtractions. Dot products are 3 multiplies and 2 additions. On average (considering e.g. N2 and N3 need not be calculated if N1 * N < 0), the algorithm needs 2.5 cross products and 1.5 dot products. So this ought to be pretty fast.
If the triangles can be poorly formed, then you might want to use Newell's algorithm in place of the arbitrarily chosen cross products.
Note that edge cases where any triangle turns out to be degenerate (a line or point) are not handled here. You'd have to do this with special case code, which is not so bad because the zero normal says much about the geometry of ABC and P.
Here is C code, which uses a simple identity to reuse operands better than the math above:
#include <stdio.h>

void diff(double *r, double *a, double *b) {
    r[0] = a[0] - b[0];
    r[1] = a[1] - b[1];
    r[2] = a[2] - b[2];
}

void cross(double *r, double *a, double *b) {
    r[0] = a[1] * b[2] - a[2] * b[1];
    r[1] = a[2] * b[0] - a[0] * b[2];
    r[2] = a[0] * b[1] - a[1] * b[0];
}

double dot(double *a, double *b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

int point_over_triangle(double *a, double *b, double *c, double *p) {
    double ba[3], cb[3], ac[3], px[3], n[3], nx[3];
    diff(ba, b, a);
    diff(cb, c, b);
    diff(ac, a, c);
    cross(n, ac, ba); // Same as n = ba X ca
    diff(px, p, a);
    cross(nx, ba, px);
    if (dot(nx, n) < 0) return 0;
    diff(px, p, b);
    cross(nx, cb, px);
    if (dot(nx, n) < 0) return 0;
    diff(px, p, c);
    cross(nx, ac, px);
    if (dot(nx, n) < 0) return 0;
    return 1;
}

int main(void) {
    double a[] = { 1, 1, 0 };
    double b[] = { 0, 1, 1 };
    double c[] = { 1, 0, 1 };
    double p[] = { 0, 0, 0 };
    printf("%s\n", point_over_triangle(a, b, c, p) ? "over" : "not over");
    return 0;
}
I've tested it lightly and it seems to be working fine.
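For comparison, here is a Python sketch of the same test (function names are mine), plus the point-to-plane distance the question asks about; note the distance shown is the point-triangle distance only when the projection falls inside the triangle:

```python
import math

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def point_over_triangle(a, b, c, p):
    # Projection of p is inside triangle abc iff all three edge normals
    # point the same way as the triangle normal.
    n = cross(sub(b, a), sub(c, a))
    return (dot(cross(sub(b, a), sub(p, a)), n) >= 0 and
            dot(cross(sub(c, b), sub(p, b)), n) >= 0 and
            dot(cross(sub(a, c), sub(p, c)), n) >= 0)

def distance_to_plane(a, b, c, p):
    # Valid as the point-triangle distance only when point_over_triangle is true.
    n = cross(sub(b, a), sub(c, a))
    return abs(dot(n, sub(p, a))) / math.sqrt(dot(n, n))
```

With the same inputs as the C test above, `point_over_triangle` reports "over", consistent with the C program.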

Let's assume that the vertices of the triangle are v, w, and the origin 0, and let's call the point p.
For the benefit of other readers, here's the barycentric approach for 2D point-in-triangle, to which you alluded. We solve the following system in the variables beta:
[v.x w.x] [beta.v] [p.x]
[v.y w.y] [beta.w] = [p.y] .
Test whether 0 <= beta.v && 0 <= beta.w && beta.v + beta.w <= 1.
For 3D projected-point-in-triangle, we have a similar but overdetermined system:
[v.x w.x] [beta.v] [p.x]
[v.y w.y] [beta.w] = [p.y] .
[v.z w.z] [p.z]
The linear least squares solution gives coefficients beta for the point closest to p on the plane spanned by v and w, i.e., the projection. For your application, a solution via the following normal equations likely will suffice:
[v.x v.y v.z] [v.x w.x] [beta.v] [v.x v.y v.z] [p.x]
[w.x w.y w.z] [v.y w.y] [beta.w] = [w.x w.y w.z] [p.y] ,
[v.z w.z] [p.z]
from which we can reduce the problem to the 2D case using five dot products. This should be comparable in complexity to the method that Nico suggested but without the singularity.
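As a sketch of this reduction (assuming, per the notation above, that the triangle has been translated so one vertex sits at the origin; names are mine), the five dot products and the resulting 2x2 solve might look like:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def projection_in_triangle(v, w, p):
    # Triangle has vertices at the origin, v, and w; p is the 3D query point.
    # Normal equations: [[v.v, v.w], [v.w, w.w]] @ beta = [v.p, w.p]
    vv, vw, ww = dot(v, v), dot(v, w), dot(w, w)
    vp, wp = dot(v, p), dot(w, p)           # five dot products in total
    det = vv * ww - vw * vw                 # zero only for a degenerate triangle
    bv = (vp * ww - wp * vw) / det          # Cramer's rule on the 2x2 system
    bw = (vv * wp - vw * vp) / det
    return bv >= 0 and bw >= 0 and bv + bw <= 1
```

The returned condition is exactly the 2D barycentric test from the first system above, applied to the least-squares coefficients.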


Algorithm to find lines perpendicular to a given line of the form Ax+By+C=0

Is there a way to find the lines perpendicular to a given line, given that all the lines are of the form Ax + By + C = 0?
I came up with a solution that takes quadratic running time. Is there a better way?
This is my code:
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Scanner;

public class Perpendicular {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int n = in.nextInt();
        ArrayList<Line> list_of_lines = new ArrayList<Line>();
        for (long i = 0; i < n; i++) {
            long a = in.nextLong();
            long b = in.nextLong();
            long c = in.nextLong();
            list_of_lines.add(new Line(a, b, c));
        }
        long p[] = new long[n];
        Arrays.fill(p, 0);
        for (int i = 0; i < n; i++) {
            for (int j = 1; j < n; j++) {
                if (list_of_lines.get(i).slope() * list_of_lines.get(j).slope() == -1) {
                    p[i]++; // num of perpendicular lines to i
                    p[j]++; // num of perpendicular lines to j
                }
            }
        }
    }
}

class Line {
    public long a, b, c;

    public Line(long a, long b, long c) {
        this.a = a;
        this.b = b;
        this.c = c;
    }

    public double slope() {
        return (double) a / b; // cast needed: a/b alone would be integer division
    }
}
Any line guided by the equation -Bx + Ay + D = 0 belongs to the set of lines perpendicular to the given family of lines (Ax + By + C = 0).
You just need to check that the product of their slopes equals -1.
So a family of lines of the form -Bx + Ay + D = 0 satisfies this criterion.
Checking all families of lines is now quite easy, and much easier than your quadratic solution.
EDIT :-
You don't need to run your loop this way. Just ensure that once you've checked perpendicularity between two lines, you don't perform the commutative check again: having checked lines 1 and 2, you don't need to duplicate the effort by also checking lines 2 and 1. Avoid this in your code by changing the inner-loop initialisation to j = i + 1.
With that change, you also never test a line against itself, which can never be perpendicular anyway; this is resolved by the same hint.
Convert the equation to the form y = m*x + c, m is the slope and c is the y-intercept. Now, all lines perpendicular to this set of lines will have slope -1/m (perpendicular lines have slope product -1), which should give you the equation.
If you have a line Ax + By + C = 0, then a perpendicular line A1x + B1y + C1 = 0 must satisfy the dot product v * v1 = 0 (* is the dot product here). So (A, B) * (A1, B1) = A1 * A + B1 * B = 0, where one of the solutions is A = -B1, B = A1.
So the perpendicular line has the form Bx - Ay + C1 = 0, where C1 is any constant. Which means finding one is O(1).
Two lines would be perpendicular iff their slopes multiply to -1. (Assuming none of them are horizontal/vertical)
You can group the lines according to their slope in O(n log n).
For each pair of groups with perpendicular slopes each pair of lines in them would be an answer which you can iterate over. O(n lg n + num_of_answers)
Thus the algorithm would be O(n lg n + num_of_answers).
Note that there could be O(n^2) such pairs; but if you just need to find the number of such pairs, it could be found in O(n lg n).
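The grouping step can be sketched as follows (in Python for brevity; the canonical-key normalization and names are mine). Each line Ax + By + C = 0 is bucketed by a reduced direction vector, and a line with direction (p, q) is perpendicular to direction (-q, p):

```python
from collections import defaultdict
from math import gcd

def perpendicular_pairs(lines):
    # lines: list of (A, B, C) for Ax + By + C = 0.
    # The direction of the line is (B, -A); reduce by gcd and fix the sign
    # so parallel lines share one dictionary key.
    def key(a, b):
        g = gcd(a, b) or 1
        a, b = a // g, b // g
        if a < 0 or (a == 0 and b < 0):
            a, b = -a, -b
        return (a, b)

    groups = defaultdict(int)
    for a, b, _ in lines:
        groups[key(b, -a)] += 1

    total = 0
    for (p, q), cnt in groups.items():
        total += cnt * groups.get(key(-q, p), 0)
    return total // 2  # each unordered pair was counted twice
```

For example, with the lines x = 0, y = 0, y = 1, and x + y = 0, the two perpendicular pairs are (x = 0, y = 0) and (x = 0, y = 1).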

Search in array with high dimensions having specific properties

I have a 3D array in which the values are monotonic along each dimension. How do I find all (x, y, z) such that |f(x, y, z) - v1| < t?
There are Omega(n^2) points whose coordinates sum to n - 1. Nothing is known a priori about how the values of these points compare to each other, so, in the worst case, all of them must be inspected. An upper bound that matches up to constant factors is provided by running the 2D algorithm in each constant-z slice.
For each value (eg. v1), execute the following steps:
Execute the 2D algorithm for the 4 cube faces tangent to the X axis (Y=0, Y=n-1, Z=0, Z=n-1). Index the resulting set of matching (X, Y, Z) cells by X coordinate for the next step.
Execute the 2D algorithm for all n slices along the X axis (X=0..n-1), using the result of step 1 to initialize the first boundary point for the 2D algorithm. If there are no matching cells for the given x coordinate, move on to the next slice in constant time.
Worst case complexity will be O(O(2D algorithm) * n).
For multiple values (v2, etc.) keep a cache of function evaluations, and re-execute the algorithm for each value. For 100^3, a dense array would suffice.
It might be useful to think of this as an isosurface extraction algorithm, though your monotonicity constraint makes it easier.
If the 3d array is monotonically non-decreasing in each dimension then we know that if
f(x0, y0, z0) < v1 - t
or
f(x1, y1, z1) > v1 + t
then no element of the sub-array f(x0...x1, y0...y1, z0...z1) can contain any interesting point. To see this consider for example that
f(x0, y0, z0) <= f(x, y0, z0) <= f(x, y, z0) <= f(x, y, z)
holds for each (x, y, z) of the sub-array, and a similar relation holds (with reversed direction) for (x1, y1, z1). Thus f(x0, y0, z0) and f(x1, y1, z1) are the minimum and maximum value of the sub-array, respectively.
A simple search approach can then be implemented by using a recursive subdivision scheme:
template<typename T, typename CBack>
int values(Mat3<T>& data, T v0, T v1, CBack cback,
           int x0, int y0, int z0, int x1, int y1, int z1) {
    int count = 0;
    if (x1 - x0 <= 2 && y1 - y0 <= 2 && z1 - z0 <= 2) {
        // Small block (1-8 cells), just scan it
        for (int x=x0; x<x1; x++) {
            for (int y=y0; y<y1; y++) {
                for (int z=z0; z<z1; z++) {
                    T v = data(x, y, z);
                    if (v >= v0 && v <= v1) cback(x, y, z);
                    count += 1;
                }
            }
        }
    } else {
        T va = data(x0, y0, z0), vb = data(x1-1, y1-1, z1-1);
        count += 2;
        if (vb >= v0 && va <= v1) {
            int x[] = {x0, (x0 + x1) >> 1, x1};
            int y[] = {y0, (y0 + y1) >> 1, y1};
            int z[] = {z0, (z0 + z1) >> 1, z1};
            for (int ix=0; ix<2; ix++) {
                for (int iy=0; iy<2; iy++) {
                    for (int iz=0; iz<2; iz++) {
                        count += values<T, CBack>(data, v0, v1, cback,
                                                  x[ix], y[iy], z[iz],
                                                  x[ix+1], y[iy+1], z[iz+1]);
                    }
                }
            }
        }
    }
    return count;
}
The code basically accepts a sub-array and simply skips the search if the lowest element is too big or the highest element is too small, and splits the array in 8 sub-cubes otherwise. The recursion ends when the sub-array is small (2x2x2 or less) and a full scan is performed in this case.
Experimentally I found that with this quite simple approach, an array with 100x200x300 elements, generated by setting element f(i,j,k) to max(f(i-1,j,k), f(i,j-1,k), f(i,j,k-1)) + random(100), can be searched for the middle value with t=1 while checking only about 3% of the elements (25 elements checked for each element found within range).
Data 100x200x300 = 6000000 elements, range [83, 48946]
Looking for [24594-1=24593, 24594+1=24595]
Result size = 6850 (5.4 ms)
Full scan = 6850 (131.3 ms)
Search count = 171391 (25.021x, 2.857%)
Since the function is non-decreasing, I think you can do something with binary searches.
Inside a (x, 1, 1) (column) vector you can do a binary search to find the range that matches your requirement which would be O(log(n)).
To find which column vectors to look in you can do a binary search over (x, y, 1) (slices) vectors checking just the first and last points to know if the value can fall in them which will take again O(log(n)).
To know which slices to look in you can binary search the whole cube checking the 4 points ((0, 0), (x, 0), (x, y), (0, y)) which would take O(log(n)).
So in total, the algorithm will take log(z) + a * log(y) + b * log(x) where a is the number of matching slices and b is the number of matching columns.
Naively calculating the worst case is O(y * z * log(x)).
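The per-column step of this idea can be sketched with a standard binary search (a minimal version; assuming each column is monotonically non-decreasing, finding the index range of values within [v1 - t, v1 + t] is O(log n)):

```python
from bisect import bisect_left, bisect_right

def matching_range(column, lo, hi):
    # column is monotonically non-decreasing; return the half-open index
    # range [i, j) of values v with lo <= v <= hi, in O(log n).
    return bisect_left(column, lo), bisect_right(column, hi)
```

For example, `matching_range([1, 2, 4, 4, 7], 2, 4)` returns `(1, 4)`: the three values 2, 4, 4.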

Find a location in a matrix so that the cost of everyone moving to that location is smallest

There is an m×n matrix. Several groups of people are located at certain spots. In the following example, there are three groups, and the number 4 indicates that there are four people in that group. We want to find a meeting point in the matrix such that the total cost of all groups moving to that point is minimized. For how to compute the cost of moving one group to a point, see the following example.
Group1: (0, 1), 4
Group2: (1, 3), 3
Group3: (2, 0), 5
. 4 . .
. . . 3
5 . . .
If all three groups move to (1, 1), the cost is:
4*((1-0)+(1-1)) + 5*((2-1)+(1-0))+3*((1-1)+(3-1))
My idea is:
First, this two-dimensional problem can be reduced to two one-dimensional problems.
In the one-dimensional problem, I can prove that the best spot must be at one of the groups.
This gives an O(G^2) algorithm (G is the number of groups).
Use iterator's example for illustration:
{(-100,0,100),(100,0,100),(0,1,1)},(x,y,population)
for x, {(-100,100),(100,100),(0,1)}, 0 is the best.
for y, {(0,100),(0,100),(1,1)}, 0 is the best.
So it's (0, 0)
Is there any better solution for this problem?
I like the idea of noticing that the objective function decomposes into the sum of two one-dimensional problems. The remaining problems look a lot like the weighted median to me (see the weighted.median documentation at http://www.stat.ucl.ac.be/ISdidactique/Rhelp/library/R.basic/html/weighted.median.html, or consider what happens to the objective function as you move away from the weighted median).
The URL above seems to say the weighted median takes time n log n, which I guess means you could attain their claim by sorting the data and then doing a linear pass to work out the weighted median. The numbers you have to sort are in the ranges [0, m] and [0, n], so you could in theory do better if m and n are small, or, of course, if you are given the data pre-sorted.
Come to think of it, I don't see why you shouldn't be able to find the weighted median with a linear-time randomized algorithm similar to the one used to find the median (http://en.wikibooks.org/wiki/Algorithms/Randomization#find-median): repeatedly pick a random element, use it to partition the remaining items, and work out which half the weighted median should be in. That gives you expected linear time.
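The sort-then-scan weighted median mentioned above can be sketched as (a minimal version; names are mine):

```python
def weighted_median(values, weights):
    # Sort by value, then scan until the cumulative weight reaches half
    # of the total weight: O(n log n) overall, dominated by the sort.
    pairs = sorted(zip(values, weights))
    half = sum(weights) / 2.0
    acc = 0.0
    for v, w in pairs:
        acc += w
        if acc >= half:
            return v
```

Using the x-coordinate data from the example above, `weighted_median([-100, 100, 0], [100, 100, 1])` returns 0, matching the claimed best spot.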
I think this can be solved in O(max(n, m)) time and O(max(n, m)) space.
We have to find the median of the x coordinates and the median of the y coordinates of the k points; the answer is (x_median, y_median).
Assume the function takes the following inputs:
the total number of points: int k = 4 + 3 + 5 = 12;
an array of coordinates:
struct coord_t c[12] = {(0,1),(0,1),(0,1),(0,1),(1,3),(1,3),(1,3),(2,0),(2,0),(2,0),(2,0),(2,0)};
and int size = max(n, m);
Let the input of the coordinates be an array of coordinates. coord_t c[k]
struct coord_t {
int x;
int y;
};
1. My idea is to create an array of size = max(n, m);
2. int array[size] = {0}; // initialize all the elements in the array to zero

int count = 0;
for (i = 0; i < k; i++)
{
    array[c[i].x] += 1;
    count++;
}
int tempCount = 0;
for (i = 0; i < size; i++)
{
    if (array[i] != 0)
    {
        tempCount += array[i];
    }
    if (tempCount >= count / 2)
    {
        break;
    }
}
int x_median = i;

// similarly with the y coordinate
int array[size] = {0}; // re-initialize all the elements to zero
count = 0;
for (i = 0; i < k; i++)
{
    array[c[i].y] += 1;
    count++;
}
tempCount = 0;
for (i = 0; i < size; i++)
{
    if (array[i] != 0)
    {
        tempCount += array[i];
    }
    if (tempCount >= count / 2)
    {
        break;
    }
}
int y_median = i;

coord_t temp;
temp.x = x_median;
temp.y = y_median;
return temp;
Sample working code for an MxM matrix with k points:
/*
Problem:
Given an MxM grid and N people placed at random positions on the grid, find the optimal meeting point of all the people.

Answer:
Find the median of all the x coordinates of the positions of the people.
Find the median of all the y coordinates of the positions of the people.
*/
#include <stdio.h>
#include <stdlib.h>

typedef struct coord_struct {
    int x;
    int y;
} coord_struct;

typedef struct distance {
    int count;
} distance;

coord_struct toFindTheOptimalDistance(int N, int M, coord_struct input[])
{
    coord_struct z;
    z.x = 0;
    z.y = 0;
    int i, j;
    distance *array_dist;
    array_dist = (distance *) malloc(sizeof(distance) * M);
    for (i = 0; i < M; i++)
    {
        array_dist[i].count = 0;
    }
    for (i = 0; i < N; i++)
    {
        array_dist[input[i].x].count += 1;
        printf("%d and %d\n", input[i].x, array_dist[input[i].x].count);
    }
    j = 0;
    for (i = 0; i <= N / 2;)
    {
        printf("%d\n", i);
        if (array_dist[j].count != 0)
            i += array_dist[j].count;
        j++;
    }
    printf("x coordinate = %d", j - 1);
    int x = j - 1;
    for (i = 0; i < M; i++)
        array_dist[i].count = 0;
    for (i = 0; i < N; i++)
    {
        array_dist[input[i].y].count += 1;
    }
    j = 0;
    for (i = 0; i <= N / 2;)   /* was i < N/2, inconsistent with the x pass above */
    {
        if (array_dist[j].count != 0)
            i += array_dist[j].count;
        j++;
    }
    int y = j - 1;
    printf("y coordinate = %d", j - 1);
    z.x = x;
    z.y = y;
    return z;
}

int main()
{
    coord_struct input[5];
    input[0].x = 1;
    input[0].y = 2;
    input[1].x = 1;
    input[1].y = 2;
    input[2].x = 4;
    input[2].y = 1;
    input[3].x = 5;
    input[3].y = 2;
    input[4].x = 5;
    input[4].y = 2;
    int size = 6; /* grid dimension: one more than the largest coordinate (m, n were undefined here) */
    coord_struct x = toFindTheOptimalDistance(5, size, input);
    return 0;
}
Your algorithm is fine: divide the problem into two one-dimensional problems. The time complexity is O(n log n).
You only need to split every group into single people, so each move left, right, up, or down costs 1 per person. We only need to find where the (n + 1)/2-th person stands, for the row and the column respectively.
Consider your sample, {(-100,0,100),(100,0,100),(0,1,1)}.
Taking just the x coordinates gives {(-100,100),(100,100),(0,1)}: 100 people stand at -100, 100 people stand at 100, and 1 person stands at 0.
Sort by x, giving {(-100,100),(0,1),(100,100)}. There are 201 people in total, so we need the location where the 101st person stands. That is 0, which is the answer for x.
The column uses the same algorithm: {(0,100),(0,100),(1,1)} is already sorted, and the 101st person is at 0, so the answer for the column is also 0.
The answer is (0, 0).
I can think of O(n) solution for one dimensional problem, which in turn means you can solve original problem in O(n+m+G).
Suppose, people are standing like this, a_0, a_1, ... a_n-1: a_0 people at spot 0, a_1 at spot 1. Then the solution in pseudocode is
cur_result = sum(i*a_i, i = 1..n-1)
cur_r = sum(a_i, i = 1..n-1)
cur_l = a_0
for i = 1:n-1
cur_result = cur_result - cur_r + cur_l
cur_r = cur_r - a_i
cur_l = cur_l + a_i
end
You need to find point, where cur_result is minimal.
So you need O(n) + O(m) for solving 1d problems + O(G) to build them, meaning total complexity is O(n+m+G).
Alternatively, you can solve the 1d problem in O(G log G) (or O(G) if the data is sorted) using the same idea. Choose between the two based on the expected number of groups.
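The pseudocode above can be turned into a runnable sketch (a[i] is the number of people at spot i; the function name and the returned (spot, cost) pair are mine):

```python
def best_spot(a):
    # a[i] = number of people at spot i; returns the spot minimizing
    # the total weighted walking distance, plus that cost, in O(n).
    cost = sum(i * a[i] for i in range(len(a)))   # cost of meeting at spot 0
    right = sum(a[1:])   # people strictly right of the current spot
    left = a[0]          # people at or left of the current spot
    best, best_cost = 0, cost
    for i in range(1, len(a)):
        # Moving one step right: everyone on the right gets 1 closer,
        # everyone on the left gets 1 farther.
        cost = cost - right + left
        right -= a[i]
        left += a[i]
        if cost < best_cost:
            best, best_cost = i, cost
    return best, best_cost
```

With the distribution [5, 1, 0, 3] (used in the O(G) answer below on this page), the best spot is 0 with cost 10.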
you can solve this in O(G Log G) time by reducing it to, two one dimensional problems as you mentioned.
And as to how to solve it in one dimension, just sort them and go through them one by one and calculate cost moving to that point. This calculation can be done in O(1) time for each point.
You can also avoid Log(G) component if your x and y coordinates are small enough for you to use bucket/radix sort.
Inspired by kilotaras's idea. It seems that there is a O(G) solution for this problem.
Since everyone agrees that the two-dimensional problem can be reduced to two one-dimensional problems, I will not repeat that and will just focus on solving the one-dimensional problem in O(G).
Suppose people are standing like this: a[0], a[1], ..., a[n-1], with a[i] people standing at spot i. There are G spots with people (G <= n). Call these spots g[1], g[2], ..., g[G], where each g[i] is in [0, ..., n-1]. Without loss of generality, assume g[1] < g[2] < ... < g[G].
It's not hard to prove that the optimal spot must be one of these G spots; I will skip the proof here and leave it as an exercise for interested readers.
Given this observation, we can just compute the cost of moving to the spot of every group and then choose the minimal one. There is an obvious O(G^2) algorithm to do this.
But using kilotaras's idea, we can do it in O(G) (no sorting):
cost[1] = sum((g[i]-g[1])*a[g[i]], i = 2,...,G)
    // the cost of moving to the spot of the first group; this step is O(G)
cur_r = sum(a[g[i]], i = 2,...,G)
    // how many people are on the right side of the second group, including the second group; this step is O(G)
cur_l = a[g[1]]
    // how many people are on the left side of the second group, not including the second group
for i = 2:G
gap = g[i] - g[i-1];
cost[i] = cost[i-1] - cur_r*gap + cur_l*gap;
if i != G
cur_r = cur_r - a[g[i]];
cur_l = cur_l + a[g[i]];
end
end
The minimal of cost[i] is the answer.
Using the example 5 1 0 3 to illustrate the algorithm.
In this example,
n = 4, G = 3.
g[1] = 0, g[2] = 1, g[3] = 3.
a[0] = 5, a[1] = 1, a[2] = 0, a[3] = 3.
(1) cost[1] = 1*1+3*3 = 10, cur_r = 4, cur_l = 5.
(2) cost[2] = 10 - 4*1 + 5*1 = 11, gap = g[2] - g[1] = 1, cur_r = 4 - a[g[2]] = 3, cur_l = 6.
(3) cost[3] = 11 - 3*2 + 6*2 = 17, gap = g[3] - g[2] = 2.
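A runnable sketch of this O(G) scan (returning the whole cost array so the worked example above can be checked; names are mine):

```python
def group_costs(g, a):
    # g: sorted spot positions of the G groups; a[i]: people at spot g[i].
    # Returns cost[i] = total walking distance if everyone meets at g[i].
    cost = sum((g[i] - g[0]) * a[i] for i in range(len(g)))
    right = sum(a[1:])   # people strictly right of g[0]
    left = a[0]          # people at or left of g[0]
    costs = [cost]
    for i in range(1, len(g)):
        gap = g[i] - g[i - 1]
        # Moving right by `gap`: right side gets gap closer, left side gap farther.
        cost = cost - right * gap + left * gap
        costs.append(cost)
        right -= a[i]
        left += a[i]
    return costs
```

With g = [0, 1, 3] and a = [5, 1, 3] as in the example, this yields costs [10, 11, 17], matching cost[1], cost[2], cost[3] above; the minimum, 10, is at spot 0.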

Find If 4 Points Form a Quadrilateral

Can someone please show me an algorithm for a function that returns true if 4 points form a quadrilateral, and false otherwise? The points do not come in any order.
I've tried checking all permutations of the 4 points to see whether some 3 of them form a straight line; if 3 points form a straight line, it's not a quadrilateral. But then I realized there's no way to tell the order, and I've struggled for several hours of thinking and googling with no result :(
I've read these questions:
find if 4 points on a plane form a rectangle?
Determining ordering of vertices to form a quadrilateral
But I still found no solution. The first can't detect other kinds of quadrilaterals, and the second assumes the points already form a quadrilateral. Is there another way to find out whether 4 points form a quadrilateral?
Thanks in advance.
EDIT FOR CLARIFICATION:
I define quadrilateral as simple quadrilateral, basically all shapes shown in this picture:
except the shape with "quadrilateral" and "complex" caption.
As for problems with the "checking for collinear triplets" approach, I tried to check the vertical, horizontal, and diagonal lines with something like this:
def is_linear_line(pt1, pt2, pt3):
    return (pt1[x] == pt2[x] == pt3[x] or
            pt1[y] == pt2[y] == pt3[y] or
            slope(pt1, pt2) == slope(pt2, pt3))
And realized that rectangles and squares would count as a linear line, since the slopes of the points will all be the same. Hope this clears things up.
This is for checking if a quadrilateral is convex. Not if it is a simple quadrilateral.
I did like this in objective-c https://github.com/hfossli/AGGeometryKit/
extern BOOL AGQuadIsConvex(AGQuad q)
{
    BOOL isConvex = AGLineIntersection(AGLineMake(q.bl, q.tr), AGLineMake(q.br, q.tl), NULL);
    return isConvex;
}

BOOL AGLineIntersection(AGLine l1, AGLine l2, AGPoint *out_pointOfIntersection)
{
    // http://stackoverflow.com/a/565282/202451
    AGPoint p = l1.start;
    AGPoint q = l2.start;
    AGPoint r = AGPointSubtract(l1.end, l1.start);
    AGPoint s = AGPointSubtract(l2.end, l2.start);

    double s_r_crossProduct = AGPointCrossProduct(r, s);
    double t = AGPointCrossProduct(AGPointSubtract(q, p), s) / s_r_crossProduct;
    double u = AGPointCrossProduct(AGPointSubtract(q, p), r) / s_r_crossProduct;

    if (t < 0 || t > 1.0 || u < 0 || u > 1.0)
    {
        if (out_pointOfIntersection != NULL)
        {
            *out_pointOfIntersection = AGPointZero;
        }
        return NO;
    }
    else
    {
        if (out_pointOfIntersection != NULL)
        {
            AGPoint i = AGPointAdd(p, AGPointMultiply(r, t));
            *out_pointOfIntersection = i;
        }
        return YES;
    }
}
There is no way to determine both vertex order and presence of a quadrilateral in the same operation unless you use operations that are far more expensive than what you're already performing.
Checking for collinear triplets (like you did) will exclude cases where the four points form triangles or straight lines.
To exclude also the complex quadrilateral (with crossing edges):
A quadrilateral formed by the points A, B, C and D is complex, if the intersection of AB and CD (if any) lies between the points A and B, and the same applies for BC and DA.
Do you have any more inputs than the 4 points? Because if 4 points pass your test, they can always form 3 different quadrilaterals, sometimes of different families. For example, take a square, add its 2 diagonals, and remove 2 opposite sides.
So with only 4 points as input, you cannot do better than what you are already doing.
Let A, B, C and D be the four points. You have to assume that the edges are A-B, B-C, C-D, and D-A. If you can't make that assumption, the four points will always form a quadrilateral.
if (A-B intersects C-D) return false
if (B-C intersects A-D) return false
return true
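The two intersection tests above can be sketched as follows (a minimal version using orientation signs; degenerate cases such as touching endpoints or collinear overlaps are not handled):

```python
def ccw(a, b, c):
    # Twice the signed area of triangle abc: >0 counter-clockwise, <0 clockwise.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, p3, p4):
    # Proper crossing of segments p1p2 and p3p4: each segment's endpoints
    # lie strictly on opposite sides of the other segment's line.
    d1, d2 = ccw(p3, p4, p1), ccw(p3, p4, p2)
    d3, d4 = ccw(p1, p2, p3), ccw(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def is_simple_quadrilateral(a, b, c, d):
    # Assumes the edges are A-B, B-C, C-D, D-A; opposite edges must not cross.
    return not segments_cross(a, b, c, d) and not segments_cross(b, c, d, a)
```

For the unit square in order (0,0), (1,0), (1,1), (0,1) this returns True; swapping two vertices to get the crossed "bowtie" order returns False.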
First, find all side and diagonal lengths using the distance formula: d = sqrt((x2 - x1)^2 + (y2 - y1)^2).
Next, find all angles using the law of cosines: the angle opposite a diagonal of length e, between sides of lengths a and d, is arccos((a^2 + d^2 - e^2) / (2*a*d)).
Reference: https://algorithmdotcpp.blogspot.com/2022/01/find-type-of-quadrilateral-with-given-points.html
Code in Python:
import math

# read 8 numbers: xa ya xb yb xc yc xd yd
points = list(map(int, input().split()))
# create separate variables for the coordinates
xa = points[0]
ya = points[1]
xb = points[2]
yb = points[3]
xc = points[4]
yc = points[5]
xd = points[6]
yd = points[7]
# edges of the quadrilateral, using the distance formula
a = math.sqrt((xb - xa) * (xb - xa) + (yb - ya) * (yb - ya))
b = math.sqrt((xc - xb) * (xc - xb) + (yc - yb) * (yc - yb))
c = math.sqrt((xd - xc) * (xd - xc) + (yd - yc) * (yd - yc))
d = math.sqrt((xa - xd) * (xa - xd) + (ya - yd) * (ya - yd))
# diagonals of the quadrilateral
diagonal_ac = math.sqrt((xc - xa) * (xc - xa) + (yc - ya) * (yc - ya))
diagonal_bd = math.sqrt((xd - xb) * (xd - xb) + (yd - yb) * (yd - yb))
# angles of the quadrilateral, using the law of cosines
A = math.acos((a * a + d * d - diagonal_bd * diagonal_bd) / (2 * a * d))
B = math.acos((b * b + a * a - diagonal_ac * diagonal_ac) / (2 * b * a))
C = math.acos((c * c + b * b - diagonal_bd * diagonal_bd) / (2 * c * b))
D = math.acos((d * d + c * c - diagonal_ac * diagonal_ac) / (2 * d * c))
Now we can determine whether the type of quadrilateral or quadrilateral is not found using if-else conditions.
# if all angles are equal (90 degrees); note exact float comparison is fragile
if A == B and A == C and A == D:
    # if all edge lengths are equal
    if a == b and a == c and a == d:
        # square
        print("Quadrilateral is square...\n")
        print("area of square :", a * a)
    else:
        # rectangle
        print("Quadrilateral is rectangular...\n")
        print("area of rectangle :", a * b)
# angles are not equal but edges are equal
elif a == b and a == c and a == d:
    # diamond (rhombus)
    print("Quadrilateral is diamond(Rhombus)...\n")
# opposite edges are equal
elif a == c and b == d:
    # parallelogram
    print("Quadrilateral is parallelogram...")
else:
    print("Quadrilateral is just a quadrilateral...\n")

suggest an algorithm for the following puzzle!

There are n petrol bunks arranged in a circle, each separated from the next by a certain distance. You choose some mode of travel that needs 1 litre of petrol to cover 1 km of distance. You can't draw an unlimited amount of petrol from each bunk; each bunk holds only a limited amount. But you know that the sum of litres of petrol over all the bunks equals the total distance to be covered.
I.e., let P1, P2, ..., Pn be the n bunks arranged circularly; d1 is the distance between P1 and P2, d2 the distance between P2 and P3, ..., and dn the distance between Pn and P1. Find the bunk from which travel can start so that your mode of travel never runs out of fuel.
There is an O(n) algorithm.
Assume v[0] = p1 - d1, v[1] = p2 - d2, ... , v[n-1] = pn - dn. All we need to do is finding a starting point i, such that all the partial sum is no less than 0, i.e., v[i] >= 0, v[i] + v[(i+1)%n] >= 0, v[i] + v[(i+1)%n] + v[(i+2)%n] >= 0, ..., v[i]+...+v[(i+n-1)%n] >= 0.
We can find such a start point by calculating s[0] = v[0], s[1] = v[0]+v[1], s[2] = v[0]+v[1]+v[2], ..., s[n-1] = v[0] + ... + v[n-1], and pick up the minimum s[k]. Then the index (k+1)%n is the start point.
Proof: Assume the minimum element is s[k]. By the problem description, there must be the minimum s[k] <= 0.
Because the total sum v[0] + v[1] + ... + v[n-1] = 0, we have v[k+1] + v[k+2] + ... v[n-1] = -s[k] >= 0, and it is impossible that v[k+1] + ... v[j] < 0 (k < j < n). (Because if v[k+1] + ... v[j] < 0, then s[j] < s[k], which is contradictory with the assumption that s[k] is minimum.) So we have v[k+1] >= 0, v[k+1] + v[k+2] >= 0, ..., v[k+1] + v[k+2] + ... + v[n-1] >= 0.
Because s[k] is the minimum one, we also have v[k+1] + v[k+2] + ... + v[n-1] + v[0] = -s[k] + v[0] = -s[k] + s[0] >= 0, -s[k] + v[0] + v[1] = -s[k] + s[1] >= 0, ..., -s[k] + v[0] + v[1] + ... + v[k-1] = -s[k] + s[k-1] >= 0. So all the partial sums starting from (k+1) are no less than 0. QED.
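The minimum-prefix-sum rule above can be sketched as (p[i] and d[i] follow the v[i] = p_i - d_i definition; the function name is mine):

```python
def start_bunk(p, d):
    # p[i]: petrol at bunk i; d[i]: distance from bunk i to bunk i+1.
    # Assumes sum(p) == sum(d); returns a feasible 0-based start index:
    # one past the position of the minimum prefix sum of p[i] - d[i].
    s, min_s, k = 0, float('inf'), 0
    for i in range(len(p)):
        s += p[i] - d[i]
        if s < min_s:
            min_s, k = s, i
    return (k + 1) % len(p)
```

On the sample data used in the C++ answer further down (refill = [3, 4, 6, 3, 7, 11], dist = [3, 10, 2, 4, 6, 9]), this returns start index 2, agreeing with that answer's output.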
Let's choose a junk algorithm that we know is wrong to see why it is wrong...
Notation...
Current Point: (gallons of gas at Current Point, gallons required to make next point)-> Remaining Gas (gallons)
In a little more mathematical form:
P[i]: (g(P[i]), d(P[i+1])) -> sum of (g(P[i]) - d(P[i+1])) from i=1 to current point-1
(And now for the bad algorithm...)
P1: (2,2) -> 0 (at P2)
P2: (5,3) -> 2 (at P3)
P3: (2,4) -> 0 (at P4)
P4: (2,5) -> -3 (ran out of gas 3 miles short of P5)
In order to make it to P5, we have to have three extra gallons of gas by the time we make it to P3, and in order to have 3 extra gallons at P3, we need to have 3 extra gallons at P1:
??? -> +3 (at P1)
P1: (2,2) -> 0+3 = 3 (at P2)
P2: (5,3) -> 2+3 = 5 (at P3)
P3: (2,4) -> 0+3 = 3 (at P4)
P4: (2,5) -> -3 +3 = 0 (made it to P5)
The trick, therefore, is to find the very worst sections -- where you are not given enough gas to traverse them. We know we can't start from P4, P3, P2, or P1. We have to start somewhere earlier and save up enough gas to make it through the bad section.
There will no doubt be multiple bad sections within the circle, making this somewhat complicated, but it's actually quite easy to figure out how to do this.
It's possible that the next few points following the very worst stretch in the circle could be traveled after the stretch, but only if they make no changes to your gas reserves. (e.g. the point after the worst stretch gives you 2 gallons of gas and makes you go 2 gallons of distance to the next point.)
In some cases, however, the worst section MUST be covered last. That's because before you start on that section, you need as much gas saved up as possible, and the next point after the worst stretch might give you the very last bit of gas that you need, which means you need to traverse it prior to taking on the worst stretch. Although there may exist multiple solutions, the simple fact of the matter is that traversing the worst section last is ALWAYS a solution. Here's some code:
class point_ {
    int gasGiven_;
    int distanceToNextPoint_;
public:
    int gasGiven() { return gasGiven_; }
    int distanceToNextPoint() { return distanceToNextPoint_; }
};

class Circle_ {
public:
    int numberOfPoints;
    point_ *P;
};
In main():
int indexWorstSection = 0;
int numberPointsWorstSection = 0;
int worstSum = 0;
int currentSum = 0;
int i = 0;
int startingPoint = 0;
// construct the circle, set *P to a malloc of numberOfPoints point_'s, fill in all data
while (i < (Circle.numberOfPoints - 1) || currentSum < 0)
{
    currentSum += Circle.P[i].gasGiven() - Circle.P[i].distanceToNextPoint();
    if (currentSum < worstSum) {
        worstSum = currentSum;
        indexWorstSection = i - numberPointsWorstSection;
        startingPoint = i;
    }
    if (currentSum > 0) { currentSum = 0; }
    else { numberPointsWorstSection++; }
    if (i == (Circle.numberOfPoints - 1)) { i = 0; }
    else { i++; }
}
if (indexWorstSection < 0) indexWorstSection = Circle.numberOfPoints + indexWorstSection;
The reason why you can't make this a simple for-loop is that the worst section might run, for example, from i = (Circle.numberOfPoints - 2) to i = 3. If currentSum is below zero, the loop must continue back at the start of the array.
Haven't tried the code and haven't done any serious programming in almost a decade. Sorry if it has some bugs. You will no doubt have to clean this up a bit.
This does what several of the other answers do - finds the minimum of the "graph" created by the net-change-in-petrol deltas as you circle around. Depending on where we start, the exact values may be moved uniformly upwards or downwards compared to some other starting position, but the index of the minimal value is always a meaningful indication of where we can start and know we'll never run out of petrol. This implementation tries to minimise memory use and completes in O(n).
#include <iostream>

int dist[] = { 3, 10, 2, 4, 6, 9 };
int refill[] = { 3, 4, 6, 3, 7, 11 };
static const int n = sizeof dist / sizeof *dist;

int main()
{
    int cum = 0, min = 0, min_index = 0;
    for (int i = 0; i < n; ++i)
    {
        cum += refill[i] - dist[i];
        std::cout << cum << ' ';
        if (cum <= min)
        {
            min = cum;
            min_index = i;
        }
    }
    std::cout << "\nstart from index " << (min_index + 1) % n << " (0-based)\n";
}
See it running on ideone.com
Here's an approach that works in O(n) time and O(1) space. Start at any station, call it station 0, and advance until you run out of gas. If you don't run out of gas, done. Otherwise, if you run out between stations k and k+1, start over again at station k+1. Make a note if you pass 0 again, and if you run out after that it can't be done.
The reason this works is because if you start at station i and run out of gas between stations k and k+1, then you will also run out of gas before station k+1 if you start at any station between i and k.
Here's the algorithm, given arrays P (petrol) and D (distance), each of length N:
int position = 0;
int petrol = P[0];
int distance = D[0];
int start = 0;
while (start < N) {
    while (petrol >= distance) {
        petrol += P[++position % N] - distance;
        distance = D[position % N];
        if (position % N == start)
            return start;
    }
    // ran out of gas: restart at the station just past the failure point
    start = ++position;
    petrol = P[start % N];
    distance = D[start % N];
}
return -1;
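A self-contained sketch of this restart strategy might look like the following (the function name and test data are my own choices, not from the answer):

```cpp
#include <cassert>
#include <vector>

// Greedy restart strategy: try a start; if the tank runs dry before the loop
// is complete, the next candidate start is the station just past the failure
// point. Every station is examined O(1) times in total, so this is O(n).
int findStart(const std::vector<int>& P, const std::vector<int>& D) {
    int n = static_cast<int>(P.size());
    int start = 0;
    while (start < n) {
        int petrol = 0;
        int i = start;
        for (; i < start + n; ++i) {
            petrol += P[i % n] - D[i % n];  // refuel here, then drive one leg
            if (petrol < 0)
                break;                      // ran dry between i and i + 1
        }
        if (i == start + n)
            return start;                   // completed the full circle
        start = i + 1;                      // restart just past the failure
    }
    return -1;                              // candidate start wrapped past 0
}
```

With the sample arrays from the C++ answer above (refill {3, 4, 6, 3, 7, 11}, dist {3, 10, 2, 4, 6, 9}) this returns 2, agreeing with that answer's printed result.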
Each leg of the trip has a net effect on fuel: you add what's stored at the point and use some to make the trip. All you need to do is loop through once, keeping track of your fuel level when you arrive at each point, even if it is negative. Keep track of the lowest fuel level and which point it occurred on. If you start at that point, you will be able to make it around from there without running out of fuel. This assumes that you start with an empty tank and only get gas at the points themselves, and that you can always take all the gas; you won't ever get full and have to leave gas behind.
Let's say you have 5 points, P1 to P5:
Point Fuel Distance to next point
P1 5 8
P2 3 4
P3 12 7
P4 1 4
P5 7 5
If you choose P1, then load up on 5 fuel, travelling to P2 leaves you with -3 fuel. Going on you get these numbers:
Point Fuel Balance (before refueling)
P1 0
P2 -3
P3 -4
P4 1
P5 -2
P1 0
So if you start at the lowest value, P3, you can make it back around (fuel 0 to start, 5 at P4, 2 at P5, 4 at P1, 1 at P2, and 0 back at P3).
float storedFuel[] = { 1, 1, 1, 1, 1, 1 };
float distance[] = { 1, 1, 1, 1, 1, 1 };
int n = 6;

int FindMinimumPosition()
{
    float fuel = 0;      // start with an empty tank
    int position = 0;
    float minimumFuel = 0;
    int minimumPosition = 0;
    while (position < n)
    {
        fuel += storedFuel[position];
        fuel -= distance[position];
        position++; // could be n, which is past the array bounds, hence % n below
        if (fuel < minimumFuel) {
            minimumFuel = fuel;
            minimumPosition = position % n;
        }
    }
    return minimumPosition;
}
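As a sanity check against the P1..P5 example above, here is the same lowest-arrival-balance idea as a self-contained function (the name and vector interface are mine):

```cpp
#include <cassert>
#include <vector>

// Track the fuel balance on arrival at each point; the point where the
// arrival balance is lowest is the one to start from.
int lowestArrivalPoint(const std::vector<int>& fuel,
                       const std::vector<int>& dist) {
    int n = static_cast<int>(fuel.size());
    int balance = 0, minBalance = 0, minPoint = 0;
    for (int i = 0; i < n; ++i) {
        balance += fuel[i] - dist[i];   // balance on arrival at point (i+1) % n
        if (balance < minBalance) {
            minBalance = balance;
            minPoint = (i + 1) % n;
        }
    }
    return minPoint;
}
```

With the example data (fuel {5, 3, 12, 1, 7}, distance {8, 4, 7, 4, 5}) this returns 2, i.e. P3, matching the worked table.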
Off the top of my head, here's an algorithm that should work:
let e1 = (the amount of petrol in P1) - d1 (i.e., the excess petrol in P1 over what is needed to travel to P2), and similarly for e2, ..., en. (These numbers can be positive or negative.)
Form the partial sums s1 = e1; s2 = e1 + e2; ..., sn = e1 + e2 + ... + en. We know from the conditions of the problem that sn = 0.
The problem now is to find a circular permutation of the bunks (or, more simply, of the e values) such that none of the s values is negative. One can simply shift by one repeatedly, updating the s values, until a solution is found. (It's not immediately obvious that there is always a solution, but I think there is. If you shift n times without finding a solution, then you're guaranteed that there is none.)
This is an O(n^2) algorithm--not very good. A good heuristic (possibly an exact solution) may be to shift so that the largest-magnitude negative s value is shifted to position n.
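A direct sketch of this shifting scheme, before applying any heuristic (illustrative naming, assuming sn = 0 as stated): recompute the partial sums for each circular shift and accept the first rotation in which none goes negative.

```cpp
#include <cassert>
#include <vector>

// Brute-force version of the shifting scheme: for each circular shift of the
// e values, recompute the partial sums s1..sn and accept the first rotation
// where none of them is negative. O(n^2) as noted above.
int firstFeasibleShift(const std::vector<int>& e) {
    int n = static_cast<int>(e.size());
    for (int shift = 0; shift < n; ++shift) {
        int s = 0;
        bool ok = true;
        for (int k = 0; k < n; ++k) {
            s += e[(shift + k) % n];   // partial sum s_{k+1} for this rotation
            if (s < 0) { ok = false; break; }
        }
        if (ok)
            return shift;              // this rotation keeps every s value >= 0
    }
    return -1;
}
```

For the e values implied by the C++ answer's data above (refill minus dist = {0, -6, 4, -1, 1, 2}) this returns shift 2, agreeing with the other answers.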
For each gap, find the profit or loss earned by filling up at the bunk before the gap and then crossing the gap. Starting at an arbitrary point, work out the total amount of petrol remaining at each point of a complete circle, accepting that it may be negative at some points.
Now repeatedly circularly shift that solution. Remove the information for the last point, which will now be the first point. Set up an offset to account for the fact that you are now starting one point further back and will have more (or less) petrol at each remaining point. Add in the information for the new point, taking account of the offset. The shifted solution is feasible if the minimum amount of petrol at any point, plus the offset, is at least zero.
You can use some sort of log n data structure (sorted map, priority queue...) to keep track of the minimum so this takes the cost down to n log n.
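One way to realise this with a log n structure (a sketch with my own naming, using a std::multiset over prefix sums of the net deltas): a start s is feasible iff the minimum prefix sum over the next n legs never dips below the prefix sum at s, and the multiset answers that window-minimum query in O(log n), for O(n log n) overall.

```cpp
#include <cassert>
#include <set>
#include <vector>

// net[i] = petrol picked up at point i minus the distance to the next point.
// S holds prefix sums over the doubled sequence, so a start s is feasible iff
// min(S[s+1..s+n]) - S[s] >= 0. A multiset tracks the sliding-window minimum.
int feasibleStart(const std::vector<int>& net) {
    int n = static_cast<int>(net.size());
    std::vector<long long> S(2 * n + 1, 0);
    for (int k = 0; k < 2 * n; ++k)
        S[k + 1] = S[k] + net[k % n];
    std::multiset<long long> window(S.begin() + 1, S.begin() + n + 1);
    for (int s = 0; s < n; ++s) {
        if (*window.begin() - S[s] >= 0)
            return s;                          // window minimum never dips below S[s]
        window.erase(window.find(S[s + 1]));   // slide the window right by one
        window.insert(S[s + n + 1]);
    }
    return -1;
}
```

On the same net deltas as before ({0, -6, 4, -1, 1, 2}) this returns 2. (With a monotonic deque instead of the multiset, the same sliding-window minimum drops to O(n) total.)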
Here is an O(n) solution
private int findStartingPoint(int[] petrol, int[] dist, int[] mileage) {
    int curPoint = 0;
    while (curPoint < petrol.length) {
        int prevPoint = curPoint;
        int nextSolutionPoint = curPoint + 1;
        int totalPetrol = 0;
        int legsTravelled = 0;
        // try to travel all petrol.length legs starting from curPoint
        while (legsTravelled < petrol.length) {
            totalPetrol += petrol[prevPoint] - (dist[prevPoint] / mileage[prevPoint]);
            if (totalPetrol < 0) {
                break;
            }
            prevPoint = (prevPoint + 1) % petrol.length;
            nextSolutionPoint++;
            legsTravelled++;
        }
        if (legsTravelled == petrol.length) {
            return curPoint;
        }
        // running dry rules out every start up to the failure point,
        // so the next candidate is the point just past it
        curPoint = nextSolutionPoint;
    }
    return -1;
}
