Triangle partitioning - algorithm

This was a problem in the 2010 Pacific ACM-ICPC contest. The gist of it is trying to find a way to partition a set of points inside a triangle into three subtriangles such that each partition contains exactly a third of the points.
Input:
Coordinates of a bounding triangle: (v1x,v1y),(v2x,v2y),(v3x,v3y)
A number 3n < 30000 representing the number of points lying inside the triangle
Coordinates of the 3n points: (x_i,y_i) for i=1...3n
Output:
A point (sx,sy) that splits the triangle into 3 subtriangles such that each subtriangle contains exactly n points.
The way the splitting point splits the bounding triangle into subtriangles is as follows: Draw a line from the splitting point to each of the three vertices. This will divide the triangle into 3 subtriangles.
We are guaranteed that such a point exists. Any such point will suffice (the answer is not necessarily unique).
Here is an example of the problem for n=2 (6 points). We are given the coordinates of each of the colored points and the coordinates of each vertex of the large triangle. The splitting point is circled in gray.
Can someone suggest an algorithm faster than O(n^2)?

Here's an O(n log n) algorithm. Let's assume no degeneracy.
The high-level idea is, given a triangle PQR,
   P
  C \
 / S \
R-----Q
we initially place the center point C at P. Slide C toward R until there are n points inside the triangle CPQ and one (S) on the segment CQ. Slide C toward Q until either triangle CRP is no longer deficient (perturb C and we're done) or CP hits a point. In the latter case, slide C away from P until either triangle CRP is no longer deficient (we're done) or CQ hits a point, in which case we begin sliding C toward Q again.
Clearly the implementation cannot "slide" C continuously, so for each triangle involving C, for each vertex S of that triangle other than C, store the points inside the triangle in a binary search tree sorted by angle with S. These structures suffice to implement this kinetic algorithm.
I assert without proof that this algorithm is correct.
As for the running time, each event is a point-line intersection and can be handled in time O(log n). The directions of the lines CP, CQ, and CR all change monotonically, so each of the O(1) lines involved hits each point at most once.

The main idea is: if we have a line, we can try to find a point on it using linear search; if the line is not good enough, we can move it using binary search. The steps:
1. Sort the points based on the direction from vertex A. Sort them for B and C too.
2. Set the current range for vertex A to be all the points.
3. Select the 2 middle points from the range for vertex A. These 2 points define the subrange for A. Pick some line AD lying between these points.
4. Iterate over all the points lying between B and AD (starting from BA). Stop when n points have been found. Select the subrange of directions from B between the n-th point and the one after it (if there is no point after the n-th, use BC). If fewer than n points can be found, set the current range for vertex A to the left half of the current range and go to step 3.
5. Same as step 4, but for vertex C.
6. If subranges A, B, C intersect, choose any point from their intersection and finish. Otherwise, if A&B is closer to A, set the current range for vertex A to the right half of the current range and go to step 3. Otherwise set the current range for vertex A to the left half of the current range and go to step 3.
Complexity: sorting O(n * log n), search O(n * log n). (Combination of binary and linear search).
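As a minimal sketch of the sorting in step 1 (Python; the function name and the reference-edge convention are my own assumptions, not part of the answer):

import math

def sort_by_direction(vertex, ref, points):
    # Sort points by the CCW angle of vertex->p, measured from the
    # reference direction vertex->ref (e.g. edge AB when vertex is A).
    vx, vy = vertex
    base = math.atan2(ref[1] - vy, ref[0] - vx)
    return sorted(points,
                  key=lambda p: (math.atan2(p[1] - vy, p[0] - vx) - base)
                                % (2 * math.pi))

Running it once per vertex gives the three angular orders that the search in steps 3-5 walks over.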

Here is an approach that takes O(log n) passes of cost n each.
Each pass starts with an initial point, which divides the triangle into three subtriangles. If each has n points, we are finished. If not, consider the subtriangle which is furthest away from the desired n. Suppose, just for now, that it has too many. The imbalances sum to zero, so at least one of the other two subtriangles has too few points. The third subtriangle either also has too few, or has exactly n points - otherwise the original subtriangle would not have the highest discrepancy.
Take the most imbalanced subtriangle and consider moving the centre point along the line leading away from it. As you do so, the imbalance of the most imbalanced subtriangle will decrease. For each point in the triangle, you can work out when that point crosses into or out of the most imbalanced subtriangle as you move the centre point. Therefore you can work out, in O(n) time, where to move the centre point to give the most imbalanced subtriangle any desired count.
As you move the centre point you can choose whether points move into or out of the most imbalanced subtriangle, but you can't choose which of the other two subtriangles they come from or go to. You can, however, predict this easily from which side they lie of the line along which you are sliding the centre point, so you can move the centre point along this line to get the lowest maximum discrepancy after the move. In the worst case, all of the points moved go into, or out of, the subtriangle that was exactly balanced. However, if the imbalanced subtriangle has n + k points, then by moving k/2 of them you can reach, at worst, the case where it and the previously balanced subtriangle are each out by k/2. The third subtriangle may still be unbalanced by up to k, in the other direction, but in this case a second pass will reduce the maximum imbalance to something below k/2.
Therefore, in the case of a large imbalance, we can reduce it by at least a constant factor every two passes of the above algorithm, so in O(log n) passes the imbalance will be small enough that we are into special cases where we worry about an excess of at most one point. Here I am going to guess that the number of such special cases is practically enumerable in a program, and the cost amounts to a small constant addition.
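Each pass needs the three subtriangle counts for the current centre point. Here is a minimal O(n) sketch of that primitive (Python; the function names, the CCW ordering of P, Q, R, and the general-position assumption are mine):

def orient(a, b, c):
    # Twice the signed area of triangle abc; > 0 iff a, b, c are CCW.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def subtriangle_counts(P, Q, R, C, points):
    # Counts the points falling in subtriangles CPQ, CQR, CRP, assuming
    # P, Q, R are given CCW, C is strictly inside triangle PQR, and no
    # point lies exactly on a line through C and a vertex.
    counts = [0, 0, 0]
    for pt in points:
        sp = orient(C, P, pt) > 0           # left of ray C->P?
        sq = orient(C, Q, pt) > 0           # left of ray C->Q?
        sr = orient(C, R, pt) > 0           # left of ray C->R?
        if sp and not sq:
            counts[0] += 1                  # wedge between CP and CQ: in CPQ
        elif sq and not sr:
            counts[1] += 1                  # wedge between CQ and CR: in CQR
        else:
            counts[2] += 1                  # remaining wedge: in CRP
    return counts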

I think there is a linear time algorithm. See the last paragraph of the paper "Illumination by floodlights" by Steiger and Streinu. Their algorithm works for any k1, k2, k3 that sum to n, so k1 = k2 = k3 = n/3 is a special case.
Here is a link to the article: http://www.sciencedirect.com/science/article/pii/S0925772197000278 and a CiteSeerX link: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.53.4634

Related

Maximum area of self-intersecting polygon

I'm looking for an algorithm for calculating the maximum area in a self-intersecting polygon. As it's self-intersecting it is not trivial to calculate the area with methods such as the shoelace formula.
Example:
The polygon in the example prioritises vertices with the "smallest" letter alphabetically, sometimes going back to the same vertex in a non-alphabetical way as it is self-intersecting. Although that shouldn't make a difference in the expected area.
In this case the algorithm should output 40: the area of the square (36) plus the area of the 4 outer triangles (4).
Constraints:
The intersection points are known in advance, no need to calculate them (as the example shows)
The last point is guaranteed to connect back to the figure, i.e. no dangling lines
The polygon is always traceable, i.e. it can be drawn, without lifting the pen
Time complexity ideally not worse than O(n log(n)) where n is the number of points
Thanks for your help!
I think I've got it.
1. Find the vertex with the lowest x. In case of a tie, pick the one with the lowest y. This process is O(n).
2. Of the 2+ segments connected to the vertex found in step 1, pick the one that forms the smallest angle with the x-axis, so as to start traversing the polygon in a clockwise direction. This process is O(s), where s is the number of segments connected to the starting vertex.
3. Keep choosing the next connected segment, ignoring the order of the segments in the original polygon. At an intersection, pick the segment that forms the smallest clockwise angle with the current segment's negated direction. This chooses the segment that lies on the outside of the polygon. This process is O(n i/2), where i is the average number of segments connected to each vertex.
4. Once back at the starting point, calculate the area using the shoelace formula. This could actually be computed in parallel while traversing the polygon in steps 2 and 3.
This algorithm's worst case time complexity is O(n i/2) where n is the number of vertices and i is the average number of segments connected to each vertex. The best case complexity is O(n) for a polygon that never intersects. In my case the polygons rarely intersect so this process is close to O(n).
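Step 4 is the standard shoelace formula; a minimal sketch in Python, applied to the boundary vertices produced by the traversal:

def shoelace_area(vertices):
    # Absolute area of the polygon traced by `vertices` in order,
    # via the shoelace formula (signed area, then absolute value).
    area2 = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area2 += x1 * y2 - x2 * y1
    return abs(area2) / 2.0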
Build the set of segments from the points given
For each point, test a ray to see if it intersects an even or odd number of those segments.
Points with an even intersection count are internal; remove them.
Shoelace algo tells you the area of the remaining shape.
Let me think of a way where this isn't n^2, because you are comparing n points with n segments.
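For reference, the parity test in step 2 could be sketched like this (Python; the function name and the half-open counting rule are my own, and degenerate cases such as a ray passing exactly through a vertex need extra care):

def ray_crossings(point, segments):
    # Count how many segments a horizontal ray from `point` to +infinity
    # crosses, using the usual half-open rule in y so shared endpoints
    # are not double-counted.
    px, py = point
    count = 0
    for (x1, y1), (x2, y2) in segments:
        if (y1 > py) != (y2 > py):            # segment straddles the ray's y
            t = (py - y1) / (y2 - y1)
            if x1 + t * (x2 - x1) > px:       # crossing is to the right
                count += 1
    return count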

Trouble understanding Closest-Pair divide and conquer algorithm

I'm new to coding and today I completed the trivial solution for the Closest-Pair problem in a 2-D space (two for loops).
However, I gave up on finding any solution that could do it in O(n log n). Even after researching it, I still don't understand how this can be faster than the trivial method.
What I understand:
-> At first we split the array into two halves and sort everything considering only the X coordinates. This can be done in O(n log n).
Next there are recursive calls which "find the two points with the lowest distance" in each half. But how is this done exactly below O(n^2)?
In my understanding it is impossible to find the lowest distance between N/2 points without checking every single one of them.
There is a solution in 1-D which absolutely makes sense to me. After sorting we know, that the distance between two non-adjacent points can't be lower than the distance of at least 2 adjacent ones. However this is not true for 2-D space, since we have an additional Y coordinate which could lead to the lowest distance between two points which are not adjacent on the X axis.
First of all, heed the advice of user #Evg - this answer cannot substitute the comprehensive description and mathematically rigorous analysis of the algorithm.
However, here are some ideas to get the intuition started:
(Recursion structure)
The question states:
Next there are recursive calls which "find the two points with the lowest distance" in each half. But how is this done exactly below O(n^2)? In my understanding it is impossible to find the lowest distance between N/2 points without checking every single one of them.
The recursion, however, does not stop at level 1 - assume for the sake of the argument that some O(n log n) algorithm works. Finding closest pairs among N/2 points applying that very algorithm takes O(N/2 log N/2) - not O((N/2)^2).
(Consequences of finding a closest pair in one half)
If you have found a closest pair (p, q) in the 'left' half of the point set, this pair's distance sets an upper bound to the width of a corridor around the halving line from which a closer pair (r, s) with r from the left, s from the right half can be drawn. If the closest distance found so far is 'small', it significantly reduces the size of the candidate set. As the points have been ordered by their x coordinate, the algorithm can exploit the information efficiently.
Said corridor may still cover up to the whole set of N points, but if it does, it provides information of the geometry of the point set: the points of each half will basically be aligned along a vertical line. This information can be exploited algorithmically - the most naive way would be to execute the algorithm once again but sorting along y coordinates and halving the point set by a horizontal line. Note that executing any algorithm a constant number of times does not change asymptotic run time expressed by the O(.) notation.
(Finding a close pair with one point from each half)
Consider checking a pair of points (r, s), one point from each half. It is known that the difference in their x and y coordinates, resp., mustn't exceed the minimal distance d found so far. It is known from the recursion that there can be no points r', s' (r' from the left, s' from the right half) closer to r, s, resp., than d. So given some r there cannot be 'many' candidates from the other half.
Imagine a circle of radius d drawn around r. Any point s from the other half being closer than d must be located within that circle. Let there be a few of them - however, the minimum distance between each pair of them must still be at least d. The maximum number of points that can be distributed within a circle of radius d such that the distance between each pair of them is at least d is 7 - think of a regular hexagon with side length d and its center coinciding with the circle's center.
So after the recursion, every r from the left half needs to be checked against at most a constant number of points from the other half, which makes the part of the algorithm after the recursion run in O(N).
Note that finding the pairing candidates for a given r is an efficient operation - the points from both halves have been sorted by the same criterion.
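To tie the pieces together, here is a compact sketch of the whole scheme (Python; function names are mine). For simplicity it re-sorts the corridor by y at every recursion level, giving O(n log^2 n); the textbook version carries a y-sorted copy through the recursion to reach O(n log n):

import math

def closest_pair(points):
    # Returns (distance, (p, q)) for the closest pair. Assumes >= 2 points.
    px = sorted(points)                       # sort once by x (then y)

    def solve(px):
        n = len(px)
        if n <= 3:                            # brute-force the base case
            best = (math.inf, None)
            for i in range(n):
                for j in range(i + 1, n):
                    d = math.dist(px[i], px[j])
                    if d < best[0]:
                        best = (d, (px[i], px[j]))
            return best
        mid = n // 2
        midx = px[mid][0]
        best = min(solve(px[:mid]), solve(px[mid:]), key=lambda t: t[0])
        d = best[0]
        # corridor of half-width d around the dividing vertical line
        strip = sorted((p for p in px if abs(p[0] - midx) < d),
                       key=lambda p: p[1])
        for i, p in enumerate(strip):
            # only a bounded number of y-successors can be within d
            for q in strip[i + 1:i + 8]:
                if q[1] - p[1] >= d:
                    break
                dq = math.dist(p, q)
                if dq < best[0]:
                    best, d = (dq, (p, q)), dq
        return best

    return solve(px)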

Finding all empty triangles

I have a small set of N points in the plane, N < 50.
I want to enumerate all triples of points from the set that form a triangle containing no other point.
Even though the obvious brute force solution could be viable for my tiny N, it has complexity O(N^4).
Do you know a way to decrease the time complexity, say to O(N³) or O(N²) that would keep the code simple ? No library allowed.
Much to my surprise, the number of such triangles is pretty large. Take any point as a center and sort the other ones by increasing angle around it. This forms a star-shaped polygon that gives N-1 empty triangles, hence a total of Ω(N²). It has been shown that this bound is tight [Planar Point Sets with a Small Number of Empty Convex Polygons, I. Bárány and P. Valtr].
In the case of points forming a convex polygon, all triangles are empty, hence O(N³). Chances of a fast algorithm are getting low :(
The paper "Searching for empty Convex polygons" by Dobkin, David P. / Edelsbrunner, Herbert / Overmars, Mark H. contains an algorithm linear in the number of possible output triangles for solving this problem.
A key problem in computational geometry is the identification of subsets of a point set having particular properties. We study this problem for the properties of convexity and emptiness. We show that finding empty triangles is related to the problem of determining pairs of vertices that see each other in a star-shaped polygon. A linear time algorithm for this problem which is of independent interest yields an optimal algorithm for finding all empty triangles. This result is then extended to an algorithm for finding empty convex r-gons (r > 3) and for determining a largest empty convex subset. Finally, extensions to higher dimensions are mentioned.
The sketch of the algorithm by Dobkin, Edelsbrunner and Overmars goes as follows for triangles:
for every point in turn, build the star-shaped polygon formed by sorting around it the points on its left. This takes N sorting operations (which can be lowered to total complexity O(N²) via an arrangement, anyway).
compute the visibility graph inside this star-shaped polygon and report all the triangles that are formed with the given point. This takes N visibility graph constructions, for a total of M operations, where M is the number of empty triangles.
In short, from every point you construct every empty triangle on its left, by triangulating the corresponding star-shaped polygon in all possible ways.
The construction of the visibility graph is a special version for the star-shaped polygon, which works in a traversal around the polygon, where every vertex has a visibility queue which gets updated.
The figure shows a star-shaped polygon in blue and the edges of its visibility graph in orange. The outline generates 6 triangles, and inner visibility edges 8 of them.
for each pair of points (A, B):
    for each of the two half-planes defined by (A, B):
        initialize a priority queue Q to empty
        for each point M in the half-plane,
                with increasing angle(AB, AM):
            if angle(BA, BM) is smaller than all angles in Q:
                print A, B, M
            put M in Q with priority angle(BA, BM)
Inserting and querying the minimum in a priority queue can both be done in O(log N) time, so the complexity is O(N^3 log N) this way.
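A direct transcription in Python (function names are mine; general position assumed). Since the pseudocode only ever queries the minimum of Q and never deletes from it, a running minimum stands in for the priority queue here, which incidentally drops the log factor:

import math
from itertools import combinations

def cross(o, a, b):
    # z-component of (a - o) x (b - o); > 0 iff b is left of ray o->a
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def angle(o, a, b):
    # unsigned angle at o between rays o->a and o->b
    ax, ay = a[0] - o[0], a[1] - o[1]
    bx, by = b[0] - o[0], b[1] - o[1]
    return abs(math.atan2(ax * by - ay * bx, ax * bx + ay * by))

def empty_triangles(points):
    # Reports each empty triangle once per edge (three times in total).
    out = []
    for A, B in combinations(points, 2):
        for side in (1, -1):                  # the two half-planes of line AB
            half = [M for M in points if side * cross(A, B, M) > 0]
            half.sort(key=lambda M: angle(A, B, M))  # increasing angle(AB, AM)
            best = math.inf                   # minimum angle(BA, BM) so far
            for M in half:
                a = angle(B, A, M)
                if a < best:                  # smaller than all angles seen
                    out.append((A, B, M))
                    best = a
    return out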
If I understand your question, what you're looking for is https://en.wikipedia.org/wiki/Delaunay_triangulation
To quote from said Wikipedia article: "The most straightforward way of efficiently computing the Delaunay triangulation is to repeatedly add one vertex at a time, retriangulating the affected parts of the graph. When a vertex v is added, we split in three the triangle that contains v, then we apply the flip algorithm. Done naively, this will take O(n) time: we search through all the triangles to find the one that contains v, then we potentially flip away every triangle. Then the overall runtime is O(n^2)."

Intersection of N rectangles

I'm looking for an algorithm to solve this problem:
Given N rectangles in the Cartesian plane, find out whether the intersection of those rectangles is empty or not. Each rectangle can lie in any direction (its edges are not necessarily parallel to Ox and Oy).
Do you have any suggestion to solve this problem? :) I can think of testing the intersection of each rectangle pair. However, it's O(N*N) and quite slow :(
Abstract
Either use a sorting algorithm according to smallest X value of the rectangle, or store your rectangles in an R-tree and search it.
Straight-forward approach (with sorting)
Let us denote low_x() - the smallest (leftmost) X value of a rectangle, and high_x() - the highest (rightmost) X value of a rectangle.
Algorithm:
Sort the rectangles according to low_x().              # O(n log n)
For each rectangle in the sorted array:                # O(n)
    Find its highest X point.                          # O(1)
    Compare it with all rectangles whose low_x() is    # O(log n)
    smaller than this.high_x().
Complexity analysis
This should run in O(n log n) on uniformly distributed rectangles.
The worst case would be O(n^2), for example when the rectangles don't overlap but are one above another. In this case, generalize the algorithm to have low_y() and high_y() too.
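For the axis-aligned case, the scan could look like the following sketch (Python; names are mine, and this detects a pairwise overlap rather than testing the common intersection of all N):

def any_overlap(rects):
    # rects: (low_x, low_y, high_x, high_y), axis-aligned.
    # Returns True if some pair of rectangles overlaps.
    rects = sorted(rects)                 # by low_x, i.e. the sweep order
    active = []                           # rects whose x-interval may still reach us
    for lx, ly, hx, hy in rects:
        # keep only rects whose high_x is not left of this one's low_x
        active = [r for r in active if r[2] >= lx]
        for _, aly, _, ahy in active:     # x-intervals overlap; check y
            if aly <= hy and ly <= ahy:
                return True
        active.append((lx, ly, hx, hy))
    return False

Rotated rectangles would need a separating-axis test in place of the y-interval check.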
Data-structure approach: R-Trees
R-trees (a spatial generalization of B-trees) are one of the best ways to store geospatial data, and can be useful in this problem. Simply store your rectangles in an R-tree, and you can spot intersections with a straightforward O(n log n) complexity. (n searches, log n time for each).
Observation 1: given a polygon A and a rectangle B, the intersection A ∩ B can be computed by 4 intersections with the half-planes corresponding to the edges of B.
Observation 2: cutting a half-plane from a convex polygon gives you a convex polygon. The first rectangle is a convex polygon, and this operation increases the number of vertices by at most 1.
Observation 3: the signed distance of the vertices of a convex polygon to a straight line is a unimodal function.
Here is a sketch of the algorithm:
Maintain the current partial intersection D in a balanced binary tree in a CCW order.
When cutting a half-plane defined by a line L, find the two edges in D that intersect L. This can be done in logarithmic time through some clever binary or ternary search exploiting the unimodality of the signed distance to L. (This is the part I don't exactly remember.) Remove all the vertices on the one side of L from D, and insert the intersection points to D.
Repeat for all edges L of all rectangles.
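For comparison, here is the plain linear-time version of one cut (Python, a Sutherland-Hodgman step over a CCW vertex list; the answer above replaces this O(k) scan with an O(log k) search exploiting the unimodal signed distance):

def clip(poly, a, b, c):
    # Keep the part of convex CCW polygon `poly` with a*x + b*y + c >= 0.
    out = []
    n = len(poly)
    for i in range(n):
        p, q = poly[i], poly[(i + 1) % n]
        fp = a * p[0] + b * p[1] + c
        fq = a * q[0] + b * q[1] + c
        if fp >= 0:
            out.append(p)                      # p survives the cut
        if (fp >= 0) != (fq >= 0):             # edge pq crosses the cut line
            t = fp / (fp - fq)
            out.append((p[0] + t * (q[0] - p[0]),
                        p[1] + t * (q[1] - p[1])))
    return out

Starting from the first rectangle and applying the four cuts of each remaining rectangle, an empty vertex list at any point means the common intersection is empty.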
This seems like a good application of Klee's measure problem. Basically, if you read http://en.wikipedia.org/wiki/Klee%27s_measure_problem there is an Ω(n log n) lower bound on the runtime of the best algorithms for such rectilinear intersection problems.
I think you should use something like the sweep line algorithm: finding intersections is one of its applications. Also, have a look at the answers to this question.
Since the rectangles need not be parallel to the axes, it is easier to transform the problem into an already solved one: computing the intersections of the borders of the rectangles.
Build a set S which contains all borders, together with the rectangle each belongs to; you get a set of tuples of the form ((x_start, y_start), (x_end, y_end), r_n), where r_n is of course the ID of the corresponding rectangle.
now use a sweep line algorithm to find the intersections of those lines
The sweep line stops at every x-coordinate in S, i.e. all start values and all end values. For every new start coordinate, put the corresponding line in a temporary set I. For each new end-coordinate, remove the corresponding line from I.
In addition to adding new lines to I, you can check for each new line whether it intersects one of the lines currently in I. If two lines intersect, the corresponding rectangles do, too.
You can find a detailed explanation of this algorithm here.
The runtime is O(n*log(n) + c*log(n)), where c is the number of intersection points of the lines in I.
Pick the smallest rectangle from the set (or any rectangle), and go over each point within it. If one of its points also lies within all the other rectangles, the intersection is not empty. If every point is outside at least one other rectangle, the intersection is empty.

Connected points with-in a grid

Given a collection of random points within a grid, how do you check efficiently that they all lie within a fixed range of other points? I.e., pick any one random point and you can then navigate to any other point in the grid.
To clarify further: If you have a 1000 x 1000 grid and randomly placed 100 points in it how can you prove that any one point is within 100 units of a neighbour and all points are accessible by walking from one point to another?
I've been writing some code and came up with an interesting problem: Very occasionally (just once so far) it creates an island of points which exceeds the maximum range from the rest of the points. I need to fix this problem but brute force doesn't appear to be the answer.
It's being written in Java, but I am good with either pseudo-code or C++.
I like #joel.neely's construction approach, but if you want to ensure a more uniform density this is more likely to work (though it would probably produce more of a cluster rather than an overall uniform density):
Randomly place an initial point P_0 by picking x,y from a uniform distribution within the valid grid
For i = 1:N-1
Choose a random j, uniformly distributed from 0 to i-1, and identify the previously placed point P_j.
Choose a random point P_i with distance(P_i, P_j) < 100, by repeating the following until a valid P_i is chosen in substep 4:
1. Choose (dx, dy), each uniformly distributed from -100 to +100.
2. If dx^2 + dy^2 > 100^2, the distance is too large (this fails about 21.5% of the time); go back to substep 1.
3. Calculate the candidate coords(P_i) = coords(P_j) + (dx, dy).
4. P_i is valid if it is inside the overall valid grid.
Just a quick thought: divide the grid into 50x50 patches and, when you place the initial points, record which patch each of them belongs to. Now, when you want to check if a new point is within 100 pixels of the others, you can simply check its patch plus the 8 surrounding ones and see if the point counts match up.
E.g., since you know you have 100 random points, and each patch records how many points it contains, you can simply sum up and see if it is indeed 100 — which means all points are reachable.
I'm sure there are other ways, though.
EDIT: The distance from the upper left to the lower right of a 50x50 patch is sqrt(50^2 + 50^2) ≈ 70.7 units, so you'd probably have to choose a smaller patch size. Maybe 35 or 36 will do (50 = sqrt(x^2 + x^2) => x = 35.355...).
Find the convex hull of the point set, and then use the rotating calipers method. The two most distant points on the convex hull are the two most distant points in the set. Since all other points are contained in the convex hull, they are guaranteed to be closer than the two extremal points.
As far as evaluating existing sets of points, this looks like a type of Euclidean minimum spanning tree problem. The wikipedia page states that this is a subgraph of the Delaunay triangulation; so I would think it would be sufficient to compute the Delaunay triangulation (see prev. reference or google "computational geometry") and then the minimum spanning tree and verify that all edges have length less than 100.
From reading the references it appears that this is O(N log N), maybe there is a quicker way but this is sufficient.
A simpler (but probably less efficient) algorithm would be something like the following:
Given: the points are in an array from index 0 to N-1.
1. Sort the points in x-coordinate order, which is O(N log N) for an efficient sort.
2. Initialize i = 0.
3. Increment i. If i == N, stop with success. (All points can be reached from another within radius R.)
4. Initialize j = i.
5. Decrement j.
6. If j < 0 or P[i].x - P[j].x > R, stop with failure. (There is a gap, and all points cannot be reached from each other within radius R.)
7. Otherwise, we get here if P[i].x and P[j].x are within R of each other. Check whether point P[j] is sufficiently close to P[i]: if (P[i].x-P[j].x)^2 + (P[i].y-P[j].y)^2 < R^2, then point P[i] is reachable from one of the previous points within radius R; go back to step 3.
8. Keep trying: go back to step 5.
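A direct transcription of the steps above (Python; the function name is mine):

def connected_in_x_order(points, R):
    # Transcription of steps 1-8: every point, in x order, must be
    # within distance R of some earlier point.
    P = sorted(points)                    # step 1: sort by x-coordinate
    R2 = R * R
    for i in range(1, len(P)):            # steps 2-3: advance i
        j = i - 1                         # step 4
        while j >= 0 and P[i][0] - P[j][0] <= R:   # step 6, negated
            dx = P[i][0] - P[j][0]
            dy = P[i][1] - P[j][1]
            if dx * dx + dy * dy < R2:    # step 7: P[i] is reachable
                break
            j -= 1                        # steps 5/8: keep trying
        else:
            return False                  # step 6: gap found, failure
    return True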
Edit: this could be modified to something that should be O(N log N) but I'm not sure:
Given: the points are in an array from index 0 to N-1.
1. Sort the points in x-coordinate order, which is O(N log N) for an efficient sort.
2. Maintain a sorted set YLIST of points in y-coordinate order, initializing YLIST to the set {P[0]}. We'll be sweeping the x-coordinate from left to right, adding points one by one to YLIST and removing points that have an x-coordinate that is too far away from the newly-added point.
3. Initialize i = 0, j = 0.
4. Loop invariant, always true at this point: all points P[k] where k <= i form a network where they can be reached from each other with radius R. All points within YLIST have x-coordinates between P[i].x-R and P[i].x.
5. Increment i. If i == N, stop with success.
6. If P[i].x - P[j].x <= R, go to step 10. (This is automatically true if i == j.)
7. Point P[j] is not reachable from point P[i] with radius R. Remove P[j] from YLIST (this is O(log N)).
8. Increment j, go to step 6.
9. At this point, all points P[j] with j < i and x-coordinates between P[i].x-R and P[i].x are in the set YLIST.
10. Add P[i] to YLIST (this is O(log N)), and remember the index k within YLIST where YLIST[k] == P[i].
11. Points YLIST[k-1] and YLIST[k+1] (if they exist; P[i] may be the only element within YLIST or it may be at an extreme end) are the closest points in YLIST to P[i].
12. If point YLIST[k-1] exists and is within radius R of P[i], then P[i] is reachable with radius R from at least one of the previous points. Go to step 5.
13. If point YLIST[k+1] exists and is within radius R of P[i], then P[i] is reachable with radius R from at least one of the previous points. Go to step 5.
14. P[i] is not reachable from any of the previous points. Stop with failure.
New and Improved ;-)
Thanks to Guillaume and Jason S for comments that made me think a bit more. That has produced a second proposal whose statistics show a significant improvement.
Guillaume remarked that the earlier strategy I posted would lose uniform density. Of course, he is right, because it's essentially a "drunkard's walk" which tends to orbit the original point. However, uniform random placement of the points yields a significant probability of failing the "path" requirement (all points being connectible by a path with no step greater than 100). Testing for that condition is expensive; generating purely random solutions until one passes is even more so.
Jason S offered a variation, but statistical testing over a large number of simulations leads me to conclude that his variation produces patterns that are just as clustered as those from my first proposal (based on examining mean and std. dev. of coordinate values).
The revised algorithm below produces point sets whose stats are very similar to those of purely (uniform) random placement, but which are guaranteed by construction to satisfy the path requirement. Unfortunately, it's a bit easier to visualize than to explain verbally. In effect, it requires the points to stagger randomly in a vaguely consistent direction (NE, SE, SW, NW), only changing directions when "bouncing off a wall".
Here's the high-level overview:
Pick an initial point at random, set horizontal travel to RIGHT and vertical travel to DOWN.
Repeat for the remaining number of points (e.g. 99 in the original spec):
2.1. Randomly choose dx and dy whose distance is between 50 and 100. (I assumed Euclidean distance -- square root of sums of squares -- in my trial implementation, but "taxicab" distance -- sum of absolute values -- would be even easier to code.)
2.2. Apply dx and dy to the previous point, based on horizontal and vertical travel (RIGHT/DOWN -> add, LEFT/UP -> subtract).
2.3. If either coordinate goes out of bounds (less than 0 or at least 1000), reflect that coordinate around the boundary violated, and replace its travel with the opposite direction. This means four cases (2 coordinates x 2 boundaries):
2.3.1. if x < 0, then x = -x and reverse LEFT/RIGHT horizontal travel.
2.3.2. if 1000 <= x, then x = 1999 - x and reverse LEFT/RIGHT horizontal travel.
2.3.3. if y < 0, then y = -y and reverse UP/DOWN vertical travel.
2.3.4. if 1000 <= y, then y = 1999 - y and reverse UP/DOWN vertical travel.
Note that the reflections under step 2.3 are guaranteed to leave the new point within 100 units of the previous point, so the path requirement is preserved. However, the horizontal and vertical travel constraints force the generation of points to "sweep" randomly across the entire space, producing more total dispersion than the original pure "drunkard's walk" algorithm.
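A sketch of the construction (Python; continuous coordinates and all names are my assumptions rather than the original trial implementation):

import math
import random

def sweep_walk(n=100, size=1000, dmin=50, dmax=100):
    # Reflection can only shorten a step, so consecutive points always
    # stay within dmax of each other, preserving the path requirement.
    x, y = random.uniform(0, size), random.uniform(0, size)
    hdir, vdir = 1, 1                       # travelling RIGHT and DOWN
    points = [(x, y)]
    for _ in range(n - 1):
        while True:                         # step 2.1: length in [dmin, dmax]
            dx, dy = random.uniform(0, dmax), random.uniform(0, dmax)
            if dmin <= math.hypot(dx, dy) <= dmax:
                break
        x += hdir * dx                      # step 2.2: apply travel direction
        y += vdir * dy
        if x < 0:                           # step 2.3: bounce off the walls
            x, hdir = -x, 1
        elif x > size:
            x, hdir = 2 * size - x, -1
        if y < 0:
            y, vdir = -y, 1
        elif y > size:
            y, vdir = 2 * size - y, -1
        points.append((x, y))
    return points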
If I understand your problem correctly, given a set of sites, you want to test whether the nearest neighbor (for the L1 distance, i.e. the grid distance) of each site is at distance less than a value K.
This is easily obtained for the Euclidean distance by computing the Delaunay triangulation of the set of points: the nearest neighbor of a site is one of its neighbors in the Delaunay triangulation. Interestingly, the L1 distance is always at least the Euclidean distance, and exceeds it by at most a factor of sqrt(2).
It follows that a way of testing your condition is the following:
compute the Delaunay triangulation of the sites
for each site s, start a breadth-first search from s in the triangulation, so that you discover all the vertices at Euclidean distance less than K from s (the Delaunay triangulation has the property that the set of vertices at distance less than K from a given site is connected in the triangulation)
for each site s, among these vertices at distance less than K from s, check if any of them is at L1 distance less than K from s. If not, the property is not satisfied.
This algorithm can be improved in several ways:
the breadth-first search at step 2 should of course be stopped as soon as a site at L1 distance less than K is found.
during the search for a valid neighbor of s, if a site s' is found to be at L1 distance less than K from s, there is no need to look for a valid neighbor for s': s is obviously one of them.
a complete breadth-first search is not needed: after visiting all triangles incident to s, if none of the neighbors of s in the triangulation is a valid neighbor (i.e. a site at L1 distance less than K), denote by (v1,...,vn) the neighbors. There are at most four edges (vi, vi+1) which intersect the horizontal and vertical axis. The search should only be continued through these four (or less) edges. [This follows from the shape of the L1 sphere]
Force the desired condition by construction. Instead of placing all points solely by drawing random numbers, constrain the coordinates as follows:
Randomly place an initial point.
Repeat for the remaining number of points (e.g. 99):
2.1. Randomly select an x-coordinate within some range (e.g. 90) of the previous point.
2.2. Compute the legal range for the y-coordinate that will make it within 100 units of the previous point.
2.3. Randomly select a y-coordinate within that range.
If you want to completely obscure the origin, sort the points by their coordinate pair.
This will not require much overhead vs. pure randomness, but will guarantee that each point is within 100 units of at least one other point (actually, except for the first and last, each point will be within 100 units of two other points).
As a variation on the above, in step 2, randomly choose any already-generated point and use it as the reference instead of the previous point.
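A minimal sketch of this construction with the random-reference variation (Python; names are mine):

import math
import random

def chained_points(n=100, size=1000, radius=100):
    # Each new point is placed within `radius` of a randomly chosen
    # earlier point, so connectivity holds by construction.
    pts = [(random.uniform(0, size), random.uniform(0, size))]  # step 1
    while len(pts) < n:                                         # step 2
        px, py = random.choice(pts)         # reference point (the variation)
        dx = random.uniform(-radius, radius)          # step 2.1
        ylim = math.sqrt(radius * radius - dx * dx)   # step 2.2: legal y-range
        dy = random.uniform(-ylim, ylim)              # step 2.3
        x, y = px + dx, py + dy
        if 0 <= x <= size and 0 <= y <= size:         # keep it on the grid
            pts.append((x, y))
    return pts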
