Okay, I am dealing with a problem that I can't seem to solve. A little help?
Problem: Given a wooden line of length M with n hooks, whose positions are given as an array (0 < p < M). A spring is attached to each hook, and each spring has a metal ball at its other end. All balls have the same radius R and the same mass m, and all springs have the same stiffness coefficient.
How do we find the optimum positions of the balls such that the springs and the system are in equilibrium? The metal balls are not allowed to go past the ends of the line, i.e. the edges of the balls cannot be at < 0 or > M. It is possible to have multiple hooks at the same position in the array.
Assumptions: The given array is always valid.
You can ignore the vertical stretch and consider only the horizontal stretch of each spring; the problem is then 1D in nature.
Limits: O(nlogn) solution or better is sought here.
Example: M = 10, array = [ 4, 4 ], R = 1 ( Diameter is 2 ), optimum ball position = [ 3, 5 ]
What I've tried so far:
Take one hook/ball at a time and create clusters when two balls hit each other; place each cluster symmetrically around the centroid of its hooks. Bottleneck: O(n^2), since balls keep hitting each other.
Put all balls at the overall centroid of the hooks, then recursively solve 3 sub-problems: a) balls being stretched left, b) balls being stretched right, c) balls in between. Bottleneck: the 3 sub-problems may overlap, and handling the overlaps well seems awkward.
Here is a sort of binary search to find the correct position of each ball:
1. Start the balls packed next to each other, in order of their connections, as far left as each can go.
2. Calculate the amount of space the balls have to move (the distance from the right-most ball to the right edge), and use half of this as your starting increment.
3. Calculate the net force on each ball from its spring, its neighbors, and the edges.
4. Move each ball by the increment in the direction of the net force on it, or keep it where it is if there is no net force.
5. If the increment is below the precision you want, or no ball had a net force on it, stop. Otherwise, divide the increment by 2 and go to step 3.
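As an alternative to the iterative relaxation above: if the hooks are sorted and each ball stays in the order of its hook, the equilibrium minimizes total spring energy subject to x[i+1] >= x[i] + 2R and R <= x[i] <= M - R. Substituting y[i] = x[i] - 2R*i turns the non-overlap constraint into "y nondecreasing", which is an isotonic-regression problem solvable in O(n) by pool-adjacent-violators. This is a different technique than the binary search above; the sketch assumes unit stiffness, one ball per hook, and hypothetical function names:

```python
def pava(targets):
    # pool adjacent violators: best nondecreasing fit in least squares
    blocks = []  # each block is (sum, count)
    for v in targets:
        s, c = v, 1
        while blocks and blocks[-1][0] / blocks[-1][1] >= s / c:
            ps, pc = blocks.pop()
            s, c = s + ps, c + pc
        blocks.append((s, c))
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)
    return fit

def ball_positions(M, hooks, R):
    # keep balls in hook order; y[i] = x[i] - 2R*i makes the
    # non-overlap constraint "y nondecreasing"
    hooks = sorted(hooks)
    n = len(hooks)
    y = pava([p - 2 * R * i for i, p in enumerate(hooks)])
    lo, hi = R, M - R - 2 * R * (n - 1)  # box constraints in y-space
    y = [min(max(v, lo), hi) for v in y]
    return [v + 2 * R * i for i, v in enumerate(y)]
```

On the example from the question, ball_positions(10, [4, 4], 1) gives [3.0, 5.0], matching the stated optimum.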
Related
I have an (m*n) matrix in which lie 3 different points. I have to find the minimum number of moves such that all 3 points converge at a point inside the matrix.
Solution(s) I have tried till now:
Brute-force solution: try every point in the matrix and find the point which needs the minimum total moves from the 3 given points.
Bound the three points with a triangle and try only the points within this region; the answer is likely to be the centroid of this triangle.
I would like to know about other optimized solutions for this problem. Thanks.
Each move changes either the x- or y-coordinate by +/- 1, so the vertical and horizontal distances are independent of each other. So: first bring the points to one x-coordinate, then to one y-coordinate. The optimal way of doing this is to move the points with the minimal and maximal x to the x-position of the third point. Repeat the same for y and you are done.
This way, the final point has the x-coordinate of the middle point on the x-axis and the y-coordinate of the middle point on the y-axis (there can be many such minimizing points, but this must be one of them). (If points overlap, either of the overlapping coordinates is the middle one.)
Programmatically take an array of x-values, remove max and min values, same with y and you'll be left with your closest-to-all point.
For Manhattan distance:
Simply take the middle x-coordinate and the middle y-coordinate of the 3 points as the target. By middle I mean the median of the sorted coordinates.
Example:
Input: (1,5), (2,4), (3,6)
The x-points are 1, 2 and 3 (sort to 1, 2, 3). The middle one is 2.
The y-points are 5, 4 and 6 (sort to 4, 5, 6). The middle one is 5.
Thus the point that minimizes the distance is (2,5).
Proof:
If we were to start at the above-mentioned point and move in any direction, the distance to the point we're moving towards would decrease while the distances to the other 2 points would increase: each unit move causes a total decrease of 1 but a total increase of 2. Once we've moved past the last point, all 3 distances increase, giving a total increase of 3 with no decrease. Since every move increases the total distance, the above point is optimal.
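The median rule above is a one-liner to implement; a minimal sketch (the function name is mine, and it generalizes to any number of points):

```python
def meeting_point(points):
    # the median of each coordinate minimizes total Manhattan distance
    xs = sorted(x for x, _ in points)
    ys = sorted(y for _, y in points)
    return xs[len(xs) // 2], ys[len(ys) // 2]
```

For the example input (1,5), (2,4), (3,6) this returns (2, 5).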
For Euclidean distance:
(I realize now that you said minimum moves, thus this is probably not what you want)
This is somewhat more complicated. It involves minimizing this equation:
sqrt((x-x1)^2 + (y-y1)^2) + sqrt((x-x2)^2 + (y-y2)^2) + sqrt((x-x3)^2 + (y-y3)^2)
Because of the sqrt's, one cannot simplify the equation by treating the x and y coordinates independently.
However, the greedy approach is likely to work:
Pick any point (maybe the one that minimizes the Manhattan distance is a good start).
While one of the neighbouring points of the picked point results in lower euclidean distance, pick that point.
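A hedged sketch of that greedy walk (the function names are mine; since the sum of Euclidean distances is convex, a local optimum over the 4 grid neighbours is at least near-optimal):

```python
import math

def total_dist(p, pts):
    # sum of Euclidean distances from p to every point in pts
    return sum(math.hypot(p[0] - q[0], p[1] - q[1]) for q in pts)

def min_euclidean_grid(pts):
    # seed with the Manhattan optimum: the median of each coordinate
    xs = sorted(q[0] for q in pts)
    ys = sorted(q[1] for q in pts)
    cur = (xs[len(xs) // 2], ys[len(ys) // 2])
    best = total_dist(cur, pts)
    improved = True
    while improved:
        improved = False
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            cand = (cur[0] + dx, cur[1] + dy)
            d = total_dist(cand, pts)
            if d < best:       # move to any strictly better neighbour
                cur, best = cand, d
                improved = True
                break
    return cur
```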
I think this is one of the variants of the http://en.wikipedia.org/wiki/Steiner_tree_problem - probably the http://en.wikipedia.org/wiki/Rectilinear_Steiner_tree problem. This looks sufficiently intimidating that I would stick to trying all points within the triangle.
Actually, with just points converging on a single point, I think sashkello is right.
By "group", I mean a set of pixels such that every pixel has at least one adjacent pixel in the same set; the drawing shows an example of a group.
I would like to find the pixel which is having the greatest straight line distance from a designated pixel (for example, the green pixel). And the straight line connecting the two pixels (the red line) must not leave the group.
My solution is to loop through the degrees, simulate the progress of a line from the green pixel at each degree, and see which line travels the farthest.
longestDist = 0
bestDegree = -1
farthestX = -1
farthestY = -1
FOR EACH degree from 0 to 360
    dx = longestDist * cos(degree)
    dy = longestDist * sin(degree)
    IF Point(x+dx, y+dy) does not belong to the group
        // this direction cannot beat the current longest line, so skip it
        Continue with next degree
    END IF
    (farthestX, farthestY) = simulate(x, y, degree)
    d = findDistance(x, y, farthestX, farthestY)
    IF d > longestDist
        longestDist = d
        bestDegree = degree
    END IF
END FOR
It is obviously not the best algorithm. Thus I am asking for help here.
Thank you and sorry for my poor English.
I wouldn't work with angles. I'm pretty sure the largest distance will always be between two pixels at the edge of the set, so I'd trace the outline: from any pixel in the set, go in any direction until you reach the edge of the set. Then move (counter)clockwise along the edge. Do this with every pixel as a starting point and you'll be able to find the largest distance. It's still pretty greedy, but I thought it might give you an alternative starting point to improve upon.
Edit: What just came to mind: suppose you have a start pixel s and an end pixel e. In the first iteration using s, the corresponding e will be adjacent (the next one along the edge in clockwise direction). As you iterate along the edge, the case might occur that there is no straight line through the set between s and e; in that case the line will hit another part of the set edge at some pixel p. You can then continue the iteration of the edge at that pixel (e = p).
Edit 2: And if you hit a p, you'll know that there can be no longer distance between s and e, so in the next iteration of s you can skip that whole part of the edge (between s and p) and start at p again.
Edit 3: Use the above method to find the first p. Take that p as the next s and continue. Repeat until you reach your first p again. The max distance will be between two of those p, unless the edge of the set is convex, in which case you won't find a p.
Disclaimer: this is untested; these are just ideas off the top of my head. No drawings have been made to substantiate my claims, and everything might be wrong (i.e. think about it for yourself before you implement it ;D)
First, note that the angle discretization in your algorithm may depend on the size of the grid. If the step is too large, you can miss certain cells, if it is too small, you will end up visiting the same cell again and again.
I would suggest that you enumerate the cells in the region and test the condition for each one individually instead. The enumeration can be done using breadth-first or depth-first search (I think the latter would be preferable, since it will allow one to establish a lower bound quickly and do some pruning).
One can maintain the farthest point X found so far, and for each new point in the region check whether (a) the point is farther away than the best found so far and (b) it is connected to the origin by a straight line passing only through cells of the region. If both conditions are satisfied, update X; otherwise continue the search. If condition (a) is not satisfied, condition (b) doesn't have to be checked.
The complexity of this solution would be O(N*M), where N is the number of cells in the region and M is the larger dimension of the region (max(width,height)). If performance is of essence, more sophisticated heuristics can be applied, but for a reasonably sized grid this should work fine.
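Condition (b), the straight-line visibility test, can be implemented by rasterizing the segment (e.g. with Bresenham's algorithm) and verifying that every visited cell belongs to the region. A sketch, where `in_region` is an assumed membership predicate:

```python
def line_in_region(x0, y0, x1, y1, in_region):
    # Bresenham: visit every cell on the segment; all must be in the region
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    x, y = x0, y0
    while True:
        if not in_region(x, y):
            return False
        if (x, y) == (x1, y1):
            return True
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
```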
Search by pixel, not by slope. Pseudocode:
bestLength = 0
for each pixel in pixels
    currentLength = findDistance(x, y, pixel.x, pixel.y)
    if currentLength > bestLength
        if goodLine(x, y, pixel.x, pixel.y)
            bestLength = currentLength
            bestX = pixel.x
            bestY = pixel.y
        end
    end
end
You might want to sort pixels descending by |dx| + |dy| before that.
Use two data structures:
One that contains the pixels sorted by angle.
A second one sorted by distance (for fast access, this should also contain "pointers" into the first data structure).
Walk through the angle-sorted one, and check for each pixel that the line is within the region. Some pixels will have the same angle, so you can walk from the origin along the line and go until you leave the region. You can eliminate all the pixels beyond that point. Also, whenever the maximum distance increases, remove all pixels which have a shorter distance.
Treat your region as a polygon instead of a collection of pixels. From this you can get a list of line segments (the edges of your polygon).
Draw a line from your start pixel to each pixel you are checking. The longest line that does not intersect any of the line segments of your polygon indicates your most distant pixel that is reachable by a straight line from your pixel.
There are various optimizations you can make to this, and a few edge cases to check, but let me know if you understand the idea before I post those... in particular, do you understand what I mean by treating it as a polygon instead of a collection of pixels?
To add: this approach will be significantly faster than any angle-based or "walking" approach for all but the smallest collections of pixels. You can further optimize, because your problem is equivalent to finding the most distant endpoint of a polygon edge that can be reached via an unintersected straight line from your start point. This can be done in O(N^2), where N is the number of edges. Note that N will be much, much smaller than the number of pixels, and many of the proposed algorithms that use angles and/or pixel iteration will instead depend on the number of pixels.
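A hedged sketch of the polygon idea: take the boundary as an edge list and report the farthest polygon vertex reachable from the start point without properly crossing any edge. It only considers vertices as candidates and ignores degenerate cases (a line passing exactly through a vertex), which are among the edge cases mentioned above; all names are mine:

```python
import math

def cross(o, a, b):
    # z-component of (a - o) x (b - o)
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def properly_intersect(p1, p2, q1, q2):
    # strict crossing: endpoints of each segment on opposite sides of the other
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def farthest_visible_vertex(start, polygon):
    # polygon: vertices in order; edges join consecutive vertices
    n = len(polygon)
    edges = [(polygon[i], polygon[(i + 1) % n]) for i in range(n)]
    best_d, best_v = -1.0, None
    for v in polygon:
        d = math.dist(start, v)
        if d <= best_d:
            continue  # cannot beat the current best, skip the visibility test
        if not any(properly_intersect(start, v, a, b) for a, b in edges):
            best_d, best_v = d, v
    return best_v
```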
I'm having a hard time finding an admissible heuristic for the Hungarian Rings puzzle. I'm planning to use the IDA* algorithm to solve it, and am writing the program in Visual Basic. All I am lacking is how to implement the actual solving of the puzzle. I've implemented both the left and right rings as their own arrays and have functions that rotate each ring clockwise and counterclockwise. I'm not asking for code, just somewhere to get started is all.
Here are the two ring arrays:
Dim leftRing(19) As Integer
' leftRing(16) is bottom intersection and leftRing(19) is top intersection
Dim rightRing(19) As Integer
' rightRing(4) is top intersection and rightRing(19) is bottom intersection
In the arrays, I store the following as the values for each color:
Red value = 1 Yellow = 2 Blue = 3 and Black = 4
I suggest counting "errors" in each ring separately: how many balls need to be replaced to make the ring solved (one 10-color, one 9-color, and one lone ball from the other 9-color). At most two balls can be fixed by one rotation, and then another rotation is needed to fix another two, so the distance of each ring individually is 2n-1, where n is half the number of bad positions; take the larger of the two. You can iterate over all twenty positions when looking for the one with the fewest errors, but I suppose there's a better way to compute this metric (apart from simple pruning).
Update:
The discussion with Gareth Reed points to the following heuristic:
For each ring separately, count:
the number of color changes. The target amount is three color changes per ring, and at most four color changes may be eliminated at a time. Credits go to Gareth for this metric.
the count of different colors, neglecting their position. There should be: 10 balls of one 10-color, 9 balls of one 9-color and one ball of the other 9-color. At most 2 colors can be changed at a time.
The second heuristic can be split into three parts:
there should be 10 10-balls and 10 9-balls. Balls over ten need to be replaced.
there should be only one color of 10-balls. Balls of the minor color need to be replaced.
there should be only one ball of a 9-color. Other balls of that color need to be replaced. If all are the same color and the 9-color is not deficient, one additional ball needs to be replaced.
Take the larger of the two estimates. Note that you will need to alternate the rings, so 2n-1 moves are actually needed for n replacements. If both estimates are equal, or the larger one is for the most recently moved ring, add one more: one of the rings will not be improved by the first move.
Prune all moves that rotate the same ring twice (assuming a move metric that allows large rotations). These have already been explored.
This should avoid all large local minima.
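The color-change part of this heuristic is easy to compute from the ring arrays; a hedged sketch (function names are mine, and it assumes, per the discussion above, that a solved ring has exactly three color changes and that one move removes at most four):

```python
import math

def color_changes(ring):
    # adjacent pairs (cyclically) whose colors differ
    n = len(ring)
    return sum(ring[i] != ring[(i + 1) % n] for i in range(n))

def ring_lower_bound(ring):
    # each move eliminates at most 4 color changes; the target is 3
    return max(0, math.ceil((color_changes(ring) - 3) / 4))
```

For example, a solved ring such as [1]*10 + [2]*9 + [3] has exactly 3 color changes and a lower bound of 0.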
I have a 3D grid of size AxBxC with equal distance, d, between the points in the grid. Given a number of points, what is the best way of finding the distance to the closest point for each grid point (Every grid point should contain the distance to the closest point in the point cloud) given the assumptions below?
Assume that A, B and C are quite big in relation to d, giving a grid of maybe 500x500x500 and that there will be around 1 million points.
Also assume that if the distance to the nearest point exceeds a distance D, we do not care about the nearest-point distance, and it can safely be set to some large number (D is maybe 2 to 10 times d).
Since there will be a great number of grid points and points to search from, a simple exhaustive:
for each grid point:
    for each point:
        if distance between points < minDistance:
            minDistance = distance between points
is not a good alternative.
I was thinking of doing something along the lines of:
create a container of size A*B*C where each element holds a container of points
for each point:
    define indexX = round((point position x - grid min position x)/d)
    // same for y and z
    add the point to the correct index of the container
for each grid point:
    search the container of that grid point and find the closest point
    if no points in container and D > 0.5d:
        search the 26 containers nearest to the grid point for a closest point
        ... continue with the next layer until a point is found or the distance
            to that layer is greater than D
Basically: put the points in buckets and do a radial search outwards until a point is found for each grid point. Is this a good way of solving the problem, or are there better/faster ways? A solution which is good for parallelisation is preferred.
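The bucket scheme above can be sketched in Python (names are mine; cells are kept in a dict keyed by integer cell coordinates, and shells are expanded in Chebyshev radius until no closer point can exist):

```python
import math
from collections import defaultdict

def build_buckets(points, d):
    # hash each sample point into its containing cell of side d
    buckets = defaultdict(list)
    for p in points:
        buckets[(int(p[0] // d), int(p[1] // d), int(p[2] // d))].append(p)
    return buckets

def nearest_distance(grid_pt, buckets, d, D):
    gx, gy, gz = (int(c // d) for c in grid_pt)
    best = float('inf')
    for r in range(int(math.ceil(D / d)) + 2):
        # scan the shell of cells at Chebyshev radius r
        for dx in range(-r, r + 1):
            for dy in range(-r, r + 1):
                for dz in range(-r, r + 1):
                    if max(abs(dx), abs(dy), abs(dz)) != r:
                        continue
                    for p in buckets.get((gx + dx, gy + dy, gz + dz), ()):
                        best = min(best, math.dist(grid_pt, p))
        if best <= r * d:  # points in farther shells are at least r*d away
            break
    return best if best <= D else float('inf')
```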
Actually, I think I have a better way to go, as the number of grid points is much larger than the number of sample points. Let |Grid| = N, |Samples| = M, then the nearest neighbor search algorithms will be something like O(N lg M), as you need to look up all N grid points, and each lookup is (best case) O(lg M).
Instead, loop over the sample points. Store for each grid point the closest sample point found so far. For each sample point, just check all grid points within distance D of the sample to see if the current sample is closer than any previously processed samples.
The running time is then O(N + (D/d)^3 M) which should be better when D/d is small.
Even when D/d is larger, you might still be OK if you can work out a cutoff strategy. For example, if we're checking a grid point distance 5 from our sample, and that grid point is already marked as being distance 1 from a previous sample, then all grid points "beyond" that grid point don't need to be checked because the previous sample is guaranteed to be closer than the current sample we're processing. All you have to do is (and I don't think it is easy, but should be doable) define what "beyond" means and figure out how to iterate through the grid to avoid doing any work for areas "beyond" such grid points.
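A hedged sketch of this sample-centric pass (without the "beyond" pruning, which as noted is the hard part); it assumes grid point (i,j,k) sits at (i*d, j*d, k*d), and the names are mine:

```python
import math

def distance_field(shape, samples, d, D):
    # dist[i][j][k] = distance to nearest sample within D, else infinity
    A, B, C = shape
    INF = float('inf')
    dist = [[[INF] * C for _ in range(B)] for _ in range(A)]
    r = int(D // d) + 1  # cell radius of the stamp around each sample
    for (sx, sy, sz) in samples:
        ci, cj, ck = int(sx // d), int(sy // d), int(sz // d)
        for i in range(max(0, ci - r), min(A, ci + r + 1)):
            for j in range(max(0, cj - r), min(B, cj + r + 1)):
                for k in range(max(0, ck - r), min(C, ck + r + 1)):
                    dd = math.dist((i * d, j * d, k * d), (sx, sy, sz))
                    if dd <= D and dd < dist[i][j][k]:
                        dist[i][j][k] = dd
    return dist
```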
Take a look at octrees. They're a data structure often used to partition 3d spaces efficiently in such a manner as to improve efficiency of lookups for objects that are near each other spatially.
You can build a nearest neighbor search structure (Wikipedia) on your sample points, then ask it for each of your grid points. There are a bunch of algorithms mentioned on the Wikipedia page. Perhaps octrees, kd-trees, or R-trees would be appropriate.
One approach, which may or may not suit your application, is to recast your thinking and define each grid 'point' to be the centre of a cube which divides your space into cells. You then have a 3D array of such cells and store the points in the cells -- choose the most appropriate data structure. To use your own words, put the points in buckets in the first place.
I guess that you may be running some sort of large scale simulation and the approach I suggest is not unusual in such applications. At each time step (if I've guessed correctly) you have to recalculate the distance from the cell to the nearest point, and move points from cells to cells. This will parallelise very easily.
EDIT: Googling around for particle-particle and particle-particle particle-mesh may throw up some ideas for you.
A note on Keith Randall's method,
expanding shells or cubes around the startpoints:
One can expand in various orders. Here's some python-style pseudocode:
S = set of 1M start points
near = grid 500x500x500 -> nearest s in S
       initially s for s in S, else 0
for r in 1 .. D:
    for s in S:
        nnew = 0
        for p in shell of radius r around s:
            if near[p] == 0:
                near[p] = s
                nnew += 1
        if nnew == 0:
            remove s from S  # bonk, stop expanding from s
"Stop expanding from s early" is fine in 1d (bonk left, bonk right);
but 2d / 3d shells are irregular.
It's easier / faster to do whole cubes in one pass:
near = grid 500x500x500 -> { dist, nearest s in S }
       initially { 0, s } for s in S, else { infinity, 0 }
for s in S:
    for p in approximatecube of radius D around s:
        if |p - s| < near[p].dist:  # is s nearer?
            near[p] = { |p - s|, s }
Here "approximatecube" may be a full DxDxD cube,
or you could lop off the corners like (here 2d)
0 1 2 3 4
1 1 2 3 4
2 2 3 4 4
3 3 4 4
4 4 4
Also, fwiw: with erik's numbers there are on average 500^3 / 1M ≈ 2^7 ≈ 5^3 empty cells per sample point. So at first I thought that 5x5x5 cubes around the 1M sample points would cover most of the whole grid. Not so: roughly 1/e of the grid points stay empty (Poisson distribution).
Given a collection of random points within a grid, how do you efficiently check that they all lie within a fixed range of other points? That is: pick any one point, and you can then navigate to any other point in the grid.
To clarify further: if you have a 1000 x 1000 grid and randomly place 100 points in it, how can you prove that every point is within 100 units of a neighbour and that all points are reachable by walking from one point to another?
I've been writing some code and ran into an interesting problem: very occasionally (just once so far) it creates an island of points which exceeds the maximum range from the rest of the points. I need to fix this problem, but brute force doesn't appear to be the answer.
It's being written in Java, but I am good with either pseudo-code or C++.
I like #joel.neely's construction approach, but if you want to ensure a more uniform density this is more likely to work (though it will probably still produce more of a cluster than an overall uniform density):
Randomly place an initial point P_0 by picking x,y from a uniform distribution within the valid grid
For i = 1:N-1
Choose random j = uniformly distributed from 0 to i-1, identify point P_j which has been previously placed
Choose random point P_i where distance(P_i,P_j) < 100, by repeating the following until a valid P_i is chosen in substep 4 below:
1. Choose (dx,dy), each uniformly distributed from -100 to +100.
2. If dx^2+dy^2 > 100^2, the distance is too large (this fails about 21.5% of the time); go back to substep 1.
3. Calculate candidate coords(P_i) = coords(P_j) + (dx,dy).
4. P_i is valid if it lies inside the overall valid grid.
Just a quick thought: divide the grid into 50x50 patches, and when you place the initial points, also record which patch each belongs to. Now, when you want to check whether a point is within 100 pixels of the others, you can simply check its patch plus the 8 surrounding ones and see whether the point counts match up.
E.g., you know you have 100 random points; since each patch records the number of points it contains, you can simply sum up and see whether it is indeed 100, which would mean all points are reachable.
I'm sure there are other ways, though.
EDIT: The distance from the upper-left corner to the lower-right corner of a 50x50 patch is sqrt(50^2 + 50^2) ≈ 70.7 units, so you'd probably have to choose a smaller patch size. Maybe 35 or 36 will do (50 = sqrt(x^2 + x^2) => x = 35.355...).
Find the convex hull of the point set, and then use the rotating calipers method. The two most distant points on the convex hull are the two most distant points in the set. Since all other points are contained in the convex hull, they are guaranteed to be closer than the two extremal points.
As far as evaluating existing sets of points, this looks like a type of Euclidean minimum spanning tree problem. The wikipedia page states that this is a subgraph of the Delaunay triangulation; so I would think it would be sufficient to compute the Delaunay triangulation (see prev. reference or google "computational geometry") and then the minimum spanning tree and verify that all edges have length less than 100.
From reading the references it appears that this is O(N log N), maybe there is a quicker way but this is sufficient.
A simpler (but probably less efficient) algorithm would be something like the following:
1. Given: the points are in an array from index 0 to N-1.
2. Sort the points in x-coordinate order, which is O(N log N) for an efficient sort.
3. Initialize i = 0.
4. Increment i. If i == N, stop with success. (All points can be reached from another within radius R.)
5. Initialize j = i.
6. Decrement j.
7. If j < 0 or P[i].x - P[j].x > R, stop with failure. (There is a gap, and all points cannot be reached from each other within radius R.)
8. Otherwise, P[i].x and P[j].x are within R of each other. Check whether point P[j] is sufficiently close to P[i]: if (P[i].x-P[j].x)^2 + (P[i].y-P[j].y)^2 < R^2, then point P[i] is reachable from one of the previous points within radius R; go back to step 4.
9. Keep trying: go back to step 6.
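A Python sketch of the scan above (names are mine). Note that a "success" is a real certificate of connectivity, since every point links back to an earlier one; a "failure", however, can occur for some connected sets where a point's only near neighbour lies to its right in x-order, so a failure should trigger a full connectivity check:

```python
def reachable_by_sorted_scan(points, R):
    # points: list of (x, y); success certifies connectivity
    pts = sorted(points)
    for i in range(1, len(pts)):
        j = i - 1
        linked = False
        # scan backwards while the x-gap alone does not already exceed R
        while j >= 0 and pts[i][0] - pts[j][0] <= R:
            dx = pts[i][0] - pts[j][0]
            dy = pts[i][1] - pts[j][1]
            if dx * dx + dy * dy < R * R:
                linked = True
                break
            j -= 1
        if not linked:
            return False
    return True
```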
Edit: this could be modified to something that should be O(N log N) but I'm not sure:
1. Given: the points are in an array from index 0 to N-1.
2. Sort the points in x-coordinate order, which is O(N log N) for an efficient sort.
3. Maintain a sorted set YLIST of points in y-coordinate order, initializing YLIST to the set {P[0]}. We'll sweep the x-coordinate from left to right, adding points one by one to YLIST and removing points whose x-coordinate is too far away from the newly-added point.
4. Initialize i = 0, j = 0.
(Loop invariant, always true at this point: all points P[k] where k <= i form a network where they can be reached from each other within radius R. All points within YLIST have x-coordinates between P[i].x-R and P[i].x.)
5. Increment i. If i == N, stop with success.
6. If P[i].x - P[j].x <= R, go to step 10. (This is automatically true if i == j.)
7. Point P[j] is not reachable from point P[i] within radius R. Remove P[j] from YLIST (this is O(log N)).
8. Increment j, go to step 6.
9. At this point, all points P[j] with j < i and x-coordinates between P[i].x-R and P[i].x are in the set YLIST.
10. Add P[i] to YLIST (this is O(log N)), and remember the index k within YLIST where YLIST[k] == P[i].
11. Points YLIST[k-1] and YLIST[k+1] (if they exist; P[i] may be the only element of YLIST, or it may be at either end) are the closest points in YLIST to P[i].
12. If point YLIST[k-1] exists and is within radius R of P[i], then P[i] is reachable within radius R from at least one of the previous points. Go to step 5.
13. If point YLIST[k+1] exists and is within radius R of P[i], then P[i] is reachable within radius R from at least one of the previous points. Go to step 5.
14. P[i] is not reachable from any of the previous points. Stop with failure.
New and Improved ;-)
Thanks to Guillaume and Jason S for comments that made me think a bit more. That has produced a second proposal whose statistics show a significant improvement.
Guillaume remarked that the earlier strategy I posted would lose uniform density. Of course, he is right, because it's essentially a "drunkard's walk" which tends to orbit the original point. However, uniform random placement of the points yields a significant probability of failing the "path" requirement (all points being connectible by a path with no step greater than 100). Testing for that condition is expensive; generating purely random solutions until one passes is even more so.
Jason S offered a variation, but statistical testing over a large number of simulations leads me to conclude that his variation produces patterns that are just as clustered as those from my first proposal (based on examining mean and std. dev. of coordinate values).
The revised algorithm below produces point sets whose stats are very similar to those of purely (uniform) random placement, but which are guaranteed by construction to satisfy the path requirement. Unfortunately, it's a bit easier to visualize than to explain verbally. In effect, it requires the points to stagger randomly in a vaguely consistent direction (NE, SE, SW, NW), only changing direction when "bouncing off a wall".
Here's the high-level overview:
Pick an initial point at random, set horizontal travel to RIGHT and vertical travel to DOWN.
Repeat for the remaining number of points (e.g. 99 in the original spec):
2.1. Randomly choose dx and dy whose distance is between 50 and 100. (I assumed Euclidean distance -- square root of sums of squares -- in my trial implementation, but "taxicab" distance -- sum of absolute values -- would be even easier to code.)
2.2. Apply dx and dy to the previous point, based on horizontal and vertical travel (RIGHT/DOWN -> add, LEFT/UP -> subtract).
2.3. If either coordinate goes out of bounds (less than 0 or at least 1000), reflect that coordinate around the boundary violated, and replace its travel with the opposite direction. This means four cases (2 coordinates x 2 boundaries):
2.3.1. if x < 0, then x = -x and reverse LEFT/RIGHT horizontal travel.
2.3.2. if 1000 <= x, then x = 1999 - x and reverse LEFT/RIGHT horizontal travel.
2.3.3. if y < 0, then y = -y and reverse UP/DOWN vertical travel.
2.3.4. if 1000 <= y, then y = 1999 - y and reverse UP/DOWN vertical travel.
Note that the reflections under step 2.3 are guaranteed to leave the new point within 100 units of the previous point, so the path requirement is preserved. However, the horizontal and vertical travel constraints force the generation of points to "sweep" randomly across the entire space, producing more total dispersion than the original pure "drunkard's walk" algorithm.
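A hedged implementation of the sweep, using continuous coordinates in [0, size) with the reflection x -> 2*size - x at the upper wall (the "1999 - x" in the steps above is the integer-coordinate version of the same reflection); names and defaults are mine:

```python
import math
import random

def sweep_points(n, size=1000.0, dmin=50.0, dmax=100.0, seed=None):
    rng = random.Random(seed)
    x, y = rng.uniform(0, size), rng.uniform(0, size)
    hdir, vdir = 1, 1  # start travelling RIGHT and DOWN (+x, +y)
    pts = [(x, y)]
    for _ in range(n - 1):
        # rejection-sample a step whose Euclidean length is in [dmin, dmax]
        while True:
            dx, dy = rng.uniform(0, dmax), rng.uniform(0, dmax)
            if dmin <= math.hypot(dx, dy) <= dmax:
                break
        x += hdir * dx
        y += vdir * dy
        if x < 0:
            x, hdir = -x, -hdir              # bounce off the left wall
        elif x >= size:
            x, hdir = 2 * size - x, -hdir    # bounce off the right wall
        if y < 0:
            y, vdir = -y, -vdir
        elif y >= size:
            y, vdir = 2 * size - y, -vdir
        pts.append((x, y))
    return pts
```

The per-coordinate reflection never increases the step size, so consecutive points stay within dmax of each other.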
If I understand your problem correctly, given a set of sites, you want to test whether the nearest neighbor (for the L1 distance, i.e. the grid distance) of each site is at distance less than a value K.
This is easily obtained for the Euclidean distance by computing the Delaunay triangulation of the set of points: the nearest neighbor of a site is one of its neighbor in the Delaunay triangulation. Interestingly, the L1 distance is greater than the Euclidean distance (within a factor sqrt(2)).
It follows that a way of testing your condition is the following:
compute the Delaunay triangulation of the sites
for each site s, start a breadth-first search from s in the triangulation, so that you discover all the vertices at Euclidean distance less than K from s (the Delaunay triangulation has the property that the set of vertices at distance less than K from a given site is connected in the triangulation)
for each site s, among these vertices at distance less than K from s, check if any of them is at L1 distance less than K from s. If not, the property is not satisfied.
This algorithm can be improved in several ways:
the breadth-first search at step 2 should of course be stopped as soon as a site at L1 distance less than K is found.
during the search for a valid neighbor of s, if a site s' is found to be at L1 distance less than K from s, there is no need to look for a valid neighbor for s': s is obviously one of them.
a complete breadth-first search is not needed: after visiting all triangles incident to s, if none of the neighbors of s in the triangulation is a valid neighbor (i.e. a site at L1 distance less than K), denote by (v1,...,vn) the neighbors. There are at most four edges (vi, vi+1) which intersect the horizontal and vertical axis. The search should only be continued through these four (or less) edges. [This follows from the shape of the L1 sphere]
Force the desired condition by construction. Instead of placing all points solely by drawing random numbers, constrain the coordinates as follows:
Randomly place an initial point.
Repeat for the remaining number of points (e.g. 99):
2.1. Randomly select an x-coordinate within some range (e.g. 90) of the previous point.
2.2. Compute the legal range for the y-coordinate that will make it within 100 units of the previous point.
2.3. Randomly select a y-coordinate within that range.
If you want to completely obscure the origin, sort the points by their coordinate pair.
This will not require much overhead vs. pure randomness, but will guarantee that each point is within 100 units of at least one other point (actually, except for the first and last, each point will be within 100 units of two other points).
As a variation on the above, in step 2, randomly choose any already-generated point and use it as the reference instead of the previous point.
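A hedged sketch of this construction, with the variation in the last paragraph available via a flag; names and defaults (1000x1000 grid, x-range 90, radius 100) follow the example values above:

```python
import math
import random

def chained_points(n, size=1000.0, xrange_=90.0, R=100.0,
                   chain_to_previous=True, seed=None):
    rng = random.Random(seed)
    pts = [(rng.uniform(0, size), rng.uniform(0, size))]
    for _ in range(n - 1):
        # reference point: the previous one, or (variation) any earlier one
        px, py = pts[-1] if chain_to_previous else rng.choice(pts)
        x = rng.uniform(max(0.0, px - xrange_), min(size, px + xrange_))
        # legal y-range keeping the new point within R of the reference
        dy = math.sqrt(R * R - (x - px) ** 2)
        y = rng.uniform(max(0.0, py - dy), min(size, py + dy))
        pts.append((x, y))
    return pts
```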