I need to generate random numbers within an interval that are separated by some distance ('r'). These numbers will represent the centers of circles; I want the circles to be at random locations but not overlapping, which is why the separation condition. One approach I thought of is to divide the range into intervals of length 'r' (the radius of a circle), take every other interval from that set to get a set of alternating intervals (S), choose intervals randomly from S, and then choose a number randomly within each chosen interval. Could you suggest an approach that would give better randomness?
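For reference, the alternating-interval scheme described above can be sketched as follows (function and parameter names are mine; this illustrates the question's own approach, not a better one):

```python
import random

# Sketch of the alternating-interval idea from the question (names are mine):
# split [lo, hi] into slots of length r, keep every other slot, pick k of
# those at random, and draw one uniform center inside each chosen slot.
# Centers in distinct alternating slots are always at least r apart.
def alternating_centers(lo, hi, r, k, rng=random):
    n_slots = int((hi - lo) // r)
    alternating = range(0, n_slots, 2)          # every other interval
    chosen = rng.sample(list(alternating), k)   # k distinct slots
    return sorted(lo + (i + rng.random()) * r for i in chosen)

centers = alternating_centers(0.0, 100.0, r=5.0, k=5)
```

One caveat: circles of radius r only avoid overlapping when their centers are at least 2*r apart, so the same scheme with slot length 2*r is what actually guarantees non-overlap.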
I have two sets of 3D points and I want to find the closest point in the second set for each point in the first set. In a more difficult case, the sets may have different numbers of points, and I need to find the closest pairs of points. I'm not sure what this problem is called, but I have some brute-force ideas for solving it. For example, I could calculate the distance between all pairs of points and choose the pairs with the shortest total distance. The maximum number of points in each set is 20, so I don't need the most efficient solution.
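At these sizes both variants can be brute-forced directly (a sketch; function names are mine, and for the 20-vs-20 matching case the permutation loop should really be replaced by the Hungarian algorithm, e.g. scipy.optimize.linear_sum_assignment):

```python
import math
from itertools import permutations

A = [(0, 0, 0), (5, 5, 5)]
B = [(6, 5, 5), (0, 1, 0), (9, 9, 9)]

# Easy case: for each point in A, the index of its closest point in B.
def nearest_in_b(a_pts, b_pts):
    return [min(range(len(b_pts)), key=lambda j: math.dist(a, b_pts[j]))
            for a in a_pts]

# Harder case: assign each A-point a distinct B-point, minimising the total
# distance, by trying every assignment (fine for small sets only).
def best_matching(a_pts, b_pts):
    def cost(perm):
        return sum(math.dist(a, b_pts[j]) for a, j in zip(a_pts, perm))
    best = min(permutations(range(len(b_pts)), len(a_pts)), key=cost)
    return list(best), cost(best)
```

The assignment-with-minimum-total-distance version is usually called the assignment problem, which is why the Hungarian algorithm applies.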
I'm designing a program where each of N agents is assigned a value K. There are N fixed locations, each with coordinates (x,y), and each location is assigned one agent.
What algorithm could I use to distribute all agents among the locations such that the linear distance between the agents with the highest values of K is maximized? (Specifically between the agents in the highest quintile of K values.)
If it matters, N will likely fall in the range of 10-30.
Google tells me that (30 choose 6) = 593775 so if you can work out a formula that tells you how good each possible choice of K fixed locations from N is, you can probably afford to evaluate it for all possible choices.
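A sketch of that exhaustive evaluation (scoring each choice by its smallest pairwise distance is my assumption; any other "how good" formula plugs in the same way):

```python
import math
from itertools import combinations

# Try every size-k subset of locations and keep the one whose closest pair
# is farthest apart (i.e. maximise the minimum pairwise distance).
def best_spread(locations, k):
    def score(subset):
        return min(math.dist(p, q) for p, q in combinations(subset, 2))
    return max(combinations(locations, k), key=score)

pts = [(0, 0), (1, 0), (0, 1), (10, 0), (0, 10), (10, 10)]
chosen = best_spread(pts, 3)
```

With N = 30 and K = 6 this evaluates all 593775 subsets, each in O(K^2) distance checks, which is well within budget.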
Here is a heuristic for larger parameter values. Calculate all distances between pairs of points and sort them into increasing order. Read out pairs from this in order and merge groups of points linked by each pair, using Union-Find to keep track of the groups of points created in this way. Stop when one of the groups reaches the required size, and that group is your answer.
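A minimal sketch of that heuristic (plain array-based Union-Find; helper names are mine):

```python
import math
from itertools import combinations

# Merge point pairs in order of increasing distance with Union-Find and stop
# as soon as one group reaches the required size.
def grow_group(points, size):
    parent = list(range(len(points)))

    def find(i):                     # root lookup with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    pairs = sorted(combinations(range(len(points)), 2),
                   key=lambda ij: math.dist(points[ij[0]], points[ij[1]]))
    for i, j in pairs:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            group = [k for k in range(len(points)) if find(k) == rj]
            if len(group) >= size:
                return group
    return list(range(len(points)))
```

Sorting the O(N^2) pairs dominates the cost, which is still trivial for N up to 30.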
I have distributions of point distances to a parallel line. Each distribution has a more densely populated area, which represents the point channel. I would like to extract the minimum and maximum, represented by the red lines in the graphs. The eye can do it easily, but how can it be done robustly with an algorithm?
The x axis represents the perpendicular distance of the points to the line from 0 to 100m.
The y axis represents the number of points that have their distance in a certain bin.
Example 1
Example 2
Since the distribution comes from a set of distances from points to a line, and the values are in order, you may try to compute the normal distribution that models your samples. From there, take as margins (your red bars) the mean +/- x*sigma, where x can be whatever value you want (maybe 1 or 2).
If the points were not in order, you could instead take some percentile (0.25, for example) of the full list of values as a threshold, and assume the populated part of the distribution starts at values above that percentile.
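The normal-fit variant is a few lines with the statistics module (a sketch; the function name is mine, and x = 2 is just one choice):

```python
import statistics

# Fit a normal to the distances and take mean +/- x*sigma as the margins
# (the red bars).
def margins(distances, x=2.0):
    mu = statistics.fmean(distances)
    sigma = statistics.pstdev(distances)
    return mu - x * sigma, mu + x * sigma

dists = [45, 50, 55, 50, 48, 52, 47, 53, 49, 51]
lo, hi = margins(dists)
```

For the percentile variant, statistics.quantiles(distances, n=100) gives the cut points directly.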
I have been trying to figure out how to write a function to bin a sample of data based on its density (number of occurrences / length of edge), but there are not a lot of examples out there.
The output would give a vector of edges where both:
the number of bins is determined by how many are required to separate data of different densities by a threshold (maybe 40%?),
and the lengths of the edges are determined by whether the adjacent data groups have similar density (similar densities are grouped together, but if a neighboring bin is 40% more or less dense, it requires another bin).
So to illustrate my point, below is a simple example:
I have data values that range from 1 to 10, and I have 10 observations of them, where x = [1,2,3,4,5,5,5,6,6,7];
x would result in a range with edges [1,5,6,7,8], so there are four bins, just because the bins represent different density clusters.
Just to mention my actual data is continuous, any help is appreciated.
I thought of a preliminary algorithm for large data samples:
1. Sort the data in ascending order.
2. Group the data so that each group has at least 10 elements.
3. Calculate and compare densities, grouping similar ones together.
I got stuck on the 3rd point, where I am not sure how to group them effectively. My obstacle is the case where the density increases slowly but steadily, e.g. densities of 1,2,3,4,5,6,7,8,9,10.
Where do I call it a break and say that one group has a different density from another?
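One way to unblock step 3 is to compare each candidate group against the density of the bin merged so far, rather than only against its immediate neighbour; a gradual ramp then still breaks once the running density drifts past the threshold. A sketch (the seed size and the 40% threshold are the question's parameters; the merge rule is my interpretation, and it will not reproduce the example edges exactly):

```python
# Greedy density binning: seed small groups of sorted values, then merge a
# neighbour into the current bin only while their densities stay within the
# threshold of each other.
def density_bins(data, seed_size=2, threshold=0.4):
    xs = sorted(data)
    groups = [xs[i:i + seed_size] for i in range(0, len(xs), seed_size)]

    def density(g):
        width = (g[-1] - g[0]) or 1e-9   # guard against identical values
        return len(g) / width

    merged = [groups[0]]
    for g in groups[1:]:
        a, b = density(merged[-1]), density(g)
        if abs(a - b) / max(a, b) <= threshold:
            merged[-1] = merged[-1] + g  # similar density: same bin
        else:
            merged.append(g)             # too different: start a new bin
    return [g[0] for g in merged] + [merged[-1][-1]]

edges = density_bins([1, 2, 3, 4, 10, 10.1, 10.2, 10.3])
```

Here the four sparse values and the four tightly packed ones end up in two bins, giving three edges.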
I need algorithm ideas for generating points in 2D space with defined minimum and maximum possible distances between points.
Basically, I want a good way to insert a point into a 2D space already filled with points, such that the new point has a random location but is also more than MINIMUM_DISTANCE_NUM and less than MAXIMUM_DISTANCE_NUM away from the nearest points.
I need it for a game, so it should be fast and not depend on an unbounded number of random retries.
Store the set of points in a Kd tree. Generate a new point at random, then examine its nearest neighbors which can be looked up quickly in the Kd tree. If the point is accepted (i.e. MIN_DIST < nearest neighbor < MAX_DIST), then add it to the tree.
I should note that this will work best under conditions where the points are not too tightly packed, i.e. MIN * N << L where N is the number of points and L is the dimension of the box you are putting them in. If this is not true, then most new points will be rejected. But think about it, in this limit, you're packing marbles into a box, and the arrangement can't be very "random" at all above a certain density.
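A dependency-free sketch of that accept/reject loop (a brute-force nearest-neighbour scan stands in for the Kd tree lookup; names are mine):

```python
import math
import random

# Draw candidate points until one lands with its nearest neighbour inside
# (min_d, max_d); a k-d tree would make the min() lookup O(log n).
def try_insert(points, min_d, max_d, box, rng=random, tries=1000):
    for _ in range(tries):
        p = (rng.uniform(0, box), rng.uniform(0, box))
        if not points:
            return p                      # first point: nothing to violate
        nearest = min(math.dist(p, q) for q in points)
        if min_d < nearest < max_d:
            return p
    return None                           # likely too densely packed

pts = []
for _ in range(10):
    p = try_insert(pts, 1.0, 5.0, box=20.0)
    if p is not None:
        pts.append(p)
```

The `tries` cap is what keeps the rejection loop from running forever in the tightly packed regime described above.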
You could use a 2D regular grid of points (P0, P1, P2, P3, ..., P(m*n), where m is the width and n is the height of the grid).
Each point is associated with 1) a boolean saying whether this grid point has been used or not, and 2) a 'shift' from the grid position, to avoid too much regularity. (Or you can store the point+shift coordinates in your grid directly.)
Then when you need a new point, just pick a random unused point of your grid, mark it as 'used', and use the point+shift in your game.
Depending on n, m, the width/height of your 2D space, and the number of points you're going to use, this could be just fine.
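A sketch of that grid (names are mine; the jitter is capped at a quarter of a cell, which keeps neighbouring points at least half a cell apart):

```python
import random

# Build an m x n grid of cells, each holding one pre-jittered point and a
# 'used' flag; serving a point just marks a random unused cell.
def make_grid(m, n, cell, jitter=0.25, rng=random):
    return [{"pos": ((i + 0.5 + rng.uniform(-jitter, jitter)) * cell,
                     (j + 0.5 + rng.uniform(-jitter, jitter)) * cell),
             "used": False}
            for i in range(m) for j in range(n)]

def next_point(grid, rng=random):
    unused = [c for c in grid if not c["used"]]
    if not unused:
        return None                      # grid exhausted
    choice = rng.choice(unused)
    choice["used"] = True
    return choice["pos"]

grid = make_grid(4, 4, cell=10.0)
pts = [next_point(grid) for _ in range(16)]
```

Because every point comes from a distinct cell, both the minimum spacing (half a cell) and the maximum spacing to the nearest neighbour are controlled by the cell size, with no rejection loop at all.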
A good option for this is to use Poisson-Disc sampling. The algorithm is efficient (O(n)) and "produces points that are tightly-packed, but no closer to each other than a specified minimum distance, resulting in a more natural pattern".
https://www.jasondavies.com/poisson-disc/
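For reference, Bridson's version of the algorithm fits in about forty lines (a 2D sketch over a rectangular domain; note that because candidates are drawn from the annulus [r, 2r) around existing points, every sample also lands within 2r of another one, which loosely covers the maximum-distance requirement too):

```python
import math
import random

# Bridson's Poisson-disc sampling: a background grid with cells of size
# r/sqrt(2) holds at most one sample per cell, so a 5x5 neighbourhood check
# suffices to enforce the minimum distance r.
def poisson_disc(width, height, r, k=30, rng=random):
    cell = r / math.sqrt(2)
    cols, rows = int(width / cell) + 1, int(height / cell) + 1
    grid = [[None] * cols for _ in range(rows)]

    def cell_of(p):
        return int(p[1] / cell), int(p[0] / cell)

    def ok(p):
        if not (0 <= p[0] < width and 0 <= p[1] < height):
            return False
        gr, gc = cell_of(p)
        for dr in range(-2, 3):
            for dc in range(-2, 3):
                rr, cc = gr + dr, gc + dc
                if 0 <= rr < rows and 0 <= cc < cols and grid[rr][cc]:
                    if math.dist(p, grid[rr][cc]) < r:
                        return False
        return True

    first = (rng.uniform(0, width), rng.uniform(0, height))
    points, active = [first], [first]
    gr, gc = cell_of(first)
    grid[gr][gc] = first
    while active:
        base = rng.choice(active)
        for _ in range(k):               # k candidates in the annulus [r, 2r)
            rad = rng.uniform(r, 2 * r)
            ang = rng.uniform(0, 2 * math.pi)
            p = (base[0] + rad * math.cos(ang), base[1] + rad * math.sin(ang))
            if ok(p):
                points.append(p)
                active.append(p)
                gr, gc = cell_of(p)
                grid[gr][gc] = p
                break
        else:                            # no candidate fit: retire this point
            active.remove(base)
    return points

samples = poisson_disc(50, 50, r=5)
```

Each sample does O(k) constant-cost candidate checks thanks to the grid, which is where the O(n) bound comes from.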
How many points are you talking about? If you have an upper limit on the number of points, you can generate (pre-compute) an array of points and store it in a file.
You'll have all the hard computational processing done before the map loads (so you can use any random point generating algorithm), and then you have a nice fast way to get your points.
That way you can generate a ton of different maps and then randomly select one of them to get your points.
This only works if you have an upper limit and you can precompute the points before the game loads.