Size Algorithm

OK, here's a little problem I would love to get some help on.
I have a view, and the viewport size will vary based on the user's screen resolution. The viewport needs to contain N boxes which are lined up next to each other from right to left and take up all of the horizontal space in the viewport. Now if all the boxes could be the same size this would be easy: just divide the viewport width by N and you're away.
The problem is that each box needs to be 10% smaller than the box to its left-hand side, so for example if the viewport is 271 pixels wide and there are three boxes, I should get back [100, 90, 81].
So I need an algorithm that, when handed the width of the viewport and the number of horizontal boxes, will return an array containing the width that each of the boxes needs to be in order to fill the width of the viewport while reducing each box's size by 10%.
Answers in any OO language are cool. I would just like to get some ideas on how to approach this and maybe see who can come up with the most elegant solution.
Regards,
Chris

Using a simple geometric progression, in Python,
def box_sizes(width, n):
    first_box = width / (10 * (1 - 0.9 ** n))
    return [first_box * 0.9 ** i for i in range(n)]
>>> box_sizes(100, 5)
[24.419428096993972, 21.977485287294574, 19.779736758565118, 17.801763082708607, 16.021586774437747]
>>> sum(_)
100.00000000000001
You may want to tidy up the precision, or convert to integers, but that should do it.
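If you do need whole pixels, one possible follow-up (box_sizes_int is just an illustrative name, building on box_sizes above) is to round each width and let the last box absorb the rounding error so the total still matches the viewport exactly:
def box_sizes_int(width, n):
    # round each width, then give the last box whatever is left over so the
    # rounded widths still sum exactly to the viewport width
    sizes = [round(w) for w in box_sizes(width, n)]
    sizes[-1] = width - sum(sizes[:-1])
    return sizes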

This is really a mathematical problem. Let:
x = size of the first box
n = number of boxes
P = number of pixels
Then, with two boxes:
x + 0.9x = P
3: x + 0.9x + 0.81x = P
4: x + 0.9x + 0.81x + 0.729x = P
which is, in fact, a geometric series of the form:
S(n) = a + ar + ar^2 + ... + ar^(n-1)
where:
a = size of the first box
r = 0.9
S(n) = P
The sum of such a series is:
S(n) = a(1 - r^n)/(1 - r)
so
x = 0.1P/(1 - 0.9^n)
which (finally!) seems correct and can be solved for any (P, n).

It's called Geometric Progression and there is a Wikipedia article on it. The formulas are there too. I believe that cletus has made a mistake with his f(n). Corrected. :)

public static int[] GetWidths(int width, int partsCount)
{
    double q = 0.9;
    int[] result = new int[partsCount];
    double a = (width * (1 - q)) / (1 - Math.Pow(q, partsCount));
    result[0] = (int) Math.Round(a);
    int sum = result[0];
    for (int i = 1; i < partsCount - 1; i++)
    {
        result[i] = (int) Math.Round(a * Math.Pow(q, i));
        sum += result[i];
    }
    result[partsCount - 1] = width - sum;
    return result;
}
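For example, GetWidths(271, 3) gives [100, 90, 81], matching the example in the question; the last element absorbs any rounding error so the widths always add up to the viewport width.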
It works because the widths form a geometric progression.

Let:
x = size of the first box
n = number of boxes
P = number of pixels
n = 1: x = P
n = 2: x + .9x = P
n = 3: x + .9x + .81x = P
P = x * sum[i=1..n](.9 ^ (i - 1))
Therefore:
x = P / sum[i=1..n](.9 ^ (i - 1))
Using the Geometric Progression formula:
x = P(.9 - 1) / ((.9 ^ n) - 1)
Test:
P = 100
n = 3
Gives:
x = 36.9

Start with computing the sum of boxes' widths, assuming the first box is 1, second 0.81, etc. You can do this iteratively or from the formula for geometric series. Then scale each box by the (viewport width)/(sum of original boxes' width) ratio.
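A minimal Python sketch of that scaling approach (box_sizes_scaled is just an illustrative name):
def box_sizes_scaled(viewport_width, n, ratio=0.9):
    # relative widths 1, 0.9, 0.81, ... then scale them so they fill the viewport
    relative = [ratio ** i for i in range(n)]
    scale = viewport_width / sum(relative)
    return [r * scale for r in relative]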

Like others have mentioned, the widths of the boxes form a geometric progression. Given viewport width W and number of boxes N, we can solve directly for the width of the widest box X. To fit N boxes within the viewport, we need
X + 0.9 X + 0.9^2 X + ... + 0.9^(N-1) X <= W
{ 1 + 0.9 + 0.9^2 + ... + 0.9^(N-1) } X <= W
Applying the formula for the sum of a geometric progression gives
(1 - 0.9^N) X / (1 - 0.9) <= W
X <= 0.1 W / (1 - 0.9^N)
So there you have it, a closed-form expression which gives you the width of the widest box X.

Related

How Do You Draw Concentric Squares?

- Create a sketch of 10 concentric squares of different colors
- Incorporate user input: when the mouse or keyboard is pressed, change the colors of the squares
- Code must use variables, loops, and decision structures.
If your problem is having them be concentric, use rectMode()
rectMode(CENTER);
// draw the largest square first so the smaller ones stay visible on top,
// and give rect() both a width and a height
for (int i = 10; i >= 1; i--) {
  rect(width / 2, height / 2, 10 * i, 10 * i);
}
The term concentric, while usually used for circles, is actually just based on the Latin for "same centre". Hence concentric squares are just those that have the same center (where the diagonals meet).
So, let's say you need the upper left corner (where X increases across to the right, Y increases down to the bottom) and side length. To work out the center of an existing square:
centX = X + length / 2
centY = Y + length / 2
Then to work out the upper left co-ordinates for a new square of given length (that's concentric with the first):
X = centX - length / 2
Y = centY - length / 2
You can wrap that up in a function (pseudo-code) with something like:
def makeConcentricSquare(origX, origY, origLen, newLen):
    newX = origX + origLen / 2 - newLen / 2
    newY = origY + origLen / 2 - newLen / 2
    return (newX, newY, newLen)
This is, of course, assuming your squares are horizontal in nature. You can do similar things to rotate them but I'll leave that as an exercise for the reader, especially since the specifications make no mention of allowing for it :-)
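As a rough usage sketch (reading the pseudo-code above as Python), you could generate the corner coordinates of ten concentric squares, starting from a 200x200 square whose upper-left corner is at (50, 50):
# each square is 20 units smaller than the last, all sharing the same centre
squares = [makeConcentricSquare(50, 50, 200, 200 - 20 * i) for i in range(10)]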

smooth coloring algorithm for the mandelbrot set

I know there are a lot of questions already answered about this. However, mine varies slightly. We implement the smooth coloring algorithm, as I understand it, as:
mu = 1 + n + math.log2(math.log2(z)) / math.log2(2)
where n is the escape iteration, 2 is the power z is raised to, and, if I'm not mistaken, z is the modulus of the complex number at that escape iteration. We then use this renormalized escape value in our linear interpolation between colors to produce a smoothly banded Mandelbrot set. I've seen answers to other questions about this where we run this value through an HSB to RGB conversion; however, I still fail to understand how this would provide a smooth gradient of colors, or how to implement it in Python.
However, whenever I attempted to implement this it produced floating-point RGB values, and there isn't an image format that I know of, besides .tiff, that supports those; if we round off to integers we still get unsmooth banding. So how is this supposed to produce a smoothly banded image if we cannot directly use the RGB values it produces? Example code of what I tried is below. Since I don't fully understand how to implement this, I made an attempt at a solution that somewhat produces smooth banding: it gives a somewhat smoothly banded image between two colors, blue for the full set and a progressively whiter color the further we zoom in on the set, to the point where at a certain depth everything just appears blurred. Since I'm using tkinter to do this I had to convert the RGB values to hex to be able to draw them to the canvas.
I'm computing the set recursively, and in my other function (not posted below) I am setting the window width and height, then iterating over these for the pixels of the tkinter window and computing this recursion in the inner loop.
def linear_interp(self, color_1, color_2, i):
    r = (color_1[0] * (1 - i)) + (color_2[0] * i)
    g = (color_1[1] * (1 - i)) + (color_2[1] * i)
    b = (color_1[2] * (1 - i)) + (color_2[2] * i)
    rgb_list = [r, g, b]
    for value in rgb_list:
        if value > MAX_COLOR:
            rgb_list[rgb_list.index(value)] = MAX_COLOR
        if value < 0:
            rgb_list[rgb_list.index(value)] = abs(value)
    return (int(rgb_list[0]), int(rgb_list[1]), int(rgb_list[2]))

def rgb_to_hex(self, color):
    return "#%02x%02x%02x" % color

def mandel(self, x, y, z, iteration):
    bmin = 100
    bmax = 255
    power_z = 2
    mod_z = math.sqrt((z.real * z.real) + (z.imag * z.imag))
    # If its not in the set or we have reached the maximum depth
    if abs(z) >= float(power_z) or iteration == DEPTH:
        z = z
        if iteration > 255:
            factor = (iteration / DEPTH) * 255
        else:
            factor = iteration
        logs = math.log2(math.log2(abs(z) + 1) / math.log2(power_z))
        r = g = math.floor(factor + 5 - logs)
        b = bmin + (bmax - bmin) * r / 255
        rgb = (abs(r), abs(g), abs(round(b)))
        self.canvas.create_line(x, y, x + 1, y + 1, fill = self.rgb_to_hex(rgb))
    else:
        z = (z * z) + self.c
        self.mandel(x, y, z, iteration + 1)
    return z
The difference between colors #000000, #010000, ..., #FE0000, #FF0000 is so small that you obtain a smooth gradient from black to red. Hence, simply round your values: supposing the smoothed values from your coloring function range from 0 (inclusive) to 1 (exclusive), you simply use
(int) (value * 256)
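A minimal Python sketch of that idea (the function names and palette here are assumptions for illustration, not part of the question's code): compute a fractional escape value, interpolate between two endpoint colors, and only convert to integers at the very end.
import math

def smooth_value(z, n, depth):
    # n = escape iteration, z = value of z at escape, depth = maximum iterations
    mu = n + 1 - math.log2(math.log2(abs(z)))
    return max(0.0, min(mu / depth, 0.999999))   # normalise into [0, 1)

def lerp_rgb(color_1, color_2, t):
    # interpolate in floating point, convert to int only at the very end
    return tuple(int(c1 + (c2 - c1) * t) for c1, c2 in zip(color_1, color_2))

# e.g. blend from dark blue to white, then format for tkinter:
# rgb = lerp_rgb((0, 0, 100), (255, 255, 255), smooth_value(z, n, DEPTH))
# "#%02x%02x%02x" % rgb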

How much do two rectangles overlap?

I have two rectangles a and b with their sides parallel to the axes of the coordinate system. I have their co-ordinates as x1,y1,x2,y2.
I'm trying to determine, not only do they overlap, but HOW MUCH do they overlap? I'm trying to figure out if they're really the same rectangle give or take a bit of wiggle room. So is their area 95% the same?
Any help in calculating the % of overlap?
Compute the area of the intersection, which is a rectangle too:
SI = Max(0, Min(XA2, XB2) - Max(XA1, XB1)) * Max(0, Min(YA2, YB2) - Max(YA1, YB1))
From there you compute the area of the union:
SU = SA + SB - SI
And you can consider the ratio
SI / SU
(100% in case of a perfect overlap, down to 0%).
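A short Python sketch of that formula (rectangles given as (x1, y1, x2, y2) tuples with x1 < x2 and y1 < y2; overlap_ratio is just an illustrative name):
def overlap_ratio(a, b):
    # the intersection is itself a rectangle; a negative extent means no overlap
    si = max(0, min(a[2], b[2]) - max(a[0], b[0])) * \
         max(0, min(a[3], b[3]) - max(a[1], b[1]))
    sa = (a[2] - a[0]) * (a[3] - a[1])
    sb = (b[2] - b[0]) * (b[3] - b[1])
    return si / (sa + sb - si)   # 1.0 for a perfect overlap, 0.0 for none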
While the accepted answer is correct, I think it's worth exploring this answer in a way that will make the rationale for the answer completely obvious. This is too common an algorithm to have an incomplete (or worse, controversial) answer. Furthermore, with only a passing glance at the given formula, you may miss the beauty and extensibility of the algorithm, and the implicit decisions that are being made.
We're going to build our way up to making these formulas intuitive:
intersecting_area =
max(0,
min(orange.circle.x, blue.circle.x)
- max(orange.triangle.x, blue.triangle.x)
)
* max(0,
min(orange.circle.y, blue.circle.y)
- max(orange.triangle.y, blue.triangle.y)
)
percent_coverage = intersecting_area
/ (orange_area + blue_area - intersecting_area)
First, consider that one way to define a two-dimensional box is with:
(x, y) for the top left point
(x, y) for the bottom right point
This might look like:
I indicate the top left with a triangle and the bottom right with a circle. This is to avoid opaque syntax like x1, x2 for this example.
Two overlapping rectangles might look like this:
Notice that to find the overlap you're looking for the place where the orange and the blue collide:
Once you recognize this, it becomes obvious that overlap is the result of finding and multiplying these two darkened lines:
The length of each line is the minimum value of the two circle points, minus the maximum value of the two triangle points.
Here, I'm using a two-toned triangle (and circle) to show that the orange and the blue points are compared with each other. The small letter 'y' after the two-toned triangle indicates that the triangles are compared along the y axis, the small 'x' means they are compared along the x axis.
For example, to find the length of the darkened blue line you can see the triangles are compared to look for the maximum value between the two. The attribute that is compared is the x attribute. The maximum x value between the triangles is 210.
Another way to say the same thing is:
The length of the new line that fits onto both the orange and blue lines is found by subtracting the furthest point on the closest side of the line from the closest point on the furthest side of the line.
Finding those lines gives complete information about the overlapping areas.
Once you have this, finding the percentage of overlap is trivial:
But wait, if the orange rectangle does not overlap with the blue one then you're going to have a problem:
With this example, you get -850 for our overlapping area, which can't be right. Even worse, if a detection doesn't overlap on either dimension (neither on the x nor the y axis) then you will still get a positive number, because both differences are negative. This is why you see the Max(0, ...) * Max(0, ...) as part of the solution; it ensures that if any of the overlaps are negative you'll get a 0 back from your function.
The final formula in keeping with our symbology:
It's worth noting that using the max(0, ...) function may not be necessary. You may want to know if something overlaps along one of its dimensions rather than all of them; if you use max then you will obliterate that information. For that reason, consider how you want to deal with non-overlapping bounding boxes. Normally, the max function is fine to use, but it's worth being aware what it's doing.
Finally, notice that since this comparison is only concerned with linear measurements it can be scaled to arbitrary dimensions or arbitrary overlapping quadrilaterals.
To summarize:
intersecting_area =
max(0,
min(orange.circle.x, blue.circle.x)
- max(orange.triangle.x, blue.triangle.x)
)
* max(0,
min(orange.circle.y, blue.circle.y)
- max(orange.triangle.y, blue.triangle.y)
)
percent_coverage = intersecting_area
/ (orange_area + blue_area - intersecting_area)
I recently ran into this problem as well and applied Yves' answer, but somehow that led to the wrong area size, so I rewrote it.
Assuming two rectangles A and B, find out whether they overlap and, if so, return the size of the overlapping area:
IF A.right < B.left OR A.left > B.right
OR A.bottom < B.top OR A.top > B.bottom THEN RETURN 0
width := IF A.right > B.right THEN B.right - A.left ELSE A.right - B.left
height := IF A.bottom > B.bottom THEN B.bottom - A.top ELSE A.bottom - B.top
RETURN width * height
Just fixing previous answers so that the ratio is between 0 and 1 (using Python):
import numpy as np

# (x1,y1) top-left coord, (x2,y2) bottom-right coord, (w,h) size
A = {'x1': 0, 'y1': 0, 'x2': 99, 'y2': 99, 'w': 100, 'h': 100}
B = {'x1': 0, 'y1': 0, 'x2': 49, 'y2': 49, 'w': 50, 'h': 50}

# overlap between A and B
SA = A['w'] * A['h']
SB = B['w'] * B['h']
SI = np.max([0, 1 + np.min([A['x2'], B['x2']]) - np.max([A['x1'], B['x1']])]) * \
     np.max([0, 1 + np.min([A['y2'], B['y2']]) - np.max([A['y1'], B['y1']])])
SU = SA + SB - SI
overlap_AB = float(SI) / float(SU)
print('overlap between A and B: %f' % overlap_AB)

# overlap between A and A
B = A
SB = B['w'] * B['h']
SI = np.max([0, 1 + np.min([A['x2'], B['x2']]) - np.max([A['x1'], B['x1']])]) * \
     np.max([0, 1 + np.min([A['y2'], B['y2']]) - np.max([A['y1'], B['y1']])])
SU = SA + SB - SI
overlap_AA = float(SI) / float(SU)
print('overlap between A and A: %f' % overlap_AA)
The output will be:
overlap between A and B: 0.250000
overlap between A and A: 1.000000
Assuming that the rectangles must be parallel to the x and y axes, as that seems to be the situation from the previous comments and answers.
I cannot post a comment yet, but I would like to point out that both previous answers seem to ignore the case when one side of a rectangle is totally within the side of the other rectangle. Please correct me if I am wrong.
Consider the case
a: (1,1), (4,4)
b: (2,2), (5,3)
In this case, we see that for the intersection, height must be bTop - bBottom because the vertical part of b is wholly contained in a.
We just need to add a few more cases, as follows. (The code can be shortened if you treat top/bottom the same way as right/left, so that you do not need to duplicate the conditional chunk, but this should do.)
if aRight <= bLeft or bRight <= aLeft or aTop <= bBottom or bTop <= aBottom:
    # There is no intersection in these cases
    return 0
else:
    # There is some intersection
    if aRight >= bRight and aLeft <= bLeft:
        # From the x axis point of view, b is wholly contained in a
        width = bRight - bLeft
    elif bRight >= aRight and bLeft <= aLeft:
        # From the x axis point of view, a is wholly contained in b
        width = aRight - aLeft
    elif aRight >= bRight:
        width = bRight - aLeft
    else:
        width = aRight - bLeft

    if aTop >= bTop and aBottom <= bBottom:
        # From the y axis point of view, b is wholly contained in a
        height = bTop - bBottom
    elif bTop >= aTop and bBottom <= aBottom:
        # From the y axis point of view, a is wholly contained in b
        height = aTop - aBottom
    elif aTop >= bTop:
        height = bTop - aBottom
    else:
        height = aTop - bBottom

    return width * height
Here is a working Function in C#:
public double calculateOverlapPercentage(Rectangle A, Rectangle B)
{
    double result = 0.0;

    // trivial cases
    if (!A.IntersectsWith(B)) return 0.0;
    if (A.X == B.X && A.Y == B.Y && A.Width == B.Width && A.Height == B.Height) return 100.0;

    // overlap between A and B
    double SA = A.Width * A.Height;
    double SB = B.Width * B.Height;
    double SI = Math.Max(0, Math.Min(A.Right, B.Right) - Math.Max(A.Left, B.Left)) *
                Math.Max(0, Math.Min(A.Bottom, B.Bottom) - Math.Max(A.Top, B.Top));
    double SU = SA + SB - SI;

    result = SI / SU;   // ratio
    result *= 100.0;    // percentage
    return result;
}
def intersection_area(bbox_a, bbox_b):
    # boxes given as [ymin, xmin, ymax, xmax] in inclusive pixel coordinates
    [ymin_a, xmin_a, ymax_a, xmax_a] = list(bbox_a)
    [ymin_b, xmin_b, ymax_b, xmax_b] = list(bbox_b)
    x_intersection = min(xmax_a, xmax_b) - max(xmin_a, xmin_b) + 1
    y_intersection = min(ymax_a, ymax_b) - max(ymin_a, ymin_b) + 1
    if x_intersection <= 0 or y_intersection <= 0:
        return 0
    else:
        return x_intersection * y_intersection
#User3025064 is correct and it is the simplest solution, though exclusivity must be checked first for rectangles that do not intersect, e.g., for rectangles A & B (in Visual Basic):
If A.Top <= B.Bottom or A.Bottom >= B.Top or A.Right <= B.Left or A.Left >= B.Right then
    Exit sub 'No intersection
else
    width = ABS(Min(XA2, XB2) - Max(XA1, XB1))
    height = ABS(Min(YA2, YB2) - Max(YA1, YB1))
    Area = width * height 'Total intersection area.
End if
The answer of #user3025064 is the right answer. The accepted answer inadvertently flips the inner MAX and MIN calls.
We also don't need to check first if they intersect or not if we use the presented formula, MAX(0,x) as opposed to ABS(x). If they do not intersect, MAX(0,x) returns zero which makes the intersection area 0 (i.e. disjoint).
I suggest that #Yves Daoust fixes his answer because it is the accepted one that pops up to anyone who searches for that problem. Once again, here is the right formula for intersection:
SI = Max(0, Min(XA2, XB2) - Max(XA1, XB1)) * Max(0, Min(YA2, YB2) - Max(YA1, YB1))
The rest as usual. Union:
SU = SA + SB - SI
and ratio:
SI/SU

Smoothing issue with Diamond-Square algorithm

I am using the diamond-square algorithm to generate random terrain.
It works fine except I get these large cone shapes either sticking out of or into the terrain.
The problem seems to be that every now and then a point gets set either way too high or way too low.
Here is a picture of the problem
And it can be better seen when I set the smoothness really high
And here is my code -
private void CreateHeights()
{
    if (cbUseLand.Checked == false)
        return;

    int
        Size = Convert.ToInt32(System.Math.Pow(2, int.Parse(tbDetail.Text)) + 1),
        SideLength = Size - 1,
        d = 1025 / (Size - 1),
        HalfSide;

    Heights = new Point3D[Size, Size];

    float
        r = float.Parse(tbHeight.Text),
        Roughness = float.Parse(RoughnessBox.Text);

    // seeding all the points
    for (int x = 0; x < Size; x++)
        for (int y = 0; y < Size; y++)
            Heights[x, y] = Make3DPoint(x * d, 740, y * d);

    while (SideLength >= 2)
    {
        HalfSide = SideLength / 2;

        for (int x = 0; x < Size - 1; x = x + SideLength)
        {
            for (int y = 0; y < Size - 1; y = y + SideLength)
            {
                Heights[x + HalfSide, y + HalfSide].y =
                    (Heights[x, y].y +
                     Heights[x + SideLength, y].y +
                     Heights[x, y + SideLength].y +
                     Heights[x + SideLength, y + SideLength].y) / 4 - r + ((float)(random.NextDouble() * r) * 2);
            }
        }

        for (int x = 0; x < Size - 1; x = x + SideLength)
        {
            for (int y = 0; y < Size - 1; y = y + SideLength)
            {
                if (y != 0)
                    Heights[x + HalfSide, y].y = (Heights[x, y].y + Heights[x + SideLength, y].y + Heights[x + HalfSide, y + HalfSide].y + Heights[x + HalfSide, y - HalfSide].y) / 4 - r + ((float)(random.NextDouble() * r) * 2);
                if (x != 0)
                    Heights[x, y + HalfSide].y = (Heights[x, y].y + Heights[x, y + SideLength].y + Heights[x + HalfSide, y + HalfSide].y + Heights[x - HalfSide, y + HalfSide].y) / 4 - r + ((float)(random.NextDouble() * r) * 2);
            }
        }

        SideLength = SideLength / 2;
        r = r / Roughness;
    }
}
Gavin S. P. Miller gave a SIGGRAPH '86 talk about how Fournier, Fussel & Carpenter's original algorithm was fundamentally flawed, so what you're seeing is normal for any naive implementation of the Diamond-Square algorithm. You will require a separate approach for smoothing, either after each Diamond-Square compound step, or as a post-process over all diamond-square iterations (or both). Miller addressed this. Weighting and box or Gaussian filtering are one option; another is seeding the initial array to a greater degree than just the initial 4 points (i.e., replicating the result sets of the first few steps of diamond-square, either manually or using some built-in intelligence, but supplying unbiased values). The more initial information you give the array before increasing the detail using diamond-square, the better your results will be.
The reason appears to be in how the Square step is performed. In the Diamond step, we take the average of the four corners of a square to produce that square's centre. Then, in the subsequent Square step, we take the average of four orthogonally adjacent neighbours, one of which is the square's centre point we just produced. Can you see the problem? Those original corner height values contribute too much to the subsequent diamond-square iteration, because they contribute both through their own influence and through the midpoint that they created. This causes the spires (extrusive and intrusive), because locally derived points tend more strongly toward those early points... and because (typically 3) other points do as well, this creates "circular" influences around those points as you iterate to higher depths using Diamond-Square. These kinds of "aliasing" issues only appear when the initial state of the array is underspecified; in fact, the artifacting that occurs can be seen as a direct geometric consequence of using only 4 points to start with.
You can do one of the following:
Do local filtering -- generally expensive.
Pre-seed the initial array more thoroughly -- requires some intelligence.
Never smooth too many steps down from a given set of initial points -- which applies even if you do seed the initial array, it's all just a matter of relative depths in conjunction with your own maximum displacement parameters.
I believe the size of the displacement r in each iteration should be proportional to the size of the current rectangle. The logic behind this is that a fractal surface is scale invariant, so the variation in height in any rectangle should be proportional to the size of that rectangle.
In your code, the variation in height is proportional to r, so you should keep it proportional to the size of your current grid size. In other words: multiply r by the roughness before the loop and divide r by 2 in each iteration.
So, instead of
r = r / Roughness;
you should write
r = r / 2;
The actual flaw in the above algorithm is an error in conceptualization and implementation. Diamond-square as an algorithm has some artifacting, but these are range-based artifacts: the technical maximum for some pixels is higher than for other pixels. Some pixels are directly given values by the randomness, while others acquire their values from the diamond and square midpoint interpolation processes.
The error here is that you started from zero and repeatedly added the displacement to the current value. This causes the range of diamond-square to start at zero and extend only upwards, when it should start at zero and go both up and down depending on the randomness, so that the top of the range doesn't matter. If you don't realize this and naively implement everything as added to the value, rather than starting at zero and fluctuating from there, you will expose the hidden artifacts.
Miller's notes were right, but the flaw is generally hidden within the noise. This implementation shows those problems; that is NOT normal, and it can be fixed a few different ways. This was one of the reasons why, after I extended this algorithm to remove all the memory and size restrictions and made it infinite and deterministic, I still switched away from the core idea here (the problems extending it to 3D and optimizing for GPUs also played a role).
Instead of just smoothing with an average, you can use a 2-D median filter to take out the extremes. It is simple to implement and usually produces the desired effect even with a lot of noise.
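For example, if the height map is held in a 2-D NumPy array (a sketch under that assumption, not part of the code above), SciPy already provides such a filter:
import numpy as np
from scipy.ndimage import median_filter

heights = np.random.rand(129, 129)          # stand-in for the diamond-square output
smoothed = median_filter(heights, size=3)   # a 3x3 median knocks out isolated spikes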

Shortest distance between points on a toroidally wrapped (x- and y- wrapping) map?

I have a toroidal-ish, Euclidean-ish map. That is, the surface is a flat, Euclidean rectangle, but when a point moves past the right boundary, it will appear at the left boundary (at the same y value), given by x_new = x_old % width
Basically, points are plotted based on: * see edit
(x_new, y_new) = ( x_old % width, y_old % height)
Think Pac Man -- walking off one edge of the screen will make you appear on the opposite edge.
What's the best way to calculate the shortest distance between two points? The typical implementation suggests a large distance for points on opposite corners of the map, when in reality, the real wrapped distance is very close.
The best way I can think of is calculating Classical Delta X and Wrapped Delta X, and Classical Delta Y and Wrapped Delta Y, and using the lower of each pair in the Sqrt(x^2+y^2) distance formula.
But that would involve many checks, calculations, operations -- some that I feel might be unnecessary.
Is there a better way?
edit
When an object moves, it moves to position (x_old,y_old), runs it through the above formula, and stores (x_new, y_new) as its position. The above formula was only added to clarify what happens when objects move across the boundary; in reality, only one (x,y) pair is stored in each object at a time.
The best way I can think of is calculating Classical Delta X and Wrapped Delta X, and Classical Delta Y and Wrapped Delta Y, and using the lower of each pair in the Sqrt(x^2+y^2) distance formula.
That's it, I don't think there is any quicker way. But it's not too hard of a computation; you could do something like
dx = abs(x1 - x2);
if (dx > width/2)
    dx = width - dx;
// again with x -> y and width -> height
(I trust you can translate that into your preferred language)
The shortest distance between two points in a periodic domain can be computed as follows without using any loops.
dx = x2-x1
dx = dx - x_width*ANINT(dx/x_width)
This will give a signed shortest distance. ANINT is an intrinsic FORTRAN function: ANINT(x) gives the nearest whole number to x (the whole number whose magnitude is less than abs(x) + 0.5, with the same sign as x). For example, ANINT(0.51) = 1.0, ANINT(-0.51) = -1.0, etc. Similar functions exist for other languages.
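A Python equivalent of that minimum-image trick might look like this (note that Python's built-in round() rounds ties to even rather than away from zero like ANINT, which only matters exactly at half the domain width):
def wrapped_delta(x1, x2, width):
    # signed shortest separation along one periodic axis
    dx = x2 - x1
    return dx - width * round(dx / width)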
To find the smallest delta on the a-axis for wrapped coordinates a1 and a2, where aBoundary is the size of the domain on the a-axis:
def delta(a1, a2, aBoundary):
    # take the direct separation or the one that wraps around the boundary,
    # whichever is smaller
    d = abs(a2 - a1)
    return min(d, aBoundary - d)
So if you have two points with new coordinates x1,y1 and x2,y2, you can just do:
sumOfSquares(delta(x1,x2,width), delta(y1,y2,height))
This is effectively what you suggest, but I wouldn't say it's "many checks, calculations and operations".
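Putting the same idea together as a complete, runnable sketch (toroidal_distance is just an illustrative name):
from math import hypot

def toroidal_distance(x1, y1, x2, y2, width, height):
    # shortest distance between two points on an x- and y-wrapping map
    dx = abs(x2 - x1)
    dy = abs(y2 - y1)
    dx = min(dx, width - dx)
    dy = min(dy, height - dy)
    return hypot(dx, dy)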
No separation along an axis can be greater than width/2 or height/2. If you get a difference abs(x1 - x2) greater than width/2, subtract it from width to get the wrapped ("short-cut") distance, then calculate the distance as usual.
(delta_x, delta_y) =
    (min(width - abs(x_new - x_old), abs(x_new - x_old)),
     min(height - abs(y_new - y_old), abs(y_new - y_old)))
Man I did something WAY different...
There's a little extra functionality in here, but the core is the distance on a wrapped screen...
from math import sqrt
import pytweening

class ClosestPoint_WD(object):
    def __init__(self, screen_size, point_from, point_to):
        self._width = screen_size[0]
        self._height = screen_size[1]
        self._point_from = point_from
        self._point_to = point_to
        self._points = {}
        self._path = []

    def __str__(self):
        value = "The dictionary:" + '\n'
        for point in self._points:
            value = value + str(point) + ":" + str(self._points[point]) + '\n'
        return value

    def distance(self, pos0, pos1):
        dx = pos1[0] - pos0[0]
        dy = pos1[1] - pos0[1]
        dz = sqrt(dx**2 + dy**2)
        return dz

    def add_point_to_dict(self, x, y):
        point = x, y
        self._points[point] = 0

    def gen_points(self):
        max_x = self._width * 1.5 - 1
        max_y = self._height * 1.5 - 1
        # point 1, original point
        self.add_point_to_dict(self._point_to[0], self._point_to[1])
        # add the second point: x-shifted
        if self._point_to[0] + self._width <= max_x:
            self.add_point_to_dict(self._point_to[0] + self._width, self._point_to[1])
        else:
            self.add_point_to_dict(self._point_to[0] - self._width, self._point_to[1])
        # add the third point: y-shifted
        if self._point_to[1] + self._height <= max_y:
            self.add_point_to_dict(self._point_to[0], self._point_to[1] + self._height)
        else:
            self.add_point_to_dict(self._point_to[0], self._point_to[1] - self._height)
        # add the fourth point: diagonally shifted
        if self._point_to[0] + self._width <= max_x:
            if self._point_to[1] + self._height <= max_y:
                self.add_point_to_dict(self._point_to[0] + self._width, self._point_to[1] + self._height)
            else:
                self.add_point_to_dict(self._point_to[0] + self._width, self._point_to[1] - self._height)
        else:
            if self._point_to[1] + self._height <= max_y:
                self.add_point_to_dict(self._point_to[0] - self._width, self._point_to[1] + self._height)
            else:
                self.add_point_to_dict(self._point_to[0] - self._width, self._point_to[1] - self._height)

    def calc_point_distances(self):
        for point in self._points:
            self._points[point] = self.distance(self._point_from, point)

    def closest_point(self):
        d = self._points
        return min(d, key=d.get)

    def update(self, cur_pos, target):
        self._point_from = cur_pos
        self._point_to = target
        self._points = {}
        self.gen_points()
        self.calc_point_distances()
        self.shortest_path()

    def shortest_path(self):
        path = pytweening.getLine(self._point_from[0], self._point_from[1],
                                  self.closest_point()[0], self.closest_point()[1])
        ret_path = []
        for point in path:
            ret_path.append((point[0] % self._width, point[1] % self._height))
        self._path = ret_path
        return self._path
You can't use the "abs" function with the mod operator!
xd = (x1 - x2 + Width) % Width
yd = (y1 - y2 + Height) % Height
D = sqrt(xd^2 + yd^2)
