https://www.geeksforgeeks.org/bresenhams-circle-drawing-algorithm/
I was looking at Bresenham's algorithm, which I'm trying to use to make an MS Paint style application. I've implemented it in Python and it works. However, I was not sure HOW it worked. I understood all of the algorithm except for the decision parameter. Specifically, why does it have to be d = 3 - (2 * r), d = d + (4*x) + 6, or d = d + 4 * (x - y) + 10? Is anyone familiar with the algorithm or understands the math behind how these were derived? I understood the theory behind the algorithm for lines, but I'm having a hard time understanding the circle drawing.
If you just drew pixel (x,y), then the next pixel to be drawn is either (x+1,y) or (x+1,y-1)
The actual condition used to determine which one to choose is approximately which one is closest to the ideal circle. Specifically, (x+1,y-1) is chosen if (x+1)² + y² - r² > r² - (x+1)² - (y-1)²
Collecting like terms, this simplifies to 2(x+1)² + y² + (y-1)² - 2r² > 0
Expanding gives 2x² + 2y² - 2r² + 4x - 2y + 3 > 0
That expression on the left is d. Initially, x=0 and y=r, so most of those terms are zero or cancel out and we have d = 3 - 2y = 3 - 2r
The other expressions you ask about indicate how d changes after you pick the next pixel.
http://www.wolframalpha.com/input/?i=simplify+(2(x%2B2)%C2%B2+%2B+(y-1)%C2%B2+%2B+(y-2)%C2%B2+-+2r%C2%B2)+-+(2(x%2B1)%C2%B2+%2B+y%C2%B2+%2B+(y-1)%C2%B2+-+2r%C2%B2)
http://www.wolframalpha.com/input/?i=simplify+(2(x%2B2)%C2%B2+%2B+y%C2%B2+%2B+(y-1)%C2%B2+-+2r%C2%B2)+-+(2(x%2B1)%C2%B2+%2B+y%C2%B2+%2B+(y-1)%C2%B2+-+2r%C2%B2)
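Putting the derivation together, here is a minimal Python sketch of the resulting drawing loop (Python since that's what the question uses; circle_points and plot are illustrative names, not from the linked article). Note that both d updates use the old x and y, exactly as in the differences above:

def circle_points(xc, yc, r):
    points = []

    def plot(x, y):
        # mirror the computed octant point into all eight octants
        points.extend([(xc + x, yc + y), (xc - x, yc + y),
                       (xc + x, yc - y), (xc - x, yc - y),
                       (xc + y, yc + x), (xc - y, yc + x),
                       (xc + y, yc - x), (xc - y, yc - x)])

    x, y = 0, r
    d = 3 - 2 * r                       # initial d at x=0, y=r
    plot(x, y)
    while x < y:
        if d > 0:                       # (x+1, y-1) is closer to the ideal circle
            d = d + 4 * (x - y) + 10    # delta uses the old x and y
            y -= 1
        else:                           # (x+1, y) is closer
            d = d + 4 * x + 6
        x += 1
        plot(x, y)
    return points

print(sorted(set(circle_points(0, 0, 3))))

Only one eighth of the circle is stepped through; the eight-way mirroring fills in the rest.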
We are given an M x N map (basically a 2D array) of values (can be negative), and you gotta find the path that makes the most money.
The trick is that the "drill" that you're using is three units large.
So to drill a hole in a certain place (blue), you gotta make sure the three above it (red) are drilled as well;
and it cascades upward, meaning if you wanna dig a deeper hole, then you gotta dig the three above, and the three above those ones, etc.
So far I have an inefficient, semi-brute-force (kinda) algorithm that's O(n^2), so as soon as the sample size goes up (to a 900 x 1200 sample, for example), the algorithm can't finish in time (we have a 3-minute limit).
I'm suspecting maybe dynamic programming could be a way, but I'm not sure at all how to implement that.
Let me know if anything else comes to mind.
We've worked with Python by the way.
Thank you guys in advance.
You can calculate all the values in O(n * m) instead of O((n * m)^2).
Let's say we have input matrix A and want to calculate the result for each position resulting in matrix B.
The first row of B is just the same as A.
The second row of B we get by summing the 3 values above (or 2 on the edges).
For all the following rows (r = row, c = column):
c not on the edge: B[r][c] = B[r - 1][c - 1] + B[r - 1][c + 1] + A[r - 1][c] - B[r - 2][c].
c on the left edge: B[r][0] = B[r - 1][1] + A[r - 1][0]
c on the right edge: B[r][len - 1] = B[r - 1][len - 2] + A[r - 1][len - 1]
If you color the matrix and the values you sum, you can easily see what we are doing. Basically we sum the left and right neighbours above to get the value, but we miss the value in the middle, and the value of the result two rows above is counted twice, so we subtract it.
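As a sketch, here is the table-filling step in Python (the question's language; build_b is my own name, and at least two columns are assumed). It applies the row rules exactly as stated above; turning B into the final answer is left out:

def build_b(A):
    rows, cols = len(A), len(A[0])
    B = [[0] * cols for _ in range(rows)]
    B[0] = list(A[0])                          # first row of B equals A
    if rows > 1:
        for c in range(cols):                  # second row: sum of the 3 (or 2) values above
            B[1][c] = sum(A[0][max(0, c - 1):c + 2])
    for r in range(2, rows):                   # remaining rows: the recurrence above
        B[r][0] = B[r - 1][1] + A[r - 1][0]                          # left edge
        B[r][cols - 1] = B[r - 1][cols - 2] + A[r - 1][cols - 1]     # right edge
        for c in range(1, cols - 1):
            B[r][c] = B[r - 1][c - 1] + B[r - 1][c + 1] + A[r - 1][c] - B[r - 2][c]
    return B

Each cell is touched a constant number of times, so the whole table is filled in O(n * m).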
This is an unsolved problem from my past arbitrary-precision rational numbers C++ assignment.
For calculation, I used this expression from Wikipedia (a being the initial guess and r = input - a^2 being its remainder): sqrt(input) = a + r/(2a + r/(2a + r/(2a + ...)))
I ended up, just by guessing from experiments, with this approach:
Use an integer square root function on the numerator/denominator, use that as the guess
Iterate the continued fraction until the binary length of the denominator was at least the target precision
This worked well enough to get me through the official tests; however, from my testing, the precision was too high (sometimes almost double), i.e. the code was inefficient, and I had no proof it worked on every input (and hence no confidence in the code).
A simplified excerpt from the code (natural/rational store arbitrary length numbers, assume all operations return fractions in their simplest form):
rational sqrt(rational input, int precision) {
    // a: initial guess from the integer square roots of numerator and denominator
    rational guess(isqrt(input.numerator), isqrt(input.denominator));
    rational remainder = input - power(guess, 2); // r
    rational result = guess;
    rational expansion; // starts at 0

    // Iterate the continued fraction until the denominator is long enough
    while (result.denominator.size() <= precision) {
        expansion = remainder / (2 * guess + expansion);
        result = guess + expansion;

        // Handle exactly-rational results
        if (power(result, 2) == input) {
            break;
        }
    }
    return result;
}
Can it be done better? If so, how?
Square roots can easily and very accurately be calculated by General Continued Fractions (GCF). Being general means the numerators can be any positive number, in contrast to Regular or Simple Continued Fractions (RCF), where the numerators are all 1s. In order to comprehend the answer as a whole, it is best to start from the beginning.
The method used to solve the square root of any positive number n by a GCF (a + x), where a is the integral part and x is the continued fractional part, is:
√n = a + x ⇒ n = a^2 + 2ax + x^2 ⇒ n − a^2 = x(2a + x) ⇒ x = (n − a^2) / (2a + x)
Right at this moment you have a GCF, since x nicely gets placed in the denominator, and once you replace x with its definition you get an indefinitely extending definition of x. Regarding a, you are free to choose it among the integers which are less than √n. So if you want to find √11 then a can be chosen among 1, 2 or 3. However, it's always better to choose the biggest one in order to be able to simplify the GCF into an RCF at the next stage.
Remember that x = (n − a^2) / (2a + x) and n = 11 and a = 3. Now if we write the first two terms then we may simplify the GCF to RCF with all numerators as 1.
x = 2 / (6 + x) ⇒ 2 / (6 + 2/(6 + x)); dividing both the numerator and the denominator by 2 gives x = 1 / (3 + 1/(6 + x))
Accordingly our RCF for √11 is;
√11 = 3 + x ⇒ 3 + 1/(3 + 1/(6 + 1/(3 + 1/(6 + ...)))) = [3; 3, 6, 3, 6, ...] (the block 3, 6 repeats forever)
Notice the coefficient notation [3; 3, 6, 3, 6, ...], which in this particular case resembles an infinite array. This is how RCFs are expressed in coefficient notation: the first item is a, and the tail after the ; holds the RCF coefficients of x. These two are sufficient since we already know that in an RCF all numerators are fixed to 1.
Coming back to your precision question. You now have √11 = 3 + x where x is your RCF as [3;3,6,3,6,3,6...]. Normally you can try picking a depth and reducing from the right, like [3,3,6,3,6,3,6...].reduceRight((p,c) => c + 1/p) as it would be done in JS. Not a precise enough result? Then try again from another depth. This is in fact what the linked Wikipedia topic describes as bottom-up. However, it is much more efficient to go from left to right (top to bottom) by calculating the intermediate convergents one after the other, in a single pass. Every next intermediate convergent yields a better precision for you to test and decide whether to stop or continue. When you reach a sufficient coefficient, just stop there. Having said that, once you reach the desired coefficient you may still do some fine tuning by increasing or decreasing that coefficient. Decreasing the coefficients at even indices or increasing the ones at odd indices decreases the convergent, and vice versa.
So, in order to be able to do a left-to-right (top-to-bottom) analysis, there is a special rule:
n2/d2 = (xn * n1 + n0)/(xn * d1 + d0)
We need to know the last two interim convergents (n0/d0 and n1/d1) along with the current coefficient xn in order to be able to calculate the next convergent (n2/d2).
We will start with two initial convergents: Infinity (n0/d0 = 1/0) and the a that we've chosen above (remember √n = a + x), which is 3, so n1/d1 = 3/1. Knowing that the 3 before the semicolon is in fact a, our first xn is the 3 right after the semicolon in our coefficients array [3; »» 3 ««, 6, 3, 6, 3, 6, ...].
After we calculate n2/d2 and do our test, if need be, for the next step we will shift our convergents to the left so that we have the last two ready to calculate the next convergent. n0/d0 <- n1/d1 <- n2/d2
Here I present the table for the n2/d2 = (xn * n1 + n0)/(xn * d1 + d0) rule.
n0/d0    n1/d1     xn    index     n2/d2       decimal val.
_____    ______    __    ______    ________    ____________
1/0      3/1       3     1 odd     10/3        3.33333333..
3/1      10/3      6     2 even    63/19       3.31578947..
10/3     63/19     3     3 odd     199/60      3.31666666..
63/19    199/60    6     4 even    1257/379    3.31662269..
...      ...       ..    ...       ...         ...
So as you may notice, we are very quickly approaching √11, which is 3.31662479... Note that the odd indices overshoot and the even ones undershoot because of the cascading reciprocals. Since √11 is irrational, this will keep converging indefinitely, up until we say enough.
Remember, as mentioned earlier, once you reach the desired coefficient you may still do some fine tuning by increasing or decreasing that coefficient (xn). Decreasing the coefficients at even indices or increasing the ones at odd indices decreases the convergent, and vice versa.
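To make the single left-to-right pass concrete, here is a small Python sketch (Python chosen just for brevity; the function name is mine) that reproduces the √11 table above using the n2/d2 = (xn * n1 + n0)/(xn * d1 + d0) rule:

from fractions import Fraction
from itertools import cycle, islice

def sqrt11_convergents(depth):
    n0, d0 = 1, 0                             # the "Infinity" convergent 1/0
    n1, d1 = 3, 1                             # the integral part a = 3
    convergents = []
    for xn in islice(cycle([3, 6]), depth):   # repeating RCF coefficients of x
        n2, d2 = xn * n1 + n0, xn * d1 + d0   # n2/d2 = (xn*n1 + n0)/(xn*d1 + d0)
        convergents.append(Fraction(n2, d2))
        n0, d0, n1, d1 = n1, d1, n2, d2       # shift left for the next step
    return convergents

for conv in sqrt11_convergents(6):
    print(conv, float(conv))                  # alternately over- and undershoots 3.31662479...

Stopping the loop as soon as the current convergent is precise enough gives the early-exit behaviour described above.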
The problem here is, not all √n can simply be turned into an RCF by a simple division as shown above. For a more generalized way to generate an RCF from any √n, you may check a more recent answer of mine.
I have a matrix (with 0 and 1), representing a castle wall and its surroundings.
My task is to place n archers on the wall in such a way that they cover as much of the surroundings as they can. There are two rules:
1: Every archer has range 1 - that means he can shoot only at adjoining tiles (each tile has 8 neighbours) which aren't wall (friendly fire is banned in this army!).
2: If it so happens that multiple archers can shoot at the same tile, that tile only counts once.
I am struggling to find an effective solution for this - partially because I don't know if one exists in polynomial time. Is there any?
I guess the first step is to use a BFS algorithm to rate every tile on the matrix, but I don't know how to effectively handle the second rule. The brute force solution is quite simple - rate every position and then try all of them - which would be very, very inefficient; I think O(|possible placements|^n), which isn't nice.
Simple example:
The grey-colored tiles represent the wall. Numbers inside the tiles represent the coverage of an archer placed on that tile. Just to be sure, I added the orange ones, which represent the coverage of an archer standing on tile b2. And the last piece of information - the correct solution for this is b2 and b6, with a coverage of 14. It cannot be b2 and b4, because they cover only 11 tiles.
I don't see how the problem can be solved in guaranteed polynomial time, but you can express the problem as an integer linear program, which can be solved using an ILP solver such as GLPK.
Let c[i] be 0-1 integer variables, one for each square of the surroundings. These represent whether the square is covered by at least one archer.
Let a[i] be 0-1 integer variables, one for each square of the castle wall. These represent whether an archer stands on that square.
There must be at most n archers: sum(a[i] for i in castle walls) <= n
The c[i] must be zero if there's no adjacent archer: sum(a[j] for j a castle wall square adjacent to i) >= c[i]
The optimization target is to maximize sum(c[i]).
For example, suppose this is the map (where . is surrounding and # is castle wall), and we want to place two archers.
....
.###
....
Then we have this ILP problem, where all the variables are 0-1 integer variables.
maximize (
c[0,0] + c[0,1] + c[0,2] + c[0,3]
+ c[1,0]
+ c[2,0] + c[2,1] + c[2,2] + c[2,3])
such that:
a[1,1] + a[1,2] + a[1,3] <= 2
c[0,0] <= a[1,1]
c[0,1] <= a[1,1] + a[1,2]
c[0,2] <= a[1,1] + a[1,2] + a[1,3]
c[0,3] <= a[1,2] + a[1,3]
c[1,0] <= a[1,1]
c[2,0] <= a[1,1]
c[2,1] <= a[1,1] + a[1,2]
c[2,2] <= a[1,1] + a[1,2] + a[1,3]
c[2,3] <= a[1,2] + a[1,3]
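As an illustration, here is a sketch of the same model built with the PuLP modelling library in Python (my assumption - any ILP front end for GLPK or CBC would do; the function name and grid encoding are mine, not part of the answer):

from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value

def place_archers(grid, n):
    rows, cols = len(grid), len(grid[0])
    wall = [(r, c) for r in range(rows) for c in range(cols) if grid[r][c] == '#']
    ground = [(r, c) for r in range(rows) for c in range(cols) if grid[r][c] == '.']

    prob = LpProblem("archers", LpMaximize)
    a = {p: LpVariable("a_%d_%d" % p, cat="Binary") for p in wall}    # archer stands on wall square p
    c = {p: LpVariable("c_%d_%d" % p, cat="Binary") for p in ground}  # surrounding square p is covered

    prob += lpSum(c.values())               # objective: maximize covered squares
    prob += lpSum(a.values()) <= n          # at most n archers

    for (i, j) in ground:
        adjacent = [a[(i + di, j + dj)] for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di or dj) and (i + di, j + dj) in a]
        prob += c[(i, j)] <= lpSum(adjacent)   # covered only if an adjacent archer exists
    prob.solve()
    return [p for p in wall if a[p].value() == 1], int(value(prob.objective))

grid = ["....",
        ".###",
        "...."]
print(place_archers(grid, 2))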
The center coordinates of the 2D voxels represent a 2D point set. Using these coordinates, the red dot in the picture refers to the (approximate) center of mass/gravity, i.e. the mean of all coordinates. Ignore the different gray values, though they coincidentally provide slightly better visibility of the 2D voxels :-)
The green dot (roughly) is what I would like to get - but how, in a principled manner? So essentially we have a connected set in 2D voxel space, or alternatively a set of 2D points; integer coordinates can be assumed, if that helps. I'd like to determine a point which is "central" with respect to the shape, but definitely on the shape, i.e. part of the set.
Pseudocode and/or C/C++ welcome :-)
Update: If the structure was thicker, I'd actually like the green point to be somewhat central rather than on the contour.
Here is a solution that can find the green point in O(N):
Find mean (xm,ym)
Suppose (xm+a, ym+b) is a point in the dataset
E(xi,yi) = sum of squared distances of all points from (xi,yi)
E(xm,ym) = k because it is the mean.
E(xm+a,ym+b) = summation => (xi-(xm+a))^2 + (yi-(ym+b))^2
= summation => ((xi-xm)-a)^2 + ((yi-ym)-b)^2
= summation => (xi-xm)^2 + (yi-ym)^2 + summation => a^2 + b^2 - 2a*(xi-xm) - 2b*(yi-ym)
and summation => (xi-xm)^2 + (yi-ym)^2 = E(xm,ym) = k
Hence the part that depends on a and b is
E(a,b) = summation => a^2 + b^2 - 2a*(xi-xm) - 2b*(yi-ym)
As a,b are constant in the summation,
E(a,b) = (a^2+b^2)*N - 2a*(summation=>(xi-xm)) - 2b*(summation=>(yi-ym))
summation=>(xi-xm) = 0
summation=>(yi-ym) = 0
E(a,b) = (a^2+b^2)*N, and since N is a constant, minimizing it is the same as minimizing a^2 + b^2
Now, to get the green point, which minimizes E(a,b), take each dataset point (xk,yk) with
a = xk-xm
b = yk-ym
and find (xp,yp) => minimum{E(a,b)} among all (xk,yk) - that is, the dataset point closest to the mean.
summation=>(xi-xm) and summation=>(yi-ym) are already known (they are 0) once the mean is found in O(N),
hence E(a,b) can be evaluated in O(1) per point and (xp,yp) found in O(N) overall.
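A minimal Python sketch of that O(N) pick (central_point is an illustrative name): since E(a,b) = (a^2+b^2)*N, the green point is simply the dataset point closest to the mean.

def central_point(points):
    n = len(points)
    xm = sum(x for x, _ in points) / n          # mean (the red dot)
    ym = sum(y for _, y in points) / n
    # minimizing E(a, b) = N*(a^2 + b^2) is the same as minimizing a^2 + b^2,
    # i.e. the squared distance of a dataset point to the mean
    return min(points, key=lambda p: (p[0] - xm) ** 2 + (p[1] - ym) ** 2)

voxels = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]   # toy L-shaped voxel set
print(central_point(voxels))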
I see a lot of people using PR(A) = (1 - d) + d * Σ PR(E)/L(E) as the PageRank formula. But it's really PR(A) = (1 - d)/N + d * Σ PR(E)/L(E). I know there won't be that much of a difference, since if PR(A) > PR(B) then it's still the same whichever formula you use. Also, in Larry Page's paper on PageRank he said that, when added together, all PageRanks should equal 1.
If you want all the values to sum up to one, you should use the (1 - d)/N + d * sum(...) version.
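A small Python check on a toy three-page graph (the graph and all names are purely illustrative assumptions) shows the difference: the (1 - d)/N variant sums to 1, while the (1 - d) variant sums to N.

links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}   # outgoing links of each page
incoming = {p: [q for q in links if p in links[q]] for p in links}
d, N = 0.85, len(links)

def pagerank(normalized, iters=200):
    pr = {p: 1.0 / N for p in links}
    base = (1 - d) / N if normalized else (1 - d)
    for _ in range(iters):
        pr = {p: base + d * sum(pr[q] / len(links[q]) for q in incoming[p])
              for p in links}
    return pr

print(sum(pagerank(True).values()))    # ~1.0 with the (1 - d)/N version
print(sum(pagerank(False).values()))   # ~3.0 (= N) with the (1 - d) version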