Time complexity for midpoint circle algorithm

https://en.wikipedia.org/wiki/Midpoint_circle_algorithm
https://www.geeksforgeeks.org/mid-point-circle-drawing-algorithm/
I have been looking into the midpoint circle algorithm and have come across conflicting information on its time complexity. On the Wikipedia page, complexity is not mentioned, but in the GeeksforGeeks article, it is listed as O(x - y).
The GeeksforGeeks article above mentions:
Time Complexity: O(x – y)
Auxiliary Space: O(1)
Since x and y are just numbers (so x - y looks like a constant to me), should the time complexity be O(1), or O(r) where r is the radius of the circle?
My guess is O(r), by analogy with looping through a 2D vector of size n * m, which is O(n*m).
If I loop through the circumference of the circle, it should be O(2 * pi * r), where the constant can be dropped to give O(r).
If I change the algorithm a bit and loop through every cell inside the circle, it should be O(r^2), which comes from O(pi * r * r) with the constant pi dropped?
Disclaimer: I didn't take any algorithms course at university; I'm trying to self-learn CS.

IMO, O(X-Y) is meaningless.
To draw a full circumference, the algorithm sets 4R pixels. Computing the coordinates of the pixels takes constant time (essentially a handful of additions). So O(R) is correct.
To draw a whole disk, you set about πR² pixels. A simple method works by scanning the circumscribed square, which contains 4R² pixels, so that is O(R²) either way.
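
For concreteness, here is a minimal Python sketch of the midpoint circle algorithm (not the exact code from either linked page, just an illustration of the bound): the loop walks a single octant, which is at most about R steps, each step does a constant number of integer additions, and each step plots a constant number (8) of symmetric pixels, so the whole thing is O(R).

```python
def midpoint_circle(cx, cy, r):
    pixels = []
    x, y = 0, r
    d = 1 - r                       # integer decision variable
    while x <= y:                   # one octant: O(R) iterations
        # 8-way symmetry: a constant number of pixels set per iteration
        pixels.extend([(cx + x, cy + y), (cx - x, cy + y),
                       (cx + x, cy - y), (cx - x, cy - y),
                       (cx + y, cy + x), (cx - y, cy + x),
                       (cx + y, cy - x), (cx - y, cy - x)])
        if d < 0:
            d += 2 * x + 3          # midpoint inside the circle: step east
        else:
            d += 2 * (x - y) + 5    # midpoint outside: step south-east
            y -= 1
        x += 1
    return pixels

print(len(midpoint_circle(0, 0, 10)))   # pixel count grows linearly with the radius
```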

Find best circular permutation that minimizes average distance between 2 ordered lists of points

Given 2 ordered sets of n points each, A and B, how do I find the best circular permutation that minimizes the average pairwise distance (using the distance of your choice) between points?
In other words, how do I algorithmically find k such that it minimizes sum(||A[i] - B[(i + k) % n]||) with 0 <= k < n? (I have omitted the division by n because minimizing the total distance should yield the same result as minimizing the mean, I believe.)
One extra requirement is that the algorithm should be usable in N-dimensional spaces, so I can't just sort the arrays.
I could obviously compute every pairwise distance, but that would yield O(n^2) complexity (n x n pairwise distance computations + n accumulations), which is sub-optimal ([edit] I mean here that I sure hope one can do better than brute force).
Application:
One application is in graphics where I want to map each point of a shape to a point of another shape without creating crossing edges. See drawing below where we map each point of the red shape to a point on the blue shape.
I have two ideas.
If you're willing to optimize the sum of squares of distances, then there's an O(n log n) time algorithm based on fast convolution. The modified objective allows us to find the contribution of each coordinate separately for each possible rotation. Then we sum element-wise and choose the best.
To solve the reduced problem in 1D: we want
sum_i (A[i] - B[(i+k) mod n])**2
for each k. Do some algebra:
sum_i (A[i] - B[(i+k) mod n])**2 =
sum_i (A[i]**2 - 2*A[i]*B[(i+k) mod n] + B[(i+k) mod n]**2) =
sum_i A[i]**2 + sum_i B[i]**2 - 2*sum_i (A[i]*B[(i+k) mod n]).
The first two terms are the same for all k, so just compute them. The vector of third terms for all k can be computed quickly in bulk as a constant times A convolved with the reverse of B.
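
Here is a rough NumPy sketch of that first idea (my own naming, e.g. best_rotation; it assumes the points are supplied as (n, d) arrays): the cross term for every k is one circular cross-correlation per coordinate, computed with the FFT, so the total cost is O(n log n) for fixed dimension.

```python
import numpy as np

def best_rotation(A, B):
    """Return k minimising sum_i ||A[i] - B[(i + k) % n]||**2."""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    n = A.shape[0]
    const = np.sum(A ** 2) + np.sum(B ** 2)          # same for every k
    # cross[k] = sum_i <A[i], B[(i + k) % n]>: one FFT cross-correlation per coordinate
    cross = np.zeros(n)
    for dim in range(A.shape[1]):
        fa = np.fft.fft(A[:, dim])
        fb = np.fft.fft(B[:, dim])
        cross += np.real(np.fft.ifft(np.conj(fa) * fb))
    costs = const - 2.0 * cross                      # objective value for every k
    return int(np.argmin(costs))

# Quick check: B is A rotated, so the recovered k should undo the rotation.
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 2))
B = np.roll(A, 3, axis=0)     # B[i] = A[(i - 3) % 8]
print(best_rotation(A, B))    # -> 3
```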
My second idea is a recursive heuristic. If n is small, just brute force it. Otherwise, make a smaller instance by computing the midpoint of each pair of consecutive points in each list. Recursively align these. Then multiply the heuristic rotation by two and check it against the rotations one up and one down from it. In constant dimensions, this yields a recurrence like T(n) = T(n/2) + O(n), which is O(n).

Closest pair of points brute-force; Why O(n^2)?

I feel stupid for asking this question, but...
For the "closest pair of points" problem (see this if unfamiliar with it), why is the worst-case running time of the brute-force algorithm O(n^2)?
If say n = 4, then there would only be 12 possible pairs of points to compare in the search space, if we also count comparing two points in either direction. If we don't compare two points twice, then it's only 6.
O(n^2) doesn't add up to me.
The actual number of comparisons is n*(n - 1)/2, or (n^2 - n)/2.
But, in big-O notation, you are only concerned with the dominant term. At very large values of n, the n term becomes less important, as does the 1/2 coefficient on the n^2 term. So we just say it's O(n^2).
Big-O notation isn't meant to give you the exact formula for the time taken or number of steps. It only gives you the order of the complexity/time so you can get a sense of how it scales for large inputs.
Applying brute force, we are forced to check all possible pairs. Assuming N points, for each point there are N-1 other points for which we need to calculate the distance, so the total number of distances calculated is N * (N-1). But in the process we double-count: the distance between A and B is the same whether we go from A to B or from B to A. Hence N*(N-1)/2, and hence O(N^2).
In big-O notation, you can factor out multiplied constants, so:
O(k*(n^2)) = O(n^2)
The idea is that the constant (1/2 in the OP's example, since distance is symmetric) doesn't really tell us anything new about the complexity. It still grows with the square of the input.
In the brute-force version of the algorithm you compare all possible pairs of points. For each of n points you have (n - 1) other points to compare, and if we take every pair only once we end up with (n * (n - 1)) / 2 comparisons. The worst-case complexity of O(n^2) means that the number of operations is bounded by k * n^2 for some constant k. Big-O notation can't tell you the exact number of operations, only a function to which that number is proportional as the size of the data (n) increases.
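
As an illustration of the counting (my own sketch, not from any of the answers above), here is a brute-force closest-pair loop where the inner index starts at i + 1, so each unordered pair is examined exactly once, i.e. n*(n-1)/2 distance computations:

```python
import math

def closest_pair(points):
    best = math.inf
    best_pair = None
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):        # j > i, so each pair is counted once
            (x1, y1), (x2, y2) = points[i], points[j]
            d = math.hypot(x1 - x2, y1 - y2)
            if d < best:
                best, best_pair = d, (points[i], points[j])
    return best, best_pair

# 4 points -> 4*3/2 = 6 comparisons, matching the question's count
print(closest_pair([(0, 0), (3, 4), (1, 1), (5, 5)]))
```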

Big O: What is the relationship between O(log(n)) with real time

I've measured the real running time of my algorithm in milliseconds.
I've plotted a graph comparing the actual time taken by my algorithm in Milli-Seconds(y-axis) to 'n'(x-axis) where n is the number of nodes in the tree I'm working on.
How do I relate this graph to O(log(n)), given that my algorithm should ideally have O(log(n)) complexity?
Assuming your algorithm is in O(log n), the graphs should make for a nice comparison. But don't plot log n on its own; you need k * log n + c for some constants k and c.
The constant k describes the duration of a single step of your algorithm, whereas c summarizes all constant (initialization) costs.
Depending on what you want to achieve and your algorithm / implementation you might see effects like processor cache misses, garbage collection or similar stuff with increasing n.
In case you can save n, log(n), and runtime(n):
You can use 3 visualization approaches (I used Excel since it is easy and fast):
Draw a QQ-plot between log(n) and your run time:
This figure shows you the difference between the 'Theoretical' run time function and the 'Empirical' run time function. A straight line (or close) implies that they are close.
Draw two plots on the same graph: the horizontal axis is n, and the two functions are log(n) and the run time you obtained for each n.
The third analysis is the statistical approach: plot a graph where the horizontal axis is n and the vertical axis is runtime(n). Now add a logarithmic trend line and the R-squared value.
The trend line gives you the best a, b such that runtime(n) = a*log(n) + b. The closer R-squared is to 1, the better the correlation between the runtime and log(n).
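
If you would rather do the trend-line fit in code than in Excel, here is a small NumPy sketch (the ns and runtimes_ms values are made-up placeholders, not real measurements): it fits runtime(n) = a*log(n) + b by least squares and reports R-squared.

```python
import numpy as np

ns = np.array([1_000, 2_000, 4_000, 8_000, 16_000, 32_000])
runtimes_ms = np.array([1.10, 1.28, 1.41, 1.58, 1.74, 1.90])   # hypothetical timings

x = np.log(ns)
a, b = np.polyfit(x, runtimes_ms, 1)     # least-squares fit of a*log(n) + b
predicted = a * x + b
ss_res = np.sum((runtimes_ms - predicted) ** 2)
ss_tot = np.sum((runtimes_ms - runtimes_ms.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"a = {a:.4f}, b = {b:.4f}, R^2 = {r_squared:.4f}")
```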

What is the complexity of the following statement?

I'm practicing for an upcoming test by completing past tests. One of the questions asks me to determine the worst-case time complexity (in Big-O) for an algorithm. I am unsure of the correctness of my thought process when looking at the following algorithm:
Adjusting the color values of each Pixel in a Picture with height N and width M.
If we consider the simpler case of adjusting the color value of each pixel in a single column of height N, this algorithm is simply O(N). When we factor in the width M, we need to multiply N * M, because each of the N rows contains M pixels. Thus I conclude that the above algorithm has a worst-case time complexity of O(M * N).
Any help or hints would be greatly appreciated! Thank you!
Assuming that "Adjusting the color values of each Pixel" takes constant time, your reasoning is correct, since there are N*M pixels, the complexity is O(N*M).
For your information, to make your answer more complete, you should also mention the assumption, that is that you assume "Adjusting the color values of each Pixel" takes constant time. If that process (which is repeated N*M times) takes, say O(M), then the algorithm is O(N*M*M), since for each pixel you need to do an O(M) operation.
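
A minimal sketch of that reasoning (hypothetical adjust_colors function, with the picture as a plain 2D list of values): the outer loop runs N times, the inner loop M times, and each per-pixel update is O(1), so the total work is O(N*M).

```python
def adjust_colors(picture, delta):
    for row in picture:                    # N rows
        for i, value in enumerate(row):    # M pixels per row
            row[i] = value + delta         # constant-time adjustment per pixel
    return picture

picture = [[10, 20, 30],
           [40, 50, 60]]                   # N = 2, M = 3 -> 6 pixel updates
print(adjust_colors(picture, 5))
```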

Algorithm - Find the number of rectangles covering a given rectangle area

This is not a homework problem; it's an interview question. I have not been able to come up with a good solution for this problem.
Problem :
Given an n*n grid (bottom left (0,0), top right (n,n)) and n rectangles with sides parallel to the coordinate axes. The bottom-left and top-right coordinates of the n rectangles are given in the form (x1,y1)(x1',y1') .... (xn,yn)(xn',yn'). There are M queries, each asking for the number of rectangles that cover a rectangle with coordinates (a,b)(c,d). How do I solve this efficiently? Is there a way to precompute answers for all coordinate positions so that I can return each answer in O(1)?
Constraints:
1<= n <= 1000
It is straightforward to create, in O(n^4) space and O(n^5) time, a data structure that provides O(1) lookups. If M exceeds O(n^2) it might be worthwhile to do so. It also is straightforward to create, in O(n^2) space and O(n^3) time, a data structure that provides lookups in O(n) time. If M is O(n^2), that may be a better tradeoff; ie, take O(n^3) precomputation time and O(n^3) time for O(n^2) lookups at O(n) each.
For the precomputation, make an n by n array of lists. Let L_pq denote the list for cell p,q of the n by n grid. Each list contains up to n rectangles, with lists all ordered by the same relation (ie if Ri < Rj in one list, Ri < Rj in every list that pair is in). The set of lists takes time O(n^3) to compute, taken either as "for each C of n^2 cells, for each R of n rectangles, if C in R add R to L_C" or as "for each R of n rectangles, for each cell C in R, add R to L_C".
Given a query (a,b,c,d), in time O(n) count the size of the intersection of lists L_ab and L_cd. For O(1) lookups, first do the precomputation mentioned above, and then for each a,b, for each c > a and d > b, do the O(n) query mentioned above and save the result in P[a,b,c,d], where P is an appropriately large array of integers.
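
Here is a rough Python sketch of that structure (my own naming; rectangles are assumed to be given as (x1, y1, x2, y2) tuples with integer corners on the grid). A rectangle covers the query (a,b)(c,d) exactly when it contains both corners, so a lookup intersects the two per-cell lists; both lists are kept in the same (index) order, so a merge-style scan runs in O(n).

```python
def build_cell_lists(n, rects):
    # lists[x][y] = indices of rectangles containing grid point (x, y),
    # in increasing index order. O(n^2) space, O(n^3) time in the worst case.
    lists = [[[] for _ in range(n + 1)] for _ in range(n + 1)]
    for idx, (x1, y1, x2, y2) in enumerate(rects):
        for x in range(x1, x2 + 1):
            for y in range(y1, y2 + 1):
                lists[x][y].append(idx)
    return lists

def count_covering(lists, a, b, c, d):
    # Merge-style intersection of the two sorted lists: O(n) per query.
    la, lc = lists[a][b], lists[c][d]
    i = j = count = 0
    while i < len(la) and j < len(lc):
        if la[i] == lc[j]:
            count += 1; i += 1; j += 1
        elif la[i] < lc[j]:
            i += 1
        else:
            j += 1
    return count

rects = [(0, 0, 4, 4), (1, 1, 3, 3), (2, 2, 5, 5)]
lists = build_cell_lists(5, rects)
print(count_covering(lists, 1, 1, 3, 3))   # rectangles 0 and 1 cover (1,1)(3,3) -> 2
```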
It is likely that an O(n^3) or perhaps O(n^2 · log n) precomputation method exists using either segment trees, range trees, or interval trees that can do queries in O(log n) time.
