The original problem was discussed here: Algorithm to find special point k in O(n log n) time
Simply put, we have an algorithm that decides whether a set of points in the plane has a center of symmetry or not.
I wonder, is there a way to prove an Ω(n log n) lower bound for this problem? I guess we need to use this algorithm to solve a simpler problem with a known lower bound, such as sorting, element uniqueness, or set uniqueness; then we could conclude that if we can solve, e.g., element uniqueness by using this algorithm, the algorithm itself must take at least Ω(n log n).
It seems like the solution has something to do with element uniqueness, but I couldn't figure out how to turn an element-uniqueness instance into an instance of the center-of-symmetry problem.
Check this paper
The idea is that if we can reduce problem B to problem A, then B is no harder than A; in other words, A is at least as hard as B. That said, if problem B has a lower bound of Ω(n log n) and the reduction itself costs less than that, then problem A is guaranteed the same lower bound.
In the paper, the author picked the following relatively approachable problem to be B: given two sets of n real numbers, we wish to decide whether or not they are identical.
It's obvious that this introduced problem has a lower bound of Ω(n log n). The author then reduces the introduced problem to our problem at hand; see the paper for the construction (there, A and B denote the two sets of reals).
First observe that your magical point k must be at the centroid of the point set.
build a lookup data structure indexed by point position (O(n log n))
calculate the centroid of the set of points (O(n))
for each point, compute the position of its mirror image about the centroid and check whether it exists in the lookup structure (n lookups at O(log n) each, so O(n log n))
Appropriate lookup data structures can include basically anything that allows you to look something up efficiently by content, including balanced trees, oct-trees, hash tables, etc.
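For what it's worth, here is a minimal sketch of that recipe in Python; it uses a hash set as the lookup structure (expected O(n) lookups overall; a balanced tree would give the stated O(n log n)) and assumes distinct points with exact integer coordinates (with floats you would need a tolerance):

    # A rough sketch of the centroid + lookup approach.
    def has_center_of_symmetry(points):
        n = len(points)
        sx = sum(x for x, y in points)
        sy = sum(y for x, y in points)
        # Any center of symmetry must be the centroid (sx/n, sy/n).  Scale everything
        # by n to stay in integers: point (x, y) becomes (n*x, n*y).
        scaled = {(n * x, n * y) for x, y in points}
        # The mirror image of (x, y) about the centroid, scaled by n, is
        # (2*sx - n*x, 2*sy - n*y); it must also be one of the points.
        return all((2 * sx - n * x, 2 * sy - n * y) in scaled for x, y in points)

    print(has_center_of_symmetry([(0, 0), (2, 0), (0, 2), (2, 2)]))  # True, center (1, 1)
    print(has_center_of_symmetry([(0, 0), (1, 0), (3, 0)]))          # False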
Related
I'm reading CLRS (Introduction to Algorithms, 3rd edition) and I have two problems for my homework (I'm not asking for answers, I promise!). They are essentially the same question, just applied to Kruskal's algorithm and to Prim's. They are as follows:
Suppose that all edge weights in a graph are integers in the range from 1 to |V|. How fast can you make [Prim/Kruskal]'s algorithm run? What if the edge weights are integers in the range from 1 to W for some constant W?
I can see the logic behind the answers I'm thinking of and what I'm finding online (i.e. sort the edge weights using a linear-time sort, change the data structure being used, etc.), so I don't need help answering it. But I'm wondering why there is a difference between the answer for the range 1 to |V| and the answer for the range 1 to W. Why ask the same question twice? If it's some constant W, it could literally be anything. But honestly, so could |V| - we could have a crazy large graph, or a very small one. I'm not sure how the two questions posed in this problem are different, or why I need two separate approaches for them.
There's a difference in complexity between an algorithm that runs in O(V) time and one that runs in O(W) time for a constant W. Sure, V could be anything, as could W, but that's not really the point: one is linear, the other is O(1). The question is then for which algorithms a restricted range of edge weights could impact complexity (based, as you suggest, on edge-weight sort time and choice of data structure), and what the actual new optimal complexity would be for edge weights bounded by |V| versus edge weights bounded by a constant W.
Having bounded edge weights could open up new possibilities for sorting algorithms in Kruskal's, and might change the data structure you'd want to use to implement the priority queue for Prim's, along with the best way to implement its extract-min and decrease-key operations. How tightly the edge weights are bounded can determine whether a particular change of data structure or implementation is even beneficial in terms of final complexity.
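To illustrate the Prim's side, one option (not the only one) is a bucket queue: an array of W+1 buckets indexed by key value, giving O(1) decrease-key and an O(W) scan per extract-min, so the whole algorithm runs in about O(E + V*W), which is O(E + V) when W is a constant. A rough sketch, with made-up names and an adjacency-list format of my own choosing (assumes a connected, undirected graph):

    # Prim's algorithm with a bucket queue, for integer edge weights in 1..W.
    def prim_bucket(adj, W):
        # adj: {u: [(v, w), ...]} adjacency lists of a connected, undirected graph
        n = len(adj)
        INF = W + 1                               # sentinel key: "not reached yet"
        key = {u: INF for u in adj}               # cheapest known edge into the tree
        in_tree = {u: False for u in adj}
        buckets = [set() for _ in range(W + 1)]   # buckets[k] = frontier vertices with key k
        start = next(iter(adj))
        key[start] = 0
        buckets[0].add(start)
        total = 0
        for _ in range(n):
            k = 0                                 # extract-min: scan buckets upward,
            while not buckets[k]:                 # O(W) per extraction, O(1) for constant W
                k += 1
            u = buckets[k].pop()
            in_tree[u] = True
            total += key[u]
            for v, w in adj[u]:
                if not in_tree[v] and w < key[v]:
                    if key[v] <= W:               # v may not be in any bucket yet
                        buckets[key[v]].discard(v)
                    key[v] = w                    # decrease-key: O(1) bucket move
                    buckets[w].add(v)
        return total                              # weight of a minimum spanning tree

    graph = {'a': [('b', 1), ('c', 3)],
             'b': [('a', 1), ('c', 1), ('d', 2)],
             'c': [('a', 3), ('b', 1), ('d', 3)],
             'd': [('b', 2), ('c', 3)]}
    print(prim_bucket(graph, W=3))                # 4 (edges a-b, b-c, b-d)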
For example, knowing that the n elements of a list are bounded in value by a constant W means that switching to counting sort (or radix sort) improves the asymptotic complexity of sorting them to O(n + W) = O(n). But if I only knew that they were bounded in value by 2^n, there would be no advantage in switching away from the traditional comparison sorts and their O(n log n) sorting complexity.
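And on the Kruskal's side, the sort is where the bounded range pays off. A sketch of counting-sort-by-weight for integer weights in 1..W (the (weight, u, v) edge format is just for illustration):

    # Counting sort of an edge list by weight, assuming integer weights in 1..W.
    # This replaces the O(E log E) comparison sort in Kruskal's with O(E + W).
    def counting_sort_edges(edges, W):
        buckets = [[] for _ in range(W + 1)]      # one bucket per possible weight
        for e in edges:                           # e = (weight, u, v)
            buckets[e[0]].append(e)               # O(E): drop each edge in its bucket
        return [e for b in buckets for e in b]    # O(E + W): read buckets in order

    edges = [(3, 'a', 'c'), (1, 'a', 'b'), (2, 'b', 'd'), (1, 'b', 'c')]
    print(counting_sort_edges(edges, W=3))
    # [(1, 'a', 'b'), (1, 'b', 'c'), (2, 'b', 'd'), (3, 'a', 'c')]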
I'm tutoring a student and one of her assignments is to describe an O(n log n) algorithm for the closest pair of points in the one-dimensional case. But the restriction is she's not allowed to use a divide-and-conquer approach. I understand the two-dimensional case from a question a user posted some years ago. I'll link it in case someone wants to look at it: For 2-D case (plane) - "Closest pair of points" algorithm.
However, for the 1-D case, I can only think of a solution that checks each point and compares it to the closest points to its left and right. But that solution isn't O(n log n): there are n points, and finding the nearest neighbour on either side of each one takes time proportional to n, so the whole thing is quadratic. I'm not sure where the log n would come from without using a divide-and-conquer approach.
For some reason, I can't come up with a solution. Any help would be appreciated.
Hint: If the points were ordered from left to right, what would you do, and what would the complexity be? What is the complexity of ordering the points first?
It seems to me that one could:
Sort the locations into order - O(n log n)
Find the differences between the ordered locations - O(n)
Find the smallest difference - O(n)
The smallest difference defines the two closest points.
The overall result would be O(n log n).
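In code, that recipe might look like this (a minimal sketch, assuming at least two points):

    # Closest pair of points on a line: sort, then scan adjacent differences.
    def closest_pair_1d(xs):
        xs = sorted(xs)                                  # O(n log n)
        # The closest pair must be adjacent in sorted order, so one O(n) scan suffices.
        return min((b - a, (a, b)) for a, b in zip(xs, xs[1:]))

    dist, pair = closest_pair_1d([7, 3, 9, 4, 12])
    print(dist, pair)                                    # 1 (3, 4)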
Algorithm requirements
Input is an arbitrary square matrix M of size N×N, which just fits in memory.
The algorithm's output must be true if M[i,j] = M[j,i] for all j≠i, false otherwise.
Obvious solutions
Check if the transpose equals the matrix itself (Mᵀ = M). Easiest to program in many environments, but (usually) consumes twice the memory and requires N² comparisons in the worst case. Therefore, this is O(N²) and has high peak memory.
Check if the lower triangular part equals the upper triangular part. Of course, the algorithm returns on the first inequality found. This makes the worst case (the worst case being that the matrix is indeed symmetric) require (N² - N)/2 comparisons, since the diagonal does not need to be checked. So although it is better than option 1, this is still O(N²) (a quick sketch of this option follows).
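A quick sketch of option 2 (an actual implementation would of course be in a compiled language, which is what the question below is about):

    # Option 2: compare the strictly lower triangle against the upper triangle,
    # returning on the first inequality found.
    def is_symmetric(M):
        n = len(M)
        for i in range(1, n):
            for j in range(i):                    # only j < i, diagonal skipped
                if M[i][j] != M[j][i]:
                    return False
        return True

    print(is_symmetric([[1, 2], [2, 1]]))         # True
    print(is_symmetric([[1, 2], [3, 1]]))         # False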
Question
Although it's hard to see how it would be possible (the N² elements will all have to be compared somehow), is there an algorithm doing this check that is better than O(N²)?
Or, provided there is a proof of non-existence of such an algorithm: how to implement this most efficiently for a multi-core CPU (Intel or AMD) taking into account things like cache-friendliness, optimal branch prediction, other compiler-specific specializations, etc.?
This question stems mostly from academic interest, although I imagine a practical use could be to determine what solver to use if the matrix describes a linear system AX=b...
Since you will have to examine all the elements except the diagonal, the complexity IMO can't be better than O(n²).
For a dense matrix, the answer is a definite "no", because any uninspected (non-diagonal) elements could be different from their transposed counterparts.
For standard representations of a sparse matrix, the same reasoning indicates that you can't generally do better than the input size.
However, the same reasoning doesn't apply to arbitrary matrix representations. For example, you could store sparse representations of the symmetric and antisymmetric components of your matrix, which can easily be checked for symmetry in O(1) time by checking whether the antisymmetric component has any entries at all...
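As a toy illustration of that kind of representation (the class and field names here are made up):

    # A matrix stored as (sparse) symmetric + antisymmetric parts.  Checking
    # symmetry is then O(1): just ask whether the antisymmetric part is empty.
    class SplitMatrix:
        def __init__(self, sym, antisym):
            self.sym = sym          # {(i, j): value} with sym[(i, j)] == sym[(j, i)]
            self.antisym = antisym  # {(i, j): value} with antisym[(i, j)] == -antisym[(j, i)]

        def is_symmetric(self):
            return not self.antisym

    print(SplitMatrix({(0, 1): 2.0, (1, 0): 2.0}, {}).is_symmetric())   # True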
I think you can take a probabilistic approach here.
The idea: it's not chance or coincidence if p randomly picked lower-triangular elements all match their upper-triangular counterparts; in that case the chance is very high that the matrix is indeed symmetric.
So instead of going through all ½(n² - n) off-diagonal pairs, you can check p random pairs. A single mismatch proves the matrix is not symmetric; if all p pairs match, you have inspected a fraction p / (½(n² - n)) of the pairs, and your confidence grows with p without ever reaching certainty. You can then decide on a threshold above which you are willing to treat the matrix as symmetric.
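A sketch of that sampling scheme (the function name and the parameter p are mine; a False answer is always correct, a True answer is only probabilistic):

    import random

    # Probe p random off-diagonal positions (assumes n >= 2).
    def probably_symmetric(M, p):
        n = len(M)
        for _ in range(p):
            i, j = random.sample(range(n), 2)     # two distinct indices
            if M[i][j] != M[j][i]:
                return False                      # witness found: definitely not symmetric
        return True                               # no mismatch seen among p samples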
If a problem X reduces to a problem Y, is the opposite reduction also possible? Say:
X = Given an array tell if all elements are distinct
Y = Sort an array using comparison sort
Now, X reduces to Y in linear time, i.e. if I can solve Y, I can solve X with only linear extra work. Is the reverse always true? Can I solve Y, given that I can solve X? If so, how?
By reduction I mean the following:
Problem X linear reduces to problem Y if X can be solved with:
a) Linear number of standard computational steps.
b) Constant calls to subroutine for Y.
Given the example above:
You can determine if all elements are distinct in O(N) if you back them up with a hash table, which allows you to check existence in O(1) plus the overhead of the hash function (which generally doesn't matter); see the sketch at the end of this answer. If you are doing a non-comparison-based sort:
sorting algorithm list
Specialized sort that is linear:
For simplicity, assume you're sorting a list of natural numbers. The sorting method is illustrated using uncooked rods of spaghetti:
For each number x in the list, obtain a rod of length x. (One practical way of choosing the unit is to let the largest number m in your list correspond to one full rod of spaghetti. In this case, the full rod equals m spaghetti units. To get a rod of length x, simply break a rod in two so that one piece is of length x units; discard the other piece.)
Once you have all your spaghetti rods, take them loosely in your fist and lower them to the table, so that they all stand upright, resting on the table surface. Now, for each rod, lower your other hand from above until it meets with a rod--this one is clearly the longest! Remove this rod and insert it into the front of the (initially empty) output list (or equivalently, place it in the last unused slot of the output array). Repeat until all rods have been removed.
So given a very specialized case of your problem, your statement would hold. This will not hold in the general case though, which seems to be more what you are after. It is very similar to when people think they have solved TSP, but have instead created a constrained version of the general problem that is solvable using a special algorithm.
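Here is the sketch of the O(N) distinctness check promised above (a hash set gives expected O(1) membership tests):

    # Element distinctness via a hash set: expected O(N) overall.
    def all_distinct(arr):
        seen = set()
        for x in arr:
            if x in seen:
                return False
            seen.add(x)
        return True

    print(all_distinct([3, 1, 4, 1, 5]))          # False (1 repeats)
    print(all_distinct([3, 1, 4, 5]))             # True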
Suppose I can solve a problem A in constant time O(1) but problem B has a best case exponential time solution O(2^n). It is likely that I can come up with an insanely complex way of solving problem A in O(2^n) ("reducing" problem A to B) as well but if the answer to your question was "YES", I should then be able to make all exceedingly difficult problems solvable in O(1). Surely, that cannot be the case!
Assuming I understand what you mean by reduction: say I have a problem, looking something up from a list, that I can solve in O(N) using an array of key/value pairs. I can solve the same problem in O(1) by using a Dictionary (see the sketch below).
Does that mean I can go back to my first technique, and use it to solve the same problem in O(1)?
I don't think so.
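To make the contrast concrete, a toy sketch (the data is made up):

    pairs = [("alice", 1), ("bob", 2), ("carol", 3)]   # array of key/value pairs
    index = dict(pairs)                                 # Dictionary built from the same data

    def lookup_list(key):
        for k, v in pairs:              # O(N): scan until the key is found
            if k == key:
                return v
        return None

    def lookup_dict(key):
        return index.get(key)           # expected O(1): hash-based lookup

    print(lookup_list("bob"), lookup_dict("bob"))       # 2 2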
There is a question on an assignment that was due today, for which solutions have been released, and I don't understand the correct answer. The question deals with the best-case performance of disjoint sets in the form of disjoint-set forests that use the weighted-union heuristic to improve performance (the root of the smaller tree is attached as a child of the root of the larger tree) but without path compression.
The question is whether the best-case performance of doing (n-1) Union operations on n singleton nodes and m >= n Find operations, in any order, is Ω(m log n), which the solution confirms is correct like this:
There is a sequence S of n-1 Unions followed by m >= n Finds that takes Ω(m log n) time. The sequence S starts with a sequence of n-1 Unions that builds a tree with depth Ω(log n). Then it has m >= n Finds, each one for the deepest leaf of that tree, so each one takes Ω(log n) time.
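Concretely, I picture the solution's sequence S roughly like this (my own sketch, with union by size and no path compression; the code is not from the assignment):

    # Union by size, no path compression.
    class DSF:
        def __init__(self, n):
            self.parent = list(range(n))
            self.size = [1] * n

        def find(self, x):                      # walk to the root, counting steps
            steps = 0
            while self.parent[x] != x:
                x = self.parent[x]
                steps += 1
            return x, steps

        def union(self, a, b):
            ra, _ = self.find(a)
            rb, _ = self.find(b)
            if ra == rb:
                return
            if self.size[ra] < self.size[rb]:   # attach the smaller tree to the larger
                ra, rb = rb, ra
            self.parent[rb] = ra
            self.size[ra] += self.size[rb]

    n = 16                                       # a power of two keeps the picture clean
    dsf = DSF(n)
    step = 1
    while step < n:                              # n-1 Unions: repeatedly pair up equal-size trees
        for i in range(0, n, 2 * step):
            dsf.union(i, i + step)
        step *= 2
    _, depth = dsf.find(n - 1)                   # the deepest leaf sits at depth log2(n)
    print(depth)                                 # 4 for n = 16; each such Find costs that many steps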
My question is, why does that prove that the Ω(m log n) lower bound is correct? Isn't that just an isolated example where the cost is Ω(m log n), which doesn't prove it for all inputs? I am certain one only needs to show a single counter-example to disprove a claim, but needs to prove a predicate for all possible inputs in order to prove its correctness.
In my answer, I pointed out that you could start by joining two singleton nodes together, then join another singleton to that 2-node tree (giving a root with two children), then another, and so on, until all n nodes are joined. You then have a tree in which n-1 nodes all point directly to the same root, which is essentially the result you would obtain with path compression. Then every Find is executed in O(1) time. Thus, a sequence of (n-1) Unions and m >= n Finds ends up being Ω(n-1+m) = Ω(n+m) = Ω(m).
Doesn't this imply that the Ω(m log n) bound is not tight and the claim is, therefore, incorrect? I'm starting to wonder if I don't fully understand Big-O/Omega/Theta :/
EDIT: fixed up the question to be a little clearer
EDIT2: Here is the original question the way it was presented and the solution (it took me a little while to realize that Gambarino and the other guy are completely made up; hardcore Italian prof)
Seems like I indeed misunderstood the concept of Big-Omega. For some strange reason, I presumed Big-Omega to be equivalent to "whatever input to the function results in the best possible performance". In reality, most likely unsurprisingly to the reader but a revelation to me, Big-Omega simply describes a lower bound on a function. That's it. Therefore, a worst-case input will have a lower and an upper bound (big-Omega and big-O), and so will the best possible input. For the big-Omega claim here, all we had to do was exhibit one suitable scenario, i.e. show that there is some input of size n on which the algorithm takes at least m log n steps. If such an input exists, then the Ω(m log n) lower bound is established (and since each Find costs at most O(log n), it is in fact tight).