Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 9 years ago.
Every example I have seen of a brute-force algorithm has exponential run time.
Is this a strict rule, i.e., are all brute-force algorithms exponential in run time?
No, certainly not. Consider a linear search through a sorted array. You can do better (binary search runs in O(log n)), but a linear scan could reasonably be considered "brute force", and it takes only O(n) time.
See https://en.wikipedia.org/wiki/Brute-force_search for further examples and explanation. A relevant quote from that page:
While a brute-force search is simple to implement, and will always find a solution if it exists, its cost is proportional to the number of candidate solutions - which in many practical problems tends to grow very quickly as the size of the problem increases.
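To make the linear-search example above concrete, here is a minimal sketch: it tries every candidate in turn (the defining trait of brute force), yet its cost is only linear in the input size.

```python
def linear_search(arr, target):
    """Brute-force search: examine every element in order.

    Checks each of the n candidates exactly once, so the
    worst-case running time is O(n), not exponential.
    """
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1  # target not present
```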
Nope. You can also do worse.
For example, finding the shortest tour (travelling salesman) by brute force takes Ω(n!) time, which is not exponential: n! grows faster than c^n for any constant c.
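A sketch of that brute-force TSP search, to show where the factorial comes from: fixing city 0 as the start, the loop below enumerates all (n-1)! orderings of the remaining cities.

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exhaustive TSP: try every tour that starts and ends at city 0.

    There are (n-1)! candidate tours, so the running time is
    factorial in the number of cities.
    """
    n = len(dist)
    best = float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        best = min(best, cost)
    return best
```

For instance, on a 4-city symmetric distance matrix the function checks 3! = 6 tours.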
Closed 2 years ago.
So far in my study of algorithms, I have assumed that asymptotic bounds map directly onto patterns in control structures.
So if we have O(n^2) time complexity, I was thinking that this automatically means I have to use nested loops. But I see that this is not always correct (and the same goes for other time complexities, not just quadratic).
How should I approach the relationship between time complexity and control structure?
Thank you
Rice's theorem is a significant obstacle to making general statements about analyzing running time.
In practice there's a repertoire of techniques that get applied. A lot of algorithms have a nested loop structure that's easy to analyze. When the bounds of one of those loops are data-dependent, you might need to do an amortized analysis. Divide-and-conquer algorithms can often be analyzed with the Master Theorem or Akra–Bazzi.
In some cases, though, the running time analysis can be very subtle. Take union-find, for example: getting the inverse Ackermann running time bound requires pages of proof. And then for things like the Collatz conjecture we have no idea how to even get a finite bound.
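As a small illustration of why loop structure alone is misleading, here is a hypothetical sliding-window function: it contains a `while` loop nested inside a `for` loop, yet an amortized argument shows it runs in O(n), because the `left` pointer only ever moves forward.

```python
def longest_window_with_sum_at_most(nums, limit):
    """Length of the longest contiguous window of non-negative
    nums whose sum is at most limit.

    Despite the nested while loop, each of the two pointers
    advances at most n times in total, so the overall running
    time is O(n), not O(n^2).
    """
    total = 0
    left = 0
    best = 0
    for right, x in enumerate(nums):
        total += x
        while total > limit:      # shrink window from the left
            total -= nums[left]
            left += 1             # left never moves backward
        best = max(best, right - left + 1)
    return best
```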
Closed 6 years ago.
I have a hard time grasping the key idea of the Held–Karp algorithm: how does it reduce the time complexity?
Is it because it uses dynamic programming, so that time is saved by fetching intermediate results from a cache, or because it removes some paths earlier in the calculation?
Also, is it possible to use a two-dimensional table to show the calculation for a simple TSP instance (3 or 4 cities)?
The dynamic programming procedure of the Held–Karp algorithm takes advantage of the following property of the TSP problem: Every subpath of a path of minimum distance is itself of minimum distance.
So essentially, instead of checking all solutions in a naive "top-down", brute-force approach (trying every possible permutation), we use a "bottom-up" approach in which all the intermediate information required to solve the problem is computed once and only once. The initial step handles the very smallest subpaths. Every time we move up to solve a larger subpath, we can look up the solutions to all the smaller subpath problems, which have already been computed.
The time savings come because all of the smaller subproblems have already been solved: there are only about 2^n · n subproblems (one per subset of cities and ending city), versus the n! permutations that brute force examines. But no "paths are removed" from the calculation; at the end of the procedure all of the subproblems will have been solved. The obvious drawback is that a very large amount of memory may be required to store all the intermediate results.
In summary, the time savings of the Held–Karp algorithm follow from the fact that it never solves the subproblem for any subset (combination) of the cities more than once, whereas the brute-force approach recomputes the solution to a given subset combination many times (albeit not necessarily consecutively within a given overall permutation).
Wikipedia contains a 2D distance matrix example and pseudocode here.
Closed 6 years ago.
This is a variant question from the Elements of Programming Interviews and doesn't come with a solution.
How can you compute the smallest number of queens that can be placed to attack each uncovered square?
The problem is that of finding a minimum dominating set in a graph (in your case, the queen graph: http://mathworld.wolfram.com/QueenGraph.html), and that more general problem is NP-hard. Even though the restriction to this specific family of graphs is not known to be NP-hard, you should not expect to find an efficient (polynomial-time) algorithm; indeed, to date nobody has found one.
As an interview question, I think an acceptable answer would be a backtracking algorithm. You can add small improvements, such as always cutting off a branch of the search once you have already placed (n-2) queens on the board.
For more information and pseudocode of the algorithm, as well as more sophisticated algorithms, I would suggest reading:
Fernau, H. (2010). Minimum dominating set of queens: A trivial programming exercise? Discrete Applied Mathematics, 158(4), 308–318.
http://www.sciencedirect.com/science/article/pii/S0166218X09003722
The simplest way is probably exhaustive search with 1, 2, 3, … queens until you find a solution. If you take the symmetries of the board into account, you will only need ~10^6 searches to confirm that 4 queens are not enough (at that point you could continue the same search until you find a solution with 5 queens, or alternatively use a greedy algorithm to find a 5-queen solution faster).
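The exhaustive approach above can be sketched as follows. This version does not exploit board symmetries, so it is only a minimal illustration that is practical for small boards; `min_dominating_queens` and `attacks` are hypothetical names chosen here.

```python
from itertools import combinations

def attacks(q, sq):
    """True if a queen at q occupies or attacks square sq."""
    (r1, c1), (r2, c2) = q, sq
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def min_dominating_queens(n):
    """Smallest number of queens covering every square of an
    n x n board, by exhaustive search over placements of
    1, 2, 3, ... queens. No symmetry pruning, so this is only
    feasible for small n.
    """
    squares = [(r, c) for r in range(n) for c in range(n)]
    for k in range(1, n + 1):
        for queens in combinations(squares, k):
            if all(any(attacks(q, sq) for q in queens) for sq in squares):
                return k
    return n  # n queens (one per row) always suffice
```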
Closed 7 years ago.
Do all algorithms that use the divide and conquer approach use recursive functions or not necessarily?
Binary search is an application of the D&C paradigm. (As follows: split in two halves and continue into the half that may contain the key.)
It can be implemented both recursively or non-recursively.
Recursion is handy when you need to keep both "halves" of a split and queue them for later processing. When only one recursive call is made and it is the last thing the function does (tail recursion), it can be mechanically converted into a loop. In binary search, you simply drop one of the halves, so the iterative form is natural.
In a very broad sense, D&C is the father of all algorithms when stated as "break the problem into easier subproblems of the same kind". This definition also encompasses iterative solutions, often implemented without recursion.
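For instance, here is a sketch of binary search written without any recursion: the divide-and-conquer step (halving the interval) becomes a plain loop.

```python
def binary_search(arr, target):
    """Iterative binary search over a sorted list.

    Divide and conquer without recursion: each iteration halves
    the remaining interval, so the running time is O(log n).
    """
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1   # discard the left half
        else:
            hi = mid - 1   # discard the right half
    return -1  # not found
```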
Closed 8 years ago.
I always have problems evaluating the complexity of a problem. I usually try to find an O(n) solution, but sometimes O(n log n) or even O(n^2) is the best possible one.
One "rule of thumb" I know is that if you have a sorted array and you need to find something in it, it can probably be done in O(log n). I also know that comparison-based sorting can't be done faster than O(n log n). Are there any similar rules an inexperienced programmer can follow? Recurring problems whose complexity you know?
The most troublesome for me is O(n^2), especially if I'm under pressure in an exam and I waste time trying to find a better solution.
I hope this isn't a too broad and opinion-based question.
Thanks!
Non-comparison-based sorting can take O(n) time, e.g., radix sort, which runs in O(d·(n + k)) and is linear when the key length d is fixed.
This seems like a good read: http://bigocheatsheet.com/ It contains a list of common algorithms with their space and time complexities. Hope this helps.
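To make the O(n) sorting claim concrete, here is a counting sort sketch, the simplest non-comparison sort: it assumes the keys are small non-negative integers bounded by a known `max_value`.

```python
def counting_sort(nums, max_value):
    """Counting sort for integers in the range [0, max_value].

    Tallies occurrences of each key, then emits keys in order.
    Runs in O(n + k) time where k = max_value + 1, which is
    linear in n when the key range is fixed; no comparisons
    between elements are ever made.
    """
    counts = [0] * (max_value + 1)
    for x in nums:
        counts[x] += 1
    out = []
    for value, c in enumerate(counts):
        out.extend([value] * c)
    return out
```

This does not contradict the O(n log n) lower bound, because that bound applies only to sorts that learn the order solely by comparing elements.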