Algorithm to iteratively compute the reciprocal of a number [closed]

I was reading R. G. Dromey's book How to Solve It by Computer, and in chapter 3 I found this problem: "Design and implement an algorithm to iteratively compute the reciprocal of a number." I am confused about how to do that, since he had just been teaching how to compute square roots and then suddenly comes up with this question. What is the connection?
And what would the algorithm be? Plus, why do we need this when we can directly find the reciprocal of a number?

Iteratively computing a function usually means applying a numerical analysis method, such as Newton-Raphson (http://en.wikipedia.org/wiki/Newton%27s_method) or binary search.
These methods, and numerical analysis in general (http://en.wikipedia.org/wiki/Numerical_analysis), let you approximate a root of a function f(x) without any explicit formula for the solution.
As an example, you can calculate a root of f(x) = 5*x^2 + sqrt(x) + ln(x), for which it is difficult to find a solution formula.
Plus why do we need this when we can directly find the reciprocal of a number?
Imagine that you need to calculate the reciprocal of a number on a machine that cannot perform division, only addition, subtraction, and multiplication. How do you do it? You use numerical analysis :)
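This is presumably also the link to the square-root chapter: square roots there are computed by the same kind of iterative refinement. For the reciprocal, Newton's method applied to f(x) = 1/x - a gives the division-free update x_{n+1} = x_n * (2 - a*x_n). A minimal Python sketch (the function name, starting guess, and tolerance are my own choices):
```python
import math

def reciprocal(a, tol=1e-12, max_iter=60):
    """Approximate 1/a with Newton's method on f(x) = 1/x - a.

    The update x <- x * (2 - a*x) uses only multiplication and
    subtraction, so it works even without a divide instruction.
    """
    if a == 0:
        raise ValueError("reciprocal of zero is undefined")
    sign = -1.0 if a < 0 else 1.0
    a = abs(a)
    # Starting guess in (0, 2/a), which guarantees convergence:
    # frexp gives a = m * 2**e with 0.5 <= m < 1, so a * 2**-e = m < 2.
    _, e = math.frexp(a)
    x = 2.0 ** -e
    for _ in range(max_iter):
        x_next = x * (2.0 - a * x)
        converged = abs(x_next - x) < tol
        x = x_next
        if converged:
            break
    return sign * x

print(reciprocal(7.0))    # 0.14285714285714285
print(reciprocal(-0.25))  # -4.0
```
The error is squared at every step (quadratic convergence), and this same recurrence is how some hardware division units refine an initial table-lookup estimate.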

Related

Which algorithm would be the faster algorithm? [closed]

As per Big O notation, if the time complexity of one algorithm is O(2^n) and the other is O(n^1000), which one would be faster?
To recognize the overall behavior in non-obvious cases like this one, take the logarithm of both functions.
(Sometimes you can instead take the ratio of the two functions and evaluate its limit for large n, but that approach is not convenient here.)
log(2^n) = n*log(2)
log(n^1000) = 1000*log(n)
The first result is a straight line with positive slope. The second is a concave curve (its second derivative is negative), so the first function becomes larger at some sufficiently big n.
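To see where the crossover happens, you can solve n*log(2) = 1000*log(n) numerically; a small sketch using bisection (the bracket [2, 10^6] is an arbitrary choice that contains the upper root):
```python
import math

# 2**n overtakes n**1000 where n*log(2) = 1000*log(n); for large n
# the difference g(n) is increasing, so bisection finds the crossover.
def g(n):
    return n * math.log(2) - 1000 * math.log(n)

lo, hi = 2.0, 1e6  # g(lo) < 0 < g(hi): brackets the upper root
while hi - lo > 1e-6:
    mid = (lo + hi) / 2
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid

print(round(lo))  # ~13747: beyond this n, 2**n > n**1000
```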
O(n^1000) is in the same class as O(n^2) and O(n^777777777): polynomial time. O(2^n) is exponential time, which grows far faster, so the exponential algorithm is the slower one for large n.
https://www.bigocheatsheet.com/

asymptotic bounding and control structures [closed]

So far in my learning of algorithms, I have assumed that asymptotic bounds are directly related to patterns in control structures.
So if we have n^2 time complexity, I was thinking that this automatically means that I have to use nested loops. But I see that this is not always correct (and the same holds for other time complexities, not just quadratic).
How should I approach this relationship between time complexity and control structure?
Thank you
Rice's theorem is a significant obstacle to making general statements about analyzing running time.
In practice there's a repertoire of techniques that get applied. A lot of algorithms have a nested loop structure that's easy to analyze. When the bounds of one of those loops are data-dependent, you might need to do an amortized analysis (see the sketch below). Divide and conquer algorithms can often be analyzed with the Master Theorem or Akra–Bazzi.
In some cases, though, the running time analysis can be very subtle. Take union-find, for example: getting the inverse Ackermann running time bound requires pages of proof. And then for things like the Collatz conjecture we have no idea how to even get a finite bound.
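As a concrete, standard illustration of why loop nesting alone doesn't determine the bound: the "next greater element" pattern below has a while loop nested inside a for loop, yet runs in O(n), because each index is pushed and popped at most once across the whole run (an amortized argument).
```python
def next_greater(values):
    """For each element, index of the next strictly greater element (-1 if none).

    Looks O(n^2) from the nesting, but each index enters and leaves the
    stack at most once, so the total work is O(n).
    """
    result = [-1] * len(values)
    stack = []  # indices whose next-greater element is not yet known
    for i, v in enumerate(values):
        while stack and values[stack[-1]] < v:
            result[stack.pop()] = i
        stack.append(i)
    return result

print(next_greater([2, 1, 5, 3, 4]))  # [2, 2, -1, 4, -1]
```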

For TSP, how does the Held–Karp algorithm reduce the time complexity from brute force's O(n!) to O(2^n * n^2)? [closed]

I am having a hard time grasping the key idea of the Held–Karp algorithm: how does it reduce the time complexity?
Is it because it uses dynamic programming, so that time is saved by reading intermediate results from a cache, or because it removes some paths earlier in the calculation?
Also, is it possible to use a two-dimensional table to show the calculation for a simple TSP instance (3 or 4 cities)?
The dynamic programming procedure of the Held–Karp algorithm exploits the following property of the TSP: every subpath of a minimum-distance path is itself of minimum distance.
So essentially, instead of checking all solutions in a naive "top-down" brute force approach (every possible permutation), we use a "bottom-up" approach in which all the intermediate information required to solve the problem is developed once and only once. The initial step handles the smallest subpaths. Every time we move up to solve a larger subpath, we can look up the solutions to all the smaller subpath problems that have already been computed. The time savings come because the smaller subproblems are solved only once, and these savings compound at each larger subpath level. But no "paths are removed" from the calculation: at the end of the procedure, all of the subproblems have been solved. The obvious drawback is that a very large amount of memory may be required to store all the intermediate results.
In summary, the time savings of the Held–Karp algorithm come from the fact that it never re-solves any subset (combination) of the cities, whereas the brute force approach recomputes the solution to a given subset many times (albeit not necessarily consecutively within a given overall permutation).
The Wikipedia article on the Held–Karp algorithm contains a 2D distance matrix example and pseudocode: https://en.wikipedia.org/wiki/Held%E2%80%93Karp_algorithm
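To make the table concrete, here is a compact Held–Karp sketch in Python with a bitmask-keyed table (city 0 is fixed as the start; the 4-city distance matrix is made up for illustration):
```python
from itertools import combinations

def held_karp(dist):
    """Shortest closed tour through all cities in O(2^n * n^2) time.

    dp[(S, j)] = cost of the cheapest path that starts at city 0,
    visits exactly the cities in bitmask S, and ends at city j.
    """
    n = len(dist)
    # Base case: direct paths 0 -> j.
    dp = {(1 | (1 << j), j): dist[0][j] for j in range(1, n)}
    # Grow subsets one city at a time, reusing the smaller solutions.
    for size in range(3, n + 1):
        for subset in combinations(range(1, n), size - 1):
            S = 1 | sum(1 << c for c in subset)
            for j in subset:
                prev = S ^ (1 << j)
                dp[(S, j)] = min(dp[(prev, k)] + dist[k][j]
                                 for k in subset if k != j)
    full = (1 << n) - 1
    # Close the tour by returning to city 0.
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

# Made-up symmetric 4-city example.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
print(held_karp(dist))  # 23 (e.g. the tour 0 -> 1 -> 3 -> 2 -> 0)
```
The table has O(2^n * n) entries and each is filled in O(n) time, giving the O(2^n * n^2) bound; printing `dp` for a 3- or 4-city instance also answers the question about showing the calculation in a table.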

How can you compute the smallest number of queens that can be placed to attack each uncovered square? [closed]

This is a variant of a question from Elements of Programming Interviews and doesn't come with a solution.
How can you compute the smallest number of queens that can be placed to attack each uncovered square?
This is the problem of finding a minimum dominating set in a graph (here, the queen graph: http://mathworld.wolfram.com/QueenGraph.html), and the general problem is NP-hard. Even though this restriction (to this specific family of graphs) is unlikely to be NP-hard itself, you should not expect to find an efficient (polynomial) algorithm; as of today, nobody has found one.
As an interview question, I think an acceptable answer would be a backtracking algorithm. You can add small improvements, like cutting off the search once you have already placed (n-2) queens on the board.
For more information and pseudocode for the algorithm, as well as more sophisticated algorithms, I suggest reading:
Fernau, H. (2010). Minimum dominating set of queens: A trivial programming exercise? Discrete Applied Mathematics, 158(4), 308-318.
http://www.sciencedirect.com/science/article/pii/S0166218X09003722
The simplest way is probably exhaustive search with 1, 2, 3, ... queens until you find a solution. If you take the symmetries of the board into account, you will only need about 10^6 searches to confirm that 4 queens are not enough on the standard 8x8 board (at that point you could continue the same search until you find a solution with 5 queens, or alternatively use a greedy algorithm to find a 5-queen solution faster).
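A sketch of that exhaustive search, without the symmetry reductions, so it only scales to small boards (I am reading the covering rule as "every empty square must be attacked", which matches the queen-graph domination formulation):
```python
from itertools import combinations

def attacks(q, s):
    """True if a queen on square q attacks square s; squares are (row, col).

    Blocking by other queens is ignored, as in the queen-graph formulation.
    """
    dr, dc = q[0] - s[0], q[1] - s[1]
    return dr == 0 or dc == 0 or abs(dr) == abs(dc)

def min_dominating_queens(n):
    """Smallest k such that k queens leave no empty square unattacked.

    Plain exhaustive search over all C(n*n, k) placements for k = 1, 2, ...,
    so it is only feasible for small boards.
    """
    squares = [(r, c) for r in range(n) for c in range(n)]
    for k in range(1, n * n + 1):
        for queens in combinations(squares, k):
            occupied = set(queens)
            if all(s in occupied or any(attacks(q, s) for q in queens)
                   for s in squares):
                return k, queens
    return None

print(min_dominating_queens(5)[0])  # 3 (the known 5x5 domination number)
```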

Are all brute force algorithms exponential? [closed]

Every example I have seen of a brute force algorithm has exponential run time.
Is this a strict rule, i.e. are all brute force algorithms exponential in run time?
No, certainly not. Consider a linear search algorithm to search a sorted array. You can do better, but a linear search could be considered "brute force".
See https://en.wikipedia.org/wiki/Brute-force_search for further examples and explanation. A relevant quote from that page:
While a brute-force search is simple to implement, and will always find a solution if it exists, its cost is proportional to the number of candidate solutions - which in many practical problems tends to grow very quickly as the size of the problem increases.
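A quick sketch of that contrast, with the linear scan as the brute force baseline and binary search as the "better" option the answer alludes to:
```python
import bisect

def linear_search(a, x):
    """Brute force: check every element; O(n) even on sorted input."""
    for i, v in enumerate(a):
        if v == x:
            return i
    return -1

def binary_search(a, x):
    """Exploits sortedness; O(log n)."""
    i = bisect.bisect_left(a, x)
    return i if i < len(a) and a[i] == x else -1

a = [1, 3, 5, 7, 9, 11]
print(linear_search(a, 7), binary_search(a, 7))  # 3 3
```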
Nope. You can also do worse.
For example, finding the shortest tour (travelling salesman) by brute force is Omega(n!), which is not exponential: n! grows faster than c^n for any constant c.
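A sketch of that factorial brute force, trying all (n-1)! tours from a fixed start (the distance matrix is only an illustration):
```python
from itertools import permutations

def tsp_brute_force(dist):
    """Try every (n-1)! tour starting and ending at city 0; Omega(n!) time."""
    n = len(dist)
    best = float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        best = min(best, length)
    return best

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
print(tsp_brute_force(dist))  # 23
```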
