I've encountered this question on an informal test.
T(n) is a recurrence relation.
If the time complexity of an algorithm with input size of n is defined as:
T(1)=A
T(n)=T(n-1)+B when n>1
Where A and B are positive constant values.
Then the algorithm design pattern is best described as:
A. Decrease and Conquer - Correct answer
B. Divide and Conquer
C. Quadratic
D. Generate and Test
T(n) unrolls to T(n) = A + (n-1)B -> O(n)
What's the difference between answer A and B?
Why is the answer Decrease and Conquer?
From Wikipedia:
Under this broad definition, however, every algorithm that uses recursion or loops could be regarded as a "divide-and-conquer algorithm". Therefore, some authors consider that the name "divide and conquer" should be used only when each problem may generate two or more subproblems. The name decrease and conquer has been proposed instead for the single-subproblem class.
Now coming to your question:
What's the difference between answer A and B?
A (Decrease and Conquer) is used for algorithms which generate a single subproblem, while B (Divide and Conquer) is used for algorithms which generate two or more subproblems.
Why is the answer Decrease and Conquer?
From the given recurrence T(n) = T(n-1) + B, each step creates a single subproblem, namely T(n-1), hence we identify the algorithm as decrease and conquer.
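To make the distinction concrete, here is a small illustrative sketch in Python (my own example, not from the test): finding the maximum of a list by solving a single subproblem of size n-1 plus constant work, which is exactly the T(n) = T(n-1) + B shape.

def max_decrease_and_conquer(a, i=0):
    # Illustrative decrease and conquer: maximum of a[i:].
    # Each call does constant work (the B term) and recurses on a single
    # subproblem of size n - 1, so T(n) = T(n-1) + B, T(1) = A, i.e. O(n).
    if i == len(a) - 1:                           # base case: T(1) = A
        return a[i]
    rest = max_decrease_and_conquer(a, i + 1)     # the single subproblem T(n-1)
    return a[i] if a[i] > rest else rest          # constant combine work: + B

print(max_decrease_and_conquer([3, 1, 4, 1, 5, 9, 2, 6]))   # prints 9

A divide-and-conquer version of the same task would instead split the list into two halves and recurse on both, giving T(n) = 2T(n/2) + O(1).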
Related
When calculating the sum of an array, the cost of adding the values one by one is the same as that of the divide-and-conquer recursive calculation. Why does divide and conquer not show a performance benefit over adding one by one in this case?
And why is divide and conquer much better than comparing elements one by one when sorting?
What is the core difference between the two cases?
First of all, when calculating the sum of an array, if divide and conquer is used, the runtime recurrence will be as follows.
T(n) = 2 * T(n/2) + 1
Via the Master Theorem, this yields a runtime bound of O(n). While this is the same runtime bound as that of sequential addition, the bound is optimal: the output depends on every number in the input, and the input cannot even be read within a runtime bound smaller than O(n).
That being said, divide-and-conquer does not per se yield a better runtime bound than any other approach; it is merely a design paradigm which describes a certain approach to the problem.
Furthermore, sequential addition can also be interpreted as divide and conquer, especially if it is implemented recursively; the runtime recurrence would be
T(n) = T(n-1) + 1
which is also O(n).
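For concreteness, here is a small sketch in Python of both views (the function names are my own):

def dc_sum(a, lo=0, hi=None):
    # Divide-and-conquer sum: two subproblems of half the size,
    # so T(n) = 2T(n/2) + O(1), which solves to O(n).
    if hi is None:
        hi = len(a)
    if hi - lo == 0:
        return 0
    if hi - lo == 1:
        return a[lo]
    mid = (lo + hi) // 2
    return dc_sum(a, lo, mid) + dc_sum(a, mid, hi)

def sequential_sum(a):
    # Sequential addition, readable as decrease and conquer:
    # one subproblem of size n - 1, so T(n) = T(n-1) + 1, also O(n).
    total = 0
    for x in a:
        total += x
    return total

data = [1, 2, 3, 4, 5]
print(dc_sum(data), sequential_sum(data))   # 15 15

Both perform exactly n - 1 additions; the split changes the shape of the computation, not the amount of work.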
Assume you had a data set of size n and two algorithms that processed that data set in the same way. Algorithm A took 10 steps to process each item in the data set. Algorithm B processed each item in 100 steps. What would the complexity be of these two algorithms?
I have gathered from the question that algorithm A processes each item with 1/10th the cost of algorithm B, and using the graph provided in the accepted answer to the question What is a plain English explanation of "Big O" notation?, I am concluding that algorithm B has a complexity of O(n^2) and algorithm A a complexity of O(n), but I am struggling to draw conclusions beyond that without the implementation.
You need more than one data point before you can start drawing any conclusions about time complexity. The difference of 10 steps versus 100 steps between Algorithm A and Algorithm B could have many different causes:
Additive Constant difference: Algorithm A is always 90 steps faster than Algorithm B no matter the input. In this case, both algorithms would have the same time complexity.
Scalar Multiplicative difference: Algorithm A is always 10 times faster than Algorithm B no matter the input. In this case, both algorithms would have the same time complexity.
The case that you brought up, where B is O(n^2) and A is O(n)
Many, many other possibilities.
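To illustrate the first two cases in the list above, here are purely hypothetical step counts (not taken from your question) in Python; in both interpretations the constant-factor or constant-offset gap leaves the asymptotic class unchanged:

def steps_a(n):
    # Hypothetical Algorithm A: 10 steps per item.
    return 10 * n

def steps_b_additive(n):
    # Hypothetical Algorithm B under the additive reading: a fixed 90 extra steps.
    return 10 * n + 90

def steps_b_multiplicative(n):
    # Hypothetical Algorithm B under the multiplicative reading: 100 steps per item.
    return 100 * n

for n in (10, 1000, 1000000):
    # All three counts grow linearly with n, so all three describe O(n) algorithms.
    print(n, steps_a(n), steps_b_additive(n), steps_b_multiplicative(n))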
I would like to quote from Wikipedia
In mathematics, the minimum k-cut is a combinatorial optimization problem that requires finding a set of edges whose removal would partition the graph into k connected components.
It is said to be the minimum cut if the total weight (or, in the unweighted case, the number) of the removed edges is as small as possible.
For k = 2, it would mean finding the set of edges whose removal would disconnect the graph into 2 connected components.
However, the same Wikipedia article says that:
For a fixed k, the problem is polynomial time solvable in O(|V|^(k^2))
My question is: does this mean that minimum 2-cut is a problem that belongs to complexity class P?
The min-cut problem is solvable in polynomial time, and thus yes, it is true that it belongs to complexity class P. Another article related to this particular problem is the Max-flow min-cut theorem.
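As a quick practical illustration (my own addition), a global minimum 2-cut of a small weighted graph can be computed in polynomial time with the Stoer-Wagner algorithm; the sketch below assumes the networkx library and its stoer_wagner implementation:

import networkx as nx

# Small weighted, connected graph; edge weights must be non-negative.
G = nx.Graph()
G.add_weighted_edges_from([
    ("a", "b", 3), ("a", "c", 1),
    ("b", "c", 3), ("c", "d", 4),
    ("d", "e", 1), ("c", "e", 2),
])

# Stoer-Wagner returns the weight of a global minimum 2-cut and the
# two sides of the corresponding vertex partition.
cut_value, (side1, side2) = nx.stoer_wagner(G)
print(cut_value, side1, side2)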
First of all, the time complexity of an algorithm should be evaluated by expressing the number of steps the algorithm requires to finish as a function of the length of the input (see Time complexity). More or less formally: if you vary the length of the input, how does the number of steps required by the algorithm to finish vary?
Second of all, the time complexity of an algorithm is not exactly the same thing as the complexity class of the problem the algorithm solves. For one problem there can be multiple algorithms that solve it. The primality test problem (i.e. testing whether a number is prime or not) is in P, but some (most) of the algorithms used in practice are actually not polynomial.
Third of all, for most algorithms you'll find on the Internet, the time complexity is not evaluated by definition (i.e. not as a function of the length of the input, at least not expressed directly as such). Let's take the good old naive primality test algorithm (the one in which you take n as input and check for divisibility by 2, 3, ..., n-1). How many steps does this algorithm take? One way to put it is O(n) steps, and that is correct. So is this algorithm polynomial? Well, it is linear in n, so it is polynomial in n. But if you look at what time complexity means, the algorithm is actually exponential. First, what is the length of the input to your problem? If you provide the input n as an array of bits (the usual in practice), then the length of the input is, roughly speaking, L = log n. Your algorithm thus takes O(n) = O(2^(log n)) = O(2^L) steps, so it is exponential in L. The naive primality test is therefore at the same time linear in n and exponential in the length of the input L; both statements are correct. By the way, the AKS primality test algorithm is polynomial in the size of the input (thus, the primality test problem is in P).
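A minimal Python sketch of that naive test (my own illustration) makes the two ways of counting visible:

def is_prime_naive(n):
    # Naive primality test: trial division by 2, 3, ..., n - 1.
    # The loop performs O(n) divisions, linear in the numeric value n.
    # But if n arrives as L bits, n can be as large as 2^L, so the same
    # loop is O(2^L): exponential in the length of the input.
    if n < 2:
        return False
    for d in range(2, n):          # up to n - 2 candidate divisors
        if n % d == 0:
            return False
    return True

print(is_prime_naive(97))   # True

A 64-bit prime would already force on the order of 2^64 iterations here, even though the step count written as a function of n, O(n), looks polynomial.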
Fourth of all, what is P in the first place? Well, it is a class of problems that contains all decision problems that can be solved in polynomial time. What is a decision problem? A problem that can be answered with yes or no. Check these two Wikipedia pages for more details: P (complexity) and decision problems.
Coming back to your question, the answer is no (but pretty close to yes :p). The minimum 2-cut problem is in P if formulated as a decision problem (your formulation requires an answer that is not just a yes or no). At the same time, the algorithm that solves the problem in O(|V|^4) steps is a polynomial algorithm in the size of the input. Why? Well, the input to the problem is the graph (i.e. vertices, edges and weights); to keep it simple, let's assume we use an adjacency/weight matrix (i.e. the length of the input is at least quadratic in |V|). So solving the problem in O(|V|^4) steps is polynomial in the size of the input. The algorithm that accomplishes this is a proof that the minimum 2-cut problem (if formulated as a decision problem) is in P.
A class related to P is FP and your problem (as you formulated it) belongs to this class.
While studying for an exam in Algorithms and Data Structures, I stumbled upon a question: what does it mean if an algorithm has pseudo-polynomial time efficiency (analysis)?
I did a lot of searching but came up empty-handed.
It means that the running time of the algorithm is polynomial in the numeric value of the input, but that value can grow exponentially with the length of the input (its number of bits), so the running time is not polynomial in the input length.
For example, take the subset sum problem: we have a set S of n integers and we want to find a subset which sums up to t.
One way to solve it is to check the sum of every subset, which is O(P) where P is the number of subsets; since P = 2^n, that brute-force algorithm is plainly exponential. The standard dynamic programming algorithm instead runs in O(n * t) steps, which looks polynomial, but t is a number encoded with only about log t bits, so the running time is exponential in the length of the input. That is what makes the dynamic programming algorithm pseudo-polynomial.
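A minimal sketch of that dynamic program (my own code, assuming non-negative integers):

def subset_sum(S, t):
    # Pseudo-polynomial subset sum: does some subset of S sum to t?
    # The set of reachable sums has at most t + 1 elements, so the total
    # work is O(n * t): polynomial in the value t, but exponential in the
    # roughly log2(t) bits needed to write t down.
    reachable = {0}                 # sums reachable using a prefix of S
    for x in S:
        reachable |= {s + x for s in reachable if s + x <= t}
    return t in reachable

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True (4 + 5)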
I hope this introduction helps with understanding the Wikipedia article about it: http://en.wikipedia.org/wiki/Pseudo-polynomial_time :)
The time complexity of the closest pair problem is T(n) = 2T(n/2) + O(n). I understand that 2T(n/2) comes from the fact that the algorithm is applied to 2 sets of half the original's size, but why does the rest come out to O(n)? Thanks.
Check out http://en.wikipedia.org/wiki/Closest_pair_of_points_problem which mentions clearly where the O(n) comes from (Planar case).
Any divide-and-conquer algorithm consists of a recursive 'divide' component and a 'merge' component where the recursed results are put together. The linear O(n) component in closest pair comes from merging the results of the 'divide' step into a combined answer.
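To make that merge step concrete, here is a simplified Python sketch of the planar algorithm (my own code; for brevity the strip is re-sorted by y inside the combine step, which costs an extra log factor compared with the textbook O(n) merge, but the structure is the same):

import math

def closest_pair(points):
    # Classic divide-and-conquer closest pair (simplified sketch).
    # Expects a list of (x, y) tuples with at least two points.
    pts = sorted(points)                      # sort by x once
    return _closest(pts)

def _closest(pts):
    n = len(pts)
    if n <= 3:                                # base case: brute force
        return min(math.dist(p, q)
                   for i, p in enumerate(pts) for q in pts[i + 1:])
    mid = n // 2
    mid_x = pts[mid][0]
    d = min(_closest(pts[:mid]), _closest(pts[mid:]))    # the 2T(n/2) part
    # Combine/merge step: only points within d of the dividing line matter.
    strip = sorted((p for p in pts if abs(p[0] - mid_x) < d),
                   key=lambda p: p[1])
    for i, p in enumerate(strip):
        # Each strip point is compared with a constant number of successors
        # (at most 7 in the classic packing argument), hence the linear merge.
        for q in strip[i + 1:i + 8]:
            if q[1] - p[1] >= d:
                break
            d = min(d, math.dist(p, q))
    return d

print(closest_pair([(0, 0), (3, 4), (1, 1), (7, 7), (1.5, 1.2)]))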