Problem: Find best way to cut a rod of length n.
Each cut is integer length.
Assume that each length i rod has a price p(i).
Given: a rod of length n, and a list of prices p, which provides the price of each possible integer length between 1 and n.
Find best set of cuts to get maximum price.
Can use any number of cuts, from 0 to n−1.
There is no cost for a cut.
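For a concrete example (the usual textbook numbers, not from the original post): with n = 4 and prices p(1) = 1, p(2) = 5, p(3) = 8, p(4) = 9, the best strategy is to cut the rod into two pieces of length 2, for a total price of 5 + 5 = 10.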
Below I present a naive algorithm for this problem.
CUT-ROD(p, n)
  if n == 0
    return 0
  q = -infinity
  for i = 1 to n
    q = max(q, p[i] + CUT-ROD(p, n-1))
  return q
How can I prove, step by step, that this algorithm is exponential? I can see that it is exponential, but I'm not able to prove it.
Let's translate the code to C++ for clarity:
#include <algorithm>
#include <vector>

std::vector<int> prices; // prices[i] holds p[i + 1], the price of a rod of length i + 1

int cut_rod(int n) {
    if (n == 0) {
        return 0;
    }
    int q = -1; // prices are assumed non-negative
    int res = cut_rod(n - 1); // every loop iteration recurses on n - 1, so compute it once
    for (int i = 0; i < n; i++) {
        q = std::max(q, prices[i] + res);
    }
    return q;
}
Note: we cache the result of cut_rod(n-1) outside the loop, since every iteration would otherwise recompute it and needlessly inflate the complexity. Here we can see that cut_rod(n) calls cut_rod(n-1), which calls cut_rod(n-2), and so on down to cut_rod(0). For cut_rod(n), the function iterates over the array n times. Therefore the time complexity of the algorithm is O(n + (n-1) + (n-2) + (n-3) + ... + 1) = O(n(n+1)/2) = O(n^2).
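In recurrence form (a sketch of the same argument): the cached version satisfies T(n) = T(n-1) + c*n with T(0) = c, which unrolls to c*(n + (n-1) + ... + 1) = Θ(n^2).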
EDIT:
If we use the exact algorithm from the question (without the caching), its time complexity is O(n!), since cut-rod(n) calls cut-rod(n-1) n times, cut-rod(n-1) calls cut-rod(n-2) n-1 times, and so on. In recurrence form, T(n) = n*T(n-1) + c*n, so the time complexity is O(n * (n-1) * (n-2) * ... * 1) = O(n!).
I am unsure if this counts as a step-by-step solution, but it can be shown easily by induction/substitution. Just assume T(i) >= 2^i for all i < n and show that it holds for n: cut-rod(n) makes n >= 2 recursive calls, each costing at least T(n-1), so T(n) >= 2*T(n-1) >= 2*2^(n-1) = 2^n. (For the textbook version of CUT-ROD, where the recursive call is CUT-ROD(p, n-i), the recurrence T(n) = 1 + T(0) + T(1) + ... + T(n-1) gives exactly T(n) = 2^n.)
It's quite clear that there is an O(n^2) algorithm to choose the second largest number, and a tree-style algorithm with O(n * log(n)) time but extra space cost.
But is there an in-place algorithm with time complexity O(n * log(n)) to select the second largest number in an array/vector?
Yes, in fact you can do this with a single pass over the range without modifying it. Here's an example algorithm:
Let m and M be the second largest and largest elements, respectively. Initialize them to the smallest possible values the input range could contain.
For each number n in the range, the new second largest number depends on the relative order of n, m, and M. The 3 possible orderings are n < m < M, m < n < M, or m < M < n, and the new second largest element must be m, n, or M respectively. Essentially, n must be clamped between m and M.
The new largest number can't be m, so it must be the larger of n and M.
Here's a demonstration in C++:
#include <algorithm> // std::clamp (C++17) and std::max

int m = 0, M = 0; // assuming a range with non-negative values
for (int n : v)   // v is the input range, e.g. a std::vector<int>
{
    m = std::clamp(n, m, M); // new second largest: n clamped between m and M
    M = std::max(n, M);      // new largest: the larger of n and M
}
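For example, with v = {3, 1, 4, 1, 5} the loop ends with M = 5 and m = 4, i.e. the largest and second largest values.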
If you are looking for something very simple that runs in O(n):
#include <climits> // INT_MIN
#include <vector>
using std::vector;

int getSecondLargest(vector<int>& vec) {
    int firstLargest = INT_MIN, secondLargest = INT_MIN;
    for (auto i : vec) {
        if (i >= firstLargest) {
            if (firstLargest != INT_MIN) {
                secondLargest = firstLargest; // the old largest becomes the second largest
            }
            firstLargest = i;
        } else if (i > secondLargest) {
            secondLargest = i;
        }
    }
    return secondLargest; // INT_MIN if no second largest exists
}
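Note that with this version duplicates count (e.g. {5, 5} yields 5), and INT_MIN is returned when the vector has fewer than two elements.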
nth_element:
Pros:
If tomorrow you want not the second largest but, say, the fifth largest, you won't need many code changes. The algorithm I presented above won't help there.
Cons:
If you are just looking for the second largest, nth_element is overkill. It performs more swaps and/or writes than the algorithm I showed above.
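For reference, here is a minimal sketch of the nth_element approach (my illustration; kthLargest is a made-up name), assuming it is acceptable to partially reorder the vector:

#include <algorithm>  // std::nth_element
#include <functional> // std::greater
#include <vector>

// k-th largest via std::nth_element; k = 2 gives the second largest.
// Unlike the single-pass loop above, this partially reorders vec.
int kthLargest(std::vector<int>& vec, int k) {
    std::nth_element(vec.begin(), vec.begin() + (k - 1), vec.end(),
                     std::greater<int>());
    return vec[k - 1];
}

std::nth_element runs in O(n) on average.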
Why are you guys giving me O(n) when I am asking for O(nlogn)?
You can find various in-place O(nlogn) sorting algorithms. One of them is Block Sort.
No. I want to solve it tree-style, in O(nlogn), and in place. Do you have something like that?
No, that is not possible. When you say in-place, you can't use extra space that depends on n; constant extra space is fine. But tree style would require O(logn) extra space.
The code below computes whether s1 can be reduced to t and, if so, in how many ways.
Let's say the length of s1 is n and the length of t is m. The worst-case runtime of the code below is O(n^m) without memoization. Say we memoize the sub-problems of s1, i.e. the substrings that recur. The runtime is then O(m*n), since we recur m times for each n. Is this reasoning correct?
static int distinctSeq(String s1, String t) {
    if (s1.length() == t.length()) {
        if (s1.equals(t))
            return 1;
        else
            return 0;
    }
    int count = 0;
    for (int i = 0; i < s1.length(); i++) {
        // remove the i-th character and recurse on the shorter string
        String ss = s1.substring(0, i) + s1.substring(i + 1);
        count += distinctSeq(ss, t);
    }
    return count;
}
As @meowgoesthedog already mentioned, your initial solution has a time complexity of O(n!/m!):
If you are starting with s1 of length n, and n > m, then you can go into n different states by excluding one symbol from the original string.
You will continue doing this until your string has length m. The number of ways to get from length n to length m using the given algorithm is n*(n-1)*(n-2)*...*(m+1), which is effectively n!/m!.
For each string of length m formed by excluding symbols from the initial string of length n, you have to compare the string derived from s1 with t, which requires m operations (the length of the strings). So the complexity from the previous step should be multiplied by m, but since the big-O already contains a factorial, another factor of m won't change the asymptotic complexity.
Now about the memoization. If you add memoization, the algorithm transitions only to states that weren't already visited, which means that the task is to count the number of unique substrings of s1. For simplicity we will assume that all symbols of s1 are distinct. The number of states of length x is the number of ways to remove n-x symbols from s1 disregarding order, which is the binomial coefficient C(n,x) = n!/((n-x)! * x!).
The algorithm transitions through all lengths between n and m, so the overall time complexity is Sum(k=m...n, C(n,k)) = n!/((n-m)!*m!) + n!/((n-m-1)!*(m+1)!) + ... + n!/(0!*n!). Since we are counting asymptotic complexity, we are interested in the largest term of that sum, which is the one with k as close as possible to n/2. If m is less than n/2, then C(n, n/2) is present in the sum; otherwise C(n,m) is the largest element in it. So the complexity of the algorithm with memoization is O(max(C(n,n/2), C(n,m))).
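For reference, a minimal sketch of the memoized variant being analyzed (my C++ translation; memoizing on the substring itself is just one option, and the single global map assumes one fixed target t):

#include <string>
#include <unordered_map>

std::unordered_map<std::string, int> memo; // keyed on the substring only: assumes a single target t

int distinctSeq(const std::string& s1, const std::string& t) {
    if (s1.size() == t.size()) return s1 == t ? 1 : 0;
    auto it = memo.find(s1);
    if (it != memo.end()) return it->second; // state already visited
    int count = 0;
    for (std::size_t i = 0; i < s1.size(); ++i) {
        // remove the i-th character and recurse on the shorter string
        count += distinctSeq(s1.substr(0, i) + s1.substr(i + 1), t);
    }
    return memo[s1] = count;
}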
I am wondering: can the powerset problem be transformed and reduced to the knapsack problem? They seem identical to me. For example, the change-making problem can be viewed as a powerset problem: at every recursive stage I launch 2 recursive calls (one takes the i-th element, the other bypasses it). I can also solve it with dynamic programming, just like the knapsack problem. So this makes me wonder whether every powerset problem can be transformed into a knapsack problem. Is that correct?
The following are code fragments for the change-making problem: one brute force with O(2^N) time complexity, and one dynamic-programming version that runs in O(N * k) time, where k is the number of coin denominations (O(N^2) when k is on the order of N).
int count = 0; // counts the number of ways to make change

// O(2^N) time complexity
void bruteforce(int[] coins, int i, int N, String expr)
{
    if (i == coins.length) {
        if (N == 0)
            count++;
        return;
    }
    if (N >= coins[i])
        bruteforce(coins, i, N - coins[i], expr + " " + coins[i]); // take coins[i] (it can be reused)
    bruteforce(coins, i + 1, N, expr); // skip coins[i]
}
// O(N * coins.length) time complexity
int dynamicProgramming(int[] coins, int N)
{
    int[] dp = new int[N + 1]; // dp[j] = number of ways to make amount j
    dp[0] = 1;
    for (int i = 0; i < coins.length; i++)
        for (int j = coins[i]; j <= N; j++)
            dp[j] += dp[j - coins[i]];
    return dp[N];
}
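For example (a quick sanity check, not from the original post): with coins {1, 2, 5} and N = 5, both versions count 4 ways: 1+1+1+1+1, 1+1+1+2, 1+2+2, and 5.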
Finding the powerset (generating all subsets of a set) can't be done with a complexity better than O(2^n), because there are 2^n subsets and merely printing them takes exponential time.
Problems like subset sum, knapsack, or coin change are related to the powerset because you implicitly have to consider all subsets, but there is a big difference between them and the powerset problem: in these problems you are only counting some subsets, and you aren't required to explicitly generate those subsets. For example, if the problem asks you to output all the ways to change X dollars into coins, then you can't beat exponential time, because you have to generate all the desired subsets and there could be 2^n of them.
I have the pseudocode below that takes a given unsorted array of length size and finds the range by finding the max and min values in the array. I'm just learning about the various time-efficiency methods, but I think the code below is Θ(n), since each additional array element adds a fixed number of actions (two comparisons).
For example, ignoring the conditional assignments to max and min (the unsorted array is arbitrary, so how often they fire is unknown in advance), an array of length 2 requires only 5 actions in total (including the final range calculation). An array of length 4 uses only 9 actions, and an array of length 12 uses 25.
This all points me to Θ(n), as the relationship is linear. Is this correct?
Pseudocode:
// Traverse each element of the array, storing the max and min values
// Assuming int size exists that is size of array a[]
// Assuming array is a[]
min = a[0];
max = a[0];
for (i = 0; i < size; i++) {
    if (min > a[i]) {  // if the current min is greater than a[i],
        min = a[i];    // replace min with it
    }
    if (max < a[i]) {  // if the current max is smaller than a[i],
        max = a[i];    // replace max with it
    }
}
range = max - min; // the range is the largest value minus the smallest
You're right. It's O(n).
An easy way to tell in simple code (like the one above) is to count how many for() loops are nested, if any. Every "normal" loop (from i = 0 -> n) adds a factor of n.
[Edit2]: That is, if you have code like this:
array a[n]; //Array with n elements.
for(int i = 0; i < n; ++i){ //Happens n times.
for(int j = 0; j < n; ++j){ //Happens n*n times.
//something //Happens n*n times.
}
}
//Overall complexity is O(n^2)
Whereas
array a[n]; //Array with n elements.
for(int i = 0; i < n; ++i){ //Happens n times.
//something //Happens n times.
}
for(int j = 0; j < n; ++j){ //Happens n times.
//something //Happens n times.
}
//Overall complexity is O(2n) = O(n)
This is pretty rudimentary, but useful if someone has not taken an algorithms course.
The procedures inside your for() loops are irrelevant to the complexity, as long as each takes constant time.
[Edit]: This assumes that size actually means the size of array a.
Yes, this would be Θ(n). Your reasoning is a little skewed though.
You have to look at every item, so you're bounded above by a linear function. Conversely, you are also bounded below by a linear function (the same one, in fact), because you can't avoid looking at every element.
O(n) only requires that you bound from above; Ω(n) requires that you bound from below.
Θ(n) says you're bounded on both sides.
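For reference, the formal statement: f(n) = Θ(g(n)) if and only if there exist constants c1, c2 > 0 and n0 such that c1*g(n) <= f(n) <= c2*g(n) for all n >= n0.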
Let size be n; then it's clear that you always perform 2n comparisons, plus the single assignment at the end. So you always have at least 2n + 1 operations in this algorithm.
In the worst-case scenario, you also have 2n assignments, thus 2n + 1 + 2n = 4n + 1 = O(n).
In the best-case scenario, you have 0 assignments, thus 2n + 1 + 0 = 2n + 1 = Ω(n).
Therefore, both the best and worst case perform in linear time. Hence, Θ(n).
Yeah, this surely is an O(n) algorithm. I don't think you really need to drill down into the exact number of comparisons to conclude the complexity. Just look at how the number of comparisons changes with increasing input size: for O(n), the comparisons should increase linearly with the input; for O(n^2), they increase by some multiple of n; and so on.
This was an interview question that I was asked to solve: given an unsorted array, find two numbers and their sum in the array (that is, find three numbers in the array such that one is the sum of the other two). Note that I have seen the question about finding 2 numbers when the sum k is given; this question, however, expects you to find both the numbers and the sum in the array. Can it be solved in O(n), O(log n), or O(nlogn)?
There is a standard solution of going through each pair of integers and doing a binary search for their sum. Is there a better solution?
public static void findNumsAndSum(int[] l) {
    if (l == null || l.length < 3) {
        return;
    }
    java.util.Arrays.sort(l); // the binary search below requires a sorted array
    BinarySearch bs = new BinarySearch(); // helper providing binarySearch(array, key, from, to)
    for (int i = 0; i < l.length; i++) {
        for (int j = i + 1; j < l.length; j++) {
            int sum = l[i] + l[j];
            if (l[l.length - 1] < sum) {
                continue;
            }
            if (bs.binarySearch(l, sum, j + 1, l.length)) {
                System.out.println("Found the sum: " + l[i] + "+" + l[j]
                        + "=" + sum);
            }
        }
    }
}
This is very similar to the standard problem 3SUM, which many related questions cover.
Your solution is O(n^2 lg n); there are O(n^2) algorithms based on sorting the array, which work with slight modification for this variant. The best known lower bound is Ω(n lg n) (because you can use the problem to perform a comparison sort, if you're clever about it). If you can find a subquadratic algorithm or a tighter lower bound, you'll get some publications out of it. :)
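Here is a minimal sketch of that O(n^2) approach (my illustration; findSumTriple is a made-up name, and it assumes non-negative elements so that the sum is always the largest of the three numbers):

#include <algorithm>
#include <vector>

// Sort, then for each candidate sum a[k] (largest first), scan for a
// pair a[i] + a[j] == a[k] with two pointers. O(n^2) overall.
bool findSumTriple(std::vector<int> a) {
    std::sort(a.begin(), a.end());
    for (int k = static_cast<int>(a.size()) - 1; k >= 2; --k) {
        int i = 0, j = k - 1;
        while (i < j) {
            int s = a[i] + a[j];
            if (s == a[k]) return true; // a[i] + a[j] == a[k]
            if (s < a[k]) ++i; else --j;
        }
    }
    return false;
}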
Note that if you're willing to bound the integers to fall in the range [-u, u], there's a solution for the a + b + c = 0 problem in time O(n + u lg u) using the Fast Fourier Transform. It's not immediately obvious to me how to adjust it to the a + b = c problem, though.
You can solve it in O(n log(n)) as follows:
Sort your array ascendingly in O(n log(n)). Keep 2 indices pointing to the left and right ends of your array; let's call them i and j, i being the left one and j the right one.
Now calculate the sum array[i] + array[j].
If this sum is greater than k, decrease j by one.
If this sum is smaller than k, increase i by one.
Repeat until the sum equals k, or until i and j cross (in which case no such pair exists).
So with this algorithm you can find the solution in O(n log(n)), and it is pretty simple to implement.
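A quick sketch of that scan (my illustration; hasPairWithSum is a made-up name, and it assumes the target sum k is given, as in this answer):

#include <algorithm>
#include <vector>

// Returns true if some pair in a sums to k.
bool hasPairWithSum(std::vector<int> a, int k) {
    std::sort(a.begin(), a.end()); // O(n log n)
    int i = 0, j = static_cast<int>(a.size()) - 1;
    while (i < j) {                // O(n) scan
        int sum = a[i] + a[j];
        if (sum == k) return true;
        if (sum > k) --j;          // sum too big: move the right index left
        else ++i;                  // sum too small: move the left index right
    }
    return false;
}

The sort dominates, so the whole thing is O(n log(n)).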
Sorry. It seems that I didn't read your post carefully enough ;)