My friend and I discussed the following algorithm problem:
"Describe a recursive algorithm for finding the maximum element in an array A of n
elements. What is your running time and space usage?"
We concluded that it has O(n) time usage. According to our reasoning, F(n) compares A[n] with F(n-1); at the base case of the recursion, it compares A[0] and A[1] and returns the bigger one, which is then compared with A[2]. As the recursion proceeds, it finally returns the maximum element of the array.
In each of the n recursive calls it performs only one comparison, so we guessed it has O(n) time usage.
My question is: we aren't sure about our solution, so we would welcome any other comments about this algorithm and our reasoning. Thank you.
Your approach for finding the time complexity is fine if the array contains integers. In the case of numbers, comparing two numbers can be considered a unit operation, and while iterating over the array to find the maximum value, this operation is performed n times. Hence O(n).
But if the array contains complex datatypes, say strings, then comparing two strings cannot be considered a unit operation. To compare strings you may have to iterate over each character of the string, so the time complexity of the algorithm may also start to depend on the length of the strings in your array. Similarly, for other datatypes comparing two objects may not be a unit operation. But in your case it looks like the array contains numbers, so you are good.
Yes, you are correct; it is in fact O(n). Here is how you can show it quite simply.
The basic operation of the algorithm is the comparison, and in each step of the recursion the comparison is done only once.
So you can say
m(n) = m(n-1) + 1
m(n-1) = m(n-2) + 1, so m(n) = m(n-2) + 2
m(n-2) = m(n-3) + 1, so m(n) = m(n-3) + 3
Generalizing, we get
m(n) = m(n-i) + i
Now in your base case you do no comparisons (the base case is no elements left, so you return the current largest). You can write this as
m(0) = 0
Substituting i = n into the recurrence to reach the base case, we get
m(n) = m(0) + n
But m(0) = 0, so we get
m(n) = n
Hence your algorithm is O(n). There are other ways to prove this too. Even without a mathematical proof, you can logically say your algorithm is O(n) since it does only one basic operation per recursive step, and the algorithm always recurses n steps irrespective of the input.
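For concreteness, here is one natural way to write such a recursive maximum in Python (the function name and the extra n parameter are my own choices; this version makes n-1 comparisons, matching the question's formulation rather than the accumulator variant analyzed above):

    def recursive_max(A, n=None):
        # Maximum of A[0..n-1]; exactly one comparison per recursive step.
        if n is None:
            n = len(A)
        if n == 1:                     # base case: a single element is the maximum
            return A[0]
        rest = recursive_max(A, n - 1)                 # max of the first n-1 elements
        return A[n - 1] if A[n - 1] > rest else rest   # the one comparison

Note that the recursion goes n levels deep, so on top of the O(n) running time the space usage is O(n) for the call stack, which answers the second half of the exercise.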
I am new to algorithms. I found this question and was stuck on it for half a day. My guess for the number of key comparisons is T(n) = 2n - 2, i.e. O(n). Any advice? I'd appreciate it.
Given a sorted array of n elements, A[0..n-1], and a constant C, we want to determine if there is a pair of elements A[i] and A[j], i != j, such that A[i] + A[j] = C. (We want a Boolean function which returns a TRUE/FALSE answer.)
(a) Outline how this problem may be solved by using the binary-search algorithm, BS(A, left, right, key). (Do not give the code for BS. It is a given function, which you call.) Analyze the time complexity of this approach.
(b) Describe a more efficient O(n) algorithm to solve this problem. Give the pseudo-code. Explain how the algorithm works, and provide a numerical illustration.
a) You could iterate through all of the elements in the array (in O(n) time) and call binary search to find the number C - A[i] in the same array (in O(log n) time). If such a number exists, the two numbers sum to C. The total running time of this approach is O(n log n).
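A sketch of that idea in Python, using the standard library's bisect in place of the exercise's given BS function; the handling of the i != j requirement is my own addition, since the description above glosses over it:

    import bisect

    def has_pair_binary_search(A, C):
        # Part (a): for each A[i], binary-search the sorted array for C - A[i].
        # n iterations, each doing an O(log n) search: O(n log n) total.
        n = len(A)
        for i in range(n):
            target = C - A[i]
            j = bisect.bisect_left(A, target)   # leftmost index with A[j] >= target
            if j < n and A[j] == target:
                # Enforce i != j: accept only a *different* index; with
                # duplicates, a neighbor holding the same value also works.
                if j != i or (j + 1 < n and A[j + 1] == target):
                    return True
        return False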
b) You could use a set to store all of the values in the array as you iterate through the array and then for each number you come across, you check to see if the set contains the number C-A[i]. You then add the number A[i] to the set. Checking to see if a set contains a number is O(1) (assuming it's a hashset), and iterating through the array is O(n), giving a final runtime of O(n).
Pseudocode (written here as runnable Python):

    def has_pair_with_sum(A, C):
        seen = set()               # set S of values seen so far
        for x in A:                # for each number in list A
            if C - x in seen:      # does S contain C - A[i]?
                return True
            seen.add(x)            # add A[i] to the set
        return False
EDIT: I see you are using T(n). In the programming world, everybody uses big-O notation, which is a crucial subject to understand if you want to become proficient in algorithms. I suggest you learn it before you progress to anything else.
Can you point me to an example of a divide-and-conquer algorithm that runs in CONSTANT time? I'm in an "OMG! I cannot think of any such thing" kind of situation. Point me to something, please. Thanks.
I know that an algorithm that follows the recurrence T(n) = 2T(n/2) + n would be merge sort. We're dividing the problem into 2 subproblems, each of size n/2, and then taking n time to conquer everything back into one sorted array.
I also know that T(n) = T(n/2) + 1 would be binary search.
But what is T(n) = 1?
For a divide-and-conquer algorithm to run in constant time, it needs to do no more than a fixed amount of work on any input. Therefore, it can make at most a fixed number of recursive calls on any input, since if the number of calls was unbounded, the total work done would not be a constant. Moreover, it needs to do a constant amount of work across all those recursive calls.
This eliminates basically any reasonable-looking recurrence relation. Anything of the form
T(n) = aT(n / b) + O(n^k)
is immediately out of the question, because the number of recursive calls would grow as a function of the input n.
You could make some highly contrived divide-and-conquer algorithms that run in constant time. For example, consider this problem:
Return the first element of the input array.
This could technically be solved with divide-and-conquer by noting that
The first element of a one-element array is equal to itself.
The first element of an n-element array is the first element of the subarray of just the first element.
The recurrence is then
T(n) = T(1) + O(1)
T(1) = 1
As you can see, this is a very odd-looking recurrence, but it does work out.
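Written out, such a contrived algorithm might look like this (a deliberately silly sketch; the names are my own):

    def first_element(A):
        # Contrived divide and conquer: "divide" an n-element array into the
        # one-element subarray holding its first element, and recurse on it.
        if len(A) == 1:                  # base case: T(1) = O(1)
            return A[0]
        return first_element(A[:1])      # one fixed-size call: T(n) = T(1) + O(1)

The recursion is at most two calls deep regardless of n, which is why the total work stays constant.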
I've never heard of anything like this coming up in practice, but if I think of anything I'll try to update this answer with details. (A note: I'm not expecting to ever update this answer. ^_^)
Hope this helps!
I'm at a loss as to how to calculate the best-case running time, Omega(f(n)), for this algorithm. In particular, I don't understand how there could be a "best case" at all: this seems like a very straightforward algorithm. Can someone give me some hints / a general approach for solving these types of questions?
    for i = 0 to n do
        for j = n to 0 do
            for k = 1 to j-i do
                print(k)
Have a look at this question and its answers, Still sort of confused about Big O notation; it may help you sort out some of the confusion between worst case, best case, Big O, Big Omega, etc.
As to your specific problem, you are asked to find an (asymptotic) lower bound on the time complexity of the algorithm (you should also clarify whether you mean big Omega or little omega).
Otherwise you are right, this problem is quite simple. You just have to think about how many times print(k) will be executed for a given n.
You can see that the first loop runs n times, and the second one also runs n times.
For the third one, you see that if i = 1, then j = n-1, thus k runs from 1 to n-1-1 = n-2, which makes me wonder whether your example is correct and whether it should be i-j instead.
In any case, the innermost loop will execute at least n/2 times. It is n/2 because you subtract j-i, and j decreases while i increases, so once both reach n/2 the difference becomes 0 and then negative, at which point the innermost loop no longer executes.
Therefore print(k) will execute n*n*n/2 times, which is Omega(n^3), as you can easily verify from the definition.
Just beware, as @Yves points out in his answer, that this all assumes print(k) runs in constant time, which is probably what was meant in your exercise.
In the real world it wouldn't be true, because printing the number also takes time, and that time increases with log(n) if print(k) prints in base 2 or base 10 (it would be n for base 1).
Otherwise, in this case the best case is the same as the worst case. There is just one input of size n, so you cannot find some "best instance" of size n and some "worst instance" of size n. There is just the one instance of size n.
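If you want to sanity-check the cubic growth empirically, a quick counter (a hypothetical helper, not part of the exercise) does it:

    def count_prints(n):
        # Count how many times print(k) would run in the triple loop above.
        count = 0
        for i in range(0, n + 1):          # for i = 0 to n
            for j in range(n, -1, -1):     # for j = n to 0
                count += max(0, j - i)     # "for k = 1 to j-i" runs j-i times
        return count

    # count_prints(2*n) / count_prints(n) approaches 8 for large n,
    # which is exactly the signature of cubic growth.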
I do not see anything in that function that varies between different executions except for n.
Aside from that, as far as I can tell, there is only one case, and it is both the best case and the worst case at the same time.
It's O(n^3), in case you're wondering.
If this is a sadistic exercise, the complexity is probably Theta(n^3 log(n)), because of the triple loop but also due to the fact that the number of digits to be printed increases with n.
As n is the only factor in the behavior of your algorithm, the best case is n being 0: one initialization, one test, and then done...
Edit:
The best case describes the algorithm's behavior under optimal conditions. You provided a code snippet that depends on n. What is the best case? n being zero.
Another example: what is the best case of performing a linear search on an array? It is either that the key matches the first element, or that the array is empty.
Why is the best-case time complexity of top-down merge sort O(n log n)?
I think the best case of top-down merge sort is 1: you only need to compare one time.
What about the time complexity of bottom-up merge sort in the worst, best, and average cases?
One more question: why does each iteration take exactly O(n)? Could someone help with that?
Why is the best-case time complexity of top-down merge sort O(n log n)?
Because at each level of the recursion you split the array into two sublists and recursively invoke the algorithm. In the best case you split it exactly in half, so each recursive call reduces the problem to half the original size. You need log_2(n) levels, and each level takes exactly O(n) (a level processes all the sublists at that depth, whose total size is still n), so in total it is O(n log n).
However, with a simple preprocessing pass that checks whether the list is already sorted, the best case can be reduced to O(n).
Since checking whether a list is sorted is itself O(n), it cannot be reduced to O(1). Note that the "best case" is the best case for general n, not for one specific input size.
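A minimal sketch of that preprocessing idea (my own code, not taken from the answer; _merge is a standard merge of two sorted lists):

    def _merge(left, right):
        # Standard merge of two sorted lists into one sorted list.
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        out.extend(left[i:]); out.extend(right[j:])
        return out

    def merge_sort_with_precheck(A):
        # O(n) pre-check: an already-sorted input short-circuits to O(n) total.
        if all(A[i] <= A[i + 1] for i in range(len(A) - 1)):
            return list(A)
        mid = len(A) // 2
        return _merge(merge_sort_with_precheck(A[:mid]),
                      merge_sort_with_precheck(A[mid:]))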
What about the time complexity of bottom-up merge sort in the worst, best, and average cases?
The same preprocessing approach gives an O(n) best case for bottom-up merge sort as well. Without it, both the worst case and the best case of bottom-up merge sort are O(n log n), since in this approach the list is always divided into two lists of equal length (up to a difference of 1).
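For reference, a minimal bottom-up version, reusing the _merge helper from the sketch above (again my own code, not from the answer):

    def bottom_up_merge_sort(A):
        # Merge runs of width 1, 2, 4, ...; Theta(n log n) sorted or not.
        A = list(A)
        width = 1
        while width < len(A):
            for lo in range(0, len(A), 2 * width):
                mid = min(lo + width, len(A))
                hi = min(lo + 2 * width, len(A))
                A[lo:hi] = _merge(A[lo:mid], A[mid:hi])
            width *= 2
        return A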
1) Given an array of integers (negative and positive), what is the most efficient algorithm to return the maximum consecutive (contiguous) sum?
a) I thought to solve this with dynamic programming, but the complexity came out to O(n^2). Is there another way?
b) What if we were given an infinite input (stream) of integers? Is there a way to output the current maximum consecutive sum? I guess not.
2) Given: an array of segments [start, end] (they can overlap), ordered ascending by start point, and a point.
What is the most efficient algorithm to return a segment that contains this point? Or all segments that contain this point?
I thought to use binary search to hit the first segment that starts before this point and then try to traverse right and left.
Any other ideas?
For 1) there is an algorithm that works in O(n).
For 2) I think your approach is not bad (as long as you can't assume ordering w.r.t. ending points).
1) As long as the sum doesn't drop below zero, it's always better to continue the consecutive summation. So you just pass through the array once from left to right (i.e. you have a linear-time algorithm) and remember the current consecutive sum and the maximum consecutive sum so far, updating the latter whenever the current sum gets bigger than the max sum.
So at any point of the array traversal, you can say what the max sum so far is. Hence you can use this algorithm for an (infinite) input stream, too.
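That single-pass scan (often attributed to Kadane) in a minimal sketch, with names of my own choosing:

    def max_consecutive_sum(A):
        best = A[0]          # maximum consecutive sum so far; assumes A is non-empty
        current = 0          # sum of the consecutive run ending at the current element
        for x in A:
            current += x
            best = max(best, current)
            if current < 0:  # a negative running sum never helps: restart the run
                current = 0
        return best

Because best is up to date after every element, the same loop can run over a stream and report the answer so far at any time, which addresses part b) of the question.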
2) Yes, binary search sounds good. But if I understand the question correctly, you can start with the right-most segment whose start is closest to (but not after) the point and then just traverse the segments to the left. Of course, the worst-case runtime is still linear in the number of segments, but the average should be logarithmic.
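A sketch of that approach, assuming segments is a list of (start, end) tuples sorted ascending by start (the names are my own):

    import bisect

    def segments_containing(segments, p):
        # Binary-search for the last segment starting at or before p,
        # then walk left collecting every segment that reaches p.
        starts = [s for s, _ in segments]
        i = bisect.bisect_right(starts, p)   # first index with start > p
        return [(s, e) for s, e in reversed(segments[:i]) if e >= p]

Note that the leftward walk cannot stop early in general: since segments may overlap, a segment far to the left can still be long enough to cover the point, which is exactly why the worst case stays linear.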