I have a piece of code. I think the complexity of the code is O(n), but I'm not sure, so can you please confirm?
int low = 0;
int high = array.length - 1;
while (low < high)
{
    while (low < high && array[low] != 0)   // skip non-zeros from the left, staying in bounds
        low++;
    while (low < high && array[high] == 0)  // skip zeros from the right, staying in bounds
        high--;
    int temp = array[low];                  // swap the zero with the non-zero
    array[low++] = array[high];
    array[high--] = temp;
}
Your program keeps increasing low and decreasing high until they meet, so it's O(n).
Your program appears to be a partition algorithm (moving all the zeros to one end of the array), which is O(N), or linear time.
At the end of your program, the number of times you increment low plus the number of times you decrement high is going to be at most the length of the array, which is O(N).
A famous algorithm with a similar structure to this is the partition step in Quicksort. You might be able to find more detailed analysis if you search for that.
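For illustration, here is a minimal sketch of a Hoare-style partition (my own sketch, not code from the question) with the same converging-pointer structure:

// Partitions a[lo..hi] around a pivot value; elements <= pivot end up
// on the left, elements >= pivot on the right. Returns the split point.
static int partition(int[] a, int lo, int hi) {
    int pivot = a[lo + (hi - lo) / 2];
    int i = lo - 1, j = hi + 1;
    while (true) {
        do { i++; } while (a[i] < pivot);  // advance from the left
        do { j--; } while (a[j] > pivot);  // advance from the right
        if (i >= j) return j;              // pointers met or crossed
        int temp = a[i]; a[i] = a[j]; a[j] = temp;
    }
}

Just like your loop, every step moves i right or j left, so the whole pass is O(n).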
O(n)
You might be tempted to call this O(n^2) because of the nested while loops, but the inner whiles could be replaced with if conditions; there are not really two independent passes over the data, the inner loops just continue the same single sweep.
You can also write it like this:
while (low < high)
{
    if (arr[low] != 0)
        low++;
    if (arr[high] == 0)
        high--;
    // Rest of the things
}
Here the complexity is clearly O(n), and the code does essentially the same thing, so your code also has O(n) complexity.
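Spelled out, one way to fill in the rest (my sketch, not necessarily what the answer intended):

while (low < high) {
    if (arr[low] != 0)
        low++;                   // left element is already non-zero, keep it
    else if (arr[high] == 0)
        high--;                  // right element is already zero, keep it
    else {
        int temp = arr[low];     // arr[low] == 0 and arr[high] != 0: swap them
        arr[low++] = arr[high];
        arr[high--] = temp;
    }
}

Every iteration increments low or decrements high, so the loop body runs at most n times.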
Not necessarily
If everything in the {} is O(1), then yes, it is O(n):
int low = 0;
int high = array.length - 1;
while (low < high)
{
}
If something in the {} were O(n), then it would be O(n^2).
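For example (a hypothetical variation, not the code from the question), if the body rescanned the whole array on every iteration, the total would be O(n) iterations times O(n) work per iteration, i.e. O(n^2):

while (low < high) {
    int zeros = 0;
    for (int k = 0; k < array.length; k++)  // O(n) work inside the loop body
        if (array[k] == 0)
            zeros++;
    low++;                                  // the outer loop itself still runs O(n) times
}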
Related
I'm trying to solve question 11.1 in Elements of Programming Interviews (EPI) in Java: Search a Sorted Array for First Occurrence of K.
The problem description from the book:
Write a method that takes a sorted array and a key and returns the index of the first occurrence of that key in the array.
The solution they provide in the book is a modified binary search algorithm that runs in O(log n) time. I wrote my own algorithm, also based on a modified binary search, with a slight difference: it uses recursion. The problem is that I don't know how to determine the time complexity of my algorithm. My best guess is that it runs in O(log n) time, because each time the function is called it reduces the size of the candidate values by half. I've tested my algorithm against the 314 EPI test cases that are provided by the EPI Judge, so I know it works; I just don't know the time complexity. Here is the code:
public static int searchFirstOfKUtility(List<Integer> A, int k, int Lower, int Upper, Integer Index)
{
    while (Lower <= Upper) {
        int M = Lower + (Upper - Lower) / 2;  // midpoint, written to avoid overflow
        if (A.get(M) < k)
            Lower = M + 1;
        else if (A.get(M) == k) {
            Index = M;
            if (Lower != Upper)
                // keep searching the left half for an earlier occurrence
                Index = searchFirstOfKUtility(A, k, Lower, M - 1, Index);
            return Index;
        }
        else
            Upper = M - 1;
    }
    return Index;
}
Here is the code that the test cases call to exercise my function:
public static int searchFirstOfK(List<Integer> A, int k) {
    Integer foundKey = -1;
    return searchFirstOfKUtility(A, k, 0, A.size() - 1, foundKey);
}
So, can anyone tell me what the time complexity of my algorithm would be?
Assuming that passing arguments is O(1) instead of O(n), performance is O(log(n)).
The usual theoretical approach for analyzing recursion is to apply the Master Theorem. It says that if the performance of a recursive algorithm follows the recurrence:
T(n) = a T(n/b) + f(n)
then there are 3 cases. In plain English they correspond to:
Performance is dominated by all the calls at the bottom of the recursion, so is proportional to how many of those there are.
Performance is equal between each level of recursion, and so is proportional to how many levels of recursion there are, times the cost of any layer of recursion.
Performance is dominated by the work done in the very first call, and so is proportional to f(n).
You are in case 2. Each recursive call costs the same, and so performance is dominated by the fact that there are O(log(n)) levels of recursion times the cost of each level. Assuming that passing a fixed number of arguments is O(1), that will indeed be O(log(n)).
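Concretely, for your function there is a = 1 recursive call per level, the candidate range is halved so b = 2, and the work per call is constant, f(n) = O(1). The recurrence is therefore T(n) = T(n/2) + O(1), which solves to T(n) = O(log(n)), assuming argument passing is O(1).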
Note that this assumption is true for Java because you don't make a complete copy of the array before passing it. But it is important to be aware that it is not true in all languages. For example, I recently did a bunch of work in PL/pgSQL, where arrays are passed by value, meaning that your algorithm would have been O(n log(n)).
I've written this code for bubble sort. Can someone explain the time complexity? It works similarly to two nested for loops, but I still want to confirm the time complexity.
public int[] sortArray(int[] inpArr)
{
    int i = 0;
    int j = 0;
    while (i != inpArr.length - 1 && j != inpArr.length - 1)
    {
        if (inpArr[i] > inpArr[i + 1])
        {
            // out of order: swap and re-compare at the same position
            int temp = inpArr[i];
            inpArr[i] = inpArr[i + 1];
            inpArr[i + 1] = temp;
        }
        else
        {
            i++;
        }
        if (i == inpArr.length - 1)
        {
            // one full pass finished: count it in j and start over
            j++;
            i = 0;
        }
    }
    return inpArr;
}
This would have O(n^2) time complexity. In fact, it is both O(n^2) and Θ(n^2).
Look at the logic of your code. You are performing the following:
Loop through the input array
If the current item is bigger than the next, switch the two
If that is not the case, increase the index (and essentially check the next item, so repeat steps 1-2)
Once your index reaches length-1 of the input array, i.e. it has gone through the entire array, the index is reset (the i=0 line), j is increased, and the process restarts.
This essentially ensures that the given array is traversed once per pass and that there are roughly n passes in total, meaning that you have a WORST-CASE (big O) time complexity of O(n^2), and given this code, your AVERAGE (Θ) time complexity is Θ(n^2) as well.
One note on notation: best-case bounds are written with Ω (big Omega), not lambda. A bubble sort that exits early after a swap-free pass has an Ω(n) best case, but this code has no early exit, so even an already-sorted array costs Θ(n^2) here; an n*lg(n) best case is not achievable with this code.
Your time complexity is O(n^2) in the worst case. A bubble sort with an early-exit flag achieves O(n) in the best case, but as written your loop always completes all of its passes, so the best case still performs on the order of n^2 comparisons; it just does fewer swaps. This is because you're essentially doing the same thing as having two nested for loops. If you're interested in algorithmic efficiency, I'd recommend checking out the pre-existing library sorts; the computer scientists who work on these things really are intense. Java's Arrays.sort() uses a dual-pivot Quicksort for primitive arrays and TimSort, a merge-sort variant originally developed for Python by Tim Peters, for object arrays. The disadvantage of your (and every) bubble sort is that it's really inefficient for big, disordered arrays.
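For example, a minimal self-contained demo (the sample values are mine):

import java.util.Arrays;

public class SortDemo {
    public static void main(String[] args) {
        int[] data = {5, 1, 4, 2, 8};
        Arrays.sort(data);                          // O(n log n) library sort
        System.out.println(Arrays.toString(data));  // prints [1, 2, 4, 5, 8]
    }
}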
My teacher gave me the code below to find out its average complexity:
#include <stdio.h>

int function(int a[], int n)
{
    int k = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            k = k + ((a[i] * a[i] + a[j] * a[j]) % 5 == 0);
    return k;
}

int main(void)
{
    int vector[] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9};
    int a = function(vector, 10);
    printf("%d\n", a);
    return 0;
}
By unrolling the loops I found out that the code executes n*(n+1)/2 times, and I conclude that the worst case is O(n^2), because n*(n+1)/2 < c*n^2 holds for some constant c and all n > n0. I know that the definition of average complexity is quite similar, but I find it quite difficult to calculate. I want to know what the complexity is in this case, and whether there are standardized methods for calculating these types of problems (e.g. nested loops with dependencies between the iterators).
In computational complexity theory, the average-case complexity of an algorithm is the amount of some computational resource (typically time) used by the algorithm, averaged over all possible inputs (see here for the definition).
In your case, you have already figured out that your program executes n*(n+1)/2 steps for an input of size n. Notice that this count does not depend on the contents of the input at all, only on its size, so every input of size n costs exactly the same. The average over all inputs of size n is therefore also n*(n+1)/2, and it is easy to get the O(n^2) solution.
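To see where n*(n+1)/2 comes from: for each i, the inner loop evaluates its condition for j = i+1, ..., n, which is n-i times, so the total is n + (n-1) + ... + 1 = n*(n+1)/2.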
In average-case analysis, we take all possible inputs, calculate the computing time for each of them, sum all the calculated values, and divide the sum by the total number of inputs.
There is only one possibility in your algorithm: for every input of size n, your algorithm runs in n*(n+1)/2 time.
Average time complexity is O(n*(n+1)/2) = O(n^2).
Consider two functions that accept as a parameter an unsigned integer and returns the number of digits of this number. One function is recursive and the other is non-recursive.
In terms of complexity, which implementation is better?
The language used is C/C++.
Here is the non-recursive function:
int nbOfDigitsNR(int nb) {
    int i = 0;
    while (nb != 0) {
        nb = nb / 10;  // drop the last digit
        ++i;
    }
    return i; // i is the number of digits
}
and the recursive function:
int nbOfDigitsR(int nb) {
    static int i;  // note: static, so i is shared across calls and is not reset between top-level calls
    if (nb != 0) {
        i = i + 1;
        nbOfDigitsR(nb / 10);
    }
    return i;
}
I suggest that the time complexity is the same, O(n), and the space complexity is different: O(n) for the recursive version, O(1) for the non-recursive one.
Should one solution be recursive and the other iterative, the time complexity should be the same, if of course it is the same algorithm implemented twice, once recursively and once iteratively.
The difference comes in terms of space complexity and in how the programming language, in your case C++, handles recursion.
Your example illustrates exactly that. Both functions have the same time complexity, while the recursive one has bigger space complexity, since C++ allocates a stack frame for each recursive call.
You are correct about the time and space complexities if n represents the number of digits. Should n represent the value of the integer, replace it with lg(n).
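For example, nb = 12345 has 5 digits and either function divides by 10 five times; in general a positive integer v has floor(log10(v)) + 1 digits, so O(number of digits) is the same as O(lg(v)).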
Saying that a function is recursive or non-recursive doesn't tell us anything about its complexity.
It could be equal, or one of them could have a lower complexity; it entirely depends on the algorithm.
I have a blue and a gray car. Which one is faster?
The question is rather simple, but I just can't find a good enough answer. On the most upvoted SO question regarding the big-O notation, it says that:
For example, sorting algorithms are typically compared based on comparison operations (comparing two nodes to determine their relative ordering).
Now let's consider the simple bubble sort algorithm:
for (int i = arr.length - 1; i > 0; i--) {
    for (int j = 0; j < i; j++) {
        if (arr[j] > arr[j+1]) {
            switchPlaces(...);
        }
    }
}
I know that the worst case is O(n²) and the best case is O(n), but what is n exactly? If we attempt to sort an already sorted array (best case), we would end up doing nothing, so why is it still O(n)? We are still looping through two for-loops, so if anything it should be O(n²). n can't be the number of comparison operations, because we still compare all the elements, right?
When analyzing the Big-O performance of sorting algorithms, n typically represents the number of elements that you're sorting.
So, for example, if you're sorting n items with Bubble Sort, the runtime performance in the worst case will be on the order of O(n²) operations. This is why Bubble Sort is considered to be an extremely poor sorting algorithm, because it doesn't scale well with increasing numbers of elements to sort. As the number of elements to sort increases linearly, the worst-case runtime increases quadratically.
Here is an example graph demonstrating how various algorithms scale in terms of worst-case runtime as the problem size N increases. The dark-blue line represents an algorithm that scales linearly, while the magenta/purple line represents a quadratic algorithm.
Notice that for sufficiently large N, the quadratic algorithm eventually takes longer than the linear algorithm to solve the problem.
Graph taken from http://science.slc.edu/~jmarshall/courses/2002/spring/cs50/BigO/.
See Also
The formal definition of Big-O.
I think two things are getting confused here, n and the function of n that is being bounded by the Big-O analysis.
By convention, for any algorithm complexity analysis, n is the size of the input if nothing different is specified. For any given algorithm, there are several interesting functions of the input size for which one might calculate asymptotic bounds such as Big-O.
The commonest such function for a sorting algorithm is the worst case number of comparisons. If someone says a sorting algorithm is O(n^2), without specifying anything else, I would assume they mean the worst case comparison count is O(n^2), where n is the input size.
Another interesting function is the amount of work space, i.e. space in addition to the array being sorted. Bubble sort's work space is O(1), constant space, because it only uses a few variables regardless of the array size.
Bubble sort can be coded to do only n-1 array element comparisons in the best case, by finishing after any pass that does no exchanges. See this pseudo code implementation, which uses swapped to remember whether there were any exchanges. If the array is already sorted the first pass does no exchanges, so the sort finishes after one pass.
n is usually the size of the input. For array, that would be the number of elements.
To see the different cases, you would need to change the algorithm:
for (int i = arr.length - 1; i > 0; i--) {
    boolean swapped = false;
    for (int j = 0; j < i; j++) {
        if (arr[j] > arr[j+1]) {
            switchPlaces(...);
            swapped = true;
        }
    }
    if (!swapped) {
        break;
    }
}
Your algorithm's best/worst cases are both O(n^2), but with the possibility of returning early, the best-case is now O(n).
n is the array length; you want to find T(n), the algorithm's complexity.
Accessing memory is much more expensive than evaluating an if condition, so define T(n) to be the number of memory accesses.
In the given algorithm, the best case and the worst case both use O(n^2) memory accesses, because you check the if-condition O(n^2) times.
To make the complexity better: hold a flag, and if you don't do any swaps in the main loop, it means your array is sorted and you can break.
Now, in the best case the array is already sorted and you access all the elements once, so O(n).
And in the worst case it is still O(n^2).