What Big-O notation would this fall under?

What Big-O notation would this fall under? I know seqSearch() and removeAt() are of order O(n) (assume they are either way). Without the for loop the method would certainly be O(n), but I get confused about how to calculate what it becomes with a for loop thrown in. I'm not all that great at math, so: would it be O(n^2)?
public void removeAll(DataElement clearElement)
{
    if (length == 0)
        System.err.println("Cannot delete from an empty list.");
    else
    {
        for (int i = 0; i < list.length; i++)
        {
            loc = seqSearch(clearElement);
            if (loc != -1)
            {
                removeAt(loc);
                --i;
            }
        }
    }
}

If removeAt() and seqSearch() are O(n) with respect to the length of the list, then yes, this algorithm is O(n^2). Within the for loop you call seqSearch() every time, possibly followed by a call to removeAt(loc), so each iteration does either n or 2n operations. Taking the worst case over all n iterations, you have 2n^2 operations, which is O(n^2).
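A rough worked bound (c1 and c2 are my own illustrative labels for the per-element costs of seqSearch() and removeAt(), not from the question): the loop body runs at most n times, and each run costs at most c1*n + c2*n, so
T(n) <= n * (c1*n + c2*n) = (c1 + c2) * n^2
which is O(n^2).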

Related

What is the time complexity of a loop that runs n times and contains an if-instruction? Is it n^2?

As you can see here, I have this algorithm and I want to know whether the time complexity is O(n) or O(n²).
public static boolean isTextPalindrome(String text) {
    if (text == null) {
        return false;
    }
    int left = 0;
    int right = text.length() - 1;
    while (left < right) {
        if (text.charAt(left++) != text.charAt(right--)) {
            return false;
        }
    }
    return true;
}
In the worst case the complexity of this algorithm is O(n): it depends directly on the input, but only linearly. We can ignore the impact of the if condition here, as it adds only a constant cost per iteration.
The best case would be an input word that starts with a letter it does not end with. But since you cannot rely on that, we are more interested in the worst case, which gives an upper limit on the complexity.
I would suggest as a rule of thumb: if you have a loop whose length depends on the input, you have O(n) complexity. If there is another input-dependent loop nested inside the first one, the complexity increases to O(n^2), and so forth for more deeply nested loops.
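A minimal sketch of that rule of thumb (hypothetical helper methods, not from the question):
// O(n): a single loop bounded by the input size.
static int countChars(String s) {
    int count = 0;
    for (int i = 0; i < s.length(); i++) {
        count++;
    }
    return count;
}

// O(n^2): a second input-dependent loop nested inside the first.
static int countEqualPairs(String s) {
    int pairs = 0;
    for (int i = 0; i < s.length(); i++) {
        for (int j = i + 1; j < s.length(); j++) {
            if (s.charAt(i) == s.charAt(j)) {
                pairs++; // every unordered pair (i, j) is inspected once
            }
        }
    }
    return pairs;
}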

What is the algorithmic time complexity, if the exact count is ∛(n^2)?

I have calculated the answer to be n raised to the power 2/3. Could anyone tell me what the worst-case Big O is?
Whether it is the worst case or not depends on the algorithm you are using.
But in general, if the number of operations is n^(2/3), then the complexity in O-notation is O(n^(2/3)).
Let me explain this in a bit more detail (so that we can get rid of the word "general" and give a definitive answer).
Consider a simple algorithm that looks for a specific number in an array of n elements.
If your algorithm is something like this:
static boolean find(int[] arr, int number) {
    boolean found = false;
    for (int i = 0; i < arr.length; i++) {
        if (arr[i] == number) {
            found = true;
        }
    }
    return found;
}
The time complexity of finding a number in an array using the above algorithm is always O(n). By "always" I mean that whatever the input is, the above algorithm performs n iterations (where n is the length of the array).
Now compare and contrast this algorithm with the following:
static boolean find(int[] arr, int number) {
    for (int i = 0; i < arr.length; i++) {
        if (arr[i] == number) {
            return true;
        }
    }
    return false;
}
Now the time complexity depends on the contents of the array. (If you have an array with 10^8 elements and the first element matches the one you are searching for, you are done and can return instantly without iterating over the whole array.) So the worst-case complexity here is O(n), but the best case is O(1).
So it basically depends on how your algorithm works. I assume that by "exact" you mean the first version of find described above, whose operation count is the same for every input. If that is the case then yes, the worst-case time complexity is O(n^(2/3)).
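To make that concrete, here is a hypothetical routine (made up for illustration, not from the question) whose operation count is about n^(2/3) for every input, so its complexity is O(n^(2/3)) in the best and worst case alike:
// Hypothetical sketch: performs roughly n^(2/3) iterations for input size n
// (ignores rounding and overflow for very large n).
static long doWork(long n) {
    long steps = (long) Math.cbrt((double) (n * n)); // cube root of n^2
    long acc = 0;
    for (long i = 0; i < steps; i++) {
        acc += i; // constant work per iteration
    }
    return acc;
}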

Measuring code complexity of a recursive function

According to my professor, this code is Θ(n^n).
Measuring line by line, I can't figure out myself why it has n^n complexity.
This is the code:
any(v[], n, degree){
    for(i = 0; i < degree; i++){
        any(v, n-1, degree)
    }
}
Here is the line-by-line cost accounting I have been attempting myself:
any(v[], n, degree){
    for(i = 0; i < degree; i++){   // init: c, loop test: c(n+1), increment: cn
        any(v, n-1, degree)        // n * T(n-1)
    }
}
That gives 2c + 2cn + n*T(n-1).
To start, it looks like this would actually recurse forever, since it never breaks or returns at n == 0. Assuming that the algorithm does return at n == 0 (it would have to, in an if statement that is currently missing):
T(n) = degree*T(n-1), where T(0) = 1 and T(1) = degree
This reduces to O(degree^n)
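Unrolling the recurrence makes the reduction explicit:
T(n) = degree*T(n-1) = degree^2*T(n-2) = ... = degree^n*T(0) = degree^n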
I'm not really sure where the n^n comes from, unless I did the maths wrong. (One possibility: degree is passed down unchanged, so if the initial call uses degree = n, then degree^n = n^n, which may be where the professor's Θ(n^n) comes from.)
Your professor is right; as written, that code would run forever, recursively calling itself with n growing ever more negative. If that is not what you want, you have to implement a condition that ends the recursion, e.g. based on the value of n:
any(v[], n, degree){
    if (n > -1) {
        for(i = 0; i < degree; i++){
            any(v, n-1, degree)
        }
    }
}
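As a quick sanity check of the O(degree^n) bound, here is a hypothetical Java translation with an added call counter (the class and counter are mine, not part of the question):
public class AnyCallCounter {
    static long calls = 0;

    // Same shape as the fixed pseudocode above: the recursion stops at n == -1.
    static void any(int[] v, int n, int degree) {
        calls++;
        if (n > -1) {
            for (int i = 0; i < degree; i++) {
                any(v, n - 1, degree);
            }
        }
    }

    public static void main(String[] args) {
        int degree = 3; // arbitrary choice for the experiment
        for (int n = 1; n <= 5; n++) {
            calls = 0;
            any(new int[0], n, degree);
            // the call count grows geometrically with ratio `degree`,
            // consistent with O(degree^n)
            System.out.println("n=" + n + " -> " + calls + " calls");
        }
    }
}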

Is there a sorting algorithm with a worst case time complexity of n^3?

I'm familiar with other sorting algorithms and the worst I've heard of in polynomial time is insertion sort or bubble sort. Excluding the truly terrible bogosort and those like it, are there any sorting algorithms with a worse polynomial time complexity than n^2?
Here's one, implemented in C#:
public void BadSort<T>(T[] arr) where T : IComparable
{
    for (int i = 0; i < arr.Length; i++)
    {
        var shortest = i;
        for (int j = i; j < arr.Length; j++)
        {
            bool isShortest = true;
            for (int k = j + 1; k < arr.Length; k++)
            {
                if (arr[j].CompareTo(arr[k]) > 0)
                {
                    isShortest = false;
                    break;
                }
            }
            if (isShortest)
            {
                shortest = j;
                break;
            }
        }
        var tmp = arr[i];
        arr[i] = arr[shortest];
        arr[shortest] = tmp;
    }
}
It's basically a really naive sorting algorithm, coupled with a needlessly-complex method of calculating the index with the minimum value.
The gist is this:
For each index:
    find the element from this point forward which, when compared with all the elements after it, ends up being <= all of them;
    swap this "shortest" element with the element at this index.
The innermost loop (with the comparison) executes O(n^3) times in the worst case. (Note that a descending-sorted input is actually cheap here, since each inner scan breaks immediately; a genuinely bad input is one like [2, 3, ..., n, 1], where almost every inner scan runs to the end of the array before finding a smaller element.) Every iteration of the outer loop puts one more element into its correct place, getting you just a bit closer to being fully sorted.
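Roughly, on such an input almost every j iteration forces the inner k loop to scan to the end of the array, so the comparison count is about
sum_{i=1..n} sum_{j=i..n} (n - j) ≈ n^3/6
which is Θ(n^3).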
If you work hard enough, you could probably find a sorting algorithm with just about any complexity you want. But, as the commenters pointed out, there's really no reason to seek out an algorithm with a worst-case like this. You'll hopefully never run into one in the wild. You really have to try to come up with one this bad.
Here's an example of an elegant algorithm called slowsort, which runs in Ω(n^(log(n)/(2+ɛ))) for any positive ɛ:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.116.9158&rep=rep1&type=pdf (section 5).
Slow Sort
Returns the sorted vector after performing SlowSort.
It is a sorting algorithm of humorous nature, not a useful one. It is based on the principle of "multiply and surrender", a tongue-in-cheek parody of divide and conquer, and was published in 1986 by Andrei Broder and Jorge Stolfi in their paper Pessimal Algorithms and Simplexity Analysis. The algorithm multiplies a single problem into multiple subproblems. It is interesting because it is provably the asymptotically least efficient sorting algorithm that can be built, under the restriction that the algorithm, while slow, must still be making steady progress toward a result at all times.
void SlowSort(vector<int> &a, int i, int j)
{
    if (i >= j)
        return;
    int m = i + (j - i) / 2;   // midpoint of a[i..j]
    SlowSort(a, i, m);         // recursively sort the first half
    SlowSort(a, m + 1, j);     // recursively sort the second half
    if (a[j] < a[m])           // move the maximum of a[i..j] into a[j]
    {
        int temp = a[j];
        a[j] = a[m];
        a[m] = temp;
    }
    SlowSort(a, i, j - 1);     // "surrender": sort everything but the last element again
}
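The recurrence behind that bound, as analyzed in the linked paper, is
T(n) = 2*T(n/2) + T(n-1) + c
(two recursive sorts of the halves, one comparison and swap, then one recursive sort of everything but the last element), and its solution is the super-polynomial Ω(n^(log(n)/(2+ɛ))) bound quoted above.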

Time Complexity of a Recursive Algorithm with a Condition not Based on Size

I have a question about the complexity of a recursive function
The code (in C#) is like this:
public void sort(int[] a, int n)
{
    bool done = true;
    int j = 0;
    while (j <= n - 2)
    {
        if (a[j] > a[j + 1])
        {
            // swap a[j] and a[j + 1]
            int tmp = a[j]; a[j] = a[j + 1]; a[j + 1] = tmp;
            done = false;
        }
        j++;
    }
    j = n - 1;
    while (j >= 1)
    {
        if (a[j] < a[j - 1])
        {
            // swap a[j] and a[j - 1]
            int tmp = a[j]; a[j] = a[j - 1]; a[j - 1] = tmp;
            done = false;
        }
        j--;
    }
    if (!done)
        sort(a, n);
}
Now, the difficulty I have is with the recursive part of the function.
In all the recursions I have seen so far, we can determine the number of recursive calls from the input size, because each call is made with a smaller input, and so on.
But for this problem the recursion doesn't depend on the input size; it depends on whether the elements are sorted or not. I mean, if the array is already sorted, the function runs in O(n) because of the two loops and makes no recursive call (I hope I'm right about this part).
How can we determine the complexity of the recursive part?
O(f(n)) means that your algorithm's running time is at most a constant multiple of f(n), regardless of the input (considering only its size). So you should find the worst case for an input of size n.
This looks like some bubble sort algorithm (although a weirdly complicated one), which is O(n^2). In the worst case, every call of the sort function takes O(n) and transports the highest remaining number to the end of the array; you have n items, so it's O(n) * O(n) => O(n^2).
This is bubble sort. It's O(n^2). Since the algorithm swaps adjacent elements, the running time is proportional to the number of inversions in the list, which is O(n^2). The number of recursive calls will be O(n). The backward pass just lets it recurse about half as many times, but it doesn't affect the actual complexity; it's still doing the same total amount of work.
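For the inversions argument: a reverse-sorted list has every pair of elements out of order, and each adjacent swap removes exactly one inversion, so the worst case needs
n*(n-1)/2 = Θ(n^2)
adjacent swaps.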
