What is the Big-O of two independent nested loops, where the inner loop is of variable length?

There are two independent nested structures being iterated and connected somehow. The inner operations are hash table lookups.
# struct_a has unbounded length, but could be zero
for row in struct_a:
    # row.subAttr has unbounded length, but could be zero
    for sub in row.subAttr:
        ...

# struct_b has unbounded length, but could be zero
for row in struct_b:
    # row.subAttr has unbounded length, but could be zero
    for sub in row.subAttr:
        ...
What is the Big-O in this case (variable lengths)?

The actual lengths of struct_a and row.subAttr do not change the Big O classification. The mathematical benefit of algorithmic analysis using Big O notation is that we do not worry about constant factors - in fact, they are usually dropped from the term. A function that takes 5n^2 steps is simply expressed as O(n^2).
In your case, no matter what the lengths of struct_a or row.subAttr happen to be at runtime (zero or very large), the analysis is the same. The double loop is the main factor in Big O notation, and the above code has a complexity of O(n^2). One n term is contributed by the outer loop, and one n term is contributed by the inner loop.
In some situations where the difference between the outer and inner loop is very large, you might express the above as O(mn), to illustrate the fact that the inner and outer loops have different lengths (m represents the length of one of the loops, and n represents the length of the other). In many use cases, the difference in lengths between the two loops is negligible on average, and so m*n is simplified to n^2. In this case, there is an implicit understanding that even though you are using the single variable n, the two loop lengths may be different.
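As a concrete illustration, here is a minimal, self-contained sketch (the data and the Row type are made up; struct_a and subAttr mirror the names in the question) that counts the inner iterations directly:

from collections import namedtuple

Row = namedtuple("Row", "subAttr")

# Stand-in for struct_a: m rows, each with a variable-length subAttr list.
struct_a = [Row(subAttr=list(range(k))) for k in (0, 3, 5, 2)]

lookups = 0
for row in struct_a:            # m iterations, where m = len(struct_a)
    for sub in row.subAttr:     # at most n iterations each, where n = longest subAttr
        lookups += 1            # stands in for the O(1) hash table lookup

# lookups == sum(len(row.subAttr) for row in struct_a), which is at most m * n,
# so the block is O(mn); treating both lengths as n gives O(n^2).
print(lookups)  # 10 for the sample data above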
Obviously, the final runtime will be affected by the length of the structures, whether they are zero or very long. There are many functions with higher Big O complexities (even n^3 or n!) that take much less time in real life than simpler functions (log(n) or n) simply due to constant factors.
This answer is related and illustrates the same point.

Related

Do problem constraints change the time complexity of algorithms?

Let's say that the algorithm involves iterating through a string character by character.
If I know for sure that the length of the string is less than, say, 15 characters, will the time complexity be O(1) or will it remain as O(n)?
There are two aspects to this question - the core of the question is, can problem constraints change the asymptotic complexity of an algorithm? The answer to that is yes. But then you give an example of a constraint (strings limited to 15 characters) where the answer is: the question doesn't make sense. A lot of the other answers here are misleading because they address only the second aspect but try to reach a conclusion about the first one.
Formally, the asymptotic complexity of an algorithm is measured by considering a set of inputs where the input sizes (i.e. what we call n) are unbounded. The reason n must be unbounded is because the definition of asymptotic complexity is a statement like "there is some n₀ such that for all n ≥ n₀, ...", so if the set doesn't contain any inputs of size n ≥ n₀ then this statement is vacuous.
Since algorithms can have different running times depending on which inputs of each size we consider, we often distinguish between "average", "worst case" and "best case" time complexity. Take for example insertion sort:
In the average case, insertion sort has to compare the current element with half of the elements in the sorted portion of the array, so the algorithm does about n^2/4 comparisons.
In the worst case, when the array is in descending order, insertion sort has to compare the current element with every element in the sorted portion (because it's less than all of them), so the algorithm does about n^2/2 comparisons.
In the best case, when the array is in ascending order, insertion sort only has to compare the current element with the largest element in the sorted portion, so the algorithm does about n comparisons.
However, now suppose we add the constraint that the input array is always in ascending order except for its smallest element:
Now the average case does about 3n/2 comparisons,
The worst case does about 2n comparisons,
And the best case does about n comparisons.
Note that it's the same algorithm, insertion sort, but because we're considering a different set of inputs where the algorithm has different performance characteristics, we end up with a different time complexity for the average case because we're taking an average over a different set, and similarly we get a different time complexity for the worst case because we're choosing the worst inputs from a different set. Hence, yes, adding a problem constraint can change the time complexity even if the algorithm itself is not changed.
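To make the comparison counts concrete, here is a small illustrative sketch (not from the question) that counts comparisons on both input sets; the numbers it prints line up roughly with n^2/4 for the unconstrained random arrays and 3n/2 for the constrained ones:

import random

def insertion_sort_comparisons(a):
    # Count the element comparisons made by a textbook insertion sort.
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1        # compare key with a[j]
            if a[j] <= key:
                break
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return comparisons

n = 1000
random_input = [random.random() for _ in range(n)]          # average case, ~ n^2/4
descending = sorted(random_input, reverse=True)             # worst case, ~ n^2/2
ascending = sorted(random_input)                            # best case, ~ n
# Constrained set: ascending except that the smallest element sits at a random position.
constrained = sorted(random_input)[1:]
constrained.insert(random.randrange(n), min(random_input))  # average case, ~ 3n/2
for name, data in [("random", random_input), ("descending", descending),
                   ("ascending", ascending), ("constrained", constrained)]:
    print(name, insertion_sort_comparisons(data))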
However, now let's consider your example of an algorithm which iterates over each character in a string, with the added constraint that the string's length is at most 15 characters. Here, it does not make sense to talk about the asymptotic complexity, because the input sizes n in your set are not unbounded. This particular set of inputs is not valid for doing such an analysis with.
In the mathematical sense, yes. Big-O notation describes the behavior of an algorithm in the limit, and if you have a fixed upper bound on the input size, that implies the complexity is bounded by a constant, i.e. O(1).
That said, context is important. All computers have a realistic limit to the amount of input they can accept (a technical upper bound). Just because nothing in the world can store a yottabyte of data doesn't mean saying every algorithm is O(1) is useful! It's about applying the mathematics in a way that makes sense for the situation.
Here are two contexts for your example, one where it makes sense to call it O(1), and one where it does not.
"I decided I won't put strings of length more than 15 into my program, therefore it is O(1)". This is not a super useful interpretation of the runtime. The actual time is still strongly tied to the size of the string; a string of size 1 will run much faster than one of size 15 even if there is technically a constant bound. In other words, within the constraints of your problem there is still a strong correlation to n.
"My algorithm will process a list of n strings, each with maximum size 15". Here we have a different story; the runtime is dominated by having to run through the list! There's a point where n is so large that the time to process a single string doesn't change the correlation. Now it makes sense to consider the time to process a single string O(1), and therefore the time to process the whole list O(n)
That said, Big-O notation doesn't have to only use one variable! There are problems where upper bounds are intrinsic to the algorithm, but you wouldn't put a bound on the input arbitrarily. Instead, you can describe each dimension of your input as a different variable:
n = list length
s = maximum string length
=> O(n*s)
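For instance, a minimal sketch of that two-variable view (hypothetical function; any constant per-character work would do):

def count_vowels(strings):
    # Touches every character of every string once: O(n * s),
    # where n = len(strings) and s = the maximum string length.
    total = 0
    for text in strings:        # n iterations
        for ch in text:         # at most s iterations each
            if ch in "aeiou":
                total += 1
    return total

print(count_vowels(["banana", "sky", "queue"]))  # 7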
It depends.
If your algorithm's requirements would grow if larger inputs were provided, then the algorithmic complexity can (and should) be evaluated independently of the inputs. So iterating over all the elements of a list, array, string, etc., is O(n) in relation to the length of the input.
If your algorithm is tied to the limited input size, then that fact becomes part of your algorithmic complexity. For example, maybe your algorithm only iterates over the first 15 characters of the input string, regardless of how long it is. Or maybe your business case simply indicates that a larger input would be an indication of a bug in the calling code, so you opt to immediately exit with an error whenever the input size is larger than a fixed number. In those cases, the algorithm will have constant requirements as the input length tends toward very large numbers.
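For instance, a sketch of the two situations described above (hypothetical functions):

def count_chars(s):
    # O(n): the work grows with the length of the input.
    return sum(1 for _ in s)

def count_first_15_chars(s):
    # O(1): the work is capped at a fixed-size prefix, no matter how long s is.
    return sum(1 for _ in s[:15])

print(count_chars("x" * 1000), count_first_15_chars("x" * 1000))  # 1000 15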
From Wikipedia
Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity.
...
In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows.
In practice, almost all inputs have limits: you cannot input a number larger than what's representable by the numeric type, or a string that's larger than the available memory space. So it would be silly to say that any limits change an algorithm's asymptotic complexity. You could, in theory, use 15 as your asymptote (or "particular value"), and therefore use Big-O notation to define how an algorithm grows as the input approaches that size. There are some algorithms with such terrible complexity (or some execution environments with limited-enough resources) that this would be meaningful.
But if your argument (string length) does not tend toward a large enough value for some aspect of your algorithm's complexity to define the growth of its resource requirements, it's arguably not appropriate to use asymptotic notation at all.
NO!
The time complexity of an algorithm is independent of program constraints. Here is a simple way of thinking about it:
Say your algorithm iterates over the string and appends all consonants to a list.
Now, for the iteration, the time complexity is O(n). This means that the time taken will increase roughly in proportion to the length of the string. (The actual time, though, would vary depending on the time taken by the if statement and on branch prediction.)
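A minimal sketch of that consonant-collecting loop (the names are illustrative):

VOWELS = set("aeiou")

def consonants(s):
    out = []
    for ch in s:                               # one pass: O(n) in the string length
        if ch.isalpha() and ch not in VOWELS:
            out.append(ch)
    return out

print(consonants("constraints"))  # ['c', 'n', 's', 't', 'r', 'n', 't', 's']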
The fact that you know that the string is between 1 and 15 characters long will not change how the program runs, it merely tells you what to expect.
For example, knowing that your values are going to be less than 65000 you could store them in a 16-bit integer and not worry about Integer overflow.
Do problem constraints change the time complexity of algorithms?
No.
"If I know for sure that the length of the string is less than, say, 15 characters ..."
We already know the length of the string is less than SIZE_MAX. Knowing an upper fixed bound for string length does not make the time complexity O(1).
Time complexity remains O(n).
Big-O measures the complexity of algorithms, not of code. It means Big-O does not know the physical limitations of computers. A Big-O measure today will be the same in 1 million years when computers, and programmers alike, have evolved beyond recognition.
So restrictions imposed by today's computers are irrelevant for Big-O. Even though any loop is finite in code, that need not be the case in algorithmic terms. The loop may be bounded or unbounded. It is up to the programmer/Big-O analyst to decide. Only they know which algorithm the code intends to implement. If the number of loop iterations is bounded by a fixed constant, the loop has a Big-O complexity of O(1) because there is no asymptotic growth with N. If, on the other hand, the number of loop iterations is unbounded, the Big-O complexity is O(N) because there is asymptotic growth with N.
The above is straight from the definition of Big-O complexity. There are no ifs or buts. The way the OP describes the loop makes it O(1).
A fundamental requirement of big-O notation is that parameters do not have an upper limit. Suppose performing an operation on N elements takes a time precisely equal to 3E24*N*N*N / (1E24+N*N*N) microseconds. For small values of N, the execution time would be proportional to N^3, but as N gets larger the N^3 term in the denominator would start to play an increasing role in the computation.
If N is 1, the time would be 3 microseconds.
If N is 1E3, the time would be about 3E33/1E24, i.e. 3.0E9.
If N is 1E6, the time would be about 3E42/1E24, i.e. 3.0E18
If N is 1E7, the time would be 3E45/1.001E24, i.e. ~2.997E21
If N is 1E8, the time would be about 3E48/2E24, i.e. 1.5E24
If N is 1E9, the time would be 3E51/1.001E27, i.e. ~2.997E24
If N is 1E10, the time would be about 3E54/1.000001E30, i.e. 2.999997E24
As N gets bigger, the time would continue to grow, but no matter how big N gets the time would always be less than 3.000E24 microseconds. Thus, the time required for this algorithm would be O(1) because one could specify a constant k such that the time necessary to perform the computation with size N would be less than k.
For any practical value of N, the time required would be proportional to N^3, but from a big-O standpoint the worst-case time requirement is constant. The fact that the time changes rapidly across small values of N is irrelevant to the "big picture" behaviour, which is what big-O notation measures.
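A quick way to reproduce the table (the cost function is the hypothetical one defined above, not a measurement of real code):

def runtime_microseconds(n):
    # 3E24 * N^3 / (1E24 + N^3), as defined in the answer.
    return 3e24 * n**3 / (1e24 + n**3)

for n in (1, 1e3, 1e6, 1e7, 1e8, 1e9, 1e10):
    print(f"N = {n:.0e}: {runtime_microseconds(n):.6g} microseconds")
# The output climbs toward, but never exceeds, 3e24 microseconds - the constant
# bound that makes this function O(1) even though it behaves like N^3 for practical N.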
It will be O(1) i.e. constant.
This is because for calculating time complexity or worst-case time complexity (to be precise), we think of the input as a huge chunk of data and the length of this data is assumed to be n.
Let us say, we do some maximum work C on each part of this input data, which we will consider as a constant.
In order to get the worst-case time complexity, we need to loop through each part of the input data i.e. we need to loop n times.
So, the time complexity will be:
n x C.
Since you fixed n to be less than 15 characters, n can also be treated as a constant.
Hence in this case:
n = constant and,
(maximum constant work done) = C = constant
So time complexity is n x C = constant x constant = constant i.e. O(1)
Edit
The reason why I have said n = constant and C = constant for this case is because the time difference for doing calculations for smaller n will become so insignificant (compared to n being a very large number) for modern computers that we can assume it to be constant.
Otherwise, every function ever built will take some time, and we couldn't say things like:
lookup time is constant for hashmaps

Calculating Time complexity for a transpose matrix

The code is a summarized version of a piece of code trying to transpose a matrix. My task for this is to find the time complexity of this program.
I am only interested in the time complexity for the number of swaps that occur. I found out that the outer loop for the swapping runs n-1 times, and the inner loop runs (n^2 - n)/2 times in total.
I derived these results by substituting n with a number.
When n=4, my inner loop would loop 1+2+3 times
When n=5, my inner loop would loop 1+2+3+4 times
Therefore innerloop=(n^2 - n) / 2
How would I calculate the time complexity of this code?
I saw somewhere online where the person just took the values from their inner-loop count and determined it was O(n) complexity.
[Edit]
Do I need to factor in the other loops to calculate the time complexity as well?
Your swap algorithm is a clear (1 + 2 + 3 + ... + n) case, which translates to n×(n+1)/2.
Since constants and lower degree parts of a calculation don't count in Big O notation, this translates to O(n^2).
Taking the other loops into account won't change your time complexity, since they are also O(n^2). In terms of Big O, it doesn't make sense to write O(3(n^2)), because constants are dropped.
But what could help you here is that you don't need to bother with the more complex swap for loop. (I don't mean when you're learning. I mean, when you're dealing with real-world problems.)
On a side note, I would recommend reading example 3 of the Big O section in Cracking the Coding Interview, 6th edition (2015), by Gayle Laakmann McDowell, which addresses this exact situation and many similar ones.
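For reference, a small sketch (not the OP's exact code) of an in-place square-matrix transpose that counts those swaps:

def transpose_in_place(matrix):
    # Swap each upper-triangle element with its mirror: 1 + 2 + ... + (n-1) swaps.
    n = len(matrix)
    swaps = 0
    for i in range(n - 1):               # outer loop runs n - 1 times
        for j in range(i + 1, n):        # (n^2 - n) / 2 swaps in total
            matrix[i][j], matrix[j][i] = matrix[j][i], matrix[i][j]
            swaps += 1
    return swaps

n = 5
grid = [[i * n + j for j in range(n)] for i in range(n)]
print(transpose_in_place(grid))  # 10, i.e. (n^2 - n) / 2 for n = 5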

What is the overall O(n) time complexity of O(sum(a)) if a is an array of integers and n is the length of the array?

I’m having a hard time using O(n) principles to generalize the time complexity of an algorithm whose more specific time complexity is O(sum(a)) where a is an array of integers.
My intuition is that this time complexity should generalize to O(n), as you can think of this as a "linear" sum of k_i values that occur n times, where k_i is the integer value at position i in the array, making it O(n) (k_i = 1 for a straight-up O(n) case).
But it doesn't seem to be exactly the same as O(n) - the values of k_i could be much larger than n, and if all these values are large you have something that could be O(n^2) or O(n^3) depending on how large they are.
Is this something to take into account for O(n) complexity where n is the length of the array? Should I actually be defining n as the sum of all elements in the array instead of the length of the array?
In general, what would be the best way to think about this?
Fundamentally, we want to describe the runtime of an algorithm based on the input. The "runtime" is a vague term that is often swept under the rug. For example, the "runtime" of a sorting algorithm or a hashtable operation is measured in the number of comparisons, but using "runtime" to mean the number of basic operations (which are also usually only vaguely defined) is also possible.
There are two choices (or simplifications) often made when calculating runtime. The first is to ignore the actual input, and to use the size of the input (measured somehow) instead. This size is usually denoted n. The second is to use big-O notation to describe the worst case (or best case, or average, or amortized...).
Neither of these choices is always necessary, and sometimes, they won't make sense. To repeat, since this is the crux of the answer: describing runtimes in big-O of n is not the only way to describe runtimes and sometimes it makes no sense to do so.
For example, in the case of an algorithm that runs in O(sum(a)) time:
def f(a):
    t = 0
    for x in a:
        for i in range(x):   # x iterations for this element
            t += 1           # incremented sum(a) times in total
    return t
It's not useful to describe the runtime of this using the length of the input array a. It's not useful because the length of a doesn't say anything about the worst-case runtime.
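For example, f([10, 10]) and f([1000000, 1000000]) receive arrays of the same length, but the second call does 100,000 times as much work.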
Saying that t is incremented sum(a) times is a useful statement about the runtime of the program. It doesn't use big-O complexity notation.
And if you do want to express that in big-O notation, you can say that the runtime of this code is O(sum(a)). This blurs exactly what you're measuring in the runtime, because you can be including the cost of performing the statements other than incrementing t.
And going back to the example, you could (and if you were studying complexity classes, you probably would) say n is the size (in bits) of the input array. Then you could say something about the runtime (measured in basic operations): it's O(2^n), since the worst case input is an array with one element which takes the value 2^n-1 (*note).
*note: this ignores some technical details about how to encode an array using bits.

What exactly is "cn" when calculating the complexity of Merge Sort?

I was reading CLRS today to understand the complexity of Merge sort in a better way. I came across a line that says "where the constant c represents the time required to solve problems of size 1 as well as the time per array element of the divide and combine steps." I know what the author means by problems of size 1 but what exactly is time per array element of the divide and combine steps and why is it "cn"?
You're right that it's a bit confusing and not very precise of the authors. It's better to use recurrences like this to count discrete events such as comparisons. But then you have to lay down details of a particular implementation, and that gets fiddly. This proof is a kind of estimate, which is good enough since we're only looking for big Omega behavior. Constant factors don't make a difference.
To make sense of it, think of cn (which is c times n) as the amount of time it takes to do a merge step on lists with total length n. So c is a rough expression for the constant time it takes to handle one element: the time it takes to execute one iteration of whatever loop is doing the merging.
Rather than merging, he calls this "combining". He's also proposing there might be a per-element cost to splitting the lists for recursive sorting. In a normal array implementation, there is no such per-element cost. A linked list mergesort will have one, though: a loop that divides a big list into two halves. Then c represents one iteration of both loops.
The recursive term 2T(n/2) is an expression for the amount of time it takes to sort the two sub-lists.
You could make this expression a little more precise by saying
T(n) = 2T(n/2) + cn + k
where k is the constant time of code that runs outside the merge (and split if there is one) loop: function call overhead, sublist length math, etc. You might try solving the recurrence with this extra term as an exercise to prove to yourself that the big Omega result doesn't change.
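If you want a quick sanity check rather than a formal proof, here is a small sketch (assuming T(1) = c and n a power of two) showing that the ratio T(n) / (n log2 n) approaches a constant as n grows, with or without the extra k term:

import math

def T(n, c=1, k=0):
    # The recurrence from above: T(n) = 2*T(n/2) + c*n + k, with T(1) = c.
    if n <= 1:
        return c
    return 2 * T(n // 2, c, k) + c * n + k

for n in (2**10, 2**15, 2**20):
    print(n, T(n) / (n * math.log2(n)), T(n, k=5) / (n * math.log2(n)))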

Expected syntax for Big O notation

Are there a limited number of basic Big-O notations, considering you are meant to 'distil' them down to their most important part?
O(n^2): quadratic
O(n): linear
O(1): constant
O(log n): logarithmic
O(n!): factorial
O(n^a): polynomial
Or are you expected to work out variations such as O(n^4), etc.? And if so, is that the only exception - the power-of-x one?
Generally, you distill Big-O notation (and related Bachman-Landau notations like Big-Theta and Big-Omega) down to the fastest-growing term in N. So, you remove/simplify lesser terms (N^2 + N == O(N^2)) and nonvariable coefficients of the term (O(4N^2) == O(N^2)), but NOT powers or exponent bases (O(3^(4N)) is not the same class as O(3^N)). You also don't strip variable coefficients; N log N is N log N, NOT log N or N.
So, you will normally only see numbers in a Big-O notation if the complexity is polynomial (a power of N) or exponential (a base raised to the Nth power). The most common Big-O notations are much as you show, with the addition of N log N (VERY common).
However, if you are differentiating between two algorithms of equal general complexity, you MAY add lesser terms and/or coefficients back in to demonstrate the relative difference; an algorithm that performs linearly but has double the instructions of another might be described as O(2N) when comparing it with the other O(N) algorithm. However, taken individually, both algorithms are linear (O(N)).
Some Big-O notations are not algebraic, and may involve multiple variables in their simplest general-case form. The counting sort, for instance, is complexity O(Max(N,M)), where N is the number of elements in the list, and M is the range of those elements. Often it is possible to reduce this in specific cases by defining M in terms of N and thus reducing to a single variable (if the list in question is of the first N squares, M = N^2 - 1), but in the general case both variables are independent and significant. BucketSort's complexity is officially O(N), but really it's more like O(N log M) where M is the maximum value of the list of N elements. M is usually considered insignificant, but that depends on the values you normally sort (sorting 5 values each in the billions will require more loops to compare each power of 10 than traversals through the list to put them in the buckets) and on the radix used (RadixSort is a base-2 BucketSort; again, sorting values with a greater log2 value will require more loops than traversals).
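To make the two-variable point concrete, here is a minimal counting sort sketch; it does O(N + M) work, which is the same class as O(Max(N, M)):

def counting_sort(values, max_value):
    counts = [0] * (max_value + 1)       # M = the range of the values
    for v in values:                     # N iterations
        counts[v] += 1
    out = []
    for v, c in enumerate(counts):       # M iterations
        out.extend([v] * c)
    return out

print(counting_sort([3, 1, 4, 1, 5], max_value=5))  # [1, 1, 3, 4, 5]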
The Big-O notation is a way to provide an upper bound on the limiting behaviour of a function. There are no restrictions on its functional form. However, there are certain conventions, as explained by Wikipedia:
In typical usage, the formal definition of O notation is not used directly; rather, the O notation for a function f(x) is derived by the following simplification rules:
If f(x) is a sum of several terms, the one with the largest growth rate is kept, and all others omitted.
If f(x) is a product of several factors, any constants (terms in the product that do not depend on x) are omitted.
There are, of course, some functional forms that show up more frequently that others. Some common classes are listed here.
No, the number of different O-classes is not finite.
As you already mentioned O(n^x) describes a different set for every x. And that is not the only "exception". O(x^n) is also a different set for every x. Likewise O(n^n), O(n^n^n), O(n^n^n^n) etc. are all different sets (and you can of course continue that ad infinitum).
In general, you split the expression into a sum of products, keep the largest term, and divide by constants to simplify it as much as possible.
ex:
n(2n+3log(n)) => 2n^2+3nlog(n) => 2n^2 => n^2
(n+1)(2nlog(n)+n) => 2n^2log(n)+n^2+2nlog(n)+n => 2n^2log(n) => n^2log(n)
