Algorithm Time Complexity - algorithm

I am currently having trouble identifying and understanding the time complexity of the following algorithm.
Background: There is a list of files, each containing a list of candidate IDs. Neither the number of files nor the number of candidates within them is fixed.
How would you calculate the time complexity for an algorithm which is responsible for:
Reading each file and adding all the unique candidate IDs into a HashSet?
Thanks.

I'm just repeating what amit said, so please give him the upvote if that is already clear to you; I find that explanation a bit confusing.
Your average complexity is O(n), where n is the total number of candidates (from all files). So if you have a files, each with b candidates, then the time taken is proportional to a * b.
This is because the simplest way to solve your problem is to simply loop through all the data, adding each value to the set. The set will discard duplicates as necessary.
Looping over all values takes time proportional to the number of values (that is the O(n) part). Adding a value to a hash set takes constant time (O(1)). Since that is constant time per entry, your overall time remains O(n).
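For concreteness, here is a minimal Java sketch of that loop (my own illustration, not code from the question; it assumes the files are given as a list of Paths and contain one candidate ID per line):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class UniqueCandidates {
    // Reads every file and collects the distinct candidate IDs.
    // Total work is proportional to the total number of IDs read (O(n) on average),
    // because each HashSet.add is expected O(1) and duplicates are simply ignored.
    static Set<String> collectUniqueIds(List<Path> files) throws IOException {
        Set<String> ids = new HashSet<>();
        for (Path file : files) {
            for (String line : Files.readAllLines(file)) {
                ids.add(line.trim()); // the set discards duplicates for us
            }
        }
        return ids;
    }
}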
However, hash sets have a strange worst-case behaviour: in some (unusual) cases they take time proportional to the size of their contents. So in the very worst case, each time you add a value it requires O(m) work, where m is the number of entries already in the set.
Now m is (approximately; it starts at zero and grows towards) the number of distinct values. So we have two common cases:
If the number of distinct candidates keeps growing as we read more (so, for example, 90% of the entries in the files are always new candidates), then m is proportional to n. That means the work of adding each candidate grows in proportion to n, so the total work is proportional to n^2 (for each candidate we do work proportional to n, and there are n candidates). So the worst case is O(n^2).
If the number of distinct candidates is actually fixed, then as you read more and more files they tend to contain only known candidates. In that case the extra work for inserting into the set is constant (you only hit the strange behaviour a fixed number of times, for the unique candidates; it doesn't depend on n). The performance of the set then does not keep getting worse as n grows, so the worst-case complexity remains O(n).

Related

Do problem constraints change the time complexity of algorithms?

Let's say that the algorithm involves iterating through a string character by character.
If I know for sure that the length of the string is less than, say, 15 characters, will the time complexity be O(1) or will it remain as O(n)?
There are two aspects to this question - the core of the question is, can problem constraints change the asymptotic complexity of an algorithm? The answer to that is yes. But then you give an example of a constraint (strings limited to 15 characters) where the answer is: the question doesn't make sense. A lot of the other answers here are misleading because they address only the second aspect but try to reach a conclusion about the first one.
Formally, the asymptotic complexity of an algorithm is measured by considering a set of inputs where the input sizes (i.e. what we call n) are unbounded. The reason n must be unbounded is because the definition of asymptotic complexity is a statement like "there is some n0 such that for all n ≥ n0, ...", so if the set doesn't contain any inputs of size n ≥ n0 then this statement is vacuous.
Since algorithms can have different running times depending on which inputs of each size we consider, we often distinguish between "average", "worst case" and "best case" time complexity. Take for example insertion sort:
In the average case, insertion sort has to compare the current element with half of the elements in the sorted portion of the array, so the algorithm does about n^2/4 comparisons.
In the worst case, when the array is in descending order, insertion sort has to compare the current element with every element in the sorted portion (because it's less than all of them), so the algorithm does about n^2/2 comparisons.
In the best case, when the array is in ascending order, insertion sort only has to compare the current element with the largest element in the sorted portion, so the algorithm does about n comparisons.
However, now suppose we add the constraint that the input array is always in ascending order except for its smallest element:
Now the average case does about 3n/2 comparisons,
The worst case does about 2n comparisons,
And the best case does about n comparisons.
Note that it's the same algorithm, insertion sort, but because we're considering a different set of inputs where the algorithm has different performance characteristics, we end up with a different time complexity for the average case because we're taking an average over a different set, and similarly we get a different time complexity for the worst case because we're choosing the worst inputs from a different set. Hence, yes, adding a problem constraint can change the time complexity even if the algorithm itself is not changed.
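As an aside (my own illustration, not part of the original answer), here is an insertion sort instrumented to count comparisons; running it on ascending, descending, random, and "ascending except the smallest element is last" inputs roughly reproduces the ~n, ~n^2/2, ~n^2/4 and ~2n figures above:

import java.util.Random;

class InsertionSortComparisons {
    // Standard insertion sort that also counts element comparisons.
    static long sortAndCount(int[] a) {
        long comparisons = 0;
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= 0) {
                comparisons++;            // compare key against the sorted prefix
                if (a[j] > key) {
                    a[j + 1] = a[j];      // shift the larger element to the right
                    j--;
                } else {
                    break;
                }
            }
            a[j + 1] = key;
        }
        return comparisons;
    }

    public static void main(String[] args) {
        int n = 10_000;
        Random rng = new Random(42);
        int[] ascending = new int[n], descending = new int[n], random = new int[n], constrained = new int[n];
        for (int i = 0; i < n; i++) {
            ascending[i] = i;
            descending[i] = n - i;
            random[i] = rng.nextInt();
            constrained[i] = i + 1;       // ascending...
        }
        constrained[n - 1] = 0;           // ...except the smallest element is last
        System.out.println("best (ascending):       " + sortAndCount(ascending));   // about n
        System.out.println("worst (descending):     " + sortAndCount(descending));  // about n^2/2
        System.out.println("average (random):       " + sortAndCount(random));      // about n^2/4
        System.out.println("constrained worst case: " + sortAndCount(constrained)); // about 2n
    }
}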
However, now let's consider your example of an algorithm which iterates over each character in a string, with the added constraint that the string's length is at most 15 characters. Here, it does not make sense to talk about the asymptotic complexity, because the input sizes n in your set are not unbounded. This particular set of inputs is not valid for doing such an analysis with.
In the mathematical sense, yes. Big-O notation describes the behavior of an algorithm in the limit, and if you have a fixed upper bound on the input size, that implies the running time is bounded by a constant.
That said, context is important. All computers have a realistic limit to the amount of input they can accept (a technical upper bound). Just because nothing in the world can store a yottabyte of data doesn't mean saying every algorithm is O(1) is useful! It's about applying the mathematics in a way that makes sense for the situation.
Here are two contexts for your example, one where it makes sense to call it O(1), and one where it does not.
"I decided I won't put strings of length more than 15 into my program, therefore it is O(1)". This is not a super useful interpretation of the runtime. The actual time is still strongly tied to the size of the string; a string of size 1 will run much faster than one of size 15 even if there is technically a constant bound. In other words, within the constraints of your problem there is still a strong correlation to n.
"My algorithm will process a list of n strings, each with maximum size 15". Here we have a different story; the runtime is dominated by having to run through the list! There's a point where n is so large that the time to process a single string doesn't change the correlation. Now it makes sense to consider the time to process a single string O(1), and therefore the time to process the whole list O(n)
That said, Big-O notation doesn't have to only use one variable! There are problems where upper bounds are intrinsic to the algorithm, but you wouldn't put a bound on the input arbitrarily. Instead, you can describe each dimension of your input as a different variable:
n = list length
s = maximum string length
=> O(n*s)
It depends.
If your algorithm's requirements would grow if larger inputs were provided, then the algorithmic complexity can (and should) be evaluated independently of the inputs. So iterating over all the elements of a list, array, string, etc., is O(n) in relation to the length of the input.
If your algorithm is tied to the limited input size, then that fact becomes part of your algorithmic complexity. For example, maybe your algorithm only iterates over the first 15 characters of the input string, regardless of how long it is. Or maybe your business case simply indicates that a larger input would be an indication of a bug in the calling code, so you opt to immediately exit with an error whenever the input size is larger than a fixed number. In those cases, the algorithm will have constant requirements as the input length tends toward very large numbers.
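For illustration, a hypothetical example of the second kind might look like this in Java (the function name and the vowel-counting task are invented for this sketch); the cap of 15 makes the work independent of how long the string actually is:

// Only the first 15 characters are ever examined, so the loop does a bounded
// amount of work no matter how long s is: O(1) with respect to s.length().
static int countVowelsInPrefix(String s) {
    int limit = Math.min(s.length(), 15); // the fixed cap from the example
    int vowels = 0;
    for (int i = 0; i < limit; i++) {
        if ("aeiouAEIOU".indexOf(s.charAt(i)) >= 0) {
            vowels++;
        }
    }
    return vowels;
}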
From Wikipedia
Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity.
...
In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows.
In practice, almost all inputs have limits: you cannot input a number larger than what's representable by the numeric type, or a string that's larger than the available memory space. So it would be silly to say that any limits change an algorithm's asymptotic complexity. You could, in theory, use 15 as your asymptote (or "particular value"), and therefore use Big-O notation to define how an algorithm grows as the input approaches that size. There are some algorithms with such terrible complexity (or some execution environments with limited-enough resources) that this would be meaningful.
But if your argument (string length) does not tend toward a large enough value for some aspect of your algorithm's complexity to define the growth of its resource requirements, it's arguably not appropriate to use asymptotic notation at all.
NO!
The time complexity of an algorithm is independent of program constraints. Here is (a simple) way of thinking about it:
Say your algorithm iterates over the string and appends all consonants to a list.
Now, the time complexity for the iteration is O(n). This means that the time taken will increase roughly in proportion to the length of the string. (The actual time would also vary depending on the cost of the if statement and branch prediction.)
The fact that you know that the string is between 1 and 15 characters long will not change how the program runs, it merely tells you what to expect.
For example, knowing that your values are going to be less than 65000 you could store them in a 16-bit integer and not worry about Integer overflow.
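A quick Java sketch of the consonant-collecting loop described above (my own illustration; the exact filtering rule is an assumption):

import java.util.ArrayList;
import java.util.List;

// Visits every character once, so the work grows in proportion to s.length()
// (O(n)), whether or not the caller promises that the length never exceeds 15.
static List<Character> consonants(String s) {
    List<Character> result = new ArrayList<>();
    for (char c : s.toCharArray()) {
        if (Character.isLetter(c) && "aeiouAEIOU".indexOf(c) < 0) {
            result.add(c);
        }
    }
    return result;
}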
Do problem constraints change the time complexity of algorithms?
No.
"If I know for sure that the length of the string is less than, say, 15 characters ..."
We already know the length of the string is less than SIZE_MAX. Knowing an upper fixed bound for the string length does not make the time complexity O(1).
Time complexity remains O(n).
Big-O measures the complexity of algorithms, not of code. This means that Big-O does not know about the physical limitations of computers. A Big-O measure today will be the same in 1 million years when computers, and programmers alike, have evolved beyond recognition.
So restrictions imposed by today's computers are irrelevant for Big-O. Even though any loop is finite in code, that need not be the case in algorithmic terms. The loop may be bounded or unbounded; it is up to the programmer/Big-O analyst to decide, and only they know which algorithm the code intends to implement. If the number of loop iterations is bounded by a constant, the loop has a Big-O complexity of O(1) because there is no asymptotic growth with N. If, on the other hand, the number of loop iterations is unbounded, the Big-O complexity is O(N) because there is asymptotic growth with N.
The above is straight from the definition of Big-O complexity. There are no ifs or buts. The way the OP describes the loop makes it O(1).
A fundamental requirement of big-O notation is that parameters do not have an upper limit. Suppose performing an operation on N elements takes a time precisely equal to 3E24*N*N*N / (1E24+N*N*N) microseconds. For small values of N, the execution time would be proportional to N^3, but as N gets larger the N^3 term in the denominator would start to play an increasing role in the computation.
If N is 1, the time would be 3 microseconds.
If N is 1E3, the time would be about 3E33/1E24, i.e. 3.0E9.
If N is 1E6, the time would be about 3E42/1E24, i.e. 3.0E18
If N is 1E7, the time would be 3E45/1.001E24, i.e. ~2.997E21
If N is 1E8, the time would be about 3E48/2E24, i.e. 1.5E24
If N is 1E9, the time would be 3E51/1.001E27, i.e. ~2.997E24
If N is 1E10, the time would be about 3E54/1.000001E30, i.e. 2.999997E24
As N gets bigger, the time would continue to grow, but no matter how big N gets the time would always be less than 3.000E24 microseconds. Thus, the time required for this algorithm would be O(1), because one could specify a constant k such that the time necessary to perform the computation with size N would be less than k.
For any practical value of N, the time required would be proportional to N^3, but from a big-O standpoint the worst-case time requirement is constant. The fact that the time changes rapidly in response to small values of N is irrelevant to the "big picture" behaviour, which is what big-O notation measures.
It will be O(1) i.e. constant.
This is because for calculating time complexity or worst-case time complexity (to be precise), we think of the input as a huge chunk of data and the length of this data is assumed to be n.
Let us say, we do some maximum work C on each part of this input data, which we will consider as a constant.
In order to get the worst-case time complexity, we need to loop through each part of the input data i.e. we need to loop n times.
So, the time complexity will be:
n x C.
Since you fixed n to be less than 15 characters, n can also be treated as a constant.
Hence in this case:
n = constant and,
(maximum constant work done) = C = constant
So time complexity is n x C = constant x constant = constant i.e. O(1)
Edit
The reason I have said n = constant and C = constant for this case is that, on modern computers, the time difference between calculations for such a small n is so insignificant (compared to n being a very large number) that we can treat it as constant.
Otherwise, every function ever built would take some time, and we couldn't say things like:
lookup time is constant for hashmaps

Big O notation for inverse exponential algorithm

Let's say you had an algorithm with n^(-1/2) complexity, say a scientific algorithm where one sample doesn't give much information, so it takes ages to process, but having many samples to cross-reference makes it faster. Would you represent that as O(n^(-1/2))? Is that even possible theoretically? TL;DR: can you have an inverse exponential time complexity?
You could define O(n^(-0.5)) using this set:
O(n^(-0.5)) := {g(n) : There exist positive constants c and N such that 0<=g(n)<=cn^(-0.5), for n > N}.
The function n^(-1), for example, belongs to this set.
None of the elements of the set above, however, could be an upper bound on the running time of an algorithm.
Note that, for any constant c:
if n > c^2, then c*n^(-0.5) < 1.
This means that your algorithm would do less than one simple operation for a large enough input. Since it must execute a natural number of simple operations, it would have to do exactly 0 operations: nothing at all.
A decreasing running time doesn't make sense in practice (even less if it decreases to zero). If that existed, you would find ways to add dummy elements and increase N artificially.
But most algorithms have at least linear, O(N), complexity (whenever every data element influences the final solution); and even if not, merely the representation of N gets longer and longer, which will eventually increase the running time (as with O(log N)).

Is the best case for my algorithm n=1 because that is the fastest? Is that correct?

Best case is defined as which input of size n is cheapest among all inputs of size n.
Is it right or wrong to say “the best case for my algorithm is n=1 because that is the fastest”? If I give an input of large size N, it will take extra time; if I give an input with a smaller N, it will take less time. So doesn't that mean we depend on the size of the input? And if I search an N-sized array for some number (like 45) and the element is found at the end, does that also count as the worst case? (But where does N come from? Is it already fixed?)
I am confused about all this. If I consider both cases, I mean:
We fix the size of the array at N, i.e. I make an array of N items.
We give an element as the input to search for.
Does that mean the worst case, best case and average case depend on both of the things mentioned above (the size-N array and the type of input)?
Am I right?
n is fixed; you cannot set it to 1: "is cheapest among all inputs of size n". Best case and worst case depend only on the type of input, which must be of size n.
For example, if you do a linear search among n elements, the best case is if you find it immediately on first try, the worst case is if you have to look at all n elements.
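A small Java sketch of that linear search (my own illustration):

// Linear search over n elements: the best case (target at index 0) does one
// comparison, the worst case (target last or absent) looks at all n elements.
static int linearSearch(int[] a, int target) {
    for (int i = 0; i < a.length; i++) {
        if (a[i] == target) {
            return i;  // best case: found on the first try
        }
    }
    return -1;         // worst case: every element was examined
}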
Well, the thing is, it is not the number of inputs that is in question here. Of course if you sort one element it will be fastest, and if you search a one-element list it will be faster. We generalize this notion keeping in mind that the input size is n, and n is fixed with respect to this analysis. We can't say that mergesort with 1 element is faster than quicksort with 2 elements; it's not a valid comparison. With that being said:
Best case: a case which takes the least time to complete; the conditions and the inputs are all as favourable as the algorithm could hope for.
Worst case: the case where the input is such that we run into the highest time.
Average case: the algorithm is run on many different inputs (not that their sizes differ; they won't, the size is fixed at n), and we take the average of the running times over all inputs of the given size n, weighted by the probability distribution.
So to answer your question: it's the type of input that we are talking about, i.e. a property of the input. For example:
For quicksort, the best case is O(n log n), the worst case is O(n^2), and the average case is O(n log n). (The worst case appears when the pivot is chosen as the first element and the input is already sorted.)
Take the idea: even for the best case we are not considering the number of inputs. The best case of quicksort occurs when the pivot we pick happens to divide the array into two exactly equal parts, at every step. Again, the number of inputs we are considering is n.
Check CLRS for the average-case analysis. Work through the math, or at least try to; it's fun to see how that is derived.
When it is stated that something is O(n), that means that the expected time is proportional to the number of elements in the input. This means that if you double the input, then you double the expected time of the work. An example of this is going through an array element by element until you find the result, or adding up all the elements of an array.
O(1) means that the function will take the same amount of time regardless of the amount of input. You'll see this when looking up a value in a hash. It is an indexed lookup, so it doesn't have to go through every element.
Something like O(n^2) means that the effort is proportional to the square of the number of elements involved. You'll see this when running all the combinations of the elements. So an array of 10 would provide 100 different possible inputs to a function with 2 parameters.
Searching an ordered array might be done in O(log(n)), because you can probe the middle element, then eliminate half of the array and never have to search that half.
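A sketch of that halving idea (my own illustration, assuming the array is sorted in ascending order):

// Binary search: each step discards half of the remaining range, so at most
// about log2(n) elements are ever inspected, i.e. O(log n).
static int binarySearch(int[] sorted, int target) {
    int lo = 0, hi = sorted.length - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (sorted[mid] == target) return mid;
        if (sorted[mid] < target) lo = mid + 1; // discard the lower half
        else hi = mid - 1;                      // discard the upper half
    }
    return -1;
}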
It's up to your algorithm. For example, if I want to access an element in an array, it takes the same time whatever the size is, because it takes O(1) time. However, if you use an algorithm that takes O(N) time:
FindMaxElementInAnArray(A)
    a = -∞
    for each i in A
        if i > a
            a = i
    return a
the bigger the array is, the slower the algorithm runs.
And there's a situation like this:
SomeBoredPseudocode(A)
    if A.size() > 100
        error "oops, I don't need such a big array"
    i = 100
    while i != A.size()
        i = i - 1
This one takes O(100-N) time.

How can the worst case for an algorithm have different bounds?

I've been trying to figure this out all day. Some other threads address this, but I really don't understand the answers. There are also many answers that contradict one another.
I understand that an algorithm will never take longer than the upper bound and never be faster than the lower bound. However, I didn't know an upper bound existed for best case time and a lower bound existed for worst case time. This question really threw me in a loop. I can't wrap my head around this... a given run time can have a different upper and lower bound?
For example, if someone asked: "Show that the worst-case running time of some algorithm on a heap of size n is Big Omega(lg(n))". How do you possibly get a lower bound, any bound for that matter, when given a run time?
So, in summary, an algorithm's worst-case upper bound can be different from its worst-case lower bound? How can this be? Once given the case, don't bounds become irrelevant? I'm trying to independently study algorithms and I really need to wrap my head around this first.
The meat of my accepted answer to that question is a function whose running time oscillates between n^2 and n^3 depending on whether n is odd. The point that I was trying to make is that sometimes bounds of the form O(n^k) and Omega(n^k) aren't sufficiently descriptive, even though the worst case running time is a perfectly well defined function (which, like all functions, is its own best lower and upper bound). This happens with more natural functions like n log n, which is Omega(n^k) but not O(n^k) for k ≤ 1, and O(n^k) but not Omega(n^k) for k > 1 (and hence not Theta(n^k) regardless of how we choose a constant k).
Suppose you write a program like this to find the smallest prime factor of an integer:
function lpf(n):
    for i = 2 to n
        if n % i == 0 then return i
If you run the function on the number 10^11 + 3, it will take 10^11 + 2 steps. If you run it on the number 10^11 + 4 it will take just one step. So the function's best-case time is O(1) steps and its worst-case time is O(n) steps.
Big O notation describes efficiency in terms of runtime iterations, generally based on the size of the input data set.
The notation is written in its simplest form, ignoring constant multiples and additive terms but keeping the dominant term. If you have an operation of O(1), it is executed in constant time, no matter the input data.
However, if you have something such as O(N) or O(log(N)), it will execute at different rates depending on the input data.
The high and low bounds describe the largest and smallest numbers of iterations, respectively, that an algorithm can take.
Example: for O(N), the high bound corresponds to the largest input data and the low bound to the smallest.
Extra sources:
Big O Cheat Sheet and MIT Lecture Notes
UPDATE:
Looking at the Stack Overflow question mentioned above, that algorithm is broken into three parts, where it has 3 possible types of runtime depending on the data. Really, this is three different algorithms designed to handle different data values. An algorithm is generally classified with just one notation of efficiency, namely the one that holds for ALL possible values of N.
In the case of O(N^2), larger data will take quadratically longer, while a smaller input will proceed quickly. The algorithm determines how quickly a data set will be processed, yet bounds are given depending on the range of data the algorithm is designed to handle.
I will try to explain it using the quicksort algorithm.
In quicksort you have an array and choose an element as pivot. The next step is to partition the input array into two arrays. The first one will contain elements < pivot and the second one elements > pivot.
Now assume you apply quicksort to an already sorted list and the pivot element is always the last element of the array. The result of the partition will be an array of size n-1 and an array of size 1 (the pivot element). This results in a runtime of O(n^2). Now assume instead that the pivot element always splits the array into two equally sized arrays; in every step the array size is cut in half. This results in O(n log n). I hope this example makes it a bit clearer for you.
Another well-known sorting algorithm is mergesort. Mergesort always has a runtime of O(n log n). In mergesort you cut the array down until only one element is left, and then climb back up the call stack, merging the one-element arrays, after that the arrays of size two, and so on.
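For reference, a Java sketch of quicksort with the last element as pivot (my own illustration using the Lomuto partition scheme, not code from the answer). Feeding it an already sorted array triggers the unbalanced n-1 / 1 splits described above:

// Quicksort with the last element as pivot. On an already sorted array every
// partition produces sub-arrays of sizes n-1 and 1, giving O(n^2) behaviour;
// a pivot that always splits evenly gives O(n log n).
static void quicksort(int[] a, int lo, int hi) {
    if (lo >= hi) return;
    int pivot = a[hi];
    int i = lo;
    for (int j = lo; j < hi; j++) {
        if (a[j] < pivot) {
            int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
            i++;
        }
    }
    int tmp = a[i]; a[i] = a[hi]; a[hi] = tmp; // place the pivot
    quicksort(a, lo, i - 1);
    quicksort(a, i + 1, hi);
}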
Let's say you implement a set using an array. To insert an element you simply put it in the next available bucket. If there is no available bucket, you increase the capacity of the array by a value m.
For the insert algorithm, "there is not enough space" is the worst case.
insert(S, e)
    if size(S) >= capacity(S)
        reserve(S, size(S) + m)
    put(S, e)
Assume we never delete elements. By keeping track of the last available position, put, size and capacity are Θ(1) in time and space.
What about reserve? If it is implemented like realloc in C, in the best case you just extend the existing allocation in place (the best case for reserve); otherwise you have to move all the existing elements as well (the worst case for reserve).
The worst-case lower bound for insert is the best case of reserve(), which is linear in m if we don't nitpick. insert in the worst case is Ω(m) in space and time.
The worst-case upper bound for insert is the worst case of reserve(), which is linear in m+n. insert in the worst case is O(m+n) in space and time.

Trying to prove/disprove complexity analysis of an algorithm

I am not looking for an algorithm to the above question. I just want someone to comment on my answer.
I was asked the following question in an interview:
How to get top 100 numbers out of a large set of numbers (can't fit in memory)
And this is what I said:
Divide the numbers into batches of 1000 each. Sort each batch in "O(1)" time. The total time taken is O(n) up to this point. Now take the first 100 numbers from the 1st and 2nd batches (in O(1)). Then take the first 100 from those and the 3rd batch, and so on. This will take O(n) in total, so it is an O(n) algorithm.
The interviewer replied that sorting a batch of 1000 numbers won't take O(1) time, and neither will picking out the first 100 from a batch. After a lot of discussion he said he doesn't have a problem with the algorithm taking O(n) time; he just has a problem with me saying that sorting a batch takes O(1) time.
My explanation was that 1000 doesn't depend on the input (n). Irrespective of what n is, I'll always make batches of 1000 numbers, and if you do the calculation, the sorting takes O(1000*log 1000), which is essentially O(1).
If you have to make proper calculations, it would be
1000*log 1000 to sort one batch
sort (n/1000) such batches
takes 1000 * log 1000 * n/1000 = O(n*log(1000)) time = O(n) time
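For concreteness, here is a Java sketch of the batching approach described above (my own illustration; the input is assumed to already be split into batches of 1000):

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Keeps only the 100 largest numbers seen so far. Each batch is merged with the
// current top 100 and sorted, which is a fixed amount of work per batch, so the
// whole pass over n numbers is O(n).
static List<Integer> top100ByBatches(List<List<Integer>> batches) {
    List<Integer> top = new ArrayList<>();
    for (List<Integer> batch : batches) {
        top.addAll(batch);
        top.sort(Comparator.reverseOrder());  // at most 1100 elements: the "O(1)" step
        if (top.size() > 100) {
            top = new ArrayList<>(top.subList(0, 100));
        }
    }
    return top;
}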
I also asked a lot of my friends about this, and although they agreed with me, they did so only partially.
So I want to know if my reasoning is 100% accurate (please criticize even if it is 99% correct).
Just remember, this post is not asking for the answer to the above posted question. I have already found a better answer at Retrieving the top 100 numbers from one hundred million of numbers
The interviewer is wrong, but it's useful to consider why. What you're saying is correct, but there is an unstated assumption that you depend on. Possibly, the interviewer is making a different assumption.
If we say that sorting 1000 numbers is O(1), we're being a bit informal. Specifically, what we mean is that, in the limit as N goes to infinity, there is a constant greater than or equal to the cost of sorting the 1000 numbers. Since the cost of sorting the fixed-size set is independent of N, the limit isn't going to depend on N, either. Thus, it's O(1) as N goes to infinity.
A generous interpretation is that the interviewer wanted you to treat the sorting step differently. You could be more precise and say that it was O(M*log(M)) as M goes to infinity (or M goes to N, if you prefer), with M representing the size of the batches of numbers. That would make an overall O(N*log(M)) for your approach, as N and M both approach infinity. Of course, that wasn't the limit you described.
Strictly speaking, it's meaningless to say that something is O(1) without specifying the limit. One usually doesn't need to bother for algorithms, because it's clear from the context: the limit commonly taken is as a single parameter approaches infinity. Your description is correct when considering only N, but you could consider more than just N.
It is indeed O(n), but the constants are very high, especially considering that you will need to read each element from the file system twice [once in the sort, and once in the second phase], and file system access is much slower than memory access. Since this will probably be the bottleneck of the algorithm, your solution will probably run about twice as slowly as one using a priority queue.
Note that for a constant top 100, even the naive solution is O(n):
for each i in range(1, 100):
    x <- find highest element
    remove x from the list
    append x to the solution
This solution is also O(n), since you have 100 iterations, and in each iteration you need 2 traversals of the list [with some optimisations, 1 traversal per iteration can be done]. So the total number of traversals is strictly smaller than 1,000, and there are no other factors that depend on the size; thus the solution is O(n), but it is definitely a terrible solution.
I think the interviewer meant that your solution, though O(n), has very large constants.
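For reference, a Java sketch of the priority-queue approach mentioned above (my own illustration; it assumes the numbers arrive as an IntStream and keeps the 100 largest seen so far in a min-heap):

import java.util.PriorityQueue;
import java.util.stream.IntStream;

// One pass over the data; each element costs at most O(log 100), which is a
// constant, so the whole run is O(n) with a much smaller constant factor.
static PriorityQueue<Integer> top100(IntStream numbers) {
    PriorityQueue<Integer> heap = new PriorityQueue<>(); // min-heap of the current top 100
    numbers.forEach(x -> {
        if (heap.size() < 100) {
            heap.offer(x);
        } else if (x > heap.peek()) {
            heap.poll();      // drop the smallest of the current top 100
            heap.offer(x);
        }
    });
    return heap;
}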

Resources