P(x,y,z){
print x
if(y!=x) print y
if(z!=x && z!=y) print z
}
This is a trivial algorithm; the values x, y, z are chosen randomly from {1, ..., r} with r >= 1.
I'm trying to determine the average case complexity of this algorithm and I measure complexity based on the number of print statements.
The best case here is T(n) = 1, i.e. O(1), when x = y = z, and the probability of that is 1/3.
The worst case here is still T(n) = 3, i.e. still O(1), when x, y, and z are all distinct, and the probability is 2/3.
But when it comes to mathematically deriving the average case:
The sample space has n possible inputs, each occurring with probability 1/n.
So, how do I calculate average case complexity? (This is where I draw a blank..)
Your algorithm has three cases, grouped by how many values get printed:
Only x is printed. This requires y = x and z = x; once x is chosen, y matches it with probability 1/r and so does z, so the probability is 1/r^2. Cost = 1.
Exactly two values are printed. Either y = x and z != x (probability (r - 1)/r^2), or y != x and z equals x or y (probability ((r - 1)/r) * (2/r) = 2(r - 1)/r^2). Total probability: 3(r - 1)/r^2. Cost = 2.
All three numbers are distinct, so all three are printed. The probability is ((r - 1)/r) * ((r - 2)/r) = (r - 1)(r - 2)/r^2. Cost = 3.
Thus, the average case can be computed as:
1 * 1/r^2 + 2 * 3(r - 1)/r^2 + 3 * (r - 1)(r - 2)/r^2 = 3 - 3/r + 1/r^2 == O(1)
Edit: The above expression is O(1), since it is bounded by a constant (it always lies between 1 and 3) no matter how large r gets.
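If you want to sanity-check that expression, here is a small brute-force sketch (my own illustrative code, not from the question) that enumerates all r^3 equally likely triples for a few small r and compares the exact average number of prints against the closed form:

from fractions import Fraction
from itertools import product

def expected_prints(r):
    # Average the print count over all r^3 equally likely (x, y, z) triples.
    total = Fraction(0)
    for x, y, z in product(range(1, r + 1), repeat=3):
        cost = 1                      # print x always happens
        if y != x:
            cost += 1                 # print y
        if z != x and z != y:
            cost += 1                 # print z
        total += cost
    return total / r**3

for r in (1, 2, 5, 10):
    exact = expected_prints(r)
    closed_form = 3 - Fraction(3, r) + Fraction(1, r**2)
    print(r, exact, closed_form, exact == closed_form)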
The average case will be somewhere between the best and worst cases; for this particular problem, that's all you need (at least as far as big-O).
1) Can you program the general case at least? Write the (pseudo)-code and analyze it, it might be readily apparent. You may actually program it suboptimally and there may exist a better solution. This is very typical and it's part of the puzzle-solving of the mathematics end of computer science, e.g. it's hard to discover quicksort on your own if you're just trying to code up a sort.
2) If you can, then run a Monte Carlo simulation and graph the results. That is, for N = 1, 5, 10, 20, ..., 100, 1000, or whatever sample sizes are realistic, run 10000 trials and plot the average time. If you're lucky, plotting X = sample size against Y = average time over 10000 runs at that sample size will give you a nice line, or parabola, or some other easy-to-model curve (a minimal sketch of such a simulation follows).
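For point (2), a minimal Monte Carlo sketch for this particular P might look like the following; the trial count and the list of r values are arbitrary choices, not anything prescribed by the question:

import random

def prints_for_trial(r):
    # One run of P(x, y, z) with x, y, z drawn uniformly from {1, ..., r};
    # returns how many print statements would execute.
    x, y, z = (random.randint(1, r) for _ in range(3))
    cost = 1
    if y != x:
        cost += 1
    if z != x and z != y:
        cost += 1
    return cost

trials = 10000
for r in (1, 5, 10, 100, 1000):
    avg = sum(prints_for_trial(r) for _ in range(trials)) / trials
    print(r, avg)   # the averages flatten out near 3 as r grows, i.e. constant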
So I'm not sure if you need help on (1) finding or coding the algorithm or (2) analyzing it; you will probably want to revise your question to specify this.
P(x,y,z){
1. print x
2. if (y != x)
3.     print y
4. if (z != x && z != y)
5.     print z
}
Line 1: takes a constant time c1 (c1: print x)
Line 2: takes a constant time c2 (c2: condition test)
Line 3: takes a constant time c3 (c3: print y)
Line 4: takes a constant time c4 (c4: condition test)
Line 5: takes a constant time c5 (c5: print z)
Analysis:
Since your function P(x,y,z) does not depend on an input size r, the program takes a constant amount of time to run: the total time is c1 + (c2 + c3) + (c4 + c5), and each of c1, ..., c5 is a constant. Summing these up, the Big O of the function P(x,y,z) is O(1), which indicates a constant amount of time. If the function P(x,y,z) instead iterated from 1 to r, then the complexity of your snippet would change and would be expressed in terms of the input size r.
Best case: O(1)
Average case: O(1)
Worst case: O(1)
I am learning about calculating the time complexity of an algorithm, and there are two examples where I can't get my head around why their time complexity is different from what I calculated.
After doing the reading, I learned that a for-loop whose counter increases once each iteration has time complexity O(n), and that a nested for-loop with different iteration bounds is O(n*m).
This is the first question where I provided the time complexity to be O(n) but the solution says it was O(1):
function PrintColours():
    colours = { "Red", "Green", "Blue", "Grey" }
    foreach colour in colours:
        print(colour)
This is the second one, where I provided the time complexity to be O(n^2) but the solution says it's O(n):
function CalculateAverageFromTable(values, total_rows, total_columns):
    sum = 0
    n = 0
    for y from 0 to total_rows:
        for x from 0 to total_columns:
            sum += values[y][x]
            n += 1
    return sum / n
What am I getting wrong with these two questions?
There are several ways of denoting the runtime of an algorithm. One of the most commonly used notations is Big-O notation.
Link to Wikipedia: https://en.wikipedia.org/wiki/Big_O_notation
big O notation is used to classify algorithms according to how their
run time or space requirements grow as the input size grows.
Now, while the mathematical definition of the notation might be daunting, you can think of it as a polynomial function of the input size where you strip away all the constants and lower-degree terms.
For example, ax^2 + bx + c in Big-O would be O(x^2) (we stripped away the constants a, b, and c and the lower-degree term bx).
Now, let's consider your examples. But before doing so, let's assume each operation takes a constant time c.
First example:
The input is colours = { "Red", "Green", "Blue", "Grey" }, and you are looping through these elements in your for loop. Since this collection always has four elements, the runtime is 4 * c. That is a constant runtime, and constant runtime is written as O(1) in Big-O.
Second example:
The inner for loop runs for total_columns times and it has two operations
for x from 0 to total_columns:
    sum += values[y][x]
    n += 1
So it takes 2c * total_columns time. The outer for loop runs total_rows times, resulting in a total time of total_rows * (2c * total_columns) = 2c * total_rows * total_columns. In Big-O this is written as O(total_rows * total_columns) (we stripped away the constant).
When you get out of the outer loop, n, which was set to 0 initially, has become total_rows * total_columns, and that's why the solution states the answer as O(n).
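If it helps to see that count made explicit, here is a rough instrumented version in Python (my own illustrative translation of the pseudocode, with made-up example data) that tallies how many times the inner-loop body runs:

def calculate_average_from_table(values, total_rows, total_columns):
    # Same logic as the pseudocode, plus a counter for inner-loop executions.
    operations = 0
    total = 0
    n = 0
    for y in range(total_rows):
        for x in range(total_columns):
            total += values[y][x]
            n += 1
            operations += 1
    return total / n, operations

table = [[1, 2, 3], [4, 5, 6]]
print(calculate_average_from_table(table, 2, 3))   # (3.5, 6): 6 = 2 rows * 3 columns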
One good definition of time complexity is:
"It is the number of operations an algorithm performs to complete its
task with respect to the input size".
For the second question, the input size can be defined as X = total_rows * total_columns. Then, what is the number of operations? It is X again, because there will be X additions from the operation sum += values[y][x] (neglect the increment n += 1 for simplicity). Now suppose we double the array size from X to 2*X. How many operations will there be? 2*X again. As you can see, the increase in the number of operations is linear as we increase the input size, which makes the time complexity O(N).
function CalculateAverageFromTable(values, total_rows, total_columns):
    sum = 0
    n = 0
    for y from 0 to total_rows:
        for x from 0 to total_columns:
            sum += values[y][x]
            n += 1
    return sum / n
For your first question, the reason is that colours is a hard-coded set of four elements (in Python, curly braces with comma-separated values define a set). Looping over this fixed-size collection takes a constant number of steps regardless of any input, which is why it is O(1).
I'm trying to understand a formula for when we should use quicksort. For instance, we have an array with N = 1_000_000 elements. If we will search it only once, we should use a simple linear search, but if we'll search it 10 times, we should sort the array, O(n log n), first. How can I detect the threshold, for a given input array size, at which I should sort and then use binary search rather than search linearly?
You want to solve an inequality that roughly might be described as
t * n > C * n * log(n) + t * log(n)
where t is the number of searches and C is some constant for the sort implementation (to be determined experimentally). Once you have evaluated this constant, you can solve the inequality numerically (with some uncertainty, of course).
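As a rough sketch of that numerical step, assuming base-2 logarithms and treating C as a constant you would measure for your own sort implementation:

import math

def breakeven_searches(n, C):
    # Smallest number of searches t for which t linear scans cost more than
    # sorting once plus t binary searches, under the crude model above.
    t = 1
    while t * n <= C * n * math.log2(n) + t * math.log2(n):
        t += 1
    return t

print(breakeven_searches(1_000_000, C=5))   # roughly C * log2(n), about 100 here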
Like you already pointed out, it depends on the number of searches you want to do. A good threshold can come out of the following statement:
n*log[b](n) + x*log[2](n) <= x*n/2
where x is the number of searches, n the input size, and b the base of the logarithm for the sort, depending on the partitioning you use.
When this statement evaluates to true, you should switch methods from linear search to sort and search.
Generally speaking, a linear search through an unordered array will take n/2 steps on average, though this average will only play a big role once x approaches n. If you want to stick with big Omicron or big Theta notation then you can omit the /2 in the above.
Assuming n elements and m searches, with crude approximations
the cost of the sort will be C0 * n * log n,
the cost of the m binary searches C1 * m * log n,
the cost of the m linear searches C2 * m * n,
with C2 ~ C1 < C0.
Now you compare
C0 * n * log n + C1 * m * log n vs. C2 * m * n
or
C0 * n * log n / (C2 * n - C1 * log n) vs. m
For reasonably large n, the breakeven point is about C0 * log n / C2.
For instance, taking C0 / C2 = 5, n = 1000000 gives m = 100.
You should plot the complexities of both operations.
Linear search: O(n)
Sort and binary search: O(n log n + log n)
In the plot, you will see for which values of n it makes sense to choose the one approach over the other.
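A quick way to eyeball that crossover (matplotlib is assumed to be available; m, the number of searches, and the constants are placeholders you would tune):

import numpy as np
import matplotlib.pyplot as plt

n = np.arange(2, 1_000_000, 1000)
m = 100                                            # number of searches, pick your own
linear = m * n                                     # m linear searches
sort_then_search = n * np.log2(n) + m * np.log2(n) # one sort + m binary searches

plt.plot(n, linear, label="m linear searches")
plt.plot(n, sort_then_search, label="sort once + m binary searches")
plt.xlabel("n (array size)")
plt.ylabel("approximate operations")
plt.legend()
plt.show()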
This actually turned into an interesting question for me as I looked into the expected runtime of a quicksort-like algorithm when the expected split at each level is not 50/50.
The first question I wanted to answer was: for random data, what is the average split at each level? It surely must be greater than 50% (for the larger subdivision). Well, given an array of size N of random values, the smallest value gives a subdivision of (1, N-1), the second smallest value gives a subdivision of (2, N-2), and so on. I put this in a quick script:
split = 0.0
for x in range(10000):
    # Fraction of the array taken by the larger side of the split, for each pivot rank.
    split += max(x, 10000 - x) / 10000
split /= 10000
print(split)
And got exactly 0.75 as an answer. I'm sure I could show that this is always the exact answer, but I wanted to move on to the harder part.
Now, let's assume that even a 25/75 split follows an n log n progression for some unknown logarithm base. That means that num_comparisons(n) = n * log_b(n), and the question is to find b via statistical means (since I don't expect that model to be exact at every step). We can do this with a clever application of least-squares fitting after we use a logarithm identity to get:
C(n) = n * log(n) / log(b)
where now the logarithm can have any base, as long as log(n) and log(b) use the same base. This is a linear equation just waiting for some data! So I wrote another script that generated an array xs filled with n * log(n) and an array ys filled with the measured C(n), and used numpy to tell me the slope of that least-squares fit, which I expect to equal 1 / log(b). I ran the script and got b inside [2.16, 2.3] depending on how high I set n (I varied n from 100 to 100,000,000). The fact that b seems to vary depending on n shows that my model isn't exact, but I think that's okay for this example.
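The script itself isn't shown here, but the fitting step might look roughly like the following sketch; the "measurements" below are generated from the model with a hidden base purely to exercise the technique, and real comparison counts would go in ys instead:

import math
import numpy as np

true_b = 2.25                                   # hidden base used only to fake the data
sizes = np.array([10.0**k for k in range(2, 8)])
xs = sizes * np.log(sizes)                      # n * ln(n)
ys = sizes * np.log(sizes) / math.log(true_b)   # model counts C(n) = n * log_b(n)

slope = np.polyfit(xs, ys, 1)[0]                # C(n) ~ slope * n * ln(n)
print("recovered base:", math.exp(1.0 / slope)) # ~2.25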
To actually answer your question now, with these assumptions, we can solve for the cutoff point of when: N * n/2 = n*log_2.3(n) + N * log_2.3(n). I'm just assuming that the binary search will have the same logarithm base as the sorting method for a 25/75 split. Isolating N you get:
N = n*log_2.3(n) / (n/2 - log_2.3(n))
If your number of searches N exceeds the quantity on the RHS (where n is the size of the array in question) then it will be more efficient to sort once and use binary searches on that.
I'm trying to understand the time complexity (Big-O) of the following algorithm which finds x such that g^x = y (mod p) (i.e. finding the discrete logarithm of y with base g modulo p).
Here's the pseudocode:
discreteLogarithm(y, g, p)
    y := y mod p
    a := g
    x := 1
    until a = y
        a := (a * g) mod p
        x++
    return x
end
I know that the time complexity of this approach is exponential in the number of binary digits in p - but what does this mean and why does it depend on p?
I understand that the complexity is determined by the number of loops (until a = y), but where does p come into this, what's this about binary digits?
The run time depends upon the order of g mod p. The worst case is order (p-1)/2, which is O(p). The run time is thus O(p) modular multiplies. The key here is that p has log p bits, where I use 'log' to mean base 2 logarithm. Since p = 2^( log p ) -- mathematical identity -- we see the run time is exponential in the number of bits of p. To make it more clear, let's use b=log p to represent the number of bits. The worst case run time is O(2^b) modular multiplies. Modular multiplies take O(b^2) time, so the full run time is O(2^b * b^2) time. The 2^b is the dominant term.
Depending upon your particular p and g, the order could be much smaller than p. However, some heuristics in analytical number theory show that on average, it is order p.
EDIT: If you are not familiar with the concept of 'order' from group theory, here is a brief explanation. If you keep multiplying g by itself mod p, it eventually comes to 1. The order is the number of multiplications before that happens.
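As a concrete illustration of that definition (assuming gcd(g, p) = 1 so the loop terminates), a direct computation of the order looks like:

def multiplicative_order(g, p):
    # Count how many multiplications by g (mod p) it takes to reach 1.
    a = g % p
    order = 1
    while a != 1:
        a = (a * g) % p
        order += 1
    return order

print(multiplicative_order(3, 7))   # 6: the powers of 3 mod 7 are 3, 2, 6, 4, 5, 1
print(multiplicative_order(2, 7))   # 3: the powers of 2 mod 7 are 2, 4, 1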
As part of a programming assignment I saw recently, students were asked to find the big O value of their function for solving a puzzle. I was bored, and decided to write the program myself. However, my solution uses a pattern I saw in the problem to skip large portions of the calculations.
Big O shows how the time increases as n scales, but here, once n reaches the point where the pattern resets, the time it takes drops back to low values as well. My thought was that it was O(n log n % k), where k+1 is when it resets. Another thought is that, since there is a hard limit, the value is O(1), since that is the big O of any constant. Is one of those right, and if not, how should the limit be represented?
As an example of the reset, the k value is 31336.
At n=31336, it takes 31336 steps but at n=31337, it takes 1.
The code is:
def Entry(a1, q):
    F = [a1]
    lastnum = a1
    q1 = q % 31336
    rows = q // 31336          # integer division
    for i in range(1, q1):
        lastnum = (lastnum * 31334) % 31337
        F.append(lastnum)
    F = MergeSort(F)
    print(lastnum * rows + F.index(lastnum) + 1)
MergeSort is a standard merge sort with O(nlogn) complexity.
It's O(1), and you can derive this from big O's definition. If f(x) is the number of steps your solution performs for input x, then f(x) is bounded above by the work of sorting at most 31336 elements, so
f(x) <= M
with any M > 470040 (that's roughly n log n for n = 31336) and any x > 0. And this implies, from the definition, that
f(x) = O(1).
Well, an easy way that I use to think about big-O problems is to think of n as so big it may as well be infinity. If you don't get particular about byte-level operations on very big numbers (computing q % 31336 for an arbitrarily large q is not actually a constant-time operation), then your intuition is right about it being O(1).
Imagining q as close to infinity, you can see that q % 31336 is obviously between 0 and 31335, as you noted. This fact limits the number of array elements, which limits the sort time to be some constant amount (n * log(n) ==> 31335 * log(31335) * C, for some constant C). So it is constant time for the whole algorithm.
But, in the real world, multiplication, division, and modulus all do scale based on input size. You can look up Karatsuba algorithm if you are interested in figuring that out. I'll leave it as an exercise.
If there are a few different instances of this problem, each with its own k value, then the complexity of the method is not O(1), but instead O(k·ln k).
Mary got a magic ball for her birthday. The ball, when thrown from
some height, bounces to double that height. Mary has thrown the
ball from her balcony, which is at height x above the ground. Help her
calculate how many bounces are needed for the ball to reach the
height w.
Input: One integer z (1 ≤ z ≤ 10^6) as the number of test cases. For
every test, integers x and w (1 ≤ x ≤ 10^9, 0 ≤ w ≤ 10^9).
Output: For every case, one integer equal to the number of bounces
needed for the ball to reach w should be printed.
OK, so, though it looks unspeakably easy, I can't find a more efficient way to solve it than a simple, dumb, brute-force loop that multiplies x by 2 until it's at least w. For a maximal test case, that will take a horrific amount of time, of course. Then I thought of reusing previous cases, which saves quite a bit of time, provided that we can find the closest smaller result among the previous cases quickly (O(1)?), which, however, I can't implement (and don't know if it's possible). How should this be done?
You are essentially trying to solve the problem
2^i * x = w
and then finding the smallest integer that is at least i. Solving, we get
2^i = w / x
i = log_2(w / x)
So one approach would be to compute this value explicitly and then take the ceiling. Of course, you'd have to watch out for numerical instability when doing this. For example, if you are using floats to encode the values and let w = 8,000,001 and x = 1,000,000, you will end up getting the wrong answer (3 instead of 4). If you use doubles to hold the value, you will also get the wrong answer when x = 1 and w = 536870912 (reporting 30 instead of 29, since 1 * 2^29 = 536870912, but due to inaccuracies in the double the answer is erroneously rounded up to 30). It looks like we'll have to switch to a different approach.
Let's revisit your initial solution: just doubling the value of x until it reaches or exceeds w should be perfectly fine here. The maximum number of times you can double x until it reaches w is given by log_2(w / x), and since w / x is at most one billion, this iterates at most log_2(10^9) times, which is about thirty. Doing thirty iterations of a multiply by two is probably going to be extremely fast. More generally, if the upper bound of w / x is U, then this takes at most O(log U) time per query. If you have k (x, w) pairs to check, this takes time O(k log U).
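A sketch of that doubling loop (it stays in integer arithmetic, so it also sidesteps the floating-point issues mentioned above; whether w <= x should count as 0 or 1 bounces depends on how the problem is judged):

def bounces(x, w):
    # Double the height until it reaches w; runs at most about log2(w / x) times.
    height = x
    count = 0
    while height < w:
        height *= 2
        count += 1
    return count

print(bounces(1_000_000, 8_000_001))   # 4
print(bounces(1, 536870912))           # 29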
If you're not satisfied with doing this, though, there's another very fast algorithm you could try. Essentially, you want to compute log_2(w / x). You could start off by creating a table that lists all powers of two along with their logarithms. For example, your table might look like
T[1] = 0
T[2] = 1
T[4] = 2
T[8] = 3
...
You could then compute w / x and do a binary search to figure out which range of the table the value lies in. The upper bound of that range is then the number of times the ball must bounce. This means that if you have k different pairs to inspect, and if you know that the maximum ratio of w / x is U, creating this table takes O(log U) time and each query then takes time proportional to the log of the size of the table, which is O(log log U). The overall runtime is then O(log U + k log log U), which is extremely good. Given that you're dealing with at most one million problem instances and that U is one billion, k log log U is just under five million, and log U is about thirty. A rough sketch of this approach is below.
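Here is a rough sketch of that table-plus-binary-search idea using Python's bisect module (the 10^9 bound on w / x comes from the problem statement; the ceiling division keeps everything in integers):

from bisect import bisect_left

# Precompute powers of two up to the largest possible w / x (10^9 here).
powers = [1]
while powers[-1] < 10**9:
    powers.append(powers[-1] * 2)

def bounces_from_table(x, w):
    # Index of the first power of two that is >= ceil(w / x).
    if w <= x:
        return 0
    ratio = -(-w // x)          # ceiling division without floats
    return bisect_left(powers, ratio)

print(bounces_from_table(1_000_000, 8_000_001))   # 4
print(bounces_from_table(1, 536870912))           # 29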
Finally, if you're willing to do some perversely awful stuff with bitwise hackery, since you know for a fact that w/x fits into a 32-bit word, you can use this bitwise trickery with IEEE doubles to compute the logarithm in a very small number of machine operations. This would probably be faster than the above two approaches, though I can't necessarily guarantee it.
Hope this helps!
Use this formula to calculate the number of bounces for each test case.
ceil( log(w/x) / log(2) )
This is pseudo-code, but it should be pretty simple to convert it to any language. Just replace log with a function that finds the logarithm of a number in some specific base and replace ceil with a function that rounds up a given decimal value to the next int above it (for example, ceil(2.3) = 3).
See http://www.purplemath.com/modules/solvexpo2.htm for why this works (in your case, you're trying to solve the equation x * 2 ^ n = w for an integer n, and you should start by dividing both sides by x).
EDIT:
Before using this method, you should check that w > x, and return 1 if it is not. (The ball always has to bounce at least once.)
Also, it has been pointed out that inaccuracies in floating-point values may cause this method to sometimes fail. You can work around this by checking whether x * 2 ^ (n - 1) >= w, where n is the result of the formula above, and if so returning n - 1 instead of n.
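Putting the formula and both edits together, a sketch might look like this (following the convention above that w <= x counts as one bounce):

from math import ceil, log

def bounces(x, w):
    # ceil(log2(w / x)), with the guard described above against the float
    # estimate being rounded up one step too far.
    if w <= x:
        return 1
    n = ceil(log(w / x) / log(2))
    if x * 2 ** (n - 1) >= w:
        n -= 1
    return n

print(bounces(1, 536870912))          # 29, not 30
print(bounces(1_000_000, 8_000_001))  # 4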