Time complexity for an infinite loop - big-O

I have understood very little about computational complexity, but can you actually calculate the run-time complexity of a function that runs infinitely, e.g.:
for (i = 0; i < 10; i *= 2)
{
    [Algo / Lines of code]
}
Can you help me out?

The run-time complexity depends on a parameter n, e.g. the size of a data set to be processed, and describes how the runtime changes when n changes, independent of any "technical" constraints, e.g. the speed of the CPU.
Since your algorithm does not depend on any parameter, and its runtime is always infinite, one cannot determine a run-time complexity for it.
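To make that concrete, here is a minimal sketch (mine, not from the question): the posted loop never terminates because i starts at 0 and 0 * 2 is still 0, and even a fixed variant that starts at 1 runs a fixed number of iterations that depends on no input, so there is no parameter n to state a complexity in terms of.

#include <stdio.h>

int main(void) {
    /* As posted: i starts at 0, and 0 * 2 is still 0, so the condition
       i < 10 never becomes false -- the loop is infinite. */
    /* for (int i = 0; i < 10; i *= 2) { ... } */

    /* A variant that starts at 1 does terminate, but always after exactly
       four iterations (i = 1, 2, 4, 8), independent of any input. */
    for (int i = 1; i < 10; i *= 2)
        printf("i = %d\n", i);
    return 0;
}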


Calculating space complexity

I was given this sample of code and I'm not sure what the space complexity will be here.
void f1(int n){
    int b = 2;
    while (n >= 1){
        b *= b;
        n /= 2;
    }
    free(malloc(b));
}
As for the loop, it runs log(n) times, but the variable b increases exponentially.
That's why I'm not sure what's going on.
Thanks for any help regarding this :)
First of all, if you ran that as it is, you would get an overflow, since most numeric data types have a limited size (e.g. 32 bits). Therefore I will assume that you run this algorithm on a Turing machine, which is an idealized computer model. In that case n will be much smaller than the last value of b. In addition, I assume that your goal is to compute the last value of b, so you need to write the value of b to the memory at the end (say, in binary). Keeping this in mind, you only need enough space to record the values of b on the memory (tape), and you wouldn't need much more memory than it takes to record the last value b reaches. Therefore your space complexity is $O(2^{2^{\log_2(n)}})$, which is simply $O(2^n)$.
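To see the growth concretely, here is a small sketch (mine, with an assumed sample input): instead of storing b, which overflows a real integer type almost immediately, it tracks the exponent e with b = 2^e. Since b *= b squares b, it doubles e, so after the roughly log2(n) iterations e ends up between n and 2n, i.e. b is on the order of 2^n, which is also how many bytes free(malloc(b)) requests.

#include <stdio.h>

int main(void) {
    int n = 1000;                 /* assumed sample input */
    unsigned long long e = 1;     /* b starts at 2 == 2^1 */
    int iterations = 0;
    while (n >= 1) {
        e *= 2;                   /* b *= b  =>  the exponent doubles */
        n /= 2;
        iterations++;
        printf("after %2d iteration(s): b = 2^%llu\n", iterations, e);
    }
    return 0;
}

For n = 1000 this prints ten lines and ends with b = 2^1024, matching a loop that runs about log2(n) times while b grows doubly exponentially in the iteration count.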

Can the efficiency of an algorithm be modelled as a function between input size and time?

Consider the following algorithm (just as an example as the implementation is obviously inefficient):
def add(n):
    for i in range(n):
        n += 1
    return n
The program adds a number to itself and returns it. Now the efficiency of an algorithm is sometimes modelled as a function from the size of the input to the number of primitive steps the algorithm has to compute. In this case the input is an integer, n, and as n increases, the number of steps necessary to complete the algorithm also increases (in this case linearly). But is it true that the size of the input increases? Let's assume that the machine the program is running on represents integers in 8 bits. So if I increase the hypothetical input from 3 to 7, for example, the number of bits involved remains the same: 00000011 -> 00000111. However, the steps necessary to compute the algorithm increase. So it seems that it's not always true that algorithmic efficiency can be modelled as a relation between input size and steps to compute. Could somebody explain to me where I go wrong, or, if I don't go wrong, why it still makes sense to model the efficiency of an algorithm as a function from the size of the input to the number of primitive steps to be computed?
Let S be the size of the input n. (Normally we'd use n for this size, but since the argument is also called n, that's confusing.) For positive n there's a relation between S and n, namely S = floor(log2(n)) + 1, the number of bits needed to write n. The program loops n times, and since n < 2^S, it loops at most 2^S times. You can also show it loops at least 1/2 * 2^S times, so the runtime (measured in loop iterations) is Theta(2^S).
This shows there's a way to model the runtime as a function of the size, even if it's not exact.
As to whether it makes sense: in your example it doesn't much, but if your input is an array to be sorted, taking the size to be the number of elements in the array does make sense. (And that's typically what's used, for example, to model the number of comparisons done by different sorting algorithms.)
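To make the relationship concrete, here is a small sketch (mine, with assumed sample values, including the 3 and 7 from the question): it prints the bit-size S of each n next to the iteration count (which is n itself), illustrating the bounds 2^(S-1) <= n < 2^S used above.

#include <stdio.h>

/* Number of bits needed to write n in binary, i.e. floor(log2(n)) + 1. */
static int bit_size(unsigned int n) {
    int s = 0;
    while (n > 0) {
        s++;
        n >>= 1;
    }
    return s;
}

int main(void) {
    unsigned int samples[] = {3, 7, 8, 1000, 1u << 20};
    for (int i = 0; i < 5; i++) {
        unsigned int n = samples[i];
        int S = bit_size(n);
        printf("n = %7u: S = %2d bits, loop iterations = %7u, n lies in [%u, %u)\n",
               n, S, n, 1u << (S - 1), 1u << S);
    }
    return 0;
}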

Does O(1) mean an algorithm takes one step to execute a required task?

I thought it meant it takes a constant amount of time to run. Is that different than one step?
O(1) is a class of functions. Namely, it includes functions bounded by a constant.
We say that an algorithm has complexity O(1) if and only if the number of steps it takes, as a function of the size of the input, is bounded by an (arbitrary) constant. This function can be constant, or it can grow, or behave chaotically, or undulate like a sine wave. As long as it never exceeds some fixed constant, it's O(1).
For more information, see Big O notation.
It means that even if you increase the size of whatever the algorithm is operating on, the number of calculations required to run remains the same.
More specifically it means that the number of calculations doesn't get larger than some constant no matter how big the input gets.
In contrast, O(N) means that if the size of the input is N, the number of steps required is at most a constant times N, no matter how big N gets.
So for example (in python code since that's probably easy for most to interpret):
def f(L, index):  # L a list, index an integer
    x = L[index]
    y = 2 * L[index]
    return x + y
then even though f has several calculations within it, the time taken to run is the same regardless of how long the list L is. However,
def g(L):  # L a list
    return sum(L)
This will be O(N) where N is the length of list L. Even though there is only a single calculation written, the system has to add all N entries together. So it has to do at least one step for each entry. So as N increases, the number of steps increases proportional to N.
As everyone has already tried to answer, it simply means:
No matter how many mangoes you've got in a box, it'll always take you the same amount of time to eat one mango. How you plan on eating it is irrelevant; there may be a single step, or you might go through multiple steps and slice it nicely before eating it.

Runtime of following algorithm (example from cracking the coding interview)

One of the problems in the Cracking the Coding Interview book asks for the run-time of the following algorithm, which prints the powers of 2 from 1 through n inclusive:
int powersOf2(int n) {
    if (n < 1) {
        return 0;
    } else if (n == 1) {
        print(1);
        return 1;
    } else {
        int prev = powersOf2(n / 2);
        int curr = prev * 2;
        print(curr);
        return curr;
    }
}
The author answers that it runs in O(log n).
It makes perfect sense, but... n is the VALUE of the input! (pseudo-sublinear run-time).
Isn't it more correct to say that the run-time is O(m) where m is the length of input to the algorithm? (O(log(2^m)) = O(m)).
Or is it perfectly fine to simply say it runs in O(log n) without mentioning anything about pseudo-runtimes?
I am preparing for an interview, and wondering whether I need to mention that the run-time is pseudo-sublinear for questions like this that depend on value of an input.
I think the term that you're looking for here is "weakly polynomial," meaning "polynomial in the number of bits in the input, but still dependent on the numeric value of the input."
Is this something you need to mention in an interview? Probably not. Analyzing the algorithm and saying that the runtime is O(log n) describes the runtime perfectly as a function of the input parameter n. Taking things a step further and then looking at how many bits are required to write out the number n, then mentioning that the runtime is linear in the size of the input, is a nice flourish and might make an interviewer happy.
I'd actually be upset if an interviewer held it against you if you didn't mention this - this is the sort of thing you'd only know if you had a good university education or did a lot of self-studying.
When you say that an algorithm takes O(N) time, and it's not specified what N is, then it's taken to be the size of the input.
In this case, however, the algorithm is said to take O(n) time, where n identifies a specific input parameter. That is also perfectly OK, and is common when the size of the input isn't what you would practically want to measure against. You will also see complexity given in terms of multiple parameters, like O(|V|+|E|) for graph algorithms, etc.
To make things a bit more confusing, the input value n is a single machine word, and numbers that fit into 1 or 2 machine words are usually considered to be constant size, because in practice they are.
Since giving a complexity in terms of the size of n is therefore not useful in any way, if you were asked to give a complexity without any specific instructions of how to measure the input size, you would measure it in terms of the value of n, because that is the useful way to do it.
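As a concrete check (my own sketch, not from either answer): instrumenting the function to count its recursive calls shows the count growing with log2(n), which is linear in the number of bits m needed to write n.

#include <stdio.h>

/* Instrumented variant of the question's function: the print calls are
   dropped and a counter records how many times it recurses. */
static int calls;

static int powersOf2(int n) {
    calls++;
    if (n < 1)
        return 0;
    else if (n == 1)
        return 1;
    else
        return powersOf2(n / 2) * 2;
}

int main(void) {
    int samples[] = {1, 10, 1000, 1000000, 1000000000};
    for (int i = 0; i < 5; i++) {
        calls = 0;
        powersOf2(samples[i]);
        printf("n = %10d: %2d calls\n", samples[i], calls);
    }
    return 0;
}

For n = 1000 this reports 10 calls, and for n = 1000000000 it reports 30, i.e. floor(log2(n)) + 1.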

Parallelizing large loops and improving cache accesses

I have code like the following, which I am using to find the prime numbers (with the sieve of Eratosthenes) within a range, using OpenMP to parallelize. Before this, I have a preprocessing stage where I flag off all even numbers and all multiples of 3 and 5, so that there is less work to do in this stage.
The shared L3 cache of the testbed is 12MB, and the physical memory is 32 GB. I am using 12 threads. The flag array is unsigned char.
#pragma omp parallel for
for (i = 0; i < range; i++)
{
    for (j = 5; j < range; j += 2)
    {
        if (flag[i] == 1 && i*j < range)
            if (flag[i*j] == 1)
                flag[i*j] = 0;
    }
}
This program works well for ranges less than 1,000,000, but after that the execution time shoots up for larger ranges; e.g., for range = 10,000,000 this program takes around 70 minutes (not fitting in cache?). I have modified the above program to incorporate loop tiling so that it could utilize the cache for any loop range, but even the blocking approach seems to be time consuming. Interchanging the loops also does not help for large ranges.
How do I modify the above code to tackle large ranges? And how could I rewrite the code to make it fully parallel (range and flag are shared, since the flag array is quite large and I can't declare it private)?
Actually, I just noticed a few easy speedups in your code. So I'll mention these before I get into the fast algorithm:
Use a bit-field instead of a char array. You can save a factor of 8 in memory.
Your outer loop runs over all integers, not just the primes. After each iteration, start from the next number that hasn't been crossed off yet (that number will be prime).
I'm suggesting this because you mentioned that it takes 70 min. on a (pretty powerful) machine to run N = 10,000,000. That didn't look right, since my own trivial implementation can do N = 2^32 in under 20 seconds on a laptop - single-threaded, no source-level optimizations. So then I noticed that you missed a few basic optimizations; a sketch of both follows.
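For reference, here is a rough single-threaded sketch of those two changes (my own illustration, not your code and not yet the fast algorithm below): a bit-field for the flags, and an outer loop that only sieves with numbers still marked prime, starting each crossing-off pass at p*p.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Bit-field helpers: one flag bit per number instead of one char. */
#define GET(bits, i)   ((bits)[(i) >> 3] &  (1u << ((i) & 7)))
#define CLEAR(bits, i) ((bits)[(i) >> 3] &= (unsigned char)~(1u << ((i) & 7)))

static size_t simple_sieve(size_t range, unsigned char *bits) {
    memset(bits, 0xFF, (range >> 3) + 1);   /* start with everything marked prime */
    CLEAR(bits, 0);
    CLEAR(bits, 1);
    for (size_t p = 2; p * p < range; p++) {
        if (!GET(bits, p))
            continue;                       /* only sieve with primes */
        for (size_t m = p * p; m < range; m += p)
            CLEAR(bits, m);
    }
    size_t count = 0;
    for (size_t i = 2; i < range; i++)
        if (GET(bits, i))
            count++;
    return count;
}

int main(void) {
    size_t range = 10000000;                /* the 10,000,000 case from the question */
    unsigned char *bits = malloc((range >> 3) + 1);
    printf("%zu primes below %zu\n", simple_sieve(range, bits), range);
    free(bits);
    return 0;
}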
Here's the efficient solution. But it takes some work.
The key is to recognize that the Eratosthenes Sieve only needs to go up to sqrt(N) of your target size. In other words, you only need to run the sieve on all prime numbers up to sqrt(N) before you are done.
So the trick is to first run the algorithm on sqrt(N). Then dump all the primes into a dense data structure. By pre-computing all the needed primes, you break the dependency on the outer-loop.
Now, for the rest of the numbers from sqrt(N) to N, you can cross off all numbers that are divisible by any prime in your pre-computed table. Note that this is independent for all the remaining numbers. So the algorithm is now embarrassingly parallel.
To be efficient, this needs to be done using "mini"-sieves on blocks that fit in cache. To be even more efficient, you should compute and cache the reciprocals of all the primes in the table. This will help you efficiently find the "initial offsets" of each prime when you fill out each "mini-sieve".
The initial step of running the algorithm sequentially up to sqrt(N) will be very fast, since it's only sqrt(N). The rest of the work is completely parallelizable.
In the fully general case, this algorithm can be applied recursively on the initial sieve, but that's generally overkill.
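Here is a rough single-threaded sketch of that segmented approach (my own illustration, not the answerer's or the poster's code; N, SEGMENT and the plain char buffers are assumptions, and the bit-field and reciprocal tricks are omitted for brevity). It sieves up to sqrt(N), stores those primes in a dense table, and then crosses off multiples block by block; the block loop is where the OpenMP parallelism would go.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Segment size in bytes; assumed small enough to stay in cache. */
#define SEGMENT 32768

int main(void) {
    const size_t N = 10000000;              /* the range from the question */

    /* limit = smallest integer with limit * limit >= N, i.e. roughly sqrt(N). */
    size_t limit = 1;
    while (limit * limit < N)
        limit++;

    /* Step 1: ordinary sieve up to sqrt(N). */
    char *small = malloc(limit + 1);
    memset(small, 1, limit + 1);
    small[0] = small[1] = 0;
    for (size_t p = 2; p * p <= limit; p++)
        if (small[p])
            for (size_t m = p * p; m <= limit; m += p)
                small[m] = 0;

    /* Step 2: dump those primes into a dense array. */
    size_t nprimes = 0;
    size_t *primes = malloc((limit + 1) * sizeof *primes);
    for (size_t p = 2; p <= limit; p++)
        if (small[p])
            primes[nprimes++] = p;

    /* Step 3: sieve (sqrt(N), N) in cache-sized blocks.  Each block depends
       only on the pre-computed prime table, so this loop is the natural
       place for "#pragma omp parallel for" (with a per-thread block buffer
       and a reduction on count); shown single-threaded to keep it short. */
    size_t count = nprimes;                 /* primes up to sqrt(N) */
    char block[SEGMENT];
    for (size_t low = limit + 1; low < N; low += SEGMENT) {
        size_t high = (low + SEGMENT < N) ? low + SEGMENT : N;
        memset(block, 1, high - low);
        for (size_t k = 0; k < nprimes; k++) {
            size_t p = primes[k];
            size_t start = ((low + p - 1) / p) * p;   /* first multiple of p >= low */
            for (size_t m = start; m < high; m += p)
                block[m - low] = 0;
        }
        for (size_t i = 0; i < high - low; i++)
            if (block[i])
                count++;
    }

    printf("%zu primes below %zu\n", count, N);
    free(small);
    free(primes);
    return 0;
}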
