Is it possible to calculate the time complexity of a genetic algorithm?
These are my parameter settings:
Population size (P) = 100
# of Generations (G) = 1000
Crossover probability (Pc) = 0.5 (fixed)
Mutation probability (Pm) = 0.01 (fixed)
Thanks
Updated:
problem: document clustering
Chromosome: 50 genes/chrom, allele value = integer(document index)
crossover: one point crossover (crossover point is randomly selected)
mutation: randomly change one gene
termination criteria: 1000 generation
fitness: Davies–Bouldin index
Isn't it something like O(P * G * O(Fitness) * ((Pc * O(crossover)) + (Pm * O(mutation))))?
I.e., the complexity is relative to the number of items, the number of generations, and the computation time per generation.
If P, G, Pc, and Pm are constant, that really simplifies to O(O(Fitness) * (O(mutation) + O(crossover))).
If the number of generations and the population size are constant, then as long as your mutation function, crossover function, and fitness function take a known amount of time, the big O is O(1): it takes a constant amount of time.
Now, if you are asking what the big O would be for a population of N and a number of generations M, that is different, but as stated where you know all the variables ahead of time, the amount of time taken is constant with respect to your input.
Genetic Algorithms are not chaotic, they are stochastic.
The complexity depends on the genetic operators, their implementation (which may have a very significant effect on overall complexity), the representation of the individuals and the population, and obviously on the fitness function.
Given the usual choices (point mutation, one-point crossover, roulette wheel selection), a genetic algorithm's complexity is O(g(nm + nm + n)), with g the number of generations, n the population size and m the size of the individuals. Therefore the complexity is on the order of O(gnm).
This is of course ignoring the fitness function, which depends on the application.
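To make the bookkeeping concrete, here is a minimal hypothetical Java sketch of such a loop (not the asker's code): the fitness function is a dummy placeholder standing in for something like the Davies-Bouldin index, and selection is simplified to uniform random picks rather than roulette wheel. The per-generation costs are annotated in comments; in the asker's setting the real fitness evaluation would dominate.

import java.util.Random;

public class GaComplexitySketch {
    static final Random RNG = new Random(42);

    // Placeholder fitness, O(m) per individual; a real Davies-Bouldin evaluation
    // would replace this and typically dominates the running time.
    static double fitness(int[] chrom) {
        double s = 0;
        for (int gene : chrom) s += gene;
        return s;
    }

    public static void main(String[] args) {
        int n = 100, m = 50, g = 1000;          // population size, chromosome length, generations
        double pc = 0.5, pm = 0.01;             // crossover and mutation probabilities

        int[][] pop = new int[n][m];
        for (int[] c : pop)
            for (int j = 0; j < m; j++) c[j] = RNG.nextInt(1000);   // initialization: O(n*m)

        double best = Double.NEGATIVE_INFINITY;
        for (int gen = 0; gen < g; gen++) {                          // g generations
            double[] fit = new double[n];
            for (int i = 0; i < n; i++) {                            // evaluation: n * O(fitness)
                fit[i] = fitness(pop[i]);
                best = Math.max(best, fit[i]);
            }

            int[][] next = new int[n][m];
            for (int i = 0; i < n; i++) {
                // Selection simplified to uniform random picks, O(1) each; roulette-wheel
                // selection with precomputed cumulative sums adds roughly O(n) per generation.
                int[] parentA = pop[RNG.nextInt(n)];
                int[] parentB = pop[RNG.nextInt(n)];
                int[] child = parentA.clone();                       // copy: O(m)
                if (RNG.nextDouble() < pc) {                         // one-point crossover: O(m)
                    int point = RNG.nextInt(m);
                    System.arraycopy(parentB, point, child, point, m - point);
                }
                if (RNG.nextDouble() < pm)                           // point mutation: O(1)
                    child[RNG.nextInt(m)] = RNG.nextInt(1000);
                next[i] = child;
            }
            pop = next;   // per generation: O(n*m + n*O(fitness)) -> O(g*n*m) overall plus fitness cost
        }
        System.out.println("best placeholder fitness seen: " + best);
    }
}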
Is it possible to calculate the time and computation complexity of a genetic algorithm?
Yes, Luke & Kane's answer can work (with caveats).
However, most genetic algorithms are inherently chaotic, so calculating O() is unlikely to be useful and, worse, probably misleading.
There is a better way to measure the time complexity: actually measure the run time over repeated runs and average.
Related
Could someone explain the difference between polynomial-time, non-polynomial-time, and exponential-time algorithms?
For example, if an algorithm takes O(n^2) time, then which category is it in?
Below are some common Big-O functions while analyzing algorithms.
O(1) - Constant time
O(log(n)) - Logarithmic time
O(n log(n)) - Linearithmic time
O((log(n))^c) - Polylogarithmic time
O(n) - Linear time
O(n^2) - Quadratic time
O(n^c) - Polynomial time
O(c^n) - Exponential time
O(n!) - Factorial time
(n = size of input, c = some constant)
Here is a graph of the Big-O growth of some common functions (graph credit: http://bigocheatsheet.com/).
Check this out.
Exponential is worse than polynomial.
O(n^2) falls into the quadratic category, which is a type of polynomial (the special case of the exponent being equal to 2) and better than exponential.
Exponential is much worse than polynomial. Look at how the functions grow
n = 10 | 100 | 1000
n^2 = 100 | 10000 | 1000000
k^n = k^10 | k^100 | k^1000
k^1000 is exceptionally huge unless k is smaller than something like 1.1. For example, every particle in the universe would have to do 100 billion billion billion operations per second for trillions of billions of billions of years to get that done.
I didn't calculate it out, but IT'S THAT BIG.
O(n^2) is polynomial time. The polynomial is f(n) = n^2. On the other hand, O(2^n) is exponential time, where the exponential function implied is f(n) = 2^n. The difference is whether the function of n places n in the base of an exponentiation, or in the exponent itself.
Any exponential growth function will grow significantly faster (long term) than any polynomial function, so the distinction is relevant to the efficiency of an algorithm, especially for large values of n.
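To get a concrete feel for the gap, here is a small illustrative Java snippet (not from either answer) that prints n^2 next to 2^n for a few values of n:

import java.math.BigInteger;

public class GrowthDemo {
    public static void main(String[] args) {
        // Compare polynomial growth (n^2) with exponential growth (2^n).
        for (int n : new int[] {10, 20, 50, 100}) {
            BigInteger square = BigInteger.valueOf(n).pow(2);
            BigInteger power = BigInteger.valueOf(2).pow(n);
            System.out.println("n = " + n + " | n^2 = " + square + " | 2^n = " + power);
        }
    }
}

Already at n = 100, n^2 is 10000 while 2^n has about 30 digits.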
Polynomial time.
A polynomial is a sum of terms that look like Constant * x^k
Exponential means something like Constant * k^x
(in both cases, k is a constant and x is a variable).
The execution time of exponential algorithms grows much faster than that of polynomial ones.
Exponential (you have an exponential function if AT LEAST ONE exponent depends on a function parameter):
E.g. f(x) = constant ^ x
Polynomial (you have a polynomial function if NO exponent depends on a function parameter):
E.g. f(x) = x ^ constant
More precise definition of exponential
The definition of polynomial is pretty much universal and straightforward so I won't discuss it further.
The definition of Big O is also quite universal, you just have to think carefully about the M and the x0 in the Wikipedia definition and work through some examples.
So in this answer I would like to focus on the precise definition of the exponential, as it requires a bit more thought, is less well known, and is less universal, especially when you start to think about some edge cases. I will then contrast it with polynomials a bit further below.
https://cstheory.stackexchange.com/questions/22588/is-it-right-to-call-2-sqrtn-exponential
https://math.stackexchange.com/questions/55468/how-to-prove-that-exponential-grows-faster-than-polynomial
The most common definition of exponential time is:
2^{polynomial(n)}
where polynomial is a polynomial that:
is not constant, e.g. 1, otherwise the time is also constant
the highest order term has a positive coefficient, otherwise it goes to zero at infinity, e.g. 2^{-n^2 + 2n + 1}
so a polynomial such as this would be good:
2^{n^2 + 2n + 1}
Note that the base 2 could be any number > 1 and the definition would still be valid because we can transform the base by multiplying the exponent, e.g.:
8^{polynomial(n)} = (2^3)^{polynomial(n)} = 2^{3 * polynomial(n)}
and 3 * polynomial(n) is also a polynomial.
Also note that constant addition does not matter, e.g. 2^{n + 1} = 2 * 2^{n} and so the + 1 does not matter for big O notation.
Therefore, two possible nice big-O-equivalent choices for a canonical "smallest exponential" would be, for any small positive e, either of:
(1 + e)^{n}
2^{en}
for very small e.
The highest order term of the polynomial in the exponent in both cases is n^1, order one, and therefore the smallest possible non-constant polynomial.
Those two choices are equivalent, because, as we saw earlier, we can transform base changes into an exponent multiplier.
Superpolynomial and sub-exponential
But note that the above definition excludes some still very big things that show up in practice and that we would be tempted to call "exponential", e.g.:
2^{n^{1/2}}. This is a bit like a polynomial, but it is not a polynomial because polynomial powers must be integers, and here we have 1/2
2^{log_2(n)^2}
Those functions are still very large, because they grow faster than any polynomial.
But strictly speaking, they are big O smaller than the exponentials in our strict definition of exponential!
This motivates the following definitions:
superpolynomial: grows faster than any polynomial
subexponential: grows less fast than any exponential, i.e. (1 + e)^{n}
and all the examples given above in this section fall into both of those categories; a short proof sketch is given below.
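As a quick sketch (my own working, not from the linked questions): for 2^{log_2(n)^2}, rewriting gives 2^{log_2(n)^2} = (2^{log_2(n)})^{log_2(n)} = n^{log_2(n)}, which eventually exceeds n^k for every fixed k, so it is superpolynomial; at the same time its exponent log_2(n)^2 grows more slowly than e*n for any e > 0, so it is eventually smaller than 2^{en}, i.e. sub-exponential. The same two comparisons work for 2^{n^{1/2}}: the exponent n^{1/2} eventually exceeds k * log_2(n) for every fixed k (superpolynomial), yet grows more slowly than e*n (sub-exponential).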
Keep in mind that if you put something very small in the exponent, the result can of course drop back down to polynomial, e.g.:
2^{log_2(n)} = n
And that is also true for anything smaller than log_2, e.g.:
2^{log_2(log_2(n))} = log_2(n)
is sub-polynomial.
Important superpolynomial and sub-exponential examples
the general number field sieve, the fastest known algorithm (as of 2020) for integer factorization; see also: What is the fastest integer factorization algorithm? That algorithm has complexity of the form:
e^{(k + o(1)) * (ln(n))^(1/3) * (ln(ln(n)))^(2/3)}
where n is the factored number, and the little-o notation o(1) means a term that goes to 0 at infinity.
That complexity even has a named generalization as it presumably occurs in other analyses: L-notation.
Note that the above expression itself is clearly polynomial in n, because it is smaller than e^{(ln(n))^(1/3) * (ln(n))^(2/3)} = e^{ln(n)} = n.
However, in the context of factorization, what really matters is not n, but rather "the number of digits of n", because cryptography parties can easily generate crypto keys that are twice as large, and the number of digits grows as log_2(n). So in that complexity, what we really care about is something like:
e^{(k + o(1)) * n^(1/3) * (ln(n))^(2/3)}
which is of course both superpolynomial and sub-exponential.
The fantastic answer at: What would cause an algorithm to have O(log log n) complexity? gives an intuitive explanation of where the O(log log n) comes from: log n comes from an algorithm that removes half of the options at each step, while log log n comes from an algorithm that reduces the options to the square root of the total at each step!
https://quantumalgorithmzoo.org/ contains a list of algorithms which might be of interest to quantum computers, and in most cases, the quantum speedup relative to a classical computer is not strictly exponential, but rather superpolynomial. However, as this answer will have hopefully highlighted, this is still extremely significant and revolutionary. Understanding that repository is what originally motivated this answer :-)
It is also worth noting that we currently do not expect quantum computers to solve NP-complete problems, which are also generally expected to require exponential time to solve. But there is no proof otherwise either. See also: https://cs.stackexchange.com/questions/130470/can-quantum-computing-help-solve-np-complete-problems
https://math.stackexchange.com/questions/3975382/what-problems-are-known-to-be-require-superpolynomial-time-or-greater-to-solve asks about any interesting algorithms that have been proven superpolynomial (and presumably with proof of optimality; otherwise the general number field sieve would be an obvious choice, but as of 2020 we don't know if it is optimal or not)
Proof that exponential is always larger than polynomial at infinity:
https://math.stackexchange.com/questions/55468/how-to-prove-that-exponential-grows-faster-than-polynomial
Discussions of different possible definitions of sub-exponential:
https://cstheory.stackexchange.com/questions/22588/is-it-right-to-call-2-sqrtn-exponential
https://en.wikipedia.org/w/index.php?title=Time_complexity&oldid=1026049783#Sub-exponential_time
Polynomial time O(n^k) means the number of operations is proportional to the k-th power of the input size.
Exponential time O(k^n) means the number of operations is proportional to a constant raised to the power of the input size.
Polynomial examples: n^2, n^3, n^100, 5n^7, etc.
Exponential examples: 2^n, 3^n, 100^n, 5^(7n), etc.
O(n^2) is polynomial time complexity, while O(2^n) is exponential time complexity.
For small inputs the two may look similar, but as the input size n grows the exponential's growth rate takes over and the running time blows up; the growth rate depends entirely on the size of the input n.
Optimization problems such as SAT, clique, and independent set are classic examples discussed in the context of exponential versus polynomial time.
Here's the simplest explanation for newbies:
A Polynomial:
a function where a variable is raised to a constant power, e.g.
f(n) = n ^ 2
while
An Exponential:
a function where a constant is raised to a variable power, e.g.
f(n) = 2 ^ n
Say an algorithm has a theoretical time complexity of O(n^2). However, when it runs in some specific or realistic cases (for example, in the Facebook social graph each person cannot have more than 200 close friends; I know that's not true, but let's just assume it), then its complexity is just linear, O(n), due to some special characteristics of the input, even though theoretically it is still O(n^2).
I believe, I have seen a formal name for complexity of the algorithm in realistic cases, but can’t remember exactly what it is. It’s sorta “real complexity” or “realistic complexity” or something. Does anyone know if there is a special name for it? Or do I just happen to remember something from my dreams? :) I need it in my technical writing. Thanks
I don’t think there is a special name for such kind of complexity classes, since the performance of your algorithm strongly depends on the data distribution. However, the approach where you are estimating running time for some real world data has a name; it’s called Smoothed Analysis.
There isn't a formal name as far as I'm aware, but "real-world average/best/worst case complexity" would probably be understandable enough.
Just for completeness:
The worst case running time is O(n^2).
The best case running time is O(n).
We can't really say what the average case running time is from what is given.
Insertion sort is an example of this. It runs in O(n) when the data is already sorted, but runs in O(n^2) in the average / worst case.
I think the best answer for this would be average case. Your average is computed as the weighted mean over the probability of the input, so if an algorithm has complexity a * n + b for input I1 and c * n^2 + d * n + e for input I2, its average complexity will be:
AC = p * (a * n + b) + (1 - p) * (c * n^2 + d * n + e)
Where p is the probability of input being I1. If in real cases, input is almost always I1 (i.e. p = 1) then your average complexity will be O(n). Academic proofs tend to assume a uniform distribution for the inputs (all inputs have the same probability of happening, p = 1/2 in my example), but to study the complexity of an algorithm in the 'real world', you have to estimate the actual probability of occurrence of the inputs, and this might change the average complexity of your algorithm (although it usually doesn't: in the example, even if p = 0.99999, the average complexity would still be O(n^2), although the factors would be changed).
If you assume that no one ever has more than 200 friends, you implicitly set the probability of nb_friends > 200 to 0, which changes your average complexity even though best-case and worst-case are unchanged.
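A tiny illustrative calculation (using the notation from the answer above, with made-up coefficients) shows why even p = 0.99999 does not save you: the quadratic term overtakes the linear one as n grows.

public class AverageCaseDemo {
    public static void main(String[] args) {
        // Hypothetical cost coefficients: cost(I1) = a*n + b, cost(I2) = c*n^2 + d*n + e.
        double a = 1, b = 0, c = 1, d = 0, e = 0;
        double p = 0.99999;                      // probability of the "cheap" input I1

        for (double n : new double[] {1e3, 1e6, 1e9}) {
            double linearTerm = p * (a * n + b);
            double quadraticTerm = (1 - p) * (c * n * n + d * n + e);
            System.out.println("n = " + (long) n
                    + " | linear term = " + linearTerm
                    + " | quadratic term = " + quadraticTerm);
        }
    }
}

Already at n = 1,000,000 the quadratic term dominates, so the average complexity stays O(n^2) unless p is exactly 1.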
I have a homework problem:
If an algorithm takes 0.5 ms for an input size of 100, how long will it take for inputs of 500, 1,000 and 10,000 if it is:
Linear,
O(N log N),
Quadratic,
Cubic,
Exponential.
I understand the basic concepts here, I'm just unsure of how to approach the question mathematically. I'm tempted to simply calculate the amount of time it takes to process each individual item of input (for instance, in a) I'll divide 0.5 by 100 to get 0.005, which I'll then multiply by 500, 1,000 and 10,000 to find how long it will take to process inputs of those sizes).
While this is simple for the linear calculations, and also pretty straightforward for quadratic and cubic ones, I don't quite understand how to apply this strategy to (N log N) or exponential functions: What value do I use as the base of the exponent?
For (N log N), is it as simple as calculating c = 100 * log(100) = 200; then calculating c = 500 * log(500) = 1349, which is 6.745 times 200, then multiplying 6.745 by .5 to arrive at 3.3725 as a final answer?
This is a bad exercise:
It teaches you to extrapolate from a single datapoint.
It teaches you to apply the results of asymptotic analysis where they can't be applied.
To expand on the latter, you cannot predict the performance of a system when given specific inputs based on an asymptotic bound because of hidden terms. Many algorithms have hidden constant and linear terms that dominate the run time for small input sizes like n=100.
But if you are supposed to ignore these facts, your approach is correct.
Suppose the function's runtime is given by T(n). You can solve these problems by doing the following:
Write a general expression for T(n) in terms of the function's growth rate, given there is some hidden constant multiplier. For example, for the n log n case, write T(n) = c n log n for some constant c.
Solve for c using the one data point you have; T(100) is given to you.
Plug in values of n as needed to estimate the runtime.
This works for any growth rate.
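As a concrete (hypothetical) sketch of those steps for the n log n case, using the given data point T(100) = 0.5 ms:

public class Extrapolate {
    // Guessed growth function g(n) = n * log2(n).
    static double g(double n) {
        return n * (Math.log(n) / Math.log(2));
    }

    public static void main(String[] args) {
        double knownTimeMs = 0.5;              // given: T(100) = 0.5 ms
        double c = knownTimeMs / g(100);       // step 2: solve T(n) = c * g(n) for c

        for (double n : new double[] {500, 1000, 10000}) {
            // step 3: plug in n to estimate the runtime
            System.out.println("n = " + (int) n + " -> estimated T(n) = " + (c * g(n)) + " ms");
        }
    }
}

The same pattern works for the other growth rates by swapping g(n). Note that for n log n the base of the logarithm cancels out, since it only rescales the constant c.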
A quick note: "exponential" is not precise enough for you to make a meaningful estimate. 2^n and 100^n are both exponential, but the latter grows significantly faster than the former. In computer science, logs typically use base 2, though there is no general convention for the base of exponents.
Hope this helps!
Could it be done by keeping a counter to see how many iterations an algorithm goes through, or does the time duration need to be recorded?
The currently accepted answer won't give you any theoretical estimate, unless you are somehow able to fit the experimentally measured times with a function that approximates them. This answer gives you a manual technique to do that and fills that gap.
You start by guessing the theoretical complexity function of the algorithm. You also experimentally measure the actual complexity (number of operations, time, or whatever you find practical), for increasingly larger problems.
For example, say you guess an algorithm is quadratic. Measure, say, the time, and compute the ratio of the time to your guessed function (n^2):
for (int n = 5; n <= 10000; n++) {              // n: problem size
    long start = System.nanoTime();
    executeAlgorithm(n);
    long end = System.nanoTime();
    long totalTime = end - start;
    double ratio = (double) totalTime / ((double) n * n);   // divide by the guessed g(n) = n^2
    System.out.println(n + " -> " + ratio);
}
As n moves towards infinity, this ratio...
Converges to zero? Then your guess is too low. Repeat with something bigger (e.g. n^3)
Diverges to infinity? Then your guess is too high. Repeat with something smaller (e.g. nlogn)
Converges to a positive constant? Bingo! Your guess is on the money (at least approximates the theoretical complexity for as large n values as you tried)
Basically that uses the definition of big O notation, that f(x) = O(g(x)) <=> f(x) <= c * g(x) for all sufficiently large x, where f(x) is the actual cost of your algorithm, g(x) is the guess you put, and c is a constant. So basically you try to experimentally find the limit of f(x)/g(x); if your guess hits the real complexity, this ratio will estimate the constant c.
Algorithm complexity is defined as (something like):
the number of operations the algorithm does as a function of its input size.
So you need to try your algorithm with various input sizes (i.e. for sort - try sorting 10 elements, 100 elements etc.), and count each operation (e.g. assignment, increment, mathematical operation etc.) the algorithm does.
This will give you a good "theoretical" estimation.
If you want real-life numbers on the other hand - use profiling.
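For instance, here is a small hypothetical example of counting operations instead of wall-clock time: it counts the comparisons made by insertion sort for growing input sizes, which you can then compare against candidate growth functions.

import java.util.Random;

public class CountOps {
    static long comparisons;

    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= 0) {
                comparisons++;                 // count every comparison of a[j] against key
                if (a[j] <= key) break;        // element already in place
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;
        }
    }

    public static void main(String[] args) {
        Random rng = new Random(1);
        for (int n : new int[] {10, 100, 1000, 10000}) {
            int[] a = rng.ints(n, 0, 1_000_000).toArray();
            comparisons = 0;
            insertionSort(a);
            // On random input the count grows roughly proportionally to n^2.
            System.out.println("n = " + n + " -> " + comparisons + " comparisons");
        }
    }
}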
As others have mentioned, the theoretical time complexity is a function of the number of CPU operations done by your algorithm. In general, processor time should be a good approximation of that, modulo a constant. But the real run time may vary for a number of reasons, such as:
Processor pipeline flushes
Cache misses
Garbage collection
Other processes on the machine
Unless your code is systematically causing some of these things to happen, with a large enough number of statistical samples you should get a fairly good idea of the time complexity of your algorithm, based on the observed runtime.
The best way would be to actually count the number of "operations" performed by your algorithm. The definition of "operation" can vary: for an algorithm such as quicksort, it could be the number of comparisons of two numbers.
You could measure the time taken by your program to get a rough estimate, but various factors could cause this value to differ from the actual mathematical complexity.
Yes, you can track both: actual performance and the number of iterations.
Might I suggest using ANTS profiler? It will provide you with this kind of detail while you run your app on "experimental" data.