I'm trying to wrap my head around the meaning of Landau notation in the context of analysing an algorithm's complexity.
What exactly does the O formally mean in Big-O-Notation?
So the way I understand it is that O(g(x)) gives a set of functions which grow as rapidly as or slower than g(x), meaning, for example in the case of O(n^2):
where t(x) could be, for instance, x + 3 or x^2 + 5. Is my understanding correct?
Furthermore, are the following notations correct?
I saw the following written down by a tutor. What does this mean? How can you use "less than or equal" if O-notation gives a set?
Could I also write something like this?
So the way I understand it is that O(g(x)) gives a set of functions which grow as rapidly as or slower than g(x).
This explanation of Big-Oh notation is correct.
f(n) = n^2 + 5n - 2, f(n) is an element of O(n^2)
Yes, we can say that. O(n^2), in plain English, represents the "set of all functions that grow as rapidly as or slower than n^2", so f(n) satisfies that requirement.
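For instance, here is a quick sketch in Python that makes the membership concrete (c = 2 and n0 = 5 are witnesses I picked by hand; many other pairs work):

def f(n):
    return n**2 + 5*n - 2

def g(n):
    return n**2

# f is in O(g) because f(n) <= c * g(n) for every sampled n >= n0.
c, n0 = 2, 5
assert all(f(n) <= c * g(n) for n in range(n0, 10_000))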
O(n) is a subset of O(n^2), O(n^2) is a subset of O(2^n)
This notation is correct, and it follows from the definition. Any function that is in O(n) is also in O(n^2), since its growth rate is slower than n^2's. 2^n is an exponential time complexity, whereas n^2 is polynomial. You can take the limit of n^2 / 2^n as n goes to infinity and prove that O(n^2) is a subset of O(2^n), since 2^n grows faster.
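A rough numeric version of that limit argument (just sampling the ratio at a few points; a real proof would compare the growth formally):

# n^2 / 2^n shrinks toward 0, so 2^n eventually dominates n^2.
for n in [10, 20, 30, 40]:
    print(n, n**2 / 2**n)
# 10 0.09765625
# 20 0.0003814...
# 30 0.00000083...
# 40 0.0000000014...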
O(n) <= O(n^2) <= O(2^n)
This notation is tricky. As explained here, we don't have "less than or equal to" for sets. I think the tutor meant that the time complexity of functions belonging to the set O(n) is less than (or equal to) the time complexity of functions belonging to the set O(n^2). Anyway, this notation isn't standard, and it's best to avoid such ambiguities in textbooks.
O(g(x)) gives a set of functions which grow as rapidly as or slower than g(x)
That's technically right, but a bit imprecise. A better description reads:
O(g(x)) gives the set of functions which are asymptotically bounded above by g(x), up to constant factors.
This may seem like a nitpick, but one inference from the imprecise definition is wrong.
The 'fixed version' of your first equation, if you make the variables match up and have one limit sign, seems to be:
This is incorrect: the ratio only has to be less than or equal to some fixed constant c > 0.
Here is the correct version:

lim_{n -> infinity} t(n) / g(n) <= c

where c is some fixed positive real number that does not depend on n.
For example, f(n) = 3n^2 is in O(n^2): one constant c that works for this f is c = 4. Note that the requirement isn't 'for all c > 0', but rather 'for at least one constant c > 0'.
The rest of your remarks are accurate. The <= signs in that expression are an unusual usage, but it's true if <= means set inclusion. I wouldn't worry about that expression's meaning.
There's other, more subtle reasons to talk about 'boundedness' rather than growth rates. For instance, consider the cosine function. |cos(x)| is in O(1), but its derivative fluctuates from negative one to positive one even as x increases to infinity.
If you take 'growth rate' to mean something like the derivative, examples like this become tricky to talk about, but saying that |cos(x)| is bounded by 2 is clear.
For an even better example, consider the logistic curve. The logistic function is in O(1), yet its derivative is positive everywhere: it is strictly increasing, always growing, while the constant function 1 has a growth rate of 0. This seems to conflict with the first definition unless you add a lot of clarifying remarks about what 'grow' means.
An always growing function in O(1) (see the image at the Wikipedia link).
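A minimal sketch of that logistic example (using the standard logistic formula; the sample points are arbitrary):

import math

def logistic(x):
    # Strictly increasing everywhere, yet bounded above by 1: in O(1).
    return 1 / (1 + math.exp(-x))

values = [logistic(x) for x in [0, 1, 5, 10]]
assert values == sorted(values)       # always growing on this sample
assert all(v < 1 for v in values)     # but never exceeds the bound 1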
I have some confusion regarding the Asymptotic Analysis of Algorithms.
I have been trying to understand this upper bound case and have watched a couple of YouTube videos. In one of them, there was an example with this equation,
where we have to find the upper bound of the equation 2n+3. So, by looking at this, one can say that it is going to be O(n).
My first question :
In algorithmic complexity, we have learned to drop the constants and keep the dominant term. Is this Asymptotic Analysis there to prove that rule, or does it have some other significance? Otherwise, what is the point of this analysis when the answer is always the biggest term in the equation? For example, if it were n + n^2 + 3, then the upper bound would always be n^2, for some c and n0.
My second question :
as per the rule, the upper bound formula in Asymptotic Analysis must satisfy this condition: f(n) = O(g(n)) iff f(n) < c.g(n) where n > n0, c > 0, n0 >= 1
i) n is the number of inputs, right? Or does n represent the number of steps we perform? And does f(n) represent the algorithm?
ii) In the following video, to prove that an upper bound of the equation 2n+3 could be n^2, the presenter took c = 1, which is why n had to be >= 3 to satisfy the condition; but one could have chosen c = 5 and n0 = 1 as well, right? So why, in most cases in the video, was the presenter changing the value of n and not c to satisfy the conditions? Is there a rule, or is it arbitrary? Can I change either c or n0 to satisfy the condition?
My Third Question:
In the same video, the presenter mentioned that n0 (n-naught) is the number of steps. Is that correct? I thought n0 is the threshold after which the graph becomes the upper bound (after n0, the condition holds for all values of n); hence n0, like n, also refers to the input.
Would you please help me understand? People come up with different ideas in different explanations, and I want to understand this correctly.
Edit
The accepted answer clarified all of the questions except the first one. I have gone through many articles on the web, and I am documenting my conclusion here in case anyone else has the same question; I hope it helps them.
My first question was
In algorithmic complexity, we have learned to drop the constants and
find the dominant term, so is this Asymptotic Analysis to prove that
theory?
No. Asymptotic Analysis describes algorithmic complexity; it is all about understanding or visualizing the asymptotic (tail) behavior of a function, or a group of functions, by examining how the mathematical expression behaves as the input grows.
In computer science, we use it to evaluate (note: evaluating, not measuring) the performance of an algorithm in terms of input size.
For example, these two functions belong to the same group:

mySet = set()

def addToMySet(n):
    # one insertion per iteration: n operations in total
    for i in range(n):
        mySet.add(i * i)

mySet2 = set()

def addToMySet2(n):
    # 500 insertions per outer iteration: 500 * n operations in total
    for i in range(n):
        for j in range(500):
            mySet2.add(i * j)
Even though the execution time of addToMySet2(n) is always greater than the execution time of addToMySet(n), the tail behavior of both functions is the same with respect to large n: if one plots them on a graph, the tendency of both curves is linear, so they belong to the same group. Using Asymptotic Analysis, we get to see this behavior and group them.
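A rough timing sketch of that claim (mirroring the two functions above with local sets; absolute numbers will vary by machine, only the trend matters):

import time

def add_to_set(n):
    s = set()
    for i in range(n):
        s.add(i * i)

def add_to_set2(n):
    s = set()
    for i in range(n):
        for j in range(500):
            s.add(i * j)

for n in [1_000, 5_000, 25_000]:
    t0 = time.perf_counter()
    add_to_set(n)
    t1 = time.perf_counter()
    add_to_set2(n)
    t2 = time.perf_counter()
    # Both columns grow ~5x when n grows 5x: the same linear tail behavior,
    # even though the second function is consistently much slower.
    print(n, t1 - t0, t2 - t1)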
A mistake that I made was assuming that the upper bound represents the worst case. In reality, the upper bound of any algorithm is associated with all of the best, average, and worst cases, so the correct way of putting it would be

upper/lower bound in the best/average/worst case of an algorithm.

We can't equate the upper bound of an algorithm with the worst-case time complexity, or the lower bound with the best-case complexity. Moreover, an upper bound can be higher than the worst case, because upper bounds are usually asymptotic formulae that have been proven to hold.
I have seen this kind of question, like "find the worst-case time complexity of such and such algorithm", where the answer is either O(n), O(n^2), O(log n), etc.
For example, if we consider the function addToMySet2(n), one would simply say that its algorithmic time complexity is O(n), which is technically incomplete, because three factors are involved in determining algorithmic time complexity: the bound, the bound type (inclusive upper bound or strict upper bound), and the case.
When one writes O(n), it is derived from this Asymptotic Analysis: f(n) = O(g(n)) iff there exist c > 0 and n0 > 0 such that f(n) <= c.g(n) for all n > n0. So we are considering an upper bound of the best/average/worst case; in the statement above, the case is missing.
I think we can consider that, when not indicated otherwise, big O notation generally describes an asymptotic upper bound on the worst-case time complexity. Otherwise, one can also use it to express asymptotic upper bounds on the average or best case time complexities.
The whole point of asymptotic analysis is to compare how algorithms' performance scales. For example, if I write two versions of the same algorithm, one with O(n^2) time complexity and the other with O(n*log(n)) time complexity, I know for sure that the O(n*log(n)) one will be faster when n is "big". How big? It depends. You actually can't know unless you benchmark it. What you do know is that at some point, the O(n*log(n)) version will always be better.
Now with your questions:
the "lower" n in n+n^2+3 is "dropped" because it is negligible when n scales up compared to the "dominant" one. That means that n+n^2+3 and n^2 behave the same asymptotically. It is important to note that even though 2 algorithms have the same time complexity, it does not mean they are as fast. For example, one could be always 100 times faster than the other and yet have the exact same complexity.
(i) n can be anything. It may be the size of the input (e.g. an algorithm that sorts a list), but it may also be the input itself (e.g. an algorithm that gives the n-th prime number), or a number of iterations, etc.
(ii) he could have taken any c; he chose c = 1 as an example, just as he could have chosen c = 1.618. Actually, the correct formulation would be:
f(n) = O(g(n)) iff there exist c > 0 and n0 > 0 such that f(n) <= c.g(n) for all n > n0
the n0 from the formula is a pure mathematical construct. For a given c > 0, it is the value of n from which the function f is bounded above by c.g. Since n can represent anything (size of a list, input value, etc.), the same holds for n0.
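Here is a sketch checking both witness pairs from question (ii) against f(n) = 2n + 3 and g(n) = n^2 (the sampling limit is arbitrary; neither pair is more "correct" than the other):

def bound_holds(c, n0, f, g, limit=10_000):
    # True when f(n) <= c * g(n) for every sampled n >= n0.
    return all(f(n) <= c * g(n) for n in range(n0, limit))

f = lambda n: 2 * n + 3
g = lambda n: n * n
print(bound_holds(c=1, n0=3, f=f, g=g))   # True: the presenter's choice
print(bound_holds(c=5, n0=1, f=f, g=g))   # True: c = 5, n0 = 1 works too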
I saw in one of the videos (https://www.youtube.com/watch?v=A03oI0znAoc&t=470s) that if f(n) = 2n + 3, then its Big O is O(n).
Now my question is: if I am a developer and I am given O(n) as the upper bound of f(n), how do I know what the exact value of the upper bound is? Because in 2n + 3 we remove the 2 (as it is a constant factor) and the 3 (because it is also a constant). So if my function is f(n) with n = 1, I can't say g(n) is an upper bound at n = 1:
g(1) = 1 cannot be an upper bound for f(1) = 5. I find this hard to understand.
I know it is a partial (and probably wrong) answer
From Wikipedia,
Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation.
In your example,
f(n) = 2n+3 has the same growth rate as g(n) = n
If you plot the functions, you will see that both have the same linear growth, and as n -> infinity they differ only by a constant factor.
In Big O notation, evaluating f(n) = 2n+3 at n = 1 means nothing; you need to look at the trend, not discrete values.
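A sketch of that trend: the ratio f(n)/n settles toward the constant 2, which is exactly the factor big O discards:

for n in [1, 10, 100, 1_000, 10_000]:
    print(n, (2 * n + 3) / n)   # 5.0, 2.3, 2.03, 2.003, 2.0003 -> approaches 2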
As a developer, you will consider big-O as a first indication when deciding which algorithm to use. If you have an algorithm which is, say, O(n^2), you will try to find out whether there is another one which is, say, O(n). If the problem is inherently O(n^2), then the big-O notation will not provide further help and you will need to use other criteria for your decision. However, if the problem is not inherently O(n^2) but O(n), you should discard any algorithm that happens to be O(n^2) and find an O(n) one.
So, the big-O notation will help you to better classify the problem and then try to solve it with an algorithm whose complexity has the same big-O. If you are lucky enough to find two or more algorithms with this complexity, then you will need to weigh them using different criteria.
Could someone explain the difference between polynomial-time, non-polynomial-time, and exponential-time algorithms?
For example, if an algorithm takes O(n^2) time, then which category is it in?
Below are some common Big-O functions while analyzing algorithms.
O(1) - Constant time
O(log(n)) - Logarithmic time
O(n log(n)) - Linearithmic time
O((log(n))^c) - Polylogarithmic time
O(n) - Linear time
O(n^2) - Quadratic time
O(n^c) - Polynomial time
O(c^n) - Exponential time
O(n!) - Factorial time
(n = size of input, c = some constant)
Here is the model graph representing Big-O complexity of some functions
graph credits http://bigocheatsheet.com/
Check this out.
Exponential is worse than polynomial.
O(n^2) falls into the quadratic category, which is a type of polynomial (the special case of the exponent being equal to 2) and better than exponential.
Exponential is much worse than polynomial. Look at how the functions grow
n = 10 | 100 | 1000
n^2 = 100 | 10000 | 1000000
k^n = k^10 | k^100 | k^1000
k^1000 is exceptionally huge unless k is smaller than something like 1.1. For a sense of scale, every particle in the universe would have to perform something like 100 billion billion billion operations per second for trillions of billions of billions of years to get that done.
I didn't calculate it out, but IT'S THAT BIG.
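You can regenerate the table (with k = 2 as a concrete base) in a couple of lines:

for n in [10, 100, 1000]:
    # 2^1000 has 302 decimal digits; n^2 never leaves 7 digits here.
    print(n, n**2, 2**n)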
O(n^2) is polynomial time. The polynomial is f(n) = n^2. On the other hand, O(2^n) is exponential time, where the exponential function implied is f(n) = 2^n. The difference is whether the function of n places n in the base of an exponentiation, or in the exponent itself.
Any exponential growth function will grow significantly faster (long term) than any polynomial function, so the distinction is relevant to the efficiency of an algorithm, especially for large values of n.
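A sketch that locates the crossover for one concrete pair, 2^n versus n^10 (the exponent 10 is an arbitrary choice; any polynomial loses eventually):

# Start at n = 2: at n = 1 the comparison is trivially 2 > 1,
# and the polynomial only pulls ahead for a while after that.
n = 2
while 2**n <= n**10:
    n += 1
print(n)   # 59: from n = 59 onward, 2^n exceeds n^10 for good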
Polynomial time.
A polynomial is a sum of terms that look like Constant * x^k
Exponential means something like Constant * k^x
(in both cases, k is a constant and x is a variable).
The execution time of exponential algorithms grows much faster than that of polynomial ones.
Exponential (you have an exponential function if AT LEAST ONE EXPONENT depends on a function parameter):
E.g. f(x) = constant ^ x
Polynomial (you have a polynomial function if NO EXPONENT depends on a function parameter):
E.g. f(x) = x ^ constant
More precise definition of exponential
The definition of polynomial is pretty much universal and straightforward so I won't discuss it further.
The definition of Big O is also quite universal, you just have to think carefully about the M and the x0 in the Wikipedia definition and work through some examples.
So in this answer I would like to focus on the precise definition of exponential, as it requires a bit more thought, is less well known, and is less universal, especially when you start to think about edge cases. I will then contrast it with polynomials a bit further below.
https://cstheory.stackexchange.com/questions/22588/is-it-right-to-call-2-sqrtn-exponential
https://math.stackexchange.com/questions/55468/how-to-prove-that-exponential-grows-faster-than-polynomial
The most common definition of exponential time is:
2^{polynomial(n)}
where polynomial is a polynomial that:
is not constant, e.g. 1, otherwise the time is also constant
has a positive coefficient on its highest order term, otherwise the expression goes to zero at infinity, e.g. 2^{-n^2 + 2n + 1}
so a polynomial such as this would be good:
2^{n^2 + 2n + 1}
Note that the base 2 could be any number > 1 and the definition would still be valid because we can transform the base by multiplying the exponent, e.g.:
8^{polynomial(n)} = (2^3)^{polynomial(n)} = 2^{3 * polynomial(n)}
and 3 * polynomial(n) is also a polynomial.
Also note that constant addition does not matter, e.g. 2^{n + 1} = 2 * 2^{n} and so the + 1 does not matter for big O notation.
Therefore, two possible nice big O equivalent choices for a canonical "smallest exponential" would be for any small positive e either of:
(1 + e)^{n}
2^{en}
for very small e.
The highest order term of the polynomial in the exponent in both cases is n^1, order one, and therefore the smallest possible non-constant polynomial.
Those two choices are equivalent because, as we saw earlier, we can transform a change of base into a constant multiplier on the exponent.
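The base-change step is easy to sanity-check with exact integer arithmetic (a quick sketch):

# 8^n = (2^3)^n = 2^(3n): a change of base is just a constant factor
# in the exponent, so it stays within the same exponential class.
assert all(8**n == 2**(3 * n) for n in range(50))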
Superpolynomial and sub-exponential
But note that the above definition excludes some still very big things that show up in practice and that we would be tempted to call "exponential", e.g.:
2^{n^{1/2}}. This is a bit like a polynomial, but it is not a polynomial because polynomial powers must be integers, and here we have 1/2
2^{log_2(n)^2}
Those functions are still very large, because they grow faster than any polynomial.
But strictly speaking, they are big O smaller than the exponentials in our strict definition of exponential!
This motivates the following definitions:
superpolynomial: grows faster than any polynomial
subexponential: grows more slowly than any exponential, i.e. more slowly than (1 + e)^{n} for every e > 0
and all the examples given above in this section fall into both of those categories. TODO proof.
Keep in mind that if you put something very small in the exponent, it may of course drop back to polynomial, e.g.:
2^{log_2(n)} = n
And that is also true for anything smaller than log_2, e.g.:
2^{log_2(log_2(n))} = log_2(n)
is sub-polynomial.
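Both collapses can be checked exactly if we stick to powers of two and an integer log (log2_exact is a helper name I made up):

def log2_exact(n):
    # Integer log2, exact for powers of two.
    return n.bit_length() - 1

for k in [1, 2, 4, 8]:
    n = 2**k
    assert 2 ** log2_exact(n) == n                          # 2^(log_2 n) = n
    assert 2 ** log2_exact(log2_exact(n)) == log2_exact(n)  # sub-polynomial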
Important superpolynomial and sub-exponential examples
the general number field sieve, the fastest known algorithm (as of 2020) for integer factorization; see also: What is the fastest integer factorization algorithm? That algorithm has complexity of the form:
e^{(k + o(1)) * (ln(n))^(1/3) * (ln(ln(n)))^(2/3)}
where n is the factored number, and the little-o notation o(1) means a term that goes to 0 at infinity.
That complexity even has a named generalization as it presumably occurs in other analyses: L-notation.
Note that the above expression itself is clearly polynomial in n, because it is smaller than e^{(ln(n))^(1/3) * (ln(n))^(2/3)} = e^{ln(n)} = n.
However, in the context of factorization, what really matters is not n but rather "the number of digits of n", because parties using cryptography can easily generate keys that are twice as large, and the number of digits grows as log_2(n). So in that complexity, what we really care about is something like:
e^{(k + o(1)) * n^(1/3) * (ln(n))^(2/3)}
which is of course both superpolynomial and sub-exponential.
The fantastic answer at: What would cause an algorithm to have O(log log n) complexity? gives an intuitive explanation of where O(log log n) comes from: while log n comes from an algorithm that removes half of the options at each step, log log n comes from an algorithm that reduces the options to the square root of the total at each step!
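A sketch of that square-root intuition (function name is mine): repeatedly taking the square root until the value drops below 2 takes about log2(log2(n)) steps:

import math

def sqrt_steps(n):
    # Halving each step would give ~log2(n) steps;
    # square-rooting each step gives ~log2(log2(n)) steps.
    steps = 0
    while n >= 2:
        n = math.sqrt(n)
        steps += 1
    return steps

for n in [16, 256, 65_536, 2**32]:
    # The step count tracks log2(log2(n)) up to a constant.
    print(n, sqrt_steps(n), math.log2(math.log2(n)))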
https://quantumalgorithmzoo.org/ contains a list of algorithms which might be of interest to quantum computers, and in most cases, the quantum speedup relative to a classical computer is not strictly exponential, but rather superpolynomial. However, as this answer will have hopefully highlighted, this is still extremely significant and revolutionary. Understanding that repository is what originally motivated this answer :-)
It is also worth noting that we currently do not expect quantum computers to solve NP-complete problems, which are also generally expected to require exponential time to solve. But there is no proof otherwise either. See also: https://cs.stackexchange.com/questions/130470/can-quantum-computing-help-solve-np-complete-problems
https://math.stackexchange.com/questions/3975382/what-problems-are-known-to-be-require-superpolynomial-time-or-greater-to-solve asks about interesting algorithms that have been proven to require superpolynomial time (presumably with a proof of optimality; otherwise the general number field sieve would be an obvious choice, but as of 2020 we don't know whether it is optimal or not)
Proof that exponential is always larger than polynomial at infinity
https://math.stackexchange.com/questions/55468/how-to-prove-that-exponential-grows-faster-than-polynomial
Discussions of different possible definitions of sub-exponential
https://cstheory.stackexchange.com/questions/22588/is-it-right-to-call-2-sqrtn-exponential
https://en.wikipedia.org/w/index.php?title=Time_complexity&oldid=1026049783#Sub-exponential_time
Polynomial time, O(n^k), means the number of operations is proportional to a fixed power k of the input size.
Exponential time, O(k^n), means the number of operations is proportional to a constant k raised to the power of the input size.
Polynomial examples: n^2, n^3, n^100, 5n^7, etc.
Exponential examples: 2^n, 3^n, 100^n, 5^(7n), etc.
O(n^2) is a polynomial time complexity, while O(2^n) is an exponential time complexity.
Whether P = NP is an open question. For small inputs, an exponential-time algorithm can look just as fast as a polynomial one; it is only as the input size n grows that the exponential growth rate takes over, which is why the distinction depends on the size of the input n.
Optimization, SAT, clique, and independent set are examples of problems for which only exponential-time exact algorithms are known; finding polynomial-time algorithms for them would settle P = NP.
Here's the simplest explanation for newbies:
A Polynomial:
a function is polynomial when a variable is raised to the power of a constant, e.g.
f(n) = n ^ 2
while
An Exponential:
a function is exponential when a constant is raised to the power of a variable, e.g.
f(n) = 2 ^ n