Complexity analysis of the RSA algorithm

The security of RSA hinges upon a simple assumption:
Given N, e, and y = (x ^e) mod N, it is computationally intractable to
determine x.
This assumption is quite plausible. How might Eve try to guess x?
She could experiment with all possible values of x, each time checking whether x^e is equal to y mod N, but this would take exponential time. Or she could try to factor N to retrieve p and q, and then figure out d by inverting e modulo (p-1)(q-1), but we believe factoring to be hard. Intractability is normally a source of dismay; the insight of RSA lies in using it to advantage.
My question about the text above:
How do we get exponential time for checking each possible value of x in this context?

https://en.wikipedia.org/wiki/Time_complexity
...the time complexity is generally expressed as a function of the
size of the input
The size of the input for this task is proportional to b, the number of bits in N. Thus iterating through all possible values of x has time complexity O(2^b), which is exponential.
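To make this concrete, here is a minimal sketch of Eve's brute-force search (Python; the toy parameters below are mine, not from the text). Each try is cheap thanks to fast modular exponentiation, but the number of tries is on the order of N, i.e. about 2^b:

# Brute-force recovery of x from y = (x^e) mod N: up to N tries,
# and N is about 2^b for a b-bit modulus -- exponential in b.
def brute_force_rsa(N, e, y):
    for x in range(N):
        if pow(x, e, N) == y:  # modular exponentiation per candidate
            return x
    return None

# Toy example: N = 55, e = 3, secret x = 7, so y = 7^3 mod 55 = 13
print(brute_force_rsa(55, 3, 13))  # 7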
Algorithms whose time complexity is polynomial in the numeric value of the input are called pseudo-polynomial algorithms. For example, the integer factorization problem has no known polynomial-time algorithm, yet we can simply iterate from 2 to sqrt(N) and find all prime factors of N in O(sqrt(N)) time. That is polynomial in the value N, but the length of the input is only about log(N) bits, so this iteration is only a pseudo-polynomial solution.
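Here is a sketch of that trial-division loop (Python; my own rendering of the idea just described):

# Factor N by trial division: O(sqrt(N)) in the VALUE of N,
# i.e. O(2^(b/2)) in the number of bits b -- pseudo-polynomial.
def prime_factors(N):
    factors = []
    d = 2
    while d * d <= N:          # iterate from 2 up to sqrt(N)
        while N % d == 0:      # divide out each prime completely
            factors.append(d)
            N //= d
        d += 1
    if N > 1:                  # any remainder > 1 is itself prime
        factors.append(N)
    return factors

print(prime_factors(360))  # [2, 2, 2, 3, 3, 5]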

Related

Checking if an integer is an integer power of another?

This is identical to the question found on Check if one integer is an integer power of another, but I am wondering about the complexity of a method that I came up with to solve this problem.
Given an integer n and another integer m, is n = m^p for some integer p? Note that ^ here denotes exponentiation, not XOR.
There is a simple O(log_m n) solution based on dividing n repeatedly by m until it's 1 or until there's a non-zero remainder.
I'm thinking of a method inspired by binary search, and it's not clear to me how complexity should be calculated in this case.
Essentially you start with m, then you go to m^2, then m^4, m^8, m^16, .....
When you find that m^{2^k} > n, you check the range bounded by m^{2^{k-1}} and m^{2^k}. Is this solution O(log_2 (log_m(n)))?
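Here is a rough sketch of that doubling-plus-binary-search idea in Python (the function name and edge-case handling are mine, not from the question):

# Is n an integer power of m? First double the candidate exponent
# until m^k >= n, then binary-search the exponent in [k/2, k].
def is_power_of(n, m):
    if m <= 1:
        return n == m          # degenerate bases handled separately
    k = 1
    while m ** k < n:          # doubling phase: m, m^2, m^4, m^8, ...
        k *= 2
    lo, hi = k // 2, k         # binary-search phase
    while lo <= hi:
        mid = (lo + hi) // 2
        v = m ** mid
        if v == n:
            return True
        if v < n:
            lo = mid + 1
        else:
            hi = mid - 1
    return False

print(is_power_of(81, 3))   # True:  3^4 = 81
print(is_power_of(100, 3))  # False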
Somewhat related to this, if I do something like
m^2 * m^2
vs.
m * m * m * m
Do these 2 have the same complexity? If they do, then I think the algorithm I came up with is still O(log_m (n))
Not quite. First of all, let's assume that multiplication is O(1), and exponentiation a^b is O(log b) (using exponentiation by squaring).
Now, using your method of doubling the exponent p_candidate and then doing a binary search, you can find the real p in log(p) steps (or observe that p does not exist). But each try within the binary search requires you to compute m^p_candidate, whose cost is bounded by that of computing m^p, which by our assumption is O(log(p)). So the overall time complexity is O(log^2(p)).
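For reference, here is a standard exponentiation-by-squaring sketch (Python; my own code, not from the answer), which is where the O(log b) multiplication count comes from:

# Compute a^b using O(log b) multiplications by halving b.
def fast_pow(a, b):
    result = 1
    while b > 0:
        if b & 1:        # low bit set: fold in the current base
            result *= a
        a *= a           # square the base
        b >>= 1          # halve the exponent
    return result

print(fast_pow(3, 10))  # 59049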
But we want to express the time complexity in terms of the inputs n and m. From the relationship n = m^p, we get p = log(n)/log(m), and hence log(p) = log(log(n)/log(m)). Hence the overall time complexity is
O(log^2(log(n)/log(m)))
If you want to get rid of the m, you can provide a looser upper bound by using
O(log^2(log(n)))
which is close to, but not quite O(log(log(n))).
(Note that you can always omit the logarithmic bases in the O-notation since all logarithmic functions differ only by a constant factor.)
Now, the interesting question is: is this algorithm better than one that is O(log(n))? I haven't proved it, but I'm pretty certain that O(log^2(log(n))) is in O(log(n)) but not vice versa. Anyone care to prove it?

Polynomial in n code? what does it mean [duplicate]

Could someone explain the difference between polynomial-time, non-polynomial-time, and exponential-time algorithms?
For example, if an algorithm takes O(n^2) time, then which category is it in?
Below are some common Big-O functions encountered while analyzing algorithms.
O(1) - Constant time
O(log(n)) - Logarithmic time
O(n log(n)) - Linearithmic time
O((log(n))^c) - Polylogarithmic time
O(n) - Linear time
O(n^2) - Quadratic time
O(n^c) - Polynomial time
O(c^n) - Exponential time
O(n!) - Factorial time
(n = size of input, c = some constant)
[Graph of the growth of common Big-O functions omitted; credit: http://bigocheatsheet.com/]
Exponential is worse than polynomial.
O(n^2) falls into the quadratic category, which is a type of polynomial (the special case of the exponent being equal to 2) and better than exponential.
Exponential is much worse than polynomial. Look at how the functions grow
n = 10 | 100 | 1000
n^2 = 100 | 10000 | 1000000
k^n = k^10 | k^100 | k^1000
k^1000 is exceptionally huge unless k is smaller than something like 1.1. Even if every particle in the universe did 100 billion billion billion operations per second for trillions of billions of billions of years, it still wouldn't finish.
I didn't calculate it out, but it's that big.
O(n^2) is polynomial time. The polynomial is f(n) = n^2. On the other hand, O(2^n) is exponential time, where the exponential function implied is f(n) = 2^n. The difference is whether the function of n places n in the base of an exponentiation, or in the exponent itself.
Any exponential growth function will grow significantly faster (long term) than any polynomial function, so the distinction is relevant to the efficiency of an algorithm, especially for large values of n.
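A quick numeric check (a tiny Python sketch; the sample sizes are arbitrary) makes the gap concrete:

# Polynomial (n^2) vs exponential (2^n) growth at a few sizes.
for n in (10, 20, 50, 100):
    print(n, n ** 2, 2 ** n)
# 10   100     1024
# 20   400     1048576
# 50   2500    1125899906842624
# 100  10000   1267650600228229401496703205376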
Polynomial time.
A polynomial is a sum of terms that look like Constant * x^k
Exponential means something like Constant * k^x
(in both cases, k is a constant and x is a variable).
The execution time of exponential algorithms grows much faster than that of polynomial ones.
Exponential (you have an exponential function if at least one exponent depends on a parameter):
E.g. f(x) = constant ^ x
Polynomial (you have a polynomial function if no exponent depends on a function parameter):
E.g. f(x) = x ^ constant
More precise definition of exponential
The definition of polynomial is pretty much universal and straightforward so I won't discuss it further.
The definition of Big O is also quite universal, you just have to think carefully about the M and the x0 in the Wikipedia definition and work through some examples.
So in this answer I will focus on the precise definition of "exponential", since it requires a bit more thought, is less well known, and is less universal, especially once you start to think about edge cases. I will then contrast it with polynomials a bit further below.
https://cstheory.stackexchange.com/questions/22588/is-it-right-to-call-2-sqrtn-exponential
https://math.stackexchange.com/questions/55468/how-to-prove-that-exponential-grows-faster-than-polynomial
The most common definition of exponential time is:
2^{polynomial(n)}
where polynomial(n) is a polynomial that:
is not constant (e.g. 1), otherwise the time is also constant
has a positive coefficient on its highest-order term, otherwise the whole thing goes to zero at infinity, e.g. 2^{-n^2 + 2n + 1}
So a polynomial such as this one would qualify:
2^{n^2 + 2n + 1}
Note that the base 2 could be any number > 1 and the definition would still be valid, because a change of base becomes a multiplier on the exponent, e.g.:
8^{polynomial(n)} = (2^3)^{polynomial(n)} = 2^{3 * polynomial(n)}
and 3 * polynomial(n) is also a polynomial.
Also note that additive constants in the exponent do not matter, e.g. 2^{n + 1} = 2 * 2^{n}, so the + 1 disappears in big O notation.
Therefore, two possible nice big O equivalent choices for a canonical "smallest exponential" would be for any small positive e either of:
(1 + e)^{n}
2^{en}
for very small e.
The highest order term of the polynomial in the exponent in both cases is n^1, order one, and therefore the smallest possible non-constant polynomial.
Those two choices are equivalent because, as we saw earlier, we can transform base changes into an exponent multiplier.
Superpolynomial and sub-exponential
But note that the above definition excludes some still very big things that show up in practice and that we would be tempted to call "exponential", e.g.:
2^{n^{1/2}}. This is a bit like a polynomial, but it is not a polynomial because polynomial powers must be integers, and here we have 1/2
2^{log_2(n)^2}
Those functions are still very large, because they grow faster than any polynomial.
But strictly speaking, they are big O smaller than the exponentials in our strict definition of exponential!
This motivates the following definitions:
superpolynomial: grows faster than any polynomial
subexponential: grows more slowly than any exponential, i.e. more slowly than (1 + e)^{n} for every positive e
and all the examples given above in this section fall into both of those categories. TODO proof.
Keep in mind that if you put something very small in the exponent, it can drop back down to polynomial, e.g.:
2^{log_2(n)} = n
The same holds for anything that grows more slowly than log_2, e.g.:
2^{log_2(log_2(n))} = log_2(n)
which is sub-polynomial.
Important superpolynomial and sub-exponential examples
The general number field sieve, the fastest known algorithm for integer factorization as of 2020 (see also: What is the fastest integer factorization algorithm?), has complexity of the form:
e^{(k + o(1)) * ln(n)^{1/3} * ln(ln(n))^{2/3}}
where n is the number being factored, and the little-o notation o(1) means a term that goes to 0 at infinity.
That complexity even has a named generalization, as it presumably occurs in other analyses: L-notation.
Note that the above expression itself is clearly polynomial in n: since ln(ln(n)) <= ln(n), it is smaller than e^{(k + o(1)) * ln(n)^{1/3} * ln(n)^{2/3}} = e^{(k + o(1)) * ln(n)} = n^{k + o(1)}.
However, in the context of factorization, what really matters is not n but rather the number of digits of n, because parties in a cryptographic exchange can easily generate keys that are twice as large, and the number of digits grows as log_2(n). So in that complexity, what we really care about is something like (with n now standing for the number of digits):
e^{(k + o(1)) * n^{1/3} * ln(n)^{2/3}}
which is of course both superpolynomial and sub-exponential.
The fantastic answer at: What would cause an algorithm to have O(log log n) complexity? gives an intuitive explanation of where O(log log n) comes from: log n arises from an algorithm that removes half of the options at each step, while log log n arises from an algorithm that reduces the options to the square root of the total at each step!
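A tiny sketch (Python; mine, not from the linked answer) of that square-root intuition: repeatedly replacing n with sqrt(n) reaches a constant in about log log n steps:

import math

# Count how many square-root steps it takes to get n below 2.
# For n = 2^(2^k), this takes about k = log2(log2(n)) steps.
def sqrt_steps(n):
    steps = 0
    x = float(n)
    while x > 2:
        x = math.sqrt(x)
        steps += 1
    return steps

print(sqrt_steps(2 ** 16))  # 4: 65536 -> 256 -> 16 -> 4 -> 2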
https://quantumalgorithmzoo.org/ contains a list of algorithms which might be of interest to quantum computers, and in most cases, the quantum speedup relative to a classical computer is not strictly exponential, but rather superpolynomial. However, as this answer will have hopefully highlighted, this is still extremely significant and revolutionary. Understanding that repository is what originally motivated this answer :-)
It is also worth noting that we currently do not expect quantum computers to solve NP-complete problems, which are also generally expected to require exponential time to solve. But there is no proof otherwise either. See also: https://cs.stackexchange.com/questions/130470/can-quantum-computing-help-solve-np-complete-problems
https://math.stackexchange.com/questions/3975382/what-problems-are-known-to-be-require-superpolynomial-time-or-greater-to-solve asks about interesting algorithms that have been proven to require superpolynomial time (presumably with a proof of optimality; otherwise the general number field sieve would be an obvious choice, but as of 2020 we don't know whether it is optimal).
Proof that exponential is always larger than polynomial at infinity
https://math.stackexchange.com/questions/55468/how-to-prove-that-exponential-grows-faster-than-polynomial
Discussions of different possible definitions of sub-exponential
https://cstheory.stackexchange.com/questions/22588/is-it-right-to-call-2-sqrtn-exponential
https://en.wikipedia.org/w/index.php?title=Time_complexity&oldid=1026049783#Sub-exponential_time
Polynomial time, O(n^k): the number of operations is proportional to a fixed power k of the input size.
Exponential time, O(k^n): the number of operations grows exponentially with the input size.
Polynomial examples: n^2, n^3, n^100, 5n^7, etc.
Exponential examples: 2^n, 3^n, 100^n, 5^(7n), etc.
O(n^2) is polynomial time complexity, while O(2^n) is exponential time complexity.
For small inputs the two may be comparable, but as the input size n grows, an exponential growth rate quickly dominates any polynomial one; the gap between them depends entirely on the size of the input.
Problems such as SAT, clique, and independent set (and related optimization problems) are ones for which only exponential-time algorithms are known; whether they admit polynomial-time algorithms is essentially the P versus NP question.
Here's the simplest explanation for newbies:
A Polynomial:
a function where a variable is raised to a constant power, e.g.
f(n) = n ^ 2
while
An Exponential:
a function where a constant is raised to a variable power, e.g.
f(n) = 2 ^ n

Why does the exponential time apply only to W, and not N, in pseudo-polynomial time complexity?

I have been reading about pseudo-polynomial time and have an unresolved question about it.
For example, the 0-1 knapsack algorithm's time complexity is O(NW),
where N is the number of items and W is the capacity of the knapsack.
It is pseudo-polynomial because the time complexity is O(N × 2^(bits in W)).
But then, couldn't the time complexity also be written as O(2^(bits in N) × 2^(bits in W))? Why is the 0-1 knapsack algorithm pseudo-polynomial only because of W, and not N?
Because it is about the size of the input, not its value. N is the size of an array. W is just a value.
I like to think of it as pseudo-polynomial because the algorithm behaves exponentially if we encode W in binary (or any larger base), but behaves polynomially if we encode W in unary.
For N this isn't true: no matter how we encode the input array, the time function behaves as a polynomial.
In practice
This has a practical consequence in how easy it is to create an input that takes too long to process. If we want to attack via N, we must create an array with N elements. But if we attack via W, it is as easy as putting a single big value in the input (using only log2(W) bits).
Note that N indicates the number of elements you have; each element is represented as (w_i, c_i) (weight, cost), so the input size is Omega(N), since it must contain all N elements.
However, W is a single number, and since you need only log(W) bits to represent it, its contribution to the input size is Omega(log(W)).
In conclusion, we say knapsack (for example) is pseudo-polynomial because it is NOT polynomial in the size of the input; it is exponential in it: it takes O(N * 2^b) time, where b is the number of bits used for W, while the input size is only O(N + b), so it takes exponential time relative to the input size.

What is pseudopolynomial time? How does it differ from polynomial time?

What is pseudopolynomial time? How does it differ from polynomial time? Some algorithms that run in pseudopolynomial time have runtimes like O(nW) (for the 0/1 Knapsack Problem) or O(√n) (for trial division); why doesn't that count as polynomial time?
To understand the difference between polynomial time and pseudopolynomial time, we need to start off by formalizing what "polynomial time" means.
The common intuition for polynomial time is "time O(n^k) for some k." For example, selection sort runs in time O(n^2), which is polynomial time, while brute-force solving TSP takes time O(n · n!), which isn't polynomial time.
These runtimes all refer to some variable n that tracks the size of the input. For example, in selection sort, n refers to the number of elements in the array, while in TSP n refers to the number of nodes in the graph. In order to standardize the definition of what "n" actually means in this context, the formal definition of time complexity defines the "size" of a problem as follows:
The size of the input to a problem is the number of bits required to write out that input.
For example, if the input to a sorting algorithm is an array of 32-bit integers, then the size of the input would be 32n, where n is the number of entries in the array. In a graph with n nodes and m edges, the input might be specified as a list of all the nodes followed by a list of all the edges, which would require Ω(n + m) bits.
Given this definition, the formal definition of polynomial time is the following:
An algorithm runs in polynomial time if its runtime is O(xk) for some constant k, where x denotes the number of bits of input given to the algorithm.
When working with algorithms that process graphs, lists, trees, etc., this definition more or less agrees with the conventional definition. For example, suppose you have a sorting algorithm that sorts arrays of 32-bit integers. If you use something like selection sort to do this, the runtime, as a function of the number of input elements in the array, will be O(n^2). But how does n, the number of elements in the input array, correspond to the number of bits of input? As mentioned earlier, the number of bits of input will be x = 32n. Therefore, if we express the runtime of the algorithm in terms of x rather than n, we get that the runtime is O(x^2), and so the algorithm runs in polynomial time.
Similarly, suppose that you do depth-first search on a graph, which takes time O(m + n), where m is the number of edges in the graph and n is the number of nodes. How does this relate to the number of bits of input given? Well, if we assume that the input is specified as an adjacency list (a list of all the nodes and edges), then as mentioned earlier the number of input bits will be x = Ω(m + n). Therefore, the runtime will be O(x), so the algorithm runs in polynomial time.
Things break down, however, when we start talking about algorithms that operate on numbers. Let's consider the problem of testing whether a number is prime or not. Given a number n, you can test if n is prime using the following algorithm:
def is_prime(n):
    # Trial division: try every candidate divisor from 2 to n - 1.
    for i in range(2, n):
        if n % i == 0:
            return False
    return True
So what's the time complexity of this code? Well, that inner loop runs O(n) times, and each iteration does some amount of work to compute n mod i (as a really conservative upper bound, this can certainly be done in time O(n^3)). Therefore, this overall algorithm runs in time O(n^4) and possibly a lot faster.
In 2004, three computer scientists published a paper called PRIMES is in P giving a polynomial-time algorithm for testing whether a number is prime. It was considered a landmark result. So what's the big deal? Don't we already have a polynomial-time algorithm for this, namely the one above?
Unfortunately, we don't. Remember, the formal definition of time complexity talks about the complexity of the algorithm as a function of the number of bits of input. Our algorithm runs in time O(n^4), but what is that as a function of the number of input bits? Well, writing out the number n takes O(log n) bits. Therefore, if we let x be the number of bits required to write out the input n, then n = O(2^x), and so the runtime of this algorithm is actually O(n^4) = O((2^x)^4) = O(2^(4x)), which is not a polynomial in x.
This is the heart of the distinction between polynomial time and pseudopolynomial time. On the one hand, our algorithm is O(n^4), which looks like a polynomial, but on the other hand, under the formal definition of polynomial time, it's not polynomial-time.
To get an intuition for why the algorithm isn't a polynomial-time algorithm, think about the following. Suppose I want the algorithm to have to do a lot of work. If I write out an input like this:
10001010101011
then it will take some worst-case amount of time, say T, to complete. If I now add a single bit to the end of the number, like this:
100010101010111
The runtime will now (in the worst case) be 2T. I can double the amount of work the algorithm does just by adding one more bit!
An algorithm runs in pseudopolynomial time if the runtime is some polynomial in the numeric value of the input, rather than in the number of bits required to represent it. Our prime testing algorithm is a pseudopolynomial time algorithm, since it runs in time O(n^4), but it's not a polynomial-time algorithm because, as a function of the number of bits x required to write out the input, the runtime is O(2^(4x)). The reason that the "PRIMES is in P" paper was so significant was that its runtime was (roughly) O(log^12 n), which as a function of the number of bits is O(x^12).
So why does this matter? Well, we have many pseudopolynomial time algorithms for factoring integers. However, these algorithms are, technically speaking, exponential-time algorithms. This is very useful for cryptography: if you want to use RSA encryption, you need to be able to trust that we can't factor numbers easily. By increasing the number of bits in the numbers to a huge value (say, 1024 bits), you can make the amount of time that the pseudopolynomial-time factoring algorithm must take get so large that it would be completely and utterly infeasible to factor the numbers. If, on the other hand, we can find a polynomial-time factoring algorithm, this isn't necessarily the case. Adding in more bits may cause the work to grow by a lot, but the growth will only be polynomial growth, not exponential growth.
That said, in many cases pseudopolynomial time algorithms are perfectly fine because the size of the numbers won't be too large. For example, counting sort has runtime O(n + U), where U is the largest number in the array. This is pseudopolynomial time (because the numeric value of U requires O(log U) bits to write out, so the runtime is exponential in the input size). If we artificially constrain U so that U isn't too large (say, if we let U be 2), then the runtime is O(n), which actually is polynomial time. This is how radix sort works: by processing the numbers one bit at a time, the runtime of each round is O(n), so the overall runtime is O(n log U). This actually is polynomial time, because writing out n numbers to sort uses Ω(n) bits and the value of log U is directly proportional to the number of bits required to write out the maximum value in the array.
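As a sketch of counting sort under those assumptions (Python; the variable names are mine), the loop over the value range U is exactly the pseudopolynomial part:

# Counting sort: O(n + U), where U is the largest value in the
# array. Polynomial in the VALUE U, but U takes only O(log U)
# bits to write down, so this is pseudopolynomial in input size.
def counting_sort(a):
    if not a:
        return []
    U = max(a)
    counts = [0] * (U + 1)       # O(U) space and initialization
    for x in a:                  # O(n): tally each value
        counts[x] += 1
    out = []
    for v in range(U + 1):       # O(U): emit values in order
        out.extend([v] * counts[v])
    return out

print(counting_sort([3, 1, 4, 1, 5]))  # [1, 1, 3, 4, 5]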
Pseudo-polynomial time complexity means polynomial in the value/magnitude of input but exponential in the size of input.
By size we mean the number of bits required to write the input.
From the standard knapsack pseudo-code (rendered below as runnable Python), we can find the time complexity to be O(nW).
# Input:
#   v: list of the n item values
#   w: list of the n item weights
#   W: knapsack capacity
def knapsack(v, w, W):
    n = len(v)
    # m[i][j] = best value using the first i items with capacity j
    m = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(W + 1):
            if j >= w[i - 1]:
                # either skip item i or take it
                m[i][j] = max(m[i - 1][j],
                              m[i - 1][j - w[i - 1]] + v[i - 1])
            else:
                m[i][j] = m[i - 1][j]
    return m[n][W]
Here, W is not polynomial in the length of the input though, which is what makes it pseudo-polynomial.
Let s be the number of bits required to represent W,
i.e. size of input = s = log(W) (log = log base 2).
=> 2^s = 2^(log(W))
=> 2^s = W (because 2^(log(x)) = x)
Now, running time of knapsack = O(nW) = O(n * 2^s),
which is not polynomial in the input size.
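To see the O(n * 2^s) blow-up concretely, note that each extra bit in W doubles the number of DP-table cells the knapsack code above must fill, even though the input only grew by one bit (a small illustrative sketch; the choice of n = 3 is arbitrary):

# Each additional bit in W doubles the DP table, and hence
# (roughly) the running time of the O(nW) knapsack algorithm.
n = 3
for s in (10, 11, 12):          # number of bits in W
    W = 2 ** s
    print(f"s={s}  W={W}  DP cells = {(n + 1) * (W + 1)}")
# s=10  W=1024  DP cells = 4100
# s=11  W=2048  DP cells = 8196
# s=12  W=4096  DP cells = 16388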
