NP-hardness. Is it average case or worst-case?

Do we measure NP-hardness in terms of average-case hardness or worst-case hardness?
I've found this here:
"However, NP-completeness is defined in terms of worst-case complexity".
Does this remain true for NP-hardness?
I don't know what the term "worst-case complexity" means. What is the difference between worst-case complexity and worst-case problems?

An interesting nuance here is that NP-hardness, by itself, doesn't speak about worst-case or average-case complexity. Rather, the formal definition of NP-hardness says only that there's a polynomial-time reduction from every problem in NP to any NP-hard problem. That reduction means that any instance of any problem in NP could be solved by applying the reduction and then solving the NP-hard problem. But because this applies to "any instance" and the specific transform done by the reduction isn't specified, that definition by itself doesn't say anything about average-case complexity.
We can artificially construct NP-hard problems that are extremely easy to solve on average. Here's an example. Take an NP-hard problem - say, the problem of checking whether a graph is 3-colorable. We can solve this in time (roughly) O(3^n) by simply trying all possible colorings. (The actual time complexity is a bit higher because we need to check edges in each step, but let's ignore that for now). Now, we'll invent a contrived problem of the following form:
Given a string of 0s, 1s, and 2s, determine whether
the first half of the string contains a 1 or a 2, or
the first half is all 0s and the back half of the string is a base-3 encoding of a graph that's 3-colorable.
This problem is NP-hard because we can reduce graph 3-colorability to it by just prepending a bunch of 0s to any input instance of 3-colorability. But on average it's very easy to solve this problem. The probability that a string's first half is all 0s is 1 / 3^(n/2), where n is the length of the string. This means that even if it takes O(3^(n/2)) time to check the coloring of the graph encoded in the back half of a suitable string, mathematically the average amount of work required to solve this problem is O(1). (I'm aware I'm conflating the meaning of n as "the number of nodes in a graph" and "how long the string is," but the math still checks out here.)
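To make this concrete, here is a minimal Python sketch of a solver for the contrived problem. The base-3 graph encoding is left abstract above, so the decoder is passed in as a hypothetical callable; is_3_colorable is the naive O(3^n) brute force described earlier.

    from itertools import product

    def is_3_colorable(n, edges):
        # Naive brute force: try all 3^n colorings and check every edge.
        return any(all(coloring[u] != coloring[v] for u, v in edges)
                   for coloring in product(range(3), repeat=n))

    def solve_contrived(s, decode):
        # s: string over {'0', '1', '2'}. decode: hypothetical callable mapping
        # the back half of the string to a graph (n, edges); the exact encoding
        # is not specified in the answer, so it is left abstract here.
        half = len(s) // 2
        if any(ch in '12' for ch in s[:half]):
            return True                  # overwhelmingly common case: an O(n) scan at worst
        n, edges = decode(s[half:])      # rare all-zeros case...
        return is_3_colorable(n, edges)  # ...pays the exponential cost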
What's worrisome is that we still don't have a very well-developed theory of average-case complexity for NP-hard problems. Some NP-hard problems, like the one above, are very easy on average. But others like SAT, graph coloring, etc. are mysteries to us, where we legitimately don't know how hard they are for random instances. It's entirely possible, for example, that P ≠ NP and yet the average-case hardness of individual NP-hard problems remains unresolved.

Related

What is nondeterministic in NP exactly?

I am studying NP-completeness and I have a question about the definition of NP problems.
The material says
nondeterministic refers to the fact that a solution can be guessed out
of polynomially many options in O(1) time
Here, what does it mean by polynomially many options in O(1) time?
For example, in the case of the famous 3SAT problem, aren't there exponentially many options?
(Because each variable can be true or false, and if there are n variables, the total number of options is 2 * 2 * 2 * ... * 2 = 2^n.)
However, it says the 3SAT problem is an NP problem. How can it be an NP problem even though there are exponentially many certificates?
Thanks
That quote seems to be a weird way of phrasing it, but it might refer to something similar to being able to pick a random number between 1 and n in O(1) - there are n possibilities, but picking one of them takes only O(1).
See also: nondeterministic algorithms.
"Nondeterministic polynomial time" is the full definition of NP - "polynomial time" is important - each decision you make might take O(1), but there are polynomially many such decisions, leading to something that can theoretically be solved in polynomial time, if you can make the right choice at every step or execute all options at the same time.
Picture a k-ary tree with height p(n). You can get to the correct leaf in O(p(n)) if you (randomly) pick the correct child at each step from the root, or if you can somehow visit all paths concurrently.
Of course, in practice, you can't rely on making correct random choices, nor do you have infinitely many processors - if you were to visit all nodes sequentially, that would take O(k^p(n)).
For 3SAT, we can randomly pick true or false for every variable, which leads us to a polynomial-time algorithm that would produce the correct result if all our random choices were correct.
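To make the certificate view concrete, here is a small Python sketch (the clause and assignment encodings are my own choices, not from the question): there may be 2^n candidate assignments, but verifying any single guessed assignment takes only polynomial time, which is exactly what membership in NP requires.

    import random

    def verify(clauses, assignment):
        # clauses: list of clauses, each a list of ints; literal k means
        # variable k is true, -k means variable k is negated.
        # assignment: dict mapping variable -> bool (the nondeterministic "guess").
        # Runs in time polynomial in the formula size.
        return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
                   for clause in clauses)

    # One random "guess" out of 2^n options; a nondeterministic machine would
    # always guess right (or, equivalently, explore all branches at once).
    clauses = [[1, 2, -3], [-1, 3, 4], [2, -4, 1]]
    guess = {v: random.choice([True, False]) for v in range(1, 5)}
    print(verify(clauses, guess))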

I need to solve an NP-hard problem. Is there hope?

There are a lot of real-world problems that turn out to be NP-hard. If we assume that P ≠ NP, there aren't any polynomial-time algorithms for these problems.
If you have to solve one of these problems, is there any hope that you'll be able to do so efficiently? Or are you just out of luck?
If a problem is NP-hard, under the assumption that P ≠ NP there is no algorithm that is
deterministic,
exactly correct on all inputs all the time, and
efficient on all possible inputs.
If you absolutely need all of the above guarantees, then you're pretty much out of luck. However, if you're willing to settle for a solution to the problem that relaxes some of these constraints, then there very well still might be hope! Here are a few options to consider.
Option One: Approximation Algorithms
If a problem is NP-hard and P ≠ NP, it means that there's no algorithm that will always efficiently produce the exactly correct answer on all inputs. But what if you don't need the exact answer? What if you just need answers that are close to correct? In some cases, you may be able to combat NP-hardness by using an approximation algorithm.
For example, a canonical example of an NP-hard problem is the traveling salesman problem. In this problem, you're given as input a complete graph representing a transportation network. Each edge in the graph has an associated weight. The goal is to find a cycle that goes through every node in the graph exactly once and which has minimum total weight. In the case where the edge weights satisfy the triangle inequality (that is, the best route from point A to point B is always to follow the direct link from A to B), you can get back a cycle whose cost is at most 3/2 times optimal by using the Christofides algorithm.
As another example, the 0/1 knapsack problem is known to be NP-hard. In this problem, you're given a bag and a collection of objects with different weights and values. The goal is to pack the maximum value of objects into the bag without exceeding the bag's weight limit. Even though computing an exact answer requires exponential time in the worst case, it's possible to approximate the correct answer to an arbitrary degree of precision in polynomial time. (The algorithm that does this is called a fully polynomial-time approximation scheme or FPTAS).
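As a rough illustration of how such a scheme works, here is a hedged Python sketch of the standard value-scaling FPTAS for 0/1 knapsack: scale values down by a factor that depends on the error tolerance, then run a dynamic program over scaled values. The item format and function name are my own; this is a sketch, not a reference implementation.

    def knapsack_fptas(items, W, eps):
        # items: list of (weight, value) pairs; W: capacity; eps > 0: error tolerance.
        # Assumes at least one item with positive value.
        n = len(items)
        vmax = max(v for _, v in items)
        K = eps * vmax / n                       # scaling factor
        scaled = [int(v / K) for _, v in items]  # rounded-down scaled values
        V = sum(scaled)
        INF = float('inf')
        # minw[val] = minimum weight needed to achieve scaled value exactly val
        minw = [0] + [INF] * V
        for (w, _), sv in zip(items, scaled):
            for val in range(V, sv - 1, -1):
                minw[val] = min(minw[val], minw[val - sv] + w)
        # The best reachable scaled value corresponds to a solution whose true
        # value is at least (1 - eps) times optimal; runtime is polynomial in n and 1/eps.
        return max(val for val in range(V + 1) if minw[val] <= W)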
Unfortunately, we do have some theoretical limits on the approximability of certain NP-hard problems. The Christofides algorithm mentioned earlier gives a 3/2 approximation to TSP where the edges obey the triangle inequality, but interestingly enough it's possible to show that if P ≠ NP, there is no polynomial-time approximation algorithm for general TSP (without the triangle inequality) that can get within any constant factor of optimal. Usually, you need to do some research to learn more about which problems can be well-approximated and which ones can't, since many NP-hard problems can be approximated well and many can't. There doesn't seem to be a unified theme.
Option Two: Heuristics
In many NP-hard problems, standard approaches like greedy algorithms won't always produce the right answer, but often do reasonably well on "reasonable" inputs. In many cases, it's reasonable to attack NP-hard problems with heuristics. The exact definition of a heuristic varies from context to context, but typically a heuristic is either an approach to a problem that "often" gives back good answers at the cost of sometimes giving back wrong answers, or is a useful rule of thumb that helps speed up searches even if it might not always guide the search the right way.
As an example of the first type of heuristic, let's look at the graph-coloring problem. This NP-hard problem asks, given a graph, to find the minimum number of colors necessary to paint the nodes in the graph such that no edge's endpoints are the same color. This turns out to be a particularly tough problem to solve with many other approaches (the best known approximation algorithms have terrible bounds, and it's not believed to have an efficient parameterized algorithm). However, there are many heuristics for graph coloring that do quite well in practice. Many greedy coloring heuristics exist for assigning colors to nodes in a reasonable order, and these heuristics often do quite well in practice. Unfortunately, sometimes these heuristics give terrible answers back, but provided that the graph isn't pathologically constructed the heuristics often work just fine.
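A minimal sketch of one such greedy heuristic (the highest-degree-first variant, sometimes called the Welsh-Powell ordering; the choice of variant is mine):

    def greedy_coloring(adj):
        # adj: dict mapping each node to the set of its neighbors.
        # Visit nodes in descending order of degree.
        order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
        color = {}
        for v in order:
            taken = {color[u] for u in adj[v] if u in color}
            c = 0
            while c in taken:   # smallest color not used by any colored neighbor
                c += 1
            color[v] = c
        return color            # often few colors in practice, no guarantee in general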
As an example of the second type of heuristic, it's helpful to look at SAT solvers. SAT, the Boolean satisfiability problem, was the first problem proven to be NP-hard. The problem asks, given a propositional formula (often written in conjunctive normal form), to determine whether there is a way to assign values to the variables such that the overall formula evaluates to true. Modern SAT solvers are getting quite good at solving SAT in many cases by using heuristics to guide their search over possible variable assignments. One famous SAT-solving algorithm, DPLL, essentially tries all possible assignments to see if the formula is satisfiable, using heuristics to speed up the search. For example, if it finds that a variable is either always true or always false, DPLL will try assigning that variable its forced value before trying other variables. DPLL also finds unit clauses (clauses with just one literal) and sets those variables' values before trying other variables. The net effect of these heuristics is that DPLL ends up being very fast in practice, even though it's known to have exponential worst-case behavior.
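Here is a hedged sketch of the DPLL skeleton with unit propagation; pure-literal elimination and the smarter branching heuristics of real solvers are omitted, and the clause representation is my own choice:

    def dpll(clauses):
        # clauses: list of sets of ints; literal k means variable k is true,
        # -k means variable k is negated. Returns True iff satisfiable.
        while True:                      # unit propagation
            units = [next(iter(c)) for c in clauses if len(c) == 1]
            if not units:
                break
            lit = units[0]
            new_clauses = []
            for c in clauses:
                if lit in c:
                    continue             # clause already satisfied
                reduced = c - {-lit}     # drop the falsified literal
                if not reduced:
                    return False         # empty clause: conflict
                new_clauses.append(reduced)
            clauses = new_clauses
        if not clauses:
            return True                  # every clause satisfied
        lit = next(iter(clauses[0]))     # branch: try lit, then its negation
        return dpll(clauses + [{lit}]) or dpll(clauses + [{-lit}])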
Option Three: Pseudopolynomial-Time Algorithms
If P ≠ NP, then no NP-hard problem can be solved in polynomial time. However, in some cases, the definition of "polynomial time" doesn't necessarily match the standard intuition of polynomial time. Formally speaking, polynomial time means polynomial in the number of bits necessary to specify the input, which doesn't always sync up with what we consider the input to be.
As an example, consider the set partition problem. In this problem, you're given a set of numbers and need to determine whether there's a way to split the set into two smaller sets, each of which has the same sum. The naive solution to this problem runs in time O(2^n) and works by just brute-force testing all subsets. With dynamic programming, though, it's possible to solve this problem in time O(nN), where n is the number of elements in the set and N is the maximum value in the set. Technically speaking, the runtime O(nN) is not polynomial time because the numeric value N is written out in only log2 N bits, but assuming that the numeric value of N isn't too large, this is a perfectly reasonable runtime.
This algorithm is called a pseudopolynomial-time algorithm because the runtime O(nN) "looks" like a polynomial, but technically speaking is exponential in the size of the input. Many NP-hard problems, especially ones involving numeric values, admit pseudopolynomial-time algorithms and are therefore easy to solve assuming that the numeric values aren't too large.
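A sketch of that dynamic program for set partition: track which subset sums up to half the total are reachable. Written this way the runtime is O(n · S) where S is the total sum; since S ≤ nN, this is consistent with the pseudopolynomial bound above.

    def can_partition(nums):
        total = sum(nums)
        if total % 2:                          # odd total can't split evenly
            return False
        target = total // 2
        reachable = [True] + [False] * target  # reachable[s]: some subset sums to s
        for x in nums:
            for s in range(target, x - 1, -1): # descending: each item used at most once
                if reachable[s - x]:
                    reachable[s] = True
        return reachable[target]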
For more information on pseudopolynomial time, check out this earlier Stack Overflow question about pseudopolynomial time.
Option Four: Randomized Algorithms
If a problem is NP-hard and P ≠ NP, then there is no deterministic algorithm that can solve that problem in worst-case polynomial time. But what happens if we allow for algorithms that introduce randomness? If we're willing to settle for an algorithm that gives a good answer on expectation, then we can often get relatively good answers to NP-hard problems in not much time.
As an example, consider the maximum cut problem. In this problem, you're given an undirected graph and want to find a way to split the nodes in the graph into two nonempty groups A and B with the maximum number of edges running between the groups. This has some interesting applications in computational physics (unfortunately, I don't understand them at all, but you can peruse this paper for some details about this). This problem is known to be NP-hard, but there's a simple randomized approximation algorithm for it. If you just toss each node into one of the two groups completely at random, you end up with a cut that, on expectation, is within 50% of the optimal solution.
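The randomized algorithm is almost embarrassingly short; here is a sketch (the graph representation is my own choice):

    import random

    def random_cut(nodes, edges):
        # Toss each node into group A or B uniformly at random.
        in_a = {v: random.random() < 0.5 for v in nodes}
        # Each edge crosses the cut with probability 1/2, so in expectation the
        # cut has |E|/2 edges, which is at least half the optimum. (A small
        # fix-up would be needed to guarantee both groups are nonempty.)
        return sum(1 for u, v in edges if in_a[u] != in_a[v])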
Returning to SAT, many modern SAT solvers use some degree of randomness to guide the search for a satisfying assignment. The WalkSAT and GSAT algorithms, for example, work by picking a random clause that isn't currently satisfied and trying to satisfy it by flipping some variable's truth value. This often guides the search toward a satisfying assignment, causing these algorithms to work well in practice.
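A sketch of the random-walk flavor of this idea (this is a simplified pure random-walk variant; real WalkSAT also includes a greedy step that prefers the flip breaking the fewest clauses, which is omitted here, and the parameter defaults are arbitrary choices of mine):

    import random

    def walksat(clauses, n_vars, max_flips=100000):
        # clauses: list of lists of ints (positive literal = variable true).
        assign = {v: random.choice([True, False]) for v in range(1, n_vars + 1)}
        satisfied = lambda lit: assign[abs(lit)] == (lit > 0)
        for _ in range(max_flips):
            unsat = [c for c in clauses if not any(satisfied(l) for l in c)]
            if not unsat:
                return assign                 # found a satisfying assignment
            clause = random.choice(unsat)     # pick an unsatisfied clause at random
            var = abs(random.choice(clause))  # flip one of its variables
            assign[var] = not assign[var]
        return None  # gave up; the formula may still be satisfiable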
It turns out there's a lot of open theoretical problems about the ability to solve NP-hard problems using randomized algorithms. If you're curious, check out the complexity class BPP and the open problem of its relation to NP.
Option Five: Parameterized Algorithms
Some NP-hard problems take in multiple different inputs. For example, the long path problem takes as input a graph and a length k, then asks whether there's a simple path of length k in the graph. The subset sum problem takes in as input a set of numbers and a target number k, then asks whether there's a subset of the numbers that adds up to exactly k.
Interestingly, in the case of the long path problem, there's an algorithm (the color-coding algorithm) whose runtime is O((n^3 log n) · b^k), where n is the number of nodes, k is the length of the requested path, and b is some constant. This runtime is exponential in k, but is only polynomial in n, the number of nodes. This means that if k is fixed and known in advance, the runtime of the algorithm as a function of the number of nodes is only O(n^3 log n), which is quite a nice polynomial. Similarly, in the case of the subset sum problem, there's a dynamic programming algorithm whose runtime is O(nW), where n is the number of elements of the set and W is the maximum weight of those elements. If W is fixed in advance as some constant, then this algorithm will run in time O(n), meaning that it will be possible to exactly solve subset sum in linear time.
Both of these algorithms are examples of parameterized algorithms, algorithms for solving NP-hard problems that split the hardness of the problem into two pieces - a "hard" piece that depends on some input parameter to the problem, and an "easy" piece that scales gracefully with the size of the input. These algorithms can be useful for finding exact solutions to NP-hard problems when the parameter in question is small. The color-coding algorithm mentioned above, for example, has proven quite useful in practice in computational biology.
However, some problems are conjectured to not have any nice parameterized algorithms. Graph coloring, for example, is suspected to not have any efficient parameterized algorithms. In the cases where parameterized algorithms exist, they're often quite efficient, but you can't rely on them for all problems.
For more information on parameterized algorithms, check out this earlier Stack Overflow question.
Option Six: Fast Exponential-Time Algorithms
Exponential-time algorithms don't scale well - their runtimes approach the lifetime of the universe for inputs as small as 100 or 200 elements.
What if you need to solve an NP-hard problem, but you know the input is reasonably small - say, perhaps its size is somewhere between 50 and 70. Standard exponential-time algorithms are probably not going to be fast enough to solve these problems. What if you really do need an exact solution to the problem and the other approaches here won't cut it?
In some cases, there are "optimized" exponential-time algorithms for NP-hard problems. These are algorithms whose runtime is exponential, but not as bad an exponential as the naive solution. For example, a simple exponential-time algorithm for the 3-coloring problem (given a graph, determine if you can color the nodes one of three colors each so that no edge's endpoints are the same color) might work by checking each possible way of coloring the nodes in the graph, testing if any of them are 3-colorings. There are 3^n possible ways to do this, so in the worst case the runtime of this algorithm will be O(3^n · poly(n)) for some small polynomial poly(n). However, using more clever tricks and techniques, it's possible to develop an algorithm for 3-colorability that runs in time O(1.3289^n). This is still an exponential-time algorithm, but it's a much faster exponential-time algorithm. For example, 3^19 is about 10^9, so if a computer can do one billion operations per second, it can use our initial brute-force algorithm to (roughly speaking) solve 3-colorability in graphs with up to 19 nodes in one second. Using the O(1.3289^n)-time exponential algorithm, we could solve instances of up to about 73 nodes in about a second. That's a huge improvement - we've grown the size we can handle in one second by more than a factor of three!
As another famous example, consider the traveling salesman problem. There's an obvious O(n! · poly(n))-time solution to TSP that works by enumerating all permutations of the nodes and testing the paths resulting from those permutations. However, by using a dynamic programming algorithm similar to that used by the color-coding algorithm, it's possible to improve the runtime to "only" O(n^2 · 2^n). Given that 13! is about one billion, the naive solution would let you solve TSP for 13-node graphs in roughly a second. For comparison, the DP solution lets you solve TSP on 28-node graphs in about one second.
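Here is a sketch of that O(n^2 · 2^n) dynamic program, usually called the Held-Karp algorithm, using bitmasks to represent subsets of visited nodes:

    from itertools import combinations

    def held_karp(dist):
        # dist: n x n matrix of edge weights; returns the length of the
        # shortest tour that starts and ends at node 0.
        n = len(dist)
        # best[(S, j)]: cheapest path from 0 through exactly the node set S
        # (a bitmask over nodes 1..n-1), ending at node j.
        best = {(1 << j, j): dist[0][j] for j in range(1, n)}
        for size in range(2, n):
            for subset in combinations(range(1, n), size):
                S = sum(1 << j for j in subset)
                for j in subset:
                    prev = S ^ (1 << j)   # same set without the endpoint j
                    best[(S, j)] = min(best[(prev, k)] + dist[k][j]
                                       for k in subset if k != j)
        full = (1 << n) - 2               # every node except node 0
        return min(best[(full, j)] + dist[j][0] for j in range(1, n))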
These fast exponential-time algorithms are often useful for boosting the size of the inputs that can be exactly solved in practice. Of course, they still run in exponential time, so these approaches are typically not useful for solving very large problem instances.
Option Seven: Solve an Easy Special Case
Many problems that are NP-hard in general have restricted special cases that are known to be solvable efficiently. For example, while in general it’s NP-hard to determine whether a graph has a k-coloring, in the specific case of k = 2 this is equivalent to checking whether a graph is bipartite, which can be checked in linear time using a modified depth-first search. Boolean satisfiability is, generally speaking, NP-hard, but it can be solved in polynomial time if you have an input formula with at most two literals per clause, or where the formula is formed from clauses using XOR rather than inclusive-OR, etc. Finding the largest independent set in a graph is generally speaking NP-hard, but if the graph is bipartite this can be done efficiently due to König’s theorem.
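For instance, the k = 2 case is just a bipartiteness check; here is a linear-time sketch using breadth-first 2-coloring (the paragraph above mentions a modified depth-first search, but either traversal works):

    from collections import deque

    def is_bipartite(adj):
        # adj: dict mapping each node to an iterable of its neighbors.
        color = {}
        for start in adj:                # handle disconnected graphs
            if start in color:
                continue
            color[start] = 0
            queue = deque([start])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in color:
                        color[v] = 1 - color[u]  # opposite color
                        queue.append(v)
                    elif color[v] == color[u]:
                        return False             # odd cycle: not 2-colorable
        return True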
As a result, if you find yourself needing to solve what might initially appear to be an NP-hard problem, first check whether the inputs you actually need to solve that problem on have some additional restricted structure. If so, you might be able to find an algorithm that applies to your special case and runs much faster than a solver for the problem in its full generality.
Conclusion
If you need to solve an NP-hard problem, don't despair! There are lots of great options available that might make your intractable problem a lot more approachable. No one of the above techniques works in all cases, but by using some combination of these approaches, it's usually possible to make progress even when confronted with NP-hardness.

Confused about the definition of "Exact Algorithm"

What does it mean to say "an algorithm is exact" in terms of optimization and/or computer science? I need a precise logical/epistemological definition.
Exact and approximate algorithms are methods for solving optimization problems.
Exact algorithms are algorithms that always find the optimal solution to a given optimization problem.
However, in combinatorial problems or other large optimization problems, conventional methods are usually not effective enough, especially when the problem's search space is large and complex. Among other methods, we can use heuristics to solve such problems. Heuristics tend to give suboptimal solutions. A subset of heuristics are approximation algorithms.
When we use approximation algorithms we can prove a bound on the ratio between the optimal solution and the solution produced by the algorithm.
E.g., for some NP-hard problems there are polynomial-time approximation algorithms, while the best known exact algorithms need exponential time.
For example, while there is a polynomial-time approximation algorithm for Vertex Cover, the best exact algorithm (using memoization) runs in O(1.1889^n) (pp. 62-63).
The term exact is usually used to mean "the opposite of approximate". An approximation algorithm finds a solution to a slight variation of an optimization problem that admits solutions that are "close" to the optimum in some sense, but nonetheless are desirable. As @Sirko said in the comments, the approximation is usually of interest because the exact problem is intractable or undecidable, where the approximate version is not. Often, more than one kind of approximation may be of interest.
Here are examples:
The Traveling Salesman problem is NP-complete, so an exact algorithm for it is believed to be intractable. The TSP is to find a route of minimum length L for visiting each of N cities on a map. NP-completeness says the best known algorithms still need time that is an exponential function of N. An approximation algorithm for TSP finds a route of length no more than cL for some fixed c > 1. For example, you can easily construct the minimum spanning tree of the cities in time that is a polynomial in N and walk around the tree, covering each edge twice, to obtain an approximation algorithm for the case c = 2. The implied goal is to find algorithms for constants c as close to one as possible.
Producing, from any given source code, a compiled program that gives correct results in minimum time is - under reasonable assumptions - an undecidable problem. Yet of course we use "optimizing compilers" every day that improve the speed of code with no promise of true optimality.
In optimization, there are two kinds of algorithms. Exact and approximate algorithms.
Exact algorithms can find the optimum solution with precision.
Approximate algorithms can find a near optimum solution.
The main difference is that exact algorithms are applied to "easy" problems.
What makes a problem "easy" is that it can be solved in reasonable time and the computation time doesn't scale up exponentially as the problem gets bigger. This class of problems is known as P (deterministic polynomial time). Problems in this class are usually optimized using exact algorithms.
For every other class of problems, approximate algorithms are preferred.

What is fixed-parameter tractability? Why is it useful?

Some problems that are NP-hard are also fixed-parameter tractable, or FPT. Wikipedia describes a problem as fixed-parameter tractable if there's an algorithm that solves it in time f(k) · |x|^O(1).
What does this mean? Why is this concept useful?
To begin with, under the assumption that P ≠ NP, there are no polynomial-time, exact algorithms for any NP-hard problem. Although we don't know whether P = NP or P ≠ NP, no polynomial-time algorithm for any NP-hard problem is currently known.
The idea behind fixed-parameter tractability is to take an NP-hard problem, which we don't know any polynomial-time algorithms for, and to try to separate out the complexity into two pieces - some piece that depends purely on the size of the input, and some piece that depends on some "parameter" to the problem.
As an example, consider the 0/1 knapsack problem. In this problem, you're given a list of n objects that have associated weights and values, along with some maximum weight W that you're allowed to carry. The question is to determine the maximum amount of value that you can carry. This problem is NP-hard, meaning that there's no polynomial-time algorithm that solves it. A brute-force method will take time around O(2n) by considering all possible subsets of the items, which is extremely slow for large n. However, it is possible to solve this problem in time O(nW), where n is the number of elements and W is the amount of weight you can carry. If you look at the runtime O(nW), you'll notice that it's split into two parts: a component that's linear in the number of elements (the n part) and a component that's linear in the weight (the W part). If W is any fixed constant, then the runtime of this algorithm will be O(n), which is linear-time, even though the problem in general is NP-hard. This means that if we treat W as some tunable "parameter" of the problem, for any fixed value of this parameter, the problem ends up running in polynomial time (which is "tractable," in the complexity theory sense of the word.)
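For reference, here is a compact sketch of that O(nW) knapsack dynamic program (the item format is my own choice):

    def knapsack(items, W):
        # items: list of (weight, value) pairs; W: capacity.
        # best[c] = maximum value achievable with total weight at most c.
        best = [0] * (W + 1)
        for w, v in items:
            for c in range(W, w - 1, -1):  # descending: each item used at most once
                best[c] = max(best[c], best[c - w] + v)
        return best[W]                     # O(nW) time, O(W) space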
As another example, consider the problem of finding long, simple paths in a graph. This problem is also NP-hard, and the naive algorithm for finding simple paths of length k in a graph takes time O(n! / (n - k)!), which for large k ends up being superexponential. However, using the technique of color-coding, it's possible to solve this problem in time O((2e)^k · n^3 log n), where k is the length of the path to find and n is the number of nodes in the input graph. Notice that this runtime also has two "components:" one component that's a polynomial in the number of nodes in the input graph (the n^3 log n part) and one component that's exponential in k (the (2e)^k part). This means that for any fixed value of k, there's a polynomial-time algorithm for finding length-k paths in the graph; the runtime will be O(n^3 log n).
In both of these cases, we can take a problem for which we have an exponential-time solution (or worse) and find a new solution whose runtime is some polynomial in n times some crazy-looking function of some extra "parameter." In the case of the knapsack problem, that parameter is the maximum amount of weight we can carry; in the case of finding long paths, the parameter is the length of the path to find. Generally speaking, a problem is called fixed-parameter tractable if there is some algorithm for solving the problem defined in terms of two quantities: n, the size of the input, and k, some "parameter," where the runtime is
O(p(n) · f(k))
where p(n) is some polynomial function and f(k) is an arbitrary function in k. Intuitively, this means that the complexity of the problem scales polynomially with n (meaning that as only the problem size increases, the runtime will scale nicely), but can scale arbitrarily badly with the parameter k. This separates out the "inherent hardness" of the problem such that the "hard part" of the problem is blamed on the parameter k, while the "easy part" of the problem is charged to the size of the input.
Once you have a runtime that looks like O(p(n) · f(k)), we immediately get polynomial-time algorithms for solving the problem for any fixed k. Specifically, if k is fixed, then f(k) is some constant, so O(p(n) · f(k)) is just O(p(n)). This is a polynomial-time algorithm. Therefore, if we "fix" the parameter, we get back some "tractable" algorithm for solving the problem. This is the origin of the term fixed-parameter tractable.
(A note: Wikipedia's definition of fixed-parameter tractability says that the algorithm should have runtime f(k) · |x|^O(1). Here, |x| refers to the size of the input, which I've called n here. This means that Wikipedia's definition is the same as saying that the runtime is f(k) · n^O(1). As mentioned in this earlier answer, n^O(1) means "some polynomial in n," and so this definition ends up being equivalent to the one I've given here).
Fixed-parameter tractability has enormous practical implications for a problem. It's common to encounter problems that are NP-hard. If you find a problem that's fixed-parameter tractable and the parameter is low, it can be significantly more efficient to use the fixed-parameter tractable algorithm than to use the normal brute-force algorithm. The color-coding example above for finding long paths in a graph, for example, has been used to great success in computational biology to find signaling pathways in yeast cells, and the 0/1 knapsack solution is used frequently because common values of W are low enough for it to be practical.
Hope this helps!
I believe that the explanation by @templatetypedef already covers the generality of FPT quite comprehensively.
I would like to add that in practice, the class of problem one is trying to solve quite often turns out to be FPT, as in the examples above.
For problems expressed as sets of constraints (e.g. SAT, CSP, ILP, etc.), a very common parameter is treewidth, which essentially measures how close to a tree the problem's structure is.
This allows one to split a problem into a tree of subproblems, which can then be solved more or less independently using dynamic programming.
In such cases, many problems are linear-time fixed-parameter tractable: the complexity grows linearly with the number of components (i.e., the size of the system) but exponentially in the size of the biggest component.
Although explicit techniques can be used to solve the subproblems, symbolic representations are recommended in order to scale up to more reasonable instances.

Theory about P, NP problems

I am reading about the theory of P, NP, and NP-complete problems. Here is a text snippet.
The class NP includes all problems that have polynomial-time
solutions, since obviously the solution provides a check. One would
expect that since it is so much easier to check an answer than to
come up with one from scratch, there would be problems in NP that do
not have polynomial-time solutions. To date no such problem has been
found, so it is entirely possible, though not considered likely by
experts, that nondeterminism is not such an important improvement. The
problem is that proving exponential lower bounds is an extremely
difficult task. The information theory bound technique, which we used
to show that sorting requires Ω(n log n) comparisons, does not seem to
be adequate for the task, because the decision trees are not nearly
large enough.
My question is: what does the author mean by the statement "To date no such problem has been found, so it is entirely possible, though not considered likely by experts, that nondeterminism is not such an important improvement"?
Another question: what does the author mean in the last statement by "because the decision trees are not nearly large enough"?
Thanks!
(1) I think the author means that no NP problem has been found for which it is proven that it is not in P. Certainly there are problems in NP for which no polynomial solution is known, but that's not the same as knowing that none exists.
If in fact P = NP (that is to say, if in fact there are no NP problems that don't have a polynomial solution), then in some sense a nondeterministic machine is no "more powerful" than a deterministic machine, since they solve the same problems in polynomial time. Then we'd say "nondeterminism is not such an important improvement".
(2) The way that the n log n proof works is that there are n! possible outputs from a sorting function, any one of which might be the correct one according to what order the input was in. Each comparison adds a two-legged branch to the tree of all possible states that a given comparison sort algorithm can get into. In order to sort any input, this "decision tree" must have enough branches to produce any of the n! possible re-orderings of the input, and hence there must be at least log(n!) comparisons. So, the lower bound on runtime comes from the size of the tree.
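To spell out the counting step: a decision tree with at least n! leaves must have height at least log2(n!), and

    log2(n!) = log2 1 + log2 2 + ... + log2 n ≥ (n/2) · log2(n/2) = Ω(n log n),

since each of the largest n/2 terms in the sum is at least log2(n/2).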
The author is saying that there are no known NP problems for which we've proved they require a tree so large that it implies a lower bound that is super-polynomial. Any such proof would prove P != NP.
The author is raising the possibility that someone may come up with a solution to NP-complete problems that does not take exponential time.
The second part is a little vague; he seems to be saying that the Ω(n log n) lower bound for sorting comes from an information-theoretic argument about decision trees, and that this technique doesn't seem strong enough to prove the much larger (super-polynomial) lower bounds needed here. This is really vague, though.
BTW, of all the introductions to NP-related terminology I've read, I find this one super confusing - which book/chapter is this from?
A good text is Michael Sipser's Introduction to the Theory of Computation, or listen to Shai Simonson's lectures.

Resources