Generality of Approximation for NP-Hard Problems

I've been struggling with this question for more than a week. What is the most general approximation result that can be achieved for an NP-hard problem?
Approximation Algorithm
Approximation Scheme
Fully Polynomial-time Approximation Scheme
Polynomial-time Approximation Scheme
I can't tell if I'm the only one who doesn't understand what 'most general' means.

Related

ILP in poly-time?

Integer programming is said to be NP-complete. However, I think formulating a problem as an ILP can't prove the problem to be NP-hard. Is there any example of a problem that can be modeled as an ILP but can be solved in polynomial time?

Confused about the definition of "Exact Algorithm"

What does it mean to say "an algorithm is exact" in terms of Optimization and/or Computer Science? I need a precise logical/epistemological definition.
Exact and approximate algorithms are methods for solving optimization problems.
Exact algorithms are algorithms that always find the optimal solution to a given optimization problem.
However, for combinatorial or global optimization problems, conventional methods are often not effective enough, especially when the search space is large and complex. Among other methods, we can use heuristics to solve such problems. Heuristics tend to give suboptimal solutions. A subset of heuristics are approximation algorithms.
When we use approximation algorithms we can prove a bound on the ratio between the optimal solution and the solution produced by the algorithm.
E.g. In some NP-hard problems there are some polynomial-time approximation algorithms while the best known exact algorithms need exponential time.
For example, while there is a polynomial-time 2-approximation algorithm for Vertex Cover, the best known exact algorithm (using memoization) runs in O(1.1889^n) time (pp. 62-63).
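For concreteness, here is a minimal sketch (not from the original answer) of the standard maximal-matching 2-approximation for Vertex Cover; the example graph is made up.

```python
# Sketch of the classic maximal-matching 2-approximation for Vertex Cover.
# The graph below is just a made-up example; edges are pairs of vertices.

def vertex_cover_2_approx(edges):
    """Greedily pick an uncovered edge and add both of its endpoints to the cover.

    The chosen edges form a maximal matching, so any optimal cover must contain
    at least one endpoint of each of them; hence |cover| <= 2 * OPT.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

if __name__ == "__main__":
    edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
    print(vertex_cover_2_approx(edges))  # e.g. {0, 1, 2, 3}, while {0, 3} is optimal
```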
The term exact is usually used to mean "the opposite of approximate". An approximation algorithm finds a solution to a slight variation of an optimization problem that admits solutions that are "close" to the optimum in some sense, but nonetheless are desirable. As @Sirko said in the comments, the approximation is usually of interest because the exact problem is intractable or undecidable, while the approximate version is not. Often, more than one kind of approximation may be of interest.
Here are examples:
Solving the Traveling Salesman Problem exactly is NP-hard (its decision version is NP-complete). The TSP is to find a route of minimum length L for visiting each of N cities on a map. NP-completeness means the best known exact algorithms need time that grows exponentially in N. An approximation algorithm for TSP finds a route of length no more than cL for some fixed c > 1. For example, in the metric case (where distances satisfy the triangle inequality), you can construct the minimum spanning tree of the cities in time polynomial in N and walk around the tree, covering each edge twice, to obtain an approximation algorithm for c = 2. The implied goal is to find algorithms for constants c as close to 1 as possible.
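A minimal sketch of that MST-based 2-approximation (not from the original answer); the cities are made-up 2D points, so the distances are automatically metric.

```python
# Minimal sketch of the MST "walk around the tree" 2-approximation for metric TSP
# described above. Distances come from made-up 2D points, so the triangle
# inequality holds automatically.
import math

def distance_matrix(points):
    """Euclidean (hence metric) distances between every pair of cities."""
    return [[math.dist(p, q) for q in points] for p in points]

def prim_mst(dist):
    """Edges of a minimum spanning tree of the complete graph (naive Prim)."""
    n = len(dist)
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        u, v = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: dist[e[0]][e[1]])
        edges.append((u, v))
        in_tree.add(v)
    return edges

def tsp_2_approx(points):
    """Preorder walk of the MST; shortcutting repeated cities keeps the tour within 2 * OPT."""
    dist = distance_matrix(points)
    adj = {i: [] for i in range(len(points))}
    for u, v in prim_mst(dist):
        adj[u].append(v)
        adj[v].append(u)
    tour, seen, stack = [], set(), [0]
    while stack:
        city = stack.pop()
        if city not in seen:
            seen.add(city)
            tour.append(city)
            stack.extend(reversed(adj[city]))
    return tour + [0]  # close the cycle by returning to the start city

if __name__ == "__main__":
    cities = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 3)]
    print(tsp_2_approx(cities))
```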
Compiling any given source code into a program that produces correct results in minimum time is, under reasonable assumptions, an undecidable problem. Yet of course we use "optimizing compilers" every day that improve the speed of code with no promise of true optimality.
In optimization, there are two kinds of algorithms: exact and approximate algorithms.
Exact algorithms find the optimal solution.
Approximate algorithms find a near-optimal solution.
The main difference is that exact algorithms are practical mainly for "easy" problems.
What makes a problem "easy" is that it can be solved in reasonable time and that the computation time doesn't scale up exponentially as the problem gets bigger. This class of problems is known as P (deterministic polynomial time). Problems in this class are usually solved with exact algorithms.
For other classes of problems, approximate algorithms are preferred.

Understanding Polynomial Time Approximation Scheme

Is an approximation algorithm the same as a Polynomial-Time Approximation Scheme (PTAS)? E.g., it can be shown that A(I) <= 2 * OPT(I) for Vertex Cover. Does that mean Vertex Cover has a polynomial-time 2-approximation algorithm, or a PTAS?
Thanks!
No, this isn't necessarily the case. A PTAS is an algorithm where given any ε > 0, you can approximate the answer to a factor of (1 + ε) in polynomial time. In other words, you can get arbitrarily good approximations.
Some problems are known (for example, MAX-3SAT) that have approximation algorithms for specific factors (for example, 7/8), but where it's known that unless P = NP there is a hard limit to how well the problem can be approximated in polynomial time. For example, the PCP theorem implies that MAX-3SAT has no polynomial-time (7/8 + ε)-approximation for any ε > 0 unless P = NP. It's therefore possible that MAX-3SAT has a PTAS, but only if P = NP.
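As a hedged illustration of where the 7/8 benchmark comes from (not from the original answer): a uniformly random assignment satisfies each clause over three distinct variables with probability 1 - (1/2)^3 = 7/8, so in expectation it satisfies 7/8 of all clauses. The tiny formula below is made up.

```python
# Random-assignment baseline for MAX-3SAT: each clause with three distinct
# variables is falsified by only 1 of the 8 assignments to its variables,
# so a random assignment satisfies it with probability 7/8.
# Literals are signed ints (e.g. -2 means "not x2"); the formula is made up.

import random

def satisfied_fraction(clauses, assignment):
    """Fraction of clauses with at least one true literal under the assignment."""
    def literal_true(lit):
        return assignment[abs(lit)] if lit > 0 else not assignment[abs(lit)]
    return sum(any(literal_true(l) for l in clause) for clause in clauses) / len(clauses)

def random_assignment_max3sat(clauses, num_vars, trials=1000):
    """Average satisfied fraction over many random assignments; tends toward 7/8."""
    total = 0.0
    for _ in range(trials):
        assignment = {v: random.random() < 0.5 for v in range(1, num_vars + 1)}
        total += satisfied_fraction(clauses, assignment)
    return total / trials

if __name__ == "__main__":
    clauses = [(1, 2, 3), (-1, 2, -3), (1, -2, 3), (-1, -2, -3)]
    print(random_assignment_max3sat(clauses, num_vars=3))  # roughly 0.875
```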
Hope this helps!
Vertex Cover having a 2-approximation algorithm is not the same as having a PTAS. Sometimes there are problems where a much better approximation is possible; those problems admit a PTAS.
Such an algorithm takes an instance of the problem as input, together with another input parameter epsilon > 0, and outputs a solution whose value is at most (1 + epsilon)·OPT for a minimisation problem, and at least (1/(1 + epsilon))·OPT for a maximisation problem.
The running time of a PTAS is polynomial in n (the size of the problem instance). If the running time is also polynomial in 1/epsilon, the problem is said to admit an FPTAS (fully polynomial-time approximation scheme).
Example:
The dynamic programming algorithm for KNAPSACK with integer profits gives an optimal solution, but its running time depends on the magnitude of the profits, so it is only pseudo-polynomial.
KNAPSACK with arbitrary (e.g. real-valued or very large) profits therefore has no known polynomial-time exact algorithm; the problem is NP-hard in general. But it admits an FPTAS: the profits are scaled and rounded down to integers, and the DP algorithm is used to compute an optimal solution for the "rounded" profits, whose value is within a factor (1 - epsilon) of the true optimum.
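A minimal sketch of that scale-and-round idea (not from the original answer); the items in the demo are made up.

```python
# Sketch of the scale-and-round FPTAS for 0/1 KNAPSACK described above (one common
# formulation, not the only one). Profits are divided by K = eps * max_profit / n
# and rounded down to integers; then the classic "minimum weight for each achievable
# profit" DP runs on the rounded instance. Assuming every single item fits in the
# knapsack, the reported value is at least (1 - eps) * OPT.

def knapsack_fptas(profits, weights, capacity, eps):
    n = len(profits)
    K = eps * max(profits) / n                 # scaling factor
    scaled = [int(p // K) for p in profits]    # rounded integer profits
    max_total = sum(scaled)

    # dp[q] = minimum weight needed to reach rounded profit exactly q
    INF = float("inf")
    dp = [0.0] + [INF] * max_total
    for p, w in zip(scaled, weights):
        for q in range(max_total, p - 1, -1):
            if dp[q - p] + w < dp[q]:
                dp[q] = dp[q - p] + w

    best = max(q for q in range(max_total + 1) if dp[q] <= capacity)
    return best * K                            # approximate optimal value

if __name__ == "__main__":
    profits = [60.5, 100.2, 120.9]             # made-up instance
    weights = [10, 20, 30]
    print(knapsack_fptas(profits, weights, capacity=50, eps=0.1))
```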
Another example: Max Independent Set admits neither a PTAS nor an FPTAS unless P = NP. For the FPTAS case the argument is direct: the optimum is an integer between 1 and n, so running a hypothetical FPTAS with epsilon < 1/n would always return an optimal solution for any graph, while its running time would still be polynomial in n and 1/epsilon; that would solve the problem exactly in polynomial time, which is not possible unless P = NP. (Ruling out a PTAS requires stronger hardness-of-approximation results.)

How to calculate the efficiency of Simplex algorithm for diet

I'm trying to write a program to solve the diet problem http://www.phpsimplex.com/en/diet_problem.htm
using the SIMPLEX algorithm. My assignment also requires me to calculate the efficiency of the algorithm.
I understood from the wiki article http://en.wikipedia.org/wiki/Simplex_algorithm that it has exponential running time in the worst case, but it doesn't give an exact big-O bound or explain how I could calculate that.
Is there any advice on how I could calculate the efficiency of the simplex algorithm for the above diet problem?
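One practical approach, offered here as a hedged sketch rather than part of the original thread: the exponential bound is only a worst case (e.g. Klee-Minty instances), so for a fixed diet instance it is common to report empirical measurements such as iteration counts and wall-clock time. The nutrient data below is made up, and SciPy's linprog with the HiGHS dual-simplex backend stands in for a hand-written simplex implementation.

```python
# Empirical measurement sketch (not from the original thread): solve a made-up
# diet instance with an off-the-shelf LP solver and record iterations / time.

import time
import numpy as np
from scipy.optimize import linprog

# Hypothetical diet instance: minimize cost @ x  subject to  nutrients @ x >= minimum
cost = np.array([0.5, 0.2, 0.3, 0.8])            # price per unit of each food (made up)
nutrients = np.array([[3.0, 1.0, 2.0, 0.5],      # e.g. protein per unit of each food
                      [0.2, 0.5, 0.1, 1.0],      # e.g. fat per unit of each food
                      [1.0, 4.0, 2.0, 0.3]])     # e.g. carbohydrate per unit of each food
minimum = np.array([10.0, 2.0, 8.0])             # required amount of each nutrient

start = time.perf_counter()
res = linprog(cost,
              A_ub=-nutrients, b_ub=-minimum,    # A x >= b rewritten as -A x <= -b
              bounds=[(0, None)] * len(cost),
              method="highs-ds")                 # HiGHS dual simplex
elapsed = time.perf_counter() - start

print("optimal cost:", res.fun)
print("simplex iterations:", getattr(res, "nit", "n/a"))
print("wall-clock seconds:", elapsed)
```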

Can 1 approximation algorithm be used for multiple NP-Hard problems?

Since any NP-hard problem can be reduced to any other NP-hard problem by a mapping, my question goes one step further:
for example, could every step of an approximation algorithm for one NP-hard problem also be mapped to another NP-hard problem?
Thanks in advance
From http://en.wikipedia.org/wiki/Approximation_algorithm we see that
NP-hard problems vary greatly in their approximability; some, such as the bin packing problem, can be approximated within any factor greater than 1 (such a family of approximation algorithms is often called a polynomial time approximation scheme or PTAS). Others are impossible to approximate within any constant, or even polynomial factor unless P = NP, such as the maximum clique problem.
(end quote)
It follows from this that a good approximation for one NP-complete problem is not necessarily a good approximation for another NP-complete problem. If approximation quality did carry over, we could use the easily-approximated NP-complete problems to find good approximation algorithms for all other NP-complete problems; that is not the case, because there are hard-to-approximate NP-complete problems.
When proving a problem is NP-Hard, we usually consider the decision version of the problem, whose output is either yes or no. However, when considering approximation algorithms, we consider the optimization version of the problem.
If you use one problem's approximation algorithm to solve another problem via the reduction in the NP-hardness proof, the approximation ratio may change. For example, if you have a 2-approximation algorithm for problem A and you use it to solve problem B, you may only get an O(n)-approximation algorithm for problem B, since a general reduction need not preserve the approximation ratio. Hence, if you want to use an approximation algorithm for one problem to solve another problem, you need to ensure that the reduction does not change the approximation ratio too much in order to get a useful algorithm. For example, you can use an L-reduction or a PTAS reduction.
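As a made-up toy illustration of that point (not from the original answer): Vertex Cover and Independent Set reduce to each other by complementation (S is a vertex cover iff V \ S is an independent set), yet complementing a 2-approximate vertex cover can yield an arbitrarily bad independent set.

```python
# Toy illustration of a reduction that does not preserve the approximation ratio.
# On the complete graph K_n, the matching-based 2-approximation for Vertex Cover
# may pick every vertex, so its complement is an empty "independent set" even
# though a single vertex is an optimal independent set.

from itertools import combinations

def vertex_cover_2_approx(edges):
    """Maximal-matching based 2-approximation for Vertex Cover."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

if __name__ == "__main__":
    n = 6
    edges = list(combinations(range(n), 2))   # complete graph K_6
    cover = vertex_cover_2_approx(edges)
    independent = set(range(n)) - cover       # complement = candidate independent set
    print("2-approximate vertex cover:", cover)        # all 6 vertices
    print("complement independent set:", independent)  # empty, but OPT_IS = 1
```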
