I'm trying to write a program to solve the diet problem http://www.phpsimplex.com/en/diet_problem.htm
using the simplex algorithm. My assignment also requires me to calculate the efficiency of the algorithm.
I understood from the wiki http://en.wikipedia.org/wiki/Simplex_algorithm that it has exponential time in the worst case, but it doesn't give the exact big-O notation or explain how I could calculate it.
Is there any advice on how I could calculate the efficiency of the simplex algorithm for the above diet problem?
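A minimal sketch of the kind of timing experiment I have in mind (the numbers are hypothetical, not the ones from the linked page, and SciPy's linprog defaults to the HiGHS solvers in recent versions rather than the textbook simplex):

    import time
    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical data (NOT the numbers from the linked page):
    # cost per unit of two foods, and two "at least b units" nutrient constraints.
    costs = np.array([0.6, 0.35])
    nutrients = np.array([[5.0, 7.0],    # nutrient 1 per unit of food X, Y
                          [4.0, 2.0]])   # nutrient 2 per unit of food X, Y
    required = np.array([8.0, 15.0])

    # linprog uses "<=" constraints, so flip the signs of the ">=" requirements.
    start = time.perf_counter()
    res = linprog(c=costs, A_ub=-nutrients, b_ub=-required,
                  bounds=[(0, None), (0, None)])
    elapsed = time.perf_counter() - start

    print("amounts:", res.x, "cost:", res.fun, f"time: {elapsed:.6f}s")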
I hope this is the right place for this question.
Polynomial time algorithms! How do polynomial time algorithms (PTAs) actually relate to the processing power, memory size (RAM) and storage of computers?
We consider PTAs to be efficient. We know that even for a PTA, the time complexity increases with the input size n. For example, there already exists a PTA that determines whether a number is prime. But what happens if I want to check a number this big https://justpaste.it/3fnj2? Is the PTA for primality checking still considered efficient? Is there a computer that can determine whether such a big number is prime?
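To give a sense of what I mean, here is a quick check along these lines (it uses sympy's isprime, which relies on fast practical tests rather than the AKS polynomial-time algorithm, and a stand-in large number instead of the one in the link):

    import sympy

    # Stand-in for a very large candidate (2**521 - 1 happens to be a Mersenne prime);
    # the actual number from the link is not reproduced here.
    candidate = 2**521 - 1
    print(sympy.isprime(candidate))   # finishes quickly on ordinary hardware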
Whether yes or no (maybe no, I don't know), how does the concept of polynomial time algorithms actually apply in the real world? Is there some computing bound or something for so-called polynomial time algorithms?
I've tried Google searches on this, but all I find are mathematical big-O explanations. I can't find articles that actually relate the concept of PTAs to computing power. I would appreciate some explanation or links to some resources.
There are a few things to explain.
Regarding polynomial time as efficient is just an arbitrary convention. Mathematicians have essentially defined a set Efficient_Algorithms = {A : A runs in polynomial time}. That is only a mathematical definition. Mathematicians don't see your actual hardware and they don't care about it; they work with abstract concepts. Yes, theorists consider O(n^100) efficient.
But you cannot compare statements from theoretical computer science one-to-one with computer programs running on hardware. Scientists work with formulas and theorems, while computer programs are executed on electric circuits.
The big-O notation does not help you compare implementations of an algorithm. It compares algorithms, not their implementations. This can be illustrated as follows. Suppose you have a prime-checking algorithm with a high polynomial complexity. You implement it and see that it does not perform well for practical use cases. So you use a profiler, and it tells you where the bottleneck is. You find out that 98% of the computation time is spent in matrix multiplications. So you develop a processor that does exactly such calculations extremely fast. Or you buy the most modern graphics card for this purpose. Or you wait 150 years for a new hardware generation. Or you manage to run most of these multiplications in parallel. Imagine you somehow reduced the time for matrix multiplications by 95%. With this wonderful hardware you run your algorithm, and suddenly it performs well. So your algorithm is actually efficient; it was only your hardware that was not powerful enough. This is not just a thought experiment: such dramatic improvements in computing power happen quite often.
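A minimal sketch of that profiling step, with a hypothetical pipeline whose bottleneck is matrix multiplication (the function names and sizes are made up):

    import cProfile
    import numpy as np

    def expensive_step(a, b):
        # stand-in for the matrix multiplications the profiler ends up blaming
        return a @ b

    def hypothetical_prime_check_pipeline():
        a = np.random.rand(400, 400)
        b = np.random.rand(400, 400)
        for _ in range(50):
            expensive_step(a, b)

    # The cumulative-time column shows where almost all of the time goes.
    cProfile.run("hypothetical_prime_check_pipeline()", sort="cumtime")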
Most algorithms that have polynomial complexity have it because the problems they solve are actually of polynomial complexity. Consider, for example, matrix multiplication: if you do it on paper it is O(n^3). It is the nature of this problem that it has polynomial complexity. In practice and daily life (I think), most problems for which you have a polynomial algorithm are actually polynomial problems. If you have a polynomial problem, then a polynomial algorithm is efficient.
Why do we talk about polynomial algorithms, and why do we consider them efficient? As already said, this is quite arbitrary, but the following may help as motivation. When talking about "polynomial algorithms", we can say there are two types of them:
Algorithms whose complexity is even lower than polynomial (e.g. linear or logarithmic). I think we can agree that these are efficient.
Algorithms that are genuinely polynomial and not lower than polynomial. As illustrated above, in practice these algorithms are often polynomial because they solve problems that are of polynomial nature and therefore require polynomial complexity. Seen this way, we can of course say that these algorithms are efficient.
In practice, if you have a linear problem you will normally recognise it as such, and you would not apply an algorithm with a worse complexity to it. This is just practical experience. If you, for example, search for an element in a list, you would not expect more comparisons than the number of elements in the list. If in such a case you apply an algorithm with complexity O(n^2), then of course this polynomial algorithm is not efficient. But as said, such mistakes are often so obvious that they don't happen.
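A tiny sketch of that list-search example: both functions below answer the same membership question, but the second one rescans growing prefixes of the list and does O(n^2) work instead of O(n):

    def contains_linear(items, target):
        for x in items:                    # at most len(items) comparisons: O(n)
            if x == target:
                return True
        return False

    def contains_quadratic(items, target):
        # Contrived O(n^2) variant: rescan a growing prefix at every position.
        for i in range(len(items)):
            if target in items[: i + 1]:   # each slice + scan is itself O(i)
                return True
        return False

    data = list(range(5000))
    assert contains_linear(data, 4999) == contains_quadratic(data, 4999) == True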
So that is my final answer to your question: in practice, software developers have a good feeling for linear complexity, and good developers also have a feeling for logarithmic complexity. Consequently, you don't have to worry about complexity theory too much. If you have a polynomial algorithm, you normally have a quite good feeling for whether the problem itself is actually linear or not; if it is not, then your algorithm is efficient. If you have an exponential algorithm, it may not be obvious what is going on, but in practice you see the computation time, do some experiments, or get complaints from users. Exponential complexity is normally hard to miss.
What does it mean to say "an algorithm is exact" in terms of optimization and/or computer science? I need a precise logical/epistemological definition.
Exact and approximate algorithms are methods for solving optimization problems.
Exact algorithms are algorithms that always find the optimal solution to a given optimization problem.
However, for combinatorial or global optimization problems, conventional methods are usually not effective enough, especially when the problem's search space is large and complex. Among other methods, we can use heuristics to solve such problems. Heuristics tend to give suboptimal solutions. A subset of heuristics are approximation algorithms.
When we use approximation algorithms we can prove a bound on the ratio between the optimal solution and the solution produced by the algorithm.
E.g., for some NP-hard problems there are polynomial-time approximation algorithms while the best known exact algorithms need exponential time.
For example, while there is a polynomial-time approximation algorithm for Vertex Cover, the best known exact algorithm (using memoization) runs in O(1.1889^n) (pp. 62-63).
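For reference, the standard 2-approximation for Vertex Cover mentioned above is only a few lines: repeatedly pick an uncovered edge and add both endpoints, which yields a cover at most twice the optimal size.

    def vertex_cover_2approx(edges):
        """Greedy matching-based 2-approximation for Vertex Cover."""
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:   # edge not yet covered
                cover.add(u)
                cover.add(v)                        # take both endpoints
        return cover

    # Small example graph given as an edge list.
    edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
    print(vertex_cover_2approx(edges))   # a cover of size at most 2 * optimum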
The term exact is usually used to mean "the opposite of approximate". An approximation algorithm finds a solution to a slight variation of an optimization problem that admits solutions that are "close" to the optimum in some sense but nonetheless desirable. As @Sirko said in the comments, the approximation is usually of interest because the exact problem is intractable or undecidable, whereas the approximate version is not. Often, more than one kind of approximation may be of interest.
Here are examples:
Solving the Traveling Salesman Problem (TSP) exactly is NP-hard; its decision version is NP-complete. The TSP is to find a route of minimum length L visiting each of N cities on a map, and NP-completeness means the best known exact algorithms still need time that is an exponential function of N. An approximation algorithm for TSP finds a route of length no more than cL for some fixed c > 1. For example, you can easily construct the minimum spanning tree of the cities in time polynomial in N and walk around the tree, covering each edge twice, to obtain an approximation algorithm for the case c = 2. The implied goal is to find algorithms for constants c as close to one as possible.
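A sketch of that c = 2 construction, assuming networkx is available and using made-up city coordinates: build the minimum spanning tree, walk it in preorder, and shortcut repeated vertices.

    import itertools
    import math
    import networkx as nx

    cities = {0: (0, 0), 1: (3, 0), 2: (3, 4), 3: (0, 4), 4: (1, 2)}  # hypothetical map

    G = nx.Graph()
    for a, b in itertools.combinations(cities, 2):
        G.add_edge(a, b, weight=math.dist(cities[a], cities[b]))

    mst = nx.minimum_spanning_tree(G)                  # polynomial in N
    tour = list(nx.dfs_preorder_nodes(mst, source=0))  # preorder walk = shortcut of the doubled tree
    tour.append(0)                                     # return to the start

    length = sum(G[u][v]["weight"] for u, v in zip(tour, tour[1:]))
    print(tour, round(length, 2))                      # tour length is at most 2 * optimum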
Generating, from any given source code, compiled code that produces correct results in minimum time is - under reasonable assumptions - an undecidable problem. Yet of course we use "optimizing compilers" every day that improve the speed of code with no promise of true optimality.
In optimization, there are two kinds of algorithms: exact and approximate algorithms.
Exact algorithms find the optimum solution precisely.
Approximate algorithms can find a near optimum solution.
The main difference is that exact algorithms are typically applied to "easy" problems.
What makes a problem "easy" is that it can be solved in reasonable time and the computation time doesn't scale up exponentially as the problem gets bigger. This class of problems is known as P (deterministic polynomial time). Problems in this class are usually optimized using exact algorithms.
For every other class of problems approximate algorithms are preferred.
I read that there is an algorithm that can calculate the product of two matrices with n^2.3 complexity, but I was unable to find the algorithm.
There have been several algorithms found for matrix multiplication with a big O less than n^3. But here is one of the problems with drawing conclusions from big-O notation: it only gives the limiting behaviour as n goes to infinity. In this case a more useful metric is the total time complexity, which includes the coefficients and lower-order terms.
For the general algorithm the time complexity could be An^3 + Bn^2 +...
For the case of the Coppersmith-Winograd algorithm the coefficient for the n^2.375477 term is so large that for all practical purposes the general algorithm with O(n^3) complexity is faster.
This is also true for the Strassen algorithm if it's used on single elements. However,
there is a paper which claims that a hybrid algorithm, using the Strassen algorithm on matrix blocks down to some size limit and then switching to the O(n^3) algorithm, is faster for large matrices.
So although there exist algorithms with a smaller time complexity, the only one I'm aware of that is useful in practice is the Strassen algorithm, and that's only for large matrices (whatever "large" means).
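To make the hybrid idea concrete, here is a rough sketch (not the paper's implementation): recurse with Strassen's seven products until the blocks reach a cutoff, then hand them to an ordinary O(n^3)-style multiply via numpy's @ operator. For simplicity it assumes the matrices are square with power-of-two size.

    import numpy as np

    def strassen(a, b, cutoff=64):
        n = a.shape[0]
        if n <= cutoff:
            return a @ b                      # fall back to the conventional multiply
        h = n // 2
        a11, a12, a21, a22 = a[:h, :h], a[:h, h:], a[h:, :h], a[h:, h:]
        b11, b12, b21, b22 = b[:h, :h], b[:h, h:], b[h:, :h], b[h:, h:]
        m1 = strassen(a11 + a22, b11 + b22, cutoff)
        m2 = strassen(a21 + a22, b11, cutoff)
        m3 = strassen(a11, b12 - b22, cutoff)
        m4 = strassen(a22, b21 - b11, cutoff)
        m5 = strassen(a11 + a12, b22, cutoff)
        m6 = strassen(a21 - a11, b11 + b12, cutoff)
        m7 = strassen(a12 - a22, b21 + b22, cutoff)
        c = np.empty_like(a)
        c[:h, :h] = m1 + m4 - m5 + m7
        c[:h, h:] = m3 + m5
        c[h:, :h] = m2 + m4
        c[h:, h:] = m1 - m2 + m3 + m6
        return c

    a = np.random.rand(256, 256)
    b = np.random.rand(256, 256)
    assert np.allclose(strassen(a, b), a @ b)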
Edit: Wikipedia actually has a nice summary of the algorithms for matrix multiplication. Here is a plot from that same link showing the reduction in omega (the matrix multiplication exponent) for the different algorithms vs. the year they were discovered.
https://en.wikipedia.org/wiki/Matrix_multiplication#mediaviewer/File:Bound_on_matrix_multiplication_omega_over_time.svg
The Strassen Algorithm is able to multiply matrices with an asymptotic complexity smaller than O(n^3).
The Coppersmith–Winograd algorithm calculates the product of two n×n matrices in O(n^2.375477) asymptotic time.
I was reading about some geometric routing algorithms, where it says that employing heuristics in a version of the main algorithm may improve performance, but takes away asymptotic optimality.
Why is that the case? Should we prefer asymptotic optimality over better performance? Are there prototypical cases where one should prefer asymptotic optimality? Are there any benchmarks known?
I think you are asking about optimization problems where heuristics run fast but might not find the truly optimal solution, whereas exact algorithms always give the optimal solution but can run much slower in the worst case. If so, here's some info. In general, the decision to use a heuristic algorithm often depends on how well it approximates the optimal solution "in practice", whether that typical solution quality is good enough for you, and whether you think your particular problem instance falls into the category of problems encountered in practice. If you are interested, you can look up approximation algorithms for NP-complete problems. For some problems, the score of the solution found by a heuristic is within a constant factor (1 + epsilon) of the optimal score, and you can choose epsilon; however, the running time typically increases as epsilon decreases.
My guess is that they are talking about use of (non-admissible) heuristics for approximation algorithms. For instance, the traveling salesman problem is NP-complete, yet there are heuristic approximation methods that are much faster than known algorithms for NP-complete problems but are only guaranteed to get within a few percent of optimal.
When I was reading about quantum algorithms I came across the Deutsch-Jozsa algorithm. I see that if we want to solve that problem with a non-quantum (deterministic classical) algorithm, our algorithm would have exponential time complexity. Now I want to know: what is the time complexity of the Deutsch-Jozsa algorithm as a quantum algorithm on quantum computers?
According to Wikipedia the complexity of the quantum algorithm is constant:
The Deutsch-Jozsa quantum algorithm produces an answer that is always correct with a single evaluation of f.
The algorithm itself is just a fixed set of operations on quantum states, without any iterations, so the complexity is O(1) (a single evaluation of f).
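For contrast with that single quantum evaluation, here is a sketch of the deterministic classical procedure the question alludes to; in the worst case it needs 2^(n-1) + 1 evaluations of f (the oracles below are hypothetical examples):

    from itertools import product

    def classify_classically(f, n):
        """Decide 'constant' vs 'balanced' for an n-bit f promised to be one of the two."""
        seen = set()
        for count, bits in enumerate(product([0, 1], repeat=n), start=1):
            seen.add(f(bits))
            if len(seen) == 2:                # two different outputs seen: f is balanced
                return "balanced", count
            if count == 2 ** (n - 1) + 1:     # a majority of inputs agree: f is constant
                return "constant", count

    n = 4
    balanced_f = lambda bits: bits[-1]        # hypothetical balanced oracle
    constant_f = lambda bits: 1               # hypothetical constant oracle
    print(classify_classically(balanced_f, n))   # detected after only a few queries here
    print(classify_classically(constant_f, n))   # needs 2**(n-1) + 1 = 9 queries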