Are there any problems we have O(1) algorithms for that we originally didn't? - complexity-theory

Are there any problems for which we now have constant-time algorithms, but which originally had only more complicated algorithms?

Related

How efficient is efficient when it comes to Polynomial Time Algorithms?

I hope this is the right place for this question.
Polynomial time algorithms! How do polynomial time algorithms (PTAs) actually relate to the processing power, memory size (RAM) and storage of computers?
We consider PTAs to be efficient. We know that even for a PTA, the time complexity increases with the input size n. Take, for example, primality testing: there already exists a PTA that determines whether a number is prime. But what happens if I want to check a number as big as this one https://justpaste.it/3fnj2? Is the PTA for primality checking still considered efficient? Is there a computer that can decide whether such a big number is prime?
Whether yes or no (maybe no, idk), how does the concept of polynomial time algorithms actually apply in the real world? Is there some computing bound or something for so-called polynomial time algorithms?
I've tried Google searches on this, but all I find are mathematical Big-O explanations. I can't find articles that actually relate the concept of PTAs to computing power. I would appreciate some explanation or links to some resources.
There are a few things to explain.
Regarding polynomial time as efficient is just an arbitrary convention. Mathematicians have essentially defined the set Efficient_Algorithms = {algorithms whose running time is bounded by some polynomial}. That is only a mathematical definition. Mathematicians don't see your actual hardware and don't care about it; they work with abstract concepts. Yes, by this convention even O(n^100) counts as efficient.
But you cannot compare statements from theoretical computer science one-to-one with computer programs running on hardware. Scientists work with formulas and theorems, while computer programs are executed on electric circuits.
The Big-O notation does not help you compare implementations of an algorithm. Big-O compares algorithms, not their implementations. This can be illustrated as follows. Suppose you have a prime-checking algorithm with a high polynomial complexity. You implement it and see that it does not perform well for practical use cases. So you use a profiler, which tells you where the bottleneck is. You find out that 98% of the computation time is spent in matrix multiplications. So you develop a processor that does exactly such calculations extremely fast. Or you buy the most modern graphics card for this purpose. Or you wait 150 years for a new hardware generation. Or you manage to run most of these multiplications in parallel. Imagine you somehow reduced the time for matrix multiplications by 95%. With this wonderful hardware you run your algorithm, and suddenly it performs well. So your algorithm is actually efficient; it was only your hardware that was not powerful enough. This is not just a thought experiment: such dramatic improvements in computing power happen quite often in reality.
Most algorithms that have polynomial complexity have it because the problems they solve are themselves of polynomial complexity. Consider matrix multiplication, for example: if you do it on paper the schoolbook way, it is O(n^3). It is in the nature of this problem that it has polynomial complexity. In practice and daily life, (I think) most problems for which you have a polynomial algorithm are actually polynomial problems. If you have a polynomial problem, then a polynomial algorithm for it is efficient.
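To make the cubic cost concrete, here is a minimal sketch of the schoolbook method in Python (plain nested lists, no libraries assumed):

```python
def matmul(a, b):
    """Schoolbook matrix multiplication: three nested loops over n,
    hence O(n^3) scalar multiplications for n x n matrices."""
    n = len(a)
    c = [[0] * n for _ in range(n)]
    for i in range(n):          # n iterations
        for j in range(n):      # n iterations
            for k in range(n):  # n iterations -> n * n * n multiplications in total
                c[i][j] += a[i][k] * b[k][j]
    return c

# Example: two 2x2 matrices
print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```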
Why do we talk about polynomial algorithms, and why do we consider them efficient? As already said, this is quite arbitrary. But as motivation, the following may be helpful. When talking about "polynomial algorithms", we can say there are two types of them.
Algorithms whose complexity is even lower than a typical polynomial (e.g. linear or logarithmic). I think we can agree these are efficient.
Algorithms that are genuinely polynomial and not lower than that. As illustrated above, in practice these algorithms are often polynomial because they solve problems that are of polynomial nature themselves and therefore require polynomial work. Seen this way, we can of course say these algorithms are efficient.
In practice, if you have a linear problem you will normally recognise it as a linear problem, and you would normally not apply an algorithm with worse complexity to it. This is just practical experience. If you search for an element in a list, for example, you would not expect more comparisons than the number of elements in the list. If in such a case you applied an algorithm with complexity O(n^2), then of course that polynomial algorithm would not be efficient. But as said, such mistakes are usually so obvious that they don't happen.
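A minimal sketch of that linear case, just to make the comparison-count claim concrete:

```python
def contains(items, target):
    """Linear search: at most len(items) comparisons, i.e. O(n)."""
    for x in items:
        if x == target:
            return True
    return False

print(contains([3, 1, 4, 1, 5, 9], 5))  # True, found after 5 comparisons
```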
So this is my final answer to your question: in practice, software developers have a good feeling for linear complexity, and good developers also develop a feeling for logarithmic complexity. As a consequence, you don't have to worry about complexity theory too much. If you have a polynomial algorithm, you normally have a fairly good feeling for whether the problem itself is linear or not; if it is not, then your polynomial algorithm is efficient. If you have an exponential algorithm, it may not be obvious at first what is going on, but in practice you see the computation time, run some experiments, or get complaints from users. Exponential complexity is normally hard to miss.
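As an aside on the primality example from the question: practical tests such as Miller-Rabin run in time polynomial in the number of digits, so even numbers with thousands of digits are checked quickly on ordinary hardware. A rough sketch (the Mersenne prime below is just a small stand-in for the huge value linked in the question):

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test.
    Cost is polynomial in the number of digits of n, so very large
    inputs remain practical to check."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # definitely composite
    return True  # prime with very high probability

print(is_probable_prime(2**521 - 1))  # True: a known Mersenne prime
```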

Is it better to use genetic algorithm to solve the 0-1 Knapsack problem?

The Knapsack problem is a combinatorial optimization problem where one has to maximize the benefit of objects in a knapsack without exceeding its capacity. We know that there are many ways to solve this problem: genetic algorithms, dynamic programming, and the greedy method. I want to know what the advantages and disadvantages of a genetic algorithm are compared with dynamic programming, in terms of space complexity, time complexity, and optimality.
So in order to answer this, it is important to consider what you think is most important: speed or accuracy.
Genetic algorithms do not guarantee finding the optimal solution; however, they typically run very quickly.
Some quick descriptions of a Genetic Algorithm might yield:
It is an approximate method and does not guarantee finding the globally optimal solution
It typically runs very fast, in both memory usage and time
Exact complexity calculations are hard, since genetic algorithms are typically problem-specific and chaotic in nature. A rough shape for what its complexity might look like is:
O( O(Fitness) * (O(mutation) + O(crossover)))
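To make that shape concrete, here is a rough sketch of a genetic algorithm for the 0-1 knapsack problem; the population size, generation count, and mutation rate are arbitrary illustration values, not tuned recommendations:

```python
import random

def ga_knapsack(values, weights, capacity,
                pop_size=50, generations=200, mutation_rate=0.02):
    """Toy genetic algorithm for 0-1 knapsack.
    Returns the best value found; no guarantee of optimality."""
    n = len(values)

    def fitness(bits):
        v = sum(v for v, b in zip(values, bits) if b)
        w = sum(w for w, b in zip(weights, bits) if b)
        return v if w <= capacity else 0  # infeasible solutions score 0

    # Random initial population of bit strings.
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]

    for _ in range(generations):
        # Selection: keep the fitter half.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # Crossover: combine two random survivors at a random cut point.
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)
            child = a[:cut] + b[cut:]
            # Mutation: flip each bit with small probability.
            child = [1 - bit if random.random() < mutation_rate else bit
                     for bit in child]
            children.append(child)
        pop = survivors + children

    return max(fitness(ind) for ind in pop)

print(ga_knapsack([60, 100, 120], [10, 20, 30], 50))  # likely 220, not guaranteed
```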
However, Dynamic Programming does guarantee finding the optimal solution, granted with a much longer run time. Some quick descriptions of Dynamic Programming might yield:
It is an exact method -- it guarantees finding the globally optimal solution
High memory usage and a very long run time
Complexity for the 0-1 knapsack problem is something like O(numItems * knapsackCapacity), with memory complexity O(knapsackCapacity) (credits to this post for that); see the sketch below.
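For comparison, a sketch of the standard dynamic-programming solution with the time and memory bounds quoted above (the one-dimensional table is what gives the O(knapsackCapacity) memory):

```python
def knapsack_dp(values, weights, capacity):
    """0-1 knapsack by dynamic programming.
    Time O(len(values) * capacity), memory O(capacity).
    best[w] = best value achievable with total weight at most w."""
    best = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        # Iterate weights downwards so each item is used at most once.
        for w in range(capacity, weight - 1, -1):
            best[w] = max(best[w], best[w - weight] + value)
    return best[capacity]

print(knapsack_dp([60, 100, 120], [10, 20, 30], 50))  # 220, guaranteed optimal
```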
If you are asking which is preferred, that depends on your use case. If you want a good enough guess quickly, a GA is probably the way to go. But if you need a guaranteed and verifiable solution, DP is the way to go.
This should satisfy a basic comparison.

Why using heuristics in an algorithm takes away asymptotic optimality?

I was reading about some geometric routing algorithms, and it says that employing heuristics in a version of the main algorithm may improve performance, but takes away asymptotic optimality.
Why is that the case? Should we prefer asymptotic optimality over better performance? Are there prototypical cases where one should prefer asymptotic optimality? Are there any benchmarks known?
I think you are asking about optimization problems where heuristics run fast but might not achieve the optimal solution, whereas exact algorithms always find the optimal solution but can run much more slowly in the worst case. If so, here's some info. In general, the decision to use a heuristic algorithm often depends on how well it approximates the optimal solution "in practice", whether that typical solution quality is good enough for you, and whether you think your particular problem instances fall into the category of problems encountered in practice. If you are interested, you can look up approximation algorithms for NP-complete problems. For some problems, the score of the solution found by a heuristic is within a constant multiplier (1 + epsilon) of the score of the optimal solution, and you can choose epsilon; however, the running time typically increases as epsilon decreases.
My guess is that they are talking about the use of (non-admissible) heuristics as approximation methods. For instance, the traveling salesman problem is NP-hard, yet there are heuristic methods that are much faster than exact algorithms but are only guaranteed to come within some factor of the optimal tour (and often get within a few percent of it in practice).
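To illustrate the trade-off, here is a sketch of the classic nearest-neighbour construction heuristic for TSP: it runs in O(n^2) time but offers no strong guarantee on tour quality, unlike an exact (and much slower) search:

```python
import math

def nearest_neighbour_tour(points):
    """Greedy TSP heuristic: always visit the closest unvisited city.
    O(n^2) time, but the tour can be noticeably longer than the optimum."""
    unvisited = list(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

cities = [(0, 0), (1, 5), (5, 2), (6, 6), (2, 1)]
print(nearest_neighbour_tour(cities))  # [0, 4, 2, 3, 1]
```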

Implementation of Chazelle's triangulation algorithm

There is an algorithm for triangulating a polygon in linear time due to Chazelle (1991), but, AFAIK, there aren't any standard implementations of his algorithm in general mathematical software libraries.
Does anyone know of such an implementation?
See this answer to the question Powerful algorithms too complex to implement:
According to Skiena (author of The Algorithm Design Manual), "[the] algorithm is quite hopeless to implement."
I've looked for an implementation before, but couldn't find one. I think it's safe to assume no-one has implemented it due to its complexity, and I think it also has quite a large constant factor, so it wouldn't do well against O(n lg n) algorithms that have smaller constant factors.
This is claimed to be an implementation of Chazelle's Algorithm for Triangulating a Simple Polygon in Linear Time (mainly C++ and C).

Resource on computing time complexity of algorithms [closed]

Is there any good resource (book, reference, web site, application...) which explains how to compute time complexity of an algorithm?
Because it is hard to make these things concrete in my mind. Sometimes a single loop is said to have time complexity lg n; then, combined with another loop, it becomes n·lg n; sometimes big-Omega notation is used to express it, sometimes big-O, and so on.
These things are quite abstract to me. So I need a resource with good explanations and a lot of examples, to make me see what's going on.
Hopefully I explained my problem clearly. I am quite sure that everyone who has just started to study algorithms has the same problem as me. So maybe those experienced guys can also share their experience with us. Thanks...
I think the best book is Introduction to Algorithms by Cormen, Leiserson, Rivest and Stein.
Here it is on Amazon.
Guys, you're all recommending true complexity theory books -- Arora and Barak contains all sorts of things like PCP, Interactive Proofs, Quantum Computing and topics on Expander graphs -- things that most programmers/software developers have never heard of and will never use. Papadimitriou is in the same category. Knuth is freaking impossible to read (and I was a CS/Math major) and gives zero intuition on how things work.
If you want a simple way to compute runtimes, and to get the flavour of the analysis, try the first chapter or so of Kleinberg and Tardos' "Algorithm Design"; it holds your hand through the fundamentals, and then you can work on much harder problems.
I read Christos Papadimitriou's Computational Complexity in university and really enjoyed it. It's not an easy matter, the book is long but it's well written (i.e., understandable and suitable for self-teaching) and it contains lots of useful knowledge, much more than just the "how do I figure out time complexity of algorithm x".
I agree that Introduction to Algorithms is a good book. For more detailed instructions on e.g. how to solve recurrence relations see Knuth's Concrete Mathematics. A good book on Computational Complexity itself is the one by Papadimitriou. But that last book might be a bit too thorough if you only want to calculate the complexity of given algorithms.
A short overview about big-O/-Omega notation:
If you can give an algorithm that solves a problem in time at most c·(n log n) (c being a constant), then the time complexity of that problem is O(n log n). Big-O gets rid of the c, that is, any constant factors not depending on the input size n. Big-O gives an upper bound on the running time, because you have shown (by giving an algorithm) that you can solve the problem in that amount of time.
If you can prove that any algorithm solving the problem takes at least time c·(n log n), then the problem is in Omega(n log n). Big-Omega gives a lower bound on the problem. In most cases lower bounds are much harder to prove than upper bounds, because you have to show that any possible algorithm for the problem takes at least c·(n log n) time. Knowing a lower bound does not mean knowing an algorithm that reaches that lower bound.
If you have a lower bound, e.g. Omega(n log n), and an algorithm that solves the problem in that time (an upper bound), then the problem is in Theta(n log n). This means an "optimal" algorithm for this problem is known. Of course this notation hides the constant factor c, which can be quite big (that's why I wrote optimal in quotes).
Note that when using these notations you are usually talking about the worst-case running time of an algorithm.
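As a small worked example of the kind of analysis the question asks about (a log n step repeated n times giving n log n), assuming a sorted list and Python's standard bisect module:

```python
import bisect

def count_members(sorted_haystack, needles):
    """For each of the n needles, do one binary search (O(log m)).
    Total: O(n log m) comparisons, versus O(n * m) for a naive scan per needle."""
    count = 0
    for needle in needles:                               # n iterations
        i = bisect.bisect_left(sorted_haystack, needle)  # O(log m) each
        if i < len(sorted_haystack) and sorted_haystack[i] == needle:
            count += 1
    return count

print(count_members([1, 3, 5, 7, 9], [2, 3, 9, 10]))  # 2
```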
Computational complexity theory article in Wikipedia has a list of references, including a link to the book draft Computational Complexity: A Modern Approach, a textbook by Sanjeev Arora and Boaz Barak, Cambridge University Press.
The classic set of books on the subject is Knuth's Art of Computer Programming series. They're heavy on theory and formal proofs, so brush up on your calculus before you tackle them.
A course in Discrete Mathematics is sometimes given before Introduction to Algorithms.

Resources