How to develop programs with as low a time complexity as possible? - data-structures

How do you develop programs with as low a time complexity as possible?

There is no proven method to come up with the optimal algorithm for a given problem.
There are many problems for which the world likely has not come up with the most efficient algorithm yet.
For example: what is the algorithm with the best time complexity for matrix multiplication? At the time of writing, no one knows what that theoretically best time complexity is, let alone whether anyone has designed an algorithm that achieves it.

Related

How efficient is efficient when it comes to Polynomial Time Algorithms?

I hope this is the right place for this question.
Polynomial time algorithms! How do polynomial time algorithms (PTAs) actually relate to the processing power, memory size (RAM) and storage of computers?
We consider PTAs to be efficient. We know that even for a PTA, the time complexity increases with the input size n. Take primality testing, for example: there already exists a PTA that determines whether a number is prime. But what happens if I want to check a number this big https://justpaste.it/3fnj2? Is the PTA for the primality check still considered efficient? Is there a computer that can determine whether such a big number is prime?
Whether yes or no (maybe no, idk), how does the concept of polynomial time algorithms actually apply in the real world? Is there some computing bound or something for so-called polynomial time algorithms?
I've tried Google searches on this but all I find are mathematical Big O related explanations. I don't find articles that actually relate the concept of PTAs to computing power. I would appreciate some explanation or links to some resources.
There are a few things to explain.
Regarding polynomial time as efficient is just an arbitrary agreement. Mathematicians have essentially defined a set Efficient_Algorithms = {algorithms whose running time is bounded by a polynomial}. That is only a mathematical definition. Mathematicians don't see your actual hardware and they don't care about it; they work with abstract concepts. Yes, in that sense scientists consider O(n^100) efficient.
But you cannot compare statements from theoretical computer science one-to-one with computer programs running on hardware. Scientists work with formulas and theorems, while computer programs are executed on electrical circuits.
The Big-Oh notation does not help you compare implementations of an algorithm. The Big-Oh notation compares algorithms, not their implementations. This can be illustrated as follows. Suppose you have a prime-checking algorithm with a high polynomial complexity. You implement it and you see it does not perform well for practical use cases. So you use a profiler. It tells you where the bottleneck is. You find out that 98% of the computation time is spent on matrix multiplications. So you develop a processor that does exactly such calculations extremely fast. Or you buy the most modern graphics card for this purpose. Or you wait 150 years for a new hardware generation. Or you manage to run most of these multiplications in parallel. Imagine you somehow manage to reduce the time for matrix multiplications by 95%. With this wonderful hardware you run your algorithm, and suddenly it performs well. So your algorithm is actually efficient; it was only your hardware that was not powerful enough. This is not a thought experiment. Such dramatic improvements in computing power happen quite often in reality.
The vast majority of algorithms that have polynomial complexity have it because the problems they solve are themselves of polynomial complexity. Consider, for example, matrix multiplication: if you do it on paper, it is O(n^3). It is in the nature of the problem that it has polynomial complexity. In practice and daily life (I think), most problems for which you have a polynomial algorithm are actually polynomial problems. If you have a polynomial problem, then a polynomial algorithm is efficient.
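To make that concrete, here is a minimal Java sketch of the textbook triple-loop matrix multiplication; the three nested loops over n are exactly where the O(n^3) comes from (the method name and the square-matrix assumption are mine, just for illustration):

    // Naive matrix multiplication: three nested loops over n give O(n^3) work.
    // A minimal sketch; assumes square n x n matrices of matching size.
    static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length;
        double[][] c = new double[n][n];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                double sum = 0.0;
                for (int k = 0; k < n; k++) {
                    sum += a[i][k] * b[k][j];
                }
                c[i][j] = sum;
            }
        }
        return c;
    }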
Why do we talk about polynomial algorithms, and why do we consider them efficient? As already said, this is quite arbitrary. But as motivation, the following may be helpful. When talking about "polynomial algorithms", we can say there are two types of them.
The algorithms whose complexity is even lower than polynomial (e.g. linear or logarithmic). I think we can agree to call these efficient.
The algorithms that are genuinely polynomial and not lower than polynomial. As illustrated above, in practice these algorithms are often polynomial because they solve problems that are of a polynomial nature and therefore require polynomial complexity. If you see it this way, then of course we can say these algorithms are efficient.
In practice, if you have a linear problem you will normally recognise it as such, and you would not apply an algorithm with a worse complexity to it. This is just practical experience. If you, for example, search for an element in a list, you would not expect more comparisons than the number of elements in the list. If in such a case you apply an algorithm with complexity O(n^2), then of course this polynomial algorithm is not efficient. But as said, such mistakes are often so obvious that they don't happen.
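As a concrete (hypothetical) Java illustration: a single linear search is O(n), but nesting that search inside another loop, for example to count the common elements of two lists, silently turns a linear-feeling task into an O(n*m) one; a HashSet brings it back to roughly linear. The method names here are mine, purely for illustration.

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    class SearchExample {
        // O(n): one pass over the list, at most n comparisons.
        static boolean contains(List<Integer> list, int target) {
            for (int value : list) {
                if (value == target) return true;
            }
            return false;
        }

        // Accidentally O(n * m): the linear search runs once per element of 'a'.
        static int countCommonQuadratic(List<Integer> a, List<Integer> b) {
            int count = 0;
            for (int x : a) {
                if (contains(b, x)) count++;
            }
            return count;
        }

        // Roughly O(n + m): build a hash set once, then do constant-time lookups.
        static int countCommonLinear(List<Integer> a, List<Integer> b) {
            Set<Integer> inB = new HashSet<>(b);
            int count = 0;
            for (int x : a) {
                if (inB.contains(x)) count++;
            }
            return count;
        }
    }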
So that is my final answer to your question: in practice, software developers have a good feeling for linear complexity, and good developers also have a feeling for logarithmic complexity. As a consequence, you don't have to worry about complexity theory too much. If you have a polynomial algorithm, you normally have a good enough feel to tell whether the problem itself is actually linear or not; if it is not, then your algorithm is efficient. If you have an exponential algorithm, it may not be obvious what is going on, but in practice you see the computation time, do some experiments, or get complaints from users. Exponential complexity is normally hard to miss.

Is it better to use a genetic algorithm to solve the 0-1 Knapsack problem?

The Knapsack problem is a combinatorial optimization problem where one has to maximize the benefit of objects in a knapsack without exceeding its capacity. We know that there are many ways to solve this problem: genetic algorithms, dynamic programming, and the greedy method. I want to know the advantages and disadvantages of using a genetic algorithm compared with dynamic programming, in terms of space complexity, time complexity, and optimality.
So, in order to answer this, it is important to consider what matters most to you: speed or accuracy.
Genetic algorithms do not guarantee finding the optimal solution; however, they typically run very quickly.
Some quick descriptions of a Genetic Algorithm might yield:
It is an approximation method and does not guarantee finding the globally optimal solution
It typically runs very fast and uses little memory
Exact complexity calculations are hard, since genetic algorithms are typically problem-specific and chaotic in nature. A rough sketch of what its complexity might look like:
O( O(Fitness) * (O(mutation) + O(crossover)))
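To make that shape concrete, here is a minimal, hypothetical Java sketch of a genetic algorithm for the 0-1 Knapsack (fitness evaluation, tournament selection, single-point crossover, mutation). The population size, generation count, and mutation rate are illustrative parameters you would tune per problem, and the result is only a good guess, not a guaranteed optimum:

    import java.util.Random;

    class KnapsackGA {
        static final Random RNG = new Random(42); // fixed seed, for reproducibility only

        // Fitness: total value if the weight limit is respected, otherwise 0.
        static int fitness(boolean[] genes, int[] values, int[] weights, int capacity) {
            int value = 0, weight = 0;
            for (int i = 0; i < genes.length; i++) {
                if (genes[i]) { value += values[i]; weight += weights[i]; }
            }
            return weight <= capacity ? value : 0;
        }

        // Tournament selection: the fitter of two randomly chosen individuals.
        static boolean[] tournament(boolean[][] pop, int[] values, int[] weights, int capacity) {
            boolean[] a = pop[RNG.nextInt(pop.length)];
            boolean[] b = pop[RNG.nextInt(pop.length)];
            return fitness(a, values, weights, capacity) >= fitness(b, values, weights, capacity) ? a : b;
        }

        // Returns the best item selection found; NOT guaranteed to be optimal.
        static boolean[] solve(int[] values, int[] weights, int capacity,
                               int populationSize, int generations, double mutationRate) {
            int n = values.length;
            boolean[][] population = new boolean[populationSize][n];
            for (boolean[] individual : population)
                for (int i = 0; i < n; i++) individual[i] = RNG.nextBoolean();

            boolean[] best = population[0].clone();
            for (int gen = 0; gen < generations; gen++) {
                boolean[][] next = new boolean[populationSize][];
                for (int k = 0; k < populationSize; k++) {
                    boolean[] p1 = tournament(population, values, weights, capacity);
                    boolean[] p2 = tournament(population, values, weights, capacity);
                    // Single-point crossover.
                    int cut = RNG.nextInt(n);
                    boolean[] child = new boolean[n];
                    for (int i = 0; i < n; i++) child[i] = (i < cut) ? p1[i] : p2[i];
                    // Mutation: flip each bit with a small probability.
                    for (int i = 0; i < n; i++)
                        if (RNG.nextDouble() < mutationRate) child[i] = !child[i];
                    next[k] = child;
                    if (fitness(child, values, weights, capacity)
                            > fitness(best, values, weights, capacity)) best = child.clone();
                }
                population = next;
            }
            return best;
        }
    }

Each generation does roughly one fitness evaluation, one crossover, and one mutation per individual, which matches the rough cost shape given above, scaled by the number of generations and the population size.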
However, dynamic programming does guarantee finding the optimal solution, albeit with a longer run time. Some quick descriptions of dynamic programming might yield:
It is an exact method -- it guarantees finding the globally optimal solution
High memory usage and a very long run time
Complexity for the 0-1 knapsack problem is something like O(numItems * knapsackCapacity), and memory complexity like O(knapsackCapacity) (credits to this post for that); a sketch of that DP follows below
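A minimal Java sketch of that DP, matching the stated O(numItems * knapsackCapacity) time and O(knapsackCapacity) memory bounds; it returns only the optimal value (reconstructing the chosen items would need the full table). The example numbers in main are illustrative only.

    // Bottom-up DP for 0-1 Knapsack: O(numItems * knapsackCapacity) time,
    // O(knapsackCapacity) memory.
    class KnapsackDP {
        static int knapsack(int[] values, int[] weights, int capacity) {
            int[] best = new int[capacity + 1]; // best[c] = max value achievable with capacity c
            for (int i = 0; i < values.length; i++) {
                // Iterate capacity downwards so each item is used at most once.
                for (int c = capacity; c >= weights[i]; c--) {
                    best[c] = Math.max(best[c], best[c - weights[i]] + values[i]);
                }
            }
            return best[capacity];
        }

        public static void main(String[] args) {
            // Illustrative numbers only.
            int[] values = {60, 100, 120};
            int[] weights = {10, 20, 30};
            System.out.println(knapsack(values, weights, 50)); // prints 220
        }
    }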
If you are asking which is preferred, that is problem-specific. If you want a good-enough guess quickly, GA is probably the way to go. But if you need a guaranteed and verifiable solution, DP is the way to go.
This should satisfy a basic comparison.

Is there an effective way to calculate the runtime complexity of an algorithm in Java code?

I am practicing algorithms and data structures, and I was wondering if there is an effective way of calculating the runtime complexity of algorithms and data structures in Java code? Such as being able to System.out.println the speed of my algorithm? Thanks.
First of all, your question is not very clear: one can usually determine the worst-case time complexity just by looking at the pseudocode of your algorithm. Still, if you want to know the actual amount of time your algorithm took, here is one approach you can try out.
Get the time at the start of your algorithm.
Get the time at the end of your algorithm.
Now take the difference between them; it will give you the amount of time your program took to run.
Now keep increasing the value N (N here is the input size on which your algorithm's running time depends).
Now you can plot a graph of N vs. time taken, and from the graph you can get an idea of the growth rate, and hence the worst-case time complexity, of your algorithm.
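A minimal Java sketch of that measure-and-grow approach; Arrays.sort here is just a stand-in for whatever algorithm you are measuring, and the input sizes are arbitrary choices of mine.

    import java.util.Arrays;
    import java.util.Random;

    public class TimingHarness {
        public static void main(String[] args) {
            Random rng = new Random();
            // Time the algorithm for growing input sizes N and print (N, elapsed) pairs,
            // which you can then plot to get a feel for the growth rate.
            for (int n = 1_000; n <= 1_000_000; n *= 10) {
                int[] input = rng.ints(n).toArray();
                long start = System.nanoTime();           // step 1: time at the start
                Arrays.sort(input);                       // stand-in for your algorithm
                long elapsed = System.nanoTime() - start; // steps 2-3: time at the end, then the difference
                System.out.println("N = " + n + ", time = " + elapsed / 1_000_000.0 + " ms");
                // Note: JIT warm-up and GC add noise; repeat each measurement for stabler numbers.
            }
        }
    }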
Hope this helps!

How to translate algorithm complexity to time necessary for computation

If I know the complexity of an algorithm, can I predict how long it will take to compute in real life?
A bit more context:
I have been trying to solve a university assignment that has to find the best possible result in a game from a given position. I have written an algorithm and it works, but it is very slow. The complexity is O(5^n). For 24 elements it takes a few minutes to compute. I'm not sure whether it's because my implementation is wrong, or whether this algorithm is simply very slow. Is there a way for me to approximate how much time any algorithm should take?
At worst, you can base an estimate on extrapolation. Given the running times for N = 1, 2, 3, 4 elements (the more the better) and a big-O estimate of the algorithm's complexity, you can estimate the time for any finite N. Keep in mind, though, that the precision of this estimate degrades as N increases.
What can you do about that? Look into error-estimation techniques for such approaches. In practice this usually gives a good enough result.
Also, please don't forget about model adequacy checks. Having results for N = 1..10 and a big-O complexity, you should check how well your results fit your O-model, i.e. whether you can choose constants for the O-notation formula that match your measurements. If you cannot find such constants, you either need more data points to get a wider picture or ... well, your complexity estimate may simply be wrong :-).
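As a hypothetical worked example of that extrapolation for an assumed O(5^n) algorithm: fit the constant c in time ≈ c * 5^n from a few measured runs, then predict the time for a larger n. The numbers below are made up purely for illustration.

    // Hypothetical extrapolation sketch: fit a constant to measured times for an
    // assumed O(5^n) algorithm and predict the time for a larger n.
    public class Extrapolate {
        public static void main(String[] args) {
            // Measured (n, seconds) pairs -- illustrative numbers, not real data.
            int[] ns = {8, 9, 10};
            double[] seconds = {0.04, 0.21, 1.0};

            // Estimate c in time ≈ c * 5^n from each sample and average the estimates.
            double c = 0;
            for (int i = 0; i < ns.length; i++) {
                c += seconds[i] / Math.pow(5, ns[i]);
            }
            c /= ns.length;

            // If the O(5^n) model is adequate, this predicts the time for n = 24.
            double predicted = c * Math.pow(5, 24);
            System.out.printf("predicted time for n=24: %.1f seconds (~%.1f days)%n",
                    predicted, predicted / 86_400);
        }
    }

If the measured points do not all give roughly the same c, that is exactly the model-adequacy warning sign mentioned above.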
Useful links:
Brief review on algorithm complexity.
Time complexity catalogue
A really good place to start - look for examples based on code as input.
You cannot predict running time based on time complexity alone. There are many factors involved: hardware speed, programming language, implementation details, etc. The only thing you can predict using the complexity is expected time increase when the size of the input increases.
For example, personally, I've seen O(N^2) algorithms take longer than O(N^3) ones, especially for small values of N, as in your case. And by the way, 5^24 is a huge number (roughly 6e16). I wouldn't be surprised if that many operations took a few hours on a supercomputer, let alone on a mid-range personal PC, which is what most of us are using.

Software/script to calculate the computational complexity of code?

Does anyone know of any program/script to calculate the computational complexity of code (e.g., of a method or function) automatically?
If not, is there a good way (e.g. a design pattern, algorithm, etc) that supports it?
I'm not trying to do this in general.
In most cases, I know the input, the algorithm running it, and what constitutes a halt. I'm trying to compare 2 or more algorithms this way.
E.g.
algo #1 - 2x^2 + 10x + 5
algo #2 - 5x^2 + 1x + 3
Both algorithms are O(N^2). But algo #2 is better in the short run, while algo #1 is better in the long run.
While it is impossible to develop an algorithm that solves this problem in general, you can write an algorithm that estimates the complexity of a piece of software empirically from a few example inputs.
The only software I can find reference to is called Trend-Profiler. But, if you are more interested in the algorithm than the result, there is a paper here that describes the software and its algorithm.
Wouldn't it be better to sample each algorithm with different input sizes?
With the time measured for each input size, you could approximate the complexity function and therefore determine which algorithm is better, and at which point.
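With the asker's example cost formulas plugged in as stand-ins for measured times, a tiny Java sketch of that comparison could look like the following; it shows algo #2 winning for small x and algo #1 taking over past the crossover near x ≈ 3.2.

    // Hypothetical sketch: evaluate the two given cost formulas at growing x to see
    // where "better in the short run" flips to "better in the long run".
    public class Crossover {
        static long algo1(long x) { return 2 * x * x + 10 * x + 5; } // algo #1
        static long algo2(long x) { return 5 * x * x + x + 3; }      // algo #2

        public static void main(String[] args) {
            for (long x = 1; x <= 10; x++) {
                String cheaper = algo1(x) < algo2(x) ? "algo #1" : "algo #2";
                System.out.println("x=" + x + "  algo1=" + algo1(x)
                        + "  algo2=" + algo2(x) + "  cheaper: " + cheaper);
            }
            // Solving 2x^2 + 10x + 5 = 5x^2 + x + 3 gives x ≈ 3.2, so algo #2 wins
            // below that point and algo #1 wins above it.
        }
    }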
