I am a newbie to understanding parallel algorithms.
Can someone please explain in simple words (or with examples) what the asymptotic run time of a parallel algorithm means?
Context:
If the best known sequential algorithm for a problem π has an asymptotic run time of S(n), and T(n,p) is the asymptotic run time of a parallel algorithm on p processors, then the asymptotic speedup of the parallel algorithm is defined as S(n)/T(n,p).
If S(n)/T(n,p) = Θ(p), then the algorithm is said to have linear speedup.
The asymptotic run time of a parallel algorithm usually means the time required for an algorithm with p processors to give a correct solution. But the analysis of an algorithm always depends on the model you are using, so a fully satisfactory answer to your question would become too long. I strongly recommend reading chapter 27 of Introduction to Algorithms; it is an excellent text for understanding the analysis of parallel algorithms, and you will probably find the answer to your question in the first few pages.
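As a rough way to see these quantities concretely, here is a minimal sketch (assuming a shared-memory machine and Python's multiprocessing module; the input size and processor count are arbitrary) that measures the sequential time S(n) and the parallel time T(n,p) for summing n numbers:

```python
# Sketch: empirical speedup S(n)/T(n,p) for summing n numbers on p processors.
from multiprocessing import Pool
import time

def partial_sum(chunk):
    return sum(chunk)

def parallel_sum(data, p):
    # Split the input into p roughly equal chunks and sum them in parallel.
    size = (len(data) + p - 1) // p
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(p) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    n, p = 10_000_000, 4
    data = list(range(n))

    t0 = time.perf_counter()
    sum(data)                      # sequential algorithm: S(n) = Theta(n)
    s_n = time.perf_counter() - t0

    t0 = time.perf_counter()
    parallel_sum(data, p)          # parallel algorithm: roughly Theta(n/p) plus overhead
    t_np = time.perf_counter() - t0

    print("empirical speedup S(n)/T(n,p):", s_n / t_np)
```

Ideally T(n,p) = Θ(n/p), which gives the linear speedup Θ(p) from the definition above; in this toy example the process start-up and data-transfer overhead can easily dominate, which is itself a good reminder that the asymptotic speedup ignores such constant factors.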
How do you develop programs with as low a time complexity as possible?
There is no proven method to come up with the optimal algorithm for a given problem.
There are many problems for which it is likely that the world has not yet come up with the most efficient algorithm.
For example: what is the algorithm with the best time complexity for matrix multiplication? At the time of writing, no one knows what that theoretical best time complexity is, let alone whether someone has designed an algorithm that achieves it.
I hope this is the right place for this question.
Polynomial time algorithms! How do polynomial time algorithms (PTAs) actually relate to the processing power, memory size (RAM) and storage of computers?
We consider PTAs to be efficient. We know that even for a PTA, the time complexity increases with the input size n. For example, there already exists a PTA that determines whether a number is prime. But what happens if I want to check a number this big: https://justpaste.it/3fnj2? Is the PTA for primality checking still considered efficient? Is there a computer that can determine whether such a big number is prime?
Whether yes or no (maybe no, I don't know), how does the concept of polynomial time algorithms actually apply in the real world? Is there some computing bound or something for so-called polynomial time algorithms?
I've tried Google searches on this, but all I find are mathematical Big-O explanations. I can't find articles that actually relate the concept of PTAs to computing power. I would appreciate some explanation or links to some resources.
There are a few things to explain.
Regarding polynomial time as efficient is just an arbitrary convention. Mathematicians have defined a set Efficient_Algorithms = {algorithms whose running time is bounded by some polynomial P}. That is only a mathematical definition. Mathematicians don't see your actual hardware and they don't care about it; they work with abstract concepts. Yes, scientists consider O(n^100) efficient.
But you cannot compare statements from theoretical computer science one-to-one with computer programs running on hardware. Scientists work with formulas and theorems, while computer programs are executed on electric circuits.
The Big-O notation does not help you compare implementations of an algorithm. Big-O notation compares algorithms, not implementations of them. This can be illustrated as follows. Suppose you have a prime-checking algorithm with a high polynomial complexity. You implement it and see that it does not perform well for practical use cases. So you use a profiler, and it tells you where the bottleneck is: 98% of the computation time is spent on matrix multiplications. So you develop a processor that does exactly such calculations extremely fast. Or you buy the most modern graphics card for this purpose. Or you wait 150 years for a new hardware generation. Or you manage to make most of these multiplications parallel. Imagine you somehow reduced the time for matrix multiplications by 95%. With this wonderful hardware you run your algorithm, and suddenly it performs well. So your algorithm is actually efficient; it was only your hardware that was not powerful enough. This is not just a thought experiment: such dramatic improvements in computation power happen quite often in reality.
Most algorithms that have polynomial complexity have it because the problems they solve are inherently of polynomial complexity. Consider for example matrix multiplication: if you do it on paper it is O(n^3). It is the nature of this problem that it has polynomial complexity. In practice and daily life (I think), most problems for which you have a polynomial algorithm are actually polynomial problems. If you have a polynomial problem, then a polynomial algorithm is efficient.
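A minimal sketch of the paper-and-pencil method makes the O(n^3) visible: three nested loops of length n each.

```python
# Naive matrix multiplication: n * n * n multiply-adds, hence O(n^3).
def matmul(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C
```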
Why do we talk about polynomial algorithms and why do we consider them efficient? As already said, this is quite arbitrary, but as motivation the following may be helpful. When talking about "polynomial algorithms", we can say there are two types of them.
Algorithms whose complexity is even lower than polynomial (e.g. linear or logarithmic). I think we can agree these are efficient.
Algorithms that are actually polynomial and not lower than polynomial. As illustrated above, in practice these algorithms are oftentimes polynomial because they solve problems that are of polynomial nature and therefore require polynomial complexity. Seen this way, we can of course say these algorithms are efficient.
In practice, if you have a linear problem you will normally recognise it as a linear problem, and you would normally not apply an algorithm with a worse complexity to it. This is just practical experience. If, for example, you search for an element in a list, you would not expect more comparisons than the number of elements in the list. If in such a case you apply an algorithm with complexity O(n^2), then of course this polynomial algorithm is not efficient. But as said, such mistakes are usually so obvious that they don't happen.
So that is my final answer to your question: in practice, software developers have a good feeling for linear complexity, and good developers also have a feeling for logarithmic complexity. As a consequence, you don't have to worry about complexity theory too much. If you have a polynomial algorithm, you normally have a quite good feeling for whether the problem itself is actually linear or not; if it is not, then your algorithm is efficient. If you have an exponential algorithm, it may not be obvious what is going on, but in practice you see the computation time, run some experiments, or get complaints from users. Exponential complexity is normally hard to miss.
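To tie this back to the huge-number example in the question: primality tests whose running time is polynomial in the number of digits really do handle numbers with thousands of digits on ordinary hardware, because every step is just modular arithmetic on log(N)-bit values. Below is a sketch of the standard probabilistic Miller-Rabin test (not the deterministic polynomial-time AKS test; the round count k is an arbitrary choice):

```python
import random

def is_probable_prime(n, k=20):
    # Miller-Rabin: each round costs one modular exponentiation,
    # i.e. time polynomial in the number of digits of n.
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % small == 0:
            return n == small
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

# Even a number with over a thousand digits finishes in seconds:
print(is_probable_prime(2**4423 - 1))  # a known Mersenne prime
```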
The Knapsack problem is a combinatorial optimization problem where one has to maximize the benefit of objects in a knapsack without exceeding its capacity. We know that there are many ways to solve this problem: genetic algorithms, dynamic programming, and the greedy method. I want to know the advantages and disadvantages of using a genetic algorithm compared with dynamic programming, in terms of space complexity, time complexity, and optimality.
So in order to answer this, it is important to consider what you think is most important: speed or accuracy.
Genetic algorithms do not guarantee finding the most optimal solution, however, they typically run very quickly.
Some quick descriptions of a Genetic Algorithm might yield:
It is an approximation method and does not guarantee finding the globally optimal solution
It typically runs very fast (in both memory usage and run time)
Exact complexity calculations are hard, since genetic algorithms are typically problem-specific and chaotic in nature. A rough base for what the complexity might look like:
O( O(Fitness) * (O(mutation) + O(crossover)))
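As a rough, hypothetical sketch of that structure for the 0/1 knapsack (population size, generation count, and mutation rate below are arbitrary, untuned choices, and the code assumes at least two items), the per-generation cost is dominated by the fitness, crossover, and mutation steps:

```python
import random

def ga_knapsack(values, weights, capacity,
                pop_size=50, generations=200, mutation_rate=0.05):
    n = len(values)  # assumes n >= 2

    def fitness(genome):
        v = sum(val for val, g in zip(values, genome) if g)
        w = sum(wt for wt, g in zip(weights, genome) if g)
        return v if w <= capacity else 0      # penalize overweight solutions

    # Each generation performs O(pop_size) fitness, crossover, and mutation steps.
    population = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)          # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n):                    # bit-flip mutation
                if random.random() < mutation_rate:
                    child[i] ^= 1
            children.append(child)
        population = survivors + children
    best = max(population, key=fitness)
    return fitness(best), best
```

Note that nothing here certifies optimality; the result is only as good as the population happens to get.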
However, dynamic programming does guarantee finding the optimal solution, although with a much longer run time. Some quick descriptions of dynamic programming might yield:
It is an exact method -- it guarantees convergence on the globally optimal solution
High memory usage and a very long run time
The complexity for the 0/1 knapsack problem is something like O(numItems * knapsackCapacity) in time and O(knapsackCapacity) in memory (credits to this post for that)
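For comparison, a minimal sketch of that O(numItems * knapsackCapacity) dynamic program, using only O(knapsackCapacity) memory:

```python
def knapsack_01(values, weights, capacity):
    # dp[c] = best value achievable with capacity c using the items seen so far.
    dp = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        # Iterate capacities downwards so each item is used at most once.
        for c in range(capacity, weight - 1, -1):
            dp[c] = max(dp[c], dp[c - weight] + value)
    return dp[capacity]  # guaranteed optimal value

print(knapsack_01([60, 100, 120], [10, 20, 30], 50))  # -> 220
```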
If you are asking what is preferred, that is subject specific. If you want a good enough guess quickly, GA is probably the way to go. But if you need a guaranteed and verifiable solution, DP is the way to go.
This should satisfy a basic comparison.
Everyone knows that bubble sort is O(n^2), but this is based on the number of comparisons needed to sort the input. I have a question: if I didn't care about the number of comparisons, but about the overall output (wall-clock) time, how would you do the analysis? Is there a way to do the analysis on output time instead of comparisons?
For example, you could have bubble sort with parallel comparisons happening for all pairs (alternating even and odd pairs); then the throughput time would be something like 2n-1 steps. The number of comparisons would be high, but I don't care, as the final throughput time is quick.
So in essence, is there a common analysis for overall parallel performance time? If so, just give me some key terms and I'll learn the rest from google.
Parallel programming is a bit of a red herring here. Making assumptions about run time based only on big-O notation can be misleading. To compare the run times of algorithms you need the full equation, not just the big-O notation.
The problem is that big-O notation tells you the dominating term as n goes to infinity, but the run time you care about is on a finite range of n. This is easy to understand from continuous mathematics (my background).
Consider y = Ax and y = Bx^2. Big-O notation would tell you that y = Bx^2 is slower. However, on the interval (0, A/B) it is smaller than y = Ax. In that case it could be faster to use the O(x^2) algorithm than the O(x) algorithm for x < A/B.
In fact I have heard of sorting algorithms that start off with an O(n log n) algorithm and then switch to an O(n^2) algorithm when n is sufficiently small.
The best example is matrix multiplication. The naïve algorithm is O(n^3), but there are algorithms that get that down to O(n^2.3727). However, every such algorithm I have seen has such a large constant that the naïve O(n^3) method is still the fastest for all practical values of n, and that does not look likely to change any time soon.
So really what you need for a comparison is the full equation: something like An^3 (let's ignore lower-order terms) versus Bn^2.3727. In this case B is so incredibly large that the O(n^3) method always wins in practice.
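To make that concrete with made-up numbers (A and B below are hypothetical constants, not measured ones), you can compute where the asymptotically better method would actually start to win:

```python
# Hypothetical constants: the "fast" method carries a huge constant factor.
A, B = 1.0, 1e6

# B*n^2.3727 < A*n^3  <=>  n^(3 - 2.3727) > B/A  <=>  n > (B/A)**(1/0.6273)
crossover = (B / A) ** (1 / (3 - 2.3727))
print(f"the O(n^2.3727) method only wins for n > {crossover:.2e}")  # ~3.7e9
```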
Parallel programming usually just lowers the constant. For example, when I do matrix multiplication using four cores, my time goes from An^3 to (A/4)n^3. The same thing will happen with your parallel bubble sort: you will decrease the constant. So it's possible that for some range of values of n your parallel bubble sort will beat a non-parallel (or possibly even a parallel) merge sort, though, unlike with matrix multiplication, I think that range will be pretty small.
Algorithm analysis is not meant to give actual run times. That's what benchmarks are for. Analysis tells you how much relative complexity is in a program, but the actual run time for that program depends on so many other factors that strict analysis can't guarantee real-world times. For example, what happens if your OS decides to suspend your program to install updates? Your run time will be shot. Even running the same program over and over yields different results due to the complexity of computer systems (memory page faults, virtual machine garbage collection, IO interrupts, etc). Analysis can't take these into account.
This is why parallel processing doesn't usually come under consideration during analysis. The mechanism for "parallelizing" a program's components is usually external to your code, and usually based on a probabilistic algorithm for scheduling. I don't know of a good way to do static analysis on that. Once again, you can run a bunch of benchmarks and that will give you an average run time.
The time efficiency we gain from parallel steps can be measured by round complexity, where each round consists of parallel steps occurring at the same time. By counting rounds, we can see how effective the throughput time is, using the same kind of analysis we are used to.
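The parallel bubble sort described in the question is usually called odd-even transposition sort. A sketch that simulates it and counts rounds (each round compares disjoint pairs, so all of its comparisons could run simultaneously given enough processors) looks like this:

```python
def odd_even_transposition_sort(a):
    # n rounds suffice, so the round complexity is O(n) even though
    # the total number of comparisons is O(n^2).
    a = list(a)
    n = len(a)
    rounds = 0
    for r in range(n):
        start = r % 2                      # alternate even and odd pairs
        for i in range(start, n - 1, 2):   # these pairs are independent
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
        rounds += 1
    return a, rounds

print(odd_even_transposition_sort([5, 1, 4, 2, 3]))  # ([1, 2, 3, 4, 5], 5)
```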
Where can I turn for information regarding computing times of mathematical functions? Has any (general) study with any amount of rigor been made?
For instance, the computing time of
constant + constant
generally takes O(1).
Suppose I want to start using math like integrals, and I'd like to get an asymptotic approximation for the cost of computing various integrals. Has there been a standard study of this, or must I take the information I have and figure out my own approximation? I'd be very interested in a standard approach to this, and I'd like to know if one already exists.
Here's my motivation:
I'm in the middle of writing a paper that points out the equivalence between NP-hard problems and certain types of mathematical equations. It seems that there might be use for a study of math computing times that is generalized like a new science.
EDIT:
I guess I'm wondering if there is a standard computational complexity to any given math that cannot be avoided. I'm wondering if anyone has studied this question. I'd love to see what others have tried.
EDIT 2:
Wikipedia lists "Computational Complexity Theory" in their encyclopedia, which I think may fit the bill. I'm still wondering if someone who has studied this could affirm this.
"Standard" math has no notion of algorithmic complexity. That's reserved for computer algorithms.
There are ways to analyze the dynamic behavior of solutions of equations. Things like convergence matter a great deal to mathematicians.
You can ask about the algorithmic complexity of Euler integration versus fifth-order Runge-Kutta integration. They would be compared based on the number of function evaluations required and on time-step stability.
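As a toy illustration of comparing by function evaluations (using the classical fourth-order Runge-Kutta method rather than a fifth-order one, purely for brevity; the ODE and step count are arbitrary):

```python
def integrate(f, y0, t0, t1, steps, method):
    """Return (approximate y(t1), number of evaluations of f)."""
    h = (t1 - t0) / steps
    y, t, evals = y0, t0, 0
    for _ in range(steps):
        if method == "euler":
            y += h * f(t, y)                 # 1 evaluation per step
            evals += 1
        else:                                # classical RK4: 4 evaluations per step
            k1 = f(t, y)
            k2 = f(t + h / 2, y + h / 2 * k1)
            k3 = f(t + h / 2, y + h / 2 * k2)
            k4 = f(t + h, y + h * k3)
            y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            evals += 4
        t += h
    return y, evals

f = lambda t, y: y                                 # dy/dt = y, exact answer is e
print(integrate(f, 1.0, 0.0, 1.0, 100, "euler"))   # less accurate, 100 evaluations
print(integrate(f, 1.0, 0.0, 1.0, 100, "rk4"))     # far more accurate, 400 evaluations
```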
But what's the "running time" of the solution to Fermat's Last Theorem? What about the last of David Hilbert's challenge problems? Is the "running time" for those a century and counting? What's your running time for solving a partial differential equation using separation of variables?
When you think about it that way, do you have a better understanding of why people would be put off by your question?
Yes, for various mathematical functions, the computational complexity (running time) of computing the function has been studied. This can differ depending on the model of computation.
For example, adding two n-bit numbers takes Θ(n) time, multiplying them takes Θ(n log n) time (using the FFT), finding their gcd takes Θ(n^2) time with the usual Euclidean algorithm and Θ(n (log n)^2 log log n) with better algorithms, etc. For more complicated stuff like integrals, obviously it depends on what algorithm you use to do it.
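A quick way to see those growth rates in practice with Python's built-in big integers (note this is only a rough sketch: CPython's multiplication and gcd use different algorithms than the theoretical bounds quoted above, so it only shows the general trend, and the bit sizes are arbitrary):

```python
import math
import random
import time

def time_op(op, bits):
    # Time a single operation on two random odd integers of the given bit length.
    a = random.getrandbits(bits) | 1
    b = random.getrandbits(bits) | 1
    t0 = time.perf_counter()
    op(a, b)
    return time.perf_counter() - t0

for bits in (50_000, 100_000, 200_000):
    print(bits,
          "add:", time_op(lambda a, b: a + b, bits),
          "mul:", time_op(lambda a, b: a * b, bits),
          "gcd:", time_op(math.gcd, bits))
```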
There isn't a collected body of work, but work on approximating functions comes close. For example, you'd like to know that approximating sin(x) to within an epsilon error can be done in time proportional to some polynomial in log(x) and 1/epsilon. There isn't a general theory here (you should look up information complexity though), and focusing on specific functions might help.
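As a toy example of what such an analysis looks like for one specific function, here is a naive Taylor-series sketch for sin (the argument reduction and the cost model are deliberately simplistic):

```python
import math

def sin_approx(x, eps):
    # Taylor series for sin after reducing x modulo 2*pi.
    # Returns (approximate value, number of terms used); the term count
    # grows only slowly as eps shrinks.
    x = math.fmod(x, 2 * math.pi)
    term, total, n = x, x, 1
    while abs(term) > eps:
        # Next odd-power term: multiply by -x^2 / ((2n)(2n+1)).
        term *= -x * x / ((2 * n) * (2 * n + 1))
        total += term
        n += 1
    return total, n

print(sin_approx(1.0, 1e-12))   # close to math.sin(1.0), with only a handful of terms
print(math.sin(1.0))
```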
user389117,
I think that subconsciously you want to deduce the complexity of computing a mathematical expression from the form of that expression.
E.g. for an expression that involves the square of a variable (x^2), you think (at least subconsciously) that the complexity of the computation is analogous to x^2, so the complexity should be something like O(n^2), or that there is a standard process to deduce the form of the complexity from the form of the mathematical equation.
These are two different qualities, and one cannot be deduced from the other.
I will give you an example: in papers, all algorithms are written in pseudocode, and then the scientists deduce the complexity of that pseudocode.
The pseudocode must inevitably be written first, and only then can you compute the complexity.
There is no magical way to derive the complexity from the form of the thing you want to compute.
Even if you compute the complexity and find that its form is analogous to the form of the equation being computed, I think it would be hard, at least at first, for you to turn that observation from pseudo-science into science.
Good Luck!