I wrote a Prim's Maze generator using equal rates so that it is almost entirely random. I have benchmarked the algorithm and found it to have a running time of O(5n). This equates to a 290 second run time to generate a 128 x 128 maze.
My question is, is this a good run time? Is it high, low, average? I have a feeling the slowdown has more to do with caching the nodes of the maze than with the relatively lightweight integer comparisons. I just want to get an idea of whether I have a decent implementation or if it is just too slow.
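For concreteness, here is a simplified sketch of a typical frontier-based randomised Prim generator on an N x N grid (not necessarily the poster's implementation, just the usual shape of the algorithm, with every frontier entry picked with equal probability):

```cpp
#include <cstdlib>
#include <utility>
#include <vector>

// Simplified sketch of a frontier-based randomised Prim maze generator on an
// N x N cell grid: every step pops a uniformly random frontier entry and carves
// a passage back to the visited cell that added it. Most of the work is
// maintaining the frontier and touching scattered cells, which is where cache
// behaviour tends to dominate the cheap integer comparisons.
int main() {
    const int N = 128;
    std::vector<char> visited(N * N, 0);
    std::vector<std::pair<int, int>> frontier;      // (unvisited cell, visited neighbour)
    std::vector<std::pair<int, int>> passages;      // carved edges of the maze

    auto addFrontier = [&](int cell, int from) {
        if (!visited[cell]) frontier.push_back({cell, from});
    };
    auto forEachNeighbour = [&](int cell, auto&& fn) {
        int r = cell / N, c = cell % N;
        if (r > 0)     fn(cell - N);
        if (r < N - 1) fn(cell + N);
        if (c > 0)     fn(cell - 1);
        if (c < N - 1) fn(cell + 1);
    };

    visited[0] = 1;
    forEachNeighbour(0, [&](int n) { addFrontier(n, 0); });

    while (!frontier.empty()) {
        std::size_t pick = std::rand() % frontier.size();   // equal probability for every entry
        auto [cell, from] = frontier[pick];
        frontier[pick] = frontier.back();                   // swap-remove the chosen entry
        frontier.pop_back();
        if (visited[cell]) continue;                        // stale entry, already carved
        visited[cell] = 1;
        passages.push_back({from, cell});
        forEachNeighbour(cell, [&](int n) { addFrontier(n, cell); });
    }
}
```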
Suppose I wish to solve 0-1 knapsack but both weights and values of elements can get large. (Weight[i] < 1e5 and Value[i] < 1e9)
Also, number of items is ~2000 and size of knapsack is ~4e6.
Obviously dp[index][weight] exceeds memory and time expectations.
However, somewhat magically, memoization DP (values stored in std::map or std::unordered_map) works well. I can see why it may be a bit faster: after all, in recursive DP, we compute only the states we need.
But, is it actually significantly faster? In other words, the usual computation requires 8e9 operations, but how much speedup can we expect, on average, by computing dp this way?
Since computing a state takes constant time, the question boils down to this: how many states (in expectation / on average) are there to compute? (Normally it's 8e9.)
You can assume:
We use std::unordered_map.
Our hash function works well enough.
Recursive overheads can be ignored.
Thanks!
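For concreteness, a minimal sketch of the memoised recursion being described, assuming 64-bit values and an (index, capacity) pair packed into a single key for the std::unordered_map:

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <unordered_map>
#include <vector>

// Values can reach ~1e9 and there are ~2000 items, so totals need 64 bits.
static std::vector<int>     weight;
static std::vector<int64_t> value;
static std::unordered_map<uint64_t, int64_t> memo;   // only states actually reached

static int64_t best(int index, int capacity) {
    if (index < 0 || capacity == 0) return 0;
    // Pack (index, remaining capacity) into one 64-bit key for the hash map.
    uint64_t key = (static_cast<uint64_t>(index) << 32) | static_cast<uint32_t>(capacity);
    auto it = memo.find(key);
    if (it != memo.end()) return it->second;

    int64_t result = best(index - 1, capacity);                 // skip this item
    if (weight[index] <= capacity)                              // or take it
        result = std::max(result, value[index] + best(index - 1, capacity - weight[index]));
    return memo[key] = result;
}

int main() {
    weight = {2, 3, 4};
    value  = {3, 4, 5};
    std::printf("%lld\n", static_cast<long long>(best(2, 6)));  // tiny example: expect 8
}
```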
In the worst case, the recursive version can compute nearly as many states as the iterative version. Since you've implemented the recursive version without exceeding your available memory, you are pretty far from the worst case.
In special cases, the recursive version can be faster than the naive iterative DP. For example, if all or most of your weights have a large common factor, then that factor will divide the number of states that need to be computed. If you implement the iterative version so that it only considers accessible weights, then it will see this speed-up as well.
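A sketch of that iterative variant, tracking only the capacities that are actually reachable (keying a map by weight is just one way to do it; the tiny inputs in main are made up to show the common-factor effect):

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <unordered_map>
#include <vector>

// Iterative DP over reachable capacities only: the map holds, for each weight
// actually attainable so far, the best value at that weight. If all weights
// share a large common factor, the map stays correspondingly small.
static int64_t knapsackReachable(const std::vector<int>& w,
                                 const std::vector<int64_t>& v, int cap) {
    std::unordered_map<int, int64_t> best = {{0, 0}};           // weight -> best value
    for (std::size_t i = 0; i < w.size(); ++i) {
        std::unordered_map<int, int64_t> next = best;
        for (const auto& state : best) {
            int nw = state.first + w[i];
            if (nw > cap) continue;
            int64_t& slot = next[nw];                           // 0 if newly created
            slot = std::max(slot, state.second + v[i]);
        }
        best.swap(next);
    }
    int64_t answer = 0;
    for (const auto& state : best) answer = std::max(answer, state.second);
    return answer;
}

int main() {
    std::printf("%lld\n", static_cast<long long>(
        knapsackReachable({2000, 3000, 4000}, {3, 4, 5}, 6000)));  // weights share a factor of 1000
}
```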
From what I understand, a randomised algorithm can give a wrong answer. For example, using the contraction algorithm to solve the graph min-cut problem, you need to run the algorithm n^2*ln(n) times so that the probability of failing to get the correct answer is at most 1/n. No matter how small the probability of failure is, the answer could still be incorrect, so when is it acceptable to allow an incorrect answer?
To begin with, I think you need to differentiate between different classes of randomized algorithms:
A Monte Carlo algorithm is an algorithm which is random w.r.t. correctness. The randomized min-cut algorithm, from your question, is an example of such an algorithm.
A Las Vegas algorithm is an algorithm which is random w.r.t. running time. Randomized quicksort, for example, is such an algorithm.
You seem to mean Monte-Carlo algorithms in your question.
The question of whether a Monte-Carlo algorithm is suitable for you probably can't be answered objectively, because it rests on something like the economic theory of utility. Given two algorithms, A and B, each invocation of A or B takes some time t and gives you a result whose correctness is c. The utility U(t, c) is a random variable, and only you can determine whether the distribution of UA(T, C) is better or worse than UB(T, C). Some examples, where algorithm A runs twice as fast as B but errs with probability 1e-6:
If these are preference recommendations on a website, then it might be worth it for you to have your website twice as responsive as that of a competitor, at the risk that, rarely, a client gets wrong recommendations.
If these are control systems for a nuclear reactor (to borrow from TemplateTypedef's comment), then a slight chance of failure might not be worth the time saving (e.g., you probably would be better investing in a processor twice as fast running the slower algorithm).
The two examples above show that each of the two choices might be correct in different settings. In fact, utility theory rarely shows sets of choices that are clearly wrong. In the introduction to the book Randomized Algorithms by Motwani and Raghavan, however, the authors do give such an example of the fallacy of avoiding Monte-Carlo algorithms. The probability of a CPU malfunctioning due to cosmic radiation is some α (whose value I forget). Thus avoiding a Monte-Carlo algorithm whose probability of error is much lower than α is probably simply irrational.
You'll always need to analyze the properties of the algorithm and decide if the risk of a non-optimal answer is bearable in your application. (If the answer is Boolean, then "non-optimal" is the same as "wrong.")
There are many kinds of programming problems where some answer that's close to optimal and obtained in reasonable time is much better than the optimal answer provided too late or not at all.
The Traveling Salesman problem is an example. If you are Walmart and need to plan delivery routes each night for given sets of cities, getting a route that's close to optimal is much better than no route or a naively chosen one or the best possible route obtained 2 days from now.
There are many kinds of guarantees provided by randomized algorithms. They often have the form error <= F(cost), where error and cost can be almost anything. The cost may be expressed in run time or how many repeat runs are spent looking for better answers. Space may also figure in cost. The error may be probability of a wrong 1/0 answer, a distance metric from an optimal result, a discrete count of erroneous components, etc., etc.
Sometimes you just have to live with a maybe-wrong answer because there's no useful alternative. Primality testing on big numbers is in this category. Though there are polynomial time deterministic tests, they are still much slower than a probabilistic test that produces the correct answer for all practical purposes.
For example, if you have a Boolean randomized function where True results are always correct, but False are wrong 50% of the time, then you are in pretty good shape. (The Miller-Rabin primality test is actually better than this.)
Suppose you can afford to run the algorithm 40 times. If any of the runs says False, you know the answer is False. If they're all True, then the probability that the real answer is actually False is roughly 2^-40, about 1 in a trillion.
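A toy sketch of that amplification loop, with a stand-in one-shot test simulating the one-sided error described above (a real test such as a single Miller-Rabin round would slot in the same way):

```cpp
#include <cstdio>
#include <random>

// Toy simulation of error amplification with a hypothetical one-sided test:
// when the hidden truth is "false" a single run still answers "true" half the
// time, and when the truth is "true" a single run never answers "false".
static std::mt19937 rng(42);
static bool hiddenTruth = false;

static bool oneShotTest() {            // stand-in for e.g. a single Miller-Rabin round
    if (hiddenTruth) return true;      // never errs on a true instance
    return rng() % 2 == 0;             // errs (says "true") with probability 1/2
}

int main() {
    const int k = 40;
    bool allTrue = true;
    for (int i = 0; i < k && allTrue; ++i)
        allTrue = oneShotTest();
    // If all k runs said "true", the chance the truth is actually false is at
    // most 2^-k, roughly 9.1e-13 for k = 40 (about one in a trillion).
    std::printf("amplified verdict: %s\n", allTrue ? "true" : "false");
}
```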
Even in safety-critical applications, this may be a fine result. The chance of being hit by lightning in a lifetime is about 1/10,000. We all live with that and don't give it a second thought.
Extending the question of streetparade, I would like to ask what is the difference, if any, between a stochastic and a heuristic algorithm.
Would it be right to say that a stochastic algorithm is actually one type of heuristic?
TTBOMK, "stochastic algorithm" is not a standard term. "Randomized algorithm" is, however, and it's probably what is meant here.
Randomized: Uses randomness somehow. There are two flavours: Monte Carlo algorithms always finish in bounded time, but don't guarantee an optimal solution, while Las Vegas algorithms aren't necessarily guaranteed to finish in any finite time, but promise to find the optimal solution. (Usually they are also required to have a finite expected running time.) Examples of common Monte Carlo algorithms: MCMC, simulated annealing, and Miller-Rabin primality testing. Quicksort with randomized pivot choice is a Las Vegas algorithm that always finishes in finite time. An algorithm that does not use any randomness is deterministic.
Heuristic: Not guaranteed to find the correct answer. An algorithm that is not heuristic is exact.
Many heuristics are sensitive to "incidental" properties of the input that don't affect the true solution, such as the order items are considered in the First-Fit heuristic for the Bin Packing problem. In this case they can be thought of as Monte Carlo randomized algorithms: you can randomly permute the inputs and rerun them, always keeping the best answer you find. OTOH, other heuristics don't have this property -- e.g. the First-Fit-Decreasing heuristic is deterministic, since it always first sorts the items in decreasing size order.
If the set of possible outputs of a particular randomized algorithm is finite and contains the true answer, then running it long enough is "practically guaranteed" to eventually find it (in the sense that the probability of not finding it can be made arbitrarily small, but never 0). Note that it's not automatically the case that some permutation of the inputs to a heuristic will result in getting the exact answer -- in the case of First-Fit, it turns out that this is true, but this was only proven in 2009.
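As a concrete illustration of "Monte-Carlo-ifying" such a heuristic, here is a minimal First-Fit bin packing routine rerun on random permutations of the input, keeping the best packing seen; the item sizes in main are made up for the example:

```cpp
#include <algorithm>
#include <cstdio>
#include <random>
#include <vector>

// First-Fit: place each item into the first bin it fits in; open a new bin otherwise.
// Returns the number of bins used.
static int firstFit(const std::vector<double>& items, double capacity) {
    std::vector<double> binLoad;
    for (double item : items) {
        bool placed = false;
        for (double& load : binLoad)
            if (load + item <= capacity) { load += item; placed = true; break; }
        if (!placed) binLoad.push_back(item);
    }
    return static_cast<int>(binLoad.size());
}

// Randomised variant: rerun First-Fit on random permutations of the input and
// keep the best (fewest-bin) packing seen so far.
static int randomizedFirstFit(std::vector<double> items, double capacity, int trials) {
    std::mt19937 rng(12345);
    int best = firstFit(items, capacity);
    for (int t = 0; t < trials; ++t) {
        std::shuffle(items.begin(), items.end(), rng);
        best = std::min(best, firstFit(items, capacity));
    }
    return best;
}

int main() {
    // Made-up sizes: in this order First-Fit needs 4 bins; the best order needs only 3,
    // which the shuffled reruns typically find.
    std::vector<double> items = {0.4, 0.4, 0.4, 0.6, 0.6, 0.6};
    std::printf("best packing found: %d bins\n", randomizedFirstFit(items, 1.0, 1000));
}
```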
Sometimes stronger statements about convergence of randomized algorithms can be made: these are usually along the lines of "For any given small threshold d, after t steps we will be within d of the optimal solution with probability f(t, d)", with f(t, d) an increasing function of t and d.
Both approaches are usually used to speed up generate-and-test solutions to NP-complete problems.
Stochastic algorithms use randomness
They try the candidate combinations not in order but at random, drawn from the whole range of possibilities, hoping to hit the solution sooner. Implementation is fast and easy, and a single iteration is also fast (constant time); a minimal sketch appears at the end of this answer.
Heuristic algorithms
They pick the combinations not at random but based on some knowledge of the process involved, the input dataset, or the intended usage. This lowers the number of combinations significantly, to only those that are probably solutions, and usually all of those are tried until a solution is found.
Implementation complexity depends on the problem. A single iteration is usually much, much slower than in the stochastic approach (though still constant time), so a heuristic is used only if it lowers the number of possibilities enough for the speed-up to be visible: even though the algorithmic complexity with a heuristic is usually much lower, the constant factor can sometimes be big enough to slow things down in runtime terms.
Both approaches can also be combined.
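Here is the promised minimal sketch of the stochastic generate-and-test idea, using subset-sum purely as a stand-in NP-complete problem (the numbers are arbitrary):

```cpp
#include <cstdio>
#include <random>
#include <vector>

// Illustrative stochastic generate-and-test for subset-sum: keep drawing random
// candidate subsets and test each one, hoping to hit the target sum early.
static bool randomGenerateAndTest(const std::vector<int>& values, long long target, int maxTries) {
    std::mt19937 rng(7);
    for (int t = 0; t < maxTries; ++t) {
        long long sum = 0;
        for (int v : values)
            if (rng() % 2) sum += v;        // each item is in or out with probability 1/2
        if (sum == target) return true;     // cheap test, constant-ish time per candidate
    }
    return false;                           // may miss a solution that does exist
}

int main() {
    std::vector<int> values = {3, 34, 4, 12, 5, 2};               // arbitrary numbers
    std::printf("%s\n", randomGenerateAndTest(values, 9, 100000) ? "found" : "not found");
}
```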
If I know complexity of an algorithm, can I predict how long it will compute in real life?
A bit more context:
I have been trying to solve a university assignment which has to find the best possible result in a game from a given position. I have written an algorithm and it works, but it is very slow. The complexity is O(5^n). For 24 elements it takes a few minutes to compute. I'm not sure whether my implementation is wrong, or whether this algorithm is simply very slow. Is there a way for me to approximate how much time any algorithm should take?
At worst, you can base an estimate on extrapolation. Given timings for N = 1, 2, 3, 4 elements (the more the better) and the O-notation estimate of the algorithm's complexity, you can estimate the time for any finite N. The catch is that the precision of this estimate degrades as N increases.
What can you do about that? Look up error-estimation techniques for such approaches; in practice they usually give a good enough result.
Also, please don't forget about model adequacy checks. Having results for N = 1..10 and an O-notation complexity, you should check how well your results fit your O-model (i.e. whether you can choose constants for the O-notation formula that match your measurements). If you cannot, you either need more measurements to get a wider picture or... well, your complexity estimate may simply be wrong :-).
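A minimal sketch of that extrapolation, assuming the claimed O(5^N) growth and some hypothetical timings measured at small N:

```cpp
#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

int main() {
    // Hypothetical running times (seconds) measured at small N, assuming the
    // claimed O(5^N) growth, i.e. T(N) ~= c * 5^N for some constant c.
    std::vector<std::pair<int, double>> samples = {
        {8, 0.002}, {10, 0.05}, {12, 1.2}, {14, 30.0}};

    // Estimate the constant c by averaging T(N) / 5^N over the samples.
    double c = 0.0;
    for (const auto& s : samples) c += s.second / std::pow(5.0, s.first);
    c /= samples.size();

    // Extrapolate to the target size; expect the prediction to get less
    // reliable the further the target is from the measured range.
    const int target = 24;
    std::printf("predicted T(%d) ~= %.0f seconds\n", target, c * std::pow(5.0, target));
}
```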
Useful links:
Brief review on algorithm complexity.
Time complexity catalogue
A really good starting point - look for examples that take code as input.
You cannot predict running time based on time complexity alone. There are many factors involved: hardware speed, programming language, implementation details, etc. The only thing you can predict using the complexity is expected time increase when the size of the input increases.
For example, I've personally seen O(N^2) algorithms take longer than O(N^3) ones, especially for small values of N, as in your case. And, by the way, 5^24 is a huge number (5.9e16). I wouldn't be surprised if that took a few hours on a supercomputer, let alone on a mid-range personal PC, which is what most of us are using.
I am trying to write a demo for an embedded processor, which is a multicore architecture and is very fast in floating point calculations. The problem is that the current hardware I have is the processor connected through an evaluation board where the DRAM to chip rate is somewhat limited, and the board to PC rate is very slow and inefficient.
Thus, when demonstrating big matrix multiplication, I can do, say, 128x128 matrices in a couple of milliseconds, but the I/O, which takes (lots of) seconds, kills the demo.
So I am looking for some kind of calculation with complexity higher than n^3, the more the better (but preferably easy to program and to explain/understand), to make the computation part more dominant in the time budget, where the dataset is preferably bounded to about 16 KB per thread (core).
Any suggestion?
PS: I think it is very similar to this question in its essence.
You could generate large (256-bit) numbers and factor them; that's commonly used in "stress-test" tools. If you specifically want to exercise floating point computation, you can build a basic n-body simulator with a Runge-Kutta integrator and run that.
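A minimal sketch of such an n-body simulator, here with a second-order Runge-Kutta (midpoint) step, unit masses, G = 1 and a small softening term assumed; the O(n^2) force evaluation is the floating-point-heavy part:

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

// Minimal O(n^2) gravitational n-body sketch with a second-order Runge-Kutta
// (midpoint) integrator; purely illustrative.
struct Body { double x, y, z, vx, vy, vz; };

static void accelerations(const std::vector<Body>& b, std::vector<double>& ax,
                          std::vector<double>& ay, std::vector<double>& az) {
    const double eps = 1e-9;                       // softening avoids division by zero
    std::size_t n = b.size();
    ax.assign(n, 0.0); ay.assign(n, 0.0); az.assign(n, 0.0);
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j) {
            if (i == j) continue;
            double dx = b[j].x - b[i].x, dy = b[j].y - b[i].y, dz = b[j].z - b[i].z;
            double r2 = dx * dx + dy * dy + dz * dz + eps;
            double inv = 1.0 / (r2 * std::sqrt(r2));
            ax[i] += dx * inv; ay[i] += dy * inv; az[i] += dz * inv;
        }
}

static void step(std::vector<Body>& b, double h) {  // one midpoint (RK2) step
    std::size_t n = b.size();
    std::vector<double> ax, ay, az;
    accelerations(b, ax, ay, az);
    std::vector<Body> mid = b;                      // advance half a step
    for (std::size_t i = 0; i < n; ++i) {
        mid[i].x += 0.5 * h * b[i].vx; mid[i].y += 0.5 * h * b[i].vy; mid[i].z += 0.5 * h * b[i].vz;
        mid[i].vx += 0.5 * h * ax[i];  mid[i].vy += 0.5 * h * ay[i];  mid[i].vz += 0.5 * h * az[i];
    }
    accelerations(mid, ax, ay, az);                 // derivatives at the midpoint
    for (std::size_t i = 0; i < n; ++i) {
        b[i].x += h * mid[i].vx; b[i].y += h * mid[i].vy; b[i].z += h * mid[i].vz;
        b[i].vx += h * ax[i];    b[i].vy += h * ay[i];    b[i].vz += h * az[i];
    }
}

int main() {
    std::vector<Body> bodies(512);
    for (Body& bd : bodies)                         // random initial positions, zero velocity
        bd = { std::rand() / double(RAND_MAX), std::rand() / double(RAND_MAX),
               std::rand() / double(RAND_MAX), 0.0, 0.0, 0.0 };
    for (int t = 0; t < 100; ++t) step(bodies, 1e-3);
}
```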
What you can do is:
Declare a std::vector of int
Populate it with 0 to N-1 (sorted ascending)
Now keep calling std::next_permutation repeatedly until the values are sorted again, i.e. next_permutation returns false.
With N integers this needs O(N!) calculations and is also fully deterministic.
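A minimal sketch of that loop (N = 12 is arbitrary; 12! is already about 4.8e8 iterations):

```cpp
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <vector>

int main() {
    const int N = 12;                       // 12! ~= 4.8e8 iterations
    std::vector<int> v(N);
    std::iota(v.begin(), v.end(), 0);       // start from the sorted order 0..N-1

    long long count = 0;
    do {
        ++count;                            // any per-permutation work goes here
    } while (std::next_permutation(v.begin(), v.end()));   // false once sorted again

    std::printf("visited %lld permutations\n", count);
}
```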
PageRank may be a good fit. Articulated as a linear algebra problem, one repeatedly squares a certain floating-point matrix of controllable size until convergence. In the graphical metaphor, one "ripples" the change coming into each node out along its other edges. Both treatments can be made parallel.
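A minimal sketch of the repeated-squaring treatment, using a random column-stochastic matrix as a stand-in for the real Google matrix (the link structure and damping details are glossed over); each squaring is an O(n^3) floating-point matrix multiply:

```cpp
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

// Repeated squaring of a column-stochastic "Google-like" matrix: G^(2^k)
// converges to a matrix whose columns all approximate the PageRank vector.
using Matrix = std::vector<std::vector<double>>;

static Matrix multiply(const Matrix& a, const Matrix& b) {
    std::size_t n = a.size();
    Matrix c(n, std::vector<double>(n, 0.0));
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t k = 0; k < n; ++k)
            for (std::size_t j = 0; j < n; ++j)
                c[i][j] += a[i][k] * b[k][j];
    return c;
}

static double maxDiff(const Matrix& a, const Matrix& b) {
    double d = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i)
        for (std::size_t j = 0; j < a.size(); ++j)
            d = std::max(d, std::fabs(a[i][j] - b[i][j]));
    return d;
}

int main() {
    const std::size_t n = 256;                      // controllable problem size
    std::mt19937 rng(1);
    std::uniform_real_distribution<double> u(0.0, 1.0);

    // Random column-stochastic matrix as a stand-in for the real link matrix.
    Matrix g(n, std::vector<double>(n));
    for (std::size_t j = 0; j < n; ++j) {
        double sum = 0.0;
        for (std::size_t i = 0; i < n; ++i) { g[i][j] = u(rng); sum += g[i][j]; }
        for (std::size_t i = 0; i < n; ++i) g[i][j] /= sum;
    }

    Matrix m = g;
    for (int iter = 0; iter < 30; ++iter) {         // square until nothing changes
        Matrix next = multiply(m, m);
        if (maxDiff(next, m) < 1e-12) break;
        m = next;
    }
}
```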
You could do a least trimmed squares fit. One use of this is to identify outliers in a data set. For example you could generate samples from some smooth function (a polynomial say) and add (large) noise to some of the samples, and then the problem is to find a subset H of the samples of a given size that minimises the sum of the squares of the residuals (for the polynomial fitted to the samples in H). Since there are a large number of such subsets, you have a lot of fits to do! There are approximate algorithms for this, for example here.
Well, one way to go would be to implement a brute-force solver for the Traveling Salesman problem in some M-space (with M > 1).
The brute-force solution is to just try every possible permutation and then calculate the total distance for each permutation, without any optimizations (including no dynamic programming tricks like memoization).
For N points, there are (N!) permutations (with a redundancy factor of at least (N-1), but remember, no optimizations). Each pair of points requires (M) subtractions, (M) multiplications and one square-root operation to determine their Pythagorean distance apart. Each permutation has (N-1) pairs of points to calculate and add to the total distance.
So order of computation is O(M((N+1)!)), whereas storage space is only O(N).
Also, this should be neither too hard nor too intensive to parallelize across the cores, though it does take some overhead. (I can demonstrate, if needed.)
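A minimal sketch of that brute-force solver, with random points in M = 3 dimensions and no optimizations of any kind (N = 11 already gives about 4e7 tours):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <numeric>
#include <vector>

// Brute-force TSP in M-dimensional space, exactly as described above: try every
// permutation of the points and sum the Euclidean edge lengths, with no pruning
// and no memoisation. O(M * N!) time, O(N) extra space.
int main() {
    const int N = 11, M = 3;                        // 11! is already about 4e7 tours
    std::vector<std::vector<double>> p(N, std::vector<double>(M));
    for (auto& point : p)
        for (double& x : point) x = std::rand() / double(RAND_MAX);

    auto dist = [&](int a, int b) {
        double s = 0.0;                             // M subtractions, M multiplies, one sqrt
        for (int d = 0; d < M; ++d) s += (p[a][d] - p[b][d]) * (p[a][d] - p[b][d]);
        return std::sqrt(s);
    };

    std::vector<int> order(N);
    std::iota(order.begin(), order.end(), 0);
    double best = 1e300;
    do {
        double total = 0.0;
        for (int i = 0; i + 1 < N; ++i) total += dist(order[i], order[i + 1]);
        best = std::min(best, total);
    } while (std::next_permutation(order.begin(), order.end()));

    std::printf("shortest open tour: %f\n", best);
}
```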
Another idea might be to compute a fractal map. Basically, choose a grid of whatever dimensionality you want. Then, for each grid point, do the fractal iteration to get the value. Some points might require only a few iterations; I believe some will iterate forever (chaos; of course, this can't really happen when you have a finite number of floating-point numbers, but still). The ones that don't stop you'll have to "cut off" after a certain number of iterations... just make this preposterously high, and you should be able to demonstrate a high-quality fractal map.
Another benefit of this is that grid cells are processed completely independently, so you will never need to do communication (not even at boundaries, as in stencil computations, and definitely not O(pairwise) as in direct N-body simulations). You can usefully use O(gridcells) number of processors to parallelize this, although in practice you can probably get better utilization by using gridcells/factor processors and dynamically scheduling grid points to processors on an as-ready basis. The computation is basically all floating-point math.
Mandelbrot/Julia and Lyapunov come to mind as potential candidates, but any should do.
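A minimal escape-time sketch of the Mandelbrot variant; every grid cell is computed independently, so the two loops can be split across cores however you like (the grid size and iteration cap below are arbitrary, crank the cap up for the demo):

```cpp
#include <complex>
#include <vector>

// Escape-time computation of a Mandelbrot map over a grid: every cell is
// independent, needs only its own coordinates plus the iteration cap, and is
// pure floating-point work, so it parallelises with no communication at all.
int main() {
    const int width = 1024, height = 1024;
    const int maxIter = 10000;                      // make this "preposterously high" for the demo
    std::vector<int> iterations(width * height);

    for (int py = 0; py < height; ++py)             // any (px, py) can go to any core
        for (int px = 0; px < width; ++px) {
            std::complex<double> c(-2.0 + 3.0 * px / width, -1.5 + 3.0 * py / height);
            std::complex<double> z = 0.0;
            int n = 0;
            while (n < maxIter && std::norm(z) <= 4.0) {   // |z| <= 2, squared to avoid sqrt
                z = z * z + c;
                ++n;
            }
            iterations[py * width + px] = n;        // the fractal-map value for this cell
        }
}
```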