Difficult algorithm: Optimal Solution to 1

This is one of those difficult algorithms just because there are so many options. Imagine a number N and the set of primes under 10, i.e. {2, 3, 5, 7}. The goal is to keep dividing N until we reach 1. If at any step N is not divisible by any of the given primes, then you can perform one of the following operations:
i) N = N - 1
ii) N = N + 1
Either operation makes N even, so we can continue dividing.
The goal should be achieved using the minimum number of operations.
Please note that this may sound trivial, i.e. you could implement a step in your algorithm that says "if N is divisible by any prime then divide it". But this does not always produce the optimal solution.
E.g. take N = 134. 134 is divisible by 2; if you divide by 2, you get 67. 67 is not divisible by any of the primes, so you perform an operation to get 66 or 68, and either way one more operation is needed further down before reaching 1. So this greedy approach costs 2 operations in total.
Alternatively, if N = 134 and you do the operation N = N + 1, i.e. N = 135, then the total number of operations needed to reach 1 is just 1 (135 = 3^3 * 5 divides all the way down to 1). So this is the optimal solution.

Unless there is some mathematical solution for this problem (if you are looking for a mathematical solution, math.SE is a better place for this question), you could reduce the problem to a shortest-path problem.
Represent the problem as a graph G=(V,E) where V = N (all natural numbers) and E = {(u,v) | you can get from u to v in a single step} (1).
Now, you need to run a classic search algorithm from your source (the input number) to your target (the number 1). Some of the choices that yield an optimal solution are:
BFS - since the reduced graph is unweighted, BFS is guaranteed to be both complete (it finds a solution if one exists) and optimal (it finds the shortest solution).
A* - which is also complete and optimal (2), and, with a good heuristic function, should be faster than an uninformed BFS.
Optimization note:
The graph can be constructed "on the fly"; there is no need to create it as pre-processing. To do so, you will need a function next: V -> 2^V (from a node to a set of nodes) such that next(v) = {u | (v,u) is in E}.
P.S. complexity comment: the BFS solution is pseudo-polynomial (linear in the input number in the worst case), since the "highest" vertex you will ever develop is n+1, so the solution is basically O(n) in the worst case - though I believe a deeper analysis can establish a tighter bound.
(1) If you want only the +1/-1 steps to be counted as operations, you can create the edges based on the number reached after finishing the divisions.
(2) If an admissible heuristic function is used.
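As a minimal sketch of the search with on-the-fly graph construction (Python, illustrative names): it uses a 0-1 BFS, a deque-based variant of BFS in which the prime divisions are cost-0 edges and +1/-1 are cost-1 edges, so that only the +1/-1 steps are counted as operations, per footnote (1); the n+1 upper bound is the one assumed in the P.S. above.

from collections import deque

PRIMES = (2, 3, 5, 7)

def min_ops(n):
    # 0-1 BFS: dividing by a prime is a cost-0 edge, +1/-1 is a cost-1 edge,
    # so dist[v] counts only the +1/-1 operations needed to reach v from n.
    dist = {n: 0}
    dq = deque([n])
    while dq:
        v = dq.popleft()
        if v == 1:
            return dist[v]
        d = dist[v]
        for p in PRIMES:                      # cost-0 edges: divide by a prime
            if v % p == 0:
                u = v // p
                if u not in dist or d < dist[u]:
                    dist[u] = d
                    dq.appendleft(u)
        for u in (v - 1, v + 1):              # cost-1 edges: +1 / -1
            # assumption from the P.S.: we never need to go above n + 1
            if 1 <= u <= n + 1 and (u not in dist or d + 1 < dist[u]):
                dist[u] = d + 1
                dq.append(u)

print(min_ops(134))   # 1, via 134 -> 135 -> 27 -> 9 -> 3 -> 1 (a single +1 operation)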

Time complexity of union

We have a directed graph G=(V,E) in which each edge (u,v) in E has a value r(u,v) in R with 0 <= r(u,v) <= 1, representing the reliability of a communication channel from vertex u to vertex v.
Consider r(u,v) to be the probability that the channel from u to v will not fail the transfer, and assume these probabilities are independent.
I want to write an efficient algorithm that finds the most reliable path between two given nodes.
I have tried the following:
DIJKSTRA(G,r,s,t)
1. INITIALIZE-SINGLE-SOURCE(G,s)
2. S=Ø
3. Q=G.V
4. while Q != Ø
5. u<-EXTRACT-MAX(Q)
6. if (u=t) return d[t]
7. S<-S U {u}
8. for each vertex v in G.Adj[u]
9. RELAX(u,v,r)
INITIALIZE-SINGLE-SOURCE(G,s)
1. for each vertex v in G.V
2. d[v]=-inf
3. pi[v]=NIL
4. d[s]=1
RELAX(u,v,r)
1. if d[v]<d[u]*r(u,v)
2. d[v]<-d[u]*r(u,v)
3. pi[v]<-u
and I wanted to find the complexity of the algorithm.
The time complexity of INITIALIZE-SINGLE-SOURCE(G,s) is O(|V|).
The time complexity of line 4 is O(1).
The time complexity of line 5 is O(|V|).
The time complexity of line 7 is O(log(|V|)).
The time complexity of line 8 is O(1).
What is the time complexity of the command S <- S U {u}?
Line 9 (the call to RELAX) is executed O(Σ_{v in V} deg(v)) = O(E) times in total, and the time complexity of RELAX is O(1).
So the time complexity of the algorithm is equal to the time complexity of lines 3-9 plus O(E).
What is the time complexity of the union?
No, it is not the complexity of the union; the union can be done quite efficiently if you are using a hash table, for example. Moreover, since you use S only for the union, it seems to be redundant.
The complexity of the algorithm also depends heavily on your EXTRACT-MAX(Q) function (usually it is logarithmic in the size of Q, so O(log V) per iteration), and on RELAX(u,v,r) (which is also usually logarithmic in the size of Q, since you need to update entries in your priority queue).
As expected, this brings us to the same complexity as the original Dijkstra's algorithm, which is O(E + V log V) or O(E log V), depending on the implementation of your priority queue.
I think the solution should be based on the classic Dijkstra algorithm (whose complexity is well known), as you suggested; however, in your solution you define the "shortest path" problem incorrectly.
Note that the probability of A and B is p(A) * p(B) (if they are independent). Hence, you should find the path whose product of edge reliabilities is maximized, whereas Dijkstra's algorithm finds the path whose sum of edge weights is minimized.
To overcome this issue you should define the weight of your edges as:
r*(u, v) = -log( r(u, v) )
By introducing the logarithm, you convert the multiplicative problem into an additive one.
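A minimal sketch of this approach in Python (the adjacency-dict representation, function name, and example graph are illustrative assumptions): standard Dijkstra run on the weights -log(r), with heapq standing in for the priority queue.

import heapq
import math

def most_reliable_path(graph, s, t):
    # graph: dict mapping u -> list of (v, r) with 0 < r <= 1.
    # dist[v] holds -log of the best reliability found from s to v.
    dist = {s: 0.0}
    prev = {s: None}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, math.inf):
            continue                      # stale heap entry
        if u == t:
            break
        for v, r in graph.get(u, []):
            nd = d - math.log(r)          # multiplying reliabilities == adding -log(r)
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if t not in dist:
        return 0.0, []
    path, v = [], t
    while v is not None:
        path.append(v)
        v = prev[v]
    return math.exp(-dist[t]), path[::-1]

g = {'s': [('a', 0.9), ('b', 0.5)], 'a': [('t', 0.8)], 'b': [('t', 1.0)]}
print(most_reliable_path(g, 's', 't'))    # reliability ~0.72 via ['s', 'a', 't']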

Is it possible to compute the minimum of a set of numbers modulo a given number in amortized sublinear time?

Is there a data structure representing a large set S of (64-bit) integers, that starts out empty and supports the following two operations:
insert(s) inserts the number s into S;
minmod(m) returns the number s in S such that s mod m is minimal.
An example:
insert(11)
insert(15)
minmod(7) -> the answer is 15 (which mod 7 = 1)
insert(14)
minmod(7) -> the answer is 14 (which mod 7 = 0)
minmod(10) -> the answer is 11 (which mod 10 = 1)
I am interested in minimizing the maximal total time spent on a sequence of n such operations. It is obviously possible to just maintain a list of elements for S and iterate through them for every minmod operation; then insert is O(1) and minmod is O(|S|), which would take O(n^2) time for n operations (e.g., n/2 insert operations followed by n/2 minmod operations would take roughly n^2/4 operations).
So: is it possible to do better than O(n^2) for a sequence of n operations? Maybe O(n sqrt(n)) or O(n log(n))? If this is possible, then I would also be interested to know if there are data structures that additionally admit removing single elements from S, or removing all numbers within an interval.
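For reference, the naive baseline described above is only a few lines (a sketch; the class and method names are just for illustration):

class NaiveMinMod:
    # O(1) insert, O(|S|) per minmod query, hence O(n^2) for n operations.
    def __init__(self):
        self.items = []

    def insert(self, s):
        self.items.append(s)

    def minmod(self, m):
        return min(self.items, key=lambda s: s % m)

S = NaiveMinMod()
S.insert(11); S.insert(15)
print(S.minmod(7))    # 15
S.insert(14)
print(S.minmod(7))    # 14
print(S.minmod(10))   # 11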
Another idea, based on a balanced binary search tree, as in Keith's answer.
Suppose all elements inserted so far are stored in a balanced BST, and we need to compute minmod(m). Consider our set S as a union of subsets of numbers lying in the intervals [0, m-1], [m, 2m-1], [2m, 3m-1], etc. The answer will obviously be among the minimal numbers we have in each of those intervals, so we can look up the tree once per interval to find that interval's minimum. This is easy to do: for example, to find the minimal number in [a,b], we move left if the current value is greater than a and right otherwise, keeping track of the minimal value in [a,b] we have met so far.
Now, if we suppose that m is uniformly distributed in [1, 2^64], let's calculate the expected number of queries we'll need.
For all m in [2^63, 2^64-1] we'll need 2 queries. The probability of this is 1/2.
For all m in [2^62, 2^63-1] we'll need 4 queries. The probability of this is 1/4.
...
The expected number of queries will be sum[ (1/2^k) * 2^k ] for k in [1,64], which is 64 queries.
So, to sum up, the average minmod(m) query complexity will be O(64 * log n). In general, if m has an unknown upper bound, this will be O(log m * log n). The BST update is, as is well known, O(log n), so the overall complexity for n operations will be O(n * log m * log n).
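A rough Python sketch of this interval idea, under the stated assumptions (names are illustrative). A sorted list plus bisect stands in for the balanced BST, so inserts here are O(|S|) rather than O(log n); minmod performs one successor lookup per interval boundary it jumps to, as described above.

import bisect

class IntervalMinMod:
    def __init__(self):
        self.xs = []                               # sorted list standing in for the BST

    def insert(self, s):
        bisect.insort(self.xs, s)                  # O(|S|) here; O(log n) with a real BST

    def minmod(self, m):
        if not self.xs:
            return None
        best = None
        start = 0                                  # left end of the current interval [k*m, (k+1)*m)
        while start <= self.xs[-1]:
            i = bisect.bisect_left(self.xs, start) # successor query, O(log n)
            if i == len(self.xs):
                break
            x = self.xs[i]                         # minimal element >= start
            if best is None or x % m < best % m:
                best = x
            if best % m == 0:
                break                              # a residue of 0 cannot be beaten
            start = (x // m + 1) * m               # jump to the next interval boundary after x
        return best

S = IntervalMinMod()
S.insert(11); S.insert(15)
print(S.minmod(7))    # 15  (15 mod 7 = 1, 11 mod 7 = 4)
S.insert(14)
print(S.minmod(7))    # 14  (14 mod 7 = 0)
print(S.minmod(10))   # 11  (11 mod 10 = 1)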
Partial answer too big for a comment.
Suppose you implement S as a balanced binary search tree.
When you seek S.minmod(m), naively you walk the whole tree, so the cost is O(n) per query and O(n^2) over n operations.
However, at a given time during the walk, you have the best (lowest) result so far. You can use this to avoid checking whole sub-trees when:
bestSoFar < leftChild mod m
and
rightChild - leftChild < m - leftChild mod m
This will only help much if the typical spacing between the numbers in the set is smaller than typical values of m.
Update the next morning...
Grigor has articulated my idea better and more fully, and has shown how it works well for "large" m. He also shows that a "random" m is typically "large", so the approach works well on average.
Grigor's algorithm is so efficient for large m that one needs to think about the risk for much smaller m.
So it is clear that you need to think about the distribution of m and optimise for different cases if need be.
For example, it might be worth simply keeping track of the minimal modulus for very small m.
But suppose m ~ 2^32? Then the search algorithm (certainly as given, but probably any variant of it) needs to check about 2^32 intervals, which may amount to searching the whole set anyway.

How to prove correctness of this algorithm?

I am solving a problem from codeforces.
Our job is to find the minimum cost to make a given integer sequence non-decreasing. At each step we can increase or decrease any number in the sequence by 1, and each such step costs 1.
For example, given the sequence 3, 2, -1, 2, 11, we can make it non-decreasing at cost 4 (by decreasing 3 to 2 and increasing -1 to 2, giving the non-decreasing sequence 2, 2, 2, 2, 11).
According to the editorial of this problem, we can solve it using dynamic programming over 2 sequences (one is the given sequence, and the other is the sorted version of it).
The outline of solution:
Let a be the original sequence and b be the sorted version of a, and let f(i,j) be the minimal number of moves required to obtain a sequence in which the first i elements are non-decreasing and the i-th element is at most b_j. Then the recurrence is as follows. (This is from the editorial of the problem.)
f(1,1) = |a_1 - b_1|
f(1,j) = min{ |a_1 - b_j|, f(1,j-1) }, for j > 1
f(i,1) = |a_i - b_1| + f(i-1,1), for i > 1
f(i,j) = min{ f(i,j-1), f(i-1,j) + |a_i - b_j| }, for i > 1, j > 1
I understand this recurrence. However, I can't figure out why we should compare the original sequence with its own sorted version, and I am not sure whether we can get the correct minimum cost with a sequence other than the sorted version of the given one.
How can we prove the correctness of this solution? And how can we guarantee that the answer obtained with the sorted sequence is the minimum cost?
The point of the exercise is that this recurrence can be proven by induction. Once it is proven, we have shown that f(n,n) is the minimum cost of ending up with a solution whose n-th value is at most b_n.
To finish proving the result there is one more step, which is to prove that any solution whose n-th value exceeds b_n can be improved without increasing that maximum value. But that is trivial: just omit one of the +1 moves applied to the first value that exceeds b_n, and you get a strictly better solution without a larger maximum. Therefore no solution ending up with a maximum greater than b_n can be better than the best one with a maximum of at most b_n.
Therefore we have the optimal solution.
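For completeness, a short Python sketch of the recurrence above (a transcription with illustrative names); it reproduces the cost of 4 for the example 3, 2, -1, 2, 11.

def min_cost_nondecreasing(a):
    # f[i][j] = minimal cost to make the first i+1 elements non-decreasing
    # with the (i+1)-th element at most b[j], where b = sorted(a).
    b = sorted(a)
    n = len(a)
    f = [[0] * n for _ in range(n)]
    for j in range(n):
        cost = abs(a[0] - b[j])
        f[0][j] = cost if j == 0 else min(f[0][j - 1], cost)
    for i in range(1, n):
        for j in range(n):
            if j == 0:
                f[i][0] = f[i - 1][0] + abs(a[i] - b[0])
            else:
                f[i][j] = min(f[i][j - 1], f[i - 1][j] + abs(a[i] - b[j]))
    return f[n - 1][n - 1]

print(min_cost_nondecreasing([3, 2, -1, 2, 11]))   # 4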

How to enumerate all k-combinations of a set by sum?

Suppose I have a finite set of numeric values of size n.
Question: Is there an efficient algorithm for enumerating the k-combinations of that set so that combination I precedes combination J iff the sum of the elements in I is less than or equal to the sum of the elements in J?
Clearly it's possible to simply enumerate the combinations and sort them according to their sums. If the set is large, however, brute enumeration of all combinations, let alone sorting, will be infeasible. If I'm only interested in obtaining the first m << choose(n,k) combinations ranked by sum, is it possible to obtain them before the heat death of the universe?
There is no polynomial algorithm for enumerating the set this way (unless P=NP).
If there were such an algorithm (call it A), then we could solve the Subset Sum problem in polynomial time:
1. Run A.
2. Do a binary search over its output to find the subset that sums closest to the desired number.
Note that step 1 runs in polynomial time (by assumption) and step 2 runs in O(log(2^n)) = O(n).
Conclusion: since the Subset Sum problem is NP-complete, solving this problem efficiently would prove P=NP, so there is no known polynomial solution to the problem.
Edit: even though the problem is NP-hard, getting the "smallest" m subsets can be done in O(n + 2^m) by selecting the smallest m elements, generating all the subsets of these m elements, and choosing the minimal m of those. So for fairly small values of m it might be feasible to calculate.
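A small Python sketch of the trick in the edit (illustrative names; it assumes the values are non-negative, and heapq.nsmallest stands in for "selecting the smallest m elements"):

import heapq
from itertools import combinations

def smallest_m_subsets(values, m):
    # The m smallest-sum subsets only involve the m smallest elements,
    # so brute-force over the 2^m subsets of those.
    small = heapq.nsmallest(m, values)
    subsets = []
    for k in range(len(small) + 1):
        subsets.extend(combinations(small, k))
    subsets.sort(key=sum)
    return subsets[:m]

print(smallest_m_subsets([5, 1, 9, 3, 7], 3))   # [(), (1,), (3,)]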

Algorithms with superexponential runtime?

I was talking with a student the other day about the common complexity classes of algorithms, like O(n), O(n^k), O(n lg n), O(2^n), O(n!), etc. I was trying to come up with an example of a problem whose best known solution has a super-exponential runtime, such as O(2^(2^n)), but which is still decidable (e.g. not the halting problem!). The only example I know of is satisfiability of Presburger arithmetic, which I don't think any intro CS students would really understand or be able to relate to.
My question is whether there is a well-known problem whose best known solution has superexponential runtime: at least ω(n!) or ω(n^n). I would really hope that there is some "reasonable" problem meeting this description, but I'm not aware of any.
Maximum Parsimony is the problem of finding an evolutionary tree connecting n DNA sequences (representing species) that requires the fewest single-nucleotide mutations. The n given sequences are constrained to appear at the leaves; the tree topology and the sequences at internal nodes are what we get to choose.
In more CS terms: We are given a bunch of length-k strings that must appear at the leaves of some tree, and we have to choose a tree, plus a length-k string for each internal node in the tree, so as to minimise the sum of Hamming distances across all edges.
When a fixed tree is also given, the optimal assignment of sequences to internal nodes can be determined very efficiently using the Fitch algorithm. But in the usual case, a tree is not given (i.e. we are asked to find the optimal tree), and this makes the problem NP-hard, meaning that every tree must in principle be tried. Even though an evolutionary tree has a root (representing the hypothetical ancestor), we only need to consider distinct unrooted trees, since the minimum number of mutations required is not affected by the position of the root. For n species there are 3 * 5 * 7 * ... * (2n-5) leaf-labelled unrooted binary trees. (There is just one such tree with 3 species, which has a single internal vertex and 3 edges; the 4th species can be inserted at any of the 3 edges to produce a distinct 5-edge tree; the 5th species can be inserted at any of these 5 edges, and so on -- this process generates all trees exactly once.) This is sometimes written (2n-5)!!, with !! meaning "double factorial".
In practice, branch and bound is used, and on most real datasets this manages to avoid evaluating most trees. But highly "non-treelike" random data requires all, or almost all (2n-5)!! trees to be examined -- since in this case many trees have nearly equal minimum mutation counts.
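To get a feel for that growth, a small Python helper (illustrative name) computes (2n-5)!!:

def num_unrooted_binary_trees(n):
    # (2n-5)!! = 3 * 5 * ... * (2n-5), valid for n >= 3 species
    count = 1
    for k in range(3, 2 * n - 4, 2):
        count *= k
    return count

print([num_unrooted_binary_trees(n) for n in range(3, 9)])
# [1, 3, 15, 105, 945, 10395]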
Printing all permutations of a string of length n takes Θ(n!) time; brute-force search for a Hamiltonian cycle is O(n!); minimum graph coloring, ...
Edit: there are even faster-growing examples, such as the Ackermann function, which grows faster than any primitive recursive function:
A(x,y) = y+1 (if x = 0)
A(x,y) = A(x-1,1) (if y=0)
A(x,y) = A(x-1, A(x,y-1)) otherwise.
from wiki:
A(4,3) = 2^(2^65536) - 3
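A direct Python transcription of the recurrence above (memoised; only trivially small arguments finish in reasonable time):

from functools import lru_cache

@lru_cache(maxsize=None)
def ackermann(x, y):
    if x == 0:
        return y + 1
    if y == 0:
        return ackermann(x - 1, 1)
    return ackermann(x - 1, ackermann(x, y - 1))

print(ackermann(2, 3), ackermann(3, 3))   # 9 61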
Do algorithms to compute real numbers to a certain precision count? The formula for the area of the Mandelbrot set converges extremely slowly: about 10^118 terms are needed for two digits, and 10^1181 terms for three.
This is not a practical everyday problem, but it's a way to construct relatively straightforward problems of increasing complexity.
The Kolmogorov complexity K(x) is the size of the smallest program that outputs the string x on a pre-determined universal computer U. It's easy to show that most strings cannot be compressed at all (there are more strings of length n than programs shorter than n).
If we give U a maximum running time (say, some polynomial function P), we get a time-bounded Kolmogorov complexity. The same counting argument holds: there are some strings that are incompressible under this time-bounded Kolmogorov complexity. Let's call the first such string (of some length n) x_P.
Since the time-bounded Kolmogorov complexity is computable, we can test all strings and find x_P.
Finding x_P can't be done in polynomial time, or we could use this algorithm to compress it, so finding it must be a super-polynomial problem. We do know we can find it in exp(P) time, though. (Jumping over some technical details here.)
So now we have a time bound E = exp(P). We can repeat the procedure to find x_E, and so on.
This approach gives us a decidable super-F problem for every time-constructible function F: find the first string of length n (some large constant) that is incompressible under time bound F.

Resources