Is there a sparse edit distance algorithm? - algorithm

Say you have two strings of length 100,000 containing zeros and ones. You can compute their edit distance in roughly 10^10 operations.
If each string only has 100 ones and the rest are zeros then I can represent each string using 100 integers saying where the ones are.
Is there a much faster algorithm to compute the edit distance using
this sparse representation? Even better would be an algorithm that also uses 100^2 space instead of 10^10 space.
To give something to test on, consider these two strings with 10 ones each. The integers say where the ones are in each string.
[9959, 10271, 12571, 21699, 29220, 39972, 70600, 72783, 81449, 83262]
[9958, 10270, 12570, 29221, 34480, 37952, 39973, 83263, 88129, 94336]
In algorithmic terms, if we have two sparse binary strings of length n each represented by k integers each, does there exist an O(k^2) time edit distance algorithm?

Of course! There are so few possible operations with so many 0s. I mean, the answer is at most 200.
Take a look at
10001010000000001
vs ||||||
10111010100000010
Look at all the zeroes with pipes. Does it matter which one out of those you end up deleting? Nope. That's the key.
Solution 1
Let's consider the normal n*m solution:
dp(int i, int j) {
    // memo & base case
    if( str1[i-1] == str2[j-1] ) {
        return dp(i-1, j-1);
    }
    return 1 + min( dp(i-1, j), dp(i-1, j-1), dp(i, j-1) );
}
If almost every single character was a 0, what would hog the most amount of time?
if( str1[i-1] == str2[j-1] ) { // They will be equal so many times, (99900)^2 times!
    return dp(i-1, j-1);
}
I could imagine that trickling down for tens of thousands of recursions. All you actually need logic for are the ~200 critical points. You can ignore the rest. A simple modification would be
if( str1[i-1] == str2[j-1] ) {
    if( str1[i-1] == 1 )
        return dp(i-1, j-1); // Already hit a critical point
    // rightmost location of a 1 in str1 or str2 that is <= i-1
    best = binarySearch(CriticalPoints, i-1);
    return dp(best + 1, best + 1); // Use that critical point
    // Important! best+1 because we still want to compute the answer at best.
    // Without it, we would skip over a case where str1[best] is 1 and str2[best] is 0.
}
CriticalPoints would be the array containing the index of every 1 in either str1 or str2. Make sure it's sorted before you binary search, and keep those gotchas in mind. My logic was: I need to make sure to calculate the answer at index best itself, so let's pass best + 1 as the parameter. But if best == i - 1, we would get stuck in a loop, so I handle that with the quick str1[i-1] == 1 check. Done, phew.
You can do a quick complexity check by noting that in the worst case you will hit all 200*100000 combinations of i and j that form critical points, and each of those critical points calls min(a, b, c), which makes only three recursive calls. If any of those calls lands on a critical point, it is part of the 200*100000 we already counted and we can ignore it. If not, then in O(log(200)) (the binary search) it collapses into a single call on another critical point, which again is part of the 200*100000 we already counted. Thus each critical point costs at worst 3*log(200) excluding calls to other critical points, and the very first call falls into a critical point in log(200) time. That gives an upper bound of O(200*100000*3*log(200) + log(200)).
Also, make sure your memo table is a hashmap, not an array: 10^10 entries will not fit on your computer.
Solution 2
You know the answer is at most 200, so just prevent yourself from computing more than that many operations deep.
dp(int i, int j) { // O(100000 * 205), sounds good to me.
    if( abs(i - j) > 205 )
        return 205; // The answer in this case is at least 205, so it's irrelevant to
                    // the final result: when min is called, it won't be the smallest.
    // memo & base case
    if( str1[i-1] == str2[j-1] ) {
        return dp(i-1, j-1);
    }
    return 1 + min( dp(i-1, j), dp(i-1, j-1), dp(i, j-1) );
}
This one is very simple, but I leave it as solution two because it seems to come out of thin air, as opposed to analyzing the problem, figuring out where the slow part is, and optimizing that. Keep it in your toolbox though, since this is the solution you should actually be coding.
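Recursing 100,000 levels deep will overflow most call stacks, so here is the same band idea written iteratively in Python -- a sketch only, with my own names; the band of 21 covers the at-most-20 answer for the question's 10-one test data.
def banded_edit_distance(s, t, band):
    # Levenshtein distance, exact whenever the true distance is <= band;
    # cells with |i - j| > band are treated as "too expensive" (Solution 2's cutoff).
    n, m = len(s), len(t)
    big = band + 1                     # stands in for "more than band"
    if abs(n - m) > band:
        return big
    prev = {j: j for j in range(min(m, band) + 1)}            # DP row i = 0
    for i in range(1, n + 1):
        cur = {}
        for j in range(max(0, i - band), min(m, i + band) + 1):
            if j == 0:
                cur[j] = i
                continue
            best = prev.get(j - 1, big) + (s[i - 1] != t[j - 1])   # match / substitute
            best = min(best, prev.get(j, big) + 1)                 # delete from s
            best = min(best, cur.get(j - 1, big) + 1)              # insert into s
            cur[j] = best
        prev = cur
    return prev.get(m, big)

def from_ones(ones, length=100_000):
    # Build the dense 0/1 string from the sparse list of 1-positions.
    chars = ["0"] * length
    for p in ones:
        chars[p] = "1"
    return "".join(chars)

ones1 = [9959, 10271, 12571, 21699, 29220, 39972, 70600, 72783, 81449, 83262]
ones2 = [9958, 10270, 12570, 29221, 34480, 37952, 39973, 83263, 88129, 94336]
print(banded_edit_distance(from_ones(ones1), from_ones(ones2), band=21))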
Solution 3
Just like Solution 2, we could have done it like this:
dp(int i, int j, int threshold = 205) {
    if( threshold == 0 )
        return 205;
    // memo & base case
    if( str1[i-1] == str2[j-1] ) {
        return dp(i-1, j-1);
    }
    return 1 + min( dp(i-1, j, threshold - 1), dp(i-1, j-1, threshold - 1), dp(i, j-1, threshold - 1) );
}
You might be worried about dp(i-1, j-1) trickling down, but the threshold keeps i and j close together, so this computes a subset of what Solution 2 computes. Passing dp(i-1, j-1, threshold) without the decrement would make it exactly identical to Solution 2, since then the threshold would only shrink when i and j drift apart; decrementing on the diagonal as well, as above, makes this version slightly faster.
Space
These solutions will give you the answer very quickly, but if you want a space-optimized solution as well, it is easy to replace str1[i] with (i in Str1CriticalPoints) ? 1 : 0 using a hash set. The final solution is still very fast (though roughly 10x slower) and also avoids keeping the long strings in memory, to the point where it could run on an Arduino. I don't think this is necessary, though.
Note that the original solution does not need 10^10 space. You mention "even better, less than 10^10 space", implying that 10^10 space would be acceptable. Unfortunately, even with enough RAM, iterating through that space takes 10^10 time, which is definitely not acceptable. None of my solutions use 10^10 space: only 2 * 10^5 to hold the strings, which can be avoided as discussed above. For reference, 10^10 bytes is 10 GB.
EDIT: As maniek notes, you only need to check abs(i - j) > 105, as the remaining 100 insertions needed to equate i and j will pull the number of operations above 200.

Related

Proving that there are no overlapping sub-problems?

I just got the following interview question:
Given a list of float numbers, insert “+”, “-”, “*” or “/” between each consecutive pair of numbers to find the maximum value you can get. For simplicity, assume that all operators are of equal precedence order and evaluation happens from left to right.
Example:
(1, 12, 3) -> 1 + 12 * 3 = 39
If we build a recursive solution, we get an O(4^N) algorithm. I tried to find overlapping subproblems (to increase the efficiency of this algorithm) and wasn't able to find any. The interviewer then told me that there weren't any overlapping subproblems.
How can we detect when there are overlapping subproblems and when there aren't? I spent a lot of time trying to "force" subsolutions to appear, and eventually the interviewer told me that there weren't any.
My current solution looks as follows:
def maximumNumber(array, current_value=None):
    if current_value is None:
        current_value = array[0]
        array = array[1:]
    if len(array) == 0:
        return current_value
    return max(
        maximumNumber(array[1:], current_value * array[0]),
        maximumNumber(array[1:], current_value - array[0]),
        maximumNumber(array[1:], current_value / array[0]),
        maximumNumber(array[1:], current_value + array[0])
    )
Looking for "overlapping subproblems" sounds like you're trying to do bottom up dynamic programming. Don't bother with that in an interview. Write the obvious recursive solution. Then memoize. That's the top down approach. It is a lot easier to get working.
You may get challenged on that. Here was my response the last time that I was asked about that.
There are two approaches to dynamic programming, top down and bottom up. The bottom up approach usually uses less memory but is harder to write. Therefore I do the top down recursive/memoize and only go for the bottom up approach if I need the last ounce of performance.
It is a perfectly true answer, and I got hired.
Now you may notice that tutorials about dynamic programming spend more time on bottom up. They often even skip the top down approach. They do that because bottom up is harder. You have to think differently. It does provide more efficient algorithms because you can throw away parts of that data structure that you know you won't use again.
Coming up with a working solution in an interview is hard enough already. Don't make it harder on yourself than you need to.
EDIT: Here is the DP solution that the interviewer thought didn't exist.
def find_best(floats):
    current_answers = {floats[0]: ()}
    floats = floats[1:]
    for f in floats:
        next_answers = {}
        for v, path in current_answers.items():   # .iteritems() in the original Python 2
            next_answers[v + f] = (path, '+')
            next_answers[v * f] = (path, '*')
            next_answers[v - f] = (path, '-')
            if 0 != f:
                next_answers[v / f] = (path, '/')
        current_answers = next_answers
    best_val = max(current_answers.keys())
    return (best_val, current_answers[best_val])
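As a quick sanity check (my own test call, not part of the original answer), the question's example evaluates as expected:
print(find_best([1.0, 12.0, 3.0]))
# -> (39.0, (((), '+'), '*'))   i.e. 1 + 12 = 13, then 13 * 3 = 39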
Generally, the overlapping-subproblems approach is one where the problem is broken down into smaller subproblems whose solutions, when combined, solve the big problem. When these subproblems exhibit optimal substructure, DP is a good way to solve it.
The decision about what you do with a new number that you encounter has little to do with the numbers you have already processed, other than accounting for signs, of course.
So I would say this has overlapping subproblems but is not a dynamic programming problem. You could use divide and conquer or even more straightforward recursive methods.
Initially let's forget about negative floats.
process each new float according to the following rules
If the new float is less than 1, insert a / before it
If the new float is more than 1 insert a * before it
If it is 1 then insert a +.
If you see a zero just don't divide or multiply
This would solve it for all positive floats.
Now let's handle the case of negative numbers thrown into the mix.
Scan the input once to figure out how many negative numbers you have.
Isolate all the negative numbers in a list and convert those whose absolute value is less than 1 to their multiplicative inverse. Then sort them by magnitude. If you have an even number of elements, we are all good. If you have an odd number of elements, store the head of this list in a special var, say k, associate a processed flag with it, and set the flag to False.
Proceed as before with some updated rules
If you see a negative number less than 0 but more than -1, insert a / before it
If you see a negative number less than -1, insert a * before it
If you see the special var and the processed flag is False, insert a - before it. Set processed to True.
There is one more optimization you can perform, which is removing pairs of negative ones as candidates for blanket subtraction from our initial negative-numbers list, but this is just an edge case and I'm pretty sure your interviewer won't care
Now the sum is only a function of the number you are adding and not the sum you are adding to :)
Computing max/min results for each operation from previous step. Not sure about overall correctness.
Time complexity O(n), space complexity O(n)
const max_value = (nums) => {
  const ops = [(a, b) => a+b, (a, b) => a-b, (a, b) => a*b, (a, b) => a/b]
  const dp = Array.from({length: nums.length}, _ => [])
  dp[0] = Array.from({length: ops.length}, _ => [nums[0], nums[0]])
  for (let i = 1; i < nums.length; i++) {
    for (let j = 0; j < ops.length; j++) {
      let mx = -Infinity
      let mn = Infinity
      for (let k = 0; k < ops.length; k++) {
        if (nums[i] === 0 && k === 3) {
          // If current number is zero, removing division
          ops.splice(3, 1)
          dp.splice(3, 1)
          continue
        }
        const opMax = ops[j](dp[i-1][k][0], nums[i])
        const opMin = ops[j](dp[i-1][k][1], nums[i])
        mx = Math.max(opMax, opMin, mx)
        mn = Math.min(opMax, opMin, mn)
      }
      dp[i].push([mx, mn])
    }
  }
  return Math.max(...dp[nums.length-1].map(v => Math.max(...v)))
}
// Tests
console.log(max_value([1, 12, 3]))
console.log(max_value([1, 0, 3]))
console.log(max_value([17,-34,2,-1,3,-4,5,6,7,1,2,3,-5,-7]))
console.log(max_value([59, 60, -0.000001]))
console.log(max_value([0, 1, -0.0001, -1.00000001]))

Memory-constrained coin changing for numbers up to one billion

I faced this problem during a training contest. We are given N different values (N ≤ 100); call this array A[N]. We are guaranteed that 1 is in the array and that A[i] ≤ 10^9. We are also given a number S with S ≤ 10^9.
Now we have to solve the classic coin problem with these values: find the minimum number of elements that sum to exactly S. Every element from A can be used an infinite number of times.
Time limit: 1 sec
Memory limit: 256 MB
Example:
S = 1000, N = 10
A[] = {1,12,123,4,5,678,7,8,9,10}. The result is 10.
1000 = 678 + 123 + 123 + 12 + 12 + 12 + 12 + 12 + 12 + 4
What I have tried
I tried to solve this with the classic dynamic-programming coin-problem technique, but it uses too much memory and gets Memory Limit Exceeded.
I can't figure out what we should keep about those values. Thanks in advance.
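(For reference, the "classic" technique referred to here is the unbounded coin-change table below -- a sketch in Python with my own names. The dp array needs S + 1 entries, which for S up to 10^9 is several gigabytes, hence the memory-limit problem.)
def min_coins_classic(S, A):
    # dp[s] = minimum number of coins summing to exactly s.
    # A contains 1, so every s is reachable.  The array alone has S + 1 entries,
    # which is hopeless for S = 10^9 under a 256 MB limit.
    INF = float("inf")
    dp = [0] + [INF] * S
    for s in range(1, S + 1):
        for a in A:
            if a <= s and dp[s - a] + 1 < dp[s]:
                dp[s] = dp[s - a] + 1
    return dp[S]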
Here are a couple of test cases that cannot be solved with the classic DP coin approach.
S = 1000000000 N = 100
1 373241370 973754081 826685384 491500595 765099032 823328348 462385937
251930295 819055757 641895809 106173894 898709067 513260292 548326059
741996520 959257789 328409680 411542100 329874568 352458265 609729300
389721366 313699758 383922849 104342783 224127933 99215674 37629322
230018005 33875545 767937253 763298440 781853694 420819727 794366283
178777428 881069368 595934934 321543015 27436140 280556657 851680043
318369090 364177373 431592761 487380596 428235724 134037293 372264778
267891476 218390453 550035096 220099490 71718497 860530411 175542466
548997466 884701071 774620807 118472853 432325205 795739616 266609698
242622150 433332316 150791955 691702017 803277687 323953978 521256141
174108096 412366100 813501388 642963957 415051728 740653706 68239387
982329783 619220557 861659596 303476058 85512863 72420422 645130771
228736228 367259743 400311288 105258339 628254036 495010223 40223395
110232856 856929227 25543992 957121494 359385967 533951841 449476607
134830774
OUTPUT FOR THIS TEST CASE: 5
S = 999865497 N = 7
1 267062069 637323855 219276511 404376890 528753603 199747292
OUTPUT FOR THIS TEST CASE: 1129042
S = 1000000000 N = 40
1 12 123 4 5 678 7 8 9 10 400 25 23 1000 67 98 33 46 79 896 11 112 1223 412
532 6781 17 18 19 170 1400 925 723 11000 607 983 313 486 739 896
OUTPUT FOR THIS TEST CASE: 90910
(NOTE: Updated and edited for clarity. Complexity Analysis added at the end.)
OK, here is my solution, including my fixes to the performance issues found by #PeterdeRivaz. I have tested this against all of the test cases provided in the question and the comments and it finishes all in under a second (well, 1.5s in one case), using primarily only the memory for the partial results cache (I'd guess about 16MB).
Rather than using the traditional DP solution (which is both too slow and requires too much memory), I use a Depth-First, Greedy-First combinatorial search with pruning using current best results. I was surprised (very) that this works as well as it does, but I still suspect that you could construct test sets that would take a worst-case exponential amount of time.
First there is a master function that is the only thing that calling code needs to call. It handles all of the setup and initialization and calls everything else. (all code is C#)
// Find the min# of coins for a specified sum
int CountChange(int targetSum, int[] coins)
{
// init the cache for (partial) memoization
PrevResultCache = new PartialResult[1048576];
// make sure the coins are sorted lowest to highest
Array.Sort(coins);
int curBest = targetSum;
int result = CountChange_r(targetSum, coins, coins.GetLength(0)-1, 0, ref curBest);
return result;
}
Because of the problem test-cases raised by #PeterdeRivaz I have also added a partial results cache to handle when there are large numbers in N[] that are close together.
Here is the code for the cache:
// implement a very simple cache for previous results of remainder counts
struct PartialResult
{
public int PartialSum;
public int CoinVal;
public int RemainingCount;
}
PartialResult[] PrevResultCache;
// checks the partial count cache for already calculated results
int PrevAddlCount(int currSum, int currCoinVal)
{
int cacheAddr = currSum & 1048575; // AND with (2^20-1) to get only the first 20 bits
PartialResult prev = PrevResultCache[cacheAddr];
// use it, as long as it's actually the same partial sum
// and the coin value is at least as large as the current coin
if ((prev.PartialSum == currSum) && (prev.CoinVal >= currCoinVal))
{
return prev.RemainingCount;
}
// otherwise flag as empty
return 0;
}
// add or overwrite a new value to the cache
void AddPartialCount(int currSum, int currCoinVal, int remainingCount)
{
int cacheAddr = currSum & 1048575; // AND with (2^20-1) to get only the first 20 bits
PartialResult prev = PrevResultCache[cacheAddr];
// only add if the Sum is different or the result is better
if ((prev.PartialSum != currSum)
|| (prev.CoinVal <= currCoinVal)
|| (prev.RemainingCount == 0)
|| (prev.RemainingCount >= remainingCount)
)
{
prev.PartialSum = currSum;
prev.CoinVal = currCoinVal;
prev.RemainingCount = remainingCount;
PrevResultCache[cacheAddr] = prev;
}
}
And here is the code for the recursive function that does the actual counting:
/*
* Find the minimum number of coins required totaling to a specific sum
* using a list of coin denominations passed.
*
* Memory Requirements: O(N) where N is the number of coin denominations
* (primarily for the stack)
*
* CPU requirements: O(Sqrt(S)*N) where S is the target Sum
* (Average, estimated. This is very hard to figure out.)
*/
int CountChange_r(int targetSum, int[] coins, int coinIdx, int curCount, ref int curBest)
{
int coinVal = coins[coinIdx];
int newCount = 0;
// check to see if we are at the end of the search tree (curIdx=0, coinVal=1)
// or we have reached the targetSum
if ((coinVal == 1) || (targetSum == 0))
{
// just use math to get the final total for this path/combination
newCount = curCount + targetSum;
// update, if we have a new curBest
if (newCount < curBest) curBest = newCount;
return newCount;
}
// prune this whole branch, if it cannot possibly improve the curBest
int bestPossible = curCount + (targetSum / coinVal);
if (bestPossible >= curBest)
return bestPossible; //NOTE: this is a false answer, but it shouldn't matter
// because we should never use it.
// check the cache to see if a remainder-count for this partial sum
// already exists (and used coins at least as large as ours)
int prevRemCount = PrevAddlCount(targetSum, coinVal);
if (prevRemCount > 0)
{
// it exists, so use it
newCount = prevRemCount + targetSum;
// update, if we have a new curBest
if (newCount < curBest) curBest = newCount;
return newCount;
}
// always try the largest remaining coin first, starting with the
// maximum possible number of that coin (greedy-first searching)
newCount = curCount + targetSum;
for (int cnt = targetSum / coinVal; cnt >= 0; cnt--)
{
int tmpCount = CountChange_r(targetSum - (cnt * coinVal), coins, coinIdx - 1, curCount + cnt, ref curBest);
if (tmpCount < newCount) newCount = tmpCount;
}
// Add our new partial result to the cache
AddPartialCount(targetSum, coinVal, newCount - curCount);
return newCount;
}
Analysis:
Memory: Memory usage is pretty easy to determine for this algorithm. Basically there's only the partial results cache and the stack. The cache is fixed at approximately 1 million entries times the size of each entry (3*4 bytes), so about 12MB. The stack is limited to O(N), so together, memory is clearly not a problem.
CPU: The run-time complexity of this algorithm starts out hard to determine and then gets harder, so please excuse me because there's a lot of hand-waving here. I tried to search for an analysis of just the brute-force problem (a combinatorial search over sums of the N base values that total S) but not much turned up. What little there was tended to say it was O(N^S), which is clearly too high. I think that a fairer estimate is O(N^(S/N)), or possibly O(N^(S/AVG(N))), or even O(N^(S/Gmean(N))) where Gmean(N) is the geometric mean of the elements of N[]. This solution starts out with the brute-force combinatorial search and then improves it with two significant optimizations.
The first is the pruning of branches based on estimates of the best possible result for that branch versus the best result it has already found. If the best-case estimators were perfectly accurate and the work for branches was perfectly distributed, this would mean that if we find a result that is better than 90% of the other possible cases, then pruning would effectively eliminate 90% of the work from that point on. To make a long story short, the amount of work still remaining after pruning should shrink harmonically as the search progresses. Assuming that some kind of summing/integration should be applied to get a work total, this appears to me to work out to a logarithm of the original work. So let's call it O(log(N^(S/N))), or O(N*log(S/N)), which is pretty darn good. (Though O(N*log(S/Gmean(N))) is probably more accurate.)
However, there are two obvious holes in this. First, the best-case estimators are not perfectly accurate and thus will not prune as effectively as assumed above, but this is somewhat counterbalanced by the greedy-first ordering of the branches, which gives the best chance of finding better solutions early in the search and thereby increases the effectiveness of pruning.
The second problem is that the best-case estimator works better when the different values of N are far apart. Specifically, if |(S/n2 - S/n1)| > 1 for any 2 values in N, then it becomes almost perfectly effective. For values of N less than SQRT(S), even two adjacent values (k, k+1) are far enough apart that this rule applies. However, for increasing values above SQRT(S) a window opens up so that any number of N-values within that window will not be able to effectively prune each other. The size of this window is approximately K/SQRT(S). So if S=10^9, when K is around 10^6 this window will be almost 30 numbers wide. This means that N[] could contain 1 plus every number from 1000001 to 1000029 and the pruning optimization would provide almost no benefit.
To address this, I added the partial results cache, which allows memoization of the most recent partial sums up to the target S. This takes advantage of the fact that when the N-values are close together, they will tend to have an extremely high number of duplicates in their sums. As best as I can figure, this effectiveness is approximately N times the J-th root of the problem size, where J = S/K and K is some measure of the average size of the N-values (Gmean(N) is probably the best estimate). If we apply this to the brute-force combinatorial search, assuming that pruning is ineffective, we get O((N^(S/Gmean(N)))^(1/Gmean(N))), which I think is also O(N^(S/(Gmean(N)^2))).
So, at this point take your pick. I know this is really sketchy, and even if it is correct, it is still very sensitive to the distribution of the N-values, so lots of variance.
[I've replaced the previous idea about bit operations because it seems to be too time consuming]
A bit of a crazy idea, and incomplete, but it may work.
Let's start with introducing f(n,s) which returns number of combinations in which s can be composed from n coins.
Now, how is f(n+1,s) related to f(n,s)?
One possible way to calculate it is:
f(n+1,s)=sum[coin:coins]f(n,s-coin)
For example, if we have coins 1 and 3,
f(0,)=[1,0,0,0,0,0,0,0] - with zero coins we can have only zero sum
f(1,)=[0,1,0,1,0,0,0,0] - what we can have with one coin
f(2,)=[0,0,1,0,2,0,1,0] - what we can have with two coins
We can rewrite it a bit differently:
f(n+1,s)=sum[i=0..max]f(n,s-i)*a(i)
a(i)=1 if we have coin i and 0 otherwise
What we have here is convolution: f(n+1,)=conv(f(n,),a)
https://en.wikipedia.org/wiki/Convolution
Computing it as the definition suggests takes O(n^2),
but we can use a Fourier transform to reduce it to O(n*log n).
https://en.wikipedia.org/wiki/Convolution#Convolution_theorem
So now we have a more-or-less cheap way to find out which numbers are possible with n coins without going incrementally - just calculate the n-th power of F(a) and apply the inverse Fourier transform.
This allows us to make a kind of binary search which can help handling cases when the answer is big.
As I said the idea is incomplete - for now I have no idea how to combine bit representation with Fourier transforms (to satisfy memory constraint) and whether we will fit into 1 second on any "regular" CPU...
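To make the convolution step concrete, here is a small check of the coins-{1, 3} example above (Python with numpy; the arrays and padding are my own toy choices, and the memory issue for S up to 10^9 is of course untouched):
import numpy as np

# a[s] = 1 if a coin of value s exists; here the coins are {1, 3}, sums 0..7.
a = np.array([0, 1, 0, 1, 0, 0, 0, 0])
f0 = np.zeros(8, dtype=int)
f0[0] = 1                                   # f(0,.) -- only the zero sum is reachable

f1 = np.convolve(f0, a)[:8]                 # [0 1 0 1 0 0 0 0] = f(1,.)
f2 = np.convolve(f1, a)[:8]                 # [0 0 1 0 2 0 1 0] = f(2,.)

# The same f(2,.) via the convolution theorem: square the transform of a.
m = 16                                      # pad so the circular convolution cannot wrap
F = np.fft.rfft(a, m)
f2_fft = np.rint(np.fft.irfft(F * F, m)[:8]).astype(int)
assert (f2_fft == f2).all()
print(f1, f2)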

Quickly generating the "triangle sequence": avoiding mispredictions

I'm interested in calculating the triangle sequence [1], which is the sequence of pairs (i, j): (0, 0), (1, 0), (1, 1), (2, 0), (2, 1) ...
which iterates through all pairs (i, j) with the restriction that i >= j. The same sequence but with the restriction i > j is also interesting.
These sequences represent, among other things, all the ways to choose 2 (possibly identical) elements from an n-element set (for the sequence up to (n, n)) [2], or the indices of the lower triangular elements of a matrix [3]. The sequence of values for i alone is A003056 in OEIS, while j alone is A002262. The sequence frequently arises in combinatorial algorithms, where its performance may be critical.
A simple but branchy way to generate the next value in the sequence is:
if (i == j) {
    j = 0;
    i++;
} else {
    j++;
}
However, this suffers from many mispredicts while calculating the initial elements of the sequence, when checking the condition (i == j) -
generally one mispredict each time i is incremented. As the sequence increases, the number of mispredicts becomes lower since i is incremented
with reduced frequency, so the j++ branch dominates and is well predicted. Still, some types of combinatorial search repeatedly iterate over the
small terms in the sequence, so I'm looking for a branch-free approach or some other approach that suffers fewer mispredicts.
For many uses, the order of the sequence isn't that important, so generating the values in a different order than above is allowable if it leads to a better solution. For example, j could count down rather than up: (0, 0), (1, 1), (1, 0), (2, 2), (2, 1), ....
[1] I'm also interested in knowing what the right name for this sequence is (perhaps so I can make a better title for this question). I just kind of made up "triangle sequence".
[2] Here, the i >= j version represents sub-multisets (repetition allowed), while the i > j variant represents normal subsets (no repetition).
[3] Here, the i >= j version includes the main diagonal, while the i > j variant excludes it.
Here are two branch-free approaches that do not use any expensive calculations. First one uses comparison and logical AND:
const bool eq = i == j;
i += eq;
j = (j + 1) & (eq - 1);
Second one uses comparison and multiplication:
const bool eq = i == j;
i += eq;
j = (j + 1) * (1 - eq);
In theory "multiplication" variant should be slower than "logical" one, but measurements show very little difference.
Both approaches would result in branchless code only for processors that allow branchless comparisons (for example x86). Also these approaches assume to be implemented using a language where results of conditional expressions could be easily converted to integers (for example C/C++, where "false" comparisons are converted to zero integers, and "true" ones - to integers equal to "1").
The only problem with these approaches is performance. They could in theory outperform branchy code, but only when mispredicts are really frequent. A simple test with no other work besides generating the "triangle sequence" (see it on ideone) shows a miserable mispredict rate, and therefore both branchless methods are about 3 times slower than the branchy one. The explanation is simple: there are not many mispredicts for longer sequences, and as for shorter ones, modern processors have very good branch predictors that almost never fail on short branch patterns. So we have few mispredicts, the branchy code almost always executes only 2 instructions (compare, increment), while the branchless code executes both the active and inactive "branches" plus some instructions specific to the branchless approach.
In case you want to repeatedly iterate over the small terms in the sequence, a different approach is probably preferable: calculate the sequence only once, then repeatedly read it from memory, as in the sketch below.
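A minimal sketch of that precompute-and-replay idea in Python (my own names; the limit is however many pairs the inner loops will ever need):
def triangle_pairs(limit):
    # First `limit` pairs of the i >= j sequence, generated once up front.
    pairs = []
    i = j = 0
    while len(pairs) < limit:
        pairs.append((i, j))
        if i == j:          # the branchy step only runs here, once per pair
            i, j = i + 1, 0
        else:
            j += 1
    return pairs

TABLE = triangle_pairs(1 << 20)     # about a million pairs, built once
# ...inner loops then just index TABLE instead of regenerating the sequence.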
In Python we can express this as:
i, j = i + (i == j), (j + 1) * (i != j)
but it turns out, at around a million iterations or so, on my machine, the following, more long winded, lazy evaluation code is about 20% faster:
from itertools import count, repeat

def gen_i():
    """ A003056 """
    for x in count(0):                  # infinitely counts up
        yield from repeat(x, x + 1)     # replication

def gen_j():
    """ A002262 """
    for x in count(0):                  # infinitely counts up
        yield from range(x + 1)         # count up to (including) x

sequence = zip(gen_i(), gen_j())

for _ in range(1000000):
    i, j = next(sequence)
In the above code, gen_i(), gen_j(), count(), repeat(), and zip() are all generators (and range() is an iterator) so sequence continues to call into the code on demand as new (i, j) pairs are required. I assume both the implementation of range() and repeat() terminate with a misprediction.
Simple isn't necessarily also quick (i.e. consider all the unnecessary additions of zero and multiplications by one in the compact form).
So which is more important, quickly generating the sequence or avoiding mispredictions?
You can derive j from i:
...set val...
old_j = j;
j = (j + 1) % (i + 1);
if (i == old_j) {
i++;
}
...loop if more...
And further derive i increment from j and current i:
...set val...
old_j = j;
j = (j + 1) % (i + 1);
i = i + (i / old_j);
...loop if more...
(Can't test it at the moment... Please review)

Understanding solution to finding optimal strategy for game involving picking pots of gold

I am having trouble understanding the reasoning behind the solution to this question on CareerCup.
Pots of gold game: Two players A & B. There are pots of gold arranged
in a line, each containing some gold coins (the players can see how
many coins are there in each gold pot - perfect information). They get
alternating turns in which the player can pick a pot from one of the
ends of the line. The winner is the player which has a higher number
of coins at the end. The objective is to "maximize" the number of
coins collected by A, assuming B also plays optimally. A starts the
game.
The idea is to find an optimal strategy that makes A win knowing that
B is playing optimally as well. How would you do that?
At the end I was asked to code this strategy!
This was a question from a Google interview.
The proposed solution is:
function max_coin( int *coin, int start, int end ):
    if start > end:
        return 0
    // I DON'T UNDERSTAND THESE NEXT TWO LINES
    int a = coin[start] + min(max_coin(coin, start+2, end), max_coin(coin, start+1, end-1))
    int b = coin[end] + min(max_coin(coin, start+1, end-1), max_coin(coin, start, end-2))
    return max(a, b)
There are two specific sections I don't understand:
In the first line why do we use the ranges [start + 2, end] and [start + 1, end - 1]? It's always leaving out one coin jar. Shouldn't it be [start + 1, end] because we took the starting coin jar out?
In the first line, why do we take the minimum of the two results and not the maximum?
Because I'm confused about why the two lines take the minimum and why we choose those specific ranges, I'm not really sure what a and b actually represent?
First of all, a and b represent the maximum gain if start (respectively end) is played.
So let me explain this line:
int a = coin[start] + min(max_coin(coin, start+2, end), max_coin(coin, start+1, end-1))
If I play start, I will immediately gain coin[start]. The other player now has to play between start+1 and end. He plays to maximize his gain; however, since the total number of coins is fixed, this amounts to minimizing mine. Note that
if he plays start+1 I'll gain max_coin(coin, start+2, end)
if he plays end, I'll gain max_coin(coin, start+1, end-1)
Since he tries to minimize my gain, I'll gain the minimum of those two.
The same reasoning applies to the other line, where I play end.
Note: This is a bad recursive implementation. First of all, max_coin(coin, start+1, end-1) is computed twice. Even if you fix that, you'll end up computing the shorter cases many times over. This is very similar to what happens if you try to compute Fibonacci numbers using plain recursion. It would be better to use memoization or dynamic programming.
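For reference, a memoized version of the same recursion (a quick sketch in Python; the wrapper and names are mine):
from functools import lru_cache

def max_coin(coin):
    # Maximum total the first player can collect from coin[0..n-1],
    # with both players playing optimally.
    @lru_cache(maxsize=None)
    def best(start, end):
        if start > end:
            return 0
        a = coin[start] + min(best(start + 2, end), best(start + 1, end - 1))
        b = coin[end] + min(best(start + 1, end - 1), best(start, end - 2))
        return max(a, b)
    return best(0, len(coin) - 1)

print(max_coin((5, 3, 7, 10)))   # 15: A takes 10, B takes 7, A takes 5, B takes 3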
a and b here represent the maximum A can get by picking the starting pot or the ending pot, respectively.
We're actually trying to maximize A-B, but since B = TotalGold - A, we're trying to maximize 2A - TotalGold, and since TotalGold is constant, we're trying to maximize 2A, which is the same as A, so we completely ignore the values of B's picks and just work with A.
The updated parameters in the recursive calls include B picking as well - so coin[start] represents A picking the start, then B picks the next one from the start, so it's start+2. For the next call, B picks from the end, so it's start+1 and end-1. Similarly for the rest.
We're taking the min because B will try to maximize its own profit, so it will pick the choice that minimizes A's profit.
But actually I'd say this solution is lacking a bit in the sense that it just returns a single value, not 'an optimal strategy', which, in my mind, would be a sequence of moves. And it also doesn't take into account the possibility that A can't win, in which case one might want to output a message saying that it's not possible, but this would really be something to clarify with the interviewer.
Let me answer your points in reverse order, somehow it seems to make more sense that way.
3 - a and b represent the amount of coins the first player will get, when he/she chooses the first or the last pot respectively
2 - we take the minimum because it is the choice of the second player - he/she will act to minimise the amount of coins the first player will get
1 - the first line presents the scenario - if the first player has taken the first pot, what will the second player do? If he/she again takes the first pot, it will leave (start+2, end). If he/she takes the last pot, it will leave (start+1, end-1)
Assume what you gain on your turn is x and what you get in all subsequent turns is y. Both a and b represent x+y, where a assumes you take your next pot from the front (x=coin[start]) and b assumes you take it from the back (x=coin[end]).
Now, how do you compute y?
After your choice, the opponent will use the same optimal strategy (hence the recursive calls) to maximize his profit, and you will be left with the smaller profit for that turn. This is why y = min(best_strategy_front(), best_strategy_end()) -- your value is the smaller of the two choices that are left, because the opponent will take the bigger one.
The indexing simply indicates the remaining sequences minus one pot on the front and on the back after you made your choice.
A penny from my end too. I have explained steps in detail.
public class Problem08 {

    static int dp[][];

    public static int optimalGameStrategy(int arr[], int i, int j) {
        // If only a single element is left, choose it.
        if (i == j) return arr[i];
        // If only two elements are left, choose the max.
        if (i + 1 == j) return Math.max(arr[i], arr[j]);
        // If the result is already computed, return it.
        if (dp[i][j] != -1) return dp[i][j];
        /**
         * If I choose i, then the range shrinks to i+1 .. j and the next move is
         * the opponent's.  If he chooses j, the range shrinks to i+1 .. j-1; if he
         * also chooses i, it shrinks to i+2 .. j.  Whatever he chooses, he will
         * leave me the worse option, hence I take the minimum of his two choices.
         *
         * Similarly for the case when I choose j.
         *
         * I eventually take the maximum over both of my own choices. :)
         */
        int iChooseI = arr[i] + Math.min(optimalGameStrategy(arr, i+1, j-1),
                optimalGameStrategy(arr, i+2, j));
        int iChooseJ = arr[j] + Math.min(optimalGameStrategy(arr, i+1, j-1),
                optimalGameStrategy(arr, i, j-2));
        int res = Math.max(iChooseI, iChooseJ);
        dp[i][j] = res;
        return res;
    }

    public static void main(String[] args) {
        int[] arr = new int[]{5, 3, 7, 10};
        dp = new int[arr.length][arr.length];
        for (int i = 0; i < arr.length; i++) {
            for (int j = 0; j < arr.length; j++) {
                dp[i][j] = -1;
            }
        }
        System.out.println(" Nas: " + optimalGameStrategy(arr, 0, arr.length - 1));
    }
}

Does a range of integers contain at least one perfect square?

Given two integers a and b, is there an efficient way to test whether there is another integer n such that a ≤ n^2 < b?
I do not need to know n, only whether at least one such n exists or not, so I hope to avoid computing square roots of any numbers in the interval.
Although testing whether an individual integer is a perfect square is faster than computing the square root, the range may be large and I would also prefer to avoid performing this test for every number within the range.
Examples:
intervalContainsSquare(2, 3) => false
intervalContainsSquare(5, 9) => false (note: 9 is outside this interval)
intervalContainsSquare(9, 9) => false (this interval is empty)
intervalContainsSquare(4, 9) => true (4 is inside this interval)
intervalContainsSquare(5, 16) => true (9 is inside this interval)
intervalContainsSquare(1, 10) => true (1, 4 and 9 are all inside this interval)
Computing whether or not a number is a square isn't really faster than computing its square root in hard cases, as far as I know. What is true is that you can do a precomputation to know that it isn't a square, which might save you time on average.
Likewise for this problem, you can do a precomputation to determine that sqrt(b)-sqrt(a) >= 1, which then means that a and b are far enough apart that there must be a square between them. With some algebra, this inequality is equivalent to the condition that (b-a-1)^2 >= 4*a, or if you want it in a more symmetric form, that (a-b)^2+1 >= 2*(a+b). So this precomputation can be done with no square roots, only with one integer product and some additions and subtractions.
If a and b are almost exactly the same, then you can still use the trick of looking at low order binary digits as a precomputation to know that there isn't a square between them. But they have to be so close together that this precomputation might not be worth it.
If these precomputations are inconclusive, then I can't think of anything other than everyone else's solution, a <= ceil(sqrt(a))^2 < b.
Since there was a question of doing the algebra right:
sqrt(b)-sqrt(a) >= 1
sqrt(b) >= 1+sqrt(a)
b >= 1+2*sqrt(a)+a
b-a-1 >= 2*sqrt(a)
(b-a-1)^2 >= 4*a
Also: Generally when a is a large number, you would compute sqrt(a) with Newton's method, or with a lookup table followed by a few Newton's method steps. It is faster in principle to compute ceil(sqrt(a)) than sqrt(a), because the floating point arithmetic can be simplified to integer arithmetic, and because you don't need as many Newton's method steps to nail down high precision that you're just going to throw away. But in practice, a numerical library function can be much faster if it uses square roots implemented in microcode. If for whatever reason you don't have that microcode to help you, then it might be worth it to hand-code ceil(sqrt(a)). Maybe the most interesting case would be if a and b are unbounded integers (like, a thousand digits). But for ordinary-sized integers on an ordinary non-obsolete computer, you can't beat the FPU.
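In code, that precomputation needs only one multiplication and a few additions (a small sketch in Python; the guard on b - a - 1 keeps the squaring step valid):
def definitely_contains_square(a, b):
    # Sufficient (not necessary) test: if sqrt(b) - sqrt(a) >= 1 there must be
    # a square in [a, b).  Integer form of that inequality, for 0 <= a < b.
    d = b - a - 1
    return 0 <= a < b and d >= 0 and d * d >= 4 * a

# definitely_contains_square(5, 16) -> True  (9 lies in [5, 16))
# definitely_contains_square(5, 9)  -> False (inconclusive; fall back to the full check)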
Get the square root of the lower number. If this is an integer then you are done.
Otherwise round up and square the number. If this is less than b then it is true.
You only need to compute one square root this way.
To avoid a problem when a is equal to b, you should check that first, as this case is always false.
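A sketch of that approach in Python (math.isqrt returns the exact floor square root, so there is no floating-point worry; the function name is mine):
from math import isqrt

def interval_contains_square(a, b):
    # True iff some integer n satisfies a <= n*n < b (half-open, as in the question),
    # using a single integer square root.
    if a >= b:
        return False
    r = isqrt(max(a, 0))          # floor(sqrt(a)) for non-negative a
    if r * r < a:                 # round up to ceil(sqrt(a))
        r += 1
    return r * r < b

# The question's examples:
for lo, hi, want in [(2, 3, False), (5, 9, False), (9, 9, False),
                     (4, 9, True), (5, 16, True), (1, 10, True)]:
    assert interval_contains_square(lo, hi) == want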
If you will accept calculating two square roots, because of its monotonicity you have this inequality which is equivalent to your starting one:
sqrt(a) <= n < sqrt(b)
thus, if ceil(sqrt(a)) < sqrt(b), then n = ceil(sqrt(a)) is guaranteed to be such an n (and if not, no such n exists).
get the square root of the lower number and round it up
get the square root of the higher number and round it down (for the half-open interval here, take the square root of b - 1)
if the first is less than or equal to the second, there is a perfect square in the range
Find the integral part of sqrt(a) and sqrt(b), say sa and sb.
If sa^2 = a, then output yes.
If sb^2 = b and sa = sb-1, then output no.
If sa < sb output yes.
Else output no.
You can optimize the above to get rid of the computation of sqrt(b) (similar to JDunkerly's answer).
Or did you want to avoid computing square roots of a and b too?
You can avoid computing square roots completely by using a method similar to binary search.
You start with a guess n = 1 and compute n^2.
If a <= n^2 < b, you can stop.
If n^2 < a < b, you double your guess n.
If a < b <= n^2, you move n close to the average of the current and previous guesses.
This will take O(log b) time.
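A sketch of that search in Python (my own names; doubling finds an upper bound for ceil(sqrt(a)), then bisection pins it down, with no square roots at all):
def contains_square_no_sqrt(a, b):
    # True iff some n satisfies a <= n*n < b, using only comparisons of n*n.
    if a >= b or b <= 0:
        return False
    if a <= 0:
        return True                 # 0*0 = 0 already lies in [a, b)
    hi = 1
    while hi * hi < a:              # exponential phase: double until hi*hi >= a
        hi *= 2
    lo = hi // 2                    # invariant: lo*lo < a <= hi*hi
    while lo + 1 < hi:              # binary phase: shrink hi to ceil(sqrt(a))
        mid = (lo + hi) // 2
        if mid * mid < a:
            lo = mid
        else:
            hi = mid
    return hi * hi < b              # the smallest candidate square; is it below b?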
In addition to JDunkerley's nice solution (+1), there could be a possible improvement (which would need to be tested): computing the roots with an integer square-root routine instead of floating point.
Why are you hoping to avoid square roots entirely? Even before you get to the most efficient way of solving this, you have seen methods that call for only 2 square roots. That's done in O(1) time, so it seems to me that any improvement you could hope to make would take more time to think about than it would EVER save you computing time. Am I wrong?
One way is to use Newton's method to find the integer square root for b. Then you can check if that number falls in the range. I doubt that it is faster than simply calling the square root function, but it is certainly more interesting:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main( int argc, char* argv[] )
{
    int a, b;
    double xk = 0, xk1;
    int root;
    int iter = 0;

    a = atoi( argv[1] );
    b = atoi( argv[2] );

    xk1 = b / 32 + 1; // +1 to ensure > 0
    xk1 = b;
    while( fabs( xk1 - xk ) >= .5 ) {
        xk = xk1;
        xk1 = ( xk + b / xk ) / 2.;
        printf( "%d) xk = %f\n", ++iter, xk1 );
    }
    root = (int)xk1;

    // If b is a perfect square, then this finds that root, so it also
    // needs to check if (root-1)^2 falls in the range.
    // And this does a lot more multiplications than it needs.
    if ( root*root >= a && root*root < b ||
         (root-1)*(root-1) >= a && (root-1)*(root-1) < b )
        printf( "Contains perfect square\n" );
    else
        printf( "Does not contain perfect square\n" );

    return 1;
}
