Number of ways of colouring grid - algorithm

There is a grid of n x 1. You have to colour it with at least r red cells, at least g green cells, and at least b blue cells (r + g + b <= n). Two patterns are said to be different if they differ in at least one position. In how many ways can you colour it? (The solution can be either algorithmic or mathematical.)
My attempt:
int func(int id, int r, int g, int b)
{
    int ma = 0;
    if (id == n) {
        if (r > 0)
            ma++;
        if (g > 0)
            ma++;
        if (b > 0)
            ma++;
        return ma;
    }
    if (r > 0)
        ma += func(id + 1, r - 1, g, b);
    if (g > 0)
        ma += func(id + 1, r, g - 1, b);
    if (b > 0)
        ma += func(id + 1, r, g, b - 1);
    if (r + g + b < n - id) {
        ma += func(id + 1, r, g, b);
    }
    return ma;
}

Suppose the number of them is f(n,r,g,b), then we have the following recursion:
f(n,r,g,b) = f(n-1,r,g,b)*3 + f(n-1,r-1,g,b)+f(n-1,r,g-1,b)+f(n-1,r,g,b-1).
Also we know the base cases: f(1,1,0,0) = f(1,0,1,0) = f(1,0,0,1) = 1. Start from the bottom and build up f(n,r,g,b) with the recursion above (this is simple if you use memoization instead of for loops). The running time is O(n*r*g*b).
Update: Your code is close to my answer, but first I should say that it's wrong; second, you used naive recursion, which causes exponential running time. Allocate an array of size n*r*g*b to avoid recomputing already-computed answers. See this for an instance of memoization.
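For concreteness, here is a Python sketch of the memoized version. This is my own formulation, not the exact recursion above: r, g, b are interpreted as the counts still required, capped at zero, which keeps the base case trivial.

```python
from functools import lru_cache

def count_colourings(n, r, g, b):
    """Colourings of an n x 1 grid with >= r red, >= g green, >= b blue cells."""
    @lru_cache(maxsize=None)
    def f(i, r, g, b):
        # i cells remain; r, g, b are the requirements still unmet
        if i == 0:
            return 1 if r == g == b == 0 else 0
        return (f(i - 1, max(r - 1, 0), g, b)    # paint this cell red
                + f(i - 1, r, max(g - 1, 0), b)  # green
                + f(i - 1, r, g, max(b - 1, 0))) # blue
    return f(n, r, g, b)
```

For example, count_colourings(3, 1, 1, 1) gives 6 (the permutations of RGB), and count_colourings(2, 1, 0, 0) gives 5 (all 9 two-cell patterns except the 4 with no red).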


how to get to the target floor in minimum time if one can move to N + 1, N - 1 and 2 * N in one minute?

This is a problem I was asked to solve during an interview, but the code I implemented during the interview could not pass all test cases. The problem is just as the title says: given N and T (T >= N), which are the initial floor and the target floor respectively, one can move to the current floor + 1, the current floor - 1, or 2 * the current floor in one minute. What is the minimum time needed to reach the target? I think it's a typical DP problem. This is what I did in the interview:
from functools import lru_cache

@lru_cache(None)
def solve(cur):
    if cur >= target: return cur - target
    if cur * 2 >= target:
        # if current floor * 2 >= target, we can move to current floor + 1 each time,
        # or move backward to (target + 1) // 2 and double next; if target is odd,
        # we need 1 extra move
        return min(target - cur, cur - (target + 1) // 2 + 1 + (target % 2))
    return min(solve(cur + 1), solve(cur * 2)) + 1
I tested it with some cases and it seems to work; I cannot figure out why it could not pass all test cases during the interview.
Update
I tried using Dijkstra to solve this, but the code became a bit of a mess. Then I thought: if the problem asks to find a shortest distance, maybe we can use BFS, and I think that's the right solution, so below is the code:
from collections import deque

def solve():
    while True:
        N, T = map(int, input().split())
        q = deque([N])
        vis = [False] * (T * 2)
        vis[N] = True
        steps = 0
        while q:
            found = False
            for _ in range(len(q)):
                u = q.popleft()
                if u == T:
                    print(f'reach target in {steps} minutes')
                    found = True
                    break
                cand = [u - 1, u + 1, u * 2] if u < T else [u - 1]
                for v in cand:
                    if v > 0 and not vis[v]:
                        vis[v] = True
                        q.append(v)
            if found:
                break
            steps += 1
An easy way is to start from the target T and find how many iterations we need to go down to the initial value N. Here, the allowed operations are +1, -1, and division by 2.
The key point is that division by 2 is only allowed when T is even. Moreover, if T is indeed even, it seems clear that division by 2 is the road to take, except if T is near enough to N. This little issue is solved by comparing 1 + nsteps(N, T/2) with T - N.
If T is odd, we have to compare nsteps(N, T-1) with nsteps(N, T+1).
Last but not least, if T is less than N, then the number of steps is equal to N - T.
Complexity: ??
Here is a simple C++ implementation to illustrate the algorithm. It should be easy to adapt it in any language.
#include <iostream>
#include <algorithm>
int nsteps (int n, int t) {
    if (t <= n) {
        return n - t;
    }
    if (t % 2 == 0) {
        return std::min (1 + nsteps(n, t/2), t - n);
    }
    return 1 + std::min (nsteps(n, t-1), nsteps(n, t+1));
}

int main () {
    int n, t;
    std::cin >> n >> t;
    std::cout << nsteps (n, t) << std::endl;
    return 0;
}
In practice, as noted in a comment by @David Eisenstat, it is still slow, at least on some occasions. For example, for the input 1 1431655765, it needs 10891541 calls of the function. I modified the code hereafter to use the value of T modulo 4 in order to speed it up: if T is large enough, we can decide between the two roads when T is odd. On the same test case, only 46 calls are needed now.
In this case, the complexity seems indeed equal to O(log T).
int cpt2 = 0;

long long int nsteps2 (long long int n, long long int t) {
    cpt2++;
    if (t <= n) {
        return n - t;
    }
    if (t % 2 == 0) {
        return std::min (1ll + nsteps2(n, t/2), t - n);
    }
    if (t/4 < n) return 1ll + std::min (nsteps2(n, t-1), nsteps2(n, t+1));
    if (t%4 == 1) return 1ll + nsteps2(n, t-1);
    else return 1ll + nsteps2(n, t+1);
}
As you mentioned, we can make use of a normal BFS for this question, since the moves between floors are given: floor + 1, floor - 1, or floor / 2. As per my understanding, a floor / 2 move is only possible if we are at an even floor.
Here is my Java code for the same:
int findSteps(int n, int t) {
    if (n > t) {
        return n - t;
    }
    if (n == t) {
        return 0;
    }
    Queue<Integer> queue = new LinkedList<>();
    Set<Integer> visited = new HashSet<>();
    queue.offer(n);
    visited.add(n);
    int steps = 0;
    while (!queue.isEmpty()) {
        int size = queue.size();
        for (int i = 0; i < size; i++) {
            int currentFloor = queue.poll();
            if (currentFloor == t)
                return steps;
            int possibleMove1 = currentFloor + 1;
            int possibleMove2 = currentFloor - 1;
            int possibleMove3 = currentFloor % 2 == 0 ? currentFloor / 2 : -1;
            // out-of-bound conditions or visited condition
            if (possibleMove1 <= t && possibleMove1 > 0 && !visited.contains(possibleMove1)) {
                queue.offer(possibleMove1);
                visited.add(possibleMove1);
            }
            if (possibleMove2 <= t && possibleMove2 > 0 && !visited.contains(possibleMove2)) {
                queue.offer(possibleMove2);
                visited.add(possibleMove2);
            }
            if (possibleMove3 <= t && possibleMove3 > 0 && !visited.contains(possibleMove3)) {
                queue.offer(possibleMove3);
                visited.add(possibleMove3);
            }
        }
        steps += 1;
    }
    return -1;
}
Note: there is some repeated code that could be moved into a separate function.

Counting inversions in a segment with updates

I'm trying to solve a problem which goes like this:
Problem
Given an array of integers "arr" of size "n", process two types of queries. There are "q" queries you need to answer.
Query type 1
input: l r
result: output number of inversions in [l, r]
Query type 2
input: x y
result: update the value at arr [x] to y
Inversion
For every index j < i, if arr [j] > arr [i], the pair (j, i) is one inversion.
Input
n = 5
q = 3
arr = {1, 4, 3, 5, 2}
queries:
type = 1, l = 1, r = 5
type = 2, x = 1, y = 4
type = 1, l = 1, r = 5
Output
4
6
Constraints
Time: 4 secs
1 <= n, q <= 100000
1 <= arr [i] <= 40
1 <= l, r, x <= n
1 <= y <= 40
I know how to solve a simpler version of this problem without updates, i.e. simply counting the number of inversions for each position using a segment tree or Fenwick tree in O(N*log(N)). The only solution I have for this problem is (I think) O(q*N*log(N)) with a segment tree, other than the trivial O(q*N^2) algorithm. This, however, does not fit within the time constraints of the problem. I would like hints towards a better algorithm that solves the problem in O(N*log(N)) (if that's possible) or maybe O(N*log^2(N)).
I first came across this problem two days ago and have been spending a few hours here and there to try and solve it. However, I'm finding it non-trivial to do so and would like to have some help/hints regarding the same. Thanks for your time and patience.
Updates
Solution
With the suggestion, answer and help by Tanvir Wahid, I've implemented the source code for the problem in C++ and would like to share it here for anyone who might stumble across this problem and not have an intuitive idea on how to solve it. Thank you!
Let's build a segment tree with each node containing information about how many inversions exist and the frequency count of elements present in its segment of authority.
node {
integer inversion_count : 0
array [40] frequency : {0...0}
}
Building the segment tree and handling updates
For each leaf node, initialise the inversion count to 0 and set the frequency of the element it represents (from the input array) to 1. The frequency array of a parent node can be calculated by summing up the frequencies of its left and right children. The inversion count of a parent node is the sum of the inversion counts of the left and right children, plus the new inversions created by merging the two segments, which can be calculated from the frequencies of the elements in each child. This calculation is essentially the product of the frequencies of bigger elements in the left child and the frequencies of smaller elements in the right child:
parent.inversion_count = left.inversion_count + right.inversion_count
for i from 39 down to 0:
    for j from 0 to i - 1:
        parent.inversion_count += left.frequency[i] * right.frequency[j]
Updates are handled similarly.
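As a sanity check on that product-of-frequencies step, here is a minimal Python sketch (the function name `cross_inversions` is mine) that counts only the inversions created by concatenating two segments:

```python
def cross_inversions(left_freq, right_freq):
    """Inversions formed across the boundary: a bigger value in the
    left segment paired with a smaller value in the right segment."""
    return sum(left_freq[i] * right_freq[j]
               for i in range(40)
               for j in range(i))
```

For instance, if the left segment holds two copies of value 5 and the right segment holds one 3 and one 5, only the pairs (5, 3) count, giving 2 cross inversions.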
Answering range queries on inversion counts
To answer the query for the number of inversions in the range [l, r], we calculate the inversions using the source code attached below.
Time Complexity: O(q*log(n))
Note
The source code attached does break some good programming habits. The sole purpose of the code is to "solve" the given problem and not to accomplish anything else.
Source Code
/**
 Lost Arrow (Aryan V S)
 Saturday 2020-10-10
**/

#include "bits/stdc++.h"
using namespace std;

struct node {
    int64_t inv = 0;
    vector <int> freq = vector <int> (40, 0);

    void combine (const node& l, const node& r) {
        inv = l.inv + r.inv;
        for (int i = 39; i >= 0; --i) {
            for (int j = 0; j < i; ++j) {
                // frequency of bigger numbers in the left * frequency of smaller numbers on the right
                inv += 1LL * l.freq [i] * r.freq [j];
            }
            freq [i] = l.freq [i] + r.freq [i];
        }
    }
};

void build (vector <node>& tree, vector <int>& a, int v, int tl, int tr) {
    if (tl == tr) {
        tree [v].inv = 0;
        tree [v].freq [a [tl]] = 1;
    }
    else {
        int tm = (tl + tr) / 2;
        build(tree, a, 2 * v + 1, tl, tm);
        build(tree, a, 2 * v + 2, tm + 1, tr);
        tree [v].combine(tree [2 * v + 1], tree [2 * v + 2]);
    }
}

void update (vector <node>& tree, int v, int tl, int tr, int pos, int val) {
    if (tl == tr) {
        tree [v].inv = 0;
        tree [v].freq = vector <int> (40, 0);
        tree [v].freq [val] = 1;
    }
    else {
        int tm = (tl + tr) / 2;
        if (pos <= tm)
            update(tree, 2 * v + 1, tl, tm, pos, val);
        else
            update(tree, 2 * v + 2, tm + 1, tr, pos, val);
        tree [v].combine(tree [2 * v + 1], tree [2 * v + 2]);
    }
}

node inv_cnt (vector <node>& tree, int v, int tl, int tr, int l, int r) {
    if (l > r)
        return node();
    if (tl == l && tr == r)
        return tree [v];
    int tm = (tl + tr) / 2;
    node result;
    result.combine(inv_cnt(tree, 2 * v + 1, tl, tm, l, min(r, tm)),
                   inv_cnt(tree, 2 * v + 2, tm + 1, tr, max(l, tm + 1), r));
    return result;
}

void solve () {
    int n, q;
    cin >> n >> q;
    vector <int> a (n);
    for (int i = 0; i < n; ++i) {
        cin >> a [i];
        --a [i];
    }
    vector <node> tree (4 * n);
    build(tree, a, 0, 0, n - 1);
    while (q--) {
        int type, x, y;
        cin >> type >> x >> y;
        --x; --y;
        if (type == 1) {
            node result = inv_cnt(tree, 0, 0, n - 1, x, y);
            cout << result.inv << '\n';
        }
        else if (type == 2) {
            update(tree, 0, 0, n - 1, x, y);
        }
        else
            assert(false);
    }
}

int main () {
    std::ios::sync_with_stdio(false);
    std::cin.tie(nullptr);
    std::cout.precision(10);
    std::cout << std::fixed << std::boolalpha;
    int t = 1;
    // std::cin >> t;
    while (t--)
        solve();
    return 0;
}
arr[i] can be at most 40. We can use this to our advantage. What we need is a segment tree. Each node will hold 41 values: a long long int representing the inversions for this range, and an array of size 40 for the count of each number (a struct will do). How do we merge the two children of a node? We know the inversions for the left child and the right child, and we also know the frequency of each number in both of them. The inversion count of the parent node will be the sum of the inversions of both children, plus the number of inversions between the left and right child, which we can easily find from the frequencies of the numbers. Queries can be done in a similar way. Complexity: O(40*40*q*log(n)).

How many numbers with length N with K digits D consecutively

Given positive numbers N, K, D (1 <= N <= 10^5, 1 <= K <= N, 1 <= D <= 9). How many N-digit numbers are there that have K consecutive digits D? Write the answer mod (10^9 + 7).
For example: N = 4, K = 3, D = 6, there are 18 numbers:
1666, 2666, 3666, 4666, 5666, 6660,
6661, 6662, 6663, 6664, 6665, 6666, 6667, 6668, 6669, 7666, 8666 and 9666.
Can we calculate the answer in O(N*K) (maybe dynamic programming)?
I've tried using combinations. If N = 4, K = 3, D = 6, the number I have to find is abcd.
+) if (a = b = c = D), I choose digit for d. There are 10 ways (6660, 6661, 6662, 6663, 6664, 6665, 6666, 6667, 6668, 6669)
+) if (b = c = d = D), I choose digit for a (a > 0). There are 9 ways (1666, 2666, 3666, 4666, 5666, 6666, 7666, 8666, 9666)
But in these two cases, the number 6666 is counted twice. N and K are very large; how can I count all of them?
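Before reaching for DP, a brute-force counter (my own helper, only feasible for small N) sidesteps the double-counting worry entirely, because each number is counted at most once:

```python
def brute_force_count(n, k, d):
    """Count n-digit numbers (no leading zero) that contain the digit d
    at least k times in a row. Only usable for small n."""
    run = str(d) * k
    return sum(run in str(x) for x in range(10 ** (n - 1), 10 ** n))
```

For example, brute_force_count(4, 3, 6) returns 18, matching the list above; it also gives a cheap oracle to validate any O(N*K) formula on small inputs.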
If one is looking for a mathematical solution (vs. necessarily an algorithmic one) it's good to look at it in terms of the base cases and some formulas. They might turn out to be something you can do some kind of refactoring and get a tidy formula for. So just for the heck of it...here's a take on it that doesn't deal with the special treatment of zeros. Because that throws some wrenches in.
Let's look at a couple of base cases, and call our answer F(N,K) (not considering D, as it isn't relevant to account for; but taking it as a parameter anyway.):
when N = 0
You'll never find any length sequences of digits when there's no digit.
F(0, K) = 0 for any K.
when N = 1
Fairly obvious. If you're looking for K sequential digits in a single digit, the options are limited. Looking for more than one? No dice.
F(1, K) = 0 for any K > 1
Looking for exactly one? Okay, there's one.
F(1, 1) = 1
Sequences of zero sequential digits allowed? Then all ten digits are fine.
F(1, 0) = 10
for N > 1
when K = 0
Basically, all N-digit numbers will qualify. So the number of possibilities meeting the bar is 10^N. (e.g. when N is 3 then 000, 001, 002, ... 999 for any D)
F(N, 0) = 10^N for any N > 1
when K = 1
Possibilities meeting the condition is any number with at least one D in it. How many N-digit numbers are there which contain at least one digit D? Well, it's going to be 10^N minus all the numbers that have no instances of the digit D. 10^N - 9^N
F(N, 1) = 10^N - 9^N for any N > 1
when N < K
No way to get K sequential digits if N is less than K
F(N, K) = 0 when N < K
when N = K
Only one possible way to get K sequential digits in N digits.
F(N, K) = 1 when N = K
when N > K
Okay, we already know that N > 1 and K > 1. So this is going to be the workhorse where we hope to use subexpressions for things we've already solved.
Let's start by considering popping off the digit at the head, leaving N-1 digits on the tail. If that N-1 series could achieve a series of K digits all by itself, then adding another digit will not change anything about that. That gives us a term 10 * F(N-1, K)
But if our head digit is a D, that is special. Our cases will be:
It might be the missing key for a series that started with K-1 instances of D, creating a full range of K.
It might complete a range of K-1 instances of D, but on a case that already had a K series of adjacent D (that we thus accounted for in the above term)
It might not help at all.
So let's consider two separate categories of tail series: those that start with K-1 instances of D and those that do not. Let's say we have N=7 shown as D:DDDXYZ and with K=4. We subtract one from N and from K to get 6 and 3, and if we subtract them we get how many trailing any-digits (XYZ here) are allowed to vary. Our term for the union of (1) and (2) to add in is 10^((N-1)-(K-1)).
Now it's time for some subtraction for our double-counts. We haven't double counted any cases that didn't start with K-1 instances of D, so we keep our attention on that (DDDXYZ). If the value in the X slot is a D then it was definitely double counted. We can subtract out the term for that as 10^(((N - 1) - 1) - (K - 1)); in this case giving us all the pairings of YZ digits you can get with X as D. (100).
The last thing to get rid of are the cases where X is not a D, but in whatever that leftover after the X position there was still a K length series of D. Again we reuse our function, and subtract a term 9 * F(N - K, K, D).
Paste it all together and simplify a couple of terms you get:
F(N, K, D) = 10 * F(N-1, K, D) + 10^(N-K) - 10^(N-K-1) - 9 * F(N-K-1, K, D)
Now we have a nice functional definition suitable for Haskelling or whatever. But I'm still awkward with that, and it's easy enough to test in C++. So here it is (assuming availability of a long integer power function):
long F(int N, int K, int D) {
    if (N == 0) return 0;
    if (K > N) return 0;
    if (K == N) return 1;
    if (N == 1) {
        if (K == 0) return 10;
        if (K == 1) return 1;
        return 0;
    }
    if (K == 0)
        return power(10, N);
    if (K == 1)
        return power(10, N) - power(9, N);
    return (
        10 * F(N - 1, K, D)
        + power(10, N - K)
        - power(10, N - K - 1)
        - 9 * F(N - K - 1, K, D)
    );
}
To double-check this against an exhaustive generator, here's a little C++ test program that builds the list of vectors that it scans using std::search_n. It checks the slow way against the fast way for N and K. I ran it from 0 to 9 for each:
#include <iostream>
#include <algorithm>
#include <vector>
#include <cstdlib>
using namespace std;

// http://stackoverflow.com/a/1505791/211160
long power(int x, int p) {
    if (p == 0) return 1;
    if (p == 1) return x;
    long tmp = power(x, p/2);
    if (p%2 == 0) return tmp * tmp;
    else return x * tmp * tmp;
}

long F(int N, int K, int D) {
    if (N == 0) return 0;
    if (K > N) return 0;
    if (K == N) return 1;
    if (N == 1) {
        if (K == 0) return 10;
        if (K == 1) return 1;
        return 0;
    }
    if (K == 0)
        return power(10, N);
    if (K == 1)
        return power(10, N) - power(9, N);
    return (
        10 * F(N - 1, K, D)
        + power(10, N - K)
        - power(10, N - K - 1)
        - 9 * F(N - K - 1, K, D)
    );
}

long FSlowCore(int N, int K, int D, vector<int> & digits) {
    if (N == 0) {
        if (search_n(digits.begin(), digits.end(), K, D) != end(digits)) {
            return 1;
        } else
            return 0;
    }
    long total = 0;
    digits.push_back(0);
    for (int curDigit = 0; curDigit <= 9; curDigit++) {
        total += FSlowCore(N - 1, K, D, digits);
        digits.back()++;
    }
    digits.pop_back();
    return total;
}

long FSlow(int N, int K, int D) {
    vector<int> digits;
    return FSlowCore(N, K, D, digits);
}

bool TestF(int N, int K, int D) {
    long slow = FSlow(N, K, D);
    long fast = F(N, K, D);
    cout << "when N = " << N
         << " and K = " << K
         << " and D = " << D << ":\n";
    cout << "Fast way gives " << fast << "\n";
    cout << "Slow way gives " << slow << "\n";
    cout << "\n";
    return slow == fast;
}

int main() {
    for (int N = 0; N < 10; N++) {
        for (int K = 0; K < 10; K++) {
            if (!TestF(N, K, 6)) {
                exit(1);
            }
        }
    }
}
Of course, since it counts leading zeros, it will be different from the answers you got. See the test output here in this gist.
Modifying it to account for the special-case zero handling is left as an exercise for the reader (as is the modular arithmetic). Eliminating the zeros makes it messier. Either way, this may be an avenue of attack for reducing the number of math operations even further with some transformations... perhaps.
Miquel is almost correct, but he missed a lot of cases. So, with N = 8, K = 5, and D = 6, we will need to look for the numbers that have the form:
66666xxx
y66666xx
xy66666x
xxy66666
with the additional condition that y cannot be D.
So we can have our formula for this example:
66666xxx = 10^3
y66666xx = 8 * 10^2  // as 0 can also not be the first digit
xy66666x = 9 * 9 * 10
xxy66666 = 9 * 10 * 9
So, the result is 3420.
For case N = 4, K = 3 and D = 6, we have
666x = 10
y666 = 8  // again, 0 is not counted!
So, we have 18 cases!
Note: We need to be careful that the first digit cannot be 0! And we need to handle the case when D is zero too!
Update: working Java code, time complexity O((N-K) * log(N))
static long cal(int n, int k, int d) {
    long Mod = 1000000007;
    long result = 0;
    for (int i = 0; i <= n - k; i++) { // for all starting positions
        if (i != 0 || d != 0) {
            int upper_half = i;         // number of digits preceding DDD
            int lower_half = n - k - i; // number of digits following DDD
            long tmp = 1;
            if (upper_half == 1) {
                tmp = (d == 0) ? 9 : 8;
            } else if (upper_half >= 2) {
                // the pattern is x..yDDD...
                tmp = 9L * 9 % Mod * modPow(10, upper_half - 2, Mod) % Mod;
            }
            tmp = tmp * modPow(10, lower_half, Mod) % Mod;
            result = (result + tmp) % Mod;
        }
    }
    return result;
}

// modular exponentiation; Math.pow loses precision for large exponents
static long modPow(long base, long exp, long mod) {
    long r = 1;
    base %= mod;
    while (exp > 0) {
        if ((exp & 1) == 1) r = r * base % mod;
        base = base * base % mod;
        exp >>= 1;
    }
    return r;
}
Sample Tests:
N = 8, K = 5, D = 6
Output
3420
N = 4, K = 3, D = 6
Output
18
N = 4, K = 3, D = 0
Output
9

Is there any modified Minimum Edit Distance (Levenshtein Distance) for incomplete strings?

I have sequences built from 0s and 1s. I want to somehow measure their distance from a target string. But the target string is incomplete.
Example of the data I have, where x is the target string and [0] means an occurrence of at least one '0':
x = 11[0]1111[0]1111111[0]1[0], the length of x is fixed and equal to the length of y.
y1=11110111111000000101010110101010111
y2=01101000011100001101010101101010010
all y's have the same length
It's easy to see that x could indeed be interpreted as a set of strings, but this set could be very large. Maybe I simply need to sample from that set and take the average of the minimum edit distances, but again that is too big a computational problem.
I've tried to figure out an algorithm, but I'm stuck. Its steps look like this:
x - target string - fuzzy one,
y - second string - fixed
Cx1, Cy1 - numbers of ones in x and y
Gx1, Gy1 - lists of vectors, length of each list is equal to number of groups of ones in given sequence,
Gx1[i] i-th vector,
Gx1[i]=(first one in i-th group of ones, length of i-th group of ones)
If the lengths of Gx1 and Gy1 are the same, then we know how many ones to add to or remove from each group. But there's a problem, because I don't know whether simple adding and removing gives the minimum distance.
Let (Q, Σ, δ, q0, F) be the target automaton, which accepts a regular language L ⊆ Σ*, and let w ∈ Σ* be the source string. You want to compute min_{x ∈ L} d(x, w), where d denotes Levenshtein distance.
My approach is to generalize the usual dynamic program. Let D be a table indexed by Q × {0, …, |w|}. At the end of the computation, D(q, i) will be
min_{x : δ(q0, x) = q} d(x, w[0…i]),
where w[0…i] denotes the length-(i + 1) prefix of w. In other words, D(q, i) is the distance between w[0…i] and the set of strings that leave the automaton in state q. The overall answer is
min_{q ∈ F} D(q, |w|),
or the distance between w and the set of strings that leave the automaton in one of the final states, i.e., the language L.
The first column of D consists of the entries D(q, 0) for every state q ∈ Q. Since for every string x ∈ Σ* it holds that d(x, ε) = |x|, the entry D(q, 0) is the length of the shortest path from q0 to q in the graph defined by the transition function δ. Compute these entries by running "Dijkstra's algorithm" from q0 (actually just breadth-first search, because the edge lengths are all 1).
Subsequent columns of D are computed from the preceding column. First compute an auxiliary quantity D'(q, i) by minimizing over several possibilities.
Exact match: for every state r ∈ Q such that δ(r, w[i]) = q, include D(r, i - 1).
Deletion: include D(q, i - 1) + 1.
Substitution: for every state r ∈ Q and every letter a ∈ Σ ∖ {w[i]} such that δ(r, a) = q, include D(r, i - 1) + 1.
Note that I have left out Insertion. As with the first column, this is because it may be necessary to insert many letters here. To compute the D(q, i)s from the D'(q, i)s, run Dijkstra on an implicit graph with vertices Q ∪ {s} and, for every q ∈ Q, an edge of length D'(q, i) from the super-source s to q and, for every q ∈ Q and a ∈ Σ, an edge of length 1 from q to δ(q, a). Let D(q, i) be the final distances.
I believe that this algorithm, if implemented well (with a heap specialized to support Dijkstra with unit lengths), has running time O(|Q| |w| |Σ|), which, for small alphabets Σ, is comparable to the usual Levenshtein DP.
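The algorithm above can be sketched in Python. All names here are mine; the DFA is passed as a total transition map, and `closure` is the insertion-edge Dijkstra described above, done with a plain heap rather than the specialized unit-length structure:

```python
import heapq

def dist_to_language(states, delta, q0, finals, alphabet, w):
    """Minimum Levenshtein distance from w to any string accepted by the DFA.
    delta is a total transition map: (state, symbol) -> state."""
    INF = float('inf')

    def closure(dist):
        # Dijkstra over the unit-cost insertion edges q -> delta[(q, a)].
        pq = [(d, q) for q, d in dist.items() if d < INF]
        heapq.heapify(pq)
        while pq:
            d, q = heapq.heappop(pq)
            if d > dist[q]:
                continue
            for a in alphabet:
                r = delta[(q, a)]
                if d + 1 < dist[r]:
                    dist[r] = d + 1
                    heapq.heappush(pq, (d + 1, r))
        return dist

    # Column 0: D(q, 0) = shortest path from q0 to q (pure insertions).
    D = closure({q: (0 if q == q0 else INF) for q in states})
    for c in w:
        nxt = {q: D[q] + 1 for q in states}          # deletion of c
        for r in states:
            for a in alphabet:
                q = delta[(r, a)]
                cost = D[r] + (0 if a == c else 1)   # exact match / substitution
                if cost < nxt[q]:
                    nxt[q] = cost
        D = closure(nxt)                             # insertion closure
    return min(D[q] for q in finals)
```

For example, with a five-state DFA accepting exactly the string "101" (one dead state), the distance from "111" is 1 and from the empty string is 3.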
I would propose that you use dynamic programming for this one. The dp is two-dimensional: xi, the index in the x pattern string you are at, and yi, the index in the y string you are at; the value of each subproblem is the minimum edit distance between the substrings x[xi..x.size-1] and y[yi..y.size-1].
Here is how you can find the minimum edit distance between an x pattern given as you explain and a fixed y string. I will assume that the symbol '#' in the x pattern means any positive number of zeros. Also, I will use some global variables to make the code easier to read.
#include <iostream>
#include <string>
#include <cstring>
using namespace std;

const int max_len = 1000;
const int NO_SOLUTION = -2;
int dp[max_len][max_len];
string x; // pattern
string y; // string to compute the minimum edit distance to

int solve(int xi /* index in x */, int yi /* index in y */) {
    if (yi + 1 == (int) y.size()) {
        if (xi + 1 != (int) x.size()) {
            return dp[xi][yi] = NO_SOLUTION;
        } else {
            if (x[xi] == y[yi] || (y[yi] == '0' && x[xi] == '#')) {
                return dp[xi][yi] = 0;
            } else {
                return dp[xi][yi] = 1; // need to change the character
            }
        }
    }
    if (xi + 1 == (int) x.size()) {
        if (x[xi] != '#') {
            return dp[xi][yi] = NO_SOLUTION;
        }
        int number_of_ones = 0;
        for (int j = yi; j < (int) y.size(); ++j) {
            if (y[j] == '1') {
                number_of_ones++;
            }
        }
        return dp[xi][yi] = number_of_ones;
    }
    int best = NO_SOLUTION;
    if (x[xi] != '#') {
        int temp = ((dp[xi + 1][yi + 1] == -1) ? solve(xi + 1, yi + 1) : dp[xi + 1][yi + 1]);
        if (temp != NO_SOLUTION && x[xi] != y[yi]) {
            temp++;
        }
        best = temp;
    } else {
        int temp = ((dp[xi + 1][yi + 1] == -1) ? solve(xi + 1, yi + 1) : dp[xi + 1][yi + 1]);
        if (temp != NO_SOLUTION) {
            if (y[yi] != '0') {
                temp++;
            }
            best = temp;
        }
        int edit_distance = 0; // number of '1's covered by the '#'
        // here i represents the number of chars covered by the '#'
        for (int i = 1; i < (int) y.size(); ++i) {
            if (yi + i >= (int) y.size()) {
                break;
            }
            int temp = ((dp[xi][yi + i] == -1) ? solve(xi, yi + i) : dp[xi][yi + i]);
            if (temp == NO_SOLUTION) {
                continue;
            }
            if (y[yi + i - 1] == '1') {
                edit_distance++;
            }
            temp += edit_distance;
            if (best == NO_SOLUTION || temp < best) {
                best = temp;
            }
        }
    }
    return dp[xi][yi] = best;
}

int main() {
    memset(dp, -1, sizeof(dp));
    cin >> x >> y;
    cout << "Minimum possible edit distance is: " << solve(0, 0) << endl;
    return 0;
}
Hope this helps.

Algorithm to view a larger container in a small screen

I need a mathematical algorithm (or not) simple (or not, too).
It is as follows:
I have two numbers a and b, and I need to find c, the largest number not exceeding b, such that a % c == 0. If a % b == 0, then c == b.
Why is that?
My screen is x pixels in size, and a container is y pixels, with y > x.
I want to calculate how much I have to scroll so that I can see my container on my screen without wasting space.
I necessarily have to scroll to view it. I just need to know how much to scroll, according to my screen, and how many scrolls it takes to view my entire container.
Could this help? (Java code)
int a = 2000;
int b = 300;
int c = 0;

for (int i = b; i > 0; i--) {
    if ((a % i) == 0) {
        c = i;
        break;
    }
}
The result will be in c.
The problem asks, given a and b, find the largest c such that
c <= b
c*k = a for some k
The first constraint puts a lower bound on k, and maximizing c is equivalent to minimizing k given the second constraint.
The lower bound for k is given by
a = c*k <= b*k
and so k >= a/b. Therefore we just look for the smallest k >= a/b that is a divisor of a, e.g.
if (b >= a) return a;
for (int k = (a + b - 1) / b; k <= a; ++k) {
    if (a % k == 0) {
        return a / k;
    }
}
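The same idea in a short Python sketch (the helper name is mine), making the lower bound k = ceil(a/b) explicit:

```python
def largest_divisor_at_most(a, b):
    """Largest c <= b with a % c == 0, found by minimising k = a // c."""
    if b >= a:
        return a
    k = -(-a // b)   # ceil(a / b): the smallest admissible k
    while a % k:
        k += 1       # k = a always divides a, so this terminates
    return a // k
```

For the screen example above, largest_divisor_at_most(2000, 300) gives 250: the container divides into 8 scrolls of 250 pixels each, the largest divisor of 2000 that fits in 300 pixels.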
